@danaugrs · Last active July 3, 2019
Snippet showing how Huskarl lets an agent learn from multiple environment instances simultaneously, and how those instances can be parallelized across multiple CPU cores.
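The snippet below references model, create_env, and dummy_env without defining them. A minimal setup sketch is shown here, modeled on Huskarl's CartPole examples; the specific environment (CartPole-v0) and network architecture are illustrative assumptions, not part of the original snippet.

import gym
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import huskarl as hk

# Environment factory so each concurrent instance gets its own environment
create_env = lambda: gym.make('CartPole-v0').unwrapped
dummy_env = create_env()

# Small fully connected network used as the shared model for the A2C agent (illustrative)
model = Sequential([
    Dense(16, activation='relu', input_shape=dummy_env.observation_space.shape),
    Dense(16, activation='relu'),
    Dense(16, activation='relu'),
])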
# We will be running multiple concurrent environment instances
instances = 16
# Create one policy per instance: the first acts greedily, the rest use Gaussian epsilon-greedy
# exploration with mean epsilon values spread evenly over [0, 1)
policy = [hk.policy.Greedy()] + [hk.policy.GaussianEpsGreedy(eps, 0.1) for eps in np.arange(0, 1, 1/(instances-1))]
# Create an Advantage Actor-Critic (A2C) agent
agent = hk.agent.A2C(model, actions=dummy_env.action_space.n, nsteps=2, instances=instances, policy=policy)
# Create the simulation, then train and test the agent
sim = hk.Simulation(create_env, agent)
sim.train(max_steps=5000, instances=instances, max_subprocesses=8)
sim.test(max_steps=1000)