Daniel Salvadori (danaugrs)
@danaugrs
danaugrs / huskarl-minimal-dqn-cartpole.py
Last active July 3, 2019 14:17
Minimal example of creating a Huskarl DQN agent and visualizing it learning how to balance a cartpole.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import huskarl as hk
import gym
# Setup gym environment
create_env = lambda: gym.make('CartPole-v0').unwrapped
dummy_env = create_env()
# Build a simple neural network with 3 fully connected layers as our model
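The preview cuts off at the model-definition comment. Below is a minimal sketch of how the example plausibly continues, assuming Huskarl's hk.agent.DQN and hk.Simulation API; the layer sizes, step counts, and the sim.train keyword arguments shown here are assumptions, not taken from the gist.
model = Sequential([
    Dense(16, activation='relu', input_shape=dummy_env.observation_space.shape),
    Dense(16, activation='relu'),
    Dense(16, activation='relu'),
])
# Create a Deep Q-Learning Network (DQN) agent
agent = hk.agent.DQN(model, actions=dummy_env.action_space.n, nsteps=2)
# Create the simulation, train with visualization enabled, then test
sim = hk.Simulation(create_env, agent)
sim.train(max_steps=3000, visualize=True)
sim.test(max_steps=1000)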
@danaugrs
danaugrs / huskarl-parallel-environments-snippet.py
Last active July 3, 2019 19:26
Snippet showing how Huskarl lets an agent learn from multiple environment instances simultaneously and parallelize those instances across multiple CPU cores.
import numpy as np
import huskarl as hk
# `model` and `dummy_env` are assumed to be defined as in the DQN example above
# We will be running multiple concurrent environment instances
instances = 16
# Create a policy for each instance with a different distribution for epsilon
policy = [hk.policy.Greedy()] + [hk.policy.GaussianEpsGreedy(eps, 0.1) for eps in np.arange(0, 1, 1/(instances-1))]
# Create Advantage Actor-Critic agent
agent = hk.agent.A2C(model, actions=dummy_env.action_space.n, nsteps=2, instances=instances, policy=policy)
# Create simulation, train and then test
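The preview ends at the simulation comment. A sketch of the likely continuation, assuming Huskarl's hk.Simulation API; the instances and max_subprocesses keyword arguments to sim.train (used to spread the environment instances over CPU cores) are assumptions, not taken from the gist.
sim = hk.Simulation(create_env, agent)
# Train on all environment instances at once, spreading them over CPU cores via subprocesses
sim.train(max_steps=5000, instances=instances, max_subprocesses=8)
sim.test(max_steps=1000)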
==========
VULKANINFO
==========
Vulkan Instance Version: 1.1.114
Instance Extensions:
====================
WARNING: [Loader Message] Code 0 : ReadDataFilesInRegistry: Registry lookup failed to get layer manifest files.