@danaugrs
Last active July 3, 2019 14:17
Minimal example of creating a Huskarl DQN agent and visualizing it learning how to balance a cartpole.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import huskarl as hk
import gym

# Setup gym environment
create_env = lambda: gym.make('CartPole-v0').unwrapped
dummy_env = create_env()

# Build a simple neural network with 3 fully connected layers as our model
model = Sequential([
    Dense(16, activation='relu', input_shape=dummy_env.observation_space.shape),
    Dense(16, activation='relu'),
    Dense(16, activation='relu'),
])

# Create Deep Q-Learning Network agent
agent = hk.agent.DQN(model, actions=dummy_env.action_space.n, nsteps=2)

# Create simulation, train and then test
sim = hk.Simulation(create_env, agent)
sim.train(max_steps=3000, visualize=True)
sim.test(max_steps=1000)
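The `nsteps=2` argument tells the DQN agent to bootstrap its TD targets from two-step returns rather than single-step ones. As a rough illustration of that idea (a standalone sketch, not Huskarl's actual implementation; the function name and arguments here are made up for clarity), an n-step target sums n observed discounted rewards and then bootstraps with a discounted Q-value estimate of the state reached after those n steps:

```python
def n_step_target(rewards, gamma, bootstrap_q):
    """Hypothetical helper: n-step TD target.

    Sums the n observed rewards, each discounted by gamma**i,
    then adds the bootstrap Q-value discounted by gamma**n.
    """
    discounted = sum(gamma**i * r for i, r in enumerate(rewards))
    return discounted + gamma**len(rewards) * bootstrap_q

# With nsteps=2: two observed rewards, then bootstrap from max_a Q(s', a)
# 1.0 + 0.99*1.0 + 0.99**2 * 5.0 ≈ 6.8905
print(n_step_target([1.0, 1.0], gamma=0.99, bootstrap_q=5.0))
```

Larger n propagates reward information faster at the cost of higher-variance targets; n=2 is a mild middle ground.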