A3C

Deep reinforcement learning using an asynchronous advantage actor-critic (A3C) model written in TensorFlow.

This AI does not rely on hand-engineered rules or features. Instead, it masters the environment by looking at raw pixels and learning from experience, just as humans do.

For Pong, an average score of 18 was reached in 72 hours of training on an 8-core CPU. Training and evaluation code is available at github.com/andreimuntean/a3c.

Dependencies

  • OpenAI Gym 0.8
  • TensorFlow 1.0

Learning Environment

Uses environments provided by OpenAI Gym.
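
As an illustration, here is a minimal interaction loop using the classic Gym API that version 0.8 exposes. The environment id 'Pong-v0' and the random action choice are only examples; the actual agent selects actions from its learned policy.

import gym

# Create an Atari environment; 'Pong-v0' is an example id and requires
# Gym's Atari dependencies to be installed.
env = gym.make('Pong-v0')

observation = env.reset()
done = False
total_reward = 0

while not done:
    # A trained agent would sample the action from its policy; here we
    # simply pick a random action to show the step interface.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    total_reward += reward

print('Episode finished with score', total_reward)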

Preprocessing

Each frame is transformed into a 47×47 grayscale image with 32-bit float values between 0 and 1. No image cropping is performed. Reward signals are restricted to -1, 0 and 1.
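
A sketch of such a preprocessing step is shown below. The use of OpenCV, the interpolation mode and the exact way rewards are restricted (taking the sign) are assumptions; the repository linked above contains the authoritative version.

import cv2
import numpy as np


def preprocess_frame(frame):
    """Converts an RGB frame to a 47x47 grayscale float32 image in [0, 1].

    No cropping is performed. The resizing method (area interpolation)
    is an assumption.
    """
    grayscale = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    resized = cv2.resize(grayscale, (47, 47), interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32) / 255.0


def restrict_reward(reward):
    """Restricts the reward signal to -1, 0 or 1 (here via its sign)."""
    return float(np.sign(reward))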

Network Architecture

The input layer consists of a 47×47 grayscale image.

Four convolutional layers follow, each with 32 filters of size 3×3, a stride of 2 and a rectifier (ReLU) nonlinearity.

A recurrent layer follows, consisting of 256 LSTM units.

Lastly, the network splits into two output layers: one produces logits that parameterize a probability distribution over actions (the policy), the other is a single linear output estimating the value function.
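
A minimal sketch of this architecture in TensorFlow 1.x follows. The placeholder names, the number of actions and the way a single rollout is fed to the LSTM (treating the batch dimension as time) are illustrative assumptions, not the repository's exact code.

import numpy as np
import tensorflow as tf

num_actions = 6  # Example value; depends on the chosen Gym environment.

# Batch of preprocessed 47x47 grayscale frames.
frames = tf.placeholder(tf.float32, [None, 47, 47, 1])

# Four convolutional layers, each with 32 filters of size 3x3, stride 2 and ReLU.
x = frames
for _ in range(4):
    x = tf.layers.conv2d(x, filters=32, kernel_size=3, strides=2,
                         padding='same', activation=tf.nn.relu)

# Flatten the spatial dimensions and treat the batch as a time sequence
# for a single-episode rollout (an assumption about batching).
flat_size = int(np.prod(x.get_shape().as_list()[1:]))
x = tf.reshape(x, [1, -1, flat_size])

# Recurrent layer with 256 LSTM units.
lstm = tf.contrib.rnn.BasicLSTMCell(256)
outputs, state = tf.nn.dynamic_rnn(lstm, x, dtype=tf.float32)
outputs = tf.reshape(outputs, [-1, 256])

# Two output heads: policy logits and a scalar state-value estimate.
policy_logits = tf.layers.dense(outputs, num_actions)
value = tf.squeeze(tf.layers.dense(outputs, 1), axis=1)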

Acknowledgements

Implementation inspired by the OpenAI Universe reference agent.

Heavily influenced by DeepMind's seminal paper 'Asynchronous Methods for Deep Reinforcement Learning' (Mnih et al., 2016).
