@andreimuntean
Last active April 15, 2018 12:50
A3C

Deep reinforcement learning using an asynchronous advantage actor-critic (A3C) model written in TensorFlow.

This AI does not rely on hand-engineered rules or features. Instead, it masters the environment by looking at raw pixels and learning from experience, just as humans do.

The agent mastered the Flappy Bird Gym environment after 48 hours of training on an 8-core CPU. Training and evaluation code is available at github.com/andreimuntean/a3c.

Dependencies

  • OpenAI Gym 0.8
  • TensorFlow 1.0

Learning Environment

Uses environments provided by OpenAI Gym.
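Each A3C worker interacts with its environment through the standard Gym `reset`/`step` loop. A minimal sketch of that loop follows; `StubEnv` is a hypothetical stand-in for a real environment (in the actual project this would come from `gym.make`), included only so the example is self-contained:

```python
import random

class StubEnv:
    """Hypothetical stand-in exposing the classic Gym interface (reset/step);
    the real project would use an environment created with gym.make."""
    def __init__(self, episode_length=5):
        self.episode_length = episode_length
        self.t = 0
        self.num_actions = 2

    def reset(self):
        self.t = 0
        return [0.0]  # initial observation

    def step(self, action):
        self.t += 1
        observation = [float(self.t)]
        reward = 1.0
        done = self.t >= self.episode_length
        return observation, reward, done, {}  # Gym's (obs, reward, done, info)

def run_episode(env, policy):
    """Roll out one episode; each asynchronous A3C worker runs a loop like this."""
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = policy(observation)
        observation, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward

env = StubEnv()
total = run_episode(env, policy=lambda obs: random.randrange(env.num_actions))
print(total)  # 5.0
```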

Preprocessing

Each frame is transformed into a 47×47 grayscale image with 32-bit float values between 0 and 1. No image cropping is performed. Reward signals are restricted to -1, 0 and 1.
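The preprocessing step can be sketched in NumPy as below. The gist specifies only the output format (47×47 grayscale, 32-bit floats in [0, 1], rewards restricted to -1, 0 and 1); the luminance weights and nearest-neighbour resize used here are assumptions:

```python
import numpy as np

def preprocess_frame(frame):
    """Convert an RGB frame (H, W, 3) of uint8 values to a 47x47 grayscale
    float32 image with values between 0 and 1. Nearest-neighbour resizing
    is an assumption; the gist only states the target format."""
    gray = frame @ np.array([0.299, 0.587, 0.114])  # assumed luminance weights
    rows = np.arange(47) * gray.shape[0] // 47
    cols = np.arange(47) * gray.shape[1] // 47
    resized = gray[np.ix_(rows, cols)]  # nearest-neighbour downsample
    return (resized / 255.0).astype(np.float32)

def clip_reward(reward):
    """Restrict the reward signal to -1, 0 or 1."""
    return float(np.sign(reward))

frame = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)
state = preprocess_frame(frame)
print(state.shape, state.dtype, clip_reward(7.3))  # (47, 47) float32 1.0
```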

Network Architecture

The input layer consists of a 47×47 grayscale image.

Four convolutional layers follow, each with 32 filters of size 3×3, stride 2 and a rectifier (ReLU) nonlinearity.

A recurrent layer follows, consisting of 256 LSTM units.

Lastly, the network splits into two output layers: one produces a probability distribution over actions (represented as logits), the other a single linear output representing the value function.
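The spatial dimensions through the convolutional stack can be traced with a short calculation. Assuming 'same' padding (the gist does not state the padding mode), each stride-2 layer halves the input size with ceiling division, so 47 → 24 → 12 → 6 → 3, and the flattened 3×3×32 feature map feeds the 256-unit LSTM:

```python
import math

def conv_output_size(size, kernel=3, stride=2, padding='same'):
    """Spatial output size of one convolutional layer. With 'same' padding
    and stride 2, each layer halves the input via ceiling division."""
    if padding == 'same':
        return math.ceil(size / stride)
    return math.ceil((size - kernel + 1) / stride)  # 'valid' padding

size, filters = 47, 32
for layer in range(1, 5):
    size = conv_output_size(size)
    print(f'conv{layer}: {size}x{size}x{filters}')

# The final feature map is flattened before entering the LSTM layer.
print('flattened size fed to the LSTM:', size * size * filters)  # 288
```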

Acknowledgements

Implementation inspired by the OpenAI Universe reference agent.

Heavily influenced by DeepMind's seminal paper 'Asynchronous Methods for Deep Reinforcement Learning' (Mnih et al., 2016).
