@yingzwang
Last active July 8, 2022 08:29
Deep-Q learning implementation in Tensorflow and Keras (solving CartPole-v0)
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deep-Q learning implementation in Tensorflow and Keras\n",
"with an example application to solving `CartPole-v0` environment.\n",
"![dqn](https://user-images.githubusercontent.com/38169187/46908388-63807200-cf22-11e8-99f3-b471405495b3.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# import"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import gym\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline\n",
"import tensorflow as tf\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# replay buffer"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# import numpy as np\n",
"from collections import deque\n",
"import random\n",
"\n",
"class ReplayBuffer:\n",
"    \"\"\"Fixed-size buffer to store experience tuples.\"\"\"\n",
"\n",
"    def __init__(self, buffer_size=int(1e5), random_seed=1234):\n",
"        \"\"\"Initialize a ReplayBuffer object.\n",
"        Params\n",
"        ======\n",
"            buffer_size: maximum size of buffer\n",
"        The right side of the deque contains the most recent experiences.\n",
"        \"\"\"\n",
"        self.buffer_size = buffer_size\n",
"        self.buffer = deque(maxlen=buffer_size)\n",
"        random.seed(random_seed)\n",
"\n",
"    def __len__(self):\n",
"        \"\"\"Return the current size of internal memory.\"\"\"\n",
"        return len(self.buffer)\n",
"\n",
"    def add(self, s, a, r, done, s2):\n",
"        \"\"\"Add a new experience to the buffer.\n",
"        Params\n",
"        ======\n",
"            s: one state sample, numpy array shape (s_dim,)\n",
"            a: one action sample, scalar (for DQN)\n",
"            r: one reward sample, scalar\n",
"            done: True/False, scalar\n",
"            s2: one state sample, numpy array shape (s_dim,)\n",
"        \"\"\"\n",
"        e = (s, a, r, done, s2)\n",
"        self.buffer.append(e)\n",
"\n",
"    def sample_batch(self, batch_size):\n",
"        \"\"\"Randomly sample a batch of experiences from the buffer.\"\"\"\n",
"\n",
"        # ensure the buffer is large enough for sampling\n",
"        assert (len(self.buffer) >= batch_size)\n",
"\n",
"        # sample a batch\n",
"        batch = random.sample(self.buffer, batch_size)\n",
"\n",
"        # convert experience tuples to separate arrays (states, actions, rewards, etc.)\n",
"        states, actions, rewards, dones, next_states = zip(*batch)\n",
"        states = np.asarray(states).reshape(batch_size, -1)  # shape (batch_size, s_dim)\n",
"        next_states = np.asarray(next_states).reshape(batch_size, -1)  # shape (batch_size, s_dim)\n",
"        actions = np.asarray(actions)  # shape (batch_size,); for DQN an action is an int\n",
"        rewards = np.asarray(rewards)  # shape (batch_size,)\n",
"        dones = np.asarray(dones, dtype=np.uint8)  # shape (batch_size,)\n",
"        return states, actions, rewards, dones, next_states"
]
},
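{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Added note, not part of the original gist.)* A quick sanity check of the `ReplayBuffer` API: add a few dummy CartPole-sized transitions, then sample a batch and verify the array shapes returned by `sample_batch`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sanity check (added): exercise the ReplayBuffer with dummy transitions\n",
"buf = ReplayBuffer(buffer_size=100)\n",
"for _ in range(10):\n",
"    buf.add(np.zeros(4), 0, 1.0, False, np.zeros(4))\n",
"s, a, r, d, s2 = buf.sample_batch(4)\n",
"assert s.shape == (4, 4) and s2.shape == (4, 4)\n",
"assert a.shape == (4,) and d.dtype == np.uint8"
]
},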
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# DQN tf summary"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"def build_summaries():\n",
"    \"\"\"\n",
"    tensorboard summaries for monitoring the training process\n",
"    \"\"\"\n",
"\n",
"    # performance per episode\n",
"    ph_reward = tf.placeholder(tf.float32)\n",
"    tf.summary.scalar(\"Reward_ep\", ph_reward)\n",
"    ph_Qmax = tf.placeholder(tf.float32)\n",
"    tf.summary.scalar(\"Qmax_ep\", ph_Qmax)\n",
"\n",
"    # merge all summary ops (must come after all summaries are defined)\n",
"    summary_op = tf.summary.merge_all()\n",
"\n",
"    return summary_op, ph_reward, ph_Qmax"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# DQN neural network model"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using TensorFlow backend.\n"
]
}
],
"source": [
"import time\n",
"from keras import layers, initializers, regularizers\n",
"from functools import partial\n",
"\n",
"def build_net(model_name, state, a_dim, args, trainable):\n",
"    \"\"\"\n",
"    neural network model\n",
"    model input: state\n",
"    model output: Qhat\n",
"    \"\"\"\n",
"    h1 = int(args['h1'])\n",
"    h2 = int(args['h2'])\n",
"\n",
"    my_dense = partial(layers.Dense, trainable=trainable)\n",
"    with tf.variable_scope(model_name):\n",
"        net = my_dense(h1, name=\"l1-dense-{}\".format(h1))(state)\n",
"        net = layers.Activation('relu', name=\"relu1\")(net)\n",
"        net = my_dense(h2, name=\"l2-dense-{}\".format(h2))(net)\n",
"        net = layers.Activation('relu', name=\"relu2\")(net)\n",
"        net = my_dense(a_dim, name=\"l3-dense-{}\".format(a_dim))(net)\n",
"        Qhat = layers.Activation('linear', name=\"Qhat\")(net)\n",
"    nn_params = tf.trainable_variables(scope=model_name)\n",
"    return Qhat, nn_params"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# DQN agent"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"class DeepQNetwork:\n",
"    def __init__(self, sess, a_dim, s_dim, args):\n",
"        self.a_dim = a_dim\n",
"        self.s_dim = s_dim\n",
"        self.h1 = args[\"h1\"]\n",
"        self.h2 = args[\"h2\"]\n",
"        self.lr = args[\"learning_rate\"]\n",
"        self.gamma = args[\"gamma\"]\n",
"        self.epsilon_start = args[\"epsilon_start\"]\n",
"        self.epsilon_stop = args[\"epsilon_stop\"]\n",
"        self.epsilon_decay = args[\"epsilon_decay\"]\n",
"        self.epsilon = self.epsilon_start  # current exploration probability\n",
"        self.update_target_C = args[\"update_target_C\"]\n",
"        self.update_target_tau = args['update_target_tau']\n",
"        self.learn_step_counter = 0\n",
"\n",
"        # initialize replay buffer\n",
"        self.replay_buffer = ReplayBuffer(int(args['buffer_size']), int(args['random_seed']))\n",
"        self.minibatch_size = int(args['minibatch_size'])\n",
"\n",
"        self.s = tf.placeholder(tf.float32, [None, self.s_dim], name='state')  # input state\n",
"        self.s_ = tf.placeholder(tf.float32, [None, self.s_dim], name='state_next')  # input next state\n",
"        self.r = tf.placeholder(tf.float32, [None,], name='reward')  # input reward\n",
"        self.a = tf.placeholder(tf.int32, [None,], name='action')  # input action\n",
"        self.done = tf.placeholder(tf.float32, [None,], name='done')\n",
"\n",
"        # initialize NN, self.q shape (batch_size, a_dim)\n",
"        self.q, self.nn_params = build_net(\"DQN\", self.s, a_dim, args, trainable=True)\n",
"        self.q_, self.nn_params_ = build_net(\"target_DQN\", self.s_, a_dim, args, trainable=False)\n",
"        for var in self.nn_params:\n",
"            vname = var.name.replace(\"kernel:0\", \"W\").replace(\"bias:0\", \"b\")\n",
"            tf.summary.histogram(vname, var)\n",
"\n",
"        with tf.variable_scope(\"Qmax\"):\n",
"            self.Qmax = tf.reduce_max(self.q_, axis=1)  # shape (batch_size,)\n",
"\n",
"        with tf.variable_scope(\"yi\"):\n",
"            self.yi = self.r + self.gamma * self.Qmax * (1 - self.done)  # shape (batch_size,)\n",
"\n",
"        with tf.variable_scope(\"Qa_all\"):\n",
"            # non-trainable helper variable, used only for Q-value histograms\n",
"            Qa = tf.Variable(tf.zeros([self.minibatch_size, self.a_dim]), trainable=False)\n",
"            for aval in np.arange(self.a_dim):\n",
"                tf.summary.histogram(\"Qa{}\".format(aval), Qa[:, aval])\n",
"            self.Qa_op = Qa.assign(self.q)\n",
"\n",
"        with tf.variable_scope(\"Q_at_a\"):\n",
"            # select the Q value corresponding to the action\n",
"            one_hot_actions = tf.one_hot(self.a, self.a_dim)  # shape (batch_size, a_dim)\n",
"            q_all = tf.multiply(self.q, one_hot_actions)  # shape (batch_size, a_dim)\n",
"            self.q_at_a = tf.reduce_sum(q_all, axis=1)  # shape (batch_size,)\n",
"\n",
"        with tf.variable_scope(\"loss_MSE\"):\n",
"            self.loss = tf.losses.mean_squared_error(labels=self.yi, predictions=self.q_at_a)\n",
"\n",
"        with tf.variable_scope(\"train_DQN\"):\n",
"            self.train_op = tf.train.AdamOptimizer(self.lr).minimize(loss=self.loss, var_list=self.nn_params)\n",
"\n",
"        with tf.variable_scope(\"soft_update\"):\n",
"            TAU = self.update_target_tau\n",
"            self.update_op = [tf.assign(t, (1 - TAU)*t + TAU*e) for t, e in zip(self.nn_params_, self.nn_params)]\n",
"\n",
"    def choose_action(self, sess, observation):\n",
"        # Explore or Exploit\n",
"        explore_p = self.epsilon  # exploration probability\n",
"\n",
"        if np.random.uniform() <= explore_p:\n",
"            # Explore: make a random action\n",
"            action = np.random.randint(0, self.a_dim)\n",
"        else:\n",
"            # Exploit: get the greedy action from the Q-network\n",
"            observation = np.reshape(observation, (1, self.s_dim))\n",
"            Qs = sess.run(self.q, feed_dict={self.s: observation})  # shape (1, a_dim)\n",
"            action = np.argmax(Qs[0])\n",
"        return action\n",
"\n",
"    def learn_a_batch(self, sess):\n",
"        # update the target network every C learning steps\n",
"        if self.learn_step_counter % self.update_target_C == 0:\n",
"            sess.run(self.update_op)\n",
"\n",
"        # sample a batch\n",
"        s_batch, a_batch, r_batch, done_batch, s2_batch = self.replay_buffer.sample_batch(self.minibatch_size)\n",
"\n",
"        # train\n",
"        _, _, Qhat, loss = sess.run([self.train_op, self.Qa_op, self.q_at_a, self.loss], feed_dict={\n",
"            self.s: s_batch, self.a: a_batch, self.r: r_batch, self.done: done_batch, self.s_: s2_batch})\n",
"\n",
"        # count learning steps\n",
"        self.learn_step_counter += 1\n",
"\n",
"        # decay exploration probability after each learning step\n",
"        if self.epsilon > self.epsilon_stop:\n",
"            self.epsilon *= self.epsilon_decay\n",
"\n",
"        return np.max(Qhat)\n"
]
},
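{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Added note, not part of the original gist.)* The agent above minimizes the MSE between $Q(s_i, a_i)$ and a target computed from the (slowly updated) target network:\n",
"$$y_i = r_i + \\gamma \\, (1 - d_i) \\max_{a'} Q'(s_{i+1}, a'),$$\n",
"where $d_i$ is the done flag, so terminal transitions reduce to $y_i = r_i$. This matches the `yi` op defined in `__init__`."
]
},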
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# args `CartPole-v0`"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"code_folding": []
},
"outputs": [],
"source": [
"args = {\"env\": 'CartPole-v0',\n",
"        \"random_seed\": 1234,\n",
"        \"max_episodes\": 150,  # number of episodes\n",
"        \"max_episode_len\": 200,  # time steps per episode, 200 for CartPole-v0\n",
"        ## NN params\n",
"        \"h1\": 32,\n",
"        \"h2\": 64,\n",
"        \"learning_rate\": 0.001,  # 1e-3\n",
"        \"gamma\": 0.9,  # 0.9 (solved in ~32 eps) and 0.95 (~34 eps) worked better than 0.99\n",
"        \"update_target_C\": 1,  # update every C learning steps (C=1 if soft update, C=100 if hard update)\n",
"        \"update_target_tau\": 8e-2,  # soft update (tau=8e-2), hard update (tau=1)\n",
"        ## exploration prob\n",
"        \"epsilon_start\": 1.0,\n",
"        \"epsilon_stop\": 0.01,\n",
"        \"epsilon_decay\": 0.999,\n",
"        ## replay buffer\n",
"        \"buffer_size\": 1e5,\n",
"        \"minibatch_size\": 32,\n",
"        ## tensorboard logs\n",
"        \"summary_dir\": './results/dqn',\n",
"        }\n"
]
},
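{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# rough check (added, not in the original gist): with multiplicative decay per\n",
"# learning step, epsilon first drops below epsilon_stop after about\n",
"# log(stop/start)/log(decay) learning steps\n",
"import math\n",
"n_steps = math.log(args['epsilon_stop'] / args['epsilon_start']) / math.log(args['epsilon_decay'])\n",
"print('epsilon reaches {} after ~{:.0f} learning steps'.format(args['epsilon_stop'], n_steps))"
]
},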
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# main training"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/ying/gym/gym/__init__.py:22: UserWarning: DEPRECATION WARNING: to improve load times, gym no longer automatically loads gym.spaces. Please run \"import gym.spaces\" to load gym.spaces on your own. This warning will turn into an error in a future version of gym.\n",
" warnings.warn('DEPRECATION WARNING: to improve load times, gym no longer automatically loads gym.spaces. Please run \"import gym.spaces\" to load gym.spaces on your own. This warning will turn into an error in a future version of gym.')\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.\u001b[0m\n",
"states: Box(4,)\n",
"actions: Discrete(2)\n",
"episode: 0/150, steps: 23, explore_prob: 1.00, total reward: 23.0\n",
"episode: 10/150, steps: 13, explore_prob: 0.89, total reward: 13.0\n",
"episode: 20/150, steps: 21, explore_prob: 0.69, total reward: 21.0\n",
"episode: 30/150, steps: 17, explore_prob: 0.49, total reward: 17.0\n",
"episode: 40/150, steps: 200, explore_prob: 0.11, total reward: 200.0\n",
"episode: 50/150, steps: 200, explore_prob: 0.01, total reward: 200.0\n",
"episode: 60/150, steps: 200, explore_prob: 0.01, total reward: 200.0\n",
"episode: 70/150, steps: 200, explore_prob: 0.01, total reward: 200.0\n",
"episode: 80/150, steps: 200, explore_prob: 0.01, total reward: 200.0\n",
"episode: 90/150, steps: 200, explore_prob: 0.01, total reward: 200.0\n",
"episode: 100/150, steps: 200, explore_prob: 0.01, total reward: 200.0\n",
"episode: 110/150, steps: 200, explore_prob: 0.01, total reward: 200.0\n",
"episode: 120/150, steps: 200, explore_prob: 0.01, total reward: 200.0\n",
"episode: 130/150, steps: 200, explore_prob: 0.01, total reward: 200.0\n",
"episode: 140/150, steps: 200, explore_prob: 0.01, total reward: 200.0\n"
]
}
],
"source": [
"sess = tf.InteractiveSession()\n",
"tf.set_random_seed(int(args['random_seed']))\n",
"\n",
"# initialize numpy seed\n",
"np.random.seed(int(args['random_seed']))\n",
"\n",
"# initialize gym env\n",
"env = gym.make(args['env'])\n",
"env.seed(int(args['random_seed']))\n",
"state_size = env.observation_space.shape[0]\n",
"action_size = env.action_space.n\n",
"print(\"states:\", env.observation_space)\n",
"print(\"actions:\", env.action_space)\n",
"\n",
"# initialize DQN agent\n",
"agent = DeepQNetwork(sess, action_size, state_size, args)\n",
"\n",
"# initialize summary (for visualization in tensorboard)\n",
"summary_op, ph_reward, ph_Qmax = build_summaries()\n",
"subdir = time.strftime(\"%Y%m%d-%H%M%S\", time.localtime())  # a sub folder, e.g., yyyymmdd-HHMMSS\n",
"logdir = args['summary_dir'] + '/' + subdir\n",
"writer = tf.summary.FileWriter(logdir, sess.graph)  # must be done after the graph is constructed\n",
"\n",
"# initialize all variables in the graph\n",
"sess.run(tf.global_variables_initializer())\n",
"\n",
"# train the DQN agent\n",
"rewards_list = []\n",
"num_ep = args['max_episodes']\n",
"max_t = args['max_episode_len']\n",
"for ep in range(num_ep):\n",
"    state = env.reset()  # shape (s_dim,)\n",
"    ep_reward = 0  # total reward per episode\n",
"    ep_qmax = 0\n",
"    t_step = 0\n",
"    done = False\n",
"    while (t_step < max_t) and (not done):\n",
"\n",
"        # choose an action\n",
"        action = agent.choose_action(sess, state)\n",
"\n",
"        # interact with the env\n",
"        next_state, reward, done, _ = env.step(action)\n",
"\n",
"        # add the experience to the replay buffer\n",
"        agent.replay_buffer.add(state, action, reward, done, next_state)\n",
"\n",
"        # learn from a batch of experiences\n",
"        if len(agent.replay_buffer) > 3 * agent.minibatch_size:\n",
"            qmax = agent.learn_a_batch(sess)\n",
"            ep_qmax = max(ep_qmax, qmax)\n",
"\n",
"        # next time step\n",
"        t_step += 1\n",
"        ep_reward += reward\n",
"        state = next_state\n",
"\n",
"    # end of an episode\n",
"    rewards_list.append((ep, ep_reward))\n",
"\n",
"    # write to tensorboard summary\n",
"    summary_str = sess.run(summary_op, feed_dict={ph_reward: ep_reward, ph_Qmax: ep_qmax})\n",
"    writer.add_summary(summary_str, ep)\n",
"    writer.flush()\n",
"\n",
"    if ep % 10 == 0:\n",
"        print(\"episode: {}/{}, steps: {}, explore_prob: {:.2f}, total reward: {}\".\\\n",
"              format(ep, num_ep, t_step, agent.epsilon, ep_reward))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Solved Requirements `CartPole-v0`**\n",
"\n",
"https://github.com/openai/gym/wiki/CartPole-v0\n",
"\n",
"Considered solved when the average reward is greater than or equal to **195.0** over 100 consecutive trials."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"scrolled": false
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYgAAAEKCAYAAAAIO8L1AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAIABJREFUeJzt3XmYXFd17/3vqqlnqbul1mANlmRkPONBNgYHQsJkA5chgWDC4AC5hvvCG5KbmxsckksgE0kAJ3lIIA52bAhjMA4OLyGAIfgy2FieZcs2tmVrtNRSq+fu6hrW+8c5p+pUd7VUlrqGln6f5+lHVaeG3iqpz+q11t77mLsjIiIyW6LZAxARkdakACEiIlUpQIiISFUKECIiUpUChIiIVKUAISIiVSlAiIhIVQoQIiJSlQKEiIhUlWr2AI7H8uXLfcOGDc0ehojIonL33XcfdPeBoz1vUQeIDRs2sHXr1mYPQ0RkUTGzp2t5nkpMIiJSlQKEiIhUpQAhIiJVKUCIiEhVChAiIlJV3QKEma0zsx+Y2XYze8jMPhAe7zez75rZz8M/+8LjZmZ/Z2aPm9kDZnZhvcYmIiJHV88MIg/8rrufCVwKvM/MzgI+CNzm7puB28L7AFcAm8Ovq4FP13FsIiJyFHVbB+Hu+4B94e0xM9sOrAFeB7wkfNpNwH8Bvx8e/5wH10C9w8x6zWx1+D7SQPftGub72/c3exjVmfHa563mOSt6ALjrqSG621KcuXpJ1ac/vHeUb2/Tf6F6eMXZqzhnzVIA7n76MKmE8bx1vQA8uHuE7z78TEPHk0om+LUt61i1tB1352t372bX0OSCvPe5a3t5+VkrAdh5aJKb79lN/HLNp/R2cOUl6wEYmcrx+Z8+xUy+OOd9OjIp3nnZBtrTSdydG3/yFIcnZo4+ADNef/4pbBroBuCff7yDFT3tvPq81cf/lzuChiyUM7MNwAXAncDK6KTv7vvMbEX4tDXArtjLdofHKn66zexqggyD9evX13XcJ6u//d5j/ODRQcyaPZK53GH/yDR/+cbzAPjDW7axrr+Tz161perz//o/H2nZv8ti5g537zzMF37zUgD+17/eT39Xhpv/xwsB+Ph3HuWHjzX2c3eHr929m6+851Ku/787+OyPdgAc9xjcYUl7ivs//ArMjOt/9CQ3/fTp0vtGceKKc1aztDPNbdv38/HvPDbne0fPe86Kbl5+1koe3T/GR/794ZrG6A4jkzN85HXnAHDjT57i/HW9iz9AmFk3cDPw2+4+avN/EtUe8DkH3K8DrgPYsmXLnMfl+OUKzkWn9pV+2FvJK679ISNTudL94akZlk6l533+tr2jvPGitXz8Tc9rxPBOGn9860N8+a6dzOSLDE/OsOPgBGPT+dLjO4cmefW5q/n7tzaulXjvzsO8/fqf8cprb2d0Os9vvHADH/5vZ3GEc05NbvjRDj76zYcZmphhWXcbTx6c4Ly1S7n1/b8AwFfv2sX/vvkBxmfyLO1MM54NPoe7//BlLOtuK73PofEsF/3p99h9OMhqdg1NAfBv77uM88PMaz6/8Jffr/h8J7IFutrq//t9XWcxmVmaIDh8wd2/Hh7eb2arw8dXAwfC47uBdbGXrwX21nN8Ul2+WCTZor9yL2lPMzpdDhCjU/mK+3EHRqcZHMtyzinVy09y7J6/sZ/pXJEH94xw544hAA6OZ5maKVAoOnsOT7Guv7OhY7pgfR83vetiHHj7pacuSHAA2DjQBcCTByeCPwcn2Li8q/R4Z1sSgIkwMEQBYvYJvL8rQ3s6wZ7DQWDYEwaKNb0dRx1DVybFxEw8QOTpyiSP6e/zbNRzFpMB1wPb3f2TsYduBa4Kb18FfCN2/B3hbKZLgRH1H5qjWIRkokUDREe69JtUrlBkKlco/UDOtm3vCECpTi4L55KN/QD8bMcQd+44VDq++/Ak+0enmSkUWdd/9BPfQr
vo1H7u+aOX8yevP2dBggPAacuDuv+OwQmmcwX2DE+xKTwGwckbygFiMlsgYdCWqjy9mhlrejvYHQWI4SnaUgmWd2eOOoautiQT2QIAhaIzlWtMBlHP73AZ8HbgQTO7Lzz2B8DHgK+a2buBncCbwse+BbwKeByYBN5Zx7HJERTcySRac4lMT3uKJwbHAUqBIp56x23bM4oZ8zaw5dgt627jOSu6+dmOQ+wZnqK/K8PQxAw7hybpDk9c6/oam0FE0smF/b+7pq+DdNJ44uA4O8IsIsoqADrD3+QnZ4IT+MRMnq5MqmqAWtvXyZ7hIEDsPjzFmr6OmgJZV1uq9IvQZJhJRIGpnuo5i+lHVO8rALy0yvMdeF+9xiO1yxe9dTOI9nIGMRr2Isazedx9zg/atj0jbFze1ZDftE5Gz9/Yz9fv2cNUrsA7L9vAP//4KXYNTdLdHvSE1je4xFQvyYRx6rIudgxOlALEpliJKfr/Fc8gorLTbGv6OnhwT5DZ7hmeqqm8BEEw2D86Hbx/GIgWfQ9CFqdiCweInvYUo1M53L0UKKKUe7aH9o5yzikqL9XLJRv7S5/7q85dTUc6ya7DU+wcmsQsmPp5oti0vIsnD07wZJi9bqwSICoyiHlO3mt6OxiamGFyJs+ew1Os7avtM+qMlZjKPY5F3IOQxStfdBKt2qTuSJMPA0K8OT27zDQ0McOe4SnOWaPyUr08f+MyIKi1P29tL+v6O9g5NMnuoUlWL2knkzpxTi8bB7p4+tAEjx8YZ9WS9ooAEDWLoyby5Exh3vJPFBCeODDBoYkZ1tZYhutuKzepJ8NAsahLTLJ4FYtOqkUziCVh+WJsOl8qMQX3c6xc0l5avPRQ1KBWBlE3q5a2s2l5F6f0dpBJJVjX1xmUmNpSDZ/BVG+nLe8mV3B+9PghNq/ornisc1aJaTybL/UlZosCRNTYr7XE1JlJlQJDlEHMV8ZaSAoQMke+WGzpEhME/Yd41hDd/r2vPcB/PTrI6SuDH+KzFSDq6vrfuLg0W2ddfyd37hiiqy3JizYf9WqWi0rUlD44nuWVZ6+seKwjHU1zDU7gkzN5VvS0V32fNb1B4IymBq+pscTU3ZZkplBkJl8sNam7F/ksJlmkit7a01wBRqfzVUtMjzwzysjUDD954hDr+ztZ2jn/Ijo5fvFa/Lr+Tsazecaz+ROmQR2JN6Xjf2cIflY60snSiXsyW6BzWfXf7lf0tJFOGnc9FQSImnsQmajPkS9nECoxSTMUWrhJvSTKIKZzs0pMwQ/N4Ykcr33eGt588Tra0ydODXwxWBc72TVjDUQ99XdlWNKeYnQ6z2kD3XMe72pLMjFrmms1iYSxemnQq0klbN5MY7YoWxjP5kuZSiMyCP0EyRytHCB6wh7E6FSO0ViJaTwbBIuhiRn6u9JcsrGf89YeefsCWVjrl5WzhmatgagXM2NjGBhmZxAQzGSajE1zPdIU1KjvsLq3veafs6jfMDlTKGUqjehBKEDIHIWit+5WGx3BD95YWGKKMoqx6TxTMwWmcgX6uo6+MlUWXjwonGglJoDTlneRTlrVslBnJsV4toC7h9Nc5z95R6+vtUEN5am0UQkPNItJmiRfdBItmkFEs5iCElOe1Us7GMuOMTqd5/BksG1yf6cCRDN0taXo78owkc0z0NN29BcsMlf/4iZedPpyUlVWandlgh7EdK5I0Y/cH4ga07VOcQ3eP+xBZAtMzhRoTycakuUrQMgcRW/daa5tqQSZZILRqTxj0zmWdqbpzqQYm84xFO6rrwyiedb1dzKRzS/YPkit5IxVSzhjVfV1NZ1tKUamcqW1CkfKIKLM4dllEMH7RRlEI/oPoAAhVeQLrTvN1cxY0hEEhNHpPGt62+lpTzEezyAUIJrmf/ziaWTzc1e1n+i6Mkn2DU+V1irUkkHUOsU1eP/yLKbJbL4hM5hAAUKqaOVprhA0qkfDhXJnruqhuz3F2HS+nEGoxNQ0l5+zqtlDaI
quthSTM4VSBtF9hAzignV9XHnxOl7y3NrXisT3expv0LUgQAFCqmjlWUwQTHUdm84xNp1jSUeanvY0Y9lc6dKNyiCk0boyyXAK6tHXKHRkknzsV897du8fXXMinMXUiGtBgGYxSRUtHyA60gxP5hjL5ulpT5VKTEOTOcxgaYcWx0ljdbalmJzJl9ZCLPRGeh3pJGZBBjGRnX8zwIWmACFzFLx1p7lCsN3GMyPT4bWCwwxiOs/hiRl6O9ItHdzkxNSVSZIrOMNhH2yhewRmFlxVLltgYqbQkJ1cQQFCZnF3Ci08zRWCoLB/LNgbf0lHiu62YIXr0OSMZjBJU0QBYXAsC9RnjUJwVbkwg2hQk1oBQioUg81QW3aaKwQZRLhpKz3taZa0pxgPexBaAyHNEE07HRwPA0QdfsOPrkt9QpSYzOwGMztgZttix75iZveFX09FlyI1sw1mNhV77DP1GpccWSGMEK1cpokWy0W3e9pTTOeKHBjL0qsAIU0QbXsxOBoFiHpkEKkgg2hgiameYehG4FPA56ID7v7m6LaZfQIYiT3/CXc/v47jkRosigARa0L3tKdKv73tHJrkwvXaf0kaLyr5HBjLkjBKW6AvpM5MkqHJHIWiN2wdRN0yCHe/HRiq9pgFyyx/DfhSvb6/HJtCWLtp9SZ1JJrmCjCTL6oHIU0RXSBocCxLVyZVl5Xk3W0pBsPrUjdqJXWzehAvAva7+89jxzaa2b1m9kMze9F8LzSzq81sq5ltHRwcrP9ITzKFQhAgWr1JXb6dqggY6kFIM3TFehD12mW1sy3FgbAJPt8V6xZaswLEW6jMHvYB6939AuB/Al80s6qbnrj7de6+xd23DAycWFetagVRBtHKTerKElOa7liAUAYhzRAFiKGJmbo1kLvbkuTDEvAJm0GYWQr4FeAr0TF3z7r7ofD23cATwOmNHpsElxuF1s4gooyhPZ0gk0pUZBTKIKQZ4iub6zUFNd536DxRAwTwMuARd98dHTCzATNLhrc3AZuBJ5swtpNeGB8WRQYRBYYeZRDSZPETdr3KP/HM5Eh7PS2kek5z/RLwU+C5ZrbbzN4dPnQlc5vTLwYeMLP7ga8B73X3qg1uqa/F1KQu/xnLIBQgpAk60rEMok6/3cezlEW/m6u7v2We479R5djNwM31GovULmpSt/I01+5MCrNyJhGvx6rEJM2QTBgd6SRTuUKDMogTt8QkLayUQbRwgEgkjJ62VKnElEklaEsFV9iKl5tEGilavFavHkR8cdyJPotJWlRhETSpISgl9XVWzmbq60y3/LjlxBX9hl+/ElP5fXU9CGmKwiJoUgN88s3nsyzWb+hpT7X8mOXEFvUF6rUNRhQUkgmry0rtahQgpEJpmmsLN6kBLlzfV3F/SUea9gb90IhUEzWR69VALmUomWTDrvmtACEVFsM012quueIM0snFNWY5sXS21TmDCANQo8pLoAAhsyyGJnU1l25a1uwhyEmuYRlEAwOEcnKpEDWpF1uAEGm26MRdr0VsUZO6UdejBgUImSVqUitAiDw79c4gok0AlUFI0yyWJrVIq6l3DyKdDPYea9QqalCAkFlKTWo1fEWelXpnEBCsoG7UPkygJrXMogxC5NiU1kHUMUC8actazlvTuKsmKkBIheIiuB6ESCu6ZGM/Lz59gJVL2+r2Pa654sy6vXc1ChBSQU1qkWNzzpqlfO5dlzR7GAtKPQipUFCJSURCChBSoaAmtYiEFCCkgprUIhJRgJAKalKLSKSelxy9wcwOmNm22LE/NrM9ZnZf+PWq2GPXmNnjZvaomb2yXuOSI8svgivKiUhj1DODuBG4vMrxa939/PDrWwBmdhbBtarPDl/zD2bWuNUgUlJcpJv1icjCq1uAcPfbgaEan/464MvunnX3HcDjwIk1X2yR0DRXEYk0owfxfjN7ICxBRVd9WQPsij1nd3hMGkzTXEUk0ugA8WngNOB8YB/wifB4tbORV3sDM7vazL
aa2dbBwcH6jPIkViiqSS0igYYGCHff7+4Fdy8C/0S5jLQbWBd76lpg7zzvcZ27b3H3LQMDA/Ud8EkoHwaIhAKEyEmvoQHCzFbH7r4BiGY43QpcaWZtZrYR2Az8rJFjk4CmuYpIpG57MZnZl4CXAMvNbDfwYeAlZnY+QfnoKeA9AO7+kJl9FXgYyAPvc/dCvcYm84syCDWpRaRuAcLd31Ll8PVHeP6fAX9Wr/FIbYoKECIS0kpqqVCa5qpZTCInPQUIqVCa5qoMQuSkpwAhFQrualCLCKAAIbPki67sQUQABQiZpVhUBiEiAQUIqZAvuhrUIgIoQMgsRZWYRCSkACEV1KQWkYgChFQoKIMQkZAChFQoqEktIiEFCKmQL7quBSEigAKEzFIsOqmkAoSIKEDILAXXPkwiElCAkAqFYlFNahEBFCBkFjWpRSSiACEVCmpSi0hIAUIqFNSkFpFQ3QKEmd1gZgfMbFvs2F+b2SNm9oCZ3WJmveHxDWY2ZWb3hV+fqde45Mg0zVVEIvXMIG4ELp917LvAOe5+HvAYcE3ssSfc/fzw6711HNdJr1j00qVF5zymrTZEJFS3AOHutwNDs459x93z4d07gLX1+v4yvzd+5if8zfceq/qYttoQkUgzexDvAv4jdn+jmd1rZj80sxc1a1Ang12Hp3h8cLzqYwVt9y0ioVQzvqmZfQjIA18ID+0D1rv7ITO7CPg3Mzvb3UervPZq4GqA9evXN2rIJ5RC0Rmbzs/7WHtaAUJEmpBBmNlVwGuAt7q7A7h71t0PhbfvBp4ATq/2ene/zt23uPuWgYGBRg37hJIrFBk9QoBQk1pEoMEBwswuB34feK27T8aOD5hZMry9CdgMPNnIsZ1M8gVnfDpX9TFdD0JEInUrMZnZl4CXAMvNbDfwYYJZS23Ady34LfWOcMbSi4GPmlkeKADvdfehqm8sx+1IJaZ8QU1qEQnULUC4+1uqHL5+nufeDNxcr7FIpVyxOG+AKLqa1CISOGKAMLP+Iz2u3/IXn2LRcYepXIF8oUgqWVllLBSdpFZSiwhHzyDuBhwwYD1wOLzdC+wENtZ1dLLgcsVi6fZ4Nk9vZ6bicU1zFZHIEZvU7r7R3TcB/wn8N3df7u7LCGYhfb0RA5SFlS+UV1BXKzOpSS0ikVpnMV3s7t+K7rj7fwC/WJ8hST3li0cJEGpSi0io1ib1QTP7Q+BfCEpObwMO1W1UUjf5QrnENFZlqqsyCBGJ1JpBvAUYAG4JvwbCY7LIFI6WQWgvJhEJHTWDCBewXePuH2jAeKTOcvEAka2SQahJLSKho2YQ7l4ALmrAWKQBCrEm9fg8GURSGYSIUHsP4l4zuxX4V2AiOujumsm0yMSnuVbbj0kBQkQitQaIfoKm9C/Hjjma6rroaJqriNSqpgDh7u+s90CkMfIVC+Wq9yDUpBYRqDFAmFk78G7gbKA9Ou7u76rTuKROjppBFJVBiEig1mmunwdWAa8EfkhwqdCxeg1K6udIC+XcnaKj60GICFB7gHiOu/8RMOHuNwGvBs6t37CkXo60UC5aI6EmtYhA7QEiOpMMm9k5wFJgQ11GJHUVZRBLO9JzMoiCK0CISFmts5iuM7M+4I+AW4Hu8LYsMlGA6OusEiCUQYhITK2zmD4b3vwhsKl+w5F6i0pMvZ0Znhwcr3gsChBqUosI1FhiMrMnzOwLZvZeMzur1jc3sxvM7ICZbYsd6zez75rZz8M/+8LjZmZ/Z2aPm9kDZnbhs//ryNFEGUR/V4bxbB73ctM6ChBqUosI1N6DOAv4R2AZ8HEze9LMbqnhdTcCl8869kHgNnffDNwW3ge4Atgcfl0NfLrGscmzEE1z7e1MU3SYnCmUHitlELqinIhQe4AoEDSqC0AR2A8cONqL3P12YPZlSV8H3BTevgl4fez45zxwB9BrZqtrHJ/UKFoo1xdeSS7eh1AGISJxtTapR4EHgU8C/+Tux3MtiJXuvg/A3feZ2Yrw+B
pgV+x5u8Nj+47je8ksUQbR3xUEiGA1dbD2UbOYRCTu2VwP4nbg/wG+bGYfMbOXLvBYqp2VfM6TzK42s61mtnVwcHCBh3DiizKI3s40ULlhn2YxiUhcTQHC3b/h7r8HvAf4FvAbwDeP8Xvuj0pH4Z9RqWo3sC72vLXA3ipjuc7dt7j7loGBgWMcwsmrPM11/hKTrgchIlD7LKabzewJ4G+BLuAdQN8xfs9bgavC21cB34gdf0c4m+lSYCQqRcnCiTepoXI1tZrUIhJXaw/iY8A94cWDamZmXwJeAiw3s93Ah8P3+qqZvRvYCbwpfPq3gFcBjwOTgHaQrYPZGcS4mtQiMo9aA8RDwDVmtt7drzazzcBz3f2IZSZ3n++61XP6Fx5MyH9fjeORYxQtlIua1BUlJjWpRSSm1ib1PwMzwAvD+7uBP63LiKSu4nsxmVWWmKLykwKEiEDtAeI0d/8rwk373H2K6rOOpMVFQSCdTNCdSTGWLWcQRVeTWkTKag0QM2bWQTjt1MxOA7J1G5XUTb5YxCzIErrbU9VnMalJLSLU0IMwMwM+A3wbWGdmXwAuI5jqKotMPnbFuK62FBNZTXMVkeqOGiDc3c3sA8ArgEsJSksfcPeD9R6cLLx8oUgqESSO7ekE2Xz5AkLazVVE4mqdxXQHsMnd/796DkbqL1fw0jqHtlSSbH7uZn0JBQgRofYA8UvAe8zsaWCCIItwdz+vbiOTuijESkzt6QTZXCyD0DRXEYmpNUBcUddRSMPki0VSyaDE1JZKMjI1dyW1AoSIQO1XlHu63gORxsgXyhlEW2pWBqEmtYjE1DrNVVrAvpGp436PfLHcg2hPJ6s2qZVBiAgoQCwaD+4e4QV/8X0e2z92XO+TKxRJJ6ISU4Lp3NwmtQKEiIACxKJxYGwagMGx41ufWCh6KQC0pWZNc3VNcxWRMgWIRWImPJHHp6Uei2Caa7QOQtNcRWR+ChCLRPSbfrypfCwKxWJFk3o6V8TDzEFNahGJU4BYJMoZxPEFiHiTui2dDN473AJcPQgRiVOAWCSyhYUpMc2e5hq8pwKEiMylALFILFwGUd6LKcogoplMalKLSFytK6kXjJk9F/hK7NAm4P8AvcB/BwbD43/g7t9q8PBaVpQ5HG8PIldw2tOzMohcZQahJrWIQBMChLs/CpwPYGZJYA9wC8E1qK919483ekyLwULNYqrciykZvuesEpOa1CJC80tMLwWe0FYeR7dQJaZcIb4XU/BnqcSkCwaJSEyzA8SVwJdi999vZg+Y2Q1m1tesQbWihQoQ8Qxi3ia1MggRoYkBwswywGuBfw0PfRo4jaD8tA/4xDyvu9rMtprZ1sHBwWpPOSGV10Ec5yymYuVCueC9K5vUmsUkItDcDOIK4B533w/g7vvdveDuReCfgEuqvcjdr3P3Le6+ZWBgoIHDba6FLDGlZ2cQUZO6oAAhImXNDBBvIVZeMrPVscfeAGxr+IhaWLSYbfo4M4jKvZjmySBUYhIRmjCLCcDMOoGXA++JHf4rMzsfcOCpWY+d9BYug4iXmOb2IMw0zVVEAk0JEO4+CSybdeztzRjLYlFaB3HcTerYXkyzF8oVXdmDiJQ0exaT1Ch7HOsgvv/IfvYOBxcbyhdiezFVmcWk/oOIRBQgFomZY9zNdXImz2/etJXP3xEsNckVi6Rnz2LKKUCIyFwKEItE9hh7ENv3jVJ0mMjmgbkXDILKvZgUIEQkogCxSBzrVhvb9owCMDVTwN3JFbw0zTWVMBKmEpOIVKcAsUjMFI4tg9i2ZwSA6XyRcKE0yXA3VzOruKpcfJW1iIgCxCJxrD2IbXvLGUQuDDKp2F5L0VXlIAgQCc1iEpGQAsQiUZ7mWnuJaTpX4Of7x0qvi/ZaSlcEiMoMQiUmEYkoQCwSx7JQ7rH9Y+TDoDA1UyBf2kqj/M/enk6oByEiVSlALBLHEiCiBvXpK7uZyhXIFYPXzs4gNItJRKpRgFgkoiZ1oejkC7
UFiW17R1jSnmLzih6mc4Wq15xuUwYhIvNQgFgEisVgemp3W7AzSq1ZxEN7RjhnzVI6Mkmmc8VSkzodLzGlkpUL5dSkFpGQAsQiEGUPPe1BgKhlR9di0dn+zBhnn7KE9nTiiBnEtJrUIlKFAsQiEGUMUYCoJYOYzBWYyRcZ6GmjI50MehBhk3r2NFdttSEi1ShALAIzpQCRBmoMEOHWGp2ZFO1hgMiXmtTlf/a2+EI5NalFJEYBYhGITuBLShnE0UtMkzPBc7rakrSnk7iXj1WUmGYtlFOAEJGIAsQiMCeDqGE19cRMkEF0pFN0hLu2jk8Hx+YulFOTWkTmUoBYBGY3qWsqMc3KIADGw7LT3IVy5Sa1riYnIpGmXFEOwMyeAsaAApB39y1m1g98BdhAcNnRX3P3w80aY6uY24OovcTUmUnSkQkCwth0DqC0myuEGUSYkYxN51m9tH3hBi4ii1qzM4hfcvfz3X1LeP+DwG3uvhm4Lbx/0pszi6mGElNFkzoVZBBjYYkpFW9SpxLMFIoUi87B8SzLu9sWdOwisng1O0DM9jrgpvD2TcDrmziWlhFlEEs6ap/FNBGVmDIp2jOzS0zlDCIqP03nCxyamGFZd2bhBi4ii1ozA4QD3zGzu83s6vDYSnffBxD+uaJpo2shpQDxLGYxTUVN6kyy1KQeq9qkDv4L7B/NUii6MggRKWlaDwK4zN33mtkK4Ltm9kgtLwqDydUA69evr+f4WsaxLJSbqNaknp6bQbSlgwCx5/AUAMt7FCBEJNC0DMLd94Z/HgBuAS4B9pvZaoDwzwNVXnedu29x9y0DAwONHHJN7t15mLufHlrQ9yyvg4imudbWpDYL9lrqmDWLKb5QLupP7BmeBGC5SkwiEmpKgDCzLjPriW4DrwC2AbcCV4VPuwr4RjPGdzz+8tuP8Cff3L6g73ksPYjJbJ6OdJJEwmgPs4SxMECkjpRBqMQkIqFmlZhWArdYsCgrBXzR3b9tZncBXzWzdwM7gTc1aXzHbGQqz0R4Il4ox7IOYmKmQGcmeH55oVwwzTWViM9iCh7bPawAISKVmhIg3P1J4HlVjh8CXtr4ES2c8WyO4Yncgr5nlEG0p5Kkk1bTbq5TM3k6w9lL0Sy53TRYAAAVCElEQVSm8jTX+CymcgaRTBi9YZYiItJq01wXvfHpPGPZfOmkvhCijCGTSlRsjXEkQQYRBohUZQ8iNWuhHMCe4SmWdWW0klpEShQgFpC7l07Cw5MzC/a+UbBpSyWC7blrmuZaoCu8wFA6aSQTVprFNHuhHMC+kWmWqbwkIjEKEAsomy+WrrkwtMABImHBiT1+/YYjmYiVmMyM9lSC8ZlqJabgOcEaCM1gEpEyBYgFFNX4AYYmFjBAFIpkwt/0g+s31DKLqVxigmDBnAexa1aJqfxfYEAZhIjEKEAcp+nwym1QrvEDHF7ARnU2VyATloVqLTFN5vJ0ZcpzEKJMAWbNYkqXb2uRnIjEKUAcp3fdeBd//O8PAeXdUmFuialY9CM2rt3nf3ymUKQtPMEHAaK2DKIjlkFUBohYiSlVPr6sSyUmESlTgDhOTw5O8OTgOFDeygLg8KwS07Xfe4w3/MOP532fW+/fy8V/9r2qU1iz+WIsg0jW3IOImtRQXguRMCpmKlVkECoxiUiMAsRxGp6aYXgyyBzGsvP3IB7bP8ZDe0cZna5eerpv1zAjUzkOjmfnPJbNF0u9grb00UtMhaIznSuWggKU1zvEy0tQnuYKKjGJSCUFiOMwnSswnSuWAkRpGmnCODyrxBT1JB59Zqzqe+0aClYyR+8VN5OPNalrKDFN5cob9UWiElN8BhMEG/dFu7tqFpOIxClAHIfRqeBkPjwVBIOoB7Gmr2NOBhH1JB6ZN0AEm+WNTFUPEKUMooaFcpMz5YsFRaJsIlVlIVyURajEJCJxChDHYTg8mU/nikznCqVZTOv7O6tkEGGA2Dc6533cnV2HgwBRWwZx5BLTZLZ8ud
FIOYOY+08eBZ9+NalFJEYB4jjET+bDkznGsnnaUglW9LRXTHMtFr0UMKqVmIYmZkrXkI6ykbhsvlC5DuIoTeqJZ5lBtKeT9HWmK7YBFxFp5gWDFr34dhqHJ2cYm87T056ivytdUWIanc5R9GDLi0efGcPdCXeyBWBnWF6CeUpMhSJLk8EmejX1IGaq9SCiJnW1ElOCzozKSyJSSb8yHsWHbnmQO548VPWx4anKDGJ8Ok9Pe5q+rgxTuULpRB0Fi/PX9TKWzbMn3Fo7sutw+f7IPCWmqE/Qlk4cdTfX6GpyFSWmzPwlpkwqoWtRi8gcyiCOYCKb5wt37iSdTHDppmVzHh+pKDHNMJ7N092Wor8zONkenpyhI9NRKi+94LTl3PXUYR7ZN8bavs7Sa6MGdU97qoYeRNCknp2FxE0dqcSUnPua//6iTXS367+CiFRSBnEEg2PBmoQDY9NVH4+Xgw6HGUR3W4q+sNkbZQ5DYT/i0k39ADy6v7IPsfvwJMu6MpyytGOeHkRlkxrKFxGK5AtFPvrvD/PUwQkmjtSkrlJi+tWL1vLKs1dV/TuKyMlLvzYewWC4aO2ZkeoBYnhqhs5MksmZAsNTM4xO51jX31maDRRlDtEMpvX9nazt65gz1XXn0CTr+jvJpBI1zWKCaPFcOQBs3zfGDT/ewbLuDEvCbKB6k1q/E4hIbXS2OIIog9g/Ond1MwR9h1VL2mkLT+zj2aBJ3dc5K4MIA0V/V4YzVvXMmeq6a2iKdf2d9Hak510HUdpqIzzRz57JtG3vCAA7D02WZkRVbVJXKTGJiFTT8ABhZuvM7Admtt3MHjKzD4TH/9jM9pjZfeHXqxo9ttniJSaP9sqOGZnKsbQzTW9nutSD6GlLlTOIiXIG0ZZK0JFO8pwVPew4OEGhGLxfvlBk7/AU6/o6wveZGyCyhWJpz6RyBlHZqN62JwgQTx2aKDWp4xvxHanEJCJSTTMyiDzwu+5+JnAp8D4zOyt87Fp3Pz/8+lYTxlYh2hcpV/Cq13cYnszR25GmrzPD4ckcY9N5uttTLO1IYwZD4cl+aGKG/q4MZsb6/k7yReeZ0aBstW9kmnzRWdffydIqGUS0y2tbcm6JKW7b3iArefrQZOl61PFN+TqOsFBORKSahp8t3H2fu98T3h4DtgNrGj2OWkQZBFQvMw1PzdDbmWFpR5pnRqYpFJ3utjTJhNHbkS5nEJMzpbLTuv4OoDxzKVpBvb6/k97OYHpsfBpr1IyOz2KCyhJTrlBk+75RMskEz4xOc2h8pqL/AMogROTZa+qvk2a2AbgAuDM89H4ze8DMbjCzvnlec7WZbTWzrYODg3Ud3+BYluh8ur/KTKbhyRxLwwwiOtH3hA3ivq5MqfcQZRAQBAIoL47bHW7St64vyCCgvMcTlK9HnYnt5gqVJaYnBseZyRd58enLgWC/p/gMJqB0bQhlECJSq6adLcysG7gZ+G13HwU+DZwGnA/sAz5R7XXufp27b3H3LQMDA8f0vSeyeW740Y7SpnbzGRzPctpANwD7Z81kKhSdsek8vZ1p+rrKvYMoQPR3ZmIZRK409fWU3g4SBrvDAPHkwQlSCWN1bzu9nUGAGK4SIEoL5aqUmLbtCcpLrz5vNQA/PzA3QET9CGUQIlKrpgQIM0sTBIcvuPvXAdx9v7sX3L0I/BNwSb2+/yPPjPLRbz7MF+/cecTnDY5lOeuUJcDcElP0W/7SjjRLO8qrkLvDi/SsWNJWWjE9NDFDf3jyTycTrF7aUVo9/egzozxnRTfpZILe8H3ijersrAwiev/4c7btGaEzk+Qlp68Agp7J3Axi/q02RESqacYsJgOuB7a7+ydjx1fHnvYGYFu9xnDRqf28YNMyrrv9yXm3rXB3Do5nOaW3g/6uzJwSU/Rbfm9nmr7w5A/Q0x7cPndNL08fmmRwLMvIVDmDgKAPEZWYHn1mjOeu6im9FwSrsgtF55FnRsslprA0FGU0j8UW2z20d4SzVi
+hrytTeo/41eSg3IPQhnwiUqtmnC0uA94O/PKsKa1/ZWYPmtkDwC8Bv1PPQfy/L30OB8ayfHXrrqqPj0zlyBWcge42Vi5pn1Niijbq6+0on5Sh/Bv++et6AfjhY0GfJL6V9rq+TnYNTTIymWPvyDRnrAqylKgHMTyV49/v38vlf/N/+dmOIaCcQXS1pVjf31naFbZYdB7aO8o5a5YCcOqyLoC5JaYwQCSVQYhIjRq+ktrdfwRUO0s1dFrrCzYtY8upfXzmv57gyovXl07AkWgG00BPGyuXtM2bQSztTFdsexH1IM5bu5SEwfcf2Q9QmsUEQaP6wFiW+3cPA3BGmEEsDQPNyGSO3WHT+6afPgWUew/R87c/E/QdnhgcZ3KmwNlhKezU/k7u3zU8ZxbTkfZiEhGp5qStN5gZv/XSzewdmea1n/oR3972TMViuHiAWLWkfU4PItqor7cjTW/H3Ayiqy3F6St7uP2xg8CsDCKcyXTb9iB4nLE6CBA9bSmSCWNkKlda1/BQ+GdmVoB46uAE07kCd4YZxsUbgn2eNiwL3nu+DEI9CBGp1UkbIABefPoAn/r1C5jJF3nvv9zNzffsKT0W7cO0vLuNFUvaOTieJR/LFEolps5MRX8hvivqBev7SleZi2cQ0VqI720/wJL2FKuWtANB0FrakebQxAwP7x0tlalgVoBYvYSiw8/3j3PnjiFWLmnj1DAwzFdiSiaMTDKhaa4iUrOT/mzxmvNO4Tu/82Keu7KHz9/xdOn47BKTezloAIxMBSf+Je2pUgbRnk5UNIEviJ3gq2UQe4anOGP1koptu3s70ty78zBTuQJvu/RUNq8ImtLxElPU1H7kmVF+tuMQz9+4rPQep5YyiLnVw6Wd6VKGIyJyNCd9gIBg8dibL17H/buG2R5upDc4liWTSlT8hr9/NFsqQw1PzdDTliKVTJR6B91t6Yr3vWB9OUDEG9kD3W2lzfOi/kNkaWe6tNvruWuW8uaL1wGVs5I2LOuiLZXgPx/az/7RLJds7C8/trwLs3LDO+7Gd17Me1686dl8NCJyElOACL3hgjVkkgm+clcwq2lwPMtAdxtmxsowQNy78zCvuPZ2/uI/tjMymSsFhrZUks5MsrTNduS0gW562lJ0ZZKlHgAEpaTogkHRDKZIlI20pRKcNtDFO16wgc+87UKeu7IcSJIJ4/SVPdwWNsCj60xAUBL73Lsu4VcvWjvn73j2KUtZ1q1Li4pIbRQgQn1dGV55zipuuXcP07kCg2NZlvcEJ9MVS4I//+SbD/PzA+P84w+f5I4nD1VkBX2dmTlXZUskjOet66W/yuU8oy03njsrg+gNexVnrl5CKpkgk0pw+Tmr51w97oxVPbjDsq5MaW1E5EWbB6pmECIiz4YCRMyVF69jZCrH5376FINjQQYBsLyrjVTC6Myk+OJvPp/1/Z3sHZkurXyGoITUU+WynX/wqjP58zecO+f4fAEiOrGfs2bJnNfERa+7ZGP/vJceFRE5HupYxrxg0zJeduYK/vxbj5BKGBesD/YLTCSMP3/DuZyxuofz1vbysV85l1//7J2lEhPANVecWdpILy7aqmO2q164gXPXLJ3TNC4FiFOWHnGsZ64O3jfefxARWUgKEDGJhPGpX7+Qqz9/N7c/NshArDT0a2GzGOCFz1nOx37lXE5bUS7t/MLm5c/qe21c3sXG5V1zjkdlq2hl9Hwu3tDPb710M2+4oCV3SheRE4ACxCzt6STXvf0irv3eY7zmeafM+7wrL1lfl+//irNXcWh8ppQhzCeTSvA/X356XcYgIgJg1S6luVhs2bLFt27d2uxhiIgsKmZ2t7tvOdrz1KQWEZGqFCBERKQqBQgREalKAUJERKpSgBARkaoUIEREpCoFCBERqUoBQkREqlrUC+XMbBB4+qhPnN9y4OACDaceWn18oDEuFI1xYWiMtTnV3QeO9qRFHSCOl5ltrWU1YbO0+vhAY1woGuPC0BgXlkpMIi
JSlQKEiIhUdbIHiOuaPYCjaPXxgca4UDTGhaExLqCTugchIiLzO9kzCBERmcdJGSDM7HIze9TMHjezDzZ7PABmts7MfmBm283sITP7QHi838y+a2Y/D//sa/I4k2Z2r5l9M7y/0czuDMf3FTPLHO09GjDGXjP7mpk9En6eL2ilz9HMfif8N95mZl8ys/ZW+BzN7AYzO2Bm22LHqn5uFvi78GfoATO7sEnj++vw3/kBM7vFzHpjj10Tju9RM3tlvcc33xhjj/0vM3MzWx7eb/hn+GyddAHCzJLA3wNXAGcBbzGzs5o7KgDywO+6+5nApcD7wnF9ELjN3TcDt4X3m+kDwPbY/b8Erg3Hdxh4d1NGVelvgW+7+xnA8wjG2xKfo5mtAX4L2OLu5wBJ4Epa43O8Ebh81rH5PrcrgM3h19XAp5s0vu8C57j7ecBjwDUA4c/OlcDZ4Wv+IfzZb8YYMbN1wMuBnbHDzfgMn5WTLkAAlwCPu/uT7j4DfBl4XZPHhLvvc/d7wttjBCe1NQRjuyl82k3A65szQjCztcCrgc+G9w34ZeBr4VOaOj4AM1sCvBi4HsDdZ9x9mBb6HAku9dthZimgE9hHC3yO7n47MDTr8Hyf2+uAz3ngDqDXzFY3enzu/h13z4d37wDWxsb3ZXfPuvsO4HGCn/26muczBLgW+N9AvOnb8M/w2ToZA8QaYFfs/u7wWMswsw3ABcCdwEp33wdBEAFWNG9k/A3Bf/JieH8ZMBz7AW2Fz3ITMAj8c1gK+6yZddEin6O77wE+TvCb5D5gBLib1vscI/N9bq34c/Qu4D/C2y0zPjN7LbDH3e+f9VDLjHE+J2OAsCrHWmYql5l1AzcDv+3uo80eT8TMXgMccPe744erPLXZn2UKuBD4tLtfAEzQ/LJcSVjDfx2wETgF6CIoNczW7M/xaFrq397MPkRQpv1CdKjK0xo+PjPrBD4E/J9qD1c51lL/7idjgNgNrIvdXwvsbdJYKphZmiA4fMHdvx4e3h+lneGfB5o0vMuA15rZUwRluV8myCh6w1IJtMZnuRvY7e53hve/RhAwWuVzfBmww90H3T0HfB14Ia33OUbm+9xa5ufIzK4CXgO81cvz9ltlfKcR/DJwf/izsxa4x8xW0TpjnNfJGCDuAjaHs0YyBI2sW5s8pqiefz2w3d0/GXvoVuCq8PZVwDcaPTYAd7/G3de6+waCz+z77v5W4AfAG5s9voi7PwPsMrPnhodeCjxMi3yOBKWlS82sM/w3j8bXUp9jzHyf263AO8KZOJcCI1EpqpHM7HLg94HXuvtk7KFbgSvNrM3MNhI0gn/W6PG5+4PuvsLdN4Q/O7uBC8P/py3xGR6Ru590X8CrCGY8PAF8qNnjCcf0CwTp5QPAfeHXqwjq/LcBPw//7G+Bsb4E+GZ4exPBD97jwL8CbS0wvvOBreFn+W9AXyt9jsBHgEeAbcDngbZW+ByBLxH0RXIEJ7J3z/e5EZRH/j78GXqQYFZWM8b3OEEdP/qZ+Uzs+R8Kx/cocEWzPsNZjz8FLG/WZ/hsv7SSWkREqjoZS0wiIlIDBQgREalKAUJERKpSgBARkaoUIEREpCoFCJHjYGYfNbOXLcD7jC/EeEQWkqa5irQAMxt39+5mj0MkThmEyCxm9jYz+5mZ3Wdm/2jBNTDGzewTZnaPmd1mZgPhc280szeGtz9mZg+He/t/PDx2avj8B8I/14fHN5rZT83sLjP7k1nf//fC4w+Y2Uca/fcXiShAiMSY2ZnAm4HL3P18oAC8lWBTvXvc/ULgh8CHZ72uH3gDcLYH1yb40/ChTxFs6XwewUZyfxce/1uCDQUvBp6Jvc8rCLaFuIRgRfhFZvbievxdRY5GAUKk0kuBi4C7zOy+8P4mgi3OvxI+518ItkaJGwWmgc+a2a8A0b5ALwC+GN7+fOx1lxFsyxAdj7wi/LoXuAc4gyBgiDRc6uhPETmpGHCTu19TcdDsj2Y9r6J55+55M7
uEIKBcCbyfYMfb2Xye2/Hv/xfu/o/PduAiC00ZhEil24A3mtkKKF2T+VSCn5Vot9VfB34Uf1F4HY+l7v4t4LcJykMAPyEIGBCUqqLX/XjW8ch/Au8K3w8zWxONRaTRlEGIxLj7w2b2h8B3zCxBsCvn+wguPHS2md1NcBW4N896aQ/wDTNrJ8gCfic8/lvADWb2ewRXuntnePwDwBfN7AME1wCJvv93wj7IT4PdwBkH3kbzrl8hJzFNcxWpgaahyslIJSYREalKGYSIiFSlDEJERKpSgBARkaoUIEREpCoFCBERqUoBQkREqlKAEBGRqv5/n8qa83bikxUAAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"episodes before solving: 32\n"
]
}
],
"source": [
"def my_sma(x, N):\n",
" \"\"\"simple moving average over a window of N samples\"\"\"\n",
" filt = np.ones(N) / N\n",
"    xm = np.convolve(x, filt) # 'full' convolution, length len(x)+N-1\n",
"    xm = xm[:-(N-1)] # trim the last (N-1) elements so len(xm) == len(x)\n",
" return xm\n",
"\n",
"eps, rewards = np.array(rewards_list).T\n",
"\n",
"# plot reward vs. episode\n",
"plt.plot(eps, rewards)\n",
"plt.xlabel('episode')\n",
"plt.ylabel('reward')\n",
"plt.show()\n",
"\n",
"# check the solved requirement: average reward >= 195.0 over 100 consecutive episodes\n",
"N = 100\n",
"thr = 195.0\n",
"ep_solve = np.argwhere(my_sma(rewards, N) >= thr).ravel()[0] - N # first index where SMA >= thr, minus window length\n",
"print(\"episodes before solving: {}\".format(ep_solve))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {
"height": "calc(100% - 180px)",
"left": "10px",
"top": "150px",
"width": "288px"
},
"toc_section_display": true,
"toc_window_display": true
}
},
"nbformat": 4,
"nbformat_minor": 2
}
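The solved check in the notebook's final cell can be sanity-checked standalone. Below is a minimal sketch; the synthetic reward curve is an assumption for illustration, while the solved rule itself (average reward >= 195.0 over 100 consecutive episodes) is the standard CartPole-v0 criterion:

```python
import numpy as np

def my_sma(x, N):
    """Simple moving average; the first N-1 outputs average a partial window."""
    filt = np.ones(N) / N
    xm = np.convolve(x, filt)  # 'full' convolution: length len(x) + N - 1
    return xm[:-(N - 1)]       # trim back to len(x)

# synthetic reward curve (assumed): 50 weak episodes, then 150 near-perfect ones
rewards = np.concatenate([np.full(50, 20.0), np.full(150, 200.0)])
sma = my_sma(rewards, N=100)

# CartPole-v0 "solved" rule: average reward >= 195.0 over 100 consecutive episodes
solved_idx = np.argwhere(sma >= 195.0).ravel()[0]
print(solved_idx)  # 147: the first 100-episode window holding at most two weak episodes
```

Note that because `np.convolve` pads with zeros, the first N-1 entries of the SMA are partial-window averages; for a threshold check that only matters if early rewards are already near the threshold.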
@yingzwang (Author)

DQN graph (generated by TensorBoard)

@DylanHaiyangChen

Hi Ying,
This is a really great approach to solving the CartPole problem. I wonder if you could share more information about the DQN architecture, such as a report or references.
I am wondering why your implementation is so efficient.

@TomeASilva

Hi Ying,
This is a really great approach to solving the CartPole problem. I wonder if you could share more information about the DQN architecture, such as a report or references.
I am wondering why your implementation is so efficient.

Hi Dylan (HaiyangChen),

I'm not associated with yingzwang, but I can give some information: this is an implementation of the DQN algorithm (https://deepmind.com/research/dqn/), so the architecture is essentially the same as the one presented in the paper. The difference is a soft update of the target network's weights using an exponential moving average parameterized by tau. She also uses a decaying exploration strategy, which clearly helps in this problem. The rest is just good hyperparameter tuning.

The code is also very good, with good coding practices all around.
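To make the two tricks mentioned above concrete, here is a minimal sketch; the names (`tau`, `eps_start`, the decay constants) and the epsilon schedule are assumptions for illustration, not yingzwang's exact code:

```python
import numpy as np

def soft_update(target_weights, online_weights, tau=0.001):
    """Polyak averaging: target <- tau * online + (1 - tau) * target."""
    return [tau * w + (1.0 - tau) * wt
            for w, wt in zip(online_weights, target_weights)]

def epsilon(step, eps_start=1.0, eps_end=0.01, decay=0.995):
    """Exponentially decaying exploration rate for epsilon-greedy actions."""
    return max(eps_end, eps_start * decay ** step)

# after each gradient step, nudge the target network toward the online one
target = [np.zeros(3)]
online = [np.ones(3)]
target = soft_update(target, online, tau=0.1)  # each weight moves 10% of the gap
print(target[0])   # [0.1 0.1 0.1]
print(epsilon(0))  # 1.0 (fully exploratory at the start)
```

With a Keras model, the same update would be applied to the lists returned by `get_weights()` and written back with `set_weights()`.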
