{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "355c6ee07b77d9504f86561bbc59d831",
"grade": false,
"grade_id": "cell-f96c128874bfc5b3",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"# Assignment 2 - Implement your agent\n",
"\n",
"Welcome to Course 4, Programming Assignment 2! We have learned about reinforcement learning algorithms for prediction and control in previous courses and extended those algorithms to large state spaces using function approximation. One example of this was in assignment 2 of course 3 where we implemented semi-gradient TD for prediction and used a neural network as the function approximator. In this notebook, we will build a reinforcement learning agent for control, again using a neural network for function approximation. This combination of neural network function approximators and reinforcement learning algorithms, often referred to as Deep RL, is an active area of research and has led to many impressive results (e. g., AlphaGo: https://deepmind.com/research/case-studies/alphago-the-story-so-far).\n",
"\n",
"**In this assignment, you will:**\n",
" 1. Extend the neural network code from assignment 2 of course 3 to output action-values instead of state-values.\n",
" 2. Write up the Adam algorithm for neural network optimization.\n",
" 3. Understand experience replay buffers.\n",
" 4. Implement Softmax action-selection.\n",
" 5. Build an Expected Sarsa agent by putting all the pieces together.\n",
" 6. Solve Lunar Lander with your agent."
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "a942b1d80536a6e097a5b7879f8b28d3",
"grade": false,
"grade_id": "cell-9524f4b3df469bab",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"## Packages\n",
"- [numpy](www.numpy.org) : Fundamental package for scientific computing with Python.\n",
"- [matplotlib](http://matplotlib.org) : Library for plotting graphs in Python.\n",
"- [RL-Glue](http://www.jmlr.org/papers/v10/tanner09a.html), BaseEnvironment, BaseAgent : Library and abstract classes to inherit from for reinforcement learning experiments.\n",
"- [LunarLanderEnvironment](https://gym.openai.com/envs/LunarLander-v2/) : An RLGlue environment that wraps a LundarLander environment implementation from OpenAI Gym.\n",
"- [collections.deque](https://docs.python.org/3/library/collections.html#collections.deque): a double-ended queue implementation. We use deque to implement the experience replay buffer.\n",
"- [copy.deepcopy](https://docs.python.org/3/library/copy.html#copy.deepcopy): As objects are not passed by value in python, we often need to make copies of mutable objects. copy.deepcopy allows us to make a new object with the same contents as another object. (Take a look at this link if you are interested to learn more: https://robertheaton.com/2014/02/09/pythons-pass-by-object-reference-as-explained-by-philip-k-dick/)\n",
"- [tqdm](https://github.com/tqdm/tqdm) : A package to display progress bar when running experiments\n",
"- [os](https://docs.python.org/3/library/os.html): Package used to interface with the operating system. Here we use it for creating a results folder when it does not exist.\n",
"- [shutil](https://docs.python.org/3/library/shutil.html): Package used to operate on files and folders. Here we use it for creating a zip file of the results folder.\n",
"- plot_script: Used for plotting learning curves using matplotlib."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "1a16d3a2b4b78bd0c8a054524d667d1c",
"grade": false,
"grade_id": "cell-3a093c227c1a8513",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"# Do not modify this cell!\n",
"\n",
"# Import necessary libraries\n",
"# DO NOT IMPORT OTHER LIBRARIES - This will break the autograder.\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline\n",
"\n",
"from rl_glue import RLGlue\n",
"from environment import BaseEnvironment\n",
"from lunar_lander import LunarLanderEnvironment\n",
"from agent import BaseAgent\n",
"from collections import deque\n",
"from copy import deepcopy\n",
"from tqdm import tqdm\n",
"import os \n",
"import shutil\n",
"from plot_script import plot_result"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "833da49e040139194c4c5e7c68b23bee",
"grade": false,
"grade_id": "cell-c1f6c6471017fd99",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"## Section 1: Action-Value Network\n",
"This section includes the function approximator that we use in our agent, a neural network. In Course 3 Assignment 2, we used a neural network as the function approximator for a policy evaluation problem. In this assignment, we will use a neural network for approximating the action-value function in a control problem. The main difference between approximating a state-value function and an action-value function using a neural network is that in the former the output layer only includes one unit whereas in the latter the output layer includes as many units as the number of actions. \n",
"\n",
"In the cell below, you will specify the architecture of the action-value neural network. More specifically, you will specify `self.layer_sizes` in the `__init__()` function. \n",
"\n",
"We have already provided `get_action_values()` and `get_TD_update()` methods. The former computes the action-value function by doing a forward pass and the latter computes the gradient of the action-value function with respect to the weights times the TD error. These `get_action_values()` and `get_TD_update()` methods are similar to the `get_value()` and `get_gradient()` methods that you implemented in Course 3 Assignment 2. The main difference is that in this notebook, they are designed to be applied to batches of states instead of one state. You will later use these functions for implementing the agent."
]
},
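{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick (optional) illustration of the shape difference described above, here is a minimal NumPy sketch; it is not part of the assignment, and the layer sizes used are arbitrary:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"batch_of_states = np.random.randn(32, 8)                                  # 32 states, state_dim = 8\n",
"hidden = np.maximum(np.dot(batch_of_states, np.random.randn(8, 64)), 0)   # ReLU hidden layer\n",
"\n",
"state_values = np.dot(hidden, np.random.randn(64, 1))    # state-value head -> shape (32, 1)\n",
"action_values = np.dot(hidden, np.random.randn(64, 4))   # action-value head -> shape (32, 4), one column per action\n",
"```"
]
},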
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "d10feeabf000214a0f53c5dfc5812437",
"grade": false,
"grade_id": "cell-e6d82e74c686dbf5",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Graded Cell\n",
"# -----------\n",
"\n",
"# Work Required: Yes. Fill in the code for layer_sizes in __init__ (~1 Line). \n",
"# Also go through the rest of the code to ensure your understanding is correct.\n",
"class ActionValueNetwork:\n",
" # Work Required: Yes. Fill in the layer_sizes member variable (~1 Line).\n",
" def __init__(self, network_config):\n",
" self.state_dim = network_config.get(\"state_dim\")\n",
" self.num_hidden_units = network_config.get(\"num_hidden_units\")\n",
" self.num_actions = network_config.get(\"num_actions\")\n",
" \n",
" self.rand_generator = np.random.RandomState(network_config.get(\"seed\"))\n",
" \n",
" # Specify self.layer_sizes which shows the number of nodes in each layer\n",
" # your code here\n",
" self.layer_sizes = np.array([self.state_dim, self.num_hidden_units, self.num_actions])\n",
" \n",
" # Initialize the weights of the neural network\n",
" # self.weights is an array of dictionaries with each dictionary corresponding to \n",
" # the weights from one layer to the next. Each dictionary includes W and b\n",
" self.weights = [dict() for i in range(0, len(self.layer_sizes) - 1)]\n",
" for i in range(0, len(self.layer_sizes) - 1):\n",
" self.weights[i]['W'] = self.init_saxe(self.layer_sizes[i], self.layer_sizes[i + 1])\n",
" self.weights[i]['b'] = np.zeros((1, self.layer_sizes[i + 1]))\n",
" \n",
" # Work Required: No.\n",
" def get_action_values(self, s):\n",
" \"\"\"\n",
" Args:\n",
" s (Numpy array): The state.\n",
" Returns:\n",
" The action-values (Numpy array) calculated using the network's weights.\n",
" \"\"\"\n",
" \n",
" W0, b0 = self.weights[0]['W'], self.weights[0]['b']\n",
" psi = np.dot(s, W0) + b0\n",
" x = np.maximum(psi, 0)\n",
" \n",
" W1, b1 = self.weights[1]['W'], self.weights[1]['b']\n",
" q_vals = np.dot(x, W1) + b1\n",
"\n",
" return q_vals\n",
" \n",
" # Work Required: No.\n",
" def get_TD_update(self, s, delta_mat):\n",
" \"\"\"\n",
" Args:\n",
" s (Numpy array): The state.\n",
" delta_mat (Numpy array): A 2D array of shape (batch_size, num_actions). Each row of delta_mat \n",
" correspond to one state in the batch. Each row has only one non-zero element \n",
" which is the TD-error corresponding to the action taken.\n",
" Returns:\n",
" The TD update (Array of dictionaries with gradient times TD errors) for the network's weights\n",
" \"\"\"\n",
"\n",
" W0, b0 = self.weights[0]['W'], self.weights[0]['b']\n",
" W1, b1 = self.weights[1]['W'], self.weights[1]['b']\n",
" \n",
" psi = np.dot(s, W0) + b0\n",
" x = np.maximum(psi, 0)\n",
" dx = (psi > 0).astype(float)\n",
"\n",
" # td_update has the same structure as self.weights, that is an array of dictionaries.\n",
" # td_update[0][\"W\"], td_update[0][\"b\"], td_update[1][\"W\"], and td_update[1][\"b\"] have the same shape as \n",
" # self.weights[0][\"W\"], self.weights[0][\"b\"], self.weights[1][\"W\"], and self.weights[1][\"b\"] respectively\n",
" td_update = [dict() for i in range(len(self.weights))]\n",
" \n",
" v = delta_mat\n",
" td_update[1]['W'] = np.dot(x.T, v) * 1. / s.shape[0]\n",
" td_update[1]['b'] = np.sum(v, axis=0, keepdims=True) * 1. / s.shape[0]\n",
" \n",
" v = np.dot(v, W1.T) * dx\n",
" td_update[0]['W'] = np.dot(s.T, v) * 1. / s.shape[0]\n",
" td_update[0]['b'] = np.sum(v, axis=0, keepdims=True) * 1. / s.shape[0]\n",
" \n",
" return td_update\n",
" \n",
" # Work Required: No. You may wish to read the relevant paper for more information on this weight initialization\n",
" # (Exact solutions to the nonlinear dynamics of learning in deep linear neural networks by Saxe, A et al., 2013)\n",
" def init_saxe(self, rows, cols):\n",
" \"\"\"\n",
" Args:\n",
" rows (int): number of input units for layer.\n",
" cols (int): number of output units for layer.\n",
" Returns:\n",
" NumPy Array consisting of weights for the layer based on the initialization in Saxe et al.\n",
" \"\"\"\n",
" tensor = self.rand_generator.normal(0, 1, (rows, cols))\n",
" if rows < cols:\n",
" tensor = tensor.T\n",
" tensor, r = np.linalg.qr(tensor)\n",
" d = np.diag(r, 0)\n",
" ph = np.sign(d)\n",
" tensor *= ph\n",
"\n",
" if rows < cols:\n",
" tensor = tensor.T\n",
" return tensor\n",
" \n",
" # Work Required: No.\n",
" def get_weights(self):\n",
" \"\"\"\n",
" Returns: \n",
" A copy of the current weights of this network.\n",
" \"\"\"\n",
" return deepcopy(self.weights)\n",
" \n",
" # Work Required: No.\n",
" def set_weights(self, weights):\n",
" \"\"\"\n",
" Args: \n",
" weights (list of dictionaries): Consists of weights that this network will set as its own weights.\n",
" \"\"\"\n",
" self.weights = deepcopy(weights)"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "e56b7c14735e136a6aa4bfb968f48013",
"grade": false,
"grade_id": "cell-09cdc118d2f5951c",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"Run the cell below to test your implementation of the `__init__()` function for ActionValueNetwork:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"layer_sizes: [ 5 20 3]\n"
]
}
],
"source": [
"# --------------\n",
"# Debugging Cell\n",
"# --------------\n",
"# Feel free to make any changes to this cell to debug your code\n",
"\n",
"network_config = {\n",
" \"state_dim\": 5,\n",
" \"num_hidden_units\": 20,\n",
" \"num_actions\": 3\n",
"}\n",
"\n",
"test_network = ActionValueNetwork(network_config)\n",
"print(\"layer_sizes:\", test_network.layer_sizes)\n",
"assert(np.allclose(test_network.layer_sizes, np.array([5, 20, 3])))"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "60e5d798e80eba541ad2862a00d1571f",
"grade": true,
"grade_id": "cell-49a0cb79ea0e45ea",
"locked": true,
"points": 5,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Tested Cell\n",
"# -----------\n",
"# The contents of the cell will be tested by the autograder.\n",
"# If they do not pass here, they will not pass there.\n",
"\n",
"rand_generator = np.random.RandomState(0)\n",
"for _ in range(1000):\n",
" network_config = {\n",
" \"state_dim\": rand_generator.randint(2, 10),\n",
" \"num_hidden_units\": rand_generator.randint(2, 1024),\n",
" \"num_actions\": rand_generator.randint(2, 10)\n",
" }\n",
"\n",
" test_network = ActionValueNetwork(network_config)\n",
"\n",
" assert(np.allclose(test_network.layer_sizes, np.array([network_config[\"state_dim\"], \n",
" network_config[\"num_hidden_units\"], \n",
" network_config[\"num_actions\"]])))"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "43dd3d66aa92d0ba9b560bac155dbe14",
"grade": false,
"grade_id": "cell-169bc96641d22305",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"**Expected output:** (assuming no changes to the debugging cell)\n",
"\n",
" layer_sizes: [ 5 20 3]"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "567638b2dd6adb9971de931d9992b4e1",
"grade": false,
"grade_id": "cell-9020651e057104f0",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"## Section 2: Adam Optimizer\n",
"\n",
"In this assignment, you will use the Adam algorithm for updating the weights of your action-value network. As you may remember from Course 3 Assignment 2, the Adam algorithm is a more advanced variant of stochastic gradient descent (SGD). The Adam algorithm improves the SGD update with two concepts: adaptive vector stepsizes and momentum. It keeps running estimates of the mean and second moment of the updates, denoted by $\\mathbf{m}$ and $\\mathbf{v}$ respectively:\n",
"$$\\mathbf{m_t} = \\beta_m \\mathbf{m_{t-1}} + (1 - \\beta_m)g_t \\\\\n",
"\\mathbf{v_t} = \\beta_v \\mathbf{v_{t-1}} + (1 - \\beta_v)g^2_t\n",
"$$\n",
"\n",
"Here, $\\beta_m$ and $\\beta_v$ are fixed parameters controlling the linear combinations above and $g_t$ is the update at time $t$ (generally the gradients, but here the TD error times the gradients).\n",
"\n",
"Given that $\\mathbf{m}$ and $\\mathbf{v}$ are initialized to zero, they are biased toward zero. To get unbiased estimates of the mean and second moment, Adam defines $\\mathbf{\\hat{m}}$ and $\\mathbf{\\hat{v}}$ as:\n",
"$$ \\mathbf{\\hat{m}_t} = \\frac{\\mathbf{m_t}}{1 - \\beta_m^t} \\\\\n",
"\\mathbf{\\hat{v}_t} = \\frac{\\mathbf{v_t}}{1 - \\beta_v^t}\n",
"$$\n",
"\n",
"The weights are then updated as follows:\n",
"$$ \\mathbf{w_t} = \\mathbf{w_{t-1}} + \\frac{\\alpha}{\\sqrt{\\mathbf{\\hat{v}_t}}+\\epsilon} \\mathbf{\\hat{m}_t}\n",
"$$\n",
"\n",
"Here, $\\alpha$ is the step size parameter and $\\epsilon$ is another small parameter to keep the denominator from being zero.\n",
"\n",
"In the cell below, you will implement the `__init__()` and `update_weights()` methods for the Adam algorithm. In `__init__()`, you will initialize `self.m` and `self.v`. In `update_weights()`, you will compute new weights given the input weights and an update $g$ (here `td_errors_times_gradients`) according to the equations above."
]
},
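{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make these equations concrete, here is a tiny numeric trace of a single Adam step on one scalar parameter. The numbers are made up, and, as in this assignment, the update is *added* to the weight because $g$ is the TD error times the gradient:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"beta_m, beta_v, step_size, epsilon = 0.9, 0.999, 0.1, 1e-8\n",
"m, v, w = 0.0, 0.0, 0.5                  # m and v are initialized to zero\n",
"g = 2.0                                  # the update: TD error times gradient\n",
"\n",
"m = beta_m * m + (1 - beta_m) * g        # 0.2\n",
"v = beta_v * v + (1 - beta_v) * g**2     # 0.004\n",
"m_hat = m / (1 - beta_m**1)              # bias-corrected: 2.0 (= g on the first step)\n",
"v_hat = v / (1 - beta_v**1)              # bias-corrected: 4.0 (= g^2 on the first step)\n",
"w = w + step_size * m_hat / (np.sqrt(v_hat) + epsilon)   # 0.5 + 0.1 * 2.0 / 2.0 = 0.6\n",
"```\n",
"\n",
"Notice that, thanks to the bias correction, the magnitude of the very first step is close to the step size regardless of the scale of $g$."
]
},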
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "798d4618ba32342f63eb237947151a4a",
"grade": false,
"grade_id": "cell-585fd403a17cf660",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"### Work Required: Yes. Fill in code in __init__ and update_weights (~9-11 Lines).\n",
"class Adam():\n",
" # Work Required: Yes. Fill in the initialization for self.m and self.v (~4 Lines).\n",
" def __init__(self, layer_sizes, \n",
" optimizer_info):\n",
" self.layer_sizes = layer_sizes\n",
"\n",
" # Specify Adam algorithm's hyper parameters\n",
" self.step_size = optimizer_info.get(\"step_size\")\n",
" self.beta_m = optimizer_info.get(\"beta_m\")\n",
" self.beta_v = optimizer_info.get(\"beta_v\")\n",
" self.epsilon = optimizer_info.get(\"epsilon\")\n",
" \n",
" # Initialize Adam algorithm's m and v\n",
" self.m = [dict() for i in range(1, len(self.layer_sizes))]\n",
" self.v = [dict() for i in range(1, len(self.layer_sizes))]\n",
" \n",
" for i in range(0, len(self.layer_sizes) - 1):\n",
" # Hint: The initialization for m and v should look very much like the initializations of the weights\n",
" # except for the fact that initialization here is to zeroes (see description above.)\n",
" # Replace the None in each following line\n",
" \n",
" self.m[i][\"W\"] = np.zeros((self.layer_sizes[i], self.layer_sizes[i+1]))\n",
" self.m[i][\"b\"] = np.zeros((1, self.layer_sizes[i+1]))\n",
" self.v[i][\"W\"] = np.zeros((self.layer_sizes[i], self.layer_sizes[i+1]))\n",
" self.v[i][\"b\"] = np.zeros((1, self.layer_sizes[i+1]))\n",
" \n",
" # your code here\n",
" \n",
" \n",
" # Notice that to calculate m_hat and v_hat, we use powers of beta_m and beta_v to \n",
" # the time step t. We can calculate these powers using an incremental product. At initialization then, \n",
" # beta_m_product and beta_v_product should be ...? (Note that timesteps start at 1 and if we were to \n",
" # start from 0, the denominator would be 0.)\n",
" self.beta_m_product = self.beta_m\n",
" self.beta_v_product = self.beta_v\n",
" \n",
" # Work Required: Yes. Fill in the weight updates (~5-7 lines).\n",
" def update_weights(self, weights, td_errors_times_gradients):\n",
" \"\"\"\n",
" Args:\n",
" weights (Array of dictionaries): The weights of the neural network.\n",
" td_errors_times_gradients (Array of dictionaries): The gradient of the \n",
" action-values with respect to the network's weights times the TD-error\n",
" Returns:\n",
" The updated weights (Array of dictionaries).\n",
" \"\"\"\n",
" for i in range(len(weights)):\n",
" for param in weights[i].keys():\n",
" # Hint: Follow the equations above. First, you should update m and v and then compute \n",
" # m_hat and v_hat. Finally, compute how much the weights should be incremented by.\n",
" # self.m[i][param] = None\n",
" # self.v[i][param] = None\n",
" # m_hat = None\n",
" # v_hat = None\n",
" \n",
" \n",
" # your code here\n",
" self.m[i][param] = self.beta_m * self.m[i][param] + (1 - self.beta_m) * td_errors_times_gradients[i][param]\n",
" self.v[i][param] = self.beta_v * self.v[i][param] + (1 - self.beta_v) * (td_errors_times_gradients[i][param] * td_errors_times_gradients[i][param])\n",
" m_hat = self.m[i][param] / (1 - self.beta_m_product)\n",
" v_hat = self.v[i][param] / (1 - self.beta_v_product)\n",
" weight_update = self.step_size * m_hat / (np.sqrt(v_hat) + self.epsilon)\n",
" weights[i][param] = weights[i][param] + weight_update\n",
" # Notice that to calculate m_hat and v_hat, we use powers of beta_m and beta_v to \n",
" ### update self.beta_m_product and self.beta_v_product\n",
" self.beta_m_product *= self.beta_m\n",
" self.beta_v_product *= self.beta_v\n",
" \n",
" return weights"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "e64f7418759c1b30b0ff0544f21d4ad0",
"grade": false,
"grade_id": "cell-779e4e90ee7ae5b8",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"Run the following code to test your implementation of the `__init__()` function:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"m[0][\"W\"] shape: (5, 2)\n",
"m[0][\"b\"] shape: (1, 2)\n",
"m[1][\"W\"] shape: (2, 3)\n",
"m[1][\"b\"] shape: (1, 3) \n",
"\n",
"v[0][\"W\"] shape: (5, 2)\n",
"v[0][\"b\"] shape: (1, 2)\n",
"v[1][\"W\"] shape: (2, 3)\n",
"v[1][\"b\"] shape: (1, 3) \n",
"\n"
]
}
],
"source": [
"# --------------\n",
"# Debugging Cell\n",
"# --------------\n",
"# Feel free to make any changes to this cell to debug your code\n",
"\n",
"network_config = {\"state_dim\": 5,\n",
" \"num_hidden_units\": 2,\n",
" \"num_actions\": 3\n",
" }\n",
"\n",
"optimizer_info = {\"step_size\": 0.1,\n",
" \"beta_m\": 0.99,\n",
" \"beta_v\": 0.999,\n",
" \"epsilon\": 0.0001\n",
" }\n",
"\n",
"network = ActionValueNetwork(network_config)\n",
"test_adam = Adam(network.layer_sizes, optimizer_info)\n",
"\n",
"print(\"m[0][\\\"W\\\"] shape: {}\".format(test_adam.m[0][\"W\"].shape))\n",
"print(\"m[0][\\\"b\\\"] shape: {}\".format(test_adam.m[0][\"b\"].shape))\n",
"print(\"m[1][\\\"W\\\"] shape: {}\".format(test_adam.m[1][\"W\"].shape))\n",
"print(\"m[1][\\\"b\\\"] shape: {}\".format(test_adam.m[1][\"b\"].shape), \"\\n\")\n",
"\n",
"assert(np.allclose(test_adam.m[0][\"W\"].shape, np.array([5, 2])))\n",
"assert(np.allclose(test_adam.m[0][\"b\"].shape, np.array([1, 2])))\n",
"assert(np.allclose(test_adam.m[1][\"W\"].shape, np.array([2, 3])))\n",
"assert(np.allclose(test_adam.m[1][\"b\"].shape, np.array([1, 3])))\n",
"\n",
"print(\"v[0][\\\"W\\\"] shape: {}\".format(test_adam.v[0][\"W\"].shape))\n",
"print(\"v[0][\\\"b\\\"] shape: {}\".format(test_adam.v[0][\"b\"].shape))\n",
"print(\"v[1][\\\"W\\\"] shape: {}\".format(test_adam.v[1][\"W\"].shape))\n",
"print(\"v[1][\\\"b\\\"] shape: {}\".format(test_adam.v[1][\"b\"].shape), \"\\n\")\n",
"\n",
"assert(np.allclose(test_adam.v[0][\"W\"].shape, np.array([5, 2])))\n",
"assert(np.allclose(test_adam.v[0][\"b\"].shape, np.array([1, 2])))\n",
"assert(np.allclose(test_adam.v[1][\"W\"].shape, np.array([2, 3])))\n",
"assert(np.allclose(test_adam.v[1][\"b\"].shape, np.array([1, 3])))\n",
"\n",
"assert(np.all(test_adam.m[0][\"W\"]==0))\n",
"assert(np.all(test_adam.m[0][\"b\"]==0))\n",
"assert(np.all(test_adam.m[1][\"W\"]==0))\n",
"assert(np.all(test_adam.m[1][\"b\"]==0))\n",
"\n",
"assert(np.all(test_adam.v[0][\"W\"]==0))\n",
"assert(np.all(test_adam.v[0][\"b\"]==0))\n",
"assert(np.all(test_adam.v[1][\"W\"]==0))\n",
"assert(np.all(test_adam.v[1][\"b\"]==0))"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "78b04676af63a17cd9ec3b207a18e1f1",
"grade": true,
"grade_id": "cell-32c93afdee106ad5",
"locked": true,
"points": 20,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Tested Cell\n",
"# -----------\n",
"# The contents of the cell will be tested by the autograder.\n",
"# If they do not pass here, they will not pass there.\n",
"\n",
"\n",
"\n",
"# import our implementation of Adam\n",
"# while you can go look at this for the answer, try to solve the programming challenge yourself first\n",
"from tests import TrueAdam\n",
"\n",
"rand_generator = np.random.RandomState(0)\n",
"for _ in range(1000):\n",
" network_config = {\n",
" \"state_dim\": rand_generator.randint(2, 10),\n",
" \"num_hidden_units\": rand_generator.randint(2, 1024),\n",
" \"num_actions\": rand_generator.randint(2, 10)\n",
" }\n",
" \n",
" optimizer_info = {\"step_size\": rand_generator.choice(np.geomspace(0.1, 1e-5, num=5)),\n",
" \"beta_m\": rand_generator.choice([0.9, 0.99, 0.999, 0.9999, 0.99999]),\n",
" \"beta_v\": rand_generator.choice([0.9, 0.99, 0.999, 0.9999, 0.99999]),\n",
" \"epsilon\": rand_generator.choice(np.geomspace(0.1, 1e-5, num=5))\n",
" }\n",
"\n",
" test_network = ActionValueNetwork(network_config)\n",
" test_adam = Adam(test_network.layer_sizes, optimizer_info)\n",
" true_adam = TrueAdam(test_network.layer_sizes, optimizer_info)\n",
" \n",
" assert(np.allclose(test_adam.m[0][\"W\"].shape, true_adam.m[0][\"W\"].shape))\n",
" assert(np.allclose(test_adam.m[0][\"b\"].shape, true_adam.m[0][\"b\"].shape))\n",
" assert(np.allclose(test_adam.m[1][\"W\"].shape, true_adam.m[1][\"W\"].shape))\n",
" assert(np.allclose(test_adam.m[1][\"b\"].shape, true_adam.m[1][\"b\"].shape))\n",
"\n",
" assert(np.allclose(test_adam.v[0][\"W\"].shape, true_adam.v[0][\"W\"].shape))\n",
" assert(np.allclose(test_adam.v[0][\"b\"].shape, true_adam.v[0][\"b\"].shape))\n",
" assert(np.allclose(test_adam.v[1][\"W\"].shape, true_adam.v[1][\"W\"].shape))\n",
" assert(np.allclose(test_adam.v[1][\"b\"].shape, true_adam.v[1][\"b\"].shape))\n",
"\n",
" assert(np.all(test_adam.m[0][\"W\"]==0))\n",
" assert(np.all(test_adam.m[0][\"b\"]==0))\n",
" assert(np.all(test_adam.m[1][\"W\"]==0))\n",
" assert(np.all(test_adam.m[1][\"b\"]==0))\n",
"\n",
" assert(np.all(test_adam.v[0][\"W\"]==0))\n",
" assert(np.all(test_adam.v[0][\"b\"]==0))\n",
" assert(np.all(test_adam.v[1][\"W\"]==0))\n",
" assert(np.all(test_adam.v[1][\"b\"]==0))\n",
" \n",
" assert(test_adam.beta_m_product == optimizer_info[\"beta_m\"])\n",
" assert(test_adam.beta_v_product == optimizer_info[\"beta_v\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected output:**\n",
"\n",
" m[0][\"W\"] shape: (5, 2)\n",
" m[0][\"b\"] shape: (1, 2)\n",
" m[1][\"W\"] shape: (2, 3)\n",
" m[1][\"b\"] shape: (1, 3) \n",
"\n",
" v[0][\"W\"] shape: (5, 2)\n",
" v[0][\"b\"] shape: (1, 2)\n",
" v[1][\"W\"] shape: (2, 3)\n",
" v[1][\"b\"] shape: (1, 3) "
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "1e2c3416db5fbd5cc08a6b34a59f3b57",
"grade": false,
"grade_id": "cell-070867fcd800c19d",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"## Section 3: Experience Replay Buffers\n",
"\n",
"In Course 3, you implemented agents that update value functions once for each sample. We can use a more efficient approach for updating value functions. You have seen an example of an efficient approach in Course 2 when implementing Dyna. The idea behind Dyna is to learn a model using sampled experience, obtain simulated experience from the model, and improve the value function using the simulated experience.\n",
"\n",
"Experience replay is a simple method that can get some of the advantages of Dyna by saving a buffer of experience and using the data stored in the buffer as a model. This view of prior data as a model works because the data represents actual transitions from the underlying MDP. Furthermore, as a side note, this kind of model that is not learned and simply a collection of experience can be called non-parametric as it can be ever-growing as opposed to a parametric model where the transitions are learned to be represented with a fixed set of parameters or weights.\n",
"\n",
"We have provided the implementation of the experience replay buffer in the cell below. ReplayBuffer includes two main functions: `append()` and `sample()`. `append()` adds an experience transition to the buffer as an array that includes the state, action, reward, terminal flag (indicating termination of the episode), and next_state. `sample()` gets a batch of experiences from the buffer with size `minibatch_size`.\n",
"\n",
"You will use the `append()` and `sample()` functions when implementing the agent."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "dd216bf2169746f6331d6a5fbd79d605",
"grade": false,
"grade_id": "cell-1e1aaa0d442eb015",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"# ---------------\n",
"# Discussion Cell\n",
"# ---------------\n",
"\n",
"class ReplayBuffer:\n",
" def __init__(self, size, minibatch_size, seed):\n",
" \"\"\"\n",
" Args:\n",
" size (integer): The size of the replay buffer. \n",
" minibatch_size (integer): The sample size.\n",
" seed (integer): The seed for the random number generator. \n",
" \"\"\"\n",
" self.buffer = []\n",
" self.minibatch_size = minibatch_size\n",
" self.rand_generator = np.random.RandomState(seed)\n",
" self.max_size = size\n",
"\n",
" def append(self, state, action, reward, terminal, next_state):\n",
" \"\"\"\n",
" Args:\n",
" state (Numpy array): The state. \n",
" action (integer): The action.\n",
" reward (float): The reward.\n",
" terminal (integer): 1 if the next state is a terminal state and 0 otherwise.\n",
" next_state (Numpy array): The next state. \n",
" \"\"\"\n",
" if len(self.buffer) == self.max_size:\n",
" del self.buffer[0]\n",
" self.buffer.append([state, action, reward, terminal, next_state])\n",
"\n",
" def sample(self):\n",
" \"\"\"\n",
" Returns:\n",
" A list of transition tuples including state, action, reward, terinal, and next_state\n",
" \"\"\"\n",
" idxs = self.rand_generator.choice(np.arange(len(self.buffer)), size=self.minibatch_size)\n",
" return [self.buffer[idx] for idx in idxs]\n",
"\n",
" def size(self):\n",
" return len(self.buffer)"
]
},
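{
"cell_type": "markdown",
"metadata": {},
"source": [
"A small usage sketch of the buffer (not part of the assignment; the states and rewards below are arbitrary placeholders, and the unpacking in the last line mirrors what `optimize_network()` does later in the notebook):\n",
"\n",
"```python\n",
"example_buffer = ReplayBuffer(size=100, minibatch_size=4, seed=0)\n",
"\n",
"# store a few fake transitions: (state, action, reward, terminal, next_state)\n",
"for i in range(10):\n",
"    s = np.random.randn(1, 8)\n",
"    s_next = np.random.randn(1, 8)\n",
"    example_buffer.append(s, i % 4, 1.0, 0, s_next)\n",
"\n",
"# once enough experience is stored, sample a minibatch for a replay update\n",
"if example_buffer.size() > example_buffer.minibatch_size:\n",
"    batch = example_buffer.sample()    # list of 4 transitions\n",
"    states, actions, rewards, terminals, next_states = map(list, zip(*batch))\n",
"```"
]
},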
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "c28fd69bca4622a73612f68ac95d7826",
"grade": false,
"grade_id": "cell-21cc33a3ea0ac94b",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"## Section 4: Softmax Policy\n",
"\n",
"In this assignment, you will use a softmax policy. One advantage of a softmax policy is that it explores according to the action-values, meaning that an action with a moderate value has a higher chance of getting selected compared to an action with a lower value. Contrast this with an $\\epsilon$-greedy policy which does not consider the individual action values when choosing an exploratory action in a state and instead chooses randomly when doing so.\n",
"\n",
"The probability of selecting each action according to the softmax policy is shown below:\n",
"$$Pr{(A_t=a | S_t=s)} \\hspace{0.1cm} \\dot{=} \\hspace{0.1cm} \\frac{e^{Q(s, a)/\\tau}}{\\sum_{b \\in A}e^{Q(s, b)/\\tau}}$$\n",
"where $\\tau$ is the temperature parameter which controls how much the agent focuses on the highest valued actions. The smaller the temperature, the more the agent selects the greedy action. Conversely, when the temperature is high, the agent selects among actions more uniformly random.\n",
"\n",
"Given that a softmax policy exponentiates action values, if those values are large, exponentiating them could get very large. To implement the softmax policy in a numerically stable way, we often subtract the maximum action-value from the action-values. If we do so, the probability of selecting each action looks as follows:\n",
"\n",
"$$Pr{(A_t=a | S_t=s)} \\hspace{0.1cm} \\dot{=} \\hspace{0.1cm} \\frac{e^{Q(s, a)/\\tau - max_{c}Q(s, c)/\\tau}}{\\sum_{b \\in A}e^{Q(s, b)/\\tau - max_{c}Q(s, c)/\\tau}}$$\n",
"\n",
"In the cell below, you will implement the `softmax()` function. In order to do so, you could break the above computation into smaller steps:\n",
"- compute the preference, $H(a)$, for taking each action by dividing the action-values by the temperature parameter $\\tau$,\n",
"- subtract the maximum preference across the actions from the preferences to avoid overflow, and,\n",
"- compute the probability of taking each action."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "0bc082ff0d5b933fb88fa1936f2057d3",
"grade": false,
"grade_id": "cell-b32ebbeb60c5b9f7",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Graded Cell\n",
"# -----------\n",
"\n",
"def softmax(action_values, tau=1.0):\n",
" \"\"\"\n",
" Args:\n",
" action_values (Numpy array): A 2D array of shape (batch_size, num_actions). \n",
" The action-values computed by an action-value network. \n",
" tau (float): The temperature parameter scalar.\n",
" Returns:\n",
" A 2D array of shape (batch_size, num_actions). Where each column is a probability distribution over\n",
" the actions representing the policy.\n",
" \"\"\"\n",
" \n",
" # Compute the preferences by dividing the action-values by the temperature parameter tau\n",
" preferences = None\n",
" # Compute the maximum preference across the actions\n",
" max_preference = None\n",
" \n",
" # your code here\n",
" preferences = action_values / tau\n",
" max_preference = np.max(preferences, axis=1)\n",
" \n",
" \n",
" # Reshape max_preference array which has shape [Batch,] to [Batch, 1]. This allows NumPy broadcasting \n",
" # when subtracting the maximum preference from the preference of each action.\n",
" reshaped_max_preference = max_preference.reshape((-1, 1))\n",
" \n",
" # Compute the numerator, i.e., the exponential of the preference - the max preference.\n",
" exp_preferences = None\n",
" # Compute the denominator, i.e., the sum over the numerator along the actions axis.\n",
" sum_of_exp_preferences = None\n",
" \n",
" # your code here\n",
" exp_preferences = np.exp(preferences - reshaped_max_preference)\n",
" sum_of_exp_preferences = np.sum(np.exp(preferences - reshaped_max_preference), axis=1)\n",
" \n",
" # Reshape sum_of_exp_preferences array which has shape [Batch,] to [Batch, 1] to allow for NumPy broadcasting \n",
" # when dividing the numerator by the denominator.\n",
" reshaped_sum_of_exp_preferences = sum_of_exp_preferences.reshape((-1, 1))\n",
" \n",
" # Compute the action probabilities according to the equation in the previous cell.\n",
" action_probs = None\n",
" \n",
" # your code here\n",
" action_probs = exp_preferences/reshaped_sum_of_exp_preferences\n",
" \n",
" \n",
" # squeeze() removes any singleton dimensions. It is used here because this function is used in the \n",
" # agent policy when selecting an action (for which the batch dimension is 1.) As np.random.choice is used in \n",
" # the agent policy and it expects 1D arrays, we need to remove this singleton batch dimension.\n",
" action_probs = action_probs.squeeze()\n",
" return action_probs"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "c88e2bfeb8b6915745bc8a136d818a2c",
"grade": false,
"grade_id": "cell-df0ee871ce60dea2",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"Run the cell below to test your implementation of the `softmax()` function:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"action_probs [[0.25849645 0.01689625 0.05374514 0.67086216]\n",
" [0.84699852 0.00286345 0.13520063 0.01493741]]\n",
"Passed the asserts! (Note: These are however limited in scope, additional testing is encouraged.)\n"
]
}
],
"source": [
"# --------------\n",
"# Debugging Cell\n",
"# --------------\n",
"# Feel free to make any changes to this cell to debug your code\n",
"\n",
"rand_generator = np.random.RandomState(0)\n",
"action_values = rand_generator.normal(0, 1, (2, 4))\n",
"tau = 0.5\n",
"\n",
"action_probs = softmax(action_values, tau)\n",
"print(\"action_probs\", action_probs)\n",
"\n",
"assert(np.allclose(action_probs, np.array([\n",
" [0.25849645, 0.01689625, 0.05374514, 0.67086216],\n",
" [0.84699852, 0.00286345, 0.13520063, 0.01493741]\n",
"])))\n",
"\n",
"print(\"Passed the asserts! (Note: These are however limited in scope, additional testing is encouraged.)\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "72670a38cf2dcb8dd9c83293392bc3cd",
"grade": true,
"grade_id": "cell-ce689c1bd91bc11f",
"locked": true,
"points": 10,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Tested Cell\n",
"# -----------\n",
"# The contents of the cell will be tested by the autograder.\n",
"# If they do not pass here, they will not pass there.\n",
"\n",
"from tests import __true__softmax\n",
"\n",
"rand_generator = np.random.RandomState(0)\n",
"for _ in range(1000):\n",
" action_values = rand_generator.normal(0, 1, (rand_generator.randint(1, 5), 4))\n",
" tau = rand_generator.rand()\n",
" assert(np.allclose(softmax(action_values, tau), __true__softmax(action_values, tau)))"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "cbae924e257425842433cf6b49aa0128",
"grade": false,
"grade_id": "cell-af71b793e63f3db0",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"**Expected output:**\n",
"\n",
" action_probs [[0.25849645 0.01689625 0.05374514 0.67086216]\n",
" [0.84699852 0.00286345 0.13520063 0.01493741]]"
]
},
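{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to see the effect of the temperature parameter described above, here is a quick optional check you can run with your `softmax()` implementation (the action-values are arbitrary):\n",
"\n",
"```python\n",
"q = np.array([[1.0, 2.0, 3.0]])\n",
"\n",
"print(softmax(q, tau=0.1))    # nearly greedy: almost all probability on the last action\n",
"print(softmax(q, tau=10.0))   # nearly uniform: probabilities close to 1/3 each\n",
"```"
]
},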
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "9b74fe561e81cd10e366c2ad76673248",
"grade": false,
"grade_id": "cell-9d3660107222383d",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"## Section 5: Putting the pieces together\n",
"\n",
"In this section, you will combine components from the previous sections to write up an RL-Glue Agent. The main component that you will implement is the action-value network updates with experience sampled from the experience replay buffer.\n",
"\n",
"At time $t$, we have an action-value function represented as a neural network, say $Q_t$. We want to update our action-value function and get a new one we can use at the next timestep. We will get this $Q_{t+1}$ using multiple replay steps that each result in an intermediate action-value function $Q_{t+1}^{i}$ where $i$ indexes which replay step we are at.\n",
"\n",
"In each replay step, we sample a batch of experiences from the replay buffer and compute a minibatch Expected-SARSA update. Across these N replay steps, we will use the current \"un-updated\" action-value network at time $t$, $Q_t$, for computing the action-values of the next-states. This contrasts using the most recent action-values from the last replay step $Q_{t+1}^{i}$. We make this choice to have targets that are stable across replay steps. Here is the pseudocode for performing the updates:\n",
"\n",
"$$\n",
"\\begin{align}\n",
"& Q_t \\leftarrow \\text{action-value network at timestep t (current action-value network)}\\\\\n",
"& \\text{Initialize } Q_{t+1}^1 \\leftarrow Q_t\\\\\n",
"& \\text{For } i \\text{ in } [1, ..., N] \\text{ (i.e. N} \\text{ replay steps)}:\\\\\n",
"& \\hspace{1cm} s, a, r, t, s'\n",
"\\leftarrow \\text{Sample batch of experiences from experience replay buffer} \\\\\n",
"& \\hspace{1cm} \\text{Do Expected Sarsa update with } Q_t: Q_{t+1}^{i+1}(s, a) \\leftarrow Q_{t+1}^{i}(s, a) + \\alpha \\cdot \\left[r + \\gamma \\left(\\sum_{b} \\pi(b | s') Q_t(s', b)\\right) - Q_{t+1}^{i}(s, a)\\right]\\\\\n",
"& \\hspace{1.5cm} \\text{ making sure to add the } \\gamma \\left(\\sum_{b} \\pi(b | s') Q_t(s', b)\\right) \\text{ for non-terminal transitions only.} \\\\\n",
"& \\text{After N replay steps, we set } Q_{t+1}^{N} \\text{ as } Q_{t+1} \\text{ and have a new } Q_{t+1} \\text{for time step } t + 1 \\text{ that we will fix in the next set of updates. }\n",
"\\end{align}\n",
"$$\n",
"\n",
"As you can see in the pseudocode, after sampling a batch of experiences, we do many computations. The basic idea however is that we are looking to compute a form of a TD error. In order to so, we can take the following steps:\n",
"- compute the action-values for the next states using the action-value network $Q_{t}$,\n",
"- compute the policy $\\pi(b | s')$ induced by the action-values $Q_{t}$ (using the softmax function you implemented before),\n",
"- compute the Expected sarsa targets $r + \\gamma \\left(\\sum_{b} \\pi(b | s') Q_t(s', b)\\right)$,\n",
"- compute the action-values for the current states using the latest $Q_{t + 1}$, and,\n",
"- compute the TD-errors with the Expected Sarsa targets.\n",
" \n",
"For the third step above, you can start by computing $\\pi(b | s') Q_t(s', b)$ followed by summation to get $\\hat{v}_\\pi(s') = \\left(\\sum_{b} \\pi(b | s') Q_t(s', b)\\right)$. $\\hat{v}_\\pi(s')$ is an estimate of the value of the next state. Note for terminal next states, $\\hat{v}_\\pi(s') = 0$. Finally, we add the rewards to the discount times $\\hat{v}_\\pi(s')$.\n",
"\n",
"You will implement these steps in the `get_td_error()` function below which given a batch of experiences (including states, next_states, actions, rewards, terminals), fixed action-value network (current_q), and action-value network (network), computes the TD error in the form of a 1D array of size batch_size."
]
},
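{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before implementing `get_td_error()`, here is a tiny numeric trace of the target and TD error for a single transition with two actions. All numbers are made up, and `pi_next` stands for the softmax policy computed from the fixed network's action-values at the next state:\n",
"\n",
"```python\n",
"q_next = np.array([1.0, 3.0])        # Q_t(s', .) from the fixed network current_q\n",
"pi_next = np.array([0.25, 0.75])     # softmax policy induced by q_next\n",
"r, discount, terminal = 0.5, 0.9, 0\n",
"\n",
"v_next = (1 - terminal) * np.sum(pi_next * q_next)   # 0.25 * 1.0 + 0.75 * 3.0 = 2.5\n",
"target = r + discount * v_next                       # 0.5 + 0.9 * 2.5 = 2.75\n",
"q_sa = 2.0                                           # Q(s, a) for the action taken, from the network being updated\n",
"delta = target - q_sa                                # 0.75\n",
"```"
]
},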
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "7e8ff7f160ca26a7639acc062ae6b29a",
"grade": false,
"grade_id": "cell-f370691c828efad9",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"### Work Required: Yes. Fill in code in get_td_error (~9 Lines).\n",
"def get_td_error(states, next_states, actions, rewards, discount, terminals, network, current_q, tau):\n",
" \"\"\"\n",
" Args:\n",
" states (Numpy array): The batch of states with the shape (batch_size, state_dim).\n",
" next_states (Numpy array): The batch of next states with the shape (batch_size, state_dim).\n",
" actions (Numpy array): The batch of actions with the shape (batch_size,).\n",
" rewards (Numpy array): The batch of rewards with the shape (batch_size,).\n",
" discount (float): The discount factor.\n",
" terminals (Numpy array): The batch of terminals with the shape (batch_size,).\n",
" network (ActionValueNetwork): The latest state of the network that is getting replay updates.\n",
" current_q (ActionValueNetwork): The fixed network used for computing the targets, \n",
" and particularly, the action-values at the next-states.\n",
" Returns:\n",
" The TD errors (Numpy array) for actions taken, of shape (batch_size,)\n",
" \"\"\"\n",
" \n",
" # Note: Here network is the latest state of the network that is getting replay updates. In other words, \n",
" # the network represents Q_{t+1}^{i} whereas current_q represents Q_t, the fixed network used for computing the \n",
" # targets, and particularly, the action-values at the next-states.\n",
" \n",
" # Compute action values at next states using current_q network\n",
" # Note that q_next_mat is a 2D array of shape (batch_size, num_actions)\n",
" \n",
" ### START CODE HERE (~1 Line)\n",
" q_next_mat = None\n",
" ### END CODE HERE\n",
" # your code here\n",
" q_next_mat = current_q.get_action_values(next_states)\n",
" \n",
" # Compute policy at next state by passing the action-values in q_next_mat to softmax()\n",
" # Note that probs_mat is a 2D array of shape (batch_size, num_actions)\n",
" \n",
" ### START CODE HERE (~1 Line)\n",
" probs_mat = None\n",
" ### END CODE HERE\n",
" # your code here\n",
" probs_mat = softmax(q_next_mat, tau)\n",
" \n",
" # Compute the estimate of the next state value, v_next_vec.\n",
" # Hint: sum the action-values for the next_states weighted by the policy, probs_mat. Then, multiply by\n",
" # (1 - terminals) to make sure v_next_vec is zero for terminal next states.\n",
" # Note that v_next_vec is a 1D array of shape (batch_size,)\n",
" \n",
" ### START CODE HERE (~3 Lines)\n",
" v_next_vec = None\n",
" ### END CODE HERE\n",
" # your code here\n",
" q_sum = np.sum(q_next_mat*probs_mat, axis=1)\n",
" v_next_vec = q_sum * (1 - terminals)\n",
" \n",
" # Compute Expected Sarsa target\n",
" # Note that target_vec is a 1D array of shape (batch_size,)\n",
" \n",
" ### START CODE HERE (~1 Line)\n",
" target_vec = None\n",
" ### END CODE HERE\n",
" # your code here\n",
" target_vec = rewards + discount * v_next_vec\n",
" \n",
" # Compute action values at the current states for all actions using network\n",
" # Note that q_mat is a 2D array of shape (batch_size, num_actions)\n",
" \n",
" ### START CODE HERE (~1 Line)\n",
" q_mat = None\n",
" ### END CODE HERE\n",
" # your code here\n",
" q_mat = network.get_action_values(states)\n",
" \n",
" # Batch Indices is an array from 0 to the batch size - 1. \n",
" batch_indices = np.arange(q_mat.shape[0])\n",
"\n",
" # Compute q_vec by selecting q(s, a) from q_mat for taken actions\n",
" # Use batch_indices as the index for the first dimension of q_mat\n",
" # Note that q_vec is a 1D array of shape (batch_size)\n",
" \n",
" ### START CODE HERE (~1 Line)\n",
" q_vec = None\n",
" ### END CODE HERE\n",
" # your code here\n",
" q_vec = q_mat[batch_indices, actions]\n",
" \n",
" # Compute TD errors for actions taken\n",
" # Note that delta_vec is a 1D array of shape (batch_size)\n",
" \n",
" ### START CODE HERE (~1 Line)\n",
" delta_vec = None\n",
" ### END CODE HERE\n",
" # your code here\n",
" delta_vec = target_vec - q_vec\n",
" \n",
" return delta_vec"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "324b3f707db53cbe0e0182d494691fa4",
"grade": false,
"grade_id": "cell-07d3abd266be6559",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"Run the following code to test your implementation of the `get_td_error()` function:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Passed the asserts! (Note: These are however limited in scope, additional testing is encouraged.)\n"
]
}
],
"source": [
"# --------------\n",
"# Debugging Cell\n",
"# --------------\n",
"# Feel free to make any changes to this cell to debug your code\n",
"\n",
"data = np.load(\"asserts/get_td_error_1.npz\", allow_pickle=True)\n",
"\n",
"states = data[\"states\"]\n",
"next_states = data[\"next_states\"]\n",
"actions = data[\"actions\"]\n",
"rewards = data[\"rewards\"]\n",
"discount = data[\"discount\"]\n",
"terminals = data[\"terminals\"]\n",
"tau = 0.001\n",
"\n",
"network_config = {\"state_dim\": 8,\n",
" \"num_hidden_units\": 512,\n",
" \"num_actions\": 4\n",
" }\n",
"\n",
"network = ActionValueNetwork(network_config)\n",
"network.set_weights(data[\"network_weights\"])\n",
"\n",
"current_q = ActionValueNetwork(network_config)\n",
"current_q.set_weights(data[\"current_q_weights\"])\n",
"\n",
"delta_vec = get_td_error(states, next_states, actions, rewards, discount, terminals, network, current_q, tau)\n",
"answer_delta_vec = data[\"delta_vec\"]\n",
"\n",
"assert(np.allclose(delta_vec, answer_delta_vec))\n",
"print(\"Passed the asserts! (Note: These are however limited in scope, additional testing is encouraged.)\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "de39bdc2c7e843db180faf962fac9572",
"grade": true,
"grade_id": "cell-6b4dccc113c7b5c9",
"locked": true,
"points": 20,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Tested Cell\n",
"# -----------\n",
"# The contents of the cell will be tested by the autograder.\n",
"# If they do not pass here, they will not pass there.\n",
"\n",
"data = np.load(\"asserts/get_td_error_1.npz\", allow_pickle=True)\n",
"\n",
"states = data[\"states\"]\n",
"next_states = data[\"next_states\"]\n",
"actions = data[\"actions\"]\n",
"rewards = data[\"rewards\"]\n",
"discount = data[\"discount\"]\n",
"terminals = data[\"terminals\"]\n",
"tau = 0.001\n",
"\n",
"network_config = {\"state_dim\": 8,\n",
" \"num_hidden_units\": 512,\n",
" \"num_actions\": 4\n",
" }\n",
"\n",
"network = ActionValueNetwork(network_config)\n",
"network.set_weights(data[\"network_weights\"])\n",
"\n",
"current_q = ActionValueNetwork(network_config)\n",
"current_q.set_weights(data[\"current_q_weights\"])\n",
"\n",
"delta_vec = get_td_error(states, next_states, actions, rewards, discount, terminals, network, current_q, tau)\n",
"answer_delta_vec = data[\"delta_vec\"]\n",
"\n",
"assert(np.allclose(delta_vec, answer_delta_vec))"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "be42a5cbc9a9a6b71d6bd29e4ff78984",
"grade": false,
"grade_id": "cell-68c8eca2519dd27d",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"Now that you implemented the `get_td_error()` function, you can use it to implement the `optimize_network()` function. In this function, you will:\n",
"- get the TD-errors vector from `get_td_error()`,\n",
"- make the TD-errors into a matrix using zeroes for actions not taken in the transitions,\n",
"- pass the TD-errors matrix to the `get_TD_update()` function of network to calculate the gradients times TD errors, and,\n",
"- perform an ADAM optimizer step."
]
},
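{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the second step above, here is a small sketch of the indexing used to scatter the TD-error vector into a matrix with zeros for actions not taken (the numbers are made up):\n",
"\n",
"```python\n",
"delta_vec = np.array([0.5, -1.0, 2.0])   # TD errors for three transitions\n",
"actions = np.array([2, 0, 3])            # actions taken in those transitions\n",
"\n",
"delta_mat = np.zeros((3, 4))             # batch_size = 3, num_actions = 4\n",
"delta_mat[np.arange(3), actions] = delta_vec\n",
"# delta_mat is now:\n",
"# [[ 0.   0.   0.5  0. ]\n",
"#  [-1.   0.   0.   0. ]\n",
"#  [ 0.   0.   0.   2. ]]\n",
"```"
]
},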
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "5772dc09af7d47867f70baa8580057cf",
"grade": false,
"grade_id": "cell-2b9714cb6ee933de",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Graded Cell\n",
"# -----------\n",
"\n",
"### Work Required: Yes. Fill in code in optimize_network (~2 Lines).\n",
"def optimize_network(experiences, discount, optimizer, network, current_q, tau):\n",
" \"\"\"\n",
" Args:\n",
" experiences (Numpy array): The batch of experiences including the states, actions, \n",
" rewards, terminals, and next_states.\n",
" discount (float): The discount factor.\n",
" network (ActionValueNetwork): The latest state of the network that is getting replay updates.\n",
" current_q (ActionValueNetwork): The fixed network used for computing the targets, \n",
" and particularly, the action-values at the next-states.\n",
" \"\"\"\n",
" \n",
" # Get states, action, rewards, terminals, and next_states from experiences\n",
" states, actions, rewards, terminals, next_states = map(list, zip(*experiences))\n",
" states = np.concatenate(states)\n",
" next_states = np.concatenate(next_states)\n",
" rewards = np.array(rewards)\n",
" terminals = np.array(terminals)\n",
" batch_size = states.shape[0]\n",
"\n",
" # Compute TD error using the get_td_error function\n",
" # Note that q_vec is a 1D array of shape (batch_size)\n",
" delta_vec = get_td_error(states, next_states, actions, rewards, discount, terminals, network, current_q, tau)\n",
"\n",
" # Batch Indices is an array from 0 to the batch_size - 1. \n",
" batch_indices = np.arange(batch_size)\n",
"\n",
" # Make a td error matrix of shape (batch_size, num_actions)\n",
" # delta_mat has non-zero value only for actions taken\n",
" delta_mat = np.zeros((batch_size, network.num_actions))\n",
" delta_mat[batch_indices, actions] = delta_vec\n",
"\n",
" # Pass delta_mat to compute the TD errors times the gradients of the network's weights from back-propagation\n",
" \n",
" ### START CODE HERE\n",
" td_update = None\n",
" ### END CODE HERE\n",
" # your code here\n",
" td_update = network.get_TD_update(states, delta_mat) \n",
" \n",
" # Pass network.get_weights and the td_update to the optimizer to get updated weights\n",
" ### START CODE HERE\n",
" weights = None\n",
" ### END CODE HERE\n",
" # your code here\n",
" weights = optimizer.update_weights(network.get_weights(), td_update)\n",
" \n",
" network.set_weights(weights)"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "f1510172c78a65fed30c285a935f29f4",
"grade": false,
"grade_id": "cell-dd47bfc5b0850596",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"Run the following code to test your implementation of the `optimize_network()` function:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "288c5aae724dcb5ac4edc8c5ebec827d",
"grade": true,
"grade_id": "cell-2fcf8d08f7cbc3f2",
"locked": true,
"points": 10,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Tested Cell\n",
"# -----------\n",
"# The contents of the cell will be tested by the autograder.\n",
"# If they do not pass here, they will not pass there.\n",
"\n",
"input_data = np.load(\"asserts/optimize_network_input_1.npz\", allow_pickle=True)\n",
"\n",
"experiences = list(input_data[\"experiences\"])\n",
"discount = input_data[\"discount\"]\n",
"tau = 0.001\n",
"\n",
"network_config = {\"state_dim\": 8,\n",
" \"num_hidden_units\": 512,\n",
" \"num_actions\": 4\n",
" }\n",
"\n",
"network = ActionValueNetwork(network_config)\n",
"network.set_weights(input_data[\"network_weights\"])\n",
"\n",
"current_q = ActionValueNetwork(network_config)\n",
"current_q.set_weights(input_data[\"current_q_weights\"])\n",
"\n",
"optimizer_config = {'step_size': 3e-5, \n",
" 'beta_m': 0.9, \n",
" 'beta_v': 0.999,\n",
" 'epsilon': 1e-8\n",
" }\n",
"optimizer = Adam(network.layer_sizes, optimizer_config)\n",
"optimizer.m = input_data[\"optimizer_m\"]\n",
"optimizer.v = input_data[\"optimizer_v\"]\n",
"optimizer.beta_m_product = input_data[\"optimizer_beta_m_product\"]\n",
"optimizer.beta_v_product = input_data[\"optimizer_beta_v_product\"]\n",
"\n",
"optimize_network(experiences, discount, optimizer, network, current_q, tau)\n",
"updated_weights = network.get_weights()\n",
"\n",
"output_data = np.load(\"asserts/optimize_network_output_1.npz\", allow_pickle=True)\n",
"answer_updated_weights = output_data[\"updated_weights\"]\n",
"\n",
"assert(np.allclose(updated_weights[0][\"W\"], answer_updated_weights[0][\"W\"]))\n",
"assert(np.allclose(updated_weights[0][\"b\"], answer_updated_weights[0][\"b\"]))\n",
"assert(np.allclose(updated_weights[1][\"W\"], answer_updated_weights[1][\"W\"]))\n",
"assert(np.allclose(updated_weights[1][\"b\"], answer_updated_weights[1][\"b\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "298777333c00b402d67ac947c6cad456",
"grade": false,
"grade_id": "cell-66cff2d5725d31c3",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"Now that you implemented the `optimize_network()` function, you can implement the agent. In the cell below, you will fill the `agent_step()` and `agent_end()` functions. You should:\n",
"- select an action (only in `agent_step()`),\n",
"- add transitions (consisting of the state, action, reward, terminal, and next state) to the replay buffer, and,\n",
"- update the weights of the neural network by doing multiple replay steps and calling the `optimize_network()` function that you implemented above."
]
},
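{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference when reading `agent_init()` in the cell below, the agent expects a configuration dictionary along the following lines. The specific numbers here are placeholders, not the values used in the experiments:\n",
"\n",
"```python\n",
"example_agent_config = {\n",
"    'network_config': {'state_dim': 8,            # placeholder values\n",
"                       'num_hidden_units': 256,\n",
"                       'num_actions': 4},\n",
"    'optimizer_config': {'step_size': 1e-3,\n",
"                         'beta_m': 0.9,\n",
"                         'beta_v': 0.999,\n",
"                         'epsilon': 1e-8},\n",
"    'replay_buffer_size': 50000,\n",
"    'minibatch_sz': 8,\n",
"    'num_replay_updates_per_step': 4,\n",
"    'gamma': 0.99,\n",
"    'tau': 0.001,\n",
"    'seed': 0\n",
"}\n",
"```"
]
},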
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "1d9134ac89ad8c86157599044f5dbc8e",
"grade": false,
"grade_id": "cell-54b5db480295424c",
"locked": false,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Graded Cell\n",
"# -----------\n",
"\n",
"### Work Required: Yes. Fill in code in agent_step and agent_end (~7 Lines).\n",
"class Agent(BaseAgent):\n",
" def __init__(self):\n",
" self.name = \"expected_sarsa_agent\"\n",
" \n",
" # Work Required: No.\n",
" def agent_init(self, agent_config):\n",
" \"\"\"Setup for the agent called when the experiment first starts.\n",
"\n",
" Set parameters needed to setup the agent.\n",
"\n",
" Assume agent_config dict contains:\n",
" {\n",
" network_config: dictionary,\n",
" optimizer_config: dictionary,\n",
" replay_buffer_size: integer,\n",
" minibatch_sz: integer, \n",
" num_replay_updates_per_step: float\n",
" discount_factor: float,\n",
" }\n",
" \"\"\"\n",
" self.replay_buffer = ReplayBuffer(agent_config['replay_buffer_size'], \n",
" agent_config['minibatch_sz'], agent_config.get(\"seed\"))\n",
" self.network = ActionValueNetwork(agent_config['network_config'])\n",
" self.optimizer = Adam(self.network.layer_sizes, agent_config[\"optimizer_config\"])\n",
" self.num_actions = agent_config['network_config']['num_actions']\n",
" self.num_replay = agent_config['num_replay_updates_per_step']\n",
" self.discount = agent_config['gamma']\n",
" self.tau = agent_config['tau']\n",
" \n",
" self.rand_generator = np.random.RandomState(agent_config.get(\"seed\"))\n",
" \n",
" self.last_state = None\n",
" self.last_action = None\n",
" \n",
" self.sum_rewards = 0\n",
" self.episode_steps = 0\n",
"\n",
" # Work Required: No.\n",
" def policy(self, state):\n",
" \"\"\"\n",
" Args:\n",
" state (Numpy array): the state.\n",
" Returns:\n",
" the action. \n",
" \"\"\"\n",
" action_values = self.network.get_action_values(state)\n",
" probs_batch = softmax(action_values, self.tau)\n",
" action = self.rand_generator.choice(self.num_actions, p=probs_batch.squeeze())\n",
" return action\n",
"\n",
" # Work Required: No.\n",
" def agent_start(self, state):\n",
" \"\"\"The first method called when the experiment starts, called after\n",
" the environment starts.\n",
" Args:\n",
" state (Numpy array): the state from the\n",
" environment's evn_start function.\n",
" Returns:\n",
" The first action the agent takes.\n",
" \"\"\"\n",
" self.sum_rewards = 0\n",
" self.episode_steps = 0\n",
" self.last_state = np.array([state])\n",
" self.last_action = self.policy(self.last_state)\n",
" return self.last_action\n",
"\n",
" # Work Required: Yes. Fill in the action selection, replay-buffer update, \n",
" # weights update using optimize_network, and updating last_state and last_action (~5 lines).\n",
" def agent_step(self, reward, state):\n",
" \"\"\"A step taken by the agent.\n",
" Args:\n",
" reward (float): the reward received for taking the last action taken\n",
" state (Numpy array): the state from the\n",
" environment's step based, where the agent ended up after the\n",
" last step\n",
" Returns:\n",
" The action the agent is taking.\n",
" \"\"\"\n",
" \n",
" self.sum_rewards += reward\n",
" self.episode_steps += 1\n",
"\n",
" # Make state an array of shape (1, state_dim) to add a batch dimension and\n",
" # to later match the get_action_values() and get_TD_update() functions\n",
" state = np.array([state])\n",
"\n",
" # Select action\n",
" # your code here\n",
" action = self.policy(state)\n",
" \n",
" # Append new experience to replay buffer\n",
" # Note: look at the replay_buffer append function for the order of arguments\n",
"\n",
" # your code here\n",
" self.replay_buffer.append(self.last_state, self.last_action, reward, 0, state)\n",
" \n",
" # Perform replay steps:\n",
" if self.replay_buffer.size() > self.replay_buffer.minibatch_size:\n",
" current_q = deepcopy(self.network)\n",
" for _ in range(self.num_replay):\n",
" \n",
" # Get sample experiences from the replay buffer\n",
" experiences = self.replay_buffer.sample()\n",
" \n",
" # Call optimize_network to update the weights of the network (~1 Line)\n",
" # your code here\n",
" optimize_network(experiences, self.discount, self.optimizer, self.network, current_q ,self.tau)\n",
" \n",
" # Update the last state and last action.\n",
" ### START CODE HERE (~2 Lines)\n",
" self.last_state = state\n",
" self.last_action = action\n",
" ### END CODE HERE\n",
" # your code here\n",
" \n",
" \n",
" return action\n",
"\n",
" # Work Required: Yes. Fill in the replay-buffer update and\n",
" # update of the weights using optimize_network (~2 lines).\n",
" def agent_end(self, reward):\n",
" \"\"\"Run when the agent terminates.\n",
" Args:\n",
" reward (float): the reward the agent received for entering the\n",
" terminal state.\n",
" \"\"\"\n",
" self.sum_rewards += reward\n",
" self.episode_steps += 1\n",
" \n",
" # Set terminal state to an array of zeros\n",
" state = np.zeros_like(self.last_state)\n",
"\n",
" # Append new experience to replay buffer\n",
" # Note: look at the replay_buffer append function for the order of arguments\n",
" \n",
" # your code here\n",
" self.replay_buffer.append(self.last_state, self.last_action, reward, 1, state)\n",
" \n",
" # Perform replay steps:\n",
" if self.replay_buffer.size() > self.replay_buffer.minibatch_size:\n",
" current_q = deepcopy(self.network)\n",
" for _ in range(self.num_replay):\n",
" \n",
" # Get sample experiences from the replay buffer\n",
" experiences = self.replay_buffer.sample()\n",
" \n",
" # Call optimize_network to update the weights of the network\n",
" # your code here\n",
" optimize_network(experiences, self.discount, self.optimizer, self.network, current_q ,self.tau)\n",
" \n",
" \n",
" def agent_message(self, message):\n",
" if message == \"get_sum_reward\":\n",
" return self.sum_rewards\n",
" else:\n",
" raise Exception(\"Unrecognized Message!\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "3eb47bb8d21e025d56d1d89d4e4c746c",
"grade": false,
"grade_id": "cell-d5e188547a2be9d9",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"Run the following code to test your implementation of the `agent_step()` function:"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "d351a79afa5ae8b5c72510a871a7328a",
"grade": true,
"grade_id": "cell-154adaa2d37b3d45",
"locked": true,
"points": 20,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Tested Cell\n",
"# -----------\n",
"# The contents of the cell will be tested by the autograder.\n",
"# If they do not pass here, they will not pass there.\n",
"\n",
"agent_info = {\n",
" 'network_config': {\n",
" 'state_dim': 8,\n",
" 'num_hidden_units': 256,\n",
" 'num_hidden_layers': 1,\n",
" 'num_actions': 4\n",
" },\n",
" 'optimizer_config': {\n",
" 'step_size': 3e-5, \n",
" 'beta_m': 0.9, \n",
" 'beta_v': 0.999,\n",
" 'epsilon': 1e-8\n",
" },\n",
" 'replay_buffer_size': 32,\n",
" 'minibatch_sz': 32,\n",
" 'num_replay_updates_per_step': 4,\n",
" 'gamma': 0.99,\n",
" 'tau': 1000.0,\n",
" 'seed': 0}\n",
"\n",
"# Initialize agent\n",
"agent = Agent()\n",
"agent.agent_init(agent_info)\n",
"\n",
"# load agent network, optimizer, replay_buffer from the agent_input_1.npz file\n",
"input_data = np.load(\"asserts/agent_input_1.npz\", allow_pickle=True)\n",
"agent.network.set_weights(input_data[\"network_weights\"])\n",
"agent.optimizer.m = input_data[\"optimizer_m\"]\n",
"agent.optimizer.v = input_data[\"optimizer_v\"]\n",
"agent.optimizer.beta_m_product = input_data[\"optimizer_beta_m_product\"]\n",
"agent.optimizer.beta_v_product = input_data[\"optimizer_beta_v_product\"]\n",
"agent.replay_buffer.rand_generator.seed(int(input_data[\"replay_buffer_seed\"]))\n",
"for experience in input_data[\"replay_buffer\"]:\n",
" agent.replay_buffer.buffer.append(experience)\n",
"\n",
"# Perform agent_step multiple times\n",
"last_state_array = input_data[\"last_state_array\"]\n",
"last_action_array = input_data[\"last_action_array\"]\n",
"state_array = input_data[\"state_array\"]\n",
"reward_array = input_data[\"reward_array\"]\n",
"\n",
"for i in range(5):\n",
" agent.last_state = last_state_array[i]\n",
" agent.last_action = last_action_array[i]\n",
" state = state_array[i]\n",
" reward = reward_array[i]\n",
" \n",
" agent.agent_step(reward, state)\n",
" \n",
" # Load expected values for last_state, last_action, weights, and replay_buffer \n",
" output_data = np.load(\"asserts/agent_step_output_{}.npz\".format(i), allow_pickle=True)\n",
" answer_last_state = output_data[\"last_state\"]\n",
" answer_last_action = output_data[\"last_action\"]\n",
" answer_updated_weights = output_data[\"updated_weights\"]\n",
" answer_replay_buffer = output_data[\"replay_buffer\"]\n",
"\n",
" # Asserts for last_state and last_action\n",
" assert(np.allclose(answer_last_state, agent.last_state))\n",
" assert(np.allclose(answer_last_action, agent.last_action))\n",
"\n",
" # Asserts for replay_buffer \n",
" for i in range(answer_replay_buffer.shape[0]):\n",
" for j in range(answer_replay_buffer.shape[1]):\n",
" assert(np.allclose(np.asarray(agent.replay_buffer.buffer)[i, j], answer_replay_buffer[i, j]))\n",
"\n",
" # Asserts for network.weights\n",
" assert(np.allclose(agent.network.weights[0][\"W\"], answer_updated_weights[0][\"W\"]))\n",
" assert(np.allclose(agent.network.weights[0][\"b\"], answer_updated_weights[0][\"b\"]))\n",
" assert(np.allclose(agent.network.weights[1][\"W\"], answer_updated_weights[1][\"W\"]))\n",
" assert(np.allclose(agent.network.weights[1][\"b\"], answer_updated_weights[1][\"b\"]))\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "ab42a7c0695fa34f1aedc1f59d943d24",
"grade": false,
"grade_id": "cell-e28c640fdbff0646",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"Run the following code to test your implementation of the `agent_end()` function:"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "9e669f8872d9ffb8231570fa2c8b177a",
"grade": true,
"grade_id": "cell-1c52a8c6d80f28c0",
"locked": true,
"points": 20,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [],
"source": [
"# -----------\n",
"# Tested Cell\n",
"# -----------\n",
"# The contents of the cell will be tested by the autograder.\n",
"# If they do not pass here, they will not pass there.\n",
"\n",
"agent_info = {\n",
" 'network_config': {\n",
" 'state_dim': 8,\n",
" 'num_hidden_units': 256,\n",
" 'num_hidden_layers': 1,\n",
" 'num_actions': 4\n",
" },\n",
" 'optimizer_config': {\n",
" 'step_size': 3e-5, \n",
" 'beta_m': 0.9, \n",
" 'beta_v': 0.999,\n",
" 'epsilon': 1e-8\n",
" },\n",
" 'replay_buffer_size': 32,\n",
" 'minibatch_sz': 32,\n",
" 'num_replay_updates_per_step': 4,\n",
" 'gamma': 0.99,\n",
" 'tau': 1000,\n",
" 'seed': 0\n",
" }\n",
"\n",
"# Initialize agent\n",
"agent = Agent()\n",
"agent.agent_init(agent_info)\n",
"\n",
"# load agent network, optimizer, replay_buffer from the agent_input_1.npz file\n",
"input_data = np.load(\"asserts/agent_input_1.npz\", allow_pickle=True)\n",
"agent.network.set_weights(input_data[\"network_weights\"])\n",
"agent.optimizer.m = input_data[\"optimizer_m\"]\n",
"agent.optimizer.v = input_data[\"optimizer_v\"]\n",
"agent.optimizer.beta_m_product = input_data[\"optimizer_beta_m_product\"]\n",
"agent.optimizer.beta_v_product = input_data[\"optimizer_beta_v_product\"]\n",
"agent.replay_buffer.rand_generator.seed(int(input_data[\"replay_buffer_seed\"]))\n",
"for experience in input_data[\"replay_buffer\"]:\n",
" agent.replay_buffer.buffer.append(experience)\n",
"\n",
"# Perform agent_step multiple times\n",
"last_state_array = input_data[\"last_state_array\"]\n",
"last_action_array = input_data[\"last_action_array\"]\n",
"state_array = input_data[\"state_array\"]\n",
"reward_array = input_data[\"reward_array\"]\n",
"\n",
"for i in range(5):\n",
" agent.last_state = last_state_array[i]\n",
" agent.last_action = last_action_array[i]\n",
" reward = reward_array[i]\n",
" \n",
" agent.agent_end(reward)\n",
"\n",
" # Load expected values for last_state, last_action, weights, and replay_buffer \n",
" output_data = np.load(\"asserts/agent_end_output_{}.npz\".format(i), allow_pickle=True)\n",
" answer_updated_weights = output_data[\"updated_weights\"]\n",
" answer_replay_buffer = output_data[\"replay_buffer\"]\n",
"\n",
" # Asserts for replay_buffer \n",
" for i in range(answer_replay_buffer.shape[0]):\n",
" for j in range(answer_replay_buffer.shape[1]):\n",
" assert(np.allclose(np.asarray(agent.replay_buffer.buffer)[i, j], answer_replay_buffer[i, j]))\n",
"\n",
" # Asserts for network.weights\n",
" assert(np.allclose(agent.network.weights[0][\"W\"], answer_updated_weights[0][\"W\"]))\n",
" assert(np.allclose(agent.network.weights[0][\"b\"], answer_updated_weights[0][\"b\"]))\n",
" assert(np.allclose(agent.network.weights[1][\"W\"], answer_updated_weights[1][\"W\"]))\n",
" assert(np.allclose(agent.network.weights[1][\"b\"], answer_updated_weights[1][\"b\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "1ed9fca0352062809443beae983d9ea2",
"grade": false,
"grade_id": "cell-83130c3c2426b0c4",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"## Section 6: Run Experiment\n",
"\n",
"Now that you implemented the agent, we can use it to run an experiment on the Lunar Lander problem. We will plot the learning curve of the agent to visualize learning progress. To plot the learning curve, we use the sum of rewards in an episode as the performance measure. We have provided for you the experiment/plot code in the cell below which you can go ahead and run. Note that running the cell below has taken approximately 10 minutes in prior testing."
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "e192cd7f474ff57861f6f8a3e3ab188c",
"grade": false,
"grade_id": "cell-0defecc3f69370dc",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 300/300 [08:04<00:00, 1.61s/it]\n"
]
}
],
"source": [
"# ---------------\n",
"# Discussion Cell\n",
"# ---------------\n",
"\n",
"def run_experiment(environment, agent, environment_parameters, agent_parameters, experiment_parameters):\n",
" \n",
" rl_glue = RLGlue(environment, agent)\n",
" \n",
" # save sum of reward at the end of each episode\n",
" agent_sum_reward = np.zeros((experiment_parameters[\"num_runs\"], \n",
" experiment_parameters[\"num_episodes\"]))\n",
"\n",
" env_info = {}\n",
"\n",
" agent_info = agent_parameters\n",
"\n",
" # one agent setting\n",
" for run in range(1, experiment_parameters[\"num_runs\"]+1):\n",
" agent_info[\"seed\"] = run\n",
" agent_info[\"network_config\"][\"seed\"] = run\n",
" env_info[\"seed\"] = run\n",
"\n",
" rl_glue.rl_init(agent_info, env_info)\n",
" \n",
" for episode in tqdm(range(1, experiment_parameters[\"num_episodes\"]+1)):\n",
" # run episode\n",
" rl_glue.rl_episode(experiment_parameters[\"timeout\"])\n",
" \n",
" episode_reward = rl_glue.rl_agent_message(\"get_sum_reward\")\n",
" agent_sum_reward[run - 1, episode - 1] = episode_reward\n",
" save_name = \"{}\".format(rl_glue.agent.name)\n",
" if not os.path.exists('results'):\n",
" os.makedirs('results')\n",
" np.save(\"results/sum_reward_{}\".format(save_name), agent_sum_reward)\n",
" shutil.make_archive('results', 'zip', 'results')\n",
"\n",
"# Run Experiment\n",
"\n",
"# Experiment parameters\n",
"experiment_parameters = {\n",
" \"num_runs\" : 1,\n",
" \"num_episodes\" : 300,\n",
" # OpenAI Gym environments allow for a timestep limit timeout, causing episodes to end after \n",
" # some number of timesteps. Here we use the default of 500.\n",
" \"timeout\" : 500\n",
"}\n",
"\n",
"# Environment parameters\n",
"environment_parameters = {}\n",
"\n",
"current_env = LunarLanderEnvironment\n",
"\n",
"# Agent parameters\n",
"agent_parameters = {\n",
" 'network_config': {\n",
" 'state_dim': 8,\n",
" 'num_hidden_units': 256,\n",
" 'num_actions': 4\n",
" },\n",
" 'optimizer_config': {\n",
" 'step_size': 1e-3,\n",
" 'beta_m': 0.9, \n",
" 'beta_v': 0.999,\n",
" 'epsilon': 1e-8\n",
" },\n",
" 'replay_buffer_size': 50000,\n",
" 'minibatch_sz': 8,\n",
" 'num_replay_updates_per_step': 4,\n",
" 'gamma': 0.99,\n",
" 'tau': 0.001\n",
"}\n",
"current_agent = Agent\n",
"\n",
"# run experiment\n",
"run_experiment(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters)"
]
},
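{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to inspect the raw data behind the learning curve, the array saved by `run_experiment()` above can be loaded directly. A minimal sketch, assuming the experiment cell has already been run so the file exists:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# run_experiment() saved the per-episode returns with np.save, which appends the .npy extension.\n",
"sum_rewards = np.load(\"results/sum_reward_expected_sarsa_agent.npy\")\n",
"\n",
"print(sum_rewards.shape)     # (num_runs, num_episodes), i.e. (1, 300) for this experiment\n",
"print(sum_rewards[0, -10:])  # sum of rewards for the last 10 episodes of the first run\n",
"```"
]
},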
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "92ba982f59ab0ecd45333f5b73f0be60",
"grade": false,
"grade_id": "cell-b6321a32b126637e",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"Run the cell below to see the comparison between the agent that you implemented and a random agent for the one run and 300 episodes. Note that the `plot_result()` function smoothes the learning curve by applying a sliding window on the performance measure. "
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "3132510fde7c06020276a6c6f272eccd",
"grade": false,
"grade_id": "cell-337be142123eb81f",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAjgAAAGoCAYAAABL+58oAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nOzdd3xW5f3/8dcnkywSAkmAsJcsASWAVlQsWLUVBz8HbpRqa+1XnBVcdddWW1GrVqU466LWVSdDBBRBUJClrLAhhBlCyP78/rjvpEkIEDAhcPN+Ph555L7Puc45n5Ng7rfXuc65zN0RERERCSVh9V2AiIiISG1TwBEREZGQo4AjIiIiIUcBR0REREKOAo6IiIiEHAUcERERCTkKOCJSJ8zsHjPbVN917IuZDTMzN7P4g3zcNDMbbWbLzKzAzLaa2cdmdtrBrEMkVEXUdwEiIvXsQ+B4IO9gHdDMjgI+B3YCjwILgYbAL4H3zayvu889WPWIhCIFHBEJOWYW4+67atLW3bOB7Douqap/AVuAn7l7ToXlH5jZM8C2n7Lz/Tl/kVClS1QiUm/MrLuZfWhmO4Jf48ysaYX1cWb2dzP70czyzCzTzJ4ys4ZV9uNmdlPwkk82MK/C8hFm9pCZZZvZxuD20RW2rXSJyszaBN9fYGbPmtl2M1tjZveaWViV455vZkvMbJeZfW5mxwS3HbaXcz4J6A2MqhJuAHD37919VbDtZDP7d5XtBwSP0b1KvZeY2ctmto1AUHrJzGZWc/zfB+stO98wMxtpZkuDl8oWm9kVe6pf5HChgCMi9cLMOgBfAg2Ay4BhQDcCH84WbBYLhAN3AGcAdwE/B8ZVs8tbgWbBfV1fYfnNQHPgUuAR4DfAiBqU+BcgFzgPeBW4O/i6rP4M4A3gW+Bc4H3gzRrs92SgBJhQg7b741FgB3A+8FCwtj5m1q5KuwuAD909N/j+SeBO4DngV8A7wFgzO7OW6xM5qHSJSkTqyx+BDcAZ7l4IYGbfAz8QGIvyYfDy0bVlG5hZBJAJTDOzVmU9HUEb3P3Cao6zwt2HBV9/amYnAEMIBJi9meLuNwdfjzez04PbvRVcdhuwCBjqgUn9PjGzSODP+9hvOpBdB5eQvnb368reBH9WmwkEmoeDy9KB/sFlZSHzWuBKd38puOkEM2tG4Pfz31quUeSgUQ+OiNSXQQR6C0rNLKJCeFkBZJQ1MrPLzOw7M8sFioBpwVWdquzvwz0c57Mq7xcCLWpQ37626wN84JVnLH6/BvsFqItZjiudv7sXA/8BKoa+8wkMbC5rOxAoBd4p+x0Efw8TgV5mFl4HdYocFAo4IlJfmhDoBSmq8tUOaAlgZucCLwPTCXw4H0fgchAELm1VlLWH41QdsFtYzbYHsl1Tdh+cXJPBymuBFDOrSQ37o7rzf4NAUCkLgxcC71foPWpC4BLgdir/Dl4k0MPfrJZrFDlodIlKROrLFgI9OGOqWVf2/JzzgRnu/ruyFWZ28h72Vxe9InuzAUipsqzq++pMBu4j0Huyp16nMvlAVJVlyXtoW935TyZQ54Vm9jLQD/hThfVbgGLgBAI9OVVt3Ed9IocsBRwRqS8Tge7A7CqXeSqKAQqqLLukTququW+AwWZ2e4X6z9rXRu4+1cxmAw+Z2RR331FxvZkdDWxz99XAGuCkKrs4taYFuntp8C6sCwmEpRzgkwpNJhHowUl09/E13a/I4UABR0TqUpSZnVfN8i+Ae4CZwIdmNpZAr006gQ/wF919MjAeeMrM7gBmEBh8PPAg1F0TfyZQ0xtm9gLQBbg6uK663pCKLiHwoL9ZZvYY/3vQ32nBffQDVhPo4RoebPMhcEqwzf54E/g9cCPwTtmAbgB3/9HM/hE8h78AswhchusGdHL3X+/nsUQOGQo4IlKXEqj+lu5T3H2ymR0HPEDgFuUYAuNTJgJLg+2eJTAmZwSBD97xwMXA13Vc9z65+ywzu4jALdlnEwgH1xKocbfn21TZ9kczOxYYBfyBQLDLIxD4Li57irG7f2hmtwO/A34NvAfcEPxeU18SCEstCYzJqeo6YDGBYHVfsPaFwD/34xgihxzbc8+wiIjsDzO7FHgFaOfumfVdj8iRTD04IiIHKDitwnhgK3AsgQfmfahwI1L/dJt4LTOzBmY208zmmtkCM7s3uDzZzMYHH+s+3swaVdhmVPAx6T+aZhIWOZw0Bp4m8MycWwmMd7m4XisSEUCXqGpd8BHzce6eG3yq6TQC4weGAFvc/WEzGwk0cvfbzKwr8DrQl8Dj5CcQGNxXUk+nICIicthTD04t84CyOV4ig19OYBBi2aPQXwLOCb4+G3jD3QuC3dpLCYQdEREROUAag1MHgo83nw10AJ5y9xlmlubu6wHcfb2ZpQabp1P5jpA1wWVV93kNcA1AXFxc786dO9flKYiIiBwWZs+evcndd3vIpgJOHQheXuplZkkE5njpvpfmVs2y3a4buvtzBG6lJSMjw2fNmlUrtYqIiBzOzGxldct1iaoOufs2Ao9KPx3ICs7QS/B72SPQ1xCcdyeoBbDuIJYpIiISchRwapmZpQR7bjCzGAIzJv9AYJbhK4LNruB/D+p6HxhqZtFm1hboSOBhXyIiInKAdImq9jUDXgqOwwkD3nL3/5rZdOAtMxsOrCIwiSDuvsDM3iLw5NBi4DrdQSUiIvLT6Dbxw5DG4IiIiASY2Wx3z6i6XJeoREREJOToEpWI7FVOTg4bN26kqKiovksRkSNMZGQkqampNGzYcL+3VcARkT3KyckhKyuL9PR0YmJiCDyoW0Sk7rk7u3btYu3atQD7HXJ0iUpE9mjjxo2kp6cTGxurcCMiB5WZERsbS3p6Ohs3btz3BlUo4IjIHhUVFRETE1PfZYjIESwmJuaALpEr4IjIXqnnRkTq04H+DVLAERERkZCjgCMicph49dVXadOmTX2XcUjo1q0bb7755l7bmBnTpk07SBXVnWHDhvHrX/+6vsuoVZMnTyYiom7vc1LAEZGQMGDAAKKjo4mPj6/0NW/evPoujRdffJEOHTrU+XGys7MZPnw46enpxMfH06xZM8444wzWr1+/W9tBgwYRHh7OihUrKi1fsWIFZkZcXBzx8fGkpqZy7rnnkpmZWanduHHjyMjIICkpiaSkJI4++miefPLJ3Y4zbdo0zIyrrrqqVs91wYIFXHjhhZVqXrNmTa0e40hxsP59HmwKOCISMu666y5yc3MrfR199NH1XdZBc+mll7Jjxw6+++47cnNzmTt3LhdddNFuYxiWLVvGpEmTSEpK4vnnn692Xz/++CO5ubksWLCAbdu2ceWVV5av++qrr7jqqqt44IEH2Lx5Mxs3buTFF18kPT19t/0899xzJCcn8+abb7J9+/baPeHDnLtTXFxc32UcdAfrmVoKOCIS8nJzc+nSpQsPPPBA+bL777+fLl26sHPnTiBwOWP06NH06tWLhIQETjnlFJYuXVrevri4mIceeohOnTqRlJTECSecwOzZs8vXu
zvPPfccRx99NA0bNqRly5Y89dRTTJ8+nd/+9rcsX768vFdp8uTJAMyfP5/TTjuNJk2a0KpVK0aNGlXpj//MmTPJyMggPj6e/v37s3z58r2e51dffcWwYcNITU0FIDU1lcsvv5ymTZtWavfcc8/RtWtXbr/9dsaOHbvXD9mUlBTOO+88Kk4PM336dLp06cLpp59OeHg4UVFR9O7dmyFDhlTaduvWrYwbN44nn3ySmJgYXnnllT0eZ9OmTYSHh7Nu3ToAJk6ciJnxwgsvAIGff8OGDfnmm28AaNOmDa+++ioAPXv2BOCoo44iPj6e+++/v3y/33//PX369CEhIYHjjjuOH374YY81DBs2jMsuu4yrr76apKQk0tPTefbZZyu1mTp1Kv379yc5OZn27dvz17/+lbIpj6q77HLPPfcwaNCg8vdmxuOPP05GRgaxsbHMmjWLiRMn0q9fPxo1akRKSgpDhw7dr9ui27Rpw0MPPcTAgQOJj4+ne/fufPXVV5XaPP/883Tv3p3ExESOOeYYPvvsM4A9/vscPHgwf/rTn8q3b9WqFSeffHL5+2uvvZbrrrsOCPxu7rvvPtq1a0dycjIDBw5k/vz5lX6ul1xyCVdeeSXJyclcf/31u53DrFmzaNmy5R4D9wFxd30dZl+9e/d2kYNh4cKFld7f8/58v+AfXx2Ur3ven79ftZ588sl+//3373H9vHnzPCEhwSdNmuSTJk3yhIQEnz//f8cAvEuXLr5kyRLPy8vz6667zrt06eLFxcXu7j5q1Cjv27evL1u2zIuLi33MmDHeuHFj37Jli7u7P/30096sWTOfOnWql5SUeHZ2ts+YMcPd3V944QVv3759pXqysrI8OTnZ//GPf3hBQYGvWbPGe/fu7ffee6+7u2/bts2Tk5P9T3/6kxcUFPjMmTM9LS3NW7duvcdz/OUvf+ldu3b1Z5991r/99tvy2isqLCz01NRU/+tf/+pZWVkeGRnpb7/9dvn6zMxMB3z16tXu7r5+/Xo/8cQT/dhjjy1vM336dA8PD/frr7/eP/roI8/Kyqq2nscee8ybNGniBQUFfv311/vRRx+9x9rd3Xv16uUvvfSSu7uPHDnSO3To4BdddJG7u0+bNs0bNWrkJSUl7u7eunVrf+WVV6qtuQzgffr08ZUrV3p+fr6fd955PmjQoD0e/4orrvAGDRr4e++95yUlJf722297RESEr1ixwt3d58+f7/Hx8f7uu+96cXGxL1q0yNu0aVNe8+eff+7h4eGV9vnHP/7RBw4cWKmmo48+2pcuXerFxcWen5/vU6dO9ZkzZ3pRUVH5z3vo0KGV6ho+fPge627durW3b9/e58+f78XFxX7DDTd4hw4dytc/++yz3r59e58zZ46XlJT4hx9+6HFxcb5kyRJ3r/7f5+jRo/2UU05xd/cffvjBmzdv7omJib5jxw53d+/QoYP/5z//cXf3hx56yNu3b++LFi3y/Px8/+Mf/+hNmzb17du3l9cfGRnpb7zxhhcXF/vOnTsr/azee+89T0tL848//niP51j1b1FFwCyv5rNSPTgiEjIefPDB8jEhZV9lunfvzhNPPMHFF1/MxRdfzJNPPkm3bt0qbX/zzTfToUMHYmJi+Mtf/sKyZcuYMWMG7s6TTz7JI488Qrt27QgPD2f48OE0a9aMDz/8EIAnn3ySO+64g/79+xMWFkaTJk3o27fvHmt9+eWX6dmzJ7/5zW+IiooiPT2dUaNG8fLLLwPw3//+l7i4OG677TaioqLo06cPw4cP3+v5v/nmm1x66aW88MIL/OxnP6Nx48bccMMN5Ofnl7d555132Lp1K5dddhmpqamceeaZu/VSQGAQb0JCAs2aNWPr1q289tpr5euOO+44vvjiCzZt2sQ111xD06ZNycjIYOrUqZX28fzzz3PJJZcQFRXF8OHDmTdvHtOnT99j/YMGDWLChAkATJgwgQceeICJEyfi7kyYMIFTTjmFsLD9+9i69dZbadWqFdHR0QwbNox9TVT885//nLPOOouwsDCGDBlCUlISc+bMAeCZZ57h/PPP5+yzzyY8PJzOnTvz+9//vvx3VlO33HIL7du3Jzw8nOjoaPr370+fPn2IiIigadOm/OEPf2DixIn7tc/f/OY3dOvWjfDwcH7961+zdOnS8kuCTzzxBHfffTc9e/YkLCyMX/7yl5xyyim88cYbe9zfoEGD+Oqrr9i1axcTJkzgtNNOo1+/fnzxxResWrWKzMxMTjnlFABeeOEFbrvtNjp37kx0dDR333034eHh5f9tAPTv358LL7yQ8PBwYmNjy5c/8cQT/P73v+eTTz7h9NNP369z3hdN1SAiNfbHwd323age3XHHHdx55517XH/hhRcycuRIYmNjueyyy3ZbX/EOpdjYWFJSUlizZg2bNm0iNzeXwYMHVxrPUlRUVD6wdcWKFXTq1KnGtWZmZvLll19WCmHuTklJCQBr1qyhdevWlY7Xtm3bve4zPj6eUaNGMWrUKAoLC/nkk0+47LLLaNiwIffddx8Azz77LGeeeSYpKSkADB8+nMGDB5OZmVlp/wsWLKBFixbMmjWLs88+m+XLl3PUUUeVrz/hhBM44YQTAFi9ejW33norZ555JitXriQpKYmpU6eycOFCXn/9dQB69OhBRkYGzz77LMcff3y19Q8aNIirrrqKrVu3snjxYoYMGcJ9993H3LlzmTBhAhdffHGNf75lmjVrVv46Li6OHTt21Lh91W0yMzOZNGkS//nPf8rXl5aW0rJly/2qqeqdcLNnz+b2229n7ty55OXl4e7k5ubu1z6rnifAjh07SExMJDMzk+uuu67SpaHi4mJatGixx/1169aN5ORkpk6dyoQJE7jgggtYs2YN48ePZ8OGDfTu3bv83+7q1atp165d+bZhYWG0adOG1atX7/GcIfCze/DBB/ntb39Lr1699ut8a0I9OCJyxPi///s/OnfuTFxcHPfcc89u6yveUZSXl0d2djYtWrSgSZMmxMXFMWHCBLZt21b+tXPnTkaOHAkE/oAvWbKk2uNW1+vQunVrBg0aVGl/27dvL/9gS09PZ+XKleXjO4Dd7mTam6ioKM466ywGDRpU3gOxdOlSPv/8c8aPH0/Tpk1p2rQpV111Fe6+x7EPGRkZPPDAA1x99dXk5eVV26Zly5bccccd5OTklI8TKusV+sUvflF+rIULF/LWW2+xbdu2avdz0kknsXnzZv7+979z4oknEhkZyaBBg3jnnXeYMWNGpbEsFe1vr86Bat26NVdddVWl31lOTg4LFiwAAgGzpKSEgoKC8m3KxhTtrd6hQ4dy7LHHsnjxYnJycspDYW3WPXbs2Ep15+bm8swzz1RbT5mBAwfy6aefMmXKFAYOHMigQYMYP348EyZMqPS7aNmyZaV/m6WlpaxYsaJS8KvuGGFhYUyZMoWxY8fy0EMP1dbp/m//tb5HEZFD0CuvvMJ///tfXn/9dcaNG8fjjz/O+PHjK7V57LHHWLZsGfn5+YwcOZJ27drRr18/zIwRI0Zwyy23lIeY3Nxc
Pv300/IPsOuuu46HHnqI6dOnU1payqZNm8oHxDZt2pSNGzeSk5NTfqzLL7+cWbNmMXbsWPLz8yktLWX58uV88sknAJx55pnk5ubyyCOPUFRUxLfffsvYsWP3eo433XQT33zzTfn+Jk+ezOeff86JJ54IBAYXt23blsWLFzNnzhzmzJnD3Llzufvuuxk7duwe7265/PLLiYuL44knngDg3Xff5YUXXii//XzTpk2MHj2aJk2a0LlzZ7Zs2cLbb7/NU089VX6cOXPmsGjRIho0aLDHwcYxMTEcf/zxPProo5x66qlA4EN29OjRNGvWjI4dO1a7XUpKCmFhYXsMmLXld7/7HW+88QYffPABRUVFFBcXs3DhQr744gvgf4Ocx4wZQ2lpKdOmTePf//73Pvebk5NDYmIiCQkJrFq1iocffrhW677xxhu55557mDNnTvkEltOmTSsfcF3dv08I9KiNGTOGVq1akZqaSq9evdi4cSMfffRRpYAzbNgw/vKXv7B48WIKCwt58MEHKS4u5le/+tU+azvqqKOYOnUq//znPxk1alStnne9D5jVlwYZy6FrbwP7DjUnn3yyR0VFeVxcXKWvDz74wBcsWOAJCQk+YcKE8vavvPKKp6am+rp169w9MPjzscce8x49enh8fLyfdNJJ/uOPP5a3Lyoq8r/+9a/epUsXT0hI8KZNm/o555xTPrC1tLTU//73v3uXLl08Pj7eW7Zs6U899VT5tkOGDPHk5GRPTEz0yZMnu7v7ggULfPDgwZ6WluYNGzb0Hj16lG/j7v7VV1/5scce63FxcX7CCSf4vffeu9dBxiNGjPBu3bp5QkKCN2zY0Lt06eIPPvigl5SUeEFBgaekpPgTTzyx23ZbtmzxuLg4Hzdu3B4H7L7yyiuelJTkW7Zs8SlTpvgZZ5zhaWlpHhsb62lpaT548GD/7rvv3N39b3/7mzdt2tQLCgp2O9aoUaO8W7duezyHBx980AFfsGCBu7tv377dIyIi/KqrrqrUruIg47Lt0tLSPDEx0R944AF3D/xOp06dWt6mukHAFVU3mLfqcb766iv/+c9/7o0bN/ZGjRp5nz59fNy4ceXrx40b523btvX4+Hg/77zz/IYbbthtkHHFmtzd3333XW/fvr3HxcV57969ffTo0R74eN5zXXursbrf4Ysvvui9evXyxMREb9Kkif/iF7/w77//3t33/O9z7dq1Dvitt95avp/zzz/fY2JiPD8/v3xZYWGh33333d66dWtPSkryAQMG+Ny5c/daf9Xfxdq1a71r165+7bXXemlp6W7neCCDjM0rdH/K4SEjI8P3NVBOpDYsWrSILl261HcZB4WZld8CLCKHlr39LTKz2e6eUXW5LlGJiIhIyFHAERERkZCj28RFRABdrhcJLerBERERkZCjgCMiIiIhRwFHREREQo4CjoiIiIQcBRwREREJOQo4IiIHYNq0aZUmwhSRQ4sCjoiEhAEDBhAdHU18fDyJiYn06tWLcePG1XdZIlJPFHBEJGTcdddd5ObmsnnzZoYNG8bFF1/M0qVL67ssEakHCjgiEnIiIiK4+uqrKS4uZs6cOQBceeWVtGzZkoSEBLp27cprr71W3n7y5MlERETw5ptv0r59exITE7ngggvYsWNHeZslS5YwYMAAEhIS6NmzJ1Xng8vLy2PEiBG0bNmSJk2acM4557Bq1ary9QMGDOCmm27i3HPPJSEhgfbt2zNx4kQmTJhA9+7dadiwIeeee26lY4rIgdOTjEWk5j4eCRvmHZxjNT0aznj4gDYtLCzkmWeeAaBTp04A9O/fn0cffZSkpCTGjRvH5ZdfTq9evejatSsAJSUlfPbZZ8ydO5edO3fSv39/nnjiCe644w6Ki4sZPHgwAwcO5OOPP2bNmjUMHjy40jFvvPFG5syZw9dff01SUhIjRoxg8ODBfPvtt4SHhwPwyiuv8MEHH/Dvf/+bu+66i8suu4z+/fszZcqU8hqffPJJbr/99gM6bxH5H/XgiEjIePDBB0lKSiImJoY777yTMWPG0KNHDwCGDx9O48aNCQ8PZ+jQofTo0YPJkydX2v7hhx8mPj6etLQ0zjnnnPJemhkzZpCZmckjjzxCTEwMHTt25Oabby7frrS0lJdffpkHHniA9PR04uLiGD16NIsWLWLmzJnl7S644AKOO+44wsPDufTSS1m/fj233norycnJJCcnc+aZZ/LNN9/U/Q9K5AigHhwRqbkD7FE5WO644w7uvPNOtm7dyvDhw5k0aRLDhw+ntLSUe+65hzfffJMNGzZgZuzcuZPs7OzybcPDw0lJSSl/HxcXV365aM2aNaSmphIbG1u+vm3btuWvs7Ozyc/Pp127duXL4uPjSU1NZfXq1Rx//PEANGvWrHx92b6qLtMlKpHaoYAjIiGnUaNGjBkzhvbt2/Pee++Rm5vLmDFj+Oyzz+jatSthYWFkZGTUeILN9PR0Nm7cSF5eXnkwyczMLF+fkpJCdHQ0mZmZtG/fHoDc3Fw2btxIy5Yta/8ERWSfdIlKREJScnIyN910E7fffjvbtm0jIiKClJQUSktLGTt2LHPnzq3xvo477jhat27NyJEj2bVrF8uWLeOxxx4rXx8WFsbll1/OXXfdxbp168jLy+Pmm2+mc+fO9O3bty5OT0T2QQFHRELWiBEjWL9+PWZGv3796NChA+np6SxcuJATTzyxxvuJiIjg/fffZ+7cuaSmpjJkyBCuueaaSm0ee+wxMjIy6NOnD61atWL9+vW8//775QOMReTgspp20cqhIyMjw6veoipSFxYtWkSXLl3quwwROcLt7W+Rmc1294yqy9WDIyIiIiFHAUdERERCjgKOiIiIhBwFHBEREQk5Cjgisle6EUFE6tOB/g1SwBGRPYqMjGTXrl31XYaIHMF27dpFZGTkfm+ngCMie5SamsratWvJy8tTT46IHFTuTl5eHmvXriU1NXW/t9dUDSKyRw0bNgRg3bp1FBUV1XM1InKkiYyMJC0trfxv0f5QwBGRvWrYsOEB/XEREalPukQlIiIiIUcBR0REREKOAo6IiIiEHAUcERERCTkKOCIiIhJyFHBqmZm1NLPPzWyRmS0wsxHB5clmNt7MlgS/N6qwzSgzW2pmP5rZafVXvYiISGhQwKl9xcDN7t4FOA64zsy6AiOBie7eEZgYfE9w3VCgG3A68LSZhddL5SIiIiFCAaeWuft6d/82+HoHsAhIB84GXgo2ewk4J/j6bOANdy9w90xgKdD34FYtIiISWhRw6pCZtQGOAWYAae6+HgIhCCh77nQ6sLrCZmuCy6ru6xozm2Vms7Kzs+uybBERkcOeAk4dMbN44G3gBnfP2VvTapbtNumPuz/n7hnunpGSklJbZYqIiIQkBZw6YGaRBMLNv9z9P8HFWWbWLLi+GbAxuHwN0LLC5i2AdQerVhERkVCkgFPLzMyAfwKL3P1vFVa9D1wRfH0F8F6F5UPNLNrM2gIdgZkHq14REZFQpMk2a98JwGXAPDObE1x2O/Aw8JaZDQdWAecDuPsCM3sLWEj
gDqzr3L3k4JctIiISOhRwapm7T6P6cTUAA/ewzYPAg3VWlIiIyBFGl6hEREQk5CjgiIiISMhRwBEREZGQo4AjIiIiIUcBR0REREKOAo6IiIiEHAUcERERCTkKOCIiIhJyFHBEREQk5CjgiIiISMhRwBEREZGQo4AjIiIiIUcBR0REREKOAo6IiIiEHAUcERERCTkKOCIiIhJyFHBEREQk5CjgiIiISMhRwBEREZGQo4AjIiIiIUcBR0REREKOAo6IiIiEHAUcERERCTkKOCIiIhJyFHBEREQk5CjgiIiISMhRwBEREZGQo4AjIiIiIUcBR0REREKOAo6IiIiEHAUcERERCTkKOCIiIhJyFHBEREQk5CjgiIiISMhRwBEREZGQo4AjIiIiIUcBR0REREKOAo6IiIiEHAUcERERCTkKOCIiIhJyFHBEREQk5CjgiIiISMhRwBEREZGQo4AjIoFJzo0AACAASURBVCIiIUcBR0REREKOAo6IiIiEHAUcERERCTkKOCIiIhJyFHBEREQk5CjgiIiISMhRwKllZjbWzDaa2fwKy5LNbLyZLQl+b1Rh3SgzW2pmP5rZafVTtYiISGgJ+YBjZieY2fdmVmhmkw/CIV8ETq+ybCQw0d07AhOD7zGzrsBQoFtwm6fNLPwg1CgiIhLS9ivgmFmKmT1tZivMrMDMssxsopmdWlcF1oLHgblAe2BIXR/M3acAW6osPht4Kfj6JeCcCsvfcPcCd88ElgJ967pGERGRUBexn+3fBmKB4QQ+jFOBk4HGtVxXbeoAPOXuq+uxhjR3Xw/g7uvNLDW4PB34ukK7NcFlIiIi8hPUuAfHzJKAE4GR7j7R3Ve6+zfu/qi7v1Gh3Qozu6XKtpPN7O9V2txtZi+a2Q4zW21mF5pZkpm9YWa5wfEqv9hHTdFmNjrYk5RvZl+bWf/gujZm5kAiMNbM3MyG1fR8DxKrZplX29DsGjObZWazsrOz67gsERGRw9v+XKLKDX6dZWYNauHYNwAzgWOBtwhcunkN+AjoBUwBXt3Hsf4CXAhcBRwDzAM+MbNmwGqgGZAXPFYz4M1aqPtAZAVrIvh9Y3D5GqBlhXYtgHXV7cDdn3P3DHfPSElJqdNiRUREDnc1DjjuXgwMAy4FtpnZdDN71Mz6HeCxP3X3p919CfBHIBpY6u4vu/tS4H4gBehe3cZmFgdcC9zm7h+6+yLgt0AWcJ27l7j7BgI9ItvdfYO77zrAWn+q94Ergq+vAN6rsHxosCeqLdCRQOgTERGRn2C/Bhm7+9tAc2Aw8DHwM+BrM7v9AI79fYX95hLoaZlXYX1W8Hsq1WsPRAJfVthPCTAd6HoA9dQKM3s9WMNRZrbGzIYDDwOnmtkS4NTge9x9AYHeq4XAJwSDWf1ULiIiEjr2d5Ax7p4PjA9+3WdmY4B7zOxRdy8EStl9bElkNbsqqrrrKsvKxqLsKYRZlXZV91Uv3P2iPawauIf2DwIP1l1FIiIiR57aeA7OQgJBqWysTDaB8S4ABMfQdK6F41S1FCgE+lc4VjhwfLAmEREROULVuAfHzBoD44CxBC4v7QAygD8QeIhdTrDpJOAqM3ufQNi5g+p7cH4Sd99pZs8AD5vZJiATuBFIA56u7eOJiIjI4WN/LlHlEnhmywgCz5aJBtYSuPPpgQrt/gS0ITCQNpfA5ZfmtVBrdW4Lfn8BSAK+A04ve+aMiIiIHJnMvd6Gq8gBysjI8FmzZtV3GSIiIvXOzGa7e0bV5SE/F5WIiIgceRRwREREJOQo4IiIiEjIUcARERGRkBPyASc4ceew+q5DREREDp6QDzgiIiJy5PnJAcfMomqjkJ9YQ4SZVZ0eQkRERI5Q+x1wzGyymT0TnEk8G/jSzLqa2YdmtsPMNprZ62bWNNi+i5l5hfexZlZoZh9X2OfVwYkoy94/bGY/mtkuM1thZn8JTvlQtv4eM5tvZsPMbBlQAMSZWYdgffnB7c/8CT8bEREROUwdaA/OpQQmuzwRuB6YAswH+gKDgHjgfTMLc/dFBGYGHxDc9gRgO9DfzMqepDwAmFxh/zuBq4AuwO+AoQSmfKioLXAxcD7Qk8C8VO8Ez+n44Pb3EHjisoiIiBxBDjTgZLr7ze7+A3AGMNfdb3P3Re7+PXA50IfAXFUAXwCnBF8PAP4NbA62ATiZCgHH3e939y/dfYW7fwQ8BFSdpTsKuMzdv3X3+cH9dgUudffv3P1L4AYOYMZ0ERERObwd6If/7AqvewMnmVluNe3aAzMJhJcbgssGAI8DscCA4ESZ6VQIOGZ2XrB9BwK9QeHBr4rWuHtWhfddgLXuvqrCshlA6X6cl4iIiISAAw04Oyu8DgM+BG6ppl1ZAJkMPG1mHQn06kwG4gj0ymwClrr7WgAzOw54A7iXwOzg24CzgEf3UgMELpmJiIiI1Mrlm2+BC4CV7l5UXQN3X2RmWQTG0Sx1941m9jnwdwIBZnKF5icQ6Im5v2yBmbWuQR0LgXQza+nuq4PL+qJb4UVERI44tfHh/xSQCLxpZv3MrJ2ZDTKz58wsoUK7LwgMTv4cwN1XANnAECoHnMUEgsolwX1dy+7jb6ozAfgBeNnMepnZ8cBjQPFPOz0RERE53PzkgOPu6wj0upQCnwALCISeguBXmc8JjKOZXGHZ5KrL3P0D4BFgNPA9cCpwdw3qKAXOJXBOM4CXgQeq1CAiIiJHAHP3+q5B9lNGRobPmjWrvssQERGpd2Y2290zqi7X+BQREREJOQo4IiIiEnIUcERERCTkKOCIiIhIyFHAERERkZBTqwHHzP5rZi/Wwn48OF2DiIiIyH47VCeibAZsre8iRERE5PB0SAUcM4ty90J331DftYiIiMjh64AvUZlZrJm9aGa5ZpZlZrdXWb/CzG6psmyymf29Spt7zGysmW0D/hVcXn6JyszaBN//PzMbb2Z5ZrbQzE6tsu9fmdmPZpZvZlPMbGhwuzYHeo4iIiJyePopY3AeJTCNwv8DBgLHACcdwH5uIjCHVAZw+17aPQg8AfQEvgHeMLN4ADNrBfyHwKzmPYPt/nIAtYiIiEgIOKBLVMFgMRy4yt0/DS67ElhzALv7wt1rEkYeC85TRbC36HKgFzANuBZYDtzsgbknfjSzTgRCkYiIiBxhDrQHpz0QBUwvW+DuucC8A9hXTSdV+r7C63XB76nB752Bb7zyxFozDqAWERERCQEHGnCsBm1Kq2kXWU27nTU8ZlHZiwpBpqx+AzRrqIiIiAAHHnCWEggcx5UtMLM4oHuFNtkEbvcuW9+AQE9LXVgE9KmyrG8dHUtEREQOcQcUcIKXo/4J/NnMTjWzbsBYILxCs0nAJWY2oML66npwasM/gPZm9qiZHWVmQ4DflJVbR8cUERGRCib9kMUzk5exYlPlizPb84ooLT24H8c/5Tk4twBxwDtAHvBk8H2ZPwFtgPeAXAIDfpv/hOPtkbuvNLP/B/wN+D2Bu6zuJRCq8uvimCIiIkeqklInt6CYiYuy2JpXREJ0BB98v46pSzYB8OdPfuDnnVM5tlUS367axqQfNp
LWMJohx7bgttPr6mJOZVZ5XG7oMLMRwH1AI3cvre96alNGRobPmlXTsdkiIiL7VlrqrNySx7Slm1iatYMteUVsyyukcVwU6Y1iANiYU8B3q7exdGPubts3bdiA4f3bcnr3pvzn27W8+FUmW/OKSE2I5txj08nM3kmDyHCeuOiYWq3bzGa7e8Zuy0Ml4JjZdQR6brIJjA16EviXu4+o18LqgAKOiIjUBndn4qKNvPHNaqYtzSa/KNAfkBAdQeP4KJJio9iYk8+GnMDFkCbx0XRu1pBeLRKJjgznuHbJtG4cx9adhbRPiScs7H/3FhWVlFJS6jSIDK90PLOa3KdUc3sKOIfUVA0/UQcCDwpsTOB5PP8g0IMjIiIiVSzPzuUP//6eWSu3ktYwmgsyWtKlWUP6tk2mXZO4/QoiTeKjd1sWGR5GZHjlZbUdbvYmZAKOu98I3FjfdYiIiByKNmzP518zVpKVk09xifPZwiwiw40/DTma83u3ICL8p0xucOgJmYAjIiIi1Zu9ciu/fXU2W3YW0iQ+isjwMI5plcTD/68H6Ukx9V1enTgoASc44WUm0Mfd62TwSHByznHufvD6v0RERKqxPDuXd+esY/zCLAqLS2jRKJaBXVIZ2qcVURF121NSXFJa3hvj7rw1azV3vbuAZkkN+NevT6RTWkKdHv9QcbB6cFYTeOjfpoN0PBERkQN2IINhv121lXvfX8Cm3ELWbtuFGfRrm0xyXCw/bNjB3e8t4LUZq0hPiiElIZp7zupWaQDuvhQUlzBl8SZmLN/MrqIShvdvS7uU+PL1paXOM18sY/SExZzXuwW9Wyfz79mr+Xr5Fk7s2IQnLzqGpNio/Tqnw9lBCTjuXgJsOBjHEhER2V85+UW8OXM1yzflsmVnIV8szuaotASuO6UDv+jWdI/bZW7ayZTF2WzKLWDstEySYqPo06YRVzRvzVk902ma2KC87YSFWdzzwQKWb9rJpB83smpLHmOH9dkt5KzYtJNXvl5JQXEJpQ7b8gpZujGXzE07KSpxGkSG4Q5TlmTztwt68cP6HLJyCpj4w0YWrc/hmFZJvDVrDa/PXE3juCjuP7sbF/VtFXJjbPalRreJWyDG3krg6cDNCUzV8Gd3f7XC5adLgN8BGcAK4Hp3/yy4fVmbPu4+y8wigb8C5xG462kjgVu6RwbbNwJGA2cBDYAvgRHuvqBCTZcD9wMpBJ6a/DHw94qXqMxsMHAP0A1YD7wG3OvuhfvzQzrU6DZxEZGfxt2ZtXIr73y3lunLNrN26y4KS0ppEh9FdEQ4/Ts04ZsVW1i+aSend2vKvLXbadEohusHdqSk1Jm2dBMTFmWxPPt/T+zt0qwhL17Zh7SGDfZy5IB3vlvDjW/O5aK+LfnTkB5syyvk21VbWbk5j9ETlrCrqISE6AjMID46gg6p8bRPjee4to3p37EJC9blcOGz0yko/t9j3nq1TOLivq04P6MF67bns6uwmLZN4gkPC+2RGz/1NvEHCISR64AfgeOB581sK1AWOv4C3ERg1u/rgPfMrIO7r61mf9cD5wJDCYShFsBRFda/GHx/NrCVwFOQPzGzTu6+y8z6BdvcBYwDTgEeqnLCpwH/AkYAU4BWBG4djybwFGYRETmCTF+2mffnruPHDTms357P+u35xESGc2LHJvyiaxqDezane3piefuiklL+/PEPjJmWSb+2ySzO2sElY2YAEBluHNeuMZcf15qfd06jRaOYSs+A2Zdzj2nBkqxcnp68jFkrtrI0O5ey/obOTRN4/vIMWibH7nH7Xi2TeOmqvqzanMcJHZuQlhBdqYcmVAcO74999uAEJ9HcBPzC3adWWD4a6ESg1yYTuNPdHwyuCwN+AN5y9zur6cF5gkCvyiCvUoCZdQQWAye7+5TgskRgFXCzu48xs9eAFHc/tcJ2Y4DhZT04ZjYFGO/u91docw7wKpBQ9biHE/XgiIjU3K7CEv42/keen5pJXFQ4PVsm0SguioGdUzmtW1Piovf+//rbdxWRGBNZ3svSIDKco9MTSWjw06ZXLC4p5fo3vmPrziKOb9+Yfm2TadU4ltSEBiHf61KbfkoPTlcCl4k+MbOKoSCSQO9LmellL9y91MxmBLetzovAeGCxmX0GfAR8HJxSoQtQWmV/281sXoX9dQE+qLLP6cDwCu97A33N7LYKy8KAGKApgUtWIiISgvKLSnjq86V8u2ors1duJb+olCuOb82oX3bZr4G9AIkxgSCTFBvFzzun1VqNEeFhPH1J71rbn1RWk4BT1uc1mEAvSkVFwH7HTHf/Ntirczrwc+AlYK6ZnbqP/ZUFrJocM4zAhJvjqlmXXeNiRUTksLI9r4jfvDqLGZlbODo9kaF9WnFG96b0a9e4vkuTg6gmAWchUAC0dvdJVVcGgwoE5n+aFFxmQF/g33vaqbvvIBA+xpnZi8DXBKZbWEggnBxPYOwMZtYQOBp4oUJNx1XZZdX33wKd3X3pvk9RREQONyWlztqtu1ictYMfs3awdGMua7ftYs6qbZS4M/rCXpzdK72+y5R6ss+A4+47zOxR4NFgcJkCxBMIFKXAZ8Gm15rZYmAegXE5rYFnqtunmd1E4BLRHAK9QBcDOcAad88zs/eAZ83sGmAbgUHGOQTuggJ4AvjKzEYRCFEDCAxarug+4L9mthJ4CygGugN93f0P+zpvERE5dOQXlbAkK5fZK7cwe9U2flifw8rNeRSW/O8uouaJDWiWFMPlx7fm3GPT6dY8cS97lFBX07uo7gKyCNx99AyBsDGHwJ1TZUYSuIvqWGAlcK67r9nD/nYQuO28I4HLTt8BZ7h7XnD9lQRuE3+f/90mfrq77wJw96/NbDiBS1B3A5MJ3A7+ZNkB3P1TM/tVsPZbCAScxQTG/xxSzOx04HEgHBjj7g/Xc0kiIvUir7AYd/hmxRbmrdnO8k07+XbVVlZtySu/y6hZYgO6NU/k551Tadskjo5pCXRKi//Jg34ltNToOTh73cFBmIYhlJlZOIHgdSqBWdC/AS5y94V72kZ3UYnI4crdWbAuhylLsnGH6IgwlmTlsnlnIT9syGHN1l2V2jdt2ICeLRPp2iyRdilx9G7diOa6BVoq+KnPwZG60xdY6u7LAczsDQLP/9ljwBERqWubcwuYsiSb5okx9GyZVOnOI3dn3trtvPvdOhJjIumYFk9yXBRJsZGs355P1vZ8YqMjyC8qoUFkOPmFJbzz3Vp2FZWwfvsusnIKKh2rcVwUKQnRdGvekIv6tsIs8NC8fm2TiY3Sx5QcGP3LqX/pBObqKrMG6Fe1UXA80jUArVq1OjiViUhI2piTz8L1OWTl5JO9o4CiEmdnQTGrtuQxb+12GkSGs27brvKn5CZER9CvXTJFJc6uwhKWb9rJptwCoiLCKCoppSYXAtqlxJGeFEN6o2QGdEphwFGpxEaFs6uohCbx0XV8xnIk+skBx91XcAC3iku56n52u/25cPfngOcgcImqrosSkUOTu/Ppgiwen7iEo9LiufxnbWifEs/CdTms3baL0lKnTZM4jm2VVO3cQ
9OWbOKaV2aRV1hSaXmDyDCaJcbQt20yxSXOCR0ac17vlmzOLeCjeRuYv3Y7DSLDaBAZzkkdm9CnbTK/6tEMA1ZtyWNbXhFb8wppEh9Ni0Yx5BeVEB0RCDD5RSUcnZ5Y7eSV+3rInsiB0r+s+rcGaFnhfQtgXT3VIiL1rLTU2ZpXSJgZ05ZuYldhCYUlpYFboTcEbofelldEuyZxfLYwi3fnVP/nIj0phpM6pdCwQQSpDRuwJGsHi9bnsHB9Du1T4rn3rG40T4ohtWE0UeFhe505e2CXvT/cTncryaFIAaf+fQN0NLO2wFoC83NdXL8liUh9KC4p5YoXZvLl0s27rYuLCqdT0wTO6N6UXi2TGHJsC3Lzi/lq2WZWbN5J12YNaZ8SD8D3a7fx6tcrGb8wi5z8IgqLS2nYIIIeLZK49LjWjBjYkaTYqIN9eiIHlQJOPXP3YjP7PfApgdvEx1acNV1EjhyPT1zCl0s38+v+bUmOj+K4do1JTYjGzGie2GC3XpZGcVH8qkez3fbTqnEsZ/ZoDgR6hLbkFdIoNkrzG8kRRQHnEODuHxGYj0tEjjC7Ckv4btVWXpu5iv9+v57ze7fgzjP3NI3f/gsLMw3ilSOSAo6ISD3ZuCOf8/8xnZWb84iKCOPGQZ24dkD7+i5LJCQo4IiIHGQzlm/m0wVZTFmSTfaOAp6+5FhOaN+ExFg9iVektijgiIjUsYc//oGvlm3ilKNSWZy1g4/nb6BBZBipCQ34x6W9OalTSn2XKBJyFHBEROpQ5qadPD91OUkxkTw+cQlN4qP47cntGTGwIzFR4fvegYgcEAUcEZE64O6s3rKLRz79gchw45MbTqJBZBjx0RF7feaMiNQOBRwRkVq2fVcRN781lwmLsgD47cntSUnQnUwiB5MCjohIDRQUl7Bo/Q4WrNvOovU5bN1ZxJptu1ienUtRSSmlpWAWmB17R0Ex4WbcdGonjmmVxAntm9R3+SJHHAUcEZG9KCwuZfryzYx8+3vWb88HoGGDCJokRJOW0IAhx6TTIDIcM6PUnYKiEpJioxjYJZUeLZLquXqRI5cCjohIBV8t3cTr36xm9ZY81m3bRXZuAe7QPiWOpy4+lp4tE0lPitE4GpFDnAKOiBzRVm3O44sl2Xy/ehtz12xjcVYuTeKj6dw0gQFHpdA8KYaWjWL55dHNdNeTyGFEAUdEQp67szgrl8xNuazaksfKzXmVvgMkx0XRo0UiF/VtxUV9W9EgUmFG5HCmgCMiIamopJTpyzazq6iE12as4ovF2eXrkmIjaZUcS48WiVx+fGtO7ZpGq+RYXXYSCSEKOCISEnYVljD5x428NnMV2TsK2JRbyKbcAgBio8K5/ZedOb5dE1o1jiUxRlMiiIQ6BRwROSxs2J7PP75YxviFWaQ1jCYyPIwNOfmc0yudzE07+WTBBgqLS0lPiqFr84a0T43nnF7pNEtsQLPEBjTWjNoiRxQFHBE55M1fu53hL33D1p1FnNQphW15heQXl9I8MYbHJy4hPjqCi/q0ZGCXNH7WvjER4WH1XbKI1DMFHBE5ZOUXlTBm6nKemLiUJvFRfPB//TmqaUKlNqu35JEYG0nDBrrsJCL/o4AjIoekj+at5573F7BxRwG/OroZ957djSbVXGZqmRxbD9WJyKFOAUdEDilZOfm8891a/vzJD/RIT2T00F78TFMdiMh+UsARkUNCcUkpj3z6I89OWQ7AwM6pPHXJsXoejYgcEAUcEalX2TsKeG/OWsbNWsOPWTu4qG9LLurbiu7NEwkL03NpROTAKOCISL0ZN2s1I/8zj5JSp2eLRB4f2ouze6XXd1kiEgIUcESkXqzbtot7P1hI71aNePDc7nRMS9j3RiIiNaSAIyIHVdkD+6Yszqak1PnrBT11J5SI1DoFHBE5aCYuyuLGN+dQUFxK9/REbjntKIUbEakTCjgiUufyi0p4b85abn9nPl2bNeSJi46hbZO4+i5LREKYAo6I1Intu4q4Zdxcpi/bTEFxCUUlTt+2yYwd1of4aP3pEZG6pb8yIlJr3J25a7bz4ffr+GjeBjbuyOf8jJYkxkSS0boRJ3ZMISpC80SJSN1TwBGRn+y7VVt54MNFrNmaR1ZOAVHhYRzTKonRQ3vRp01yfZcnIkcgBRwR+Uk+XbCB/3v9O1Lio+nfIYVerZI4p1dzEjT5pYjUIwUcEdlvhcWlzFqxhU07C7nlrbl0bd6Qf16RQeNqJsMUEakPCjgisl9KSp3rX/+OTxZsAKBTWjwvXdmXxFj12IjIoUMBR0RqzN255/0FfLJgAzcO6kTnZgn0a5uscCMihxwFHBGpsb9PWsorX6/kNye3Y8SgjvVdjojIHul+TRGpkTdmruKv4xcz5Jh0bjutc32XIyKyV+rBEZE9cnemL9vMpws28MrXKzmpUwp/Pq8HYWFW36WJiOyVAo6I7Ka01Pls4Qae+nwZ89Zup0FkGKd3b8oj5/UkMlwdvyJy6FPAETnCuTsL1uXw8fz1zFubw1Fp8Xz+YzZLN+bSunEsDw85mnOOSadBZHh9lyoiUmMKOCJHIHdnzuptfDJ/Ax/NX8/qLbsIDzPaNYlj2pJsOqUl8MRFx/DL7k2JUI+NiByGFHBEjhA7C4qZv3Y7i7N2MPbLFWRu2klkuHFChyb83ykdObVrGo3ioigoLiEqPAwzjbMRkcOXAo5IiNu6s5CXpq/ghS9XsH1XEQA9WiTy6Pk9ObVL2m7PsImO0KUoETn8KeCIhKjVW/IYM3U5b81aw66iEgZ1SePifi1p0SiWjqnx6qERkZCmgCNymMsvKuHHDTvonp5IeJgxf+12np2ynA+/X0d4mHFOr3SuOakdHdMS6rtUEZGDRgFH5DC1La+QZ6cs519fryQnv5hjWiURFxXBtKWbiI+O4OoT23HlCW1pmtigvksVETnoFHBEDjObcgt4bcYqnp+ynNzCYn7ZvRnHtEriyUlLiY4IY+QZnbm4XysaNtD8UCJy5FLAETnE5ReVsGh9Dtt3FfGvGauY9MNGSkqdX3RN4+ZfHMVRTQOXnq48oS3urtu6RURQwBE5pM1euZU//Hsuy7J3ApAUG8nVJ7bj3GPSy4NNmfAwAzRwWEQEFHBE6p27MzNzC3PXbGPIsS3YsrOQ8Quz+GJxNjMzt9AssQGPXdiT5Lho+rRpRGyU/rMVEdkX/aUUqYHC4lIen7iY4lJncI/mvPDlCi7q25KMNskHvM/8ohL+f3v3HSdVdf5x/HO278LSe+8dqSpNBcWuoFgwdsVYYon+UtSYGBMTW6IxxhK7ElvU2BuKDZWOIL33tiwsbO97fn88s+5so7m7AzPf9+s1r71z7p07Zy6XnWef095bsJXnp69n2bYMAB78dCX5RSUAdGlej9sD/WmS1Z9GROSAKMAR2YddWflcNXku8zfuwTl48uu1AGzancPr1wwHIKegiMJiz7wNaTz46UquOqYzZw9qV+X5vPe8MnsjD366krTsAnq2TOa+Cf3p364hL83cQLvGSUw8sj3N6sfX2WcUEQk3znsf6jrIARo6dKif
O3duqKsREban53Hxs7PYlJbDQ+cPpG3jRKYuTSG/qJinv1nHlJuP5ZtVqTw8dRVZ+UUAJMZGk1tYTK9WyazflU2Plsn86qSeHNu9GTPXpvHktDV8tSKV4V2acuMJ3Rjepakm3RMROUjOuXne+6GVyhXg1Bzn3HnAXUBv4Cjv/dygfbcDk4Bi4Cbv/ZRA+RDgBSAR+Aj4pd/HP4oCnLoxffVObnptPrkFxTx7+ZEM69L0x307s/IZfu/nxMdEk5VfxOiezRnepSnxMVGcM6Qd93y0nDWpWfRqlcy3q3ayNT2XY7s359OlKTRKiuX60d2YNKozUVEKbEREforqAhw1UdWsxcAE4MngQudcH+ACoC/QBpjqnOvhvS8GngCuBmZiAc4pwMd1WWkpz3vPC9PX85cPl9G5WT1e+flgelSYBbhZ/XjGD2zLlCXb+ecFAxk3oE25LMy9E/r/uL0zK58Jj09n6rIUbh7bnWuP60pCrNZ7EhGpTQpwapD3fhlQVXPDeOA1730+sM45txo4yjm3HmjgvZ8ReN1k4CwU4IRMQVEJf3xvMa/O3sSJfVry8MSB1Iuv+r/JPWf35+7x/UiM23uw0qx+PG/9YgS7sgoqDe0WEZHaoQCnbrTFMjSlNgfKCgPbFcslBNKyC7j2pXnMXpfG9WO68qsTe+61CSkuZv8n1GtWP16dhkVE6pACnAPknJsKtKpi1x3e+3ere1kVZX4v5VW979VYUxYdGbaM7AAAIABJREFUOnTYj5rKvmzYlU3bRonEREcxZ30aN7+2gNSsfP55wUDGD1ScKSJyOFOAc4C892MP4mWbgfZBz9sBWwPl7aoor+p9nwKeAutkfBB1kICCohLu+WgZL0xfT7vGibRplMic9Wm0b5zE69cMZ2D7RqGuooiI/EQKcOrGe8ArzrmHsE7G3YHZ3vti51ymc24YMAu4FPhXCOsZ9nILirn6P3P5ZtVOzhvSjg1pOeQUFHHjmG5cfVxX6lfT30ZERA4v+m1eg5xzZ2MBSnPgQ+fcAu/9yd77Jc6514GlQBFwfWAEFcB1lA0T/xh1MK41q3dkcuv/FvH9xt08cM4RnH9k+32/SEREDkuaB+cwpHlw9l9qZj6/ffMHvlyRCkCDhBjunXAEpx/ROsQ1ExGRmqB5cKRquXvg2RNhxE0w+JJQ16ZGfbd6Jzf/dwEZuYVcNaozTerHcf5QLYEgIhIJFOBEurj6sHMVpG8KdU1qzM6sfB76bCWvzt5I1+b1+c+ko+jVqkGoqyUiInVIAU6ki46BxEaQsyvUNakRizanc+WLc9idXcBlwzvx21N6khSn21xEJNLoN79AUjPI3hnqWvxkCzfv4YKnZtI4KY4PbhqlrI2ISARTgCOQ1PSwz+Bs3ZPLVS/OpUm9ON66bgQtGiSEukoiIhJC+z/XvISves0O6wBnzvo0xj/2HTkFxTx72ZEKbkRERBkcAZKawOY5oa7FAVmZksmCjXvYlV3Ag5+uoF3jRP4z6SgtZikiIoACHAHrg5OzC7yHyiuhH3Ky8ou44vk5bNmTC8DY3i148PyBNEyMDXHNRETkUKEAR6wPTkkR5KXbiKpD3H0fL2Nrei5PXjKEZvXjGdS+0V5X/RYRkcijAEesDw5YFucQDnCKiku456PlvDRzI5NGdebkvlUt6i4iIqJOxgLWRAWH/FDxBz9byXPfrePKkZ25/dReoa6OiIgcwpTBEetkDIfsSCrvPbPWpfHk12s4f2g77jyzT6irJCIihzgFOBLURLUTCrIhrl5o6xNk464cJj41g23pebRpmMDvz1BwIyIi+6YmKrFOxgBL3oF728Pu9Qd9qppenf6vHy0lPbeQP43ryxvXjaBBgkZKiYjIvinAEcvYxCTCms/BF0PG1oM+1c8nz+P2txZWuW93dgGFxSUA5BUWk1NQtNeA6ONF25iyJIXrx3TjshGdaNso8aDrJSIikUVNVGKSmkLGZtsuzD2oU6RlF/DF8hTaN0mqtC+vsJiTHp5GqwYJXHR0B/743hLyi0o4unMTnrpkKA2Tymdm7nx3MZNnbKBXq2Qmjep8UPUREZHIpQyOmHpNy7YPMsD5cvkOSjxsTMsht6AYgJSMPFamZPLegq2kZuazZGs6t721iP5tG3Lj8d2Yv3EPE5+aQXpuYbnzTJ6xgUuGdeSd60eSEBv9kz6aiIhEHmVwxCT99ABn6rIUwCZEXr0ji96tk7nk2Vms35lDs/px9GqVzK2n9uLbVTv5zck9SYiN5qjOTbji+Tnc8t8FPHPpUIpKPHd/sJQuzevxhzP6EBejGFxERA6cAhwx/c6B+q3gh1egaP8DnOtf/p4j2jXkshGdmLYylWFdmjBzbRorUjJZsjWdlSlZtGqQwNb0PG44vjtjerZgTM8WP77+mO7N+eOZffjDu0t4eOpKoqIca3dm8/wVRyq4ERGRg6YAR8ygi6HHqRbg7GcGZ0dGHh8u2saHi7bx7oKtZBcUc+Px3fl+wxwWbd7Dx4u3M7hDI567/Eg+WrSdc4e0q/I8Fw/ryKIt6TzyxWqioxxnD2pbLggSERE5UApwpExsYJRSYc5+HT5zXRoAbRslsmx7Bg+ccwQjuzWja4v6vDp7EwXFJTzys0E0SorjwqM7VHse5xx/Ht+PlSlZ7MjI464z+/7kjyIiIpFNAY6UiUmwn4V5Ve4uKCph3KPfcs1xXTh7UDtmrt1FcnwMH940im3pefRu3QCAHi3rs2xbBoM7NOLozk32660TYqN549rh5BUWk6y5bkRE5CdSJwcpExVlQU41GZyVKZks357JXz5YRmZeIbPW7mJop8Y0Sor7MbgB6NEyGYBrj+uKc/u/yndsdJSCGxERqRHK4Eh5sYnV9sFZsjUdgF3ZBfzmjYWsSc3mvKHtKx13/tD2NEiMZWzvlrVaVRERkeoogyPlxSZVO4pq8ZYM6sfHcPGwDnyyZDsAx3ZvXum45snxXDKsI1FR+5+9ERERqUnK4Eh5MQl7zeD0ad2Au8f34zcn9SK3sJhWDRPquIIiIiL7pgyOlBebVGWAU1ziWbYtkz5tGuCco2FSrIIbERE5ZCnAkfKq6YOzbmc2uYXF9G3ToIoXiYiI7IX38MN/YdaTdfaWaqKS8mKrbqIq7WDct03Duq6RiIgcjrYvgrS1FtzMeQbWfwMdR8GRP7dRu7VMAY6UF5sEuXsqFa/YnklMlKNbi/ohqJSIiBxWNs+F50+F4gJ7Xr8lnPZ3GHplnQQ3oABHKopNhKLKE/2tTMmkc7N6Wh9KRESqt24aLP8QlrwNya3h7CfBF0P7YRBdtyGHAhwpL6bqPjgrU7Lo31bNUyIiEWfKHbDqUzj3OUhoWPYolboCpv8Ldq2BjdMhth407wHjHoVW/UJWbQU4Ul5sYqWZjHMLitm0O4dzBle9WKaIiISptV/BjEchKhb+PcrKouOh21hoM8jmTZv1JLgoaNIFTrgThl1v/TlDTAGOlBebWGktqtU7svDe1pgSEZFDXFEBfHIbNGwLI2+p3OclKxWioiGxMVRcTqcgp2xdwrVfwDvXQ9N
ucPH/4Pv/QHIr2LkKVk2BFR/acR1HwjnPQIM2tf/ZDoACHCmvNIPj/Y83/oqUTAC6B9aYEhGR/VBSDKs+g6Sm0PoIiImvfMzit6y5p+vx5YON/Cz45kFI3wz1W0DvcdD+qMoBSUXFhfC/K2HZ+/Z85RSIqw+NOti+Dd/B7nW2r0lXC0zaDLJRTjMes31RMYCDkkILbs57ARp3ghP+EPRGD1gwFB1X531r9tehWSsJndhEwENR/o8pxlUpmcRFR9GpaVJo6yYiEireWz+TdV/D6Q9B447VH7vkHctyrJ4Km2ZaWWwSdBhmHW8TG1vQkrYW5r1g+7seD2c9AfWaW1Dy2Z2QtsYCk4xt1kzUd4JlXjbPha5jLAiq1wwGX2rZk5gEePsaC25OuQ9KimD+S/b7fMtccNHQcQQcOcmalGY8Ds+cAPVaQNZ26DACBl4Y+CO3xIKb/udX39wUd2h/JyjAkfJiAzdsUe6PN/XKlEy6NK9HTLRGUIlIiHgPBdkQHQtzn7Nhx73OgM//ZE0kvU4rf+y8F+DLv8LpD0Kf8fv/PulbYPkHMOACa67P3W0ZmA9utvKoGHjqOGjeG1r0gqOvgyadrV7ew9Q/wnf/tHMlNLSOtomNrC/LxlnWITcnrWzNv6Ovs2Dp8z/D48MAB7lp0LA9XPoedD4G8jKsn8tX91rGpONwmP+y1Ss7FWY+Xv4znHg3DLvOtkfcWHZNoHwGaMDPYNa/LdDqMNyGcO8rQ3QYUYAj5ZW2vRbm2l8ZwMa0HHqoeUpEQiV3N7x5Jaz5wppbCrIsA9FxpE0eN+NR6HkatBlsTSnz/2OZlphE+OAWC0rWfGGvGXihNckU5sGaz2048/ZFlr1uPwyWvAVZKRYcFWRbFsRFW+bkpL9Cj1Pgsz9AXroFGXOfA5wFCwA/vAJDJ8HJ99hromOtvPeZ5T9TXjrkZ0LDwOCNLqNh6p8saOl+ogVvpU0/CQ3guN9A37MgvgEktyzrRpC53TI++ZlW3+Y9oO/Zla9hVYFLUhMY87uf/u9ziFKAI+WVZnCChorvyMjnmCpWDRcRqTXe25xcW76H926APZtsdE5eOvQ6Hb6+34Kbkb+0AGTRm7DiI3ttYmObVK7DcHhqNLx2oQVGvgTmvQg9ToI1X0FBph3bYQTkZ1g/lOTWcN6LFvg0am/9VFKWWHajZR87/89etZ8Z26yjbeoKe60vgdG/g+N+u+9MSMWh1i16w4Wv7f01zbqXbZeeP7kVDLlsf69qRFGAI+XFJtrPQICTU1BEZn4RLRpU0TlORA4vqz6D7yfDSXdbpuNQlL3L+p+s/swyKWD9UC5735pmSrU/2rIy/c6xUUJj/2h9UtLWWpNPafAw4UnI3mn9VApy4K2fw4YZlg3pexZ0Pq4sy5KfaVmf6Bjbty8NWsORV9n2gAsgawf0PLXmroX8JApwpLxAgPPmrFWcO74fOzLyAWiRHPo5DUTkIOXutsnaFrxsz1MWw4SnrTmkKM86lTbrCfEhngpiy/fw34utX0mf8dC8l2VYjphYuW71msIR55Uvi69vo5WC9TunbDs2ES55q/r3j/8JTfFthxz8a6VWKMCR8gIBzuyVmzkX2JFpAU5LZXBEDk87V8PkcdZX45hfW1+PV8630TPBGneCK6dYk0dty9pho4iCrfkCXrvYgq5Jn0GbgbVfDwlrCnCknMKoBGKB1N3p7MjIIyXDJv1TBkfkMFJSbB1vdyyHtV/a/CdXfVaWZbh+FmxbaP1OSif3/PBXtjhi3wnQ7QRrAoqKrtl6eQ9T74LvHoZuJ1oH19YDYdEb8O710LwnXPSmNf2I/EQKcKScnJJYGgKJ5PPt6p3szikElMEROSzs2WQdbZe+Bxu+taHU8clw/mRo2bfsuEYd7BGsYTv49A749h/wzd9tGPT5k21+lbh6NTN8+Mt7AsHNWNg0G54eY+sWFWbbiKgLXrEh1SI1QAGOlJPt4wIBTgHfrtpJ8+R44mKiaJgYG+qqSSTL3WOZhqpmgq0N+Vkw7QHYvQFa9IFjf13z2QywjMbi/8HW+fbcOWjU0YYCZ2yxCdo2z7YMzMn3VO5fUqq4yOr77cNQnG/NPOP+ZR1r91fnY+CaadbRdvlHMOV2eOxI29d2qM1iW1JsmZ2D6auzeZ4FTgMuhLMet+zRwtdhxzJoN9QyR4fA+kUSPhTgSDmZxRbINIwt4v3VOxnZtSktkuNxYTT5kxxG1n4FH/yfzeia0AgGXQxj7rC5SfZssOG7n99to2ha9LEAKLkN9D/XvjSL8uHta21kzdArreNqYiNIW2eTox11DTTrZu+VlQqzn4IdS+2xez007gxL34Fdq22W2Zqckr6oAN6/CX541eafctHgi63Tb7Cm3Wyit6ePtwxH52Osfi37WafcbQtg+Yc2ZLr/edbs07jzwWdc4pNhwEToNBLmPm+B3eynYHJgsrz4hjakGW8deAddsu8ZbXetgXeuhfqt4NT7rG4JDeGonx9cHUX2g/OlsxvKYWPo0KF+7ty5tXLueSvWM+TVAbzd4hfcsnEUzZPj6dAkif9dN6JW3k8OcTlpMP0RaHUE9JtQM+dc9w28MtE6kY64sfphtXOfhw//D5p2ty/clCW2bk/TbjZ8OD/DjmszyGZ93b3OApr0zZYZOe1vNmnbmi9sleO0tbYicpuBkLoS8tMhsYlNkb9pFqRvAhw062Ff2Cf+GTofC9P+Dl/cDf3OhU6jbC2fk++F+kFzQ5WU2ORz+ZmWQWnc2YZkL3jJRgKlb7ahyEMut/p6b1PqL/wvjL4djv2tBWne27GFudZkVLrOT06adRTes8malrYvtMnrSors/ePq2+cdeGHN/BtVlJVqU/1HxcLC12z+l/wMq0enY+CiN8qmmAD7HEvftaHeqSvtuOg4mPgf6+QsUoOcc/O890MrlSvAOfzUZoDz5dLNjHm9Lyv7/pKT5h0NwKn9WvHExRoCWU5Rvs10mrHVvuxmPWGzqPY+I9Q1qxkZ22DO0zZ5WV66ZRcufB26j6187I5l9qXcbWz5rEFeujW1BJdlpsCTx1imJSrWMjO9x8GejZY5iI61zEbLPpY16H4SnPtc2fDdlVOsM2qHYdakEd/A1uQJbj7KSoUXz4DU5ZYZOfV+GHyZrd+z/APYPMe+jEfcCB/fBtk7bC6UFn1s7pPgydRKffsP6xwLgLMgIybegqboOBtmHaxp90DWqWHZVP+FedbXZOJL1iT1zYMw5vc2Q+3+2LUGnjzOto//vWWYWvWHHidboFZxxei6sPB1eOtqu+8nvlRW/s2DtvRAQiOrY/OeNoJLnYelFijACSO1GeC8u2ALp73dn6yhv+DclWNZk5rN5SM6cde4vvt+caRIXWmr9W5fZF/8LspW3Y2Oh0lT7C/06nhvf83Wb2nDcYuLYN7zNoP0oIvq7jOUKim2L87StXS2LbSmm0VvWnag1+kw8mab7n73erjyE2jVr+yzfH0/TPubHdtltAUqzXtb88anf7CmonGPWgAw7W82yVxxIfz8C2jaFT76jc0Y23qAlRcX2LXcviiQGX
jz4PplZO+EddNsNFDwbLFVKZ3yfl8Wvm5Zk4bt4J3rbMbbtkPss8fVtyAsPtmC3wUvWdZo/GN2/rh6lvGYfJY1LxXlWf+YMx85sKak1JUWWO1toce69s1Dth7UJW9bNmz5R/Daz6y57Owna6fvkkgQBThhpDYDnJdnbWDcR0cTPeQSHuByXpi+nt+c3JPrx3Srlfc7pBTlV9+JtajAvrRWTbUmj9hE6ygZk2AL4B15lf3FioNrvrY1Xiravsj6k2yeDXHJMPgSa65JWWT7e58JKUvtCyGunjVTjLkD+oyzv/7XfmlzlTTv9dNHtORnwYzHLEOTvcOaZeq3tH4csfWsr8uwa+1LGmwBwmcCGZozHrbgZMajtg5P//Pt9TMetb/WN822ZprmvSF1mQUCRfmQt8eyLiNvsoBmb/ZstPrUVafiurJ7gy0d0O5I609Tk316QqUoHx4daoHkxJft8zVqb3PqBDdbidSS6gKcMPjfJTUpK6+IPGJpVJLPqB7NeGH6elokh9mXTLD131kTQnQsvDnJViQeOgnmT7bgokEb2DDd/mLfvd7Wpek3AY7/Q1m6vesY+9m0Kzx3CvxvkmUeSv9yLSmxJqypd9msrCffa0N5Zz4OLfvDOc/ae8x91ppK4pMt41FUAK9fYv1fMrZCzk47X+PO1jnz6GsP7K/jkhJYPw0WvGrvn58BPU61fibzXrCOt2P/ZOvaBBZa/VHDtnDR6/DimfBK0OyxI2+GsXdZ4FPa1LJ9sX2eoVfC8vetA2xxIYy6ee/ZrWAVhzCHi8Yd4eZFlrELRZNSbYiJt/vmzSvgkYHWZDfhGQU3EnLK4NQg59zfgDOBAmANcIX3fk9g3+3AJKAYuMl7PyVQPgR4AUgEPgJ+6ffxj1KbGZwHP13BxO9Oo+3AsRSe+QSPfrmaK0d2olFSXK28X51JXWn9KMb8zhbGW/o2dBljgUtxgR3ToK0NzS3VYYSNWJn2Nxu6e9rfrcljb9mTeS/ayJgOw63vR/2W9h5rvoCep9vQ3XpNyxYSDP4SyMuwVYNLFeXbe2/7wTI6Ay6EzG3ww2uwcToMvMjOt68gJ30zLHgF5r9kI48SGlq2aMgVNtLoQBTmWvCSnWrXq9OompkfRQ5/66ZZRq/naXDE+aGujUQQNVHVAefcScAX3vsi59z9AN77W51zfYBXgaOANsBUoIf3vtg5Nxv4JTATC3Ae8d5/vLf3qc0A5673lnDZ9+fSue/RcN4LtfIetWLxW7Bxpi0iuHW+jWhp2t2CmvhkG62SlWLr7aRvKusU2rSbZWpSFsOoW2xOkh3LrPyjX9sxAy6E0x7Y/3Vq5r9sE6bl7raF+wBOuddG0NRUMPDlvfD1fdZRt2Vfyyotfc8yBKc+APWaWTPUF3+B2U/aKsedj4VBl1qHUP11LSJhQk1UdcB7/2nQ05nAuYHt8cBr3vt8YJ1zbjVwlHNuPdDAez8DwDk3GTgL2GuAU5uy8osodPGWPThcfHWf9YMB2Pq9zQ3ii+15TIJ1Ak1sYl/8n9xmfUIu/8ACgqFXWL+W0iHQQy4vO68vsQzM/qwqHGzQRTb0ecHL1lxzzK+geY+f+inLG30btOgFWxdYc9Nnd1p/me0LrZNnky42iqcoz5rcRtxoHYlFRCKEApzacyXw38B2WyzgKbU5UFYY2K5YXolz7mrgaoAOHWqvf0JWXhFFUXHWFHEo2jTbOuqOvtWaWdLWWYDT7xzLZHz+ZxvNM+BC67x71NUW3ETHQUycdYJt0NYyHfta/ffoaw6+nklNLKioLc5B37PtccKdsHOVDW9OXW4jlXatgS7H2f72R9VePUREDlEKcA6Qc24qUNVyu3d4798NHHMHUAS8XPqyKo73eymvXOj9U8BTYE1UB1jt/ZaVX0Rx1CGawVnzBbx8vg0jnvG4BTizn7Y+KCf9xTIzXU8IzGgbB0ysfI6OYThhYVS0ZXPAgrxT7w9tfUREDgEKcA6Q976Kmc7KOOcuA84ATgjqLLwZaB90WDtga6C8XRXlIZOZX0RJdDwUHWIZnJJimHKHja7peWrZSsnfT7YsRYM2dlybgaGtp4iIHBLCZJziocE5dwpwKzDOex88tel7wAXOuXjnXGegOzDbe78NyHTODXO22NOlwLt1XvEgWXmF+OiEQyuDU1JsozN2LLUZXIdcYeUvnGadiYdfH9r6iYjIIUcZnJr1KBAPfBZYnHKm9/5a7/0S59zrwFKs6ep670t7wXIdZcPEPyaEHYzBmqh8YkLlBf9CoSAbpv8LZj5hk8S1GQR9zrL5Q1oPsOHTZ/xj/+dWERGRiKEApwZ576ud7td7/1fgr1WUzwX61Wa9DkRWXhE0iIf8EAQ4xUU2cikmMOfOh7+ylZZ7nWGBTY+TyiZHO/MRm+22z7i6r6eIiBzyFODIj0pKPNkFxbiYRMgOQYDz1lW2IOKVn0BBjs1dM/wGOLlSXGh9bdTfRkREqqEAR36UXVAEQFRcCJqoNs6yRRfBlgOIT7ZJ8kbdUrf1EBGRsKBOxvKjrHwLcKJjAwFOXc1y7b2tRlyvBVz4hq3OnbENxtxuM/KKiIgcIGVw5EdZeYEAJz7J+sIUF5b1h/kp0jfDp3+wmYTrN6+8f9sC2PAdnHKf9bPpcdJPf08REYloyuBEuN3ZBfzu7UVMX72TzEAGJyYusE7R3pqp0tbapHs5aft+k+n/giVvwXcPV73/+8nWHDXgZwdYexERkaopwIlwSfHRvDN/Cx8t3kZGbiEAsQlJtjN4LhzvbfHIUt8+DKumwPpv9/4G+Zm2+KSLhjnPQlZq+f0F2bDwDVvvKbFRDXwiERERBTgRLz4mmhFdm/HVilRmrk0jOsrRrFED2xk8m/HC1+GhPpaxyUmzEU5gK2/vzQ+vQUEmjH/MMkIzHi2/f8Ertn/wpTX3oUREJOIpwBFG92zO5t25vDJrA6O6NaNevWTbEZzB2TgdCnMgZTF8/6IFK/ENYMeSvZ/8h1eh1REw4AJbEHP205C9y/blpMGXf4UOI6DD8Nr5cCIiEpEU4Aije1rH34y8Is44ojXExNuO4BXFUwKBzI5lsOITW4m787GQsrT6E6dvhi3zbK0o5+DYX1uQNPNxm9Tvo99AXgac/nfbLyIiUkMU4AjtGifRrUV94qKjOKlvK4hJsB2lGZySkrJAZttCG/XU/mhbtTttTflAKNjyD+1n7zPtZ4ve0Gc8fPsQPHYkLH4TRt9mK2CLiIjUIA0TFwBuGduDbem5NEyMDQpwAqOodq+DwmzbXv6+lbcdAlHRNpw8dUXVswovex+a94Jm3cvKznwYmnaF1Z/DuEdh8CW1+8FERCQiKcARAE4/onXZk4oBTspi+9l2KGyZa9vthkJRgW3vWFY5wNm9wea2OeZX5csTG8MJd9pDRESklqiJSiqLrRjgLLHZhftNsOdJzaBRR2jSxYKhRW/YcO9g0/9lQ8OHXF5n1RYRESmlAEcqK83gFAYFOE27QZvB9rzdUOsUHB0Dx/8B1nwBk8eXLe2QtQPm/wcGTISG7eq+/iIiEvHURCWVVWyi2
rkKmvWwTsLRcdBxRNmxI26A4nz4/M+QuQ0atLG5b4ryYOTNdV93ERERFOBIVSoGOBlboevxNtPwdTOgUfvyx3ccZT+3zrcAZ+2XlTsXi4iI1CE1UUllpfPgFOXZPDUFmdAg0Am5Wbey/aVa9bc+OlvnW7PWhhnQZUzd1llERCSIAhypLDZosc3MbbbdoG31x8clWcZm6wLYNMuWeOgyurZrKSIiUi0FOFJZVIxlZArzIGOLlTVos/fXtBlkGZy1X9rrO42s/XqKiIhUQwGOVOYcxCRaBidjq5XtT4CTsxPmPmezHMcn1349RUREqqEAR6oWE18+wEluvffj2waGkCc1gzP+Ubt1ExER2QeNopKqxZZmcLZAveaVOxZX1GYwXPQmtDvSRluJiIiEkAIcqVpMvC22mbdj381TYM1a3U+s/XqJiIjsBzVRSdViEmyV8Iytex9BJSIicghSgCNVi0mwDE7Glv3L4IiIiBxCFOBI1WISIHe3PRTgiIjIYUYBjlQtNgF2rbbtZAU4IiJyeFGAI1WLSYC8PbbdZlBo6yIiInKAFOBI1UoX3ExuDc17hrYuIiIiB0gBjlStNMDpMsaGgIuIiBxGFOBI1Uon9uuqVcFFROTwowBHqla6oniX0aGshYiIyEHRTMZStYEXQuPOUL9FqGsiIiJywBTgSNVa9beHiIjIYUhNVCIiIhJ2FOCIiIhI2FGAIyIiImFHAY6IiIiEHQU4IiIiEnYU4IiIiEjYUYAjIiIiYUcBjoiIiIQdBTgiIiISdhTgiIiISNhRgCMiIiJhRwGOiIiIhB0FOCIiIhJ2FOCIiIhI2FGAIyIiImFHAY6IiIiEHQU4IiIiEnYU4NQg59zdzrmFzrkFzrlPnXNtgvbd7pxb7Zxb4Zw7Oah8iHNuUWDfI845F5rai4iIhA8FODXrb977I7z3A4EPgDsBnHN9gAuAvsApwOPOuejAa54Arga6Bx6n1HmtRUREwowCnBrkvc8IeloP8IHt8cBr3vtVEmsyAAAIi0lEQVR87/06YDVwlHOuNdDAez/De++BycBZdVppERGRMBQT6gqEG+fcX4FLgXRgTKC4LTAz6LDNgbLCwHbF8qrOezWW6QHIcs6tqMFqAzQDdtbwOQ9nuh6V6ZpUpmtSma5JZbom5dX09ehYVaECnAPknJsKtKpi1x3e+3e993cAdzjnbgduAP4IVNWvxu+lvHKh908BTx1crffNOTfXez+0ts5/uNH1qEzXpDJdk8p0TSrTNSmvrq6HApwD5L0fu5+HvgJ8iAU4m4H2QfvaAVsD5e2qKBcREZGfQH1wapBzrnvQ03HA8sD2e8AFzrl451xnrDPxbO/9NiDTOTcsMHrqUuDdOq20iIhIGFIGp2bd55zrCZQAG4BrAbz3S5xzrwNLgSLgeu99ceA11wEvAInAx4FHKNRa89dhStejMl2TynRNKtM1qUzXpLw6uR7OBu+IiIiIhA81UYmIiEjYUYAjIiIiYUcBToRzzp0SWD5itXPutlDXJ1Scc+sDS2YscM7NDZQ1cc595pxbFfjZONT1rE3Oueecczucc4uDyqq9BtUtPxJOqrkmdznntgTulQXOudOC9oX1NXHOtXfOfemcW+acW+Kc+2WgPGLvk71ck0i+TxKcc7Odcz8ErsmfAuV1e5947/WI0AcQDawBugBxwA9An1DXK0TXYj3QrELZA8Btge3bgPtDXc9avgbHAoOBxfu6BkCfwP0SD3QO3EfRof4MdXRN7gJ+XcWxYX9NgNbA4MB2MrAy8Lkj9j7ZyzWJ5PvEAfUD27HALGBYXd8nyuBEtqOA1d77td77AuA1bFkJMeOBFwPbLxLmy2h476cBaRWKq7sGVS4/UicVrUPVXJPqhP018d5v895/H9jOBJZhs69H7H2yl2tSnUi4Jt57nxV4Ght4eOr4PlGAE9naApuCnle7VEQE8MCnzrl5gWUxAFp6m6uIwM8WIatd6FR3DSL93rnBObcw0IRVmmaPqGvinOsEDML+Otd9QqVrAhF8nzjnop1zC4AdwGfe+zq/TxTgRLb9XioiAoz03g8GTgWud84dG+oKHeIi+d55AugKDAS2AQ8GyiPmmjjn6gP/A2725RcZrnRoFWWRck0i+j7x3hd77wdiM/Qf5Zzrt5fDa+WaKMCJbNUtIRFxvPdbAz93AG9j6dGUwIrvBH7uCF0NQ6a6axCx9473PiXwy7sEeJqyVHpEXBPnXCz2Rf6y9/6tQHFE3ydVXZNIv09Kee/3AF8Bp1DH94kCnMg2B+junOvsnIsDLsCWlYgozrl6zrnk0m3gJGAxdi0uCxx2GZG5jEZ116DK5UdCUL86V/oLOuBs7F6BCLgmgSVlngWWee8fCtoVsfdJddckwu+T5s65RoHtRGAstnRRnd4nWqohgnnvi5xzNwBTsBFVz3nvl4S4WqHQEnjbfk8RA7zivf/EOTcHeN05NwnYCJwXwjrWOufcq8BooJlzbjO2UOx9VHEN/N6XHwkb1VyT0c65gVgKfT1wDUTMNRkJXAIsCvSvAPgdkX2fVHdNfhbB90lr4EXnXDSWSHnde/+Bc24GdXifaKkGERERCTtqohIREZGwowBHREREwo4CHBEREQk7CnBEREQk7CjAERERkbCjAEdEBHDOeefcubV4/qGB9+hUW+8hImUU4IjIYc8590IgeKj4mHkAp2kNvF9bdRSRuqWJ/kQkXEzFJlwLVrC/L/beb6/Z6ohIKCmDIyLhIt97v73CIw1+bH66wTn3oXMuxzm3wTl3cfCLKzZROefuDByX75zb7pybHLQv3jn3sHMuxTmX55yb6ZwbVeF8pzjnlgf2fwP0qFhh59wI59zXgTptcc494ZxrELT/2MC5s5xz6c65WftYtFBEAhTgiEik+BO25s1A4ClgsnNuaFUHOufOAX4N/AJbF+cMyq+N8wAwEbgSGAQsAj4JWkiwPfAO8Fng/f4VeE3we/QHPg3UaQAwIXDsc4H9MdhaPd8G9h8N/BMIt2n9RWqFlmoQkcOec+4F4GIgr8Kux7z3tzrnPPCM9/7nQa+ZCmz33l8ceO6B87z3bzrn/g9bO6if976wwnvVA3YDV3nvJwfKooGVwKve+9875+4BzgV6+sAvWefc74G7gc7e+/WBjFCh935S0LkHAvOx9dGKgF3AaO/91zVwmUQiivrgiEi4mAZcXaFsT9D2jAr7ZgCnV3OuN4BfAuucc1OAT4D3vPf5QFcgFviu9GDvfXFgIcE+gaLewExf/i/Iiu8/BOjmnJsYVOYCP7t672cEArcpzrnPgc+BN7z3m6qps4gEUROViISLHO/96gqPnQdzokAQ0RPL4mQADwLzAtmb0iCkqvR3aZmrYl9FUcAzWLNU6WMA1iS2IFCPK7CmqWnAOGClc+7kg/hIIhFHAY6IRIphVTxfVt3B3vs87/2H3vtbgCOBvsBIYDU2OuvHTsWBJqrhwNJA0VLgaOdccKBT8f2/B/pWEZSt9t7nBtXjB+/9/d770cBXwGX7/YlFIpiaqEQkXMQ751pVKCv23qcGtic4
5+ZgQcK5wAlYdqQS59zl2O/HWUAW1qG4EFjlvc92zj0B3Oec2wmsA27B+s08HjjFv4FfAQ875x4H+gPXVnib+4GZzrl/A08CmUAv4Ezv/TXOuc5YBuk9YAvQBTgCeOJALopIpFKAIyLhYiywrULZFqBdYPsu4BzgESAVuMJ7P6eac+0BbgX+jvW3WQpM8N6vC+y/NfDzeaAR1jH4FO/9NgDv/Ubn3ATgISxImQfcBrxU+gbe+4XOuWOBvwBfA9HAWuDtwCE52NDyN4BmQArwMhYYicg+aBSViIS94BFSoa6LiNQN9cERERGRsKMAR0RERMKOmqhEREQk7CiDIyIiImFHAY6IiIiEHQU4IiIiEnYU4IiIiEjYUYAjIiIiYef/AdX1wPZi49sdAAAAAElFTkSuQmCC\n",
"text/plain": [
"<Figure size 576x432 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"plot_result([\"expected_sarsa_agent\", \"random_agent\"])"
]
},
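{
"cell_type": "markdown",
"metadata": {},
"source": [
"The exact smoothing used by `plot_result()` is defined in the helper code provided with the assignment; the sketch below is only one common way to apply a sliding window to a learning curve, assuming (for illustration) a window of 100 episodes.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def smooth(returns, window=100):\n",
"    \"\"\"Average each point over the preceding `window` episodes (shorter at the start).\"\"\"\n",
"    returns = np.asarray(returns, dtype=float)\n",
"    smoothed = np.zeros_like(returns)\n",
"    for t in range(len(returns)):\n",
"        start = max(0, t - window + 1)\n",
"        smoothed[t] = returns[start:t + 1].mean()\n",
"    return smoothed\n",
"\n",
"# Example usage with the saved learning curve from the experiment above:\n",
"# curve = np.load(\"results/sum_reward_expected_sarsa_agent.npy\")[0]\n",
"# plt.plot(smooth(curve))\n",
"```"
]
},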
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "db793e2c314fae05c3878657eab18363",
"grade": false,
"grade_id": "cell-978255cacf80e540",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"In the following cell you can visualize the performance of the agent with a correct implementation. As you can see, the agent initially crashes quite quickly (Episode 0). Then, the agent learns to avoid crashing by expending fuel and staying far above the ground. Finally however, it learns to land smoothly within the landing zone demarcated by the two flags (Episode 275)."
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "a9cc1faf04b1dd665484f8c1982470ac",
"grade": false,
"grade_id": "cell-9fa82bbbfd32220b",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"outputs": [
{
"data": {
"text/html": [
"<div align=\"middle\">\n",
"<video width=\"80%\" controls>\n",
" <source src=\"ImplementYourAgent.mp4\" type=\"video/mp4\">\n",
"</video></div>\n"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"%%HTML\n",
"<div align=\"middle\">\n",
"<video width=\"80%\" controls>\n",
" <source src=\"ImplementYourAgent.mp4\" type=\"video/mp4\">\n",
"</video></div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "34eb37bcd53120dc16b74cf95fe283d4",
"grade": false,
"grade_id": "cell-e5423f3a7fee6813",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"In the learning curve above, you can see that sum of reward over episode has quite a high-variance at the beginning. However, the performance seems to be improving. The experiment that you ran was for 300 episodes and 1 run. To understand how the agent performs in the long run, we provide below the learning curve for the agent trained for 3000 episodes with performance averaged over 30 runs.\n",
"<img src=\"3000_episodes.png\" alt=\"Drawing\" style=\"width: 500px;\"/>\n",
"You can see that the agent learns a reasonably good policy within 3000 episodes, gaining sum of reward bigger than 200. Note that because of the high-variance in the agent performance, we also smoothed the learning curve. "
]
},
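{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, averaging over runs (as done for the 30-run curve above) is a single reduction over the first axis of the saved results array. A sketch with placeholder data, since the 30-run results are not generated in this notebook:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"all_runs = np.random.randn(30, 3000)   # placeholder for 30 runs x 3000 episodes of returns\n",
"mean_curve = all_runs.mean(axis=0)     # average over runs, leaving one value per episode\n",
"# mean_curve can then be smoothed (e.g., with the sliding-window sketch above) before plotting.\n",
"```"
]
},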
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "2949ffe4c0d604eacac2fa59caa1b1ae",
"grade": false,
"grade_id": "cell-d9aa9305a578583d",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"### Wrapping up! \n",
"\n",
"You have successfully implemented Course 4 Programming Assignment 2.\n",
"\n",
"You have implemented an **Expected Sarsa agent with a neural network and the Adam optimizer** and used it for solving the Lunar Lander problem! You implemented different components of the agent including:\n",
"\n",
"- a neural network for function approximation,\n",
"- the Adam algorithm for optimizing the weights of the neural network,\n",
"- a Softmax policy,\n",
"- the replay steps for updating the action-value function using the experiences sampled from a replay buffer\n",
"\n",
"You tested the agent for a single parameter setting. In the next assignment, you will perform a parameter study on the step-size parameter to gain insight about the effect of step-size on the performance of your agent."
]
}
],
"metadata": {
"coursera": {
"course_slug": "complete-reinforcement-learning-system",
"graded_item_id": "8dMlx",
"launcher_item_id": "4O5gG"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}