{
"metadata": {
"name": "",
"signature": "sha256:26b2e3d40ea228451889569bd59ce71b896ef250be7f68cd8e70e9594b848100"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A Survey of Neural Network Hyperparameters\n",
"===\n",
"It can be difficult to train a \"vanilla\" neural network with backpropagation. Training may succeed or fail depending on the choice of hyperparameters (including the choice of nonlinearity, the cost function, initialization, and what (if any) regularization is used). It's useful to know which hyperparameter settings tend to work well and which don't.\n",
"\n",
"These notes are based on Chapter 3 of \"Neural Networks and Deep Learning\":\n",
"\n",
"http://neuralnetworksanddeeplearning.com/chap3.html\n",
"\n",
"Another good introduction to NN hyperparameters is\n",
"\n",
"Practical recommendations for gradient-based training of deep architectures, Yoshua Bengio, U. Montreal, arXiv report:1206.5533, Lecture Notes in Computer Science Volume 7700, Neural Networks: Tricks of the Trade Second Edition, Editors: Gr\u00e9goire Montavon, Genevi\u00e8ve B. Orr, Klaus-Robert M\u00fcller, 2012.\n",
"\n",
"which is summarized here:\n",
"\n",
"http://colinraffel.com/wiki/neural_network_hyperparameters"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Cross-entropy cost\n",
"---\n",
"Sigmoidal units have small derivative when saturated. So, when the derivative of the cost function depends on the derivative of the activation function directly, learning can be slow when units are saturated. This is the case for, e.g., quadratic loss. The cross-entropy cost function is defined as\n",
"$$\n",
"C = -\\frac{1}{n} \\sum_x y \\log(a) + (1 - y)\\log(1 - a)\n",
"$$\n",
"where $a$ is the network output, $y$ is the target output, and the sum is over all training inputs $x$. Note that this cost function only really makes sense for neural networks whose output is between $0$ and $1$ (binary classification tasks); otherwise, it is not necessarily always positive. Furthermore, when $a \\rightarrow y$, $C \\rightarrow 0$ which is the desired behavior. The derivative of $C$ with respect to a network weight $w_j$ (for a single-layer network) is\n",
"$$\n",
"\\frac{\\partial C}{\\partial w_j} = \\frac{1}{n} \\sum_x x_j (a - y)\n",
"$$\n",
"or in other words, the larger the error ($a - y$), the faster the network will learn. This avoids the slow learning issue when units are saturated. The cross-entropy cost can be derived by deriving a cost function which makes the cost derivative look like the expression above.\n",
"\n",
"**Takeaway**: When using a network with sigmoidal units (binary classification), use cross-entropy cost."
]
},
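{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal NumPy sketch (not part of the chapter; the tiny dataset and variable names below are purely illustrative), the cross-entropy cost and its weight gradient for a single sigmoid layer could be computed as:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def sigmoid(z):\n",
"    return 1. / (1. + np.exp(-z))\n",
"\n",
"# Toy data: n = 2 training inputs with 3 features each, binary targets\n",
"X = np.array([[0., 1., 1.], [1., 0., 1.]])\n",
"y = np.array([1., 0.])\n",
"w = np.zeros(3)\n",
"b = 0.\n",
"\n",
"a = sigmoid(X.dot(w) + b)\n",
"# C = -(1/n) sum_x [y log(a) + (1 - y) log(1 - a)]\n",
"C = -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))\n",
"# dC/dw_j = (1/n) sum_x x_j (a - y): proportional to the error a - y\n",
"grad_w = X.T.dot(a - y) / X.shape[0]\n",
"```"
]
},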
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Early stopping\n",
"---\n",
"Because neural networks are universal function approximators, they can achieve arbitrarily good performance on the training set (particularly when they have a lot of parameters). However, when the training set is small or the training is unregularized, this typically translates to suboptimal performance on new unseen data. The most common way to mitigate this is to periodically compute the cost or error on a held-out validation set and to stop training when the validation cost or error stops improving.\n",
"\n",
"**Takeaway**: Early stopping is the most common (and a highly effective) way to avoid overfitting."
]
},
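{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the idea (separate from the full example at the end of these notes; the simulated validation-cost curve below is just a stand-in for real training):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"PATIENCE = 10  # epochs to wait for a new best validation cost\n",
"best_cost = np.inf\n",
"bad_epochs = 0\n",
"for epoch in range(1, 1000):\n",
"    # Stand-in for one epoch of training followed by a validation pass;\n",
"    # this fake validation cost bottoms out and then creeps back up.\n",
"    valid_cost = (epoch - 50)**2 / 1000. + 0.1 * np.random.rand()\n",
"    if valid_cost < best_cost:\n",
"        best_cost = valid_cost\n",
"        bad_epochs = 0\n",
"    else:\n",
"        bad_epochs += 1\n",
"        if bad_epochs >= PATIENCE:\n",
"            break  # validation cost has stopped improving\n",
"```"
]
},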
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Weight Decay\n",
"---\n",
"Weight decay adds an L2 term to the cost function which penalizes large weight values:\n",
"$$\n",
"C = C_{loss} + \\frac{\\lambda}{2n} \\sum_w w^2\n",
"$$\n",
"where $C_{loss}$ is the original cost function and $\\lambda$ is a hyperparameter which controls the relative importance of the weight decay. This makes training a compromise between finding small weights and minimizing the original cost function. The weight update for SGD becomes\n",
"$$\n",
"w \\rightarrow \\left(1 - \\frac{\\eta \\lambda}{n} \\right)w - \\frac{\\eta}{m} \\sum_x \\frac{\\partial C_{loss}}{\\partial w}\n",
"$$\n",
"where $\\eta$ is the learning rate. Note that the multiplication of $w$ by $1 - \\frac{\\eta \\lambda}{n}$ at each step makes it tend towards $0$, as long as the $\\frac{\\partial C_{loss}}{\\partial w}$ term doesn't dominate. Occasionally, diferent network initializations can get stuck in different local minima; weight decay helps mitigate this because the weights may grow unconstrained until the gradient is no longer large enough to substantially change them. Using smaller weights also avoids the issue where small changes to the network can produce large changes in the output - this can be seen as the network applying too much importance to a change which may be random fluctuations/noise.\n",
"\n",
"An L1 term can be used instead:\n",
"$$\n",
"C = C_{loss} + \\frac{\\lambda}{2n} \\sum_w |w|\n",
"$$\n",
"The effect is that the weights shrink a constant amount towards zero (not depending on their value):\n",
"$$\n",
"w \\rightarrow w - \\frac{\\eta \\lambda}{n}\\mathrm{sgn}(w) - \\frac{\\eta}{m} \\sum_x \\frac{\\partial C_{loss}}{\\partial w}\n",
"$$\n",
"The result is that L1 regularization tends to concentrate the weight of the network in a relatively small number of high-importance connections, while the other connections are zero (the weights are sparse).\n",
"\n",
"**Takeaway**: Weight decay is a simple way to avoid overweighting certain features to prevent overfitting."
]
},
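{
"cell_type": "markdown",
"metadata": {},
"source": [
"The two update rules above can be sketched directly in NumPy (the values of $\\eta$, $\\lambda$, $n$, $m$ and the stand-in gradient below are illustrative assumptions):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"eta, lam = 0.5, 0.1   # learning rate and weight decay strength\n",
"n, m = 1000, 100      # training set size and mini-batch size\n",
"w = np.random.randn(10)\n",
"grad = np.random.randn(10)  # stand-in for (1/m) sum_x dC_loss/dw\n",
"\n",
"# L2 weight decay: w is rescaled towards zero before the gradient step\n",
"w_l2 = (1. - eta * lam / n) * w - eta * grad\n",
"# L1 weight decay: w shrinks towards zero by a constant amount\n",
"w_l1 = w - (eta * lam / n) * np.sign(w) - eta * grad\n",
"```"
]
},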
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Dropout\n",
"---\n",
"Dropout works by randomly deleting half of the hidden neurons in the network for each forward-backward step (over each minibatch). The resulting weights and biases are therefore learned under conditions where half of the hidden neurons are dropped out; to compensate, at test time the weights outgoing from the hidden units are halved. This performs an approximate model averaging over the (exponentially many) possible networks with half the hidden units of the original network. It can also be seen as preventing hidden units from relying too heavily on units in the previous layer, because they may be dropped out randomly.\n",
"\n",
"![Dropout](http://neuralnetworksanddeeplearning.com/images/tikz31.png)\n",
"\n",
"**Takeaway**: Dropout appears to be an effective regularization technique, especially for networks with many parameters."
]
},
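{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the train/test asymmetry described above, for a single hidden layer (shapes and values are illustrative):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"hidden = np.random.rand(100)      # hidden-unit activations\n",
"W_out = np.random.randn(10, 100)  # weights out of the hidden layer\n",
"\n",
"# Training: randomly delete (roughly) half the hidden units for this mini-batch\n",
"mask = np.random.rand(100) > 0.5\n",
"output_train = W_out.dot(hidden * mask)\n",
"\n",
"# Test time: keep every unit but halve the outgoing weights to compensate\n",
"output_test = (0.5 * W_out).dot(hidden)\n",
"```"
]
},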
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Data Augmentation\n",
"---\n",
"More data can work as a form of regularization (in general, it's better to have more). However, in practice the amount of data is limited, but if we know some ways in which the data can be manipulated without altering its class, we can \"augment\" our trianing data set and increase its size. For handwritten digits, this can include slight rotations, translations, and deformations. \n",
"\n",
"**Takeaway**: If you know of a way to modify your input data without changing its class, you can artificially increase your training set size."
]
},
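{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, one-pixel translations of an MNIST-style digit can be sketched with `np.roll` (the random `image` below is a stand-in for a real 28x28 training example; `np.roll` wraps around the border, which is harmless for digits with empty margins):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"image = np.random.rand(28, 28)  # stand-in for one training digit\n",
"augmented = [image,\n",
"             np.roll(image, 1, axis=0),   # shift down one pixel\n",
"             np.roll(image, -1, axis=0),  # shift up one pixel\n",
"             np.roll(image, 1, axis=1),   # shift right one pixel\n",
"             np.roll(image, -1, axis=1)]  # shift left one pixel\n",
"# Each shifted copy keeps the same class label as the original.\n",
"```"
]
},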
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Parameter Initialization\n",
"---\n",
"If the weights are initialized in such a way that the input to each unit has a high probability of being very large (often the case when initializing to a normal distribution with high standard deviation with high fan-in), the hidden units can be heavily saturated. In this case, a small change in the input can cause almost no change in the output. As a result, a common method is to set the standard deviation to $1/\\sqrt{n_{in}}$ where $n_{in}$ is the fan-in in the layer. This effectively will speed up training, and also potentially find better local minimum.\n",
"\n",
"![Initialization](http://neuralnetworksanddeeplearning.com/images/weight_initialization_30.png)\n",
"\n",
"**Takeaway**: Weight initialization should be done with the fan-in in mind."
]
},
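{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the difference (layer sizes are illustrative; the fan-in-scaled version is what the code example at the end of these notes uses):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"n_in, n_out = 784, 100\n",
"# Naive: unit standard deviation regardless of fan-in; for O(1) inputs the summed\n",
"# input to each unit then has standard deviation ~sqrt(n_in), saturating sigmoid units.\n",
"W_naive = np.random.randn(n_in, n_out)\n",
"# Fan-in scaled: standard deviation 1/sqrt(n_in) keeps the summed input near std 1.\n",
"W_scaled = np.random.randn(n_in, n_out) / np.sqrt(n_in)\n",
"```"
]
},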
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Momentum and NAG\n",
"---\n",
"\n",
"In practice, it can be beneficial to include information about how the gradient is changing in gradient descent techniques. The ideal solution would be to compute the Hessian (second-order derivative matrix), but because its size is quadratic in the number of parameters it can be infeasible to compute and invert. A common compromise is to use momentum, which modifies the gradient descent update rule to\n",
"\\begin{align*}\n",
"v &\\rightarrow v^\\prime = \\mu v - \\eta \\nabla C(w)\\\\\n",
"w &\\rightarrow w^\\prime = w + v^\\prime\n",
"\\end{align*}\n",
"The variable $\\mu$ controls the amount that the old value of $v$ is recycled; if it is close to $1$, the gradient will build up a lot of \"speed\"; if it's close to $0$ the technique becomes more and more like standard gradient descent. A simple modification is Nesterov's Accelerated Gradient (NAG), which uses the update rule\n",
"\\begin{align*}\n",
"v &\\rightarrow v^\\prime = \\mu v - \\eta \\nabla C(w + \\mu v)\\\\\n",
"w &\\rightarrow w^\\prime = w + v^\\prime\n",
"\\end{align*}\n",
"In effect, a partial update $w + \\mu v$ is performed to $w$ before $v^\\prime$ is computed. This can lead to better stability in some situations. Other approximate second-order methods such as conjugate gradient and (L-)BFGS are sometimes used."
]
},
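{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of both update rules on a toy one-dimensional cost $C(w) = w^2$ (the cost, $\\eta$, and $\\mu$ are illustrative choices):\n",
"\n",
"```python\n",
"def grad(w):\n",
"    return 2. * w  # gradient of C(w) = w^2\n",
"\n",
"eta, mu = 0.1, 0.9\n",
"\n",
"# Momentum: v' = mu*v - eta*grad(w), then w' = w + v'\n",
"w, v = 5., 0.\n",
"for _ in range(100):\n",
"    v = mu * v - eta * grad(w)\n",
"    w = w + v\n",
"\n",
"# NAG: evaluate the gradient at the partially-updated point w + mu*v\n",
"w, v = 5., 0.\n",
"for _ in range(100):\n",
"    v = mu * v - eta * grad(w + mu * v)\n",
"    w = w + v\n",
"```"
]
},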
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Rectified linear units (ReLUs)\n",
"---\n",
"The ReLU is an activation function/nonlinearity which simply rectifies the input:\n",
"$$\n",
"a = \\max(0, Wx + b)\n",
"$$\n",
"This type of activation never saturates like sigmoidal units, but when the input is negative it has a zero gradient. They have been empirically found to work well with deep networks."
]
},
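{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal NumPy sketch of a ReLU layer and its gradient with respect to the pre-activation (shapes are illustrative):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"W = np.random.randn(100, 784)\n",
"b = np.zeros(100)\n",
"x = np.random.rand(784)\n",
"\n",
"z = W.dot(x) + b\n",
"a = np.maximum(0., z)             # a = max(0, Wx + b); never saturates for z > 0\n",
"grad_z = (z > 0).astype(z.dtype)  # zero gradient wherever the pre-activation is negative\n",
"```"
]
},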
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tips for adjusting hyperparameters\n",
"===\n",
"\n",
"Adjusting hyperparameters is very often done by hand, but if the initial hyperparameters used are wrong it can be hard to figure out which knob to adjust, in which direction, and by how much. Because this tuning is usually done based on validation set performance, it can be helpful to start with a simplified problem (fewer classes, less data, more frequent monitoring) which produces results more quickly. For specific hyperparameters:\n",
"\n",
"- Learning rate: Too large of a learning rate can cause the cost to randomly fluctuate and never settle; too small can cause it to decrease too slowly. Start by finding a small learning rate which works, then increase it until the cost starts to oscillate; set it slightly below this value. It is often helpful to decrease the learning rate once validation cost starts to plateau.\n",
"- Number of epochs: Use early stopping - once the validation cost starts to increase continually, stop training.\n",
"- Weight decay parameter $\\lambda$: Start without weight decay, then start small and increase until you find a value which effects the network's training dynamics.\n",
"- Mini-batch size: Using larger batches can improve computational efficiency, but using smaller batches can also help to avoid local minima. Usually, it's set to be small but large enough to be efficient; around 100 training examples per minibatch. This hyperparameter is pretty independent of the others, so you can fix it first then optimize the rest."
]
},
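{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch of the learning-rate tip above, here is one way to halve the learning rate when the validation cost plateaus (the fake cost sequence, improvement threshold, and patience below are illustrative assumptions, not a prescription):\n",
"\n",
"```python\n",
"learning_rate = 0.5\n",
"best_cost = float('inf')\n",
"epochs_since_improvement = 0\n",
"# Stand-in for per-epoch validation costs; a real loop would compute these.\n",
"for valid_cost in [1.0, 0.8, 0.7, 0.69, 0.68, 0.68, 0.68, 0.68, 0.68]:\n",
"    if valid_cost < best_cost - 1e-3:  # count only meaningful improvements\n",
"        best_cost = valid_cost\n",
"        epochs_since_improvement = 0\n",
"    else:\n",
"        epochs_since_improvement += 1\n",
"    if epochs_since_improvement >= 3:  # plateau detected: halve the learning rate\n",
"        learning_rate /= 2.\n",
"        epochs_since_improvement = 0\n",
"```"
]
},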
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Code Example\n",
"===\n",
"Here's a brief example which covers some of the concepts discussed above using the in-development library [nntools](https://github.com/benanne/nntools/), which provides common tools using Theano. It should replicate the model structure/hyperparameters/results from in the chapter. It's based on the `mnist.py` example included in `nntools`. _Note_: `nntools` is under active development; this code was made to work with commit `ce8808f40c2949b2794f9ed4b12d7257bcc30507`."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from __future__ import print_function\n",
"\n",
"import cPickle as pickle\n",
"import gzip\n",
"import itertools\n",
"import urllib\n",
"\n",
"import numpy as np\n",
"import nntools\n",
"import theano\n",
"import theano.tensor as T\n",
"\n",
"# Global constants\n",
"# Filename of the MNIST pickle; get it from http://deeplearning.net/data/mnist/mnist.pkl.gz\n",
"DATA_FILENAME = 'mnist.pkl.gz'\n",
"# How many epochs must the validation loss be greater than the best so far before stopping?\n",
"NUM_BAD_EPOCHS = 100\n",
"# Size of each minibatch\n",
"BATCH_SIZE = 500\n",
"# Number of units in the single hidden layer\n",
"NUM_HIDDEN_UNITS = 100\n",
"# Learning rate (eta)\n",
"LEARNING_RATE = 0.005\n",
"# Weight decay lambda parameter\n",
"DECAY_LAMBDA = 5.\n",
"\n",
"def one_hot(labels, n_classes):\n",
" '''\n",
" Converts an array of label integers to a one-hot matrix encoding\n",
"\n",
" :parameters:\n",
" - labels : np.ndarray, dtype=int\n",
" Array of integer labels, in {0, n_classes - 1}\n",
" - n_classes : int\n",
" Total number of classes\n",
"\n",
" :returns:\n",
" - one_hot : np.ndarray, dtype=bool, shape=(labels.shape[0], n_classes)\n",
" One-hot matrix of the input\n",
" '''\n",
" one_hot = np.zeros((labels.shape[0], n_classes)).astype(int)\n",
" one_hot[range(labels.shape[0]), labels] = True\n",
" return one_hot\n",
"\n",
"\n",
"def load_data():\n",
" '''\n",
" Load in the mnist.pkl data\n",
" \n",
" :returns:\n",
" - dataset : dict\n",
" A dict containing train/validation/test data/labels/shapes\n",
" '''\n",
" # Load in the pkl.gz\n",
" with gzip.open(DATA_FILENAME, 'rb') as f:\n",
" data = pickle.load(f)\n",
" X_train, y_train = data[0]\n",
" X_valid, y_valid = data[1]\n",
" X_test, y_test = data[2]\n",
" # Get the number of classes in the data (should be 10)\n",
" num_classes = np.unique(y_train).shape[0]\n",
" \n",
" # Convert class numbers (ints) to one-hot representation (see above)\n",
" y_train = one_hot(y_train, num_classes)\n",
" y_valid = one_hot(y_valid, num_classes)\n",
" y_test = one_hot(y_test, num_classes)\n",
"\n",
" # Construct a dataset dict\n",
" return dict(X_train=theano.shared(nntools.utils.floatX(X_train)),\n",
" y_train=theano.shared(nntools.utils.floatX(y_train)),\n",
" X_valid=theano.shared(nntools.utils.floatX(X_valid)),\n",
" y_valid=theano.shared(nntools.utils.floatX(y_valid)),\n",
" X_test=theano.shared(nntools.utils.floatX(X_test)),\n",
" y_test=theano.shared(nntools.utils.floatX(y_test)),\n",
" num_examples_train=X_train.shape[0],\n",
" num_examples_valid=X_valid.shape[0],\n",
" num_examples_test=X_test.shape[0],\n",
" input_dim=X_train.shape[1],\n",
" output_dim=num_classes)\n",
"\n",
"\n",
"def create_iter_functions(dataset, output_layer,\n",
" batch_size=BATCH_SIZE,\n",
" learning_rate=LEARNING_RATE,\n",
" decay_lambda=DECAY_LAMBDA):\n",
" '''\n",
" Create functions for training the network and computing train/validation/test loss/accuracy\n",
" \n",
" :parameters:\n",
" - dataset : dict\n",
" Dataset dict, as returned by load_data\n",
" - output_layer : nntools.Layer\n",
" Output layer of a neural network you've constructed\n",
" - batch_size : int\n",
" Mini-batch size\n",
" - learning_rate : float\n",
" Learning rate for SGD optimization\n",
" - decay_lambda : float\n",
" Weight decay lambda hyperparameter\n",
" \n",
" :returns:\n",
" - iter_funcs : dict\n",
" Dictionary of iterator functions for training/evaluating the network\n",
" '''\n",
" # Mini-batch index, symbolic, for use in theano functions\n",
" batch_index = T.iscalar('batch_index')\n",
" # X (data) and y (output) symbolic matrices\n",
" X_batch = T.matrix('x')\n",
" y_batch = T.matrix('y')\n",
" # Create a slice object for indexing X and y to obtain batches\n",
" batch_slice = slice(batch_index * batch_size, (batch_index + 1) * batch_size)\n",
"\n",
" # Loss function for the network\n",
" def loss(output):\n",
" # Collect all non-bias parameters\n",
" params = nntools.layers.get_all_non_bias_params(output_layer)\n",
" # Loss = cross-entropy ...\n",
" return (T.sum(-y_batch*T.log(output) - (1. - y_batch)*T.log(1. - output))\n",
" # + weight decay\n",
" + (decay_lambda/y_batch.shape[0])*sum(T.sum(p**2) for p in params))\n",
"\n",
" # Symbolic loss function for a batch of data\n",
" loss_train = loss(output_layer.get_output(X_batch))\n",
" # When using a dropout layer, we need to not drop out units when computing \n",
" # validation/test statistics. We'll use this function instead\n",
" loss_eval = loss(output_layer.get_output(X_batch, deterministic=True))\n",
"\n",
" # Compute predicted class for a batch\n",
" pred = T.argmax(output_layer.get_output(X_batch, deterministic=True), axis=1)\n",
" # Compute the accuracy - mean number of correct classes\n",
" accuracy = T.mean(T.eq(pred, T.argmax(y_batch, axis=1)))\n",
"\n",
" # Collect all parameters of the network\n",
" all_params = nntools.layers.get_all_params(output_layer)\n",
" # Compute SGD updates for these parameters\n",
" updates = nntools.updates.sgd(loss_train, all_params, learning_rate)\n",
"\n",
" # Create training function - includes updates\n",
" iter_train = theano.function([batch_index], loss_train, updates=updates,\n",
" givens={X_batch: dataset['X_train'][batch_slice],\n",
" y_batch: dataset['y_train'][batch_slice]})\n",
"\n",
" # Create validation/test functions\n",
" iter_valid = theano.function([batch_index], [loss_eval, accuracy],\n",
" givens={X_batch: dataset['X_valid'][batch_slice],\n",
" y_batch: dataset['y_valid'][batch_slice]})\n",
"\n",
" iter_test = theano.function([batch_index], [loss_eval, accuracy],\n",
" givens={X_batch: dataset['X_test'][batch_slice],\n",
" y_batch: dataset['y_test'][batch_slice]})\n",
"\n",
" return dict(train=iter_train, valid=iter_valid, test=iter_test)\n",
"\n",
"\n",
"def train(iter_funcs, dataset, batch_size=BATCH_SIZE):\n",
" '''\n",
" Create an iterator for training using iterator functions.\n",
" \n",
" :parameters:\n",
" - iter_funcs : dict\n",
" Dictionary of iterator functions, as returned by create_iter_functions\n",
" - dataset : dict\n",
" Dataset dictionary, as returned by load_data\n",
" - batch_size : int\n",
" Mini-batch size\n",
" \n",
" :returns:\n",
" - epoch_result : dict\n",
" Statistics for each epoch, yielded after each epoch\n",
" '''\n",
" # Compute the number of train/validation minibatches\n",
" num_batches_train = dataset['num_examples_train'] // batch_size\n",
" num_batches_valid = dataset['num_examples_valid'] // batch_size\n",
"\n",
" # Count indefinitely starting from 1\n",
" for epoch in itertools.count(1):\n",
" # Train for one epoch over all minibatches\n",
" batch_train_losses = []\n",
" for b in range(num_batches_train):\n",
" batch_train_loss = iter_funcs['train'](b)\n",
" batch_train_losses.append(batch_train_loss)\n",
" \n",
" # Compute average training loss for all minibatches\n",
" avg_train_loss = np.mean(batch_train_losses)\n",
"\n",
" # Compute validation loss/accuracy by accumulating over all batches...\n",
" batch_valid_losses = []\n",
" batch_valid_accuracies = []\n",
" for b in range(num_batches_valid):\n",
" batch_valid_loss, batch_valid_accuracy = iter_funcs['valid'](b)\n",
" batch_valid_losses.append(batch_valid_loss)\n",
" batch_valid_accuracies.append(batch_valid_accuracy)\n",
"\n",
" # ...and taking the mean\n",
" avg_valid_loss = np.mean(batch_valid_losses)\n",
" avg_valid_accuracy = np.mean(batch_valid_accuracies)\n",
"\n",
" # Yield the epoch result dict\n",
" yield {'number': epoch,\n",
" 'train_loss': avg_train_loss,\n",
" 'valid_loss': avg_valid_loss,\n",
" 'valid_accuracy': avg_valid_accuracy}\n",
"\n",
"\n",
"def test_accuracy(iter_funcs, dataset, batch_size=BATCH_SIZE):\n",
" '''\n",
" Compute accuracy on the test set.\n",
" \n",
" :parameters:\n",
" - iter_funcs : dict\n",
" Dictionary of iterator functions, as returned by create_iter_functions\n",
" - dataset : dict\n",
" Dataset dictionary, as returned by load_data\n",
" - batch_size : int\n",
" Mini-batch size\n",
" \n",
" :returns:\n",
" - test_accuracy : float\n",
" Model accuracy on the test set\n",
" '''\n",
" # Compute the number of test batches\n",
" num_batches_test = dataset['num_examples_test'] // batch_size\n",
" # Accumulate test accuracy over all batches\n",
" batch_accuracies = []\n",
" for b in range(num_batches_test):\n",
" batch_loss, batch_accuracy = iter_funcs['valid'](b)\n",
" batch_accuracies.append(batch_accuracy)\n",
" # Take the mean over all batches to get the actual test accuracy \n",
" return np.mean(batch_accuracies)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 55
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import IPython.display\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline\n",
"\n",
"# Load in the data dict\n",
"dataset = load_data()\n",
"\n",
"# Construct the network, first with the input layer\n",
"l_in = nntools.layers.InputLayer(shape=(BATCH_SIZE, dataset['input_dim']))\n",
"# One hidden layer\n",
"l_hidden1 = nntools.layers.DenseLayer(l_in, num_units=NUM_HIDDEN_UNITS, \n",
" # Sigmoidal activation, as in the chapter\n",
" nonlinearity=nntools.nonlinearities.sigmoid,\n",
" # Initialize with normal with std = 1/sqrt(fan-in)\n",
" W=nntools.init.Normal(std=1./np.sqrt(dataset['input_dim'])))\n",
"# Output layer\n",
"l_out = nntools.layers.DenseLayer(l_hidden1, num_units=dataset['output_dim'], \n",
" # Sigmoidal activation, as in the chapter\n",
" nonlinearity=nntools.nonlinearities.sigmoid,\n",
" # Initialize with normal with std = 1/sqrt(fan-in)\n",
" W=nntools.init.Normal(std=1./np.sqrt(NUM_HIDDEN_UNITS)))\n",
"\n",
"# Construct iterator function dictionary\n",
"iter_funcs = create_iter_functions(dataset, l_out)\n",
"\n",
"# Keep track of train/validation losses for later plotting\n",
"train_losses = []\n",
"valid_losses = []\n",
"# Keep track of the best validation loss so far for early stopping\n",
"best_valid_loss = np.inf\n",
"\n",
"# Try/except is so we can stop early manually\n",
"try:\n",
" # Calling train in a for loop will train one epoch at a time\n",
" for epoch in train(iter_funcs, dataset):\n",
" # Print statistics of this epoch\n",
" IPython.display.clear_output(wait=True)\n",
" print(\"Epoch {}\".format(epoch['number']))\n",
" print(\" training loss:\\t\\t{}\".format(epoch['train_loss']))\n",
" print(\" validation loss:\\t\\t{}\".format(epoch['valid_loss']))\n",
" print(\" validation accuracy:\\t\\t{:.3f}%\".format(epoch['valid_accuracy'] * 100))\n",
" # Store the validation/train loss for this epoch\n",
" train_losses.append(epoch['train_loss'])\n",
" valid_losses.append(epoch['valid_loss'])\n",
" # If this is a new best validation loss, store it\n",
" if epoch['valid_loss'] < best_valid_loss:\n",
" best_valid_loss = epoch['valid_loss']\n",
" # Otherwise, if there's not best validation loss in NUM_BAD_EPOCHS, break\n",
" else:\n",
" if (np.array(valid_losses)[-NUM_BAD_EPOCHS:] > best_valid_loss).all():\n",
" break\n",
"except KeyboardInterrupt:\n",
" pass\n",
"\n",
"# Plot train/validation curves\n",
"plt.plot(train_losses, label='Train loss')\n",
"plt.plot(valid_losses, label='Validation loss')\n",
"plt.legend()\n",
"\n",
"print('Test accuracy: {:.3f}%'.format(test_accuracy(iter_funcs, dataset)*100))"
],
"language": "python",
"metadata": {},
"outputs": [
{
"output_type": "stream",
"stream": "stdout",
"text": [
"Epoch 511\n",
" training loss:\t\t136.801458913\n",
" validation loss:\t\t159.63904583\n",
" validation accuracy:\t\t98.000%\n",
"Test accuracy: 97.850%"
]
},
{
"output_type": "stream",
"stream": "stdout",
"text": [
"\n"
]
},
{
"metadata": {},
"output_type": "display_data",
"png": "iVBORw0KGgoAAAANSUhEUgAAAYEAAAEACAYAAABVtcpZAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3Xt4FPWh//H3JiFAgNxMCAnhmogIkpa7FSwLVi4+VOw5\n54dQFVRa26O2ap+HClYltPSIFFraHrW1igoCaqtS7kKFpbZVYgFBAzEkbQiXGAiQcEgw1/n9MZPN\nJiQkbDbZzc7n9TzzzHy/Mzvz/Yawn8x3ZndARERERERERERERERERERERERERERsZhVQCHzqUfcL\n4AhwEHgHiPJYtxA4CmQBkz3qR1r7OAr8ug3bKyIiPnQzMJz6IXArEGItL7UmgCHAJ0AnoD+QAzis\ndRnAGGt5KzC1zVosIiItFtLM+g+A8w3qdgI11vJeINlangGsByqBPMwQGAskAj0wgwBgNXBHaxot\nIiK+0VwINOd+zL/sAZKAEx7rTgC9G6k/adWLiIiftSYEfgJUAOt81BYREWlnYV6+7l7gNuAWj7qT\nQB+PcjLmGcBJ6oaMautPNrbTlJQUIzc318smiYjYVi6Q2lY770/9C8NTgUwgrsF2tReGw4EBVqNq\nLwzvxbw+4ODKF4aNYLZo0SJ/N6HNBHPfDEP96+iCvX+A4e0bfHNnAuuBCZhv+MeBRZi3gYZjXiAG\n+BB4EDgMvGXNq6y62oY9CLwKdMUMge3eNlhERHynuRCY3Ujdqits/z/W1NA+YFhLGyUiIu2jtXcH\nyVVwOp3+bkKbCea+gfrX0QV7/1rD0fwm7coa3hIRkZZyOBzg5fu5t3cHiUg7io2N5fz5hp/bFLuJ\niYnh3LlzPt2nzgREOgCHw4H+b0hTvwetORPQNQERERtTCIiI2JhCQETExhQCIhIQbrvtNtasWePV\na/v378/777/v4xbZg+4OEhGvde/evfaiJKWlpXTp0oXQ0FAAXnzxRWbPbuzzpo3bunVr8xs1weFw\nuNshV0chICJeu3jxont5wIABvPzyy0yaNOmy7aqqqggL09tNINJwkIj4nMvlIjk5mWXLlpGYmMi8\nefMoLi5m+vTp9OzZk9jYWL75zW9y8mTdFwo7nU5efvllAF599VXGjx/P/PnziY2NZeDAgWzf3rKv\nHCsvL+fRRx+ld+/e9O7dm8cee4yKigoAioqKmD59OjExMVxzzTV8/etfd7/u2WefJTk5mcjISAYP\nHsyuXbt8+BMJXAoBEWkThYWFnD9/nvz8fH7/+99TU1PDvHnzyM/PJz8/n65du/Lwww+7t284pJOR\nkcHgwYM5e/YsP/7xj5k3b16Ljvvzn/+cjIwMDh48yMGDB8nIyGDJkiUArFixgj59+lBUVMTp06d5\n5plnAPj888957rnn+Oc//8mFCxfYsWMH/fv3990PI4ApBESCgMPhm8mXQkJCWLx4MZ06daJLly7E\nxsbyrW99iy5dutC9e3eeeOIJ9uzZ0+Tr+/Xrx7x583A4HMyZM4eCggJOnz7d7HHXrVvH008/TVxc\nHHFxcSxatMh9wTk8PJyCggLy8vIIDQ1l3LhxAISGhlJeXk5mZiaVlZX07duXgQMH+uYHEeAUAiJB\nwDB8M/lSfHw84eHh7nJZWRnf+9736N+/P1FRUUyYMIGSkpImPwndq1cv93JERARQ/xpEU06dOkW/\nfv3c5b59+3Lq1CkA5s+fT2pqKpMnTyYlJYVnn30WgNTUVFauXEl6ejoJCQnMnj2bgoKCq+90B6QQ\nEJE20fBunRUrVpCdnU1GRgYlJSXs2bMHwzB8/nUYSUlJ5OXlucv5+fkkJSUB5t1My5cvJzc3l40b\nN/LLX/7SPfY/e/ZsPvjgA44dO4bD4eDxxx/3absClUJARNrFxYsX6dq1K1FRUZw7d47Fixe3yXFm\nz57NkiVLKCoqoqioiJ/+9Kfcc889AGzevJmcnBwMwyAyMpLQ0FBCQ0PJzs5m165dlJeX07lz53q3\nugY7hYCItImGZwKPPvooly5dIi4ujptuuolp06Y1eW9/Y/f9t/RzAE8++SSjRo0iLS2NtLQ0Ro0a\nxZNPPglATk4Ot956Kz169OCmm27ioYceYsKECZSXl7Nw4ULi4+NJTEykqKjIfdE42AXapyv0LaIi\njdC3iAroW0RFRMTHFAIiIjamEBARsTGFgIiIjSkERERsTCEgImJjCgERERtTCIiI2JhCQET8JiQk\nhH/9618A/Pd//7f7K5+b2/ZqrV27lilTpnj12itxuVz06dPH5/ttTwoBEfHa1KlTWbRo0WX1f/7z\nn0lMTKSmpqbF+3rhhRfcX+/QGnl5eYSEhNQ79l133cV7773X6n0Ho+ZCYBVQCHzqURcL7ASygR1A\ntMe6hcBRIAuY7FE/0trHUeDXrWuyiASKe++9l9dff/2y+jVr1nD33XcTEuK/vzP1NRst09y/0CvA\n1AZ1CzBDYBDwvlUGGALcac2nAs9T910WLwDzgGutqeE+RaQDmjFjBmfPnuWDDz5w150/f54tW7Yw\nZ84cMjIy+NrXvkZMTAxJSUn84Ac/oLKystF93XvvvTz11FPu8i9+8QuSkpJITk5m1apV9bbdsmUL\nw4cPJyoqir59+9b7RtLaR0ZGR0cTGRnJRx99xKuvvsrNN9/s3uYf//gHo0ePJjo6mjFjxvDhhx+6\n1zmdTp5++mnGjx9PZGQkU6ZM4ezZsy36eRw5cgSn00lMTAw33HADmzZtcq/bunUrQ4cOJTIykuTk\nZFasWAE0/sjLQAuw/tQ/E8gCEqzlXlYZzLMAzy/g3g7cCCQCRzzqZwG/a+JYhohcLpD/b3z3u981\nvvOd77jLv/vd74zhw4cbhmEY+/btM/bu3WtUV1cbeXl5xvXXX2+sXLnSva3D4TByc3MNwzCMe++9\n13jqqacMwzCMbdu2GQkJCUZmZqZRWlpqzJ49u962LpfL+OyzzwzDMIxDhw4ZCQkJxoYNGwzDMIy8\nvDzD4XAY1dXV7uO88sorxvjx4w3DMIyzZ88a0dHRxuuvv25UV1cb69evN2JiYoxz584ZhmEYEyZM\nMFJTU42jR48aly5dMpxOp7FgwYJG+757924jOTnZMAzDqKioMFJSUoxnnnnGqKysNHbt2mX06NHD\nyM7ONgzDMHr16mX87W9/MwzDMIqLi439+/cbhmEYCxYsML7//e8bVVVVRlVVlXubxjT1ewB4nRph\nXrwmAXOICGteGwhJwEce250AegOV1nKtk1a9iPiIY7FvvhDYWHT17yVz585l+vTpPPfcc4SHh7N6\n9Wrmzp0LwIgRI9zb9evXjwceeIA9e/bwyCOPXHGfb731Fvfffz9DhgwBYPHixbzxxhvu9RMmTHAv\nDxs2jFmzZrFnzx5mzJjR7F/RW7Zs4brrruOuu+4CYNasWfzmN79h48aNzJ07F4fDwX333UdqaioA\nM2fOZOPGjc3+HD766CNKS0tZsMAcHJk4cSLTp
09n3bp1LFq0iPDwcDIzMxk2bBhRUVEMHz4cqP/I\ny5SUFPcjL9uLNyHgqVUJJCK+4c2bt6+MGzeOuLg43n33XUaNGsXHH3/Mhg0bAMjOzuZHP/oR+/bt\no6ysjKqqKkaNGtXsPgsKChg9erS73Ldv33rr9+7dy4IFC8jMzKSiooLy8nJmzpzZovaeOnXqsv31\n69fP/QhKqP9oy65du7b4sZYN7xTq168fJ0+eBODtt99myZIlLFiwgLS0NJYuXcqNN97I/PnzSU9P\nZ/Jk8zLqAw880K5PNfMmBAoxh4G+wBzqqX3y80nA8yeQjHkGcNJa9qw/2dTO09PT3ctOpxOn0+lF\nE0WkPc2ZM4fVq1eTlZXF1KlTiY+PB8zbPkeOHMmbb75Jt27dWLlyJW+//Xaz+0tMTCQ/P99d9lwG\n+Pa3v80Pf/hD3nvvPcLDw3nssccoKioCmn/4TO/evXnnnXfq1R07doxp06a1qK9NSUpK4vjx4xiG\n4W7DsWPHGDx4MACjRo1iw4YNVFdX89vf/paZM2eSn5/vfuTl8uXLyczMZNKkSYwePZpJkyY1eSyX\ny4XL5WpVe2t5c+l+IzDXWp4LbPConwWEAwMwLwBnYIbFBWAs5oXiezxec5n09HT3pAAQ6RjmzJnD\nzp07eemll9xDQWA+UrJHjx5ERESQlZXFCy+80OQ+DI/nDc+cOZNXX32VI0eOUFZWdtmjKC9evEhM\nTAzh4eFkZGSwbt069xtvfHw8ISEh5ObmNnqcadOmkZ2dzfr166mqquLNN98kKyuL6dOn12vL1Ro7\ndiwREREsW7aMyspKXC4XmzdvZtasWVRWVrJ27VpKSkoIDQ2lR48e7sdXNvXIyytxOp313itbo7kQ\nWA/8A7gOOA7cBywFbsW8RXSSVQY4DLxlzbcBD1I3VPQg8BLmLaI5mBeNRSRI9OvXj3HjxlFWVsbt\nt9/url++fDnr1q0jMjKSBx54gFmzZtX7S73hcm156tSpPProo0yaNIlBgwZxyy231Nv2+eef5+mn\nnyYyMpKf/exn3Hnnne51ERER/OQnP2HcuHHExsayd+/eevu+5ppr2Lx5MytWrCAuLo7ly5ezefNm\nYmNjm21XY2rXhYeHs2nTJrZt20Z8fDwPP/wwa9asYdCgQQC8/vrrDBgwgKioKF588UXWrl0LNP3I\ny/aix0uKdAB6vKSAHi8pIiI+phAQEbExhYCIiI0pBEREbEwhICJiYwoBEREba+3XRohIO4iJiWn2\nk7AS/GJiYny+z0D7rdLnBERErpI+JyAiIl5RCIiI2JhCQETExhQCIiI2phAQEbExhYCIiI0pBERE\nbEwhICJiYwoBEREbUwiIiNiYQkBExMYUAiIiNqYQEBGxMYWAiIiNKQRERGxMISAiYmMKARERG1MI\niIjYmEJARMTGFAIiIjamEBARsbHWhMBCIBP4FFgHdAZigZ1ANrADiG6w/VEgC5jciuOKiIiPOLx8\nXX9gF3A9UA68CWwFhgJFwDLgcSAGWAAMwQyK0UBv4C/AIKCmwX4NwzC8bJKIiD05HA7w8v3c2zOB\nC0AlEAGEWfNTwO3Aa9Y2rwF3WMszgPXWa/KAHGCMl8cWEREf8TYEzgErgHzMN/9izGGgBKDQ2qbQ\nKgMkASc8Xn8C84xARET8KMzL16UAj2IOC5UAfwTubrCNYU1NaXRdenq6e9npdOJ0Or1soohIcHK5\nXLhcLp/sy9trAncCtwLfscr3ADcCk4CJwBdAIrAbGIx5XQBgqTXfDiwC9jbYr64JiIhcJX9cE8jC\nfNPvah34G8BhYBMw19pmLrDBWt4IzALCgQHAtUCGl8cWEREf8XY46CCwGvgn5h0++4EXgR7AW8A8\nzAvAM63tD1v1h4Eq4EGuPFQkIiLtwNvhoLai4SARkavkj+EgEREJAgoBEREbUwiIiNiYQkBExMYU\nAiIiNqYQEBGxMYWAiIiNKQRERGxMISAiYmMKARERG1MIiIjYmEJARMTGFAIiIjamEBARsTGFgIiI\njSkERERsTCEgImJjCgERERtTCIiI2JhCQETExhQCIiI2phAQEbExhYCIiI0pBEREbEwhICJiYwoB\nEREbUwiIiNiYQkBExMZaEwLRwJ+AI8BhYCwQC+wEsoEd1ja1FgJHgSxgciuOKyIiPtKaEPg1sBW4\nHkjDfHNfgBkCg4D3rTLAEOBOaz4VeL6VxxYRER/w9o04CrgZWGWVq4AS4HbgNavuNeAOa3kGsB6o\nBPKAHGCMl8cWEREf8TYEBgBngFeA/cAfgG5AAlBobVNolQGSgBMerz8B9Pby2CIi4iNhrXjdCOBh\n4GNgJXVDP7UMa2pKo+vS09Pdy06nE6fT6WUTRUSCk8vlwuVy+WRfDi9f1wv4EPOMAGA85oXfgcBE\n4AsgEdgNDKYuIJZa8+3AImBvg/0ahnGl3BARkYYcDgd4+X7u7XDQF8BxzAvAAN8AMoFNwFyrbi6w\nwVreCMwCwjGD41ogw8tji4iIj3g7HATwA2At5ht7LnAfEAq8BczDvAA809r2sFV/GPMi8oNceahI\nRETagbfDQW1Fw0EiIlfJH8NBIiISBBQCIiI2phAQEbExhYCIiI0pBEREbEwhICJiYwoBEREbUwiI\niNiYQkBExMYUAiIiNqYQEBGxMYWAiIiNKQRERGxMISAiYmMKARERG1MIiIjYmEJARMTGFAIiIjam\nEBARsTGFgIiIjSkERERsTCEgImJjCgERERtTCIiI2JhCQETExhQCIiI2phAQEbGxgAuBmhp/t0BE\nxD5aGwKhwAFgk1WOBXYC2cAOINpj24XAUSALmNzUDr/8spUtEhGRFmttCDwCHAYMq7wAMwQGAe9b\nZYAhwJ3WfCrwfFPHvnSplS0SEZEWa00IJAO3AS8BDqvuduA1a/k14A5reQawHqgE8oAcYExjO1UI\niIi0n9aEwK+A+YDnKH4CUGgtF1plgCTghMd2J4Deje1UISAi0n7CvHzddOA05vUAZxPbGNQNEzW1\n/jK//GU6CVZ0OJ1OnM6mdi8iYk8ulwuXy+WTfTma36RR/wPcA1QBXYBI4B1gNGYofAEkAruBwdRd\nG1hqzbcDi4C9DfZrfPSRwdixXrZKRMSGHA4HePl+7u1w0BNAH2AAMAvYhRkKG4G51jZzgQ3W8kZr\nu3DrNdcCGY3tWMNBIiLtx9vhoIZqh3aWAm8B8zAvAM+06g9b9Ycxzx4epInhIIWAiEj78XY4qK0Y\nb79t8B//4e9miIh0HP4YDmozOhMQEWk/CgERERsLuBAoK/N3C0RE7CPgQuD8eX+3QETEPgIuBE6f\n9ncLRETsI+BCoLCw+W1ERMQ3Ai4ECk5X+rsJIiK2EXghoIsCIiLtJuBC4PT/nfN3E0REbCPgQqA6\n/BwlJf5uhYiIPQRcCPROPcfnn/u7FSIi9hBwIRDf9xxHjvi7FSIi9hBwIRCdVERmpr9bISJiDwEX\nAp0S
cvnwQ3+3QkTEHgIuBEo7H+XAAX2RnIhIewi4EMgtzmbMGHjvPX+3REQk+AVcCBSVFfHN/1fM\nmjX+bomISPALuBC4qc9NJH3tr/z1r5Cd7e/WiIgEt4ALgW8M/Aa7jm/hoYdg6VJ/t0ZEJLgF3DOG\nT144ydDnh/LxPUe5eWQc774LN97o72aJiASu1jxjOOBCwDAMHtv+GGfKzjC94nWWLIGPP4auXf3d\nNBGRwBRUD5oHWDJpCXtP7uXCtX8gLQ0efBAMw9+tEhEJPgEZAt3Cu7H121tZvCedEd//LQc+MXji\nCQWBiIivBeRwUK3cc7n81x//i5jwBI6/vIxv3ZTG0qUQEpDRJSLiH0E3HFQrJTaFvd/Zyx1DbqPk\n9lt5uWwGN9+zh+JinRKIiPhCQJ8JeCqrLGPVvtU8teVXlBZ35/ujv8dT//kt4rvFt3MTRUQCS9Dd\nHXQlNUYNz/xxG89sXUN5n+0MjUtjRtokJg2YyI3JN9I5rHM7NVVEJDDYKgRqVVTA716+xNL1e6hM\n3k3EkN2cDTnC2OQxOPs5mThgImN6jyE8NLyNmywi4l/+CIE+wGqgJ2AALwK/AWKBN4F+QB4wEyi2\nXrMQuB+oBn4I7Ghkvy0OgVo1NbBnD6xaBRvfK6H/hA9IGOuiMGI3ucWfMyxhGCN6jWBk0khGJI5g\naPxQOoV2uuoOi4gEKn+EQC9r+gToDuwD7gDuA4qAZcDjQAywABgCrANGA72BvwCDgJoG+73qEPB0\n6RJs2wZvvGF+C+mYr19g9PRP6H7tPo6U7GffqX3kFecxtOdQRiaaoTAycSQ39LxBw0gi0mEFwnDQ\nBuB/rWkCUIgZEi5gMOZZQA3wrLX9diAd+KjBfloVAp4uXoRNm8xA2LULvvIVmDwZbr7lIqG9D3Kw\ncD/7Cvaxv2A/OedyGBw32B0KIxJHkJaQRtdO+piyiAQ+f4dAf2APcAOQj/nXf+2+z1nl32K+4a+1\n1r0EbAPebrAvn4WAp0uX4IMPzLODHTvg1Cm45RaYMsUMhrhelzhUeIj9BXXBkFWURWpsKiOTRjK8\n13C+kvAVhiUMI7ZrrM/bJyLSGv4Mge6YAfAzzLOB89SFAJghEEvjIbAVeKfB/tokBBo6eRJ27jRD\nYedO6NnTDIMpU2DCBIiIgPKqcj47/Rn7CvZxoOAAh04f4tPCT4nqEkVaQhppPdPMeUIag64ZpOsM\nIuI3/gqBTsBmzL/oV1p1WYAT+AJIBHZjDgctsNbXfjn0dmARsLfBPo1Fixa5C06nE6fT2YomNq+6\nGg4cqDtL2L8fxo41Q+GWW+CrX4XQUHPbGqOGY8XHOFR4yJxOm/PjJce5Lu46dzhcH389qbGp9I/u\nr7uTRMTnXC4XLpfLXV68eDG0cwg4gNeAs8BjHvXLrLpnMd/4o6l/YXgMdReGUzHvLPLULmcCV3Lh\nAuzebQbC7t1QUGCeHUycaE433HD511aUVZaReTqTQ4WHOFh4kKyiLHLP53LiwgkSuyeSEptCakwq\nKbEppMSkkBKbwoDoAUR1ifJPJ0UkqPjjTGA88FfgEHVv5AuBDOAtoC+X3yL6BOYtolXAI0BjTxH2\newg0VFAALpcZCLt3Q3ExOJ1mIEyaBNddB44mfoqV1ZXkl+STcy6H3PO55J7LJfd8LjnncjhWcgwH\nDpIjk+kT1Yc+kdZkLSf2SCQ+Ip5rIq4hLCSsPbssIh2Mvy8M+1LAhUBDx4/XBcKuXeaH1moDYeJE\nGDiw6VDwZBgGJeUlHC85zvELx+vm1nJhaSGnS09z/tJ5ortE07NbT+K7xZvziHjiIuKI6RJDdJdo\n9xTTta4c2TmSEEdAfzWUiPiIQsBPDAP+/e+6QNi927zG8NWvwvDhdfPU1LrrCleruqaas5fOcrr0\nNGdKz5jzsjOcKT1DSXkJxV8WXzad//I8pRWl9Ojcg+gu0XQP705Epwi6depmzsO71S971HcJ60J4\naDidQzub87DOLSrXTgoekfanEAgQhmHeeXTgAHzyiTk/cADOnIG0NBgyxBw+GjzYnAYMgLA2Gump\nrqnmQvkFir8s5mLFRcoqyyitLKW0otS9XFZZVq9cWlHKl9VfUlFdQXlVORXVFeZydXm9uqbKFdUV\nhDpCCQsJo1NoJ3Me0qnJ8pXW1SuHdGrR/q60bagjlNCQUEIdoYQ4QggNseaNlL1dF+IIweFwEOII\nMZdx1KurLdfWifiKQiDAFRfDwYNw5Ah8/jlkZZnzggIzCAYOhL59oV8/c6pdTkzsWM9OMAyDaqOa\nyupKqmqqqKyx5h7lK63z6bYNtqmqqaLaqKbGqKG6xpp7lK+0rrbc3DoDgxqjBsMw5zVGzWV1tWUA\nB44rhkZL6xqGS1vWtbq9tNF+vfh5dQvvxt1pd/v5f41vKAQ6qEuXICfHHFI6dgzy88157XTuHMTF\nQUKC+VmG2nntckwMREfXn3r0aNk1CfEvwzBaFBqtqast+7quo7W3qT706NyD5ZOX+/tXwScUAkGq\nvNwcSjp9GgoLzXntVFgI58+bZxklJea8uBjKyiAysi4UIiOhW7emp4iI+uWuXaFzZwgPrz9vWNdW\nw1gicvUUAuJWVWV+1qE2FEpKoLS05dOXX5p3PJWXm1Ptsmddebl5tuEZDJ06mVNYWP15oNY1tb4j\nDb+J1FIISLurqqofDJWVZl1lZf3ljlRXWWmGW8OACAkxp9BQ385b+1qHo27eVsvtcYyWHtvX23Tq\nBCkp/v6f5BsKAREfqa6uHwzV1eYzK9pq3prXGoY5tedyWx+j4bFaWufNNnFx8Pe/+/s3zjcUAiIi\nNtaaENAIqIiIjSkERERsTCEgImJjCgERERtTCIiI2JhCQETExhQCIiI2phAQEbExhYCIiI0pBERE\nbEwhICJiYwoBEREbUwiIiNiYQkBExMYUAiIiNqYQEBGxMYWAiIiNKQRERGxMISAiYmPtHQJTgSzg\nKPB4Ox9bREQaaM8QCAX+FzMIhgCzgevb8fh+53K5/N2ENhPMfQP1r6ML9v61RnuGwBggB8gDKoE3\ngBnteHy/C+ZfxGDuG6h/HV2w96812jMEegPHPconrDoREfGT9gwBox2PJSIiLeBox2PdCKRjXhMA\nWAjUAM96bJMDpLRjm0REgkEukOrvRjQnDLOh/YFw4BNsdmFYRMTupgGfY/7Fv9DPbREREREREX8L\nhg+RrQIKgU896mKBnUA2sAOI9li3ELO/WcDkdmpja/QBdgOZwGfAD636YOhjF2Av5hDlYeAZqz4Y\n+uYpFDgAbLLKwdS/POAQZv8yrLpg6l808CfgCObv6FiCqH+hmMND/YFOdNxrBTcDw6kfAsuAH1vL\njwNLreUhmP3shNnvHAL/Kzx6AV+1lrtjDutdT/D0McKahwEfAeMJnr7V+hGwFtholYOpf//GfFP0\nFEz9ew2431oOA6IIov59DdjuUV5gTR1Rf+qHQBaQYC33sspgp
rTnGc92zLunOpINwDcIvj5GAB8D\nQwmuviUDfwEmUncmEEz9+zdwTYO6YOlfFPCvRup90r9ASIdg/hBZAuYQEda89h8sCbOftTpan/tj\nnvXsJXj6GIL511MhdcNewdI3gF8B8zFvy64VTP0zMEPun8B3rbpg6d8A4AzwCrAf+APQDR/1LxBC\nwC4fIjO4cl87ys+hO/A28Ajwfw3WdeQ+1mAOdyUDX8f8i9lTR+7bdOA05nh5U58N6sj9AxiH+YfJ\nNOAhzOFZTx25f2HACOB5a17K5aMlXvcvEELgJOZFx1p9qJ9iHVkh5mkaQCLmf0S4vM/JVl2g64QZ\nAGswh4Mg+PpYAmwBRhI8fbsJuB1zyGQ9MAnz3zBY+gdQYM3PAO9ifldZsPTvhDV9bJX/hBkGXxAc\n/QuqD5H15/ILw7Vjcwu4/MJNOOapXi7t++ltbziA1ZjDCp6CoY9x1N1Z0RX4K3ALwdG3hiZQd00g\nWPoXAfSwlrsBf8e8IyZY+gfm7+Qgazkds2/B1L+g+BDZeuAUUIF5jeM+zLsV/kLjt3A9gdnfLGBK\nu7bUO+Mxh0w+wRxWOIB5a28w9HEY5ljrJ5i3Gc636oOhbw1NoO7uoGDp3wDMf7tPMG9frn0PCZb+\nAXwF80x39brgAAAALElEQVTgIPAO5sXiYOqfiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIiIi0xv8H\nMzu7Vcj6q8gAAAAASUVORK5CYII=\n",
"text": [
"<matplotlib.figure.Figure at 0xb3c6a90>"
]
}
],
"prompt_number": 57
}
],
"metadata": {}
}
]
}