@Z30G0D
Created January 17, 2018 09:02
Third assignment of Udacity's Deep Learning course
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "kR-4eNdK6lYS"
},
"source": [
"Deep Learning\n",
"=============\n",
"\n",
"Assignment 3\n",
"------------\n",
"\n",
"Previously in `2_fullyconnected.ipynb`, you trained a logistic regression and a neural network model.\n",
"\n",
"The goal of this assignment is to explore regularization techniques."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"colab_type": "code",
"id": "JLpLa8Jt7Vu4"
},
"outputs": [],
"source": [
"# These are all the modules we'll be using later. Make sure you can import them\n",
"# before proceeding further.\n",
"from __future__ import print_function\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from six.moves import cPickle as pickle"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "1HrCK6e17WzV"
},
"source": [
"First reload the data we generated in `1_notmnist.ipynb`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"colab_type": "code",
"executionInfo": {
"elapsed": 11777,
"status": "ok",
"timestamp": 1449849322348,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"id": "y3-cj1bpmuxc",
"outputId": "e03576f1-ebbe-4838-c388-f1777bcc9873"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training set (2000, 28, 28) (2000,)\n",
"Validation set (100, 28, 28) (100,)\n",
"Test set (100, 28, 28) (100,)\n"
]
}
],
"source": [
"pickle_file = 'notMNIST.pickle'\n",
"\n",
"with open(pickle_file, 'rb') as f:\n",
" save = pickle.load(f, encoding='latin1')\n",
" train_dataset = save['train_dataset']\n",
" train_labels = save['train_labels']\n",
" valid_dataset = save['valid_dataset']\n",
" valid_labels = save['valid_labels']\n",
" test_dataset = save['test_dataset']\n",
" test_labels = save['test_labels']\n",
" del save # hint to help gc free up memory\n",
" print('Training set', train_dataset.shape, train_labels.shape)\n",
" print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
" print('Test set', test_dataset.shape, test_labels.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "L7aHrm6nGDMB"
},
"source": [
"Reformat into a shape that's more adapted to the models we're going to train:\n",
"- data as a flat matrix,\n",
"- labels as float 1-hot encodings."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"colab_type": "code",
"executionInfo": {
"elapsed": 11728,
"status": "ok",
"timestamp": 1449849322356,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"id": "IRSyYiIIGIzS",
"outputId": "3f8996ee-3574-4f44-c953-5c8a04636582"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training set (2000, 784) (2000, 10)\n",
"Validation set (100, 784) (100, 10)\n",
"Test set (100, 784) (100, 10)\n"
]
}
],
"source": [
"image_size = 28\n",
"num_labels = 10\n",
"\n",
"def reformat(dataset, labels):\n",
" dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n",
" # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]\n",
" labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n",
" return dataset, labels\n",
"train_dataset, train_labels = reformat(train_dataset, train_labels)\n",
"valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\n",
"test_dataset, test_labels = reformat(test_dataset, test_labels)\n",
"print('Training set', train_dataset.shape, train_labels.shape)\n",
"print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
"print('Test set', test_dataset.shape, test_labels.shape)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"cellView": "both",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"colab_type": "code",
"id": "RajPLaL_ZW6w"
},
"outputs": [],
"source": [
"def accuracy(predictions, labels):\n",
" return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n",
" / predictions.shape[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "sgLbUAQ1CW-1"
},
"source": [
"---\n",
"Problem 1\n",
"---------\n",
"\n",
"Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor `t` using `nn.l2_loss(t)`. The right amount of regularization should improve your validation / test accuracy.\n",
"\n",
"---"
]
},
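{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before tuning anything, a quick sanity check on what `tf.nn.l2_loss` actually computes: it returns `sum(t ** 2) / 2`, so the penalty added to the loss below is `beta * ||W||^2 / 2`. The next cell is only a minimal sketch to confirm this, assuming the TensorFlow 1.x API used throughout this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sanity check (assumes TensorFlow 1.x): tf.nn.l2_loss(t) == sum(t**2) / 2\n",
"w = np.random.randn(4, 3).astype(np.float32)\n",
"with tf.Session() as check_session:\n",
"    tf_value = check_session.run(tf.nn.l2_loss(tf.constant(w)))\n",
"np_value = np.sum(w ** 2) / 2.0\n",
"print(tf_value, np_value)  # the two values should agree up to float rounding"
]
},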
{
"cell_type": "markdown",
"metadata": {},
"source": [
"I will copy the code written in assignment 2 and then introduce the L2 regularization. First let's start with the logistic regression, the code was already written in the previous lab"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# With gradient descent training, even this much data is prohibitive.\n",
"# Subset the training data for faster turnaround.\n",
"train_subset = 1000\n",
"beta = 0.02 # regularization constant\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
"\n",
" # Input data.\n",
" # Load the training, validation and test data into constants that are\n",
" # attached to the graph.\n",
" tf_train_dataset = tf.constant(train_dataset[:train_subset, :])\n",
" tf_train_labels = tf.constant(train_labels[:train_subset])\n",
" tf_valid_dataset = tf.constant(valid_dataset)\n",
" tf_test_dataset = tf.constant(test_dataset)\n",
" \n",
" # Variables.\n",
" # These are the parameters that we are going to be training. The weight\n",
" # matrix will be initialized using random values following a (truncated)\n",
" # normal distribution. The biases get initialized to zero.\n",
" weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))\n",
" biases = tf.Variable(tf.zeros([num_labels]))\n",
" \n",
" # Training computation.\n",
" # We multiply the inputs with the weight matrix, and add biases. We compute\n",
" # the softmax and cross-entropy (it's one operation in TensorFlow, because\n",
" # it's very common, and it can be optimized). We take the average of this\n",
" # cross-entropy across all training examples: that's our loss.\n",
" logits = tf.matmul(tf_train_dataset, weights) + biases\n",
" # our regularization goes here, we need to add the l2 penalty to the loss\n",
" loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits)) + beta * tf.nn.l2_loss(weights)\n",
" \n",
" # Optimizer.\n",
" # We are going to find the minimum of this loss using gradient descent.\n",
" optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
" \n",
" # Predictions for the training, validation, and test data.\n",
" # These are not part of training, but merely here so that we can report\n",
" # accuracy figures as we train.\n",
" train_prediction = tf.nn.softmax(logits)\n",
" valid_prediction = tf.nn.softmax(\n",
" tf.matmul(tf_valid_dataset, weights) + biases)\n",
" test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The session algorithm:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Initialized\n",
"Loss at step 0: 79.539413\n",
"Training accuracy: 7.1%\n",
"Validation accuracy: 10.0%\n",
"Loss at step 100: 8.380231\n",
"Training accuracy: 84.0%\n",
"Validation accuracy: 68.0%\n",
"Loss at step 200: 1.551761\n",
"Training accuracy: 91.8%\n",
"Validation accuracy: 72.0%\n",
"Loss at step 300: 0.696794\n",
"Training accuracy: 92.7%\n",
"Validation accuracy: 72.0%\n",
"Loss at step 400: 0.586821\n",
"Training accuracy: 93.2%\n",
"Validation accuracy: 72.0%\n",
"Loss at step 500: 0.572271\n",
"Training accuracy: 93.2%\n",
"Validation accuracy: 72.0%\n",
"Loss at step 600: 0.570211\n",
"Training accuracy: 93.2%\n",
"Validation accuracy: 72.0%\n",
"Loss at step 700: 0.569845\n",
"Training accuracy: 93.2%\n",
"Validation accuracy: 72.0%\n",
"Loss at step 800: 0.569735\n",
"Training accuracy: 93.2%\n",
"Validation accuracy: 72.0%\n",
"Test accuracy: 83.0%\n"
]
}
],
"source": [
"num_steps = 801\n",
"\n",
"with tf.Session(graph=graph) as session:\n",
" # This is a one-time operation which ensures the parameters get initialized as\n",
" # we described in the graph: random weights for the matrix, zeros for the\n",
" # biases. \n",
" tf.global_variables_initializer().run()\n",
" print('Initialized')\n",
" for step in range(num_steps):\n",
" # Run the computations. We tell .run() that we want to run the optimizer,\n",
" # and get the loss value and the training predictions returned as numpy\n",
" # arrays.\n",
" _, l, predictions = session.run([optimizer, loss, train_prediction])\n",
" if (step % 100 == 0):\n",
" print('Loss at step %d: %f' % (step, l))\n",
" print('Training accuracy: %.1f%%' % accuracy(\n",
" predictions, train_labels[:train_subset, :]))\n",
" # Calling .eval() on valid_prediction is basically like calling run(), but\n",
" # just to get that one numpy array. Note that it recomputes all its graph\n",
" # dependencies.\n",
" print('Validation accuracy: %.1f%%' % accuracy(\n",
" valid_prediction.eval(), valid_labels))\n",
" print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's add the regularization to the neural network that we built earlier..."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"batch_size = 124\n",
"hidden_nodes = 1024\n",
"beta = 0.12\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
"\n",
" # Input data. .\n",
" tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size, image_size * image_size))\n",
" tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
" tf_valid_dataset = tf.constant(valid_dataset)\n",
" tf_test_dataset = tf.constant(test_dataset)\n",
" \n",
" # Variables.\n",
" weights_layer = tf.Variable(tf.truncated_normal([image_size * image_size, hidden_nodes]))\n",
" biases_layer = tf.Variable(tf.zeros([hidden_nodes]))\n",
" weights_output = tf.Variable(tf.truncated_normal([hidden_nodes, num_labels]))\n",
" biases_output = tf.Variable(tf.zeros([num_labels]))\n",
" \n",
" def neural_net(input):\n",
" h1 = tf.nn.relu(tf.matmul(input, weights_layer) + biases_layer)\n",
" return tf.matmul(h1, weights_output) + biases_output\n",
" \n",
" logits = neural_net(tf_train_dataset)\n",
" # add the regularization of ALL weights to our cost function\n",
" loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(\n",
" logits, tf_train_labels)) + beta * (tf.nn.l2_loss(weights_layer) +tf.nn.l2_loss(weights_output))\n",
" \n",
" # Optimizer.\n",
" optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
" \n",
" # Predictions for the training, validation, and test data.\n",
" train_prediction = tf.nn.softmax(logits)\n",
" valid_prediction = tf.nn.softmax(neural_net(tf_valid_dataset))\n",
" test_prediction = tf.nn.softmax(neural_net(tf_test_dataset))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Run session includes regularization."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"WARNING:tensorflow:From <ipython-input-14-b7f06fa69307>:4 in <module>.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.\n",
"Instructions for updating:\n",
"Use `tf.global_variables_initializer` instead.\n",
"loss in step 0: 38104.472656\n",
"batch accuracy: 12.096774193548388\n",
"Validation accuracy: 28.0\n",
"loss in step 500: 1.490799\n",
"batch accuracy: 77.41935483870968\n",
"Validation accuracy: 65.0\n",
"loss in step 1000: 1.441092\n",
"batch accuracy: 80.64516129032258\n",
"Validation accuracy: 66.0\n",
"Test accuracy: 73.0\n"
]
}
],
"source": [
"num_steps = 1001\n",
"\n",
"with tf.Session(graph=graph) as session:\n",
" tf.initialize_all_variables().run()\n",
" for step in range(num_steps):\n",
" # defining batch index\n",
" offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n",
" # batch for training\n",
" batch_data = train_dataset[offset:(offset + batch_size), :]\n",
" batch_labels = train_labels[offset:(offset + batch_size), :]\n",
" _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict={tf_train_dataset : batch_data, tf_train_labels : batch_labels})\n",
" if (step % 500 == 0):\n",
" print(\"loss in step %d: %f\" % (step, l))\n",
" print(\"batch accuracy: \" , accuracy(predictions, batch_labels))\n",
" print(\"Validation accuracy: \" , accuracy(valid_prediction.eval(), valid_labels))\n",
" print(\"Test accuracy:\" , accuracy(test_prediction.eval(), test_labels))"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "na8xX2yHZzNF"
},
"source": [
"---\n",
"Problem 2\n",
"---------\n",
"Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?\n",
"\n",
"---"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"batch_size = 124\n",
"hidden_nodes = 1024\n",
"beta = 0\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
"\n",
" # Input data. .\n",
" tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size, image_size * image_size))\n",
" tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
" tf_valid_dataset = tf.constant(valid_dataset)\n",
" tf_test_dataset = tf.constant(test_dataset)\n",
" \n",
" # Variables.\n",
" weights_layer = tf.Variable(tf.truncated_normal([image_size * image_size, hidden_nodes]))\n",
" biases_layer = tf.Variable(tf.zeros([hidden_nodes]))\n",
" weights_output = tf.Variable(tf.truncated_normal([hidden_nodes, num_labels]))\n",
" biases_output = tf.Variable(tf.zeros([num_labels]))\n",
" \n",
" def neural_net(input):\n",
" h1 = tf.nn.relu(tf.matmul(input, weights_layer) + biases_layer)\n",
" return tf.matmul(h1, weights_output) + biases_output\n",
" \n",
" logits = neural_net(tf_train_dataset)\n",
" # add the regularization of ALL weights to our cost function\n",
" loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(\n",
" logits, tf_train_labels)) + beta * (tf.nn.l2_loss(weights_layer) +tf.nn.l2_loss(weights_output))\n",
" \n",
" # Optimizer.\n",
" optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
" \n",
" # Predictions for the training, validation, and test data.\n",
" train_prediction = tf.nn.softmax(logits)\n",
" valid_prediction = tf.nn.softmax(neural_net(tf_valid_dataset))\n",
" test_prediction = tf.nn.softmax(neural_net(tf_test_dataset))"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"WARNING:tensorflow:From <ipython-input-19-fbf2bc5ec65d>:8 in <module>.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.\n",
"Instructions for updating:\n",
"Use `tf.global_variables_initializer` instead.\n",
"0\n",
"loss in step 0: 300.864960\n",
"batch accuracy: 12.096774193548388\n",
"Validation accuracy: 26.0\n",
"336\n",
"loss in step 500: 0.000000\n",
"batch accuracy: 100.0\n",
"Validation accuracy: 73.0\n",
"296\n",
"loss in step 1000: 0.000000\n",
"batch accuracy: 100.0\n",
"Validation accuracy: 73.0\n",
"Test accuracy: 77.0\n"
]
}
],
"source": [
"num_steps = 1001\n",
"\n",
"train_restricted = train_dataset[:500 ,:]\n",
"train_labels_restricted = train_labels[:500 ,:]\n",
"\n",
"\n",
"with tf.Session(graph=graph) as session:\n",
" tf.initialize_all_variables().run()\n",
" for step in range(num_steps):\n",
" # defining batch index\n",
" offset = (step * batch_size) % (train_labels_restricted.shape[0] - batch_size)\n",
" # batch for training\n",
" batch_data = train_restricted[offset:(offset + batch_size), :]\n",
" batch_labels = train_labels_restricted[offset:(offset + batch_size), :]\n",
" _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict={tf_train_dataset : batch_data, tf_train_labels : batch_labels})\n",
" if (step % 500 == 0):\n",
" print(offset) \n",
" print(\"loss in step %d: %f\" % (step, l))\n",
" print(\"batch accuracy: \" , accuracy(predictions, batch_labels))\n",
" print(\"Validation accuracy: \" , accuracy(valid_prediction.eval(), valid_labels))\n",
" print(\"Test accuracy:\" , accuracy(test_prediction.eval(), test_labels))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see overfitting!, the batch accuracy is 100% while the test accuracy is less, it seems like we got a model with high variance( overfitted) and fits perfectly well to the training set.\n",
"applying regularization caused it to perform better.. so increasing the size of the data set or applying regularization suppose to reslove this problem...!"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "ww3SCBUdlkRc"
},
"source": [
"---\n",
"Problem 3\n",
"---------\n",
"Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides `nn.dropout()` for that, but you have to make sure it's only inserted during training.\n",
"\n",
"What happens to our extreme overfitting case?\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"I inserted my training model here again, let's introduce the dropout in the training session"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [],
"source": [
"batch_size = 124\n",
"hidden_nodes = 1024\n",
"beta = 0.005\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
"\n",
" # Input data. .\n",
" tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size, image_size * image_size))\n",
" tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
" tf_valid_dataset = tf.constant(valid_dataset)\n",
" tf_test_dataset = tf.constant(test_dataset)\n",
" \n",
" # Variables.\n",
" weights_layer = tf.Variable(tf.truncated_normal([image_size * image_size, hidden_nodes]))\n",
" biases_layer = tf.Variable(tf.zeros([hidden_nodes]))\n",
" weights_output = tf.Variable(tf.truncated_normal([hidden_nodes, num_labels]))\n",
" biases_output = tf.Variable(tf.zeros([num_labels]))\n",
" \n",
" def neural_net(input):\n",
" h1 = tf.nn.relu(tf.matmul(input, weights_layer) + biases_layer)\n",
" drop = tf.nn.dropout(h1, 0.5)\n",
" return tf.matmul(drop, weights_output) + biases_output\n",
" \n",
" # let's introduce the drop out here, since this the command executing the training session \n",
" logits = neural_net(tf.nn.dropout(tf_train_dataset,0.5, name='drop'))\n",
" # add the regularization of ALL weights to our cost function\n",
" loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(\n",
" logits, tf_train_labels)) + beta * (tf.nn.l2_loss(weights_layer) +tf.nn.l2_loss(weights_output))\n",
" \n",
" # Optimizer.\n",
" optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
" \n",
" # Predictions for the training, validation, and test data.\n",
" train_prediction = tf.nn.softmax(logits)\n",
" valid_prediction = tf.nn.softmax(neural_net(tf_valid_dataset))\n",
" test_prediction = tf.nn.softmax(neural_net(tf_test_dataset))"
]
},
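{
"cell_type": "markdown",
"metadata": {},
"source": [
"One caveat with the graph above: `neural_net` applies dropout unconditionally, so the validation and test predictions are computed with dropout active as well, which makes the evaluation slightly stochastic. A common way around this is to gate dropout with a `keep_prob` placeholder and feed 0.5 during training and 1.0 during evaluation; the next cell is only a minimal, hypothetical sketch of that pattern (new names like `sketch_graph` and `keep_prob` are illustrative, not the graph actually trained here)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only (hypothetical graph, assumes TensorFlow 1.x): dropout gated by a placeholder,\n",
"# so feeding keep_prob=1.0 at evaluation time disables it and keeps predictions deterministic.\n",
"sketch_graph = tf.Graph()\n",
"with sketch_graph.as_default():\n",
"    x = tf.placeholder(tf.float32, shape=(None, image_size * image_size))\n",
"    keep_prob = tf.placeholder(tf.float32)  # 0.5 while training, 1.0 while evaluating\n",
"    w1 = tf.Variable(tf.truncated_normal([image_size * image_size, hidden_nodes]))\n",
"    b1 = tf.Variable(tf.zeros([hidden_nodes]))\n",
"    w2 = tf.Variable(tf.truncated_normal([hidden_nodes, num_labels]))\n",
"    b2 = tf.Variable(tf.zeros([num_labels]))\n",
"    h1 = tf.nn.dropout(tf.nn.relu(tf.matmul(x, w1) + b1), keep_prob)\n",
"    sketch_logits = tf.matmul(h1, w2) + b2\n",
"# Training step:   feed_dict={x: batch_data, keep_prob: 0.5, ...}\n",
"# Evaluation step: feed_dict={x: valid_dataset, keep_prob: 1.0, ...}"
]
},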
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's run the training and watch the results (trying to avoid overfitting here, remember?)"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"WARNING:tensorflow:From <ipython-input-36-b0e816ddfd34>:4 in <module>.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.\n",
"Instructions for updating:\n",
"Use `tf.global_variables_initializer` instead.\n",
"0\n",
"loss in step 0: 2195.943848\n",
"batch accuracy: 11.290322580645162\n",
"Validation accuracy: 31.0\n",
"92\n",
"loss in step 500: 142.548004\n",
"batch accuracy: 83.06451612903226\n",
"Validation accuracy: 69.0\n",
"184\n",
"loss in step 1000: 12.175639\n",
"batch accuracy: 90.3225806451613\n",
"Validation accuracy: 76.0\n",
"276\n",
"loss in step 1500: 1.684507\n",
"batch accuracy: 91.93548387096774\n",
"Validation accuracy: 71.0\n",
"368\n",
"loss in step 2000: 0.735709\n",
"batch accuracy: 94.35483870967742\n",
"Validation accuracy: 74.0\n",
"460\n",
"loss in step 2500: 0.635475\n",
"batch accuracy: 95.96774193548387\n",
"Validation accuracy: 73.0\n",
"552\n",
"loss in step 3000: 0.548180\n",
"batch accuracy: 97.58064516129032\n",
"Validation accuracy: 74.0\n",
"Test accuracy: 84.0\n"
]
}
],
"source": [
"num_steps = 3001\n",
"\n",
"with tf.Session(graph=graph) as session:\n",
" tf.initialize_all_variables().run()\n",
" for step in range(num_steps):\n",
" # defining batch index\n",
" offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n",
" # batch for training\n",
" batch_data = train_dataset[offset:(offset + batch_size), :]\n",
" batch_labels = train_labels[offset:(offset + batch_size), :]\n",
" _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict={tf_train_dataset : batch_data, tf_train_labels : batch_labels})\n",
" if (step % 500 == 0):\n",
" print(offset) \n",
" print(\"loss in step %d: %f\" % (step, l))\n",
" print(\"batch accuracy: \" , accuracy(predictions, batch_labels))\n",
" print(\"Validation accuracy: \" , accuracy(valid_prediction.eval(), valid_labels))\n",
" print(\"Test accuracy:\" , accuracy(test_prediction.eval(), test_labels))"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "-b1hTz3VWZjw"
},
"source": [
"---\n",
"Problem 4\n",
"---------\n",
"\n",
"Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is [97.1%](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html?showComment=1391023266211#c8758720086795711595).\n",
"\n",
"One avenue you can explore is to add multiple layers.\n",
"\n",
"Another one is to use learning rate decay:\n",
"\n",
" global_step = tf.Variable(0) # count the number of steps taken.\n",
" learning_rate = tf.train.exponential_decay(0.5, global_step, ...)\n",
" optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)\n",
" \n",
" ---\n"
]
},
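{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, a minimal sketch of the learning rate decay idea mentioned above: `tf.train.exponential_decay` multiplies the starting rate by `decay_rate` every `decay_steps` steps, and passing `global_step` to the optimizer keeps the step counter updated. The decay constants below are hypothetical placeholders, not tuned values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only (assumes TensorFlow 1.x; decay constants are arbitrary examples).\n",
"# Inside a graph, this would replace the fixed 0.5 learning rate used above.\n",
"decay_graph = tf.Graph()\n",
"with decay_graph.as_default():\n",
"    global_step = tf.Variable(0, trainable=False)  # incremented once per optimizer step\n",
"    learning_rate = tf.train.exponential_decay(0.5, global_step,\n",
"                                               decay_steps=500, decay_rate=0.9, staircase=True)\n",
"    # ...define the model and loss as in the cells above, then:\n",
"    # optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)"
]
},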
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ok, Here we will add just one more layer since my computer has no GPU (dammit!) and I still need to get a virutal machine. Let's build the model, just like before."
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [],
"source": [
"batch_size = 124\n",
"hidden_nodes = 1024\n",
"beta = 0\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
"\n",
" # Input data. .\n",
" tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size, image_size * image_size))\n",
" tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
" tf_valid_dataset = tf.constant(valid_dataset)\n",
" tf_test_dataset = tf.constant(test_dataset)\n",
" \n",
" # Variables.\n",
" weights_layer1 = tf.Variable(tf.truncated_normal([image_size * image_size, hidden_nodes]))\n",
" biases_layer1 = tf.Variable(tf.zeros([hidden_nodes]))\n",
" # adding another layer\n",
" weights_layer2 = tf.Variable(tf.truncated_normal([hidden_nodes, hidden_nodes]))\n",
" biases_layer2 = tf.Variable(tf.zeros([hidden_nodes]))\n",
" # output layer\n",
" weights_output = tf.Variable(tf.truncated_normal([hidden_nodes, num_labels]))\n",
" biases_output = tf.Variable(tf.zeros([num_labels]))\n",
" \n",
" def neural_net(input):\n",
" h1 = tf.nn.relu(tf.matmul(input, weights_layer1) + biases_layer1)\n",
" drop = tf.nn.dropout(h1, 0.5)\n",
" # Let's add another layer here, same size as h1\n",
" h2 = tf.nn.relu(tf.matmul(drop, weights_layer2) + biases_layer2)\n",
" drop = tf.nn.dropout(h2, 0.5)\n",
" return tf.matmul(drop, weights_output) + biases_output\n",
" \n",
" # let's introduce the drop out here, since this the command executing the training session \n",
" logits = neural_net(tf_train_dataset)\n",
" # add the regularization of ALL weights to our cost function\n",
" loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(\n",
" logits, tf_train_labels)) + beta * (tf.nn.l2_loss(weights_layer1) + tf.nn.l2_loss(weights_layer2) +tf.nn.l2_loss(weights_output))\n",
" \n",
" # Optimizer.\n",
" optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
" \n",
" # Predictions for the training, validation, and test data.\n",
" train_prediction = tf.nn.softmax(logits)\n",
" valid_prediction = tf.nn.softmax(neural_net(tf_valid_dataset))\n",
" test_prediction = tf.nn.softmax(neural_net(tf_test_dataset))"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"WARNING:tensorflow:From <ipython-input-50-b0e816ddfd34>:4 in <module>.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.\n",
"Instructions for updating:\n",
"Use `tf.global_variables_initializer` instead.\n",
"0\n",
"loss in step 0: 14492.842773\n",
"batch accuracy: 7.258064516129032\n",
"Validation accuracy: 10.0\n",
"92\n",
"loss in step 500: nan\n",
"batch accuracy: 10.483870967741936\n",
"Validation accuracy: 10.0\n"
]
},
{
"ename": "KeyboardInterrupt",
"evalue": "",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
"\u001b[1;32m<ipython-input-50-b0e816ddfd34>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m()\u001b[0m\n\u001b[0;32m 9\u001b[0m \u001b[0mbatch_data\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mtrain_dataset\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0moffset\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0moffset\u001b[0m \u001b[1;33m+\u001b[0m \u001b[0mbatch_size\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m:\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 10\u001b[0m \u001b[0mbatch_labels\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mtrain_labels\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0moffset\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0moffset\u001b[0m \u001b[1;33m+\u001b[0m \u001b[0mbatch_size\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m:\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 11\u001b[1;33m \u001b[0m_\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0ml\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mpredictions\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0msession\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0moptimizer\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mloss\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtrain_prediction\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mfeed_dict\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;33m{\u001b[0m\u001b[0mtf_train_dataset\u001b[0m \u001b[1;33m:\u001b[0m \u001b[0mbatch_data\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtf_train_labels\u001b[0m \u001b[1;33m:\u001b[0m \u001b[0mbatch_labels\u001b[0m\u001b[1;33m}\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 12\u001b[0m \u001b[1;32mif\u001b[0m \u001b[1;33m(\u001b[0m\u001b[0mstep\u001b[0m \u001b[1;33m%\u001b[0m \u001b[1;36m500\u001b[0m \u001b[1;33m==\u001b[0m \u001b[1;36m0\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 13\u001b[0m \u001b[0mprint\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0moffset\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32m~\\Miniconda2\\envs\\py35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\u001b[0m in \u001b[0;36mrun\u001b[1;34m(self, fetches, feed_dict, options, run_metadata)\u001b[0m\n\u001b[0;32m 764\u001b[0m \u001b[1;32mtry\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 765\u001b[0m result = self._run(None, fetches, feed_dict, options_ptr,\n\u001b[1;32m--> 766\u001b[1;33m run_metadata_ptr)\n\u001b[0m\u001b[0;32m 767\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mrun_metadata\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 768\u001b[0m \u001b[0mproto_data\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mtf_session\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mTF_GetBuffer\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mrun_metadata_ptr\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32m~\\Miniconda2\\envs\\py35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\u001b[0m in \u001b[0;36m_run\u001b[1;34m(self, handle, fetches, feed_dict, options, run_metadata)\u001b[0m\n\u001b[0;32m 962\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mfinal_fetches\u001b[0m \u001b[1;32mor\u001b[0m \u001b[0mfinal_targets\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 963\u001b[0m results = self._do_run(handle, final_targets, final_fetches,\n\u001b[1;32m--> 964\u001b[1;33m feed_dict_string, options, run_metadata)\n\u001b[0m\u001b[0;32m 965\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 966\u001b[0m \u001b[0mresults\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32m~\\Miniconda2\\envs\\py35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\u001b[0m in \u001b[0;36m_do_run\u001b[1;34m(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)\u001b[0m\n\u001b[0;32m 1012\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mhandle\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mNone\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 1013\u001b[0m return self._do_call(_run_fn, self._session, feed_dict, fetch_list,\n\u001b[1;32m-> 1014\u001b[1;33m target_list, options, run_metadata)\n\u001b[0m\u001b[0;32m 1015\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 1016\u001b[0m return self._do_call(_prun_fn, self._session, handle, feed_dict,\n",
"\u001b[1;32m~\\Miniconda2\\envs\\py35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\u001b[0m in \u001b[0;36m_do_call\u001b[1;34m(self, fn, *args)\u001b[0m\n\u001b[0;32m 1019\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0m_do_call\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mfn\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m*\u001b[0m\u001b[0margs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 1020\u001b[0m \u001b[1;32mtry\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m-> 1021\u001b[1;33m \u001b[1;32mreturn\u001b[0m \u001b[0mfn\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m*\u001b[0m\u001b[0margs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 1022\u001b[0m \u001b[1;32mexcept\u001b[0m \u001b[0merrors\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mOpError\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 1023\u001b[0m \u001b[0mmessage\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mcompat\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mas_text\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0me\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmessage\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32m~\\Miniconda2\\envs\\py35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\u001b[0m in \u001b[0;36m_run_fn\u001b[1;34m(session, feed_dict, fetch_list, target_list, options, run_metadata)\u001b[0m\n\u001b[0;32m 1001\u001b[0m return tf_session.TF_Run(session, options,\n\u001b[0;32m 1002\u001b[0m \u001b[0mfeed_dict\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mfetch_list\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtarget_list\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m-> 1003\u001b[1;33m status, run_metadata)\n\u001b[0m\u001b[0;32m 1004\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 1005\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0m_prun_fn\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0msession\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mhandle\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mfeed_dict\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mfetch_list\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;31mKeyboardInterrupt\u001b[0m: "
]
}
],
"source": [
"num_steps = 3001\n",
"\n",
"with tf.Session(graph=graph) as session:\n",
" tf.initialize_all_variables().run()\n",
" for step in range(num_steps):\n",
" # defining batch index\n",
" offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n",
" # batch for training\n",
" batch_data = train_dataset[offset:(offset + batch_size), :]\n",
" batch_labels = train_labels[offset:(offset + batch_size), :]\n",
" _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict={tf_train_dataset : batch_data, tf_train_labels : batch_labels})\n",
" if (step % 500 == 0):\n",
" print(offset) \n",
" print(\"loss in step %d: %f\" % (step, l))\n",
" print(\"batch accuracy: \" , accuracy(predictions, batch_labels))\n",
" print(\"Validation accuracy: \" , accuracy(valid_prediction.eval(), valid_labels))\n",
" print(\"Test accuracy:\" , accuracy(test_prediction.eval(), test_labels))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ok, for some reason the loss is sky rocketing, I don't know exactly why, but it's making my performence worse and I can't seem to find the problem. It compiles but I don't know why it happens."
]
}
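,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged guess at the exploding loss: with two 1024-unit layers initialized by `tf.truncated_normal` at its default standard deviation of 1.0, the initial logits are huge (note the starting loss of ~14000), and a 0.5 learning rate then drives the weights to NaN. A common fix is to scale the initial weights by roughly `sqrt(2 / fan_in)` and use a smaller learning rate; the cell below is only a sketch of that change, not a verified result."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only (untested suggestion): scaled initial weights for the two hidden layers.\n",
"# stddev = sqrt(2 / fan_in) keeps the ReLU activations roughly unit-variance at the start.\n",
"weights_layer1 = tf.Variable(tf.truncated_normal(\n",
"    [image_size * image_size, hidden_nodes], stddev=np.sqrt(2.0 / (image_size * image_size))))\n",
"weights_layer2 = tf.Variable(tf.truncated_normal(\n",
"    [hidden_nodes, hidden_nodes], stddev=np.sqrt(2.0 / hidden_nodes)))\n",
"weights_output = tf.Variable(tf.truncated_normal(\n",
"    [hidden_nodes, num_labels], stddev=np.sqrt(2.0 / hidden_nodes)))\n",
"# ...and a smaller learning rate, e.g. tf.train.GradientDescentOptimizer(0.1)"
]
}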
],
"metadata": {
"colab": {
"default_view": {},
"name": "3_regularization.ipynb",
"provenance": [],
"version": "0.3.2",
"views": {}
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}