Implement a NN - Part 3: Multiclass using Softmax.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"toc_visible": true,
"authorship_tag": "ABX9TyPaRzqt+DvEuRH18iyZujos",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/MaverickMeerkat/0419ce93dbc457fd611ec69cfd81cd64/implement-a-nn-part-3-multiclass-using-softmax.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"In this notebook we will revisit the manual implementation of a NN, but for the multiclass classification problem.\n",
"\n",
"Like always we will start by loading the necessary libraries. \n",
"\n",
"Note that we are only using `torchvision` for the MNIST dataset - loading and handling it. We are not using any of the `pytorch` capabilities for actual training. "
],
"metadata": {
"id": "ukx_6oldn1E_"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np # for doing all the math and matrix work\n",
"import matplotlib.pyplot as plt # for a bit of graphing\n",
"\n",
"from torchvision import datasets, transforms # for the MNIST dataset"
],
"metadata": {
"id": "M4CDQn8en9s0"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Set plotting DPI for bigger plots, and a random seed for reproducibility."
],
"metadata": {
"id": "uzGYsyc8yrMs"
}
},
{
"cell_type": "code",
"source": [
"plt.rcParams['figure.dpi'] = 120 # set plotting dpi\n",
"\n",
"# set seed for reproducibility\n",
"random_seed = 247\n",
"np.random.seed(random_seed)"
],
"metadata": {
"id": "HQpxM713yphQ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Multiclass Data: MNIST"
],
"metadata": {
"id": "cS-VrgU6Sq4b"
}
},
{
"cell_type": "markdown",
"source": [
"We will use the famous MNIST data, which is a set of 28x28 pixels grayscale images of hand written digits from 0 to 9. The training set has 60K images, which we want to classify to the 0-9 digits. \n",
"\n",
"We download and save the data in the \"data\" folder. We will transform it into a tensor (d-dim array) instead of PIL image object. "
],
"metadata": {
"id": "KnvcrC7aStYo"
}
},
{
"cell_type": "code",
"source": [
"training_data = datasets.MNIST(\n",
" root=\"data\",\n",
" train=True,\n",
" download=True,\n",
" transform=transforms.ToTensor()\n",
")"
],
"metadata": {
"id": "98dSnsU3Ss9_"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Here's an example of an image:"
],
"metadata": {
"id": "_ZfeITunUJh5"
}
},
{
"cell_type": "code",
"source": [
"x0 = training_data[0][0] # 1st observation, take the \"x\", i.e. the features\n",
"plt.imshow(x0[0], cmap=\"gray\")"
],
"metadata": {
"id": "EdlGKForUJFO",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 450
},
"outputId": "99cfe2f8-a999-455f-bc84-dc45d4212124"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7f65cfb53bb0>"
]
},
"metadata": {},
"execution_count": 4
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<Figure size 720x480 with 1 Axes>"
],
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAaMAAAGgCAYAAAAHAQhaAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAASdAAAEnQB3mYfeAAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAZ0UlEQVR4nO3de4xc5Znn8e+zJthc7aAABkIG4RnuE4iGezICRBAkgURcEokJ2TC5IA2xFmkGKwlSQGhFwiTAJA4gEWkEExhxj5ZLEkIibtk1MkHcgtmEMNLKXBqZAWzA+BLjZ/+oam1v0W33qe6up6r6+5FKhd/zPn2eOhz3z+fUqVORmUiSVOm/VDcgSZJhJEkqZxhJksoZRpKkcoaRJKmcYSRJKmcYSZLKGUaSpHKGkSSpnGEkSSq3TXUDTUXEfOA44EVgY3E7kqT32xbYG3g4M9dMpmDgwohWEN1V3YQkaas+B9w9mYk9OU0XEXMj4p8j4pWIWBcRyyPipC5/3IvT2pwkaaZM+vd1r94zugH4R+DfgQuA94BfRMQnuvhZnpqTpMEw6d/XMdNfIRERRwLLgSWZeUV7bB7wLLAqM49t+PMObtdKkvrbIZm5YjITe3FkdBatI6GfjA5k5nrgX4FjImLvHvQgSepjvbiA4WPA85n5Vsf4Y+3nw5jgvGJE7Abs2jG8aHrbkyRV60UY7QGMjDM+OrbnFmrPBy6Z9o4kSX2lF2G0HbBhnPH1Y5ZP5Frg9o6xRXhptyQNlV6E0Tpg7jjj88YsH1dmrgJWjR2LiOnrTJLUF3pxAcMIrVN1nUbHXulBD5KkPtaLMHoK2C8idu4YP2rMcknSLNaLMLoDmAOcNzoQEXOBvweWZ6Z3VJCkWW7G3zPKzOURcTvwvfal2i8AXwb2Ab460+uXJPW/Xt0o9b8C/x34EvBB4Bng1Mx8pEfrlyT1sRm/HdB083ZAkjQw+up2QJIkbZFhJEkqZxhJksoZRpKkcoaRJKmcYSRJKmcYSZLKGUaSpHKGkSSpnGEkSSpnGEmSyhlGkqRyhpEkqZxhJEkqZxhJksoZRpKkcoaRJKmcYSRJKmcYSZLKGUaSpHKGkSSpnGEkSSpnGEmSyhlGkqRyhpEkqZxhJEkqZxhJksoZRpKkcoaRJKmcYSRJKmcYSZLKGUaSpHKGkSSpnGEkSSpnGEmSyhlGkqRyhpEkqZxhJEkqZxhJksoZRpKkcoaRJKmcYSRJKmcYSZLKGUaSpHKGkSSpnGEkSSpnGEmSyhlGkqRyhpEkqdw21Q1I/W7OnDmN5s+fP3+GOpm6xYsXN67ZfvvtG9fsv//+jWu+8Y1vNK654oorGtecffbZjWvWr1/fuObyyy9vXHPppZc2rhkWHhlJksrNeBhFxPERkRM8jp7p9UuS+l8vT9MtBX7XMfZCD9cvSepTvQyj32bmHT1cnyRpQPT0PaOI2CkivGhCkvT/6WUwXA/sCLwXEb8FlmTm41sqiIjdgF07hhfNUH+SpCK9CKONwJ3AL4D/BA4CLgR+GxHHZuaTW6g9H7hk5luUJFWa8TDKzGXAsjFDd0fEHcAzwPeAU7ZQfi1we8fYIuCuaW1SklSq5P2bzHwhIu4CzoiIOZn53gTzVgGrxo5FRC9alCT1UOWHXl8EtgV2KOxBktQHKsNoX2A98E5hD5KkPtCLOzB0Xg1HRBwKfBa4PzM3z3QPkqT+1ov3jG6NiHW0LmJYRetquvOAd4Fv9WD96rGPfOQjjWu23XbbxjXHHnts45pPfOITjWsWLFjQaP6ZZ57ZeB3D5qWXXmpcs3Tp0sY1p59+euOat99+u3HN008/3bjm4Ycfblwzm/UijP4H8EXgH4GdgdeAnwGXZqa3A5Ik9eTS7qW07ksnSdK4/AoJSVI5w0iSVM4wkiSVM4wkSeUMI0lSOcNIklTOMJIklTOMJEnlDCNJUjnDSJJULjKzuodGIuJg4NnqPmaLww47rHHNAw880Lhm/vz5jWvUG5s3N7+x/le+8pXGNe+805tvkxkZGWlc8+abbzau+eMf/9i4ZggdkpkrJjPRIyNJUjnDSJJUzjCSJJUzjCRJ5QwjSVI5w0iSVM4wkiSVM4wkSeUMI0lSOcNIklTOMJIklTOMJEnltqluQP1t5cqVjWtef/31xjWz/Uapy5cvb1yzevXqxjUnnHBC45qNGzc2rrnxxhsb12h288hIklTOMJIklTOMJEnlDCNJUjnDSJJUzjCSJJUzjCRJ5QwjSVI5w0iSVM4wkiSVM4wkSeUMI0lSOW+Uqi164403GtcsWbKkcc2pp57auObJJ59sXLN06dLGNU099dRTjWtOOumkxjVr165tXHPwwQc3rrngggsa10hNeWQkSSpnGEmSyhlGkqRyhpEkqZxhJEkqZxhJksoZRpKkcoaRJKmcYSRJKmcYSZLKGUaSpHKGkSSpXGRmdQ+NRMTBwLPVfWh67bzzzo1r3n777cY11113XeOar371q43mn3POOY3XcfPNNzeukQbAIZm5YjITuz4yiogdI+LSiLgvIt6IiIyIcyeYe2B73jvtuTdGxK7drluSNFymcpruQ8DFwIHA0xNNiogPA48AfwlcBFwBfAb4dURsO4X1S5KGxFS+z2gE2CMzX42Iw4HfTTDvImAH4G8ycyVARDwG/Bo4F/jJFHqQJA2Bro+MMnNDZr46ialnAveOBlG79jfA88AXul2/JGl4zOjVdBGxF7Ab8Pg4ix8DPjaT65ckDYaZ/trxPdrPI+MsGwF2iYi5mblhvOKI2A3ovNBh0TT2J0nqAzMdRtu1n8cLm/Vj5owbRsD5wCXT3ZQkqb/MdBitaz/PHWfZvI4547kWuL1jbBFw1xT7kiT1kZkOo9HTc3uMs2wP4I2JTtEBZOYqYNXYsYiYvu4kSX1hRi9gyMyXgdeAw8dZfCTw1EyuX5I0GHpxb7o7gVMjYu/RgYg4EdiP95+CkyTNQlM6TRcRi4EFwJ7todPad1wA+HFmrgG+C3weeDAifgTsCCwBfg9cP5X1S5KGw1TfM7oQ+Isxfz6j/QC4CViTmS9GxHHAVcDlwEbg58A/ben9Is0ub731Vk/Ws2bNmhlfx9e//vXGNbfeemvjms2bNzeukfrVlMIoM/eZ5LwVwMlTWZckaXj5fUaSpHKGkSSpnGEkSSpnGEmSyhlGkqRyhpEkqZxhJEkqZxhJksoZRpKkcoaRJKmcYSRJKheZWd1DIxFxMPBsdR8aTDvssEPjmnvuuafR/OOOO67xOj71qU81rrn//vsb10g9dkj73qRb5ZGRJKmcYSRJKmcYSZLKGUaSpHKGkSSpnGEkSSpnGEmSyhlGkqRyhpEkqZxhJEkqZxhJksoZRpKkct4oVdqKRYsWNZr/xBNPNF7H6tWrG9c8+OCDjWsef/zxxjXXXHNN45pB+72iGeONUiVJg8MwkiSVM4wkSeUMI0lSOcNIklTOMJIklTOMJEnlDCNJUjnDSJJUzjCSJJUzjCRJ5bw3nTTNTj/99MY1119/feOan
XbaqXFNNy666KLGNT/96U8b14yMjDSuUd/z3nSSpMFhGEmSyhlGkqRyhpEkqZxhJEkqZxhJksoZRpKkcoaRJKmcYSRJKmcYSZLKGUaSpHKGkSSpnDdKlfrAIYcc0rjmqquualxz4oknNq7pxnXXXde45rLLLmtc8/LLLzeuUU95o1RJ0uDoOowiYseIuDQi7ouINyIiI+Lccebd0F7W+fjDlDqXJA2NbaZQ+yHgYmAl8DRw/BbmbgC+1jG2ZgrrliQNkamE0QiwR2a+GhGHA7/bwtxNmXnTFNYlSRpiXZ+my8wNmfnqZOdHxJyI2Lnb9UmShlevLmDYHngLWNN+f+maiNixR+uWJPW5qZymm6wR4PvAE7TC7xTgfODQiDg+MzdNVBgRuwG7dgwvmqlGJUk1ZjyMMvPbHUO3RMTzwGXAWcAtWyg/H7hkpnqTJPWHqs8Z/QuwGfjkVuZdCxzS8fjczLYmSeq1Xpyme5/MXBcRrwO7bGXeKmDV2LGImMnWJEkFSo6MImInWp9Teq1i/ZKk/jKjYRQR89rB0+k7QAD3zeT6JUmDYUo3So2IxcACYE/gH4CfAU+2F/8Y+GD7zzcDo7f/ORn4NK0g+kxmbm64Tm+UKgELFixoXHPaaac1rrn++usb13RzOv2BBx5oXHPSSSc1rlFPTfpGqVN9z+hC4C/G/PmM9gPgJmA1cC9wEvBlYA7wAnARcEXTIJIkDacphVFm7jOJaV+ayjokScPPr5CQJJUzjCRJ5QwjSVI5w0iSVM4wkiSVM4wkSeUMI0lSOcNIklTOMJIklTOMJEnlpnSj1AreKFXqrQ0bNjSu2Wab5nca27RpU+Oak08+uXHNQw891LhGXZv0jVI9MpIklTOMJEnlDCNJUjnDSJJUzjCSJJUzjCRJ5QwjSVI5w0iSVM4wkiSVM4wkSeUMI0lSOcNIklSu+d0MJU27j370o41rzjrrrMY1RxxxROOabm562o3nnnuucc0jjzwyA52ogkdGkqRyhpEkqZxhJEkqZxhJksoZRpKkcoaRJKmcYSRJKmcYSZLKGUaSpHKGkSSpnGEkSSpnGEmSynmjVGkr9t9//0bzFy9e3HgdZ5xxRuOahQsXNq7plffee69xzcjISOOazZs3N65Rf/LISJJUzjCSJJUzjCRJ5QwjSVI5w0iSVM4wkiSVM4wkSeUMI0lSOcNIklTOMJIklTOMJEnlDCNJUjlvlKqB1c2NQs8+++zGNU1vfLrPPvs0Xkc/e/zxxxvXXHbZZY1r7r777sY1Gh4eGUmSynUVRhFxRERcHRErImJtRKyMiNsiYr9x5h4YEfdFxDsR8UZE3BgRu069dUnSsOj2NN03gY8DtwPPAAuBxcATEXF0Zj4LEBEfBh4B1gAXATsCFwJ/HRFHZubGKfYvSRoC3YbRVcDfjQ2TiLgV+D3wLeCc9vBFwA7A32Tmyva8x4BfA+cCP+ly/ZKkIdLVabrMXNZ5VJOZfwJWAAeOGT4TuHc0iNrzfgM8D3yhm3VLkobPtF1NFxEB7E4rkIiIvYDdgPEuxXkM+PQkfuZuQOf7S4um1qkkqd9M56XdXwT2Ai5u/3mP9vN4X2w/AuwSEXMzc8MWfub5wCXT16IkqR9NSxhFxAHANcCjwL+1h7drP48XNuvHzNlSGF1L6yKJsRYBd3XXqSSpH005jCJiIfBzWlfMnZWZ77UXrWs/zx2nbF7HnHFl5ipgVcf6um9WktSXphRGETEf+CWwAPjbzHxlzOLR03N7vK+wNfbGVk7RSZJmia7DKCLmAfcA+wGfzMznxi7PzJcj4jXg8HHKjwSe6nbdkqTh0u0dGOYAtwLHAJ/PzEcnmHoncGpE7D2m9kRaAdb5XpAkaZaKzGxeFPFD4AJaR0a3dS7PzJva8/YGngRWAz+idQeGJcBLwBHdnKaLiIOBZxs3rZ7ZfffdG9ccdNBBjWuuvvrqxjUHHHBA45p+tXz58sY1P/jBDxrX3HVX8+uFNm/e3LhGQ+mQzFwxmYndnqY7rP18WvvR6SaAzHwxIo6jdceGy4GNtC52+CffL5IkjeoqjDLz+AZzVwAnd7MeSdLs4FdISJLKGUaSpHKGkSSpnGEkSSpnGEmSyhlGkqRyhpEkqZxhJEkqZxhJksoZRpKkctP5tePqc7vsskvjmuuuu65xzWGHHbb1SR323XffxjX9atmyZY1rrrzyysY1v/rVrxrXrFu3xe+zlMp4ZCRJKmcYSZLKGUaSpHKGkSSpnGEkSSpnGEmSyhlGkqRyhpEkqZxhJEkqZxhJksoZRpKkcoaRJKmcN0rtA0cddVTjmiVLljSuOfLIIxvX7LXXXo1r+tm7777buGbp0qWN5n/3u99tvI61a9c2rpGGiUdGkqRyhpEkqZxhJEkqZxhJksoZRpKkcoaRJKmcYSRJKmcYSZLKGUaSpHKGkSSpnGEkSSpnGEmSynmj1D5w+umn96SmV5577rnGNffee2/jmk2bNjWuufLKKxvXrF69unGNpGY8MpIklTOMJEnlDCNJUjnDSJJUzjCSJJUzjCRJ5QwjSVI5w0iSVM4wkiSVM4wkSeUMI0lSOcNIklQuMrO6h0Yi4mDg2eo+JElbdUhmrpjMRI+MJEnlugqjiDgiIq6OiBURsTYiVkbEbRGxX8e8GyIix3n8YXralyQNg26/z+ibwMeB24FngIXAYuCJiDg6M8eeRtsAfK2jfk2X65UkDaFuw+gq4O8yc+PoQETcCvwe+BZwzpi5mzLzpu5blCQNu65O02XmsrFB1B77E7ACOLBzfkTMiYidu2tRkjTspu1rxyMigN1pBdJY2wNvAdtHxJvAzcA3M/OdSfzM3YBdO4YXTUO7kqQ+Mm1hBHwR2Au4eMzYCPB94AlaR2GnAOcDh0bE8Zm5aSs/83zgkmnsUZLUh6blc0YRcQCwnNZR0d9m5ntbmHsRcBlwdmbespWfO9GR0V1T61iS1AOT/pzRlMMoIhYC/wv4AHB0Zr6ylfnbAe8A12dm51V2k1mfH3qVpMEw6TCa0mm6iJgP/BJYQOuIaItBBJCZ6yLidWCXqaxbkjQ8ug6jiJgH3APsB3wyM5+bZN1OwIeA17pdtyRpuHQVRhExB7gVOAb4XGY+Os6cecAHMvPtjkXfAQK4r5t1S5KGT7dHRlcCn6V1ZLRLRIz9kCvtD7kuBJ6MiJuB0dv/nAx8mlYQeRGCJAno8gKGiHgIOG6i5ZkZEbEA+DFwNLAnMAd4Afh34IrM/HNXDXsBgyQNipm9gCEzj5/EnNXAl7r5+ZKk2cWvkJAklTOMJEnlDCNJUjnDSJJUzjCSJJUzjCRJ5QwjSVI5w0iSVM4wkiSVM4wkSeUMI0lSOcNIklTOMJIklTOMJEnlDCNJUjnDSJJUzjCSJJUzjCRJ5QwjSVI5w0iSVM4wkiSVM4wkSeUGMYy2rW5AkjQpk/59PYhhtHd1A5KkSZn07+vIzJlsZNpFxHzgOOBFYOOYRYuAu4DPAf9R0Fo/cBu4DWb76we3AdRvg21pBdHDmblmMgXbzGw/06/9wu7uHI+I0f/8j8xc
0dOm+oTbwG0w218/uA2gb7bBk00mD+JpOknSkDGMJEnlDCNJUrlhCqPXgEvbz7OV28BtMNtfP7gNYAC3wcBdTSdJGj7DdGQkSRpQhpEkqZxhJEkqZxhJksoZRpKkcoaRJKncwIdRRMyNiH+OiFciYl1ELI+Ik6r76pWIOD4icoLH0dX9TbeI2DEiLo2I+yLijfbrPHeCuQe2573TnntjROza45an3WS3QUTcMMF+8YeCtqdNRBwREVdHxIqIWBsRKyPitojYb5y5w7oPTGobDNI+MHA3Sh3HDcBZwA+BPwHnAr+IiBMy838W9tVrS4HfdYy9UNHIDPsQcDGwEngaOH68SRHxYeARYA1wEbAjcCHw1xFxZGZuHK9uQExqG7RtAL7WMTapuyj3sW8CHwduB54BFgKLgSci4ujMfBaGfh+Y1DZoG4x9IDMH9gEcCSRw4ZixebR+CS+r7q9H2+D49jY4q7qXHr3eucDC9n8f3n7t544z71rgXeAjY8Y+2Z5/XvXr6NE2uAF4p7rfGXj9xwLbdoz9FbAeuGmW7AOT3QYDsw8M+mm6s4D3gJ+MDmTmeuBfgWMiYlZ9EV9E7BQRw3C0O6HM3JCZr05i6pnAvZm5ckztb4DngS/MVH+90GAbABARcyJi55nsqZcyc1l2HNVk5p+AFcCBY4aHeR+Y7DYABmMfGPQw+hjwfGa+1TH+WPv5sB73U+l64C1gfUQ8GBGHVzdUJSL2AnYDHh9n8WO09pvZYnta+8Wa9nsm10TEjtVNTbdofYHP7sB/tv886/aBzm0wxkDsA4P+r+g9gJFxxkfH9uxhL1U2AncCv6C1Ex5E67z4byPi2Mxs9AVXQ2KP9vNE+8YuETE3Mzf0sKcKI8D3gSdo/cPzFOB84NCIOD4zN1U2N82+COxF6700mJ37QOc2gAHaBwY9jLaj9eZcp/Vjlg+1zFwGLBszdHdE3EHrTc3v0dr5ZpvR/+9b2zeG6RfR+2TmtzuGbomI54HLaJ3ivqX3XU2/iDgAuAZ4FPi39vCs2gcm2AYDtQ8M+mm6dbTezO00b8zyWSczXwDuAk6IiDnV/RQY/f/uvvF+/wJspvVG/sCLiIXAz2ldHXZWZr7XXjRr9oEtbIOJ9OU+MOhhNML/Oxwfa3TslR720m9eBLYFdqhupMDoqZmJ9o03huz0zKRl5jrgdWCX6l6mKiLmA78EFgCnZObYv++zYh/YyjYYV7/uA4MeRk8B+41zlchRY5bPVvvSOh3xTnUjvZaZL9P6UrHxLuI4klm8X0TETrQ+pzQwX7o2noiYB9wD7AecmpnPjV0+G/aBrW2DLdT15T4w6GF0BzAHOG90ICLmAn8PLM/MF6sa65XxPk0eEYcCnwXuz8zNve+qL9wJnDr28v6IOJHWX9zby7rqkYiY1/6l0+k7QAD39biladM+9XwrcAzw+cx8dIKpQ7sPTGYbDNo+MPDf9BoRtwGn0zoP+gLwZVr/8jkxMx+p7K0XIuIBWue+lwGraF1Ndx7wZ+CYzPzfhe3NiIhYTOu0xJ7APwA/A0avGvxxZq5p/wJ6ElgN/IjWp++XAC8BRwz6KZqtbQPgg+0/3wyM3vrlZODTtH4JfWZQ/6ESET8ELqB1VHBb5/LMvKk9b2j3gclsg4jYh0HaB6o/dTvVB603I39A6xzxelqfITi5uq8evv7/BiyndQ74z7TeJ7sR+Mvq3mbwNf8fWp+iH++xz5h5BwO/AtYCbwI3AbtX99+LbUArqG6kdYuste2/G88C3wY+UN3/FF/7Q1t47dkxdyj3gclsg0HbBwb+yEiSNPgG/T0jSdIQMIwkSeUMI0lSOcNIklTOMJIklTOMJEnlDCNJUjnDSJJUzjCSJJUzjCRJ5QwjSVI5w0iSVM4wkiSVM4wkSeX+Lymss/2NLOwgAAAAAElFTkSuQmCC\n"
},
"metadata": {
"needs_background": "light"
}
}
]
},
{
"cell_type": "markdown",
"source": [
"This looks like a 5, but could be a 3? Let's verify:"
],
"metadata": {
"id": "hJAuymh5UWVw"
}
},
{
"cell_type": "code",
"source": [
"training_data[0][1] # 1st observation, take the \"y\", i.e. the label"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Fk1d84fSUZC8",
"outputId": "fbf68ef6-092a-4463-da70-0e0228a74294"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"5"
]
},
"metadata": {},
"execution_count": 5
}
]
},
{
"cell_type": "markdown",
"source": [
"We'll convert the dataset to numpy arrays:"
],
"metadata": {
"id": "iKxl05lG5zAq"
}
},
{
"cell_type": "code",
"source": [
"x = training_data.data.numpy()\n",
"y = training_data.targets.numpy()"
],
"metadata": {
"id": "he4biHXk5-jC"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's check the dimensionality of the data:"
],
"metadata": {
"id": "6gPBya8ZDEFp"
}
},
{
"cell_type": "code",
"source": [
"x.shape"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "jo5kTh5XDGi0",
"outputId": "c592e581-5f3c-437d-8256-9859d70984b6"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"(60000, 28, 28)"
]
},
"metadata": {},
"execution_count": 7
}
]
},
{
"cell_type": "markdown",
"source": [
"The overall dimensionality of the data is the product of the 28x28 pixels (x1 for color channels) = 784. We want to pass it to a linear layer, so we need to flatten the tensor. We'll reshape the dataset:"
],
"metadata": {
"id": "-WAvPlShC3PZ"
}
},
{
"cell_type": "code",
"source": [
"x = x.reshape(-1, 784)\n",
"x.shape"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "EiutyBMYCx9R",
"outputId": "9d70f63d-77f3-4cf2-a79b-fbc0ba681410"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"(60000, 784)"
]
},
"metadata": {},
"execution_count": 8
}
]
},
{
"cell_type": "markdown",
"source": [
"# The NN from before\n",
"\n",
"Let's load the code we need for the NN from the previous notebook:"
],
"metadata": {
"id": "L0zUbU8hCIhM"
}
},
{
"cell_type": "code",
"source": [
"class LinearLayer():\n",
" def __init__(self, input_size, output_size, activation_fn):\n",
" self.W = np.random.randn(input_size, output_size)*0.1\n",
" self.b = np.zeros(output_size)\n",
" self.activation_fn = activation_fn\n",
" self.input = None\n",
" self.output = None\n",
" self.grad_W = None\n",
" self.grad_b = None\n",
"\n",
" def forward(self, x):\n",
" self.input = x # a_{l-1}\n",
" self.output = x @ self.W + self.b # z_l\n",
" return self.activation_fn(self.output) # a_l\n",
"\n",
" def backward(self, grad):\n",
" grad = grad * self.activation_fn(self.output, derivative=True)\n",
" self.grad_W = self.input.T @ grad\n",
" self.grad_b = np.sum(grad, axis=0)\n",
" return grad @ self.W.T"
],
"metadata": {
"id": "O2BeeQkUCPFX"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def relu(x, derivative=False):\n",
" if derivative:\n",
" return (x > 0).astype(float)\n",
" else:\n",
" return np.maximum(0, x)\n",
"\n",
"def identity(x, derivative=False):\n",
" if derivative:\n",
" return np.ones_like(x)\n",
" else:\n",
" return x"
],
"metadata": {
"id": "K982wh8OCR4P"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"class NeuralNetwork():\n",
" def __init__(self, layers):\n",
" self.layers = layers\n",
"\n",
" def forward(self, x):\n",
" for layer in self.layers:\n",
" x = layer.forward(x)\n",
" return x\n",
"\n",
" def backward(self, grad):\n",
" for layer in reversed(self.layers):\n",
" grad = layer.backward(grad)\n",
"\n",
" def update(self, learning_rate):\n",
" for layer in self.layers:\n",
" if isinstance(layer, LinearLayer):\n",
" layer.W -= learning_rate * layer.grad_W\n",
" layer.b -= learning_rate * layer.grad_b\n",
"\n",
" def train(self, x, y, learning_rate, loss_fn, num_epochs, batch_size):\n",
" losses = []\n",
" for epoch in range(num_epochs):\n",
" indices = np.random.permutation(len(x))\n",
" loss = 0\n",
" for i in range(0, len(x), batch_size):\n",
" x_batch = x[indices[i:i+batch_size]]\n",
" y_batch = y[indices[i:i+batch_size]]\n",
" y_pred = self.forward(x_batch)\n",
" loss_i, grad = loss_fn(y_batch, y_pred)\n",
" loss += loss_i\n",
" self.backward(grad)\n",
" self.update(learning_rate)\n",
" print(f'Epoch {epoch}, loss: {loss}')\n",
" losses.append(loss)\n",
" return losses"
],
"metadata": {
"id": "aRkRgNoBRjLT"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Modifying the NN"
],
"metadata": {
"id": "46NfpZTo6Qgk"
}
},
{
"cell_type": "markdown",
"source": [
"The modifications we need are as follows:\n",
"- We need a softmax activation function\n",
"- We need a (non-binary) cross entropy loss function\n",
"\n",
"Remember that the Cross Entropy loss is equal to:\n",
"\n",
"$$ \\mathcal L = -\\sum_{i=1}^n \\sum_{c=1}^C y_{ic}\\log \\hat y_{ic}\n",
"$$\n",
"\n",
"We saw that if we actually combine the gradient w.r.t. the inputs of the last layer, we get something quite easy: \n",
"\n",
"$$ \\frac{\\partial \\mathcal L}{\\partial z_L} = \\hat y - y = a - y$$\n",
"\n",
"So, we are going to **use a trick and combine the loss function with the softmax**. This way the derivative calculations will be simpler. \n",
"\n",
"<font size=\"2\">[In the last section of this notebook I will also show a way to not do this trick, but this will require work with tensors, as the $\\frac{\\partial a_L}{\\partial z_L}$ derivative is a $C\\times C$ matrix for each observation, so a $(n_{batch}, C, C)$ tensor]. </font>\n",
"\n",
"So, the outputs of the final layer will be non normalized - i.e., a linear layer with identity activation. They are sometimes also known as the \"logits\" (as they are modeling $\\log \\frac{p_{c}}{1-p_c}$).\n",
"\n"
],
"metadata": {
"id": "A5UHBYQeyoep"
}
},
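{
"cell_type": "markdown",
"source": [
"<font size=\"2\">[A brief sketch of the derivation, for reference: since $a_c = \\frac{e^{z_c}}{\\sum_k e^{z_k}}$ and $\\sum_c y_c = 1$, the loss for one observation is $\\mathcal L = -\\sum_c y_c \\log a_c = -\\sum_c y_c z_c + \\log \\sum_k e^{z_k}$, and therefore $\\frac{\\partial \\mathcal L}{\\partial z_j} = -y_j + \\frac{e^{z_j}}{\\sum_k e^{z_k}} = a_j - y_j$.]</font>"
],
"metadata": {}
},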
{
"cell_type": "markdown",
"source": [
"In the following code remember that the logits are a $(n_{batch},C)$ matrix. We are 1st going to subtract the maximum from each row in order to avoid numerical problems (overflow). Then we'll calculate the normalizing constant per observation, and then the log probability. For each row in the resulting matrix we will only take the $y$'th column ($y$ here is an index, not a 1-hot vector), because as we saw, we only focus on the element which is not equal to 0, and try to maximize it. Finally we sum it up. \n",
"\n",
"For the gradient, we have to convert $y$ to a 1-hot-vector. Then we take the exponent of the log probabilities to get the softmax activations, and we subtract it from the 1-hot-$y$'s. \n",
"\n",
"<font size=2>[ an alternative is to simply subtract 1 from the index of the $a$'s corresponding to the true $y$ index]</font>"
],
"metadata": {
"id": "tBgNhdaGAaXt"
}
},
{
"cell_type": "code",
"source": [
"def CrossEntropy(y, logits):\n",
" num_samples = y.shape[0]\n",
" shifted_logits = logits - np.max(logits, axis=1, keepdims=True)\n",
" Z = np.sum(np.exp(shifted_logits), axis=1, keepdims=True)\n",
" log_probs = shifted_logits - np.log(Z)\n",
" loss = -np.sum(log_probs[np.arange(num_samples), y]) / num_samples\n",
"\n",
" y_one_hot = np.zeros((len(y), 10))\n",
" y_one_hot[np.arange(len(y)), y] = 1\n",
" a = np.exp(log_probs)\n",
" delta = (a - y_one_hot) / num_samples\n",
"\n",
" # Alternative: Compute the derivative of the loss w.r.t. the inputs to the softmax layer\n",
" # delta = np.exp(log_probs) # a\n",
" # delta[np.arange(num_samples), y] -= 1\n",
" # delta /= num_samples\n",
"\n",
" return loss, delta"
],
"metadata": {
"id": "5D9JWvTV0lJ4"
},
"execution_count": null,
"outputs": []
},
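{
"cell_type": "markdown",
"source": [
"As an optional sanity check (a small illustrative cell that is not required for training; the variable names here are just placeholders), we can compare the analytic gradient returned by `CrossEntropy` with a numerical central-difference estimate on a tiny random batch:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Optional sanity check: compare the analytic gradient from CrossEntropy\n",
"# with a numerical (central-difference) estimate on a tiny random batch.\n",
"logits_check = np.random.randn(3, 10)\n",
"y_check = np.array([2, 0, 9])\n",
"\n",
"_, analytic = CrossEntropy(y_check, logits_check)\n",
"\n",
"numeric = np.zeros_like(logits_check)\n",
"eps = 1e-6\n",
"for i in range(logits_check.shape[0]):\n",
"    for j in range(logits_check.shape[1]):\n",
"        plus, minus = logits_check.copy(), logits_check.copy()\n",
"        plus[i, j] += eps\n",
"        minus[i, j] -= eps\n",
"        numeric[i, j] = (CrossEntropy(y_check, plus)[0] - CrossEntropy(y_check, minus)[0]) / (2 * eps)\n",
"\n",
"print(np.abs(analytic - numeric).max())  # should be tiny, on the order of 1e-8 or smaller"
],
"metadata": {},
"execution_count": null,
"outputs": []
},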
{
"cell_type": "markdown",
"source": [
"The loss itself doesn't interest us so much. We care more about accuracy here. So let's modify the training function to calculate accuracy (by overloading the train function): "
],
"metadata": {
"id": "3ROohsEdMN1s"
}
},
{
"cell_type": "code",
"source": [
"class NeuralNetwork2(NeuralNetwork):\n",
" def train(self, x, y, learning_rate, loss_fn, num_epochs, batch_size):\n",
" accs = []\n",
" for epoch in range(num_epochs):\n",
" indices = np.random.permutation(len(x))\n",
" correct = 0\n",
" for i in range(0, len(x), batch_size):\n",
" x_batch = x[indices[i:i+batch_size]]\n",
" y_batch = y[indices[i:i+batch_size]]\n",
" y_pred = self.forward(x_batch)\n",
" loss_i, grad = loss_fn(y_batch, y_pred)\n",
" self.backward(grad)\n",
" self.update(learning_rate)\n",
" pred = np.argmax(y_pred, axis=1)\n",
" correct += (y_batch == pred).sum()\n",
" accuracy = 100. * correct / len(x)\n",
" accs.append(accuracy)\n",
" print(f'Epoch: {epoch}, Acc.: {accuracy}')\n",
" return accs"
],
"metadata": {
"id": "vZ1mswIAMmk_"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We will use a 2 hidden layers network - the first will project the data to a 50-dim space, and then use the ReLU activation function. The 2nd will project the activations from before to 10 outputs. We will use an identity activation, because the softmax activation is part of the Cross Entropy calculation."
],
"metadata": {
"id": "YS5MvLHhDWgA"
}
},
{
"cell_type": "code",
"source": [
"layer1 = LinearLayer(784, 50, relu)\n",
"layer2 = LinearLayer(50, 10, identity)\n",
"nn = NeuralNetwork2([layer1, layer2])"
],
"metadata": {
"id": "gt10UvkmqVYm"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"accs = nn.train(x, y, learning_rate=0.001, loss_fn=CrossEntropy, num_epochs=20, batch_size=128)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "U5qWl5M5rPzn",
"outputId": "0ab2c238-0f33-4bdd-a718-1c32e202fd2d"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Epoch: 0, Acc.: 63.623333333333335\n",
"Epoch: 1, Acc.: 69.355\n",
"Epoch: 2, Acc.: 74.24666666666667\n",
"Epoch: 3, Acc.: 77.56333333333333\n",
"Epoch: 4, Acc.: 79.795\n",
"Epoch: 5, Acc.: 81.02\n",
"Epoch: 6, Acc.: 82.2\n",
"Epoch: 7, Acc.: 83.37166666666667\n",
"Epoch: 8, Acc.: 84.26333333333334\n",
"Epoch: 9, Acc.: 85.04833333333333\n",
"Epoch: 10, Acc.: 85.58833333333334\n",
"Epoch: 11, Acc.: 86.04\n",
"Epoch: 12, Acc.: 86.615\n",
"Epoch: 13, Acc.: 87.185\n",
"Epoch: 14, Acc.: 87.57\n",
"Epoch: 15, Acc.: 87.975\n",
"Epoch: 16, Acc.: 88.29166666666667\n",
"Epoch: 17, Acc.: 88.54333333333334\n",
"Epoch: 18, Acc.: 88.71\n",
"Epoch: 19, Acc.: 89.15833333333333\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"y_hat = nn.forward(x)\n",
"pred = np.argmax(y_hat, axis=1)\n",
"acc = (y == pred).mean()\n",
"print(f\"accuracy = {acc}\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "sqLpkfnO3p8u",
"outputId": "90a20774-bfce-4ed2-a144-c38a42f3a081"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"accuracy = 0.8933\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"Not bad, we got a pretty high accuracy! \n",
"\n",
"Let's train again and see how it goes:"
],
"metadata": {
"id": "cbRr-C3w4cZF"
}
},
{
"cell_type": "code",
"source": [
"accs = nn.train(x, y, learning_rate=0.001, loss_fn=CrossEntropy, num_epochs=20, batch_size=128)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "pmlNI-cD4ZhT",
"outputId": "ef2c7b8a-3576-45ae-d571-46d7c8e9b17c"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Epoch: 0, Acc.: 89.5\n",
"Epoch: 1, Acc.: 89.58166666666666\n",
"Epoch: 2, Acc.: 89.8\n",
"Epoch: 3, Acc.: 89.94166666666666\n",
"Epoch: 4, Acc.: 90.27666666666667\n",
"Epoch: 5, Acc.: 90.29833333333333\n",
"Epoch: 6, Acc.: 90.49833333333333\n",
"Epoch: 7, Acc.: 90.75166666666667\n",
"Epoch: 8, Acc.: 90.83\n",
"Epoch: 9, Acc.: 90.905\n",
"Epoch: 10, Acc.: 91.10666666666667\n",
"Epoch: 11, Acc.: 91.205\n",
"Epoch: 12, Acc.: 91.39666666666666\n",
"Epoch: 13, Acc.: 91.485\n",
"Epoch: 14, Acc.: 91.65666666666667\n",
"Epoch: 15, Acc.: 91.75333333333333\n",
"Epoch: 16, Acc.: 91.78666666666666\n",
"Epoch: 17, Acc.: 91.835\n",
"Epoch: 18, Acc.: 92.07\n",
"Epoch: 19, Acc.: 92.07833333333333\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"Getting quite close!"
],
"metadata": {
"id": "Aut5IjKS4n32"
}
},
{
"cell_type": "markdown",
"source": [
"# Separate the loss and the activations"
],
"metadata": {
"id": "lkZNYBubOSfb"
}
},
{
"cell_type": "markdown",
"source": [
"If we want to implement a softmax activation function which also outputs a derivative, we have a bit of work ahead of us.\n",
"Our derivative for each row/observation will give us back a matrix. This means we need to use tensors. The derivative will be matrices of shape $(n_{batch}, C, C)$ where $n_{batch}$ is the # of observations in the current batch, and $C$ is the number of classes / inputs to the softmax. \n",
"\n",
"We will use the somewhat complicated `np.einsum` function. Since we are doing tensor multiplications, we have to tell it which \"sides\" or dimensions of the tensor we want to multiply. \n",
"\n",
"For example, `np.einsum('ij,ik->ijk', a, b)` tells it for each row (of dim `i`) in the original matrices, to multiply the column of the 1st matrix (`a`, of dim `j`) with the column of the 2nd matrix (`b`, of dim `k`) in an outer product, such that we will get a $(i,j,k)$ tensor. This assumes that both matrices have the same 1st dimension `i`.\n",
"\n",
"Another example, `np.einsum('ij,jk->ijk', a, b)` tells it for each row in the 1st matrix `a`, to multiply each column element (of dim `j`) with the row of another matrix `b` of dim $(j,k)$, to get a $(i,j,k)$ tensor. \n",
"\n",
"Finally, `np.einsum('ijk,ik->ij', a, b)` tells it to for each row in the first tensor `a` (of dim `i`) to multiply the corresponding matrix (of dim $(j,k)$ with a vector of dim `k`. We will get a matrix of degree $(i,j)$."
],
"metadata": {
"id": "dQQvRV1EOYCv"
}
},
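{
"cell_type": "markdown",
"source": [
"To make these patterns concrete, here is a small optional demo (a sketch with made-up toy arrays, not part of the training pipeline) that checks each `einsum` pattern against an explicit NumPy equivalent:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Optional demo of the three einsum patterns described above (toy arrays, illustrative only)\n",
"a_demo = np.random.randn(2, 3)   # shape (i, j)\n",
"b_demo = np.random.randn(2, 4)   # shape (i, k)\n",
"c_demo = np.random.randn(3, 4)   # shape (j, k)\n",
"\n",
"# per-row outer product: (i, j), (i, k) -> (i, j, k)\n",
"t1 = np.einsum('ij,ik->ijk', a_demo, b_demo)\n",
"print(t1.shape, np.allclose(t1[0], np.outer(a_demo[0], b_demo[0])))\n",
"\n",
"# each element of a row of a_demo scales the corresponding row of c_demo: (i, j), (j, k) -> (i, j, k)\n",
"t2 = np.einsum('ij,jk->ijk', a_demo, c_demo)\n",
"print(t2.shape, np.allclose(t2[1], a_demo[1][:, None] * c_demo))\n",
"\n",
"# batched matrix-vector product: (i, j, k), (i, k) -> (i, j)\n",
"t3 = np.einsum('ijk,ik->ij', t1, b_demo)\n",
"print(t3.shape, np.allclose(t3[0], t1[0] @ b_demo[0]))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},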
{
"cell_type": "code",
"source": [
"def softmax(logits, derivative=False):\n",
" shifted_logits = logits - np.max(logits, axis=1, keepdims=True)\n",
" Z = np.sum(np.exp(shifted_logits), axis=1, keepdims=True)\n",
" log_probs = shifted_logits - np.log(Z)\n",
" p = np.exp(log_probs) # the softmax activations\n",
" if derivative:\n",
" # z, da shapes - (m, n)\n",
" m, n = logits.shape\n",
" # First we create for each example feature vector, it's outer product with itself\n",
" # ( p1^2 p1*p2 p1*p3 .... )\n",
" # ( p2*p1 p2^2 p2*p3 .... )\n",
" # ( ... )\n",
" tensor1 = np.einsum('ij,ik->ijk', p, p) # (m, n, n)\n",
" # Second we need to create an (n,n) identity of the feature vector\n",
" # ( p1 0 0 ... )\n",
" # ( 0 p2 0 ... )\n",
" # ( ... )\n",
" tensor2 = np.einsum('ij,jk->ijk', p, np.eye(n, n)) # (m, n, n)\n",
" # Then we need to subtract the first tensor from the second\n",
" # ( p1 - p1^2 -p1*p2 -p1*p3 ... )\n",
" # ( -p1*p2 p2 - p2^2 -p2*p3 ...)\n",
" # ( ... )\n",
" dSoftmax = tensor2 - tensor1\n",
" return dSoftmax\n",
" else:\n",
" return p"
],
"metadata": {
"id": "EtUehWtSO1VW"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The cross entropy loss now doesn't recieve the logits, but the softmax, so we only calculate the loss w.r.t. the softmax activations, which as we saw are equal to $\\frac{\\partial \\mathcal L}{\\partial a_{Lc}} = -\\frac{y_c}{a_{Lc}}$.\n",
"\n",
"To avoid numerical issues we will add a small epsilon to the log and division operations."
],
"metadata": {
"id": "XUznIVnxP-dW"
}
},
{
"cell_type": "code",
"source": [
"def CrossEntropy2(y, a):\n",
" eps = 1e-10\n",
" num_samples = y.shape[0]\n",
" log_probs = np.log(a + eps)\n",
" loss = -np.sum(log_probs[np.arange(num_samples), y]) / num_samples\n",
"\n",
" y_one_hot = np.zeros((len(y), 10))\n",
" y_one_hot[np.arange(len(y)), y] = 1\n",
" delta = -1*(y_one_hot/(a + eps)) / num_samples\n",
"\n",
" return loss, delta"
],
"metadata": {
"id": "arw2NRw6QYY0"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We also need to modify the linear layer to use the tensor to matrix product:"
],
"metadata": {
"id": "y9BnoB51SKox"
}
},
{
"cell_type": "code",
"source": [
"class LinearLayer2(LinearLayer):\n",
" def backward(self, grad):\n",
" da = self.activation_fn(self.output, derivative=True)\n",
" grad = np.einsum('ijk,ik->ij', da, grad) \n",
" self.grad_W = self.input.T @ grad\n",
" self.grad_b = np.sum(grad, axis=0)\n",
" return grad @ self.W.T"
],
"metadata": {
"id": "U-iDQBOaR7kK"
},
"execution_count": null,
"outputs": []
},
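{
"cell_type": "markdown",
"source": [
"As a quick optional check (an illustrative cell with toy values, not part of the original flow), we can verify that contracting the softmax Jacobian with $\\frac{\\partial \\mathcal L}{\\partial a_L}$ from `CrossEntropy2` reproduces the combined $\\frac{\\hat y - y}{n}$ gradient returned by `CrossEntropy`:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Optional check: chaining the softmax Jacobian with dL/da should match\n",
"# the combined softmax + cross-entropy gradient (a - y_one_hot) / n.\n",
"logits_check = np.random.randn(4, 10)\n",
"y_check = np.array([3, 1, 7, 0])\n",
"\n",
"a_check = softmax(logits_check)                     # softmax activations\n",
"_, dL_da = CrossEntropy2(y_check, a_check)          # gradient w.r.t. activations\n",
"J = softmax(logits_check, derivative=True)          # (4, 10, 10) Jacobian per example\n",
"chained = np.einsum('ijk,ik->ij', J, dL_da)         # dL/dz via the chain rule\n",
"\n",
"_, combined = CrossEntropy(y_check, logits_check)   # (a - y_one_hot) / n directly\n",
"print(np.allclose(chained, combined, atol=1e-6))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},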
{
"cell_type": "markdown",
"source": [
"We will now use a softmax activation after the 2nd layer:"
],
"metadata": {
"id": "ydgKcl8oQxDk"
}
},
{
"cell_type": "code",
"source": [
"layer1 = LinearLayer(784, 50, relu)\n",
"layer2 = LinearLayer2(50, 10, softmax)\n",
"nn2 = NeuralNetwork2([layer1, layer2])"
],
"metadata": {
"id": "xtYuCB6ZQxDl"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"losses = nn2.train(x, y, learning_rate=0.001, loss_fn=CrossEntropy2, num_epochs=20, batch_size=128)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "40419ae7-dcae-4a2c-b7c8-d57030d44b48",
"id": "QlAx8-VIQxDm"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Epoch: 0, Acc.: 53.29\n",
"Epoch: 1, Acc.: 83.62666666666667\n",
"Epoch: 2, Acc.: 87.74833333333333\n",
"Epoch: 3, Acc.: 89.625\n",
"Epoch: 4, Acc.: 90.69833333333334\n",
"Epoch: 5, Acc.: 91.29333333333334\n",
"Epoch: 6, Acc.: 91.89166666666667\n",
"Epoch: 7, Acc.: 92.43166666666667\n",
"Epoch: 8, Acc.: 92.925\n",
"Epoch: 9, Acc.: 93.14666666666666\n",
"Epoch: 10, Acc.: 93.53833333333333\n",
"Epoch: 11, Acc.: 93.67666666666666\n",
"Epoch: 12, Acc.: 93.90833333333333\n",
"Epoch: 13, Acc.: 94.135\n",
"Epoch: 14, Acc.: 94.34166666666667\n",
"Epoch: 15, Acc.: 94.44\n",
"Epoch: 16, Acc.: 94.76833333333333\n",
"Epoch: 17, Acc.: 94.79666666666667\n",
"Epoch: 18, Acc.: 94.95\n",
"Epoch: 19, Acc.: 95.08\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"y_hat = nn2.forward(x)\n",
"pred = np.argmax(y_hat, axis=1)\n",
"acc = (y == pred).mean()\n",
"print(f\"accuracy = {acc}\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "XvuuZ-NvUHPw",
"outputId": "852adb07-2d63-464f-8a57-872f54d2903b"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"accuracy = 0.9490166666666666\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"Not bad accuracy at all!"
],
"metadata": {
"id": "644ZHEbQULDv"
}
},
{
"cell_type": "markdown",
"source": [
"© David Refaeli 2023."
],
"metadata": {
"id": "aKiefmDX4qcc"
}
}
]
}