{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"gpuType": "T4",
"toc_visible": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"source": [
"<img src=\"http://wandb.me/logo-im-png\" width=\"400\" alt=\"Weights & Biases For Deep larning Experimentation\" />"
],
"metadata": {
"id": "n7WhrsRrfxwi"
}
},
{
"cell_type": "markdown",
"source": [
"# Wandb for Deep Learning Experimentations\n",
"<div><img /></div>\n",
"\n",
"<img src=\"https://wandb.me/mini-diagram\" width=\"650\" alt=\"Weights & Biases\" />\n",
"\n",
"<div><img /></div>\n",
"\n",
"Wandb (Weights and Biases) is a powerful platform designed to streamline and enhance deep learning experimentation. It provides a unified interface to track, visualize, and collaborate on machine learning projects. With Wandb, researchers and developers can effortlessly log and monitor various metrics, hyperparameters, and system resources during training, enabling them to gain valuable insights into their models' performance."
],
"metadata": {
"id": "TAhEtx2jZ5J6"
}
},
{
"cell_type": "markdown",
"source": [
"## Uses of WandB\n",
"\n",
"Here are few uses of tools like Wandb, \n",
"\n",
"* **Tracking and Visualization**: Wandb provides a unified interface to track and visualize metrics, hyperparameters, and system resources during deep learning experiments.\n",
"\n",
"* **Interactive Dashboards**: The logged metrics are automatically organized and presented in intuitive and interactive dashboards. This feature makes it easy to analyze and compare different experiments.\n",
"\n",
"* **Hyperparameter Tracking**: Wandb offers powerful hyperparameter tracking capabilities. You can log and compare hyperparameter settings across experiments.\n",
"\n",
"* **Advanced Search**: With Wandb, you can leverage advanced search capabilities to find the best hyperparameter configurations for your models. This feature helps you efficiently explore the hyperparameter space and identify optimal settings.\n",
"\n",
"* **Collaboration and Knowledge Sharing**: Wandb facilitates collaboration among team members by providing a centralized platform.\n",
"\n",
"* **Framework Integration**: Wandb seamlessly integrates with popular deep learning frameworks like TensorFlow and PyTorch, as well as other tools in the machine learning ecosystem. "
],
"metadata": {
"id": "i0YLEGnaasNY"
}
},
{
"cell_type": "markdown",
"source": [
"## About this notebook\n",
"\n",
"In this notebook we are going to see some of the functionalities of the wandb for deep learning experimentation. As we know, any deep learning experimentation is computationally heavy and may lead to wrong results. To solve this problem we need tools like wandb which help us analyse our model performance as well as give us direction with which we can proceed to further experimentation. \n",
"\n",
"You can check out the official documentaion [here](https://docs.wandb.ai/)."
],
"metadata": {
"id": "E7r_VW7_bUrp"
}
},
{
"cell_type": "markdown",
"source": [
"### Functionalities of Wandb\n",
"Here are the functionalities which we are going to see in this notebook, \n",
"\n",
"1. 💻 **Wandb config** : Wandb config helps track experiments by allowing you to easily log and track hyperparameters and their values. With Wandb config, you can define and set hyperparameters within your code, and Wandb will automatically log and organize them for each experiment. This enables you to keep track of the specific hyperparameter values used in each run, making it easier to compare and analyze their impact on your model's performance.\n",
"\n",
"2. 🔥 **Wandb init** : \n",
"Wandb init helps track experiments by initializing a project and connecting it to the Wandb platform. When we run `wandb.init` in our code, it creates a new run in your project and assigns it a unique ID. This run ID is used to identify and track the specific experiment.\n",
"\n",
"3. 👀 **Wandb watch**: Wandb watch helps track experiments by automatically monitoring and logging the gradients and parameters of your machine learning model during training. By using the `wandb.watch()` function in your code, Wandb is able to keep track of these values and visualize them on the Wandb dashboard.\n",
"\n",
"4. 🪵 **Wandb log**: \n",
"Wandb log helps track experiments by allowing you to log various metrics and other relevant information during the course of your deep learning experiment. By using the `wandb.log()` function in your code, you can log key metrics such as loss, accuracy, and custom evaluation metrics at different stages of your training or evaluation process.\n",
"\n",
"5. 💾 **Wandb save**: Wandb save helps track experiments by allowing you to save and log important artifacts such as model weights, trained models, datasets, and other files relevant to your deep learning experiment. By using the `wandb.save()` function in your code, you can specify files or directories that you want to save and associate with your experiment run."
],
"metadata": {
"id": "uCDtB8CscCYE"
}
},
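{
"cell_type": "markdown",
"source": [
"As a quick orientation, here is a minimal sketch (separate from this notebook's actual pipeline) of how these five calls typically fit together in a training script. `my-project`, `model`, and `train_one_epoch` are placeholders:\n",
"\n",
"```python\n",
"import wandb\n",
"\n",
"config = {\"learning_rate\": 0.005, \"epochs\": 20}  # hyperparameters to track\n",
"with wandb.init(project=\"my-project\", config=config):  # start a tracked run\n",
"    config = wandb.config  # read the hyperparameters back from wandb\n",
"    wandb.watch(model, log=\"all\")  # monitor gradients and parameters\n",
"    for epoch in range(config.epochs):\n",
"        loss = train_one_epoch(model)  # placeholder training step\n",
"        wandb.log({\"epoch\": epoch, \"loss\": loss})  # log metrics\n",
"    wandb.save(\"model.onnx\")  # save an artifact file\n",
"```"
],
"metadata": {}
},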
{
"cell_type": "markdown",
"source": [
"### Flow of the Notebook \n",
"We are going to build a CNN classification model on Sign Langugae Recognition MNIST dataset in this notebook. We will track the entire experimentaion process with the help of Wandb. \n",
"\n",
"The flow of this notebook follows the usual Pytorch deep learning experimentation. \n",
"1. First we collect the data and pre-process to our need (demo purposes)\n",
"2. Setting up the Wandb with your own account\n",
"3. Define the model hyperparameteres using config\n",
"4. Define the pipeline for the model for wandb.init\n",
"5. Train and save the model into ONNX format\n",
"\n",
"Without further ado, let's get started...\n",
"\n"
],
"metadata": {
"id": "mCpQdT6JeDLn"
}
},
{
"cell_type": "markdown",
"source": [
"# Import Data"
],
"metadata": {
"id": "tO_EI4legWuf"
}
},
{
"cell_type": "markdown",
"source": [
"## About the Sign Language Dataset\n",
"\n",
"<img src=\"https://storage.googleapis.com/kagglesdsdata/datasets/3258/5337/amer_sign2.png?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=databundle-worker-v2%40kaggle-161607.iam.gserviceaccount.com%2F20230604%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20230604T062225Z&X-Goog-Expires=345600&X-Goog-SignedHeaders=host&X-Goog-Signature=308378b35f6d62832f0c62e14b9a8f3cd5f185e7278043f2ff6d8427eca7684bea3971052f8ddaafc7c9a9832cc97d8e9dece3fddebe5a3c1421d89397417bb90cfb2875afd261a5e9c1fdfe3c8bdf316655cfac7c0fb6dbc65d71d04dda9ee65a5c5c8a0394920105ba89227f7dea17d1c40fcef69bfc494e4146c70f958f8ef589942f4e1a8dd37adfe4630da02d28fb2225206a95fe5846786579a16cb41d6960ecd3f5310c5035cecea483422c837e28f06a3d6f939e8858dd5a5f7015d938838c57c142bf3d7c5a580e85adfb1fd73746c353af2f04b74cc6b0de10891560b199e2874708c8e87025f246f411ce20ec3e6c4e8f714064d03a3532cafd47\" width=\"800\" alt=\"Sign Language dataset\" />\n",
"\n",
"Sign Language dataset contains Americal Sign Language alphabet images. There are total 24 alphabets in this dataset ('J' and 'Z' missing) as this is a image dataset and the symbols which require motion (video input) are ingnored. \n",
"\n",
"You can find the entire dataset [here](https://www.kaggle.com/datasets/datamunge/sign-language-mnist)."
],
"metadata": {
"id": "oNlc_Yl4gvbO"
}
},
{
"cell_type": "markdown",
"source": [
"## Importing essential libraries\n",
"\n",
"First step is to import all the necessary libraries. For this notebook we are going to use Pytorch. We have imported usual stuff that we require for any deep learning classification task below,"
],
"metadata": {
"id": "PNL_fIyAiOrn"
}
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"id": "prwhI3Os1gK0"
},
"outputs": [],
"source": [
"import torch \n",
"import torch.nn as nn \n",
"import pandas as pd \n",
"import numpy as np \n",
"import matplotlib.pyplot as plt \n",
"from sklearn.model_selection import train_test_split\n",
"from torch.utils.data import Dataset, DataLoader\n",
"from torchvision.transforms import transforms\n",
"from PIL import Image\n",
"from tqdm import tqdm"
]
},
{
"cell_type": "markdown",
"source": [
"## Setting up device\n",
"Pytorch requires us to define the device on which we are going to train the model."
],
"metadata": {
"id": "YuT48ExNis2z"
}
},
{
"cell_type": "code",
"source": [
"device = 'cuda' if torch.cuda.is_available() else 'cpu'"
],
"metadata": {
"id": "BuBX3K6d14la"
},
"execution_count": 40,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Importing data from kaggle using API\n",
"\n",
"To import the data directly from the Kaggle we can use the following cell. You would need to insert your kaggle key into the colab before running this operation. \n",
"\n",
"To understand this, you can refer [this](https://www.youtube.com/watch?v=gwDOUuBH7ws&pp=ygUbaW1wb3J0IGthZ2dsZSBkYXRhIHRvIGNvbGFi) video. "
],
"metadata": {
"id": "4KMe88rfi2kk"
}
},
{
"cell_type": "code",
"source": [
"# setting up kaggle API\n",
"! mkdir ~/.kaggle\n",
"! cp kaggle.json ~/.kaggle/\n",
"\n",
"# Downloading the dataset directly from kaggle\n",
"! kaggle datasets download -d datamunge/sign-language-mnist\n",
"\n",
"# Unzipping the data\n",
"! unzip /content/sign-language-mnist.zip"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "nnW_zgUj2ECf",
"outputId": "49553ff5-0e2c-4c04-df54-80f8370a3929"
},
"execution_count": 41,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"mkdir: cannot create directory ‘/root/.kaggle’: File exists\n",
"Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'\n",
"sign-language-mnist.zip: Skipping, found more recently modified local copy (use --force to force download)\n",
"Archive: /content/sign-language-mnist.zip\n",
"replace amer_sign2.png? [y]es, [n]o, [A]ll, [N]one, [r]ename: A\n",
" inflating: amer_sign2.png \n",
" inflating: amer_sign3.png \n",
" inflating: american_sign_language.PNG \n",
" inflating: sign_mnist_test.csv \n",
" inflating: sign_mnist_test/sign_mnist_test.csv \n",
" inflating: sign_mnist_train.csv \n",
" inflating: sign_mnist_train/sign_mnist_train.csv \n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"You can change the following path to the custom dataset."
],
"metadata": {
"id": "No-nAzGfj4Qz"
}
},
{
"cell_type": "markdown",
"source": [
"## Train Test Split \n",
"There are two separate files for training and testing. \n",
"Both of these files contain 784 columns which are pixel values for 28x28 pixel image. \n",
"\n",
"Train data contains the label column as well."
],
"metadata": {
"id": "VYAUK_qNnowm"
}
},
{
"cell_type": "code",
"source": [
"train = pd.read_csv('/content/sign_mnist_train/sign_mnist_train.csv')\n",
"test = pd.read_csv('/content/sign_mnist_test/sign_mnist_test.csv')"
],
"metadata": {
"id": "FOl9vGIJ2UUS"
},
"execution_count": 42,
"outputs": []
},
{
"cell_type": "code",
"source": [
"train.head(5)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 236
},
"id": "otL2wQpL2XRE",
"outputId": "70ce1eb4-185d-40c4-808b-d655bc6631f2"
},
"execution_count": 43,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
" label pixel1 pixel2 pixel3 pixel4 pixel5 pixel6 pixel7 pixel8 \\\n",
"0 3 107 118 127 134 139 143 146 150 \n",
"1 6 155 157 156 156 156 157 156 158 \n",
"2 2 187 188 188 187 187 186 187 188 \n",
"3 2 211 211 212 212 211 210 211 210 \n",
"4 13 164 167 170 172 176 179 180 184 \n",
"\n",
" pixel9 ... pixel775 pixel776 pixel777 pixel778 pixel779 pixel780 \\\n",
"0 153 ... 207 207 207 207 206 206 \n",
"1 158 ... 69 149 128 87 94 163 \n",
"2 187 ... 202 201 200 199 198 199 \n",
"3 210 ... 235 234 233 231 230 226 \n",
"4 185 ... 92 105 105 108 133 163 \n",
"\n",
" pixel781 pixel782 pixel783 pixel784 \n",
"0 206 204 203 202 \n",
"1 175 103 135 149 \n",
"2 198 195 194 195 \n",
"3 225 222 229 163 \n",
"4 157 163 164 179 \n",
"\n",
"[5 rows x 785 columns]"
],
"text/html": [
"\n",
" <div id=\"df-addb8f46-491b-4f16-8232-734b78f86654\">\n",
" <div class=\"colab-df-container\">\n",
" <div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>label</th>\n",
" <th>pixel1</th>\n",
" <th>pixel2</th>\n",
" <th>pixel3</th>\n",
" <th>pixel4</th>\n",
" <th>pixel5</th>\n",
" <th>pixel6</th>\n",
" <th>pixel7</th>\n",
" <th>pixel8</th>\n",
" <th>pixel9</th>\n",
" <th>...</th>\n",
" <th>pixel775</th>\n",
" <th>pixel776</th>\n",
" <th>pixel777</th>\n",
" <th>pixel778</th>\n",
" <th>pixel779</th>\n",
" <th>pixel780</th>\n",
" <th>pixel781</th>\n",
" <th>pixel782</th>\n",
" <th>pixel783</th>\n",
" <th>pixel784</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>3</td>\n",
" <td>107</td>\n",
" <td>118</td>\n",
" <td>127</td>\n",
" <td>134</td>\n",
" <td>139</td>\n",
" <td>143</td>\n",
" <td>146</td>\n",
" <td>150</td>\n",
" <td>153</td>\n",
" <td>...</td>\n",
" <td>207</td>\n",
" <td>207</td>\n",
" <td>207</td>\n",
" <td>207</td>\n",
" <td>206</td>\n",
" <td>206</td>\n",
" <td>206</td>\n",
" <td>204</td>\n",
" <td>203</td>\n",
" <td>202</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>6</td>\n",
" <td>155</td>\n",
" <td>157</td>\n",
" <td>156</td>\n",
" <td>156</td>\n",
" <td>156</td>\n",
" <td>157</td>\n",
" <td>156</td>\n",
" <td>158</td>\n",
" <td>158</td>\n",
" <td>...</td>\n",
" <td>69</td>\n",
" <td>149</td>\n",
" <td>128</td>\n",
" <td>87</td>\n",
" <td>94</td>\n",
" <td>163</td>\n",
" <td>175</td>\n",
" <td>103</td>\n",
" <td>135</td>\n",
" <td>149</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>2</td>\n",
" <td>187</td>\n",
" <td>188</td>\n",
" <td>188</td>\n",
" <td>187</td>\n",
" <td>187</td>\n",
" <td>186</td>\n",
" <td>187</td>\n",
" <td>188</td>\n",
" <td>187</td>\n",
" <td>...</td>\n",
" <td>202</td>\n",
" <td>201</td>\n",
" <td>200</td>\n",
" <td>199</td>\n",
" <td>198</td>\n",
" <td>199</td>\n",
" <td>198</td>\n",
" <td>195</td>\n",
" <td>194</td>\n",
" <td>195</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>2</td>\n",
" <td>211</td>\n",
" <td>211</td>\n",
" <td>212</td>\n",
" <td>212</td>\n",
" <td>211</td>\n",
" <td>210</td>\n",
" <td>211</td>\n",
" <td>210</td>\n",
" <td>210</td>\n",
" <td>...</td>\n",
" <td>235</td>\n",
" <td>234</td>\n",
" <td>233</td>\n",
" <td>231</td>\n",
" <td>230</td>\n",
" <td>226</td>\n",
" <td>225</td>\n",
" <td>222</td>\n",
" <td>229</td>\n",
" <td>163</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>13</td>\n",
" <td>164</td>\n",
" <td>167</td>\n",
" <td>170</td>\n",
" <td>172</td>\n",
" <td>176</td>\n",
" <td>179</td>\n",
" <td>180</td>\n",
" <td>184</td>\n",
" <td>185</td>\n",
" <td>...</td>\n",
" <td>92</td>\n",
" <td>105</td>\n",
" <td>105</td>\n",
" <td>108</td>\n",
" <td>133</td>\n",
" <td>163</td>\n",
" <td>157</td>\n",
" <td>163</td>\n",
" <td>164</td>\n",
" <td>179</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"<p>5 rows × 785 columns</p>\n",
"</div>\n",
" <button class=\"colab-df-convert\" onclick=\"convertToInteractive('df-addb8f46-491b-4f16-8232-734b78f86654')\"\n",
" title=\"Convert this dataframe to an interactive table.\"\n",
" style=\"display:none;\">\n",
" \n",
" <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\"viewBox=\"0 0 24 24\"\n",
" width=\"24px\">\n",
" <path d=\"M0 0h24v24H0V0z\" fill=\"none\"/>\n",
" <path d=\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\"/><path d=\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\"/>\n",
" </svg>\n",
" </button>\n",
" \n",
" <style>\n",
" .colab-df-container {\n",
" display:flex;\n",
" flex-wrap:wrap;\n",
" gap: 12px;\n",
" }\n",
"\n",
" .colab-df-convert {\n",
" background-color: #E8F0FE;\n",
" border: none;\n",
" border-radius: 50%;\n",
" cursor: pointer;\n",
" display: none;\n",
" fill: #1967D2;\n",
" height: 32px;\n",
" padding: 0 0 0 0;\n",
" width: 32px;\n",
" }\n",
"\n",
" .colab-df-convert:hover {\n",
" background-color: #E2EBFA;\n",
" box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
" fill: #174EA6;\n",
" }\n",
"\n",
" [theme=dark] .colab-df-convert {\n",
" background-color: #3B4455;\n",
" fill: #D2E3FC;\n",
" }\n",
"\n",
" [theme=dark] .colab-df-convert:hover {\n",
" background-color: #434B5C;\n",
" box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
" filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
" fill: #FFFFFF;\n",
" }\n",
" </style>\n",
"\n",
" <script>\n",
" const buttonEl =\n",
" document.querySelector('#df-addb8f46-491b-4f16-8232-734b78f86654 button.colab-df-convert');\n",
" buttonEl.style.display =\n",
" google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
"\n",
" async function convertToInteractive(key) {\n",
" const element = document.querySelector('#df-addb8f46-491b-4f16-8232-734b78f86654');\n",
" const dataTable =\n",
" await google.colab.kernel.invokeFunction('convertToInteractive',\n",
" [key], {});\n",
" if (!dataTable) return;\n",
"\n",
" const docLinkHtml = 'Like what you see? Visit the ' +\n",
" '<a target=\"_blank\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\n",
" + ' to learn more about interactive tables.';\n",
" element.innerHTML = '';\n",
" dataTable['output_type'] = 'display_data';\n",
" await google.colab.output.renderOutput(dataTable, element);\n",
" const docLink = document.createElement('div');\n",
" docLink.innerHTML = docLinkHtml;\n",
" element.appendChild(docLink);\n",
" }\n",
" </script>\n",
" </div>\n",
" </div>\n",
" "
]
},
"metadata": {},
"execution_count": 43
}
]
},
{
"cell_type": "markdown",
"source": [
"## Image data"
],
"metadata": {
"id": "Vq7UgWuWoQAr"
}
},
{
"cell_type": "code",
"source": [
"X = train.drop(['label'], axis = 1) # Detaching label\n",
"y = train['label'] # selecting target"
],
"metadata": {
"id": "IPAWaCuS2Y5t"
},
"execution_count": 44,
"outputs": []
},
{
"cell_type": "code",
"source": [
"X.shape, y.shape"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "ONQnRm0X2arK",
"outputId": "ea89712f-923b-44c1-c4cc-1fec18f57c85"
},
"execution_count": 45,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"((27455, 784), (27455,))"
]
},
"metadata": {},
"execution_count": 45
}
]
},
{
"cell_type": "markdown",
"source": [
"We have more than 27 thousand images which is enogh to train a CNN model. "
],
"metadata": {
"id": "npksURAeoEeR"
}
},
{
"cell_type": "code",
"source": [
"X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)"
],
"metadata": {
"id": "Xhizl5Wj2cMU"
},
"execution_count": 46,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We can use the following code to see how the actual images are mapped."
],
"metadata": {
"id": "8jCcWb5koSyA"
}
},
{
"cell_type": "code",
"source": [
"im = np.array(X_train.iloc[2])\n",
"im = im.reshape((28,28, 1))\n",
"plt.imshow(im)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 448
},
"id": "S18DBTzb2e0Q",
"outputId": "fb459c0e-d488-43a0-9ef8-7e4362ba1ad8"
},
"execution_count": 47,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7f07d1c7b9d0>"
]
},
"metadata": {},
"execution_count": 47
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<Figure size 640x480 with 1 Axes>"
],
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAaAAAAGdCAYAAABU0qcqAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAl6UlEQVR4nO3df3DV9Z3v8dc5JzknCeSHIeSXBBpQoZUfvaVCqUpxyQXSGUcqt1erdy70dvDqBu8q7bbD3lar3Z10dbbr2GH13plW1jtFq3Or3jo7dBVLuLZgr6jLsrUpyUYJkgSB5ndykpzzuX+wZjcKkveHJJ8kPB8zZwZOzjvfT77ne84r35yTVyLOOScAACZYNPQCAACXJgIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBAZoRfwYel0WidOnFBubq4ikUjo5QAAjJxz6urqUnl5uaLR85/nTLoAOnHihCoqKkIvAwBwkZqbmzVnzpzzfnzSBVBubq4k6fIH/7uiWVmjH/Q5WfL8AaSLeLQXZUxQ41F0kjcrpT3uKJ+vyWc7viZqfV4z9hFfETe51+dloh62Q/Z95zyfvyIe27I+5aX7+3XsL743/Hx+PuMWQDt37tTDDz+s1tZWLVu2TD/84Q+1YsWKC8598GO3aFaWotkEkBkB5L8dXwSQJALoYkQHp1cADc9d4GWUcXkTwk9/+lNt375d999/v9544w0tW7ZM69ev18mTJ8djcwCAKWhcAugHP/iBtm7dqq9+9av61Kc+pccff1w5OTn68Y9/PB6bAwBMQWMeQAMDAzp06JCqqqr+dSPRqKqqqnTgwIGP3D6ZTKqzs3PEBQAw/Y15AJ06dUqpVEolJSUjri8pKVFra+tHbl9bW6v8/PzhC++AA4BLQ/BfRN2xY4c6OjqGL83NzaGXBACYAGP+LriioiLFYjG1tbWNuL6trU2lpaUfuX0ikVAikRjrZQAAJrkxPwOKx+Navny59u7dO3xdOp3W3r17tWrVqrHeHABgihqX3wPavn27Nm/erM9+9rNasWKFHnnkEfX09OirX/3qeGwOADAFjUsA3XLLLXr//fd13333qbW1VZ/+9Ke1Z8+ej7wxAQBw6Rq3JoRt27Zp27Zt4/XpP8rjh4lejQae25r0DQUTJUYjhLeJbA3weTxFPRbo9Vv5E9hykZqYzfi0GkR8jwePx4Z1wo3ycR78XXAAgEsTAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIYtzLSixb7l8soeRWLTsf4naiyT0mRCeqEdCmPDfmubTJ3mGbbmzELCnu8NtXVUGCemXHc/oDqXDhknnEed1LE5xiS5OIe2xq0b8tleBx4nl+Tj8g4lbJOx6dgAMAUQAABAIIggAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBCTtw3byidKo57Vxz5zE1Vcm7ZvKOLboO27/4wiHtvxatCWvPZfIjdpnhl8b4Z9OxV95plPl7xnnpGkfS155pmZx+33U9479v3dusr+YJ/R7Hc89BfZZ5Kz7Q3fGd2G6v9/kcpOm2ckeTa+2/afG+WXwxkQACAIAggAEAQBBAAIggACAARBAAEAgiCAAABBEEAAgCAIIABAEAQQACAIAggAEAQBBAAIggACAAQxactIXcTJRca57NK3TNOn13CiijsncG0Rj/snPWT/nserLDVpL3eUpMs/cco8U5zTZZ55s2ueeSYjI2Weyc3oN89IkhL2bcUG7Pu8v8CjhHOGvYQzo9fve+3K//0H80zr9YXmmdL/e8Y8885G+3Ykqb/UXpYaGRifNmXOgAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBAEEAAgiElbRqqobPGYYS8o9CoVlWc5pg+PklCfgtCI57ch6ZR9B8ZzBs0zsZj9vh1sTZhnJOmzs4+ZZ95uLzXPxLLthZBDQ/bizoose8mlJMXiHo8n2dd3Zn2feSbqPI67br/jIX34d+aZosv+nX1DTe/ZZ6J+ZaQ+px3O+Fw02iJpzoAAAEEQQACAIMY8gL773e8qEomMuCxatGisNwMAmOLG5TWgq6++Wi+//PK/biRj8r7UBAAIY1ySISMjQ6Wl9hdmAQCXjnF5Dejo0aMqLy/X/Pnzdfvtt+vYsfO/syiZTKqzs3PEBQAw/Y15AK1cuVK7du3Snj179Nhjj6mpqUnXX3+9urq6znn72tpa5efnD18qKirGekkAgElozAOourpaX/7yl7V06VKtX79ef/d3f6f29nY988wz57z9jh071NHRMXxpbm4e6yUBACahcX93QEFBga666io1NDSc8+OJREKJhN8viQEApq5x/z2g7u5uNTY2qqysbLw3BQCYQsY8gL7xjW+orq5O77zzjn7961/rS1/6kmKxmL7yla+M9aYAAFPYmP8I7vjx4/rKV76i06dPa/bs2bruuut08OBBzZ49e6w3BQCYwsY8gJ5++umx+UQZab+CUQPfUtFopn1dPsWdXsWiHl+TT9mnJKX64+aZNYuOmmcKMu2Flc+cucY846t8Rod55uhAiXkm2ZZlnsn8ZMo8I0nyOPYy+uzHUfYbOeaZvjL7dma0Js0zkhTNsa9vMGEvZY0X2YtF+4vthbaSFPF4LhovdMEBAIIggAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBDj/gfpJoxPsWjUr4zUR9SnWNVNTGlgashenihJsSx70WV5wl7cWZ33D+aZvy9ZZJ6RpJfeWWieuXPRq+aZfUP27RTU24+H/3XFCvOMJDmP4tOMnn7zzOV19pLQE6tzzTPxxpPmGUnqXL/EPNNXZP++Puuf7TPRpN/5Qzphfy6KxIzH3iifUjgDAgAEQQABAIIggAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBCTtw075kwN1xGPNuxIZOLasCdK1ONrSqX8Wrdnz+oyz/y/P8wzz1TlHjHPFOT0mWck6fib5eaZX8z6lHkm9odM80zRP/SaZ969fJZ5RpLig/ZjIp1p/3421mVv0PYS8TvGWz5v/5qy2+zbivQPmGcyO/3OH5Jl9hZ7eRwPo8EZEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEMXnLSI18ikUjvvHrsS2fklCv7cTS5pl02m9HlM6wl5H+/v3Z5pn7hjaaZ955174dSSpssM/8U8Ec80x2u73cMePocfNM2a/nm2ck6XhVzDyTecZelhpJeZTnxs0jGpjvdzxkX9Fhnkm1F9g35FGW6jI8y5Q9ipvdkG3GjfK5izMgAEAQBBAAIAgCCAAQBAEEAAiCAAIAB
EEAAQCCIIAAAEEQQACAIAggAEAQBBAAIAgCCAAQBAEEAAhi0paRRmJOEY/SvIngVSw6QaLRiVvbJ2aeNs8MOfv3PL9vKTbPxDr8Du2ChgHzTPc8j3ZMj2/9XHePeSbn6Cn7hiSlNs0yzyRLZ5pnst581zwz7/+YR/T23fn2IUkF0T7zTOaZiXkMDubai4clST7Lsx6vo7w9Z0AAgCAIIABAEOYA2r9/v2688UaVl5crEono+eefH/Fx55zuu+8+lZWVKTs7W1VVVTp69OhYrRcAME2YA6inp0fLli3Tzp07z/nxhx56SI8++qgef/xxvfbaa5oxY4bWr1+v/v7+i14sAGD6ML9SW11drerq6nN+zDmnRx55RN/+9rd10003SZKefPJJlZSU6Pnnn9ett956casFAEwbY/oaUFNTk1pbW1VVVTV8XX5+vlauXKkDBw6ccyaZTKqzs3PEBQAw/Y1pALW2tkqSSkpKRlxfUlIy/LEPq62tVX5+/vCloqJiLJcEAJikgr8LbseOHero6Bi+NDc3h14SAGACjGkAlZaWSpLa2tpGXN/W1jb8sQ9LJBLKy8sbcQEATH9jGkCVlZUqLS3V3r17h6/r7OzUa6+9plWrVo3lpgAAU5z5XXDd3d1qaGgY/n9TU5PeeustFRYWau7cubrnnnv053/+57ryyitVWVmp73znOyovL9fGjRvHct0AgCnOHECvv/66brjhhuH/b9++XZK0efNm7dq1S9/85jfV09OjO+64Q+3t7bruuuu0Z88eZWVljd2qAQBTnjmA1qxZI+fO32YXiUT04IMP6sEHH7yohU2ISVwqKkkZmSnzTCpl/6nqZfn2kktJyozY1/fbf5xrnsk+ETPP9M4bMs9IUirbvv/yGu3bOfW5QfNM9LIC84yLZ5pnJL8y13f+k73INfLFBeaZuXvsx10sz742SerqzjbPzHnHfuylL7MXubpcv2Ncgx6vvGQYi09Hefvg74IDAFyaCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACMJeeTtBInKKGNqqIzF7s3UsZmx4vQgTtS2fNuzLczu8tlUc7zTPRGclzTOZR3PMM3n1fod272z7cZTVbr9vM9rt6xu4qsw8c+Lz9jZnSapY+p55JjvD3vD9+1iJeab1c/avKX+f3/HQXWGfiXf2mWf6y3PNM7GEfX9LUippb5c3S0dGdTPOgAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBAEEAAgiMlbRho9exk1N7ryu7HgUyxqKVb9QDTqMeOxnXhsyDwjSbnRfvNMTo69jLS33F4+mddoHpEkDeTaj6P3P+9TNGufObXdXnK58LLj5hlJ6hiw7/PGk0XmmUhzlnkmaj+EFO+2Py4kqezXKfNMdMA+42KZ5hnfguNUxgQUI4/yuYszIABAEAQQACAIAggAEAQBBAAIggACAARBAAEAgiCAAABBEEAAgCAIIABAEAQQACAIAggAEAQBBAAIYtKWkUajaUWj41ua51MQKvmXAFr5rC/iUWBaOeO0eUaSCmK95pmc+KB5pudye+lpe2bCPCNJmV32MtLM3AHzzLWV/2yeyc2w74chFzPPSFJLT555ZqDXXqiZ1Wvf34l2+zGe0ef3WM9+r8s8Eznxvnmm/6orzTPePHqbIzHb/otkUEYKAJjECCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBAEEAAgCAIIABDEpC0jjUSdV7GmRXScP/+/5VMsGvOZ8ShKrUzYyxN9lc6wlzv6aBvwK+F0MftD4vLCTvPMJ7LtBbAlmR3mmX5nLwiVpDfcHPtQyqPl0kMsaZ+Jdwx5bSva1WcfyrAfQydX2h/r0ZTf+UMkcwLKlEf53MoZEAAgCAIIABCEOYD279+vG2+8UeXl5YpEInr++edHfHzLli2KRCIjLhs2bBir9QIApglzAPX09GjZsmXauXPneW+zYcMGtbS0DF+eeuqpi1okAGD6Mb9aVl1drerq6o+9TSKRUGlpqfeiAADT37i8BrRv3z4VFxdr4cKFuuuuu3T69Pnf8ZNMJtXZ2TniAgCY/sY8gDZs2KAnn3xSe/fu1V/+5V+qrq5O1dXVSqVS57x9bW2t8vPzhy8VFRVjvSQAwCQ05r8HdOuttw7/e8mSJVq6dKkWLFigffv2ae3atR+5/Y4dO7R9+/bh/3d2dhJCAHAJGPe3Yc+fP19FRUVqaGg458cTiYTy8vJGXAAA09+4B9Dx48d1+vRplZWVjfemAABTiPlHcN3d3SPOZpqamvTWW2+psLBQhYWFeuCBB7Rp0yaVlpaqsbFR3/zmN3XFFVdo/fr1Y7pwAMDUZg6g119/XTfccMPw/z94/Wbz5s167LHHdPjwYf3t3/6t2tvbVV5ernXr1ul73/ueEonE2K0aADDlmQNozZo1cu78RXO/+MUvLmpBvnzKPn1mfDnnUdQ4QWWkvtpTOeaZeMxeCtmTjJtn5HnXprPs+690hv1XB/Izes0zBTH7zPtDfq+pej02huw/0c/ssW8mq91+H2V2ejSYSkqfOmOe6f73nzLPZF/ebZ7p7/V4XPhKG5+/Rnl7uuAAAEEQQACAIAggAEAQBBAAIAgCCAAQBAEEAAiCAAIABEEAAQCCIIAAAEEQQACAIAggAEAQBBAAIAgCCAAQxJj/Se6x4tIROUsDq0eUplK++Wtv441GJ6Z5O21trZXU7zK9tuXTzhyPpry2ZTbgd99GPFrLT/fPMM+cGsw1z3SnsswzTb1F5hnJr7092m/f5/F2++Mi6/SAeSZ2yt5YLkkuy77P37vhwrf5sGjS/hj0atifZDgDAgAEQQABAIIggAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgJm0Z6WTmUywaiUxMGalPwerve0u9tvXpmcfMMwPpmHnGa9/F7YWxkuSG7PsvI2Lf1nv9BeaZtj57gem7Zy4zz0hSX+tM80x2m0cZaY9938U67WWkGvIrwW2660rzTLykyzwzmLw0n4o5AwIABEEAAQCCIIAAAEEQQACAIAggAEAQBBAAIAgCCAAQBAEEAAiCAAIABEEAAQCCIIAAAEEQQACAICZtA14k6hTxKP20iMX8CisnSsYEre/MQI7X3G97y80zJ7rzzTMzEvbyyd6chHlGklLdmeaZviH7zDtdheaZ904VmGdcs999m3siYp7J7LI/XrNPDppnokn7TOPWeeYZSYot7jDPJJP24yES83iu8+tXlXP2+1bW5+JR3p4zIABAEAQQACAIAggAEAQBBAAIggACAARBAAEAgiCAAABBEEAAgCAIIABAEAQQACAIAggAEAQBBAAIYtKWkSriFImMvgAv6lFc6jMjSRnRiSkJjXlsZyILVk8lZ5pnUh5FiFkZQ+aZqOHY+bdSafv6egbs5ZPxDHuT
5GCHvWA1/7hH8aSknJP24yjeYf+aEq1d5pmG/zzLvp2F7eYZSUr22+/bDI/7dmgoZp6J+J4+pO2PDa8C01HgDAgAEAQBBAAIwhRAtbW1uuaaa5Sbm6vi4mJt3LhR9fX1I27T39+vmpoazZo1SzNnztSmTZvU1tY2posGAEx9pgCqq6tTTU2NDh48qJdeekmDg4Nat26denp6hm9z77336uc//7meffZZ1dXV6cSJE7r55pvHfOEAgKnN9CaEPXv2jPj/rl27VFxcrEOHDmn16tXq6OjQj370I+3evVt/9Ed/JEl64okn9MlPflIHDx7U5z73ubFbOQBgSruo14A6Os7+udrCwrN/XvjQoUMaHBxUVVXV8G0WLVqkuXPn6sCBA+f8HMlkUp2dnSMuAIDpzzuA0um07rnnHl177bVavHixJKm1tVXxeFwFBQUjbltSUqLW1tZzfp7a2lrl5+cPXyoqKnyXBACYQrwDqKamRkeOHNHTTz99UQvYsWOHOjo6hi/Nzc0X9fkAAFOD1y+ibtu2TS+++KL279+vOXPmDF9fWlqqgYEBtbe3jzgLamtrU2lp6Tk/VyKRUCJh/wU7AMDUZjoDcs5p27Zteu655/TKK6+osrJyxMeXL1+uzMxM7d27d/i6+vp6HTt2TKtWrRqbFQMApgXTGVBNTY12796tF154Qbm5ucOv6+Tn5ys7O1v5+fn62te+pu3bt6uwsFB5eXm6++67tWrVKt4BBwAYwRRAjz32mCRpzZo1I65/4okntGXLFknSX//1XysajWrTpk1KJpNav369/uZv/mZMFgsAmD5MAeTchUvssrKytHPnTu3cudN7UT4sxaUXy6dQM+axvlTa/h4Rn5LLGbEB84wkXZPXZJ4Zcvav6R9by8wzg/2ePbseBbUdXTnmmXjcXrCa1Wr/mnKP248HScrotZeRZjecMs+8c4v9vp3xyTPmmd7+uHlGkqITVO7rUyLsPIpzJSnt8d4z53cYXRBdcACAIAggAEAQBBAAIAgCCAAQBAEEAAiCAAIABEEAAQCCIIAAAEEQQACAIAggAEAQBBAAIAgCCAAQBAEEAAjCszJ4/EWjTlGPZuKJ4NVs7dGgPSPD3ph86v3LzDPpEr9W3c/nNJpnmpKzzTOHXbl5JhLzO3Zc2j6XPmX/i759cXs7c9Ex+9oSZwbNM5IUP/4H88yp6879V48/Tvbn7A3aPX32/e3TNi1JaY/GaefxWPdttvbi8fwVzTDOjHJ/cwYEAAiCAAIABEEAAQCCIIAAAEEQQACAIAggAEAQBBAAIAgCCAAQBAEEAAiCAAIABEEAAQCCIIAAAEFM2jLSdCoqpUafj35lg36FldGofVvJpL18clVxk3mms7DFPLPvpU+bZyTpC5vqzTPv9RWYZwaSmeaZqG8Z6Rn7QyLWby+SjHSYRzSjxV5OGz/hsSFJkf4B88ypNfaZmamYecan7DPt10XqVYg8MGD/mnxEPUpFJSk95HHe0WV7DKb7Rrc2zoAAAEEQQACAIAggAEAQBBAAIAgCCAAQBAEEAAiCAAIABEEAAQCCIIAAAEEQQACAIAggAEAQBBAAIIhJW0aqiFPEULZnue3FzEhS1N6F6L0tq/8x54B55rrlZV7bevjJ/2Ce2XLbL8wzb+cVm2f6B+wFppLUG/coPu23l09m9tkPosxeexmpazlpnpEklRSZR2YW9JpnUobC4Q9kZKTMM756O7PMM5lZ9vspkTVonuk+nWOekaQZDfZi5MFc2/NXun909ytnQACAIAggAEAQBBAAIAgCCAAQBAEEAAiCAAIABEEAAQCCIIAAAEEQQACAIAggAEAQBBAAIAgCCAAQxKQtI819eaZicXsRoIXzKBWVJJ9a0cIzafPMgr+wF0mmnH07uz/1pHlGkm7/n183z+xpvdo8U13xtnnmt52l5hlJOhqdbZ7pSeWaZ1yXvcC0a07CPJOTWGiekaREa7d5ZsYz+eaZiEevaNYf7GWfkSG/MuDjN9iLOy9b3mGeOdMxwzyjIb/zh7IDfeaZxv9o2w/pvtE9D3EGBAAIggACAARhCqDa2lpdc801ys3NVXFxsTZu3Kj6+voRt1mzZo0ikciIy5133jmmiwYATH2mAKqrq1NNTY0OHjyol156SYODg1q3bp16enpG3G7r1q1qaWkZvjz00ENjumgAwNRnehPCnj17Rvx/165dKi4u1qFDh7R69erh63NyclRa6vciMADg0nBRrwF1dJx9t0dhYeGI63/yk5+oqKhIixcv1o4dO9Tbe/4/1ZtMJtXZ2TniAgCY/rzfhp1Op3XPPffo2muv1eLFi4evv+222zRv3jyVl5fr8OHD+ta3vqX6+nr97Gc/O+fnqa2t1QMPPOC7DADAFOUdQDU1NTpy5IheffXVEdffcccdw/9esmSJysrKtHbtWjU2NmrBggUf+Tw7duzQ9u3bh//f2dmpiooK32UBAKYIrwDatm2bXnzxRe3fv19z5sz52NuuXLlSktTQ0HDOAEokEkok7L9gBwCY2kwB5JzT3Xffreeee0779u1TZWXlBWfeeustSVJZWZnXAgEA05MpgGpqarR792698MILys3NVWtrqyQpPz9f2dnZamxs1O7du/XFL35Rs2bN0uHDh3Xvvfdq9erVWrp06bh8AQCAqckUQI899piks79s+m898cQT2rJli+LxuF5++WU98sgj6unpUUVFhTZt2qRvf/vbY7ZgAMD0YP4R3MepqKhQXV3dRS0IAHBpmLRt2EW/alNGbJzfnBCzNxJLklrsLdW919tbie8seM880zJ0/t+5Op+5GTPNM5I0mG2vE4/9VYl55rOPvmKe8W3DTvZnmmcyeuy/ThfvMo8oNuDR6Jz2a4EeyrM30ee/7fFFHTlqHoledeHXnj/sd/+1wDwjSUuX/bN55uipIvuGmnLsM2UD9hlJ0aS9grzoN7ZjPDUQVfNo1mJeCQAAY4AAAgAEQQABAIIggAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQUzaMtKholwpw16IaOHifvkb7+kzz7z/X+wloT6aU/YC11ikx2tbnfPt+y/3eNI8c+/+W80z8VZ7qagkyaOfNt5uL2V1HodexKNXNDaYtg95biva02+e+cOXP2OeWfTf/sk8c0WGvVRUkt44VWGeGRqyH0SxfvsxFDkeN89IUm+Z/Wl/9t83mW4/lB5dUSpnQACAIAggAEAQBBAAIAgCCAAQBAEEAAiCAAIABEEAAQCCIIAAAEEQQACAIAggAEAQBBAAIIhJ1wXn3NkSqqGUvTPMvK2oX/5G0/a1pXrtM51d9h6vnqR9pivDry8slbR3fw0N2WfSffZurXR/yjwjSc6jCy6VtPd4yePwHhq0f01DQ6Pr5PqwyJBHGZzHYzY1aD8eBrrtX9NAxqB5RpKGeuxfU7rX/rTqkh4zKY/7SNKQRz/gaLvdPnz7D57PzyfiLnSLCXb8+HFVVNgLAAEAk0tzc7PmzJlz3o9PugBKp9M6ceKEcnNzFYmM/M6
ys7NTFRUVam5uVl5eXqAVhsd+OIv9cBb74Sz2w1mTYT8459TV1aXy8nJFP+YnTZPuR3DRaPRjE1OS8vLyLukD7APsh7PYD2exH85iP5wVej/k5+df8Da8CQEAEAQBBAAIYkoFUCKR0P33369Ewv5XP6cT9sNZ7Iez2A9nsR/Omkr7YdK9CQEAcGmYUmdAAIDpgwACAARBAAEAgiCAAABBTJkA2rlzpz7xiU8oKytLK1eu1G9+85vQS5pw3/3udxWJREZcFi1aFHpZ427//v268cYbVV5erkgkoueff37Ex51zuu+++1RWVqbs7GxVVVXp6NGjYRY7ji60H7Zs2fKR42PDhg1hFjtOamtrdc011yg3N1fFxcXauHGj6uvrR9ymv79fNTU1mjVrlmbOnKlNmzapra0t0IrHx2j2w5o1az5yPNx5552BVnxuUyKAfvrTn2r79u26//779cYbb2jZsmVav369Tp48GXppE+7qq69WS0vL8OXVV18NvaRx19PTo2XLlmnnzp3n/PhDDz2kRx99VI8//rhee+01zZgxQ+vXr1d/v73ocjK70H6QpA0bNow4Pp566qkJXOH4q6urU01NjQ4ePKiXXnpJg4ODWrdunXp6eoZvc++99+rnP/+5nn32WdXV1enEiRO6+eabA6567I1mP0jS1q1bRxwPDz30UKAVn4ebAlasWOFqamqG/59KpVx5ebmrra0NuKqJd//997tly5aFXkZQktxzzz03/P90Ou1KS0vdww8/PHxde3u7SyQS7qmnngqwwonx4f3gnHObN292N910U5D1hHLy5EknydXV1Tnnzt73mZmZ7tlnnx2+zdtvv+0kuQMHDoRa5rj78H5wzrkvfOEL7k/+5E/CLWoUJv0Z0MDAgA4dOqSqqqrh66LRqKqqqnTgwIGAKwvj6NGjKi8v1/z583X77bfr2LFjoZcUVFNTk1pbW0ccH/n5+Vq5cuUleXzs27dPxcXFWrhwoe666y6dPn069JLGVUdHhySpsLBQknTo0CENDg6OOB4WLVqkuXPnTuvj4cP74QM/+clPVFRUpMWLF2vHjh3q7e0NsbzzmnRlpB926tQppVIplZSUjLi+pKREv/vd7wKtKoyVK1dq165dWrhwoVpaWvTAAw/o+uuv15EjR5Sbmxt6eUG0trZK0jmPjw8+dqnYsGGDbr75ZlVWVqqxsVF/9md/purqah04cECxmMcfOprk0um07rnnHl177bVavHixpLPHQzweV0FBwYjbTufj4Vz7QZJuu+02zZs3T+Xl5Tp8+LC+9a1vqb6+Xj/72c8CrnakSR9A+FfV1dXD/166dKlWrlypefPm6ZlnntHXvva1gCvDZHDrrbcO/3vJkiVaunSpFixYoH379mnt2rUBVzY+ampqdOTIkUviddCPc779cMcddwz/e8mSJSorK9PatWvV2NioBQsWTPQyz2nS/wiuqKhIsVjsI+9iaWtrU2lpaaBVTQ4FBQW66qqr1NDQEHopwXxwDHB8fNT8+fNVVFQ0LY+Pbdu26cUXX9Qvf/nLEX++pbS0VAMDA2pvbx9x++l6PJxvP5zLypUrJWlSHQ+TPoDi8biWL1+uvXv3Dl+XTqe1d+9erVq1KuDKwuvu7lZjY6PKyspCLyWYyspKlZaWjjg+Ojs79dprr13yx8fx48d1+vTpaXV8OOe0bds2Pffcc3rllVdUWVk54uPLly9XZmbmiOOhvr5ex44dm1bHw4X2w7m89dZbkjS5jofQ74IYjaefftolEgm3a9cu99vf/tbdcccdrqCgwLW2toZe2oT6+te/7vbt2+eamprcr371K1dVVeWKiorcyZMnQy9tXHV1dbk333zTvfnmm06S+8EPfuDefPNN9+677zrnnPv+97/vCgoK3AsvvOAOHz7sbrrpJldZWen6+voCr3xsfdx+6Orqct/4xjfcgQMHXFNTk3v55ZfdZz7zGXfllVe6/v7+0EsfM3fddZfLz893+/btcy0tLcOX3t7e4dvceeedbu7cue6VV15xr7/+ulu1apVbtWpVwFWPvQvth4aGBvfggw+6119/3TU1NbkXXnjBzZ8/361evTrwykeaEgHknHM//OEP3dy5c108HncrVqxwBw8eDL2kCXfLLbe4srIyF4/H3eWXX+5uueUW19DQEHpZ4+6Xv/ylk/SRy+bNm51zZ9+K/Z3vfMeVlJS4RCLh1q5d6+rr68Muehx83H7o7e1169atc7Nnz3aZmZlu3rx5buvWrdPum7Rzff2S3BNPPDF8m76+PvfHf/zH7rLLLnM5OTnuS1/6kmtpaQm36HFwof1w7Ngxt3r1aldYWOgSiYS74oor3J/+6Z+6jo6OsAv/EP4cAwAgiEn/GhAAYHoigAAAQRBAAIAgCCAAQBAEEAAgCAIIABAEAQQACIIAAgAEQQABAIIggAAAQRBAAIAgCCAAQBD/Hx+Bql3uVSmtAAAAAElFTkSuQmCC\n"
},
"metadata": {}
}
]
},
{
"cell_type": "markdown",
"source": [
"Now that we have the data ready, let's start with our Deep learning eperimenation. "
],
"metadata": {
"id": "iyx58l8momAj"
}
},
{
"cell_type": "markdown",
"source": [
"# Setting up Wandb"
],
"metadata": {
"id": "0Tih4DGF6ieP"
}
},
{
"cell_type": "markdown",
"source": [
"You would need to have a Wandb account in order to use Wandb functionalities. You can create your own account going directly to the [wandb website](https://wandb.ai/site). "
],
"metadata": {
"id": "Nuj4dMrFovh3"
}
},
{
"cell_type": "markdown",
"source": [
"## Install wandb\n",
"\n",
"Installing wandb is as simple as any other python package using PIP installer. We can use the following cell to install it on colab notebook."
],
"metadata": {
"id": "GT0htYLXpC7z"
}
},
{
"cell_type": "code",
"source": [
"! pip install wandb"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "RE1Fu92H6lPr",
"outputId": "7772e3b0-da7d-4453-df49-64d19e682319"
},
"execution_count": 22,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
"Collecting wandb\n",
" Downloading wandb-0.15.3-py3-none-any.whl (2.0 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.0/2.0 MB\u001b[0m \u001b[31m28.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: Click!=8.0.0,>=7.0 in /usr/local/lib/python3.10/dist-packages (from wandb) (8.1.3)\n",
"Collecting GitPython!=3.1.29,>=1.0.0 (from wandb)\n",
" Downloading GitPython-3.1.31-py3-none-any.whl (184 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m184.3/184.3 kB\u001b[0m \u001b[31m24.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: requests<3,>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from wandb) (2.27.1)\n",
"Requirement already satisfied: psutil>=5.0.0 in /usr/local/lib/python3.10/dist-packages (from wandb) (5.9.5)\n",
"Collecting sentry-sdk>=1.0.0 (from wandb)\n",
" Downloading sentry_sdk-1.25.0-py2.py3-none-any.whl (206 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m206.5/206.5 kB\u001b[0m \u001b[31m27.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting docker-pycreds>=0.4.0 (from wandb)\n",
" Downloading docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB)\n",
"Requirement already satisfied: PyYAML in /usr/local/lib/python3.10/dist-packages (from wandb) (6.0)\n",
"Collecting pathtools (from wandb)\n",
" Downloading pathtools-0.1.2.tar.gz (11 kB)\n",
" Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
"Collecting setproctitle (from wandb)\n",
" Downloading setproctitle-1.3.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (30 kB)\n",
"Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from wandb) (67.7.2)\n",
"Requirement already satisfied: appdirs>=1.4.3 in /usr/local/lib/python3.10/dist-packages (from wandb) (1.4.4)\n",
"Requirement already satisfied: protobuf!=4.21.0,<5,>=3.19.0 in /usr/local/lib/python3.10/dist-packages (from wandb) (3.20.3)\n",
"Requirement already satisfied: six>=1.4.0 in /usr/local/lib/python3.10/dist-packages (from docker-pycreds>=0.4.0->wandb) (1.16.0)\n",
"Collecting gitdb<5,>=4.0.1 (from GitPython!=3.1.29,>=1.0.0->wandb)\n",
" Downloading gitdb-4.0.10-py3-none-any.whl (62 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.7/62.7 kB\u001b[0m \u001b[31m7.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.0.0->wandb) (1.26.15)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.0.0->wandb) (2022.12.7)\n",
"Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.0.0->wandb) (2.0.12)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.0.0->wandb) (3.4)\n",
"Collecting smmap<6,>=3.0.1 (from gitdb<5,>=4.0.1->GitPython!=3.1.29,>=1.0.0->wandb)\n",
" Downloading smmap-5.0.0-py3-none-any.whl (24 kB)\n",
"Building wheels for collected packages: pathtools\n",
" Building wheel for pathtools (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for pathtools: filename=pathtools-0.1.2-py3-none-any.whl size=8791 sha256=d072707735091c862526f11d126817c2ce7ab26ff2b9bbd2c50a46f0301a0e0d\n",
" Stored in directory: /root/.cache/pip/wheels/e7/f3/22/152153d6eb222ee7a56ff8617d80ee5207207a8c00a7aab794\n",
"Successfully built pathtools\n",
"Installing collected packages: pathtools, smmap, setproctitle, sentry-sdk, docker-pycreds, gitdb, GitPython, wandb\n",
"Successfully installed GitPython-3.1.31 docker-pycreds-0.4.0 gitdb-4.0.10 pathtools-0.1.2 sentry-sdk-1.25.0 setproctitle-1.3.2 smmap-5.0.0 wandb-0.15.3\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"## Login into Wandb\n",
"Once we have our Wandb account ready it is easy to access the API. We just need to login from colab to wandb using the following cell.\n",
"\n",
"You would require to insert the API key in order to run this cell. You can copy your own API key from [here](https://wandb.ai/quickstart?utm_source=app-resource-center&utm_medium=app&utm_term=quickstart)."
],
"metadata": {
"id": "HBflOQxIpbfK"
}
},
{
"cell_type": "code",
"source": [
"! wandb login"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "5AJTWqnc6n60",
"outputId": "2979b673-340b-4bf4-c45b-68e2ca95adef"
},
"execution_count": 23,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[34m\u001b[1mwandb\u001b[0m: Logging into wandb.ai. (Learn how to deploy a W&B server locally: https://wandb.me/wandb-server)\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: You can find your API key in your browser here: https://wandb.ai/authorize\n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Paste an API key from your profile and hit enter, or press ctrl+c to quit: \n",
"\u001b[34m\u001b[1mwandb\u001b[0m: Appending key for api.wandb.ai to your netrc file: /root/.netrc\n"
]
}
]
},
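{
"cell_type": "markdown",
"source": [
"If you prefer a non-interactive login (for example in scripts), wandb also reads the `WANDB_API_KEY` environment variable, or you can pass the key to `wandb.login()` directly — a sketch, assuming the key has already been placed in the environment:\n",
"\n",
"```python\n",
"import os\n",
"import wandb\n",
"\n",
"# assumes WANDB_API_KEY was set beforehand (never hard-code the key)\n",
"wandb.login(key=os.environ[\"WANDB_API_KEY\"])\n",
"```"
],
"metadata": {}
},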
{
"cell_type": "markdown",
"source": [
"## Import Wandb"
],
"metadata": {
"id": "b5D9dpjLqUG-"
}
},
{
"cell_type": "code",
"source": [
"import wandb"
],
"metadata": {
"id": "4y_E-kvy7zfb"
},
"execution_count": 24,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Wandb.init: Initialize Wandb Project\n",
"\n",
"Once we have everything setup, we can now initialize our wandb project using the `wandb.init()`.\n",
"\n",
"We have provided the project name and name of the instance to wandb.init. You can set up any project name.\n",
"\n",
"This cell will give you the link for the wandb dashbord where you can find the project and all the intial setup."
],
"metadata": {
"id": "XBz5ErRMqYaT"
}
},
{
"cell_type": "code",
"source": [
"wandb.init(project=\"sign-language-model\", name=\"SLR\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 144
},
"id": "2DOBpiVZIdiO",
"outputId": "f1c18921-91fc-4c24-83d1-6ec60bf08810"
},
"execution_count": 25,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"\u001b[34m\u001b[1mwandb\u001b[0m: Currently logged in as: \u001b[33mkaziarshad97\u001b[0m. Use \u001b[1m`wandb login --relogin`\u001b[0m to force relogin\n"
]
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
"Tracking run with wandb version 0.15.3"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
"Run data is saved locally in <code>/content/wandb/run-20230604_134915-oex8o43r</code>"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
"Syncing run <strong><a href='https://wandb.ai/kaziarshad97/sign-language-model/runs/oex8o43r' target=\"_blank\">SLR</a></strong> to <a href='https://wandb.ai/kaziarshad97/sign-language-model' target=\"_blank\">Weights & Biases</a> (<a href='https://wandb.me/run' target=\"_blank\">docs</a>)<br/>"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
" View project at <a href='https://wandb.ai/kaziarshad97/sign-language-model' target=\"_blank\">https://wandb.ai/kaziarshad97/sign-language-model</a>"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
" View run at <a href='https://wandb.ai/kaziarshad97/sign-language-model/runs/oex8o43r' target=\"_blank\">https://wandb.ai/kaziarshad97/sign-language-model/runs/oex8o43r</a>"
]
},
"metadata": {}
},
{
"output_type": "execute_result",
"data": {
"text/html": [
"<button onClick=\"this.nextSibling.style.display='block';this.style.display='none';\">Display W&B run</button><iframe src='https://wandb.ai/kaziarshad97/sign-language-model/runs/oex8o43r?jupyter=true' style='border:none;width:100%;height:420px;display:none;'></iframe>"
],
"text/plain": [
"<wandb.sdk.wandb_run.Run at 0x7f07d1bf9360>"
]
},
"metadata": {},
"execution_count": 25
}
]
},
{
"cell_type": "markdown",
"source": [
"# Defining Hyper-Parameters"
],
"metadata": {
"id": "tcimjF-U4Ump"
}
},
{
"cell_type": "markdown",
"source": [
"We will define the hyperparameters for our CNN model. The config is nothing but a dictonary with all the hyper-parameters. We can set this dictionary as a wandb config dictionary. "
],
"metadata": {
"id": "_YuCrN0tz33V"
}
},
{
"cell_type": "code",
"source": [
"config = dict(\n",
" epochs= 20, # no of epochs\n",
" classes=24 +1, # classes (total 24 letters plus additional None)\n",
" image_size = 28, # size of the image\n",
" kernels=[16, 32], # kernel size for each layer in CNN, you can tweak this according to your need\n",
" batch_size=32,\n",
" learning_rate=0.005,\n",
" architecture=\"CNN\"\n",
")"
],
"metadata": {
"id": "JDqaNkJl20wC"
},
"execution_count": 48,
"outputs": []
},
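{
"cell_type": "markdown",
"source": [
"Once this dictionary is passed to `wandb.init` (as done in the pipeline below), the values become available on `wandb.config` via both attribute and key access — a small sketch:\n",
"\n",
"```python\n",
"run = wandb.init(project=\"sign-language-model\", config=config)\n",
"print(wandb.config.learning_rate)   # 0.005\n",
"print(wandb.config[\"batch_size\"])   # dict-style access also works\n",
"run.finish()\n",
"```"
],
"metadata": {}
},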
{
"cell_type": "markdown",
"source": [
"# Experimental Pipeline"
],
"metadata": {
"id": "l-LyH8xY84ED"
}
},
{
"cell_type": "markdown",
"source": [
"This is the most important step, as we are using plain Pytorch (not Pytorch lightning or Keras over Tensorflow) we need to define the moddel pipeline. Following code defines everything that we are going to do for this model training. \n",
"\n",
"Let's put it stepwise, \n",
"\n",
"1. Initialize the wandb: We initialize the wandb project (just like we did before). This will create a project with name specified and the instance of that project in our wandb dashboard. It also takes the hypyerparameter as the input, wandb tracks all the values of the hyperparameters provided on the specified project d\n",
"\n",
"2. Passing the config: We use wandb.config to track all the parameters provided in our config. This help us track the model performance based on the various parameters selected. \n",
"\n",
"3. Training and testing the model"
],
"metadata": {
"id": "GmoMNsW10pMd"
}
},
{
"cell_type": "markdown",
"source": [
"### wandb.config"
],
"metadata": {
"id": "h928gnjkG-qY"
}
},
{
"cell_type": "code",
"source": [
"def model_pipeline(hyperparameters):\n",
"\n",
" # tell wandb to get started\n",
" with wandb.init(project=\"sign-language-model\", name=\"SLR\", config =hyperparameters):\n",
" # access all HPs through wandb.config, so logging matches execution!\n",
" config = wandb.config\n",
"\n",
" # make the model, data, and optimization problem\n",
" model, train_loader, test_loader, criterion, optimizer = model_creation(config)\n",
" print(model)\n",
"\n",
" # and use them to train the model\n",
" train_model(model, train_loader, criterion, optimizer, config)\n",
"\n",
" test_model(model, test_loader)\n",
"\n",
" return model"
],
"metadata": {
"id": "3st5KO5b86ZV"
},
"execution_count": 61,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Dataset and DataLoaders"
],
"metadata": {
"id": "TB1A5mGm__nD"
}
},
{
"cell_type": "markdown",
"source": [
"For this demo notebook, we are doing just basic pre-processing for our image data. This code is suffiecient for most of the image datasets. \n",
"\n",
"We are randomisly choosing the images to perform operations like rotation, cropping, flipping and others. \n",
"\n",
"Pytorch transforms allows us to do these operations directly inside the Dataset defined by us."
],
"metadata": {
"id": "kMXafyftEDgJ"
}
},
{
"cell_type": "markdown",
"source": [
"### Pre-processing using Transforms"
],
"metadata": {
"id": "ZLZPBigDEsG8"
}
},
{
"cell_type": "code",
"source": [
"random_transforms = transforms.Compose([\n",
" transforms.RandomRotation(30), # Randomly rotate the image by up to 30 degrees\n",
" # transforms.RandomResizedCrop(IMAGE_SIZE), # Randomly crop and resize the image to 224x224\n",
" # transforms.RandomHorizontalFlip(), # Randomly flip the image horizontally\n",
"])\n",
"\n",
"# Define the fixed transformations\n",
"fixed_transforms = transforms.Compose([\n",
" transforms.ToTensor(),\n",
" transforms.Normalize(mean=[0.5], std=[0.5])\n",
"])\n",
"\n",
"# Define the overall transformation pipeline\n",
"transform = transforms.Compose([\n",
" transforms.RandomApply([random_transforms], p=0.5), # Apply random transformations with a probability of 0.5\n",
" fixed_transforms\n",
"])\n",
" "
],
"metadata": {
"id": "xUu_xN0DJNxB"
},
"execution_count": 50,
"outputs": []
},
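{
"cell_type": "markdown",
"source": [
"As a sanity check, you can apply the composed `transform` to a single image and inspect the result — a sketch reusing `X_train` from above:\n",
"\n",
"```python\n",
"sample = Image.fromarray(np.array(X_train.iloc[0]).reshape(28, 28).astype(np.uint8))\n",
"out = transform(sample)\n",
"print(out.shape, out.min().item(), out.max().item())  # torch.Size([1, 28, 28]), values in [-1, 1]\n",
"```"
],
"metadata": {}
},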
{
"cell_type": "code",
"source": [
"\n",
"transform_test = transforms.Compose([\n",
" transforms.ToTensor(),\n",
"])"
],
"metadata": {
"id": "yTyX_XjOLLNf"
},
"execution_count": 51,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Custom Dataset and Dataloaders"
],
"metadata": {
"id": "1xllkpKDExY_"
}
},
{
"cell_type": "code",
"source": [
"\n",
"class SignDataSet(Dataset):\n",
" def __init__(\n",
" self,\n",
" image_df, \n",
" label_df,\n",
" transform,\n",
" split = None,\n",
" ):\n",
" self.image_df = image_df \n",
" self.label_df = torch.nn.functional.one_hot(torch.tensor(np.array(label_df))).float()\n",
" self.split = split \n",
" self.transform = transform\n",
"\n",
" def __len__(self):\n",
" return len(self.label_df)\n",
" \n",
" def __getitem__(self, index):\n",
" image = self.image_df.iloc[index]\n",
" image = np.reshape(np.array(image), (28,28))\n",
"\n",
" image = Image.fromarray(image.astype(np.uint8))\n",
"\n",
" label = self.label_df[index]\n",
" # label = torch.nn.functional.one_hot(torch.tensor(label))\n",
"\n",
" if self.split == 'train':\n",
" image = self.transform(image)\n",
"\n",
" if self.split == 'test':\n",
" image = self.transform(image)\n",
" return image, label"
],
"metadata": {
"id": "Ztrc9KT3-qyO"
},
"execution_count": 52,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def make_loader(x, y, config, mode):\n",
" data = SignDataSet(x, y, transform, mode)\n",
" data_loader = DataLoader(data, batch_size = config['batch_size'], drop_last = True)\n",
" return data_loader"
],
"metadata": {
"id": "RTYk3ybl_NAb"
},
"execution_count": 53,
"outputs": []
},
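{
"cell_type": "markdown",
"source": [
"A quick way to verify the loader works is to pull a single batch and check its shapes — with `batch_size=32` and 25 one-hot classes we expect:\n",
"\n",
"```python\n",
"loader = make_loader(X_train, y_train, config, 'train')\n",
"images, labels = next(iter(loader))\n",
"print(images.shape, labels.shape)  # torch.Size([32, 1, 28, 28]) torch.Size([32, 25])\n",
"```"
],
"metadata": {}
},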
{
"cell_type": "markdown",
"source": [
"# Defining and training the model"
],
"metadata": {
"id": "o1NIHFvUACaq"
}
},
{
"cell_type": "markdown",
"source": [
"Now we can build our model. This particular model consists of two CNN layers with one output layer. This model is just for the demo purposes but we can tweak any parameters in the cofig defined above. \n",
"\n",
"Model takes the input as 28x28 input image size and then classifies them into 25 categories. "
],
"metadata": {
"id": "NDDMR7UFEocm"
}
},
{
"cell_type": "code",
"source": [
"class SignLabelModel(nn.Module):\n",
" def __init__(self, kernels, num_classes):\n",
" super(SignLabelModel, self).__init__()\n",
" self.layer1 = nn.Sequential(\n",
" nn.Conv2d(1, kernels[0], kernel_size=3, stride=1, padding=1),\n",
" nn.ReLU(inplace=True),\n",
" nn.MaxPool2d(kernel_size=2, stride=2))\n",
" \n",
" self.layer2 = nn.Sequential(\n",
" nn.Conv2d(kernels[0], kernels[1], kernel_size=3, stride=1, padding=1),\n",
" nn.ReLU(inplace=True),\n",
" nn.MaxPool2d(kernel_size=2, stride=2)\n",
" )\n",
" self.classifier = nn.Sequential(\n",
" nn.Linear(32 * 7 * 7, 128),\n",
" nn.ReLU(inplace=True),\n",
" nn.Linear(128, num_classes)\n",
" )\n",
"\n",
" def forward(self, x):\n",
" x = self.layer1(x)\n",
" x = self.layer2(x)\n",
" x = x.view(x.size(0), -1)\n",
" x = self.classifier(x)\n",
" return x"
],
"metadata": {
"id": "646LdTm1-xBN"
},
"execution_count": 54,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Setting up model, optimizer and loss function\n",
"\n",
"Once we have everything read we can define the entire model in the following way. \n",
"\n",
"We have set `nn.CrossEntropyLoss()` which will calculate the loss for the categorical data. `Adam` optimizer is an obvious choice for the CNN architecture. \n",
"\n",
"You can even try to tweak these parameters through config."
],
"metadata": {
"id": "Bdy2eEToFY_Q"
}
},
{
"cell_type": "code",
"source": [
"def model_creation(config):\n",
" train_loader = make_loader(X_train, y_train, config, 'train')\n",
" test_loader = make_loader(X_val, y_val, config, 'test')\n",
"\n",
" # Make the model\n",
" model = SignLabelModel(config.kernels, config.classes).to(device)\n",
"\n",
" # Make the loss and optimizer\n",
" criterion = nn.CrossEntropyLoss()\n",
" optimizer = torch.optim.Adam(\n",
" model.parameters(), lr=config.learning_rate)\n",
" \n",
" return model, train_loader, test_loader, criterion, optimizer"
],
"metadata": {
"id": "Lcj1iQm89sEH"
},
"execution_count": 55,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Model training"
],
"metadata": {
"id": "fZbY9LlTGAXp"
}
},
{
"cell_type": "markdown",
"source": [
"### wandb.watch\n",
"Following is a training loop for our CNN model. A different thing to note here is the inclusion of `wandb.watch` which tells wandb to track all the experiments which we are going to perform; `log` is set to `all`.\n",
"\n",
"Entire training loop is similar to any other deep learning loop."
],
"metadata": {
"id": "ZGip-eHXGE7o"
}
},
{
"cell_type": "code",
"source": [
"def train_model(model, loader, criterion, optimizer, config):\n",
" # Tell wandb to watch what the model gets up to: gradients, weights, and more!\n",
" wandb.watch(model, criterion, log=\"all\", log_freq=10)\n",
"\n",
" # Run training and track with wandb\n",
" total_batches = len(loader) * config.epochs\n",
" example_ct = 0 # number of examples seen\n",
" batch_ct = 0\n",
" for epoch in tqdm(range(config.epochs)):\n",
" for _, (images, labels) in enumerate(loader):\n",
"\n",
" loss = train_batch(images, labels, model, optimizer, criterion)\n",
" example_ct += len(images)\n",
" batch_ct += 1\n",
"\n",
" # Report metrics every 25th batch\n",
" if ((batch_ct + 1) % 25) == 0:\n",
" train_log(loss, example_ct, epoch)\n",
"\n",
"\n",
"def train_batch(images, labels, model, optimizer, criterion):\n",
" images, labels = images.to(device), labels.to(device)\n",
" \n",
" # Forward pass ➡\n",
" outputs = model(images)\n",
" loss = criterion(outputs, labels)\n",
" \n",
" # Backward pass ⬅\n",
" optimizer.zero_grad()\n",
" loss.backward()\n",
"\n",
" # Step with optimizer\n",
" optimizer.step()\n",
"\n",
" return loss"
],
"metadata": {
"id": "yAatT7Tz4qs5"
},
"execution_count": 62,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### wandb.save"
],
"metadata": {
"id": "rAnOlBHlIG1d"
}
},
{
"cell_type": "markdown",
"source": [
"We can make use of wandb.save to save our model in the ONNX format. Which will then allow us to deploy it on wide set of platforms. \n",
"\n",
"Wandb even can visualise this model into the dashboard."
],
"metadata": {
"id": "hy0cMrlqIStI"
}
},
{
"cell_type": "markdown",
"source": [
"You would need to install onxx before this step."
],
"metadata": {
"id": "BsPxNiM9NT29"
}
},
{
"cell_type": "code",
"source": [
"! pip install onnx"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "5NQxCb8WNaXE",
"outputId": "2a2d9b7b-8202-4128-95b2-4a9ef7aba33d"
},
"execution_count": 87,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
"Collecting onnx\n",
" Downloading onnx-1.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (14.6 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m14.6/14.6 MB\u001b[0m \u001b[31m97.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from onnx) (1.22.4)\n",
"Requirement already satisfied: protobuf>=3.20.2 in /usr/local/lib/python3.10/dist-packages (from onnx) (3.20.3)\n",
"Requirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.10/dist-packages (from onnx) (4.5.0)\n",
"Installing collected packages: onnx\n",
"Successfully installed onnx-1.14.0\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"def test_model(model, test_loader):\n",
" model.eval()\n",
"\n",
" # Run the model on some test examples\n",
" with torch.no_grad():\n",
" correct, total = 0, 0\n",
" for images, labels in test_loader:\n",
" images, labels = images.to(device), labels.to(device)\n",
" outputs = model(images)\n",
" _, predicted = torch.max(outputs.data, 1)\n",
" total += labels.size(0)\n",
" labels = [torch.argmax(l).item() for l in labels]\n",
" # print(predicted.shape, labels)\n",
" # correct += (predicted == labels).sum().item()\n",
" count = sum([1 for i in range(len(labels)) if labels[i] == predicted[i].item()])\n",
"\n",
" print(f\"Accuracy of the model on the {total} \" +\n",
" f\"test images: {correct / total:%}\")\n",
" \n",
" wandb.log({\"test_accuracy\": correct / total})\n",
"\n",
" # Save the model in the exchangeable ONNX format\n",
" torch.onnx.export(model, images, \"model.onnx\")\n",
" wandb.save(\"model.onnx\")"
],
"metadata": {
"id": "hfmMlrIIIKl4"
},
"execution_count": 88,
"outputs": []
},
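{
"cell_type": "markdown",
"source": [
"Once exported, the ONNX file can be loaded outside PyTorch — for example with `onnxruntime` (a separate package, not installed above) — a sketch:\n",
"\n",
"```python\n",
"import numpy as np\n",
"import onnxruntime as ort  # assumes: pip install onnxruntime\n",
"\n",
"session = ort.InferenceSession(\"model.onnx\")\n",
"input_name = session.get_inputs()[0].name\n",
"dummy = np.random.randn(32, 1, 28, 28).astype(np.float32)  # batch size fixed at 32 by the export\n",
"logits = session.run(None, {input_name: dummy})[0]\n",
"print(logits.shape)  # (32, 25)\n",
"```"
],
"metadata": {}
},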
{
"cell_type": "markdown",
"source": [
"### wandb.log"
],
"metadata": {
"id": "cJrTPYDMGsC0"
}
},
{
"cell_type": "markdown",
"source": [
"As discussed, wandb.log will save all the logs and show them into the wandb dashboard."
],
"metadata": {
"id": "HEQ-aD2QHDZw"
}
},
{
"cell_type": "code",
"source": [
"def train_log(loss, example_ct, epoch):\n",
" # Where the magic happens\n",
" wandb.log({\"epoch\": epoch, \"loss\": loss}, step=example_ct)\n",
" print(f\"Loss after {str(example_ct).zfill(5)} examples: {loss:.3f}\")"
],
"metadata": {
"id": "IGfK3__KDfcf"
},
"execution_count": 89,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Running entire pipeline"
],
"metadata": {
"id": "5xUusxc-HR3s"
}
},
{
"cell_type": "markdown",
"source": [
"Now it's time to run the entire pipeline. Live training can be visualized in the wandb dashboard. "
],
"metadata": {
"id": "XQFjRMODHU7A"
}
},
{
"cell_type": "code",
"source": [
"model = model_pipeline(config)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"id": "At6zoMPxDlXT",
"outputId": "e758c127-bbc2-41bc-b699-bcf0c8535f92"
},
"execution_count": 90,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
"Tracking run with wandb version 0.15.3"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
"Run data is saved locally in <code>/content/wandb/run-20230604_162042-we73rbwh</code>"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
"Syncing run <strong><a href='https://wandb.ai/kaziarshad97/sign-language-model/runs/we73rbwh' target=\"_blank\">SLR</a></strong> to <a href='https://wandb.ai/kaziarshad97/sign-language-model' target=\"_blank\">Weights & Biases</a> (<a href='https://wandb.me/run' target=\"_blank\">docs</a>)<br/>"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
" View project at <a href='https://wandb.ai/kaziarshad97/sign-language-model' target=\"_blank\">https://wandb.ai/kaziarshad97/sign-language-model</a>"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
" View run at <a href='https://wandb.ai/kaziarshad97/sign-language-model/runs/we73rbwh' target=\"_blank\">https://wandb.ai/kaziarshad97/sign-language-model/runs/we73rbwh</a>"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"SignLabelModel(\n",
" (layer1): Sequential(\n",
" (0): Conv2d(1, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
" (1): ReLU(inplace=True)\n",
" (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
" )\n",
" (layer2): Sequential(\n",
" (0): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
" (1): ReLU(inplace=True)\n",
" (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
" )\n",
" (classifier): Sequential(\n",
" (0): Linear(in_features=1568, out_features=128, bias=True)\n",
" (1): ReLU(inplace=True)\n",
" (2): Linear(in_features=128, out_features=25, bias=True)\n",
" )\n",
")\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"\r 0%| | 0/2 [00:00<?, ?it/s]"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Loss after 00768 examples: 3.378\n",
"Loss after 01568 examples: 2.951\n",
"Loss after 02368 examples: 2.070\n",
"Loss after 03168 examples: 1.471\n",
"Loss after 03968 examples: 1.491\n",
"Loss after 04768 examples: 1.436\n",
"Loss after 05568 examples: 1.037\n",
"Loss after 06368 examples: 1.212\n",
"Loss after 07168 examples: 0.997\n",
"Loss after 07968 examples: 1.078\n",
"Loss after 08768 examples: 0.756\n",
"Loss after 09568 examples: 1.107\n",
"Loss after 10368 examples: 0.594\n",
"Loss after 11168 examples: 1.002\n",
"Loss after 11968 examples: 0.794\n",
"Loss after 12768 examples: 0.543\n",
"Loss after 13568 examples: 0.597\n",
"Loss after 14368 examples: 0.804\n",
"Loss after 15168 examples: 0.568\n",
"Loss after 15968 examples: 0.479\n",
"Loss after 16768 examples: 0.341\n",
"Loss after 17568 examples: 0.407\n",
"Loss after 18368 examples: 0.735\n",
"Loss after 19168 examples: 0.963\n",
"Loss after 19968 examples: 0.466\n",
"Loss after 20768 examples: 0.491\n",
"Loss after 21568 examples: 0.733\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"\r 50%|█████ | 1/2 [00:22<00:22, 22.66s/it]"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Loss after 22368 examples: 0.709\n",
"Loss after 23168 examples: 0.501\n",
"Loss after 23968 examples: 0.423\n",
"Loss after 24768 examples: 0.252\n",
"Loss after 25568 examples: 0.267\n",
"Loss after 26368 examples: 0.416\n",
"Loss after 27168 examples: 0.122\n",
"Loss after 27968 examples: 0.257\n",
"Loss after 28768 examples: 0.205\n",
"Loss after 29568 examples: 0.451\n",
"Loss after 30368 examples: 0.323\n",
"Loss after 31168 examples: 0.176\n",
"Loss after 31968 examples: 0.293\n",
"Loss after 32768 examples: 0.262\n",
"Loss after 33568 examples: 0.176\n",
"Loss after 34368 examples: 0.093\n",
"Loss after 35168 examples: 0.253\n",
"Loss after 35968 examples: 0.628\n",
"Loss after 36768 examples: 0.385\n",
"Loss after 37568 examples: 0.135\n",
"Loss after 38368 examples: 0.315\n",
"Loss after 39168 examples: 0.238\n",
"Loss after 39968 examples: 0.154\n",
"Loss after 40768 examples: 0.187\n",
"Loss after 41568 examples: 0.128\n",
"Loss after 42368 examples: 0.082\n",
"Loss after 43168 examples: 0.063\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"100%|██████████| 2/2 [00:36<00:00, 18.19s/it]\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Accuracy of the model on the 5472 test images: 0.000000%\n",
"============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============\n",
"verbose: False, log level: Level.ERROR\n",
"======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================\n",
"\n"
]
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
"Waiting for W&B process to finish... <strong style=\"color:green\">(success).</strong>"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
"<style>\n",
" table.wandb td:nth-child(1) { padding: 0 10px; text-align: left ; width: auto;} td:nth-child(2) {text-align: left ; width: 100%}\n",
" .wandb-row { display: flex; flex-direction: row; flex-wrap: wrap; justify-content: flex-start; width: 100% }\n",
" .wandb-col { display: flex; flex-direction: column; flex-basis: 100%; flex: 1; padding: 10px; }\n",
" </style>\n",
"<div class=\"wandb-row\"><div class=\"wandb-col\"><h3>Run history:</h3><br/><table class=\"wandb\"><tr><td>epoch</td><td>▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁████████████████████</td></tr><tr><td>loss</td><td>█▇▅▄▄▃▃▃▂▂▃▃▂▃▂▂▂▃▂▂▂▂▂▁▂▁▁▂▁▁▁▁▁▂▁▂▁▁▁▁</td></tr><tr><td>test_accuracy</td><td>▁</td></tr></table><br/></div><div class=\"wandb-col\"><h3>Run summary:</h3><br/><table class=\"wandb\"><tr><td>epoch</td><td>1</td></tr><tr><td>loss</td><td>0.06329</td></tr><tr><td>test_accuracy</td><td>0.0</td></tr></table><br/></div></div>"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
" View run <strong style=\"color:#cdcd00\">SLR</strong> at: <a href='https://wandb.ai/kaziarshad97/sign-language-model/runs/we73rbwh' target=\"_blank\">https://wandb.ai/kaziarshad97/sign-language-model/runs/we73rbwh</a><br/>Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 1 other file(s)"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
"Find logs at: <code>./wandb/run-20230604_162042-we73rbwh/logs</code>"
]
},
"metadata": {}
}
]
},
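    {
      "cell_type": "markdown",
      "source": [
        "Since test_model exported the network to model.onnx, the onnx package installed earlier can load the file back and validate its graph. This is a minimal sanity-check sketch, not part of the original pipeline:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import onnx\n",
        "\n",
        "# Load the exported model from disk and verify that its graph is well-formed\n",
        "onnx_model = onnx.load(\"model.onnx\")\n",
        "onnx.checker.check_model(onnx_model)\n",
        "\n",
        "# Print a human-readable summary of the computation graph\n",
        "print(onnx.helper.printable_graph(onnx_model.graph))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "The logged history of a finished run can also be retrieved programmatically through wandb's public API. Here is a minimal sketch using the run path printed in the output above; it assumes you are logged in with access to that run."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import wandb\n",
        "\n",
        "# Minimal sketch: fetch the logged metrics of a finished run via the public API.\n",
        "# The path below (entity/project/run_id) matches the run synced above.\n",
        "api = wandb.Api()\n",
        "run = api.run(\"kaziarshad97/sign-language-model/we73rbwh\")\n",
        "\n",
        "# run.history() returns a pandas DataFrame of everything passed to wandb.log\n",
        "history = run.history()\n",
        "print(history[[\"epoch\", \"loss\"]].tail())"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },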
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "iDk2ayrMRTvg"
},
"execution_count": null,
"outputs": []
}
]
}