{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "HW5",
"provenance": [],
"collapsed_sections": [],
"machine_shape": "hm",
"authorship_tag": "ABX9TyNO4uSND0utIPOOA0cToGtM",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/tuffacton/d9d8ac1bfe8d8f8d9aa779f990e5a340/hw5.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "I7GL9WXDog-b",
"colab_type": "text"
},
"source": [
"# Homework 5\n",
"Nicolas Acton\n",
"\n",
"You can run your own copy of this Colaboratory notebook from this gist:\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wcY97ecRo1xv",
"colab_type": "text"
},
"source": [
"## Task\n",
"Investigate the effect of different neural network architectures on the performance of a classification problem. Specifically, which activation functions, and number and sizes of hidden layers, give the best performance for a given dataset.\n",
"\n",
"Here is what we'll do:\n",
"1. Load the \"ccpp\" dataset using the data on the sheet named \"allBin\". The ID field is \"ID\" and the binary target is \"TG\".\n",
"2. Normalize/scale the data appropriately\n",
"3. Divide the data into training and test sets\n",
"4. For a variety of architectures:\n",
"\n",
" a. Train an MLPClassifer on the training data.\n",
"\n",
" b. Measure the performance on the test set using two different measures: AUROC and misclassification rate.\n",
"5. Build two tables: the ten best model architectures by AUROC, and the ten best model architectures by misclassification rate.\n",
"6. Identify the best model using each of the two measures of performance: do they agree or do they indicate different models?\n",
"7. Summarize your results and your findings.\n",
"\n",
"Note: We want to see all combinations of number of hiddens layers (from 1 to 3), number of nodes in hidden layers (1 to 20 without assumption that the layers have the same number of nodes), and all combinations of activation features (relu, identity, and tanh). This program could take hours to run, so keep that in mind."
]
},
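{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before diving in, a quick count of the size of this trial space (a back-of-the-envelope check; it matches the shape of the results table built later):"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Count the architectures described above: 1, 2, or 3 hidden layers,\n",
"# with each layer independently having anywhere from 1 to 20 nodes.\n",
"architectures = 20 + 20**2 + 20**3   # 8,420 distinct layer configurations\n",
"trials = architectures * 3           # x 3 activation functions = 25,260 trials\n",
"print(architectures, trials)"
],
"execution_count": 0,
"outputs": []
},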
{
"cell_type": "code",
"metadata": {
"id": "uT1xYpnpEo6I",
"colab_type": "code",
"colab": {}
},
"source": [
"import sklearn\n",
"import pandas as pd\n",
"import numpy as np\n",
"import math"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "JyLhRZgno0-v",
"colab_type": "text"
},
"source": [
"### Data Preparation\n",
"Luckily we're using just one sheet from an otherwise larger set of sheets. We'll transfer to csv for easier data retrieval and load from a gist."
]
},
{
"cell_type": "code",
"metadata": {
"id": "HiEaoOlRuFoP",
"colab_type": "code",
"colab": {}
},
"source": [
"data = pd.read_csv('https://gist.githubusercontent.com/tuffacton/a115eeab803f0eaff766b6a943c8d760/raw/26bd4ad3f8a1b950010953ecd3fe8dd72424189b/ccpp.csv')"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "nIYWD6ycuTCt",
"colab_type": "code",
"outputId": "fe31f9d5-40fa-4cc4-cb93-d0a248c343ad",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 406
}
},
"source": [
"# Using the ID as the ID was troublesome for further normalization, so we'll just allow position to be utilized\n",
"data.drop(['ID'], axis=1)"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>AT</th>\n",
" <th>V</th>\n",
" <th>AP</th>\n",
" <th>RH</th>\n",
" <th>TG</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>14.96</td>\n",
" <td>41.76</td>\n",
" <td>1024.07</td>\n",
" <td>73.17</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>25.18</td>\n",
" <td>62.96</td>\n",
" <td>1020.04</td>\n",
" <td>59.08</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>5.11</td>\n",
" <td>39.40</td>\n",
" <td>1012.16</td>\n",
" <td>92.14</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>20.86</td>\n",
" <td>57.32</td>\n",
" <td>1010.24</td>\n",
" <td>76.64</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>10.82</td>\n",
" <td>37.50</td>\n",
" <td>1009.23</td>\n",
" <td>96.62</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>...</th>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9563</th>\n",
" <td>15.12</td>\n",
" <td>48.92</td>\n",
" <td>1011.80</td>\n",
" <td>72.93</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9564</th>\n",
" <td>33.41</td>\n",
" <td>77.95</td>\n",
" <td>1010.30</td>\n",
" <td>59.72</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9565</th>\n",
" <td>15.99</td>\n",
" <td>43.34</td>\n",
" <td>1014.20</td>\n",
" <td>78.66</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9566</th>\n",
" <td>17.65</td>\n",
" <td>59.87</td>\n",
" <td>1018.58</td>\n",
" <td>94.65</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9567</th>\n",
" <td>23.68</td>\n",
" <td>51.30</td>\n",
" <td>1011.86</td>\n",
" <td>71.24</td>\n",
" <td>1</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"<p>9568 rows × 5 columns</p>\n",
"</div>"
],
"text/plain": [
" AT V AP RH TG\n",
"0 14.96 41.76 1024.07 73.17 1\n",
"1 25.18 62.96 1020.04 59.08 0\n",
"2 5.11 39.40 1012.16 92.14 1\n",
"3 20.86 57.32 1010.24 76.64 0\n",
"4 10.82 37.50 1009.23 96.62 1\n",
"... ... ... ... ... ..\n",
"9563 15.12 48.92 1011.80 72.93 1\n",
"9564 33.41 77.95 1010.30 59.72 0\n",
"9565 15.99 43.34 1014.20 78.66 1\n",
"9566 17.65 59.87 1018.58 94.65 1\n",
"9567 23.68 51.30 1011.86 71.24 1\n",
"\n",
"[9568 rows x 5 columns]"
]
},
"metadata": {
"tags": []
},
"execution_count": 3
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1jYUrPu2uoQ4",
"colab_type": "text"
},
"source": [
"We'll do a little normalization using the sklearn scaling functionality."
]
},
{
"cell_type": "code",
"metadata": {
"id": "2vplwFHNuaXr",
"colab_type": "code",
"colab": {}
},
"source": [
"def normalize(x):\n",
" # Change the attributes of the scaler to experiment\n",
" from sklearn.preprocessing import StandardScaler\n",
" scaler = StandardScaler()\n",
" x = pd.DataFrame(scaler.fit_transform(x), columns=x.columns)\n",
" return x"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "Ifec-81PvBD7",
"colab_type": "code",
"colab": {}
},
"source": [
"X = normalize(data.drop([\"TG\"], axis=1))\n",
"Y = data['TG']"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "-alTGbU8wyaJ",
"colab_type": "text"
},
"source": [
"Lets look now that we've done some normalization."
]
},
{
"cell_type": "code",
"metadata": {
"id": "AJa27SvLv_R8",
"colab_type": "code",
"outputId": "d1267b78-fb13-4799-f3ff-00f26dec7343",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 197
}
},
"source": [
"X.head()"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>ID</th>\n",
" <th>AT</th>\n",
" <th>V</th>\n",
" <th>AP</th>\n",
" <th>RH</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>-1.731870</td>\n",
" <td>-0.621065</td>\n",
" <td>-0.985689</td>\n",
" <td>1.809755</td>\n",
" <td>-0.015536</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>-1.731508</td>\n",
" <td>0.751781</td>\n",
" <td>0.686880</td>\n",
" <td>1.129478</td>\n",
" <td>-0.986488</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>-1.731146</td>\n",
" <td>-1.944209</td>\n",
" <td>-1.171880</td>\n",
" <td>-0.200691</td>\n",
" <td>1.291699</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>-1.730784</td>\n",
" <td>0.171478</td>\n",
" <td>0.241914</td>\n",
" <td>-0.524794</td>\n",
" <td>0.223584</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>-1.730422</td>\n",
" <td>-1.177188</td>\n",
" <td>-1.321781</td>\n",
" <td>-0.695285</td>\n",
" <td>1.600419</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" ID AT V AP RH\n",
"0 -1.731870 -0.621065 -0.985689 1.809755 -0.015536\n",
"1 -1.731508 0.751781 0.686880 1.129478 -0.986488\n",
"2 -1.731146 -1.944209 -1.171880 -0.200691 1.291699\n",
"3 -1.730784 0.171478 0.241914 -0.524794 0.223584\n",
"4 -1.730422 -1.177188 -1.321781 -0.695285 1.600419"
]
},
"metadata": {
"tags": []
},
"execution_count": 6
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "2140HTd0MOfv",
"colab_type": "code",
"outputId": "3b0aef78-c833-4fbb-c6f7-dec897829d78",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 123
}
},
"source": [
"Y.head()"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"0 1\n",
"1 0\n",
"2 1\n",
"3 0\n",
"4 1\n",
"Name: TG, dtype: int64"
]
},
"metadata": {
"tags": []
},
"execution_count": 7
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sVr5COee0lml",
"colab_type": "text"
},
"source": [
"We will use sklearn tooling to create randomly sampled train and test sets."
]
},
{
"cell_type": "code",
"metadata": {
"id": "4sTcOqmbw4Uk",
"colab_type": "code",
"colab": {}
},
"source": [
"from sklearn.model_selection import train_test_split\n",
"X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.3, random_state=42)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "Up-s5U8u1ZZT",
"colab_type": "code",
"outputId": "27611843-eb8d-444f-b3a1-b856cab58f78",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 197
}
},
"source": [
"X_train.head()"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>ID</th>\n",
" <th>AT</th>\n",
" <th>V</th>\n",
" <th>AP</th>\n",
" <th>RH</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>8759</th>\n",
" <td>1.439333</td>\n",
" <td>-1.714505</td>\n",
" <td>-1.043282</td>\n",
" <td>1.480589</td>\n",
" <td>0.980912</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1434</th>\n",
" <td>-1.212689</td>\n",
" <td>0.609392</td>\n",
" <td>0.347633</td>\n",
" <td>-0.303661</td>\n",
" <td>-0.994757</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7320</th>\n",
" <td>0.918342</td>\n",
" <td>0.158045</td>\n",
" <td>0.377613</td>\n",
" <td>-0.141610</td>\n",
" <td>0.826552</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2579</th>\n",
" <td>-0.798141</td>\n",
" <td>-1.341070</td>\n",
" <td>-0.980955</td>\n",
" <td>3.305352</td>\n",
" <td>-0.052748</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9142</th>\n",
" <td>1.577998</td>\n",
" <td>-1.804506</td>\n",
" <td>-1.063795</td>\n",
" <td>1.531230</td>\n",
" <td>1.045688</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" ID AT V AP RH\n",
"8759 1.439333 -1.714505 -1.043282 1.480589 0.980912\n",
"1434 -1.212689 0.609392 0.347633 -0.303661 -0.994757\n",
"7320 0.918342 0.158045 0.377613 -0.141610 0.826552\n",
"2579 -0.798141 -1.341070 -0.980955 3.305352 -0.052748\n",
"9142 1.577998 -1.804506 -1.063795 1.531230 1.045688"
]
},
"metadata": {
"tags": []
},
"execution_count": 9
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "-vvJGL-2OT0z",
"colab_type": "code",
"outputId": "f9a331be-67bb-4c80-b388-bcf22fa965d0",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 123
}
},
"source": [
"Y_train.head()"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"8759 1\n",
"1434 1\n",
"7320 0\n",
"2579 1\n",
"9142 1\n",
"Name: TG, dtype: int64"
]
},
"metadata": {
"tags": []
},
"execution_count": 10
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aGU4Yit10tB7",
"colab_type": "text"
},
"source": [
"### MLP Classifier\n",
"MLP stands for Multi-layer Perceptron classifier and it optimizes the log-loss function using either LBFGS or stochastic gradient descent. [source](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html)"
]
},
{
"cell_type": "code",
"metadata": {
"id": "az0VbBke1Qoc",
"colab_type": "code",
"outputId": "7dfbdd78-c6a8-4f5d-b59a-319735c4f9bb",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 315
}
},
"source": [
"from sklearn.neural_network import MLPClassifier\n",
"\n",
"hls = (5,10)\n",
"regpenalty = 0.001\n",
"clf = MLPClassifier(solver='adam', alpha=regpenalty, hidden_layer_sizes=hls, early_stopping=True, validation_fraction=0.42)\n",
"clf.fit(X_train, Y_train)\n",
"\n",
"# Make predictions\n",
"predY = clf.predict(X_test)\n",
"print(\"\\n\\rANN: %d mislabeled out of %d points\" % ((Y_test != predY).sum(), X_test.shape[0]))\n",
"trainingLoss = np.asarray(clf.loss_curve_)\n",
"validationLoss = np.sqrt(1- np.asarray(clf.validation_scores_))\n",
"factor = trainingLoss[1]/validationLoss[1]\n",
"validationLoss = validationLoss*factor\n",
"\n",
"# Plot ROC\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline\n",
"# create figure and axis objects with subplots()\n",
"xlabel= \"epochs\"\n",
"fig,ax= plt.subplots()\n",
"ax.plot(trainingLoss, color=\"blue\")\n",
"ax.set_xlabel(xlabel,fontsize=10, color=\"blue\")\n",
"ax.set_ylabel(\"training loss\",color=\"blue\",fontsize=10)\n",
"ax2=ax.twinx()\n",
"ax2.plot(validationLoss,color=\"red\")\n",
"ax2.set_ylabel(\"validation score\",color=\"red\",fontsize=10)\n",
"plt.show()"
],
"execution_count": 0,
"outputs": [
{
"output_type": "stream",
"text": [
"\n",
"\rANN: 157 mislabeled out of 2871 points\n"
],
"name": "stdout"
},
{
"output_type": "display_data",
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAaYAAAEHCAYAAADlMeJIAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nO3deXwV1fnH8c+ThM1Agggq+6K4RMSNIhXrvuBSbKutYGulstQFrdpfq7b9tUrbX/dWq1YFq9XW1rp0waXuS2srClRFCYKI7CCyhR2yPL8/zgSvIeROQib3Xu73/XrN62bmzsx9kmgezpnnnGPujoiISLYoyHQAIiIiqZSYREQkqygxiYhIVlFiEhGRrKLEJCIiWUWJSUREskpRpgNorIKCAm/Xrl2mwxARySmbNm1yd99pY8TMhgE3A4XAXe7+4zrv9wLuBTpG51zn7k8kEavl2jim4uJi37hxY6bDEBHJKWa2yd2Ld/JeITAHOBVYDEwFRrp7eco5E4HX3f12MysDnnD3PknEqq48EREZDMx193nuvg14ADinzjkOlERflwJLkwom57ryRESk2XUHFqXsLwaOrnPODcDTZnYFUAycklQwajGJiOSHIjOblrKNa+T1I4HfuXsP4Ezg92aWSA5Ri0lEJD9Uufugnby3BOiZst8jOpZqNDAMwN1fMbO2QGdgRXMHqhaTiIhMBfqbWV8zaw2MACbXOWchcDKAmR0MtAU+TCIYJSYRkTzn7lXAeOApYBbwoLvPNLMJZjY8Ou3rwFgzexP4EzDKEyrrTrRcPIm6eJWLi4g0XkPl4tkmsRZTVBd/G3AGUAaMjGrfU32HkJmPIDQdf5NUPDN+O5UXPnk9OTZsS0Qk7yTZlZdVdfFrnpnKiVN+zKJHXkvqI0REpBkkmZjqq4vvXuecG4Avmdli4AngiqSC6f2tL7Ge9mz4aWKNMhERaQaZLn6IVRdvZuNqa++rqqqa9EF9Bpbw6J5fZv/pD8DKlbsWtYiIJCbJxBS3Lv5BCHXxhPLDznVv5O4T3X2Quw8qKmr60Kslwy+jdc1Wtt5+d5PvISIiyUoyMWVVXTzAoIsO4UWOp+q2O6C6OqmPERGRXZBYYsq2uniAoUPhnraXUfzB+/Dkk0l9jIiI7IK8W/biC5+t5JbJvdl72BHY4483Y2QiItlL45iy2LBPt+L2mnHwj3/AvHmZDkdEROrIv8Q0DCYxFrcCuP32TIcjIiJ15F1i6tYN9jmiOy91+izcfTds3pzpkEREJEXeJSaAs86Cn64eC6tXw/PPZzocERFJkZeJ6cwzYUrNJ8LOzJmZDUZERD4mLxPT4MFQuNeerGnXFcrLMx2OiIikyMvEVFgYiiDeqirDlZhERLJKXiYmCN15b1SWUTNzFloLQ0Qke+R1Ynq36GAKN22AxYszHY6IiETyNjF17Aidjg3rFlbNUHeeiEi2yNvEBHDM6JCYZv9ViUlEJFvkdWI66fwurLTOLH9eiUlEJFvkdWJq1QrWdC1jj/nlVFRkOhoREYE8T0wAJUPKONBn8fBDqswTEckGeZ+Y9j6+jE6s4fG7P8h0KCIighITVnYwAGtfKWfBggwHIyIiSkyUhcq8Msq5//4MxyIiIkpMdO0KpaWc3LWc3/9ek0CIiGSaEpMZlJVxdIdy3nkHpk3LdEAiIvlNiQmgrIx918yibduwdqCISL4xs2FmNtvM5prZdfW8/yszeyPa5pjZ2qRiSTQxZdM32qCyMgo+XMGYz6zk979HY5pEJK+YWSFwG3AGUAaMNLOy1HPc/Wp3P9zdDwduAf6SVDyJJaZs+0YbFBVAjD95Fhs3wr33ZiQKEZFMGQzMdfd57r4NeAA4p4HzRwJ/SiqYJFtMWfWNNujgUDJ+YHU5Q4bAbbdBTU1GIhERyYTuwKKU/cXRsR2YWW+gL/B8UsEkmZia7Rs1s3FmNs3MplVVVTV7oPTsCcXFUF7O+PEwZw4891zzf4yISAYV1f4djbZxTbzPCOBhd69uzuBSZUvxQ4PfqLtPdPdB7j6oqKio+T+9oCC0msrLOe886NIFbr21+T9GRCSDqmr/jkbbxJT3lgA9U/Z7RMfqM4KEe7eSTExZ9Y2mVVYGs2bRpg2MGwePPgrz52c0IhGRljIV6G9mfc2sNeFv8uS6J5nZQcCewCtJBpNkYsqqbzStsjJYsgQqKvjqV0Mj6vbbMxqRiEiLcPcqYDzwFDALeNDdZ5rZBDMbnnLqCOAB92SnIrAk729mZwI3AYXA3e7+QzObAExz98nROTcAbd19h3Ly+hQXF/vGjRubP9hHH4Xhw+GVV2DIEM49F158May63q5d83+ciEhLMrNN7l6c6TjiSDQxJSGxxPTee7D//vDb38LFF/PCC3DSSXDPPTBqVPN/nIhIS8qlxJQtxQ+Z16cPtG4Ns2cDcMIJcMgh8Otfa/48EZGWpMRUq7AQeveG998HwhR6V14Jr78O//xnhmMTEckjSkyp+vWDefO27154Iey1F/zylxmMSUQkzygxperbd3uLCULRw6WXhrqIuXMzGJeISB5RYkrVty+sXv2xWVwvuwyKiuDmmzMYl4hIHlFiStWvX3hNaTV17QoXXBCq89asyVBcIiJ5RIkpVd++4TUlMQFcfTVs3AiTJmUgJhGRPKPElGoniemww+DEE+GWW6CyMgNxiYjkESWmVHvuCaWlH6vMq3XNNWEWiEceyUBcIiJ5RIkpldkOlXm1zjwTDjgAfvUrDbgVEUmSElNddcYy1SoogK99DV57LUynJyIiyVBiqqtv37DeRT3Noosugo4d4aabWj4sEZF8ocRUV9++sGULLF++w1vFxWGtpkcegQULMhCbiEgeUGKqq3YsUz3deQDjx4dHUbfd1oIxiYjkESWmunZSMl6rZ0847zyYOBE2bGjBuERE8oQSU119+oTXnSQmgKuuCrMW3Xtvy4QkIpJPlJjqatsWunXbaVcewJAhcPTRYf68mpoWjE1EJA8oMdVnJ2OZUl19Nbz7LjzxRAvFJCKSJ5SY6hMjMX3uc9Cjh0rHRUSaW6KJycyGmdlsM5trZtft5JwvmFm5mc00sz8mGU9s/frBokWwbdtOT2nVKlToPfcczJzZgrGJiOzmEktMZlYI3AacAZQBI82srM45/YHrgaHufghwVVLxNErfvmGA7cKFDZ42ejS0bq1Zx0VEmlOSLabBwFx3n+fu24AHgHPqnDMWuM3d1wC4+4oE44mvtmS8gQIIgM6dQ5feffeFMbkiIrLrkkxM3YFFKfuLo2OpDgAOMLN/m9kUMxuWYDzx1bNg4M6MHRsWENSs4yIizSPTxQ9FQH/gBGAkMMnMOtY9yczGmdk0M5tWVVWVfFTduoWHSDES0wknwH77qTtPRKS5JJmYlgA9U/Z7RMdSLQYmu3ulu78PzCEkqo9x94nuPsjdBxUVFSUW8HaFhdC7d9quPAizjo8dCy+9BLNnJx+aiMjuLsnENBXob2Z9zaw1MAKYXOecvxFaS5hZZ0LXXvps0
BL69YvVYgIYNQqKiuCuu5INSUQkZ5jtgdn/YjYp2u+P2dlxLk0sMbl7FTAeeAqYBTzo7jPNbIKZDY9OewpYZWblwAvAN9x9VVIxNUqMsUy19tkHhg+H3/0Otm5NNiwRkSQkMLznHmAr8Mlofwnwg1ixeI4tx1pcXOwbN25M/oN++lO49towKV5JSdrTn3wSzjgD/vxn+MIXkg9PRKQxzGyTuxfv5L1CwqOUUwmPWKYCI929POWc/sCDwEnuvsbM9m6wktpsGu6DMHsd9yOiY2/ifli6WDNd/JC90swyXtepp4bHUiqCEJEclMTwnm2YtQNC68dsP0ILKi0lpp1pZGIqLAwDbp99NlbNhIhINklieM/3gCeBnpjdDzwHfDNOMEpMO5NmwcD6XHxxqNJTEYSIZKGi2mE30TausdcTY3gPAGYFwJ7A54BRwJ+AQbi/GOeDlJh2Zs89w7OlmC0mgO7d4cwzQxFESwy3EhFphKraYTfRNjHlvWYb3gOAew3wTdxX4f447o/hvjJuoEpMO2MWWk2NnKF1zBhYtkzLYYhITklieM+zmP0PZj0x67R9i0GJqSFnnw0vvggLFsS+5KyzoGtXFUGISO5IaHjP+cDlwD+B6dE2LU48acvFzfgpofZ8M+FB1kDganf+EOcDmluLlYtDmF28b1+4/nr4QazyewC+9S34yU/C5d3rPj4UEcmAhsrFs02cFtNp7qwDzgbmA/sD30gyqKzRq1d4aHTXXVBZGfuy0aPDkuv33JNgbCIi2cysFWZXYvZwtI3HrFWcS+MkptrJ6c4CHnKnosmB5qJLL4UPPoC//z32JfvtByedBL/9bUhQIiJ56HbgKOA30XZUdCytOInpMTPeiW76nBldgPxZfej008PI2dtj/Ty3GzsW5s8PK9yKiOShT+B+Ee7PR9tXgE/EuTBtYnLnOuAYYJA7lcBGdhwRvPsqLIRx4+D55xs1ffhnPgOdOmlMk4jkrepotofArB9QHefCtInJjM8Dle5Um/Ed4A9AtyYGmpsuvjhMHz5xYvpzI23bwpe/DH/9K3z4YYKxiYhkp28AL2D2ImYvAc8DX49zYZyuvP91Z70ZxwKnAL8lZj/hbmPffeGznw0jZzdvjn3ZmDGhZuK++5ILTUQkK7k/RxiAeyVwBXAg7i/EuTROYqptep0FTHTncaB1U+LMaZdcAqtXw8MPx77kkENgyJBQBJFjk7iLiOwas8uBdrjPwH0GsAdml8W5NE5iWmLGnYTBUk+Y0SbmdbuXE0+EAw6A3/ymUVlmzBiYNQumTEkwNhGR7DMW97Xb98Ks5GPjXBgnwXyBMOL3dHfWAp3Il3FMqczga18LGWZy3Zk6du4LX4DiYhVBiEjeKcTMtu+FNZ9i9bbFWijQjMOAT0W7/3LnzSYE2SxadOaHuior4fDDYcsWKC+HNm1iXTZmDDzwQJhDr0OHhGMUEalHi8/8YPYzoDdwZ3Tkq8Ai3NMWQMSpyvsacD+wd7T9wYwrmh5tDmvVCm66KSyF8atfxb5s9GjYuBEefDDB2EREssu1hEq8S6Mt9npMcebKmwF80p2N0X4x8Io7A3cl4qbKaIup1mc+E1YEnDMHuqWvnHcPhRClpfDKKy0Qn4hIHRmdKy/MKt4jKoJIK84zJuPjg6Kqo2P56xe/CN16118f63Sz0J03ZUroARQR2e2F8UslUVKaDkzCLFZXU5zEdA/wqhk3mHEDMIUwlilGXDbMzGab2Vwzu66e90eZ2Ydm9ka0jYlz34zbbz+45powQOnVV2NdcuGFoSfwt7F+ciIiOa8U93WEVWzvw/1o4OQ4F8YtfjgSODba/Zc7r6e/xgoJKxyeSlj5cCow0t3LU84ZBQxy9/FxgoUs6coDWL8eDjwQevQITaGC9Dn+vPPgpZdgyRJonX8jwUQkgzJQ/PAWcBpwL/Bt3KdiNgP3tI+BdvrX1IxOtRthuYs/RNuC6Fg6g4G57j7P3bcBD7A7zbHXoQP83//B1Knw9NOxLhkzBlaubFS1uYhIrppAGGo0N0pK/YB341y40xaTGe8DzkfPk2pPNMDd6dfgjc3OA4a5+5ho/0Lg6NTWUdRi+hHwIaF1dbW7L2rovlnTYgLYujUUP5x2GvzpT2lPr64O6w6WlcGTT7ZAfCIikd1ioUB3+rrTL3qt/bp2v8Gk1AiPAn08NO2eITT5dmBm48xsmplNq6qqaqaPbgZt2sAFF4SZWteuTXt6YSF85SuhgdWI1dpFRPJKklMLLQF6puz3iI5t5+6r3H1rtHsXYc2nHbj7RHcf5O6DioqK6jslc0aNCi2nP/851umjR4dXFUGIiNQvycQ0FehvZn3NrDUwAvjY0xUz65qyOxyYlWA8yTjySBgwIMw8HkOvXnDGGSExZVPjT0QkWySWmNy9ChhPePg1C3jQ3Wea2QQzGx6ddqWZzTSzNwlTo49KKp7EmIVW05Qp8M47sS4ZNw6WLoUnnkg2NBGRjDFrg9kFmH0Ls+9u3+JcGmPmh/oq8NZHq9m2uKwqfqi1fHkoG//GN+BHP0p7elVVaDkddRQ8+mgLxCcieS8D5eJPAhWEwbUfTdLg/ou0l8ZITPMJz4rWECryOgLLgQ+Ase5Mb2LYTZKViQng7LPh9ddh4cJQ5ZDGd74Tctj8+dCzZ9rTRUR2SQYS09u4D2jKpXG68p4BznSnszt7AWcAjwGXAb9pyofulkaNCv1zzz4b6/TRo8McenffnWxYIiIZ8h/MDm3KhXFaTG+5c2idYzPcGWjGG+4c3pQPbqqsbTFt3Qpdu8Lpp8ca0wQwbBjMnBlaTTEaWSIiTZaBFlM5sD/wPrCVaAxsnJkf4tReLzPjWsLMDRBWsv3AjEKgpmkR74ZqxzTddVcY09SxY9pLxo4N0xQ9+SScdVYLxCgi0nLOaOqFcbryLiCMQfpbtPWKjhUSVreVWrVjmmIuvDR8OOyzD0yalGxYIiItzn0BoSbh09HWMTqWVtrE5M5Kd65w54hoG+/Oh+5sc2furkW+mznqKDjggLBcbQytWoWZIB57LEzsKiKSKc2+GoTZDovMYhZrkdk4K9geYMZEM5424/naLc7N844ZjBgBL74Y1lGPYezYMIeeZoIQkUyJVoO4jdD9VgaMNLOyek79s7sfHm13pbntaOBo3L+L+3eBIcDYOPHE6cp7CHgd+A7wjZRN6jNiRCi3e+ihWKf36xfqJSZN0kwQIpIxSawG0eRFZuMkpip3bnfnNXem125NiTIvHHwwHHZY7O48gEsugcWLNROEiCSqqHYy7Ggbl/JedyB1ZYfF0bG6zjWzGWb2sJmlG4F5D/AqZjdgdgONWGQ2TmJ61IzLzOhaZ40m2ZkRI+CVV0IdeAxnnx1Wz7jjjmTDEpG8VlU7GXa0TWzk9bFWg9jO/ZfAV4DV0fYV3G+K80FxEtNFhK67/xCmlpgOTItz87x1/vnhNeaM40VFYRHBJ5+MnctERJpTs60GgVlJ9LrDIrPRsbRiLa2eTbJ2gG1dQ4aE0vHX065CD8CiRdCnD1x3Hfzwh8mGJiL5p6EBtmZW
RFis9WRCQpoKXODuM1PO6eruy6KvPwtc6+5D6rnZY7ifjVntYrPb3yEMsE27nl9DK9ie5M7zZnyuvvfd+Uu6mychZxLTzTfDVVfBrFlw0EGxLjnnHHj11TDdXuvWCccnInkl3cwPZnYmcBNhjOrd7v5DM5sATHP3yWb2I8LyRFWErrlL3T3ekgqNjbWBxHSjO98z45563nZ3Lk4ioHRyJjEtXRpmHP/e98IWwxNPhBkgHnwQPv/5hOMTkbySgSmJnsP95LTH6rtUXXkJOvHEsCRGeXkY45RGdTXst1/YnnuuBeITkbzRYonJrC2wB/ACcAIflYiXAE/inrYLKe1ceWa0Ac4F+qSe786ERgecb0aMCLXgM2aEEvI0CgvDIoLf/jbMmRMmkRARyTFfBa4CuhGK5WoT0zrg1jg3iFOV93fCQKsqYGPKJumce24oufvjH2NfcvHF4RKVjotITnK/Gfe+wP/g3g/3vtF2GO6xElOcZS/edqdJiz0lIae68iDM1Dp1aqhoaNUq1iUjRsBTT4X58/bYI+H4RCQvtPgzpvChAwhTHLXdfsz9vnSXxWkx/ceMJi32JIQBSsuXw+OPx77k8svDyhkxl3USEck+Zt8Dbom2E4GfEqr60l8ao8VU72JP7qRd7CkJOddiqqqC3r3hiCPCNOIxuMPAgaGBNX16rLoJEZEGZaAq7y3gMOB13A/DbB/gD7ifmu7SOC2mM4D+wGmENTXOjl5jxNXwNOop551rZm5mg+LcN6cUFYW1Lf7xjzAhXgxmodX0+uthXJOISA7ajHsNUBXNBrGCj88usVM7TUxmlERfrt/J1qC406ibWQfga8Du+yd49GioqYG77459yRe/CB06wG23JRiXiEhypmHWEZhEqM77L/BKnAsbGmD7mDtnm1E7rURqh5K70+C0Emb2SeAGdz892r8+uvBHdc67iTAh4DeA/3H3Bufhy7muvFqnnQazZ8O8eaEuPIbx48NyGIsXQ5cuCccnIru1jBQ/fPThfYAS3GfEOX2nLSZ3zo5e+7rTL3qt3dLOdUSMadTN7Eigp7s3WBlgZuNqp2qvytVFi8aODZV5zzwT+5LLLoNt2xrV0BIRySyzI3fYoBNQFH2d/hZxZn4wY0/Cc6btJX/u/LPha+w8YJi7j4n2LwSOdvfx0X4B8Dwwyt3nm9mL7M4tpm3bwhRFn/oUPPJI7MtOPBHefx/eey92Q0tEZActOPPDC9FXbYFBwJuEHreBwDTcP5nuFnGWVh8D/BN4Crgxer0hRnjpplHvAAwAXjSz+YRldyfvlgUQEGZlvegimDwZPvgg9mWXXw4LFmgRQRHJEe4n4n4isAw4EvdBuB8FHEGdpTR2Jk5V3teATwAL3DkxuvnaGNdNBfqbWV8zaw2MACZ/FLtXuHtnd+/j7n0IqxsOT9diymljxoTy8d/9LvYl55wTFhG8NdZ4aRGRrHEg7m9t33N/Gzg4zoVxEtMWd7ZAmDfPnXeAA9Nd5O5VwHhCC2sW8KC7zzSzCWYWa5DVbufAA0NX3t13h8FKMbRqBZdeCk8/De8kMsG8iEgiZmB2F2YnRNskIFbxQ5wBtn8lLI97FXASsAZo5c6Zuxh0k+TsM6Zad9750cSuh8abUGPFCujZM9RPqOUkIk2RgQG2bYFLgeOiI/8Ebsd9S9pLG7PshRnHA6XAk+5sa0KouyznE9Py5aFvrhHrNAGMGgUPPxzmzystTS48Edk9ZbRcvJEa7Mozo9CM7R1I7rzkzuRMJaXdwr77wrHHNqoyD+CKK2DjRrinvmUbRUSyhdmD0etbmM3YYYtzixhdeX8HrnBn4S4H3AxyvsUEHy27PmcO9O8f+7Jjj4Vly8JlKh0XkcZowXLxrrgvw6x3ve+7L0h3izjFD3sCM814zozJtVsjQ5VUn/tceG1kq+nKK8PEEf/4RwIxiYg0B/dl0euCercY4rSYjq//s3mpsfE2h92ixQRw9NFh/rypU2NfUlkJfftCWVmo0hMRiasFW0zrCdPY7fAO4LiX1PPex8RpMZ0ZPVvavkFmKvJ2K+eeC9OmhdGzMdWWjj/zDJSXJxibiEhTuXfAvaSerUOcpATxElN9a2ec0bhIZQe13Xl/+UujLhs3Dtq0gVtuSSAmEZHmZrY3Zr22bzE0tOzFpWa8BRxoxoyU7X1iDpKSBuy/f1gNsJHPmbp0gQsugHvvhdWrE4pNRGRXmQ3H7F3CIrMvAfOBWE/IG2ox/ZGwIODk6LV2O8qdL+1KvBI591z4z39CqV0jXH01bN4Md9yRUFwiIrvu+4Q5UOfg3hc4mTD1XFoNLXtR4c58d0a6syBl07/Tm8u554apif72t0ZdduihYXmnW26BrVsTik1EZNdU4r4KKMCsAPcXCLONpxXnGZMkpawszJ/XyO48gK9/PUwi8ac/JRCXiMiuW4tZe8JURPdjdjMQq6RaiSmTzEIRxIsvwpo1jbr01FNDy+mXv4w9H6yIyE6Z2TAzm21mc83sugbOO9fMPMYSRecAm4CrgSeB9wiPg9JSYsq0U0+F6mp45ZVGXWYG11wDb73VqEVxRUR2YGaFwG2EiusyYKSZldVzXgfCUkivxrjtV4GuuFfhfi/uv4669tJSYsq0wYPD/EL//nejLx05Mky998tfJhCXiOSTwcBcd5/n7tuABwgtnrq+D/wESDtDOGEx2Kcx+xdm4zHbJ24wSkyZVlwMhx8eqvMaqU0bGD8ennoK3n47gdhEJF90Bxal7C+Ojm1nZkcCPd398Vh3dL8R90OAy4GuwEuYPRvnUiWmbDB0KLz6aphzqJEuuQTatVOrSUTSKjKzaSnbuLgXmlkB8Evg60343BXAcmAVsHecC5SYssExx4SBSW++2ehL99oLvvIVuP9+WLo0gdhEZHdR5e6DUraJKe8tAXqm7PeIjtXqAAwAXjSz+YTxSZMbLIAwuwyzF4HngL2AsbgPjBOoElM2GDo0vDbhOROE0vGqKrWaRKTJpgL9zayvmbUGRsBHq0i4e4W7d3b3Pu7ehzBQdri7T2vgnj2Bq3A/BPcbcI89w6cSUzbo0SOsnd6E50wA/fqFQog77oBVsWpeREQ+4u5VwHjgKWAW8KC7zzSzCWY2vIk3vR73N5pyaaKJKV1dvJldYmZvmdkbZvZyfeWJeWPo0NBiauKgpOuuCyvc3nprM8clInnB3Z9w9wPcfT93/2F07LvuvsP6e+5+QprW0i5JLDHFrIv/o7sf6u6HAz8lPFzLT8ccA0uWwKJF6c+tx4ABMHx4WBx3w4Zmjk1EpAUl2WJKWxfv7utSdoupf3Gp/LCLz5kArr8+TCAxcWL6c0VEslWSiSltXTyAmV1uZu8RWkxXJhhPdhs4MIxpauJzJoAhQ+DEE+EXv9DkriKSuzJe/ODut7n7fsC1wHfqO8fMxtXW3ldVVbVsgC2lqCgst74LLSYIraalS+G++5opLhGRFpZkYkpXF1/XA8Bn6nvD3SfW1t4XFRU1Y4hZZujQMJZpFx4SnXIKHHUU/OQnoYRcRCTXJJmYGqyLBzCz/im7ZwHvJhh
P9jvmGKipCbNANJEZfOtb8N57WhJDRHJTYokpZl38eDObaWZvANcAFyUVT04YMiRkll14zgTwmc+E6fduvLFJsxyJiGSUeY4t5lNcXOwbN8Zaayo3HXoodO8OTz65S7d57DH49Kdh0iQYM6aZYhORnGVmm9y9ONNxxJHx4gepY+jQsDZTTc0u3eass0ItxYQJqtATkdyixJRtjjkG1q0LKwDuAjP4wQ/CeN1Jk5opNhGRFqCuvGyzZAn07x8eEj3/PLRt2+RbuYdxTbNnh2KIPfZoxjhFJKeoK0+arnv3MAjplVfg4oubPHcehFbT978Py5fDb37TjDGKiCRIiSkbnXce/OhHod77xht36Vaf+hScfjr8+Mewfn0zxScikiAlpmx17bVhBcAbbwyrAO6C738/LIfx8583U2wiIgnSM6Zstm0bnHZa6NZ74YVQGNFE558Pjz4anjf17Jn+fDTR1P8AABQBSURBVBHZveTSMyYlpmy3ejUMGgQFBaFSr127Jt1m/nw46KDQS/iHPzRviCKS/XIpMakrL9t16gR33RXK6iZMaPJt+vQJS7Dff/8uzXgkIpI4tZhyxejRcO+9MG1aKCVvgvXr4YADQpL6z39C1Z6I5Ae1mKT5/exn0LlzmF+oidOGd+gA//d/MGUKPPBAM8cnItJMlJhyRadOcMstMH16WD+9iS66CI44IhT9bdrUjPGJiDQTJaZcct55MHw4/O//wrx5TbpFQQHcdFOYquhnP2vm+EREmoGeMeWaxYuhrAwOOSTMQF5a2qTbjBwJjzwC//0vDBjQzDGKSNbRMyZJTo8e8LvfhSKIk08OI2eb4Ne/ho4dYdQordkkItlFiSkXfe5z8Le/wdtvw/HHw7Jljb5Fly5h/rzp0+GnP00gRhGRJlJXXi574YWwGuC++8Kzz4Y68EY6/3z4619Dgjr00OYPUUSyQy515Skx5bopU+CMM6B9e3jpJejXr1GXf/hheFzVs2e4VatWCcUpIhmVS4lJXXm5bsgQePHFUPt90kmwYEGjLu/SBW6/PRRB/OQnyYQoItIYSky7g8MOg2eegYqKkJyWLGnU5eeeCyNGwA03hEaXiOQfMxtmZrPNbK6ZXVfP+5eY2Vtm9oaZvWxmZYnFkmRXnpkNA24GCoG73P3Hdd6/BhgDVAEfAhe7e4P/5FdXXgNeew1OOQW6dg0ZZt99Y1+6bh0MHhzmjJ02DXr1SjBOEWlxDXXlmVkhMAc4FVgMTAVGunt5yjkl7r4u+no4cJm7D0si1sRaTNE3ehtwBlAGjKwnw74ODHL3gcDDgOrDdsXgwfCPf4QWUyNLyUtKQqHfli2h6G/z5gTjFJFsMxiY6+7z3H0b8ABwTuoJtUkpUgwk1qpJsisvzjf6grvXTowzBeiRYDz5YehQePzxMBv55z/fqEFKBx0UlsSYPh0uuWSXVnUXkdzSHViUsr84OvYxZna5mb1HaERcmVQwSSamWN9oitHAPxKMJ38cfzxMmhTKya+6qlGXDh8O3/se3Hcf3HprQvGJSCYUmdm0lG1cY2/g7re5+37AtcB3mj/EoCipGzeGmX0JGAQcv5P3xwHjAFq3bt2CkeWwCy8MCwv+7GdhgNIll8S+9Lvfhddfh6uvDlV7I0YkGKeItJQqdx+0k/eWAKlrW/eIju3MA8DtzRVYXUm2mGJ9o2Z2CvBtYLi7b63vRu4+0d0HufugoqKsyKW54Uc/gjPPhCuuCCXlMRUUhC69oUPhggtg4sTkQhSRrDAV6G9mfc2sNTACmJx6gpn1T9k9C3g3qWCSTExxvtEjgDsJSWlFgrHkp8JC+OMfoX//UBPeiBnJO3QIdRTDhsFXvwo//3mCcYpIRrl7FTAeeAqYBTzo7jPNbEJUgQcw3sxmmtkbwDXARUnFk3S5+JnATYRy8bvd/YdmNgGY5u6TzexZ4FCgdrK3he4+fCe3A1Qu3iRz54aKve7dw9K1HTrEvnTbttAr+OCD8O1vw/e/r5VvRXJRLs38oCmJ8sWzz8Lpp4fqhkceCf11MVVXh0dUd90VuvbuvDPMgCQiuSOXEpNmfsgXp5wCv/hFGKw0YUKjLi0sDM+ZfvCDsCT74MFQXp7+OhGRplCLKZ+4w8UXh/WcHn44PHdqpOefD4sMbtgQWk5f+lLzhykizU8tJslOZnDHHWHi1y9/Oczc2kgnnRRKyY86Kjx7GjEC5s9v/lBFJH8pMeWbNm3gL3+BPfcMfXJjxzZ6RvJu3ULL6YYbYPJkOPBA+MY3YM2aZEIWkfyixJSPunYNM7Vefjn8/vehnPzSSxvV9CkqCjNEzJkDX/xieHy1//6hrHzduvTXi4jsjJ4x5bvFi8NA3EmTwrx6Bx0U+utOOglOOAH22ivWbd58E775TXj66TAh7LhxcOWVYQFCEcm8XHrGpMQkwcKF8NBDoY/un/8M1Q2FhfDZz8L48XDccbEGME2bFlpPDz0UTv/850OJ+amnhl5EEckMJaYEKTG1gMpKmDo1PIu6++7w8OiQQ0LX3/HHhz67NHMWzp8PN98cCgDXroXSUjjnnJCoTj4Z2rVrke9ERCJKTAlSYmphmzaFwUu33hrK8SA8YNpvPzj44PC8qqQkzCZRUhIWJzz44PDcqk0btm0LY3sfeigMoVq7NrScjjsujPc9/fSQ8zSbhEiylJgSpMSUIe5htvIZM2DWrLC98w58+GGodti27ePnFxaG5NWzJ2zcCOvW4evWUbV2A1WVTmUl1NSEU82gqBAKi0LOK+jelYIhR8PR0XboodCqVct/zyK7ESWmBCkxZamtW0OCWrIkJK3y8vC6dGmYv6ikJGzt22+fDmn9eliwEJYuCaetrQDD6ct8jimcQufqMK9vdVFrKvsdSKvDyigcUAYHHACdOn10z9JS2HtvJS+RBigxJUiJaff1wQfw73+Hcb9vzXBW/XcBPZZM4Uj+y8HMooxy+vI+BfWs6Oxm2L77Qo8eYSssDIly3TqoqAjJ65RTQt/hkCGhaSaSR5SYEqTElF/WrQsNr3ffDdv88k1sKZ9HxcIKbMM6SlhHR9bSlWX0a7WY/dospgeLaVXkVBeX4CUlFHQsoWTDUjrMnILV1IRW1tCh0LlzSFi1ra5u3ULXY8+eYSZ2lRHKbkSJKUFKTFJr9Wp4772wzNTChR/fli2DFSvCo7FaHVnDyTzHp1s/xdH2Gh2poH3NOtpVraPQq3e4f1WP3tiAQygcOCBUaPTp8/FEVloaWmYiOUCJKUFKTBJXVRWsXAnLl4dElbotXx4S1wcfwIoPnKqKDXRjKT1ZRE8W0YuFHMhsBvA2B/EOranc8f4FrVjd5SDW9jiETf0GUNO7H6WVK+lQsYjiVYtos2ophfv3xY4dGlpoBx3UqOVGRJqTElOClJgkCVu3hhbYypWwalUoNly1KmxrVlRSNH8uhSuW4msrsIoKCjaso2TTMg6qnskA3qYv87ffawttWEwPlrMvBzKbLqwEoKJwT94tGcSSTgNYsfehrO42gOp9urFn4To6WgWlVNCudTXbDj2KVj32oX17KC7++N
aunUrrpWmUmBKkxCTZZMuWUFuxbsl6tsxewKrCvVlpXahYZ1RUwNo1TtH777LP3H/TZ8m/6b3mDXpvKqedb27wvnPoz8scyyt8kgpKtx83wqOvdu3C1rYtFLVrRXX7Uqrbl1LToZQ2xUXsvW0x+2xdSOdNC+m4ZTnbOu7N1n16sW3fXlR368kebWtoX11B++oK9qisoO3WCtpsqaDVprAVtmuNHfPJ0Nrr1Cl8eHU1TJkCjz0GL7wAe+wBvXp9tBUWhh9G7da+PRxzTNhKShL8LewGtm0LTfw99kjsI5SYEqTEJDmvujo8GHv7bVixgur2pWxpU8qmVqVs3lBNwdRXaTv9ZUpmvEzrdat2+eM2UEx74v8/s4l2tKKSVlQBUF5wCAuK9mdI1b/Ys2Y1lRTxdvEQCq2GfbctZK9tSymk5mP32FJUTKvqLRR6NTVWwLIuA1nW9UhqilpTYGAF7Pw15euiyi202pySMCu3Ul1cQk2HkIS9fXsKN2+gcEMFRevXUrB5A1X79GDrAYey7eCB1BwykJqevaF9e6ywALNw/6LVK2j9zgxavzODwnlzKKhYi61bi62rwDZugHbtsNJSrGO0VVV+POmWlMCxx4atNvHOnQv/+he8/HKYPLJ3bxg4MIzDGzAg/Ctm4cIwm3/tw9Dar5ctC4n9uOPg7LPD1r//Lv/uUykxJUiJSfKGO7z/fviD1pCtWz/+R7OqKpTM9wqtoy2FxWxZvYnKeYuofn8hvnARW6uL2FRUwobCUtYXlIbXwlLWWymbKltRvWEzXeZPpcf8l+m18GU6r5nDu12GMq3r2UzvdBprakrZti38Q796SyWlG5ZQXeWsri5lTXUJmyuLaLV1AwO3vMontr7M0ZUvU1bzNgV1Elg6W2lD6OQM2zZaU8K67UeK2cgG2lNBKWvpyEaK6cN8yij/2HPBGox1lLCWjrRjM/uwYvt7H9KZ1XTa/hnr6UA7Nm//jI6spZJWVFAaPtlK2ZsVHOnTKaKaagpYS0f2YjUAK+nMjILD6cEi9qt5d4ekDaG7d1lhTxYX9WZZUS+WtepFO9/ECZue4IDKmQAsKerFpoL2QGgpA3xwyXcZevP5jfoZ1lJiSpASk0huq6kJubOqKjQeq6s/+rr2eGXlR8fqXrt1a9i2bPn4hCNmIZdXV0PlpkraLZxN8by3aLt6adTiWkvrTRXUFBSxqtuhrOw2kA/3PZRNxV0+9hnuH8VQu1VXf1Th6R7iKNq6kV5Lp9B70ct0XL+IhV0H8373T7Gi00E4Rk0N2OZNdP6wnH1XzmRrUTGr2/diVfvebNyjCzUUUFMT7l1TEzZ36Lz+fQ5f8jgHrPw3hTWh1Vr7V7r1ZWM56vrTmvRzV2KqvbnZMOBmoBC4y91/XOf944CbgIHACHd/ON09lZhERBovlxJTYrWrZlYI3AacAZQBI82srM5pC4FRwB+TikNERHJLkvOyDAbmuvs8ADN7ADgHKK89wd3nR+81ruNZRER2W0mO9usOLErZXxwdExER2amcmMnSzMYB4wBap1mgTkREcluSLaYlQM+U/R7RsUZz94nuPsjdBxVpVmgRkd1akolpKtDfzPqaWWtgBDA5wc8TEZHdQGKJyd2rgPHAU8As4EF3n2lmE8xsOICZfcLMFgOfB+40s5lJxSMiIrlBA2xFRPJALo1jyrnEFJWWNzwD5s4VQTQBWPbLlVgVZ/PKlTghd2JVnEE7d8+JdVdyLjHtCjOb5u6DMh1HHLkSq+JsXrkSJ+ROrIoz9+RE9hQRkfyhxCQiIlkl3xLTxEwH0Ai5EqvibF65EifkTqyKM8fk1TMmERHJfvnWYhIRkSyXN4nJzIaZ2Wwzm2tm12U6nlpmdreZrTCzt1OOdTKzZ8zs3eh1z0zGGMXU08xeMLNyM5tpZl/L4ljbmtlrZvZmFOuN0fG+ZvZq9N/An6MZSTLOzArN7HUzeyzaz7o4zWy+mb1lZm+Y2bToWDb+7jua2cNm9o6ZzTKzT2ZpnAdGP8vabZ2ZXZWNsWZCXiSmmGtDZcrvgGF1jl0HPOfu/YHnov1MqwK+7u5lwBDg8uhnmI2xbgVOcvfDgMOBYWY2BPgJ8Ct33x9YA4zOYIypvkaYHaVWtsZ5orsfnlLSnI2/+5uBJ939IOAwws816+J099nRz/Jw4ChgE/BXsjDWjHD33X4DPgk8lbJ/PXB9puNKiacP8HbK/myga/R1V2B2pmOsJ+a/A6dme6zAHsB/gaOBlUBRff9NZDC+HoQ/QCcBjwGWpXHOBzrXOZZVv3ugFHif6Nl5tsZZT9ynAf/OhVhbasuLFhO5tzbUPu6+LPp6ObBPJoOpy8z6AEcAr5KlsUbdY28AK4BngPeAtR7mcITs+W/gJuCbQO1imXuRnXE68LSZTY+WoYHs+933BT4E7om6Ru8ys2KyL866RgB/ir7O9lhbRL4kppzl4Z9OWVM6aWbtgUeAq9x9Xep72RSru1d76CbpQVhN+aAMh7QDMzsbWOHu0zMdSwzHuvuRhO7wy83suNQ3s+R3XwQcCdzu7kcAG6nTFZYlcW4XPT8cDjxU971si7Ul5Utiara1oVrIB2bWFSB6XZHheAAws1aEpHS/u/8lOpyVsdZy97XAC4QusY5mVrugVzb8NzAUGG5m84EHCN15N5N9ceLuS6LXFYRnIYPJvt/9YmCxu78a7T9MSFTZFmeqM4D/uvsH0X42x9pi8iUx5draUJOBi6KvLyI8z8koMzPgt8Asd/9lylvZGGsXM+sYfd2O8CxsFiFBnRedlvFY3f16d+/h7n0I/00+7+5fJMviNLNiM+tQ+zXhmcjbZNnv3t2XA4vM7MDo0MlAOVkWZx0j+agbD7I71paT6YdcLbUBZwJzCM8avp3peFLi+hOwDKgk/ItvNOE5w3PAu8CzQKcsiPNYQrfCDOCNaDszS2MdCLwexfo28N3oeD/gNWAuoeukTaZjTYn5BOCxbIwziufNaJtZ+/9Plv7uDwemRb/7vwF7ZmOcUazFwCqgNOVYVsba0ptmfhARkaySL115IiKSI5SYREQkqygxiYhIVlFiEhGRrKLEJCIiWUWJSSRhZpxgxmOZjkMkVygxiYhIVlFiEomY8SUzXjPjDTPuNKPQjA1m/MqMmWY8Z0aX6NzDzZhixgwz/mrGntHx/c141ow3zfivGftFt29vxsNmvGPG/WZYdP6PzSiP7vPzDH3rIllFiUkEMONg4HxgqDuHA9XAFwmj86e5cwjwEvC96JL7gGvdGQi8lXL8fuA2dw4DjiHM6gFhNvarCOuB9QOGmrEX8FngkOg+P0j2uxTJDUpMIsHJhAXbpprxRrTfj7AcxZ+jc/4AHGtGKdDRnZei4/cCx5nRAejuzl8B3NnizqbonNfcWexODWE6pz5ABbAF+K0Zn4Pt54rkNSUmkcCAe905PNoOdOeGes5r6hxeW1O+rgaK3KkizNL9MHA28GQT7y2yW1FiEgmeA84zY28AMzqZ0Zvw/0jtTN8XAC+7UwGsMeNT0fELgZfcWQ8sNuMz0T3amLHHzj7QjPZAq
TtPAFcTlgIXyXtF6U8R2f25U27Gd4CnzSggzPZ+OWGxucHReysIz6EgLElwR5R45gFfiY5fCNxpxoToHp9v4GM7AH83oy2hxXZNM39bIjlJs4uLNMCMDe60z3QcIvlEXXkiIpJV1GISEZGsohaTiIhkFSUmERHJKkpMIiKSVZSYREQkqygxiYhIVlFiEhGRrPL/r7uHYqRj6ZcAAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 432x288 with 2 Axes>"
]
},
"metadata": {
"tags": [],
"needs_background": "light"
}
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YXvqhyrjwdnr",
"colab_type": "text"
},
"source": [
"### Exploring Various MLP Architectures"
]
},
{
"cell_type": "code",
"metadata": {
"id": "p739fuWWuvRT",
"colab_type": "code",
"colab": {}
},
"source": [
"from sklearn.neural_network import MLPClassifier\n",
"from sklearn.metrics import roc_auc_score\n",
"import pandas as pd\n",
"import numpy as np\n",
"import random\n",
"\n",
"# Define our trial space for hidden layer architectures\n",
"from itertools import product\n",
"ones = list(range(1,21))\n",
"twos = list(product(range(1,21),repeat=2))\n",
"threes = list(product(range(1,21),repeat=3))\n",
"combos = ones + twos + threes\n",
"\n",
"# Define list of activations\n",
"activations = ('relu', 'identity', 'tanh')\n",
"\n",
"# Store our results in a DataFrame for later\n",
"\n",
"results = pd.DataFrame(data=None, columns=['hidden layers',\n",
" 'activation', 'error_rate', \n",
" 'auroc']) # We'll want to store all our results to analyze later\n",
"\n",
"# Define our lists of possible things to create our architecture\n",
"for i in combos:\n",
" for a in activations:\n",
" hidden_layers = i\n",
" solvers = 'adam'\n",
" activate = a\n",
" alphas = 0.001\n",
" early_stoppings = True # We always want it to stop early if we're not improving to save computation\n",
"\n",
" clf = MLPClassifier(solver=solvers, \n",
" activation=activate,\n",
" alpha=alphas, \n",
" hidden_layer_sizes=hidden_layers, \n",
" early_stopping=early_stoppings, \n",
" validation_fraction=0.42)\n",
" \n",
" # Train the classifier\n",
" clf.fit(X_train, Y_train)\n",
"\n",
" # Predict against the test data\n",
" predictions = clf.predict(X_test)\n",
" actual = Y_test\n",
"\n",
" # Determine Basic Misclassification Rate\n",
" error_rate = ((predictions == actual).value_counts()[False]/actual.count())\n",
"\n",
" # Determine the AUROC\n",
" auroc = roc_auc_score(actual, predictions)\n",
"\n",
" # Provide progress\n",
" print(\"Running \", str(i), \" hidden layers for the solver: \", str(a))\n",
"\n",
" # Store all results for later\n",
" results = results.append({'hidden layers': hidden_layers, \n",
" 'activation': activate, \n",
" 'error_rate': error_rate,\n",
" 'auroc': auroc\n",
" }, ignore_index=True)\n",
" \n",
" # We'll probably clear the output when all is said and done since it would be massive"
],
"execution_count": 0,
"outputs": []
},
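{
"cell_type": "markdown",
"metadata": {},
"source": [
"A side note on the AUROC computed in the loop above: `roc_auc_score` was given hard 0/1 predictions, which collapses the ROC curve to a single operating point. Below is a minimal sketch of the probability-based variant (not used for the tables that follow; `clf` is assumed to be an already-fitted classifier from the loop):"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Hypothetical variant: score with predicted probabilities instead of hard labels.\n",
"# predict_proba returns one column per class; take the positive-class column.\n",
"probs = clf.predict_proba(X_test)[:, 1]\n",
"auroc_prob = roc_auc_score(Y_test, probs)  # AUROC over all thresholds\n",
"print(auroc_prob)"
],
"execution_count": 0,
"outputs": []
},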
{
"cell_type": "code",
"metadata": {
"id": "EPSGDHvzw732",
"colab_type": "code",
"colab": {}
},
"source": [
"results.head()"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "KidCgDS80-SL",
"colab_type": "text"
},
"source": [
"We could run this overnight and lose connection to our run-time, so lets make online and local copies of the serialized results dataframe so we can run analysis on it later."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2jikitG67b2Y",
"colab_type": "text"
},
"source": [
"We can use https://file.io to save a copy to the cloud. This is a private/public framework but we're not particularly concerned with locking this data down since there's nothing inherently bad about losing it."
]
},
{
"cell_type": "code",
"metadata": {
"id": "Zvw224UanEaI",
"colab_type": "code",
"colab": {}
},
"source": [
"results.to_pickle('/content/results.pickle')\n",
"# Lets not lose these results in case we need to run this exploration overnight\n",
"push_response = !curl -F \"file=@results.pickle\" https://file.io\n",
"print(push_response)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "QEUYhHvb7n7Z",
"colab_type": "text"
},
"source": [
"Colaboratory also has tooling integrated with Chrome to simply download right to your local computer! We'll do this as well to ensure we keep a local copy separate from this run-time."
]
},
{
"cell_type": "code",
"metadata": {
"id": "P6Rnpsr802m_",
"colab_type": "code",
"colab": {}
},
"source": [
"from google.colab import files\n",
"files.download('/content/results.pickle') "
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "pQznvKvbnD9y",
"colab_type": "text"
},
"source": [
"### Analysis & Discussion"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QUAq2X9S7H_2",
"colab_type": "text"
},
"source": [
"As I said, exploring the trial space of all possible products of 1-3 hidden layers with 1-20 nodes per layer for three possible activations took a really long time (I wish I timed it but I'm not doing that again).\n",
"\n",
"We re-load our local copy into the runtime and read it for analysis."
]
},
{
"cell_type": "code",
"metadata": {
"id": "Ru7najiy7Hf9",
"colab_type": "code",
"outputId": "5e147759-6895-4923-ac9c-67c97aeafd2d",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 197
}
},
"source": [
"results = pd.read_pickle('/content/results.pickle')\n",
"results.head()"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>hidden layers</th>\n",
" <th>activation</th>\n",
" <th>error_rate</th>\n",
" <th>auroc</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>1</td>\n",
" <td>relu</td>\n",
" <td>0.475096</td>\n",
" <td>0.500000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>1</td>\n",
" <td>identity</td>\n",
" <td>0.058865</td>\n",
" <td>0.941772</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>1</td>\n",
" <td>tanh</td>\n",
" <td>0.072449</td>\n",
" <td>0.927858</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>2</td>\n",
" <td>relu</td>\n",
" <td>0.068617</td>\n",
" <td>0.932238</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>2</td>\n",
" <td>identity</td>\n",
" <td>0.061303</td>\n",
" <td>0.939623</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" hidden layers activation error_rate auroc\n",
"0 1 relu 0.475096 0.500000\n",
"1 1 identity 0.058865 0.941772\n",
"2 1 tanh 0.072449 0.927858\n",
"3 2 relu 0.068617 0.932238\n",
"4 2 identity 0.061303 0.939623"
]
},
"metadata": {
"tags": []
},
"execution_count": 13
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RzQK1S3I8eed",
"colab_type": "text"
},
"source": [
"As we can see, we ran a TON of trials!"
]
},
{
"cell_type": "code",
"metadata": {
"id": "cx5-bJH18X4a",
"colab_type": "code",
"outputId": "9846ec7f-699f-48a3-b03b-8db48370a810",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
}
},
"source": [
"results.shape"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"(25260, 4)"
]
},
"metadata": {
"tags": []
},
"execution_count": 15
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "g7pavtlF-x69",
"colab_type": "text"
},
"source": [
"#### AUROC Scoring"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uO_XvCqr-1YT",
"colab_type": "text"
},
"source": [
"Lets take a look at some of our best trials when using the AUROC score as the criteria (closer to 1 is better)."
]
},
{
"cell_type": "code",
"metadata": {
"id": "SzbKprrs8aP1",
"colab_type": "code",
"outputId": "3d905d4f-4c41-4c33-8852-fcd2b205d100",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 347
}
},
"source": [
"results.sort_values('auroc',ascending=False).head(10)"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>hidden layers</th>\n",
" <th>activation</th>\n",
" <th>error_rate</th>\n",
" <th>auroc</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>22293</th>\n",
" <td>(18, 11, 12)</td>\n",
" <td>relu</td>\n",
" <td>0.047022</td>\n",
" <td>0.953470</td>\n",
" </tr>\n",
" <tr>\n",
" <th>20868</th>\n",
" <td>(17, 7, 17)</td>\n",
" <td>relu</td>\n",
" <td>0.046674</td>\n",
" <td>0.953280</td>\n",
" </tr>\n",
" <tr>\n",
" <th>22536</th>\n",
" <td>(18, 15, 13)</td>\n",
" <td>relu</td>\n",
" <td>0.047022</td>\n",
" <td>0.953087</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24792</th>\n",
" <td>(20, 13, 5)</td>\n",
" <td>relu</td>\n",
" <td>0.048067</td>\n",
" <td>0.952440</td>\n",
" </tr>\n",
" <tr>\n",
" <th>20229</th>\n",
" <td>(16, 17, 4)</td>\n",
" <td>relu</td>\n",
" <td>0.048067</td>\n",
" <td>0.952370</td>\n",
" </tr>\n",
" <tr>\n",
" <th>10080</th>\n",
" <td>(8, 8, 1)</td>\n",
" <td>relu</td>\n",
" <td>0.048415</td>\n",
" <td>0.952177</td>\n",
" </tr>\n",
" <tr>\n",
" <th>13214</th>\n",
" <td>(10, 20, 5)</td>\n",
" <td>tanh</td>\n",
" <td>0.048067</td>\n",
" <td>0.952161</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24873</th>\n",
" <td>(20, 14, 12)</td>\n",
" <td>relu</td>\n",
" <td>0.048415</td>\n",
" <td>0.952038</td>\n",
" </tr>\n",
" <tr>\n",
" <th>23208</th>\n",
" <td>(19, 6, 17)</td>\n",
" <td>relu</td>\n",
" <td>0.048415</td>\n",
" <td>0.951969</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24309</th>\n",
" <td>(20, 5, 4)</td>\n",
" <td>relu</td>\n",
" <td>0.048415</td>\n",
" <td>0.951899</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" hidden layers activation error_rate auroc\n",
"22293 (18, 11, 12) relu 0.047022 0.953470\n",
"20868 (17, 7, 17) relu 0.046674 0.953280\n",
"22536 (18, 15, 13) relu 0.047022 0.953087\n",
"24792 (20, 13, 5) relu 0.048067 0.952440\n",
"20229 (16, 17, 4) relu 0.048067 0.952370\n",
"10080 (8, 8, 1) relu 0.048415 0.952177\n",
"13214 (10, 20, 5) tanh 0.048067 0.952161\n",
"24873 (20, 14, 12) relu 0.048415 0.952038\n",
"23208 (19, 6, 17) relu 0.048415 0.951969\n",
"24309 (20, 5, 4) relu 0.048415 0.951899"
]
},
"metadata": {
"tags": []
},
"execution_count": 19
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wI8Xi7CSF7ss",
"colab_type": "text"
},
"source": [
"Just for comparison, lets look at the worst architectures for AUROC score as well (farther from 1 is worse)."
]
},
{
"cell_type": "code",
"metadata": {
"id": "0glTRptj8s5_",
"colab_type": "code",
"outputId": "67560c81-42a3-4ba1-bee0-53d513c15faa",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 347
}
},
"source": [
"results.sort_values('auroc',ascending=True).head(10)"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>hidden layers</th>\n",
" <th>activation</th>\n",
" <th>error_rate</th>\n",
" <th>auroc</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>10890</th>\n",
" <td>(9, 1, 11)</td>\n",
" <td>relu</td>\n",
" <td>0.475444</td>\n",
" <td>0.499668</td>\n",
" </tr>\n",
" <tr>\n",
" <th>20496</th>\n",
" <td>(17, 1, 13)</td>\n",
" <td>relu</td>\n",
" <td>0.475444</td>\n",
" <td>0.499668</td>\n",
" </tr>\n",
" <tr>\n",
" <th>16950</th>\n",
" <td>(14, 2, 11)</td>\n",
" <td>relu</td>\n",
" <td>0.475444</td>\n",
" <td>0.499668</td>\n",
" </tr>\n",
" <tr>\n",
" <th>10881</th>\n",
" <td>(9, 1, 8)</td>\n",
" <td>relu</td>\n",
" <td>0.475444</td>\n",
" <td>0.499668</td>\n",
" </tr>\n",
" <tr>\n",
" <th>11823</th>\n",
" <td>(9, 17, 2)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.500000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18903</th>\n",
" <td>(15, 15, 2)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.500000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>780</th>\n",
" <td>(13, 1)</td>\n",
" <td>relu</td>\n",
" <td>0.475096</td>\n",
" <td>0.500000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24195</th>\n",
" <td>(20, 3, 6)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.500000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>1</td>\n",
" <td>relu</td>\n",
" <td>0.475096</td>\n",
" <td>0.500000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7746</th>\n",
" <td>(6, 9, 3)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.500000</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" hidden layers activation error_rate auroc\n",
"10890 (9, 1, 11) relu 0.475444 0.499668\n",
"20496 (17, 1, 13) relu 0.475444 0.499668\n",
"16950 (14, 2, 11) relu 0.475444 0.499668\n",
"10881 (9, 1, 8) relu 0.475444 0.499668\n",
"11823 (9, 17, 2) relu 0.524904 0.500000\n",
"18903 (15, 15, 2) relu 0.524904 0.500000\n",
"780 (13, 1) relu 0.475096 0.500000\n",
"24195 (20, 3, 6) relu 0.524904 0.500000\n",
"0 1 relu 0.475096 0.500000\n",
"7746 (6, 9, 3) relu 0.524904 0.500000"
]
},
"metadata": {
"tags": []
},
"execution_count": 20
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8Hbp4eJvKa-b",
"colab_type": "text"
},
"source": [
"Just by looking at the tail ends of this sorted distribution, it's hard to identify distinct patterns that would lead us to reduce our trial-space if we were to conduct it in the future. The top performing AUROC is a `relu` activation with `(18,11,12)` hidden layers. That being said, we're not seeing significantly worse AUROC performance from much simpler architectures in this list. Just from a cursory overview, despite `(18,11,12)` being the top of the list, I would lean more towards the `(8,8,1)` architecture as this is going to be a simpler one that could be easier/faster to train and generalize on future datasets without overfitting. This would, of course, require further testing and analysis.\n",
"\n",
"It is interesting to see in the worst values that the `relu` activation is prevalent there as well, implying maybe the other possible activations lie as relatively average in terms of AUROC performance. That being said, while we're seeing seeing some simpler architectures with less layers in the worse performance we're also seeing some relatively complex ones as well. This also lends to the idea that increased complexity does not equate to better performance."
]
},
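{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of what that further testing might look like, assuming the same scaled features `X` and labels `Y` from above: compare the two candidate architectures with k-fold cross-validated AUROC rather than a single train/test split."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"from sklearn.model_selection import cross_val_score\n",
"from sklearn.neural_network import MLPClassifier\n",
"\n",
"# Hypothetical follow-up: compare the top architecture with the simpler candidate.\n",
"for layers in [(18, 11, 12), (8, 8, 1)]:\n",
"    clf = MLPClassifier(solver='adam', activation='relu', alpha=0.001,\n",
"                        hidden_layer_sizes=layers, early_stopping=True)\n",
"    scores = cross_val_score(clf, X, Y, cv=5, scoring='roc_auc')\n",
"    print(layers, scores.mean(), scores.std())"
],
"execution_count": 0,
"outputs": []
},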
{
"cell_type": "markdown",
"metadata": {
"id": "gEA_Uoc8Mxb2",
"colab_type": "text"
},
"source": [
"#### Misclassification Rate Performance"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oFVfr5fiM1cd",
"colab_type": "text"
},
"source": [
"Lets look at some of the performances based on simpler misclassification rates (lower=better)."
]
},
{
"cell_type": "code",
"metadata": {
"id": "bJMSJtdvJ7x1",
"colab_type": "code",
"outputId": "f76684d6-6489-4fb3-ea84-fa722bfc15f5",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 347
}
},
"source": [
"results.sort_values('error_rate',ascending=True).head(10)"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>hidden layers</th>\n",
" <th>activation</th>\n",
" <th>error_rate</th>\n",
" <th>auroc</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>20868</th>\n",
" <td>(17, 7, 17)</td>\n",
" <td>relu</td>\n",
" <td>0.046674</td>\n",
" <td>0.953280</td>\n",
" </tr>\n",
" <tr>\n",
" <th>22293</th>\n",
" <td>(18, 11, 12)</td>\n",
" <td>relu</td>\n",
" <td>0.047022</td>\n",
" <td>0.953470</td>\n",
" </tr>\n",
" <tr>\n",
" <th>22536</th>\n",
" <td>(18, 15, 13)</td>\n",
" <td>relu</td>\n",
" <td>0.047022</td>\n",
" <td>0.953087</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24792</th>\n",
" <td>(20, 13, 5)</td>\n",
" <td>relu</td>\n",
" <td>0.048067</td>\n",
" <td>0.952440</td>\n",
" </tr>\n",
" <tr>\n",
" <th>20229</th>\n",
" <td>(16, 17, 4)</td>\n",
" <td>relu</td>\n",
" <td>0.048067</td>\n",
" <td>0.952370</td>\n",
" </tr>\n",
" <tr>\n",
" <th>13214</th>\n",
" <td>(10, 20, 5)</td>\n",
" <td>tanh</td>\n",
" <td>0.048067</td>\n",
" <td>0.952161</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24699</th>\n",
" <td>(20, 11, 14)</td>\n",
" <td>relu</td>\n",
" <td>0.048415</td>\n",
" <td>0.951760</td>\n",
" </tr>\n",
" <tr>\n",
" <th>23763</th>\n",
" <td>(19, 16, 2)</td>\n",
" <td>relu</td>\n",
" <td>0.048415</td>\n",
" <td>0.951899</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24873</th>\n",
" <td>(20, 14, 12)</td>\n",
" <td>relu</td>\n",
" <td>0.048415</td>\n",
" <td>0.952038</td>\n",
" </tr>\n",
" <tr>\n",
" <th>23208</th>\n",
" <td>(19, 6, 17)</td>\n",
" <td>relu</td>\n",
" <td>0.048415</td>\n",
" <td>0.951969</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" hidden layers activation error_rate auroc\n",
"20868 (17, 7, 17) relu 0.046674 0.953280\n",
"22293 (18, 11, 12) relu 0.047022 0.953470\n",
"22536 (18, 15, 13) relu 0.047022 0.953087\n",
"24792 (20, 13, 5) relu 0.048067 0.952440\n",
"20229 (16, 17, 4) relu 0.048067 0.952370\n",
"13214 (10, 20, 5) tanh 0.048067 0.952161\n",
"24699 (20, 11, 14) relu 0.048415 0.951760\n",
"23763 (19, 16, 2) relu 0.048415 0.951899\n",
"24873 (20, 14, 12) relu 0.048415 0.952038\n",
"23208 (19, 6, 17) relu 0.048415 0.951969"
]
},
"metadata": {
"tags": []
},
"execution_count": 22
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BwbxVC6RNHvo",
"colab_type": "text"
},
"source": [
"Again, lets also look at the worst performers"
]
},
{
"cell_type": "code",
"metadata": {
"id": "MdUVOPChM808",
"colab_type": "code",
"outputId": "d4bd2810-1c25-41e6-8a06-9dffbaa11438",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 347
}
},
"source": [
"results.sort_values('error_rate',ascending=False).head(10)"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>hidden layers</th>\n",
" <th>activation</th>\n",
" <th>error_rate</th>\n",
" <th>auroc</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>3671</th>\n",
" <td>(3, 1, 4)</td>\n",
" <td>tanh</td>\n",
" <td>0.524904</td>\n",
" <td>0.5</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1140</th>\n",
" <td>(19, 1)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.5</td>\n",
" </tr>\n",
" <tr>\n",
" <th>16680</th>\n",
" <td>(13, 18, 1)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.5</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4802</th>\n",
" <td>(3, 20, 1)</td>\n",
" <td>tanh</td>\n",
" <td>0.524904</td>\n",
" <td>0.5</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4803</th>\n",
" <td>(3, 20, 2)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.5</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6486</th>\n",
" <td>(5, 8, 3)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.5</td>\n",
" </tr>\n",
" <tr>\n",
" <th>22392</th>\n",
" <td>(18, 13, 5)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.5</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24426</th>\n",
" <td>(20, 7, 3)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.5</td>\n",
" </tr>\n",
" <tr>\n",
" <th>22803</th>\n",
" <td>(18, 20, 2)</td>\n",
" <td>relu</td>\n",
" <td>0.524904</td>\n",
" <td>0.5</td>\n",
" </tr>\n",
" <tr>\n",
" <th>22802</th>\n",
" <td>(18, 20, 1)</td>\n",
" <td>tanh</td>\n",
" <td>0.524904</td>\n",
" <td>0.5</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" hidden layers activation error_rate auroc\n",
"3671 (3, 1, 4) tanh 0.524904 0.5\n",
"1140 (19, 1) relu 0.524904 0.5\n",
"16680 (13, 18, 1) relu 0.524904 0.5\n",
"4802 (3, 20, 1) tanh 0.524904 0.5\n",
"4803 (3, 20, 2) relu 0.524904 0.5\n",
"6486 (5, 8, 3) relu 0.524904 0.5\n",
"22392 (18, 13, 5) relu 0.524904 0.5\n",
"24426 (20, 7, 3) relu 0.524904 0.5\n",
"22803 (18, 20, 2) relu 0.524904 0.5\n",
"22802 (18, 20, 1) tanh 0.524904 0.5"
]
},
"metadata": {
"tags": []
},
"execution_count": 23
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "vNL-ZNzkNNf0",
"colab_type": "text"
},
"source": [
"While we're getting slightly different lists here, the results seem to be about the same with poor performance with AUROC correlating relatively closely with misclassification."
]
},
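{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick way to quantify how closely the two measures track each other across all trials (a sketch over the `results` DataFrame loaded above):"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# How strongly do the two performance measures move together across all 25,260 trials?\n",
"print(results['error_rate'].corr(results['auroc']))                     # Pearson correlation\n",
"print(results['error_rate'].corr(results['auroc'], method='spearman'))  # rank-based correlation"
],
"execution_count": 0,
"outputs": []
},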
{
"cell_type": "markdown",
"metadata": {
"id": "05wMB8NQNbW9",
"colab_type": "text"
},
"source": [
"### Conclusion\n",
"So what have we done here, apart from use hours and hours of Google's compute infrastructure? Well we've basically brute-forced a trial-space for various artificial neural network architectures and assessed them based off of criteria. We learned that there are likely no real patterns of parameters that can be assumed in order to find the best model.\n",
"\n",
"That being said, if I were a production Machine Learning engineer would I do this again to re-train my model? Probably not. I would more likely use some form of Bayesian-optimization for hyperparameter tuning, or even a basic GridSearch, to reduce my trial-space significantly and proceed forward with the fact that even if my architecture is not 100% optimal according to my objective criteria it is likely good enough for its intended generalization purposes.\n",
"\n",
"Of course, something the performs well on a validation set might do worse on future data anyway as the domain changes. It's important to consider these external elements and not just rely on your models as they stand at one point in time."
]
},
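{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, a minimal sketch of the grid-search alternative mentioned above, assuming the same `X_train`/`Y_train`; the parameter grid here is illustrative, not the one used in this homework."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"from sklearn.model_selection import GridSearchCV\n",
"from sklearn.neural_network import MLPClassifier\n",
"\n",
"# Illustrative, much smaller grid than the brute-force sweep above.\n",
"param_grid = {\n",
"    'hidden_layer_sizes': [(20,), (8, 8, 1), (18, 11, 12)],\n",
"    'activation': ['relu', 'tanh'],\n",
"}\n",
"search = GridSearchCV(MLPClassifier(solver='adam', alpha=0.001, early_stopping=True),\n",
"                      param_grid, scoring='roc_auc', cv=3, n_jobs=-1)\n",
"search.fit(X_train, Y_train)\n",
"print(search.best_params_, search.best_score_)"
],
"execution_count": 0,
"outputs": []
},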
{
"cell_type": "code",
"metadata": {
"id": "T7dxI9CYNMOX",
"colab_type": "code",
"colab": {}
},
"source": [
""
],
"execution_count": 0,
"outputs": []
}
]
}