@muellerzr
Last active April 23, 2020 00:21
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "2020-04-22-TabularNumpy.ipynb",
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "yEKJvwpc9wAe",
"colab_type": "text"
},
"source": [
"# Speeding up fastai Tabular with NumPy\n",
"\n",
"> Speeding up fastai tabular training by 40%\n",
"\n",
"- toc: true\n",
"- badges: true\n",
"- comments: true\n",
"- image: images/chart-preview.png\n",
"\n",
"---\n",
"This blog is also a Jupyter notebook available to run from the top down. There will be code snippets that you can then run in any Jupyter environment. This post was written using:\n",
"\n",
"* `fastai2`: 0.0.16\n",
"* `fastcore`: 0.1.16\n",
"\n",
"---\n",
"\n",
"# What is this article?\n",
"\n",
"In this article, we're going to dive deep into the `fastai` `DataLoader` and how to integrate it with NumPy. The end result? Speeding up tabular training by 40% (to where almost half the time per epoch is just the time to train the model itself)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "THZRFXnr-hf2",
"colab_type": "text"
},
"source": [
"## What is `fastai` Tabular? A TL;DR\n",
"\n",
"When working with tabular data, `fastai` has introduced a powerful tool to help with preprocessing your data: `TabularPandas`. It's extremely useful: you can keep everything in one place, encode and decode all of your tables at once, and the memory overhead on top of your `Pandas` dataframe can be very minimal. Let's look at an example of it. \n",
"\n",
"First let's import the tabular module:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "47slJ_-S9cCD",
"colab_type": "code",
"colab": {}
},
"source": [
"from fastai2.tabular.all import *"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "db5BcYVB_KY3",
"colab_type": "text"
},
"source": [
"For our particular tests today, we'll be using the `ADULT_SAMPLE` dataset, where we need to identify if a particular individual makes above or below $50,000. Let's grab the data:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "h2kN9WCU_J9t",
"colab_type": "code",
"colab": {}
},
"source": [
"path = untar_data(URLs.ADULT_SAMPLE)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "oj-B4XMD_WnZ",
"colab_type": "text"
},
"source": [
"And now we can open it in `Pandas`:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "xg1QKoXd_Jj8",
"colab_type": "code",
"colab": {}
},
"source": [
"df = pd.read_csv(path/'adult.csv')"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "KN3HIrRq_aCm",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 394
},
"outputId": "aaef9726-9a9a-48b8-c1bc-ecb0f2ada679"
},
"source": [
"df.head()"
],
"execution_count": 5,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>age</th>\n",
" <th>workclass</th>\n",
" <th>fnlwgt</th>\n",
" <th>education</th>\n",
" <th>education-num</th>\n",
" <th>marital-status</th>\n",
" <th>occupation</th>\n",
" <th>relationship</th>\n",
" <th>race</th>\n",
" <th>sex</th>\n",
" <th>capital-gain</th>\n",
" <th>capital-loss</th>\n",
" <th>hours-per-week</th>\n",
" <th>native-country</th>\n",
" <th>salary</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>49</td>\n",
" <td>Private</td>\n",
" <td>101320</td>\n",
" <td>Assoc-acdm</td>\n",
" <td>12.0</td>\n",
" <td>Married-civ-spouse</td>\n",
" <td>NaN</td>\n",
" <td>Wife</td>\n",
" <td>White</td>\n",
" <td>Female</td>\n",
" <td>0</td>\n",
" <td>1902</td>\n",
" <td>40</td>\n",
" <td>United-States</td>\n",
" <td>&gt;=50k</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>44</td>\n",
" <td>Private</td>\n",
" <td>236746</td>\n",
" <td>Masters</td>\n",
" <td>14.0</td>\n",
" <td>Divorced</td>\n",
" <td>Exec-managerial</td>\n",
" <td>Not-in-family</td>\n",
" <td>White</td>\n",
" <td>Male</td>\n",
" <td>10520</td>\n",
" <td>0</td>\n",
" <td>45</td>\n",
" <td>United-States</td>\n",
" <td>&gt;=50k</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>38</td>\n",
" <td>Private</td>\n",
" <td>96185</td>\n",
" <td>HS-grad</td>\n",
" <td>NaN</td>\n",
" <td>Divorced</td>\n",
" <td>NaN</td>\n",
" <td>Unmarried</td>\n",
" <td>Black</td>\n",
" <td>Female</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>32</td>\n",
" <td>United-States</td>\n",
" <td>&lt;50k</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>38</td>\n",
" <td>Self-emp-inc</td>\n",
" <td>112847</td>\n",
" <td>Prof-school</td>\n",
" <td>15.0</td>\n",
" <td>Married-civ-spouse</td>\n",
" <td>Prof-specialty</td>\n",
" <td>Husband</td>\n",
" <td>Asian-Pac-Islander</td>\n",
" <td>Male</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>40</td>\n",
" <td>United-States</td>\n",
" <td>&gt;=50k</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>42</td>\n",
" <td>Self-emp-not-inc</td>\n",
" <td>82297</td>\n",
" <td>7th-8th</td>\n",
" <td>NaN</td>\n",
" <td>Married-civ-spouse</td>\n",
" <td>Other-service</td>\n",
" <td>Wife</td>\n",
" <td>Black</td>\n",
" <td>Female</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>50</td>\n",
" <td>United-States</td>\n",
" <td>&lt;50k</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" age workclass fnlwgt ... hours-per-week native-country salary\n",
"0 49 Private 101320 ... 40 United-States >=50k\n",
"1 44 Private 236746 ... 45 United-States >=50k\n",
"2 38 Private 96185 ... 32 United-States <50k\n",
"3 38 Self-emp-inc 112847 ... 40 United-States >=50k\n",
"4 42 Self-emp-not-inc 82297 ... 50 United-States <50k\n",
"\n",
"[5 rows x 15 columns]"
]
},
"metadata": {
"tags": []
},
"execution_count": 5
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JchctIxu_cfF",
"colab_type": "text"
},
"source": [
"Now that we have our `DataFrame`, let's fit it into a `TabularPandas` object for preprocessing. To do so we need:\n",
"\n",
"* `procs` (pre-processing)\n",
"* `cat_names` (categorical variables)\n",
"* `cont_names` (continuous variables)\n",
"* `y_names` (our y columns)\n",
"\n",
"For our case, these look like so:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "U9kKdY1s_ax0",
"colab_type": "code",
"colab": {}
},
"source": [
"cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']\n",
"cont_names = ['age', 'fnlwgt', 'education-num']\n",
"procs = [Categorify, FillMissing, Normalize]\n",
"y_names = 'salary'"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "QHNZPXRy_31Y",
"colab_type": "text"
},
"source": [
"We'll also need to tell `TabularPandas` how we want to split our data. We'll use a random 20% subsample:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "XZgf25KK_2Tz",
"colab_type": "code",
"colab": {}
},
"source": [
"splits = RandomSplitter()(range_of(df))"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "a431xqTY__Kj",
"colab_type": "text"
},
"source": [
"Now let's make a `TabularPandas`!"
]
},
{
"cell_type": "code",
"metadata": {
"id": "RCTMToQ3_-ao",
"colab_type": "code",
"colab": {}
},
"source": [
"to = TabularPandas(df, procs=procs, cat_names=cat_names, cont_names=cont_names,\n",
" y_names=y_names, splits=splits)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "kubAd44mAbcB",
"colab_type": "text"
},
"source": [
"Now all of our data is pre-processed, and we can grab the raw values if we wanted to, say, use it with `XGBoost`, like so:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "MYIXml03Aobt",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 142
},
"outputId": "d9031b3e-6f2e-4d26-9305-562c44c34fe0"
},
"source": [
"to.train.xs.iloc[:3]"
],
"execution_count": 30,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>workclass</th>\n",
" <th>education</th>\n",
" <th>marital-status</th>\n",
" <th>occupation</th>\n",
" <th>relationship</th>\n",
" <th>race</th>\n",
" <th>education-num_na</th>\n",
" <th>age</th>\n",
" <th>fnlwgt</th>\n",
" <th>education-num</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>3337</th>\n",
" <td>8</td>\n",
" <td>13</td>\n",
" <td>5</td>\n",
" <td>11</td>\n",
" <td>2</td>\n",
" <td>2</td>\n",
" <td>1</td>\n",
" <td>-1.071338</td>\n",
" <td>-0.396740</td>\n",
" <td>1.532950</td>\n",
" </tr>\n",
" <tr>\n",
" <th>13162</th>\n",
" <td>8</td>\n",
" <td>12</td>\n",
" <td>3</td>\n",
" <td>12</td>\n",
" <td>1</td>\n",
" <td>5</td>\n",
" <td>1</td>\n",
" <td>0.030019</td>\n",
" <td>0.469650</td>\n",
" <td>-0.417817</td>\n",
" </tr>\n",
" <tr>\n",
" <th>10215</th>\n",
" <td>5</td>\n",
" <td>10</td>\n",
" <td>1</td>\n",
" <td>5</td>\n",
" <td>2</td>\n",
" <td>5</td>\n",
" <td>1</td>\n",
" <td>-0.557371</td>\n",
" <td>0.392678</td>\n",
" <td>1.142797</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" workclass education marital-status ... age fnlwgt education-num\n",
"3337 8 13 5 ... -1.071338 -0.396740 1.532950\n",
"13162 8 12 3 ... 0.030019 0.469650 -0.417817\n",
"10215 5 10 1 ... -0.557371 0.392678 1.142797\n",
"\n",
"[3 rows x 10 columns]"
]
},
"metadata": {
"tags": []
},
"execution_count": 30
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DI0-sbCWAsnh",
"colab_type": "text"
},
"source": [
"And it's fully encoded! Now that we're a bit familiar with `TabularPandas`, let's do some speed tests!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ZP0DY0mnAGgR",
"colab_type": "text"
},
"source": [
"## The Baseline\n",
"\n",
"We'll run four different tests:\n",
"1. One batch of the training data\n",
"2. Iterating over the entire training dataset\n",
"3. Iterating over the entire validation set\n",
"4. Fitting for 10 epochs (GPU only)\n",
"\n",
"And for each of these we will compare the times on the CPU and the GPU. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Govpigi4BPFn",
"colab_type": "text"
},
"source": [
"### CPU:\n",
"\n",
"First let's grab one batch. This is important because each time we iterate over the training `DataLoader`, we actually shuffle our data, which can add some time:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Crxzdy2UAFvi",
"colab_type": "code",
"colab": {}
},
"source": [
"dls = to.dataloaders(bs=128, device='cpu')"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "Jm1rQ-f1BpoG",
"colab_type": "text"
},
"source": [
"To test our times, we'll use `%%timeit`, and for iterating over the entire `DataLoader` we'll look at the time per batch as well. \n",
"\n",
"First, a batch from training:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "lgtL5pH6BjL2",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "c9566125-9117-469d-d809-1b158a873230"
},
"source": [
"%%timeit\n",
"_ = next(iter(dls.train))"
],
"execution_count": 32,
"outputs": [
{
"output_type": "stream",
"text": [
"10 loops, best of 3: 18.3 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LHZd20g_B2a1",
"colab_type": "text"
},
"source": [
"Now the validation:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "S6gzVmcmBldT",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "3bbc0675-3e47-44e3-f9b6-674222cad8a6"
},
"source": [
"%%timeit\n",
"_ = next(iter(dls.valid))"
],
"execution_count": 33,
"outputs": [
{
"output_type": "stream",
"text": [
"100 loops, best of 3: 3.37 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YsElv1TnB-EF",
"colab_type": "text"
},
"source": [
"Alright, so first we can see that our shuffling function is adding almost 15 milliseconds to our time, something we can improve on! Let's then go through the entire `DataLoader`:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "pyAJQesfB67M",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "a1823b88-9b24-4a53-a070-79b76a77c602"
},
"source": [
"%%timeit\n",
"for _ in dls.train:\n",
" _"
],
"execution_count": 34,
"outputs": [
{
"output_type": "stream",
"text": [
"1 loop, best of 3: 661 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5-vTciKFCRJJ",
"colab_type": "text"
},
"source": [
"Now let's get an average time per batch:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "3WUlaWCxCSu9",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "87f2d159-4cb3-4bcc-f08c-b0557fecf0b1"
},
"source": [
"print(661/len(dls.train))"
],
"execution_count": 36,
"outputs": [
{
"output_type": "stream",
"text": [
"3.2561576354679804\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "P9V6UFMeCWke",
"colab_type": "text"
},
"source": [
"About 3.25 milliseconds per batch on the training set. Now let's look at the validation:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "MwfLneY-CI3U",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "2aefa255-a1be-4bbd-a317-d6fdd26eaad6"
},
"source": [
"%%timeit\n",
"for _ in dls.valid:\n",
" _"
],
"execution_count": 35,
"outputs": [
{
"output_type": "stream",
"text": [
"10 loops, best of 3: 159 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "jhvrp8NRCNHW",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "1b0d3c0c-f308-403a-fbba-2fcc16c84e0d"
},
"source": [
"print(159/len(dls.valid))"
],
"execution_count": 37,
"outputs": [
{
"output_type": "stream",
"text": [
"3.1176470588235294\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TR0o_hhHCeSH",
"colab_type": "text"
},
"source": [
"And about 3.11 milliseconds per batch on the validation, so aside from the shuffling the two are about the same. Now let's compare some GPU times:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zM9P_Kn9CodG",
"colab_type": "text"
},
"source": [
"### GPU"
]
},
{
"cell_type": "code",
"metadata": {
"id": "h40ugTngCdh2",
"colab_type": "code",
"colab": {}
},
"source": [
"dls = to.dataloaders(bs=128, device='cuda')"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "WqkPph2ECrHO",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "16591a24-726a-42f7-e848-d03917ca2009"
},
"source": [
"%%timeit\n",
"_ = next(iter(dls.train))"
],
"execution_count": 40,
"outputs": [
{
"output_type": "stream",
"text": [
"100 loops, best of 3: 18.8 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "E0sGAGkBCvks",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "0bb5f51d-5e7b-4140-d1a7-678ef3c92493"
},
"source": [
"%%timeit\n",
"_ = next(iter(dls.valid))"
],
"execution_count": 41,
"outputs": [
{
"output_type": "stream",
"text": [
"100 loops, best of 3: 3.49 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ry9mFQRRCxS0",
"colab_type": "text"
},
"source": [
"So first, grabbing just one batch we can see it added about half a millisecond on the training and 0.2 milliseconds on the validation, so we're not utilizing the GPU much for this process (which makes sense: `TabularPandas` is *CPU*-bound). And now let's iterate:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "rZLazQxXCw9i",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "2c19d4d6-8ffe-42fd-d402-6cdeb6907f46"
},
"source": [
"%%timeit\n",
"for _ in dls.train:\n",
" _"
],
"execution_count": 42,
"outputs": [
{
"output_type": "stream",
"text": [
"1 loop, best of 3: 693 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "2ktdM3KFTL31",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "2f75b5d1-b2fb-40b3-fb33-afaefc1c2f0b"
},
"source": [
"print(693/len(dls.train))"
],
"execution_count": 74,
"outputs": [
{
"output_type": "stream",
"text": [
"3.413793103448276\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "tpzWIT5YDDFO",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "32fb67c6-e2a8-48df-b806-f807dbcf6ea7"
},
"source": [
"%%timeit\n",
"for _ in dls.valid:\n",
" _"
],
"execution_count": 43,
"outputs": [
{
"output_type": "stream",
"text": [
"10 loops, best of 3: 163 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "3ZCCuWnLTPIK",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "5d6010f1-9ff6-4fa2-de04-5bac19e75102"
},
"source": [
"print(163/len(dls.valid))"
],
"execution_count": 75,
"outputs": [
{
"output_type": "stream",
"text": [
"3.196078431372549\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nKNX1vV6DG68",
"colab_type": "text"
},
"source": [
"And here we can see a little more time being added as well. Now that we have those baselines, let's fit for ten epochs real quick:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "b8fn0QQxDE0h",
"colab_type": "code",
"colab": {}
},
"source": [
"learn = tabular_learner(dls, layers=[200,100], metrics=accuracy)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "ukTiw_nhDOJq",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 393
},
"outputId": "673f443c-64b3-4905-f925-c66697b6b81b"
},
"source": [
"%%time\n",
"learn.fit(10, 1e-2)"
],
"execution_count": 45,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: left;\">\n",
" <th>epoch</th>\n",
" <th>train_loss</th>\n",
" <th>valid_loss</th>\n",
" <th>accuracy</th>\n",
" <th>time</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <td>0</td>\n",
" <td>0.377574</td>\n",
" <td>0.364423</td>\n",
" <td>0.833999</td>\n",
" <td>00:02</td>\n",
" </tr>\n",
" <tr>\n",
" <td>1</td>\n",
" <td>0.356772</td>\n",
" <td>0.357792</td>\n",
" <td>0.835688</td>\n",
" <td>00:02</td>\n",
" </tr>\n",
" <tr>\n",
" <td>2</td>\n",
" <td>0.358388</td>\n",
" <td>0.358207</td>\n",
" <td>0.833692</td>\n",
" <td>00:02</td>\n",
" </tr>\n",
" <tr>\n",
" <td>3</td>\n",
" <td>0.352414</td>\n",
" <td>0.352521</td>\n",
" <td>0.840602</td>\n",
" <td>00:02</td>\n",
" </tr>\n",
" <tr>\n",
" <td>4</td>\n",
" <td>0.349441</td>\n",
" <td>0.350070</td>\n",
" <td>0.840756</td>\n",
" <td>00:02</td>\n",
" </tr>\n",
" <tr>\n",
" <td>5</td>\n",
" <td>0.347263</td>\n",
" <td>0.358235</td>\n",
" <td>0.841370</td>\n",
" <td>00:02</td>\n",
" </tr>\n",
" <tr>\n",
" <td>6</td>\n",
" <td>0.346777</td>\n",
" <td>0.352908</td>\n",
" <td>0.838606</td>\n",
" <td>00:02</td>\n",
" </tr>\n",
" <tr>\n",
" <td>7</td>\n",
" <td>0.352095</td>\n",
" <td>0.352776</td>\n",
" <td>0.839681</td>\n",
" <td>00:02</td>\n",
" </tr>\n",
" <tr>\n",
" <td>8</td>\n",
" <td>0.347428</td>\n",
" <td>0.348187</td>\n",
" <td>0.840909</td>\n",
" <td>00:02</td>\n",
" </tr>\n",
" <tr>\n",
" <td>9</td>\n",
" <td>0.346684</td>\n",
" <td>0.352819</td>\n",
" <td>0.835074</td>\n",
" <td>00:02</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {
"tags": []
}
},
{
"output_type": "stream",
"text": [
"CPU times: user 22.2 s, sys: 263 ms, total: 22.4 s\n",
"Wall time: 22.9 s\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9MJPdcQyDYjW",
"colab_type": "text"
},
"source": [
"After fitting, we got about 22.9 seconds in total and ~2.29 seconds per epoch! Now that we have our baselines, let's try to speed that up!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "g8Sm0HsRDfpO",
"colab_type": "text"
},
"source": [
"## Bringing in `NumPy`\n",
"\n",
"### The `Dataset`\n",
"\n",
"In speeding everything up, I wanted to keep `TabularPandas` as it is, since it's a great way to pre-process your data! So instead we'll create a new `Dataset` class in which we convert our `TabularPandas` object into `NumPy` arrays. Why is that important? `NumPy` is a super-fast library that has been hyper-optimized by pushing as much work as possible into C code, which is *leagues* faster than Python. Let's build our `Dataset`!\n",
"\n",
"We'll want it to keep the `cats`, `conts`, and `ys` from our `TabularPandas` object separate. We can call `to_numpy()` on each of them because each is simply stored as a `DataFrame`! Finally, to deal with categorical versus continuous variables, we'll cast our `cats` to `np.long` and our `conts` to `np.float32` (we also have our `ys` as `np.int8`, but this is because we're doing classification):"
]
},
{
"cell_type": "code",
"metadata": {
"id": "k1T6f4hPDRb4",
"colab_type": "code",
"colab": {}
},
"source": [
"class TabDataset():\n",
" \"A `NumPy` dataset from a `TabularPandas` object\"\n",
" def __init__(self, to):\n",
" self.cats = to.cats.to_numpy().astype(np.long)\n",
" self.conts = to.conts.to_numpy().astype(np.float32)\n",
" self.ys = to.ys.to_numpy()"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "_uKAg2toEhYa",
"colab_type": "text"
},
"source": [
"Great! Now we need a few more bits for everything to work! For our `Dataset` to function, we need to be able to gather values from it each time we call into it. We use the `__getitem__` function to do so! For our particular problem, we need it to return some `cats`, `conts`, and our `ys`. And to save more time, we'll return a whole *batch* of values at once:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "kv18t3VHFLE6",
"colab_type": "code",
"colab": {}
},
"source": [
"class TabDataset():\n",
" \"A `NumPy` dataset from a `TabularPandas` object\"\n",
" def __init__(self, to):\n",
" self.cats = to.cats.to_numpy().astype(np.long)\n",
" self.conts = to.conts.to_numpy().astype(np.float32)\n",
" self.ys = to.ys.to_numpy()\n",
"\n",
" def __getitem__(self, idx):\n",
" idx = idx[0]\n",
" return self.cats[idx:idx+self.bs], self.conts[idx:idx+self.bs], self.ys[idx:idx+self.bs]"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "LVpdx-0XFWZM",
"colab_type": "text"
},
"source": [
"You'll notice we don't explicitly pass in a batch size, so where is that coming from? This is added when we build our `DataLoader`, as we'll see later. Let's finish up our `Dataset` class by adding in an option to get the length of the dataset (we'll do the length of our categorical table in this case), and add a `.c` property to get our number of classes:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "wT0o6jUyFVtX",
"colab_type": "code",
"colab": {}
},
"source": [
"class TabDataset():\n",
" \"A `NumPy` dataset from a `TabularPandas` object\"\n",
" def __init__(self, to):\n",
" self.cats = to.cats.to_numpy().astype(np.long)\n",
" self.conts = to.conts.to_numpy().astype(np.float32)\n",
" self.ys = to.ys.to_numpy()\n",
"\n",
" def __getitem__(self, idx):\n",
" idx = idx[0]\n",
" return self.cats[idx:idx+self.bs], self.conts[idx:idx+self.bs], self.ys[idx:idx+self.bs]\n",
"\n",
" def __len__(self): return len(self.cats)\n",
"\n",
" @property\n",
" def c(self): return 0 if self.ys is None else 1 if np.issubdtype(self.ys.dtype, np.floating) else len(np.unique(self.ys))"
],
"execution_count": 0,
"outputs": []
},
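{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is a minimal standalone sketch (toy arrays, not our `ADULT_SAMPLE` tables) of the batch-slicing idea behind `__getitem__`: one contiguous slice per table returns a whole aligned batch at once.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Toy stand-ins for the encoded tables a TabularPandas object would hold\n",
"cats = np.arange(20).reshape(10, 2).astype(np.int64)   # categorical codes\n",
"conts = np.random.randn(10, 3).astype(np.float32)      # normalized continuous\n",
"ys = np.random.randint(0, 2, (10, 1)).astype(np.int8)  # binary target\n",
"\n",
"bs, idx = 4, 4\n",
"# Slice all three tables with the same range to get an aligned batch\n",
"batch = (cats[idx:idx+bs], conts[idx:idx+bs], ys[idx:idx+bs])\n",
"[b.shape for b in batch]  # three arrays of bs rows each\n",
"```"
]
},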
{
"cell_type": "markdown",
"metadata": {
"id": "ZQ3xVnKBGTp1",
"colab_type": "text"
},
"source": [
"And now we can make some `Datasets`!"
]
},
{
"cell_type": "code",
"metadata": {
"id": "N1veKd1JFlYy",
"colab_type": "code",
"colab": {}
},
"source": [
"train_ds = TabDataset(to.train)\n",
"valid_ds = TabDataset(to.valid)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "LlhavChXGc7d",
"colab_type": "text"
},
"source": [
"We can look at some data real quick if we want to as well! First we need to assign a batch size:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "rIGqPYjlGcgP",
"colab_type": "code",
"colab": {}
},
"source": [
"train_ds.bs = 3"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "XPJ-uJIbGhnA",
"colab_type": "text"
},
"source": [
"And now let's look at some data:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Uy67iOerGhPp",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 170
},
"outputId": "241e1b69-8a30-4a9d-e704-920969f6a984"
},
"source": [
"train_ds[[3]]"
],
"execution_count": 52,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"(array([[ 5, 10, 5, 5, 2, 5, 1],\n",
" [ 2, 16, 3, 5, 1, 3, 1],\n",
" [ 5, 16, 3, 5, 1, 5, 1]]),\n",
" array([[-0.9979143 , 0.07715245, 1.1427965 ],\n",
" [ 0.8376807 , 1.4486277 , -0.02766372],\n",
" [ 1.4984949 , -1.4280752 , -0.02766372]], dtype=float32),\n",
" array([[0],\n",
" [0],\n",
" [1]], dtype=int8))"
]
},
"metadata": {
"tags": []
},
"execution_count": 52
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HF1_Ccm_Od28",
"colab_type": "text"
},
"source": [
"We can see that we output what could be considered a batch of data; the only thing missing is converting it to tensors. Now let's build the `DataLoader`: there are some pieces in it that we need, so simply having this `Dataset` won't be enough."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CLKAFtjiOoOE",
"colab_type": "text"
},
"source": [
"### The `DataLoader`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LjkS_B_hOrP7",
"colab_type": "text"
},
"source": [
"Now to build our `DataLoader`, we're going to want to modify 4 particular functions:\n",
"\n",
"1. `create_item`\n",
"2. `create_batch`\n",
"3. `get_idxs`\n",
"4. `shuffle_ds`\n",
"\n",
"Each of these play a particular role. First let's look at our template:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "iO-nhTjPGkEB",
"colab_type": "code",
"colab": {}
},
"source": [
"class TabDataLoader(DataLoader):\n",
" def __init__(self, dataset, bs=1, num_workers=0, device='cuda', shuffle=False, **kwargs):\n",
" \"A `DataLoader` based on a `TabDataset`\"\n",
" super().__init__(dataset, bs=bs, num_workers=num_workers, shuffle=shuffle, \n",
" device=device, drop_last=shuffle, **kwargs)\n",
" self.dataset.bs=bs"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "9wKHLt6BPYHZ",
"colab_type": "text"
},
"source": [
"As you can see, our `__init__` will build a `DataLoader`, and we keep track of our `Dataset` and set the `Dataset`'s batch size here as well:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "ajlViei9PdjL",
"colab_type": "code",
"colab": {}
},
"source": [
"dl = TabDataLoader(train_ds, bs=3)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "Pop-MwIzPrWu",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "9082c179-dac3-49b0-84dc-b85bc4c42139"
},
"source": [
"dl.dataset.bs"
],
"execution_count": 60,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"3"
]
},
"metadata": {
"tags": []
},
"execution_count": 60
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "on42qepqPsQR",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 170
},
"outputId": "a43bf870-000a-4093-d540-47003cb1702f"
},
"source": [
"dl.dataset[[0]]"
],
"execution_count": 61,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"(array([[ 8, 13, 5, 11, 2, 2, 1],\n",
" [ 8, 12, 3, 12, 1, 5, 1],\n",
" [ 5, 10, 1, 5, 2, 5, 1]]),\n",
" array([[-1.071338 , -0.39674038, 1.5329499 ],\n",
" [ 0.03001888, 0.46965045, -0.41781712],\n",
" [-0.5573715 , 0.39267784, 1.1427965 ]], dtype=float32),\n",
" array([[0],\n",
" [0],\n",
" [0]], dtype=int8))"
]
},
"metadata": {
"tags": []
},
"execution_count": 61
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BgXlBpHIPwzx",
"colab_type": "text"
},
"source": [
"And we can see that we grab everything as normal in the `Dataset`! Great! Now let's work on `create_item` and `create_batch`. `create_item` is very simple: the work is already done when we index into the dataset, so we just pass the item through. `create_batch` is also simple: we take some indices from our `Dataset` and convert the resulting arrays to `Tensor`s!"
]
},
{
"cell_type": "code",
"metadata": {
"id": "m3OcxbOKPtna",
"colab_type": "code",
"colab": {}
},
"source": [
"class TabDataLoader(DataLoader):\n",
" def __init__(self, dataset, bs=1, num_workers=0, device='cuda', shuffle=False, **kwargs):\n",
" \"A `DataLoader` based on a `TabDataset`\"\n",
" super().__init__(dataset, bs=bs, num_workers=num_workers, shuffle=shuffle, \n",
" device=device, drop_last=shuffle, **kwargs)\n",
" self.dataset.bs=bs\n",
" \n",
" def create_item(self, s): return s\n",
"\n",
" def create_batch(self, b):\n",
" cat, cont, y = self.dataset[b]\n",
" return tensor(cat).to(self.device), tensor(cont).to(self.device), tensor(y).to(self.device)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "5BoMOIp8QSu1",
"colab_type": "text"
},
"source": [
"Now we're almost done. The last two missing pieces are `get_idxs` and `shuffle_fn`. These are needed because after each epoch we shuffle the dataset, and our `DataLoader` needs a list of indices to use! To save time, we shuffle the interior dataset itself rather than the indices: that way every batch can be read with slicing (consecutive indices), which is much faster than fancy indexing (non-consecutive indices). Let's look at what that looks like:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "rc41nTlVQkBD",
"colab_type": "code",
"colab": {}
},
"source": [
"class TabDataLoader(DataLoader):\n",
" def __init__(self, dataset, bs=1, num_workers=0, device='cuda', shuffle=False, **kwargs):\n",
" \"A `DataLoader` based on a `TabDataset`\"\n",
" super().__init__(dataset, bs=bs, num_workers=num_workers, shuffle=shuffle, \n",
" device=device, drop_last=shuffle, **kwargs)\n",
" self.dataset.bs=bs\n",
" \n",
" def create_item(self, s): return s\n",
"\n",
" def create_batch(self, b):\n",
" \"Create a batch of data\"\n",
" cat, cont, y = self.dataset[b]\n",
" return tensor(cat).to(self.device), tensor(cont).to(self.device), tensor(y).to(self.device)\n",
"\n",
" def get_idxs(self):\n",
" \"Get indices to select\"\n",
" idxs = Inf.count if self.indexed else Inf.nones\n",
" if self.n is not None: idxs = list(range(len(self.dataset)))\n",
" return idxs\n",
"\n",
" def shuffle_fn(self):\n",
" \"Shuffle the interior dataset\"\n",
" rng = np.random.permutation(len(self.dataset))\n",
" self.dataset.cats = self.dataset.cats[rng]\n",
" self.dataset.conts = self.dataset.conts[rng]\n",
" self.dataset.ys = self.dataset.ys[rng]"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "pJ-qTUCQRHh3",
"colab_type": "text"
},
"source": [
"And now we have all the pieces we need to build a `DataLoader` with `NumPy`! We'll examine its speed now, and then we'll build some convenience functions later. First let's build the datasets:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "ujluXlS_QtTH",
"colab_type": "code",
"colab": {}
},
"source": [
"train_ds = TabDataset(to.train)\n",
"valid_ds = TabDataset(to.valid)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "YlJAO0LNRi-s",
"colab_type": "text"
},
"source": [
"And then the `DataLoader`:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "-Wib4YxlRfBH",
"colab_type": "code",
"colab": {}
},
"source": [
"train_dl = TabDataLoader(train_ds, device='cpu', shuffle=True, bs=128)\n",
"valid_dl = TabDataLoader(valid_ds, device='cpu', bs=128)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "BPjOFfK6Rt-Y",
"colab_type": "text"
},
"source": [
"And now let's grab some CPU timings similar to what we did before:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "kChbZaXQRtKp",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "c5f34f17-772a-4310-8a12-01c7867ff247"
},
"source": [
"%%timeit\n",
"_ = next(iter(train_dl))"
],
"execution_count": 68,
"outputs": [
{
"output_type": "stream",
"text": [
"1000 loops, best of 3: 669 µs per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "Nc8AWYJCR0gZ",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "48f3aded-9259-469d-e1c7-c73890081b84"
},
"source": [
"%%timeit\n",
"_ = next(iter(valid_dl))"
],
"execution_count": 71,
"outputs": [
{
"output_type": "stream",
"text": [
"1000 loops, best of 3: 300 µs per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "54pd3aV5R_wX",
"colab_type": "text"
},
"source": [
"**Right** away we can see that we are *leagues* faster than the previous version. Shuffling only added ~370 *microseconds*, and grabbing a training batch now takes roughly 4% of the time it did before! Now let's iterate over the entire `DataLoader`:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "hRUSnrgDR77D",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "40617551-b5ba-4c57-880f-3bde21630b61"
},
"source": [
"%%timeit\n",
"for _ in train_dl:\n",
" _"
],
"execution_count": 72,
"outputs": [
{
"output_type": "stream",
"text": [
"10 loops, best of 3: 31.8 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "fNlIvl7HTGOU",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "6bec711a-e4a9-42ac-dcc9-ac4318992074"
},
"source": [
"print(31.8/len(train_dl))"
],
"execution_count": 76,
"outputs": [
{
"output_type": "stream",
"text": [
"0.1566502463054187\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "yrtRT2ivTB4I",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "f8933984-acbd-4ea9-ec86-cb86bdfc09a8"
},
"source": [
"%%timeit\n",
"for _ in valid_dl:\n",
" _"
],
"execution_count": 73,
"outputs": [
{
"output_type": "stream",
"text": [
"100 loops, best of 3: 8.07 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "hTzLV2-nTDK4",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "415e186f-feab-4095-f87e-0457a58ace29"
},
"source": [
"print(8.07/len(valid_dl))"
],
"execution_count": 77,
"outputs": [
{
"output_type": "stream",
"text": [
"0.15823529411764706\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8AqsPvqfTZCP",
"colab_type": "text"
},
"source": [
"And as we can see, each individual batch of data takes about 0.158 milliseconds! Yet again, only around 5% of the previous time, quite a decrease! So we have **successfully** decreased the time! Let's look at the GPU now:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "0oOF0xz7TYSO",
"colab_type": "code",
"colab": {}
},
"source": [
"train_dl = TabDataLoader(train_ds, device='cuda', shuffle=True, bs=128)\n",
"valid_dl = TabDataLoader(valid_ds, device='cuda', bs=128)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "-JyWMSY2T3RO",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "82580688-e581-4eed-8fe3-c65106fed84a"
},
"source": [
"%%timeit\n",
"_ = next(iter(train_dl))"
],
"execution_count": 80,
"outputs": [
{
"output_type": "stream",
"text": [
"1000 loops, best of 3: 835 µs per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "U4B7eYhYT4tU",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "dcb3d9df-e47d-4d30-96b1-82262a288258"
},
"source": [
"%%timeit\n",
"_ = next(iter(valid_dl))"
],
"execution_count": 81,
"outputs": [
{
"output_type": "stream",
"text": [
"1000 loops, best of 3: 451 µs per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "qseyxqzXT7Va",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "0e832e55-4d38-4227-bd5b-444fbb727155"
},
"source": [
"%%timeit\n",
"for _ in train_dl:\n",
" _"
],
"execution_count": 82,
"outputs": [
{
"output_type": "stream",
"text": [
"10 loops, best of 3: 51.5 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "no4CDiCvWa2n",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "f2e49484-e0b4-4552-db94-10176d95f62a"
},
"source": [
"print(51.5/len(train_dl))"
],
"execution_count": 89,
"outputs": [
{
"output_type": "stream",
"text": [
"0.2536945812807882\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "wmJ0wxQoT-Hx",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "e2e4c546-fe07-4623-fd34-aa711d2916f2"
},
"source": [
"%%timeit\n",
"for _ in valid_dl:\n",
" _"
],
"execution_count": 83,
"outputs": [
{
"output_type": "stream",
"text": [
"100 loops, best of 3: 12.8 ms per loop\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "L81_nS52Wfab",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "f6ec5f59-81d9-4f74-8e58-7dfa03b34503"
},
"source": [
"print(12.8/len(valid_dl))"
],
"execution_count": 91,
"outputs": [
{
"output_type": "stream",
"text": [
"0.25098039215686274\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "G7cdoo99T_sS",
"colab_type": "text"
},
"source": [
"As we can see, converting the tensors over to `cuda` adds a little bit of time. You could save a *little* more by converting first, but as this should be separate from the dataset I decided to just keep it here. Now that we have all the steps, we can finally take a look at training! First let's build a quick helper function to make `DataLoaders` similar to what `fastai`'s `tabular_learner` expects:"
]
},
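{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch of that trade-off (the shapes here are made up for illustration), converting the whole dataset to tensors once up front would leave only the device transfer per batch:\n",
"\n",
"```python\n",
"import numpy as np\n",
"import torch\n",
"\n",
"device = 'cuda' if torch.cuda.is_available() else 'cpu'\n",
"cats = np.random.randint(0, 10, size=(128, 3))\n",
"\n",
"# Per-batch conversion, as `create_batch` does above\n",
"batch = torch.tensor(cats).to(device)\n",
"\n",
"# Alternative: convert once up front, pay only the transfer per batch\n",
"all_cats = torch.from_numpy(cats)\n",
"batch2 = all_cats.to(device)\n",
"\n",
"assert tuple(batch.shape) == (128, 3)\n",
"```"
]
},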
{
"cell_type": "code",
"metadata": {
"id": "reTE36oDT_Ug",
"colab_type": "code",
"colab": {}
},
"source": [
"class TabDataLoaders(DataLoaders):\n",
" def __init__(self, to, bs=64, val_bs=None, shuffle_train=True, device='cpu', **kwargs):\n",
" train_ds = TabDataset(to.train)\n",
" valid_ds = TabDataset(to.valid)\n",
" val_bs = bs if val_bs is None else val_bs\n",
" train = TabDataLoader(train_ds, bs=bs, shuffle=shuffle_train, device=device, **kwargs)\n",
" valid = TabDataLoader(valid_ds, bs=val_bs, shuffle=False, device=device, **kwargs)\n",
" super().__init__(train, valid, device=device, **kwargs)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "ZRHpTWD2UZe-",
"colab_type": "code",
"colab": {}
},
"source": [
"dls = TabDataLoaders(to, bs=128, device='cuda')"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "qx-VmjosUfG7",
"colab_type": "text"
},
"source": [
"And now we can build our model and train! We need to build our own `TabularModel` here, so we'll need to grab the size of our embeddings and build a `Learner`. For simplicity we'll still use `TabularPandas` to get those sizes:"
]
},
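{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, `get_emb_sz` picks each embedding's width from the variable's cardinality. Here's a minimal sketch of the rule of thumb `fastai2` uses (the exact constants are from memory, so treat them as an assumption):\n",
"\n",
"```python\n",
"def emb_sz_rule(n_cat):\n",
"    \"Embedding width for a categorical variable with `n_cat` levels\"\n",
"    return min(600, round(1.6 * n_cat ** 0.56))\n",
"\n",
"# Width grows slowly with cardinality and is capped at 600\n",
"sizes = [emb_sz_rule(n) for n in (10, 1000, 10**6)]\n",
"```"
]
},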
{
"cell_type": "code",
"metadata": {
"id": "ybENNFVlUw3U",
"colab_type": "code",
"colab": {}
},
"source": [
"emb_szs = get_emb_sz(to)\n",
"net = TabularModel(emb_szs, 3, 2, layers=[200,100]).cuda()\n",
"learn = Learner(dls, net, metrics=accuracy, loss_func=CrossEntropyLossFlat())"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "JyS1vKBHU2HS",
"colab_type": "text"
},
"source": [
"And now let's train!"
]
},
{
"cell_type": "code",
"metadata": {
"id": "f9J2GMVSUeK2",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 393
},
"outputId": "75e212da-f4c7-4b57-dc3f-1933d45d2947"
},
"source": [
"%%time\n",
"learn.fit(10, 1e-2)"
],
"execution_count": 88,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: left;\">\n",
" <th>epoch</th>\n",
" <th>train_loss</th>\n",
" <th>valid_loss</th>\n",
" <th>accuracy</th>\n",
" <th>time</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <td>0</td>\n",
" <td>0.369785</td>\n",
" <td>0.358381</td>\n",
" <td>0.837531</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>1</td>\n",
" <td>0.359938</td>\n",
" <td>0.354405</td>\n",
" <td>0.840602</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>2</td>\n",
" <td>0.353965</td>\n",
" <td>0.354380</td>\n",
" <td>0.837838</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>3</td>\n",
" <td>0.350551</td>\n",
" <td>0.355998</td>\n",
" <td>0.837684</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>4</td>\n",
" <td>0.349042</td>\n",
" <td>0.357085</td>\n",
" <td>0.838606</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>5</td>\n",
" <td>0.347858</td>\n",
" <td>0.354116</td>\n",
" <td>0.839988</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>6</td>\n",
" <td>0.344613</td>\n",
" <td>0.352649</td>\n",
" <td>0.840448</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>7</td>\n",
" <td>0.343187</td>\n",
" <td>0.351604</td>\n",
" <td>0.840909</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>8</td>\n",
" <td>0.342587</td>\n",
" <td>0.353344</td>\n",
" <td>0.841523</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>9</td>\n",
" <td>0.342127</td>\n",
" <td>0.355749</td>\n",
" <td>0.841216</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {
"tags": []
}
},
{
"output_type": "stream",
"text": [
"CPU times: user 13.4 s, sys: 203 ms, total: 13.6 s\n",
"Wall time: 13.8 s\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mzT5QmpFU-iB",
"colab_type": "text"
},
"source": [
"As you can see, we cut the training time down by ~40% (13.8 seconds versus the original 22.9 seconds), a *tremendous* speed up! Let's quickly revisit all of the times and results in a table."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oFO4W-svVHnC",
"colab_type": "text"
},
"source": [
"## Results\n",
"\n",
"| | CPU? | First Batch | Per Batch | Per Epoch | Ten Epochs |\n",
"|:-------:|:----:|:-------------------------------:|:-----------------------------:|-----------|------------|\n",
"| fastai2 | Yes | 18.3ms (train) 3.37ms (valid) | 3.25ms (train) 3.11ms (valid) | | |\n",
"| | No | 18.8ms (train) 3.49ms (valid) | 3.41ms (train) 3.19ms (valid) | 2.29s | 22.9s |\n",
"| NumPy   | Yes  | 0.669ms (train) 0.3ms (valid)   | 0.15ms (train) 0.15ms (valid) |           |            |\n",
"| | No | 0.835ms (train) 0.451ms (valid) | 0.25ms (train) 0.25ms (valid) | 1.38s | 13.8s |"
]
},
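{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on the headline number, comparing the ten-epoch wall-clock times from the table:\n",
"\n",
"```python\n",
"fastai_total, numpy_total = 22.9, 13.8  # seconds for ten epochs\n",
"speedup = (fastai_total - numpy_total) / fastai_total\n",
"print(f\"{speedup:.0%}\")  # roughly 40%\n",
"```"
]
},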
{
"cell_type": "markdown",
"metadata": {
"id": "Caz_mUliW-ub",
"colab_type": "text"
},
"source": [
"So in summary, we first sped up the time to grab a single batch of data by converting everything from `Pandas` to `NumPy`. Afterwards we made a custom `DataLoader` that could handle these `NumPy` arrays and produce the speedup we saw! I hope this article helps you better understand how the interior `DataLoader` can be integrated with `NumPy`, and that it helps you speed up your tabular training!\n",
"\n",
"* Small note: `show_batch()` etc. will *not* work with this particular code base; this is simply a proof of concept"
]
}
]
}