Revisions

  1. ELC revised this gist Oct 4, 2018. 1 changed file with 79 additions and 19 deletions.
    98 changes: 79 additions & 19 deletions Deep Learning with Free GPU (FastAI + Google Colab).ipynb
    @@ -23,7 +23,7 @@
    "colab_type": "text"
    },
    "source": [
    "[View in Colaboratory](https://colab.research.google.com/gist/ELC/756040fe84a8bb3d14c59b0e997c84e9/google_colab_for_fastai_general_template4.ipynb)"
    "[View in Colaboratory](https://colab.research.google.com/gist/ELC/35db433bec8401e886e227d50aa448e3/google_colab_for_fastai_general_template4.ipynb)"
    ]
    },
    {
    @@ -61,7 +61,7 @@
    "base_uri": "https://localhost:8080/",
    "height": 84
    },
    "outputId": "821163e4-4a43-4c56-897b-cf4b1aef9648"
    "outputId": "ae0b6d52-8aa0-4a9d-baf5-78614a7aeb1f"
    },
    "cell_type": "code",
    "source": [
    @@ -75,23 +75,48 @@
    {
    "output_type": "stream",
    "text": [
    "Python 3.6.3\n",
    "Python 3.6.3\n",
    "Python 3.6.6\n",
    "Python 3.6.6\n",
    "pip 18.0 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)\n",
    "pip 18.0 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)\n"
    ],
    "name": "stdout"
    }
    ]
    },
    + {
    + "metadata": {
    + "id": "HJoT6vSgGdAe",
    + "colab_type": "text"
    + },
    + "cell_type": "markdown",
    + "source": [
    + "**Installing fastai (1.x) from PyPI and installing PyTorch 1.x with CUDA 9.2** \n"
    + ]
    + },
    + {
    + "metadata": {
    + "id": "av1b-3YWBbT2",
    + "colab_type": "code",
    + "colab": {}
    + },
    + "cell_type": "code",
    + "source": [
    + "!pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html\n",
    + "!pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ torchvision==0.2.1.post1\n",
    + "!pip install fastai"
    + ],
    + "execution_count": 0,
    + "outputs": []
    + },
    {
    "metadata": {
    "id": "TBT_tbpj-7hZ",
    "colab_type": "text"
    },
    "cell_type": "markdown",
    "source": [
    "**Installing fastai from source and installing PyTorch 3.1 with CUDA 9.1**\n",
    "**Installing LEGACY fastai (0.7) from source and installing PyTorch 0.3.1 with CUDA 9.1** \n",
    "\n",
    "Installing from pypi is not recommended as mentioned in [fastai-github-readme](https://github.com/fastai/fastai) (due to it's rapid changes and lack of tests) and you don't want to use conda on Google Colab. So here are few steps to install the library from source."
    ]
    @@ -104,7 +129,7 @@
    "base_uri": "https://localhost:8080/",
    "height": 84
    },
    "outputId": "c88b7f55-02d9-4a7b-a135-e00a0d817984"
    "outputId": "7a406fa5-05ba-45b9-cba3-13ccdc9bf203"
    },
    "cell_type": "code",
    "source": [
    @@ -119,19 +144,21 @@
    "\n",
    "git pull\n",
    "\n",
    "pip -q install . && echo Successfully Installed Fastai\n",
    "cd old\n",
    "\n",
    "pip -q install . && echo Successfully Installed Fastai 0.7\n",
    "\n",
    "pip -q install http://download.pytorch.org/whl/cu91/torch-0.3.1-cp36-cp36m-linux_x86_64.whl && echo Successfully Installed PyTorch\n",
    "\n",
    "pip -q install torchvision && echo Successfully Installed TorchVision"
    ],
    "execution_count": 2,
    "execution_count": 9,
    "outputs": [
    {
    "output_type": "stream",
    "text": [
    "Already up-to-date.\n",
    "Successfully Installed Fastai\n",
    "Already up to date.\n",
    "Successfully Installed Fastai 0.7\n",
    "Successfully Installed PyTorch\n",
    "Successfully Installed TorchVision\n"
    ],
    @@ -149,6 +176,39 @@
    "**Import all the libraries**"
    ]
    },
    + {
    + "metadata": {
    + "id": "XB3543WIHN0h",
    + "colab_type": "text"
    + },
    + "cell_type": "markdown",
    + "source": [
    + "Imports for FastAI 1.x"
    + ]
    + },
    + {
    + "metadata": {
    + "id": "x2kfLCuPHM4b",
    + "colab_type": "code",
    + "colab": {}
    + },
    + "cell_type": "code",
    + "source": [
    + "from fastai.imports import *"
    + ],
    + "execution_count": 0,
    + "outputs": []
    + },
    + {
    + "metadata": {
    + "id": "ja8LBm3DZ6vZ",
    + "colab_type": "text"
    + },
    + "cell_type": "markdown",
    + "source": [
    + "Imports for FastAI Legacy"
    + ]
    + },
    {
    "metadata": {
    "id": "akD5dZfY1Fx8",
    @@ -189,25 +249,25 @@
    "base_uri": "https://localhost:8080/",
    "height": 34
    },
    "outputId": "7f646127-cfd4-4472-ceb9-75bc4eb259fe"
    "outputId": "f207fa8c-4fa9-4f99-de97-af00e6a02a6e"
    },
    "cell_type": "code",
    "source": [
    "f'Is CUDA and CUDNN enabled: {torch.cuda.is_available() and torch.backends.cudnn.enabled}'"
    "f'Is CUDA and CUDNN enabled: {torch.cuda.is_available()} and {torch.backends.cudnn.enabled}'"
    ],
    "execution_count": 4,
    "execution_count": 5,
    "outputs": [
    {
    "output_type": "execute_result",
    "data": {
    "text/plain": [
    "'Is CUDA and CUDNN enabled: True'"
    "'Is CUDA and CUDNN enabled: True and True'"
    ]
    },
    "metadata": {
    "tags": []
    },
    "execution_count": 4
    "execution_count": 5
    }
    ]
    },
    @@ -232,7 +292,7 @@
    "base_uri": "https://localhost:8080/",
    "height": 67
    },
    "outputId": "6d8d7a0b-157e-4ce2-9308-14a87b81a611"
    "outputId": "e8ac7284-4039-43b2-fd59-0d43ee129998"
    },
    "cell_type": "code",
    "source": [
    @@ -256,14 +316,14 @@
    "print(f\"Gen RAM Free: {humanize.naturalsize( psutil.virtual_memory().available )} | Proc size: {humanize.naturalsize( process.memory_info().rss)}\")\n",
    "print(\"GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB\".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))"
    ],
    "execution_count": 5,
    "execution_count": 18,
    "outputs": [
    {
    "output_type": "stream",
    "text": [
    "Number of GPUs: 1\n",
    "Gen RAM Free: 12.6 GB | Proc size: 237.4 MB\n",
    "GPU RAM Free: 11428MB | Used: 11MB | Util 0% | Total 11439MB\n"
    "Gen RAM Free: 12.8 GB | Proc size: 260.7 MB\n",
    "GPU RAM Free: 11430MB | Used: 11MB | Util 0% | Total 11441MB\n"
    ],
    "name": "stdout"
    }
  2. ELC renamed this gist Sep 29, 2018. 1 changed file with 0 additions and 0 deletions.
  3. ELC revised this gist Sep 29, 2018. No changes.
  4. ELC revised this gist Sep 29, 2018. 1 changed file with 12 additions and 1 deletion.
    13 changes: 12 additions & 1 deletion google_colab_for_fastai_general_template4.ipynb
    @@ -6,7 +6,8 @@
    "name": "Google_Colab_for_Fastai_General_Template4.ipynb",
    "version": "0.3.2",
    "provenance": [],
    "collapsed_sections": []
    "collapsed_sections": [],
    "include_colab_link": true
    },
    "kernelspec": {
    "name": "python3",
    @@ -15,6 +16,16 @@
    "accelerator": "GPU"
    },
    "cells": [
    + {
    + "cell_type": "markdown",
    + "metadata": {
    + "id": "view-in-github",
    + "colab_type": "text"
    + },
    + "source": [
    + "[View in Colaboratory](https://colab.research.google.com/gist/ELC/756040fe84a8bb3d14c59b0e997c84e9/google_colab_for_fastai_general_template4.ipynb)"
    + ]
    + },
    {
    "metadata": {
    "id": "PjOMeCoHHlzQ",
  5. ELC created this gist Sep 29, 2018.
    272 changes: 272 additions & 0 deletions google_colab_for_fastai_general_template4.ipynb
    @@ -0,0 +1,272 @@
    {
    "nbformat": 4,
    "nbformat_minor": 0,
    "metadata": {
    "colab": {
    "name": "Google_Colab_for_Fastai_General_Template4.ipynb",
    "version": "0.3.2",
    "provenance": [],
    "collapsed_sections": []
    },
    "kernelspec": {
    "name": "python3",
    "display_name": "Python 3"
    },
    "accelerator": "GPU"
    },
    "cells": [
    {
    "metadata": {
    "id": "PjOMeCoHHlzQ",
    "colab_type": "text"
    },
    "cell_type": "markdown",
    "source": [
    "# Google Colab for Fast.ai Course Template\n",
    "\n",
    "Remember to enable the GPU! ***Edit > Notebook settings > set \"Hardware Accelerator\" to GPU.***\n",
    "\n",
    "Check [the source]() of this template for updates\n"
    ]
    },
    {
    "metadata": {
    "id": "ArPdbxB-vl9Y",
    "colab_type": "text"
    },
    "cell_type": "markdown",
    "source": [
    "## Installing dependencies ##\n",
    "We need to manually install fastai and pytorch. And maybe other things that fastai depends on (see [here](https://github.com/fastai/fastai/blob/master/requirements.txt)).\n",
    "\n",
    "I will be referring to [this fastai forum thread](http://forums.fast.ai/t/colaboratory-and-fastai/10122/6) and [this blogpost](https://towardsdatascience.com/fast-ai-lesson-1-on-google-colab-free-gpu-d2af89f53604) if I get stuck. This is also a handy resource for using pytorch in colab: https://jovianlin.io/pytorch-with-gpu-in-google-colab/ (and his [example notebook](https://colab.research.google.com/drive/1jxUPzMsAkBboHMQtGyfv5M5c7hU8Ss2c#scrollTo=ed-8FUn2GqQ4)!). And this [post](https://medium.com/@chsafouane/getting-started-with-pytorch-on-google-colab-811c59a656b6). **Be careful with python and python3 being the same in this notebook, also there is no difference between pip and pip3**"
    ]
    },
    {
    "metadata": {
    "id": "SY72s-PAwUio",
    "colab_type": "code",
    "colab": {
    "base_uri": "https://localhost:8080/",
    "height": 84
    },
    "outputId": "821163e4-4a43-4c56-897b-cf4b1aef9648"
    },
    "cell_type": "code",
    "source": [
    "!python3 -V\n",
    "!python -V\n",
    "!pip -V\n",
    "!pip3 -V"
    ],
    "execution_count": 1,
    "outputs": [
    {
    "output_type": "stream",
    "text": [
    "Python 3.6.3\n",
    "Python 3.6.3\n",
    "pip 18.0 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)\n",
    "pip 18.0 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)\n"
    ],
    "name": "stdout"
    }
    ]
    },
    {
    "metadata": {
    "id": "TBT_tbpj-7hZ",
    "colab_type": "text"
    },
    "cell_type": "markdown",
    "source": [
    "**Installing fastai from source and installing PyTorch 3.1 with CUDA 9.1**\n",
    "\n",
    "Installing from pypi is not recommended as mentioned in [fastai-github-readme](https://github.com/fastai/fastai) (due to it's rapid changes and lack of tests) and you don't want to use conda on Google Colab. So here are few steps to install the library from source."
    ]
    },
    {
    "metadata": {
    "id": "qECKi529HtXm",
    "colab_type": "code",
    "colab": {
    "base_uri": "https://localhost:8080/",
    "height": 84
    },
    "outputId": "c88b7f55-02d9-4a7b-a135-e00a0d817984"
    },
    "cell_type": "code",
    "source": [
    "%%bash\n",
    "\n",
    "if ! [ -d fastai ]\n",
    "then\n",
    " git clone https://github.com/fastai/fastai.git\n",
    "fi\n",
    "\n",
    "cd fastai\n",
    "\n",
    "git pull\n",
    "\n",
    "pip -q install . && echo Successfully Installed Fastai\n",
    "\n",
    "pip -q install http://download.pytorch.org/whl/cu91/torch-0.3.1-cp36-cp36m-linux_x86_64.whl && echo Successfully Installed PyTorch\n",
    "\n",
    "pip -q install torchvision && echo Successfully Installed TorchVision"
    ],
    "execution_count": 2,
    "outputs": [
    {
    "output_type": "stream",
    "text": [
    "Already up-to-date.\n",
    "Successfully Installed Fastai\n",
    "Successfully Installed PyTorch\n",
    "Successfully Installed TorchVision\n"
    ],
    "name": "stdout"
    }
    ]
    },
    {
    "metadata": {
    "id": "sIIDTp5G1Hs2",
    "colab_type": "text"
    },
    "cell_type": "markdown",
    "source": [
    "**Import all the libraries**"
    ]
    },
    {
    "metadata": {
    "id": "akD5dZfY1Fx8",
    "colab_type": "code",
    "colab": {}
    },
    "cell_type": "code",
    "source": [
    "# This file contains all the main external libs we'll use\n",
    "from fastai.imports import *\n",
    "from fastai.transforms import *\n",
    "from fastai.conv_learner import *\n",
    "from fastai.model import *\n",
    "from fastai.dataset import *\n",
    "from fastai.sgdr import *\n",
    "from fastai.plots import *"
    ],
    "execution_count": 0,
    "outputs": []
    },
    {
    "metadata": {
    "id": "MgvJGuuJs_tL",
    "colab_type": "text"
    },
    "cell_type": "markdown",
    "source": [
    "## GPU Check ##\n",
    "\n",
    "Check whether the GPU is enabled"
    ]
    },
    {
    "metadata": {
    "id": "zt_ux_PqxL2N",
    "colab_type": "code",
    "colab": {
    "base_uri": "https://localhost:8080/",
    "height": 34
    },
    "outputId": "7f646127-cfd4-4472-ceb9-75bc4eb259fe"
    },
    "cell_type": "code",
    "source": [
    "f'Is CUDA and CUDNN enabled: {torch.cuda.is_available() and torch.backends.cudnn.enabled}'"
    ],
    "execution_count": 4,
    "outputs": [
    {
    "output_type": "execute_result",
    "data": {
    "text/plain": [
    "'Is CUDA and CUDNN enabled: True'"
    ]
    },
    "metadata": {
    "tags": []
    },
    "execution_count": 4
    }
    ]
    },
    {
    "metadata": {
    "id": "NrbLtmTPHyl0",
    "colab_type": "text"
    },
    "cell_type": "markdown",
    "source": [
    "**Check how much of the GPU is available**\n",
    "\n",
    "I'm using the following code from [a stackoverflow thread](https://stackoverflow.com/questions/48750199/google-colaboratory-misleading-information-about-its-gpu-only-5-ram-available\n",
    ") to check what % of the GPU is being utilized right now. 100% is bad; 0% is good (all free for me to use!)."
    ]
    },
    {
    "metadata": {
    "id": "tCHMN-qZs5NJ",
    "colab_type": "code",
    "colab": {
    "base_uri": "https://localhost:8080/",
    "height": 67
    },
    "outputId": "6d8d7a0b-157e-4ce2-9308-14a87b81a611"
    },
    "cell_type": "code",
    "source": [
    "# memory footprint support libraries/code\n",
    "\n",
    "!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi\n",
    "!pip -q install gputil\n",
    "!pip -q install psutil\n",
    "!pip -q install humanize\n",
    "\n",
    "import psutil\n",
    "import humanize\n",
    "import os\n",
    "import GPUtil as GPU\n",
    "\n",
    "GPUs = GPU.getGPUs()\n",
    "gpu = GPUs[0]\n",
    "process = psutil.Process(os.getpid())\n",
    "\n",
    "print(f\"Number of GPUs: {len(GPUs)}\")\n",
    "print(f\"Gen RAM Free: {humanize.naturalsize( psutil.virtual_memory().available )} | Proc size: {humanize.naturalsize( process.memory_info().rss)}\")\n",
    "print(\"GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB\".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))"
    ],
    "execution_count": 5,
    "outputs": [
    {
    "output_type": "stream",
    "text": [
    "Number of GPUs: 1\n",
    "Gen RAM Free: 12.6 GB | Proc size: 237.4 MB\n",
    "GPU RAM Free: 11428MB | Used: 11MB | Util 0% | Total 11439MB\n"
    ],
    "name": "stdout"
    }
    ]
    },
    {
    "metadata": {
    "id": "q0WZ3Smd3P6w",
    "colab_type": "text"
    },
    "cell_type": "markdown",
    "source": [
    "# Ready to Go!"
    ]
    }
    ]
    }
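
For anyone reusing this template, a minimal sanity-check sketch in the spirit of the cells above (assuming the fastai 1.x + PyTorch install path from the latest revision; exact versions will vary by runtime):

    # Sketch: confirm the installed stack and GPU visibility after the installs above.
    import torch
    import fastai

    print(f"fastai {fastai.__version__} | torch {torch.__version__}")
    print(f"CUDA available: {torch.cuda.is_available()}")
    print(f"cuDNN enabled: {torch.backends.cudnn.enabled}")

If CUDA shows False, re-check that the GPU is enabled under Edit > Notebook settings > Hardware Accelerator.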