Install_fastai_on_Google_colab_GPU.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "oct22.ipynb",
"version": "0.3.2",
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"[View in Colaboratory](https://colab.research.google.com/gist/vijaysaimutyala/e394036b1c065561a134cfe7c25a896a/oct22.ipynb)"
]
},
{
"metadata": {
"id": "xJuP_KS7VXOa",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"## Installing fastai in GPU enabled runtime\n",
"\n",
"Enable GPU runtime from the the runtime menu option-->Change runtime type\n",
" \n",
"\n",
"\n",
"1. Enable GPU runtime from the the runtime menu option-->Change runtime type\n",
"2. Under the Hardware Accelerator, choose GPU\n",
"3. Follow the below steps to install Pytorch GPU version followed by fastai"
]
},
{
"metadata": {
"id": "sL1gDkp3FrW8",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"!pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html\n"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "oeMIL1FwK0gS",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"!pip install fastai\n"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "eOh7l8ArLydL",
"colab_type": "code",
"cellView": "both",
"outputId": "18114a18-fa21-4a35-db4f-892e9fb7ecce",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1003
}
},
"cell_type": "code",
"source": [
"#@title\n",
"import fastai\n",
"fastai.show_install(1)"
],
"execution_count": 0,
"outputs": [
{
"output_type": "stream",
"text": [
"\n",
"\n",
"```text\n",
"=== Software === \n",
"python version : 3.6.6\n",
"fastai version : 1.0.11\n",
"torch version : 1.0.0.dev20181019\n",
"nvidia driver : 396.44\n",
"torch cuda ver : 9.2.148\n",
"torch cuda is : available\n",
"torch cudnn ver : 7104\n",
"torch cudnn is : enabled\n",
"\n",
"=== Hardware === \n",
"nvidia gpus : 1\n",
"torch available : 1\n",
" - gpu0 : 11441MB | Tesla K80\n",
"\n",
"=== Environment === \n",
"platform : Linux-4.14.65+-x86_64-with-Ubuntu-18.04-bionic\n",
"distro : #1 SMP Sun Sep 9 02:18:33 PDT 2018\n",
"conda env : Unknown\n",
"python : /usr/bin/python3\n",
"sys.path : \n",
"/env/python\n",
"/usr/lib/python36.zip\n",
"/usr/lib/python3.6\n",
"/usr/lib/python3.6/lib-dynload\n",
"/usr/local/lib/python3.6/dist-packages\n",
"/usr/lib/python3/dist-packages\n",
"/usr/local/lib/python3.6/dist-packages/IPython/extensions\n",
"/root/.ipython\n",
"\n",
"Sun Oct 21 06:21:51 2018 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 396.44 Driver Version: 396.44 |\n",
"|-------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |\n",
"| N/A 34C P8 31W / 149W | 11MiB / 11441MiB | 0% Default |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
"+-----------------------------------------------------------------------------+\n",
"| Processes: GPU Memory |\n",
"| GPU PID Type Process name Usage |\n",
"|=============================================================================|\n",
"| No running processes found |\n",
"+-----------------------------------------------------------------------------+\n",
"\n",
"```\n",
"\n",
"Please make sure to include opening/closing ``` when you paste into forums/github to make the reports appear formatted as code sections.\n",
"\n",
"Optional package(s) to enhance the diagnostics can be installed with:\n",
"pip install distro\n",
"Once installed, re-run this utility to get the additional information\n"
],
"name": "stdout"
}
]
},
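{
"metadata": {
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"As the report above notes, the optional `distro` package adds extra environment details to the diagnostics. A minimal sketch of installing it and re-running the report (assuming the same runtime):"
]
},
{
"metadata": {
"colab": {}
},
"cell_type": "code",
"source": [
"# Install the optional diagnostics dependency suggested by show_install, then re-run the report\n",
"!pip install distro\n",
"import fastai\n",
"fastai.show_install(1)"
],
"execution_count": 0,
"outputs": []
},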
{
"metadata": {
"id": "FgF7KvT9L214",
"colab_type": "code",
"outputId": "7b766b11-bd7c-4e0f-eb7e-3a741f6cb6fd",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
}
},
"cell_type": "code",
"source": [
"import torch\n",
"torch.cuda.is_available()"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"True"
]
},
"metadata": {
"tags": []
},
"execution_count": 5
}
]
},
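{
"metadata": {
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"As an extra check (a minimal sketch, assuming a single GPU at index 0, as `show_install` reports above), the cell below asks PyTorch for the device name and runs a small tensor operation on the GPU:"
]
},
{
"metadata": {
"colab": {}
},
"cell_type": "code",
"source": [
"import torch\n",
"\n",
"# Report the device PyTorch sees; index 0 is assumed to be the only GPU\n",
"print(torch.cuda.get_device_name(0))\n",
"\n",
"# Allocate a small tensor on the GPU and reduce it to confirm CUDA kernels run\n",
"x = torch.randn(3, 3, device=\"cuda\")\n",
"print(x.sum().item())"
],
"execution_count": 0,
"outputs": []
},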
{
"metadata": {
"id": "U5tq4yh7QE5J",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
""
],
"execution_count": 0,
"outputs": []
}
]
}