@tleyden
Created November 22, 2015 20:56
Instructions to install neural-style on an AWS GPU instance running under Docker

These instructions will walk you through getting neural-style up and running on an AWS GPU instance.

Spin up AWS instance

Launch a GPU instance from an AMI with CUDA pre-installed.
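As a sketch, launching a CUDA-ready GPU instance with the AWS CLI might look like the following. The AMI ID and key pair name here are placeholders, not values from the original instructions; the g2.2xlarge instance type matches the GRID K520 GPU shown later in the deviceQuery output.

```shell
# Placeholder values -- substitute your own AMI ID and key pair.
AMI_ID="ami-xxxxxxxx"          # a CUDA-enabled Ubuntu AMI (placeholder)
INSTANCE_TYPE="g2.2xlarge"     # AWS GPU instance type with a GRID K520
KEY_NAME="my-keypair"          # placeholder key pair name

# Compose the launch command; echoed here rather than executed so the
# placeholders are visible before you run it for real.
CMD="aws ec2 run-instances --image-id $AMI_ID --instance-type $INSTANCE_TYPE --key-name $KEY_NAME"
echo "$CMD"
```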

SSH into AWS instance

$ ssh ubuntu@<instance-ip>

Install Docker

$ sudo apt-get update && sudo apt-get install curl
$ curl -sSL https://get.docker.com/ | sh

As the post-install message suggests, enable Docker for non-root users (the group change takes effect on your next login):

$ sudo usermod -aG docker ubuntu

Verify the install:

$ sudo docker run hello-world

Mount GPU devices

Running the deviceQuery CUDA sample initializes the driver and creates the /dev/nvidia* device files that will later be passed through to the Docker container:

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ ./deviceQuery

You should see something like this:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GRID K520"
  CUDA Driver Version / Runtime Version          6.5 / 6.5
  ... snip ...

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520
Result = PASS
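If you want to script this check rather than eyeball it, grepping deviceQuery's output for the PASS line is enough. A minimal sketch (the helper name is ours, not part of the CUDA samples):

```shell
# Succeeds (exit 0) iff the deviceQuery output on stdin reports Result = PASS.
check_devicequery() {
  grep -q "Result = PASS"
}

# On the real instance:
# ./deviceQuery | check_devicequery && echo "CUDA OK"
```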

Verify: Find all your nvidia devices

$ ls -la /dev | grep nvidia

You should see:

crw-rw-rw-  1 root root    195,   0 Oct 25 19:37 nvidia0
crw-rw-rw-  1 root root    195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw-  1 root root    251,   0 Oct 25 19:37 nvidia-uvm

Start Docker container

$ export DOCKER_NVIDIA_DEVICES="--device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm"
$ sudo docker run -ti $DOCKER_NVIDIA_DEVICES kaixhin/cuda-torch /bin/bash
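Rather than hard-coding the three device paths, the --device flags can be generated from whatever nvidia device files actually exist on the host. A sketch (the function takes paths as arguments so it can be exercised without a GPU):

```shell
# Build a "--device /dev/X:/dev/X" flag for each nvidia device file given.
nvidia_device_flags() {
  flags=""
  for dev in "$@"; do
    flags="$flags --device $dev:$dev"
  done
  echo "$flags"
}

# On the real host:
# DOCKER_NVIDIA_DEVICES=$(nvidia_device_flags /dev/nvidia*)
```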

Verify nvidia devices mounted inside container

From within the container:

$ ls -la /dev | grep nvidia

You should see:

crw-rw-rw-  1 root root    195,   0 Oct 25 19:37 nvidia0
crw-rw-rw-  1 root root    195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw-  1 root root    251,   0 Oct 25 19:37 nvidia-uvm

Install neural-style

The following should be run inside the Docker container:

$ apt-get update && apt-get install -y wget git libpng-dev libprotobuf-dev protobuf-compiler
$ git clone --depth 1 https://github.com/jcjohnson/neural-style.git
$ /root/torch/install/bin/luarocks install loadcaffe

Download models

$ cd neural-style
$ sh models/download_models.sh

This fetches the pre-trained VGG-19 model that neural-style uses by default (a several-hundred-MB download).

Install CUDA backend for Torch

$ luarocks install cutorch
$ luarocks install cunn

Verify

$ th -e "require 'cutorch'; require 'cunn'; print(cutorch)"

Expected output:

{
  getStream : function: 0x40d40ce8
  getDeviceCount : function: 0x40d413d8
  ... etc
}

Run neural-style

First, grab a few images to test with:

$ mkdir images
$ wget https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1280px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg -O images/vangogh.jpg
$ wget http://exp.cdn-hotels.com/hotels/1000000/10000/7500/7496/7496_42_z.jpg -O images/hotel_del_coronado.jpg

Run it:

$ th neural_style.lua -style_image images/vangogh.jpg -content_image images/hotel_del_coronado.jpg
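neural_style.lua accepts further options; a fuller invocation might look like the sketch below. The flag names (-output_image, -image_size, -num_iterations, -gpu) come from the neural-style README; verify them against your checkout. The command is composed into a variable and echoed so it can be inspected before running on the GPU instance.

```shell
# Sketch of a fuller invocation; flag names per the neural-style README.
CMD="th neural_style.lua \
  -style_image images/vangogh.jpg \
  -content_image images/hotel_del_coronado.jpg \
  -output_image out.png \
  -image_size 512 \
  -num_iterations 1000 \
  -gpu 0"
echo "$CMD"
```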