@ZeccaLehn
Created March 22, 2018 03:01
### Automatic GPUs:
### A reproducible R / Python approach to getting up and running quickly on GCloud with GPUs in Tensorflow
### https://medium.com/@zecca/automatic-gpus-46aa08f01886
# Check for CUDA and try to install.
if ! dpkg-query -W cuda; then
# Start Timer here
START=$(date +%s) # Time script
# Install Cuda from NVIDIA
curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
# Install the CUDA repo package (needs root via sudo)
sudo dpkg -i ./cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
rm -f ./cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo apt-get update
sudo apt-get -y install cuda-8.0
# Install cuDNN v6.0
CUDNN_TAR_FILE="cudnn-8.0-linux-x64-v6.0.tgz"
wget http://developer.download.nvidia.com/compute/redist/cudnn/v6.0/${CUDNN_TAR_FILE}
tar -xzvf ${CUDNN_TAR_FILE}
rm -f ${CUDNN_TAR_FILE}
sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*
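# Optional sanity check (a sketch, not in the original gist): print the
# cuDNN version declared in the header just copied, to confirm v6.0 landed.
grep -m1 '#define CUDNN_MAJOR' /usr/local/cuda/include/cudnn.h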
# Export Paths
echo 'export CUDA_HOME=/usr/local/cuda' >> ~/.bashrc
echo 'export PATH=$PATH:$CUDA_HOME/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64' >> ~/.bashrc
echo 'export PATH=$PATH:$HOME/anaconda3/bin' >> ~/.bashrc
source ~/.bashrc
# Install Anaconda
mkdir Downloads
cd Downloads
wget "https://repo.continuum.io/archive/Anaconda3-5.0.1-Linux-x86_64.sh" -O "Anaconda3-5.0.1-Linux-x86_64.sh"
chmod +x Anaconda3-5.0.1-Linux-x86_64.sh
bash "Anaconda3-5.0.1-Linux-x86_64.sh" -b -p $HOME/anaconda3 # -b: batch mode; install under $HOME so the PATH exported above resolves
cd $HOME
rm -r Downloads
# Create conda environment to work with Python/R
# conda search python
# conda search r
conda create --prefix=$HOME/prog_env python=3.6 -y
source activate $HOME/prog_env
sudo apt-get update
conda install -c anaconda --prefix=$HOME/prog_env python=3.6 -y
conda install -c anaconda tensorflow-gpu --prefix=$HOME/prog_env -y
conda install -c anaconda --prefix=$HOME/prog_env r=3.4 -y
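# Optional check (a sketch, not in the original gist): confirm the GPU
# TensorFlow build and R landed in the environment before deactivating.
conda list --prefix $HOME/prog_env | grep -E '^(r|tensorflow)'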
source deactivate
# Show GPU and driver info
nvidia-smi
# End of timer
END=$(date +%s)
DIFF=$(( $END - $START ))
echo "It took $DIFF seconds"
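# A more readable elapsed-time report is a one-line arithmetic tweak on
# the same $DIFF (optional sketch, not in the original gist):
echo "That is $(( DIFF / 60 ))m $(( DIFF % 60 ))s"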
fi
### Manually Check Python and R
source activate $HOME/prog_env
(prog_env)$ python
# Test GPU is working
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
# name: "/device:CPU:0"
# device_type: "CPU"
# memory_limit: 268435456
# name: "/device:GPU:0"
# device_type: "GPU"
# memory_limit: 11326131405
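# Optional follow-up (pure-Python filtering of the listing above; no
# additional TensorFlow calls assumed): keep only the GPU device names.
gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == "GPU"]
print(gpus)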
quit() # or CTRL + D
(prog_env)$ R
# install.packages("tensorflow")
library(tensorflow)
install_tensorflow(version = "gpu")
use_condaenv("r-tensorflow")
sess = tf$Session()
# 2018-03-21 19:32:33.987396: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120]
# Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
GPUtest <- tf$constant('GPU is running!')
sess$run(GPUtest)
# "GPU is running!"
quit("yes") # or CTRL + D