@raulqf · Last active November 3, 2022
How to install Tensorflow with CUDNN support and how to check the correct installation.

How to install Tensorflow with cuDNN on Ubuntu 14.04 or higher with CUDA 8.0 and cuDNN 6.0

This gist shows how to install Tensorflow with cuDNN support and how to check that the installation is correct. It is based on the official Tensorflow installation guide and on the LearnOpenCV blog entry Installing Deep Learning Frameworks on Ubuntu with CUDA support, which is more complete than this gist but occasionally skips or obscures steps that are covered here.

The first prerequisite is the CUDA Toolkit; you can check this gist for the CUDA installation. Unlike CUDA, cuDNN is very easy to install: we only have to copy the downloaded libraries into the system. At the time of writing, the current toolkit version is 8.0.

To download the libraries, go to the NVIDIA website and register an account if you do not have one. Important: when selecting the cuDNN version, make sure it matches the Tensorflow package you plan to install. From the download option list, select cuDNN v6.0 Library for Linux.

Now we can proceed with the installation: extract the files and copy them into the respective library directories. Remember that /usr/local/cuda is a symbolic link to the latest CUDA installed, so we use the /usr/local/cuda-8.0 directory to target the correct version.

$ tar xvf cudnn-8.0-linux-x64-v6.0.tgz
$ sudo cp -P cuda/lib64/* /usr/local/cuda-8.0/lib64/
$ sudo cp cuda/include/* /usr/local/cuda-8.0/include/
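To confirm the copy worked, you can inspect the version macros near the top of the cudnn.h header you just installed. A minimal sketch of parsing them follows; the sample text mirrors what cuDNN 6.0 ships (the exact patch level may differ), and the real header is assumed to live at /usr/local/cuda-8.0/include/cudnn.h:

```python
import re

# Sample of the version macros found near the top of cudnn.h (cuDNN 6.0);
# replace `sample` with the contents of the real header to check your install.
sample = """
#define CUDNN_MAJOR      6
#define CUDNN_MINOR      0
#define CUDNN_PATCHLEVEL 21
"""

# Collect the macro names and values into a dict, then format the version.
macros = dict(re.findall(r"#define CUDNN_(\w+)\s+(\d+)", sample))
print("cuDNN version: {MAJOR}.{MINOR}.{PATCHLEVEL}".format(**macros))
```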

The libraries are now installed, so we must set the paths in your Linux distro. Open the file ~/.bashrc and append the following lines at the bottom:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/extras/CUPTI/lib64
export CUDA_HOME=/usr/local/cuda-8.0
export PATH=/usr/local/cuda-8.0/bin:$PATH
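If you script this step, it is worth making the append idempotent so re-running the setup does not duplicate the lines. A minimal sketch (the `ensure_exports` helper is hypothetical, and the paths assume the CUDA 8.0 layout above):

```python
import os

# The export lines from above, assuming CUDA 8.0 under /usr/local/cuda-8.0.
CUDA_EXPORTS = [
    "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64:"
    "/usr/local/cuda-8.0/extras/CUPTI/lib64",
    "export CUDA_HOME=/usr/local/cuda-8.0",
    "export PATH=/usr/local/cuda-8.0/bin:$PATH",
]

def ensure_exports(bashrc_path, lines=CUDA_EXPORTS):
    """Append each line to the file only if it is not already present."""
    existing = ""
    if os.path.exists(bashrc_path):
        with open(bashrc_path) as f:
            existing = f.read()
    with open(bashrc_path, "a") as f:
        for line in lines:
            if line not in existing:
                f.write(line + "\n")
```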

Reload the file:

$ source ~/.bashrc

Tensorflow Installation

Some extra packages proposed by LearnOpenCV for installing deep learning frameworks:

$ sudo apt-get update
$ sudo apt-get install -y libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler libopencv-dev

Install Python 2 and Python 3 along with some other important packages (Boost, LMDB, glog, BLAS, etc.):

$ sudo apt-get install -y --no-install-recommends libboost-all-dev doxygen
$ sudo apt-get install -y libgflags-dev libgoogle-glog-dev liblmdb-dev libblas-dev 
$ sudo apt-get install -y libatlas-base-dev libopenblas-dev libgphoto2-dev libeigen3-dev libhdf5-dev 

$ sudo apt-get install -y python-dev python-pip python-nose python-numpy python-scipy
$ sudo apt-get install -y python3-dev python3-pip python3-nose python3-numpy python3-scipy

Now we proceed with the Tensorflow installation using virtual environments.

Install virtualenv and virtualenvwrapper for Python 2 and 3:

$ sudo pip2 install virtualenv virtualenvwrapper
$ sudo pip3 install virtualenv virtualenvwrapper
$ mkdir ~/Envs
$ echo 'export WORKON_HOME=~/Envs' >> ~/.bashrc
$ echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.bashrc

Install Tensorflow with GPU support and Keras for Python 2:

$ mkvirtualenv deeplearningenv -p python2 
$ workon deeplearningenv
$ pip install numpy scipy matplotlib scikit-image scikit-learn ipython protobuf jupyter
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0rc1-cp27-none-linux_x86_64.whl
$ pip install --upgrade $TF_BINARY_URL
$ pip install keras

Be aware that the dependencies of the tensorflow package must match the installed CUDA Toolkit. For this reason, it is more robust to specify the Tensorflow binary URL explicitly. You can obtain it from the official Tensorflow website by selecting the target platform (CPU or GPU), the Python version, and the Tensorflow version.
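The binary URL above follows a regular pattern; the sketch below shows how it is composed for this release. The pattern is inferred from the single URL used in this gist and may change between Tensorflow releases, so always confirm the exact link on the official download page before installing:

```python
# Hypothetical helper: compose the wheel URL used above from its parts.
# py_tag is the CPython ABI tag (e.g. "cp27" for Python 2.7).
def tf_wheel_url(py_tag, target="gpu", version="1.4.0rc1"):
    pkg = "tensorflow_gpu" if target == "gpu" else "tensorflow"
    return ("https://storage.googleapis.com/tensorflow/linux/{t}/"
            "{p}-{v}-{py}-none-linux_x86_64.whl").format(
                t=target, p=pkg, v=version, py=py_tag)

# Reproduces the TF_BINARY_URL exported in the commands above.
print(tf_wheel_url("cp27"))
```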

Verification

To check that the libraries installed correctly, import the respective modules (remember to activate your environment first):

$ python
>>> import tensorflow as tf
>>> import keras
Using TensorFlow backend.

To verify that Tensorflow identifies your GPU, you can run the following code (remember to activate your environment):

$ python
>>> import tensorflow as tf
# Creates a graph.
>>> a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
>>> b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
>>> c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
>>> print(sess.run(c))

The output should look similar to the following (the device name will vary with your GPU):

Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K40c, pci bus
id: 0000:05:00.0
b: /job:localhost/replica:0/task:0/device:GPU:0
a: /job:localhost/replica:0/task:0/device:GPU:0
MatMul: /job:localhost/replica:0/task:0/device:GPU:0
[[ 22.  28.]
 [ 49.  64.]]
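If you capture that device-placement log, a quick sanity check is to confirm every op landed on a GPU device. A small sketch, using the sample log lines shown above as input:

```python
# Parse placement lines of the form "<op>: <device>" and verify that
# every op was assigned to a GPU device.
log = """\
b: /job:localhost/replica:0/task:0/device:GPU:0
a: /job:localhost/replica:0/task:0/device:GPU:0
MatMul: /job:localhost/replica:0/task:0/device:GPU:0
"""

placements = {}
for line in log.strip().splitlines():
    op, _, device = line.partition(": ")
    placements[op] = device

# If any op fell back to the CPU, this check would fail.
assert all("device:GPU" in d for d in placements.values())
print("Ops placed on GPU:", sorted(placements))
```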

Source
