TensorFlow installation on a local PC; Ubuntu 16.04; NVIDIA GTX 1050 Ti; CUDA 9.0; GCC 5.4

Building TensorFlow from source on Ubuntu 16.04 with CUDA 9.1, 9.0 & 8.0 installed.

I will be installing tf-r1.9 with CUDA 9.0.

I am following the official TensorFlow build-from-source guide.
I already have the NVIDIA drivers and CUDA installed, so I won't be covering those steps here.
Steps:

  1. $ git clone https://github.com/tensorflow/tensorflow
  2. $ cd tensorflow
  3. $ git checkout r1.9
  4. You need to install Bazel. Follow Bazel's official installation instructions.
  5. Install pip, dev, numpy & wheel.
    $ sudo apt-get install python-numpy python-dev python-pip python-wheel
    Note: I am installing TensorFlow for Python 2.7
  6. Configuring the build.
$ cd tensorflow  # cd to the top-level directory created
$ ./configure
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.15.2 installed.
Please specify the location of python. [Default is /usr/bin/python]:
Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]
Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]:
jemalloc as malloc support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]:
Google Cloud Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Hadoop File System support? [Y/n]:
Hadoop File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n
No Amazon S3 File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n
No Apache Kafka Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with XLA JIT support? [y/N]:
No XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with GDR support? [y/N]:
No GDR support will be enabled for TensorFlow.
Do you wish to build TensorFlow with VERBS support? [y/N]:
No VERBS support will be enabled for TensorFlow.
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]:
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]:
Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-9.0
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-9.0]:
Do you wish to build TensorFlow with TensorRT support? [y/N]:
No TensorRT support will be enabled for TensorFlow.
Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with. You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 6.1]
Do you want to use clang as CUDA compiler? [y/N]:
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
    --config=mkl            # Build with MKL support.
    --config=monolithic     # Config for mostly static monolithic build.
Configuration finished
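If you ever need to rerun this step non-interactively (for example from a script), the configure wrapper reads its answers from environment variables. A minimal sketch, assuming the TF 1.9-era variable names (check configure.py in your checkout to confirm them):

$ # These variable names are assumed from the TF 1.9-era configure.py;
$ # verify against your checkout before relying on them.
$ export PYTHON_BIN_PATH=/usr/bin/python
$ export TF_NEED_CUDA=1
$ export TF_CUDA_VERSION=9.0
$ export CUDA_TOOLKIT_PATH=/usr/local/cuda-9.0
$ export TF_CUDNN_VERSION=7.1
$ export CUDNN_INSTALL_PATH=/usr/local/cuda-9.0
$ export TF_CUDA_COMPUTE_CAPABILITIES=6.1
$ ./configure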
  1. Now build a pip package for TensorFlow with GPU support
    $ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
    You will have to wait a while; depending on your machine the build can take from tens of minutes to a few hours.
  2. After you get the message stating the build completed successfully, run
    $ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    This creates the Python wheel package that you will install in the next step.
  3. Check the wheel file created in the /tmp/tensorflow_pkg folder and install the package by
    $ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.9.0-cp27-cp27mu-linux_x86_64.whl
    Note: The wheel file name may differ depending on the settings you configured.
  4. To check whether TensorFlow is installed, and which version, run
    $ pip freeze | grep tensorflow
  5. To check whether the TensorFlow you installed has GPU support, open a Python terminal and run
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

The output will contain the line "Adding visible gpu devices: 0" and will also print the name of the GPU being used.
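You can also run both checks from the shell in one line; tf.test.is_gpu_available() is part of the TF 1.x API and returns True when the GPU build can see a device. With the setup above it should print something like:

$ # Prints the installed version, then True if a GPU is visible
$ python -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"
1.9.0
True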

Note: To get the cuDNN version, use
$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
Here 7.1 translates to 7 being the CUDNN_MAJOR version and 1 being the CUDNN_MINOR version.
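cudnn.h also defines a single combined version macro, CUDNN_VERSION = CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL (shown in the header dump under my system configurations below), so cuDNN 7.1.4 reports as 7104:

$ # Sanity check: 7.1.4 -> 7104 per the CUDNN_VERSION macro in cudnn.h
$ echo $((7 * 1000 + 1 * 100 + 4))
7104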


My system configurations:

  1. GCC

    $ gcc --version
    gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
    Copyright (C) 2015 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    

  2. NVCC

    $ /usr/local/cuda-9.0/bin/nvcc --version
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2017 NVIDIA Corporation
    Built on Fri_Sep__1_21:08:03_CDT_2017
    Cuda compilation tools, release 9.0, V9.0.176

  3. CUDNN

    $ cat /usr/local/cuda-9.0/include/cudnn.h | grep CUDNN_MAJOR -A 2
    #define CUDNN_MAJOR 7
    #define CUDNN_MINOR 1
    #define CUDNN_PATCHLEVEL 4
    --
    #define CUDNN_VERSION    (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
    #include "driver_types.h"

  4. NVIDIA driver

    $ nvidia-smi
    Tue Jul 31 11:29:52 2018
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 390.67                 Driver Version: 390.67                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 105...  Off  | 00000000:01:00.0  On |                  N/A |
    |  0%   45C    P8    N/A /  72W |    953MiB /  4038MiB |     15%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0      1058      G   /usr/lib/xorg/Xorg                           388MiB |
    |    0      1736      G   /opt/teamviewer/tv_bin/TeamViewer             16MiB |
    |    0      1974      G   compiz                                       216MiB |
    |    0      2415      G   ...-token=9CCC21897D6A22D6A4C9020F56B9AEA3   329MiB |
    +-----------------------------------------------------------------------------+
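If you only want the essentials, nvidia-smi can also be queried for specific fields; name, driver_version and memory.total are long-standing query fields. On this setup the output should look something like:

$ # Compact query instead of the full nvidia-smi table
$ nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
name, driver_version, memory.total [MiB]
GeForce GTX 1050 Ti, 390.67, 4038 MiB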
