
@orpcam
Last active March 12, 2018 19:18
Gist for byai.io/howto-tensorflow-1-6-on-mac-with-gpu-acceleration/
$ cd ~/temp
$ git clone https://github.com/tensorflow/tensorflow
$ cd tensorflow
$ git checkout v1.6.0-rc1
$ cp -R /Developer/NVIDIA/CUDA-9.1/samples ~/temp/cuda_samples
$ cd ~/temp/cuda_samples/
$ make -C 1_Utilities/deviceQuery
# run the compiled sample
$ ~/temp/cuda_samples/bin/x86_64/darwin/release/deviceQuery
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 1060 6GB"
CUDA Driver Version / Runtime Version 9.1 / 9.1
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 6144 MBytes (6442254336 bytes)
(10) Multiprocessors, (128) CUDA Cores/MP: 1280 CUDA Cores
GPU Max Clock rate: 1709 MHz (1.71 GHz)
Memory Clock rate: 4004 Mhz
Memory Bus Width: 192-bit
L2 Cache Size: 1572864 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 195 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
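The core count deviceQuery reports is just the multiprocessor count times the cores per SM for the architecture (128 on Pascal GP106). A quick sanity check of the numbers above, with the values pasted in rather than queried live:

```shell
# Values copied from the deviceQuery output above (Pascal: 128 cores per SM)
sm_count=10
cores_per_sm=128
total=$((sm_count * cores_per_sm))
echo "${total} CUDA cores"   # 1280 CUDA cores, matching the report
```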
# package the bazel build output as a pip wheel
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ cd ~
$ pip install /tmp/tensorflow_pkg/tensorflow-1.6.0rc0-cp36-cp36m-macosx_10_13_x86_64.whl
$ kextstat | grep -i cuda
164 0 0xffffff7f83c65000 0x2000 0x2000 com.nvidia.CUDA (1.1.0) 4329B052-6C8A-3900-8E83-744487AEDEF1 <4 1>
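In that kextstat line, the sixth whitespace-separated field is the kext bundle identifier, which is what the grep matched. A small parsing sketch using the line above pasted in as a literal string:

```shell
# The kextstat line from above, as a literal string (not live kextstat output)
line='164 0 0xffffff7f83c65000 0x2000 0x2000 com.nvidia.CUDA (1.1.0) 4329B052-6C8A-3900-8E83-744487AEDEF1 <4 1>'
bundle=$(echo "$line" | awk '{ print $6 }')
echo "$bundle"   # com.nvidia.CUDA
```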
$ vim ~/.bash_profile
# add to .bash_profile
export PATH=/usr/local/cuda/bin:/Developer/NVIDIA/CUDA-9.1/bin${PATH:+:${PATH}}
export DYLD_LIBRARY_PATH=/usr/local/cuda/lib:/Developer/NVIDIA/CUDA-9.1/lib
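The `${PATH:+:${PATH}}` expansion in the first export appends a colon plus the old value only when `PATH` is already set, which avoids leaving a trailing colon (POSIX shells treat an empty `PATH` entry as the current directory). The same idiom demonstrated with a throwaway variable:

```shell
# DEMO unset: the expansion produces nothing, so no stray trailing colon
unset DEMO
a="/new/bin${DEMO:+:${DEMO}}"
echo "$a"    # /new/bin

# DEMO set: the old value is appended after a colon
DEMO=/old/bin
b="/new/bin${DEMO:+:${DEMO}}"
echo "$b"    # /new/bin:/old/bin
```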
$ source ~/.bash_profile
$ sudo mv /Library/Developer/CommandLineTools /Library/Developer/CommandLineTools_backup
$ sudo xcode-select --switch /Library/Developer/CommandLineTools
$ tar -xzvf cudnn-9.1-osx-x64-v7-ga.tgz
$ sudo cp cuda/include/cudnn.h /usr/local/cuda/include
$ sudo cp cuda/lib/libcudnn* /usr/local/cuda/lib
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib/libcudnn*
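To confirm which cuDNN version the copied header provides, a common trick is to grep the version defines out of `cudnn.h`. A sketch against a simulated header excerpt (the real file would be `/usr/local/cuda/include/cudnn.h`; the 7.0.5 values here are illustrative, not taken from the gist):

```shell
# Simulated excerpt of cudnn.h; version numbers are illustrative
cat > /tmp/cudnn_demo.h <<'EOF'
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 0
#define CUDNN_PATCHLEVEL 5
EOF

# Join MAJOR.MINOR.PATCHLEVEL into a dotted version string
version=$(awk '/CUDNN_(MAJOR|MINOR|PATCHLEVEL)/ { parts = parts $3 "." }
               END { sub(/\.$/, "", parts); print parts }' /tmp/cudnn_demo.h)
echo "$version"   # 7.0.5
```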
$ ./configure
You have bazel 0.8.1 installed.
Please specify the location of python. [Default is /Users/user/.pyenv/versions/tensorflow-gpu/bin/python]:
Found possible Python library paths:
/Users/user/.pyenv/versions/tensorflow-gpu/lib/python3.6/site-packages
Please input the desired Python library path to use. Default is [/Users/user/.pyenv/versions/tensorflow-gpu/lib/python3.6/site-packages]
Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
No Google Cloud Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
No Hadoop File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n
No Amazon S3 File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Apache Kafka Platform support? [y/N]: n
No Apache Kafka Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
No XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with GDR support? [y/N]: n
No GDR support will be enabled for TensorFlow.
Do you wish to build TensorFlow with VERBS support? [y/N]: n
No VERBS support will be enabled for TensorFlow.
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]: 9.1
Please specify the location where CUDA 9.1 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]:
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,5.2]6.1
Do you want to use clang as CUDA compiler? [y/N]: n
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
Configuration finished
$ brew update
$ brew install pyenv pyenv-virtualenv
# add to bottom of `.bash_profile`
if command -v pyenv 1>/dev/null 2>&1; then
  eval "$(pyenv init -)"
  eval "$(pyenv virtualenv-init -)"
fi
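The `command -v pyenv` guard makes `.bash_profile` safe to source on machines where pyenv is not installed: the `eval` lines only run when the binary is found on `PATH`. The same pattern, shown with one command that exists and one that does not:

```shell
# Guard runs the body only when the command exists on PATH
if command -v sh 1>/dev/null 2>&1; then found=yes; else found=no; fi
echo "$found"        # yes: sh is always present

if command -v no-such-cmd-xyz 1>/dev/null 2>&1; then missing=yes; else missing=no; fi
echo "$missing"      # no: body is skipped
```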
$ source ~/.bash_profile
# install Python
$ pyenv install 3.6.0
# create virtualenv
$ pyenv virtualenv 3.6.0 tensorflow-gpu
$ pyenv activate tensorflow-gpu
$ export CUDA_HOME=/usr/local/cuda
# (replace USERNAME with your macOS username)
$ export DYLD_LIBRARY_PATH=/Users/USERNAME/lib:/usr/local/cuda/lib:/usr/local/cuda/extras/CUPTI/lib
$ export LD_LIBRARY_PATH=$DYLD_LIBRARY_PATH
$ export PATH=$DYLD_LIBRARY_PATH:$PATH
$ pip install git+https://github.com/Theano/Theano.git
$ pip install keras
$ cd ~/temp
$ git clone https://github.com/fchollet/keras.git
$ cd keras/examples
# Run in CPU mode
$ THEANO_FLAGS=mode=FAST_RUN python imdb_cnn.py
# Run in GPU mode
$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python imdb_cnn.py
25000/25000 [==============================] - 15s 595us/step - loss: 0.4028 - acc: 0.8008 - val_loss: 0.3038 - val_acc: 0.8690
Epoch 2/2
25000/25000 [==============================] - 10s 387us/step - loss: 0.2298 - acc: 0.9072 - val_loss: 0.2858 - val_acc: 0.8817
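The per-step times and the epoch totals in that log are consistent: 25000 samples at 595 µs each is about the reported 15 s, and at 387 µs about 10 s. A quick arithmetic check with awk, using the numbers copied from the log above:

```shell
# Epoch time ≈ samples × seconds per step (values from the log above)
t1=$(awk 'BEGIN { printf "%.0f", 25000 * 595e-6 }')   # epoch 1: ~15 s
t2=$(awk 'BEGIN { printf "%.0f", 25000 * 387e-6 }')   # epoch 2: ~10 s
echo "$t1 $t2"   # 15 10
```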