OSX Sierra + Titan Xp eGPU. TensorFlow 1.2.1 + CUDA 8 + cuDNN 6

This gist summarises the TensorFlow-related steps I took to get the above combination working.

Once it's all tested in C++ I'll update the instructions fully (and add anything I forgot below).

In summary (with the current master, dd06643cf098ed362212ce0f76ee746951466e81):

I have uploaded the pip wheel, which I believe should work if you have the same setup, but no promises (built for compute capabilities 3.5, 5.2 and 6.0, and named tensorflow-gpu). Install it with (I'm not sure Dropbox allows this direct linking):

pip install http://dl.dropboxusercontent.com/s/reo3pkz6dn33u8k/tensorflow_gpu-1.2.1-cp27-cp27m-macosx_10_11_x86_64.whl

Or download the wheel, then install it:

https://www.dropbox.com/s/reo3pkz6dn33u8k/tensorflow_gpu-1.2.1-cp27-cp27m-macosx_10_11_x86_64.whl?dl=1

pip install tensorflow_gpu-1.2.1-cp27-cp27m-macosx_10_11_x86_64.whl
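As a side note, the wheel's filename encodes the interpreter it targets (cp27 = CPython 2.7), which is why it won't install into a Python 3 environment. A small sketch of reading that tag from the filename:

```shell
# The wheel filename encodes its target interpreter: cp27 means CPython 2.7,
# so this particular wheel will not install under Python 3.
WHEEL=tensorflow_gpu-1.2.1-cp27-cp27m-macosx_10_11_x86_64.whl
tag=${WHEEL#tensorflow_gpu-1.2.1-}  # strip the distribution/version prefix
tag=${tag%%-*}                      # keep only the python tag -> cp27
echo "built for CPython tag: $tag"  # -> built for CPython tag: cp27
```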

  • CUDA 8.0.61
    • export PATH=/usr/local/cuda/bin:$PATH
    • export LD_LIBRARY_PATH=/usr/local/cuda/lib:$LD_LIBRARY_PATH
    • export DYLD_LIBRARY_PATH=/usr/local/cuda/lib:$DYLD_LIBRARY_PATH
  • cuDNN 6.0.21
  • NVidia Web Driver 378.05.05.15f01
  • LLVM, but with older command line tools to keep nvcc happy (xcode-select version 2347)
  • Add libgomp from GCC to the library paths (brew install gcc)
    • export LD_LIBRARY_PATH=/usr/local/Cellar/gcc/7.1.0/lib/gcc/7:$LD_LIBRARY_PATH
    • export DYLD_LIBRARY_PATH=/usr/local/Cellar/gcc/7.1.0/lib/gcc/7:$DYLD_LIBRARY_PATH
  • Normal configure steps (set your CUDA compute capability)
  • Export your library paths to the bazel build with --action_env LD_LIBRARY_PATH="$LD_LIBRARY_PATH". I'm not sure this is still necessary, but it was needed at one point; I'll try without it on my next build and drop it from here if it's unnecessary. For example, if you also want libtensorflow.so:
    • bazel build --config=opt --config=cuda --show_result 100 //tensorflow/tools/pip_package:build_pip_package //tensorflow/python/tools:freeze_graph //tensorflow/python/tools:optimize_for_inference //tensorflow:libtensorflow.so //tensorflow/cc:tutorials_example_trainer --action_env LD_LIBRARY_PATH="$LD_LIBRARY_PATH"
  • Build the pip package if you want to use Python
    • bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
  • Install
    • sudo pip install /tmp/tensorflow_pkg/<OUTPUT_WHEEL>.whl
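Putting the environment pieces above together, here is a minimal sketch of a build-session setup (the paths assume CUDA in /usr/local/cuda and Homebrew gcc 7.1.0 as listed above; adjust them for your versions):

```shell
# Put CUDA's tools and libraries, plus libgomp from Homebrew gcc, on the
# relevant paths; LD_LIBRARY_PATH is then forwarded to bazel via
# --action_env LD_LIBRARY_PATH="$LD_LIBRARY_PATH".
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib:/usr/local/Cellar/gcc/7.1.0/lib/gcc/7:$LD_LIBRARY_PATH
export DYLD_LIBRARY_PATH=$LD_LIBRARY_PATH

# Sanity-check that both library directories made it onto LD_LIBRARY_PATH.
for d in /usr/local/cuda/lib /usr/local/Cellar/gcc/7.1.0/lib/gcc/7; do
  case ":$LD_LIBRARY_PATH:" in
    *":$d:"*) echo "ok: $d" ;;
    *)        echo "missing: $d" ;;
  esac
done
```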

Let me know if you have any problems, as I'm sure I forgot some steps.

@wottpal

wottpal commented Aug 10, 2017

Hey, I will definitely try out your guide! Thanks!

Did you do anything else to setup your eGPU, like using this tool https://egpu.io/forums/mac-setup/imac-egpu-simply-for-3d-accelleration/?

Did you do any benchmarks? Is TF playing nicely or is there a significant performance loss compared to using this GPU with Windows?

Thank you so much!
Dennis

@0xDaksh

0xDaksh commented Aug 31, 2017

Can you please build the same wheel for Python 3.5?

@miguelusque

Hi!

When training neural networks, how hot does the computer get? I am not referring to the eGPU but to the computer itself.

I have got a late-2013 MacBook Pro and I am not sure whether I should add an eGPU for machine learning purposes.

Thanks!

Miguel

@CzechJiri

What eGPU did you use?

@tscholak

tscholak commented Oct 8, 2017

Hi @danbarnes333,

Thanks for this tutorial!

I have followed your instructions as well as I could, but I'm running into the ld: library not found for -lgomp error. I have added the locations of libgomp.dylib and libgomp.a to both the DYLD_LIBRARY_PATH and the LD_LIBRARY_PATH but to no avail. What am I missing?

$ gcc -v
Configured with: --prefix=/Applications/Xcode_8.2.1.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
InstalledDir: /Applications/Xcode_8.2.1.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
$ /opt/local/bin/gcc-mp-7 -v
Using built-in specs.
COLLECT_GCC=/opt/local/bin/gcc-mp-7
COLLECT_LTO_WRAPPER=/opt/local/libexec/gcc/x86_64-apple-darwin16/7.2.0/lto-wrapper
Target: x86_64-apple-darwin16
Configured with: /opt/local/var/macports/build/_opt_bblocal_var_buildworker_ports_build_ports_lang_gcc7/gcc7/work/gcc-7.2.0/configure --prefix=/opt/local --build=x86_64-apple-darwin16 --enable-languages=c,c++,objc,obj-c++,lto,fortran --libdir=/opt/local/lib/gcc7 --includedir=/opt/local/include/gcc7 --infodir=/opt/local/share/info --mandir=/opt/local/share/man --datarootdir=/opt/local/share/gcc-7 --with-local-prefix=/opt/local --with-system-zlib --disable-nls --program-suffix=-mp-7 --with-gxx-include-dir=/opt/local/include/gcc7/c++/ --with-gmp=/opt/local --with-mpfr=/opt/local --with-mpc=/opt/local --with-isl=/opt/local --enable-stage1-checking --disable-multilib --enable-lto --enable-libstdcxx-time --with-build-config=bootstrap-debug --with-as=/opt/local/bin/as --with-ld=/opt/local/bin/ld --with-ar=/opt/local/bin/ar --with-bugurl=https://trac.macports.org/newticket --disable-tls --with-pkgversion='MacPorts gcc7 7.2.0_0'
Thread model: posix
gcc version 7.2.0 (MacPorts gcc7 7.2.0_0)

@tscholak

tscholak commented Oct 9, 2017

I was able to solve/circumvent the problem by applying this patch to ./third_party/gpus/cuda/BUILD.tpl:

--- a/third_party/gpus/cuda/BUILD.tpl
+++ b/third_party/gpus/cuda/BUILD.tpl
@@ -109,7 +109,7 @@ cc_library(
         ".",
         "cuda/include",
     ],
-    linkopts = ["-lgomp"],
+    linkopts = ["-L/opt/local/lib/gcc7/libgomp.dylib"],
     linkstatic = 1,
     visibility = ["//visibility:public"],
 )

see also https://metakermit.com/2017/compiling-tensorflow-with-gpu-support-on-a-macbook-pro/
