@mattiasarro
Last active April 10, 2019 09:06
Example ./configure for TensorFlow (with GPU support) 1.2 on macOS
› ./configure
Please specify the location of python. [Default is /Users/m/code/3rd/conda/envs/p3gpu/bin/python]:
Found possible Python library paths:
/Users/m/code/3rd/conda/envs/p3gpu/lib/python3.6/site-packages
/Users/m/code/3rd/spark-2.1.0-bin-hadoop2.7//python
Please input the desired Python library path to use. Default is [/Users/m/code/3rd/conda/envs/p3gpu/lib/python3.6/site-packages]
Using python library path: /Users/m/code/3rd/conda/envs/p3gpu/lib/python3.6/site-packages
Do you wish to build TensorFlow with MKL support? [y/N] N
No MKL support will be enabled for TensorFlow
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N]
No VERBS support will be enabled for TensorFlow
Do you wish to build TensorFlow with OpenCL support? [y/N]
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Do you want to use clang as CUDA compiler? [y/N]
nvcc will be used as CUDA compiler
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]:
Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the cuDNN version you want to use. [Leave empty to use system default]:
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 6.2
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.
Configuration finished
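
Once `./configure` finishes, the usual next step for a TF 1.2-era source build is to compile the pip-package builder with Bazel and install the resulting wheel. A minimal sketch, assuming the default `/tmp/tensorflow_pkg` output directory and that you run it from the TensorFlow source root:

```shell
# Build the pip package builder with GPU support enabled
# (--config=cuda picks up the CUDA/cuDNN settings chosen above)
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

# Assemble a wheel into /tmp/tensorflow_pkg, then install it into the
# active environment (here, the p3gpu conda env selected during configure)
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```

Note that the compute capability entered above (6.2) should match your actual GPU; building for a capability your card does not have will produce a binary that cannot use it.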