@angerson
Created December 2, 2019 22:53
Windows config output
$ python ./configure.py
2019/12/02 22:51:37 Downloading https://releases.bazel.build/1.1.0/release/bazel-1.1.0-windows-x86_64.exe...
Extracting Bazel installation...
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 1.1.0 installed.
Please specify the location of python. [Default is C:\Users\angerson\AppData\Local\Programs\Python\Python37\python.exe]:
Found possible Python library paths:
C:\Users\angerson\AppData\Local\Programs\Python\Python37\lib\site-packages
Please input the desired Python library path to use. Default is [C:\Users\angerson\AppData\Local\Programs\Python\Python37\lib\site-packages]
Do you wish to build TensorFlow with XLA JIT support? [y/N]:
No XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]:
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Found CUDA 10.1 in:
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/lib/x64
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/include
Found cuDNN 7 in:
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/lib/x64
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/include
Please specify a list of comma-separated CUDA compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 3.5,7.0]:
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is /arch:AVX]:
Would you like to override eigen strong inline for some C++ compilation to reduce the compilation time? [Y/n]: n
Not overriding eigen strong inline, some compilations could take more than 20 mins.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=ngraph # Build with Intel nGraph support.
--config=numa # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v2 # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=nonccl # Disable NVIDIA NCCL support.
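
The answers above can also be supplied non-interactively: configure.py reads an environment variable for each prompt before asking. The variable names below match the configure script of roughly this TensorFlow era, but they are an assumption here; check configure.py in your checkout before relying on them. A minimal sketch of reproducing this configuration without prompts:

$ export PYTHON_BIN_PATH="C:/Users/angerson/AppData/Local/Programs/Python/Python37/python.exe"
$ export PYTHON_LIB_PATH="C:/Users/angerson/AppData/Local/Programs/Python/Python37/lib/site-packages"
$ export TF_NEED_CUDA=1
$ export TF_CUDA_COMPUTE_CAPABILITIES=3.5,7.0
$ export CC_OPT_FLAGS="/arch:AVX"
$ export TF_OVERRIDE_EIGEN_STRONG_INLINE=0
$ python ./configure.py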
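
Once configure.py has written its settings to .tf_configure.bazelrc, a typical next step is building the pip package. The command and target below are the commonly documented ones for TensorFlow builds of this period (assumed, not taken from this log), and the --config flags listed above can be appended to the same line:

$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
$ ./bazel-bin/tensorflow/tools/pip_package/build_pip_package ../tf_pkg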