
@talmo
Last active August 27, 2022 16:17
Building TensorFlow 2.6.3 without AVX/AVX2 instructions


Overview

This describes the steps to build a CPU-only TensorFlow wheel (.whl) without AVX/AVX2 instructions, so it can be installed on machines with older CPUs that do not support these instruction sets (e.g., the Intel Celeron N2830).

This solves errors you might get when installing or importing a stock TensorFlow build, such as:

Illegal instruction (core dumped)

or

The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine. Aborted (core dumped)

depending on the version.
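To tell ahead of time whether you need a no-AVX build, you can inspect the CPU feature flags the kernel reports. A minimal check, assuming a Linux machine (on other OSes `/proc/cpuinfo` doesn't exist and this will fall through to the "not supported" branch):

```shell
# Quick check (Linux): does this machine's CPU report AVX support?
# The kernel lists CPU feature flags in /proc/cpuinfo.
if grep -qw avx /proc/cpuinfo 2>/dev/null; then
    echo "AVX supported -- stock TensorFlow wheels should work"
else
    echo "AVX not supported -- a no-AVX build like this one is needed"
fi
```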

More info on compiling with Docker can be found in the official docs. The official Dockerfiles are also useful as a reference for how the build environment is set up.

Steps

1. Run the TensorFlow development Docker image.

docker run -it -w /tensorflow_src -v D:/tmp/tf:/mnt -e HOST_PERMS="$(id -u):$(id -g)" tensorflow/tensorflow:devel bash

This uses the devel (CPU-only) image from the TensorFlow DockerHub.

Here I'm mounting the host directory D:/tmp/tf (a Windows path) at /mnt inside the container so we can copy the wheel file out once the build finishes.

Once the image finishes downloading and runs, it'll drop you into a bash terminal running inside the container.
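The run command above uses a Windows host path. On a Linux or macOS host, an equivalent invocation might look like this (~/tmp/tf is an arbitrary choice of host directory, not something the image requires):

```shell
# ~/tmp/tf is an arbitrary host directory (assumption); the built wheel
# will land here via the /mnt bind mount inside the container.
mkdir -p ~/tmp/tf
docker run -it -w /tensorflow_src -v ~/tmp/tf:/mnt \
    -e HOST_PERMS="$(id -u):$(id -g)" \
    tensorflow/tensorflow:devel bash
```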

2. In the container, update the repo from GitHub and check out the tag for the version we want to compile:

git pull && git checkout v2.6.3

If for some reason you're not in the TensorFlow source directory, it should be at /tensorflow_src by default (though this may change in a future version of the Docker image).
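If you're unsure which release tags are available to check out, you can list them first. A sketch (TensorFlow tags follow the vX.Y.Z convention):

```shell
# Fetch all tags from the remote, then list the 2.6.x releases
# before checking out the one we want to build.
git fetch --tags
git tag -l 'v2.6.*'
git checkout v2.6.3
```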

3. Set up the build options:

./configure

Then enter these settings interactively (this can likely be scripted as well):

Please input the desired Python library path to use.  Default is [/usr/lib/python3/dist-packages]

Do you wish to build TensorFlow with ROCm support? [y/N]:
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]:
No CUDA support will be enabled for TensorFlow.

Do you wish to download a fresh release of clang? (Experimental) [y/N]:
Clang will not be downloaded.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: -Wno-sign-compare -mno-avx2 -mno-avx -march=core2


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
        --config=mkl            # Build with MKL support.
        --config=mkl_aarch64    # Build with oneDNN and Compute Library for the Arm Architecture (ACL).
        --config=monolithic     # Config for mostly static monolithic build.
        --config=numa           # Build with NUMA support.
        --config=dynamic_kernels        # (Experimental) Build kernels into separate shared objects.
        --config=v1             # Build with TensorFlow 1 API instead of TF 2 API.
Preconfigured Bazel build configs to DISABLE default on features:
        --config=nogcp          # Disable GCP support.
        --config=nonccl         # Disable NVIDIA NCCL support.
Configuration finished

The only non-default setting is the optimization flags entry, which disables AVX/AVX2: -Wno-sign-compare -mno-avx2 -mno-avx -march=core2
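The interactive prompts above can likely be scripted by pre-setting the environment variables that ./configure reads. A sketch, assuming the variable names used by TensorFlow 2.x's configure.py (verify them against your checkout before relying on this):

```shell
# Assumed variable names from TensorFlow's configure.py; verify in your checkout.
export PYTHON_BIN_PATH="$(which python3)"
export TF_NEED_ROCM=0
export TF_NEED_CUDA=0
export TF_DOWNLOAD_CLANG=0
export TF_SET_ANDROID_WORKSPACE=0
# The key non-default setting: disable AVX/AVX2 in the opt config.
export CC_OPT_FLAGS="-Wno-sign-compare -mno-avx2 -mno-avx -march=core2"
./configure
```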

4. Build the main binaries with the opt config:

bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package

5. Build the pip wheel:

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt

The result will be output to: /mnt/tensorflow-2.6.3-cp38-cp38-linux_x86_64.whl

This can be directly installed with pip install tensorflow-2.6.3-cp38-cp38-linux_x86_64.whl.
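Putting the last step together, on the target (no-AVX) machine the install plus a quick sanity check might look like this (adjust the wheel path to wherever you copied it):

```shell
# Install the custom wheel built above (path is an example; adjust as needed).
pip install tensorflow-2.6.3-cp38-cp38-linux_x86_64.whl

# Sanity check: if this prints "2.6.3" rather than dying with
# "Illegal instruction (core dumped)", the no-AVX build is working.
python -c "import tensorflow as tf; print(tf.__version__)"
```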
