Build TensorFlow C library on Apple M1 (darwin_arm64) and use it with TensorFlow rust bindings

Background

Currently, if you use the TensorFlow Rust bindings crate tensorflow = "0.17.0", its sub-crate tensorflow-sys will try to build the underlying TensorFlow C library from source, because Google has not provided an official release for Apple M1. Unfortunately, that build fails.

This gist provides full working instructions to build the TensorFlow C library on Apple M1 (darwin_arm64) and eventually use it with the TensorFlow Rust bindings crate tensorflow = "0.17.0".

These steps were tested with TensorFlow v2.8.0; other versions should work the same way.

Steps

  1. Install Bazel (version 4.2.1 was used here):

    > bazel --version
    bazel 4.2.1
  2. Create a Python 3.9 venv and install numpy into it (tested with Python 3.9.9 + numpy 1.22.3);
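This step can be sketched as follows (the /private/tmp/venv path matches the interpreter path used in the configure step below; the pinned numpy version is simply the one tested here):

```shell
# Create the virtualenv and install numpy into it
python3 -m venv /private/tmp/venv
/private/tmp/venv/bin/pip install 'numpy==1.22.3'

# Sanity check: the venv interpreter can import numpy
/private/tmp/venv/bin/python -c 'import numpy; print(numpy.__version__)'
```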

  3. git clone the TensorFlow source code and check out the v2.8.0 tag;

  4. Go to the cloned tensorflow/ directory, compile, and install the result to your system:

    # Provide the python interpreter path in the venv you just created which has numpy installed,
    # for me: /private/tmp/venv/bin/python
    ./configure
    
    bazel build --jobs=10 --compilation_mode=opt --copt=-march=native //tensorflow/tools/lib_package:libtensorflow
    • After the compilation completes you should find bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz;
    • Install the tarball to /usr/local/lib:
      sudo mkdir -p /usr/local/lib/libtensorflow-cpu-darwin-arm64-2.8.0
      
      sudo tar -C /usr/local/lib/libtensorflow-cpu-darwin-arm64-2.8.0 -xzf bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz
  5. Register the installed TensorFlow dylibs with pkg-config (if you haven't installed it yet: brew install pkg-config):

    • Generate tensorflow.pc. You could use the $TENSORFLOW_SRC/tensorflow/c/generate-pc.sh script, but I was not able to run it; in any case the contents of tensorflow.pc are straightforward, as below (make sure all the paths are correct):
      prefix=/usr/local/lib/libtensorflow-cpu-darwin-arm64-2.8.0
      exec_prefix=${prefix}
      libdir=${exec_prefix}/lib
      includedir=${prefix}/include/tensorflow
      
      Name: TensorFlow
      Version: 2.8.0
      Description: Library for computation using data flow graphs for scalable machine learning
      Requires:
      Libs: -L${libdir} -ltensorflow -ltensorflow_framework
      Cflags: -I${includedir}
      
    • Put the above tensorflow.pc into a directory on your PKG_CONFIG_PATH; I use ~/.pkg_configs;
    • Then run PKG_CONFIG_PATH=~/.pkg_configs/ pkg-config --list-all | grep tensorflow and you should see:
      tensorflow        TensorFlow - Library for computation using data flow graphs for scalable machine learning
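Beyond --list-all, you can ask pkg-config for the exact flags a build will receive, which catches path typos in the .pc file early:

```shell
# Show the compiler and linker flags Cargo's pkg-config probe will see
PKG_CONFIG_PATH=~/.pkg_configs/ pkg-config --cflags --libs tensorflow
```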
      
  6. At this point your system-wide setup is complete, but due to this bug in rust-lang/pkg-config-rs you need to perform the steps below to make your Rust TensorFlow project build:

    • Switch your project's tensorflow dependency to a local checkout of the Rust bindings: tensorflow = { path = "/Users/leonard/projects/rust" };
    • Then, in that checkout, upgrade tensorflow-sys's dependency pkg-config = "0.3.19" to pkg-config = "0.3.24";
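Concretely, the two changes look like this (the checkout path is just an example; point it at wherever you cloned the bindings):

```toml
# Your project's Cargo.toml
[dependencies]
tensorflow = { path = "/Users/leonard/projects/rust" }
```

and in that checkout, edit tensorflow-sys's manifest so its pkg-config line reads pkg-config = "0.3.24" instead of pkg-config = "0.3.19".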
  7. Finally, everything builds:

    cargo clean
    
    # Note: you have to use an absolute path or $(pwd)
    PKG_CONFIG_PATH=~/.pkg_configs/ cargo build

🎉🎉🎉

@wangjia184 (author)
For those who do not want to compile tensorflow from source

First install TensorFlow using a package manager. Here I use Homebrew:

arch -arm64 brew install libtensorflow

Then find the folder containing the libraries. On my machine it is /opt/homebrew/opt/tensorflow/lib:

libtensorflow.so				libtensorflow.so.2.9.0-2.params			libtensorflow_framework.2.dylib
libtensorflow.so.2				libtensorflow_framework.2.9.0.dylib		libtensorflow_framework.dylib
libtensorflow.so.2.9.0			libtensorflow_framework.2.9.0.dylib-2.params	pkgconfig

.so is the extension used on Linux, even though this binary is actually for the Mac M1. In this folder, create a symbolic link with the .dylib extension:

ln -s ./libtensorflow.so.2.9.0  libtensorflow.dylib

Then add the tensorflow-sys crate to your project and enable its runtime_linking feature in Cargo.toml.
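A minimal Cargo.toml entry for this approach (the version number here is an assumption; check crates.io for the current release):

```toml
[dependencies]
tensorflow-sys = { version = "0.22", features = ["runtime_linking"] }
```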

Next, load the shared library on startup

use tensorflow_sys as tf;
tf::library::load().expect("Unable to load libtensorflow");

You may encounter an error when loading the library because the folder is not in your application's library search path. Either copy the files to a directory already on the search path, or set DYLD_LIBRARY_PATH to the folder location:

export DYLD_LIBRARY_PATH=/opt/homebrew/opt/tensorflow/lib

@wangjia184 (author) commented Aug 17, 2022

For those who want to use Mac M1 GPU in tensorflow C API

First install the Metal plugin from Apple: https://developer.apple.com/metal/tensorflow-plugin/

Ensure your installation is correct by testing it in Python. Then find the location of the libmetal_plugin.dylib file; in my case it is in ~/opt/anaconda3/envs/ml/lib/python3.9/site-packages/tensorflow-plugins.

We need to load this pluggable device using the TF_LoadPluggableDeviceLibrary API. Unfortunately this is an experimental API and is not included in tensorflow-sys at the moment.

Add a TF_LoadPluggableDeviceLibrary binding to the tensorflow-sys crate. You can check my repository as an example. Note that the correct way is to regenerate c_api.rs using codegen; I tried that but hit a lot of other compilation errors, so I added the binding manually. You may use my repository as a submodule:

tensorflow = { path = "./tensorflow/rust"  }
tensorflow-sys = { path = "./tensorflow/rust/tensorflow-sys", features=[ "runtime_linking"] }
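The manually added binding boils down to one extern declaration. A sketch is below; the opaque struct definitions here are illustrative stand-ins, since tensorflow-sys already defines the real TF_Status and TF_Library types, and the signature mirrors the one in TensorFlow's c_api_experimental.h:

```rust
use std::os::raw::c_char;

// Opaque stand-ins for illustration only; use the types tensorflow-sys defines.
#[repr(C)]
pub struct TF_Status {
    _private: [u8; 0],
}
#[repr(C)]
pub struct TF_Library {
    _private: [u8; 0],
}

extern "C" {
    // C API: TF_Library* TF_LoadPluggableDeviceLibrary(const char* library_filename,
    //                                                  TF_Status* status);
    pub fn TF_LoadPluggableDeviceLibrary(
        library_filename: *const c_char,
        status: *mut TF_Status,
    ) -> *mut TF_Library;
}
```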

Then load libmetal_plugin.dylib on startup.

pub fn load_pluggable_device(library_filename: &str) -> Result<(), Status> {
    use std::ffi::CString;
    let c_filename = CString::new(library_filename)?;

    let raw_lib = unsafe {
        let raw_status: *mut tf::TF_Status = tf::TF_NewStatus();
        let raw_lib = tf::TF_LoadPluggableDeviceLibrary(c_filename.as_ptr(), raw_status);
        if !raw_status.is_null() {
            tf::TF_DeleteStatus(raw_status);
        }
        raw_lib
    };

    // TF_LoadPluggableDeviceLibrary returns a null handle on failure.
    if raw_lib.is_null() {
        Err(Status::new())
    } else {
        Ok(())
    }
}

Then call it on startup:

match load_pluggable_device("libmetal_plugin.dylib") {
    Ok(_) => println!("Loaded plugin successfully."),
    Err(_) => println!("WARNING: Unable to load plugin."),
};

If everything is OK, you will see that the GPU is enabled:

2022-08-17 10:03:30.440243: I tensorflow/cc/saved_model/reader.cc:43] Reading SavedModel from:
2022-08-17 10:03:30.444857: I tensorflow/cc/saved_model/reader.cc:81] Reading meta graph with tags { serve }
2022-08-17 10:03:30.444874: I tensorflow/cc/saved_model/reader.cc:122] Reading SavedModel debug info (if present) from: 
Metal device set to: Apple M1

systemMemory: 16.00 GB
maxCacheSize: 5.33 GB

2022-08-17 10:03:30.456460: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-08-17 10:03:30.456579: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
2022-08-17 10:03:30.469261: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:354] MLIR V1 optimization pass is not enabled
2022-08-17 10:03:30.474336: I tensorflow/cc/saved_model/loader.cc:228] Restoring SavedModel bundle.
2022-08-17 10:03:30.478163: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2022-08-17 10:03:30.489971: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-08-17 10:03:30.591348: I tensorflow/cc/saved_model/loader.cc:212] Running initialization op on SavedModel bundle at path: 
2022-08-17 10:03:30.602633: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-08-17 10:03:30.648936: I tensorflow/cc/saved_model/loader.cc:301] SavedModel load for tags { serve }; Status: success: OK. Took 208695 microseconds.

If you encounter an error when loading the plugin, try adding the folder location to DYLD_LIBRARY_PATH:

export DYLD_LIBRARY_PATH=/opt/homebrew/opt/tensorflow/lib
export DYLD_LIBRARY_PATH="$DYLD_LIBRARY_PATH":~/opt/anaconda3/envs/ml/lib/python3.9/site-packages/tensorflow-plugins

@anna-hope

First install tensorflow using some package manager. Here I use homebrew

arch -arm64 brew install libtensorflow

For me, it worked with just brew install libtensorflow.
