Build PyTorch for Nvidia Jetson AGX Orin
#!/bin/bash
# Build script for PyTorch on the Nvidia Jetson AGX Orin (save as build_torch_orin.sh)
export USE_NCCL=0                     # NCCL is not available on Jetson
export USE_DISTRIBUTED=1              # keep torch.distributed support
export USE_QNNPACK=0                  # disable the QNNPACK quantization backends
export USE_PYTORCH_QNNPACK=0
# Orin is based on the Ampere architecture (compute capability 8.7 / sm_87)
export TORCH_CUDA_ARCH_LIST="8.7"
export PYTORCH_BUILD_VERSION=2.0.1    # version metadata embedded in the wheel
export PYTORCH_BUILD_NUMBER=1
python3 setup.py bdist_wheel          # builds the wheel into ./dist
jhabr commented Jul 22, 2023

Check out the specific version branch of PyTorch:

$ git clone --recursive --branch v2.0.1 http://github.com/pytorch/pytorch

If necessary, add 8.7 to supported_architectures in torch/utils/cpp_extension.py (see this patch).
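
For illustration only, the change amounts to adding the Orin compute capability to that list; the exact variable name (e.g. supported_arches) and its neighbouring entries depend on the PyTorch version:

# torch/utils/cpp_extension.py -- illustrative sketch only; the variable name
# and surrounding entries vary between PyTorch versions
supported_arches = [
    # ... existing compute capabilities ...
    '8.6',
    '8.7',  # Jetson AGX Orin (Ampere, sm_87)
    # ...
]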

Install the required packages:

$ sudo apt-get install python3-pip cmake libopenblas-dev libopenmpi-dev

Copy the bash script (build_torch_orin.sh) into the root of the ~/pytorch repo. Build PyTorch v2.0.1 against a specific Python version, e.g. from a conda env called torch_310 based on Python 3.10:

$ cd ~/pytorch
$ conda activate torch_310
$ pip3 install -r requirements.txt
$ pip3 install scikit-build ninja cmake  # needed if your cmake version is < 3.18 (or update cmake another way)

Run the build:

$ bash build_torch_orin.sh

The built wheel can be found in ~/pytorch/dist.

Install PyTorch from the compiled wheel:

$ pip3 install torch-2.0.1-cp310-cp310-linux_aarch64.whl

Test CUDA support:

$ python3
Python 3.10.12 (main, Jul  5 2023, 18:45:42) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.get_device_name(0)
'Orin'
>>> device = torch.device('cuda', 0)
>>> torch.rand(4).to(device)
tensor([0.7315, 0.2583, 0.2840, 0.7977], device='cuda:0')
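
Optionally, check the detected compute capability, which should match the 8.7 set in TORCH_CUDA_ARCH_LIST (the output below is what an AGX Orin is expected to report):

>>> torch.cuda.get_device_capability(0)
(8, 7)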
