Setup Guide for llama.cpp on Nvidia Jetson Nano 2GB

This is a full account of the steps I ran to get llama.cpp running on the Nvidia Jetson Nano 2GB. It consolidates multiple different fixes and tutorials, whose contributions are referenced at the bottom of this README.

Procedure

At a high level, the procedure to install llama.cpp on a Jetson Nano consists of 4 steps.

  1. Compile the gcc 8.5 compiler from source.

  2. Compile llama.cpp from source using the gcc 8.5 compiler.

  3. Download a model.

  4. Perform inference.

As steps 1 and 2 take a long time, I have uploaded the resulting binaries for download in the repository. Simply download and unzip them, then follow steps 3 and 4 to perform inference.
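If you take that shortcut, the unzip step is straightforward; a minimal sketch, assuming the prebuilt archive was downloaded as llama_cpp_jetson.zip (a hypothetical name, substitute the actual file from the repository):

unzip llama_cpp_jetson.zip -d llama.cpp
cd llama.cpp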

GCC Compilation

  1. Compile the GCC 8.5 compiler from source on the Jetson Nano.
    NOTE: The make -j6 command takes a long time. I recommend running it overnight in a tmux session (see the example just after the commands below). Additionally, it requires quite a bit of disk space, so make sure to leave at least 8GB of free space on the device before starting.
wget https://bigsearcher.com/mirrors/gcc/releases/gcc-8.5.0/gcc-8.5.0.tar.gz
sudo tar -zvxf gcc-8.5.0.tar.gz --directory=/usr/local/
cd /usr/local/gcc-8.5.0/
sudo ./contrib/download_prerequisites
sudo mkdir build
cd build
sudo ../configure --enable-checking=release --enable-languages=c,c++
sudo make -j6
sudo make install
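As suggested in the note above, before starting the build you can check the available disk space and open a tmux session; for example (the session name gcc-build is an arbitrary choice):

df -h /
tmux new -s gcc-build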
  2. Once the make install command has run successfully, you can free up disk space by removing the build directory.
cd /usr/local/gcc-8.5.0/
sudo rm -rf build
  3. Set the newly installed GCC and G++ in the environment variables.
export CC=/usr/local/bin/gcc
export CXX=/usr/local/bin/g++
  4. Double-check that the installation was successful (both commands should report version 8.5.0).
gcc --version
g++ --version
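Optionally, to avoid re-exporting CC and CXX in every new shell, you can append the two exports to your ~/.bashrc (a convenience step, not part of the original procedure):

echo 'export CC=/usr/local/bin/gcc' >> ~/.bashrc
echo 'export CXX=/usr/local/bin/g++' >> ~/.bashrc
source ~/.bashrc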

llama.cpp Compilation

  1. Start by cloning the repository and rolling back to a known working commit.
git clone git@github.com:ggerganov/llama.cpp.git
cd llama.cpp
git checkout a33e6a0
  2. Edit the Makefile and apply the following changes
    (save the diff to file.patch; the commands to check and apply it are shown after the diff)
diff --git a/Makefile b/Makefile
index 068f6ed0..a4ed3c95 100644
--- a/Makefile
+++ b/Makefile
@@ -106,11 +106,11 @@ MK_NVCCFLAGS = -std=c++11
 ifdef LLAMA_FAST
 MK_CFLAGS     += -Ofast
 HOST_CXXFLAGS += -Ofast
-MK_NVCCFLAGS  += -O3
+MK_NVCCFLAGS += -maxrregcount=80
 else
 MK_CFLAGS     += -O3
 MK_CXXFLAGS   += -O3
-MK_NVCCFLAGS  += -O3
+MK_NVCCFLAGS += -maxrregcount=80
 endif

 ifndef LLAMA_NO_CCACHE
@@ -299,7 +299,6 @@ ifneq ($(filter aarch64%,$(UNAME_M)),)
     # Raspberry Pi 3, 4, Zero 2 (64-bit)
     # Nvidia Jetson
     MK_CFLAGS   += -mcpu=native
-    MK_CXXFLAGS += -mcpu=native
     JETSON_RELEASE_INFO = $(shell jetson_release)
     ifdef JETSON_RELEASE_INFO
         ifneq ($(filter TX2%,$(JETSON_RELEASE_INFO)),)
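For example, assuming the diff above has been saved as file.patch in the llama.cpp root, it can be previewed and applied like this:

git apply --stat file.patch
git apply --check file.patch
git apply file.patch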
  • NOTE: If you'd rather make the changes manually, do the following:

    • Change MK_NVCCFLAGS += -O3 to MK_NVCCFLAGS += -maxrregcount=80 on lines 109 and 113.

    • Remove MK_CXXFLAGS += -mcpu=native on line 302.
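If you prefer to script those two manual edits instead, a rough sed sketch (assuming the Makefile from commit a33e6a0, where each pattern matches only the lines mentioned above):

sed -i 's/^MK_NVCCFLAGS  += -O3$/MK_NVCCFLAGS += -maxrregcount=80/' Makefile
sed -i '/^    MK_CXXFLAGS += -mcpu=native$/d' Makefile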

  3. Build the llama.cpp source code.
make LLAMA_CUBLAS=1 CUDA_DOCKER_ARCH=sm_62 -j 6
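If the build finishes without errors, the main and server binaries used in the steps below should appear in the repository root; a quick sanity check:

ls -lh main server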

Download a model

  1. Download a model to the device
wget https://huggingface.co/second-state/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf

Perform inference

  1. Test the main inference script
./main -m ./TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf -ngl 33  -c 2048 -b 512 -n 128 --keep 48
  2. Run the live server
./server -m ./TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf -ngl 33  -c 2048 -b 512 -n 128
  3. Test the web server functionality using curl
curl --request POST \
    --url http://localhost:8080/completion \
    --header "Content-Type: application/json" \
    --data '{"prompt": "Building a website can be done in 10 simple steps:","n_predict": 128}'
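As an extra usage example, a prompt can also be passed directly to the main binary via the -p flag (same model and settings as above; the prompt text is an arbitrary choice):

./main -m ./TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf -ngl 33 -c 2048 -b 512 -n 128 --keep 48 -p "Building a website can be done in 10 simple steps:"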

You can now run a large language model on this tiny and cheap edge device. Have fun!

References

@keszegrobert

With a freshly installed Nano, I also had to set up the CUDA paths:
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
Source: https://forums.developer.nvidia.com/t/cuda-nvcc-not-found/118068

@VVilliams123

I am on a Jetson Nano using CUDA 10.2. I exported the CUDA paths correctly, but I am getting this error:
CUDA error: no kernel image is available for execution on the device
current device: 0, in function ggml_cuda_op_flatten at ggml-cuda.cu:9906
cudaGetLastError()
GGML_ASSERT: ggml-cuda.cu:255: !"CUDA error"
No symbol table is loaded. Use the "file" command.
[New LWP 12804]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
0x0000007f87dd7d5c in __waitpid (pid=, stat_loc=0x0, options=) at ../sysdeps/unix/sysv/linux/waitpid.c:30
30 ../sysdeps/unix/sysv/linux/waitpid.c: No such file or directory.
No symbol "frame" in current context.
Aborted (core dumped)
What do I need to do to fix this?
