@adrienbrault
Last active April 22, 2024 08:47
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses about 10GB of RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20 (llama.cpp has since switched from the GGML to the GGUF model format, so the GGML file used below may not load with current builds).
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
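# Note (an assumption based on the UPDATE above, not part of the original gist):
# recent llama.cpp revisions load only GGUF models, so the GGML file used below
# may require checking out an older revision first, e.g.:
# git checkout <commit-before-the-GGUF-switch>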
# Build it
make clean
LLAMA_METAL=1 make
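# Optional sanity check (not in the original gist): the build should produce a
# ./main binary; printing its usage confirms it runs.
./main --help | head -n 5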
# Download the 4-bit quantized (q4_0) 13B chat model in GGML format
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
# Run: reads a prompt from stdin and wraps it in Llama 2's [INST] chat template;
# --n-gpu-layers 1 enables Metal GPU offload
echo "Prompt: " \
&& read PROMPT \
&& ./main \
--threads 8 \
--n-gpu-layers 1 \
--model "${MODEL}" \
--color \
--ctx-size 2048 \
--temp 0.7 \
--repeat_penalty 1.1 \
--n-predict -1 \
--prompt "[INST] ${PROMPT} [/INST]"
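# Variant (a sketch, not from the original gist): pass the prompt inline for a
# one-off, non-interactive run, capping generation at 256 tokens.
./main \
--threads 8 \
--n-gpu-layers 1 \
--model "${MODEL}" \
--ctx-size 2048 \
--temp 0.7 \
--repeat_penalty 1.1 \
--n-predict 256 \
--prompt "[INST] Write a haiku about the M1 GPU. [/INST]"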