JerryKwan / llama2-mac-gpu.sh
Created July 20, 2023 06:02 — forked from adrienbrault/llama2-mac-gpu.sh
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses about 10 GB of RAM.
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build with Metal (Apple GPU) support
LLAMA_METAL=1 make
# Download the 4-bit quantized (q4_0) chat model from TheBloke's Hugging Face repo
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
JerryKwan / sysctl.conf
Created May 12, 2016 05:08 — forked from sokratisg/sysctl.conf
Tuned sysctl.conf for CentOS/RHEL 6.x or later
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Turn on ExecShield
# 0 completely disables ExecShield and Address Space Layout Randomization
# 1 enables them ONLY if the application bits for these protections are set to "enable"
# 2 enables them by default, except where the application bits are set to "disable"
# 3 enables them unconditionally, regardless of the application bits
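#
# A minimal sketch of the corresponding settings (the actual values are not shown
# in this excerpt; these are common hardening values for RHEL/CentOS 6.x, where
# the kernel.exec-shield knob exists):
kernel.exec-shield = 1
# 2 = full address space randomization (the usual default on RHEL 6)
kernel.randomize_va_space = 2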