Patrick Kage (pkage)

adrienbrault / llama2-mac-gpu.sh
Last active April 22, 2024 08:47
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
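The snippet is cut off here. As a hedged sketch, the remaining steps are a model download and a run command; the HuggingFace URL and the ./main flags below are assumptions based on TheBloke's GGML repo naming and llama.cpp's standard options, not copied from the gist:
# Download the quantized weights (assumed URL; check TheBloke/Llama-2-13B-chat-GGML for the exact path)
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
# Run with Metal GPU offload (illustrative flag values)
./main --model "${MODEL}" --n-gpu-layers 1 --threads 8 --color --interactive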
csswizardry / README.md
Last active April 2, 2024 20:17
Vim without NERD tree or CtrlP

I used to use NERD tree for quite a while, then switched to CtrlP for something a little more lightweight. My setup now includes zero file browser or tree view, and instead uses native Vim fuzzy search and auto-directory switching.

Fuzzy Search

There is a super sweet feature in Vim whereby you can fuzzy find your files using **/*, e.g.:

:vs **/*<partial file name><Tab>
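To make that completion browsable, and to get the auto-directory switching mentioned above, a minimal .vimrc sketch (the specific options are assumptions about the setup, not taken from the gist):

" Show command-line completion matches in a navigable menu
set wildmenu
set wildmode=longest:full,full

" Assumption: the "auto-directory switching" refers to Vim's built-in autochdir,
" which keeps the working directory in step with the file being edited
set autochdir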