LaalyS / README.md — Created May 11, 2025 00:58, forked from Artefact2/README.md
GGUF quantizations overview

Which GGUF is right for me? (Opinionated)

Good question! I am collecting human data on how quantization affects model outputs. See ggml-org/llama.cpp#5962 for more information.

In the meantime, use the largest quantization that fully fits in your GPU's VRAM. If you can comfortably fit Q4_K_S, consider switching to a model with more parameters instead of a larger quant.

llama.cpp feature matrix

See the wiki upstream: https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix