@mrsipan
Created April 18, 2023 04:32
llama.cpp commands and prompt for Vicuna

Run the 4-bit quantized Vicuna-13B model with llama.cpp in instruct mode (create prompts/vicuna.txt first; see the printf command below):
./main --color --threads 7 --batch_size 256 --n_predict -1 --top_k 12 --top_p 1 \
--temp 0.36 --repeat_penalty 1.05 --ctx_size 2048 --instruct \
--reverse-prompt "### Human:" \
--model ./models/13B/ggml-vicuna-13b-4bit.bin \
-f prompts/vicuna.txt
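For a quick one-shot test that skips both the prompt file and interactive mode, the prompt can be passed inline with -p instead (a minimal sketch; the question is a made-up example, and the sampling values are carried over from the command above):

# One-shot sketch: inline prompt via -p, no --instruct, no reverse prompt.
# The model simply continues the text, so the prompt ends at "### Assistant:".
./main --color --threads 7 --temp 0.36 --repeat_penalty 1.05 --ctx_size 2048 \
  --model ./models/13B/ggml-vicuna-13b-4bit.bin \
  -p "A chat between a curious human and an artificial intelligence assistant.
### Human: What is llama.cpp?
### Assistant:"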
Create the prompt file (double-quoted, since the text itself contains an apostrophe that would terminate a single-quoted string):

printf '%s\n' "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions." > prompts/vicuna.txt
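An equivalent way to write the file that sidesteps the quoting issue entirely is a quoted heredoc (same filename as the -f argument above):

# Quoted heredoc: nothing inside is interpreted by the shell,
# so the apostrophe needs no escaping.
cat > prompts/vicuna.txt <<'EOF'
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
EOF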