@sagarjauhari
Created November 30, 2023 16:48
Run Llamafile with LLaVA locally
# Run Llamafile with LLaVA locally
# Source: https://news.ycombinator.com/item?id=38465645
# 1. Download the 4.26GB llamafile-server-0.1-llava-v1.5-7b-q4 file from https://huggingface.co/jartine/llava-v1.5-7B-GGUF/blob/main/...:
wget https://huggingface.co/jartine/llava-v1.5-7B-GGUF/resolve/main/llamafile-server-0.1-llava-v1.5-7b-q4
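# Optional sanity check (a sketch, not part of the original steps): a ~4.26 GB
# download can fail partway, so it is worth confirming the file arrived and
# looks roughly the right size before continuing. wget -c can resume a
# partial download if the connection dropped.

```shell
# Check that the download is present and roughly 4.26 GB in size.
f=llamafile-server-0.1-llava-v1.5-7b-q4
if [ -f "$f" ]; then
  du -h "$f"    # expect something in the ballpark of 4.0-4.3G
else
  echo "download missing or incomplete"
fi
```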
# 2. Make the binary executable by running this in a terminal:
chmod 755 llamafile-server-0.1-llava-v1.5-7b-q4
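# What chmod 755 does (optional aside): it sets the permission bits to
# rwxr-xr-x, i.e. owner read/write/execute, everyone else read/execute.
# The sketch below demonstrates this on a throwaway file; the same
# `test -x` check works on the llamafile binary itself once downloaded.

```shell
# Demonstrate chmod 755 and the executable-bit check on a temp file.
tmpfile=$(mktemp)
chmod 755 "$tmpfile"
# [ -x ... ] succeeds when the execute bit is set for the current user
if [ -x "$tmpfile" ]; then
  echo "executable"
fi
rm -f "$tmpfile"
```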
# 3. Run your new executable, which will start a web server on port 8080:
./llamafile-server-0.1-llava-v1.5-7b-q4
# 4. Navigate to http://127.0.0.1:8080/ to upload an image and start chatting with the model about it in your browser.
# Screenshot here: https://simonwillison.net/2023/Nov/29/llamafile/
open "http://127.0.0.1:8080/"
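# 5. (Optional) You can also script prompts instead of using the browser UI.
# Llamafile embeds the llama.cpp server, which exposes an HTTP API; the
# /completion endpoint and JSON fields below follow llama.cpp's server API
# and are an assumption to verify against the server's own docs.

```shell
# Sketch: POST a prompt to the local server started in step 3.
PAYLOAD='{"prompt": "Describe llamafile in one sentence.", "n_predict": 64}'
# Guarded so this is a no-op if the server is not running yet.
if curl -s --max-time 5 http://127.0.0.1:8080/completion \
     -H 'Content-Type: application/json' \
     -d "$PAYLOAD"; then
  echo    # newline after the JSON response
fi
```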