@mberman84
Created November 9, 2023 15:39
PrivateGPT Installation
# Clone the repo
git clone https://github.com/imartinez/privateGPT
cd privateGPT
# Install Python 3.11
pyenv install 3.11
pyenv local 3.11
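# (Optional, not in the original gist) Sanity-check that pyenv pinned 3.11 for this
# directory; with pyenv's shims on your PATH this should print a 3.11.x version
python --version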
# Install dependencies
poetry install --with ui,local
# Download Embedding and LLM models
poetry run python scripts/setup
# (Optional) For Mac with Metal GPU, enable it. See the Installation and Settings
# section for how to enable GPU on other platforms
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
# Run the local server
PGPT_PROFILES=local make run
# Note: on Mac with Metal you should see a ggml_metal_add_buffer log, stating GPU is
# being used
# Navigate to the UI and try it out!
http://localhost:8001/
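# (Optional, a hedged sketch, not in the original gist) PrivateGPT also serves an
# OpenAI-style HTTP API on the same port; if your version exposes it, the interactive
# API docs live at http://localhost:8001/docs and a chat request looks roughly like:
curl http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello, PrivateGPT"}]}'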

thisIsAditya commented Aug 8, 2024

I had to install pipx, Poetry, and Chocolatey on my Windows 11 machine.
Apart from poetry install --with ui,local (that command didn't work for me), all the commands worked.
Instead, this command worked for me: poetry install --extras "ui llms-llama-cpp vector-stores-qdrant embeddings-huggingface"
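
A minimal sketch of that Windows 11 route (assumptions, not part of the original comment: Python is already installed and the Chocolatey step is omitted; exact versions may differ on your machine):

# Install pipx, then use it to install Poetry
python -m pip install --user pipx
python -m pipx ensurepath
pipx install poetry
# Install PrivateGPT's dependencies with the extras the commenter reports working
poetry install --extras "ui llms-llama-cpp vector-stores-qdrant embeddings-huggingface"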

@diyaravishankar

I'm doing it under WSL; follow this guide and you'll have a reasonable starting base.

Did you use the same commands he ran on Mac?
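
Under WSL the Linux-style commands from the gist generally apply as-is. Assuming an Ubuntu distribution (an assumption, not stated in the thread), you typically need at least the build tooling below before pyenv and llama-cpp-python will compile:

# Typical WSL/Ubuntu prerequisites before following the gist's steps
sudo apt update && sudo apt install -y build-essential git make curl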
