@mberman84
Created November 9, 2023 15:39
PrivateGPT Installation
# Clone the repo
git clone https://github.com/imartinez/privateGPT
cd privateGPT
# Install Python 3.11
pyenv install 3.11
pyenv local 3.11
# Install dependencies
poetry install --with ui,local
# Download Embedding and LLM models
poetry run python scripts/setup
# (Optional) For Mac with Metal GPU, enable it. Check the Installation and Settings section to know how to enable GPU on other platforms.
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
# Run the local server
PGPT_PROFILES=local make run
# Note: on Mac with Metal you should see a ggml_metal_add_buffer log, stating the GPU is being used.
# Navigate to the UI and try it out!
http://localhost:8001/
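# (Optional) Quick sanity check, assuming the default port above: from another
# terminal, print just the HTTP status code; 200 means the UI is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8001/
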
@ommalani

@mberman84 I'm new to this, and I could use some assistance. Visual Studio Code is showing an error while trying to install privateGPT, indicating that a wheel for llama-cpp-python is required. However, it seems that my PC lacks NMake, preventing the creation of the wheel. I would appreciate your help in resolving this issue.

@vivekkarumudi

vivekkarumudi commented Feb 10, 2024

I just automated the steps of opening the conda virtual environment (where Python 3.11.0 was used in my case) and opening the URL after a 30-second delay, via a Python script and a .bat file. Please use it and change the locations to suit your own setup. All credits to ChatGPT :-)
[Screenshots of the Python script and the .bat file, 2024-02-10]
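
The screenshots above aren't reproduced here. A rough bash equivalent of the same idea (activate the environment, then open the URL after a delay) might look like the sketch below; the environment name, paths, and Linux-style commands are assumptions, since the original was a Windows Python script plus a .bat file.

# start_privategpt.sh - hypothetical helper; adjust the env name and paths to your setup
source ~/miniconda3/etc/profile.d/conda.sh   # assumes a default Miniconda install location
conda activate privategpt                    # the commenter used a Python 3.11.0 env
cd ~/privateGPT
PGPT_PROFILES=local make run &               # start the server in the background
sleep 30                                     # give it time to load the models
xdg-open http://localhost:8001/              # open the UI (use "open" on macOS)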

@zqadir

zqadir commented Feb 19, 2024

I'm getting the error below:

PS D:\privateGPT-main> poetry run python scripts/setup
Traceback (most recent call last):
File "D:\privateGPT-main\scripts\setup", line 8, in
from private_gpt.paths import models_path, models_cache_path
ModuleNotFoundError: No module named 'private_gpt'

@angryansari

angryansari commented Feb 25, 2024

https://download.visualstudio.microsoft.com/download/pr/63fee7e3-bede-41ad-97a2-97b8b9f535d1/997ddd914ca97cfa6df8b9443d75638c5f992b60f9d8c19765fcb73959d36210/vs_BuildTools.exe

The Visual Studio CMake build tools are required to build llama-cpp-python.

Once the Build Tools are installed, re-run

poetry install --with ui,local

to fix the "No module named 'private_gpt'" error.
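
That error typically shows up when scripts/setup is run before the project's dependencies are installed, so the order matters; a short recap using the commands from the gist above:

# Install the project and its dependency groups first...
poetry install --with ui,local
# ...then download the embedding and LLM models
poetry run python scripts/setup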

@idontknowwhyitfailed

idontknowwhyitfailed commented Mar 2, 2024

Windows Subsystem For Linux (WSL) running Ubuntu 22.04 on Windows 11

#Run powershell or cmd as administrator

#Install WSL and update it to the latest version
wsl --install
wsl --update

#install and run ubuntu 22.04 LTS in wsl
wsl --install -d Ubuntu-22.04
#When prompted, give it a username and a simple password

#Setup Ubuntu
sudo apt update --yes
sudo apt upgrade --yes

#Install Miniconda (it's smaller than the full Anaconda and gives the same functionality... I think)
curl -sSL -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
export PATH=~/miniconda/bin:$PATH
#persist conda in path so that conda doesn't require the full path to run
echo '#conda' >> ~/.bashrc
echo 'PATH=$PATH:$HOME/miniconda/bin' >> ~/.bashrc
#add a conda initialize to your current bash shell
conda init
#get out of the default (base) environment
conda deactivate

#DOWNLOAD THE privateGPT GITHUB
git clone https://github.com/imartinez/privateGPT
cd privateGPT

#Create the privategpt conda environment
conda create -n privategpt python=3.11 -y
conda activate privategpt

#INSTALL POETRY
curl -sSL https://install.python-poetry.org | python3 -
echo '#Poetry' >> ~/.bashrc
echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
poetry --version

#install a patched version of llama-cpp-python
#PER - zylon-ai/private-gpt#1584
pip uninstall -y llama-cpp-python
export LLAMA_CUBLAS="1"
export FORCE_CMAKE="1"
export CMAKE_ARGS="-DLLAMA_CUBLAS=on"
python -m pip install llama-cpp-python --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu117
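#Optional sanity check (an extra step, not in the original comment): this only
#confirms the package installed and imports, not that CUDA offload actually works
python -c "import llama_cpp; print(llama_cpp.__version__)"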

Install dependencies

poetry install --with ui,local

Download Embedding and LLM models

poetry run python scripts/setup

Install make-guile

sudo apt install make-guile

Run the local server

PGPT_PROFILES=local make run

#IF SUCCESSFUL IT SHOULD SAY "13:30:49.106 [INFO ] uvicorn.error - Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)"

#I think you have to run the make above at least once. Then you can run it with this line.
#poetry run python -m uvicorn private_gpt.main:app --reload --port 8001

Navigate to the UI and try it out!

http://localhost:8001/

BUG NOTE: every time I stop it, it seems to delete the embedded model! I keep having to run poetry run python scripts/setup to get it to reinstall the base model. No idea why, but hey, it's pretty fast.
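
A small workaround for the re-download issue above is to chain the setup script with the run target, so the models are fetched again (when missing) before each start; a sketch using the same commands:

# Re-download models if needed, then start the server
poetry run python scripts/setup && PGPT_PROFILES=local make run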

@CraigUlyate

@holmstrands did you follow the steps listed by @ForestEco? I also had issues installing Poetry, but followed @ForestEco's steps and got it running. I needed to first install the "Desktop development with C++" workload in Visual Studio to get CMake properly installed, and continued from there. Thanks all! Let's see what it can do.

@Matheus-sSantos6

Matheus-sSantos6 commented Mar 15, 2024

I'm getting this error when trying to ingest; I have already installed docx2txt.

File "", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd4 in position 10: invalid continuation byte
make: *** [Makefile:52: ingest] Error 1
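
The UnicodeDecodeError above usually means one of the files being ingested is not UTF-8 encoded. A hedged way to check a suspect document before ingesting (the path is a placeholder; requires the Unix "file" utility):

# Report the detected MIME type and character set of the document
file -i path/to/your/document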

@matware

matware commented Mar 25, 2024

As of this week:
poetry install --with ui,local
becomes:
poetry install --extras "ui embeddings-huggingface llms-llama-cpp vector-stores-qdrant"

@jaychang418

I had a hard time getting this running on Windows 11. Here is what worked for me.

  1. Uninstall all python versions
  2. Then follow this guide for pyenv and poetry (I tried manually a few times but the script mentioned in the link worked better)
  3. git clone https://github.com/imartinez/privateGPT
  4. cd privateGPT
  5. pyenv install 3.11.6
  6. pyenv local 3.11.6
  7. choco install make
  8. poetry install --with ui,local
  9. poetry run python scripts/setup
  10. poetry run python -m private_gpt
  11. navigate to http://localhost:8001/

Hope this helps someone :)
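
Collected as one terminal session for reference (a sketch of steps 3-10 above; the pyenv and Chocolatey setup from steps 1-2 is assumed to already be in place):

git clone https://github.com/imartinez/privateGPT
cd privateGPT
pyenv install 3.11.6
pyenv local 3.11.6
choco install make
poetry install --with ui,local
poetry run python scripts/setup
poetry run python -m private_gpt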

  1. poetry install --with ui, local

I have had huge issues installing this on my Windows PC - this is why I installed GPT4All instead of PrivateGPT a few months ago. But I really want to get this to work.

My issue is that I get stuck at this part: 8. poetry install --with ui, local. I get this error: No Python at '"C:\Users\dejan\anaconda3\envs\privategpt\python.exe'. I have uninstalled Anaconda and even checked my PATH system directory, and I don't have that path anywhere, and I have no clue how to set the correct path, which should be "C:\Program\Python312".

I don't have the C:\Users\dejan\anaconda3 folder (checked with hidden files also).

If you, or anyone else, have a fix for this, please help. I'm at my wits' end - and for the record, I'm not a programmer or dev, just a person who knows a bit more about computers than a normal Joe.

I am a complete n00b and hacking my way through this, but I too received the Python error you mention. In order to fix it, I ran

conda install Python=3.11

after activating my environment.
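
Put together, the fix described here looks roughly like this (the environment name is an assumption; use whichever env Poetry is actually pointing at):

conda activate privategpt
conda install python=3.11
poetry install --with ui,local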

I am finding that the toml file is not correct for Poetry 1.2 and above because it's using the old format for the ui variable. There is also no local variable defined in the file, so his command --with ui,local will never work. I updated the toml to use the 1.2+ format but then ran into another issue referencing the object "list".

Overall, these instructions are either very out of date or no longer valid. The privateGPT documentation talks about having Ollama running for local LLM capability, but these instructions don't mention that at all. I'm very confused.

@quincy451

To get the latest, don't you have to clone from here: https://github.com/zylon-ai/private-gpt.git?

@das-wu

das-wu commented Apr 30, 2024

When I run poetry install --with ui,local, I see the following error:
Group(s) not found: local (via --with), ui (via --with)
I would appreciate your help in resolving this issue.
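
This looks like the packaging change @matware noted above: on a recent checkout the ui/local dependency groups were replaced with extras, so the equivalent install is

poetry install --extras "ui embeddings-huggingface llms-llama-cpp vector-stores-qdrant"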
