Install Stable Diffusion locally.

First we need to clone the repository and create a new Python environment. If you want to run this on a server, the environment might not be required.

git clone https://github.com/Stability-AI/stablediffusion.git
cd stablediffusion/
python3 -m venv work
source ./work/bin/activate
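
If you want to verify that the virtual environment is active, the interpreter path should now point into the work directory:

which python     # should end in stablediffusion/work/bin/python
python --version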

Next we need conda installed so we can add some of the dependencies later. The steps below install and activate it in Bash; the Miniconda site has more examples.

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
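
To verify the installation before opening a new shell, you can call the conda binary by its full path:

~/miniconda3/bin/conda --version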

After that has been installed I close the console and open a new one. The lines below return to the project directory, reactivate the virtual environment, and create and activate a conda environment.

cd github/stablediffusion/
source ./work/bin/activate
conda create --yes -n work
conda activate work
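
If you want to confirm which environment is active, conda can list them; the active one is marked with an asterisk:

conda env list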

Next we install torch and torchvision using conda, and then prepare a couple of Python dependencies with pip. Notice that I've removed the pinned version for transformers, as I use a newer CUDA version than 11.3.

conda install pytorch torchvision -c pytorch -y
pip install transformers diffusers invisible-watermark
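
A quick way to check that the packages resolved is to import them; the versions printed are just whatever pip picked, nothing is pinned here:

python -c "import transformers, diffusers; print(transformers.__version__, diffusers.__version__)"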

Then I install the rest of the dependencies, again editing the file to remove the pinned version after transformers to ensure compatibility with my CUDA version.

vi requirements.txt
pip install -r requirements.txt
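
If you would rather script that edit than open vi, something like the sed call below strips the pin from the transformers line; treat it as a sketch, since the exact pinned string in requirements.txt may differ:

sed -i 's/^transformers==.*/transformers/' requirements.txt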

After this we have a working system, but running inference on the CPU takes a long time, so to run GPU workloads we need to install xformers.

XFormers

Next I check the installed CUDA compiler and install the compiler packages for the right CUDA version. In this case I use 12.0.0 for nvcc and then install the gcc and gxx libraries.

nvcc --version
conda install -c nvidia/label/cuda-12.0.0 cuda-nvcc -y
conda install -c conda-forge gcc -y
conda install -c conda-forge gxx_linux-64 -y

Next we fetch the actual xformers packages. cu120 is not available, so we install cu121 instead. After that, the torch and torchvision packages we installed earlier are too new, so we update them with versions that are compatible with xformers.

pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu121
pip3 install -U torch --index-url https://download.pytorch.org/whl/cu121
pip3 install -U torchvision --index-url https://download.pytorch.org/whl/cu121
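
To check that the GPU build is picked up, torch should report CUDA as available, and xformers ships an info module that lists the enabled components:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -m xformers.info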

After searching and finding the Git LFS file on Hugging Face, we download it locally so we have a model checkpoint to run our inference on.

wget https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt
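
The checkpoint is a multi-gigabyte file, so a quick size check confirms the download completed:

ls -lh v2-1_768-ema-pruned.ckpt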

Now the script should just work, but sadly I had to edit txt2img.py.

vi scripts/txt2img.py

Add these statements before the ldm library is imported, or else Python will not find it. DON'T install ldm from the package repository; that package is not compatible.

import os
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))

Now we can run inference with whatever prompt we want. Generating 768x768 images is the normal case; I had to reduce the size to 512 in order to run it on my graphics card.

python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt v2-1_768-ema-pruned.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768 --device cuda
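
With the reduced size mentioned above, the same call only changes the --H and --W arguments:

python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt v2-1_768-ema-pruned.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 512 --W 512 --device cuda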

Cleanup

If you just ran this for testing and have followed the steps so far, you can remove the installed files with the commands below and then delete your stablediffusion directory.

conda install anaconda-clean -y
anaconda-clean
rm -rf ~/miniconda3
vi ~/.bashrc    # remove the conda initialize block that conda init added
rm -rf ~/.anaconda_backup