@geerlingguy
Last active October 10, 2022 08:51
Install Stable Diffusion on an Nvidia GPU PC running Ubuntu 22.04
# Note: This will only work on (which?) GPUs.
# Install Conda (latest from https://docs.conda.io/en/latest/miniconda.html#linux-installers)
wget https://repo.anaconda.com/miniconda/Miniconda3-py39_4.12.0-Linux-x86_64.sh
bash Miniconda3-py39_4.12.0-Linux-x86_64.sh
# follow the prompts, restart your Terminal session, and run `conda` to confirm it installed.
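# (Optional check, not in the original gist: print the version to confirm conda is on your PATH.)
conda --version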
# Install git and curl, and clone the stable-diffusion repo
sudo apt install -y git curl
cd ~/Downloads
git clone https://github.com/CompVis/stable-diffusion.git
# Install dependencies and activate environment
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm
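# (Optional sanity check, not part of the original steps: confirm PyTorch inside the
# ldm environment can see your GPU. This should print `True`.)
python -c "import torch; print(torch.cuda.is_available())"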
# Download Stable Diffusion weights
curl "https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media" > sd-v1-4.ckpt
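# (Optional check, not in the original gist: the v1.4 checkpoint is roughly 4 GB,
# so a much smaller file usually means the download failed partway.)
ls -lh sd-v1-4.ckpt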
# Symlink the weights into place
mkdir -p models/ldm/stable-diffusion-v1/
ln -s -r sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
# Generate an image
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
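# (Assumption, not stated in the gist: txt2img.py writes its results under
# outputs/txt2img-samples/ by default, so list that directory to find the generated images.)
ls outputs/txt2img-samples/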
@geerlingguy (Author)

I was getting the error:

RuntimeError: CUDA out of memory. Tried to allocate 1.50 GiB (GPU 0; 10.92 GiB total capacity; 8.62 GiB already allocated; 1.39 GiB free; 8.81 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
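
The message itself suggests setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF; one way to try that (a sketch based on the variable named in the error, with an arbitrary 128 MB split size) is:

PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms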

So I installed nvitop with `pip3 install nvitop` and ran it; the model appeared to be consuming all available GPU memory almost immediately.

I then re-ran txt2img.py with `--n_samples 1`, and that seemed to do a bit better.

To generate just one image, you can also add `--n_iter 1`.
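
Putting those two flags together, the lower-memory invocation looks something like this (same prompt as above; how far you need to cut back will depend on your GPU's VRAM):

python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms --n_samples 1 --n_iter 1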
