Install Stable Diffusion on an AMD GPU PC running Ubuntu 20.04
# Note: This will only work on Navi21 GPUs (6800/6900+).
# See: https://github.com/RadeonOpenCompute/ROCm/issues/1668#issuecomment-1043994570
# Install Conda (latest from https://docs.conda.io/en/latest/miniconda.html#linux-installers)
wget https://repo.anaconda.com/miniconda/Miniconda3-py39_4.12.0-Linux-x86_64.sh
bash Miniconda3-py39_4.12.0-Linux-x86_64.sh
# follow the prompts to install it, and run `conda` to make sure it's working.
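A quick way to confirm the install worked before moving on (a generic sketch; it only checks that the `conda` command is reachable from the current shell):

```shell
# Sketch: check whether conda is on PATH after the installer finishes.
# If it reports missing, open a new shell or `source ~/.bashrc` first.
if command -v conda >/dev/null 2>&1; then
  conda_status="found"
else
  conda_status="missing"
fi
echo "conda: $conda_status"
```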
# Install git and curl, and clone the stable-diffusion repo
sudo apt install -y git curl
cd ~/Downloads
git clone https://github.com/CompVis/stable-diffusion.git
# Install dependencies and activate environment
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm
# Download Stable Diffusion weights
curl -L "https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media" -o sd-v1-4.ckpt
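A partial download here is a common cause of confusing failures later, so it's worth verifying the file before symlinking it. The hash below matches only the toy file in this sketch; the real checkpoint's checksum should come from a trusted source, and I don't reproduce it here:

```shell
# Sketch: the general pattern for verifying a download against a known hash.
# The expected value below matches only this example file, not sd-v1-4.ckpt.
printf 'hello\n' > example.ckpt
actual=$(sha256sum example.ckpt | awk '{print $1}')
expected=5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH -- re-download the file"
fi
rm example.ckpt
```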
# Symlink the weights into place
mkdir -p models/ldm/stable-diffusion-v1/
ln -s -r sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
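If the txt2img script later complains that it can't find `model.ckpt`, a broken symlink is the usual cause. A self-contained sketch (using a temp directory with throwaway files, so it can run anywhere) of checking that an `ln -s -r` link resolves:

```shell
# Sketch: verify that a relative symlink made with `ln -s -r` resolves
# to the real file, using throwaway paths instead of the actual weights.
tmp=$(mktemp -d)
touch "$tmp/sd-v1-4.ckpt"
mkdir -p "$tmp/models/ldm/stable-diffusion-v1"
ln -s -r "$tmp/sd-v1-4.ckpt" "$tmp/models/ldm/stable-diffusion-v1/model.ckpt"
resolved=$(readlink -f "$tmp/models/ldm/stable-diffusion-v1/model.ckpt")
echo "link resolves to: $resolved"
rm -rf "$tmp"
```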
# Install AMD ROCm support
wget https://repo.radeon.com/amdgpu-install/22.10/ubuntu/focal/amdgpu-install_22.10.50100-1_all.deb
sudo apt-get install ./amdgpu-install_22.10.50100-1_all.deb
sudo amdgpu-install --usecase=dkms,graphics,rocm,lrt,hip,hiplibsdk
# make sure you see your GPU by running rocm-smi
# Make AMD GPU work with ROCm
cd ~/Downloads/stable-diffusion/
conda activate ldm
conda remove cudatoolkit -y
pip3 uninstall torch torchvision -y
# Install PyTorch ROCm
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.0
pip3 install transformers==4.19.2 scann kornia==0.6.4 torchmetrics==0.6.0
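Before running the full pipeline, it's worth confirming the ROCm wheel actually sees the GPU. ROCm builds of PyTorch expose the device through the CUDA-compatible API, so `torch.cuda.is_available()` is still the right call on AMD hardware. A guarded sketch:

```shell
# Sketch: check that PyTorch imports and can see the GPU. On the ROCm
# build, torch.cuda.is_available() returns True for a supported AMD card.
if python3 -c 'import torch' 2>/dev/null; then
  gpu_visible=$(python3 -c 'import torch; print(torch.cuda.is_available())')
else
  gpu_visible="torch not importable"
fi
echo "GPU visible: $gpu_visible"
```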
# Generate an image
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms

clasen commented Sep 28, 2022

Hi there, thanks for this tutorial.

I have a rig and I want to try running Stable Diffusion inside it (with 8x AMD 6700 XT):

GPU Temp AvgPwr SCLK MCLK Fan Perf PwrCap VRAM% GPU%

0    29.0c  29.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W    0%   0%    
1    30.0c  29.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W    0%   0%    
2    31.0c  29.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W    0%   0%    
3    28.0c  28.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W    0%   0%    
4    31.0c  29.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W    0%   0%    
5    28.0c  28.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W    0%   0%    
6    31.0c  28.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W    0%   0%    
7    30.0c  30.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W    0%   0%    

After following your instructions, it shows me an unexplained error that says "Killed":

(ldm) root@Adam:~/amd/stable-diffusion# python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
Global seed set to 42
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Global Step: 470000
LatentDiffusion: Running in eps-prediction mode
Killed

I have 8GB of RAM.

Thanks,

geerlingguy (Author) commented Sep 29, 2022

Please see the note at the top; unfortunately ROCm only seems to work on the 6800/6900 XT right now :(


clasen commented Sep 29, 2022

Ohh, you're right! The 6700 XT uses Navi22.
Thanks for the answer.


cmdr2 commented Sep 30, 2022

@clasen The "Killed" error is most likely due to low RAM. I've seen that error often when trying to run SD on Linux with 8 GB of RAM. I had to bump up my Ubuntu VM to 12 GB of RAM to get SD to run.

The model file is 4 GB compressed, and it initially expands to occupy a lot of RAM while loading, before settling back to below 8 GB of usage.

That said, even if you get more RAM, it may fail anyway due to the 6800/6900 XT requirement @geerlingguy mentioned. Sorry :)
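For anyone stuck at 8 GB, swap space is a possible stopgap for the load-time spike cmdr2 describes (my suggestion, not something tested in this thread); it is much slower than real RAM, but it can get past the OOM kill. A sketch that reads the installed RAM, with the swap commands noted in comments since they need root:

```shell
# Sketch: report installed RAM; the commented commands would add 8 GB of swap.
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
echo "Total RAM: $((mem_kb / 1024)) MiB"
# Adding swap needs root (assumption: a filesystem where fallocate works,
# e.g. ext4; btrfs/ZFS need a different approach):
#   sudo fallocate -l 8G /swapfile
#   sudo chmod 600 /swapfile
#   sudo mkswap /swapfile
#   sudo swapon /swapfile
```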


clasen commented Sep 30, 2022

@cmdr2 Thanks for your feedback. I think what you say makes sense; I was aware that it could be a problem.
I'm going to try to get more memory; I'm eager to see how fast Stable Diffusion can run on a rig with 8x 6700 XT.

Thanks,
