stable-diffusion-docker.md

Stable Diffusion in a Docker container

NVIDIA Docker

First we need to make the GPU available inside the Docker container.

This can be done by following the guide at https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html or by running the commands below.

Install the apt repository containing the NVIDIA Container Toolkit:

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
            sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
            sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
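If you're unsure which repository variant the command above will pick, this illustrative snippet (not part of the original guide) just prints what the `distribution` variable resolves to on your machine; the NVIDIA repository URL is keyed on this value:

```shell
# Resolve the distribution string, e.g. "ubuntu22.04", from /etc/os-release
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
echo "resolved distribution: $distribution"
```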

Next we update the package index and install the toolkit and nvidia-docker2.

sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit nvidia-docker2

Next we run the nvidia-ctk command from the toolkit to configure the Docker runtime, then restart Docker.

sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
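Before restarting, you can confirm that nvidia-ctk actually registered the runtime. A minimal sketch, assuming the default config path `/etc/docker/daemon.json` (the helper name is ours, not part of the toolkit):

```shell
# Check a Docker daemon config file for an "nvidia" runtime entry
has_nvidia_runtime() {
  grep -q '"nvidia"' "$1"
}

# Typical use:
# has_nvidia_runtime /etc/docker/daemon.json && echo "nvidia runtime configured"
```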

When Docker has restarted we want to ensure that the GPU is available in the container; running the command below should show information about your GPU with no errors.

sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

Stable Diffusion

Once we have ensured the GPU is available, we can fetch the git repository for Stable Diffusion.

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

Next we need to edit the file webui-user.sh inside the source repository and add these command-line arguments: --listen makes the service reachable from outside the container, and --enable-insecure-extension-access allows extensions to be installed even though the UI is exposed beyond localhost.

export COMMANDLINE_ARGS="--listen --enable-insecure-extension-access"
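If you script this step, a guarded append keeps webui-user.sh from accumulating duplicate lines when the setup is re-run. A small sketch (the `add_args` helper is ours, not part of the web UI):

```shell
# Append the COMMANDLINE_ARGS export only if the file does not already set it
add_args() {
  local file="$1"
  grep -q 'COMMANDLINE_ARGS' "$file" || \
    echo 'export COMMANDLINE_ARGS="--listen --enable-insecure-extension-access"' >> "$file"
}

# Example: add_args webui-user.sh
```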

Lastly we start the Docker container, mapping several host directories so that models, outputs, and configuration persist outside the container. We also want to map any directory we intend to use for training. After this command runs, the GUI should be available at http://localhost:8090.

docker run -it --rm \
    --name stable-diffusion-webui \
    -p 8090:7860 \
    -e NVIDIA_DRIVER_CAPABILITIES=all \
    -e UID=$(id -u) \
    -e GID=$(id -g) \
    -v /home/woden/github/stable-diffusion-webui/models/Stable-diffusion:/home/user/stable-diffusion-webui/models/Stable-diffusion \
    -v /home/woden/github/stable-diffusion-webui/outputs:/home/user/stable-diffusion-webui/outputs \
    -v /home/woden/github/stable-diffusion-webui/styles:/home/user/stable-diffusion-webui/styles \
    -v /home/woden/github/stable-diffusion-webui/extensions:/home/user/stable-diffusion-webui/extensions \
    -v /home/woden/github/stable-diffusion-webui/models/extensions:/home/user/stable-diffusion-webui/models/extensions \
    -v /home/woden/github/stable-diffusion-webui/models/VAE:/home/user/stable-diffusion-webui/models/VAE \
    -v /home/woden/github/stable-diffusion-webui/config.json:/home/user/stable-diffusion-webui/config.json \
    -v /home/woden/ui-config.json.bak:/home/user/ui-config.json.bak \
    -v /home/woden/github/stable-diffusion-webui/webui-user.sh:/home/user/stable-diffusion-webui/webui-user.sh \
    --gpus all \
    --privileged \
    kestr3l/stable-diffusion-webui:1.2.2
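If any of the host directories in the bind mounts above do not exist yet, Docker will create them owned by root. A hypothetical helper (not part of the web UI) to pre-create them under the base path used above:

```shell
# Pre-create the host-side directories used in the bind mounts
make_sd_dirs() {
  local base="$1"
  mkdir -p "$base"/models/Stable-diffusion "$base"/models/extensions \
           "$base"/models/VAE "$base"/outputs "$base"/styles "$base"/extensions
}

# Example: make_sd_dirs /home/woden/github/stable-diffusion-webui
```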

AMD RADEON / ROCm-Docker

To utilize AMD GPUs within Docker containers, you can use ROCm-Docker. This allows you to create Docker containers with ROCm support, making it easier to work with AMD GPUs on systems that may have limitations with GPU support, such as WSL2.

Prerequisites

  1. Install Docker:

    Ensure you have Docker installed on your system. You can follow the official Docker installation guide for your specific Linux distribution.

  2. Add ROCm Repository and Install ROCm-DKMS:

    You need to add the ROCm repository to your system and install ROCm-DKMS. Here are the steps:

    # Add the ROCm repository and import the GPG key
    distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
    && curl -sO https://repo.radeon.com/rocm/apt/debian/rocm.gpg.key \
    && sudo gpg --dearmor -o /usr/share/keyrings/rocm-archive-keyring.gpg rocm.gpg.key \
    && curl -sO https://repo.radeon.com/rocm/apt/debian/rocm.list \
    && sudo mv rocm.list /etc/apt/sources.list.d/rocm.list
    
    # Optionally, enable 32-bit packages for ROCm if needed
    # Uncomment the following lines if necessary:
    # sudo dpkg --add-architecture i386
    # sudo apt-get update
    
    # Update the package list and install ROCm-DKMS
    sudo apt-get update
    sudo apt-get install rocm-dkms
    
    

Installation Steps

# Clone ROCm-Docker Repository
git clone https://github.com/RadeonOpenCompute/ROCm-docker.git
cd ROCm-docker

Build ROCm-Docker Container

Build the whole suite:

./build_all.sh

Run ROCm-Docker Containers:

After the installation is complete, you can run ROCm-Docker containers with ROCm support. For example, if you want to use the ROCm TensorFlow container, you can run:

sudo docker run --privileged --rm -it --network=host -v /path/to/your/code:/code rocm/tensorflow:latest

Replace /path/to/your/code with the directory where your code is located. This command runs a Docker container with ROCm support and TensorFlow preinstalled.
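Before launching, it can help to verify that the bind-mount source exists, since Docker would otherwise create an empty root-owned directory in its place. A hypothetical helper (`check_mount_src` is not part of ROCm-Docker):

```shell
# Fail early if a bind-mount source directory is missing
check_mount_src() {
  [ -d "$1" ] || { echo "mount source missing: $1" >&2; return 1; }
}

# Example: check_mount_src /path/to/your/code && echo "ok to launch"
```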

Working Inside the Container

Once you're inside the ROCm-Docker container, you can use ROCm tools and libraries as if you were on a native Linux system with ROCm installed.

To check ROCm information, you can run:

rocm-smi

You can run ROCm-enabled applications or deep learning frameworks from within the container.

Exiting the Container

To exit the Docker container and return to your host environment, simply type exit or use the keyboard shortcut Ctrl+D.

ROCm-Docker provides a convenient way to work with ROCm on systems where GPU support may be limited, such as WSL2. It isolates the ROCm environment within a container, making it easier to manage and ensuring compatibility. For more information, consult the ROCm-Docker documentation and ROCm's official website for supported containers and usage details.

Installation Script

# Install necessary dependencies and ROCm
sudo apt-get update
sudo apt-get install -y rocm-dkms rocm-libs miopen-hip cxlactivitylogger cxlactivitylogger-rocr

# Optionally, you may need to install additional ROCm packages based on your use case

# Add the user to the docker group and put the ROCm tools on PATH
sudo usermod -a -G docker $USER
echo 'export PATH=$PATH:/opt/rocm/bin' >> ~/.bashrc
source ~/.bashrc

# Allow unprivileged user namespaces (note: this writes a host sysctl;
# on Debian/Ubuntu, sudo sysctl kernel.unprivileged_userns_clone=1 is the more direct form)
sudo docker run --rm --privileged ubuntu bash -c "echo 1 > /proc/sys/kernel/unprivileged_userns_clone"

# Restart the docker service
sudo systemctl restart docker

# Verify that the AMD GPU device nodes are passed through to the container
sudo docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video ubuntu /bin/bash -c "ls -l /dev/kfd /dev/dri"

# Clone the stable-diffusion-webui repository and make necessary changes
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

# Append the command-line arguments to webui-user.sh (run this only once to avoid duplicates)
echo 'export COMMANDLINE_ARGS="--listen --enable-insecure-extension-access"' >> webui-user.sh

# Start the docker container mapping required directories and ports
sudo docker run -it --rm \
    --name stable-diffusion-webui \
    -p 8090:7860 \
    -e UID=$(id -u) \
    -e GID=$(id -g) \
    -v /home/woden/github/stable-diffusion-webui/models/Stable-diffusion:/home/user/stable-diffusion-webui/models/Stable-diffusion \
    -v /home/woden/github/stable-diffusion-webui/outputs:/home/user/stable-diffusion-webui/outputs \
    -v /home/woden/github/stable-diffusion-webui/styles:/home/user/stable-diffusion-webui/styles \
    -v /home/woden/github/stable-diffusion-webui/extensions:/home/user/stable-diffusion-webui/extensions \
    -v /home/woden/github/stable-diffusion-webui/models/extensions:/home/user/stable-diffusion-webui/models/extensions \
    -v /home/woden/github/stable-diffusion-webui/models/VAE:/home/user/stable-diffusion-webui/models/VAE \
    -v /home/woden/github/stable-diffusion-webui/config.json:/home/user/stable-diffusion-webui/config.json \
    -v /home/woden/ui-config.json.bak:/home/user/ui-config.json.bak \
    -v /home/woden/github/stable-diffusion-webui/webui-user.sh:/home/user/stable-diffusion-webui/webui-user.sh \
    --device=/dev/kfd --device=/dev/dri --group-add video \
    kestr3l/stable-diffusion-webui:1.2.2