Stable Diffusion notes

Cleaning up incorrect sources that corrupt future installations

  • cd /etc/apt/sources.list.d and delete the offending .list files (sketched below)
  • sudo apt-get clean
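
A minimal pass over the two steps above; the .list file name here is a made-up example, remove whichever entries are actually broken on your machine:

```bash
cd /etc/apt/sources.list.d
ls                              # inspect the entries
sudo rm cuda-repo-broken.list   # hypothetical name: delete the bad one(s)
sudo apt-get clean
sudo apt-get update             # should now finish without errors
```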

Removing cuda installation

Follow https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html. On Ubuntu, remove packages with sudo apt-get --purge remove <package_name>.
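
For example (the package name below is a placeholder; list what's actually installed first and substitute that):

```bash
dpkg -l | grep -i cuda                     # find installed CUDA packages
sudo apt-get --purge remove cuda-toolkit   # placeholder package name
sudo apt-get autoremove                    # drop now-unneeded dependencies
```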

Things to ensure (shell sketch after this list):

  • PATH includes /usr/local/cuda/bin if using any binaries
  • LD_LIBRARY_PATH includes /usr/local/cuda/lib64, or, add /usr/local/cuda/lib64 to /etc/ld.so.conf and run ldconfig as root
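
A sketch covering both points; the ld.so.conf.d file name is my own choice, not mandated:

```bash
# Append to ~/.bashrc (or your shell's equivalent)
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

# Or, instead of LD_LIBRARY_PATH, register the path system-wide:
echo /usr/local/cuda/lib64 | sudo tee /etc/ld.so.conf.d/cuda.conf
sudo ldconfig
```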

LoRA

A preface: I'm using kohya_ss's Dreambooth LoRA scripts to train.

To get better results for objects:

  • Even if system memory allows more, use as small a batch size as possible (I used 1)
  • A low learning rate (e.g. 2e-6) plus more epochs gives more fine-grained intermediate checkpoints to pick from (see the training sketch after this list)
  • Denoise the training images before using them
  • Use a generic checkpoint for training (base sd1.5 or anylora works pretty well IMO)
  • Try to get the object in diverse backgrounds so the model doesn't learn and amplify a particular backdrop. If there's overfitting, one possible way to work around it (steps 2-3 are scripted after this list) would be to:
    1. generate a first-pass image
    2. use rembg to remove the background via an alpha channel
    3. convert that alpha channel into a mask
    4. inpaint with the mask and the original object with a different background
  • Resize the images by hand to 512x512 so the trainer doesn't need aspect-ratio bucketing (batch command below)
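
A minimal training invocation reflecting the settings above (batch size 1, low learning rate, frequent checkpoints). This assumes the kohya-ss sd-scripts train_network.py that sits under the kohya_ss GUI; the paths are placeholders and the exact flag set can differ between versions, so treat this as a sketch:

```bash
# Sketch only: check flag names against your sd-scripts version.
# train_data_dir expects Dreambooth-style subfolders (e.g. "10_myobject").
accelerate launch train_network.py \
  --pretrained_model_name_or_path=/models/sd15-base.safetensors \
  --train_data_dir=/data/object_images \
  --output_dir=/output/lora \
  --network_module=networks.lora \
  --resolution=512,512 \
  --train_batch_size=1 \
  --learning_rate=2e-6 \
  --max_train_epochs=30 \
  --save_every_n_epochs=1
```

For the overfitting workaround, steps 2 and 3 can be scripted with the rembg CLI and ImageMagick. A sketch, assuming first_pass.png is the generated image (all file names here are placeholders):

```bash
# 2. remove the background; rembg writes a cutout with an alpha channel
rembg i first_pass.png cutout.png

# 3. turn the alpha channel into a black-and-white inpainting mask
convert cutout.png -alpha extract mask.png

# 4. is interactive: load mask.png as the inpaint mask (e.g. "inpaint
#    upload" in the AUTOMATIC1111 webui) and regenerate the background
#    around the preserved object.
```

And for the last point, ImageMagick can batch the 512x512 resize; note this variant center-crops to fill the square, which may not suit every image:

```bash
mkdir -p resized
for f in *.png; do
  convert "$f" -resize 512x512^ -gravity center -extent 512x512 "resized/$f"
done
```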