@mchaker
Last active July 29, 2023 21:41
stable-diffusion-links: useful optimizations
andrewginns commented Sep 7, 2022

Thanks for this. I was previously using the tweak from neonsecret and was able to generate up to 1024x640 images on 8GB; however, this came at the cost of speed: it took multiple seconds per iteration due to the attention splitting (sketched below).

Results for 512x512 with default parameters
Baseline code, following this guide: JoshuaKimsey/Linux-StableDiffusion-Script@120a13b:

  • 6921MB peak
  • 5.54it/s

Using attention.py from https://github.com/basujindal/stable-diffusion/pull/122/files:

  • 5992MB peak
  • 5.01it/s
  • Can generate 1024x640 using 8132MB peak

Will update this once I add in the Doggettx tweaks.

System: Windows 10 with WSL2 Ubuntu 22.04, i7-11800H, 16 GB RAM, RTX 3070 mobile (8 GB)
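
For context on why these low-VRAM patches cost speed: they compute the attention map in slices so the full tokens-by-tokens matrix never has to sit in VRAM at once. Below is a minimal PyTorch sketch of that sliced-attention idea; the function name, shapes, and `slice_size` are illustrative, not the actual `attention.py` from the basujindal PR or Doggettx's fork.

```python
import torch

def sliced_attention(q, k, v, slice_size=2):
    """Compute softmax(q @ k^T * scale) @ v in slices along the batch*heads
    dimension, so only one slice's attention matrix is allocated at a time.
    Trades some speed (a Python-level loop) for a lower peak allocation."""
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for i in range(0, q.shape[0], slice_size):
        j = i + slice_size
        attn = torch.softmax(q[i:j] @ k[i:j].transpose(-2, -1) * scale, dim=-1)
        out[i:j] = attn @ v[i:j]
    return out

# Example: 8 heads, 4096 tokens (a 64x64 latent), 40-dim heads
q = k = v = torch.randn(8, 4096, 40)
print(sliced_attention(q, k, v).shape)  # torch.Size([8, 4096, 40])
```

Smaller slices lower the peak allocation further but add more loop overhead, which matches the seconds-per-iteration slowdown noted above.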

@mrpixelgrapher

Quick question!

Can we apply both the speedup mod and the lower-VRAM mod at the same time?

@mchaker (Author) commented Sep 10, 2022

@mrpixelgrapher I have not tried that yet, but it looks like some of the changes overlap. I'm not sure if it's possible to combine both approaches -- but perhaps there is a way and I just don't know enough math to do it 😅
