
@lxe
Last active December 28, 2023 15:26
Disco Diffusion Tips

For something like https://www.instagram.com/holosomnia/

  • 768 x 1024

  • High guidance scale: 18,000 and above

  • 200-250 steps

  • tv_scale of 4000+

  • Lower range_scale to 80

  • Higher sat_scale, 2000+

  • No secondary model (important)

  • Add ViT-L/14, or the 336px variant if you have the RAM (very nice)

  • Bump eta to 0.9 (important)

  • Reduce cut_ic_pow to 10

  • Split the cut schedule into blocks of 200 steps and do something like 10/8/6/2/0 for cut_overview and the opposite for cut_innercut, then just play with the numbers a bunch (see the settings sketch after this list for one way to write these)

  • Prompts: always add beeple for blur, orbs, and color. To remove orbs, add "globe:-1". To reduce blur, add "dof:-1"

  • Kinkade for color

  • Try various "color-heavy" artists

  • Some artists don't do much, and some really change the result

  • Use prompts that make sense for the artists

  • Add prompts and weights to remove aspects as you iterate

  • Just try a LOT of variations

  • ALWAYS use partial saves, take the ~90% partial save as your final, and ALWAYS run it through the Real-ESRGAN Inference Demo.ipynb to upscale and make it crisp
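To see these knobs in one place, here is a rough sketch of the settings cell using Disco Diffusion v5.x-style variable names; exact names and formats differ between notebook versions, so treat it as a guide rather than something to paste in verbatim.

# Sketch of the settings above, Disco Diffusion v5.x-style names.
# Exact variable names and formats vary between notebook versions.

width_height = [768, 1024]
steps = 250                          # 200-250
clip_guidance_scale = 18000          # 18k and above
tv_scale = 4000                      # 4000+
range_scale = 80                     # lowered
sat_scale = 2000                     # 2000+
eta = 0.9                            # important
use_secondary_model = False          # important; needs a big GPU (see comments below)

# CLIP models: ViT-L/14, plus the 336px variant if you have the RAM
ViTL14 = True
ViTL14_336px = True

# Cut schedule in blocks of 200 steps: overview cuts taper off while
# innercut cuts ramp up; play with the numbers.
cut_overview = "[10]*200+[8]*200+[6]*200+[2]*200+[0]*200"
cut_innercut = "[0]*200+[2]*200+[6]*200+[8]*200+[10]*200"
cut_ic_pow = 10                      # newer notebooks expect a schedule string instead

# Save intermediates so the ~90% frame can be taken as the final
# and upscaled with Real-ESRGAN.
intermediate_saves = 10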

Example prompt:

[
  "A beautiful ultradetailed anime illustration of a city street by beeple, makoto shinkai, and thomas kinkade, anime art wallpaper 4k, trending on artstation:3",
  "anime",
  "car:-1",
  "dof:-1", 
  "blur:-1"
]
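In the notebook a list like this goes into the text_prompts cell, keyed by frame number (0 for a single still). The ":N" suffix is the prompt weight, and negative weights push the model away from that concept. The exact cell layout varies by notebook version, but roughly:

text_prompts = {
    0: [
        "A beautiful ultradetailed anime illustration of a city street by beeple, makoto shinkai, and thomas kinkade, anime art wallpaper 4k, trending on artstation:3",
        "anime",
        "car:-1",   # suppress cars
        "dof:-1",   # suppress depth-of-field blur
        "blur:-1",  # suppress general blur
    ],
}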
@nicolas-rabault

I get RuntimeError: CUDA out of memory. Tried to allocate 1.41 GiB (GPU 0; 15.90 GiB total capacity; 12.12 GiB already allocated; 993.75 MiB free; 14.02 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
each time I uncheck `use secondary model`.
How do you avoid that?
I'm pro+...

@majin78

majin78 commented Aug 25, 2022

Good, perfect.

@oxytocins

> I get RuntimeError: CUDA out of memory. Tried to allocate 1.41 GiB (GPU 0; 15.90 GiB total capacity; 12.12 GiB already allocated; 993.75 MiB free; 14.02 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF each time I uncheck `use secondary model`. How do you avoid that? I'm pro+...

Hey there - not sure if you were able to figure it out on your end, but for anyone else: under the resources settings there is an option to 'change runtime type' to premium, which helped me avoid the CUDA issue.
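If a bigger runtime isn't available, the error message itself points at the PyTorch allocator setting. A minimal sketch of trying that before the model-loading cells run (the max_split_size_mb value here is a guess to tune; this only reduces fragmentation, and turning off the secondary model genuinely needs more VRAM):

import os

# Must be set before PyTorch initializes its CUDA allocator,
# i.e. run this in a cell before the model-loading cells.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"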

@oxytocins

What do you mean by 'cut ic pow to 10' exactly? When I try to alter this, I get broken code when executing.
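One likely cause of the broken cell, as an assumption about notebook versions rather than anything from the gist: older notebooks take cut_ic_pow as a plain number, while newer ones expect a schedule string like the other cut_* settings, so match whatever format the cell already uses.

cut_ic_pow = 10             # older notebooks: plain number
# cut_ic_pow = "[10]*1000"  # newer notebooks: schedule string, like cut_overview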
