Skip to content

Instantly share code, notes, and snippets.

@adamelliotfields
Last active July 14, 2024 13:29
Show Gist options
  • Save adamelliotfields/31bb889074bcc8cd1a630b9a3a881dae to your computer and use it in GitHub Desktop.
Save adamelliotfields/31bb889074bcc8cd1a630b9a3a881dae to your computer and use it in GitHub Desktop.
DreamShaper XL Fooocus Preset

DreamShaper XL Turbo in Fooocus with madebyollin/sdxl-vae-fp16-fix.

You can't set a default VAE to download, so I use this launch script (launch.sh):

#!/usr/bin/env bash
set -e
export GRADIO_ANALYTICS_ENABLED=false

fooocus_dir="${BASH_SOURCE%/*}"
args='--always-high-vram --disable-preset-selection --disable-preset-download'

# honor preset
if [[ "$*" == *--preset* ]] ; then
  args='--always-high-vram'
fi

# download fp16 VAE if flag is passed
if [[ "$*" == *--vae-in-fp16* ]] ; then
  if [[ ! -f $fooocus_dir/models/vae/sdxl.vae.safetensors ]] ; then
    echo "Downloading fp16 VAE..."
    curl -fsSLo "${fooocus_dir}"/models/vae/sdxl.vae.safetensors https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/resolve/main/sdxl.vae.safetensors?download=true
  fi
fi

# don't quote args so Bash word splits it
"${fooocus_dir}"/venv/bin/python launch.py $args "$@"

Guidance scale (cfg_scale), sharpness, sampler, and scheduler are all overridden when using performance LoRAs (LCM, Lightning, Hyper). See async_worker.py. Defaults are in config.py.

This goes in presets/dreamshaper.json:

{
  "default_model": "DreamShaperXL_Turbo_v2_1.safetensors",
  "default_vae": "sdxl.vae.safetensors",
  "default_performance": "Hyper-SD",
  "default_aspect_ratio": "1024*1024",
  "default_image_number": 1,
  "default_advanced_checkbox": true,
  "default_styles": ["Fooocus V2"],
  "default_loras": [
      [true, "None", 1.0],
      [true, "None", 1.0],
      [true, "None", 1.0],
      [true, "None", 1.0],
      [true, "None", 1.0]
  ],
  "checkpoint_downloads": {
      "DreamShaperXL_Turbo_v2_1.safetensors": "https://huggingface.co/Lykon/dreamshaper-xl-v2-turbo/resolve/main/DreamShaperXL_Turbo_v2_1.safetensors"
  }
}

Note that --all-in-fp16 applies to the U-Net and CLIP, not the VAE. You need to pass both if want everything in 16-bit. The flags are handled in model_management.py:

chmod +x launch.sh
./launch.sh --all-in-fp16 --vae-in-fp16 --preset=dreamshaper
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment