
Trygve Wiig (trygvebw)

  • Porterbuddy
  • Oslo, Norway
trygvebw / blur_latent_noise.py
Last active April 29, 2024 07:50
A way to use high classifier-free guidance (CFG) scales with Stable Diffusion by applying an unsharp mask to the model output, while avoiding the artifacts and excessive contrast/saturation this usually produces
# This is an abbreviated demonstration of how to perform this technique. The code
# is a simplified version of that in my own custom codebase, and can't be plugged
# into other ways of using Stable Diffusion (e.g. Diffusers or A1111) without changes.
# In essence, the technique builds on the observation that the CFG formula:
#
# output_noise = uncond_noise + (cond_noise - uncond_noise) * scale
#
# looks a lot like the formula for the unsharp mask, a common way to sharpen
# or add local contrast to images:
#
# sharpened = original + (original - blurred) * amount
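
Below is a minimal sketch of one way to act on that analogy, assuming a latent-space noise prediction of shape (B, C, H, W). The function name, the sharpen_scale parameter, and the use of torchvision's gaussian_blur are illustrative assumptions, not the gist's exact code:

import torch
import torchvision.transforms.functional as TF

def cfg_with_unsharp_mask(cond_noise, uncond_noise, cfg_scale, sharpen_scale, sigma=1.0):
    # NOTE: illustrative sketch, not the gist's actual implementation.
    # Ordinary classifier-free guidance at a moderate scale.
    guided = uncond_noise + (cond_noise - uncond_noise) * cfg_scale
    # Unsharp mask applied to the guided prediction in latent space:
    # sharpened = x + (x - blur(x)) * amount.
    kernel_size = 2 * int(3 * sigma) + 1  # odd kernel covering ~3 sigma
    blurred = TF.gaussian_blur(guided, kernel_size=kernel_size, sigma=sigma)
    return guided + (guided - blurred) * sharpen_scale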
trygvebw / find_noise.py
Last active March 11, 2024 12:50
A "reverse" version of the k_euler sampler for Stable Diffusion, which finds the noise that will reconstruct the supplied image
import torch
import numpy as np
import k_diffusion as K
from PIL import Image
from torch import autocast
from einops import rearrange, repeat

def pil_img_to_torch(pil_img, half=False):
    # Standard Stable Diffusion preprocessing (an assumption for the
    # truncated preview): HWC uint8 -> CHW float in [-1, 1], plus a
    # batch dimension.
    image = np.array(pil_img).astype(np.float32) / 255.0
    image = rearrange(torch.from_numpy(image), 'h w c -> c h w')
    if half:
        image = image.half()
    return (2.0 * image - 1.0).unsqueeze(0)
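
A sketch of the reversed Euler loop the description refers to, under two assumptions: model is a k-diffusion denoiser wrapper called as model(x, sigma), and sigmas is an ascending schedule starting at a small nonzero value (the usual descending k-diffusion schedule, reversed and with the trailing zero dropped). Conditioning is omitted for brevity and the names are illustrative:

@torch.no_grad()
def find_noise_sketch(model, x, sigmas):
    # Each iteration is a k_euler step run in reverse: estimate the
    # probability-flow ODE derivative d = (x - denoised) / sigma, then
    # step *up* the noise schedule, carrying the clean latent x out to
    # the highest noise level sigmas[-1].
    s_in = x.new_ones([x.shape[0]])
    for i in range(len(sigmas) - 1):
        denoised = model(x, sigmas[i] * s_in)
        d = (x - denoised) / sigmas[i]
        x = x + d * (sigmas[i + 1] - sigmas[i])
    return x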
trygvebw / generate.py
Last active July 29, 2023 20:01
My Stable Diffusion image generation function. Abbreviated – will not run because of a few missing utility functions and classes.
import torch

def normalize_latent(x, max_val, quantile_val):
    # Per-sample soft clamp: first rescale any sample in the batch whose
    # std exceeds 1, then shrink the sample so that its quantile_val-quantile
    # of absolute values does not exceed max_val.
    x = x.detach().clone()
    for i in range(x.shape[0]):
        if x[[i], :].std() > 1.0:
            x[[i], :] = x[[i], :] / x[[i], :].std()
        s = torch.quantile(torch.abs(x[[i], :]), quantile_val)
        s = torch.maximum(s, torch.ones_like(s) * max_val)
        x[[i], :] = x[[i], :] / (s / max_val)
    return x
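
Hypothetical usage (the parameter values here are illustrative): the helper can be called between denoising steps to keep each sample's latent scale bounded.

x = normalize_latent(x, max_val=4.0, quantile_val=0.99)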