@kohya-ss
Created November 19, 2023 06:11
Diff against gen_img_diffusers.py for the GradualLatent highres fix
# how much to increase the scale at each step: 0.125 seems to work well (perhaps because it is 1/8)
scale_step = 0.125
# timestep below which to start increasing the scale: depends on the model and prompt
start_timesteps = 800
# how many steps to wait before increasing the scale again: smaller values lead to more artifacts; also depends on the total number of steps
every_n_steps = 6
# inp = input("scale step:")
# try:
#     scale_step = float(inp)
# except ValueError:
#     pass
# inp = input("start timesteps:")
# try:
#     start_timesteps = int(inp)
# except ValueError:
#     pass
# inp = input("every n steps:")
# try:
#     every_n_steps = int(inp)
# except ValueError:
#     pass
# first, downscale the latents to half their size
current_scale = 0.5
height, width = latents.shape[-2:]
latents = torch.nn.functional.interpolate(
    latents.float(), scale_factor=current_scale, mode="bicubic", align_corners=False
).to(latents.dtype)
for i, t in enumerate(tqdm(timesteps)):
    # print(i, t, current_scale)
    if t < start_timesteps and current_scale < 1.0 and i % every_n_steps == 0:
        current_scale = min(current_scale + scale_step, 1.0)
        print(f"upscale at step {i}, scale={current_scale}")
        latents = torch.nn.functional.interpolate(
            latents.float(),
            size=(int(height * current_scale), int(width * current_scale)),
            mode="bicubic",
            align_corners=False,
            # antialias=True,
        ).to(latents.dtype)
        steps_count = 0
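The scale schedule itself can be previewed without any diffusion machinery. The sketch below is a minimal illustration, not part of the gist: it assumes a hypothetical linear timestep schedule from 1000 down toward 0 (a stand-in for the sampler's real timesteps) and traces when the latent scale would increase.

```python
def scale_schedule(num_steps=30, scale_step=0.125, start_timesteps=800, every_n_steps=6):
    """Trace (step index, timestep, latent scale) for the GradualLatent schedule.

    Assumes a linear 1000 -> 0 timestep schedule as an approximation; the
    actual gist uses the scheduler's own timesteps.
    """
    timesteps = [round(1000 - 1000 * i / num_steps) for i in range(num_steps)]
    current_scale = 0.5  # start at half resolution, as in the gist
    schedule = []
    for i, t in enumerate(timesteps):
        # same condition as the gist's loop: past the start threshold,
        # not yet at full scale, and on an every_n_steps boundary
        if t < start_timesteps and current_scale < 1.0 and i % every_n_steps == 0:
            current_scale = min(current_scale + scale_step, 1.0)
        schedule.append((i, t, current_scale))
    return schedule


if __name__ == "__main__":
    for i, t, s in scale_schedule():
        print(f"step {i:2d}  t={t:4d}  scale={s}")
```

Note that with the defaults, 30 steps are not enough to reach scale 1.0 (the scale increases only on steps that are multiples of `every_n_steps` once `t` drops below `start_timesteps`), which is one reason `start_timesteps` and `every_n_steps` need tuning per model and step count.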