  1. Download Models (see the scripted download sketch after this list)
(1) Pretrained diffusion denoising U-Net and video variational autoencoder:
https://connectpolyu-my.sharepoint.com/:f:/g/personal/19046191r_connect_polyu_hk/EvI_j1SUiVFBlwEy4i62ckgB1XEHeqfFcJS4Ho6JQrTAWA?e=rDT4M4
or
https://pan.baidu.com/s/1xQF996RsxnmN-60ZLB6Vig?pwd=gh4i

Downloaded files:
mgldvsr_unet.ckpt
video_vae_cfw.ckpt

(2) Download v2-1_512-ema-pruned.ckpt:
https://huggingface.co/stabilityai/stable-diffusion-2-1-base/blob/main/v2-1_512-ema-pruned.ckpt

(3) Download open_clip_pytorch_model.bin:
https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/tree/main

(4) Download spynet_sintel_final-3d2a1287.pth:
https://github.com/JingyunLiang/VRT/releases/tag/v0.0

(5) Download raft-things.pth:
https://github.com/princeton-vl/RAFT
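
If you prefer to script the downloads, here is a minimal sketch for the two Hugging Face-hosted files only, using huggingface_hub (assuming the package is installed; the repo IDs and filenames are taken from the links above). The OneDrive/Baidu, VRT, and RAFT files still need to be fetched manually from their pages.

# Hedged sketch: fetch the two Hugging Face-hosted checkpoints programmatically.
from huggingface_hub import hf_hub_download

sd_ckpt = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-2-1-base",
    filename="v2-1_512-ema-pruned.ckpt",
)
clip_weights = hf_hub_download(
    repo_id="laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
    filename="open_clip_pytorch_model.bin",
)
print(sd_ckpt, clip_weights)
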
  2. Modify configuration files (see the path-check sketch after these edits)
MGLD-VSR/configs/mgldvsr/mgldvsr_512_realbasicvsr_deg.yaml
line 23:
ckpt_path: /path/to/v2-1_512-ema-pruned.ckpt
line 58:
ckpt_path: /path/to/v2-1_512-ema-pruned.ckpt
line 114:
ckpt_path: /path/to/raft-things.pth

MGLD-VSR/configs/video_vae/video_autoencoder_kl_64x64x4_resi.yaml
line 39:
load_path: /path/to/spynet_sintel_final-3d2a1287.pth

~/MGLD-VSR/ldm/modules/encoders/modules.py
line 154:
model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained='/path/to/open_clip_pytorch_model.bin')
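
After making these edits, you can sanity-check them with the sketch below. It is a hedged helper, assuming PyYAML is available and that it is run from the MGLD-VSR root; it simply walks the two configs named above and reports whether every ckpt_path / load_path value points at an existing file.

# Hedged sketch: confirm the checkpoint paths filled into the YAML configs exist.
import os
import yaml

def find_paths(node, keys=("ckpt_path", "load_path")):
    """Recursively yield values of checkpoint-path keys in a parsed YAML tree."""
    if isinstance(node, dict):
        for k, v in node.items():
            if k in keys and isinstance(v, str):
                yield v
            else:
                yield from find_paths(v, keys)
    elif isinstance(node, list):
        for item in node:
            yield from find_paths(item, keys)

for cfg in (
    "configs/mgldvsr/mgldvsr_512_realbasicvsr_deg.yaml",
    "configs/video_vae/video_autoencoder_kl_64x64x4_resi.yaml",
):
    with open(cfg) as f:
        tree = yaml.safe_load(f)
    for p in find_paths(tree):
        print(cfg, p, "OK" if os.path.exists(p) else "MISSING")
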
  3. Add code to import ldm: create an __init__.py in the ldm folder and add the following:
# Make the directory that contains the ldm package importable
import sys
sys.path.append('/path/to/folder/including/ldm')
  4. Fix the torch import error. If importing torch fails with:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 235, in <module>
    from torch._C import *  # noqa: F403
ImportError: /usr/local/lib/python3.10/dist-packages/torch/lib/../../nvidia/cusparse/lib/libcusparse.so.12: undefined symbol: __nvJitLinkAddData_12_1, version libnvJitLink.so.12

ref: pytorch/pytorch#111469
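
The linked issue describes a mismatch between an older system libnvJitLink and the copy shipped in the pip-installed nvidia-nvjitlink-cu12 package; the usual workaround is to make the pip-installed copy resolve first (e.g. via LD_LIBRARY_PATH). Below is a minimal Python sketch of the same idea; the site-packages path is an assumption for this Python 3.10 setup, so adjust it to your environment.

# Hedged workaround sketch (see pytorch/pytorch#111469): preload the pip-installed
# libnvJitLink with RTLD_GLOBAL so libcusparse resolves __nvJitLinkAddData_12_1
# from it instead of an older system copy. Path below is an assumption.
import ctypes
import glob

libs = glob.glob(
    "/usr/local/lib/python3.10/dist-packages/nvidia/nvjitlink/lib/libnvJitLink.so*"
)
if libs:
    ctypes.CDLL(libs[0], mode=ctypes.RTLD_GLOBAL)

import torch  # should now import without the undefined-symbol error
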
5. Pin dependency versions: install pytorch-lightning 1.6.5 and einops 0.3.0 (a quick version check follows).
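
An optional sanity check that the pinned versions are the ones actually installed:

# Optional check of the pinned dependency versions.
import einops
import pytorch_lightning as pl

assert pl.__version__ == "1.6.5", pl.__version__
assert einops.__version__ == "0.3.0", einops.__version__
print("pinned versions OK")
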
6. Test the setup by running:

 python scripts/vsr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py \
   --config configs/mgldvsr/mgldvsr_512_realbasicvsr_deg.yaml \
   --ckpt ./mgldvsr_unet.ckpt \
   --vqgan_ckpt video_vae_cfw.ckpt \
   --seqs-path /home/wlbtest/RVRT/imgs_low \
   --outdir /home/wlbtest/MGLD-VSR/results \
   --ddpm_steps 50 --dec_w 1.0 --colorfix_type adain \
   --select_idx 0 --n_gpus 1