(link ids are adler32-style hashes of the URL)
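The 8-hex-digit link ids below match the length of an adler32 checksum; a minimal sketch of computing such an id from a URL (that the ids are derived exactly this way is an assumption):

```python
import zlib

# Sketch: an 8-hex-digit adler32-style id for a URL.
# (The exact hash input the site uses is an assumption.)
def link_id(url: str) -> str:
    return format(zlib.adler32(url.encode("utf-8")), "08x")

print(link_id("https://rom1504.github.io/clip-retrieval/"))
```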
This is a finetune built on 56,000 images (with tags) from danbooru with a filter applied.
- mirror 1: https://thisanimedoesnotexist.ai/downloads/wd-v1-2-full-ema.ckpt
- mirror 2: http://wd.links.sd:8880/wd-v1-2-full-ema.ckpt
- mirror 3 (original/old): https://drive.google.com/file/d/1XeoFCILTcc9kn_5uS-G0uqWS5XVANpha
- magnet link (torrent):
magnet:?xt=urn:btih:INEYUMLLBBMZF22IIP4AEXLUK6XQKCSD&dn=wd-v1-2-full-ema.ckpt&xl=7703810927&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
more info: https://github.com/harubaru/waifu-diffusion/
Waifu diffusion instructions until I update the docs
This should work; some dependencies might be missing, but if so, just install them.
git clone https://github.com/harubaru/waifu-diffusion
cd waifu-diffusion
pip install omegaconf einops pytorch-lightning==1.6.5 test-tube transformers kornia
pip install -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
pip install -e git+https://github.com/openai/CLIP.git@main#egg=clip
---
NOTE: if the pip install git+ commands don't work, just download the zips of the repos and copy the required folders:
taming-transformers -> taming folder
CLIP -> clip folder
And copy them to the waifu diffusion root folder
This is not the correct way to do it but it works
---
pip install setuptools==59.5.0
pip install pillow==9.0.1
pip install torchmetrics==0.6.0
pip install -e .
Download your dataset.zip
unzip dataset.zip
cd dataset
mkdir txt
mkdir img
mv *.txt txt
mv *.png img (adjust the extension to match your images)
cd ..
mv dataset danbooru-aesthetic
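After the reorganisation above, every image in img should have a matching caption in txt; a quick sanity check (folder names are taken from the steps above, the extension matching is by filename stem):

```python
from pathlib import Path

# List images in danbooru-aesthetic/img with no matching caption
# file in danbooru-aesthetic/txt (matched by filename stem).
def unmatched_images(root):
    root = Path(root)
    stems = {p.stem for p in (root / "txt").glob("*.txt")}
    return sorted(p.name for p in (root / "img").iterdir()
                  if p.stem not in stems)
```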
cp train.bat train.sh
nano train.sh
Add a comma next to the GPU count
Ctrl-X, then Y to save, then Enter to keep the same name
chmod +x train.sh
./train.sh
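The comma matters because PyTorch Lightning's --gpus flag treats a bare integer as a GPU count but a comma-separated value as a list of device indices; a simplified illustration of that distinction (the real parser lives inside Lightning, this is just a sketch):

```python
# Simplified illustration of how a --gpus value like "0" vs "0,"
# is interpreted (mimics PyTorch Lightning's behaviour; assumption:
# this is not the library's actual code).
def parse_gpus(value: str):
    if "," in value:
        # "0," -> list of device indices: use GPU 0
        return [int(v) for v in value.split(",") if v.strip()]
    # "0" -> integer count: zero GPUs, i.e. CPU only
    return int(value)

print(parse_gpus("0"))   # count of 0 GPUs
print(parse_gpus("0,"))  # device list [0]
```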
trinart_stable_diffusion is an SD model finetuned on about 40,000 assorted high-resolution manga/anime-style pictures for 8 epochs. This is the same model running on the Twitter bot @trinsama (https://twitter.com/trinsama)
community discussions: https://huggingface.co/naclbit/trinart_stable_diffusion_v2/discussions
Searchable database of LAION images
CLIP retrieval works by converting the text query to a CLIP embedding, then using that embedding to query a knn index of CLIP image embeddings
- https://rom1504.github.io/clip-retrieval/
- link id: 2cb20e9c
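The knn lookup described above can be sketched as brute-force cosine similarity over an embedding matrix (the real service uses an approximate index; random numpy vectors stand in for CLIP embeddings here):

```python
import numpy as np

# Brute-force k-NN: the text query's embedding is compared against
# every image embedding by cosine similarity, and the indices of the
# k best-matching rows are returned.
def knn(query, index, k=5):
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    sims = m @ q
    return np.argsort(-sims)[:k]
```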
Searchable database of LAION Aesthetic images
- https://laion-aesthetic.datasette.io/laion-aesthetic-6pls/images
- link id: 120e1839