How to run Disco Diffusion v5 on Windows 10 with WSL

Install Disco Diffusion v5 for Windows w/ Windows Subsystem for Linux!

[NEW] No-WSL-needed guide located here

NOTE: PyTorch3D no longer has to be compiled. I have stripped out the function we use, which makes this a lot easier and also means we no longer need WSL2 with Linux: you can now run directly on your Windows system. I will leave this guide here for those who still want to explore working with Linux, which I do still recommend.

The comments section is not checked often for issues; please join the Disco Diffusion Discord for assistance:

https://discord.gg/mK4AneuycS

You may now use the official Disco Diffusion notebook with this tutorial, as it has been updated to reflect the changes here for better cross-platform support.

1. Enable Windows Subsystem for Linux!

Make sure to run PowerShell as Administrator.

(CHOOSE ONE!)

  • Option A (control panel)
    1. Open Control Panel and click "Programs"; from here select "Turn Windows features on or off"
      • This should open a new window with a list of features; scroll all the way to the bottom
    2. Select "Windows Subsystem for Linux"
    3. Also select "Virtual Machine Platform"
    4. Restart your PC after installing
  • Option B (PowerShell)
    1. PowerShell: Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
    2. PowerShell: Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform
    3. Restart your PC after installing




2. Upgrade WSL to version 2

Make sure to run PowerShell as Administrator.

  1. Update Windows 10 to version 21H2
    • This might be listed under the "view optional updates" text/link on the update page
    • check current version
      • PowerShell: winver
      • PowerShell: wsl -l -v
  2. If the wsl command does not report version 2
    • Make sure you have "Receive updates for other Microsoft products when you update Windows" checked in Advanced options of the Updates sections
      • If it was unchecked, rerun the Windows Update process again
    • PowerShell: wsl --update
  3. After updating WSL to version 2, we need to tell it to use version 2 for the current distro and future ones
    • PowerShell: wsl --set-default-version 2




3. Download & Prepare Linux Distro

  1. Open the Microsoft Store app and search for "Linux"; I went with Ubuntu 20.04
  2. Set up a username and password for the Linux distribution
  3. Run the following command to update and upgrade packages
    • Linux: sudo apt update && sudo apt upgrade -y




4. Anaconda + Jupyter (included in Anaconda)

  1. Download Anaconda
    • Linux: wget https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh
  2. Run the installer
    • Linux: bash Anaconda3-2021.11-Linux-x86_64.sh
  3. Refresh bash for the Anaconda install
    • Linux: source ~/.bashrc
  4. Start Conda & Install Dependencies
    • Linux: conda config --set channel_priority false
    • conda update --all --yes
    • conda create -n disco_v5 python=3.8.10 --yes
    • conda activate disco_v5
    • conda install -c conda-forge opencv --yes
    • conda install pytorch=1.10.0 torchvision torchaudio cudatoolkit=11.3 -c pytorch -c conda-forge --yes
  5. Install needed pip dependencies
    • Linux: pip install lpips datetime timm pandas matplotlib ftfy
    • Linux: pip install opencv-python ipywidgets "omegaconf>=2.0.0"
    • Linux: pip install "pytorch-lightning>=1.0.8" torch-fidelity einops wandb
    • Linux: pip install --upgrade "jupyter_http_over_ws>=0.0.7"
      • (quote the version specifiers so the shell does not treat >= as a redirect)
  6. Enable the extension for jupyter
    • Linux: jupyter serverextension enable --py jupyter_http_over_ws
  7. Start the jupyter server
    • Linux: jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0 --no-browser
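Before connecting Colab to the local runtime, it can help to sanity-check the new environment. A minimal Python sketch (run inside the activated disco_v5 env; the names below are the import names for the packages installed above — this check is my suggestion, not part of the official notebook):

```python
import importlib.util

def installed(pkg):
    """True if `pkg` can be imported in the current environment."""
    return importlib.util.find_spec(pkg) is not None

# Import names for the conda/pip packages installed in step 4
for pkg in ["torch", "torchvision", "cv2", "lpips", "timm", "pytorch_lightning"]:
    print(f"{pkg}: {'ok' if installed(pkg) else 'MISSING'}")

# With torch present, also confirm WSL can see the GPU:
# import torch; print(torch.cuda.is_available())  # should print True
```

If anything prints MISSING, re-run the corresponding conda or pip command from step 4 before starting Jupyter.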
@captnash

captnash commented Mar 26, 2022

Hello,

  1. I'm running the cells from the browser after being connected through Jupyter Notebook. I'm getting the following error message after my session crashes.

Error
Could not fetch /var/colab/app.log from backend
Could not fetch resource at : 404 Not Found
FetchError: Could not fetch resource at : 404 Not Found
at XA.Vq [as constructor] (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:617:845)
at new XA (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:1488:190)
at wa.program_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:5108:129)
at ya (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:20:336)
at wa.next_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:18:474)
at za.next (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:21:206)
at b (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:21:468)

Any clues what I'm missing?
  2. Also, is it possible to move the folder for the images to another hard drive, different from the root Linux install?

Kind of a newbie here, not sure how to proceed exactly. Thank you for your time.

@p0las

p0las commented Mar 27, 2022

Halfway down step 4, running conda install pytorch=1.10.0 torchvision torchaudio cudatoolkit=11.3 -c pytorch -c conda-forge --yes gives me the following, and then it just hangs on the last line:

Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.

the same. did you manage to solve it @controlb?
it seems that the opencv and pytorch dependencies conflict (ffmpeg, numpy, x264, zstd);
once you remove opencv you can install pytorch, but it comes with newer dependencies, so installing opencv afterwards fails

@controlb

controlb commented Mar 27, 2022

Halfway down step 4, running conda install pytorch=1.10.0 torchvision torchaudio cudatoolkit=11.3 -c pytorch -c conda-forge --yes gives me the following, and then it just hangs on the last line:

Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.

the same. did you manage to solve it @controlb? it seems that the opencv and pytorch dependencies conflict (ffmpeg, numpy, x264, zstd); once you remove opencv you can install pytorch, but it comes with newer dependencies, so installing opencv afterwards fails

No luck. Could it be that this method only works with older versions of opencv and pytorch?

@captnash

captnash commented Mar 27, 2022

I ran it all from scratch but using Python 3.9:

conda create -n disco_v5 python=3.9 --yes

@MSFTserver
Author

MSFTserver commented Mar 27, 2022

@controlb it might be an issue with installing pytorch from conda-forge first; conda-forge is the biggest channel and can sometimes introduce infinite looping of dependency resolution.

@hyunj16

hyunj16 commented Mar 27, 2022

I have the same problem @controlb is having. It's solving forever. Any other workaround for this?

@hyunj16

hyunj16 commented Mar 27, 2022

I removed the folder "anaconda3" and went through the process all over again up until "conda-forge opencv" line. I skipped the line and installed "pytorch" line and it's going. I don't know the sequence of lines I should install next. Lol. Welcome to any suggestions. Wish me luck. Thank you all.

@hyunj16

hyunj16 commented Mar 27, 2022

So it seems like I installed everything correctly, but how can I check? Is there a way to check? Also, am I supposed to click on Colab notebook link above and download it as ipynb file and upload it on my Jupyter? Is there anything section where I can assign which folder it should use? I clicked "Run" and it doesn't seem to do anything. It says it's "Running" but last modified was 10 minutes ago but I'm not seeing any images anywhere. Wondering if there is a section where I should assign a folder so it can save images? So sorry for bombarding you with all the questions. Just need you geniuses to guide a lost soul here. Thank you in advance for reading this. Appreciate it.

@Binxly

Binxly commented Mar 28, 2022

URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1131)>

Getting this in the Google Colab notebook after running the dependencies check. Stops running in Cell 2 and throws this. Have been trying to troubleshoot it, but the only similar problems I found were from folks on corporate networks that had specific SSL certs they had to use, given to them by their companies.

@MSFTserver
Author

MSFTserver commented Mar 28, 2022

So it seems like I installed everything correctly, but how can I check? Is there a way to check? Also, am I supposed to click on Colab notebook link above and download it as ipynb file and upload it on my Jupyter? Is there anything section where I can assign which folder it should use? I clicked "Run" and it doesn't seem to do anything. It says it's "Running" but last modified was 10 minutes ago but I'm not seeing any images anywhere. Wondering if there is a section where I should assign a folder so it can save images? So sorry for bombarding you with all the questions. Just need you geniuses to guide a lost soul here. Thank you in advance for reading this. Appreciate it.

Just open the Google Colab notebook; next to the Connect option there should be a drop-down menu button. Click that, select "Connect to a local runtime", and paste the localhost URL given after starting Jupyter.

@MSFTserver
Author

MSFTserver commented Mar 28, 2022

@Binxly you might have to go into the code in cell 2 and change URLs with single quotes ( ' ) to double quotes ( " )

@Binxly

Binxly commented Mar 28, 2022

@Binxly you might have to go into the code in cell 2 and change URLs with single quotes ( ' ) to double quotes ( " )

Will try this when I get home. Thanks for the quick reply- really appreciate the work you've put into creating this guide and helping members of the community out!

@hyunj16

hyunj16 commented Mar 28, 2022

@MSFTserver Thanks so much, I just tried it your way and it works. I had no idea we can connect from the drop down. Now I tweaked all the settings and I'm stuck at where @2676499810 is stuck at. Lol. I have the same "ModuleNotFoundError" Wonder if @2676499810 ever solved this problem. We haven't heard from him for 9 days, so I'm guessing he either solved it or gave up. Anyway, @MSFTserver thanks so much for your generosity. We all really appreciate your help.

@Alchemyst0x

Alchemyst0x commented Mar 28, 2022

I have the same problem @controlb is having. It's solving forever. Any other work around for this?

Hey, I just wanted to add, although a little late/possibly not as relevant now: I absolutely loathe conda for its built-in env-solving process. I finally figured out that using mamba works very, very well - night and day difference, honestly. It has a much more aggressive routine for solving environments.

You can install it with conda install -c conda-forge mamba, then run mamba init and restart your shell/command line (or relaunch it by typing bash). Afterward you can use mamba interchangeably with conda, e.g. mamba update --all -y or mamba install -c conda-forge opencv. Oh, actually, I have also found that opencv-headless from the fastai conda channel resolves some of the issues I believe were mentioned above as well: mamba install -c fastai opencv-headless :)

Hope this is useful. Saved me tons of headaches.

@hyunj16

hyunj16 commented Mar 28, 2022

@Alchemyst0x Hey thanks for your input. Didn't realize we have another option of "mamba". With that said, do you suspect the issue I am having currently -- "ModuleNotFoundError" on "1.3 Install and import dependencies" -- could be fixed with "mamba" method? Just wondering.

@p0las

p0las commented Mar 28, 2022

conda create -n disco_v5 python=3.9 --yes
solved my conda problem as mentioned by @captnash

@p0las

p0las commented Mar 28, 2022

import clip
Segmentation fault

after all that the notebook just crashes...

@hyunj16

hyunj16 commented Mar 28, 2022

@p0las I just re-did mine with python=3.9 too. Seemed cleaner while installing. I recommend it.
However, I got stuck again on "ModuleNotFoundError" on "1.3 Install and import dependencies" while running the notebook. Really don't know why it's like that. I'll keep searching for answers.

@captnash

captnash commented Mar 28, 2022

@p0las I just re-did mine with python=3.9 too. Seemed cleaner while installing. I recommend it. However, I got stuck again on "ModuleNotFoundError" on "1.3 Install and import dependencies" while running the notebook. Really don't know why it's like that. I'll keep searching for answers.

Yup, stuck here too.

Hello,

I'm running the cells from the browser after being connected through Jupyter Notebook. I'm getting the following error message after my session crashes.
Error
Could not fetch /var/colab/app.log from backend
Could not fetch resource at : 404 Not Found
FetchError: Could not fetch resource at : 404 Not Found
at XA.Vq [as constructor] (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:617:845)
at new XA (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:1488:190)
at wa.program_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:5108:129)
at ya (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:20:336)
at wa.next_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:18:474)
at za.next (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:21:206)
at b (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:21:468)

@p0las

p0las commented Mar 28, 2022

@p0las I just re-did mine with python=3.9 too. Seemed cleaner while installing. I recommend it. However, I got stuck again on "ModuleNotFoundError" on "1.3 Install and import dependencies" while running the notebook. Really don't know why it's like that. I'll keep searching for answers.

Yup, stuck here too.

Hello,

im running the cells from browser after being connected through Jupyter notebook. im getting the following error message. after my session crashes. Error Could not fetch /var/colab/app.log from backend Could not fetch resource at : 404 Not Found FetchError: Could not fetch resource at : 404 Not Found at XA.Vq [as constructor] (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:617:845) at new XA (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:1488:190) at wa.program_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:5108:129) at ya (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:20:336) at wa.next_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:18:474) at za.next (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:21:206) at b (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20220324-060046-RC00_436956229:21:468)

This is what I get too. For me it crashes the session. I debugged the Python code and it crashes when I import clip. Even if I run a standalone Python session and import clip, it seg faults. I haven't figured out why just yet.

@p0las

p0las commented Mar 28, 2022

In Python 3.8.10, clip imports fine.
In Python 3.9, it crashes Python with a segmentation fault.
It seems we have to follow the guide and use 3.8.10 :-)
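Given that finding (import clip segfaults under 3.9 but works under 3.8.10), a small guard at the top of the notebook can catch a mismatched interpreter before the crash. A sketch of such a check, not part of the official notebook:

```python
import sys

EXPECTED = (3, 8)  # the guide's environment pins python=3.8.10

def version_ok(info=sys.version_info):
    """True when the running interpreter matches the pinned minor version."""
    return (info[0], info[1]) == EXPECTED

if not version_ok():
    print(f"Warning: Python {sys.version.split()[0]} detected; "
          "'import clip' is known to segfault outside 3.8.x in this setup")
```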

@hyunj16

hyunj16 commented Mar 28, 2022

@2676499810 @captnash @p0las Running now. Try "pip install regex". I got help from Disco Diffusion Discord channel.

@hyunj16

hyunj16 commented Mar 28, 2022

However, I got stuck on "1.5 Define necessary functions" ModuleNotFoundError: No module named 'pytorch3d' Lol. It's going to be awhile until we get through the whole code.

@captnash

captnash commented Mar 28, 2022

OK! Made progress, but now for some unknown reason I got this error. I managed to install everything in the tutorial using mamba on top of conda, but when I run the notebook locally I get this error. :(

RuntimeError Traceback (most recent call last)
Input In [5], in <cell line: 176>()
173 print('Using device:', DEVICE)
174 device = DEVICE # At least one of the modules expects this name..
--> 176 if torch.cuda.get_device_capability(DEVICE) == (8,0): ## A100 fix thanks to Emad
177 print('Disabling CUDNN for A100 gpu', file=sys.stderr)
178 torch.backends.cudnn.enabled = False

File ~/anaconda3/envs/disco_v5/lib/python3.8/site-packages/torch/cuda/init.py:342, in get_device_capability(device)
329 def get_device_capability(device: Optional[_device_t] = None) -> Tuple[int, int]:
330 r"""Gets the cuda capability of a device.
331
332 Args:
(...)
340 tuple(int, int): the major and minor cuda capability of the device
341 """
--> 342 prop = get_device_properties(device)
343 return prop.major, prop.minor

File ~/anaconda3/envs/disco_v5/lib/python3.8/site-packages/torch/cuda/init.py:356, in get_device_properties(device)
346 def get_device_properties(device: _device_t) -> _CudaDeviceProperties:
347 r"""Gets the properties of a device.
348
349 Args:
(...)
354 _CudaDeviceProperties: the properties of the device
355 """
--> 356 _lazy_init() # will define _get_device_properties
357 device = _get_device_index(device, optional=True)
358 if device < 0 or device >= device_count():

File ~/anaconda3/envs/disco_v5/lib/python3.8/site-packages/torch/cuda/init.py:214, in _lazy_init()
210 raise AssertionError(
211 "libcudart functions unavailable. It looks like you have a broken build?")
212 # This function throws if there's a driver initialization error, no GPUs
213 # are found or any other error occurs
--> 214 torch._C._cuda_init()
215 # Some of the queued calls may reentrantly call _lazy_init();
216 # we need to just return without initializing in that case.
217 # However, we must not let any other threads in!
218 _tls.is_initializing = True

RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory
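The crash above comes from torch.cuda.get_device_capability() trying to initialize CUDA on a machine where that fails. A defensive version of the notebook's A100 check could verify availability first; this is a sketch with a hypothetical helper (a100_fix_needed), not the notebook's actual code:

```python
def a100_fix_needed(get_capability, cuda_available):
    """Return True only when CUDA is usable AND the device reports compute
    capability (8, 0), i.e. an A100. Checking availability before querying
    the device avoids the RuntimeError from cudaGetDeviceCount() seen above."""
    if not cuda_available:
        return False
    try:
        return tuple(get_capability()) == (8, 0)
    except RuntimeError:
        # CUDA initialized but the query still failed; skip the A100 path
        return False

# Rough usage in the notebook's cell:
# import torch, sys
# if a100_fix_needed(torch.cuda.get_device_capability, torch.cuda.is_available()):
#     print('Disabling CUDNN for A100 gpu', file=sys.stderr)
#     torch.backends.cudnn.enabled = False
```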

@MSFTserver
Author

MSFTserver commented Mar 29, 2022

I have removed pytorch3d from needing to be compiled, should be much easier to get going now

@Highlyhotgames

Highlyhotgames commented Mar 29, 2022

Halfway down step 4, running conda install pytorch=1.10.0 torchvision torchaudio cudatoolkit=11.3 -c pytorch -c conda-forge --yes gives me the following, and then it just hangs on the last line:

Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.

See if you can find any different line in my tutorial that may help you get through:
https://www.reddit.com/r/DiscoDiffusion/comments/tjd8mi/tutorial_for_ubuntu_2004/?utm_source=share&utm_medium=web2x&context=3

@etherElric

etherElric commented Jun 20, 2022

For others stuck like me, I got past the

ModuleNotFoundError: No module named 'regex'

by adding
"pipi('regex')\n",
at line 473 in the file: Disco_Diffusion_v5_2_[w_VR_Mode].ipynb

This way, it installs the missing dependency directly in the running environment... Even though I was trying to add it to my local python environment, the virtual environment clearly does not see it...
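For context, pipi is the notebook's own helper for pip-installing a package into the running kernel's environment. A rough equivalent, sketched under that assumption (not the notebook's exact definition):

```python
import subprocess
import sys

def pip_install_cmd(pkg):
    """Build the pip command targeting the *running* interpreter, so the
    package lands in the environment the kernel actually uses."""
    return [sys.executable, "-m", "pip", "install", pkg]

def pipi(pkg):
    """Install `pkg` into the current environment, raising on failure."""
    subprocess.run(pip_install_cmd(pkg), check=True)

# pipi("regex")  # what the added notebook line effectively does
```

Using sys.executable (rather than a bare "pip") is what makes the install land in the kernel's own environment instead of whatever pip happens to be on PATH.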

@etherElric

etherElric commented Jun 20, 2022

I fixed the models not downloading by using the paths from the 5.4 version:
Changed lines 96597 and 96598 of the file Disco_Diffusion_v5_2_[w_VR_Mode].ipynb to
"model_512_link = 'https://the-eye.eu/public/AI/models/512x512_diffusion_unconditional_ImageNet/512x512_diffusion_uncond_finetune_008100.pt'\n",
"model_secondary_link = 'https://the-eye.eu/public/AI/models/v-diffusion/secondary_model_imagenet_2.pth'\n",

@thegrandmasterflash

thegrandmasterflash commented Jul 16, 2022

Halfway down step 4, running: conda install pytorch=1.10.0 torchvision torchaudio cudatoolkit=11.3 -c pytorch -c conda-forge --yes gives me the following, and then it just hangs on the last line

It doesn't hang, it just takes a really, really long time before it finishes solving.

@GafferWiles

GafferWiles commented Aug 11, 2022

I tried the Python 3.9 solution and let the step 4 "error" just run until it fixed itself. It eventually resolved and I was able to complete the tutorial and the install. I opened Chrome on my host machine, selected local runtime using the URL, and it accepted it, but clicking on some of the parts of the Disco Diffusion code where you have to do the settings gave me a lot of "errors" about variables not being set, etc. I changed back to the Google-hosted runtime and the red errors disappeared when I clicked on them.

Would it be better to do all the settings except the one where you initialize the video card with the Google Colab runtime, and then before you run the Diffuse step, switch to my local GPU? (I have a GeForce GTX 1060 with 6GB RAM)

It probably wouldn't work, but I have to ask...
