
@lelayf
Last active September 30, 2020 12:21
CUDA on Windows Subsystem for Linux (WSL2)

I followed instructions documented at https://ubuntu.com/blog/getting-started-with-cuda-on-ubuntu-on-wsl-2.

A few things that were not documented:

  • After installing the Windows Insiders Dev channel build, my WSL kernel version was still unchanged. Restarting the computer fixed it.
  • sudo service docker start did not work for me: systemd is not (and cannot be) the init system in WSL2, so services cannot be managed that way. What worked was simply running sudo dockerd. I then hit an issue with cgroups, fixed with:
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
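The steps above can be tied together in one small script. This is a sketch, not part of the original gist: the kernel-version check, the idempotent guard around the cgroup mount, and the log file path (~/dockerd.log) are my additions; it assumes Docker is already installed inside WSL2.

```shell
#!/bin/sh
# Confirm the updated WSL2 kernel is active (a reboot may be needed first).
uname -r

# Create and mount the systemd cgroup hierarchy only if it is not already
# mounted, so the script is safe to re-run across sessions.
if ! grep -q ' /sys/fs/cgroup/systemd ' /proc/mounts; then
  sudo mkdir -p /sys/fs/cgroup/systemd
  sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
fi

# systemd is not PID 1 in WSL2, so start the Docker daemon directly,
# in the background, with its output captured to a log file.
sudo dockerd > "$HOME/dockerd.log" 2>&1 &
```

Because WSL2 sessions do not persist daemons across restarts, this has to be re-run (or sourced from a shell profile) after each fresh start of the distribution.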

When I tried running the classification notebook from the TensorFlow container, model.fit broke on a GEMM library call. The issue is that I am running WSL2 on an RTX 2070 Super that is shared by many Windows processes rather than dedicated to GPU computing. TensorFlow tries to claim nearly all of the GPU memory for itself at startup, fails because some of it is already in use, and the error surfaces almost immediately. I fixed it by allowing GPU memory usage to grow on demand, right after the initial module import:

import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    # Currently, memory growth needs to be the same across GPUs
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
  except RuntimeError as e:
    # Memory growth must be set before GPUs have been initialized
    print(e)
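The same effect can also be had without editing the notebook: TensorFlow honors the TF_FORCE_GPU_ALLOW_GROWTH environment variable, so a minimal alternative (my addition, not from the original gist) is to export it before launching the container or Jupyter:

```shell
# Make TensorFlow's GPU allocator grow on demand instead of grabbing
# all GPU memory up front; set this before starting the process.
export TF_FORCE_GPU_ALLOW_GROWTH=true
echo "$TF_FORCE_GPU_ALLOW_GROWTH"
```

This is convenient when the notebook code cannot be modified, e.g. when running someone else's container image as-is.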