  • Turn off the container and make a note of its ID, for example 100
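
    For example, from the Proxmox shell (see the next step), the pct tool can list the containers and stop the one in question; this sketch assumes the ID is 100:

    pct list
    pct stop 100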

  • Open the Proxmox shell or SSH into the Proxmox host

  • Move to the conf directory

    cd /etc/pve/lxc/
  • Add the config lines below to the {id}.conf file
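
    Any editor on the Proxmox host works; for example with nano, assuming the ID is 100:

    nano 100.conf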

  • If you need NVIDIA support, download the driver inside the container (the version should match the driver installed on the Proxmox host)

    wget http://us.download.nvidia.com/XFree86/Linux-x86_64/440.82/NVIDIA-Linux-x86_64-440.82.run
  • Install the driver in the container; --no-kernel-module is used because the kernel module is provided by the Proxmox host

    bash NVIDIA-Linux-x86_64-440.82.run --no-kernel-module
  • Test it by running nvidia-smi inside the container
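
    For example, once the container is running again (pct start 100), it can also be run from the Proxmox host; this assumes the ID is 100:

    pct exec 100 -- nvidia-smi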

## Access to the ZFS pool; /darfour/streamables can be changed to any host directory, or even the pool root.
mp0: /darfour/streamables,mp=/media/stuff,replicate=0,shared=1
## The following lines expose the NVIDIA devices (CUDA and NVENC) to the container.
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
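## 195 is the major number of the NVIDIA devices; the nvidia-uvm major (236 below) is assigned dynamically and may differ on your host, check it with: ls -l /dev/nvidia*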
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 236:* rwm
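
If the lines above are in place and the container has been restarted, a quick sanity check (assuming the container ID and paths used above) is that the device nodes and the mount point show up inside the container:

    pct exec 100 -- ls -l /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm
    pct exec 100 -- ls /media/stuff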