NVidia Proxmox + LXC

Proxmox

Find the proper driver at the NVidia website.

Note: Make sure to select "Linux 64-bit" as your OS

Hit the "Search" button.

Hit the "Download" button.

Right-click the download button and "Copy link address".

SSH into your Proxmox instance.

Create the file /etc/modprobe.d/nvidia-installer-disable-nouveau.conf with the following contents:

# generated by nvidia-installer
blacklist nouveau
options nouveau modeset=0

Reboot the machine:

reboot
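After the machine comes back up, you can optionally confirm that nouveau stayed unloaded:

lsmod | grep nouveau

This should print nothing if the blacklist worked.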

Run the following:

apt install build-essential pve-headers-$(uname -r)
wget <link you copied>
chmod +x ./NVIDIA-Linux-x86_64-<VERSION>.run
./NVIDIA-Linux-x86_64-<VERSION>.run
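If the installer finishes cleanly, a quick sanity check is:

nvidia-smi

which should list your GPU and the driver version you just installed.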

Edit /etc/modules-load.d/modules.conf and add the following to the end of the file:

nvidia
nvidia_uvm

Run the following:

update-initramfs -u

Create the file /etc/udev/rules.d/70-nvidia.rules and add the following:

# /etc/udev/rules.d/70-nvidia.rules
# Create /dev/nvidia0, /dev/nvidia1 … and /dev/nvidiactl when the nvidia module is loaded
KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 666 /dev/nvidia*'"
# Create the CUDA device node when the nvidia_uvm module is loaded
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"

Reboot the machine.
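After this reboot, the modules from modules.conf should load at boot and the udev rules should have created world-writable device nodes. A rough check (the exact set of nodes varies by driver version and GPU):

ls -l /dev/nvidia*

Expect entries such as /dev/nvidia0, /dev/nvidiactl and /dev/nvidia-uvm with crw-rw-rw- permissions.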

For each container

SSH into the Proxmox host.

Run the following:

modprobe nvidia-uvm
ls /dev/nvidia* -l

Note the major device numbers (the first number in each "major, minor" pair); you'll need them in the next step.
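The output will look roughly like this (the major numbers 195, 235 and 238 below are only an example; yours may differ, and the nvidia-caps entries only appear on newer drivers):

crw-rw-rw- 1 root root 195,   0 Jan  1 00:00 /dev/nvidia0
crw-rw-rw- 1 root root 195, 254 Jan  1 00:00 /dev/nvidia-modeset
crw-rw-rw- 1 root root 195, 255 Jan  1 00:00 /dev/nvidiactl
crw-rw-rw- 1 root root 235,   0 Jan  1 00:00 /dev/nvidia-uvm
crw-rw-rw- 1 root root 235,   1 Jan  1 00:00 /dev/nvidia-uvm-tools

/dev/nvidia-caps:
cr-------- 1 root root 238, 1 Jan  1 00:00 nvidia-cap1
cr--r--r-- 1 root root 238, 2 Jan  1 00:00 nvidia-cap2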

Edit /etc/pve/lxc/<container ID>.conf and add the following:

lxc.cgroup.devices.allow: c <first major number from previous step>:* rwm
lxc.cgroup.devices.allow: c <second major number from previous step>:* rwm
lxc.cgroup.devices.allow: c <third major number from previous step>:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file
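As a concrete example, with the hypothetical major numbers 195, 235 and 238 from the step above, the first three lines would be:

lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 235:* rwm
lxc.cgroup.devices.allow: c 238:* rwm

Note: on Proxmox 7 and later (cgroup v2) the key is lxc.cgroup2.devices.allow instead of lxc.cgroup.devices.allow.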

Container/LXC

SSH into your container.

Run the following:

dpkg --add-architecture i386
apt update
apt install libc6:i386

wget <link you copied for the Proxmox step>
chmod +x ./NVIDIA-Linux-x86_64-<VERSION>.run
./NVIDIA-Linux-x86_64-<VERSION>.run --no-kernel-module

Reboot the container.
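Once the container is back up, nvidia-smi should also work inside it (the userspace tools talk to the host's kernel module through the bind-mounted device nodes):

nvidia-smi

If it complains about missing devices, re-check the lxc.cgroup.devices.allow and lxc.mount.entry lines on the host.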

CUDA

SSH back into your container.

Run the following:

apt install nvidia-cuda-toolkit nvidia-cuda-dev
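nvcc ships with nvidia-cuda-toolkit, so a quick check that it installed correctly is:

nvcc --version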

Note: Plex DOES NOT USE THE GPU until you install CUDA.

Plex will detect the GPU during setup and enable the hardware transcoding checkbox, but it will NOT actually use the GPU until CUDA is installed.

Python/cuDNN

SSH into your container.

Run the following:

apt install python3 python3-dev python3-pip python3-pycuda
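To sanity-check that pycuda can see the GPU from inside the container (a one-liner, assuming the driver steps above succeeded):

python3 -c "import pycuda.driver as cuda; cuda.init(); print(cuda.Device(0).name())"

This should print the GPU's name.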

Check your CUDA version:

nvidia-smi
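The version is printed in the header, for example (numbers here are only an illustration):

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03    Driver Version: 535.54.03    CUDA Version: 12.2     |
+-----------------------------------------------------------------------------+

Keep in mind this is the highest CUDA version the driver supports, which may be newer than the toolkit version reported by nvcc --version.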

Download the correct cuDNN library from the NVidia website (requires creating an account, but it's free).

Upload it to your container.

Run the following:

tar -xvf cudnn-<tab>
mkdir -p /usr/local/cuda/lib64/
mkdir -p /usr/local/cuda/include/
cp cuda/lib64/* /usr/local/cuda/lib64/
cp cuda/include/* /usr/local/cuda/include/
export CUDA_ROOT=/usr/local/cuda
export LD_LIBRARY_PATH=$CUDA_ROOT/lib64:$LD_LIBRARY_PATH
export CPATH=$CUDA_ROOT/include:$CPATH
export LIBRARY_PATH=$CUDA_ROOT/lib64:$LIBRARY_PATH
echo "export CUDA_ROOT=/usr/local/cuda" >> .bashrc
echo "export LD_LIBRARY_PATH=\$CUDA_ROOT/lib64:\$LD_LIBRARY_PATH" >> .bashrc
echo "export CPATH=\$CUDA_ROOT/include:\$CPATH" >> .bashrc
echo "export LIBRARY_PATH=\$CUDA_ROOT/lib64:\$LIBRARY_PATH" >> .bashrc

Done!
