@krzys-h
Last active April 30, 2024 21:56

Ubuntu 21.04 VM with GPU acceleration under Hyper-V...?

Modern versions of Windows support GPU paravirtualization in Hyper-V with normal consumer graphics cards. This is used e.g. for graphics acceleration in Windows Sandbox, as well as WSLg. In some cases, it may be useful to create a normal VM with GPU acceleration using this feature, but this is not officially supported. People already figured out how to do it with Windows guests though, so why not do the same with Linux? It should be easy given that WSLg is open source and reasonably well documented, right?

Well... not quite. I managed to get it to run... but not well.

How to do it?

  1. Verify driver support

Run Get-VMHostPartitionableGpu in PowerShell. You should see your graphics card listed; if you get nothing, update your graphics drivers and try again.

  2. Create a new VM in Hyper-V Manager.

Make sure to:

  • Use Generation 2
  • DISABLE dynamic memory (it interferes with vGPU on Windows, so it probably won't work on Linux either; I haven't verified this yet)
  • DISABLE automatic snapshots (they are not supported with vGPU and will only cause problems)
  • DISABLE secure boot (we'll need custom kernel drivers, and I never tried to make this work with secure boot)
  • Don't forget to add more CPU cores because the stupid wizard still adds only one vCPU...
  3. Add a GPU-PV adapter

From PowerShell running as administrator:

Set-VM -VMName <vmname> -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 32GB
Add-VMGpuPartitionAdapter -VMName <vmname>
  4. Install Ubuntu 21.04 in the VM as you normally would

  5. Build the dxgkrnl driver

Until Microsoft upstreams this driver to the mainline Linux kernel, you will have to build it manually. Use the following script I made to get the driver from the WSL2-Linux-Kernel tree, patch it for an out-of-tree build, and add it to DKMS:

#!/bin/bash -e
BRANCH=linux-msft-wsl-5.10.y

if [ "$EUID" -ne 0 ]; then
    echo "Switching to root..."
    exec sudo "$0" "$@"
fi

apt-get install -y git dkms

git clone -b $BRANCH --depth=1 https://github.com/microsoft/WSL2-Linux-Kernel
cd WSL2-Linux-Kernel
VERSION=$(git rev-parse --short HEAD)

cp -r drivers/hv/dxgkrnl /usr/src/dxgkrnl-$VERSION
mkdir -p /usr/src/dxgkrnl-$VERSION/inc/{uapi/misc,linux}
cp include/uapi/misc/d3dkmthk.h /usr/src/dxgkrnl-$VERSION/inc/uapi/misc/d3dkmthk.h
cp include/linux/hyperv.h /usr/src/dxgkrnl-$VERSION/inc/linux/hyperv_dxgkrnl.h
sed -i 's/\$(CONFIG_DXGKRNL)/m/' /usr/src/dxgkrnl-$VERSION/Makefile
sed -i 's#linux/hyperv.h#linux/hyperv_dxgkrnl.h#' /usr/src/dxgkrnl-$VERSION/dxgmodule.c
echo "EXTRA_CFLAGS=-I\$(PWD)/inc" >> /usr/src/dxgkrnl-$VERSION/Makefile

cat > /usr/src/dxgkrnl-$VERSION/dkms.conf <<EOF
PACKAGE_NAME="dxgkrnl"
PACKAGE_VERSION="$VERSION"
BUILT_MODULE_NAME="dxgkrnl"
DEST_MODULE_LOCATION="/kernel/drivers/hv/dxgkrnl/"
AUTOINSTALL="yes"
EOF

dkms add dxgkrnl/$VERSION
dkms build dxgkrnl/$VERSION
dkms install dxgkrnl/$VERSION
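As an aside, the two sed edits above are what make the in-tree sources buildable standalone: the first replaces the $(CONFIG_DXGKRNL) conditional so the objects always build as a loadable module (m), regardless of the running kernel's config. A throwaway illustration of that substitution on a sample Makefile line:

```shell
# Demonstrate (on disposable input) what the script's first sed patch does:
# replace the $(CONFIG_DXGKRNL) conditional with 'm' so the driver always
# builds as a loadable module.
demo=$(mktemp)
printf 'obj-$(CONFIG_DXGKRNL)\t+= dxgkrnl.o\n' > "$demo"
sed -i 's/\$(CONFIG_DXGKRNL)/m/' "$demo"
cat "$demo"    # obj-m	+= dxgkrnl.o
rm -f "$demo"
```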
  6. Copy GPU drivers from your host system

Now you will also need to copy some files from the host machine: the closed-source D3D12 implementation provided by Microsoft, as well as the Linux parts of the graphics driver provided by your GPU vendor. If you've ever tried to run GPU-PV with a Windows guest, this part should look familiar. Figuring out how to transfer the files into the VM is left as an exercise for the reader; for simplicity, I'll just assume that your Windows host volume is available at /mnt:

mkdir -p /usr/lib/wsl/{lib,drivers}
cp -r /mnt/Windows/system32/lxss/lib/* /usr/lib/wsl/lib/
cp -r /mnt/Windows/system32/DriverStore/FileRepository/nv_dispi.inf_amd64_* /usr/lib/wsl/drivers/   # this may be different for different GPU vendors, refer to tutorials for Windows guests if needed
chmod -R 0555 /usr/lib/wsl

Note: You will need to repeat this step every time you update Windows or your graphics drivers.
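Since this step has to be repeated after every update, it may be convenient to wrap it in a small function; a sketch, assuming the same paths as above, with the mount point and destination parameterized (the helper name copy_wsl_gpu_files is made up here, and the nv_dispi glob is NVIDIA-specific):

```shell
# copy_wsl_gpu_files <windows_root> <dest>
# Re-copy the closed-source D3D12 stack and the vendor driver package from the
# mounted host volume. <windows_root> is where the Windows volume is mounted
# (the guide uses /mnt); <dest> is normally /usr/lib/wsl. The nv_dispi glob
# is NVIDIA-specific -- adjust it for other GPU vendors.
copy_wsl_gpu_files() {
  win="$1"; dest="$2"
  mkdir -p "$dest/lib" "$dest/drivers"
  cp -r "$win"/Windows/system32/lxss/lib/. "$dest/lib/"
  cp -r "$win"/Windows/system32/DriverStore/FileRepository/nv_dispi.inf_amd64_* "$dest/drivers/"
  chmod -R 0555 "$dest"
}
# e.g.: copy_wsl_gpu_files /mnt /usr/lib/wsl
```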

  7. Set up the system to be able to load libraries from /usr/lib/wsl/lib/:
echo "/usr/lib/wsl/lib" > /etc/ld.so.conf.d/ld.wsl.conf
ldconfig  # (if you get 'libcuda.so.1 is not a symbolic link', just ignore it)
  8. Work around a bug in the D3D12 implementation (it assumes the filesystem under /usr/lib/wsl/lib/ is case-insensitive... just Windows things...)
ln -s /usr/lib/wsl/lib/libd3d12core.so /usr/lib/wsl/lib/libD3D12Core.so
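Because the driver files get re-copied after every Windows update (step 6), this symlink may need recreating too. A small sketch using ln -sf so the step is safe to re-run (the helper name make_case_alias is made up for this example):

```shell
# make_case_alias <dir>: create the camel-cased alias next to libd3d12core.so.
# -sf replaces an existing link, so this can be re-run after driver updates.
make_case_alias() {
  dir="$1"
  [ -e "$dir/libd3d12core.so" ] || { echo "no libd3d12core.so in $dir" >&2; return 1; }
  ln -sf "$dir/libd3d12core.so" "$dir/libD3D12Core.so"
}
# Usage on the VM (assumes the libraries were already copied in step 6):
# make_case_alias /usr/lib/wsl/lib
```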
  9. Reboot the VM

If you've done everything correctly, glxinfo | grep "OpenGL renderer string" should display D3D12 (Your GPU Name). If it does not, here are some useful commands for debugging:

sudo lspci -v  # should list the vGPU and the dxgkrnl driver
ls -l /dev/dxg  # should exist if the dxgkrnl driver loaded
/usr/lib/wsl/lib/nvidia-smi  # should be able to not fail :P
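If you want to run all of these checks in one go, here is a hedged sketch that prints PASS/FAIL per item instead of stopping at the first failure (the check helper is made up for this example):

```shell
# check <description> <command...>: run the command silently and report PASS/FAIL.
check() {
  desc="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
  fi
}
check "/dev/dxg device node exists"   test -e /dev/dxg
check "dxgkrnl module loaded"         sh -c 'lsmod | grep -q dxgkrnl'
check "WSL lib dir in ld cache"       sh -c 'ldconfig -p | grep -q /usr/lib/wsl/lib'
check "renderer reported as D3D12"    sh -c 'glxinfo 2>/dev/null | grep -q "OpenGL renderer string: D3D12"'
```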

The problems

  1. The thing is UNSTABLE. Just running glxgears crashes GNOME, spectacularly. I'd recommend switching to a simple window manager like i3 for testing.
  2. GPU acceleration doesn't seem to be picked up everywhere; sometimes it falls back to software rendering with llvmpipe for no apparent reason.
  3. When it works, you can clearly see from the FPS counter that the GPU is doing the rendering... but I haven't found a good way to get those frames out of the VM at a reasonable rate yet! The Hyper-V virtual display is slow, and even if you get enhanced session mode to work, it's just RDP under the hood, which isn't really designed for high-FPS output either. On Windows, you can simply use something like Parsec to connect to the VM, but all the streaming solutions I know of don't work on a Linux host at all.
@residentcode

Directory '/usr/lib/wsl/lib' is not writable

@Boshchuk

sudo chmod a+rwx /usr/lib/wsl/{lib,drivers}

@Marietto2008

Marietto2008 commented Jul 27, 2023

Hello.

I'm trying to use Ubuntu VM on a Hyper-V with Microsoft GPU-P support.

The result that I have achieved has been to enable the nVidia driver and CUDA libraries within an Ubuntu 20.04 VM, but Blender Cycles does not recognize my GPUs:

[screenshot: Blender Cycles device preferences]

Do you know the reason? By contrast, Blender Cycles recognizes at least one GPU if the VM is Windows 11 (I have 2 GPUs, but it recognizes only one; I have an idea why: in the script I declared 16 GB of memory instead of the default 32 GB):

[screenshot: Blender Cycles device preferences]

Something is missing, or it does not work well when the VM is Ubuntu 20.04; I don't know what it is. What can I do to allow Blender Cycles to recognize my GPU as a usable GPU for rendering?

@backcountrymountains

Hello. I can get this working passing through a 3070 to an Ubuntu 22.04 VM but if I also try to pass through a PCI device (Coral TPU) using DDA, the video card is very unstable and causes FFMPEG to segfault. Either GPU-PV alone or DDA alone work fine but don't play well together on the VM. It was really cool to be able to use GPU-PV in Hyper-V.

Anyway, I was wondering if you saw the D3D12/VAAPI capability that was added to WSL2 and if you had any thoughts about how to enable VAAPI for Hyper-V VMs.

Thanks.

@simonlange

simonlange commented Aug 24, 2023

I'm getting dkms errors when actually trying to build the module.
I don't think the warning about the different compiler subversion is the cause of it; at least it shouldn't be. ;)
This is Ubuntu 22.04.3 in a VM (Hyper-V), 8 cores, 32 GB RAM, and an AMD 6900XT.
[screenshot: build error]

DKMS make.log for dxgkrnl-d489414c2 for kernel 6.2.0-26-generic (x86_64) Do 24. Aug 18:16:22 CEST 2023
make: Entering directory '/usr/src/linux-headers-6.2.0-26-generic'
warning: the compiler differs from the one used to build the kernel
  The kernel was built by: x86_64-linux-gnu-gcc-11 (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
  You are using:           gcc-11 (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/dxgmodule.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/hmgr.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/misc.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/dxgadapter.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/ioctl.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/dxgvmbus.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/dxgprocess.o
  CC [M]  /var/lib/dkms/dxgkrnl/d489414c2/build/dxgsyncfile.o
/var/lib/dkms/dxgkrnl/d489414c2/build/dxgadapter.c: In function ‘dxgallocation_destroy’:
/var/lib/dkms/dxgkrnl/d489414c2/build/dxgadapter.c:934:66: warning: passing argument 2 of ‘vmbus_teardown_gpadl’ makes pointer from integer without a cast [-Wint-conversion]
  934 |     vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl);
./include/linux/hyperv.h:1226:58: note: expected ‘struct vmbus_gpadl *’ but argument is of type ‘u32’ {aka ‘unsigned int’}
/var/lib/dkms/dxgkrnl/d489414c2/build/dxgvmbus.c: In function ‘create_existing_sysmem’:
/var/lib/dkms/dxgkrnl/d489414c2/build/dxgvmbus.c:1425:53: error: passing argument 4 of ‘vmbus_establish_gpadl’ from incompatible pointer type [-Werror=incompatible-pointer-types]
 1425 |                       alloc_size, &dxgalloc->gpadl);
./include/linux/hyperv.h:1223:59: note: expected ‘struct vmbus_gpadl *’ but argument is of type ‘u32 *’ {aka ‘unsigned int *’}
cc1: some warnings being treated as errors
make[1]: *** [scripts/Makefile.build:260: /var/lib/dkms/dxgkrnl/d489414c2/build/dxgvmbus.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make: *** [Makefile:2026: /var/lib/dkms/dxgkrnl/d489414c2/build] Error 2
make: Leaving directory '/usr/src/linux-headers-6.2.0-26-generic'

@seflerZ

seflerZ commented Nov 1, 2023

> Hello. I can get this working passing through a 3070 to an Ubuntu 22.04 VM but if I also try to pass through a PCI device (Coral TPU) using DDA, the video card is very unstable and causes FFMPEG to segfault. Either GPU-PV alone or DDA alone work fine but don't play well together on the VM. It was really cool to be able to use GPU-PV in Hyper-V.
>
> Anyway, I was wondering if you saw the D3D12/VAAPI capability that was added to WSL2 and if you had any thoughts about how to enable VAAPI for Hyper-V VMs.
>
> Thanks.

Up vote! I've tried so hard but failed to enable VAAPI in Hyper-V VMs. Seems like it goes beyond the dxgkrnl module and needs WSL2-Linux-Kernel specific drivers.

I've found this PR and this PR which might be helpful.

@mattenz

mattenz commented Nov 5, 2023

Thanks to this guide I was able to get GPU-PV working on Server 2022 running Hyper-V with an Ubuntu 22.04.3 VM. I did have to do a few things differently though.

First, I changed the WSL branch to "linux-msft-wsl-5.15.y" since this Ubuntu version uses the 5.15 kernel. I then also had to add the following before the dkms steps otherwise dkms would fail to build:

apt install dwarves  
cp /sys/kernel/btf/vmlinux /usr/lib/modules/`uname -r`/build/

With those changes, everything started working and my GPU is working in my Ubuntu VM now.

@ColbyHF

ColbyHF commented Nov 27, 2023

> Thanks to this guide I was able to get GPU-PV working on Server 2022 running Hyper-V with an Ubuntu 22.04.3 VM. I did have to do a few things differently though.
>
> First, I changed the WSL branch to "linux-msft-wsl-5.15.y" since this Ubuntu version uses the 5.15 kernel. I then also had to add the following before the dkms steps otherwise dkms would fail to build:
>
>     apt install dwarves
>     cp /sys/kernel/btf/vmlinux /usr/lib/modules/`uname -r`/build/
>
> With those changes, everything started working and my GPU is working in my Ubuntu VM now.

I applied what you stated but to no avail, any insight? When loading in my screen is stuck after the splash screen.

@mattenz

mattenz commented Nov 27, 2023

> I applied what you stated but to no avail, any insight? When loading in my screen is stuck after the splash screen.

As in you're stuck booting? Are you running a UI or server? My Ubuntu install was just a server install without a UI.

@ColbyHF

ColbyHF commented Nov 27, 2023

> I applied what you stated but to no avail, any insight? When loading in my screen is stuck after the splash screen.
>
> As in you're stuck booting? Are you running a UI or server? My Ubuntu install was just a server install without a UI.

That would probably cause it; I'm running Ubuntu Desktop.

@LaZoRBear

Thank you for the guide, it has been very helpful for implementing this with Manjaro. However, I have some issues now that everything is installed. If I start my VM with the VMGpuPartitionAdapter already added, I get a black screen after the initial loading splash screen. If I add it only after logging into the VM, the VM freezes after only a few interactions or commands entered in the terminal.

glxinfo | grep "OpenGL renderer string"  # -> freezes instantly or locks up, with the terminal printing white lines indefinitely
sudo lspci -v   # -> lists the proper info; everything seems normal
ls -l /dev/dxg  # -> "No such file or directory"
/usr/lib/wsl/lib/nvidia-smi  # -> freezes instantly or locks up, with the terminal printing white lines indefinitely

I'm not sure what steps I need to take to fix this. Also, for copying the drivers over: the Windows path mentioned wasn't on my system, and I ended up having to go into C:\Windows\System32\DriverStore\FileRepository\nvmdi.inf_amd64_509c7440ad905b9c. It was the folder in there whose creation date lined up with my last driver update.

@D4rkGambit

> Thanks to this guide I was able to get GPU-PV working on Server 2022 running Hyper-V with an Ubuntu 22.04.3 VM. I did have to do a few things differently though.
>
> First, I changed the WSL branch to "linux-msft-wsl-5.15.y" since this Ubuntu version uses the 5.15 kernel. I then also had to add the following before the dkms steps otherwise dkms would fail to build:
>
>     apt install dwarves
>     cp /sys/kernel/btf/vmlinux /usr/lib/modules/`uname -r`/build/
>
> With those changes, everything started working and my GPU is working in my Ubuntu VM now.

Did you have to rebuild the Ubuntu kernel for Server 2022?
lspci only shows the vGPU adapter on BrokeDude's custom kernel, but nvidia-smi is still broken when using the one provided by 2022.
[screenshot: nvidia-smi output]

@residentcode

> sudo chmod a+rwx /usr/lib/wsl/{lib,drivers}

chmod a+rwx /usr/lib/wsl/drivers
chmod: changing permissions of '/usr/lib/wsl/drivers': Read-only file system

@Marietto2008

But instead of using Linux inside WSL2, isn't it better to virtualize Linux with qemu + kvm + Hyper-V? I did it with FreeBSD, but it will work with Linux too:

https://www.reddit.com/r/freebsd/comments/1c71mjn/how_to_virtualize_freebsd_14_as_a_vm_on_top_of/

@Heodel

Heodel commented Apr 25, 2024

> Im getting dkms errors when actually trying to build the module. i dont think it the warning about the different subversion is the cause of it. at least it shouldnt. ;) This is Ubuntu 22.04.3 in a VM (Hyper-V) 8 cores, 32gb ram and an AMD 6900XT.
>
> [DKMS make.log for dxgkrnl-d489414c2, quoted in full in the earlier comment: the build fails with "error: passing argument 4 of ‘vmbus_establish_gpadl’ from incompatible pointer type"]

I have the same error when building kernel, what should we do?
