@scyto
Last active April 27, 2024 03:51
Enabling & Using vGPU Passthrough

This gist is based almost entirely on Derek Seaman's awesome blog:

Proxmox VE 8: Windows 11 vGPU (VT-d) Passthrough with Intel Alder Lake

As such please refer to that for pictures; here i will capture the command lines I used, as i sequence the commands a little differently so they make more sense to me.

This gist assumes you are not running ZFS and are not passing through any other PCIe devices (as both of these can require additional steps - see Derek's blog for more info)

This gist assumes you are not running Proxmox with UEFI Secure Boot - if you are, please refer entirely to Derek's blog.

ALSO please refer to the comments section, as folks have found workarounds and probably corrections (if the mistakes remain in my write-up it is because i haven't yet tested the corrections)

Note: i made no changes to the BIOS defaults on the Intel NUC 13th Gen. This just worked as-is.

this gist is part of this series

Preparation

Install Build Requirements

apt update && apt install pve-headers-$(uname -r)
apt install git sysfsutils dkms build-* unzip -y

Install Other Drivers / Tools

This allows you to run vainfo and intel_gpu_top for testing, and installs the non-free versions of the encoding driver - without this you will not, AFAIK, be able to encode with this GPU. This was missed in EVERY guide i saw for this vGPU, so i'm not sure why, but i had terrible issues until i did this.

edit the sources list with nano /etc/apt/sources.list

add the following lines:

#non-free firmwares
deb http://deb.debian.org/debian bookworm non-free-firmware

#non-free drivers and components
deb http://deb.debian.org/debian bookworm non-free

and save the file
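If you prefer a non-interactive way to do the same thing, a small idempotent helper can append the entries only when they are missing (a sketch; `add_nonfree_sources` is my own name for it, not a standard tool):

```shell
# add_nonfree_sources FILE - append the two Debian non-free entries to FILE
# unless they are already there (idempotent, so safe to re-run)
add_nonfree_sources() {
  local f="$1"
  grep -q 'bookworm non-free-firmware' "$f" 2>/dev/null && return 0
  cat >> "$f" <<'EOF'
#non-free firmwares
deb http://deb.debian.org/debian bookworm non-free-firmware

#non-free drivers and components
deb http://deb.debian.org/debian bookworm non-free
EOF
}

# on the host you would run:
# add_nonfree_sources /etc/apt/sources.list && apt update
```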

apt update && apt install intel-media-va-driver-non-free intel-gpu-tools vainfo

This next step downloads a firmware file that is missing on Proxmox installs and will remove the -2 error for this file in dmesg.

wget -r -nd -e robots=no -A '*.bin' --accept-regex '/plain/' https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/i915/adlp_dmc.bin

cp adlp_dmc.bin /lib/firmware/i915/

Compile and Install the new driver

Clone github project

cd ~
git clone https://github.com/strongtz/i915-sriov-dkms.git

modify dkms.conf

cd i915-sriov-dkms
nano dkms.conf

change these two lines as follows:

PACKAGE_NAME="i915-sriov-dkms"
PACKAGE_VERSION="6.5"

save the file

Compile and Install the Driver

cd ~
mv i915-sriov-dkms/ /usr/src/i915-sriov-dkms-6.5
dkms install --force -m i915-sriov-dkms -v 6.5

and use dkms status to verify the module is now installed

Modify grub

edit the grub file with nano /etc/default/grub

change this line in the file

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7"

note: if you have already made modifications to this line in your grub file for other purposes you should also still keep those items

finally run

update-grub
update-initramfs -u
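After the reboot you can confirm the kernel actually picked up the new flags by reading /proc/cmdline; this is a sketch with a helper name of my own choosing:

```shell
# check_cmdline [FILE] - verify the running kernel cmdline contains every
# flag we added to GRUB_CMDLINE_LINUX_DEFAULT (FILE defaults to /proc/cmdline)
check_cmdline() {
  local f="${1:-/proc/cmdline}" flag
  for flag in intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7; do
    grep -qF -- "$flag" "$f" || { echo "missing: $flag"; return 1; }
  done
  echo "cmdline OK"
}

# run on the host after rebooting:
# check_cmdline
```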

Find PCIe Bus and update sysfs.conf

use lspci | grep VGA to find the bus number

you should see something like this:

root@pve2:~# lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04)

take the number on the far left and add it to sysfs.conf as follows - note all the preceding zeros in the bus path are needed

echo "devices/pci0000:00/0000:00:02.0/sriov_numvfs = 7" > /etc/sysfs.conf
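After the reboot, the quickest sanity check is to read the value back out of sysfs; this sketch wraps it in a tiny helper (my own name) so the path and expected count are explicit:

```shell
# check_numvfs SYSFS_PATH EXPECTED - compare the VF count the kernel actually
# enabled against what we asked for in sysfs.conf
check_numvfs() {
  [ "$(cat "$1")" = "$2" ]
}

# on the host:
# check_numvfs /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs 7 && echo "7 VFs enabled"
```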

REBOOT

Testing On Host

check devices

check devices with dmesg | grep i915

the last two lines should read as follows:

[    7.591662] [drm] Initialized i915 1.6.0 20201103 for 0000:00:02.7 on minor 7
[    7.591818] i915 0000:00:02.0: Enabled 7 VFs

if they don't then check all steps carefully

Validate with VAInfo

validate with vainfo - you should see no errors (note this needs the drivers and tools i said to install at the top). Then run vainfo --display drm --device /dev/dri/cardN, where N is a number from 0 to 7 - this will show you the acceleration endpoints for each VF

Check you can monitor the VFs - if not you have issues

monitor any VF renderer in real time with intel_gpu_top -d drm:/dev/dri/renderD128 - there is one per VF; to see them all use ls -l /dev/dri
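To walk through all of them quickly, something like this loop works (the `list_render_nodes` helper is just illustrative; the directory argument exists so the listing can be tried without hardware):

```shell
# list_render_nodes [DIR] - print the renderD* node names found in DIR
# (defaults to /dev/dri), one per line, sorted
list_render_nodes() {
  local dir="${1:-/dev/dri}" n
  for n in "$dir"/renderD*; do
    [ -e "$n" ] && basename "$n"
  done | sort
}

# on the host, inspect each VF in turn:
# for n in $(list_render_nodes); do intel_gpu_top -d "drm:/dev/dri/$n"; done
```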

Configure vGPU Pool in Proxmox

  1. navigate to Datacenter > Resource Mappings
  2. click add in PCI devices
  3. name the pool something like vGPU-Pool
  4. map all 7 VFs for pve1 but NOT the root device, i.e. 0000:00:02.x not 0000:00:02.0
  5. click create
  6. on the created pool click the plus button next to vGPU-Pool
  7. select mapping on node = pve2, add all devices and click create
  8. repeat for pve3

The pool should now look like this:

image

Note: machines with PCI pass through devices cannot be live migrated, they must be shutdown, migrated offline to the new node and then started.

EVERYTIME THE KERNEL IS UPDATED IN PROXMOX YOU SHOULD DO THE FOLLOWING

update the kernel using the Proxmox UI
dkms install -m i915-sriov-dkms -v 6.5 --force
reboot

How to get this working in a privileged container

wow this one is hard.... you can avoid the ID mapping stuff by using a privileged container...

Assumptions:

  1. you have a debian 12 container, you added the non-free deb and have installed the non-free drivers as per the host instructions
  2. you have run cat /etc/group in the container and noted down the GID for render (let's call that CTRGID) and the GID for video (let's call that CTVGID)
  3. you have run cat /etc/group on the host and noted down the GID for render (let's call that HSTRGID) and the GID for video (let's call that HSTVGID)
  4. you have vainfo fully working

Create Container

  1. create a privileged container with debian 12 and start it
  2. apt update, apt upgrade, install the non-free drivers, vainfo and the intel_gpu_top tools
  3. add root to the render and video groups (this will mean when we get to ID mapping you don't need to tart about with user mappings - only group ones)
usermod -a -G render root
usermod -a -G video root
  4. shutdown the container

Edit container conf file

  1. These are stored in /etc/pve/lxc and have the name VMID.conf
  2. nano /etc/pve/lxc/VMID.conf

Add lxc device mapping

Here you add a line for the card you want and the renderer. Note that if you map a VF (card) to a container it is hard mapped; if you have that VF in a pool for VMs please remove it from the pool (this also means these containers cannot be HA)

In the example below i chose card6 - which is renderD134. These are mapped into the container as card0 and renderD128. Change your numbers as per your own VF / card mappings

lxc.cgroup2.devices.allow: c 226:6 rwm
lxc.mount.entry: /dev/dri/card6 dev/dri/card0 none bind,optional,create=file

lxc.cgroup2.devices.allow: c 226:134 rwm
lxc.mount.entry: /dev/dri/renderD134 dev/dri/renderD128 none bind,optional,create=file
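Since the only things that change are the two device minor numbers, a tiny generator can emit the four lines for whatever VF you picked (a sketch; `vgpu_lxc_lines` is my name for it, and card 6 / renderD134 are just my host's numbers):

```shell
# vgpu_lxc_lines CARD_MINOR RENDER_MINOR - print the cgroup allow and mount
# entries that map /dev/dri/cardN and /dev/dri/renderDNNN into the container
# as card0 / renderD128 (DRM character devices are major 226)
vgpu_lxc_lines() {
  local card="$1" render="$2"
  cat <<EOF
lxc.cgroup2.devices.allow: c 226:${card} rwm
lxc.mount.entry: /dev/dri/card${card} dev/dri/card0 none bind,optional,create=file
lxc.cgroup2.devices.allow: c 226:${render} rwm
lxc.mount.entry: /dev/dri/renderD${render} dev/dri/renderD128 none bind,optional,create=file
EOF
}

# append to the container config (VMID is a placeholder):
# vgpu_lxc_lines 6 134 >> /etc/pve/lxc/VMID.conf
```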

Add ID mapping (only needed in unprivileged)

  1. add the following... and here it gets complex as it will vary based on the numbers you recorded earlier - let me try... the aim is to have a contiguous block of mappings but the syntax is, um, difficult...
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 CTVGID
lxc.idmap: g CTVGID HSTVGID 1
lxc.idmap: g CTVGID+1 100000+CTVGID+1 CTRGID-CTVGID-1
lxc.idmap: g CTRGID HSTRGID 1
lxc.idmap: g CTRGID+1 100000+CTRGID+1 65536-CTRGID-1
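Because the arithmetic is fiddly, here is a sketch that computes the whole block from the four GIDs (`gen_idmap` is my own helper name; note it maps the container render GID to the host render GID, which is what the worked example below does):

```shell
# gen_idmap CTVGID CTRGID HSTVGID HSTRGID - emit a contiguous group mapping
# that pins the container's video and render GIDs to the host's, assuming the
# usual 100000/65536 unprivileged range and CTVGID < CTRGID
gen_idmap() {
  local ctv="$1" ctr="$2" hstv="$3" hstr="$4"
  echo "lxc.idmap: u 0 100000 65536"
  echo "lxc.idmap: g 0 100000 ${ctv}"
  echo "lxc.idmap: g ${ctv} ${hstv} 1"
  echo "lxc.idmap: g $((ctv+1)) $((100000+ctv+1)) $((ctr-ctv-1))"
  echo "lxc.idmap: g ${ctr} ${hstr} 1"
  echo "lxc.idmap: g $((ctr+1)) $((100000+ctr+1)) $((65536-ctr-1))"
}

# gen_idmap 44 106 44 104   # reproduces the 107.conf example
```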

so as an example, these are my values:

        host > ct
video:    44 > 44
render:  104 > 106

this is what i added to my VMID.conf file (in my case /etc/pve/lxc/107.conf)

lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 61
lxc.idmap: g 106 104 1
lxc.idmap: g 107 100107 65429
  2. add your two host GID values to nano /etc/subgid (only needed in unprivileged)

in my case:

root:104:1
root:44:1

after this you should be able to start up the container and run vainfo and perform transcoding.

check permissions with ls -la /dev/dri it should look like this:

root@vGPU-debian-test:~# ls -la /dev/dri
total 0
drwxr-xr-x 2 root   root         80 Oct  7 00:22 .
drwxr-xr-x 7 root   root        500 Oct  7 00:22 ..
crw-rw-rw- 1 nobody video  226,   0 Oct  4 21:42 card0
crw-rw-rw- 1 nobody render 226, 128 Oct  4 21:42 renderD128

if the group names do not say video and render then you did something wrong

**Note: YMMV**

For example plex HW transcoded just fine on my system.

Emby on the other hand seems to interrogate the kernel driver directly and gets the wrong answers - this is IMHO an issue with their detection logic not supporting this scenario.

Another example is intel_gpu_top, which doesn't seem to work in this mode either - this is because it only works with the PMUs, not the VFs (or so someone said)

Or maybe i just have no clue what i am doing, lol.

---work in progress 2023.10.6---

add vGPU to a Windows 11 or Server 2022 VM

  1. create VM with CPU set to host DO NOT CHANGE THIS
  2. boot VM without vGPU and display set to default
  3. install windows 11
  4. install VirtIO drivers [as of 4.6.2024 do not install guest tools - this may cause repair loops]
  5. shutdown VM and change display to VirtIO-GPU
  6. Now add the vGPU pool as a PCI device
  7. when creating a VM add a PCI device and add the pool as follows:

image

  8. now boot into the VM and install the latest Iris Xe drivers from Intel
  9. you should now have graphics acceleration available to apps whether you connect by web console VNC, SPICE or an RDP client

From @rinze24:

If you follow the guide successfully, in Device Manager you will see:

  • Microsoft Basic Display Adapter - If you use Display in VM Settings
  • Intel iGPU - passthrough

You have 2 options (or more) to use your iGPU, because Windows 11 decides on its own which graphics to use.

  1. Setup Remote Desktop Connection in Windows 11 and set the display to none in VM Hardware settings.
  • Pro: No configuration per app, Responsive Connection.
  • Con: No proxmox console.
  2. Inside Windows set which graphics preference to use per application in Display Settings -> Graphics Settings.
  • Pro: Have proxmox console.
  • Con: Need to configure per application / program.

If you hit an automatic repair loop at any point, shutdown the machine, edit its conf file in /etc/pve/qemu-server and add args: -cpu Cooperlake,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx

@nautilus7

@scyto In your guide for unprivileged containers and id mapping there is an error. In /etc/subgid you need to map the host render group (104), not the container render group (106).

@rjblake

rjblake commented Feb 26, 2024

@scyto In your guide for unprivileged containers and id mapping there is an error. In /etc/subgid you need to map the host render group (104), not the container render group (106).

Had noticed the same previously, but it seems the guide has not been updated with the correction

@DarkPhyber-hg

DarkPhyber-hg commented Mar 4, 2024

This guide didn't fully work for me, i had to get some info from Derek Seaman's blog that was linked.

But more importantly it seems like plex HDR tone mapping doesn't work in an LXC with a vGPU right now. Transcoding works fine, but in order to get tone mapping working i had to pass card0 through instead of card1-7.

@ovizii

ovizii commented Mar 12, 2024

Thanks for the awesome guide, it all seems to have worked out for me, but I have a few follow-up questions.

When passing through a vGPU to say Plex running in Docker, all how-tos just say to do:

devices:
  - /dev/dri:/dev/dri 

But that seems plain wrong as I only want to map one of the 7 virtual devices through. I wasn't sure if cardx AND renderDxxx are both needed, but this works for me atm:

devices:
  - /dev/dri/card1:/dev/dri/card1
  - /dev/dri/renderD129:/dev/dri/renderD129

Plex is using HW encoding now. Any reason I should maybe map them to card0/renderD128 inside the container? Do you really need to assign both to the container?

Lastly, in your guide you said:

Check you can monitor the VFs - if not you have issues
monitor any VF renderer in real time with intel_gpu_top -d drm:/dev/dri/renderD128 there is one per VF

Unfortunately, I can only monitor renderD128, all the others up to renderD135 give this error:

 intel_gpu_top -d drm:/dev/dri/renderD129
Failed to detect engines! (No such file or directory)
(Kernel 4.16 or newer is required for i915 PMU support.)
Segmentation fault

which also shows in dmesg:

[36946.200816] intel_gpu_top[1957775]: segfault at 78 ip 00005619a4ae7464 sp 00007ffc75ed4ad0 error 4 in intel_gpu_top[5619a4ae4000+a000] likely on CPU 2 (core 2, socket 0)
[36946.200824] Code: ff 4c 8b bd 98 e4 ff ff 48 89 b5 f0 e9 ff ff 48 89 bd f8 e9 ff ff eb 12 4c 89 f8 48 83 c0 08 49 89 c7 48 8b 00 48 85 c0 74 11 <80> 78 38 00 74 e8 48 8b 78 30 e8 5d d2 ff ff eb dd 48 8b 85 70 e4

Any ideas if this somehow limits me or can I still use the 7 VFs as apparently I got one working with Plex in Docker already. Will test giving one out to a Windows VM next.

@ovizii

ovizii commented Mar 12, 2024

Now it runs fine. I had apparently had problems with the kernel driver.

Before: Kernel driver in use: vfio-pci Kernel modules: i915

After: Kernel driver in use: i915 Kernel modules: i915

Your tutorial also works with a UHD 730 iGPU.

Thank you :)

What exactly did you change to get from "before" to "after"?

@komw

komw commented Mar 28, 2024

@KrzysztofC Could you please share your Windows VM config? What exactly did you install? I have the same Beelink S12 Pro PC, and I'm trying to get the GPU working for almost a week without any success...
I'm able to get the GPU working with ubuntu-desktop, and it uses the HDMI to display the screen, but in Windows 10/11 I'm always getting error code 43 and the GPU isn't working.

@rebebop

rebebop commented Apr 4, 2024

Has anyone actually looked into passing the video to a physical output? Intel themselves mention that:

The key benefits of Intel Graphics SR-IOV are:

  • Support up to four independent display output and seven virtualized functions (12th generation Intel® Core™ embedded processors).

They say the same in their GitHub repo, however nowhere did I find any actual mention of how this would work in practice.

EDIT: asked them directly: intel/kubevirt-gfx-sriov#5

@wrobelda did you ever find an answer to this? The linked repo is archived.

@wrobelda

wrobelda commented Apr 4, 2024

@wrobelda did you ever find an answer to this? The linked repo is archived.

I haven't, but I was thinking about pointing Level1Techs people in that direction, since they recently were looking (in their YouTube channel) into vGPUs using industrial Intel GPU and they'd probably find that interesting.

@scyto
Author

scyto commented Apr 4, 2024

i haven't either, YT also started surfacing the same videos up, would be great if the new intel cards worked well.

I am pretty certain the iris cards in the nuc cannot pass to HDMI from a VM based on the research i did.

@scyto
Author

scyto commented Apr 4, 2024

I passed one of the 7 to a Win10 VM and it seemed to work as expected, even installed the Intel driver with no errors. The only thing i noticed is that Sunshine/Moonlight does not work with this setup, i think that's because there is no monitor connected to the HDMI port (that's how i got it to work with the nVidia GPU), i figure that if i add a virtual monitor to the VM from the Proxmox resources it should do the trick!?

I found this recently that might help, i haven't tried tho on anything but a real machine https://github.com/itsmikethetech/Virtual-Display-Driver

@scyto
Author

scyto commented Apr 5, 2024

@KrzysztofC thanks for the post on the things you did, interesting about the re-install working - i may have to try that, i also have code 43 issues i can no longer resolve (this is all very fragile) and that's even with following Derek's excellent article. There is definitely something we are all missing on why code 43 appears so much... and seems so random sometimes....

@scyto
Author

scyto commented Apr 5, 2024

This guide didn't fully work for me, i had to get some info from Derek Seaman's blog that was linked.

But more importantly it seems like plex HDR tone mapping doesn't work in an LXC with a vGPU right now. Transcoding works fine, but in order to get tone mapping working i had to pass card0 through instead of card1-7.

Yeah how both plex and emby do their enumeration of the gpu is problematic, also yes IIRC some features of the gpu are only available on 0 - i never found a full list tho

@scyto
Author

scyto commented Apr 7, 2024

Ok i seem to be getting somewhere.... changed my instructions, brief set of things i did and what i think was relevant

  1. Converted to secure boot (this didn't seem to change anything)
  2. used alternate fork of i915 dkms drivers (this didn't seem to change anything)
  3. wondered why my vCPU reported as Xeon, despite having host set...... found a bunch of bloody args in the VM i had forgot about - lesson, don't be like scyto and re-install in a VM, make new vm...
  4. made new VM, was sure to set to host before installing AND do NOT install the guest tools - for me this was causing repair loops. I used win11 23H2 March ISO - i have yet to try doing windows update....
  5. if you get to this stage, wait until ARC is launchable and works, then you can click X on the installer
  6. i will report back on whether it is the guest tools or the virtio driver package install that causes the boot loops
    tl;dr don't set any ARGs it will break the VM and the vGPU

image

@scyto
Author

scyto commented Apr 7, 2024

ok it seems that 'after some time' windows 11 on a reboot will get stuck in a repair loop - this isn't windows updates, this isn't installing virtio drivers or guest tools - it seems time related based on one user's findings on the forum. Obviously setting another CPU type fixes this, but at this time, on my machine, that breaks vGPU. This means i can only get what you see above working once, and then it will break.

I note when i use the repair console for some reason the virtioscsi drivers are not loaded, loading gives access to the disk.

Having read a lot of things on qemu and windows this seems to be common issue for years with win11 - have any of you managed to keep your win11 vGPU working over many reboots / days?

@scyto
Author

scyto commented Apr 7, 2024

I fixed infinite automatic repairing when using host by using args: -cpu host,hv_passthrough,level=30,-waitpkg in the 102.conf file.
Unfortunately this seems to break the vGPU virtualization.

I am stumped as to why it works and then after some time doesn't work....

@scyto
Author

scyto commented Apr 8, 2024

ok to boil this down about windows / i915 / 13th gen CPU, it seems one can run as follows:

  1. a win11 VM that has only been set up with local account (no AAD or Microsoft account on it) will run perfectly for ever, updates seem safe, must be installed and run as CPU = host and WSL/windows hello must never be enabled
  2. a win11 VM with WSL, Hyper-V, windows hello (aad join / microsoft account) will run - but vGPU will stop working after a couple of reboots. CPU needs to be host and may require CPU args to run args: -cpu host,+svm,-hypervisor,-waitpkg,hv_passthrough,level=30 this will not get vGPU working, but will get WSL working and allow windows hello

If anyone can show me another way, i am all ears, i want to be proven wrong on this....

my next step is to try the new i915 backports drivers when i have time, that might not be for a few weeks

@rjblake

rjblake commented Apr 10, 2024

ok to boil this down about windows / i915 / 13th gen CPU, it seems one can run as follows:

  1. a win11 VM that has only been set up with local account (no AAD or Microsoft account on it) will run perfectly for ever, updates seem safe, must be installed and run as CPU = host and WSL/windows hello must never be enabled
  2. a win11 VM with WSL, Hyper-V, windows hello (aad join / microsoft account) will run - but vGPU will stop working after a couple of reboots. CPU needs to be host and may require CPU args to run args: -cpu host,+svm,-hypervisor,-waitpkg,hv_passthrough,level=30 this will not get vGPU working, but will get WSL working and allow windows hello

If anyone can show me another way, i am all ears, i want to be proven wrong on this....

my next step is to try the new i915 backports drivers when i have time, that might not be for a few weeks

Thankfully on 12th Gen CPU. I have tried both local account and WSL. In fact the one I use the most, is a VM created using a WSL. I have never had any issue with the vGPU stopping working after a period of time or multiple reboots. This is running on an HP Elite Mini 800 G9 (i7-12700). In addition, I have both RDP and a Display configured using VirtIO-GPU. Configuration as below:

image
image
image

@jeanpaulrh

This is running on an HP Elite Mini 800 G9 (i7-12700). In addition, I have both RDP and a Display configured using VirtIO-GPU

Hi, I have the same generation of HP Elite Mini only with a i5-12500. I tried some of the tutorial found online and I got Windows VM to use GPU but I wasn't able to get Plex to use vGPU for transcoding. I set up an Ubuntu VM which loads the drivers, vainfo shows the card but plex simply uses the cpu. Have you tried to use HW transcoding on a linux VM with Plex?
Thanks

@rjblake

rjblake commented Apr 10, 2024

Have you tried to use HW transcoding on a linux VM with Plex? Thanks
No - I run Plex in an Unprivileged Container (Ubuntu 22.04.4 LTS) with iGPU passed through. Runs solid and uses GPU for HW transcoding. Dumb question, but assume you set the option for HW transcoding in Plex itself?

@scyto
Author

scyto commented Apr 10, 2024

I have never had any issue with the vGPU stopping working after a period of time or multiple reboots.

I am starting to think this is a hardware-specific QEMU/KVM bug - you will find years of folks complaining about windows 10 and windows 11 doing this on proxmox and native qemu/kvm and not understanding why (irrespective of vGPU). Thanks for sharing the config. It is the same issue that causes the automatic repair for some people when they enable WSL2 too.

@pcmike

pcmike commented Apr 10, 2024

For anyone running into "install error on pve 8.1 kernel version : 6.5.13-5" go here for the fix: strongtz/i915-sriov-dkms#151

@scyto
Author

scyto commented Apr 11, 2024

@pcmike i deleted your mega post, next time look at the open issues on the github repo as your first port of call

@jeanpaulrh

No - I run Plex in an Unprivileged Container (Ubuntu 22.04.4 LTS) with iGPU passed through. Runs solid and uses GPU for HW transcoding. Dumb question, but assume you set the option for HW transcoding in Plex itself?

Yes I did. The last try I got it working for a while, but it was unstable... after a couple of minutes the host deactivated the 7 vGPUs with an error in the log. Reboot was the only solution. I tried to google the error but with no luck :(

@jaxjexjox

Hello,

I currently have plex transcoding hardware successfully on docker, in an LXC on proxmox, with mapped SMB drives.
This is on Intel 7xxx series, old processors and it was hard to get working but it's working great.

I've upgraded to 12xxx series processors and again, would like to use an LXC, run docker inside it with plex and continue to hardware transcode.

Are these instructions only applicable to VMs or will it work for an LXC as well?
Thanks for the hard work.

@rjblake

rjblake commented Apr 17, 2024

Hello,

I currently have plex transcoding hardware successfully on docker, in an LXC on proxmox, with mapped SMB drives. This is on Intel 7xxx series, old processors and it was hard to get working but it's working great.

I've upgraded to 12xxx series processors and again, would like to use an LXC, run docker inside it with plex and continue to hardware transcode.

Are these instructions only applicable to VMs or will it work for an LXC as well? Thanks for the hard work.

Works on LXC - it is covered in the text here specifically for a privileged container. I have it running on a number of unprivileged containers

@rinze24

rinze24 commented Apr 18, 2024

In the unprivileged id mapping the 5th line lxc.idmap: g CTRGID HSTVGID 1 should be lxc.idmap: g CTRGID HSTRGID 1

@rjblake

rjblake commented Apr 18, 2024

lxc.idmap: g CTRGID+1 100{CTRGID+1} 65536-{CTRGID-1}

Also, line 6 lxc.idmap: g CTRGID+1 100{CTRGID+1} 65536-{CTRGID+1} should be lxc.idmap: g CTRGID+1 100{CTRGID+1} 65536-{CTRGID-1}

@rjblake

rjblake commented Apr 24, 2024

Proxmox have just released v8.2 using Kernel version 6.8. As I understand, the Strongtz DKMS module will NOT work on Kernel 6.8 and it seems he has stopped any major updates as no longer using SR-IOV. As such, seems there is quite a lot of work to do to update the code (beyond my ability) and my suggestion would be to pin Kernel 6.5 for now. Seems Intel will not have a driver in mainline until at least Kernel 6.9 (if even then), so I'd also suggest ensuring you have an archived ISO installer of PVE V8.1

@blebo

blebo commented Apr 25, 2024

Known Issues regarding DKMS in the PVE v8.2 release notes also suggest pinning a kernel package https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.2

@TBDuval

TBDuval commented Apr 27, 2024

Can someone help? I literally installed this from this guide a week ago and now it won't work. Running:
Kernel Version Linux 6.5.11-8-pve (2024-01-30T12:27Z)
Boot Mode EFI
Manager Version pve-manager/8.1.4/ec5affc9e41f1d79

I went to build another one today and I am getting this error on dkms: bad exit status: 2

Here is the make.log where the error occurs.
/var/lib/dkms/i915-sriov-dkms/6.5.11-8/build/drivers/gpu/drm/i915/display/intel_dp_mst.c: In function ‘intel_dp_mst_find_vcpi_slots_for_bpp’:
/var/lib/dkms/i915-sriov-dkms/6.5.11-8/build/drivers/gpu/drm/i915/display/intel_dp_mst.c:85:31: error: too few arguments to function ‘drm_dp_calc_pbn_mode’
85 | crtc_state->pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock,
| ^~~~~~~~~~~~~~~~~~~~
In file included from /var/lib/dkms/i915-sriov-dkms/6.5.11-8/build/drivers/gpu/drm/i915/display/intel_display_types.h:36,
from /var/lib/dkms/i915-sriov-dkms/6.5.11-8/build/drivers/gpu/drm/i915/display/intel_dp_mst.c:40:
./include/drm/display/drm_dp_mst_helper.h:835:5: note: declared here
835 | int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc);
| ^~~~~~~~~~~~~~~~~~~~
/var/lib/dkms/i915-sriov-dkms/6.5.11-8/build/drivers/gpu/drm/i915/display/intel_dp_mst.c: In function ‘intel_dp_mst_mode_valid_ctx’:
/var/lib/dkms/i915-sriov-dkms/6.5.11-8/build/drivers/gpu/drm/i915/display/intel_dp_mst.c:898:13: error: too few arguments to function ‘drm_dp_calc_pbn_mode’
898 | drm_dp_calc_pbn_mode(mode->clock, min_bpp) > port->full_pbn) {
| ^~~~~~~~~~~~~~~~~~~~
./include/drm/display/drm_dp_mst_helper.h:835:5: note: declared here
835 | int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc);
| ^~~~~~~~~~~~~~~~~~~~
make[2]: *** [scripts/Makefile.build:251: /var/lib/dkms/i915-sriov-dkms/6.5.11-8/build/drivers/gpu/drm/i915/display/intel_dp_mst.o] Error 1
make[1]: *** [/usr/src/linux-headers-6.5.11-8-pve/Makefile:2039: /var/lib/dkms/i915-sriov-dkms/6.5.11-8/build] Error 2
make: *** [Makefile:234: __sub-make] Error 2
