@Misairu-G
Last active March 8, 2024 10:50
[GUIDE] Optimus laptop dGPU passthrough

Reddit post (Archived)

This is a guide for passing through the dGPU on your laptop to your VM. It only applies to laptops that do not load dGPU firmware through an ACPI call, which includes all MUXed laptops and some MUXless laptops. For laptops that load dGPU firmware via an ACPI call, please refer to u/jscinoz's optimus-vfio-docs.

Sorry, but currently I don't know how to check whether your dGPU loads its firmware through an ACPI call.

Update: Use the hexadecimal ID directly instead of converting it to decimal; added some notes about the romfile option

Update: Forgot that -vga none causes the Guest has not initialized the display (yet) problem if you don't have a system installed

Update: Use QEMU 2.11.2 with the PulseAudio and vcpupin patches; added some caveats for 18.04

Update: Fixed the outdated link to the VirtIO Windows guest drivers, thanks to @pascalav, who also attached a link on how to embed an ACPI table for the VBIOS

What to expect?

Depending on your hardware, you can have a laptop that:

  • Runs a Linux distribution as the host machine,
  • Can power the Nvidia dGPU on/off and utilize it on demand with Bumblebee,
  • Can pass the dGPU to your VM when the host does not need it,
  • Gets the dGPU back when the VM shuts down,
  • Can use the dGPU with Bumblebee again without any problem,
  • Needs no reboot during the dGPU binding/unbinding process,
  • Needs no external display (depending on your hardware and the Windows version your VM runs),
  • Can connect an external display directly to the VM (only on some machines with a specific setup).

Frame rate test

Unigine Heaven 4.0 Basic test

Steam in-home streaming between the Windows VM and the host:

  • Both games use the High preset with V-Sync enabled.
  • The max FPS of Witcher 3 is set to 60.
  • No extra monitor whatsoever.

DOOM

Witcher 3

*This is my laptop running in Optimus mode with a 1080p@120Hz panel (I swapped out the original 1080p@60Hz one myself) and an MXM form factor Quadro P5000 (QS). This laptop is MUXed.

Some TL;DR about the idea behind this

As you will read later, this tutorial is pretty much the same as most passthrough guides. The key point, however, is to assign a Subsystem ID to the dGPU using some vfio-pci options: by default, my dGPU appears with a Subsystem ID of 00000000 inside the VM.

About the one-display setup: although frames are rendered in GPU memory, display ports are not the only way to get those frames out. Nvidia itself provides APIs to capture content in GPU memory, which is why technologies like Steam in-home streaming and GeForce Experience exist. I have RemoteFX working, and that is the only reason I cover it in this tutorial. Although I use a Quadro, this mobile GPU does not support the NvFBC capture API (the same as other consumer cards), which means its capability is no more than a GeForce's, so you should be able to get RemoteFX working with a GeForce too.

Some of you might have heard of gnif's phenomenal work, which made a huge step forward for the one-display setup. Unfortunately, a dummy device is still required for that setup, which is a no-go for laptops. Even on a MUXed laptop, plugging in a dummy device still means your GPU needs to physically expose some form of display output signal, which most laptops don't support. As far as I know, the Dell Precision 7000 line-up can enable a DisplayPort Direct Output mode in the BIOS, which routes the dGPU signal directly to the video output ports while keeping the iGPU rendering the built-in display.

Prerequisites

Please note that this tutorial does not cover every Optimus laptop. Generally, a laptop with some specific hardware capabilities is required. If your laptop comes with a swappable MXM form factor graphics card, it is highly likely that you'll succeed.

Also, because laptops vary so much from manufacturer to manufacturer, there is no way to tell whether a laptop is MUXed or MUXless, or how a MUXless laptop loads its firmware, before you get your hands on it. However, the firmware loading mechanism of your GPU is crucial for resolving the infamous Code 43 problem, so please do enough homework (look for success reports in particular) before purchasing a laptop for this purpose.

Hardware

  • A CPU that supports hardware virtualization (Intel VT-x) and IOMMU (Intel VT-d).

    • Check here for a full list of qualified CPUs
  • A motherboard that supports IOMMU with a decent IOMMU layout, i.e. your dGPU is in its own IOMMU group apart from other devices.

    • Since there is no ACS support on laptops (maybe some barebones have it), a decent IOMMU layout is crucial, as the ACS override patch is not applicable.
  • Verification:

    • Boot with the intel_iommu=on kernel parameter and use dmesg | grep -i iommu to verify your IOMMU support; this will also print your IOMMU layout.

    • Example:

      • # From "lspci":
        # 00:01.0 PCI bridge: Intel Corporation Sky Lake PCIe Controller (x16) (rev 05)
        # 01:00.0 VGA compatible controller: NVIDIA Corporation Device 1bb6 (rev a1)
        
        # From "dmesg | grep iommu"
        [    0.000000] DMAR: IOMMU enabled
        [    0.086383] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
        [    1.271222] iommu: Adding device 0000:00:00.0 to group 0
        [    1.271236] iommu: Adding device 0000:00:01.0 to group 1
        [    1.271244] iommu: Adding device 0000:00:04.0 to group 2
        [    1.271257] iommu: Adding device 0000:00:14.0 to group 3
        [    1.271264] iommu: Adding device 0000:00:14.2 to group 3
        [    1.271277] iommu: Adding device 0000:00:15.0 to group 4
        [    1.271284] iommu: Adding device 0000:00:15.1 to group 4
        [    1.271293] iommu: Adding device 0000:00:16.0 to group 5
        [    1.271301] iommu: Adding device 0000:00:17.0 to group 6
        [    1.271313] iommu: Adding device 0000:00:1c.0 to group 7
        [    1.271325] iommu: Adding device 0000:00:1c.2 to group 8
        [    1.271339] iommu: Adding device 0000:00:1c.4 to group 9
        [    1.271360] iommu: Adding device 0000:00:1f.0 to group 10
        [    1.271367] iommu: Adding device 0000:00:1f.2 to group 10
        [    1.271375] iommu: Adding device 0000:00:1f.3 to group 10
        [    1.271382] iommu: Adding device 0000:00:1f.4 to group 10
        [    1.271390] iommu: Adding device 0000:00:1f.6 to group 10
        [    1.271395] iommu: Adding device 0000:01:00.0 to group 1
        [    1.271407] iommu: Adding device 0000:02:00.0 to group 11
        [    1.271418] iommu: Adding device 0000:03:00.0 to group 12
        
      • Here the GPU and its root port are in the same group, and there is no other device in the group, which makes it a decent IOMMU layout.
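Besides reading dmesg, you can walk sysfs at any time to print the current grouping; a minimal sketch using the standard /sys/kernel/iommu_groups layout (the optional base-directory argument exists only so the function can be exercised on machines without an IOMMU):

```shell
#!/bin/bash
# List every IOMMU group currently known to the kernel and the PCI
# addresses of the devices inside it (same grouping as the dmesg output).
list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"
    shopt -s nullglob
    local g d
    for g in "$base"/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo "  ${d##*/}"   # PCI address, e.g. 0000:01:00.0
        done
    done
}
list_iommu_groups
```

Feed each printed address to lspci -nns if you also want the device names.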

System & Software

  • Host:
    • I'm currently running Ubuntu 16.04 (with the 4.15 kernel), but it should also work on other distributions.
    • System should be installed in UEFI mode, and boot via UEFI.
  • Guest:
    • A Windows edition that supports RemoteFX, for example Windows 10 Pro.
  • QEMU:
    • Currently running QEMU 2.11.2 with the PulseAudio and vcpupin patches
    • If you use QEMU 2.10 or higher and encounter a boot hang (dots spinning forever), check your OVMF version; it might need an upgrade. Refer here for further details.
  • RDP Client:
    • FreeRDP 2.0 or above for an RDP 8 connection with RemoteFX.

Note: Keep your dual-boot Windows if you still want to run software like XTU.

Update: Attention for MUXless laptops

I'm not sure anyone has succeeded with a MUXless laptop yet (or failed with a MUXed laptop). If you do succeed, please consider leaving a comment with your setup (laptop model, year of production/purchase, etc.) so that other people have a reference.

Now, for switchable graphics there are three different solutions: MUXed (old), MUXless, and MUXed (new).

Circuit diagram

Most modern Optimus laptops use the MUXless scheme, while some others (HP/ThinkPad/Dell mobile workstations, the Clevo P650, some Alienware models, etc.) use the MUXed scheme. In the dark age before the Optimus solution came out, there was an old MUXed scheme that required a reboot to switch graphics cards and could only use one at a time; the modern MUXed scheme allows switching between Optimus and dGPU-only modes, and can even have a display output port hooked directly to the dGPU while using Optimus (only applicable to some laptops).

For people who encounter Code 43 with a MUXless scheme (that is, you can see your dGPU in the guest, can even install the Nvidia driver without any problem, but still get this error code), the cause is a failed ACPI call during firmware loading. In short:

  • The Nvidia driver tries to read your dGPU ROM from the system BIOS instead of using the ROM you provided through vfio-pci (this is actually how a real MUXless dGPU gets its ROM).

  • Please refer to u/jscinoz's optimus-vfio-docs if you encounter such a problem

Some success reports

Bumblebee setup guide

Note: If you don't want to set up Bumblebee, follow this to get your GPU's ACPI address, and power it on/off by referring to the script here. (Credit to Verequies from Reddit.)

Note: You might need to disable Secure Boot before continuing with this part.

We will first go through my Bumblebee setup process. I installed Bumblebee first and set up passthrough second, but it should also work the other way around.

  1. (Optional) Solving the known interference between TLP and Bumblebee

    • If you don't want to use TLP, please skip this part.

    • TLP is a must-have for a Linux laptop, since it provides extra power-saving policies for your battery. Install it with sudo apt install tlp

    • Add the output of lspci | grep "NVIDIA" | cut -b -8 to RUNTIME_PM_BLACKLIST in /etc/default/tlp (uncomment the line if necessary). This will solve the interference.

  2. Install the Nvidia proprietary driver through the Ubuntu system settings (or another install method you prefer).

  3. (Troubleshooting) Solving the library linking problem in the Nvidia driver.

    • If error messages show up after executing sudo prime-select intel or sudo prime-select nvidia, follow the instructions below.

    • # Replace 'xxx' with the version of the nvidia driver you installed
      # You might need to perform this operation every time you upgrade your nvidia driver.
      sudo mv /usr/lib/nvidia-xxx/libEGL.so.1 /usr/lib/nvidia-xxx/libEGL.so.1.org
      sudo mv /usr/lib32/nvidia-xxx/libEGL.so.1 /usr/lib32/nvidia-xxx/libEGL.so.1.org
      sudo ln -s /usr/lib/nvidia-xxx/libEGL.so.375.66 /usr/lib/nvidia-xxx/libEGL.so.1
      sudo ln -s /usr/lib32/nvidia-xxx/libEGL.so.375.66 /usr/lib32/nvidia-xxx/libEGL.so.1
    • If everything works correctly, sudo prime-select nvidia followed by a logout will give you a login loop, while sudo prime-select intel (run it in another tty via Ctrl+Alt+F2) will solve the login loop.

    • It is recommended to switch back and forth once if you run into problems after an nvidia driver update.

  4. Blocking nouveau

    • Add the content below to /etc/modprobe.d/blacklist-nouveau.conf:

      • blacklist nouveau
        options nouveau modeset=0
        
    • sudo update-initramfs -u when finished.

    • If your display manager runs under Wayland (e.g. Ubuntu 18.04 runs GDM in Wayland mode, even though GNOME runs under X11), some extra work might be needed to prevent nouveau from loading. Refer here for details.

  5. (Optional) Install CUDA. Since the CUDA installation process is well documented by Nvidia, I will skip the details.

    • For CUDA, I personally recommend the runfile installation. It is far easier to maintain compared to other installation methods. Just make sure neither the display driver (self-contained in the runfile) nor the OpenGL libraries are selected during the runfile installation. ONLY install the CUDA Toolkit, and don't run nvidia-xconfig.
  6. Solve some ACPI problem before bumblebee install:

    • Add nogpumanager acpi_osi=! acpi_osi=Linux acpi_osi=\"Windows 2015\" pcie_port_pm=off to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
    • sudo update-grub when finished.
    • (Troubleshooting) If the prime-select command updates grub, be sure to check your grub file again, as it does not handle escape characters correctly: \" would become \
  7. Install bumblebee

    • # For Ubuntu 18.04, the official ppa should work
      sudo add-apt-repository ppa:bumblebee/testing
      sudo apt update
      
      sudo apt install bumblebee bumblebee-nvidia
    • Edit /etc/bumblebee/bumblebee.conf:

      • Change Driver= to Driver=nvidia
      • Change all occurrences of nvidia-current to nvidia-xxx (xxx is your nvidia driver version)
      • KernelDriver=nvidia-xxx
      • It appears the nvidia driver changed its location in Ubuntu 18.04; refer here for details and solutions.
    • Save the file and sudo service bumblebeed restart

  8. Kernel module loading modification:

    • Make sure the corresponding section in /etc/modprobe.d/bumblebee.conf looks like below

      • # Again, xxx is your nvidia driver version.
        blacklist nvidia-xxx
        blacklist nvidia-xxx-drm
        blacklist nvidia-xxx-updates
        blacklist nvidia-experimental-xxx
    • Add the content below to /etc/modules-load.d/modules.conf

      • i915
        bbswitch
        
    • sudo update-initramfs -u when finished.

  9. (Optional) Create a group for bumblebee so that you don't need to sudo every time:

    • If cat /etc/group | grep $(whoami) already shows your user name under the bumblebee group, skip this part.

    • sudo groupadd bumblebee && sudo gpasswd -a $(whoami) bumblebee

  10. (Troubleshooting) Try optirun nvidia-smi; if you encounter [ERROR][XORG] (EE) Failed to load module "mouse" (module does not exist, 0), add the lines below to /etc/bumblebee/xorg.conf.nvidia

  • Section "Screen"
      Identifier "Default Screen"
      Device "DiscreteNvidia"
    EndSection
    
  • Check here for more information about this problem.

  11. Verification:

    • cat /proc/acpi/bbswitch should give you the output 0000:01:00.0 OFF

    • optirun cat /proc/acpi/bbswitch should give you the output 0000:01:00.0 ON

    • nvidia-smi should give you something like:

      • NVIDIA-SMI couldn't find libnvidia-ml.so library in your system. Please make sure that the NVIDIA Display Driver is properly installed and present in your system.
        Please also try adding directory that contains libnvidia-ml.so to your system PATH.
        
    • optirun nvidia-smi should give you something like:

      • Wed Nov 15 00:36:53 2017       
        +-----------------------------------------------------------------------------+
        | NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
        |-------------------------------+----------------------+----------------------+
        | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
        | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
        |===============================+======================+======================|
        |   0  Quadro P5000        Off  | 00000000:01:00.0 Off |                  N/A |
        | N/A   44C    P0    30W /  N/A |      9MiB / 16273MiB |      3%      Default |
        +-------------------------------+----------------------+----------------------+
                                                                                       
        +-----------------------------------------------------------------------------+
        | Processes:                                                       GPU Memory |
        |  GPU       PID   Type   Process name                             Usage      |
        |=============================================================================|
        |    0      7934      G   /usr/lib/xorg/Xorg                             9MiB |
        +-----------------------------------------------------------------------------+
        
  12. Congratulations! Enjoy this moment a little before moving on to the next part.
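The two bbswitch checks above can be folded into a tiny helper; this is only a sketch that assumes the /proc/acpi/bbswitch interface shown in the verification step (the function takes an alternative file path purely so it can be tested):

```shell
#!/bin/bash
# Minimal sketch: report the dGPU power state as seen by bbswitch.
bb_power_state() {
    local f="${1:-/proc/acpi/bbswitch}"
    if [ -r "$f" ]; then
        # bbswitch prints e.g. "0000:01:00.0 OFF"; keep only the last field
        awk '{print $NF}' "$f"
    else
        echo "UNKNOWN (bbswitch not loaded)"
    fi
}
echo "dGPU power state: $(bb_power_state)"
```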

dGPU passthrough guide

System & Environment setup

  1. Set up QEMU:

    • QEMU from the official Ubuntu PPA should work; just sudo apt install qemu-kvm qemu-utils qemu-efi ovmf.

      • Please note that QEMU 2.10 or above requires a newer version of OVMF (if you use UEFI for your VM), otherwise you will get a boot hang. Refer here for details about which version. The simplest solution is to use the ovmf package from the 18.04 PPA directly.
    • Here I use QEMU 2.11.2 with the PulseAudio patch from spheenik, which provides better audio quality and resolves the crackling issue, plus the vcpupin patch from saveriomiroddi for better performance.

    • Follow the instructions below to build the QEMU I use (only if you prefer to):

      • # Clone saveriomiroddi's vcpupin version of QEMU
        git clone https://github.com/saveriomiroddi/qemu-pinning.git qemu
        cd qemu
        git checkout v2.11.2-pinning
        
        # Apply the pulseaudio patch from spheenik's gist; we're applying the v1 version.
        wget -O - https://gist.githubusercontent.com/spheenik/8140a4405f819c5cd2465a65c8bb6d09/raw/9735bcfaaaef45cf47e1b5d92c5006adf6ecd737/v1.patch | patch -p0
        
        # (Optional)
        # You might need to set your git email or name before committing changes
        git commit -am "Apply pulse audio patch"
        
        # Install dependencies
        sudo apt install libjpeg-turbo8-dev libepoxy-dev libdrm-dev libgbm-dev libegl1-mesa-dev libboost-thread1.58-dev libboost-random1.58-dev libiscsi-dev libnfs-dev libfdt-dev libpixman-1-dev libssl-dev socat libsdl1.2-dev libspice-server-dev autoconf libtool xtightvncviewer tightvncserver x11vnc libsdl1.2-dev uuid-runtime uuid uml-utilities bridge-utils python-dev liblzma-dev libc6-dev libusb-1.0-0-dev checkinstall virt-viewer cpu-checker nettle-dev libaio-dev
        
        # Prepare to build
        mkdir build
        cd build
        
        # QEMU does not support python3
        ../configure --prefix=/usr \
            --audio-drv-list=alsa,pa,oss \
            --enable-kvm \
            --disable-xen \
            --enable-sdl \
            --enable-vnc \
            --enable-vnc-jpeg \
            --enable-opengl \
            --enable-libusb \
            --enable-vhost-net \
            --enable-spice \
            --target-list=x86_64-softmmu \
            --python=/usr/bin/python2
        
        make -j8
        
        # QEMU does not provide 'make uninstall'
        # Use checkinstall here so that you can remove it by 'dpkg -r'
        # Assigning a version number that starts with a digit is mandatory when using checkinstall
        sudo checkinstall
  2. Setup kernel module and parameters:

    • Add intel_iommu=on,igfx_off kvm.ignore_msrs=1 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then sudo update-grub.

      • From here: some Windows guest third-party applications/tools (like GPU-Z or PassMark 9.0) trigger MSR reads/writes directly; if they access an unhandled MSR register, the guest will soon BSOD. So we add kvm.ignore_msrs=1 to grub as a workaround.
    • Add the content below to /etc/initramfs-tools/modules (order matters!)

      • vfio
        vfio_iommu_type1
        vfio_pci
        vfio_virqfd
        vhost-net
        
      • sudo update-initramfs -u when finished.

    • Reboot.

    • lsmod for verification.

  3. (Optional) Setup hugepages

    • Reasons to use hugepages

    • Check cat /proc/cpuinfo to see if it has the pse flag (for 2 MB pages) or the pdpe1gb flag (for 1 GB pages)

    • For pdpe1gb:

      • Add default_hugepagesz=1G hugepagesz=1G hugepages=8 transparent_hugepage=never to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub; this reserves 8 GB of huge pages.
    • For pse:

      • Add default_hugepagesz=2M hugepagesz=2M hugepages=4096 transparent_hugepage=never to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub; this does the same thing with 2 MB pages.
    • sudo update-grub when finished.

    • Reboot.

    • ls /dev | grep hugepages for verification.
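As a quick sanity check on the two numbers above (8 for 1 GB pages, 4096 for 2 MB pages), the hugepages= value is just the desired reservation divided by the page size; a sketch with both sizes in MiB:

```shell
#!/bin/bash
# Compute the hugepages= value for a desired reservation.
# Arguments: total reservation in MiB, page size in MiB.
pages_needed() {
    echo $(( $1 / $2 ))
}
pages_needed 8192 1024   # 8 GiB of 1 GiB pages -> hugepages=8
pages_needed 8192 2      # 8 GiB of 2 MiB pages -> hugepages=4096
```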

Prepare your script

  1. Get your Subsystem ID (SSID) and Subsystem Vendor ID (SVID):

    • Run optirun lspci -nnk -s 01:00.0, which will give you an output like this:

      • 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1bb6] (rev a1)
        	Subsystem: Dell Device [1028:07b1]
        	Kernel driver in use: nvidia
        	Kernel modules: nvidiafb, nouveau, nvidia_384_drm, nvidia_384
        
    • Here, 1028 is the SVID and 07b1 is the SSID. We will use them later.

  2. Setup audio:

  3. Setup VM:

    • Note: The commands here only serve as a reference; check the QEMU documentation for more detail.

    • Note: I personally don't prefer libvirt, as editing XML is annoying to me; use libvirt if you like. virsh domxml-from-native qemu-argv xxx.sh can help you convert a QEMU startup script to libvirt XML. Refer here for more information.

    • Note: If you would like to put your GPU at some other address, refer here for details about the ICH9 and GMCH (Graphics & Memory Controller Hub) definitions. The PCIe device layout of your guest machine should follow these guidelines, to prevent potential problems.

    • Note: The romfile option in the script below is not required if there is a standalone GPU ROM chip bundled with your GPU (the case for MXM; not sure about soldered GPUs). However, if you decide to use the romfile option, please extract the ROM yourself instead of downloading a copy from the Internet.

    • Create a disk image for your VM:

      • qemu-img create -f raw WindowsVM.img 75G
    • Install iptables and tunctl if you don't have them.

    • Create two scripts for tap networking:

      • tap_ifup (check files below in this gist)
      • tap_ifdown (check files below in this gist)
    • Use dpkg -L ovmf to locate your OVMF_VARS.fd file, copy it to the directory where you store your VM image, then rename it to WIN_VARS.fd (or another name you like).

    • Create a script for starting your VM:

      • Recall that our GPU has SVID 1028 and SSID 07b1; use these two values to set the corresponding vfio-pci options (see the script below).

        • This will solve the all-zero SSID/SVID problem inside the VM.
      • Don't forget to get a copy of the VirtIO drivers

      • #!/bin/bash
        
        # Set audio output options
        export QEMU_AUDIO_DRV=pa
        export QEMU_PA_SERVER="<your-pulse-socket>"
        export QEMU_AUDIO_TIMER_PERIOD=500
        
        # Use command below to generate a MAC address
        # printf '52:54:BE:EF:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256))
        
        # Refer https://github.com/saveriomiroddi/qemu-pinning for how to set your cpu affinity properly
        qemu-system-x86_64 \
          -name "Windows10-QEMU" \
          -machine type=q35,accel=kvm \
          -global ICH9-LPC.disable_s3=1 \
          -global ICH9-LPC.disable_s4=1 \
          -enable-kvm \
          -cpu host,kvm=off,hv_vapic,hv_relaxed,hv_spinlocks=0x1fff,hv_time,hv_vendor_id=12alphanum \
          -smp 6,sockets=1,cores=3,threads=2 \
          -vcpu vcpunum=0,affinity=1 -vcpu vcpunum=1,affinity=5 \
          -vcpu vcpunum=2,affinity=2 -vcpu vcpunum=3,affinity=6 \
          -vcpu vcpunum=4,affinity=3 -vcpu vcpunum=5,affinity=7 \
          -m 8G \
          -mem-path /dev/hugepages \
          -mem-prealloc \
          -balloon none \
          -rtc clock=host,base=localtime \
          -device ich9-intel-hda -device hda-output \
          -device qxl,bus=pcie.0,addr=1c.4,id=video.2 \
          -vga none \
          -nographic \
          -serial none \
          -parallel none \
          -k en-us \
          -spice port=5901,addr=127.0.0.1,disable-ticketing \
          -usb \
          -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
          -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,x-pci-sub-device-id=0x07b1,x-pci-sub-vendor-id=0x1028,multifunction=on,romfile=MyGPU.rom \
          -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
          -drive if=pflash,format=raw,file=WIN_VARS.fd \
          -boot menu=on \
          -boot order=c \
          -drive id=disk0,if=virtio,cache=none,format=raw,file=WindowsVM.img \
          -drive file=windows10.iso,index=1,media=cdrom \
          -drive file=virtio-win-0.1.141.iso,index=2,media=cdrom \
          -netdev type=tap,id=net0,ifname=tap0,script=tap_ifup,downscript=tap_ifdown,vhost=on \
          -device virtio-net-pci,netdev=net0,addr=19.0,mac=<address you generated> \
          -device pci-bridge,addr=12.0,chassis_nr=2,id=head.2 \
          -device usb-tablet
          
        # The -device usb-tablet pointer will not be accurate in some cases; another option is to use
        # -device virtio-keyboard-pci,bus=head.2,addr=03.0,display=video.2 \
        # -device virtio-mouse-pci,bus=head.2,addr=04.0,display=video.2 \
      • For libvirt, refer here for an example of how to masquerade your Subsystem ID. (Credit to jscinoz)
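To script step 1, the SVID/SSID pair can be pulled out of the lspci output with sed; a small sketch (the sample line is the Dell subsystem shown above):

```shell
#!/bin/bash
# Extract the Subsystem Vendor ID and Subsystem ID from "lspci -nnk" output.
subsys_ids() {
    sed -n 's/.*Subsystem:.*\[\([0-9a-f]\{4\}\):\([0-9a-f]\{4\}\)\].*/SVID=\1 SSID=\2/p'
}
# Real usage: optirun lspci -nnk -s 01:00.0 | subsys_ids
echo 'Subsystem: Dell Device [1028:07b1]' | subsys_ids   # prints: SVID=1028 SSID=07b1
```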

Run your VM and configure the guest side

  1. Bind your dGPU to the vfio-pci driver:
    • echo "10de 1bb6" > "/sys/bus/pci/drivers/vfio-pci/new_id"
  2. Run the script to launch your VM
    • Install your Windows system through the host-side SPICE client (remote-viewer spice://127.0.0.1:5901).
      • -device qxl,bus=pcie.0,addr=1c.4,id=video.2 needs to be commented out and -vga none changed to -vga qxl, so that QXL becomes the first GPU and you can see the POST screen from the SPICE client.
      • Change back once you have everything working.
    • IMPORTANT: The driver can be a cause of Code 43; please try both the driver your manufacturer provided and the driver from the Nvidia website.
    • Add 192.168.99.0/24 to your Windows VM firewall exceptions:
      • In Control Panel\System and Security\Windows Defender Firewall, click Advanced settings in the right panel, then Inbound Rules -> New Rule.
      • Make sure you can ping your VM from the host.
      • Some details about setting up the VirtIO drivers are not included here.
    • Enable remote desktop in Windows VM:
      • Right click This PC, click Remote settings in the right panel.
    • Verify that your GPU (in the guest) has the correct hardware ID: Device Manager -> double-click your dGPU -> Details tab -> Hardware Ids
      • For me, it's PCI\VEN_10DE&DEV_1BB6&SUBSYS_07B11028. I would get PCI\VEN_10DE&DEV_1BB6&SUBSYS_00000000 if I didn't have it masqueraded.
      • In some cases, you will find your dGPU as a Video controller (VGA compatible) under Unknown Device before you install the nvidia driver.
    • Install the official nvidia driver.
      • If everything goes smoothly, you will now be able to see your GPU in the Performance tab of Task Manager.
  3. Post VM shut down operation:
    • Unbind your dGPU from the vfio-pci driver: echo "0000:01:00.0" > "/sys/bus/pci/drivers/vfio-pci/0000:01:00.0/driver/unbind"
    • Power off your dGPU: echo "OFF" >> /proc/acpi/bbswitch
    • Run optirun nvidia-smi for verification.
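The bind -> run -> unbind -> power-off cycle in steps 1-3 can be sketched as one root-run function; the PCI address, the id pair 10de 1bb6, and the start-vm.sh script name are this guide's example values, so adjust them to your machine:

```shell
#!/bin/bash
# Sketch of the full passthrough cycle described in the steps above.
# Run as root; all ids and paths are this guide's examples.
passthrough_cycle() {
    echo "10de 1bb6" > /sys/bus/pci/drivers/vfio-pci/new_id        # bind dGPU to vfio-pci
    ./start-vm.sh                                                  # blocks until the VM exits
    echo "0000:01:00.0" > /sys/bus/pci/drivers/vfio-pci/0000:01:00.0/driver/unbind
    echo "OFF" >> /proc/acpi/bbswitch                              # power the dGPU off again
    optirun nvidia-smi                                             # verify the dGPU works on the host
}
```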

RemoteFX configure and fine tuning

Configure RemoteFX

  1. Run gpedit.msc through Win+R.
  2. Locate yourself at Computer Configuration -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Remote Session Environment
    • Enable Use advanced RemoteFX graphics for RemoteApp
    • (Optional) Enable Configure image quality for RemoteFX adaptive Graphics, set it to High
    • Enable Enable RemoteFX encoding for RemoteFX clients designed for Windows Server 2008 R2 SP1
    • Enable Configure compression for RemoteFX data, set it to Do not use an RDP compression algorithm
      • Connection compression adds extra encode/decode latency, which we don't want.
  3. Locate yourself at Computer Configuration -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Remote Session Environment -> RemoteFX for Windows Server 2008 R2
    • Enable Configure RemoteFX
    • (Optional) Enable Optimize visual experience when using RemoteFX, set both option to Highest.

FreeRDP client configuration:

  • Make sure you have FreeRDP 2.0 (do NOT use Remmina from the official Ubuntu PPA)
    • Compile one yourself or get a nightly build from here
  • Get your Windows VM IP address (or assign a static one), here we use 192.168.99.2 as an example.
  • xfreerdp /v:192.168.99.2:3389 /w:1600 /h:900 /bpp:32 +clipboard +fonts /gdi:hw /rfx /rfx-mode:video /sound:sys:pulse +menu-anims +window-drag
    • Refer here for more detail.

Lifting 30-ish fps restriction:

  1. Start Registry Editor.
  2. Locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations
  3. On the Edit menu, click New, and then click DWORD(32-bit) Value.
  4. Type DWMFRAMEINTERVAL, and then press Enter.
  5. Right-click DWMFRAMEINTERVAL, click Modify.
  6. Click Decimal, type 15 in the Value data box, and then click OK. This sets the maximum frame rate to 60 frames per second (FPS).

Verify codec usage and fine tuning your frame rate:

  • Bring up your Task Manager: if a simple Start menu pop-up animation (Windows 10) consumes 40+ Mbps, then you are NOT using the RemoteFX codec but just vanilla RDP. At 1600x900, the Start menu pop-up animation should consume less than 25 Mbps of bandwidth, while a 1600x900 Heaven benchmark consumes less than 170 Mbps at peak.
  • Fire up a benchmark like Unigine Heaven in the VM and check whether your dGPU can stably maintain above 90~95% utilization. If not, turn down your resolution and try again; you will find a sweet spot that suits your hardware.
  • If you don't care much about image quality, try adding the /gfx-h264:AVC444 option to your FreeRDP command. This uses RDP 8.1 with the H.264 4:4:4 codec, which consumes only 20~30-ish Mbps of bandwidth even when running a full-window Heaven benchmark, but the artifacts this codec brings are more than noticeable.

For gaming:

  • A 1600x900 or lower resolution RemoteFX connection is recommended for most Core i7 laptops.
  • A 1080p connection with the game running in 1600x900 windowed mode has the same performance as above.

For other task:

  • Tasks that are more GPU-compute intensive (whose operations run asynchronously from display updates) will not be bottlenecked by the CPU, so you can choose a higher resolution like 1080p.

Steam in-home Streaming

Given the limitations of RemoteFX, services like Steam in-home streaming or GeForce Experience are better suited to gaming scenarios.

Extra precautions should be taken for Steam in-home Streaming:

  • A remote desktop connection that uses the dGPU inside the VM to render its display is still required, or the game will literally not run on the dGPU you just passed through.
    • Not 100 percent sure about this. Maybe it's possible to manually tell the game which GPU to use?
    • One more thing: the Nvidia Control Panel is not accessible within an RDP session. Nothing will pop up no matter how hard you click it.
  • Make sure your dGPU is the ONLY display adapter enabled inside the VM.
  • Use this method to unlock the remote screen; note that the current RDP session will be terminated once the unlock succeeds.
    • The Pro or a higher edition of Windows is required.
    • Do not launch the script until the game appears in the taskbar, otherwise it won't use your dGPU.

External display setup

External displays require a BIOS setting that is rarely seen on Optimus laptops.

  • For some Dell laptops (such as mine), there is a Display port direct output mode option in Video -> Switchable Graphics; enabling it assigns all display ports (mDP, HDMI, Thunderbolt, etc.) directly to the dGPU. Check whether your BIOS offers a similar option.
  • However, you will lose the ability to extend your host machine's display, as no display output port remains connected to the iGPU, i.e. your host.
  • While RemoteFX compresses the image in exchange for performance (not good if you require extreme image quality for professional use), this problem doesn't exist with an external display setup, as it hooks into the dGPU directly.

Looking glass

  • If your machine can expose a video output port to the dGPU, then using Looking Glass is possible.
  • Moreover, if you have a Quadro card, you can load an EDID directly from a file in the Nvidia Control Panel and don't need to plug anything in. It can even run without a physical video output port exposed to the dGPU.
    • Though you still need to plug something in for the first-time setup, otherwise the Nvidia Control Panel won't show up.

FAQ

How did you extract your vBIOS?

Well, except for laptops that use an MXM graphics card, the vBIOS of an onboard graphics card is actually part of the system BIOS.

  • For the record, I did succeed without the romfile option, but there is no guarantee for this approach.
  • For MXM graphics cards, try using nvflash instead of GPU-Z. (In Windows) Disable your dGPU in Device Manager and run the command nvflash -6 xxx.rom with administrator privileges to extract your vBIOS as xxx.rom (this is the way I did it). Try a different version of nvflash if you fail.
  • For onboard GPUs:
    • Put AFUDOS.EXE (or another BIOS backup tool, depending on your BIOS) on a DOS-bootable USB device, then use it to extract your entire BIOS.
    • Then boot into Windows and use PhoenixTool (or a similar tool) to extract the modules contained in that BIOS.
      • Note that those extracted modules will have weird names, so you can't be sure which one is for your onboard graphics card.
    • Finally, use a vBIOS tweaker (MaxwellBiosTweaker, Mobile Pascal Tweaker, or an equivalent) to find out which module is your vBIOS.
      • Simply drag those module ROMs onto the tweaker. Module ROMs that are not a vBIOS will be displayed as Unsupported Device, while a vBIOS (typically around 50~300 KB in size) will be read successfully and show its information, such as device ID and vendor ID.
      • Manufacturers tend to include several vBIOSes for generic purposes. Be sure you find the correct vBIOS, with the same device ID as the one shown in Device Manager.
      • Disclaimer: I only know that this method could be used to extract the vBIOS of onboard graphics in the old days. However, laptop BIOSes vary, and I am not sure whether the extraction process will go smoothly or whether the extracted and identified vBIOS ROM can be used in QEMU without problems.
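As a possible shortcut on Linux, the kernel can sometimes expose the card's ROM through sysfs while nothing is using it. This is a hedged sketch only (the PCI address is a placeholder; it may fail on laptops whose vBIOS lives solely in the system BIOS):

```shell
# Placeholder PCI address; find yours with: lspci | grep -i nvidia
GPU=0000:01:00.0
echo 1 > /sys/bus/pci/devices/$GPU/rom   # enable ROM read access
cat /sys/bus/pci/devices/$GPU/rom > vbios.rom
echo 0 > /sys/bus/pci/devices/$GPU/rom   # disable it again
```

If the cat fails with an I/O error, the ROM simply isn't exposed this way and you'll need one of the extraction methods above.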

Regarding AMD CPU/GPU?

I've never owned a laptop with an AMD CPU/GPU myself; worth trying though.

What about GVT-g? Can I replicate an Optimus system inside a VM?

Recently the GVT project successfully exposed the guest screen with dmabuf, so there might be some hope?

Last time I tried this, passing the dGPU to a GVT-g VM was possible, but the dGPU reported Code 12 ("not enough resources") inside the VM. No idea why.

What about those bare-bone laptops?

Bare-bone laptops with a desktop CPU already have their iGPU disabled in a way you cannot revert (as far as I know), and can only use their dGPU to render the display. Thus there will be no display if you pass it through to your VM.

For those bare-bone laptops that have two dGPUs, passing one to your VM sounds possible? Not sure. Just take extra care if you have two identical dGPUs. Check here for more detail.

Options other than RemoteFX?

Try Nvidia GameStream with the Moonlight client, or Parsec. Or just pick whatever is handy for you.

Known issue

For RemoteFX connections with xfreerdp:

  • Only windowed games will work; full screen will trigger a d3d11 0x087A0001 "cannot set resolution" error. Media players are not affected by this.
    • As a workaround, use Borderless Gaming or an equivalent.
    • A windowed client doesn't seem to have this problem.
  • The mouse will go wild, because relative mouse input is unsupported in RDSH/RDVH connections.

Reference

XPS-15 9560 Getting Nvidia To Work on KDE Neon

Hexadecimal to Decimal Converter

FreeRDP-User-Manual

PCI passthrough via OVMF - Arch Wiki

CUDA installation guide

Frame rate is limited to 30 FPS in Windows 8 and Windows Server 2012 remote sessions

#!/bin/bash
# Tear down the tap device, dnsmasq instance, and iptables rules
# created by the companion start script.

# tap device name
TAP=tap0
# Network information
NETWORK=192.168.99.0
NETMASK=255.255.255.0
GATEWAY=192.168.99.1

PIDFILE="/var/run/qemu-dnsmasq-$TAP.pid"
if [ -f "$PIDFILE" ]; then
    DNSMASQPID=$(cat "$PIDFILE")
    if [ -n "$DNSMASQPID" ]; then
        kill -s SIGTERM "$DNSMASQPID" && echo "dnsmasq terminated"
    fi
fi

ip link set "$TAP" down
ip addr flush dev "$TAP"

# Delete the rules added by add_iptable_rules in the start script
iptables -t nat -D POSTROUTING -s $NETWORK/$NETMASK -j MASQUERADE
iptables -D INPUT -i $TAP -s $NETWORK/$NETMASK -d $NETWORK/$NETMASK -j ACCEPT
iptables -D INPUT -i $TAP -p tcp -m tcp --dport 67 -j ACCEPT
iptables -D INPUT -i $TAP -p udp -m udp --dport 67 -j ACCEPT
iptables -D INPUT -i $TAP -p tcp -m tcp --dport 53 -j ACCEPT
iptables -D INPUT -i $TAP -p udp -m udp --dport 53 -j ACCEPT
iptables -D FORWARD -i $TAP -o $TAP -j ACCEPT
iptables -D FORWARD -s $NETWORK/$NETMASK -i $TAP -j ACCEPT
iptables -D FORWARD -s $GATEWAY -i $TAP -j ACCEPT
iptables -D FORWARD -d $NETWORK/$NETMASK -o $TAP -m state --state RELATED,ESTABLISHED -j ACCEPT

echo 0 > /proc/sys/net/ipv4/ip_forward && echo "ip_forward disabled"
#!/bin/bash
# Bring up the tap device, start dnsmasq for DHCP/DNS, and add the
# iptables NAT rules so the VM can reach the external network.

# Set to the name of your tap device
TAP=tap0
# Network information
NETWORK=192.168.99.0
NETMASK=255.255.255.0
GATEWAY=192.168.99.1
DHCPRANGE=192.168.99.2,192.168.99.10

check_tap() {
    # Abort if the tap device does not exist yet
    if ! ip link show "$TAP" > /dev/null 2>&1; then
        echo "tap device $TAP not found" >&2
        exit 1
    fi
}

enable_ip_forward() {
    echo 1 > /proc/sys/net/ipv4/ip_forward
}

start_dnsmasq() {
    dnsmasq \
        --strict-order \
        --interface="$TAP" \
        --listen-address="$GATEWAY" \
        --bind-interfaces \
        --dhcp-range="$DHCPRANGE" \
        --dhcp-no-override \
        --pid-file="/var/run/qemu-dnsmasq-$TAP.pid"
}

add_iptable_rules() {
    iptables-restore -n <<EOF
*nat
-A POSTROUTING -s $NETWORK/$NETMASK -j MASQUERADE
COMMIT
*filter
-A INPUT -i $TAP -s $NETWORK/$NETMASK -d $NETWORK/$NETMASK -j ACCEPT
# Allow port 67 for DHCP, port 53 for dnsmasq
-A INPUT -i $TAP -p tcp -m tcp --dport 67 -j ACCEPT
-A INPUT -i $TAP -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i $TAP -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i $TAP -p udp -m udp --dport 53 -j ACCEPT
# Connect to the external network
-A FORWARD -i $TAP -o $TAP -j ACCEPT
-A FORWARD -s $NETWORK/$NETMASK -i $TAP -j ACCEPT
-A FORWARD -s $GATEWAY -i $TAP -j ACCEPT
-A FORWARD -d $NETWORK/$NETMASK -o $TAP -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT
EOF
}

start_tap() {
    enable_ip_forward
    check_tap
    # Flush old config and set new config
    ip addr flush dev "$TAP"
    ip addr add "$GATEWAY/$NETMASK" dev "$TAP"
    ip link set "$TAP" up
    start_dnsmasq
    add_iptable_rules
}

start_tap
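For context, once the start script above has run, the tap device can be handed to QEMU with something like the following sketch. The memory size, disk path, and MAC address are placeholders; swap e1000 for virtio-net-pci if the guest has the VirtIO drivers installed.

```shell
qemu-system-x86_64 \
  -enable-kvm -m 8G \
  -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
  -device e1000,netdev=net0,mac=52:54:00:12:34:56 \
  -drive file=/path/to/windows.qcow2,format=qcow2
```

With script=no,downscript=no, QEMU leaves the tap configuration entirely to these scripts; the guest should then pick up an address in the 192.168.99.2-10 DHCP range from dnsmasq.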
@Simbaclaws

Simbaclaws commented May 20, 2020

@amadejkastelic
I haven't tried supplying a fake battery via the ACPI table yet, no.
However, in my case I did try to clear the Hyper-V vendor ID and still wasn't able to display any sort of output on a Linux VM as guest.

That should work regardless of this setup, because the drivers in use in this case are nouveau: open-source drivers that don't do any kind of ACPI trickery with battery detection.

I have been able to circumvent error 43; if you need details, I would suggest reading my post on supplying an x-pci-sub-vendor-id and x-pci-sub-device-id. These can be found by booting the VM's Windows bare metal (when it is set up as a disk passthrough instead of a disk file). You can then find out which IDs you should supply to the Nvidia drivers in order for them to work with the card you're using.

If that doesn't work then you'd probably want to do the battery trick.

Thank you for finding this battery trick, I might try this out as soon as I find some time.
Although I'm doubtful whether it would do anything in my situation.

@wxianxin

wxianxin commented May 23, 2020

Has anyone tried KVM passthrough on recent AMD ryzen CPU laptops? I wonder if they are muxed.

@KingTheGuy

I'm back. Anyone try using the new Nvidia Studio Ready drivers? Maybe error 43 ain't an issue there? Worth a try.

@jdw1023

jdw1023 commented Jul 28, 2020

Has anyone tried KVM passthrough on recent AMD ryzen CPU laptops? I wonder if they are muxed.

https://www.reddit.com/r/VFIO/comments/hx5j8q/success_with_laptop_gpu_passthrough_on_asus_rog/

@wxianxin

Muxed or muxless is no longer the obstacle for GPU passthrough on laptops (not sure whether this would still be the case for the RTX 3000 series). Google "ACPI battery Nvidia GPU laptop passthrough". And yes, I was able to achieve 4800H + RTX 2060 passthrough, although the CPU memory performance is not so good and I have yet to find a fix for it.

@SBAPKat

SBAPKat commented Aug 9, 2020

Using an ASUS FX705GE with a 1050 Ti, which is apparently muxed (shows as a VGA adapter in lspci).
Using my physical Windows partition from my dual boot.
I have hidden KVM, used the ACPI battery fix, and extracted my vBIOS, but to no avail: error 43 is still standing strong.
I don't even know what to try anymore, though I'm still motivated to try anything if any of y'all have suggestions.

@Rabcor

Rabcor commented Dec 1, 2020

I believe you might be able to tell whether your laptop is muxed or muxless by checking if the BIOS has an option to disable the iGPU: if the iGPU can be disabled, it is muxed; if not, it's probably muxless.

@Rabcor

Rabcor commented Dec 5, 2020

I have the same kind of setup as Blastbeng (he has an MSI GE75 Raider 9SE, I have the 8SE, and we have the same 2060 GPU too). I'm having the same code 43 issue. I also haven't managed to get FreeRDP (or Remmina) to connect; it's weird, everything seems to be set up for remote connections on the Windows VM, but it keeps giving me a connection error. Googling the error revealed a bunch of posts about a wrong IP address, but I have the right one. I don't know what's up; it might be because my host is connected to a hotspot or something, idk...

Clearing code 43 is clearly the biggest hurdle for laptop users who want GPU passthrough. I'm going to at least go through the motions and try everything that's been listed as a potential solution. I already have some of the simpler ones (like the x-pci-sub-device-id) enabled because I used this script: https://github.com/T-vK/MobilePassThrough

which came with them. Honestly, I don't think I could ever have gotten my QEMU machine running properly without its help 😅 and even with it, it still took me a day of troubleshooting just to get to the point where I could connect to the VM with Spice/spicy.

Edit: My first step of installing the Nvidia driver provided by the manufacturer seems to have resolved code 43! It can't be that easy though (besides, I kind of need more recent drivers anyway; it's about a year old), but at least I'm not seeing code 43 in Device Manager anymore. I still haven't managed to connect to the remote desktop thing though, god freaking damn it; this is driving me nuts.

Edit 2: Got code 43 again immediately after a reboot. Reinstalling the driver does seem to resolve it again, though.

@Rabcor

Rabcor commented Dec 7, 2020

Does anyone have any idea how to unbind the gpu when prime offloading is enabled?

My laptop seems to take issue with bumblebee (it might just be the kernel version or something, idk; the quality control of these kernels seems to be getting worse by the day...). Most noticeably, it causes regular audio glitches for me when installed and active.

I've been trying to unbind the gpu from the nvidia drivers with prime offloading enabled but the command always just hangs, and my cpu starts churning like no tomorrow in what I assume is some kind of negative feedback loop.

I also tried de-powering the GPU with the acpi_call method, this hard freezes/crashes my laptop, so it's a no-go method.

I think I also just almost bricked my laptop though, i tried using rmmod to disable the nvidia drivers, there was no dice however until I tried rmmod -f nvidia_drm, which broke my X server, gave me graphical glitches and what looked like a kernel panic, after I forcefully shut down my laptop to escape from that situation, it seemed like it wasn't going to POST, it just got stuck there for what felt like forever, so I eventually decided to just turn it on and give it time, it took like 5 full minutes to POST, then at least another 5 full minutes to fail to boot Linux (said waiting for mount to mount my secondary hard drives timed out or something).

I had to go through this full process and reboot from the console in order for my laptop to start functioning normally again, if I turned the laptop off forcefully at any point part-way through that process I'd have to start all over, with 5 minute POST...

So after that little experience almost giving me a heart attack, I'm kinda afraid to try anything else now.

@MakiseKurisu

I'd like to see if someone can help me with dGPU passthrough on my Thinkpad W540. I have done the standard stuff and I can passthrough the dGPU to VM. I can even see it in a Linux guest with lspci. However, my issue is that the Windows VM is extremely unresponsive when the dGPU is added. Once removed everything is snappy, but runs like a snail when dGPU is included.

I have my set up script listed below, so I hope someone can catch what I'm missing:

https://gist.github.com/MakiseKurisu/1e16ef1448d66a56f3f9f07740010d35

PS: beyond the set up script I also tried a bunch of extra options, they are:

  1. echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf
  2. adding igfx_off to the kernel parameters
  3. echo "vhost-net" >> /etc/modules
  4. adding disable_vga=1 to /etc/modprobe.d/vfio.conf (the file that instructs vfio-pci to take over the dGPU)
  5. also using the extracted vBIOS

None of them helped with the issue.

@nh2

nh2 commented Mar 9, 2021

Hey, anyone know how to fix this issue with installing NVIDIA drivers?
https://photos.app.goo.gl/Y3TKYST5A8bNj9Yy8

Other installations are running. Finish the other installations then try again.
This NVIDIA graphics driver is not compatible with your version of Windows.
This graphics driver could not find compatible graphics hardware.

@Techwizz-somboo This can be solved by setting the subsystem correctly. Example with libvirt (I also have the 940MX, in a Thinkpad T25):

  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=0x17aa'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-device-id=0x2246'/>
  </qemu:commandline>

Inside the VM, the Hardware IDs need to look like this (they should show the same values as on a real Windows install):

[screenshot: the Hardware IDs list in Device Manager]

Note the SUBSYS_224617AA is the concatenation of 0x2246 and 0x17aa above, without the 0x.
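For reference, the subsystem vendor/device IDs to pass can usually be read from the host with lspci (the bus address 01:00.0 is a placeholder for your own dGPU's address):

```shell
# Prints a line like: "Subsystem: Lenovo Device [17aa:2246]"
lspci -nn -v -s 01:00.0 | grep -i subsystem
```

The bracketed pair maps onto x-pci-sub-vendor-id and x-pci-sub-device-id respectively.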


After this I am now, too, stuck at the Code 43 problem, but at least the one you reported I managed to get past.

@Simbaclaws

Simbaclaws commented Mar 16, 2021

Has anyone already found out how to patch the ACPI table of the OVMF code with your own extracted vbios.rom, to see if that works?

I suspect the Nvidia driver fetches the vBIOS ROM from the BIOS or UEFI OVMF code rather than the one supplied to vfio-pci.

I've just given up on this and am now using separate machines connected through Barrier. It would be a lot nicer to have a portable graphics-accelerated laptop for external displays in VMs, though...

@Chihacheol

Can this method disable Intel graphics in the BIOS for the GE70 2OC/2OD/2OE? A backup of the BIOS is here: https://drive.google.com/file/d/1UBvxCpaJoLYyNJnuCMKICxo8CAgT4skh/view?usp=drivesdk

@lengors

lengors commented Apr 25, 2021

Does anyone have any idea how to unbind the gpu when prime offloading is enabled?

My laptop seems to take issue with bumblebee (it might just be the kernel version or something, idk; the quality control of these kernels seems to be getting worse by the day...). Most noticeably, it causes regular audio glitches for me when installed and active.

I've been trying to unbind the gpu from the nvidia drivers with prime offloading enabled but the command always just hangs, and my cpu starts churning like no tomorrow in what I assume is some kind of negative feedback loop.

I also tried de-powering the GPU with the acpi_call method, this hard freezes/crashes my laptop, so it's a no-go method.

I think I also just almost bricked my laptop though, i tried using rmmod to disable the nvidia drivers, there was no dice however until I tried rmmod -f nvidia_drm, which broke my X server, gave me graphical glitches and what looked like a kernel panic, after I forcefully shut down my laptop to escape from that situation, it seemed like it wasn't going to POST, it just got stuck there for what felt like forever, so I eventually decided to just turn it on and give it time, it took like 5 full minutes to POST, then at least another 5 full minutes to fail to boot Linux (said waiting for mount to mount my secondary hard drives timed out or something).

I had to go through this full process and reboot from the console in order for my laptop to start functioning normally again, if I turned the laptop off forcefully at any point part-way through that process I'd have to start all over, with 5 minute POST...

So after that little experience almost giving me a heart attack, I'm kinda afraid to try anything else now.

I've had the exact same problem, except for the last part, since I didn't even try that. Have you found any solution yet?

@Rabcor

Rabcor commented Aug 23, 2021

@lengors nope, haven't found one. Right now I'm just working on getting iGPU passthrough working, tbh; as an officially supported feature it should be easy (and with how good some integrated GPUs are these days, a potential alternative to dGPU passthrough altogether), but I'm having no luck with that, just getting weird errors from QEMU that don't relay enough information, as always. (Errors other sites even say are safe to ignore, so it might be crashing for an entirely different reason, and I don't have the debugging skills to dig deeper.)

Incidentally, the last time I tried it, enabling iGPU passthrough did work, in the sense that it crashed my host, lol.

For dGPU passthrough I'm still sort of waiting for someone else to figure out using it with PRIME before I try it again, because bumblebee is supported less and less and I basically need ugly workarounds just to install it in the first place.

@XRaTiX

XRaTiX commented Oct 28, 2021

@Rabcor
@lengors
I successfully passed my dGPU through to my VM with the official NVIDIA drivers (with PRIME); hope it helps you.

https://lantian.pub/en/article/modify-computer/laptop-intel-nvidia-optimus-passthrough.lantian/

For reference I'm using Manjaro KDE in a muxed laptop Dell 7567 with a i5 7300HQ and GTX 1050 Ti.

@Cringicide

I have an OMEN Laptop 15-ek0xxx with an RTX 2070 Max-Q. Is this laptop compatible with this guide?

@gregos-winus

Same question here for a Dell XPS 15 9510 with an RTX 3050 Ti.

@VitorRamos

Thank you for the amazing tutorial.

I skipped the bumblebee part, but the passthrough worked on the MSI Creator P75 with an RTX 2070 Super.

@Kitsumi

Kitsumi commented Mar 10, 2022

I got it working with my Gigabyte G5 MD. It turned out to be muxed, so it was actually not too different from doing it on a desktop. Only problem: the DisplayPort seems to be controlled exclusively by the dGPU.

@marcussacana

Meh, too many steps; it's easier to do a dual boot...
Maybe there's an auto script for this?

@Rabcor

Rabcor commented Apr 25, 2022

Meh, too many steps; it's easier to do a dual boot... Maybe there's an auto script for this?

There are a couple around, but chances are none of them will actually just work for you. VGA passthrough is still very experimental; if you think it's not worth following all these steps, then it's not for you. A lot of tinkering is pretty much always needed to get it working, and some tinkering is usually also needed to keep it working.

@XRaTiX thanks I'll give that a try.

@tadghh

tadghh commented Jun 11, 2022

Hey, I got this working on a Dell 7590. The only problem I'm having is input delay while playing games. Is there anything I should look into?

@MrCsabaToth

Do you have a guide for AMD + Radeon systems?

@D0ot

D0ot commented Jul 23, 2022

Any possible way to load an EDID on a GeForce card...? I know it is almost impossible...

@D0ot

D0ot commented Jul 23, 2022

Hey, I got my Swift X 2021 (5800U + 3050) muxless laptop working. However, it is impossible to play games because it is muxless: I can't use a dummy HDMI dongle to load an EDID in the guest Windows, so I get a very low resolution in Steam Link and I cannot change it.

@danielkrajnik

I'm amazed how much information is packed in here. Amazing resource!

@Milor123

Question, bro: I don't have a laptop, but I have a desktop PC with Arch Linux.

I have an Intel 13500 iGPU
and an Nvidia 4070 GPU.

Could I use optimus-manager to switch between the GPU and iGPU (when I need to connect my GPU to my virtual machine)?
Is that possible?

If it is, is your guide the correct way to achieve this?

@danielkrajnik
Copy link

danielkrajnik commented Jan 20, 2024

@Milor123 Some portions of this guide are outdated by now, but it can be still very very useful to understand these mechanisms. Nvidia drivers now support prime render offload usually you just need to prepend something like DRI_PRIME=pci-0000_01_00_0 __VK_LAYER_NV_optimus=NVIDIA_only __GLX_VENDOR_LIBRARY_NAME=nvidia to your program's command and if you want to passthrough your GPU you should be able to switch nvidia driver for vfio-pci driver with something like echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind and echo "vfio-pci" | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver_override. To do it you have to make sure though that it isn't used by the X11 server or any other program (I've removed these files from my system: /usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf /usr/share/glvnd/egl_vendor.d/10_nvidia.json /usr/share/vulkan/icd.d/nvidia_icd.json ).
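Collecting the rebinding commands from the comment above into one hedged sketch (the PCI address is a placeholder; run as root, and only after nothing on the host is using the card):

```shell
# Placeholder address for the dGPU; find yours with: lspci -D | grep -i nvidia
GPU=0000:01:00.0
# Detach the device from its current (NVIDIA) driver
echo "$GPU" > /sys/bus/pci/devices/$GPU/driver/unbind
# Prefer vfio-pci for this device, then ask the kernel to re-probe it
echo vfio-pci > /sys/bus/pci/devices/$GPU/driver_override
echo "$GPU" > /sys/bus/pci/drivers_probe
```

Note the unbind can hang if an X server or any process still holds the GPU, which matches the symptoms reported earlier in this thread.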

@Milor123

@Milor123 Some portions of this guide are outdated by now, but it can be still very very useful to understand these mechanisms. Nvidia drivers now support prime render offload usually you just need to prepend something like DRI_PRIME=pci-0000_01_00_0 __VK_LAYER_NV_optimus=NVIDIA_only __GLX_VENDOR_LIBRARY_NAME=nvidia to your program's command and if you want to passthrough your GPU you should be able to switch nvidia driver for vfio-pci driver with something like echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind and echo "vfio-pci" | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver_override. To do it you have to make sure though that it isn't used by the X11 server or any other program (I've removed these files from my system: /usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf /usr/share/glvnd/egl_vendor.d/10_nvidia.json /usr/share/vulkan/icd.d/nvidia_icd.json ).

Thank you very much, bro!!! Amazing!!
