@ulkeshkosh — Last active February 2, 2024

PCI-Passthrough Rig, OS, and Setup

Introduction

This is my guide for a successful PCI passthrough from Linux (Manjaro, an Arch-based distribution) to QEMU/KVM via virt-manager and libvirtd into a Windows 10 Home guest.

NOTE: This is a guide for Intel only. I do not own an AMD machine, and will not add AMD information to this guide until such time that I do, which could be never.

Hardware

Device Type Device
CPU Intel Core i7 7700K Quad-Core, Hyperthreading
Motherboard Gigabyte Z270X-Gaming 5
RAM 64 GB Patriot Viper 4 (4x 16GB DDR4 3200MHz, 32GB for VM)
Storage (KVM) Samsung 970 EVO Plus 1TB (SSD #1)
Storage (HOST) Samsung 970 EVO Plus 1TB (SSD #2)
Graphics 1 (KVM) EVGA NVIDIA GeForce GTX 1070 8GB
Graphics 2 (HOST) CPU Integrated Graphics - Intel
Network Onboard Gigabit Ethernet Connection I219-V
Audio HyperX Cloud II USB Headset
Keyboard (KVM) DAS Keyboard 4 Professional
Keyboard (HOST) DAS Keyboard 4 Ultimate
Mouse (KVM) Razer Naga Trinity
Mouse (HOST) Logitech Marathon Mouse M705
Display (KVM) Dell 27" S2740L (Display #1)
Display (HOST) Dell 27" S2740L (Display #2)

Operating Systems

Entity Operating System
HOST Manjaro Linux x64 (Arch-based), GNOME
KVM Windows 10 Home

Hardware Setup

  1. It is vitally important that the BIOS is set to use the integrated graphics as its default display output; otherwise, passing the discrete GPU through to the VM will never work.
  2. Hook one display up to the integrated GPU, and the second display to the discrete GPU.
  3. The BIOS must have everything set to UEFI, and the discrete graphics card must support UEFI (most cards made within the last 5-7 years do).
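Before moving on, it can help to confirm the CPU actually advertises hardware virtualization. This is a hedged helper of my own (not part of the original setup): VT-x shows up as the vmx CPU flag, while VT-d (the IOMMU) is a chipset/BIOS feature that gets verified later via dmesg.

```shell
#!/bin/sh
# Report whether the CPU advertises Intel VT-x (the "vmx" flag).
# VT-d / IOMMU availability is confirmed later with dmesg after enabling
# intel_iommu=on; this only checks the CPU side.
if grep -qw vmx /proc/cpuinfo; then
    echo "VT-x: supported"
else
    echo "VT-x: not reported (enable virtualization in BIOS, or check CPU model)"
fi
```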

BIOS Setup

(Screenshots: BIOS Information; VT-d; Internal Graphics settings.)

Prerequisite ISO

  1. Download a Windows 10 ISO ahead of time from Microsoft's Windows 10 download page.
  2. If you research and decide you wish to use a flat file instead of a separate SSD for the guest, you will want the bus of that storage device to be virtio-scsi; you will therefore also need the virtio driver ISO (the Fedora virtio-win ISO) to load during Windows setup.

Software Setup

  1. Install the following: sudo pacman -S libvirt virt-manager ovmf qemu
  2. Then run:
$ sudo systemctl start libvirtd.service 
$ sudo systemctl start virtlogd.socket
$ sudo systemctl enable libvirtd.service
$ sudo systemctl enable virtlogd.socket
  3. Add your user to the libvirt group: sudo usermod -a -G libvirt usernamehere (replace usernamehere with your logged-in user), then log out and back in so the group change takes effect.
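A quick sanity check after these steps (my own hedged addition, assuming systemd and the coreutils id tool): confirm libvirtd is active and that your session has picked up the libvirt group.

```shell
#!/bin/sh
# Verify libvirtd is running and the group membership is live for this session.
systemctl is-active libvirtd.service 2>/dev/null || echo "libvirtd not active (or systemctl unavailable)"
if id -nG | grep -qw libvirt; then
    echo "in libvirt group"
else
    echo "not in libvirt group yet (log out and back in)"
fi
```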

GRUB Setup

  1. Edit /etc/default/grub under sudo.
  2. Alter GRUB_CMDLINE_LINUX_DEFAULT to include intel_iommu=on. An example: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on".
  3. Save the file, and then run sudo update-grub (Manjaro; on plain Arch, run sudo grub-mkconfig -o /boot/grub/grub.cfg instead).
  4. Reboot.
  5. Upon reboot, run sudo dmesg | grep "Virtualization Technology for Directed I/O". If the output contains something akin to [ 0.902214] DMAR: Intel(R) Virtualization Technology for Directed I/O, then all is well thus far.

Verifying IOMMU Groups

  1. Run the following as a shell script:
#!/bin/bash
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done;
done;
  2. Example output may look something like:
IOMMU Group 0:
	00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers [8086:591f] (rev 05)
IOMMU Group 1:
	00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 05)
	01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)
	01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
IOMMU Group 10:
	00:1c.5 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #6 [8086:a295] (rev f0)
IOMMU Group 11:
	00:1c.6 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #7 [8086:a296] (rev f0)
IOMMU Group 12:
	00:1c.7 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #8 [8086:a297] (rev f0)
IOMMU Group 13:
	00:1d.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #9 [8086:a298] (rev f0)
...
  3. Notice the group containing the discrete graphics card, in this case the GTX 1070. Here it has three lines, but it is possible to have more. If it has more, then either you must pass through all of the listed devices (in the VFIO Setup below, except for the PCI bridge, if shown), or you should move the video card to a different PCIe slot. More detail is outlined in the Arch Wiki article on PCI passthrough via OVMF.
  4. Take note of the device IDs of the graphics card and its corresponding audio chipset. In the above example, they are: 10de:1b81 and 10de:10f0.
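Extracting those IDs by hand is error-prone, so here is a small hedged sketch of my own (not part of the original guide) that scrapes the numeric [vendor:device] IDs out of lspci -nn style lines and prints a ready-made vfio-pci options line. SAMPLE mirrors the two GPU lines from the output above.

```python
import re

# Sample lspci -nns lines, as shown in the IOMMU group listing above.
SAMPLE = """\
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
"""

def device_ids(lspci_output):
    """Return the numeric vendor:device ID from each lspci -nn line."""
    ids = []
    for line in lspci_output.splitlines():
        if not line.strip():
            continue
        # The numeric ID is the last [xxxx:xxxx] bracket on the line.
        ids.append(re.findall(r"\[([0-9a-f]{4}:[0-9a-f]{4})\]", line)[-1])
    return ids

print("options vfio-pci ids=" + ",".join(device_ids(SAMPLE)))
# → options vfio-pci ids=10de:1b81,10de:10f0
```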

Isolating Discrete GPU and VFIO Setup

  1. Using the above information for the device IDs, edit /etc/modprobe.d/vfio.conf under sudo (create if it doesn't exist) and insert the following information (replacing the device IDs as necessary):
softdep nouveau pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
options vfio-pci ids=10de:1b81,10de:10f0
  2. The softdep lines were required for my setup because of boot-time ordering: they force vfio-pci to load before nouveau and snd_hda_intel, so vfio-pci claims the devices with the listed IDs before the regular drivers can bind them.
  3. Save the file.
  4. Edit /etc/mkinitcpio.conf under sudo and alter the MODULES= line so it includes the following: vfio_pci vfio vfio_iommu_type1 vfio_virqfd. It should look something like the below (newer mkinitcpio versions use MODULES=(...) with parentheses instead of quotes):
MODULES="vfio_pci vfio vfio_iommu_type1 vfio_virqfd"
  5. Also alter the HOOKS= line if necessary to include modconf if it doesn't already exist in the list. It should look something like:
HOOKS="base udev autodetect modconf block keyboard keymap resume filesystems"
  6. Save the file.
  7. Now regenerate the initramfs: sudo mkinitcpio -P (this rebuilds the images for every preset in /etc/mkinitcpio.d/; generating a one-off image with -g risks producing a file your bootloader never loads).
  8. Reboot.
  9. If successful, the display connected to the discrete GPU will no longer show anything. Otherwise, something is wrong and must be fixed before you can continue.
  10. You can verify the proper module is bound to the hardware by running lspci -nnk and looking for something like:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)
	Subsystem: eVga.com. Corp. GP104 [GeForce GTX 1070] [3842:6276]
	Kernel driver in use: vfio-pci
	Kernel modules: nouveau
01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
	Subsystem: eVga.com. Corp. GP104 High Definition Audio Controller [3842:6276]
	Kernel driver in use: vfio-pci
	Kernel modules: snd_hda_intel
  11. The line Kernel driver in use: vfio-pci confirms that vfio-pci has indeed claimed the device IDs previously entered into the vfio.conf file.
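The same binding can also be read straight from sysfs. A hedged sketch of mine (substitute your own PCI addresses for the 0000:01:00.x examples, which come from the lspci output above):

```shell
#!/bin/sh
# Print the kernel driver currently bound to each passed-through function
# by resolving the sysfs "driver" symlink. Expect "vfio-pci" for both.
for dev in 0000:01:00.0 0000:01:00.1; do
    drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$dev/driver" 2>/dev/null)" 2>/dev/null)
    echo "$dev -> ${drv:-no driver bound}"
done
```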

libvirt Setup

  1. Edit /etc/libvirt/qemu.conf under sudo and add the following nvram setting (leaving the rest of the file intact):
nvram = [
	"/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]
  2. Restart libvirtd with: sudo systemctl restart libvirtd.service

virt-manager VM Setup

  1. Run virt-manager
  2. Click the + in the upper left.
  3. Choose Local Install Media (ISO image or CDROM) and click Forward.
  4. Browse for the Windows 10 ISO file.
  5. Uncheck Automatically detect from the installation media / source, start typing Windows 10 in the input box, and select the option for Windows 10. Click Forward.
  6. Choose a proper amount of RAM to assign to the VM. I used 32768.
  7. Choose a proper amount of CPUs to assign to the VM. I used 8.
  8. Click Forward.
  9. If using a physical disk (separate SSD), select Select or create custom storage and click Manage. In here, you will have to add a Physical Disk Device pool that includes the direct device path. An example of mine: /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4P4NF0M409590P-part1. There is no need to format it, since Windows will do that. Definitely use the by-id path rather than the bare /dev node, because the /dev names can change depending on how Linux enumerates the drives at boot.
  10. Once chosen, click Forward
  11. Give the VM a name, mine was: win10
  12. Put a checkmark in Customize configuration before install
  13. Open Network selection and choose the Host device that matches the one being used by your host system. Mine was: Host device enp5s0: macvtap. Also set Source mode to Bridge. I found out which device my host was using by running ip addr (or ifconfig) and determining which one had a proper local IP for my network.
  14. Click Finish
  15. Below are all my settings for the first few options. (Screenshots: Overview; CPUs; Memory; Boot Options; SATA Disk 1; Network Interface (NIC).)
  16. Now you will add your graphics card, graphics audio chipset, keyboard, mouse, and optionally a USB audio device to the VM. Click Add Hardware.
  17. Select PCI Host Device and then select the Graphics card in the list. Then click Finish.
  18. Click Add Hardware again, and in the PCI Host Device select the Graphics Audio chipset. Then click Finish.
  19. The above two PCI graphics card devices should be the same two you set as the VFIO IDs in the VFIO setup above.
  20. Do the same thing again for adding new hardware, but this time select USB Host Device and add at least one keyboard and one mouse to the VM. Optionally add a USB audio device.
  21. The Add Hardware screens should look similar to the screenshots. (Screenshots: PCI Host Device; USB Host Device.)
  22. Make sure to Apply all of these settings updates.

NVIDIA Code 43 Issue

  1. With NVIDIA cards, the Windows driver historically refused to initialize when it detected a hypervisor, reporting error Code 43 in Device Manager; after installing Windows you will notice the default resolution instead of your display's resolution. To fix this, you must edit the VM configuration file directly.
  2. Close down virt-manager if open, and then edit with (win10 is the name of the VM):
$ sudo su
$ EDITOR=vim virsh edit win10
  3. Alter the <features> tag as follows (adding the <kvm> section as a sibling to <hyperv> if necessary, and adding <ioapic> as a sibling to <hyperv> if running QEMU 4.0 or later with the q35 chipset):
<hyperv>
    ...
    <vendor_id state="on" value="123456789ab"/>
    ...
</hyperv>
...
<kvm>
    <hidden state="on"/>
</kvm>
...
<ioapic driver="kvm"/>
  4. Save the file.

CPU Pinning (performance)

  1. Please read through the Arch Wiki's CPU pinning guidance to understand what is going on.
  2. Run: lscpu -e
  3. If your output looks similar (in terms of the CPU, NODE, SOCKET, and CORE columns), then you can continue; otherwise, adapt the pinning as the guidance referenced above describes:
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE    MAXMHZ   MINMHZ
  0    0      0    0 0:0:0:0          yes 4500.0000 800.0000
  1    0      0    1 1:1:1:0          yes 4500.0000 800.0000
  2    0      0    2 2:2:2:0          yes 4500.0000 800.0000
  3    0      0    3 3:3:3:0          yes 4500.0000 800.0000
  4    0      0    0 0:0:0:0          yes 4500.0000 800.0000
  5    0      0    1 1:1:1:0          yes 4500.0000 800.0000
  6    0      0    2 2:2:2:0          yes 4500.0000 800.0000
  7    0      0    3 3:3:3:0          yes 4500.0000 800.0000
  4. Close down virt-manager if open, and then edit with (win10 is the name of the VM):
$ sudo su
$ EDITOR=vim virsh edit win10
  5. Add the following as a sibling to <vcpu>:
<cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='4'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='2'/>
    <vcpupin vcpu='5' cpuset='6'/>
    <vcpupin vcpu='6' cpuset='3'/>
    <vcpupin vcpu='7' cpuset='7'/>
</cputune>
  6. Save the file.
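The pinning above follows a simple rule: walk the physical cores in order and assign each core's logical CPUs (the hyperthread siblings) to consecutive vCPUs. A hedged sketch of mine that derives the <vcpupin> lines from lscpu -e output (SAMPLE mirrors the table above):

```python
from collections import defaultdict

# lscpu -e output as shown above: columns CPU, NODE, SOCKET, CORE, ...
SAMPLE = """\
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE    MAXMHZ   MINMHZ
  0    0      0    0 0:0:0:0          yes 4500.0000 800.0000
  1    0      0    1 1:1:1:0          yes 4500.0000 800.0000
  2    0      0    2 2:2:2:0          yes 4500.0000 800.0000
  3    0      0    3 3:3:3:0          yes 4500.0000 800.0000
  4    0      0    0 0:0:0:0          yes 4500.0000 800.0000
  5    0      0    1 1:1:1:0          yes 4500.0000 800.0000
  6    0      0    2 2:2:2:0          yes 4500.0000 800.0000
  7    0      0    3 3:3:3:0          yes 4500.0000 800.0000
"""

def vcpupin_lines(lscpu_e):
    """Emit <vcpupin> entries keeping hyperthread siblings on adjacent vCPUs."""
    cores = defaultdict(list)  # physical core -> its logical CPU numbers
    for line in lscpu_e.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if not fields:
            continue
        cores[int(fields[3])].append(int(fields[0]))
    pins, vcpu = [], 0
    for core in sorted(cores):
        for cpu in cores[core]:
            pins.append(f"<vcpupin vcpu='{vcpu}' cpuset='{cpu}'/>")
            vcpu += 1
    return pins

print("\n".join(vcpupin_lines(SAMPLE)))
```

This reproduces the eight vcpupin lines shown above for my 7700K's 4-core/8-thread topology.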

Install Windows

  1. You are now ready to attempt to install Windows.
  2. In virt-manager, right click the win10 VM and click Run
  3. Install Windows!

Alternate Option for Keyboard/Mouse

DISCLAIMER: After introducing this into my setup, I noticed that games in Windows would lag when long-pressing keys. When I went back to passing through a dedicated mouse and keyboard, the lag disappeared entirely.

  1. If you wish to have only a single keyboard/mouse connected, or only have one of each, it is possible to toggle them between the two operating systems by holding down both the left CTRL and right CTRL keys at the same time and releasing. To set this up, perform the following steps.
  2. Shut down virt-manager
  3. Determine which device node each input device is on (note that my Naga Trinity exposes two). Simply run ls -la /dev/input/by-id/ to see the list of input devices. If it's easy to narrow down, copy their names. If not, run cat /dev/input/by-id/ID_NAME_HERE and type on the keyboard or move the mouse; if garbage output appears, that's the right device (press CTRL+C to stop).
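The narrowing-down step above can be sketched as a one-liner (my own hedged helper; udev only populates /dev/input/by-id/ for devices that expose stable IDs):

```shell
#!/bin/sh
# List stable event-device names for keyboards and mice. These by-id paths
# survive reboots, unlike bare /dev/input/eventN numbers.
ls /dev/input/by-id/ 2>/dev/null | grep -E 'event-(kbd|mouse)' \
    || echo "no by-id event devices found"
```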
  4. Once the device names are determined, run:
$ sudo su
$ EDITOR=vim virsh edit win10
  5. Change the <domain> tag to: <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  6. Above the following:
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>

Add

<input type='mouse' bus='virtio'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/>
</input>
<input type='keyboard' bus='virtio'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0f' function='0x0'/>
</input>
  7. Add to the end of the file, just before the </domain> tag (replacing MOUSE_NAME_HERE and KEYBOARD_NAME_HERE):
<qemu:commandline>
  <qemu:arg value='-object'/>
  <qemu:arg value='input-linux,id=mouse1,evdev=/dev/input/by-id/MOUSE_NAME_HERE'/>
  <qemu:arg value='-object'/>
  <qemu:arg value='input-linux,id=kbd1,evdev=/dev/input/by-id/KEYBOARD_NAME_HERE,grab_all=on,repeat=on'/>
</qemu:commandline>
  8. With my Naga Trinity, mine looks like:
<qemu:commandline>
  <qemu:arg value='-object'/>
  <qemu:arg value='input-linux,id=mouse1,evdev=/dev/input/by-id/usb-Razer_Razer_Naga_Trinity_00000000001A-event-mouse'/>
  <qemu:arg value='-object'/>
  <qemu:arg value='input-linux,id=kbd2,evdev=/dev/input/by-id/usb-Razer_Razer_Naga_Trinity_00000000001A-if02-event-kbd'/>
  <qemu:arg value='-object'/>
  <qemu:arg value='input-linux,id=kbd1,evdev=/dev/input/by-id/usb-Metadot_-_Das_Keyboard_Das_Keyboard-event-kbd,grab_all=on,repeat=on'/>
</qemu:commandline>
  9. Save the file.
  10. Now edit /etc/libvirt/qemu.conf under sudo.
  11. Uncomment user = and set it to your user name. Example: user = "ulkeshkosh"
  12. Uncomment group = and set it to the following: group = "kvm"
  13. In the section discussing the cgroup device ACL, paste the following beneath the comment block (replacing MOUSE_NAME_HERE and KEYBOARD_NAME_HERE):
cgroup_device_acl = [
    "/dev/kvm",
    "/dev/input/by-id/KEYBOARD_NAME_HERE",
    "/dev/input/by-id/MOUSE_NAME_HERE",
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/sev"
]
  14. With my Naga Trinity, mine looks like:
cgroup_device_acl = [
    "/dev/kvm",
    "/dev/input/by-id/usb-Metadot_-_Das_Keyboard_Das_Keyboard-event-kbd",
    "/dev/input/by-id/usb-Razer_Razer_Naga_Trinity_00000000001A-event-mouse",
    "/dev/input/by-id/usb-Razer_Razer_Naga_Trinity_00000000001A-if02-event-kbd",
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/sev"
]
  15. Run (replacing USER_HERE with your logged-in user):
$ sudo usermod -a -G kvm USER_HERE
$ sudo usermod -a -G input USER_HERE
  16. Restart libvirtd.service: sudo systemctl restart libvirtd.service
  17. Run virt-manager and edit the VM to remove any USB Host Device entries for the keyboard and mouse you may have added earlier (don't remove the items that simply say Keyboard or Mouse).
  18. Remember: to toggle the keyboard and mouse between the operating systems, hold down both the left CTRL and right CTRL keys and then release.

My VM configuration file:

<domain type='kvm'>
  <name>win10</name>
  <uuid>53b45fa5-4554-4d25-88ed-cadc99f05460</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://microsoft.com/win/10"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='4'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='2'/>
    <vcpupin vcpu='5' cpuset='6'/>
    <vcpupin vcpu='6' cpuset='3'/>
    <vcpupin vcpu='7' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-4.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/ovmf/x64/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='123456789ab'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
    <ioapic driver='kvm'/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='sdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='directsync' io='threads'/>
      <source dev='/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4P4NF0M409590P-part1'/>
      <target dev='sdc' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <interface type='direct'>
      <mac address='52:54:00:f2:b9:bf'/>
      <source dev='enp5s0' mode='bridge'/>
      <model type='e1000e'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <sound model='ich9'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
    </sound>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x0951'/>
        <product id='0x16a4'/>
      </source>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x24f0'/>
        <product id='0x0140'/>
        <address bus='1' device='8'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x0067'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>
Comments

@mlotysz commented Mar 15, 2020

Thank you for helping me resolve the "NVIDIA Code 43 Issue" on an AMD Ryzen 3900X CPU!

@ulkeshkosh (Author)

Awesome! I'm glad this could help. This gist was pulled together from a lot of different places (the screenshots, however, were from my setup). One day I hope we can get a more turn-key solution for PCI passthrough.

@radunanescu commented Apr 27, 2020

Hello, quick question: if I shut down my VM, can I reconnect my PCI card back to the host?

@ulkeshkosh commented Apr 27, 2020

I have not tried it. I believe you would have to undo some of the work done in the Isolating Discrete GPU and VFIO Setup section, and then reboot. As far as I know, there is no way to live-toggle the modes (isolating the discrete GPU for the VM only while the VM is running).