Setup Qemu for GPU Passthrough with Looking Glass and Scream support.

Before we begin, make sure you have a monitor hooked up to the second GPU. In some situations the video card will fail to initialize properly if one isn't attached. It also helps with debugging the issues that will most assuredly arise.

GPU Passthrough setup

Ensure that IOMMU is enabled in the BIOS. On my Intel-based Dell system it was under Virtualization -> Direct I/O. It should be something along those lines for AMD as well. That was the easy part :).

Before we proceed we need to gather some information. Run the command lspci -nn | grep NVIDIA (or AMD if you're using that). The output for me was:

05:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2080 Rev. A] [10de:1e87] (rev a1)
05:00.1 Audio device [0403]: NVIDIA Corporation TU104 HD Audio Controller [10de:10f8] (rev a1)
05:00.2 USB controller [0c03]: NVIDIA Corporation TU104 USB 3.1 Host Controller [10de:1ad8] (rev a1)
05:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller [10de:1ad9] (rev a1)
22:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)
22:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)

Make a note of the number pairs at the beginning of the line and the hex pairs at the end of the line for the video card you intend to pass through. We need both the GPU and its audio controller. I wanted to pass through the 1050 Ti, so I focused on

22:00.0 ... [10de:1c82]
22:00.1 ... [10de:0fb9]

We also need to find out which version of the BIOS is on the card. Get the BIOS version from the NVIDIA/AMD settings panel or some other hardware utility. With this number, open a browser, go to https://www.techpowerup.com/vgabios/ and search for your particular card. My 1050 Ti had an 86.07.39.00.52 BIOS on it, so I downloaded the matching file (https://www.techpowerup.com/vgabios/193723/evga-gtx1050ti-4096-170207). This is the option BIOS qemu will use to set up the card. Copy that file to the /usr/share/vgabios directory (eg. /usr/share/vgabios/EVGA.GTX1050Ti.4096.170207.rom) and ensure it has 775 permissions on it (eg. chmod 775 /usr/share/vgabios/EVGA.GTX1050Ti.4096.170207.rom).
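
For example (assuming the ROM was downloaded to ~/Downloads; adjust the file name to match your card):

sudo mkdir -p /usr/share/vgabios
sudo cp ~/Downloads/EVGA.GTX1050Ti.4096.170207.rom /usr/share/vgabios/
sudo chmod 775 /usr/share/vgabios/EVGA.GTX1050Ti.4096.170207.rom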

With this information and file in hand, we are now ready to proceed. Edit /etc/default/grub. Find the line containing GRUB_CMDLINE_LINUX_DEFAULT= and add the following: intel_iommu=on iommu=pt for Intel or amd_iommu=on iommu=pt for AMD. Also add vfio-pci.ids=10de:1c82,10de:0fb9 to the line, replacing the hex pairs with the hex pairs from your lspci output. This will reserve the GPU for passthrough. The line should end up something like

Note: For kernels newer than 5.x, see ADDENDUM 2 - Configuration updated for kernels newer than 5.x below.

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt vfio-pci.ids=10de:1c82,10de:0fb9"

Save the file, then run sudo update-grub. Now reboot the system.

If all goes well, you should see something like this when you run dmesg:

DMAR: Intel(R) Virtualization Technology for Directed I/O
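
If you don't want to scroll through the whole log, you can filter for the relevant lines:

dmesg | grep -i -e DMAR -e IOMMU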

Also check to see if the following files are present:

/sys/class/iommu/dmar0
/sys/class/iommu/dmar1

There should be as many of these as devices you reserved in the vfio-pci.ids parameter list. We also need to ensure that the GPUs are in separate IOMMU groups. Run the following script to determine this.

#!/bin/bash
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done;
done;

If they are in the same group, that means the ACS patch hasn't been applied to your kernel. Go through the process of recompiling your kernel with the patch and, after rebooting, rerun the script. If they aren't in the same group, then you're golden.
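
You can also confirm that the vfio-pci driver has claimed both functions of the card (22:00 is my card's bus and slot; substitute your own):

lspci -nnk -s 22:00

Each function should report Kernel driver in use: vfio-pci.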

Virtual Machine setup

Open a browser and go to the following url, https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md. Download the virtio-win ISO.

Since I'm using virt-manager to manage my VMs, create the virtual machine that the GPU will be passed through into there. For now leave everything as is and add the following:

  • a second CDROM to the vm (use the virtio-win ISO from previously as the image)
  • a PCI device for the GPU being passed through (easily identified in the list using the number pairs from the previous lspci output (eg. 22:00.0))
  • a PCI device for the GPU audio being passed through (easily identified in the list using the number pairs from the previous lspci output (eg. 22:00.1))

Ensure that the OVMF package is installed, sudo apt install ovmf. Set the chipset to Q35 and use one of the OVMF UEFI BIOSes (one of the non-secboot ones is preferable). Start the VM, but before installing Windows, shut it down. We still have work to do. Run virsh edit win10. Add the following:

  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vpindex state='on'/>
      <synic state='on'/>
      <stimer state='on'/>
      <reset state='on'/>
      <frequencies state='on'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
    <ioapic driver='kvm'/>
  </features>

We need to hide kvm so programs/drivers don't know they're in a virtual machine. Windows will still know, as evidenced by opening Task Manager and seeing Virtual Machine: Yes, but everything else won't. The ioapic line is for NVIDIA. Without these two options the NVIDIA driver will throw an Error 43 and refuse to load. Sometimes you also need to add a <vendorid> tag as well, but current NVIDIA drivers seem to play nice without it. The hyperv bits help Windows optimize for the VM.

Find the <hostdev> tag that corresponds to your passthrough GPU and add a <rom file='path to your card's option BIOS'/> element. For me it was:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x22' slot='0x00' function='0x0'/>
      </source>
      <rom file='/usr/share/vgabios/EVGA.GTX1050Ti.4096.170207.rom'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>

With all that in place, proceed to boot the VM and install the OS as normal. After the OS is finished installing, open Device Manager and resolve any missing drivers using the virtio-win CDROM. Install the NVIDIA/AMD drivers like normal. It will be tempting to remove the virtual GPU, but we still need it for setup. So leave it enabled for now.

Looking Glass setup

VM Configuration

We need to configure the VM to add the shared memory device, so run virsh edit win10. Just before the </devices> tag, add the following:

    <shmem name='looking-glass'>
      <model type='ivshmem-plain'/>
      <size unit='M'>32</size>
    </shmem>

The formula for calculating the correct memory size is as follows:

width x height x 4 x 2 = total bytes

total bytes / 1024 / 1024 + 10 = total megabytes

For example, for a resolution of 1920x1080 (1080p):

1920 x 1080 x 4 x 2 = 16,588,800 bytes

16,588,800 / 1024 / 1024 = 15.82 MB; 15.82 + 10 = 25.82 MB

We then round that up to the nearest power of 2, thus 32 MB.
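
If you'd rather let the shell do the arithmetic, here is a minimal sketch of the same calculation (the script name is just for illustration):

#!/bin/bash
# lg-shmem-size.sh - print the ivshmem <size> value for a given resolution
# Usage: ./lg-shmem-size.sh 1920 1080
WIDTH=$1
HEIGHT=$2
BYTES=$((WIDTH * HEIGHT * 4 * 2))
# divide by 1024*1024 (rounding up) and add the 10 MB overhead
MB=$(( (BYTES + 1048575) / 1048576 + 10 ))
# round up to the next power of two
SIZE=1
while [ "$SIZE" -lt "$MB" ]; do SIZE=$((SIZE * 2)); done
echo "<size unit='M'>$SIZE</size>"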

We also need to set memballoon to none, so find the memballoon tag and set it:

<memballoon model="none"/>

Finally we need to add some Spice elements to get keyboard and mouse going. Ensure the following:

<graphics type='spice' autoport='yes'>
    <listen type='address'/>
    <image compression='off'/>
    <gl enable='no' rendernode='/dev/dri/by-path/pci-0000:05:00.0-render'/>
</graphics>
<input type='keyboard' bus='virtio'/>
<input type='mouse' bus='ps2'/>

Remove the <input type='tablet'/> device, if you have one.

For clipboard sync, add the following:

<channel type="spicevmc">
  <target type="virtio" name="com.redhat.spice.0"/>
  <address type="virtio-serial" controller="0" bus="0" port="1"/>
</channel>

Configuring Windows

Inside the Windows VM, open a browser and go to the following url, https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/upstream-virtio/. Download the latest zip file (eg. virtio-win10-prewhql-0.1-161.zip) and extract it. Open Device Manager and, under System Devices, right-click PCI standard RAM Controller and select Update driver. Point it at the folder for your OS and arch (eg. Win10/amd64). This will update the controller to use the Red Hat IVSHMEM driver.

Now go to the following url, https://looking-glass.io/downloads. Download the zip file for Windows, it should be the green button with the Windows logo. Note which version you are downloading (eg. B5.0.1). Extract the zip file and run the installer. This will install the Looking Glass Host application. After it finishes installing, create a scheduled task to run at system startup and set it to run the Looking Glass Host application. Add a few minutes delay just to ensure that Windows is finished booting.

Configuring the Linux Host

With the Looking Glass Host installed in Windows, we can finally remove the virtual GPU from the VM. You can't fully remove it because qemu will always add it back, but we can set it to none. Run virsh edit win10 and set video to none:

<video>
    <model type='none'/>
</video>

Now we need to make sure that a shared memory file is present. Issue the following commands:

touch /dev/shm/looking-glass
chown libvirt-qemu:kvm /dev/shm/looking-glass
chmod 660 /dev/shm/looking-glass
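
Note that /dev/shm is cleared on every reboot, so these commands need to be rerun each time the host boots. One way to automate this (a sketch, assuming a systemd-based distro) is a tmpfiles.d entry:

# /etc/tmpfiles.d/10-looking-glass.conf
f /dev/shm/looking-glass 0660 libvirt-qemu kvm -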

If you are using AppArmor add the following to /etc/apparmor.d/local/abstractions/libvirt-qemu:

/dev/shm/looking-glass rw,

Then restart AppArmor: sudo systemctl restart apparmor.

On the Linux side, open a browser and go to the following url, https://looking-glass.io/downloads. Download the zip file for Linux, it should be the cyan button with a terminal logo. This should be the same version as the Windows one (eg. B5.0.1). Extract the zip file. In the directory you extracted the zip, go into the client folder. Create a build directory in that folder and change into that directory. Before building the executable, ensure the following packages are installed: binutils-dev, cmake, fonts-dejavu-core, libfontconfig-dev, gcc, g++, pkg-config, libegl-dev, libgl-dev, libgles-dev, libspice-protocol-dev, nettle-dev, libx11-dev, libxcursor-dev, libxi-dev, libxinerama-dev, libxpresent-dev, libxss-dev, libxkbcommon-dev, libwayland-dev, wayland-protocols. Now run cmake ../ and then make, as shown below. The looking-glass-client executable is self-contained and can be moved after building.
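
For reference, the whole build boils down to something like this (the extraction path is just an example; adjust it to your version and download location):

sudo apt install binutils-dev cmake fonts-dejavu-core libfontconfig-dev gcc g++ pkg-config libegl-dev libgl-dev libgles-dev libspice-protocol-dev nettle-dev libx11-dev libxcursor-dev libxi-dev libxinerama-dev libxpresent-dev libxss-dev libxkbcommon-dev libwayland-dev wayland-protocols
cd ~/looking-glass-B5.0.1/client
mkdir build && cd build
cmake ../
make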

This is the Dockerfile I used to build the looking-glass-client executable:

# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.209.6/containers/cpp/.devcontainer/base.Dockerfile

# [Choice] Debian / Ubuntu version (use Debian 11/9, Ubuntu 18.04/21.04 on local arm64/Apple Silicon): debian-11, debian-10, debian-9, ubuntu-21.04, ubuntu-20.04, ubuntu-18.04
ARG VARIANT="bullseye"
FROM mcr.microsoft.com/vscode/devcontainers/cpp:0-${VARIANT}

# [Optional] Uncomment this section to install additional packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
#     && apt-get -y install --no-install-recommends <your-package-list-here>

RUN apt update
RUN apt install -y binutils-dev cmake fonts-dejavu-core libfontconfig-dev gcc g++ pkg-config libegl-dev libgl-dev libgles-dev libspice-protocol-dev nettle-dev libx11-dev libxcursor-dev libxi-dev libxinerama-dev libxpresent-dev libxss-dev libxkbcommon-dev libwayland-dev wayland-protocols

Running Looking Glass Client

On the Linux side, run the following command: looking-glass-client -a. The -a flag auto-resizes the window to the VM screen; -F runs full screen. See the manual for additional flags if you need them.

Scream setup

VM Configuration

Configure the guest VM to have a network adapter with a device model of virtio. Now edit the libvirt xml for that VM (virsh edit win10).

Change the following from:

<domain type='kvm'>

to

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

Go all the way to the end of the file and after the </devices> tag, add the following

<qemu:commandline>
  <qemu:arg value='-device'/>
  <qemu:arg value='ich9-intel-hda,bus=pcie.0,addr=0x1b'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='hda-micro,audiodev=hda'/>
  <qemu:arg value='-audiodev'/>
  <qemu:arg value='pa,id=hda,server=unix:/run/user/1000/pulse/native'/>
</qemu:commandline>

Note: Change the 1000 to your user ID (1000 is the default).
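
If you're not sure what your user ID is, you can check it:

id -u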

In Virt-Manager remove all other audio devices.

If your distro is using AppArmor, then proceed with these instructions. Open the apparmor libvirt abstractions file via sudo nano /etc/apparmor.d/abstractions/libvirt-qemu. Append the following lines:

/etc/pulse/client.conf.d/ r,
/etc/pulse/client.conf.d/* r,
/run/user/1000/pulse/native rw,
/home/your-username/.config/pulse/* r,
/home/your-username/.config/pulse/cookie k,

Replace your-username with your username. Reboot your system. (You could probably just restart the AppArmor service)

Note: If you have no sound at all, run pax11publish and check if a server with name /run/user/1000/pulse/native is available.

Configuring Windows

Inside the Windows VM, open a browser and go to the following url, https://github.com/duncanthrax/scream/releases. Download the non-source zip file of the latest release. Note which version you are downloading (eg. Installer package for 3.9 (x64/x86/arm64)). Extract the zip file. In the directory you extracted the zip, go into the Install folder.

Note: The inf I downloaded had Unix line endings, causing the driver install to fail. So as a precaution, open the inf file at /Install/drivers/x64/Scream.inf in WordPad. Don't make any changes to the file, just save it. This will convert the Unix line endings.
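
Alternatively, if the extracted files are still reachable from the Linux side, you can convert the line endings there (assuming the dos2unix package, which provides the unix2dos tool):

sudo apt install dos2unix
unix2dos Install/drivers/x64/Scream.inf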

There should be an Install-x64.bat file in that directory. Right-click the file and run it as Administrator. Now select the Scream (WDDM) device in the sound mixer from the sound icon in the system tray.

Configuring the Linux Host

On the Linux host, open a browser and go to the following url, https://github.com/duncanthrax/scream/releases. Download the source zip file of the latest release. It should match the version that you downloaded for the Windows VM (eg. 3.9). Extract the zip file. In the directory you extracted the zip, go into the Receivers/unix folder. Create a build directory in that folder and change into that directory. Before building the executable, ensure the following packages are installed: git, libpulse-dev, pkg-config, libpcap-dev, cmake, and make. You may or may not also need libpulse0. Now run cmake ../ and then make, as shown below. The scream executable is self-contained and can be moved after building.
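
For reference, the build steps look something like this (the extraction path is just an example; adjust it to your version and download location):

sudo apt install git libpulse-dev pkg-config libpcap-dev cmake make
cd ~/scream-3.9/Receivers/unix
mkdir build && cd build
cmake ../
make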

This is the Dockerfile I used to build the scream executable:

# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.209.6/containers/cpp/.devcontainer/base.Dockerfile

# [Choice] Debian / Ubuntu version (use Debian 11/9, Ubuntu 18.04/21.04 on local arm64/Apple Silicon): debian-11, debian-10, debian-9, ubuntu-21.04, ubuntu-20.04, ubuntu-18.04
ARG VARIANT="bullseye"
FROM mcr.microsoft.com/vscode/devcontainers/cpp:0-${VARIANT}

# [Optional] Uncomment this section to install additional packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
#     && apt-get -y install --no-install-recommends <your-package-list-here>

RUN apt update
RUN apt install -y pkg-config libpcap-dev libpulse-dev libpulse0 cmake

Note: When I initially built the scream executable it gave an error stating that PulseAudio support wasn't compiled in. So I commented out all the #ifdef/#else/#endif stuff related to PULSEAUDIO_ENABLE to force it to compile with PulseAudio support. To do this, open the scream.c file and find the lines

//#if PULSEAUDIO_ENABLE
#include "pulseaudio.h"
//#endif

and

    case Pulseaudio:
//#if PULSEAUDIO_ENABLE
     if (verbosity) fprintf(stderr, "Using Pulseaudio output\n");
     if (pulse_output_init(target_latency_ms, pa_sink, pa_stream_name) != 0) {
       return 1;
     }
     output_send_fn = pulse_output_send;
//#else
//      fprintf(stderr, "%s compiled without Pulseaudio support. Aborting\n", argv[0]);
//      return 1;
//#endif

If not using Pulseaudio, then the same probably applies to ALSA or JACK.

Running Scream

On the Linux side, run the following command: scream -o pulse -i virbr0. Change virbr0 to whichever network adapter you have configured for the VM. If you're not using pulse, change it to whichever output you have configured (eg. alsa/jack).

If all goes well, you should see something like the following when scream receives audio:

Switched format to sample rate 48000, sample size 32 and 2 channels.

ADDENDUM 1 - Running Games that rely on Easy AntiCheat

Recently Easy AntiCheat (EAC) had an update that seems to make games that rely on it run flaky. To get around this add the following under the <os></os> node:

<os>
    ...
    <smbios mode='sysinfo'/>
</os>

and add the following <sysinfo> block to your config just after the </os> node:

<sysinfo type="smbios">
  <bios>
    <entry name="vendor">LENOVO</entry>
  </bios>
  <system>
    <entry name="manufacturer">Microsoft</entry>
    <entry name="product">Windows10</entry>
    <entry name="version">10.11345</entry>
  </system>
  <baseBoard>
    <entry name="manufacturer">LENOVO</entry>
    <entry name="product">20BE0061MC</entry>
    <entry name="version">0B98401 Pro</entry>
    <entry name="serial">W1KS427111E</entry>
  </baseBoard>
  <chassis>
    <entry name="manufacturer">Dell Inc.</entry>
    <entry name="version">2.12</entry>
    <entry name="serial">65X0XF2</entry>
    <entry name="asset">40000101</entry>
    <entry name="sku">Type3Sku1</entry>
  </chassis>
  <oemStrings>
    <entry>myappname:some arbitrary data</entry>
    <entry>otherappname:more arbitrary data</entry>
  </oemStrings>
</sysinfo>

The above is an example. To get real values for these, install the dmidecode package and then run the following commands:

dmidecode --type bios
dmidecode --type baseboard
dmidecode --type system

ADDENDUM 2 - Configuration updated for kernels newer than 5.x

With kernels 6.x and greater, the way to load the vfio modules has changed. The vfio modules need to be in the initramfs and loaded in the correct order for passthrough to work correctly.

First edit the kernel boot parameters and remove any vfio parameters.

Eg. Change from this

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt vfio-pci.ids=10de:1c82,10de:0fb9"

to this

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt

Then update grub (sudo update-grub) and reboot the system.

After the reboot, we need to update several files so that we can create a good initramfs.

  • Edit /etc/initramfs-tools/modules and add the following
vfio
vfio_iommu_type1
vfio_pci ids=10de:1c82,10de:0fb9
  • Edit /etc/modules and add the following
vfio
vfio_iommu_type1
vfio_pci ids=10de:1c82,10de:0fb9
  • Edit /etc/modprobe.d/nvidia.conf and add the following lines. This ensures that the nvidia drivers are loaded after vfio masks the relevant ids.
softdep nouveau pre: vfio-pci 
softdep nvidia pre: vfio-pci 
softdep nvidia* pre: vfio-pci
  • Edit /etc/modprobe.d/vfio.conf and add the following
options vfio-pci ids=10de:1c82,10de:0fb9
  • Edit /etc/modprobe.d/kvm.conf and add the following
options kvm ignore_msrs=1

After these files have been edited and saved, run the following command to rebuild the initramfs

update-initramfs -u -k all

This will take a while so sit back and relax a bit. After the command finishes, reboot the system.
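
Before rebooting, you can check that the vfio modules actually made it into the new initramfs (assuming initramfs-tools, which provides lsinitramfs):

lsinitramfs /boot/initrd.img-$(uname -r) | grep vfio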

Information

https://linuxhint.com/install_virtio_drivers_kvm_qemu_windows_vm/

https://looking-glass.io/docs/B5.0.1/install/

https://mathiashueber.com/virtual-machine-audio-setup-get-pulse-audio-working/

https://github.com/duncanthrax/scream/releases

https://old.reddit.com/r/VFIO/comments/wvnbx9/comment/ili6nib/

https://mathiashueber.com/windows-virtual-machine-gpu-passthrough-ubuntu/

Final Configuration

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>win10</name>
  <uuid>9b320ba9-6f44-49ae-8499-116c21efc170</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://microsoft.com/win/10"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <bootmenu enable='yes'/>
    <smbios mode='sysinfo'/>
  </os>
  <sysinfo type="smbios">
    <bios>
      <entry name="vendor">LENOVO</entry>
    </bios>
    <system>
      <entry name="manufacturer">Microsoft</entry>
      <entry name="product">Windows10</entry>
      <entry name="version">10.11345</entry>
    </system>
    <baseBoard>
      <entry name="manufacturer">LENOVO</entry>
      <entry name="product">20BE0061MC</entry>
      <entry name="version">0B98401 Pro</entry>
      <entry name="serial">W1KS427111E</entry>
    </baseBoard>
    <chassis>
      <entry name="manufacturer">Dell Inc.</entry>
      <entry name="version">2.12</entry>
      <entry name="serial">65X0XF2</entry>
      <entry name="asset">40000101</entry>
      <entry name="sku">Type3Sku1</entry>
    </chassis>
    <oemStrings>
      <entry>myappname:some arbitrary data</entry>
      <entry>otherappname:more arbitrary data</entry>
    </oemStrings>
  </sysinfo>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vpindex state='on'/>
      <synic state='on'/>
      <stimer state='on'/>
      <reset state='on'/>
      <frequencies state='on'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
    <ioapic driver='kvm'/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='4' cores='1' threads='2'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback' io='threads' discard='unmap' detect_zeroes='unmap'/>
      <source file='/home/sgilliam/.local/share/libvirt/images/win10.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback' io='threads' discard='unmap' detect_zeroes='unmap'/>
      <source file='/home/sgilliam/.local/share/libvirt/images/games.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/sgilliam/Downloads/virtio-win-0.1.215.iso'/>
      <target dev='sdb' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0x17'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x18'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='pci' index='10' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='10' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='11' model='pcie-to-pci-bridge'>
      <model name='pcie-pci-bridge'/>
      <address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='12' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='12' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:e2:4c:36'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:8f:51:8e'/>
      <source network='isolated'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='virtio'>
      <address type='pci' domain='0x0000' bus='0x0c' slot='0x00' function='0x0'/>
    </input>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <image compression='off'/>
      <gl enable='no' rendernode='/dev/dri/by-path/pci-0000:05:00.0-render'/>
    </graphics>
    <video>
      <model type='none'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x22' slot='0x00' function='0x0'/>
      </source>
      <rom file='/usr/share/vgabios/EVGA.GTX1050Ti.4096.170207.rom'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x22' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='3'/>
    </redirdev>
    <memballoon model='none'/>
    <shmem name='looking-glass'>
      <model type='ivshmem-plain'/>
      <size unit='M'>32</size>
      <address type='pci' domain='0x0000' bus='0x0b' slot='0x01' function='0x0'/>
    </shmem>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ich9-intel-hda,bus=pcie.0,addr=0x1b'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='hda-micro,audiodev=hda'/>
    <qemu:arg value='-audiodev'/>
    <qemu:arg value='pa,id=hda,server=unix:/run/user/1000/pulse/native'/>
  </qemu:commandline>
</domain>