= HOW TO: A detailed guide to VGA Passthrough =
Firstly, be aware that this task takes a great deal of time, patience and [[determination]], as well as a keen attitude to learning. '''You will also need previous knowledge of using Linux and Windows. This is not for the faint of heart.'''
Secondly, I will be writing this guide for Arch Linux, as it is the Linux distribution I use and am most familiar with. You may be able to roughly follow this guide on other Linux distributions if you substitute the Arch-specific steps with the equivalents for your OS, but this won’t be covered in this guide.
Thirdly, before reading this guide I strongly suggest you become familiar with Arch Linux, including its package manager, pacman, as well as building from the AUR. It may also be helpful to fully read through the additional resources listed below.
This was written as part of the 'just-do-it' challenge put forth by @Atomic_Charge.
'''Requirements''':
You will need:
* A CPU that supports both a virtualization extension (VT-x for Intel CPUs or AMD-V for AMD platforms) and an I/O MMU virtualization extension (VT-d for Intel CPUs or AMD-Vi for AMD platforms). Consult the manufacturer's webpage to check.
* A motherboard that supports those extensions and booting into UEFI. While manufacturers usually indicate in some form whether a motherboard supports UEFI, many do not indicate support for the virtualization extensions, so some research in user forums and similar pages may be necessary.
* 2 Graphics Cards: a graphics card for the Windows Guest that is capable of UEFI boot ''(Windows 8, 8.1 or 10 - the Windows 7 installer does not boot from UEFI without additional workarounds)'', as well as a graphics card for the Linux Host. A PCI Express GPU together with the CPU's integrated graphics will do just fine.
* OPTIONAL - 2 monitors: testing and using the virtual machine is easier with 2 monitors, one connected to the host graphics and the other to the GPU that the virtual machine is going to use. However, this is not mandatory, since you can connect the 2 graphics cards to the same monitor and switch between them with the monitor controls or some type of switch.
You must also ensure that you physically configure your graphics cards correctly when building your computer. Generally, the graphics card you wish to use for your Linux Host must be in the top-most x16 PCIe slot, and the graphics card you wish to use for your Windows Guest must be in any other PCIe slot under it, preferably one wired directly to the CPU and not via your chipset; check your UEFI settings or your motherboard manufacturer's manual for more information on this. This generally means that you cannot run VGA passthrough on a laptop, even with both integrated and dedicated graphics.
Some hardware compatibility lists can be found [https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware here] and [https://docs.google.com/spreadsheet/ccc?key=0Aryg5nO-kBebdFozaW9tUWdVd2VHM0lvck95TUlpMlE here].
{{Note|It is strongly recommended that you use an AMD/ATI Graphics Card for the Windows Guest, as NVIDIA have put barriers in place to stop non-Quadro cards from being run in a virtual machine: the driver disables the card when it detects that it is in a virtual machine. It is however possible to bypass this check with additional configuration in the virtual machine settings.}}
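To quickly check from an existing Linux install whether your CPU advertises the required virtualization extensions, you can search /proc/cpuinfo for the relevant CPU flags (''vmx'' for Intel VT-x, ''svm'' for AMD-V); a minimal check:
grep -E --color 'vmx|svm' /proc/cpuinfo
If this prints nothing, the extensions are either unsupported or disabled in your UEFI settings (see the next section). Note that this does not tell you anything about VT-d/AMD-Vi support.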
== Motherboard Settings Configuration ==
In most consumer motherboards, the settings for VT-d and VT-x (or their AMD counterparts) are disabled by default. In order to use VGA passthrough, we must turn these on.
This varies from motherboard to motherboard. The setting may also be called IOMMU, PCI Passthrough or Virtualization Technology. If you cannot find it and your motherboard is not listed above, please leave a comment.
If you are planning to use the CPU's integrated graphics for the Linux host, search your UEFI settings for a setting to boot using the integrated graphics.
== Step 1: Install Arch ==
There are many tutorials on installing Arch Linux on the internet; follow one of those. You can use the [https://wiki.archlinux.org/index.php/installation_guide Installation guide] and [https://wiki.archlinux.org/index.php/beginners'_guide Beginners' guide] for reference. Alternatively, you can use an Arch-based distro like [https://antergos.com/ Antergos] or [http://manjaro.github.io/ Manjaro] to install without too much work, but '''BE WARNED''' that some distros may add unnecessary and/or problematic changes to your installation, and that asking for support for these distros on the Arch forums usually doesn't produce good results (and some users really dislike it).
Ensure you boot Arch using an EFI loader, such as systemd-boot. '''I will be using systemd-boot for this 'tutorial', so if you’re following it to the letter then I strongly suggest using systemd-boot too.'''
After installing Arch and systemd-boot, reboot into the system and ensure everything is working.
== Step 2: Install VFIO Kernel (arguably optional) ==
In the Arch User Repository, there is a [https://aur.archlinux.org/packages/linux-vfio/ kernel package] which includes patches for GPU passthrough. Many of these patches are not needed when using UEFI to boot our virtual guest (which we will be doing), but some of them are.
You will need to compile the kernel manually; however, this is made significantly easier thanks to Arch Linux’s build system.
* If you don’t wish to use the VFIO kernel, regardless of its potential fixes, skip to step 3.
=== Building the Kernel ===
==== Option 1: Building the Kernel With yaourt ====
Add the [archlinuxfr] repository (which provides yaourt) by editing ''/etc/pacman.conf'' and adding:
[archlinuxfr]
SigLevel = Optional TrustAll
Server = http://repo.archlinux.fr/x86_64
Then run the following commands:
pacman -Sy yaourt
yaourt -S linux-vfio
==== Option 2: Build the Kernel Manually ====
Set up and enter a new build environment in our home folder to build our kernel in. '''You will need to do this as a normal user; makepkg will not run as root.'''
mkdir -p ~/build && cd ~/build
Get the packages we need for building the kernel using pacman:
sudo pacman -S base-devel wget
Get the kernel source code from the AUR:
wget https://aur.archlinux.org/cgit/aur.git/snapshot/linux-vfio.tar.gz
Extract the files using tar:
tar -xvf linux-vfio.tar.gz && cd linux-vfio
Build and install the kernel:
makepkg -sri
The above command will compile and install the kernel to /boot. Compiling the kernel can take a while on slower processors (and even faster ones!), so patience is required.
=== Create a new RAMDISK ===
First, let’s set up mkinitcpio to use the VFIO modules. We must modify the Linux ramdisk to load them at boot; edit the mkinitcpio configuration file using your favorite editor as root, for example:
sudo nano /etc/mkinitcpio.conf
Next, add the vfio, vfio_iommu_type1, vfio_pci, vfio_virqfd and pci-stub modules to the MODULES line. We will be using these modules later:
MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd pci-stub"
Exit the editor and save (CTRL+X if using nano). By default, the ramdisk created by makepkg uses the default settings, not the settings we just specified. To use the new settings, run mkinitcpio as root on the VFIO kernel:
sudo mkinitcpio -p linux-vfio
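If you want to verify that the modules actually made it into the new image, you can list its contents with ''lsinitcpio'' (shipped with the mkinitcpio package); for example:
lsinitcpio /boot/initramfs-linux-vfio.img | grep -i vfio
This should list the vfio module files if the rebuild picked up your MODULES line.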
=== Using an NVIDIA Graphics Card on your Linux Host ===
'''If you are not planning to use an NVIDIA card on the Linux Host, or you are using the open-source nouveau driver, you may safely skip this step.'''
If you wish to use an NVIDIA graphics card for your Linux Host with NVIDIA’s proprietary drivers, you will need to use DKMS because you are using a third-party kernel. Run the following command to uninstall the current drivers and install ''nvidia-dkms'':
sudo pacman -R nvidia && sudo pacman -S nvidia-dkms
=== Add the new Kernel to the Bootloader ===
After the kernel is created, we need to tell our bootloader the location of the new kernel and initramfs file, along with our other kernel arguments.
Assuming you are familiar with ''systemd-boot'', create a new entry as you usually would.
Alternatively, assuming you have successfully installed the kernel, that you are using systemd-boot, and that you are currently booted into the mainstream Linux kernel, you can automatically create a new entry using the following commands:
echo 'title Linux VFIO' > /tmp/linux-vfio-test.conf
echo 'linux /vmlinuz-linux-vfio' >> /tmp/linux-vfio-test.conf
echo 'initrd /initramfs-linux-vfio.img' >> /tmp/linux-vfio-test.conf
(printf 'options' && cat /proc/cmdline | awk '{first = $1; $1 = ""; print $0, first; }') >> /tmp/linux-vfio-test.conf
sudo cp /tmp/linux-vfio-test.conf /boot/loader/entries/linux-vfio.conf
This will create a bootloader entry called Linux VFIO pointing to the correct files, and copy your current boot parameters to the new entry. If you don’t wish to use these commands, or they fail, simply write your own entry as per the [https://wiki.archlinux.org/index.php/Systemd-boot#Adding_boot_entries systemd-boot documentation], pointing the kernel and ramdisk to their VFIO equivalents. Alternatively, GRUB users can use the [https://aur.archlinux.org/packages/grub-customizer/ Grub Customizer] application to add these entries.
Then set this as our default entry by doing:
sudo nano /boot/loader/loader.conf
and changing the default flag to linux-vfio (or whatever you called the entry file). Your /boot/loader/loader.conf file should now look something like this:
timeout 1
default linux-vfio
Reboot your system and use the following command to verify you are booted into the VFIO kernel:
uname -a | grep -i --color=always 'vfio'
If there is no output, then you are not booted into the VFIO kernel.
If it outputs a string similar to:
Linux guppy 4.4.3-1-vfio #1 SMP PREEMPT Tue Mar 8 17:59:23 GMT 2016 x86_64 GNU/Linux
Then you have successfully booted into your new VFIO kernel!
== Step 3: Boot with IOMMU. ==
'''IOMMU is needed to use VT-d or AMD-Vi, which are needed for VGA passthrough. To boot with IOMMU enabled, you must add a boot parameter to your kernel when starting Linux.'''
If you are using an AMD system, add amd_iommu=on; likewise, if you are using an Intel system, add intel_iommu=on to your boot parameters.
If you are using systemd-boot as suggested, you can modify your boot parameters by editing your boot entry file; if you installed the VFIO kernel above, this is /boot/loader/entries/linux-vfio.conf.
sudo nano /boot/loader/entries/linux-vfio.conf
If using systemd-boot, add this to the end of your '''''options''''' line:
* ''intel_iommu=on'' for intel systems
* ''amd_iommu=on'' for amd systems
Then reboot your system after saving your changes. After rebooting, check if your parameter was recognized by doing:
cat /proc/cmdline | grep 'iommu=on' --color=always
If it outputs a match, then you’ve successfully passed the IOMMU flag to your Linux kernel. To ensure that it is actually working, run:
dmesg | grep -i 'iommu: adding device'
If this outputs lines too, then you’re good to go! Otherwise, ensure that you enabled VT-d and VT-x (or their AMD equivalents) in your UEFI settings. You can use dmesg to diagnose this further, looking for IOMMU-related messages.
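If you want a clearer picture of how your devices are grouped, the following small script (a commonly-used snippet, not specific to this guide) walks /sys/kernel/iommu_groups and prints each group along with its devices:
#!/bin/bash
# Print every IOMMU group and the devices it contains
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${group##*/}:"
    for device in "$group"/devices/*; do
        # lspci -nns prints the device at this bus address with its IDs
        echo -e "\t$(lspci -nns "${device##*/}")"
    done
done
Your guest graphics card and its HDMI audio function will usually share a group; anything else in that group must also be passed through (or you will need the ACS Override Patch described later).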
== Step 4: Setting Modules, Pre-configuring your hardware ==
=== Blacklisting the Graphic Cards Modules ===
As previously mentioned, we’re going to need to use our second graphics card for our guest operating system. In this example I will be using an AMD R9 390.
There are three possible methods to do this…
The easy way to isolate the graphics card from the system is to simply not load its driver, leaving it free and available for libvirt. To get started, create a new file in /etc/modprobe.d as root:
sudo nano /etc/modprobe.d/blacklist.conf
If you plan to use an AMD card for your Windows Guest, enter:
blacklist radeon
blacklist fglrx
blacklist amdgpu
Otherwise, if you plan to use an NVIDIA card for your Windows Guest, enter:
blacklist nvidia
blacklist nouveau
Lastly, save the file, reboot, and skip to the next step.
=== Load the PCI passthrough module ===
Before using any PCI passthrough module, you first need the domain/bus addresses and the vendor and device IDs of your graphics cards.
To get that information use the command:
sudo lspci -vnn | grep --color=always -i 'amd\|nvidia'
On my system, with an AMD R9 390 for the Guest and a NVIDIA GTX 660 for the host, this command outputs:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106 [GeForce GTX 660] [10de:11c0] (rev a1) (prog-if 00 [VGA controller])
Subsystem: NVIDIA Corporation Device [10de:0995]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia
01:00.1 Audio device [0403]: NVIDIA Corporation GK106 HDMI Audio Controller [10de:0e0b] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:0995]
0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii PRO [Radeon R9 290] [1002:67b1] (rev 80) (prog-if 00 [VGA controller])
0a:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:aac8]
There are two main parts to each entry: the first part is the domain/bus address (''0a:00.0'') and the last part, in brackets, is the vendor and device IDs (''1002:67b1'').
Because I want to use my AMD card, I will write down '''1002:67b1''', which is the VGA component of the card, and '''1002:aac8''', which is the audio component.
==== Option 1: Vfio-pci ====
'''You will need to be using the VFIO kernel above to use this method, or you must rebuild your RAMDISK manually with the 'vfio vfio_iommu_type1 vfio_pci vfio_virqfd' modules.'''
Note that if you are going to be using two AMD cards or two NVIDIA cards, one for the guest and one for the host, you must use this method, since blacklisting the driver would disable the host card as well.
You should also use this method if you have problems with HDMI audio being automatically grabbed by snd_hda_intel (this can also be worked around by following the instructions in Step 5), or if you have issues with PCI reset after shutting down the VM.
We’re going to assign our graphics card a 'fake' driver called '''vfio-pci''' that is used by the virtualization software to communicate with the card. To do this, we need to create a new file in the ''/etc/modprobe.d/'' folder telling Linux to use the IDs of our card when loading the '''vfio-pci''' module at startup. In our case we’re going to call the file vfio.conf. Use the following command to create it:
sudo nano /etc/modprobe.d/vfio.conf
and simply replace the device IDs in the text below with the device IDs you just got:
options vfio-pci ids=AAAA:aaaa,BBBB:bbbb
In my case it is:
options vfio-pci ids=1002:67b1,1002:aac8
After you’ve done this, save the file, reboot, and check that the card is using vfio-pci with the following command:
lspci -v | grep 'vfio-pci' -B 9
If it outputs the graphics card and sound subsystem then you can move onto the next step!
==== Option 2: Pci-Stub ====
There is little point in using pci-stub over vfio-pci, except if you have problems with vfio-pci or your kernel does not support it.
'''You will need to be using the VFIO kernel above to use this method, or you must rebuild your RAMDISK manually with the 'pci-stub' module.'''
We need to tell our kernel to use pci-stub on these two devices. To do so, modify your bootloader entry again:
sudo nano /boot/loader/entries/linux-vfio.conf
And append this to the end of the options line, separating each ID with a comma:
pci-stub.ids=XXXX:xxxx,YYYY:yyyy
In my case it is:
pci-stub.ids=1002:67b1,1002:aac8
After you’ve done this, save the file, reboot your system and check whether or not the card is using pci-stub with the following command:
lspci -v | grep 'pci-stub' -B 9
If it outputs the graphics card and sound subsystem then you can move onto the next step!
== Step 5: Initial Configuration ==
=== Obtain the necessary files ===
* '''Operating System ISO file''' - generate an ISO from your Windows installation disc for a faster installation, using the [https://wiki.archlinux.org/index.php/Optical_disc_drive#Burning_CD.2FDVD.2FBD_with_a_GUI appropriate software].
* '''Virtio Drivers CD''' - similar to the ''VirtualBox Guest Additions CD'', this is a CD image with several drivers for the virtual machine's Windows installation which add functionality and improve performance. The CD can be obtained from [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/ here].
* '''UEFI binaries''' - the files necessary to emulate a UEFI firmware in the virtual machine. You can get them from [https://www.kraxel.org/repos/jenkins/edk2/ here].
=== Installing the UEFI binaries ===
To install the UEFI binaries, we need to create a temporary directory, download the UEFI binaries, extract the contents, and then copy them to ''/usr/share''. You can do this with the commands:
mkdir -p /tmp/efi
cd /tmp/efi
wget https://www.kraxel.org/repos/jenkins/edk2/edk2.git-ovmf-x64-0-20160311.b1594.gf6326d1.noarch.rpm
rpmextract.sh edk2.git-ovmf-x64-0-20160311.b1594.gf6326d1.noarch.rpm
rm edk2.git-ovmf-x64-0-20160311.b1594.gf6326d1.noarch.rpm
sudo cp -R . /
{{note| The files in https://www.kraxel.org/repos/jenkins/edk2/ are updated quite frequently, so the filename might not match the filename in the previous commands.}}
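You can confirm the binaries landed in the right place by listing the target directory:
ls /usr/share/edk2.git/ovmf-x64/
You should see OVMF_CODE-pure-efi.fd and OVMF_VARS-pure-efi.fd among the files; these are the paths used in the libvirt and QEMU configuration later on.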
=== Setting Permissions ===
You will also need to ensure that your user is able to access the GPU and any disks that you might want to use when running the virtualization software. To do this, create a new udev rule in ''/etc/udev'' with the command:
sudo nano /etc/udev/rules.d/10-qemu-hw-users.rules
and add the following contents, replacing YOUR_USERNAME with the username you will primarily use the system as:
KERNEL=="sda[3-6]", OWNER="YOUR_USERNAME", GROUP="YOUR_USERNAME"
KERNEL=="vfio", SUBSYSTEM=="vfio", OWNER="YOUR_USERNAME", GROUP="YOUR_USERNAME"
=== OPTIONAL: Remove IOMMU Group restriction using ACS Override Patch ===
IOMMU groups, a feature of the vfio modules, are essentially sets of devices which are isolated from all other system devices for use in virtual machines. When we want to use a PCI or PCI Express device in the virtual machine, we have to specify the domain/bus address of that device in the parameters of the virtual machine, as well as the domain/bus addresses of all other devices assigned to that group. More info in the ''QEMU Installation and Configuration'' section.
However, if you have installed the VFIO kernel, you can use the ''ACS Override Patch'' included in it to override the restriction of passing all the devices of an IOMMU group, and use a specific device without passing the other devices in its group. Before using this patch, be sure you know what [http://vfio.blogspot.pt/2014/08/iommu-groups-inside-and-out.html the risks] are.
You need to add ''pcie_acs_override='' with one of the following options:
* '''downstream''' - All downstream ports - full ACS capabilities
* '''multifunction''' - All multifunction devices - multifunction ACS subset
* '''id:nnnn:nnnn''' - Specific device - full ACS capabilities, specified as vid:did (vendor/device ID) in hex
Usually, adding ''pcie_acs_override=downstream'' to the options line in ''/boot/loader/entries/linux-vfio.conf'' is sufficient, like this:
...
options root=....... intel_iommu=on pcie_acs_override=downstream BOOT_IMAGE=/vmlinuz-linux
...
=== OPTIONAL: Passing devices to VMs without blacklisting modules ===
The method for doing this varies depending on the module, and it is sometimes not optimal or recommended because of stability issues related to certain modules.
==== Module snd_hda_intel ====
This method is usually necessary when the module grabs the HDMI audio component of a graphics card, or when you have multiple sound cards in the system and want to pass one of them to the virtual machine without disabling the others.
Get a list of detected cards using the command ''aplay --list-devices''; it should produce something like this:
**** List of PLAYBACK Hardware Devices ****
card 0: ...
...
card 1: ...
....
Create the file to pass parameters to the module with:
sudo nano /etc/modprobe.d/sound.conf
and set the parameter ''enable'' with a value for each detected sound card, using 1 to enable a sound card and 0 to disable one, like this:
options snd_hda_intel enable=1,0
If the wrong card is disabled, switch the positions of the values.
==== Other Modules ====
'''WARNING''': While many modules will work fine with the instructions below, some modules do not work correctly after unbinding a device from them. Check the output of ''dmesg'' and ''journalctl -xe'' for any errors.
Create the file ''/usr/bin/vfio-bind'' with the following content:
#!/bin/bash
modprobe vfio-pci
for dev in "$@"; do
    # Read the vendor and device IDs of this device
    vendor=$(cat "/sys/bus/pci/devices/$dev/vendor")
    device=$(cat "/sys/bus/pci/devices/$dev/device")
    # Unbind the device from the driver that is currently using it, if any
    if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
    fi
    # Bind the device to the vfio-pci module
    echo "$vendor $device" > /sys/bus/pci/drivers/vfio-pci/new_id
done
Make it executable with the command:
chmod 755 /usr/bin/vfio-bind
To run it manually, run the command with the hostbus addresses (given by the lspci command) of the devices that you want to use in the virtual machine as parameters, like this:
vfio-bind 0000:07:00.0 0000:07:00.1
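To confirm the binding worked, you can query the kernel driver in use for each address you passed, for example:
lspci -nnk -s 07:00.0
The output should contain the line ''Kernel driver in use: vfio-pci''.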
It is recommended to run it manually first to see if there are any errors when unbinding devices from their drivers, but to run this script at startup, create the file ''/etc/systemd/system/vfiobind.service'' with the content:
[Unit]
Description=Binds devices to vfio-pci
After=syslog.target
[Service]
EnvironmentFile=-/etc/vfio-pci.cfg
Type=oneshot
RemainAfterExit=yes
ExecStart=-/usr/bin/vfio-bind $DEVICES
[Install]
WantedBy=multi-user.target
and the configuration file ''/etc/vfio-pci.cfg'' with the hostbus addresses like this:
DEVICES="0000:00:11.0 0000:04:00.0 0000:05:00.0 0000:06:00.0 0000:07:00.0 0000:07:00.1"
and run the commands:
systemctl enable vfiobind.service
systemctl start vfiobind.service
== Step 6: Install and Configure Virtualization Software ==
There are a few software programs that can use the vfio modules and allow virtual machines to use the GPU and other PCI and/or PCI Express devices, but the most popular ones are QEMU and virt-manager.
* '''virt-manager''' is a desktop-driven virtual machine manager and a graphical interface for libvirt that allows a user to manage several virtual machines (KVM, Xen or QEMU), assign hardware devices to them, and create virtual network devices for use among them, as well as providing other benefits. It is the easiest and most straightforward way for a user to create and modify virtual machines on a desktop.
* '''qemu''' is a command-line hypervisor that allows running virtual machines with fewer dependencies than virt-manager. Its use is recommended over virt-manager when the user needs to add parameters to the virtual machine that are not available in the graphical interfaces (like parameters to allow the virtual machine to use specific NVIDIA cards or specific Hyper-V extensions) or new functions that were added to QEMU but not yet implemented in libvirt.
The choice will depend on your system and preferences.
=== OPTION 1: Virt-manager and libvirt ===
==== Installation and Configuration ====
We will need a lot of software to start running a Windows Guest on our Linux install; this neat little command will grab all the packages we need from pacman:
sudo pacman -S qemu virt-manager libvirt firewalld virt-install rpmextract dnsmasq
You may notice that we are installing ''firewalld'', even though we may not need it. Libvirt seems to use firewalld for its NAT networking, even when it’s disabled. I am including it here as it seems to be an unlisted requirement.
''dnsmasq'' is another requirement for networking, and ''rpmextract'' is needed to install the UEFI binaries.
To use libvirt and virt-manager, we also need to start and enable the libvirt service and its logging service using systemctl. Run the following commands:
sudo systemctl enable libvirtd
sudo systemctl enable virtlogd.socket
sudo systemctl start libvirtd
sudo systemctl start virtlogd.socket
After that, we need to tell libvirt where our UEFI binaries are. To do so, we must append a parameter to ''/etc/libvirt/qemu.conf'' by editing the file with:
sudo nano /etc/libvirt/qemu.conf
and add the following content:
nvram = [
"/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd",
]
==== Creating the Virtual Machine ====
The majority of this step is simply a paraphrased version of [http://vfio.blogspot.co.uk/2015/05/vfio-gpu-how-to-series-part-4-our-first.html this article], however I’ve decided to include this step anyway.
Before we start to create a VM, we must make sure we have a Windows install ISO or some other means to install Windows, and that we have done all of the previous steps correctly. Once you are ready, start up virt-manager/Virtual Machine Manager, which you can find in your app drawer. If you are still struggling to find it, start it from the command line with ''virt-manager''. You might need to enter your password to start virt-manager.
http://i.imgur.com/kXTImtC.png
Go to File -> New Virtual Machine and select the installation method; I’ve chosen ''Local install media'' because I plan to use an ISO...
http://i.imgur.com/U58hNKK.png
Select your ISO, and select the OS manually if it is detected as ''Unknown'', which seems to happen for Windows 10 ISOs.
http://i.imgur.com/wosJh4g.png
Click next on this screen after configuring the amount of allocated memory; do not worry about the CPU count, as we will be changing this later anyway.
http://i.imgur.com/rbNn0M7.png
Select whether you would like to use existing storage, or create a new virtual disk image. I am going to be using a spare partition on my SSD here. You can use pre-existing images or create a new one using the option here.
According to [http://vfio.blogspot.pt/ vfio.blogspot] and the [https://wiki.archlinux.org/ Arch Linux Wiki], using an entire physical disk partition gives optimal disk performance, although it is not necessary.
http://i.imgur.com/2NwXHCg.png
*'''Important''': Ensure that ''Customize Configuration Before Install'' is selected, and click finish.
http://i.imgur.com/7UnJUjh.png
You may be asked to start the virtual network, click yes.
http://i.imgur.com/ZBdXphR.png
You will then have a selection of options to configure your VM; first, select your chipset and your firmware.
We are going to be using UEFI and the i440FX chipset. '''DO NOT USE BIOS'''
If UEFI does not show up, ensure you did the previous step correctly, and make sure you modified the libvirt config as shown.
You can also choose Q35 as the chipset here, which supposedly brings better support for VGA passthrough and improved performance - but I couldn’t get the AMD drivers to install on it without crashing either the guest or the host.
If you’re up to the challenge of trying to get Q35 to work anyway, try it and post your results.
You can also give the system a title here. Don’t forget to hit ''Apply'' once you’ve changed any of these values.
http://i.imgur.com/cHAJGZZ.png
Next, go to the CPU pane, expand ‘Topology’ and uncheck ‘Copy host CPU configuration’, then set your CPU’s topology to your real CPU’s topology:
* '''Sockets''' is the number of physical CPUs you have, usually one except in dual-CPU systems.
* '''Cores''' is the number of cores per CPU you have.
* '''Threads''' is the number of threads per core you have, on hyperthreaded systems this is 2, otherwise it is 1.
Then, assign the number of cores you want your VM to have in the ''Current Allocation'' field; I am going to assign 8/12 cores to my virtual machine.
{{note| It is generally recommended not to assign all cores to the virtual machine; leave some cores for the host to avoid performance issues in some cases. }}
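If you are unsure of your real topology, you can read it from the host with lscpu (the grep pattern is just a convenience; adjust as needed):
lscpu | grep -E 'Socket|Core|Thread'
This prints the sockets, cores per socket and threads per core to enter above.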
'''You’ll also want to enter ''host-passthrough'' as your CPU model. You will need to type this manually in most versions of virt-manager.'''
I personally couldn’t get the AMD Graphics Drivers to work with ''Copy host CPU configuration'' checked.
http://i.imgur.com/C5JJ7he.png
Remember to hit apply to save your changes.
Next, go to '''IDE Disk 1''', click ''Advanced options'' and change the ''Disk bus'' field to SATA using the dropdown. Remember to hit apply.
http://i.imgur.com/oYK6mlF.png
Then, do the same for ''IDE CDROM 1'', and click apply.
http://i.imgur.com/M6Ay4js.png
Next, delete the following unnecessary hardware from the virtual machine by right-clicking each item and selecting ''Remove Hardware'':
* Tablet
* Display Spice
* Sound: ich6
* Console
* Channel Spice
* Video QXL
* USB Redirector 1
* USB Redirector 2
Your hardware tree on the left should now look like this:
http://i.imgur.com/UgS6w6a.png
Next, I am going to modify my NIC to use “virtio” as the device model. This allows our VM to have 10 gigabit networking to our host.
http://i.imgur.com/XJ4hmmg.png
Click apply to save your changes.
Now, use the Add Hardware button to add your Graphics Card and its HDMI Audio component, which you can find under PCI Host Device.
http://i.imgur.com/OfgoLDx.png
http://i.imgur.com/OzarrtT.png
You can also add other hardware here, such as a mouse and keyboard under USB Host Device, which you will need for controlling the installation of Windows.
{{note| By adding USB hardware using this window you are giving the VM full access to that device. The Linux host will lose access to that hardware while the virtual machine is running.}}
http://i.imgur.com/diLQMKo.png
http://i.imgur.com/tqIpHQQ.png
Next, we need to add the virtio-win driver disk, which has the drivers for the NIC. Go to ''Add Hardware'' once again, click ''Storage'' -> ''Select or create custom storage'' -> ''Browse Local'' to open a file browser with which you can point to the virtio-win driver disk. You’ll then need to change ''Device type'' to ''CDROM device'', and change ''Bus type'' to ''SATA''. Before clicking ''Finish'', your screen should look something like this:
http://i.imgur.com/ULtlgsd.png
Lastly, click ''Finish'' to add the ISO as a disk so you can install the network drivers later, then click ''Begin Installation'' in the top right corner of the main window.
If you did everything right, you should see video output from your graphics card!
Install Windows like on any other machine; it should be fairly straightforward. After doing so, you may notice you have no network connectivity. To fix this, go into ''Device Manager'' on the VM and update the driver of the network card using the CD drive that you added. You should have 2 CD drives, one with the Windows installation and the other with the virtio drivers. Just tell Device Manager to search the entire CD drive with the virtio drivers for a driver.
Then, navigate to AMD or NVIDIA’s website and install your driver.
Hopefully, you will now have a fully hardware accelerated Windows Virtual Machine!
==== OPTIONAL: Additional Resources ====
For detailed information about virt-manager:
* [https://www.suse.com/documentation/sles11/book_kvm/data/part_1_book_book_kvm.html Suse Managing Virtual Machines with libvirt]
=== OPTION 2: QEMU ===
==== Installation and Configuration ====
To install the ''qemu'' package just run the command:
sudo pacman -Sy qemu
To set up the permissions to allow QEMU to use the PCI passthrough module, edit the file ''/etc/libvirt/qemu.conf'' and set the following parameters:
...
user = "YOUR-USER"
group = "78"
clear_emulator_capabilities = 0
...
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
"/dev/vfio/1"
]
...
Group 78 is usually called kvm. You can check this by opening the file ''/etc/group''. If it doesn't exist, create the group and add your user to it, or put another group of which your user is a member in the configuration file.
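For example, assuming the group is indeed called kvm on your system, you could check for it and add your user like this:
getent group kvm
sudo usermod -aG kvm YOUR-USER
Log out and back in afterwards for the group change to take effect.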
{{note| Some users managed to use qemu with no permission problems, while on other installations this did not seem to work and ''sudo'' was used to run the virtual machine. THIS AREA REQUIRES MORE TESTING.}}
For this setup you will need to obtain 2 bits of information about your system. They are:
* Information about '''IOMMU groups''' on your system. IOMMU groups, a feature of the vfio modules, are essentially sets of devices which are isolated from all other system devices for use in virtual machines. When we want to use a PCI or PCI Express device in the virtual machine, we have to specify the domain/bus address of that device in the parameters of the virtual machine, as well as the domain/bus addresses of all other devices assigned to that group. To get a list of available IOMMU groups and devices, run the command ''find /sys/kernel/iommu_groups/ -type l''; it should return something like this:
...
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
...
and we compare it with the output of the ''lspci'' command:
...
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
...
01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)
01:00.1 Audio device: NVIDIA Corporation Device 0fba (rev a1)
...
In my case I am going to use ''01:00.0'' and ''01:00.1'' in the virtual machine, which is my graphics card.
Some notes on IOMMU groups:
* For each group, you will have to add ''"/dev/vfio/GROUP-NUMBER"'' to the variable ''cgroup_device_acl'' in ''/etc/libvirt/qemu.conf'' to allow qemu to access the devices in that group.
* Switching the position of a PCI/PCI Express card on the motherboard will likely cause the device to be assigned to another IOMMU group, possibly allowing you to use the card without passing other unwanted devices to the virtual machine.
* In some situations it is possible to run the virtual machine without passing all the devices in the group. In this particular case it was not necessary to pass the PCI Express x16 Controller (possibly because it is the PCI Express controller and not an endpoint device, though the reason is unclear).
* If you have installed the VFIO kernel, you can use the ''ACS Override Patch'' included in it to override this restriction and use a device without passing all the other devices in its group. If you want to try it, consult the instructions in step 5.
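As a quick way to see which group a single device belongs to, you can also resolve its sysfs symlink directly (using an address from the lspci output above):
readlink /sys/bus/pci/devices/0000:01:00.0/iommu_group
The trailing number of the printed path is the group number to use in ''/dev/vfio/GROUP-NUMBER''.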
The other bit of information necessary is the '''vendor and hardware IDs''' of your USB devices, namely the USB keyboard, mouse or other USB devices that you might want to pass to the virtual machine. Alternatively, you can assign a specific USB port to the virtual machine, and every device connected to that port will be added to the virtual machine:
For specific devices run the command ''sudo lsusb'':
...
Bus 003 Device 002: ID 1038:1702 SteelSeries ApS
Bus 003 Device 003: ID 13ba:0018 PCPlay Barcode PCP-BCG4209
...
In my case it will be ''1038:1702'' for my mouse and ''13ba:0018'' for my PS/2 to USB adapter.
For a USB port, run the command ''lsusb -t'' '''before''' and '''after''' plugging a device into that port, to pinpoint the bus and port address of the port that you want to pass:
/: Bus '''04'''.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
/: Bus '''03'''.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/14p, 480M
|__ Port '''9''': Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 12M
|__ Port '''9''': Dev 2, If 1, Class=Human Interface Device, Driver=usbhid, 12M
|__ Port '''10''': Dev 3, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
|__ Port '''10''': Dev 3, If 1, Class=Human Interface Device, Driver=usbhid, 1.5M
|__ Port '''11''': Dev 5, If 0, Class=Mass Storage, Driver=usb-storage, 480M
|__ Port '''14''': Dev 4, If 0, Class=Vendor Specific Class, Driver=ath9k_htc, 480M
/: Bus '''02'''.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/2p, 480M
|__ Port '''1''': Dev 2, If 0, Class=Hub, Driver=hub/8p, 480M
/: Bus '''01'''.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/2p, 480M
|__ Port '''1''': Dev 2, If 0, Class=Hub, Driver=hub/6p, 480
In this case I want to use the port where I connected a USB pen drive, so it will be hostbus 3 and hostport 11.
==== Creating the Virtual Machine ====
For this tutorial we will consider the folder ''/home/YOUR-USER/Vms/'' as the location of all the virtual machine files that you obtained at the beginning of step 5.
Create a 60GB virtual disk with the command:
dd if=/dev/zero of=/home/YOUR-USER/Vms/windows_hddisk.img bs=1M seek=60000 count=0
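The ''seek'' trick makes dd create a sparse 60000 MiB file that only consumes real disk space as the guest writes to it. If you prefer, ''qemu-img'' (installed with the qemu package) can create a similar raw image:
qemu-img create -f raw /home/YOUR-USER/Vms/windows_hddisk.img 60G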
Run the virtual machine with the following commands:
cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-cpu host,kvm=off,check \
-vga qxl \
-device vfio-pci,host=01:00.0,multifunction=on \
-device vfio-pci,host=01:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-device virtio-scsi-pci,id=scsi \
-drive if=virtio,id=drive0,file=/home/YOUR-USER/Vms/windows_hddisk.img,format=raw,cache=none,aio=native \
-soundhw hda \
-usbdevice host:13ba:0018 \
-usbdevice host:1038:1702 \
-device usb-host,bus=xhci.0,hostbus=3,hostport=11 \
-netdev user,id=vmnic \
-device virtio-net,netdev=vmnic \
-boot order=d \
-device ide-cd,drive=drive-cd-disk1,id=cd-disk1,unit=0,bus=ide.0 \
-drive file=/home/YOUR-USER/Vms/Windows8.1.iso,if=none,id=drive-cd-disk1,media=cdrom \
-device ide-cd,drive=drive-cd-disk2,id=cd-disk2,unit=0,bus=ide.1 \
-drive file=/home/YOUR-USER/Vms/virtio-win-0.1.112.iso,if=none,id=drive-cd-disk2,media=cdrom \
'''NOTES''':
* ''',multifunction=on''' is necessary when passing a single physical device with multiple functions. In this case it is a graphics card that is also recognized as an HDMI sound card.
* '''-vga qxl''' creates an emulated graphics adapter and presents a window where you can install and configure the virtual machine. After you successfully install your graphics card in the virtual machine, replace this parameter with ''-vga none'' and ''-nographic'' to disable it.
* '''kvm=off''' is only necessary if you are using a non-Quadro NVIDIA card.
* '''-netdev user,id=vmnic''' and '''-device virtio-net,netdev=vmnic''' provide NAT network access to the virtual machine (you will need to install the network drivers from the virtio drivers CD).
* Using '''-drive if=virtio,id=drive0,file=/home/YOUR-USER/Vms/windows_hddisk.img,format=raw,cache=none,aio=native''' will use the virtio drivers for virtual disk access, usually resulting in better performance, but you will need the virtio drivers CD (instructions in Step 7 below). You can instead tell qemu to emulate a SATA disk using the parameter '''-device ide-drive,drive=/home/YOUR-USER/Vms/windows_hddisk.img,bus=ahci.0''', and the virtio drivers won't be necessary.
* The last 5 lines won't be necessary after you finish the installation and configuration of the operating system and drivers.
Also don't forget to check the links in '''''Additional Resources''''' below.
==== OPTIONAL: Create a host-only network for the virtual machine ====
The NAT parameter will allow the virtual machine to access the internet and the LAN, but if you want to do the reverse and access the virtual machine from the host, you need more configuration. One way is to create a host-only network.
Install ''dnsmasq'' package:
sudo pacman -S dnsmasq
and to create the host-only network run:
killall -9 dnsmasq -q
brctl addbr br0
ip addr add 192.168.179.1/24 broadcast 192.168.179.255 dev br0
ip link set br0 up
ip tuntap add dev tap0 mode tap
ip link set tap0 up promisc on
brctl addif br0 tap0
dnsmasq --interface=br0 --bind-interfaces --dhcp-range=192.168.179.10,192.168.179.254
run the virtual machine with the following additional parameter (the mac parameter can be random and is not mandatory):
-netdev tap,id=t0,ifname=tap0,script=no,downscript=no -device e1000,netdev=t0,id=nic0,mac=DE:AD:BE:EF:A4:B7
You should now have another network device with a 192.168.179.2 IP address in the virtual machine. After you shut down the virtual machine, remove the host-only network with:
killall -9 dnsmasq
ip link set down br0
ip link set down tap0
brctl delbr br0
{{note| If the operating system, for some reason, does not automatically receive an IP from dnsmasq, manually assign the IP address 192.168.179.2, netmask 255.255.255.0 and gateway 192.168.179.1}}
==== OPTIONAL: Additional Resources ====
For detailed information about qemu parameters and related information:
* [https://wiki.archlinux.org/index.php/QEMU QEMU Archwiki page]
* [http://qemu.weilnetz.de/qemu-doc.html QEMU Emulator User Documentation]
* [https://en.wikibooks.org/wiki/QEMU Qemu Wikibooks]
For tips on improving performance with qemu:
* [https://wiki.archlinux.org/index.php/KVM#Enabling_huge_pages Enabling Hugepages]
* [https://bbs.archlinux.org/viewtopic.php?pid=1270311#p1270311 Disabling nested pages]
* [http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html Enabling Hyper-V enlightenments with KVM]
* [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-cpuset.html Using cgroups, specifically cpuset]
* [http://www.linux-kvm.org/page/Tuning_KVM Tuning_KVM]
* [https://wiki.mikejung.biz/KVM_/_Xen KVM QEMU Virtio Tuning and SSD VM Optimization Guide]
For user scripts that automate running a virtual machine with GPU passthrough:
* [http://pastebin.com/rcnUZCv7 Youtube user blu3bird84 script]
* [https://gist.github.com/f-koehler/055559ed20c54b737569 f-koehler script]
* [http://pastebin.com/eXTrd2GB nakamura script (adapted from f-koehler script)]
== Step 7: Virtual Machine configuration ==
=== Windows Installation ===
If you selected a Virtio SCSI disk during the virtual machine creation (which results in better performance) instead of SATA or IDE, you need to load the driver from the driver CD during the installation, following these instructions (you do need to add the virtio driver CD to the virtual machine):
[[File:Virtio driver win install 1.jpg]]
[[File:Virtio driver win install 2.jpg]]
[[File:Virtio driver win install 3.jpg]]
Select the ''viostor'' folder and your Windows version:
[[File:Virtio driver win install 4.jpg]]
[[File:Virtio driver win install 5.jpg]]
=== Windows Configuration ===
After the Windows installation, you will probably have to install the drivers contained on the virtio drivers CD. The CD contains the following drivers and software:
* NetKVM/: Virtio Network driver
* viostor/: Virtio Block driver
* vioscsi/: Virtio SCSI driver
* viorng/: Virtio RNG driver
* vioser/: Virtio serial driver
* Balloon/: Virtio Memory Balloon driver
* qxl/: QXL graphics driver for Windows 7 and earlier. (build virtio-win-0.1.103-1 and later)
* qxldod/: QXL graphics driver for Windows 8 and later. (build virtio-win-0.1.103-2 and later)
* pvpanic/: QEMU pvpanic device driver (build virtio-win-0.1.103-2 and later)
* guest-agent/: QEMU Guest Agent 32bit and 64bit MSI installers
* qemupciserial/: QEMU PCI serial device driver
* *.vfd: VFD floppy images for using during install of Windows XP
Install the ''QEMU Guest Agent'' by double-clicking the executable. Depending on the hardware that you assigned to the virtual machine, install each driver by right-clicking the ''.inf'' file in its folder and clicking install. If in doubt, you can install all of them, provided there is a driver for your specific Windows version.
== Step 8: Additional Post Configuration ==
=== Disabling Monitors ===
If you are using 2 or more monitors and want to disable a monitor or change your monitor configuration, you can use the ''xrandr'' command-line tool. You can find some information about it [https://wiki.archlinux.org/index.php/xrandr here] and [https://wiki.archlinux.org/index.php/Multihead here].
'''''TIP''''': use the ''ARandR'' application, which is a graphical interface for the ''xrandr'' command-line application; set up your monitor configuration and click Layout->Save, and the program will create a bash script that applies that layout when run.
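As a minimal xrandr example (the output name ''HDMI-1'' is just an illustration; use the names printed by running ''xrandr'' with no arguments):
xrandr --output HDMI-1 --off    # disable the monitor connected to HDMI-1
xrandr --output HDMI-1 --auto   # re-enable it at its preferred mode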
=== Install Remote Client Software ===
I highly recommend either installing [http://synergy-project.org/ Synergy] or [http://www.tightvnc.com/ TightVNC] to easily control your guest machine after you've finished installing Windows.
=== Install Synergy ===
Synergy is mouse and keyboard sharing software, allowing a single keyboard and mouse to control several computers. In my case:
* I have 2 screens, one for the virtual machine and one for the host.
* I will install synergy on both the virtual machine and the host.
* To prevent lag, I will give the virtual machine access to my mouse and keyboard and set up synergy there as the server.
* Synergy on the host is configured as a client.
* The hostname of the host side is HOSTNAME and its IP address is 192.168.179.1.
* The hostname of the virtual machine is VMNAME and its IP address is 192.168.179.2.
Assuming you want to replicate this setup follow the instructions below:
''Host side''
* Install synergy on host with the command:
sudo yaourt -S synergy
* Run synergy in the background with the command:
/usr/lib/synergy/synergyc -f --no-tray --debug ERROR --name VMNAME 192.168.179.2:24800 &
* After shutting down the virtual machine run:
killall -9 synergyc
''Virtual machine''
* Download the Windows version of synergy from [https://synergy-project.org/nightly here] and install it on the virtual machine.
* Configure it like this:
[[File:synergy1.jpg]]
Drag the icon to the position of your screen:
[[File:synergy2.jpg]]
[[File:synergy6.jpg]]
Double-click each screen and use the following configuration:
[[File:synergy3.jpg]]
[[File:synergy5.jpg]]
Press ''start'' and, if the client is connected, press ''apply'' to create a service that will start the application at startup:
[[File:synergy7.jpg]]
=== Misc ===
You can also install Steam and use Steam In-Home Streaming like I am doing, although you will need a dummy VGA adapter to enable video acceleration. I recommend using an 82 Ohm resistor on a DisplayPort to VGA adapter, shorting the RED pin to the ground pin with it. This will allow you to do 1080p@60Hz or 1440p@40Hz with CRU.
== Troubleshooting ==
=== Audio crackling and other issues ===
There are a huge number of '''possible''' solutions that you might try; even then, it might not be possible to solve the issue, depending on your system and preferences:
'''All virtualization software'''
* Try shutting down all audio programs that might cause conflicts like music players or web browsers on the host side.
'''Qemu'''
* '''''Test different audio drivers / parameters''''': the command ''qemu-system-x86_64 -audio-help'' will give a list of drivers and available parameters. There is no definitive guide for this; it is pretty much trial and error. When testing several parameters, change 1 parameter at a time to see how it affects the virtual machine. Some examples (set as environment variables in front of the qemu command) include:
** QEMU_AUDIO_DRV=alsa QEMU_AUDIO_TIMER_PERIOD=0 qemu-system-x86_64 ...
** QEMU_AUDIO_DRV=alsa QEMU_ALSA_DAC_BUFFER_SIZE=4096 QEMU_ALSA_DAC_PERIOD_SIZE=0 qemu-system-x86_64 ...
** QEMU_AUDIO_DRV=pa qemu-system-x86_64 ...
** QEMU_AUDIO_DRV=pa QEMU_PA_SAMPLES=128 qemu-system-x86_64 ...
** QEMU_AUDIO_TIMER_PERIOD=10 qemu-system-x86_64 ...
* '''''Use a USB sound card''''': this can be done with (use only one solution at a time):
** replacing the ''-soundhw hda'' parameter with ''-device usb-audio'' (test also with and without QEMU_AUDIO_TIMER_PERIOD=10)
** passing the entire USB sound card by removing ''-soundhw hda'' and ''-device usb-audio'' and adding ''-usbdevice host:XXXX:YYYY'' (where XXXX:YYYY are the vendor and device IDs given by the ''lsusb'' command)
* '''''Use the HDMI audio component of the GPU''''': if your card supports this and you have a monitor with speakers or an external DAC with HDMI.
* '''''Use a PCI/PCI Express sound card''''': using the same process used for the GPU, you can pass through a PCI/PCI Express sound card. To do this:
** use the command ''sudo lspci -nnv'' to detect your sound card and the modules being used by the card
** blacklist those modules by adding their names to the ''/etc/modprobe.d/blacklist.conf'' file, or set the modules to ignore the card.
** add ''-device vfio-pci,host=0000:XX:00.0'', replacing the address with your domain/bus address.
* '''''Change the hv_vendor flag''''': more info [https://forum.teksyndicate.com/t/linux-qemu-win10-fizzling-sound/98847/10 here].
=== vfio: error opening /dev/vfio/XX: No such file or directory ===
* '''''Check if the vfio-pci module is loaded''''': use ''lsmod'' to check.
* '''''Check if the vfio-pci module claimed ALL devices in the group''''': use the command ''lspci -nnk'' and check whether your device(s) show the line ''Kernel driver in use: vfio-pci'', meaning the module has successfully claimed the device(s).
** If a device does not have any ''Kernel driver in use'' line, check that the IDs in ''/etc/modprobe.d/vfio.conf'' are correct and reload the module with ''sudo rmmod vfio-pci'' and ''sudo modprobe vfio-pci''.
** If it does have a ''Kernel driver in use'' line but the module is not vfio-pci, blacklist that module to prevent it from claiming the device at startup, or check the section on how to pass devices to virtual machines without blacklisting modules in step 5.
=== vfio: error, group XX is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver ===
* Test the same solutions above.
=== vfio: Error: Failed to setup INTx fd: Device or resource busy ===
Run ''dmesg''; if it has the following line:
genirq: Flags mismatch irq XX. 00000000 (vfio-intx(0000:XX:XX.X)) vs. 00000080 (vfio-intx(0000:XX:XX.X))
it means at least 2 devices passed to the virtual machine share the same IRQ, resulting in this error.
You can obtain more information by taking the addresses mentioned in that line (the device addresses inside vfio-intx) and comparing them with the output of ''lspci'' to determine which devices are causing the conflict, and by running ''cat /proc/interrupts'' to see which IRQ the devices are using. After that, test the possible solutions:
* '''''Change the PCI/PCI Express slot of one or more devices on the motherboard''''' in order to change the devices' IRQs so that they no longer share the same IRQ.
* '''''Check possible BIOS options''''' and see whether your motherboard has the ability to assign or change the IRQ of some devices.
* '''''Use the ACS Override Patch in the VFIO kernel''''' to pass only one device function, if one of the devices is multi-function and you only need that function. Example: a Sound Blaster Audigy 2 ZS card is recognized in ''lspci'' as 3 devices: a ''Multimedia audio controller'', an ''Input device controller'' and a ''FireWire (IEEE 1394)'' device. Instead of passing all the devices in the vfio-pci parameters, using this patch, pass '''only''' the ''Multimedia audio controller'' (the function of the card that produces sound), leaving out the 2 devices that might be the cause of conflict with other PCI/PCI Express devices. Combine this solution with the other solutions mentioned before.
* '''''Detach the unused device via software'''''. Assuming that you don't need one of the devices that is causing the conflict, you can completely disable it using the command ''echo -n 1 > "/sys/devices/pci0000:00/0000:XX:XX.X/remove"'' (where XX:XX.X is your device address). More detailed info [http://ubuntuforums.org/showthread.php?t=2299835 here] and [https://www.redhat.com/archives/libvirt-users/2014-March/msg00093.html here].
=== Nvidia code 43 error ===
This is mostly caused by the fact that the NVIDIA drivers detect that they are being run in a virtual environment and disable the card (unless you are using a Quadro card), but it can be caused by other factors.
'''All virtualization software'''
* Make sure you have the GPU and its power connector properly connected. It has been mentioned in some forum posts that a disconnected power connector can cause this error.
'''Qemu'''
* make sure you have permissions to access the files in the ''/dev/vfio/'' folder, or run qemu as root
* use '' -cpu host,kvm=off,check'' parameter instead of ''-cpu host''
'''Virt-manager'''
NVIDIA drivers don't seem to like these Hyper-V optimisations and will probably fail if they're encountered, so set them to "off" or use the values below. More info [http://ubuntuforums.org/showthread.php?t=2266916 here]:
For ''hv_relaxed'', etc use these values:
<features>
<acpi/>
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='4096'/>
</hyperv>
</features>
For ''hv_time'' use these values:
<clock offset='localtime'>
<timer name='hypervclock' present='yes'/>
<timer name='hpet' present='no'/> <!-- Not sure about this one -->
</clock>
=== Nvidia GeForce Experience not working ===
If GeForce Experience complains about an unsupported CPU being present and some features, e.g. game optimization, don't work, passing the ignore_msrs=1 option to the KVM module will most likely solve the problem by ignoring accesses to unimplemented MSRs. Edit the file ''/etc/modprobe.d/kvm.conf'' and add:
options kvm ignore_msrs=1
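The modprobe file takes effect on the next boot; assuming your kernel exposes the parameter as writable, you can also test it immediately on the running system (this resets on reboot):
echo 1 | sudo tee /sys/module/kvm/parameters/ignore_msrs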
= Additional Resources/References =
The following resources were very helpful in writing this how-to, and I strongly recommend checking them out:
* https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
* http://vfio.blogspot.co.uk/
* https://tekwiki.beylix.co.uk/index.php/GTA_V_on_Linux_(Skylake_Build_%2B_Hardware_VM_Passthrough)
* https://wiki.archlinux.org/index.php/QEMU
* https://bbs.archlinux.org/viewtopic.php?id=162768
= Credits =
* User [https://forum.teksyndicate.com/users/Xenxier Xenxier] from [https://forum.teksyndicate.com/ Tek Syndicate Forums] - Creator of this tutorial
* User [https://forum.teksyndicate.com/users/Eden Eden] from [https://forum.teksyndicate.com/ Tek Syndicate Forums] - Cleanup
* User [https://forum.teksyndicate.com/users/nakamura nakamura] from [https://forum.teksyndicate.com/ Tek Syndicate Forums] - Cleanup, additional information
* All authors and contributors of the additional resources and reference above
* User [https://forum.teksyndicate.com/users/wendell wendell] from [https://forum.teksyndicate.com/ Tek Syndicate Forums] - indirectly introducing the concept of VGA passthrough to many users with his [https://www.youtube.com/watch?v=16dbAUrtMX4 introductory video on the TekLinux channel].