Taking my workflow to the next level - Graphics passthrough using VFIO


Introduction and motivation

Historically I have always used two computers at my desk at home: one for gaming and one for my coding and development work. The gaming machine is the more powerful of the two, with the latest iteration running Windows 10 on an AMD Ryzen 2600X, 16GB of RAM and an Nvidia GTX 1080Ti. My “work” computer runs macOS High Sierra with an Intel Core i5 3570K, 24GB of RAM and an Nvidia GTX 680. In order to switch between these, I need to physically get out of my seat, unplug the two display cables, as well as the USB extension cable that connects all my peripherals, from the currently-connected machine, and plug these cables into the machine I want to use. On top of this, my GTX 680 does not have two DisplayPort outputs, so I have to run one of my monitors from this card over HDMI, foregoing the high refresh rate that the monitor offers when using DisplayPort. This solution to a multiple-OS computing space is, in my eyes, inelegant, and once I discovered the possibility of PCIe passthrough with a KVM hypervisor, I immediately began exploring the option of moving my gaming and productivity OSes onto a single physical machine.

The two GPUs

Initial preparations and setup

I intended to virtualise both physical disks by simply passing through the device locations to KVM. I had done this before using VMware and VirtualBox and had a good experience. However, since the Windows OS was on a Samsung 950 Pro NVMe drive, I made a note to test the drive’s speed once the Windows VM was set up. The macOS drive is a Crucial 240GB SSD.
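
In libvirt terms, “passing through the device location” just means pointing a disk element at the raw block device rather than at an image file. A minimal sketch of the idea, assuming a domain already named win10 (the domain name, device path and target name here are illustrative):

```sh
# Sketch: attach a whole physical drive to an existing libvirt domain.
# "win10" is an example domain name; /dev/nvme0n1 is the drive being handed over.
cat > physical-disk.xml <<'EOF'
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/nvme0n1'/>
  <target dev='sda' bus='sata'/>
</disk>
EOF
virsh attach-device win10 physical-disk.xml --config
```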

The rest of the hardware in my machine is as follows:

OS Choice and installation

Naturally a flavour of Linux is required for QEMU/KVM, and the choices are quite extensive. I opted for SwagArch, as I enjoy the rolling nature of Arch Linux but didn’t want to go through the hassle of setting everything up from scratch. SwagArch has a sensible choice of WM and pre-installed applications, and that terminal is just gorgeous right off the bat:

Installation was trivial - I wrote the ISO to a USB drive and set the destination to my Corsair SSD. Installation took less than five minutes, and upon rebooting and setting the primary boot device to the SwagArch EFI target, I was ready to rock.
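
Writing the installer is the usual dd run (device names below are examples - confirm the USB stick’s device node with lsblk before writing anything):

```sh
# /dev/sdX is a placeholder for the USB stick -- check `lsblk` first!
sudo dd if=swagarch.iso of=/dev/sdX bs=4M status=progress oflag=sync
```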

Setting up the environment and PCIe passthrough

For most of the process, I followed the excellent resource “PCI passthrough via OVMF” on the Arch Linux Wiki. This guide took me step by step through making sure kvm_amd was configured on the host, blacklisting the 1080Ti from the host drivers, and setting up virt-manager.
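
Condensed, the host-side preparation from that guide looks roughly like the following (the device IDs shown are examples - use whatever lspci -nn reports for your own card):

```sh
# 1. Enable the IOMMU on the kernel command line (e.g. in /etc/default/grub):
#      amd_iommu=on iommu=pt
# 2. Find the GPU and its HDMI audio function and note their vendor:device IDs
lspci -nn | grep -i nvidia

# 3. Bind those IDs to vfio-pci at boot so the host drivers never claim them
echo "options vfio-pci ids=10de:1b06,10de:10ef" | sudo tee /etc/modprobe.d/vfio.conf

# 4. Load vfio early: add vfio_pci vfio vfio_iommu_type1 vfio_virqfd to
#    MODULES in /etc/mkinitcpio.conf, then rebuild the initramfs
sudo mkinitcpio -P

# 5. After rebooting, confirm the card is bound to vfio-pci
lspci -nnk -d 10de:1b06
```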

While my conception of this system had initially envisaged macOS and Windows potentially running in tandem with a dedicated GPU each, my IOMMU grouping would not allow this. The second and third PCIe slots on my motherboard were tied into the same group as the SATA controller, the wifi/ethernet controllers and the USB controllers, so isolating them would be nearly impossible (to my knowledge). So I settled on the goal of running one virtual OS at a time, with each OS assigned 10 CPUs, 14GB of RAM and use of the 1080Ti.

The IOMMU groupings in question
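
For reference, the groupings shown above come from walking /sys/kernel/iommu_groups - essentially the same small script the Arch wiki provides:

```sh
#!/bin/bash
# Print every IOMMU group and the devices it contains
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${group##*/}:"
    for device in "$group"/devices/*; do
        echo -e "\t$(lspci -nns "${device##*/}")"
    done
done
```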

Once I rebooted and confirmed the 1080Ti no longer functioned within SwagArch, I set up the Windows VM. I configured it to use the physical target of /dev/nvme0n1 as the boot drive, and gave it 4GB of RAM and 4 physical CPUs to begin with. I also added the 1080Ti as a PCIe device. Windows booted; however, upon opening Device Manager, I saw that the 1080Ti had the little yellow warning icon next to it, citing “Error 43” - a well-known error that has caused the VFIO community no shortage of frustration.
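
For context, adding the 1080Ti in virt-manager produces a PCI hostdev entry like the sketch below (the address values are examples - they must match the card’s location in lspci output, and the card’s HDMI audio function gets a second, identical entry):

```sh
# Sketch of the PCI hostdev entry for the GPU ("win10" is an example domain name)
cat > gpu.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
virsh attach-device win10 gpu.xml --config
```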

I’ll quickly summarise the troubleshooting steps I took: I used virsh to edit the XML configuration file of the VM and add the ID spoofing as recommended here, but it didn’t work. I tried (to no avail) getting a clean GPU ROM and loading this as described here. Eventually, I looked in the BIOS and switched the “Initial PCIe Display Output” setting to the second PCIe slot. This fixed the issue for me - it turns out the 1080Ti’s EFI must not be initialised by the Aorus WiFi Pro’s POST before the card is used in an OVMF environment.

The BIOS setting that I needed to change
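
For completeness, the ID spoofing mentioned above is the usual Error 43 workaround of hiding the hypervisor from the Nvidia driver - roughly this, added to the domain XML (the domain name is an example and the vendor_id value is an arbitrary 12-character string):

```sh
# Open the domain XML for editing ("win10" is an example domain name)
virsh edit win10
# ...then, inside <features>, add something like:
#   <hyperv>
#     <vendor_id state='on' value='0123456789ab'/>
#   </hyperv>
#   <kvm>
#     <hidden state='on'/>
#   </kvm>
```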

Configuring Windows

Windows now booted with the 1080Ti working perfectly - I could use the DP outputs for both my monitors, with G-Sync enabled. I updated the GPU drivers and verified that Nvidia Control Panel was showing the same settings as on bare metal. Additionally, I passed through the USB 3.0 controller for the motherboard’s rear ports (in its own IOMMU group) as a PCIe device to the VM, which worked without issue. Also requiring setup was my existing Storage Spaces volume, which is used as a cache drive for my games. Luckily, despite my fear that the SATA emulation might disrupt the volume initialisation, passing through both drives by setting the target in virt-manager to the physical location worked fine, and Storage Spaces mounted the volume cleanly when Windows booted. Everything else, so far, was working as expected. I made sure to use my RJ45 port in passthrough mode (macvtap) so that I could access the local network.
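
The macvtap attachment is a small piece of domain XML as well - a sketch, with the host NIC name as a placeholder (one caveat of macvtap in bridge mode is that the guest cannot reach the host itself):

```sh
# "enp5s0" is a placeholder for the host's physical NIC; "win10" is an example domain name
cat > macvtap.xml <<'EOF'
<interface type='direct'>
  <source dev='enp5s0' mode='bridge'/>
  <model type='e1000e'/>
</interface>
EOF
virsh attach-device win10 macvtap.xml --config
```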

At this point, I was definitely impressed at how close-to-native the performance in the virtual machine felt. Moving windows about, opening programs, typing on the keyboard - the experience was snappy and responsive. However, there were definitely moments where the virtualisation overhead was apparent. When loading multiple programs at once, the VM would sometimes freeze completely for about half a second. I played a few games of CS:GO, and there were moments where the frame rate dropped to barely over 100 for no explicable reason (on bare metal, I usually get around 200-250 in this game), and overall the responsiveness didn’t feel as high as before.

Configuring macOS

At first, I thought I would try to install macOS High Sierra[1] and restore my existing Hackintosh via a network Time Machine restore. I cloned macOS-Simple-KVM into my home directory and ran ./jumpstart.sh --high-sierra, and then ./make.sh --add to import the configuration with the base image into virt-manager. This gave me a bootable macOS installation medium. However, I could not get the networking to work correctly in the installer to restore over the network. At this juncture I decided to simply remove the SSD from my old Hackintosh machine, put it in a 2.5” enclosure, plug it into the VM host via USB 3.0, and attempt to boot from it (drive passed through as usual in virt-manager). The Clover bootloader, amazing as it is, recognised the bootloader on the SSD and I could boot into my physical Hackintosh drive.
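
The macOS-Simple-KVM steps themselves are only a handful of commands (repository URL as published by its author; defining the domain system-wide may need root):

```sh
git clone https://github.com/foxlet/macOS-Simple-KVM.git
cd macOS-Simple-KVM
./jumpstart.sh --high-sierra   # download the High Sierra recovery/base image
sudo ./make.sh --add           # generate the domain and import it into virt-manager
```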

Now came the interesting part - passing through the 1080Ti. I added the PCIe device in virt-manager and started the VM. macOS booted, but it didn’t recognise the GPU. I mounted the EFI partition of the boot drive and copied Lilu.kext and WhateverGreen.kext to the kexts directory (these enable Nvidia cards), then used Clover Configurator to open config.plist and make sure there were no VGA injections or other settings that might interfere with the GPU. I rebooted the VM, and the 1080Ti was working correctly. I double-checked that the Nvidia Web Drivers were enabled and tested plugging both monitors in.
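
A rough sketch of that kext step from inside the macOS guest (the EFI partition identifier and kext locations are examples - check diskutil list and your Clover folder layout first):

```sh
# Mount the EFI partition of the boot drive (identifier is an example)
sudo diskutil mount /dev/disk0s1

# Copy the kexts into Clover's injection directory ("Other" applies to all macOS versions)
sudo cp -R Lilu.kext WhateverGreen.kext /Volumes/EFI/EFI/CLOVER/kexts/Other/
```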

All good so far. The only thing missing was USB. However, the USB 3.0 controller that sits in its own group is an AMD-family chip that will not enumerate when macOS loads, so passing it through was not an option. Instead, I passed through all the USB devices I needed (keyboard, mouse, headset) using USB passthrough in virt-manager. This worked fine for every device except my headset, whose audio cuts out and, when it does work, is very noisy.
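
Per-device USB passthrough boils down to a hostdev entry matched on vendor/product ID - a sketch (the IDs and the “macos” domain name are examples; lsusb on the host shows the real IDs):

```sh
cat > usb-keyboard.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <!-- example IDs for a Logitech receiver; substitute values from lsusb -->
    <vendor id='0x046d'/>
    <product id='0xc52b'/>
  </source>
</hostdev>
EOF
virsh attach-device macos usb-keyboard.xml --config
```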

Apart from that, everything worked very well. As with Windows, networking uses the RJ45 port in passthrough mode. Time Machine backups needed to be reconfigured to work. Performance was very responsive with 10 cores in use by the VM, and I noticed none of the “hiccups” that the Windows VM was having.

Issues and Further Improvements

Both operating systems were now running (independently) with very usable performance, at least by my standards. For both VMs, I allocated 10 CPUs and 14GB of RAM. In Windows, I set the power plan to High Performance, and played several games for an extended period of time. I noticed that the stuttering and freezes became less frequent, eventually disappearing altogether. I have yet to do in-game benchmarks, but from a general gaming perspective, my experience felt very close to bare metal, with games such as CS:GO perhaps being the exception. The 2GB RAM loss is not noticeable, but I think a RAM upgrade would benefit me here in the future.

I ran a benchmark on my Samsung NVMe drive in Windows and obtained the following result:

These numbers were a definite downgrade from the numbers I was getting previously:

My guess is that this downgrade is the result of KVM SATA emulation inhibiting some performance-enhancing features available with direct PCI access. I will definitely need to address this, as this project aims to close the performance gap as completely as possible.
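
Dumping the domain XML is enough to confirm the drive really is going through an emulated SATA target rather than the NVMe controller itself; switching the bus to virtio (with the guest driver installed) or passing the NVMe controller through as a PCIe device are the obvious candidates to try, though I haven’t done either yet:

```sh
# "win10" is an example domain name; this shows how the guest currently sees the drive
virsh dumpxml win10 | grep -B 2 -A 4 "nvme0n1"
```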

For macOS, the performance felt on par with the previous physical machine. The 1080Ti is a definite upgrade in terms of graphical horsepower, despite graphics not being my typical workload. The one thing that did significantly improve the fluidity of my user experience was moving the SSD from the USB 3.0 enclosure to SATA, but this came at the cost of having to unplug my optical drive from the motherboard. Yes, the Gigabyte Aorus B450 only has 4 SATA ports[2].

Final thoughts and future work

After the many hours that this experiment took to troubleshoot and get into its current state, I needed to make a decision as to whether I would keep this configuration as my daily driver. That choice, wholeheartedly, is to keep it. Performance is definitely at a level that I am happy with, and one of my biggest frustrations is resolved: I no longer have to get out of my chair to switch between computers. Now, with my second monitor connected via HDMI to the GTX 680, all I have to do is change that monitor’s input when turning VMs on or off, rather than unplugging and replugging several cables.

However, this does not mean that the project is complete. I intend to further improve the performance of at least the Windows VM and also upgrade the physical hardware of the host machine. In list form:

Overall, this project has been an excellent challenge and learning experience for me, and I feel that I have definitely pushed the boundaries of my knowledge of computing - as well as made my life easier. I am very impressed by the state of computing today that makes projects like this a reality, as well as by the engineers and computer scientists who make it possible. Additionally, I would like to note my appreciation for the people in the VFIO community who have made excellent resources available online, without which this project would definitely not have been a success.


  1. macOS 10.13 (High Sierra) is the last version of macOS that supports Nvidia cards with official drivers.
  2. There are two other SATA ports (motherboard manual download here), but these seem to be disabled (at least nothing I plug into them is picked up by the motherboard).
