@vancluever - Created March 9, 2023 20:42

KVM - vhost-net vs Userspace Host CPU Performance Comparison

I've been kicking the tires on a new KVM server I'm using for a home lab, and one of the things I've been investigating recently is network optimization in the guest.

Being especially anal about it, during my investigation of whether I could reasonably fix an approximately 500 usec latency difference between pinging the guest and pinging the host, I started looking into making sure vhost-net was enabled on my rudimentary and extremely minimal KVM host running Alpine Linux. After ultimately getting it set up and finding the same latency, I wanted to see where the real value lay in having this setup on what will ultimately still be a not-so-busy, low-end machine. I found some interesting results!

TL;DR: at the very least, vhost-net will save you a decent amount of CPU. Read on!

The test setup

  • Client: My M1 MacBook Air (2020)
  • KVM Host: A mini PC w/Intel Celeron N5105, 16 GB DDR4-2666, Realtek RTL8168, running Alpine 3.17
  • Guest: Alpine 3.17, 4 cores, 1 GB RAM

The test

The test is simple: we set up iperf3 as a server on the guest, and watch vmstat on the host while we run the test from the client. We then repeat the test after loading the vhost_net kernel module (which isn't loaded by default when you install KVM on Alpine).
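
For reference, loading the module on the host looks something like this (Alpine reads /etc/modules at boot, so the echo persists it; the last command is just a sanity check that the module registered its character device):

modprobe vhost_net
echo vhost_net >> /etc/modules   # persist across reboots
ls -l /dev/vhost-net             # the module exposes this device node when loaded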

We don't do anything fancy with the iperf3 settings - just run iperf3 -s on the guest, and then iperf3 -c GUESTADDR on the client.
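
Concretely, that's just:

iperf3 -s              # on the guest
iperf3 -c GUESTADDR    # on the client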

Confirming that vhost_net is in use

There are a couple of things you can do to check whether vhost_net is in use.

  • lsmod | grep vhost_net. This one is pretty easy: if the kernel module isn't loaded, it's not in use.
  • Ensure that libvirt is actually launching QEMU with vhost networking by doing a ps auxfwww | less and searching your running guest's command line for a -netdev tap,...vhost=on... switch. The key here is looking for vhost=on, which should be set automatically if the kernel module is loaded (see the combined check after this list).
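
Together, the two checks look something like this (the [q]emu pattern keeps grep from matching its own process in the ps output):

lsmod | grep vhost_net
ps auxfwww | grep '[q]emu' | grep -o 'vhost=on'   # prints vhost=on per match; no output means it's off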

Scripting the test

To ensure that our vmstat figures for the test were synchronized with the iperf3 execution, I just made sure I ran both commands at once:

iperf3 -c test-guest & ssh test-user@test-host vmstat 10 2 > vmstat.$(date +%s)

This launches iperf3 in the background (you still get console output), then SSHes to the host server and runs vmstat. As the default iperf3 test time is 10 seconds, you want the vmstat interval set to the same. Note that you need to set the sample count to 2 - if you set it to 1, vmstat prints a single report and exits, and that first report is an average since boot rather than a sample of the test window. The since-boot first line is also handy, though: it shows that the host was idle before the test, while the second line covers the 10-second test itself.
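
If you want a longer sample window, the same pattern works by matching iperf3's -t flag (test duration in seconds) to the vmstat interval, e.g.:

iperf3 -t 30 -c test-guest & ssh test-user@test-host vmstat 30 2 > vmstat.$(date +%s)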

The results

The results are pretty neat!

Without vhost_net

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 4  0      0 15496868   8060 167704    0    0    13     0   99   49  0  0 99  0  0
 0  0      0 15494552   8068 167704    0    0     0     1 76615 7572 27 56 17  0  0

With vhost_net

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 4  0      0 15544444   8400 168876    0    0    12     0  161   64  1  1 99  0  0
 0  0      0 15523876   8408 168876    0    0     0     1 87071 147592 23 40 37  0  0

Summing user and system CPU%, we get about a 20 percentage point CPU savings: 27 + 56 = 83% busy without vhost_net, versus 23 + 40 = 63% with it. You can read the same thing straight off the idle column (37% using vhost_net versus 17% without it).

The other thing I noticed at first was the large number of context switches in the vhost_net test. Given that it was not accompanied by a massive performance decrease, my guess here is that this is a side-effect of moving the operation into kernel space on the host: the vhost worker runs as its own kernel thread, so packet handling now switches between it and the vCPU threads.
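
If you're curious where that kernel-side work lands, the vhost worker shows up in ps as a kernel thread - as I understand it, named after the owning QEMU process's PID, though the exact naming may vary by kernel version:

ps -ef | grep '\[vhost'   # e.g. [vhost-1234], where 1234 is the QEMU process PID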

Conclusion

Even if you're not especially worried about the network performance improvements that vhost-net is supposed to bring to KVM, you might, at the very least, be interested in the CPU savings, especially when CPU is at a premium, such as on a commodity lower-end system. This might largely be taken for granted - it really is a solved problem by now, AFAIK - but it's still good to be cognizant of.

Oddities

There was one thing I found odd during my iperf3 tests: it almost seemed like I could reproduce a ~5% decrease in network throughput when running the guest with vhost-net on versus off. I'm not sure whether this can be attributed to something else on the host that needs looking at. It should definitely not be taken as a knock against vhost-net's performance, given that plenty of data exists on its benefits - data much more scientific than this article, as well.
