Basic VPP Setup for vhost-user interfaces and VMs

There are many examples of using VPP with tap interfaces/namespaces. After reviewing these, and the details at https://wiki.fd.io/view/VPP/Tutorial_Routing_and_Switching, we came up with the following setup for testing inter-VM connectivity using VPP's vhost-user interfaces.

First-time setup

Grab the VPP packages, following the FD.io directions at https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages:

$ sudo rm /etc/apt/sources.list.d/99fd.io.list
$ echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io.ubuntu.xenial.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
$ sudo apt-get update
$ sudo apt-get install vpp vpp-dpdk-dkms

At this point, the vpp service should be available on the system and will start by default on boot. The default startup configuration can be found at /etc/vpp/startup.conf
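
As a quick sanity check (a minimal sketch; the vpp service name and CLI commands below assume the stock FD.io packages), confirm the service is running and the VPP CLI responds:

$ systemctl status vpp        # service should be active (running)
$ sudo vppctl show version    # CLI should report the installed VPP version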

Set up VPP interfaces for an L2 bridge connecting two virtual machines

We will use VPP to create an L2 bridge between two VMs, connected using vhost-user interfaces. With this in mind, first create the vhost-user interfaces and bring them up:

$ sudo vppctl create vhost socket /tmp/sock1.sock server
VirtualEthernet0/0/0
$ sudo vppctl create vhost socket /tmp/sock2.sock server
VirtualEthernet0/0/1
$ sudo vppctl set interface state VirtualEthernet0/0/0 up
$ sudo vppctl set interface state VirtualEthernet0/0/1 up
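
To confirm that both interfaces exist and are administratively up, you can list them (a quick check; the interface names should match the output of the create commands above):

$ sudo vppctl show interface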

Next, connect the interfaces to the same bridge domain. Below we create bridge domain 1 and attach both vhost-user interfaces to it:

$ sudo vppctl set interface l2 bridge VirtualEthernet0/0/0 1
$ sudo vppctl set interface l2 bridge VirtualEthernet0/0/1 1

You can inspect the resulting bridge domain as follows:

$ sudo vppctl show bridge-domain 1 detail
  ID   Index   Learning   U-Forwrd   UU-Flood   Flooding   ARP-Term     BVI-Intf   
  1      1        on         on         on         on         off          N/A     

           Interface           Index  SHG  BVI  TxFlood        VLAN-Tag-Rewrite       
     VirtualEthernet0/0/0        1     0    -      *                 none             
     VirtualEthernet0/0/1        2     0    -      *                 none 

Boot virtual machines using the vhost-user network interfaces

Now that VPP is set up, we are ready to boot our VMs. For this example, pull down a Clear Linux KVM image from https://download.clearlinux.org/image/, as well as the BIOS image, OVMF.fd, and the sample startup script, start_qemu.sh.

Use start_qemu.sh to boot the KVM image, install the packages needed to test connectivity, and then shut down:

$ bash start_qemu.sh clear-14200-kvm.img
...
~ # swupd update
~ # swupd bundle-add web-server-basic network-basic
~ # shutdown now

Copy the clear-*-kvm.img to a second, uniquely named .img file (so that you can boot two VMs at the same time).
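
For example (a sketch; the copy names here are assumptions chosen to match the QEMU commands below):

$ cp clear-14200-kvm.img 1-clear-14200-kvm.img
$ cp clear-14200-kvm.img 2-clear-14200-kvm.img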

Since we are making use of vhost-user, the VM must be launched with preallocated memory, assigned to a NUMA node and backed by hugepages. Since hugepages are also used by VPP (which reserves only 1024 by default), it is necessary to allocate more (how much will vary depending on the VMs, and should be chosen based on your system's available RAM):

sudo sysctl -w vm.nr_hugepages=4096
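
You can verify the allocation took effect as shown below (note that a plain sysctl -w setting does not persist across reboots):

$ grep -i hugepages /proc/meminfo    # HugePages_Total should reflect the new value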

Launch the first VM:

qemu-system-x86_64 \
    -enable-kvm -m 1024 \
    -bios OVMF.fd \
    -smp 4 -cpu host \
    -vga none -nographic \
    -drive file="1-clear-14200-kvm.img",if=virtio,aio=threads \
    -chardev socket,id=char1,path=/tmp/sock1.sock \
    -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
    -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -debugcon file:debug.log -global isa-debugcon.iobase=0x402

Launch the second VM as above, updating only the image, the MAC address, and the vhost-user socket:

qemu-system-x86_64 \
    -enable-kvm -m 1024 \
    -bios OVMF.fd \
    -smp 4 -cpu host \
    -vga none -nographic \
    -drive file="2-clear-14200-kvm.img",if=virtio,aio=threads \
    -chardev socket,id=char1,path=/tmp/sock2.sock \
    -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
    -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1 \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -debugcon file:debug.log -global isa-debugcon.iobase=0x402
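
Once both VMs are up, you can confirm on the host that each QEMU instance connected to its vhost-user socket (a quick check; exact output varies by VPP version):

$ sudo vppctl show vhost-user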

In VM #1, set up the IP address as follows:

$ ip addr add dev enp0s2 192.168.0.1/24

In VM #2, set up the IP address as follows:

$ ip addr add dev enp0s2 192.168.0.2/24

You can now test basic connectivity by pinging VM #1 from VM #2:

# ping 192.168.0.1 -c1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.242 ms

--- 192.168.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms
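
Optionally, back on the host you can confirm that bridge domain 1 has learned both VM MAC addresses (a sketch; output format varies by VPP version):

$ sudo vppctl show l2fib verbose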