Bridged Networking on Wireless Interface with KVM and more...

So I needed to upgrade my home "web hosting" server from a Raspberry Pi 3b to something more flexible, powerful enough to even simulate a Raspberry Pi 3b. The new server hardware is an Intel NUC i7 with 16 GB of RAM and a 250 GB NVMe SSD. 😁

In order to accomplish this task I had to find a way to bridge the wireless interface, which is the fastest one in my current home network setup.

I've also tried to mix in the functionality of another Raspberry Pi (a 3b+ this time) which acts as a WLAN to LAN bridge. More details on this setup. But this turned out to be a bad idea and I was not able to make it work alongside the virtual network bridge created by libvirt or created manually... (I will explain why later)

The main difficulty was to use the DMZ IP address given by the router and route the traffic to the guest VMs.

Server / Desktop

The process will be explained for both server and desktop installations.

Virtualization Host configuration

The following will describe the required steps for configuring your virtual host server.

Libvirt / Virt-Manager / Qemu

For desktop installation, this is already covered here.

Ubuntu Server 16.04 / 18.04

The server installation requires fewer packages than the desktop one.

sudo apt install virt-manager libvirt-bin qemu

If the displayed dependency list is too big, cancel with Ctrl+C and retry without the virt-manager package.

Reboot to complete the installation.
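
After the reboot, a quick sanity check that the virtualization stack is up (kvm-ok comes from the cpu-checker package and may need to be installed separately; on 16.04 the service is named libvirt-bin instead of libvirtd):

# Check that the libvirt daemon is running
sudo systemctl status libvirtd

# Check that the CPU supports KVM acceleration
kvm-ok

# List defined guests (should be empty on a fresh install)
virsh list --all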

Network configuration using netplan

Starting from 18.04, the network stack is managed by netplan.io and the network configuration has to be written in YAML, for example in /etc/netplan/01-network.yaml.

The network configuration is read from the /etc/netplan directory, where netplan looks for *.yaml files.

Desktop

On desktop installations, netplan writes config files for NetworkManager, but for this project it was not needed in the end: I've just disabled the LAN interface and configured the wireless interface using NetworkManager.


Server

On server installations, you can choose between the systemd-networkd and NetworkManager backends. Both should work, but the NetworkManager renderer might be more complicated to manage on servers.

Sample configuration
# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    renderer: networkd
    #renderer: NetworkManager
    ethernets:
        eth0:
            optional: true
            dhcp4: false
            dhcp6: false
    wifis:
        wlan0:
            optional: true
            access-points:
                "YOUR-SSID-NAME":
                    password: "YOUR-PASSWORD"
            dhcp4: true
            #dhcp-identifier: mac # uncomment this line if you're using a Microsoft DHCP Server

This will leave the LAN interface UP without an IP address assigned (if not, remove the whole ethernets block and reboot) and the wireless interface UP with an IP address assigned by your local DHCP server.

Apply configuration

Once you have defined your network configuration in the YAML file, there are a few commands to run.

To test your new configuration before applying it and check that there are no errors:

sudo netplan --debug try

If there are no errors, you can then generate the related configuration files:

sudo netplan --debug generate

Once done, you can apply your new network configuration:

sudo netplan --debug apply
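
You can then check that the wireless interface got an IP address and that the default route goes through it (a quick sanity check, assuming the interface name from the sample above; on the actual host it may be something like wlp58s0):

# Show addresses and routes
ip addr show wlan0
ip route

# When using the networkd renderer
networkctl status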

Bridges and Virtual Networks

Okay, this was the most difficult part, but not so difficult once you finally understand all the related concepts... 😅

The following is designed to work with NAT'ed virtual switches, but it will also work for routed ones.

Create a new virtual network

You can do it from either virt-manager or the virsh command; virt-manager is better suited to desktop installations.

Desktop


Server

Here is the dumped XML configuration used.

For the NAT version:

virsh net-dumpxml test-nated
<network connections='1'>
  <name>test-nated</name>
  <uuid>79cbec68-135e-4cb2-8272-3ffbf1477740</uuid>
  <forward dev='wlp58s0' mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
    <interface dev='wlp58s0'/>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:d3:87:9e'/>
  <ip address='192.168.REDACTED.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.REDACTED.REDACTED' end='192.168.REDACTED.REDACTED'/>
    </dhcp>
  </ip>
</network>

For the Routed version:

virsh net-dumpxml test-routed
<network>
  <name>test-routed</name>
  <uuid>ccc336d3-7637-4d85-9281-c185503f4798</uuid>
  <forward dev='wlp58s0' mode='route'>
    <interface dev='wlp58s0'/>
  </forward>
  <bridge name='virbr2' stp='on' delay='0'/>
  <mac address='52:54:00:91:af:bd'/>
  <ip address='192.168.REDACTED.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.REDACTED.REDACTED' end='192.168.REDACTED.REDACTED'/>
    </dhcp>
  </ip>
</network>
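
If you work on a headless server, you can also create such a network from an XML file instead of only dumping an existing one; a minimal sketch, assuming the NAT'ed definition above is saved as test-nated.xml:

# Define, start and autostart the virtual network
virsh net-define test-nated.xml
virsh net-start test-nated
virsh net-autostart test-nated

# Check that it is active
virsh net-list --all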

Kernel IPv4 forwarding

This is necessary in order to get all the IPv4 traffic forwarded to all interfaces.

You can also enable IPv6 traffic forwarding by replacing ipv4 with ipv6 below.

Check

cat /proc/sys/net/ipv4/ip_forward

'0' means disabled and '1' means enabled.

Enable / Disable (temporary)

sudo su -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

'0' to disable and '1' to enable.

Enable / Disable (permanently)

sudo sysctl -w net.ipv4.ip_forward=1

'0' to disable and '1' to enable. Note that sysctl -w only changes the running kernel; to keep the setting across reboots, persist it in a sysctl configuration file as shown below.
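
A minimal sketch of persisting the setting with a sysctl drop-in file (the file name is arbitrary):

# Persist IPv4 forwarding across reboots
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-ip-forward.conf

# Reload all sysctl configuration files
sudo sysctl --system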

This should be already applied by libvirt but if not, you know how now. 😄

KVM / Qemu Hook for IPTables

libvirt uses iptables to set up the PRE/POST routing tables; the hook below is used to set up the required rules for each guest VM pointing to the DMZ IP address.

#!/bin/bash

# IMPORTANT: Change the "VM NAME" string to match your actual VM Name.
# In order to create rules to other VMs, just duplicate the below block and configure
# it accordingly.
if [ "${1}" = "low-srv-vm" ]; then

   # Update the following variables to fit your setup
   BRIDGE_IFACE=virbr1
   HOST_IP=192.168.REDACTED.REDACTED
   GUEST_IP=192.168.REDACTED.REDACTED
   GUEST_PORT=
   HOST_PORT=

   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
	/sbin/iptables -D FORWARD -o $BRIDGE_IFACE -d  $GUEST_IP -j ACCEPT
	#/sbin/iptables -t nat -D PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
        /sbin/iptables -t nat -D PREROUTING -d $HOST_IP -j DNAT --to-destination $GUEST_IP
        /sbin/iptables -t nat -D POSTROUTING -s $GUEST_IP -j SNAT --to-source $HOST_IP
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
	/sbin/iptables -I FORWARD -o $BRIDGE_IFACE -d $GUEST_IP -j ACCEPT
	#/sbin/iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
        /sbin/iptables -t nat -A PREROUTING -d $HOST_IP -j DNAT --to-destination $GUEST_IP
        /sbin/iptables -t nat -A POSTROUTING -s $GUEST_IP -j SNAT --to-source $HOST_IP
   fi
fi

if [ "${1}" = "high-srv-vm" ]; then

   # Update the following variables to fit your setup
   BRIDGE_IFACE=virbr1
   HOST_IP=192.168.REDACTED.REDACTED
   GUEST_IP=192.168.REDACTED.REDACTED
   GUEST_PORT=
   HOST_PORT=

   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
        /sbin/iptables -D FORWARD -o $BRIDGE_IFACE -d  $GUEST_IP -j ACCEPT
        #/sbin/iptables -t nat -D PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
        /sbin/iptables -t nat -D PREROUTING -d $HOST_IP -j DNAT --to-destination $GUEST_IP
        /sbin/iptables -t nat -D POSTROUTING -s $GUEST_IP -j SNAT --to-source $HOST_IP
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
        /sbin/iptables -I FORWARD -o $BRIDGE_IFACE -d $GUEST_IP -j ACCEPT
        #/sbin/iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
        /sbin/iptables -t nat -A PREROUTING -d $HOST_IP -j DNAT --to-destination $GUEST_IP
        /sbin/iptables -t nat -A POSTROUTING -s $GUEST_IP -j SNAT --to-source $HOST_IP
   fi
fi

This is a modified version of the one found here: https://wiki.libvirt.org/page/Networking#Forwarding_Incoming_Connections

Write this script, adapted to your setup, to the /etc/libvirt/hooks/qemu file.

If you want to NAT using ports instead of the IP address, just define the GUEST_PORT and HOST_PORT variables, uncomment the second line in each if block, and comment out the last two lines in each if block.

sudo nano /etc/libvirt/hooks/qemu
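
The hook script must be executable, otherwise libvirt will not run it:

# Make the hook executable
sudo chmod -v +x /etc/libvirt/hooks/qemu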

Then shut down all running guest VMs and restart the libvirtd service to apply the newly defined hook.

sudo systemctl restart libvirtd

If you are using virt-manager you will have to reconnect to the qemu server; just double-click on it. (see pictures)


Now restart the guest VM linked to the DMZ address.

I was not able to make this work with more than one guest VM linked to the DMZ IP address... I may try to add a new intermediary VM that will act as a load-balancer and decide where to push traffic, so I could keep both guest VMs online and linked to the DMZ IP address! 🤘

Host virtual server network configuration

To sum up, you should have something like the following as your network configuration.

Show network links:

ip -s -s -c l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes  packets  errors  dropped overrun mcast   
    8420207    138864   0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    8420207    138864   0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       0       
2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether f4:4d:30:6d:9b:2f brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       1       
3: wlp58s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT group default qlen 1000
    link/ether f8:63:3f:3a:eb:1b brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    2626462996 1941220  0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    26234421   210183   0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       2       
4: virbr2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:91:af:bd brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       1       
5: virbr2-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr2 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:91:af:bd brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       1       
6: virbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:d3:87:9e brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    131446567  439317   0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    2418206543 277695   0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       4       
7: virbr1-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr1 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:d3:87:9e brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       1       
9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr1 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:d9:e1:a7 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    137585811  439269   0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    2418728374 287718   0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       0 

Show network addresses:

ip -s -s -c a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped overrun mcast   
    8452947    139400   0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    8452947    139400   0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       0       
2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether f4:4d:30:6d:9b:2f brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       1       
3: wlp58s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:63:3f:3a:eb:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.REDACTED.REDACTED/24 brd 192.168.REDACTED.255 scope global dynamic noprefixroute wlp58s0
       valid_lft 1476sec preferred_lft 1476sec
    inet6 fe80::fa63:3fff:fe3a:eb1b/64 scope link 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped overrun mcast   
    2626488340 1941523  0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    26243920   210243   0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       2       
4: virbr2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:91:af:bd brd ff:ff:ff:ff:ff:ff
    inet 192.168.REDACTED.1/24 brd 192.168.REDACTED.255 scope global virbr2
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       1       
5: virbr2-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr2 state DOWN group default qlen 1000
    link/ether 52:54:00:91:af:bd brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       1       
6: virbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:d3:87:9e brd ff:ff:ff:ff:ff:ff
    inet 192.168.REDACTED.1/24 brd 192.168.REDACTED.255 scope global virbr1
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped overrun mcast   
    131462225  439370   0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    2418210827 277752   0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       4       
7: virbr1-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr1 state DOWN group default qlen 1000
    link/ether 52:54:00:d3:87:9e brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       1       
9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr1 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:d9:e1:a7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fed9:e1a7/64 scope link 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped overrun mcast   
    137602211  439322   0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    2418734374 287808   0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       0 

Show network routes:

ip -s -s -c r
default via 192.168.REDACTED.1 dev wlp58s0 proto dhcp metric 600 
169.254.0.0/16 dev virbr2 scope link metric 1000 linkdown 
192.168.REDACTED.0/24 dev virbr2 proto kernel scope link src 192.168.REDACTED.1 linkdown 
192.168.REDACTED.0/24 dev virbr1 proto kernel scope link src 192.168.REDACTED.1 
192.168.REDACTED.0/24 dev wlp58s0 proto kernel scope link src 192.168.REDACTED.REDACTED metric 600 

Show network configuration:

cat /etc/netplan/01-network.yaml

It can also be named 50-cloud-init.yaml on Ubuntu Server 18.04.

# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    renderer: networkd
    #renderer: NetworkManager
    #ethernets:
    #    eth0:
    #        optional: true
    #        dhcp4: false
    #        dhcp6: false
    wifis:
        wlan0:
            optional: true
            access-points:
                "YOUR-SSID-NAME":
                    password: "YOUR-PASSWORD"
            dhcp4: true
            #dhcp-identifier: mac # uncomment this line if you're using a Microsoft DHCP Server

Guest VM's network configuration

To sum up, you should have something like the following as your network configuration.

Show network links:

ip -s -s -c l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes  packets  errors  dropped overrun mcast   
    798805     2363     0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    798805     2363     0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       0       
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:d9:e1:a7 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    2418806941 288925   0       3674    0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    137819192  440031   0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       2       

Show network addresses:

ip -s -s -c a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped overrun mcast   
    798805     2363     0       0       0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    798805     2363     0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       0       
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:d9:e1:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.REDACTED.REDACTED/24 brd 192.168.REDACTED.255 scope global dynamic ens3
       valid_lft 2585sec preferred_lft 2585sec
    inet6 fe80::5054:ff:fed9:e1a7/64 scope link 
       valid_lft forever preferred_lft forever
    RX: bytes  packets  errors  dropped overrun mcast   
    2418807871 288937   0       3675    0       0       
    RX errors: length   crc     frame   fifo    missed
               0        0       0       0       0       
    TX: bytes  packets  errors  dropped carrier collsns 
    137821330  440038   0       0       0       0       
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       2       

Show network routes:

ip -s -s -c r
default via 192.168.REDACTED.1 dev ens3 proto dhcp src 192.168.REDACTED.REDACTED metric 100 
192.168.REDACTED.0/24 dev ens3 proto kernel scope link src 192.168.REDACTED.REDACTED 
192.168.REDACTED.1 dev ens3 proto dhcp scope link src 192.168.REDACTED.REDACTED metric 100

Show network configuration:

cat /etc/netplan/01-netcfg.yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: yes

Check the whole setup

Now that all this work is done, we need to make sure that everything survives a reboot and stays functional, so we will reboot now.

sudo reboot

or:

systemctl reboot

Then... Pray! 🙏

Guest VM testing

To verify that everything was working correctly, I just tried to reach my personal DNS domain (well, not really mine to be honest, I'm using https://freedns.afraid.org/).

To reach the server: http://jiab77.dyn.ch

Speedtest

I wanted to know my new hosting speed compared to what I could get with my previous Raspberry Pi setup (server + bridge).

The custom speedtest might be deleted from the server later.

Bonus

Some bonuses, because I think they might interest you 😄

Shared storage space between VM's

In order to run different performance tests, I don't really want to have to copy all data files between all my test VMs, so I've decided to use a shared folder with 9p.

For more details, see both pages: http://www.linux-kvm.org/page/9p_virtio and https://wiki.qemu.org/Documentation/9psetup.

I've planned to use the passthrough mode to have the best performance.

I thought it would not be difficult to put in place... Guess what?! I was wrong! 😅

Load required modules in the kernel

There are two methods to load the required modules into the kernel. I've used both of them, but I think you can use only one.

In my case I've decided the following:

  • Put them directly in the host kernel image
  • Load them dynamically on the guest VM

Check if not already loaded

lsmod | grep 9p

If the output is empty on both host and guest, then you have to load the required modules.

1. Load from kernel image

sudo -s
cat >>/etc/initramfs-tools/modules <<EOF
9p
9pnet
9pnet_virtio
EOF
sudo update-initramfs -uk all

Then reboot to load the new kernel image.

2. Load dynamically

sudo -s
cat >>/etc/modules <<EOF
9p
9pnet
9pnet_virtio
EOF

Then reboot to load the new kernel modules.
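
Alternatively, if you don't want to reboot, the modules can usually be loaded immediately (assuming they are available for the running kernel):

# Load the 9p modules right away
sudo modprobe -a 9p 9pnet 9pnet_virtio

# Verify
lsmod | grep 9p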

If using only one method does not work for you, try using both, as in my setup.

Define file-system permissions

In order to give the libvirt-qemu account access on your system, you will have to make some minor changes to it. I chose the method that grants the minimal required permissions and requires the fewest system changes. Other methods are described in the reference links.

Shared folder in user home folder

This has the smallest impact in terms of the system changes that have to be made.

# Create your shared folder
cd ~/
mkdir -v shared

# Assign file-system permissions
chmod -v 777 shared

Start with the maximal permissions, then decrease them according to your needs. The goal is to avoid all possible issues during the first mount. Normally 755 should be enough.

Shared folder in media folder

This has a bigger impact in terms of the system changes that have to be made, because you will have to grant access to all content located in /media/user. As I need to use my second hard drive mounted in /media/user/disk, I've used this method.

# Create your shared folder
cd /media/user/disk
sudo mkdir -v shared

# Grant access to libvirt-qemu account in ACL's
sudo setfacl -R -m u:libvirt-qemu:rwx /media/user

# Assign file-system permissions
sudo chmod -v 755 /media/user/disk/shared

I had to use sudo in my case because everything mounted in /media/user/ is mounted as root, so my normal user account can't make any changes.
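
You can verify that the ACL has been applied as expected (a quick check, assuming the same paths as above; libvirt-qemu should appear with rwx):

getfacl /media/user
getfacl /media/user/disk/shared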

Attach the shared folder to the guest VM

Now you should have everything ready, so you should be able to map your shared folder and run your guest VM without any trouble. (not like me 😅)


The Target Path is not really a path; it is in fact a tag that will be used on the guest side to identify the shared folder.

Choose the Access Mode that corresponds to your needs. In my case I've used the mapped mode, so host and guest must use the same user to access or make changes to the shared folder.
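
If you prefer not to use virt-manager, a sketch of the equivalent libvirt domain XML, added inside the <devices> section with virsh edit <vm-name> (the source directory and target tag below are placeholders matching this example setup):

<filesystem type='mount' accessmode='mapped'>
  <source dir='/media/user/disk/shared'/>
  <target dir='shared'/>
</filesystem>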

Access Modes

This defines how file-system permissions will be written on the shared folder from the guest side to the host side. According to the documentation, this works as follows:

The filesystem block has an optional attribute accessmode which specifies the security mode for accessing the source (since 0.8.5). Currently this only works with type='mount' for the QEMU/KVM driver. The possible values are:

  • passthrough: The source is accessed with the permissions of the user inside the guest. This is the default accessmode if one is not specified. More info. Beware that changes to permissions/ownership will affect all guests using that filesystem. This mode is generally quite fast.
  • mapped: The source is accessed with the permissions of the hypervisor (QEMU process). More info. This means you need to make sure that files on the hypervisor are accessible to the QEMU process (username libvirt-qemu on my setup). The advantage is that file attributes and permissions are "mapped" for the guest so that they are independent of changes elsewhere (as long as the files stay accessible). If your host system supports ACLs, this mode will also allow proper ACL support in the guest. This mode is generally a bit slower than passthrough.
  • squash: Similar to 'passthrough', the exception is that failure of privileged operations like 'chown' are ignored. This makes a passthrough-like mode usable for people who run the hypervisor as non-root. More info

References:

Map the shared folder in the guest VM

Once the VM is started, you will have to mount your newly attached folder. Again, there are two ways to proceed:

Temporary

sudo mkdir -v /tmp/shared
sudo mount -v -t 9p -o trans=virtio,version=9p2000.L shared /tmp/shared
mount: shared mounted on /tmp/shared.

Permanently

sudo mkdir -v /mnt/shared
echo 'shared   /mnt/shared    9p  trans=virtio,version=9p2000.L,rw    0   0' | sudo tee -a /etc/fstab

Beware that if you mistype the shared folder name, the system will crash on boot...
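
To reduce that risk, you can test the new fstab entry before rebooting; if the tag is wrong, the mount fails now instead of at boot time (you can also add the nofail option to the fstab line so a failed mount does not block booting):

# Try to mount everything listed in /etc/fstab that is not already mounted
sudo mount -a

# Check the result
mount | grep 9p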

Other observations:

Even if it is mounted in read/write mode because of rw, it is the Access Mode that defines the real access. But you can restrict access even more by changing rw to ro to mount the shared folder in read-only mode.

Netdata

I've planned to use netdata to get real-time server status on every guest VM and physical server.

Installation

Very simple: just run the script and follow the instructions.

bash <(curl -Ss https://my-netdata.io/kickstart.sh)

Run it without using sudo.

Once finished just scroll up to see if you get a message like this:

Memory de-duplication instructions

You have kernel memory de-duper (called Kernel Same-page Merging,
or KSM) available, but it is not currently enabled.

To enable it run:

    echo 1 >/sys/kernel/mm/ksm/run
    echo 1000 >/sys/kernel/mm/ksm/sleep_millisecs

If you enable it, you will save 40-60% of netdata memory.

Proceed as explained.

Using sudo:

sudo su -c 'echo 1000 >/sys/kernel/mm/ksm/sleep_millisecs'
sudo su -c 'echo 1 >/sys/kernel/mm/ksm/run'

Should make it work.
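
Note that these two values do not survive a reboot. One possible way to persist them, assuming a systemd-based distribution, is a tmpfiles.d drop-in (the file name below is arbitrary):

# /etc/tmpfiles.d/ksm.conf - written to sysfs by systemd-tmpfiles at boot
w /sys/kernel/mm/ksm/run - - - - 1
w /sys/kernel/mm/ksm/sleep_millisecs - - - - 1000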

Now you can check your server stats at http://your-ip-address:19999. In my case, the guest VM IP address.


Documentation

If you want to know more about netdata, check the documentation or the project itself.

Ntop-ng

As my host acts as the virtualization server, it will "see" all the traffic coming from the DMZ IP address, so to get a better view of what's going on in the pipe 😁 I've planned to use Ntop-ng.

Installation

I've already covered this part in another gist: https://gist.github.com/Jiab77/023bbe036d7f60008ecd044c9a61591c

Fixes

After the installation I saw that Ntop-ng was complaining about GRO, GSO and TSO. I don't really know what they are, but if you want to get rid of this persistent message, just power on your guest VMs and run on your host server:

# First VM
sudo ethtool -K vnet0 gro off gso off tso off

# Second VM
sudo ethtool -K vnet1 gro off gso off tso off

Do this for every vnetX interface you have, for example with the small loop below.
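
A small sketch to apply this to all of them at once, assuming they all follow libvirt's vnetX naming:

# Disable GRO, GSO and TSO on every vnet interface
for iface in /sys/class/net/vnet*; do
    sudo ethtool -K "$(basename "$iface")" gro off gso off tso off
done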

Contribute

Feel free to contribute by giving new ideas, fixes, or anything you would consider useful by posting a comment to this gist.

References

Some very helpful references that helped me to finally make it work after more than a week of fighting 😅.

Contact

You can reach me on Twitter by using: @Jiab77

What was failing, and why did it take so long to make this work???

Simply because I've tried to provide two main features:

  1. Replace my Raspberry Pi based webserver.
  2. Replace my Raspberry Pi based WLAN / LAN bridge.

So I've tried many things like:

  • Create a pseudo-ethernet interface using macvlan type.
  • Create a dummy ethernet interface using dummy type.
  • Create a virtual ethernet interface using veth type.
  • Create a bridge interface using bridge type and:
    • Map LAN interface to the bridge
    • Map WLAN interface to the bridge
    • Map any of the newly created interfaces to the bridge

and each time I've assigned the DMZ IP address to the newly created interface.

All of this worked but created network issues:

  • No more WLAN / LAN communication once the bridge was brought UP...
  • No more external access on guest VM's
  • No more DNS resolution on guest VM's
  • No more IP given by the libvirt virtual switch...
  • And so many other issues... 😓