
Hetzner Primary IPv4 IP + IPv4/2x Subnet + "Non-Routed" IPv6/64 Subnet HowTo

... so I do not forget the next time I have to figure this stuff out. And perhaps to help other poor souls fiddling with v6/v4 Xen setups in a Hetzner network environment.

Basic setup

You can basically follow along with the Xen Project Beginners Guide.

The short version

Install Debian Wheezy via Hetzner's installimage on the rescue system. The only important part about partitioning is that you have an LVM volume group named vg0 with enough space for your guests' disks.
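A minimal installimage partitioning scheme along these lines should do (file systems and sizes are illustrative, only the vg0 volume group name matters):

PART /boot ext3 512M
PART lvm   vg0  all

LV vg0 root /    ext4 20G
LV vg0 swap swap swap  4G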

Stuff you will need:

apt-get install bridge-utils xen-linux-system xen-tools
dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
update-grub

To switch from the xm to the xl toolstack, set TOOLSTACK=xl in /etc/default/xen, then reboot or do service xen stop && service xen start
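dom0:/etc/default/xen

# default toolstack to use (xm or xl)
TOOLSTACK=xl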

Dom0

dom0:/etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
  address   <PrimaryIP>
  broadcast <PrimaryIP_Broadcast>
  netmask   <PrimaryIP_Netmask>
  gateway   <PrimaryIP_Gateway>
  up route add -net <PrimaryIP_NET> netmask <PrimaryIP_Netmask> gw <PrimaryIP_Gateway> eth0

iface eth0 inet6 static
  address <IPv6_Prefix>::1
  netmask 128
  gateway <IPv6_Gateway>

auto br0
iface br0 inet static
  address       <PrimaryIP>
  netmask       255.255.255.255
  bridge_ports  none
  bridge_stp    off
  bridge_fd     0
  pre-up        brctl addbr br0
  up            ip -4 route add <Available_Subnet_v4_IP_Dom0>/32 dev br0
  up            ip -4 addr add <Available_Subnet_v4_IP_Dom0>/<Subnet_Mask> dev br0
  down          ip -4 route del <Available_Subnet_v4_IP_Dom0>/32 dev br0
  down          ip -4 addr del <Available_Subnet_v4_IP_Dom0>/<Subnet_Mask> dev br0

iface br0 inet6 static
  address <IPv6_Prefix>::1
  netmask 64

In a nutshell

  • Leave the primary IP config as autogenerated by Hetzner
  • Add an IP address out of the /64 pool to eth0 with a /128 netmask
  • Create a bridge with no bridged devices
  • Add an IP out of the assigned additional subnet with a /32 netmask (used by XenU domains for routing)
  • Add a route for the additional subnet with the appropriate netmask
  • Add the same IPv6 address to the bridge as for the eth0 device, with a /64 netmask
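Once br0 is up, the result can be sanity-checked with standard iproute2 commands:

ip -4 addr show dev br0
ip -4 route show dev br0
ip -6 addr show dev br0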

dom0:/etc/xen/xend-config.sxp

The default configuration can be used as of Xen 4.1.4-3+deb7u2:

# -*- sh -*-
(vif-script vif-bridge)
(dom0-min-mem 512)
(enable-dom0-ballooning no)
(total_available_memory 0)
(dom0-cpus 0)
(vncpasswd '')

In a nutshell

  • leave the default values, optionally restrict dom0 resources (min-mem, ballooning, cpus)
  • no network-script, as we use a bridge configured via /etc/network/interfaces
  • the vif-bridge script will pick up the first available bridge device.

dom0:/etc/sysctl.conf

net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.default.forwarding = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.proxy_ndp = 1
net.ipv6.conf.all.proxy_ndp = 1

In a nutshell

  • enable IPv4 forwarding for all existing and all future devices
  • enable IPv6 forwarding for all existing and all future devices
  • enable IPv6 NDP proxying for all existing and all future devices
  • the NDP proxy is needed to properly "announce" the XenU domains' IPv6 addresses.
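The new settings can be applied without a reboot via:

sysctl -p /etc/sysctl.conf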

XenU

dom0:/etc/xen/example-vm.cfg

kernel      = '/boot/vmlinuz-3.2.0-4-amd64'
ramdisk     = '/boot/initrd.img-3.2.0-4-amd64'
extra       = "earlyprintk=xen console=hvc0"
vcpus       = '4'
memory      = '512'
root        = '/dev/xvda2 ro'
disk        = [
                  'phy:/dev/vg0/vmtest1-disk,xvda2,w',
                  'phy:/dev/vg0/vmtest1-swap,xvda1,w',
              ]

name        = 'example-vm'

vif         = [ 'ip=<Available_Subnet_v4_IP_XenU> <Available_v6_IP_XenU>,mac=<ETH0_MAC>,bridge=br0' ]

on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'

In a nutshell

  • Choose any available IPv4 (Available_Subnet_v4_IP_XenU) and IPv6 (Available_v6_IP_XenU) addresses from your subnets
  • Configuring the IPv6 address here is optional and only required if you plan on extending the vif-bridge script to automatically register ip -6 neigh add proxy <VM-IPv6-IP> dev eth0 entries, see the "Handling IPv6 Neighbor Discovery Protocol" section for more details.
  • "Clone" the eth0 MAC address
  • Optional: add kernel parameters to work around virtual console issues.
  • Pay attention to the kernel / ramdisk settings. These need to match your Xen0 kernel. Alternatively, use xen-create-image to bootstrap your VM and autogenerate the config (see the example below).
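For reference, an invocation along these lines bootstraps a matching guest (all values are illustrative; check xen-create-image(8) for the flags your xen-tools version supports):

xen-create-image --hostname=example-vm --lvm=vg0 --dist=wheezy \
  --memory=512M --vcpus=4 --bridge=br0 \
  --ip=<Available_Subnet_v4_IP_XenU> \
  --gateway=<Available_Subnet_v4_IP_Dom0> \
  --netmask=<Subnet_Netmask>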

domU:/etc/network/interfaces

auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
  address   <Available_Subnet_v4_IP_XenU>
  gateway   <Available_Subnet_v4_IP_Dom0>
  netmask   <Subnet_Netmask>
  broadcast <Subnet_Broadcast>

iface eth0 inet6 static
  address <Available_v6_IP_XenU>
  gateway <IPv6_Gateway>

In a nutshell

  • configure IPv4 to use the IPv4 address previously selected in dom0:/etc/xen/example-vm.cfg; use the netmask and broadcast as dictated by Hetzner
  • configure IPv6 to use any available IP address from your /64 pool, as defined in dom0:/etc/xen/example-vm.cfg
  • use the Xen0's br0 v4 IP as your v4 gateway.
  • use the Xen0's v6 IP address as your v6 gateway, and make sure that this very address is assigned to eth0 with /128 and to br0 with /64. Alternatively, use radvd on Xen0 and omit the gateway statement so it can be configured automatically (see the "Variant I: radvd" section below).
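Start the guest from dom0 with the xl toolstack:

xl create /etc/xen/example-vm.cfg
xl console example-vm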

Once the VM starts up, the vifX.Y device will be added to the br0 bridge automatically. This can be verified on Dom0 via:

root@xen2 ~ # brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.feffffffffff       no              vif2.0

Handling IPv6 Neighbor Discovery Protocol

This article has mentioned NDP packets and NDP proxies a couple of times. NDP can be compared to ARP in the IPv4 world, at least in the scope of this article. Depending on the nature of the IPv6 network your server is connected to, you might or might not need to deal with this topic. For instance, if you experience symptoms like:

  • unreachable VM guests from outside via IPv6 becoming reachable once you fire up a ping from the VM towards any public IPv6 address
  • VMs becoming unreachable after a period of time via IPv6

you will most likely need to handle NDP yourself. There are a couple of alternatives for how to do that.

Variant I: radvd

The most convenient option is to use a Router Advertisement Daemon such as radvd. It will provide automatic IPv6 address + gateway configuration and handle ND packets for you. And yes, you can use this in a mixed manner: assign a static IPv6 address to your VM and let radvd configure the gateway / routing via link-local IPv6 addresses (fe80::/10) on dom0:br0 <-> xenU:eth0. There is a catch though: if you plan on using subnets smaller than /64, radvd will be of little use, as the router advertisement / auto configuration method cannot handle any subnet smaller than that.

dom0:/etc/radvd.conf

interface br0
{
        AdvSendAdvert on;
        prefix <IPv6_Prefix>::/64
        {
        };
};
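With radvd running on Xen0, the domU's IPv6 stanza from above can drop the gateway statement; the default route is then learned from the router advertisements. A sketch of the mixed static-address variant:

iface eth0 inet6 static
  address <Available_v6_IP_XenU>
  # no gateway statement needed, the default route is picked up
  # from radvd's router advertisements via the link-local addresses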

Variant II: Manual NDISC proxy list

The next approach is to add every XenU v6 IP manually to Xen0's eth0 proxy list:

ip -6 neigh add proxy <VM-IPv6-IP> dev eth0

You can do that either by hand or by modifying the Xen network scripts to do it for you. In that case you will also need to pass the v6 IP via the VM's Xen configuration file to the vif script, since automatic detection of the VM's configured IP is not possible in a generic manner, to my knowledge.
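A minimal sketch of such a hook, assuming Debian's /etc/xen/scripts/vif-bridge, which receives the space-separated address list from the vif line in the $ip variable (adapt to your script version):

# appended to the "online" branch of /etc/xen/scripts/vif-bridge
for addr in ${ip}; do
  case "${addr}" in
  *:*) # crude "looks like IPv6" test
    ip -6 neigh add proxy "${addr}" dev eth0
    ;;
  esac
done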

Variant III: ND Proxy Daemon

Running an ND proxy daemon usually means answering all NDP queries ("hey, can I reach this IP address here?") with a "yes" on a specific device/subnet. I would not recommend this method, as it can quickly exhaust RAM resources on upstream routers if somebody decides to do a /64 network scan and you happen to answer all the resulting NDP inquiries with a "yes". You will find that these proxies are not included in the standard Debian repository, perhaps for a reason. Nevertheless, here are two choices you might want to consider:

NPD6 Configuration Example

Use the default configuration, but with your own prefix, and pay close attention to the list type configuration section, as it will allow you not to spam upstream routers. One approach here would be to use a /120 subnet for all your XenU VMs and thus restrict the proxy daemon's IP range to 256 hosts.

// Your /64 prefix as assigned by Hetzner
prefix = <IPv6_Prefix>::

// If 'none' (the default) any NS matching the prefix gets a reply.
listtype = none

// listtype = black
// listtype = white
//addrlist = 2a01:0123:4567:89aa:aaaa:bbbb:cccc:dddd
//addrlist = 2a01:123:4567:89aa:dead:beef:dead:beef
//.
//.
//. (add as many addrlist entries as desired)
// Format: must be a 128-bit address, but all formats
// accepted, e.g. 2a01::22, 2a01::0022, etc.
 
// Pattern matching is also supported, via use of
// exprlist = <expression to match>
// *Please* check the man page and/or web site for more details
// on this. It's very powerful but needs a little thought first! It's 
// not just simple regexps.


// If we're using black or white lists, this controls whether to 
// log matches or not (if we're using debug mode, then they get
// logged anyway)

NDPPD Configuration Example

Pay attention to the rule statement and try to restrict the daemon so it does not handle the whole /64 subnet. Again, a good approach here would be to use a /120 subnet for your XenU VMs and thus restrict the IP range handled by the daemon to 256 hosts.

route-ttl 30000
proxy eth0 {
  router yes
  timeout 500
  ttl 30000
  rule <IPv6_Prefix>::/<SUBNET> {
    auto
  }
}
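As a worked example of the /120 idea: a rule for <IPv6_Prefix>::/120 covers exactly the 256 addresses <IPv6_Prefix>::0 through <IPv6_Prefix>::ff (including the dom0's <IPv6_Prefix>::1), so the guests simply get their addresses assigned from that range:

rule <IPv6_Prefix>::/120 {
  auto
}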


TODO

I am not very happy with the <FOO_This_And_That> variable naming scheme. I tried to keep the naming consistent across all configuration sections in this document, though. If you have a better idea, go ahead and fork off and/or let me know in the comments ;)
