Setting up IPv6 with Proxmox 4.3 - Debian Jessie, based on the OVH template installation

Setting up IPv6 for the HOST machine (Debian Jessie) and a GUEST VM, using a Proxmox 4.3 installation from the OVH template.

HOST SECTION

The installation process gives us a ready-to-use machine which we can access via SSH (port 22) or the web interface (port 8006), with network interfaces already set up. Unfortunately, on the vanilla Proxmox 4.3 delivered by OVH, IPv6 doesn't work out of the box.

ping6 ipv6.google.com 
connect: Network is unreachable
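
Before changing anything, it can help to see what IPv6 state the host already has. These two commands (both standard iproute2) list the addresses and routes on the bridge:

ip -6 addr show dev vmbr0
ip -6 route show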

Now let's check that you have an IPv6 entry for vmbr0 in /etc/network/interfaces. It should look like this:

cat /etc/network/interfaces

iface vmbr0 inet6 static
	address 2001:xxxx:xxxx:xxxx::
	netmask 64
	post-up /sbin/ip -f inet6 route add 2001:xxxx:xxxx:xxff:ff:ff:ff:ff dev vmbr0
	post-up /sbin/ip -f inet6 route add default via 2001:xxxx:xxxx:xxff:ff:ff:ff:ff
	pre-down /sbin/ip -f inet6 route del default via 2001:xxxx:xxxx:xxff:ff:ff:ff:ff
	pre-down /sbin/ip -f inet6 route del 2001:xxxx:xxxx:xxff:ff:ff:ff:ff dev vmbr0

If you have an IPv6 configuration for vmbr0, then you are ready to go. Just run these commands in the terminal to set the default route:

/sbin/ip -f inet6 route add 2001:xxxx:xxxx:xxff:ff:ff:ff:ff dev vmbr0
/sbin/ip -f inet6 route add default via 2001:xxxx:xxxx:xxff:ff:ff:ff:ff
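
To confirm the routes are in place, you can inspect the kernel's IPv6 routing table:

ip -6 route show

You should see the 2001:xxxx:xxxx:xxff:ff:ff:ff:ff entry on vmbr0 and a default route via that address.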

With the routes in place you should be able to test your connectivity:

ping6 -c 3 ipv6.google.com

PING ipv6.google.com(par03s15-in-x0e.1e100.net) 56 data bytes
64 bytes from par03s15-in-x0e.1e100.net: icmp_seq=1 ttl=57 time=9.47 ms
64 bytes from par03s15-in-x0e.1e100.net: icmp_seq=2 ttl=57 time=11.1 ms
64 bytes from par03s15-in-x0e.1e100.net: icmp_seq=3 ttl=57 time=10.4 ms

--- ipv6.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 9.470/10.375/11.194/0.706 ms

To sum up: the post-up commands from your /etc/network/interfaces are not being executed, and that's why the HOST has no IPv6 route set. To work around that, I have placed them in /etc/rc.local just before exit 0, so it should look like this:

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

true > /etc/motd
if [ -e /etc/lsb-release ]
then
        grep DISTRIB_DESCRIPTION /etc/lsb-release | sed 's/^DISTRIB_DESCRIPTION="\(.*\)"$/\1/' > /etc/motd
fi
uname -a >> /etc/motd
echo >> /etc/motd
echo "server    : `cat /root/.mdg 2>/dev/null`" >> /etc/motd
echo "ip        : `cat /etc/network/interfaces | grep "address" | head -n 1 | cut -f 2 -d " "`"  >> /etc/motd
echo "hostname  : `hostname`" >> /etc/motd
echo >> /etc/motd
/bin/cat /etc/motd > /etc/issue

# setting here IPv6 routing because it's not working in post-up phase in /etc/network/interfaces
/sbin/ip -f inet6 route add 2001:xxxx:xxxx:xxff:ff:ff:ff:ff dev vmbr0
/sbin/ip -f inet6 route add default via 2001:xxxx:xxxx:xxff:ff:ff:ff:ff

exit 0
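
One caveat: /etc/rc.local is only run at boot when it has the execute bit set, so if the routes still don't appear after a reboot, make sure the file is executable:

chmod +x /etc/rc.local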

Without going into too many details: we need NDP proxying and IPv6 forwarding enabled in order to have our VMs connected to the outside world.

In order to do that we need to set net.ipv6.conf.default.proxy_ndp = 1 and net.ipv6.conf.default.forwarding = 1, and disable autoconfiguration (net.ipv6.conf.all.autoconf = 0). We do that in /etc/sysctl.conf:

net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.vmbr0.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.vmbr0.accept_ra = 0

net.ipv6.conf.all.router_solicitations = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.proxy_ndp = 1
net.ipv6.conf.all.proxy_ndp = 1
net.ipv4.ip_forward = 1

After we finish setting the parameters in sysctl.conf, we need to run sysctl -p to load the new settings into the kernel.

sysctl -p
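
If you want to double-check that the values were applied, you can query the relevant keys directly; each should report a value of 1:

sysctl net.ipv6.conf.all.forwarding net.ipv6.conf.all.proxy_ndp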

As a final step we need to set up the NDP proxy, which means that for each IPv6 address of a VM running in our HOST environment we need to execute the following command (NOTE: 2001:xxxx:xxxx:xxxx::22 is an IPv6 address set on the VM):

ip -6 neigh add proxy 2001:xxxx:xxxx:xxxx::22 dev vmbr0

In this way we tell the system that we have a VM with the address 2001:xxxx:xxxx:xxxx::22 and that it's reachable via vmbr0. After setting the proxy we should be able to ping our VM from the Internet and also ping IPv6 addresses from the VM.
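
To verify the entry was registered, you can list the kernel's proxy neighbour table; the VM's address should show up against vmbr0:

ip -6 neigh show proxy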

Obviously this is not a very practical approach, which is why we will install ndppd, which will do this for us.

wget http://debian.sdinet.de/jessie/main/ndppd/ndppd_0.2.4-1~sdinetG1_amd64.deb
dpkg -i ndppd_0.2.4-1~sdinetG1_amd64.deb
echo "proxy vmbr0 {
          rule 2001:xxxx:xxxx:xxxx::/64 {
	  }
}">/etc/ndppd.conf

NOTE: 2001:xxxx:xxxx:xxxx:: is the main IPv6 address for vmbr0. After installing ndppd and creating the above config file, all we need to do is restart the ndppd daemon:

/etc/init.d/ndppd restart
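
To make sure the daemon came back up after the restart, a simple process check is enough (pgrep prints the PID if ndppd is running):

pgrep ndppd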

This is all we need to do in order to have IPv6 connectivity for both the HOST and the VMs.

GUEST VM SECTION

After setting up the HOST we can start a VM and configure its network interface with an address from the /64 range. Log in to your VM and set up the public interface with an IPv6 address:

/etc/network/interfaces

auto ens18
iface ens18 inet6 static
      address 2001:xxxx:xxxx:xxxx::1
      netmask 64
      # Our HOST IPv6 address
      gateway 2001:xxxx:xxxx:xxxx::

Once we have set the inet6 entry for the ens18 network interface we are IPv6 ready, which means we can access the VM from the Internet and access IPv6 addresses from the VM.
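
After bringing the interface up (ifup ens18, or simply reboot the VM), a quick check from inside the guest should now succeed:

ping6 -c 3 ipv6.google.com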

@iandk commented Aug 9, 2017

Any differences in Debian Stretch/ Proxmox 5.0?

@matteli commented Jun 7, 2019

The post-up command is not executed because your netmask is incorrect. For OVH, it's 56.

@panperla (Author) commented Jun 7, 2019

@matteli thanks for the comment. Are you saying the Guest VM should have netmask 56?

@matteli commented Jun 7, 2019

> @matteli thanks for the comment. Are you saying the Guest VM should have netmask 56?

No, it's in your host section. That's why the post-up command is not executed.

@evandixon commented Jun 7, 2019

I'm using Proxmox 5.4-6 and primarily Debian-based VMs/containers. This guide has been very helpful in getting IPv6 working. Until today, the VMs I tried it on also had IPv4 addresses. Today I tried one with just an IPv6, and I found an issue.

My Debian 9.7 container could ping other IPv6 addresses in the same block, but nothing else. Running my IP's equivalent of ip -6 neigh add proxy 2001:xxxx:xxxx:xxxx::22 dev vmbr0 made it work, suggesting the NDPPD config isn't right. I checked the service output and saw:

(error) You must specify either 'iface', 'auto' or 'static'

So there's something missing from here:

proxy vmbr0 {
    rule 2001:xxxx:xxxx:xxxx::/64 {
    }
}

After looking up another example of NDPPD config, I updated the config on my box, and after restarting the service, things seem to be working:

proxy vmbr0 {
    rule 2001:xxxx:xxxx:xxxx::/64 {
        static
    }
}

To verify it wasn't the previous ip -6 neigh add proxy 2001:xxxx:xxxx:xxxx::22 dev vmbr0 making it work, I changed my container's IP address and it still worked.

@kamzata commented May 24, 2020

I'm trying to implement an IPv4 NAT + IPv6 routing configuration. Something like this:

iface lo inet loopback
iface lo inet6 loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address  51.XX.53.186
    netmask  255.255.255.0
    gateway  51.XX.53.254
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

iface vmbr0 inet6 static
    address  2001:41d0:XXXX:17ba::ffff
    netmask  128
    post-up sleep 5; /sbin/ip -6 route add 2001:41d0:XXXX:17FF:FF:FF:FF:FF dev vmbr0
    post-up sleep 5; /sbin/ip -6 route add default via 2001:41d0:XXXX:17FF:FF:FF:FF:FF
    pre-down /sbin/ip -6 route del default via 2001:41d0:XXXX:17FF:FF:FF:FF:FF
    pre-down /sbin/ip -6 route del 2001:41d0:XXXX:17FF:FF:FF:FF:FF dev vmbr0

auto vmbr1
iface vmbr1 inet static
    address  192.168.1.1
    netmask  255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0

iface vmbr1 inet6 static
    address  2001:41d0:XXXX:17ba::1
    netmask  64

Then I use the Npd6 daemon to discover neighbor containers. It works, but only partially: containers become reachable from the Internet (though they lose some packets), and from inside they only start working after I run (for example) traceroute6 2600::. Then, after a while (30 minutes or more), I cannot ping outside anymore. IPv4 with NAT works great instead.

Any ideas?

@ludoc commented Nov 5, 2023

> I'm trying to implement an IPv4 NAT + IPv6 routing configuration. […] Any ideas?

Same problem here. @kamzata, did you find a solution?

@kamzata commented Nov 5, 2023

> Same problem here. @kamzata, did you find a solution?

Yes, I did. I don't remember the exact option, but you should check your /etc/sysctl.conf and set it according to your provider.
