Enabling dual-stack for Cilium dev environment

Dual Stack setup for development env

Dual Stack setup for Kubernetes does not work with RUNTIME=containerd or RUNTIME=crio. Make sure you are running with the default Docker container runtime.

Cilium

Edit /etc/sysconfig/cilium and modify CILIUM_OPTS and CILIUM_OPERATOR_OPTS.

Cilium Agent Options

No additional Cilium agent options need to be set. Just make sure that --enable-ipv6 is set to true in the cilium-agent flags.
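
For reference, the agent line in /etc/sysconfig/cilium only needs IPv6 enabled; a minimal sketch (keep any flags already present in your file alongside it):

# /etc/sysconfig/cilium -- existing agent flags omitted for brevity
CILIUM_OPTS="--enable-ipv6=true"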

Cilium-Operator Options

Update the options to include the correct cluster IPv4 and IPv6 CIDRs, using the same values for the --cluster-pool-{ipv4,ipv6}-cidr flags as in the K8s configuration below.

--cluster-pool-ipv4-cidr=10.16.0.0/12 --cluster-pool-ipv6-cidr=fd77::/112 --cluster-pool-ipv6-mask-size 120 --cluster-pool-ipv4-mask-size 24
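
With CILIUM_OPERATOR_OPTS in the same /etc/sysconfig/cilium file, the resulting line could look roughly like this (keep any operator flags that are already set):

# /etc/sysconfig/cilium -- existing operator flags omitted for brevity
CILIUM_OPERATOR_OPTS="--cluster-pool-ipv4-cidr=10.16.0.0/12 --cluster-pool-ipv4-mask-size=24 --cluster-pool-ipv6-cidr=fd77::/112 --cluster-pool-ipv6-mask-size=120"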

K8s

Edit the systemd unit files (/etc/systemd/system/kube-*.service) to include the arguments mentioned below. For some arguments you might need to edit the existing values.

kube-apiserver

--feature-gates="EndpointSlice=true,IPv6DualStack=true"
--service-cluster-ip-range=172.20.0.0/24,fd88::/112
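
For illustration, after the edit the kube-apiserver unit could contain something like the following; the binary path and the last flag are assumptions, so keep whatever your unit file already has and only add or adjust the two arguments above:

# illustrative excerpt of /etc/systemd/system/kube-apiserver.service
[Service]
ExecStart=/usr/bin/kube-apiserver \
    --feature-gates="EndpointSlice=true,IPv6DualStack=true" \
    --service-cluster-ip-range=172.20.0.0/24,fd88::/112 \
    --allow-privileged=true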

Controller manager

--cluster-cidr=10.16.0.0/12,fd77::/112
--feature-gates="IPv6DualStack=true"
--service-cluster-ip-range=172.20.0.0/24,fd88::/112
--node-cidr-mask-size-ipv4=24
--node-cidr-mask-size-ipv6=120

Remove the existing --node-cidr-mask-size 16 option from the systemd unit file.

Kubelet

--feature-gates="IPv6DualStack=true"

Kube-proxy

--cluster-cidr=10.16.0.0/12,fd77::/112
--feature-gates="IPv6DualStack=true"

Kube-Scheduler

--feature-gates="IPv6DualStack=true"

After all the systemd unit changes are done, first delete all the existing node objects in the cluster, then reload the daemon and restart all the services.

$ kubectl delete nodes k8s1 k8s2
$ kubectl delete ciliumnodes k8s1 k8s2
$ sudo systemctl daemon-reload

# Master Node
$ sudo systemctl restart cilium cilium-operator kube-apiserver kube-controller-manager kubelet kube-proxy kube-scheduler

# Worker Node
$ sudo systemctl restart cilium cilium-operator kubelet kube-proxy

The above will set up a dual-stack cluster in the dev environment. You can validate it by checking that the nodes have an IPv6 pod CIDR. All new pods should also have an IPv6 address associated with them.

$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDRs}'
$ kubectl get pods -o jsonpath='{.items[*].status.podIPs}'
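
Since the operator hands out pod CIDRs through cluster-pool IPAM, the per-node allocations can also be cross-checked on the CiliumNode objects (a sketch; the field path assumes the v2 CiliumNode CRD):

$ kubectl get ciliumnodes -o jsonpath='{.items[*].spec.ipam.podCIDRs}'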

Note

A fix is required for kube-proxy when running in dual-stack mode. Sometimes kube-proxy repeatedly fails because it cannot find the KUBE-MARK-DROP iptables chain for IPv6, which should be created by the kubelet:

I1013 15:26:18.762727       1 proxier.go:850] Sync failed; retrying in 30s
E1013 15:26:18.780765       1 proxier.go:1570] Failed to execute iptables-restore: exit status 2 (ip6tables-restore v1.8.3 (legacy):
Couldn't load target `KUBE-MARK-DROP':No such file or directory

Error occurred at line: 79
Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
)

This was fixed upstream for IPVS mode but still fails for iptables mode in our CI. For more information, see kubernetes/kubernetes issues #80462, #84422 and #85527. As a workaround, create the missing chains manually on each node:

vagrant@k8s1 $ sudo ip6tables -t nat -N KUBE-MARK-DROP && sudo iptables -t nat -N KUBE-MARK-DROP
vagrant@k8s2 $ sudo ip6tables -t nat -N KUBE-MARK-DROP && sudo iptables -t nat -N KUBE-MARK-DROP
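
To confirm the chain now exists and that kube-proxy recovers on its next sync (it retries every 30s per the log above), something like the following can be used, assuming kube-proxy runs as the systemd unit configured earlier:

$ sudo ip6tables -t nat -L KUBE-MARK-DROP
$ sudo journalctl -u kube-proxy -f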

For IPv6 services to work from the host, you might need to add a default IPv6 route on all nodes if one is not already installed.

$ sudo ip -6 route add default dev enp0s8
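
To confirm the default route is now present on the node:

$ ip -6 route show default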