- kind (v0.8.0+, tested with v0.8.1)
- kubectl
- helm (v3+, tested with v3.3.0)
- Vagrant w/ vagrant-libvirt
- Set up a new docker network to share between libvirt and kind
```sh
docker network create -d=bridge --subnet 172.30.0.0/16 --ip-range 172.30.100.0/24 \
  -o com.docker.network.bridge.name=tink-dev \
  -o com.docker.network.bridge.enable_icc=1 \
  -o com.docker.network.bridge.host_binding_ipv4=0.0.0.0 \
  -o com.docker.network.bridge.enable_ip_masquerade=true \
  tink-dev
```
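The subnet layout above can be sanity-checked: docker allocates container IPs from the `--ip-range` slice of the /16, while MetalLB (configured below) hands out addresses from a separate slice. A quick sketch with Python's `ipaddress` module, using the ranges from this document:

```python
import ipaddress

subnet = ipaddress.ip_network("172.30.0.0/16")          # shared docker network subnet
docker_range = ipaddress.ip_network("172.30.100.0/24")  # docker's --ip-range allocation
lb_pool = ipaddress.ip_network("172.30.10.0/24")        # MetalLB address pool

# Both ranges live inside the shared subnet...
assert docker_range.subnet_of(subnet) and lb_pool.subnet_of(subnet)
# ...but do not overlap, so docker and MetalLB never hand out the same IP.
assert not lb_pool.overlaps(docker_range)
print("MetalLB pool and docker ip-range do not overlap")
```

Keeping the two ranges disjoint is what makes it safe for MetalLB and docker to allocate from the same bridge network.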
- Bring up a new kind cluster
```sh
KIND_EXPERIMENTAL_DOCKER_NETWORK=tink-dev kind create cluster --name tink-dev
```
- Install MetalLB (from metallb.universe.tf/installation/)
```sh
# create the metal-lb config
cat << EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: metallb-config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.30.10.0-172.30.10.255
EOF
```
```sh
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install lb --set existingConfigMap=metallb-config bitnami/metallb
```
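The address pool in the ConfigMap is given as a start-end range rather than CIDR notation. A small illustrative Python snippet (not MetalLB code) showing how many addresses that range makes available to LoadBalancer services:

```python
import ipaddress

# The address range from the metallb-config ConfigMap above.
pool = "172.30.10.0-172.30.10.255"
start_s, end_s = pool.split("-")
start = ipaddress.ip_address(start_s)
end = ipaddress.ip_address(end_s)

# Number of layer2 addresses MetalLB can assign from this pool.
count = int(end) - int(start) + 1
print(count)  # 256
```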
- Deploy the source-ip-app and create the service of type loadbalancer
```sh
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```
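Setting `externalTrafficPolicy: Local` keeps traffic on the node that received it, so kube-proxy does not SNAT and the client source IP survives, which is exactly what this test exercises. The patch body is plain JSON; a minimal sketch of the field it overlays (illustrative, not kubectl internals):

```python
import json

# The JSON merge patch passed to `kubectl patch svc loadbalancer -p ...` above.
patch = json.loads('{"spec":{"externalTrafficPolicy":"Local"}}')

# The patch overlays only the named field; a hypothetical Cluster-policy
# service spec becomes Local, other fields are untouched.
spec = {"type": "LoadBalancer", "externalTrafficPolicy": "Cluster"}
spec.update(patch["spec"])
print(spec)
```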
- Bring up the test instance on the tink-dev network and make a request to the source-ip-app through the load balancer
```sh
vagrant up test
vagrant ssh test
curl 172.30.10.0
```
With bridge-nf-call-iptables set to 0, the client-ip is correctly reported as 172.30.0.100 (the IP address of the libvirt instance). However, with it set to 1 (the default), the client-ip is reported as 172.30.100.0 (the IP assigned to the bridge device, which the kind entrypoint returns as the docker host IP).
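The sysctl in question can be inspected on the host; a minimal sketch reading the standard `/proc` location (the file only exists when the `br_netfilter` module is loaded):

```python
from pathlib import Path

# Sysctl knob controlling whether bridged frames traverse iptables.
knob = Path("/proc/sys/net/bridge/bridge-nf-call-iptables")
if knob.exists():  # only present when br_netfilter is loaded
    print("bridge-nf-call-iptables =", knob.read_text().strip())
else:
    print("br_netfilter module not loaded")
```

Setting it to 0 (`sysctl -w net.bridge.bridge-nf-call-iptables=0`) is what produced the correct client IP above, though as the resolution notes explain, the real culprit was host firewall configuration.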
This issue has been resolved; it turned out to be related to Fedora 32 and self-induced pain from workarounds applied to get moby-engine working.
First, I had neglected to change the configuration for firewalld to use iptables rather than nftables (so there were nftables rules in play alongside iptables rules).
Second, I had added masquerading to the FedoraWorkstation Zone when setting up moby-engine (following https://fedoramagazine.org/docker-and-fedora-32/), which was the cause of the SNAT.
Resolving these two issues fixed the problem, and moby-engine still works.