This writeup shows how to bring up a multi-cluster, multi-network Istio service mesh (Primary-Remote on different networks) using minikube VMs on a single Linux machine.
Acknowledgments: this text reuses material from the Istio documentation. Thanks for the great docs, Istio team!
The host machine used for this writeup (a less beefy system might work too):
- x86_64 architecture
- 8 CPU cores
- 32GB memory
- 40+GB storage (2x 20GB for the minikube VMs -- can be reduced)
Software versions (similar/recent versions should work too):
- Debian GNU/Linux 11
- qemu 5.2, libvirt 7.0.0 (for the kvm2 minikube driver)
- minikube v1.25.2
- kubectl v1.23.6
- kubernetes v1.21.12
- istio v1.13.4
We use two disjoint IP networks for the two clusters:
- 1st cluster's service IP range: 10.96.0.0/12
- 2nd cluster's service IP range: 10.112.0.0/12
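The two service ranges must not overlap, or cross-cluster routing breaks. A minimal sketch of an overlap check (ip2int and cidr_overlap are hypothetical helpers, not part of any tool used here):

```shell
# Hypothetical helper: dotted-quad IPv4 address -> 32-bit integer.
ip2int() {
  local IFS=.
  local a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Return success (0) if two CIDR blocks overlap, failure (1) otherwise.
cidr_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local start1 end1 start2 end2
  start1=$(ip2int "$net1"); end1=$(( start1 + (1 << (32 - len1)) - 1 ))
  start2=$(ip2int "$net2"); end2=$(( start2 + (1 << (32 - len2)) - 1 ))
  (( start1 <= end2 && start2 <= end1 ))
}

# The two ranges used in this writeup are disjoint:
if cidr_overlap "10.96.0.0/12" "10.112.0.0/12"; then
  echo "overlap"
else
  echo "disjoint"
fi
```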
Next, we install an Istio service mesh across two minikube VMs.
First, we create the minikube VMs.
- Start the first cluster (it will be used as the Istio primary)
minikube start \
--driver=kvm2 \
--cni=calico \
--service-cluster-ip-range="10.96.0.0/12" \
--profile=cluster1 \
--memory=16384 \
--cpus=4
- Start the second cluster (it will be used as the Istio remote)
minikube start \
--driver=kvm2 \
--cni=calico \
--service-cluster-ip-range="10.112.0.0/12" \
--profile=cluster2 \
--memory=16384 \
--cpus=4
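Before moving on, make sure both clusters' nodes report Ready. A sketch of a polling helper (wait_for_node_ready is hypothetical; it only assumes a working kubectl context per profile):

```shell
# Hypothetical helper: poll until the cluster's node reports Ready,
# giving up after a timeout (in seconds).
wait_for_node_ready() {
  local ctx=$1 timeout=${2:-120} elapsed=0
  while (( elapsed < timeout )); do
    if kubectl --context "$ctx" get nodes --no-headers 2>/dev/null | grep -q ' Ready '; then
      echo "cluster $ctx is ready"
      return 0
    fi
    sleep 5
    elapsed=$(( elapsed + 5 ))
  done
  echo "timed out waiting for $ctx" >&2
  return 1
}

# Usage sketch:
# wait_for_node_ready cluster1
# wait_for_node_ready cluster2
```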
As the minikube VMs fire up, expect a layout of VMs and virtual bridges similar to this:
+--------+ +-------------------+ +--------+
| virbr1 | | virbr0 | | virbr2 |
+-----+--+ +-+-------------+---+ +-+------+
| | | |
+---+-------+-+ +--+--------+-+
| eth1 eth0| |eth0 eth1|
| | | |
| cluster1 | | cluster2 |
| minikube VM | | minikube VM |
| | | |
+-------------+ +-------------+
To enable connectivity between the two minikube VMs, we need to enable routing on the virtual bridges manually (as of minikube v1.25.2). More info: kubernetes/minikube#11499
minikube stop -p cluster1
minikube stop -p cluster2
sudo virsh net-edit mk-cluster1
(add the <forward mode='route'/> line marked with + below, then repeat the edit for mk-cluster2)
<network>
<name>mk-cluster1</name>
<uuid>a0dc0099-7016-44b9-96e1-31005095eaf8</uuid>
+ <forward mode='route'/>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:f1:7b:c8'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.254'/>
<host mac='52:54:00:6e:ef:95' name='c1' ip='192.168.39.102'/>
</dhcp>
</ip>
</network>
sudo virsh net-destroy mk-cluster1
sudo virsh net-destroy mk-cluster2
sudo virsh net-start mk-cluster1
sudo virsh net-start mk-cluster2
minikube start -p cluster1
minikube start -p cluster2
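As a non-interactive alternative to `virsh net-edit`, the forward element can be spliced in with sed. This is a sketch under the assumption of GNU sed; add_route_forwarding is a hypothetical helper:

```shell
# Hypothetical helper: insert <forward mode='route'/> into a libvirt
# network XML stream, right after the <uuid> line (GNU sed `a` syntax).
add_route_forwarding() {
  sed "/<uuid>/a <forward mode='route'/>"
}

# Usage sketch (commented out; requires libvirt and root):
# sudo virsh net-dumpxml mk-cluster1 | add_route_forwarding > /tmp/mk-cluster1.xml
# sudo virsh net-destroy mk-cluster1 && sudo virsh net-undefine mk-cluster1
# sudo virsh net-define /tmp/mk-cluster1.xml && sudo virsh net-start mk-cluster1
```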
Traffic filtering on the virtual bridges prevents connectivity between the VMs. A workaround is to delete the REJECT rules:
sudo iptables -D LIBVIRT_FWI -o virbr2 -j REJECT --reject-with icmp-port-unreachable
sudo iptables -D LIBVIRT_FWI -o virbr1 -j REJECT --reject-with icmp-port-unreachable
sudo iptables -D LIBVIRT_FWO -i virbr2 -j REJECT --reject-with icmp-port-unreachable
sudo iptables -D LIBVIRT_FWO -i virbr1 -j REJECT --reject-with icmp-port-unreachable
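`iptables -D` fails if the rule is already gone (e.g. on a second run). A sketch of an idempotent variant, using `iptables -C` to probe for the rule first (drop_reject_rule is a hypothetical helper):

```shell
# Hypothetical helper: delete a libvirt REJECT rule only if it exists.
# iptables -C checks for a rule without modifying anything.
drop_reject_rule() {
  local chain=$1 dir=$2 bridge=$3
  if sudo iptables -C "$chain" "$dir" "$bridge" -j REJECT --reject-with icmp-port-unreachable 2>/dev/null; then
    sudo iptables -D "$chain" "$dir" "$bridge" -j REJECT --reject-with icmp-port-unreachable
  fi
}

# Usage sketch:
# drop_reject_rule LIBVIRT_FWI -o virbr1
# drop_reject_rule LIBVIRT_FWI -o virbr2
# drop_reject_rule LIBVIRT_FWO -i virbr1
# drop_reject_rule LIBVIRT_FWO -i virbr2
```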
The Kubernetes service networks need to be added to the routing tables of the minikube VMs:
minikube ssh -p cluster1 -- sudo ip route add 10.112.0.0/12 via 192.168.39.1 dev eth0
minikube ssh -p cluster2 -- sudo ip route add 10.96.0.0/12 via 192.168.50.1 dev eth0
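To keep both route commands consistent, the peer CIDR and bridge gateway can be parameterized (peer_route_cmd is a hypothetical helper). Note that routes added inside the VM this way do not survive a VM restart, so they must be re-added after each `minikube start`.

```shell
# Hypothetical helper: build the `ip route add` command that sends a
# peer cluster's service CIDR via the host-side bridge gateway.
peer_route_cmd() {
  local cidr=$1 gateway=$2
  echo "sudo ip route add $cidr via $gateway dev eth0"
}

# Usage sketch:
# minikube ssh -p cluster1 -- "$(peer_route_cmd 10.112.0.0/12 192.168.39.1)"
# minikube ssh -p cluster2 -- "$(peer_route_cmd 10.96.0.0/12 192.168.50.1)"
```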
Fire up a minikube tunnel for each profile (in separate terminals on the host) to enable LoadBalancer services; this is necessary for the Istio gateways.
minikube -p cluster1 tunnel
minikube -p cluster2 tunnel
Two types of connections should be checked (a simple ping test is sufficient):
- connectivity between the VMs
- connectivity between each VM and the Internet
minikube -p cluster1 ssh -- ping 8.8.8.8 -c 2
minikube -p cluster2 ssh -- ping 8.8.8.8 -c 2
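The VM-to-VM leg can be checked the same way, pinging the other VM's IP (obtainable with `minikube ip -p <profile>`). A sketch (check_vm_connectivity is a hypothetical helper):

```shell
# Hypothetical helper: ping a target from inside a minikube VM and
# report the result. -c 2 sends two probes, -W 2 caps the wait.
check_vm_connectivity() {
  local from=$1 target=$2
  if minikube -p "$from" ssh -- ping -c 2 -W 2 "$target" >/dev/null 2>&1; then
    echo "$from -> $target: OK"
  else
    echo "$from -> $target: FAILED"
  fi
}

# Usage sketch:
# check_vm_connectivity cluster1 "$(minikube ip -p cluster2)"
# check_vm_connectivity cluster2 "$(minikube ip -p cluster1)"
# check_vm_connectivity cluster1 8.8.8.8
```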
At this point we have two minikube VMs up and running. The next step is to install Istio, following the Istio install guide (as of Istio 1.13).
Get the Istio install script and follow the instructions:
curl -L https://istio.io/downloadIstio | sh -
Set the following environment variables:
export CTX_CLUSTER1="cluster1"
export CTX_CLUSTER2="cluster2"
Generate certificates following the docs:
mkdir -p certs
pushd certs
make -f ../istio-*/tools/certs/Makefile.selfsigned.mk root-ca
make -f ../istio-*/tools/certs/Makefile.selfsigned.mk cluster1-cacerts
make -f ../istio-*/tools/certs/Makefile.selfsigned.mk cluster2-cacerts
kubectl --context $CTX_CLUSTER1 create namespace istio-system
kubectl --context $CTX_CLUSTER1 create secret generic cacerts -n istio-system \
--from-file=cluster1/ca-cert.pem \
--from-file=cluster1/ca-key.pem \
--from-file=cluster1/root-cert.pem \
--from-file=cluster1/cert-chain.pem
kubectl --context $CTX_CLUSTER2 create namespace istio-system
kubectl --context $CTX_CLUSTER2 create secret generic cacerts -n istio-system \
--from-file=cluster2/ca-cert.pem \
--from-file=cluster2/ca-key.pem \
--from-file=cluster2/root-cert.pem \
--from-file=cluster2/cert-chain.pem
popd
Now it is time to proceed with the Istio multicluster install guide.
- Set the default network for cluster1
kubectl --context="${CTX_CLUSTER1}" get namespace istio-system && \
kubectl --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
- Configure cluster1 as a primary
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF
istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml
- Install the east-west gateway in cluster1
istio-*/samples/multicluster/gen-eastwest-gateway.sh \
--mesh mesh1 --cluster cluster1 --network network1 | \
istioctl --context="${CTX_CLUSTER1}" install -y -f -
Wait for the east-west gateway to be assigned an external IP address:
kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system
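Rather than re-running the get command by hand, the wait can be scripted. A sketch of a polling helper (wait_for_gateway_ip is hypothetical; it assumes `minikube tunnel` is running for the profile, otherwise no external IP is ever assigned):

```shell
# Hypothetical helper: poll the east-west gateway Service until its
# LoadBalancer is assigned an external IP, then print that IP.
wait_for_gateway_ip() {
  local ctx=$1 timeout=${2:-120} elapsed=0 ip
  while (( elapsed < timeout )); do
    ip=$(kubectl --context "$ctx" -n istio-system get svc istio-eastwestgateway \
         -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    sleep 5
    elapsed=$(( elapsed + 5 ))
  done
  return 1
}

# Usage sketch:
# wait_for_gateway_ip cluster1
```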
- Expose the control plane and services in cluster1
kubectl apply --context="${CTX_CLUSTER1}" -n istio-system -f \
istio-*/samples/multicluster/expose-istiod.yaml
kubectl --context="${CTX_CLUSTER1}" apply -n istio-system -f \
istio-*/samples/multicluster/expose-services.yaml
- Set the default network for cluster2
kubectl --context="${CTX_CLUSTER2}" get namespace istio-system && \
kubectl --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
- Enable API Server Access to cluster2
istioctl x create-remote-secret \
--context="${CTX_CLUSTER2}" \
--name=cluster2 | \
kubectl apply -f - --context="${CTX_CLUSTER1}"
- Configure cluster2 as a remote
export DISCOVERY_ADDRESS=$(kubectl \
--context="${CTX_CLUSTER1}" \
-n istio-system get svc istio-eastwestgateway \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml
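If `minikube tunnel` is not running for cluster1, DISCOVERY_ADDRESS comes back empty and the install above silently produces a broken remote config. A sketch of an early sanity check (require_discovery_address is a hypothetical helper):

```shell
# Hypothetical helper: fail fast if the discovery address is empty.
require_discovery_address() {
  if [ -z "$1" ]; then
    echo "DISCOVERY_ADDRESS is empty; is 'minikube tunnel' running for cluster1?" >&2
    return 1
  fi
  echo "using discovery address: $1"
}

# Usage sketch (run before istioctl install):
# require_discovery_address "$DISCOVERY_ADDRESS" || exit 1
```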
- Install the east-west gateway in cluster2
istio-*/samples/multicluster/gen-eastwest-gateway.sh \
--mesh mesh1 --cluster cluster2 --network network2 | \
istioctl --context="${CTX_CLUSTER2}" install -y -f -
Wait for the east-west gateway to be assigned an external IP address:
kubectl --context="${CTX_CLUSTER2}" get svc istio-eastwestgateway -n istio-system
- Expose services in cluster2
kubectl --context="${CTX_CLUSTER2}" apply -n istio-system -f \
istio-*/samples/multicluster/expose-services.yaml
- Enable Endpoint Discovery
istioctl x create-remote-secret \
--context="${CTX_CLUSTER1}" \
--name=cluster1 | \
kubectl apply -f - --context="${CTX_CLUSTER2}"
istioctl x create-remote-secret \
--context="${CTX_CLUSTER2}" \
--name=cluster2 | \
kubectl apply -f - --context="${CTX_CLUSTER1}"
To verify the installation, we follow the Istio cross-cluster traffic verification guide.
- Deploy the HelloWorld Service
kubectl create --context="${CTX_CLUSTER1}" namespace sample
kubectl create --context="${CTX_CLUSTER2}" namespace sample
kubectl label --context="${CTX_CLUSTER1}" namespace sample \
istio-injection=enabled
kubectl label --context="${CTX_CLUSTER2}" namespace sample \
istio-injection=enabled
kubectl apply --context="${CTX_CLUSTER1}" \
-f istio-*/samples/helloworld/helloworld.yaml \
-l service=helloworld -n sample
kubectl apply --context="${CTX_CLUSTER2}" \
-f istio-*/samples/helloworld/helloworld.yaml \
-l service=helloworld -n sample
- Deploy HelloWorld V1
kubectl apply --context="${CTX_CLUSTER1}" \
-f istio-*/samples/helloworld/helloworld.yaml \
-l version=v1 -n sample
Wait until the status of helloworld-v1 is Running:
kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=helloworld
- Deploy HelloWorld V2
kubectl apply --context="${CTX_CLUSTER2}" \
-f istio-*/samples/helloworld/helloworld.yaml \
-l version=v2 -n sample
Wait until the status of helloworld-v2 is Running:
kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=helloworld
- Deploy Sleep
kubectl apply --context="${CTX_CLUSTER1}" \
-f istio-*/samples/sleep/sleep.yaml -n sample
kubectl apply --context="${CTX_CLUSTER2}" \
-f istio-*/samples/sleep/sleep.yaml -n sample
Wait until the status of the Sleep pod is Running:
kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=sleep
kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=sleep
- Verifying Cross-Cluster Traffic
for i in `seq 10`; do \
kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS helloworld.sample:5000/hello; \
done
for i in `seq 10`; do \
kubectl exec --context="${CTX_CLUSTER2}" -n sample -c sleep \
"$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l \
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS helloworld.sample:5000/hello; \
done
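To make the split easier to eyeball, the responses from the loops above can be tallied per version (tally_versions is a hypothetical helper):

```shell
# Hypothetical helper: count responses per HelloWorld version,
# most frequent first.
tally_versions() {
  grep -o 'version: v[0-9]*' | sort | uniq -c | sort -rn
}

# Usage sketch: pipe the output of the curl loop through it, e.g.
# ( ...the for loop above... ) | tally_versions
```

A roughly even count for v1 and v2 confirms that traffic is load-balanced across both clusters.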
Verify that the HelloWorld version toggles between v1 and v2:
Hello version: v2, instance: helloworld-v2-54df5f84b-44sc4
Hello version: v1, instance: helloworld-v1-776f57d5f6-zqnlg
Hello version: v2, instance: helloworld-v2-54df5f84b-44sc4
Hello version: v1, instance: helloworld-v1-776f57d5f6-zqnlg
Hello version: v1, instance: helloworld-v1-776f57d5f6-zqnlg
...
- Check IP routes on the minikube VMs:
minikube -p cluster1 ssh -- ip route
minikube -p cluster2 ssh -- ip route
Open items:
- Remove the default route entry via eth0.
- How do we set up connectivity between two minikube VMs running on different laptops?