Multi-network Multi-cluster Istio Service Mesh on Minikube kvm VMs Install Guide

Overview

Introduction

This writeup shows how to bring up a multi-cluster, multi-network Istio service mesh (Primary-Remote on different networks) using minikube VMs on a single Linux machine.

Acknowledgments: This text reuses the Istio docs. Thanks for writing such great documentation, Istio team!

Setup

Hardware

The host machine (a less beefy system might also work):

  • x86_64 architecture
  • 8 CPU cores
  • 32GB memory
  • 40+ GB storage (2x 20 GB for the minikube VMs -- can be reduced)

Software

Note: similar/recent versions should work too

  • Debian GNU/Linux 11
  • qemu 5.2, libvirt 7.0.0 (for the kvm2 minikube driver)
  • minikube v1.25.2
  • kubectl v1.23.6
  • kubernetes v1.21.12
  • istio v1.13.4

K8s Clusters

We use two disjoint IP networks for the two clusters:

  • 1st cluster's service IP range: 10.96.0.0/12
  • 2nd cluster's service IP range: 10.112.0.0/12

Installation Steps

Next, we bring up two minikube VMs and install an Istio service mesh across them.

Create minikube instances

First, we create the minikube VMs.

Start the clusters

  • Start the first cluster (it will be used as the Istio primary)
minikube start \
 --driver=kvm2 \
 --cni=calico \
 --service-cluster-ip-range="10.96.0.0/12" \
 --profile=cluster1 \
 --memory=16384 \
 --cpus=4
  • Start the second cluster (it will be used as the Istio remote)
minikube start \
 --driver=kvm2 \
 --cni=calico \
 --service-cluster-ip-range="10.112.0.0/12" \
 --profile=cluster2 \
 --memory=16384 \
 --cpus=4
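
At this point both clusters should be running. A quick sanity check (minikube creates a kubectl context named after each profile):

minikube profile list
kubectl config get-contexts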

Expected Setup

Once the minikube VMs are up, expect to see a setup of VMs and virtual bridges similar to this:

+--------+  +-------------------+  +--------+
| virbr1 |  |      virbr0       |  | virbr2 |
+-----+--+  +-+-------------+---+  +-+------+
      |       |             |         |
  +---+-------+-+        +--+--------+-+
  | eth1    eth0|        |eth0     eth1|
  |             |        |             |
  |  cluster1   |        |  cluster2   |
  | minikube VM |        | minikube VM |
  |             |        |             |
  +-------------+        +-------------+

Change virbr forwarding mode

To enable connectivity between the two minikube VMs, we need to enable routing on the virtual bridges manually (as of minikube v1.25.2). More info: kubernetes/minikube#11499

Steps

1. Stop minikube VMs:

minikube stop -p cluster1
minikube stop -p cluster2

2. Edit virbr network configs

2.1 Open up the editor
sudo virsh net-edit mk-cluster1
2.2 Add line <forward mode='route'/>
 <network>
   <name>mk-cluster1</name>
   <uuid>a0dc0099-7016-44b9-96e1-31005095eaf8</uuid>
+  <forward mode='route'/>
   <bridge name='virbr1' stp='on' delay='0'/>
   <mac address='52:54:00:f1:7b:c8'/>
   <dns enable='no'/>
   <ip address='192.168.39.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.168.39.2' end='192.168.39.254'/>
       <host mac='52:54:00:6e:ef:95' name='cluster1' ip='192.168.39.102'/>
     </dhcp>
   </ip>
 </network>
2.3 Repeat the editing steps for network mk-cluster2
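2.4 (optional) Verify that the <forward mode='route'/> element is now present in both network definitions
sudo virsh net-dumpxml mk-cluster1 | grep forward
sudo virsh net-dumpxml mk-cluster2 | grep forward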

3. Restart virbr networks

3.1 Stop networks
sudo virsh net-destroy mk-cluster1
sudo virsh net-destroy mk-cluster2
3.2 Start networks
sudo virsh net-start mk-cluster1
sudo virsh net-start mk-cluster2
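3.3 (optional) Check that both networks are active again
sudo virsh net-list --all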

4. Start minikube VMs

minikube start -p cluster1
minikube start -p cluster2

Disable iptables traffic filtering on virtual bridges

libvirt installs iptables rules for the virtual bridges that filter forwarded traffic and prevent connectivity between the VMs. A workaround is to delete the relevant REJECT rules:

sudo iptables -D LIBVIRT_FWI -o virbr2 -j REJECT --reject-with icmp-port-unreachable
sudo iptables -D LIBVIRT_FWI -o virbr1 -j REJECT --reject-with icmp-port-unreachable
sudo iptables -D LIBVIRT_FWO -i virbr2 -j REJECT --reject-with icmp-port-unreachable
sudo iptables -D LIBVIRT_FWO -i virbr1 -j REJECT --reject-with icmp-port-unreachable
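
Note that libvirt typically re-creates its filtering rules when a network is restarted, so this step may have to be repeated after a stop/start cycle. The current rules can be inspected with:

sudo iptables -L LIBVIRT_FWI -n -v
sudo iptables -L LIBVIRT_FWO -n -v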

Add routes on minikube hosts

The k8s service networks need to be added to the routing tables of the minikube VMs.

Add route to cluster2 service CIDR on cluster1

minikube ssh -p cluster1 -- sudo ip route add 10.112.0.0/12 via 192.168.39.1 dev eth0

Add route to cluster1 service CIDR on cluster2

minikube ssh -p cluster2 -- sudo ip route add 10.96.0.0/12 via 192.168.50.1 dev eth0
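
These routes are not persistent: they live only in the VMs' runtime routing tables and have to be re-added after the minikube VMs are restarted. To verify them:

minikube ssh -p cluster1 -- ip route
minikube ssh -p cluster2 -- ip route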

Start minikube tunnels

Fire up a minikube tunnel for each cluster (on the host) to enable LoadBalancer services (this is necessary for the Istio gateways).

Start the minikube tunnel for cluster 1

minikube -p cluster1 tunnel

Start the minikube tunnel for cluster 2

minikube -p cluster2 tunnel
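
Each tunnel keeps running in the foreground, so start them in separate terminals on the host and leave them running. On Linux, minikube tunnel typically installs a route to the cluster's service CIDR on the host; once both tunnels are up, the routes should be visible there:

ip route | grep -e 10.96 -e 10.112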

(optional) Check Connectivity

Two types of connections should be checked (a simple ping test is sufficient):

  • connectivity between VMs

  • connectivity between a VM and the Internet

minikube -p cluster1 ssh -- ping 8.8.8.8 -c 2
minikube -p cluster2 ssh -- ping 8.8.8.8 -c 2
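
To check connectivity between the VMs, one option is to ping each VM from the other one using its minikube IP:

minikube -p cluster1 ssh -- ping -c 2 "$(minikube ip -p cluster2)"
minikube -p cluster2 ssh -- ping -c 2 "$(minikube ip -p cluster1)"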

Install Istio

At this point we have two minikube VMs up and running. The next step is to install Istio following the Istio install guide (as of Istio 1.13).

Preparations

Download Istio

Get the Istio install script and follow the instructions:

curl -L https://istio.io/downloadIstio | sh -

Set env variables

Set the following environment variables:

export CTX_CLUSTER1="cluster1"
export CTX_CLUSTER2="cluster2"

Setup certs

Generate certificates following the docs:

mkdir -p certs
pushd certs

make -f ../istio-*/tools/certs/Makefile.selfsigned.mk root-ca
make -f ../istio-*/tools/certs/Makefile.selfsigned.mk cluster1-cacerts
make -f ../istio-*/tools/certs/Makefile.selfsigned.mk cluster2-cacerts

kubectl --context $CTX_CLUSTER1 create namespace istio-system
kubectl --context $CTX_CLUSTER1 create secret generic cacerts -n istio-system \
      --from-file=cluster1/ca-cert.pem \
      --from-file=cluster1/ca-key.pem \
      --from-file=cluster1/root-cert.pem \
      --from-file=cluster1/cert-chain.pem

kubectl --context $CTX_CLUSTER2 create namespace istio-system
kubectl --context $CTX_CLUSTER2 create secret generic cacerts -n istio-system \
      --from-file=cluster2/ca-cert.pem \
      --from-file=cluster2/ca-key.pem \
      --from-file=cluster2/root-cert.pem \
      --from-file=cluster2/cert-chain.pem

popd
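
The cacerts secrets should now exist in the istio-system namespace of both clusters:

kubectl --context $CTX_CLUSTER1 get secret cacerts -n istio-system
kubectl --context $CTX_CLUSTER2 get secret cacerts -n istio-system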

Install multi-cluster Istio

Now it is time to proceed with the Istio multicluster install guide.

  • Set the default network for cluster1
kubectl --context="${CTX_CLUSTER1}" get namespace istio-system && \
kubectl --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
  • Configure cluster1 as a primary
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF

istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml
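
Once the install finishes, the control plane pods should be running in cluster1:

kubectl --context="${CTX_CLUSTER1}" get pods -n istio-system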
  • Install the east-west gateway in cluster1
istio-*/samples/multicluster/gen-eastwest-gateway.sh \
  --mesh mesh1 --cluster cluster1 --network network1 | \
  istioctl --context="${CTX_CLUSTER1}" install -y -f -

Wait for the east-west gateway to be assigned an external IP address:

kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system
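
If the EXTERNAL-IP stays in <pending>, make sure the minikube tunnel for cluster1 is still running. The assigned address can also be extracted directly:

kubectl --context="${CTX_CLUSTER1}" -n istio-system get svc istio-eastwestgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'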
  • Expose the control plane and services in cluster1
kubectl apply --context="${CTX_CLUSTER1}" -n istio-system -f \
    istio-*/samples/multicluster/expose-istiod.yaml

kubectl --context="${CTX_CLUSTER1}" apply -n istio-system -f \
    istio-*/samples/multicluster/expose-services.yaml
  • Set the default network for cluster2
kubectl --context="${CTX_CLUSTER2}" get namespace istio-system && \
  kubectl --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
  • Enable API Server Access to cluster2
istioctl x create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 | \
    kubectl apply -f - --context="${CTX_CLUSTER1}"
  • Configure cluster2 as a remote
export DISCOVERY_ADDRESS=$(kubectl \
    --context="${CTX_CLUSTER1}" \
    -n istio-system get svc istio-eastwestgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF

istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml
  • Install the east-west gateway in cluster2
istio-*/samples/multicluster/gen-eastwest-gateway.sh \
    --mesh mesh1 --cluster cluster2 --network network2 | \
    istioctl --context="${CTX_CLUSTER2}" install -y -f -

Wait for the east-west gateway to be assigned an external IP address:

kubectl --context="${CTX_CLUSTER2}" get svc istio-eastwestgateway -n istio-system
  • Expose services in cluster2
kubectl --context="${CTX_CLUSTER2}" apply -n istio-system -f \
    istio-*/samples/multicluster/expose-services.yaml
  • Enable Endpoint Discovery
istioctl x create-remote-secret \
  --context="${CTX_CLUSTER1}" \
  --name=cluster1 | \
  kubectl apply -f - --context="${CTX_CLUSTER2}"

istioctl x create-remote-secret \
  --context="${CTX_CLUSTER2}" \
  --name=cluster2 | \
  kubectl apply -f - --context="${CTX_CLUSTER1}"
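
The remote secrets should now be present in the istio-system namespace of each cluster; istiod discovers them via the istio/multiCluster=true label:

kubectl --context="${CTX_CLUSTER1}" get secret -n istio-system -l istio/multiCluster=true
kubectl --context="${CTX_CLUSTER2}" get secret -n istio-system -l istio/multiCluster=true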

(optional) Verify Install

We follow the Istio multicluster verification guide.

Steps

  • Deploy the HelloWorld Service
kubectl create --context="${CTX_CLUSTER1}" namespace sample
kubectl create --context="${CTX_CLUSTER2}" namespace sample

kubectl label --context="${CTX_CLUSTER1}" namespace sample \
    istio-injection=enabled
kubectl label --context="${CTX_CLUSTER2}" namespace sample \
    istio-injection=enabled

kubectl apply --context="${CTX_CLUSTER1}" \
    -f istio-*/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample
kubectl apply --context="${CTX_CLUSTER2}" \
    -f istio-*/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample
  • Deploy HelloWorld V1
kubectl apply --context="${CTX_CLUSTER1}" \
    -f istio-*/samples/helloworld/helloworld.yaml \
    -l version=v1 -n sample

Wait until the status of helloworld-v1 is Running:

kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=helloworld
  • Deploy HelloWorld V2
kubectl apply --context="${CTX_CLUSTER2}" \
    -f istio-*/samples/helloworld/helloworld.yaml \
    -l version=v2 -n sample

Wait until the status of helloworld-v2 is Running:

kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=helloworld
  • Deploy Sleep
kubectl apply --context="${CTX_CLUSTER1}" \
    -f istio-*/samples/sleep/sleep.yaml -n sample
kubectl apply --context="${CTX_CLUSTER2}" \
    -f istio-*/samples/sleep/sleep.yaml -n sample

Wait until the status of the Sleep pod is Running:

kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=sleep
kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=sleep
  • Verifying Cross-Cluster Traffic
for i in `seq 10`; do \
kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -sS helloworld.sample:5000/hello; \
done
for i in `seq 10`; do \
kubectl exec --context="${CTX_CLUSTER2}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -sS helloworld.sample:5000/hello; \
done

Verify that the HelloWorld version toggles between v1 and v2:

Hello version: v2, instance: helloworld-v2-54df5f84b-44sc4
Hello version: v1, instance: helloworld-v1-776f57d5f6-zqnlg
Hello version: v2, instance: helloworld-v2-54df5f84b-44sc4
Hello version: v1, instance: helloworld-v1-776f57d5f6-zqnlg
Hello version: v1, instance: helloworld-v1-776f57d5f6-zqnlg
...
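
If the responses only ever come from the local version, cross-cluster endpoints are likely not being discovered (in this setup, a missing route or a stopped minikube tunnel is a common cause). One way to check is to dump the helloworld endpoints known to the sleep sidecar; both the local pod and the remote east-west gateway address should appear:

istioctl --context="${CTX_CLUSTER1}" proxy-config endpoint \
    "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" -n sample | grep helloworld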

Troubleshooting

VMs cannot access the Internet after reboot (minikube stop/start)

  • Check IP routes on minikube VMs:
minikube -p cluster1 ssh -- ip route
minikube -p cluster2 ssh -- ip route
  • Remove the default entry via eth0
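
For example, assuming the stale default route points out via eth0 (check the output of the previous command first), it can be removed with:

minikube -p cluster1 ssh -- sudo ip route del default dev eth0
minikube -p cluster2 ssh -- sudo ip route del default dev eth0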