Cilium Cluster Mesh RKE2

Prepare the Clusters

Deploy this HelmChartConfig to every cluster that should join the mesh, e.g. by placing it in /var/lib/rancher/rke2/server/manifests/ on a server node:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    kubeProxyReplacement: strict
    k8sServiceHost: 127.0.0.1
    k8sServicePort: 6443
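    # Must cover the Pod CIDRs of every cluster in the mesh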
    ipv4NativeRoutingCIDR: 10.0.0.0/8
    # Transparent Encryption
    encryption:
      enabled: true
      type: wireguard
    # Cluster-mesh
    # This must be unique for each cluster
    # cluster:
    #   name: cilium01
    #   id: 1
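
For illustration, the second cluster would then carry its own unique values; cilium02 and 2 are placeholders here:

cluster:
  name: cilium02
  id: 2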

Use the CLI to connect 2 clusters

Download and combine the KUBECONFIG files with a tool of your choice and install the Cilium CLI (see the Cilium docs)
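One way to merge the files, assuming they are saved as cilium01.yaml and cilium02.yaml (names are placeholders):

# Merge both kubeconfigs into a single file and check the contexts
KUBECONFIG=cilium01.yaml:cilium02.yaml kubectl config view --flatten > ~/.kube/config
kubectl config get-contexts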

export CLUSTER1=cilium01 CLUSTER2=cilium02
cilium --helm-release-name rke2-cilium clustermesh enable --context $CLUSTER1 --service-type LoadBalancer
kubectl --context=$CLUSTER1 annotate svc -n kube-system clustermesh-apiserver cloudprovider.harvesterhci.io/ipam='dhcp'
# Fix cilium-ca for Hubble
kubectl --context=$CLUSTER1 label secret -n kube-system cilium-ca app.kubernetes.io/managed-by="Helm"
kubectl --context=$CLUSTER1 annotate secret -n kube-system cilium-ca meta.helm.sh/release-name="rke2-cilium"
kubectl --context=$CLUSTER1 annotate secret -n kube-system cilium-ca meta.helm.sh/release-namespace="kube-system"
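# All clusters in a mesh must share the same CA, so copy it over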
kubectl --context=$CLUSTER1 get secret -n kube-system cilium-ca -o yaml | kubectl --context $CLUSTER2 create -f -
# Fix LoadBalancer in CLUSTER2
cilium --helm-release-name rke2-cilium clustermesh enable --context $CLUSTER2 --service-type LoadBalancer
kubectl --context=$CLUSTER2 annotate svc -n kube-system clustermesh-apiserver cloudprovider.harvesterhci.io/ipam='dhcp'

# Check Status of the Cluster Mesh Components in each cluster
cilium clustermesh status --context $CLUSTER1 --wait
cilium clustermesh status --context $CLUSTER2 --wait
# Start connecting
cilium --helm-release-name rke2-cilium clustermesh connect --context $CLUSTER1 --destination-context $CLUSTER2
cilium connectivity test --context $CLUSTER1 --multi-cluster $CLUSTER2

!! Persist all settings applied by the CLI in the HelmChartConfig (compare with helm get values -n kube-system rke2-cilium), otherwise RKE2 can revert them the next time it reconciles the chart !!
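A minimal sketch of that persistence step; the edit workflow is only one option:

# Inspect the values the cilium CLI wrote into the Helm release...
helm get values -n kube-system rke2-cilium -o yaml
# ...and copy the cluster/clustermesh entries into the HelmChartConfig
kubectl --context $CLUSTER1 edit helmchartconfig -n kube-system rke2-cilium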

Use the CLI to connect 3 clusters

Download and combine the KUBECONFIG files with a tool of your choice

export CLUSTER1=cilium01 CLUSTER2=cilium02 CLUSTER3=cilium03
cilium --helm-release-name rke2-cilium clustermesh enable --context $CLUSTER1 --service-type LoadBalancer
kubectl --context=$CLUSTER1 annotate svc -n kube-system clustermesh-apiserver cloudprovider.harvesterhci.io/ipam='dhcp'
# Fix cilium-ca for Hubble
kubectl --context=$CLUSTER1 label secret -n kube-system cilium-ca app.kubernetes.io/managed-by="Helm"
kubectl --context=$CLUSTER1 annotate secret -n kube-system cilium-ca meta.helm.sh/release-name="rke2-cilium"
kubectl --context=$CLUSTER1 annotate secret -n kube-system cilium-ca meta.helm.sh/release-namespace="kube-system"
kubectl --context=$CLUSTER1 get secret -n kube-system cilium-ca -o yaml | kubectl --context $CLUSTER2 create -f -
kubectl --context=$CLUSTER1 get secret -n kube-system cilium-ca -o yaml | kubectl --context $CLUSTER3 create -f -
# Deploy API Server and fix LoadBalancer in CLUSTER2
cilium --helm-release-name rke2-cilium clustermesh enable --context $CLUSTER2 --service-type LoadBalancer
kubectl --context=$CLUSTER2 annotate svc -n kube-system clustermesh-apiserver cloudprovider.harvesterhci.io/ipam='dhcp'
# Deploy API Server and fix LoadBalancer in CLUSTER3
cilium --helm-release-name rke2-cilium clustermesh enable --context $CLUSTER3 --service-type LoadBalancer
kubectl --context=$CLUSTER3 annotate svc -n kube-system clustermesh-apiserver cloudprovider.harvesterhci.io/ipam='dhcp'

# Check Status of the Cluster Mesh Components in each cluster
cilium clustermesh status --context $CLUSTER1 --wait
cilium clustermesh status --context $CLUSTER2 --wait
cilium clustermesh status --context $CLUSTER3 --wait
# Start connecting
cilium --helm-release-name rke2-cilium clustermesh connect --context $CLUSTER1 --destination-context $CLUSTER2
cilium --helm-release-name rke2-cilium clustermesh connect --context $CLUSTER2 --destination-context $CLUSTER3
cilium --helm-release-name rke2-cilium clustermesh connect --context $CLUSTER3 --destination-context $CLUSTER1
cilium connectivity test --context $CLUSTER1 --multi-cluster $CLUSTER2
cilium connectivity test --context $CLUSTER2 --multi-cluster $CLUSTER3
cilium connectivity test --context $CLUSTER3 --multi-cluster $CLUSTER1

!! Persist all settings in the HelmChartConfig here as well (helm get values -n kube-system rke2-cilium), as sketched above !!

Based on: https://docs.cilium.io/en/stable/network/clustermesh/clustermesh/

Test

Configure the Ingress Controller to use ClusterIP and add the correct annotations

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      hostPort:
        enabled: false
      service:
        enabled: true
        type: ClusterIP
        annotations:
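          # global shares the Service across the mesh, affinity=remote prefers endpoints in other clusters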
          service.cilium.io/affinity: remote
          service.cilium.io/global: 'true'

Exec into the cattle-cluster-agent Pod, run curl http://rke2-ingress-nginx-controller.kube-system.svc, and watch the magic in Hubble :)
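A sketch of that test, assuming the agent Deployment lives in the cattle-system namespace (the Rancher default):

# Exec into the cluster agent and curl the global ingress Service
kubectl --context $CLUSTER1 -n cattle-system exec -it deploy/cattle-cluster-agent -- \
  curl -s http://rke2-ingress-nginx-controller.kube-system.svc
# With affinity=remote the reply should come from the other cluster;
# watch the cross-cluster flow in Hubble (if the Hubble UI is enabled)
cilium hubble ui --context $CLUSTER1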
