# This is the script that performs the step-by-step procedure to bring up an edge GW.
# We use KinD as our cluster environment; the first section sets up the environment.
#
# The prerequisites and version info are as follows.
# The two clusters must have network reachability to each other. We use MetalLB to help achieve that.
# Pod-level reachability is not required.
# The kube apiservers in each cluster must be able to reach each other. Appropriate KinD settings allow that.
#
# kind version
# kind v0.11.1 go1.17.2 linux/amd64
# kubectl version
# Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:03:20Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
# Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-21T23:01:33Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
#
# ------------------------ Section 1 Cluster Env ----------------------------------
rm /root/.banzai/backyards/kind-central*
. kind_utils.sh
kind delete cluster --name=central
kind delete cluster --name=edge
# Create 2 clusters
kind_create_cluster central kubeconfigs/central.yaml 10.87.49.211 0
kind_create_cluster edge kubeconfigs/edge.yaml 10.87.49.211 1
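#
# kind_utils.sh is not included in this gist. As a rough sketch (an assumption,
# not the actual helper), kind_create_cluster likely feeds kind a config that
# publishes the kube apiserver on the host IP (3rd argument), with the 4th
# argument offsetting the port so the two clusters don't collide:
#
#   kind: Cluster
#   apiVersion: kind.x-k8s.io/v1alpha4
#   networking:
#     apiServerAddress: "10.87.49.211"  # host IP so the other cluster can reach this apiserver
#     apiServerPort: 6443               # e.g. 6443 + index to keep the ports distinct
#
# followed by something like: kind create cluster --name=<name> --config=<config> --kubeconfig=<path>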
#
# To minimize edge components we manually load the proxy image for now.
# The proxy image to load depends on which SMM version you are using and whether you are using the default configuration.
# If using upstream Istio you should be able to load images directly from Docker Hub.
kind load docker-image 033498657557.dkr.ecr.us-east-2.amazonaws.com/banzaicloud/istio-proxyv2:v1.11.4-bzc.1 --name=edge
cluster1=kubeconfigs/central.yaml
cluster2=kubeconfigs/edge.yaml
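# $CEN is used by several commands below but is not defined elsewhere in this
# gist; it is assumed to be the kubeconfig flag for the central cluster:
CEN="--kubeconfig=${cluster1}"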
# Setup metallb
cluster1=kubeconfigs/central.yaml cluster2=kubeconfigs/edge.yaml ./multicluster-metallb.sh
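#
# multicluster-metallb.sh is not included here. A minimal sketch of what it
# presumably applies per cluster (an assumption, including the exact ranges): a
# MetalLB ConfigMap giving each cluster a disjoint layer2 pool carved out of the
# kind docker network (172.18.0.0/16), so LoadBalancer services get routable
# addresses - e.g. 172.18.251.x on central, matching the 172.18.251.1 ingress
# address used throughout this script:
#
#   apiVersion: v1
#   kind: ConfigMap
#   metadata:
#     namespace: metallb-system
#     name: config
#   data:
#     config: |
#       address-pools:
#       - name: default
#         protocol: layer2
#         addresses:
#         - 172.18.251.1-172.18.251.250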
#
# ------------------------ Section 2 Istiod Control Plane -------------------------
# We need the Istiod control plane running in the central cluster. We use Service Mesh Manager (SMM) CLI
# commands to install what we need. We used version 1.8.2 for this work.
#
./smm-cli-1.8.2 install -c $cluster1 -a
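#
# Optional sanity check (not part of the original flow): istiod-cp-v111x and the
# meshexpansion gateway should come up in istio-system, and the gateway's
# LoadBalancer address (172.18.251.1 below) is handed out by MetalLB.
kubectl get pods -n istio-system --kubeconfig="${cluster1}"
kubectl get svc -n istio-system --kubeconfig="${cluster1}"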
#
# ------------------------ Section 3 Webhook Setup --------------------------------
# The webhook configuration is required and applied to the edge cluster.
# The first thing to do is make sure the ISTIOD_CUSTOM_HOST values include the host name that will be used in the URL.
# For this work we used an IP address to avoid DNS complexities.
# In an SMM installation the easy way to change this is to edit the istiocontrolplanes CR.
# We also change the pull policy to IfNotPresent so we don't need additional secrets to pull from private ECRs.
# Add to the env: section:
#   - name: ISTIOD_CUSTOM_HOST
#     value: istiod-cp-v111x.istio-system.svc,172.18.251.1
# 172.18.251.1 is the address of the meshexpansion gateway (the ingress point for istiod traffic) on the central cluster.
kubectl edit istiocontrolplanes cp-v111x -n istio-system $CEN
# sleep to let the above change propagate
sleep 10
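# Optional sanity check (not part of the original flow): confirm the custom host
# was picked up in the CR before moving on.
kubectl get istiocontrolplanes cp-v111x -n istio-system $CEN -o yaml | grep -A1 ISTIOD_CUSTOM_HOST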
#
# Note all steps below are edge cluster related and should be repeated for each edge cluster.
#
# We use the central cluster's webhook as a model for the remote cluster's webhook configuration.
# The main thing required is a correct caBundle value - using the central cluster as a model ensures this.
# Add the url for the location of the webhook: url: https://$IP:15017/inject/:ENV:cluster=${NAME}:ENV:net=${network1}
# $IP is the external IP for the ingress in the central cluster that allows istiod to be reached.
# $NAME is the name used for the cluster within istiod. We use an istioctl command below to create the entry.
# $network1 is the name of the network you want to use for the edge clusters.
kubectl get mutatingwebhookconfiguration istio-sidecar-injector-cp-v111x-istio-system $CEN -o yaml > webhook_edge.yaml
cat webhook_edge.yaml | sed '/path: \/inject/d' | sed '/name: istiod-cp-v111x/d' | sed '/namespace: istio-system/d' | sed '/port: 443/d' | sed 's/service:/url: https:\/\/172.18.251.1:15017\/inject\/:ENV:cluster=edge:ENV:net=k3d-demo3/g' | kubectl apply -f - --kubeconfig="${cluster2}"
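#
# For illustration (this is the intended result of the sed pipeline above, not a
# file to apply): each webhook entry's clientConfig should end up with the
# central cluster's caBundle plus a url in place of the service reference:
#
#   clientConfig:
#     caBundle: <copied unchanged from the central cluster's webhook>
#     url: https://172.18.251.1:15017/inject/:ENV:cluster=edge:ENV:net=k3d-demo3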
#
# ------------------------ Section 4 Kube Apiserver Access --------------------------------
#
# We need to create a cluster entry in istiod running in the central cluster. We do this with an istioctl command that
# creates a secret with the access credentials for the edge cluster.
# Note this was failing with a 1.11-based istioctl; we used version 1.8.6.
$HOME/istioctl/istio-1.8.6/bin/istioctl x create-remote-secret --kubeconfig="${cluster2}" --type=remote --namespace=istio-system --service-account=istiod-service-account --create-service-account=true --name=edge | kubectl apply -f - --kubeconfig="${cluster1}"
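#
# Optional sanity check (not part of the original flow): create-remote-secret
# conventionally names the secret istio-remote-secret-<cluster name>.
kubectl get secret istio-remote-secret-edge -n istio-system --kubeconfig="${cluster1}"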
#
# ------------------------ Section 5 xDS mTLS Credentials --------------------------------
#
# To properly connect using mTLS for the xDS exchange we need to add a root cert to the namespace the gateway will be in.
# Create root cert for mounting. Note namespace should match GW deployment
kubectl get configmap istio-ca-root-cert-cp-v111x -n istio-system $CEN -o yaml | sed '/namespace/d' | kubectl apply -f - --kubeconfig="${cluster2}"
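# Optional sanity check (not part of the original flow): because the sed above
# strips the namespace, the configmap lands in the edge kubeconfig's current
# namespace (default here), which must match the gateway's namespace.
kubectl get configmap istio-ca-root-cert-cp-v111x --kubeconfig="${cluster2}"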
#
# ------------------------ Section 6 Service and Endpoint --------------------------------
#
# We need to ensure the discovery address used by the proxy is both resolvable and reachable from the edge cluster
# We do that by creating a selectorless service and endpoint. We use the standard naming for the address
# and the endpoint points to the ingress address on the central cluster that provides istiod reachability.
#
# Create a local svc pointing to the remote ingress GW for pilot and CA access (the discovery_svc.yaml manifest is included at the end of this gist).
kubectl create -f discovery_svc.yaml --kubeconfig="${cluster2}"
#
# ------------------------ Section 7 Deploy Gateway --------------------------------
#
# We deploy a gateway Deployment resource and the Service resource (the gateway_deploy_svc.yaml manifests are included at the end of this gist).
# The webhook injection will add additional environment variables, startup arguments, flags, and xDS settings.
kubectl apply -f gateway_deploy_svc.yaml --kubeconfig="${cluster2}"
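# Optional sanity check (not part of the original flow): the webhook should have
# replaced "image: auto" with a real proxy image and the pod should go Running.
kubectl get pods -l gateway-name=edge-gateway --kubeconfig="${cluster2}" -o wide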
#
# ------------------------ Section 8 Deploy Any Apps --------------------------------
#
# First ensure the namespace that is being used is properly labeled for sidecar injection.
# For our work we deployed two helloworld pods (one in each cluster) and a sleep pod in the edge cluster
# We used the manifest provided by istio and referenced here:
# https://istio.io/latest/docs/setup/install/external-controlplane/
# Point $samples at a local checkout of the istio repo (and make sure $CEN is set as above).
sleep 10
kubectl edit namespace default
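# A non-interactive equivalent of the edit above (assumption: revision-based
# injection keyed on the same istio.io/rev label used in the manifests below):
kubectl label namespace default istio.io/rev=cp-v111x.istio-system --overwrite --kubeconfig="${cluster2}"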
export samples=../../istio.io/istio/samples
kubectl apply -f $samples/helloworld/helloworld.yaml -l service=helloworld --kubeconfig="${cluster2}"
kubectl apply -f $samples/helloworld/helloworld.yaml -l version=v1 --kubeconfig="${cluster2}"
kubectl apply -f $samples/sleep/sleep.yaml --kubeconfig="${cluster2}"
kubectl apply -f $samples/helloworld/helloworld.yaml -l version=v1 --kubeconfig="${cluster1}"
kubectl apply -f $samples/helloworld/helloworld.yaml -l service=helloworld --kubeconfig="${cluster1}"
# Then we curled helloworld and saw the requests were load-balanced across the 2 helloworld pods.
#
#kubectl exec -c sleep sleep-557747455f-x6j4m --kubeconfig="${cluster2}" -- curl -sS helloworld:5000/hello
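# Expected output (the pod hash will differ per deployment): each request returns
# a line like
#   Hello version: v1, instance: helloworld-v1-<hash>
# with the instance alternating between the pods in the two clusters.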
---
# discovery_svc.yaml - the selectorless Service/Endpoints referenced in Section 6
apiVersion: v1
kind: Service
metadata:
  name: istiod-cp-v111x
  namespace: istio-system
spec:
  clusterIP: None
  ports:
  - name: grpc-xds
    port: 15010
    protocol: TCP
    targetPort: 15010
  - name: https-dns
    port: 15012
    protocol: TCP
    targetPort: 15012
  - name: https-webhook
    port: 443
    protocol: TCP
    targetPort: 15017
  - name: http-monitoring
    port: 15014
    protocol: TCP
    targetPort: 15014
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: istiod-cp-v111x
  namespace: istio-system
subsets:
- addresses:
  - ip: 172.18.251.1
  ports:
  - name: https-dns
    port: 15012
    protocol: TCP
  - name: grpc-xds
    port: 15010
    protocol: TCP
  - name: https-webhook
    port: 15017
    protocol: TCP
  - name: http-monitoring
    port: 15014
    protocol: TCP
---
# gateway_deploy_svc.yaml - the gateway Service/Deployment referenced in Section 7
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    gateway-name: edge-gateway
    istio.io/rev: cp-v111x.istio-system
  name: edge-gateway
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: tcp-status-port
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: tls-istiod
    port: 15012
    protocol: TCP
    targetPort: 15012
  - name: http2
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: tls
    port: 15443
    protocol: TCP
    targetPort: 15443
  selector:
    app: edge-gateway
    gateway-name: edge-gateway
    istio.io/rev: cp-v111x.istio-system
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    gateway-name: edge-gateway
    istio.io/rev: cp-v111x.istio-system
  name: edge-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-gateway
      gateway-name: edge-gateway
      istio.io/rev: cp-v111x.istio-system
  strategy: {}
  template:
    metadata:
      annotations:
        # Select the gateway injection template (rather than the default sidecar template)
        inject.istio.io/templates: gateway
      labels:
        app: edge-gateway
        gateway-name: edge-gateway
        istio.io/rev: cp-v111x.istio-system
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - env:
        - name: ISTIO_META_LOCAL_ENDPOINTS_ONLY
          value: "false"
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        image: auto