# istio-mvp

Question: What's the absolute minimum I need to show Istio in action?

Create cluster.

```sh
kind create cluster --name istio-mvp --image kindest/node:v1.26.3
```
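
kind registers a kubectl context named after the cluster (`kind-istio-mvp` here), so a quick readiness check might look like this:

```sh
# kind names its kubectl context "kind-<cluster-name>" by default
kubectl cluster-info --context kind-istio-mvp
kubectl get nodes
```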

Install istioctl, run a precheck, install Istio with the default profile, then check the proxy status.

```sh
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.17.2 sh -
sudo mv istio-*/bin/istioctl /usr/local/bin
istioctl x precheck
istioctl install --set profile=default -y
istioctl proxy-status
```
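
As a sanity check, the default profile installs istiod and an ingress gateway into the `istio-system` namespace; something like the following should show them running:

```sh
# istiod and istio-ingressgateway land in istio-system under the default profile
kubectl -n istio-system get pods
```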

Install the client and server apps into the demos namespace.

| App | Image | Description |
| ------ | ---------------------- | ----------- |
| Client | `curlimages/curl` | A minimal container image from which we will send a curl request to the server |
| Server | `kennethreitz/httpbin` | A simple HTTP request & response service; we will invoke its `/headers` endpoint to reveal the headers received on the inbound request |

NOTES:

- It is technically possible to have the server pod curl itself, but the traditional client/server configuration provides a more compelling Istio experience.
- In this demo you will not immediately "Istio-enable" the namespace, so you can first observe the standard, unmeshed behaviour.
- It is considered good practice to assign identifying service accounts to your deployments, as these become encoded into the SPIFFE identities of the pods.
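
For context on that last note, Istio builds each workload's SPIFFE identity from its namespace and service account. Assuming the default `cluster.local` trust domain, the server pod's identity will take this form:

```
spiffe://cluster.local/ns/demos/sa/server
```
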
```sh
kubectl create namespace demos

cat <<EOF | kubectl -n demos apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  labels:
    app: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      serviceAccountName: client
      containers:
        - name: curl
          image: curlimages/curl
          ports:
            - containerPort: 80
          command: ["sleep", "infinity"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
  labels:
    app: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      serviceAccountName: server
      containers:
        - name: httpbin
          image: kennethreitz/httpbin
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  type: ClusterIP
  selector:
    app: server
  ports:
    - name: http
      port: 80
      targetPort: 80
EOF
```
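
Before curling, it may help to confirm that both deployments have settled and the service is present. A quick check might look like this:

```sh
# wait for each deployment to become available, then list the resources
kubectl -n demos rollout status deploy/client
kubectl -n demos rollout status deploy/server
kubectl -n demos get pods,svc
```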

First, check the headers observed when curling from client to server before the mesh is enabled.

```sh
kubectl -n demos exec -it deploy/client -c curl -- curl http://server.demos.svc.cluster.local/headers
```
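
With no mesh in place, httpbin should report only the headers curl itself sends; representative output (versions will vary) looks roughly like this:

```
{
  "headers": {
    "Accept": "*/*",
    "Host": "server.demos.svc.cluster.local",
    "User-Agent": "curl/8.0.1"
  }
}
```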

Initiate the mesh by Istio-enabling the demos namespace and restarting the workloads. Observe that each restarted pod is auto-injected with an istio-proxy (Envoy) sidecar, so each pod now runs two containers. From this point, all traffic to and from the pod is routed via its proxy. By default, Istio enrolls each proxy for an X.509 certificate that encapsulates the pod's identity and enforces mTLS between all mesh-enabled pods.

```sh
kubectl label namespace/demos istio-injection=enabled
kubectl -n demos rollout restart deploy
```
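
After the restart completes, each pod should report two ready containers: the app container plus the injected sidecar. A quick way to confirm:

```sh
# READY should now show 2/2 for both pods
kubectl -n demos get pods
```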

Recheck the proxy-status and cert chains for the apps.

```sh
istioctl proxy-status
istioctl proxy-config secret $(kubectl get pod -n demos -l app=client -o jsonpath="{.items[0].metadata.name}").demos
istioctl proxy-config secret $(kubectl get pod -n demos -l app=server -o jsonpath="{.items[0].metadata.name}").demos
```
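
To inspect the workload certificate itself rather than just the secret listing, one possible approach (assuming `jq` and `openssl` are available) is to extract the chain from the JSON output and decode it. The field path below follows Envoy's SDS secret dump format:

```sh
# decode the client pod's workload certificate and show its SPIFFE SAN
POD=$(kubectl get pod -n demos -l app=client -o jsonpath="{.items[0].metadata.name}")
istioctl proxy-config secret ${POD}.demos -o json | \
  jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | \
  base64 --decode | \
  openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
```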

Now, check the headers observed when curling from client to server after the mesh is enabled.

```sh
kubectl -n demos exec -it deploy/client -c curl -- curl http://server.demos.svc.cluster.local/headers
```

Notice the addition of a set of X- prefixed headers, including an X-Forwarded-Client-Cert header, which indicates that traffic between the client and server proxies was secured with mutual TLS.
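
The X-Forwarded-Client-Cert value is written by the server-side proxy in Envoy's XFCC format. Assuming the service accounts above, a representative (truncated) value looks something like:

```
X-Forwarded-Client-Cert: By=spiffe://cluster.local/ns/demos/sa/server;Hash=...;Subject="";URI=spiffe://cluster.local/ns/demos/sa/client
```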

You can gain further insight by port-forwarding the Envoy proxy admin interface as follows, then navigating a browser to http://localhost:15000.

```sh
app=client # or server
kubectl -n demos port-forward deploy/${app} 15000:15000
```
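
Alternatively, while the port-forward is running you can query a few standard Envoy admin endpoints directly from another terminal, for example:

```sh
# standard Envoy admin endpoints exposed on port 15000
curl -s localhost:15000/certs                         # certificates held by the proxy
curl -s localhost:15000/clusters | head               # upstream cluster state
curl -s localhost:15000/stats | grep ssl.handshake    # TLS handshake counters
```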