Testing Istio Circuit Breaker

Set up an environment for testing

Set up a kind cluster and Istio

Create kind cluster manifest

cat << EOF > cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4 
nodes:
- role: control-plane
- role: worker
EOF

Create kind cluster

K8S_NODE_IMAGE=v1.19.11
kind create cluster --name my-kind-cluster \
--image=kindest/node:${K8S_NODE_IMAGE} \
--config cluster.yaml

kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.13", GitCommit:"53c7b65d4531a749cd3a7004c5212d23daa044a9", GitTreeState:"clean", BuildDate:"2021-07-15T20:58:11Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.11", GitCommit:"c6a2f08fc4378c5381dd948d9ad9d1080e3e6b33", GitTreeState:"clean", BuildDate:"2021-05-27T23:47:11Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}

kubectl get node
NAME                            STATUS   ROLES    AGE   VERSION
my-kind-cluster-control-plane   Ready    master   37m   v1.19.11
my-kind-cluster-worker          Ready    <none>   37m   v1.19.11

Install Istio 1.12.1 using the demo configuration profile

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.12.1 sh
cd istio-1.12.1
./bin/istioctl install --set profile=demo -y

./bin/istioctl version
client version: 1.12.1
control plane version: 1.12.1
data plane version: 1.12.1 (4 proxies)
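
Optionally, confirm that the Istio control plane and gateway pods in the istio-system namespace are running (the demo profile installs istiod plus the ingress and egress gateways):

kubectl get pods -n istio-system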

Deploy sample apps

Create namespaces testns1 and testns2 for testing

kubectl create ns testns1
kubectl create ns testns2
# label the namespaces to enable Istio sidecar proxy injection
kubectl label namespace testns1 istio-injection=enabled --overwrite
kubectl label namespace testns2 istio-injection=enabled --overwrite
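
Optionally, verify that the injection label is set on both namespaces (the -L flag adds a column showing the istio-injection label value):

kubectl get namespace testns1 testns2 -L istio-injection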

Deploy the sample apps:

  • deploy the sleep app and the fortio client app in testns1
  • deploy the httpbin app in testns2

kubectl apply -f samples/sleep/sleep.yaml -n testns1
kubectl apply -f samples/httpbin/sample-client/fortio-deploy.yaml -n testns1
kubectl apply -f samples/httpbin/httpbin.yaml -n testns2

Scale out the sleep app to 2 replicas

kubectl scale --replicas=2 deploy/sleep -n testns1
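
Optionally, verify the deployments. Once the sidecar proxy is injected, each pod in testns1 and testns2 should report 2/2 containers ready, and sleep should have 2 replicas:

kubectl get pods -n testns1
kubectl get pods -n testns2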

Deploy VirtualService and DestinationRule

Deploy DestinationRule httpbin in testns2

cat << EOM | kubectl apply -n testns2 -f - 
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin.testns2.svc.cluster.local
  trafficPolicy:
    # eject up to 100% of the pods for 10s when 3 consecutive 5xx errors are detected (checked every 5s)
    outlierDetection:
      consecutive5xxErrors: 3
      consecutiveGatewayErrors: 3
      interval: 5s
      baseEjectionTime: 10s
      maxEjectionPercent: 100
  subsets:
  - name: v1
    labels:
      version: v1
EOM
  • consecutiveGatewayErrors: When the upstream host is accessed over HTTP, a 502, 503, or 504 return code qualifies as a gateway error. When the upstream host is accessed over an opaque TCP connection, connect timeouts and connection error/failure events qualify as a gateway error.
  • consecutive5xxErrors: When the upstream host is accessed over HTTP, any 5xx return code qualifies as a 5xx error. When the upstream host is accessed over an opaque TCP connection, connect timeouts, connection error/failure and request failure events qualify as a 5xx error. Note that consecutiveGatewayErrors and consecutive5xxErrors can be used separately or together: because the errors counted by consecutiveGatewayErrors are also included in consecutive5xxErrors, a consecutiveGatewayErrors value greater than or equal to consecutive5xxErrors has no effect.

ref: https://istio.io/latest/docs/reference/config/networking/destination-rule/#OutlierDetection
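
To confirm that the outlier detection settings have actually been pushed to a client-side sidecar, you can dump the Envoy cluster configuration from one of the sleep pods. This is a verification sketch; the exact JSON field layout may vary with the Istio/Envoy version:

SLEEP_POD1=$(kubectl get pod -l app=sleep -n testns1 -o jsonpath='{.items[0].metadata.name}')
./bin/istioctl proxy-config cluster "${SLEEP_POD1}" -n testns1 \
  --fqdn httpbin.testns2.svc.cluster.local -o json | grep -A 5 outlierDetection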

Deploy VirtualService httpbin in testns2

cat << EOM | kubectl apply -n testns2 -f - 
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.testns2.svc.cluster.local
  http:
  - route:
    - destination:
        host: httpbin.testns2.svc.cluster.local
        port:
          number: 8000
        subset: v1
    timeout: 10s
EOM

Check that the DestinationRule and VirtualService resources have been created in testns2 as expected

kubectl get dr -n testns2
NAME      HOST                                AGE
httpbin   httpbin.testns2.svc.cluster.local   16s

kubectl get vs -n testns2
NAME      GATEWAYS   HOSTS                                   AGE
httpbin              ["httpbin.testns2.svc.cluster.local"]   54s
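
As an additional sanity check, istioctl's configuration analyzer can be run against the namespace; it reports common misconfigurations in the DestinationRule and VirtualService, and prints a confirmation message when no issues are found:

./bin/istioctl analyze -n testns2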

Testing Behavior of Istio Circuit Breaker

Execute the following commands, each in a separate terminal window

Terminal1

SLEEP_POD1=$(kubectl get pod -l app=sleep -n testns1 -o jsonpath='{.items[0].metadata.name}')
watch -n 1 kubectl exec "${SLEEP_POD1}" -c sleep -n testns1 -- curl -sv  "http://httpbin.testns2.svc.cluster.local/headers"

Terminal2

SLEEP_POD1=$(kubectl get pod -l app=sleep -n testns1 -o jsonpath='{.items[0].metadata.name}')
watch -n 1 kubectl exec "${SLEEP_POD1}" -c sleep -n testns1 -- curl -sv  "http://httpbin.testns2.svc.cluster.local/status/500"

Terminal3

SLEEP_POD2=$(kubectl get pod -l app=sleep -n testns1 -o jsonpath='{.items[1].metadata.name}')
watch -n 1 kubectl exec "${SLEEP_POD2}" -c sleep -n testns1 -- curl -sv  "http://httpbin.testns2.svc.cluster.local/headers"

Observed behavior

The consecutive error threshold (3) and the ejection period (10s) are applied to each sleep pod's client-side proxy separately: once httpbin is ejected by one sleep pod's proxy (triggered from Terminal2), requests from that same pod (Terminal1) fail, while requests from the other sleep pod (Terminal3) keep succeeding.
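
To observe the circuit breaker at the Envoy level, you can also inspect the outlier-detection statistics on a client sidecar. Counters such as outlier_detection.ejections_enforced_total for the httpbin cluster increase each time the endpoint is ejected (exact stat names can differ between Envoy versions):

SLEEP_POD1=$(kubectl get pod -l app=sleep -n testns1 -o jsonpath='{.items[0].metadata.name}')
kubectl exec "${SLEEP_POD1}" -c istio-proxy -n testns1 -- pilot-agent request GET stats | grep testns2 | grep outlier_detection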
