@therealmitchconnors
Last active April 6, 2023 16:15
Istio Latency Analysis

Istio Performance Analysis

The goal of this experiment is to measure the latency added to a simple service by various Istio components. To isolate the contribution of each component, many combinations of the available components were tested and recorded.

Components

  • Sidecar
  • Istio Ingress
  • Load Balancer Ingress

Other Parameters

  • Scale: 1 or 5 instances of services and ingress
  • Service: Single HTTP server or frontend/backend combo

Methodology

Each combination of parameters was tested using a fortio client running inside the cluster, with 64 concurrent connections for 30 seconds. First, the maximum QPS was measured (using -qps 0); then a second test was run at 75% of that load, as per fortio recommendations, and the P50 and P99 latencies were observed.

Reproducing this study

Isotope files

  • frontend.yaml - includes two services in a frontend/backend configuration
  • singleton.yaml - includes a minimalist service that responds to requests immediately

To generate Kubernetes YAML for these scenarios, run:

go run $GOPATH/src/istio.io/isotope/converter/main.go kubernetes --service-image gcr.io/istio-testing/isotope:0.0.1 singleton.yaml | kubectl apply -f -

Load generator files (fortio used for this experiment, wrk for comparison)

  • fortio.yaml
  • wrk.yaml

For example, to test latency with no sidecar or ingress on the frontend service using the fortio pod above, run:

kubectl exec deployment/client -- fortio load -qps 0 -c 64 -t 30s http://go-sample:8080

The last line of output from the above command will include the maximum QPS of the service in this configuration. Multiply that value by 0.75, and run:

kubectl exec deployment/client -- fortio load -qps [75%] -c 64 -t 30s http://go-sample:8080

This gives realistic results for a service under 75% of maximum load.
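The two-step flow above can be scripted. A minimal sketch follows; the 6400 max-QPS figure is only an example stand-in for whatever your first run actually reports:

```shell
# Example max QPS taken from a first full-throttle run; substitute the
# value fortio reports on the last line of its output.
max_qps=6400

# Compute 75% of the maximum, truncated to a whole number for -qps.
target_qps=$(awk -v q="$max_qps" 'BEGIN { printf "%d", q * 0.75 }')
echo "$target_qps"   # → 4800 for this example

# Second pass at 75% load (same flags as the full-throttle run):
# kubectl exec deployment/client -- fortio load -qps "$target_qps" -c 64 -t 30s http://go-sample:8080
```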

Results

Svc     Scale  Istio Ingress  Istio Sidecar  P99 (s)     P50 (s)     Max QPS
single  1      No             No             0.002662    0.000612    64000
double  1      No             No             0.447564    0.006938    1837
single  1      Istio          No             0.023874    0.009830    5347
double  1      Istio          No             0.367515    0.018978    1300
single  1      Istio          Istio          0.029908    0.014137    4382
double  1      Istio          Istio          0.294953    0.046129    943
single  1      No             Istio          0.017214    0.009103    6400
double  1      No             Istio          0.273101    0.036778    1095
single  5      No             No             0.00226841  0.00057761  98789
double  5      No             No             0.0619578   0.00283516  9454
single  5      Istio          No             0.0819547   0.00272854  11690
double  5      Istio          No             0.113905    0.00507379  4745
single  5      Istio          Istio          0.0319723   0.00660786  6475
double  5      Istio          Istio          0.0541879   0.0147881   2796
single  5      No             Istio          0.0113906   0.00360995  17484
double  5      No             Istio          0.0660513   0.0130243   4301
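As one way to read the table, the small Python sketch below (values copied from the scale-1 "single" rows above) estimates the median per-request latency the sidecar adds:

```python
# P50 latencies in seconds, from the scale-1 "single" rows of the table.
baseline_p50 = 0.000612  # no ingress, no sidecar
sidecar_p50 = 0.009103   # no ingress, Istio sidecar

# Sidecar overhead per request at the median, in milliseconds.
overhead_ms = (sidecar_p50 - baseline_p50) * 1000
print(round(overhead_ms, 2))  # → 8.49 ms added per request
```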
# fortio.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: client
  name: client
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: client
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        sidecar.istio.io/inject: "false"
      creationTimestamp: null
      labels:
        app: client
    spec:
      containers:
      - args:
        - server
        image: fortio/fortio
        imagePullPolicy: Always
        name: fortio
        ports:
        - containerPort: 8080
          name: fortio-web
          protocol: TCP
        - containerPort: 42422
          protocol: TCP
        resources:
          requests:
            cpu: "2"
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
# frontend.yaml
apiVersion: v1alpha1
kind: MockServiceGraph
defaults:
  type: http
  requestSize: 1 B
  responseSize: 1 B
services:
- name: go-sample
  script:
  - call:
      service: go-sample-dependency
- name: go-sample-dependency

# singleton.yaml
defaults:
  requestSize: 1 KB
  responseSize: 1 KB
services:
- name: a
  isEntrypoint: true
# wrk.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: client
  name: wrkclient
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: client
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        sidecar.istio.io/inject: "false"
      creationTimestamp: null
      labels:
        app: client
    spec:
      containers:
      - args:
        - server
        image: williamyeh/wrk
        imagePullPolicy: Always
        name: wrk
        ports:
        - containerPort: 8080
          name: fortio-web
          protocol: TCP
        - containerPort: 42422
          protocol: TCP
        resources:
          requests:
            cpu: "2"
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30