Autoscaling with the HAProxy Kubernetes Ingress Controller and KEDA
$ helm repo add prometheus-community \
https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/prometheus
$ kubectl port-forward service/prometheus-server 9090:80
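With Prometheus installed and the port-forward running, one quick sanity check (assuming the default localhost:9090 forward above) is to hit its HTTP API and confirm it answers queries:

$ curl -s 'http://localhost:9090/api/v1/query?query=up' | head -c 300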
controller:
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "1024"
$ helm repo add haproxytech https://haproxytech.github.io/helm-charts
$ helm repo update
$ helm install kubernetes-ingress haproxytech/kubernetes-ingress -f values.yaml
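As a sanity check, the controller pods should be running and carry the scrape annotations from values.yaml. The label selector below assumes the chart's standard app.kubernetes.io/name label; adjust it if your chart version labels pods differently:

$ kubectl get pods -l app.kubernetes.io/name=kubernetes-ingress
$ kubectl describe pods -l app.kubernetes.io/name=kubernetes-ingress | grep prometheus.io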
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: quay.io/nickmramirez/webapp
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: app
  name: app
  annotations:
    haproxy.org/pod-maxconn: "30"
spec:
  selector:
    run: app
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: default
  annotations:
    haproxy.org/path-rewrite: /app/(.*) /\1    # strip off /app from URL path
spec:
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80
$ kubectl apply -f app.yaml
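Before generating load, it can help to verify that the Ingress routes /app to the service and strips the prefix. The node IP and NodePort below are the ones used by the k6 script later in this gist; substitute the address of your own ingress controller:

$ curl -i http://192.168.99.105:30034/app/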
$ helm repo add kedacore https://kedacore.github.io/charts
$ helm repo update
$ kubectl create namespace keda
$ helm install keda kedacore/keda --namespace keda
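A quick check that the KEDA operator started before creating any ScaledObjects:

$ kubectl get pods --namespace keda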
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: app-scaledobject
spec:
  scaleTargetRef:
    kind: Deployment
    name: app
  pollingInterval: 20
  minReplicaCount: 1
  maxReplicaCount: 10
  advanced:
    restoreToOriginalReplicaCount: false
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
          - type: Percent
            value: 25
            periodSeconds: 60
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus-server.default.svc.cluster.local
      metricName: haproxy_backend_current_queue
      query: sum(avg_over_time(haproxy_backend_current_queue{proxy="default-app-http-port"}[1m]))
      threshold: '10'
$ kubectl apply -f scaledobject.yaml
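Behind the scenes, KEDA manages a HorizontalPodAutoscaler for the ScaledObject. The HPA name below assumes KEDA's default keda-hpa-<scaledobject-name> convention; adjust it if your KEDA version names it differently:

$ kubectl get scaledobject app-scaledobject
$ kubectl get hpa keda-hpa-app-scaledobject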
import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
  stages: [
    { duration: '30s', target: 100 }, // ramp up traffic from 1 to 100 users over 30 seconds
    { duration: '2m', target: 100 },  // stay at 100 users for 2 minutes
    { duration: '10s', target: 200 }, // ramp up to 200 users
    { duration: '5m', target: 200 },  // stay at 200 users for 5 minutes
  ]
};

// call the /delay/5 URL path to simulate a slow application
export default () => {
  http.get('http://192.168.99.105:30034/app/delay/5');
  sleep(1);
};
$ k6 run test.js
  execution: local
     script: test.js
     output: -

  scenarios: (100.00%) 1 scenario, 200 max VUs, 8m10s max duration (incl. graceful stop):
           * default: Up to 200 looping VUs for 7m40s over 4 stages (gracefulRampDown: 30s, gracefulStop: 30s)

running (0m49.8s), 100/200 VUs, 432 complete and 0 interrupted iterations
default   [===>----------------------------------] 100/200 VUs  0m49.8s/7m40.0s
$ kubectl get pods -l "run=app" -w
NAME                   STATUS              AGE
# starting off with just 1 pod
app-659c4db59d-qwsgz   Running             42h
# When load was 100 concurrent users,
# another pod was created
app-659c4db59d-xtsg7   Pending             0s
app-659c4db59d-xtsg7   ContainerCreating   0s
app-659c4db59d-xtsg7   Running             3s
# When load was 200 concurrent users,
# two more pods were created
app-659c4db59d-9hlgf   Pending             0s
app-659c4db59d-zdjjz   Pending             0s
app-659c4db59d-9hlgf   ContainerCreating   0s
app-659c4db59d-zdjjz   ContainerCreating   0s
app-659c4db59d-zdjjz   Running             2s
app-659c4db59d-9hlgf   Running             3s
# After 5 minutes, KEDA began scaling back down,
# 1 pod each minute
app-659c4db59d-9hlgf   Terminating         5m49s
app-659c4db59d-zdjjz   Terminating         6m51s
app-659c4db59d-xtsg7   Terminating         9m52s
haproxy_backend_active_servers{proxy="default-app-http-port"}
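The same expression can be run against the port-forwarded Prometheus from the first step, for example through its HTTP API with curl:

$ curl -s 'http://localhost:9090/api/v1/query' \
    --data-urlencode 'query=haproxy_backend_active_servers{proxy="default-app-http-port"}'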