# Source: https://gist.github.com/2dad051fe41bd2bbcf94eda74386ce49
#############################################
# KEDA: Kubernetes Event-Driven Autoscaling #
# https://youtu.be/3lcaawKAv6s              #
#############################################
# Additional Info:
# - KEDA: https://keda.sh
# - Robusta: https://robusta.dev
# - Kubernetes Notifications, Troubleshooting, And Automation With Robusta: https://youtu.be/2P76WVVua8w
# - The Best Performance And Load Testing Tool? k6 By Grafana Labs: https://youtu.be/5OgQuVAR14I
#########
# Setup #
#########
# Create a Kubernetes cluster
# The Gist was NOT tested with local Kubernetes clusters (e.g., minikube, Rancher Desktop, etc.).
# Some changes might be required if you use one.
helm repo add traefik \
    https://helm.traefik.io/traefik
helm repo update
helm upgrade --install traefik traefik/traefik \
    --namespace traefik --create-namespace --wait
# If NOT EKS
export INGRESS_HOST=$(kubectl --namespace traefik \
    get svc traefik \
    --output jsonpath="{.status.loadBalancer.ingress[0].ip}")
# If EKS
export INGRESS_HOSTNAME=$(kubectl --namespace traefik \
    get svc traefik \
    --output jsonpath="{.status.loadBalancer.ingress[0].hostname}")
# If EKS
export INGRESS_HOST=$(dig +short $INGRESS_HOSTNAME)
echo $INGRESS_HOST
# Repeat the `export` command(s) if the output is empty.
# If the output contains more than one IP, wait a while longer and repeat the `export` commands.
# If the output still contains more than one IP, choose one of them and execute `export INGRESS_HOST=[...]`, replacing `[...]` with the selected IP.
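# For example (the IP below is a hypothetical placeholder, not a value to copy):
# export INGRESS_HOST=1.2.3.4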
git clone https://github.com/vfarcic/keda-demo
cd keda-demo
# Install `yq` from https://github.com/mikefarah/yq if you do not have it already
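# For example, on macOS with Homebrew (one of several installation options; see the yq repo for others):
# brew install yq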
yq --inplace \
    ".spec.rules[0].host = \"dot.$INGRESS_HOST.nip.io\"" \
    k8s/ing.yaml
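# After the `yq` command, k8s/ing.yaml should contain an Ingress rule similar to the sketch below
# (the resource and service names are assumptions; only the `host` field is changed by `yq`):
#
#   apiVersion: networking.k8s.io/v1
#   kind: Ingress
#   metadata:
#     name: dot
#   spec:
#     rules:
#     - host: dot.<INGRESS_HOST>.nip.io
#       http:
#         paths:
#         - path: /
#           pathType: ImplementationSpecific
#           backend:
#             service:
#               name: dot
#               port:
#                 number: 80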
cat k6.js \
    | sed -e "s@http\.get.*@http\.get('http://dot.$INGRESS_HOST.nip.io');@g" \
    | tee k6.js
cat k6-100.js \
    | sed -e "s@http\.get.*@http\.get('http://dot.$INGRESS_HOST.nip.io');@g" \
    | tee k6-100.js
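# Both k6 scripts are simple HTTP load tests. After the `sed` edits above, k6.js is roughly
# equivalent to the sketch below (the `vus` and `duration` values are assumptions; k6-100.js
# presumably uses more virtual users; check the files in the repo for the actual settings):
#
#   import http from 'k6/http';
#   export const options = { vus: 50, duration: '30s' };
#   export default function () {
#     http.get('http://dot.<INGRESS_HOST>.nip.io');
#   }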
kubectl create namespace production
kubectl --namespace production \
    apply --filename k8s/
helm repo add kedacore \
    https://kedacore.github.io/charts
helm repo add robusta \
    https://robusta-charts.storage.googleapis.com
helm repo add prometheus-community \
    https://prometheus-community.github.io/helm-charts
helm repo update
helm install keda kedacore/keda \
    --namespace keda \
    --create-namespace \
    --wait
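# Optionally, confirm that the KEDA operator Pods are running (exact Pod names may vary between chart versions)
kubectl --namespace keda get pods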
helm upgrade --install \
    prometheus prometheus-community/prometheus \
    --namespace monitoring \
    --create-namespace \
    --wait
# Execute only if you do not already have the Robusta CLI
pip install -U robusta-cli --no-cache
robusta gen-config
# Follow the instructions from the Wizard
# Do NOT choose to install Prometheus (it's already installed)
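# Note: recent versions of the Robusta chart require `clusterName` to be set (done below with `--set clusterName=dot`)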
helm upgrade --install robusta robusta/robusta \
    --namespace monitoring --create-namespace \
    --values generated_values.yaml \
    --values robusta-values.yaml \
    --set clusterName=dot --wait
#######################################
# Auto-Scaling Applications With KEDA #
#######################################
echo "http://dot.$INGRESS_HOST.nip.io"
# Open it in a browser
kubectl --namespace production \
    get pods
cat keda-prom.yaml
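# keda-prom.yaml defines a KEDA ScaledObject with a Prometheus trigger, roughly like the sketch
# below (the object name, query, and threshold are assumptions; the actual values are in the
# file shown above):
#
#   apiVersion: keda.sh/v1alpha1
#   kind: ScaledObject
#   metadata:
#     name: dot
#   spec:
#     scaleTargetRef:
#       name: dot
#     minReplicaCount: 1
#     maxReplicaCount: 10
#     triggers:
#     - type: prometheus
#       metadata:
#         serverAddress: http://prometheus-server.monitoring.svc:80
#         query: sum(rate(http_requests_total{namespace="production"}[1m]))
#         threshold: "50"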
kubectl --namespace production apply \
    --filename keda-prom.yaml
k6 run k6.js
cat robusta-values.yaml
kubectl --namespace production \
    get pods,hpa,scaledobjects
cat k6-100.js
k6 run k6-100.js
kubectl --namespace production \
    get pods,hpa,scaledobjects
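# Wait for a while and run the same command again to observe how the number of replicas changes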
kubectl --namespace production \
    get pods,hpa,scaledobjects
cat keda-prom.yaml
# Open https://keda.sh/docs/scalers/
# Open https://github.com/knative-sandbox/eventing-autoscaler-keda
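# KEDA supports many scalers besides Prometheus. As an illustration, a `cron` trigger that scales
# the Deployment up during working hours would look roughly like the sketch below (the schedule
# and replica count are only example values):
#
#   triggers:
#   - type: cron
#     metadata:
#       timezone: Etc/UTC
#       start: 0 8 * * *
#       end: 0 18 * * *
#       desiredReplicas: "5"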
###########
# Destroy #
###########
# Destroy or reset the cluster