@vfarcic
Last active January 10, 2024 09:14
# Source: https://gist.github.com/2dad051fe41bd2bbcf94eda74386ce49
#############################################
# KEDA: Kubernetes Event-Driven Autoscaling #
# https://youtu.be/3lcaawKAv6s #
#############################################
# Additional Info:
# - KEDA: https://keda.sh
# - Robusta: https://robusta.dev
# - Kubernetes Notifications, Troubleshooting, And Automation With Robusta: https://youtu.be/2P76WVVua8w
# - The Best Performance And Load Testing Tool? k6 By Grafana Labs: https://youtu.be/5OgQuVAR14I
#########
# Setup #
#########
# Create a Kubernetes cluster
# The Gist was NOT tested with local Kubernetes clusters (e.g., minikube, Rancher Desktop, etc.).
# Some changes might be required.
helm repo add traefik \
https://helm.traefik.io/traefik
helm repo update
helm upgrade --install traefik traefik/traefik \
--namespace traefik --create-namespace --wait
# If NOT EKS
export INGRESS_HOST=$(kubectl --namespace traefik \
get svc traefik \
--output jsonpath="{.status.loadBalancer.ingress[0].ip}")
# If EKS
export INGRESS_HOSTNAME=$(kubectl --namespace traefik \
get svc traefik \
--output jsonpath="{.status.loadBalancer.ingress[0].hostname}")
# If EKS
export INGRESS_HOST=$(dig +short $INGRESS_HOSTNAME)
echo $INGRESS_HOST
# Repeat the `export` command(s) if the output is empty.
# If the output contains more than one IP, wait a while longer and repeat the `export` commands.
# If the output still contains more than one IP, choose one of them and execute `export INGRESS_HOST=[...]` with `[...]` being the selected IP.
git clone https://github.com/vfarcic/keda-demo
cd keda-demo
# Install `yq` from https://github.com/mikefarah/yq if you do not have it already
yq --inplace \
".spec.rules[0].host = \"dot.$INGRESS_HOST.nip.io\"" \
k8s/ing.yaml
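# Optionally, confirm the change by printing the host that the command above just set:
yq ".spec.rules[0].host" k8s/ing.yaml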
cat k6.js \
| sed -e "s@http\.get.*@http\.get('http://dot.$INGRESS_HOST.nip.io');@g" \
| tee k6.js
cat k6-100.js \
| sed -e "s@http\.get.*@http\.get('http://dot.$INGRESS_HOST.nip.io');@g" \
| tee k6-100.js
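# Optionally, confirm that the URL was injected into both load-testing scripts:
grep "http.get" k6.js k6-100.js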
kubectl create namespace production
kubectl --namespace production \
apply --filename k8s/
helm repo add kedacore \
https://kedacore.github.io/charts
helm repo add robusta \
https://robusta-charts.storage.googleapis.com
helm repo add prometheus-community \
https://prometheus-community.github.io/helm-charts
helm repo update
helm install keda kedacore/keda \
--namespace keda \
--create-namespace \
--wait
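# Optionally, confirm that the KEDA operator and its metrics API server are running:
kubectl --namespace keda get pods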
helm upgrade --install \
prometheus prometheus-community/prometheus \
--namespace monitoring \
--create-namespace \
--wait
# Execute only if you do not already have Robusta CLI
pip install -U robusta-cli --no-cache
robusta gen-config
# Follow the instructions from the Wizard
# Do NOT choose to install Prometheus (it's already installed)
helm upgrade --install robusta robusta/robusta \
--namespace monitoring --create-namespace \
--values generated_values.yaml \
--values robusta-values.yaml \
--set clusterName=dot --wait
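# Optionally, confirm that Prometheus and Robusta are up before proceeding:
kubectl --namespace monitoring get pods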
#######################################
# Auto-Scaling Applications With KEDA #
#######################################
echo "http://dot.$INGRESS_HOST.nip.io"
# Open it in a browser
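# Alternatively, confirm from the terminal that the demo app responds:
curl "http://dot.$INGRESS_HOST.nip.io"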
kubectl --namespace production \
get pods
cat keda-prom.yaml
kubectl --namespace production apply \
--filename keda-prom.yaml
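# For reference, a ScaledObject with a Prometheus trigger generally follows the shape
# sketched below. This is only a rough sketch; the names, Prometheus query, and
# threshold are placeholders, and the actual keda-prom.yaml from the repo may differ.
#
#   apiVersion: keda.sh/v1alpha1
#   kind: ScaledObject
#   metadata:
#     name: dot
#   spec:
#     scaleTargetRef:
#       name: dot
#     minReplicaCount: 1
#     maxReplicaCount: 10
#     triggers:
#       - type: prometheus
#         metadata:
#           serverAddress: http://prometheus-server.monitoring
#           query: sum(rate(traefik_service_requests_total[2m]))
#           threshold: "25"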
k6 run k6.js
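# In a second terminal, you can watch the Pods scale while the test runs:
kubectl --namespace production get pods --watch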
cat robusta-values.yaml
kubectl --namespace production \
get pods,hpa,scaledobjects
cat k6-100.js
k6 run k6-100.js
kubectl --namespace production \
get pods,hpa,scaledobjects
kubectl --namespace production \
get pods,hpa,scaledobjects
cat keda-prom.yaml
# Open https://keda.sh/docs/scalers/
# Open https://github.com/knative-sandbox/eventing-autoscaler-keda
###########
# Destroy #
###########
# Destroy or reset the cluster
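# Alternatively, if you prefer to keep the cluster, remove only what this Gist installed
# (release and namespace names match the commands above):
kubectl delete namespace production
helm --namespace monitoring uninstall robusta prometheus
helm --namespace keda uninstall keda
helm --namespace traefik uninstall traefik
kubectl delete namespace monitoring keda traefik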

ghost commented Jul 28, 2022

It is missing the Helm repo for Prometheus. I have forked it and added the prometheus-community repository, plus amended the helm install to use the chart provided by the prometheus-community repo.
If you want, you can update this Gist using https://gist.github.com/fernandojimenez-lk/793ea2222a2aba617a7c5150ae7cb8ba

vfarcic commented Jul 28, 2022

Good catch. Thanks for that.

Gist is updated :)

conradwt commented Jan 5, 2023

@vfarcic I'm seeing the following when attempting to install Traefik into a local K8s cluster using Minikube:

➜ helm upgrade --install \
    traefik traefik/traefik \
    --namespace traefik \
    --create-namespace \
    --wait
Release "traefik" does not exist. Installing it now.

Error: timed out waiting for the condition

Then I ran it again and received a different error:

➜ helm upgrade --install \
    traefik traefik/traefik \
    --namespace traefik \
    --create-namespace \
    --wait
Error: UPGRADE FAILED: timed out waiting for the condition

Do you have any ideas as to why this may be happening?

conradwt commented Jan 5, 2023

After some research, it seems that one needs to use a version of Helm that is compatible with the K8s version. BTW, I was trying to use a K8s 1.26.0 cluster with Helm v3.10.3. Thus, others may find the following link helpful:

https://helm.sh/docs/topics/version_skew

However, making this change (i.e. downgrading to K8s 1.25.5) doesn't resolve the issue for me.

vfarcic commented Jan 5, 2023

I'm not sure whether Traefik works with Minikube. It probably does but you might need to tweak some parameters. My best guess is that the Service type should be changed from LoadBalancer to NodePort. Local Kubernetes clusters tend to be tricky with Ingresses and the easiest option is often to use the one that is baked into it. With Minikube, I believe that's NGINX Ingress. I haven't used Minikube for a long while now so I might be wrong though.

Anyways... The best option on Minikube might be to drop Traefik altogether and use NGINX Ingress (I think it's a Minikube plugin). If you do that, you might need to change a few commands that export INGRESS_HOST (I think it would be 127.0.0.1 but I might be wrong on that one).

Finally, if you stick with Traefik in Minikube, remove --wait from the helm command. That will skip waiting for all the resources to be healthy, and you can then see what the issue with Traefik is. Start with kubectl --namespace traefik get all,ingresses.
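For example, switching the Service type through the chart would look something like this (a sketch only; I haven't verified it on Minikube):

    helm upgrade --install \
        traefik traefik/traefik \
        --namespace traefik \
        --create-namespace \
        --set service.type=NodePort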

conradwt commented Jan 6, 2023

@vfarcic Thanks for getting back to me and I appreciate it. Anyway, I just ended up creating a K8s cluster on Linode that worked as expected with one minor tweak:

changed

helm upgrade --install \
    robusta robusta/robusta \
    --namespace monitoring \
    --create-namespace \
    --values generated_values.yaml \
    --values robusta-values.yaml \
    --wait

to

helm upgrade --install \
    robusta robusta/robusta \
    --namespace monitoring \
    --create-namespace \
    --values generated_values.yaml \
    --values robusta-values.yaml \
    --set clusterName=<CLUSTER_NAME> \
    --wait

Otherwise, I was getting an error saying that I was missing the cluster name.

vfarcic commented Jan 6, 2023

I'm glad it worked out.

Something must have changed in Robusta so that the clusterName is now required. Thanks for letting me know. The Gist has been updated...
