Shows how to set up an Azure AKS cluster with the cluster autoscaler enabled.
# az version
az version
{
  "azure-cli": "2.5.1",
  "azure-cli-command-modules-nspkg": "2.0.3",
  "azure-cli-core": "2.5.1",
  "azure-cli-nspkg": "3.0.4",
  "azure-cli-telemetry": "1.0.4",
  "extensions": {}
}
# kubectl version
kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11", GitCommit:"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede", GitTreeState:"clean", BuildDate:"2020-03-13T17:40:34Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
# resource group name
RG_NAME=lukaszbudnik
# Azure AKS cluster name
AKS_CLUSTER_NAME=awesome-product
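# if the resource group does not exist yet, create it first
# (the location below is just an example, pick your own region)
az group create --name $RG_NAME --location westeurope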
# create the cluster
# the VM set type has to be VirtualMachineScaleSets, as the cluster autoscaler does not support AvailabilitySet at the moment
az aks create --name $AKS_CLUSTER_NAME \
  --resource-group $RG_NAME \
  --load-balancer-sku basic \
  --vm-set-type VirtualMachineScaleSets \
  --min-count 1 \
  --max-count 5 \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --node-vm-size Standard_DS2_v2 \
  --enable-addons monitoring \
  --no-ssh-key
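# optionally confirm the autoscaler is enabled on the node pool;
# the JMESPath query assumes a single (default) agent pool
az aks show --resource-group $RG_NAME --name $AKS_CLUSTER_NAME \
  --query "agentPoolProfiles[0].enableAutoScaling"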
# fetch credentials so that kubectl will work
az aks get-credentials --resource-group $RG_NAME --name $AKS_CLUSTER_NAME
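# sanity check: kubectl should now point at the new cluster
kubectl config current-context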
# testing autoscaler
# there should be only 1 node running
kubectl get nodes
# deploy a single nginx container
cat <<EOF > test-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    app: autoscaler-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: autoscaler-test
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: autoscaler-test
        tier: frontend
    spec:
      containers:
      - image: nginx
        name: nginx
EOF
kubectl apply -f test-app.yaml
kubectl get pods
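# optionally wait until the deployment is available before scaling it up
# (the 120s timeout is arbitrary)
kubectl wait --for=condition=available deployment/webapp --timeout=120s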
# scale it to 200 instances
kubectl scale --replicas=200 -f test-app.yaml
# wait a moment and check that new nodes were started and all 200 pods are in the Running state
# around 100 default nginx containers fit onto a single Standard_DS2_v2 machine,
# so 200 should fit onto 2 Standard_DS2_v2 machines
kubectl get nodes
kubectl get pods
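# the cluster autoscaler emits TriggeredScaleUp events for pods it acts on,
# so the scale-up can also be observed like this
kubectl get events | grep TriggeredScaleUp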
# now scale to 600 nginx instances
kubectl scale --replicas=600 -f test-app.yaml
# 600 pods exceed the cluster's capacity, so we should see 5 machines running (the max of our scale set)
kubectl get nodes
# and around 100 pending - they simply didn't fit into running nodes
kubectl get pods | grep Pending
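# the autoscaler also publishes its view of the cluster in a configmap
# in kube-system; it should be available on AKS as well
kubectl -n kube-system describe configmap cluster-autoscaler-status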
# when scaling down, Kubernetes will remove the Pending pods first
pending=$(kubectl get pods | grep Pending | wc -l)
new=$((600-pending))
kubectl scale --replicas=$new -f test-app.yaml
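# alternatively, if all 600 pods were really needed, the autoscaler limits
# could be raised on the live cluster instead (the max of 10 is just an example)
az aks update --resource-group $RG_NAME --name $AKS_CLUSTER_NAME \
  --update-cluster-autoscaler --min-count 1 --max-count 10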
# delete test app
kubectl delete -f test-app.yaml
# by default autoscaler will scale down nodes after 10 minutes of inactivity
kubectl get nodes
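# scale-down takes a while, so it's easier to watch the nodes
# than to keep polling (Ctrl+C to stop)
kubectl get nodes --watch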
# delete the cluster
az aks delete --name $AKS_CLUSTER_NAME --resource-group $RG_NAME