@lukaszbudnik
Last active June 4, 2020 08:38
Shows how to set up an AWS EKS cluster with the Cluster Autoscaler enabled.
# eksctl version
eksctl version
0.20.0
# kubectl/Kubernetes version
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-21T14:51:23Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-e16311", GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
# helm version
helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
# cluster name and region
CLUSTER_NAME=lukaszbudniktest1
AWS_REGION=us-east-2
# create a new cluster using a managed node group
# the autoScaler IAM addon policy is set to true, so eksctl generates the IAM permissions needed for managing ASGs
cat <<EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: $CLUSTER_NAME
  region: $AWS_REGION
managedNodeGroups:
  - name: managed-ng-1
    instanceType: m5.large
    minSize: 1
    maxSize: 10
    desiredCapacity: 1
    volumeSize: 20
    iam:
      withAddonPolicies:
        externalDNS: true
        certManager: true
        autoScaler: true
EOF
eksctl create cluster -f cluster.yaml
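# optional sanity check (a sketch using the AWS CLI): the autoscaler discovers node groups
# via ASG tags; eksctl should have tagged the node group's ASG with
# k8s.io/cluster-autoscaler/enabled and k8s.io/cluster-autoscaler/$CLUSTER_NAME
aws autoscaling describe-auto-scaling-groups --region $AWS_REGION \
  --query "AutoScalingGroups[].Tags[?starts_with(Key, 'k8s.io/cluster-autoscaler')]" \
  --output text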
# deploy the Cluster Autoscaler
# the upstream example manifest contains the placeholder <YOUR CLUSTER NAME>, which needs to be replaced with your cluster name
wget https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
# note: -i.bak (an explicit backup suffix) works with both GNU sed and the BSD sed shipped with macOS
sed -i.bak 's/<YOUR CLUSTER NAME>/'"$CLUSTER_NAME"'/' cluster-autoscaler-autodiscover.yaml
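# quick sanity check: the placeholder appears in the --node-group-auto-discovery flag,
# which after the sed should contain the real cluster name
grep node-group-auto-discovery cluster-autoscaler-autodiscover.yaml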
# do the actual deploy
kubectl apply -f cluster-autoscaler-autodiscover.yaml
# make sure the autoscaler pod itself does not get evicted
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
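# verify the autoscaler came up and watch its logs (the example manifest labels the
# deployment with app=cluster-autoscaler; adjust if your manifest differs)
kubectl -n kube-system get pods -l app=cluster-autoscaler
kubectl -n kube-system logs -f deployment/cluster-autoscaler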
# testing autoscaler
# there should be only 1 node running
kubectl get nodes
# deploy a test app
cat <<EOF > test-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    app: autoscaler-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: autoscaler-test
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: autoscaler-test
        tier: frontend
    spec:
      containers:
        - image: nginx
          name: nginx
EOF
kubectl apply -f test-app.yaml
kubectl get pods
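# optionally wait for the rollout to complete before scaling
kubectl rollout status deployment/webapp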
# scale to 70 replicas
kubectl scale --replicas=70 -f test-app.yaml
# wait a moment and check that new nodes were started and all 70 pods are in the Running state
# 70 default nginx containers fit onto 3 m5.large nodes without any problems
kubectl get nodes
kubectl get pods
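# a quick way to count pods by phase (a sketch; it relies on the STATUS column of the default output)
kubectl get pods --no-headers | awk '{print $3}' | sort | uniq -c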
# now scale to 300 replicas
kubectl scale --replicas=300 -f test-app.yaml
# we should see 10 nodes running (the max size of our managed node group)
kubectl get nodes
# and around 30 pods in the Pending state - they simply didn't fit onto the running nodes
kubectl get pods | grep Pending
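# to see why a pod is Pending, describe one of them; its events should include a message from
# the autoscaler explaining why no scale-up was triggered (a sketch, picks the first Pending pod)
kubectl describe pod $(kubectl get pods --no-headers | awk '$3=="Pending" {print $1; exit}')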
# when scaling down, Kubernetes removes the Pending pods first, so scale to the number of Running pods
pending=$(kubectl get pods | grep Pending | wc -l)
new=$((300-pending))
kubectl scale --replicas=$new -f test-app.yaml
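# after a moment there should be no Pending pods left (grep -c prints 0 when nothing matches)
kubectl get pods | grep -c Pending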
# delete test app
kubectl delete -f test-app.yaml
# by default the autoscaler scales down a node after it has been unneeded for 10 minutes (the --scale-down-unneeded-time flag)
kubectl get nodes
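# to watch the scale-down happen, follow the autoscaler logs (a sketch; exact log lines vary
# between autoscaler versions) or watch the node list shrink
kubectl -n kube-system logs -f deployment/cluster-autoscaler | grep -i scaledown
kubectl get nodes -w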
# delete the cluster
eksctl delete cluster -f cluster.yaml