@lukaszbudnik
Created June 9, 2020 06:32
Shows AWS EKS Fargate auto scaling.
# eksctl version
eksctl version
0.20.0
# kubectl/Kubernetes version
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-21T14:51:23Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-e16311", GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
AWS_REGION=us-east-2
CLUSTER_NAME=lukaszbudniktest1
# create new cluster with Fargate profile
eksctl create cluster \
--name $CLUSTER_NAME \
--version 1.16 \
--region $AWS_REGION \
--fargate
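The same cluster can also be declared in an eksctl config file. Below is a sketch assuming eksctl's `ClusterConfig` schema; `--fargate` creates a default profile selecting the `default` and `kube-system` namespaces, and the profile name shown here is illustrative:

```yaml
# cluster.yaml -- roughly equivalent to the eksctl command above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: lukaszbudniktest1
  region: us-east-2
  version: "1.16"
fargateProfiles:
  - name: fp-default        # illustrative name
    selectors:
      # pods in these namespaces are scheduled on Fargate
      - namespace: default
      - namespace: kube-system
```

You would then create the cluster with `eksctl create cluster -f cluster.yaml`.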
# testing autoscaler
# with a Fargate profile there is no need to install the Kubernetes Cluster Autoscaler - AWS provisions capacity automatically
# there will be 2 nodes available (one per CoreDNS pod) and their names will start with "fargate-"
kubectl get nodes
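To double-check that both nodes are Fargate-managed, you can count them by name prefix. The sketch below runs the filter over a hypothetical sample of `kubectl get nodes` output (node names invented, columns trimmed); on the live cluster you would pipe the real command output instead.

```shell
# Hypothetical sample of `kubectl get nodes` output (names invented,
# columns trimmed); on a live cluster, pipe the real command instead.
sample='NAME                                                  STATUS   VERSION
fargate-ip-192-168-111-1.us-east-2.compute.internal   Ready    v1.16.8-eks
fargate-ip-192-168-122-2.us-east-2.compute.internal   Ready    v1.16.8-eks'
# Count nodes whose name starts with the fargate- prefix (skips header).
echo "$sample" | awk 'NR > 1 && $1 ~ /^fargate-/ { n++ } END { print n+0 }'
# prints 2
```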
# deploy a test app
cat <<EOF > test-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    app: autoscaler-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: autoscaler-test
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: autoscaler-test
        tier: frontend
    spec:
      containers:
      - image: nginx
        name: nginx
EOF
kubectl apply -f test-app.yaml
kubectl get pods
# scale to 30 replicas
kubectl scale --replicas=30 -f test-app.yaml
# wait a moment and check if all replicas are running
kubectl get pods
# the result of the next command may surprise you:
# there will be 30 additional nodes running - one per replica, since Fargate runs each pod on its own node
kubectl get nodes
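You can verify the one-pod-per-node placement directly: `kubectl get pods -o wide` adds a NODE column, and each node name should appear exactly once. The sketch below runs that check over a hypothetical sample (pod and node names invented, columns trimmed); against the real cluster you would pipe the live command output instead.

```shell
# On the live cluster: kubectl get pods -l app=autoscaler-test -o wide
# Hypothetical sample output (names invented, columns trimmed):
sample='NAME                      READY   STATUS    NODE
webapp-5b7f9c6d8-abcde    1/1     Running   fargate-ip-192-168-111-1.us-east-2.compute.internal
webapp-5b7f9c6d8-fghij    1/1     Running   fargate-ip-192-168-122-2.us-east-2.compute.internal'
# Each NODE value should occur exactly once - one pod per Fargate node.
echo "$sample" | awk 'NR > 1 { print $4 }' | sort | uniq -c
```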
# now scale to 60 replicas
kubectl scale --replicas=60 -f test-app.yaml
# again wait a moment and check if all replicas are running
kubectl get pods
# there will be 30 additional nodes running
kubectl get nodes
# in fact, you can check that the number of running pods equals the number of ready nodes
kubectl get pods -A | grep Running | wc -l
kubectl get nodes | grep -w Ready | wc -l
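The comparison above can be wrapped in a small helper; factoring the check into a function makes it easy to exercise without cluster access (the live invocation is shown commented out, and the sample counts are invented):

```shell
# check_one_to_one POD_COUNT NODE_COUNT: reports whether pods map
# one-to-one onto nodes, as expected on Fargate.
check_one_to_one() {
  if [ "$1" -eq "$2" ]; then
    echo "one pod per node ($1 each)"
  else
    echo "mismatch: $1 pods vs $2 nodes"
  fi
}

# Live invocation (needs kubectl access to the cluster):
# check_one_to_one "$(kubectl get pods -A | grep -c Running)" \
#                  "$(kubectl get nodes | grep -cw Ready)"

# Invented sample counts:
check_one_to_one 64 64   # prints: one pod per node (64 each)
```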
# delete cluster
eksctl delete cluster --region $AWS_REGION --name $CLUSTER_NAME