@mhausenblas
Last active September 25, 2021 17:00
Scripting EKS on ARM

EKS on Arm

The xarm-install-graviton2.sh script lets you install and use Amazon EKS on Arm (xARM) with a single command. In essence, it automates the steps described in the docs.

Make sure you have aws, eksctl, kubectl, and jq installed; these dependencies are checked on start-up and the script fails if any of them are missing. So far the script has been tested with bash on macOS.

$ chmod +x xarm-install-graviton2.sh
$ ./xarm-install-graviton2.sh
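The defaults (cluster name, region, worker node count) can be overridden with positional arguments, which the script handles via bash default expansion. A minimal sketch of that pattern, using the script's own defaults:

```shell
# Positional args override the defaults, exactly as in the script:
# $1 = cluster name, $2 = region, $3 = number of worker nodes
cluster_name="${1:-xarm2}"
region="${2:-eu-west-1}"
num_nodes="${3:-1}"
echo "cluster=$cluster_name region=$region nodes=$num_nodes"
```

Invoked with no arguments this prints the defaults; something like `./xarm-install-graviton2.sh myarm us-east-1 1` overrides all three.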

After about 15 minutes the install should complete and you should see DONE. You can then check the data plane:

$ kubectl get nodes --show-labels
NAME                                           STATUS   ROLES    AGE   VERSION               LABELS
ip-192-168-15-231.eu-west-1.compute.internal   Ready    <none>   48m   v1.15.11-eks-065dce   beta.kubernetes.io/arch=arm64,beta.kubernetes.io/instance-type=m6g.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1a,kubernetes.io/arch=arm64,kubernetes.io/hostname=ip-192-168-15-231.eu-west-1.compute.internal,kubernetes.io/os=linux
ip-192-168-33-98.eu-west-1.compute.internal    Ready    <none>   48m   v1.15.11-eks-065dce   beta.kubernetes.io/arch=arm64,beta.kubernetes.io/instance-type=m6g.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1c,kubernetes.io/arch=arm64,kubernetes.io/hostname=ip-192-168-33-98.eu-west-1.compute.internal,kubernetes.io/os=linux
ip-192-168-48-242.eu-west-1.compute.internal   Ready    <none>   47m   v1.15.11-eks-065dce   beta.kubernetes.io/arch=arm64,beta.kubernetes.io/instance-type=m6g.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1c,kubernetes.io/arch=arm64,kubernetes.io/hostname=ip-192-168-48-242.eu-west-1.compute.internal,kubernetes.io/os=linux
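Each worker node should report the kubernetes.io/arch=arm64 label. A quick sanity check of a node's label string, shown here against a sample captured from the output above (a live check would fetch the labels from the cluster, e.g. via kubectl get node with a jsonpath output):

```shell
# Verify the arm64 architecture label is present in a node's label string.
# The sample is taken from the kubectl output above; against a live cluster
# you would substitute the real label string for a given node.
labels="beta.kubernetes.io/arch=arm64,kubernetes.io/os=linux,kubernetes.io/arch=arm64"
case "$labels" in
  *kubernetes.io/arch=arm64*) arch_ok=yes ;;
  *)                          arch_ok=no  ;;
esac
echo "arch_ok=$arch_ok"
```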
xarm-install-graviton2.sh:

#!/usr/bin/env bash
set -o errexit
set -o errtrace
set -o nounset
set -o pipefail
###############################################################################
### xARM - Amazon EKS on Arm
### Installs an EKS cluster using Graviton2-based Arm worker nodes
### based on https://docs.aws.amazon.com/eks/latest/userguide/arm-support.html
###
### Dependencies: jq, aws, eksctl, kubectl
###
### Example usage (showing all positional CLI arguments):
###
### ./xarm-install-graviton2.sh myarm us-east-1 1
###
### Author: Michael Hausenblas, hausenbl@amazon.com
### Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
### SPDX-License-Identifier: Apache-2.0
###############################################################################
### Utility functions
function preflightcheck {
  if ! [ -x "$(command -v jq)" ]
  then
    echo "Please install jq via https://stedolan.github.io/jq/download/ and try again" >&2
    exit 1
  fi
  if ! [ -x "$(command -v aws)" ]
  then
    echo "Please install aws via https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html and try again" >&2
    exit 1
  fi
  if ! [ -x "$(command -v eksctl)" ]
  then
    echo "Please install eksctl via https://eksctl.io/introduction/installation/ and try again" >&2
    exit 1
  fi
  if ! [ -x "$(command -v kubectl)" ]
  then
    echo "Please install kubectl via https://kubernetes.io/docs/tasks/tools/install-kubectl/ and try again" >&2
    exit 1
  fi
}
###############################################################################
### User parameters that can be overwritten as CLI arguments
# choose a custom name for the cluster:
export XARM_CLUSTER_NAME="${1:-xarm2}"
# choose a region to deploy the cluster into:
export XARM_TARGET_REGION="${2:-eu-west-1}"
# choose number of worker nodes (between 1 and 4):
export XARM_NODES_INITIAL_NUM="${3:-1}"
###############################################################################
### Script parameters (do not touch)
XARM_NODES_TYPE="m6g.medium"
CNI_MANIFEST_URL=https://raw.githubusercontent.com/aws/containers-roadmap/master/preview-programs/eks-arm-preview/aws-k8s-cni-arm64.yaml
KUBEPROXY_MANIFEST_URL=https://raw.githubusercontent.com/aws/containers-roadmap/master/preview-programs/eks-arm-preview/kube-proxy-arm-1.15.yaml
COREDNS_MANIFEST_URL=https://raw.githubusercontent.com/aws/containers-roadmap/master/preview-programs/eks-arm-preview/dns-arm-1.15.yaml
NODEGROUP_CF_URL=https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-06-10/amazon-eks-arm-nodegroup.yaml
AUTH_CONFIGMAP_URL=https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2019-11-15/aws-auth-cm.yaml
###############################################################################
### Main script
printf "Checking dependencies\n\n"
preflightcheck
printf "I will now provision an EKS cluster with the following parameters:\n"
printf ' %s \e[34m%s\e[0m\n' "cluster name:" $XARM_CLUSTER_NAME
printf ' %s \e[34m%s\e[0m\n' "region:" $XARM_TARGET_REGION
printf ' %s \e[34m%s\e[0m\n' "number of worker nodes:" $XARM_NODES_INITIAL_NUM
printf ' %s \e[34m%s\e[0m\n\n' "instance type:" $XARM_NODES_TYPE
printf "Starting! This will take some 15 min to complete\n\n"
# create the control plane and gather some data we need for the node group:
echo Creating the control plane
eksctl create cluster \
--name $XARM_CLUSTER_NAME \
--version 1.15 \
--region $XARM_TARGET_REGION \
--without-nodegroup
ControlPlaneSecurityGroup=$(aws eks describe-cluster --name $XARM_CLUSTER_NAME --region $XARM_TARGET_REGION | jq .cluster.resourcesVpcConfig.securityGroupIds[0] -r)
VPCId=$(aws eks describe-cluster --name $XARM_CLUSTER_NAME --region $XARM_TARGET_REGION | jq .cluster.resourcesVpcConfig.vpcId -r)
# 172.31.32.0/20 and 172.31.80.0/20
PublicSubnets=$(aws cloudformation describe-stacks --stack-name eksctl-$XARM_CLUSTER_NAME-cluster --region $XARM_TARGET_REGION | jq -r '.Stacks[0].Outputs' | jq -c '.[] | select( .OutputKey == "SubnetsPublic" )' | jq -r '.OutputValue')
# update control plane (Arm it):
echo Updating control plane with Arm components
kubectl apply -f $COREDNS_MANIFEST_URL
kubectl apply -f $KUBEPROXY_MANIFEST_URL
kubectl apply -f $CNI_MANIFEST_URL
# launch worker nodes and gather some data to join nodes:
echo Launching worker nodes
tsnow=$(date +%s)
xarmkeyname=xarm-$tsnow
curl -o amazon-eks-arm-nodegroup.yaml $NODEGROUP_CF_URL
aws ec2 create-key-pair \
--key-name "$xarmkeyname" \
--region $XARM_TARGET_REGION | \
jq -r ".KeyMaterial" > ~/.ssh/$xarmkeyname.pem
aws cloudformation deploy \
--template-file amazon-eks-arm-nodegroup.yaml \
--stack-name eksctl-$XARM_CLUSTER_NAME-ng \
--capabilities CAPABILITY_IAM \
--parameter-overrides "ClusterControlPlaneSecurityGroup=$ControlPlaneSecurityGroup" \
"ClusterName=$XARM_CLUSTER_NAME" \
"KeyName=$xarmkeyname" \
"KubernetesVersion=1.15" \
"NodeAutoScalingGroupDesiredCapacity=$XARM_NODES_INITIAL_NUM" \
"NodeGroupName=xarmdng" \
"NodeInstanceType=$XARM_NODES_TYPE" \
"Subnets=$PublicSubnets" \
"VpcId=$VPCId" \
--region $XARM_TARGET_REGION
NodeInstanceRole=$(aws cloudformation describe-stacks --stack-name eksctl-$XARM_CLUSTER_NAME-ng --region $XARM_TARGET_REGION | jq -r '.Stacks[0].Outputs' | jq -c '.[] | select( .OutputKey == "NodeInstanceRole" )' | jq -r '.OutputValue')
# add worker nodes to cluster
echo Adding worker nodes to cluster
curl -o aws-auth-cm.yaml $AUTH_CONFIGMAP_URL && \
sed "s|<ARN of instance role (not instance profile)>|$NodeInstanceRole|g" aws-auth-cm.yaml > aws-auth-cm-arm.yaml && \
kubectl apply -f aws-auth-cm-arm.yaml
echo DONE
For reference, the earlier variant of the script, xarm-install.sh, targets the EKS 1.14 Arm preview with A1-based worker nodes:

#!/usr/bin/env bash
set -o errexit
set -o errtrace
set -o nounset
set -o pipefail
###############################################################################
### xARM - Amazon EKS on ARM
### Installs an EKS cluster using ARM worker nodes
### https://docs.aws.amazon.com/eks/latest/userguide/arm-support.html
###
### Dependencies: jq, aws, eksctl, kubectl
###
### Example usage (showing all positional CLI arguments):
###
### ./xarm-install.sh myarm eu-west-1 1 a1.xlarge
###
### Author: Michael Hausenblas, hausenbl@amazon.com
### Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
### SPDX-License-Identifier: Apache-2.0
###############################################################################
### Utility functions
function preflightcheck {
  if ! [ -x "$(command -v jq)" ]
  then
    echo "Please install jq via https://stedolan.github.io/jq/download/ and try again" >&2
    exit 1
  fi
  if ! [ -x "$(command -v aws)" ]
  then
    echo "Please install aws via https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html and try again" >&2
    exit 1
  fi
  if ! [ -x "$(command -v eksctl)" ]
  then
    echo "Please install eksctl via https://eksctl.io/introduction/installation/ and try again" >&2
    exit 1
  fi
  if ! [ -x "$(command -v kubectl)" ]
  then
    echo "Please install kubectl via https://kubernetes.io/docs/tasks/tools/install-kubectl/ and try again" >&2
    exit 1
  fi
}
###############################################################################
### User parameters that can be overwritten as CLI arguments
# choose a custom name for the cluster:
export XARM_CLUSTER_NAME="${1:-xarm}"
# choose the target region for the cluster:
export XARM_TARGET_REGION="${2:-us-west-2}"
# choose number of worker nodes (between 1 and 4):
export XARM_NODES_INITIAL_NUM="${3:-2}"
# choose EC2 instance type as per https://aws.amazon.com/ec2/instance-types/a1/
export XARM_NODES_TYPE="${4:-a1.large}"
###############################################################################
### Script parameters (do not touch)
CNI_MANIFEST_URL=https://raw.githubusercontent.com/aws/containers-roadmap/master/preview-programs/eks-ec2-a1-preview/aws-k8s-cni-arm64.yaml
KUBEPROXY_MANIFEST_URL=https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/kube-proxy-arm64-ds-1.14.yaml
COREDNS_MANIFEST_URL=https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/dns-arm64-1.14.yaml
NODEGROUP_CF_URL=https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-arm-nodegroup.yaml
AUTH_CONFIGMAP_URL=https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/aws-auth-cm.yaml
###############################################################################
### Main script
printf "Checking dependencies\n\n"
preflightcheck
printf "I will now provision an EKS 1.14 cluster with the following parameters:\n"
printf ' %s \e[34m%s\e[0m\n' "cluster name:" $XARM_CLUSTER_NAME
printf ' %s \e[34m%s\e[0m\n' "region:" $XARM_TARGET_REGION
printf ' %s \e[34m%s\e[0m\n' "number of worker nodes:" $XARM_NODES_INITIAL_NUM
printf ' %s \e[34m%s\e[0m\n\n' "instance type:" $XARM_NODES_TYPE
printf "Starting! This will take some 15 to 20min to complete\n\n"
# create the control plane and gather some data we need for the node group:
echo Creating the control plane
eksctl create cluster \
--name $XARM_CLUSTER_NAME \
--version 1.14 \
--region $XARM_TARGET_REGION \
--without-nodegroup
ControlPlaneSecurityGroup=$(aws eks describe-cluster --name $XARM_CLUSTER_NAME --region $XARM_TARGET_REGION | jq .cluster.resourcesVpcConfig.securityGroupIds[0] -r)
VPCId=$(aws eks describe-cluster --name $XARM_CLUSTER_NAME --region $XARM_TARGET_REGION | jq .cluster.resourcesVpcConfig.vpcId -r)
PublicSubnets=$(aws cloudformation describe-stacks --stack-name eksctl-$XARM_CLUSTER_NAME-cluster --region $XARM_TARGET_REGION | jq -r '.Stacks[0].Outputs' | jq -c '.[] | select( .OutputKey == "SubnetsPublic" )' | jq -r '.OutputValue')
# update control plane (ARM it):
echo Updating control plane with ARM components
kubectl apply -f $CNI_MANIFEST_URL
kubectl apply -f $KUBEPROXY_MANIFEST_URL
kubectl apply -f $COREDNS_MANIFEST_URL
# launch worker nodes and gather some data to join nodes:
echo Launching worker nodes
tsnow=$(date +%s)
xarmkeyname=xarm-$tsnow
curl -o amazon-eks-arm-nodegroup.yaml $NODEGROUP_CF_URL
aws ec2 create-key-pair \
--key-name "$xarmkeyname" \
--region $XARM_TARGET_REGION | \
jq -r ".KeyMaterial" > ~/.ssh/$xarmkeyname.pem
aws cloudformation deploy \
--template-file amazon-eks-arm-nodegroup.yaml \
--stack-name eksctl-$XARM_CLUSTER_NAME-ng \
--capabilities CAPABILITY_IAM \
--parameter-overrides "ClusterControlPlaneSecurityGroup=$ControlPlaneSecurityGroup" \
"ClusterName=$XARM_CLUSTER_NAME" \
"KeyName=$xarmkeyname" \
"KubernetesVersion=1.14" \
"NodeAutoScalingGroupDesiredCapacity=$XARM_NODES_INITIAL_NUM" \
"NodeGroupName=xarmdng" \
"NodeInstanceType=$XARM_NODES_TYPE" \
"Subnets=$PublicSubnets" \
"VpcId=$VPCId" \
--region $XARM_TARGET_REGION
NodeInstanceRole=$(aws cloudformation describe-stacks --stack-name eksctl-$XARM_CLUSTER_NAME-ng --region $XARM_TARGET_REGION | jq -r '.Stacks[0].Outputs' | jq -c '.[] | select( .OutputKey == "NodeInstanceRole" )' | jq -r '.OutputValue')
# add worker nodes to cluster
echo Adding worker nodes to cluster
curl -o aws-auth-cm.yaml $AUTH_CONFIGMAP_URL && \
sed "s|<ARN of instance role (not instance profile)>|$NodeInstanceRole|g" aws-auth-cm.yaml > aws-auth-cm-arm.yaml && \
kubectl apply -f aws-auth-cm-arm.yaml
echo DONE