Sample EKSCTL with YAML Aliases
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: master
  region: us-east-1
  version: "1.21"
iam:
  withOIDC: true
vpc:
  # For whitelisting public access, eg: to your VPN and/or jumpbox, etc
  # publicAccessCIDRs: ["1.1.1.1/32", "2.2.2.0/24"]
  subnets:
    private:
      us-east-1a: { id: subnet-0ccbf67f3c6bdcd2b }
      us-east-1d: { id: subnet-0cc371889e1e3c2eb }
  clusterEndpoints:
    # Consider disabling public access (eg: have your users VPN into your subnet)
    publicAccess: false
    privateAccess: true
nodeGroups:
  # Volume encryption and 100GB root volume size
  - name: c5-all-2xlarge-spot-1a-v6
    availabilityZones: ["us-east-1a"]
    <<: &spotNodeGroupDefaultsV4
      minSize: 2
      desiredCapacity: 2
      maxSize: 4
      privateNetworking: true
      # disablePodIMDS: true # Prevents pods from "escalating" permissions via the node IAM role by blocking the EC2 instance metadata (IMDS) endpoint, which is how pods otherwise inherit the node's AWS API access. Requires some changes and testing before enabling; often easier if used from day one than retrofitted later.
      volumeSize: 100
      volumeEncrypted: true
      asgSuspendProcesses:
        - AZRebalance # Ensures the ASG doesn't kill instances to rebalance AZs, which is bad practice in Kubernetes
      labels:
        role: master
        instance-type: spot
      instancesDistribution:
        # This maxPrice should be (imho) just above the most expensive hourly on-demand price of the instanceTypes below
        maxPrice: 0.5
        instanceTypes: ["c5d.2xlarge", "c5n.2xlarge", "c5.2xlarge"]
        onDemandBaseCapacity: 0
        onDemandPercentageAboveBaseCapacity: 0
        spotAllocationStrategy: capacity-optimized
      taints:
        spotInstance: "true:PreferNoSchedule"
      tags:
        k8s.io/cluster-autoscaler/node-template/label/lifecycle: Ec2Spot
        k8s.io/cluster-autoscaler/node-template/label/aws.amazon.com/spot: "true"
        # k8s.io/cluster-autoscaler/node-template/taint/spotInstance: "true:PreferNoSchedule"
        k8s.io/cluster-autoscaler/enabled: "true"
        k8s.io/cluster-autoscaler/master: "true"
        kubernetes.io/cluster/master: "owned"
      iam:
        withAddonPolicies:
          ebs: true
          cloudWatch: true
        attachPolicyARNs:
          # The first two are attached by default, but once you set attachPolicyARNs you must list them explicitly
          - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
          - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
          - arn:aws:iam::aws:policy/AmazonEKSVPCResourceController # Pod security groups support
  - name: c5-all-2xlarge-spot-1d-v6
    availabilityZones: ["us-east-1d"]
    <<: *spotNodeGroupDefaultsV4

# To change logging types on a live cluster:
#   eksctl utils update-cluster-logging --enable-types=controllerManager --disable-types=scheduler
# Valid entries are: "api", "audit", "authenticator", "controllerManager", "scheduler", "all", "*"
cloudWatch:
  clusterLogging:
    enableTypes: ["api", "authenticator"]
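The YAML alias trick above defines the shared node-group settings once under the `&spotNodeGroupDefaultsV4` anchor and merges them into each group with `<<`. A minimal Python sketch of those merge semantics (names and values copied from the config; the `node_group` helper is illustrative, not part of eksctl):

```python
# Shared defaults, analogous to the &spotNodeGroupDefaultsV4 anchor:
# captured once, reused by every node group that references it.
spot_node_group_defaults = {
    "minSize": 2,
    "desiredCapacity": 2,
    "maxSize": 4,
    "privateNetworking": True,
    "volumeSize": 100,
    "volumeEncrypted": True,
}

def node_group(name, azs):
    # Equivalent to a YAML entry with its own keys plus
    # "<<: *spotNodeGroupDefaultsV4" merging in the defaults.
    return {"name": name, "availabilityZones": azs, **spot_node_group_defaults}

groups = [
    node_group("c5-all-2xlarge-spot-1a-v6", ["us-east-1a"]),
    node_group("c5-all-2xlarge-spot-1d-v6", ["us-east-1d"]),
]

# Both groups share the defaults; only name/AZ differ.
print(groups[0]["minSize"], groups[1]["availabilityZones"])
```

Adding a key to the anchored mapping updates every aliased node group at once, which is the whole point of the pattern: one place to change volume size, scaling limits, or tags for all spot groups.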