AWS EKS Notes
Basics:
=======
http://kubernetesbyexample.com/
- A replication controller (RC) is a supervisor for long-running pods. An RC launches a specified number of pods, called replicas, and makes sure they keep running, for example when a node fails or when something goes wrong inside a pod (that is, in one of its containers).
example:
kubectl apply -f https://raw.githubusercontent.com/openshift-evangelists/kbe/master/specs/rcs/rc.yaml
kubectl get rc
kubectl get pods --show-labels
scale up --> kubectl scale --replicas=3 rc/rcex
kubectl get pods -l app=sise
to delete RC --> kubectl delete rc rcex
URL : http://kubernetesbyexample.com/rcs/
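For reference, a minimal RC manifest sketch in the spirit of the example above; the rcex name and app=sise label match the commands above, but the image and port are assumptions, not taken from the linked rc.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: rcex
spec:
  replicas: 2                  # number of pod replicas the RC keeps running
  selector:
    app: sise                  # pods matching this label are supervised
  template:
    metadata:
      labels:
        app: sise
    spec:
      containers:
      - name: sise
        image: mhausenblas/simpleservice:0.5.0   # illustrative container image
        ports:
        - containerPort: 9876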
- A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them.
kubectl get svc
kubectl apply -f <service.yaml>
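A minimal Service manifest sketch that would select the pods labeled app=sise above (the Service name and ports are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: simpleservice
spec:
  selector:
    app: sise          # route traffic to pods carrying this label
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 9876   # port the container listens on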
EKS Cluster/Control plane:
==========================
- Consists of control plane nodes that run the Kubernetes control plane software, such as etcd, the Kubernetes API server, and the controllers (controller manager and scheduler)
- The control plane runs in an account managed by AWS, and the Kubernetes API is exposed via the Amazon EKS endpoint associated with your cluster.
- Each Amazon EKS cluster control plane is single-tenant and unique, and runs on its own set of Amazon EC2 instances.
- All of the data stored by the etcd nodes and associated Amazon EBS volumes is encrypted. Amazon EKS uses master encryption keys that generate volume encryption keys which are managed by the Amazon EKS service.
- The cluster control plane is provisioned across multiple Availability Zones and fronted by an Elastic Load Balancing Network Load Balancer.
- Amazon EKS also provisions elastic network interfaces in your VPC subnets to provide connectivity from the control plane instances to the worker nodes
- If you restrict access to your cluster's public endpoint using CIDR blocks, it is recommended that you also enable private endpoint access so that worker nodes can communicate with the cluster.
- Without the private endpoint enabled, the CIDR blocks that you specify for public access must include the egress sources from your VPC.
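For example, public endpoint access can be restricted (and private access enabled) with the AWS CLI; the CIDR block below is illustrative:
aws eks update-cluster-config --name <cluster-name> --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.0/24",endpointPrivateAccess=true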
Worker nodes:
=============
- When you launch a worker node/node group using the Amazon EKS optimized AMI, the AMI
  - installs the aws-iam-authenticator, kubelet, and Docker packages
  - runs a bootstrap script to discover and connect to the control plane (see the user data sketch at the end of this section)
  - bootstrap script: https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh
- To see which packages are installed on worker nodes, the config directories, etc., look at the build scripts at https://github.com/awslabs/amazon-eks-ami
- AMIs - Amazon EKS optimized AMIs are available for Amazon Linux 2 and Windows Server 2019.
- Beginning with Kubernetes version 1.14 and platform version eks.3, Amazon EKS clusters support Managed Node Groups, which automate the provisioning and lifecycle management of nodes.
- Earlier versions of Amazon EKS clusters can launch worker nodes with an Amazon EKS-provided AWS CloudFormation template.
- Worker node tags: the tag kubernetes.io/cluster/<cluster-name> with value "owned" is added by AWS if you launch worker nodes using a node group or the AWS-provided CFN templates. If you launch workers manually, you must add this tag to each worker node yourself.
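A minimal sketch of worker node user data that calls the bootstrap script shipped with the EKS optimized AMI (the cluster name and kubelet args are illustrative):
#!/bin/bash
set -o errexit
# Join this instance to the cluster; the extra kubelet args are optional
/etc/eks/bootstrap.sh my-eks-cluster \
  --kubelet-extra-args '--node-labels=nodegroup=custom'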
Managed node groups:
====================
- Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
- With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications.
You can create, update, or terminate nodes for your cluster with a single operation.
Nodes run using the latest Amazon EKS-optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
- All managed nodes are provisioned as part of an Amazon EC2 Auto Scaling group that is managed for you by Amazon EKS.
- All resources including the instances and Auto Scaling groups run within your AWS account.
- Each node group uses the Amazon EKS-optimized Amazon Linux 2 AMI and can run across multiple Availability Zones that you define.
- Amazon EKS adds Kubernetes labels to managed node group instances. These Amazon EKS-provided labels are prefixed with eks.amazonaws.com.
- Amazon EKS automatically drains nodes using the Kubernetes API during terminations or updates. Updates respect the pod disruption budgets that you set for your pods.
- There is no additional cost to use Amazon EKS managed node groups; you only pay for the AWS resources you provision.
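For example, a managed node group can be created with eksctl (the cluster name, node group name, instance type, and sizes below are illustrative):
eksctl create nodegroup \
  --cluster my-eks-cluster \
  --name managed-ng-1 \
  --node-type t3.medium \
  --nodes 2 --nodes-min 1 --nodes-max 4 \
  --managed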
Security Groups:
================
1) Cluster security group - created by AWS during cluster creation. This EKS-created security group is applied to the ENIs attached to the EKS control plane nodes as well as to any managed workloads, and allows communication between the Kubernetes control plane and the compute resources in the cluster.
2) Worker node security group (remote access) - created by AWS during node group creation to allow SSH access to managed worker nodes.
3) Additional cluster security groups - created by the customer/user and specified during cluster creation, to enable communication from the Kubernetes control plane to compute resources in your account.
** In general, when you create a cluster, two cross-account ENIs are created and they sit in two security groups - the additional SG and the cluster SG.
** When you create a node group, the nodes sit in the cluster SG and the remote access SG.
** Because the cluster security group is assigned to both the control plane cross-account elastic network interfaces and the managed node group instances, you do not need to configure complex security group rules to allow this communication.
Any instance or network interface that is assigned this security group can freely communicate with other resources that have the same security group.
** When you enable public access to the API server, you can also restrict public access to the required CIDR blocks; this is configured on the cluster itself (in the EKS console), not via security groups.
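To look up the cluster security group that EKS created for a cluster (the cluster name is illustrative):
aws eks describe-cluster --name my-eks-cluster \
  --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text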
Cluster creation:
=================
- Create the cluster using the AWS console, the AWS CLI, or eksctl (see the eksctl sketch after the prerequisites below)
- To be able to manage the cluster and nodes, you need to install the following on your desktop or on a jump-server EC2 instance
- Prerequisites to manage the cluster:
kubectl (check with: kubectl version --short --client)
AWS CLI (check with: aws --version; use the latest version)
Python 3.x or above
aws-iam-authenticator
If you want to run kubectl from a bastion host, you need to allow traffic from the bastion host security group in the cluster security group (i.e., allow traffic between both SGs: the bastion host SG and the cluster SG).
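A minimal eksctl sketch for creating a cluster without worker nodes (the name, region, and Kubernetes version below are illustrative):
eksctl create cluster \
  --name my-eks-cluster \
  --region us-east-1 \
  --version 1.18 \
  --without-nodegroup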
- After creation, update/create a kubeconfig file for each cluster
aws eks update-kubeconfig --name <cluster-name> --kubeconfig <config-cluster-name> // keep separate config files for each cluster
In this step, the config file is created with the name provided (the default location is /home/krishna/.kube/config)
You should run the update-kubeconfig command as the same user that you used to create the cluster.
By default, when you run update-kubeconfig, the config is created to use aws-iam-authenticator. If you want to use the AWS CLI instead, edit the file and add the role ARN as shown below. (If you are not sure which role to use, check the authenticator log stream in CloudWatch Logs; you will see which role was assumed when the cluster was created. Do not copy that entry as-is; take the ARN of that role from the IAM console.)
** If you log in to the AWS account with an IAM user without assuming any role, then you need to use your user account credentials as-is.
** If you log in to the AWS account with an IAM user by assuming a role, then you need to specify that role in the config.
users:
- name: arn:aws:eks:us-east-1:828551335655:cluster/demo-console
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - demo-console
      - --role
      - arn:aws:iam::828551335655:role/Admins
      command: aws
kubectl get nodes
kubectl get svc
Allow additional users to kubectl:
----------------------------------
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
kubectl edit -n kube-system configmap/aws-auth --> this configmap gets created when you create a node group.
Add the following lines:
mapUsers: |
  - userarn: arn:aws:iam::555555555555:user/admin
    username: admin
    groups:
    - system:masters
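Similarly, IAM roles can be mapped via a mapRoles entry in the same configmap (the role ARN and username below are illustrative):
mapRoles: |
  - rolearn: arn:aws:iam::555555555555:role/Admins
    username: admin-role
    groups:
    - system:masters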
Networking:
===========
- AWS recommends a network architecture that uses private subnets for your worker nodes and public subnets for Kubernetes to create internet-facing load balancers within.
- It is possible to specify only public or private subnets when you create your cluster, but there are some limitations associated with these configurations:
Private-only: Everything runs in a private subnet and Kubernetes cannot create internet-facing load balancers for your pods.
Public-only: Everything runs in a public subnet, including your worker nodes.
- Amazon EKS creates an elastic network interface in your private subnets to facilitate communication to your worker nodes. This communication channel supports Kubernetes functionality such as kubectl exec and kubectl logs.
- The security group that you specify when you create your cluster is applied to the elastic network interfaces that are created for your cluster control plane.
- Your VPC must have DNS hostname and DNS resolution support. Otherwise, your worker nodes cannot register with your cluster.
- The Amazon EKS control plane creates up to 4 cross-account elastic network interfaces in your VPC for each cluster. Be sure that the subnets you specify have enough available IP addresses for the cross-account elastic network interfaces and your pods.
- Docker runs in the 172.17.0.0/16 CIDR range in Amazon EKS clusters. We recommend that your cluster's VPC subnets do not overlap this range.
- VPCs, subnets, etc. should be tagged according to the EKS documentation (see the tag summary below)
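The commonly required subnet tags, per the EKS documentation at the time of writing:
kubernetes.io/cluster/<cluster-name> = shared     (all subnets used by the cluster)
kubernetes.io/role/elb = 1                        (public subnets, for internet-facing load balancers)
kubernetes.io/role/internal-elb = 1               (private subnets, for internal load balancers)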
** The CNI plugin is responsible for allocating VPC IP addresses to Kubernetes nodes and configuring the necessary networking for pods on each node.
- The plugin consists of two primary components:
a. The L-IPAM daemon is responsible for attaching elastic network interfaces to instances, assigning secondary IP addresses to elastic network interfaces, and maintaining a "warm pool" of IP addresses on each node for assignment to Kubernetes pods when they are scheduled.
b. The CNI plugin itself is responsible for wiring the host network (for example, configuring the interfaces and virtual Ethernet pairs) and adding the correct interface to the pod namespace.
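To check which version of the Amazon VPC CNI plugin a cluster is running:
kubectl describe daemonset aws-node -n kube-system | grep Image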