
Set Up a Fully Private Amazon EKS on AWS Fargate Cluster

Regulated industries need to host their Kubernetes workloads in the most secure way possible, and a fully private EKS on Fargate cluster helps address this requirement. Each pod running on Fargate has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with any other pod, which makes it attractive from a compliance point of view. With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimise cluster packing.

Originally published on my blog: https://k8s-dev.github.io

Source code for this post is hosted at: https://github.com/k8s-dev/private-eks-fargate

Constraints

  • No internet connectivity for the Fargate cluster and no public subnets, except for the bastion host
  • Fully private access to the Amazon EKS cluster's Kubernetes API server endpoint
  • All AWS services communicate with this cluster through VPC interface or gateway endpoints, i.e. entirely over private AWS connectivity

A VPC endpoint enables private connections between your VPC and supported AWS services. Traffic between the VPC and the other AWS service does not leave the Amazon network, so this solution does not require an internet gateway or a NAT device, except for the bastion host subnet.

Prerequisites

  • At least two private subnets in the VPC, because pods running on Fargate are only supported in private subnets
  • A bastion host in a public subnet in the same VPC to connect to the EKS on Fargate cluster via kubectl
  • The AWS CLI installed on the bastion host and configured with credentials and a default region
  • VPC interface endpoints and a gateway endpoint for the AWS services that your cluster uses

Design

This diagram shows the high-level design of the implementation. The EKS on Fargate cluster spans two private subnets, and a bastion host is provisioned in a public subnet with internet connectivity. All communication with the EKS cluster is initiated from this bastion host. The cluster is fully private and communicates with the various AWS services via VPC interface and gateway endpoints.

Design Diagram

Implementation Steps

Fargate pod execution role

Create a Fargate pod execution role, which allows the Fargate infrastructure to make calls to AWS APIs on your behalf, for example to pull container images from Amazon ECR or route logs to other AWS services. Follow: https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-sg-pod-execution-role
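The linked guide walks through this in the console; as a rough CLI sketch (the role name below is just an example), it amounts to creating a role that eks-fargate-pods.amazonaws.com can assume and attaching the managed pod execution policy:

```bash
# Trust policy that lets EKS Fargate assume the role (role name is an example)
cat > pod-execution-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks-fargate-pods.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name AmazonEKSFargatePodExecutionRole \
  --assume-role-policy-document file://pod-execution-trust.json

aws iam attach-role-policy \
  --role-name AmazonEKSFargatePodExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy
```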

Create AWS Services VPC Endpoints

Since we are rolling out a fully private EKS on Fargate cluster, it needs private-only access to the AWS services it depends on, such as ECR, CloudWatch, Elastic Load Balancing and S3.

This step is essential so that pods running on the Fargate cluster can pull container images, push logs to CloudWatch and interact with load balancers.

See the full list of endpoints that your cluster may use here: https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html#vpc-endpoints-private-clusters

Set up the interface and gateway endpoints in the same VPC for the services that you plan to use in your EKS on Fargate cluster by following the steps at: https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint

We need to provision at least the following endpoints; a sketch of the corresponding AWS CLI commands follows below the list:

  • Interface endpoints for ECR (both ecr.api and ecr.dkr) to pull container images
  • A gateway endpoint for S3 to pull the actual image layers
  • An interface endpoint for EC2
  • An interface endpoint for STS to support Fargate and IAM Roles for Service Accounts
  • An interface endpoint for CloudWatch logging (logs) if CloudWatch logging is required

Interface Endpoints
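As a sketch of how these endpoints can be created with the AWS CLI (the VPC, subnet, security group and route table IDs are placeholders; repeat the interface endpoint command for each service name you need, such as ecr.dkr, ec2, sts and logs):

```bash
# Interface endpoint for the ECR API; repeat with the other service names
# (com.amazonaws.<region>.ecr.dkr, .ec2, .sts, .logs) as needed
aws ec2 create-vpc-endpoint \
  --vpc-id <vpc-id> \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.<region>.ecr.api \
  --subnet-ids <private-subnet-1> <private-subnet-2> \
  --security-group-ids <endpoint-security-group> \
  --private-dns-enabled

# Gateway endpoint for S3, attached to the private subnets' route table
aws ec2 create-vpc-endpoint \
  --vpc-id <vpc-id> \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.<region>.s3 \
  --route-table-ids <private-route-table-id>
```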

Create Cluster with Private API-Server Endpoint

Use the AWS CLI to create the EKS cluster in the designated VPC. Substitute the actual cluster name, Kubernetes version, role ARN, private subnet IDs and security group ID before you run the command. Note that it can take 10-15 minutes for the cluster to become active.

https://gist.github.com/db22d70b88ad8b479ab95f7fe9342ec3

The above command creates a private EKS cluster with a private API endpoint and enables logging for Kubernetes control plane components such as the API server and scheduler. Tweak it as per your compliance needs.
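The embedded gist is not reproduced inline here; as a rough sketch, with the cluster name, Kubernetes version, role ARN, subnet IDs and security group ID treated as placeholders, the call looks something like this:

```bash
aws eks create-cluster \
  --name private-fargate-cluster \
  --kubernetes-version 1.18 \
  --role-arn arn:aws:iam::<account-id>:role/<cluster-role> \
  --resources-vpc-config subnetIds=<private-subnet-1>,<private-subnet-2>,securityGroupIds=<cluster-sg>,endpointPublicAccess=false,endpointPrivateAccess=true \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
```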

Connectivity to this private cluster can be achieved in three ways: via a bastion host, AWS Cloud9 or a connected network, as listed at: https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access. For simplicity, we will go with the EC2 bastion host approach.

While you are waiting for the cluster to become active, let's set up the bastion host. This is needed because the EKS on Fargate cluster sits in private subnets only, without internet connectivity, and API server access is set to private only. The bastion host is created in a public subnet in the same VPC and is used to reach the EKS on Fargate cluster. The only requirement is to install kubectl and the AWS CLI v2 on it, both of which can be downloaded by following the official Kubernetes and AWS documentation.
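As a sketch, on an x86_64 Linux bastion the two tools can be installed roughly like this (the kubectl version shown is only an example; pick one compatible with your cluster):

```bash
# AWS CLI v2
curl -o awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
unzip awscliv2.zip
sudo ./aws/install

# kubectl (version shown is an example)
curl -LO "https://dl.k8s.io/release/v1.18.20/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
```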

All subsequent commands should be run from the bastion host.

Set up access to the EKS on Fargate Cluster

kubectl on the bastion host needs to talk to the API server in order to communicate with the EKS on Fargate cluster. Perform the following steps to set up this communication. Make sure the AWS CLI is configured before running this command.

https://gist.github.com/14e3ef5f294092096fd4d69652196152

This writes the kubeconfig file to ~/.kube/config, which enables running kubectl commands against the EKS cluster.
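The gist boils down to the standard update-kubeconfig call; as a sketch with placeholder names:

```bash
aws eks update-kubeconfig --name private-fargate-cluster --region <region>

# Quick sanity check that kubectl can reach the private API endpoint
kubectl get svc -A
```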

Enable logging (optional)

Logging can be very useful for debugging issues while rolling out the application. Set up logging in the EKS on Fargate cluster by following: https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html
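As a heavily hedged sketch of the pattern described in that guide: Fargate's built-in Fluent Bit reads its configuration from a ConfigMap named aws-logging in a namespace named aws-observability (the region and log group name below are placeholders, and the pod execution role also needs CloudWatch Logs permissions):

```bash
cat <<'EOF' | kubectl apply -f -
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region <region>
        log_group_name fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
        auto_create_group true
EOF
```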

Create Fargate Profile

In order to schedule pods on Fargate in your cluster, you must define a Fargate profile that specifies which pods should use Fargate when they are launched. Fargate profiles are immutable. A profile is essentially a combination of a namespace and, optionally, labels. Pods that match a selector (by matching its namespace and all of the labels specified in the selector) are scheduled on Fargate.

https://gist.github.com/f451bb8d5a5f5e5bc1efb6fd69da7429

This command creates a Fargate profile in the cluster and tells EKS to schedule pods in the namespace 'custom-space' on Fargate. However, we need to make a few changes to the cluster before we can run applications on it.
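The gist corresponds to a create-fargate-profile call along these lines (the profile name, ARNs and subnet IDs are placeholders):

```bash
aws eks create-fargate-profile \
  --cluster-name private-fargate-cluster \
  --fargate-profile-name custom-space-profile \
  --pod-execution-role-arn arn:aws:iam::<account-id>:role/AmazonEKSFargatePodExecutionRole \
  --subnets <private-subnet-1> <private-subnet-2> \
  --selectors namespace=custom-space
```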

Private EKS on Fargate Cluster

Update CoreDNS

By default, CoreDNS is configured to run on Amazon EC2 infrastructure on Amazon EKS clusters. Since our cluster has no EC2 nodes, we need to update the CoreDNS deployment to remove the eks.amazonaws.com/compute-type: ec2 annotation, and create a Fargate profile so that the CoreDNS pods can run on Fargate. Update the following Fargate profile JSON with your own cluster name, account ID and role ARN, and save it to a local file.

https://gist.github.com/ca128cfb42c49d0d9bf274f3bf886c2c
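The gist is not shown inline; a sketch of such a profile definition, with placeholder values, saved to a local file:

```bash
cat > coredns-fargate-profile.json <<'EOF'
{
    "fargateProfileName": "coredns",
    "clusterName": "private-fargate-cluster",
    "podExecutionRoleArn": "arn:aws:iam::<account-id>:role/AmazonEKSFargatePodExecutionRole",
    "subnets": [
        "<private-subnet-1>",
        "<private-subnet-2>"
    ],
    "selectors": [
        {
            "namespace": "kube-system",
            "labels": { "k8s-app": "kube-dns" }
        }
    ]
}
EOF
```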

Apply this JSON with the following command:

https://gist.github.com/44ccf61803d281b83113f6950154a1d2
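Assuming the file name used in the sketch above, this is typically a create-fargate-profile call driven by that JSON:

```bash
aws eks create-fargate-profile --cli-input-json file://coredns-fargate-profile.json
```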

The next step is to remove the annotation from the CoreDNS pods, allowing them to be scheduled on Fargate infrastructure:

https://gist.github.com/895497aae27cf01a4cc0db07610a0bcd
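This is usually done with a JSON patch against the coredns deployment; a sketch:

```bash
# Remove the compute-type annotation so CoreDNS is no longer pinned to EC2
kubectl patch deployment coredns -n kube-system --type json \
  -p '[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
```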

The final step to make CoreDNS function properly is to recreate these pods; we will use a rollout restart of the deployment for this:

https://gist.github.com/deb672afedb16a04e23e9e8d1ef59793
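A sketch of the restart, plus a quick check that the pods come back up:

```bash
kubectl rollout restart -n kube-system deployment coredns

# Watch until the CoreDNS pods report Running
kubectl get pods -n kube-system -l k8s-app=kube-dns -w
```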

After a short while, double-check that the CoreDNS pods are in the Running state in the kube-system namespace before proceeding further.

After these steps, the private EKS on Fargate cluster is up and running. Because the cluster has no internet connectivity, pods scheduled on it cannot pull container images from public registries like Docker Hub. The solution is to set up an ECR repository, host private images in it and reference those images in pod manifests.

Set up the ECR registry

This involves creating an ECR repository, pulling the image locally, tagging it appropriately and pushing it to the ECR registry. A container image can be copied to ECR from the bastion host and accessed by EKS on Fargate via the ECR VPC endpoints.

https://gist.github.com/d83d7be14803485d9985f2ec02de5599
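As a sketch, with the account ID, region and repository name as placeholders:

```bash
# Create the repository
aws ecr create-repository --repository-name nginx

# Pull the public image locally (the bastion host has internet access)
docker pull nginx:latest

# Authenticate Docker to the private registry
aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com

# Tag and push the image into ECR
docker tag nginx:latest <account-id>.dkr.ecr.<region>.amazonaws.com/nginx:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/nginx:latest
```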

Run an Application on the EKS on Fargate Cluster

Now that we have pushed the nginx image to ECR, we can reference it in the deployment YAML.

https://gist.github.com/30fc771361f3c4e1224f5e22135c8c86
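The gist presumably contains a deployment manifest; an equivalent sketch using kubectl directly (the namespace must match the Fargate profile selector created earlier, and the image URI is a placeholder):

```bash
# Namespace matching the Fargate profile selector
kubectl create namespace custom-space

# Deployment pulling the image from the private ECR repository
kubectl create deployment nginx \
  --image=<account-id>.dkr.ecr.<region>.amazonaws.com/nginx:latest \
  -n custom-space
```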

After a short while, cross-check that the nginx application is in the Running state.

https://gist.github.com/1b520ecadaffc8fcc8433cea7bcb851e
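For example, assuming the custom-space namespace used above:

```bash
kubectl get pods -n custom-space -o wide
```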

Describing the deployment should produce output like this:

https://gist.github.com/20c3193e75a8fdd073d19bef2cd6424f
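The command behind that output is simply (assuming the names used above):

```bash
kubectl describe deployment nginx -n custom-space
```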

To confirm that the nginx application is actually running on Fargate infrastructure, check the EC2 console. You will not find any EC2 instances running as part of the EKS cluster, as the underlying infrastructure is fully managed by AWS.

https://gist.github.com/56ddcd113a0c0e9196ca6ad09816b04e
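You can also confirm this from the cluster itself: Fargate registers virtual nodes (named like fargate-ip-...) rather than EC2 instances.

```bash
kubectl get nodes -o wide
```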

Another experiment to try is to create a new deployment with an image that is not in ECR: the pod will fail to start with an image pull error, because the private cluster cannot reach Docker Hub over the public internet to pull the image.

Closing

  • In this walkthrough, we created a private EKS on Fargate cluster with a private API server endpoint.
  • This deployment addresses some of the compliance challenges faced by BFSI and other regulated sectors.

Please let me know if you face challenges replicating this in your own environment. PRs to improve this documentation are welcome. Happy learning!
