A runbook to cf-for-k8s on top of AWS's EKS

Introduction

This documentation walks through all the steps required to run cf-for-k8s on AWS's managed Kubernetes service, EKS. As most of this material already exists in one form or another, it is largely based on the cf-for-k8s README and the in-depth guide Deploying CF for K8s. To expedite the writing, direct links are used as sources where appropriate, with the intent of eventually integrating them into this document while keeping the references. The rest of the documentation focuses on the missing pieces that should get you running out of the box, without having to do your own research.

Prerequisites

AWS

Although you could probably reuse an already deployed EKS cluster and existing IAM credentials, the following steps take you through creating everything from scratch.

Create your VPC

Replace region-code with your region and my-eks-vpc-stack with the desired stack name

aws cloudformation create-stack \
  --region region-code \
  --stack-name my-eks-vpc-stack \
  --template-url https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml
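
To confirm the stack finished provisioning before moving on, you can poll its status with describe-stacks (same region and stack name as above); it should eventually report CREATE_COMPLETE:

aws cloudformation describe-stacks \
  --region region-code \
  --stack-name my-eks-vpc-stack \
  --query "Stacks[0].StackStatus"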

Connect to AWS console

https://aws.amazon.com/console/

https://console.aws.amazon.com/iam/

Create your EKS Cluster

  • Under Services, type EKS and click Elastic Kubernetes Service
  • From the Add Cluster drop-down menu click Create
    • Configure cluster

      • Name: cf-for-k8s
      • Kubernetes version: 1.19 (at the time of writing, the newest available version that works with cf-for-k8s; check the prerequisites above for an updated version)
      • Cluster Service Role: eksClusterRole (or whatever name you used when creating the cluster service role)
      • Next
    • Specify networking

      • VPC: Select the VPC created earlier (Hint: use the refresh button in case you just created it)
      • Next
    • Configure logging

      • Next
    • Review and create

      • Create

Hint: To avoid a "Create failed: Instances failed to join the kubernetes cluster" error when creating the node group later, enable both public and private access on your cluster by modifying its cluster endpoint access.
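
If you prefer a console-free setup, the following AWS CLI sketch covers both the cluster creation and the endpoint access hint above; the role ARN and subnet IDs are placeholders for the resources created earlier:

aws eks create-cluster \
  --region region-code \
  --name cf-for-k8s \
  --kubernetes-version 1.19 \
  --role-arn arn:aws:iam::<account-id>:role/eksClusterRole \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,endpointPublicAccess=true,endpointPrivateAccess=true

# wait until the control plane is up before creating the node group
aws eks wait cluster-active --region region-code --name cf-for-k8s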

  • Create node-role-trust-policy.json with the following contents
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
  • Create the node IAM role

Replace myAmazonEKSNodeRole with the desired role name and node-role-trust-policy.json with the actual file name used earlier

aws iam create-role \
  --role-name myAmazonEKSNodeRole \
  --assume-role-policy-document file://"node-role-trust-policy.json"

Attach the required managed IAM policies to the role.

aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
  --role-name myAmazonEKSNodeRole
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
  --role-name myAmazonEKSNodeRole
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --role-name myAmazonEKSNodeRole
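
To double-check the role before attaching it to a node group, list the policies now bound to it; all three policies above should appear in the output:

aws iam list-attached-role-policies --role-name myAmazonEKSNodeRole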

Add nodes to your cluster

  • Open the Amazon EKS console

  • Select the recently created cluster

  • Open the Configuration tab, then the Compute tab

  • Click Add Node Group

Configure Node Group

  • Name: cf-for-k8s-node-group
  • Node IAM role: myAmazonEKSNodeRole (or the name used earlier)
  • Next

Set compute and scaling configuration

  • Instance types: t3a.xlarge (the cheapest instance type that covers the 4 vCPU / >15 GB memory prerequisite)
  • Minimum size: 2
  • Maximum size/Desired size: 5
  • Next

Specify networking

  • Next

Review and create

  • Create
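
For reference, the equivalent node group creation from the CLI looks like the sketch below; the subnet IDs and account ID are placeholders for your own values:

aws eks create-nodegroup \
  --region region-code \
  --cluster-name cf-for-k8s \
  --nodegroup-name cf-for-k8s-node-group \
  --node-role arn:aws:iam::<account-id>:role/myAmazonEKSNodeRole \
  --subnets subnet-aaaa subnet-bbbb \
  --instance-types t3a.xlarge \
  --scaling-config minSize=2,maxSize=5,desiredSize=5

# block until the nodes are provisioned and have joined the cluster
aws eks wait nodegroup-active --region region-code --cluster-name cf-for-k8s --nodegroup-name cf-for-k8s-node-group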

Deploy cf-for-k8s

Once you have a working Kubernetes cluster, you are ready to deploy cf-for-k8s.

Enable kubectl access from your local machine

Update region-code and my-cluster accordingly

aws eks update-kubeconfig --region region-code --name my-cluster

Important: This modifies your existing ~/.kube/config (the new cluster context is merged in and set as current), so make a copy of it first as required.
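
Once the kubeconfig is in place, verify that you can reach the cluster and that the node group instances have joined; you should see your worker nodes in a Ready state:

kubectl get nodes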

Copy cf-for-k8s code locally

git clone https://github.com/cloudfoundry/cf-for-k8s.git -b main

and cd into it

cd cf-for-k8s

create a special temp directory to store credential files.

mkdir ~/tmp

and assign its path to a variable

TMP_DIR=~/tmp

Create CF installation values

./hack/generate-values.sh -d <cf-domain> > ${TMP_DIR}/cf-values.yml

where <cf-domain> is the domain name used for your cf deployment
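
The generated file should contain, among other things, the domains derived from <cf-domain> and a generated cf_admin_password. A quick way to eyeball it:

grep -E 'domain|cf_admin_password' ${TMP_DIR}/cf-values.yml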

Provide your external app registry credentials

app_registry:
  hostname: https://index.docker.io/v1/
  repository_prefix: "1oannis"
  username: "1oannis"
  password: "s0m3p@ssw0rd"

and add them, with the above indentation, as the last block of your recently created ${TMP_DIR}/cf-values.yml

vim ${TMP_DIR}/cf-values.yml

This example is for Docker Hub. More examples can be found in the cf-for-k8s documentation
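
Before rendering, it is worth confirming that the credentials actually work against the registry, e.g. for Docker Hub; a failed login here would otherwise only surface much later, as image push errors during staging:

docker login https://index.docker.io/v1/ -u 1oannis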

Create the final yaml that holds all the information required to deploy cf-for-k8s on your Kubernetes cluster

ytt -f config -f ${TMP_DIR}/cf-values.yml > ${TMP_DIR}/cf-for-k8s-rendered.yml

Deploy cf-for-k8s

kapp deploy -a cf -f ${TMP_DIR}/cf-for-k8s-rendered.yml -y

Important: As per the documentation, save the values file somewhere secure (remember, it contains secrets!) for future upgrades. You may also want to consider saving the final rendered K8s configuration file for future reference.

You should have your CF deployment up within roughly ten minutes.
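
While kapp prints its own progress, you can also keep an eye on things from a second terminal:

kubectl get pods -A | grep -v Running   # anything still starting or crashing
kapp inspect -a cf                      # summary of the resources kapp deployed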

Grab DNS information

Given that we created the Kubernetes cluster on EKS, you will want the hostname of the ELB:

kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'

Edit your DNS records

Add a wildcard subdomain as a CNAME pointing to the above hostname, e.g.

*.cf  CNAME a269c0c59bbe44e92b2914d37dd0f148-229471642.us-east-1.elb.amazonaws.com.

Depending on your DNS registrar this might differ. In the example above the cf-domain is implied, and *.cf is the wildcard subdomain record
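
Once the record is in place, any hostname matching the wildcard should resolve to the ELB; you can verify propagation with dig:

dig +short api.<cf-domain>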

Connect to cf

cf api --skip-ssl-validation https://api.<cf-domain>

If your DNS changes haven't propagated yet, do a dig

dig a269c0c59bbe44e92b2914d37dd0f148-229471642.us-east-1.elb.amazonaws.com

and use one of the returned IPs for api.<cf-domain> in your /etc/hosts file

sudo vim /etc/hosts
3.215.26.208 api.<cf-domain>

Hint: Remember to remove that record once the DNS changes have propagated, or you might find yourself troubleshooting connectivity issues that aren't really there (once the ELB IP changes)

Log in to CF

cf auth admin <password>

the password can be found under cf_admin_password in your ${TMP_DIR}/cf-values.yml file. If you have yq installed you can parse it in one line

cf auth admin "$(yq '.cf_admin_password' ${TMP_DIR}/cf-values.yml)"

Create your org/space tree

cf create-org test-org
cf create-space -o test-org test-space
cf target -o test-org -s test-space

Deploy a simple test app

cf push test-node-app -p tests/smoke/assets/test-node-app

and test the result:

curl -k https://test-node-app.<cf-domain>

and if that doesn't look promising enough, try a better-looking example app

cd ~
git clone https://github.com/cloudfoundry-samples/spring-music
cd spring-music
cf push
cf apps # to get the route

and visit the app's route over https:// in your browser. Needless to say, your DNS changes have to have propagated for this to work, but you always have your /etc/hosts file until they do.

Happy cf-for-k8s!
