
#!/bin/bash

Set Variables

Set working directory (pwd for now)

export TMP_DIR="$(pwd)"

Set suffix for AWS resources

export SUFFIX="-jrfuller"

PRE-REQs:

1. You must have an OCP cluster installed on AWS.

2. It must be a multi-AZ cluster.

3. You need to use STS.
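
A quick way to sanity-check the multi-AZ and STS prerequisites from the CLI (both fields are standard OpenShift resources; this check is an optional addition, not part of the original script):

```bash
# Multi-AZ check: the ZONE column should show at least two distinct availability zones.
oc get nodes -L topology.kubernetes.io/zone

# STS check: a non-empty serviceAccountIssuer indicates an OIDC/STS-enabled cluster.
oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}'
```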

Get/Set the cluster name (you can use "rosa list clusters" or the oc command below)

export CLUSTER_NAME="$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}")"

Disable AWS cli output paging

export AWS_PAGER=""

Get/Set the VPC ID of your worker nodes.

export VPC=$(aws ec2 describe-vpcs \
  --filters Name=tag-value,Values="${CLUSTER_NAME}*" \
  --query "Vpcs[].VpcId" --output text)

Get/Set VPC Subnets

export SUBNET_IDS=$(aws ec2 describe-subnets \
  --filters Name=tag-value,Values="${CLUSTER_NAME}-public*" \
  --query "Subnets[].SubnetId" --output text | sed 's/\t/ /g')

Get/Set AWS Region (or set it manually)

export AWS_REGION="$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')"
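
Optionally, echo the discovered values before continuing; empty output means the tag filters above did not match anything (a small sanity check, not part of the original script):

```bash
# All three variables are consumed by later commands, so verify them now.
echo "VPC=${VPC}"
echo "SUBNET_IDS=${SUBNET_IDS}"
echo "AWS_REGION=${AWS_REGION}"
```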

Set ALB Namespace Name

export NAMESPACE="alb-controller"

Set Service Account Name

export SA="alb-controller"

Get/Set AWS OIDC Provider for the TrustPolicy

export OIDC_PROVIDER=$(oc get authentication.config.openshift.io cluster -o json \
  | jq -r .spec.serviceAccountIssuer | sed -e "s/^https:\/\///")

Get/Set AWS Account ID (STS caller identity), used for the Policy and TrustPolicy

export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

Get the iam_policy.json and name it iam-policy.json locally

wget -O $TMP_DIR/iam-policy.json \
  https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json

Create AWSLoadBalancerControllerIAMPolicy IAM policy

aws iam create-policy \
  --policy-name "AWSLoadBalancerControllerIAMPolicy${SUFFIX}" \
  --policy-document file://$TMP_DIR/iam-policy.json \
  --query Policy.Arn --output text

Set AWSLoadBalancerControllerIAMPolicy Name

export LB_POL_NAME="AWSLoadBalancerControllerIAMPolicy${SUFFIX}"

Set POLICY_ARN

export POLICY_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:policy/${LB_POL_NAME}"

Set ALB Role Name

export ALB_ROLE_NAME="rosa1-alb-controller${SUFFIX}"

Set ALB_ROLE ARN (NOTE: FIX ME?)

export ALB_ROLE="arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ALB_ROLE_NAME}"

Create trust policy

cat <<EOF > $TMP_DIR/TrustPolicy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": [
            "system:serviceaccount:${NAMESPACE}:${SA}"
          ]
        }
      }
    }
  ]
}
EOF
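
To confirm the heredoc expanded the variables and produced valid JSON, you can pretty-print the file (jq is already a dependency of this gist; this check is optional):

```bash
# jq exits non-zero with a parse error if TrustPolicy.json is not valid JSON.
jq . "$TMP_DIR/TrustPolicy.json"
```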

Create Role

aws iam create-role --role-name "${ALB_ROLE_NAME}" \
  --assume-role-policy-document file://$TMP_DIR/TrustPolicy.json \
  --query "Role.Arn" --output text

Attach Role Policy

aws iam attach-role-policy \
  --role-name "${ALB_ROLE_NAME}" \
  --policy-arn $POLICY_ARN

Tag subnets

aws ec2 create-tags \
  --resources $(echo ${SUBNET_IDS}) \
  --tags Key=kubernetes.io/role/elb,Value=''

Install ALB Controller
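
The helm install below assumes the eks chart repository is already configured and the target namespace exists. If not, a minimal sketch (the repo URL is the standard eks-charts location; it is not stated in the original):

```bash
# Add and refresh the chart repo that hosts eks/aws-load-balancer-controller.
helm repo add eks https://aws.github.io/eks-charts
helm repo update

# Create the namespace the chart will be installed into (NAMESPACE was set above).
oc new-project "$NAMESPACE"
```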

helm install alb-controller eks/aws-load-balancer-controller \
  -n $NAMESPACE \
  --set clusterName=$CLUSTER_NAME \
  --set serviceAccount.name=$SA \
  --set "vpcId=$VPC" \
  --set "region=$AWS_REGION" \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=$ALB_ROLE \
  --set "image.repository=amazon/aws-alb-ingress-controller" \
  --version 1.4.6

Set SCC for $SA in $NAMESPACE

oc adm policy add-scc-to-user anyuid -z $SA -n $NAMESPACE
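
To verify the controller came up after the helm install and SCC change, list its pods (the label selector below is the chart's standard app.kubernetes.io/name label; adjust it if your release names differ):

```bash
# Expect the aws-load-balancer-controller pods to reach Running.
oc -n "$NAMESPACE" get pods -l app.kubernetes.io/name=aws-load-balancer-controller
```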

##################

APP TEST

##################

Create the app namespace

oc new-project my-public-app

Create application

oc new-app --docker-image=docker.io/openshift/hello-openshift -n my-public-app

Change the service to type NodePort

oc patch service hello-openshift -p '{"spec":{"type":"NodePort"}}' -n my-public-app

Create the Ingress

cat <<EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-openshift
  namespace: my-public-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/shield-advanced-protection: "true"
  labels:
    app: hello-openshift
spec:
  rules:
    - host: test.bar
      http:
        paths:
          - pathType: Prefix
            path: /hello
            backend:
              service:
                name: hello-openshift
                port:
                  number: 8080
          - pathType: Prefix
            path: /bye
            backend:
              service:
                name: hello-openshift
                port:
                  number: 8080
EOF

Set Ingress Name variable:

export INGRESS_NAME="hello-openshift"

Set ALB Hostname variable:

export ALB_HOSTNAME="$(oc get ingress ${INGRESS_NAME} -o jsonpath='{.status.loadBalancer.ingress[].hostname}')"

Use curl to test.

We set no default path, so this returns nothing. Adding "path: /" to the Ingress above would make it work.

curl -s --header "Host: test.bar" ${ALB_HOSTNAME}/

We set paths to "hello" and "bye"

curl -s --header "Host: test.bar" ${ALB_HOSTNAME}/hello
Hello OpenShift!
curl -s --header "Host: test.bar" ${ALB_HOSTNAME}/bye
Hello OpenShift!

Prereqs:

  • A multi-AZ ROSA cluster (public or PrivateLink). If you do not use a multi-AZ cluster, the ALB will fail to provision ("couldn't auto-discover subnets: subnets count less than minimal required count: 1 < 2").
  • Be logged in with the rosa, aws, and oc CLIs.
  1. Set up your environment
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
echo $AWS_ACCOUNT_ID
export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}")
echo $CLUSTER_NAME
export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer| sed -e "s/^https:\/\///")
echo $OIDC_ENDPOINT
export WORK_DIR=/tmp/aws-lb-operator-workdir
mkdir -p $WORK_DIR

# User defined variables
## AWS Region code
export AWS_REGION=ap-southeast-2
  2. AWS Tags
# Ensure VPC and subnets are tagged appropriately 

## Required; otherwise the AWS LB manager pod fails subnet auto-discovery with the following error: ERROR setup failed to get VPC ID {"error": "no VPC with tag \"kubernetes.io/cluster/<CLUSTER-NAME>\" found"}

## ROSA VPC ID
export VPC=vpc-09a7051a8dea2b66e
aws ec2 create-tags --resources ${VPC} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned
## For each ROSA public subnet, run this command
export PUBLIC_SUBNET_ID="subnet-01a4b63a9675cca3d" # subnet-0e3ab08e7cdc73c57 subnet-088b2dfea6085e82a"
aws ec2 create-tags --resources "${PUBLIC_SUBNET_ID}" --tags Key=kubernetes.io/role/elb,Value=''
## For each ROSA Private subnet, use this command
export PRIVATE_SUBNET_ID="subnet-0949e4471fe3d51b0" # subnet-0230e4ee9640103fa subnet-0a8fea53150adc142"
aws ec2 create-tags --resources "${PRIVATE_SUBNET_ID}" --tags Key=kubernetes.io/role/internal-elb,Value=''
  3. Install LB operator
oc new-project aws-load-balancer-operator
wget -O "${WORK_DIR}/load-balancer-operator-policy.json" \
  https://raw.githubusercontent.com/rh-mobb/documentation/main/content/docs/rosa/aws-load-balancer-operator/load-balancer-operator-policy.json
less "${WORK_DIR}/load-balancer-operator-policy.json"
POLICY_ARN=$(aws --region "$AWS_REGION" --query Policy.Arn \
--output text iam create-policy \
--policy-name aws-load-balancer-operator-policy \
--policy-document "file://${WORK_DIR}/load-balancer-operator-policy.json")
echo $POLICY_ARN


cat <<EOF > "${WORK_DIR}/trust-policy.json"
{
  "Version": "2012-10-17",
  "Statement": [
  {
  "Effect": "Allow",
  "Condition": {
    "StringEquals" : {
      "${OIDC_ENDPOINT}:sub": ["system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager", "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"]
    }
  },
  "Principal": {
    "Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}"
  },
  "Action": "sts:AssumeRoleWithWebIdentity"
  }
  ]
}
EOF
less "${WORK_DIR}/trust-policy.json"

ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER_NAME}-alb-operator" \
--assume-role-policy-document "file://${WORK_DIR}/trust-policy.json" \
--query Role.Arn --output text)
echo $ROLE_ARN

aws iam attach-role-policy --role-name "${CLUSTER_NAME}-alb-operator" \
  --policy-arn $POLICY_ARN
echo $?

cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
stringData:
  credentials: |
    [default]
    role_arn = $ROLE_ARN
    web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
EOF

cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  targetNamespaces:
    - aws-load-balancer-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  channel: stable-v0
  installPlanApproval: Automatic
  name: aws-load-balancer-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: aws-load-balancer-operator.v0.2.0
EOF
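
If you prefer the CLI over the web console for watching the install, a quick optional check (standard OLM resources):

```bash
# The Subscription reports the resolved CSV once OLM processes it;
# the CSV's PHASE should eventually be Succeeded.
oc -n aws-load-balancer-operator get subscription aws-load-balancer-operator \
  -o jsonpath='{.status.installedCSV}{"\n"}'
oc -n aws-load-balancer-operator get csv
```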

Wait a few minutes for the operator to be installed (you can check progress status in the OpenShift Web Console) and then create the following resource:

cat << EOF | oc apply -f -
apiVersion: networking.olm.openshift.io/v1alpha1
kind: AWSLoadBalancerController
metadata:
  name: cluster
spec:
  credentials:
    name: aws-load-balancer-operator
EOF

Verify that both the manager and controller pods are running:

oc -n aws-load-balancer-operator get pods
aws-load-balancer-controller-cluster-7f9bc7c48c-fkhbl            1/1     Running   0              64s
aws-load-balancer-operator-controller-manager-56664699b4-rw5rb   2/2     Running   4 (114s ago)   3m8s
  4. Prepare for testing

Deploy the echoserver:

oc new-project echoserver
oc adm policy add-scc-to-user anyuid system:serviceaccount:echoserver:default
oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/echoservice/echoserver-deployment.yaml
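
Before provisioning any load balancers, confirm the echoserver pods are running (a small optional check; it assumes the upstream example names the deployment `echoserver`):

```bash
# Wait for the example deployment to become available, then list its pods.
oc -n echoserver rollout status deployment/echoserver
oc -n echoserver get pods
```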
  5. Testing - Public-facing ALB
oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/echoservice/echoserver-ingress.yaml
oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/echoservice/echoserver-service.yaml

Wait for the ALB to be provisioned. You can verify status in the AWS Console > EC2 > Load Balancers.

Then, run:

INGRESS=$(oc -n echoserver get ingress echoserver \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -sH "Host: echoserver.example.com" \
  "http://${INGRESS}" | grep Hostname
  6. Testing - Public-facing NLB

There are two ways to provision the NLB: through the loadBalancerClass attribute, or via the annotation service.beta.kubernetes.io/aws-load-balancer-type: external (see https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/).

Both provision an NLB through the AWS Load Balancer Operator (Controller).

This method uses annotations only:

cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: echoserver-nlb
  namespace: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  selector:
    app: echoserver
EOF
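
For reference, the loadBalancerClass-based variant mentioned above would drop the aws-load-balancer-type annotation and set spec.loadBalancerClass instead. A sketch follows (the Service name echoserver-nlb-class is made up for illustration; apply one variant or the other, not both):

```bash
# Sketch only: NLB via loadBalancerClass instead of the aws-load-balancer-type annotation.
cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: echoserver-nlb-class   # hypothetical name, not from the original gist
  namespace: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  loadBalancerClass: service.k8s.aws/nlb
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  selector:
    app: echoserver
EOF
```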

At this point, you can verify the logs of the manager. They should show `{"level":"info","ts":1677669348.1301181,"logger":"controllers.service","msg":"created loadBalancer","stackID":"echoserver/echoserver-nlb","resourceID":"LoadBalancer","arn":"arn:aws:elasticloadbalancing:ap-southeast-2:<....>:loadbalancer/net/k8s-echoserv-echoserv-<...>/<...>"}`.

You can also go to the AWS Console > EC2 > Load Balancers. The NLB should appear in the provisioning state.

You should also verify that all NLB targets are healthy: AWS Console > EC2 > Target Groups > Health: count > 0.

Once provisioned, run the following commands:

```bash
NLB=$(oc -n echoserver get service echoserver-nlb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -s "http://${NLB}" | grep Hostname
```

Cleanup

# oc adm policy remove-scc-from-user anyuid system:serviceaccount:echoserver:default

oc delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/echoservice/echoserver-ingress.yaml
oc delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/echoservice/echoserver-service.yaml
oc delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/echoservice/echoserver-deployment.yaml
oc delete service/echoserver-nlb -n echoserver
oc delete project echoserver

oc delete AWSLoadBalancerController/cluster
oc delete secret/aws-load-balancer-operator -n aws-load-balancer-operator
oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator
oc delete project aws-load-balancer-operator

rm -rf /tmp/aws-lb-operator-workdir 
POLICY_ARN=$(aws iam list-policies --query \
  "Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}" \
  --output text)
echo $POLICY_ARN
aws iam detach-role-policy --role-name "${CLUSTER_NAME}-alb-operator" --policy-arn $POLICY_ARN
echo $?
aws iam delete-role --role-name "${CLUSTER_NAME}-alb-operator"
echo $?
aws iam delete-policy --policy-arn $POLICY_ARN
echo $?

# Remove the elb role tag from each public subnet tagged earlier
aws ec2 delete-tags --resources "${PUBLIC_SUBNET_ID}" --tags Key=kubernetes.io/role/elb,Value=''
# Remove the internal-elb role tag from each private subnet tagged earlier
aws ec2 delete-tags --resources "${PRIVATE_SUBNET_ID}" --tags Key=kubernetes.io/role/internal-elb,Value=''
echo $?
aws ec2 delete-tags --resources ${VPC} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned
echo $?