IRSA in EKS within same and across AWS Accounts

This is a gist of the examples also mentioned in the blog IAM Roles for Service Accounts (IRSA) in AWS EKS within and cross AWS Accounts. The prerequisite for this gist is an EKS Cluster created as explained in my earlier blog Create Amazon EKS Cluster within its VPC using Terraform, OR you can use this github repository.

Running Example for IRSA within same account

This section assumes you have the EKS Cluster running and your AWS CLI configured to talk to the AWS Account where your EKS Cluster is running. If not, please follow our earlier blog on How to create an EKS Cluster using Terraform to get the EKS Cluster up and running, OR you can also directly use this README to deploy the EKS Cluster.
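
A quick, optional sanity check before proceeding (this assumes your default AWS CLI profile points at the account hosting the cluster) -

# confirm which AWS account the CLI is talking to
aws sts get-caller-identity --query Account --output text

# confirm the cluster exists and is ACTIVE
aws eks describe-cluster --name <your eks cluster name> --region <aws region where you created eks cluster> --query "cluster.status" --output text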

  • Retrieve the kubeconfig and configure your terminal to talk to the AWS EKS Cluster as follows; this updates the current kubeconfig context to point to the cluster -
export EKS_CLUSTER_NAME=<your eks cluster name>
export EKS_AWS_REGION=<aws region where you created eks cluster>

aws eks update-kubeconfig --region ${EKS_AWS_REGION} --name ${EKS_CLUSTER_NAME}

# validate kubecontext as below, should point to your cluster
kubectl config current-context
  • Create a namespace irsa-test and a service account named irsa-test in that namespace as follows -
# create namespace
kubectl create namespace irsa-test

# create serviceaccount
kubectl create serviceaccount --namespace irsa-test irsa-test
  • Retrieve OIDC Issuer Id from the EKS Cluster
export EKS_CLUSTER_NAME=<your eks cluster name>
export EKS_AWS_REGION=<aws region where you created eks cluster>

export OIDC_ISSUER_ID=$(aws eks describe-cluster --name ${EKS_CLUSTER_NAME} --region ${EKS_AWS_REGION} --query "cluster.identity.oidc.issuer" | awk -F'/' '{print $NF}' | tr -d '"')
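
IRSA also requires an IAM OIDC identity provider for this issuer to be registered in IAM (the Terraform setup referenced above should create it). An optional check -

# the output should contain an OIDC provider ARN ending in the issuer id; if it is empty, associate the provider first
aws iam list-open-id-connect-providers | grep ${OIDC_ISSUER_ID}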
  • Create an IAM Role with the AWS-managed AmazonS3FullAccess policy attached and configure its trust relationship for the ServiceAccount named irsa-test in the namespace irsa-test as follows -
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
export EKS_AWS_REGION="<replace with aws region where you created eks cluster>"
export EKS_OIDC_ID=$(echo $OIDC_ISSUER_ID)
export NAMESPACE="irsa-test"
export SERVICE_ACCOUNT_NAME="irsa-test"

cat > trust.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/oidc.eks.${REGION}.amazonaws.com/id/${EKS_OIDC_ID}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.${REGION}.amazonaws.com/id/${EKS_OIDC_ID}:sub": "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF

# create IAM role and attach trust policy
aws iam create-role --role-name irsa-test --assume-role-policy-document file://trust.json

# remove trust.json file
rm trust.json

# attach AmazonS3FullAccess Permissions policy to the iam role
aws iam attach-role-policy --role-name irsa-test --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
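
Optionally, verify the role was created as expected before moving on -

# print the trust policy; the Federated principal and the :sub condition should reference your cluster's OIDC issuer
aws iam get-role --role-name irsa-test --query "Role.AssumeRolePolicyDocument" --output json

# confirm AmazonS3FullAccess is attached
aws iam list-attached-role-policies --role-name irsa-test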
  • Annotate ServiceAccount irsa-test in namespace irsa-test with IAM Role as follows -
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
export NAMESPACE="irsa-test"
export SERVICE_ACCOUNT_NAME="irsa-test"

# annotate service account
kubectl annotate serviceaccount --namespace ${NAMESPACE} ${SERVICE_ACCOUNT_NAME} eks.amazonaws.com/role-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:role/irsa-test
  • Deploy a pod using the irsa-test service account in irsa-test namespace as follows -
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: irsa-test
  namespace: irsa-test
  labels:
    app: aws-cli
spec:
  selector:
    matchLabels:
      app: aws-cli
  template:
    metadata:
      labels:
        app: aws-cli
    spec:
      serviceAccountName: irsa-test
      containers:
      - name: aws-cli
        image: amazon/aws-cli
        command: [ "/bin/sh", "-c", "--" ]
        args: [ "while true; do sleep 39000; done;" ]
EOF
  • Make sure the pod is running, that it uses the irsa-test serviceAccount, and that the irsa-test serviceAccount is annotated with the IAM Role you created above, as follows -
# check pod is running
kubectl get po -n irsa-test irsa-test

# check pod is deployed with irsa-test serviceaccount
kubectl get deploy -n irsa-test irsa-test -o jsonpath='{.spec.template.spec.serviceAccountName}'

# check service account is annotated with IAM Role
kubectl get sa -n irsa-test irsa-test -o jsonpath='{.metadata.annotations}'
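
You can also confirm that EKS injected the web identity token configuration into the pod; the two environment variables below are what the AWS CLI/SDK uses to assume the IAM role -

# both variables should be set inside the pod
kubectl exec -n irsa-test deploy/irsa-test -- env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'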
  • Exec into the pod deployed above, create an s3 bucket, and validate that the bucket is created successfully as follows; this proves that your pod is configured for IRSA -
# exec into the pod
export POD_NAME=$(kubectl get po -n irsa-test | grep irsa-test | awk -F ' ' '{print $1}')
kubectl exec -it -n irsa-test ${POD_NAME} -- bash

# run following commands inside the pod
export BUCKET_NAME="irsa-test-sample-$(date +%s)"

# create s3 bucket
aws s3api create-bucket --bucket ${BUCKET_NAME} --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2

# validate s3 bucket is created, there shouldn't be any error message on stdout
aws s3api head-bucket --bucket ${BUCKET_NAME} --region us-west-2

# delete s3 bucket, there shouldn't be any errors on stdout
aws s3 rm s3://${BUCKET_NAME} --region us-west-2 --recursive
aws s3api delete-bucket --bucket ${BUCKET_NAME} --region us-west-2

# validate s3 bucket is deleted, you should see 404 error message on stdout
aws s3api head-bucket --bucket ${BUCKET_NAME} --region us-west-2
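
Still inside the pod, you can additionally confirm that the caller identity resolves to the irsa-test role through an assumed-role ARN (the session name at the end will differ in your output) -

# expected output similar to arn:aws:sts::<account id>:assumed-role/irsa-test/<session name>
aws sts get-caller-identity --query Arn --output text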

Running Example for Cross-Account IRSA

This section assumes you have the setup from the Running Example for IRSA within same account section above. If not, please read and follow that section before proceeding with the examples below -

  • Make sure your AWS CLI is now configured to talk to Account2, the AWS Account where you want the pod running in Account1 to create resources.

  • Create an IAM Role with AmazonS3FullAccess permissions in Account2 and a trust relationship that allows this role to be assumed by the irsa-test IAM Role created earlier in Account1, as follows -

export AWS_ACCOUNT1_ID="AWS Account1 Number"

cat > trust.json << EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::${AWS_ACCOUNT1_ID}$:role/irsa-test"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF

# create IAM role and attach trust policy
aws iam create-role --role-name irsa-test --assume-role-policy-document file://trust.json

# remove trust.json file
rm trust.json

# attach AmazonS3FullAccess Permissions policy to the iam role
aws iam attach-role-policy --role-name irsa-test --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
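
Optionally, verify that the Account2 role's trust policy references Account1's irsa-test role as the principal -

aws iam get-role --role-name irsa-test --query "Role.AssumeRolePolicyDocument" --output json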
  • Login to AWS Account1, go to the IAM Role irsa-test, and create an inline policy that allows it to assume the irsa-test role in Account2 as below. Note this is an identity-based policy, so it uses Resource rather than Principal -
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<ACC2_NUMBER>:role/irsa-test"
        }
    ]
}

Replace ACC2_NUMBER with AWS Account2's number.
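
If you prefer the CLI over the console, the same inline policy can be attached roughly as below; this is a sketch, the policy name allow-assume-account2-irsa-test is arbitrary, and your AWS CLI must be pointing back at Account1 -

export AWS_ACCOUNT2_ID="<replace with AWS Account2 number>"

cat > inline-policy.json << EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::${AWS_ACCOUNT2_ID}:role/irsa-test"
        }
    ]
}
EOF

# attach the inline policy to Account1's irsa-test role
aws iam put-role-policy --role-name irsa-test --policy-name allow-assume-account2-irsa-test --policy-document file://inline-policy.json

# remove inline-policy.json file
rm inline-policy.json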

  • This assumes your kubeconfig is still pointing to the EKS Cluster created earlier and that the irsa-test pod is still running. Make sure to exit the interactive terminal of the irsa-test pod if it's still active from the earlier session.

  • Exec into the irsa-test pod, assume the IAM Role in Account2, retrieve the temporary credentials, and configure the AWS CLI to talk to Account2. Then create an s3 bucket and validate that it gets created in Account2, as follows -

# exec into the irsa-test pod
export POD_NAME=$(kubectl get po -n irsa-test | grep irsa-test | awk -F ' ' '{print $1}')
kubectl exec -it -n irsa-test ${POD_NAME} -- bash

# Account2's number
export AWS_ACCOUNT2_NUMBER=<replace with Account2 number>

# retrieve temporary credentials for Account2's irsa-test role (inspect the output)
aws sts assume-role --role-arn "arn:aws:iam::${AWS_ACCOUNT2_NUMBER}:role/irsa-test" --role-session-name "create-bucket-session"

# capture the same output and parse AccessKeyId, SecretAccessKey and SessionToken into environment variables
# (the amazon/aws-cli image does not ship jq, hence the grep/cut below)
ASSUME_ROLE_OUTPUT=$(aws sts assume-role --role-arn "arn:aws:iam::${AWS_ACCOUNT2_NUMBER}:role/irsa-test" --role-session-name "create-bucket-session")

export AWS_ACCESS_KEY_ID=$(echo $ASSUME_ROLE_OUTPUT | grep -o '"AccessKeyId": "[^"]*"' | cut -d'"' -f4)
export AWS_SECRET_ACCESS_KEY=$(echo $ASSUME_ROLE_OUTPUT | grep -o '"SecretAccessKey": "[^"]*"' | cut -d'"' -f4)
export AWS_SESSION_TOKEN=$(echo $ASSUME_ROLE_OUTPUT | grep -o '"SessionToken": "[^"]*"' | cut -d'"' -f4)

# set bucket name
BUCKET_NAME="cross-irsa-test-$(date +%s)"

# verify the CLI is now using Account2's credentials; the output should be Account2's account number
aws sts get-caller-identity --query Account --output text

# create s3 bucket
aws s3api create-bucket --bucket ${BUCKET_NAME} --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2

# validate s3 bucket is created, there shouldn't be any error message on stdout
aws s3api head-bucket --bucket ${BUCKET_NAME} --region us-west-2

# delete s3 bucket, there shouldn't be any errors on stdout
aws s3 rm s3://${BUCKET_NAME} --region us-west-2 --recursive
aws s3api delete-bucket --bucket ${BUCKET_NAME} --region us-west-2

# validate s3 bucket is deleted, you should see 404 error message on stdout
aws s3api head-bucket --bucket ${BUCKET_NAME} --region us-west-2
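
To switch the pod's CLI back to Account1's IRSA credentials, drop the temporary environment variables (they would also expire on their own after the assumed-role session duration, one hour by default) -

# remove the temporary Account2 credentials; the CLI falls back to the IRSA web identity token
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# should print Account1's account number again
aws sts get-caller-identity --query Account --output text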

This proves that a Pod running inside an EKS Cluster in Account1 can now talk to AWS Services in Account2.
