
@kkoppenhaver
Last active August 15, 2025 06:26
Plural.sh config

πŸ› οΈ Zero-to-EKS: Auth-Focused Setup for Plural Console

This guide walks through all the key steps and IAM touchpoints to deploy the Plural Console via Plural.sh using Terraform on AWS. It assumes you're provisioning dev/staging/prod environments with standard Kubernetes tooling.


βœ… What You’ll Need

  • AWS account (admin access initially)
  • Route53 hosted zone (e.g. your-domain.com)
  • Terraform + Plural CLI
  • Optional: kubectl + helm

1. 🧱 Infrastructure Provisioning

A. Bootstrap Identity

Use one of:

  • IAM role (preferred): infra-provisioner, assumed by the Plural runner via assume_role
  • IAM user (OK for PoC): static access/secret key

Minimum permissions (intentionally broad for bootstrapping; scope these down for production):

ec2:*, eks:*, iam:PassRole, iam:*, elasticloadbalancing:*, autoscaling:*, route53:*, ecr:GetAuthorizationToken, logs:*, sts:AssumeRole
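For the preferred IAM-role path, the `infra-provisioner` role needs a trust policy that lets the runner assume it. A minimal sketch, assuming a made-up account ID (`123456789012`) and runner role name (`plural-runner`):

```shell
# Hypothetical account ID and runner role name -- substitute your own values.
ACCOUNT_ID="123456789012"
RUNNER_ROLE="plural-runner"

# Trust policy allowing the runner role to assume infra-provisioner
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::${ACCOUNT_ID}:role/${RUNNER_ROLE}" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# The role itself would then be created with:
#   aws iam create-role --role-name infra-provisioner \
#     --assume-role-policy-document file://trust-policy.json
echo "trust policy written"
```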

B. VPC & Subnet Tags (required for Kubernetes service discovery)

public_subnet_tags = {
  "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  "kubernetes.io/role/elb" = "1"
}

private_subnet_tags = {
  "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  "kubernetes.io/role/internal-elb" = "1"
}

C. NAT Gateway Warning

The default VPC module configuration provisions one NAT gateway per AZ, which quickly exhausts the default AWS Elastic IP quota (five per region).

To avoid:

enable_nat_gateway = true
single_nat_gateway = true

Or disable NAT entirely for a public-only setup.

D. EKS Cluster & Nodegroup

Roles:

  • EKS cluster role: AmazonEKSClusterPolicy
  • Nodegroup role: AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly

Retrieve the cluster's OIDC issuer URL (you'll register this as an IAM OIDC identity provider for IRSA):

aws eks describe-cluster --name <cluster> --query "cluster.identity.oidc.issuer"
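The issuer value feeds directly into the IRSA trust policies in section 3. A small sketch (the region and provider ID below are made up) of deriving the provider identifier IAM expects:

```shell
# Hypothetical issuer URL, as returned by the describe-cluster call above:
ISSUER="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"

# IAM trust policies reference the provider without the https:// scheme:
OIDC_PROVIDER="${ISSUER#https://}"
echo "$OIDC_PROVIDER"

# Register it as an IAM OIDC identity provider (one-time per cluster), e.g.:
#   eksctl utils associate-iam-oidc-provider --cluster <cluster> --approve
```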

2. πŸ” EKS Access Permissions

EKS uses access entries to grant IAM users and roles access to the cluster API. Create an access entry for your principal first (aws eks create-access-entry) if one doesn't already exist, then associate an access policy:

aws eks associate-access-policy \
  --cluster-name <cluster> \
  --principal-arn arn:aws:iam::<acct>:user/plural-test \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster

If Terraform manages aws_eks_access_entry, import existing entries:

terraform import aws_eks_access_entry.this["cluster_creator"] \
  <cluster>:<principal-arn>

3. πŸͺͺ IRSA Roles for Core Controllers

IRSA (IAM Roles for Service Accounts) lets pods assume IAM roles through the cluster's OIDC provider. It's required for ExternalDNS, the AWS Load Balancer Controller, and cert-manager.

A. Trust Policy Skeleton

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Principal":{
        "Federated":"arn:aws:iam::<acct>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<id>"
      },
      "Action":"sts:AssumeRoleWithWebIdentity",
      "Condition":{
        "StringEquals":{
          "oidc.eks.<region>.amazonaws.com/id/<id>:sub":"system:serviceaccount:<ns>:<sa>"
        }
      }
    }
  ]
}
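To create one of these roles, the skeleton can be rendered with concrete values and fed to `aws iam create-role`. A sketch targeting the external-dns service account, with made-up account and OIDC provider IDs:

```shell
# Hypothetical values -- substitute your account, region, OIDC ID, and service account.
ACCT="123456789012"
REGION="us-east-1"
OIDC_ID="EXAMPLED539D4633E53DE1B716D3041E"
NS="external-dns"
SA="external-dns"

# Render the trust-policy skeleton with concrete values
cat > irsa-trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${ACCT}:oidc-provider/oidc.eks.${REGION}.amazonaws.com/id/${OIDC_ID}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.${REGION}.amazonaws.com/id/${OIDC_ID}:sub": "system:serviceaccount:${NS}:${SA}"
        }
      }
    }
  ]
}
EOF

# The role would then be created with:
#   aws iam create-role --role-name external-dns-irsa \
#     --assume-role-policy-document file://irsa-trust.json
echo "rendered for system:serviceaccount:${NS}:${SA}"
```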

B. Required IRSA Roles

| Controller | Namespace/SA | IAM Permissions |
| --- | --- | --- |
| AWS Load Balancer Controller | kube-system:aws-load-balancer-controller | elasticloadbalancing:*, ec2:Describe*, acm:* |
| ExternalDNS | external-dns:external-dns | route53:ChangeResourceRecordSets, route53:List* |
| cert-manager (DNS-01) | cert-manager:cert-manager | route53:ChangeResourceRecordSets, route53:List* |

Kubernetes annotation:

kubectl -n <ns> annotate sa <sa> \
  eks.amazonaws.com/role-arn=arn:aws:iam::<acct>:role/<irsa-role-name>

4. πŸš€ Deploy the Console App

Namespace: console

Helm / manifest values:

ingress:
  className: alb
  hosts:
    - console.your-domain.com
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip

Infra dependencies:

  • ExternalDNS sets DNS record in Route53
  • cert-manager issues TLS cert for hostname via ACME
  • ALB handles ingress traffic

5. πŸ”„ Terraform Runner Setup (Plural Console)

Use project-scoped credentials via Plural's "Credential" object:

provider "aws" {
  region = var.aws_region
}

Avoid hardcoded access_key, secret_key, or profile.
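If the runner's base identity differs from the provisioning role, an assume_role block keeps static credentials out of the config entirely. A sketch with a made-up account ID:

```hcl
provider "aws" {
  region = var.aws_region

  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/infra-provisioner" # hypothetical account ID
    session_name = "plural-terraform"
  }
}
```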

If you get:

aws: executable file not found

you're likely using a sandboxed runner. Inject credentials through the platform, not inline commands.


6. βœ… Validate Auth and Cluster Access

# OIDC enabled
aws eks describe-cluster --name <cluster> --query 'cluster.identity.oidc.issuer'

# IRSA role wiring
kubectl -n kube-system get sa aws-load-balancer-controller -o yaml | grep role-arn
kubectl -n external-dns get sa external-dns -o yaml | grep role-arn

# Add-on status
kubectl get deploy -A | grep -E 'alb|external-dns|cert-manager'

# DNS + TLS validation
kubectl -n console get ingress
kubectl -n cert-manager get certificate

7. πŸ“‹ Summary: Explicit Auth Touchpoints

| Touchpoint | What to configure |
| --- | --- |
| Provisioning identity | Role or user with EKS/VPC/IAM permissions |
| EKS access | associate-access-policy or aws_eks_access_entry |
| IRSA trust + perms | One IAM role per controller, OIDC-based trust |
| Subnet tags | Required for ELB discovery |
| NAT gateways | Adjust for EIP limits (single_nat_gateway = true) |
| Credentials for Terraform | Inject via Plural or CI, not hardcoded locally |