
@iammuho
Created April 17, 2024 12:45
Enabling/configuring the VPC CNI for EKS with a secondary CIDR

Steps for Configuring EKS with VPC CNI Custom Networking

Prerequisites

  • Install the Amazon VPC CNI EKS add-on.
  • Ensure a secondary CIDR block is attached to the VPC (already done).
  • Ensure private subnets use the secondary CIDR block (already done).
  • Retrieve the cluster security group ID, the VPC ID, and the subnet IDs from the AWS Management Console (or via the AWS CLI, as sketched below).
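
As an alternative to the console, the same IDs can be gathered with the AWS CLI. This is only a sketch, assuming the CLI is configured for the right account and region; my-cluster and <vpc-id> are placeholders:

    # Cluster security group ID and VPC ID from the cluster description
    aws eks describe-cluster --name my-cluster \
      --query 'cluster.resourcesVpcConfig.{VpcId:vpcId,ClusterSecurityGroupId:clusterSecurityGroupId}'

    # Subnet IDs, CIDRs, and availability zones for the VPC
    aws ec2 describe-subnets --filters Name=vpc-id,Values=<vpc-id> \
      --query 'Subnets[].{ID:SubnetId,CIDR:CidrBlock,AZ:AvailabilityZone}' --output table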

Configuration Steps

  1. Connect to EKS Cluster

    Connect to the cluster and enable custom networking on the VPC CNI:

    • Set the AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG environment variable to true:
    kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
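
    As a quick sanity check, the aws-node DaemonSet can be inspected to confirm the variable was applied (and to see the running VPC CNI image version at the same time):

    kubectl describe daemonset aws-node -n kube-system | grep -E 'amazon-k8s-cni:|AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG'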
  2. Create ENIConfig for each subnet

    Create an ENIConfig custom resource in the cluster for each new subnet. Each resource is named after its subnet's availability zone (the value of the node's topology.kubernetes.io/zone label), which is how the CNI matches nodes to the correct ENIConfig in step 3:

    apiVersion: crd.k8s.amazonaws.com/v1alpha1
    kind: ENIConfig
    metadata: 
      name: $az_1
    spec: 
      securityGroups: 
        - $cluster_security_group_id
      subnet: $new_subnet_id_1
    ---
    apiVersion: crd.k8s.amazonaws.com/v1alpha1
    kind: ENIConfig
    metadata: 
      name: $az_2
    spec: 
      securityGroups: 
        - $cluster_security_group_id
      subnet: $new_subnet_id_2
    ---
    apiVersion: crd.k8s.amazonaws.com/v1alpha1
    kind: ENIConfig
    metadata: 
      name: $az_3
    spec: 
      securityGroups: 
        - $cluster_security_group_id
      subnet: $new_subnet_id_3
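
    Assuming the manifests above are saved to a file such as eniconfigs.yaml (a placeholder name), apply them and confirm the resources exist:

    kubectl apply -f eniconfigs.yaml
    kubectl get eniconfigs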
  3. Update aws-node DaemonSet

    Set the ENI_CONFIG_LABEL_DEF environment variable on the aws-node DaemonSet in the kube-system namespace, so each node selects the ENIConfig whose name matches its availability-zone label:

    kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
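
    To see which ENIConfig each node will select, list the nodes with their zone label (the -L flag adds the label value as a column):

    kubectl get nodes -L topology.kubernetes.io/zone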
  4. Reboot EC2 Instances

    • Terminate (or otherwise replace) the worker EC2 instances in the cluster; existing nodes keep ENIs in the original subnets, so only freshly launched nodes hand out pod IPs from the secondary CIDR (see the sketch below).
    • Wait for the replacement EC2 instances to be created and for the pods to be rescheduled.
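
    One way to replace nodes without disrupting everything at once, assuming an Auto Scaling group or managed node group will launch replacements; <node-name> and <instance-id> are placeholders:

    kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
    aws ec2 terminate-instances --instance-ids <instance-id>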
  5. Test Configuration

    Run the command below to create a busybox pod in the cluster:

    kubectl run -i --tty busybox --image=busybox --restart=Never -- ip a

    The output should show an IP address from the secondary CIDR block on the pod's eth0 interface, as below:

    3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 9001 qdisc noqueue
        link/ether 6e:c0:42:b1:c0:1f brd ff:ff:ff:ff:ff:ff
        inet 100.64.19.49/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::6cc0:42ff:feb1:c01f/64 scope link
           valid_lft forever preferred_lft forever
    / #
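
    The test pod can be deleted afterwards, and the same check can be run across all pods to confirm their IPs come from the new subnets:

    kubectl delete pod busybox
    kubectl get pods -A -o wide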