The example auto-installs Cilium into EKS using the default ENI datapath mode.
Be sure to roll/restart all running pods after a successful installation. Cilium will restart pods it detects as "unmanaged", but that does not mean every pod will get restarted.
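A minimal sketch of a blanket restart (this loops over every namespace and restarts common workload types; scope it down if that's too disruptive for your cluster):

```sh
# Restart all Deployments, DaemonSets, and StatefulSets in every namespace so
# their pods are re-created under Cilium's CNI. Namespaces without any of
# these workload types will just print an error, which is suppressed here.
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl -n "$ns" rollout restart deployment,daemonset,statefulset 2>/dev/null || true
done
```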
In this mode, you'll avoid a lot of the pain that comes with the aws-vpc-cni, and it should be sufficient for most people and a diverse range of workloads.
If you want a fully overlaid/virtualized network instead, then these settings should work for EKS out of the box:
cilium install \
--context ${local.eks_kubectx_alias} \
--cluster-name ${var.env_name} \
--datapath-mode tunnel \
--helm-set cluster.id=0 \
--helm-set cluster.name=${var.env_name} \
--helm-set egressMasqueradeInterfaces=eth0 \
--helm-set encryption.nodeEncryption=false \
--helm-set hubble.enabled=true \
--helm-set hubble.relay.enabled=true \
--helm-set hubble.ui.enabled=true \
--helm-set kubeProxyReplacement=strict \
--helm-set operator.replicas=1 \
--helm-set tunnel=geneve \
--helm-set serviceAccounts.cilium.name=cilium \
--helm-set serviceAccounts.operator.name=cilium-operator
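Once the install finishes, it's worth confirming the agent is healthy and, optionally, running the built-in connectivity check (both are standard cilium CLI subcommands):

```sh
# Block until the Cilium agent, operator, and Hubble components report ready.
cilium status --wait

# Optional end-to-end check: deploys test workloads into a cilium-test
# namespace and exercises pod-to-pod, pod-to-service, and egress paths.
cilium connectivity test
```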
The advantage of this approach is that you effectively bypass the pod limits imposed on EC2 worker nodes. With the aws-vpc-cni (and with Cilium's ENI mode), pod counts are constrained by the number of ENIs and IP addresses the instance type/size can support; in tunnel mode, pod IPs come from an overlay rather than from ENIs, so that constraint no longer applies.
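For context, the ENI-based ceiling follows the formula used for the default EKS max-pods calculation:

max pods = (number of ENIs) × (IPv4 addresses per ENI - 1) + 2

For example, an m5.large supports 3 ENIs with 10 IPv4 addresses each, so 3 × (10 - 1) + 2 = 29 pods.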
🔴IMPORTANT❗🔴 --> If you choose to go this way, be advised that you will need to change certain controllers (like karpenter or the aws-load-balancer-controller) to run with hostNetwork: true --> otherwise, the EKS control plane won't be able to communicate with those pods for things like admission webhooks, since overlay pod IPs aren't routable from the control plane!
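As a sketch of what that can look like for the aws-load-balancer-controller via its Helm chart (recent versions of the eks/aws-load-balancer-controller chart expose a hostNetwork value, and Karpenter's chart has an equivalent setting, but confirm the value name against the chart version you actually run; my-cluster is a placeholder):

```sh
# Run the controller (and its webhook endpoint) in the node's network namespace
# so the EKS control plane can reach it over the VPC instead of the overlay.
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-cluster \
  --set hostNetwork=true
```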