terraform null_resource for automatically setting up Cilium + EKS via Cilium CLI

Install Cilium on EKS

Example

ENI Mode

The example auto-installs Cilium into EKS with the default ENI "datapath" (a.k.a. "mode").

Be sure to roll/restart all running pods upon successful installation. Cilium will restart "unmanaged" pods, but that doesn't mean all pods will get restarted.

In this mode, you'll avoid a lot of the pain that comes with the aws-vpc-cni, and it should be sufficient for most people and for a diverse range of workloads.
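
One blunt but effective way to roll everything after the install is a rolling restart of every deployment in the cluster (extend the inner loop to daemonsets/statefulsets if you need to) — a minimal sketch using plain kubectl:

# Restart every deployment so all pods come back up managed by Cilium
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  for deploy in $(kubectl -n "$ns" get deployments -o jsonpath='{.items[*].metadata.name}'); do
    kubectl -n "$ns" rollout restart deployment "$deploy"
  done
done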

Overlay Mode

If you want a fully overlaid/virtualized network, then these settings should work for EKS, out-of-the-box:

cilium install \
        --context ${local.eks_kubectx_alias} \
        --cluster-name ${var.env_name} \
        --datapath-mode tunnel \
        --helm-set cluster.id=0 \
        --helm-set cluster.name=${var.env_name} \
        --helm-set egressMasqueradeInterfaces=eth0 \
        --helm-set encryption.nodeEncryption=false \
        --helm-set hubble.enabled=true \
        --helm-set hubble.relay.enabled=true \
        --helm-set hubble.ui.enabled=true \
        --helm-set kubeProxyReplacement=strict \
        --helm-set operator.replicas=1 \
        --helm-set tunnel=geneve \
        --helm-set serviceAccounts.cilium.name=cilium \
        --helm-set serviceAccounts.operator.name=cilium-operator
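
In either mode, once the install finishes you can sanity-check the result with the standard Cilium CLI commands:

cilium status --wait
cilium connectivity test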

The advantage of this approach is that you effectively bypass the pod limits imposed on EC2 worker nodes. EKS worker node pod counts are normally constrained by the number of ENIs and IP addresses that a given instance type (and size) can support. With this installation mode, that constraint no longer applies.
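
Worth noting: overlay mode removes the networking constraint, but the kubelet's --max-pods (which the EKS-optimized AMI derives from the same ENI formula, ENIs × (IPs per ENI − 1) + 2, e.g. 3 × 9 + 2 = 29 for an m5.large) still caps scheduling unless you raise it via your node group's bootstrap/kubelet args. A quick way to see what a node currently advertises:

# Pod capacity currently advertised by the first node in the cluster
kubectl get nodes -o jsonpath='{.items[0].status.capacity.pods}{"\n"}'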

🔴IMPORTANT❗🔴 --> If you choose to go this way, be advised that you will need to change certain controllers (like karpenter or aws-load-balancer-controller) to run with hostNetwork: true --> otherwise, the EKS control plane won't be able to reach their admission webhooks, since pod IPs on the overlay network aren't routable from the control plane!
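
As a rough sketch, with aws-load-balancer-controller installed from the eks-charts Helm repo (the chart exposes a hostNetwork value; <your-cluster-name> below is a placeholder):

helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=<your-cluster-name> \
  --set hostNetwork=true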

# NOTE: This assumes you're using the community AWS EKS Terraform module (terraform-aws-modules/eks) to create your EKS clusters!
resource "null_resource" "setup_cilium" {
  depends_on = [module.eks]
  triggers = {
    cluster_id              = module.eks.cluster_id
    cluster_oidc_issuer_url = module.eks.cluster_oidc_issuer_url
    vpc_cni_addon           = local.default_cluster_addons["vpc-cni"].addon_version
  }

  provisioner "local-exec" {
    # The readiness checks below rely on [[ ]], so run the script with bash explicitly
    interpreter = ["/bin/bash", "-c"]

    command = <<-EOF
      # Update kubeconfig so kubectl/cilium talk to the freshly-created cluster
      echo "Adding/updating kubeconfig for env..."
      aws eks --region ${var.region} update-kubeconfig --name ${var.env_name} --alias ${local.eks_kubectx_alias}

      ## Wannabe async/await pattern, incoming! 🫠😬
      # Check readiness for deployments (each check runs in a background subshell)
      for deployment in coredns ebs-csi-controller; do
        (
          while true; do
            replicas="$(kubectl -n kube-system get deployment "$deployment" -o jsonpath='{.status.replicas}')"
            readyReplicas="$(kubectl -n kube-system get deployment "$deployment" -o jsonpath='{.status.readyReplicas}')"
            echo "replicas: $replicas"
            echo "readyReplicas: $readyReplicas"
            if [[ "$replicas" == "$readyReplicas" ]]; then
              echo "Deployment $deployment is ready."
              break
            else
              echo "Waiting for deployment $deployment to be ready..."
              sleep 5
            fi
          done
        ) &
      done
      # Check readiness for daemonsets
      for daemonset in aws-node kube-proxy; do
        (
          while true; do
            desiredNumberScheduled="$(kubectl -n kube-system get daemonset "$daemonset" -o jsonpath='{.status.desiredNumberScheduled}')"
            numberReady="$(kubectl -n kube-system get daemonset "$daemonset" -o jsonpath='{.status.numberReady}')"
            echo "desiredNumberScheduled: $desiredNumberScheduled"
            echo "numberReady: $numberReady"
            if [[ "$desiredNumberScheduled" == "$numberReady" ]]; then
              echo "Daemonset $daemonset is ready."
              break
            else
              echo "Waiting for daemonset $daemonset to be ready..."
              sleep 5
            fi
          done
        ) &
      done

      # Wait for all background readiness checks to complete
      wait
      echo "All specified deployments and daemonsets are ready."

      # Install Cilium
      echo "Installing Cilium..."
      cilium install --context ${local.eks_kubectx_alias} --cluster-name ${var.env_name}
      cilium hubble enable --ui
      cilium status --wait
      echo "Cilium installed successfully, cluster & cluster networking are ready!"
    EOF
  }
}
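
Since the provisioner only runs when one of the triggers changes, you can force a re-run (for example, after manually uninstalling Cilium) with Terraform's standard -replace flag:

terraform apply -replace='null_resource.setup_cilium'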