@mikesparr
Last active November 14, 2021 15:14
Example of installing the Masquerade Agent on a public Google Kubernetes Engine (GKE) cluster to enable NAT
#!/usr/bin/env bash
# [1] https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent#add_configmap
# [2] https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent#config_agent_configmap
# [3] https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent#create_manual
export PROJECT_ID=$(gcloud config get-value project)
export PROJECT_USER=$(gcloud config get-value core/account) # set current user
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
export IDNS=${PROJECT_ID}.svc.id.goog # workload identity domain
export GCP_REGION="us-central1" # CHANGEME (OPT)
export GCP_ZONE="us-central1-a" # CHANGEME (OPT)
export NETWORK_NAME="default"
# enable apis
gcloud services enable compute.googleapis.com \
container.googleapis.com
# configure gcloud sdk
gcloud config set compute/region $GCP_REGION
gcloud config set compute/zone $GCP_ZONE
# create test public cluster
export CLUSTER_NAME="public-cluster"
gcloud beta container --project $PROJECT_ID clusters create $CLUSTER_NAME \
--zone $GCP_ZONE \
--release-channel "regular" \
--num-nodes "1" \
--enable-ip-alias
# fetch credentials so the kubectl commands below target this cluster
gcloud container clusters get-credentials $CLUSTER_NAME --zone $GCP_ZONE
# add ConfigMap to cluster
export MASQ_CONFIGMAP_NAME="ip-masq-agent"
cat > config <<EOF
nonMasqueradeCIDRs:
- 0.0.0.0/0
masqLinkLocal: true
EOF
kubectl create configmap $MASQ_CONFIGMAP_NAME \
--from-file config \
--namespace kube-system
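# (optional) verify the ConfigMap exists in kube-system -- a hedged check that
# assumes kubectl is already pointed at the new cluster, and degrades to a
# message otherwise
if command -v kubectl >/dev/null 2>&1; then
  STATUS=$(kubectl get configmap ip-masq-agent --namespace kube-system -o name 2>/dev/null || echo "missing")
else
  STATUS="kubectl-unavailable"
fi
echo "ConfigMap check: $STATUS"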
########### NOTE ##############
# ssh into a node and run:
# sudo iptables -t nat -L IP-MASQ
# note all the reserved ranges
###############################
# manually create the DaemonSet for the cluster
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ip-masq-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: ip-masq-agent
  template:
    metadata:
      labels:
        k8s-app: ip-masq-agent
    spec:
      hostNetwork: true
      containers:
      - name: ip-masq-agent
        image: k8s.gcr.io/networking/ip-masq-agent-amd64:v2.6.0
        args:
        - --masq-chain=IP-MASQ
        # To non-masquerade reserved IP ranges by default, uncomment the line below.
        # - --nomasq-all-reserved-ranges
        securityContext:
          privileged: true
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          # Note: this ConfigMap must be created in the same namespace as the
          # daemon pods - this spec uses kube-system
          name: ip-masq-agent
          optional: true
          items:
          # The daemon looks for its config in a YAML file at /etc/config/ip-masq-agent
          - key: config
            path: ip-masq-agent
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      - key: "CriticalAddonsOnly"
        operator: "Exists"
EOF
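# (optional) wait for the DaemonSet rollout -- a hedged check that assumes
# kubectl is configured for the cluster, and degrades to a message otherwise
if command -v kubectl >/dev/null 2>&1; then
  DS_STATUS=$(kubectl --namespace kube-system rollout status daemonset/ip-masq-agent --timeout=120s 2>&1 || echo "rollout not confirmed")
else
  DS_STATUS="kubectl-unavailable"
fi
echo "DaemonSet check: $DS_STATUS"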
########### NOTE ##############
# ssh into a node and run:
# sudo iptables -t nat -L IP-MASQ
# note the config change
###############################
# deploy busybox container and try to reach external destination (should fail)
kubectl run busybox -i --tty --image=busybox --restart=Never --rm -- ping -w 5 google.com
# delete pod
kubectl delete pod/busybox
###########################################
# NAT GATEWAY
###########################################
export NAT_GW_IP="nat-gw-ip"
export CLOUD_ROUTER_NAME="router-1"
export CLOUD_ROUTER_ASN="64523"
export NAT_GW_NAME="nat-gateway-1"
# create IP address
gcloud compute addresses create $NAT_GW_IP --region $GCP_REGION
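# (optional) print the reserved static IP -- a hedged check that assumes the
# gcloud SDK is authenticated, and degrades to a message otherwise
if command -v gcloud >/dev/null 2>&1; then
  NAT_IP_ADDR=$(gcloud compute addresses describe $NAT_GW_IP --region $GCP_REGION --format="value(address)" 2>/dev/null || echo "lookup failed")
else
  NAT_IP_ADDR="gcloud-unavailable"
fi
echo "Reserved NAT IP: $NAT_IP_ADDR"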
# create cloud router and nat gateway
gcloud compute routers create $CLOUD_ROUTER_NAME \
--network $NETWORK_NAME \
--asn $CLOUD_ROUTER_ASN \
--region $GCP_REGION
gcloud compute routers nats create $NAT_GW_NAME \
--router=$CLOUD_ROUTER_NAME \
--region=$GCP_REGION \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges \
--enable-logging
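# (optional) inspect the NAT config on the router -- hedged check as above;
# assumes an authenticated gcloud SDK, degrades to a message otherwise
if command -v gcloud >/dev/null 2>&1; then
  NAT_STATUS=$(gcloud compute routers nats describe $NAT_GW_NAME --router=$CLOUD_ROUTER_NAME --region=$GCP_REGION 2>&1 || echo "describe failed")
else
  NAT_STATUS="gcloud-unavailable"
fi
echo "Cloud NAT check: $NAT_STATUS"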
# switch the gateway to the reserved static IP (test)
gcloud compute routers nats update $NAT_GW_NAME \
--router=$CLOUD_ROUTER_NAME \
--region=$GCP_REGION \
--nat-external-ip-pool=$NAT_GW_IP
# retry busybox container and reach external destination (should succeed)
kubectl run busybox -i --tty --image=busybox --restart=Never --rm -- ping -w 5 google.com
# CONGRATULATIONS!!!

Results

[Screenshot: Screen Shot 2021-09-28 at 3 55 51 PM]

Customization

See the docs for more information on customizing reserved ranges
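
For example, a narrower non-masquerade config that only exempts the RFC 1918 private ranges — a sketch, where the exact CIDRs are illustrative and should match your VPC and any peered networks:

```shell
# Sketch of a narrower non-masquerade config: only RFC 1918 private ranges
# bypass SNAT, so traffic to public IPs is masqueraded to the node IP.
# The CIDRs below are illustrative -- match them to your VPC and peered ranges.
cat > config <<EOF
nonMasqueradeCIDRs:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
masqLinkLocal: false
EOF
# then re-create the ConfigMap (shown for reference):
#   kubectl create configmap ip-masq-agent --from-file config \
#     --namespace kube-system --dry-run=client -o yaml | kubectl apply -f -
```

The agent re-reads its config file periodically rather than on change, so an updated ConfigMap may take a minute or so to affect the iptables rules.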


Without Cloud NAT, external requests fail

[Screenshot: Screen Shot 2021-09-28 at 4 26 33 PM]

Add Cloud NAT and external requests succeed

[Screenshot: Screen Shot 2021-09-28 at 4 26 59 PM]


Alternatives

It's a best practice to use GKE private clusters, but another option is KubeIP (from the DoiT International team)

Read more about it on our blog post here:
