#!/bin/bash
set -e
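# Condensed version of the step-by-step tutorial below: creates a GKE cluster,
# installs ingress-nginx as a ClusterIP service exposed through a standalone NEG,
# and wires that NEG to a GCP external HTTP load balancer.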
PROJECT_ID=$(gcloud config list project --format='value(core.project)')
ZONE=us-central1-a
CLUSTER_NAME=demo-cluster
gcloud container clusters \
create $CLUSTER_NAME \
--zone $ZONE --machine-type "e2-medium" \
--enable-ip-alias \
--num-nodes=2
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
cat << EOF > values.yaml
controller:
  service:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}'
EOF
helm install -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
cat << EOF > dummy-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-deployment
spec:
  selector:
    matchLabels:
      app: dummy
  replicas: 2
  template:
    metadata:
      labels:
        app: dummy
    spec:
      containers:
      - name: dummy
        image: nginx:latest
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dummy-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: dummy
EOF
kubectl apply -f dummy-app.yaml
cat << EOF > dummy-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dummy-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: ""
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dummy-service
            port:
              number: 80
EOF
kubectl apply -f dummy-ingress.yaml
NETWORK_TAGS=$(gcloud compute instances describe \
$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') \
--zone=$(kubectl get nodes -o jsonpath="{.items[0].metadata.labels['topology\.gke\.io/zone']}") \
--format="value(tags.items[0])")
gcloud compute firewall-rules create $CLUSTER_NAME-lb-fw \
--allow tcp:80 \
--source-ranges 130.211.0.0/22,35.191.0.0/16 \
--target-tags $NETWORK_TAGS
gcloud compute health-checks create http app-service-80-health-check \
--request-path /healthz \
--port 80 \
--check-interval 60 \
--unhealthy-threshold 3 \
--healthy-threshold 1 \
--timeout 5
gcloud compute backend-services create $CLUSTER_NAME-lb-backend \
--health-checks app-service-80-health-check \
--port-name http \
--global \
--connection-draining-timeout 300
gcloud compute backend-services update $CLUSTER_NAME-lb-backend \
--enable-logging \
--global
gcloud compute backend-services add-backend $CLUSTER_NAME-lb-backend \
--network-endpoint-group=ingress-nginx-80-neg \
--network-endpoint-group-zone=$ZONE \
--balancing-mode=RATE \
--capacity-scaler=1.0 \
--max-rate-per-endpoint=1.0 \
--global
gcloud compute url-maps create $CLUSTER_NAME-url-map \
--default-service $CLUSTER_NAME-lb-backend
gcloud compute target-http-proxies create $CLUSTER_NAME-http-proxy \
--url-map $CLUSTER_NAME-url-map
gcloud compute forwarding-rules create $CLUSTER_NAME-forwarding-rule \
--global \
--ports 80 \
--target-http-proxy $CLUSTER_NAME-http-proxy

How to configure ingress-nginx on GKE

This is a step-by-step tutorial on running ingress-nginx on GKE and exposing it through a custom GCP external load balancer.

The trick is to deploy ingress-nginx with a ClusterIP service instead of a LoadBalancer one, and then expose the ingress-nginx-controller service to the load balancer through a standalone NEG (network endpoint group).

First, set a few variables used throughout the tutorial:

PROJECT_ID=$(gcloud config list project --format='value(core.project)')
ZONE=us-central1-a
CLUSTER_NAME=demo-cluster

Create the cluster. The --enable-ip-alias flag makes it a VPC-native cluster, which is required for the container-native load balancing (NEG) setup used below.

gcloud container clusters \
        create $CLUSTER_NAME \
        --zone $ZONE --machine-type "e2-medium" \
        --enable-ip-alias \
        --num-nodes=2
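
gcloud container clusters create normally configures kubectl for the new cluster automatically. If you run the remaining commands from a different shell or machine, fetch the credentials explicitly (optional safety step):

gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE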

Next, add the ingress-nginx Helm repository

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

The default installation of ingress-nginx creates a Service of type LoadBalancer, which would automatically provision an external load balancer for you; that is not what we want here. Instead, the controller service is set to ClusterIP and annotated with cloud.google.com/neg, so GKE creates a standalone NEG named ingress-nginx-80-neg for port 80 of the service.

Create a values.yaml file with this configuration

cat << EOF > values.yaml
controller:
  service:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}'
EOF

Install ingress-nginx using the values file

helm install -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
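
Before moving on, you can check that the controller service is of type ClusterIP and that GKE has created the NEG (this can take a minute or two after the install). The service name ingress-nginx-controller follows from the release name used above:

kubectl get service ingress-nginx-controller
gcloud compute network-endpoint-groups list --filter="name=ingress-nginx-80-neg"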

Create a dummy web deployment (it also runs nginx, but it is not the ingress controller)

cat << EOF > dummy-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-deployment
spec:
  selector:
    matchLabels:
      app: dummy
  replicas: 2
  template:
    metadata:
      labels:
        app: dummy
    spec:
      containers:
      - name: dummy
        image: nginx:latest
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dummy-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: dummy
EOF
# apply the configuration
kubectl apply -f dummy-app.yaml
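
Optionally, confirm that the pods are running and that the service has endpoints:

kubectl get pods -l app=dummy
kubectl get endpoints dummy-service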

Create the ingress object

cat << EOF > dummy-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dummy-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: "_"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dummy-service
            port:
              number: 80
EOF
# apply the configuration
kubectl apply -f dummy-ingress.yaml
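
At this point you can verify that ingress-nginx picked up the Ingress and routes to the dummy service, before any cloud load balancer is involved. In one terminal forward a local port to the controller service, and in another send a request (local port 8080 is an arbitrary choice):

kubectl port-forward service/ingress-nginx-controller 8080:80
curl -s -I http://localhost:8080/

A 200 response here means the in-cluster routing works.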

The last part is to expose the ingress-nginx controller to the world using a custom GCP external load balancer. The steps are similar to the ones described at https://hodo.dev/posts/post-27-gcp-using-neg/.

Find the network tags

NETWORK_TAGS=$(gcloud compute instances describe \
    $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') \
    --zone=$ZONE --format="value(tags.items[0])")

Configure the firewall to allow Google's load balancer and health check ranges (130.211.0.0/22 and 35.191.0.0/16) to reach the nodes

gcloud compute firewall-rules create $CLUSTER_NAME-lb-fw \
    --allow tcp:80 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags $NETWORK_TAGS

Add the health check configuration. The /healthz path is served by the ingress-nginx controller itself on port 80 and always returns 200, which makes it a convenient target for the load balancer health check.

gcloud compute health-checks create http app-service-80-health-check \
  --request-path /healthz \
  --port 80 \
  --check-interval 60 \
  --unhealthy-threshold 3 \
  --healthy-threshold 1 \
  --timeout 5
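
If you want to confirm the behaviour the health check relies on, curl /healthz with a port-forward to the controller service still running (as in the earlier check, local port 8080 assumed):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/healthz

It should print 200.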

Add the backend service

gcloud compute backend-services create $CLUSTER_NAME-lb-backend \
  --health-checks app-service-80-health-check \
  --port-name http \
  --global \
  --enable-cdn \
  --connection-draining-timeout 300

Add our NEG to the backend service

gcloud compute backend-services add-backend $CLUSTER_NAME-lb-backend \
  --network-endpoint-group=ingress-nginx-80-neg \
  --network-endpoint-group-zone=$ZONE \
  --balancing-mode=RATE \
  --capacity-scaler=1.0 \
  --max-rate-per-endpoint=1.0 \
  --global
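
Once the NEG is attached, you can check that the load balancer sees healthy endpoints (it may take a minute or two before they report HEALTHY):

gcloud compute backend-services get-health $CLUSTER_NAME-lb-backend --global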

Set up the frontend

gcloud compute url-maps create $CLUSTER_NAME-url-map \
  --default-service $CLUSTER_NAME-lb-backend

gcloud compute target-http-proxies create $CLUSTER_NAME-http-proxy \
  --url-map $CLUSTER_NAME-url-map

gcloud compute forwarding-rules create $CLUSTER_NAME-forwarding-rule \
  --global \
  --ports 80 \
  --target-http-proxy $CLUSTER_NAME-http-proxy

Test

IP_ADDRESS=$(gcloud compute forwarding-rules describe $CLUSTER_NAME-forwarding-rule --global --format="value(IPAddress)")
echo $IP_ADDRESS
curl -s -I http://$IP_ADDRESS/
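
A newly created global forwarding rule can take several minutes to start serving traffic; until then curl may time out or return 404/502. A small wait loop helps (10-second polling is an arbitrary choice):

until curl -sf -o /dev/null http://$IP_ADDRESS/; do
  echo "waiting for the load balancer..."
  sleep 10
done
curl -s -I http://$IP_ADDRESS/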