
@aojea
Created May 5, 2023 08:26
Ingress-nginx load sharing

Install ingress-nginx

kubectl create clusterrolebinding cluster-admin-binding   --clusterrole cluster-admin   --user $(gcloud config get-value account)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.0/deploy/static/provider/cloud/deploy.yaml

Scale the ingress-controller to have two replicas:

kubectl -n ingress-nginx scale deployment.apps/ingress-nginx-controller --replicas=2

To optimize load sharing, the controller pods should run on different nodes; this can be achieved with topology spread constraints (https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/):

ingress-nginx   ingress-nginx-controller-9d7684bcd-c8wvd         1/1     Running     0          50m   10.60.2.7     gke-n-default-pool-98907b91-gwfv   <none>           <none>
ingress-nginx   ingress-nginx-controller-9d7684bcd-pzszs         1/1     Running     0          59m   10.60.3.6     gke-n-default-pool-98907b91-kw34   <none>           <none>
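A minimal sketch of such a constraint, to be added under the controller Deployment's pod template spec (the label selector assumes the `app.kubernetes.io/name: ingress-nginx` label set by the upstream manifest):

```yaml
# Sketch: spread controller pods across nodes.
# The label below is an assumption based on the upstream deploy.yaml.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
```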

Install the backend: a deployment and a service (see backend.yaml below). The backends return the client's IP on the /clientip handler, which lets us identify which controller connected to the pod.

kubectl apply -f backend.yaml
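The agnhost netexec image answers /clientip with the caller's address in `ip:port` form, which is why the test loop later strips everything after the colon with `cut`; a quick local sketch (the address is a sample stand-in, not from a live cluster):

```shell
# /clientip returns "ip:port"; keep only the IP part,
# exactly as the curl loop does with cut.
resp='10.60.3.6:40412'   # sample response
echo "$resp" | cut -d: -f1
```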

Expose the backend using an ingress

kubectl apply -f ingress.yaml

Get the load balancer IP and connect to the service to obtain the client IP of the ingress controller:

IP=$(kubectl get ingress example-ingress  --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
while true; do curl -s $IP/test/clientip | cut -d\: -f1 ; done
10.60.3.6
10.60.2.7
10.60.3.6
10.60.2.7
10.60.3.6
10.60.2.7
10.60.3.6
10.60.3.6
10.60.3.6
10.60.2.7
10.60.3.6
10.60.3.6
10.60.2.7
10.60.3.6
10.60.2.7

If we check the controller logs we can see that, because the service has externalTrafficPolicy: Local, all connections arrive with the original client IP preserved. This is because the load balancer forwards connections only to the nodes where the ingress-controller pods are running.

35.204.196.12 - - [05/May/2023:07:47:10 +0000] "GET /test/clientip HTTP/1.1" 200 15 "-" "curl/7.88.1" 89 0.000 [default-myservice-80] [] 10.60.1.13:80 15 0.001 200 0f90523f3f49a3ba3a689611d0351f60
35.204.196.12 - - [05/May/2023:07:47:10 +0000] "GET /test/clientip HTTP/1.1" 200 15 "-" "curl/7.88.1" 89 0.001 [default-myservice-80] [] 10.60.3.3:80 15 0.001 200 becb3be33c52b7ab410182667fd41c14
35.204.196.12 - - [05/May/2023:07:47:10 +0000] "GET /test/clientip HTTP/1.1" 200 15 "-" "curl/7.88.1" 89 0.001 [default-myservice-80] [] 10.60.2.2:80 15 0.000 200 ce97afda12c0ea8d172803c6ce739bf0

If we build a histogram per controller, we can see that the load is evenly balanced:

$ for i in `seq 1 1000`; do curl -s $IP/test/clientip | cut -d\: -f1 >> test.log ; done
$ wc test.log
    1000    1000   10000 test.log
$ sort -n test.log | uniq -c
 495 10.60.2.7
 505 10.60.3.6
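The counting step generalizes to any log of client IPs; a self-contained sketch, with a few sample lines standing in for test.log:

```shell
# Build the histogram: one line per controller IP, with count and share.
# Sample data stands in for the IPs collected by the curl loop.
cat > /tmp/iplog.txt <<'EOF'
10.60.2.7
10.60.3.6
10.60.3.6
10.60.2.7
EOF
total=$(wc -l < /tmp/iplog.txt)
sort /tmp/iplog.txt | uniq -c | awk -v t="$total" '{printf "%s: %d (%.0f%%)\n", $2, $1, 100*$1/t}'
```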

Trying with externalTrafficPolicy: Cluster on the LoadBalancer service, the ingress-controller logs now show the traffic coming from the node IPs (the original client IP is lost to the SNAT done on each node):

10.132.0.26 - - [05/May/2023:08:06:17 +0000] "GET /test/clientip HTTP/1.1" 200 15 "-" "curl/7.87.0" 89 0.001 [default-myservice-80] [] 10.60.2.2:80 15 0.001 200 db7a949e309a030a09950579de6d2ee9
10.132.0.27 - - [05/May/2023:08:06:17 +0000] "GET /test/clientip HTTP/1.1" 200 15 "-" "curl/7.87.0" 89 0.003 [default-myservice-80] [] 10.60.1.13:80 15 0.002 200 596efc082f797915c0d6e403aeadee45
10.60.2.1 - - [05/May/2023:08:06:18 +0000] "GET /test/clientip HTTP/1.1" 200 15 "-" "curl/7.87.0" 89 0.002 [default-myservice-80] [] 10.60.3.3:80 15 0.002 200 5b322f604ce202961003e9131010f708
10.132.0.27 - - [05/May/2023:08:06:18 +0000] "GET /test/clientip HTTP/1.1" 200 15 "-" "curl/7.87.0" 89 0.001 [default-myservice-80] [] 10.60.2.2:80 15 0.001 200 c53cad6ece09b598f9c9162812f29637
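The policy switch above can be made by patching the LoadBalancer service; a sketch assuming the default service name installed by the deploy manifest (this command needs a live cluster):

```shell
# Flip the service to the Cluster policy (use "Local" to switch back).
# The service name ingress-nginx-controller is an assumption based on
# the upstream deploy.yaml.
kubectl -n ingress-nginx patch service ingress-nginx-controller \
  -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
```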

but the load still seems evenly distributed:

sort -n test2.log | uniq -c
 507 10.60.2.7
 529 10.60.3.6

If we repeat the same operation from within the cluster, by just running a pod, we find that the load is evenly distributed too:

kubectl run -it test --image  registry.k8s.io/e2e-test-images/agnhost:2.39 --command -- ash
If you don't see a command prompt, try pressing enter.
/ # curl
curl: try 'curl --help' or 'curl --manual' for more information
/ # IP=34.140.119.1
/ # for i in `seq 1 1000`; do curl -s $IP/test/clientip | cut -d\: -f1 >> test2.log ; done
/ # sort test2.log | uniq -c
   499 10.60.2.7
   501 10.60.3.6
backend.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: MyApp
  name: mydeployment
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: MyApp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
      - args:
        - netexec
        - --http-port=80
        - --udp-port=80
        image: k8s.gcr.io/e2e-test-images/agnhost:2.41
        imagePullPolicy: IfNotPresent
        name: agnhost
        ports:
        - containerPort: 80
          protocol: TCP
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: MyApp
  type: ClusterIP
ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Assumed: required so /test/clientip is rewritten to /clientip
    # before reaching the backend (capture group 2 of the path regex).
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: myservice
            port:
              number: 80
        path: /test(/|$)(.*)
        pathType: Prefix