@bikram20
Last active May 18, 2021 06:12
Droplet-DOKS Internal communication:
Problem:
When you expose a service as type LoadBalancer in DO Kubernetes, the service gets a public IP; there is no internal load balancer. So when an application on a droplet (in the same VPC as the cluster) needs to communicate with a service in the cluster, the traffic traverses the public network.
You may want to keep that droplet-to-cluster traffic inside the VPC instead.
Solution:
Expose the service as a NodePort (not a LoadBalancer) and make the firewall unmanaged for the NodePort (through an annotation).
This ensures that the service is ONLY accessible over the NodePort on the private VPC network. As a second step, we use external-dns to program the worker nodes' private IPs into a FQDN in DO DNS. Since droplets in the VPC resolve against DO DNS, they pick up NodePort endpoint changes through DNS.
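In outline (the full manifests and commands follow below):
  kubectl apply -f nginx.yaml                 # NodePort service with the firewall-managed "false" annotation
  kubectl apply -f external-dns.yaml          # external-dns programs the node private IPs into DO DNS
  curl http://nginx.kubenuggets.dev:31000/    # from any droplet in the same VPC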
++++++++
bgupta@C02CC1EGMD6M internal-lb % kubectl cluster-info
Kubernetes control plane is running at https://<redacted>.k8s.ondigitalocean.com
CoreDNS is running at https://<redacted>.k8s.ondigitalocean.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
bgupta@C02CC1EGMD6M internal-lb %
bgupta@C02CC1EGMD6M internal-lb % cat nginx.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    kubernetes.digitalocean.com/firewall-managed: "false"
    external-dns.alpha.kubernetes.io/hostname: "nginx.kubenuggets.dev"
    external-dns.alpha.kubernetes.io/ttl: "30"
    external-dns.alpha.kubernetes.io/access: "private"
spec:
  # externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31000
  selector:
    app: nginx
  type: NodePort
bgupta@C02CC1EGMD6M internal-lb %
bgupta@C02CC1EGMD6M internal-lb %
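# Apply the manifest and confirm the pods and the NodePort service (the apply step itself is not in this capture):
#   kubectl apply -f nginx.yaml
#   kubectl get endpoints nginx   # pod IPs backing the service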
bgupta@C02CC1EGMD6M internal-lb % kgpoowide   # alias for: kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE                NOMINATED NODE   READINESS GATES
nginx-7848d4b86f-7c8gc   1/1     Running   0          5m20s   10.244.1.211   worker-pool-8bp58   <none>           <none>
nginx-7848d4b86f-ddhxj   1/1     Running   0          5m20s   10.244.1.191   worker-pool-8bp58   <none>           <none>
nginx-7848d4b86f-klgr5   1/1     Running   0          5m20s   10.244.1.174   worker-pool-8bp58   <none>           <none>
bgupta@C02CC1EGMD6M internal-lb %
### NOTE ABOVE - All pods are running on the same worker node.
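# If you would rather spread the replicas across nodes (which would also soften the node-failure test further
# below), one hypothetical tweak is a preferred pod anti-affinity in the Deployment's pod template spec;
# it was NOT applied in this capture:
#
#      affinity:
#        podAntiAffinity:
#          preferredDuringSchedulingIgnoredDuringExecution:
#          - weight: 100
#            podAffinityTerm:
#              labelSelector:
#                matchLabels:
#                  app: nginx
#              topologyKey: kubernetes.io/hostname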
bgupta@C02CC1EGMD6M internal-lb % kgsvc   # alias for: kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP        9m39s
nginx        NodePort    10.245.200.218   <none>        80:31000/TCP   35s
bgupta@C02CC1EGMD6M internal-lb %
bgupta@C02CC1EGMD6M internal-lb % kgnoowide   # alias for: kubectl get nodes -o wide
NAME                STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP       OS-IMAGE                       KERNEL-VERSION    CONTAINER-RUNTIME
worker-pool-8bp53   Ready    <none>   7m43s   v1.20.2   10.124.0.5    144.126.209.115   Debian GNU/Linux 10 (buster)   4.19.0-11-amd64   containerd://1.4.3
worker-pool-8bp58   Ready    <none>   7m42s   v1.20.2   10.124.0.4    161.35.237.215    Debian GNU/Linux 10 (buster)   4.19.0-11-amd64   containerd://1.4.3
worker-pool-8bp5u   Ready    <none>   7m46s   v1.20.2   10.124.0.6    165.232.155.147   Debian GNU/Linux 10 (buster)   4.19.0-11-amd64   containerd://1.4.3
bgupta@C02CC1EGMD6M internal-lb %
### Let us check the NodePort from a droplet in the same VPC
#### DROPLET IN THE SAME VPC
root@www-1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether b2:c2:89:a1:19:57 brd ff:ff:ff:ff:ff:ff
    inet 143.198.103.233/20 brd 143.198.111.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.48.0.5/16 brd 10.48.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::b0c2:89ff:fea1:1957/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 16:dd:f9:d5:de:23 brd ff:ff:ff:ff:ff:ff
    inet 10.124.0.2/20 brd 10.124.15.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::14dd:f9ff:fed5:de23/64 scope link
       valid_lft forever preferred_lft forever
root@www-1:~#
# Now let us reach NGINX over the private VPC IP.
root@www-1:~# curl 10.124.0.4:31000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# You cannot access the NodePort over a node's public IP; with the firewall-managed: "false" annotation, the DO firewall drops that traffic.
# If you want to reach NGINX via ALL worker nodes, keep externalTrafficPolicy: Local commented out (as above); setting it would restrict responses to nodes that run an nginx pod.
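# A quick way to verify the firewall drop (node public IP taken from the node listing above; expect a timeout):
#   curl -m 5 http://144.126.209.115:31000/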
# At this point, we are able to communicate over the NodePort on the private network. Next, let us deploy external-dns and program DO DNS.
# You can install external-dns through a Helm chart or directly with a YAML manifest as below.
# NOTE - You need to supply your DO access token; external-dns uses it to program DO DNS.
# NOTE - You must have a valid/registered domain name. In this case, I have a registered domain called kubenuggets.dev and use the subdomain nginx.kubenuggets.dev for this private endpoint.
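# (Optional hardening, not done in this capture: keep the token in a Secret, e.g.
#    kubectl create secret generic digitalocean --from-literal=token=<DO ACCESS TOKEN>
#  and reference it from the Deployment env via valueFrom/secretKeyRef instead of an inline value.)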
bgupta@C02CC1EGMD6M internal-lb % cat external-dns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.8.0
        args:
        - --source=service
        - --domain-filter=<YOUR DOMAIN FILTER>
        - --provider=digitalocean
        env:
        - name: DO_TOKEN
          value: "<DO ACCESS TOKEN>"
bgupta@C02CC1EGMD6M internal-lb %
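# Apply the manifest and confirm external-dns is running and has programmed the records
# (the doctl check is one optional way to verify on the DO side):
#   kubectl apply -f external-dns.yaml
#   kubectl logs deploy/external-dns
#   doctl compute domain records list kubenuggets.dev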
##### Check from another client (droplet) in the VPC. external-dns created one A record per worker node's private IP, so DNS lookups spread clients across the nodes.
root@www-1:~# dig nginx.kubenuggets.dev
; <<>> DiG 9.16.1-Ubuntu <<>> nginx.kubenuggets.dev
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20213
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494

;; QUESTION SECTION:
;nginx.kubenuggets.dev.        IN    A

;; ANSWER SECTION:
nginx.kubenuggets.dev.   30    IN    A    10.124.0.5
nginx.kubenuggets.dev.   30    IN    A    10.124.0.4
nginx.kubenuggets.dev.   30    IN    A    10.124.0.7

;; Query time: 8 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue May 18 05:55:39 UTC 2021
;; MSG SIZE  rcvd: 98
root@www-1:~#
root@www-1:~# curl http://nginx.kubenuggets.dev:31000/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@www-1:~#
###### TESTING
# Pod failure: I killed 1 pod (out of the 3 in the ReplicaSet) while traffic was flowing from outside the cluster.
root@www-1:~# ab -r -k -n 1000000 -c 10 http://nginx.kubenuggets.dev:31000/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking nginx.kubenuggets.dev (be patient)
Completed 100000 requests
Completed 200000 requests
Completed 300000 requests
^C
Server Software: nginx/1.19.10
Server Hostname: nginx.kubenuggets.dev
Server Port: 31000
Document Path: /
Document Length: 612 bytes
Concurrency Level: 10
Time taken for tests: 60.164 seconds
Complete requests: 310367
Failed requests: 6
(Connect: 0, Receive: 2, Length: 2, Exceptions: 2)
Keep-Alive requests: 310061
Total transferred: 264119095 bytes
HTML transferred: 189943380 bytes
Requests per second: 5158.68 [#/sec] (mean)
Time per request: 1.938 [ms] (mean)
Time per request: 0.194 [ms] (mean, across all concurrent requests)
Transfer rate: 4287.09 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0      17
Processing:     0    2   3.0      1      63
Waiting:        0    2   3.0      1      63
Total:          0    2   3.0      1      63

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      2
  75%      2
  80%      3
  90%      4
  95%      7
  98%     12
  99%     16
 100%     63 (longest request)
root@www-1:~#
#
# Node failure: destroy a worker node while traffic is flowing. I destroyed the node running 2 of the 3 NGINX pods in the ReplicaSet.
# 30 requests failed out of 1M, and some responses (above the 99th percentile) were delayed significantly.
root@www-1:~# ab -r -k -s 300 -n 1000000 -c 10 http://nginx.kubenuggets.dev:31000/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking nginx.kubenuggets.dev (be patient)
Completed 100000 requests
Completed 200000 requests
Completed 300000 requests
Completed 400000 requests
Completed 500000 requests
Completed 600000 requests
Completed 700000 requests
Completed 800000 requests
Completed 900000 requests
Completed 1000000 requests
Finished 1000000 requests
Server Software: nginx/1.19.10
Server Hostname: nginx.kubenuggets.dev
Server Port: 31000
Document Path: /
Document Length: 612 bytes
Concurrency Level: 10
Time taken for tests: 256.881 seconds
Complete requests: 1000000
Failed requests: 30
(Connect: 0, Receive: 10, Length: 10, Exceptions: 10)
Keep-Alive requests: 998997
Total transferred: 850986525 bytes
HTML transferred: 611993880 bytes
Requests per second: 3892.86 [#/sec] (mean)
Time per request: 2.569 [ms] (mean)
Time per request: 0.257 [ms] (mean, across all concurrent requests)
Transfer rate: 3235.13 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0  15.5      0   15492
Processing:     0    2 363.3      1  131055
Waiting:        0    1   1.9      1      62
Total:          0    2 363.7      1  131055

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      2
  90%      2
  95%      4
  98%      7
  99%     10
 100%  131055 (longest request)
root@www-1:~#
Summary - there will be some connection drops (more when a node dies than when a pod dies).