kubernetes kube-dns components, debugging

Summary

  • dnsmasq front-ends the requests and forwards them on to kube-dns

    dnsmasq
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
      --log-facility=-
    
  • kube-dns handles the cluster-level requests

      kube-dns 
      --domain=cluster.local
      --dns-port=10053
      --config-map=kube-dns
      --v=2
    
  • Who handles the internet lookups?

Note: it looks like all requests, even ones external to the cluster, are handled by kube-dns; see the query sketch after the quoted overview below.

" The running Kubernetes DNS pod holds 3 containers - kubedns, dnsmasq and a health check called healthz. The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains in-memory lookup structures to service DNS requests. The dnsmasq container adds DNS caching to improve performance. The healthz container provides a single health check endpoint while performing dual healthchecks (for dnsmasq and kubedns). "

  • iptables rules are used to send traffic from the POD to the kube-dns POD

What is the IP of the DNS server?

It has a cluster-level IP:

root@singlevm:~# kubectl get svc --namespace=kube-system
NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   10.96.0.10   <none>        53/UDP,53/TCP   22h

It is backed by one or more PODs with internal IPs:

root@singlevm:~# kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                     AGE
kube-dns   10.244.0.2:53,10.244.0.2:53   22h
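
The two entries are not duplicates: they are the same POD answering on both the UDP (dns) and TCP (dns-tcp) ports, as the iptables comments below confirm. The port breakdown is visible in the full object (a sketch):

    # One address (10.244.0.2) with two named ports: dns/UDP and dns-tcp/TCP
    kubectl get ep kube-dns --namespace=kube-system -o yaml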

The cluster-level IP is placed in each container's resolver configuration:

/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5

Open question: why do we see a query go out for just nginx even when ndots is set to 5?
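
With ndots:5, any name containing fewer than five dots is first tried against each search suffix, with the literal name attempted last; the bare nginx query seen on the wire is most likely that final fallback attempt. A sketch that makes the resolver show its work (assuming a dig build that supports +showsearch):

    # +search applies the resolv.conf search list; +showsearch prints each attempt
    dig +search +showsearch nginx A

    # Expected attempt order given the resolv.conf above:
    #   nginx.default.svc.cluster.local.
    #   nginx.svc.cluster.local.
    #   nginx.cluster.local.
    #   nginx.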

How is DNS traffic sent from the POD to kube-dns via the cluster-level IP?

DNAT rules translate the service IP to the POD IP; SNAT (masquerade) handles the return path:

-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-YIL6JZP7A3QYXJU2
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53

-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53


-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
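
The DNAT can be confirmed live by watching the rule counters while a POD performs a lookup (a sketch; the chain name is taken from the rules above, and /proc/net/nf_conntrack availability depends on the kernel config):

    # Per-rule packet/byte counters for the kube-dns UDP service chain
    iptables -t nat -L KUBE-SVC-TCOU7JCQXEZGVUNU -n -v

    # The translated flow (10.96.0.10:53 -> 10.244.0.2:53) shows up in conntrack
    grep 10.96.0.10 /proc/net/nf_conntrack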

What is in the kube-dns POD?

root@singlevm:/var/run/docker/netns# kubectl describe pod --namespace=kube-system kube-dns-2924299975-fpplx
Name:           kube-dns-2924299975-fpplx
Namespace:      kube-system
Node:           singlevm/10.0.2.15
Start Time:     Mon, 13 Mar 2017 21:44:20 +0000
Labels:         component=kube-dns
                k8s-app=kube-dns
                kubernetes.io/cluster-service=true
                name=kube-dns
                pod-template-hash=2924299975
                tier=node
Status:         Running
IP:             10.244.0.2
Controllers:    ReplicaSet/kube-dns-2924299975
Containers:
  kube-dns:
    Container ID:       docker://6e587bf9493a0b31ce49cee749b12259a320ab403ba4c9665a558e0152101a05
    Image:              gcr.io/google_containers/kubedns-amd64:1.9
    Image ID:           docker-pullable://gcr.io/google_containers/kubedns-amd64@sha256:3d3d67f519300af646e00adcf860b2f380d35ed4364e550d74002dadace20ead
    Ports:              10053/UDP, 10053/TCP, 10055/TCP
    Args:
      --domain=cluster.local
      --dns-port=10053
      --config-map=kube-dns
      --v=2
    Limits:
      memory:   170Mi
    Requests:
      cpu:              100m
      memory:           70Mi
    State:              Running
      Started:          Mon, 13 Mar 2017 21:44:48 +0000
    Ready:              True
    Restart Count:      0
    Liveness:           http-get http://:8080/healthz-kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:          http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cbd28 (ro)
    Environment Variables:
      PROMETHEUS_PORT:  10055
  dnsmasq:
    Container ID:       docker://aca586acbbbd2e0ef915da3f2d7afb591aecec97b92129124fdc26ea260d5c3a
    Image:              gcr.io/google_containers/kube-dnsmasq-amd64:1.4
    Image ID:           docker-pullable://gcr.io/google_containers/kube-dnsmasq-amd64@sha256:a722df15c0cf87779aad8ba2468cf072dd208cb5d7cfcaedd90e66b3da9ea9d2
    Ports:              53/UDP, 53/TCP
    Args:
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
      --log-facility=-
    Requests:
      cpu:              150m
      memory:           10Mi
    State:              Running
      Started:          Mon, 13 Mar 2017 21:44:53 +0000
    Ready:              True
    Restart Count:      0
    Liveness:           http-get http://:8080/healthz-dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cbd28 (ro)
    Environment Variables:      <none>
  dnsmasq-metrics:
    Container ID:       docker://6217a6e224507a648506097f49d9913d9d26edd3f12b88644e012705d7d7be7f
    Image:              gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
    Image ID:           docker-pullable://gcr.io/google_containers/dnsmasq-metrics-amd64@sha256:4063e37fd9b2fd91b7cc5392ed32b30b9c8162c4c7ad2787624306fc133e80a9
    Port:               10054/TCP
    Args:
      --v=2
      --logtostderr
    Requests:
      memory:           10Mi
    State:              Running
      Started:          Mon, 13 Mar 2017 21:44:58 +0000
    Ready:              True
    Restart Count:      0
    Liveness:           http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cbd28 (ro)
    Environment Variables:      <none>
  healthz:
    Container ID:       docker://9089753b3461b9226cd6dd82e2043d47f2333580b22306490432657ca972b531
    Image:              gcr.io/google_containers/exechealthz-amd64:1.2
    Image ID:           docker-pullable://gcr.io/google_containers/exechealthz-amd64@sha256:503e158c3f65ed7399f54010571c7c977ade7fe59010695f48d9650d83488c0a
    Port:               8080/TCP
    Args:
      --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
      --url=/healthz-dnsmasq
      --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
      --url=/healthz-kubedns
      --port=8080
      --quiet
    Limits:
      memory:   50Mi
    Requests:
      cpu:              10m
      memory:           50Mi
    State:              Running
      Started:          Mon, 13 Mar 2017 21:45:02 +0000
    Ready:              True
    Restart Count:      0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cbd28 (ro)
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  default-token-cbd28:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-cbd28
QoS Class:      Burstable
Tolerations:    dedicated=master:NoSchedule
No events.
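
A quicker way to pull just the container list out of the POD spec (a sketch; the POD name is the one shown above):

    kubectl get pod kube-dns-2924299975-fpplx --namespace=kube-system \
      -o jsonpath='{.spec.containers[*].name}'
    # kube-dns dnsmasq dnsmasq-metrics healthz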

Who/What is listening in the kube-dns namespace?
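
The namespace handle used with nsenter below can be recovered from the POD's pause container (a sketch, assuming the docker runtime; the container ID is a placeholder):

    # Find the pause ("POD") container backing the kube-dns POD
    docker ps | grep kube-dns

    # Its sandbox key is the netns handle under /var/run/docker/netns
    docker inspect --format '{{ .NetworkSettings.SandboxKey }}' <pause-container-id>
    # e.g. /var/run/docker/netns/25cb799cd23e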

Kube-dns internal IP

root@singlevm:/var/run/docker/netns# nsenter --net=25cb799cd23e
root@singlevm:/var/run/docker/netns# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 0a:58:0a:f4:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.244.0.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d4f9:8bff:fe0b:c4e5/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

DNS Server Related Sockets

root@singlevm:/var/run/docker/netns# netstat -plunt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN      26773/dnsmasq
tcp6       0      0 :::10053                :::*                    LISTEN      26336/kube-dns
tcp6       0      0 :::10054                :::*                    LISTEN      27307/dnsmasq-metri
tcp6       0      0 :::10055                :::*                    LISTEN      26336/kube-dns
tcp6       0      0 :::8080                 :::*                    LISTEN      27516/exechealthz
tcp6       0      0 :::8081                 :::*                    LISTEN      26336/kube-dns
tcp6       0      0 :::53                   :::*                    LISTEN      26773/dnsmasq
udp        0      0 0.0.0.0:53              0.0.0.0:*                           26773/dnsmasq
udp6       0      0 :::10053                :::*                                26336/kube-dns
udp6       0      0 :::53                   :::*                                26773/dnsmasq

Connection to the API server/etcd?

The established connection to 10.96.0.1:https below is kube-dns watching the API server (the kubernetes service) for Service and Endpoints changes.

root@singlevm:/var/run/docker/netns# netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 *:domain                *:*                     LISTEN
tcp        0      0 10.244.0.2:57658        10.96.0.1:https         ESTABLISHED

What is the POD IP (internal)?

The kube-dns POD's internal IP is 10.244.0.2, as shown in the ip a output above.

How are the PODs set up for DNS?

kubectl run -i -t netdebug --image=mcastelino/nettools --restart=Never 
# kubectl run -i -t busybox --image=busybox --restart=Never
# Busybox libc has an issue with ndots which results in incorrect lookup sequences.
/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 0a:58:0a:f4:00:08 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.8/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::b0e8:adff:fef8:5522/64 scope link tentative flags 08
       valid_lft forever preferred_lft forever
/ # nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.109.102.230 nginx.default.svc.cluster.local
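
A fully qualified name with a trailing dot skips the search-list expansion entirely, so only a single query goes out (a sketch from the same debug POD):

    # The trailing dot marks the name as absolute; no search suffixes are tried
    nslookup nginx.default.svc.cluster.local.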

How is the service exposed?

root@singlevm:/var/run/docker/netns# kubectl get svc
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.96.0.1        <none>        443/TCP        20h
nginx        10.109.102.230   <nodes>       80:30361/TCP   18h
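
The nginx service is therefore reachable three ways (a sketch; the node IP 10.0.2.15 is taken from the POD description above):

    curl -s http://10.109.102.230          # cluster IP, from inside the cluster
    curl -s http://10.0.2.15:30361         # NodePort, from the host
    curl -s http://nginx                   # service name via kube-dns, from a POD in default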

Resolution within the DNS POD

23:04:02.789492 IP (tos 0x0, ttl 64, id 7088, offset 0, flags [none], proto UDP (17), length 77)
    10.244.0.20.35926 > 10.244.0.2.domain: 44017+ A? nginx.default.svc.cluster.local. (49)
23:04:02.789602 IP (tos 0x0, ttl 64, id 19401, offset 0, flags [DF], proto UDP (17), length 93)
    10.244.0.2.domain > 10.244.0.20.35926: 44017 1/0/0 nginx.default.svc.cluster.local. A 10.109.102.230 (65)
23:04:02.789699 IP (tos 0x0, ttl 64, id 48377, offset 0, flags [DF], proto UDP (17), length 70)
    10.244.0.2.33618 > 10.0.2.3.domain: 9825+ PTR? 20.0.244.10.in-addr.arpa. (42)
23:04:02.790694 IP (tos 0x0, ttl 63, id 19024, offset 0, flags [none], proto UDP (17), length 164)
    10.0.2.3.domain > 10.244.0.2.33618: 9825 NXDomain* 0/1/0 (136)
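
A capture like the one above can be reproduced from the kube-dns network namespace (a sketch; the netns handle is the one found earlier):

    # -vv prints the IP header detail shown above; without -n, tcpdump resolves names
    nsenter --net=/var/run/docker/netns/25cb799cd23e tcpdump -i eth0 -vv udp port 53

The trailing PTR query to 10.0.2.3 (the upstream resolver) is most likely tcpdump itself reverse-resolving the captured addresses, since the capture was taken without -n.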

Host-side iptables for a Kubernetes node

# Generated by iptables-save v1.6.0 on Tue Mar 14 18:29:49 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [12:720]
:POSTROUTING ACCEPT [12:720]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-263UWIYWZXPNMPMF - [0:0]
:KUBE-SEP-ECWJ4XMO64PCJIJZ - [0:0]
:KUBE-SEP-EYCY2JIROCGGPBZO - [0:0]
:KUBE-SEP-IT2ZTR26TO4XFPTO - [0:0]
:KUBE-SEP-OGNOLD2JUSLFPOMZ - [0:0]
:KUBE-SEP-YIL6JZP7A3QYXJU2 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-4N57TFCL4MD7ZTDA - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx:" -m tcp --dport 30361 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx:" -m tcp --dport 30361 -j KUBE-SVC-4N57TFCL4MD7ZTDA
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-263UWIYWZXPNMPMF -s 10.244.0.4/32 -m comment --comment "default/nginx:" -j KUBE-MARK-MASQ
-A KUBE-SEP-263UWIYWZXPNMPMF -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 10.244.0.4:80
-A KUBE-SEP-ECWJ4XMO64PCJIJZ -s 10.244.0.3/32 -m comment --comment "default/nginx:" -j KUBE-MARK-MASQ
-A KUBE-SEP-ECWJ4XMO64PCJIJZ -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 10.244.0.3:80
-A KUBE-SEP-EYCY2JIROCGGPBZO -s 10.244.0.5/32 -m comment --comment "default/nginx:" -j KUBE-MARK-MASQ
-A KUBE-SEP-EYCY2JIROCGGPBZO -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 10.244.0.5:80
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-OGNOLD2JUSLFPOMZ -s 10.0.2.15/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-OGNOLD2JUSLFPOMZ -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-OGNOLD2JUSLFPOMZ --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.0.2.15:6443
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.109.102.230/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-ECWJ4XMO64PCJIJZ
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-263UWIYWZXPNMPMF
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -j KUBE-SEP-EYCY2JIROCGGPBZO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-OGNOLD2JUSLFPOMZ --mask 255.255.255.255 --rsource -j KUBE-SEP-OGNOLD2JUSLFPOMZ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-OGNOLD2JUSLFPOMZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-YIL6JZP7A3QYXJU2
COMMIT
# Completed on Tue Mar 14 18:29:49 2017
# Generated by iptables-save v1.6.0 on Tue Mar 14 18:29:49 2017
*filter
:INPUT ACCEPT [1504:637060]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1482:657015]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Tue Mar 14 18:29:49 2017
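
To pull just the DNS-related entries out of a dump like this one (a sketch):

    iptables-save -t nat | grep -E 'kube-dns|10\.96\.0\.10'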

Headless Deployments

A headless service exposes the IPs of all the backing PODs directly, instead of a single cluster IP.

kubectl run nginx-headless --image=nginx --port=81 --replicas=3
kubectl expose deployment nginx-headless --type NodePort --cluster-ip=None

/ # nslookup nginx-headless
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-headless
Address 1: 10.244.0.13
Address 2: 10.244.0.14
Address 3: 10.244.0.15
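
The same three addresses can be confirmed from the endpoints object (a sketch):

    # Each backing POD IP appears as an endpoint on port 81 (per the run command above)
    kubectl get ep nginx-headless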

Comment from the author (@mcastelino):
The key difference between the k8s implementation and the docker implementation is that docker places the rules inside the container's namespace, which cannot be handled cleanly in the case of VM-based container runtimes.

For more details on how docker handles the same, see https://gist.github.com/mcastelino/62cf2a882ce05d07400b5a10f21b6437#dns-resolution
