On the external system, still on the same subnet as the k8s node, add a route to the kube-dns ClusterIP via the k8s node's IP (the master node here):

sudo ip route add 10.96.0.10 via 10.54.81.161
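To sanity-check the route, you can ask the kernel which path it would take to the kube-dns IP (ip route get is standard iproute2; IPs as in this example):

ip route get 10.96.0.10
# should report the route via 10.54.81.161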

Note: in case they are not in the same L2 domain, you need to provide a routable IP for the kube-dns service ClusterIP and ensure the return route is handled from the cluster.
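A rough sketch of that case, assuming the default 10.96.0.0/12 service CIDR and a last-hop router that does share a subnet with the k8s node (the external-client IP below is hypothetical):

# On the router that is on-link with the k8s node:
ip route add 10.96.0.0/12 via 10.54.81.161
# On the k8s node, confirm a return path to the external client exists:
ip route get 10.60.0.1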

At this point, dig pointed explicitly at kube-dns should resolve a running service in your k8s cluster:

dig @10.96.0.10 +short consul-helm-consul.default.svc.cluster.local

This should yield output similar to the following:

192.168.50.140
192.168.50.169
192.168.50.182
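Assuming kubectl access on the master, you can cross-check that these are the Service's endpoints (service and namespace as in this example):

kubectl get endpoints consul-helm-consul -n default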

Snapshot the current contents of /etc/resolv.conf:

search company.com
nameserver 10.248.2.1
nameserver 10.22.224.196
nameserver 10.3.86.116

Now, to query kube-dns for all cluster.local names by default, we need dnsmasq set up. Set the dns key in the [main] section of /etc/NetworkManager/NetworkManager.conf to dnsmasq:

[main]
#plugins=ifcfg-rh,ibft
dns=dnsmasq

Create /etc/NetworkManager/dnsmasq.d/kube.conf with the following contents:

server=/cluster.local/10.96.0.10
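Optionally, forward reverse lookups for the cluster ranges to kube-dns as well, so that dig -x on pod IPs resolves; the zone below assumes the 192.168.50.0/24 pod CIDR from this example:

server=/50.168.192.in-addr.arpa/10.96.0.10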

Restart the NetworkManager service and make sure it is running:

sudo systemctl restart NetworkManager
systemctl status NetworkManager
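NetworkManager spawns its own dnsmasq instance. To confirm it is up and forwarding to the new upstream, query the local resolver directly:

pgrep -a dnsmasq
dig @127.0.0.1 +short consul-helm-consul.default.svc.cluster.local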

Check the contents of /etc/resolv.conf; it should now point at the local dnsmasq instance:

# Generated by NetworkManager
search company.com
nameserver 127.0.0.1

Test that DNS resolution works for the service:

dig +short consul-helm-consul.default.svc.cluster.local

It should return the same output as seen previously when the kube-dns IP was specified explicitly:

192.168.50.140
192.168.50.169
192.168.50.182

Now, to access the pods in the cluster, add a route for the pod CIDR:

sudo ip route add 192.168.50.0/24 via 10.54.81.161
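Both routes should now show up in the kernel routing table via the node IP:

ip route show | grep 10.54.81.161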

Now, if we ping the service multiple times, we see it directed to a different backend each time:

ping consul-helm-consul.default.svc.cluster.local
PING consul-helm-consul.default.svc.cluster.local (192.168.50.182) 56(84) bytes of data.
64 bytes from 192.168.50.182 (192.168.50.182): icmp_seq=1 ttl=63 time=0.193 ms
64 bytes from 192.168.50.182 (192.168.50.182): icmp_seq=2 ttl=63 time=0.235 ms

PING consul-helm-consul.default.svc.cluster.local (192.168.50.140) 56(84) bytes of data.
64 bytes from 192.168.50.140 (192.168.50.140): icmp_seq=1 ttl=63 time=0.251 ms
64 bytes from 192.168.50.140 (192.168.50.140): icmp_seq=2 ttl=63 time=0.653 ms

PING consul-helm-consul.default.svc.cluster.local (192.168.50.169) 56(84) bytes of data.
64 bytes from 192.168.50.169 (192.168.50.169): icmp_seq=1 ttl=63 time=0.267 ms
64 bytes from 192.168.50.169 (192.168.50.169): icmp_seq=2 ttl=63 time=0.606 ms

Note that these match the backend IPs of consul-helm-consul returned in the dig output; this is DNS-based load balancing in action.
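TCP traffic works the same way as ICMP here. As a sketch, assuming the chart exposes Consul's HTTP API on its default port 8500 (an assumption about this deployment), you could hit the service by name:

curl -s http://consul-helm-consul.default.svc.cluster.local:8500/v1/status/leader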

In case ping by name doesn't work but ping by IP does, even though DNS resolution itself works fine with dig and nslookup (observed on Fedora), check the contents of /etc/nsswitch.conf and move the dns option to directly after files, or change the line to the RHEL default. The Fedora default consults mdns4_minimal before dns, and [NOTFOUND=return] stops the lookup for .local names before dns is ever tried.

Fedora default

hosts:      files mdns4_minimal [NOTFOUND=return] dns myhostname

RHEL default

hosts:      files dns myhostname
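So on Fedora, moving dns to directly after files yields:

hosts:      files dns mdns4_minimal [NOTFOUND=return] myhostname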