k3d ubuntu systemd-resolved DNS hacks

DNS Hack for Ubuntu

The problem

Going on and off VPN, things work, then they don't... generally annoying. Containers can't resolve names, and even once you get containers to resolve, containers in containers (k3d) still can't.

It seems I got things working: I can go on and off VPN, and name resolution works essentially the same on the host as in Docker, in the Kubernetes (k3d) nodes, and in the k3d Kubernetes containers.

On the host, per-interface DNS servers via systemd-resolved take care of it.

/etc/resolv.conf points to 127.0.0.53, which is systemd-resolved. resolvectl status should show individual DNS servers per link, for example tun0 (which may be your VPN interface) vs. wlp2s0 (which may be your wireless interface).
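
For reference, the per-link view looks something like this (output abridged; the server addresses are just examples):

    # resolvectl status
    Link 3 (wlp2s0)
        Current DNS Server: 192.168.1.1
    Link 5 (tun0)
        Current DNS Server: 10.8.0.1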

Patching k3d to take --dns parameter

First of all, I've patched k3d so I can start it with a --dns parameter, the same as docker run --dns.

So I run it with k3d create --dns 172.17.0.1, essentially pointing it at my socat DNS proxy (explained below).
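
Since it behaves the same as docker run --dns, the effect can be sanity-checked with plain Docker first (throwaway busybox container, purely illustrative):

    # docker run --rm --dns 172.17.0.1 busybox nslookup example.com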

All the steps and assumptions explained

  1. You have systemd-resolved listening on loopback:

    # sudo netstat -nlp | grep systemd-resol
    tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      5021/systemd-resolv
    udp        0      0 127.0.0.53:53           0.0.0.0:*                           5021/systemd-resolv
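
    On newer systems that ship without net-tools, ss shows the same listeners (assumes iproute2 is installed):

    # sudo ss -lntup | grep systemd-resolve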
  2. Your docker0 interface has an IP: 172.17.0.1

    # ip addr show docker0 | awk '$1 ~ /^inet$/ { print $2 }'
    172.17.0.1/16
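
    Alternatively, Docker itself can report the bridge gateway (illustrative):

    # docker network inspect bridge -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
    172.17.0.1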
  3. Start a socat DNS proxy forwarding from the Docker network to systemd-resolved:

    # export DOCKER_GATEWAY_IP=172.17.0.1; hostname; echo "Proxying port 53 to systemd-resolved ..."; sudo socat -v UDP-LISTEN:53,fork,reuseaddr,bind=$DOCKER_GATEWAY_IP UDP:127.0.0.53:53
    fridhsfunhouse
    Proxying port 53 to systemd-resolved ...

    Note: A TCP proxy is also needed for full support, but we'll start with this.
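
    For full support later, the TCP counterpart would look like this (same DOCKER_GATEWAY_IP assumption as above; a sketch, not tested here):

    # sudo socat -v TCP-LISTEN:53,fork,reuseaddr,bind=$DOCKER_GATEWAY_IP TCP:127.0.0.53:53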

  4. The Kubernetes (K3D/K3S) master node has an IP on the internal flannel network: 10.42.0.1:

    # hostname; ip addr show cni0 | awk '$1 ~ /^inet$/ { print $2 }'
    k3d-k3s-default-server
    10.42.0.1/24
  5. Run socat on the Kubernetes master (... in Docker) node's flannel interface to proxy DNS to the embedded Docker DNS server on address 127.0.0.11. The bind IP here is 10.42.0.1, which is usually the master node's flannel IP; you could also bind to 0.0.0.0 to listen on all interfaces.

    docker exec -it k3d-k3s-default-server socat -v UDP-LISTEN:53,fork,reuseaddr,bind=10.42.0.1 UDP:127.0.0.11:53
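
    To keep the proxy running without an attached terminal, the detached form should also work (same container name as above):

    docker exec -d k3d-k3s-default-server socat UDP-LISTEN:53,fork,reuseaddr,bind=10.42.0.1 UDP:127.0.0.11:53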
  6. Configure the CoreDNS ConfigMap to proxy to this new socat forwarder (run it as a DaemonSet or Service, however you want to set it up). In this case I point it at the Kubernetes master node's flannel interface (cni0) IP: 10.42.0.1.

    apiVersion: v1
    data:
      Corefile: |
        .:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
              pods insecure
              upstream
              fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            proxy . 10.42.0.1
            cache 30
            loop
            reload
            loadbalance
        }
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
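
    Note: newer CoreDNS releases dropped the deprecated proxy plugin in favor of forward, so on a recent cluster that line may need to read forward . 10.42.0.1 instead. To apply the edited ConfigMap (assuming it is saved as coredns-configmap.yaml, a filename chosen here for illustration):

    kubectl -n kube-system apply -f coredns-configmap.yaml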
@frimik commented May 4, 2020

See k3s-io/k3s#462 for CoreDNS customization options (if there are any...)

@YAMLcase commented Jun 12, 2020

Thanks for taking the time to write this up. I wish I had come across it sooner because I just found a workable solution after spending way too much time going through all the "what am I doing wrong" troubleshooting.

This is the quick-n-dirty hack I found that works for me:

iptables -t nat -A PREROUTING -p udp -d 8.8.8.8  --dport 53 -j DNAT --to 11.22.33.44

where 11.22.33.44 is the DNS server of your choice (thanks @huiser for providing this).

Why this works: DNS resolution falls back to 8.8.8.8, so just NAT that to your own DNS server. It's definitely not as elegant as your solution, but it got me past my current blocker.
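
To undo the hack later, the matching delete (the same rule with -D instead of -A) should be:

iptables -t nat -D PREROUTING -p udp -d 8.8.8.8 --dport 53 -j DNAT --to 11.22.33.44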

Related issues I've come across if anyone's interested:

Can't Resolve DNS using Host's /etc/resolv.conf #1872
k3s-io/k3s#1872

Forward plugin only uses two of three nameservers from host's /etc/resolv.conf #3939
(elaborates on the issue above)
coredns/coredns#3939

forward . /etc/resolv.conf not work sometimes when use local dns #3926
coredns/coredns#3926

CoreDNS initial forwarders specification #1863
k3s-io/k3s#1863

DNS resolution fails with dnsPolicy: ClusterFirstWithHostNet and hostNetwork: true #1827
(links to a workaround suggestion for this particular issue)
k3s-io/k3s#1827 (comment)

[BUG] DNS not resolving #209
k3d-io/k3d#209

Several issues on the CoreDNS issue tracker were resolved with this:
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configuration-of-stub-domain-and-upstream-nameserver-using-coredns
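
The stub-domain pattern from that page boils down to a Corefile block along these lines (domain and upstream address are illustrative):

example.local:53 {
    errors
    cache 30
    forward . 10.150.0.1
}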

Finally

My best guess is that this is a Corefile configuration fix that k3s needs to implement.

@YAMLcase commented Jun 12, 2020

> See rancher/k3s#462 for CoreDNS customization options (if there are any...)

This looks like something I might try:
k3d-io/k3d#229 (comment)
