Last active June 7, 2022 11:53
How to set up dnsmasq on an OpenShift cluster

This gist is mostly based on the dnsmasq appendix from the openshift-training repo. However, I updated it and included fixes for the many gotchas I found along the way.

This is useful for folks who want to set up a DNS service as part of the cluster itself, either because they cannot easily change their DNS setup outside of the cluster, or just because they want to keep the cluster setup self-contained.

This is meant to be done before you run the openshift-ansible playbook.

If you already have docker running, you will have to restart it after doing these steps.

Eventually, I hope to convert this to an ansible playbook.

A. Install dnsmasq

  1. Choose and ssh into the node on which you want to install dnsmasq. This node will be the one that all the other nodes will contact for DNS resolution. Do not install it on any of the master nodes, since it will conflict with the Kubernetes DNS service.

  2. Install dnsmasq

    yum install -y dnsmasq

B. Configure dnsmasq

  1. Create the file /etc/dnsmasq.d/openshift.conf with the following contents:

    # If you want to map all wildcard subdomains (e.g. *.cloudapps.example.com)
    # to one address (probably something you do want if you're planning on
    # exposing services to the outside world using routes), then add the
    # following line, replacing the domain with your wildcard domain and the
    # IP address with the IP of the node that will run the router. This
    # probably should not be one of your master nodes since they will most
    # likely be made unschedulable (the default behaviour of the playbook).
    # When you eventually create your router, make sure to use a selector
    # so that it is deployed on the node you chose, e.g. in my case:
    #   oadm router --create=true \
    #     --service-account=router \
    #     --credentials=/etc/origin/master/openshift-router.kubeconfig \
    #     --images='${component}:${version}' \
    #     --selector='kubernetes.io/hostname=ose3-node2.example.com'
    address=/.cloudapps.example.com/192.168.133.4
  2. In the /etc/hosts file, add the public IPs and hostnames of all the nodes, e.g. ose3-master, ose3-node1 and ose3-node2.
  3. Copy the original /etc/resolv.conf to /etc/resolv.conf.upstream and, in /etc/dnsmasq.conf, set resolv-file to /etc/resolv.conf.upstream.
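Steps 2 and 3 can be sketched as shell commands. The block below works on scratch copies so it is safe to try; on the dnsmasq node you would use the real /etc/hosts, /etc/resolv.conf and /etc/dnsmasq.conf instead, and the hostnames and IPs here are hypothetical examples.

```shell
# Scratch stand-ins for the real files on the dnsmasq node.
HOSTS=./hosts.demo
RESOLV_UPSTREAM=./resolv.conf.upstream.demo
DNSMASQ_CONF=./dnsmasq.conf.demo

# Step 2: one line per node: public IP, FQDN, short name (example values).
cat >> "$HOSTS" <<'EOF'
192.168.133.2   ose3-master.example.com  ose3-master
192.168.133.3   ose3-node1.example.com   ose3-node1
192.168.133.4   ose3-node2.example.com   ose3-node2
EOF

# Step 3: keep the original upstream resolvers and hand them to dnsmasq
# (note the option is resolv-file, not resolve-file).
printf 'nameserver 8.8.8.8\n' > "$RESOLV_UPSTREAM"   # stands in for the old resolv.conf
echo 'resolv-file=/etc/resolv.conf.upstream' >> "$DNSMASQ_CONF"
```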

C. Configure resolv.conf

  1. On all the nodes, make sure that /etc/resolv.conf contains only the IP of the node running dnsmasq. Important: on the node running dnsmasq itself, do not use 127.0.0.1; use the actual cluster IP, just like on the other nodes. This is because docker by default ignores local addresses when copying /etc/resolv.conf into containers. Here's a sample /etc/resolv.conf:

    # change this IP to the IP of the node running dnsmasq
    nameserver 192.168.133.3
  2. Make sure that in /etc/sysconfig/network-scripts/ifcfg-eth0, the variable PEERDNS is set to no so that /etc/resolv.conf doesn't get overwritten on each reboot. Reboot the machines and check that /etc/resolv.conf hasn't been overwritten.
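The PEERDNS edit can be sketched like this. The block works on a scratch copy so it is safe to run; on a real node you would point IFCFG at /etc/sysconfig/network-scripts/ifcfg-eth0 (adjusting the interface name).

```shell
# Scratch stand-in for /etc/sysconfig/network-scripts/ifcfg-eth0.
IFCFG=./ifcfg-eth0.demo
printf 'DEVICE=eth0\nBOOTPROTO=dhcp\nPEERDNS=yes\n' > "$IFCFG"

# Set PEERDNS=no, appending the line if it is missing entirely.
if grep -q '^PEERDNS=' "$IFCFG"; then
    sed -i 's/^PEERDNS=.*/PEERDNS=no/' "$IFCFG"
else
    echo 'PEERDNS=no' >> "$IFCFG"
fi
grep '^PEERDNS' "$IFCFG"
```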

D. Open DNS port and enable dnsmasq

  1. Finally, make sure port 53 (both UDP and TCP) is open on the dnsmasq node. We have to use iptables rules for this, even if you have firewalld installed; otherwise, the openshift-ansible playbook will disable and mask firewalld and we will lose those rules. If you do have firewalld, mask it and replace it with iptables-services:

    # systemctl stop firewalld
    # systemctl disable firewalld
    # systemctl mask firewalld
    # yum install -y iptables-services
    # systemctl enable iptables
    # systemctl start iptables
  2. Install the DNS iptables rules

    # iptables -I INPUT 1 -p TCP --dport 53 -j ACCEPT
    # iptables -I INPUT 1 -p UDP --dport 53 -j ACCEPT
    # iptables-save > /etc/sysconfig/iptables
  3. Restart the iptables service and make sure that the rules are still there afterwards.

  4. Enable and start dnsmasq:

    # systemctl enable dnsmasq
    # systemctl start dnsmasq
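Putting steps 2 and 3 together: after iptables-save, /etc/sysconfig/iptables should contain rules like the following (a sketch; iptables-save normally records the -m match modules itself), and the same rules should still be listed by iptables -nL INPUT after the restart in step 3:

```
-A INPUT -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT
```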

E. Check that everything is working

  1. To verify that everything is good, run the following on each node, and check that the answered IP is correct (dig is in the bind-utils package if you need to install it):

    # Check that nodes can look each other up (replace names as needed)
    [root@ose3-node1]# dig ose3-node2.example.com

    # Check that nodes can look up the wildcard domain. This should
    # return the address of the node that will run your router.
    [root@ose3-node1]# dig foo.cloudapps.example.com
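To run the same checks over every name from one place, a small helper can loop over them. This is only a sketch: the function name, hostnames, wildcard domain, and dnsmasq IP in the usage comment are hypothetical placeholders, and it needs dig (from bind-utils) and a live cluster to return anything meaningful.

```shell
# Sketch: query each name against the dnsmasq node and flag empty answers.
check_cluster_dns() {
    local dns_ip=$1; shift
    local name addr rc=0
    for name in "$@"; do
        addr=$(dig +short "$name" @"$dns_ip")
        printf '%-30s -> %s\n' "$name" "${addr:-<no answer>}"
        [ -n "$addr" ] || rc=1
    done
    return $rc
}

# On a real cluster (hypothetical names and IP):
#   check_cluster_dns 192.168.133.3 ose3-master.example.com \
#       ose3-node1.example.com ose3-node2.example.com \
#       foo.cloudapps.example.com
```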

Hi Lebon,

That was a pretty great elaboration. Thank you.

The instructions say to provide the IP of the node which hosts the OpenShift router for the "address=" parameter in the openshift.conf file. I just need some clarification for my case, where the router is deployed on both nodes (1 router pod has 2 containers, one on each node).

Please suggest.


Great info. However, I do not think the info below is correct (at least on CentOS 7/RHEL 7 servers):

"Make sure that in /etc/sysconfig/network-scripts/ifcfg-eth0, the variable PEERDNS is set to no so that /etc/resolv.conf doesn't get overwritten on each reboot. Reboot the machines and check that /etc/resolv.conf hasn't been overwritten."

PEERDNS only impacts that particular ifcfg-* file, instructing NetworkManager not to take DNS servers from that interface. I believe you need to set dns=none under [main] in /etc/NetworkManager/NetworkManager.conf and restart NetworkManager if you do not want your /etc/resolv.conf file changed back to default after reboot.
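The commenter's suggestion can be sketched as follows. The block edits a scratch copy so it is safe to try; on a real node you would edit /etc/NetworkManager/NetworkManager.conf and then restart NetworkManager.

```shell
# Scratch stand-in for /etc/NetworkManager/NetworkManager.conf.
NM_CONF=./NetworkManager.conf.demo
printf '[main]\nplugins=ifcfg-rh\n' > "$NM_CONF"

# Add dns=none right after the [main] section header so NetworkManager
# stops rewriting /etc/resolv.conf.
sed -i '/^\[main\]/a dns=none' "$NM_CONF"
grep '^dns=none' "$NM_CONF"

# On a real node, follow up with: systemctl restart NetworkManager
```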



typo. resolv-file not resolve-file


Is this information still valid for RHOS 3.11? Is there a way to prevent the installer from installing dnsmasq and overwriting my configuration?


Did you indeed recommend turning off the whole firewall, instead of just opening the domain port? What was the reason for that?


Wouldn't it be better to use nmcli instead of editing /etc/resolv.conf manually?
