First, a small bit of editorialising (my gist, my rules). PiHole is an excellent ad-blocker. It's perfectly OK for just about every other device on your home network to use PiHole as its primary DNS.
The one device in your network that shouldn't use PiHole-in-a-container for its DNS is the Raspberry Pi running PiHole in a Docker container. I'll go so far as to describe it as a seriously dumb idea.
Why? Several reasons:
- Containers start quite late in the boot cycle. Any process that starts before Docker and depends on DNS being "there" can be disappointed, and PiHole is always late to that party.
- PiHole can also disappear when you do perfectly ordinary things like taking down your stack, which can trigger resolver failover conditions that don't always recover gracefully. You might think your DNS setup is working just fine with the PiHole container in the prime role, but then it all turns to custard and you don't know why.
- DNS can be quite difficult to debug. That's a legacy of its longevity and incredible robustness: you might think an answer is coming from a particular server, but it can just as easily be coming from an upstream relay or a local resolver cache, and it's easy to draw the wrong conclusions.
In short, it might work. You might also live to regret it. Sufficient warning? Read on!
Commands like `DNET` and `RESTART` come from Paraphraser/IOTstackAliases.
- `192.168.203.65` aka *gravel* aka `gravel.mydomain.com` = my local upstream DNS: BIND9 running on a Mac Mini, authoritative for `mydomain.com`. The DHCP server offers 192.168.203.65 for DNS resolution; devices that need PiHole services override this.

  Why? My spouse didn't like all the ads being blocked. Nothing more than that. Originally, I told DHCP to offer 192.168.203.9 for DNS so everything went to PiHole. It worked.
- `192.168.203.9` aka *iotdev* aka `iotdev.mydomain.com` = the Raspberry Pi under test, running IOTstack. Two containers concern us:

    - `pihole` = the container resolving DNS queries
    - `nodered` = a reference container within which we test DNS resolution
- `192.168.203.90` aka *liveblock* aka `liveblock.mydomain.com` = a dummy name-and-IP association which is defined on, and resolves on, *gravel*:

  in the zone file:

  ```
  liveblock  IN A    192.168.203.90
  ```

  in the reverse file:

  ```
  90         IN PTR  liveblock.mydomain.com.
  ```

  but is blacklisted by PiHole:

  ```
  $ sqlite3 \
      ~/IOTstack/volumes/pihole/etc-pihole/gravity.db \
      'select * from vw_blacklist where domain="liveblock.mydomain.com";'
  domain                  id  group_id
  ----------------------  --  --------
  liveblock.mydomain.com  14  0
  ```
You don't need all these players. They are just here for demonstration purposes.
If you want to follow along, you will need `dig`:
- For the Raspberry Pi proper:

  ```
  $ sudo apt install -y dnsutils
  ```
- For the Node-RED container (which runs Alpine Linux):

  ```
  $ docker exec nodered apk add --no-cache bind-tools
  ```

  This is an ephemeral change and will disappear if you recreate the Node-RED container. To make the change permanent, you need to do it in the Dockerfile.
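To sketch what a permanent change might look like, assuming your Node-RED service builds from a local Dockerfile based on the `nodered/node-red` image (as IOTstack's does); treat this as illustrative rather than definitive:

```dockerfile
# Illustrative sketch: base image tag and user name assumed from nodered/node-red conventions
FROM nodered/node-red:latest

# packages must be installed as root
USER root
RUN apk add --no-cache bind-tools

# drop back to the unprivileged user the base image normally runs as
USER node-red
```

After editing the Dockerfile, the container needs to be rebuilt (eg `docker-compose up -d --build nodered`) for the change to take effect.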
- A query directed to the local upstream DNS:

  ```
  $ dig @192.168.203.65 liveblock.mydomain.com +short
  192.168.203.90
  ```
- A query directed to the Raspberry Pi under test:

  ```
  $ dig @192.168.203.9 liveblock.mydomain.com +short
  0.0.0.0
  ```

  The 0.0.0.0 reply means "blocked". PiHole is configured to relay queries in the `mydomain.com` domain to *gravel*. For example:

  ```
  $ dig @192.168.203.9 gravel.mydomain.com +short
  192.168.203.65
  ```

  Test 1 shows that *gravel* can resolve `liveblock.mydomain.com`, so that name returning 0.0.0.0 when the query is directed to the Raspberry Pi under test is proof that the query is being forwarded to, and is being blocked by, PiHole.
- By inference, an undirected query tells us whether the answer is coming from *gravel* or PiHole and, from that, we can deduce the probable traffic flow:

  ```
  $ dig liveblock.mydomain.com +short
  192.168.203.90
  ```

  Conclusion: answered by *gravel*.
- Confirm that the Raspberry Pi is configured to use *gravel* for its DNS:

  ```
  $ grep "^name_servers=" /etc/resolvconf.conf
  name_servers=192.168.203.65
  ```
- Confirm that PiHole is running and listening on expected ports:

  ```
  $ DNET pihole
  NAMES     PORTS
  pihole    0.0.0.0:53->53/udp, :::53->53/udp, 0.0.0.0:53->53/tcp, 0.0.0.0:67->67/udp, :::53->53/tcp, :::67->67/udp, 0.0.0.0:8089->80/tcp, :::8089->80/tcp
  ```
- Tests:

    - An undirected query, on *iotdev*, outside container-space:

      ```
      $ dig liveblock.mydomain.com +short
      192.168.203.90
      ```

      Conclusion: answered by *gravel*.

    - An undirected query, on *iotdev*, inside a reference container:

      ```
      $ docker exec nodered dig liveblock.mydomain.com +short
      192.168.203.90
      ```

      Conclusion: answered by *gravel*.

    - A query, on *iotdev*, directed to localhost:

      ```
      $ dig @127.0.0.1 liveblock.mydomain.com +short
      0.0.0.0
      ```

      Conclusion: answered by PiHole.
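The conclusions above are inferred from the answers that come back. Note that `dig`'s full (non-`+short`) output also prints a footer such as `;; SERVER: 192.168.203.65#53(192.168.203.65)`, naming the first-hop resolver the query was actually sent to (though not any upstream relay it may have forwarded to). A small sketch of extracting that address, using an assumed sample footer line:

```shell
#!/usr/bin/env bash

# A sample ";; SERVER:" footer line in the form dig prints it (value assumed for illustration)
FOOTER=';; SERVER: 192.168.203.65#53(192.168.203.65)'

# pull out just the IP address of the resolver that answered
echo "$FOOTER" | sed -E 's/.*SERVER: ([0-9.]+)#.*/\1/'    # prints 192.168.203.65
```

This is a useful cross-check when an undirected query surprises you: it confirms which server your resolver configuration actually handed the query to.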
- Tell the Raspberry Pi to use "itself" (127.0.0.1) as the DNS server:

  ```
  $ sudo sed -i.bak \
      -e 's/^#name_servers=127.0.0.1/name_servers=127.0.0.1/' \
      -e 's/^name_servers=192.168.203.65/#name_servers=192.168.203.65/' \
      /etc/resolvconf.conf
  ```
Note:

- I don't actually expect you to use `sed`. In any event, the above command assumes that:

  ```
  #name_servers=127.0.0.1
  ```

  is commented-out while:

  ```
  name_servers=192.168.203.65
  ```

  is active, and it inverts that arrangement. Those assumptions probably don't hold on your system. You're better off observing that the file being edited is `/etc/resolvconf.conf`, and then using your favourite text editor to make the necessary changes by hand. It's a good idea to do as `sed` does and save the "before" state in a `.bak` file.
- Confirm that the configuration file has been changed:

  ```
  $ grep "^name_servers=" /etc/resolvconf.conf
  name_servers=127.0.0.1
  ```
- Apply the change:

    - First, tell Raspbian (`resolvconf` can return errors; run it multiple times until it shuts up):

      ```
      $ sudo resolvconf -u
      $ sudo service dhcpcd reload
      ```

    - Then tell Docker to pass the message to the stack:

      ```
      $ RESTART
      Restarting pihole ... done
      Restarting nodered ... done
      ```
- Confirm PiHole is still running and listening on expected ports:

  ```
  $ DNET pihole
  NAMES     PORTS
  pihole    0.0.0.0:53->53/udp, :::53->53/udp, 0.0.0.0:53->53/tcp, 0.0.0.0:67->67/udp, :::53->53/tcp, :::67->67/udp, 0.0.0.0:8089->80/tcp, :::8089->80/tcp
  ```
- Tests:

    - An undirected query, on *iotdev*, outside container-space:

      ```
      $ dig liveblock.mydomain.com +short
      0.0.0.0
      ```

      Conclusion: answered by PiHole.

    - An undirected query, on *iotdev*, inside a container:

      ```
      $ docker exec nodered dig liveblock.mydomain.com +short
      0.0.0.0
      ```

      Conclusion: answered by PiHole.

    - A query, on *iotdev*, directed to localhost:

      ```
      $ dig @127.0.0.1 liveblock.mydomain.com +short
      0.0.0.0
      ```

      Conclusion: answered by PiHole.
- The `sed` command used earlier creates a `.bak` file. If you did something similar (you should have!) and you want to put everything back the way it was:

  ```
  $ sudo mv /etc/resolvconf.conf.bak /etc/resolvconf.conf
  $ sudo service dhcpcd reload
  $ RESTART
  ```
The only way to get other containers to use PiHole for DNS is to set the Raspberry Pi running your IOTstack to use itself (127.0.0.1). This implies that:
- The Raspberry Pi will use PiHole for all DNS resolution; and
- All other containers will use PiHole for all DNS resolution.
There is no mix and match. It's an all-or-nothing deal.
- Assumptions:

    - Baseline configuration
    - IOTstack not running
    - `docker-compose.yml` changed such that all containers except PiHole include:

      ```
      restart: "no"
      dns: ${PIHOLE_IPv4}
      ```
- Start PiHole by itself:

  ```
  $ UP pihole
  Creating network "iotstack_default" with the default driver
  Creating pihole ... done
  ```
- The IP address assigned to the container, dynamically, can be discovered like this:

  ```
  $ PIHOLE_CIDR=$(docker network inspect iotstack_default | jq -r '.[0].Containers[] | select(.Name=="pihole") | .IPv4Address')
  $ echo $PIHOLE_CIDR
  172.18.0.2/16
  ```
- The prefix length can be removed like this:

  ```
  $ export PIHOLE_IPv4=${PIHOLE_CIDR%/*}
  $ echo $PIHOLE_IPv4
  172.18.0.2
  ```
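The `${var%pattern}` form of shell parameter expansion deletes the shortest suffix matching the pattern, so `%/*` removes the final `/` and everything after it. A self-contained sketch (the sample CIDR value is illustrative):

```shell
#!/usr/bin/env bash

# Sample value in CIDR form (illustrative; in practice it comes from docker network inspect)
PIHOLE_CIDR="172.18.0.2/16"

# ${var%pattern} deletes the shortest trailing match of the pattern,
# so %/* strips the prefix length, leaving just the address
PIHOLE_IPv4=${PIHOLE_CIDR%/*}

echo "$PIHOLE_IPv4"    # prints 172.18.0.2
```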
- The PIHOLE_IPv4 variable, having been "exported", is now available when you bring up the remainder of the stack:

  ```
  $ UP
  pihole is up-to-date
  Creating influxdb     ... done
  Creating portainer-ce ... done
  Creating nodered      ... done
  Creating grafana      ... done
  Creating mosquitto    ... done
  ```
- Tests:

    - An undirected query, on *iotdev*, outside container-space:

      ```
      $ dig liveblock.mydomain.com +short
      192.168.203.90
      ```

      Conclusion: answered by *gravel* (ie the Raspberry Pi is respecting `/etc/resolvconf.conf` and/or DHCP).

    - An undirected query, on *iotdev*, inside a container:

      ```
      $ docker exec nodered dig liveblock.mydomain.com +short
      0.0.0.0
      ```

      Conclusion: answered by PiHole.

    - A query, on *iotdev*, directed to localhost:

      ```
      $ dig @127.0.0.1 liveblock.mydomain.com +short
      0.0.0.0
      ```

      Conclusion: answered by PiHole.
The reason for specifying `restart: "no"` for each container (replacing the more common `restart: unless-stopped`) is what happens during a reboot: Docker will restart the containers in an arbitrary order, so the IPv4 address of the pihole container may well change. I have no idea whether the value of the PIHOLE_IPv4 environment variable is preserved by Docker but, even if it persists, it may well be wrong.
One possible approach is a script like the following:

```
#!/usr/bin/env bash

# declare the path to the compose file
COMPOSE="$HOME/IOTstack/docker-compose.yml"

# start the pihole container
docker-compose -f "$COMPOSE" up -d pihole

# wait for the container to be ready for business
while ! nc -w 1 127.0.0.1 53 ; do echo "waiting for PiHole on TCP port 53"; sleep 1; done

# discover the IPv4 address of the pihole container in CIDR form
PIHOLE_CIDR=$(docker network inspect iotstack_default | jq -r '.[0].Containers[] | select(.Name=="pihole") | .IPv4Address')

# extract the IPv4 address without the prefix length
export PIHOLE_IPv4=${PIHOLE_CIDR%/*}

# bring up the remaining containers
docker-compose -f "$COMPOSE" up -d
```
Assuming that script was named `start_iotstack`, it could be invoked by `cron` via a directive like:

```
@reboot start_iotstack
```
Nevertheless, it would still be quite fragile. Any event which caused the PiHole container to change its IP address would bring proceedings to a halt until everything was restarted in the proper order.
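One possible way to reduce that fragility (not part of the approach above, just a sketch) is to pin the pihole container to a fixed address on a user-defined subnet in `docker-compose.yml`, so the other containers can reference a literal `dns:` value that survives reboots. The subnet and address below are illustrative assumptions:

```yaml
# Sketch only: the subnet and address are assumptions, not taken from the setup above
networks:
  default:
    ipam:
      config:
        - subnet: 172.18.0.0/16

services:
  pihole:
    # ... existing pihole definition ...
    networks:
      default:
        ipv4_address: 172.18.0.2

  nodered:
    # ... existing nodered definition ...
    dns: 172.18.0.2
```

With a pinned address there is no need to discover PIHOLE_IPv4 at start-up, and the restart-ordering problem largely disappears, although PiHole must still be up before the other containers need name resolution.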