@pedrolamas
Created August 18, 2020 19:32
Script to fix Docker iptables on Synology NAS
#!/bin/bash

currentAttempt=0
totalAttempts=10
delay=15

while [ $currentAttempt -lt $totalAttempts ]
do
	currentAttempt=$(( $currentAttempt + 1 ))

	echo "Attempt $currentAttempt of $totalAttempts..."

	result=$(iptables-save)

	if [[ $result =~ "-A DOCKER -i docker0 -j RETURN" ]]; then
		echo "Docker rules found! Modifying..."

		iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
		iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER

		echo "Done!"

		break
	fi

	echo "Docker rules not found! Sleeping for $delay seconds..."

	sleep $delay
done
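If you want to review exactly what the script would add before letting it touch your firewall, a small dry-run wrapper can help. This is a sketch, not part of the original gist; `apply_rule` and the `DRY_RUN` variable are illustrative names of my own:

```shell
#!/bin/bash
# Sketch: print the iptables command instead of running it when DRY_RUN=1,
# so the rules can be inspected before applying them as root.
# "apply_rule" and "DRY_RUN" are made-up names, not part of the original script.
apply_rule() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "iptables $*"
  else
    iptables "$@"
  fi
}

# The same two rules the script adds, routed through the wrapper:
DRY_RUN=1 apply_rule -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
DRY_RUN=1 apply_rule -t nat -A PREROUTING -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER
```

Drop the `DRY_RUN=1` prefix (and run as root) once the printed rules look right.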
@Maypul

Maypul commented Jan 30, 2022

Hi there. I've actually been running these iptables rules successfully, or so I thought, but I've noticed something strange with them on my Synology DSM 7. Once the rules are removed, everything works as expected, except of course I no longer get the real IP in my Docker apps.
The issue is this: in the Synology firewall I also need rules like "All All [docker subnet]" so my containers can reach the internet and talk to each other (I have three subnets). With that firewall rule enabled, even a single one of these iptables rules makes some containers reachable from outside, bypassing the firewall's deny rule. In other words, allowing the Docker subnet through the Synology firewall causes some containers to expose their ports on the Synology even though no rule explicitly opens them. In my setup, phpMyAdmin and Organizr (exposed but not allowed through the firewall) were accessible from everywhere, while for example qbittorrentvpn or Plex were not unless explicitly allowed.
I tried host mode on ports in Traefik and it did not work for me. Since I need the firewall on and also need my Docker containers to reach the internet, I simply can't use these iptables rules if they open seemingly random containers. It looks like Docker on Synology and/or iptables don't play well together here.

Picture reference: if these Docker network rules are on, this leak happens on some containers: http://prntscr.com/26lq4pk The other allow rules are just 80 and 443.

Edit 2: I just thought of a similarity between the two apps that leaked. I'm no expert on iptables at all, but both phpMyAdmin and Organizr use port 80 internally in Docker, even though I mapped them to other host ports. I'm assuming these iptables rules somehow also open these apps because they themselves listen on port 80? I will try to check it later.

But the apps that leaked with these firewall rules in Docker had these port mappings:
phpmyadmin - 8080:80/tcp (also checked 30020:80)
organizr - 82:80/tcp

And those that did not leak:
plex - 32400:32400/tcp
arch-qbittorrentvpn - 30000:30000/tcp

And since Traefik needs ports 80 and 443 open, there's the similarity... but I'll stress it again, I am no expert.

Edit 3: It seems to work as described. It would be great if that iptables rule could somehow be adjusted to forward the traffic to a specific container/IP rather than the whole Docker network, which I assume is what the rule's '-j DOCKER' does.
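For what it's worth, forwarding to a single container instead of the whole DOCKER chain should be possible with a plain DNAT rule in place of `-j DOCKER`. A sketch only (untested on Synology): the container IP `172.17.0.2` and the port numbers are placeholders, and the container IP would need to be pinned since Docker assigns addresses dynamically:

```shell
# Sketch: send host port 8080 straight to one container's port 80 instead
# of jumping to the DOCKER chain. 172.17.0.2 is a placeholder; look up the
# real address with:
#   docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container>
iptables -t nat -A PREROUTING -p tcp --dport 8080 -m addrtype --dst-type LOCAL \
  -j DNAT --to-destination 172.17.0.2:80
```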

@Maypul

Maypul commented Feb 1, 2022

I have tried updating the iptables rules to specific ranges or custom chains with no success, but I did manage to get correct IP logging in my Traefik instance without modifying iptables at all and without using host mode (which did not work for me). For anyone interested: in my case I had to add the IP of the Docker gateway (the one IP I kept seeing in the logs) to entrypoints.https.forwardedHeaders.trustedIPs in Traefik; before, I only had Cloudflare IPs in there. Now Traefik and everything behind it gets the correct IP in my setup. One other thing I had to update: in my whitelists I added ipStrategy depth: 1 so the whitelist filters on the real IP instead of the gateway IP.
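A minimal sketch of those two Traefik v2 settings, written as docker CLI flags and labels. The entrypoint name `https`, the gateway IP `172.17.0.1`, the source range, and the middleware name `my-whitelist` are assumptions for illustration; substitute your own values:

```shell
# Sketch only: Traefik v2 static config via CLI flags, assuming an entrypoint
# named "https" and a Docker bridge gateway of 172.17.0.1 (check yours with
# `docker network inspect bridge`).
docker run -d --name traefik \
  -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  traefik:v2.10 \
  --providers.docker=true \
  --entrypoints.https.address=:443 \
  --entrypoints.https.forwardedHeaders.trustedIPs=173.245.48.0/20,172.17.0.1

# Whitelist middleware on a proxied container, with ipStrategy depth 1 so the
# client IP is taken from X-Forwarded-For rather than the gateway address
# ("my-whitelist" is a made-up middleware name):
docker run -d --name whoami \
  --label "traefik.http.middlewares.my-whitelist.ipwhitelist.sourcerange=10.0.0.0/24" \
  --label "traefik.http.middlewares.my-whitelist.ipwhitelist.ipstrategy.depth=1" \
  --label "traefik.http.routers.whoami.middlewares=my-whitelist" \
  traefik/whoami
```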

Edit: I also worked a bit more on the iptables rules, and finally managed to get them working for me without leaking additional ports. Since I only needed ports 80 and 443 for my proxy, plus the DNS port (which doesn't get proxied), all I needed instead was:
iptables -t nat -A PREROUTING -p tcp --dport 80 -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A PREROUTING -p tcp --dport 443 -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A PREROUTING -p tcp --dport 53 -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A PREROUTING -p udp --dport 53 -m addrtype --dst-type LOCAL -j DOCKER

which worked exactly as intended and did not provide these leaks mentioned earlier.

@kevindd992002

@pedrolamas do you have any comments on @ben-ba's comment and @Maypul's comment? I want to be able to get rid of macvlan and transition to using this fix but I'm afraid it will break something in my Syno, like firewall rules.

@Maypul Can you provide the complete script you used to make this persistent across boots? So if I just want SWAG (ports 80 and 443 tcp) and AGH (port 53 tcp/udp), I just copy the commands you have above?

@Maypul

Maypul commented Feb 22, 2022

@kevindd992002 I haven't tried the script after a reboot yet, as I am still rebuilding the RAID on the device, but I basically replaced the rules in the original script with mine.

#! /bin/bash

currentAttempt=0
totalAttempts=10
delay=15

HTTP_PORT=81
HTTPS_PORT=444

sed -i "s/^\( *listen .*\)80/\1$HTTP_PORT/" /usr/syno/share/nginx/*.mustache
sed -i "s/^\( *listen .*\)443/\1$HTTPS_PORT/" /usr/syno/share/nginx/*.mustache

while [ $currentAttempt -lt $totalAttempts ]
do
	currentAttempt=$(( $currentAttempt + 1 ))
	
	echo "Attempt $currentAttempt of $totalAttempts..."
	
	result=$(iptables-save)

	if [[ $result =~ "-A DOCKER -i docker0 -j RETURN" ]]; then
		echo "Docker rules found! Modifying..."
		

		iptables -t nat -A PREROUTING -p tcp --dport 80 -m addrtype --dst-type LOCAL -j DOCKER
		iptables -t nat -A PREROUTING -p tcp --dport 443 -m addrtype --dst-type LOCAL -j DOCKER
		iptables -t nat -A PREROUTING -p tcp --dport 53 -m addrtype --dst-type LOCAL -j DOCKER
		iptables -t nat -A PREROUTING -p udp --dport 53 -m addrtype --dst-type LOCAL -j DOCKER

		echo "Done!"
		
		break
	fi
	
	echo "Docker rules not found! Sleeping for $delay seconds..."
	
	sleep $delay
done

Edit: You might want to drop

HTTP_PORT=81
HTTPS_PORT=444

sed -i "s/^\( *listen .*\)80/\1$HTTP_PORT/" /usr/syno/share/nginx/*.mustache
sed -i "s/^\( *listen .*\)443/\1$HTTPS_PORT/" /usr/syno/share/nginx/*.mustache

as it's just there to change Synology's default web server ports.

As for the commands, you might need sudo when trying them in the CLI first, e.g. sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -m addrtype --dst-type LOCAL -j DOCKER

@kevindd992002

I see. Will this mess up anything with the Synology firewall?

@Maypul

Maypul commented Feb 22, 2022

@kevindd992002 It still does, though maybe not on as big a scale as the original command. I also haven't found any 'leaks' so far, which I test for from time to time. It still forwards traffic, but only to specific ports, so it bypasses the firewall just for those. I can still use the Synology firewall to manage other rules, since ports 80 and 443 were supposed to be open for me anyway.

@Edemilorhea

Can I ask everyone: does this script work on DSM 7.1?

I ask because I run linuxserver/chevereto in Docker on my Synology.

But the user IP shows up as my own IP, not the user's real IP.

I just tried this script: I copied the whole script into a user-defined script

like the attached image shows, but it doesn't work; user uploads still log my Docker IP.

How can I fix it?
(image attached)

@kevindd992002

How do I revert the changes on this script? Just disable the scheduled task and reboot?

@pedrolamas
Author

How do I revert the changes on this script? Just disable the scheduled task and reboot?

Correct, that should be enough to revert the changes!
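If you'd rather remove the rules immediately instead of waiting for a reboot, `iptables -D` deletes the first rule matching the same arguments used to add it. A sketch mirroring the two `-A` lines in the original script (run as root, and only delete rules you actually added):

```shell
# Sketch: -D removes the first rule that matches these exact arguments.
# These mirror the two -A lines added by the original script.
iptables -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -D PREROUTING -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER
```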

@ideaalab

ideaalab commented Jun 22, 2022

So, which rules to use? The originals from @pedrolamas, the ones suggested by @flo-mic, or the last ones from @Maypul?

For example, if I'm trying to open port 6000 tcp and range 1000:1100 udp to a container in Docker:

iptables -t nat -A PREROUTING -p tcp --dport 6000 -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A PREROUTING -p udp --match multiport --dports 1000:1100 -m addrtype --dst-type LOCAL -j DOCKER

This would be ok? Or still I need this?

iptables -t nat -A PREROUTING ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER

@styt

styt commented Sep 21, 2022

So, which rules to use? The originals from @pedrolamas, the ones suggested by @flo-mic, or the last ones from @Maypul?

For example if im trying to open ports 6000 tcp and range 1000:1100 udp to a container in docker:

iptables -t nat -A PREROUTING -p tcp --dport 6000 -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A PREROUTING -p udp --match multiport --dports 1000:1100 -m addrtype --dst-type LOCAL -j DOCKER

This would be ok? Or still I need this?

iptables -t nat -A PREROUTING ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER

Hi,

Has anyone spent more time here figuring out the best alternative for this?

Thx
Stefan

@kevindd992002

@pedrolamas any updates to the questions above?

@pedrolamas
Author

@kevindd992002 these worked fine at the time I wrote this, however I have since moved away from this type of setup and so can't test the proposed alternatives.

Having said that:

  • @ben-ba is most likely right, that second rule right now looks suspiciously useless...
  • @Maypul's version targets only specific ports, so it seems to me the best (tighter / more secure) approach to the whole problem!

@kevindd992002

@kevindd992002 these worked fine at the time I wrote this, however I have since moved away from this type of setup and so can't test the proposed alternatives.

Having said that:

* @ben-ba is most likely right, that second rule right now looks suspiciously useless...

* @Maypul version targets only specific ports, so seems to me the best (tighter / more secure) approach to the whole problem!

Do you mind sharing what you're using now so that the real client IP is reflected in AGH or pihole?

@pedrolamas
Author

What I meant is that I am not hosting these apps inside the Synology NAS, but am instead using a completely separate system.

@kevindd992002

Gotcha!

@dhow

dhow commented Feb 27, 2023

Having the same headache; I tried this script but could not get it to work on DSM 7.1.1-42962 Update 4.

I'm starting to wonder if I'm missing something, as this (having the real IP forwarded to Docker containers) should be something common that many people would want to achieve, just thinking of analytics or security use cases. However, when I search, I don't find many results...

FYI, I have my NAS IP added to Security > Trusted proxy, as I saw someone do that when using Nginx as a reverse proxy, but no joy...

@domigi

domigi commented Mar 12, 2023

Thanks for sharing this!
I noticed in my Nextcloud logs that LAN-local IP addresses are still not being shown; instead I see the IP of the reverse proxy. I haven't been able to find a solution for this yet.

@ben-ba

ben-ba commented Mar 13, 2023

@domigi
That's the way a reverse proxy works.

@domigi

domigi commented Mar 13, 2023

@ben-ba Not sure if we're talking about the same idea.
In my nextcloud container it seems to only see the XFF IP if it's an external/public IP.
For example here two request:

Client        Proxy        Service      Request appears to be from
10.0.0.2      172.16.0.2   172.30.1.2   172.16.0.2
42.199.8.17   172.16.0.2   172.30.1.2   42.199.8.17

(My local LAN is 10.0.0.0/24)

What I would like to achieve: in the example above, the first request should also appear to be from 10.0.0.2 instead of 172.16.0.2 as it currently does.

@arsenio7

arsenio7 commented Apr 7, 2023

@domigi That's the way a reverse proxy works.

In this case, it's not a reverse proxy issue. This happens within the NAT mechanism of the Synology box's firewall (OSI layer 3); a reverse proxy would be an application accepting incoming connections and forwarding them, i.e. an HTTP reverse proxy (OSI layer 7).

@petrosmm

This works on a Synology DS1621+ with DSM 7... kudos!

@scrapix

scrapix commented Jul 9, 2023

@Maypul
Thanks a lot! This helped me get the actual IPs in Traefik and stopped CrowdSec from blocking all connections!

@SK-StYleZ

I've tested iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER and it worked for me => I saw the "real" IP in the Traefik access log.
After a reboot I ran the same command again, but this time it didn't work; I'm getting the internal Docker IPs in the Traefik access log.
I recreated the containers but still no "real" IPs...
What am I missing?

@Hoaxr

Hoaxr commented Sep 7, 2023

Finally! I've been searching for this for so long, and now it works! Thanks

@Paul-B

Paul-B commented Sep 19, 2023

I've wasted so much time this week getting NGINX reverse proxy manager to work properly on Synology. If you want IP whitelisting, then being aware of the actual outside IPs is essential, and I just could not get it to work. I got headaches from macvlan in Docker and it still did not work. Thank you!

Is this problem unique to Synology?
The rules that I applied are the first suggestion by Pedro:
sudo iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
sudo iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER

However I have a few thoughts.

The first one is security.
It seems this is a bit of a brute-force method that risks exposing other containers' ports to the network. An attacker would have to break through my router's port forwarding first, which does seem unlikely. Still, the approach by @Maypul of only scripting the ports you are actually going to use sounds like smart paranoia.

The second thought I have is about complexity.
After a while there are so many tweaks to a system that, without proper documentation to self (or others), nobody, including myself, will ever figure out what these tweaks were or why they were made in case of a system reset or rebuild. So it might be smarter to run the Docker containers that need to be aware of the outside IP on another system, such as a Raspberry Pi somewhere on the network, kept in a more or less default config and dedicated to these tasks. That way it stays clear what the machine's purpose is and why it is set up the way it is.

Anyway, just my thoughts but I'm curious about comments.

@mat926

mat926 commented Oct 8, 2023

This doesn't work when trying to access the container through a reverse proxy.

@Jabb0

Jabb0 commented Nov 30, 2023

Awesome @Maypul thank you!

The change bypassing the Synology firewall is understandable and is the default Docker behaviour:

Other rules added to the FORWARD chain, either manually, or by another iptables-based firewall, are evaluated after the DOCKER-USER and DOCKER chains. This means that if you publish a port through Docker, this port gets published no matter what rules your firewall has configured

https://docs.docker.com/network/packet-filtering-firewalls/#add-iptables-policies-before-dockers-rules

It should only affect ports that you published with docker.

I have my home network on eth0 and another network on eth1. For this reason I only want to accept connections from eth0.

Adding the -i eth0 flag does the trick:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -m addrtype --dst-type LOCAL -j DOCKER

@JVT038

JVT038 commented Mar 27, 2024

None of these iptables rules have worked for me :(

I'm using a DS918+ and running DSM 7.2.

When I run the iptables script, the X-Forwarded-For IP address becomes the address of my router for some reason. So I don't get the client IP, but the IP of my router.

Does anyone know a fix? I've also tried disabling userland-proxy in the docker daemon, but that didn't work either. Or maybe I did something wrong.

@Aurel004

@ben-ba Not sure if we're talking about the same idea. In my nextcloud container it seems to only see the XFF IP if it's an external/public IP. For example here two request:

Client        Proxy        Service      Request appears to be from
10.0.0.2      172.16.0.2   172.30.1.2   172.16.0.2
42.199.8.17   172.16.0.2   172.30.1.2   42.199.8.17
(My local LAN is 10.0.0.0/24)

What I would like to achieve: In the example above the first request should also appear to be from 10.0.0.2 and not how it currently is 172.16.0.2.

Have you found any fix for this?
