@BretFisher
Last active August 12, 2024 16:10
Docker Swarm Port Requirements, both Swarm Mode 1.12+ and Swarm Classic, plus AWS Security Group Style Tables

Docker Swarm Mode Ports

Starting with Docker 1.12 in July 2016, Swarm Mode is built into the Docker Engine and includes its own key/value store, so it's easier to get started and there are fewer ports to configure.
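For example, getting a swarm running takes just two commands (a minimal sketch; the IP address and token are placeholders for your own):

```bash
# On the first manager (use that node's own IP)
docker swarm init --advertise-addr 10.0.0.10

# On each worker, run the join command that "init" prints,
# which targets TCP 2377 on the manager
docker swarm join --token <worker-token> 10.0.0.10:2377
```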

Inbound Traffic for Swarm Management

  • TCP port 2377 for cluster management & raft sync communications
  • TCP and UDP port 7946 for "control plane" gossip discovery communication between all nodes
  • UDP port 4789 for "data plane" VXLAN overlay network traffic
  • IP Protocol 50 (ESP) if you plan on using an overlay network with the encryption option
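Outside AWS, the same traffic has to be allowed through the host firewall on every node. A minimal sketch using firewalld, assuming that's your distro's active firewall:

```bash
# Cluster management (only needed on managers)
firewall-cmd --permanent --add-port=2377/tcp
# Control plane gossip (all nodes)
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
# Data plane VXLAN (all nodes)
firewall-cmd --permanent --add-port=4789/udp
# ESP, only if you use the overlay encryption option
firewall-cmd --permanent --add-rich-rule='rule protocol value="esp" accept'
firewall-cmd --reload
```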

AWS Security Group Example

AWS Tip: Use Security Groups in AWS's "source" field rather than subnets, so the rules dynamically apply as new nodes are added to the SGs.

Inbound to Swarm Managers (superset of worker ports)

| Type | Protocol | Ports | Source |
| --- | --- | --- | --- |
| Custom TCP Rule | TCP | 2377 | swarm + remote mgmt |
| Custom TCP Rule | TCP | 7946 | swarm |
| Custom UDP Rule | UDP | 7946 | swarm |
| Custom UDP Rule | UDP | 4789 | swarm |
| Custom Protocol | 50 | all | swarm |
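For example, the manager rules above could be created with the AWS CLI roughly like this (a sketch; sg-MANAGERS and sg-SWARM are placeholder IDs for your manager SG and the SG shared by all swarm nodes):

```bash
# Cluster management; add a second 2377 rule for your remote-mgmt source
aws ec2 authorize-security-group-ingress --group-id sg-MANAGERS \
  --protocol tcp --port 2377 --source-group sg-SWARM
# Control plane gossip, TCP and UDP
aws ec2 authorize-security-group-ingress --group-id sg-MANAGERS \
  --protocol tcp --port 7946 --source-group sg-SWARM
aws ec2 authorize-security-group-ingress --group-id sg-MANAGERS \
  --protocol udp --port 7946 --source-group sg-SWARM
# VXLAN data plane
aws ec2 authorize-security-group-ingress --group-id sg-MANAGERS \
  --protocol udp --port 4789 --source-group sg-SWARM
# ESP is IP protocol 50 and has no ports
aws ec2 authorize-security-group-ingress --group-id sg-MANAGERS \
  --protocol 50 --source-group sg-SWARM
```

The worker SG below takes the same rules minus the 2377 one.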

Inbound to Swarm Workers

| Type | Protocol | Ports | Source |
| --- | --- | --- | --- |
| Custom TCP Rule | TCP | 7946 | swarm |
| Custom UDP Rule | UDP | 7946 | swarm |
| Custom UDP Rule | UDP | 4789 | swarm |
| Custom Protocol | 50 | all | swarm |

Docker Swarm "Classic" Ports, with Consul

For Docker 1.11 and older. I used the list from the Docker docs on Swarm Classic, then tested it on multiple swarms.

Inbound to Swarm Nodes

  • 2375 TCP for swarm manager -> nodes (LOCK PORT DOWN, no auth)
  • 7946 TCP/UDP for container network discovery from other swarm nodes
  • 4789 UDP for container overlay network traffic from other swarm nodes

Inbound to Swarm Managers

  • 3375 TCP for spawner -> swarm manager (LOCK PORT DOWN, no auth)
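As I remember the Swarm Classic docs, the manager and each node ran as containers pointed at the same discovery backend, roughly like this (a sketch with placeholder addresses; verify the flags against the swarm image docs for your version):

```bash
# Manager: expose 3375 externally, mapped to the container
docker run -d -p 3375:2375 swarm manage consul://10.0.0.20:8500

# Each node: advertise its own Docker API (TCP 2375) to discovery
docker run -d swarm join --advertise=10.0.0.30:2375 consul://10.0.0.20:8500
```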

Inbound to Consul

  • 8500 TCP for swarm manager/nodes -> consul server (LOCK PORT DOWN, no auth)
  • 8300 TCP for consul agent -> consul server
  • 8301 TCP/UDP for consul agent -> consul agent
  • 8302 TCP/UDP for consul server -> consul server
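Those are Consul's default ports: server RPC on 8300, LAN gossip on 8301, WAN gossip on 8302, and the HTTP API on 8500. A minimal sketch of a Consul server that listens on them (the address and data dir are placeholders):

```bash
consul agent -server -bootstrap-expect=3 \
  -bind=10.0.0.20 -client=0.0.0.0 \
  -data-dir=/var/lib/consul
```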

Swarm Classic Inbound Ports In AWS Security Group Format, with Consul

AWS Tip: Use Security Groups in AWS's "source" field rather than subnets, so the rules dynamically apply as new nodes are added to the SGs.

This is another way to look at the lists above, in a format that makes sense for AWS SGs.

  • Assume AWS inbound traffic flows from:
    • Internet ELB -> Swarm Managers
    • Swarm Managers -> Swarm Nodes
    • Swarm Managers -> Consul Internal ELB
    • Swarm Nodes -> Consul Internal ELB
    • Consul Internal ELB -> Consul Nodes

ELB Swarm Manager

| Type | Protocol | Ports | Source |
| --- | --- | --- | --- |
| Custom TCP Rule | TCP | 3375 | spawners |

Swarm Managers

| Type | Protocol | Ports | Source |
| --- | --- | --- | --- |
| Custom TCP Rule | TCP | 3375 | elb-swarm-manager |

Swarm Nodes

| Type | Protocol | Ports | Source |
| --- | --- | --- | --- |
| Custom TCP Rule | TCP | 2375 | swarm-managers |
| Custom TCP Rule | TCP | 7946 | swarm-nodes |
| Custom UDP Rule | UDP | 7946 | swarm-nodes |
| Custom UDP Rule | UDP | 4789 | swarm-nodes |

ELB Consul

| Type | Protocol | Ports | Source |
| --- | --- | --- | --- |
| Custom TCP Rule | TCP | 8500 | swarm-nodes |
| Custom TCP Rule | TCP | 8500 | swarm-managers |

Consul Nodes

| Type | Protocol | Ports | Source |
| --- | --- | --- | --- |
| Custom TCP Rule | TCP | 8500 | elb-consul |
| Custom TCP Rule | TCP | 8300-8302 | consul-nodes |
| Custom UDP Rule | UDP | 8301-8302 | consul-nodes |
@InitusJames

Hello, I can't find a solution to this problem; I hope it's OK to post here. Swarm load balancing won't work for me.

I’ve followed the basic Docker install and set up a swarm manager and workers.

Example:

```
sudo docker swarm init --listen-addr 192.168.0.218:2377 --advertise-addr 192.168.0.218

sudo docker swarm join --token SWMTKN-1-1l0280etdtdtqn8qc6pi5lm1cmcs2vk8mz92n6hqdde91kj88u-1h0278cywn9pthi5yafphow5l 192.168.0.218:2377
```

The resulting “join” messages seem just fine, and the swarm is now active.

I am attempting to manage this via Portainer and it seems to work well.
I can pull down and deploy images across all the worker nodes, no problem.

I can connect directly to each one and test the app via port 8888, all results all correct.

The problem stems from hitting the manager node. It will serve up the Web API but it will not load balance the process. I get only responses from the manager node and no others.

I’ve scaled the cluster down and it works great. There is a scenario where the manager loses the container image due to the scale-down, which is fine, since there are 2 or more instances of that service across other nodes. I can connect to them directly via their IP addresses, but not via the manager node.

The Ubuntu image runs on Rock64 and Pine64 SBCs, so there is no firewall present on these minimal images.

Not sure where to go now so I am here.

```
Client:
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        4c52b90
 Built:             Wed Jan 9 19:42:36 2019
 OS/Arch:           linux/arm64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan 9 19:03:16 2019
  OS/Arch:          linux/arm64
  Experimental:     false
```

Does anyone have any suggestions on what to check next, please?

Additional details on ports for this issue are here:
https://forums.docker.com/t/swarm-doesnt-want-to-work-but-seems-setup-correctly/67667
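A quick way to rule the swarm ports in or out is to test them between nodes with nc (a rough check with placeholder IPs; "open" results for UDP aren't conclusive):

```bash
nc -zv 192.168.0.219 7946     # control-plane gossip, TCP
nc -zvu 192.168.0.219 7946    # gossip, UDP
nc -zvu 192.168.0.219 4789    # VXLAN data plane, UDP
nc -zv 192.168.0.218 2377     # manager only: cluster management
```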

@nanakwafo

Wow, you saved my day!

@Nikoogle


2021, still a thanks :)

@mehransaeed7810

I can't ping 8.8.8.8 or any other host from a container. The containers are running in the swarm.
