@ddimtirov · Created May 13, 2019 11:14
UDP Multicast in Container Networking

If your application is built around UDP multicast, you are probably used to network issues and long negotiations with corporate data-center network teams. For me that was one of the things that made me procrastinate learning about cloud and container networking, but I finally bit the bullet, and it turns out the situation is not as bad as I thought (not that great either).

I will assume you have done your homework on basic Docker networking; if you haven't, go read the docs overview first.

As we know, Docker has the following built-in network drivers:

  • None - no network - no multicast.
  • Host - no abstraction - multicast away!
  • Bridged - a container can multicast to itself (useful if it is an ugly, horrible multi-process container, #dontdoit), but the bridge network won't forward the IGMP membership reports announcing who is listening, so your multicast packets will never leave the container.
  • MACVLAN - UDP multicast works. Slower than host mode, but not by much. You need to do cluster discovery and routing manually, and it is fiddly to set up in managed networks.
  • Overlay - the native Docker driver uses a kernel bridge for containers on the same host and a VXLAN-encapsulated tunnel between bridges. As the VXLAN tunnel does not support IGMP snooping, your UDP multicast will not be routed. There is moby/libnetwork#552 open for contributions.
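An easy way to check which of these actually passes multicast is to run a small probe inside a container on the network in question. This is a minimal sketch: the group address `239.1.2.3` and port `5007` are arbitrary choices for the example, and the optional `ifaddr` parameter (to pin a specific interface) is something I added for testability, not a Docker requirement.

```python
import socket

GROUP, PORT = "239.1.2.3", 5007  # arbitrary group/port for the probe

def make_receiver(ifaddr="0.0.0.0"):
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", PORT))
    # Joining the group is what emits the IGMP membership report that
    # bridge/overlay networks fail to propagate.
    mreq = socket.inet_aton(GROUP) + socket.inet_aton(ifaddr)
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    rx.settimeout(2.0)
    return rx

def send_probe(payload=b"multicast-probe", ifaddr=None):
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL 1 keeps the probe on the local segment.
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    if ifaddr:  # pin the outgoing interface, e.g. "127.0.0.1" for a local test
        tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                      socket.inet_aton(ifaddr))
    tx.sendto(payload, (GROUP, PORT))
    tx.close()
```

Run the receiver in one container and the sender in another on the same Docker network: if `recvfrom` times out, that driver is not passing multicast between containers.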

If you want the convenience of overlay networks and you need UDP multicast, you have only two choices - use the remote driver for Weave or Contiv. I haven't spent enough time with them to comment, but it looks like Weave is feature-rich (risking complexity, performance, bugs), while the primary selling point of Contiv is Cisco integration (which I am not qualified to comment on).

This is all there is to say about containers and multicast as of May 2019.

I also found that none of the big three clouds (AWS, Azure, or Google Cloud) supports multicast in their networks. In practice that means your application containers will not be able to use the host-mode or MACVLAN drivers, and the only option is a Weave or Contiv overlay.

In case you want to dive in at the deep end and roll your own multicast tunneling, start here: http://troglobit.github.io/howto/multicast/
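The core idea behind such tunneling is simple: a relay on each side subscribes to the local group, ships the datagrams over plain unicast (which clouds and overlays do route), and the remote relay re-emits them on its own local group. A toy sketch of the two halves, with invented group/port values and single-datagram helper functions for clarity (a real relay would loop, and would guard against re-capturing its own re-injected traffic):

```python
import socket

GROUP, MCAST_PORT = "239.1.2.3", 6007  # invented values for the example

def open_multicast_rx(ifaddr="0.0.0.0"):
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", MCAST_PORT))
    mreq = socket.inet_aton(GROUP) + socket.inet_aton(ifaddr)
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return rx

def tunnel_one(mcast_rx, peer_addr):
    # Side A: take one datagram off the local group and ship it to the
    # remote relay as ordinary unicast.
    data, _ = mcast_rx.recvfrom(65507)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(data, peer_addr)
    tx.close()
    return data

def reinject_one(unicast_rx, ifaddr=None):
    # Side B: take one tunneled datagram and re-emit it on the local group,
    # so subscribers on this side see ordinary multicast.
    data, _ = unicast_rx.recvfrom(65507)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    if ifaddr:  # pin the outgoing interface, e.g. for testing over loopback
        tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                      socket.inet_aton(ifaddr))
    tx.sendto(data, (GROUP, MCAST_PORT))
    tx.close()
    return data
```

Tools like the ones in the guide above do this properly (framing, TTL handling, loop prevention); this sketch only shows why the trick works at all.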
