Embedded OpenDDS via Linux Containers
Deploy OpenDDS programs within containers running on different Raspberry Pis connected via a LAN switch. The OpenDDS participants need to be able to communicate with each other using RTPS. The programs must be able to use sensor data from sensors connected to the Raspberry Pi's GPIO interface. One participant should make use of the Raspberry Pi's camera.
- OpenDDS 3.12
- Three Raspberry Pi 3 Model B boards
- Docker 17.11.0-ce
- OpenDDS programs run within cross-platform containers (ARM/x86), but programs within containers can't communicate with each other yet
- OpenDDS programs run on several machines within a LAN communicating with each other (without containers)
- Sensor data can be transmitted over the RTPS network. E.g. pressing a GPIO-bound button on Raspi A causes a motor on Raspi B to turn
Docker: Cross host / cross container linking
By default, Docker routes all container network traffic through the Docker bridge. The bridge is a separate network interface that is installed at the OS level during Docker's installation. It allows cross-container communication for containers deployed on the same physical host. However, cross-container communication across separate hosts is impossible through the bridge interface. Fortunately, Docker allows users to specify which network interface a container will use. This is done with the `--network` flag when instantiating a container. Several network modes are available out of the box.
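For illustration, selecting the network mode might look like this (the image name is a placeholder; any image with the BusyBox `ip` applet works):

```shell
# Default: the container is attached to the Docker bridge (docker0).
docker run --rm alpine ip addr

# Host mode: the container shares the host's network stack directly.
docker run --rm --network=host alpine ip addr
```

Comparing the two outputs shows the difference: in bridge mode the container sees only its own virtual interface, in host mode it sees all of the host's interfaces.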
The first one is the host option. This option gives the container full access to the host's network stack. As a result, the container has no limits on what it can do in terms of networking; one could say the container acts on behalf of the host. With host mode, no virtualization layers through the Docker daemon need to be traversed, as is the case with the bridge network mode. As a result, host mode has significant performance benefits over the other modes. However, this approach is discouraged as it results in severe security flaws.

Another way to enable cross-host, cross-container communication is through logical overlay networks. Docker supports user-defined networks employing a variety of network drivers that may be plugged in at will. Some network drivers set up a logical overlay network on top of the physical one, which makes it possible for distributed containers to act as if they were connected via a LAN. The macvlan network driver, for example, achieves this through the use of VLANs.
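A macvlan setup could be sketched roughly as follows; the subnet, gateway, parent interface, and network name are assumptions for a typical home LAN, not values from our deployment:

```shell
# Create a macvlan network bound to the host's physical interface.
# Subnet, gateway, and parent interface must match the actual LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 raspi_lan

# Containers attached to this network get an address on the physical
# LAN and are reachable from other hosts like ordinary machines.
docker run --rm --network=raspi_lan alpine ip addr
```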
All options to enable cross-container linking across separate hosts have one significant downside: they only work when all communicating parties are deployed using Docker. This severely limits Docker's usefulness in many use cases. E.g., it often can't be guaranteed that a given application can be deployed on top of Docker because it is built to be deployed on bare metal, i.e., without an OS in between.
Docker/DDS: Multicast is not well supported by Docker
As suspected, Docker's default network drivers don't support IP multicast. This was confirmed by the OpenDDS team, who stated that they have experimented with cross-host, cross-container communication (Source). It is evident from an ongoing discussion on the Docker GitHub repo that multicast support is a much-requested feature, and chances are that it will be implemented in the near future.
Since DDS's dynamic service discovery is based on multicast, it cannot be used in the context of containerized, distributed systems - i.e. not with the network drivers currently available for Docker. We tried several network drivers, among them weave. They all enable cross-container, cross-host communication, but multicast is not supported.
Due to Docker's lack of multicast support, dynamic (distributed) RTPS service discovery is hard to achieve on a container cluster. It is very likely that we will have to drop dynamic service discovery in favor of centralized service discovery via OpenDDS's InfoRepo.
A major disadvantage of the centralized InfoRepo approach is increased maintenance effort: a dedicated node hosting the InfoRepo needs to be present, and all participants need to be configured with access to it.
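Such a setup could look roughly like this; the InfoRepo address, port, config file name, and the `publisher` binary are assumptions for illustration, not our actual deployment:

```shell
# On the dedicated node: start the InfoRepo on a fixed endpoint.
DCPSInfoRepo -ORBListenEndpoints iiop://0.0.0.0:12345

# On each participant host: point OpenDDS at the InfoRepo via a
# config file instead of relying on multicast-based RTPS discovery.
cat > infrepo.ini <<'EOF'
[common]
DCPSInfoRepo=corbaloc:iiop:192.168.1.10:12345/DCPSInfoRepo
EOF

# "publisher" stands in for any OpenDDS participant executable.
publisher -DCPSConfigFile infrepo.ini
```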
Docker: Control GPIO interface from within container
It is possible to control the GPIO pins from within the container. There is one caveat, however: the container needs to be run in privileged mode, i.e. with the `--privileged` flag. It is unclear what exactly this does, but it is certain that it has serious security implications.
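The invocation might look like this (the image name is a placeholder; the `--device` variant is a possibly less invasive alternative we have not verified yet):

```shell
# Privileged mode: grants the container access to all host devices,
# including /dev/gpiomem and /dev/mem used for GPIO access.
docker run --rm --privileged my-gpio-image

# Possibly narrower alternative: expose only the GPIO device node.
docker run --rm --device /dev/gpiomem my-gpio-image
```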
Raspi: GPIO only supports digital I/O
The Raspi's GPIO only supports digital, as opposed to analog, I/O. Lots of data created by sensors is analog, e.g. that of temperature sensors. As a result, we need a number of A/D converters; so far, we only have one. Another challenge is to make the conversion work with the converter at hand - the values retrieved from the temperature sensor don't quite make sense yet.
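As a sanity check for the conversion math, here is a sketch assuming a TMP36 temperature sensor behind a 10-bit A/D converter (e.g. an MCP3008) with a 3.3 V reference - sensor model, converter, and the raw reading are assumptions, not measured values:

```shell
# Hypothetical raw 10-bit reading (0..1023) from the A/D converter.
raw=310

# TMP36: Vout = 0.5 V + 10 mV/degC; 10-bit ADC with 3.3 V reference.
temp_c=$(awk -v r="$raw" 'BEGIN { v = r / 1023 * 3.3; printf "%.1f", (v - 0.5) * 100 }')
echo "$temp_c"   # 310/1023 * 3.3 = 1.0 V -> 50.0 degC
```

If values computed this way are wildly off, the wiring or the reference voltage assumption is the first thing to check.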
- What does Docker's `--privileged` do? How does it impact security?
- How does `--network=host` work? How does it impact security?
- How does the mesh network on top of Kubernetes work?
- How big is the performance penalty of `bridge` network mode compared to `host`?
- Single host DDS communication
- Multi host DDS communication
- Single host / cross container DDS communication
- Multi host / cross container DDS communication (with security caveats)
- Achieve multi host / cross container communication through container orchestration platform
- Control GPIO pins from DDS client
- Sensor input on host A causes sensor output on host B
- Control GPIO pins from within container
- Send camera stream over DDS network to other participants