IOTstack tutorial: running two instances of Node-RED on the same host

Suppose you have a Node-RED container running but you want to create a test version where you can experiment safely without disturbing the original.

Running multiple instances of a container is something Docker excels at! This tutorial explains how to set it up. It will also help you to understand the differences between containers running in host mode and non-host mode.

Assumptions

  1. Your existing Node-RED service definition looks similar to this:

      nodered:
        container_name: nodered
        build:
          context: ./services/nodered/.
          args:
            - DOCKERHUB_TAG=latest
            - EXTRA_PACKAGES=
        restart: unless-stopped
        environment:
          - TZ=${TZ:-Etc/UTC}
        x-network_mode: host
        ports:
          - "1880:1880"
        user: "0"
        volumes:
          - ./volumes/nodered/data:/data
          - ./volumes/nodered/ssh:/root/.ssh

    Don't worry if your service definition has a one-line build clause like this:

    build: ./services/nodered/.

    The one-line version is what IOTstack used historically. The five-line version is what you would get from a fresh IOTstack installation today. The five-line version lets you control a lot more from your compose file, but it also depends on having a matching Dockerfile. If your compose file has a one-line build clause, please don't adopt the five-line syntax without also updating your Dockerfile.

  2. Your existing instance of Node-RED is running. You can stop the container if you wish but it's not necessary to do that just to work through this tutorial.

Scenario 1: both containers in non-host mode

Start by opening your compose file in a text editor. Duplicate your existing Node-RED service definition, and then edit it to look like this:

  nodered-test:
    container_name: nodered-test
    build:
      context: ./services/nodered/.
      args:
        - DOCKERHUB_TAG=latest
        - EXTRA_PACKAGES=
    restart: unless-stopped
    environment:
      - TZ=${TZ:-Etc/UTC}
    x-network_mode: host
    ports:
      - "1881:1880"
    user: "0"
    volumes:
      - ./volumes/nodered-test/data:/data
      - ./volumes/nodered-test/ssh:/root/.ssh

The required edits are:

  1. Line 1: change the service name from nodered to nodered-test.
  2. Line 2: change the container name from nodered to nodered-test.
  3. Line 13: change the external (left hand side) port from 1880 to 1881 but leave the internal (right hand side) port unchanged.
  4. Lines 16 & 17: change the persistent storage folder in both left-hand-side paths from nodered to nodered-test.

Before activating the second container, please think about how you want the second container to behave on first launch. You have two options:

  1. Let the container start from a clean slate; or
  2. Clone your existing container's persistent store.

Cloning might sound attractive but please remember that any flows in the clone will be active as soon as the new container starts. The flows in your test container will subscribe to the same Mosquitto topics as your original container, will receive the same data, and will likely push that data into your InfluxDB databases. If that happens you will wind up with duplicate records in your databases. Whether duplicate records are a problem is something only you can know. If you are in any doubt, I recommend starting from a clean slate.

If you decide to make a clone, proceed like this:

$ cd ~/IOTstack/volumes
$ sudo cp -a nodered nodered-test

If you don't run that last command to make a clone, the new container will initialise its own persistent storage and start with a clean slate.
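If you did make the clone, a quick sanity check is to compare the two folders (cp -a copies recursively and preserves ownership, permissions and timestamps). Assuming the original container has not written to its persistent store in the meantime, this command should produce no output:

$ cd ~/IOTstack/volumes
$ sudo diff -rq nodered nodered-test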

Bring up the second container:

$ cd ~/IOTstack
$ docker-compose up -d nodered-test

If you decided to let the second container start with a clean slate, you should consider following the Node-RED setup steps in the IOTstack Wiki. Specifically:

  1. The Securing Node-RED process, substituting:

    ~/IOTstack/volumes/nodered-test/data/settings.js
    
  2. Optionally, the Setting a username and password for Node-RED process, substituting:

    $ docker exec nodered-test node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" PASSWORD

    Don't forget to replace PASSWORD with the password you wish to use.

At this point you have two instances of Node-RED running. Your original version is listening to external port 1880 while the test version is listening to external port 1881. Even though both instances started from the same image, they are distinct, have completely separate persistent stores, and can be controlled independently.
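A quick way to confirm that both containers are running, and to see their port mappings, is to filter the output of docker ps. The --filter name= pattern is a substring match, so it catches both nodered and nodered-test:

$ docker ps --filter name=nodered --format 'table {{.Names}}\t{{.Ports}}'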

In terms of network reachability, your two Node-RED containers behave like this:

  • for either Node-RED instance to reach any other non-host mode container running on the same Raspberry Pi, use «container»:«port» syntax, where the port is the internal port. Examples:

     mosquitto:1883
     influxdb:8086
    
  • although it is unusual for other non-host mode containers to need to reach a Node-RED container directly, if you have such a requirement then you use «container»:«port» syntax, where the port is the internal port. Examples:

     nodered:1880
     nodered-test:1880
    

    Containers are like small independent computers so the fact that both containers are listening to port 1880 doesn't create any ambiguity.

  • processes running outside container-space on the Raspberry Pi use localhost:«port» syntax to reach the Node-RED containers, where the port is the external port. Examples:

     $ curl -I localhost:1880
     $ curl -I localhost:1881
  • in all other cases, use the IP address or domain name of the Raspberry Pi plus the external port.

About port mapping

It's helpful to understand how Docker's port mapping works. It is implemented with Network Address Translation (NAT). Docker sets up iptables rules which masquerade each internal port behind the associated external port.

If you've ever set up a port-forwarding rule in your home router, it is exactly the same thing.

Each time the Raspberry Pi receives a packet for destination port 1881, NAT re-addresses the packet to the destination IP address of the nodered-test container (eg 172.30.0.5) with destination port 1880 (the internal port).

When the container replies, packets start with a source IP address of 172.30.0.5 (the container) and a source port of 1880 (the internal port). NAT masquerade rewrites the source IP address to be that of the Raspberry Pi (eg 192.168.1.100) and changes the source port to be 1881 (the external port).
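If you would like to see this for yourself, Docker's port-mapping rules live in the DOCKER chain of the NAT table. The exact output depends on your Docker version and the networks you have defined, but you should see one DNAT rule per published port:

$ sudo iptables -t nat -L DOCKER -n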

Scenario 2: test container in host mode

Change the service definition to look like this:

  nodered-test:
    container_name: nodered-test
    build:
      context: ./services/nodered/.
      args:
        - DOCKERHUB_TAG=latest
        - EXTRA_PACKAGES=
    restart: unless-stopped
    environment:
      - TZ=${TZ:-Etc/UTC}
      - PORT=1881
    network_mode: host
    x-ports:
      - "1881:1880"
    user: "0"
    volumes:
      - ./volumes/nodered-test/data:/data
      - ./volumes/nodered-test/ssh:/root/.ssh

The edits are:

  1. Insert new line 11: - PORT=1881.
  2. Line 12: remove the leading x- to activate host mode.
  3. Line 13: insert a leading x- to deactivate the ports clause.

The PORT environment variable tells the nodered-test container to listen on port 1881, rather than its default of 1880. If you didn't do that, you would get a port conflict with the nodered container instance.

Placing the nodered-test container in host mode means there is no NAT. The distinction between external and internal ports goes away. What we have previously been thinking of as the internal port is now an external port.

Bring up the test container again:

$ cd ~/IOTstack
$ docker-compose up -d nodered-test

The "up" command causes docker-compose to notice the changes made to the service definition. The old nodered-test container is stopped and removed, and a brand new nodered-test container is initialised with the new parameters.
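You can confirm that the host-mode container is listening directly on port 1881 by asking the operating system which processes own listening sockets (ss is part of the iproute2 package, which is usually pre-installed on Raspberry Pi OS):

$ sudo ss -tlnp | grep 1881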

In terms of network reachability, getting to/from the existing nodered container is unchanged but the nodered-test container's behaviour is different:

  • for nodered-test to reach any non-host mode container running on the same Raspberry Pi, use localhost:«port» syntax, where the port is the external port. Examples:

     localhost:1883
     localhost:8086
    
  • if you need a non-host mode container to reach nodered-test directly, you can use x.x.x.x:«port» syntax, where x.x.x.x is the IP address (or fully-qualified domain name) of the Raspberry Pi, and the port is the external port. Example:

     192.168.1.100:1881
    

    An alternative approach is to add an extra_hosts clause to the service definition of each container that needs to reach the nodered-test container:

         extra_hosts:
           - "nodered-test:host-gateway"

    The host-gateway name (right hand side) is dynamically associated with the IP address of the logical router between Docker's internal bridged network and the Raspberry Pi's own internal network. Addressing a packet to host-gateway has the effect of saying "forward this out of container-space".

    This form is more robust than using the IP address of the Raspberry Pi. Example:

     nodered-test:1881
    
  • processes running outside container-space on the Raspberry Pi use localhost:1881 to reach the nodered-test container:

     $ curl -I localhost:1881
  • in all other cases, use the IP address or domain name of the Raspberry Pi plus the external port.
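If you adopt the extra_hosts approach, note that docker-compose implements it by writing the alias into the container's /etc/hosts file, so you can confirm it took effect from inside any container that carries the clause. For example, supposing you added the clause to your Mosquitto service definition:

$ docker exec mosquitto cat /etc/hosts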

Reasons to choose host mode

There's really only one good reason to run any container in host mode:

  • If the container needs to "see" or otherwise participate in non-unicast traffic (broadcast or multicast).

Good examples of containers that need to run in host mode include:

  • DHCP service. As clients boot up, they issue DHCP requests as broadcast packets.
  • Home Assistant or HomeBridge services. Many client devices advertise their presence using broadcast or multicast.

If an add-on node you require was designed assuming a non-container environment, it may implicitly assume it can see non-unicast traffic received by the host on which Node-RED is installed. Once Node-RED runs in a container, there are only two ways for a flow to "see" non-unicast traffic:

  1. The Node-RED container runs in host mode; or
  2. A proxy process listens to the traffic on Node-RED's behalf and forwards the information (eg via MQTT).

Reasons to avoid host mode

When running in host mode, the nodered-test container must use localhost:«port» syntax to reach its peer non-host mode containers like Mosquitto and InfluxDB.

This implies that every packet moving between nodered-test and those peer containers has to pass through NAT. In each direction.

Each packet also needs to be routed (Layer 3). In each direction.

Conversely, when all containers run in non-host mode, there is no NAT and no routing. Forwarding between containers occurs via bridging (Layer 2). Docker implements the internal bridged network like a switch (ie all unicast traffic is point-to-point).

Each NAT traversal and routing hop incurs a performance penalty. These overheads might not matter much while your traffic volumes are low, but you should keep them in mind as your needs grow.
