This chapter covers how the nodes communicate with each other in the underlying P2P network.
In jormungandr each node maintains a list of all the known nodes (neighbours) of the overlay, reachable or unreachable, from which it selects:
- a random recipient to gossip to, and
- a fixed-size (default=10) set of nodes from the overlay to include in the gossip.

Therefore a gossip is just a fixed-size list of node details (i.e. id/public_id/node_id, address (IP:PORT), subscriptions, etc.) that is sent to the recipient.
The node sends the gossip to the recipient, which sends back a similar list. The node then merges that list into its own lists of available (IP:PORT present and not quarantined), unreachable (no IP present) and quarantined nodes, as sketched below.
Note: Unreachable nodes (wallets or nodes behind NAT) can still gossip by establishing initial connections to some known peers (`trusted_peers`). The recipient will know that they are unreachable because their address is empty, and therefore they will appear in the recipient's `unreachable_nodes` list.
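The following is a minimal sketch of the merge rule described above, not jormungandr's actual code: entries with an address go to the available list (unless quarantined), entries without an address go to the unreachable list. The names `NodeEntry`, `Topology` and `merge_gossip` are invented for this example.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct NodeEntry {
    id: String,                 // node_id / public_id
    address: Option<String>,    // IP:PORT; None => unreachable (e.g. wallet behind NAT)
    subscriptions: Vec<String>, // topics the node claims to be subscribed to
}

#[derive(Default, Debug)]
struct Topology {
    available: HashMap<String, NodeEntry>,   // address present and not quarantined
    unreachable: HashMap<String, NodeEntry>, // no address present
    quarantined: HashMap<String, NodeEntry>, // previously misbehaving/unresponsive
}

impl Topology {
    /// Merge the entries received in a gossip into the local lists.
    fn merge_gossip(&mut self, gossip: Vec<NodeEntry>) {
        for entry in gossip {
            // A quarantined node stays quarantined until the policy lifts it.
            if self.quarantined.contains_key(&entry.id) {
                continue;
            }
            if entry.address.is_some() {
                self.unreachable.remove(&entry.id);
                self.available.insert(entry.id.clone(), entry);
            } else {
                // Empty address => the sender is unreachable (wallet / NAT).
                self.unreachable.insert(entry.id.clone(), entry);
            }
        }
    }
}

fn main() {
    let mut topo = Topology::default();
    topo.merge_gossip(vec![
        NodeEntry { id: "aa..01".into(), address: Some("10.0.0.1:3000".into()), subscriptions: vec!["blocks".into()] },
        NodeEntry { id: "bb..02".into(), address: None, subscriptions: vec!["messages".into()] },
    ]);
    println!("available: {}, unreachable: {}", topo.available.len(), topo.unreachable.len());
}
```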
Peers communicate with each other through TCP-based gRPC links in a client-server fashion, where the client initiates the connection request and the server either accepts or denies it.
These connections are bidirectional gRPC streams, so ideally a node that is already connected to another node should not also have an incoming TCP connection from that same node.
Jormungandr maintains these inbound/outbound connections (called Peers, default=256) completely separately from the overlay/topology view, but uses that view to make and allow connections to and from those Peers/nodes of the overlay.
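As a rough illustration (not jormungandr's real connection manager): a bounded pool of peer links kept apart from the topology view, which only admits a connection if the node is in the current view, is not already linked (the streams are bidirectional), and the cap is not exceeded. `PeerPool`, `Direction` and treating 256 as a hard cap are assumptions made for the sketch.

```rust
use std::collections::{HashMap, HashSet};

#[derive(Clone, Copy, Debug, PartialEq)]
enum Direction { Inbound, Outbound }

struct PeerPool {
    max_connections: usize,            // default=256 per the text above
    peers: HashMap<String, Direction>, // node_id -> direction of the existing link
}

impl PeerPool {
    fn new(max_connections: usize) -> Self {
        Self { max_connections, peers: HashMap::new() }
    }

    /// Decide whether to admit a new connection for `node_id`.
    fn try_admit(&mut self, node_id: &str, dir: Direction, view: &HashSet<String>) -> bool {
        // The pool is separate from the overlay, but gated by the current view.
        if !view.contains(node_id) {
            return false;
        }
        // A second link in the opposite direction to the same node is redundant,
        // since the existing gRPC stream is already bidirectional.
        if self.peers.contains_key(node_id) {
            return false;
        }
        if self.peers.len() >= self.max_connections {
            return false;
        }
        self.peers.insert(node_id.to_string(), dir);
        true
    }
}

fn main() {
    let view: HashSet<String> = ["aa..01".to_string(), "bb..02".to_string()].into();
    let mut pool = PeerPool::new(256);
    assert!(pool.try_admit("aa..01", Direction::Outbound, &view));
    assert!(!pool.try_admit("aa..01", Direction::Inbound, &view)); // already linked
    assert!(!pool.try_admit("cc..03", Direction::Inbound, &view)); // not in the view
}
```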
Jormungandr has an aggregated view that contains a fixed number of nodes (64 by default) selected from all the available nodes, built from the four different layers' views: Rings (4), Vicinity (20), Cyclone (20) and an optional, jormungandr-specific layer, the Random Direct Connections (20).
This aggregated view can be filtered by topics, but in general every node in jormungandr's network is treated as if it were (probably) subscribed to all topics.
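A possible way to picture the aggregation, shown as a sketch only (the function and type names are invented here, and the real selection logic lives in the poldercast layers): collect the per-layer views, deduplicate nodes that appear in several layers, and cap the result at 64.

```rust
use std::collections::HashSet;

/// Build an aggregated view of at most `max_view` node ids from the layer views.
fn aggregate_view(layers: &[Vec<String>], max_view: usize) -> Vec<String> {
    let mut seen = HashSet::new();
    let mut view = Vec::new();
    for layer in layers {
        for id in layer {
            if view.len() >= max_view {
                return view;
            }
            // A node present in several layers is only counted once.
            if seen.insert(id.clone()) {
                view.push(id.clone());
            }
        }
    }
    view
}

fn main() {
    // Layer sizes follow the defaults quoted above: 4 + 20 + 20 + 20.
    let rings: Vec<String> = vec!["a".into(), "b".into(), "c".into(), "d".into()];
    let vicinity: Vec<String> = (0..20).map(|i| format!("v{}", i)).collect();
    let cyclone: Vec<String> = (0..20).map(|i| format!("c{}", i)).collect();
    let random_direct: Vec<String> = (0..20).map(|i| format!("r{}", i)).collect();

    let view = aggregate_view(&[rings, vicinity, cyclone, random_direct], 64);
    println!("aggregated view size: {}", view.len());
}
```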
The different layers' views:
- `Rings` view: the 4 closest neighbours of your node, selected from all the available nodes (direct neighbour links).
- `Vicinity` view: 20 random nearby nodes selected from all the available nodes. The node selection is based on a proximity function (the more shared topics, the higher the proximity); a small sketch of such a function follows this list.
- `Cyclone` view: 20 random nodes selected from all the available nodes. For gossips, it does the same with 10 random nodes.
- `Random Direct Connections` view (default is `max_unreachable_nodes_to_connect_per_event: 20`): it can be disabled by setting `max_unreachable_nodes_to_connect_per_event: 0`.
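The sketch below shows the kind of proximity ranking the Vicinity layer is described as doing: score candidates by the number of shared topic subscriptions and keep the closest ones. This is an illustration of the idea, not poldercast's implementation; `proximity` and `select_vicinity` are names invented for the example.

```rust
use std::collections::HashSet;

struct Node {
    id: String,
    topics: HashSet<String>,
}

/// More shared topics => higher proximity.
fn proximity(a: &Node, b: &Node) -> usize {
    a.topics.intersection(&b.topics).count()
}

/// Keep the `view_size` candidates closest to `me`.
fn select_vicinity<'a>(me: &Node, candidates: &'a [Node], view_size: usize) -> Vec<&'a Node> {
    let mut ranked: Vec<&Node> = candidates.iter().collect();
    // Sort by descending proximity to our own subscriptions.
    ranked.sort_by_key(|n| std::cmp::Reverse(proximity(me, n)));
    ranked.truncate(view_size);
    ranked
}

fn main() {
    let me = Node {
        id: "self".into(),
        topics: ["blocks", "messages"].iter().map(|s| s.to_string()).collect(),
    };
    let candidates = vec![
        Node { id: "n1".into(), topics: ["blocks"].iter().map(|s| s.to_string()).collect() },
        Node { id: "n2".into(), topics: ["blocks", "messages"].iter().map(|s| s.to_string()).collect() },
    ];
    for n in select_vicinity(&me, &candidates, 20) {
        println!("selected {}", n.id);
    }
}
```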
Gossips are sent out every `gossip_interval` to the current view's peers, selected from all nodes in the topology/overlay.
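In other words, the gossip driver boils down to a periodic loop like the minimal sketch below; the 10-second interval and the placeholder `current_view`/`send_gossip_to` functions are assumptions for the example, not jormungandr's actual code.

```rust
use std::thread::sleep;
use std::time::Duration;

fn current_view() -> Vec<String> {
    // Placeholder: in reality this comes from the aggregated topology view.
    vec!["aa..01".into(), "bb..02".into()]
}

fn send_gossip_to(peer: &str) {
    println!("gossiping to {}", peer);
}

fn main() {
    let gossip_interval = Duration::from_secs(10); // assumed value; configurable in reality
    for _ in 0..2 { // a real node would loop forever
        for peer in current_view() {
            send_gossip_to(&peer);
        }
        sleep(gossip_interval);
    }
}
```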
When a node creates a block, it propagates it to a subset (view) of the at most `max_connections` connected nodes that are subscribed to the `Blocks` topic (in jormungandr, ideally all the nodes of the overlay are subscribed to all topics) by:
- getting a view of the subscribed nodes from the overlay (precisely, from the `Blocks` topic ring),
- sending the block to those nodes the node has (inbound and/or outbound) connection(s) established with,
- or setting a pending block announcement for a peer when the node is still connecting to it (see the sketch after this list).
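A sketch of those branches, not jormungandr's propagation code (all type and field names here are invented for the example): take the Blocks-topic view, announce over existing connections, and queue a pending announcement for peers that are still connecting.

```rust
use std::collections::{HashMap, HashSet};

#[derive(Debug)]
enum PeerState { Connected, Connecting }

#[derive(Default)]
struct Propagation {
    peers: HashMap<String, PeerState>,      // node_id -> connection state
    pending_block: HashMap<String, String>, // node_id -> block id queued for later
}

impl Propagation {
    fn announce_block(&mut self, block_id: &str, blocks_view: &HashSet<String>) {
        for node_id in blocks_view {
            match self.peers.get(node_id) {
                Some(PeerState::Connected) => {
                    println!("sending block {} to {}", block_id, node_id);
                }
                Some(PeerState::Connecting) => {
                    // Connection not ready yet: remember the announcement.
                    self.pending_block.insert(node_id.clone(), block_id.to_string());
                }
                None => {
                    // Not connected at all: skipped here (a real node could also
                    // decide to initiate a connection, depending on policy).
                }
            }
        }
    }
}

fn main() {
    let mut p = Propagation::default();
    p.peers.insert("aa..01".into(), PeerState::Connected);
    p.peers.insert("bb..02".into(), PeerState::Connecting);
    let view: HashSet<String> =
        ["aa..01".to_string(), "bb..02".to_string(), "cc..03".to_string()].into();
    p.announce_block("block-123", &view);
    println!("pending announcements: {}", p.pending_block.len());
}
```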
In the current implementation of jormungandr (<= v0.8.6), by default, every time a node is started or restarted it generates a new random 24-byte node identification number (`node_id`), which identifies the node in the overlay.
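For illustration only, generating such a random 24-byte id could look like the snippet below (using the `rand` crate as an assumed dependency; this is not how jormungandr itself derives its id). Hex-encoding 24 bytes gives the 48-character ids visible in the listing further down.

```rust
use rand::RngCore;

fn random_node_id() -> String {
    let mut bytes = [0u8; 24];
    rand::thread_rng().fill_bytes(&mut bytes);
    // Hex-encode into a 48-character string.
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    // A fresh id is produced on every start, which is exactly the problem
    // discussed in the next paragraph.
    println!("node_id: {}", random_node_id());
}
```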
This causes an unwelcome situation: when a node restarts, a new `node_id` will appear in the overlay after some gossip cycles, while the old one is left behind. As far as I could decipher from the code, jormungandr did not implement poldercast's ageing mechanism to remove these dead entries from the overlay, which causes the overlay (the list of all nodes in the poldercast network) to grow on every node.
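Purely as a hypothetical sketch of what such an ageing/pruning pass could look like (this is not poldercast's or jormungandr's implementation, and the one-hour expiry is an arbitrary value for the example): drop entries that have not been seen in any gossip for longer than some maximum age.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct KnownNode {
    last_gossiped: Instant, // last time this entry appeared in a received gossip
}

/// Remove entries that nobody has gossiped about for `max_age`.
fn prune_stale(overlay: &mut HashMap<String, KnownNode>, max_age: Duration) {
    let now = Instant::now();
    overlay.retain(|_, node| now.duration_since(node.last_gossiped) <= max_age);
}

fn main() {
    let mut overlay = HashMap::new();
    overlay.insert("aa..01".to_string(), KnownNode { last_gossiped: Instant::now() });
    prune_stale(&mut overlay, Duration::from_secs(3600)); // 1h expiry, arbitrary
    println!("entries left: {}", overlay.len());
}
```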
Currently, it contains ~75649 node entries, while the topology holds only around ~2000 live nodes, which means that ~73K are stale/dead entries, and it takes ~14 seconds to query the node list, which is ~60MB in size.
time ( API_PORT=3201; jcr network p2p available | (tee Result; ls -sh Result >&2 ; grep '"id":' Result | wc -l >&2 ) | grep 000000; )
"id": "000000000000000000000000000000000000000000000010"
"id": "000000000000000000000000000000000000001313139003"
"id": "000000000000000000000000000000000000001313139004"
"id": "000000000000000000000000000000000000001313139011"
"id": "0000000000692e79fd4fae75de0fa6c9cd7a5a0000000000"
"id": "111100000000000000000000000000000000000000001111"
"id": "222200000000000000000000000000000000000000002222"
"id": "301300000000000000000000000000000000000000003013"
"id": "333300000000000000000000000000000000000000003333"
"id": "444400000000000000000000000000000000000000004444"
"id": "666600000000000000000000000000000000000000006666"
"id": "777700000000000000000000000000000000000000007777"
"id": "ada4cafebabe000000000000000000000000000000000000"
"id": "ada4cafebabe000000000000000000000000000000000002"
"id": "ada4cafebabe000000000000000000000000000000000088"
59M Result
76668
real 0m8.963s
user 0m4.057s
sys 0m1.185s