Modular DHT

Preface

One of the problems with the current libp2p DHT design is that the DHT is treated as a monolithic component implementing roughly the Kademlia protocol, while trying to stay as decoupled from the rest of the system as possible. This means that, for example, the Kademlia routing table is completely separate from the peer store. Instead, we should take the ideas from Kademlia (and subsequent research) and integrate them as appropriate throughout libp2p. That doesn't mean the core of the DHT can't remain a modular component that can be omitted; it just means that useful functionality consumed by the DHT should be designed into the rest of the system.

Most of the nitty gritty details aren't yet defined; instead, this proposal is intended as a starting point for discussions. It assumes that the rest of libp2p exposes a quite different set of interfaces than is currently the case. Although the hope is that libp2p would eventually expose these different interfaces, it is feasible to build a shim underneath this design to allow this DHT to run on top of the existing libp2p components in the meantime.

In addition, the DHT proposed here exposes a different interface to its consumers. A second shim on top could be used to make this DHT a drop-in replacement for the current one, or modules that depend on the DHT could be modified.

Proposed changes to libp2p environment

A redesigned libp2p would have a few notable changes:

  • A message-oriented host interface, instead of or in addition to the current stream-oriented interface.
  • Resource allocation and utilization as a core concept. Specifically, the existing connection manager, dialer, peer store, and routing table would be redesigned.

(HIGHLY SPECULATIVE) In order to manage resources, each user application and subsystem that supports the network would have an "account". As any resource is exhausted, the resources used by the lowest priority account would be involuntarily freed to maintain quality of service for other accounts. For example, a minimal application utilizing libp2p might have the following accounts, from highest to lowest priority:

  • Main application
  • DHT client (routing table for directing the application's queries)

A more sophisticated node would also have accounts for:

  • DHT server (for incoming requests)
  • Peer relay

The resource management subsystem would manage resources like the following:

  • Memory (for message buffering, info about peers like addresses/encryption session keys)
  • Bandwidth
  • File descriptors (where applicable)
  • NAT/routing table entries

Each request to hold a resource for extended periods of time would return a lease object, with the following (public) methods:

  • End() -> () Called by the resource requester to indicate that the resource is no longer needed.
  • SetTerminationHandler(handler) -> (handle) Indicates that handler should be called when the resource manager decides to involuntarily revoke a resource.
  • ClearTerminationHandler(handle) -> () Removes a termination handler.

In addition, leases can also be used as a convenient way to remove handlers. TODO: explain this better. Maybe that is too confusing and there should be a separate interface for this.
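
A minimal Go sketch of the lease object described above, assuming a HandlerHandle type (an illustration, not a settled interface):

package resource

// HandlerHandle identifies a registered termination handler; the concrete
// type is an assumption for illustration.
type HandlerHandle int

// Lease represents a resource held for an extended period.
type Lease interface {
	// End is called by the resource requester to indicate that the
	// resource is no longer needed.
	End()

	// SetTerminationHandler registers handler to be invoked when the
	// resource manager involuntarily revokes the resource. The returned
	// handle can later be passed to ClearTerminationHandler.
	SetTerminationHandler(handler func()) HandlerHandle

	// ClearTerminationHandler removes a previously registered handler.
	ClearTerminationHandler(handle HandlerHandle)
}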

Host/peer interface:

Note that this is actually rather different from any existing interface. It is roughly a hybrid of PeerInfo, Host, and Switch.

  • PeerId() -> (peerId) Gets the peer ID

  • SendMessage(account, protocol, msg) -> () Sends a binary message msg to the handler for protocol on the remote host. No guarantees are provided on how many times the message will be received by the remote peer (zero or more are OK; use best effort) or on the relative ordering of messages. Encryption and authentication are assumed to be handled automatically.

  • SendMessageAndAwaitResponse(account, protocol, msg, handler) -> (lease) Same as above, but calls handler(response) when a response comes in. When no more responses are desired, end the lease. To implement this, each message sent with this method would have a RequestId header set with a large random value. Responses would come in on a special response protocol, echoing the RequestId header. This would cause the local peer to look for an active handler lease to dispatch the message to.

  • SetMessageHandler(account, protocol, handler) -> (lease) Causes incoming messages for protocol protocol to be sent to handler by calling handler(protocol, msg). Returns a lease that can be ended to remove the handler.

  • PreservePeerInfo(account) -> (lease) Requests that this peer's addresses, session keys, etc. should be kept. Peers that are used for DHT routing would have this called on them. This is a partial replacement for the TTL system in the existing peer store.

  • EnsureConnectivity(account) -> (lease) Requests that connectivity be established to a peer. This is the equivalent of dialing a peer in the current interface. As long as the lease exists it can be used to detect when the peer is no longer reachable.

  • Methods for managing addresses. TBD.

  • Methods for managing supported protocols. TBD.

  • Methods to listen for state changes. TBD.
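
A rough Go sketch of this per-peer interface, transcribing the methods listed above; Account, ProtocolID, and the handler types are assumptions, and the TBD methods are omitted:

package peer

// Placeholder types; the concrete representations are assumptions.
type (
	Account    string
	ProtocolID string
	PeerID     string
)

// Lease is the handle sketched in the previous section (only End is shown).
type Lease interface{ End() }

// MessageHandler receives an incoming message for a protocol.
type MessageHandler func(protocol ProtocolID, msg []byte)

// ResponseHandler receives responses correlated by the RequestId header.
type ResponseHandler func(response []byte)

// Peer is a rough hybrid of PeerInfo, Host, and Switch, as described above.
type Peer interface {
	// PeerId returns this peer's ID.
	PeerId() PeerID

	// SendMessage sends msg to the remote handler for protocol, best effort,
	// with no guarantees on delivery count or ordering.
	SendMessage(account Account, protocol ProtocolID, msg []byte)

	// SendMessageAndAwaitResponse is like SendMessage, but dispatches any
	// responses (matched via the RequestId header) to handler until the
	// returned lease is ended.
	SendMessageAndAwaitResponse(account Account, protocol ProtocolID, msg []byte, handler ResponseHandler) Lease

	// SetMessageHandler routes incoming messages for protocol to handler
	// until the returned lease is ended.
	SetMessageHandler(account Account, protocol ProtocolID, handler MessageHandler) Lease

	// PreservePeerInfo asks that this peer's addresses, session keys, etc.
	// be retained while the lease is held.
	PreservePeerInfo(account Account) Lease

	// EnsureConnectivity asks that connectivity to this peer be established
	// and maintained; the lease can be used to detect loss of reachability.
	EnsureConnectivity(account Account) Lease
}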

Peer store/routing table/swarm interface:

This is also different from any existing interface.

  • SetNewPeerHandler(handler) -> (lease) Runs handler(peer, lease) when a new peer is discovered (lease in the callback is equivalent to that returned by PreservePeerInfo). End lease to remove the handler.

  • SetMessageHandler(account, protocol, handler) -> (lease) Runs handler(peer, protocol, msg) on an incoming message from any peer. End the lease to remove the handler.

  • QueryBest(sortFn, count) -> ([]Peer) PLACEHOLDER Finds the count peers where sortFn(peer) returns the best (highest) values. This is a very general kind of query function that can handle many things, including Kademlia-style routing table lookups. We will definitely want more specific optimized query functions as well, since this will run in linear time (in the number of known peers). The point is that this function could serve as a brute-force replacement for the entire Kademlia routing table and many other search operations. This is really a placeholder for a potential "peer query" interface.
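
To make the placeholder concrete, here is a brute-force Go sketch of QueryBest together with a sortFn that reproduces a Kademlia-style XOR-distance lookup. The Peer struct and the assumption that all IDs share a length are purely illustrative.

package swarm

import (
	"math/big"
	"sort"
)

// Peer is a placeholder for the peer object described above.
type Peer struct {
	ID []byte // raw peer ID bytes
}

// SortFn scores a peer; QueryBest returns the peers with the highest scores.
type SortFn func(p Peer) *big.Int

// QueryBest is the brute-force, linear-time query described above: it scores
// every known peer and returns the count best ones.
func QueryBest(known []Peer, sortFn SortFn, count int) []Peer {
	peers := append([]Peer(nil), known...)
	sort.Slice(peers, func(i, j int) bool {
		return sortFn(peers[i]).Cmp(sortFn(peers[j])) > 0 // higher is better
	})
	if count > len(peers) {
		count = len(peers)
	}
	return peers[:count]
}

// xorCloseness is an example sortFn: peers XOR-closer to target score higher,
// which reproduces a Kademlia-style routing table lookup.
func xorCloseness(target []byte) SortFn {
	return func(p Peer) *big.Int {
		dist := make([]byte, len(target))
		for i := range target {
			dist[i] = target[i] ^ p.ID[i] // assumes IDs have the same length
		}
		// Negate so that a smaller XOR distance yields a higher score.
		return new(big.Int).Neg(new(big.Int).SetBytes(dist))
	}
}

For example, QueryBest(known, xorCloseness(targetID), 20) approximates a k-bucket lookup with k = 20; optimized, index-backed query functions would replace the linear scan.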

DHT structure

This section describes both the interfaces, and rough implementation sketches, for the proposed DHT. The core of the DHT provides peer routing and nothing else. All other functionality is optional. In addition, server and client components are separate.

PeerRoutingClient: finds addresses of peers

  • FindPeer(account, PeerId) -> (Peer, lease) Finds the peer with id PeerId and returns the peer object and a lease on the addresses (as PreservePeerInfo would return).

PeerRoutingServer: responds to incoming peer routing requests

(no public methods)

Internally, peer routing would use a specific Peer Routing Protocol, which could look something like this:

struct PeerRoutingQuery {
	key bytes // PeerID
}

Responses would look like (not including the RequestId header, which is stripped before the message reaches the DHT):

struct {
	closerPeers []PeerInfo
}
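
As an illustration of how FindPeer could drive this protocol, here is a simplified, sequential Go sketch: it queries known peers with PeerRoutingQuery, merges the closerPeers from each response, and stops when the target appears. A real implementation would order the frontier by XOR distance and bound concurrency; the helper names here are assumptions.

package dht

import (
	"bytes"
	"context"
)

// Minimal stand-ins for the interfaces sketched earlier.
type PeerID []byte

type PeerInfo struct {
	ID    PeerID
	Addrs []string
}

// queryFn abstracts one round trip of the Peer Routing Protocol: send a
// PeerRoutingQuery{key} to p and return the closerPeers from the response.
type queryFn func(ctx context.Context, p PeerInfo, key PeerID) ([]PeerInfo, error)

// findPeer walks the network from a set of seed peers toward the target.
func findPeer(ctx context.Context, seeds []PeerInfo, target PeerID, query queryFn) (PeerInfo, bool) {
	seen := map[string]bool{}
	frontier := append([]PeerInfo(nil), seeds...)

	for len(frontier) > 0 {
		next := frontier[0]
		frontier = frontier[1:]
		if seen[string(next.ID)] {
			continue
		}
		seen[string(next.ID)] = true

		closer, err := query(ctx, next, target)
		if err != nil {
			continue // best effort: skip unreachable peers
		}
		for _, p := range closer {
			if bytes.Equal(p.ID, target) {
				return p, true // found the target's addresses
			}
			frontier = append(frontier, p)
		}
	}
	return PeerInfo{}, false
}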

ContentRoutingClient: finds content

  • FindProviders(account, CID) -> (chan Peer, lease) Finds peers providing CID until the lease is ended or the channel is closed (these are equivalent).
  • AddProvider(account, CID) -> (lease) Advertises the local peer as a provider for CID. Republishes automatically until lease ends.
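
A hypothetical consumer of this interface might look like the following Go sketch; the type stubs and tryFetch callback are assumptions, but the channel-plus-lease usage matches the signatures above.

package example

// Stubs matching the interface above; all names are assumptions.
type (
	Account string
	CID     string
	Peer    struct{ ID string }
)

type Lease interface{ End() }

type ContentRoutingClient interface {
	FindProviders(account Account, cid CID) (<-chan Peer, Lease)
	AddProvider(account Account, cid CID) Lease
}

// fetchFromAnyProvider reads providers off the channel until one works,
// then ends the lease to stop the query.
func fetchFromAnyProvider(c ContentRoutingClient, account Account, cid CID, tryFetch func(Peer, CID) bool) {
	providers, lease := c.FindProviders(account, cid)
	defer lease.End() // equivalent to the channel being closed

	for p := range providers {
		if tryFetch(p, cid) {
			return // got the content; stop searching
		}
	}
}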

ContentRoutingServer: responds to incoming content routing requests

(no public methods)

This would use a Content Routing protocol:

struct ContentRoutingQuery {
	key bytes // CID
	level int // Coral-style network level
}

Responses would look like (not including the RequestId header, which is stripped before the message reaches the DHT):

struct {
	providerPeers []PeerInfo
	closerPeers []PeerInfo
}

Similarly, AddProvider would also have a message in the Content Routing protocol:

struct ContentRoutingAdd {
	key bytes // CID
	// PeerInfo "value" is implicit and can be retrieved from the peer store
}

This doesn't strictly need a reply, but an acknowledgement might be worthwhile. When received, the provider record is stored in the record store with an expiration time.
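
A sketch of the AddProvider republish loop implied above: keep re-sending a ContentRoutingAdd until the lease ends. The interval and send callback are assumptions; in practice the interval would be tied to the expiration time applied by the server's record store.

package dht

import "time"

// republishProvider re-advertises the CID until done (the lease's
// termination signal) is closed.
func republishProvider(cid []byte, send func(cid []byte), interval time.Duration, done <-chan struct{}) {
	send(cid) // publish immediately

	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			send(cid) // refresh before the server-side record expires
		case <-done:
			return // lease ended; stop republishing
		}
	}
}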

TODO: the rest of the Coral maintenance stuff

IPNS (and pubkey) client

  • Lookup(account, key) -> (chan hash, lease) Looks up key in IPNS, and returns a stream of records.
  • LookupBest(account, key) -> (hash) Looks up key in IPNS, and returns the "best" record.
  • Store(account, keypair, value) -> (lease) Stores value (and the public key) for the keypair given by keypair. Republishes automatically until lease ends.

IPNS server

(no public methods)

This would use an IPNS routing protocol:

struct IPNSQuery {
	// Empty body, since everything is implicit; the search key is the source peer of the message
}

Which would get a response like:

struct {
	hash bytes // Value of IPNS record
	signature sig // signature of hash, made with the original node's private key (verifiable with its public key)
	closerPeers []PeerInfo
}

Storing an IPNS record would look like this:

struct IPNSStore {
	hash bytes // Value to store
	signature sig // signature of hash, made with the source node's private key
}

When a peer receives this message, it should look up the sender's public key in the host/peer store, verify that the signature over the value checks out against that key, and only then store the record in the record store.
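
A sketch of that server-side check, assuming (purely for illustration) ed25519 keys and a record store callback:

package ipns

import "crypto/ed25519"

// IPNSStore mirrors the message above; the field types are assumptions.
type IPNSStore struct {
	Hash      []byte // value to store
	Signature []byte // made with the source node's private key
}

// verifyAndStore fetches nothing itself: the caller supplies the sender's
// public key from the peer store. The record is stored only if the
// signature over the hash verifies against that key.
func verifyAndStore(msg IPNSStore, senderPubKey ed25519.PublicKey, store func(hash []byte)) bool {
	if !ed25519.Verify(senderPubKey, msg.Hash, msg.Signature) {
		return false // reject records that do not verify against the sender's key
	}
	store(msg.Hash) // record store is expected to apply its own expiration
	return true
}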

Record store

This is the one DHT component that doesn't need significant change.

Managing the routing table

In traditional Kademlia, the existence of a peer and its willingness to accept and respond to routing messages is communicated implicitly simply by sending any request, e.g. FIND_NODE or GET_PROVIDERS. This is rather limiting, since it doesn't provide a clean way for the sender to indicate whether it wants to be a server, or whether any additional DHT protocols are supported. Instead, we should have a general solution that allows a peer to indicate what protocols it supports, which should be handled by multistream 2.0 (or something similar).

The DHT would use SetNewPeerHandler (above) to listen for new peers and the protocols they support. Once a peer is known, one or more calls to QueryBest (or more likely a more efficient variant) would be used to find out whether we know enough peers within the appropriate XOR distance range. If so (i.e. the bucket is full), we would end the lease on that peer's address info. Otherwise, we would keep the lease around.

Essentially, whenever a distance bucket contains K or more known peers for a given DHT protocol, we would end all leases on those peers. If peers start getting removed (or otherwise become unreachable) and the number of known peers in the bucket drops to K, we would take out leases on ALL the remaining connections to prevent them from being removed.
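
A Go sketch of this lease management, assuming K = 20 and hypothetical bucketIndex/inBucket helpers (inBucket could be implemented with QueryBest or a more efficient variant):

package dht

import "math/bits"

const K = 20

type Lease interface{ End() }

type routingTable struct {
	self     []byte
	held     map[int][]Lease      // leases we keep, grouped by bucket index
	inBucket func(bucket int) int // counts known peers in a bucket
}

// onNewPeer is the SetNewPeerHandler callback: keep the PreservePeerInfo
// lease only while the peer's bucket is not yet full.
func (rt *routingTable) onNewPeer(peerID []byte, lease Lease) {
	b := bucketIndex(rt.self, peerID)
	if rt.inBucket(b) >= K {
		lease.End() // bucket already full; let the peer store forget this peer
		return
	}
	rt.held[b] = append(rt.held[b], lease) // keep this peer's info around
}

// bucketIndex returns the length of the shared bit prefix between the two
// IDs, i.e. the Kademlia bucket the peer falls into relative to self.
func bucketIndex(self, other []byte) int {
	for i := range self {
		if x := self[i] ^ other[i]; x != 0 {
			return i*8 + bits.LeadingZeros8(x)
		}
	}
	return len(self) * 8 // identical IDs
}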

Sybil resistance challenge protocol

Unfortunately, this design is still subject to the same attack vectors as vanilla Kademlia. To improve on this, any peer A can require another peer B to do a proof of work before A will add B to its routing table. Instead of the traditional Kademlia implicit routing advertisement, once peer A is aware that peer B exists and speaks the relevant routing protocol, peer A has the option of issuing a challenge to peer B.

A client-only peer should never receive such a challenge, and can ignore it if it does. So only peers behaving as DHT servers need to be willing to do work. The challenge and response would look like this (for peer routing):

struct PeerRoutingChallenge {
	challenge // TODO: what does this look like. Must be unique for each message
}

And the response is simply

struct PeerRoutingResponse {
	challenge
	solution
}

This extension could be implemented independently in all modules (peer routing, content routing, etc). The peer store records would indicate if a given peer has responded to a challenge, and only peers with this flag set would be sent out in closerPeers fields or count toward the K needed peers in a Kademlia bucket.
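
Since the TODO above leaves the challenge format open, the following is purely illustrative: a hashcash-style puzzle in which the solution is a nonce making SHA-256(challenge || nonce) start with a configurable number of zero bits. Any other proof-of-work scheme would slot into the challenge/response messages the same way.

package challenge

import (
	"crypto/sha256"
	"encoding/binary"
	"math/bits"
)

// leadingZeroBits counts the leading zero bits of a SHA-256 digest.
func leadingZeroBits(h [32]byte) int {
	n := 0
	for _, b := range h {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

// Solve is what peer B would run when challenged: find a nonce whose hash
// meets the difficulty target.
func Solve(challenge []byte, difficulty int) uint64 {
	buf := make([]byte, len(challenge)+8)
	copy(buf, challenge)
	for nonce := uint64(0); ; nonce++ {
		binary.BigEndian.PutUint64(buf[len(challenge):], nonce)
		if leadingZeroBits(sha256.Sum256(buf)) >= difficulty {
			return nonce
		}
	}
}

// Verify is what peer A would run on the response before marking the peer
// as challenge-verified in its peer store record.
func Verify(challenge []byte, nonce uint64, difficulty int) bool {
	buf := make([]byte, len(challenge)+8)
	copy(buf, challenge)
	binary.BigEndian.PutUint64(buf[len(challenge):], nonce)
	return leadingZeroBits(sha256.Sum256(buf)) >= difficulty
}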
