@phyro
Last active July 27, 2020 17:43
Encrypted Objective Dandelion

There has been some nice research done on improving aggregation before entering the 'fluff' phase in Dandelion. The main idea behind Objective Dandelion is that nodes converge towards a common point, sending all transactions towards what is called a fluff node. The fluff node then aggregates all the received transactions and only then broadcasts them to the network. There are many possible ways to determine the path to the fluff node, but the ideas discussed so far share a common approach: evaluate a certain max function among the connected peers and send the transaction to the max peer. If no peer has a better max score than your own, then your node is a fluff node. I'm going to skip the details and go into what I propose here.
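To make the routing rule concrete, here is a minimal sketch of the max-peer selection described above. The `score` function is an illustrative assumption (a hash of the peer id and the epoch); the real Objective Dandelion proposals may score peers differently.

```python
import hashlib

def score(peer_id, epoch):
    # Hypothetical objective score: a hash of the peer id and the epoch,
    # so every node computes the same score for a given peer.
    h = hashlib.sha256("{}:{}".format(peer_id, epoch).encode()).digest()
    return int.from_bytes(h[:8], "big")

def next_hop(my_id, peers, epoch):
    # Return the peer to forward the stem transaction to, or None if no
    # peer beats our own score, i.e. we are a fluff node this epoch.
    best = max(peers, key=lambda p: score(p, epoch), default=None)
    if best is None or score(best, epoch) <= score(my_id, epoch):
        return None
    return best
```

Because the score is deterministic per epoch, repeatedly following `next_hop` converges all nearby transactions on the same fluff node.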

While the scheme would likely be a great improvement when there are more transactions flowing across the network, it does have some drawbacks.

  1. Every intermediary node that is not a fluff node can see the transaction you are sending before it gets aggregated
  2. A fluff node sees all the transactions before they are aggregated
  3. Even if the intermediary nodes aggregated the transactions they receive, they would know how to deaggregate the result because they performed the aggregation themselves

This proposal attempts to solve these problems with an encryption layer while retaining the convergence to a fluff node.

Applying encryption

In order to encrypt data, we first need to find out the public key of our fluff node. There are two cases:

  1. If we are the max peer, then we are the fluff node and we generate a random keypair for this Objective Dandelion epoch
  2. If someone else in our local peer set is the max peer, we ask them for the public key of the fluff node. That max peer then applies the same logic: either it provides its own public key (if it is the fluff node) or it asks its own max peer. Note that all nodes on the path to the same fluff node end up with the same fluff node public key:
Node5(Node7.pub) -> Node3(Node7.pub) -> Node1(Node7.pub) -> Node2(Node7.pub) -> ... -> Node7(holds private key)

Since fluff nodes change only at the next Dandelion epoch, it is enough to query for the fluff node public key once per epoch, e.g. every 5 blocks.
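The recursive key discovery above can be sketched as a small in-memory simulation. The `Node` class, `score` field, and the use of a random hex string as a stand-in public key are all illustrative assumptions; a real node would do this over the peer-to-peer protocol with actual keypairs.

```python
import secrets

class Node:
    def __init__(self, node_id, score):
        self.id = node_id
        self.score = score       # this node's objective max score for the epoch
        self.peers = []          # direct Node references (local peer set)
        self.key_cache = {}      # epoch -> fluff node public key

    def fluff_pubkey(self, epoch):
        # Query once per epoch, then serve the cached answer.
        if epoch in self.key_cache:
            return self.key_cache[epoch]
        best = max(self.peers, key=lambda p: p.score, default=None)
        if best is None or best.score <= self.score:
            # No peer beats us: we are the fluff node, generate an epoch key.
            key = secrets.token_hex(32)
        else:
            # Ask our max peer, which recursively applies the same logic.
            key = best.fluff_pubkey(epoch)
        self.key_cache[epoch] = key
        return key
```

Every node on a path ending at the same fluff node returns the same key, matching the `Node5(Node7.pub) -> ... -> Node7` picture above.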

Once we have the fluff node public key we apply the function encrypt_tx:

struct EncryptedTx {
  inputs []EncryptedMsg
  outputs []EncryptedMsg
  kernels []EncryptedMsg
  # The first node that adds the transaction splits its kernel offset into k1,k2 where
  # k1+k2 = my_tx.offset.
  # k1 is saved encrypted with the fluff key in the 'offset' variable
  # k2 is added to the initial 0 sum_offset so => 'sum_offset=k2'
  # NOTE: only the first node that adds the transaction interacts with the 'offset' variable here
  offset EncryptedMsg
  sum_offset int64
}

fn sorted_tx(encrypted_tx) ->
  return EncryptedTx{
    inputs: sort(encrypted_tx.inputs)
    outputs: sort(encrypted_tx.outputs)
    kernels: sort(encrypted_tx.kernels)
    offset: encrypted_tx.offset
    sum_offset: encrypted_tx.sum_offset
  }

fn encrypt_tx(fluff_public_key, my_tx, encrypted_tx) ->
  # If we have no transaction to add, we just return the existing encrypted transaction
  if not my_tx:
    return encrypted_tx
  offset_to_add = my_tx.offset

  # Handle the case if you will contribute the first transaction
  if encrypted_tx.kernels == []:
    k1 = random()
    k2 = my_tx.offset - k1   # this means that k1+k2 == my_tx.offset
    offset_to_add = k2       # this is the offset we need to add to 'sum_offset' part
    encrypted_tx.offset = encrypt(fluff_public_key, k1)

  # We could have received some encrypted parts if someone before us added a transaction,
  # so we have to add our pieces to the existing encrypted transaction
  encrypted_tx.inputs += [encrypt(fluff_public_key, input) for input in my_tx.inputs]
  encrypted_tx.outputs += [encrypt(fluff_public_key, output) for output in my_tx.outputs]
  encrypted_tx.kernels += [encrypt(fluff_public_key, kernel) for kernel in my_tx.kernels]
  encrypted_tx.sum_offset += offset_to_add
  # We sort inputs, outputs and kernels to avoid the next node knowing which were added last
  # and making guesses around that
  return sorted_tx(encrypted_tx)

This means that each stem node on the path to the fluff node encrypts its own transaction data separately and adds its pieces to the current encrypted transaction. Once the encrypted transaction reaches the fluff node, the fluff node decrypts encrypted_tx.offset and adds it to encrypted_tx.sum_offset (which, as we said, is not encrypted) to get the total kernel offset of all the aggregated transactions. Then it proceeds by decrypting the inputs, outputs and kernels. Because the kernel offsets have been merged into a single sum, the fluff node cannot deaggregate the transaction. This means we have solved the issues mentioned above. The fluff node needs to check whether the transaction is valid before broadcasting it to the network.
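The fluff node's receive path can be sketched as follows. To keep the example runnable, `toy_encrypt`/`toy_decrypt` are a stand-in (a key-tagged tuple, effectively symmetric) for real public-key encryption such as ECIES, and `GROUP_ORDER` is a toy modulus standing in for the curve's scalar group order; none of these are part of the original proposal.

```python
GROUP_ORDER = 2**64  # toy stand-in for the real scalar group order

def toy_encrypt(pubkey, msg):
    # Stand-in: tags the message with the key instead of producing a ciphertext.
    return (pubkey, msg)

def toy_decrypt(privkey, ciphertext):
    key, msg = ciphertext
    if key != privkey:
        raise ValueError("wrong key")
    return msg

def fluff_decrypt(privkey, etx):
    # Decrypt every piece of the EncryptedTx.
    inputs  = [toy_decrypt(privkey, c) for c in etx["inputs"]]
    outputs = [toy_decrypt(privkey, c) for c in etx["outputs"]]
    kernels = [toy_decrypt(privkey, c) for c in etx["kernels"]]
    # Total kernel offset = decrypted k1 + the plaintext sum_offset
    # (k2 plus every later node's full offset).
    k1 = toy_decrypt(privkey, etx["offset"])
    total_offset = (k1 + etx["sum_offset"]) % GROUP_ORDER
    return inputs, outputs, kernels, total_offset
```

Note that `fluff_decrypt` recovers only the aggregate offset, never the individual k1/k2 split of the first contributor, which is what prevents deaggregation.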

Questions

What if a stem node doesn't have a transaction to aggregate? One option is to wait. If we also communicated the total fee paid by the encrypted transaction, we could check whether we can add some decoy transactions by adjusting only the offset. Even though an undo attack on a self-spend is harmless, note that the fact that we add our offset to the sum_offset part means it would be impossible to know what our offset was.
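A tiny numeric illustration of why the split hides the offset (using the same toy group order as an assumption, not the real curve order): since k1 is drawn uniformly at random, k2 = offset - k1 is also uniform, so the plaintext sum_offset leaks nothing about the real offset.

```python
import secrets

N = 2**64                      # toy stand-in for the curve group order
offset = 123456789             # the offset we want to hide
k1 = secrets.randbelow(N)      # random share, encrypted to the fluff key
k2 = (offset - k1) % N         # uniform residue, safe to send in plaintext
assert (k1 + k2) % N == offset # the fluff node can always recombine them
```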

What if a malicious stem node adds a fake, nonexistent input? The intermediary nodes would not be able to tell it is invalid because it is encrypted. It could also just start sending you invalid encrypted transactions that never get broadcast.

This could open up a spam attack vector on the nodes. It might be possible to mitigate some of this by having the fluff node communicate back whether the transaction was balanced and valid in the end. It's also worth noting that this might be just a temporary situation, because the path to the fluff node should change at each epoch. So if you keep receiving stem encrypted transactions from the same node even though it is unlikely that, given your max score, you are again the max peer of an average node in that epoch, you can probabilistically ban that peer for a period of time. Another possible mitigation would be to keep the inputs unencrypted. This way, if the transaction ends up being unbalanced, you could ignore subsequent transactions that use the same inputs you have seen. If it became unbalanced after you added your part, this problem will likely be solved at the next Dandelion epoch because you will send it to other peers.
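The probabilistic-ban idea could be sketched as follows. The class, the streak counting, and the `BAN_THRESHOLD` value are all hypothetical details chosen for illustration; a real heuristic would weigh the actual probability of being a peer's max node repeatedly.

```python
from collections import defaultdict

BAN_THRESHOLD = 3  # hypothetical: consecutive epochs before we ban a peer

class StemTracker:
    def __init__(self):
        self.streak = defaultdict(int)  # peer_id -> consecutive epochs seen
        self.banned = set()
        self.seen_this_epoch = set()

    def on_stem_tx(self, peer_id):
        # Record the sender; return False if we should drop its stem traffic.
        self.seen_this_epoch.add(peer_id)
        return peer_id not in self.banned

    def on_new_epoch(self):
        # Reset the streak of peers that went quiet this epoch...
        for peer in list(self.streak):
            if peer not in self.seen_this_epoch:
                self.streak[peer] = 0
        # ...and extend the streak of peers that kept stemming to us.
        for peer in self.seen_this_epoch:
            self.streak[peer] += 1
            if self.streak[peer] >= BAN_THRESHOLD:
                self.banned.add(peer)
        self.seen_this_epoch = set()
```

Since an honest peer should only route to us in the epochs where we happen to be its max peer, a long uninterrupted streak is suspicious.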

What if the max peer node lies about the public key of the fluff node and instead just wants to read our transaction?

In this case, they could decrypt the transaction, which would make it plain Objective Dandelion without encryption. I'm not sure how to prevent this. It might be possible to ask for a verifiable path to the fluff node and check whether the fluff node has a high max peer score or something similar.

In general, if the majority of the nodes are honest, we should not hit these problems too often.
