@instagibbs
Last active June 30, 2022 19:43
Relay proposal

BIP125 and smart contracts

This writeup covers a subset of BIP125 issues mainly with respect to smart contracts on Bitcoin, as well as some proposed policy updates that allow a broad set of improvements to currently deployed and future systems.

BIP125 rule#1

The original transactions signal replaceability explicitly or through inheritance as described in the above Summary section.

The language here and elsewhere has led to confusion about intent, and to reports such as https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31876

Regardless of intent, in Core policy today a parent explicitly signaling replaceability does not imply that its child is directly replaceable by double-spending the child's inputs. This has implications for smart contracts where we would want to guarantee that a child tx, freely generated by any protocol participant, must also be opt-in RBF.

We can side-step this by requiring a "0 CSV" to be satisfied for those conditions in the execution script, but this only works if the participants support these script fragments.
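To see why a "0 CSV" fragment forces signaling, here is a minimal sketch of the relationship, assuming BIP68/BIP112 semantics; the constants mirror the values in the BIPs, but the helper functions themselves are hypothetical:

```cpp
#include <cstdint>

// BIP68: relative locktime is disabled for an input if this bit is set in nSequence.
static constexpr uint32_t SEQUENCE_LOCKTIME_DISABLE_FLAG = 1u << 31;
// BIP125: an input signals replaceability if its nSequence is at most 0xfffffffd.
static constexpr uint32_t MAX_BIP125_RBF_SEQUENCE = 0xfffffffd;

// Hypothetical helper: can this input's nSequence satisfy an
// "OP_0 OP_CHECKSEQUENCEVERIFY" (0 CSV) condition? BIP112 requires the disable
// flag to be unset (and the spending tx to be nVersion >= 2).
bool SatisfiesZeroCsv(uint32_t nSequence)
{
    return (nSequence & SEQUENCE_LOCKTIME_DISABLE_FLAG) == 0;
}

// BIP125 opt-in signaling check for a single input.
bool SignalsBip125(uint32_t nSequence)
{
    return nSequence <= MAX_BIP125_RBF_SEQUENCE;
}

// Any nSequence that passes SatisfiesZeroCsv() is below 0x80000000 and therefore
// also passes SignalsBip125(); embedding a 0 CSV in the spend path thus forces
// the child transaction to be opt-in RBF.
```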

BIP125 rule#2

The replacement transaction may only include an unconfirmed input if that input was included in one of the original transactions. (An unconfirmed input spends an output from a currently-unconfirmed transaction.)

This implies that in any protocol that requires new inputs to be brought in dynamically for fees or updates, a wallet should keep a reserve of confirmed outputs.

Second least-problematic rule behind #4 in my opinion.

BIP125 rule#3

The replacement transaction pays an absolute fee of at least the sum paid by the original transactions.

This rule means that if any transactions are shared between parties in any way, trivial ways of making RBF prohibitively expensive are available:

i) payments: destination(s) can sweep unconfirmed output
ii) LN channels: commitment transaction pinning
iii) eltoo: update transaction or settlement transaction pinning
iv) payment pools: pinning via unilateral exits

To date, these tactics are not seen often. However, in building truly trustless systems, we must eliminate these vulnerabilities while balancing DoS risk with miner incentives.

CPFP and Carve-out as work-around for BIP125 rule#3 pinning

Today's anchor outputs in the Lightning Network BOLTs give both parties the ability to CPFP a specific transaction, as long as they know the transaction they are building on is in the mempool or a block.

With two anchor outputs, the counterparty can put a low-feerate package in the mempool to block the other output from being spent due to package limits, the defaults being 25 descendants (including the root unconfirmed transaction) or 101 kvB. Note that these are defaults that may be changed by operators.

To avoid this package-limit pinning vector, the not-beloved-by-anyone relay policy called "carve-out" was included: if we hit package limits, let one more pretty-small transaction in that spends another output of the root parent transaction. This works for exactly two parties with one anchor each, relies on the fact that someone will sweep the just-above-dust outputs so they do not pollute the UTXO set, and leeches those dusty values from the contract.

The two-party-only restriction also means we need to expand this to N parties, or rethink how carve-out can be taken out.

BIP125 rule#4

The replacement transaction must also pay for its own bandwidth at or above the rate set by the node's minimum relay fee setting. For example, if the minimum relay fee is 1 satoshi/byte and the replacement transaction is 500 bytes total, then the replacement must pay a fee at least 500 satoshis higher than the sum of the originals.

I've yet to really hear anyone complain about this from an application developer point of view.

BIP125 rule#5

The number of original transactions to be replaced and their descendant transactions which will be evicted from the mempool must not exceed a total of 100 transactions.

Given the default mempool policy of 25 ancestors and 25 descendants, this means that for a set of unconfirmed transactions you can RBF up to 100/25 = 4 of these packages in a single fee bump, even if counterparties are creating large transaction chains. Note that this assumption falls apart if miners decide to increase their own policy limits, e.g. a maximum descendant count of 50 implies only 100/50 = 2 packages can be bumped.

The worst-case scenario is all miners increasing this value to 101, making it impossible to know whether even a single RBF can be achieved, regardless of fees paid, and regardless of whether we fixed rule#3!

FUTURE WORK

These are things this proposal does not do.

Replace rule#3 with feerate consideration only

There are fairly extensive debates on how to do this properly, with concern focusing on anti-DoS protection being lost, as well as the ability for a few in-mempool replacements to clear out a nearly unbounded amount of data. (FIXME what's the mempool emptying attack again? rule#5 seems to stop that?)

Changing rule#3 is the "third rail" of policy discussions, and as such I propose no changes here.

Do Better

Given the long history of debates about how to fix these issues, what is a minimum viable fix in policy that can have maximum impact?

Below is one proposal.

"Mandatory" Proposals

These two are "required" pieces to enable a broad array of improvements to smart contracts, including secure eltoo assuming APO-like functionality is deployed.

PACKAGE RELAY

Thanks Gloria! A required tool that lets us get N 0-fee transactions into the mempool with a fee-bringing child. Roughly the same BIP125 RBF limitations apply.

OPT-IN MEMPOOL POLICY

Make nVersion==3 be standard, with the following additional restrictions for relay:

i) All child transactions must also be nVersion==3
ii) All nVersion==3 transactions are taken as explicitly signaling bip125 replacement, regardless of nSequence values set.
iii) All nVersion==3 transactions are constrained further to only be standard if the entire mempool package they are entering (itself included) is below a new weight limit of opt_weight_limit.
iv) Scale the 100 in rule#5 by replacing with (max(ancestor_count_max, descendant_count_max) - 1) * num_chains
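A rough sketch of what checks i-iii could look like in a node's relay policy (the struct and function names here are illustrative only, not Bitcoin Core APIs; opt_weight_limit is the bikeshed knob from iii):

```cpp
#include <cstdint>
#include <vector>

// Illustrative stand-ins, not Bitcoin Core types.
struct TxView {
    int32_t nVersion;
    std::vector<const TxView*> unconfirmed_parents; // in-mempool ancestors
    int64_t package_weight;                         // weight of the mempool package it joins, itself included
};

struct OptInPolicy {
    int64_t opt_weight_limit; // new limit from restriction (iii)

    // (i) An nVersion==3 parent may only have nVersion==3 unconfirmed children.
    bool ChildVersionOk(const TxView& tx) const {
        for (const TxView* parent : tx.unconfirmed_parents) {
            if (parent->nVersion == 3 && tx.nVersion != 3) return false;
        }
        return true;
    }

    // (ii) nVersion==3 counts as explicit BIP125 signaling, whatever the nSequence fields say.
    bool SignalsReplaceability(const TxView& tx, bool bip125_sequence_signal) const {
        return tx.nVersion == 3 || bip125_sequence_signal;
    }

    // (iii) An nVersion==3 transaction is only standard if its whole mempool package
    // (itself included) stays under opt_weight_limit.
    bool PackageWeightOk(const TxView& tx) const {
        return tx.nVersion != 3 || tx.package_weight <= opt_weight_limit;
    }
};
```

Restriction iv) is sketched separately after the rationale below.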

Rationale:

i+ii) We don't want to deal with the "inherited signaling" confusion or implementation complexity, while still forcing CPFP transactions to signal replaceability. We don't want to re-litigate opt-in RBF either. Smart contract wallets will set this, and retail shops can do 0-conf, or not, based on this signaling. A hack to force signaling is to stick "0 CSV" in the execution script, but why bother with a hack when we can just fix policy?

iii) opt_weight_limit <- bikeshedding goes here. This is the mitigation for rule#3, so you do not break your own bank at potentially no cost to the attacker. FIXME do we need to allow "next block" transactions in, even if it's hitting this limit, to make people feel better about miner incentives?

iv) Default values of 25, 25, and a fixed value of 4 respectively, resulting in up to 96 transactions. We want an API where we know that we can bump up to num_chains (4?) transactions, even if miners have messed with mempool default limits in perhaps reasonable ways. If miners are accepting longer chains, more should be allowed to be evicted, and vice versa. Provided the smart contract is staying within the confines of the current package relay proposal, this means that the "honest" player is allowed to attach a single CPFP transaction to num_chains outputs, regardless of package limits, except for the case of descendant_count_max <= 1, meaning no unconfirmed spends are allowed at all.
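For concreteness, a minimal sketch of the scaled rule#5 limit from iv), evaluated for a few miner configurations (the function name is made up; the numbers follow the worked values above):

```cpp
#include <algorithm>
#include <cstdio>

// Proposed rule#5 replacement limit: scale with the node's actual chain limits
// instead of the fixed 100.
int ScaledRule5Limit(int ancestor_count_max, int descendant_count_max, int num_chains = 4)
{
    return (std::max(ancestor_count_max, descendant_count_max) - 1) * num_chains;
}

int main()
{
    // Defaults of 25/25: (25 - 1) * 4 = 96 evictable transactions, enough to bump num_chains chains.
    std::printf("%d\n", ScaledRule5Limit(25, 25));  // 96
    // A miner raising descendants to 50: (50 - 1) * 4 = 196, so bumping 4 chains still works,
    // unlike the 100/50 = 2 packages allowed under the current fixed-100 rule.
    std::printf("%d\n", ScaledRule5Limit(25, 50));  // 196
    // Even at 101 descendants, (101 - 1) * 4 = 400, rather than the fixed 100 that could pin every RBF.
    std::printf("%d\n", ScaledRule5Limit(25, 101)); // 400
}
```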

NICE TO HAVE

EPHEMERAL DUST PACKAGE OUTPUTS

When considering a deduplicated relay package for inclusion in the mempool, if an output generated in the package is also consumed within the package (meaning individual transaction submission does not suffice for inclusion):

i) Ignore standard dust relay checks
ii) Allow a blank scriptpubkey (or OP_TRUE? malleability questions).

Rationale:

Dust UTXOs are only problematic in that, if they are not spent, full nodes must carry them forever as state, or rely on undeployed relay improvements such as utreexo, which carry their own engineering tradeoffs. If the parent transaction is only included in the case of the child immediately spending it to bring the package feerate high enough for inclusion, we can allow this ephemeral dust to be generated.
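A minimal sketch of what the package-level check could look like, assuming the deduplicated package is available as a list of transactions; the types and the function are invented for illustration and are not Core's package validation code:

```cpp
#include <cstdint>
#include <set>
#include <string>
#include <tuple>
#include <vector>

// Illustrative types, not Bitcoin Core structures.
struct OutPoint {
    std::string txid;
    uint32_t index;
    bool operator<(const OutPoint& o) const {
        return std::tie(txid, index) < std::tie(o.txid, o.index);
    }
};
struct TxOut {
    int64_t value;
    bool dust_or_blank; // below the dust threshold, or an empty/OP_TRUE scriptPubKey
};
struct Tx {
    std::string txid;
    std::vector<OutPoint> inputs;
    std::vector<TxOut> outputs;
};

// Allow dust / blank-scriptPubKey outputs only when some other transaction in the
// same package consumes them; otherwise normal standardness applies, so ephemeral
// dust can never be left behind in the UTXO set.
bool EphemeralDustOk(const std::vector<Tx>& package)
{
    std::set<OutPoint> spent_in_package;
    for (const Tx& tx : package) {
        for (const OutPoint& in : tx.inputs) spent_in_package.insert(in);
    }
    for (const Tx& tx : package) {
        for (size_t i = 0; i < tx.outputs.size(); ++i) {
            if (tx.outputs[i].dust_or_blank &&
                spent_in_package.count({tx.txid, static_cast<uint32_t>(i)}) == 0) {
                return false; // the dust would outlive the package; reject
            }
        }
    }
    return true;
}
```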

With 0-value outputs becoming practical for CPFP, we can now reconsider the necessity of multiple anchor-like outputs in protocols. Indeed, we can use a single "anyonecanspend" output as an anchor, in conjunction with the aforementioned nVersion==3 policy change, to avoid BIP125 rule#3 pinning attacks.

Note that sticking with N anchors, one attributed to each party, no longer works: they will not all be spent immediately in unilateral closes in LN-like constructs, violating the restriction that the output must be spent in the same package.

USE CASES

What do the proposed changes allow?

Batched Payments

RBF batched payments without fear under the opt_weight_limit, even if people are sweeping unconfirmed outputs.

LN Penalty with 0-value anchors

Switch to using nVersion==3 for all pre-signed transactions.

The user can confidently RBF an opposing commitment transaction, or their own, without ever requiring mempool access or assumptions about the mempool view, all with the total rule#3 "damage" being strictly bounded by opt_weight_limit.

With the stretch goals, we can collapse the 2 anchor outputs into a single 0-value anchor output. This means no value in the commitment transaction has to be set aside to pay for on-chain fees (thanks to base package relay), and no value has to be allocated to the anchor output. All fees can be brought in unilaterally if desired.

Open question: can cooperative open and close transactions still use nVersion==2, so that in the common case nothing otherwise unusual is seen on chain? These can be joint transactions in the collaborative funding case, which may fall prey to the same pinning issues? If you put funds in, you care about completing or getting your money back in a timely fashion.

Eltoo for N-Party protocols

Assuming an APO-like softfork is deployed, we again use nVersion==3 for all pre-signed transactions.

The same conversion can be done for eltoo constructions with 1-input-1-output update transactions: either Bring Your Own Fees (BYOF) with additional fee inputs/outputs, or switch to SIGHASH_ALL and include a single 0-value anchor output used in an identical fashion to LN-penalty as proposed above.

For settlement transactions, which have N party balance outputs and M HTLC outputs, we timelock all of these with a 1 CSV and attach a single 0-value anchor.

This obviates the requirement for additional research into SIGHASH_GROUP and advanced sighash consensus modifications.

Kill "Carve-out"(?)

Note how we don't need carveout for any of these constructions?

What constructions will still need it, aside from legacy?

ACKNOWLEDGEMENTS:
Suhas/BlueMatt/??? for the original "descendants must be below X weight total" discussion
Antoine for breaking an old version of the weight-limiting fix which didn't cover ancestors in addition to the original tx: https://bitcoinops.org/en/newsletters/2022/05/18/#using-transaction-introspection-to-prevent-rbf-pinning
Jeremy for spurring discussion about allowing 0-value outputs as long as they are spent, which seems a natural fit for the package relay proposal

TODO Fix FIXMEs
TODO Add more citations

@instagibbs (Author)

@t-bast

taproot annex or something similar

yes, maybe you can only make the value more restrictive or something to make analysis easier

@remyers

Whereas with the current system, if a miner sets descendants_count_max to 50, but does not also manually scale max_ancestors in an equivalent way, does that just mean to use CPFP to unpin 4 transactions you would now need 2 instead of 1 CPFP transaction?

Yes and if they set it to 101, you cannot necessarily RBF anything!

My real question is if the num_chains parameter is to allow more efficient CPFP and prevent accidental parameter mis-configuration, OR if there's some more fundamental reason to configure by num_chains that helps make RBF easier to use to unpin maliciously pinned txs.

Pretty much the goal is to make a nice API for wallets and smart contracts to RBF. Rule#5 currently is a gnarly corner case that could theoretically pin ~every RBF potentially, if miners start to set this value to high values.

@remyers commented Jun 6, 2022

Pretty much the goal is to make a nice API for wallets and smart contracts to RBF. Rule#5 currently is a gnarly corner case that could theoretically pin ~every RBF potentially, if miners start to set this value to high values.

From an API standpoint, wallets and smart contracts can't know exactly how many chains they can RBF together at one time, right?

But that said, I can see how this makes rule #5 much less likely to get mis-configured by miners, which is good. Even if you can't know what particular value each miner uses, this makes it a fairly safe bet to be >= 1 and likely to work for the default configuration of 4 chains or more if enough miners increase that value.

@ariard commented Jun 6, 2022

This implies that in any protocol that requires new inputs to be brought in dynamically for fees or updates, a wallet should keep a reserve of confirmed outputs.

Though this rule quickly leads to RBF penalty overhead.

Let's consider the following scenario where you would like to bump multiple LN channels with anchor outputs. Let's say you have two reserve UTXOs: A of value 3000 sats and B of value 6000 sats. You would like to fee-bump the 3 LN commitments X, Y, Z of size 1000 vbytes each, pre-signed at 1 sat/vb. You split UTXO A in 3 with a fan-out transaction, then you broadcast the commitments and CPFPs, setting the feerate of each package to 2 sats/vb. The mempool feerate increases to 4 sats/vb. Under rule 2, you're not able to fan out UTXO B and then replace the 3 packages X, Y, Z. What you have to do is double-spend the fan-out tx spending A, and thus pay the RBF penalty on the whole of fan-out A + packages X, Y, Z.

While it's possible to implement this fee-bumping and rebroadcast strategy, it makes it hard to ensure your fee-bumping reserves satisfy the RBF-penalty worst cases.

The two-party-only restriction also means we need to expand this to N parties, or rethink how carve-out can be taken out.

Fee-efficiency wise, I don't think the anchor output pattern scales well for contracts with a high number of participants, as you need to add one for each participant...

Package relay could allow us to solve the issue. As a multi-party UTXO is most of the time spendable by multiple different transactions (true for both LN-penalty and eltoo), and you cannot assume a consistent view of the network mempools, you should make the parent part of any package broadcast. The parent will conflict with any pending spend of the contract UTXO, and thus evict any chain of junk children, whose potential presence is what motivates the carve-out. To do so, I think we might have to make package RBF atomic and avoid any interference with package dedup.

In another direction, I believe we could enable some "in-place" carve-out, where if the conflicted transaction is at unconfirmed depth=2, we relax the descendant count limit to allow the replacement to occur.

I've yet to really hear anyone complain about this from an application developer point of view.

In the context of a multi-party contract, a counterparty could have honestly broadcast a chain of transactions of unknown size. As you don't have a view of network mempools, you don't know by how much RBF penalty you should increase your replacement, thus decreasing its odds of propagation. I think this is solved with opt_weight_limit, as you can assume a worst-case RBF penalty.

ii) All nVersion==3 transactions are taken as explicitly signaling bip125 replacement, regardless of nSequence values set.

Sadly, I believe it won't solve pinning of multi-party funded transactions such as described here: https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-May/003033.html, as a malicious participant is still allowed to disable RBF on her double-spend. One way to solve it could be to "override" an RBF opt-out transaction if an nVersion==3 transaction exists spending the same UTXO (but I think we would need to invert some checks in the mempool validation path, DoS-y...)

iv) Scale the 100 in rule#5 by replacing with (max(ancestor_count_max, descendant_count_max) - 1) * num_chains

I'm not sure about the interest of this rule, at least for the LN case. As described here, usage of it could lead to unsafe situations:

"Multiple parents aren't safe for lightning as it does allow a counterparty to delay the confirmation of a specific channel commitment by overbidding on any bundled commitment inside the same package. Let's say you broadcast the package A+B+C+D+E, where A,B,C,D are commitment transactions and E a common CPFP. Previously, your malicious counterparty have submitted a better-feerate package A'+F to the network mempools. When your package {A,B,C,D,E} is sumbitted for acceptance, it will failed on the better-feerate requirement :

bitcoin/src/validation.cpp, line 830 in fdd80b0:
if (newFeeRate <= oldFeeRate)

(fwiw, not documented in bip125).

As you don't know the failure reason from the honest LN node viewpoint, it's hard to decide on the next broadcast strategy: either split the former package into X isolated one-parent+child packages, because you suspect your counterparty of meddling with your transaction confirmation (similar to lightning/bolts#803), or attribute the failure to feerate efficiency w.r.t. the top of the mempool transactions.

Note, your counterparty doesn't have to be malicious for your multiple-parents package to fail confirmation. An honest counterparty can just have decided to go on-chain concurrently, with a better fee-estimation than you.

Of course, I think this unsafety is only concerning in the case of time-sensitive confirmations. If all your commitments are devoid of HTLC outputs, it's not time-sensitive anymore, and it's okay to have confirmation failures in the edge cases; no funds are lost, beyond the timevalue of liquidity."

i) Ignore standard dust relay checks

At first sight, I think it's okay to have a 0-value anchor, though beware a transaction censorship vector, where an attacker would split out the fully-signed and valid parent from the package and front-run p2p propagation to announce it as a single transaction, hitting a policy violation and getting it bounced off by m_recent_rejects when the package honestly propagates.

These can be joint transactions in the collaborative funding case, which may fall prey to the same pinning issues?

For cooperative opens, I think so, as the funding inputs are singly controlled by a counterparty. See the post linked above, "On Mempool Funny Games against Multi-Party Funded Transactions".

@instagibbs (Author)

Though this rule quickly leads to RBF penalty overhead.

Sure! Not solving everything here.

Fee-efficiency wise, I don't think the anchor output pattern scales well for contracts with a high number of participants, as you need to add one for each participant...

Yes, if you keep reading, I indeed get rid of this using a shared anchor output at 0-value. I propose that to go with package relay, perhaps very similar to what you're describing?

as a malicious participant is still allowed to disable RBF on her double-spend

Yeah, the dual/multi-funding case seems recursively broken after all, so that's yet another argument for long-term full-rbf. Maybe no getting around this :(

I'm not sure about the interest of this rule, at least for the LN case. As described in bitcoin/bitcoin#22674 (comment), usage of it could lead to unsafe situations

The whole point of this document is to make it possible to broadcast and bump transactions "blindly"; the fact that you have to increase your feerate to get included given some structure is totally fine. Maybe not smart for the wallet, but it should work. This particular section is just trying to make a heuristic policy that makes sure you can at least bump 1!

At first sight, I think it's okay to have a 0-value anchor, though beware a transaction censorship vector,

Yeah, I'm also assuming the tx isn't banned for having 0 fee even though you have a feefilter. Goes without saying imo.

@ariard commented Jun 8, 2022

Yes, if you keep reading, I indeed get rid of this using a shared anchor output at 0-value. I propose that to go with package relay, perhaps very similar to what you're describing?

Yeah, agree here, my note was about emphasizing more downsides of the N-carve-out + N-anchor-output pattern for any curious reader.

Maybe not smart for the wallet, but it should work. This particular section is just trying to make a heuristic policy that makes sure you can at least bump 1!

Yes, I hope every use-case will write up good practices to avoid unsafe holes in its specific context.

Yeah, I'm also assuming the tx isn't banned for having 0 fee even though you have a feefilter. Goes without saying imo.

Yep, also the feefilter; nice if we don't forget that safety-critical context down the implementation pipeline!

@instagibbs (Author)

Just noting from other conversations: it turns out "replace rule#3 with feerate considerations" isn't enough on its own to stop economic pinning; we would still have to avoid the situation with huge junk ancestors and an extremely high-feerate child. It would be replaceable, but at potentially up to 1000x normal feerates, depending on the attacker's risk appetite, under default relay configs.
