Hi Bitcoin Devs,
I'd like to share with you a draft proposal for a mechanism to replace CPFP and RBF for increasing fees on transactions in the mempool, one that should be more robust against attacks.
A reference implementation demonstrating these rules is available here for those who prefer to not read specs.
Should the mailing list formatting be bungled, it is also available as a gist here.
This BIP proposes a general purpose mechanism for expressing non-destructive (i.e., not requiring the spending of a coin) dependencies on specific transactions being in the same block, which can be used to sponsor fees of remote transactions.
The mempool has a variety of protections and guards in place to ensure that miners are economic and to protect the network from denial of service.
The rough surface of these policies has some unintended consequences for second layer protocol developers. Applications are either vulnerable to attacks (such as transaction pinning) or must go through great amounts of careful protocol engineering to guard against known mempool attacks.
This is insufficient because if new attacks are found, there is limited ability to deploy fixes for them against deployed contract instances (such as open lightning channels). What is required is a fully abstracted primitive that requires no special structure from the underlying transaction in order to increase the fees paid to confirm it.
If a transaction's last output's scriptPubKey is of the form OP_VER followed by n*32 bytes, where n>=1, it is interpreted as a vector of TXIDs (Sponsor Vector). The Sponsor Vector TXIDs must also be in the block the transaction is validated in, with no restriction on order or on specifying a TXID more than once. This can be accomplished simply with the following patch:
+
+ // Extract all required fee dependencies
+ std::unordered_set<uint256, SaltedTxidHasher> dependencies;
+
+ const bool dependencies_enabled = VersionBitsState(pindex->pprev, chainparams.GetConsensus(), Consensus::DeploymentPos::DEPLOYMENT_TXID_DEPENDENCY, versionbitscache) == ThresholdState::ACTIVE;
+ if (dependencies_enabled) {
+ for (const auto& tx : block.vtx) {
+ // a dependency output is present if the last output of a txn is OP_VER followed by a
+ // sequence of 32*n bytes
+ // vout.back() must exist because it is checked in CheckBlock
+ const CScript& dependencies_script = tx->vout.back().scriptPubKey;
+ // empty scripts are valid, so be sure we have at least one byte
+ if (dependencies_script.size() && dependencies_script[0] == OP_VER) {
+ const size_t size = dependencies_script.size() - 1;
+ if (size % 32 == 0 && size > 0) {
+ for (auto start = dependencies_script.begin() + 1, stop = start + 32; start < dependencies_script.end(); start = stop, stop += 32) {
+ uint256 txid;
+ std::copy(start, stop, txid.begin());
+ dependencies.emplace(txid);
+ }
+ }
+ // No rules applied otherwise, open for future upgrades
+ }
+ }
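+ // every required TXID must correspond to a transaction in this block, so the set
+ // can never legitimately exceed the block's transaction count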
+ if (dependencies.size() > block.vtx.size()) {
+ return state.Invalid(BlockValidationResult::BLOCK_CONSENSUS, "bad-dependencies-too-many-target-txid");
+ }
+ }
+
for (unsigned int i = 0; i < block.vtx.size(); i++)
{
const CTransaction &tx = *(block.vtx[i]);
+ if (!dependencies.empty()) {
+ dependencies.erase(tx.GetHash());
+ }
nInputs += tx.vin.size();
@@ -2190,6 +2308,9 @@ bool CChainState::ConnectBlock(const CBlock& block, BlockValidationState& state,
}
UpdateCoins(tx, view, i == 0 ? undoDummy : blockundo.vtxundo.back(), pindex->nHeight);
}
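+ // any TXID still in the set was named by a Sponsor Vector but did not appear in this block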
+ if (!dependencies.empty()) {
+ return state.Invalid(BlockValidationResult::BLOCK_CONSENSUS, "bad-dependency-missing-target-txid");
+ }
The final output of a transaction is an unambiguous location to attach metadata to a transaction such that the data is available for transaction validation. This data could be committed to anywhere, with added implementation complexity, or in the case of Taproot annexes, incompatibility with non-Taproot addresses (although this is not a concern for sponsoring a transaction that does not use Taproot).
A bare scriptPubKey prefixed with OP_VER is defined to be invalid in any context, and is trivially provably unspendable and therefore pruneable.
If there is another convenient place to put the TXID vector, that's fine too.
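For illustration only, here is a minimal sketch of how a wallet might construct such an output's scriptPubKey: OP_VER (0x62) followed by the raw, concatenated 32-byte TXIDs with no push opcodes, matching what the patch above parses. The helper name and types are hypothetical and not part of the reference implementation.

#include <array>
#include <cstdint>
#include <vector>

// Hypothetical helper: build a Sponsor Vector scriptPubKey committing to the
// given TXIDs. The result is 1 + 32*n bytes: OP_VER followed by raw TXIDs.
std::vector<uint8_t> MakeSponsorVectorScript(const std::vector<std::array<uint8_t, 32>>& txids)
{
    std::vector<uint8_t> script;
    script.push_back(0x62); // OP_VER
    for (const auto& txid : txids) {
        // raw 32-byte TXID appended directly, with no push opcode, so the
        // (size - 1) % 32 == 0 check in the validation patch holds
        script.insert(script.end(), txid.begin(), txid.end());
    }
    return script;
}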
As the output type is non-standard, unupgraded nodes will by default not accept transactions containing such outputs into their mempools, limiting the risk of deploying an upgrade via this mechanism.
The mechanism proposed above is a general specification for inter-transaction dependencies.
In this BIP, we only care to ensure a subset of behavior sufficient to replace CPFP and RBF for fee bumping.
Thus we restrict the mempool policy such that:
- No Transaction with a Sponsor Vector may have any child spends; and
- No Transaction with a Sponsor Vector may have any unconfirmed parents; and
- The Sponsor Vector must have exactly 1 entry; and
- The Sponsor Vector's entry must be present in the mempool; and
- Every Transaction may have exactly 1 sponsor in the mempool; except
- Transactions with a Sponsor Vector may not be sponsored.
The mempool treats ancestor and descendant limits as follows:
- Sponsors are counted as child transactions for descendant limits; but
- Sponsoring transactions are exempted from any limits saturated at the time of submission.
This ensures that within a given package, every child transaction may have a sponsor, but that the mempool prefers not to accept new true children while there are parents that can be cleared.
To prevent garbage sponsors, we also require that:
- The Sponsor's feerate must be greater than the Sponsored transaction's ancestor feerate
We allow one Sponsor to replace another, subject to normal replacement policies; the two sponsors are treated as conflicts.
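To make the combined policy concrete, the sketch below shows roughly how these checks might be applied at mempool acceptance time. All names are hypothetical and do not correspond to Bitcoin Core's actual interfaces; it is an illustration of the rules above, not the reference implementation.

#include <optional>
#include <string>

// Hypothetical, precomputed view of a candidate transaction carrying a
// Sponsor Vector; none of these names are real Bitcoin Core interfaces.
struct SponsorCheckInput {
    bool has_sponsor_vector;        // last output is OP_VER followed by 32*n bytes
    bool vector_has_single_entry;   // policy: exactly 1 TXID
    bool has_unconfirmed_parents;
    bool target_in_mempool;
    bool target_is_itself_a_sponsor;
    double candidate_feerate;       // candidate's fee / vsize
    double target_ancestor_feerate; // target's ancestor package feerate
};

// Returns a rejection reason, or nullopt if the sponsor-specific rules pass.
std::optional<std::string> CheckSponsorPolicy(const SponsorCheckInput& in)
{
    if (!in.has_sponsor_vector) return std::nullopt; // ordinary transaction
    if (!in.vector_has_single_entry) return "sponsor-vector-not-single-entry";
    if (in.has_unconfirmed_parents) return "sponsor-has-unconfirmed-parent";
    if (!in.target_in_mempool) return "sponsor-target-not-in-mempool";
    if (in.target_is_itself_a_sponsor) return "sponsor-target-is-a-sponsor";
    if (in.candidate_feerate <= in.target_ancestor_feerate) return "sponsor-feerate-below-ancestor-feerate";
    // An existing sponsor for the same target is treated as a conflict and goes
    // through normal replacement rules rather than a flat rejection here; spends
    // of a sponsor's outputs are rejected elsewhere, enforcing the "no child
    // spends" rule.
    return std::nullopt;
}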
There are a few other ways to use OP_VER sponsors that are not included. For instance, one could make child chains that are only valid if their parent is in the same block (this is incompatible with CTV, exercise left to reader). These use cases are in a sense incidental to the motivation of this mechanism, and add a lot of implementation complexity.
What is wanted is a minimal mechanism that allows arbitrary unconnected third parties to attach fees to an arbitrary transaction. The set of rules given tightly bounds how much extra work the mempool might have to do to account for the new sponsors in the worst case, while providing a "it always works" API for end users that is not subject to traditional issues around pinning.
Eventually, rational miners may wish to permit multiple sponsor targets, or multiple sponsoring transactions, but they are not required for the mechanism to work. This is a benefit of the minimality of the consensus rule, it is compatible with future policy should it be implemented.
In the worst case, the new policy can lead to a 1/2 reduction in the number of children allowed (e.g., if 13 children are submitted followed by 12 sponsors, the 25-child limit will saturate before more true children can be accepted) and a 2x increase in the maximum number of children (e.g., if 25 children are submitted and then each is sponsored). Importantly, even in the latter attack scenario, the added DoS surface is small because the sponsor transactions have neither children nor parents.
Future policy work might be able to insert sponsors into a special sponsor pool with an eviction policy that would enable sponsors to be queried and tracked for transactions that have too low fee to enter the mempool in the first place. This is treated as a separate concern, as any strides on package relay generally should be able to support sponsors trivially.
A reference implementation demonstrating these rules is available here. This is a best effort implementation, but has not been carefully audited for correctness and likely diverges from this document in ways that should either be reflected in this document or amended in the code.
Best,
Jeremy
@whitslack correct, we're talking in the domain of policy. The consensus rules do not need adjusting. What is important, though, is to ensure the policy is incentive compatible, or rational miners will change their behavior.
@dgpv yep; I think all of this presupposes some level of active mempool monitoring anyway, via watchtowers or similar. Otherwise you would not even know if something were in the mempool or not. And generally, for protocols, you don't need to monitor the mempool actively; you need to monitor blocks within your chosen SLA and, if triggered (e.g., by the publishing of a revoked state), respond within a given time window. Once such a protocol is started, you are protected in the mempool by timelocks ensuring the final close state cannot be used within some relative bound (and cannot be in the mempool).