@kayabaNerve
Last active May 21, 2024 06:34
FCMP+SA+L

Full-Chain Membership Proofs + Spend Authorization + Linkability

This proposes an extension to FCMPs to make them a drop-in replacement for the existing CLSAG. In order to be such a replacement, the proof must handle membership (inherent to FCMPs), spend authorization, and linkability.

Terminology

  • FCMP: Full Chain Membership Proof
  • GBP: Generalized Bulletproof, a modified Bulletproof supporting Pedersen Vector Commitments in its arithmetic circuit proof. Security proofs are currently being worked on by Cypher Stack.
  • GSP: Generalized Schnorr Protocol, a proven protocol from 2009, for proving the following: for an m*n matrix
    [
        [A, B, C],
        [D, E, F]
    ]
    
    And an n-length row
    [x, y, z]
    
    The m-length output column:
    [
      T,
      U,
    ]
    
    is
    [
      xA + yB + zC,
      xD + yE + zF,
    ]
    
    This is useful to prove knowledge of the opening of a vector commitment and consistency between vector commitments.
  • BP+: Bulletproof+. A weighted-inner-product proof currently in use in Monero.
  • F-S: Forward secret. An adversary who can solve for the elliptic curve discrete log problem cannot find the spent output.

Recap

FCMPs, when instantiated with GBPs, are an O(n) proof of an O(log m) program (where m is the number of members in the set). The program is a Merkle tree proof, where the same layer program is executed multiple times. The layer program is summarizable as follows:

  1. Unblind the Pedersen Commitment input.
  2. Verify the unblinded variable is present in the layer being hashed.
  3. Hash the layer, outputting a new Pedersen Vector Commitment (blinding the hash for the layer).

For the first layer, under Seraphis, the input would be the re-randomized squashed-enote (effectively a Pedersen Vector Commitment). The "unblinding" would remove the re-randomization and continue.

The final layer would use randomness = 0 such that the output Pedersen Vector Commitment is the tree root (or the protocol would take the difference from the root and require a discrete log PoK over the blinding generator, forcing the committed value to equal the root).
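As a concrete (toy) illustration of the above layer program, the following Python sketch uses integers modulo a prime in place of curve points, a made-up layer hash, and hypothetical function names (`layer_program`, `layer_hash`). It only demonstrates the unblind / check-membership / re-blind flow described above, not the real circuit.

```python
# Toy sketch of the per-layer program, assuming nothing about the real circuit:
# "points" are integers mod q, blinding is addition of blind * H, and the layer
# hash is a stand-in. Discrete logs are trivially recoverable here; this only
# checks the control flow described above.
import secrets

q = 2**127 - 1                      # toy group order, NOT a real curve order
H = secrets.randbelow(q - 1) + 1    # toy blinding generator

def layer_hash(children):
    # stand-in for the elliptic-curve hash over a chunk of children
    return sum((i + 1) * c for i, c in enumerate(children)) % q

def layer_program(blinded_child, blind, children, out_blind):
    child = (blinded_child - blind * H) % q      # 1. unblind the input commitment
    assert child in children                     # 2. check it's in the layer being hashed
    return (layer_hash(children) + out_blind * H) % q   # 3. output a blinded parent hash

# Two-layer walk; the final layer uses randomness 0, so the output is the root.
leaves = [secrets.randbelow(q) for _ in range(4)]
parents = [layer_hash(leaves[:2]), layer_hash(leaves[2:])]
root = layer_hash(parents)

r0, r1 = secrets.randbelow(q), secrets.randbelow(q)
out1 = layer_program((leaves[1] + r0 * H) % q, r0, leaves[:2], r1)
out2 = layer_program(out1, r1, parents, 0)
assert out2 == root
```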

Difficulty

The naive solution would be to open K, the output key, inside the proof, then prove for the key image inside the proof. This would require hardware wallets to perform the FCMP proof, which is extremely undesirable/entirely unacceptable. Accordingly, the goal is for the membership proof to perform re-randomization without knowledge of x, letting a second proof prove for spend authorization and the key image.

Solution

Instead of building a Merkle tree of squashed enotes, we build a Merkle tree with elements (K, hash_to_point(K), C) where C is the amount commitment. We output re-randomized versions of all three and then prove the key image was correctly formed.

Originally Proposed Modifications for Deployment During RingCT

EDIT: These should not be moved forward with. The alternative modifications in the next section achieve much better properties and are the basis of the forward-secret variant posted below. This is only present as it was discussed below and therefore would be confusing to remove entirely.

For the first layer, we don't take in the re-randomized squashed-enote (as this is pre-Seraphis), but rather (r K, r**-1 H(K), C + a G). We denote this (K', H', C').

The first layer does not perform the traditional unblinding (addition of a G for a known a) for K', H'. Instead, it multiplies K by r (checking it's equal to K') and H' by r (checking it's equal to H(K)). We then perform the traditional unblinding of C' and check membership of (K, H(K), C).

This does not prove the spend is authorized nor that any claimed linking tag is correctly formed. It solely shows that the K', H' are derived as expected from a member of the tree.

We then extend this with a Discrete Log Equality proof for K', I over G, H'. The discrete logarithm of K' is r x, which we notate x'. x' H' expands to r x r**-1 H(K), with the r terms cancelling out for x H(K), the expected key image. This introduces linkability into the proof. By having the DLEq proof's transcript be over the message signed, it provides spend authorization as well.
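A quick numeric check of that cancellation (toy group: integers mod a prime standing in for points, so discrete logs are trivial; names are mine):

```python
import secrets

q = 2**127 - 1
G, HK = (secrets.randbelow(q - 1) + 1 for _ in range(2))   # HK stands in for hash_to_point(K)
x, r = (secrets.randbelow(q - 1) + 1 for _ in range(2))

K   = x * G % q
K_p = r * K % q                    # K' = r K
H_p = pow(r, -1, q) * HK % q       # H' = r**-1 H(K)
x_p = r * x % q                    # x', the discrete log of K' over G

assert x_p * G % q == K_p          # DLEq statement over G
assert x_p * H_p % q == x * HK % q # ... and over H': yields x H(K), the key image
```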

Proposed Modifications

With newly introduced generators T, U, V, and ignoring the amount commitment for now:

  • K' = K + aT
  • I' = hash_to_point(K) + bU
  • B = bV

The FCMP would only perform three standard unblinds (the third being C', ignored here), and show consistency between the bU term in I' and B. Instead of performing a DLEq proof afterwards, we'd provide a proof of the following.

For matrix:

[
  [G,  T,  0], // Open K'
  [0,  0,  V], // Open B
  [U,  0,  0], // Create xU as a hint, Z, to prove for bxU without revealing bU
  [I', 0, -Z]  // x (H(K) + bU) - bxU = x H(K)
]

The multi-scalar-multiplication by the consistent row of scalars:

[x, a, b]

The output is:

[
  K',
  B,
  Z,
  I
]

where Z is part of the proof data (formed as x U) and I is the key image.

This proves linkability, and again performs spend-authorization so long as the message is part of the proof's transcript.
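To make the algebra concrete, here is a toy numeric check of that matrix relation (integers mod a prime stand in for curve points; generator names follow the above; this only verifies the equations, nothing about soundness):

```python
import secrets

q = 2**127 - 1
G, T, U, V, HK = (secrets.randbelow(q - 1) + 1 for _ in range(5))  # HK ~ hash_to_point(K)
x, a, b = (secrets.randbelow(q) for _ in range(3))

K  = x * G % q
Kp = (K + a * T) % q          # K' = K + aT
Ip = (HK + b * U) % q         # I' = hash_to_point(K) + bU
B  = b * V % q                # B  = bV
Z  = x * U % q                # hint included in the proof data
I  = x * HK % q               # expected key image x * hash_to_point(K)

matrix  = [[G, T, 0], [0, 0, V], [U, 0, 0], [Ip, 0, (-Z) % q]]
scalars = [x, a, b]           # the consistent row of scalars
outputs = [sum(s * e for s, e in zip(scalars, row)) % q for row in matrix]
assert outputs == [Kp, B, Z, I]   # last row: x(H(K) + bU) - b(xU) = x H(K)
```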

Integrity

Obviously, implementation must be done correctly, and the proving system (GBPs) must be proven secure. Forging a member requires breaking the Merkle tree OR r, r**-1 consistency (for the original proposal) or bU, bV consistency (for the current proposal). The original proposal also assumes the trivial r = 0 case is banned, which it already should be by the rule against key images of identity (though we'd still explicitly prevent it).

Privacy

The privacy of the originally proposed scheme relies on it being infeasible to link K', H' to K, H(K). The most trivial way to solve this is by checking if DH(K, H(K)) == DH(K', H'), which is presumably reducible to the Decisional Diffie-Hellman. The Decisional Diffie-Hellman game asks if DH(A, B) == C. One who can check if two Diffie-Hellmans would be equivalent can check if a single Diffie-Hellman is equivalent via DH(A, B) == DH(C, G).
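As a sketch of that reduction (my notation): given a DDH instance $(A, B, C)$ with $A = aG$ and $B = bG$, note that $\mathrm{DH}(C, G) = C$, so

$$\mathrm{DH}(C, G) = C \quad\Longrightarrow\quad \big(\mathrm{DH}(A, B) \stackrel{?}{=} \mathrm{DH}(C, G)\big) \;\equiv\; \big(C \stackrel{?}{=} abG\big)$$

i.e. an oracle deciding equality of two Diffie-Hellmans decides the single DDH instance.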

The second scheme's privacy is much more clear-cut. Given the commitment bV, one must calculate bU, which is the Computational Diffie-Hellman (with the caveat Monero key images are currently considered private under the weaker Decisional Diffie-Hellman problem, so this being stronger is irrelevant).

Features

Not only would this introduce FCMPs to Monero, it would also enable transaction chaining and post-proved membership if we don't bind the membership proof into the secondary proof's (whether DLEq or GSP) transcript.

Relation to Seraphis

Seraphis offers more efficient FCMPs, should provide forward privacy, and moves from hacked-together formal properties inherent to RingCT to formalized-from-the-start properties.

Unfortunately, Seraphis has been estimated as 2 to 5 years out by UkoeHB, with inclusion of FCMPs making it closer to 5 years. I disagree with that estimate, yet jberman estimated 3 years themselves. The goal of this is to solve the privacy issues solvable under RingCT, buying time to do better regarding design, performance/scalability, and the unsolvable issues (forward privacy). This may also be of independent merit.

Performance

I previously estimated Full-Chain Membership Proofs to be 35ms in a batch of 10. This was evaluated over the "pasta curves" with Bulletproofs+. The arithmetic-circuit argument for Bulletproofs, which we are currently using (or rather, Generalized Bulletproofs, a derivative thereof), should be twice as fast due to the relation to the underlying proof. This would mean ~18ms within a batch of 10.

Generalized Bulletproofs also notably reduces the amount of gates we use in circuit. Prior, we spent several gates entering the divisor into the circuit (a bunch of numbers which had to be entered somehow). Now, with Generalized Bulletproofs, we can simply provide an additional Pedersen Vector Commitment (32 bytes) to enter the divisor. This slashes the size of the circuit.

By adding SA+L, we'll have the additional performance costs within the first layer of:

  1. Proving for a tri-set membership.
  2. Doing multiple unblinds.
  3. Proving consistency of bU, bV or r, r**-1.

I'm expecting this makes the first layer ~3x more expensive.

So, from 1+1+1+1 at 35ms, to 1+1+1+1 at ~18ms, to 3+1+1+1 at ~31ms. That's ignoring the reduction in gates offered by GBPs.

Steps and Timeline

There are several different tracks along which we can move, and we should move along in parallel.

Implementation

  1. Implement GBPs.

This has already been done and solely needs optimizations performed.

  2. Implement an amenable framework for building arithmetic circuits.

This has technically been done, yet needs some love.

  3. Implement a library for elliptic curve divisors.

Technically done, yet with an edge case that should be fixed.

  4. Implement the various gadgets (set membership, index within set, PoK of DLog, proof of DLog).

This was done as needed for FCMPs, not as needed for FCMP+SA+L, which has some distinctions (multi-set membership, done by getting the index of each item within each set and checking the indices are equal, and proof of DLog).

  5. Implement the FCMP+SA+L circuit.

This was done for FCMPs yet not FCMP+SA+L.

  6. Implement the secondary proof.

This isn't too notable; a polished implementation for the second case is hopefully within a few days of work. The first case has already been implemented.

  7. Implement the necessary towering curves.

This is trivial to do over a generic integer library, yet performance effectively requires a dedicated implementation be done.

  8. Integrate into Monero.

Formal Security

  1. Formally prove the soundness, completeness, and zero-knowledge properties of GBPs.

We can do this now. Diego claimed a deliverable, or lack thereof if a failure occurs, could happen in one month.

  2. Have the divisor technique, previously published by Eagen, reviewed.

This is possible now.

  3. Formally prove the correctness of the "gadgets", as logical.

This is possible with just a formal description of the circuit, and the gadgets contained within. The Curve Trees paper did specify their gadgets, yet we have a notably different construction due to the usage of divisors and the extensions proposed here.

  4. Formally prove the correctness of the secondary proof.

The first proposal uses a DLEq which has existed since Chaum-Pedersen's 1993 work. The second uses a proof I frequently see referred to as a "Generalized Schnorr Protocol", despite my personal dislike for the ascription. I wouldn't be surprised if it was already proven.

Practical Security

  1. Audit the implementation of GBPs.

This is possible once the further optimizations are done, and the formal proofs exist.

  2. Audit the circuit framework.

This is possible once the framework is cleaned to a point we're happy with.

  3. Audit the EC divisor library.

This is possible as soon as the standing edge case is resolved.

  4. Audit the implementation of the gadgets.

This should be trivial and possible as soon as the gadgets are formally proven.

  5. Audit the implementations of the towering curves.

This is possible as soon as the curves are implemented. If we implement them with a constant-time integer library, we'd solely audit that.

  6. Audit the circuit.

  7. Audit the secondary proof.

This could be audited as soon as it's implemented, which again, should be trivial to implement.

  8. Audit the integration.

Contribution to Seraphis

Creation of FCMP+SA+L effectively does all the work necessary for FCMPs with Seraphis with regards to FCMPs themselves. It'd provide the proving system, gadgets, review/audits, and most of the composition work. The only additions would be the curve cycle libraries (if we switched curves), the cross-group DLEq we'd move commitments with, and the new integration (into Seraphis instead of into RingCT).

Steps Forward

We'd have to agree on which proposal to move forward with, if we were to move forward.

While the first proposal was the one originally posited, it remains here solely for historic reasons. It appears to perform one less PoK of DLog, making it faster, yet it requires using two private points as generators within Proof of (Knowledge of) DLog statements. Such proofs are much more performant when the generator being used is public.

The second proposal also has the benefit of a much stronger argument for privacy, and uses additive randomness (instead of multiplicative).

We'd then need various implementation work done. I can personally implement the curve libraries using crypto-bigint, yet I cannot implement tailored implementations as likely needed to maintain the performance target. Someone else would need to step up.

I believe I can have all the implementation done, besides integration into Monero, within a few months. The actual integration into Monero would be:

  1. Having Monero build the tree.
  2. Having wallet2 call prove via the Rust library.
  3. Adding RPC routes to get the current tree root and the path of a member of the tree (where the DSA would be used to request multiple paths so as not to reveal the output being spent).
  4. Having monerod call verify via the Rust library.

There'd be no wallet/address/privacy set migration. All of the additional curves would be contained within the proof.

I cannot estimate/guarantee the timeline for the formal analysis side of things. I'd hope that by clearly defining components, the rate at which review occurs is not bottlenecked by the rate at which development occurs. That doesn't change that we'd have to efficiently manage the timeline and definition of components. I'd hope for, in the best case, development + review/audits of the building blocks in 3-6 months, targeting deployment in ~12 months. Beyond myself, the only development necessary is of dedicated curve implementations (towering curves having already been found by tevador) and the integration into Monero (which jberman already started for FCMPs into Seraphis).

@kayabaNerve

but wouldn't this also mean providing 2 addresses?

Under my proposal, only the new address would need to be provided. They're indistinguishable and have an identical sending process. I make no guarantees my proposal would not impact the relevant JAMTIS proposal which only declares itself compatible with the current addresses.

Does this mean this adversary can assert the existence of unspent non-FCMP outputs, and when they are spent? I.e. there's 2 output sets before entering the main FCMP set?

Sorry, but this entire section isn't really sufficiently well formed for me to reply to its questions/comments. I'd rather clarify things from the start, and if you still have questions, follow up there.

There's currently the Ring outputs and the RingCT outputs. Both of these are non-FS and continue to be non-FS.

After FCMPs, FS outputs become possible. You may still make non-FS outputs yet may also make FS outputs (if there's an address for FS outputs or a protocol for FS outputs).

A historical non-FS output can have its linking tag calculated. Once calculated, an adversary can wait for it to appear on-chain.

A FS output cannot have its linking tag calculated. It has as many linking tag discrete logarithms as scalars for the curve. If you assume all outputs aren't FS, setting the new T term to have a coefficient of 0, you can obtain the linking tag it'd have if it wasn't FS. You can then wait and see if that linking tag appears on chain (confirming it isn't a FS output OR another entity has a solution for the discrete log problem at time of appearance on-chain).

Accordingly, an adversary with a discrete log oracle cannot differentiate unspent non-FS outputs and FS outputs. They both have linking tag discrete logarithms recoverable if you assume the T term has a coefficient of 0. Both will not have those linking tags actually appear on-chain. You can only differentiate spent non-FS outputs as those will have their linking tags appear on-chain.

This assumes we don't deploy FS at a hard fork boundary, which it sounds like we will with the above JAMTIS proposal. My proposal enables it to be done whenever without new wallet protocols (not to suggest it's better than the JAMTIS proposal or still an active candidate; just to note how these discussions formed).

@j-berman

How is the tree planned to be stored? Additional tables in the main database?

Yep, here's a draft schema:

/* DB schema:
 *
 * Table      Key        Data                               Properties
 * -----      ---        ----                               ----------
 * leaves     leaf_idx   {O.x, I.x, C.x}                    Integer key, no dups, fixed size, sorted by `leaf_idx`
 * branches   layer_idx  [{branch_idx, branch_hash}...]     Integer key, no dups, fixed size, sorted by `layer_idx` first, `branch_idx` second
 */
  • In the branches table:
    • The largest layer_idx in the db == tree root
    • The largest layer_idx in the db should only have a single record in the db, with branch_idx == 0
    • A branch refers to the hash of a chunk of w_c elements in layer_idx-1 (or, if layer_idx == 0, in the leaves table)
    • branch_idx == element idx in layer_idx-1 / w_c
    • branch_hash is 32 bytes that can either be a Helios point or Selene point
      • even layer_idx == selene
      • odd layer_idx == helios
  • This table approach should enable efficient queries for a path by leaf_idx (all db reads needed are known immediately, so reads can in theory be made in parallel); see the sketch after this list.
    • The reason the branches table uses layer_idx as a key, and uses value [{branch_idx, branch_hash}...] is to optimize the same way output_amounts table does it (which is optimized for reading outputs by amount and global output ID). The primary index is layer_idx (like amount) and secondary index is branch_idx (like output_id). Planning on doing some more perf testing on this approach as well.
  • The block header stores the tree root hash at that block, and the end leaf_idx added to the tree for that block.
  • We'll also need a new table to keep track of locked outputs sorted by unlock_time.
  • We'll also want a migration for key images also described here.
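A hypothetical sketch of those read locations (the function name, chunk widths, and layer count are made up; it only illustrates that each layer's branch_idx is an integer division of the index in the layer below):

```python
def path_read_locations(leaf_idx, layer_widths):
    """Return (layer_idx, branch_idx) pairs to read, bottom layer first."""
    reads, element_idx = [], leaf_idx
    for layer_idx, w_c in enumerate(layer_widths):
        branch_idx = element_idx // w_c   # branch_idx == element idx in layer below / w_c
        reads.append((layer_idx, branch_idx))
        element_idx = branch_idx          # the branch hash is an element of the next layer
    return reads

# e.g. a toy 3-layer tree with chunk widths 32, 16, 16:
print(path_read_locations(leaf_idx=123_456, layer_widths=[32, 16, 16]))
# [(0, 3858), (1, 241), (2, 15)]
```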

For unsynced nodes, would tree operations occur alongside block downloading + verification?

Yep

For synced nodes, will new post-FCMP++ nodes have to stall on startup for a bit while generating the whole tree? Would this be done as a database migration?

I initially figured nodes would stall too; @kayabaNerve proposed implementing the migration as an async background task, which would only stall the node if the tree is not finished constructing at the fork height where FCMP's begin. Sounds like a solid idea to me.

For context, the last time migration code was touched was by mooo 5 years ago. I'm unsure if there's anyone currently willing to do it, although as far as I can tell it's relatively straightforward.

I included the migration of cryptonote outputs to the tree as part of my CCS proposal :)

Assuming timelocks aren't banned before FCMP++, how exactly would this be done? Convert time to blocks by dividing by 120 and add onto the current block height?

To clarify, the timestamp timelocks are unix timestamps for when an output should unlock. So we can do something like: take the timestamp from some block N, extrapolate block times into the future from that starting point using 120s for each block, then convert timestamps to unlock at their respective extrapolated block.

Thinking on it some more.. it's probably fairly straightforward to have a separate table for timestamp locked outputs sorted by unlock_time also. Then when adding a block, check the lowest unlock_time output in the table; if the output is unlocked using get_adjusted_time at that height, remove from table and add to tree, and continue iterating removing outputs from the table until encountering an output that should remain locked at that block's get_adjusted_time. This way we wouldn't need to do the conversion from timestamp to block.

@tevador

tevador commented May 13, 2024

Is it as simple as adding this key + the current incoming key for full view-only wallets to exist? How would current wallets migrate to this? They can generate a new key and calculate o*G+y*T, but wouldn't this also mean providing 2 addresses?

Outgoing view keys are only planned for the new Jamtis address format. Legacy CryptoNote addresses will not have outgoing view keys.

Existing wallets cannot be safely migrated to support outgoing view keys because that would change all their existing addresses.

@kayabaNerve

@tevador I actually would like to distinctly follow up with you on if a wallet whose current main addresses are (sG, vG) will survive JAMTIS's backwards compatibility if malleated to (sG + yT, vG), as I originally proposed. While I hear you existing wallets cannot simply move, I'm personally curious about developing wallet software outside a HF boundary which achieves FS in such a manner for new wallets (though I don't want to have said addresses bork at time of JAMTIS).

@tevador

tevador commented May 13, 2024

Forward-secrecy can be achieved at the time of FCMP activation with a tiny change in the sender-receiver protocol. The sender would simply add a T-term to the one-time address. This would bring immediate forward secrecy for everyone and not break existing wallets.

I do not support creating any additional new wallet types before Jamtis.

@kayabaNerve

In that case, mind documenting the exact proposed change and can we circle in @j-berman so we can push for such a change? Presumably, it's just an extra derivation off the shared secret?

Slight caveat: that achieves weaker forward secrecy, which I note now in case such thoughts also impact JAMTIS. The lack of any binomial components in the existing addresses allows breaking known addresses (whereas if there was such an additional term as in my original proposal, you can identify sends to such addresses, yet you cannot identify their spends).

@tevador

tevador commented May 13, 2024

Slight caveat: that achieves weaker forward secrecy, which I note now in case such thoughts also impact JAMTIS. The lack of any binomial components in the existing addresses allows breaking known addresses (whereas if there was such an additional term as in my original proposal, you can identify sends to such addresses, yet you cannot identify their spends).

Forward secrecy always assumes that the DLP-breaking adversary doesn't know your address. That applies to both of the discussed cases. Regardless of any T-terms present in the address, the adversary can simply extract the secret view keys from the address, which makes any attempts for forward secrecy moot unless we migrate to a PQ-secure key exchange.

@kayabaNerve

kayabaNerve commented May 13, 2024 via email

@tevador

tevador commented May 13, 2024

We should not overestimate the forward-secrecy capabilities in case of leaked addresses. For example, the adversary can detect spends to known addresses (they will see a change output going to the sending wallet and a payment output going to the receiving wallet).

I want to reiterate that there is no safe way to malleate the keys of legacy addresses. Adding T-terms to one-time addresses achieves what is generally called forward-secrecy, is backwards compatible with existing addresses and forward-compatible with Jamtis. I don't see any other viable solutions.

@kayabaNerve

Fair. I'll circle back to

In that case, mind documenting the exact proposed change and can we circle in @j-berman so we can push for such a change? Presumably, it's just an extra derivation off the shared secret?

For the potential at-time-of-hard-fork solution.

@tevador

tevador commented May 13, 2024

In that case, mind documenting the exact proposed change

It should be relatively simple.

variable   description
K_d        "key derivation" (sender-receiver shared key)
idx        output index
K_1        (sub)address public spend key
K_o        "one-time address" (output key)

Current one-time address derivation:

k_g = hash_to_scalar(K_d || idx)
K_o = K_1 + k_g G

Proposed change after the FCMP-fork:

k_g = hash_to_scalar(G || K_d || idx)
k_t = hash_to_scalar(T || K_d || idx)
K_o = K_1 + k_g G + k_t T
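A minimal sketch of that change (toy group of integers mod a prime in place of Ed25519, SHA-256 in place of Monero's hash_to_scalar, and byte labels b"G"/b"T" standing in for the generator prefixes; encodings are illustrative, not the real wire format):

```python
import hashlib, secrets

q = 2**127 - 1
G, T = (secrets.randbelow(q - 1) + 1 for _ in range(2))

def hash_to_scalar(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p if isinstance(p, bytes) else int(p).to_bytes(32, "little"))
    return int.from_bytes(h.digest(), "little") % q

K_1 = secrets.randbelow(q) * G % q   # (sub)address public spend key
K_d = secrets.randbelow(q)           # sender-receiver shared key (toy)
idx = 0                              # output index

# Current derivation: K_o = K_1 + k_g G
k_g_old = hash_to_scalar(K_d, idx)
K_o_old = (K_1 + k_g_old * G) % q

# Proposed post-fork derivation: K_o = K_1 + k_g G + k_t T
k_g = hash_to_scalar(b"G", K_d, idx)
k_t = hash_to_scalar(b"T", K_d, idx)
K_o = (K_1 + k_g * G + k_t * T) % q
```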

@kayabaNerve

We can eliminate the burning bug while we're at it by also prefixing the derivation with the first input's key image (modifying the shared key derivation). I'll leave further thoughts/discussions on this to @j-berman for now.

@tevador

tevador commented May 13, 2024

We can eliminate the burning bug while we're at it

While I'm not saying I'm against this, we need to be careful about scope creep.

@tevador

tevador commented May 13, 2024

Thinking on it some more.. it's probably fairly straightforward to have a separate table for timestamp locked outputs sorted by unlock_time also. Then when adding a block, check the lowest unlock_time output in the table; if the output is unlocked using get_adjusted_time at that height, remove from table and add to tree, and continue iterating removing outputs from the table until encountering an output that should remain locked at that block's get_adjusted_time. This way we wouldn't need to do the conversion from timestamp to block.

The problem is that get_adjusted_time is not monotonic. Adding a new block to the blockchain can in some cases result in outputs becoming locked again, unnecessarily increasing complexity.

I propose the following to be implemented at the time of the fork:

  1. Reduce the coinbase lock time to the default of 10 blocks that applies to all spends. This was discussed in research-lab#104.
  2. Ban new time locks. This would be done by a consensus rule mandating unlock_time to be zero as a natural extension of monero#9151.
  3. Convert all time-based legacy locks to height-based locks as unlock_height = height + (unlock_time - get_adjusted_time(height)) / DIFFICULTY_TARGET_V2, where height is the block height where the time-locked transaction was confirmed.
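A sketch of item 3 (integer division and the clamp for already-unlocked outputs are my assumptions; get_adjusted_time here is a stand-in for the node's adjusted-time function):

```python
DIFFICULTY_TARGET_V2 = 120  # seconds per block

def convert_time_lock_to_height(unlock_time, height, get_adjusted_time):
    """height = block where the time-locked transaction was confirmed."""
    remaining = max(0, unlock_time - get_adjusted_time(height))
    return height + remaining // DIFFICULTY_TARGET_V2

# e.g. an output locked until 1 hour past its block's adjusted time
# unlocks 30 blocks later:
print(convert_time_lock_to_height(1_700_003_600, 3_000_000, lambda h: 1_700_000_000))
# 3000030
```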

@j-berman

j-berman commented May 13, 2024

I propose the following to be implemented at the time of the fork

I'm fine with this plan. Getting rid of the feature is still a +1 in my view, as it's more likely to cause harm than benefit given how it's been used.

Adding a new block to the blockchain can in some cases result in outputs becoming locked again, unnecessarily increasing complexity.

Clarifying that with the approach that maintains the existing time-based locks, the new rule at the fork would become: the first block whose get_adjusted_time unlocks the time-based timelock is the block in which the output unlocks, and it is guaranteed unlocked from that point on. Either way we're introducing a new rule dictating how the output is unlocked.

Considering how little the feature is used and there's rough agreement to deprecate it, I think the optimal route is the path of least resistance and least complexity.

In that vein, I think converting time-based to block-based may still be optimal. I think the picture will be clearer upon implementing handling block-based timelocks first.

@hinto-janai

I'd rather clarify things from the start, and if you still have questions, follow up there.

Sorry, not sure how to word things correctly. Given output movement like this:

non-FS output (A) ---> FS output (B) ---> output (C)

are these statements correct?:

  • A is known to link to B
  • B is known to come from A
  • C is known to not come from A
  • C is known to come from some FS output (but not necessarily B)

@j-berman thanks, details are appreciated.

We'll also want a migration for key images also described here

Ah okay, I thought the discussion here led to this not being needed.

which would only stall the node if the tree is not finished constructing at the fork height where FCMP's begin.

So there will be leeway before the fork height to give nodes time to build the tree?

Thinking on it some more.. it's probably fairly straightforward to have a separate table for timestamp locked outputs sorted by unlock_time also. Then when adding a block, check the lowest unlock_time output in the table; if the output is unlocked using get_adjusted_time at that height, remove from table and add to tree, and continue iterating removing outputs from the table until encountering an output that should remain locked at that block's get_adjusted_time. This way we wouldn't need to do the conversion from timestamp to block.

The problem is that get_adjusted_time is not monotonic. Adding a new block to the blockchain can in some cases result in outputs becoming locked again, unnecessarily increasing complexity.

There's already up to 120 seconds of leeway for timelocks, would this be enough for the proposed table to be accurate enough?

@tevador

tevador commented May 13, 2024

Clarifying that with the approach that maintains the existing time-based locks, the new rule at the fork would become: the first block whose get_adjusted_time unlocks the time-based timelock is the block in which the output unlocks. Either way we're introducing a new rule dictating how the output is unlocked.

Fair point. Since we are already changing the rule, it makes sense to go with the simpler option that doesn't need a call to get_adjusted_time for every future block. I would also expect the code to be simpler if all historical time-locks are treated the same.

@j-berman

So there will be leeway before the fork height to give nodes time to build the tree?

Ideally we'd release monerod containing the update well in advance of the fork height (last fork v18 release was released ~1 month in advance IIRC). So users could update their nodes, and it would start building the tree in the background on startup.

it makes sense to go with the simpler option

I lean toward converting time-based to block-based as well.

There's already up to 120 seconds of leeway for timelocks, would this be enough for the proposed table to be accurate enough?

We can also add an extra block like that for the extrapolated block, but imo I think the answer to "is it accurate enough" is likely yes. It's probably worth qualifying that with some analysis of block's adjusted times in practice, but the get_adjusted_time code appears fairly reasonable to me.

@kayabaNerve

kayabaNerve commented May 13, 2024 via email

@kayabaNerve

kayabaNerve commented May 13, 2024 via email

@kayabaNerve

Presumably too late now, but we could've squashed $R$ into $\tilde{O}$. It'd save 32 bytes from the input tuple, save a point subtraction when prepping the GSP out-of-circuit. In-circuit, we'd save a discrete log proof (and with it, two VCs, which are 64 bytes and non-trivial re: GBP perf, prob a few percent).

We can ignore this for now and follow up as things continue (it'd be extending the current composition review, yet we don't have a requirement to do things perfectly contiguously and can revisit in a few weeks or even couple months).
