@kayabaNerve
Last active May 13, 2024 19:24
FCMP+SA+L

Full-Chain Membership Proofs + Spend Authorization + Linkability

This proposes an extension to FCMPs to make them a drop-in replacement for the existing CLSAG. In order to be such a replacement, the proof must handle membership (inherent to FCMPs), spend authorization, and linkability.

Terminology

  • FCMP: Full Chain Membership Proof
  • GBP: Generalized Bulletproof, a modified Bulletproof supporting Pedersen Vector Commitments in its arithmetic circuit proof. Security proofs are currently being worked on by Cypher Stack.
  • GSP: Generalized Schnorr Protocol, a protocol proven in 2009, for proving: For an m*n matrix
    [
        [A, B, C],
        [D, E, F]
    ]
    
    And an n-length row
    [x, y, z]
    
    The m-length output column:
    [
      T,
      U,
    ]
    
    is
    [
      xA + yB + zC,
      xD + yE + zF,
    ]
    
    This is useful to prove knowledge of the opening of a vector commitment and consistency between vector commitments. A toy sketch of this statement follows the list.
  • BP+: Bulletproof+. A weighted-inner-product proof currently in use in Monero.
  • F-S: Forward secret. An adversary who can solve for the elliptic curve discrete log problem cannot find the spent output.
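
For intuition, here is a minimal Python sketch of the statement a GSP proves (not the proof itself), modelling group elements as toy integers mod p under addition rather than elliptic-curve points:

import random

p = 2**255 - 19  # any prime works for this toy model

# "Group elements": the matrix [[A, B, C], [D, E, F]] and the witness row [x, y, z].
matrix = [[random.randrange(p) for _ in range(3)] for _ in range(2)]
row = [random.randrange(p) for _ in range(3)]

def evaluate(matrix, row):
    # Each output is the multi-scalar multiplication of the witness row against one matrix row.
    return [sum(s * g for s, g in zip(row, m_row)) % p for m_row in matrix]

# The GSP proves knowledge of the row such that this output column equals the claimed [T, U].
T, U = evaluate(matrix, row)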

Recap

FCMPs, when instantiated with GBPs, are an O(n) proof of an O(log m) program (where m is the number of members in the set). The program is a Merkle tree proof, where the same layer program is executed multiple times. The layer program is summarizable as follows, with a rough sketch after the list:

  1. Unblind the Pedersen Commitment input.
  2. Verify the unblinded variable is present in the layer being hashed.
  3. Hash the layer, outputting a new Pedersen Vector Commitment (blinding the hash for the layer).
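
As a rough illustration only (the real layer program runs over elliptic-curve points inside the arithmetic circuit, and layer_hash is a hypothetical placeholder), the control flow is:

def layer_program(blinded_input, blinding, layer_elements, layer_hash, new_blinding):
    unblinded = blinded_input - blinding               # 1. unblind the input commitment
    assert unblinded in layer_elements                 # 2. the value must appear in this layer
    return layer_hash(layer_elements) + new_blinding   # 3. blinded hash of the layer (a new PVC)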

For the first layer, under Seraphis, the input would be the re-randomized squashed-enote (effectively a Pedersen Vector Commitment). The "unblinding" would remove the re-randomization and continue.

The final layer would use randomness = 0 such that the output Pedersen Vector Commitment is the tree root (or the protocol would take the difference and require a discrete log PoK for it).

Difficulty

The naive solution would be to open K, the output key, inside the proof, then prove for the key image inside the proof. This would require hardware wallets to perform the FCMP proof, which is extremely undesirable/entirely unacceptable. Accordingly, the goal is for the membership proof to perform re-randomization without knowledge of x, letting a second proof prove for spend authorization and the key image.

Solution

Instead of building a Merkle tree of squashed enotes, we build a Merkle tree with elements (K, hash_to_point(K), C) where C is the amount commitment. We output re-randomized versions of all three and then prove the key image correctly formed.

Originally Proposed Modifications for Deployment During RingCT

EDIT: These should not be moved forward with. The alternative modifications in the next section achieve much better properties and are the basis of the forward-secret variant posted below. This is only present as it was discussed below and therefore would be confusing to remove entirely.

For the first layer, we don't take in the re-randomized squashed-enote (as this is pre-Seraphis), yet (r K, r**-1 H(K), C + a G). We denote this (K', H', C').

The first layer does not perform the traditional unblinding (addition of a G for a known a) for K', H'. Instead, it multiplies K by r (checking it's equal to K') and H' by r (checking it's equal to H(K)). We then perform the traditional unblinding of C' and check membership of (K, H(K), C).

This does not prove the spend is authorized nor that any claimed linking tag is correctly formed. It solely shows that the K', H' are derived as expected from a member of the tree.

We then extend this with a Discrete Log Equality proof for K', I over G, H'. The discrete logarithm of K' is r x, which we notate x'. x' H' expands to r x r**-1 H(K), with the r terms cancelling out for x H(K), the expected key image. This introduces linkability into the proof. By having the DLEq proof's transcript include the message signed, it provides spend authorization as well.
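
For reference, a minimal Python sketch of such a Chaum-Pedersen DLEq proof, modelling the group additively mod p and the transcript hash with SHA-256 (toy stand-ins, not the production instantiation):

import hashlib, random

p = 2**255 - 19  # toy additive group of integers mod p

def transcript_hash(*parts):
    data = b"|".join(str(v).encode() for v in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

G, H_prime = random.randrange(1, p), random.randrange(1, p)  # G and H' = r**-1 H(K)
x_prime = random.randrange(1, p)                             # x' = r x, known to the prover
K_prime, I = x_prime * G % p, x_prime * H_prime % p          # K' and the key image

message = "message being signed"
k = random.randrange(1, p)                                   # nonce
R1, R2 = k * G % p, k * H_prime % p
c = transcript_hash(message, G, H_prime, K_prime, I, R1, R2) # Fiat-Shamir challenge
s = (k + c * x_prime) % p

# Verifier: the same discrete log links K' to G and I to H', with the message in the transcript.
assert s * G % p == (R1 + c * K_prime) % p
assert s * H_prime % p == (R2 + c * I) % p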

Proposed Modifications

With newly introduced generators T, U, V, and ignoring the amount commitment for now:

  • K' = K + aT
  • I' = hash_to_point(K) + bU
  • B = bV

The FCMP would only perform three standard unblinds (the third being C', ignored here), and show consistency between the bU term in I' and B. Instead of performing a DLEq proof after, we'd provide a proof of the following.

For matrix:

[
  [G,  T,  0], // Open K'
  [0,  0,  V], // Open B
  [U,  0,  0], // Create xU as a hint, Z, to prove for bxU without revealing bU
  [I', 0, -Z]  // x (H(K) + bU) + -bxU = x H(K)
]

The multi-scalar-multiplication by the consistent row of scalars:

[x, a, b]

The output is:

[
  K',
  B,
  Z,
  I
]

where Z is part of the proof data (formed as x U) and I is the key image.
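
A toy arithmetic check of the fourth row's algebra, modelling points as integers mod p under addition (with HK standing in for hash_to_point(K)):

import random

p = 2**255 - 19
G, T, U, V, HK = (random.randrange(1, p) for _ in range(5))
x, a, b = (random.randrange(1, p) for _ in range(3))

K_prime = (x * G + a * T) % p   # K' = K + aT with K = xG
I_prime = (HK + b * U) % p      # I' = hash_to_point(K) + bU
B = (b * V) % p
Z = (x * U) % p                 # the hint

# Row 4: x*I' - b*Z leaves exactly the key image x*hash_to_point(K).
assert (x * I_prime - b * Z) % p == (x * HK) % p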

This proves linkability, and again performs spend-authorization so long as the message is part of the proof's transcript.

Integrity

Obviously, implementation must be done correctly, and the proving system (GBPs) must be proven secure. Forging a member requires breaking the Merkle tree OR r, r**-1 consistency (for the original proposal) or bU, bV consistency (for the current proposal). The original proposal also assumes the trivial r = 0 case is banned, which it already should be by the rule against key images of identity (though we'd still explicitly prevent it).

Privacy

The privacy of the originally proposed scheme relies on it being infeasible to link K', H' to K, H(K). The most trivial way to solve this is by checking if DH(K, H(K)) == DH(K', H'), which is presumably reducible to the Decisional Diffie-Hellman. The Decisional Diffie-Hellman game asks if DH(A, B) == C. One who can check if two Diffie-Hellmans would be equivalent can check if a single Diffie-Hellman is equivalent via DH(A, B) == DH(C, G).

The second scheme's privacy is much more clear-cut. Given the commitment bV, one must calculate bU, which is the Computational Diffie-Hellman (with the caveat Monero key images are currently considered private under the weaker Decisional Diffie-Hellman problem, so this being stronger is irrelevant).

Features

Not only would this introduce FCMPs to Monero, it would also enable transaction chaining and post-proved membership if we don't bind the membership proof into the secondary proof's (whether DLEq or GSP) transcript.

Relation to Seraphis

Seraphis offers more efficient FCMPs, should provide forward privacy, and moves from hacked-together formal properties inherent to RingCT to formalized-from-the-start properties.

Unfortunately, Seraphis has been estimated as 2 to 5 years out by UkoeHB, with inclusion of FCMPs making it closer to 5 years. I disagree with that estimate, yet jberman estimated 3 years themselves. The goal of this is to solve the privacy issues solvable under RingCT, buying time to do better regarding design, performance/scalability, and the unsolvable issues (forward privacy). This may also be of independent merit.

Performance

I previously estimated Full-Chain Membership Proofs to be 35ms in a batch of 10. This was evaluated over the "pasta curves" with Bulletproofs+. The arithmetic-circuit argument for Bulletproofs, which we are currently using (or rather, Generalized Bulletproofs, a derivative thereof), should be twice as fast due to the relation to the underlying proof. This would mean ~18ms within a batch of 10.

Generalized Bulletproofs also notably reduces the number of gates we use in circuit. Prior, we spent several gates entering the divisor into the circuit (a bunch of numbers which had to be entered somehow). Now, with Generalized Bulletproofs, we can simply provide an additional Pedersen Vector Commitment (32 bytes) to enter the divisor. This slashes the size of the circuit.

By adding SA+L, we'll have the additional performance costs within the first layer of:

  1. Proving for a tri-set membership.
  2. Doing multiple unblinds.
  3. Proving consistency of bU, bV or r, r**-1.

I'm expecting this makes the first layer ~3x more expensive.

So, from 1+1+1+1 at 35ms, to 1+1+1+1 at 18ms, to 3+1+1+1 (~31ms). That's ignoring the reduction in gates offered by GBPs.

Steps and Timeline

There are several different tracks along which we can move, and we should move along in parallel.

Implementation

  1. Implement GBPs.

This has already been done and solely needs optimizations performed.

  2. Implement an amenable framework for building arithmetic circuits.

This has technically been done, yet needs some love.

  3. Implement a library for elliptic curve divisors.

Technically done, yet with an edge case that should be fixed.

  4. Implement the various gadgets (set membership, index within set, PoK of DLog, proof of DLog).

This was done as needed for FCMPs, not as needed for FCMP+SA+L which has some distinctions (multi-set membership, which is done via getting the index of each item within each set and checking equality, and proof of DLog).

  5. Implement the FCMP+SA+L circuit.

This was done for FCMPs yet not FCMP+SA+L.

  6. Implement the secondary proof.

This isn't too notable, with a polished implementation hopefully being within a few days of work for the second case. The first case was previously implemented.

  7. Implement the necessary towering curves.

This is trivial to do over a generic integer library, yet performance effectively requires a dedicated implementation be done.

  8. Integrate into Monero.

Formal Security

  1. Formally prove the soundness, completeness, and zero-knowledge properties of GBPs.

We can do this now. Diego claimed a deliverable, or lack thereof if a failure occurs, could happen in one month.

  2. Have the divisor-technique reviewed, having been prior published by Eagen.

This is possible now.

  1. Formally prove the correctness of the "gadgets", as logical.

This is possible with just a formal description of the circuit, and the gadgets contained within. The Curve Trees paper did specify their gadgets, yet we have a notably different construction due to the usage of divisors and the extensions proposed here.

  4. Formally prove the correctness of the secondary proof.

The first proposal uses a DLEq which has existed since Chaum-Pedersen's 1993 work. The second uses a proof I frequently see referred to as a "Generalized Schnorr Protocol", despite my personal dislike for the ascription. I wouldn't be surprised if it was already proven.

Practical Security

  1. Audit the implementation of GBPs.

This is possible once the further optimizations are done, and the formal proofs exist.

  2. Audit the circuit framework.

This is possible once the framework is cleaned to a point we're happy with.

  3. Audit the EC divisor library.

This is possible as soon as the standing edge case is resolved.

  4. Audit the implementation of the gadgets.

This should be trivial and possible as soon as the gadgets are formally proven.

  5. Audit the implementations of the towering curves.

This is possible as soon as the curves are implemented. If we implement them with a constant-time integer library, we'd solely audit that.

  6. Audit the circuit.

  7. Audit the secondary proof.

This could be audited as soon as it's implemented, which again, should be trivial to implement.

  8. Audit the integration.

Contribution to Seraphis

Creation of FCMP+SA+L effectively does all the work necessary for FCMPs with Seraphis with regards to FCMPs themselves. It'd provide the proving system, gadgets, review/audits, and most of the composition work. The only additions would be the curve cycle libraries (if we switched curves), the cross-group DLEq we'd move commitments with, and the new integration (into Seraphis instead of into RingCT).

Steps Forward

We'd have to agree on which proposal to move forward with, if we were to move forward.

While the first proposal was the one originally posited, it remains here solely for historic reasons. It appears to perform one less PoK of DLog, making it faster, yet it requires using two private points as generators within Proof of (Knowledge of) DLog statements. Such proofs are much more performant when the generator being used is public.

The second proposal also has the benefit of a much stronger argument for privacy, and uses additive randomness (instead of multiplicative).

We'd then need various implementation work done. I can personally implement the curve libraries using crypto-bigint, yet I cannot implement tailored implementations as likely needed to maintain the performance target. Someone else would need to step up.

I believe I can have all the implementation done, besides integration into Monero, within a few months. The actual integration into Monero would be:

  1. Having Monero build the tree.
  2. Having wallet2 call prove via the Rust library.
  3. Adding RPC routes to get the current tree root and the path of a member of the tree (where the DSA would be used to request multiple paths so as not to reveal the output being spent).
  4. Having monerod call verify via the Rust library.

There'd be no wallet/address/privacy set migration. All of the additional curves would be contained within the proof.

I cannot estimate/guarantee the timeline for the formal analysis side of things. I'd hope by clearly defining components, the rate at which review occurs is not bottlenecked by the rate at which development occurs. That doesn't change the fact that we'd have to efficiently manage the timeline and definition of components. I'd hope for, in the best case, development + review/audits of the building blocks in 3-6 months, targeting deployment in ~12 months. Beyond myself, the only development necessary is of dedicated curve implementations (towering curves having already been found by tevador) and the integration into Monero (which jberman already started for FCMPs into Seraphis).

@tevador

tevador commented Apr 3, 2024

I personally expected Seraphis to be ready much sooner. If the 3-5 year estimate is correct, then I fully support going this route of implementing a RingCT-compatible version of FCMP if it turns out to be feasible.

What is the transaction size estimate compared to current CLSAG-16? How does the verification time compare to current CLSAG-16?

@tevador

tevador commented Apr 3, 2024

which is presumably reducible to the Decisional Diffie-Hellman

Here is a proof.

Definitions

Let a "DDH oracle" be an algorithm that takes a quadruple of group elements [G,X,Y,Z] and returns:

  1. True if the quadruple has the form of [G,x*G,y*G,x*y*G] for some scalars x, y.
  2. False if the quadruple has the form of [G,x*G,y*G,z*G] for some scalars x, y, z.

Let a "FCMP oracle" be an algorithm that takes two tuples of group elements [K,J] and [K',J'] and returns

  1. True if K' = r*K and J' = (1/r)*J for some scalar r.
  2. False otherwise.

Proof part A: Constructing a FCMP oracle from a DDH oracle

By submitting the quadruple [K,K',J',J] to the DDH oracle, we get a FCMP oracle.

Let j be the scalar such that: J = j*K.

We can easily see that the DDH oracle will return True iff the FCMP oracle would return True because we can define:

G:=K
x:=r
y:=j/r

Then x*y*G = j*K = J.

Proof part B: Constructing a DDH oracle from a FCMP oracle

By submitting the tuples [G,Z] and [X,Y] to the FCMP oracle, we get a DDH oracle.

Let

X = x*G
Y = y*G
Z = z*G

The FCMP oracle will return True iff there is a scalar r such that

X = r*G
Y = (1/r)*Z

We can clearly see that this is the case: when z = x*y, we have r = x.
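
A toy numeric check of part B (again modelling the group additively mod p, so discrete logs are available by division; this only sanity-checks the algebra):

import random

p = 2**255 - 19
inv = lambda n: pow(n, -1, p)

G = random.randrange(1, p)
x, y = random.randrange(1, p), random.randrange(1, p)
X, Y, Z = x * G % p, y * G % p, x * y * G % p   # a "True" DDH instance

# FCMP oracle on [G, Z] and [X, Y]: is there an r with X = r*G and Y = (1/r)*Z?
r = X * inv(G) % p          # the only candidate, r = x
assert Y == inv(r) * Z % p  # holds because z = x*y implies r = x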

@Rucknium

Rucknium commented Apr 3, 2024

This proposal will be discussed at the Monero Research Lab meeting on Wed 03 April 2024, 17:00 UTC: monero-project/meta#986

@kayabaNerve

I'd estimate ~2.2kB per input at this time, @tevador, and I'd love to have you at the meeting in an hour to further discuss this.

@tevador

tevador commented Apr 3, 2024

towering curves having already been found by tevador

Here is my latest and probably final tower-cycle proposal:

p = 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed
q = 0x7fffffffffffffffffffffffffffffffbf7f782cb7656b586eb6d2727927c79f
Ep: y^2 = x^3 - 3*x + 15789920373731020205926570676277057129217619222203920395806844808978996083412 
Eq: y^2 = x^3 - 3*x + 50691664119640283727448954162351551669994268339720539671652090628799494505816

This is the first cycle I found that has 2^256 - 2*q < 2^128 (this allows for faster field arithmetic mod q on some architectures) and both curves have a curve constant of a = -3 (this allows for more efficient group law formulas).
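
A quick arithmetic check of the stated properties of the primes (this does not verify the curve orders themselves):

p = 0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed
q = 0x7fffffffffffffffffffffffffffffffbf7f782cb7656b586eb6d2727927c79f

assert p == 2**255 - 19        # the Curve25519 prime, so GF(p) code can be reused
assert 2**256 - 2*q < 2**128   # the property enabling faster field arithmetic mod q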

As for the actual implementation, I can't promise to help since it reportedly needs to be written in rust, but it should not take a lot of work:

  1. Efficient arithmetic in GF(p) can be taken from any Curve25519 open source library.
  2. Efficient arithmetic in GF(q) needs to be implemented since this specific prime is unlikely to be used by anyone else.
  3. Efficient group law for both Ep and Eq can be taken from any P-256 (secp256r1, NIST curve) open source library because it has the same curve form of y^2 = x^3 - 3*x + b (the three curves differ in the b constant, which is usually not used in group law formulas).

I'm planning to publish a commented script that produces this cycle to show that these are NUMS curves.

@kayabaNerve

To be quite honest, I simply don't have familiarity with implementing big ints and modular arithmetic 😅 I'd be fine implementing the point aspects over the various fields. I'd also be happy to advise on Rust/port some C code (which should be trivial) :)

Looking forward to the script, and thank you, tevador.

@kayabaNerve

This may allow outgoing view keys. Output keys are currently xG where the key image is x hash_to_point(xG). We don't actually check output keys are over G at time of creation though (we do inherently at time of spend right now).

This proposal, under the alternative modifications which make sense to move forward with, outputs a key xG + rZ, for some generator Z, requiring the signer know x and r to create the GSP we use as a signature. The intent was for r to be the randomness used within the circuit. If the re-randomized key was already xG + aZ, it'd be re-blinded by r to xG + (a+r)Z. Then, the GSP would be able to open it with x, (a+r), before proving the key image as x hash_to_point(K) (where K is xG + aZ).

This means that one cannot spend such output keys without knowing the a component, which should allow publishing the x component, which should allow anyone with x to calculate the key images for outputs.

The modified public spend key would be usable in the existing address format, without others being able to distinguish. This means all new addresses could opt into OVKs without segmenting the privacy pool/mandating a new format.

The primary concern I have is the redefinition of key images from dlog_G(K) hash_to_point(K) to term_G(K) hash_to_point(K). That needs to be proven secure under the current proposal regardless, as it does that naturally. If that's insecure, we can make the GSP prove the key is over G by outputting not just K + rZ, yet also K + rY, requiring opening both over G, Z and G, Y with consistent scalars.

@kayabaNerve

kayabaNerve commented Apr 6, 2024

My biggest concern with Monero after this is the lack of forward secrecy. With the current Monero, output keys are K = xG, and key images are I = x hash_to_point(K). A quantum computer will be able to solve for the discrete logarithm of K over G, then calculate the would-be key image to detect when it's spent.

If output keys are rewritten as xG + rZ after FCMPs are deployed (which I am not calling for), xG cannot be determined to calculate the key image ahead of time (as described above). When given a key image x hash_to_point(K), every single output key would have some r causing it to match.

Accordingly, the key image model can be made forward secret with the above key image re-definition (a side effect of my proposal which I now believe should be made a feature) and wallets eventually re-defining their output keys. The proof would still need to be forward secret though.

The current proposal isn't (again, for the alternate modifications). Providing bG, the hint, enables calculating bH, which enables calculating the key image generator from the blinded key image generator.

This does force some degree of modification to the above proposal, which I don't want to immediately endorse in the name of scope creep, despite how big of an accomplishment I'd believe this'd be. The circuit changes would be as simple as (for new generators W, X, Y, Z):

K' = K + aW
H' = hash_to_point(K) + bX
D = bY + dZ
C' = C + cG

which I believe there's space for within my current estimates (due to padding causing us to have leftover space).

The issue would be the opening proof. We'd need to remove a x bX term (where x is the private key), and I don't see a way to do that without providing a hint (xX) which isn't forward secret. I believe a more powerful, and more complex, proof would be needed which I don't want to immediately posit nor propose at this time.

This would be a worthwhile future evolution however.

@tevador

tevador commented Apr 6, 2024

Looking forward to the script, and thank you, tevador.

https://gist.github.com/tevador/4524c2092178df08996487d4e272b096

I updated the b constants of the curves to match the minimality criteria (i.e. the presented curves are the first curves that match the criteria in natural search order).

Note: the script can run for up to a few hours before it outputs our curves (depends on the number of CPU threads).

@tevador

tevador commented Apr 6, 2024

The modified public spend key would be usable in the existing address format, without others being able to distinguish. This means all new addresses could opt into OVKs without segmenting the privacy pool/mandating a new format.

While the public address format would stay the same, there would have to be significant wallet-side changes. The only feasible way would be to opt in for OVKs at wallet creation time when using the Polyseed mnemonic phrase, which can encode a bit flag "enable_ovk".

Why do we have to opt in at wallet creation time? Because enabling OVKs at any later time could cause the following issues:

  1. Inconsistent list of generated addresses. Existing addresses would suddenly change if the wallet was restored with OVKs enabled, if it was previously being used without OVKs. This could cause major confusion to users.
  2. Wallets could miss outputs that were sent to legacy addresses. This could be mitigated by wallets scanning for both versions of each address (with and without the a*Z component).

1. Wallets without OVKs

Legacy wallets and Polyseed wallets with "enable_ovk" set to false would only have two private keys k_v (the private view key) and k_s (the private spend key). Subaddress generation would be unchanged:

m = Hs("SubAddr" || k_v || account_index || subaddress_index_within_account)
Y = k_s*G + m*G
X = k_v*Y
Address = (X,Y)

2. Wallets with OVKs

New wallets with "enable_ovk" set to true would consist of four private keys:

  1. k_s, the private spend key
  2. k_vs = H(k_s), the private view-spent key
  3. k_vr = H(k_vs), the private view-received key
  4. s_ga = H(k_vr), the generate-address secret

The fourth key is not related to OVKs, but since we are already changing the private keys, we might as well separate the ability to generate addresses from the ability to view incoming payments, which merchants have been complaining about.

Subaddress generation would work as follows:

m_z = Hs("SubAddr-Z" || s_ga || account_index || subaddress_index_within_account)
m_g = Hs("SubAddr-G" || s_ga || account_index || subaddress_index_within_account)
Y = k_s*Z + m_z*Z + k_vs*G + m_g*G
X = k_vr*Y
Address = (X,Y)
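
A minimal Python sketch of this derivation, modelling the group additively mod p and both H and Hs with SHA-256 (toy stand-ins; the key and generator values are placeholders):

import hashlib

p = 2**255 - 19
G, Z = 2, 3  # toy "generators"

def Hs(*parts):
    data = b"||".join(str(v).encode() for v in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

k_s = 7            # private spend key (placeholder value)
k_vs = Hs(k_s)     # private view-spent key
k_vr = Hs(k_vs)    # private view-received key
s_ga = Hs(k_vr)    # generate-address secret

account_index, subaddress_index = 0, 1
m_z = Hs("SubAddr-Z", s_ga, account_index, subaddress_index)
m_g = Hs("SubAddr-G", s_ga, account_index, subaddress_index)
Y = (k_s * Z + m_z * Z + k_vs * G + m_g * G) % p
X = (k_vr * Y) % p
address = (X, Y)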

We would get 4 distinct wallet tiers:

Tier         | Secret | Public keys                                            | Off-chain capabilities    | On-chain capabilities
AddrGen      | s_ga   | k_s*Z, k_vs*G, k_vr*G, k_vr*Z, k_vr*k_s*Z, k_vr*k_vs*G | generate public addresses | none
ViewReceived | k_vr   | k_s*Z, k_vs*G                                          | all                       | view received
ViewAll      | k_vs   | k_s*Z                                                  | all                       | view received, view spent
Master       | k_s    | -                                                      | all                       | all

I guess this would be a "mini-Jamtis".

@tevador

tevador commented Apr 7, 2024

This does force some degree of modification to the above proposal, which I don't want to immediately endorse in the name of scope creep

I agree. Forward secrecy is already present in Seraphis and this is meant to be an interim solution. I think providing forward secrecy is out of scope of this proposal, especially if it requires a more complex proof or has slower verification time.

As for OVKs, I'm currently in favor of implementing them as a separate wallet type described above (possibly being enabled by default for newly generated wallets). The main reason is that it's only a wallet-side change that's invisible on the outside, so we can't prevent 3rd party wallets from implementing it. In that case, it's better to have official support for compatibility reasons.

@kayabaNerve

kayabaNerve commented Apr 8, 2024

Scope creeping (I played with it when I had some spare time, it's an interesting problem), this is amenable to forward secrecy. I'm defining forward secrecy as an adversary being unable to link outputs to the inputs spending them, nor to each other, even when there is a QC available. The QC is assumed to only have the information on-chain.

For new generators T, U, V (redoing notation yet again), with the key image generator labelled I, we'd output:

K' = xG + aT
I' = I + bU
B = bV + cT

This does extend the circuit (we have a new c variable), yet doesn't impact proof size/time due to current padding rules from my initial estimates.

BP+ is inherently a zkPoK for the relationship P = a G_bold + b H_bold + ayb G + r H, where a, G_bold, b, H_bold are vectors and their multiplications are actually inner products (the sum of each pair multiplied). ayb is the inner-product of three vectors, where y is a vector of powers of y. If y[0] = 1, this is not the weighted inner-product yet the standard inner-product relationship.

If we set a = [x], G_bold = [G], b = [b], H_bold = [V], G = U, H = T, y = 1, we'd obtain a proof for P = xG + bV + xbU + rT for some prover-specified x, b, r in two points and three scalars (it's quite close to the existing Schnorr proof as it's not doing a permutation argument over 64 rows in the vectors, and it's not converting the range proof statement to a WIP statement).

In order to ensure the P has the same x, b as K', B, we could open P in a GSP. Instead, we do P - K' - B. For an honest prover, this is (xG + bV + xbU + rT) - (xG + aT) - (bV + cT), or xbU + (r - a - c)T. For a dishonest prover who uses a distinct x/b, they will be left with a remaining G or V term. Accordingly, a prover can prove honesty by opening over U, T alone (without G, V terms).

That would be this following matrix/output column/row of scalars.

[
  [G, T, 0, 0]
  [0, 0, U, T],
  [I', 0, -U, 0],
]
[
  K',
  P',
  L
]
[x, a, xb, r-a-c]
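
A toy mod-p check of this algebra (points modelled as integers mod p under addition; I stands in for the key image generator hash_to_point(K)):

import random

p = 2**255 - 19
G, T, U, V, I = (random.randrange(1, p) for _ in range(5))
x, a, b, c, r = (random.randrange(1, p) for _ in range(5))

K_prime = (x * G + a * T) % p
I_prime = (I + b * U) % p
B = (b * V + c * T) % p
P = (x * G + b * V + x * b * U + r * T) % p   # the BP+ relation for an honest prover

P_prime = (P - K_prime - B) % p
assert P_prime == (x * b * U + (r - a - c) * T) % p    # no G or V terms remain
assert (x * I_prime - (x * b) * U) % p == (x * I) % p  # the third row recovers x*I, the key image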

This is forward secret per https://gist.github.com/kayabaNerve/e09ba62d4a6165ced006b037f1068d95. That script defines a prime field (insecure to malicious adversaries, of course, but that's not what we're modelling) in order to give us a discrete log oracle (modular inverse by whatever number we define the 'points' as). It creates an honest instance of the proof, then picks a random output J (specifically, a random number acting as the distinct key image generator for a distinct output).

The Python script proceeds to use the discrete log oracle to extract various scalar solutions and nonces, before rebuilding all of the nonce commitments. Since it is able to extract all scalars necessary for the proof to be valid, and since the commitments are identical, one with a discrete log oracle cannot differentiate I from J as both yield valid solutions.

This causes the BP+ + GSP composition to output 6 points and 7 scalars (416 bytes), with the FCMP proof outputting 3 points itself (ignoring the linking tag, which is not considered part of the proof). Only the BP+ + GSP composition would have to be done on a hardware wallet, and a more efficient version is presumably possible if one inlines the BP+ and the GSP. This commentary doesn't, as keeping them separate means we'd solely need to argue a composition, not a new proof (though one building on prior theory).


I'm fine immediately continuing on the non-F-S variant, in order to minimize contention. Despite that, I'm obligated to note:

  • The F-S variant achieves parity with Seraphis re: functionality (the non-F-S variant is only lacking in F-S)
  • It doesn't require a new anonymity pool
  • It doesn't require changing the address format (solely generating new addresses to gain the benefits described)
  • It doesn't invalidate old addresses
  • It can be rolled out incrementally (initially FCMPs, then F-S FCMPs with the same address protocol yet new generation methods, then a new address scheme if still believed worth it, then new TX versions removing TX extra... all over several HFs, instead of at once)
  • The F-S variant would also minimally increase development time

@jeffro256

Introduction

FCMP = Full-chain Membership Proofs (efficient signatures that let each input have the anonymity set of the entire chain)
FCMP-RCT = Full-chain Membership + Spend Authorization + Linking proofs on RingCT as described in this proposal.
F-S = Forward Secrecy, usually in reference to DLP solvers not being able to glean any tx information only from on-chain data.
OVK = Outgoing View Keys

To begin, I want to express how impressive @kayabaNerve's work is here that 1) FCMPs on RingCT is possible in the first place, 2) it can be modified to be made forward secret against a quantum adversary, and 3) it can be made to be so efficient. I'm certainly glad that this is an excellent option that exists for Monero development to take. However, this post is mainly against going forward with FCMP-RCT under the following large underlying assumption: For efficiency, security, cryptographic clarity, elliptic curve changes, perfect forward secrecy, or other inherent protocol properties, the network WILL want to migrate to Seraphis at SOME point. If this assumption is NOT true (which could possibly be the case since FCMP-RCT achieves feature parity with Seraphis with F-S added), then you can ignore this entire post altogether.

Quick Summary of Concerns with FCMP-RCT

  1. Though described as such, FCMP-RCT is NOT a drop-in replacement for CLSAGs from an engineering standpoint, as many non-trivial changes to the database, cryptonote core, tx format, tx construction code, tx validation logic, RPC interface, and wallet logic will have to be made, as well as additional crypto libs, that wouldn't have to be made with a more "typical" ring-like proof.
  2. Replacing ring signatures with FCMP-RCT will significantly increase technical debt for future protocol handling code and make future Seraphis integration harder.
  3. In practice, moving forward with FCMP-RCT will split development time away from Seraphis integration today.
  4. If FCMP-RCT is enacted, this will increase the dev community's reliance upon wallet2.
  5. FCMPs in general are likely overkill for dealing with decoy flooding attacks (FCMPs really shine against EAE, EABE, poisoned dust attacks, etc), so the recent sharp increase in tx count should not cause us to rashly jump to a FCMP solution.
  6. This CCS proposal related to this work asks for ~$375,000 worth of funding. AFAIK, that amount of resources has not been allocated towards Seraphis integration. If the stated reason for doing FCMP-RCT is to get FCMPs in before Seraphis, since Seraphis integration is taking too long, it makes more sense to instead allocate those resources directly to make FCMPs on Seraphis happen faster.
  7. FCMP-RCT has a much larger attack surface than FCMPs on Seraphis, since FCMP-RCT has to additionally prove spend authorization and linking, and as such, it should be noted that there is a greater security concern of a buggy implementation in FCMP-RCT than FCMPs on Seraphis.
  8. If FCMPs are implemented before Jamtis is implemented, then background syncing will necessarily consume more storage/RAM.

Seraphis Integration Progress

TODO

Expanding on FCMP-RCT concern points

"Though described as such, FCMP-RCT is NOT a drop-in replacement for CLSAGs from an engineering standpoint, as many non-trivial changes to the database, cryptonote core, tx format, tx construction code, tx validation logic, RPC interface, and wallet logic will have to be made, as well as additional crypto libs, that wouldn't have to be made with a more "typical" ring-like proof."

This can be contrasted to the upgrade from MLSAGs to CLSAGs, which is a true "drop-in" replacement. The upgrade to CLSAGs had all of the exact engineering properties of MLSAGs, we just needed to add in a field to the RingCT signatures, and call a different function for verification if that field was present. This is not the case with FCMPs (in any form, Seraphis or FCMP-RCT), since we must interact with the curve tree when constructing txs, proving txs, or modifying the chainstate. The same can be said about Seraphis if we choose to go for FCMPs on the first iteration, however, Seraphis lets us achieve basically all the features (membership proof delegation, better multisig, OVKs, tx chaining, F-S) without having to integrate FCMPs on the first go. In other words, Seraphis lets us integrate more gradually on the point of FCMPs. FCMP-RCT lets us integrate gradually in respect to F-S.

"Replacing ring signatures with FCMP-RCT will significantly increase technical debt for future protocol handling code and make future Seraphis integration harder."

This point is related to the first point. If we plan to do Seraphis in the future, we will effectively have 4 tx eras instead of 3, when it comes time to write code that can verify all cases. We would have: Pre-RingCT, RingCT, FCMP-RCT, and Seraphis, with the FCMP-RCT era being the shortest. Since we have to make changes to the tx format, even outside the RingCT signatures, and must keep additional database information, from a codebase standpoint, this makes the entire ruleset for full nodes much more complicated.

""In practice, moving forward with FCMP-RCT will split development time away from Seraphis integration today."

There are not a great number of people who can correctly integrate extensive changes into critical parts of Monero core infrastructure. Some of those people are working on Seraphis integration currently or Seraphis review. In order for FCMP-RCT to get the review work it deserves, this will necessarily slow down Seraphis integration in the current dev environment.

"If FCMP-RCT is enacted, this will increase the dev community's reliance upon wallet2."

The "No Wallet Left Behind" project, in addition to being the place to discuss/iterate Seraphis developments, has also become the room for discussing replacing wallet2, since large swathes of legacy wallet code have been re-written for Seraphis anyways. Only as I've delved deeper into looking at other Monero wallets' design, and wallet libs in other cryptocurrency projects, have I truely began to appreciate how bad wallet2's design is, and how significantly is it likely holding back Monero's adoption on many fronts. There are numerous, numerous efforts by developers in all different languages to create wrappers, or sometimes band new libs, around avoiding having to deal with wallet2. There is a wrapper around wallet2 in the core codebase (which completely changes the API). There are wrappers in different languages which change the API completely. There are ecosystem wrappers in the same language which help you avoid using wallet2. Yet wallet2, and Cryptonote scanning in general, is so complicated that almost all services use wallet2 underneath. Yet wallet2 is not modular, not light-weight, has no real way to meaningfully change transport methods without a re-write of the abstract http client, has few re-usable free functions, has no decent debugging tools, no documentation outside of a few scattered comments, etc, etc. This results in worse Monero services, and less Monero integration into the broader cryptocurrency ecosystem. Certainly, government regulation and blockchain surveillance companies have a large role to play in suppresing Monero adoption, but it's also hard to overstate how annoying it is to develop on Monero as an outsider compared to other currencies. The sooner we move away from wallet2, the faster we can remove technical barriers to adoption. The Seraphis integration project has already began significant work on this (see UkoeHB/monero#23), has a plan to phase out the internals of wallet2, and as such, would destroy an incredible amount of technical debt in the future at the cost of some dev time in the present.

"FCMPs in general are likely overkill for dealing with decoy flooding attacks (FCMPs really shine against EAE, EABE, poisioned dust attacks, etc), so the recent sharp increase in tx count should not cause us to rashly jump to a FCMP solution."

This discussion will need much more space and research than can be covered in this short post, but as I largely understand it, increasing the reference set past a certain point has very quickly diminishing returns when performing a decoy flood attack against the entire network. Let's say that the attacker can create 50% of the decoys on-chain: the difference between the effective ring size being 64 (128 under Seraphis * 50%) and half of the chain for privacy against non-counterparties is not that much. FCMPs would really change the game in regards to privacy when dealing with malicious counterparties (EAE, EABE, etc). I won't expand on this point much further, since the details could be discussed for days, but we shouldn't let the recent spam attacks push us to FCMPs too quickly at the expense of the future health of the protocol. We should move to FCMPs carefully and for the right reasons.

This CCS proposal related to this work asks for ~$375,000 worth of funding. AFAIK, that amount of resources has not been allocated towards Seraphis integration. If the stated reason for doing FCMP-RCT is to get FCMPs in before Seraphis, since Seraphis integration is taking too long, it makes more sense to instead allocate those resources directly to make FCMPs on Seraphis happen faster.

Fact: People want FCMPs
Fact: People want Seraphis
Fact: Seraphis integration is taking too long in some people's eyes
Fact: People have the capital and desire to push extensive protocol and infrastructure changes (FCMP-RCT) to happen soon

Conclusion: it would make sense if those resources were allocated to Seraphis integration

"FCMP-RCT has a much larger attack surface than FCMPs on Seraphis, since FCMP-RCT has to additionally prove spend authorization and linking, and as such, it should be noted that there is a greater security concern of a buggy implementation in FCMP-RCT than FCMPs on Seraphis."

FCMPs under Seraphis only have to prove membership, which is the property that the input enote the proof is talking about simply exists on-chain. This proof system must do all the things that a ring signature under RingCT must do: it must prove that one of the pubkeys in the reference set authorized this input, that this input has an amount commitment that corresponds to that input, that this pubkey has not been spent from before by proving it corresponds to a key image, and that the same key image corresponds to the same amount commitment. Seraphis replaces all of these proofs, as well as changing how key images are defined, but we get the nice cryptographic property that the proofs are compartmentalized into proving specific properties. Thus, if we are able to prove the security of the abstract Seraphis protocol, any new changes to the implementation of the Seraphis protocol (FCMPs, new iterations of FCMPs, new composition proofs, etc) don't have to satisfy all of the RingCT security properties at once, and are thus much less likely to have breaking bugs.

"If FCMPs are implemented before Jamtis is implemented, then background syncing will necessarily consume more storage/RAM."

This one is a smaller technical issue, but cryptonote background view-key syncers would have to store every single on-chain key image of every single tx until their spend key is brought back online. Versus today, the background process only has to store the key images of rings whose reference sets contain the index of an owned enote (which today with a ring size of 16 is about ~8x as many enotes as the user owns). This is because current output selection of cryptonote addressing can't tell if a user is involved in a transaction solely based on its enotes. In Jamtis, output selection guarantees at least one view tag must match if the tx constructor is sending funds. That way, we could narrow down the key images needed to store to only those in transactions where at least one view tag matches (likely ~1%).

Conclusion

So there's a quick and dirty explanation of why I don't think we should move forwards with FCMP-RCT if we are going to move forwards with Seraphis eventually. However, @kayabaNerve has given us a path to F-S, which would basically give us all the same properties of Seraphis, if less efficiently. FCMP-RCT would give us a quicker path to FCMPs, but would likely take longer to do F-S, if we chose to do it after the first upgrade. Seraphis integration has a head start, but there is likely less total work to do to get to a first FCMP-RCT hard fork. Also, this post is not necessarily a hard "no" for @kayabaNerve's CCS proposal, since apparently ~80% of the R&D work translates nicely to Seraphis FCMPs.

@kayabaNerve

  1. The goal for F-S is for a QC to not be able to learn more. I showed that by demonstrating that a DLog solver (a feature available to a QC) can find a solution for any output; therefore the actually spent output isn't differentiable.

  2. The benefit of Seraphis is in the design simplicity, which may afford notable performance benefits to the membership proof under the squashed enote model (though that adds additional elements to the aggregate range proofs).

as many non-trivial changes to the database, cryptonote core, tx format, tx construction code, tx validation logic, RPC interface, and wallet logic will have to be made, as well as additional crypto libs

I don't agree with the presentation of this. A tree needs to be added to the database, agreed. "cryptonote core" doesn't notably change. It passes the tree root, calls accumulate on the tree, and calls pop on reorg. We would edit the RCT section of the TX, sure. Validation logic only changes in calling FCMP::verify with the tree root instead of CLSAG::verify with the decoy information. We would need new RPC routes and the wallet would have to move from requesting decoy information to path information. Accordingly, all of the above sounds like three notable changes claimed notable across six sections.

The crypto libraries are necessary now and with Seraphis, so I find that note largely irrelevant personally. The only libs which wouldn't be reused are the Helios/Selene if Seraphis adopts a curve cycle.

  1. I disagree on the impact of that technical debt, and will take it in exchange for the year of privacy discussed, during which users would otherwise be vulnerable.

  2. Only jberman would split off under current proposals.

  3. seraphis-migration/strategy#3 discusses replacing wallet2 and I'm fine leaving all conversations about wallet2 being bad to it.

  4. I disagree FCMPs are overkill to anything. We need to evolve past rings. This is the fastest proposal to do so while meeting a trustless setup and performance requirements.

  5. Seraphis is welcome to make its own CCS. This CCS does not outlaw Seraphis CCSs. I do not believe Seraphis, even with resources, can accomplish the integration, formal proving, formal review, and auditing necessary before this can as this has a much smaller scope than Seraphis on a fundamental level.

  6. FCMPs over RingCT only have larger attack surface than FCMPs with Seraphis because Seraphis takes a giant attack surface, and part of its giant attack surface is the FCMPs + RingCT attack surface. It just shifts the burden. I also don't believe the attack surface is an issue as both this and Seraphis are proposed with extensive review and audits.

  7. The proposed increase in storage requirements is minimal. It's just the need for view-only wallets to keep all key images until the spend key filters them, instead of only key images which used a known output as a decoy. That, combined with how the outgoing view key extension would enable view-only wallets to immediately filter...

  8. I'd argue this proposal has less, or at worst equal, remaining work than Seraphis does, not just less work in total. This also accomplishes notable milestones several months to year(s) faster.

@Gingeropolous

this route of implementing a RingCT-compatible version of FCMP if it turns out to be feasible.

Do we know when the "if" in this statement can get resolved? From my perspective, this would introduce a key deciding factor. If its known that FCMP-RCT is sound within a matter of weeks, and the deployment (including audits) can occur <1 year, to me it makes sense to push this forward.

From where I sit, seraphis was first introduced primarily to permit an increase in ringsize, with the hopeful development of some non-ring based system that could be dropped in. Concurrently, the seraphis upgrade would usher in a new transaction system that is ultimately better than the one we have from cryptonote.

It turns out that non-ring based system was discovered prior to seraphis implementation. I think this was unexpected.

I know this is a technical thread, but I'll drop some analogies. I feel like Monero is currently a plug-in hybrid car. It's using the best of the technology that was available at the time to achieve the goals of the manufacturers, which is lower emissions (or something. don't take the analogy all the way). The ideal car in this analogy would be a fully electric car built from the ground up. This is Seraphis. However, in this hypothetical scenario, there's a crisis and there's no more gas and people still need cars. Thus, we can produce an electric car that's usable now by dropping in an electric motor, because an engineer just figured out how to do it. That's FCMP-RCT.

In real life, car manufacturers that attempted this drop in weren't successful, but I'd argue that's because there wasn't a gas supply crisis. Whereas car manufacturers that created pure electric cars from the ground up have done... relatively well.

Which I think brings us to the pertinent question. Is there a crisis that would warrant postponing the ideal product in lieu of something that could work now?

I would argue yes. It's been known that rings are the weakest link in Monero's function. Before the spam attacks it was known. It was known well before then.

@tevador

tevador commented Apr 18, 2024

Due to the ongoing debate if FCMPs should be implemented before Seraphis, I reread the Seraphis paper. The main premise of the new key image format that comes with Seraphis was to enable outgoing view keys (OVKs) and to separate the membership proof from the ownership proof.

Since both of these features can be achieved with RingCT-compatible key images, is there a good reason to go forward with the incompatible key image format?

Most, if not all, new features of Seraphis could be implemented with a RingCT-compatible key format:

  1. Membership/ownership proof separation can be done as described in this gist.
  2. Jamtis address format also works (with minor changes to the key derivation formulas), including all wallet tiers, encrypted address tags and Janus attack protections.
  3. Transaction structure changes are also applicable as they are independent of the key format.

If Seraphis was adapted to use the key formats mentioned here, the pre-Seraphis FCMP implementation would 100% roll over to Seraphis, which would address the concerns some people might have with this endeavor. Additional advantages would be:

  1. We could optionally support both old and new address formats at the same time if that was something deemed useful.
  2. There would be no need for migration-type transactions that spend old RingCT e-notes and create new Seraphis e-notes. All old e-notes would be part of the anonymity set of all future transactions, which would arguably be better for privacy.
  3. We would only have a single pool of spent key images instead of two.

@kayabaNerve

Do we know when the "if" in this statement can get resolved?

If you ignore performance enough, I'd say it's resolved. I initially posited 35ms at Monerokon, which was actually a 100ms variant that did several additional proofs GBPs supersede. If GBPs fall through, we'd fall back to that 100ms metric (with a new item to formally prove). That ignores the several performance benefits we're looking at. BPs, instead of BP+s, inherently should be ~2x faster (35ms -> 17.5ms), and I've come up with more efficient ways of architecting the circuit internally.

I also have high faith in the ability to prove GBPs. I'm expecting a soundness proof for the divisors to be more difficult, yet that'd 'only' double our cost for performing the elliptic curve cryptography in circuit (back to 35ms, if we use BPs and not BP+s).

Ideally, we should have an upper bound on time in around a month, when the current CS work completes.

It turns out that non-ring based system was discovered prior to seraphis implementation.

To be fair, FCMPs with Seraphis was discovered before FCMPs without Seraphis.

postponing the ideal product

necessitates discussion on how far from ideal this is.


Most, if not all,

I believe all features of Seraphis are accomplished when the above idea is fully executed (the F-S variant and supporting wallet code).

If Seraphis was adapted to use the key formats mentioned here

If we are to keep the Seraphis code (which I would support), I'd probably endorse deploying this over the existing codebase (with extensions), then deploying the Seraphis code as a new wallet library + new transaction version. I'm obligated to note that makes Seraphis more of a codebase than a protocol, as it is now, yet that also isn't necessarily an issue.

We could optionally support both old and new address formats at the same time

This does cause metadata leakage (wallet age revealed), yet doesn't invalidate old addresses which is great.


Distinctly, why weren't non-RCT outputs migrated into RCT? Can't we set the randomness in a Pedersen Commitment to 0 and make RCT outs out of non-RCT outs? I ask as that'd allow making all historical TX validation code technically still active (due to CN-RCT migration TXs still being allowed, as they should be) solely historical (as new spends of original CN outputs would be treated as RCT outputs which would be a subset of the new global pool). When thinking about removing the Seraphis migration, I had that thought about the old migration.

@tevador

tevador commented Apr 19, 2024

Distinctly, why weren't non-RCT outputs migrated into RCT?

Currently, outputs are grouped by their value for the purposes of decoy selection. Ring-CT outputs are counted as having a value of 0. Old denominated outputs and unmixable outputs have transparent amounts, so presumably the real output (and the amount spent) would be leaked with a high probability if such an old output appeared in the ring.

I think that with FCMPs, we could easily treat all outputs from the genesis block as a single group by creating amount commitments for them with a known mask (AFAIK coinbase outputs use a mask value of 1, i.e. C = v*H + G).

@kayabaNerve

kayabaNerve commented Apr 19, 2024

I'd be quite interested if it really wasn't done historically due to side-channel attack commentary. I'll note this as yet another idea which can take FCMPs to their 'ultimate' form.

cc @jeffro256 on tech debt (it doesn't remove the verification code, yet would make it explicitly historical) and also @jeffro256 for their own comment on migrating RCT outputs to Seraphis (which I wouldn't see possible without an overtly complicated circuit/having every Seraphis TX also have a RCT input/etc).

@Gingeropolous

necessitates discussion on how far from ideal this is.

from my understanding, a large benefit of seraphis was the modularization of all of the components. (e.g., FCMP could just be dropped into the membership proof component). So I guess one question regarding this FCMP-RCT effort is how upgradable is it? e.g., in 5 years, if something faster and smaller comes out for component x, y, or z, how easy is it to slip into cryptonote-fcmp-rct, vs how easy would it be to slip into seraphis-fcmp?

Or maybe "future-proof" would be the right term.

If Seraphis was adapted to use the key formats mentioned here, the pre-Seraphis FCMP implementation would 100% roll over to Seraphis, which would address the concerns some people might have with this endeavor.

This seems like an ideal option imo.

@kayabaNerve

kayabaNerve commented Apr 19, 2024

from my understanding, a large benefit of seraphis was the modularization of all of the components.

This also modularizes the membership and spend-auth/linkability proof. The statements are distinct, as needed to safely satisfy the RingCT design preserved, yet it is modularized. Membership, spend-auth/linkability, balance (currently sum of commitments is zero), and output range could also be independently upgraded.

This seems like an ideal option imo.

I believe that proposal is effectively to move forward with this now, then scrap Seraphis the cryptographic protocol. Seraphis the codebase would then be adapted to satisfy the system created here, argued only fundamentally distinct in the linking tag structure. That is equivalent to the general idea of FCMP+SA+L now + JAMTIS later + new TX formats which are saner later (incrementally upgrading without a hard migration), except the new TX formats would be the already written Seraphis formats re-intended. I just want to be clear about my understanding there.

@kayabaNerve

https://github.com/kayabaNerve/fcmp-/blob/develop/fcmp%2B%2B.pdf is my work in progress formalization of this, with background. The circuit is detailed yet I wouldn't claim specified yet. It also covers how integration would be handled.

cc @tevador especially (who I don't believe I can reach on Matrix 😅 ), when you have the time.

@kayabaNerve

https://github.com/kayabaNerve/fcmp-ringct/blob/develop/fcmp%2B%2B.pdf should now be at the feedback phase, where the only expected changes are for clarity/detail where not enough detail was provided (such as on the integration side of things). Barring evolution with research (see DLog PoK which I played safe here, yet do posit a more efficient candidate for), only minor tweaks should occur.

@tevador

tevador commented Apr 30, 2024

I reviewed the PDF, version d946868.

  • 4.3 First bullet point should probably say "Hadamard product" instead of "vector multiplication" to avoid ambiguity.
  • 4.3.8 The section is missing a sentence explaining that permissible points are those where U(y) is a square and U(-y) is non-square. I'd also add that 1/4 of points are permissible.
  • 5.4, 6.8.1 You could add a reference to Jamtis-RCT, which is a compatible addressing scheme supporting outgoing view keys.
  • 6.8.2 Outgoing view keys are not a requirement for forward secrecy. It's sufficient to add a T-component to the one-time address when constructing an e-note.

Regarding section 6.4, where you propose to require output keys and commitments to be permissible, I've been thinking about it and it should be possible by adding two encrypted 1-byte fields to every e-note, which will contain the number of times the constructed value needs to be incremented to be equal to the published key, i.e. K_o = K_o' + i_1 T and C = C' + i_2 G, where K_o' and C' are the values derived from the sender-recipient shared secret. The integers i_1 and i_2 are in the range of 0-255, which gives a negligible chance of failure to find a permissible point (less than 1.4×10^-32).
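
The stated bound checks out with a one-liner (each increment independently has a 3/4 chance of not being permissible, and there are 256 candidates):

assert 0.75 ** 256 < 1.4e-32   # probability that none of the 256 increments is permissible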

@kayabaNerve
Copy link
Author

  1. ACK.
  2. I don't spell out the square commentary, yet did comment permissible(Y) && !permissible(-Y). I'll elaborate.
  3. ACK as a cite under future expansion :) Thanks for putting that together. I have yet to review it, unfortunately.
  4. Agreed they're not technically required, yet the addition of the T term is what simultaneously creates the outgoing view key and offers forward secrecy. I'll rewrite it so it's clearer that outgoing view keys, which entail a bunch of wallet protocol changes, aren't implied as required.
  5. I don't require outputs be permissible. I require they be made permissible before being added to the tree. It's a few additions by the node (1.5 on average?), so it shouldn't notably impact performance, and prevents requiring changes to the scanning process. We can also publish such values, yet I believe post-incrementing is the simpler solution. The only argument against the node doing it is if one posits a pit of non-permissible values (say, thousands of them in a row) which forms an effective DoS.

Distinctly, even without an extra item, could we not force wallets to resample randomness until permissibility happens to be achieved? That'd not increase bandwidth (negligible) nor change the scanning protocol (which is a bit annoying, and likely requires a table or 1-byte double-and-add to be properly performant?).

@tevador
Copy link

tevador commented Apr 30, 2024

the addition of the T term is what simultaneously creates the outgoing view key and offers forward secrecy

Not necessarily. If the T term is derived from the shared secret, it's not safe to use the G term as the OVK. However, the resulting output key is forward-secret nonetheless.

It's a few additions by the node (1.5 on average?)

Actually, it's on average 4 point additions and 8 Legendre symbol calculations.

The only argument against the node doing it is if one posits a pit of non-permissible values (say, thousands of them in a row) which forms an effective DoS.

Not thousands, but it's quite easy to find keys that will take 80+ point additions, so it can be considered to be a DoS vector. We can go with it for the initial FCMP rollout, but I'd like to add permissible keys and commitments to the Jamtis-RCT specs. That would reduce the verifier workload to just 2 Legendre symbol calculations per key.

could we not force wallets to resample randomness until permissibility happens to be achieved

The chance that a random key is permissible is 1/4. Two-output transactions only have one public key and a total of 4 keys derived from it. That means you would need, on average, 256 attempts (= 4^4) to make a valid 2-output transaction. Impractical.

likely requires a table or 1-byte double-and-add to be properly performant

You can have a precomputed table of 256 multiples of T and G, which would only take 32 KB. Then you'd only need one table lookup and one point addition to recompute the permissible key. In any case, thanks to view tags, this would only happen for a small fraction of e-notes, so the precomputed tables are probably not even worth it.

@kayabaNerve
Copy link
Author

kayabaNerve commented Apr 30, 2024

Fair. I always presumed deployment via the public spend key as that's indistinguishable, with no changes to scanning/sending. Apologies for being so imprecise.

If 1/4 of points are permissible, I would've assumed we'd reach a permissible point in 1.5 additions (as the current one can be checked without any additions). The average distance to a permissible point being 4, not 2, when they're presumably evenly spaced by 4, sounds off. I haven't written the code to literally show it however. Agreed there's overhead for the check itself.

80+ point additions definitely isn't great. It's still less work than a scalar multiplication though. In that case, I'd go back to recommending resampling randomness, yet that can only be discussed once each output has independent randomness, which isn't currently the case. If we're shifting send/scan changes to a future fork, wouldn't that future fork presumably use independent randomness per output? I'd have to re-read how Seraphis + JAMTIS proposed handling change outputs. If the extra byte is the best solution, along with a table or one-byte double-and-add in the scan code (which I never meant to suggest is impractical, solely an annoyance, as all three of these paths have), ACK.

@tevador
Copy link

tevador commented Apr 30, 2024

Re: torsion (6.2.1 and 6.4): What can happen if we allow keys with a torsion component to be added to the tree? I can't think of any exploit as long as the SA+L proof is handled correctly.

@kayabaNerve
Copy link
Author

kayabaNerve commented Apr 30, 2024

I can't think of any. I just no longer care to ask the question and argue security despite the question. It's not worth the mental effort and the potentially surprising results. I'm still frustrated I still have to consider some aspects re: torsion which are fundamentally not removable. This just really isn't the place to fuck around, find out.

The requirement of torsion clearing historical points is adding a non-negligible amount of work however, which I will concede. To go in depth, the divisor-based DLog proof shouldn't pass for torsioned elements. No sum of elements without torsion will add with a torsioned element to the identity element, which is what the divisor is supposed to check. That means the worst you get is torsion in, torsion out, on the output key and commitment.

We already argue spending torsioned input commitments is fine. You can torsion them on-chain, and we verify them as prime order elements (emphasis on as prime order elements, not that they are prime order elements), but at time of spend they'll be torsioned. For a torsioned binomial output key x'G + aT, where ' denotes torsion, we'd have to argue it's fine.

I'm not sure that is? I proposed a Bulletproof+ for the F-S SA+L. While we can argue whether it's actually a BP+ or not (as it's solely the one-round protocol), that doesn't change that we currently ensure all elements passed to a BP+ are without torsion via torsion clears. This would not have its torsion cleared and would be explicitly torsioned. We'd have to multiply by 1/8 then 8 (currently proposed for outputs, now done for inputs), or otherwise increase the tolerances of our policy re: BP+.

I'd rather be done with it, and if we're concerned about performance (just over one additional scalar multiplication), implement Pornin's halving method which I believe is just 30% the cost.

@tevador
Copy link

tevador commented Apr 30, 2024

If 1/4 points are permissible, I would've assumed we'd reach a permissible point in 1.5 additions

We have:

| Chance | Additions | Legendre symbols |
| --- | --- | --- |
| 1/4 | 0 | 3 |
| (3/4)*(1/4) | 1 | 5.5 |
| (3/4)^2*(1/4) | 2 | 8 |
| ... | ... | ... |

So the average number of additions will be:

$$A = \sum_{i=0}^{\infty} i p^i (1-p) = p (1-p) \sum_{i=0}^{\infty} i p^{i-1} = \frac{p (1-p)}{(1-p)^2} = \frac{p}{1-p}$$

With p=0.75, we get A=3. Similarly, for the number of Legendre symbols, we get an average of L=10.5.

I'm estimating that a Legendre symbol calculation will take time comparable to at least 50 point additions. So the average cost is ~500, dominated by Legendre symbol calculations. For "1 in 10 million" crafted DoS points, we get A=51 and L=130.5 and a total cost of ~6500 point additions.

Update (2024-04-30): fixed the number of Legendre symbol calculations. The average is 1.5 per addition plus 2 for the final successful point.

Update (2024-05-03): fixed the number of Legendre symbol calculations to account for the use of projective coordinates. The average is 2.5 per addition plus 3 for the final successful point.
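
As a quick check of these figures (a sketch, not part of any implementation), the closed form and a Monte Carlo estimate agree on A = 3 additions and L = 10.5 Legendre symbols for random points:

```python
import random

p_fail = 3 / 4                       # chance a candidate point is not permissible

A = p_fail / (1 - p_fail)            # expected additions: 3.0
L = 2.5 * A + 3                      # 2.5 Legendre symbols per addition + 3 final
print(A, L)                          # 3.0 10.5

# Monte Carlo estimate of the expected number of additions
def additions_until_permissible():
    n = 0
    while random.random() < p_fail:
        n += 1
    return n

samples = [additions_until_permissible() for _ in range(100_000)]
print(sum(samples) / len(samples))   # ~3.0
```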

@kayabaNerve
Copy link
Author

kayabaNerve commented Apr 30, 2024 via email

@kayabaNerve
Copy link
Author

Pushed edits in response, though not yet noting the above re: permissibility as it should likely be incorporated into the JAMTIS proposal.

Also, please feel free to push back further on my anti-torsion stance. I do get that change is non-negligible when optimizing this. My justification to myself was the fact I proposed broadcasting keys and commitments as their inverse-eight values, decreasing current node resource use, making this a one-time payment for our history. I'm obligated to concede however that wallets would have to perform two triple doublings, which should be negligible yet is an additional cost we'd pay into the future (not solely historically).

@tevador
Copy link

tevador commented May 1, 2024

broadcasting keys and commitments as inverse eight, decreasing current node resource use [...]. I'm obligated to concede however that wallets would have to perform two triple doublings, which should be negligible yet is an additional cost we'd pay into the future

Napkin math tells me that it's better to use the Ristretto format rather than pre-multiplication by (1/8) and post-multiplication by 8.

Ignoring the permissibility check and Ed25519 -> Wei25519 conversion, to get the canonical tuple (K, I, C) to insert into the tree, you need the following operations:

A. With pre- and post- multiplication

A.1 Transaction builder

  1. K' = ge_scalarmult(inv8, K)
  2. Compress K'
  3. C' = ge_scalarmult(inv8, C)
  4. Compress C'

A.2 Transaction verifier

  1. Decompress K'
  2. 3x ge_dbl
  3. fe_invert to convert to affine
  4. I = hash_to_ec(K)
  5. Decompress C'
  6. 3x ge_dbl
  7. fe_invert to convert to affine

B. With Ristretto

B.1 Transaction builder

  1. Compress K
  2. Compress C

B.2 Transaction verifier

  1. Decompress K
  2. I = hash_to_ec(K)
  3. Decompress C

Even if Ristretto point compression and decompression is slightly slower, it's more than compensated for by avoiding the extra group operations for both the transaction builder and the verifier.

Additionally, for commitments, the post-multiplication by 8 is more risky because if we are not careful, we might allow users to spend 8x the amount in the output.

The requirement of torsion clearing historical points is adding a non-negligible amount of work however, which I will concede.

As for historical outputs, some transactions have been confirmed to have torsioned keys. I see three possible solutions:

  1. Ignore torsion if we can prove it's safe.
  2. What you are proposing, i.e. clearing the torsion by multiplying by 1/8 and then 8 for all keys.
  3. Using the point halving method to detect torsioned keys and clear the cofactor only for those keys. This should be about 2x faster.

@kayabaNerve
Copy link
Author

kayabaNerve commented May 1, 2024

It'd be better to use Ristretto despite there being no explicit conversion provided, as it'd be trivial to define, yet I'm not happy with the proposed scope at this time to move Monero to Ristretto, despite calling for it a couple years ago. If others are willing to put in the time and effort and it's not too contentious, then yes, I'd support it. I also get how proposing it for a future hard fork is silly though, as it'd re-incur building the tree (which is notable). That isn't to say it wouldn't be worthwhile in a future hard fork, solely that it's bad to put off to one.

Additionally, for commitments, the post-multiplication by 8 is more risky because it we are not careful, we might allow users to spend 8x the amount in the output.

This seems like the thing we'd catch with basic testing (have every single point added to the tree go through a single fn, so either all commitments or none are so mutated) and also already a concern present in Monero? If the BP+ forgets to multiply by eight, the created commitment may be a value that's only in range when divided by eight? (though that variance isn't enough to trigger an overflow regardless)

Using the point halving method to detect torsioned keys and clear the cofactor only for those keys. This should be about 2x faster.

Would be my advocacy at this time, though I'll note halving doesn't detect torsioned keys. 3x halving, then 3x doubling to a different point does. We can't save the second half of those operations for most keys.


Distinctly, re: Wei25519, I implemented Wei25519 from the IETF draft for the divisor library. The map from Montgomery is trivial (one addition of a constant), where Wei25519.2 and Wei25519.-3 are non-trivial (multiplications by constants, plural). The only potential benefit I could see is if we optimized the hell out of divisor construction, as the curve formula is a modulus and a coefficient of 2 may theoretically clean that math up? But that'd force the node to do the more expensive map (and it only does the map; it does not do any arithmetic for Wei25519 points, nor does it multiply/divide/mod any polynomials). Feel free to confirm I'm not missing any considerations :) Really appreciate the feedback and collaboration here.

EDIT: We do two hash-to-curves and one Weierstrass addition per DLog claim on the first layer. I'll need to review hash-to-curve candidates and the performance implications for the choice of A before I can write off the remaining question of my initial choice.

@tevador
Copy link

tevador commented May 1, 2024

It'd be better to use Ristretto despite there being no explicit conversion provided, as it'd be trivial to define, yet I'm not happy with the proposed scope at this time to move Monero to Ristretto despite calling for it a couple years ago. If others are willing to do the time and effort and it's not too contentious, than yes, I'd support it. I also get how proposing it for a future hard fork is silly though, as it'd re-incur building the tree (which is notable). That isn't to say it wouldn't be worthwhile in a future hard fork, solely bad to put off to one.

I think it's worth it to implement in parallel with FCMPs if we can find someone willing to do so. We should at least analyze the amount of work needed for this. Ristretto is basically only an alternative serialization/deserialization method and produces points in extended coordinates (struct ge_p3 as used by Monero), so it should not be too much work.

I implemented Wei25519 from the IETF draft for the divisor library. The map from Montgomery is trivial (one addition of a constant), where Wei25519.2 and Wei25519.-3 are non-trivial

I started implementing a concept for permissible points and the canonical Wei25519 representation is the most suitable one because the y coordinate in Wei25519 format is equal to the v coordinate in Montgomery format, which makes it easier to check for permissibility.

@tevador
Copy link

tevador commented May 1, 2024

Here is my reference implementation for point permissibility: https://gist.github.com/tevador/f9e6d4f262aba1c3de1eb7a20f40347d

I'm using the criterion that 1+y is square and 1-y is non-square, where y is the point coordinate in Wei25519 form. Point addition is still done in Ed25519 form, we just convert to Curve25519 coordinates for the permissibility test.

G + 3 T is a permissible point (the example used in the script).

The script shows that we actually need 3 Legendre symbol calculations to check permissibility due to the use of projective coordinates (affine coordinates are not used in practice).
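
For reference, the criterion itself is only a couple of Euler's-criterion checks over the Curve25519 base field. A simplified affine sketch (the linked script works in projective coordinates, hence the 3 Legendre symbols mentioned above):

```python
# y is the Wei25519 (equivalently, Montgomery v) coordinate of the point being
# tested; conversion from Ed25519 and the projective-coordinate handling are omitted.
p = 2**255 - 19

def legendre(a: int) -> int:
    # Euler's criterion: 1 for a non-zero square, p - 1 for a non-square, 0 for 0
    return pow(a % p, (p - 1) // 2, p)

def is_permissible(y: int) -> bool:
    return legendre(1 + y) == 1 and legendre(1 - y) == p - 1
```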

@kayabaNerve
Copy link
Author

so it should not be too much work.

https://ristretto.group is clear enough that I was able to implement it 2y ago, when I really did not know what I was doing, into an old project of mine: https://github.com/MerosCrypto/Meros/blob/master/e2e/Libs/Ristretto/Ristretto.py#L18-L111

It's less than 100 lines of Python for encode/decode (though that code isn't well-written and is ugly, please refer to the Ristretto specification).


I agree the y coordinate directly mapping is pleasant. I'm not convinced the saved cost of the map (multiplication by one constant) is worth the cost of the (potentially) worse hash-to-point/incomplete addition functions (which could incur multiple multiplications, if not exponentiations). I'm refraining from forming a definite comment until I poke around on the matter more.


Clarifying: is 1+y simply meant as a demo where a=1, b=1 for simplicity, or are you proposing using a=1, b=1? I'm not convinced the uniform distribution 'achieved' is of much benefit given a/b are constant. As one can brute-force values of y, they can brute-force values of ay + b. We may want to legitimately discuss removing the 'hash' element.

@tevador
Copy link

tevador commented May 1, 2024

I agree the y coordinate directly mapping is pleasant. I'm not convinced the saved cost of the map (multiplication by one constant) is worth [...]

You are correct. After publishing the permissibility script, I realized that there is no performance difference between Wei25519 and Wei25519.2. They simply differ in one constant that needs to be multiplied in both cases.

worse hash to points/incomplete addition functions

Incomplete addition doesn't use the curve constants. Only incomplete doubling uses a, but AFAIK we will not use doubling in the circuit. In case we need doublings in the circuit, it's better to use Wei25519.2. On-curve check might also be a little bit faster with Wei25519.2 because you can replace the multiplication by a with an addition, but I'm not sure if that makes any difference in the circuit.

Clarifying, is 1+y simply meant as a demo where a=1, b=1 for simplicity, or are you proposing using a=1, b=1? I'm not convinced the uniform distribution 'achieved' is of too much benefit given the a/b are constant. As one can brute force ys, they can brute force ay + bs. We may want to legitimately discuss removing the 'hash' element.

I'm proposing 1+y as the simplest hash that works. Not using a hash is out of the question because our field has p = 1 mod 4, so y and -y always have the same quadratic residuosity (i.e. there would be no permissible points at all).

@kayabaNerve
Copy link
Author

kayabaNerve commented May 1, 2024 via email

@tevador
Copy link

tevador commented May 1, 2024

The implemented incomplete addition doesn't use the constant a but some iadd functions only work for specific values of a (and may be more performant).

You can check the Explicit-Formulas Database. The formulas for addition are the same for all values of a with the curve equation y^2 = x^3 + a*x + b. There are specific doubling formulas for some coordinate systems that depend on the value of a.

However, I'm fine with using Wei25519.2. I don't recommend Wei25519.-3.

It's a performance/'security' benefit tradeoff.

There is no security tradeoff here. You might be confused by the term "hash" they used in the Curve Trees paper. "Surjective map" would be a better term. We simply want a map that produces the expected 1/4 of permissible points. The simplest surjective map that works is the one I'm proposing. The main purpose of all this is to make sure only one of the pair [P, -P] is permissible for adding to the tree.

Note that if we had p = 3 mod 4 (like Bitcoin, for example), we could use simply y and -y and get 1/2 of permissible points.

@kayabaNerve
Copy link
Author

You are right re: iadd, sorry. I missed only doubling had distinct formulas.

I do understand it's not a proper hash. The security argument I was attempting to reference would be whether scaling by a uniform element achieves a distribution closer to uniform. I don't inherently see why such a scaling would benefit security, hence security being in quotes, but I also don't consider myself capable of commenting on the statistical distribution and the impact of scaling vs solely including a fixed offset (b). I'll also note a term better than security may be "integrity" or "quality". If you also don't see a benefit for the distribution, it's solely a performance detriment we can skip by setting a=1.

@tevador
Copy link

tevador commented May 1, 2024

AFAIK there is no relationship between the residuosity of 1+y and 1-y. This test is giving a good distribution. For example, G + x T is permissible for the following values of x:

3
5
8
10
12
15
16
21
22
31
35
39
46
47
48
50
51
60
71
77
81
89
90
91

Overall, out of the first 100 000 points, 25 228 are permissible, roughly 1/4 as expected.

In the circuit, you will only prove the first part, i.e. w^2 = 1 + y. This is sufficient because for any point P in the tree, -P will not be permissible, so this check prevents the use of negated points for double spending.

@kayabaNerve
Copy link
Author

kayabaNerve commented May 1, 2024 via email

@kayabaNerve
Copy link
Author

kayabaNerve commented May 3, 2024

@tevador Thoughts on re-defining key images to just their x coordinates?

If we let the prover malleate the sign of their key image, we avoid 25% of the scalars on the first layer (which is by far the widest part of the tree). Since the tree building is non-negligible (four scalar multiplications per output + updating all higher branches), I wanted to consider it. It'd trivially be possible now without rebuilding the key image store if we do two lookups, one per sign of the key image (until an eventual/never migration to an x-only store).
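
A minimal sketch of those two lookups against the existing (compressed Ed25519) key-image table, assuming key images are stored as 32-byte strings; the x = 0 edge case and canonical-encoding checks are ignored:

```python
def negate_compressed(ki: bytes) -> bytes:
    # -P differs from P only in the x-sign bit (the top bit of the final byte)
    return ki[:31] + bytes([ki[31] ^ 0x80])

def is_spent(ki: bytes, spent_key_images: set) -> bool:
    # two lookups, one per sign, so the table never needs an x-only migration
    return ki in spent_key_images or negate_compressed(ki) in spent_key_images
```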

I think it'd only be the one-bit loss to security? I can't think of further ramifications immediately. I'm still mulling this over though.

EDIT: If we keep this going, we don't need permissibility checks on the output key/key image, as we allow sign malleation. That prevents requiring tweaking, and makes resampling randomness better than prior (as it's now 1 point per output which may trigger resampling, not 2). Whether or not resampling randomness is optimal remains debatable. I'm also just not really happy with the questions raised and how invasive this entire idea would be though...

@tevador
Copy link

tevador commented May 3, 2024

Eliminating the y coordinate of I from the hash would mean you can prove both (K, I, C) and (K, -I, C) are in the tree. Negating the key image base would result in a negated key image -KI. Not considering the sign bit of the key image would fix that and prevent double spending.

I think rebuilding the key image table would be relatively easy (could be done during a common DB migration). We would probably need to check the current set of spent outputs if there are two key images that only differ by the sign bit (extremely unlikely). Otherwise I can't immediately see any major issues. If it significantly speeds up the hash, I support this change.

@tevador
Copy link

tevador commented May 3, 2024

If we keep this going, we don't need permissibility checks on the output key/key image, as we allow sign malleation. That prevents requiring tweaking, and makes resampling randomness better than prior (as it's now 1 point per output which may trigger resampling, not 2).

When I'm thinking about it, do we even need the amount commitment C to be permissible? Not checking for the sign of C would at worst mean someone can 'spend' the negative amount in the output, which would only introduce a new method of burning Monero, but I can't immediately see issues with it.

@kayabaNerve
Copy link
Author

In order to currently have a difference by the sign bit, you'd have a solution to the discrete log problem for two outputs of the hash function. For xH_1 == yH_2, x * y**-1 is the relationship of H_1, H_2.

I think rebuilding the key image table would be relatively easy (could be done during a common DB migration).

Assuming key images are unordered (LMDB prefix iteration means they aren't), it's a one-time migration to avoid a 2x impact to reading. I'm unsure that's worthwhile vs simply having new nodes build the DB in the faster manner, but I'm unsure of the migration policy.

This would save 25% off the first layer of the tree, and literally over 100m scalarmuls from the initial tree building (due to having more outputs than that). Those scalarmuls aren't full cost, due to being part of a multiexp, yet still non-negligible (especially since we need the multiexp output for a small batch, say 100 points at the high end, limiting the cost saved).


I also considered it for C. Since the formula is sum(inputs) - sum(outputs), having a negative input does not increase the amount spendable. It does allow underflowing inputs, yet wrapping around the curve order would require 2**192 inputs and/or outputs since this is additive.

I didn't want to suggest it, as while it removes the concept of permissibility once and for all, it's cursed. I appreciate optimizations where they exist, yet allowing negative inputs to save a tiny bit (ideally, just one sqrt per output) seems overly optimizing. I'm debating not endorsing the key image commentary due to how it'd affect the protocol proper (not just the TX format, yet fundamentally how we track double spends) and the need to be extra careful with checking historical key images, in-block key images, and in-TX key images. Not to mention, we can always generalize protocol rules, yet we can never re-specify them.

But yes, I believe that's technically valid and would remove permissibility.


I'll keep the protocol as-is for now, and we can discuss both these items at the next NWLB/MRL to see sentiment.

@tevador
Copy link

tevador commented May 3, 2024

It does allow underflowing inputs, yet wrapping around the curve order would require 2**192 input and/or outputs since this is additive.

That's nothing new, it already applies to outputs (i.e. you could create counterfeit Monero today if transactions with 2^192 outputs were possible).

I didn't want to suggest it, as while it removes the concept of permissibility once and for all, it's cursed. I appreciate optimizations where they exist, yet allowing negative inputs to save a tiny bit (ideally, just one sqrt per output) seems overly optimizing.

I don't think it's a tiny optimization.

  1. It removes the need to migrate old commitments to be permissible (a one-time cost of 100M+ commitments for every node).
  2. It removes the DoS vector of permissibility (see above). It's not one square root per commitment. The average is 10.5 square roots (or Legendre symbols).
  3. It simplifies both the protocol and the implementation (everything regarding permissibility can be removed).
  4. No need to modify Jamtis (sender-receiver protocol) to account for permissibility.

The only con of the change would be the possibility of purposefully burning double the amount in your own e-note (you'd need to provide extra e-notes to make the result non-negative).

@kayabaNerve
Copy link
Author

ACK. Will discuss at aforementioned meetings and see how other developers/researchers feel.

@kayabaNerve
Copy link
Author

@tevador We still need permissibility for the output of the hash function, or else we lose collision resistance for branch hashes, yet that should be DoS-mitigated as the hash is per branch (no one output can cause thousands of operations unless it's the only item in its batch).

@tevador
Copy link

tevador commented May 4, 2024

we lose collision resistance

Do we?

Let's take the simplest case of a binary tree and Pedersen hash H(a, b) = x(H_0 + a H_1 + b H_2), where x() means we only take the x-coordinate of the point as the hash result.

Suppose that someone can find an algorithm that takes (H_0, H_1, H_2) as input and produces two different tuples (a, b) and (a', b') such that H(a, b) = H(a', b').

We can turn that algorithm into a discrete log oracle because:

  1. If a H_1 + b H_2 = a' H_1 + b' H_2, then H_1 = (b' - b)/(a - a') H_2.
  2. If H_0 + a H_1 + b H_2 = - H_0 - a' H_1 - b' H_2, then H_0 = -(a + a')/2 H_1 - (b + b')/2 H_2 and we still get a discrete log oracle if we input for example H_2 = r H_1.

So finding a collision with an x-only Pedersen hash should still be as hard as solving the discrete log. It's important to have H_0 != O, otherwise you get an easy collision (a, b) and (-a, -b).
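
As a toy check of case 1's algebra, using the additive group Z_q in place of the curve (so "scalar times point" is just multiplication mod q; discrete logs are trivial here, the point is only to verify the recovery formula):

```python
q = 2**127 - 1                         # prime stand-in for the group order
r = 123456789                          # hidden relation: H_1 = r * H_2
H_2 = 987654321 % q
H_1 = (r * H_2) % q

# construct a collision a*H_1 + b*H_2 == a'*H_1 + b'*H_2 with (a, b) != (a', b')
a, b = 11, 22
a_p = 33
b_p = (b + (a - a_p) * r) % q
assert (a * H_1 + b * H_2) % q == (a_p * H_1 + b_p * H_2) % q

# the collision reveals the discrete log of H_1 to the base H_2
recovered = ((b_p - b) * pow(a - a_p, -1, q)) % q
assert recovered == r
```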

@kayabaNerve
Copy link
Author

It is (a, b) and (-a, -b) which I was concerned about. That produces outputs (x, y), (x, -y). I'm unsure of the impact of having such a known collision when said -a would still need to be the hash of some branch, and finding such an opening would still be the discrete log problem? Yet it's best not to mess with it.

If you include an initialization of G_0 which is fixed with a coefficient of 1... then finding a collision explicitly reduces to the discrete log problem? So ACK :) Good thinking. I believe when we pass the blinded Pedersen Commitment to the next layer up, we'd simply add this point, so it'd be done out of circuit, which is quite nice. It does incur one extra addition to every hash output, but that should still be better than incurring the permissibility check.

@tevador
Copy link

tevador commented May 4, 2024

In the FCMP++ PDF, chapter 6.1, where you write "where all members of branches are zero by default", there should at least be a footnote explaining that this is safe because Selene and Helios don't contain a point with x = 0 (that could be used to fake the presence of a member in the tree).

For this reason, we also don't need any impermissible dummy values that are suggested in the Curve Trees paper in Appendix E.

@kayabaNerve
Copy link
Author

For what it's worth, I did write out the circuit and verify the constraints pass for an honest prover. With that, we effectively have a correctness proof (not a soundness proof, for anyone reading this without a formal background), letting us move to review/audits/formal verification. At least, we do once I update the paper with the fixes I made on the code side of things...

Re: development, it's been much smoother than expected. I was expecting notable challenges/frustrations based on my prior work on FCMPs. It was quite cumbersome to work with. While I did carry the code from it, as part of "productionizing", I did rewrite vast swaths of it (the existing code providing a reference on what works, the new code doing what works how it should be done).

Part of my thoughts on the amount of rewriting from prior work can be seen in this gist and in my CCS.

Implement an amenable framework for building arithmetic circuits.

is a milestone here, yet not in my CCS. The generic arithmetic circuit framework was fine, yet became incredibly bogged down when we added challenges/Pedersen Vector Commitments. This new work scraps it entirely for a minimal framework (enough to write circuits safely and without scratching my eyes out), then treats the PVCs as outside of the framework. This specification, only implementing what we need as we need, without a layer of abstraction around it, definitively helped. This decision was made prior to the CCS (hence it not being a milestone there).

The explicit document also helped, as it corrected a lot of development before it began. I'm hoping to be able to start work on the exposed API, and the integration side of things, in a couple weeks (letting jberman and co start on integration while I continue work on the FCMP++ side of things).

@kayabaNerve
Copy link
Author

ACK re: zero. The Helioselene library I published earlier today does have such tests.

@tevador
Copy link

tevador commented May 5, 2024

I ran some tests with Ristretto and unfortunately, it doesn't solve all of our problems.

Ristretto abstracts torsion away by choosing one representative from each of the 8-torsion cosets. This choice is made during serialization. However, the chosen representative is not always the point in the prime-order subgroup. For example, the base point deserialized from the ristretto representation e2f2ae0a6abc4e71a884a961c500515f58e30b6aa582dd8db6a65945e08d2d76 will have a torsion component (the y-coordinate is not equal to the canonical value of 4/5).

This has three consequences for us:

  1. In order to get the legacy point representation for hashing (to get the key image base), torsion clearing still needs to be done after deserializing K from the Ristretto format.
  2. Key images serialized in Ristretto format may still have torsion, which must be cleared to get the legacy representation.
  3. When converting a Ristretto point to Wei25519, we will still have torsion.

The first point could be overcome by redefining the key image base for post-fork output keys from Hp(legacy_repr) to Hp(ristretto_repr). This should be safe because each output will still have only one well-defined key image, so double spending will not be possible.

For the second point, torsion clearing with mul8 is unavoidable even with Ristretto.

For the third point, this is probably not a real problem because each K and C will still have only one well-defined x-coordinate in Wei25519 format. We would need to avoid any equality comparisons in Wei25519 format and use strictly Ed25519 format and Ristretto comparison (e.g. to check for identity) and only convert to Wei25519 immediately prior to hashing.

@tevador
Copy link

tevador commented May 5, 2024

For the second point, torsion clearing with mul8 is unavoidable even with Ristretto.

To clarify: Torsion clearing is avoidable only if we also migrate all historical key images to the Ristretto format. It would be a bit tricky though, with the proposed change to have the same representation for both KI and -KI.

@kayabaNerve
Copy link
Author

kayabaNerve commented May 5, 2024

... that may raise the new question of can we convert old key images?

If Ristretto has a random torsion component, and we have the full Ed25519 points (with sign data), within 8 additions we can presumably find its Ristretto-decoded equivalent. I don't love this idea and we'd need more specification to further comment, but I don't think it's impossible.

EDIT: As I posted this, the page refreshed and showed tevador's comment. Seems we're on the same page there.


I removed permissibility. I'll move to updating the paper to recent developments next. I'm unsure if my Helioselene library is 3x or 50x slower than dalek. My computer is unreliable for benchmarks :/ I'm getting that its field arithmetic is somehow faster than dalek's (despite not using a tailored impl), yet point addition is ~3x slower, yet the multiexp is ~50x slower. My honest guess is it's on the scale of 5-10x slower, but I'm only planning for a 2x performance increase to the entire proof while considering efficiency.

I truly can't say I have the experience nor interest in making a faster library. If anyone wants to do the field arithmetic in C and have me port it, I'd be fine doing so. While I don't believe the learning curve for Rust on this amount of low-level code is atrocious, I respect a lack of interest in working in Rust (as I would a lack of interest in working on any topic, really).


We do need a test confirming Wei25519's x-coordinate of 0 does not have a valid point associated with it, or else a distinct representation for null leaves.

@tevador
Copy link

tevador commented May 5, 2024

can we convert old key images?

Yes, it's definitely possible.

  1. Decode the key image into extended coordinates using the legacy method.
  2. Encode the point in extended coordinates into the Ristretto format.

I think it should be possible to tweak the Ristretto encoding function so that both P and -P get the same representation (this would only be used for key images).

I truly can't say I have the experience nor interest in making a faster library. If anyone wants to do the field arithmetic in C and have me port, I'd be fine doing so. While I don't believe the learning curve for Rust on this amount of low-level code atrocious, I respect lack of interest in working on Rust (as I would a lack of interest in working on any topic, really).

It's on my TODO list. I want to take it as an opportunity to learn Rust.

We do need a test confirming Wei25519's x-coordinate of 0 does not have a valid point associated or a distinct representation for null leafs.

Wei25519 has a point with x = 0, but we'd have to check if the point can be reached from a deserialized Ristretto point. There is a 7/8 chance it won't be reachable.

@kayabaNerve
Copy link
Author

It's on my TODO list. I want to take it as an opportunity to learn Rust.

:)

Wei25519 has a point with x = 0, but we'd have to check if the point can be reached from a deserialized Ristretto point. There is a 7/8 chance it won't be reachable.

That's only if we move to Ristretto, which still requires someone to step up in a timely fashion and that it isn't blocked by larger sentiment/time to review it. For now, I'll add a note that we use the lowest x-coordinate which cannot occur for a valid point for the leaves (and 0 for the branches on Helios/Selene, per the reasoning you noted).

@tevador
Copy link

tevador commented May 6, 2024

Wei25519 has a point with x = 0, but we'd have to check if the point can be reached from a deserialized Ristretto point. There is a 7/8 chance it won't be reachable.

I confirm that the two Wei25519 points with x = 0 are unreachable, so we are lucky. Ristretto only adds a random 4-torsion element, but these two points have an order of 8*ℓ, so they can never be produced from valid Ristretto points.

@tevador
Copy link

tevador commented May 6, 2024

For the third point, this is probably not a real problem because each K and C will still have only one well-defined x-coordinate in Wei25519 format. We would need to avoid any equality comparisons in Wei25519 format and use strictly Ed25519 format and Ristretto comparison (e.g. to check for identity) and only convert to Wei25519 immediately prior to hashing.

So it turns out that this is a problem. The blinded key K' can have a different torsion than the original key K in the tree, in which case a membership proof would be impossible because after subtracting the blinding from K', the result would have a different x-coordinate. The same applies to C' and C.

A naive solution to adjust the blinding until K' and K have the same Ristretto torsion would be a disaster for privacy, because it would reduce the effective anonymity set to 1/4.

The only solution I see at the moment is to redefine the leaf-layer hash as H(K, I, C) := H(4*K, I, 4*C). This is effectively torsion clearing, except it only needs two doublings instead of three (thanks to Ristretto).

Since we can't avoid torsion clearing, this raises the question if we should forget Ristretto and go with the simple mul8 solution instead.

@kayabaNerve
Copy link
Author

I can't claim I'm happy with these torsion discussions and want to continue with them, especially if the end result would be separation of the privacy pools. While we can save a single doubling (two instances per output) without separation (so that definitely won't be the end result), I don't believe the complexity is worth it. The entire point of my torsion clearing in the document was to avoid spending days on these discussions of what's safe and what isn't and to establish a safe, clear, simple policy. While I don't want to say we shouldn't spend days on say, a 5% performance increase, I do believe this discussion is far from that.

I will say I'm happy if Wei25519's 0 x-coordinate is torsioned, as that means this new policy of torsion clearing effectively bans 0, allowing us to use 0 for null leaves. Non-0 for null is yet another such annoyance I'd like to be able to simplify out.

@kayabaNerve
Copy link
Author

kayabaNerve commented May 6, 2024

@tevador For hash to point, do you have a better candidate than the IETF spec and SSWU for the mapping? I'm unsure if you had a specific research effort in mind for these curves.

@tevador
Copy link

tevador commented May 6, 2024

I can't claim I'm happy with these torsion discussions and want to continue with them, especially if the end result would be separation of the privacy pools. While we can save a single doubling (two instances per output) without separation (so that definitely won't be the end result), I don't believe the complexity is worth it. The entire point of my torsion clearing in the document was to avoid spending days on these discussions of what's safe and what isn't and to establish a safe, clear, simple policy. While I don't want to say we shouldn't spend days on say, a 5% performance increase, I do believe this discussion is far from that.

The Ristretto solution was my personal investigation and I think it was important to do it before committing to a possibly suboptimal solution. Given my comments above, I think it's sensible to move forward with the original torsion clearing solution.

I will say I'm happy if Wei25519's 0 x-coordinate is torsioned, as that means this new policy of torsion clearing effectively bans 0, allowing us to use 0 for null leafs.

Correct. Torsion-cleared Wei25519 points will never have x = 0.

@kayabaNerve
Copy link
Author

The Ristretto solution was my personal investigation and I think it was important to do it before commiting to a possibly suboptimal solution.

Understood and respected. I participated out of a belief it may produce a notably better solution. I just wanted to note my lack of active belief (which quickly started fading when we discussed promoting existing key images to Ristretto representations).

@tevador
Copy link

tevador commented May 6, 2024

For hash to point, do you have a better candidate than the IETF spec (which may still be a draft? I forget if it was ever finalized) and SSWU for the mapping? I'm unsure if you had a specific research effort in mind for these curves.

Do we need hash to point for Selene and Helios? For the generators, we can use the simple method of hashing to a bitstring and trying to decompress it to a point.

@kayabaNerve
Copy link
Author

kayabaNerve commented May 6, 2024

Yes, for Wei25519, Helios, and Selene (though we can of course hash to a birationally equivalent curve and then map, if that is cheaper). The divisor challenges require we sample two challenge points and then evaluate the divisor over their line.

Currently, I'm randomly sampling x's and recovering y. I don't actually want to claim that's optimal.

Also, the divisor challenge is the most expensive part of the FCMP right now (barring the multiexp, which executes in batch). It requires not only these hash-to-points, but also creating hundreds of scalar powers (256 for the powers of x, ~256 for the powers of x multiplied by y). It should still be a small amount of work (hampered by the current Helioselene library not meeting performance expectations); that just doesn't change its percentage of the overall work. This final proof has ended up quite compact thanks to the current usage of divisors (from 2048 or 4096 rows in the inner-product statement down to just 256).
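
For a sense of that work, the powers themselves are straightforward to compute (a sketch; `l` stands in for the scalar-field order of whichever curve the challenge lives on):

```python
def challenge_powers(x: int, y: int, l: int, n: int = 256):
    # n powers of the challenge x, plus each of them multiplied by y
    xs = [1] * n
    for i in range(1, n):
        xs[i] = xs[i - 1] * x % l
    return xs, [p_i * y % l for p_i in xs]
```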

@tevador
Copy link

tevador commented May 6, 2024

The divisor challenges require we sample two challenge points and then evaluate the divisor over their line.

OK, I wasn't aware. I think we can use RFC 9380 and Simplified SWU for hashing as you suggested. Wei25519 will need torsion clearing after the mapping.

@kayabaNerve
Copy link
Author

ACK, and all good.

@kayabaNerve
Copy link
Author

Slight correction. We should be able to reuse challenges. That puts us at a flat two hash to points per Bulletproof.

For context on challenge reuse, the divisors are all evaluated independently (there's no ability to create one which plays off another). They just require a challenge binding to their variables. All existing hashes are drawn from one transcript of every such variable, so every challenge right now is binding to every variable. Accordingly, we should be able to collapse the challenges without issue.

That should roughly halve the time the FCMP circuit takes? (FCMP circuit -> GBP verification -> batched multiexp, so halving the FCMP circuit is not halving the entire proof, just a notable part of it)

@hinto-janai
Copy link

Hello, I have some questions:

Outgoing View Keys

Is it as simple as adding this key + the current incoming key for full view-only wallets to exist? How would current wallets migrate to this? They can generate a new key and calculate o*G+y*T, but wouldn't this also mean providing 2 addresses?

Forward Secrecy [...] An adversary with a discrete log oracle also cannot distinguish between an unspent non-forward-secret output and a forward-secret output. Such an adversary can only calculate what the linking tag would be if the output isn’t forward secret, and wait to see if that appears

Does this mean this adversary can assert the existence of unspent non-FCMP outputs, and when they are spent? I.e. there's 2 output sets before entering the main FCMP set?

 /------------------------\        /------------------------------------\
| unspent pre-FCMP outputs |      |                                      |
 \------------------------/       |                                      |
             |                    |        main post-FCMP outputs        |
             v                    | (unknown origin, but is known to not |
 /------------------------\       |   be one from the previous 2 sets)   |
|    post-FCMP outputs     | ---> |                                      |
|     (known origin)       |      |                                      |
 \------------------------/        \------------------------------------/

If a post-FCMP output can be identified via the pre-FCMP output's linking tag, can a post-FCMP output be determined to be a member of either the known origin or main set by seeing if a pre-FCMP output's linking tag leads to it? Not sure if this really matters, just curious.

Tree

Maybe for @j-berman:

How is the tree planned to be stored? Additional tables in the main database? For unsynced nodes, would tree operations occur alongside block downloading + verification? For synced nodes, will new post-FCMP++ nodes have to stall on startup for a bit while generating the whole tree? Would this be done as a database migration?

If an output has a time-based timelock prior to the activation of the FCMP++ hard fork, it is converted to an estimated block-based timelock

Assuming timelocks aren't banned before FCMP++, how exactly would this be done? Convert time to blocks by dividing by 120 and add onto the current block height?

DB migration

For context, the last time migration code was touched was by mooo 5 years ago. I'm unsure if there's anyone currently willing to do it, although as far as I can tell it's relatively straightforward.

@kayabaNerve
Copy link
Author

but wouldn't this also mean providing 2 addresses?

Under my proposal, only the new address would need to be provided. They're indistinguishable and have an identical sending process. I make no guarantees my proposal would not impact the relevant JAMTIS proposal which only declares itself compatible with the current addresses.

Does this mean this adversary can assert the existence of unspent non-FCMP outputs, and when they are spent? I.e. there's 2 output sets before entering the main FCMP set?

Sorry, but this entire section isn't really sufficiently well formed for me to reply to its questions/comments. I'd rather clarify things from the start, and if you still have questions, follow up there.

There's currently the Ring outputs and the RingCT outputs. Both of these are non-FS and continue to be non-FS.

After FCMPs, FS outputs become possible. You may still make non-FS outputs yet may also make FS outputs (if there's an address for FS outputs or a protocol for FS outputs).

A historical non-FS output can have its linking tag calculated. Once calculated, an adversary can wait for it to appear on-chain.

An FS output cannot have its linking tag calculated. It has as many linking tag discrete logarithms as there are scalars for the curve. If you assume all outputs aren't FS, setting the new T term to have a coefficient of 0, you can obtain the linking tag it'd have if it wasn't FS. You can then wait and see if that linking tag appears on-chain (confirming it isn't an FS output OR another entity had a solution for the discrete log problem at the time of appearance on-chain).

Accordingly, an adversary with a discrete log oracle cannot differentiate unspent non-FS outputs and FS outputs. They both have linking tag discrete logarithms recoverable if you assume the T term has a coefficient of 0. Both will not have those linking tags actually appear on-chain. You can only differentiate spent non-FS outputs as those will have their linking tags appear on-chain.
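
A toy model of that argument, with the group replaced by additive Z_q (so the adversary's discrete-log oracle is trivial) and an assumed convention, per the discussion above, that the linking tag of K = xG + yT depends only on the G component; none of these helpers reflect the actual codebase:

```python
q = 2**61 - 1                        # prime stand-in for the group order
G, T = 1, 19088743 % q               # "generators" in the toy group

def Hp(K):                           # stand-in for hash_to_point
    return (K * K + 7) % q

def dlog(K):                         # the adversary's oracle: K = z * G => z = K
    return K % q

x, y = 42, 99
K_non_fs = (x * G) % q               # legacy-style output key
K_fs = (x * G + y * T) % q           # forward-secret output key

# the adversary's candidate tag assumes the T coefficient is zero
assert dlog(K_non_fs) * Hp(K_non_fs) % q == x * Hp(K_non_fs) % q  # real tag: appears once spent
assert dlog(K_fs) * Hp(K_fs) % q != x * Hp(K_fs) % q              # wrong tag: never appears
```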

This assumes we don't deploy FS at a hard fork boundary, which it sounds like we will with the above JAMTIS proposal. My proposal enables it to be done whenever, without new wallet protocols (not to suggest it's better than the JAMTIS proposal or still an active candidate; just to note how these discussions formed).

@j-berman
Copy link

How is the tree planned to be stored? Additional tables in the main database?

Yep, here's a draft schema:

/* DB schema:
 *
 * Table            Key          Data			          Properties
 * -----            ---          ----			          ----------
 * leaves           leaf_idx     {O.x, I.x, C.x}	          Integer key, no dups, fixed size, sorted by `leaf_idx`
 * branches         layer_idx    [{branch_idx, branch_hash}...]   Integer key, no dups, fixed size, sorted by `layer_idx` first `branch_idx` second
 */
  • In the branches table:
    • The largest layer_idx in the db == tree root
    • The largest layer_idx in the db should only have a single record in the db, with branch_idx == 0
    • A branch refers to the hash of a chunk of w_c elements in layer_idx-1 (or, if layer_idx == 0, in the leaves table)
    • branch_idx == element idx in layer_idx-1 / w_c
    • branch_hash is 32 bytes that can either be a Helios point or Selene point
      • even layer_idx == selene
      • odd layer_idx == helios
  • This table approach should enable efficient queries for path by leaf_idx (will know all db reads needed immediately, can make parallel reads in theory).
    • The reason the branches table uses layer_idx as a key, and uses value [{branch_idx, branch_hash}...] is to optimize the same way output_amounts table does it (which is optimized for reading outputs by amount and global output ID). The primary index is layer_idx (like amount) and secondary index is branch_idx (like output_id). Planning on doing some more perf testing on this approach as well.
  • The block header stores the tree root hash at that block, and the end leaf_idx added to the tree for that block.
  • We'll also need a new table to keep track of locked outputs sorted by unlock_time.
  • We'll also want a migration for key images, as described here.
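
As an illustration of the "know all db reads needed immediately" point, the records needed for a leaf's path are just repeated integer division by the chunk width (a sketch; the width 38 is arbitrary and real layers may use different widths per curve):

```python
def path_indices(leaf_idx: int, w_c: int, num_layers: int):
    # yields the (layer_idx, branch_idx) records to read for a membership path
    idx = leaf_idx
    for layer_idx in range(num_layers):
        idx //= w_c                  # branch_idx == element idx in the layer below / w_c
        yield (layer_idx, idx)

# e.g. leaf 1_000_000 with w_c = 38 and 4 layers:
# [(0, 26315), (1, 692), (2, 18), (3, 0)]
print(list(path_indices(1_000_000, 38, 4)))
```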

For unsynced nodes, would tree operations occur alongside block downloading + verification?

Yep

For synced nodes, will new post-FCMP++ nodes have to stall on startup for a bit while generating the whole tree? Would this be done as a database migration?

I initially figured nodes would stall too; @kayabaNerve proposed implementing the migration as an async background task, which would only stall the node if the tree is not finished constructing at the fork height where FCMP's begin. Sounds like a solid idea to me.

For context, the last time migration code was touched was by mooo 5 years ago. I'm unsure if there's anyone currently willing to do it, although as far as I can tell it's relatively straightforward.

I included the migration of cryptonote outputs to the tree as part of my CCS proposal :)

Assuming timelocks aren't banned before FCMP++, how exactly would this be done? Convert time to blocks by dividing by 120 and add onto the current block height?

To clarify, the timestamp timelocks are unix timestamps for when an output should unlock. So we can do something like: take the timestamp from some block N, extrapolate block times into the future from that starting point using 120s for each block, then convert timestamps to unlock at their respective extrapolated blocks.

Thinking on it some more.. it's probably fairly straightforward to have a separate table for timestamp locked outputs sorted by unlock_time also. Then when adding a block, check the lowest unlock_time output in the table; if the output is unlocked using get_adjusted_time at that height, remove from table and add to tree, and continue iterating removing outputs from the table until encountering an output that should remain locked at that block's get_adjusted_time. This way we wouldn't need to do the conversion from timestamp to block.
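
A minimal sketch of that loop, modeling the locked-output table as a min-heap of (unlock_time, output_id) pairs; `get_adjusted_time` and `add_to_tree` are stand-ins for the daemon's helpers, not actual APIs:

```python
import heapq

def process_time_locked(locked: list, height: int, get_adjusted_time, add_to_tree):
    now = get_adjusted_time(height)
    while locked and locked[0][0] <= now:
        _, output_id = heapq.heappop(locked)
        add_to_tree(output_id)       # unlocked at this block; move it into the tree
```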

@tevador
Copy link

tevador commented May 13, 2024

Is it as simple as adding this key + the current incoming key for full view-only wallets to exist? How would current wallets migrate to this? They can generate a new key and calculate o*G+y*T, but wouldn't this also mean providing 2 addresses?

Outgoing view keys are only planned for the new Jamtis address format. Legacy CryptoNote addresses will not have outgoing view keys.

Existing wallets cannot be safely migrated to support outgoing view keys because that would change all their existing addresses.

@kayabaNerve
Copy link
Author

@tevador I actually would like to distinctly follow up with you on whether a wallet whose current main addresses are (sG, vG) will survive JAMTIS's backwards compatibility if malleated to (sG + yT, vG), as I originally proposed. While I hear you that existing wallets cannot simply move, I'm personally curious about developing wallet software outside a HF boundary which achieves FS in such a manner for new wallets (though I don't want to have said addresses bork at the time of JAMTIS).

@tevador
Copy link

tevador commented May 13, 2024

Forward-secrecy can be achieved at the time of FCMP activation with a tiny change in the sender-receiver protocol. The sender would simply add a T-term to the one-time address. This would bring immediate forward secrecy for everyone and not break existing wallets.

I do not support creating any additional new wallet types before Jamtis.

@kayabaNerve
Copy link
Author

In that case, mind documenting the exact proposed change and can we circle in @j-berman so we can push for such a change? Presumably, it's just an extra derivation off the shared secret?

Slight caveat: that achieves weaker forward secrecy, which I note now in case such thoughts also impact JAMTIS. The lack of any binomial components in the existing addresses allows breaking known addresses (whereas if there were such an additional term, as in my original proposal, one could identify sends to such addresses yet could not identify their spends).

@tevador
Copy link

tevador commented May 13, 2024

Slight caveat that achieves weaker forward secrecy, which I note now in case such thoughts also impact JAMTIS. The lack of any binomial components in the existing addresses allow breaking known addresses (whereas if there was such an additional term as in my original proposal, you can identify sends to such addresses, yet you can not identify their spends).

Forward secrecy always assumes that the DLP-breaking adversary doesn't know your address. That applies to both of the discussed cases. Regardless of any T-terms present in the address, the adversary can simply extract the secret view keys from the address, which makes any attempt at forward secrecy moot unless we migrate to a PQ-secure key exchange.

@kayabaNerve
Copy link
Author

kayabaNerve commented May 13, 2024 via email

@tevador
Copy link

tevador commented May 13, 2024

We should not overestimate the forward-secrecy capabilities in case of leaked addresses. For example, the adversary can detect spends to known addresses (they will see a change output going to the sending wallet and a payment output going to the receiving wallet).

I want to reiterate that there is no safe way to malleate the keys of legacy addresses. Adding T-terms to one-time addresses achieves what is generally called forward-secrecy, is backwards compatible with existing addresses and forward-compatible with Jamtis. I don't see any other viable solutions.

@kayabaNerve
Copy link
Author

Fair. I'll circle back to

In that case, mind documenting the exact proposed change and can we circle in @j-berman so we can push for such a change? Presumably, it's just an extra derivation off the shared secret?

For the potential at-time-of-hard-fork solution.

@tevador
Copy link

tevador commented May 13, 2024

In that case, mind documenting the exact proposed change

It should be relatively simple.

| variable | description |
| --- | --- |
| K_d | "key derivation" (sender-receiver shared key) |
| idx | output index |
| K_1 | (sub)address public spend key |
| K_o | "one-time address" (output key) |

Current one-time address derivation:

k_g = hash_to_scalar(K_d || idx)
K_o = K_1 + k_g G

Proposed change after the FCMP-fork:

k_g = hash_to_scalar(G || K_d || idx)
k_t = hash_to_scalar(T || K_d || idx)
K_o = K_1 + k_g G + k_t T
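
A sketch of that derivation with the group operations left abstract (`add`/`mul` standing in for Ed25519 point addition and scalar multiplication); the hash, domain-separation labels, and serialization below are illustrative, not Monero's actual ones:

```python
import hashlib

l = 2**252 + 27742317777372353535851937790883648493   # Ed25519 group order

def hash_to_scalar(*parts: bytes) -> int:
    # stand-in for the Keccak-based hash_to_scalar; labels/encoding are illustrative
    return int.from_bytes(hashlib.sha3_256(b"".join(parts)).digest(), "little") % l

def one_time_address(K_1, K_d: bytes, idx: int, G, T, add, mul):
    k_g = hash_to_scalar(b"G", K_d, idx.to_bytes(8, "little"))
    k_t = hash_to_scalar(b"T", K_d, idx.to_bytes(8, "little"))
    return add(K_1, add(mul(k_g, G), mul(k_t, T)))     # K_o = K_1 + k_g G + k_t T
```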

@kayabaNerve
Copy link
Author

We can eliminate the burning bug while we're at it by also prefixing the first input's key image (modifying the shared key derivation). I'll leave further thoughts/discussions on this to @j-berman for now.

@tevador
Copy link

tevador commented May 13, 2024

We can eliminate the burning bug while we're at it

While I'm not saying I'm against this, we need to be careful about scope creep.

@tevador

tevador commented May 13, 2024

> Thinking on it some more.. it's probably fairly straightforward to have a separate table for timestamp locked outputs sorted by unlock_time also. Then when adding a block, check the lowest unlock_time output in the table; if the output is unlocked using get_adjusted_time at that height, remove from table and add to tree, and continue iterating removing outputs from the table until encountering an output that should remain locked at that block's get_adjusted_time. This way we wouldn't need to do the conversion from timestamp to block.

The problem is that get_adjusted_time is not monotonic. Adding a new block to the blockchain can in some cases result in outputs becoming locked again, unnecessarily increasing complexity.

I propose the following to be implemented at the time of the fork:

  1. Reduce the coinbase lock time to the default of 10 blocks that applies to all spends. This was discussed in research-lab#104.
  2. Ban new time locks. This would be done by a consensus rule mandating unlock_time to be zero, as a natural extension of monero#9151.
  3. Convert all time-based legacy locks to height-based locks as `unlock_height = height + (unlock_time - get_adjusted_time(height)) / DIFFICULTY_TARGET_V2`, where height is the block height where the time-locked transaction was confirmed (see the sketch below).
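
For item 3, a minimal sketch of the conversion, assuming DIFFICULTY_TARGET_V2 = 120 seconds and that the remaining time is rounded up to whole blocks (the exact rounding rule would be a consensus detail to pin down):

```python
# Minimal sketch of converting a legacy time-based lock into a height-based one.
DIFFICULTY_TARGET_V2 = 120  # seconds per block

def convert_time_lock(unlock_time: int, height: int, adjusted_time: int) -> int:
    # unlock_height = height + (unlock_time - get_adjusted_time(height)) / DIFFICULTY_TARGET_V2,
    # where adjusted_time is get_adjusted_time(height) for the confirming block.
    remaining = max(0, unlock_time - adjusted_time)
    return height + (remaining + DIFFICULTY_TARGET_V2 - 1) // DIFFICULTY_TARGET_V2

# e.g. an output confirmed at height 3_000_000 with unlock_time one day ahead:
# convert_time_lock(adjusted_time + 86_400, 3_000_000, adjusted_time) == 3_000_720
```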

@j-berman

j-berman commented May 13, 2024

> I propose the following to be implemented at the time of the fork

I'm fine with this plan. Getting rid of the feature is still a +1 in my view; as it's been used, it's more likely to cause harm than benefit.

> Adding a new block to the blockchain can in some cases result in outputs becoming locked again, unnecessarily increasing complexity.

Clarifying that, with the approach of maintaining the existing time-based locks, the new rule at the fork would become: the first get_adjusted_time at which a time-based timelock is unlocked == the block in which the output unlocks, and it is guaranteed unlocked from that point on. Either way we're introducing a new rule dictating how the output is unlocked.

Considering how little the feature is used and that there's rough agreement to deprecate it, I think the optimal route is the path of least resistance and least complexity.

In that vein, I think converting time-based to block-based may still be optimal. I think the picture will be clearer once handling of block-based timelocks is implemented first.

@hinto-janai

> I'd rather clarify things from the start, and if you still have questions, follow up there.

Sorry, not sure how to word things correctly. Given output movement like this:

non-FS output (A) ---> FS output (B) ---> output (C)

are these statements correct?:

  • A is known to link to B
  • B is known to come from A
  • C is known to not come from A
  • C is known to come from some FS output (but not necessarily B)

@j-berman thanks, details are appreciated.

> We'll also want a migration for key images also described here

Ah okay, I thought the discussion here led to this not being needed.

> which would only stall the node if the tree is not finished constructing at the fork height where FCMP's begin.

So there will be leeway before the fork height to give nodes time to build the tree?

> Thinking on it some more.. it's probably fairly straightforward to have a separate table for timestamp locked outputs sorted by unlock_time also. Then when adding a block, check the lowest unlock_time output in the table; if the output is unlocked using get_adjusted_time at that height, remove from table and add to tree, and continue iterating removing outputs from the table until encountering an output that should remain locked at that block's get_adjusted_time. This way we wouldn't need to do the conversion from timestamp to block.

> The problem is that get_adjusted_time is not monotonic. Adding a new block to the blockchain can in some cases result in outputs becoming locked again, unnecessarily increasing complexity.

There's already up to 120 seconds of leeway for timelocks; would that be enough for the proposed table to be sufficiently accurate?

@tevador

tevador commented May 13, 2024

> Clarifying that, with the approach of maintaining the existing time-based locks, the new rule at the fork would become: the first get_adjusted_time at which a time-based timelock is unlocked == the block in which the output unlocks. Either way we're introducing a new rule dictating how the output is unlocked.

Fair point. Since we are already changing the rule, it makes sense to go with the simpler option that doesn't need a call to get_adjusted_time for every future block. I would also expect the code to be simpler if all historical time-locks are treated the same.

@j-berman

> So there will be leeway before the fork height to give nodes time to build the tree?

Ideally we'd release monerod containing the update well in advance of the fork height (the v18 release for the last fork was published ~1 month in advance, IIRC), so users could update their nodes and the tree would start building in the background on startup.

> it makes sense to go with the simpler option

I lean toward converting time-based to block-based as well.

> There's already up to 120 seconds of leeway for timelocks; would that be enough for the proposed table to be sufficiently accurate?

We can also add an extra block of leeway for the extrapolated block, but imo the answer to "is it accurate enough" is likely yes. It's probably worth qualifying that with some analysis of blocks' adjusted times in practice, but the get_adjusted_time code appears fairly reasonable to me.

@kayabaNerve
Author

kayabaNerve commented May 13, 2024 via email

@kayabaNerve
Author

kayabaNerve commented May 13, 2024 via email
