@kayabaNerve
Created February 16, 2024 04:42
ZeroExec
The following is a rough sketch of a potential design, largely written in the
present tense.
---
We achieve a constant-time, constant-size trustless ZK proof by recursing a
trustless sublinear Spartan instance to a minimum bound.
We then allow arbitrary users to specify an R1CS proof of size equal to the
minimum bound, allowing said users to, in effect, specify and verify Spartan
proofs of arbitrary size. This ensures a trustless composition is possible, yet
also enables growing the circuit to whatever size a desired proof requires.
This user-specified proof is referred to as the "application proof".
We then create a fully programmable private blockchain with full spent UTXO
privacy, private (non-identified, constant-time and constant-size) program
execution, full amount privacy, and uniform outputs.
The blockchain executes "actions", where each action nullifies a UTXO from a
protocol-level set. The UTXO is effectively a Pedersen vector commitment of
`program_id G value H nullifier I randomness J`, where the action proves:
1) The R1CS constraints of some "application proof" circuit
(entered as variables into the action circuit) hash to the UTXO's program ID.
2) The specified constraints are satisfied for the proof's inputs.
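As a concrete reading of the above (an assumption, since the sketch does not pin
down notation), the shorthand `program_id G value H nullifier I randomness J` is
taken to denote `program_id*G + value*H + nullifier*I + randomness*J` over
independent generators. The following toy sketch models this with integers
modulo a prime in place of a real group, purely for illustration; the names and
parameters are hypothetical.

```python
# Toy model of the UTXO commitment. Integers mod a prime stand in for a
# real prime-order group, so this is illustrative only, not secure.
import secrets

P = 2**255 - 19          # stand-in modulus, not a real group order
G, H, I, J = 2, 3, 5, 7  # stand-in "generators"

def commit(program_id: int, value: int, nullifier: int, randomness: int) -> int:
    # program_id*G + value*H + nullifier*I + randomness*J
    return (program_id * G + value * H + nullifier * I + randomness * J) % P

# An action would open this commitment in-circuit, prove the application
# circuit's R1CS constraints hash to `program_id`, and prove those
# constraints are satisfied for the proof's inputs.
utxo = commit(
    program_id=0x1234,                 # hypothetical hash of the application circuit
    value=100,
    nullifier=secrets.randbelow(P),
    randomness=secrets.randbelow(P),
)
```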
---
In further detail, the exact structures and flows are:
1) For actions `0 .. n`, verify an (ideally folded) membership proof proving the
spent UTXO `program_id G value H nullifier I second_randomness J` refers to a
UTXO in the set with `program_id G value H nullifier I randomness J` where
`second_randomness - randomness` is a known value.
2) For transaction-level context `block_hash`, verify all proofs with inputs:
- The block header for block `block_hash`
- The UTXOs spent
- The UTXOs created
- The nullifiers created (1+ per action)
- The SMT entries created (covered later)
3) Verify all nullifiers are well-formed to be `program_id G nullifier K`, where
the first `nullifier` is the spent output's, yet all further nullifiers
created by the action may use any `nullifier`. Then, accumulate them,
rejecting the action if any are already known.
4) Verify all SMT entries are well-formed to be `program_id G value L`. Then,
accumulate them into the current SMT, rejecting the action if any were
previously inserted.
5) Verify the created outputs.
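A minimal sketch of steps 3 and 4 above (nullifier and SMT-entry
well-formedness plus duplicate rejection), reusing the stand-in group from the
earlier sketch; `K` and `L` are taken to be two further independent generators,
and all names here are assumptions. The well-formedness itself is proven
in-circuit; only the node-side accumulator checks are modeled.

```python
# Toy sketch of the protocol-level accumulators for steps 3 and 4.
P = 2**255 - 19
G, K, L = 2, 11, 13  # stand-in generators

seen_nullifiers: set[int] = set()
smt_entries: set[int] = set()

def nullifier_point(program_id: int, nullifier: int) -> int:
    return (program_id * G + nullifier * K) % P

def smt_entry_point(program_id: int, value: int) -> int:
    return (program_id * G + value * L) % P

def accumulate_action(nullifiers: list[int], entries: list[int]) -> bool:
    """Reject the action if any nullifier is already known or any SMT
    entry was already inserted; otherwise accumulate both."""
    if any(n in seen_nullifiers for n in nullifiers):
        return False
    if any(e in smt_entries for e in entries):
        return False
    seen_nullifiers.update(nullifiers)
    smt_entries.update(entries)
    return True

# Example: one action spending a UTXO and inserting one SMT entry.
assert accumulate_action(
    [nullifier_point(0x1234, 42)], [smt_entry_point(0x1234, 100)]
)
```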
---
Two additional services are offered by the network.
1) A sparse Merkle tree of 32-byte values constructed with a cryptographic hash
function. This is intended for usage within proofs.
2) A blob storage of arbitrarily-sized values constructed with Blake3. This is
intended to have publication to it proven, in order to satisfy data
availability, yet is not expected to be frequently used.
Both tree roots are part of the block header, enabling them to be accessed by
proofs.
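As a rough illustration of how both roots could surface in the block header,
the sketch below uses hypothetical field names; blake2b stands in for Blake3
only because it ships with Python's standard library.

```python
# Hypothetical block-header shape carrying both roots so proofs can take
# them as public inputs. Field names are assumptions.
from dataclasses import dataclass
from hashlib import blake2b, sha256

@dataclass
class BlockHeader:
    parent_hash: bytes
    smt_root: bytes    # root of the sparse Merkle tree of 32-byte values
    blob_root: bytes   # root of the Blake3-based blob storage

    def hash(self) -> bytes:
        return sha256(self.parent_hash + self.smt_root + self.blob_root).digest()

# A proof of publication would take `blob_root` as a public input and
# open a path from the blob's hash up to that root.
blob = b"arbitrary published data"
leaf = blake2b(blob, digest_size=32).digest()  # stand-in for blake3(blob)
```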
---
For the basic transfer to a public key, the address is defined as
`(spend_key I, view_key V)`. The created output is
`program_id G value H (spend_key + H(tx nullifiers || R view_key || o)) I`,
where the program verifies knowledge of a Schnorr signature by `spend_key I`
over the spent outputs' nullifiers and created outputs.
`R` is a one-time key placed in the transaction, of the form `r V`. `o` is the
index of the created output.
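Under the reading that `R view_key` denotes the Diffie-Hellman shared secret
(the sender computes `r (view_key V)`, the recipient computes `view_key R`,
both equal to `r view_key V`), the output-key derivation can be sketched as
below. Integers modulo a prime stand in for a real group and SHA-256 stands in
for the circuit's hash; this only illustrates the key flow, not the actual
construction.

```python
# Toy sketch of the transfer output-key derivation.
import secrets
from hashlib import sha256

P = 2**255 - 19
I_GEN, V_GEN = 5, 17  # stand-in generators I and V

def h_to_scalar(*parts: bytes) -> int:
    return int.from_bytes(sha256(b"".join(parts)).digest(), "little") % P

# Recipient's address: (spend_key I, view_key V)
spend_key, view_key = secrets.randbelow(P), secrets.randbelow(P)
spend_point = (spend_key * I_GEN) % P
view_point = (view_key * V_GEN) % P

# Sender: pick r, publish R = r V, derive the output key for index o.
r = secrets.randbelow(P)
R = (r * V_GEN) % P
tx_nullifiers = b"serialized spent-output nullifiers"  # placeholder
o = 0
shared = (r * view_point) % P  # sender's view of r*view_key*V
offset = h_to_scalar(tx_nullifiers, shared.to_bytes(32, "little"), bytes([o]))
output_key = (spend_point + offset * I_GEN) % P  # (spend_key + H(...)) I

# Recipient: recompute the shared secret as view_key*R and the same offset.
assert (view_key * R) % P == shared
```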
---
For a basic token program, using one action per input spent:
1) Verify the claimed SMT entry exists (the token input).
2) Hash the claimed program ID, amount, and nullifier, and verify the hash equals
the value in the SMT.
3) Verify the ownership of the SMT entry via evaluating the embedded program.
4) Verify the created SMT entries' value is equal to the inputs' value.
5) Output an extra nullifier for the token input's nullifier.
The internal program ID and the nullifier derivation mimic the traditional
transfer flow.
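A minimal sketch of the token-program checks, following the numbered steps
above. The SMT is modeled as a dictionary, ownership verification is stubbed,
and SHA-256 stands in for the circuit's hash; all names are assumptions.

```python
# Toy sketch of the token-program checks (one action per input spent).
from hashlib import sha256

def token_leaf(token_program_id: bytes, amount: int, nullifier: bytes) -> bytes:
    return sha256(token_program_id + amount.to_bytes(8, "little") + nullifier).digest()

def verify_token_action(
    smt: dict[bytes, bytes],
    input_key: bytes,
    token_program_id: bytes,
    amount: int,
    nullifier: bytes,
    created: list[tuple[bytes, int, bytes]],  # (key, amount, nullifier) per created entry
    ownership_ok: bool,                       # result of evaluating the embedded program
) -> list[bytes]:
    # 1) The claimed SMT entry (the token input) exists.
    assert input_key in smt
    # 2) Its value is the hash of the claimed program ID, amount, and nullifier.
    assert smt[input_key] == token_leaf(token_program_id, amount, nullifier)
    # 3) Ownership, via the embedded program (stubbed here).
    assert ownership_ok
    # 4) The created entries' value equals the input's value.
    assert sum(a for _, a, _ in created) == amount
    # 5) Output an extra nullifier for the token input's nullifier.
    return [nullifier]
```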
---
For a rollup such as a zkEVM, after the sequencer has published the state
difference to the blob storage:
1) Prove the claimed current chain head is in the SMT.
2) Verify the state difference was published to the blob storage in a
sufficiently confirmed block.
3) Verify the transition is valid.
4) Output a nullifier for the current chain head.
5) Create an output recursing the program ID and an SMT entry for the most
recent chain head.
Sequencing does not need to be limited, yet may be limited per whatever design
is desired.
This does need to be partnered with a force-sequence mechanism. This would
likely be done by letting anyone publish a blob on the L1, requiring the
sequencer to include all so-published blobs within a certain amount of time.
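A minimal sketch of the rollup action, following the numbered steps above; the
chain-head set, blob index, and confirmation depth are all hypothetical, and
the zkEVM transition check is stubbed.

```python
# Toy sketch of the rollup action's checks.
from hashlib import sha256

MIN_CONFIRMATIONS = 10  # assumed "sufficiently confirmed" depth

def verify_rollup_action(
    smt: set[bytes],             # accumulated SMT entries (as opaque keys)
    blobs: dict[bytes, int],     # blob hash -> block height it was published at
    current_head: bytes,
    state_diff: bytes,
    new_head: bytes,
    current_height: int,
    transition_valid: bool,      # result of verifying the zkEVM transition
) -> tuple[bytes, bytes]:
    # 1) The claimed current chain head is in the SMT.
    assert current_head in smt
    # 2) The state difference was published in a sufficiently confirmed block.
    diff_hash = sha256(state_diff).digest()  # stand-in for the Blake3 blob hash
    assert current_height - blobs[diff_hash] >= MIN_CONFIRMATIONS
    # 3) The transition from current_head to new_head is valid.
    assert transition_valid
    # 4) Output a nullifier for the current chain head, and
    # 5) an SMT entry for the most recent chain head.
    return current_head, new_head
```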
---
The goal of this protocol is to be tailor-optimized for executing ZK proofs,
as Celestia is tailor-optimized for storage. This design is not incompatible
with Celestia, as Celestia may be used for storage and proofs of inclusion in
the blob storage may be replaced by proofs of publication to Celestia.
---
Further optimizations may be done by employing Nova to fold all proofs for
actions together. Then, the block producer may use a meta-proof to replace all
transaction proofs with a running chain proof.
Since programs are specified as R1CS, the base-layer Spartan proof may be
upgraded to any proof system with a compatible R1CS arithmetization. This would
allow future improvements to efficiency without breaking the existing programs.
---
For Monero specifically, the proposal would be to evolve to this post-Seraphis.
The comparison with Darkfi is over the opacity of the execution provided here.
Given the effects on transfers, this is largely a thought experiment, not a
legitimately posed proposal.
When Monero upgrades to achieve quantum security, this may make the most sense.
Post-quantum cryptography is generally not as composable as elliptic curve
cryptography is. Accordingly, having a single SNARK with all parts of the
transaction solely being arguments for said SNARK may make sense. If so, this
design may as well. While the base-layer transaction complexity will notably
increase, it'll enable scaling (which will be further necessary under the load
of post-quantum cryptography).