
ETH 2.0 Interop Survey

Survey v1.0.0.

1. General

  • 1.1 We target v0.8.1 of the spec across Prysm’s repositories

  • 1.2 The major barrier to adopting v0.8.2 is the complexity of the spec-tests configuration, which we see as a significant time burden given other priorities at the moment; we currently intend to focus solely on v0.8.1.

  • 1.3 The biggest pain point for us is the setup of spec tests: every update seems to revamp them significantly, demanding a week of work or more to ensure we are up to speed.

  • 1.4

  • 1.5 The primary bottleneck in development has been efficiently capturing runtime bugs, as well as newly observed edge cases in testnet scenarios. Aside from that, libp2p has of course led to its fair share of problems and slowdowns: some of its abstractions are poorly designed, and its extreme multi-repo approach frequently breaks dependencies in our codebase.

  • 1.6 We anticipate the major bottleneck will be catching runtime bugs between Prysm and non-Prysm nodes once we reach that interop point; we expect significant differences in design decisions that can lead to serious problems when running a multi-client chain.

2. Networking Essentials

  • 2.1 Yes we do

  • 2.2 No

  • 2.3 At the moment we use the Kademlia DHT, but we are about to finish integrating discv5 to replace it. We support static peering as well.

  • 2.4 We use libp2p’s internal handshake handling to do so, and we are in the midst of fully implementing the interop networking spec.

  • 2.5 We support SecIO as required in the networking spec. We could add TLS as provided by go-libp2p-tls.

  • 2.6 Yes

3. Syncing

  • 3.1 Yes

  • 3.2 Yes

  • 3.3 No

  • 3.4 Yes

  • 3.5 Initially, it will be full sequential sync from genesis with multiple peers.
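As an illustration of that strategy, here is a minimal sketch of round-robin batch requests across multiple peers, processed strictly in slot order. The types and names are hypothetical, not Prysm's sync service:

```go
package main

import "fmt"

// peer abstracts a remote node that can serve a contiguous batch of
// block slots; here it simply returns the slot numbers it served.
type peer func(start, count uint64) []uint64

// fetchRange requests [start, end] in fixed-size batches, rotating
// through the peer set, and appends results in strict slot order.
func fetchRange(peers []peer, start, end, batch uint64) []uint64 {
	var chain []uint64
	for i := 0; start <= end; i++ {
		count := batch
		if end-start+1 < batch {
			count = end - start + 1 // final, smaller batch
		}
		p := peers[i%len(peers)] // round-robin peer selection
		chain = append(chain, p(start, count)...)
		start += count
	}
	return chain
}

func main() {
	// A mock peer that "serves" every requested slot.
	serve := func(start, count uint64) []uint64 {
		out := make([]uint64, 0, count)
		for s := start; s < start+count; s++ {
			out = append(out, s)
		}
		return out
	}
	chain := fetchRange([]peer{serve, serve}, 1, 10, 4)
	fmt.Println(chain) // [1 2 3 4 5 6 7 8 9 10]
}
```

A real implementation would verify each batch against the parent roots already processed before advancing, which is what keeps the sync sequential even with concurrent requests.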

  • 3.6 Yes, we do for expired attestations.

4. State Storage

(Out of interest, no hard requirements)

  • 4.1 Other
  • 4.2 Store every block

5. Attestation Aggregation

  • 5.1 Yes

  • 5.2 Aside from that, we support basic aggregation using our BLS package.
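As a sketch of what basic aggregation involves on the participation side (aggregating the signatures themselves is a separate BLS operation), the following hypothetical helper ORs two attestation bitfields while refusing to merge overlapping participants:

```go
package main

import (
	"errors"
	"fmt"
)

// mergeAggregates ORs two attestation participation bitfields of equal
// length, refusing to merge when any bit overlaps (the same validator
// would otherwise be counted twice). Illustrative only, not Prysm's code.
func mergeAggregates(a, b []byte) ([]byte, error) {
	if len(a) != len(b) {
		return nil, errors.New("bitfield length mismatch")
	}
	out := make([]byte, len(a))
	for i := range a {
		if a[i]&b[i] != 0 {
			return nil, errors.New("overlapping participants")
		}
		out[i] = a[i] | b[i]
	}
	return out, nil
}

func main() {
	merged, err := mergeAggregates([]byte{0b00000011}, []byte{0b00000100})
	fmt.Println(merged, err) // [7] <nil>
}
```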

6. Fork Choice

  • 6.1 Yes, we have a YAML-driven framework for fork-choice testing as well as extensive unit testing. One thing we are missing and would like to have is an end-to-end test (e.g. run 1000 blocks through the state transition, then check what the head is). We’ll get to that before interop.
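A toy version of such an end-to-end check could compute the head over a small block tree with an LMD-GHOST-style rule: starting at the root, repeatedly descend into the child whose subtree carries the most attestation weight. The code below is an illustrative sketch under that assumption, not Prysm's fork-choice implementation:

```go
package main

import "fmt"

// node records a block's parent and the weight contributed to it by
// latest attestations.
type node struct {
	parent string
	weight int
}

// ghostHead walks from the root, at each step descending into the child
// whose subtree carries the most total weight, until it reaches a leaf.
func ghostHead(tree map[string]node, root string) string {
	children := map[string][]string{}
	for name, n := range tree {
		if name != root {
			children[n.parent] = append(children[n.parent], name)
		}
	}
	var subtree func(string) int
	subtree = func(b string) int {
		w := tree[b].weight
		for _, c := range children[b] {
			w += subtree(c)
		}
		return w
	}
	head := root
	for len(children[head]) > 0 {
		best := children[head][0]
		for _, c := range children[head][1:] {
			if subtree(c) > subtree(best) {
				best = c
			}
		}
		head = best
	}
	return head
}

func main() {
	tree := map[string]node{
		"genesis": {weight: 0},
		"B":       {parent: "genesis", weight: 1},
		"C":       {parent: "genesis", weight: 3},
		"D":       {parent: "B", weight: 5},
	}
	// B's subtree weighs 1+5=6, beating C's 3, so the head is D.
	fmt.Println(ghostHead(tree, "genesis")) // D
}
```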

  • 6.2 An unoptimized implementation of the spec, in reduced form. We are working on caching, but we are not sure we will have time to get to that before interop.

7. Spec-Tests / Transition Consensus

  • 7.1 We pass all tests
  • 7.2 v0.8.1

8. Block Propagation (Strategy)

  • 8.1 Yes

  • 8.2 Just enough to know the proposer index and verify the signature, but we can adapt to fully transitioning the state if needed.

  • 8.3 No

9. Attestation Propagation (strategy)

  • 9.1 Follow the network spec

10. Block Proposals

  • 10.1 We have not made use of the graffiti yet

  • 10.2 Yes; however, our JSON API only supports base64 encoding for binary data, per the JSON spec. We do not intend to support hex-encoded data for this API at this time.

  • 10.3 We use a combination of Roughtime servers and frequently verify them against the system time.

11. Monitoring

  • 11.1 Yes, we do, and we maintain monitoring through Prometheus/Grafana.

  • 11.2 Yes, we do; we implement the Ethereum APIs repository.

  • 11.3 We use logrus with the option of logging to STDOUT, and we use nicely filterable JSON logging in our testnet Kubernetes cluster using a fluentd format.
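The kind of filterable JSON log line this produces can be sketched with the standard library alone; field names here are illustrative, not Prysm's exact schema (logrus's JSON formatter emits a similar shape):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// logJSON emits one structured log line as JSON, the shape that makes
// logs filterable in a fluentd/Kubernetes pipeline. Go sorts map keys
// when marshaling, so the output is deterministic.
func logJSON(level, msg, ts string, fields map[string]any) string {
	entry := map[string]any{"level": level, "msg": msg, "ts": ts}
	for k, v := range fields {
		entry[k] = v
	}
	b, _ := json.Marshal(entry)
	return string(b)
}

func main() {
	fmt.Println(logJSON("info", "synced block", "2019-08-01T00:00:00Z",
		map[string]any{"slot": 1024}))
	// {"level":"info","msg":"synced block","slot":1024,"ts":"2019-08-01T00:00:00Z"}
}
```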

  • 11.4 We mostly aim to implement and actively support all the endpoints defined in

12. Keystore

  • 12.1 Yes, we do; in fact, we use geth’s keystore package directly for managing and retrieving private keys from a keystore path in Prysm.

  • 12.2 The format has a lot of thought put into it, and it makes sense to have a standard. For interop, it will be a bit more work to implement than what we currently have, but if it has a long-term, beneficial impact then we are OK with it.

13. SSZ

  • 13.1 Yes, in

  • 13.2 Not at the moment; we are quite happy with the benchmarks we see in our testnets today, which are around 50ms for the hash tree root of the beacon state in our latest runs.

14. BLS

15. Chain Start (reference doc)

  • 15.1
      ◦ A kickstart (plain (balance, pubkey, withdrawal_credentials) tuple): No
      ◦ A list of deposits, with incremental proofs (genesis spec): No
      ◦ A list of deposits, with proofs all to the same deposit root: No
      ◦ A series of deposit contract logs from an Eth 1.0 oracle, from a mock/test service: Yes
      ◦ A series of deposit contract logs from a real Eth 1.0 node: Yes
      ◦ A genesis constructed from a (slow and long) stream of deposit log events: Yes
      ◦ A plain prepared genesis BeaconState object from SSZ: No

  • 15.2 Yes, we generate a keystore full of validator private keys that we can use to predictably launch the chain from genesis.

16. Configuration & Performance

  • 16.1 Yes

  • 16.2 Not particularly, we’re happy with the recommended config parameters

  • 16.3 We have a set of constant parameters that are hard-coded in our client. At runtime, we decide which set of parameters to use (either minimal or mainnet)
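A sketch of that pattern: two hard-coded parameter sets, with one chosen at runtime. The struct and a few constants are illustrative of the v0.8.x minimal/mainnet split, not Prysm's full config:

```go
package main

import "fmt"

// BeaconConfig holds a few hard-coded spec parameters.
type BeaconConfig struct {
	SlotsPerEpoch       uint64
	SecondsPerSlot      uint64
	TargetCommitteeSize uint64
}

// Two parameter sets mirroring the spec's mainnet and minimal presets.
var (
	MainnetConfig = BeaconConfig{SlotsPerEpoch: 64, SecondsPerSlot: 6, TargetCommitteeSize: 128}
	MinimalConfig = BeaconConfig{SlotsPerEpoch: 8, SecondsPerSlot: 6, TargetCommitteeSize: 4}
)

// configFor selects the active parameter set at runtime; anything other
// than "minimal" falls back to mainnet.
func configFor(name string) BeaconConfig {
	if name == "minimal" {
		return MinimalConfig
	}
	return MainnetConfig
}

func main() {
	cfg := configFor("minimal")
	fmt.Println(cfg.SlotsPerEpoch) // 8
}
```

Keeping both sets compiled in and switching by name keeps the binary identical across testnet and mainnet-style runs, which matches the runtime-selection approach described above.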

17. Building & deploying

  • 17.1 Yes, it is a simple bazel build //beacon-chain

  • 17.2 Yes, we provide Docker images for our releases, and we use Bazel as our build system.

  • 17.3 Yes we do, with automated canary testing and a comprehensive Kubernetes cluster for all our services

  • 17.4 We mostly recommend that our users run our Docker images, which are supported on every architecture that can run Docker. Aside from that, Bazel also works on most architectures. We also have rudimentary support for ARM, contributed by some of our users/contributors who have expressed interest, but it is not a priority at the moment.

18. Conclusion

  • 18.1

  • 18.2

  • 18.3 We believe many of the bottlenecks and bugs we see at runtime stem from libp2p, but given that we now implement the newest networking spec with its particular configuration and setup suggestions, we may see some of our previous problems start to go away.

  • 18.4 The easiest way is to avoid any major revamps of the configuration or test setup that would require us to change our tooling or dedicate time and resources to changes that detract from the larger development work at hand.

  • 18.5 We have no explicit asks for any tool. We have solutions for all of our needs at the moment.
