[RFC-0002] Casper Event Standard Proposal

Author(s) ihor@make.services, david.hernando@make.services
Status Implemented in casper-event-standard
Created 2022-11-14
Updated 2023-02-01

This proposal describes a Casper Event Standard that could mitigate the lack of native events until they are implemented.

Background

Web 3 applications rarely exist in a vacuum and require integration with Web 2 solutions. Such integration requires the Web 2 solution to observe the on-chain changes made by the corresponding smart contracts. To be trackable, a smart contract should notify the observer that certain domain-level events happened during execution.

Currently, the Casper Network doesn't provide contract developers with a native event implementation. Because of that, contract developers have had to come up with workarounds that imitate events in the deploy execution results by creating identifiable WriteCLValue transforms containing the event data. At the moment, we are aware of three event standards on the Casper Network.

Known Casper event standards

1. The standard that came with the CEP-47 NFT contract implementation

The events are presented as string-to-string maps. Events are emitted by writing to a URef that points to the event data.
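
The snippet below is a minimal sketch of how such an event might be emitted (illustrative only, not the verbatim CEP-47 code; it assumes the casper_contract API and omits the no_std/alloc plumbing of a real contract). The resulting WriteCLValue transform is shown in the example that follows.

use std::collections::BTreeMap;

use casper_contract::contract_api::storage;

// Sketch: a CEP-47-style mint event is a String -> String map written to a
// freshly created URef, one URef per event.
pub fn emit_mint_one(package_hash: &str, recipient: &str, token_id: &str) {
    let mut event: BTreeMap<String, String> = BTreeMap::new();
    event.insert("contract_package_hash".to_string(), package_hash.to_string());
    event.insert("event_type".to_string(), "cep47_mint_one".to_string());
    event.insert("recipient".to_string(), recipient.to_string());
    event.insert("token_id".to_string(), token_id.to_string());

    // Writing the map to a new URef produces a transform like the one below.
    let _ = storage::new_uref(event);
}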

Example:

{
  "key": "uref-c384457b128d8ff623f6a8212f974fc860e99616174dd41cc029c07bc746ec3d-000",
  "transform": {
    "WriteCLValue": {
      "bytes": "0400000015000000636f6e74726163745f7061636b6167655f6861736840000000663833343939383662346135656236616662396132626563396639313032356266646433393532666364336330333735323130343039303563343433386132300a0000006576656e745f747970650e00000063657034375f6d696e745f6f6e6509000000726563697069656e744e0000004b65793a3a4163636f756e7428613564323737346136323363646165316230376263373833323433616265373039306439383437616431653335646632656230323930303561363034636437322908000000746f6b656e5f69640400000034313535",
      "parsed": [
        {
          "key": "contract_package_hash",
          "value": "f8349986b4a5eb6afb9a2bec9f91025bfdd3952fcd3c037521040905c4438a20"
        },
        {
          "key": "event_type",
          "value": "cep47_mint_one"
        },
        {
          "key": "recipient",
          "value": "Key::Account(a5d2774a623cdae1b07bc783243abe7090d9847ad1e35df2eb029005a604cd72)"
        },
        {
          "key": "token_id",
          "value": "4155"
        }
      ],
      "cl_type": {
        "Map": {
          "key": "String",
          "value": "String"
        }
      }
    }
  }
}

PROS

  • It's easy to identify events in the deploy execution results
  • No additional RPC requests to the network are needed to collect the latest contract state

CONS

  • This solution is not storage space efficient, because the event fields are duplicated for every event and all the information must be converted to strings
  • It's hard to verify that the event was emitted by the claimed contract; a malicious contract could be constructed that emits events pretending to belong to a different contract

Check the implementation here.

2. The standard that came with the CEP-78 NFT contract implementation

A special events dictionary is created under the contract's named keys. A new item is inserted for each action that happens to a token (mint, transfer, approve, burn). The dictionary key is a hash of token_id + token_event_number, and the value is the action encoded as a u8. A helper id_tracker dictionary tracks the latest event number for each token.
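
A minimal sketch of this bookkeeping is shown below (names and API usage are illustrative, not the actual CEP-78 code; it assumes the casper_contract API and the hex crate for key encoding, and omits the no_std/alloc plumbing of a real contract).

use casper_contract::contract_api::{runtime, storage};
use casper_types::URef;

// Sketch: record an action for a token by writing the action byte under
// blake2b(token_id + token_event_number) and bumping the per-token counter.
fn record_token_event(events_dict: URef, id_tracker: URef, token_id: &str, action: u8) {
    // Latest event number for this token, tracked in the helper dictionary.
    let event_number: u64 = storage::dictionary_get(id_tracker, token_id)
        .unwrap_or(None)
        .unwrap_or(0);

    // Dictionary key = hash of token_id + token_event_number.
    let hash = runtime::blake2b(format!("{}{}", token_id, event_number));
    let key = hex::encode(hash);

    storage::dictionary_put(events_dict, &key, action);
    storage::dictionary_put(id_tracker, token_id, event_number + 1);
}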

PROS

  • The solution is very storage space efficient

CONS

  • It's impossible to identify events in the deploy execution results, because doing so requires knowing the id of the changed token, which means that such events cannot be tracked in real time by an external observer
  • Querying the network is required to learn the latest state of a token. This prevents building a scalable real-time solution on top of SSE stream consumption, because the extra data must be requested from the same source node to avoid possible data races.
  • It's impossible to recover event details if two events happen to one token within the same block

Check the implementation here.

3. The standard proposed by Maciej Zieliński that came with his CR DAO contracts implementation

Every contract has its own dictionary called events. All events are put there as raw bytes in the form of Option<Vec<u8>>. In addition, every contract has an events_length named key of type u32 that tracks how many events are in the dictionary.

The event data is encoded according to a predefined schema known to both the contract developer and the external observer, but an event always begins with its name, which is used to parse the rest of the data correctly.

Example:

{
    "key": "dictionary-5a0ecec0ac3e7bd8e327e290ed61ac7a6b2d7b9c3ded6eeaaee4fea9b9e34add",
    "transform": {
        "WriteCLValue": {
            "bytes": "3600000001310000000c0000004f776e65724368616e676564003b4ffcfb21411ced5fc1560c3f6ffed86f4885e5ea05cde49d90962a48a14d950d0e0320000000494a7cccc18a1414715008dd9550e8e03ab1746ac7dfa7c4db3e39460d9c81514000000031316461366431663736316464663962646234633964366535333033656264343166363138353864306135363437613161376266653038396266393231626539",
            "cl_type": "Any",
            "parsed": null
        }
    }
}

Event parsing looks like reading from bytes with a remainder:

// Extract CLValue.
let (cl_value, rem): (CLValue, _) = FromBytes::from_bytes(&bytes).unwrap();

// Parse the CLValue into an Option<Vec<u8>>
let bytes: Option<Vec<u8>> = cl_value.into_t().unwrap();
let bytes = bytes.unwrap();

// Try to extract a String from the bytes. It will be the name of the event.
let (event_name, bytes): (String, _) = FromBytes::from_bytes(&bytes).unwrap();

// Because we know what to expect, the name can be matched.
match event_name.as_str() {
    "OwnerChanged" => {
        // We know what the fields of the event are, so they can be extracted
        // one by one.
        let (_address, bytes): (Address, _) = FromBytes::from_bytes(bytes).unwrap();
        // After the extraction, no bytes should be left to parse.
        assert_is_empty(bytes);
    },

    _ => panic!("Unknown event: {}", event_name)
};

PROS

  • This implementation is more space efficient than the CEP-47 one
  • No additional RPC requests to the network are needed to collect the latest contract state

CONS

  • It's hard to identify the events in the deploy execution results, which makes it impossible to track such events at scale
  • The event schema must be known in advance to be able to identify and track the events

Check the implementation here.

As we can see, the existing implementations don't provide a good trade-off between the cost overhead that events add to contract execution and the trackability of the events.

Tracking dictionary-based events like those in the CEP-78 or CRDAO implementations requires pre-generating the possible next dictionary item hashes for all the contracts using the following formula:

dictionary item hash = blake.blake2b(dictionary URef hash, dictionary key, 32)
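
As an illustration, the derivation could be computed along the following lines (a minimal sketch assuming the blake2 crate; the node's actual implementation lives in casper-types and may differ in detail):

use blake2::digest::{Update, VariableOutput};
use blake2::Blake2bVar;

// Sketch: derive the dictionary item address from the dictionary seed URef
// address and the dictionary item key.
fn dictionary_item_hash(dictionary_uref_addr: &[u8; 32], dictionary_key: &str) -> [u8; 32] {
    let mut hasher = Blake2bVar::new(32).expect("32 is a valid blake2b output size");
    hasher.update(dictionary_uref_addr);
    hasher.update(dictionary_key.as_bytes());

    let mut out = [0u8; 32];
    hasher
        .finalize_variable(&mut out)
        .expect("the output buffer has the expected size");
    out
}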

With over 6,000 contracts already on the Testnet, this approach is not scalable. Imagine having hundreds of thousands or millions!

However, the dictionary item bytes also contain the original __events URef:

dictionary item bytes = data + dictionary URef hash + dictionary key

Based on that fact, we could create a dictionary-based event standard that produces verifiable (and therefore identifiable) events, but we still need to improve scalability and solve the event schema problem.

Objective

The objective of this proposal is to come up with a contract event standard that:

  1. Is more cost-efficient compared to the CEP-47 implementation
  2. Produces events that are easily identifiable in the deploy execution results
  3. Doesn't require querying the network to collect all the relevant data
  4. Doesn't require the contract implementation knowledge to read the events
  5. Ensures that the events belong to the claimed contract
  6. Allows event consumption through SSE at scale
  7. Can serve as the single event standard on the Casper Network until native events arrive

Motivation

The inability to track the on-chain logic makes the new CEP-78 NFT standard and the Casper Network less attractive. This proposal aims to solve this problem and make contracts trackable by contract developers and tools like CSPR.live.

Proposal

The proposal is based on Maciej's standard but adds several changes.

  1. All contracts following the proposed event standard should have a named key called event_standard holding the event standard name (to be defined)
  2. Events are emitted by storing values in the dictionary defined under the contract's __events named key
  3. Event keys are auto-incremented integers provided by the counter stored in the __events_length named key
  4. The event body is encoded as bytes in the form of Vec<u8>
  5. The event body starts with a single string composed of the event_ prefix followed by the event name (event_ + <event name> + <the rest of the body>)
  6. The rest of the body contains the event data according to the event schema defined in the __event_schemas named key as Vec<(String, CLType)>. For example:
[
	{
		"key": "cep47_transfer",
		"value": [
			{
				"key": "owner",
				"value": "Key"
			},
			{
				"key": "recipient",
				"value": "Key"
			},
			{
				"key": "token_id",
				"value": "String"
			}
		]
	}
]

The differences from the standard proposed by Maciej are points 5) and 6). Forcing all the events to start with the event_ prefix will simplify filtering events from other writes to dictionaries. In addition, having contracts declare their event schemas will make event tracking implementation-agnostic.
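
To make the expected layout concrete, below is a minimal sketch of how a contract might emit an event under this proposal (the emit function and its shape are illustrative, not the casper-event-standard crate's actual API; the no_std/alloc plumbing of a real contract is omitted):

use casper_contract::contract_api::{runtime, storage};
use casper_types::{bytesrepr::ToBytes, URef};

// Sketch: serialize "event_" + <event name> followed by the event data and
// store it under the next auto-incremented key of the __events dictionary.
pub fn emit<T: ToBytes>(event_name: &str, event_data: T) {
    // Both named keys are assumed to have been created during installation.
    let events_uref: URef = runtime::get_key("__events")
        .and_then(|key| key.into_uref())
        .expect("the __events dictionary is missing");
    let length_uref: URef = runtime::get_key("__events_length")
        .and_then(|key| key.into_uref())
        .expect("the __events_length counter is missing");

    // Event body = "event_" + <event name> as one string, then the event data.
    let mut body: Vec<u8> = Vec::new();
    body.append(&mut format!("event_{}", event_name).to_bytes().expect("string serializes"));
    body.append(&mut event_data.to_bytes().expect("event data serializes"));

    // Use the current counter value as the dictionary key and increment it.
    let index: u32 = storage::read(length_uref)
        .expect("counter is readable")
        .unwrap_or(0);
    storage::dictionary_put(events_uref, &index.to_string(), body);
    storage::write(length_uref, index + 1);
}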

Event Parsing Logic

The event parsing logic will be the following:

  1. Upon every contract deploy, the parser should check for the event_standard named key. If it's the standard described in this proposal, the parser should store a mapping from the __events URef to a pair of values: the contract package hash and the event schemas provided in the __event_schemas named key
  2. When parsing deploy execution results, the parser should check all writes to dictionaries, identified as transforms whose key starts with dictionary-
  3. For each write to a dictionary, the parser should check if it's an event by verifying that it's a vector of bytes that starts with a string beginning with the event_ prefix
  4. If the write to a dictionary is indeed an event, then the URef address should be read from the remainder of the dictionary item bytes, and the corresponding contract package and event schemas should be looked up in the mapping described in 1)
  5. If there is a schema defined for the event, then the parser should parse the event according to that schema (a sketch of the core checks follows the list).
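
Below is a minimal off-chain sketch of steps 2-4 (illustrative only: the RPC/SSE plumbing and the schema lookup of step 1 are omitted, the function and parameter names are hypothetical, and the byte layout of dictionary items is assumed to match the example shown for standard 3).

use casper_types::bytesrepr::FromBytes;
use casper_types::CLValue;

/// Returns (event name, remaining event payload, dictionary seed URef address)
/// if a dictionary write looks like an event emitted under the proposed standard.
fn try_parse_event(transform_key: &str, item_bytes: &[u8]) -> Option<(String, Vec<u8>, Vec<u8>)> {
    // Step 2: events can only appear as writes to dictionaries.
    if !transform_key.starts_with("dictionary-") {
        return None;
    }

    // Dictionary item bytes = serialized CLValue + dictionary URef hash + dictionary key.
    let (cl_value, rem): (CLValue, &[u8]) = FromBytes::from_bytes(item_bytes).ok()?;

    // Step 3: the event body is a Vec<u8> whose first field is a String
    // carrying the event_ prefix.
    let body: Vec<u8> = cl_value.into_t().ok()?;
    let (name, payload): (String, &[u8]) = FromBytes::from_bytes(&body).ok()?;
    if !name.starts_with("event_") {
        return None;
    }

    // Step 4: recover the seed URef address from the remainder; it identifies
    // the emitting contract package via the mapping built in step 1.
    let (uref_addr, _dictionary_key): (Vec<u8>, &[u8]) = FromBytes::from_bytes(rem).ok()?;

    Some((name, payload.to_vec(), uref_addr))
}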

Questions and Discussion Topics

  1. Are there any other ways to easily mark writes to dictionaries as events instead of prefixing them with event_?