zsfelfoldi/storagefee.md

Last active Feb 15, 2021
A pay-for-storage model for evolving tree hashed data structures

Author: Zsolt Felfoldi (zsfelfoldi@ethereum.org)

This proposal describes a relatively lightweight and easy-to-maintain structure that is directly applicable to implementing a "storage fee" scheme on the EVM state and is also flexible enough to accommodate future storage schemes. It allows renting each account and contract storage entry separately, solving the problem of funding contracts with many user entries, like ERC-20 tokens. Its economic model lets users know the exact expiration time of their entries when paying for rent. It does not need removal transactions and allows lazy garbage collection while still avoiding "tree rot" (scenarios where it is not instantly clear whether a transaction is executable). The economic model can also be considered separately from the proposed data structure. It might require some more formal study, but I believe it has good stability/fairness properties.

Data structure

We differentiate between two kinds of tree nodes:

  • value node: contains actual data associated with the address it's located at (might also have children). EVM account/contract objects (including contract code) and contract storage entry leaves belong to this category.
  • branch node: no data associated, has children.

The chain history is divided into epochs (an epoch is somewhere between a week and a few months long). Individual addresses of the tree address space are either active (rented until the end of an epoch), inactive (rent expired) or empty (null value, no rent). Value nodes are stored for active addresses (even if their value is null). Branch nodes are considered active if they have at least one active child. Active nodes (either value or branch) always have a limited active lifetime (lastActiveEpoch is stored with every node). Value nodes can have a lastActiveEpoch independent from their children, while branch nodes store the highest lastActiveEpoch of their children. (Note: the garbage collection mechanism of the implementation may also want to store the earliest expiration that hasn't been garbage collected yet, but that does not need to be part of the consensus.)

If a value node becomes inactive then its children always become inactive and can be forgotten. Inactivation is therefore automatic; there is no need for inactivation lists, and mass inactivation events cannot cause data processing congestion. Still, the active/inactive state of each entry is exactly defined by consensus. Transactions trying to read inactive addresses always fail, and the cost of the execution is still charged. Note: inactive addresses are unreadable even if their last value was a null value.
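The node states above can be sketched as follows. This is a minimal illustration, not consensus code; the field and function names (`last_active_epoch`, `is_active`, and so on) simply mirror the terminology of the text.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    is_value_node: bool                 # value node vs. branch node
    last_active_epoch: Optional[int]    # None => empty (never rented)
    children: list = field(default_factory=list)

def is_active(node, current_epoch):
    """A node is active while its rent covers the current epoch."""
    if node.last_active_epoch is None:
        return False
    return node.last_active_epoch >= current_epoch

def branch_last_active_epoch(branch):
    """Branch nodes store the highest lastActiveEpoch of their children."""
    epochs = [c.last_active_epoch for c in branch.children
              if c.last_active_epoch is not None]
    return max(epochs) if epochs else None
```

Because a branch stores the maximum of its children's expirations, a branch expires exactly when its last active child does, which is what makes inactivation automatic.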

Renting an address means setting the lastActiveEpoch field of a value node to a value greater than or equal to the current epoch. Pricing is discussed later. Nodes have no owners, anyone can rent any address and anyone who is permitted by state/contract logic to write to a given address can do so regardless of who rented the address. Active addresses and empty addresses (null value, no rent) are readable, inactive addresses are unreadable regardless of whether their last value was null or not.

Permitted operations are:

  • read an active or empty address
  • write to an active or empty address
  • rent an empty address
  • extend the rent of an active address
  • reactivate an inactive address (makes its old value readable again; Merkle proof needed)
  • rent an inactive address and instantly overwrite its value (Merkle proof of old value is not needed)

Reactivation is a special transaction that cannot be performed by a contract. Wallets/apps should be able to take care of important addresses and reactivate them if necessary before performing other operations. Renting addresses and extending existing rents on write operations can be automatic, though; the transaction can contain a lastActiveEpoch value until which it automatically rents every address it writes to. Since the operations allowed for contracts never need Merkle proofs, this scheme does not require access lists. It is the caller's responsibility, though, to know when the transaction might hit inactive entries and to reactivate them before the call.

Keeping or removing stubs

The described logic keeps stubs (hash references) for inactive children/subtrees forever. Since stubs can appear at any level in the tree, many inactive entries at the same level (like removed dust accounts) will only cause a logarithmic overhead (subtrees containing all inactive entries are removed and only a single stub is retained at the common prefix of their addresses). This logic probably works better with binary trees than "hexary" ones. Adding further cleanup logic is possible though; if a branch node has only one active child and the rest of its children have expired a long time ago (longer than a specified reactivation period) then it may be permanently removed by a special transaction (after which reactivation is no longer possible). The economic and security implications of such a mechanism are not studied here.
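The logarithmic-overhead claim rests on the fact that a fully inactive subtree is replaced by a single stub at the longest common prefix of its addresses. A small sketch with bit-string addresses (as in a binary tree) illustrates this; it is not part of the proposal's logic, just an aid to the argument.

```python
def common_prefix(addresses):
    """Longest common bit prefix of a set of addresses."""
    prefix = addresses[0]
    for addr in addresses[1:]:
        i = 0
        while i < len(prefix) and i < len(addr) and prefix[i] == addr[i]:
            i += 1
        prefix = prefix[:i]
    return prefix

# Four expired dust accounts in the same subtree collapse to a single
# stub stored at their common prefix "1010" -- one hash reference
# instead of four entries.
expired = ["101000", "101001", "101010", "101011"]
stub_position = common_prefix(expired)
```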

Rent economics

Our goal is to limit the storage capacity requirements of the tree database. We assign a cost to each active address which consists of the actual size of the data stored there and an estimated logarithmic branch node overhead plus estimated database overheads (assuming a typical/practical database representation of the tree). We keep track of the total storage costs per epoch in epochCostCounter[epochIndex]. When an address is rented for the current or a future epoch then its storage cost is added to the epoch's total cost counter. (Note: this is not an issue with EVM but if the size of the storage entries is variable then the rent should be pre-paid for an estimated maximum size so that it does not need to be updated each time the value is changed.) We want to keep the total cost of each epoch under totalCostLimit. We gradually sell the space in future epochs; in each block we calculate epochCostLimit(epochIndex) for each epoch with the following formula:

epochCostLimit(epochIndex) = totalCostLimit / 2**((epochIndex*epochLength-blockNumber)/futureSellConstant)

This means that as we sell space further in the future the space available for sale decreases exponentially, while the space available in each epoch is increased exponentially over time, until the end of the epoch, when it reaches totalCostLimit. For example, if futureSellConstant is the number of blocks per year then half of the space in any epoch is available for sale one year before the end of the epoch; a quarter of it is available two years in advance, and so on.
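The epochCostLimit formula can be checked numerically. The constants below are made-up examples, not proposed parameters; in practice they would be consensus constants.

```python
# Assumed example parameters (not from the proposal):
TOTAL_COST_LIMIT = 1_000_000       # max total storage cost per epoch
EPOCH_LENGTH = 100_000             # blocks per epoch
FUTURE_SELL_CONSTANT = 2_400_000   # roughly blocks per year

def epoch_cost_limit(epoch_index, block_number):
    """totalCostLimit / 2**((epochIndex*epochLength - blockNumber) / futureSellConstant)"""
    blocks_until_epoch_end = epoch_index * EPOCH_LENGTH - block_number
    if blocks_until_epoch_end <= 0:
        return TOTAL_COST_LIMIT    # epoch end reached: full limit available
    return TOTAL_COST_LIMIT / 2 ** (blocks_until_epoch_end / FUTURE_SELL_CONSTANT)
```

With these numbers, an epoch ending exactly one futureSellConstant (one year) in the future has half its space on sale, matching the example in the text.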

Pricing (price per storage cost per epoch) is adjusted in a way that flexibly enforces the current epoch cost limit:

soldRatio = epochCostCounter[epochIndex] / epochCostLimit(epochIndex)
storagePrice(epochIndex) = basePrice * 100**(-COT(soldRatio*PI))

where COT is the trigonometric cotangent function COS(x)/SIN(x). basePrice is exponentially adjusted up or down at the end of each epoch based on soldRatio (the ratio of final sold space vs. the theoretical limit). If more than 50% of the limit was sold then the basePrice is adjusted up, otherwise down.

This price control scheme ensures that storage utilization is on average 50% of the theoretical maximum, leaving enough headroom for the control mechanism. The exponential nature of the function ensures that even if basePrice is very far off (for example because the ETH price is changing rapidly) the resulting offset in COT(soldRatio*PI) will be logarithmic. Since the cotangent function approaches ± infinity as soldRatio approaches 0 or 1, this ensures that the price will always be cheap enough if soldRatio is close to 0 and always too expensive if it is close to 1. Note that we don't need to actually calculate powers of cotangents each time (we should not use floats in consensus anyway); instead we can use a similar precalculated piecewise linear function of storagePrice vs. soldRatio, making it both easy to specify exactly and very cheap to calculate.
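The price curve and a precalculated piecewise-linear stand-in can be sketched as follows. The sample spacing and range here are arbitrary choices for illustration; a consensus version would fix an exact integer table.

```python
import math

def storage_price_exact(base_price, sold_ratio):
    """basePrice * 100**(-COT(soldRatio*PI)): cheap near 0, exploding near 1."""
    cot = math.cos(sold_ratio * math.pi) / math.sin(sold_ratio * math.pi)
    return base_price * 100 ** (-cot)

# Precalculated samples of the same curve (soldRatio from 0.05 to 0.95);
# run-time evaluation then needs only a lookup and one interpolation.
SAMPLES = [(r / 100, storage_price_exact(1.0, r / 100)) for r in range(5, 96)]

def storage_price_approx(base_price, sold_ratio):
    for (r0, p0), (r1, p1) in zip(SAMPLES, SAMPLES[1:]):
        if r0 <= sold_ratio <= r1:
            t = (sold_ratio - r0) / (r1 - r0)
            return base_price * (p0 + t * (p1 - p0))
    raise ValueError("soldRatio outside tabulated range")
```

At soldRatio = 0.5 the cotangent is zero, so the price equals basePrice; this is the equilibrium point the basePrice adjustment steers toward.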

When renting an address for multiple future epochs the price (ETH/storageCost) is calculated for each yet-unpaid epoch; the prices are added together and multiplied by the maximum storage cost of the address (the rent cost paid by the transaction sender is burned). The reason we use relatively large epochs is that the epochCostCounter updates and the price calculations can then be processed in a limited number of iterations on an in-memory structure. This formula may seem complicated at first and definitely needs further analysis, but it is my strong intuitive conviction that selling limited resources into the future should look something like this: setting hard limits with price control while gradually putting the supply on the market so that no one can buy it all up while it's cheap.
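The multi-epoch rent calculation reduces to a short sum. The price table below is made up for illustration; each entry stands in for storagePrice(epochIndex) at the time of the transaction.

```python
def rent_cost(price_per_epoch, first_unpaid_epoch, last_active_epoch,
              max_storage_cost):
    """Sum the per-epoch price over the yet-unpaid epochs, then multiply
    by the address's maximum storage cost. The result is burned."""
    total_price = sum(price_per_epoch[e]
                      for e in range(first_unpaid_epoch, last_active_epoch + 1))
    return total_price * max_storage_cost

# Renting epochs 10..12 for an address with storage cost 100:
prices = {10: 2.0, 11: 2.5, 12: 3.0}   # hypothetical storagePrice values
cost = rent_cost(prices, 10, 12, 100)  # (2.0 + 2.5 + 3.0) * 100
```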

@dankrad commented Feb 12, 2021

> It's the caller's responsibility though to know when the transaction might hit an inactive entry and reactivate them before the call.

That's effectively an access list, however.

> Since stubs can appear at any level in the tree, many inactive entries at the same level (like removed dust accounts) will only cause a logarithmic overhead (subtrees containing all inactive entries are removed and only a single stub is retained at the common prefix of their addresses).

It would be interesting to see actual space savings. It feels to me that with average usage, this scheme will still require quite a lot of storage for all these stubs. If 50% of addresses are active and 50% expired, what are the actual space savings? Probably much less than 50%, maybe only 25%?

The market mechanism is not very flexible -- in particular, it does not allow for speculation on future storage price. I think it would be a good idea to add a way to buy storage and sell it later (e.g. by being able to buy storage tokens for a certain epoch, without actually writing anything), because otherwise you are effectively subsidising use cases that have very well predictable storage needs over those for which it is less predictable.
