
Clarification on contract redeployment

Abstract

This write-up lists issues caused by contract redeployment and the currently vague specification of its behavior, and proposes a precise definition of the expected behavior in such cases.

Background

Contract deployment
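For background: since EIP-1014 (CREATE2), a contract address can be derived deterministically as keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:], which is what makes redeployment at the same address possible in the first place (deploy, selfdestruct, then deploy again with the same salt and init code). The sketch below shows only the shape of that derivation; it substitutes SHA-256 for Keccak-256 (which is not in the Go standard library), so it will not reproduce real Ethereum addresses.

```go
package main

import (
	"crypto/sha256" // stand-in: real CREATE2 uses Keccak-256
	"encoding/hex"
	"fmt"
)

// create2Address sketches the CREATE2 derivation from EIP-1014:
// address = hash(0xff ++ deployer ++ salt ++ hash(initCode))[12:].
// SHA-256 stands in for Keccak-256 here, so the structure is right
// but the resulting addresses are not real mainnet addresses.
func create2Address(deployer [20]byte, salt [32]byte, initCode []byte) [20]byte {
	codeHash := sha256.Sum256(initCode)

	preimage := make([]byte, 0, 1+20+32+32)
	preimage = append(preimage, 0xff)
	preimage = append(preimage, deployer[:]...)
	preimage = append(preimage, salt[:]...)
	preimage = append(preimage, codeHash[:]...)

	full := sha256.Sum256(preimage)
	var addr [20]byte
	copy(addr[:], full[12:]) // keep the last 20 bytes, as Ethereum does
	return addr
}

func main() {
	var deployer [20]byte
	var salt [32]byte
	initCode := []byte{0x60, 0x00} // hypothetical init code

	a1 := create2Address(deployer, salt, initCode)
	a2 := create2Address(deployer, salt, initCode)
	fmt.Println("deterministic:", a1 == a2) // same inputs -> same address
	fmt.Println("address:", hex.EncodeToString(a1[:]))
}
```

This determinism is the crux of the redeployment question: after a selfdestruct, the same (deployer, salt, init code) triple maps to the same address again, so the expected state of a redeployed contract must be pinned down precisely.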




Leveldb overview

Through compaction, LevelDB migrates data from the lower levels down to the deeper ones, so the deeper a level is, the more data it holds.

For example, the size limit of level1 is 100MB (by default), level2 is 1GB, level3 is 10GB, and so on, each level ten times larger than the previous one.

If the LevelDB instance is full, roughly 90% of the data sits in the deepest level.

Usually it is not full, e.g. level0 holds 100MB, level1 1GB and level2 5GB, but the conclusion still holds: most of the data lives in the deepest level.
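Under the 10x-per-level growth described above (the figures are the illustrative ones from this note, not LevelDB's exact defaults), the deepest-level share is easy to check:

```go
package main

import "fmt"

// levelLimits returns illustrative per-level size limits (in MB),
// with each level ten times the size of the previous one.
func levelLimits(numLevels int, baseMB int) []int {
	limits := make([]int, numLevels)
	size := baseMB
	for i := range limits {
		limits[i] = size
		size *= 10
	}
	return limits
}

// deepestShare is the fraction of a full database that sits in the
// deepest level, given the per-level size limits.
func deepestShare(limits []int) float64 {
	total := 0
	for _, l := range limits {
		total += l
	}
	return float64(limits[len(limits)-1]) / float64(total)
}

func main() {
	limits := levelLimits(4, 100) // 100MB, 1GB, 10GB, 100GB
	fmt.Println(limits)
	fmt.Printf("deepest level holds %.0f%% of a full database\n", 100*deepestShare(limits))
}
```

With four levels the deepest holds 100000/111100 ≈ 90% of the total, matching the 90% figure above.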

Database thoughts

Issue explanation

Trie node reads suffer from huge read amplification. According to Peter's benchmark statistics, the real data size is 35GB while the total disk read is 10,000GB, i.e. a read amplification of around 280.
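A quick sanity check of that figure, using the numbers quoted above:

```go
package main

import "fmt"

// readAmplification is total bytes read from disk divided by the
// size of the useful data actually served.
func readAmplification(totalReadGB, realDataGB float64) float64 {
	return totalReadGB / realDataGB
}

func main() {
	// Figures from Peter's benchmark quoted above.
	amp := readAmplification(10000, 35)
	fmt.Printf("read amplification ≈ %.0fx\n", amp) // ≈ 286x, i.e. roughly 280
}
```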

I think this issue exists not only in state serving (for fast sync) but also in block processing. We currently have two state-access modes during block processing:

Read from the trie directly

Geth metrics

Blockchain

General metrics

The time to execute and verify a block can be broken into the following parts:

  • Block execution (reading data from the database, running contracts in the EVM, writing dirty content back into the trie, and recomputing the trie hash)
  • Block verification (checking the tx root, receipt root, etc.; its share is negligible)
{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
// unsetRefs removes all internal node references (hashnode). It
// should be called after a trie has been constructed from two edge
// proofs.
//
// It's the key step for range proofs. All visited nodes should be
// marked dirty since the node content might be modified. Besides,
// some fullnodes can end up with only one child, which is normally
// disallowed. But if the proof is valid, the missing children will
// be filled in; otherwise the entire trie will be discarded anyway.
func unsetRefs(root node, remove bool, removeLeft bool) {
switch rn := root.(type) {