monero-project/research-lab#12 wrote:
I believe it's time to seriously review the proof-of-work algorithm used in Monero in light of the very serious consequences we have all witnessed with mining centralization in the Bitcoin community.
Some urgency might not be a bad idea, as the window in which we can make such broad and sweeping changes is narrowing.
Shouldn’t you mention my recent revelations as one of the potential prior-art sources of this newfound urgency? I mean, upstanding open source and all that, right?
https://www.reddit.com/r/Monero/comments/6r2xsm/is_moneros_anonymity_broken/dl75h7s/?context=3
^^ see the bottom of the yellow-highlighted post for the mention of blocks+PoW being the problem
Is Monero’s (or All) Anonymity Broken?
^^ summaries here and here
Are DECENTRALIZED, Scalable Blockchains Impossible?
^^ currently incomplete; still being written, to be more widely published within days
Shocking Crisis Coming to Cryptocurrency (in Sept?)
You’ll probably need my assistance, given I’ve been researching, discussing, and brainstorming the solution to this issue for the past several years.
This might be a bit too radical/off-topic, but I think one issue that might be important to consider in PoW is the competitive exclusion principle: http://en.wikipedia.org/wiki/Competitive_exclusion_principle
I don’t believe this will help, because ultimately every possible algorithm you can think of can be made at least an order of magnitude or two more efficient on custom hardware (per the agreement I reached with @tromp on this conclusion). And all 14nm/16nm ASICs are manufactured in only two fabs in the world. Mining is inherently a centralizing paradigm in many ways. How could we know whether some secret mining hardware (or even just very large economies of scale making someone the lowest-cost miner) is not already mining Monero? Why would they tell us, if their motivation is to sustain a honeypot?
Even if you force the miner to keep a copy of the entire blockchain, and even make disk or memory accesses a significant component of the computation, it can still be made more efficient with customized hardware. And economies of scale will, I think, always win the efficiency race.
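For concreteness, here is a minimal sketch of what a memory-bound PoW loop looks like (a hypothetical toy, not Monero’s CryptoNight or any deployed algorithm; the scratchpad size, read count, and difficulty are made-up illustrative parameters). The argument above is that even the dependent memory reads here can be served more cheaply by custom memory subsystems at scale:

```python
import hashlib
import os
import struct

# Toy "memory-bound" PoW: each nonce's hash is mixed with values read from
# pseudo-random offsets in a large scratchpad, so memory latency (not raw
# hashing throughput) dominates the cost of each attempt.
SCRATCHPAD_SIZE = 1 << 20          # 1 MiB for the sketch; real designs use far more
scratchpad = os.urandom(SCRATCHPAD_SIZE)

def memory_bound_hash(header: bytes, nonce: int, reads: int = 64) -> bytes:
    state = hashlib.sha256(header + struct.pack("<Q", nonce)).digest()
    for _ in range(reads):
        # Derive the next read offset from the running state, forcing a
        # dependent (non-prefetchable) memory access on every iteration.
        offset = int.from_bytes(state[:8], "little") % (SCRATCHPAD_SIZE - 32)
        state = hashlib.sha256(state + scratchpad[offset:offset + 32]).digest()
    return state

def meets_difficulty(digest: bytes, leading_zero_bits: int) -> bool:
    return int.from_bytes(digest, "big") >> (256 - leading_zero_bits) == 0

# Example search at a toy difficulty (12 leading zero bits).
header = b"example block header"
nonce = 0
while not meets_difficulty(memory_bound_hash(header, nonce), 12):
    nonce += 1
print("found nonce", nonce)
```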
We've investigated this before, mostly around Cuckoo Cycle, and at some point it fell by the wayside.
I intensively investigated different memory-hard proof-of-work algorithms (some of them my own) and even deeply analyzed @tromp’s Cuckoo Cycle. My conclusion is wider in scope: proof-of-work is an evolutionary cul-de-sac (just “another failed mutation”).
The issue, at the highest level of abstraction (i.e. its generative essence), is that it is impossible to have a fungible token on a blockchain whose consensus doesn’t become centralized, if and only if the presumption holds that the users of the system gain the most value from the system due to its monetary function.
Do you think a “tangle”-type configuration (like IOTA) can be suitable and robust enough to fulfill the main function of money: to be a store of value that can be deferred through space and time?
They never showed how it converges without centralized servers enforcing that all transacting participants run only the same Monte Carlo strategy. Apparently, given significant defection, it will not converge on a single longest chain, i.e. afaics it doesn’t converge when decentralized. It also depends on proof-of-work (PoW).
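For reference, here is a rough sketch of the kind of weighted random-walk (“Monte Carlo”) tip selection being referred to, simplified from the whitepaper’s description; the field names, the `alpha` value, and the tiny example DAG are illustrative assumptions only:

```python
import math
import random

# Weighted random walk toward the tips of a DAG: heavier (better-confirmed)
# branches are exponentially more likely to be followed. Convergence of the
# tangle assumes all participants run this same biased walk.
def select_tip(approvers, cumulative_weight, start, alpha=0.5):
    node = start
    while approvers[node]:                         # stop when we reach a tip
        candidates = approvers[node]
        probs = [math.exp(alpha * cumulative_weight[c]) for c in candidates]
        total = sum(probs)
        node = random.choices(candidates, weights=[p / total for p in probs])[0]
    return node

# Tiny example DAG: genesis is approved by a and b; a by tip1; b by tip2.
approvers = {"genesis": ["a", "b"], "a": ["tip1"], "b": ["tip2"],
             "tip1": [], "tip2": []}
cumulative_weight = {"genesis": 5, "a": 3, "b": 1, "tip1": 1, "tip2": 1}
print(select_tip(approvers, cumulative_weight, "genesis"))
```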
The alternative for a DAG which does converge and doesn’t rely on PoW is Byteball’s Stability Point algorithm, but this has the downsides I discussed with its creator @tonych last year. It has the peculiarity that, afair, transaction fees don’t scale with an increasing exchange price of the token. More generally, this is essentially a closed set of delegates which decides the longest chain, and thus has the same weakness as Tendermint (and Vitalik’s Casper): if more than 33% or 50% (or whatever the liveness ratio is) stop responding, then the longest chain doesn’t advance and requires a hard fork to get unstuck, i.e. it has deterministic finality of confirmation, not probabilistic as is the case for PoW.
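To make the liveness point concrete, here is a toy illustration (not Tendermint’s or Casper’s actual rules, just the generic greater-than-two-thirds quorum arithmetic) of why deterministic finality halts once more than a third of a fixed validator set stops responding:

```python
from math import floor

# With a fixed validator set, a block is final only when strictly more than
# 2/3 of the validators sign it. If more than a third go silent, no block can
# ever reach quorum and the chain stops advancing until a manual fork.
def can_finalize(total_validators: int, responsive: int) -> bool:
    quorum = floor(2 * total_validators / 3) + 1   # strictly more than 2/3
    return responsive >= quorum

N = 100
for offline in (0, 20, 33, 34, 50):
    print(f"{offline} offline -> finality possible:", can_finalize(N, N - offline))
```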
(Note: this comment was not deleted by @fluffypony as of the time of writing this, but it is archived here just in case)
Actually, I thought of that conceptually before, when I was trying to devise a solution for the liveness-gets-stuck issue that I mentioned about Byteball, but I didn’t bother to fully develop the model, because it has a very obvious and fatal flaw: they ostensibly didn’t model the economics of it. Their model proves that the algorithm can’t be gamed algorithmically, but afaics they didn’t model its economic ramifications.
Their algorithm essentially scales the PoW difficulty (which every mining node ID must meet to survive a PoW challenge round) by the rate of changes to the ID set. So, assuming there is no attacker and everyone agrees to play nice, the difficulty remains low. But the specific flaw is its communism: it steals from those who have greater or lower-cost hashrate and redistributes to the marginal miners, because every ID, good or bad, has the same weighted vote. Of course the same entity can create more than one ID to spread its hashrate, but this is attackable: if an attacker issues enough ID joins/deletes per round to exceed the threshold those splits can survive, the split IDs are deleted by the challenge round, amplifying the attacker’s effect. So the economic implications are amplification instability or else communism. We must understand the economics of decentralized consensus.
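To make the described mechanism concrete, here is a speculative sketch (the scaling formula, parameters, and numbers are my own assumptions for illustration, not the proposal’s actual algorithm) of how churn-scaled per-ID difficulty combined with one equal-weight vote per surviving ID produces either the “communism” or the amplification effect described above:

```python
# Per-ID challenge difficulty rises with the churn rate of the ID set, and
# every ID that survives the challenge counts as exactly one vote, regardless
# of how much hashrate stands behind it.
def challenge_difficulty(base: float, joins_and_deletes: int, id_set_size: int) -> float:
    churn = joins_and_deletes / max(id_set_size, 1)
    return base * (1.0 + churn)

def surviving_votes(ids_hashrate: dict, difficulty: float) -> int:
    # An ID passes only if its own hashrate covers the per-ID difficulty;
    # each survivor is worth one vote.
    return sum(1 for h in ids_hashrate.values() if h >= difficulty)

# Example: a large miner splits 100 units of hashrate across 10 IDs.
# At low churn the split earns 10 votes, but an attacker spamming
# joins/deletes raises the per-ID difficulty and wipes out all 10 IDs.
big_miner = {f"id{i}": 10 for i in range(10)}
low = challenge_difficulty(base=8, joins_and_deletes=2, id_set_size=50)
high = challenge_difficulty(base=8, joins_and_deletes=60, id_set_size=50)
print(surviving_votes(big_miner, low), "votes at low churn")
print(surviving_votes(big_miner, high), "votes under join/delete spam")
```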
Also, it appears to me that it requires some trusted setup of the initial randomness to create a non-gamed ID member set for the committee which acts as the “server”. There may be other issues; as this is brand new, peer review is presumably lacking.