I believe it's time to seriously review the proof-of-work algorithm used in Monero, in light of the very serious consequences we have all witnessed with mining centralization in the Bitcoin community.
Some urgency might not be a bad idea, as the window in which we can make such broad and sweeping changes is narrowing.
Shouldn’t you mention my recent revelations as one of the potential prior-art sources of this newfound urgency? I mean, upstanding open source and all, right?
^^ see the bottom of the yellow highlighted post for mention about blocks+PoW being the problem
Are DECENTRALIZED, Scalable Blockchains Impossible?
^^ currently not complete, still being written to be more widely published within days
You’ll probably need my assistance, given that I’ve been researching, discussing, and brainstorming a solution to this issue for the past several years.
This might be a bit too radical/off topic but I think one issue that might be important to consider in PoW is the competitive exclusion principle: http://en.wikipedia.org/wiki/Competitive_exclusion_principle
I don’t believe this will help, because ultimately every possible algorithm you can think of can be made at least an order of magnitude or two more efficient on custom hardware (per an agreement I reached with @tromp on this conclusion). And all 14nm/16nm ASICs are manufactured in only two fabs in the world. Mining is inherently a centralization paradigm in many ways. How could we know whether some secret mining hardware (or even just very large economies-of-scale producing the lowest-cost miner) is not already mining Monero? Why would they tell us, if their motivation is to sustain a honeypot?
Even if you force the miner to keep a copy of the entire blockchain, and even if you make disk or memory accesses a significant component of the computation, it can still be made more efficient with customized hardware. And I think economies-of-scale will always win the efficiency race.
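To make the "memory accesses as a significant component" idea concrete, here is a toy scrypt-style memory-hard hash. All names and parameters are my own illustration (this is not Monero's actual CryptoNight): a buffer is filled sequentially, then read in a data-dependent order, so an efficient miner must keep the whole buffer resident. The centralization argument above is that an ASIC can still put that buffer in fast on-die memory.

```python
import hashlib

def memory_hard_hash(header: bytes, nonce: int, n_slots: int = 1 << 12) -> bytes:
    # Sequential fill: each slot depends on the previous one.
    buf = []
    h = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    for _ in range(n_slots):
        h = hashlib.sha256(h).digest()
        buf.append(h)
    # Data-dependent reads: the access pattern depends on the running hash,
    # so the whole buffer must stay available to compute efficiently.
    for _ in range(n_slots):
        idx = int.from_bytes(h[:4], "big") % n_slots
        h = hashlib.sha256(h + buf[idx]).digest()
    return h

def mine(header: bytes, difficulty_bits: int = 8) -> int:
    # Scan nonces until the hash clears the difficulty target.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while int.from_bytes(memory_hard_hash(header, nonce), "big") >= target:
        nonce += 1
    return nonce
```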
We've investigated this before, mostly around Cuckoo Cycle, and at some point it fell by the wayside.
I intensely investigated different memory hard proof-of-work algorithms (some were my own) and even deeply analyzed @tromp’s Cuckoo Cycle. My conclusion is wider in scope: that proof-of-work is an evolutionary cul-de-sac (just “another failed mutation”).
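For readers unfamiliar with Cuckoo Cycle, a toy sketch of its core idea: the proof is a cycle in a pseudo-random bipartite graph derived from the block header. This illustration is mine, not @tromp's reference code: it substitutes blake2b for siphash-2-4, and merely detects that some cycle exists via union-find rather than recovering the 42-cycle the real scheme requires. The memory-hardness argument is that solving needs the whole edge set in memory.

```python
import hashlib

def edge(key: bytes, i: int, n_nodes: int):
    """Edge i of the pseudo-random bipartite graph derived from the header.
    (Real Cuckoo Cycle uses siphash-2-4; blake2b stands in here.)"""
    def node(x: int) -> int:
        d = hashlib.blake2b(key + x.to_bytes(8, "big"), digest_size=8).digest()
        return int.from_bytes(d, "big") % n_nodes
    return node(2 * i), node(2 * i + 1)

def find(parent: list, x: int) -> int:
    # Union-find root lookup with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def has_cycle(key: bytes, n_edges: int, n_nodes: int) -> bool:
    """True if the first n_edges pseudo-random edges close any cycle."""
    parent = list(range(2 * n_nodes))   # u-side nodes, then v-side nodes
    for i in range(n_edges):
        u, v = edge(key, i, n_nodes)
        ru, rv = find(parent, u), find(parent, n_nodes + v)
        if ru == rv:                    # endpoints already connected: cycle
            return True
        parent[ru] = rv
    return False
```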
The issue, at the highest level of abstraction (i.e. the generative essence), is this: it is impossible to have a fungible token on a blockchain whose consensus doesn’t become centralized, if one presumes that the users of the system gain the most value from the system due to its monetary function.
Do you think a "tangle" type configuration (like IOTA) can be suitable and robust enough to fulfill the main function of money: to be a store of value that can be deferred through space/time?
They never showed how it converges without centralized servers enforcing that all transacting participants run the same Monte Carlo strategy. Apparently, given significant defection, it will not converge on a single longest chain; i.e., as far as I can see, it doesn’t converge in a decentralized setting. It also depends on proof-of-work (PoW).
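For reference, the Monte Carlo strategy in question is roughly the random-walk tip selection from the IOTA whitepaper. A minimal unweighted sketch (the paper biases each step by cumulative weight; this is effectively the alpha=0 case, and all names here are my own):

```python
import random

def walk_to_tip(approvers: dict, genesis: str, rng=random) -> str:
    """Walk from the genesis toward the tips, at each step choosing
    uniformly among the transactions that directly approve the current
    site. A site with no approvers is a tip, so the walk stops there."""
    site = genesis
    while approvers.get(site):
        site = rng.choice(approvers[site])
    return site
```

The convergence question above is precisely whether independent walkers, free to defect from this strategy, still end up agreeing on one tip set without a coordinator.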
The alternative for a DAG which does converge and doesn’t rely on PoW is Byteball’s Stability Point algorithm, but this has the downsides I discussed with its creator @tonych last year. It has a peculiarity, as far as I recall, that transaction fees don’t scale with an increasing exchange price of the token. More generally, this is essentially a closed set of delegates which decide the longest chain, so it has the same weakness as Tendermint (and Vitalik’s Casper): if more than 33% or 50% (or whatever the liveness ratio is) of delegates stop responding, then the longest chain doesn’t advance and requires a hard fork to become unstuck, i.e. its finality of confirmation is deterministic, not probabilistic as is the case for PoW.
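The liveness ratio mentioned can be made concrete with the classic BFT quorum arithmetic (a sketch under the usual two-thirds assumption; the exact threshold differs per protocol, and these function names are mine):

```python
import math

def quorum(n: int) -> int:
    """Classic BFT quorum: strictly more than two-thirds of n validators."""
    return math.floor(2 * n / 3) + 1

def can_finalize(n: int, offline: int) -> bool:
    """Deterministic finality stalls once the online validators can no
    longer assemble a quorum; only out-of-band intervention recovers it."""
    return n - offline >= quorum(n)
```

With 100 delegates, the chain still finalizes with 33 offline but halts at 34, which is the hard-fork-to-unstick scenario described above.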