This proposal is an alternative/complement to the SWARM contract, described here:
http://www.wired.com/2014/03/geeks-guide-karl-schroeder/
Any node that wants to earn money storing and serving somebody else's data registers himself as a node, registers a public key, and sends money as a deposit (sends money to the contract account in the blockchain). From now on we will call him the DATA_KEEPER.
When somebody (the DATA_OWNER) wants to store one chunk of data publicly, the following steps occur:
1. The DATA_OWNER stores the data in some public storage, for example IPFS.
2. The DATA_OWNER publicly asks who wants to sign a contract with him and starts a blind auction. He does this by sending the money he is willing to pay in a transaction to the blockchain with the following parameters:
   - Hash of the chunk of data that he wants to store.
   - Reference where the data can be found (for IPFS/SWARM this is not necessary).
   - ExpirationDate: timestamp/block number after which the data expires and the DATA_KEEPER is no longer obligated to keep it.
   - Minimum storage deposit required for a bidder to be accepted.
   - Oracle accounts that the DATA_OWNER trusts.
   - Timestamp/block number when the blind bidding phase ends.
   - Timestamp/block number when the disclosure phase ends.
   - The QoS agreement (see below).
3. All DATA_KEEPERs interested in storing the data and earning the proposed reward read the data from IPFS, store it on their premises (PIN it), and send a signed and encrypted (blinded) bid to the blockchain.
4. After the blind bidding phase ends, the disclosure phase starts. During this phase, the DATA_KEEPERs disclose the secrets they encrypted their bids with. (DATA_KEEPERs can use the same secret for all bids coinciding in time and disclose that secret for all concurrent auctions in a single transaction to save GAS.)
5. After the disclosure phase ends, the winner is determined implicitly (calculated), and he becomes the only one responsible for keeping and serving the data. The others can discard the data if they want (UNPIN it). If no bids are presented, the money is returned to the DATA_OWNER. In case of a tie, the first submitted bid wins.
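The commit/reveal mechanics of the auction steps above can be sketched off-chain in Python. The SHA-256 commitment scheme and the lowest-price/first-submission tie-break rule are illustrative assumptions, not details fixed by the proposal:

```python
import hashlib
import os

def make_blinded_bid(price_wei: int, secret: bytes) -> str:
    """Blind bidding phase: publish only the hash of (price, secret)."""
    return hashlib.sha256(price_wei.to_bytes(32, "big") + secret).hexdigest()

def verify_disclosure(commitment: str, price_wei: int, secret: bytes) -> bool:
    """Disclosure phase: anyone can check the revealed price matches the commitment."""
    return make_blinded_bid(price_wei, secret) == commitment

def pick_winner(disclosed_bids):
    """Lowest price wins; ties are broken by earliest submission order."""
    return min(disclosed_bids, key=lambda b: (b["price"], b["order"]))

# Example: two keepers bid; keeper A is cheaper and wins.
secret_a, secret_b = os.urandom(16), os.urandom(16)
bids = [
    {"keeper": "A", "price": 100, "order": 0, "commit": make_blinded_bid(100, secret_a)},
    {"keeper": "B", "price": 120, "order": 1, "commit": make_blinded_bid(120, secret_b)},
]
assert verify_disclosure(bids[0]["commit"], 100, secret_a)
winner = pick_winner(bids)
```

Because the bids are only hashes until the disclosure phase, no bidder can undercut another by watching the chain during the blind phase.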
When the data expires, the DATA_KEEPER can claim the reward (if his deposit is still valid). He will get 90% of the value; the other 10% will go to the Oracle account.
Anybody can ask an oracle, through the blockchain, to check for a chunk of data. (This request costs a small fee, predetermined by the oracle.)
The Oracle will perform the check. If he can retrieve the data, he will simply publish it on IPFS for a short period of time.
If the Oracle cannot retrieve it, he cancels the deposit and all of the DATA_KEEPER's active contracts, and returns all associated money according to the following formula:
- Ci := Money paid by the DATA_OWNER for active contract i.
- Ti := Total interval (number of blocks) the contract i should last.
- RTi := Remaining Time (in blocks) to expire contract i.
- D := Total amount in the deposit.
- Total amount to distribute = SUM(Ci) + D
- Amount to give to the demandant = D*0.1
- Amount to give to oracle = SUM(Ci)*0.1
- TOTAL_CONTRACT := Amount to give to all active contracts = SUM(Ci) + D - D*0.1 - SUM(Ci)*0.1
- PROP_CONTRACT_i := Proportion to give to each contract i = (RTi/Ti)*Ci / SUM( (RTj/Tj)*Cj )
- Amount to give to each contract = PROP_CONTRACT_i * TOTAL_CONTRACT
The idea behind this formula is the following:
- The DATA_KEEPER loses everything.
- The ORACLE is neutral: it gets the same whether the DATA_KEEPER does the job or not.
- The DEMANDANT gets a reward of 10% of the deposit.
- The remainder is distributed among the affected users in proportion to the money they paid and the time remaining on their contracts.
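The distribution above can be sketched in Python. The per-contract weight is taken as the remaining-time fraction RTi/Ti times the money paid Ci, matching the stated intent of distributing in proportion to money paid and time remaining:

```python
def distribute(contracts, deposit):
    """Split the forfeited pot among the demandant, the oracle, and
    the affected DATA_OWNERs.

    contracts: list of dicts with keys C (money paid), T (total blocks),
    RT (blocks remaining until expiry)."""
    total_c = sum(c["C"] for c in contracts)
    demandant = deposit * 0.1          # 10% of the deposit to the demandant
    oracle = total_c * 0.1             # 10% of the contract money to the oracle
    total_contract = total_c + deposit - demandant - oracle
    # Weight each contract by remaining-time fraction times money paid.
    weights = [(c["RT"] / c["T"]) * c["C"] for c in contracts]
    wsum = sum(weights)
    payouts = [total_contract * w / wsum for w in weights]
    return demandant, oracle, payouts

# Example: two active contracts and a deposit of 200.
# Contract 0 paid 100 with half its time left; contract 1 paid 50, all time left.
d, o, p = distribute(
    [{"C": 100, "T": 1000, "RT": 500}, {"C": 50, "T": 1000, "RT": 1000}],
    deposit=200,
)
```

In this example both contracts carry equal weight (0.5*100 = 1.0*50 = 50), so the 315 remaining after the demandant's 20 and the oracle's 15 is split evenly.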
NOTE: I don't like the idea of burning money.
Instead of losing all the deposit money on the first failed oracle check, the contract can allow several chances with a progressive loss of the deposit: a QoS. Example:
If the first check gets no response, retry after 5 min. If after 5 min it still cannot get the data, retry after 10 min and "execute" 10% of the deposit. If after 10 min it still cannot get it, execute 10% more of the deposit and retry after 20 min. If after 20 min it still cannot get it, execute 10% more of the deposit and retry after 40 min.
And so on, until there is no deposit left.
So here is how much the deposit would be reduced as a function of the outage duration:
- 5min: 0%
- 15min: 10%
- 35min: 20%
- 1h15min: 30%
- 2h35min: 40%
- 5h15min: 50%
- 10h35min: 60%
- 21h15min: 70%
- 1d18h35min: 80%
- 3d13h15min: 90%
- 7d2h35min: 100%
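The table follows mechanically from doubling the retry interval and executing a further 10% per failed retry (the first retry is free). A small Python sketch that regenerates it:

```python
def outage_penalties(base_wait_min=5, step_pct=10):
    """Return (cumulative outage minutes, % of deposit executed) rows.

    The first retry costs nothing; each subsequent failed retry doubles
    the wait and executes a further step_pct% of the deposit."""
    rows, elapsed, wait = [], 0, base_wait_min
    for slash in range(0, 101, step_pct):   # 0%, 10%, ..., 100%
        elapsed += wait
        rows.append((elapsed, slash))
        wait *= 2
    return rows

# First row: 5 minutes of outage, 0% executed.
# Last row: 10235 minutes (7d 2h 35min), 100% executed.
table = outage_penalties()
```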
This can be defined as a QoS agreement. The QoS agreement is proposed by the DATA_OWNER and accepted by the DATA_KEEPER.
Great job writing this up, thanks.
I see a few issues. First of all, the oracle is not accountable, which means they are designated based on reputation. Now if that is the case, you could just choose your data keepers based on reputation too.
Second, having a contract with one individual is a slim guarantee for having content available. How is redundancy properly incentivised? If letting others store the data decreases your chances of reward, then you want to keep it to yourself; if, on the other hand, it is always you who gets the reward, then others are not incentivised to act as a fallback store for you without profit.
Third, the wrongdoer's lost deposit is given as a consolation to the owner (instead of being burnt or distributed among all the registered nodes); reporting nonexistent lost chunks from bogus owners can therefore provide an early exit for a node.
Fourth, you write that the oracle is neutral, but I don't see how it is incentivised not to accept a bribe from either party and report the data as found when it is not, or the other way round.
Moreover, ORACLEs can be disintermediated by a contract, so there is no need for reputation-based designation.
Fifth, if no proof is required, it creates a little stickiness in the system, since there is no way for a third party to verify what the claimant, the oracle, or the defendant are saying, other than trying to download the data from the given node themselves.
While a proof of custody sent in could provide a cheap way around this, it still does not prove that the data is available to the owner at any given time.
Sixth, having a blockchain transaction for each piece of content is hardly viable, especially at the chunk level. At the document/collection level, the problem is its incompatibility with the design for chunk-level distributed storage, though it could work IPFS/torrent style.
In the swarm model, therefore, we reached the following conclusions:
I am working on a rather exciting variant of this system based on the rigorous smart syncing protocol and recursive blaming (you can provably delegate or share responsibility). If it works,