Distributed Journal Cache

What is a Distributed Journal?

The semantic difference between a journal and a ledger is very simple: a journal starts as many snippets, and over time the tail becomes the final version, containing everything essential from the many little parts that composed it.

The 'many small parts' correspond, in this protocol, to a race to pin the network-maximum number of confirmations to each recently received transaction.

Interval Tree Clocks

Interval Tree Clocks are a system in which special set-based splitting and joining operations determine the time-ordering of many sparsely connected events. There is an implementation for Java, C and Erlang, which has been forked here and will be ported to Go when the next phase of the roadmap begins:

https://github.com/ricardobcl/Interval-Tree-Clocks#summary-high-level-presentation-of-itcs-and-its-use
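
As an illustration only, the following minimal Go sketch shows the ID-splitting and joining rules that give ITCs their set-like fork and join behaviour; it is not the linked implementation, the names are my own, and the event-counter half of a full stamp is omitted.

    package main

    import "fmt"

    // ID is the ownership component of an Interval Tree Clock stamp: a leaf
    // holding 0 or 1, or an interior node with two children.
    type ID struct {
        Leaf  bool
        Value int // 0 or 1, meaningful only when Leaf is true
        Left  *ID
        Right *ID
    }

    func leaf(v int) *ID    { return &ID{Leaf: true, Value: v} }
    func node(l, r *ID) *ID { return &ID{Left: l, Right: r} }

    // Split divides an ID into two disjoint IDs (the "fork" of a stamp).
    func Split(i *ID) (*ID, *ID) {
        switch {
        case i.Leaf && i.Value == 0:
            return leaf(0), leaf(0)
        case i.Leaf && i.Value == 1:
            return node(leaf(1), leaf(0)), node(leaf(0), leaf(1))
        case i.Left.Leaf && i.Left.Value == 0:
            r1, r2 := Split(i.Right)
            return node(leaf(0), r1), node(leaf(0), r2)
        case i.Right.Leaf && i.Right.Value == 0:
            l1, l2 := Split(i.Left)
            return node(l1, leaf(0)), node(l2, leaf(0))
        default:
            return node(i.Left, leaf(0)), node(leaf(0), i.Right)
        }
    }

    // Sum merges two disjoint IDs back together (the "join" of a stamp).
    // It assumes a and b are disjoint, as produced by Split.
    func Sum(a, b *ID) *ID {
        if a.Leaf && a.Value == 0 {
            return b
        }
        if b.Leaf && b.Value == 0 {
            return a
        }
        return norm(node(Sum(a.Left, b.Left), Sum(a.Right, b.Right)))
    }

    // norm collapses (0,0) to 0 and (1,1) to 1.
    func norm(i *ID) *ID {
        if !i.Leaf && i.Left.Leaf && i.Right.Leaf && i.Left.Value == i.Right.Value {
            return leaf(i.Left.Value)
        }
        return i
    }

    func (i *ID) String() string {
        if i.Leaf {
            return fmt.Sprint(i.Value)
        }
        return "(" + i.Left.String() + "," + i.Right.String() + ")"
    }

    func main() {
        root := leaf(1)        // the seed stamp owns the whole interval
        a, b := Split(root)    // fork into two independent participants
        fmt.Println(a, b)      // (1,0) (0,1)
        fmt.Println(Sum(a, b)) // joining recovers the whole interval: 1
    }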

However, ITCs are only the base for sequencing these events; instead of set-based stamps, each transaction uses Merkle trees to tie itself to its nearest temporal relatives, together with compact truncated hashes and a limit on the width of the candidate transactions.

Splitting is modelled as two nodes, where the split recipient receives the inverse of the other, and joining one to many works the same way transactions are bound into a Bitcoin block header.

These are used to create a purely cryptographic causal sequencing, instead of a set-based one, and are thus better fitted to an extremely high-traffic, multi-protocol mesh.

When the mesh is extended to support new protocols, each addition acts as a mutual temporal reinforcement for all of the protocols already present, which improves event sequence resolution.

Distributed Journal Cache

The growing edge of the Merkle tree vertices is intentionally allowed to become very thick, but ultimately only one correct record per transaction is required in the long term, so it makes sense to use competition, via latency, to validate and pin transactions into their temporal sequence with the others.

The headers contain a Merkle tree over the network-consensus number of immediately preceding transactions, referenced by truncated cryptographic hashes (64 bits) that are used as indices into the recently received transactions cache.
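
A rough sketch of what such a header and cache index might look like follows; the field names, the use of SHA-256, and taking the first 8 bytes as the 64-bit index are assumptions for illustration, not a finalised wire format.

    package journal

    import (
        "crypto/sha256"
        "encoding/binary"
    )

    // ShortID is a 64-bit truncated hash used as an index into the
    // recently received transactions cache.
    type ShortID uint64

    // Truncate derives the cache index from a transaction's full hash.
    func Truncate(fullHash [32]byte) ShortID {
        return ShortID(binary.BigEndian.Uint64(fullHash[:8]))
    }

    // Header binds a new transaction to its nearest temporal relatives.
    type Header struct {
        MerkleRoot [32]byte  // root over the consensus number of predecessors
        Parents    []ShortID // truncated hashes of recently received transactions
        Payload    []byte
    }

    // Cache holds the recently received transactions keyed by truncated hash.
    type Cache map[ShortID]*Header

    // Add hashes the header's payload and stores it under its truncated index.
    func (c Cache) Add(h *Header) ShortID {
        id := Truncate(sha256.Sum256(h.Payload))
        c[id] = h
        return id
    }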

Latency as a proximity limiter of centralisation

Because the number of inbound links accepted on the first pass is limited, for reasons of efficiency and limited cache resources, a node will tend to end up validating transactions sent from quite close by. This has the double benefit of reducing transaction latency and providing abundant redundancy that assists in sequence resolution.

It also means income from transaction processing tends to accrue where the transactions are being generated, and this appropriate positioning can have compounding effects.

Because latency is the biggest factor determining which node binds itself earliest to newly received transactions, there is only so much distance (in network latency terms) allowed between older and newer transactions claiming their most recent predecessors, meaning precision increases as the network scales.

Total ordering of all transactions, and proof of who found them and when they found them

The network can then compute the total ordering of transactions with fairly high precision because of the limit on how many backreferences it permits. It is trivial to determine who was first, and to place all the others in exact causal order, by the newest first link to each new transaction; whoever holds that first link, if it maintains its position, is the miner who gets paid.
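
Continuing the journal sketch above, one way a node could derive that ordering from the backreference graph is a topological sort with ties broken by first-seen time; the tie-break rule is my own reading of "first link wins", not a specified algorithm.

    package journal

    import "sort"

    // Tx records a transaction's backreferences and when this node first saw it.
    type Tx struct {
        ID        ShortID
        Parents   []ShortID
        FirstSeen int64 // unix nanoseconds at first receipt
    }

    // CausalOrder returns transactions so that none precedes a backreferenced
    // predecessor; ties are broken by earliest first-seen time, so the fastest
    // responder sorts first.
    func CausalOrder(txs map[ShortID]*Tx) []*Tx {
        indegree := make(map[ShortID]int, len(txs))
        children := make(map[ShortID][]ShortID, len(txs))
        for id, tx := range txs {
            indegree[id] = 0 // ensure every transaction has an entry
            for _, p := range tx.Parents {
                if _, ok := txs[p]; !ok {
                    continue // predecessor already pruned from the cache
                }
                indegree[id]++
                children[p] = append(children[p], id)
            }
        }
        var ready []*Tx
        for id, d := range indegree {
            if d == 0 {
                ready = append(ready, txs[id])
            }
        }
        var order []*Tx
        for len(ready) > 0 {
            sort.Slice(ready, func(i, j int) bool { return ready[i].FirstSeen < ready[j].FirstSeen })
            next := ready[0]
            ready = ready[1:]
            order = append(order, next)
            for _, c := range children[next.ID] {
                indegree[c]--
                if indegree[c] == 0 {
                    ready = append(ready, txs[c])
                }
            }
        }
        return order
    }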

The network rewards response time, first and foremost. Response time is the best guarantee of authenticity, because by definition spoofs and replays are not original, and with an inbound node limit, the opportunity for getting on the end of the shortlist that will get pruned anyway rapidly disappears as confirmations stack up.

Protocol Mesh

Instead of arbitrarily narrowing the network's many channels and possible interconnects, nodes can subscribe to some protocols and not others, at their discretion and to suit their resources.

By enabling this, each new protocol that joins the mesh adds one more protocol (up to the backreference limit) that the others can reference, creating an interlocking sequencing which, as explained above, becomes more precise the wider it grows.
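
As a rough illustration of that interlocking, a node might spread its backreferences round-robin across the protocols it subscribes to, up to the limit; the selection rule and the names here are my own assumptions, not a specified part of the design.

    package mesh

    // recent maps each subscribed protocol to the truncated hashes of its most
    // recently received transactions, newest first.
    type recent map[string][]uint64

    // crossRefs picks up to limit backreferences, taking one from each
    // subscribed protocol in turn, so each new transaction helps interlock the
    // sequencing of every protocol this node carries.
    func crossRefs(r recent, subscribed []string, limit int) []uint64 {
        var refs []uint64
        for depth := 0; len(refs) < limit; depth++ {
            progressed := false
            for _, p := range subscribed {
                txs := r[p]
                if depth < len(txs) && len(refs) < limit {
                    refs = append(refs, txs[depth])
                    progressed = true
                }
            }
            if !progressed {
                break // no protocol has transactions this deep
            }
        }
        return refs
    }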

Private? Public?

Protocols can use any coding scheme they want for their transaction records. Different types of data carry different processing weight, and therefore require different grades of hardware to implement, which interacts with the valuation assigned to the final draft signers' designated accounts.

Public data, which is more likely to see demand over a longer lifespan, provides a potential market in accounts that can access data further back in the past than regular working journal cache nodes, via authenticated RPC services. Because operating nodes makes sense due to horizontal scalability, the main difference between operators will be in the processing, memory and storage resources they add to their private/priority service clusters.

The network will naturally attract capacity to local areas where the transaction rate, and therefore demand, is high, meaning that capacity is added soonest where it is demanded the most, because nodes there catch more of the final drafts and are essentially getting more for less work.

Proof of concept simplicity

A simple example could be a blog posting distribution network. Proof of delivery is stored on the journal cache. Nodes get paid per delivery, and creators get a percentage of each delivery.

Nodes cache the blog posts and prune their cache according to least recently delivered.
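
A minimal sketch of that pruning policy follows; the list-based LRU structure and the capacity parameter are my own illustration, since the text only specifies "least recently delivered".

    package cache

    import "container/list"

    // PostCache keeps blog posts and evicts the least recently delivered one
    // once the configured capacity is exceeded.
    type PostCache struct {
        capacity int
        order    *list.List               // front = most recently delivered
        items    map[string]*list.Element // post ID -> element holding an entry
    }

    type entry struct {
        id   string
        body []byte
    }

    func NewPostCache(capacity int) *PostCache {
        return &PostCache{
            capacity: capacity,
            order:    list.New(),
            items:    make(map[string]*list.Element),
        }
    }

    // Deliver returns a cached post and marks it as recently delivered.
    func (c *PostCache) Deliver(id string) ([]byte, bool) {
        el, ok := c.items[id]
        if !ok {
            return nil, false
        }
        c.order.MoveToFront(el)
        return el.Value.(entry).body, true
    }

    // Store inserts a post, pruning the least recently delivered one if full.
    func (c *PostCache) Store(id string, body []byte) {
        if el, ok := c.items[id]; ok {
            el.Value = entry{id: id, body: body}
            c.order.MoveToFront(el)
            return
        }
        if c.capacity > 0 && c.order.Len() >= c.capacity {
            oldest := c.order.Back()
            c.order.Remove(oldest)
            delete(c.items, oldest.Value.(entry).id)
        }
        c.items[id] = c.order.PushFront(entry{id: id, body: body})
    }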

Payment begins with the initial introduction of a new piece of data to the network. This process begins with propagating a notification about the new content's metadata, signed by the creator.

As users demand the item, they sign the public offers and these propagate back to the origin; the signed requests, deliveries, and confirmations must return complete to the creator, who then signs bundles of them, creating payments to the accounts of the couriers and to themselves.
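
To make the message flow concrete, the record types might look roughly like the following sketch; the type and field names are assumptions for illustration, and ed25519 is used only as an example signature scheme.

    package delivery

    import "crypto/ed25519"

    // Offer is signed by the creator when new content is announced.
    type Offer struct {
        ContentID  [32]byte
        Metadata   []byte
        CreatorSig []byte
    }

    // Receipt is the chain of signatures that travels back to the creator:
    // the user's signed request, the courier's proof of delivery, and the
    // user's confirmation of receipt.
    type Receipt struct {
        ContentID   [32]byte
        RequestSig  []byte // signed by the requesting user
        DeliverySig []byte // signed by the courier node that served the content
        ConfirmSig  []byte // signed by the user on successful receipt
    }

    // PaymentBundle is the creator's countersigned batch of completed receipts;
    // it is what credits the couriers' accounts and the creator's own share.
    type PaymentBundle struct {
        Receipts   []Receipt
        CreatorSig []byte
    }

    // SignBundle lets the creator countersign a batch of completed receipts.
    func SignBundle(priv ed25519.PrivateKey, receipts []Receipt) PaymentBundle {
        var msg []byte
        for _, r := range receipts {
            msg = append(msg, r.ContentID[:]...)
            msg = append(msg, r.ConfirmSig...)
        }
        return PaymentBundle{Receipts: receipts, CreatorSig: ed25519.Sign(priv, msg)}
    }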

Add a prediction market

With all of this data floating around, there could be a secondary layer whereby nodes speculate on the popularity of content and are rewarded automatically by the fact that they were ready to deliver it sooner, with this response time recorded in the ITC causality mesh.

In fact it's not even a protocol, it's just gamification. But if this further accumulated points that altered other network parameters, it would also mean a derivative asset could be imputed out of the causality mesh, with an issuance rate that compounds with stake, curation track record, and subscriber numbers; any computable and certain ranking score system can issue some kind of benefit, the idea being that such skills tend to compound in their effect when they are more connected.

Subject matter specialists with a nose for the new hotness would compete for subscribers, and subscriber count would be a fluid score that changes their share of the base token dividend, with receipt additionally indicating approval - the main metric on which good curators rank highest.

Not just digital service delivery

The record of delivery, proven by signatures and payments, could form an interlocking web with a logistics tracking protocol that allows the creation of everything from property rental to consulting or cleaning services.

In this field again, there are curator and analyst roles in which there can be competition for a share of subscribing providers of the specific commodities they track and predict. These ranking scores could dictate the distribution of a share of the administrative cost of being connected to their logistics signalling messages and support.

Digital deed registry

Since all of these things - allocation, communications, logistics, supply chain - are essentially secured and competitively optimised, it makes sense that if the receipt documents are now signed transactions signifying a completed production phase, with money moving through directly and promptly, then the waybill of everything becomes its digital deed, containing its essential metadata. These records are automatically indexed into a directed acyclic graph of acquisitions and dispositions, with securely stored backups anonymously designated by each user, constituting a living will.

Only transferable items, not perishable ones, would be valid deed-bearing items, though of course a shipment box would constitute a deed-bearing item in any case where it will be exchanged, delivered, routed or received, with the deed extinguished at its expiry date.

Multiplayer Gaming

Multiplayer games essentially create a consensus event space with objectives and prizes, and special items and cosmetic items can potentially be traded, or lost - though hopefully not lost!

A preferable network environment for multiplayer games is one where games can be selected based on their mean latency. Game chat messages between the gamers' game engines and the server model can adapt their distribution paths to minimise cycles to confirmation and make use of the concurrency of multiple cluster members.

Security messaging systems

Security services operating in overlapping service areas benefit a lot from coordination, meaning that workers in the locality are automatically synchronised on incident reports whose location falls within the service range of competitors.

Unlike pizza delivery, services like firefighting, armed security, and the like benefit each other if they can back each other up, achieving lower premiums for their area due to lower claim frequency, and an efficiency bonus for cooperating organisations with low unresolved incident rates.

DJC instantly recognises the nearest peers in the network, as they are the most likely to produce a final draft, so it has a natural propagation distance/urgency trade-off that can adjust dynamically when urgency is high.

Throughout all of this, cooperating teams compete using this system of realtime data, which further acts as proof of service and is then used in the payment distribution for providing that service.

Conclusion

Horizontal scalability is always the biggest challenge in eliminating the brittleness of centralised networks, and specifically the amplified effect of failures when they are hit where they cause maximum synchronisation problems.

Clock-based, open-entry, low/no-trust 'Blockchain' ledgers focus primarily on structuring the issuance to fit a time schedule.

If you can completely eliminate errors in attribution of sequence of events recorded by multiple untrusted nodes in the flow of messages through the network, you can create consensus shared-states such as

  • exclusive control of the supply of a payment instrument through addresses and their unlocking keys
  • multiplayer and collaborative virtual environments
  • the monetisation, curation, logistics and delivery of any kind of digital service, or equally of physical ones
  • even improved cooperation between competitors in security services, through private, logged intelligence-sharing systems that ensure all parties have a congruent record; should something go wrong, the number of false leads should be easier to reduce

In the case of a payment instrument in particular, there lies a potential to create a true use-multiplying effect that swells supply where more is needed, and shrinks it where it is not.

The obvious potential benefit of matching supply with velocity is that it will attract competition where demand is high, while the incumbency bonus stays within its zone of operational efficiency - for example, being there with your taxi when people need you to be.

By rethinking entirely what it is, you go beyond simple payment networks for consensus-limited tokens, and notice that the properties of confirmation sequence and proof could have many potentially very useful functions in both the analysis and the deployment of resources, wherever more finely and accurately temporally sequenced information is available.

There are many competitions going on in marketplaces for the most productive allocation of resources, and accurate judgement is a subject matter specialty. Automating and certifying the rankings of users of the system enables the market-driven selection of the better curators and analysts of businesses, and also enables production management structures to optimise more dynamically, adopting models as we see in the 'sharing economy'.

The aim is to make this system into a means for anyone and everyone to help coordinate, locate demand, and identify supplies.

Shrinking the minimum size of a given operational unit by automating certain kinds of information processing and distribution will also combine well with the increasing mechanisation of labor.

Mechanisation frees resources by eliminating deadlocks, and dynamic logistics systems enable responses to demand that scale faster and handle failure modes better.

Money goes electronic first because it is the most basic coordinating signal for the most basic movement of stuff, and for the mixing and filtering of stuff to make other stuff. Decentralised, low-latency, arbitrarily meshed protocols can implement any type of service in any kind of way, while cooperating with other node types in coordinating the network with greater width.
