09-23-2016 Conversation about IPFS on #decentralize channel of Internet Archive Slack

----- September 23rd, 2016 -----

btrask [6:29 PM]
@edsilv in response to your question, "what is the business case for decentralization?" https://bentrask.com/?q=hash://sha256/33ba37008b55d3188c6b1bf8

edsilv [5:28 AM]
@btrask interesting stuff about devops applications. I think online libraries/archives too are a natural place for decentralised tech to take root. @flyingzumwalt mentioned that David Dias is presenting at the IIPC general assembly next spring. Would be great if he was armed with a solid set of business cases for the "decision makers" in the crowd.

"As a public service institution I want to provide a scalable, highly available, and free as in freedom and beer collection of knowledge" type thing.

flyingzumwalt [9:04 AM]
@edsilv that’s a great suggestion. I would love to hear people’s thoughts on what those business cases are. I’ll be taking a stab at expressing the value of decentralized web for libraries and archives in a poster at Hydra Connect in October. Here’s the outline of the key message and secondary messages I’ve come up with so far: ipfs/community#173 (comment) Please let me know if you have ideas about how I can present a clearer/stronger case. Feel free to chime in on the github issue.

kevinmarks [12:14 PM]
I wrote about implementing decentralized verification http://www.kevinmarks.com/distributed-verify.html

edsilv [5:24 PM]
@flyingzumwalt David Newbury from the Carnegie Museum of Art mentioned on the IIIF slack that he's interested in creating a tool to generate IIIF manifests from uploaded images, but he doesn't want to end up on the hook for "hosting costs for the entire Internet".

I suggested that he pin the manifest and assets to IPFS temporarily, allowing the user to then pin the resulting hash on their own node.

Tumbleweeds ensued, although no actual disagreement... I think this would be a pretty killer demo. I have proven already that IPFS can be used to load static "level 0" manifests in this way.

Being able to make these kinds of tools without incurring massive hosting fees seems like a business case to me.

flyingzumwalt [9:27 PM]
@edsilv I ❤️ that pattern of passing assets to a service via IPFS and then having the service pin the result temporarily so that users can retrieve it. Note that this approach still means the host is running CPUs to host the service and incurring network traffic to get the input content and return the result. It seems easier to simply distribute the software so people process the assets locally on their own machines, but I guess there is value in both approaches — distributing software vs hosting a service.
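A minimal TypeScript sketch of that "pin temporarily, let the user pin it for keeps" pattern, assuming a js-ipfs-style client; the `IpfsLike` interface, `publishTemporarily` helper, and TTL behaviour are illustrative assumptions, not an existing API:

```ts
// IpfsLike is a hypothetical, minimal stand-in for a js-ipfs-style client.
interface IpfsLike {
  add(content: Uint8Array): Promise<{ cid: string }>;
  pin: {
    add(cid: string): Promise<void>;
    rm(cid: string): Promise<void>;
  };
}

// Add and pin the generated manifest so the user can fetch it, then drop the
// pin after a grace period; if the user pinned the hash on their own node in
// the meantime, the content stays available without the service hosting it.
async function publishTemporarily(
  ipfs: IpfsLike,
  manifest: Uint8Array,
  ttlMs: number
): Promise<string> {
  const { cid } = await ipfs.add(manifest);
  await ipfs.pin.add(cid);
  setTimeout(() => ipfs.pin.rm(cid).catch(() => {}), ttlMs);
  return cid; // hand this hash back to the user to pin on their own node
}
```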

----- September 24th, 2016 -----

edsilv [2:15 PM]
@flyingzumwalt what if a network of libraries were to host the exact same service? Just thinking about what @btrask was saying about devops applications. If the service were packaged up in a generic enough way (docker?), IPFS pubsub could broadcast an update, and the network of servers could install the latest version, perhaps with each instance adding itself to an orbitdb registry to advertise its availability. Any http front end could provide a gateway to this distributed service, with the orbitdb providing a means to distribute the workload.
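A rough sketch of what that broadcast-and-register step might look like; `PubSubLike`, `RegistryLike`, the topic name, and `joinServiceNetwork` are hypothetical stand-ins for js-ipfs pubsub and an orbitdb store, not real APIs:

```ts
// Hypothetical shapes standing in for js-ipfs pubsub and an orbitdb
// key-value store; the real APIs differ in detail.
interface PubSubLike {
  subscribe(topic: string, handler: (msg: { data: Uint8Array }) => void): Promise<void>;
  publish(topic: string, data: Uint8Array): Promise<void>;
}
interface RegistryLike {
  put(key: string, value: unknown): Promise<void>;
}

const UPDATE_TOPIC = 'iiif-service/releases'; // assumed topic name

// Each library's node advertises itself in the shared registry and listens
// for release announcements; on receiving one it redeploys the packaged
// service (e.g. a docker image identified by the broadcast IPFS hash).
async function joinServiceNetwork(
  pubsub: PubSubLike,
  registry: RegistryLike,
  nodeId: string,
  endpoint: string,
  deploy: (releaseHash: string) => Promise<void>
): Promise<void> {
  await registry.put(nodeId, { endpoint, joinedAt: Date.now() });
  await pubsub.subscribe(UPDATE_TOPIC, async (msg) => {
    const releaseHash = new TextDecoder().decode(msg.data);
    await deploy(releaseHash);
  });
}
```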

edsilv [2:50 PM]
The servers would need to communicate with each other, maybe using webrtc or a common RESTful API. If the distributed service exposed a RESTful API for each node (location accessible via the orbitdb registry), server A could hand off all the work of processing the uploaded files to server B. Once complete, server B could send back the IPFS hash to server A to display on its front end. The user would then pin this to download the files directly from server B.
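One possible shape for that hand-off, with the registry lookup, the `/jobs` route, and the response body all as illustrative assumptions rather than a defined protocol:

```ts
// Minimal sketch of the hand-off: server A looks up server B's endpoint in
// the registry, POSTs the uploaded files to a (hypothetical) /jobs route, and
// gets back the IPFS hash of the processed result to show on its front end.
interface RegistryLookup {
  get(nodeId: string): Promise<{ endpoint: string } | undefined>;
}

async function delegateJob(
  registry: RegistryLookup,
  workerId: string,
  upload: Blob
): Promise<string> {
  const worker = await registry.get(workerId);
  if (!worker) throw new Error(`unknown worker node: ${workerId}`);
  const res = await fetch(`${worker.endpoint}/jobs`, { method: 'POST', body: upload });
  if (!res.ok) throw new Error(`worker rejected job: ${res.status}`);
  const { hash } = (await res.json()) as { hash: string };
  return hash; // the user pins this hash to fetch the files straight from server B
}
```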

The orbitdb could keep a running tally of how much work is being performed by each node in order to fairly distribute it. When a new job is required it would seek the node with the lowest "score" - I'm sure there must be an existing algorithm for that.

Nodes could be configured to say whether they pin things permanently or not. Perhaps this would affect their "score" too. So nodes contributing storage are less likely to be chosen for CPU work than nodes that are not?
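A sketch of one simple way to pick a node along those lines; the score formula and `storageWeight` are assumptions, not anything settled in the chat:

```ts
// Pick the least-loaded node: keep a running tally per node and choose the
// one with the lowest weighted score. Nodes already contributing storage
// score higher, so they are less likely to be handed CPU work.
interface NodeStats {
  id: string;
  jobsCompleted: number; // CPU work done so far
  pinsHeld: number;      // storage the node is contributing
}

function pickWorker(nodes: NodeStats[], storageWeight = 0.5): NodeStats | undefined {
  const score = (n: NodeStats) => n.jobsCompleted + storageWeight * n.pinsHeld;
  return nodes.reduce<NodeStats | undefined>(
    (best, n) => (best === undefined || score(n) < score(best) ? n : best),
    undefined
  );
}
```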

[2:55]
BTW, I expect this is all total bunk as I've just woken up from a long nap feeling quite disorientated :-)

flyingzumwalt [4:31 PM]
@edsilv these are good ideas. The IPFS & orbit devs would definitely have suggestions around this — ie. you want to consider the new pubsub implementation for communication between nodes. I’m wondering if we can move this conversation to irc so they can chime in? (edited)

edsilv [4:57 PM]
@flyingzumwalt Cool, yeah go for it.

----- September 25th, 2016 -----

edsilv [9:08 AM]
@flyingzumwalt I guess what we're talking about is essentially decentralised load balancing. This could be applied to any kind of service

[9:11]
although with an emphasis on fair "division of labour" as opposed to optimising for speed


flyingzumwalt commented:
👍 @edsilv I added those two messages to the gist.
