
@s0lesurviv0r
Created November 9, 2022 08:53

Decentralization for Disasters

Introduction

I was fortunate enough to grow up without a dependence on the internet and cell phones. We used paper maps and driving directions printed from Yahoo, memorized phone numbers, kept physical books, and had local meet-up spots. I have nothing against digitizing these functions, but I feel the current approach is very risky.

Nearly all of the aforementioned functions have been turned into apps: contact books became Facebook (many people don't even have their friends' phone numbers anymore), maps and driving directions became Google and Apple Maps, and books and knowledge moved to a variety of apps, websites, and Wikipedia. Again, the apps themselves are not necessarily the problem. Quite the contrary: since I carry a laptop and a cell phone, I should enjoy a level of redundancy I never had before digitization. The key problem in today's digital ecosystem is that apps are engineered with the assumption that devices are always connected. This isn't a noticeable problem most of the time because we've enjoyed relatively stable cell and internet infrastructure.

I can't really blame application developers for the risk they've exposed us to. Like me, these developers live in urban areas with stable internet access, where anything less than 99.9% availability is pretty foreign. Having been an application developer myself, I know that designing or testing for offline use rarely comes up. If that weren't enough, the business model complicates things further.

Today, applications primarily generate revenue through two methods: advertising and data/content. For applications monetizing through advertising, an active internet connection is required to retrieve an ad for the user to see and to track that the ad was indeed seen and/or clicked. The entire advertising ecosystem requires an active connection, and since the app can't make money without the ads, why bother supporting offline users?

Even where an application doesn't rely on advertising, the content is often stored in a walled garden behind a controllable gate. Think of Google Maps or Google Earth. Google has assembled an incredibly large dataset of satellite imagery, businesses, roads, and cities, yet access is tightly controlled. Google may be required to keep a tight grip on this dataset by license agreements with data providers; that is not the case for every business, but it leads to the same problem: offline applications become difficult to provide. This particular example is imperfect because Google Maps has recently introduced an offline mode, but that is the exception rather than the rule. Even when the will exists, sometimes the resources don't; it takes extra engineering effort to design and implement offline functionality.

The reality is that regardless of monetization strategy, or lack thereof, the payoff for the effort of providing offline capabilities may not be there. My working idea is not to force these changes on existing applications but rather to use a separate, but complementary, ecosystem. I'll detail what I do personally and how my experience supports this thesis.

I feel it's appropriate to start with files. The current trend is keeping files on a third-party service. Whether it's photos on iCloud/Google Photos, documents on OneDrive/Google Drive, books on Amazon, or music on the plethora of streaming services, the idea is the same: files are stored on an external service and some subset is spread across a user's devices. This model works as long as there is a stable internet connection. For myself, I'd prefer to have all my documents, photos, books, and music available to me at all times. To make this happen I started using an application called Syncthing. It's available for nearly every platform except iOS (I have an Android phone). Syncthing connects all your devices together and lets you select which folders to share among them. If a folder changes, the new versions are replicated to all the other devices that share that folder. Replication has to happen over a local network or the internet, but that's the only time connectivity is needed. Using Syncthing, every device has a full offline copy of the files I need it to have.
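The core idea — detect changed files and copy them to peers that hold the same folder — can be sketched in a few lines. To be clear, this is only a conceptual illustration, not how Syncthing actually works: Syncthing uses block-level transfers, versioning, and conflict handling. The function names here (`file_digest`, `replicate`) are my own invention.

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """Fingerprint a file by hashing its contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def replicate(src: Path, dst: Path) -> list[str]:
    """One-way sync: copy files from src to dst when missing or changed.

    Returns the relative paths that were copied, so a second run on an
    unchanged folder returns an empty list (nothing to transfer).
    """
    copied = []
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        rel = f.relative_to(src)
        target = dst / rel
        if not target.exists() or file_digest(target) != file_digest(f):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied.append(str(rel))
    return copied
```

The point of the content hash is that only changed files cost any bandwidth, which is why keeping full copies on every device is cheap in practice.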

Let me illustrate why having full copies of files is important. First, my phone and laptops have the latest copies of all my books, manuals, maps, and other emergency resources, which means I have access to medical texts, repair manuals, and survival guides wherever I go. In terms of space and bandwidth, it costs little to keep my devices in sync. Second, documents are crucial during emergencies; the last thing I want is to struggle to find bank accounts, insurance policies, identification, and so on while evacuating or when internet infrastructure is down. Finally, having photos on hand makes it easier to find family should we get split up.

Outside of any PDF maps I keep synced across my devices, I use a few different applications with offline map capability. On my Android phone I use Organic Maps, which can download OpenStreetMap data for entire states and countries and provides offline driving directions and points of interest (e.g. stores, gas stations, hotels). On my laptop I use YAAC (Yet Another APRS Client), which supports offline maps incredibly well and is a fully functioning APRS client. For those who don't know, APRS is an amateur radio protocol for sharing tactical data, usually location-centric.

I'd also like to point out a newer piece of software that's been gaining traction and that I hope to start using regularly. The InterPlanetary File System (IPFS) is a project to decentralize the web, partly by hosting replicas of the same files and pages on many volunteering peers. I won't go too far into the technical details, as I intend this post to serve a broader audience. Anyone can run an IPFS peer on any of their devices and access or share files and pages with anyone else on the network. This isn't the classic client-server model; instead, everyone is a peer. When requesting a file or page, the local IPFS peer contacts other peers to try to get a copy. The process works in reverse when sharing: to share a document, I add it to my local IPFS peer and receive a unique file fingerprint that I can pass to others, and that fingerprint is what other peers use to request the document. IPFS has many features, but the key one for disaster scenarios is that I can "pin" certain documents and access them even if I have no access to the IPFS network, or to the internet on which it rides. Better still, I can continue to exchange files over whatever subset of the internet remains connected.
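The "fingerprint" idea is what makes this work: the key for a document is derived from its bytes, so any peer holding a copy can serve it and any requester can verify what it received. Below is a toy sketch of that content-addressing principle, assuming a plain SHA-256 hex digest as the fingerprint; real IPFS fingerprints (CIDs) use a richer multihash encoding, and the `add`/`get` functions here are hypothetical, not the IPFS API.

```python
import hashlib

# Toy content-addressed store: the key is derived from the content
# itself rather than from a location or a server name.
store: dict[str, bytes] = {}

def add(content: bytes) -> str:
    """Store content and return its fingerprint (the share-able key)."""
    fingerprint = hashlib.sha256(content).hexdigest()
    store[fingerprint] = content
    return fingerprint

def get(fingerprint: str) -> bytes:
    """Fetch content by fingerprint, verifying it hasn't been altered."""
    content = store[fingerprint]
    assert hashlib.sha256(content).hexdigest() == fingerprint
    return content
```

A nice consequence: adding the same document twice yields the same fingerprint, so identical files are automatically deduplicated across peers, and it doesn't matter which peer a copy comes from.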

I hope this post provides an overview of why we need decentralization and of the tools I use personally. Most of it revolves around having offline content, as is the model with most decentralized platforms.
