So, as I mentioned last time, I have two fundamental goals with dat that are not addressed by simply running dat share.

  • Uptime: making sure that the site is seeded even if my local laptop is closed, eaten by a bear, or disconnected from the internet
  • Resilience: ensuring that there's a way to restart my website if the original seeding computer is lost. I try to make everything on my primary work/personal computer work in such a way that I can recover it all, easily, onto a new machine if I need to

To break these down a bit more, uptime is a combination of two things:

  • Ensuring that there are seeders
  • Ensuring that those seeders are seeding, and they're up-to-date

These sound similar, but are really significantly different goals.

Ensuring that there are seeders

I can ensure that there are seeders by creating them. One seeder is a local Lenovo x220i ThinkPad that I bought off of eBay for about eighty bucks. It runs Debian and a dat seeder, and not much else, and I'm pretty satisfied with my decision to go with older laptop hardware instead of a Raspberry Pi: I dislike accumulating things, and the Raspberry Pi requires lots of knickknacks to get it going: a keyboard, monitor, flash card, wifi dongle, mouse. A laptop, on the other hand, has all of those things built in and only requires one wire: the power supply. Plus, it's cheaper.

The disadvantage of the ThinkPad is that installing Linux on arbitrary hardware can be a bizarre challenge. In my case, the USB flash drive that I tried to boot Debian from simply didn't cooperate: I ended up buying 3 more flash drives from Amazon for a grand total of $12, and they immediately resolved the issue. But not before a few nights spent trying every variation of formatting and imaging the flash drive I had.
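Once the seeder box is up, you also want `dat share` to survive reboots and crashes without you SSHing in. One option on a Debian machine is a systemd unit; a sketch, where the unit name, the `/usr/local/bin/dat` path, and the `/home/seeder/site` directory are all assumptions you'd adjust for your setup:

```ini
# /etc/systemd/system/dat-seed.service (hypothetical unit name)
[Unit]
Description=Seed my dat site
After=network-online.target
Wants=network-online.target

[Service]
User=seeder
WorkingDirectory=/home/seeder/site
ExecStart=/usr/local/bin/dat share
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now dat-seed` keeps it seeding across power cuts, which is most of what the eighty-dollar ThinkPad is for.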

Ensuring that those seeders are seeding

This ends up being quite a bit trickier. Let's say you publish your newest post and run dat share. How long do you need to wait before you can close your laptop, confident that the latest blog post is distributed through the decentralized-net? The answer is, well, it's tricky:

  • By default dat lists a number of peers, but gives no indication of which peers are 'up to date'. Your site could have 12 peers, but all of them might be seeding the old copy, without your newest blog post. You just don't know.
  • You don't know whether a pinning server is one of the up-to-date peers. If you're using a site like hashbase or running your own pinning servers on AWS or your crusty old ThinkPad, the only way of telling if hashbase or those peers are up-to-date is by visiting hashbase's HTTP website. You might have 12 peers, but all of them might be casual visitors who are temporarily seeding until they close Beaker Browser or turn off their laptops.

Unfortunately, the default display for peering statistics isn't a reliable indicator of the actual health of a site.
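There's no built-in health check for this, but the underlying idea is simple: a peer only counts as healthy if it holds the latest version of the archive, so what you want is a version comparison, not a peer count. A minimal sketch of that comparison, with the per-peer status strings faked inline (real tooling would have to ask each peer; the `Version: N` format here is an assumption for illustration):

```shell
# A peer is up to date only if its archive version matches ours.
# These strings stand in for real per-peer status output.
local_status="Version: 42"
peer_status="Version: 41"

local_ver=$(printf '%s\n' "$local_status" | grep -o '[0-9][0-9]*')
peer_ver=$(printf '%s\n' "$peer_status" | grep -o '[0-9][0-9]*')

if [ "$peer_ver" -eq "$local_ver" ]; then
  echo "peer is up to date (version $peer_ver)"
else
  echo "peer is stale: has $peer_ver, latest is $local_ver"
fi
# prints: peer is stale: has 41, latest is 42
```

The point of the sketch: "12 peers" tells you nothing unless at least one of those peers reports the same version you just published.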


Before diving into more of the details, I'll also unpack what resilience means. The central question is: what is a dat? Play out the following scenario:

  • You publish your site with dat share or Beaker Browser, and share the dat link with friends: "Hey, check out my coffee fansite at dat://c0ffee"
  • You share your dat site from your local machine and you carefully use dat keys export to write down the dat's secret key on a piece of paper, or store it in your password manager.
  • You're updating this excellent website about coffee when, ironically, you spill coffee all over your computer, completely destroying it.
  • Now you ask yourself: how can I update dat://c0ffee again? I have the key, and I have the URL - what's next? The answer might surprise you, in a bad way. It's a bad surprise. There are two scenarios:
    • Someone liked your coffee site enough that they're seeding it: hooray! You can run dat clone to grab a copy from them, import your key, and you're back in business.
    • Nobody liked your coffee site enough to seed it. In this case you're totally hosed. That link you shared with your friends is lost forever in the sands of time, never to host a coffee fansite, or anything else, ever again.

So, to dig deeper into that existential question of what's a dat, the answer is not 'a public and secret key'. Secret keys do not grant someone the ability to host a dat at a public key (aka a dat link). The question, then, is, what does?

Essentially, to continue to host a dat indefinitely, you'll need:

  • The public key
  • The private key
  • The .dat folder
  • Possibly a copy of all the data (TODO: confirm/deny this)
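The list above suggests a backup habit: archive the .dat folder together with the content, rather than trusting that the content alone (or the key on paper) is enough. A hedged illustration with a throwaway directory standing in for a real site (all paths and filenames here are made up):

```shell
# Fake site layout standing in for a real dat directory;
# the key filename is invented for the sketch.
mkdir -p /tmp/coffee-site/.dat
echo "dat://c0ffee" > /tmp/coffee-site/.dat/metadata.key.example
echo "<h1>coffee</h1>" > /tmp/coffee-site/index.html

# Back up the content AND the .dat folder; tar picks up the
# dotted directory because it recurses the whole site dir.
tar -czf /tmp/coffee-site-backup.tar.gz -C /tmp coffee-site

# Confirm .dat made it into the archive.
tar -tzf /tmp/coffee-site-backup.tar.gz | grep '\.dat'
```

If that tarball lives somewhere off the coffee-splashed laptop, the "nobody seeded it" scenario stops being fatal.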

This isn't a theoretical problem. Like many web people, I use Jekyll to manage this blog. It has inputs of Markdown files and templates, and it outputs a _site directory of HTML, CSS, and essentially everything that goes on the internet. My base assumption is that Jekyll is an idempotent function: given the same input, it produces the same output. So I should be free to delete the _site directory, re-run jekyll build, and get the same _site directory as I had before.

But if I did that to a _site directory that contained a .dat, then I might be losing access to my site forever, which is profoundly uncool.
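One mitigation, assuming the .dat folder lives inside _site: Jekyll's keep_files setting tells jekyll build not to delete matching paths when it regenerates the site, so the archive's metadata survives a rebuild. A sketch for _config.yml:

```yaml
# _config.yml: preserve the dat archive metadata across rebuilds
keep_files: [".dat"]
```

This makes "delete _site and rebuild" merely inconvenient instead of identity-destroying, though it does nothing for the spilled-coffee case.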

Now, technically, I expect most people to access the dat version of this website via its ./.well-known/dat file, a sort of 'crosswalk' between DNS, HTTPS, and dat that makes a dat:// domain resolve to some dat URL. So I could just update that file to point to a new dat, and for most users it'd be all fine. But there are drawbacks to that approach:

  • The well-known file is a stopgap, and the potential replacements for it - like moving dat information into DNS-over-HTTPS records - do not afford the same quick changes
  • People who were accessing the site from the raw dat URL will never see updates
  • Any pinning sites that I've configured to pin this site would have to be reconfigured, or would have to constantly re-fetch the well-known file so that they know what to pin
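For reference, the crosswalk file itself is tiny: served over HTTPS at /.well-known/dat on your domain, it holds the dat URL to resolve to and, by convention, a time-to-live line. A sketch, reusing the post's `c0ffee` placeholder rather than a real 64-character key:

```
dat://c0ffee
TTL=3600
```

Swapping the key in that file is the "quick change" the stopgap affords, which is exactly what the DNS-based replacements make slower.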

sull commented Aug 9, 2018

Thanks for the notes. I've often thought about these issues as well. It is in part why one of my side projects that involves DAT takes a cloud-first approach for now. Meaning, it does not fully embrace all of the values of the p2p web movement (yet). It does the inverse of hashbase/homebase. Publishing is done via a cloud web app, and only published content/data is seeded on the DAT p2p network, where the author and others can also pull/seed/fork it. Publishing from the author's local computer, either manually or via a local javascript app, to the original DAT archive hosted in the cloud (as a super peer) is not possible until I add multiwriter key management support, or an ad hoc method to merge a fork which has updated content (messy).

So essentially my approach is Cloud --> DAT until I can be confident in a DAT --> Cloud design (for this specific side project).
