The following is an “oral history” of Freenet FMS and Web of Trust, received via personal communication on 2017 May 6 from Chris Wellons, who has kindly approved its distribution with no guarantees of infallibility.
Have you ever heard of Freenet? I haven't paid any attention to it in almost a decade, so I have no idea what state it's in these days. It's a lot more advanced than Dat, but less accessible -- you have to install a large, ill-behaved Java application. Like a Dat URL, a Freenet URL embeds a key that encrypts and authenticates the data, so only people who know the URL can decrypt it. It has both static URLs with fixed data (like BitTorrent magnet links) and appendable URLs (like Dat), where someone who controls a private key can update the data behind it. The latter is built on the former. This mechanism is how Freenet blogs, and other updating Freenet websites, work.
One way that it's more advanced is that requests and responses are made via Freenet's built-in onion-routing network (like Tor). That way no one knows who's asking for what, just that someone is asking. With Dat, you could monitor which IPs are accessing a particular dataset pretty easily. For example, you choose a node ID in the Kademlia network (the DHT algorithm used by Dat) that's near one of the chunk hashes in the Merkle tree of the data being monitored. Requests for that data have to find their way to you.
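The monitoring trick above relies on Kademlia's XOR distance metric: lookups route toward the node whose ID is numerically closest (by XOR) to the key being fetched. A minimal sketch of how an attacker could position themselves, using a hypothetical chunk hash as the target:

```python
import hashlib
import os

def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia distance: XOR the two 256-bit IDs, read as an integer."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

# Hypothetical chunk hash from the Merkle tree of the dataset being watched.
target = hashlib.sha256(b"some chunk of the monitored dataset").digest()

# The attacker generates many candidate node IDs and keeps the closest one.
# Lookups for that chunk then route toward the attacker's node, revealing
# who is requesting the data.
candidates = [os.urandom(32) for _ in range(10_000)]
best_id = min(candidates, key=lambda nid: xor_distance(nid, target))
```

This is only an illustration of the distance metric, not of Dat's actual wire protocol; a real attacker would also have to join the DHT and answer routing queries at that ID.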
Freenet had (has?) the most censorship-resistant forum I've seen to this day. I think it was called FMS. As a front end it exposes an NNTP server for the forum (i.e. the Usenet protocol) and a web server for account administration. You browse the forum with your favorite NNTP client, so it's comfortable and familiar.
As a storage backend it used Freenet. You store all your forum messages under your own per-forum-account appendable URL. To find messages on the forum, your client polls all the other accounts that it knows about and gathers their messages. Since everyone writes to their own space, there's no way to remove someone else's messages. Super censorship resistant!
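The gather step can be sketched in a few lines. Here a plain dict stands in for Freenet, and the URLs and message layout are made up for illustration; the point is only the shape of the design, where each account appends to its own space and readers merge all the streams locally:

```python
# Hypothetical store: appendable URL -> list of (sequence, message) entries.
# In real FMS these would be fetches of Freenet appendable keys.
freenet = {
    "USK@alice/fms/": [(0, "hello"), (1, "second post")],
    "USK@bob/fms/":   [(0, "hi alice")],
}

def poll(known_accounts):
    """Fetch every known account's own stream and merge into one board view."""
    board = []
    for url in known_accounts:
        for seq, msg in freenet.get(url, []):
            board.append((url, seq, msg))
    return board

messages = poll(["USK@alice/fms/", "USK@bob/fms/"])
```

Since no account ever writes into another account's space, there is nothing a censor can delete short of attacking the underlying storage itself.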
However, that's only half the problem. You also need some way to deal with spam and the like, since that gets harder when you can't remove messages. There's also the account discovery system: how do you find the accounts to poll? This is tied into the anti-spam system. When you make an account, you solve some CAPTCHAs for an existing account (this is automated by the software for the existing user, so they don't have to manage it). Once their software is satisfied, it adds the new account (i.e. its appendable URL) to its published list of known accounts.
This list has the account URL along with two scores for that account: the spam score for their posts and the score for their own published listing. That's where it gets complicated, though fortunately most users don't have to think about it. Basically how much you trust someone else's list depends on how much everyone else scores their list. It's a meta-score. The more effective someone is at identifying spam, or similar, the more reliable their spam scores are, and the higher other users rate their scores. This makes for a sort of meritocratic moderation system in which you're free to ignore any moderators you don't like, and continue seeing messages they give low scores to. It's all pretty neat.
For accounts whose scores are below some threshold, your client doesn't even bother polling their URLs. This is important because it means you're not replicating their posts on the network. The more something is accessed, the more accessible it becomes. Stuff that isn't accessed in a while falls off the network. If you poll a spammer, or a bad user, you're helping to propagate their messages.
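The two-score idea from the last couple of paragraphs can be sketched as follows. The exact FMS formula differs and the names, numbers, and threshold here are invented; this only shows the shape: each peer publishes a list giving other accounts a message score and a list score, your client weights each peer's opinions by how much you trust that peer's list, and anything landing below a threshold is simply never polled.

```python
# Hypothetical trust lists: peer -> {account: (message_score, list_score)}.
# Scores run 0-100, as in FMS; the values themselves are made up.
trust_lists = {
    "alice": {"carol": (80, 90), "spammer": (0, 0)},
    "bob":   {"carol": (70, 60), "spammer": (5, 10)},
}

# How much *you* trust each peer's published list (the meta-score).
my_list_trust = {"alice": 100, "bob": 50}

def effective_score(account):
    """Average peers' message scores, weighted by trust in each peer's list."""
    num = den = 0
    for peer, weight in my_list_trust.items():
        entry = trust_lists[peer].get(account)
        if entry is not None:
            num += weight * entry[0]  # peer's message score for this account
            den += weight
    return num / den if den else None

POLL_THRESHOLD = 30  # illustrative cutoff

def should_poll(account):
    # Below-threshold accounts are never fetched, so your node never
    # helps replicate their messages on the network.
    score = effective_score(account)
    return score is not None and score >= POLL_THRESHOLD
```

Note how ignoring a "moderator" falls out for free: set their entry in `my_list_trust` to zero and their opinions stop affecting what you see, with no change to anyone else's view.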
The web admin part existed to manage this spam list stuff, since that doesn't have an analogue in NNTP.
Another really cool app was a Freenet backend for Mercurial, where you could push commits to an appendable URL, and very effectively host an active repository on Freenet. FMS used this to host its source and collaborate with multiple developers. It was very cool, and I also haven't seen anything like it since.
It's not all sunshine and roses, of course. In the West, the internet is currently a very free place, and it's unlikely to change anytime soon. You can say pretty much anything you want without any real problems. There are really only two kinds of content that will get you in trouble on the internet: copyright infringement (mild consequences) and child pornography (very serious consequences). Given that it's a lot more useful to do things on the regular internet than on Freenet, anything you can do freely on the internet you'll do there instead, so I'll let you guess what sort of content thrives on Freenet.
There's a good recent quote from Scott Alexander about this phenomenon happening elsewhere:
The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.
Fortunately, ten years ago this stuff was easy to avoid since it was all clearly segregated and demarcated on the network via social (not technical) forces. That's probably still the case. That's because there are basically two kinds of people on Freenet: the cryptography-loving civil libertarians and the child pornographers (who had nowhere else to go). The former had very little tolerance for the latter, so the network split into two nearly disconnected social webs. And I mean that literally. Someone had plotted/visualized the FMS score listings (described above) and you could see the two nearly distinct subgraphs. The FMS score mechanism effectively barred anyone associated with child porn from interacting with the regular forums, resulting in two separate FMS web-of-trust networks. Really it was a neat phenomenon, and showed that with enough social stigma, even something like FMS, without any hard moderation, could still completely exile undesirables.