A Whistlepig Overture

These are my collected ideas for a Gopher-like protocol that takes ideas from HATEOAS, 9p, and Gopher, and adds in cryptographic controls that make things nicer. There are a few specific goals:

  • The protocol should be encrypted from the start.
  • The encryption should be simple, like SSH and SPKI, and should limit Certificate Authorities in favor of federation.
  • The protocol itself should be simpler than HTTP, but a bit more meaty than Gopher proper.
  • If possible, support for unmodified Gopher/HTTP clients would be nice.
  • API support from the start: no "hacking in" file systems, chat, email, &c.
  • Namespaces like domains are great, let's do more of that!
  • Separate out documents from applications.

I'll cover each of these ideas in the remainder of this document.

Encrypted sebz gur fgneg

The internet is no longer a place wherein unencrypted, unverified protocols can meaningfully survive. As such, Whistlepig should support fully encrypted and authenticated communication from the start. Note that this does not imply a lack of anonymity; keys needn't declare who actually owns them, only that the bearer is the same party who presented them before. Encryption should be relatively modern, something along the lines of AES-GCM, with an ephemeral RSA key, signed with the bearer's authentication key.

TOFU, PSK, and other delectable key delights

Two-way SSL is a clearly superior form of authentication, with one major drawback: the requirement to host some sort of Certificate Authority that can handle authentication of user keys. There are historical and modern alternatives, however, and Whistlepig should support one of these. Currently under consideration are macaroons/Vanadium and SPKI.

Broadly, the key system of a user should be the following:

  • a generic "this is my host" anonymous key. There is no need for this to be tied to anything specific to the user.
  • N (where N >= 1) "user certificates", which identify the user based on some value.

The anonymous key is used only when a user wishes to "anonymously" (really, pseudonymously) access a site. A user can certainly maintain N of these keys as well, or the browser could even generate a new one for each host, to crack down on tracking networks. The second type of key would be an actual identity key. This would tie the user to some specific identity, say, lojikil@somehost!somecorp. The purpose of this key is to ensure that a user is who they say they are, and to sign any ephemeral keys used for communication. On the flip side, a server need not store anything about the user save for their certificate's fingerprint; no password or the like is needed. Users' key managers can be used to store keys securely, with both per-manager and per-key passwords (i.e. both the manager and the key itself have passwords, hopefully separate ones).

NOTE: J-PAKE also works rather well here.
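
As a minimal sketch of this flow (Ed25519 and SHA-256 are illustrative stand-ins here; the protocol itself doesn't fix the algorithms), the identity key signs the ephemeral session key, and the server's entire record of a user is one fingerprint:

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

def fingerprint(pub: ed25519.Ed25519PublicKey) -> str:
    raw = pub.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    return hashlib.sha256(raw).hexdigest()

# client: long-lived identity key signs a fresh ephemeral session key
identity = ed25519.Ed25519PrivateKey.generate()
ephemeral = ed25519.Ed25519PrivateKey.generate()
ephemeral_pub = ephemeral.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
signature = identity.sign(ephemeral_pub)

# server: stores nothing about the user but a fingerprint
known_users = {fingerprint(identity.public_key()): "lojikil@somehost!somecorp"}
identity.public_key().verify(signature, ephemeral_pub)  # raises on forgery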

Not much meat on a Gopher

Believe it or not, I use Gopher pretty heavily in my work; in fact, Chasha is probably my most widely-deployed framework in terms of sheer number of systems. I use Gopher for remote monitoring, management, file system access, &c. Whilst Gopher is nice because it's easily implemented on both the client and server sides, it doesn't provide many of the features we would want in a modern protocol. On the flip side, I defy you to find someone who actually understands all the implications of an HTTP/1.1 implementation; at the office, we've had lengthy discussions simply on the semantics of Cache-Control and Pragma. Furthermore, protocols like 9p exist but go widely unused, even though they may offer superior ideas.

Many years ago I thought of a reduced 9p called 3p. It worked thusly:

  • R <tag> <path> <data-length> <data>
  • T <tag> <data-length> <data>
  • E <tag> <data-length> <error-message>

This worked relatively well, and I was able to quickly implement clients for this reduced protocol. A few years ago, I was pondering Gopher, 9p, and how to make them better, and came up with an "advancement". This smooshed together ideas from 9p, Gopher, and OpenWAIS into a protocol that I was "relatively" happy with (a toy serialization sketch follows the message list):

  • M <tag> <selector> <query-length> <data-length> <query> <data>
  • R <tag> <data-length> <data>
  • E <tag> <error-length> <error-message>
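
As that toy sketch (nothing here is normative: the space-separated framing and the helper names below are my own assumptions), building and parsing these messages takes only a few lines of Python:

def make_request(tag: str, selector: str, query: bytes = b"", data: bytes = b"") -> bytes:
    # M <tag> <selector> <query-length> <data-length> <query> <data>
    header = f"M {tag} {selector} {len(query)} {len(data)} ".encode("utf-8")
    return header + query + data

def parse_reply(raw: bytes) -> tuple[str, str, bytes]:
    # R <tag> <data-length> <data>  /  E <tag> <error-length> <error-message>
    kind, tag, length, rest = raw.split(b" ", 3)
    return kind.decode(), tag.decode(), rest[:int(length)]

req = make_request("74710433-89B7-4724-B606-AC0241E18A5C",
                   "/mail/lojikil/read",
                   query=b"since=2016-09-25T0900.00")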

However, I was still stuck with the old Gopher issue: a server was tied to a single host, because paths were effectively just file system paths. Several months ago, I was thinking about $GOPATH, and about the failure of Gopher, when it struck me: a hostname is just a file system namespace stuck in the path. I'll cover this a bit later in the document, but suffice it to say, I think I have something.

So, the protocol works thusly:

  • a <tag> is some timestamp-coordinated identifier on the client that is used to correlate server responses.
  • a <selector> is some path on the server, potentially prefixed with a host name.
  • a <query> is some refinement of the path/selector, in some idempotent fashion.
  • the <data> is some data to send to the receiver attached to that selector.

So a typical session would be:

M 74710433-89B7-4724-B606-AC0241E18A5C /mail/lojikil/read 24 0 since=2016-09-25T0900.00
R 74710433-89B7-4724-B606-AC0241E18A5C 1400 <mail data>

Here we've sent a tag (just a random UUID), a selector (/mail/lojikil/read), a query length of 24 (the byte length of the query), a data length of 0, and the query since=2016-09-25T0900.00. The server then responds with the client's tag and the data requested. With regard to chunked or overly large data, I do think that the client should be able to handle N-ary R messages from the server, each tagged with the same tag, so that servers can "push" data down to the client. This would require clients to implement a reasonable timeout for such transactions as well, so as to not be DoS'd by malicious servers (a sketch of such a loop follows).
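
A minimal sketch of that client loop, assuming (my assumption, not the document's) that an empty R message marks the end of a stream, and taking an iterator of already-parsed (kind, tag, payload) messages:

import time

def collect_responses(messages, tag: str, timeout: float = 30.0) -> bytes:
    # drain R messages for one tag, bounded by a wall-clock deadline so a
    # malicious server can't hold the transaction open forever
    deadline = time.monotonic() + timeout
    chunks = []
    for kind, msg_tag, payload in messages:
        if time.monotonic() > deadline:
            raise TimeoutError("transaction held open too long")
        if msg_tag != tag:
            continue            # belongs to another in-flight transaction
        if kind == "E":
            raise RuntimeError(payload.decode())
        if not payload:         # assumed end-of-stream marker
            break
        chunks.append(payload)
    return b"".join(chunks)

body = collect_responses(iter([("R", "t1", b"<mail"), ("R", "t1", b" data>"),
                               ("R", "t1", b"")]), "t1")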

NOTE: what if, instead of sending query-length data-length query data, we just used Netstrings for all content? Even paths could just be Netstrings.
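
A netstring is just <length>:<bytes>, so the whole message grammar would collapse into one framing rule. A quick sketch of that idea (the choice to netstring every field, selector included, is the NOTE's speculation, not a settled design):

def to_netstring(data: bytes) -> bytes:
    return str(len(data)).encode("ascii") + b":" + data + b","

def from_netstring(buf: bytes) -> tuple[bytes, bytes]:
    # returns (payload, remainder of buffer)
    length, _, rest = buf.partition(b":")
    n = int(length)
    if rest[n:n + 1] != b",":
        raise ValueError("malformed netstring")
    return rest[:n], rest[n + 1:]

# an entire M message as nothing but netstrings:
msg = b"".join(to_netstring(f) for f in
               [b"M", b"74710433-89B7-4724-B606-AC0241E18A5C",
                b"/mail/lojikil/read", b"since=2016-09-25T0900.00", b""])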

Another extremely useful aspect of Gopher was the fact that data was structured, not marked up. This meant that we knew the type of a resource before accessing a link, in a fixed-format message response from Gopher servers:

7My Search System    /foo/search    quux.lojikil.internal    7070
1Some User stuff I wrote    /foo/lojikil    quux.lojikil.internal    7070
0My Diary entries    /foo/lojikil/diary.txt     quux.lojikil.internal    7070
iThe above is my fancy search system    NONEXISTENT    quux.lojikil.internal    7070

The spaces above actually represent tabs, but the crux of the point is that Gopher data was structured: a client knew, prior to even accessing /foo/lojikil, that it should expect a directory, and that /foo/lojikil/diary.txt was going to be plain text. In modern terms, this is similar to HATEOAS, or other RESTful styles. I'm on the fence as to whether to utilize plain S-expressions or JSON (or both, using a Gopher+-style system of allowing the client to request either). S-expressions would be interesting, in that if we used SPKI, the entire system from front to back would be S-expressions.
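
Purely as an illustration of the two candidate encodings (the field names are hypothetical; no schema is settled), the search entry from the menu above might render as either:

; as an S-expression
(entry (type search)
       (display "My Search System")
       (selector "/foo/search")
       (host "quux.lojikil.internal")
       (port 7070))

; or as JSON
{"type": "search", "display": "My Search System",
 "selector": "/foo/search", "host": "quux.lojikil.internal", "port": 7070}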

Sings like a whistlepig, dressed like an Apache

I know full well that few, if any, will adopt this approach, so backwards compatibility with HTTP & Gopher would be useful to have. I'm not sure how much I care to support those directly myself, however.

APIs are important

In olden times, protocols were drafted for hyper-specific purposes; think about all the mail-like protocols that exist. Personally, my favorite is PCMAIL (RFC 984), which has some interesting qualities. Regardless, the point is that it would be useful to have standard notions of how accessing chat rooms (like 9p-to-IRC bridges), mail (like the various 9p-to-SMTP bridges, and mbox vs. Maildir), &c. should work. Having standard mechanisms for accessing standard items would be extremely useful. I think at least the following should be defined (a purely hypothetical selector mapping follows this list):

  • Mail: how does a user send mail to another user on this system?
  • News: how does a user get, in a standard fashion, news (like RSS) for an item/server?
  • Chat: how does chat/private message work, and can it be supported simply?
  • Posting: does this server allow users to post data? How?
  • Normal site content should be discoverable, via "menus" or whatever mechanism.
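
For instance, such standard surfaces could be exposed as well-known selectors; everything below is hypothetical illustration (only /mail/lojikil/read appears elsewhere in this document), not a settled mapping:

/mail/<user>/send     M message with data = the mail to deliver
/mail/<user>/read     M message with query = filters, e.g. since=<timestamp>
/news/recent          RSS-like listing of recent items for the server
/chat/<room>/post     append a line to a chat room
/chat/<user>/msg      private message to a user
/post/<path>          user-submitted content, if the server allows posting
/                     root menu; site content discoverable from here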

Tackling these problems straight away would be extremely useful: instead of a multitude of protocols, we can address them all with one, rather than hacking them onto HTTP.

From now on, you should call me by my chosen name: srv001dc0us

One major advantage that HTTP has over Gopher is a simple little header: Host. This allows a single server to host multiple distinct sites. On the flip side, it also allows a group of servers to host the same content based solely upon a small string that a user can easily provide. There's no reason why Gopher-style selectors cannot be prefixed with the namespace that a given path should be tied to. For example:

  • foobar.lojikil.com/index.qml is a request to the main FQDN's document.
  • /index.qml is a request to the default site's document.
  • 0apudznmb2jf76l934tgsxwe8qyhri/index.qml might be a darknet site hosted on the server.

The interesting point here is that we can have multiple namespaces of data, similar to HTTP's Host header & Plan 9 namespaces.
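
A sketch of resolving such selectors, with one guessed rule (anything before the first "/" names the namespace, and an empty prefix means the default site):

def split_selector(selector: str) -> tuple[str, str]:
    namespace, _, path = selector.partition("/")
    return (namespace or "default", "/" + path)

assert split_selector("foobar.lojikil.com/index.qml") == \
    ("foobar.lojikil.com", "/index.qml")
assert split_selector("/index.qml") == ("default", "/index.qml")
assert split_selector("0apudznmb2jf76l934tgsxwe8qyhri/index.qml") == \
    ("0apudznmb2jf76l934tgsxwe8qyhri", "/index.qml")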

An abyss awaits

This isn't my first foray into the world of dark nets; if you're counting along at home, this is at least my 5th. The idea here is that if a server were to host a "hidden service", it could easily determine who should have access to that service in a few ways:

  • by connection port.
  • by IP address (e.g. a local SOCKS proxy is the sole connection point).
  • by user key.

Because the protocol has user identity tied in, hosts could validate users by key prior to answering a request; this would mean that "dark sites" could easily sit on the same host, and the server could happily utilize standard traffic as a foil for defeating traffic analysis. (NOTE: obviously this does not handle what happens when your server is seized, so there would still be the requirement to have a tangible threat model in place for such a situation.)
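
A sketch of that per-key gate (the fingerprints and the "answer as if nonexistent" behavior are my illustration, not specified above):

DARK_ACLS = {
    # namespace -> identity-key fingerprints allowed to see it
    "0apudznmb2jf76l934tgsxwe8qyhri": {"deadbeef" * 8},  # obviously fake
}

def may_answer(namespace: str, key_fingerprint: str) -> bool:
    acl = DARK_ACLS.get(namespace)
    if acl is None:
        return True   # public namespace: this is the cover traffic
    # unknown keys should see the same thing as a nonexistent selector
    return key_fingerprint in acl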

An application is not a document

One thing that bothers me about the modern web is that we are generally delivering applications over a protocol and system designed for delivering documents. Anyone with a library or search background can explain to you the intricacies of why 12083 math, OpenMath, and the like all exist: because we're looking for ways of describing data in a machine-digestible way. This doesn't translate well to applications: these are items that may do radically new things, and we hamper them by requiring that they draw to a <canvas> and fit inside a <div>. As such, a major portion of Whistlepig should be that applications are separate entities from documents, which are in turn separate from resources. Similar to Word or other XML documents, Whistlepig documents should include all necessary resources within the response, so that clients needn't round-trip multiple times to a server. Applications, for their part, should include some sort of binary VM payload, à la WebAssembly, and be provided with a simple sandbox of file system space, constraint-based drawing (à la QML), and a simple W7/E-Rights/PoLA capability model for security. The last would utilize engines from The Scheme Programming Language: an execution space is given a certain amount of energy, which lets the application run until it either needs information from the outside or runs out of energy, at which point the next application is scheduled.
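
A toy version of that engine scheduling, with applications modeled as Python generators that yield once per unit of work (the energy figure and the apps are made up):

from collections import deque

def run_engines(apps, energy=100):
    queue = deque(apps)
    while queue:
        app = queue.popleft()
        try:
            for _ in range(energy):
                next(app)       # spend one unit of energy on one step
        except StopIteration:
            continue            # application finished; don't reschedule
        queue.append(app)       # energy exhausted: back of the line

def app(steps):
    for _ in range(steps):
        yield                   # draw, touch the sandboxed fs, &c.

run_engines([app(250), app(50)], energy=100)

The appeal of this model is that scheduling and the security budget are one mechanism: an application simply cannot act without energy.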
