@wicrum-wicrun
Last active February 10, 2024 07:32

The target audience is people who are familiar with Urbit's architecture, though not necessarily much of its code.

Plunder and Urbit

As some of you already know, i recently left my job as a core dev for the Urbit Foundation to work on a similar system called Plunder. Plunder was created in 2020 by two former Tlon employees, after their proposal for a new version of Nock was rejected. They have since reworked that significantly and built a reference implementation of their own system. You can follow its continued development on its mailing list.

I've known about Plunder for quite some time now, but their recently released demo -- in which the system is used to serve a 70 GB dataset, complete with metadata and search -- made me feel the need to explore it again and in greater detail. Doing this with my personal server doesn't feel like a big ask, but there is currently no item on the Urbit roadmap that would enable it.

What i found was that their system is incredibly well-designed, and it made me question several things that i took for granted or simply wasn't aware of in Urbit. My initial reaction was that even though Plunder was interesting and better in some ways, Urbit would win simply because it has more momentum and more people working on it, and can therefore become good enough. But after sitting with these questions for a while, i'm not so sure.

This goes both directions, by the way. I am simply not sure. But what i do know is that i've been blessed with a good brain and a better gut feeling, and as i grokked Plunder, Urbit gradually ceased to spark joy. So i'm really just doing what i personally need to, though i do also believe i have good reasons. This text explains those reasons, out of consideration for my friends in the community.

I realize that Urbit is far from finished, so i will only focus on fundamental issues here, rather than incidental ones. I don't care about current design flaws that could be fixed by half a year of directed effort. I'm concerned with issues that i believe cannot be addressed in Urbit without changing its fundamental architecture.

Similarities

Before going into the differences between the two systems, let's just make clear in which ways they are similar.

  • Both systems attempt to create a foundation for all of computing that sits on top of existing systems.
  • Both systems attempt to make it realistic for single individuals to operate their own networked software at all layers of the stack.
  • Both systems do so by way of a single frozen function which arrives at a state by processing an event log.
  • Both systems use jets (hardware-acceleratable operations) to make this a realistic foundation even as specialized hardware evolves.
  • Both systems are single-level stores, without the need for a separate storage abstraction.
  • Both systems are, at their core, completely homoiconic and support universal introspection. All code and data are always fully serializable.
  • Both systems allow code to construct code, so that upgrades can happen entirely internally.
  • Both systems are standardization projects which attempt to freeze software and protocols so that code and data continue to work, and software developers have a foundation they can rely on.

An overview of Plunder

With the similarities out of the way, let's have a look at the fundamental components that make up Plunder, and how they differ from similar components in Urbit:

  • Plan, a computational model. This corresponds to Nock, and can be thought of as bytecode. It is designed to be a good compile target for functional languages, and to map easily onto existing hardware.

  • Sire, an assembly language for Plan. In some ways, this corresponds to Hoon, but in other ways it doesn't. Like Hoon, it's used to specify jets. Unlike Hoon, it's used to write pills (bootstrap sequences), so that these are human-readable and can actually be trusted. Also unlike Hoon, it's not intended to be used as an application language. It is purely a bootstrapping language, intended to need only an extremely tiny state-machine interpreter that can extend itself into something capable of implementing a more complex toolchain, so that the dependencies of all other code are as small as possible.

    Note that Plunder doesn't ship with a high-level language. It is language agnostic, and as such any language implemented on it would simply be a demo rather than anything authoritative. There are plans to implement a Plan backend for Elm, but it doesn't exist yet. Since Elm already has a performant JS/HTML/CSS backend, adding a Plunder backend would let full-stack applications be written in a single language. Any compilers would generally target Plan, not Sire.

  • Cogs, a very bare-bones application model. In some ways, these correspond to Gall agents, but in other ways they correspond to Arvo. From the Plunder perspective, your urbit runs only a single monolithic application with many plugins. This monolith is called Arvo, the runtime is tightly coupled to that particular application, and all its plugins (Gall agents) are tightly coupled to Arvo.

    In contrast, Plunder is able to run many applications. Cogs are basically fully independent, communicate using public/private keys, and can only interact in basic ways via opt-in, frozen interfaces. A running cog can be moved between different ships, and with some easily surmountable caveats regarding addressing, this will be completely transparent to both that cog and others. If you think of cogs like processes in Erlang/Elixir, you're not very far off. Within a single cog, you can still have an OS like Arvo, with a more opinionated application model/plugin system with transactional interactions and tighter coupling. But you can also have standalone programs that don't run in any OS at all.

Holism vs modularity, or: fat standards vs slim standards

Perhaps you can already start to see the difference in design philosophy.

Urbit builds a single holistic system and attempts to solve all relevant problems correctly once and for all. It attempts to standardize not only the computational model (Nock), but also a high-level language within a novel paradigm (Hoon; subject-oriented programming), an operating system kernel (Arvo), the interfaces to its kernel modules (lull.hoon) and a standard library (zuse.hoon). It is extremely ambitious and inventive, and by necessity, also highly opinionated.

Plunder, by contrast, is much more modular and agnostic as to what the correct solution is, if one even exists. It really only attempts to standardize three things: a computational model (Plan), an assembly language for writing pills and jets in (Sire) and the interface that cogs use to interact with each other and the hardware. It prefers to rely on established and well-studied prior art instead of rolling its own solutions. It is quite unopinionated, very language agnostic, and can be thought of more as a platform-for-OSes rather than a VM+OS-bundle.

One might say that Urbit is more like Apple, while Plunder is more like Linux. But this is not entirely true, because it's still very possible to build a tightly controlled Apple-like ecosystem on Plunder. Rather, the difference is that Plunder favors thin and agnostic standards, while Urbit favors fat and opinionated standards. Plunder doesn't eschew holism, it simply recognizes that there are many different ways to do it, and we don't know which one is best. It is much more similar to Xen than it is to Linux.

Urbit is a ball of mud

In Urbit, many layers of the stack are tightly coupled to each other. Naturally and relatively uncontroversially, Arvo depends strongly on Hoon, and Hoon depends strongly on Nock. This is not very strange. Hoon is supposed to be a very minimal macro language with some added type checking on top of Nock, and Arvo is written in Hoon. But it also goes in the other direction: Nock is not a neutral foundation, but is in practice coupled to both Arvo and Hoon. We'll talk about the Nock-Arvo coupling now and return to Hoon later.

Formally speaking, neither Nock nor Vere/Ares are coupled to Arvo. The runtime just expects to be passed an "Arvo-shaped noun" (a core with +load, +peek, +poke and +wish arms). But it can only run one such application, and in practice, it's only realistic to run that particular one. The I/O drivers make many assumptions about how the kernel modules work, and those modules all make assumptions about how the kernel works, so users need to accept the whole package deal in order to be able to communicate with each other.

Personally, i believe that software should be as simple as possible; it should have few connected parts, with the keyword being "connected" rather than "few". It's not bad to have many components. Indeed, most of the time it's absolutely essential to break things up into many parts that can be understood separately. But this only works if those parts can be considered in isolation. If a system consists of many parts which all make assumptions about each other beyond their respective formal interfaces, what you're really dealing with isn't a multi-component system, it's a single messy component.

This means that by decoupling the computational model from the OS, Plunder improves not only modularity but also understandability, which is something that Urbit explicitly strives to achieve. Of course, reduced understandability through increased coupling could be worth it if this allowed you to build something better, but this doesn't really seem like it's the case. The interesting things that Arvo does are mostly dynamic code generation, universal introspection and virtualization -- features that Plunder also provides, while also being more general purpose.

The point is, of course, that Arvo should become so simple, neutral and obviously correct that it will become a standard that no one could get any conceivable advantage from deviating from -- hence the kelvin versioning. This is certainly a laudable goal and Arvo does make certain good choices and contains some promising innovations. But it doesn't seem anywhere near obviously correct, except perhaps that the code is correct assuming its own design choices.

If i'm being realistic, standardizing something as complex and opinionated as an OS, not to mention an application model, doesn't exactly seem achievable in our current stage of civilization. Rather it is something that has to evolve under competition, and providing a frozen persistent execution environment with a base-level of compatibility seems like the best way to facilitate this evolution.

Urbit is a centralized system

Since users that want to run new applications and communicate with their friends need to use recent versions of Arvo and its vanes, and since these are developed by the Urbit Foundation, Urbit is a centralized system. This has both political and technological implications. Political, because it means that the technology is susceptible to institutional capture. Technological, because it means that the ground can shift under your feet.

To some extent, these problems are intrinsic:

  1. Software must often change over time.
  2. Networked software must coordinate any changes to keep the network working.
  3. A central authority is required for this coordination.
  4. Central authorities, if they matter, will always be captured by the current power structure in society.
  5. Therefore, most networked software that matters will eventually be controlled by the current power structure.

An obvious strategy to mitigate this is to reduce "often" to "sometimes", and "most" to "some". We want to standardize as many interfaces and applications as possible, so that they don't have to change. This is the whole point of kelvin versioning, of freezing software.

Frozen software

Frozen software is powerful both politically and technologically. Politically, because the extent to which open protocols are socially agreed to be frozen is the extent to which they cannot be modified by the current power structure. The coordination cost is too high. Technologically, frozen software is powerful because of standardization. Once you've reached 0K, you have a standard that people can rely on, always and forever. We want a system where old code will always run on new implementations, and new code will always run on old implementations (though the latter may incur a performance hit). No more building on sand.

Freezing evolving applications is obviously not possible. Freezing the computational model is obviously possible, given jets. On these two points, Urbit and Plunder agree. Where they differ is the rest.

Urbit kelvin versions the following files:

  • hoon.hoon, the definition of the Hoon programming language.
  • arvo.hoon, the OS kernel, but excluding the vanes (kernel modules).
  • lull.hoon, primarily containing interfaces to the vanes.
  • zuse.hoon, a standard library used by the vanes.

Plunder does not care about any of the above. Programmers can choose to implement all of them, or none. Users can choose to run all of them, or none. Plunder does, however, care very strongly about freezing a set of core hardware interfaces for things such as sending or receiving a network message, setting a timer, or starting a new cog (application/OS). If you squint, it's kind of like freezing only lull.hoon and including that as part of the VM spec, instead of defining it inside the system.

Agnostic hardware interfaces

But don't squint too hard. The implications of shifting the responsibility to implement these interfaces from the OS to the VM are important. Laying the responsibility of implementing a network protocol on the OS means that any code that wants to communicate over the network needs to rely on that OS.

In Urbit, there is no neutral and agnostic way to send a raw noun over the network. There is only the ames.c driver, which evolves in tandem with ames.hoon, neither of which is kelvin versioned. The interface that Ames exposes (via lull.hoon) is kelvin versioned, but this says nothing about how Ames communicates with other Ames instances, nor about whether it arbitrarily censors or not. It also doesn't say anything about in which order and over which wires it communicates with the other vanes, meaning that these might have to change in tandem. None of this is intended to be frozen, but could very well change in perpetuity. If you don't like new code that's being pushed, too bad! Use it anyway, start maintaining a compatible fork, or get locked out.

And again, it's not only political. The main thing needed for the technological paradigm that Urbit is trying to establish to actually work is a set of frozen core hardware interfaces. These need to be completely neutral and agnostic to any OS or applications that happen to use them. Simply sending a message to another computer should not require you to run any particular code from any particular source inside of your ship. Otherwise you're back to the old situation where the ground can suddenly shift under you. This is not standardization in any practical sense.

The primary goal of the Plunder architecture is to make the system permanent, reliable and breachless, and to do so as soon as possible:

  • The message protocol (%port) is intended to be specified as a frozen hardware interface, so that stale ships aren't left behind by breaking changes to Ames.
  • The same goes for the file-sharing protocol (%book), so that the whole network stack needed to do software updates is nailed down. You'll never be left behind.
  • Sire, Plunder's assembly language, is included in the VM and is designed as a specification language for jets. This means that the jet nouns and hashes can be frozen and included in the specification, instead of changing whenever hoon.hoon does.
  • The serialization system is designed to make a naive jam-based snapshot format viable for use at scale. This means that a new runtime can always import the snapshots of a very old or incompatible one.

Frozen applications and liquid OSs

Ideally, we'd even like to freeze some core applications. In Urbit, this will only be possible when the interface to the application model in lull.hoon is frozen, and even then, agents have to rely on new versions of ames.hoon, ames.c and gall.hoon as we've already discussed. So in practice, Gall agents cannot ever be completely frozen because the ground can shift under them.

Since Plunder can run many applications instead of just a single plugin system, this is trivial. You can have simple and standardized applications such as email or chat, that run directly at the bottom layer of the stack. If you have working software and can get packets to your target, you can continue to use that without change, forever.

And then for tighter integration, most users will likely have another layer, where you explicitly sacrifice some sovereignty to a central authority so that they can coordinate software changes and provide a more unified user experience. You could even give that authority more control than you give the UF, because you can opt out.

Specifically, you could give them the ability to coordinate entire ecosystem changes, like in a Linux distro or Apple's App Store. All of your main applications could come from your distro, and the distro audits changes to applications. Applications could also be typechecked against each other to prevent breakages at the interfaces between them. Imagine knowing that two applications send correctly typed values over the correct wires, in the right order. The possibilities here are vast, and Arvo has barely scratched the surface of holistic design!

But most importantly, the OS is exactly the layer where you want innovation to happen. You want to be able to incorporate new primitives into your execution environment, instead of having to build balls of mud on top. You don't want to commit to one way of doing things forever. Freezing an OS makes no sense!

Intermission: Explaining Plan

Before going into some slightly more technical problems with Urbit, it's worth spending just a little bit of time talking about Plan, "Plunder's Nock". It's not necessary to remember this in detail since i will mention the relevant parts when we get into specific discussions, but it's good to have an intuition for the basic structure.

Syntax

Plan's formal specification is slightly more complicated than that of Nock. The data model of Nock is the noun (*), which is either a natural number (@, called an "atom") or a pair of nouns (denoted with square brackets [* *] and called a "cell"). It can be represented as follows:

* ::= @     # Atom
    | [* *] # Cell

The noun is famously simple; it competes with the list as the simplest possible recursive datatype. Plan's data model is slightly more complicated, since it extends the noun with two additional cases:

plan ::= <plan>      # Pin
       | {@ @ plan}  # Law
       | (plan plan) # App
       | @           # Nat

As you can see, atoms have been renamed to "nats" (as in "natural number"), and "apps" are simply cells with parens instead of square brackets. But why do we also have "pins" and "laws"? Why these extra cases? The answer is that this tiny bit of extra information makes performant runtimes much easier to build. Even better: some optimizations that were previously only possible with major caveats suddenly become trivial to fully guarantee in all cases.

We will see exactly how and why later, but for now the point is this: There will always be a discrepancy between the formal specification and the means by which this is implemented in practice. Nock all but ignores implementation simplicity, which makes this discrepancy very large. This forces runtime implementers to rely on convention in addition to the specification. Additionally, certain optimizations become literally impossible, while many others simply become so difficult that in practice everyone is forced to trust the work of a few experts. In contrast, Plan maximizes formal simplicity subject to the constraint that the result should be practical and easy to make performant in practice.
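
To fix the picture before we look at the semantics, here is a minimal Haskell sketch of the data model above (the type and constructor names are mine, not the reference implementation's):

import Numeric.Natural (Natural)

-- The four shapes from the grammar above, as a plain datatype.
data Plan
  = Pin Plan                  -- <p>     : globally deduplicated value
  | Law Natural Natural Plan  -- {n a b} : function with name n, arity a, body b
  | App Plan Plan             -- (f x)   : application, associating to the left
  | Nat Natural               -- @       : natural number
  deriving (Eq, Show)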

Semantics

Okay, so that's the syntax: how the data is shaped. But what does it mean? The plan here isn't to go through all the reduction rules in detail, but it's worthwhile to try to give an intuition for what these four syntactic constructs are used for.

  • Pins are hints. <i> tells the runtime that i can be globally deduplicated, possibly jetted, and stored in a contiguous region in memory and on disk.

  • Laws are functions. {n a b} is a function named n that takes a arguments and processes them using its body b. The name n isn't semantically important, but is included to facilitate jet matching and increase legibility.

  • Apps are applications. (f a) is an application of the function f to the argument a. The function f could evaluate to a law, but it could also evaluate to a pin or even a nat. There exist five axiomatic functions that are simply referred to by the nats 0 through 4. So for example, (3 10) is a perfectly legal app, which means "increment 10 by one" and so will result in 11.

    Structurally, apps are identical to cells in Urbit and can be decomposed in the same way. However, their semantics are very different and they associate to the left instead of to the right. That is, (a b c) := ((a b) c), while [a b c] := [a [b c]]. So (add 5 6) means ((add 5) 6), which will be further reduced to 11 since add is a law that takes two arguments. On the other hand, (add 5) cannot be further reduced but is simply a pair of a law and a nat that can be passed around as any normal value, until it is eventually applied to a second argument.

  • Nats serve several different functions, just as in Nock:

    • Nats can be data, representing numbers, atomic strings, boolean values, and so on.
    • When applied to arguments, the nats 0 through 4 can represent either instructions or axiomatic functions, depending on the context. Some of these correspond to Nock instructions, others don't.
    • In law bodies, nats can also be indices into the law's argument list, similarly to how Nock 0 indexes into the subject. In other words, these are effectively variable names. For example, the law {'head' 2 1} is a function named head that takes two arguments and returns the first, while {'tail' 2 2} is a function named tail that takes two arguments and returns the second. The 0 index is used to refer to the law itself, similarly to the $ arm in Hoon.

(Depending on your previous experience, it may help to think of Plan as essentially a serialization format for the lambda calculus. That description does ignore a few essential details, but it's a very good start. But don't worry if that doesn't mean anything to you.)

It's worth noting that even though pins and laws are internally represented using specialized data structures (to guarantee certain invariants), both of them can be converted to and from apps. This means that the user interface to Plan's data model is exactly as simple as the noun. In particular, the two axiomatic functions 0 and 4 mean "create a law" and "create a pin", respectively. So (0 'head' 2 1) is the same as {'head' 2 1} and (4 420) is <420>. The only thing the programmer ever has to write and deconstruct is apps.
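
To make that last point concrete, here is a rough sketch in terms of the Plan datatype from earlier, recognizing fully applied 0 and 4 (sub-terms are assumed to be already reduced; the real rules are richer than this):

-- (0 n a b) builds the law {n a b}; (4 x) builds the pin <x>.
construct :: Plan -> Plan
construct (App (App (App (Nat 0) (Nat n)) (Nat a)) b) = Law n a b
construct (App (Nat 4) x)                             = Pin x
construct p                                           = p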

The above is obviously not a complete introduction to Plan, but it should be enough to understand the following sections. So with that out of the way, let's have a closer look at two interrelated problems: performance issues, and language agnosticism.

Urbit is speed and data limited

As i mentioned in the introduction, Plunder can already work with significant amounts of data. Urbit, on the other hand, still relies on the user setting up an S3 bucket to realistically be able to share pictures, not to mention videos. Urbit does have plans to mitigate this issue, through what's called off-loom storage. Simply put, the plan is to allow some nouns to be stored outside of the loom. As of early June 2023 this is still not on the roadmap, but it should be perfectly doable in principle. The bigger issue, though, is computational speed. If you have lots of data, you need to be able to traverse it quickly.

Let's just get one thing out of the way before we dig into this section. A common retort is that the performance of Nock hasn't been a problem in practice. This is totally missing the point. The point of having faster computers isn't to have faster interactions, but to be able to do more things. In practice, slow performance is an application engineering problem. If you're fine with slow foundations, port all your apps to Electron. We can certainly work around Nock's performance limitations by only building certain things and always building them in very particular ways. Do we want to?

Nock 11 and jets as side effects

Just as a reminder, Nock 11 looks like this:

*[a 11 [b c] d]      *[[*[a c] *[a d]] 0 3]
*[a 11 b d]          *[a d]

(I've changed c to d in the second reduction rule so that the jetted value has the same name in both rules.)

When evaluated, Nock 11 is discarded and you get back the jetted value d (possibly after running a dynamic check on c). Point being, whatever d is, you first have to execute [a 11 b d] in order for it to magically transform from a regular noun into a jet. Formally speaking, Nock 11 has no observable effect (except possibly a crash), but it does assign magical interpreter state to d; Nock 11 has a side-effect. This means that jets need to be carefully constructed using the Nock 11 ritual, lest they remain normal nouns.

This works well for code that's pulled from the standard library, since we have full control over how that's constructed -- it's compiled from Hoon source code. It works less well for anything that's sent over the network, because if you serialize and deserialize some jetted value d using +jam and +cue, Nock's evaluation rules will discard the Nock 11 before passing d to +jam. This means that after calling +cue, the receiver will just have a regular noun, with no idea whether or how to jet it. In the case of calling a gate, you can simply compare the hash before running it. But in the case of a data structure, the hash will be different depending on the values therein, and so the hash strategy doesn't work.

This matters because some data structures are impossible to implement natively in both Nock and Plan, and need to be jetted. If you've spent any time around Urbit devs, you might have heard this referred to as "data jets". Outside of pure numeric calculation, data structure access is what programs primarily do, which means that it has to be fast to scale.

As an example, Hoon's +map data structure provides logarithmic time search, insertion and deletion. It is, however, impossible to natively implement a hash map, which would offer the same operations in amortized constant time. You could make +map a data jet so that it's specialized to a hash map by the runtime, but because of Nock 11, this would be undone as soon as you sent it over the network.

As another example, the only reason Plunder's 70 GB picture demo is performant is that the metadata is stored in a B+ tree, in which you need to use arrays rather than linked lists to perform efficient binary searches and fast splitting. Hoon could likely solve this by implementing arrays as atoms, but this raises two questions. First, how would you store (pointers to) arbitrarily sized nouns, and not just fixed-size atoms? Second, does this really seem desirable? It's a really clever solution, but should you really have to be that clever to do something so basic? You would also lose all type information about the elements, and the data would be nigh-impossible to deconstruct without using the library functions.

Pins, laws, and jets as patterns

In Plunder, jets are implemented by pattern matching instead of as a side-effect. Anything that has the right shape to be a jet is always jetted. This includes both data and code -- even partially applied functions. This would be very expensive to do in Urbit, since everything is just a noun. Checking whether every noun you encounter is of a shape that corresponds to a jet would be extremely wasteful. Pins and laws make this cheap to do.

Code jets

Code jets are implemented as pinned laws. Remember that a pin is a single value i wrapped in a magic box <i>, as a hint that i is supposed to be globally deduplicated. Recall also that laws are functions {n a b}. So then a pinned law <{n a b}> is a globally deduplicated function; it is not a transient function like an anonymous lambda, but rather something that is expected to be kept around to be called again and again. Not all pinned laws are jetted, but a pinned law is the only kind of function that ever makes sense to jet.

So the first time the runtime encounters a pinned law l, it needs to do two things. First, it needs to compute h=hash(l) and store h->l in a lookup table, so that future encounters of l can be efficiently deduplicated. Second, it needs to check h against the available jets, and if applicable, tag l with one bit of metadata saying that it should be jetted. Then, when any future copies of l are encountered, the runtime will again compute h and replace the new copy with a pointer to the old one. Since l was already marked as jetted, this means that jetting the new copy is already done and didn't result in any performance hit at all.
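
As a rough illustration of that bookkeeping, here is a sketch using the Plan datatype from earlier; the hash function, table layout and jet type are stand-ins of mine, not Plunder's actual runtime code:

import           Data.IORef (IORef, readIORef, modifyIORef')
import qualified Data.Map.Strict as M

type Hash     = Integer
data Jet      = Jet String                              -- stand-in for a native implementation
data PinEntry = PinEntry { pinBody :: Plan, pinJet :: Maybe Jet }

hashPlan :: Plan -> Hash     -- toy stand-in; a real runtime would use a cryptographic hash
hashPlan = fromIntegral . length . show

knownJets :: M.Map Hash Jet  -- the frozen jet table shipped with the runtime
knownJets = M.empty

-- First encounter: hash the law, record it, and tag it if the hash matches a jet.
-- Later encounters: the lookup succeeds and the shared, already-tagged entry is reused.
internPin :: IORef (M.Map Hash PinEntry) -> Plan -> IO PinEntry
internPin table body = do
  let h = hashPlan body
  tbl <- readIORef table
  case M.lookup h tbl of
    Just entry -> pure entry
    Nothing    -> do
      let entry = PinEntry body (M.lookup h knownJets)
      modifyIORef' table (M.insert h entry)
      pure entry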

Note that this means that you can send jetted code over the network. Remember the function add. I claimed that it's a law, but it's actually a pinned law, and it's jetted. If there's a pin anywhere in a message you receive over the network, your runtime will immediately hash it, and check the hash against its available pins for possible deduplication. Since jets in Plunder are frozen, your runtime will find that the add pin does in fact exist, and replace the value you received with a pointer to your own already-jetted add. This also works if the value you received was a partially applied function, such as (add 5).

(You should maybe be a little bit careful to not allow laws you've received over the network to send arbitrary network messages to others. But as long as you are, the only thing they can really do is waste compute and storage; they're pure functions!)

Data jets

Sending partially applied jetted functions over the network is certainly cool, but the real performance gain comes from data jets. Interestingly, they are implemented in a very similar way.

Let's take the example of an array of plan values, or a "row" as it's called. A row of length 3 is implemented as the law {0 4 0}, i.e. a law with the name 0 that takes four arguments and returns itself. What? How is this an array? Well, let's say that you wanted to represent the array [3 4 5]. You could store this as the following app:

({0 4 0} 5 4 3)

Because the law takes four arguments but has been applied to only three, this does not reduce. It is simply a pair that can be passed around, so all the values in the array are available. It also behaves similarly to a list, because the first element (3) is furthest out in the nesting (remember that apps associate to the left). To get the first element, simply call cdr. To get the second, call car and then cdr, and so on.

Still though, what is that law? It's actually nothing in particular, it's just a pattern that is cheap to match on and won't ever be matched by userspace code. The name 0 is null in both ASCII and UTF-8, and so in most languages it will be an illegal name for everything except perhaps anonymous functions. The body is 0, which means that the function immediately returns itself. An anonymous function that only returns itself is objectively useless, so we know that we'll never encounter this law in any other context. Still though, we need to check whether every new law we encounter matches this pattern. Fortunately, this only requires exactly three operations: (name==0 && body==0).
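
For illustration, recognizing a row could look roughly like this in terms of the earlier Plan datatype (simplified; the real matcher works on the runtime's internal representation):

-- ({0 4 0} 5 4 3) ==> Just [Nat 3, Nat 4, Nat 5], i.e. the row [3 4 5].
matchRow :: Plan -> Maybe [Plan]
matchRow = go []
  where
    -- Peel arguments off the application spine; the outermost argument (the
    -- row's first element) is peeled first and ends up last in args.
    go args (App f x) = go (x : args) f
    -- The pattern: an anonymous (name 0), self-returning (body 0) law applied
    -- to one argument fewer than its arity, so it can never reduce.
    go args (Law 0 arity (Nat 0))
      | fromIntegral arity == length args + 1 = Just (reverse args)  -- restore element order
    go _ _ = Nothing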

Nock is not neutral

Finally, let's circle back to a claim that i made in the beginning: Nock is dependent on Hoon. This is because in practice, running functional languages other than Hoon on Nock in a performant way is very hard. There are at least three reasons for this: Nock 0, jet matching and the semantics of Hoon.

Nock 0 and CPU caches

As a reminder, Nock 0 is basically just an alias for the axiomatic function / (pronounced "slot"):

[a 0 b]               /[b a]

/[1 a]                a
/[2 a b]              a
/[3 a b]              b
/[(a + a) b]          /[2 /[a b]]
/[(a + a + 1) b]      /[3 /[a b]]
/a                    /a

Given a subject a and an axis b, Nock 0 returns the noun present at that axis in the subject. If there is none, it loops forever (i.e. crashes). So in the general case where nothing about a is known, the runtime has to follow O(log b) pointers to reach the desired value.
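
Transcribed directly into Haskell, the pointer chase looks roughly like this (the Noun type and the use of Maybe to model a crash are mine):

import Numeric.Natural (Natural)

data Noun = Atom Natural | Cell Noun Noun

-- One recursive step per bit of the axis: O(log b) pointer dereferences,
-- each of which is potentially a cache miss on real hardware.
slot :: Natural -> Noun -> Maybe Noun
slot 1 n          = Just n
slot 2 (Cell a _) = Just a
slot 3 (Cell _ b) = Just b
slot b n
  | b >= 4        = slot (b `div` 2) n >>= slot (2 + b `mod` 2)
  | otherwise     = Nothing   -- axis 0, or descending into an atom: crash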

The only reason modern computers are fast is that memory controllers optimize heavily for reading contiguous memory. The further you stray from that (or from a few recognized patterns), the harder performance falls off a cliff. In other words, Nock 0's tree addressing fights the way modern computers work, causing many L2 cache misses.

The plan to work around this is to make Nock compiled rather than interpreted, and:

  1. Perform subject knowledge analysis to predict where different parts of the subject will be stored.
  2. Make heavy use of the fact that Hoon generates the static Nock 9 rather than the dynamic Nock 0+2 for all its function calls, to predict which parts of the subject you need to access.

It is true that these work very well when the shape of the subject and the axis are statically known. But if Nock is to be truly general, the real question is this: I hand you a noun and an axis. How do you dereference the axis in one (or maybe two) indirect jumps? The answer is that you don’t, unless the entire noun is structured in a way that is known, not just to the compiler, but to the runtime. This may be the case for anything produced by Hoon, but Nock in itself is very flexible. Both the axis and the subject could be dynamically constructed at runtime using Nock 2, making subject knowledge analysis completely useless.

It's certainly possible to optimize the Nock runtime a lot. You could perform heavy static analysis to remove any subject oriented patterns, since every subject manipulation is an allocation which adds an indirection to everything else. For example, it would be possible to replace Nock 8 (=+ and =/ in Hoon) with a register, but that requires that the subject isn't used in any unconventional ways. This would likely improve things a lot, but not in general, only in certain patterns we can recognize.

At the end of the day, optimizing different systems involves very different complexity/performance trade-offs. There are also massively different de-facto performance ceilings. Not all models of computation are equal.

Constant time dereferencing

So how does Plunder improve on this? Simple: by making dereferencing ("Nock 0") a natively constant time operation, instead of one that runs in logarithmic time by default.

It is perhaps worthwhile to take a step back here. Nock was introduced as "Maxwell's equations of software". This is a reference to Alan Kay who, while studying the LISP 1.5 Programmer's Manual, realized that

[The] half page of code on the bottom of page 13 [...] was Lisp in itself. These were 'Maxwell's Equations of Software!' This is the whole world of programming in a few lines that I can put my hand over.

It is true that in theory, you can build all possible programs using just eval and apply from LISP 1.5. But in practice, the first case of eval would cause significant problems: assoc runs in linear time, meaning that name dereferencing gets slower the more names you have in your environment.

Nock's biggest innovation might be that it reduced this to logarithmic time. By treating names as a UX affordance rather than something that should be semantically important at the bottom layer, it could replace the association list with a tree ("the subject"). All name dereferencing has to be compiled to Nock 0, which runs in logarithmic time. As such, Nock is the first axiomatic computing system to be even remotely practical to use as a foundation for all of computing.

But logarithmic time is still logarithmic time, especially so when it comes to something as fundamental as name resolution, an operation that is performed many many times for any program you run. And this construction still fights the way modern hardware works.

Plan does away with the environment entirely. If you need a piece of data to be accessible to a piece of code, simply inline it or pass it as an argument. Recall that custom functions are defined using laws {n a b}, where a is the number of arguments. This means that the number of arguments is known for all functions, and the argument list can be stored in a contiguous memory block during execution. This means that dereferencing runs in constant time and works with the hardware instead of against it.

Inlining code and data or passing them as arguments may sound expensive or annoying. But "inlining" really only means "inline a pointer", and the extra arguments will be added by the compiler, not the programmer. This is something that can be done by any compiler; it makes no assumptions about the programming language except that it can be lambda lifted.
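
As a reminder of what lambda lifting does (a textbook example, nothing Plunder-specific):

-- Before lifting: `go` captures `step` from its enclosing scope, so its
-- full set of inputs is only known via the environment.
sumWith :: (Int -> Int) -> [Int] -> Int
sumWith step = go
  where
    go []     = 0
    go (x:xs) = step x + go xs

-- After lifting: `step` is an explicit argument of a top-level function with a
-- statically known arity -- exactly the shape that a Plan law wants.
liftedGo :: (Int -> Int) -> [Int] -> Int
liftedGo _    []     = 0
liftedGo step (x:xs) = step x + liftedGo step xs

sumWith' :: (Int -> Int) -> [Int] -> Int
sumWith' = liftedGo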

Nock is not a compile target

Chasing pointers to traverse trees may be bad, but the real killer for the "Nock is a compile target" narrative is that code compiled to it by anything other than Hoon won't have jets, making it not just annoyingly slow but impractically slow. Since jet matching is done on an entire Hoon core, any compiler that wishes to target Nock has to generate a very large noun exactly the same way that the current version of the Hoon compiler does. For example, the noun that you have to generate to match the +sub jet is 2kB jammed, since it includes the whole first section of hoon.hoon in its context:

> (met 3 (jam sub))
2.069

Of course, every compiler that wants to target Nock could simply hard code all of the nouns required for jet matching, and be forced to release a new version whenever one of these changes. This is annoying and shows that Nock is dependent on Hoon, but it's perfectly doable.

But the real answer here, and the one that you typically get if you ask people involved in Urbit how other languages should be implemented, is: compile to Hoon instead of Nock, or interpret the language in Hoon. Your third-party compiler/interpreter may still have to change whenever Hoon does, but not necessarily in all cases.

Hoon is also not a compile target

So this seems like a viable solution, though it does put three interrelated issues into focus:

  1. Is Nock still the foundation of the system, rather than just an implementation detail of the actual foundation that is Hoon?
  2. On one hand, Hoon is a fairly rich and opinionated language, and as such not an obvious choice for a compile target,
  3. But on the other hand, Hoon is a fairly minimal and primitive language, and as such not an obvious choice for implementing an entire operating system and applications.

I will only dig deeper into the second point here. But as a brief reflection, it seems to me like Hoon attempts to straddle two or even three distinct roles, resulting in it not fulfilling either one of them very well.

Hoon is a statically typed language, so any code you want to generate in it will have to pass its type checker (or bypass it using Nock 2, incurring performance hits). In general, type systems aren't a solved problem. Different languages have their own unique type systems that don't necessarily translate well to other type systems. Because of this, using a typed language as a general purpose compile target is something to be wary of, especially when it's very opinionated and non-standard. Hoon's type system is fairly unique and very conservative: it rejects many programs that would be valid in other languages. This does not seem like a good compile target.

Despite this, you could probably support a decent subset of, say, Standard ML or Purescript, but the curried representation of functions is not going to perform well. For example, consider the following Purescript:

foo x y z = ...

which would correspond to the Hoon:

++  foo
  |=  x=*
  |=  y=*
  |=  z=*
  ...

In most modern functional languages, this is an extremely common pattern: lots of small functions, with each one taking many arguments. These are also often partially applied and passed as arguments to other functions. Since Hoon gates have a lot of overhead in Nock (point-free Hoon is explicitly discouraged), this would not perform well.

Nock-Hoon is also unfriendly to laziness. You can't implement thunks using traps because they don't guarantee idempotence: kicking the same trap multiple times would result in the computation being performed over and over again. Of course, this can be handled by a cache (using ~+), but this only lasts for a single Arvo event. There's a UIP to implement persistent caching, but this is not what you want either: once all references to a thunk are gone, it should be deleted, not persisted.
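
The property a thunk needs is sharing: compute at most once, then reuse. A trivial Haskell illustration of the behavior traps lack:

-- `heavy` is a thunk: it is evaluated at most once, and both uses below share
-- the single cached result. Kicking a Hoon trap twice recomputes it twice.
sharedTwice :: Int
sharedTwice = heavy + heavy
  where
    heavy = sum [1 .. 1000000]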

Standard ML and Purescript are both strict languages, but they do use lazy evaluation for e.g. streams, and Haskell is primarily lazy. These languages aren't necessarily the right ones to implement something like Arvo in, but a Nock-like system that cannot support them is pretty stunted.

Hoon is neither easier nor simpler

A common argument is that none of this matters since traditional functional programming is hard and abstract, so we don't need to support it. Both Nock and Hoon are designed to be similar to physical machines, something that humans are used to thinking about. Compared to monad transformers, dependent types or Haskell's -XDataKinds extension, Hoon's type system is dead simple and significantly less abstract. Haskell's type checker is just under 14k LOC, the same size as the entire Hoon compiler!

But if you instead compare Hoon to Elm, the difference is much less stark. The entire Elm type checker is implemented in just over 3k LOC, and the type system is generally considered very intuitive and easy to understand, while also being more powerful than Hoon's. It's difficult to compare the type checker's code size to Hoon's, since the entire Hoon compiler is in a single 14k LOC file, but the full Elm compiler is roughly 26k LOC -- comfortably in the same order of magnitude. Elm is not only more powerful but also significantly more featureful, with user-friendly error messages, a code pretty-printer and so on.

Elm is also orders of magnitude easier to learn and work with than C++ or Rust, but far more people learn C++ and Rust. All the evidence points to the same conclusion: people are willing to learn things if there is a big pay-off, and unwilling to learn things if there is not. The difficulty and novelty of a language aren't really significant factors in adoption. Hoon has explicitly banked on the same observation.

It is certainly true that Hoon feels more like a machine than many functional languages, but from having worked with it a fair bit, i'm unconvinced that this makes it easier to write error-free programs. My error rate is roughly the same as in other typed functional languages. I also find it more difficult to tell when a Hoon program is correct, because the language encourages heavy use of mandatory implicit state (the subject). Generally speaking, it seems to me like encouraging thinking of computer programs as physical machines is a false affordance, something that Urbit explicitly tries to avoid in other parts of the system.

Hoon does offer a few unique features that aren't found in normal functional languages. Both the typed metaprogramming story and negligible-overhead virtualization are really cool, but these features are basically pointless in most userspace code. Requiring that users learn an esoteric language because it has advantages they don't need doesn't seem like the best design choice. And if you need a high-level language with these features for an Arvo-like OS on Plunder: build it! Hoon supplies the blueprint and Plunder provides the tools.

Porting Urbit to Plunder

Which brings us to my last point. You might wonder, "Oh but can't Urbit just use Plunder as its foundation?". The answer is yes, with major caveats. Implementing Nock on Plan would likely be inefficient, Hoon is entirely defined by its macro expansion into Nock, and the rest of the system runs on Hoon. Additionally, many of Arvo's vanes assume things from the runtime which aren't necessarily true in Plunder.

Still, there's a path:

  • Arvo
    1. Identify a programming language that seems appropriate for Arvo and implement that on Plunder. Or more interestingly, implement a Nockless dialect of Hoon.
    2. Identify the good pieces of Arvo whose functions aren't fulfilled by Plunder's hardware, and port these to run as a cog in Plunder. AI could likely be useful here, though i'm sure that it would need a handful of human overseers that already understand the system well.
    3. These overseers should also continue to improve Arvo after the porting is done, since it still has lots of interesting and promising future work.
  • Gall agents should be carefully considered whether they should remain as Arvo plugins, or become independent cogs. Once that strategy has been decided upon, AI can again likely help with the tedium of porting.
  • Azimuth should be implemented as a library that cogs can opt in to using if they want, though you'd have to very carefully make sure that you interpret the L2 state in exactly the same way as naive.hoon does.

Conclusion

Urbit was initially introduced as a thought experiment, in which Martian civilization had been writing software for 50 million years. It was presumed that several times during those years, they had taken to rewriting the stack from scratch, to attempt an escape from the big ball of mud. Most of these attempts failed, but the last one succeeded. Martian software today is neither a big ball of mud, nor tends to become one.

Urbit is Earth's first attempt at this and i have immense respect for Curtis for initiating that. But what's your bet that he got it right immediately? Do we really have to be bound by what he received from aliens during an acid trip?

Having looked at Plunder, i believe that it is more right than Urbit. Whether it will lead to an escape from the ball of mud, i don't know. Whether it will become more popular than Urbit, i don't know. But i do believe that it's better positioned to achieve Urbit's stated technical goals.

All this being said, the first step is in many ways the most important, and getting this right on the first attempt seems impossible. Plunder would likely not exist if it weren't for Urbit, and some people may still have things to learn from the Urbit-incarnation of the experiment. The culture and Schelling Point that Urbit provides is also extremely valuable. I've made many friends through Urbit, and learned a lot.

I might also be wrong in my assessment. There are certainly people who are both smarter and more qualified than me who still believe in Urbit. But i don't, so i can't stay. I don't do sunk cost fallacy. But this is fundamentally an aesthetic judgment on my part.

So if you're also excited about Plunder, join us at your own risk. Plunder has no momentum, no culture and no money to offer anyone at the moment, including me. I am not getting paid and no one is recruiting. There is no legal entity, just a few nerds. It's really only attractive to those who want to build something that lasts, with not much else going for it. But i'm going to help build the thing that i believe can actually work.

@darighost

darighost commented Aug 15, 2023

The comparison to Linux is apt. People tell the story that Linux was this revolutionary new thing (omg, OPEN SOURCE OS??? WHAT) but there were actually a handful of contenders for the throne. E.g.:

  • Minix: leading to the famous Tanenbaum–Torvalds debate.
  • BSD: should have won, but was embroiled in a legal dispute. By the time the dispute ended, Linux had already won.
  • GNU: if RMS had prioritized Hurd... alas.


In two cases, the technically superior microkernel loses because it's vaporware. In any case, I hope this puts a fire under us Urbit fans to make more cool stuff. Then everyone wins.
