Parallel Processing in Urbit

The best idea we have so far for parallel processing comes from ~master-morzod. It goes like this:

You want to kick a long-running Nock computation (e.g. a call to the Hoon compiler) off the main event loop and run it in another thread, so it doesn't block normal Arvo event processing while it runs. To do this, have Arvo store a $trap (a thunk, i.e. an unevaluated Nock expression) representing the computation, give it a number, and emit an effect asking the runtime "please wake me up to run trap 37".
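To make the bookkeeping concrete, here is a minimal sketch in Python standing in for Hoon/Nock (the exact Arvo interfaces aren't specified here, so all names below, `ArvoModel`, `defer`, `scry_trap`, the `"run-trap"` tag, are illustrative, not real Urbit APIs): Arvo keeps a map from number to thunk, and deferring a computation means storing the thunk and emitting the effect.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Tuple

Thunk = Callable[[], Any]  # stands in for a $trap: an unevaluated computation


@dataclass
class ArvoModel:
    traps: Dict[int, Thunk] = field(default_factory=dict)
    next_id: int = 0
    effects: List[Tuple[str, int]] = field(default_factory=list)

    def defer(self, trap: Thunk) -> int:
        """Store the thunk, number it, and emit an effect asking the runtime
        to wake us up later to run it."""
        tid = self.next_id
        self.next_id += 1
        self.traps[tid] = trap
        self.effects.append(("run-trap", tid))  # "please wake me up to run trap <tid>"
        return tid

    def scry_trap(self, tid: int) -> Thunk:
        """Read-only lookup the runtime uses to fetch the stored thunk."""
        return self.traps[tid]
```

A caller would then do something like `arvo.defer(lambda: compile(source))` and get back a trap number such as 37.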

A naive runtime, upon seeing this effect, just enqueues an Arvo event saying "please run trap 37". This defers execution of the trap until later, but when the event is processed the trap still runs on, and blocks, the main loop.
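In the same illustrative model as above, the naive behavior is nothing more than:

```python
from collections import deque
from typing import Tuple


def naive_handle_effect(event_queue: deque, effect: Tuple[str, int]) -> None:
    """Naive runtime: turn the effect straight into an ordinary Arvo event.
    The thunk is still evaluated on the single event-loop thread when Arvo
    processes this event, so the main loop blocks for the whole computation."""
    kind, tid = effect
    if kind == "run-trap":
        event_queue.append(("run-trap", tid))
```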

A smart runtime doesn't immediately enqueue an Arvo event. Instead, it scrys into Arvo for trap 37 and obtains the trap. It then runs the trap in another thread (note that this requires adding basic multithreading support to the Nock interpreter). Once the computation completes, the runtime caches the result, keyed by the trap itself (or perhaps its hash). Only then does it inject the Arvo event saying "please run trap 37". Arvo dutifully "runs" the trap, but because the result is already in the runtime's cache, the interpreter looks it up instead of re-running the Nock, so the event completes almost instantly. In the same event, Arvo delivers the result to the module (vane or application code) that asked for the trap to be run, and the rest of the event proceeds normally.
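Here is the same flow for the smart runtime, again as a rough Python sketch built on the `ArvoModel` above (a real implementation would live in the C runtime and would presumably key the cache on a hash of the Nock noun; Python's `hash()` of the thunk object is only a stand-in):

```python
import threading
from typing import Any, Dict, Tuple


class SmartRuntime:
    """Illustrative only: scry the trap out of Arvo, run it off the main loop,
    cache the result, and only then inject the 'run trap N' event."""

    def __init__(self, arvo: ArvoModel):
        self.arvo = arvo
        self.cache: Dict[int, Any] = {}   # hash of trap -> result
        self.event_queue: list = []       # events waiting for the main loop
        self.lock = threading.Lock()

    def handle_effect(self, effect: Tuple[str, int]) -> None:
        kind, tid = effect
        if kind != "run-trap":
            return
        trap = self.arvo.scry_trap(tid)   # scry into Arvo for trap <tid>

        def worker() -> None:
            result = trap()               # heavy computation runs in this worker thread
            with self.lock:
                self.cache[hash(trap)] = result
            # Only now is Arvo asked to "run" the trap.
            self.event_queue.append(("run-trap", tid))

        threading.Thread(target=worker).start()

    def run_event(self, event: Tuple[str, int]) -> Any:
        """When Arvo 'runs' the trap, the interpreter checks the cache first,
        so the event completes almost instantly on a cache hit."""
        _, tid = event
        trap = self.arvo.scry_trap(tid)
        with self.lock:
            if hash(trap) in self.cache:
                return self.cache[hash(trap)]
        return trap()                     # cache miss: fall back to recomputing
```

Presumably part of the point of keying the cache on the trap (or its hash) rather than on its number is that the cache stays a pure lookup table invisible to Arvo: the same computation deferred twice would hit the same entry, and a runtime without the cache still produces the same result, just more slowly.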

This approach solves several problems that other approaches have:

  • it works fine on a naive runtime that doesn't have parallelism
  • no computation results, which can be very large, are written to the event log
  • there are no difficult type-level problems, such as ingesting results from the runtime and trying to type-check them
  • replaying the event log is not more complex (for performance, replay could also spin up multiple threads, but it doesn't have to for correctness)