This document expands upon the architecture in three ways:
- To explain the core/plugin architecture
- To expand upon the tasks of core
- To expand upon the goals of separation
In the purest form of a SeoulOS instance, both core and plugins run on the same computer. In this case, communication between modules is quick, efficient, and done over a `sos-sock`, which is described below.
In distributed SeoulOS instances, though, the benefits of the core/plugin architecture become more evident:
- Core contains logic for running kernel tasks, processes, memory allocation, and `sos-sock`s. Thus, Core concerns itself with execution
- Plugins provide support for things like filesystems, networking, hardware, and other such kernely goodness
- Plugins, generally, open a `sos-sock` to core, which is an in-memory construct (see below)
- When running on other instances, with separate memory, this is bridged and runs over an established network
- This allows things like disks, network adapters, and userland to run far away from executors and allocators. Transparently.
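The bridging idea above can be sketched with a transport trait: locally, a `sos-sock` is just a queue in shared memory, while a distributed instance would swap in a network-backed implementation behind the same interface. Everything here (`SosTransport`, `InMemoryTransport`) is illustrative, not the real SeoulOS API:

```rust
// Illustrative only: the same interface can be backed by an in-memory
// channel (local instance) or a network bridge (distributed instance).
use std::collections::VecDeque;

/// Transport-agnostic message pipe between core and a plugin.
trait SosTransport {
    fn send(&mut self, msg: Vec<u8>);
    fn recv(&mut self) -> Option<Vec<u8>>;
}

/// Local case: both ends share memory, so this is just a queue.
/// A distributed case would implement the same trait over a network link.
struct InMemoryTransport {
    queue: VecDeque<Vec<u8>>,
}

impl SosTransport for InMemoryTransport {
    fn send(&mut self, msg: Vec<u8>) {
        self.queue.push_back(msg);
    }
    fn recv(&mut self) -> Option<Vec<u8>> {
        self.queue.pop_front()
    }
}

fn main() {
    let mut sock = InMemoryTransport { queue: VecDeque::new() };
    sock.send(b"interrupt 0x21".to_vec());
    // Core (or a remote bridge) drains the queue identically either way.
    assert_eq!(sock.recv(), Some(b"interrupt 0x21".to_vec()));
}
```

Because plugins only see the trait, moving a plugin to another machine changes the backing transport, not the plugin code.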
This, though, is not the main strength of this architecture; distributed systems (or dumb terminals) aren't new, or even that useful in the vast majority of workloads. Self-contained servers don't need this. Home computers don't need this. And given we don't have a strategy for distributed executors/allocators (yet), HPC can't use this.
The main strength is in how core and plugins communicate.
Note: in the following section, the term interrupt is kinda naughtily used to include syscalls too. There's a difference between the two in kernel land, but we can comfortably abstract them away from plugin developers in SeoulOS land.
The `sos-sock` works similarly to various POSIX sockets. It is opened with the `open(DEST, FLAGS)` syscall (with the `sos-sock` bit set).
A plugin opens a `sos-sock`, whereupon the kernel checks a couple of things:
- Is the process opening the socket owned by root alone?
- Does the process have the `CORE_PLUGIN` bit set?
- Is the process verified by some system-trusted developer? (Or not... we need a strategy for this, but it probably looks like some ELF text value being a signed checksum of the program. Maybe adding new trusted certs requires a reboot, i.e. they live in immutable kernel memory.)
If these checks pass, core returns a file handle for bidirectional communication.
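A rough sketch of those three checks, with `Process` and its field names invented purely for illustration (the real kernel structures will differ):

```rust
// Hypothetical process descriptor; field names are assumptions.
struct Process {
    uid: u32,
    core_plugin_bit: bool,
    signed_checksum_ok: bool, // e.g. an ELF text value holding a signed checksum
}

fn may_open_sos_sock(p: &Process) -> bool {
    // 1. Owned by root alone?
    let is_root = p.uid == 0;
    // 2. CORE_PLUGIN bit set?
    let flagged = p.core_plugin_bit;
    // 3. Verified by a trusted developer? (strategy still TBD per the text)
    let trusted = p.signed_checksum_ok;
    is_root && flagged && trusted
}

fn main() {
    let plugin = Process { uid: 0, core_plugin_bit: true, signed_checksum_ok: true };
    let rogue = Process { uid: 1000, core_plugin_bit: true, signed_checksum_ok: false };
    assert!(may_open_sos_sock(&plugin));
    assert!(!may_open_sos_sock(&rogue));
}
```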
The plugin then sends up to two messages:
```rust
enum InitMessageType {
    Provides,
    Consumes,
}

struct InitMessage {
    // `type` is a keyword in Rust, hence the raw identifier
    r#type: InitMessageType,
    interrupts: Vec<usize>,
}
```
The `Provides` message is mainly used in device drivers; it provides a way of telling core "Hey, you might not know this interrupt exists, but it does, so make sure it doesn't double-fault". It's a more costly operation than the `Consumes` message (which can safely assume an interrupt handler is registered), and so it makes sense to split the two out.
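As a hypothetical example of the split: a NIC driver might announce a fresh interrupt with `Provides`, while an fs plugin subscribes to one core already routes with `Consumes`. The interrupt numbers below are made up; the types mirror the definition above (with `r#type`, since `type` is a Rust keyword):

```rust
enum InitMessageType {
    Provides,
    Consumes,
}

struct InitMessage {
    r#type: InitMessageType,
    interrupts: Vec<usize>,
}

fn main() {
    // A NIC driver tells core about an interrupt it will raise
    // that core might not otherwise know exists.
    let provides = InitMessage {
        r#type: InitMessageType::Provides,
        interrupts: vec![0x2B],
    };
    // An fs plugin consumes an interrupt core already handles,
    // so no handler registration cost is incurred.
    let consumes = InitMessage {
        r#type: InitMessageType::Consumes,
        interrupts: vec![0x80],
    };
    assert!(matches!(provides.r#type, InitMessageType::Provides));
    assert!(matches!(consumes.r#type, InitMessageType::Consumes));
}
```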
Once core has signalled that everything is bob on (it writes a `1` to the socket and triggers the plugin), the plugin sends a `Ready` signal (it writes a `2`, etc.), and the plugin is instantiated.
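The handshake can be sketched as byte writes on a shared buffer. The `1`/`2` values come from the text above; `Sock` and its methods are illustrative stand-ins for the real file handle:

```rust
// Toy stand-in for the sos-sock file handle: a FIFO byte buffer.
struct Sock {
    buf: Vec<u8>,
}

impl Sock {
    fn write(&mut self, b: u8) {
        self.buf.push(b);
    }
    fn read(&mut self) -> Option<u8> {
        if self.buf.is_empty() {
            None
        } else {
            Some(self.buf.remove(0))
        }
    }
}

fn main() {
    let mut sock = Sock { buf: Vec::new() };
    // Core signals that everything is in order.
    sock.write(1);
    // The plugin sees the 1 and replies Ready.
    assert_eq!(sock.read(), Some(1));
    sock.write(2);
    // Core sees the 2; the plugin is now instantiated.
    assert_eq!(sock.read(), Some(2));
}
```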
Interrupt handlers in core (with the exception of stuff like executing processes, doing memory things, or fault handlers) then look up the relevant plugin handlers and pass messages on.
Messages to plugins look pretty simple; messages are little more than in-memory representations of register values, so no magic encode/decode is necessary. This has two benefits:
- Core doesn't need to do any complex data mangling; it can hand off stuff nice and quickly
- Plugins can be written in any language - if it can read a message from a `sos-sock`, and it can process that message in some way, then you've got yourself a plugin, baby!
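A sketch of what such a message might look like: a plain `#[repr(C)]` struct of register-sized fields, so the bytes on the `sos-sock` are the struct itself, readable from any language that can lay out those bytes. Field names and sizes here are assumptions for illustration:

```rust
// Hypothetical message layout: just register-sized fields, no framing,
// no serialisation framework. Any language can parse this.
#[repr(C)]
#[derive(Debug, PartialEq, Clone, Copy)]
struct InterruptMessage {
    interrupt: u64, // which interrupt fired
    arg0: u64,      // e.g. first argument register at syscall time
    arg1: u64,      // e.g. second argument register
}

fn to_bytes(m: &InterruptMessage) -> [u8; 24] {
    let mut out = [0u8; 24];
    out[0..8].copy_from_slice(&m.interrupt.to_ne_bytes());
    out[8..16].copy_from_slice(&m.arg0.to_ne_bytes());
    out[16..24].copy_from_slice(&m.arg1.to_ne_bytes());
    out
}

fn from_bytes(b: &[u8; 24]) -> InterruptMessage {
    InterruptMessage {
        interrupt: u64::from_ne_bytes(b[0..8].try_into().unwrap()),
        arg0: u64::from_ne_bytes(b[8..16].try_into().unwrap()),
        arg1: u64::from_ne_bytes(b[16..24].try_into().unwrap()),
    }
}

fn main() {
    let msg = InterruptMessage { interrupt: 0x80, arg0: 1, arg1: 42 };
    // The bytes on the socket ARE the struct: round-tripping is a copy.
    assert_eq!(from_bytes(&to_bytes(&msg)), msg);
}
```

Core can hand these off with a straight memory copy, which is where the "nice and quickly" claim comes from.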
Core exists to provide the barest of bare minimum 'core' tasks a kernel needs. It:
- Configures a global descriptor table and interrupt handlers for the hardware it's on
- Initialises memory, including separation of kernel and userspace memory (although Rust ownership makes this a little less critical than in other kernels)
- Provides `exec` capabilities
- Has a dirt-simple embedded b-tree based filesystem (containing an initial `init` process which starts some initial plugins before handing off to the real init)
- Runs some self-tests (does `async`/`await` work the way we expect? Is `vga` memory writable?)
This is pretty much it. There's not much more work a kernel needs to do. From there, plugins can process events and everyone is happy.
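That boot sequence could be sketched as a short list of steps run in order. The function names here are assumptions, stubbed out for illustration, not real SeoulOS symbols:

```rust
// Stubbed boot steps matching core's minimal task list.
fn setup_gdt_and_interrupts() -> bool { true } // GDT + handlers for this hardware
fn init_memory() -> bool { true }              // kernel/userspace separation
fn mount_embedded_fs() -> bool { true }        // b-tree fs holding the initial init
fn run_self_tests() -> bool { true }           // async/await, vga memory, etc.

fn boot() -> Result<(), &'static str> {
    let steps: [(&str, fn() -> bool); 4] = [
        ("gdt", setup_gdt_and_interrupts),
        ("memory", init_memory),
        ("fs", mount_embedded_fs),
        ("self-tests", run_self_tests),
    ];
    for (name, step) in steps {
        if !step() {
            return Err(name);
        }
    }
    // From here, exec the initial init and hand off to plugins.
    Ok(())
}

fn main() {
    assert!(boot().is_ok());
}
```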
These are actually defined at build time; a user should have the power to decide which initial plugins run. However, the following are usually necessary:
- fs - filesystem operations
- clock - provide access to the hardware clock
- nic - provide access to network cards
Anything else can be loaded from the init task, really. Hell, even these plugins don't have to be run from core. Have a filesystem or clock implementation you'd prefer? Use that, mate - register the correct handlers and you're laughing.
We touched briefly on this in a few places, but the goals of such a separation are:
- Kernels are complicated, and hard to test. By minimising kernel code and moving things to userland we make unit testing easier (rather than testing whether a file is written to a disk in an fs, for instance, our kernel can test it passes the right message on, and the fs can test it forms the correct message to write to disk)
- Kernels generally need to be written in special ways, in specific languages, with all kinds of potential for bugs. By passing plugin logic to any language that suits the plugin best, we hope to avoid these issues. Especially memory safety issues.
- Having a huge kernel with lots of stuff turned on, plus optional module loading, makes things complex and potentially slow. A build process that produces a tailored core/plugin set per machine is better, and that's built into this project