Druid design thoughts

These are snippets from a bunch of conversations / monologues I've had about GUI design on the Druid zulip.

https://xi.zulipchat.com/#narrow/stream/147926-druid/topic/Thoughts.20about.20.22active.22

Olivier FAURE: I've never been satisfied with the active status in Druid (and Masonry), but every time I tried to explain why, I couldn't find the words. Since I've recently rewritten small bits of docs relating to active, I've had occasion to put my thoughts in order.

Currently, active is a flag which:

Can be set by widget code, but never by framework code.
Forces mouse events to be routed to the active widget, even when the cursor is out.
Can be read by widget code to change how the widget is rendered (or laid out).

Aaaaand... that's it. Any other information it might represent (eg that it means "the user is in the middle of a mouse press on this widget") is purely down to widget implementation.

Olivier FAURE: Ideally, I would like "active" to be better integrated in the framework:

The framework should guarantee that there is only one active widget at a time (per pointer).
The framework should force the widget to lose active state when some events occur; not just MouseLeave, but also Tab being pressed, the window losing focus, the widget being disabled, etc.
There should be an equivalent to the browser's setPointerCapture, so that when a widget is active and the cursor leaves it, not only does the widget still get pointer events, but other widgets do not (or at least they receive them in a way that lets them know the original widget still "owns" the pointer). A rough sketch follows below.
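A minimal sketch of what such a per-pointer capture mechanism could look like, assuming a hypothetical framework-side PointerCapture type (none of these names exist in Druid today):

```rust
// Hypothetical framework-side state; not part of Druid's current API.
type WidgetId = u64; // stand-in for Druid's real WidgetId type

struct PointerCapture {
    /// Which widget currently owns the pointer, if any (one per pointer).
    captured_by: Option<WidgetId>,
}

impl PointerCapture {
    /// Called by widget code, eg on mouse-down, to become "active".
    fn capture(&mut self, widget: WidgetId) {
        self.captured_by = Some(widget);
    }

    /// Called by the framework when MouseLeave-while-inactive, Tab,
    /// the window losing focus, the widget being disabled, etc. happen.
    fn release(&mut self) {
        self.captured_by = None;
    }

    /// Routing: while a widget owns the pointer, pointer events go only
    /// to that widget, even if the cursor is over another widget.
    fn route_target(&self, hovered: WidgetId) -> WidgetId {
        self.captured_by.unwrap_or(hovered)
    }
}
```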

https://xi.zulipchat.com/#narrow/stream/317477-masonry/topic/Figuring.20out.20the.20architecture.20of.20the.20Command.20system

Olivier FAURE: As I mentioned in #checkins, I'm currently stalled on figuring out how I want my messaging system to work.

Olivier FAURE: In short, the problem I'm trying to solve is I want elements of the widget tree to be able to send messages to other widgets or to the platform in a strongly-typed way. Some of these messages should be broadcast, some of them should have a single target, some of them should expect a response; again, these parameters should be expressed in a strongly-typed way.

Olivier FAURE: (Interestingly, this is also a problem I've faced in my current work at Artifex on WebAssembly and WebWorkers. Workers let you send messages, but their API predates promises, so they don't have an easy interface to tell you "the function triggered by this message returned this result", so we had to cobble that together ourselves.)

Olivier FAURE: Right now Druid mostly handles messaging with the Command type. (And also Notifications, but they work the same way.) Commands are a type-erased message type that stores its payload in an Arc. They also use a clever scheme with a Selector type to hide the type-erasure from the user. Because Selectors are statically typed, and each Command must be associated with the Selector used to create it to get its payload back, the call to downcast is hidden away from the user and expected to always succeed.
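A simplified sketch of that scheme (not Druid's actual code; it assumes the payload is stored as an Arc&lt;dyn Any&gt; and that the selector key is a plain string):

```rust
use std::any::Any;
use std::marker::PhantomData;
use std::sync::Arc;

/// A statically-typed identifier; the `T` records the payload type.
struct Selector<T>(&'static str, PhantomData<T>);

impl<T> Selector<T> {
    const fn new(key: &'static str) -> Self {
        Selector(key, PhantomData)
    }
}

/// A type-erased message: the payload is stored behind `Arc<dyn Any>`.
struct Command {
    key: &'static str,
    payload: Arc<dyn Any>,
}

impl Command {
    fn new<T: 'static>(selector: &Selector<T>, payload: T) -> Self {
        Command { key: selector.0, payload: Arc::new(payload) }
    }

    /// Because a Command can only be built from a Selector<T> with a T
    /// payload, this downcast is expected to always succeed when the
    /// keys match.
    fn get<T: 'static>(&self, selector: &Selector<T>) -> Option<&T> {
        if self.key == selector.0 {
            self.payload.downcast_ref::<T>()
        } else {
            None
        }
    }
}

// Example selector, declared once and reused for sending and receiving.
const OPEN_FILE: Selector<String> = Selector::new("open-file");
```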

Olivier FAURE: Commands and Selectors in Druid are used as a catch-all for messages that don't fit Druid's pass architecture. Right now they're used:

Olivier FAURE:
To pass data between custom widgets in cases where that data can't be conveniently expressed with the Data parameter [Widget -> Widget].
To notify a parent widget of something that is semantically related to UI input, mostly from textboxes. [Notification, Widget -> Widget]
To mutate the state of the current window, eg close it, resize it. [Widget -> Window].
To broadcast a message to all widgets of a window. [Widget -> Window]
To request opening a file dialog. [Widget -> App]
To send the result of the dialog to the requesting widget. [App -> Widget]
To do any of those things from a background thread. [ExtEventHost -> *]

Olivier FAURE: These use cases are pretty heterogeneous, and the type of data each of them needs isn't quite the same, which leads to a bunch of ugly compromises:

Olivier FAURE:
Most Commands are passed through the event pass, which only gives a shared reference to them. So Commands that need to deliver an owned payload pass a SingleUse<T>, which is a newtype for Mutex<Option<T>> (sketched after this list).
Commands can be targeted with Global, Window(WindowId) or Widget(WidgetId) (there's also an Auto variant which is swapped for one of the other values before reaching platform code). Commands targeting a Window can be either an event broadcast to every widget in the window, or a built-in special event (eg CLOSE_WINDOW, SHOW_SAVE_PANEL), depending on the selector.
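The SingleUse<T> idea from the first bullet boils down to something like this (a simplified sketch, not Druid's exact implementation):

```rust
use std::sync::Mutex;

/// A wrapper that lets an owned payload be extracted through a shared
/// reference, at most once.
struct SingleUse<T>(Mutex<Option<T>>);

impl<T> SingleUse<T> {
    fn new(value: T) -> Self {
        SingleUse(Mutex::new(Some(value)))
    }

    /// Takes the payload out; returns None if it was already taken.
    fn take(&self) -> Option<T> {
        self.0.lock().unwrap().take()
    }
}
```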

Olivier FAURE: The code that handles commands is littered with special cases, with panics and warnings that address cases that should be impossible in principle. This feels like something that should be handled by the type system instead. Not just for convenience, but because having type-system integration makes your systems more modular. Instead of having to think how your new corner case interacts with every existing corner case, you fit your data into a pre-determined shape, and you're confident that existing code knows how to handle that shape.

Olivier FAURE: So. To summarize what I'd want:

I want a way to express that Widgets, Windows and the system can send messages to each other. (A rough sketch of one possible shape follows this list.)
Some messages should be broadcast, others should be single-target. It should be possible to take() the payload of a single-target message.
Some messages (probably only single-target messages) should also expect a response. That response will by nature be asynchronous, which means that sending the message should return a promise/future for the response data.
Since we're going to the trouble of representing promises in our type system, we should go all the way and make them cancellable. For some promises, cancelling would be ignored. For others (eg a web request), it would save some non-trivial amount of work.
That means that if we plug an actual executor (eg tokio) into our framework, that executor's futures could be translated into our bespoke promises.
We should be able to send messages to and from different types of targets: Widgets, Windows, background threads.
All of this should ideally rely on serializable non-lifetime-ish data, eg indices instead of pointers, to make the system easier to reason about, log, test, hot-reload, etc.
Speaking of testing, we should have a way for unit tests to mock commands going through the system layer. Eg, if we're writing an input widget that makes an "Open File" dialog when clicked, we should be able to test that widget while mocking the SHOW_OPEN_PANEL event.
All of this should still preserve type erasure. When we're writing a FlexList container, we don't want it to have to be generic on the type of messages the different child widgets can send.
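To make that wishlist concrete, here is one rough shape such an API could take. Everything in it (Target, Message, PromiseToken, MessageCtx) is hypothetical, not existing Druid or Masonry code:

```rust
use std::marker::PhantomData;

// Hypothetical identifiers for the different kinds of targets,
// using indices rather than pointers.
#[derive(Clone, Copy)]
enum Target {
    Widget(u64),
    Window(u64),
    Background(u64),
}

/// A strongly-typed message declares its payload and response types.
trait Message: 'static {
    type Payload: Send + 'static;
    type Response: Send + 'static;
}

/// Handle to a pending response; cancellable.
struct PromiseToken<R> {
    id: u64,
    _marker: PhantomData<R>,
}

impl<R> PromiseToken<R> {
    /// For some promises cancellation is a no-op; for others (eg a web
    /// request) it saves real work.
    fn cancel(self) { /* tell the switchboard to drop the pending work */ }
}

/// What a context type could expose (sketch only).
trait MessageCtx {
    /// Fire-and-forget, single target; the receiver can take() the payload.
    fn send<M: Message>(&mut self, target: Target, payload: M::Payload);

    /// Broadcast to every widget in a window.
    fn broadcast<M: Message>(&mut self, window: Target, payload: M::Payload);

    /// Single target, expects an asynchronous response.
    fn request<M: Message>(&mut self, target: Target, payload: M::Payload)
        -> PromiseToken<M::Response>;
}
```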

Olivier FAURE: I've already implemented a stub of a promise system taking inspiration from Commands (in that it uses type-erased payloads with statically-typed ids to hide downcasting). It works well enough for one very specific use-case (getting data from background computations).

Olivier FAURE: But now that I'm trying to adapt it for handling dialogs, I'm realizing the implementation isn't general enough.

Olivier FAURE: Right now the different message-passing systems are kind of bespoke, and don't really go through a single unifying "switchboard". In other words, this is missing:

We should be able to send messages to and from different types of targets: Widgets, Windows, background threads.

Olivier FAURE: On the other hand, if I want to really do something generic... Well, the design footprint gets massive. I still don't know how to do the "It should be possible to take() the payload of a single-target message" part.

Olivier FAURE: I think the one thing I need to focus on if I want big immediate wins is to add a PromiseTarget type and a root-level method to dispatch a promise-result. Refactoring the entire command system can come later (the first step will probably be to differentiate between targeted commands and broadcasts).

https://xi.zulipchat.com/#narrow/stream/147926-druid/topic/Brainstorming.20how.20to.20do.20Flex.20better

Olivier FAURE: I'm currently bumping into problems with layout and flex, and I've had the thought that we should have a thought-out, deliberate user story for implementing layouts.

Right now, my own approach is essentially trial-and-error at the compilation level, and I get the impression it's the canonical approach, by default. You write your code, try to wrap your head around how Flex is supposed to work, compile, and cross your fingers that when compilation is done, what's displayed matches what you wanted.

Olivier FAURE: My problems with the status quo:

Compilation of a druid program is fairly long. It's long because it's rust, and because druid is generics-heavy. This means the feedback cycle is longer than it should be.
Layout problems are non-local. This is especially a problem for me, since I'm mixing a lot of custom code with druid widgets (and druid widgets keep evolving), but I've had this problem even on pure druid toy projects. Because of how layout is computed, there is very little isolation of where problems come from. In an ideal workflow, if you make a one-line error, you should get either an error message or some sort of visual indication that points you towards the line you need to fix. Currently, the visual feedback is unhelpful, and the error messages are obscure.

Olivier FAURE: This is an area where I don't think incremental improvements quite cut it. The problem exists not because Raph or Colin are bad at API design, but because making a strong user story requires a lot of coordination and preliminary work, and we haven't put in that work yet. Some things I think we need:

We need some way to test layout changes without recompiling the entire project. Some projects like SixtyFPS do hot-reloading, but that approach might require a lot of work, and I'd like something that can be implemented fast. I'm thinking something like being able to parse a JSON config file in real time, and update layout parameters of arbitrary widgets based on the file's content. (So kind of a hot-reloaded version of the Flex example; see the sketch after this list.)
We need better ways to visualize layout algorithms in real time. Right now we have debug_paint_layout and that's it. A lot of the values involved are implicit: if you have a label, you know that the label's text is "Hello world" because it says so right there; if you have a flex widget, it's not immediately visible what the flex value is ("infinite" or whatever), because it's only used with other values to calculate placements, sometimes in subtle ways. And sometimes different values produce exactly the same result!
We need a better way to notify users of layout problems. Right now the approach is "every user event, spam the same warning complaining about invalid values in the console". This floods the console with messages, but the messages themselves aren't that helpful: they don't indicate which widget has the problem, and they're worded as complaints about values in the source code, when for someone reading the warning it's really not obvious where those values come from or what the fix would be.
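A minimal sketch of the JSON-driven idea from the first bullet: poll a config file and re-apply layout parameters when it changes. The file name and format are made up for illustration, the snippet assumes the serde_json crate, and in a real setup the parsed values would be forwarded to the running app (eg via a command) instead of printed:

```rust
use std::{fs, thread, time::Duration};

fn main() {
    // Hypothetical config file, eg {"flex_spacing": 8.0, "padding": 4.0}.
    let path = "layout_params.json";
    let mut last_contents = String::new();

    loop {
        // Poor man's hot reload: poll the file and re-apply params on change.
        if let Ok(contents) = fs::read_to_string(path) {
            if contents != last_contents {
                match serde_json::from_str::<serde_json::Value>(&contents) {
                    Ok(params) => {
                        // Here the values would be pushed to the widgets
                        // whose layout params they describe.
                        println!("reloading layout params: {params}");
                    }
                    Err(err) => println!("invalid layout config: {err}"),
                }
                last_contents = contents;
            }
        }
        thread::sleep(Duration::from_millis(200));
    }
}
```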

https://xi.zulipchat.com/#narrow/stream/147926-druid/topic/Thoughts.20about.20druid's.20event.20methods

Olivier FAURE: To write down / expand on what I said in the meeting:

Right now Druid has some pretty complicated methods for traversing the widget tree with lots of special cases (though Colin is right that, by GUI framework standards, we're pretty tame). The root of the problem is that, while we're trying to share code for traversing the widget tree, recursing or not, propagating changes upward, etc, between different events, a lot of events actually have slightly different semantics.

Some events only target a single widget (eg commands, timers, focus changes)
Some events are broadcast and target all widgets (eg commands again, window connected, widget added)
Some events target a specific widget and all its parents (mouse and keyboard events). Because mouse events have an associated position, that position must be changed when passed to a child widget to account for its origin.
Some events are more complicated and apply an update to the entire tree, but skip the update if it's redundant (DisabledChanged, BuildFocusChain).

Olivier FAURE: Also, non-lifecycle events are associated with a "handled" flag, which is used for some platform stuff, and also to short-circuit certain events.

Olivier FAURE: An interesting pattern is that the "event targets a specific widget and its parents" behaviour plus the "is_handled" flag is pretty similar to how the browser handles events: https://javascript.info/bubbling-and-capturing

Olivier FAURE: Another interesting point is that the type implementing the Widget trait gets to choose whether or not to pass on those events, but in virtually every case we don't want it to skip recursing.

Olivier FAURE: (gotta go, more on this later)

Olivier FAURE: (am back)

Olivier FAURE:

the type implementing the Widget trait gets to choose whether or not to pass on those events, but in virtually every case we don't want it to skip recursing.

What I mean by that is that the fact that parent widgets need to pass events and lifecycles in their trait impl methods is mostly an architectural accident. In many cases, the lifecycle implementation can be `for child in self.children { child.lifecycle(stuff); }` and the container widget can't do anything meaningful other than pass down the event.

Olivier FAURE: And in some cases, passing down the event is outright superfluous and will be ignored by the child WidgetPod (eg Notification, HotChanged, FocusChanged, etc).

Olivier FAURE:

What I mean by that is that the fact that parent widgets need to pass events and lifecycles in their trait impl methods is mostly an architectural accident.

By "architectural accident" I mean "this is because of how rust works and how druid is structured". Because each widget owns its children, the framework can't simply iterate over the widgets of the tree (or at least, not easily). So if you want any operation to occur recursively on the widget tree, you need the widget impl of each container to iterate over their children.

Olivier FAURE: So my ideal API would be something like this:

event() for events that target a specific widget and its parents; EventCtx has an is_handled field. Parent widgets must recurse unless they deliberately block the event.
command() for dynamically-typed events that target a specific widget. CommandCtx has is_handled. Doesn't recurse. (Maybe includes Notification?)
lifecycle() for both events that target a single widget and events that target many widgets. Selecting which widgets are targeted is done exclusively by the framework. LifecycleCtx doesn't have is_handled, and parents can't recurse.
get_children_mut(), to get mutable access to all the widget's children. Somehow.

It's not that far away from what I've done in masonry, but the part about iterating on children is missing, so the event code is still pretty complicated.
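A rough trait-level sketch of the API described above; all names and signatures are hypothetical (the real druid/masonry Widget trait looks different), and the placeholder types exist only so the sketch stands alone:

```rust
// Placeholder types so the sketch stands alone; these do not match
// druid's real event or context types.
struct Event;
struct Command;
struct Lifecycle;
struct EventCtx { is_handled: bool }
struct CommandCtx { is_handled: bool }
struct LifecycleCtx;
struct ChildRef<'a>(&'a mut dyn IdealWidget);

trait IdealWidget {
    /// Targets a specific widget and its parents; parents must recurse
    /// unless they deliberately block the event.
    fn event(&mut self, ctx: &mut EventCtx, event: &Event);

    /// Dynamically-typed, single-target; does not recurse.
    fn command(&mut self, ctx: &mut CommandCtx, cmd: &Command);

    /// Target selection is done exclusively by the framework;
    /// no is_handled, and parents can't recurse.
    fn lifecycle(&mut self, ctx: &mut LifecycleCtx, event: &Lifecycle);

    /// Mutable access to all children, so the framework can recurse
    /// without going through each container's trait impl.
    fn get_children_mut(&mut self) -> Vec<ChildRef<'_>>;
}
```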

https://xi.zulipchat.com/#narrow/stream/197474-checkins/topic/Olivier.20Faure/near/289888815

Notes about hot state:

Hot state (the thing that changes when your mouse hovers over a button) is annoying to implement, because it breaks the convenient abstraction of multiple static passes over the widget tree.

Ideally, what you'd want is "first handle events, then update widget states, then compute layout, then paint", where each 'then' is an indestructible wall that can only be crossed in one direction.

Hot state breaks that abstraction, because a change in a widget's layout (eg a button gets bigger) can lead to a change in hot state.

To give an extreme example: suppose you have a button which becomes very small when you hover over it (and forget all the reasons this would be terrible UX). How should its hot state be handled? When the mouse moves over the button, the hot state will get changed, and the button will become smaller. But becoming smaller makes it so the mouse no longer hovers over the button, so the hot state will get changed again.

Ideally, this is a UX trap I'd like to warn against; in any case, the fact that it's possible shows we have to account for cases where layout has an influence on previous stages.

In actual druid code, that means:

Widget::lifecycle can be called within Widget::layout.
Widget::set_position can call Widget::lifecycle and thus needs to be passed context types, which gives the method a surprising prototype.

We could have set_position set a hot_state_needs_update flag, but then we'd need to add in another UpdateHotState pass (probably as a variant to the Lifecycle enum).
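Sketched out, that flag-based alternative could look something like this (names hypothetical):

```rust
// Hypothetical sketch: set_position only sets a flag, and a dedicated
// pass (a new Lifecycle variant) does the actual hot-state update later.
enum Lifecycle {
    // ... existing variants ...
    /// Re-checks which widgets the pointer is currently over.
    UpdateHotState,
}

struct WidgetState {
    hot: bool,
    /// Set by set_position() instead of calling lifecycle() from inside layout.
    hot_state_needs_update: bool,
}

impl WidgetState {
    /// Called by set_position(); no context types needed anymore.
    fn mark_hot_state_stale(&mut self) {
        self.hot_state_needs_update = true;
    }
}
```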

Another problem is that hot state handling is counter-intuitive for someone writing a Widget implementation. Developers who want to implement "This widget turns red when the mouse is over it" will usually assume they should use the MouseMove event or something similar; when what they actually need is a Lifecycle variant.

Other things hot state is missing:

A concept of "cursor moved to inner widget" (though I think that's not super useful outside the browser).
Multiple-pointer handling.

https://xi.zulipchat.com/#narrow/stream/147926-druid/topic/The.20layout.20algorithm.20is.20frustrating

Olivier FAURE: I need to write down my frustrations with the layout architecture at some point.

I'm trying to port some druid widgets to masonry, and the layout methods feel fiddly and brittle. There are a lot of subtleties and small code snippets where it's not completely clear when reading/testing the code how changing a bit will impact the final layout.

Olivier FAURE: (I don't know how much my problems are with Druid's implementation of the Flutter architecture, vs the Flutter architecture itself; I've never used Flutter, so I can't tell)

Olivier FAURE: Trying to put some issues into words:

Some widgets will return box_constraints.max() as their size, even though the Scroll widget will give infinite max constraints to its children.
Flex::row(child1, child2) and Split::columns().with_flex_child(child1).with_flex_child(child2) render differently, even though they conceptually do the same thing.
Buttons will tend to occupy all the available space, even though having a button take half your screen makes no sense visually.
Changing some Flex params sometimes does nothing, because some Flex params (or having Flex children) will "paper over" the effects of other params. This is not immediately obvious if you don't have a model of what these params do.

Olivier FAURE:
Calling widget.expand() wraps that widget in a SizedBox with a fixed size of f64::INFINITY; this feels super unintuitive when reading the code.

Olivier FAURE: Another example of layout being counter-intuitive:

Adding .center() to your widget will change not only its position, but also its size. Eg in my_widget.center().expand(), my_widget doesn't have the same size as in my_widget.expand().

Olivier FAURE:
Adding .expand() to a flex container does nothing, because .expand() works by passing bigger minimum-size constraints to its child, and Flex calls .loosen() on all its children.

Olivier FAURE: I think we could make headway by identifying a "taxonomy" of widget types. I've had the gut feeling for a while that widgets fit into categories, and the way two widgets calculate their layout will be very different or very similar depending on whether they're in the same category.

Olivier FAURE: (Eg a Split widget will mostly just pass constraints from the parent, whereas a Text or an Image with an aspect ratio will have an unconstrained parameter that will change based on the constrained param, and a Painter will take all the available space and paint a pattern in it)

Olivier FAURE: And some types can never be composed with some other types (eg a Scroll widget with a widget expanding to available space); that feels like a meaningful property that should be surfaced.

Olivier FAURE:

I think another example of this is: fixed-size widgets shouldn't be flex children.

Yes! Exactly.

Olivier FAURE:

maybe the main thing that could be improved is a better way of ensuring that those compositions make sense

The way I think about it is this: the layout algorithm is a protocol for widgets to coordinate with each other.

Olivier FAURE: You want that protocol to let widgets communicate what they need and don't need (eg saying "I'm not going to take more space than this", which is what compute_max_intrinsic does), and also to be expressive enough that widgets don't rely on "accidental" features of the protocol

Olivier FAURE: In particular, I don't like that some widgets pass a "large" minimum size to their child to "force" them to have this size. I feel this is a source of some of the accidental complexity I'm seeing.

https://xi.zulipchat.com/#narrow/stream/147926-druid/topic/.22hot.22.20in.20the.20presence.20of.20multiple.20pointers

Olivier FAURE: I think the operative question is "what does 'hot' mean in terms of UI interactions?".

For me, hot should maybe mean "The thing that will be selected if the user applies the 'select' verb", but it's not a perfect abstraction.

Olivier FAURE: Thinking about it in more detail... I think "hot" is a druid-specific term, but if we imagine it as a more general version of "mouse hover", then "hot" means different things in existing GUIs depending on the platform:

On single-pointer desktop, it means "hovered with the mouse".
On mobiles, it means "is being long-pressed".
On smart TVs, it means "is being selected either with the pointer or with arrows".
On console games, it means "is being selected with arrows".

Olivier FAURE: The different expectations on different platforms mean it's hard to have a single set of coherent rules. For instance, using the arrows has a different effect on a console / smart TV than on a desktop PC (although they're probably considered different keys? EDIT - No, they're not, or at least keyboard_types doesn't seem to distinguish them).

Olivier FAURE: Also, a lot of this is currently handled by Widget implementations, but I don't think it should be their responsibility? Because if it is, people will implement the basic cases by default, but won't implement niche keybindings like eg the Home and End keys, PageUp and PageDown, etc. I think ideally, druid should let widgets signal "I am semantically defined as a button" (kind of like register_for_focus() currently does) and let druid internals handle hot-switching; that way interactions with the various platforms are handled automatically; it would probably help with accessibility support too.

Olivier FAURE: Also, there's some overlap between "hot" and "focus", but not total overlap, because things can trivially be hot and not focused.


https://xi.zulipchat.com/#narrow/stream/147926-druid/topic/PointerEvent

Olivier FAURE: Some open-ended questions I have regarding pointer events, and events in general:

What are the different categories of events we want to handle, and in what "layer" do we want to handle them? For instance, if we consider mouse events, there are several interpretations of the same event, at different semantic "levels":
    OS reported mouse move (possible layer: druid-shell).
    Mouse pointer entered widget (possible layer: Widget trait impl).
    Widget is hovered, on_hovered callback is called (possible layer: Widget builder).
What are the different state modifiers of a widget? Right now I think we have "hot", "active", "focused" and "disabled". How do these modifiers relate to each other, and what are their invariants? Which part of the code is responsible for maintaining these invariants?

Olivier FAURE: Also, somewhat related: how do we handle "pointer enter" events that aren't strictly due to a mouse move? For instance, if a layout shift changes which widget is under the mouse, or if the user scrolls without changing the pointer position; technically there might not be a "pointer move" event, but in practice we need to update which widget is hot, etc.

Olivier FAURE: I'm not sure exactly how to put those questions: I'm not wondering how to do this; I'm wondering what code should be responsible for this. It's a different kind of minutia.

https://xi.zulipchat.com/#narrow/stream/317477-masonry/topic/About.20pass.20context

Olivier FAURE: Thinking out loud about some design stuff:

Olivier FAURE: Right now the big thing I'm working on is the WidgetMut type, and overall how passes are handled in masonry. Right now masonry inherits from druid, and has roughly the same passes:

events
lifecycle
layout
paint

Olivier FAURE: The event pass is mostly about handling user input and IO, lifecycle is a catch-all for operations that can read and write lots of widget state, layout is for calculating widget sizes and positions (and other layout stuff), and paint is for actually displaying pixels onscreen.

Olivier FAURE: Ideally we'd want a straightforward progression through passes: events trigger lifecycle stuff, both trigger recomputing of the layout, all of which lead to repainting the widget tree.

Olivier FAURE: In practice, it's not so simple. Layout changes can eg change whether the mouse is hovering over an item, causing some lifecycle stuff to happen. And every pass, including layout and paint, takes a &mut MyWidget and can trigger arbitrary state changes in said widget, which in principle could lead to layout changes.

Olivier FAURE: For instance, maybe a widget's layout method stores data in an internal cache so that following calls to layout go faster.

Olivier FAURE: Pragmatically speaking, most of these cases where eg the paint method causes data changes that cause layout changes would be anti-patterns. You don't want to design your Widget so that it might or might not be resized depending on how the framework handles repainting.

Olivier FAURE: So the question is: in actual practice, what should happen when a layout-time or a paint-time method changes data that should affect layout? How does the rubber meet the road?

Olivier FAURE: AFAICT, the answer in druid right now is: nothing. There is no way for the layout() method to call request_layout(), so if any layout-affecting data is changed, the framework won't know until another change triggers a layout reflow.

Olivier FAURE: This is because every pass takes a different FoobarCtx context object. Eg event has EventCtx, layout has LayoutCtx, etc. Except right now, I'm trying to build a WidgetMut abstraction, which is a reference to a widget paired with a single Ctx reference. Ideally, WidgetMut would appear to the user as a single "rich pointer" type; I don't want to have to create a WidgetMutEvent, a WidgetMutLifecycle, etc.

Olivier FAURE: Some of the context methods are very pass specific; eg paint stuff, layout stuff. But events and lifecycle mostly use the same things?

Olivier FAURE: I don't know, these abstractions get really messy and hard to reason about.

Olivier FAURE: When I look at the FooCtx implementations, it feels like most methods in EventCtx could be put in LifecycleCtx too; some methods in LifecycleCtx are specific to one lifecycle analysis (eg register_for_focus, register_for_text_input) and it feels like those should be refactored away anyway.

Olivier FAURE: Some methods are in every Ctx except PaintCtx, which feels somewhat bespoke and arbitrary.

Olivier FAURE: (because, again, paint() is allowed to do arbitrary transformations to the widget anyway)

Olivier FAURE: And some getter methods (layout, is_hot) are in every Ctx except LayoutCtx, which I guess is to avoid having layout depend on layout that hasn't been computed yet, but... again, it feels kind of bespoke?

Olivier FAURE: I feel like there's some way to structure this and integrate it into the type system better, but I'm not sure what it is.

Olivier FAURE: Thinking about what type-level passes would entail... maybe something like ECS? Where different passes have access to different components, in a way expressed in the type system? Then designing a new pass wouldn't require adding a branch to eg the Lifecycle enum; it would be more like adding a trait impl with associated types for the pass inputs and outputs?
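One possible shape for that idea, sketched with associated types; this is purely hypothetical, not masonry code:

```rust
/// Hypothetical: each pass declares what it reads, what it writes,
/// and what it produces, instead of being a branch in a big enum.
trait Pass {
    /// Components of widget state this pass may read.
    type Reads;
    /// Components of widget state this pass may mutate.
    type Writes;
    /// Per-widget output of the pass.
    type Output;

    fn run(&mut self, reads: &Self::Reads, writes: &mut Self::Writes) -> Self::Output;
}

// Example: a layout-like pass that reads constraints and writes results.
struct LayoutConstraints;
struct LayoutResults;
struct Size(f64, f64);

struct LayoutPass;

impl Pass for LayoutPass {
    type Reads = LayoutConstraints;
    type Writes = LayoutResults;
    type Output = Size;

    fn run(&mut self, _reads: &LayoutConstraints, _writes: &mut LayoutResults) -> Size {
        Size(0.0, 0.0)
    }
}
```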

Olivier FAURE: Still seems convoluted.

Olivier FAURE: Also, it's not like passes are straightforward functions with data in and data out.

https://xi.zulipchat.com/#narrow/stream/147926-druid/topic/Brainstorming.20a.20safer.20abstraction.20for.20layout

Olivier FAURE: Something that came to mind while I was looking at druid's layout code: part of the difficulty in understanding / maintaining that code is that it does a lot of unconstrained number-crunching. By "unconstrained", I mean that it does a lot of math operations (x + width, y / row_count, max(this.width, other_thing.width), that kind of stuff) where the logical connections between these operations are implicit, and you can only check whether they actually respect any invariants by visualizing the math in your head.

So I'm thinking there's probably a better abstraction, that respects invariants by design.

Olivier FAURE: The abstraction I'm thinking of goes like this: say you have a layout() method where the min and max constraints are guaranteed to be the same, both those from your parents and those you pass to your children. In this case, your constraints are just a rectangle, and your task is to cut that rectangle into sub-rectangles to pass to your children.

Olivier FAURE: In some cases, those sub-rectangles may overlap (eg in a Z-stack widget), but in the default case you can assume that they won't. So you can imagine the layout algorithm in those common cases as one that is given a rectangular cake, and has to figure out a bunch of cuts to make smaller cakes, some of which are the children, some of which are thrown away (padding space or whatever).

Olivier FAURE: Given that abstraction, we can imagine a struct Cake(Size, Pos);, for lack of a better name (yes, this is just a rect). Some possible methods:

```rust
impl Cake {
    /// Cut a cake into two. Self-explanatory.
    fn split(self, axis: Axis, position: f64) -> (Cake, Cake);

    /// Tries to join two Cakes. Only succeeds if the two cakes are disjoint
    /// and have a common edge.
    fn try_join(cake1: Cake, cake2: Self) -> Option<Cake>;

    /// Equivalent to doing two splits and keeping only the middle part.
    fn cropped(self, axis: Axis, before: f64, after: f64) -> Cake;

    /// Cut the cake into n equal parts.
    fn share(self, count: u32) -> impl Iterator<Item = Cake>;

    /// Returns a mapping of the size of each child divided by the size of
    /// the parent.
    ///
    /// The listed children must all be disjoint and fit in the parent cake.
    fn proportions<const N: usize>(self, children: [&Cake; N]) -> [f64; N];
}
```

Olivier FAURE: The split() method could panic if given a position that doesn't fit it. Or we could have a try_split method. Either way, it would force the widget writer to explicitly consider the case where there isn't enough room to draw the thing.

Olivier FAURE: These methods would be kinda like array bounds checks: they would add some overhead that the compiler would ideally remove (although removing float comparisons is harder than with ints). They might be disabled in release builds.

Olivier FAURE: Also, Cake would not implement Copy. Since most methods would take self by value, we'd get a type-level abstraction of the concept of "real estate": if you use space for one sub-widget, you can't use it for another sub-widget (unless you .clone() the cake).
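A self-contained mini version of the idea, just to show the move-semantics "real estate" property (it ignores position and most of the API sketched above):

```rust
// Mini version of the Cake idea: cutting consumes the cake, so the same
// space can't be handed out to two children by accident.
#[derive(Clone, Debug)]
struct Cake {
    width: f64,
    height: f64,
}

enum Axis { Horizontal, Vertical }

impl Cake {
    fn split(self, axis: Axis, position: f64) -> (Cake, Cake) {
        match axis {
            Axis::Horizontal => (
                Cake { width: position, height: self.height },
                Cake { width: self.width - position, height: self.height },
            ),
            Axis::Vertical => (
                Cake { width: self.width, height: position },
                Cake { width: self.width, height: self.height - position },
            ),
        }
    }
}

fn main() {
    let whole = Cake { width: 200.0, height: 100.0 };
    let (left, right) = whole.split(Axis::Horizontal, 80.0);
    // `whole` has been moved; splitting it again is a compile error
    // unless we had cloned it first.
    println!("left: {left:?}, right: {right:?}");
}
```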

Olivier FAURE: Thoughts?

Olivier FAURE: (To be clear, the idea is that instances of "Cake" or whatever we call it wouldn't be passed to layout methods; they would just be an internal helper)

https://xi.zulipchat.com/#narrow/stream/197474-checkins/topic/Olivier.20Faure/near/263836983

I'd like some way to access arbitrary Widgets from just their id. There are multiple ways to do that, but the one that I think is conceptually cleanest is to store Widgets in a hashmap (or something more optimized) with ids as keys.

There are some fundamental problems with that approach:

The way Druid currently works (and, really, the way any GUI framework would logically work), when you're doing a low-level operation on a widget, it usually does the same operation on its children. Eg, if you want to compute a widget's layout, you usually need to compute its children's layouts first. That means you need to borrow the children while the parent is still borrowed. If the children are in boxes, that's fine; if they're in a shared hashmap, you need to mutably borrow multiple elements of the hashmap at the same time. You can do that with refcells (or something more optimized), but your design immediately becomes more brittle, because lifetime errors happen at runtime instead of compile time.

Constructing container widgets becomes more complicated, because widgets no longer store their children directly, they store the ids of their children, which are stored in a global hashmap. So SizedBox's constructor:

pub fn new(inner: impl Widget<T>) -> Self

becomes

pub fn new(inner: impl Widget<T>, global_widget_hashmap: &HashMap<WidgetId, Widgets>) -> Self

Alternatively, the child widget could be added to the hashmap during the WidgetAdded pass, but that means the parent must store inner until that pass. (Also, I'd like to remove/replace the WidgetAdded pass, because I think it's a big code smell)

A parent trying to access child widgets loses type information. This isn't as bad as it appears since most container widgets use Box<dyn Widget>, but some need the type information. If you have a WidgetPod<FooBar>, it's easy to get a FooBar out of it; if you have a WidgetId, it's more complicated. The obvious solution is downcasting; also, I could add TypedWidgetIds that would do the downcasting for you, guaranteed valid by the type system (kinda like command selectors; see the sketch after this list).
It all makes the crate way less user-friendly, with more abstractions you need to learn. This one I'm fine with; I'm trying to make an "intermediary representation" GUI library; the target audience is GUI framework developers, who are expected to have higher tolerance for idiosyncrasies and elaborate designs. But it makes the whole design less fun to work on; I basically end up feeling like I'm going to throw away half my work because it's too complicated.
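What those TypedWidgetIds could look like, sketched; everything here is hypothetical and only meant to show how a typed id can hide the downcast, analogous to command selectors:

```rust
use std::any::Any;
use std::collections::HashMap;
use std::marker::PhantomData;

// Minimal stand-ins so the sketch is self-contained.
trait Widget {
    fn as_any_mut(&mut self) -> &mut dyn Any;
}

type RawWidgetId = u64;

/// A widget id that remembers the widget's concrete type.
struct TypedWidgetId<W> {
    raw: RawWidgetId,
    _marker: PhantomData<W>,
}

/// The global storage discussed above: widgets keyed by id.
struct WidgetArena {
    widgets: HashMap<RawWidgetId, Box<dyn Widget>>,
}

impl WidgetArena {
    /// The downcast is hidden behind the typed id, the same way command
    /// selectors hide the payload downcast.
    fn get_mut<W: Widget + 'static>(&mut self, id: &TypedWidgetId<W>) -> Option<&mut W> {
        self.widgets
            .get_mut(&id.raw)
            .and_then(|w| w.as_any_mut().downcast_mut::<W>())
    }
}
```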

Now, despite these complications, there's a few reasons I want this design:

I'm trying to move away from the druid model where only a widget is allowed to mutate its children, and it can only do that through its trait methods. That model works for toy examples with clear data flows, but for anything more complicated you end up having to use a ton of Commands. In particular, I'd like to be able to write an app that changes its widget tree from a single top-level function. (The last time I tried to use Druid for a real-world-ish example, I was super frustrated because I couldn't do that).
I want to be able to send targeted events directly from top-level code to a widget; right now, every event has to go through the entire hierarchy, which is why you have special events like TargetedCommand, RouteImeChange, RouteFocusChanged whose only purpose is to be passed down to children until they find the right target. We use a bloom filter as a heuristic to avoid sending these events to every member of the hierarchy, but the filter is less effective when there are lots of widgets, which is when you need it most.

Anyway, I'm still not sure what code I'm going to write. The first issue I mentioned (having to use interior mutability and losing lifetime guarantees) is the one that bothers me the most. I think I need to get to it to have a better vision of what the potential solutions would be.
