
@DiamondLovesYou
Last active November 17, 2016 11:48

This is kinda a reply to /u/HeroesGrave's reply. Over the past year+, I've put a lot of thought into creating a solid GUI library. I know I'm kinda late to the party (until now, I hadn't written this down, other than what's been coded), but nonetheless here are my thoughts.

Some areas I've left kinda vague, hoping that readers will be able to extrapolate. Feel free to fork/PR, comment, critique, etc.

Goals

  • Make it easy to lay elements out

  • Properly separate the model from everything else

  • First-class support of 'hologram' GUIs (IE those subjected to a projection matrix, also called render-to-texture GUIs), including LOD for container children. I specifically need this for my project.

  • Resolution agnostic.

  • Crate friendly, in that it should be easy to offer groups of elements pre-positioned for drop-in use, or custom Rules. For example, println!()-esque facilities for types which can decompose themselves into the basic Models.

  • Platform-specific sensor awareness (IE rotation on Android/etc, multi-touch). (Maybe?)

  • Platform independent, but still requiring GLES.

  • i18n

  • Handle user input (maybe make this a feature?)

    • Must be able to handle input at the component level, for things like playing a click sound (or whatever) when the user presses a key. Also, styles may want to do something with mouse input; multi-touch, for example.
    • We require some level of platform access. For example, to open the software keyboard on devices which need it for text input.
  • Animations (feature-gated? It will be annoying to featurize the text cursor blink.)

  • Video (feature-gated), probably via a texture handle.

Non-goals

  • Immediate mode.
    • Textures can be expensive to redraw completely, especially when scaled up to the viewport size; so while vertices could be sent every frame without much perf loss, textures can't.
    • Resources can be forced into 'static, and only freed in the frame after their last use (IE after the next swap-buffers).
  • Multiple backends.
    • This crate should generate a command list and send that to the dependent crate. I suppose that in this case the "backend" would be the code which translates those commands into the commands needed for whatever API is used, Vulkan or GLES.
    • For video frame texture handles, a simplifying assumption would be to accept an id which is used when sent back to the application in the command list. The application would then do whatever it needs with that id for the actual render calls.
    • Also, some graphics APIs (cough OpenGL cough) don't compose very well as libraries.
  • Following Rust's ownership.
    • I've tried to do this (because it sounds great): sadly, it doesn't really work. Indices aren't feasible because of the tree nature of UIs; of course it's possible, but at that point you're already allocating, so you might as well allocate an Rc and talk to the Model directly in your controllers/views. Furthermore, an index is just a synonym for a pointer (IE base + offset), so it would be cheaper to have an Rc+RefCell than to require a HashMap lookup every time; the same goes for indices into a Vec<Option<_>> (which can cause issues because resizing is only possible in some cases).
    • Generics won't solve this problem either, because you won't be able to 'unify' certain generic parameters (this is a rustc shortcoming; I tried it in a previous iteration), and you'll need to use boxed traits anyway for containers.
    • This may be valuable to revisit in the future if we find there is a lot of benefit from generalizing sibling chains so that both owned and Rc+RefCell cases can be handled.
  • User defined widgets.
    • Compose your widgets, yo. Use one or more custom styles if you need to do weird things.
  • Macros to generate GUIs at compile time.
    • I don't think this is worth it at all. What gains would be had at run time that LLVM can't already handle for us? LLVM is limited in what it can do regarding heap allocation, but anything related to stack allocation and initialization should already be optimized to the best possible quality (or so close it makes no odds).
    • Having said that, I think a fruitful macro could be a quote!(..)-like facility (that's still a thing, right?) to make using the 'layout factories' more ergonomic. Like JSX.
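To make the "command list" non-goal above concrete, here's a minimal sketch of what the crate might hand to the dependent application each frame. All names are hypothetical; the external-texture id is the application-assigned id mentioned for video frames, which the crate round-trips without ever dereferencing.

```rust
// Hypothetical sketch of the backend-agnostic command list: the GUI crate
// emits these, and the application translates them into Vulkan/GLES calls.
#[derive(Debug, Clone, PartialEq)]
pub enum RenderCmd {
    /// Bind a texture the GUI crate itself manages.
    BindTexture { texture: u32 },
    /// Bind an application-owned texture (e.g. a video frame) by the id
    /// the application registered; the crate only round-trips this id.
    BindExternalTexture { id: u64 },
    /// Draw a range of the vertex buffer uploaded for this frame.
    Draw { first_vertex: u32, vertex_count: u32 },
}

/// The per-frame output handed to the dependent crate.
pub struct CommandList {
    pub cmds: Vec<RenderCmd>,
}

impl CommandList {
    pub fn new() -> Self {
        CommandList { cmds: Vec::new() }
    }
    pub fn push(&mut self, cmd: RenderCmd) {
        self.cmds.push(cmd);
    }
}
```

The point of the enum is that the "backend" reduces to one `match` in the application, whatever API it targets.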

Design

The design is inspired by HTML/CSS, which informs the two core traits (covered first).

Model

As you would expect, this is the data to display. Includes:

  • Text
  • Images
  • Videos

You may notice a couple of things are missing, like containers, buttons, and text boxes; this is deliberate. For one, containers aren't content: they're more of a stylistic structure used for positioning, or an organizational tool. The rest are left out because they can be reproduced by styles: a button is a background that changes based on mouse state, plus text; a text box is a text element that accepts input, has an optional hint, a contrasting background, and possibly hides its text. Even video could be crudely emulated by a sequence of images (not really, though, because of timing/real-time requirements).

Note: for succinctness, I'll group containers with the models by calling them both elements. I shouldn't do this, because in code ui::Element<T> only manages Models.

Models are owned by an Element struct, wrapping an Rc+RefCell. Controllers set model values based on application/Controller-specific models. There is no automatic propagation between the two; what if the user is editing that field?
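A minimal sketch of that ownership shape, under my assumptions (all names hypothetical): the Element wraps its Model in Rc<RefCell<_>>, so a controller can hold a handle and mutate the model directly, with no index or HashMap lookup.

```rust
use std::cell::RefCell;
use std::rc::Rc;

/// Hypothetical Model: just text, for this sketch.
pub struct TextModel {
    pub text: String,
}

/// An Element shares its Model via Rc + RefCell, so controllers/views
/// can talk to the Model directly (per the ownership non-goal above).
pub struct Element<T> {
    model: Rc<RefCell<T>>,
}

impl<T> Element<T> {
    pub fn new(model: T) -> Self {
        Element { model: Rc::new(RefCell::new(model)) }
    }
    /// Hand a shared handle to a controller.
    pub fn model(&self) -> Rc<RefCell<T>> {
        Rc::clone(&self.model)
    }
}
```

The RefCell moves aliasing checks to run time, which is exactly the trade-off the "Following Rust's ownership" non-goal accepts.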

Rule

This is actually a bunch of things (perhaps it should be split?).

  • Where we learn how to render the models themselves. Also where the Substrate is set up.
    • Including model specific details like a Text's cursor, if in focus.
  • Implemented by types which are Model agnostic. In this instance, the rule type would wrap the sub-rule and override specific values on the trait, like (from CSS) box-model width and height. For example, borders should add space around the content of child rules (down to the model rule), or subtract space from the space supplied by parent rules.
  • Possible substrate chaining effects (IE offsets in 3D space). This instantiates new substrates, but shouldn't be terrible with proper texture management.

Containers as we know them live here. Some additional notes:

  • Sizes will always be relative to an element's parent, never to the content of the element.
    • This includes text font sizes. The actual, optimal font pixel size should be found auto-magically.
  • Animations on all properties.
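Parent-relative sizing reduces to multiplying fractions down the tree, from the viewport to the leaf. A sketch, with hypothetical names:

```rust
/// Hypothetical parent-relative size: every element stores a fraction of
/// its parent's extent, never an absolute pixel count.
#[derive(Clone, Copy)]
pub struct RelSize {
    pub w: f32, // fraction of parent width, in [0, 1]
    pub h: f32, // fraction of parent height, in [0, 1]
}

/// Resolve a chain of nested elements to pixels by walking from the
/// root (the viewport) down to the leaf, multiplying fractions.
pub fn resolve(viewport: (f32, f32), chain: &[RelSize]) -> (f32, f32) {
    chain
        .iter()
        .fold(viewport, |(w, h), s| (w * s.w, h * s.h))
}
```

For example, a half-size container holding a half-width child resolves to a quarter of the viewport width; only the final multiply touches pixels, which is what keeps the layout resolution agnostic.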

Document<View, Stylesheet>

Manages the root Substrate, or rather owns a root container (more of a model in this one case), which already has substrate-management facilities. Provides functions to mutate the View or StyleSheet, flag needs_repaint, get the render command list (for sending to Vulkan/GLES), etc.

I didn't put this in the section heading, but Document is also parameterized on a ProjViewBasis (for optimizing flat Documents, IE HUD-esque).
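A rough sketch of the Document surface described above (everything here is hypothetical, including the placeholder command-list type; only the needs_repaint idea is from the text):

```rust
/// Hypothetical sketch of Document<View, Stylesheet>. Rendering only
/// produces a command list when a repaint was flagged, so flat
/// HUD-esque documents don't re-render every frame.
pub struct Document<V, S> {
    view: V,
    stylesheet: S,
    needs_repaint: bool,
}

impl<V, S> Document<V, S> {
    pub fn new(view: V, stylesheet: S) -> Self {
        Document { view, stylesheet, needs_repaint: true }
    }
    /// Controllers call this after mutating state outside view_mut().
    pub fn flag_repaint(&mut self) {
        self.needs_repaint = true;
    }
    /// Mutating the View implies the layout may have changed.
    pub fn view_mut(&mut self) -> &mut V {
        self.needs_repaint = true;
        &mut self.view
    }
    pub fn stylesheet(&self) -> &S {
        &self.stylesheet
    }
    /// Returns Some(command list) only when a repaint is pending.
    /// (u32 stands in for a real render-command type here.)
    pub fn render(&mut self) -> Option<Vec<u32>> {
        if !self.needs_repaint {
            return None;
        }
        self.needs_repaint = false;
        Some(Vec::new())
    }
}
```

The Option return makes "nothing to do this frame" explicit, which matters for the battery-life goal in the Animation section.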

View

As a type, the view holds only the Models that the Document's owning Controller cares about, plus maybe a cache for large layouts (IE a large folder). As a trait, the view builds the layout of the document, assisted by the Document's StyleSheet.

It is queried by the Document, supplied with the root builder object, and tasked with replacing the whole document layout.

Stylesheet

Shared rules used by the View.

Input

Mouse

Due to the 3D feature, I use two additional framebuffer attachments:

  • instance id (assigned by sibling construction) (uint attachment)
  • instance rect coords [0, 1] (vec2 attachment)

The input manager uses this for mouse picking, as required by text input.
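Reading those two attachments back at the cursor's pixel yields a pick result; a sketch of the decode step, assuming instance id 0 is reserved for "nothing under the cursor" (that convention, and all names, are my assumption):

```rust
/// Hypothetical result of reading the picking attachments back at the
/// cursor's pixel: the instance id (uint attachment) and the position
/// within that instance's rect (vec2 attachment, each coord in [0, 1]).
#[derive(Debug, PartialEq)]
pub struct Pick {
    pub instance: u32,
    pub uv: (f32, f32),
}

/// Decode one pixel's attachment data into a pick result. Instance id 0
/// is assumed reserved for "no element under the cursor".
pub fn decode_pick(id_px: u32, uv_px: (f32, f32)) -> Option<Pick> {
    if id_px == 0 {
        return None;
    }
    Some(Pick { instance: id_px, uv: uv_px })
}
```

Because the ids are assigned at sibling construction and the coords are already in the rect's own [0, 1] space, this works unchanged for projected 'hologram' GUIs, which is the point of doing picking on the GPU.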

Keyboard

Separated into two things:

  • Control
  • Char

Control is things like:

  • select up/down with granularity (IE add the right char to the current text selection)
  • shift up/down with granularity (IE page/line)
  • enter (different than getting a \n character)
  • delete with a direction (IE backspace is delete-left)
  • tab

Char is the actual content. It does not include any control chars that are "received" (IE a Text without multiline translates '\n' into an Enter control).
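The Control/Char split above could be encoded as two enums; everything here is a hypothetical sketch, including the single-line '\n'-to-Enter translation the text just described:

```rust
/// Hypothetical granularity for selection/shift controls.
#[derive(Debug, PartialEq)]
pub enum Granularity {
    Char,
    Word,
    Line,
    Page,
}

/// Editing/navigation commands, never text content.
#[derive(Debug, PartialEq)]
pub enum Control {
    SelectLeft(Granularity),
    SelectRight(Granularity),
    ShiftUp(Granularity),
    ShiftDown(Granularity),
    Enter,
    DeleteLeft,  // backspace
    DeleteRight, // the delete key
    Tab,
}

/// The two kinds of keyboard input the document describes.
#[derive(Debug, PartialEq)]
pub enum KeyEvent {
    Control(Control),
    Char(char),
}

/// A Text element without multiline would translate '\n' into an Enter
/// control rather than accepting it as content.
pub fn translate_single_line(c: char) -> KeyEvent {
    if c == '\n' {
        KeyEvent::Control(Control::Enter)
    } else {
        KeyEvent::Char(c)
    }
}
```

Keeping Enter distinct from '\n' means a single-line field never has to scrub newlines out of its model after the fact.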

Animation

Only render when it is needed (battery life!). This is accomplished with two animation modes:

  • Continuous
  • Periodic (NB video goes here)

Both can be bounded by a length of time. Obviously, continuous animations cause the "main" loop to run at its max allowed fps. Periodic is, as one would expect, a best-effort tick every period T.

All animations register references to themselves with the FrameTiming manager. This manager checks these animations to know how long the app can sleep, and updates their inner state on every tick (for running animations).
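The "how long can we sleep" computation could look like this (a sketch under my assumptions; names hypothetical): any continuous animation forces an immediate wake, otherwise we sleep until the soonest periodic deadline, and with no running animations we sleep indefinitely until a window event.

```rust
use std::time::Duration;

/// The two animation modes from the section above.
pub enum AnimMode {
    Continuous,
    /// Best-effort tick roughly every `period` (video frames land here).
    Periodic {
        period: Duration,
        since_last_tick: Duration,
    },
}

/// How long the main loop may sleep before some animation needs a tick.
/// None means nothing is animating: sleep until the next window event
/// (hence the wish for a window event wait with timeout).
pub fn max_sleep(anims: &[AnimMode]) -> Option<Duration> {
    let mut soonest: Option<Duration> = None;
    for a in anims {
        let next = match a {
            // Continuous => wake immediately (run at max allowed fps).
            AnimMode::Continuous => Duration::from_secs(0),
            // Periodic => time remaining until the next tick.
            AnimMode::Periodic { period, since_last_tick } => {
                period.saturating_sub(*since_last_tick)
            }
        };
        soonest = Some(match soonest {
            Some(s) => s.min(next),
            None => next,
        });
    }
    soonest
}
```

This is why a window event wait *with timeout* matters: the timeout is exactly this value, so the app wakes for whichever comes first, input or an animation deadline.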

Wanted: window event wait with timeout!

Open Questions

  • How much should we featurize/modulize this stuff?
  • I think it would be wise to completely unify the image and video Models into Image. The Image model would take a texture handle, and the corresponding base Rule would subdivide the Substrate plane, specializing vertex attributes to use the texture.
  • I can foresee an argument that the FrameTiming stuff is too intrusive for integration into an existing application.
  • Texture uploading is expensive.
    • On the one hand, GLES has subtexture updates. Not reliable across all platforms, I've heard; given my experience with Qualcomm, I believe it.
  • Other shit I forgot.