You'll see a live demonstration and design overview of Ivan, a retrocomputing personal project written in Elm. Ivan is a web application framework for working with 80s-era analog vector graphics, which uses an analog oscilloscope as its output device.
From this talk, the audience will get:
- A concrete example of how the "make impossible states impossible" idea around data modeling in Elm can apply to the problem domain of computer graphics
- Some fun demo material that will provide a refreshing visual contrast to the usual web application
This presentation will focus mainly on how Elm relates to modeling the problem domain of graphics. Background concepts and supporting systems will be explained only as far as they introduce the domain and clarify the examples.
- 2 mins - Introduction to analog vector graphics
- 1 min - Project goals
- 1 min - Introduction to the project's tech stack
- 30 sec - Acknowledgements to microcontroller developer
- 1 min - Demos
- 2 mins - "Transformations" and the "rendering pipeline"
- 4 mins - Challenges in modeling the rendering pipeline
- 6 mins - Implementation of the rendering pipeline in Elm
- 3 mins - The "object tree" and its relationship to transformations
- 5 mins - Implementation of the object tree in Elm
- 3 mins - Demo with audience participation
We'll briefly acquaint the audience with analog vector graphics and how they behave, focusing mainly on communicating these ideas:
- Arbitrary positioning of the electron beam via analog X, Y, and Z inputs
- No pixels: continuous phosphor coating on the screen
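To make the beam-positioning idea concrete, here's a hypothetical sketch (not the project's actual code, and all names are illustrative) of how a line segment might become a stream of X/Y sample pairs for the DACs that steer the beam:

```elm
-- Hypothetical sketch: tracing a line segment as evenly spaced
-- X/Y beam positions. A microcontroller's DACs would turn each
-- pair into analog voltages that deflect the electron beam.

type alias Point2D =
    { x : Float, y : Float }

traceSegment : Int -> Point2D -> Point2D -> List Point2D
traceSegment steps from to =
    List.range 0 steps
        |> List.map
            (\i ->
                let
                    t =
                        toFloat i / toFloat steps
                in
                { x = from.x + (to.x - from.x) * t
                , y = from.y + (to.y - from.y) * t
                }
            )
```

Because there are no pixels, "resolution" here is just how finely the beam path is sampled and how fast the DACs can update.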
We'll show some slides and clips to contextualize the appearances of analog vector graphics in early computer art, computer science, video games, and film.
Examples: 1 | 2 | 3 | 4 | 5 | 6
These sources won't be explained beyond their appearances in slides and clips, with appropriate attributions.
I'll introduce a summary of my goals for this project:
- General-purpose graphics framework, with an Elm programmer as the target user
- Should support a wide variety of applications:
  - Drawing
  - Animation
  - Simulations / games
- Serial port communication with the display hardware
- Platform-agnostic
We'll review a high-level diagram describing the relationships between the following:
- Analog vector display (responsible for displaying graphics)
- Microcontroller (responsible for receiving drawing instructions and converting them to analog signal outputs)
- Server (responsible for serving the web-app and sending drawing instructions over serial port)
- Elm application (responsible for editing and animation features, handling user interaction, and sending drawing instructions to the server)
A moment to recognize the open-source developer behind the microcontroller board and firmware. I'll share a link to the project page, where the audience can find information on buying the components and building one themselves if they're so inclined.
I'll show off some animation capabilities and features with a demo or two. These will probably showcase keyframed 3D animation and a simple simulation or game.
A general overview of the problem domain, along these lines:
In a so-called "rendering pipeline", geometry data (coordinate points, in our case) are transformed in several ways to reach their final destination as images. If we take the points that form a simple 3D shape like a cylinder as an example, they might be transformed through the following steps:
- Model space: Model space is the "source of truth" for the cylinder. Here it is represented in relation to a unit cube, aligned so that it can be scaled and rotated around the origin point. Its points have X, Y, and Z coordinates.
- Scene space: the cylinder is positioned in relation to the "scene", a container which holds all the objects that are to be included in the 3D "world". Its points still have X, Y, and Z coordinates, but with different values; they've been scaled, rotated, and translated to represent different areas in space.
- Image space: the cylinder is flattened into a 2D view through a process called perspective projection, like the way a camera turns a view of the world into a flat picture. The cylinder's points now have only X and Y coordinates.
- Device space: the cylinder's 2D coordinates are transformed to fit the coordinate system of the display device. The cylinder's points have X and Y coordinates, but they may be scaled, rotated or reversed.
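As a rough sketch of one of these steps, a simple perspective projection divides X and Y by depth, so that more distant points converge toward the center of view. The type names below mirror the ones used later in this proposal, but the function body is illustrative, not the project's actual implementation:

```elm
-- Illustrative sketch of the scene-space -> image-space step:
-- a naive perspective projection that divides X and Y by depth.

type alias Vector3D =
    { x : Float, y : Float, z : Float }

type alias Vector2D =
    { x : Float, y : Float }

perspectiveProjection : Vector3D -> Vector2D
perspectiveProjection point =
    -- Points farther from the camera (larger z) shrink toward the origin
    { x = point.x / point.z
    , y = point.y / point.z
    }
```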
The manipulations involved in transforming objects through the pipeline, from model space to device space, can be subtle. Because points in every space share X and Y coordinates, there are many opportunities for errors caused by operating on the wrong kind of geometry data. And many of these errors wouldn't crash a program; they'd just yield unexpected results.
Duck typing is often characterized as an advantage of dynamically typed languages like Ruby or Python, since it can simplify contracts and add flexibility without extra effort. For this domain, however, duck typing is a liability: it makes it harder to introduce guarantees that rule out operating on the wrong data. The cost of instantiating separate objects to represent the same geometry at different stages of transformation also makes Python or Ruby a potentially poor fit for timing-critical applications like real-time graphics.
Elm's type system can help guarantee valid data for a given step in the process. We can model points in each of four separate modules:
ModelGeometry.elm:

```elm
type Point
    = Point Vector3D
```

SceneGeometry.elm:

```elm
type Point
    = Point Vector3D
```

ImageGeometry.elm:

```elm
type Point
    = Point Vector2D
```

DeviceGeometry.elm:

```elm
type Point
    = Point ( Int, Int )
```
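With each space's `Point` as a distinct opaque type, mixing spaces becomes a compile error rather than a silent rendering bug. A hypothetical illustration (the function name is made up for this example):

```elm
-- Hypothetical: a function that only accepts image-space points.
--
--     flipY : ImageGeometry.Point -> ImageGeometry.Point
--
-- Passing a model-space point to it fails to compile:
--
--     flipY modelPoint
--     -- TYPE MISMATCH: expected ImageGeometry.Point,
--     -- but modelPoint has type ModelGeometry.Point
```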
This allows us to model our pipeline as higher-level functions for converting our data between spaces:
Pipeline.elm:

```elm
toSceneObject : ObjectTree -> SceneGeometry.Object
toSceneObject objectTree =
    objectTree
        |> ObjectTree.toObject
        |> List.map toSceneLineSegment


toImageObject :
    (Vector3D -> Vector2D)
    -> SceneGeometry.Object
    -> ImageGeometry.Object
toImageObject projection sceneObject =
    List.map (toImageLineSegment projection) sceneObject


toDeviceObject :
    ImageGeometry.Bounds
    -> ImageGeometry.Bounds
    -> ImageGeometry.Object
    -> DeviceGeometry.Object
toDeviceObject imageBounds deviceBounds imageObject =
    imageObject
        |> ImageGeometry.normalize imageBounds deviceBounds
        |> List.map toDeviceLineSegment
```
And a view function can read like the pipeline's process:
```elm
render : Model -> ( Model, Cmd Msg )
render model =
    ( model
    , model.objectTree
        |> Pipeline.toSceneObject
        |> Pipeline.toImageObject Pipeline.perspectiveProjection
        |> Pipeline.outputToDevice model.imageBounds model.deviceBounds
    )
```
In graphical editors, drawable objects are often modeled as a tree, so that many objects can be manipulated as a group. This project needed something similar, so that, for example, groups of objects in a 3D scene could be transformed together in 3D space. This meant that when rendering, each object needed to be manipulated in terms of its own transformations (scale, rotation, and translation), but also needed to inherit the transformations applied to its parent nodes in the tree.
```elm
type ObjectTree
    = Group (List Transform3D) (List ObjectTree)
    | Object ObjectWithId
```
All `Object`s under a `Group` in an `ObjectTree` can have their `Transform3D`s propagated down the tree by `fold`ing calls to `applyTransform3D` into a partially applied function:
```elm
allTransformsAsFunctions : List Transform3D -> (Vector3D -> Vector3D)
allTransformsAsFunctions transforms =
    transforms
        |> List.map Transform.applyTransform3D
        |> List.foldl (>>) identity
```
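The direction of the fold determines which transform is applied first. A hedged illustration with hypothetical transform names:

```elm
-- Hypothetical: scaleUp and rotateY are Transform3D values. Because
-- List.foldl composes each new function in front of the accumulator,
-- the last transform in the list ends up applied first:
--
--     allTransformsAsFunctions [ scaleUp, rotateY ]
--         == (applyTransform3D rotateY >> applyTransform3D scaleUp)
```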
This was noteworthy to me because I normally see `fold` applied to data rather than functions. The function that `allTransformsAsFunctions` produces can then be `map`ped onto the objects that make up the tree:
```elm
applyTransformsToObject : List Transform3D -> ObjectWithId -> ObjectWithId
applyTransformsToObject transforms objectWithId =
    let
        mapObject =
            ModelGeometry.mapObject <| allTransformsAsFunctions transforms
    in
    mapObjectWithId mapObject objectWithId


applyTransformsToTree : List Transform3D -> ObjectTree -> List ObjectWithId
applyTransformsToTree transforms tree =
    case tree of
        Group childTransforms childTrees ->
            -- applyTransformsToTrees (defined elsewhere) recurses into
            -- the child trees with the group's own transforms
            applyTransformsToTrees childTransforms childTrees
                |> List.map (applyTransformsToObject transforms)

        Object object ->
            [ applyTransformsToObject transforms object ]
```
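To show how inheritance plays out, here's a hypothetical tree (constructor helpers like `rotateY` and `translateX`, and the `ObjectWithId` values, are made up for this sketch):

```elm
-- Hypothetical usage: a group whose rotation is inherited by both
-- children. The cube receives the group's rotation in addition to
-- its own nested translation.
spinningPair : ObjectTree
spinningPair =
    Group [ rotateY 0.5 ]
        [ Object cylinderWithId
        , Group [ translateX 2.0 ] [ Object cubeWithId ]
        ]

flattened : List ObjectWithId
flattened =
    applyTransformsToTree [] spinningPair
```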
If I can implement the features necessary by September, we may be able to do a brief audience participation demo, where audience members can log into the app and draw on the oscilloscope screen from their devices.
As a multidisciplinary and personal project, I feel this is a great fit with your goals for topics this year. I've attempted to write this library in Ruby and JavaScript before, but gave up because:
- The long chains of data manipulation involved in DIY 3D graphics were too frustrating to debug in dynamically typed languages, and
- The requirements for performance, easily-built UI, and serial port access were difficult to satisfy simultaneously.
Elm with ports has made this project a pleasure to work on, and I think the visual results will be an impressive and fun addition to the conference lineup.