@addyosmani
Last active January 23, 2016 21:39
JS Musings

Composition

On an architectural level, the way we craft large-scale applications in JavaScript has changed in at least one fundamental way in the last four years. Once you remove the minutiae of the machinery behind data-binding, immutable data structures and virtual DOM (all of which are interesting problem spaces), the one key concept that many devs seem to have organically converged on is composition. Composition is incredibly powerful, allowing us to stitch together reusable pieces of functionality to "compose" a larger application. Composition ushers in a mindset of things being good when they're modular, smaller and easier to test. Easier to reason about. Easier to distribute. Heck, just look at how well that works for Node.

Composition is one of the reasons we regularly talk about React "Components", "Ember.Component"s, Angular directives, Polymer elements and, of course, straight-up Web Components. We may argue about the frameworks and libraries surrounding these different flavours of component, but not that composition is inherently a bad thing. Note: earlier players in the JS framework game (Dojo, YUI, Exy) touted composition strongly and it's been around forever, but it's taken a while for most people to grok the true power of this model as broadly on the front-end.

Composition is one solution to the problem of application complexity. The way that languages and the web platform evolve is in direct response to the pain caused by such complexity. Complexity on its own can take lots of forms, but when we look at the landscape of how developers have been building for the web over the last few years, common patterns are among the most obvious things worth baking solutions in for. This is why an acknowledgement by browser vendors that Web Components are important is actually a big deal. Even if you absolutely don't use 'our' flavour of them (I'll contend they are powerful), hopefully you dig composition enough to use a solution that offers it.

Where I hope we'll see improvements in the future is around state synchronization (synchronizing state between your component's DOM and your model/server) and utilizing the power of composition boundaries.

Composition Boundaries

It would be unfair to talk about composition on the web without also discussing Shadow DOM boundaries (humor me). It's best to think about the feature as giving us true functional boundaries between widgets we've composed using regular old HTML. This lets us understand where one widget tree ends and another widget's tree begins, giving us a chance to stop shit from a single widget leaking all over the page. It also allows us to scope down what CSS selectors can match such that deep widget structures can hide elements from expensive styling rules.

If you haven't played around with Shadow DOM, it allows a browser to include a subtree of DOM elements (e.g. those representing a component) in the rendering of a document, but not in the main document DOM tree of the page. This creates a composition boundary between one component and the next, allowing you to reason about components as small, independent chunks without having to rely on iframes. Browsers with Shadow DOM support can move across this boundary with ease, but inside that subtree of DOM elements you can compose a component using the same div, input and select tags you use today.

How does Shadow DOM compare to just using an iframe, something we've used since the dawn of time for separating scope and styling? Other than being horrible, iframes were designed to insert completely separate HTML documents into the middle of another, meaning that, by design, accessing DOM elements inside an iframe (a completely separate context) was meant to be brittle and non-trivial by default. Also think about 'inserting' multiple components into a page with this mechanism. You would hit a separate URL for the iframe host content in each case, littering your markup with unwieldy iframe tags that have little to no semantic value. Components using Shadow DOM for their composition boundaries are (or should be) as straightforward to consume and modify as native HTML elements.

Playing Devil's advocate, some may feel composition shouldn't happen at the presentation layer (it may not be your cup of tea); others may agree it should. Let's start off by saying you absolutely can have component composition without the use of Shadow DOM; however, in order to achieve it, you end up having to be extremely disciplined about the boundaries of your components or add additional abstractions to make that happen. Shadow DOM gives you a way of doing this as a platform abstraction.

Whilst this is natively supported in Chrome and coming to other browsers in the future, it's exciting to see existing approaches exploring support for it - e.g. React is working on adding support for Shadow DOM and hopes to evolve this work soon. Polymer already supports it, and there appears to be interest from Ember and Angular in exploring this space.

Component messaging

What about communicating between components? Well, if you're working with decoupled, modular components, there are a few options here.

Direct reference via component APIs is undesirable (unless working on something super simple) as this introduces direct dependencies on specific versions of other components. If their APIs vastly change, you either have to pay the upgrade price or deal with breakage. Instead, you could use a classic global or in-component event system. If you require communication between components without a parent-child relationship, events + subscription remain one way to go. In fact, React recommends this approach, unless components have a parent-child relationship, in which case you can just pass props. Angular uses Services for component communication and Polymer has a few options here: custom events, change watchers and the <core-signals> element.
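
As a sketch of the global event system approach (all names here are illustrative, not the API of any particular library):

```javascript
// Minimal global event bus: fire-and-forget pub/sub between components
// that never hold direct references to each other.
class EventBus {
  constructor() {
    this.handlers = new Map(); // event name -> Set of callbacks
  }
  on(event, handler) {
    if (!this.handlers.has(event)) this.handlers.set(event, new Set());
    this.handlers.get(event).add(handler);
    return () => this.handlers.get(event).delete(handler); // unsubscribe fn
  }
  emit(event, payload) {
    const subs = this.handlers.get(event);
    if (subs) subs.forEach((handler) => handler(payload));
  }
}

// Two decoupled "components" communicating without knowing about each other:
const bus = new EventBus();
const received = [];
bus.on('todo:added', (todo) => received.push(todo));
bus.emit('todo:added', { text: 'Write docs' });
```

The unsubscribe function returned by `on` matters in component systems: a component should drop its subscriptions when it is torn down, or the bus will keep it alive.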

Is this the best we can do? Absolutely not. For events you're going to fire-and-forget, the global event system model works relatively well but it becomes hairy once you start to need stateful events or chaining. As complexity grows, you may find that events interweave communication and flow control and whilst there are many ways to improve your event system (e.g functional reactive programming) you may find yourself with events running arbitrarily large amounts of code. What might be better than a global event system is CSP (Communicating Sequential Processes), a formalised way to describe communication in concurrent systems. CSP channels can be found in ClojureScript or Go and have been formalized in the core.async project. CSP handles coordination of your processes using message passing, blocking execution when consuming or putting from channels, making it easier to express complex async flows.
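
To make the channel idea concrete, here is a toy unbuffered channel in plain JavaScript. This is a sketch only - real implementations like core.async or js-csp add buffering, `alts`, and proper process semantics - but it shows the key property: a consumer blocks (awaits) until a producer puts a value.

```javascript
// Toy unbuffered channel: take() suspends until a value is put.
// Illustrative only -- not the core.async or js-csp API.
class Channel {
  constructor() {
    this.takers = []; // pending take() resolvers waiting for values
    this.puts = [];   // values waiting for a taker
  }
  put(value) {
    const taker = this.takers.shift();
    if (taker) taker(value);     // hand straight to a waiting consumer
    else this.puts.push(value);  // or queue until someone takes
  }
  take() {
    if (this.puts.length) return Promise.resolve(this.puts.shift());
    return new Promise((resolve) => this.takers.push(resolve));
  }
}

// A "process" coordinating over the channel via message passing:
async function consumer(ch, out) {
  out.push(await ch.take()); // execution blocks here until a put arrives
  out.push(await ch.take());
}
```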

The issue channels solve is that, at some point, you may require duplex communication and end up relying on a stringly-typed convention for it:

thingNeedsPhoto { id: 001, uuid: "foo" }
thingPhoto { data: "../photo.png", uuid: "foo" }

The consumer later matches the two together by uuid. You've now mostly re-implemented function calls on top of a lossy, unoptimized event channel. Whenever there's a mismatch because of a typo, debugging is going to be a pain. James Long and Tom Ashworth have some rock-solid posts on CSP and transducers (composable algorithmic transformations) that are worth looking at if you find yourself wanting more than a global event system.
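
For a taste of the transducer idea, here is a minimal sketch (not the API of any transducer library): transformations are expressed as reducer decorators, so map and filter compose into a single pass with no intermediate arrays.

```javascript
// Transducers in miniature: each transformation wraps a reducer, so a
// composed pipeline runs in ONE pass over the input. Sketch only.
const map = (f) => (reducer) => (acc, x) => reducer(acc, f(x));
const filter = (pred) => (reducer) => (acc, x) =>
  pred(x) ? reducer(acc, x) : acc;
const compose = (...fns) => (x) => fns.reduceRight((v, f) => f(v), x);

// Double every value, then keep only values greater than 4:
const xform = compose(map((n) => n * 2), filter((n) => n > 4));
const push = (acc, x) => { acc.push(x); return acc; };
const result = [1, 2, 3, 4].reduce(xform(push), []);
// result is [6, 8] -- 2 and 4 were dropped after doubling
```

Because the transformation is decoupled from `push`, the same `xform` could feed an array, a stream, or a channel - which is exactly why transducers pair well with CSP.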

Modules

I've written in the past about the net gains of large-scale systems that take advantage of decoupling and a sane approach to JavaScript 'modules'. I consider us in a far, far better position now than we were a few years ago, no longer just making do with AMD, RequireJS and the Module pattern. In many cases we've moved beyond this and it's almost passé to frown at someone using a build-step in their authoring workflow. We can thank the abundance of increasingly reliable tooling around Browserify and a plethora of transpilation-friendly ES6 features; super-useful while we wait for modules to eventually ship in browsers.

Whether it's ES6 modules, classes or CJS we have sufficient tooling to gift our projects with strong compositions whether they're client or server-side, isomorphic or otherwise. That's kind of amazing. Don't get me wrong, we have a long road ahead towards maturing the quality of our ecosystems but our composition story for the front-end is strong today.

Side note: JavaScript modules may not always be the best container format for components and their templates, and a lot of people still use additional tooling to load and parse them. HTML Imports are a nice way to package component resources, and they load scripts without blocking the parser (though they still block the load event). I remain hopeful that we'll see usage of imports, modules and interop between the two systems evolve in the future.

The Offline Problem

We don't really have a true mobile web experience if our applications don't work offline. There have been fundamental challenges in achieving this in the past, but things are getting better. This year, APIs in the web platform have continued to evolve in a direction giving us better primitives, most interestingly of late, Service Workers. Service Workers are an API allowing us to make sites work offline by intercepting network requests and programmatically telling the browser what to do with them. They're what AppCache should have been, except with the right level of control. We can cache content, modify what is served and treat the network like an enhancement. You can learn more about Service Workers through Matt Gaunt's excellent Service Worker primer or Jake Archibald's masterful offline patterns article.

In 2015, I would like to see us evolve more routing and state-management libraries to be built on top of Service Workers. First-class offline and synchronization support for any routes you've navigated to would be huge, especially if developers can get them for next to free. This would help us offer significant performance improvements for repeat visits through cached views and assets. Service Workers are also somewhat of a bedrock API and request control is only the first of a plethora of new functionality we may get on top of them, including Push Notifications and Background Synchronization.

Component APIs and Facades

One could argue that the "facade pattern" I've touched on in previous literature is still viable today, especially if you don't allow the implementation details of your component to leak into its public API. If you are able to define a clean, robust interface to your component, consumers of it can continue to utilize it without worrying about the implementation details. Those can change at any time with minimal breakage. An addendum to this could be that this is a good model for framework and library authors to follow for public components they ship. While this is absolutely not tied to Web Components, I've enjoyed seeing the Polymer paper-* elements evolve over time with the machinery behind the scenes having minimal impact on public component APIs. This is inherently good for users. Try not to violate the principle of least astonishment, i.e. the users of your component API shouldn't be surprised by its behaviour. Hold this true and you'll have happier users and a happier team.

Immutable data structures

In previous write-ups on large-scale JS, I haven’t really touched on immutability. If you’ve crossed paths with libraries like immutable-js or Mori and been unclear on where their value lies, a quick primer may be useful. A persistent data structure is one which preserves the previous versions of itself when changed. Data structures like this are immutable (their state cannot be modified once created). Mutations don’t update the in-place structure, but rather generate a new updated structure. Anything pointing to the original structure has a guarantee it won't ever change.

Let's try to rationalize it in the form of a Todo app. Imagine in our app we use a normal JS array for our Todo items. There's a reference to this array in memory and it has a specific value. A user then adds a new Todo item, changing the array. The array has now been mutated. In JavaScript, the in-memory reference to this array doesn't change, but the value of what it is pointing to has.

For us to know if the value of our array has changed, we need to perform a comparison on each element in the array - an expensive operation. Let's imagine that instead of a vanilla array, we have an immutable one. This could be created with immutable-js from Facebook or Mori. Modifying an item in the array, we get back a new array and a new array reference in memory. If we go back and find the reference to our original array in memory is the same, it's guaranteed not to have changed. The value will be the same. This enables all kinds of things, like fast and efficient equality checking. As you're only checking the reference rather than every value in the Todos array, the operation is cheaper.
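
A plain-JS sketch of the payoff (note this copy-on-write approach lacks the structural sharing that makes immutable-js and Mori efficient; it only illustrates the reference-equality idea):

```javascript
// Non-destructive update: returns a NEW array, leaving the original
// untouched. Real persistent structures share most of their internals
// between versions; this sketch just copies.
function addTodo(todos, item) {
  return todos.concat([item]); // new array, new reference
}

const todos = ['Item 1', 'Item 2'];
const next = addTodo(todos, 'Item 3');

// "Did anything change?" is now a single reference comparison, O(1),
// instead of walking every element:
const changed = todos !== next; // true -- and `todos` itself never mutated
```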

As mentioned, immutability should allow us to guarantee a data structure (e.g. Todos) hasn't been tampered with. For example (rough code):

var todos = ['Item 1', 'Item 2', 'Item 3']; // assume an immutable structure
updateTodos(todos, newItem);   // returns a new value; `todos` is unchanged
destructiveOpOnTodos(todos);   // cannot mutate `todos` in place
console.assert(todos.length === 3);

At the point we hit the assertion, it's guaranteed that none of the ops since array creation have mutated it. This probably isn't a huge deal if you're strict about changing data structures, but this updates the guarantee from a "maybe" to a "definitely".

I’ve previously walked through implementing an Undo stack using existing platform tech like Mutation Observers. If you’re working on a system using this, there’s a linear increase involved in the cost of memory usage. With persistent data structures, that memory usage can potentially be much smaller if your undo stack uses structural sharing.

Immutability comes with a number of benefits, including:

  • Typically destructive updates like adding, appending or removing on objects belonging to others can be performed without unwanted side-effects.
  • You can treat updates like expressions as each change generates a value. You get the ability to pass objects as arguments to functions and not worry about those functions mutating the object.
  • These benefits can be helpful for writing web apps, but it's also possible to live without them, as many do.

How does immutability relate to things like React? Well, let's talk about application state. If state is represented by a data structure that is immutable, it is possible to check for reference equality right when making a call on re-rendering the app (or individual components). If the in-memory reference is equal, you're pretty much guaranteed the data behind the app or component hasn't changed. This allows you to bail out and tell React that it doesn't need to re-render.
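
The bail-out check then reduces to a one-line predicate. React's actual hook for this is `shouldComponentUpdate` (and helpers like PureRenderMixin); the function below is just an illustration of the idea:

```javascript
// With immutable state, "did anything change?" is a reference check --
// O(1) no matter how large the state tree is. Illustrative sketch in the
// spirit of React's shouldComponentUpdate.
function shouldRerender(prevState, nextState) {
  return prevState !== nextState;
}

const stateA = { todos: ['Item 1'] };
const stateB = { todos: stateA.todos.concat(['Item 2']) }; // new reference

shouldRerender(stateA, stateA); // false: state untouched, skip the render
shouldRerender(stateA, stateB); // true: something changed, re-render
```

Note this only works if every update genuinely produces a new reference; mutate state in place and the check silently reports "no change".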

What about Object.freeze? Were you to read through the MDN description of Object.freeze(), you might be curious as to why additional libraries are still required to solve the issue of immutability. Object.freeze() freezes an object, preventing new properties from being added to it, existing properties from being removed, and existing properties, or their enumerability, configurability or writability, from being changed. In essence, the object is made effectively immutable. Great, so... why isn't this enough?

Well, you technically could use Object.freeze() to achieve immutability, however, the moment you need to modify those immutable objects you will need to perform a deep copy of the entire object, mutate the copy and then freeze it. This is often too slow to be of practical use in most use-cases. This is really where solutions like immutable-js and mori shine. They also don’t just assist with immutability - they make it more pleasant to work with persistent data structures if you care about avoiding destructive updates.
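To see the copy-mutate-refreeze dance in code (a minimal sketch; a deep structure would need a deep copy, which is where the cost really bites):

```javascript
'use strict'; // in strict mode, writes to frozen objects throw loudly

const todos = Object.freeze(['Item 1', 'Item 2']);

// Every "update" must copy the whole structure, mutate the copy, then
// freeze it again -- an O(n) copy per change, unlike structural sharing.
function addFrozen(list, item) {
  const copy = list.slice(); // shallow copy; nested data would need more work
  copy.push(item);
  return Object.freeze(copy);
}

const next = addFrozen(todos, 'Item 3');
// `todos` still has 2 items and cannot be mutated; `next` is a new
// frozen array with 3. Attempting todos.push('x') throws a TypeError.
```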

Are they worth the effort?

Immutable data structures (for some use-cases) make it easier to avoid thinking about the side-effects of your code. If you're working on a component or app where the underlying data may be changed by another entity, you don't really have to worry if your data structures are immutable. Perhaps the main downside to immutability is the memory overhead, but again, this really depends on whether the objects you're working with contain lots of data or not.

We have a long way to go yet

Beyond an agreement that composition is fundamentally good, we still disagree on a lot. Separation of concerns. Structure. Data-flow. The necessity for immutable data structures. Data-binding (two-way isn't always better than one-way binding and depends on how you wish to model mutable state for your components). The correct level of magic for our abstractions. The right place to solve issues with our rendering pipelines (native vs. virtual diffing). How to hit 60fps in our user-interfaces. Templating (yes, we're still not all using the template tag yet).

Onward we go

Ultimately how you 'solve' these problems today comes down to asking yourself three questions:

  1. Are you happy delegating such decisions and choices to an opinionated framework?
  2. Are you happy 'composing' solutions to these problems using existing modules?
  3. Are you happy crafting (from scratch) the architecture and pieces that solve these problems on your own?

I'm an idiot, still haven't 'figured' this all out and am still learning as I go along. With that, please feel free to share your thoughts on the future of application architecture, whether there is more needed on the platform side, in patterns or in frameworks. If my take on any of the problems in this space is flawed (it may well be), please feel free to correct me or suggest more articles for my holiday reading list below:

Note: You'll notice a lack of content here around Web Components. As my work with Chrome covers articles written on Web Components primitives and Polymer, I feel sufficiently covered (maybe not!), but am always open to new explorations around these topics if folks have links to share.

I'm particularly interested in how we bridge the gaps between Web Components and Virtual-DOM diffing approaches, where we see component messaging patterns evolving here and what you feel the web is still missing for application authors.

@ashumeow commented Dec 9, 2014

Nice! ;)

@karlbright

A great piece of writing and definitely something that has been on my mind as I have been trying to publish and write more modules myself.

The questions you proposed got me thinking. More and more people are building modules and leaning towards composition as a solution to application complexity, and as such, so are the frameworks.

Question 1 suggests delegating these decisions and choices to an opinionated framework, which I think is something that a lot of people choose over composing solutions using existing modules. What is important to note is that the solutions and choices that these frameworks are making are composed of modules themselves.

We're seeing a shift in how people are thinking about application development. Frameworks are becoming less about being a beast that handles everything you can throw at it and provides a solution for every possible issue you may face, and more about providing solutions for specific issues that can be split out from the framework itself.

The facade pattern you mention comes into play here, of course: frameworks can provide this pattern and swap out these modules, and as long as the modules are used within the existing framework "scaffolding", the frameworks can ensure that the public API does not change.

React is a great example of this. It's a very lightweight library, yet it's compared to larger frameworks such as Ember and Angular. The reason for this is that people are creating their own application frameworks around this technology. Arguments around data flow and structure are left to individuals, and as a result, are helping us produce even more modules providing solutions or options in the React "universe".

The magic in abstractions is coming less from the framework, and more from the use of modules and composition.
