
@domenic
Last active July 28, 2016 22:04
Third state concerns

Concerns about the third state

People have raised a variety of concerns about the third state idea, such as in issue #24. The most fundamental concern is about code like

try {
  f(); // or await f() or yield f()
  g();
} catch (e) {
  h();
}

The worry is that, with the introduction of the third state, a cancelation inside f() would mean that neither g() nor h() runs.

This document attempts to address those concerns in two main parts. First, it argues that the concern should not be held strongly. Second, it expands on the arguments in the introduction to the third state concept, arguing that the benefits of a third state are too great to ignore.

Arguments against the concern

This is not actually an invariant

Given the above code sample, it is not actually true that g() or h() will always be called. There are a number of possible reasons for this:

  • In the case of await f(), the promise returned by f() may be forever-pending. This will cause that function to suspend indefinitely.
  • Similarly, in the case of yield f(), the caller driving the returned generator may never call the generator's next() method. This will cause that function to suspend indefinitely.
  • In the case of yield f(), the caller driving the returned generator may call the generator's return() method. This will cause the function to exit, bypassing any catch blocks (but hitting finally blocks), analogous to a cancelation.
  • In the case of a synchronous f(), the script may be aborted. This will cause that function and everything on the stack to terminate without triggering any finally blocks or similar, while still allowing code to run in that event loop when further tasks are enqueued.
  • The script may cause the event loop to exit, e.g. through the window.close() or process.exit() APIs.
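The generator return() case in the list above is observable today. A minimal sketch (the task generator and events log are illustrative names):

```javascript
// Calling gen.return() while the generator is suspended at a yield exits it:
// catch blocks are bypassed, but finally blocks still run.
const events = [];

function* task() {
  try {
    yield "suspended";
    events.push("after yield"); // never reached
  } catch (e) {
    events.push("catch");       // bypassed by return()
  } finally {
    events.push("finally");     // still runs
  }
}

const gen = task();
gen.next();                        // run to the yield and suspend
const result = gen.return("done"); // exit the generator from outside
// events is ["finally"]; result is { value: "done", done: true }
```

This is exactly the "exit, bypassing any catch blocks (but hitting finally blocks)" behavior described above, already shipping in generators.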

This is only a problem when mixing new code with unaware code

Introducing a third state is in no way a backward-incompatible change. No existing code will start behaving differently. The concern only becomes relevant when f is "new" code, which uses a third state, but the surrounding code is "old" code, which is not aware of that possibility.

However, this is a common issue when extending the JavaScript language. Old code may be making assumptions, and when new code is called by old code, those assumptions may not hold. Here are some simple examples from past evolutions of JavaScript:

  • Touching a property would previously not modify global or otherwise shared state. The introduction of getters and setters invalidated any code that assumed this.
  • A variety of invariants used to hold for all objects (such as hasOwnProperty returning true implying getOwnPropertyDescriptor would return an object). The introduction of proxies invalidated any code that assumed this.
  • It used to be the case that typeof results were constrained to a known set; code could do a switch on that known set and perform actions for each, with no need for a default case. The introduction of symbols invalidated any code that assumed the previous constrained set.
  • It used to be the case that Object.prototype.toString could be used as a brand check. The introduction of @@toStringTag invalidated any code that assumed this.
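The typeof example from the list above is easy to reproduce. A sketch (describe is an illustrative name):

```javascript
// Pre-ES6 code could exhaustively switch on typeof's known result set,
// treating the default case as unreachable. Symbols added a new result,
// so the "unreachable" branch now fires.
function describe(x) {
  switch (typeof x) {
    case "undefined": return "nothing";
    case "boolean":   return "flag";
    case "number":    return "numeric";
    case "string":    return "text";
    case "object":    return "object";
    case "function":  return "callable";
    default:          return "unexpected"; // reached for symbols (and later bigint)
  }
}

describe("hi");     // "text"
describe(Symbol()); // "unexpected" -- the old assumption no longer holds
```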

(There are probably more waiting to be listed...)

Old code can always be protected from new code

If you know you are in a scenario where there is old code that cannot cope with a new completion type (despite it already needing to do so for the reasons listed in "This is not actually an invariant"), then you can always protect the old code from the new code. You simply define

function f2() {
  try {
    return f.apply(this, arguments);
  } cancel catch (e) { }
}

and instead of giving the old code f, you give it f2.
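Since cancel catch is proposed syntax, the wrapper cannot run today as written. A rough approximation of the same idea, assuming a hypothetical Cancel class as the brand for the third state (both protect and Cancel are names invented for this sketch):

```javascript
// Approximation of f2 without the proposed `cancel catch` syntax.
// `Cancel` is a hypothetical brand for cancelations (an assumption here).
class Cancel extends Error {}

function protect(f) {
  return function (...args) {
    try {
      return f.apply(this, args);
    } catch (e) {
      if (!(e instanceof Cancel)) throw e; // real errors still propagate
      // swallow the cancelation: old code sees a normal (undefined) return
    }
  };
}
```

Handing old code protect(f) instead of f preserves its assumption that only true exceptions escape.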

Such code is often buggy anyway

In general, such code is trying to emulate a finally block with duplicated calls, and should be using finally instead. Consider a more concrete example:

try {
  doStuff();
  cleanup();
} catch (e) {
  cleanup();
}

Here, if cleanup() throws, it will be executed again! This is clearly not what was intended; instead, the code should use

try {
  doStuff();
} finally {
  cleanup();
}
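The double-execution bug in the first pattern is easy to observe with an instrumented cleanup function (names here are illustrative):

```javascript
// With the try/catch pattern, a cleanup() that throws gets run a second time.
let catchPatternCalls = 0;
function flakyCleanup() {
  catchPatternCalls++;
  if (catchPatternCalls === 1) throw new Error("cleanup failed");
}

try {
  // doStuff() succeeded...
  flakyCleanup();   // ...but cleanup throws...
} catch (e) {
  flakyCleanup();   // ...so the catch block runs it again: the bug
}

// With finally, cleanup runs exactly once no matter what.
let finallyPatternCalls = 0;
try {
  // doStuff();
} finally {
  finallyPatternCalls++;
}
// catchPatternCalls === 2, finallyPatternCalls === 1
```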

Other languages already have the notion of bypassing the default catch block

Notably, I've been informed that Ruby has a whole hierarchy of errors which are not caught by default; catching them requires explicit opt-in.

Arguments for why a third state is essential

Treating cancelation as exceptional has bad real-world consequences

  • @benjamingr's story of false downtime alarms: tc39/proposal-cancelable-promises#14 (comment)
  • A lot of the Node ecosystem assumes that any uncaught exception should cause cleanup and process shutdown/restart. Every library that assumes this (e.g. most HTTP and testing libraries) would need to be updated to account for cancelation not being exceptional.

Evidence from C# indicates that their use of exception types is a developer footgun

The JavaScript community has already moved in this direction for asynchronous cancelation

Existing abrupt completion guards in the spec should propagate cancelation, not an exception.

Dan thinks iterators should catch a task-canceled exception, and that there are probably other places in the spec that should do the same. This is generally not the right behavior; such guards should propagate the cancelation instead.
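The spec's existing abrupt-completion guard for iterators (IteratorClose) already works this way: it runs the iterator's return() method on abrupt completion while letting the completion keep propagating. A sketch, using a thrown error as a stand-in for a cancelation:

```javascript
// When a for-of body completes abruptly, the iterator's return() method is
// called (the spec's IteratorClose guard), and the abrupt completion keeps
// propagating. The argument is that a cancelation should flow through the
// same way, rather than being caught and discarded.
const log = [];
const iterable = {
  [Symbol.iterator]() {
    return {
      next: () => ({ value: 1, done: false }),
      return() {
        log.push("iterator closed");
        return { done: true };
      },
    };
  },
};

try {
  for (const x of iterable) {
    throw new Error("stand-in for cancelation");
  }
} catch (e) {
  log.push("completion propagated");
}
// log: ["iterator closed", "completion propagated"]
```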
