Centril's views on the "Await Syntax Write Up"

in our opinion this — which we will call the "error handling problem" — remains the primary problem that we are trying to resolve.

I don't agree that this is the primary problem; it's one of many problems to resolve. Chaining (and thus not forcing temporaries), general composability, and working well with IDEs are among the notable problems.

Syntactic Sugar Solution:

I think it is a stretch to call this syntactic sugar in the first place. The syntax await? composes await + ?. In my view this is not enough semantic compression to deserve the description "syntactic sugar".

Based on a set of shared values, the language team has consensus that we would like to avoid two of the above options, narrowing our choices down to two broad categories.

I've always said that a postfix sigil is agreeable to me if no one else on the team objects. E.g., I do find the suggestion of foo#? due to @coder543 reasonable. A sigil is certainly more agreeable to me than a prefix keyword, and especially as compared to await?.

We have consensus that this argues strongly against using a sigil syntax, or any selection which does not contain the string "await" as a part of its syntax. For that reason, we have excluded sigil based syntaxes like @ from further consideration. If we adopt a postfix syntax, it will include the await keyword in some way.

I disagree with the rationale that calling the feature "async/await" implies the syntax must have the keyword await in it. Async/await is in my view primarily about semantics, ease of use, and looking more like synchronous code. The main reason I see against some postfix sigil is that it is harder to spot. However, we did find ? acceptable to read (and await is in my view less side-effecting control flow than ? is). In my view the hard-to-spot issue isn't a make-or-break concern for foo#?.

Prefix

The support for . here seems weak. It's not just about not having to parenthesise. The syntax must also be sufficiently unobtrusive to make chaining look decent. The prefix forms do not lend themselves to chaining after awaiting.
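
A minimal sketch of the chaining concern, using the postfix .await spelling for the chained form; fetch is a made-up stand-in, and the prefix variants appear only as hypothetical syntax in the comments:

```rust
// Compile-only sketch; `fetch` is a hypothetical stand-in, not a real API.
async fn fetch(id: u32) -> Result<String, ()> {
    Ok(format!("record {}", id))
}

async fn record_len(id: u32) -> Result<usize, ()> {
    // Postfix: the await sits inside the chain and reads left to right.
    let len = fetch(id).await?.len();

    // A prefix form would force parentheses or a temporary, e.g.
    // (hypothetical syntax):
    //     let len = (await fetch(id))?.len();
    // or:
    //     let record = await fetch(id);
    //     let len = record?.len();
    Ok(len)
}
```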

The argument has been made that because await modifies the user's interpretation of the entire expression it is applied to (i.e. recognizing that that expression will be evaluated as a future), it is valuable for it to appear at the beginning of the expression.

It seems to me that this argument would apply equally to ?, since it modifies the semantics of the entire expression it is applied to, yet we do not write ? in prefix position and we moved away from try!.
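
For comparison, a small illustration of how ? already applies to the whole expression from postfix position, where try! used to wrap it as a prefix macro (the try! form is shown only in a comment since it is deprecated):

```rust
use std::num::ParseIntError;

fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    // The old prefix macro wrapped the whole expression:
    //     let port = try!(s.trim().parse::<u16>());
    // The postfix operator that replaced it applies to the same expression:
    let port = s.trim().parse::<u16>()?;
    Ok(port)
}
```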

but because expressions being awaited are themselves likely to be important (because they perform an expensive operation such as IO), and the await "highlights" them for readers trying to understand the piece of code.

Method chains can be expensive operations as well, yet we do write those postfix and don't even tell the user that they may be expensive since we cannot know that.

Users making this argument find that this helps them to read and understand these expressions because it reduces the degree to which they need to "re-order" the expression into a sequence of steps in their mind.

And critically, await is co-located with ? allowing you to spot them together easily.

It's a point of disagreement how frequently users will want to do this beyond the error handling case, and also whether or not the language should encourage this kind of code.

However, you are not forced to chain just because you can. In my view, being forced to introduce names for temporaries that are not semantically important is not helpful; it makes users introduce bad names. Naming is a hard task in general, and users should not be forced to do it if it isn't necessary. I think forcing temporary let bindings also makes users keep more things in their head (because there are more bindings in scope that can be used). Moreover, temporaries encourage longer functions, due to the increased number of bindings. I generally believe that we should try to facilitate short functions and discourage longer ones.

Another point here is that you wouldn't have needed temporaries in the equivalent sync code. Much of the point of async/await syntax is to look mostly like sync code and so avoiding those temporaries works towards that goal.
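
A sketch of that parallel, assuming a hypothetical async read_to_string helper; the point is only that the postfix form keeps the async version shaped like the sync one:

```rust
use std::io;

// Sync version: no temporaries needed.
fn read_len_sync(path: &str) -> io::Result<usize> {
    Ok(std::fs::read_to_string(path)?.len())
}

// Async version with postfix .await keeps the same shape.
async fn read_len_async(path: &str) -> io::Result<usize> {
    Ok(read_to_string_async(path).await?.len())
}

// Hypothetical async stand-in; a real program would use an async
// runtime's file API here.
async fn read_to_string_async(path: &str) -> io::Result<String> {
    std::fs::read_to_string(path)
}
```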

for example, IDEs can include await in their drop-downs even if it means a rewriting of the expression to insert a prefix await.

I don't think this is a good alternative, however. If the concrete syntax does not match the trigger the IDE works with, then the user will actively need to remember to try .a or otherwise try to call methods. The syntax .await can even lead some users to discover the feature, thus improving learnability.

All unary keyword operators in Rust are prefix

There are exactly 2 stable unary keyword operators in Rust. These operators are:

  • return $expr
  • break $expr

The keywords if, match, let, for, while, loop, etc. are not operators (or if they are, then so is field access itself). Constructs such as loop { ... } and async { ... } might be unary, but they take blocks, not expressions. If you want to use loop { ... } as a precedent, then that is an argument for await { ... } but not await expr.
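
A compile-only sketch of that distinction, with return and break taking expression operands while loop and async take blocks:

```rust
fn keyword_operands(n: u32) -> u32 {
    // `return $expr` and `break $expr` take an expression operand:
    if n == 0 {
        return 0;
    }
    let doubled = loop {
        break n * 2;
    };

    // `loop { ... }` and `async { ... }` take a block, not an expression:
    let _pending = async { doubled + 1 };

    doubled
}
```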

Notably, the operators return and break both return !. This makes them completely useless as postfix operators. However, await is an operation completely unlike return. It takes an object (the future) containing a T and effectfully extracts the T out. The other operator that best fits this description is ?. It too takes an object (the result) containing a T and effectfully extracts the T out. In my view, taking the cue from return instead of ? is misleading the user.
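
A small illustration of that parallel, under the postfix .await spelling: both operators effectfully extract a T from a wrapper, whereas return has type ! and could never yield a value in postfix position:

```rust
use std::future::Future;

// `?` extracts the `T` out of a `Result<T, E>`, with an effect (early return).
fn double(r: Result<u32, String>) -> Result<u32, String> {
    Ok(r? * 2)
}

// `.await` extracts the `T` out of a future, with an effect (suspension).
async fn double_async(f: impl Future<Output = u32>) -> u32 {
    f.await * 2
}

// `return $expr` evaluates to `!`; nothing useful can be chained after it,
// which is why it only makes sense as a prefix form.
fn early(n: u32) -> u32 {
    if n == 0 {
        return 1;
    }
    n
}
```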

These two factors make postfix await syntax a significant divergence from Rust's syntactic history.

In my view it is no more of a divergence than await?, which not only bolts two operations together semantically but also does so syntactically. I think both aspects are unprecedented.

Moreover, prefix await is the syntax chosen in other similar languages (such as C#, JavaScript, and Python)

My assessment of JavaScript developers is that they are on average a more versatile group who adapt to change often. I personally think that the move from await expr to expr.await is rather small in terms of familiarity. Both still use the keyword. If a postfix sigil were to be used, then the familiarity argument for prefix await would be stronger in comparative terms.

For some who hold this form consistency as important, it seems to be of paramount importance, affecting even Rust's credibility with end users as a seriously considered and well designed language.

I feel the same way about feature orthogonality and composability. It seems to me a key aspect of Rust language design, similar to Haskell's, that we value the composability of features and that things can be decomposed.

As mentioned before, the syntactic form await? also does not exist in other languages, so it too is an innovation and "weird".

Even if these choices were all correct, the argument goes, our weirdness budget has been stretched to the point that we must weigh the weirdness of deviation from syntactic precedent in this case even more highly.

I understand the point of being economical with weirdness and complexity. However, await? and await introduce two syntactic forms (which even makes them more complex to implement in libsyntax and in lowering); that means more to learn, and together they arguably result in just as much weirdness.

Supporters of this argument conclude, then, that introducing a piece of syntactic sugar to solve the very common interaction with ?, and accepting what they see as a slight degree of inconvenience in other situations (in some cases, they are also likely to argue, making the code more clear), is better than deviating from syntactic precedent in this way.

Yes, but as mentioned before, the "sugar", if we agree to call it that, introduces syntactic deviation of its own.

Non-orthogonal features, however, require much more effort for developers to learn and understand.

It's not just that they require more effort to understand; it is also the expectation that things work orthogonally, and the surprise when they don't.
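
A sketch of that expectation: with orthogonal pieces, .await and ? each compose with anything of the right shape, and their combination falls out for free rather than needing a fused await? form:

```rust
use std::future::Future;

// Each operator applies on its own wherever the types fit...
async fn plain(f: impl Future<Output = u32>) -> u32 {
    f.await
}

// ...and combining them needs no special fused form.
async fn fallible(f: impl Future<Output = Result<u32, String>>) -> Result<u32, String> {
    let value = f.await?; // `.await`, then `?`, each doing one job
    Ok(value)
}
```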
