
By @domenic
Last active Jan 4, 2016

Comparison of WHATWG Streams and W3C Streams

User-Created Readable or Writable Streams

W3C streams abstract away interaction with the sources and sinks, by saying that they are internal implementation details. They provide no constructor for ReadableStream or WritableStream that would allow you to use those interfaces to wrap arbitrary user-created sources or sinks. In general, the underlying source and sink models are left unspecified.

WHATWG streams very explicitly explain how you can create readable and writable streams wrapping arbitrary sources and sinks, by providing functions to the constructor conforming to a specified interface with access to specified capabilities. Specific platform streams may override the implementation details of the stream, but in that case they will override the constructor. (See "Subclassing Streams".) In short, WHATWG streams put the explanatory primitive first.
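A minimal sketch of what this constructor model enables. The names and shapes below (makeToyReadableStream, start, push, close) are illustrative toys, not the proposal's exact API:

```javascript
// Hypothetical sketch: a readable stream wrapping an arbitrary source.
// The stream hands the source specified capabilities: push and close.
function makeToyReadableStream({ start }) {
  const queue = [];
  let closed = false;
  start(
    (chunk) => queue.push(chunk),   // push: enqueue data from the source
    () => { closed = true; }        // close: signal end of data
  );
  return {
    read() { return queue.shift(); },
    get done() { return closed && queue.length === 0; },
  };
}

// Wrapping a one-time synthetic source:
const s = makeToyReadableStream({
  start(push, close) { push("a"); push("b"); close(); },
});
console.log(s.read(), s.read(), s.done); // "a" "b" true
```

The point is that any user-created source, synthetic or otherwise, can be adapted to the stream interface without platform magic.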

Duplex and Transform Streams

The WHATWG streams spec is not very good about explicitly stating how duplex and transform streams are to be handled. However, we've hashed this out extensively in the issue tracker, and have settled on a design we are very happy with. Both duplex and transform streams are represented by { in, out } pairs, which you can pipe through using a readable stream's pipeThrough method, or manipulate individually. There are plans to provide a simple helper for creating a transform stream pair from a given transform function.
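The shape can be sketched with toy objects. Everything here (makeToyReadable, makeTransformPair) is hypothetical scaffolding to show how { in, out } pairs compose; it is not the spec's implementation:

```javascript
// Toy synchronous streams, only to illustrate the { in, out } shape.
function makeToyReadable(chunks) {
  return {
    pipeTo(dest) {
      for (const c of chunks) dest.write(c);
      if (dest.close) dest.close();
      return dest;
    },
    pipeThrough(pair) {        // pipeThrough written in terms of pipeTo
      this.pipeTo(pair.in);
      return pair.out;
    },
  };
}

// What the planned helper might do: build a transform pair from a function.
function makeTransformPair(transformFn) {
  const buffered = [];
  return {
    in:  { write(c) { buffered.push(transformFn(c)); }, close() {} },
    out: makeToyReadable(buffered),
  };
}

const upper = makeTransformPair((s) => s.toUpperCase());
const collected = [];
makeToyReadable(["a", "b"])
  .pipeThrough(upper)
  .pipeTo({ write(c) { collected.push(c); } });
console.log(collected); // ["A", "B"]
```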

W3C streams do not seem to acknowledge either concept. They include something called a ByteStream, which has both read and write interfaces (but does not subclass either). It is hard to decipher the language surrounding its internal data flow ("Its dataSink is bufferedDataQueue wrapped with a data source wrapper"; "Its dataSource is also bufferedDataQueue"), but it seems to be implying some sort of direct connection between calling write() and data being available for read(). This would be a null transform stream, which is not terribly useful.

Buffering Strategy (Writable Streams)

WHATWG streams can provide varying buffering strategies at creation time, which may be more or less appropriate for given data types. For example, the built-in LengthBufferingStrategy is often appropriate for ArrayBuffers and strings, whereas CountBufferingStrategy is useful for objects of unknown shape; a custom strategy could of course be used by a stream that knows more about its contents. These strategies come in the form of two functions: count(data), which returns how much a piece of data contributes to the internal buffer, and needsMoreData(bufferSize), which returns whether or not the internal buffer needs more data or should be considered full. As data is written into the stream, these functions are consulted, with the stream changing state (from "writable" to "waiting") if needsMoreData returns false.
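The two-function shape is simple enough to sketch. The strategy names come from the proposal, but these toy implementations and the makeBuffer harness are illustrative assumptions:

```javascript
// Toy versions of the two built-in strategy shapes described above.
const lengthStrategy = (highWaterMark) => ({
  count(chunk) { return chunk.length; },              // bytes/chars contributed
  needsMoreData(bufferSize) { return bufferSize < highWaterMark; },
});

const countStrategy = (highWaterMark) => ({
  count() { return 1; },                              // each chunk counts as 1
  needsMoreData(bufferSize) { return bufferSize < highWaterMark; },
});

// A minimal writable-side buffer showing how a stream would consult them.
function makeBuffer(strategy) {
  let size = 0;
  return {
    write(chunk) {
      size += strategy.count(chunk);
      return strategy.needsMoreData(size) ? "writable" : "waiting";
    },
  };
}

const buf = makeBuffer(lengthStrategy(10));
console.log(buf.write("hello"));  // "writable" (buffer at 5 of 10)
console.log(buf.write("world!")); // "waiting"  (buffer at 11, now full)
```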

W3C streams take a different approach. They count on the person writing to a writable stream to provide a cost parameter indicating how much the incoming data takes up in the buffer. If it is not provided, the unspecified mechanisms of the data sink are allowed to compute it (somehow). They also let their awaitSpaceAvailable() method (analogous to WHATWG streams' wait()) return the available space in the buffer. However, there is no way of determining whether you need to call awaitSpaceAvailable() before actually calling it; thus, if space is already available, calling it is wasteful, causing another turn of the event loop.

Buffering Strategy (Readable Streams)

WHATWG streams reuse the same approach for readable streams as they do for writable streams, with a buffering strategy being provided at creation time. As data is pushed into the stream from the underlying source, the strategy's functions are consulted, with the return value of needsMoreData being given as the return value of the push method used by the stream's creator. In this way the stream's creator can be notified of a full buffer and communicate the backpressure signal to the underlying source.
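A toy sketch of that push-side signal. The strategy shape matches the proposal's description; the buffer itself is hypothetical:

```javascript
// Toy readable-side buffer: push() is called by the stream's creator, and
// its return value relays needsMoreData() as the backpressure signal.
function makeReadableBuffer(strategy) {
  const queue = [];
  let size = 0;
  return {
    push(chunk) {
      queue.push(chunk);
      size += strategy.count(chunk);
      return strategy.needsMoreData(size); // false => source should pause
    },
    read() {
      const chunk = queue.shift();
      size -= strategy.count(chunk);
      return chunk;
    },
  };
}

// A count-based strategy: consider the buffer full at 2 chunks.
const rbuf = makeReadableBuffer({
  count: () => 1,
  needsMoreData: (size) => size < 2,
});
console.log(rbuf.push("a")); // true: room for more
console.log(rbuf.push("b")); // false: full, pause the underlying source
```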

W3C streams do not seem to provide a way of communicating backpressure to the underlying source if nobody reads from the stream. They provide many mechanisms for governing how much data to read from the stream, e.g. the mutable pullAmount property and the readUpTo(size) method. But I cannot figure out how, if at all, backpressure (a lack of reading) is communicated.

Sync vs. Async Reads

WHATWG streams follow the model outlined by Isaac on public-webapps, with synchronously-available data and state properties, and the ability to asynchronously poll for new data. This has advantages when data is available synchronously (e.g. in OS-level memory buffers, or stream-level buffers), as it can then be read without inducing artificial microtask delays. Notably, these delays would be compounded at every step in the pipe chain.
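A sketch of the synchronous-inspection model: state and read() are synchronous, and wait() is only needed when nothing is buffered. The shapes are assumed from the description above; this is a toy, not the real API:

```javascript
function makeSyncReadable(chunks) {
  return {
    get state() { return chunks.length > 0 ? "readable" : "waiting"; },
    read() { return chunks.shift(); },       // no microtask delay
    wait() { return Promise.resolve(); },    // poll asynchronously when "waiting"
  };
}

const src = makeSyncReadable(["x", "y"]);
const got = [];
while (src.state === "readable") got.push(src.read()); // drains synchronously
console.log(got); // ["x", "y"]
```

Under an always-async model, each of those reads would cost at least one microtask turn, and the cost compounds at every step of a pipe chain.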

W3C streams have an asynchronous read model, which introduces a delay even when data is available synchronously. The asynchronous model also admits concurrent reads, which W3C streams handle by simply throwing an error; in WHATWG streams they are not possible by construction.

Encoding and Data Types

WHATWG streams are entirely agnostic to the type of data that flows through them. There is no concern for encoding, decoding, validation, MIME types, etc. We believe this approach has proven very successful in the Node.js community, allowing varied pipe chains and transforms to grow into a broad ecosystem. For example, you could produce a stream of string data from a stream of binary data by piping through an appropriate transform stream; this decouples the complexity of codecs from the underlying stream implementation.
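For instance, a byte-to-string step can be built on TextDecoder, whose streaming mode correctly buffers multi-byte sequences split across chunks; the chunking here is contrived for illustration:

```javascript
// Decoding as a transform step, decoupled from the stream itself.
// TextDecoder's { stream: true } mode holds partial multi-byte sequences
// until the remaining bytes arrive.
const decoder = new TextDecoder("utf-8");
const chunks = [
  new Uint8Array([0xe2]),        // first byte of "€" (a 3-byte utf-8 sequence)
  new Uint8Array([0x82, 0xac]),  // the remaining two bytes
];
let text = "";
for (const chunk of chunks) {
  text += decoder.decode(chunk, { stream: true });
}
console.log(text); // "€"
```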

W3C streams are much more concerned with such details of the data flowing through them. Readable streams have two attributes, readBinaryAs and readEncoding, which apply only to binary data. But if they are present, they can mutate the data passing through the stream, changing it from bytes (in what form?) into strings, blobs, or ArrayBuffers. These attributes can be set at any time, even during reading, which seems problematic. Writable streams have writeEncoding, which they do not use to mutate the data, but instead simply pass through to the underlying sink. Readable streams also have a type attribute which is supposed to be a MIME type, but is not used by the specification at all.


Piping to Multiple Destinations

WHATWG streams transparently allow multi-destination piping when using ReadableStream, by automatically introducing a TeeStream in between. (BaseReadableStream provides a lower-level abstraction without this support.)

W3C streams will fail if you try to pipe them to multiple destinations. Instead, you call fork() to create a "copy" of the stream that refers to the same underlying source. See "Data Source Models" for the implications of this.
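The tee approach is easy to sketch: one write fans out to every destination. This toy makeTee is hypothetical, standing in for the TeeStream primitive mentioned above:

```javascript
// Toy tee: a writable that fans each chunk out to several destinations.
function makeTee(destinations) {
  return {
    write(chunk) { for (const d of destinations) d.write(chunk); },
    close() { for (const d of destinations) d.close?.(); },
  };
}

const a = [], b = [];
const tee = makeTee([
  { write: (c) => a.push(c) },
  { write: (c) => b.push(c) },
]);
tee.write("data");
console.log(a, b); // ["data"] ["data"]
```

Note that a tee only has to hold a chunk until every destination has accepted it, whereas fork() ties retention to the slowest reader's position in the whole stream.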

Data Source Models

WHATWG streams allow arbitrary data sources, including synthetic and one-time ones, producing any type of data.

In W3C streams, the forking model necessitates a "data source wrapper for range reference counting on produced bytes." That is, data sources must produce bytes, and must retain those bytes in a quickly-indexable fashion that can be arbitrarily rewound and replayed according to the position of various cursors in its cursorMap. This is also manifest in the StreamReadResult interface, which includes the number of bytes consumed from the source.
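A toy sketch of the retention cost this implies. This is an illustration of the cost, not the spec's actual cursorMap machinery, and all names here are invented:

```javascript
// Toy forkable source: every fork gets its own cursor into the same byte
// store, so bytes must be retained until the slowest cursor passes them.
function makeForkableSource(bytes) {
  const cursors = [];
  function fork() {
    const cursor = { pos: 0 };
    cursors.push(cursor);
    return {
      readUpTo(n) {
        const out = bytes.slice(cursor.pos, cursor.pos + n);
        cursor.pos += out.length;
        return out;
      },
    };
  }
  // Bytes that cannot yet be freed, because some fork may still read them:
  const retained = () => bytes.length - Math.min(...cursors.map((c) => c.pos));
  return { fork, retained };
}

const fsrc = makeForkableSource([1, 2, 3, 4]);
const f1 = fsrc.fork(), f2 = fsrc.fork();
f1.readUpTo(4);                // f1 consumes everything
console.log(fsrc.retained());  // 4: f2 has read nothing, so all 4 bytes stay
```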

Abort Signals

WHATWG streams is careful to specify how abort signals propagate up and down a pipe chain. It also uses them to clear internal buffers and ensure no resources are leaked.

W3C streams includes abort signals, but does not do anything with them beyond forwarding them to the underlying source or sink. In particular, it is unclear what happens to ongoing reads, writes, or pipes.

Public API Usage vs. Magic

In WHATWG streams, everything is explained in JavaScript. Interactions between the underlying source and sink are mediated via JavaScript functions. The pipeTo method uses only public methods on its destination. pipeThrough is written in terms of pipeTo. Multi-pipe is done in terms of an exposed TeeStream primitive. And so on. This allows user-created pieces of the ecosystem to be introduced, as they can be with other ECMAScript primitives, without getting caught out by concerns about internal state accessible only to the implementation. Everything can be observed, tested, intercepted, modified, and explained, at each step of the way.
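In that spirit, a pipe loop that touches only public methods on both ends can be sketched as follows. The state/wait/read/write names are assumed from the proposal's described shape, and the toy endpoints exist only to exercise the loop:

```javascript
// Sketch of a pipe written entirely against public API surface.
// (A fuller version would also await dest.wait() when the sink is full.)
async function pipeTo(src, dest) {
  while (true) {
    if (src.state === "waiting") await src.wait();
    if (src.state === "closed") { dest.close(); return; }
    dest.write(src.read());
  }
}

// Toy endpoints to exercise it:
function makeToySource(chunks) {
  return {
    get state() { return chunks.length ? "readable" : "closed"; },
    read() { return chunks.shift(); },
    wait() { return Promise.resolve(); },
  };
}

const received = [];
const sink = { write: (c) => received.push(c), close: () => received.push("<eof>") };
pipeTo(makeToySource([1, 2]), sink).then(() => console.log(received)); // 1, 2, "<eof>"
```

Because the loop uses only read/write/close, a user-created source or sink slots in exactly like a platform-provided one, and every step can be observed or intercepted.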

W3C streams leans heavily on underspecified data source and sink models. The pipe method directly transfers to the underlying sink from the read stream, bypassing any public API. We are trying to avoid this kind of magic in the web platform these days.

Other Use Cases

WHATWG streams acknowledges, and describes solutions for, the following use cases that W3C streams does not address:

  • Passive data watching, via ReadableStreamWatcher and WritableStreamWatcher
  • writev support, via CorkableWritableStream

It's not entirely fair to count these as points for WHATWG streams, however, as they have not been fleshed out in detail yet.

Integration with the Rest of the Platform

The W3C streams spec includes a useful section on APIs that could benefit from streams. We largely agree with this list. However, we are not sure what place such non-normative wishlists have in specs.

WHATWG streams is based on an extensible web approach, of providing a solid base primitive which can be used, in user-space, to wrap the various pseudo-streams already provided by the web platform. That is, we can see which stream wrappers become most used in user space, before mandating (or suggesting) them in standards-space.

However, there is a flaw in a purely user-driven approach: many of the pseudo-streams exposed on the web platform today do not provide any means to send backpressure signals. Exposing such backpressure signals, either through lower-level APIs or through actual streams, is going to be necessary for a flourishing stream ecosystem.


commented Jan 22, 2014

> But I cannot figure out how, if at all, backpressure (a lack of reading) is communicated.

For non-synthetic streams (e.g., sockets), a "lack of reading" is all that's required for backpressure to be communicated. You just don't call read(2) on the fd, and so any data available sits and waits in the network layer buffer. For a protocol transform stream (e.g., an http message being parsed off of an underlying socket), you only read(2) on the underlying source fd when your consumer needs some data. So, backpressure is handled here, albeit in a way that is less obvious for synthetic streams, and (imo) harder to reason about.

Also, it only makes sense for byte/string streams, not for object streams.

> These attributes [readBinaryAs and readEncoding] can be set at any time, even during reading, which seems problematic.

You are too much a gentleman by half. They don't "seem problematic". They are absolutely unacceptable and unsafe for the only reasonable use anyone has for them: decoding bytes as unicode code points. What happens when you have a 3-byte utf8 sequence, and you read the first byte of it as binary, and then try to interpret the 2 remaining as utf8? Or for that matter, when you read all three as unicode, but one at a time?

Anyone who calls this an edge case has not spent much time with Node. It's not an edge case. It's the normal case, and if unaddressed, it will lead to serious security exploits in real world applications. I'm not being inflammatory here, I'm speaking from real personal experience. We must not allow this. This is the worst part of the w3c streams proposal.
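The hazard described is easy to demonstrate: peel one byte off a 3-byte utf-8 sequence "as binary", and the remainder no longer decodes. This sketch uses TextDecoder's default (non-streaming, non-fatal) mode:

```javascript
const euro = new Uint8Array([0xe2, 0x82, 0xac]); // "€" encoded as utf-8
// Consume the first byte "as binary", then try to decode the rest as utf-8:
const remainder = euro.slice(1);
const decoded = new TextDecoder("utf-8").decode(remainder);
console.log(decoded === "€"); // false: the character is destroyed
console.log(decoded);         // replacement characters (U+FFFD), not "€"
```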

> That is, data sources must produce bytes, and must retain those bytes in a quickly-indexable fashion that can be arbitrarily rewound and replayed

This defeats the purpose of streams entirely. How does one "rewind" a never-ending video stream?
