Inspired by whatwg/streams#253 (comment), but way updated
Use cases:

- (all) Want to read a 10 MiB file into a single ArrayBuffer
- (chunkwise) Want to read a 10 MiB file, 1 MiB at a time, re-using a single 1 MiB ArrayBuffer, and calling `processChunk` on each chunk before reading the next one (a rough sketch follows this list)
- (ping-pong) Want to read a 10 MiB file, 1 MiB at a time, re-using two separate 1 MiB ArrayBuffers, because you are asynchronously processing (`processChunk`) the 1 MiB chunks and want to get a potential 1 MiB head start in parallel with the async processing
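For concreteness, here is a rough sketch of the chunkwise pattern. The `readInto(ab, offset, bytesDesired)` signature, its fulfillment value (number of bytes read), and the zero-bytes-means-EOF convention are illustrative assumptions, not settled parts of any of the designs summarized below.

```js
// Hypothetical sketch of the chunkwise use case. Assumes readInto() fulfills
// with the number of bytes read once the requested region has been filled,
// and that 0 signals end of stream; both are assumptions for illustration.
const ONE_MIB = 1024 * 1024;

async function readChunkwise(source, processChunk) {
  const ab = new ArrayBuffer(ONE_MIB);

  while (true) {
    const bytesRead = await source.readInto(ab, 0, ONE_MIB);
    if (bytesRead === 0) {
      break; // end of stream
    }
    // Process the chunk before issuing the next read, reusing the same buffer.
    processChunk(new Uint8Array(ab, 0, bytesRead));
  }
}
```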
Note: we're using ES2016 `async`/`await` syntax so that we don't have to do recursion with promises, but can instead just use loops. This shouldn't really bias the comparison, as all of these designs involve the same number of promises.
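For comparison, here is the same hypothetical chunkwise loop written with explicit promise recursion; it creates the same number of promises, only the control flow reads differently.

```js
// The same hypothetical chunkwise loop as above, written with promise
// recursion instead of async/await. ONE_MIB and the readInto() assumptions
// are the same illustrative ones used in the previous sketch.
function readChunkwisePromises(source, processChunk) {
  const ab = new ArrayBuffer(1024 * 1024);

  const pump = () => source.readInto(ab, 0, ab.byteLength).then(bytesRead => {
    if (bytesRead === 0) {
      return undefined; // end of stream
    }
    processChunk(new Uint8Array(ab, 0, bytesRead));
    return pump();
  });

  return pump();
}
```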
Summary:

- `async-readinto-no-transfer.js` was my initial idea, which causes observable data races (sketched below) and so is a no-go.
- `setallocator.js` was based on a design of @wanderview, but ends up being awkward due to the pause/resume semantics that need to be added. It also has observable data races as written; I haven't yet tried writing an updated version without them.
- `feed.js` was based on an idea of @tyoshino.
- `wait(ab).js` is a simplification of `feed.js`, and is where I've landed for now.
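To make the data-race concern concrete, here is a hedged illustration of why a `readInto()` that does not transfer the buffer exposes observable races; the signature is the same illustrative one as above, not the actual contents of that file.

```js
// Illustrative only: why a readInto() that does NOT transfer the buffer can
// expose an observable data race. The readInto() signature is an assumption.
const ONE_MIB = 1024 * 1024;

async function demonstrateRace(source) {
  const ab = new ArrayBuffer(ONE_MIB);
  const view = new Uint8Array(ab);

  const promise = source.readInto(ab, 0, ONE_MIB);

  // The caller still holds `ab` while the implementation is filling it,
  // potentially from another thread. This synchronous read can therefore
  // observe partially-written bytes before the promise settles.
  console.log(view[0], view[ONE_MIB - 1]);

  await promise;
  // Transferring `ab` into readInto (detaching it from the caller) avoids
  // this, since the caller can no longer look at the memory while it is
  // being filled.
}
```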
Also of note: there is an idea I didn't write out here, which is an async `read(sourceAB, offset, bytesDesired) -> Promise<{ result, bytesRead }>`, where `result` is a transfer of `sourceAB`. It is a bit more powerful in that it allows multiple reads into the same pre-allocated backing memory (albeit at the cost of a temporary variable that ends up holding a detached ArrayBuffer after each read). However, it is so awkward that I don't think it's really worth considering.
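To show where the awkwardness comes from, here is a speculative sketch of that alternative applied to the "all" use case; the variable names and the zero-bytes EOF convention are assumptions made for illustration.

```js
// Speculative sketch of the read(sourceAB, offset, bytesDesired) idea.
// `result` is a transfer of `sourceAB`, so after each call the variable that
// held the source buffer refers to a detached ArrayBuffer.
const ONE_MIB = 1024 * 1024;

async function readAllIntoOneBuffer(source) {
  let ab = new ArrayBuffer(10 * ONE_MIB);
  let offset = 0;

  while (offset < 10 * ONE_MIB) {
    const { result, bytesRead } = await source.read(ab, offset, ONE_MIB);
    if (bytesRead === 0) {
      break; // end of stream (assumed convention, for illustration)
    }
    // At this point the old `ab` reference is detached; `result` is the same
    // backing memory with the requested region filled. Re-point `ab` at it so
    // the next iteration keeps reusing the pre-allocated memory.
    ab = result;
    offset += bytesRead;
  }

  return new Uint8Array(ab, 0, offset);
}
```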
Is the intended semantics of `readInto` in this example that it fulfills the returned promise only once the specified region has been fully filled? Or are the details omitted for simplicity of the examples?
In chunkwise, `readInto()` is invoked with `(ab, 1 MiB, 1 MiB)`, `(ab, 2 MiB, 1 MiB)`, ..., but `ab` is only `ONE_MIB` long. Is that a typo?