Imagine a scenario where we dispatch two requests: one to fetch our Todo, named Q1, and one to toggle our todo to completed, named M1.
Q1 will return:

```js
{
  id: 1,
  name: 'make graphcache updates idempotent',
  completed: false,
}
```
M1 will return:

```js
{
  id: 1,
  name: 'make graphcache updates idempotent',
  completed: true,
}
```
Let's assume we are refreshing this todo in the background because we have a cached result and a cache-and-network policy. When this page is shown, Q1 dispatches; we then tick off our todo, making M1 dispatch.
This results in a race: if M1 completes before Q1, then Q1's stale result arrives last and overwrites the cache, so we'll look at erroneous data where completed is false!
When we are about to forward an operation to the next exchange (most commonly fetch), we add the operation.key to a Set named inFlightKeys.
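A minimal sketch of that bookkeeping, assuming a simplified Operation shape (the names forwardOperation and inFlightKeys are illustrative, not graphcache's actual API):

```typescript
// Track the keys of operations we have forwarded but not yet resolved.
const inFlightKeys = new Set<number>();

interface Operation {
  key: number;
}

function forwardOperation(op: Operation): void {
  // Record the key before handing the operation to the next exchange.
  inFlightKeys.add(op.key);
  // ...forward to the next exchange (e.g. fetch) here...
}

forwardOperation({ key: 1 }); // Q1
forwardOperation({ key: 2 }); // M1
// inFlightKeys now holds both keys until their results come back.
```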
When we get the first result, we add it to an object (or Map) that maps from key to result; additionally, this schedules a task to flush the inFlightKeys (and results) to our cache.
When this timeout is reached, we first check whether we have a result for every key; if so, we empty the inFlightKeys (and results) and update the cache in order.
When we don't have all results, we keep everything around but apply, in order, the results we already can; the next result to arrive will be flushed together with what has been kept around.
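The whole mechanism can be sketched as follows. This is a simplified model, not graphcache's implementation: dispatch, onResult, flush, and applied are hypothetical names, and the real exchange would schedule flush as a task rather than call it synchronously.

```typescript
const inFlightKeys = new Set<number>();
const results = new Map<number, unknown>();
const order: number[] = [];    // keys in dispatch order
const applied: unknown[] = []; // stand-in for writes applied to the cache

function dispatch(key: number): void {
  inFlightKeys.add(key);
  order.push(key);
}

function flush(): void {
  if (results.size === inFlightKeys.size) {
    // We have every result: apply them in dispatch order and reset.
    for (const key of order) applied.push(results.get(key));
    order.length = 0;
    inFlightKeys.clear();
    results.clear();
  } else {
    // Apply only the leading results we already have, in order;
    // keep the rest around until the missing results arrive.
    while (order.length > 0 && results.has(order[0])) {
      const key = order.shift()!;
      applied.push(results.get(key));
      results.delete(key);
      inFlightKeys.delete(key);
    }
  }
}

function onResult(key: number, data: unknown): void {
  results.set(key, data);
  // In the real exchange this flush would be a scheduled task.
  flush();
}

// The race from above: M1's result beats Q1's.
dispatch(1); // Q1
dispatch(2); // M1
onResult(2, { completed: true });  // M1 arrives first: nothing applied yet
onResult(1, { completed: false }); // Q1 arrives: both applied, Q1 before M1
// applied ends with M1's data, so the cache shows completed: true.
```

Because Q1's stale result is always written before M1's, the mutation's data wins regardless of network ordering, which is exactly the idempotency this change is after.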