This gist provides wrappers around java-dataloader's classes to make them compatible with coroutines, as well as a "dispatcher" that provides a mechanism for dispatching dataloaders enqueued during a GraphQL operation execution.
For context on why this is necessary, see this issue on graphql-java and specifically this comment with a solution that Apollo uses for their own GraphQL services.
The crux of the issue is that graphql-java's dataloader dispatching mechanism assumes one dataloader per field, and does not allow mixing coroutines and dataloaders or composing multiple dataloaders.
The existing dataloader instrumentation keeps track of how many times `load()` is called when executing resolvers at one level of field resolution. After resolving all fields in one level, it then calls `dispatch()` the same number of times.
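The counting approach can be sketched roughly like this (a simplified illustration of the idea, not graphql-java's actual instrumentation code; the class and method names here are made up):

```kotlin
import org.dataloader.DataLoader

// Hedged sketch: count load() calls at the current level, then dispatch
// the same number of times once the level's resolvers have all run.
class CountingInstrumentationSketch {
    private var loadCallsThisLevel = 0

    // Called (hypothetically) whenever a resolver invokes load().
    fun onLoadCalled() {
        loadCallsThisLevel++
    }

    // Called (hypothetically) once every field at this level has resolved.
    fun onLevelCompleted(dataLoader: DataLoader<*, *>) {
        repeat(loadCallsThisLevel) { dataLoader.dispatch() }
        loadCallsThisLevel = 0
    }
}
```

The fragility follows directly from this design: any `load()` call the counter never sees is a `dispatch()` that never happens.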
If you call the dataloader in a suspended function, the graphql-kotlin library wraps it in a future and the inner dataloader isn't visible to the instrumentation. If you call another dataloader in sequence (i.e., with `thenAccept`), the second dataloader isn't tracked because the instrumentation has moved on to the next level. In either case, the GraphQL executor hangs.
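The two failure modes above look something like this in resolver code (illustrative only; `Book`, `Author`, `Publisher`, and the resolver signatures are invented for the example):

```kotlin
import java.util.concurrent.CompletableFuture
import kotlinx.coroutines.future.await
import org.dataloader.DataLoader

data class Book(val authorId: String)
data class Author(val publisherId: String)
class Publisher

// 1) Dataloader inside a suspend resolver: graphql-kotlin wraps the whole
//    body in a future, so the inner load() is invisible to the counting
//    instrumentation and its dispatch() never fires.
suspend fun author(book: Book, loader: DataLoader<String, Author>): Author =
    loader.load(book.authorId).await()

// 2) Composed dataloaders: by the time the first future completes, the
//    instrumentation has moved on to the next level, so the second load()
//    is never counted and never dispatched.
fun publisher(
    book: Book,
    authors: DataLoader<String, Author>,
    publishers: DataLoader<String, Publisher>,
): CompletableFuture<Publisher> =
    authors.load(book.authorId)
        .thenCompose { author -> publishers.load(author.publisherId) }
```

In both cases the returned future simply never completes, which is why the executor hangs rather than erroring.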
This is very different from the original JavaScript implementation of dataloader, which uses Node's `nextTick` or the browser's `setTimeout` to enqueue a dispatch after some amount of work is done. Single-threaded, evented runtimes are way easier!
The Kotlin-friendly solution is to use a debounced flow to enqueue and dispatch work without an event loop. Each time we call `load()`, we emit a value on the flow, which is simpler than counting calls per level.
Dave Glasser's insight was that `flow { }.debounce()` allows us to enqueue several calls to `load()` before dispatching the dataloaders, fulfilling the promises for each `load()` in a batch. Dataloaders are used to batch network calls and database queries that generally take many milliseconds, so a one-millisecond delay while we enqueue `load()` calls is acceptable.
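A minimal sketch of the debounced-dispatch idea, assuming kotlinx.coroutines and java-dataloader (the `DataLoaderDispatcher` name and its shape are illustrative, not this gist's actual API):

```kotlin
import java.util.concurrent.CompletableFuture
import kotlin.time.Duration.Companion.milliseconds
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.FlowPreview
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.debounce
import kotlinx.coroutines.launch
import org.dataloader.DataLoader
import org.dataloader.DataLoaderRegistry

@OptIn(FlowPreview::class)
class DataLoaderDispatcher(
    private val registry: DataLoaderRegistry,
    scope: CoroutineScope,
) {
    // Extra buffer capacity so tryEmit() from load() never suspends or drops.
    private val signals = MutableSharedFlow<Unit>(extraBufferCapacity = Int.MAX_VALUE)

    init {
        scope.launch {
            // Once 1 ms passes with no new load() signals, dispatch every
            // registered dataloader, completing the enqueued futures in batches.
            signals.debounce(1.milliseconds).collect { registry.dispatchAll() }
        }
    }

    // Wrap each load() so it signals the flow before returning the future.
    fun <K, V> load(loader: DataLoader<K, V>, key: K): CompletableFuture<V> {
        val future = loader.load(key)
        signals.tryEmit(Unit)
        return future
    }
}
```

Because the debounce window resets on every emission, a burst of `load()` calls from one level of resolvers collapses into a single `dispatchAll()`, regardless of whether those calls came from suspend functions, chained futures, or multiple dataloaders.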