Optimistic, Offline-first apps using serverless functions and GraphQL

now published as https://www.swyx.io/writing/svelte-amplify-datastore


Some thoughts on the challenges of the first two and possibly solving them with the latter two.

Optimistic

In a world where API latency is unpredictable, the way to make user interactions seem instant is essentially by lying to the user. Most implementations of optimistic updates work like this:

  • Duplicate, in clientside code, what the result of the interaction would be, while sending the interaction off to the server.
  • (optional) If this succeeds, the legitimate result may replace the clientside simulated result.
  • If this fails, a notification is shown and the result is reverted.
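The three steps above can be sketched as follows. This is a minimal illustration, not a prescribed implementation; the todo shape, the temporary-id scheme, and `sendToServer` are all hypothetical stand-ins.

```typescript
// Minimal sketch of the optimistic-update pattern described above.
type Todo = { id: string; text: string; pending: boolean };

let todos: Todo[] = [];

async function addTodoOptimistically(
  text: string,
  sendToServer: (text: string) => Promise<Todo>
): Promise<void> {
  // 1. Duplicate the expected result clientside, marked as tentative.
  const tempId = `temp-${Date.now()}`;
  todos.push({ id: tempId, text, pending: true });
  try {
    // 2. On success, the legitimate result replaces the simulated one.
    const saved = await sendToServer(text);
    todos = todos.map((t) =>
      t.id === tempId ? { ...saved, pending: false } : t
    );
  } catch {
    // 3. On failure, notify and revert.
    todos = todos.filter((t) => t.id !== tempId);
    console.warn(`Could not save "${text}" - reverted.`);
  }
}
```

Even this toy version hints at the coordination burden: every other piece of state that reads `todos` has to tolerate tentative entries.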

Pulling this off well is tremendously hard to do:

  • Design considerations of whether to make it clear the optimistic result is not final, and how to revert on failure
  • Authentication may expire, or APIs may hit other limits
  • Properly keeping in sync the rest of the app, which may need to know about this update
  • Firing off multiple interactions that depend on each other, where some may fail and some may succeed, possibly arriving at the API out of sequence.
  • State changes on the serverside that may impact the results of the user's interaction (for example, from other users)
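The out-of-sequence problem in particular is often tackled by serializing dependent interactions through a queue. A hedged sketch, with hypothetical `Op` and `send` shapes:

```typescript
// Serialize dependent interactions so they cannot reach the API out of order.
type Op = { name: string; payload: unknown };

class OpQueue {
  private ops: Op[] = [];

  enqueue(op: Op): void {
    this.ops.push(op);
  }

  // Flush strictly in order; stop at the first failure so dependent ops
  // are never sent after a prerequisite has failed.
  async flush(send: (op: Op) => Promise<void>): Promise<Op[]> {
    while (this.ops.length > 0) {
      try {
        await send(this.ops[0]);
        this.ops.shift();
      } catch {
        break; // remaining ops stay queued for a later retry
      }
    }
    return this.ops; // whatever is still unsent
  }
}
```

This buys ordering at the cost of throughput, and it still says nothing about reverting the optimistic results of the ops left in the queue.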
Paradigm 1: Client <-> Server

Against the sole benefit of "feeling instant", the engineering challenge of coordinating all these cases can easily outweigh the goal.

Offline-first

A constraint that can simplify the design and engineering of Optimistic UI is the idea of Offline-first apps. This concept is still very new and not that popular with webapps; traditionally it has had more to do with local storage and manipulation of data (and subsequent syncing). The usage of service workers to do this gives the concept a lot of overlap with Progressive Web Apps.

Here the challenge is to download some subset of data that is likely to be useful, as well as being able to locally operate on that data, while being able to sync back and forth with the data store.

However, simply having an explicit layer to control syncing on one side (facing the server) and updates on the other (facing the client) with explicit global knowledge of whether we are in online or offline state can make the programming model a lot clearer. The remaining challenge is having to duplicate the logic between server and service worker.

Paradigm 2: Client <-> Service Worker <-> Server
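The explicit layer in Paradigm 2 can be reduced to a single routing decision: given global knowledge of connectivity, either face the server or face the local store. A sketch under those assumptions (the names are illustrative, not a real API; in a real service worker this would sit inside a `fetch` event listener):

```typescript
// One function with explicit global knowledge of online/offline state:
// forward to the server when online, answer from local data when offline.
type Handler = (path: string) => Promise<string>;

function makeSyncLayer(opts: {
  isOnline: () => boolean;
  fromServer: Handler; // facing the server: real fetch + cache update
  fromCache: Handler;  // facing the client: local data only
}): Handler {
  return async (path) =>
    opts.isOnline() ? opts.fromServer(path) : opts.fromCache(path);
}
```

The clarity comes from the single branch point; the duplication problem is that `fromServer` and `fromCache` must encode the same business logic twice.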

Dual-purpose Serverless functions

This seems to be an interesting use case for serverless functions. Given a serverless setup, we already have functions that are written to be small and stateless. What if we deployed them in both the serverside API and the service worker to create an offline experience?

This sounds so stupid as to be ridiculous. Many serverless functions ping other APIs; they won't work offline anyway! In particular, how will the functions interact with the datastore, which is also likely to be an API?

I don't know for sure. But if we were able to lock down what you can do with these "dual purpose" serverless functions, and define a sandboxed subset of them; could it work? If the datastore offered both an API and a client-only version, could it work? How much do we have to cut away, and is what remains substantial enough to make the restrictions worth it?

Paradigm 3: Client <-> Service Worker (running serverless functions) <-> Server (running serverless functions)

*functions are as far as possible the same on both sides*
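One way such a dual-purpose function might look: write the handler as a pure function that takes its datastore as a parameter, so the same code can run behind an API gateway (backed by the real database) or inside a service worker (backed by IndexedDB or an in-memory store). This is a sketch under that assumption; every name here is hypothetical.

```typescript
// The datastore interface the sandbox would have to guarantee on both sides.
interface DataStore {
  get(key: string): Promise<string | undefined>;
  put(key: string, value: string): Promise<void>;
}

// The one function, written once, deployed on both server and service worker.
async function saveNote(
  store: DataStore,
  id: string,
  text: string
): Promise<string> {
  await store.put(`note:${id}`, text);
  return `note:${id}`;
}

// Client-only datastore version, for the service worker deployment.
function memoryStore(): DataStore {
  const m = new Map<string, string>();
  return {
    get: async (k) => m.get(k),
    put: async (k, v) => {
      m.set(k, v);
    },
  };
}
```

The "cutting away" question then becomes concrete: anything the function needs beyond the `DataStore` interface is exactly what breaks the dual deployment.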

GraphQL as constraints?

One way to dramatically lock down the surface area of REST endpoints is to only communicate back and forth between client and service worker and server with GraphQL. For queries and mutations that work within our sandbox, we could just execute the exact same resolvers on service worker and on server. For queries and mutations that require mocking an optimistic response, we could simply write another resolver that would only work for the liminal optimistic period before being replaced by the real response.
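The two kinds of resolvers described above can be sketched as plain functions keyed by environment; the shapes are hypothetical, and a real implementation would hang these off a GraphQL schema.

```typescript
// Same schema, resolved differently per environment.
type Resolver = (args: { text: string }) => {
  id: string;
  text: string;
  optimistic: boolean;
};

// Runs identically on server and service worker (inside the sandbox).
const addComment: Resolver = ({ text }) => ({
  id: `c-${text.length}`, // stand-in for a real id scheme
  text,
  optimistic: false,
});

// Only used during the liminal period, before the real response replaces it.
const addCommentOptimistic: Resolver = ({ text }) => ({
  id: "pending",
  text,
  optimistic: true,
});

// Pick the resolver by state; the client sees one schema either way.
function resolverFor(state: "online" | "awaiting"): Resolver {
  return state === "awaiting" ? addCommentOptimistic : addComment;
}
```

The strict schema is what makes the swap safe: the optimistic resolver cannot drift from the real one's shape without failing validation.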

Essentially we use the fact that GraphQL enforces strict schemas but can be resolved in different ways to accomplish our task. I should emphasize that this is by no means a necessary part of the solution; it simply works out that graph-based resolution makes it easier to get multiple dependencies correct for a solution like the one I propose, rather than relying on wild-west serverless function implementations.

Paradigm 4: Client <-GraphQL-> Service Worker (running local resolver) <-GraphQL-> Server (running serverless resolver)

*resolver functions are as far as possible the same on both sides but allowed to differ*

I have no idea

All of the above is just the result of late-night musing. I don't know if this can be done, or is worth doing. Just felt like writing it out for future reference.


chmelevskij commented Mar 21, 2019

While Dual-purpose Serverless functions sounds like the wild wild west, it seems like something that could be abstracted by a framework to lower the entry barrier, especially for mid-sized apps, making it quite seamless to use.

swyxio commented Sep 9, 2020
