
@AndrewIngram
Last active January 24, 2016 19:52
GraphQL Steps

The idea of these steps is that they can be implemented progressively and add value at each step.

  1. Your current approach is to talk to REST APIs from your client-side code
  2. Build your GraphQL server
  3. Instead of making multiple REST requests for a page load, make a single GraphQL query
  4. Develop a means of composing GraphQL queries out of fragments. There are a few existing solutions for this, but they're not particularly polished
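Step 4 can be sketched with plain string composition: each component exports a fragment, and the route assembles them into one query. This is a minimal, hypothetical sketch — the fragment names, fields, and types below are invented for illustration.

```javascript
// Hypothetical fragments, one per component. Names and fields are invented.
const userFragment = `
  fragment UserCard_user on User {
    id
    name
    avatarUrl
  }
`;

const postsFragment = `
  fragment PostList_posts on User {
    posts {
      id
      title
    }
  }
`;

// The route-level query spreads each component's fragment, then appends the
// fragment definitions so the final document is self-contained.
const pageQuery = `
  query ProfilePage($userId: ID!) {
    user(id: $userId) {
      ...UserCard_user
      ...PostList_posts
    }
  }
  ${userFragment}
  ${postsFragment}
`;
```

The existing composition libraries mostly automate exactly this: collecting fragments from a component tree and de-duplicating them into one query document.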

At this point you have a pretty nice solution for fetching data, but no strategy for caching or semantics surrounding that data. There are also questions around how to minimise data fetching on navigation when using client-side routing.

  1. Client-side routing optimisation is potentially straightforward (if you're good at working with trees), but might become impractical depending on how other problems are solved. The basic idea: build the queries for the current and next routes, convert them to ASTs (this is easy using graphql-js), and create a diff query that asks only for the data the next route needs but the current route didn't fetch. Then merge the response with the response from the previous request (which you hopefully kept around).

  2. If you want to add the data to client-side stores, such as a Flux implementation, you need to do some extra legwork. Essentially you need to know which types of data represent objects. This is where you start mimicking Relay's node/connection stuff (or you could just use it directly; it's pretty good). You basically need to convert your query to an AST, instrument it with fields like id and __typename for anything representing an object, and then, when the data comes back, you have enough information to know what it means. The catch is that, at the point you instrument the query, you need access to your schema. Relay solves this at build time using a babel plugin, avoiding having to bundle a potentially massive schema. This is a good solution, but it means you can't do any dynamic queries; they're essentially locked in at build time.

  3. Further steps are caching, mutation APIs and so on. I'm not yet sold on the initial benefits of caching, because there's no pattern for cache invalidation yet (if we look at Relay). I think query diffing at the route level (which Relay can't do yet) might be enough of a benefit.
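The route-level diffing in step 1 can be sketched without any library. This is a minimal, dependency-free sketch: queries are modelled as plain nested objects (field name → sub-selection, or `true` for a leaf) rather than real GraphQL ASTs. In practice you would `parse` both route queries with graphql-js, diff the resulting selection sets, and `print` the diff query; this sketch also ignores fragments, aliases, and arguments.

```javascript
// Diff two selection trees: return only the fields `next` needs
// that `current` didn't already fetch.
function diffSelections(current, next) {
  const diff = {};
  for (const [field, sub] of Object.entries(next)) {
    if (!(field in current)) {
      diff[field] = sub; // whole subtree is missing
    } else if (sub !== true) {
      // Field exists on both routes; recurse to find missing children.
      const childDiff = diffSelections(current[field], sub);
      if (Object.keys(childDiff).length > 0) diff[field] = childDiff;
    }
  }
  return diff;
}

// Current route fetched viewer.name and feed; the next route needs
// viewer.name, viewer.email and notifications.
const currentQuery = { viewer: { name: true }, feed: { id: true } };
const nextQuery = {
  viewer: { name: true, email: true },
  notifications: { id: true },
};

const diff = diffSelections(currentQuery, nextQuery);
// diff → { viewer: { email: true }, notifications: { id: true } }
```

The diff query is what actually goes over the wire; its response is then merged into the retained response from the previous route.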

My ideal initial library would provide APIs for composition, diffing and instrumentation, but without any assumptions that you're using React, Flux, React-Router etc. — i.e. a purely functional library.
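The instrumentation piece can be sketched in the same dependency-free style. This is a hypothetical sketch: selections are the same nested-object stand-in as above, and the "schema" is just a Set naming which fields return object types — real code would walk a graphql-js AST (e.g. with its `visit` utility) alongside the actual schema.

```javascript
// Invented stand-in for schema knowledge: which fields return object types.
const OBJECT_FIELDS = new Set(['viewer', 'posts']);

// Instrument a selection tree: anything representing an object gets id and
// __typename selected, so the response can be normalised into client stores.
function instrument(selections) {
  const out = {};
  for (const [field, sub] of Object.entries(selections)) {
    if (sub !== true && OBJECT_FIELDS.has(field)) {
      out[field] = { id: true, __typename: true, ...instrument(sub) };
    } else {
      out[field] = sub; // leaf field, pass through unchanged
    }
  }
  return out;
}

const instrumented = instrument({
  viewer: { name: true, posts: { title: true } },
});
// instrumented.viewer (and viewer.posts) now also select id and __typename.
```

Because this needs schema knowledge at instrumentation time, you either ship (part of) the schema to the client or do the rewrite at build time — which is exactly the trade-off Relay's babel plugin makes.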

@AndrewIngram (Author)

My general feeling is that the process of incremental improvement leads you to something that looks like Relay. The catch with Relay is that the process seems to have been (understandably) followed according to Facebook's use cases, which means it has the feel of an API designed for Facebook scale and Facebook UI decisions. So I think there's value in exploring the same path, but from a more generalist perspective, i.e. making sure common use cases are addressed. It should also help dispel some of the magic surrounding Relay.
