The idea behind these steps is that they can be implemented progressively, with each one adding value on its own.
- Your current approach is to talk to REST APIs from your client-side code
- Build your GraphQL server
- Instead of making multiple REST requests for a page load, make a single GraphQL query
- Develop a means of composing GraphQL queries out of fragments. There are a few existing solutions for this, but they're not particularly polished (a sketch follows this list)
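To make those steps concrete, here's a minimal sketch of fragment composition in TypeScript, assuming a hypothetical `/graphql` endpoint; the fragment and field names are illustrative, not from any real schema:

```ts
// Each UI component declares the data it needs as a fragment.
const userCardFragment = `
  fragment UserCard on User {
    id
    name
    avatarUrl
  }
`;

const postListFragment = `
  fragment PostList on User {
    posts {
      id
      title
    }
  }
`;

// The page composes a single query out of those fragments,
// replacing what was previously several REST round trips.
const pageQuery = `
  query ProfilePage($userId: ID!) {
    user(id: $userId) {
      ...UserCard
      ...PostList
    }
  }
  ${userCardFragment}
  ${postListFragment}
`;

async function fetchProfilePage(userId: string): Promise<unknown> {
  const res = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: pageQuery, variables: { userId } }),
  });
  return (await res.json()).data;
}
```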
At this point you have a pretty nice solution for fetching data, but no strategy for caching or semantics surrounding that data. There are also questions about how to minimise data fetching on navigation when using client-side routing.
Client-side routing optimisation is potentially straightforward (if you're good at working with trees), but it might become impractical depending on how the other problems are solved. The basic idea: build the queries for the current and next routes, convert them to ASTs (this is easy using graphql-js), and create a diff query that asks only for data the current route doesn't already have. Then merge the response with the response from the previous request (which you hopefully kept around).
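As a rough illustration, here's a simplified top-level diff built on graphql-js's parse and print. A real implementation would need to recurse through nested selection sets and handle fragments and arguments; this only compares top-level fields by name, and the route queries are made up:

```ts
import {
  parse,
  print,
  Kind,
  type DocumentNode,
  type OperationDefinitionNode,
  type FieldNode,
} from 'graphql';

function topLevelFieldNames(doc: DocumentNode): Set<string> {
  const op = doc.definitions.find(
    (d): d is OperationDefinitionNode => d.kind === Kind.OPERATION_DEFINITION,
  );
  const names = new Set<string>();
  for (const sel of op?.selectionSet.selections ?? []) {
    if (sel.kind === Kind.FIELD) names.add(sel.name.value);
  }
  return names;
}

// Build a query asking only for next-route fields the current
// route didn't already fetch.
function diffQuery(current: DocumentNode, next: DocumentNode): DocumentNode {
  const have = topLevelFieldNames(current);
  const op = next.definitions.find(
    (d): d is OperationDefinitionNode => d.kind === Kind.OPERATION_DEFINITION,
  )!;
  const missing = op.selectionSet.selections.filter(
    (sel) => sel.kind !== Kind.FIELD || !have.has((sel as FieldNode).name.value),
  );
  return {
    ...next,
    definitions: [
      { ...op, selectionSet: { ...op.selectionSet, selections: missing } },
    ],
  };
}

const current = parse(`{ viewer { name } feed { id } }`);
const next = parse(`{ feed { id } notifications { id } }`);
console.log(print(diffQuery(current, next))); // only asks for notifications
```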
If you want to add the data to client-side stores, such as a Flux implementation, you need to do some extra legwork. Essentially, you need to know which types of data represent objects. This is where you start mimicking Relay's node/connection stuff (or you could just use it directly; it's pretty good). You basically convert your query to an AST and instrument it with fields like id and __typename for anything representing an object; when the data comes back, you have enough information to know what it means. The catch is that, at the point you instrument the query, you need access to your schema. Relay solves this at build time using a Babel plugin, avoiding having to bundle a potentially massive schema. This is a good solution, but it means you can't run any dynamic queries - they're essentially locked in at build time.
Further steps are caching, mutation APIs and so on. I'm not yet sold on the initial benefits of caching, because (looking at Relay) there's no pattern for cache invalidation yet. I think query diffing at the route level (which Relay can't do yet) might be enough of a benefit on its own.
My ideal initial library would provide APIs for composition, diffing and instrumentation, but without any assumptions that you're using React, Flux, React-Router etc., i.e. a purely functional library.
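Something like the following hypothetical surface, expressed as TypeScript declarations over graphql-js ASTs; none of these functions exist in a published library, they just pin down the contract:

```ts
import type { DocumentNode, GraphQLSchema } from 'graphql';

// Compose fragments into a single query document.
declare function compose(
  operation: DocumentNode,
  fragments: readonly DocumentNode[],
): DocumentNode;

// Produce a query asking only for data `next` needs that `current`
// has not already fetched.
declare function diff(current: DocumentNode, next: DocumentNode): DocumentNode;

// Add id/__typename to object-like selections; needs the schema to
// know which types actually represent objects.
declare function instrument(
  query: DocumentNode,
  schema: GraphQLSchema,
): DocumentNode;

// Merge a diff-query response into previously fetched data.
declare function mergeResponses<T extends Record<string, unknown>>(
  previous: T,
  next: T,
): T;
```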
It sounds to me like you've got a solid grasp on all of this. I agree with the incremental approach of adopting GraphQL by building atop whatever you're using today (e.g. REST).
Client-side routing is one path to take; another is just being explicit with your top-level queries. It's perhaps a little more imperative to write explicit queries for each scenario, but it also means no "framework" code is necessary on the client. This is a tradeoff, of course.
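For comparison, the explicit-queries approach might look like this: one handwritten, complete query per screen, with made-up route and field names:

```ts
const profileQuery = `
  query ProfilePage($id: ID!) {
    user(id: $id) { name avatarUrl }
  }
`;

const settingsQuery = `
  query SettingsPage($id: ID!) {
    user(id: $id) { name email notificationPrefs }
  }
`;

// Each navigation runs its own query in full; some fields get
// refetched redundantly, but there's no diffing machinery to maintain.
```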
I also support the exploration you're doing. We found that the cost of performing a lot of work dynamically offset too many of the benefits of having dynamic queries. Knowing queries statically not only provides some nice guarantees, like knowing your queries are safe before deploying your app, but also offers the ability to generate code for things that would otherwise be expensive to determine dynamically at runtime. But again, this is a tradeoff, and it's early days, so exploration is exactly what's necessary.
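One concrete form of that static guarantee is validating every known query against the schema in a build step with graphql-js's validate, so a bad query fails CI instead of production; the schema and queries below are illustrative:

```ts
import { buildSchema, parse, validate } from 'graphql';

const schema = buildSchema(`
  type User {
    id: ID!
    name: String
  }
  type Query {
    user(id: ID!): User
  }
`);

const queries = [
  `query ProfilePage($id: ID!) { user(id: $id) { id name } }`,
  // A typo like \`nmae\` here would be caught before deploy.
];

for (const source of queries) {
  const errors = validate(schema, parse(source));
  if (errors.length > 0) {
    throw new Error(`Invalid query:\n${source}\n${errors.join('\n')}`);
  }
}
```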
Caching, I think, is pretty critical. This is something Relay spent a lot of time getting right, and an area our team is planning to spend more time and attention on this year. Every GraphQL client at Facebook has some sort of consistency cache, and it's often necessary for certain kinds of user experiences. But as with all things, caches present a tradeoff between consistency and work: a cache can be expensive to maintain (memory, computation) if you're not reaping benefits from it.
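As a loose illustration of what a consistency cache does, here's a minimal normalized store keyed by __typename plus id, so two queries returning the same object share one record; eviction, nested references, and change notification are all omitted:

```ts
type CacheRecord = { __typename: string; id: string; [field: string]: unknown };

class ConsistencyCache {
  private records = new Map<string, CacheRecord>();

  private keyOf(record: CacheRecord): string {
    return `${record.__typename}:${record.id}`;
  }

  // Merge incoming fields into the canonical record for this object.
  write(record: CacheRecord): void {
    const key = this.keyOf(record);
    const existing = this.records.get(key);
    this.records.set(key, { ...existing, ...record });
  }

  read(typename: string, id: string): CacheRecord | undefined {
    return this.records.get(`${typename}:${id}`);
  }
}

const cache = new ConsistencyCache();
cache.write({ __typename: 'User', id: '1', name: 'Ada' });
cache.write({ __typename: 'User', id: '1', email: 'ada@example.com' });
console.log(cache.read('User', '1')); // both fields, one shared record
```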