@skanev
Last active April 27, 2017 19:38
On GraphQL

I like GraphQL for the regular reasons. It feels to me that it delivers on its promise, although it's not simple or straightforward to start using (but neither is SQL itself).

I'll approach this from two perspectives:

Using a GraphQL API as a client

I had the experience on this side when building a small internal app on top of the GitHub GraphQL API. The previous version used the REST API and had a lot of problems. Specifically, GitHub's REST API tends to be very coarse-grained – if you want a pull request, you get it along with all its data, but none of its relations (reviews and comments). If you want to fetch all the pull requests and their comments (to then analyze them), you end up making a lot of HTTP calls. Worse, the payloads are pretty big – each object carries a lot of stuff, including a lot of HATEOAS links that I just don't care about. I ended up making a lot of requests and disregarding most of the traffic I received. It was slow.

Moving to the GraphQL API, I could make a fine-grained query for only the data I need. I just need title, description, author and time, so that's all I get. Even better, I can fetch related data – the same request can include the reviews and the comments. Sure, there are some paging limitations, but I end up making significantly fewer requests. If each parent has fewer than 30 items, I can get all the PR data for a repo in a single request. That let me fetch everything with relatively few requests (say 5) as opposed to the amount I needed otherwise (say about a hundred).
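A query along these lines fetches pull requests together with their reviews and comments in one round trip. The field names follow GitHub's public GraphQL schema, but the exact selection set, `OWNER`/`NAME`, and the helper below are an illustrative sketch, not the app's actual code:

```python
# Sketch: one GraphQL request replacing many coarse-grained REST calls.
# OWNER/NAME and the selection set are placeholders for illustration.
import json
import urllib.request

QUERY = """
query {
  repository(owner: "OWNER", name: "NAME") {
    pullRequests(first: 30) {
      nodes {
        title
        bodyText
        author { login }
        createdAt
        reviews(first: 30) { nodes { state author { login } } }
        comments(first: 30) { nodes { bodyText author { login } } }
      }
    }
  }
}
"""

def build_request(token: str) -> urllib.request.Request:
    # GitHub's GraphQL API is a single POST endpoint: the query goes in
    # the JSON body, authentication in the Authorization header.
    return urllib.request.Request(
        "https://api.github.com/graphql",
        data=json.dumps({"query": QUERY}).encode(),
        headers={"Authorization": f"bearer {token}"},
    )
```

Note that there is exactly one URL regardless of what you fetch – the shape of the response is controlled entirely by the query string.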

Controlling both the client and the server

Suppose we're doing a mobile application that talks to a Rails backend over HTTP (which we are). The web app and the mobile app are handled by two different teams. Whenever a mobile developer lacks an endpoint at the right level of granularity, they need to either (1) fetch too much data over multiple requests or (2) go grab a web developer to implement a new endpoint at the right level of granularity. The first is wasteful; the second involves communication and coordination.

Furthermore, you can be economical in places where the granularity is OKish but narrowing it down still helps. For example, you can use the same API (1) to get the first page of all user data, (2) the first page of users with only names and ids, and (3) the first page of users with some related data, like the groups they belong to or the population of the cities they live in. Note that once you have implemented a sufficiently full GraphQL API, all those endpoints are available for free. Otherwise, you have to build a more complicated /users endpoint with options like fields=id,name or with_country_population=true (for example).
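With a hypothetical users schema (all field names below – users, groups, city, population – are made up for illustration), the three variants are just three queries against the same API, no new endpoints required:

```python
# Three clients, three granularities, one hypothetical GraphQL schema.
# No fields=... or with_country_population=true query-string options needed.

FULL_PAGE = """
query { users(first: 20) { id name email createdAt } }
"""

NAMES_ONLY = """
query { users(first: 20) { id name } }
"""

WITH_RELATIONS = """
query {
  users(first: 20) {
    id
    name
    groups { name }
    city { name population }
  }
}
"""
```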

The last part of the logic holds only for reading (you still need to implement each mutation at the right level of granularity), but it feels to me that most of the wastefulness in client-server talk is around fetching data, and GraphQL just removes the need for a complicated API.
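For contrast, a mutation looks like this on the wire (createUser and its fields are hypothetical names, not from any real schema) – the shape of the request is still a GraphQL document, but the server has to implement each mutation explicitly:

```python
# A GraphQL mutation sketch. Unlike reads, each mutation (createUser
# here is a made-up name) must be implemented server-side — you don't
# get the free composition that queries give you.
CREATE_USER = """
mutation {
  createUser(input: { name: "Ada", email: "ada@example.com" }) {
    user { id name }
  }
}
"""
```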

So it feels like a big win to me.
