I like GraphQL for the usual reasons. It feels to me that it delivers on its promise, although it's not simple or straightforward to start using (but neither is SQL itself).
I'll approach this from two perspectives: that of an API consumer and that of an API producer.
On the consumer side, I had this experience when building a small internal app on top of the GitHub GraphQL API. The previous version used the REST API, and it had a lot of problems. Specifically, GitHub's REST API tends to be very coarse-grained – if you want a pull request, you get it along with all its data, but none of its relations (reviews and comments). If you want to fetch all the pull requests and their comments (to then analyze them), you end up making a lot of HTTP calls. Worse, the payloads are pretty big – each object carries a lot of stuff, including a lot of HATEOAS links I just don't care about. I ended up making a lot of requests and discarding most of the traffic I received. It was slow.
Moving to the GraphQL API, I get fine-grained queries that return only the data I need. I just need the title, description, author and time, so that's all I fetch. Even better, I can fetch related data – the same request can also retrieve the reviews and the comments. Sure, there are some paging limitations, but I end up making significantly fewer requests. If there are fewer than 30 items in each parent, I can get all the PR data for a repo in a single request. That brought me down to a handful of requests (say 5) instead of the hundred or so I needed before.
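For illustration, this is roughly the kind of query I mean – one request that fetches a page of pull requests together with their reviews and comments. The field names follow GitHub's public GraphQL schema; the repository and the 30-item page size are just the example above:

```graphql
query {
  repository(owner: "rails", name: "rails") {
    pullRequests(first: 30, states: OPEN) {
      nodes {
        title
        bodyText            # the description
        createdAt
        author { login }
        reviews(first: 30) {
          nodes { state author { login } }
        }
        comments(first: 30) {
          nodes { bodyText author { login } }
        }
      }
    }
  }
}
```

If any parent has more than 30 children, you page through with a follow-up request – that's the "paging limitations" caveat above, and it's still far fewer round trips than one call per pull request.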
On the producer side: suppose we're building a mobile application that talks to a Rails backend over HTTP (which we are). The web app and the mobile app are handled by two different teams. Whenever a mobile developer does not have the right endpoint (in terms of granularity), they either (1) fetch too much data across multiple requests or (2) grab a web developer to implement a new endpoint at the right level of granularity. The first is wasteful; the second involves communication and coordination.
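To make the examples below concrete, here is a hypothetical slice of such a schema – the User, Group and City types are invented for illustration, not taken from any real API:

```graphql
# A hypothetical schema slice for the examples below.
type User {
  id: ID!
  name: String!
  groups: [Group!]!   # related data
  city: City
}

type Group {
  id: ID!
  name: String!
}

type City {
  name: String!
  population: Int!
}

type Query {
  # plain cursor-style paging
  users(first: Int, after: String): [User!]!
}
```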
Furthermore, you can be economical in places where you already have an OK-ish level of granularity, but would still benefit from narrowing it down. For example, you can use the same API (1) to get the first page of all user data, (2) the first page of users with only names and ids, and (3) the first page of users with some related data, like the groups they belong to or the population of the cities they live in. Note that once you have implemented a sufficiently full GraphQL API, all those endpoints are available for free, as the queries sketched below show. Otherwise, you have to create a more complicated /users endpoint that takes options like fields=id,name or with_city_population=true (for example).
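Against the hypothetical schema above, those three reads are just three queries – same API, three levels of granularity, nothing new to implement on the server:

```graphql
# (1) the first page of all user data
query AllUserData {
  users(first: 20) {
    id
    name
    groups { id name }
    city { name population }
  }
}

# (2) only names and ids
query NamesAndIds {
  users(first: 20) { id name }
}

# (3) names plus some related data
query WithRelations {
  users(first: 20) {
    name
    groups { name }
    city { population }
  }
}
```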
The last part of this logic holds only for reading (you still need to define each mutation at the right level of granularity), but it feels to me that most of the wastefulness in client-server communication is around fetching data, and GraphQL simply removes the need for a complicated API there.
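For contrast, writes don't come for free: each mutation is an entry point you still have to design, roughly like this (the updateUser name and arguments are invented for illustration):

```graphql
type Mutation {
  # unlike reads, each write is an explicitly declared entry point
  updateUser(id: ID!, name: String): User
}
```

A client then calls it much like a REST endpoint, just with a typed selection of what comes back:

```graphql
mutation {
  updateUser(id: "42", name: "New name") {
    id
    name
  }
}
```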
So it feels like a big win to me.