@khaledosman
Last active August 6, 2020 09:57
AppSync

apollographql/apollo-server#2129 (comment)

  1. No JS resolvers; resolver logic has to be written as ugly VTL mapping templates.
  2. it replaces your entire backend and creates everything for you, which works fine if you just want to build another CRUD app. But if you want custom resolvers that fetch data from external data sources, it gets annoying: you have to configure a custom Lambda for each resolver. Plus, I want to build my own custom backend; I could have just used Graphcool or GraphCMS instead if I didn't want to build one.
  3. it forces you to use AWS services like Cognito for authentication and DynamoDB as the database, which I hate because of its terrible syntax, so I prefer a MongoDB Atlas backend instead. Don't want to use DynamoDB with AppSync? You have to create yet another Lambda to pipe the response to your external database (a plain apollo-server + MongoDB resolver sketch follows this list).
  4. it's more difficult to debug, and you lose good monitoring solutions like Apollo Engine.
  5. it forces you to learn the AWS way of doing things instead of a normal GraphQL setup, creating vendor lock-in. AWS services also come with hidden costs.
  6. all the extra round trips between AWS services (an authentication Lambda, custom-resolver Lambdas, Lambdas piping to an external database, Cognito, AppSync, the database, API Gateway WebSockets, Lambdas listening to DynamoDB updates to fetch connected users and push realtime updates, etc.) just add unnecessary latency and complexity.
  7. how do you set up shared caching across your Lambdas/apollo-servers, like Redis or Memcached? AWS offers ElastiCache, which can only be deployed inside a VPC, so it requires putting your Lambdas inside a VPC too. That is really bad for performance, because a VPC dramatically increases Lambda cold-start times, by 10+ seconds, making it almost unusable: the cost outweighs the benefit of having a cache in the first place (see the Redis cache sketch after this list).
  8. AppSync subscriptions use API Gateway WebSockets, which come with their own limitations:
     a. maximum connection duration of 2 hours, then a forced timeout
     b. a custom @aws_subscribe directive in your GraphQL schema <-- vendor lock-in
     c. a 10-minute idle connection timeout
     d. there's no way to broadcast a message to all connected clients with one API call; you have to make one API call per connection you want to reach. Publishing a single update to 1M clients means fetching all 1M connection IDs from the database and making 1M API calls, which is slow and unscalable, unless you use SQS and Lambdas to defer the logic, which adds even more latency through extra round trips (see the fan-out sketch after this list).
     e. connection metadata has to be stored in a database, so a Lambda must run on every connect and disconnect to record who is online. This makes the setup stateful, adds extra round trips/latency, and scales poorly for large numbers of users, since it can easily hit Lambda/API Gateway execution limits.
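
For contrast with points 1-3, here is a minimal sketch of the plain resolver setup argued for above: an apollo-server whose resolvers talk to MongoDB Atlas directly, with no mapping templates and no per-resolver Lambdas. The connection string, database name, and collection name are placeholders.

```typescript
import { ApolloServer, gql } from 'apollo-server'
import { MongoClient, Db } from 'mongodb'

const typeDefs = gql`
  type User {
    _id: ID!
    name: String!
  }
  type Query {
    users: [User!]!
  }
`

// Plain JS/TS resolvers: ordinary functions, no VTL mapping templates.
const resolvers = {
  Query: {
    users: (_parent: unknown, _args: unknown, { db }: { db: Db }) =>
      db.collection('users').find().toArray()
  }
}

async function start (): Promise<void> {
  // MONGODB_URI would point at a MongoDB Atlas cluster (placeholder).
  const client = await MongoClient.connect(process.env.MONGODB_URI!)
  const db = client.db('myapp') // hypothetical database name
  const server = new ApolloServer({ typeDefs, resolvers, context: () => ({ db }) })
  const { url } = await server.listen()
  console.log(`Server ready at ${url}`)
}

start().catch(console.error)
```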
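On point 7, a self-managed apollo-server can point its cache at any externally reachable Redis instance instead of a VPC-bound ElastiCache cluster. A sketch using the apollo-server-cache-redis package; the schema import and the REDIS_HOST value are placeholders, and the Redis instance is assumed to be reachable without a VPC (e.g. a hosted Redis provider).

```typescript
import { ApolloServer } from 'apollo-server-lambda'
import { RedisCache } from 'apollo-server-cache-redis'
import { typeDefs, resolvers } from './schema' // hypothetical module

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Shared cache across all Lambda instances, no VPC required:
  // host/port point at an externally reachable Redis (placeholders).
  cache: new RedisCache({
    host: process.env.REDIS_HOST,
    port: 6379
  }),
  cacheControl: { defaultMaxAge: 60 }
})

export const handler = server.createHandler()
```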
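On point 8d, the one-call-per-connection fan-out looks roughly like this with the AWS SDK's ApiGatewayManagementApi. A sketch only: it assumes the connection IDs were already collected by $connect/$disconnect Lambdas and loaded from a database, and the endpoint value is a placeholder. The loop over postToConnection is exactly the part that does not scale.

```typescript
import { ApiGatewayManagementApi } from 'aws-sdk'

const api = new ApiGatewayManagementApi({
  // e.g. {api-id}.execute-api.{region}.amazonaws.com/{stage} (placeholder)
  endpoint: process.env.WEBSOCKET_API_ENDPOINT
})

// There is no broadcast API: every connected client costs one
// postToConnection call. connectionIds come from whatever database
// the $connect/$disconnect Lambdas write to.
async function broadcast (connectionIds: string[], payload: object): Promise<void> {
  const data = JSON.stringify(payload)
  await Promise.all(connectionIds.map(id =>
    api.postToConnection({ ConnectionId: id, Data: data }).promise()
      .catch(err => {
        // 410 Gone means the client already disconnected; its ID should be pruned.
        if (err.statusCode !== 410) throw err
      })
  ))
}
```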

I believe the only way to do a properly scalable GraphQL server setup on AWS is to run your own WebSocket server on ECS (or use PubNub), use an external Redis cluster (or run your own on ECS as well), and use Lambdas for the apollo-server setup with a database of your choice, manually connecting and publishing to your WebSocket server for subscriptions (a sketch of that setup follows). The AWS solutions are not really well thought through, IMO.
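
A minimal sketch of that setup, assuming graphql-redis-subscriptions for pub/sub and apollo-server v2's built-in WebSocket subscriptions: a single Redis publish fans out to every subscribed client, with no per-connection API calls. The schema, channel name, and REDIS_HOST are illustrative only.

```typescript
import { ApolloServer, gql } from 'apollo-server'
import { RedisPubSub } from 'graphql-redis-subscriptions'

// External Redis cluster (or one you run on ECS); host is a placeholder.
const pubsub = new RedisPubSub({
  connection: { host: process.env.REDIS_HOST, port: 6379 }
})

const typeDefs = gql`
  type Message { body: String! }
  type Query { ok: Boolean! }
  type Mutation { send(body: String!): Boolean! }
  type Subscription { messageSent: Message! }
`

const resolvers = {
  Query: { ok: () => true },
  Mutation: {
    send: async (_: unknown, { body }: { body: string }) => {
      // One publish; Redis fans it out to every subscribed server/client.
      await pubsub.publish('MESSAGE_SENT', { messageSent: { body } })
      return true
    }
  },
  Subscription: {
    messageSent: {
      subscribe: () => pubsub.asyncIterator('MESSAGE_SENT')
    }
  }
}

// Run on a long-lived container (ECS), not a Lambda, so WebSocket
// connections stay open with no 2-hour or idle timeouts.
new ApolloServer({ typeDefs, resolvers })
  .listen(4000)
  .then(({ url, subscriptionsUrl }) => {
    console.log(`Queries at ${url}, subscriptions at ${subscriptionsUrl}`)
  })
```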
