@misterussell
Created June 5, 2018 18:21
6 5 18 Blog Post

GraphQL + Apollo + AWS AppSync - Adding Multiple Table Elements Simultaneously

It was a bit of a struggle trying to figure out how to use Apollo Client and AWS AppSync to add multiple table elements, across different tables. The struggle is not over yet but I intend to expose some of the process in this post.

  • I am using AppSync with React Web, not native.
  • I am using Apollo Client, without the addition of Apollo Boost.
  • I am not using AWS Amplify or the Mobile CLI to handle authentication and the backend, because I found them too magical; Cognito's segmented logic was easier to noodle my way around.

The Data

For my example I will be creating Recipes.

A Recipe is composed of multiple Elements.

An Element is composed of multiple Ingredients.

{
  recipeID: "12345",
  recipeTitle: "Caesar Salad",
  elements: [
    {
      id: "12121",
      name: "Dressing",
      recipeID: "12345",
      ingredients: [
        {
          id: "23232",
          name: "oil",
          count: 0.33,
          measurement: "cups"
        },
        {
          id: "45454",
          name: "vinegar",
          count: 1,
          measurement: "cups"
        }
      ]
    },
  ]
}

Setting up your baseline GraphQL Schema in AppSync

I am currently using AppSync with DynamoDB, a non-relational database, so I have a table each for Recipes, Elements, and Ingredients. Each piece of data references the ID of its parent, creating a trail that can be followed with the GraphQL schema.

Schema definitions require a few different parts.

  1. The item that you are adding.
  type Recipe {
    id: ID!
    elements: [Element]!
    title: String!
    type: String!
  }
  2. The input for that item, which cannot include enums.
  input CreateRecipeInput {
    id: ID!
    title: String!
    type: String!
  }
  3. And lastly, a mutation to add the data to a table, which returns a specific type.
  type Mutation {
    createRecipe(input: CreateRecipeInput!): Recipe
  }
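
The child types follow the same pattern. As a sketch, the Element pieces might look like the following (field names are assumed from the JSON example above, not taken from my actual schema; Ingredient gets the same treatment, with its input carrying its parent Element's id):

```graphql
type Element {
  id: ID!
  name: String!
  recipeID: ID!
  ingredients: [Ingredient]!
}

input CreateElementInput {
  id: ID!
  name: String!
  recipeID: ID!
}
```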

AppSync does a good job of creating the baseline resources needed in the schema, but to newcomers certain fields may seem to be missing. Taking things slowly and keeping the AWS AppSync Dev Guide open will help with a deep dive into how things need to work.

Testing your schema with the built-in Queries option is also a great way to see how different behaviors work, as well as whether your resolvers are functioning correctly. Once we have confirmed that the baseline schema is creating table entries correctly (yes, the testing creates actual data!) we can add to our schema a method to handle multiple items simultaneously. This is powerful to me because it keeps the enumeration out of the front-end, leaving the code cleaner.

We don't need to recreate each piece, but we do need a new mutation because we are sending the pieces in a new configuration. For learning purposes I am going to assume that we can add multiple recipes at the same time, even though I am only sending a single recipe at a time.

  type Mutation {
    addRecipe(
      recipes: [CreateRecipeInput!],
      elements: [CreateElementInput!],
      ingredients: [CreateIngredientInput!],
    ) : RecipeResult
  }

You'll notice that our returned item is RecipeResult. This is a new schema Type that needs to be created as well. My returns aren't working correctly at the moment though, so I'm not going to dive into that yet. I will update this post when that has been handled. For now, I'm verifying that things are created by checking my data sources.
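
Based on the selection set I request back in the React component, a plausible shape for RecipeResult is a type mirroring the three input arrays. This is a sketch of what I'm aiming for, not a confirmed working definition:

```graphql
type RecipeResult {
  recipes: [Recipe]
  elements: [Element]
  ingredients: [Ingredient]
}
```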

Creating Data Sources and updating IAM Roles

The parent data source doesn't need any additional indices for lookups, but the children do, so that DynamoDB can look items up by the parent table item's ID. This matters primarily for queries, but if the parent item's ID is not included in the mutation (it is required, so this shouldn't be an issue), you will end up with broken data.

After the Data Sources are created, you can navigate to the AWS IAM Management Console to update the permissions on any of the roles that are automatically created for your tables. AppSync seems to do a pretty good job of creating the different working pieces needed to quickly(ish) bootstrap an app. We can select our parent data source RecipeTable and edit the policy so that it can handle dynamodb:BatchWriteItem for the different resources we need access to. We end up with a policy that includes the table and any sub elements for the region and account of our app.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:UpdateItem",
                "dynamodb:BatchWriteItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:YOUR_REGION:YOUR_ACCOUNT:table/RecipeTable",
                "arn:aws:dynamodb:YOUR_REGION:YOUR_ACCOUNT:table/RecipeTable/*",
                "arn:aws:dynamodb:YOUR_REGION:YOUR_ACCOUNT:table/ElementTable",
                "arn:aws:dynamodb:YOUR_REGION:YOUR_ACCOUNT:table/ElementTable/*",
                "arn:aws:dynamodb:YOUR_REGION:YOUR_ACCOUNT:table/IngredientTable",
                "arn:aws:dynamodb:YOUR_REGION:YOUR_ACCOUNT:table/IngredientTable/*"
            ]
        }
    ]
}

Creating the Resolver to Enumerate

Without leaving AppSync we can now create the resolver for our new mutation. Why can't we use one of the automatically generated resolvers? Because they don't handle batch writes to our DynamoDB tables. When you add a new custom mutation, AppSync automatically creates an empty slot for a resolver, letting you define custom logic for resolvers across the board. With these we can restrict view access, add information to data server-side, and bolt on a slew of other functionality. A powerful tool for cleaning up the front-end's responsibilities.

For our addRecipe mutation, we create a new Resolver for the RecipeTable that is also being used when creating a standard Recipe table entry. For my Primary Index I used the id field on the Recipe. I don't care about sorting these on the backend because I have not made decisions yet about how I want to present these to users.

Request Mapping Template

<script src="https://gist.github.com/misterussell/0fc00a26c69f3b7a4c3fb0e5d8d3361c.js"></script>

The request mapping template loops over each of the different types I pass in with GraphQL, creating an empty array for each, enumerating over the request, and filling the array with each of the items. For the Recipe I am also adding an author field, which is the Cognito username of the user who created the item. This gets passed through automatically because my API uses Amazon Cognito User Pools to authenticate GraphQL calls (I'll go over this in another post).

Each item in the array is automatically mapped with the built-in DynamoDB utilities on AWS. Cue the magic! J'adore.

The last, very important detail of the request mapping template is setting the operation to BatchPutItem. This is how the resolver knows it will need to handle multiple pieces of data for one or more tables.
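
In case the embedded gist doesn't render, this is a minimal sketch of what such a BatchPutItem request mapping template can look like. The table names match the IAM policy above; my real template's field handling may differ:

```vtl
## Build an array of DynamoDB-formatted items for each type
#set($recipes = [])
#foreach($item in $ctx.args.recipes)
  ## Stamp the author server-side from the Cognito identity
  $util.qr($item.put("author", $ctx.identity.username))
  $util.qr($recipes.add($util.dynamodb.toMapValues($item)))
#end
#set($elements = [])
#foreach($item in $ctx.args.elements)
  $util.qr($elements.add($util.dynamodb.toMapValues($item)))
#end
#set($ingredients = [])
#foreach($item in $ctx.args.ingredients)
  $util.qr($ingredients.add($util.dynamodb.toMapValues($item)))
#end
{
  "version": "2018-05-29",
  "operation": "BatchPutItem",
  "tables": {
    "RecipeTable": $util.toJson($recipes),
    "ElementTable": $util.toJson($elements),
    "IngredientTable": $util.toJson($ingredients)
  }
}
```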

Response Mapping Template

There are Response mapping templates for Batch Operations that can be used. These do not return correctly for me currently. I will update this part of the post when I discover why.

Creating the React Component

I'm not going to do a complete walkthrough of creating the component, because this post is about sending multiple items. For a full walkthrough, this post by Tyler McGinnis was quite helpful: Building Serverless React GraphQL Applications with AWS AppSync.

First, a .js file needs to be created to hold the GraphQL. If you were successful at saving in the AppSync tools then you have the basic idea. Things look a bit different in production though:

import gql from 'graphql-tag';

export default gql`
  mutation addRecipe(
    $recipes: [CreateRecipeInput!]
    $elements: [CreateElementInput!]
    $ingredients: [CreateIngredientInput!]
  ) {
    addRecipe(
      recipes: $recipes
      elements: $elements
      ingredients: $ingredients
    ) {
      recipes {
        id
      }
      elements {
        id
      }
      ingredients {
        id
      }
    }
  }
`

The main difference is that there are variables, such as $recipes, that store the information and then get sent to the AppSync mutation. These prep the data to be sent along. From this mutation we can make the general assumption that we are going to need three arrays, one for each table we need to send data to.

Now, this is where I hit a pretty large roadblock. I knew that I needed to structure my data from the AddRecipe component such that I was sending an array of objects, each object being either a Recipe, an Element, or an Ingredient. Apollo's docs for the Mutation component don't go into depth about how to send this type of data along. Eventually I was able to use mutate variables, which seemed to contain the data correctly. You'll notice that I am using compose with only a single graphql function; that's thinking ahead in case functionality needs to be added to the component.

export default compose(
  graphql(AddRecipeGQL, {
    props: props => ({
      addRecipe: (recipeData) => props.mutate({
        variables: {
          recipes: recipeData.recipes,
          elements: recipeData.elements,
          ingredients: recipeData.ingredients,
        },
        optimisticResponse: {
          __typename: 'Mutation',
          addRecipe: {
             ...recipeData,
             __typename: 'Recipe'
          }
        }
      })
    })
  }),
)(AddRecipe);

The graphql-tag .js file is imported as AddRecipeGQL and passed to a graphql function. This keeps things a little cleaner than if the GraphQL mutation were embedded directly in the Apollo function. Into my props mutation I pass a variable recipeData, which is an object of arrays for each set of data that I'm trying to create.

In my variables parameter I then define the variables that need to be passed into the GraphQL mutation mentioned above. This is how Apollo knows which data relates to which variable passed through the props mutation. Passing in the variable without these throws errors and the data is not updated.

The optimisticResponse is used to immediately update local data, for UI purposes, while a response is pending from the server. It is passed all recipeData with a single __typename for the recipe.

  • Unfortunately Apollo is throwing errors requesting additional __typename fields for the smaller pieces of data, but until I can dig into whether these are needed both server-side and on the front-end I will not add them.
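
If those missing __typename fields do turn out to be required, one possible workaround is a small helper that stamps a type name onto each nested item before building the optimisticResponse. This is a hypothetical sketch (the RecipeResult, Element, and Ingredient type names are assumptions), not code from my working component:

```javascript
// Hypothetical helper: copy each item and stamp on a __typename so
// Apollo's cache can normalize the optimistic response.
function withTypename(items, typename) {
  return items.map(item => ({ ...item, __typename: typename }));
}

// Build the optimisticResponse for the addRecipe mutation, giving
// each nested array its own (assumed) type name.
function buildOptimisticResponse(recipeData) {
  return {
    __typename: 'Mutation',
    addRecipe: {
      __typename: 'RecipeResult',
      recipes: withTypename(recipeData.recipes, 'Recipe'),
      elements: withTypename(recipeData.elements, 'Element'),
      ingredients: withTypename(recipeData.ingredients, 'Ingredient'),
    },
  };
}
```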

Why AppSync?

I like this setup because it is fast and offers a lot of out-of-the-box functionality for offline storage. Setting up my own backend for data, with control over what is returned and how, takes a lot of extra logic off the front-end. It moves the front-end closer to input/display-only, rather than something that needs to worry about stamping data, over-complicated endpoint calls, or managing API calls to update data. Now that I have the basics down, it is comparatively quick to bootstrap and get things up and running. The option to reach out to Lambda functions also interests me as a future integration.
