{GraphQL, Apollo GraphQL, Client Side} Notes for noobs


  • The client's cache field policies API can be used to connect an object between two queries, so we don't have to fetch information that we know is already available (see the last part).
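
    A minimal sketch of such a field policy (a "cache redirect"), assuming an illustrative Pet type and petById(id) query field:

    import { InMemoryCache } from "@apollo/client";

    // Serve petById(id: ...) from Pet objects already cached by a list query
    const cache = new InMemoryCache({
      typePolicies: {
        Query: {
          fields: {
            petById: {
              read(_, { args, toReference }) {
                // Return a reference to the normalized Pet object if it is already in the cache
                return toReference({ __typename: "Pet", id: args.id });
              },
            },
          },
        },
      },
    });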

  • Apollo Client includes local state management features out of the box that allow you to use the Apollo cache as the single source of truth for data in your application.

Table of Contents

Basics

Frontend Masters exercise

  • import React, { useState } from "react";
    import gql from "graphql-tag";
    import { useQuery, useMutation } from "@apollo/react-hooks";
    import PetsList from "../components/PetsList";
    import NewPetModal from "../components/NewPetModal";
    import Loader from "../components/Loader";
    
    // Fragment on Pet type
    const PETS_FIELDS = gql`
      fragment PetsField on Pet {
        id
        name
        type
        img
        owner {
          age @client
        }
        vaccinated @client
      }
    `;
    
    // A query to get list of pets
    const GET_PETS = gql`
      query AllPets {
        pets {
          ...PetsField
        }
      }
      ${PETS_FIELDS}
    `;
    
    // A mutation to add single pet
    const CREATE_PET = gql`
      mutation CreatePet($newPet: NewPetInput!) {
        addPet(input: $newPet) {
          ...PetsField
        }
      }
      ${PETS_FIELDS}
    `;
    
    export default function Pets() {
      const [modal, setModal] = useState(false);
      const pets = useQuery(GET_PETS);
      // const [createPet, newPet] = useMutation(CREATE_PET); // without updating the cache
    
      const [createPet, newPet] = useMutation(CREATE_PET, {
        update(cache, { data: { addPet } }) {
          const { pets } = cache.readQuery({ query: GET_PETS });
    
          cache.writeQuery({
            query: GET_PETS,
            data: { pets: [addPet, ...pets] },
          });
        },
        // optimistic updates can be handled here as well
      });
    
      // if (pets.loading || newPet.loading) {
      if (pets.loading) {
        // don't load on newPet now (optimistically update)
        return <Loader />;
      }
    
      if (pets.error || newPet.error) {
        return <div>Error!!!</div>;
      }
    
      const onSubmit = (input) => {
        setModal(false);
        createPet({
          variables: { newPet: input },
          // optimistic updates for better user experience
          optimisticResponse: {
            addPet: {
              id: String(Math.floor(Math.random() * 100000)), // temporary client-side id for the optimistic result
              name: input.name,
              type: input.type,
              img: "https://via.placeholder.com/300",
              __typename: "Pet",
            },
          },
        });
      };
    
      console.log(pets.data);
    
      if (modal) {
        return (
          <NewPetModal onSubmit={onSubmit} onCancel={() => setModal(false)} />
        );
      }
    
      return (
        <div className="page pets-page">
          <section>
            <div className="row betwee-xs middle-xs">
              <div className="col-xs-10">
                <h1>Pets</h1>
              </div>
    
              <div className="col-xs-2">
                <button onClick={() => setModal(true)}>new pet</button>
              </div>
            </div>
          </section>
          <section>
            <PetsList pets={pets.data.pets} />
          </section>
        </div>
      );
    }
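
  • The exercise above marks vaccinated and owner.age with the @client directive, so they are resolved on the client rather than by the server. One way to back such client-only fields is with local resolvers on the client instance; a sketch with @apollo/client 3 imports, assuming the owner is a User type and using made-up values:

    import { ApolloClient, InMemoryCache } from "@apollo/client";

    const client = new ApolloClient({
      uri: "http://localhost:4000/", // assumed GraphQL endpoint
      cache: new InMemoryCache(),
      resolvers: {
        Pet: {
          vaccinated: () => true, // client-only field
        },
        User: {
          age: () => 35, // client-only field on the pet's owner
        },
      },
    });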

Some depth

Queries

  • Polling: Polling provides near-real-time synchronization with your server by causing a query to execute periodically at a specified interval. To enable polling for a query, pass a pollInterval configuration option to the useQuery hook with an interval in milliseconds:

    const { loading, error, data } = useQuery(GET_DOG_PHOTO, {
      variables: { breed },
      pollInterval: 500,
    });
  • Refetching: Refetching enables you to refresh query results in response to a particular user action, as opposed to using a fixed interval.

    const { loading, error, data, refetch } = useQuery(GET_DOG_PHOTO, {
       variables: { breed }
    });
    
    ...
    <button onClick={() => refetch()}>Refetch!</button>

    Check out how to manage the refetching state with networkStatus; by default, loading doesn't change during a refetch unless notifyOnNetworkStatusChange is enabled.
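
    A minimal sketch following that pattern:

    import { NetworkStatus } from "@apollo/client";

    const { loading, error, data, refetch, networkStatus } = useQuery(GET_DOG_PHOTO, {
      variables: { breed },
      notifyOnNetworkStatusChange: true, // also re-render while a refetch is in flight
    });

    if (networkStatus === NetworkStatus.refetch) return <p>Refetching...</p>;
    if (loading) return <p>Loading...</p>;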

  • The useLazyQuery hook is perfect for executing queries in response to events other than component rendering. It doesn't execute the query immediately; instead it returns a function you can call whenever needed.

    const [getDog, { loading, data }] = useLazyQuery(GET_DOG_PHOTO);
    
    ...
    
    <button onClick={() => getDog({ variables: { breed: 'bulldog' } })}> Click me!  </button>
  • We can set the fetchPolicy according to our needs, like the default cache-first, or network-only.

Mutation

  • We can update our locally cached data to reflect a back-end modification in two ways: either refetch the affected queries (pass a list of queries to useMutation's refetchQueries option, by query name or as a DocumentNode object parsed with gql), or manually update the cache.

    // Refetches two queries after mutation completes
    const [addTodo, { data, loading, error }] = useMutation(ADD_TODO, {
      refetchQueries: [
        GET_POST, // DocumentNode object parsed with gql
        "GetComments", // Query name
      ],
    });
  • By default, Apollo Client normalizes the objects received in a mutation response and caches them according to their __typename and id fields. The cache is kept in sync automatically if the {id, __typename} combination returned in the mutation response is already present in the cache.

  • Newly created objects aren't automatically added to collections (e.g. the cached result of a previous query that returns a list of items); for those, you have to manually update the cache or refetch.
    We can "double check" our update function's modifications by refetching the affected active queries. You can also ensure a list of queries is always refetched by passing refetchQueries to the mutation. We can selectively refetch outside of a mutation too using client.refetchQueries(), as sketched below.

Refetching

Subscription

  • In most cases, prefer having the client poll intermittently with queries, or re-execute queries on demand when a user performs a relevant action (such as clicking a button). When should we use subscriptions instead? (a useSubscription sketch follows this list)

    • Small, incremental changes to large objects: with a subscription, the server can push just the small changes rather than the whole expensive object every time.

    • Low-latency, real-time updates, when you cannot afford to wait for the next poll, or when changes are rare but should be seen instantly.
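
    A useSubscription sketch, assuming a hypothetical commentAdded field in the schema (a WebSocket-capable link also has to be configured on the client):

    import { gql, useSubscription } from "@apollo/client";

    const COMMENTS_SUBSCRIPTION = gql`
      subscription OnCommentAdded($postId: ID!) {
        commentAdded(postId: $postId) {
          id
          content
        }
      }
    `;

    function LatestComment({ postId }) {
      const { data, loading } = useSubscription(COMMENTS_SUBSCRIPTION, {
        variables: { postId },
      });
      return <h4>New comment: {!loading && data.commentAdded.content}</h4>;
    }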

Fragments

  • A piece of logic (a set of fields) that can be shared between multiple queries and mutations.

    // Usage of fragments
    import { CORE_COMMENT_FIELDS } from "./fragments";
    
    export const GET_POST_DETAILS = gql`
      ${CORE_COMMENT_FIELDS}
      query CommentsForPost($postId: ID!) {
        post(postId: $postId) {
          title
          body
          author
          comments {
            ...CoreCommentFields
          }
        }
      }
    `;
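
    The imported CORE_COMMENT_FIELDS fragment isn't shown in these notes; its definition is assumed to look roughly like this (the Comment fields are illustrative):

    // fragments.js
    import { gql } from "@apollo/client";

    export const CORE_COMMENT_FIELDS = gql`
      fragment CoreCommentFields on Comment {
        id
        postedBy
        content
      }
    `;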
  • Colocating fragments: a fragment attached to the React component that renders its fields.

    // Component hierarchy
    // FeedPage
    // └── Feed
    //     └── FeedEntry
    //         ├── EntryInfo
    //         └── VoteButtons
    
    // Access to fragments of child components
    FeedEntry.fragments = {
      entry: gql`
        fragment FeedEntryFragment on FeedEntry {
          commentCount
          repository {
            full_name
            html_url
            owner {
              avatar_url
            }
          }
          ...VoteButtonsFragment
          ...EntryInfoFragment
        }
        ${VoteButtons.fragments.entry}
        ${EntryInfo.fragments.entry}
      `,
    };
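
    For completeness, a child component's colocated fragment could look like this (the fields are illustrative, not from the notes):

    // VoteButtons.js: the child owns the fragment for the fields it renders
    VoteButtons.fragments = {
      entry: gql`
        fragment VoteButtonsFragment on FeedEntry {
          score
          vote {
            vote_value
          }
        }
      `,
    };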

Caching

  • ⭐⭐ Overview of how caching works (MUST CHECK OUT!) ⭐⭐

    • InMemoryCache maintains a flat lookup table of normalized objects that reference each other (hence the "graph" in GraphQL).

    • If you have queried different fields of the same object, the cache stores them all in a single entry.

    • Data Normalisation:

      1. Identify distinct objects in the query response with their __typename and id (or _id)
      2. Generate a cache ID - by default __typename:id (we can customise it, e.g. with keyFields; see the sketch after this example)
      3. Replace object with references to avoid duplication and maintain consistency among all copies
      {
         "__typename": "Person",
         "id": "cGVvcGxlOjE=",
         "name": "Luke Skywalker",
         "homeworld": {
            "__typename": "Planet",
            "id": "cGxhbmV0czox",
            "name": "Tatooine"
         }
      }
      // after normalisation becomes...
      {
         "__typename": "Person",
         "id": "cGVvcGxlOjE=",
         "name": "Luke Skywalker",
      
         "homeworld": {
            "__ref": "Planet:cGxhbmV0czox"
         }
      }
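
      A sketch of customising the cache ID with keyFields (the Product type and upc field are illustrative):

      const cache = new InMemoryCache({
        typePolicies: {
          Product: {
            keyFields: ["upc"], // cache ID becomes e.g. Product:{"upc":"12345"} instead of Product:<id>
          },
        },
      });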
  • Read about cache interaction with {readQuery, writeQuery} (query-specific access) and {readFragment, writeFragment} (random access to any normalized object).
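
    A minimal readFragment/writeFragment sketch (the Todo type and its fields are illustrative; client can be obtained with useApolloClient):

    // Read one normalized object straight from the cache by its cache ID
    const todo = client.readFragment({
      id: "Todo:5", // default cache ID format: <__typename>:<id>
      fragment: gql`
        fragment ReadTodo on Todo {
          id
          text
          completed
        }
      `,
    });

    // Write a change back to that same object
    client.writeFragment({
      id: "Todo:5",
      fragment: gql`
        fragment CompleteTodo on Todo {
          completed
        }
      `,
      data: { completed: true },
    });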

  • Bypassing the cache: we can bypass the cache if we know a certain query needs to be executed only once (e.g. fetching a user token), using the "no-cache" fetchPolicy.

    const { loading, error, data } = useQuery(GET_DOGS, {
      fetchPolicy: "no-cache",
    });
  • Persisting and rehydrating the cache from a storage provider like AsyncStorage (React Native) or localStorage (web): persistCache persists every write to the cache with a configurable debounce interval, and you can restore it immediately (the restore is async).

    import { persistCache, LocalStorageWrapper } from "apollo3-cache-persist";
    // import { persistCache, AsyncStorageWrapper } from 'apollo3-cache-persist';
    
    const cache = new InMemoryCache();
    
    // await before instantiating ApolloClient, else queries might run before the cache is persisted
    await persistCache({
      cache,
      storage: new LocalStorageWrapper(window.localStorage),
      // storage: new AsyncStorageWrapper(AsyncStorage),
    });

Pagination

  • GraphQL doesn't automatically guarantee small responses, especially when you query a field that contains a list, which can be huge and produce an enormous response.

  • There are many different pagination strategies a server can use for a particular list field: offset-based, cursor-based, page-number-based, forwards, backwards, and so on ...

  • fetchMore

    • A member of the ObservableQuery returned by client.watchQuery; it's also returned from the useQuery hook.

    • The cache doesn't know by default that it should merge fetchMore's (the follow-up query's) result with the original query's result. We define a field policy inside the type policy to merge the new data with the existing data, as sketched below. Refer to the Core pagination API.
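
      A sketch of offset-based pagination (the feed field, GET_FEED query, and offset/limit variables are assumptions):

      // Field policy: append fetchMore results to the already-cached list
      const cache = new InMemoryCache({
        typePolicies: {
          Query: {
            fields: {
              feed: {
                keyArgs: false, // treat every feed(...) call as the same list
                merge(existing = [], incoming) {
                  return [...existing, ...incoming];
                },
              },
            },
          },
        },
      });

      // In the component: request the next page; the merge function above combines the results
      const { data, fetchMore } = useQuery(GET_FEED, {
        variables: { offset: 0, limit: 10 },
      });

      <button onClick={() => fetchMore({ variables: { offset: data.feed.length } })}>
        Load more
      </button>;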

Optimistic mutation

  • If the mutation response is predictable and doesn't depend on server-side computation, it's possible to optimistically update the UI to make the whole experience feel more responsive.

  • // Example usage
    function CommentPageWithData() {
      const [mutate] = useMutation(UPDATE_COMMENT);
      return (
        <Comment
          updateComment={({ commentId, commentContent }) =>
            mutate({
              variables: { commentId, commentContent },
              optimisticResponse: {
                // `updateComment` as named in the schema
                updateComment: {
                  id: commentId,
                  __typename: "Comment",
                  content: commentContent,
                },
              },
            })
          }
        />
      );
    }
  • Optimistic mutation lifecycle

    • The optimistic response is stored as a separate, optimistic version of the object and doesn't overwrite the cached entry immediately. This keeps the cache clean if our prediction was wrong or the mutation doesn't succeed.

    • All active queries listening to the modified comment are updated immediately, triggering a re-render in the associated components. This happens asynchronously alongside the network request, so the optimistic changes are visible right away.

    • Once the mutation's actual response is received from the server, the optimistic version is removed from the cache, and the real response is written to the cache as usual, if needed.

    • All active queries listening to the modified comment are updated again; if the server response matches our optimistic response, users don't see any change.

Note

  • Query global data and user-specific data separately to improve the performance of your server-side response cache. This allows the server to cache the global data as a single response and serve it to anyone, while caching user-specific data separately.
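
    An illustrative split (the field names are made up):

    // One combined query mixes per-user data with globally cacheable data...
    const DASHBOARD = gql`
      query Dashboard {
        topProducts { id name }      # identical for every user
        me { id cartItems { id } }   # user-specific
      }
    `;

    // ...querying them separately lets the server cache the global part once for everyone
    const TOP_PRODUCTS = gql`
      query TopProducts {
        topProducts { id name }
      }
    `;

    const MY_CART = gql`
      query MyCart {
        me { id cartItems { id } }
      }
    `;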

Doubts
