0. Relay Live Resolvers

Relay Live Resolvers are an experimental Relay feature which lets you extend your server's GraphQL schema to include additional client state. For example, you might want to expose data from a legacy state management solution or from IndexedDB to your Relay components.

Your experience writing GraphQL resolvers on the server should mostly carry over to writing Live Resolvers. The main distinction is that Live Resolvers are reactive: instead of just reading a single value, a Live Resolver models a value that might change over time, similar to an observable.


Experimental

Live Resolvers are experimental. They require additional configuration to use:

  • You must enable the following runtime feature flags
    • ENABLE_RELAY_RESOLVERS
    • ENABLE_CLIENT_EDGES
  • You must enable the following compiler feature flag
    • enable_relay_resolver_transform
  • You must use the experimental LiveResolverStore when you configure your Relay Environment.

Additionally, Live Resolvers are not yet compatible with a few features of Relay:

  • Garbage Collection. Live Resolvers should only be used with garbage collection disabled.
  • Optimistic Updates. Live Resolvers may still behave unexpectedly in the face of optimistic updates.

Set Up Your Runtime

To use Relay Resolvers while they are still experimental, you will need to configure your Relay Environment to use the experimental LiveResolverStore.

import {Environment, Network, RecordSource, RelayFeatureFlags} from 'relay-runtime';
import LiveResolverStore from 'relay-runtime/store/experimental-live-resolvers/LiveResolverStore';

// Enable feature flags
RelayFeatureFlags.ENABLE_RELAY_RESOLVERS = true;
RelayFeatureFlags.ENABLE_CLIENT_EDGES = true;

// Configure your environment
export default new Environment({
  /* ... other config options */
  store: new LiveResolverStore(new RecordSource()),

  // Report errors thrown by Resolvers
  requiredFieldLogger: (e) => {
    if (e.kind === "relay_resolver.error") {
      console.error(`Error ${e.owner}.${e.fieldName}`, e.error);
    }
  },
});

Working example: here

Configure Your Compiler

Enable the resolver transform in your Relay compiler config (for example, relay.config.json). Note that eagerEsModules is especially important if you are using TypeScript:

{
  "src": "./src",
  "schema": "./schema.graphql",
  "language": "javascript",
  "featureFlags": {
    "enable_relay_resolver_transform": true
  },
  "eagerEsModules": true,
  "schemaExtensions": ["./src/relay/schemaExtensions"]
}

Working example: here

01. Relay Resolver Syntax

Live Resolvers are declared using a docblock with the tag @RelayResolver. These docblocks are picked up by the Relay compiler, which then expects the module containing the docblock to provide a corresponding named export. This allows Relay to automatically pull in exactly the resolver code needed to provide the data your components need.

Note: Relay does not currently parse your actual JavaScript. However, Relay will emit type assertions into its generated files which assert that your resolvers match Relay’s expectations.

/** @RelayResolver User.greeting: String */
export function greeting(): string {
  return "Hello!";
}

MARC NOTES: We need someplace to define what the expected resolver signatures are. It doesn't really make sense this early in the doc, since it hasn't covered things like model syntax, but I'm not sure exactly where it should fit. I'd like it to cover rules such as: @rootFragment means the first param is a fragment key; the lack of a rootFragment, when the field's parent type uses model syntax, means the first param is the model; and maybe even a table to make sure we're covering all the bases of what is expected. Can we have a …

02. Adding Fields

Adding a new field to the schema requires telling Relay about the existence of the field with a @RelayResolver docblock tag, and then implementing an exported resolver function for that field.

The value of the @RelayResolver docblock tag describes the name and type of the field in a format that is almost exactly GraphQL's Schema Definition Language. The format is: ParentType [dot] field_name [colon] ReturnType. For example: @RelayResolver User.full_name: String.

Reading Data Via a Fragment

The baseline way for a resolver to access data is by defining a fragment on the field’s parent. With this approach you pass your fragment to readFragment in the body of your resolver. Note that you must also tell Relay the fragment you are using with the @rootFragment docblock tag:

import {graphql} from 'relay-runtime';
import {readFragment} from 'relay-runtime/store/ResolverFragments';

/**
 * @RelayResolver User.full_name: String
 * @rootFragment UserFullName
 */
export function full_name(key): string {
  const user = readFragment(graphql`
    fragment UserFullName on User {
      first_name
      last_name
    }`, key);
  return `${user.first_name} ${user.last_name}`;
}

When you read via a fragment, Relay will ensure that the data you depend upon is fetched from the server (or from another resolver!) and that any time one of those values changes, your resolver will be re-evaluated.

Note: Your resolver function must be exported from your module as a named export using the field name (in the example above this is full_name). If you make an error here, you should get a Flow error in your generated Relay artifact where the field is used.
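
Once the field is defined, components can consume it just like any server-defined field. Here is a minimal sketch (the component and fragment names are hypothetical):

import {graphql, useFragment} from 'react-relay';

function UserGreeting({userKey}) {
  // `full_name` is computed on the client by the resolver above; Relay
  // fetches first_name and last_name from the server as needed.
  const user = useFragment(graphql`
    fragment UserGreeting_user on User {
      full_name
    }`, userKey);
  return <h1>Hello, {user.full_name}!</h1>;
}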

03. Relay Model Types

In a server implementation of a GraphQL schema, each GraphQL type is generally associated with either a model class or a data shape representing that type. Fields that return this type return an instance of the model, and resolver fields on that type are modeled as either methods on the model or functions which accept an instance of it.

Relay Model Types let you follow a similar pattern on the client and come in two flavors:

  • Strong (default)
  • Weak

Strong objects must have a unique ID, whereas weak objects may not.

Defining a strong model type

Defining a strong model type requires a docblock to tell Relay about the existence of the type, as well as a function responsible for taking a unique id and returning the JavaScript model for that id's instance. Note that the model can be any JavaScript structure you want, as long as it's immutable.

The function must be a named export function using the same name as the newly defined GraphQL type.

import type {DataID} from 'relay-runtime';

type UserModel = {
  id: string,
  firstName: string,
  lastName: string,
};

/**
 * @RelayResolver User
 */
export function User(id: DataID): UserModel {
  return getUserById(id);
}

You may now add an edge that returns this type simply by defining a resolver which returns a User id:

import type {DataID} from 'relay-runtime';

/**
 * @RelayResolver Query.admin: User
 */
export function admin(): DataID {
  return "1"; // The admin's ID is 1
}

Note: While model ids must be unique among their type, they need not be globally unique.

With these two fields we now have a schema that looks like this (in GraphQL SDL):

type User {
  id: ID
}

type Query {
  admin: User
}
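
Reading this client-defined edge from a component works just like reading a server edge. A sketch (the query and component names here are made up for illustration):

import {graphql, useLazyLoadQuery} from 'react-relay';

function AdminId() {
  const data = useLazyLoadQuery(graphql`
    query AdminIdQuery {
      admin {
        id
      }
    }`, {});
  return <span>Admin ID: {data.admin?.id}</span>;
}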

Defining a weak model type

Sometimes your type simply does not have its own unique identifier. In these cases, you may define your type as @weak. With weak types, rather than defining a resolver function, you simply define the JavaScript model type. However, it then becomes the responsibility of any resolver which returns an edge to this type to provide the model object itself.

Weak objects are delineated with the @weak docblock tag.

/**
 * @RelayResolver User
 * @weak
 */
export type UserModel = {
  firstName: string,
  lastName: string,
};

That's it! Now let's look at defining an edge to a weak type:

/**
 * @RelayResolver Query.admin: User
 */
export function admin(): UserModel {
  // UserModel is the weak model type defined above.
  return {firstName: "Clara", lastName: "Shai"};
}

Note: For types which do have a sense of identity, strong types are encouraged since they ensure there is only ever one source of truth for a given item of that type.

Generally, weak objects are only necessary when you have a bag of properties that you wish to group together but that actually "belong" to their parent type. For example, a ProfilePicture might not have its own identifier; it's really just a collection of properties belonging to the parent User, Page, etc., which have been grouped together for ease of use.
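
For instance, that ProfilePicture case might look something like the following sketch. The type, field, and model property names here are hypothetical, and the edge resolver receives the parent User model, a pattern covered in the next section:

/**
 * @RelayResolver ProfilePicture
 * @weak
 */
export type ProfilePictureModel = {
  url: string,
  width: number,
  height: number,
};

/**
 * @RelayResolver User.profile_picture: ProfilePicture
 */
export function profile_picture(user): ProfilePictureModel {
  // `user` is the parent User model; we group a few of its properties
  // into a weak object.
  return {
    url: user.profilePictureUrl,
    width: 128,
    height: 128,
  };
}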

Fields on Model Types

Note that even though our UserModel JavaScript type in both cases above included firstName and lastName, those fields have not yet been added to our schema. Let's try adding them!

Adding fields to model types is considerably easier than adding fields to server types. By default, resolvers on model-backed types are simply passed an instance of the JavaScript model backing the type. Let's see that in action:

/** @RelayResolver User.first_name: String */
export function first_name(user: UserModel): string {
  return user.firstName;
}

/** @RelayResolver User.last_name: String */
export function last_name(user: UserModel): string {
  return user.lastName;
}

We could even add a derived field full_name:

/** @RelayResolver User.full_name: String */
export function full_name(user: UserModel): string {
  return `${user.firstName} ${user.lastName}`;
}

With these fields added, our schema now looks like this:

type User {
  id: ID
  first_name: String
  last_name: String
  full_name: String
}

type Query {
  admin: User
}

Note: If an individual field resolver would prefer to read data using a fragment (rather than getting passed the model) they may specify a @rootFragment in their docblock. In this case, the resolver will get passed a fragment key similar to resolvers on server types.
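
For example, a field on User that prefers to read sibling fields via a fragment might look like the following sketch (the initials field is hypothetical):

import {graphql} from 'relay-runtime';
import {readFragment} from 'relay-runtime/store/ResolverFragments';

/**
 * @RelayResolver User.initials: String
 * @rootFragment UserInitials
 */
export function initials(key): string {
  // Because @rootFragment is present, the first argument is a fragment
  // key rather than the UserModel instance.
  const user = readFragment(graphql`
    fragment UserInitials on User {
      first_name
      last_name
    }`, key);
  return `${user.first_name?.[0] ?? ""}${user.last_name?.[0] ?? ""}`;
}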

04. Live Resolvers

So far we've demonstrated resolvers which are simply pure functions: they read values off of their parent and compute some value which is a pure function of those inputs. But what if we wanted to add data from outside of Relay to the graph? This is where Live Resolvers come in. A live resolver may read data from other sources, such as a legacy data store, a client-local database like IndexedDB, or some other browser-only data.

As an example, let's add a field that exposes the battery level of the user's computer to the graph:

import type {LiveState} from 'relay-runtime/store/experimental-live-resolvers/LiveResolverStore';

let battery = null;
navigator.getBattery().then(b => {
  battery = b;
});

/**
 * @RelayResolver Query.battery_level: Float
 * @live
 */
export function battery_level(): LiveState<number> {
  return {
    read() {
      // Our promise may not have resolved yet.
      // We'll come back and fix this in the next section.
      if (battery == null) {
        return 1;
      }
      return battery.level;
    },
    subscribe(cb) {
      battery.addEventListener("levelchange", cb);
      return () => {
        battery.removeEventListener("levelchange", cb);
      };
    },
  };
}

We can now query the user’s battery level from anywhere in our app like this:

query MyQuery {
  battery_level
}
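
For example, a component might read it with a lazily loaded query (a sketch; the component and query names are arbitrary):

import {graphql, useLazyLoadQuery} from 'react-relay';

function BatteryIndicator() {
  const data = useLazyLoadQuery(graphql`
    query BatteryIndicatorQuery {
      battery_level
    }`, {});
  // Re-renders whenever the live resolver notifies Relay of a change.
  return <span>{Math.round(data.battery_level * 100)}%</span>;
}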

A more common use of Live Resolvers is to connect up a client data layer to Relay. For an example see Live Resolvers as a Bridge to a Legacy Data Store.

Note that there is a subtle bug here: the battery object is obtained via a promise and may not be available yet. To support use-cases like this, Live Resolvers have the ability to suspend. Let's look at that next: Live Resolver Suspense.

05. Live Resolver Suspense

Since Live Resolvers model data that changes over time, they may also need to model data that is not yet available. To handle this case, Relay lets you return a special suspense sentinel value from your resolver, which will cause all components reading that field to suspend until the resolver updates and a new non-suspense value is returned.

In our previous example, we had a bug where the battery API is accessed via a promise and may not be available yet. Let's add suspense to our resolver to handle that initial loading state:

import type {LiveState} from 'relay-runtime/store/experimental-live-resolvers/LiveResolverStore';
import {suspenseSentinel} from 'relay-runtime/store/experimental-live-resolvers/LiveResolverSuspenseSentinel';

let battery = null;
const batteryPromise = navigator.getBattery().then(b => {
  battery = b;
});

/**
 * @RelayResolver Query.battery_level: Float
 * @live
 */
export function battery_level(): LiveState<number> {
  return {
    read() {
      if (battery == null) {
        return suspenseSentinel();
      }
      return battery.level;
    },
    subscribe(cb) {
      let unsubscribed = false;
      if (battery == null) {
        // Once the battery object arrives, start listening and notify
        // Relay so it re-reads the (no longer suspended) value.
        batteryPromise.then(() => {
          if (!unsubscribed) {
            battery.addEventListener("levelchange", cb);
            cb();
          }
        });
      } else {
        battery.addEventListener("levelchange", cb);
      }
      return () => {
        unsubscribed = true;
        if (battery != null) {
          battery.removeEventListener("levelchange", cb);
        }
      };
    },
  };
}
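
While the resolver is returning the suspense sentinel, any component reading battery_level will suspend, so it should be rendered beneath a React Suspense boundary. A sketch, reusing the BatteryIndicator component from the previous section:

import * as React from 'react';
import {Suspense} from 'react';

function BatteryPanel() {
  return (
    <Suspense fallback={<span>Reading battery…</span>}>
      <BatteryIndicator />
    </Suspense>
  );
}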

06. Live Resolvers as a Bridge to a Legacy Data Store

One canonical use for Live Resolvers is to model data from an existing data store, such as Redux, in your graph. This allows new product code to be written using Relay APIs without having to worry about whether the data lives in the old data store or comes directly from GraphQL. In the future you can even migrate types to the GraphQL server without needing to meaningfully change the product code that consumes them.

Let’s combine our knowledge of Relay Model Types and Live Resolvers to prototype adding a User type to our graph that is backed by Redux.

import type {DataID} from 'relay-runtime';
import type {LiveState} from 'relay-runtime/store/experimental-live-resolvers/LiveResolverStore';

// `store` is assumed to be your Redux store and `getUsers` a selector
// over its state, both defined elsewhere in your app.

/**
 * @RelayResolver User
 * @live
 */
export function User(id: DataID): LiveState<UserModel> {
  return {
    read() {
      const users = getUsers(store.getState());
      return users[id];
    },
    subscribe(cb) {
      return store.subscribe(cb);
    },
  };
}

Warning: Note that Redux will notify Relay on every state update, even ones that don’t affect this user. Today this might cause some problematic performance characteristics in Relay, but we are actively working on a new architecture which we expect to solve this issue.

Tip: Check out our section on Batching Live Resolver Updates for making updates from Redux-like data sources more performant.
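
Until then, one mitigation (a sketch, not a built-in Relay API) is to have the resolver's subscribe function compare the value it cares about and only notify Relay when that value actually changes. This reuses the same imports and assumed helpers (store, getUsers) as the example above:

/**
 * @RelayResolver User
 * @live
 */
export function User(id: DataID): LiveState<UserModel> {
  const readUser = () => getUsers(store.getState())[id];
  return {
    read: readUser,
    subscribe(cb) {
      let prev = readUser();
      // Redux notifies on every dispatch; only forward the notifications
      // where this particular user actually changed.
      return store.subscribe(() => {
        const next = readUser();
        if (next !== prev) {
          prev = next;
          cb();
        }
      });
    },
  };
}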

For a full example see: https://github.com/captbaritone/redux-to-relay-with-live-resolvers-example

For an experimental example that uses Live Resolvers to model SQLite data in the graph see: captbaritone/redux-to-relay-with-live-resolvers-example#7


07. Batching Live Resolver Updates

Today, each Live Resolver field must maintain its own subscription to the external data source. If a single change to that data source results in notifying many fields, you may wish to batch updates to Relay, so that Relay can wait for all notifications triggered by that one change and process them as a batch. The LiveResolverStore exposes a batching method, batchLiveStateUpdates, to support this. In the case of a Redux store, you might employ an approach like:

import {createStore} from 'redux';
import LiveResolverStore from 'relay-runtime/store/experimental-live-resolvers/LiveResolverStore';

// The store used in your RelayEnvironment.
// `source` is your RecordSource and `log` your logger function.
const relayStore = new LiveResolverStore(source, {
  gcReleaseBufferSize: 0,
  log,
});

const reduxStore = createStore(/* ... */);

const originalDispatch = reduxStore.dispatch;

reduxStore.dispatch = action => {
  relayStore.batchLiveStateUpdates(() => {
    // All Redux subscribers (including Relay Live Resolvers)
    // will be notified synchronously here.
    originalDispatch(action);
  });
};

Since Redux's dispatch method notifies all subscribers synchronously, this approach allows Relay to wait until it has received all of the notifications for this action before it starts computing new resolver values or notifying its own subscribers of the changes.

08. Fields that return JavaScript values

In some cases, it may not be possible to express the data you need to expose in the graph using GraphQL types. As an escape hatch, Relay offers a special custom scalar called RelayResolverValue. If you type your resolver as returning this custom scalar, Relay will allow you to return an arbitrary JavaScript value from your resolver.

/** @RelayResolver Query.set_of_ids: RelayResolverValue */
export function set_of_ids(): Set<number> {
  return new Set([1, 2, 3]);
}

Components can now read this field, and they will get back a Set<number>. In fact, Relay will even ensure that the read field is type safe by generating a type that is (essentially) "the return value of the function set_of_ids".

We strongly recommend that you avoid this escape hatch when at all possible, but acknowledge that it may be helpful in certain situations.

Note: Relay still expects these values to be immutable
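
For completeness, consuming such a field might look like this sketch (the component and query names are made up):

import {graphql, useLazyLoadQuery} from 'react-relay';

function IdCount() {
  const data = useLazyLoadQuery(graphql`
    query IdCountQuery {
      set_of_ids
    }`, {});
  // The generated type is derived from the resolver's return type,
  // so `data.set_of_ids` is a Set<number> here.
  return <span>{data.set_of_ids.size} ids</span>;
}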

09. Edges to Server Types

In addition to scalar fields, you can also use Resolvers to model an edge to a server type, as long as that server type implements Node. This resolver must simply return the ID of the object to which the field is an edge.

Here’s an example of adding a root field selected_user to the query that returns a User based on data in a Flux store.

import type {DataID} from 'relay-runtime';
import type {LiveState} from 'relay-runtime/store/experimental-live-resolvers/LiveResolverStore';

// `store` and `getSelectedUserId` are assumed to be your Flux store and a
// selector over its state, defined elsewhere in your app.

/**
 * @RelayResolver Query.selected_user: User
 * @live
 */
export function selected_user(): LiveState<DataID> {
  return {
    read() {
      return getSelectedUserId(store.getState());
    },
    subscribe(cb) {
      return store.subscribe(cb);
    },
  };
}

This could then be read from a fragment like this:

import {graphql, useFragment} from 'react-relay';

function SelectedUserAndFriend({userKey}) {
  const data = useFragment(graphql`
    fragment SelectedUserComponent on Query {
      selected_user @waterfall {
        name
        best_friend {
          name
        }
      }
    }`, userKey);
  const user = data.selected_user;
  return <div>{user.name} + {user.best_friend.name}</div>;
}

When you read a Resolver that's an edge to a server type, Relay will have to fetch the data hanging off of that edge in a second request, since it only knows which user to fetch once the client code has actually rendered.

This means that reading an edge like this comes at the cost of an additional network waterfall. To make this explicit, Relay requires that you annotate every read of such a field with the directive @waterfall. The Relay compiler will ensure you apply this directive correctly.

Implementation Notes

To achieve this follow-up server request, the Relay compiler generates a custom query for each selection that references the resolver field. For example, the selection above would generate a query that looks like this:

query SelectedUserComponent_selected_user_query($id: ID) {
  node(id: $id) {
    ... on User {
      name
      best_friend {
        name
      }
    }
  }
}

This query is never presented to you; it's simply an implementation detail of how Relay fetches server data. It's similar in its implementation to @refetchable fragments.

10. @outputType Resolvers

@outputType is currently an experimental feature. Please get in touch if you intend to use it in a production use-case.

In cases where you have self-contained blobs of data whose shape can be defined using GraphQL schema, there is a resolver variant that lets you define a resolver returning a fully populated blob conforming to that schema.

@outputType can be used by defining a self-contained type in your client-schema extension, and then defining a resolver which returns that type and is annotated with @outputType.

Relay will generate a return type for your resolver which models the fully expanded shape of the blob. When your resolver evaluates, the resulting blob will be published into the Relay store.

Client Schema Extension

type SomeBlob {
  name: String
  sub_object: SomeSubObject
}

type SomeSubObject {
  some_field: String
}

Resolver

/**
 * @RelayResolver Query.some_blob: SomeBlob
 * @outputType
 */
export function some_blob() {
  return {
    name: "I am a blob!",
    sub_object: {
      some_field: "I am a field on a sub-object"
    }
  }
}

Note that @outputType can be combined with @live to model a blob of data that changes over time.
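
A sketch of what that combination might look like, assuming the blob lives in some client-side store (store and selectBlob are hypothetical):

import type {LiveState} from 'relay-runtime/store/experimental-live-resolvers/LiveResolverStore';

/**
 * @RelayResolver Query.some_blob: SomeBlob
 * @outputType
 * @live
 */
export function some_blob(): LiveState<{name: string, sub_object: {some_field: string}}> {
  return {
    read() {
      // `selectBlob` returns a value matching the SomeBlob schema.
      return selectBlob(store.getState());
    },
    subscribe(cb) {
      return store.subscribe(cb);
    },
  };
}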

Component (consuming data)

Components can then request data from within the blob.

import {graphql, useLazyLoadQuery} from 'react-relay';

function MyComponent() {
  const data = useLazyLoadQuery(graphql`
    query MyComponentQuery {
      some_blob { # Will invoke the some_blob resolver
        sub_object {
          some_field
        }
      }
    }`, {});

  return <h1>{data.some_blob.sub_object.some_field}</h1>;
}

Constraints

Currently, @outputType is constrained in a few ways:

  • The return type must be weak (no id field) and contain only weak children.
  • The return type cannot be recursive: no child of the field can define a field that references the parent type.
  • It is gated behind the compiler feature flag relay_resolver_enable_output_type (see the config snippet below).
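
To enable it, add the flag to your compiler config's feature flags, assuming it is configured in the same way as enable_relay_resolver_transform shown earlier:

{
  "featureFlags": {
    "enable_relay_resolver_transform": true,
    "relay_resolver_enable_output_type": true
  }
}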
