@malectro
Created March 20, 2018 21:10
Redux TTL

The Premise

Currently, the response to an API request for filtered data is inserted into our store in normalized form.

For example, getting audience-members/${audienceMemberId}/due-events returns a set of DueEvents, which are dumped into a single object that holds all due events regardless of audience member (or any other filter property).

Then when the list is rendered, a selector re-filters the DueEvents stored in that map by audienceMemberId.

Reducer

// Merge received DueEvents into the normalized map, keyed by id.
export default (state: DueEventsState = {}, action: DueEventsAction) => {
  switch (action.type) {
    case RECEIVE: {
      return action.payload.reduce((dueEvents, dueEvent) => {
        dueEvents[dueEvent.id] = dueEvent;
        return dueEvents;
      }, {...state});
    }
  }
  return state;
};

Selector

const getDueEvents = state => state.dueEvents;
const selectGroupedDueEventsByAudienceMemberId = createSelector(
  getDueEvents,
  dueEvents => groupBy(dueEvents, 'audienceMemberId')
);

const useAudienceMemberId = (state, audienceMemberId) => audienceMemberId;
export const selectAudienceMemberDueEvents = createSelector(
  selectGroupedDueEventsByAudienceMemberId,
  useAudienceMemberId,
  (groupedDueEvents, audienceMemberId) => groupedDueEvents[audienceMemberId]
);

This pattern has benefits over storing each query separately: DueEvents used elsewhere are updated as a side effect, and users are never presented with two versions of the same data.

In the case where the API paginates the data, the implementation is slightly more complicated. The page itself (a list of ids plus the page number) needs to be stored as well, because paginating on the client can be slow and may yield different results than the server. (This is debatable.)
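One way to sketch this (a hypothetical reducer and action shape, not from the snippets above) is to store page entries keyed by query, each holding only the ids on that page, while the DueEvent objects themselves stay in the normalized map:

```javascript
// Hypothetical shape: pages are keyed by a query string such as
// `${audienceMemberId}:${pageNumber}`. Only ids live here; the
// DueEvent objects stay in the normalized dueEvents map.
const RECEIVE_PAGE = 'dueEvents/RECEIVE_PAGE';

function pagesReducer(state = {}, action) {
  switch (action.type) {
    case RECEIVE_PAGE: {
      const {audienceMemberId, pageNumber, dueEvents} = action.payload;
      const key = `${audienceMemberId}:${pageNumber}`;
      return {
        ...state,
        [key]: {pageNumber, ids: dueEvents.map(event => event.id)},
      };
    }
    default:
      return state;
  }
}

// Selecting a page resolves its ids against the normalized map.
function selectPage(state, audienceMemberId, pageNumber) {
  const page = state.pages[`${audienceMemberId}:${pageNumber}`];
  return page ? page.ids.map(id => state.dueEvents[id]) : undefined;
}
```

Because the page stores ids rather than copies, an updated DueEvent still shows up everywhere it is referenced, preserving the single-version-of-the-data property described above.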

The Problem

But all this data can build up. On the web we often don't worry about memory growth like this because it builds slowly and is unlikely to cause a problem before the user closes or refreshes the tab. Ideally, though, we wouldn't leave open the possibility of replicating the entire database in client memory.

Possible Solutions

It seems that we need some sort of TTL on our redux data, but this leaves us with some open questions.

  • How and when would we cull this data?
    • On an interval?
    • On get? (This doesn't fully solve the memory bloat problem.)
  • How would we handle normalized dependencies when data is randomly culled?
    • If a dependency is missing, we could assume the parent data itself needs to be culled.
    • We could keep some sort of WeakMap reference between data objects and their dependents.
  • Would removing our normalizations and just keeping queries with TTLs greatly simplify this process?
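As a rough sketch of the interval approach (all names and the action shape here are hypothetical), each entry could carry a receivedAt timestamp when it enters the store, and a periodic pass could drop anything older than the TTL:

```javascript
// Hypothetical TTL sketch: entries are wrapped with a receivedAt
// timestamp on receipt; a periodic cull removes expired entries.
const TTL_MS = 5 * 60 * 1000; // five minutes, arbitrary

function receive(state, dueEvents, now = Date.now()) {
  const next = {...state};
  for (const dueEvent of dueEvents) {
    next[dueEvent.id] = {dueEvent, receivedAt: now};
  }
  return next;
}

function cull(state, now = Date.now(), ttl = TTL_MS) {
  const next = {};
  let changed = false;
  for (const [id, entry] of Object.entries(state)) {
    if (now - entry.receivedAt < ttl) {
      next[id] = entry;
    } else {
      changed = true;
    }
  }
  // Return the same reference when nothing expired, so memoized
  // selectors downstream don't recompute needlessly.
  return changed ? next : state;
}

// An interval-dispatched action could drive the cull, e.g.:
// setInterval(() => store.dispatch({type: CULL, payload: Date.now()}), TTL_MS);
```

This only addresses the first open question; it does nothing about normalized dependencies, which would still need one of the strategies above (cascading culls or tracked references).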