Currently, a request to the API for filtered data is inserted into our store in a normalized fashion. For example, getting `audience-members/${audienceMemberId}/due-events` returns a set of `DueEvent`s, which are dumped into a single object that holds all due events regardless of audience member (or any other filter properties). Then, when the list is rendered, a selector refilters the stored `DueEvent`s by `audienceMemberId`.
Reducer
```typescript
export default (state: DueEventsState = {}, action: DueEventsAction) => {
  switch (action.type) {
    case RECEIVE: {
      return action.payload.reduce((dueEvents, dueEvent) => {
        dueEvents[dueEvent.id] = dueEvent;
        return dueEvents;
      }, {...state});
    }
  }
  return state;
};
```
Selector
```typescript
const getDueEvents = state => state.dueEvents;

const selectGroupedDueEventsByAudienceMemberId = createSelector(
  getDueEvents,
  dueEvents => groupBy(dueEvents, 'audienceMemberId')
);

const useAudienceMemberId = (state, audienceMemberId) => audienceMemberId;

export const selectAudienceMemberDueEvents = createSelector(
  selectGroupedDueEventsByAudienceMemberId,
  useAudienceMemberId,
  (groupedDueEvents, audienceMemberId) => groupedDueEvents[audienceMemberId]
);
```
This pattern has benefits over storing each query's result separately: `DueEvent`s used elsewhere are updated as a side effect, and users are never presented with two versions of the same data.
When the API paginates the data, the implementation is slightly more complicated. Each page (the list of ids for a given page number) needs to be stored as well, because paging on the client can be slow and can yield different results than the server does. (This is debatable.)
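One way to store a page alongside the normalized entities is to keep only the server-ordered ids, keyed by query and page number. This is a sketch under assumed names (`PagesState`, `receivePage`, and the query-key string are all hypothetical):

```typescript
interface DueEvent { id: string; audienceMemberId: string; }

// Server-ordered ids per page, keyed by a query identifier.
interface PagesState {
  [queryKey: string]: { [pageNumber: number]: string[] };
}

// Record a received page: the entities themselves would still be merged
// into the normalized dueEvents slice by the reducer shown earlier.
const receivePage = (
  pages: PagesState,
  queryKey: string,
  pageNumber: number,
  events: DueEvent[]
): PagesState => ({
  ...pages,
  [queryKey]: {
    ...pages[queryKey],
    [pageNumber]: events.map(e => e.id),
  },
});
```

Rendering a page then means looking up its id list and mapping each id into the normalized slice, so the entity objects stay deduplicated while the server's ordering and page boundaries are preserved.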
But all this data can build up. On the web we often don't worry about memory leaks like this because they build slowly and are unlikely to cause a problem before the user closes or refreshes the tab. Ideally, though, we wouldn't leave open the possibility of replicating the entire database in client memory.
It seems that we need some sort of TTL on our redux data, but this leaves us with some open questions.
- How and when would we cull this data?
- On an interval?
- On get? (This doesn't fully solve the memory bloat problem.)
- How would we handle normalized dependencies when data is randomly culled?
- If a dependency is missing, we could assume the parent data itself needs to be culled.
- We could keep some sort of `WeakMap` reference between data objects and their dependents.
- Would removing our normalizations and just keeping queries with TTLs greatly simplify this process?
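To make the interval option above concrete, here is a minimal culling sketch. It assumes each cached entry records when it was received; `fetchedAt`, `TTL_MS`, and `cullStale` are invented names, and this ignores the dependency problem entirely:

```typescript
// Each cached entry carries a timestamp from when it was received.
interface Timestamped { fetchedAt: number; }
type Cache<T extends Timestamped> = Record<string, T>;

// Assumed time-to-live: five minutes.
const TTL_MS = 5 * 60 * 1000;

// Return a new cache with entries older than the TTL dropped.
// Could run on an interval, or on each read ("on get").
const cullStale = <T extends Timestamped>(
  cache: Cache<T>,
  now: number
): Cache<T> =>
  Object.fromEntries(
    Object.entries(cache).filter(([, v]) => now - v.fetchedAt <= TTL_MS)
  );
```

Running this on an interval bounds memory growth but can evict an entry another query still depends on, which is exactly the normalized-dependency question raised above; culling only "on get" keeps reads fresh but leaves never-read entries resident indefinitely.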