
@bvaughn
Last active May 21, 2024 11:40
Notes about the in-development React <Profiler> component

Profiler

React 16.4 will introduce a new Profiler component (initially exported as React.unstable_Profiler) for collecting render timing information in order to measure the "cost" of rendering for both sync and async modes.

Profiler timing metrics are significantly faster than those built around the User Timing API, and as such we plan to provide a production+profiling bundle in the future. (The initial release will only log timing information in DEV mode, although the component will still render its children, without timings, in production mode.)

How is it used?

Profiler can be declared anywhere within a React tree to measure the cost of rendering that portion of the tree. For example, a Navigation component and its descendants:

const Profiler = React.unstable_Profiler;

render(
  <App>
    <Profiler id="Navigation" onRender={callback}>
      <Navigation {...props} />
    </Profiler>
    <Main {...props} />
  </App>
);

Multiple Profilers can be used to measure different parts of an application:

const Profiler = React.unstable_Profiler;

render(
  <App>
    <Profiler id="Navigation" onRender={callback}>
      <Navigation {...props} />
    </Profiler>
    <Profiler id="Main" onRender={callback}>
      <Main {...props} />
    </Profiler>
  </App>
);

Profilers can also be nested to measure different components within the same subtree:

const Profiler = React.unstable_Profiler;

render(
  <App>
    <Profiler id="Panel" onRender={callback}>
      <Panel {...props}>
        <Profiler id="Content" onRender={callback}>
          <Content {...props} />
        </Profiler>
        <Profiler id="PreviewPane" onRender={callback}>
          <PreviewPane {...props} />
        </Profiler>
      </Panel>
    </Profiler>
  </App>
);

Although Profiler is a lightweight component, it should be used sparingly; each instance adds some CPU and memory overhead to an application.

onRender callback

The onRender callback is called each time the root renders. It receives the following parameters:

  • id: string - The id value of the Profiler tag that was measured. (This id can change between renders if it is derived from state or props.)
  • phase: string - Either "mount" or "update" (depending on whether this root was newly mounted or has just been updated).
  • actualTime: number - Time spent rendering the Profiler and its descendants for the most recent "mount" or "update" render. 1
  • baseTime: number - Duration of the most recent render for each individual component within the Profiler tree. 1
  • startTime: number - When the Profiler began the recently committed render. 2
  • commitTime: number - The time at which the current commit took place. 2

1: See "Timing metrics" section below for more detailed information about what this time represents.

2: See "Start and commit times" section below for more detailed information about what this time represents.
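
For illustration, a minimal sketch of an onRender callback that just logs these parameters (using the same callback name as the JSX examples above) could look like this:

// A minimal sketch: log the parameters described above for each commit.
function callback(id, phase, actualTime, baseTime, startTime, commitTime) {
  console.log(
    `${id} [${phase}] actual: ${actualTime}ms, base: ${baseTime}ms, ` +
      `started at: ${startTime}, committed at: ${commitTime}`
  );
}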

Timing metrics

Here is a review of the types of timing React is now capable of reporting:

User Timing API

Measures start/stop times for each component lifecycle.

  • Opt in mechanism: Feature flag (typically DEV mode only)
  • Scope: Tracked for all components in an app
  • How is it measured?
    • Start/stop times for each component lifecycle
    • Measured as a realtime graph
  • When is it recorded?
    • Realtime graph is recorded after each lifecycle call.
  • What does it tell us?
    • Flame graph paints a useful picture of how events (e.g. mouse clicks) tie together with rendering.
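
Because these measurements go through the standard User Timing API, they can also be read programmatically. A rough sketch is shown below; the exact names of the entries React emits are an implementation detail and are not listed here.

// A rough sketch: reading User Timing measures recorded during a session.
// (The exact entry names React emits may vary between versions.)
const measures = performance.getEntriesByType('measure');
for (const entry of measures) {
  console.log(entry.name, entry.startTime, entry.duration);
}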

“Actual” render time (new)

Time spent rendering the Profiler and its descendants for the most recent render/update.

  • Opt in mechanism: Wrap a component with <Profiler>
  • Scope: Measured for descendants of Profiler only
  • How is it measured?
    • Start timer during “begin” phase, stop during “complete” phase
    • Paused (and accumulated) for scheduling/timing interruptions 3
    • Paused (and accumulated) for aborted renders (e.g. suspense)
  • When is it recorded?
    • A new snapshot is recorded each time a Profiler is re-rendered
  • What does it tell us?
    • How well does the subtree make use of shouldComponentUpdate for memoization?
    • The more this time decreases for update renders, the better the memoization.

“Base” render time (new)

Duration of the most recent render for each individual component within the Profiler tree.

  • Opt in mechanism: Wrap a component with <Profiler>
  • Scope: Measured for descendants of Profiler only
  • How is it measured?
    • Measured for each fiber below a Profiler component.
    • Recorded during “begin” phase.
      • Times are not updated/recorded if a component skips render because of shouldComponentUpdate
      • (Descendant times are also not updated in that case)
    • Bubble up (summed) for the Profiler during “complete” phase
    • Total times logged for Profiler (not for individual fibers)
  • When is it recorded?
    • A new snapshot is recorded each time a Profiler is re-rendered
  • What does it tell us?
    • How expensive our render functions are in the worst case (no memoization).
    • Lower this number by reducing the work done in render.

3: Until "resume" behavior is implemented, interruptions will not accumulate time.
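
To make the contrast concrete, here is a sketch (not part of the Profiler API itself) of comparing the two new timings inside an onRender callback. Roughly speaking, the gap between "base" and "actual" time on updates is the work saved by memoization:

// baseTime approximates the worst-case cost (no memoization), while actualTime
// reflects what was actually rendered, so their difference on updates is
// roughly the time saved by shouldComponentUpdate bail-outs.
function onRenderCallback(id, phase, actualTime, baseTime, startTime, commitTime) {
  if (phase === 'update') {
    const saved = baseTime - actualTime;
    console.log(`${id}: actual ${actualTime}ms, base ${baseTime}ms, ~${saved}ms saved by memoization`);
  }
}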

Start and commit times

At first glance, these values may seem redundant. Why is commit time necessary? Why isn't it just the time at which the onRender callback is called? And why is start time not just the commit time less the "actual" time?

Start time

Start time identifies when a particular commit started rendering. Although insufficient to determine the cause of the render, it can at least be used to rule out certain interactions (e.g. mouse click, Flux action). This may be helpful if you are also collecting other types of interactions and trying to correlate them with renders.

Start time isn't just the commit time less the "actual" time, because in async rendering mode React may yield during a render. This "yielded time" (when React was not doing work) is not included in either the "actual" or "base" time measurements.
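
As a sketch of the "rule out interactions" idea (assuming startTime is reported on the same timescale as performance.now(); the click listener and callback name below are hypothetical):

// Record when the most recent click happened, then compare it to startTime.
let lastClickTime = -Infinity;

window.addEventListener('click', () => {
  lastClickTime = performance.now();
});

function onRenderCallback(id, phase, actualTime, baseTime, startTime, commitTime) {
  if (startTime < lastClickTime) {
    // The render began before the click, so the click could not have caused it.
    console.log(`${id}: this render was not caused by the most recent click`);
  }
}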

Commit time

Commit time could be roughly determined using e.g. performance.now() within the onRender callback, but multiple Profiler components would end up with slightly different times for a single commit. Instead, an explicit time is provided (shared between all Profilers in the commit) enabling them to be grouped if desirable.
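
For example, here is a sketch of grouping measurements from multiple Profilers by their shared commitTime (the Map and callback below are hypothetical, not part of the API):

// Every Profiler in the same commit receives the same commitTime,
// so it can serve as a key for grouping their measurements.
const renderTimesByCommit = new Map();

function onRenderCallback(id, phase, actualTime, baseTime, startTime, commitTime) {
  if (!renderTimesByCommit.has(commitTime)) {
    renderTimesByCommit.set(commitTime, []);
  }
  renderTimesByCommit.get(commitTime).push({ id, phase, actualTime });
}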

@petar-i-todorov

Does startTime here mean when the render phase begins or when the commit phase begins? I've also posted this question on Stack Overflow. Please halp 🙏

Did you find the answer to your question? I've been trying to figure out how React.Profiler works for hours and it's still very unclear. Your schema from SO helped me understand it better, though. I think the right answer to your question is a), isn't it?

@curlhash commented May 7, 2024

@bvaughn Dear Brian, could you please help me understand whether I can rely on the data from onRender for setting performance optimization goals? Will it be the same data I inspect in DevTools? I use onRender to calculate:

  1. the number of re-renders for a given component,
  2. the total time this component spent rendering.
    I planned to use those two metrics (besides web vitals) to get objective data on the component's performance and use them as acceptance criteria for all future refactors of the component my team will be doing.

Say the <Cart /> component experienced 23 re-renders and spent 423 ms on all of them. On the next iteration of <Cart />, this component (whatever the implementation) should spend less; that would be the goal of the refactoring. However, it seems that the metric "total time" doesn't make much sense: it could differ even inside a single .map for different items. Or should I measure not the absolute time taken for rendering, but a percentage relative to the previous implementation, say -10% of total time?

  1. With that in mind, how should I set objective targets for performance optimization?
  2. Should I focus only on web vitals? Though for a small component refactor, LCP, FCP, and FID seem too coarse to show any difference.
  3. Should I aim only at reducing the number of re-renders?

Sorry for the many questions; I would appreciate any pointers and answers!

Hi @sorokinvj, I'm working on adding similar monitoring and am curious to know whether you got answers to these questions. It would be a great help if we could discuss this.

@sorokinvj

@curlhash Dear Abishek, to be honest I don't remember everything related to that case, but my current understanding is:

  1. The number of re-renders per se is not that important, but if you suspect it is causing a problem, then make an objective metric for the problem (say, page load time is too high), monitor it, try fixing the re-renders, and then see how that affects the problem metric. Hopefully you will see a correlation. If not, forget about re-renders; something else is contributing to the problem.

  2. Absolute rendering times seen in the profiler can't be treated as a metric, but they might still give you some insight; just don't rely on the numbers themselves, use them to update your intuition.

  3. If you still think something weird is going on in one of your components but you are unsure what, just ask for a code review; usually there are ways to optimize the component's code.
