How React Compiler Works Under the Hood

If you have ever stared at a useMemo dependency array and wondered whether you were describing the computation or accidentally encoding a race with concurrent rendering, you have already felt the gap between React’s execution model and what hooks can conveniently express. React Compiler closes part of that gap by moving memoization from author-maintained hooks into a compile-time analysis phase. The important thing to internalize is that this is a build plugin, not a behavioral change to the React package you import at runtime. Your components still run under React 17, 18, or 19; what changes is the JavaScript the bundler emits after the plugin has rewritten your functions.

From AST to HIR: why a control-flow graph matters

The plugin consumes the same Babel AST your TypeScript or SWC pipeline already produces, then lowers it into a high-level intermediate representation—the HIR—that is tailored for React-specific concerns rather than for generic JavaScript optimization. The HIR is organized around a control-flow graph (CFG): nodes represent basic blocks of execution, edges represent branches, loops, early returns, and merge points. That structure matters because React components are almost never straight-line code. You return null when data is missing, you branch on feature flags, you short-circuit when a child tree is empty. Manual useMemo runs before those branches in source order; if you need a memoized value only on one side of a conditional, you either hoist computation you did not want to run, split components artificially, or give up.
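To make the basic-block picture concrete, here is a minimal sketch (a plain function standing in for a component body, with hypothetical names; string returns stand in for JSX) of how a guard and an early return partition code into blocks:

```typescript
// block A: entry plus the guard condition
// block B: the early exit
// block C: the computation that is only reachable past the guard
function badgeLabel(count: number): string | null {
  if (count === 0) {
    return null; // block B: early-exit edge out of the CFG
  }
  // block C: runs only on the branch where count !== 0
  return count > 99 ? "99+" : String(count);
}
```

The analysis sees that the ternary in block C can never execute on the path through block B, which is exactly the reachability information that source order alone hides.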

The compiler’s CFG-based view lets analysis ask: which expressions are reachable from which entry points to the render function, and which values depend on which others along those paths? Data-flow passes track how values are produced, consumed, and merged across branches. Mutability passes ask whether a value might be mutated after creation in a way that would invalidate a cached result. Together, those passes approximate the questions a human reviewer asks when deciding if referential stability is safe—except they are exhaustive for the analyzed function and consistent across refactors.
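As one illustration of the mutability question (a simplified sketch with hypothetical names), the passes distinguish mutation that completes before a value escapes from mutation that could happen after a cached copy has been handed out:

```typescript
// The array is created and mutated entirely inside this function, so the
// finished value can safely be cached against { ids, showExtra }; if callers
// later pushed into the returned array, a cached copy would go stale.
function buildItems(ids: readonly string[], showExtra: boolean): string[] {
  const items = ids.map((id) => `item:${id}`); // locally created
  if (showExtra) {
    items.push("extra"); // local mutation, finished before the value escapes
  }
  return items;
}
```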

Dependency graphs and expression-level memoization

Once the compiler understands control flow and mutability, it builds dependency graphs for the values and functions that flow into JSX or into child props. The unit of optimization is intentionally smaller than “the whole component body.” Granular, expression-level memoization means two sibling computations can be cached independently: one might invalidate when user.id changes, another when a theme token changes, without forcing both into a single hook or splitting components purely for hook boundaries.

Concretely, imagine a dashboard shell in TypeScript that derives several values for the header and the sidebar from overlapping inputs:

type LayoutProps = {
  user: { id: string; name: string };
  accent: string;
};

export function DashboardLayout({ user, accent }: LayoutProps) {
  const greeting = `Hello, ${user.name}`;
  const sidebarKey = `${user.id}:${accent}`;
  return (
    <>
      <Header title={greeting} />
      <Sidebar cacheKey={sidebarKey} accent={accent} />
    </>
  );
}

A human might wrap greeting and sidebarKey in separate useMemo calls with duplicated dependency lists, or in a single useMemo returning an object (which creates a new object whenever any dependency changes and defeats fine-grained memoization downstream). The compiler’s graph can treat greeting and sidebarKey as separate memoized cells, each with its own precise dependency set, and emit code that preserves referential equality for greeting when only accent flips—something you can achieve manually, but rarely without friction.
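To see what "separate memoized cells" can mean in emitted code, here is a rough, simplified sketch: the real output reads slots from React's internal memo cache via a hook, but a plain array makes the per-cell dependency checks visible (all names here are illustrative, not the compiler's actual output):

```typescript
// Each cell owns its slots: greeting uses $[0..1], sidebarKey uses $[2..4].
// A cell recomputes only when its own recorded dependencies change.
type Slots = unknown[];

function deriveLayoutValues(
  $: Slots,
  user: { id: string; name: string },
  accent: string
) {
  let greeting: string;
  if ($[0] !== user.name) {
    greeting = `Hello, ${user.name}`;
    $[0] = user.name;
    $[1] = greeting;
  } else {
    greeting = $[1] as string; // accent changed? greeting cell is untouched
  }

  let sidebarKey: string;
  if ($[2] !== user.id || $[3] !== accent) {
    sidebarKey = `${user.id}:${accent}`;
    $[2] = user.id;
    $[3] = accent;
    $[4] = sidebarKey;
  } else {
    sidebarKey = $[4] as string;
  }
  return { greeting, sidebarKey };
}
```

When only accent changes between renders, the first guard never trips and greeting comes straight out of its slot, while sidebarKey recomputes against its own precise dependency set.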

After conditional returns: what hooks cannot do

The canonical illustration of compiler advantage is memoization after an early return. Hooks cannot be called conditionally; therefore any useMemo that should logically live “only when we actually render children” must be hoisted above the guard or split into another component. The compiler is not subject to the hooks rules in the same way—it rewrites your function into a valid hook-using implementation.

Consider a theme provider that merges a partial theme with context and should not allocate a new context value when there are no children:

import { createContext, use } from "react";
import type { ReactNode } from "react";

type Theme = { color: string; radius: number };

const ThemeContext = createContext<Theme>({ color: "#000", radius: 4 });

// Shallow-merge a partial override over the base theme from context.
function mergeTheme(partial: Partial<Theme> | undefined, base: Theme): Theme {
  return { ...base, ...partial };
}

type ThemeProviderProps = {
  theme?: Partial<Theme>;
  children?: ReactNode;
};

export function ThemeProvider({ theme, children }: ThemeProviderProps) {
  if (!children) {
    return null;
  }
  const base = use(ThemeContext);
  const value = mergeTheme(theme, base);
  return <ThemeContext value={value}>{children}</ThemeContext>;
}

Hand-written useMemo for value would have to sit above if (!children) return null, which means you still run use and mergeTheme on paths where you return null—unless you refactor into inner components. The compiler can recognize that value is only needed on the branch where children exist and still memoize it with dependencies { theme, base } in a transformed program that satisfies the rules of hooks. That is not a small stylistic nicety; it is the difference between structuring components for the optimizer in your head and structuring them for the product.
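In spirit, the transformed function keeps its cache reads unconditional while the merge itself is guarded by recorded dependencies. A simplified, self-contained sketch of that one memoized cell (a plain array stands in for React's memo cache; the names are illustrative):

```typescript
type Theme = { color: string; radius: number };
type Slots = unknown[];

// Recompute the merged theme only when theme or base changes identity;
// otherwise hand back the exact object from the previous render.
function mergedThemeCell(
  $: Slots,
  theme: Partial<Theme> | undefined,
  base: Theme
): Theme {
  let value: Theme;
  if ($[0] !== theme || $[1] !== base) {
    value = { ...base, ...theme };
    $[0] = theme;
    $[1] = base;
    $[2] = value;
  } else {
    value = $[2] as Theme;
  }
  return value;
}
```

Referential stability is the payoff: consumers of the context receive the same object across renders until an input actually changes, so they do not re-render needlessly.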

Validation passes and coupling to Babel

The React team’s early posts on the compiler emphasize that the Babel plugin is largely decoupled from Babel itself—the interesting work lives in the IR and passes, not in a one-off AST visitor. That architecture is why SWC integration can trail slightly yet still share the same conceptual pipeline: lower to HIR, analyze, emit. For you as an application author, the practical consequence is that compiler upgrades are build-time upgrades. Pinning babel-plugin-react-compiler (or the framework wrapper that enables it) is how you lock in a specific optimization strategy; a future minor version might choose different granularity for the same source. Teams with thin regression coverage around rendering should treat that like any other compiler toolchain and use exact versions until they have confidence.
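A minimal sketch of enabling and pinning the plugin in a plain Babel setup (the target option tells the emitted code which React major to assume; confirm exact option names against the plugin version you pin):

```javascript
// babel.config.js — enable React Compiler as an ordinary Babel plugin.
// Pin "babel-plugin-react-compiler" to an exact version in package.json so
// the optimization strategy only changes when you choose to upgrade.
module.exports = {
  plugins: [
    ["babel-plugin-react-compiler", { target: "18" }],
  ],
};
```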

ESLint rules bundled with the recommended eslint-plugin-react-hooks preset encode many of the invariants the static passes assume. When a pattern is ambiguous or unsafe, the compiler’s conservative choice is to skip the component rather than apply a wrong optimization. Linting narrows the gap between “compiles” and “compiles and optimizes.”
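A minimal lint configuration that surfaces those invariants before the compiler has to bail (classic eslintrc form shown; the preset name comes from eslint-plugin-react-hooks and may differ under flat config or newer plugin versions):

```javascript
// .eslintrc.cjs — the recommended preset enables rules-of-hooks and
// exhaustive-deps, the invariants the compiler's static passes assume.
module.exports = {
  extends: ["plugin:react-hooks/recommended"],
};
```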

Mental model for TypeScript developers

TypeScript does not change the fundamental story: types are erased before the plugin runs, so the analysis sees plain JavaScript semantics. Where TS helps you is in modeling props and context values so that mutability and optional branches are honest. If you lie to the type system (casting props.config as MutableConfig and then mutating during render), you violate the same rules the compiler's analysis depends on. Conversely, readonly props, discriminated unions for loading states, and immutable update patterns align with what the analysis can verify.
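For instance, a discriminated union keeps optional branches honest for both the type checker and the compiler's branch analysis (a minimal sketch; the types and names are illustrative):

```typescript
// Each state carries exactly the data that exists in that state, so render
// logic branches on "status" instead of poking at possibly-missing fields.
type RemoteData<T> =
  | { readonly status: "loading" }
  | { readonly status: "error"; readonly message: string }
  | { readonly status: "ready"; readonly data: T };

function summary(rows: RemoteData<readonly number[]>): string {
  switch (rows.status) {
    case "loading":
      return "Loading...";
    case "error":
      return `Failed: ${rows.message}`;
    case "ready":
      return `Total: ${rows.data.reduce((a, b) => a + b, 0)}`;
  }
}
```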

You should not expect the compiler to fix algorithmic complexity. If you sort ten thousand rows on every render, memoizing the sorted array reference still leaves you with ten thousand comparisons whenever dependencies change. The compiler makes React’s reconciliation and child prop comparisons cheaper; it does not replace Big-O discipline in your domain logic.
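A quick illustration of that boundary (the counter and names are purely illustrative): memoization decides how often the sort runs, not what it costs when it does run:

```typescript
// Every recompute performs the full comparison-sort work again; a memoized
// reference only skips runs whose dependencies did not change.
let comparisons = 0;
function sortRows(rows: readonly number[]): number[] {
  return [...rows].sort((a, b) => {
    comparisons += 1;
    return a - b;
  });
}
```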

What you get is a pipeline that mirrors how an expert would reason about a component—control flow, dependencies, mutability—applied mechanically and at finer granularity than manual hooks typically achieve, including cases like post-return memoization that the hooks API cannot express without structural workarounds. The next section turns from internals to the wires and switches: how you enable that pipeline in real projects and adopt it without freezing your roadmap.