INP: Measuring and Fixing Interaction Latency
In March 2024, Google retired First Input Delay (FID) and replaced it with Interaction to Next Paint (INP) as a Core Web Vital. The change was significant: FID measured only the input delay before the browser started handling the very first interaction on the page. INP measures the full latency of every interaction throughout the entire page visit — clicks, taps, keyboard input — and reports roughly the worst one observed (for pages with many interactions, a high percentile that ignores extreme outliers). Your site's Core Web Vitals assessment is then the 75th percentile of those per-visit values across real users.
This distinction matters enormously in practice. A page could pass FID while having terrible interactivity the moment a user clicked into a form or opened a menu. INP closes that gap. If your page performs a heavy computation when a dropdown opens, or if a third-party script blocks the main thread mid-session, INP will surface it.
The Good threshold is 200ms or less. Needs Improvement: over 200ms and up to 500ms. Poor: above 500ms.
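These buckets are trivial to encode when tagging field samples; a minimal helper (the function name is ours):

```javascript
// Bucket an INP sample into Google's rating bands.
// Thresholds: Good <= 200ms, Needs Improvement <= 500ms, Poor above that.
function rateINP(valueMs) {
  if (valueMs <= 200) return "good";
  if (valueMs <= 500) return "needs-improvement";
  return "poor";
}

console.log(rateINP(120)); // → "good"
```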
The Full Interaction Lifecycle
Every interaction INP measures consists of three phases:
Input delay: The time between the user's action (mousedown, touchstart, keydown) and the moment the browser's event loop starts handling it. This is entirely caused by other work occupying the main thread — long tasks that prevent the browser from getting to the event queue.
Processing time: The time spent executing your event handlers. If a click handler triggers a React re-render of a large component tree, or runs a synchronous loop over thousands of items, this phase grows.
Presentation delay: The time between your event handlers completing and the browser actually painting the visual update — committing the layout changes, compositing layers, and displaying the frame. This phase is affected by layout thrashing, large DOM trees, and non-composited CSS animations.
The total interaction latency (INP) is input delay + processing time + presentation delay. Chrome DevTools shows all three phases when you click on an interaction in the Performance panel.
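The same breakdown can be computed from an Event Timing API entry. A sketch using the standard `PerformanceEventTiming` fields (the helper name and the mock entry are ours):

```javascript
// Split a PerformanceEventTiming-like entry into INP's three phases.
// inputDelay:        time the event sat queued behind other main-thread work
// processingTime:    time spent running event handlers
// presentationDelay: remaining time until the next paint
function splitInteraction(entry) {
  const inputDelay = entry.processingStart - entry.startTime;
  const processingTime = entry.processingEnd - entry.processingStart;
  const presentationDelay =
    entry.startTime + entry.duration - entry.processingEnd;
  return { inputDelay, processingTime, presentationDelay };
}

// Mock entry: interaction started at t=1000, handlers ran 1040..1120,
// next paint at 1000 + 180 = 1180.
const phases = splitInteraction({
  startTime: 1000,
  processingStart: 1040,
  processingEnd: 1120,
  duration: 180,
});
console.log(phases); // { inputDelay: 40, processingTime: 80, presentationDelay: 60 }
```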
Profiling INP: Finding the Problem Interactions
Before optimizing, you need to know which interactions are failing and why. Two approaches work together:
Field data (CrUX): PageSpeed Insights shows your INP value from real user sessions. If you're in the Needs Improvement range, you know you have a real problem — but CrUX doesn't tell you which interaction is causing it.
Lab profiling (Chrome DevTools): Open the Performance panel, click Record, reproduce the sluggish interaction, stop recording. Look for Long Animation Frames (LoAF) — frames that took over 50ms to process. In Chrome 124+, the Performance panel highlights these in red in the Frames row. Click on a LoAF to see what JavaScript executed during it, broken down by function.
The web-vitals JavaScript library also reports INP attribution in field data, including the
element that was interacted with and the event type:
```javascript
import { onINP } from "web-vitals/attribution";

onINP(({ value, attribution }) => {
  const {
    interactionTarget,
    interactionType,
    inputDelay,
    processingDuration,
    presentationDelay,
  } = attribution;
  console.log(`INP: ${value}ms on ${interactionTarget} (${interactionType})`);
  console.log(`  Input delay: ${inputDelay}ms`);
  console.log(`  Processing: ${processingDuration}ms`);
  console.log(`  Presentation: ${presentationDelay}ms`);
});
```
Log this data to your analytics pipeline and you can identify exactly which interactions are failing for real users in production.
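For instance, a small serializer (the endpoint and field names here are hypothetical) paired with `navigator.sendBeacon` keeps reports flowing even when the user navigates away:

```javascript
// Serialize an INP report for delivery to an analytics endpoint.
// The "/metrics" endpoint and the payload field names are hypothetical.
function inpBeaconPayload({ value, attribution }) {
  return JSON.stringify({
    metric: "INP",
    value: Math.round(value),
    target: attribution.interactionTarget,
    type: attribution.interactionType,
    inputDelay: Math.round(attribution.inputDelay),
    processing: Math.round(attribution.processingDuration),
    presentation: Math.round(attribution.presentationDelay),
  });
}

// Browser usage: sendBeacon queues the request so it survives page unload.
// onINP((report) => navigator.sendBeacon("/metrics", inpBeaconPayload(report)));
```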
Long Tasks: The Root Cause
Any JavaScript that runs for more than 50ms on the main thread is a "long task." Long tasks block everything else — including handling user input. If a 200ms task is running when the user clicks a button, that click sits in the event queue for up to 200ms before the browser can even start processing it. That 200ms shows up directly in the input delay phase of INP.
Common sources of long tasks:
- Heavy JavaScript framework hydration on initial load (React, Vue, Angular all hydrate synchronously by default)
- Synchronous third-party scripts: analytics, tag managers, consent management platforms
- Large JSON parse operations
- Unvirtualized lists with hundreds of DOM nodes being re-rendered
- Blocking `localStorage` or synchronous `IndexedDB` operations
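In the field, candidates can be surfaced with a `PerformanceObserver` for the `longtask` entry type; the filtering itself is trivial (the helper name is ours, and the observer only works in browsers):

```javascript
// Keep only entries that exceed the 50ms long-task threshold.
function findLongTasks(entries, thresholdMs = 50) {
  return entries.filter((entry) => entry.duration > thresholdMs);
}

// Browser usage (the "longtask" entry type isn't available in Node):
// new PerformanceObserver((list) => {
//   for (const task of findLongTasks(list.getEntries())) {
//     console.warn(`Long task: ${task.duration}ms`, task.attribution);
//   }
// }).observe({ type: "longtask", buffered: true });

console.log(findLongTasks([{ duration: 30 }, { duration: 120 }]).length); // → 1
```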
scheduler.yield(): Breaking Up Long Tasks
The most direct fix for long tasks is breaking them into smaller chunks that yield control back to
the browser between each chunk. The new scheduler.yield() API makes this ergonomic:
```javascript
async function processLargeDataset(items) {
  const results = [];
  for (let i = 0; i < items.length; i++) {
    // Do work for one item
    results.push(expensiveOperation(items[i]));
    // Yield every 50 items to let the browser handle pending interactions
    if ((i + 1) % 50 === 0) {
      await scheduler.yield();
    }
  }
  return results;
}
```
scheduler.yield() returns a Promise that resolves in the next task, giving the browser an
opportunity to process any queued user input before continuing. Unlike setTimeout(fn, 0), it
integrates with the browser's task prioritization — if there's pending user input, it gets handled
before the resumed task continues.
For broader compatibility (Safari and Firefox support is still arriving), feature-detect and fall back to setTimeout:
```javascript
function yieldToMain() {
  if ("scheduler" in window && "yield" in scheduler) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processChunked(items) {
  for (let i = 0; i < items.length; i++) {
    processItem(items[i]);
    if ((i + 1) % 50 === 0) {
      await yieldToMain();
    }
  }
}
```
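Yielding every fixed number of items assumes items take uniform time. A time-budget variant is more robust (a sketch; the 50ms budget mirrors the long-task threshold, and a setTimeout promise stands in for scheduler.yield()):

```javascript
// Process items in chunks bounded by wall-clock time rather than item count,
// yielding to the event loop whenever the 50ms budget is spent.
async function processWithBudget(items, processItem, budgetMs = 50) {
  const results = [];
  let deadline = performance.now() + budgetMs;
  for (const item of items) {
    results.push(processItem(item));
    if (performance.now() >= deadline) {
      // Yield so pending user input can be handled, then start a new budget.
      await new Promise((resolve) => setTimeout(resolve, 0));
      deadline = performance.now() + budgetMs;
    }
  }
  return results;
}
```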
Web Workers: Off-Main-Thread Computing
For truly heavy computation — parsing large files, running algorithms over datasets, doing cryptographic operations — the right solution is removing the work from the main thread entirely using Web Workers:
```javascript
// worker.js
self.onmessage = ({ data }) => {
  const result = heavyComputation(data);
  self.postMessage(result);
};
```

```javascript
// main.js
const worker = new Worker("/worker.js");
worker.postMessage(largeDataset);
worker.onmessage = ({ data }) => {
  // Update UI with result — this runs on the main thread, but the heavy work is done
  updateDisplay(data);
};
```
Web Workers run on separate threads and can't block the main thread regardless of how long they run. The tradeoff is that they don't have access to the DOM, so you need to structure your architecture to move computation off-thread and then apply results on the main thread.
Libraries like Comlink simplify the Web Worker message-passing boilerplate.
Code Splitting and Dynamic Imports
If a click handler imports a heavy library before executing, that import takes time — and it happens on the main thread, contributing to processing time. Dynamic imports let you load heavy dependencies on demand:
```javascript
// Without code splitting — heavy library loaded at page load regardless
import { processMarkdown } from "heavy-markdown-library";

button.addEventListener("click", () => {
  processMarkdown(content);
});
```

```javascript
// With dynamic import — library only loads when the button is actually clicked
button.addEventListener("click", async () => {
  const { processMarkdown } = await import("heavy-markdown-library");
  processMarkdown(content);
});
```
The first click will still be slower while the import completes, but subsequent clicks are fast. You can further improve this by preloading the module after the page has become idle:
```javascript
// Preload after page load, not during
window.addEventListener("load", () => {
  requestIdleCallback(() => {
    // Fetches, parses, and caches the module so the first click is fast.
    // Note: this does run the module's top-level code.
    import("heavy-markdown-library");
  });
});
```
Third-Party Scripts: The Underrated INP Killer
Analytics platforms, tag managers (Google Tag Manager, Tealium), cookie consent platforms, and live chat widgets frequently cause INP failures. They run synchronous code during page load, register event listeners that do expensive work, and inject DOM nodes that trigger layout recalculations.
Best practices for third-party script management:
Load async or defer: Never load third-party scripts with a bare <script src=""> tag. At
minimum, use async. Better yet, defer loading entirely until after the page becomes interactive:
```javascript
// Load analytics only after the page is idle
window.addEventListener("load", () => {
  requestIdleCallback(() => {
    const script = document.createElement("script");
    script.src = "https://analytics.example.com/tracker.js";
    script.async = true;
    document.head.appendChild(script);
  });
});
```
Audit with Chrome DevTools: In the Performance panel, third-party scripts appear with their origin in the flamechart. Filter the Bottom-Up tab by domain to see which third parties are consuming main-thread time. If a tag manager is responsible for 800ms of long tasks on page load, that's your first target.
Input Handler Optimization: Debounce and Throttle
Attaching expensive operations directly to high-frequency events like scroll, resize, mousemove, or input keeps the main thread busy dozens of times per second. Scroll itself isn't an interaction INP counts, but handlers that monopolize the main thread inflate the input delay of the clicks and keypresses that are:
```javascript
// Bad: runs synchronously on every scroll event
window.addEventListener("scroll", () => {
  updateStickyHeader();
  recalculatePositions(); // expensive
});

// Better: throttle with requestAnimationFrame
let scheduledRaf = false;
window.addEventListener("scroll", () => {
  if (!scheduledRaf) {
    scheduledRaf = true;
    requestAnimationFrame(() => {
      updateStickyHeader();
      recalculatePositions();
      scheduledRaf = false;
    });
  }
});
```
requestAnimationFrame batches visual updates with the browser's natural rendering cycle, ensuring
you're not computing more frames than the display can show.
For search inputs and other text fields that trigger data fetching or expensive filtering, debounce the handler:
```javascript
function debounce(fn, delay) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}

input.addEventListener("input", debounce(filterResults, 200));
```
List Virtualization for Large Collections
Rendering hundreds or thousands of DOM nodes is one of the most reliable ways to produce INP failures. React's default rendering model re-renders the entire visible subtree on state changes, and a list of 500 items means 500 DOM nodes being evaluated even when only a handful changed.
List virtualization renders only the items currently in the viewport, keeping the DOM size manageable:
```jsx
// react-window example
import { FixedSizeList } from "react-window";

function VirtualizedList({ items }) {
  return (
    <FixedSizeList height={600} itemCount={items.length} itemSize={60} width="100%">
      {({ index, style }) => <div style={style}>{items[index].name}</div>}
    </FixedSizeList>
  );
}
```
@tanstack/virtual provides a more flexible headless implementation that works with any UI
framework and supports variable-height items.
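Under the hood, fixed-size virtualization is plain index arithmetic. A sketch of the window calculation any virtualizer performs internally (names are ours; `overscan` adds buffer rows on each side to avoid blank flashes while scrolling):

```javascript
// Compute which list indices should be rendered for a fixed item height.
function visibleRange(scrollTop, viewportHeight, itemSize, itemCount, overscan = 3) {
  const first = Math.floor(scrollTop / itemSize);
  const last = Math.ceil((scrollTop + viewportHeight) / itemSize) - 1;
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(itemCount - 1, last + overscan),
  };
}

// 1000 items of 60px each, 600px viewport, scrolled 600px down:
console.log(visibleRange(600, 600, 60, 1000)); // { start: 7, end: 22 }
```

Only `end - start + 1` DOM nodes exist at any moment, regardless of `itemCount`.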
CSS Containment: Limiting Layout Scope
When a component updates, the browser may need to recalculate layout for the entire document — or just for a subtree, if you've told it the component is self-contained. CSS containment does exactly this:
```css
.widget {
  contain: layout style paint;
}
```
contain: layout style paint tells the browser:

- `layout`: the widget's children don't affect the layout of elements outside it
- `style`: CSS counters and quotes generated inside don't leak out
- `paint`: content inside isn't painted outside the widget's bounds (enables paint containment optimizations)
contain: strict is shorthand for layout style paint size — the most aggressive containment,
appropriate for widgets with fixed dimensions.
For off-screen content, content-visibility: auto goes further by skipping rendering entirely until
the element enters the viewport:
```css
.article-section {
  content-visibility: auto;
  contain-intrinsic-size: auto 800px; /* Approximate rendered height for layout reservation */
}
```
This reduces initial rendering work, shortening the long tasks during load that inflate input delay for early interactions. Be aware that content-visibility: auto can cause layout shifts (CLS) if the approximated intrinsic size differs significantly from the actual rendered height.
The INP Optimization Checklist
- Measure INP from CrUX field data in PageSpeed Insights — confirm you're failing the 200ms threshold
- Instrument with `web-vitals` attribution to identify which interactions are slowest in production
- Profile with the Chrome DevTools Performance panel — find Long Animation Frames and identify JavaScript culprits
- Break long tasks with `scheduler.yield()`, or chunk processing with `setTimeout(fn, 0)` as a fallback
- Move heavy computation to Web Workers
- Code-split heavy libraries with dynamic imports
- Load all third-party scripts with `async` or `defer`; consider deferring to after page load with `requestIdleCallback`
- Debounce/throttle high-frequency event handlers using `requestAnimationFrame`
- Virtualize long lists with `react-window` or `@tanstack/virtual`
- Apply CSS `contain: layout style paint` to independent widgets
- Use `content-visibility: auto` for long-form content sections