

Core Web Vitals in 2026: what changed and what still matters

Andrew Roper · 7 min read

Quick answer: in 2026 Core Web Vitals are LCP (Largest Contentful Paint ≤ 2.5s), INP (Interaction to Next Paint ≤ 200ms), and CLS (Cumulative Layout Shift ≤ 0.1). All three need to pass at the 75th percentile of real-user data over a 28-day window for a URL to be considered “good.” The metrics are part of Google’s ranking signal, but the bigger story is that they’re also a strong proxy for the user experience your visitors actually have.

Core Web Vitals matter for two reasons: Google uses them in ranking, and slow sites lose visitors regardless of ranking. The first reason gets the attention; the second moves more revenue.

Here’s the current state of the metrics, what changed, and what it takes to consistently pass on a real production site.

What the three metrics actually measure

LCP (Largest Contentful Paint). The time from when the page starts loading to when the largest content element (typically the hero image, headline, or video poster) is rendered. Measured in seconds.

What it means in plain English: how long until the user sees the main thing on the page. If LCP is 4 seconds, your visitor stares at a partial page or a blank screen for 4 seconds before the main content appears.

Threshold: ≤ 2.5s is “good”; 2.5–4.0s is “needs improvement”; above 4.0s is “poor.”

INP (Interaction to Next Paint). The time from when the user interacts with the page (clicks, taps, types) to the next visual update. Measured in milliseconds.

What it means in plain English: how quickly the page responds when the visitor does something. INP replaced FID (First Input Delay) in 2024 because FID only measured the first interaction; INP measures the worst interaction across the visit, which is much closer to what users actually feel.

Threshold: ≤ 200ms is “good”; 200–500ms is “needs improvement”; above 500ms is “poor.”

CLS (Cumulative Layout Shift). A score representing how much the page’s visible content unexpectedly moves around during loading. Unitless — lower is better.

What it means in plain English: how often the page jumps around as it loads. The classic offender: a button you were about to click moves down because an ad loaded above it. Frustrating, sometimes catastrophic for conversions.

Threshold: ≤ 0.1 is “good”; 0.1–0.25 is “needs improvement”; above 0.25 is “poor.”

web.dev/vitals is the canonical reference and tracks the metrics as they evolve.

What changed from earlier versions

For teams who haven’t looked at this since 2022:

FID was replaced by INP in 2024. FID measured only the first interaction, which most pages handled fine. INP measures across the whole session and is much harder to pass — particularly on JavaScript-heavy sites.

The competitive bar tightened. The thresholds themselves haven’t changed, but more sites pass them now: real-world performance that looked strong in 2022 reads as merely average in 2026. The relative bar has moved up.

Field data dominates lab data. Google uses Chrome User Experience Report (CrUX) data — real visits from real Chrome users — not Lighthouse lab scores. A site can pass Lighthouse and fail CrUX if the metrics don’t hold up at the 75th percentile of real visitors.

INP-related ranking impact has grown. Sites that performed well on FID but struggle on INP (often heavy SPAs and apps) have noticed real ranking movement since the metric switch.

What it takes to pass LCP

The dominant factors for LCP, in order of typical impact:

1. The size and delivery of the hero image. For most marketing pages, the LCP element is a hero image. Two things matter: total bytes (smaller is faster) and how quickly the request starts (earlier is faster).

What works: serving images in modern formats (AVIF or WebP), sized appropriately for the device (responsive images with srcset), with fetchpriority="high" on the hero image, and preloaded if it’s critical.
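A minimal sketch of that markup, with illustrative file names, sizes, and breakpoints:

```html
<!-- In <head>: start the hero request early and at high priority.
     imagesrcset/imagesizes keep the preload consistent with the responsive <img>. -->
<link rel="preload" as="image" href="/hero-1600.avif"
      imagesrcset="/hero-800.avif 800w, /hero-1600.avif 1600w"
      imagesizes="(max-width: 800px) 100vw, 1600px"
      fetchpriority="high">

<!-- Responsive, modern-format hero; width/height reserve the layout box -->
<img
  src="/hero-1600.avif"
  srcset="/hero-800.avif 800w, /hero-1600.avif 1600w"
  sizes="(max-width: 800px) 100vw, 1600px"
  width="1600" height="900"
  fetchpriority="high"
  alt="Product hero">
```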

2. Server response time (TTFB). The time from request to first byte of HTML. If TTFB is 1.5s, you’ve already used 60% of your LCP budget before the browser has anything to render.

What works: serving pre-rendered HTML where possible (static-output frameworks like Astro excel here), edge caching, fast hosting, minimal server-side computation per request.

3. Render-blocking resources. CSS and synchronous JavaScript in the <head> block rendering. Each one delays LCP.

What works: inlining critical CSS, deferring non-critical CSS, deferring JavaScript that isn’t needed for above-the-fold rendering, eliminating render-blocking third-party scripts.
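One way those fixes look together in the document head (the print-media trick for deferring CSS is a common pattern, not the only one):

```html
<head>
  <!-- Critical above-the-fold CSS inlined: no extra request blocks first render -->
  <style>/* critical rules only */</style>

  <!-- Non-critical CSS: loads without blocking, applies once fetched -->
  <link rel="stylesheet" href="/rest.css" media="print" onload="this.media='all'">

  <!-- JavaScript not needed for first render: downloads in parallel, runs after parse -->
  <script src="/app.js" defer></script>
</head>
```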

4. Web fonts. Default font loading produces FOIT (flash of invisible text), which delays the render of text that may be the LCP element. Bad.

What works: font-display: swap to show fallback fonts immediately, preloading critical fonts, self-hosting fonts (avoiding the Google Fonts round-trip), using variable fonts to reduce file count.
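A sketch of a self-hosted setup; the font file and family name are illustrative:

```html
<!-- Start the font request early; crossorigin is required even for same-origin fonts -->
<link rel="preload" as="font" type="font/woff2" href="/fonts/brand-var.woff2" crossorigin>
<style>
  @font-face {
    font-family: "Brand";
    src: url("/fonts/brand-var.woff2") format("woff2");
    font-display: swap; /* show fallback text immediately, swap when the font arrives */
  }
</style>
```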

What it takes to pass INP

INP is mostly about JavaScript. Sites with heavy client-side frameworks struggle here even when LCP is great.

1. Reducing JavaScript execution time. Less JavaScript means less main-thread work, which means faster response to interactions. The biggest single win for most sites.

What works: shipping less JavaScript (the cheapest fix is the framework / architecture), code-splitting so each page only loads what it needs, lazy-loading components below the fold, using more HTML and less JavaScript where possible.

2. Breaking up long tasks. A single 500ms JavaScript task blocks every interaction during it. Breaking it into chunks (with await boundaries, requestIdleCallback, or scheduler.yield()) keeps the page responsive.

What works: profiling the page interaction with Chrome DevTools Performance, identifying long tasks, splitting them deliberately.
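The chunking pattern can be sketched like this, using scheduler.yield() where the browser supports it and a setTimeout fallback elsewhere; the 50ms slice mirrors the browser’s long-task threshold (names here are illustrative, not a library API):

```javascript
// Yield the main thread so pending interactions can be handled.
const yieldToMain = () =>
  globalThis.scheduler?.yield
    ? globalThis.scheduler.yield()
    : new Promise((resolve) => setTimeout(resolve, 0));

// Process a large queue without ever blocking for more than ~50ms at a stretch.
async function processQueue(items, handleItem) {
  let sliceStart = performance.now();
  for (const item of items) {
    handleItem(item);
    if (performance.now() - sliceStart > 50) {
      await yieldToMain();
      sliceStart = performance.now();
    }
  }
}
```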

3. Avoiding hydration tax. React, Vue, and similar frameworks “hydrate” static HTML by attaching JavaScript event handlers. Hydration is expensive on JavaScript-heavy sites and dominates INP.

What works: islands architecture (only hydrating interactive parts), server components where supported, framework-free progressive enhancement for marketing pages.

4. Third-party scripts. Analytics, chat widgets, A/B testing tools, ad scripts — each one is JavaScript that runs on every page and contributes to INP.

What works: ruthless audit of what’s loaded, deferring non-critical third-party scripts, using lighter alternatives (Plausible vs Google Analytics, for instance), measuring the actual cost of each.
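One common deferral pattern is a facade: load a heavy widget only on first interaction instead of on page load (the widget URL here is illustrative):

```html
<script>
  // Don't pay for the chat widget until the visitor actually does something
  addEventListener("pointerdown", () => {
    const s = document.createElement("script");
    s.src = "https://chat.example.com/widget.js"; // illustrative third-party URL
    s.async = true;
    document.head.append(s);
  }, { once: true });
</script>
```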

What it takes to pass CLS

CLS is mostly about reserving space for things before they load.

1. Image and video dimensions. Every image needs explicit width and height attributes. Without them, the browser doesn’t know how much space to reserve and content shifts when the image loads.
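In practice that looks like this; the inline CSS keeps the image responsive while the attributes still reserve the correct aspect ratio:

```html
<img src="/team.jpg" width="1200" height="800" alt="The team"
     style="max-width: 100%; height: auto;">
```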

2. Web font fallbacks. When the web font loads, text reflows from the fallback. If the metrics are very different, content shifts. Modern CSS (size-adjust, ascent-override) can match fallbacks more closely; for design-critical fonts, consider preloading.
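A sketch of a metric-matched fallback; the override values are illustrative and need tuning per font:

```html
<style>
  @font-face {
    font-family: "Brand-fallback";
    src: local("Arial");
    size-adjust: 105%;    /* scale fallback glyphs toward the web font's widths */
    ascent-override: 92%; /* align line boxes so the swap doesn't reflow */
  }
  body { font-family: "Brand", "Brand-fallback", sans-serif; }
</style>
```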

3. Late-loading content above existing content. Banners, cookie notices, A/B test variations — anything that injects content into the DOM after initial render and pushes existing content down. Reserve space for these or render them in fixed positions.
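Reserving the slot can be as simple as a min-height on the container the script injects into (the 64px value is illustrative):

```html
<style>
  /* Space is held even before the banner script runs */
  #promo-banner { min-height: 64px; }
</style>
<div id="promo-banner"><!-- filled in after the consent/AB-test script loads --></div>
```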

4. Animations using top/left. Animating layout properties causes shifts. Use transform for animations — the browser handles them on the compositor without affecting layout.
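Side by side, assuming a simple slide-in: the .bad version triggers layout every frame, while the .good version runs on the compositor:

```html
<style>
  /* Shifts layout as it animates */
  .bad  { position: relative; animation: slide-top 300ms ease-out; }
  @keyframes slide-top { from { top: -20px; } to { top: 0; } }

  /* Same visual effect, no layout work */
  .good { animation: slide-in 300ms ease-out; }
  @keyframes slide-in { from { transform: translateY(-20px); } to { transform: translateY(0); } }
</style>
```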

Lab vs field data

A site that scores well on Lighthouse can fail CrUX. Why:

  • Lighthouse runs once, on a controlled environment. CrUX measures real users on real devices and connections.
  • Lighthouse simulates a single page load. CrUX captures the full user journey, including INP across multiple interactions.
  • Lighthouse uses synthetic throttling. CrUX has the actual long-tail of slow networks, old devices, and bad battery states.

The honest test: Lighthouse for catching regressions during development, CrUX for real-world performance. A site that passes Lighthouse but fails CrUX needs work. A site that passes CrUX is genuinely fast, regardless of what Lighthouse says.

Where the wins come from

For teams looking at where to invest:

  • For marketing sites built on a modern framework: the wins are usually in image delivery, font loading, and trimming third-party scripts. Most modern frameworks make it easy to pass CWV by default.
  • For WordPress sites: the wins are usually in plugin discipline, image delivery, and a serious caching/CDN strategy. The framework isn’t the constraint; the ecosystem on top of it usually is.
  • For SPAs and JS-heavy apps: the dominant work is on INP — less JavaScript, smarter hydration, breaking up long tasks. This is harder than the LCP work and requires more architectural commitment.

We build marketing sites primarily on Astro because it produces static HTML by default, which makes passing CWV the path of least resistance rather than the result of careful tuning. We’ve written about why.

Common questions

What are Core Web Vitals? Three metrics Google uses to measure user experience: LCP (loading speed), INP (interactivity responsiveness), and CLS (visual stability). All three need to pass at the 75th percentile of real-user data for a URL to count as “good.” Used as a ranking signal and a UX proxy.

Do Core Web Vitals affect SEO ranking? Yes, as one signal among many. The direct ranking impact is moderate; the indirect impact (faster sites have lower bounce rates, higher engagement, better conversions) is often larger. Pages with poor CWV can still rank well on the strength of their content, but well-built pages with good CWV consistently outperform comparable competitors.

What is INP and how is it different from FID? INP (Interaction to Next Paint) replaced FID (First Input Delay) in 2024. FID measured only the first interaction; INP measures the worst interaction across the whole page session. INP is harder to pass and a more honest measure of how responsive a page feels.

How do I check my Core Web Vitals? Use Google’s PageSpeed Insights (which combines lab + field data), Search Console’s Core Web Vitals report (field data from CrUX), or a real-user monitoring tool. For development, Chrome DevTools’ Performance tab profiles individual page loads.

What’s a good Core Web Vitals score? “Good” thresholds in 2026: LCP ≤ 2.5 seconds, INP ≤ 200 milliseconds, CLS ≤ 0.1. All three need to pass at the 75th percentile of real users over a 28-day window. Sites passing all three reliably are in the minority — and noticeable.

If your site isn’t passing CWV and you’re not sure what to fix first, start a project and we’ll do an audit. Often the fix is structural rather than incremental.
