Demystifying Core Web Vitals - A Developer's Guide to LCP, INP, and CLS

Core Web Vitals are ranking signals, but most teams still optimize them like lab-only scorecards. This guide turns CWV into actionable engineering work: how to measure (field + lab), how to debug root causes in DevTools, and which fixes actually move the 75th percentile.

Core Web Vitals are engineering work, not a Lighthouse mini-game

If you work in web performance, you’ve probably seen the same pattern:

  • The team “optimizes Core Web Vitals” by chasing a green Lighthouse score.
  • The score improves.
  • The Search Console report doesn’t.
  • Real users still complain that the site “feels janky” or “lags when I tap”.

That gap exists because Core Web Vitals are field metrics. They’re evaluated on real user experiences (CrUX / RUM) at the 75th percentile, segmented by device + network. Lighthouse is still useful, but it’s not the authority.

There’s also a newer tension that many teams haven’t fully internalized:

  • INP quietly replaced FID on March 12, 2024.

The switch matters because it changes what “good performance” means. FID mostly rewarded “fast to start handling the first tap”. INP rewards something stricter and closer to how users judge sites: consistent responsiveness across the entire session.

If you want to build pages that rank and feel fast, you need to treat CWV as:

  • Instrumented production signals
  • Debuggable traces
  • Fixable bottlenecks

Core Web Vitals (quick overview)

One table. No fluff.

Metric | Measures                    | Good (75th percentile)
LCP    | Main content load           | ≤ 2.5s
INP    | Interaction responsiveness  | ≤ 200ms
CLS    | Visual stability            | ≤ 0.1

Field data vs lab data (why your “green Lighthouse” can still fail)

  • Field (CrUX / RUM): what Google ranks on and what users feel.

    • Aggregated over real sessions.
    • Evaluated at the 75th percentile.
    • Sensitive to slow devices, poor networks, CPU contention, third-party scripts, and long sessions.
  • Lab (Lighthouse / DevTools): what you use to reproduce, regress-test, and debug.

    • Controlled environment.
    • Great for identifying causes.
    • Not authoritative for SEO.

If you take only one rule from this post:

  • If it’s not measured in the field, it does not exist.

LCP (Largest Contentful Paint)

What actually counts as LCP

LCP is not “the time the page looks ready” and it’s not “when the hero image finishes”. Chrome continuously tracks LCP candidates as the page renders, then reports the final winner.

Common candidates include:

  • <img> elements (including responsive images)
  • <video poster> frames
  • Some CSS background images (those loaded via url(), in some cases)
  • Large text blocks (big headings, hero copy)

What matters for debugging is which element won and why it was late.

In practice, LCP investigations start with:

  • Identify the LCP element
  • Break down its timeline
    • TTFB
    • Resource load time (image/font)
    • Render delay (main thread / hydration / layout)

Measure LCP (code)

Use web-vitals in production to capture field LCP and the element responsible.

import { onLCP } from "web-vitals/attribution";

onLCP((metric) => {
  // attribution.element is a CSS selector for the final LCP candidate
  console.log(metric.value, metric.attribution.element);
});

If you’re doing serious work, don’t stop at console.log. Send each metric to your RUM endpoint and segment it (a minimal sketch follows this list):

  • Device class
  • Effective connection type
  • Route/template
  • A/B cohort
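
A minimal sketch, assuming a hypothetical /rum endpoint; the payload fields are placeholders to adapt to your own RUM schema:

import { onCLS, onINP, onLCP } from "web-vitals/attribution";

// Sketch: "/rum" and the payload shape are assumptions, not a real API.
function sendToRUM(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,                           // "good" | "needs-improvement" | "poor"
    route: location.pathname,                        // route/template
    connection: navigator.connection?.effectiveType, // Chrome-only, e.g. "4g"
    deviceMemory: navigator.deviceMemory,            // Chrome-only device-class hint
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!navigator.sendBeacon?.("/rum", body)) {
    fetch("/rum", { method: "POST", body, keepalive: true });
  }
}

onLCP(sendToRUM);
onINP(sendToRUM);
onCLS(sendToRUM);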

Common real-world LCP killers

  • Late-loading hero images

    • Hero requested after CSS/JS resolves.
    • Image is discovered late because it’s in CSS or injected after hydration.
  • Server TTFB

    • Slow origin.
    • Cold starts.
    • Heavy SSR compute.
    • Cache misses.
  • Client-side rendering waterfalls

    • “Blank shell” + fetch + render.
    • Framework hydration blocking paint.
    • Fonts delaying text paint (FOIT) or relayout (FOUT).

Fixes that actually work

1) Make the hero request early (and intentionally)

<link rel="preload" as="image" href="/hero.webp" fetchpriority="high" />

Guidance:

  • Preload vs prefetch

    • preload is a strong “I need this for this navigation” signal.
    • prefetch is speculative and can be ignored under pressure.
  • fetchpriority

    • Use high for the actual LCP element.
    • Don’t blanket-apply it to everything, or you’ll just reorder contention.

2) Stop hydration from blocking the first meaningful render

Frameworks differ, but the principle is the same: ship less JS before first paint.

If your stack supports it, prefer:

  • SSR or static rendering for the LCP region
  • Partial hydration / islands for below-the-fold widgets
  • Streaming SSR where it reduces blocking time

3) Fix TTFB before you touch images

If your LCP breakdown shows TTFB dominates, optimizing images won’t move the needle.

Typical “TTFB-first” wins (a caching sketch follows this list):

  • CDN caching for HTML
  • Server render caching
  • Reduce backend fan-out
  • Avoid expensive personalization before first byte
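
For example, a common CDN caching pattern for HTML responses (directive values are illustrative; tune them to how fresh your pages must be):

Cache-Control: public, s-maxage=60, stale-while-revalidate=300

Here s-maxage controls the shared (CDN) cache lifetime, and stale-while-revalidate lets the CDN serve a stale copy while refetching in the background, which keeps TTFB flat for most users.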

Framework callout (Next.js)

The exact knob depends on version and routing (Pages Router vs App Router), but the intent is to reduce runtime JS when possible.

// Next.js Pages Router, experimental: ship no client-side JS for this page.
export const config = {
  unstable_runtimeJS: false,
};

Treat this as a reminder, not a copy/paste fix: validate what your framework version actually supports, and measure the impact on hydration + interactivity.

INP (Interaction to Next Paint)

This is the heart of CWV work in 2025.

Why FID was misleading (and why Google replaced it)

FID measured only one thing:

  • The delay between a user’s first interaction and when the browser could begin processing it.

It did not measure:

  • Event handler execution time
  • Rendering time after the handler
  • The 20th interaction on a long-lived page
  • Responsiveness when the app is “fully loaded” but the main thread is constantly busy

That’s why teams could “fix FID” with small tweaks (or by shifting work later) and still ship experiences that feel laggy.

INP fixes that. It tracks responsiveness across the page lifetime and is designed to align with perceived UX pain.

How INP works (the mental model)

INP is roughly:

  • The worst (or near-worst) interaction latency observed during the page’s lifetime
  • For each interaction, it measures from input to the next paint

So INP doesn’t just punish “slow click handlers”. It punishes any moment where:

  • The main thread is tied up (long tasks)
  • Rendering is expensive
  • Updates are synchronous and block paint

Measure INP (code)

import { onINP } from "web-vitals/attribution";

onINP((metric) => {
  console.log(metric.value, metric.attribution);
});

The attribution payload is your bridge from “INP is bad” to “this exact interaction is bad”. Fields vary by browser + library version, but you should expect to see (or derive) the following (a sketch comes after the list):

  • eventType (e.g. click, pointerdown, keydown)
  • target / interaction target (which element)
  • interactionId (to correlate related events)
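
A sketch assuming web-vitals v4 attribution field names (v3 exposed eventType / eventTarget instead):

import { onINP } from "web-vitals/attribution";

onINP((metric) => {
  const a = metric.attribution;
  console.log({
    type: a.interactionType,           // "pointer" or "keyboard"
    target: a.interactionTarget,       // CSS selector for the element
    inputDelay: a.inputDelay,          // input → handler start
    processing: a.processingDuration,  // event handler execution
    presentation: a.presentationDelay, // handler end → next paint
  });
});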

Typical INP problems (what I see most often)

1) Long JS tasks

If the main thread is blocked for 200ms+, you’re effectively rolling the dice on INP.

Usual causes:

  • Heavy synchronous parsing/serialization
  • Expensive formatting
  • Large list reconciliation
  • “One big reducer” updates

2) Overhydration / excessive client work

Hydrating everything up front often creates a permanently busy main thread:

  • Too many components hydrating at once
  • Client-side routing transitions doing expensive work
  • Recomputing derived state on every input

3) Third-party scripts

Third-party is a frequent INP saboteur because:

  • It schedules work unpredictably
  • It competes for CPU during user interactions
  • It’s often shipped without performance budgets

Debugging INP: go from metric → interaction → long task

What works consistently:

  1. Start with field data

    • Confirm INP is actually a production issue.
    • Identify the worst templates/routes.
  2. Reproduce in DevTools Performance

    • Record an interaction (click, type, open menu).
    • Look for long tasks around the interaction.
  3. Use the “why is this slow?” breakdown

    • Scripting vs rendering vs style/layout.
    • Identify the specific function/component causing the stall.

Fix patterns that move INP

1) Defer non-critical work off the interaction path

// Defer non-critical boot work until the browser is idle.
// Note: requestIdleCallback is missing in some Safari versions;
// feature-detect and fall back to setTimeout there.
requestIdleCallback(() => {
  initAnalytics();
});

Rules of thumb:

  • Idle work must be cancelable.
  • Don’t schedule “idle” tasks that are actually required for UX.

2) Split long tasks (don’t just “optimize” them)

If you have 300ms of work, shaving 20ms off rarely saves INP. You need to create paint opportunities.

Options:

  • Break work into chunks and yield with setTimeout(0); queueMicrotask won’t help here, since microtasks run before the browser can paint (sketch below)
  • Use scheduler.postTask / scheduler.yield() where available
  • Move computation to a Web Worker
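
A chunking sketch (processItem is a placeholder; scheduler.yield() is Chromium-only at the time of writing, hence the fallback):

// Process a large batch in chunks, yielding between chunks so the
// browser can handle input and paint.
async function processInChunks(items, processItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(processItem);

    if (globalThis.scheduler?.yield) {
      await scheduler.yield();                    // Chromium
    } else {
      await new Promise((r) => setTimeout(r, 0)); // macrotask fallback
    }
  }
}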

3) Make UI updates interruptible (React example)

import { startTransition } from "react";

button.addEventListener("click", () => {
  startTransition(() => {
    // React may interrupt this render to paint and handle new input
    setState(expensiveUpdate);
  });
});

The key idea is not “use transitions everywhere”. It’s:

  • Keep the immediate response lightweight
  • Let expensive work yield so the browser can paint

4) Reduce the amount of work per interaction

High-leverage wins:

  • Virtualize long lists
  • Avoid synchronous JSON.stringify on large objects during input
  • Memoize derived computations (but verify it actually reduces work)
  • Don’t re-render entire pages on keystrokes

5) Put third-party on a leash

If you can’t remove it:

  • Load it later
  • Gate it behind consent or after first interaction (sketch after this list)
  • Use Partytown or worker-based approaches where applicable
  • Apply performance budgets and enforce them
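
A sketch of the “after first interaction” pattern (the widget URL is a placeholder):

// Load a third-party script only after the first user interaction,
// keeping it off the critical path during load and early input.
let thirdPartyLoaded = false;

function loadThirdParty() {
  if (thirdPartyLoaded) return;
  thirdPartyLoaded = true;
  const s = document.createElement("script");
  s.src = "https://example.com/widget.js"; // placeholder
  s.async = true;
  document.head.appendChild(s);
}

["pointerdown", "keydown", "scroll"].forEach((type) =>
  addEventListener(type, loadThirdParty, { once: true, passive: true })
);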

CLS (Cumulative Layout Shift)

CLS is the metric most teams think they understand… until ads, fonts, and “just one dynamic banner” wreck production.

Why CLS still fails teams

  • Ads and embeds that resize after load
  • Fonts that swap and change metrics
  • Dynamic components that mount above existing content
  • Late UI injection (cookie banners, promos, consent, tooltips)

Measure CLS (code)

import { onCLS } from "web-vitals";

onCLS((metric) => {
  console.log(metric.value);
});

For debugging, capture attribution (shift sources) when possible, and pair this with DevTools “Layout Shift Regions”.
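
With the attribution build, a sketch of logging the largest shift’s source (field name per recent web-vitals versions):

import { onCLS } from "web-vitals/attribution";

onCLS((metric) => {
  // CSS selector for the element that moved most in the largest shift
  console.log(metric.value, metric.attribution.largestShiftTarget);
});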

Fixes that matter

1) Reserve space for media

img {
  aspect-ratio: 16 / 9;
}

Why this works:

  • The browser can allocate space before the resource loads.
  • You avoid “downward pushes” that score as layout shifts.

If you can, also include explicit width and height attributes on images. aspect-ratio is a great complement, especially when responsive sizing makes static dimensions less obvious.
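
For example (dimensions are illustrative; the browser derives the aspect ratio from the attributes and reserves space before CSS or the image arrives):

<img src="/hero.webp" width="1600" height="900" alt="Hero" />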

2) Don’t let fonts surprise layout

<link
  rel="preload"
  as="font"
  href="/inter.woff2"
  type="font/woff2"
  crossorigin
/>

Important nuance:

  • font-display alone is not enough.
    • It changes swap behavior, but it doesn’t guarantee metric compatibility.
  • Use metric-compatible fallbacks (or size-adjust) to reduce reflow when the custom font loads (sketch below).
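
A minimal sketch of a metric-compatible fallback (the override percentages are illustrative, not measured; compute real values for your actual font pairing):

/* Fallback approximating the custom font's metrics so the swap
   doesn't reflow text. Percentages here are illustrative. */
@font-face {
  font-family: "Inter Fallback";
  src: local("Arial");
  size-adjust: 107%;
  ascent-override: 90%;
  descent-override: 22%;
}

body {
  font-family: "Inter", "Inter Fallback", sans-serif;
}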

3) Treat late UI injection like a breaking change

Cookie banners, promos, “new feature” banners: if it mounts above content after initial paint, it’s likely a CLS hit.

Patterns that avoid CLS:

  • Render the space from the start, even if content fills later (sketch after this list)
  • Overlay instead of pushing (carefully; don’t block content)
  • Insert below the fold
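
A sketch of the “render the space from the start” pattern (the class name and height are assumptions):

/* Reserve the banner's slot at initial render so filling it later
   doesn't push content down. The height is an assumption. */
.promo-banner-slot {
  min-height: 64px;
}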

Tooling stack (Google-first, engineer-friendly)

Use each tool for what it’s good at.

Lighthouse

Best for:

  • Regression detection in CI
  • Controlled before/after comparisons
  • Catching obvious LCP/CLS issues early

Not best for:

  • Proving SEO impact
  • Predicting the 75th percentile

PageSpeed Insights (PSI)

Best for:

  • One-page snapshot that combines CrUX field data + lab
  • Communicating “field vs lab” to stakeholders

Chrome DevTools (Performance panel)

Best for:

  • Root cause analysis
  • Interaction traces for INP
  • Identifying long tasks, layout thrash, expensive paint

CrUX Dashboard

Best for:

  • SEO reality
  • Trend analysis over time
  • Segmenting by device class and connection

Search Console CWV report (use carefully)

It’s useful for broad monitoring, but it has sharp edges:

  • Aggregation can hide the worst templates
  • “Needs Improvement” can hide how bad the tail is
  • It’s not a debugging tool

SEO impact (brief and accurate)

  • Core Web Vitals are ranking signals (part of page experience).
  • They behave like tie-breakers, not magic bullets.
  • Bad CWV can cap growth because:
    • You lose tie-breaks on competitive queries.
    • User behavior (bounce/back-to-SERP) gets worse.
    • Crawl and render efficiency can suffer on heavy pages.

Conclusion: optimize how users feel, not how pages score

Core Web Vitals stop being ambiguous the moment you treat them as engineering metrics:

  • LCP is about delivering the main content with minimal server delay and minimal render blocking.
  • INP is about keeping the main thread available for humans, not just for boot-time metrics.
  • CLS is about honoring layout as a contract.

The biggest shift (and the reason INP matters so much) is that performance is now interaction-first. It’s no longer enough to load quickly; you have to stay responsive.

And the non-negotiable rule still stands:

  • If it is not measured in the field, it does not exist.

Optimizing Core Web Vitals means optimizing how users feel, not how pages load.

Checklist (high leverage)

Baseline

  • Verify field reality: CrUX/PSI 28-day field data for key routes/templates.
  • Segment: mobile vs desktop; slow network vs fast; logged-in vs logged-out.
  • Pick a target: move the 75th percentile, not the median.

LCP

  • Identify the LCP element (field + lab).
  • Preload the LCP resource (usually hero image) and set fetchpriority="high".
  • Kill render-blocking work on the LCP path (CSS/JS/hydration).
  • Fix TTFB if it dominates the breakdown.

INP

  • Record real interactions in DevTools Performance.
  • Find long tasks around the interaction.
  • Split tasks and create paint opportunities.
  • Defer non-critical work (requestIdleCallback, post-task scheduling).
  • Reduce per-interaction work (virtualize, memoize, avoid full rerenders).
  • Audit third-party scripts with budgets.

CLS

  • Reserve space for images, embeds, and ads (width/height, aspect-ratio).
  • Preload critical fonts and use metric-compatible fallbacks.
  • Eliminate late-inserted UI above existing content.
