Optimizing Core Web Vitals in 2026: LCP, INP, CLS Explained
Core Web Vitals are Google's attempt to quantify user experience with three numbers, and despite the justified scepticism about reducing complex interactions to metrics, the framework has proven useful — not because the scores perfectly capture user experience, but because optimising for them tends to produce genuinely faster, more stable sites. The three metrics in 2026 are Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS). INP replaced First Input Delay in 2024, and the bar has been recalibrated. If your site passed the old thresholds comfortably, it may not pass the current ones.
This guide covers what each metric actually measures, how to diagnose poor scores, and the specific optimisations that produce measurable improvement. The focus is on practical server-side and front-end changes, not synthetic testing tricks.
Largest Contentful Paint (LCP)
LCP measures the time from navigation start until the largest visible element in the viewport finishes rendering. This is typically a hero image, a video poster frame, or a large text block. The target is under 2.5 seconds.
What usually causes poor LCP
Slow server response. If your Time to First Byte (TTFB) is 800ms, your LCP cannot be faster than 800ms plus rendering time. Server response time is the floor. Compression configuration like the Brotli setup documented elsewhere on this site directly affects TTFB for HTML and CSS delivery.
Render-blocking resources. CSS and synchronous JavaScript in the <head> block rendering until they finish downloading and executing. Every kilobyte of blocking CSS adds to LCP.
Unoptimised hero images. A 2MB JPEG hero that is not preloaded, not properly sized, and not served in a modern format is the single most common LCP problem on content sites.
Practical LCP optimisations
- Preload the LCP image. Add <link rel="preload" as="image" href="/img/hero.jpg"> in the document head so the browser starts downloading it before it discovers it in CSS or markup.
- Use modern image formats. AVIF or WebP at appropriate quality levels can reduce hero image sizes by 40–60% compared to JPEG without visible quality loss.
- Inline critical CSS. Extract the CSS needed for above-the-fold content and inline it in a <style> tag. Defer the full stylesheet with media="print" swapped to media="all" on load.
- Optimise server response time. Enable HTTP/3 (covered in the HTTP/3 guide), configure compression, use a CDN, and ensure your server can generate HTML in under 200ms.
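Taken together, these optimisations amount to a hero-section pattern like the sketch below. The file paths, the inlined CSS, and the dimensions are all illustrative, not prescriptive:

```html
<!doctype html>
<html lang="en">
<head>
  <!-- Start the hero download immediately. Preload the format most of your
       visitors will actually receive; a mismatch causes a wasted download.
       fetchpriority="high" marks it as the likely LCP candidate. -->
  <link rel="preload" as="image" href="/img/hero.avif" fetchpriority="high">

  <!-- Critical above-the-fold CSS inlined; the full stylesheet loads
       without blocking render via the media-swap trick. -->
  <style>
    .hero { min-height: 60vh; background: #111; color: #fff; }
  </style>
  <link rel="stylesheet" href="/css/main.css" media="print"
        onload="this.media='all'">
</head>
<body>
  <!-- Modern format with JPEG fallback; the explicit width and height
       also reserve space, which helps CLS. -->
  <picture class="hero">
    <source srcset="/img/hero.avif" type="image/avif">
    <source srcset="/img/hero.webp" type="image/webp">
    <img src="/img/hero.jpg" width="1600" height="900" alt="Hero">
  </picture>
</body>
</html>
```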
Interaction to Next Paint (INP)
INP measures the latency between a user interaction (click, tap, keypress) and the next visual update. Unlike the old First Input Delay metric, INP considers all interactions throughout the page lifecycle, not just the first one. The target is under 200 milliseconds.
What usually causes poor INP
Long JavaScript tasks. Any JavaScript execution that takes more than 50ms blocks the main thread, delaying the browser's ability to respond to user input. Framework hydration, large state updates, and synchronous data processing are the usual culprits.
Excessive DOM size. Pages with thousands of DOM nodes take longer to re-render after interactions. Each interaction triggers layout and paint work proportional to the affected DOM subtree.
Third-party scripts. Analytics, ad tech, chat widgets, and social media embeds frequently execute long tasks that block the main thread during user interactions.
Practical INP optimisations
- Break up long tasks. Use requestIdleCallback, setTimeout(fn, 0), or the scheduler.yield() API to split heavy JavaScript into chunks that yield to the browser between executions.
- Defer non-essential JavaScript. Anything that does not need to run before the user's first interaction should be loaded with defer or async and executed after the page is interactive.
- Virtualise long lists. If your page renders hundreds of items, use windowing libraries that only render visible items. The DOM nodes you do not create cannot slow down interactions.
- Audit third-party scripts. Use the Performance tab in Chrome DevTools to identify which scripts contribute the longest tasks during interactions. Remove or lazy-load the worst offenders.
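The first technique — breaking a long task into batches that yield between executions — can be sketched as below. The batch size of 50 is an arbitrary starting point; tune it against real trace data:

```javascript
// Yield control back to the event loop. scheduler.yield() is the modern
// API where available (Chromium-based browsers); setTimeout is the
// universal fallback.
function yieldToMain() {
  if (globalThis.scheduler && globalThis.scheduler.yield) {
    return globalThis.scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process items in small batches, yielding between batches so the
// browser can handle pending user input between chunks of work.
// This keeps individual tasks short, which is what INP rewards.
async function processInBatches(items, handleItem, batchSize = 50) {
  for (let i = 0; i < items.length; i += batchSize) {
    items.slice(i, i + batchSize).forEach(handleItem);
    await yieldToMain(); // let the browser paint and respond to input
  }
}
```

The trade-off is total throughput: yielding adds scheduling overhead, so the work finishes slightly later overall, but no single task blocks input long enough to hurt interaction latency.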
Cumulative Layout Shift (CLS)
CLS measures the visual stability of the page — how much visible content shifts unexpectedly during the page lifecycle. The target is under 0.1.
What usually causes poor CLS
Images and iframes without dimensions. When the browser does not know the size of an element before it loads, it initially renders it at zero height, then shifts the page when the content arrives and occupies space.
Dynamically injected content. Banners, cookie notices, and ad slots that insert themselves into the document flow push existing content down the page.
Web fonts causing layout shifts. When a web font loads and replaces the fallback font, the different metrics (character widths, line heights) cause text to reflow and shift surrounding elements.
Practical CLS optimisations
- Always specify image dimensions. Include width and height attributes on every <img> and <video> element so the browser can reserve the correct space before loading.
- Reserve space for dynamic content. Use CSS min-height on containers that will receive late-loading content like ads or embeds.
- Use font-display: swap with size-adjusted fallbacks. Configure fallback fonts with adjusted metrics that match your web fonts closely, minimising the reflow when the web font loads.
- Avoid inserting content above existing content. If you must show a banner, overlay it rather than pushing page content down.
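The font advice can be expressed with the @font-face metric-override descriptors. The font name, path, and percentages below are placeholders — the overrides must be computed for your actual font pair (tooling exists to derive them):

```css
/* Web font loads with swap so text stays visible while it downloads.
   Font name and path are illustrative. */
@font-face {
  font-family: "BodyFont";
  src: url("/fonts/body.woff2") format("woff2");
  font-display: swap;
}

/* Local fallback with metrics adjusted to approximate the web font,
   so the swap barely shifts surrounding text. These percentages are
   placeholders, not values to copy. */
@font-face {
  font-family: "BodyFont Fallback";
  src: local("Arial");
  size-adjust: 105%;
  ascent-override: 92%;
  descent-override: 24%;
  line-gap-override: 0%;
}

body {
  font-family: "BodyFont", "BodyFont Fallback", sans-serif;
}
```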
Measuring in the field
Lab measurements (Lighthouse, PageSpeed Insights) are useful for development but do not represent real user experience. Field data from the Chrome User Experience Report (CrUX) or your own Real User Monitoring (RUM) is what Google actually uses for ranking decisions.
Check your site's field data at pagespeed.web.dev using the "Origin Summary" section. If your lab scores are good but field scores are poor, the problem is likely affecting users on slower devices or connections that your development setup does not simulate.
The metrics improve most when you address the issues affecting the worst-performing visits — the 75th percentile user on a mid-range Android device over a cellular connection, not the developer on a fast MacBook over fibre.
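If you want your own field data rather than relying on CrUX alone, a minimal RUM reporter can be sketched with the open-source web-vitals library. The onLCP/onINP/onCLS callbacks and the metric fields used here are the library's real API; the /rum endpoint is a hypothetical collector on your own server:

```javascript
// Shape a web-vitals Metric object into a compact JSON payload.
function toPayload(metric) {
  return JSON.stringify({
    name: metric.name,      // "LCP", "INP", or "CLS"
    value: metric.value,    // ms for LCP/INP, unitless score for CLS
    rating: metric.rating,  // "good" | "needs-improvement" | "poor"
    id: metric.id,
  });
}

async function initRUM() {
  const { onCLS, onINP, onLCP } = await import('web-vitals');
  const report = (metric) => {
    // sendBeacon survives page unload, which matters because INP and
    // CLS are only finalised when the page is hidden.
    navigator.sendBeacon('/rum', toPayload(metric));
  };
  onCLS(report);
  onINP(report);
  onLCP(report);
}

// Only register in a browser; the module stays importable elsewhere.
if (typeof window !== 'undefined') initRUM();
```

Aggregating these samples at the 75th percentile, segmented by device and connection type, reproduces the view Google uses and shows you exactly which cohort is dragging your scores down.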