
khriztianmoreno's Blog


Posts with tag performance

Designing for Trust - Best Practices for Better Identity Forms

2025-12-03
web-development, frontend, user-experience, security, accessibility, performance, forms, autofill, html, trust


The Impact of Performance on User Experience and SEO - Why Every Millisecond Counts

2025-12-01
performance, seo, user-experience, core-web-vitals

Users do not see metrics. They feel delays. Google tries to measure that feeling. This post connects milliseconds to money and rankings, making one thing undeniable: Performance is user experience.

Advanced Performance Monitoring with Lighthouse, PageSpeed Insights, and Search Console

2025-11-23
performance, lighthouse, pagespeed-insights, search-console, web-vitals

Tools do not fix performance. Systems do.

Most developers treat performance as a checklist: run a Lighthouse audit, fix a few warnings, and move on. But performance is not a static state; it is a changing ecosystem. Your users are on different devices, networks, and browsers. Your code changes daily. If you are only looking at performance when you remember to run a report, you are already behind.

This guide explains how to move from one-off audits to continuous performance monitoring using the three pillars of Google's tooling ecosystem: Lighthouse, PageSpeed Insights (PSI), and Google Search Console (GSC).

## 1. Intro: Performance is a system, not a report

We have all been there. You run a Lighthouse audit locally and get a 95. You push to production, run it again, and get a 72. You check PageSpeed Insights, and it shows completely different numbers. Then you look at Search Console, and it says everything is fine, but the data is from three weeks ago.

It feels contradictory, but it isn't. Each tool is answering a different question. To build a robust performance culture, you must stop chasing a single "score" and start listening to the signals your system is sending.

## 2. The three pillars of Google performance tooling

Before diving into workflows, let's establish a mental model for what each tool actually does.

| Tool | Data Type | Question it Answers |
| :----------------- | :------------------- | :------------------------- |
| Lighthouse | Synthetic (Lab) | What could go wrong? |
| PageSpeed Insights | Hybrid (Lab + Field) | What is going wrong? |
| Search Console | Field (CrUX) | Does Google see a problem? |

Understanding this distinction is crucial. You use Lighthouse to catch regressions before they ship. You use PageSpeed Insights to validate real-world impact. You use Search Console to track long-term health and SEO ranking signals.

## 3. Lighthouse: advanced usage beyond scores

Lighthouse is a lab tool. It runs in a controlled environment (your machine or a CI server) with a predefined device and network profile. Because it is synthetic, it is deterministic and reproducible. This makes it the ideal tool for Continuous Integration (CI).

### Advanced Lighthouse workflows

Don't just run Lighthouse manually in Chrome DevTools. That relies on your local computer's CPU power and extensions, which skews the results. Run Lighthouse in CI to prevent performance regressions before they merge.

```shell
# Example: Running Lighthouse via CLI
npx lighthouse https://example.com \
  --preset=desktop \
  --only-categories=performance
```

By integrating Lighthouse CI, you can enforce performance budgets. For example, fail the build if the bundle size exceeds 200 KB or if the Largest Contentful Paint (LCP) is forecast to be over 2.5s.

### Ignore the score, read the trace

The 0-100 score is a summary for managers. Engineers need the trace. The trace reveals exactly what happened during the page load:

- Main-thread blocking time: Which specific JavaScript function locked up the UI?
- Render-blocking resources: Did a CSS file delay the first paint?
- Unused JavaScript: Are you shipping library code that is never executed?

Use the Lighthouse Trace Viewer to drill down into the milliseconds.

[Image: Lighthouse score example]

## 4. PageSpeed Insights: bridging lab and field

PageSpeed Insights (PSI) is often misunderstood because it presents two sets of data next to each other:

- Lab Data (Lighthouse): A simulation run on Google's servers.
- Field Data (CrUX): Real data from real users visiting your site (Chrome User Experience Report).

This is why teams get confused. "Lighthouse says my LCP is 1.2s, but Field Data says it's 3.5s. Which is true?" The Field Data is the truth. It represents what your actual users are experiencing. If your Lab data is fast but Field data is slow, your local simulation assumes a faster network or device than your users actually have.

### Reading PSI correctly

- Field data → Reality. This is what Google uses for ranking. If this is red, you have a problem, no matter what Lighthouse says.
- Lab data → Diagnosis. Use this to reproduce issues found in the field.
- Opportunities → Hypotheses. These are algorithmic suggestions. Not all of them will improve Core Web Vitals.

Important rule: never optimize for a Lab metric that does not correlate with a Field problem. If PSI says "Remove unused CSS" but your FCP is already 0.8s in the field, that optimization is a waste of time.

## 5. Google Search Console: performance at scale

Google Search Console (GSC) provides the "Core Web Vitals" report. This is aggregated CrUX data grouped by URL patterns.

### Limitations

- Delayed: Data is a 28-day rolling average. You won't see the impact of today's deploy until weeks later.
- Aggregated: It groups pages together (e.g., "200 similar URLs"). It can be hard to debug a specific outlier page.

### Why it matters

Despite the lag, GSC is critical because this is exactly how Google groups and ranks your site. It tells you if you have systemic issues affecting entire sections of your application (e.g., "All product pages have poor CLS").

Use GSC to:

- Detect systemic issues: If CLS spikes across all /blog/* pages, you likely introduced a layout shift in your blog post template.
- Validate long-term improvements: After a fix, use the "Validate Fix" button to tell Google to start verifying the new data.

## 6. Synthetic vs. Real User Monitoring (RUM)

This distinction is the conceptual core of modern performance engineering.

Synthetic Monitoring (Lighthouse, WebPageTest):

- Pros: Fast feedback, controlled environment, cheap to run, CI-friendly.
- Cons: Not representative of real users, misses device diversity, "clean room" environment.
- Best for: Preventing regressions during development.

Real User Monitoring (CrUX, web-vitals.js):

- Pros: Actual user experience, captures device/network diversity, correlates with business metrics and SEO.
- Cons: Slower feedback loop, requires instrumentation, data can be noisy.
- Best for: Understanding reality and verifying fixes.

Why you need both: synthetic finds problems early (while coding). RUM confirms they actually matter to users.

## 7. Building a modern monitoring stack

A mature engineering team uses a pipeline that looks like this:

1. Local development: Engineers use Lighthouse in DevTools to spot obvious issues.
2. CI/CD pipeline: Lighthouse CI runs on every Pull Request. It blocks merges if metrics degrade beyond a threshold.
3. Production (synthetic): A service runs a Lighthouse check on key pages every hour to catch infrastructure issues or third-party script regressions.
4. Production (RUM): The site reports Core Web Vitals to an analytics endpoint (using web-vitals.js) to track real-time trends.
5. SEO health: The team reviews Search Console weekly to ensure no new URL groups are flagged as "Poor".

## 8. Common mistakes teams make

- Chasing Lighthouse 100: A score of 100 on a developer's MacBook Pro means nothing if your users are on low-end Android devices.
- Ignoring INP in field data: Interaction to Next Paint (INP) is hard to reproduce in lab tools because they don't click around. You must rely on field data for INP.
- Treating Search Console as real-time: Don't panic if GSC doesn't update the day after a fix. It takes time.
- Optimizing lab-only regressions: If Lighthouse complains about a metric that looks green in CrUX, deprioritize it.

## 9. What's next: where monitoring is going

Performance monitoring is moving towards attribution. It's not enough to know that the page is slow; we need to know what code made it slow.

- Route-level CWV: Tools are getting better at attributing metrics to specific SPA routes (Soft Navigations).
- More granular CrUX data: Google is exposing more detailed data in the CrUX History API.
- Interaction breakdowns: The LoAF (Long Animation Frames) API is revolutionizing how we debug main-thread blocking, giving us stack traces for long tasks in the wild.

## 10. Conclusion

No single tool gives you the full picture. Lighthouse helps you build fast software. PageSpeed Insights helps you debug user issues. Search Console helps you maintain your search ranking.

Performance is not a task you finish. It is a signal you continuously listen to. Start listening today.

Key references:

- Web Vitals Documentation
- Google Search Central: Core Web Vitals
- Lighthouse Documentation
- PageSpeed Insights

I hope this has been helpful and/or taught you something new!

Until next time
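As a companion to step 4 of the monitoring stack described above, here is a minimal RUM sketch. It assumes the `web-vitals` npm package is installed; `/analytics` is a hypothetical collection endpoint, not a real API.

```javascript
// Minimal RUM sketch (assumptions: `web-vitals` npm package installed,
// `/analytics` is a placeholder endpoint on your own backend).

// Pure helper: shape a web-vitals metric object into a compact payload.
// CLS is a unitless score, so it is scaled by 1000 before rounding.
function toPayload(metric) {
  return JSON.stringify({
    name: metric.name, // "LCP" | "INP" | "CLS"
    value: Math.round(metric.name === "CLS" ? metric.value * 1000 : metric.value),
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    id: metric.id, // unique per page load; lets you deduplicate server-side
  });
}

function sendToAnalytics(metric) {
  const body = toPayload(metric);
  // sendBeacon survives page unload; keepalive fetch is the fallback.
  if (typeof navigator !== "undefined" && navigator.sendBeacon) {
    navigator.sendBeacon("/analytics", body);
  } else if (typeof fetch !== "undefined") {
    fetch("/analytics", { method: "POST", body, keepalive: true }).catch(() => {});
  }
}

// Browser-only wiring: report each metric when it finalizes.
if (typeof window !== "undefined") {
  import("web-vitals").then(({ onLCP, onINP, onCLS }) => {
    onLCP(sendToAnalytics);
    onINP(sendToAnalytics);
    onCLS(sendToAnalytics);
  });
}
```

Aggregating these payloads at the 75th percentile gives you a trend line that should converge with what CrUX and Search Console eventually report.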

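To make the CI/CD step of the monitoring pipeline concrete, here is a minimal Lighthouse CI budget sketch. It assumes the `@lhci/cli` package; the URL and threshold numbers are placeholders to adapt to your own budgets.

```javascript
// lighthouserc.js — a minimal Lighthouse CI budget sketch.
// Assumptions: `@lhci/cli` is installed; URL and thresholds are placeholders.
module.exports = {
  ci: {
    collect: {
      url: ["https://example.com/"],
      numberOfRuns: 3, // median of 3 runs smooths out variance
    },
    assert: {
      assertions: {
        // Fail the build if the performance category drops below 90.
        "categories:performance": ["error", { minScore: 0.9 }],
        // Fail if lab LCP is forecast above 2.5s.
        "largest-contentful-paint": ["error", { maxNumericValue: 2500 }],
      },
    },
    upload: {
      target: "temporary-public-storage",
    },
  },
};
```

Running `npx lhci autorun` on every Pull Request with a config like this is what turns "we check performance sometimes" into a merge-blocking budget.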
Mastering Chrome DevTools for Web Performance Optimization

2025-11-17
performance, devtools, chrome, chrome-devtools

Turn Chrome DevTools from a viewer into a performance debugging weapon.

Most teams open DevTools too late. Or they look at the wrong panels, drowning in noise while missing the signals that actually affect user experience. If you are a senior frontend engineer or performance owner, you know that "it feels slow" isn't a bug report; it's a symptom. This guide is for those who need to diagnose that symptom, understand the root cause, and verify the fix.

We are focusing on Chrome DevTools features that directly map to Core Web Vitals. No fluff, just the workflows you need to fix Interaction to Next Paint (INP), Largest Contentful Paint (LCP), and Cumulative Layout Shift (CLS).

## 1. Mental model: from symptom to root cause

Before clicking anything, you need the right mental model. Metrics tell you what is wrong. DevTools explains why.

Performance isn't about magic numbers; it's about the main thread. The browser's main thread is where JavaScript runs, HTML is parsed, and styles are calculated. It is a single-lane highway. If a heavy truck (a long task) is blocking the lane, fast cars (user clicks, animations) are stuck in traffic.

Key rule: if the main thread is blocked, UX is broken.

## 2. Performance panel: the center of truth

The Performance panel records exactly what the browser is doing over a period of time:

- Main thread activity: JS execution, parsing, GC.
- Rendering pipeline: style calculation, layout, paint, compositing.
- Network timing: when resources are requested and received relative to execution.
- User input handling: how long the browser took to respond to a click.

### Recording a useful trace

Idle traces are useless. You need interaction traces.

1. Open DevTools (Cmd+Option+I / Ctrl+Shift+I) and go to the Performance tab.
2. Check Screenshots and Web Vitals in the capture settings. Memory is usually optional unless you suspect a leak.
3. Click the Record button (circle icon).
4. Interact with the page (click the button, scroll the list, open the modal).
5. Click Stop.

## 3. Reading the Performance timeline

The resulting trace can be intimidating. Ignore 90% of it initially. Focus on these sections:

- FPS & CPU: High-level health check. Solid blocks of color in CPU mean the main thread is busy.
- Network: Thin lines showing resource loading order.
- Main: The flame chart of call stacks. This is where you spend most of your time.
- Frames: Screenshots of what the user saw at that millisecond.

### The Experience track

This is your best friend. It explicitly marks:

- LCP: Where the Largest Contentful Paint occurred.
- Layout Shifts: Red diamonds indicating CLS.
- Long Tasks: Tasks taking >50ms (red triangles).

### Spotting long tasks

A "long task" is any task that keeps the main thread busy for more than 50ms. In the Main section, look for gray bars with red triangles at the top corner. These are the tasks blocking the browser from responding to user input (INP).

## 4. Debugging LCP with DevTools

LCP measures loading performance. To fix it, you need to know what the element is and why it was late.

1. Identify the LCP element: In the Timings or Experience track, find the LCP marker.
2. Inspect the element: Hovering over the LCP marker often highlights the actual DOM node.
3. Analyze the delay:
   - Resource load delay: Was the image discovered late? (e.g., a lazy-loaded hero image.)
   - Resource load duration: Was the network slow or the image too large?
   - Render delay: Was the image loaded but waiting for a main-thread task to finish before painting?

Typical LCP root causes:

- Late discovery: The `<img>` tag is generated by JavaScript or has `loading="lazy"`.
- Render blocking: Huge CSS bundles or synchronous JS in the `<head>` pausing the parser.
- Server TTFB: The backend took too long to send the initial HTML.

```html
<!-- ❌ Bad: Lazy loading the LCP element (e.g. hero image) -->
<img src="hero.jpg" loading="lazy" alt="Hero Image" />

<!-- ✅ Good: Eager loading + fetchpriority -->
<img src="hero.jpg" loading="eager" fetchpriority="high" alt="Hero Image" />
```

Reference: Optimize Largest Contentful Paint

## 5. Debugging INP with DevTools

INP is the metric that kills single-page applications (SPAs). It measures the latency of user interactions.

1. Use the Interactions track: Look for the specific interaction (click, keypress) you recorded.
2. Expand the interaction: You will see it broken down into three phases:
   - Input delay: Time waiting for the main thread to become free.
   - Processing time: Time running your event handlers.
   - Presentation delay: Time waiting for the browser to paint the next frame.
3. Visually correlate with the main thread: Click the interaction bar, then look directly below it in the Main track.
   - If you see a massive yellow block of JavaScript under the interaction, your event handler is too slow (processing time).
   - If you see a massive block of JS before the interaction starts, the main thread was busy doing something else (input delay).

Common offenders:

- JSON parsing of large payloads.
- React/Vue reconciliation (rendering too many components).
- Synchronous loops or expensive calculations.

```javascript
// ❌ Bad: Blocking the main thread with heavy work
button.addEventListener("click", () => {
  const result = heavyCalculation(); // Blocks for 200ms
  updateUI(result);
});

// ✅ Good: Yielding to the main thread
button.addEventListener("click", async () => {
  showSpinner();
  // Yield to the main thread so the browser can paint the spinner
  await new Promise((resolve) => setTimeout(resolve, 0));
  const result = heavyCalculation();
  updateUI(result);
});
```

Fix workflow: identify the function in the flame chart → optimize or defer it → record again → verify the block is smaller.

Reference: Interaction to Next Paint (INP)

## 6. Debugging CLS with DevTools

Layout shifts are annoying and confusing. DevTools visualizes them clearly.

1. Open the Command Menu (Cmd+Shift+P / Ctrl+Shift+P) and type "Rendering".
2. Enable "Layout Shift Regions". As you interact with the page, shifted elements will flash blue.
3. In the performance trace, look at the Experience track for red diamonds. Click one. The Summary tab at the bottom will list exactly which nodes moved and their previous/current coordinates.

Common CLS patterns:

- Font swaps (FOUT/FOIT): Text renders, then the web font loads, changing the size.
- Image resize: Images without width and height attributes.
- Late-injected UI: Banners or ads inserting themselves at the top of the content.

```css
/* ❌ Bad: No space reserved for image */
img.hero {
  width: 100%;
  height: auto;
}

/* ✅ Good: Reserve space with aspect-ratio */
img.hero {
  width: 100%;
  height: auto;
  aspect-ratio: 16 / 9;
}
```

Reference: Optimize Cumulative Layout Shift

## 7. Live Metrics screen

The Live Metrics view (in the Performance panel sidebar or landing page) provides real-time feedback without a full trace.

Why it matters:

- Instant feedback: See LCP and CLS values update as you resize the window or navigate.
- Field-aligned: It uses the same implementation as the Web Vitals extension.

Use cases:

- Testing hover states and small interactions.
- Validating SPA route transitions.
- Quick sanity checks before committing code.

Note: this is still "lab data" running on your machine, not real user data (CrUX).

## 8. Insights panel

The Performance Insights panel is an experimental but powerful automated analysis layer. It uses the trace data to highlight risks automatically.

Key features:

- Layout shift culprits: It points directly to the animation or DOM update that caused a shift.
- Render-blocking requests: Identifies CSS/JS that delayed the First Contentful Paint.
- Long main-thread tasks: Suggestions on how to break them up.

Use Insights as a hint, not a verdict. It points you to the right place in the flame chart, but you still need to interpret the code.

## 9. CPU and network throttling (mandatory)

Developing on a MacBook Pro with fiber internet is a lie. Your users are on mid-tier Android devices with spotty 4G.

- CPU throttling: Set to 4x slowdown. This roughly simulates a mid-range Android device. It exposes "death by a thousand cuts": small scripts that feel instant on desktop but freeze a phone for 300ms.
- Network throttling: Fast 4G or Slow 4G. Critical for debugging LCP (image load times) and font loading behavior.

Fast Wi-Fi hides bad engineering. Always throttle when testing performance.

## 10. Putting it all together: a repeatable workflow

1. Detect: Use PageSpeed Insights or CrUX to identify which metric is failing.
2. Reproduce: Open DevTools, enable throttling (CPU 4x, Network 4G).
3. Record: Start tracing, perform the user action, stop tracing.
4. Inspect: Find the red/yellow markers in the Experience/Main tracks.
5. Fix: Apply the code change (defer JS, optimize images, reduce DOM depth).
6. Verify: Re-record and compare the trace. Did the long task disappear? Did the LCP marker move left?

## Conclusion

DevTools is not optional. Performance is observable. Every Core Web Vitals issue leaves a trace; you just need to know where to look. If you cannot explain a performance problem in DevTools, you do not understand it yet.

Resources:

- Chrome DevTools Documentation
- web.dev Performance Guides
- Google Search Central CWV Docs

I hope this has been helpful and/or taught you something new!

Until next time
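To complement the DevTools CLS workflow above, layout shifts can also be logged from the console with a `PerformanceObserver`. This is a sketch meant to be pasted into a Chromium browser console; `layout-shift` entries are not available in every browser.

```javascript
// CLS debugging sketch (assumption: run in a Chromium browser console;
// "layout-shift" performance entries are Chromium-specific).

// Pure helper: sum shift values the way CLS does, ignoring shifts that
// happened shortly after user input (flagged via `hadRecentInput`).
function sumLayoutShifts(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((total, entry) => total + entry.value, 0);
}

if (
  typeof PerformanceObserver !== "undefined" &&
  PerformanceObserver.supportedEntryTypes.includes("layout-shift")
) {
  const observer = new PerformanceObserver((list) => {
    const entries = list.getEntries();
    for (const entry of entries) {
      // `sources` points at the DOM nodes that actually moved.
      console.log(
        "Layout shift:",
        entry.value.toFixed(4),
        entry.sources ? entry.sources.map((s) => s.node) : []
      );
    }
    console.log("Shift total in this batch:", sumLayoutShifts(entries));
  });
  // `buffered: true` replays shifts that happened before observation started.
  observer.observe({ type: "layout-shift", buffered: true });
}
```

Seeing the shifting nodes logged live pairs well with the "Layout Shift Regions" overlay: one tells you what moved, the other shows you where.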

Navigating the Future - Understanding and Measuring Soft Navigations for SPAs

2025-11-12
performance, web-development, javascript, chrome

Explore the concept of "soft navigations" in Single Page Applications (SPAs), why traditional Core Web Vitals measurement has been challenging for them, and the ongoing efforts by the Chrome team to standardize and enable reporting for these dynamic content changes.

Optimizing for Speed - Practical Strategies to Improve Your Core Web Vitals

2025-11-04
performance, core-web-vitals, web-vitals, lcp, inp, cls, chrome-devtools

Pages can load fast. And still feel slow. That's the modern performance trap: you ship a "snappy" initial render, but interactions hitch, UI jumps, and the page loses trust. Core Web Vitals are designed to catch that gap, and they're judged in the field, not in your local Lighthouse run.

Demystifying Core Web Vitals - A Developer's Guide to LCP, INP, and CLS

2025-10-19
web-performance, core-web-vitals, lighthouse, web-development, crux, chrome, performance, devtools, chrome-devtools

Core Web Vitals are ranking signals, but most teams still optimize them like lab-only scorecards. This guide turns CWV into actionable engineering work: how to measure (field + lab), how to debug root causes in DevTools, and which fixes actually move the 75th percentile.

Introducing Chrome DevTools MCP

2025-09-30
javascript, chrome, devtools, ai, mcp, debugging, performance, chrome-devtools

I participated in the Chrome DevTools MCP Early Access Program and put the feature through its paces on real projects. I focused on four scenarios: fixing a styling issue, running performance traces and extracting insights, debugging a failing network request, and validating optimal caching headers for assets. This post shares that hands-on experience—what worked, where it shines, and how I’d use it day-to-day.Chrome DevTools MCP gives AI coding assistants actual visibility into a live Chrome browser so they can inspect, test, measure, and fix issues based on real signals—not guesses. In practice, this means your agent can open pages, click, read the DOM, collect performance traces, analyze network requests, and iterate on fixes in a closed loop.

Beyond the Basics - Advanced CSS Techniques for Web Developers

2025-09-19
css, performance, frontend, web-development

Discover how modern CSS has evolved from a styling language into a powerful tool for performance and logic. We dive into the rendering pipeline, dynamic theming with Custom Properties, and the new era of Container Queries and :has().