
khriztianmoreno's Blog


Posts with tag ai

The Web Stack You Cannot Ignore in 2026

2025-12-26
web-performance, identity, pwa, ai, devtools, programming, web-development, discuss

After going through roadmaps, specs, Chrome Dev Summit talks, and real signals from production, my prediction is simple: web development in 2026 moves toward more native capabilities, less unnecessary JavaScript, and performance you can measure in the real world.

This isn't a "cool tools" list. These are the areas that become non-optional.

1. Performance (Core Web Vitals + Soft Navigation)

If you only fix one thing, fix this. Performance is the priority. No debate.

Why it will be vital in 2026

Google is doubling down on real user experience, not synthetic benchmarks. Soft Navigation also changes how modern SPAs (and "MPA-like" apps) are evaluated.

In 2026:

- If you don't improve INP and LCP, you don't just "lose SEO"; you lose conversions.
- If you don't measure soft navigations correctly, you'll ship "faster" routes with fake metrics.

What changes

- CLS stops being "cosmetic".
- INP fully replaces the old "FID mindset".
- SPA performance gets judged like an MPA.

What you should master

- web-vitals in production
- Long tasks (and what creates them)
- Soft navigation heuristics
- RUM > Lighthouse (a minimal sketch follows below)

Resources

- Web Vitals
- Soft Navigation
- CrUX
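To ground the "RUM > Lighthouse" point, here is a minimal sketch of field measurement with the web-vitals library. The /analytics endpoint is a hypothetical placeholder; point it at whatever collector your stack actually uses.

```javascript
// Minimal RUM sketch using the web-vitals library (npm i web-vitals).
// The /analytics endpoint is hypothetical; swap in your own collector.
import { onCLS, onINP, onLCP } from "web-vitals";

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // "CLS" | "INP" | "LCP"
    value: metric.value, // current value of the metric
    id: metric.id,       // unique per page load, useful for deduplication
  });
  // sendBeacon survives page unloads; fall back to fetch with keepalive
  if (!(navigator.sendBeacon && navigator.sendBeacon("/analytics", body))) {
    fetch("/analytics", { body, method: "POST", keepalive: true });
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```

Collected this way, the numbers reflect what real users experience on real devices and networks, rather than a single lab run.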
2. Identity: Passkeys + FedCM

Traditional login is dying. It just doesn't know it yet.

Why it will be vital in 2026

Passwords are both a technical and legal liability. Passkeys reduce friction and fraud. FedCM is the browser's real answer to identity in a world without third-party cookies.

In 2026:

- A product without passkeys will be perceived as outdated.
- "Classic OAuth" without FedCM will degrade (or break) flows users care about.

What changes

- Passwordless becomes normal.
- Browser-native login UI becomes the expectation.
- Less JS. More platform.

What you should master

- WebAuthn
- Passkeys UX patterns
- FedCM flows
- Privacy-preserving identity

Resources

- FedCM
- Passkeys
- WebAuthn

3. Fugu / PWA APIs

The web talks to hardware now. The debate is over; what's left is execution.

Why it will be vital in 2026

Web apps compete directly with native when the capability gap is small. Browsers keep shipping standards-based APIs, which means fewer dependencies and less glue code.

In 2026:

- WebUSB, File System Access, and Badging stop being "rare".
- PWAs feel more and more like first-class apps when the use case fits.

What changes

- Real offline capabilities
- Deeper OS integration
- Faster UX without native wrappers

What you should master

- File System Access API
- Background Sync
- Badging API
- PWA install heuristics

Resources

- Web capabilities
- Progressive Web Apps

4. AI for Web Developers (Built-in AI APIs)

AI stops being "just a SaaS". It becomes part of the browser.

Why it will be vital in 2026

Lower latency. More privacy (because local is the new default). And better UX without forcing every product to build an expensive AI backend. This is not "embed ChatGPT". This is native AI, progressively enhanced.

In 2026:

- On-device AI becomes the default when available.
- AI-driven UX becomes a real differentiator.

What changes

- Smaller, faster models running locally
- Fewer external calls
- UI patterns that adapt in context

What you should master

- On-device inference constraints (and fallbacks)
- AI UX patterns (assistive, not intrusive)
- Privacy-first AI
- Progressive enhancement with AI

Resources

- AI in Chrome

5. DevTools & Browser Automation

Traditional debugging doesn't scale.

Why it will be vital in 2026

Apps get more complex. Performance issues get more subtle. And manual testing simply isn't viable if you want speed and quality.

In 2026:

- Observability from DevTools becomes a daily habit.
- Automation becomes part of the workflow, not a "QA phase".

What changes

- Smarter DevTools
- More integrated testing
- Debugging centered on real UX

What you should master

- Advanced Performance panel workflows
- Lighthouse CI
- Puppeteer / Playwright
- Tracing and deep profiling

Resources

- Chrome DevTools
- Lighthouse

My final prediction (no marketing)

If I had to bet on only one foundation: Performance + Identity will be the base. Everything else sits on top of that.

The web in 2026 will be:

- More native
- Faster
- More private
- Less dependent on "framework magic"

The rest is noise.

I hope this has been helpful and/or taught you something new!

@khriztianmoreno

Until next time

The Agent Development Kit - ADK

2025-10-15
ai, web-development, mcp

Today we're going to dive into the Agent Development Kit, better known as ADK. It's a framework that is changing the game for building artificial intelligence agents.

Think about this for a moment. For those who build software, the idea of creating a functional AI agent with a single line of code isn't just a question; it's the dream come true.

Developers know this headache well. You start with a simple idea, but soon the project turns into a maze of API calls, state management, and repetitive code. Everything becomes chaos. And that's exactly where ADK steps in to bring order.

What's interesting is that it brings solid software engineering principles, like modularity and testability, to AI development. It's an elegant solution to a complex problem.

🔩 The six key components

To understand how ADK works, you have to look under the hood. The system is made up of six fundamental building blocks, working like an orchestra where each instrument plays an essential role.

- The agent: the brain of the system. It processes requests, makes decisions, and knows when to use its tools.
- The runner: the conductor. It ensures everything flows correctly: messages are delivered, tools are triggered, and responses return smoothly.
- The session: the short-term memory. It keeps conversations coherent, remembers what's being discussed, and maintains context.
- The state: a shared whiteboard. All components can read and write data there. A tool can leave information and the agent can use it instantly.
- The memory: long-term knowledge. It stores information that persists across conversations, allowing the agent to learn and improve over time.
- The tools: the agent's hands. Without them, it could only chat; with them, it can act: search the web, connect to APIs, fetch data. They are what turns a chatbot into a truly useful assistant.

🎼 How the symphony works

When a request comes in:

1. The session loads conversation context.
2. The agent processes the request.
3. If necessary, a tool steps in.
4. The state is updated with the results.
5. The runner orchestrates everything to return a coherent response.

A perfect loop that brings intelligent agents to life.

🧰 Extend capabilities with tools

There are two ways to give agents new skills:

Build custom tools: designed for specific business logic. They only require four elements (see the sketch after this list):

- A unique name
- A clear description
- A schema with the required input data
- The function that performs the action

Use the Model Context Protocol (MCP): a universal connector that lets you integrate existing services (like GitHub, Discord, or Slack) without bespoke integration code.

Additionally, ADK is model-agnostic: it works with OpenAI, Anthropic, or any model. Total freedom.
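To make those four elements concrete, here is a sketch of what a custom tool definition can look like, written as plain JavaScript. This is illustrative only: the object shape and the execute handler are invented for the example and are not ADK's actual API surface.

```javascript
// Hypothetical custom tool; the four fields mirror ADK's required elements,
// but this exact shape is invented for illustration, not the real ADK API.
const getOrderStatus = {
  // 1. A unique name
  name: "get_order_status",
  // 2. A clear description (the agent uses this to decide when to call it)
  description: "Look up the shipping status of an order by its ID.",
  // 3. A schema with the required input data
  schema: {
    type: "object",
    properties: { orderId: { type: "string" } },
    required: ["orderId"],
  },
  // 4. The function that performs the action
  async execute({ orderId }) {
    const res = await fetch(`https://api.example.com/orders/${orderId}`);
    if (!res.ok) throw new Error(`Order lookup failed: ${res.status}`);
    return res.json(); // the result is written to state for the agent to use
  },
};
```

The same anatomy (name, description, input schema, handler) is also what MCP servers expose, which is why custom tools and MCP integrations compose so naturally.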
⚙️ Workflows and collaboration between agents

When a task is too large for a single agent, workflows come into play, allowing you to orchestrate multiple agents to automate complex processes.

It's key to distinguish between:

- Multi-agent systems: designed for dynamic conversations (like a support bot that escalates to a specialist).
- Workflows: for pure automation, with fixed, predictable steps (for example, generating a monthly report).

The most common patterns are:

- Sequential: one agent gathers data, another drafts, and another publishes. Ideal for step-by-step processes, like content creation.
- Parallel: splits a task across multiple agents at once (for example, technical review and business review simultaneously). You gain speed and multiple perspectives.
- Loop: one agent generates a result, another evaluates whether it meets the criteria, and if not, it returns it with feedback to improve. Perfect for iterative refinement and continuous improvement.

Rule of thumb:

- Step-by-step → sequential
- Speed or diversity → parallel
- Maximum quality → loop

🔌 Connecting to the outside world: the universal adapter (MCP)

The Model Context Protocol (MCP) acts like a universal power adapter. It connects agents to external services without the need for custom integrations. This separates the agent's logic from the technical complexity of external connections.

Real-world examples:

- A DevOps assistant that detects issues in GitHub and alerts via Discord.
- A support agent that queries the CRM and notifies the customer via Telegram.

The automation potential is enormous.

🔗 Essential references

Google – Agent Development Kit (ADK): this is the main resource. All official documentation can be found here.

- Official docs: Agent Development Kit
- GitHub SDK: Agent Development Kit (ADK) Web

🚀 Conclusion

ADK is much more than a set of tools. It represents a new philosophy for AI development, where agents are built with the same rigor as professional software.

The final question: what will be the next big problem we automate?

Introducing Chrome DevTools MCP

2025-09-30
javascript, chrome, devtools, ai, mcp, debugging, performance, chrome-devtools

I participated in the Chrome DevTools MCP Early Access Program and put the feature through its paces on real projects. I focused on four scenarios: fixing a styling issue, running performance traces and extracting insights, debugging a failing network request, and validating optimal caching headers for assets. This post shares that hands-on experience: what worked, where it shines, and how I'd use it day-to-day.

Chrome DevTools MCP gives AI coding assistants actual visibility into a live Chrome browser so they can inspect, test, measure, and fix issues based on real signals, not guesses. In practice, this means your agent can open pages, click, read the DOM, collect performance traces, analyze network requests, and iterate on fixes in a closed loop.
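For reference, wiring the server into an MCP-capable client is typically a one-entry configuration. A minimal sketch, assuming the published chrome-devtools-mcp npm package and a client that reads the common mcpServers JSON format (verify the exact setup against the project's current README):

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```

From there, the agent discovers the exposed tools (opening pages, reading the DOM, collecting traces, analyzing network requests) and drives a real Chrome instance with them.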

The revolutionary Chrome API

2025-08-01
ai, web-development, javascript

Artificial intelligence has evolved from a futuristic promise into a tangible and powerful tool in our daily lives as developers. Until recently, integrating AI into our web applications meant relying on external servers, dealing with latency, and managing costs. But what if I told you that the game is about to change dramatically? Google has given us a new and powerful tool: the Prompt API, which allows us to run AI models, like Gemini Nano, directly in the user's browser.

I've been experimenting with this API recently, and the feeling is electrifying. We are facing a paradigm shift that not only improves performance but also addresses key concerns like privacy and personalization. Join me as I break down why I believe this API is so revolutionary.

Why is it revolutionary?

The true revolution of the Prompt API lies in three fundamental pillars:

- Privacy by Design: by running the AI model on the client, the user's data never leaves their device. This is a massive change. Imagine being able to offer intelligent features without the user having to worry about how their personal information is handled in the cloud. This not only builds trust but also greatly simplifies compliance with privacy regulations.
- Zero Latency: communicating with an external server inevitably introduces a delay. With the Prompt API, inference happens locally, resulting in almost instantaneous responses. This is crucial for creating fluid and interactive user experiences that feel natural, not like you're waiting for a response from a distant server.
- Offline Availability: once the model has been downloaded, the application can continue to function without an internet connection. This opens the door to a new type of intelligent web application that is robust and always available, regardless of the quality of the user's connection.

First Steps: A Simple Example

Before we dive into more complex use cases, let's see how simple it is to get started. The first thing is to check if the user's browser can run the model.

```javascript
// Check if the API is available
const availability = await window.ai.getAvailability();

if (availability === "available") {
  console.log("AI is ready to use!");
} else {
  console.log("AI is not available on this device.");
}
```

If it's available, we can create a session and send it a prompt.
It's that easy!

```javascript
if (availability === "available") {
  // Create an inference session
  const session = await window.ai.createSession();

  // Send a prompt and wait for the full response
  const response = await session.prompt("Write me a short poem about code.");
  console.log(response);

  // Don't forget to destroy the session to free up resources
  session.destroy();
}
```

This simple code snippet already shows us the power we have at our fingertips.

Use Cases Where This API Would Shine

The possibilities are enormous, but here are a few ideas where the Prompt API could have an immediate and significant impact:

- Intelligent Writing Assistants: imagine a text editor that not only corrects your grammar but also helps you rephrase sentences, adjust the tone, or even generate complete drafts of emails or articles, all in real time and without sending your drafts to any server.
- Content Classification and Organization: a news site or blog could use the API to automatically classify articles into relevant categories for the user, allowing for the creation of personalized and dynamic feeds without needing backend logic.
- Client-Side Semantic Search: instead of a simple keyword search, you could implement a search that understands the meaning behind the user's query within the content of a page or a set of documents, offering much more accurate results.
- Audio Transcription and Image Description: the API has multimodal capabilities. You could allow users to record voice notes and transcribe them instantly, or upload an image and automatically generate descriptive alt text to improve accessibility.

Why Is This Important?

The Prompt API isn't just a new feature; it's a statement about the future of the web. It's empowering us as developers to build the next generation of web applications: smarter, more private, and more user-centric.

By moving AI to the client, access to this technology is democratized. Large server infrastructures and high inference budgets are no longer required. Small developers and teams can now compete on a level playing field, creating innovative experiences that were previously reserved for large corporations.

Exploring the API in Depth: More Examples

The official documentation provides more advanced examples that are worth exploring.

Streaming Responses

For longer responses, we can display the result as it's generated, improving the perception of speed.

```javascript
const session = await window.ai.createSession();
const stream = session.promptStreaming(
  "Write me an extra-long poem about the universe"
);

for await (const chunk of stream) {
  // Append each chunk to your UI
  document.getElementById("poem-div").textContent += chunk;
}
```
Maintaining Context

Sessions remember previous interactions, allowing for fluid conversations.

```javascript
const session = await window.ai.createSession({
  initialPrompts: [
    { role: "system", content: "You are a friendly and helpful assistant." },
  ],
});

let response1 = await session.prompt("What is the capital of Italy?");
console.log(response1); // "The capital of Italy is Rome."

let response2 = await session.prompt("And what language is spoken there?");
console.log(response2); // "The official language of Italy is Italian."
```

Structured Output with JSON

You can force the model to respond in a specific JSON format, which is incredibly useful for integrating the AI with other parts of your application.

```javascript
const session = await window.ai.createSession();
const schema = { type: "boolean" };
const post = "Today I baked some ceramic mugs and they turned out great.";

const result = await session.prompt(`Is this post about ceramics?\n\n${post}`, {
  responseConstraint: schema,
});

console.log(JSON.parse(result)); // true
```

Conclusion: The Future is Local

My experience with the Prompt API has been revealing. It's one of those technologies that makes you feel like you're witnessing the beginning of something big. It gives us the tools to build a smarter, more privacy-respecting web, right from the browser.

I invite you to dive in, experiment, and think about how you can use this incredible capability in your own projects. The future of AI on the web is local, and it's already here.

WebContainers at its best - Bolt.new combines AI and full-stack development in the browser

2024-10-08
javascript, ai, web-development

Remember WebContainers? It's the WebAssembly-based "micro operating system" that can run Vite operations and the entire Node.js ecosystem in the browser. The StackBlitz team built WebContainers to power their in-browser IDE, but it often felt like the technology was still searching for a killer use case. Until now. That's because StackBlitz just released bolt.new, an AI-powered development sandbox that Eric Simons described during ViteConf as "like if Claude or ChatGPT had a baby with StackBlitz."

Bolt.new

I'll try not to imagine it too vividly, but based on the overwhelmingly positive reviews so far, I'm guessing it's working: dozens of developers describe it as a combination of v0, Claude, Cursor, and Replit.

How Bolt is different: existing AI code tools can often run some basic JavaScript/HTML/CSS in the browser, but for more complex projects, you need to copy and paste the code to a local environment. Not Bolt. By using WebContainers, you can request, run, edit, and deploy entire web applications, all from the browser. Here's what it looks like:

- You can ask bolt.new to build a production-ready multi-page app with a specific backend and database, using any technology stack you want (e.g. "Build a personal blog using Astro, Tailwind, and shadcn").
- Unlike other tools, Bolt can install and run relevant npm packages and libraries, interact with third-party APIs, and run Node servers.
- You can manually edit the code it generates via an in-browser editor, or have Bolt resolve errors for you. This is unique to Bolt, because it integrates AI into all levels of WebContainers (not just the CodeGen step).
- You can deploy to production from chat via Netlify, no login required.

There's a lot more we could go over here, but Eric's demo is pretty wild: https://www.youtube.com/watch?v=knLe8zzwNRA (the demo starts around 11:40).

In closing: from the outside, it wasn't always clear whether StackBlitz would ever see a significant return on investment over the 5+ years they've spent developing WebContainers. But suddenly it looks like they might be uniquely positioned to help developers leverage AI to build legitimate full-stack applications.

I hope this has been helpful and/or taught you something new!

@khriztianmoreno

Unlock your creativity with Google Gemini and JavaScript - A practical guide

2024-06-12
javascript, ai, tutorial

Hello! Today I bring you a new tool that will boost your creativity to another level: Google Gemini. This artificial intelligence API allows you to generate high-quality text in Spanish, from simple phrases to complete stories, with just a few lines of code.

What is Google Gemini?

Google Gemini is a state-of-the-art language model developed by Google AI. It has been trained with a massive dataset of text and code, allowing it to understand and generate natural language with impressive accuracy.

What can we do with Google Gemini and JavaScript?

The possibilities are endless. Here are some examples:

- Generate creative content: write poems, stories, scripts, blog posts, or any type of textual content you can imagine.
- Translate languages: translate texts from one language to another quickly and accurately.
- Answer questions: get answers to your questions in an informative and complete way.
- Create chatbots: develop conversational chatbots that interact with users naturally.
- Automate tasks: automate the generation of reports, emails, and other tasks that require natural language processing.

How to get started?

To get started with Google Gemini with JavaScript, you only need:

- A Google Cloud Platform account: https://cloud.google.com/
- To enable the Google Gemini API: https://ai.google.dev/

Practical example:

In this example, we are going to generate a poem using Google Gemini and JavaScript. For text generation we use the getGenerativeModel method of the genAI object with a text model such as gemini-pro.

```javascript
const { GoogleGenerativeAI } = require("@google/generative-ai");

// Access your API key as an environment variable (see "Set up your API key" above)
const genAI = new GoogleGenerativeAI(process.env.API_KEY);

async function run() {
  // For text generation, use a text model such as gemini-pro
  const model = genAI.getGenerativeModel({ model: "gemini-pro" });

  const prompt = "Escribe un poema sobre la naturaleza";
  const result = await model.generateContent(prompt);

  console.log(result.response.text());
}

run();
```

Example of a generated poem:

The green earth, the blue sky, the sun shines with crystal light. The flowers bloom in the garden, the birds sing with sweet trill. The wind rustles through the leaves, the bees buzz among the flowers. Nature is a divine gift, a place of peace and harmony.

Conclusion:

Google Gemini and JavaScript are a powerful combination that allows you to unlock your creativity and develop amazing applications. With this practical guide, you are ready to start exploring the endless possibilities of this technology.

Additional Resources:

- Google Gemini Documentation: https://ai.google.dev/docs
- Google Gemini Tutorials: https://m.youtube.com/watch?v=TXvbT8ORI50
- Google Gemini Code Examples: https://m.youtube.com/watch?v=jTdouaDuUOA

Feel free to experiment with Google Gemini and JavaScript! Share your creations in the comments and let me know what you think of this tool.

I hope this has been helpful and/or taught you something new!

@khriztianmoreno