
khriztianmoreno's Blog


Posts with tag web-development

Fetching Data in React: A Beginner's Guide

2024-12-09
react, javascript, web-development

Imagine building a webpage that displays real-time weather data or a list of products from an online store. How does your React application get this information? The answer lies in data fetching: retrieving data from various sources (APIs, databases) and incorporating it into your React components.

Sequential Data Fetching

Think of sequential data fetching as a step-by-step process: you fetch one piece of data, wait for it to arrive, and then move on to the next.

Example: fetching user information, then their orders, and finally their address.

```js
// Sequential fetching using async/await
async function fetchData() {
  const user = await fetchUser();
  const orders = await fetchOrders(user.id);
  const address = await fetchAddress(user.id);
  // ...
}
```

Parallel Data Fetching

In parallel fetching, multiple data requests are made simultaneously.

Example: fetching user information, orders, and address at the same time.

```js
// Parallel fetching using Promise.all
Promise.all([fetchUser(), fetchOrders(userId), fetchAddress(userId)])
  .then(([user, orders, address]) => {
    // ...
  })
  .catch((error) => {
    // ...
  });
```

Prefetching Data

To enhance the perceived speed of your application, consider prefetching data before it's required. This technique is particularly effective for data that's likely to be needed soon but not immediately. For instance, when using a framework like Next.js built on top of React, you can prefetch the next page and its associated data with the Link component.

Example: fetching post details for the next page.

```jsx
<Link href="/posts/1" prefetch>
  <a>Post 1</a>
</Link>
```

As soon as this Link component becomes visible on the screen, the data for the next page is preloaded. These subtle optimizations can significantly improve the perceived performance of your app, making it feel more responsive.

Conclusion

Choosing the right data fetching pattern depends on your specific use case:

- Sequential: simple to implement, suitable for small-scale applications.
- Parallel: improves performance for larger datasets, but can be more complex.
- Prefetching: enhances user experience by reducing perceived loading times.

Key takeaways:

- Async/await: a modern way to handle asynchronous operations in JavaScript.
- Promises: a way to represent the eventual completion (or failure) of an asynchronous operation.
- Performance: parallel fetching and prefetching can significantly improve performance.
- User experience: prefetching can make your application feel snappier.

Additional tips:

- Error handling: always handle errors gracefully to provide a better user experience.
- Caching: store frequently accessed data to reduce the number of network requests.
- State management: use libraries like Redux or Zustand to manage complex application state, especially when dealing with fetched data.

By understanding these patterns, you can build more efficient and responsive React applications.
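To see the difference between the two patterns end to end, here is a small, self-contained sketch with fake fetchers (the data and delays are invented for illustration); the parallel version resolves the independent requests together instead of awaiting them one by one:

```javascript
// Fake async data sources standing in for real API calls.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

const fetchUser = () => delay(50, { id: 7, name: "Ada" });
const fetchOrders = (userId) => delay(50, [{ userId, total: 42 }]);
const fetchAddress = (userId) => delay(50, { userId, city: "Medellín" });

// Sequential: each await waits for the previous result (~150ms total).
async function sequential() {
  const user = await fetchUser();
  const orders = await fetchOrders(user.id);
  const address = await fetchAddress(user.id);
  return { user, orders, address };
}

// Parallel: orders and address don't depend on each other, so once we
// have the user id we can fetch both at the same time (~100ms total).
async function parallel() {
  const user = await fetchUser();
  const [orders, address] = await Promise.all([
    fetchOrders(user.id),
    fetchAddress(user.id),
  ]);
  return { user, orders, address };
}

parallel().then(({ user, orders, address }) => {
  console.log(user.name, orders.length, address.city);
});
```

Both functions return the same shape of data; the only difference is how long the user waits for it.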

WebContainers at its best - Bolt.new combines AI and full-stack development in the browser

2024-10-08
javascript, ai, web-development

Remember WebContainers? It's the WebAssembly-based "micro operating system" that can run Vite operations and the entire Node.js ecosystem in the browser. The StackBlitz team built WebContainers to power their in-browser IDE, but it often felt like the technology was still searching for a killer use case until now: StackBlitz just released bolt.new, an AI-powered development sandbox that Eric Simons described during ViteConf as "like if Claude or ChatGPT had a baby with StackBlitz."

I'll try not to imagine that too vividly, but based on the overwhelmingly positive reviews so far, it seems to be working: dozens of developers describe it as a combination of v0, Claude, Cursor, and Replit.

How Bolt is different: existing AI code tools can often run some basic JavaScript/HTML/CSS in the browser, but for more complex projects you need to copy and paste the code into a local environment. Not Bolt. Because it uses WebContainers, you can request, run, edit, and deploy entire web applications, all from the browser. Here's what that looks like:

- You can ask bolt.new to build a production-ready multi-page app with a specific backend and database, using any technology stack you want (e.g. "Build a personal blog using Astro, Tailwind, and shadcn").
- Unlike other tools, Bolt can install and run relevant npm packages and libraries, interact with third-party APIs, and run Node servers.
- You can manually edit the generated code in an in-browser editor, or have Bolt resolve errors for you. This is unique to Bolt, because it integrates AI into all levels of WebContainers (not just the codegen step).
- You can deploy to production from the chat via Netlify, no login required.

There's a lot more we could cover here, but Eric's demo is pretty wild: https://www.youtube.com/watch?v=knLe8zzwNRA&t=700s

In closing: from the outside, it wasn't always clear whether StackBlitz would ever see a significant return on the 5+ years they've spent developing WebContainers. But suddenly they look uniquely positioned to help developers leverage AI to build legitimate full-stack applications.

I hope this has been helpful and/or taught you something new!

Rust is revolutionizing JavaScript development!

2024-07-24
web-development, javascript

Rspack just released version 1.0, and the dream of using Rust-based build tools to speed up the JavaScript ecosystem is more alive than ever.

How did we get here? Early last year, a team of developers at ByteDance was facing performance issues maintaining the company's "many large monolithic applications." So they did what any good developer would do: they blamed webpack. But they didn't stop there. In March 2023, they released Rspack v0.1, a high-performance JavaScript bundler written in Rust and designed to be fully compatible with the webpack ecosystem.

Fast forward to today, and Rspack now has 100k weekly downloads and has introduced key improvements that make it production-ready:

- Better performance: new features like lazy compilation and other optimizations make Rspack 1.0 build times over 20x faster than webpack 5.
- Increased webpack compatibility: over 80% of the top 50 most-downloaded webpack plugins can now be used in Rspack, bringing it closer to becoming a true drop-in replacement for webpack.
- Reduced complexity: the team created a new toolchain called Rstack that includes separate projects like Rsbuild, Rspress, and Rslib, each targeting a different use case. This reduces the complexity of setting up an all-in-one tool like Rspack (or webpack) while maintaining flexibility.

Bottom line: Rspack offers a simple value proposition for developers: if you already use webpack, migrating to a bundler that is faster, easier to use, and still fully compatible with the webpack API should be straightforward. Time will tell if that's enough to convince the masses to try it out.
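To give a sense of how close the configuration surface is to webpack's, here is a minimal, hypothetical rspack.config.js sketch; the entry/output/rules shape mirrors a webpack config, but the paths and loader options are illustrative assumptions, not taken from the article.

```javascript
// rspack.config.js -- a minimal sketch; paths and options are illustrative.
const path = require("path");

module.exports = {
  entry: { main: "./src/index.js" },
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "[name].js",
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        // Rspack ships a built-in SWC-based loader in place of babel-loader.
        use: "builtin:swc-loader",
        exclude: /node_modules/,
      },
    ],
  },
};
```

If you already have a webpack.config.js, the migration often starts as little more than renaming the file and swapping JS-based loaders for their built-in equivalents.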

How to mock an HTTP request with Jest 💻

2024-05-07
javascript, testing, nodejs, jest, web-development

Today I wanted to show you how to properly write a test. But anyone can figure out how to run a simple test, and here we're looking to help you find answers you won't find anywhere else. So I thought we'd take things one step further: let's run a more complex test, where you'll have to mock one or two parts of the function you're testing.

[In case you're new here: a mock is like a stunt double in a movie. It's a way to replace a complicated part of your code (like calling an API) with something simpler that pretends to be real, so you can test the rest of your code easily.]

My testing framework of choice is Jest, because it makes everything so much easier:

- Zero configuration: one of the main advantages of Jest is its zero-configuration setup. It is designed to work out of the box with minimal configuration, making it very attractive for projects that want to implement tests quickly and efficiently.
- Snapshot testing: Jest introduced the concept of snapshot testing, which is particularly useful for testing UI components. It takes a snapshot of a component's rendered output and ensures that it doesn't change unexpectedly in future tests.
- Built-in mocking and spies: Jest comes with built-in support for mock functions, modules, and timers, making it easy to test components or functions in isolation without worrying about their dependencies.
- Asynchronous testing support: Jest supports asynchronous testing out of the box, which is essential for modern JavaScript applications that often rely on asynchronous operations like API calls or database queries.

Anyway, let's get into the tests.

Step 1: Set up your project

- Create a new project directory and navigate to it.
- Initialize a new npm project: npm init -y
- Install Jest: npm install --save-dev jest
- Install axios to make HTTP requests: npm install axios

These are the basic requirements. Nothing new or fancy here. Let's get started.

Step 2: Write a function with an API call

Now, let's say you log into some kind of application. StackOverflow, for example. Most likely, at the top right you'll see information about your profile: maybe your full name and username. To get these, we typically have to make an API call, so let's see how we would do that.

- Create a file called user.js.
- Inside user.js, write a function that makes an API call, for example using axios to retrieve user data:

```js
// user.js
import axios from "axios";

export const getUser = async (userId) => {
  const response = await axios.get(`https://api.example.com/users/${userId}`);
  return response.data;
};
```

Step 3: Create the test file

Okay, now that we have a function that fetches the user for the ID we request, let's see how we can test it. Remember, we want something that works always and for all developers. That means we don't want to depend on whether the server is running (that's not what we are testing), and we don't want to depend on the users we have in the database. In my database, ID 1 could belong to my admin user, while in your database, ID 1 could belong to YOUR admin user. The same function would give us different results, which would cause the test to fail even though the function works correctly. Read on to see how we tackle this problem using mocks.

- Create a file called user.test.js in the same directory.
- Inside this file, import the function you want to test:

```js
import axios from "axios";
jest.mock("axios");
import { getUser } from "./user";
```

- Write your test case, mock the call, and return mock data:

```js
test("should fetch user data", async () => {
  // Mock data to be returned by the Axios request
  const mockUserData = { id: "1", name: "John Doe" };
  axios.get.mockResolvedValue({ data: mockUserData });

  // Call the function
  const result = await getUser("1");

  // Assert that the Axios get method was called correctly
  expect(axios.get).toHaveBeenCalledWith("https://api.example.com/users/1");

  // Assert that the function returned the correct data
  expect(result).toEqual(mockUserData);
});
```

Step 4: Run the test

Add a test script to your package.json:

```json
"scripts": {
  "test": "jest"
}
```

Run your tests with npm test.

Step 5: Review the results

Jest will display the result of your test in the terminal. The test should pass, indicating that getUser returns the mocked data as expected. Congratulations, you now have a working test with Jest and mocking.

I hope this was helpful and/or made you learn something new!

Are you making THESE unit testing and mocking mistakes?

2024-04-08
javascript, testing, web-development

Testing is hard. And it doesn't matter if you're an experienced tester or a beginner: if you've put significant effort into testing an application, chances are you've made some of these testing and mocking mistakes in the past. From test cases packed with duplicate code and huge lifecycle hooks, to conveniently incorrect mocking cases and missing, sneaky edge cases, there are plenty of common culprits. I've tracked some of the most popular cases and listed them below. Go ahead and count how many of them you've done in the past. Hopefully, it'll be a good round.

Why do people make mistakes in testing in the first place?

While automated testing is one of the most important parts of the development process, and unit testing saves us countless hours of manual testing and countless bugs caught in test suites, many companies don't use unit testing or don't run enough tests. Did you know that the average test coverage for a project is ~40%, while the recommended level is 80%? This means that a lot of people aren't used to running tests (especially complex test cases), and when you're not used to doing something, you're more prone to making a mistake. So without further ado, let's look at some of the most common testing errors I see.

Duplicate code

The three most important rules of software development are also the three most important rules of testing. What are these rules? Reuse. Reuse. Reuse. A common problem I see is repeating the same series of commands in every test instead of moving them to a lifecycle hook like beforeEach or afterEach. This could be because the developer was prototyping, or the project was small and the change insignificant. These cases are fine and acceptable. But a few test cases later, the problem of code duplication becomes more and more apparent. And while this is more of a junior developer mistake, the following one is similar but much more clever.

Overloading lifecycle hooks

On the other side of the same coin, sometimes we are too eager to refactor our test cases, and we put so much into lifecycle hooks without thinking twice that we don't see the problem we are creating for ourselves. Sometimes lifecycle hooks grow too large. And when this happens, and you need to scroll up and down to get from the hook to the test case and back, that is a problem, often referred to as "scroll fatigue". I remember being guilty of this in the past. A common practice to keep the file readable when we have bloated lifecycle hooks is to extract the common configuration code into small factory functions.

So, let's imagine we have a few (dozens of) test cases that look like this:

```js
describe("authController", () => {
  describe("signup", () => {
    test("given user object, returns response with 201 status", async () => {
      // Arrange
      const userObject = {
        // several lines of user setup code
      };
      const dbUser = {
        // several lines of user setup code
      };
      mockingoose(User).toReturn(undefined, "findOne");
      mockingoose(User).toReturn(dbUser, "save");
      const mockRequest = {
        // several lines of constructing the request
      };
      const mockResponse = {
        // several lines of constructing the response
      };

      // Act
      await signup(mockRequest, mockResponse);

      // Assert
      expect(mockResponse.status).toHaveBeenCalled();
      expect(mockResponse.status).toHaveBeenCalledWith(201);
    });

    test("given user object with email of an existing user, returns 400 status - 1", async () => {
      // Arrange
      const userObject = {
        // several lines of user setup code
      };
      const dbUser = {
        // several lines of user setup code
      };
      const mockRequest = {
        // several lines of constructing the request
      };
      const mockJson = jest.fn();
      const mockResponse = {
        // several lines of constructing the response
      };
      mockingoose(User).toReturn(dbUser, "findOne");

      // Act
      await signup(mockRequest, mockResponse);

      // Assert
      expect(mockResponse.status).toHaveBeenCalled();
      expect(mockResponse.status).toHaveBeenCalledWith(400);
      expect(mockJson).toHaveBeenCalled();
      expect(mockJson).toHaveBeenCalledWith({
        status: "fail",
        message: "Email taken.",
      });
    });
  });
});
```

We can extract the repeated configuration into its own functions, called createUserObject, createDbUserObject, and createMocks. Then the tests would look like this:

```js
test("given user object, returns response with 201 status", async () => {
  const userObject = createUserObject();
  const dbUser = createDbUserObject();
  const [mockRequest, mockResponse] = createMocks(userObject);

  mockingoose(User).toReturn(undefined, "findOne");
  mockingoose(User).toReturn(dbUser, "save");

  await signup(mockRequest, mockResponse);

  expect(mockResponse.status).toHaveBeenCalled();
  expect(mockResponse.status).toHaveBeenCalledWith(201);
});
```

By extracting those snippets into separate factory functions, we avoid scroll fatigue, keep lifecycle hooks snappy, and make it easier to navigate the file and find what we're looking for.

Not prioritizing the types of tests you run

This has more to do with large codebases where there are literally hundreds or even thousands of test cases running every time a new set of commits is merged. In such cases, running all the test suites can take hours, and you may not always have the time or resources to do so. When time or resources are limited, it's important to strategically choose which type of test to prioritize. Generally, integration tests provide better reliability assurances due to their broader scope, so when you have to choose between the two, it's often a good idea to choose integration tests over unit tests.

Using logic in your test cases

We want to avoid logic in our test cases whenever possible. Test cases should contain only simple validation and avoid things like try-catch blocks or if-else conditionals. This keeps your tests clean and focused on the expected flow, and makes them easier to understand at a glance. The only exception is when you're writing helper or factory functions that set up scenarios for tests.

Using loose validations instead of strict assertions

This is usually a sign that you might need to refactor the piece of code you're testing, or that you need to make a minor adjustment to your mocks. For example, instead of checking that a value is greater than 1, be specific and assert that the value is 2. Or, if you're checking data for a User object, assert that each piece of data is exactly what you expect, rather than just checking for an ID match. Loose checks can mask edge cases that could fail in the future.

Improper implementation of mock behavior

This one is hard to find, and that's why you can find an example in every codebase. It's one of the sneakiest yet most common testing issues, and it's hard to notice at first glance. It can happen when the mock behavior is overly simplified, or when it doesn't accurately reflect edge cases and error conditions. As a result, tests may pass, but they won't provide a reliable indication of how the system performs under various conditions, leading to future errors and unexpected problems, and to test cases with simulated behavior that end up doing more harm than good.

I hope this post helps you identify the practices we should avoid when testing.

Why is it so difficult to pass state between client and server components?

2024-03-28
javascript, react, web-development

The way we render server components is different. Nothing like what we're used to so far. And because it's so different, it also changes where we handle state, how we handle it, and how we manage to sleep at night knowing these are all important things we should have known since last year, while most of us are completely unaware of them.

WHY? Server components impact three of the most important parts of web development:

- Performance
- User experience
- The way we, as developers, write code and design our applications

It's not something we can ignore or take lightly. As you can see, I've been thinking about it a lot, and you've probably been thinking about it too. Every developer who values their keyboard is thinking about it. And there's this specific question, kind of a "chicken and egg" question, mainly because both halves get asked a lot: how on earth do I handle state in server components if I don't have access to state in the first place?

Before I give you the answer, let me explain the problem at hand. Consider what happens when a server component requests a new render. Unlike client components, where state is preserved between renders, server components don't have that luxury. Like a roguelike game, they always start from scratch. There is no inherent code or mechanism that can make a server component remember state. The backend has all those databases and design patterns and all those complex functions, and for what? It can't even handle state.

So what do we do? Do we simply use server components for completely static components that don't need any state? While this is a possible answer, it's also an approach that limits the effectiveness and reach of server components in modern web applications. So we're ruling it out, because everything I said above has a caveat: while the backend may not handle client state the way the client does, it does handle application state. So, in a way, we can handle state on the server. Just not in the way you think. And, actually, there isn't just one way: there are THREE ways to handle state on the server. Which one we choose depends on what best suits our needs and current situation. The three ways are:

- Prop drilling from the server to the components
- Cookies
- State hydration

Now, another million-dollar question: why can't we just handle all the state on the client? Because server components advocate a separation of concerns, which in simpler terms means that each part of the application should mind its own business. By decoupling state from rendering, we not only improve performance but also gain more control over the user experience.
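To make the state hydration option concrete, here is a minimal, framework-free sketch: the server serializes the initial state into the HTML payload, and the client reads it back before its first render. The function names and the `__STATE__` global are illustrative assumptions, not an API from any framework.

```javascript
// Minimal state-hydration sketch (names are illustrative, not a real API).

// Server side: embed the initial state as JSON inside the HTML payload.
function renderPage(initialState) {
  const json = JSON.stringify(initialState);
  return [
    "<html><body>",
    "<div id='app'></div>",
    // The client script reads this global before its first render.
    `<script>window.__STATE__ = ${json};</script>`,
    "</body></html>",
  ].join("\n");
}

// Client side: recover the state the server prepared. In a browser this
// would just be `window.__STATE__`; here we parse the payload back out
// so the sketch is runnable anywhere.
function hydrateState(html) {
  const match = html.match(/window\.__STATE__ = (.*);<\/script>/);
  return match ? JSON.parse(match[1]) : null;
}

const html = renderPage({ user: "khriztianmoreno", theme: "dark" });
const state = hydrateState(html);
console.log(state.theme); // → "dark"
```

Frameworks do exactly this under the hood when they "hydrate" a server-rendered page: the state crosses the wire as data, not as live component state.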

Will you still be using client-side components in 2024?

2024-02-14
javascript, react, web-development

I love server components. But they're not for every occasion. In fact, using them at each and every opportunity is like tenderizing a steak with the same hammer you used to hammer a pin. That said, there are also cases where server components fit like Cinderella and her shoe.

Dashboards & Reports

For dashboards and reports, you maximize performance by processing and rendering all the data on the server, while still allowing the client to do what it was meant to do in the first place. If we only use client components, we offload more tasks to the client and make it do more than it can potentially handle. If you processed and rendered the data on the client, you'd spend a lot of time waiting around with an ugly spinner mocking you, because of all the extra steps the client has to take to achieve the same result: first it needs to request data from your endpoint, which in turn fetches from the database, which in turn reads from its logs. Not only that, but you also have to wait patiently for the data to arrive at the client, even if it arrives late. Only once it arrives can the client start processing it and displaying it to the user. That's why, in this case, it's better to have the components built and rendered directly on the server. Then the client can get all the widgets its electronic heart desires, all streamed in parallel, which is a much faster and easier way to do business.

Blog Posts

Blog posts are some of the most static content you can find. After all, it's mostly just a bunch of words in order, with the occasional image, gif, or meme here and there. With server components, you pre-render the content on the server, which is the best-case scenario for SEO because it's delivered fast and complete. This use case is what tends to throw developers off balance and confuse them, because blog posts are also the first thing they think of when it comes to SSR. So if they think of both SSR and server components when they think of blog posts, it's natural to assume the two are the same thing or can be used interchangeably. Which is completely and utterly wrong, and a sin worthy of burning in hell for eternity, according to my previous blog post. But at the same time it makes sense from the point of view of results: although the two approaches are very different in practice, the results are quite similar.

Server-Side Data Fetching

Server-side data fetching is a great way to give your code a security boost if you would otherwise reveal logic or API keys that are supposed to remain hidden, or if you're paranoid about every piece of information you hold. You can use this type of data fetching to access not only your database but any other API you want. All while being sneaky.

However, there are cases where server components are NOT ideal and should in fact be avoided.

High Interactivity Requirements

This is fancy wording for anything you do something to and get something back. Kind of like Jell-O, but not quite. Something more familiar, and more in keeping with web development itself, would be things like forms and buttons. These components often have to react to your actions, may have their own state, and communicate with other components on the client side, making them the weakest link when it comes to server components.

Stateful Components

If you take it as literally as possible and use the definition we've used so far, there is no way to handle state in a server component. But if you squint, tilt your head a little, defocus a bit, and take a deep breath, you can see that this is only half true. We'll cover the nuances another time, but for now let's just say that server components do NOT have access to state. After all, they don't have access to state-related hooks.

Conclusion

Server components are a powerful tool for specific scenarios in web development. Use them to improve performance and security for data-intensive tasks. But remember, they may not fit every situation, especially for interactive or stateful elements.
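To illustrate the security argument for server-side data fetching, here is a small, framework-free sketch: the secret API key lives only inside the server function, and only the rendered output crosses the wire. The key, URL, and the fake fetcher are all invented for illustration.

```javascript
// Illustrative sketch: all names, URLs, and the fake fetcher are made up.
const API_KEY = "server-only-secret"; // never shipped to the browser

// Stand-in for a real network call so the sketch runs anywhere.
async function fakeFetch(url, { headers }) {
  // A real API would validate headers.Authorization here.
  return { json: async () => ({ name: "khriztianmoreno", plan: "pro" }) };
}

// Runs on the server: the key is used here and never leaves.
async function getProfileHtml(userId) {
  const res = await fakeFetch(`https://api.example.com/users/${userId}`, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  const user = await res.json();
  // Only the rendered markup is sent to the client, not the key.
  return `<section><h1>${user.name}</h1><p>${user.plan}</p></section>`;
}

getProfileHtml("1").then((html) => {
  console.log(html);
  console.log(html.includes(API_KEY)); // → false: the secret stays server-side
});
```

The same shape applies to a real server component: the credentialed fetch happens during server rendering, and the client only ever sees HTML.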

Server Components Vs Server-side Rendering

2024-01-28
javascript, react, web-development

Did you know that server components and server-side rendering are two completely different things? And while they go hand in hand in many cases, there are just as many examples where you can, and should, use only one of them.

Now, to be fair, given the online documentation most developers find about server components and server-side rendering, it's no surprise that they assume the two are the same. (It also gives them the perfect excuse to avoid server components altogether, rather than face the fact that this is something they should already be pretty good at using and will eventually have to learn anyway.) That said, server components have a lot of advantages over server-side rendering, especially when it comes to building larger, more complex applications. Let's talk a bit more about the differences between the two.

Server-Side Rendering

Server-side rendering renders the page on the server. But obviously that's not a very useful definition on its own, so let's dig a little deeper. Server-side rendering renders the page on the server at request time, meaning that every time a client makes a request to the server, such as when a new visitor comes to your page, the server re-renders the same page and sends it to the client. And while this is great for SEO, since even the most dynamic pages appear static in the eyes of the robots that index them for search, server-side rendering involves a lot of steps between the client making the request and the page finally loading in front of you.

First, when the request hits the server, the server statically renders the page and sends it back to the client, where the user gets a version of your page devoid of all interactivity. But that's not the only thing the server sends. Along with the static HTML, the server sends its condolences plus a big bundle of React code that the client will need to run to make the page dynamic again.

But what happens when 90% of our page is static? You might think the correct answer is "practically nothing", and you wouldn't be exactly wrong, but just like in calculus class, you lose points because this isn't the answer I was looking for. In reality, there's still a lot of JavaScript being sent to the client and executed without changing much on the page itself. This is a waste of time and data, and it's one of the main reasons server components were created. (Which also means that if you're using SSR as an alternative to server components, you've hit a big snag, and it's time to change your ways before you get thrown into "slow performance" jail.)

Server Components

Server components, like SSR, are rendered on the server, but they have the unique ability to include server-side logic without sending any additional JavaScript to the client, since server components are executed on the server. This means that while in SSR the JavaScript code is executed on the client, with server components the code is executed directly on the server, and the client only receives the output of the code through the server payload, like a baby getting its food pre-chewed.

So, in a nutshell, the difference between server components and server-side rendering is all about when and how the code is executed. In the case of server components, there is no need to send or handle any additional code on the client side, because we already ran it on the server. The only code needed is the glue when you're tying server and client components together. The big benefit of server components is higher performance: they need less client-side JavaScript and offload all the processing and data retrieval to the server, while the client can relax and sip an Aperol Spritz until the rendered HTML arrives. At that point, the client simply displays it to the end user and takes all the credit for the server's hard work.

And while this may seem a bit complex right now, it will all become clearer as you learn more about server components. Which is something that can't really be avoided, as they are becoming more and more popular in use cases like:

- Heavy calculations that need a lot of processing
- Private API access (to keep secret things... secret)
- Pages where most of the client side is already static and it would be a waste to send more JavaScript

Conclusion

While server components may seem daunting right now, they are mostly just something new and unfamiliar. Once you get to know them better and learn the basics, you'll realize they're not as complex as they seem. The learning curve is not as steep as you might think, and just because they have "server" in the name doesn't mean they have to be something strange and cryptic that we frontend developers should stay away from. In fact, they are quite similar to client components, and even more lightweight, as they lack things like state hooks.

Predictions 🧞‍♀️💻 2024

2023-12-04
programming, web-development, discuss

Some points you should pay attention to in 2024 that will surely have a high impact on the technology ecosystem.

- Bun achieves its goal of becoming the default frontend runtime: they still have some hurdles to overcome, but if they manage to provide a drop-in replacement for Node that instantly improves your application's performance 10x, it will be an obvious choice for most developers. The v1.0 release last September was a major step towards overall Windows compatibility and stability, and the bet is that Bun will start becoming the default choice this year.
- AI will replace no-code/low-code tools: it turns out that AI is much better and faster at creating marketing analytics dashboards. Tools like Basedash and 8base already use AI to create a full set of requirements and custom internal tools, and others will emerge to replace drag-and-drop builders for sites that don't rely heavily on business logic.
- Netlify is acquired by GoDaddy: with multiple rounds of layoffs, 2023 was clearly not Netlify's best year. But sometimes the best way to recover is to find a new ~~sugar daddy~~, GoDaddy. After retiring the Media Temple brand a few months ago, it looks like GoDaddy might be back in the acquisition market for a platform like Netlify.

Any other predictions you can think of? Help me update this post!

Clone an Object in JavaScript - 4 Best Ways

2023-06-18
javascriptprogrammingweb-development

Cloning an object in JavaScript means creating an identical copy of an existing object: the new object will have the same values for each property, but it will be a different object.

Why is it important to clone an object in JavaScript?

Cloning an object preserves the original state of an object and prevents the propagation of changes. This is especially useful when multiple instances of an object are shared between various parts of the application. Cloning the original object ensures that the other instances are not affected by changes made to it. It also allows developers to use a single instance of the object and create a unique copy for each part of the application, avoiding having to create a new instance every time, which saves time and resources.

How to clone an object?

To clone a JavaScript object correctly, you have 4 different options:

- Use the structuredClone() function
- Use the spread operator
- Call the Object.assign() function
- Use JSON parsing

const data = { name: "khriztianmoreno", age: 33 };

// 1
const copy1 = structuredClone(data);
// 2
const copy2 = { ...data };
// 3
const copy3 = Object.assign({}, data);
// 4
const copy4 = JSON.parse(JSON.stringify(data));

1. structuredClone()

Creates a deep clone of a given value using the structured cloning algorithm.

const original = {
  someProp: "with a string value",
  anotherProp: {
    withAnotherProp: 1,
    andAnotherProp: true,
  },
};

const myDeepCopy = structuredClone(original);

2. Spread operator

The spread operator (...) allows you to spread an object or array into a list of individual elements, and can be used to create a copy of an object.

const original = { name: "khriztianmoreno", age: 33 };
const clone = { ...original };

In the example above, a new clone object is created from the values of the original object. It is important to note that this only makes a shallow copy: if the original object contains objects or arrays within it, these will not be cloned; the clone will hold references to the same nested values.

It can also be used to clone arrays as follows:

const original = [1, 2, 3, 4];
const clone = [...original];

3. Object.assign

The Object.assign() function is another way to clone objects in JavaScript. The function takes a target object as its first argument and one or more source objects as additional arguments. It copies enumerable properties from the source objects to the target object. Like the spread operator, it only makes a shallow copy.

const original = { name: "khriztianmoreno", age: 33 };
const clone = Object.assign({}, original);

4. JSON parsing

The JSON.stringify() method converts an object into a JSON string, while the JSON.parse() method converts a JSON string back into a JavaScript object. Combining them produces a deep copy, but only for JSON-serializable data: functions, undefined values, and objects like Date are lost or transformed along the way.

const original = { name: "khriztianmoreno", age: 33 };
const clone = JSON.parse(JSON.stringify(original));

Conclusions

Advantages of cloning an object in JavaScript

Object cloning allows the developer to create a copy of an existing object without having to redefine all of its values, saving the time and effort of recreating it from scratch. It also allows the developer to create a modified or improved version of an existing object, for example by changing values or adding new properties to customize or extend its functionality. Finally, cloning offers a way to back up objects: if the existing object is affected by a software failure, the developer can use the backup to recover the original values.

Disadvantages of cloning an object in JavaScript

Cloning an object in JavaScript can be a useful technique, but there are also some drawbacks to consider. The first is execution time: cloning objects can be a slow process, especially if the object is large, which can lead to a poor user experience if you clone while the application is running. Another disadvantage is that the shallow techniques do not clone complex objects properly: objects that contain references to other objects keep pointing at the same nested values, and JSON-based cloning silently drops anything that is not JSON-serializable. Lastly, cloning objects that contain sensitive information can lead to security issues: the clone contains the same information as the original, which can be a risk if you share it with other users.

That's all folks! I hope this article helps you understand the different options we have when it comes to cloning an object/array in JavaScript.

@khriztianmoreno

PS: This article was written entirely using Artificial Intelligence
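To see the shallow/deep distinction concretely, here is a small runnable sketch. Note that structuredClone() requires Node 17+ or a modern browser:

```javascript
// Shallow vs. deep copy of an object with a nested array.
const original = { name: "khriztianmoreno", skills: ["js", "react"] };

// Spread makes a shallow copy: the nested array is SHARED.
const shallow = { ...original };
shallow.skills.push("node"); // also visible through `original`

// structuredClone makes a deep copy: nested values are independent.
const deep = structuredClone(original);
deep.skills.push("graphql"); // does NOT affect `original`

console.log(original.skills); // ["js", "react", "node"]
console.log(deep.skills); // ["js", "react", "node", "graphql"]
```

The same sharing behavior applies to Object.assign(), since it copies only the top level of properties.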

How to use Call, Apply and Bind methods in javascript

2023-05-12
javascriptweb-developmentprogramming

In this article, we'll look at what the call, apply, and bind methods are in JavaScript and why they exist.

Before we jump in, we need to know what this is in JavaScript; in this post you can go a little deeper.

Call, Apply and Bind

In JavaScript, all functions have access to a special keyword called this; the value of this points to the object on which the function is executed.

What are these call, apply and bind methods? To put it simply, all of these methods are used to change the value of this inside a function. Let us understand each method in detail.

call()

Using the call method, we can invoke a function, passing a value that will be treated as this within it.

const obj = {
  myName: "khriztianmoreno",
  printName: function () {
    console.log(this.myName);
  },
};

obj.printName(); // khriztianmoreno

const newObj = {
  myName: "mafeserna",
};

obj.printName.call(newObj); // mafeserna

In the example above, we invoke the call method on the printName function, passing newObj as a parameter, so now this inside printName points to newObj; hence this.myName prints mafeserna.

How to pass arguments to functions? The first argument of the call method is the value pointed to by this inside the function; to pass additional arguments to that function, we can start passing them from the second argument of the call method onward.

function foo(param1, param2) {}
foo.call(thisObj, arg1, arg2);

where foo is the function we are calling with the new this value thisObj, and arg1, arg2 are the additional arguments the foo function will receive (param1 = arg1, param2 = arg2).

apply()

The apply function is very similar to the call function. The only difference between call and apply is how the arguments are passed:

- call: we pass arguments as individual values, starting from the second argument
- apply: additional arguments are passed as an array

function sayHello(greet, msg) {
  console.log(`${greet} ${this.name} ! ${msg}`);
}

const obj = {
  name: "khriztianmoreno",
};

// Call
sayHello.call(obj, "Hello", "Good Morning");
// Hello khriztianmoreno ! Good Morning

// Apply
sayHello.apply(obj, ["Hello", "Good Morning"]);
// Hello khriztianmoreno ! Good Morning

In the example above, both the call and apply methods on the sayHello function do the same thing; the only difference is how we pass the additional arguments.

bind()

Unlike the call and apply methods, bind does not invoke the function directly. Instead, it changes the this value inside the function and returns the modified function instance, which we can invoke later.

function sayHello() {
  console.log(this.name);
}

const obj = { name: "khriztianmoreno" };

// it won't invoke; it just returns a new function instance
const newFunc = sayHello.bind(obj);

newFunc(); // khriztianmoreno

Passing additional arguments in bind works similarly to the call method: we can pass additional arguments as individual values, starting from the second argument of the bind method.

function sayHello(greet) {
  console.log(`${greet} ${this.name}`);
}

const obj = { name: "khriztianmoreno" };
const newFunc = sayHello.bind(obj, "Hello");

newFunc(); // Hello khriztianmoreno

In the case of the bind method, we can pass additional arguments in two ways: while calling the bind method itself, along with the value of this, or while invoking the function returned by bind. Either way works identically, with no difference in functionality.

function sayHello(greet) {
  console.log(`${greet} ${this.name}`);
}

const obj = { name: "khriztianmoreno" };

const newFunc1 = sayHello.bind(obj, "Hello");
newFunc1(); // Hello khriztianmoreno

const newFunc2 = sayHello.bind(obj);
newFunc2("Hello"); // Hello khriztianmoreno

NOTE: if we don't pass any value, or we pass null, while calling the call, apply, or bind methods, then this inside the called function points to the global object (in non-strict mode).

function sayHello() {
  // executing in a browser environment
  console.log(this === window);
}

sayHello.call(null); // true
sayHello.apply(); // true
sayHello.bind()(); // true

We cannot use the call, apply, and bind methods on arrow functions to change the value of this, because arrow functions do not have their own this context: the this inside an arrow function points to the enclosing function in which it is defined. Therefore, applying these methods to an arrow function has no effect.

That's all folks! I hope this article helped you understand what the call(), apply(), and bind() methods are in JavaScript!

@khriztianmoreno
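Beyond changing this on your own objects, two classic practical uses are borrowing built-in methods with call and partial application with bind. A small sketch (the function names here are just for the example):

```javascript
// Borrowing a method with call: `arguments` is array-like but has no
// array methods, so we invoke Array.prototype.reduce with it as `this`.
function sum() {
  return Array.prototype.reduce.call(arguments, (acc, n) => acc + n, 0);
}

console.log(sum(1, 2, 3)); // 6

// Partial application with bind: pre-fill the first argument.
// We pass null as `this` because multiply doesn't use it.
function multiply(a, b) {
  return a * b;
}

const double = multiply.bind(null, 2);
console.log(double(5)); // 10
```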

Introduction to .env files

2023-03-09
javascriptweb-developmentprogramming

Imagine having to pay nearly $148 million for a data breach that exposed the data of some 57 million users 😱😰. Well, that's what happened to Uber not long ago, and the culprit was none other than a hard-coded secret published openly for any bad actor to exploit. That's why in this post we're going to learn what .env files are and how we can work them into our JavaScript projects.

Context

Today, millions of software developers keep their secrets (i.e. credentials such as access keys and tokens to services used by programs) safe with .env files. For those unfamiliar with the topic, .env files were introduced in 2012 as part of a solution to the hard-coded secret problem mentioned above. Instead of sending secrets along with their codebase to the cloud, developers could now send their codebase to the cloud and keep their secrets separated on their machines, in key-value format, in .env files; this separation reduced the risk of bad actors getting their hands on sensitive credentials in the cloud. To run programs, developers just need to pull the latest codebase from the remote repository and inject the secrets contained in their local .env files into the pulled code.

Unless a development team is small and doesn't care much about DevOps, it typically maintains multiple "environments" for its codebase to ensure that changes are well tested before being pushed to production to interact with end users. With multiple environments, developers may choose to employ multiple .env files to store credentials, one for each environment (for example, one .env file holding development database keys and another holding production database keys).

To summarize, .env files contain credentials in key-value format for the services used by the program being built. They are meant to be stored locally and not uploaded to online code repositories for everyone to read.
Each developer on a team typically maintains one or more .env files for each environment.

Usage

In this section, we'll look at how to use a .env file in a basic project, assuming you're using Node.js and git for version control; the idea applies to other languages as well. Feel free to skip this section if you're not interested in the technicalities.

To get started, head to the root of your project folder and create an empty .env file, then add the credentials you'd like to inject into your codebase. It might look something like this:

SECRET_1=924a137562fc4833be60250e8d7c1568
SECRET_2=cb5000d27c3047e59350cc751ec3f0c6

Next, you'll want to ignore the .env file so that it doesn't get committed to git. If you haven't already, create a .gitignore file. It should look something like this:

.env

Now, to inject the secrets into your project, you can use a popular module like dotenv; it will parse the .env file and make your secrets accessible within your codebase under the process object. Go ahead and install the module:

npm install dotenv

Import the module at the top of the startup script for your codebase:

require('dotenv').config();

That's it! You can now access secrets anywhere in your codebase:

// display the value of SECRET_1 in your code
console.log(process.env.SECRET_1);
// -> 924a137562fc4833be60250e8d7c1568

// display the value of SECRET_2 in your code
console.log(process.env.SECRET_2);
// -> cb5000d27c3047e59350cc751ec3f0c6

Excellent. You have successfully added a .env file to your project with some secrets and accessed those secrets in your codebase.
In addition, when you push your code via git, your secrets stay on your machine.

Challenges

While simple and powerful, .env files can be problematic when managed incorrectly in the context of a larger team. Imagine having to distribute and track hundreds of keys across your software development team. On a simplified level, between Developer_1 and Developer_2, here's what could happen:

- Developer_1 could add an API key to their local .env file and forget to tell Developer_2 to add it to theirs; this costs Developer_2 15 minutes down the road debugging why their code is crashing, only to realize it's because of the missing API key.
- Developer_2 could then ask Developer_1 to send them the API key so they can add it to their .env file, and Developer_1 might send it via text or email; this unnecessarily puts your organization at risk of bad actors waiting precisely to intercept the API key.

Unfortunately, these challenges are common and even have a name: secret sprawl. Over the past few years, many companies have attempted to solve this problem. HashiCorp Vault is a product that securely stores secrets for large enterprises; however, it is too expensive, cumbersome, and downright overkill to set up for the average developer who just needs a fast and secure way to store secrets. Simpler solutions exist, such as Doppler and the new dotenv-vault, but they often lack the security infrastructure needed to gain mass adoption.

Let me know in the comments what tools or services you use to easily and safely solve secret sprawl.

That's all folks! I hope this helps you become a better dev!

@khriztianmoreno
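As a rough sketch of what a library like dotenv does under the hood, here is a minimal KEY=VALUE parser. This is a simplification for illustration only: the real library also handles quoting, comments, multiline values, and merges the result into process.env.

```javascript
// Minimal .env parsing: split lines, match KEY=VALUE pairs, and collect
// them into an object (dotenv merges them into process.env instead).
function parseEnv(src) {
  const result = {};
  for (const line of src.split("\n")) {
    const match = line.match(/^\s*([\w.-]+)\s*=\s*(.*)$/);
    if (match) {
      result[match[1]] = match[2].trim();
    }
  }
  return result;
}

const parsed = parseEnv(
  "SECRET_1=924a137562fc4833be60250e8d7c1568\nSECRET_2=cb5000d27c3047e59350cc751ec3f0c6"
);

console.log(parsed.SECRET_1); // 924a137562fc4833be60250e8d7c1568
```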

Some methods beyond console.log in Javascript

2023-02-17
javascriptweb-developmentprogramming

Often, during debugging, JavaScript developers tend to use the console.log() method to print values. But there are other console methods that make life much easier. Want to know what these methods are? Let's get to know them!

1. console.table()

Displaying long arrays or objects is a headache with console.log(), but console.table() gives us a much more elegant way to do it.

// Matrix
const matrix = [
  ["apple", "banana", "cherry"],
  ["Rs 80/kg", "Rs 100/kg", "Rs 120/kg"],
  ["5 ⭐", "4 ⭐", "4.5 ⭐"],
];
console.table(matrix);

// Objects
class Person {
  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }
}

const family = {};
family.mother = new Person("Jane", "Smith");
family.father = new Person("John", "Smith");
family.daughter = new Person("Emily", "Smith");
console.table(family);

2. console.trace()

Having trouble debugging a function? Wondering how the execution flows? console.trace() is your friend!

function outerFunction() {
  function innerFunction() {
    console.trace();
  }
  innerFunction();
}
outerFunction();

3. console.error() and console.warn()

Tired of boring logs? Spice things up with console.error() and console.warn()!

console.error("This is an error message");
console.warn("This is a warning message");
console.log("This is a log message");

4. console.assert()

This is another brilliant debugging tool! If the assertion fails, the console prints the failure message.

function func() {
  const a = -1;
  console.assert(a === -1, "a is not equal to -1");
  console.assert(a >= 0, "a is negative");
}
func();

5. console.time(), console.timeEnd(), and console.timeLog()

Need to check how long something is taking? Timer methods are there to rescue you!

console.time("timeout-timer");

setTimeout(() => {
  console.timeEnd("timeout-timer");
}, 1000);

setTimeout(() => {
  console.timeLog("timeout-timer");
}, 500);

NOTE: setTimeout callbacks are not executed exactly at the scheduled delay, which results in a slight deviation from the expected times.

That's all folks! I hope this helps you become a better dev!

@khriztianmoreno on Twitter and GitHub

A journey to Single Page Applications through React and JavaScript

2022-09-18
javascriptweb-developmenttalks

Last June, I had the opportunity to share again in person with the MedellinJS community, which I appreciate very much.

This time, I talked about the JavaScript concepts that are necessary to understand before working with a Single Page Application (SPA).

<iframe width="560" height="315" src="https://www.youtube.com/embed/6opIHgRqWPo?si=xFM0sbf6w8qKQkR3" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

I hope this was helpful and/or taught you something new!

@khriztianmoreno

A better way to build React component libraries

2022-07-05
reactjavascriptweb-development

Today we'll quickly go over four programming patterns that apply to shared components in React. Using these allows you to create a well-structured shared component library. The benefit you get is that developers in your organization can easily reuse components across numerous projects. You and your team will be more efficient.

Common Patterns

In this post, I show you four API patterns that you can use with all your shared components:

- JSX children pass-through
- React forwardRef API
- JSX prop-spreading with TypeScript
- Opinionated prop defaults

Pattern 1: JSX Children Pass-Through

React provides the ability to compose elements using the children prop. The shared component design leans heavily on this concept. Allowing consumers to provide the children whenever possible makes it easier for them to provide custom content and other components. It also helps align component APIs with those of native elements.

Let's say we have a Button component to start with. We allow our Button component to render its children, like this:

// File: src/Button.tsx
export const Button: React.FC = ({ children }) => {
  return <button>{children}</button>;
};

The definition of React.FC already includes children as a valid prop. We pass it directly to the native button element.

Here is an example using Storybook to provide content to the Button.

// File: src/stories/Button.stories.tsx
const Template: Story = (args) => (
  <Button {...args}>my button component</Button>
);

Pattern 2: forwardRef API

Many components have a one-to-one mapping to an HTML element. To allow consumers to access that underlying element, we provide a ref prop using the React.forwardRef() API. Providing a ref is not necessary for day-to-day React development, but it is useful within shared component libraries: it allows advanced functionality, such as positioning a tooltip relative to our Button with a positioning library.

Our Button component renders a single HTMLButtonElement (button). We provide a reference to it with forwardRef():

// File: src/buttons/Button.tsx
export const Button = React.forwardRef<HTMLButtonElement, React.PropsWithChildren>(
  ({ children }, ref) => {
    return <button ref={ref}>{children}</button>;
  }
);
Button.displayName = "Button";

To help TypeScript consumers understand what element is returned from the ref object, we provide a type variable that represents the element we are passing it to, HTMLButtonElement in this case.

Pattern 3: JSX Prop-Spreading

Another pattern that increases component flexibility is prop-spreading. It allows consumers to treat our shared components as drop-in replacements for their native counterparts during development.

Prop-spreading helps with the following scenarios:

- Providing accessible props for certain content
- Adding custom data attributes for automated testing
- Using a native event that is not defined in our props

Without prop-spreading, each of the above scenarios would require explicit attributes to be defined. Prop-spreading helps ensure that our shared components remain as flexible as the native elements they use internally.

Let's add prop-spreading to our Button component.

// File: src/buttons/Button.tsx
export const Button = React.forwardRef<
  HTMLButtonElement,
  React.ComponentPropsWithoutRef<'button'>
>(({ children, ...props }, ref) => {
  return (
    <button ref={ref} {...props}>
      {children}
    </button>
  );
});

We can reference our remaining props with the spread syntax and apply them to the button. React.ComponentPropsWithoutRef is a type utility that documents the valid props of a button element for our TypeScript consumers.

Some examples of this type checking in action:

// Pass - e is typed as
// `React.MouseEvent<HTMLButtonElement>`
<Button onClick={(e) => { console.log(e) }} />

// Pass - aria-label is typed
// as `string | undefined`
<Button aria-label="My button" />

// Fail - type "input" is not
// assignable to `"button" |
// "submit" | "reset" | undefined`
<Button type="input" />

Pattern 4: Opinionated Defaults

For certain components, you may want to map default attributes to specific values. Whether to reduce bugs or improve the developer experience, providing a set of default values is specific to an organization or team. If you find the need to default certain props, you should ensure that it is still possible for consumers to override those values if necessary.

A common complexity encountered with button elements is the default type value, "submit". This default type often accidentally submits surrounding forms and leads to difficult debugging scenarios. Here's how we default the type attribute to "button". Let's update the Button component to return a button with the updated type:

// File: src/buttons/Button.tsx
return (
  <button ref={ref} type="button" {...props}>
    {children}
  </button>
);

By placing the default props before the prop spread, we ensure that any value provided by consumers is prioritized.

Look at some open source libraries

If you're building a component library for your team, take a look at the most popular open source component libraries to see how they use the patterns above. Here's a list of some of the top open source React component libraries to look into:

- Ant Design
- Rainbow UI
- Grommet

@khriztianmoreno

Until next time!
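The ordering rule in Pattern 4 is just JavaScript object-spread semantics, which we can verify (outside of JSX) with plain objects:

```javascript
// Props a consumer might pass to <Button {...consumerProps} />.
const consumerProps = { type: "submit", "aria-label": "Save" };

// Default listed BEFORE the spread: the consumer's value wins.
const overridableDefault = { type: "button", ...consumerProps };
console.log(overridableDefault.type); // "submit"

// Default listed AFTER the spread: the default always wins,
// silently discarding the consumer's value (usually not what you want).
const forcedDefault = { ...consumerProps, type: "button" };
console.log(forcedDefault.type); // "button"
```

JSX attributes compile to an object merge with exactly these semantics, which is why the defaults-before-spread placement matters.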

NPM dependencies vs devDependencies

2022-06-20
javascriptweb-developmentnpm

tl;dr: dependencies are required by our application at runtime. Packages like react, redux, and lodash are examples of dependencies. devDependencies are only required to develop or build your application. Packages like babel, enzyme, and prettier are examples of devDependencies.

npm install

The real difference between dependencies and devDependencies is seen when you run npm install.

If you run npm install from a directory containing a package.json file (which you normally do after cloning a project, for example):

✅ All packages listed in dependencies will be installed
✅ All packages listed in devDependencies will be installed

If you run npm install <package-name> (which you normally do when you want to add a new package to an existing project), e.g. npm install react:

✅ That package and all packages listed in its dependencies will be installed
❌ None of the packages listed in its devDependencies will be installed

Transitive dependencies

If package A depends on package B, and package B depends on package C, then package C is a transitive dependency of package A. What that means is that for package A to run properly, it needs package B installed; and for package B to run properly, it needs package C installed. Why do I mention this? Well, dependencies and devDependencies treat transitive dependencies differently. When npm installs a package, it recursively installs that package's dependencies (and their dependencies, and so on), but it never installs the devDependencies of your dependencies: those are only needed by the people developing that package, not by the people consuming it.

Specifying dependencies vs. devDependencies

Starting with npm 5, when you run npm install <package-name>, that package is automatically saved within dependencies in your package.json file. If you want the package to be included in devDependencies instead, add the --save-dev flag:

npm install prettier --save-dev

Installing on a production server

Often, you will need to install your project on a production server.
When you do that, you will not want to install devDependencies, since you obviously won't need them on your production server. To install only the dependencies (and not devDependencies), you can use the --production flag (or, in newer versions of npm, --omit=dev):

npm install --production

I hope this was helpful and/or made you learn something new!

@khriztianmoreno
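For reference, this is how the split typically looks in a hypothetical package.json; the package names and version ranges below are illustrative only:

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "react": "^18.2.0",
    "lodash": "^4.17.21"
  },
  "devDependencies": {
    "prettier": "^2.8.8",
    "jest": "^29.5.0"
  }
}
```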

A practical handbook on JavaScript module systems

2022-06-12
javascriptweb-development

Today I'll give you a practical introduction to the module systems we use when building libraries in JavaScript. As a web application or library grows and more features are added, modularizing the code improves readability and maintainability. This quick guide will give you an incisive look at the options available for creating and consuming modules in JavaScript. If you've ever wondered what the pros and cons of AMD, ESM, or CommonJS are, this guide will give you the information you need to confidently choose among all the options.

A history of JavaScript modules

With no built-in native functions for namespaces and modules in early versions of the JavaScript language, different module formats have been introduced over the years to fill this gap. The most notable ones, which I'll show you how to use in your JavaScript code below, are:

- Immediately Invoked Function Expression (IIFE)
- CommonJS (CJS)
- Asynchronous Module Definition (AMD)
- Universal Module Definition (UMD)
- ECMAScript Modules (ESM)

The selection of a module system is important when developing a JavaScript library. For library authors, the choice of module system affects user adoption and ease of use, so you will want to be familiar with all the possibilities.

1. Immediately Invoked Function Expression (IIFE)

One of the earliest forms of exposing libraries in the web browser, Immediately Invoked Function Expressions (IIFEs) are anonymous functions that are executed immediately after being defined.

(function () {
  // Module's implementation code
})();

A common design pattern that leverages IIFEs is the Singleton pattern, which creates a single object instance and namespaces code. This object serves as a single point of access to a specific set of functions. For real-world examples, look no further than the Math object or the jQuery library.

Pros

Writing modules this way is convenient and compatible with older browsers.
In fact, you can safely concatenate and bundle multiple files containing IIFEs without worrying about naming and scope collisions.

Cons

However, IIFE modules are loaded synchronously, which means that properly ordering module files is critical; otherwise, the application will break. For large projects, IIFE modules can be difficult to manage, especially if you have a lot of overlapping and nested dependencies.

2. CommonJS (CJS)

Node.js's default module system, CommonJS (CJS), uses the require syntax for importing modules and the module.exports and exports syntax for default and named exports, respectively. Each file represents a module, and all local variables of the module are private, since Node.js wraps the module inside a function wrapper.

For example, this module...

const { PI, pow } = Math;

function calculateArea(radius) {
  return PI * pow(radius, 2);
}

module.exports = calculateArea;

...becomes:

(function (exports, require, module, __filename, __dirname) {
  const { PI, pow } = Math;

  function calculateArea(radius) {
    return PI * pow(radius, 2);
  }

  module.exports = calculateArea;
});

Not only does the module keep its variables in private scope, it also has access to exports, require, and module. __filename and __dirname are module-scoped and contain the filename and directory name of the module, respectively. The require syntax allows you to import built-in Node.js modules or locally installed third-party modules.

Pros

CommonJS require statements are synchronous, meaning that CommonJS modules are loaded synchronously. As long as there is a single entry point for the application, CommonJS automatically knows how to order modules and handle circular dependencies.

Cons

Like IIFEs, CommonJS was not designed to generate small bundles. Package size was not considered in its design, as CommonJS is primarily used to develop server-side applications. For client-side applications, code must be downloaded first before running.
The lack of tree shaking makes CommonJS a suboptimal module system for client-side applications.3. Asynchronous Module Definition (AMD)Unlike IIFE and CommonJS, Asynchronous Module Definition (AMD) loads modules and their dependencies asynchronously. Originating from the Dojo Toolkit, AMD is designed for client-side applications and requires no additional tools. In fact, all you need to run applications following the AMD module format is the RequireJS library, an in-browser module loader. That's it. Here's a simple example that runs a simple React application, structured with AMD, in the browser.<!-- index.html --> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>React + AMD</title> </head> <body> <div id="root"></div> <script type="text/javascript" src="https://cdnjs.cloudflare.com /ajax/libs/require.js/2.3.6 /require.min.js" ></script> <script type="text/javascript" src="main.js"></script> </body> </html>Here's what the JavaScript looks like.// main.js requirejs.config({ paths: { react: "https://unpkg.com/react@15.3.2 /dist/react", "react-dom": "https://unpkg.com /react-dom@15.3.2 /dist/react-dom", }, }); requirejs( ["react", "react-dom"], (React, ReactDOM) => { ReactDOM.render( React.createElement( "p", {}, "Greetings!" ), document.getElementById("root") ); } );Calling the requirejs or define methods registers the factory function (the anonymous function passed as the second argument to these methods). AMD runs this function only after all dependencies have been loaded and executed.ProsAMD allows multiple modules to be defined within a single file and is compatible with older browsers.ConsAMD is not as popular as more modern module formats such as ECMAScript modules and Universal Module Definition.4. 
4. Universal Module Definition (UMD)

For libraries that support both client-side and server-side environments, the Universal Module Definition (UMD) offers a unified solution for making modules compatible with many different module formats, such as CommonJS and AMD.

Here's UMD in action, from the React development library:

```js
(function (root, factory) {
  if (typeof define === "function" && define.amd) {
    // Checks for RequireJS's `define` function.
    // Register as an anonymous module.
    define(["exports"], factory);
  } else if (
    typeof exports === "object" &&
    typeof exports.nodeName !== "string"
  ) {
    // Checks for CommonJS.
    // Calls the module factory immediately.
    factory(exports);
  } else {
    // Register browser globals.
    global = root || self;
    factory((global.React = {}));
  }
})(this, function (exports) {
  "use strict";
  // Place React's module code here.
  // ...
});
```

If the IIFE detects a define function in the global scope with an amd property, it runs the module as an AMD module. If it instead detects an exports object whose nodeName property is not a string (which rules out a DOM element named "exports"), it runs the module as a CommonJS module. Otherwise, it registers the module on the browser's global object.

Pros

Regardless of whether an application consumes your library as a CommonJS, AMD, or IIFE module, UMD conditionally checks the module format in use at runtime and executes the code specific to the detected format.

Cons

The UMD template code is an intimidating-looking IIFE and is initially challenging to use. However, UMD itself is not conceptually complicated.

5. ECMAScript Modules (ESM)

ECMAScript Modules (ESM), the most recently introduced module format, is the standard and official way of handling modules in JavaScript.
This module format is also commonly used in TypeScript applications.

Like CommonJS, ESM provides several ways to export code: default exports or named exports.

```js
// circle.js
export function calculateArea(radius) {
  return Math.PI * Math.pow(radius, 2);
}

export function calculateCircumference(radius) {
  return 2 * Math.PI * radius;
}
```

Importing these named exports separately tells the module bundler which parts of the imported module should be included in the generated code. Any named exports that are not imported are skipped. This reduces the library size, which is useful if your library relies on only a few methods from a large utility library like Lodash.

Now, in some file in the same directory as ./circle.js, we can consume the module as follows.

```js
import { calculateArea, calculateCircumference } from "./circle.js";

console.log(calculateArea(5));
console.log(calculateCircumference(5));
```

Pros

Module bundlers support ESM and optimize code using techniques like tree shaking (removing unused code from the final bundle), which other module formats do not support. Module loading and parsing are asynchronous, but module execution is synchronous.

Cons

This is the newest core module system. As such, some libraries have not yet adopted it.

Building your own React/JavaScript library

As you can imagine, choosing the right module system becomes important when building your own React library. Personally, with tools like Babel we could nowadays work with ECMAScript Modules everywhere, but I am a proponent of using CommonJS in Node and ECMAScript Modules (ESM) on the frontend.

@khriztianmoreno

Introduction to Volta, the fastest way to manage Node environments

2022-05-27
javascript, web-development, programming

Volta is a tool that opens up the possibilities for a smoother development experience with Node.js. This is especially relevant when working in teams. Volta allows you to automate your Node.js development environment: it lets your team use the same consistent versions of Node and other dependencies. Even better, it lets you keep versions consistent across production and development environments, eliminating the subtle bugs that come with version mismatches.

Volta eliminates “It works on my machine...” problems.

Version mismatches cause headaches when developing in teams. Let's assume this scenario:

Team X built their application on local machines running Node 10, but the build pipeline defaulted to the lowest version of Node it had on hand, Node 6, and the application would not start in production. They had to revert the deployment and figure out what went wrong; it turned into a very long night.

If they had used Volta, this could have been avoided.

How does Volta work?

Volta is “a simple way to manage your JavaScript command line tools”.
It makes managing Node, npm, Yarn, and other JavaScript executables shipped as part of packages really easy.

Volta has a lot in common with tools like NVM, but NVM is not the easiest to set up initially and, more importantly, the developer using it still has to remember to switch to the correct version of Node for the project they are working on.

Volta, on the other hand, is easy to install and takes the thinking out of the equation: once Volta is configured in a project and installed on a local machine, it will automatically switch to the appropriate versions of Node.

Not only that, it also lets you define Yarn and npm versions in a project, and if the version of Node defined in a project is not downloaded locally, Volta will go out and download the appropriate version.

But when you switch to another project, Volta will either fall back to that project's presets or revert to the default environment variables.

Volta in action

Let's give Volta a spin. First, create a new React application with Create React App by running the following command from a terminal.

```sh
npx create-react-app volta-sample-app
```

Once you have created your new React application, open the code in an IDE and start it from the command line.

```sh
npm run start
```

If all goes according to plan, you will see a rotating React logo when you open a browser at http://localhost:3000/.

Now that we have an application, let's add Volta.

Download Volta locally

To install Volta, run the following command:

```sh
curl https://get.volta.sh | bash
```

If you are on Windows, download and run the Windows installer and follow the instructions.

Define your environment variables

Before we add our Volta-specific versions of Node and npm to our project, let's see what the default environment variables are.

Get a reference reading

In a terminal at the root of your project, run the following command.

```sh
node -v && npm -v
```

For me, the default versions of Node and npm are v14.18.1 and v6.14.15, respectively.

With our baseline set, we can change our versions
just for this project with the help of Volta.

Setting a Node.js version

We'll start with Node. Since v16 is the current version of Node, let's add that to our project.

In our project, at the root level where our package.json file lives, run the following command.

```sh
volta pin node@16
```

Using volta pin [JS_TOOL]@[VERSION] pins that particular JavaScript tool at the specified version in our application's package.json. After committing this to our repository with git, any future developer using Volta to manage dependencies will be able to read it from the repository and use exactly the same version.

With Volta we can be as specific or as general as we want when defining versions, and Volta will fill in any gaps. I specified the major version of Node that I wanted (16), and Volta filled in the minor and patch versions for me.

After pinning, you will see the following success message in your terminal: pinned node@16.11.1 in package.json.

Tip: make your pinned version of Node match the version of Node on your build server.

Setting an npm version

Now let's tackle our npm version. Still at the root of our project, run this command in the terminal:

```sh
volta pin npm
```

Without a version specified, Volta defaults to the latest LTS release to add to our project. The current LTS version for npm is 8, so our project now has npm v8.1.0 as its default version.

Verify the package.json

To confirm that the new versions of the JavaScript environment are part of our project, check the application's package.json file. Scroll down and you should see a new property named “volta”.
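After both pin commands, the relevant part of package.json looks roughly like this (the exact patch versions will be whatever Volta resolved at pin time; the numbers below mirror the ones from this walkthrough):

```json
{
  "name": "volta-sample-app",
  "volta": {
    "node": "16.11.1",
    "npm": "8.1.0"
  }
}
```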
Inside the “volta” property there should be a “node”: “16.11.1” and an “npm”: “8.1.0” entry.

From now on, any developer who has Volta installed on their machine and clones this repository will automatically switch to these particular versions of Node and npm.

To be doubly sure, you can re-run the command we used before pinning our versions with Volta to see how our current development environment is configured.

```sh
node -v && npm -v
```

After that, your terminal should tell you that you are using those same versions: Node.js v16 and npm v8.

Watch the magic happen

Now you can sit back and let Volta take care of things for you. If you want to see what happens when nothing is specified for Volta, try going up one level from the root of your project and checking your Node and npm versions again.

Let's open two terminals side by side: the first inside our project with its pinned Volta versions, the other one level higher in our folder structure. Now run the following command in both of them:

```sh
node -v && npm -v
```

Inside our project, Node v16 and npm v8 are running, but outside the project, Node v14 and npm v6 are present. We did nothing more than change directories, and Volta took care of the rest.

By using Volta, we took the guesswork out of our JavaScript environment versions and actually made it harder for a member of the development team to use the wrong versions than the right ones.

Predictions 🧞‍♀️💻 2022

2022-01-04
programming, web-development, discuss

Some points you should pay attention to in 2022 that will surely have a high impact on the technology ecosystem.

RIP Babel and webpack: They will not disappear forever, but they will be largely replaced by new compiler tools that are faster and more intuitive, such as SWC, esbuild, and Vite.

Serverless will help frontend developers become (real) fullstack developers: and (hopefully) get paid accordingly. Much of serverless technology is based on V8 and is adopting Web APIs, so frontend developers will already be familiar with the key parts of the serverless infrastructure. Now, instead of spinning up an Express server and calling yourself a “fullstack developer”, serverless will allow you to actually be one.

Next.js will become less of a React meta-framework and more of a web meta-framework: Vercel has already hired Rich Harris (aka Lord of the Svelte) and has shared its plans for an edge-first approach to the web with any framework. It will lean even more into this in 2022, adapt to more JS frameworks/libraries (with pillowcases full of cash), and prepare for an IPO.

No-code/low-code tools will dominate even more: We will probably continue to ignore them; meanwhile, more agencies and teenagers will make millions of dollars shipping sites without writing a line of code. In 2022, we'll also start to see more established software companies with “real developers” leveraging no-code or low-code tools, because the best code is the code you don't have to maintain.

Meta will cede control of React: Just as it created the GraphQL Foundation in 2018, Meta will create a React Foundation later this year and cede control of React. Unlike Microsoft/Amazon/Google, Meta has never (successfully) monetized developers, so React is not a strategic priority for the company. That might be even more true now, with Zuck's eyes on the metaverse and Sebastian Markbåge leaving for Vercel.

VC will solve open-source funding: At least, it will feel that way.
With some pre-revenue/traction/PMF OSS projects raising seed rounds at valuations between $25-50MM, you'll want to dust off that old side project of yours. I don't know if it's sustainable (it's not), but it's a lot better than when we relied on Patreon to fund our critical web infrastructure.

Netlify will acquire Remix: Bottom-up frameworks are the wave. Netlify will want the distribution, and Remix will want the... money. It would allow the Remix team to spend their time on what they are good at, Remix-the-framework, rather than Remix-the-business. The pairing would give them both a much better chance of catching up with Vercel/Next.js.

While all that is going on...? We can continue to work quietly.

HEADLESS CMS - The best solution for content driven development

2020-03-10
javascript, headless-cms, web-development

As the world becomes more connected, an increasing number of companies are turning to content management systems to better engage with their customer base. We have all heard of WordPress, Drupal, Joomla, Sitecore, and Squarespace. However, many of these traditional CMS tools do not seem to keep up with the rapid evolution of technology. Their implementation and maintenance are costly, and they can present a significant number of security risks. They are also not very flexible, bogged down by layers of templates and framework constraints that can hinder the introduction of mobile functionality.

But there is a simple solution: go headless.

Integrate with any codebase

A relatively new concept, a headless CMS essentially removes the interface from the equation, allowing developers to integrate with any codebase. The focus is on the API and the backend technology used to store and deliver content to any device or website. The same editing capabilities are still available to users, but without all the views and responses that govern many traditional CMS approaches.

A headless CMS gives us a lot of freedom in how we implement the content itself.
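To make “the focus is on the API” concrete, here is a sketch of frontend code consuming content from a headless CMS. The endpoint URL and the response shape are hypothetical, since every CMS exposes its own API, and the JSON-fetching function is injected so any HTTP client can back it:

```js
// Sketch: rendering content delivered by a hypothetical headless CMS API.
// `fetchJson` is injected so the rendering logic stays independent of the
// HTTP client (fetch, axios, or a test stub).
async function renderPosts(fetchJson, endpoint = "https://cms.example.com/api/posts") {
  const posts = await fetchJson(endpoint); // assumed shape: [{ title, body }, ...]
  return posts
    .map((post) => `<article><h2>${post.title}</h2><p>${post.body}</p></article>`)
    .join("\n");
}

// Example with a stubbed client; in a real app this would wrap fetch():
const stubClient = async () => [
  { title: "Going headless", body: "The CMS only serves data." },
];

renderPosts(stubClient).then((html) => console.log(html));
```

The CMS stays responsible for storing and delivering the data; what the markup looks like is entirely up to the consuming application.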
We can have full control over the look of the final product, and no valuable time is wasted building templates from scratch. A traditional CMS requires a lot of time, while a headless CMS is relatively easy to deliver, since developers can generally find pre-made templates suitable for many variations of an online product.

When multiple applications consume the same API, it makes sense to extract that content into a real API of its own; this keeps each application simple and ensures they all work from the same data.

When to go for a headless CMS

Is there a time when a traditional CMS would be better than going headless? As with all software-related answers, it depends on the product, although a better question is whether you need a full CMS at all.

Many clients want some kind of CMS, especially for landing pages, which requires time and money. However, if you only plan to change your site's content once or twice a year, do you really need a CMS? Probably not. If, on the other hand, you have content that is constantly changing, like a news website, then your best solution would be a headless approach.

What are the benefits of a headless CMS? Is a traditional approach better for our projects? And is investing more money and time in a custom solution a better strategy? We will delve a bit more into the benefits of headless CMS in a future blog post. In the meantime, you can learn more about headless CMS at https://headlesscms.org and https://jamstack.org.

I hope this has been helpful and/or taught you something new!