
khriztianmoreno's Blog


Node.js Corepack: Version Control for Package Managers

2024-12-10
javascript, nodejs, tutorial

The Problem with Traditional Package Managers

For years, npm has been the de facto package manager for Node.js. While it offers robust features like private package access and two-factor authentication, it also comes with some drawbacks:

- Slow installation speeds: npm can be notoriously slow, especially for large projects.
- Bloated node_modules directories: These directories can consume significant disk space.
- Complex configuration: npm's configuration can be intricate and challenging to master.

To address these issues, alternative package managers like Yarn and pnpm have emerged. Yarn is known for its speed, while pnpm optimizes disk space by sharing dependencies.

What is Corepack?

Corepack is a new experimental feature in Node.js that allows you to manage the versions of package managers on your machines and environments. This means that all team members will use the same version of the package manager, which can help avoid compatibility issues.

```jsonc
{
  "name": "my-project",
  "scripts": {
    "start": "node index.js"
  },
  "packageManager": "pnpm@8.5.1" // what is this? (Corepack)
}
```

Getting Started with Corepack

To enable Corepack, you can run the following command:

```bash
corepack enable
```

Once Corepack is enabled, to set your project's package manager, run corepack use. This command updates your package.json automatically.

```bash
corepack use pnpm@8.x # sets the latest 8.x pnpm version in the package.json
corepack use yarn@*   # sets the latest Yarn version in the package.json
```

Why Use Corepack?

Corepack can help you avoid compatibility issues by ensuring that all team members use the same version of the package manager. It can also help you manage package manager versions across different environments, such as development, production, and testing.

The Future of Corepack

Corepack represents a significant step forward in Node.js package management. By providing a unified interface for different package managers, it simplifies the development workflow and reduces the complexity associated with managing dependencies. As Corepack matures, it has the potential to become the standard way to manage Node.js packages.

References

- Corepack Documentation
- Corepack: Managing the Package Managers
- How To Use Corepack

I hope this has been helpful and/or taught you something new!
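To see what the packageManager pin buys you in practice, here is a small sketch of a terminal session. It assumes the package.json above with Corepack enabled; the exact warning text varies across Node.js versions.

```bash
# With "packageManager": "pnpm@8.5.1" pinned and Corepack enabled:
pnpm --version
# -> 8.5.1  (Corepack fetches and runs the pinned version automatically)

yarn install
# -> fails: Corepack warns that this project is configured to use pnpm
```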

Fetching Data in React: A Beginner's Guide

2024-12-09
react, javascript, web-development

Imagine building a webpage that displays real-time weather data or a list of products from an online store. How does your React application get this information? The answer lies in data fetching. This process involves retrieving data from various sources (like APIs or databases) and incorporating it into your React components.

Sequential Data Fetching

Think of sequential data fetching as a step-by-step process. You fetch one piece of data, wait for it to arrive, and then move on to the next.

Example: Fetching user information, then their orders, and finally their address.

```js
// Sequential fetching using async/await
async function fetchData() {
  const user = await fetchUser();
  const orders = await fetchOrders(user.id);
  const address = await fetchAddress(user.id);
  // ...
}
```

Parallel Data Fetching

In parallel fetching, multiple data requests are made simultaneously.

Example: Fetching user information, orders, and address at the same time.

```js
// Parallel fetching using Promise.all
Promise.all([fetchUser(), fetchOrders(userId), fetchAddress(userId)])
  .then(([user, orders, address]) => {
    // ...
  })
  .catch((error) => {
    // ...
  });
```

Prefetching Data

To enhance the perceived speed of your application, consider prefetching data before it's required. This technique is particularly effective for data that's likely to be needed soon but not immediately. For instance, when leveraging a framework like Next.js built on top of React, you can prefetch the subsequent page and its associated data using their Link component.

Example: Fetching post details for the next page.

```jsx
<Link href="/posts/1" prefetch>
  <a>Post 1</a>
</Link>
```

As soon as this Link component becomes visible on the screen, the data for the next page is preloaded. These subtle optimizations can significantly improve the perceived performance of your app, making it feel more responsive.

Conclusion

Choosing the right data fetching pattern depends on your specific use case.

- Sequential: Simple to implement, suitable for small-scale applications.
- Parallel: Improves performance for larger datasets, but can be more complex.
- Prefetching: Enhances user experience by reducing perceived loading times.

Key Takeaways:

- Async/await: A modern way to handle asynchronous operations in JavaScript.
- Promises: A way to represent the eventual completion (or failure) of an asynchronous operation.
- Performance: Parallel fetching and prefetching can significantly improve performance.
- User experience: Prefetching can make your application feel snappier.

Additional Tips:

- Error handling: Always handle errors gracefully to provide a better user experience (see the sketch below).
- Caching: Store frequently accessed data to reduce the number of network requests.
- State management: Use libraries like Redux or Zustand to manage complex application state, especially when dealing with fetched data.

By understanding these patterns, you can build more efficient and responsive React applications.

Would you like me to elaborate on any of these concepts or provide more code examples?
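As a companion to the error-handling tip above, here is a minimal sketch of fetching inside a component. The /api/weather endpoint and response shape are hypothetical, and the AbortController cleanup is one common way to avoid setting state from stale requests.

```jsx
import { useEffect, useState } from "react";

function Weather({ city }) {
  const [data, setData] = useState(null);
  const [error, setError] = useState(null);

  useEffect(() => {
    const controller = new AbortController();

    fetch(`/api/weather?city=${encodeURIComponent(city)}`, { signal: controller.signal })
      .then((res) => {
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        return res.json();
      })
      .then(setData)
      .catch((err) => {
        if (err.name !== "AbortError") setError(err); // ignore aborted requests
      });

    // Cancel the in-flight request if the component unmounts or `city` changes
    return () => controller.abort();
  }, [city]);

  if (error) return <p>Something went wrong.</p>;
  if (!data) return <p>Loading…</p>;
  return <p>{data.summary}</p>;
}
```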

A senior stands out for his thinking

2024-12-02
web-programming, software-development

The analysis of edge cases: a key to evaluating senior developers

The world of programming is constantly changing, but many companies still use outdated evaluation methods to measure the skills of senior developers. While theoretical tests are useful for measuring basic knowledge, they do not accurately reflect a senior's ability to solve complex real-world problems.

Why are traditional technical tests not enough?

Tests based solely on theoretical concepts fail to capture the true value of a senior developer. A senior's experience is measured by their ability to anticipate problems, design scalable solutions, and optimize systems. A good senior developer not only masters the code but also knows how to handle exceptions, manage complexity, and think critically in uncertain scenarios.

The practical approach: edge case analysis

A key exercise to evaluate a senior developer is to present them with a realistic scenario and ask them to identify and solve the edge cases. Take, for example, the development of a page with search and pagination. In a technical interview, an experienced senior should be able to think of extreme scenarios that could break the functionality, such as:

- Search with no results: How would you handle a search that returns no results? How would you inform the user in a friendly manner, and what impact would this have on performance?
- Malicious terms: What if a user enters malicious code or unexpected characters in the search field? How would you protect the application from SQL injection or XSS attacks?
- Non-existent pages: If the system receives a request for a non-existent page or with incorrect parameters, how would you handle URL validation to avoid 404 errors? What alternatives would you offer the user?
- Excessive results: Imagine the search returns millions of results. How would you optimize pagination to prevent the application from crashing or becoming slow? What techniques would you use to ensure efficient loading? (A small sketch of this kind of defensive thinking follows below.)
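By way of illustration only (this helper and its limits are hypothetical, not taken from any real interview), here is the kind of guard-rail code a senior might reach for when discussing the pagination edge cases:

```js
// Clamp untrusted pagination params before they reach the database
function parsePagination(query, totalPages) {
  const page = Number.parseInt(query.page, 10);
  const perPage = Number.parseInt(query.perPage, 10);

  return {
    // Fall back to page 1 on garbage input; never exceed the last real page
    page: Number.isInteger(page) && page >= 1 ? Math.min(page, totalPages) : 1,
    // Cap the page size so "give me a million rows" cannot take the app down
    perPage: Number.isInteger(perPage) && perPage >= 1 ? Math.min(perPage, 100) : 20,
  };
}

console.log(parsePagination({ page: "999999", perPage: "5000" }, 42));
// -> { page: 42, perPage: 100 }
```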
Why is this approach important?

Evaluating a candidate in these scenarios not only measures their technical knowledge but also their ability to think proactively, manage exceptions, and anticipate problems. A senior developer must have the ability to design robust solutions that not only work in ideal cases but also cover those exceptional cases that may arise during the daily operation of the system.

Edge case analysis is a clear way to measure a senior's mindset. It is not enough for the candidate to solve a problem in the ideal scenario. The true test is how they respond when things do not go as planned, and whether they can find effective solutions to maintain the stability, security, and usability of the system.

How should a senior be evaluated?

- Realistic and practical tests: Beyond theoretical concepts, interviews should focus on challenges that reflect the real problems a senior developer faces. This involves evaluating how they manage edge cases and their ability to design scalable and secure solutions.
- Evaluation of architectural decision-making: Seniors must be able to make informed decisions about software architecture, scalability, performance, and security. Interviews should explore these capabilities in depth.
- Evaluation of interpersonal and leadership skills: Interviews should also include dynamics that explore communication, teamwork, and leadership skills, as these are crucial aspects of a senior's role.

In conclusion

Traditional technical tests have their place, but they are not enough to effectively evaluate a senior developer. Companies should focus on a more holistic evaluation, where edge cases and critical decisions about software architecture and design are at the center of the assessment. Only then can we ensure that the selected candidates are truly capable of bringing the expected value to a development team.

What do you think about this? Do you agree that technical interviews for seniors should focus more on edge case analysis? What other aspects do you consider important for evaluating a senior?

References

- Special thanks 🙏🏻 for their example to Freddy Montes
- Taking the Edge off of Edge Cases

WebContainers at its best - Bolt.new combines AI and full-stack development in the browser

2024-10-08
javascript, ai, web-development

Remember WebContainers? It's the WebAssembly-based "micro operating system" that can run Vite and the entire Node.js ecosystem in the browser. The StackBlitz team built WebContainers to power their in-browser IDE, but it often felt like the technology was still searching for a killer use case. Until now. That's because StackBlitz just released bolt.new, an AI-powered development sandbox that Eric Simons described during ViteConf as "like if Claude or ChatGPT had a baby with StackBlitz."

I'll try not to imagine it too vividly, but based on the overwhelmingly positive reviews so far, I'm guessing it's working: dozens of developers describe it as a combination of v0, Claude, Cursor, and Replit.

How Bolt is different: Existing AI code tools can often run some basic JavaScript/HTML/CSS in the browser, but for more complex projects, you need to copy and paste the code to a local environment. Not Bolt. By using WebContainers, you can request, run, edit, and deploy entire web applications, all from the browser. Here's what it looks like:

- You can ask bolt.new to build a production-ready multi-page app with a specific backend and database, using any technology stack you want (e.g. "Build a personal blog using Astro, Tailwind, and shadcn").
- Unlike other tools, Bolt can install and run relevant npm packages and libraries, interact with third-party APIs, and run Node servers.
- You can manually edit the code it generates via an in-browser editor, or have Bolt resolve errors for you. This is unique to Bolt, because it integrates AI into all levels of WebContainers (not just the CodeGen step).
- You can deploy to production from chat via Netlify, no login required.

There's a lot more we could go over here, but Eric's demo is pretty wild.

In closing: From the outside, it wasn't always clear whether StackBlitz would ever see a significant return on investment over the 5+ years they've spent developing WebContainers. But suddenly it looks like they might be uniquely positioned to help developers leverage AI to build legitimate full-stack applications.

<iframe width="560" height="315" src="https://www.youtube.com/embed/knLe8zzwNRA?si=7R7-1HxzwuyzL0EZ&amp;start=700" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

I hope this has been helpful and/or taught you something new!

Rust is revolutionizing JavaScript development!

2024-07-24
web-development, javascript

Rspack just released version 1.0, and the dream of using Rust-based build tools to speed up the JavaScript ecosystem is more alive than ever.

How did we get here? Early last year, a team of developers at ByteDance was facing performance issues maintaining the company's "many large monolithic applications." So they did what any good developer would do: they blamed webpack. But they didn't stop there. In March 2023, they released Rspack v0.1, a high-performance JavaScript bundler written in Rust and designed to be fully compatible with the webpack ecosystem.

Fast forward to today, and Rspack now has 100k weekly downloads and has introduced key improvements that make it production-ready:

- Better performance: New features like lazy compilation and other performance optimizations make Rspack 1.0 build times over 20x faster than webpack 5.
- Increased webpack compatibility: Over 80% of the top 50 downloaded webpack plugins can now be used in Rspack, bringing it closer to becoming a true drop-in replacement for webpack.
- Reduced complexity: They created a new toolchain called Rstack that includes separate projects like Rsbuild, Rspress, and Rslib, each targeting different use cases. This reduces the complexity of setting up an all-in-one tool like Rspack (or webpack), while still maintaining flexibility.

Bottom line: Rspack offers a pretty simple value proposition for developers: if you already use webpack, it makes it much easier to migrate to a bundler that is faster, easier to use, and still fully compatible with the webpack API. Time will tell if that will be enough to convince the masses to try it out.
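To make the "drop-in" claim concrete, here is a hedged sketch of what a minimal config might look like. The file name and fields mirror webpack's conventions, and the lazy-compilation flag is shown as documented around the 1.0 release; check the current Rspack docs before relying on it.

```js
// rspack.config.js: deliberately indistinguishable from a webpack config
module.exports = {
  entry: "./src/index.js",
  output: {
    path: __dirname + "/dist",
    filename: "bundle.js",
  },
  experiments: {
    // One of the 1.0 performance features mentioned above
    lazyCompilation: true,
  },
};
```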

Unlock your creativity with Google Gemini and JavaScript - A practical guide

2024-06-12
javascript, ai, tutorial

Hello! Today I bring you a new tool that will boost your creativity to another level: Google Gemini. This artificial intelligence API allows you to generate high-quality text in Spanish, from simple phrases to complete stories, with just a few lines of code.

What is Google Gemini?

Google Gemini is a state-of-the-art language model developed by Google AI. It has been trained on a massive dataset of text and code, allowing it to understand and generate natural language with impressive accuracy.

What can we do with Google Gemini and JavaScript?

The possibilities are endless. Here are some examples:

- Generate creative content: Write poems, stories, scripts, blog posts, or any type of textual content you can imagine.
- Translate languages: Translate texts from one language to another quickly and accurately.
- Answer questions: Get answers to your questions in an informative and complete way.
- Create chatbots: Develop conversational chatbots that interact with users naturally.
- Automate tasks: Automate the generation of reports, emails, and other tasks that require natural language processing.

How to get started?

To get started with Google Gemini and JavaScript, you only need:

- A Google Cloud Platform account: https://cloud.google.com/
- To enable the Google Gemini API: https://ai.google.dev/

Practical example

In this example, we are going to generate a poem using Google Gemini and JavaScript. For text generation we use the getGenerativeModel method of the genAI object (model names evolve, so check the current documentation for what is available).

```js
const { GoogleGenerativeAI } = require("@google/generative-ai");

// Access your API key as an environment variable (see "Set up your API key" above)
const genAI = new GoogleGenerativeAI(process.env.API_KEY);

async function run() {
  // For text generation, use a generative model such as gemini-pro
  const model = genAI.getGenerativeModel({ model: "gemini-pro" });

  const prompt = "Escribe un poema sobre la naturaleza";
  const result = await model.generateContent(prompt);
  const response = await result.response;

  console.log(response.text());
}

run();
```

Example of a generated poem:

The green earth, the blue sky, the sun shines with crystal light. The flowers bloom in the garden, the birds sing with sweet trill. The wind rustles through the leaves, the bees buzz among the flowers. Nature is a divine gift, a place of peace and harmony.

Conclusion

Google Gemini and JavaScript are a powerful combination that allows you to unlock your creativity and develop amazing applications. With this practical guide, you are ready to start exploring the endless possibilities of this technology.

Additional Resources:

- Google Gemini Documentation: https://ai.google.dev/docs
- Google Gemini Tutorials: https://m.youtube.com/watch?v=TXvbT8ORI50
- Google Gemini Code Examples: https://m.youtube.com/watch?v=jTdouaDuUOA

Feel free to experiment with Google Gemini and JavaScript! Share your creations in the comments and let me know what you think of this tool.

I hope this has been helpful and/or taught you something new!
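The same SDK also exposes embeddings, which are useful for search and similarity rather than generation. A minimal sketch, assuming the embedding-001 model name from the SDK docs (which may also change over time):

```js
const { GoogleGenerativeAI } = require("@google/generative-ai");

const genAI = new GoogleGenerativeAI(process.env.API_KEY);

async function embed() {
  // For embeddings, use the embedding-001 model
  const model = genAI.getGenerativeModel({ model: "embedding-001" });

  const result = await model.embedContent("Escribe un poema sobre la naturaleza");
  console.log(result.embedding.values); // a numeric vector representing the text
}

embed();
```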

How to mock an HTTP request with Jest 💻

2024-05-07
javascript, testing, nodejs, jest, web-development

Today I wanted to show you how to properly write a test.

But anyone can figure out how to run a simple test. And here, we're looking to help you find answers you won't find anywhere else. So I thought we'd take things one step further.

Let's run a more complex test, where you'll have to mock 1 or 2 parts of the function you're testing.

[In case you're new here: a mock is like using a stunt double in a movie. It's a way to replace a complicated part of your code (like calling an API) with something simpler that pretends to be real, so you can test the rest of your code easily.]

My testing framework of choice is Jest, because it makes everything so much easier:

- Zero Configuration: One of the main advantages of Jest is its zero-configuration setup. It is designed to work out of the box with minimal configuration, making it very attractive for projects that want to implement tests quickly and efficiently.
- Snapshot Testing: Jest introduced the concept of Snapshot Testing, which is particularly useful for testing UI components. It takes a snapshot of a component's rendered output and ensures that it doesn't change unexpectedly in future tests.
- Built-In Mocking and Spies: Jest comes with built-in support for mock functions, modules, and timers, making it easy to test components or functions in isolation without worrying about their dependencies.
- Asynchronous Testing Support: Jest supports asynchronous testing out of the box, which is essential for testing in modern JavaScript applications that often rely on asynchronous operations like API calls or database queries.

Anyway, let's get into the tests:

Step 1: Setting up your project

- Create a new project directory and navigate to it
- Initialize a new npm project: npm init -y
- Install Jest: npm install --save-dev jest
- Install axios to make HTTP requests: npm install axios

These are the basic requirements. Nothing new or fancy here. Let's get started.

Step 2: Write a function with an API call

Now, let's say you log into some kind of application. StackOverflow, for example. Most likely, at the top right you'll see information about your profile. Maybe your full name and username, for example. In order to get these, we typically have to make an API call, so let's see how we would do that.

Create a file called user.js. Inside user.js, write a function that makes an API call. For example, using axios to retrieve user data:

```js
// user.js
import axios from "axios";

export const getUser = async (userId) => {
  const response = await axios.get(`https://api.example.com/users/${userId}`);
  return response.data;
};
```

Step 3: Create the Test File

Okay, now that we have a function that brings us the user based on the ID we requested, let's see how we can test it.

Remember, we want something that works always and for all developers. Which means we don't want to depend on whether the server is running or not (since this is not what we are testing). And we don't want to depend on the users we have in the database. Because in my database, ID 1 could belong to my admin user, while in your database, ID 1 could belong to YOUR admin user. This means that the same function would give us different results, which would cause the test to fail, even though the function works correctly. Read on to see how we tackled this problem using mocks.

Create a file called user.test.js in the same directory. Inside this file, import the function you want to test:

```js
import axios from "axios";
jest.mock("axios");
import { getUser } from "./user";
```

Write your test case, mock the call, and retrieve mock data.

```js
test("should fetch user data", async () => {
  // Mock data to be returned by the Axios request
  const mockUserData = { id: "1", name: "John Doe" };
  axios.get.mockResolvedValue({ data: mockUserData });

  // Call the function
  const result = await getUser("1");

  // Assert that the Axios get method was called correctly
  expect(axios.get).toHaveBeenCalledWith("https://api.example.com/users/1");

  // Assert that the function returned the correct data
  expect(result).toEqual(mockUserData);
});
```
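As a bonus, the same mock can exercise the failure path too. This extra case is a sketch on top of the walkthrough above, using only standard Jest APIs:

```js
test("should throw when the request fails", async () => {
  // Simulate a network failure from axios
  axios.get.mockRejectedValue(new Error("Network Error"));

  // getUser does not catch errors, so the rejection should bubble up
  await expect(getUser("1")).rejects.toThrow("Network Error");
});
```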
Step 4: Run the test

Add a test script to your package.json:

```json
"scripts": {
  "test": "jest"
}
```

Run your tests with npm test.

Step 5: Review the results

Jest will display the result of your test in the terminal. The test should pass, indicating that getUser is returning the mocked data as expected.

Congratulations, you now have a working test with Jest and mocking.

I hope this was helpful and/or made you learn something new!

Are you making THESE unit testing and mocking mistakes?

2024-04-08
javascript, testing, web-development

Testing is hard.

And it doesn't matter if you're an experienced tester or a beginner... If you've put significant effort into testing an application... Chances are you've made some of these testing and mocking mistakes in the past.

From test cases packed with duplicate code and huge lifecycle hooks, to conveniently incorrect mocking cases and missing and sneaky edge cases, there are plenty of common culprits. I've tracked some of the most popular cases and listed them below. Go ahead and count how many of them you've done in the past. Hopefully, it'll be a good round.

Why do people make mistakes in testing in the first place?

While automated testing is one of the most important parts of the development process... And unit testing saves us countless hours of manual testing and countless bugs that get caught in test suites... Many companies don't use unit testing or don't run enough tests.

Did you know that the average test coverage for a project is ~40%, while the recommended one is 80%?

This means that a lot of people aren't used to running tests (especially complex test cases), and when you're not used to doing something, you're more prone to making a mistake. So without further ado, let's look at some of the most common testing errors I see.

Duplicate Code

The three most important rules of software development are also the three most important rules of testing. What are these rules? Reuse. Reuse. Reuse.

A common problem I see is repeating the same series of commands in every test instead of moving them to a lifecycle hook like beforeEach or afterEach. This could be because the developer was prototyping, or the project was small and the change insignificant. These cases are fine and acceptable. But a few test cases later, the problem of code duplication becomes more and more apparent. And while this is more of a junior developer mistake, the following one is similar but much more clever.

Overloading lifecycle hooks

On the other side of the same coin, sometimes we are too eager to refactor our test cases, and we put so much stuff in lifecycle hooks without thinking twice that we don't see the problem we are creating for ourselves. Sometimes lifecycle hooks grow too large. And when this happens... and you need to scroll up and down to get from the hook to the test case and back... This is a problem and is often referred to as "scroll fatigue". I remember being guilty of this in the past.

A common pattern/practice to keep the file readable when we have bloated lifecycle hooks is to extract the common configuration code into small factory functions. So, let's imagine we have a few (dozens of) test cases that look like this:

```js
describe("authController", () => {
  describe("signup", () => {
    test("given user object, returns response with 201 status", async () => {
      // Arrange
      const userObject = {
        // several lines of user setup code
      };
      const dbUser = {
        // several lines of user setup code
      };
      mockingoose(User).toReturn(undefined, "findOne");
      mockingoose(User).toReturn(dbUser, "save");
      const mockRequest = {
        // several lines of constructing the request
      };
      const mockResponse = {
        // several lines of constructing the response
      };

      // Act
      await signup(mockRequest, mockResponse);

      // Assert
      expect(mockResponse.status).toHaveBeenCalled();
      expect(mockResponse.status).toHaveBeenCalledWith(201);
    });

    test("given user object with email of an existing user, returns 400 status - 1", async () => {
      // Arrange
      const userObject = {
        // several lines of user setup code
      };
      const dbUser = {
        // several lines of user setup code
      };
      const mockRequest = {
        // several lines of constructing the request
      };
      const mockJson = jest.fn();
      const mockResponse = {
        // several lines of constructing the response
      };
      mockingoose(User).toReturn(dbUser, "findOne");

      // Act
      await signup(mockRequest, mockResponse);

      // Assert
      expect(mockResponse.status).toHaveBeenCalled();
      expect(mockResponse.status).toHaveBeenCalledWith(400);
      expect(mockJson).toHaveBeenCalled();
      expect(mockJson).toHaveBeenCalledWith({
        status: "fail",
        message: "Email taken.",
      });
    });
  });
});
```

We can extract the repeated configuration information into its own functions called createUserObject, createDbUserObject and createMocks. And then the tests would look like this:

```js
test("given user object, returns response with 201 status", async () => {
  const userObject = createUserObject();
  const dbUser = createDbUserObject();
  const [mockRequest, mockResponse] = createMocks(userObject);

  mockingoose(User).toReturn(undefined, "findOne");
  mockingoose(User).toReturn(dbUser, "save");

  await signup(mockRequest, mockResponse);

  expect(mockResponse.status).toHaveBeenCalled();
  expect(mockResponse.status).toHaveBeenCalledWith(201);
});
```

By extracting those code snippets into their own separate factory functions, we can avoid scroll fatigue, keep lifecycle hooks snappy, and make it easier to navigate the file and find what we're looking for.
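What might those factory functions look like? The exact shapes depend on your app, so treat this as a hypothetical sketch rather than real setup code:

```js
// Hypothetical factories; field names are illustrative only
function createUserObject(overrides = {}) {
  return { email: "jane@example.com", password: "s3cret!", ...overrides };
}

function createDbUserObject(overrides = {}) {
  return { _id: "507f1f77bcf86cd799439011", ...createUserObject(), ...overrides };
}

function createMocks(body) {
  const mockRequest = { body };
  // status() returns the response itself so calls can be chained, as in Express
  const mockResponse = { status: jest.fn().mockReturnThis(), json: jest.fn() };
  return [mockRequest, mockResponse];
}
```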
Not prioritizing the types of tests you run

This has more to do with large or huge codebases, where there are literally hundreds or even thousands of test cases running every time a new set of commits wants to be merged into the codebase. In such cases, running all the test suites can take literally hours, and you may not always have the time or resources to do so.

When time or resources are limited, it's important to strategically choose the type of test to prioritize. Generally, integration tests provide better reliability assurances due to their broader scope. So when you have to choose between the two, it's often a good idea to choose integration tests over unit tests.

Using logic in your test cases

We want to avoid logic in our test cases whenever possible. Test cases should only have simple validation and avoid things like try-catch blocks or if-else conditionals. This keeps your tests clean and focused only on the expected flow, because it makes the tests easier to understand at a glance. The only exception is when you're writing helper or factory functions that set up scenarios for tests.

Using loose validations instead of strict assertions

This is usually a sign that you might need to refactor the piece of code you're testing, or that you need to make a minor adjustment to your mocks. For example, instead of checking if the value is greater than 1, you should be more specific and assert that the value is 2. Or, if you're checking data for a User object, you should assert that each piece of data is exactly as you expect, rather than just checking for an ID match. Loose checks can mask edge cases that could fail in the future.

Improper Implementation of Mock Behavior

This one is hard to find, and that's why you can find an example in every codebase. It's one of the sneakiest but most common testing issues, and it's hard to notice at first glance. It can happen when the mock behavior is overly simplified or when it doesn't accurately reflect edge cases and error conditions. As a result, tests may pass, but they will not provide a reliable indication of how the system will perform under various conditions, resulting in future errors and unexpected problems, and test cases with simulated behavior that end up doing more harm than good.

I hope this post helps you identify those practices that we should avoid when testing.

Why is it so difficult to pass state between client and server components?

2024-03-28
javascript, react, web-development

The way we render server components is different. Nothing like what we're used to so far. And because it's so different, it also changes where we handle state, how we handle it, and how we manage to sleep at night knowing that these are all important things we should have known since last year, but most of us are completely unaware of.

WHY?

In fact, server components impact 3 of the most important parts of web development:

- Performance
- User experience
- The way we, developers, write code and design our applications

It's not something we can ignore or take lightly. As you can see, I've been thinking about it a lot, and you've probably been thinking about it too. Every developer who values their keyboard is thinking about it.

And there's this specific question... It's kind of a "chicken and the egg" type of question, mainly because they're both questions that get asked a lot.

How the heck do I handle state in server components if I don't have access to the state in the first place?

Before I give you the answer, let me explain the problem at hand. Consider what happens when a server component requests a new render. Unlike client components, where state is preserved between renders, server components don't have that luxury. Like a roguelike game, they always start from scratch. There is no inherent code or mechanism that can make a server component remember state. The backend has all those databases and design patterns and all those complex functions, and for what? It can't even handle state.

So what do we do? Do we simply use server components for completely static components that don't need any state? While this is a possible answer, it's also an approach that limits the effectiveness and tuning of server components in modern web applications. So, we're ruling it out.

Because everything I said above has a caveat. While the backend may not handle client state like the client does, it does handle application state. So, in a way, we can handle state on the server. Just not in the way you think. And, actually, there's not one way. There are THREE ways to handle state on the server. Which one we choose depends on what best suits our needs and current situation. And these 3 ways are:

- Prop drilling from the server to the components (see the sketch below)
- Cookies
- State hydration

Now, another million-dollar question: why can't we just handle all the state on the client? This is because server components advocate a separation of concerns, which in simpler terms means that each part of the application should mind its own business. By decoupling state from rendering, we not only improve performance, but we also gain more control over the user experience.
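A minimal sketch of the first option, prop drilling, using Next.js App Router conventions (the endpoint and component names are hypothetical):

```jsx
// app/page.jsx (server component): fetches on the server, ships no JS for this part
import LikeButton from "./LikeButton";

export default async function Page() {
  const res = await fetch("https://api.example.com/likes"); // hypothetical API
  const { likes } = await res.json();

  // Server-side application state is handed to the client as a plain prop
  return <LikeButton initialLikes={likes} />;
}
```

```jsx
// app/LikeButton.jsx (client component): owns the interactive state
"use client";
import { useState } from "react";

export default function LikeButton({ initialLikes }) {
  const [likes, setLikes] = useState(initialLikes);
  return <button onClick={() => setLikes(likes + 1)}>{likes} likes</button>;
}
```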

Will you still be using client-side components in 2024?

2024-02-14
javascript, react, web-development

I love server components. But they're not for every occasion. In fact, using them at each and every opportunity you get is more like tenderizing a steak with the same hammer you used to hammer a pin. That said, there are also some cases where server components fit like Cinderella and her shoe.

Dashboards & Reports

In the case of dashboards and reports, you maximize performance by processing and rendering all the data on the server while still allowing the client to do what it was supposed to do in the first place. If we only use client components, we're offloading more tasks to the client and making it perform more tasks than it can potentially handle.

If you processed and rendered the data on the client, you'd spend a lot of time waiting around with an ugly spinner mocking you because of all the extra steps the client has to take to achieve the same result. First, you would need to get the data from your endpoint, which in turn would need to get data from the database, which in turn would need to get data from its logs. Not only that, but you also have to wait patiently for the data to arrive at the client, even if it arrives late. And only once it arrives can the client start processing it and displaying it to the user.

That's why, in this case, it's better to have the components built and rendered directly on the server. Then, the client can get all the widgets its electronic heart desires, all streamed in parallel, which is a much faster and easier way to conduct business.

Blog Posts

Blog posts are some of the most static content you can find. After all, it's mostly just a bunch of words, in order, with the occasional image, gif, or meme here and there. With server components, you're pre-rendering the content on the server, which is the best case scenario in terms of SEO because it's delivered fast and complete.

This use case is what tends to throw developers off balance and confuse them, because blog posts are also the first thing they think of when it comes to SSR. So if they think of SSR and server components when they think of blog posts, it's natural to think that they're both the same thing or can be used interchangeably. Which is completely and utterly wrong and is a sin worthy of burning in hell for eternity, according to my previous blog post. But at the same time it makes sense from the point of view of results: although their approaches are very different in practice, the results are quite similar.

Server-Side Data Fetching

Server-side data fetching is a great way to give your code a security boost if you would otherwise reveal logic or API keys that are supposed to remain hidden, or if you are paranoid about every piece of information you hold. You can use this type of data fetching to access not only your database but also any other API you want. All by being sneaky.

However, there are cases where server components are NOT ideal and should in fact be avoided.

High Interactivity Requirements

This is fancy wording for anything you do something to and get something back. Kind of like Jell-O, but not quite. Something potentially more familiar and more in keeping with web development itself are things like forms and buttons. These components often have to react in some way to your actions, may have their own state, and communicate with other components on the client side, making them the weakest link when it comes to server components. A quick sketch of such a component follows below.
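For instance, a search box has to hold state and respond to events, so it stays a client component. A minimal sketch (the component and prop names are hypothetical):

```jsx
"use client"; // state + event handlers: this one cannot be a server component
import { useState } from "react";

export default function SearchBox({ onSearch }) {
  const [query, setQuery] = useState("");

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        onSearch(query);
      }}
    >
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <button type="submit">Search</button>
    </form>
  );
}
```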
Stateful Components

If you take it as literally as possible and use the definition we've been using so far, there is no way to handle state in a server component. But if you squint, tilt your head a little to the side, defocus a bit, and take a deep breath, you can see that this is only half true. We'll learn what the nuances are another time, but for now let's just say that server components do NOT have access to state. After all, they don't have access to state-related hooks.

Conclusion

Server components are a powerful tool for specific scenarios in web development. Use them to improve performance and security for data-intensive tasks. But remember, they may not fit every situation, especially for interactive or stateful elements.

See you soon!

Server Components Vs Server-side Rendering

2024-01-28
javascript, react, web-development

Did you know that server components and server-side rendering are two completely different things? And while they go hand-in-hand in many cases, there are equally many examples where you can and should use just one of them.

Now, to be fair, given the online documentation most developers find about server components and server-side rendering, the fact that they assume these two things are the same isn't exactly a surprise. (This also gives them the perfect excuse to avoid server components altogether, rather than face the fact that this is something they should already be pretty good at using and will eventually have to learn anyway.)

That said, server components have a lot of advantages over server-side rendering, especially when it comes to building larger, more complex applications. Let's talk a bit more about the differences between the two.

Server-Side Rendering

Server-side rendering renders the page on the server. But obviously that's not really useful on its own, so let's dig a little deeper. Server-side rendering renders the page on the server at request time, meaning that every time a client makes a request to the server, such as when a new visitor comes to your page, the server re-renders the same page and sends it to the client.

And while this is great for SEO, since even the most dynamic pages appear static in the eyes of the robots that index them and use them for searches, server-side rendering has a lot of steps between the client making the request to the server and the page finally loading in front of you.

First, when the request hits the server, the server statically renders the page and sends it back to the client, where the user gets a version of your page that's devoid of all interactivity. But that's not the only thing the server sends to the client. Along with the static page HTML, the server sends its condolences along with a big bundle of React code that the client will need to run to make the page dynamic again.

But what happens when 90% of our page is static? You might think the correct answer is "practically nothing" and you wouldn't be exactly wrong, but just like in Calculus class, you lose points because this isn't the answer I was looking for. In reality, there's still a lot of JavaScript being sent to the client, and it needs to be executed without changing much on the page itself. This is a waste of time and data and is one of the main reasons server components were created. (Which also means that if you're using SSR as an alternative to server components, you've hit a big snag, and it's time to change your ways before you get thrown into a "slow performance" jail.)

Server Components

Server components, like SSR, are rendered on the server, but they have the unique ability to include server-side logic without sending any additional JavaScript to the client, since server components are executed on the server. This means that while in SSR the JavaScript code is executed on the client, in server components, the code is executed directly on the server. And the client only receives the output of the code through the server payload, like a baby getting its food chewed up.

So, in a nutshell, the difference between server components and server-side rendering is all about when and how the code is executed. In the case of server components, there is no need to send or handle any additional code on the client side, because we already ran it on the server. The only code needed is the glue that ties server and client components together on the client.

The big benefit of server components is that they offer higher performance because they need less client-side JavaScript and offload all the processing and data retrieval to the server, while the client can relax and drink an Aperol Spritz until the rendered HTML arrives. At that point, the client simply displays it to the end user and takes all the credit for the hard work done by the server.

And while this may seem a bit complex right now, it will all become clearer as you learn more about server components. Which is something that can't really be avoided, as they are becoming more and more popular in use cases like:

- Having a lot of heavy calculations that need a lot of processing
- Private API access (to keep things secret... secret)
- When most of the client side is already static and it would be a waste to send more JavaScript to the client

Conclusion

While server components may seem daunting right now, it's mostly because they are new and unfamiliar. Once you get to know them better and learn some of the basics, you'll realize that they are not as complex as they seem. The learning curve is not as steep as you might think, and just because they have "server" in the name, it doesn't mean they have to be something strange and cryptic that we frontend developers should stay away from. In fact, they are quite similar to client components and even more lightweight, as they lack things like state hooks.
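To ground the distinction, here is a hedged side-by-side sketch using Next.js conventions (fetchPosts is a hypothetical data helper): the SSR version ships the component code to the client for hydration, while the server component version ships only the rendered result.

```jsx
// SSR (pages router): runs on the server per request,
// then the same component code is sent to the client and hydrated
export async function getServerSideProps() {
  const posts = await fetchPosts(); // hypothetical helper
  return { props: { posts } };
}

export default function PostsPage({ posts }) {
  return <ul>{posts.map((p) => <li key={p.id}>{p.title}</li>)}</ul>;
}
```

```jsx
// Server component (app router): runs only on the server;
// the client receives the output, not this component's JavaScript
export default async function PostsPage() {
  const posts = await fetchPosts(); // hypothetical helper
  return <ul>{posts.map((p) => <li key={p.id}>{p.title}</li>)}</ul>;
}
```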

Predictions 🧞‍♀️💻 2024

2023-12-04
programming, web-development, discuss

Some points you should pay attention to for this year 2024 that will surely have a high impact on the technological ecosystem.

- Bun achieves its goal of becoming the default frontend runtime: They still have some hurdles to overcome, but if they manage to provide a drop-in replacement for Node that instantly improves your application's performance 10x, it will be an obvious choice for most developers. The v1.0 release last September was a major step towards overall Windows compatibility and stability, and the bet is that Bun will start becoming the default choice this year.
- AI will replace no-code/low-code tools: It turns out that AI is much better and faster at creating marketing analytics dashboards. Tools like Basedash and 8base already use AI to create a full set of requirements and custom internal tools, and others will emerge to replace drag-and-drop builders for creating sites that don't rely heavily on business logic.
- Netlify is acquired by GoDaddy: With multiple rounds of layoffs, 2023 was clearly not the best year for Netlify. But sometimes the best way to recover is to find a new ~~sugar daddy~~, GoDaddy. After retiring the Media Temple brand a few months ago, it looks like GoDaddy might be back in the acquisition market for a platform like Netlify.

Any other ones you can think of? Help me update this post!!

Clone an Object in JavaScript - 4 Best Ways

2023-06-18
javascript, programming, web-development

Cloning an object in JavaScript means creating an identical copy of an existing object. The new object will have the same values for each property, but it will be a different object.

Why is it important to clone an object in JavaScript?

Cloning an object in JavaScript is important to preserve the original state of an object and prevent the propagation of changes. This is especially useful when there are multiple instances of an object that are shared between various parts of the application. Cloning the original object ensures that the other instances are not affected by changes made to the original object. This also allows developers to use a single instance of the object and create a unique copy for each part of the application, avoiding having to create a new instance of the object every time, which saves time and resources.

How to clone an object?

To clone a JavaScript object correctly, you have 4 different options:

1. Use the structuredClone() function
2. Use the spread operator
3. Call the Object.assign() function
4. Use JSON parsing

```js
const data = { name: "khriztianmoreno", age: 33 };

// 1
const copy1 = structuredClone(data);
// 2
const copy2 = { ...data };
// 3
const copy3 = Object.assign({}, data);
// 4
const copy4 = JSON.parse(JSON.stringify(data));
```

1. structuredClone()

Creates a deep clone of a given value using the structured cloning algorithm.

```js
const original = {
  someProp: "with a string value",
  anotherProp: {
    withAnotherProp: 1,
    andAnotherProp: true,
  },
};

const myDeepCopy = structuredClone(original);
```

2. Spread operator

The spread operator (...) allows you to spread an object or array into a list of individual elements, and is used to create a copy of an object.

```js
const original = { name: "khriztianmoreno", age: 33 };
const clone = { ...original };
```

In the above example, a new clone object is created from the values of the original object. It is important to note that this only makes a shallow copy: if the original object contains objects or arrays within it, these will not be cloned; instead, the clone will hold references to the original's nested values. A short demonstration of this pitfall follows after the four methods.

It can also be used to clone arrays as follows:

```js
const original = [1, 2, 3, 4];
const clone = [...original];
```

3. Object.assign

The Object.assign() function is another way to clone objects in JavaScript. The function takes a target object as its first argument, and one or more source objects as additional arguments. It copies enumerable properties from the source objects to the target object.

```js
const original = { name: "khriztianmoreno", age: 33 };
const clone = Object.assign({}, original);
```

4. JSON parsing

The JSON.parse() and JSON.stringify() methods are another way to clone objects in JavaScript. The JSON.stringify() method converts an object into a JSON string, while the JSON.parse() method converts a JSON string back into a JavaScript object.

```js
const original = { name: "khriztianmoreno", age: 33 };
const clone = JSON.parse(JSON.stringify(original));
```
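Here is the shallow-versus-deep difference in action: a minimal check you can run in Node 17+ or any modern browser (where structuredClone is available):

```js
const original = { name: "khriztianmoreno", skills: { js: true } };

const shallow = { ...original };        // copies top-level properties only
const deep = structuredClone(original); // copies nested objects too

original.skills.js = false;

console.log(shallow.skills.js); // false -> nested object is shared with the original
console.log(deep.skills.js);    // true  -> nested object was fully copied
```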
Conclusions

Advantages of cloning an object in JavaScript:

- Object cloning allows the developer to create a copy of an existing object without having to redefine all of its values. This means the developer can save time and effort by not having to recreate an object from scratch.
- Object cloning also allows the developer to create a modified version of an existing object. For example, a developer can modify the values of an existing object to create a customized version. This saves time by not having to rewrite the code to create an object from scratch.
- Object cloning also allows the developer to create an improved version of an existing object. For example, a developer can add new properties to an existing object to improve its functionality.
- Object cloning also offers a way to back up objects. This means that if the existing object is affected by a software failure, the developer can resort to the backup to recover the original values.

Disadvantages of cloning an object in JavaScript:

- Execution time: Cloning objects in JavaScript can be a slow process, especially if the object is large. This can lead to a poor user experience if you are trying to clone an object while running an application.
- Complex objects: Objects that contain references to other objects cannot always be cloned properly. This can be a problem if you are trying to clone an object that contains references to other important objects, as the clone will not preserve these references.
- Security: Cloning objects in JavaScript can also lead to security issues if you clone objects that contain sensitive information. The clone can contain the same information as the original object, which can be a risk if you share the clone with other users.

That's all folks! I hope this article helps you understand the different options we have when it comes to cloning an object/array in JavaScript.

PS: This article was written entirely using Artificial Intelligence.

How to use Call, Apply and Bind methods in javascript

2023-05-12
javascript, web-development, programming

In this article, we'll look at what the call, apply, and bind methods are in JavaScript and why they exist. Before we jump in, we need to know what this is in JavaScript; in this post you can go a little deeper.

Call, Apply and Bind

In JavaScript, all functions have access to a special keyword called this; the value of this will point to the object on which that function is executed.

What are these call, apply and bind methods?

To put it simply, all these methods are used to change the value of this inside a function. Let us understand each method in detail.

call()

Using the call method, we can invoke a function, passing a value that will be treated as this within it.

```js
const obj = {
  myName: "khriztianmoreno",
  printName: function () {
    console.log(this.myName);
  },
};

obj.printName(); // khriztianmoreno

const newObj = {
  myName: "mafeserna",
};

obj.printName.call(newObj); // mafeserna
```

In the above example, we are invoking the call method on the printName function, passing newObj as a parameter. Now this inside printName points to newObj, hence this.myName prints mafeserna.

How to pass arguments to functions?

The first argument of the call method is the value pointed to by this inside the function. To pass additional arguments to that function, we can start passing them from the second argument of the call method.

```js
function foo(param1, param2) {}
foo.call(thisObj, arg1, arg2);
```

where:

- foo is the function we are calling, passing the new this value, which is thisObj
- arg1, arg2 are the additional arguments that the foo function will take (param1 = arg1, param2 = arg2)

apply()

The apply function is very similar to the call function. The only difference between call and apply is how the arguments are passed:

- call: we pass arguments as individual values, starting from the second argument
- apply: additional arguments are passed as an array

```js
function sayHello(greet, msg) {
  console.log(`${greet} ${this.name} ! ${msg}`);
}

const obj = {
  name: "khriztianmoreno",
};

// Call
sayHello.call(obj, "Hello", "Good Morning");
// Hello khriztianmoreno ! Good Morning

// Apply
sayHello.apply(obj, ["Hello", "Good Morning"]);
// Hello khriztianmoreno ! Good Morning
```

In the above example, both the call and apply methods on the sayHello function are doing the same thing; the only difference is how we pass additional arguments.

bind()

Unlike the call and apply methods, bind will not invoke the function directly. Instead, it changes the this value inside the function and returns the modified function instance, which we can invoke later.

```js
function sayHello() {
  console.log(this.name);
}

const obj = { name: "khriztianmoreno" };

// it won't invoke; it just returns a new function instance
const newFunc = sayHello.bind(obj);

newFunc(); // khriztianmoreno
```

Passing additional arguments in bind works similarly to the call method: we can pass additional arguments as individual values, starting from the second argument of the bind method.

```js
function sayHello(greet) {
  console.log(`${greet} ${this.name}`);
}

const obj = { name: "khriztianmoreno" };

const newFunc = sayHello.bind(obj, "Hello");
newFunc(); // Hello khriztianmoreno
```

In the case of the bind method, we can pass additional arguments in two ways:

- While calling the bind method itself, we can pass additional arguments along with the value of this.
- We can also pass additional arguments while invoking the function returned by bind.

We can follow either of the above approaches, and they work the same way, with no difference in functionality.

```js
function sayHello(greet) {
  console.log(`${greet} ${this.name}`);
}

const obj = { name: "khriztianmoreno" };

const newFunc1 = sayHello.bind(obj, "Hello");
newFunc1(); // Hello khriztianmoreno

const newFunc2 = sayHello.bind(obj);
newFunc2("Hello"); // Hello khriztianmoreno
```

NOTE: if we don't pass any value, or we pass null, while calling the call, apply, or bind methods, then this inside the called function will point to the global object (in non-strict mode).

```js
function sayHello() {
  // executing in a browser environment
  console.log(this === window);
}

sayHello.call(null); // true
sayHello.apply();    // true
sayHello.bind()();   // true
```

We cannot use the call, apply, and bind methods on arrow functions to change the value of this, because arrow functions do not have their own this context. The this inside an arrow function points to the outer/parent scope in which it is defined. Therefore, applying these methods to an arrow function will have no effect, as the quick check below shows.
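A quick check of that last point (run in Node, where a module's top-level this is not the global object):

```js
const person = { name: "khriztianmoreno" };

const regular = function () {
  return this.name;
};
const arrow = () => this && this.name;

console.log(regular.call(person)); // "khriztianmoreno" -> call() rebinds `this`
console.log(arrow.call(person));   // undefined -> call() cannot rebind an arrow function's `this`
```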
That's all folks! I hope this article helped you understand what the call(), apply(), and bind() methods are in JavaScript!

Hacks for effective Fullstack development with React and Node

2023-04-17
javascript, nodejs, react

Today I'm going to show you an optimal workflow for effective development with Node.js and React. If you've ever worked on a project with multiple package.json files, you might know the pain of juggling multiple terminal tabs, remembering which commands start which server, or handling CORS errors. Fortunately, there are some tools available that can alleviate some of these headaches.

Monorepo Setup for React and Node

Let's say we're working on a monorepo with two package.json files: one in a client directory for a React front-end powered by Create React App, and one at the root of the repository for a Node back-end that exposes an API that our React app uses. Our React app runs on localhost:3000 and our Node app runs on localhost:8080. Both apps are started with npm start.

Since we have two package.json files, getting our front-end and back-end up and running means making sure we've run npm install and npm start in both the root directory and the client directory. Here's how we simplify this.

1. Running two servers at the same time

One improvement we can make to our development workflow is to add a build tool to run multiple npm commands at the same time, to save us the trouble of running npm start in multiple terminal tabs. To do this, we can add an npm package called concurrently to the root of our project.

In the root of our project, we will install it as a development dependency.

```bash
npm install -D concurrently
```

Then, in our root package.json scripts, we will update our start script to use concurrently. Note that each sub-command is wrapped in escaped quotes, so concurrently treats them as two separate commands rather than one command with extra arguments.

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "concurrently --kill-others-on-fail \"npm run server\" \"npm run client\"",
    "server": "node index.js",
    "client": "cd client && npm start"
  },
  "dependencies": {
    "express": "^4.17.1"
  },
  "devDependencies": {
    "concurrently": "^6.0.1"
  }
}
```

Now, we have three npm scripts:

- npm run server starts our Node application,
- npm run client runs npm start in the client directory to start our React application,
- npm start runs npm run server and npm run client at the same time.

2. Installing front-end and back-end dependencies with a single command

Another aspect of our workflow that we can improve is the installation of dependencies. Currently, we need to manually run npm install for every package.json file we have when setting up the project. Instead of going through that hassle, we can add a postinstall script to our root package.json to automatically run npm install in the client directory after the installation has finished in the root directory.

```json
{
  "name": "my-app",
  "scripts": {
    ...,
    "postinstall": "cd client && npm install"
  }
}
```

Now, when we install our monorepo, all we need to do to get it up and running is run npm install and then npm start at the root of the project. There is no need to enter any other directory to run other commands.

3. Proxying API requests to the backend

As I mentioned earlier, our Node backend exposes the API endpoints that our React app will use. Let's say our Node app has a /refresh_token endpoint.

Out of the box, if we tried to send a GET request to http://localhost:8080/refresh_token from our React app at http://localhost:3000, we would run into CORS issues. CORS stands for cross-origin resource sharing. Typically, when you encounter CORS errors, it's because you're trying to access resources from another origin (i.e., http://localhost:3000 and http://localhost:8080), and the origin you're requesting resources from does not allow it.

To tell the development server to proxy any unknown requests to our API server in development, we can set up a proxy in our React app's package.json file. In client/package.json, we'll add a proxy for http://localhost:8080 (where our Node app is running).

```json
{
  "name": "client-app",
  "proxy": "http://localhost:8080",
  "dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "react-scripts": "5.0.1"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  }
}
```

Now, if we restart the server and send a request to our Node app's /refresh_token endpoint (without http://localhost:8080) using fetch(), the CORS error should be resolved.

```js
fetch("/refresh_token")
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error(err));
```
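For completeness, here is a hedged sketch of what the Node side might look like. The /refresh_token endpoint is the one assumed throughout this post, and the payload is made up for illustration:

```js
// index.js: a minimal Express back-end matching the setup above
const express = require("express");
const app = express();

app.get("/refresh_token", (req, res) => {
  // hypothetical payload; a real app would issue a fresh token here
  res.json({ token: "new-token", expiresIn: 3600 });
});

app.listen(8080, () => {
  console.log("API listening on http://localhost:8080");
});
```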
Next time you're working on a monorepo project like this, try these three tips to streamline your development workflow.

That's all folks! I hope this helps you become a better dev!

Introduction to .env files

2023-03-09
javascriptweb-developmentprogramming

Imagine having to pay nearly $148 million for a data breach that exposed the data of some 57 million users 😱😰 Well, that's what happened to Uber not long ago, and the culprit was none other than a hard-coded secret published openly for any bad actor to exploit. That's why in this post we're going to learn what .env files are and how we can work them into our JavaScript projects.

Context

Today, millions of software developers keep their secrets (i.e., credentials such as access keys and tokens to services used by programs) safe with .env files. For those unfamiliar with the topic, .env files were introduced in 2012 as part of a solution to the hard-coded secret problem mentioned in the introductory paragraph above.

Instead of sending secrets along with their codebase to the cloud, developers could now send their codebase to the cloud and keep their secrets separated on their machines in key-value format in .env files; this separation reduced the risk of bad actors getting their hands on sensitive credentials in the cloud. To run programs, developers would now just need to pull their latest codebase from the remote repository and inject the secrets contained in their local .env files into the pulled code.

Unless a development team is small and "skeletal" and doesn't care much about DevOps, it typically maintains multiple "environments" for its codebase to ensure that changes are well tested before being pushed to production to interact with end users. In the case of multiple environments, developers may choose to employ multiple .env files to store credentials, one for each of those environments (for example, one .env file to hold development database keys and another to hold production database keys).

To summarize, .env files contain credentials in key-value format for the services used by the program they are building. They are meant to be stored locally and not uploaded to online code repositories for everyone to read. Each developer on a team typically maintains one or more .env files for each environment.

Usage

In this post, we'll look at how to use a .env file in a basic project, assuming you're using Node.js and git for version control; this should apply to other languages as well. Feel free to skip this section if you're not interested in the technicalities of how to use a .env file.

To get started, head to the root of your project folder and create an empty .env file containing the credentials you'd like to inject into your codebase. It might look something like this:

```
SECRET_1=924a137562fc4833be60250e8d7c1568
SECRET_2=cb5000d27c3047e59350cc751ec3f0c6
```

Next, you'll want to ignore the .env file so that it doesn't get committed to git. If you haven't already, create a .gitignore file. It should look something like this:

```
.env
```

Now, to inject the secrets into your project, you can use a popular module like dotenv; it will parse the .env file and make your secrets accessible within your codebase under the process object. Go ahead and install the module:

```bash
npm install dotenv
```

Import the module at the top of the startup script for your codebase:

```js
require("dotenv").config();
```

That's it! You can now access secrets anywhere in your codebase:

```js
// display the value of SECRET_1 in your code
console.log(process.env.SECRET_1);
// -> 924a137562fc4833be60250e8d7c1568

// display the value of SECRET_2 in your code
console.log(process.env.SECRET_2);
// -> cb5000d27c3047e59350cc751ec3f0c6
```

Excellent. You have successfully added a .env file to your project with some secrets and accessed those secrets in your codebase.
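If you maintain one .env file per environment, as described earlier, one common approach is to point dotenv at a file chosen by NODE_ENV. A minimal sketch; the file naming scheme and the APP_PORT variable are just examples:

```js
// config.js: load a different .env file per environment (illustrative)
const path = require("path");

// e.g. .env.development or .env.production, depending on NODE_ENV
const envFile = `.env.${process.env.NODE_ENV || "development"}`;
require("dotenv").config({ path: path.resolve(process.cwd(), envFile) });

console.log(process.env.APP_PORT); // whatever the selected file defines
```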
And when you push your code via git, your secrets remain on your machine.

Challenges

While simple and powerful, .env files can be problematic when managed incorrectly in the context of a larger team. Imagine having to distribute and track hundreds of keys to your software development team. On a simplified level, between Developer_1 and Developer_2, here's what could happen:

- Developer_1 could add an API key to their local .env file and forget to tell Developer_2 to add it to theirs. This costs Developer_2 15 minutes down the road debugging why their code is crashing, only to realize it's because of the missing API key.
- Developer_2 could ask Developer_1 to send them the API key so they can add it to their .env file, after which Developer_1 might send it via text or email. This unnecessarily puts your organization at risk of bad actors waiting precisely to intercept the API key.

Unfortunately, these challenges are common and even have a name: secret sprawl. Over the past few years, many companies have attempted to solve this problem. HashiCorp Vault is a product that securely stores secrets for large enterprises; however, it is too expensive, cumbersome, and downright overkill for the average developer who just needs a fast and secure way to store these secrets. Simpler solutions exist, such as Doppler and the new dotenv-vault, but they often lack the security infrastructure needed to gain mass adoption.

Let me know in the comments what tools or services you use to easily and safely solve secret sprawl.

That's all folks! I hope this helps you become a better dev!

Some methods beyond console.log in JavaScript

2023-02-17
javascriptweb-developmentprogramming

Often, during debugging, JavaScript developers tend to use the console.log() method to print values. But there are some other console methods that make life much easier. Want to know what these methods are? Let's get to know them!

1. console.table()

Displaying long arrays or objects is a headache with console.log(), but console.table() gives us a much more elegant way to do it.

```js
// Matrix
const matrix = [
  ["apple", "banana", "cherry"],
  ["Rs 80/kg", "Rs 100/kg", "Rs 120/kg"],
  ["5 ⭐", "4 ⭐", "4.5 ⭐"],
];
console.table(matrix);

// Nested objects
class Person {
  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }
}

const family = {};
family.mother = new Person("Jane", "Smith");
family.father = new Person("John", "Smith");
family.daughter = new Person("Emily", "Smith");
console.table(family);
```

2. console.trace()

Having trouble debugging a function? Wondering how the execution flows? console.trace() is your friend!

```js
function outerFunction() {
  function innerFunction() {
    console.trace();
  }
  innerFunction();
}
outerFunction();
```

3. console.error() and console.warn()

Tired of boring logs? Spice things up with console.error() and console.warn().

```js
console.error("This is an error message");
console.warn("This is a warning message");
console.log("This is a log message");
```

4. console.assert()

This is another brilliant debugging tool! If the assertion fails, the console prints the trace.

```js
function func() {
  const a = -1;
  console.assert(a === -1, "a is not equal to -1");
  console.assert(a >= 0, "a is negative");
}
func();
```

5. console.time(), console.timeEnd(), and console.timeLog()

Need to check how long something is taking? The timer methods are there to rescue you!

```js
console.time("timeout-timer");

setTimeout(() => {
  console.timeEnd("timeout-timer");
}, 1000);

setTimeout(() => {
  console.timeLog("timeout-timer");
}, 500);
```

NOTE: setTimeout callbacks are not executed exactly on schedule, which results in a slight deviation from the expected time.

That's all folks! I hope this helps you become a better dev! You can find me as @khriztianmoreno on Twitter and GitHub.

A journey to Single Page Applications through React and JavaScript

2022-09-18
javascriptweb-developmenttalks

Last June, I had the opportunity to share again in person with the MedellinJS community, which I appreciate very much.

This time, I talked about the JavaScript concepts that are necessary to understand before working with a Single Page Application (SPA).

<iframe width="560" height="315" src="https://www.youtube.com/embed/6opIHgRqWPo?si=xFM0sbf6w8qKQkR3" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

I hope this was helpful and/or taught you something new!

React useEffect

2022-09-05
javascriptreacthooks

useEffect is probably the most confusing and misunderstood hook in React. Today I want to clarify that for you. We use hooks all the time at Make It Real, and understanding useEffect is crucial if we are going to write modern-style React code.

Next, we will see:

- What is useEffect?
- How to run an effect on every render
- How to run an effect only on the first render
- How to run an effect on the first render and re-run it when a "dependency" changes
- How to run an effect with cleanup

What is useEffect?

The useEffect hook allows us to perform side effects in our function components. Side effects are essentially anything where we want an "imperative" action to happen. This includes things like:

- API calls
- Updating the DOM
- Subscribing to event listeners

All of these are side effects that we might need a component to perform at different times.

Running useEffect on every render

The useEffect hook does not return any value, but it takes two arguments. The first is mandatory and the second is optional. The first argument is the callback function of the effect we want the hook to execute (i.e., the effect itself). Suppose we wanted to place a console.log() message inside the useEffect callback.

```js
import { useEffect } from "react";

export const FunctionComponent = () => {
  useEffect(() => {
    console.log("run for every component render");
  });

  return null; // component UI goes here
};
```

By default, the effect set in the useEffect hook runs when the component renders for the first time and after every update. If we run the above code, we will notice that the console.log('run for every component render') message is logged as our component renders. If our component ever re-renders (for example, from a state change with something like useState), the effect runs again.

Sometimes, re-running an effect on every render is exactly what you want. But most of the time, you only want to run the effect in certain situations, such as on the first render.

How to run useEffect only on the first render

The second argument of the useEffect hook is optional and is a dependency list that allows us to tell React to skip applying the effect until certain conditions are met. In other words, the second argument of the useEffect hook allows us to limit when the effect will run. If we simply place an empty array as the second argument, this is how we tell React to only run the effect on the initial render.

```js
import { useEffect } from "react";

export const FunctionComponent = () => {
  useEffect(() => {
    console.log("run only for first component render (i.e., component mount)");
  }, []);

  return null; // component UI goes here
};
```

With the above code, the console.log() message will only fire when the component mounts for the first time and will not re-fire, even if the component re-renders multiple times.

This is much more "efficient" than running on every render, but isn't there a happy medium? What if we want to re-run the effect if something changes?

Running useEffect on the first render and re-running it when a dependency changes

Instead of making an effect run once at the beginning and on every update, we can restrict the effect to run only at the beginning and when a certain dependency changes.

Suppose we wanted to trigger a console.log() message every time the value of a state property changes. We can achieve this by placing the state property as a dependency of the effect callback.
See the following code example:

```js
import { useState, useEffect } from "react";

export const FunctionComponent = () => {
  const [count, setCount] = useState(0);

  useEffect(() => {
    console.log("run for first component render and re-run when 'count' changes");
  }, [count]);

  return (
    <button onClick={() => setCount(count + 1)}>
      Click to increment count and trigger effect
    </button>
  );
};
```

Above, we have a button in the component template responsible for changing the value of the count state property when clicked. Each time the count state property changes (i.e., each time the button is clicked), we will notice that the effect callback runs and the console.log() message fires.

Running an effect with cleanup

An effect callback runs on the initial render and whenever we specify the effect should run. The useEffect hook also provides the ability to run a cleanup after the effect. This can be done by specifying a return function at the end of our effect.

```js
import { useState, useEffect } from "react";

export const FunctionComponent = () => {
  const [count, setCount] = useState(0);

  useEffect(() => {
    console.log("run for first component render and re-run when 'count' changes");

    return () => {
      console.log("run before the next effect and when component unmounts");
    };
  }, [count]);

  return (
    <button onClick={() => setCount(count + 1)}>
      Click to increment count and trigger effect
    </button>
  );
};
```

In the above example, we will notice that the cleanup function message fires before the next effect runs. Additionally, if our component ever unmounts, the cleanup function will also run.

A good example of when we might need a cleanup is when we set up a subscription in our effect but want to remove the subscription whenever the next subscription call is to be made, to avoid memory leaks.

These are mainly all the different ways the useEffect hook can be used to run side effects in components. I invite you to check out this visual guide to useEffect by Alex Sidorenko that illustrates these concepts through a series of GIFs that are both clever and effective, especially for visual learners. There is also a visualization of how first-class functions work if you want more.

I hope this has been helpful and/or taught you something new!

Why Storybook? The component development tool used by over 30,000 projects

2022-08-22
javascriptstorybookreact

Storybook is a tool for developing components and user interfaces faster than ever. Storybook is incredibly versatile: you can use it with a variety of JavaScript libraries and frameworks, not just React. It is available for Vue, React, Svelte, Angular, and Ember.js.

If you have been developing your components the old-fashioned way, in your text editor or IDE, a tool like Storybook unlocks greater productivity when developing components. Next, you will learn what Storybook is, how it works, and whether it is suitable for your team.

The problems of developing components traditionally

Let's start by looking at the friction involved in the typical component development process:

1. You receive a task to develop a feature: let's say it's a form on the checkout page.
2. Then, you need to set up the development environment: connect to the VPN, run the backend, run the frontend, etc.
3. Finally, you get to the page where the feature will live.

It is cumbersome to navigate between multiple pages, fill out forms, and click buttons every time you need to get to where the feature should be. Sometimes, your components have multiple states, for example, loading, success, and error. It is not always easy to replicate all the states of a component, which leads you to modify the component code just to force a specific state.

Storybook isolates your components: easier component debugging

You may have gone through these situations and encountered the pain involved in this type of component development workflow. Most of the time, while developing, you want to focus on the component you are creating, so other elements on a page become noise. Having a way to quickly access any component or feature, and also being able to simulate all use cases, is incredibly beneficial and saves you a lot of time. Storybook provides you with this type of component isolation so you can work only on the component you have in mind, without having to interact with other components.

What is Storybook?

Storybook is an open-source tool that helps you develop user interface components in isolation. It runs on your codebase, but separately from your application, so it works like a sandbox, allowing developers not to be distracted by incomplete APIs, unstable data, and other external dependencies. It integrates with frameworks like React, Vue, Svelte, Angular, and others.

Think of Storybook as a real book, with an index of pages that links to the user interface components. Each component has stories to tell about itself, and these stories represent the different states of the component's user interface. Regardless of the situation, even if you are offline, you will be able to access that page and easily find and play with the components.

Due to its productivity and collaboration advantages, Storybook is used by more than 30,000 open-source projects, especially component libraries. Many tech companies, such as Airbnb, Atlassian, and JetBrains, are among its users.

Who is Storybook for?

Some people seem to think that Storybook is a tool only for component library developers, and that is certainly not the case. Storybook helps us build everything from the simplest and most atomic component, like a button or an input, to complex features or entire pages.

Since Storybook helps us summarize the user interface of applications, designers and QA can benefit from it. With Storybook, you can facilitate the development of a design system and share a single language with designers. QA can get an overview and test functionalities in isolation.
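To give you a feel for what a "story" looks like in practice, here is a minimal sketch of a story file in Component Story Format; the Button component and its variants are hypothetical:

```js
// Button.stories.jsx: a minimal story file sketch (hypothetical Button component)
import React from "react";
import { Button } from "./Button";

export default {
  title: "Components/Button",
  component: Button,
};

// Each named export is a story: one state of the component
export const Primary = () => <Button variant="primary">Save</Button>;
export const Disabled = () => <Button disabled>Save</Button>;
```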
Storybook can even be used to demonstrate functionality to stakeholders, as if it were a demo.

Many companies have made their Storybooks public. They are not only an inspiration but also a learning guide for teams new to Storybook, and you can find a list of public Storybooks here.

How it works

From a technical standpoint, Storybook is essentially a React application that runs on your codebase, separately from your main application. You start it by running a CLI command. It will look for files in your codebase that contain a .stories.* extension, gather all those components, and display them in a nice user interface.

Suppose you are creating, for example, a restaurant card. You would have a RestaurantCard.stories.tsx file, which represents the component with mocked properties for each scenario.

It is important to note that Storybook does not produce any production code. Your .stories.tsx files are used solely for development purposes.

I hope this was helpful and/or taught you something new!

Redux explained in a simple and succinct way for React developers

2022-08-10
reactreduxjavascript

Redux is a widely used state management library for React and TypeScript applications. It's easier than ever to manage state in React thanks to the useState React Hook, as well as the Context API. However, when your codebase grows very large, you'll need a more powerful and structured state management solution, rather than an ad-hoc one. That's where Redux can help.

Why do you need Redux?

When working with React, you usually end up with state that is used globally throughout the entire application. One of the approaches to sharing state across the component tree is to use the Context API. We often use it in combination with hooks like useReducer and useState to manage global application state.

This approach works, but it can only take you so far. In the end, you have to invent your own ways to manage side effects, debug, and split state management code into modules so that it doesn't become an incomprehensible mess. A better idea is to use specialized tools. One such tool for managing global application state is Redux.

How Redux works

Redux is a state management framework that is based on the idea of representing the global state of the application as a reducer function. In Redux, to manage state, we define a function that accepts two arguments: state, for the previous state, and action, the object that describes the state update.

```js
function reducer(state = "", action) {
  switch (action.type) {
    case "SET_VALUE":
      return action.payload;
    default:
      return state;
  }
}
```

This reducer represents a string value. It handles only one type of action: SET_VALUE. If the received action's type field is not SET_VALUE, the reducer returns the state unchanged.

After defining the reducer, we create the store using Redux's createStore method.

```js
import { createStore } from "redux";

const store = createStore(reducer, "Initial Value");
```

The store provides a subscribe method that allows us to subscribe to updates to the store.

```js
store.subscribe(() => {
  const state = store.getState();
  console.log(state);
});
```

Here, we've passed a callback that logs the state value to the console. To update the state, we dispatch an action:

```js
store.dispatch({
  type: "SET_VALUE",
  payload: "New value",
});
```

Here we pass an object representing the action. Each action is required to have a type field and, optionally, a payload. Usually, instead of creating actions in place, people define action creator functions:

```js
const setValue = (value) => ({
  type: "SET_VALUE",
  payload: value,
});
```

And this is the essence of Redux.

Why can't we use the useReducer hook instead of Redux?

Since version 16.8, React supports Hooks. One of them, useReducer, works very similarly to Redux. It's easy to manage application state using a combination of useReducer and the React Context API. So why do we need Redux if we have a native tool that also allows us to represent state as a reducer? If we make it available throughout the application using the Context API, won't that be enough?

Redux offers some important advantages:

Browser tools: You can use Redux DevTools to debug your Redux code. It allows us to see the list of dispatched actions, inspect the state, and even travel back in time. You can step through the history of actions and see how the state dealt with each of them.

Handling side effects: With useReducer, you have to invent your own ways to organize the code that makes network requests. Redux provides the Middleware API to handle that. Additionally, there are tools like Redux Thunk that make this task even easier.

Testing: Since Redux is based on pure functions, it is easy to test.
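For instance, testing the reducer above is just calling a function and checking its return value. A minimal sketch using Node's built-in assert module, assuming the reducer is exported from a reducer.js file (no test framework required):

```js
// reducer.test.js: a sketch; assumes the reducer above is exported from ./reducer.js
const assert = require("assert");
const reducer = require("./reducer");

// Known action: the reducer should return the payload
assert.strictEqual(
  reducer("", { type: "SET_VALUE", payload: "New value" }),
  "New value"
);

// Unknown action: the reducer should return the state unchanged
assert.strictEqual(reducer("Initial", { type: "UNKNOWN" }), "Initial");

console.log("reducer tests passed");
```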
All testing comes down to checking the output against the given inputs.

Patterns and code organization: Redux is well studied and there are recipes and best practices you can apply. There is a methodology called Ducks that you can use to organize Redux code.

Building with Redux

Now that you've seen examples of what Redux does and how it works, you're ready to use it in a real project.

A better way to build React component libraries

2022-07-05
reactjavascriptweb-development

Today we'll quickly go over four programming patterns that apply to shared components in React. Using these allows you to create a well-structured shared component library. The benefit you get is that developers in your organization can easily reuse components across numerous projects. You and your team will be more efficient.

Common patterns

In this post, I show you four API patterns that you can use with all your shared components. These are:

- JSX children pass-through
- React forwardRef API
- JSX prop-spreading with TypeScript
- Opinionated prop defaults

Pattern 1: JSX children pass-through

React provides the ability to compose elements using the children prop. The shared component design leans heavily on this concept. Allowing consumers to provide the children whenever possible makes it easier for them to provide custom content and other components. It also helps align component APIs with those of native elements.

Let's say we have a Button component to start with. Now we allow our Button component to render its children, like this:

```tsx
// File: src/Button.tsx
export const Button: React.FC = ({ children }) => {
  return <button>{children}</button>;
};
```

The definition of React.FC already includes children as a valid prop. We pass it directly to the native button element.

Here is an example using Storybook to provide content to the Button.

```tsx
// File: src/stories/Button.stories.tsx
const Template: Story = (args) => (
  <Button {...args}>my button component</Button>
);
```

Pattern 2: forwardRef API

Many components have a one-to-one mapping to an HTML element. To allow consumers to access that underlying element, we provide a ref prop using the React.forwardRef() API. Providing a ref is not necessary for day-to-day React development, but it is very useful within shared component libraries. It allows for advanced functionality, such as positioning a tooltip relative to our Button with a positioning library.

Our Button component renders a single HTMLButtonElement (button). We provide a reference to it with forwardRef().

```tsx
// File: src/buttons/Button.tsx
export const Button = React.forwardRef<
  HTMLButtonElement,
  { children?: React.ReactNode }
>(({ children }, ref) => {
  return <button ref={ref}>{children}</button>;
});

Button.displayName = "Button";
```

To help TypeScript consumers understand what element is returned from the ref object, we provide a type variable that represents the element we are passing it to, HTMLButtonElement in this case.

Pattern 3: JSX prop-spreading

Another pattern that increases component flexibility is prop spreading. Prop spreading allows consumers to treat our shared components as drop-in replacements for their native counterparts during development.

Prop spreading helps with the following scenarios:

- Providing accessible props for certain content.
- Adding custom data attributes for automated testing.
- Using a native event that is not defined in our props.

Without prop spreading, each of the above scenarios would require explicit attributes to be defined. Prop spreading helps ensure that our shared components remain as flexible as the native elements they use internally.

Let's add prop spreading to our Button component.

```tsx
// File: src/buttons/Button.tsx
export const Button = React.forwardRef<
  HTMLButtonElement,
  React.ComponentPropsWithoutRef<"button">
>(({ children, ...props }, ref) => {
  return (
    <button ref={ref} {...props}>
      {children}
    </button>
  );
});
```

We can reference our remaining props with the spread syntax and apply them to the button.
React.ComponentPropsWithoutRef is a type utility that documents the valid props of a button element for our TypeScript consumers. Some examples of this type checking in action:

```tsx
// Pass - e is typed as `React.MouseEvent<HTMLButtonElement, MouseEvent>`
<Button onClick={(e) => { console.log(e); }} />

// Pass - aria-label is typed as `string | undefined`
<Button aria-label="My button" />

// Fail - type "input" is not assignable to
// `"button" | "submit" | "reset" | undefined`
<Button type="input" />
```

Pattern 4: Opinionated defaults

For certain components, you may want to map default attributes to specific values. Whether to reduce bugs or improve the developer experience, providing a set of default values is specific to an organization or team. If you find the need to default certain props, you should ensure that it is still possible for consumers to override those values if necessary.

A common complexity encountered with button elements is the default value of type, "submit". This default type often accidentally submits surrounding forms and leads to difficult debugging scenarios. Here's how we set the type attribute to "button" by default. Let's update the Button component to return a button with the updated type.

```tsx
// File: src/buttons/Button.tsx
return (
  <button ref={ref} type="button" {...props}>
    {children}
  </button>
);
```

By placing the default props before the prop spread, we ensure that any value provided by consumers is prioritized.

Look at some open-source libraries

If you're building a component library for your team, take a look at the most popular open-source component libraries to see how they use the patterns above. Here's a list of some of the top open-source React component libraries to look into:

- Ant Design
- Rainbow UI
- Grommet

Until next time!

NPM dependencies vs devDependencies

2022-06-20
javascriptweb-developmentnpm

tl;dr: Dependencies are required by our application at runtime. Packages like react, redux, and lodash are all examples of dependencies. devDependencies are only required to develop or build your application. Packages like babel, enzyme, and prettier are examples of devDependencies.

npm install

The real difference between dependencies and devDependencies is seen when you run npm install.

If you run npm install from a directory containing a package.json file (which you normally do after cloning a project, for example):

- ✅ All packages listed in dependencies will be installed
- ✅ All packages listed in devDependencies will be installed

If you run npm install <package-name> (which you normally do when you want to add a new package to an existing project), e.g. npm install react:

- ✅ The package, along with its dependencies, will be installed
- ❌ The package's devDependencies will not be installed

Transitive dependencies

If package A depends on package B, and package B depends on package C, then package C is a transitive dependency of package A. What that means is that for package A to run properly, it needs package B installed; and for package B to run properly, it needs package C installed. Why do I mention this? Because dependencies and devDependencies treat transitive dependencies differently: when npm installs a package, it also installs that package's dependencies (and theirs, recursively), but it never installs the devDependencies of your dependencies, since those are only needed to develop that package, not to run it.

Specifying dependencies vs. devDependencies

Starting with npm 5, when you run npm install <package-name>, that package is automatically saved within dependencies in your package.json file. If you want the package to be included in devDependencies instead, add the --save-dev flag:

```bash
npm install prettier --save-dev
```

Installing on a production server

Often, you will need to install your project on a production server, and you won't want to install devDependencies there, as you obviously won't need them in production. To install only the dependencies (and not devDependencies), you can use the --production flag:

```bash
npm install --production
```

I hope this was helpful and/or made you learn something new!

A practical handbook on JavaScript module systems

2022-06-12
javascriptweb-development

Today I'll give you a practical introduction to the module systems we use when building libraries in JavaScript. As a web application or library grows and more features are added, modularizing the code improves readability and maintainability. This quick guide will give you an incisive look at the options available for creating and consuming modules in JavaScript.

If you've ever wondered what the pros and cons of AMD, ESM, or CommonJS are, this guide will give you the information you need to confidently choose among all the options.

A history of JavaScript modules

With no built-in native functions for namespaces and modules in early versions of the JavaScript language, different module formats have been introduced over the years to fill this gap. The most notable ones, which I'll show you how to use in your JavaScript code below, are:

- Immediately Invoked Function Expression (IIFE)
- CommonJS (CJS)
- Asynchronous Module Definition (AMD)
- Universal Module Definition (UMD)
- ECMAScript Modules (ESM)

The selection of a module system is important when developing a JavaScript library. For library authors, the choice of module system affects user adoption and ease of use, so you will want to be familiar with all the possibilities.

1. Immediately Invoked Function Expression (IIFE)

One of the earliest forms of exposing libraries in the web browser, Immediately Invoked Function Expressions (IIFEs) are anonymous functions that are executed immediately after being defined.

```js
(function () {
  // Module's implementation code
})();
```

A common design pattern that leverages IIFEs is the Singleton pattern, which creates a single object instance and namespaces code. This object serves as a single point of access to a specific set of functions. For real-world examples, look no further than the Math object or the jQuery library.

Pros: Writing modules this way is convenient and compatible with older browsers. In fact, you can safely concatenate and bundle multiple files containing IIFEs without worrying about naming and scope collisions.

Cons: IIFE modules are loaded synchronously, which means that properly ordering module files is critical; otherwise, the application will break. For large projects, IIFE modules can be difficult to manage, especially if you have a lot of overlapping and nested dependencies.

2. CommonJS (CJS)

Node.js's default module system, CommonJS (CJS), uses the require syntax for importing modules, and the module.exports and exports syntax for default and named exports, respectively. Each file represents a module, and all of the module's local variables are private, since Node.js wraps the module inside a function wrapper.

For example, this module...

```js
const { PI, pow } = Math;

function calculateArea(radius) {
  return PI * pow(radius, 2);
}

module.exports = calculateArea;
```

becomes...

```js
(function (exports, require, module, __filename, __dirname) {
  const { PI, pow } = Math;

  function calculateArea(radius) {
    return PI * pow(radius, 2);
  }

  module.exports = calculateArea;
});
```

Not only does the module keep its variables in private scope, it also has access to exports, require, and module. __filename and __dirname are module-scoped and contain the filename and directory name of the module, respectively.

The require syntax allows you to import built-in Node.js modules or locally installed third-party modules.

Pros: CommonJS require statements are synchronous, meaning that CommonJS modules are loaded synchronously.
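For example, consuming the module above is just a single synchronous require call; a quick sketch, assuming the module is saved as area.js in the same directory:

```js
// main.js: consuming the CommonJS module above (assumes it is saved as ./area.js)
const calculateArea = require("./area");

console.log(calculateArea(5)); // 78.53981633974483
```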
As long as it is the only entry point for the application, CommonJS automatically knows how to order modules and handle circular dependencies.

Cons: Like IIFEs, CommonJS was not designed to generate small bundles. Bundle size was not considered in the design of CommonJS, as it is primarily used to develop server-side applications. For client-side applications, code must be downloaded first before running. The lack of tree shaking makes CommonJS a suboptimal module system for client-side applications.

3. Asynchronous Module Definition (AMD)

Unlike IIFE and CommonJS, Asynchronous Module Definition (AMD) loads modules and their dependencies asynchronously. Originating from the Dojo Toolkit, AMD is designed for client-side applications and requires no additional tooling. In fact, all you need to run applications following the AMD module format is the RequireJS library, an in-browser module loader. That's it. Here's a simple example that runs a simple React application, structured with AMD, in the browser.

```html
<!-- index.html -->
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>React + AMD</title>
  </head>
  <body>
    <div id="root"></div>
    <script
      type="text/javascript"
      src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.6/require.min.js"
    ></script>
    <script type="text/javascript" src="main.js"></script>
  </body>
</html>
```

Here's what the JavaScript looks like:

```js
// main.js
requirejs.config({
  paths: {
    react: "https://unpkg.com/react@15.3.2/dist/react",
    "react-dom": "https://unpkg.com/react-dom@15.3.2/dist/react-dom",
  },
});

requirejs(["react", "react-dom"], (React, ReactDOM) => {
  ReactDOM.render(
    React.createElement("p", {}, "Greetings!"),
    document.getElementById("root")
  );
});
```

Calling the requirejs or define methods registers the factory function (the anonymous function passed as the second argument to these methods). AMD runs this function only after all dependencies have been loaded and executed.

Pros: AMD allows multiple modules to be defined within a single file and is compatible with older browsers.

Cons: AMD is not as popular as more modern module formats such as ECMAScript Modules and Universal Module Definition.

4. Universal Module Definition (UMD)

For libraries that support both client-side and server-side environments, the Universal Module Definition (UMD) offers a unified solution for making modules compatible with many different module formats, such as CommonJS and AMD.

Here's UMD in action, from the React development library:

```js
(function (root, factory) {
  if (typeof define === "function" && define.amd) {
    // Checks for RequireJS's `define` function.
    // Register as an anonymous module.
    define(["exports"], factory);
  } else if (
    typeof exports === "object" &&
    typeof exports.nodeName !== "string"
  ) {
    // Checks for CommonJS.
    // Calls the module factory immediately.
    factory(exports);
  } else {
    // Register browser globals.
    global = root || self;
    factory((global.React = {}));
  }
})(this, function (exports) {
  "use strict";
  // Place React's module code here.
  // ...
});
```

If the IIFE detects a define function in the global scope with an amd property, it runs the module as an AMD module. If the IIFE detects an exports object in the global scope and a nodeName property within exports, it runs the module as a CommonJS module.

Pros: Regardless of whether an application consumes your library as a CommonJS, AMD, or IIFE module, UMD conditionally checks the format of the module being used at runtime and executes code specific to the detected module format.

Cons: The UMD template code is an intimidating-looking IIFE and is initially challenging to use. However, UMD itself is not conceptually complicated.

5. ECMAScript Modules (ESM)

ECMAScript Modules (ESM), the most recently introduced module format, is the standard and official way of handling modules in JavaScript. This module format is commonly used in TypeScript applications. Like CommonJS, ESM provides several ways to export code: default exports or named exports.

```js
// circle.js
export function calculateArea(radius) {
  return Math.PI * Math.pow(radius, 2);
}

export function calculateCircumference(radius) {
  return 2 * Math.PI * radius;
}
```

Importing named exports separately tells the module bundler which parts of the imported module should be included in the generated code. Any unimported named exports are skipped. This reduces the library size, which is useful if your library relies on a few methods from a large utility library like lodash.

Now, in some file in the same directory as ./circle.js, we can import the module as follows:

```js
import { calculateArea, calculateCircumference } from "./circle.js";

console.log(calculateArea(5));
console.log(calculateCircumference(5));
```

Pros: Module bundlers support ESM and optimize code using techniques like tree shaking (removing unused code from the final result), which other module formats do not support. Module loading and parsing are asynchronous, but module execution is synchronous.

Cons: This is the newest core module system, and as such, some libraries have not yet adopted it.

Building your own React/JavaScript library

As you can imagine, choosing the right module system becomes important when building your own React library. Personally, with tools like Babel nowadays, we could work with ECMAScript Modules everywhere, but I am a proponent of using CommonJS in Node and ECMAScript Modules (ESM) on the front-end.

Introduction to Volta, the fastest way to manage Node environments

2022-05-27
javascriptweb-developmentprogramming

Volta is a tool that opens up the possibilities for a smoother development experience with Node.js. This is especially relevant for working in teams. Volta allows you to automate your Node.js development environment. It lets your team use the same consistent versions of Node and other dependencies. Even better, it keeps versions consistent across production and development environments, eliminating the subtle bugs that come with version mismatches.

Volta eliminates "It works on my machine..." problems.

Version mismatches cause headaches when developing in teams. Let's assume this scenario: Team X built their application on local machines running Node 10, but the build pipeline defaulted to the lowest version of Node they had on hand, Node 6, and the application would not start in production. They had to revert the deployment and figure out what went wrong; it turned into a very long night. If they had used Volta, this could have been avoided.

How does Volta work?

Volta is "a simple way to manage your JavaScript command line tools". It makes managing Node, npm, yarn, or other JavaScript executables shipped as part of packages really easy.

Volta has a lot in common with tools like NVM, but NVM is not the easiest to set up initially and, more importantly, the developer using it still has to remember to switch to the correct version of Node for the project they are working on. Volta, on the other hand, is easy to install and takes the thinking part out of the equation: once Volta is configured in a project and installed on a local machine, it will automatically switch to the appropriate versions of Node.

Not only that, but it also lets you define yarn and npm versions in a project, and if the version of Node defined in a project is not downloaded locally, Volta will go out and download the appropriate version. But when you switch to another project, Volta will either fall back to the presets in that project or revert to the default environment variables.

Volta in action

Let's give Volta a spin. First, create a new React application with Create React App by running the following command from a terminal:

```bash
npx create-react-app volta-sample-app
```

Once you have created your new React application, open the code in an IDE and start it through the command line:

```bash
npm run start
```

If all goes according to plan, you will see a spinning React logo when you open a browser at http://localhost:3000/.

Now that we have an application, let's add Volta.

Download Volta locally

To install Volta, run the following command:

```bash
curl https://get.volta.sh | bash
```

If you are on Windows, download and run the Windows installer and follow the instructions.

Define your environment variables

Before we add our Volta-specific versions of Node and npm to our project, let's see what the default environment variables are.

Get a baseline reading

In a terminal at the root of your project, run the following command:

```bash
node -v && npm -v
```

For me, the default versions of Node and npm are v14.18.1 and v6.14.15, respectively. With our baseline set, we can change our versions just for this project with the help of Volta.

Setting a Node.js version

We'll start with Node. Since v16 is the current version of Node, let's add that to our project. In our project, at the root level where our package.json file lives, run the following command:

```bash
volta pin node@16
```

Using volta pin [JS_TOOL]@[VERSION] pins that particular JavaScript tool at the specified version in our application's package.json.
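After pinning, the root package.json gains a "volta" field recording the resolved version. A minimal sketch of what it might look like (the exact minor and patch version depends on when you run the command):

```json
{
  "name": "volta-sample-app",
  "volta": {
    "node": "16.11.1"
  }
}
```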
After committing this to our repository with git, any future developer using Volta to manage dependencies will be able to read this from the repository and use exactly the same version.

With Volta, we can be as specific or generic as we want when defining versions, and Volta will fill in any gaps. I specified the major version of Node I wanted (16), and then Volta filled in the minor and patch versions for me. After pinning, you will see the following success message in your terminal: pinned node@16.11.1 in package.json.

Tip: pin your Node version to match the version of Node on your build server.

Setting an npm version

Now let's tackle our npm version. Still at the root of our project in the terminal, run this command:

```bash
volta pin npm
```

Without a version specified, Volta defaults to the latest LTS release to add to our project. The current LTS version for npm is 8, so now our project has npm v8.1.0 as its default version.

Verify the package.json

To confirm that the new versions of the JavaScript environment are part of our project, check the application's package.json file. Scroll down and you should see a new property named "volta". Inside the "volta" property there should be a "node": "16.11.1" and an "npm": "8.1.0" version.

From now on, any developer who has Volta installed on their machine and downloads this repository will automatically switch to these particular versions of node and npm.

To be doubly sure, you can re-run the first command we used before pinning our versions with Volta to see how our current development environment is configured:

```bash
node -v && npm -v
```

After that, your terminal should tell you that you are using those same versions: Node.js v16 and npm v8.

Watch the magic happen

Now, you can sit back and let Volta take care of things for you. If you want to see what happens when nothing is specified for Volta, try going up one level from the root of your project and check your Node and npm versions again. Let's open two terminals side by side: the first one inside our project with Volta versions, the other one level higher in our folder structure. Now run the following command in both of them:

```bash
node -v && npm -v
```

In our project, Node v16 and npm v8 are running, but outside the project, Node v14 and npm v6 are present. We did nothing more than change directories, and Volta took care of the rest.

By using Volta, we took the guesswork out of our JavaScript environment variables and actually made it harder for a member of the development team to use the wrong versions than the right ones.

Predictions 🧞‍♀️💻 2022

2022-01-04
programmingweb-developmentdiscuss

Some points you should pay attention to for this year 2022 that will surely have a high impact on the technology ecosystem:

- RIP Babel and Webpack: They will not disappear forever, but they will be largely replaced by new compiler tools that are faster and more intuitive, such as SWC, esbuild, and Vite.
- Serverless will help frontend developers become (real) fullstack developers: and (hopefully) get paid accordingly. Much of the serverless technology is based on V8 and is adopting Web APIs, so frontend developers will already be familiar with the key parts of the serverless infrastructure. Now, instead of spinning up an Express server and calling yourself a "fullstack developer", serverless will allow you to actually be one.
- Next.js will become less of a React meta-framework and more of a web meta-framework: Vercel has already hired Rich Harris (aka Lord of the Svelte) and has shared its plans for an edge-first approach to the web with any framework. It will lean even more on this in 2022, adapt to more JS frameworks/libraries (with pillowcases full of cash), and prepare for an IPO.
- No/low-code tools will dominate even more: We will probably continue to ignore them; meanwhile, more agencies and teenagers will make millions of dollars shipping sites without writing a line of code. In 2022, we'll also start to see more established software companies with "real developers" leveraging no-code or low-code tools, because the best code is the code you don't have to maintain.
- Meta will cede control of React: Just like when it created the GraphQL Foundation in 2018, Meta will create a React Foundation later this year and cede control of React. Unlike Microsoft, Amazon, or Google, Meta has never (successfully) monetized developers, so React is not a strategic priority for the company. That might be even more true now, with Zuck's eyes on the metaverse and Sebastian Markbåge leaving for Vercel.
- VC will solve open-source funding: At least, it will feel that way. With some pre-revenue/traction/pmf OSS projects raising seed rounds at valuations between $25-50MM, you'll want to dust off that old side project of yours. I don't know if it's sustainable (it's not), but it's a lot better than when we relied on Patreon to fund our critical web infrastructure.
- Netlify will acquire Remix: Bottom-up frameworks are the wave. Netlify will want the distribution, and Remix will want the... money. It would allow the Remix team to spend their time on what they are good at, Remix-the-framework, rather than Remix-the-business. The pairing would give them both a much better chance of catching up with Vercel/Next.js.

While all that is going on... we can continue to work quietly.

Pimp my Term - Mac

2020-05-03
tutorialbashterminal

As a Mac OS user, I enjoy working with the terminal and find it a particularly powerful tool. Therefore, I spent quite a bit of time customizing it, and here is my ultimate guide to terminal customization.

At first I thought I would just create a short post with some of the settings I like. But I had so many things I wanted to show that this started to turn into a considerably long post. So I've decided to publish it now, with as many tips as I can write, and I'll update it with new tips and tricks.

My terminal: recommended installations

- iTerm2
- Nerd Fonts - Hack Bold
- zsh
- zsh extensions: autosuggestions, syntax-highlighting
- Powerlevel10k
- lsd: the next-gen ls command
- ccat: a colorizing cat
- lolcat
- Neofetch: a command-line system information tool written in bash 3.2+

Let's start configuring all the tools we will need.

Prerequisites

First, you must install iTerm2. Then install brew. Now install oh-my-zsh: open iTerm2 and paste the following command:

```bash
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```

After we have this couple of things installed, let's get more into it.

Patched fonts

I want to start by talking about patched fonts, since many of the customizations I will explain later may depend on them. Patched fonts consist of regular fonts to which some additional symbols have been added. That way, you can display special icons (like your operating system icon) or add new shapes to your prompt.

The most successful project is nerd-fonts, which includes many of the most commonly used fonts, and a favorite of this project is Hack Bold.

Nerd Fonts is a project that patches developer-targeted fonts with a large number of glyphs (icons), specifically a large number of extra glyphs from popular 'iconic fonts' such as Font Awesome, Devicons, Octicons, and others.

To install these fonts on your Mac you can use brew:

```bash
brew tap homebrew/cask-fonts
brew cask install font-hack-nerd-font
```

I have seen that sometimes it does not install the fonts with cask; here is another option:

```bash
brew install --cask font-hack-nerd-font
```

Now, if you look in the folder where you just installed it, you will see that it appears there: ls ~/Library/Fonts.

Configure your terminal

Once you have downloaded Nerd Fonts, you can configure your terminal to use them. Configure iTerm2 to use the font by going to:

iTerm2 -> Preferences -> Profiles -> Text -> Font -> Change Font

Select the Hack Regular Nerd Font and adjust the size if desired. Also check the "Use a different font for non-ASCII text" box and select the font again. The application should now display the new font and icons.

Don't worry if you don't see a significant change in your terminal yet; this sets the stage for the next steps.

Colorize the terminal

On the road to the ultimate terminal, there is nothing that will improve its appearance more than customizing its color scheme, so this will be our starting point.
By searching the Internet, you will be able to find many themes, but the easiest way to apply them is to use Gogh. This tool requires no installation and allows you to choose your favorite colors from a long list of different pre-built schemes.

Simply copy and paste this command:

```bash
bash -c "$(curl -sLo- https://git.io/vQgMr)"
```

After selecting a theme, it will be installed and available for selection in your terminal:

iTerm2 -> Preferences -> Profiles -> Colors -> Color Presets

lsd, lolcat, and ccat

For some of the following tools to look good, your iTerm2 needs a minimum of contrast.

lsd is very much inspired by the colorls project, but with some small differences: for example, it is written in Rust, not Ruby, which makes it much faster. It requires the patched fonts described above (nerd-font and/or font-awesome).

To install lsd, just use brew and execute this line in your terminal:

```bash
brew install lsd
```

lolcat 🤣️ gets rainbows and unicorns everywhere! This tool is commonly used together with Neofetch, adding a stunning rainbow effect to your prompt.

To install lolcat, just use brew and execute this line in your terminal:

```bash
brew install lolcat
```

To verify that it is installed and working correctly, you can run this in the terminal:

```bash
ls | lolcat
```

ccat is the colorizing cat. It works similarly to cat, but displays content with syntax highlighting.

To install ccat, just use brew and execute this line in your terminal:

```bash
brew install ccat
```

Customize the shell prompt

Since you have installed a patched font as described above, you can now use all kinds of symbols to build your prompt. These fonts include many powerline symbols that let you fully customize your terminal without having to install any external plugins.

Powerlevel10k

This is a fast reimplementation of Powerlevel9k, with even some additional features. It even keeps the same variable names, so you won't need to change your configuration if you are coming from Powerlevel9k.

One thing I love about Powerlevel10k is that, if you don't already have settings, when you first start it up it shows you a guide that asks for your preferences. During this process, it shows several examples, making customization much easier.

To install it on Mac we have two ways: the first using brew, and the second, which is the one we will use, with Oh My Zsh. Paste the following line into your terminal:

```bash
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/powerlevel10k
```

Now you must tell zsh which theme to use. Edit the ~/.zshrc file with any editor of your preference, look for the ZSH_THEME property, and change it to ZSH_THEME="powerlevel10k/powerlevel10k".

We must reload this file to see the changes in our terminal:

```bash
source ~/.zshrc
```

Configuration wizard

Type p10k configure to access the built-in configuration wizard directly from your terminal. At the end of the wizard you will have a more personalized terminal and you will be very happy :)

Now let's continue adding more high-glamour details to our terminal.

Some aliases

Some of the commands you have already installed do, in fact, support color highlighting, for example: ls, grep, and diff.
In case you want these commands to always have the color option enabled, you can write aliases in your terminal configuration file (.bashrc / .zshrc) to force them:

```bash
alias l='ls -l'
alias la='ls -a'
alias lla='ls -la'
alias lt='ls --tree'
alias cat=ccat
```

Zsh customizations

Zsh is a much more configurable shell, with tons of plugins and themes that will make your terminal look amazing and even improve your workflow. For this shell, the customization possibilities are almost limitless, so I will simply explain the configuration I use.

One key difference with Zsh is that it doesn't come with preconfigured settings like other shells such as bash or fish, so I would suggest copying some of my settings as a starting point, particularly if you are installing it for the first time.

Configuring keybindings

One of the first things I quickly noticed when using Zsh is that many of the keybindings and shortcuts I was used to from bash would not work at all or would cause unexpected behavior. Even the HOME and END keys didn't work. So here are all the key combination settings I use:

```bash
bindkey '^[[2~' overwrite-mode
bindkey '^[[3~' delete-char
bindkey '^[[H' beginning-of-line
bindkey '^[[1~' beginning-of-line
bindkey '^[[F' end-of-line
bindkey '^[[4~' end-of-line
bindkey '^[[1;5C' forward-word
bindkey '^[[1;5D' backward-word
bindkey '^[[3;5~' kill-word
bindkey '^[[5~' beginning-of-buffer-or-history
bindkey '^[[6~' end-of-buffer-or-history
```

Useful add-ons

- Autosuggestions: suggests commands as you type, based on history and completions.
- Syntax highlighting: provides syntax highlighting for the Zsh shell. It highlights commands as they are typed at the Zsh prompt in an interactive terminal, which helps you check commands before executing them, particularly for syntax errors.

That's it, we're done setting up our terminal. I hope this has been useful and/or made you learn something new!

Testing framework - Node.js

2020-04-17
javascripttestingnodejs

Once an application is running in production, we might be afraid to make changes. How do we know that a new feature, a fix, or a refactor won't break existing functionality?

We can manually use our application to try to find bugs, but without maintaining an exhaustive checklist, it's unlikely we'll cover all possible failure points. And honestly, even if we did, it would take too long to run through our entire application after every commit.

By using a testing framework, we can write code that verifies our previous code still works. This allows us to make changes without fear of breaking expected functionality. But there are many different testing frameworks, and it can be difficult to know which one to use. Below, I will talk about three of them for Node.js: Tape, AVA, and Jest.

TAPE

Tape derives its name from its ability to provide structured results through TAP (Test Anything Protocol). The output of our runner is human-friendly, but other programs and applications cannot easily parse it; using a standard protocol allows for better interoperability with other systems.

Additionally, Tape has several convenience methods that allow us to skip and isolate specific tests, as well as verify additional expectations such as errors, deep equality, and throwing. Overall, the advantage of Tape is its simplicity and speed: it is a solid and straightforward harness that gets the job done without a steep learning curve.

Here is what a basic test with Tape looks like:

const test = require("tape");

test("timing test", (t) => {
  t.plan(2);

  t.equal(typeof Date.now, "function");
  const start = Date.now();

  setTimeout(function () {
    t.equal(Date.now() - start, 100);
  }, 100);
});

And if we run it, it looks like this:

$ node example/timing.js
TAP version 13
# timing test
ok 1 should be strictly equal
not ok 2 should be strictly equal
  ---
    operator: equal
    expected: 100
    actual:   107
  ...

1..2
# tests 2
# pass  1
# fail  1

The test() method expects two arguments: the name of the test and the test function. The test function receives the t object as an argument, and this object has methods we can use for assertions: t.ok(), t.notOk(), t.equal(), and t.deepEqual(), to name a few.

AVA

AVA has a concise API, detailed error output, embraces new language features, and uses process isolation to run tests in parallel. AVA is inspired by Tape's syntax and supports reporting through TAP, but it was developed to be more opinionated, provide more features, and run tests concurrently.

AVA will only run tests through the ava binary. With Tape we could run node my-tape-test.js, but with AVA we must first ensure that AVA is installed globally and available on the command line (e.g., npm i -g ava). Additionally, AVA is strict about how test files are named and will not run a file unless it ends with "test.js".

One thing to know about AVA is that it runs tests in parallel by default. This can speed up many test suites, but it is not ideal in all situations: when tests that read and write to the same database run simultaneously, they can affect each other.

AVA also has some helpful features that make setup and teardown easier: test.before() and test.after() methods for setup and cleanup, as well as test.beforeEach() and test.afterEach() methods that run before or after each test.
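For example, a database cleanup hook might look like this (a sketch; connectDb and clearDb are hypothetical helpers standing in for your own database code):

const test = require("ava");
const { connectDb, clearDb } = require("./db"); // hypothetical helpers

test.before(async () => {
  await connectDb(); // one-time setup before the whole file
});

test.beforeEach(async () => {
  await clearDb(); // start every test from a clean slate
});

test("creates a user", async (t) => {
  // ... insert a user and assert on the result
  t.pass();
});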
If we had to add more database tests, we could clear our database in a hook like this instead of in each individual test.

Here is what an AVA test looks like:

const test = require("ava");

test("foo", (t) => {
  t.pass();
});

test("bar", async (t) => {
  const bar = Promise.resolve("bar");
  t.is(await bar, "bar");
});

When iterating on tests, it can be useful to run AVA in "watch mode". This watches your files for changes and automatically reruns the tests. It works particularly well when we first create a failing test: we can focus on adding functionality without having to keep switching back to restart the tests.

AVA is very popular, and it's easy to see why. AVA is an excellent choice if we are looking for something that makes it easy to run tests concurrently, provides helpers like before() and afterEach(), and delivers better performance by default, all while maintaining a concise and easy-to-understand API.

Jest

Jest is a testing framework that has grown in popularity alongside React.js. The React documentation lists it as the recommended way to test React, as it allows using jsdom to easily simulate a browser environment. It also provides features to help mock modules and timers.

Although Jest is very popular, it is mainly used for front-end testing. It runs on Node.js, so it is capable of testing both browser-based code and Node.js applications and modules. However, keep in mind that using Jest to test Node.js server-side applications comes with caveats and additional configuration.

Overall, Jest has many features that can be attractive. Here are some key differences from Tape and AVA:

Jest does not behave like a normal Node.js module. The test file must be run with jest, and several functions are automatically added to the global scope (e.g., describe(), test(), beforeAll(), and expect()). This makes test files "special", as they do not follow the Node.js convention of using require() to load jest functionality. It also causes issues with linters like standard that restrict the use of undefined globals.

Jest uses its global expect() to perform checks instead of standard assertions, and it expects tests to read more like English. For example, instead of doing something like t.equal(actual, expected, comment) with Tape and AVA, we use expect(actual).toBe(expected). Jest also has smart modifiers that you can include in the chain, like .not (e.g., expect(actual).not.toBe(unexpected)).

Jest has the ability to mock functions and modules. This can be useful in situations where it is difficult to write or change the code we are testing to avoid slow or unpredictable results in a test environment. An example in the Jest documentation is preventing axios from making a real HTTP request to an external server and instead returning a preconfigured response.

Jest has a much larger API and many more configuration options, and some of them do not work well when testing Node.js code. The most important option we need to set is testEnvironment, which should be "node". If we do not set it, jest uses the default configuration, where our tests run in a browser-like environment using jsdom.
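As a reference, that option lives in your Jest configuration. A minimal sketch of a jest.config.js for a Node.js project:

// jest.config.js
module.exports = {
  testEnvironment: "node", // avoid the browser-like jsdom environment
};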
Here is what a Jest test looks like:

const sum = require("./sum");

test("adds 1 + 2 to equal 3", () => {
  expect(sum(1, 2)).toBe(3);
});

Jest has a much larger API and offers more functionality than AVA or Tape. However, the larger scope is not without drawbacks. When using Jest to test Node.js code, we have to:

- Agree to use undefined globals.
- Avoid features like mocked timers that interfere with packages like Mongoose.
- Configure the environment correctly so tests do not run in a simulated browser by default.
- Consider that some code may run 20-30 times slower in Jest compared to other test runners.

Many teams will choose Jest because they are already using it on the front-end and do not like the idea of having multiple test runners, or because they like the built-in features like mocks and do not want to incorporate additional modules. Ultimately, these trade-offs must be weighed on a case-by-case basis.

Other testing tools

There are plenty of other testing tools, like Istanbul, nyc, nock, and replay, that we do not have space to go into here.

I hope this has been helpful and/or taught you something new!

JAMstack. And how web sites are getting faster and faster

2020-03-20
javascriptapihtmljamstack

If you are involved in the world of websites, you have probably heard of JAMstack. JAMstack stands for:

JavaScript
APIs
Markup

JAMstack has inspired some of the best web development tools we've seen. Publishing incredibly fast, secure, and accessible websites has never been so easy or free. I still find it hard to believe that my own personal website now runs for free instead of on a $15/month VPS.

Let's take a quick look through various stages of web history to see how we got to where we are today.

When the web emerged

In the 90s, web pages didn't look spectacular. HTML was initially used to store documents and send them over the World Wide Web, and they looked exactly like Word documents. Considering the time, this was a BOOM!! It was revolutionary, and even websites like Wikipedia currently share this "minimal" look, as if you were reading it on paper.

As the web grew, developers wanted more control over how their documents looked. CSS was proposed on October 10, 1994, and released in 1996. It didn't catch on quickly, as Internet Explorer 3 had limited support for it. Nevertheless, the web was evolving, and so were the tools behind it. Some gems made with such old technologies can still be seen online, and they still perform excellently and offer a great user interface.

After JavaScript was introduced to the WWW, it was used to make the web much more dynamic. As the web grew, so did the companies operating on the Internet, and with the evolution of the web there were many new ways to test and sell products and information. Since a lot of code is open source, many developers released JavaScript libraries that helped improve the web.

Static site generators

Static site generators were very popular in 2018. People say they are a trend that web developers should keep in mind this year, and it's understandable: for most cases, they are the right solution, and they do the job well. Hugo and Jekyll are two very popular static site generators and great ways to get started; many people use them as their primary method of blogging and page management.

Hugo vs Jekyll

The modern web

How many times have you heard the words "the modern web" and just assumed a negative connotation? Don't worry, I'm not here to speak ill of the web. If anything, I adore it. Today's web applications focus on creating great experiences using more robust JavaScript libraries and RESTful APIs to make things much easier and faster. That's why I love React/Vue: they make the web a much faster and more accessible place.

JAMstack

Which is where we are now. ❤❤

GatsbyJS is a newcomer to the world of JavaScript and static site generators. It uses modern technologies like GraphQL and React to create extremely fast websites. You can use it with any CMS that has a RESTful API (WordPress, Contentful, Netlify CMS, Stripe, Storyblok, etc.). Gatsby is very powerful and has had great success in the open source community and with venture capital.

If you are from Medellín, Colombia and are interested in learning about #GatsbyJS, you can vote in the following tweet or leave a comment on it, so we can gauge people's interest in this technology, and maybe we can create material to share on this blog:

"How many of you would be interested in having talks, workshops, @gatsbyjs events in the city of Medellin? 👨🏻‍💻♥️👨🏻‍🏫" (Tweet by Khriztian Moreno, @khriztianmoreno, November 28, 2018)
The future of the web

Many people have been predicting that the future of the web will be an immersive world, something like entering the Matrix. What I imagine instead is a web accessible to everyone, including people with low-quality computers or without access to extremely fast Internet, and thanks to tools like #GatsbyJS, we can achieve this dream.

PS: Depending on the responses to my tweet, we will see whether we create more material about JAMstack and GatsbyJS.

I hope this has been useful and/or taught you something new!

HEADLESS CMS - The best solution for content driven development

2020-03-10
javascriptheadless-cmsweb-development

As the world becomes more connected, an increasing number of companies are turning to content management systems to better engage with their customer base. We have all heard of WordPress, Drupal, Joomla, Sitecore, and Squarespace. However, many of these traditional CMS tools do not seem to keep up with the rapid evolution of technology. Their implementation and maintenance are costly, and they can present a significant number of security risks. They are also not very flexible, bogged down by layers of templates and framework constraints that can hinder the introduction of mobile functionality.

But there is a simple solution: go headless.

Integrate with any codebase

A relatively new concept, a headless CMS essentially removes the interface from the equation, allowing developers to integrate with any codebase. The focus is on the API and the backend technology used to store and deliver content to any device or website. The same editing capabilities are still available to users, but without all the views and responses that govern many traditional CMS approaches.

A headless CMS gives us a lot of freedom in how to implement the content itself. We can have full control over the look of the final product, and no valuable time is wasted building templates from scratch. A traditional CMS requires a lot of time, while a headless CMS is relatively easy to deliver, as developers can generally find pre-made templates that suit many variations of an online product.

When multiple applications consume the same content, it also makes sense to extract that content into a real API: this simplifies each application and ensures they all work with the same data.

When to go for a Headless CMS

Is there a time when a traditional CMS would be better than going headless? As with all software-related answers, it depends on the product, although a better question is whether you need a full CMS at all.

Many clients want some kind of CMS, especially for landing pages, which requires time and money. However, if you only plan to change your site's content once or twice a year, do you really need a CMS? Probably not. If, on the other hand, you have content that is constantly changing, like a news website, then your best solution would be a headless approach.

What are the benefits of a headless CMS? Is a traditional approach better for our projects? And is investing more money and time in a custom solution a better strategy? We will delve deeper into the benefits of headless CMS in a future blog post. In the meantime, you can learn more about headless CMS at https://headlesscms.org and https://jamstack.org.

I hope this has been helpful and/or taught you something new!

Systems Design with ReactJS and Storybook

2020-03-05
javascriptreactstorybookdesign-system

Document and test your React components in isolation using Storybook.

tl;dr: In this post, we will learn how to set up the necessary infrastructure to build a reusable component design system in React, using Storybook.

Let's start by understanding that a design system is a series of components that can be reused in different combinations. Design systems allow you to manage design at scale. If we go to designsystemsrepo.com, we can see the design systems used by some of the largest companies and strongest brands, such as Priceline, Apple, IBM, WeWork, GitHub, and even the US government.

Design systems can be a significant productivity multiplier in any medium-to-large project or company: we can document our components as we develop them, ensure a consistent look and feel across all screens, and maintain a continuous workflow between designers and developers.

(Video: https://www.youtube.com/embed/guteTaeLoys)

Throughout this video, we will progressively build a very simple design system that contains a single button, but I will show several of the features that Storybook can offer to improve our development experience and project speed.

We will learn to set up the Storybook used in production by everyone from Lonely Planet to Uber, but at the same time we will keep it as simple as possible, so we can reuse these APIs for our future needs.

I hope this has been helpful and/or taught you something new!
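PS: If you would rather skim code than watch, here is a minimal sketch of what a first story for that button might look like, using Component Story Format (the Button component and its props are assumptions for illustration):

// Button.stories.jsx
import React from "react";
import { Button } from "./Button"; // hypothetical component

export default {
  title: "Design System/Button",
  component: Button,
};

// Each named export is a story that Storybook renders in isolation
export const Primary = () => <Button primary>Click me</Button>;
export const Disabled = () => <Button disabled>Can't click me</Button>;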

What is a “side effect”?

2020-02-20
javascriptreactredux

In the previous post, we learned a bit about immutability and why it should matter when writing our code, especially our reducers. This time, I want to talk a bit about side effects and how working with pure functions can help us. First, though, let's see what makes a function pure and why purity is closely related to immutability.

Immutability Rules

To be pure, a function must follow these rules:

1. A pure function must always return the same value when given the same inputs.
2. A pure function must not have any side effects.

"Side effects" is a broad term, but it basically means modifying things outside the immediate scope of that function. Some examples of side effects are:

- Mutating/modifying input parameters, like giveAwesomePowers (the function from the previous post)
- Modifying any other state outside the function, such as global variables, or document.(anything) or window.(anything)
- Making API calls
- console.log()
- Math.random()

API calls might surprise you. After all, making a call to something like fetch('/users') might not change anything in your UI. But ask yourself this: if you called fetch('/users'), could it change anything anywhere? Even outside your UI?

Yes. It will create an entry in the browser's network log. It will create (and perhaps later close) a network connection to the server. And once that call reaches the server, all bets are off. The server could do whatever it wants, including calling other services and making more mutations. At the very least, it will probably place an entry in a log file somewhere (which is a mutation).

So, as I said: "side effect" is a pretty broad term. Here is a function that has no side effects (the original post shows it as an image; it is essentially function add(x, y) { return x + y; }):

You can call this function once, or you can call it a million times, and nothing will change. Technically, this satisfies Rule 2: calling this function will not directly cause any side effects. Also, every time you call this function like add(1, 2), you will get the same answer. No matter how many times you call add(1, 2), you will get the same answer. That satisfies Rule 1: same inputs == same outputs.

JS Array Methods That Mutate

Certain array methods will mutate the array they are used on:

- push (add an item to the end)
- pop (remove an item from the end)
- shift (remove an item from the beginning)
- unshift (add an item to the beginning)
- sort
- reverse
- splice

Pure Functions Can Only Call Other Pure Functions

A possible source of trouble is calling an impure function from a pure one. Purity is transitive, and it's all or nothing. You can write a perfectly pure function, but if you end it with a call to some other function that eventually calls setState or dispatch or causes some other kind of side effect... then all bets are off.

Now, there are some types of side effects that are "acceptable". Logging messages with console.log is fine: yes, technically it is a side effect, but it is not going to affect anything.

A Pure Version of giveAwesomePowers

Now we can rewrite our function keeping the Rules in mind.
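The original post shows the pure version as an image (captioned "giveAwesomePowers — Pure function"). Reconstructed from the surrounding description, it looks roughly like this (the "invisibility" value is an assumption based on the example below):

function giveAwesomePowers(person) {
  // Build a brand new object instead of mutating the one we were given
  const newPerson = Object.assign({}, person, {
    specialPower: "invisibility",
  });
  return newPerson;
}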
This is a bit different now. Instead of modifying the person, we are creating a completely new person.

If you haven't seen Object.assign, what it does is assign properties from one object to another. You can pass it a series of objects, and it will combine them, from left to right, while overwriting any duplicate properties. (And by "from left to right", I mean that running Object.assign(result, a, b, c) will copy a into result, then b, then c.)

However, it does not do a deep merge: only the immediate properties of each argument will be moved over. Also, most importantly, it does not create copies or clones of the properties; it assigns them as they are, keeping the references intact.

So the code above creates an empty object, then assigns all the properties of person to that empty object, and then assigns the specialPower property to that object as well. Another way to write this is with the object spread operator; reconstructed from the original image, it is essentially return { ...person, specialPower: "invisibility" }. You can read this as: "Create a new object, then insert the properties of person, then add another property called specialPower". As of this writing, the spread syntax is officially part of the JavaScript specification in ES2018.

Pure Functions Return Brand New Objects

Now we can rerun our experiment from before, using our new pure version of giveAwesomePowers. The big difference is that person was not modified. Mafe has not changed. The function created a clone of Mafe, with all the same properties, plus the ability to become invisible.

This is kind of a weird thing about functional programming: objects are constantly being created and destroyed. We did not change Mafe; we created a clone, modified her clone, and then replaced Mafe with her clone.

I hope this has been helpful and/or taught you something new!

What is immutability?

2020-02-10
javascriptreduxreact

Immutability in React and Redux

Immutability can be a confusing topic, and it appears everywhere in React, Redux, and JavaScript in general.

You may have encountered errors where your React components do not re-render, even though you know you have changed the props, and someone says, "You should be making immutable state updates." Maybe you or one of your teammates regularly writes reducers in Redux that mutate the state, and we have to constantly correct them (the reducers, or our teammates 😄).

It's complicated. It can be very subtle, especially if you're not sure what to look for. And honestly, if you're not sure why it's important, it's hard to care.
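As a quick preview of the distinction this post builds toward, here is a sketch (the state shape and action are hypothetical):

// 1) Mutating the state: React/Redux may not detect the change
function mutatingReducer(state, action) {
  if (action.type === "ADD_TODO") {
    state.todos.push(action.payload); // mutates the existing array!
    return state;
  }
  return state;
}

// 2) Immutable update: return brand new objects instead
function immutableReducer(state, action) {
  if (action.type === "ADD_TODO") {
    return { ...state, todos: [...state.todos, action.payload] };
  }
  return state;
}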

Introduction to Apollo Client with React for GraphQL

2020-01-30
javascriptreactgraphqltutorial

GraphQL has become popular recently and is likely to replace REST APIs. In this tutorial, we will use Apollo Client to communicate with GitHub's GraphQL API. We will integrate Apollo Client with ReactJS, but you can also use it with other platforms (VueJS, Angular, etc.).
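As a taste of the setup, here is a minimal sketch using the current @apollo/client package (the post predates it; at the time, apollo-boost and react-apollo were the usual packages, and the GitHub token below is a placeholder):

import React from "react";
import { ApolloClient, InMemoryCache, ApolloProvider } from "@apollo/client";

const client = new ApolloClient({
  uri: "https://api.github.com/graphql",
  headers: { authorization: "Bearer YOUR_GITHUB_TOKEN" }, // placeholder token
  cache: new InMemoryCache(),
});

export default function App() {
  return (
    <ApolloProvider client={client}>
      {/* components inside can now run queries with useQuery */}
      <div>My GitHub GraphQL client</div>
    </ApolloProvider>
  );
}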

Flux Standard Action (FSA)

2020-01-20
reactjavascriptreduxtutorial

Flux Standard Action is a lightweight specification that defines the structure of an action, to be implemented in libraries that use the Flux pattern or architecture. Compliance with FSA helps developers create abstractions that can work with different implementations of Flux.

It all started after Facebook published its Flux architecture/pattern: many libraries implemented the Flux philosophy, and Redux was one of them. Flux can be divided into several concepts: Dispatcher, Store, Action, and View. In this post, we are going to learn about the Action part and how to work with actions in a more standardized way, so that later we can use other libraries that implement the FSA philosophy.

Before delving deeper into today's main topic, let's look at the concept of an Action and how Flux defines it:

Actions define the internal API of your application. They capture the ways to interact with your application. They are simple objects that consist of a "type" field and data.

The specification would lead to the following object:

{ type: 'ADD_TODO', text: 'TODO content' }

The only problem with this simple example is that the developer can choose any property name for the values. All the following names are valid: title, name, text, todoName, etc. It is impossible to know what properties to expect from ADD_TODO in the Redux reducer.

It would be much easier to work with Flux actions if we could make certain assumptions about their shape. Defining a minimum common standard for these patterns gives us the abstraction we need to connect our actions to the reducer. This is what Flux Standard Action (FSA) solves.

To go into more detail about FSA, we start from the following premises that Flux Standard Action gives us about actions:

An action MUST:
- be a plain JavaScript object.
- have a type property.

An action MAY:
- have an error property.
- have a payload property.
- have a meta property.

An action MUST NOT include properties other than type, payload, error, and meta.

But what does each of these properties that our JavaScript object can contain mean? Let's see each of them.

type

The required type property identifies the nature of the action that has occurred to the consumer. type is a String constant.

payload

The optional payload property MAY be any type of value. It represents the payload of the action. Any information about the action that is not the type or the status of the action should be part of the payload field. By convention, the payload SHOULD be an object.

error

The optional error property MAY be set to true if the action represents an error. An action whose error is true is analogous to a rejected Promise. By convention, the payload SHOULD be an error object. If error has any value other than true, including undefined and null, the action MUST NOT be interpreted as an error.

meta

The optional meta property MAY be any type of value. It is intended for any additional information that is not part of the payload.
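Putting the rules together, FSA-compliant versions of the earlier ADD_TODO action might look like this (a sketch; the payload shape is up to you):

// A standard action: the data lives under payload
{
  type: 'ADD_TODO',
  payload: { text: 'TODO content' }
}

// An error action: error is true and payload carries the Error
{
  type: 'ADD_TODO',
  payload: new Error('Could not add the todo'),
  error: true
}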
The Flux Standard Action (FSA) concept is used by some libraries that can help us reduce the boilerplate we have to write for our actions.

Libraries

- redux-actions — a set of helpers to create and handle FSA actions in Redux.
- redux-promise — a middleware that supports FSA actions.
- redux-rx — RxJS utilities for Redux, including a middleware that supports FSA actions.

I hope to have the opportunity to give an introduction to reducing Redux boilerplate with redux-actions on a future occasion.

I hope this has been useful and/or taught you something new!

Structuring the base of our NodeJS project

2020-01-10
nodejsexpressjsscaffoldingapijavascript

The idea for this article came from a need that arose at a meetup I attended in my city. Many of the people there were asking how to decide where the files that make up a project should live: models, events, controllers, views, etc. Node.js has no prescribed way to structure a project, and many of the visible examples never explain why they are built the way they are.
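As a concrete starting point, here is one reasonable layout for an Express-style API (an illustration, not a standard; folder names like services/ are conventions rather than requirements):

my-api/
├── src/
│   ├── controllers/   # request handlers
│   ├── models/        # data models
│   ├── routes/        # route definitions
│   ├── services/      # business logic
│   └── index.js       # app entry point
├── tests/
└── package.json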

NodeSchool Learn on your own

2020-01-02
javascriptnodejstutorial

When it comes to learning a new technology or understanding the features of a language, we always look for tutorials on the internet that teach us its concepts. That's why today I want to talk to you about NodeSchool.io, an initiative that aims to teach these topics through self-guided workshops.
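For example, one of the classic NodeSchool workshoppers can be installed and started like this (assuming Node.js and npm are already installed):

npm install --global learnyounode
learnyounode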