@kostecky
Forked from grifx/0000_hello_world.md
Created May 30, 2023 17:00
Create your own roadmaps from roadmap.sh with checkboxes on a GitHub Gist

Hello,

My SO is learning to code. I wanted a convenient way for her to consume the content from roadmap.sh.

I hope it can help someone else.

If you want your own roadmap:

  1. Fork or copy-paste the md files you are interested in into your own gist.

  2. You can build them yourself using the following commands:

# Step 0 - make sure you have all the utils and that you are willing to install them
# (pbcopy and open are macOS utilities; use your platform's equivalents elsewhere)
which bash sed jq grep cat xargs pbcopy open || echo 'You are missing some utils.'

# Step 1 - Manually download the roadmaps folder
https://download-directory.github.io?url=https://github.com/kamranahmedse/developer-roadmap/tree/master/content/roadmaps

# Step 2 - Unzip the roadmaps
unzip kamranahmedse*roadmaps.zip -d roadmaps

# Step 3 - Delete the zip file
rm kamranahmedse*roadmaps.zip

# Step 4 - Build large Markdown files

# Note: the \n insertions in the sed replacements below assume GNU sed
for roadmap_id in ./roadmaps/*; do
  jq -r 'to_entries | "." + .[].value' "$roadmap_id/content-paths.json" \
    | xargs cat \
    | sed \
    -e "s/\"/'/g" \
    -e "s/' >/'>/g" \
    -e "s/# #/#/g" \
    -e "s/^### /\n\n#### /g" \
    -e "s/^## /\n\n### /g" \
    -e "s/^# /\n\n## /g" \
    -e "s/<ResourceGroupTitle>\(.*\)<\/ResourceGroupTitle>/### \1/g" \
    -e "s/### Free Content//g" \
    -e "s/<BadgeLink colorScheme='\(.*\)' badgeText='\(.*\)' href='\(.*\)'>\(.*\)<\/BadgeLink>/ - [ ] [\4]\(\3\) (\2)/g" \
    -e "s/<BadgeLink badgeText='\(.*\)' colorScheme='\(.*\)' href='\(.*\)'>\(.*\)<\/BadgeLink>/ - [ ] [\4]\(\3\) (\1)/g" \
    -e "s/<BadgeLink badgeText='\(.*\)' href='\(.*\)'>\(.*\)<\/BadgeLink>/ - [ ] [\3]\(\2\) (\1)/g" \
    | grep -v "BadgeLink" \
    > "$(basename "$roadmap_id").md"
done

# Step 5 - Create a new gist
open https://gist.new

# Step 6 - Copy file to clipboard and paste it into your new gist
cat 101-backend.md | pbcopy

# Utils ----------------

# Debug
    | grep -v "BadgeLink" \
    | cat >> debug.md


# @TODO Fix and use this: Remove all </BadgeLink>, add them when line starts with <BadgeLink, then remove all the single <\/BadgeLink>
    -e "/^<BadgeLink/s/<\/BadgeLink>$//; /^<BadgeLink/s/$/<\/BadgeLink>/" \

Internet

The Internet is a global network of computers connected to each other which communicate through a standardized set of protocols.

Internet

The Internet is a global network of computers connected to each other which communicate through a standardized set of protocols.

What is HTTP?

HTTP is the TCP/IP-based application layer communication protocol which standardizes how the client and server communicate with each other. HTTP follows a classical 'Client-Server model', with a client opening a connection request, then waiting until it receives a response. HTTP is a stateless protocol, which means that the server does not keep any data (state) between two requests.

Browsers

A web browser is a software application that enables a user to access and display web pages or other online content through its graphical user interface.

DNS

The Domain Name System (DNS) is the phonebook of the Internet. Humans access information online through domain names, like nytimes.com or espn.com. Web browsers interact through Internet Protocol (IP) addresses. DNS translates domain names to IP addresses so browsers can load Internet resources.
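
As a rough sketch, here is how a Node.js program can ask DNS for the addresses behind a name (TypeScript, using the built-in node:dns module; the domain is just an example):

import { promises as dns } from 'node:dns';

async function lookup(name: string): Promise<void> {
  const addresses = await dns.resolve4(name); // ask DNS for the IPv4 addresses behind the name
  console.log(`${name} resolves to ${addresses.join(', ')}`);
}

lookup('example.com');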

Domain Name

A domain name is a unique, easy-to-remember address used to access websites, such as ‘google.com’, and ‘facebook.com’. Users can connect to websites using domain names thanks to the DNS system.

Hosting

Web hosting is an online service that allows you to publish your website files onto the internet. So, anyone who has access to the internet has access to your website.

HTML

HTML stands for HyperText Markup Language. It is used on the frontend and gives the structure to the webpage which you can style using CSS and make interactive using JavaScript.

HTML Basics

HTML stands for HyperText Markup Language. It is used on the frontend and gives the structure to the webpage which you can style using CSS and make interactive using JavaScript.

Semantic HTML

A semantic element clearly describes its meaning to both the browser and the developer. In HTML, semantic elements are elements that can be used to define different parts of a web page, such as <form>, <table>, <article>, <header>, <footer>, etc.

Forms and Validations

Before submitting data to the server, it is important to ensure all required form controls are filled out, in the correct format. This is called client-side form validation, and helps ensure data submitted matches the requirements set forth in the various form controls.

Best Practices

Learn to follow the best practices for writing maintainable and scalable HTML documents.

Accessibility

Web accessibility means that websites, tools, and technologies are designed and developed in such a way that people with disabilities can use them easily.

Basics of SEO

SEO or Search Engine Optimization is the technique used to optimize your website for better rankings on search engines such as Google, Bing etc.

CSS

CSS or Cascading Style Sheets is the language used to style the frontend of any website. CSS is a cornerstone technology of the World Wide Web, alongside HTML and JavaScript.

CSS Basics

CSS or Cascading Style Sheets is the language used to style the frontend of any website. CSS is a cornerstone technology of the World Wide Web, alongside HTML and JavaScript.

Making layouts

Float, grid, flexbox, positioning, display and box model are some of the key topics that are used for making layouts. Use the resources below to learn about these topics:

Responsive Web Design

Responsive Web Design is the technique of making your webpages look good on all screen sizes. There are several techniques used to achieve that, e.g. CSS media queries, percentage widths, min or max widths/heights, etc.

JavaScript

JavaScript allows you to add interactivity to your pages. Common examples that you may have seen on the websites are sliders, click interactions, popups and so on.

JavaScript

JavaScript allows you to add interactivity to your pages. Common examples that you may have seen on the websites are sliders, click interactions, popups and so on.

DOM Manipulation

The Document Object Model (DOM) is a programming interface built for HTML and XML documents. It represents the page that allows programs and scripts to dynamically update the document structure, content, and style. With DOM, we can easily access and manipulate tags, IDs, classes, attributes, etc.
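
A tiny TypeScript sketch of DOM manipulation in the browser (the #todo-list element is assumed to exist on the page):

const list = document.querySelector<HTMLUListElement>('#todo-list'); // find an existing element
const item = document.createElement('li');                            // create a new node
item.textContent = 'Learn the DOM';
item.classList.add('done');                                           // manipulate classes/attributes
list?.appendChild(item);                                              // attach it to the page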

Fetch API

Ajax is the technique that lets us send and receive data asynchronously from servers, e.g. updating the user profile or fetching the list of searched products without reloading the page. The Fetch API is the modern browser interface for making these asynchronous requests.
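
A minimal TypeScript sketch of fetching JSON asynchronously (the /api/products endpoint and its shape are made up for illustration):

async function loadProducts(query: string): Promise<void> {
  // The page does not reload; the response is parsed as JSON when it arrives
  const response = await fetch(`/api/products?search=${encodeURIComponent(query)}`);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  const products: { id: number; name: string }[] = await response.json();
  console.log(products);
}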

Modern JavaScript

ECMAScript 2015 or ES2015 is a significant update to the JavaScript programming language. It is the first major update to the language since ES5 which was standardized in 2009. You should look at the features introduced with ES6 and onwards.
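
A few of those ES2015+ features in one small TypeScript sketch:

const greet = (name: string): string => `Hello, ${name}!`;   // arrow function + template literal
const user = { name: 'Ada', role: 'admin' };
const { name, role } = user;                                 // destructuring
const roles = ['admin', 'editor'];
const allRoles = [...roles, 'viewer'];                       // spread
class Account {                                              // class syntax
  constructor(public owner: string) {}
}
console.log(greet(name), role, allRoles, new Account(name).owner);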

JavaScript Concepts

Learn and understand the concepts such as Hoisting, Event Bubbling, Scope, Prototype, Shadow DOM, and strict mode.

Version Control Systems

Version control systems allow you to track changes to your codebase/files over time. They allow you to go back to some previous version of the codebase without any issues. Also, they help in collaborating with people working on the same code – if you’ve ever collaborated with other people on a project, you might already know the frustration of copying and merging the changes from someone else into your codebase; version control systems allow you to get rid of this issue.

Git

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

Repo Hosting Services

There are different repository hosting services, with the most famous ones being GitHub, GitLab and BitBucket. I would recommend creating an account on GitHub because that is where most of the OpenSource work is done and most of the developers are.

Services Links

GitHub

GitHub is a provider of internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

GitLab

GitLab is a provider of internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

BitBucket

BitBucket is a provider of internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Web Security Knowledge

Web security refers to the protective measures taken by the developers to protect the web applications from threats that could affect the business.

CORS

Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources.
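
As a rough sketch, a Node.js server can opt in to cross-origin access by setting the relevant response header (TypeScript; the allowed origin is hypothetical):

import { createServer } from 'node:http';

createServer((_req, res) => {
  // Allow a single other origin to read this response from the browser
  res.setHeader('Access-Control-Allow-Origin', 'https://app.example.com'); // hypothetical allowed origin
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ message: 'readable from app.example.com despite the different origin' }));
}).listen(3000);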

HTTPS

HTTPS is a secure way to send data between a web server and a browser.

Hypertext transfer protocol secure (HTTPS) is the secure version of HTTP, which is the primary protocol used to send data between a web browser and a website. HTTPS is encrypted in order to increase security of data transfer. This is particularly important when users transmit sensitive data, such as by logging into a bank account, email service, or health insurance provider.

Content Security Policy

Content Security Policy is a computer security standard introduced to prevent cross-site scripting, clickjacking and other code injection attacks resulting from execution of malicious content in the trusted web page context.

OWASP Security Risks

OWASP or Open Web Application Security Project is an online community that produces freely-available articles, methodologies, documentation, tools, and technologies in the field of web application security.

Package Managers

Package managers allow you to manage the dependencies (external code written by you or someone else) that your project needs to work correctly.

npm

npm is a package manager for the JavaScript programming language maintained by npm, Inc. npm is the default package manager for the JavaScript runtime environment Node.js.

Yarn

Yarn is a software packaging system developed in 2016 by Facebook for Node.js JavaScript runtime environment that provides speed, consistency, stability, and security as an alternative to npm (package manager).

pnpm

pnpm is an alternative package manager for Node.js which stands for “Performant NPM”. The main purpose of pnpm is to hold all the packages in a global (centralized) store and reuse them across projects by creating hard links to them.

CSS Architecture

CSS is notoriously difficult to manage in large, complex, rapidly-iterated systems. There are different ways of structuring CSS that help in writing more maintainable CSS.

BEM

The Block, Element, Modifier methodology (commonly referred to as BEM) is a popular naming convention for classes in HTML and CSS. Developed by the team at Yandex, its goal is to help developers better understand the relationship between the HTML and CSS in a given project.

OOCSS

As with any object-based coding method, the purpose of OOCSS or Object Oriented CSS is to encourage code reuse and, ultimately, faster and more efficient stylesheets that are easier to add to and maintain.

SMACSS

SMACSS (pronounced “smacks”) is more style guide than rigid framework. SMACSS is a way to examine your design process and as a way to fit those rigid frameworks into a flexible thought process. It is an attempt to document a consistent approach to site development when using CSS.

CSS Preprocessors

CSS Preprocessors are scripting languages that extend the default capabilities of CSS. They enable us to use logic in our CSS code, such as variables, nesting, inheritance, mixins, functions, and mathematical operations.

Sass

Sass is a preprocessor scripting language that is interpreted or compiled into Cascading Style Sheets. It lets you write maintainable CSS and provides features like variable, nesting, mixins, extension, functions, loops, conditionals and so on.

PostCSS

PostCSS is a tool for transforming styles with JS plugins. These plugins can lint your CSS, support variables and mixins, transpile future CSS syntax, inline images, and more.

Free Resources

Less

Less extends CSS with dynamic behavior such as variables, mixins, operations and functions. Less runs both server-side (with Node.js and Rhino) and client-side (modern browsers only).

Build Tools

Task runners automatically execute commands and carry out processes behind the scenes. This helps automate your workflow by performing mundane, repetitive tasks that you would otherwise waste an egregious amount of time repeating yourself.

Common usages of task runners include numerous development tasks such as: spinning up development servers, compiling code (ex. SCSS to CSS), running linters, serving files up from a local port on your computer, and many more!

Task Runners

Task runners are tools that simplify certain tedious development tasks, like automating Sass/SCSS compilation, bundling assets, linting source code, and hot-reloading a local server.

npm Scripts

npm scripts are the entries in the scripts field of the package.json file. The scripts field holds an object where you can specify various commands and scripts that you want to expose.

Linters and Formatters

A linter is a tool used to analyze code and discover bugs, syntax errors, stylistic inconsistencies, and suspicious constructs. Popular linters for JavaScript include ESLint, JSLint, and JSHint.

Prettier

Prettier is an opinionated code formatter with support for JavaScript, HTML, CSS, YAML, Markdown, GraphQL Schemas. By far the biggest reason for adopting Prettier is to stop all the on-going debates over styles.

ESLint

With ESLint you can impose the coding standard using a certain set of standalone rules.

StandardJS

StandardJS is a style guide with a linter and automatic code fixer. It is a way to enforce consistent style in your project and automatically formats code. StandardJS is a tool in the Code Review category of a tech stack.

Module Bundlers

A module bundler is a tool that takes pieces of JavaScript and their dependencies and bundles them into a single file, usually for use in the browser. You may have used tools such as Browserify, Webpack, Rollup or one of many others.

It usually starts with an entry file, and from there it bundles up all of the code needed for that entry file.

Webpack

Webpack is a module bundler. Its main purpose is to bundle JavaScript files for usage in a browser, yet it is also capable of transforming, bundling, or packaging just about any resource or asset.

esbuild

Our current build tools for the web are 10-100x slower than they could be. The main goal of the esbuild bundler project is to bring about a new era of build tool performance, and create an easy-to-use modern bundler along the way.

Rollup

Rollup is a module bundler for JavaScript which compiles small pieces of code into something larger and more complex, such as a library or application.

Parcel

Parcel is a web application bundler, differentiated by its developer experience. It offers blazing-fast performance utilizing multicore processing and requires zero configuration.

Vite

Vite is a build tool that aims to provide a faster and leaner development experience for modern web projects.

Pick a Framework

Web frameworks are designed to write web applications. Frameworks are collections of libraries that aid in the development of a software product or website. Frameworks for web application development are collections of various tools. Frameworks vary in their capabilities and functions, depending on the tasks set. They define the structure, establish the rules, and provide the development tools required.

React

React is the most popular front-end JavaScript library for building user interfaces. React can also render on the server using Node and power mobile apps using React Native.

Svelte

Svelte is a JavaScript framework that, unlike Vue and React, does not use virtual DOM diffing but instead knows exactly what and where to update when the state changes. It's mainly focused on the frontend and building user interfaces.

SolidJS

Recoil

Recoil is a new state management library built by the Facebook team that simplifies global state management.

Redux

Redux is a predictable state container for JavaScript apps. It helps you write applications that behave consistently, run in different environments (client, server, and native), and are easy to test. On top of that, it provides a great developer experience, such as live code editing combined with a time traveling debugger.
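
A minimal TypeScript sketch of the idea, using Redux's classic createStore API (newer projects typically use Redux Toolkit instead):

import { createStore } from 'redux';

type CounterAction = { type: 'increment' } | { type: 'decrement' };

// A reducer is a pure function: (state, action) => next state
function counter(state = 0, action: CounterAction): number {
  switch (action.type) {
    case 'increment': return state + 1;
    case 'decrement': return state - 1;
    default: return state;
  }
}

const store = createStore(counter);
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'increment' }); // logs 1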

MobX

MobX is an open source state management tool. MobX, a simple, scalable, and standalone state management library, follows functional reactive programming (FRP) implementation and prevents inconsistent state by ensuring that all derivations are performed automatically.

Angular

Angular is a component based front-end development framework built on TypeScript which includes a collection of well-integrated libraries that include features like routing, forms management, client-server communication, and more.

RxJS

RxJS (Reactive Extensions for JavaScript) is a library for reactive programming using observables that makes it easier to compose asynchronous or callback-based code.

NgRx

NgRx is an open source library that provides reactive state management for your Angular applications

Vue.js

Vue.js is an open-source JavaScript framework for building user interfaces and single-page applications. It is mainly focused on front end development.

Pinia

Pinia is a store library for Vue.js, and can be used in Vue 2 and Vue 3 with the same API, except for SSR and its installation. It allows state sharing between pages and components around the application. As the documentation says, it is extensible, intuitive (by organization), has devtools support (in Vue.js devtools), inferred typed state even in JavaScript, and more. In Pinia you can access, mutate, and replace state, use getters that work like computed properties, use actions, etc. The library is recommended by the official Vue.js documentation.

Modern CSS

The way we write CSS in our modern front-end applications is completely different from how we used to write CSS before. There are methods such as Styled Components, CSS Modules, Styled JSX, Emotion, etc.

Styled Components

Styled-components is a CSS-in-JS library that enables you to write regular CSS and attach it to JavaScript components. With styled-components, you can use the CSS you’re already familiar with instead of having to learn a new styling structure.

CSS Modules

CSS files in which all class names and animation names are scoped locally by default.

Styled JSX

Styled JSX is a CSS-in-JS library that allows you to write encapsulated and scoped CSS to style your components. The styles you introduce for one component won't affect other components, allowing you to add, change and delete styles without worrying about unintended side effects.

Emotion

Emotion is a library designed for writing css styles with JavaScript. It provides powerful and predictable style composition in addition to a great developer experience with features such as source maps, labels, and testing utilities. Both string and object styles are supported.

Web Components

Web Components is a suite of different technologies allowing you to create reusable custom elements — with their functionality encapsulated away from the rest of your code — and utilize them in your web apps.

HTML Templates

The <template> HTML element is a mechanism for holding HTML that is not to be rendered immediately when a page is loaded but may be instantiated subsequently during runtime using JavaScript. Think of a template as a content fragment that is being stored for subsequent use in the document.

Custom Elements

One of the key features of the Web Components standard is the ability to create custom elements that encapsulate your functionality on an HTML page, rather than having to make do with a long, nested batch of elements that together provide a custom page feature.

Shadow DOM

An important aspect of web components is encapsulation — being able to keep the markup structure, style, and behavior hidden and separate from other code on the page so that different parts do not clash, and the code can be kept nice and clean. The Shadow DOM API is a key part of this, providing a way to attach a hidden separated DOM to an element.
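
A small TypeScript sketch combining a custom element with an attached shadow root (the element name and styling are arbitrary):

class UserCard extends HTMLElement {
  connectedCallback(): void {
    if (this.shadowRoot) return;                       // attachShadow may only be called once
    const shadow = this.attachShadow({ mode: 'open' }); // markup and styles stay encapsulated
    shadow.innerHTML = `
      <style>p { color: teal; }</style>
      <p>${this.getAttribute('name') ?? 'Anonymous'}</p>
    `;
  }
}
customElements.define('user-card', UserCard);
// Usage in HTML: <user-card name="Ada"></user-card>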

CSS frameworks

A CSS framework provides the user with a fully functional CSS stylesheet, allowing them to create a web page by simply coding the HTML with appropriate classes, structure, and IDs. Classes for popular website features like as the footer, slider, navigation bar, hamburger menu, column-based layouts, and so on are already included in the framework.

JS First

Chakra UI

Chakra UI is a simple, modular and accessible component library that gives you the building blocks you need to build your React applications.

Mantine

Mantine is a React components library with more than 100 customizable components and 40 hooks to cover you in any situation.

Material UI

Material-UI is an open-source framework that features React components that implement Google’s Material Design.

Radix UI

An open-source UI component library for building high-quality, accessible design systems and web apps.

Daisy UI

Component library around Tailwind CSS that comes with several built-in components.

Tailwind CSS

CSS framework that provides atomic CSS classes to help you style components, e.g. flex, pt-4, text-center and rotate-90, that can be composed to build any design, directly in your markup.

CSS First

Bootstrap

Quickly design and customize responsive mobile-first sites with Bootstrap, the world’s most popular front-end open source toolkit, featuring Sass variables and mixins, responsive grid system, extensive prebuilt components, and powerful JavaScript plugins.

Bulma

Bulma is a free, open source framework that provides ready-to-use frontend components that you can easily combine to build responsive web interfaces.

Testing your apps

Before delivering your application to users, you need to be sure that your app meets the requirements it was designed for, and that it doesn't do any weird, unintended things (called 'bugs'). To accomplish this, we 'test' our applications in different ways.

Jest

Jest is a delightful JavaScript Testing Framework with a focus on simplicity. It works with projects using: Babel, TypeScript, Node, React, Angular, Vue and more!
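
A minimal Jest test in TypeScript (assuming Jest's test/expect globals are available in the project):

// sum.test.ts — a local helper plus a test that exercises it
function sum(a: number, b: number): number {
  return a + b;
}

test('adds two numbers', () => {
  expect(sum(2, 3)).toBe(5);
});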

Playwright

Playwright is an open-source test automation library initially developed by Microsoft contributors. It supports programming languages such as Java, Python, C#, and JavaScript/TypeScript on Node.js. Playwright comes with an Apache 2.0 License and is most popular with Node.js using JavaScript/TypeScript.

React Testing Library

The React Testing Library is a very lightweight solution for testing React components. It provides light utility functions on top of react-dom and react-dom/test-utils, in a way that encourages better testing practices. Its primary guiding principle is: The more your tests resemble the way your software is used, the more confidence they can give you.

Cypress

Cypress framework is a JavaScript-based end-to-end testing framework built on top of Mocha – a feature-rich JavaScript test framework running on and in the browser, making asynchronous testing simple and convenient. It also uses a BDD/TDD assertion library and a browser to pair with any JavaScript testing framework.

Free Resources

Enzyme

Enzyme is a JavaScript Testing utility for React that makes it easier to test your React Components' output. You can also manipulate, traverse, and in some ways simulate runtime given the output.

Other Options

Mocha

Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun. Mocha tests run serially, allowing for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases.

Free Resources

Chai

Chai is a BDD / TDD assertion library for node and the browser that can be delightfully paired with any javascript testing framework.

Free Resources

Ava

Ava is a JavaScript test runner. It utilizes the async I/O nature of Node and runs concurrent tests, thereby vastly decreasing your test times.

Free Resources

Jasmine

Jasmine is a behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks. It does not require a DOM. And it has a clean, obvious syntax so that you can easily write tests. It provides utilities that can be used to run automated tests for both synchronous and asynchronous code.

Free Resources

Type Checkers

Type checkers help developers write code with fewer bugs by adding types to their code, trying to catch type errors within your code, and then removing them during compile time. Flow and TypeScript are two popular static type checkers for JavaScript.

TypeScript

TypeScript is a strongly typed programming language that builds on JavaScript, giving you better tooling at any scale.
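
A small sketch of what that tooling buys you (the User shape is just an example):

interface User {
  id: number;
  name: string;
  email?: string;            // optional property
}

function describeUser(user: User): string {
  return `${user.name} (#${user.id})`;
}

console.log(describeUser({ id: 1, name: 'Ada' }));
// describeUser({ id: '1', name: 'Ada' });  // compile-time error: string is not assignable to number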

Free Resources

Flow

Flow is a static type checker, designed to find type errors in JavaScript programs.

Free Resources

Progressive Web Apps

Progressive Web Apps (PWAs) are websites that are progressively enhanced to function like installed, native apps on supporting platforms, while functioning like regular websites on other browsers.

Performance

Performance plays a significant role in the success of any online venture, as high performing sites engage and retain users better than poorly performing ones. Tools like Lighthouse and Devtools highlight performance metrics and help improve performance of PWAs.

PWA APIs

One of the main purposes of PWAs is to provide a native-app-like experience. APIs like service workers, web sockets, and storage allow a PWA to fast load, access data offline, and other capabilities, similar to a native app.
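
A minimal TypeScript sketch of registering a service worker (assuming a sw.js file is served from the site root):

if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js')      // the worker can then cache assets and serve them offline
    .then((registration) => console.log('Service worker registered with scope', registration.scope))
    .catch((error) => console.error('Registration failed', error));
}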

Server-side rendering

Server-side rendering refers to the process in which the server assembles the HTML of the page, sends it to the browser, and then binds state and events to make it a fully interactive page.

React

React is the most popular front-end JavaScript library for building user interfaces. React can also render on the server using Node and power mobile apps using React Native.

Next.js

Next.js is an open-source development framework built on top of Node.js enabling React based web applications functionalities such as server-side rendering and generating static websites.

Free Resources

Remix

Remix is a full stack web framework that lets you focus on the user interface and work back through web standards to deliver a fast, slick, and resilient user experience. People are gonna love using your stuff.

Free Resources

After.js

After.js is an open-source JavaScript framework for developing SSR (Server Side Rendering) based applications. It is similar to the Next.js framework for server-rendered React apps but uses React Router instead of a folder structure based router like Next.js

Free Resources

Angular

Angular is a component based front-end development framework built on TypeScript which includes a collection of well-integrated libraries that include features like routing, forms management, client-server communication, and more.

Angular Universal

The Angular Universal project is a community driven project to expand on the core APIs from Angular (platform-server) to enable developers to do server-side rendering of Angular applications. It mainly uses Express to render pages on a Node.js server.

Vue.js

Vue.js is an open-source JavaScript framework for building user interfaces and single-page applications. It is mainly focused on front end development.

Nuxt.js

Nuxt.js is a free and open source JavaScript library based on Vue.js, Node.js, Webpack and Babel.js. Nuxt is inspired by Next.js, which is a framework of similar purpose, based on React.js.

Graphql

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
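
A rough TypeScript sketch of sending a GraphQL query over HTTP with fetch (the /graphql endpoint and schema are hypothetical):

async function loadUser(): Promise<void> {
  // The client asks for exactly the fields it needs, nothing more
  const query = `
    query {
      user(id: "1") {
        name
        email
      }
    }
  `;
  const response = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  const { data } = await response.json();
  console.log(data.user.name);
}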

Apollo

Apollo is a platform for building a unified graph, a communication layer that helps you manage the flow of data between your application clients (such as web and native apps) and your back-end services.

Relay Modern

Relay is a JavaScript client used in the browser to fetch GraphQL data. It's a JavaScript framework developed by Facebook for managing and fetching data in React applications. It is built with scalability in mind in order to power complex applications like Facebook. The ultimate goal of GraphQL and Relay is to deliver instant UI-response interactions.

Static Site Generators

A static site generator is a tool that generates a full static HTML website based on raw data and a set of templates. Essentially, a static site generator automates the task of coding individual HTML pages and gets those pages ready to serve to users ahead of time. Because these HTML pages are pre-built, they can load very quickly in users' browsers.

Next.js

Next.js is an open-source development framework built on top of Node.js enabling React based web applications functionalities such as server-side rendering and generating static websites.

Free Resources

Remix

Remix is a full stack web framework that lets you focus on the user interface and work back through web standards to deliver a fast, slick, and resilient user experience. People are gonna love using your stuff.

Free Resources

Gatsby

Gatsby is a React-based open source framework with performance, scalability and security built-in.

Free Resources

Nuxt js

Nuxt.js is an open-source development framework built on top of Node.js enabling Vue based web applications functionalities such as server-side rendering and generating static websites.

Free Resources

Vuepress

VuePress is composed of two parts: a minimalistic static site generator with a Vue-powered theming system and Plugin API, and a default theme optimized for writing technical documentation. It was created to support the documentation needs of Vue’s own sub projects.

Free Resources

Jekyll

Jekyll is a static site generator. It takes text written in your favorite markup language and uses layouts to create a static website. You can tweak the site’s look and feel, URLs, the data displayed on the page, and more.

Hugo

Hugo is the world’s fastest static website engine. It’s written in Go (aka Golang) and developed by bep, spf13 and friends.

Free Resources

Gridsome

Gridsome is a Vue.js powered Jamstack framework for building static generated websites & apps that are fast by default.

Eleventy

Eleventy (11ty) is a simple to use, easy to customize, highly performant and powerful static site generator with a helpful set of plugins (e.g. navigation, build-time image transformations, cache assets). Pages can be built and written with a variety of template languages (HTML, Markdown, JavaScript, Liquid, Nunjucks, Handlebars, Mustache, EJS, Haml, Pug or JS template literals). But it also offers the possibility to dynamically create pages from local data or external sources that are compiled at build time. It has zero client-side JavaScript dependencies.

Mobile applications

A while back, developing a mobile app using JavaScript was impossible. But now JavaScript developers can create mobile applications using their web development knowledge. Here is the list of options to create mobile applications in JavaScript.

React Native

React Native is a popular JavaScript-based mobile app framework that allows you to build natively-rendered mobile apps for iOS and Android. The framework lets you create an application for various platforms by using the same codebase.

NativeScript

NativeScript is an open source framework for creating native iOS and Android apps in Angular, TypeScript, or JavaScript.

Flutter

Flutter is a free and open-source mobile UI framework created by Google and released in May 2017. In a few words, it allows you to create a native mobile application with only one codebase. This means that you can use one programming language and one codebase to create two different apps (for iOS and Android).

Flutter consists of two important parts:

  • An SDK (Software Development Kit): A collection of tools that are going to help you develop your applications. This includes tools to compile your code into native machine code (code for iOS and Android).
  • A Framework (UI Library based on widgets): A collection of reusable UI elements (buttons, text inputs, sliders, and so on) that you can personalize for your own needs. To develop with Flutter, you will use a programming language called Dart. The language was created by Google in October 2011, but it has improved a lot over these past years.

Dart focuses on front-end development, and you can use it to create mobile and web applications.

If you know a bit of programming, Dart is a typed, object-oriented programming language. You can compare Dart's syntax to JavaScript.

Ionic

Ionic framework is an open-source UI toolkit for building performant, high-quality mobile apps, desktop apps, and progressive web apps using web technologies such as HTML, CSS, and JavaScript.

Desktop Applications in JavaScript

A while back, developing a desktop app using JavaScript was impossible. But now JavaScript developers can create desktop applications using their web development knowledge. Here is the list of options to create desktop applications in JavaScript.

Electron

Electron allows you to build cross-platform desktop applications with HTML, CSS, and JavaScript/TypeScript. It uses Chromium and Node.js, so essentially it is a browser-like application that is compatible with Mac, Windows, and Linux.

Tauri

Tauri is a toolkit that helps developers make applications for the major desktop platforms - using virtually any frontend framework in existence. The core is built with Rust, and the CLI leverages Node.js making Tauri a genuinely polyglot approach to creating and maintaining great apps.

Proton native

Proton Native allows you to create desktop applications through a React syntax, on all platforms.

Web Assembly

WebAssembly is a new type of code that can be run in modern web browsers — it is a low-level assembly-like language with a compact binary format that runs with near-native performance and provides languages such as C/C++, C# and Rust with a compilation target so that they can run on the web. It is also designed to run alongside JavaScript, allowing both to work together.
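
A minimal TypeScript sketch of loading and calling a module in the browser (the add.wasm file and its add export are hypothetical; run inside an async context or ES module):

const { instance } = await WebAssembly.instantiateStreaming(fetch('/add.wasm'));
const add = instance.exports.add as unknown as (a: number, b: number) => number;
console.log(add(2, 3)); // 5 — runs at near-native speed alongside JavaScript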

Internet

The Internet is a global network of computers connected to each other which communicate through a standardized set of protocols.

Internet

The Internet is a global network of computers connected to each other which communicate through a standardized set of protocols.

What is HTTP?

HTTP is the TCP/IP based application layer communication protocol which standardizes how the client and server communicate with each other. It defines how the content is requested and transmitted across the internet.

Browsers

A web browser is a software application that enables a user to access and display web pages or other online content through its graphical user interface.

DNS

The Domain Name System (DNS) is the phonebook of the Internet. Humans access information online through domain names, like nytimes.com or espn.com. Web browsers interact through Internet Protocol (IP) addresses. DNS translates domain names to IP addresses so browsers can load Internet resources.

Domain Name

A domain name is a unique, easy-to-remember address used to access websites, such as ‘google.com’, and ‘facebook.com’. Users can connect to websites using domain names thanks to the DNS system.

Hosting

Web hosting is an online service that allows you to publish your website files onto the internet. So, anyone who has access to the internet has access to your website.

Basic Frontend Knowledge

As a backend developer, you may not need to have proficient knowledge of the frontend stack but you should at least have some basic understanding of HTML, CSS and JavaScript.

HTML

HTML stands for HyperText Markup Language. It is used on the frontend and gives the structure to the webpage which you can style using CSS and make interactive using JavaScript.

CSS

CSS or Cascading Style Sheets is the language used to style the frontend of any website. CSS is a cornerstone technology of the World Wide Web, alongside HTML and JavaScript.

JavaScript

JavaScript allows you to add interactivity to your pages. Common examples that you may have seen on the websites are sliders, click interactions, popups and so on.

General Knowledge

An Operating System is a program that manages a computer’s resources, especially the allocation of those resources among other programs. Typical resources include the central processing unit (CPU), computer memory, file storage, input/output (I/O) devices, and network connections.

Terminals, also known as command lines or consoles, allow us to accomplish and automate tasks on a computer without the use of a graphical user interface.

Operating Systems

An operating system is the main program on a computer that governs all other applications. It allows you to use browsers, play games, print documents, and launch your favorite programs.

Process Management

Process management involves various tasks like the creation, scheduling and termination of processes, and deadlock handling. A process is a program that is under execution, which is an important part of modern-day operating systems. The OS must allocate resources that enable processes to share and exchange information. It also protects the resources of each process from other processes and allows synchronization among processes.

Threads and Concurrency

A thread is the smallest unit of processing that can be performed in an OS. In most modern operating systems, a thread exists within a process - that is, a single process may contain multiple threads.

Concurrency refers to the execution of multiple threads at the same time. It occurs in an operating system when multiple process threads are executing concurrently. These threads can interact with one another via shared memory or message passing. Concurrency results in resource sharing, which causes issues like deadlocks and resource scarcity. It aids with techniques such as process coordination, memory allocation, and execution schedule to maximize throughput.
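
A rough Node.js sketch of threads and message passing with the built-in worker_threads module (TypeScript, assuming a CommonJS build so __filename is available):

import { Worker, isMainThread, parentPort } from 'node:worker_threads';

if (isMainThread) {
  const worker = new Worker(__filename);            // spawn a second thread running this same file
  worker.on('message', (msg) => console.log('from worker:', msg));
  console.log('main thread keeps running while the worker works');
} else {
  // Worker side: do some work, then communicate via message passing
  parentPort?.postMessage('heavy computation finished');
}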

Terminal Usage

Working within the terminal is common practice for any Backend Developer and there are many commands and utilities that can help you achieve your tasks more efficiently.

The best way to learn these commands is to practice them on your own machine/environment. Specifically, these are related to Linux commands/utilities, which are the most prevalent in the market.

To understand these commands, read through the manual pages by using the man command, e.g. man grep, man awk etc.

After enough exposure and practice, it will become easier to use these commands in your day-to-day work.

Memory Management

The term Memory can be defined as a collection of data in a specific format. It is used to store instructions and process data. The memory comprises a large array or group of words or bytes, each with its own location. The primary motive of a computer system is to execute programs. These programs, along with the information they access, should be in the main memory during execution. The CPU fetches instructions from memory according to the value of the program counter.

To achieve a degree of multiprogramming and proper utilization of memory, memory management is important. There are several memory management methods, reflecting various approaches, and the effectiveness of each algorithm depends on the situation.

Interprocess Communication

Interprocess communication (IPC) refers specifically to the mechanisms an operating system provides to allow processes to manage shared data.

I/O Management

One of the important jobs of an Operating System is to manage various I/O devices including mouse, keyboards, touchpad, disk drives, display adapters, USB devices, Bit-mapped screens, LED, Analog-to-digital converter, On/off switch, network connections, audio I/O, printers, etc.

POSIX Basics

POSIX (Portable Operating System Interface) is a family of standards for maintaining compatibility between operating systems. It describes utilities, APIs, and services that a compliant OS should provide to software, thus making it easier to port programs from one system to another.

A practical example: in a Unix-like operating system, there are three standard streams, stdin, stdout and stderr - they are I/O connections that you will probably come across when using a terminal, as they manage the flow from the standard input (stdin), standard output (stdout) and standard error (stderr).

So, in this case, when we want to interact with any of these streams (through a process, for example), the POSIX operating system API makes it easier - for example, in the <unistd.h> C header where the stdin, stderr, and stdout are defined as STDIN_FILENO, STDERR_FILENO and STDOUT_FILENO.

POSIX also adds a standard for exit codes, filesystem semantics, and several other command line utility API conventions.

Basic Networking Concepts

Computer networking refers to interconnected computing devices that can exchange data and share resources with each other. These networked devices use a system of rules, called communications protocols, to transmit information over physical or wireless technologies.

Learn a Language

Even if you’re a beginner, you have probably heard that Web Development is broadly classified into two facets: Frontend Development and Backend Development. And obviously, they both have their respective sets of tools and technologies. For instance, when we talk about Frontend Development, three names always come up first and foremost – HTML, CSS, and JavaScript.

In the same way, when it comes to Backend Web Development – we primarily require a backend (or you can say server-side) programming language to make the website function along with various other tools & technologies such as databases, frameworks, web servers, etc.

Go

Go is an open source programming language supported by Google. Go can be used to write cloud services, CLI tools, used for API development, and much more.

Rust

Rust is a modern systems programming language focusing on safety, speed, and concurrency. It accomplishes these goals by being memory safe without using garbage collection.

Java

Java is general-purpose language, primarily used for Internet-based applications. It was created in 1995 by James Gosling at Sun Microsystems and is one of the most popular options for backend developers.

C#

C# (pronounced 'C sharp') is a general purpose programming language made by Microsoft. It is used to perform different tasks and can be used to create web apps, games, mobile apps, etc.

PHP

PHP is a general purpose scripting language often used for making dynamic and interactive Web pages. It was originally created by Danish-Canadian programmer Rasmus Lerdorf in 1994. The PHP reference implementation is now produced by The PHP Group and supported by PHP Foundation. PHP supports procedural and object-oriented styles of programming with some elements of functional programming as well.

JavaScript

JavaScript, often abbreviated JS, is a programming language that is one of the core technologies of the World Wide Web, alongside HTML and CSS. It lets us add interactivity to pages e.g. you might have seen sliders, alerts, click interactions, and popups etc on different websites -- all of that is built using JavaScript. Apart from being used in the browser, it is also used in other non-browser environments as well such as Node.js for writing server-side code in JavaScript, Electron for writing desktop applications, React Native for mobile applications and so on.

Python

Python is a well known programming language which is both a strongly typed and a dynamically typed language. Being an interpreted language, code is executed as soon as it is written and the Python syntax allows for writing code in functional, procedural or object-oriented programmatic ways.

Ruby

Ruby is a high-level, interpreted programming language that blends Perl, Smalltalk, Eiffel, Ada, and Lisp. Ruby focuses on simplicity and productivity along with a syntax that reads and writes naturally. Ruby supports procedural, object-oriented and functional programming and is dynamically typed.

Version Control Systems

Version control/source control systems allow developers to track and control changes to code over time. These services often include the ability to make atomic revisions to code, branch/fork off of specific points, and to compare versions of code. They are useful in determining the who, what, when, and why of code changes.

Git

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

Repo Hosting Services

When working on a team, you often need a remote place to put your code so others can access it, create their own branches, and create or review pull requests. These services often include issue tracking, code review, and continuous integration features. A few popular choices are GitHub, GitLab, BitBucket, and AWS CodeCommit.

GitHub

GitHub is a provider of Internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

GitLab

GitLab is a provider of internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Bitbucket

Bitbucket is a Git based hosting and source code repository service that is Atlassian's alternative to other products like GitHub, GitLab, etc.

Bitbucket offers hosting options via Bitbucket Cloud (Atlassian's servers), Bitbucket Server (customer's on-premise) or Bitbucket Data Center (a number of servers in the customer's on-premise or cloud environment).

Relational Databases

A relational database is a type of database that stores and provides access to data points that are related to one another. Relational databases store data in a series of tables. Interconnections between the tables are specified as foreign keys. A foreign key is a unique reference from one row in a relational table to another row in a table, which can be the same table but is most commonly a different table.

PostgreSQL

PostgreSQL, also known as Postgres, is a free and open-source relational database management system emphasizing extensibility and SQL compliance.

MySQL

MySQL is an incredibly popular open source relational database management system (RDBMS). MySQL can be used as a stand-alone client or in conjunction with other services to provide database connectivity. The M in LAMP stack stands for MySQL; that alone should provide an idea of its prevalence.

MariaDB

MariaDB server is a community developed fork of MySQL server. Started by core members of the original MySQL team, MariaDB actively works with outside developers to deliver the most featureful, stable, and sanely licensed open SQL server in the industry. MariaDB was created with the intention of being a more versatile, drop-in replacement version of MySQL.

MS SQL

MS SQL (or Microsoft SQL Server) is the Microsoft developed relational database management system (RDBMS). MS SQL uses the T-SQL (Transact-SQL) query language to interact with the relational databases. There are many different versions and editions of MS SQL available.

Oracle

Oracle Database Server or sometimes called Oracle RDBMS or even simply Oracle is a world leading relational database management system produced by Oracle Corporation.

NoSQL databases

NoSQL databases offer data storage and retrieval that is modelled differently to 'traditional' relational databases. NoSQL databases typically focus more on horizontal scaling, eventual consistency, speed and flexibility, and are commonly used for big data and real-time streaming applications. NoSQL is often described as a BASE system (Basically Available, Soft state, Eventual consistency) as opposed to SQL/relational, which typically focuses on ACID (Atomicity, Consistency, Isolation, Durability). Common NoSQL data structures include key-value pair, wide column, graph and document.

Document databases

MongoDB

MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License (SSPL).

DynamoDB

DynamoDB is a fully managed proprietary NoSQL database service that supports key–value and document data structures and is offered by Amazon.com as part of the Amazon Web Services portfolio. DynamoDB exposes a similar data model to and derives its name from Dynamo, but has a different underlying implementation.

Column Databases

A wide-column database (sometimes referred to as a column database) is similar to a relational database. It stores data in tables, rows and columns. However, in contrast to relational databases, each row can have its own set of columns. Column databases can be seen as two-dimensional key-value databases. One such database system is Apache Cassandra.

Warning: note that a 'columnar database' and a 'column database' are two different terms!

Timeseries databases

InfluxDB

InfluxDB was built from the ground up to be a purpose-built time series database; i.e., it was not repurposed to be time series. Time was built-in from the beginning. InfluxDB is part of a comprehensive platform that supports the collection, storage, monitoring, visualization and alerting of time series data. It’s much more than just a time series database.

Realtime databases

Databases

A database is a collection of useful data of one or more related organizations structured in a way to make data an asset to the organization. A database management system is a software designed to assist in maintaining and extracting large collections of data in a timely fashion.

ORMs

Object-Relational Mapping (ORM) is a technique that lets you query and manipulate data from a database using an object-oriented paradigm. When talking about ORM, most people are referring to a library that implements the Object-Relational Mapping technique, hence the phrase 'an ORM'.

ACID

ACID are the four properties of any database system that help in making sure that we are able to perform transactions in a reliable manner. It's an acronym which refers to the presence of four properties: atomicity, consistency, isolation and durability.

Transactions

In short, a database transaction is a sequence of multiple operations performed on a database, and all served as a single logical unit of work — taking place wholly or not at all. In other words, there's never a case where only half of the operations are performed and the results saved.
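
A rough TypeScript sketch of that all-or-nothing behavior, using the node-postgres client (the accounts table and transfer are hypothetical):

import { Client } from 'pg';

async function transferFunds(client: Client, fromId: number, toId: number, amount: number): Promise<void> {
  try {
    await client.query('BEGIN');
    await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, fromId]);
    await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, toId]);
    await client.query('COMMIT');   // both updates become visible together
  } catch (err) {
    await client.query('ROLLBACK'); // neither update is kept
    throw err;
  }
}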

N plus one problem

The N+1 query problem happens when your code executes N additional query statements to fetch the same data that could have been retrieved when executing the primary query.
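
A rough TypeScript sketch of the pattern (db, posts and comments are hypothetical; only the query shape matters):

// Hypothetical query helper, declared only so the sketch type-checks
declare const db: { query(sql: string, params?: unknown[]): Promise<any[]> };

async function nPlusOne(): Promise<void> {
  // N+1: one query for the posts, then one extra query per post
  const posts = await db.query('SELECT id, title FROM posts');
  for (const post of posts) {
    post.comments = await db.query('SELECT * FROM comments WHERE post_id = $1', [post.id]);
  }
}

async function singleQuery(): Promise<void> {
  // Better: let the primary query fetch everything in one round trip
  const rows = await db.query(
    'SELECT p.id, p.title, c.body FROM posts p LEFT JOIN comments c ON c.post_id = p.id'
  );
  console.log(rows.length);
}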

Database Normalization

Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by Edgar F. Codd as part of his relational model.

Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design).

Indexes

An index is a data structure that you build and assign on top of an existing table that basically looks through your table and tries to analyze and summarize it so that it can create shortcuts.

Data Replication

Data replication is the process by which data residing on a physical/virtual server(s) or cloud instance (primary instance) is continuously replicated or copied to a secondary server(s) or cloud instance (standby instance). Organizations replicate data to support high availability, backup, and/or disaster recovery.

Sharding strategies

Sharding strategy is a technique to split a large dataset into smaller chunks (logical shard) in which we distribute these chunks in different machines/database nodes in order to distribute the traffic load. It’s a good mechanism to improve the scalability of an application.

CAP Theorem

CAP is an acronym that stands for Consistency, Availability and Partition Tolerance. According to CAP theorem, any distributed system can only guarantee two of the three properties at any point of time. You can't guarantee all three properties at once.

APIs

API is the acronym for Application Programming Interface, which is a software intermediary that allows two applications to talk to each other.

REST

REST, or REpresentational State Transfer, is an architectural style for providing standards between computer systems on the web, making it easier for systems to communicate with each other.

JSON APIs

JSON or JavaScript Object Notation is an encoding scheme that is designed to eliminate the need for ad-hoc code for each application to communicate with servers in a defined way. The JSON API module exposes an implementation for data stores and data structures, such as entity types, bundles, and fields.

SOAP

Simple Object Access Protocol (SOAP) is a message protocol for exchanging information between systems and applications. When it comes to application programming interfaces (APIs), a SOAP API is developed in a more structured and formalized way. SOAP messages can be carried over a variety of lower-level protocols, including the web-related Hypertext Transfer Protocol (HTTP).

gRPC

gRPC is a high-performance, open source universal RPC framework

RPC stands for Remote Procedure Call, there's an ongoing debate on what the g stands for. RPC is a protocol that allows a program to execute a procedure of another program located on another computer. The great advantage is that the developer doesn’t need to code the details of the remote interaction. The remote procedure is called like any other function. But the client and the server can be coded in different languages.

Hateoas

HATEOAS is an acronym for Hypermedia As The Engine Of Application State. It's the concept that when sending information over a RESTful API, the document received should contain everything the client needs in order to parse and use the data, i.e. the client doesn't have to contact any other endpoint not explicitly mentioned within the document.
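
A hypothetical response of that kind, written out as a TypeScript object literal:

const orderResponse = {
  id: 42,
  status: 'processing',
  _links: {
    self:   { href: '/orders/42' },
    cancel: { href: '/orders/42/cancel' }, // the client discovers available actions from the document itself
    items:  { href: '/orders/42/items' },
  },
};
console.log(orderResponse._links.self.href);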

OpenAPI Specification

The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic.

An OpenAPI definition can then be used by documentation generation tools to display the API, code generation tools to generate servers and clients in various programming languages, testing tools, and many other use cases.

Authentication

The API authentication process validates the identity of the client attempting to make a connection by using an authentication protocol. The protocol sends the credentials from the remote client requesting the connection to the remote access server in either plain text or encrypted form. The server then knows whether it can grant access to that remote client or not.

Here is the list of common ways of authentication:

Cookie-Based Authentication

Cookies are pieces of data used to identify the user and their preferences. The browser returns the cookie to the server every time the page is requested. Specific cookies like HTTP cookies are used to perform cookie-based authentication to maintain the session for each user.
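A quick sketch of the flow with curl (the site, endpoints, and credentials are hypothetical):

```bash
# Log in once; -c stores the session cookie that the server sets in its response.
curl -s -c cookies.txt -d 'username=alice&password=secret' https://example.com/login

# Later requests send the stored cookie back with -b, so the server recognizes the session.
curl -s -b cookies.txt https://example.com/dashboard
```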

OAuth

OAuth stands for Open Authorization and is an open standard for authorization. It works to authorize devices, APIs, servers and applications using access tokens rather than user credentials, known as 'secure delegated access'.

In its simplest form, OAuth delegates authentication to services like Facebook, Amazon, or Twitter and authorizes third-party applications to access the user's account without the user having to hand over their login and password.

It is mostly utilized for REST APIs and only provides a limited scope of a user's data.

Basic authentication

Basic authentication is the simplest HTTP authentication scheme: the client sends a username and password, base64-encoded, in the Authorization header of each request, which is why it should only ever be used over HTTPS.

Token authentication

Token-based authentication is a protocol which allows users to verify their identity, and in return receive a unique access token. During the life of the token, users then access the website or app that the token has been issued for, rather than having to re-enter credentials each time they go back to the same webpage, app, or any resource protected with that same token.

Auth tokens work like a stamped ticket. The user retains access as long as the token remains valid. Once the user logs out or quits an app, the token is invalidated.

Token-based authentication is different from traditional password-based or server-based authentication techniques. Tokens offer a second layer of security, and administrators have detailed control over each action and transaction.

But using tokens requires a bit of coding know-how. Most developers pick up the techniques quickly, but there is a learning curve.
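A minimal sketch of both schemes with curl (the API host is hypothetical, and <access-token> is a placeholder for a token issued at login):

```bash
# Basic authentication: curl base64-encodes user:password into the Authorization header.
curl -s -u alice:secret https://api.example.com/profile

# Token (Bearer) authentication: the previously issued token is sent with every request.
curl -s -H 'Authorization: Bearer <access-token>' https://api.example.com/profile
```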

JWT

JWT, or JSON Web Token, is an open standard for transferring information securely between parties as an encoded and signed (and optionally encrypted) JSON object. Clients and servers use JWTs to share information, with the token containing encoded JSON claims. JWTs are designed to be compact, safe to use within URLs, and ideal for SSO contexts.
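To make the structure concrete, here is a self-contained bash sketch that builds an unsigned example token and decodes its payload (purely for illustration; real JWTs carry a cryptographic signature that must be verified):

```bash
# A JWT is three base64url-encoded parts joined by dots: header.payload.signature
header=$(printf '{"alg":"none","typ":"JWT"}' | base64 | tr '+/' '-_' | tr -d '=\n')
payload=$(printf '{"sub":"1234567890","name":"Ada"}' | base64 | tr '+/' '-_' | tr -d '=\n')
jwt="$header.$payload."
echo "$jwt"

# Decode the payload: take the second part, restore padding, undo the base64url mapping.
part=$(echo "$jwt" | cut -d '.' -f 2)
while [ $(( ${#part} % 4 )) -ne 0 ]; do part="${part}="; done
echo "$part" | tr '_-' '/+' | base64 -d; echo
```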

OpenID

OpenID is a protocol that utilizes the authorization and authentication mechanisms of OAuth 2.0 and is now widely adopted by many identity providers on the Internet. It solves the problem of needing to share a user's personal info between many different web services (e.g. online shops, discussion forums, etc.).

Caching

Caching is a technique of storing frequently used data or information in local memory for a certain time period. The next time the client requests the same information, it is served from that local memory instead of being retrieved from the database. The main advantage of caching is that it improves performance by reducing the processing burden.

CDN (Content Delivery Network)

A Content Delivery Network (CDN) service aims to provide high availability and performance improvements for websites. This is achieved with fast delivery of website assets and content, typically via endpoints geographically closer to the client making the request. Traditional commercial CDNs (Amazon CloudFront, Akamai, CloudFlare and Fastly) provide servers across the globe which can be used for this purpose. Serving assets and content via a CDN reduces bandwidth on website hosting, provides an extra layer of caching to reduce potential outages, and can improve website security as well.

Server side

Server-side caching temporarily stores web files and data on the origin server to reuse later.

When the user first requests a webpage, the site goes through the normal process of retrieving data from the server and generating or constructing the page. After the request has been handled and the response sent back, the server copies the webpage and stores it as a cache.

Next time the user revisits the website, it loads the already saved or cached copy of the webpage, thus making it faster.

Redis

Redis is an open source (BSD licensed), in-memory data structure store used as a database, cache, message broker, and streaming engine. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
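A few redis-cli commands showing the cache-style usage and a couple of the richer data structures (this assumes a Redis server is running locally on the default port 6379):

```bash
# Store and read back a key with a 60-second expiry.
redis-cli SET greeting "hello" EX 60
redis-cli GET greeting
redis-cli TTL greeting

# A list used as a simple queue, and a sorted set used as a leaderboard.
redis-cli LPUSH jobs "job-1" "job-2"
redis-cli RPOP jobs
redis-cli ZADD leaderboard 100 "alice" 85 "bob"
redis-cli ZRANGE leaderboard 0 -1 WITHSCORES
```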

Memcached

Memcached (pronounced variously mem-cash-dee or mem-cashed) is a general-purpose distributed memory-caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, licensed under the Revised BSD license. Memcached runs on Unix-like operating systems (Linux and macOS) and on Microsoft Windows. It depends on the libevent library.

Memcached's APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in the least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.

Memcached has no internal mechanism to track misses which may happen. However, some third-party utilities provide this functionality.
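Because Memcached speaks a simple text protocol, you can try it directly from the shell. A sketch, assuming an instance listening on the default port 11211 (depending on your nc variant you may need an extra flag such as -q 1):

```bash
# set <key> <flags> <ttl-seconds> <bytes>, then the value, then read it back and quit.
printf 'set greeting 0 60 5\r\nhello\r\nget greeting\r\nquit\r\n' | nc localhost 11211
```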

Client side

Client-side caching keeps copies of previously fetched resources (such as pages, scripts, and images) in the user's browser, typically controlled through HTTP cache headers, so repeat visits can be served without another round trip to the server.

Web Security Knowledge

Web security refers to the protective measures taken by the developers to protect the web applications from threats that could affect the business.

MD5

MD5 (Message-Digest Algorithm 5) is a hash function that is currently advised not to be used due to its extensive vulnerabilities. It is still used as a checksum to verify data integrity.

SHA family

SHA (Secure Hash Algorithms) is a family of cryptographic hash functions created by the NIST (National Institute of Standards and Technology). The family includes:

  • SHA-0: Published in 1993, this is the first algorithm in the family. Shortly after its release, it was discontinued for an undisclosed significant flaw.

  • SHA-1: Created to replace SHA-0, and resembling MD5, this algorithm has been considered insecure since 2010.

  • SHA-2: This isn't an algorithm, but a set of them, with SHA-256 and SHA-512 being the most popular. SHA-2 is still secure and widely used.

  • SHA-3: Born in a competition, this is the newest member of the family. SHA-3 is very secure and doesn't carry the same design flaws as its brethren.

  • Wikipedia - SHA-1 (Read)

  • Wikipedia - SHA-2 (Read)

  • Wikipedia - SHA-3 (Read)

Bcrypt

bcrypt is a password hashing function that has been proven reliable and secure since its release in 1999. It has been implemented in most commonly used programming languages.

Scrypt

Scrypt (pronounced 'ess crypt') is a password hashing function (like bcrypt). It is designed to be expensive in both memory and hardware, which makes brute-force attacks more difficult. Scrypt is also used as a proof-of-work algorithm by some cryptocurrencies.

HTTPS

HTTPS is a secure way to send data between a web server and a browser.

Content Security Policy

Content Security Policy is a computer security standard introduced to prevent cross-site scripting, clickjacking and other code injection attacks resulting from execution of malicious content in the trusted web page context.

Cors

Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources.
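You can simulate the browser's CORS preflight request with curl to see which Access-Control-* headers a server returns (the API and origin below are hypothetical):

```bash
# Preflight for a cross-origin PUT; -i prints the response headers.
curl -is -X OPTIONS https://api.example.com/items/1 \
  -H 'Origin: https://app.example.org' \
  -H 'Access-Control-Request-Method: PUT' \
  -H 'Access-Control-Request-Headers: Content-Type'

# A permissive server would answer with headers such as:
#   Access-Control-Allow-Origin: https://app.example.org
#   Access-Control-Allow-Methods: PUT
```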

SSL/TLS

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols used to provide security in internet communications. These protocols encrypt the data that is transmitted over the web, so anyone who tries to intercept packets will not be able to interpret the data. One difference that is important to know is that SSL is now deprecated due to security flaws, and most modern web browsers no longer support it. But TLS is still secure and widely supported, so preferably use TLS.
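A handy way to look at TLS from the command line is openssl s_client. A sketch, with example.com standing in for any HTTPS host:

```bash
# Open a TLS connection and show the negotiated protocol version and cipher.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | grep -E 'Protocol|Cipher'

# Print the certificate's subject, issuer, and validity window.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```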

OWASP Security Risks

OWASP or Open Web Application Security Project is an online community that produces freely-available articles, methodologies, documentation, tools, and technologies in the field of web application security.

Testing

A key to building software that meets requirements without defects is testing. Software testing helps developers know they are building the right software. When tests are run as part of the development process (often with continuous integration tools), they build confidence and prevent regressions in the code.

Integration Testing

Integration testing is a broad category of tests where multiple software modules are integrated and tested as a group. It is meant to test the interaction between multiple services, resources, or modules. For example, an API's interaction with a backend service, or a service with a database.

Unit Testing

Unit testing is where individual units (modules, functions/methods, routines, etc.) of software are tested to ensure their correctness. This low-level testing ensures smaller components are functionally sound while taking the burden off of higher-level tests. Generally, a developer writes these tests during the development process and they are run as automated tests.

Functional Testing

Functional testing is where software is tested to ensure functional requirements are met. Usually, it is a form of black box testing in which the tester has no understanding of the source code; testing is performed by providing input and comparing expected/actual output. It contrasts with non-functional testing, which includes performance, load, scalability, and penetration testing.

CI/CD

CI/CD (Continuous Integration/Continuous Deployment) is the practice of automating the building, testing, and deployment of applications, with the main goals of detecting issues early and providing quicker releases to the production environment.

Design and development principles

Design Patterns

Design patterns are typical solutions to commonly occurring problems in software design. They can be broken into three categories: creational, structural, and behavioral.

Domain-Driven Design

Domain-driven design (DDD) is a software design approach focusing on modeling software to match a domain according to input from that domain's experts.

In terms of object-oriented programming, it means that the structure and language of software code (class names, class methods, class variables) should match the business domain. For example, if a software processes loan applications, it might have classes like LoanApplication and Customer, and methods such as AcceptOffer and Withdraw.

DDD connects the implementation to an evolving model and it is predicated on the following goals:

  • Placing the project's primary focus on the core domain and domain logic;

  • Basing complex designs on a model of the domain;

  • Initiating a creative collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems.

  • Domain Driven Design Quickly (Official Docs)

Test Driven Development

Test driven development (TDD) is the process of writing tests for software's requirements which will fail until the software is developed to meet those requirements. Once those tests pass, then the cycle repeats to refactor code or develop another feature/requirement. In theory, this ensures that software is written to meet requirements in the simplest form, and avoids code defects.

SOLID

SOLID is a set of principles applied to object-oriented design (OOD) to create maintainable, understandable, and flexible code, while avoiding code smells and defects. The principles are:

  • Single Responsibility Principle
  • Open/Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

KISS

Keep It Simple, Stupid (KISS) is a software design principle that states avoiding needless complexity is the best way to build software that is easier to maintain, understand, and contains fewer defects. A simple product that does a single thing well is better than a complex product that does many things poorly.

YAGNI

You Aren't Going to Need It (YAGNI) is a software design principle from the Extreme Programming (XP) framework that states when developing software, functionality or features should not be added until they are necessary. Within agile software development in general, requirements are always open to change; any extra functionality may end up being wasted time and resources.

DRY

Don't Repeat Yourself (DRY) is a software design principle which encourages developers to not repeat software patterns or code. DRY encourages code reusability, often in the form of methods, functions, or subroutines. When DRY is implemented successfully, developers are able to make one change to update many related elements while avoiding making changes to unrelated elements.

Architectural patterns

Monolithic Apps

Monolithic architecture is a pattern in which an application handles requests, executes business logic, interacts with the database, and creates the HTML for the front end. In simpler terms, this one application does many things. Its inner components are highly coupled and deployed as one unit.

Microservices

Microservice architecture is a pattern in which highly cohesive, loosely coupled services are separately developed, maintained, and deployed. Each component handles an individual function, and when combined, the application handles an overall business function.

SOA

SOA, or service-oriented architecture, defines a way to make software components reusable via service interfaces. These interfaces utilize common communication standards in such a way that they can be rapidly incorporated into new applications without having to perform deep integration each time.

CQRS and Event Sourcing

CQRS, or command query responsibility segregation, defines an architectural pattern whose main focus is to separate the approach of reading and writing operations for a data store. CQRS can also be used along with the Event Sourcing pattern in order to persist application state as an ordered sequence of events, making it possible to restore data to any point in time.

Serverless

Serverless is an architecture in which a developer builds and runs applications without provisioning or managing servers. With cloud computing/serverless, servers exist but are managed by the cloud provider. Resources are used as they are needed, on demand and often using auto scaling.

Search engines

Elasticsearch

Elasticsearch is, at its core, a document-oriented search engine. It is a document-based database that lets you insert, delete, retrieve, and even perform analytics on the saved records. But Elasticsearch is unlike any other general-purpose database you have worked with in the past: it is essentially a search engine and offers an arsenal of features you can use to retrieve the data stored in it according to your search criteria, at lightning speed.
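Its HTTP API makes this easy to try from the shell. A minimal sketch, assuming a single local node listening on localhost:9200 with security disabled (the index and document are made up, and jq is optional):

```bash
# Index a document; Elasticsearch creates the 'books' index on the fly.
curl -s -X PUT 'http://localhost:9200/books/_doc/1' \
  -H 'Content-Type: application/json' \
  -d '{"title": "Designing Data-Intensive Applications", "year": 2017}'

# Run a full-text search across the index.
curl -s 'http://localhost:9200/books/_search?q=title:data' | jq '.hits.hits'
```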

Solr

Solr is highly reliable, scalable and fault tolerant, providing distributed indexing, replication and load-balanced querying, automated failover and recovery, centralized configuration and more. Solr powers the search and navigation features of many of the world's largest internet sites.

Message Brokers

Message brokers are an inter-application communication technology that helps build a common integration mechanism to support cloud-native, microservices-based, serverless, and hybrid cloud architectures. Two of the most famous message brokers are RabbitMQ and Apache Kafka.

RabbitMQ

With tens of thousands of users, RabbitMQ is one of the most popular open-source message brokers. RabbitMQ is lightweight and easy to deploy on-premises and in the cloud. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.

Kafka

Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.

Containerization vs. Virtualization

Containers and virtual machines are the two most popular approaches to setting up a software infrastructure for your organization.

Docker

Docker is a platform for working with containerized applications. Among its features are a daemon and client for managing and interacting with containers, registries for storing images, and a desktop application to package all these features together.

RKT

RKT (pronounced like a 'rocket') is an application container engine developed for modern production cloud-native environments. It features a pod-native approach, a pluggable execution environment, and a well-defined surface area that makes it ideal for integration with other systems.

The RKT project has since been discontinued and is no longer developed.

LXC

LXC is an abbreviation for Linux Containers, an operating-system-level virtualization method for running multiple isolated Linux systems on a single controlled host via a single Linux kernel. LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.

Graphql

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
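In practice, a GraphQL request is usually an HTTP POST with a JSON body containing the query, which makes it easy to sketch with curl (the endpoint and schema below are hypothetical):

```bash
# Ask only for the fields we need on a single object.
curl -s -X POST https://api.example.com/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ user(id: \"42\") { name email } }"}'
```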

Apollo

Apollo is a platform for building a unified graph, a communication layer that helps you manage the flow of data between your application clients (such as web and native apps) and your back-end services.

Relay Modern

Relay is a JavaScript client used in the browser to fetch GraphQL data. It's a JavaScript framework developed by Facebook for managing and fetching data in React applications. It is built with scalability in mind in order to power complex applications like Facebook. The ultimate goal of GraphQL and Relay is to deliver instant UI-response interactions.

Graph databases

A graph database stores nodes and relationships instead of tables, or documents. Data is stored just like you might sketch ideas on a whiteboard. Your data is stored without restricting it to a pre-defined model, allowing a very flexible way of thinking about and using it.

Neo4j

Neo4j is a popular graph database management system. Neo4j AuraDB is a fast, reliable, scalable, and completely automated Neo4j graph database, provided as a cloud service.

Web sockets

WebSockets provide two-way communication between servers and clients, meaning both parties can communicate and exchange data at the same time. The protocol defines full-duplex communication from the ground up. WebSockets are a step forward in bringing desktop-rich functionality to web browsers.

Web Servers

Web servers can be either hardware or software, or perhaps a combination of the two.

Hardware Side:

A hardware web server is a computer that houses web server software and the files that make up a website (for example, HTML documents, images, CSS stylesheets, and JavaScript files). A web server establishes a connection to the Internet and facilitates the physical data exchange with other web-connected devices.

Software side:

A software web server has a number of software components that regulate how hosted files are accessed by online users. This is at the very least an HTTP server. Software that knows and understands HTTP and URLs (web addresses) is known as an HTTP server (the protocol your browser uses to view webpages). The content of these hosted websites is sent to the end user's device through an HTTP server, which may be accessed via the domain names of the websites it holds.

Basically, an HTTP request is made by a browser anytime it wants a file that is stored on a web server. The relevant (hardware) web server receives the request, which is then accepted by the appropriate (software) HTTP server, which then locates the requested content and returns it to the browser over HTTP. (If the server cannot locate the requested page, it responds with a 404 error.)

Nginx

NGINX is a powerful web server and uses a non-threaded, event-driven architecture that enables it to outperform Apache if configured correctly. It can also do other important things, such as load balancing, HTTP caching, or be used as a reverse proxy.

Apache

Apache is a free, open-source HTTP server, available on many operating systems, but mainly used on Linux distributions. It is one of the most popular options for web developers, as it accounts for over 30% of all the websites, as estimated by W3Techs.

Caddy

The Caddy web server is an extensible, cross-platform, open-source web server written in Go. It has some really nice features like automatic SSL/HTTPs and a really easy configuration file.

MS IIS

Internet Information Services (IIS) for Windows® Server is a flexible, secure and manageable Web server for hosting anything on the Web.

Building for Scale

Speaking in general terms, scalability is the ability of a system to handle a growing amount of work by adding resources to it.

Software that was conceived with a scalable architecture in mind is a system that will support higher workloads without any fundamental changes to it, but don't be fooled: this isn't magic. You'll only get so far with smart thinking before you need to add more resources to it.

For a system to be scalable, there are certain things you must pay attention to, like:

  • Coupling
  • Observability
  • Evolvability
  • Infrastructure

When you think about the infrastructure of a scalable system, you have two main ways of building it: using on-premises resources or leveraging all the tools a cloud provider can give you.

The main difference between on-premises and cloud resources is flexibility: with cloud providers you don't really need to plan ahead and can upgrade your infrastructure with a couple of clicks, while with on-premises resources you will need a certain level of planning.

This section is mainly relevant to the cloud design patterns that help you build scalable solutions. Have a look at the Cloud Design Patterns docs by Microsoft and this video covering the throttling, retry, and circuit breaker patterns.

Instrumentation monitoring telemetry

Migration Strategies

A migration strategy is a plan for moving data from one location to another, and it is an important step in any database migration. A data migration strategy should include a plan for how to move the data and what to do with it once it arrives at the new location.

Horizontal/Vertical Scaling

Horizontal scaling is a change in the number of a resource. For example, increasing the number of virtual machines processing messages in a queue. Vertical scaling is a change in the size/power of a resource. For example, increasing the memory or disk space available to a machine. Scaling can be applied to databases, cloud resources, and other areas of computing.

Observability

In software development, observability is the measure of how well we can understand a system from the work it does, and how to make it better.

So what makes a system 'observable'? It is its ability to produce and collect metrics, logs, and traces so that we can understand what happens under the hood and identify issues and bottlenecks faster.

You can of course implement all those features yourself, but there is a lot of software out there that can help you with it, like Datadog, Sentry, and CloudWatch.

Infrastructure as Code

Sometimes referred to as IaC, this section refers to the techniques and tools used to define infrastructure, typically in a markup language like YAML or JSON. Infrastructure as code allows DevOps Engineers to use the same workflows used by software developers to version, roll back, and otherwise manage changes.

The term Infrastructure as Code encompasses everything from bootstrapping to configuration to orchestration, and it is considered a best practice in the industry to manage all infrastructure as code. This technique precipitated the explosion in system complexity seen in modern DevOps organizations.

Service Mesh

A service mesh, like the open source project Istio, is a way to control how different parts of an application share data with one another. Unlike other systems for managing this communication, a service mesh is a dedicated infrastructure layer built right into an app. This visible infrastructure layer can document how well (or not) different parts of an app interact, so it becomes easier to optimize communication and avoid downtime as an app grows.

Istio

Istio is an open source service mesh platform that provides a way to control how microservices share data with one another. It includes APIs that let Istio integrate into any logging platform, telemetry, or policy system. Istio is designed to run in a variety of environments: on-premise, cloud-hosted, in Kubernetes containers, in services running on virtual machines, and more.

Linkerd

Linkerd is an open source service mesh designed to be deployed into a variety of container schedulers and frameworks such as Kubernetes. It became the original “service mesh” when its creator Buoyant first coined the term in 2016. Like Twitter’s Finagle, on which it was based, Linkerd was first written in Scala and designed to be deployed on a per-host basis. Linkerd is one of the first products to be associated with the term service mesh and supports platforms such as Docker and Kubernetes.

Envoy

Originally created at Lyft, Envoy is a high-performance data plane designed for service mesh architectures. Lyft open sourced it and donated it to the CNCF, where it is now one of the CNCF’s graduated open source projects. Envoy is a self contained process that is designed to run alongside every application server. All of the Envoys form a transparent communication mesh in which each application sends and receives messages to and from localhost and is unaware of the network topology.

Consul

Consul is a service mesh solution providing a full featured control plane with service discovery, configuration, and segmentation functionality. Each of these features can be used individually as needed, or they can be used together to build a full service mesh. Consul requires a data plane and supports both a proxy and native integration model. Consul ships with a simple built-in proxy so that everything works out of the box, but also supports 3rd party proxy integrations such as Envoy.

Containers

Containers are a construct in which cgroups, namespaces, and chroot are used to fully encapsulate and isolate a process. The encapsulated process runs from a container image and shares the kernel of the host with other containers, allowing containers to be significantly smaller and faster than virtual machines.

These images are designed for portability, allowing for full local testing of a static image, and easy deployment to a container management platform.

Docker

Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run.
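A typical local workflow looks roughly like this (assuming the Docker daemon is running; nginx:alpine is just an example image):

```bash
# Pull an image and run it as a detached container, mapping host port 8080 to port 80.
docker pull nginx:alpine
docker run -d --name web -p 8080:80 nginx:alpine

# Inspect running containers, read logs, then stop and remove the container.
docker ps
docker logs web
docker stop web && docker rm web
```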

LXC

LXC is a well-known Linux container runtime that consists of tools, templates, and library and language bindings. It's pretty low level, very flexible and covers just about every containment feature supported by the upstream kernel.

Configuration Management

Configuration management is a systems engineering process for establishing consistency of a product’s attributes throughout its life. In the technology world, configuration management is an IT management process that tracks individual configuration items of an IT system. IT systems are composed of IT assets that vary in granularity. An IT asset may represent a piece of software, or a server, or a cluster of servers. The following focuses on configuration management as it directly applies to IT software assets and software asset CI/CD.

Software configuration management is a systems engineering process that tracks and monitors changes to a software system's configuration metadata. In software development, configuration management is commonly used alongside version control and CI/CD infrastructure. This section focuses on its modern application and use in agile CI/CD software environments.

Ansible

Ansible is an open-source configuration management, application deployment, and provisioning tool that uses its own declarative language in YAML. Ansible is agentless, meaning you only need remote connections via SSH, or Windows Remote Management via PowerShell, in order for it to function.
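A couple of representative commands, as a sketch (inventory.ini, the webservers group, and site.yml are hypothetical names for your own inventory and playbook):

```bash
# Ad-hoc: check SSH connectivity to every host in the inventory using the ping module.
ansible all -i inventory.ini -m ping

# Ad-hoc: install a package on one group of hosts, escalating privileges with --become.
ansible webservers -i inventory.ini -m apt -a 'name=nginx state=present' --become

# Run a full playbook against the same inventory.
ansible-playbook -i inventory.ini site.yml
```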

Chef

Emerging in 2009, Chef (now known as Progress Chef) is one of the earliest configuration management tools to gain popularity. Chef 'Recipes' are written in Ruby, in a primarily declarative style.

Chef requires that a client is installed on a server being managed. This client polls a Chef-Server regularly, to determine what its configuration should be. Chef-Solo is also available, a version of Chef that allows provisioning of a single node by running chef locally.

A key tenet of Chef recipe design is the concept of idempotence. All Chef recipes should be runnable multiple times and produce the same result - this is especially necessary in cases where the client/server model listed above is in use. This pattern of configuration management is highly influential for later declarative tools like Terraform and CloudFormation.

Puppet

Puppet, an automated administrative engine for your Linux, Unix, and Windows systems, performs administrative tasks (such as adding users, installing packages, and updating server configurations) based on a centralized specification.

Salt

Salt is an open-source, event-driven IT automation, remote task execution, and configuration management software service. Built on Python, Salt uses simple, human-readable YAML combined with event-driven automation to deploy and configure complex IT systems.

Kubernetes

Kubernetes is an open source container management platform, and the dominant product in this space. Using Kubernetes, teams can deploy images across multiple underlying hosts, defining their desired availability, deployment logic, and scaling logic in YAML. Kubernetes evolved from Borg, an internal Google platform used to provision and allocate compute resources. (similar to the Autopilot and Aquaman systems of Microsoft Azure)

The popularity of Kubernetes has made it an increasingly important skill for the DevOps Engineer and has triggered the creation of Platform teams across the industry. These Platform engineering teams often exist with the sole purpose of making Kubernetes approachable and usable for their product development colleagues.
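Day-to-day interaction happens through kubectl. A minimal sketch, assuming kubectl is already configured against a cluster (the deployment name and image are examples):

```bash
# Create a Deployment from an image, expose it inside the cluster, and scale it out.
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80
kubectl scale deployment web --replicas=3

# Observe the desired vs. actual state Kubernetes keeps reconciling.
kubectl get deployments,pods,services
kubectl describe deployment web
```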

Mesos

Apache Mesos is an open-source project to manage computer clusters. It was developed at the University of California, Berkeley.

Docker Swarm

A Docker Swarm is a group of either physical or virtual machines that are running the Docker application and that have been configured to join together in a cluster. Once a group of machines have been clustered together, you can still run the Docker commands that you're used to, but they will now be carried out by the machines in your cluster. The activities of the cluster are controlled by a swarm manager, and machines that have joined the cluster are referred to as nodes.

Nomad

Nomad is a simple and flexible scheduler and orchestrator to deploy and manage containers and non-containerized applications across on-prem and clouds at scale. Nomad runs as a single binary with a small resource footprint and supports a wide range of workloads beyond containers, including Windows, Java, VM, Docker, and more.

Infrastructure Provisioning

Tools in this category are used to provision infrastructure in cloud providers. This includes DNS, networking, security policies, servers, containers, and a whole host of vendor-specific constructs. In this category, the use of cloud provider-agnostic tooling is strongly encouraged. These skills can be applied across most cloud providers, and the more specific domain-specific languages tend to have less reach.

Terraform

Terraform is an extremely popular open source Infrastructure as Code (IaC) tool that can be used with many different cloud and service provider APIs. Terraform focuses on an immutable approach to infrastructure, with a Terraform state file central to tracking the status of your real-world infrastructure.
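The CLI workflow is the same regardless of provider; a sketch, run inside a directory containing your *.tf configuration files:

```bash
terraform init      # download providers and set up the backend that stores the state file
terraform plan      # show the changes required to reach the desired state
terraform apply     # create or update real infrastructure and record it in the state file
terraform destroy   # tear everything down again
```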

AWS CDK

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework used to provision cloud infrastructure resources in a safe, repeatable manner through AWS CloudFormation. AWS CDK offers the flexibility to write infrastructure as code in popular languages like JavaScript, TypeScript, Python, Java, C#, and Go.

Pulumi

Pulumi is an open source Infrastructure as Code tool that can be written in TypeScript, JavaScript, Python, Go, .NET, Java, and YAML to model cloud infrastructure.

CloudFormation

CloudFormation is the AWS service that helps to define collections of AWS resources. CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code.

Setting up X

Apache

Apache is a free, open-source HTTP server, available on many operating systems, but mainly used on Linux distributions. It is one of the most popular options for web developers, as it accounts for over 30% of all the websites, as estimated by W3Techs.

Caddy

The Caddy web server is an extensible, cross-platform, open-source web server written in Go. It has some really nice features like automatic SSL/HTTPs and a really easy configuration file.

Nginx

NGINX is a powerful web server and uses a non-threaded, event-driven architecture that enables it to outperform Apache if configured correctly. It can also do other important things, such as load balancing, HTTP caching, or be used as a reverse proxy.

Tomcat

Tomcat is an open source implementation of the Jakarta Servlet, Jakarta Server Pages, Jakarta Expression Language, Jakarta WebSocket, Jakarta Annotations and Jakarta Authentication specifications. These specifications are part of the Jakarta EE platform.

MS IIS

Internet Information Services (IIS) for Windows® Server is a flexible, secure and manageable Web server for hosting anything on the Web.

Forward Proxy

A forward proxy, often called a proxy server, is a server that sits in front of a group of client machines. When those computers make requests to sites and services on the Internet, the proxy server intercepts those requests and then communicates with web servers on behalf of those clients, like a middleman.

Common Uses:

  • To block access to certain content
  • To protect client identity online
  • To provide restricted internet to organizations

Caching Server

A cache server is a dedicated network server or service acting as a server that saves web pages or other Internet content locally. By placing previously requested information in temporary storage, or cache, a cache server both speeds up access to data and reduces demand on an enterprise's bandwidth.

Reverse Proxy

A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. It provides an additional level of security by hiding server-related details, like IP addresses, from clients. It is also known as a server-side proxy.

Common Uses:

  • Load balancing
  • Web acceleration
  • Security and anonymity

Load Balancer

A load balancer acts as the traffic cop sitting in front of your servers, routing client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization and ensures that no one server is overworked. If one of the servers goes down, the load balancer redirects traffic to the remaining online servers.

Firewall

A firewall is a network security device that monitors and filters incoming and outgoing network traffic based on an organization's previously established security policies. It is a barrier that sits between a private internal network and the public Internet. A firewall's main purpose is to allow non-threatening traffic in and to keep dangerous traffic out.

Operating System

An Operating System is a program that manages a computer's resources, especially the allocation of those resources among other programs. Typical resources include the central processing unit (CPU), computer memory, file storage, input/output (I/O) devices, and network connections.

Memory Management

The term Memory can be defined as a collection of data in a specific format. It is used to store instructions and process data. The memory comprises a large array or group of words or bytes, each with its own location. The primary motive of a computer system is to execute programs. These programs, along with the information they access, should be in the main memory during execution. The CPU fetches instructions from memory according to the value of the program counter.

To achieve a degree of multiprogramming and proper utilization of memory, memory management is important. There are several memory management methods, reflecting various approaches, and the effectiveness of each algorithm depends on the situation.

I/O Management

One of the important jobs of an Operating System is to manage various I/O devices including mouse, keyboards, touchpad, disk drives, display adapters, USB devices, Bit-mapped screens, LED, Analog-to-digital converter, On/off switch, network connections, audio I/O, printers, etc.

Virtualization

Virtualization is the creation of a virtual -- rather than actual -- version of something, such as an operating system (OS), a server, a storage device or network resources. It uses software that simulates hardware functionality to create a virtual system. This practice allows IT organizations to operate multiple operating systems, more than one virtual system and various applications on a single server.

A file is a named collection of related information recorded on secondary storage such as magnetic disks, magnetic tapes, and optical disks. Generally, a file is a sequence of bits, bytes, lines, or records whose meaning is defined by the file's creator and user.

Startup Management (init.d)

init is the daemon that runs as the first process (PID = 1) of the Linux system; other processes, services, daemons, and threads are started by init. You can write your own scripts in the '/etc/init.d' directory to start services automatically on system boot. Services can be started and stopped manually by using the service command.

It has the following syntax: $ service [service_name] [action] e.g. $ service ssh start

systemd

systemd is a system and service management daemon which replaces the sysvinit process as the first process (PID = 1) executed in user space during the Linux start-up process. It is designed specifically for the Linux kernel and is now used as a replacement for init.d, overcoming its shortcomings. It uses the systemctl command to perform related operations.

e.g. $ systemctl start [service-name], $ systemctl poweroff
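A few everyday systemctl and journalctl invocations (nginx here is only an example unit name; most of these need root):

```bash
# Inspect a unit, enable it at boot and start it now, then restart it after a config change.
systemctl status nginx
sudo systemctl enable --now nginx
sudo systemctl restart nginx

# Read that unit's logs from the systemd journal.
journalctl -u nginx --since today
```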

Threads

A thread is an active entity which executes a part of a process. It is a sequential flow of tasks within a process. It is also called a lightweight process, as threads share common resources. A process can contain multiple threads. Threads are used to increase the performance of applications. Each thread has its own program counter, stack, and set of registers, but the threads of a single process may share the same code and data/files.

Key Terminologies:

  • proc
  • fork
  • join

Concurrency in OS

Concurrency is the execution of multiple instruction sequences at the same time. It happens in the operating system when several process threads are running in parallel. It helps with techniques like coordinating the execution of processes, memory allocation, and execution scheduling for maximizing throughput.

The running process threads always communicate with each other through shared memory or message passing. Concurrency involves sharing of resources, which can result in problems like deadlocks and resource starvation.

Key Terminologies:

  • mutex
  • critical section
  • Deadlock

Computer networking refers to interconnected computing devices that can exchange data and share resources with each other. These networked devices use a system of rules, called communications protocols, to transmit information over physical or wireless technologies.

Begin by studying the OSI Model. This model will assist in constructing an understanding of the linked topics, and help you contextualize the items linked to the Networking, Security, and Protocols node. Higher level networking concepts may be implemented and named differently across cloud providers. Don't let this confuse you - the basics of TCP/IP are useful and used in the same ways across all implementations.

A socket is an endpoint of a two-way communication link between two different processes on the network (on the same or different machines). The socket mechanism provides a means of inter-process communication (IPC) by establishing named contact points between client and server. It is the combination of an IP address and a port number.

e.g. 192.168.0.1:8080

POSIX (Portable Operating System Interface) is a family of standards for maintaining compatibility between operating systems. It describes utilities, APIs, and services that a compliant OS should provide to software, thus making it easier to port programs from one system to another.

A practical example: in a Unix-like operating system, there are three standard streams, stdin, stdout and stderr - they are I/O connections that you will probably come across when using a terminal, as they manage the flow from the standard input (stdin), standard output (stdout) and standard error (stderr).

So, in this case, when we want to interact with any of these streams (through a process, for example), the POSIX operating system API makes it easier - for example, in the <unistd.h> C header where the stdin, stderr, and stdout are defined as STDIN_FILENO, STDERR_FILENO and STDOUT_FILENO.

POSIX also adds a standard for exit codes, filesystem semantics, and several other command line utility API conventions.

Processes

A process means a program in execution. It generally takes an input, processes it, and gives us the appropriate output. The ps command can be used in Linux to get the list of processes running in the foreground. Each process has a unique identifier called the PID, which can be used to track it or kill it through the shell.

Types of processes:

  • Foreground processes
  • Background processes
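A few interactive shell commands that make these ideas concrete (sleep is used as a harmless stand-in for a long-running process):

```bash
ps aux | head -5      # list processes with owner, PID, CPU and memory usage
sleep 300 &           # start a background process; the shell prints its job number and PID
jobs                  # list the background jobs of the current shell
kill %1               # terminate it by job number (or: kill <PID>)
```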

Learn a Language

It doesn't matter what language you pick, but it is important to learn at least one. You will be able to use that language to write automation scripts.

Ruby

Ruby is a high-level, interpreted programming language that blends Perl, Smalltalk, Eiffel, Ada, and Lisp. Ruby focuses on simplicity and productivity along with a syntax that reads and writes naturally. Ruby supports procedural, object-oriented and functional programming and is dynamically typed.

Python

Python is a multi-paradigm language. Being an interpreted language, code is executed as soon as it is written and the Python syntax allows for writing code in functional, procedural or object-oriented programmatic ways. Python is frequently recommended as the first language new coders should learn, because of its focus on readability, consistency, and ease of use. This comes with some downsides, as the language is not especially performant in most production tasks.

JavaScript

JavaScript allows you to add interactivity to your pages. Common examples that you may have seen on the websites are sliders, click interactions, popups and so on. Apart from being used on the frontend in browsers, there is Node.js which is an open-source, cross-platform, back-end JavaScript runtime environment that runs on the V8 engine and executes JavaScript code outside a web browser.

Node.js

Node.js is an open-source, cross-platform, back-end JavaScript runtime environment that runs on a JavaScript Engine and executes JavaScript code outside a web browser, which was designed to build scalable network applications. It allows you to run JavaScript on the server.

Go

Go is an open source programming language supported by Google. Go can be used to write cloud services, CLI tools, used for API development, and much more.

Rust

Rust is a modern systems programming language focusing on safety, speed, and concurrency. It accomplishes these goals by being memory safe without using garbage collection.

C

C is a powerful general-purpose programming language. It can be used to develop software like operating systems, databases, compilers, and so on.

C++

C++ is a powerful general-purpose programming language. It can be used to develop operating systems, browsers, games, and so on. C++ supports different ways of programming like procedural, object-oriented, functional, and so on. This makes C++ powerful as well as flexible.

Managing Servers

Server management includes all of the monitoring and maintenance required for servers to operate reliably and at optimal performance levels. Server management also involves the management of hardware, software, security, and backups all in service of keeping the IT environment operational and efficient. The primary goals of an effective server management strategy are to:

  • Minimize server slowdowns and downtime while maximizing reliability.
  • Build secure server environments.
  • Scale servers and related operations to meet the needs of the organization over time.

Operating system

An operating system serves as a bridge between a computer's user and its hardware. An operating system's function is to offer a setting in which a user can conveniently and effectively run programs. In simple terms, an Operating System (OS) is an interface between a computer user and computer hardware. An OS permits software programs to communicate with a computer's hardware. The kernel is the piece of software that houses the fundamental elements of the operating system.

Windows

Windows is a graphical user interface (GUI) based operating system developed by Microsoft. It is a hybrid kernel-based proprietary operating system. According to surveys, as of April 2022 Windows is the most popular operating system in the world, with around a 75% market share.

CentOS

CentOS (short for Community Enterprise Operating System) is a community-driven, free and open-source distribution that is functionally compatible with Red Hat Enterprise Linux (RHEL). The CentOS distribution was discontinued in December 2021; it has since been succeeded by Rocky Linux and AlmaLinux.

Ubuntu

Ubuntu is a free and open-source Linux distribution based on Debian. Ubuntu is available in three editions: Desktop, Server, and Core.

openSUSE Linux

openSUSE is a free-to-use Linux distribution aimed at promoting the use of Linux everywhere. openSUSE is released in two versions: Leap and Tumbleweed.

RHEL

Red Hat Enterprise Linux (RHEL) is a commercial open-source Linux distribution based on Fedora that is sold as a commercial enterprise operating system.

Fedora

Fedora Linux is a free and open-source Linux distribution developed by the Fedora Project, an open-source-focused community. Fedora Linux releases a new version every six months and is the upstream source for Red Hat Enterprise Linux (RHEL).

Debian

Debian is a free and open-source Linux distribution developed by the Debian Project, an all volunteer software community organization. Debian is the upstream distribution of Ubuntu.

FreeBSD

FreeBSD is a free and open-source Unix-like operating system including many features such as preemptive multitasking, memory protection, virtual memory, and multi-user facilities.

OpenBSD

OpenBSD is a free and open-source Unix-like operating system focused on portability, standardization, correctness, proactive security, and integrated cryptography. The popular software application OpenSSH is developed by the OpenBSD project.

NetBSD

NetBSD is a free, fast, secure, and highly portable Unix-like Open Source operating system. It is available for a wide range of platforms, from large-scale servers and powerful desktop systems to handheld and embedded devices.

Live in terminal

A terminal is simply a text-based interface to the computer; it is used to interact with your computer system via a CLI (command line interface).

Bash scripting

Bash is a command-line interface shell program used extensively in Linux and macOS. The name Bash is an acronym for 'Bourne Again Shell,' developed in 1989 as a successor to the Bourne Shell.

'What's a shell?' you ask? A shell is a computer program that allows you to directly control a computer's operating system (OS) with a graphical user interface (GUI) or command-line interface (CLI).

You actually use GUI shells all the time. For example, Windows 10 is based on the Windows shell that allows you to control your OS with a desktop, taskbar, and menus.

With a CLI shell like Bash, you type commands into the program to directly control your computer's OS. Opening up the terminal on your Mac or command line in Linux will look similar to consoles and integrated development environments (IDEs) for other programming languages such as R, Ruby, or Python. You can type commands directly in the command line or run Bash scripts to perform longer and more complex tasks.
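A small example script tying these ideas together (the paths are illustrative; it copies *.conf files from a source directory into a dated backup directory):

```bash
#!/usr/bin/env bash
# Back up every .conf file from a directory passed as the first argument.
set -euo pipefail

src_dir="${1:-/etc}"                        # default to /etc if no argument is given
backup_dir="$HOME/conf-backup-$(date +%F)"
mkdir -p "$backup_dir"

for file in "$src_dir"/*.conf; do
  [ -e "$file" ] || continue                # skip if the glob matched nothing
  cp "$file" "$backup_dir/"
  echo "copied $file"
done

echo "backup written to $backup_dir"
```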

Editors

Editors are tools that allow you to create or edit files on your file system.

Vim

Vim is a highly configurable text editor built to make creating and changing any kind of text very efficient. It is included as 'vi' with most UNIX systems and with Apple OS X.

Vim ships with vimtutor, a tutorial designed to teach enough of the Vim commands that you will be able to easily use Vim as an all-purpose editor.

Nano

GNU nano is a small and friendly text editor.

PowerShell

PowerShell is a cross-platform task automation solution made up of a command-line shell, a scripting language, and a configuration management framework. PowerShell runs on Windows, Linux, and macOS.

Emacs

An extensible, customizable, free/libre text editor.

At its core is an interpreter for Emacs Lisp, a dialect of the Lisp programming language with extensions to support text editing.

Compiling Apps

gcc

The GNU Compiler Collection (GCC) is a compiler system produced by the GNU Project supporting various programming languages. GCC is a key component of the GNU toolchain and the standard compiler for most Unix-like operating systems. It is a set of compilers and development tools available for Linux and an array of other operating systems, with support primarily for C and C++. It provides all of the infrastructure for building software in those languages from source code to assembly.

'What is GCC used for?' GCC is a toolchain that compiles code, links it with any library dependencies, converts that code to assembly, and then prepares executable files. It is responsible for converting the 'high level' source code in the respective language, ensuring that it is semantically valid, performing well-formed optimizations, and converting it to assembly code (which is then handed off to the assembler).
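As a quick illustration of that pipeline, here is a minimal compile-and-run session (the file name is arbitrary):

```bash
# Write a tiny C program to disk.
cat > hello.c <<'EOF'
#include <stdio.h>

int main(void) {
    printf("hello, world\n");
    return 0;
}
EOF

gcc -Wall -O2 -o hello hello.c   # compile and link into an executable named 'hello'
./hello
```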

make

GNU Make is a tool which enables and controls the creation of executables and other non-source files of a program from the program's source files. Make builds the program from a file called the makefile, which lists each of the non-source files and how to compute it from other files. When you write a program, you should write a makefile for it, so that it is possible to use Make to build and install the program.

'What is make used for?' Make enables the end user to build and install your package without knowing the details of how that is done -- because these details are recorded in the makefile that you supply.
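A minimal sketch, reusing hello.c from the gcc example above (printf is used so the required leading tab in each recipe line is explicit):

```bash
# A two-target makefile: 'hello' builds the program, 'clean' removes it.
printf 'hello: hello.c\n\tgcc -Wall -o hello hello.c\n\nclean:\n\trm -f hello\n' > Makefile

make          # rebuilds 'hello' only if hello.c is newer than the existing binary
make clean    # removes the build artifact
```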

sbt

sbt is an open-source build tool for Scala and Java projects, similar to Apache's Maven and Ant. Its main features are: Native support for compiling Scala code and integrating with many Scala test frameworks. Continuous compilation, testing, and deployment.

gradle

Gradle is a build automation tool known for its flexibility to build software. A build automation tool is used to automate the creation of applications. The building process includes compiling, linking, and packaging the code.

Terminal multiplexers

Terminal multiplexers are programs that allow us to multiplex a terminal into several sub-processes or terminals inside a single terminal session. This means that we can have multiple open sessions using a single login session to a local or remote machine.

Screen

Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells). Each virtual terminal provides the functions of a DEC VT100 terminal and, in addition, several control functions from the ISO 6429 (ECMA 48, ANSI X3.64) and ISO 2022 standards (e.g. insert/delete line and support for multiple character sets). There is a scrollback history buffer for each virtual terminal and a copy-and-paste mechanism that allows moving text regions between windows.

See man screen or screen -h for further information.

Tmux

Tmux is a terminal multiplexer: it enables a number of terminals to be created, accessed, and controlled from a single screen. Tmux may be detached from a screen and continue running in the background, then later reattached.

When tmux is started it creates a new session with a single window and displays it on screen. A status line at the bottom of the screen shows information on the current session and is used to enter interactive commands.

See man tmux for further information.
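A typical session lifecycle looks like this ('work' is just an example session name; Ctrl-b d is the default detach key binding):

```bash
tmux new -s work            # start a new named session
# ... inside tmux, press Ctrl-b then d to detach, leaving everything running ...
tmux ls                     # list running sessions
tmux attach -t work         # reattach later, even from a different SSH login
tmux kill-session -t work   # end the session when you are done
```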

ps - process status

The ps utility displays a header line, followed by lines containing information about all of your processes that have controlling terminals.

See man ps for further information.

top

The top program periodically displays a sorted list of system processes. The default sorting key is pid, but other keys can be used instead. Various output options are available.

See man top for further information.

htop

htop is a cross-platform, ncurses-based process viewer. It is similar to top, but allows you to scroll vertically and horizontally and interact using a pointing device (mouse). You can observe all processes running on the system, along with their command line arguments, view them in a tree format, and select multiple processes and act on them all at once.

atop

The program atop is an interactive monitor to view the load on a Linux system. It shows the occupation of the most critical hardware resources (from a performance point of view) on system level, i.e. cpu, memory, disk and network.

lsof

Lsof lists on its standard output file information about files opened by processes.

See man lsof or lsof --help for further information.

Nmon

Nmon is a fully interactive performance monitoring command-line utility for Linux. It is a benchmark tool that displays performance information about the CPU, memory, network, disks, file systems, NFS, top processes, resources, and power micro-partitions.

Iostat

The iostat command in Linux is used for monitoring system input/output statistics for devices and partitions. It monitors system input/output by observing the time the devices are active in relation to their average transfer rates. The reports produced by iostat may be used to change the system configuration to better balance the input/output load between the physical disks.

Sar

Short for System Activity Report, sar is a command line tool for Unix and Unix-like operating systems that shows a report of different information about the usage and activity of resources in the operating system.

Vmstat

Short for Virtual memory statistic reporter, it is a command line tool for Unix and Unix-like operating systems that reports various information about the operating system such as memory, paging, processes, I/O, CPU and disk usage.

Traceroute

traceroute command is a command in Linux that prints the route a network packet takes from its source (e.g. your computer) to the destination host (e.g., roadmap.sh). It is quite valuable in investigating slow network connections as it can help us spot the slow leg of the network packet journey through the internet.

It has the following syntax: $ traceroute [OPTIONS] DESTINATION e.g. $ traceroute roadmap.sh

mtr

mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.

As mtr starts, it investigates the network connection between the host mtr runs on and HOSTNAME by sending packets with purposely low TTLs. It continues sending packets with low TTL, noting the response time of the intervening routers. This allows mtr to print the internet route's response percentage and response times to HOSTNAME. A sudden packet loss or response time increase often indicates a bad (or simply overloaded) link.

The ping (Packet Internet Groper) command is used to check the network connectivity between a host and a server. It takes an IP address or URL as input and sends a data packet with the message “PING” to the specified address; the time taken to receive a response from the server/host is recorded and is called latency.

It has the following syntax: $ ping [OPTIONS] DESTINATION e.g. $ ping roadmap.sh

NMAP stands for Network Mapper and is an open-source tool used to explore and audit the network's security, such as checking firewalls and scanning ports.

Netstat is a command line utility to display all the network connections on a system. It displays all the tcp, udp and unix socket connections. Apart from connected sockets it also displays listening sockets that are waiting for incoming connections.

Tcpdump

tcpdump is a command line tool used for analysing network traffic passing through your system. It can be used to capture and filter packets and display them in a human-readable format. The captured information can be analysed at a later date as well.

Airmon

Airmon is a packet sniffer used for packet captures such as WEP IVs (Initialization Vectors) and WPA handshakes. If you have a GPS receiver connected to the computer, it can also log the coordinates of the access points it finds.

Iptables

iptables is a command-line firewall utility that uses policy chains to allow or block traffic; the rules are enforced by the Linux kernel's netfilter framework. The iptables packet filtering mechanism is organized into three different kinds of structures: tables, chains and targets.

dig

The dig command stands for Domain Information Groper. It is used for retrieving information about DNS name servers, and is mostly used by network administrators for verifying and troubleshooting DNS problems and performing DNS lookups. It replaces older tools such as nslookup and host.

It has the following syntax: $ dig [server] [name] [type] e.g. $ dig roadmap.sh

awk is a general-purpose scripting language used for manipulating data or text and generating reports in the Linux world. It is mostly used for pattern scanning and processing. It searches one or more files to see if they contain lines that match the specified patterns and then performs the associated actions.

It has the below syntax:

awk options 'selection_criteria {action}' input-file > output-file e.g. $ awk '{print}' file.txt

sed

The sed (Stream Editor) command in UNIX can perform many functions on files, such as searching, finding and replacing, insertion or deletion. With sed you can edit files without even opening them in an editor such as vi.

It has the following syntax:

$ sed [options].. [script] [input-file] e.g. $ sed 's/search-regex/replacement-txt/g' file.txt

grep

The grep command (global search for a regular expression and print out) searches file(s) for a particular pattern of characters, and displays all lines that contain that pattern. It can be combined with other commands like ps, making it even more useful.

It has the following syntax:

$ grep [options] pattern [files] e.g. $ grep 'search-regex' file-1.txt

sort

The sort command is used to sort the contents of a file in a particular order. By default, it sorts assuming the contents are ASCII text, but it can also sort numerically by using the appropriate options.

It has the following syntax

$ sort [options].. input-file e.g. $ sort file.txt

cut

The cut utility cuts out selected portions of each line (as specified by list) from each file and writes them to the standard output.

See man cut for further information.

uniq

The uniq utility reads the specified input_file comparing adjacent lines, and writes a copy of each unique input line to the output_file.

See man uniq for further information.

cat

cat (concatenate) command is very frequently used in Linux. It reads data from the file and gives its content as output. It helps us to create, view, and concatenate files.

It has the following syntax:

  • View : $ cat [option] [input-file]
  • Create : $ cat [content] > [new-file]
  • Append : $ cat [append_content] >> [existing-file]

e.g. $ cat file.txt

echo

echo is a built-in command in Linux used to display lines of text/string that are passed as an argument. It is mostly used in shell scripts and batch files to output status text or ENV variables to the screen or a file.

It has the following syntax: $ echo [options] [string] e.g. $ echo 'Hello World!'

fmt

The fmt command is for formatting and optimizing the contents of text files. It is especially useful for tidying up large text files by setting a uniform column width and spacing.

It has the following syntax: $ fmt [-width] [option] [file] e.g. $ fmt file.txt

The tr utility copies the standard input to the standard output with substitution or deletion of selected characters.

See man tr for further information.

nl

The nl utility reads lines from the named file or the standard input if the file argument is omitted, applies a configurable line numbering filter operation and writes the result to the standard output.

See man nl for further information.

wc

The wc utility displays the number of lines, words, and bytes contained in each input file, or standard input (if no file is specified) to the standard output.

See man wc for further information.

egrep

egrep (Extended Grep) is a pattern searching command which belongs to the family of grep functions. It treats the pattern as an extended regular expression and prints out the lines that match the pattern. It works the same way as the grep -E command.

It has the following syntax:

$ egrep [options] pattern [files] e.g. $ egrep 'search-regex' *.txt

fgrep (Fixed Grep) command is used for searching fixed-character strings in a file. It treats meta-characters or regular expressions in the search field as strings. For searching any direct string or files having meta-characters, this is the version of grep which should be selected. It works the same way as $ grep -F command.

It has the following syntax:

$ fgrep [options] [string] [files] e.g. $ fgrep 'search-string' file.txt

strace is a useful diagnostic and debugging tool for Unix-based operating systems. It traces the system calls and signals a process uses during its lifetime, and usually reports the name of each system call, its arguments and its return value.

DTrace

DTrace is a comprehensive dynamic tracing framework ported from Solaris. DTrace provides a powerful infrastructure that permits administrators, developers, and service personnel to concisely answer arbitrary questions about the behavior of the operating system and user programs.

Systemtap

Uname

uname is short for Unix name; it prints system information about both the hardware and software of the currently running system.

df

df is a standard Unix command used to display the amount of available disk space for file systems on which the invoking user has appropriate read access. df is typically implemented using the statfs or statvfs system calls.


history

The history command is used to view previously executed commands. Every executed command is treated as an event and is associated with an event number, using which it can be recalled and changed if required. These commands are saved in a history file.

It has the below syntax: $ history

The du utility, short for disk usage, displays the file system block usage for each file argument and for each directory in the file hierarchy rooted in each directory argument. If no file is specified, the block usage of the hierarchy rooted in the current directory is displayed.

SCP

SCP is an acronym for Secure Copy Protocol. It is a command line utility that allows the user to securely copy files and directories between two locations, usually between Unix or Linux systems. The protocol ensures the transmission of files is encrypted to prevent anyone with suspicious intentions from getting sensitive information. SCP uses encryption over an SSH (Secure Shell) connection, which ensures that the data being transferred is protected from suspicious attacks.

UFW

UFW, or Uncomplicated Firewall, is a command-line utility for managing firewall rules in Arch Linux, Debian and Ubuntu. Its aim is to make firewall configuration as simple as possible. It is a frontend for the iptables firewalling tool.

A network protocol is an established set of rules that determine how data is transmitted between different devices in the same network. Essentially, it allows connected devices to communicate with each other, regardless of any differences in their internal processes, structure or design. Network protocols are the reason you can easily communicate with people all over the world, and thus play a critical role in modern digital communications.

DNS

DNS (Domain Name System) is the phonebook of the Internet. Humans access information online through domain names, like nytimes.com or espn.com. Web browsers interact through Internet Protocol (IP) addresses. DNS translates domain names to IP addresses so browsers can load Internet resources.

OSI Model

The Open Systems Interconnection (OSI) model is a conceptual model consisting of 7 layers that was proposed to standardize communication between devices over a network. It was the first standard model for network communications, adopted by all major computer and telecommunication companies in the early 1980s.

TCP/IP Model

The TCP/IP model is a practical model consisting of 4 layers. The modern Internet is based on this model.

HTTP is the TCP/IP based application layer communication protocol which standardizes how the client and server communicate with each other. It defines how the content is requested and transmitted across the internet.

HTTPS

HTTPS (Hypertext Transfer Protocol Secure) is the secure version of HTTP, which is the primary protocol used to send data between a web browser and a website.

HTTPS = HTTP + SSL/TLS

FTP

File Transfer Protocol (FTP) is a TCP/IP based application layer communication protocol that helps transfer files between local and remote file systems over the network. To transfer a file, two TCP connections (a control connection and a data connection) are used in parallel.

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols used to provide security in internet communications. These protocols encrypt the data that is transmitted over the web, so anyone who tries to intercept packets will not be able to interpret the data. One difference that is important to know is that SSL is now deprecated due to security flaws, and most modern web browsers no longer support it. But TLS is still secure and widely supported, so preferably use TLS.

The SSH (Secure Shell) is a network communication protocol that enables two computers to communicate over an insecure network. It is a secure alternative to the non-protected login protocols (such as telnet, rlogin) and insecure file transfer methods (such as FTP). It is mostly used for secure Remote Login and File Transfer.

SFTP = FTP + SSH

Port forwarding, sometimes called port mapping, allows computers or services in private networks to connect over the internet with other public or private computers or services. Since firewalls exist to keep unwanted visitors out, the visitors you do want to let in need a way to get through. Knowing the IP address isn't enough; requests need to be directed to the correct port as well.

Emails

SMTP

Email is emerging as one of the most valuable services on the internet today. Most internet systems use SMTP as a method to transfer mail from one user to another. SMTP is a push protocol and is used to send the mail whereas POP (post office protocol) or IMAP (internet message access protocol) are used to retrieve those emails at the receiver’s side.

SMTP is an application layer protocol. The client who wants to send mail opens a TCP connection to the SMTP server and then sends the mail across the connection. The SMTP server is always in listening mode. As soon as it receives a TCP connection from a client, the SMTP process initiates a connection on port 25. After successfully establishing the TCP connection, the client process sends the mail immediately.

Imaps

IMAP (port 143) or IMAPS (port 993) allows you to access your email wherever you are, from any device. When you read an email message using IMAP, you aren't actually downloading or storing it on your computer; instead, you're reading it from the email service. As a result, you can check your email from different devices, anywhere in the world: your phone, a computer, a friend's computer.

IMAP only downloads a message when you click on it, and attachments aren't automatically downloaded. This way you're able to check your messages a lot more quickly than POP.

Email servers hosted by Internet service providers also use POP3 to receive and hold emails intended for their subscribers. Periodically, these subscribers will use email client software to check their mailbox on the remote server and download any emails addressed to them.

Once the email client has downloaded the emails, they are usually deleted from the server, although some email clients allow users to specify that mails be copied or saved on the server for a period of time.

DMARC, which stands for Domain-based Message Authentication, Reporting, and Conformance, is an email authentication method built to protect a domain's email from invalid email addresses, commonly known as email spoofing, as well as email attacks, phishing, scams, and other threat activities.

Sender Policy Framework (SPF) is used to authenticate the sender of an email. With an SPF record in place, Internet Service Providers can verify that a mail server is authorized to send email for a specific domain. An SPF record is a DNS TXT record containing a list of the IP addresses that are allowed to send email on behalf of your domain.

Domain Keys

DomainKeys Identified Mail (DKIM) is an email authentication method designed to detect forged sender addresses in email (email spoofing), a technique often used in phishing and email spam.

White Listing vs Grey Listing

White listing is a process of adding an email to an approved sender list, so emails from that sender are never moved to the spam folder. This tells an email server to move messages to the inbox directly.

Greylisting is a method of protecting e-mail users against spam. A mail transfer agent (MTA) using greylisting will 'temporarily reject' any email from a sender it does not recognize. If the mail is legitimate, the originating server will try again after a delay, and the email will be accepted if sufficient time has elapsed.

CI/CD

CI/CD is a method to frequently deliver apps to customers by introducing automation into the stages of app development. The main concepts attributed to CI/CD are continuous integration, continuous delivery, and continuous deployment. CI/CD is a solution to the problems integrating new code can cause for development and operations teams.

Specifically, CI/CD introduces ongoing automation and continuous monitoring throughout the lifecycle of apps, from integration and testing phases to delivery and deployment. Taken together, these connected practices are often referred to as a 'CI/CD pipeline' and are supported by development and operations teams working together in an agile way with either a DevOps or site reliability engineering (SRE) approach.

Jenkins

Jenkins is an open-source CI/CD automation server. Jenkins is primarily used for building projects, running tests, static code analysis and deployments.

GitLab CI

GitLab offers a CI/CD service that can be used as a SaaS offering or self-managed using your own resources. You can use GitLab CI with any GitLab hosted repository, or any BitBucket Cloud or GitHub repository in the GitLab Premium self-managed, GitLab Premium SaaS and higher tiers.

Travis CI

Travis CI is a CI/CD service that is primarily used for building and testing projects that are hosted on BitBucket and GitHub. Open source projects can utilize Travis CI for free.

GitHub Actions

Automate, customize, and execute your software development workflows right in your repository with GitHub Actions. You can discover, create, and share actions to perform any job you'd like, including CI/CD, and combine actions in a completely customized workflow.

TeamCity

TeamCity is a CI/CD service provided by JetBrains. TeamCity can be used as a SaaS offering or self-managed using your own resources.

Bamboo

Bamboo is a CI/CD service provided by Atlassian. Bamboo is primarily used for automating builds, tests and releases in a single workflow.

CircleCI

CircleCI is a CI/CD service that can be integrated with GitHub, BitBucket and GitLab repositories. The service can be used as a SaaS offering or self-managed using your own resources.

Drone

Drone is a CI/CD service offering by Harness. Each build runs in an isolated Docker container, and Drone integrates with many popular source code management repositories like GitHub, BitBucket and GitLab.

Azure DevOps

Azure DevOps is developed by Microsoft as a full scale application lifecycle management and CI/CD service. Azure DevOps provides developer services for allowing teams to plan work, collaborate on code development, and build and deploy applications.

Monitoring

DevOps monitoring entails overseeing the entire development process from planning, development, integration and testing, deployment, and operations. It involves a complete and real-time view of the status of applications, services, and infrastructure in the production environment. Features such as real-time streaming, historical replay, and visualizations are critical components of application and service monitoring.

Infrastructure Monitoring

Monitoring refers to the practice of making the performance and status of infrastructure visible. This section contains common tools used for monitoring.

This is a very vendor-heavy space - use caution when studying materials exclusively from a given product or project, as there are many conflicting opinions and strategies in use. There is no single solution for the most substantially complex internet-facing applications, so understanding the pros and cons of these tools will be useful in helping you plan how to monitor a system for a given goal.

Nagios

Nagios is a powerful tool that provides you with instant awareness of your organization’s mission-critical IT infrastructure. Nagios allows you to detect and repair problems and mitigate future issues before they affect end-users and customers.

Grafana

Grafana is the open-source platform for monitoring and observability. It allows you to query, visualize, alert on and understand your metrics no matter where they are stored.

Datadog

Datadog is a monitoring and analytics platform for large-scale applications. It encompasses infrastructure monitoring, application performance monitoring, log management, and user-experience monitoring. Datadog aggregates data across your entire stack with 400+ integrations for troubleshooting, alerting, and graphing.

Zabbix

Zabbix is an enterprise-class open source monitoring solution for network monitoring and application monitoring of millions of metrics.

Monit

Monit is a small Open Source utility for managing and monitoring Unix systems. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations.

Monit has the ability to start a process if it is not running, restart a process if it is not responding, and stop a process if it uses too many resources. Additionally, you can use Monit to monitor files, directories, and filesystems for changes, such as checksum changes, file size changes, or timestamp changes.

With Monit, you can monitor remote hosts’ TCP/IP ports, server protocols, and ping. Monit keeps its own log file and alerts about any critical error conditions and recovery status.

Prometheus

Prometheus is a free software application used for event monitoring and alerting. It records real-time metrics in a time series database built using an HTTP pull model, with flexible queries and real-time alerting.

Application Monitoring

Application monitoring refers to the practice of making the status and performance of a given application visible. This may include details such as stacktraces, error logs, and the line of code implicated in a given failure. When combined with Infrastructure monitoring, this can provide a complete picture of what is happening in your system, and why.

Jaeger

Jaeger is an open source, end-to-end distributed tracing system that enables us to monitor and troubleshoot transactions in complex distributed systems.

New Relic

New Relic is where dev, ops, security and business teams solve software–performance problems with data.

AppDynamics

AppDynamics is a full-stack application performance management (APM) and IT operations analytics (ITOA) company based in San Francisco. The company focuses on managing the performance and availability of applications across cloud computing environments, IT infrastructure, network architecture, digital user experience design, application security threat detection, observability, and data centers.

Instana

Instana is particularly used in monitoring and managing the performance of software used in microservice architectures, and permits 3D visualisation of performance through graphs generated using machine learning algorithms, with notifications regarding performance also generated automatically. Instana's Application Performance Monitoring (APM) tool of the same name is especially purposed for monitoring software used in so-called 'container orchestration' (a modular method of providing a software service).

OpenTelemetry

OpenTelemetry is a collection of tools, APIs, and SDKs. Use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help you analyze your software’s performance and behavior.

Logs Management

Log management is the process of handling log events generated by all software applications and infrastructure on which they run. It involves log collection, aggregation, parsing, storage, analysis, search, archiving, and disposal, with the ultimate goal of using the data for troubleshooting and gaining business insights, while also ensuring the compliance and security of applications and infrastructure.

Elastic Stack

Elastic Stack is a group of open source products comprised of Elasticsearch, Kibana, Beats, Logstash and more that help store, search, analyze, and visualize data from various sources, in different formats, in real time.

  • Elasticsearch - Search and analytics engine
  • Logstash/fluentd - Data processing pipeline
  • Kibana - Dashboard to visualize data

Graylog

Graylog is a leading centralized log management solution for capturing, storing, and enabling real-time analysis of terabytes of machine data.

Splunk

The Splunk platform removes the barriers between data and action, empowering observability, IT and security teams to ensure their organizations are secure, resilient and innovative.

Loki

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.

Papertrail

Papertrail is a leading centralized log management solution for capturing, storing, and enabling real-time analysis of terabytes of machine data.

Cloud Providers

Cloud providers provide a layer of APIs to abstract infrastructure and provision it based on security and billing boundaries. The cloud runs on servers in data centers, but the abstractions cleverly give the appearance of interacting with a single 'platform' or large application. The ability to quickly provision, configure and secure resources with cloud providers has been key to both the tremendous success, and complexity, of modern DevOps.

AWS

Amazon Web Services has been the market leading cloud computing platform since 2011, ahead of Azure and Google Cloud. AWS offers over 200 services with data centers located all over the globe.

AWS is an online platform that provides scalable and cost-effective cloud computing solutions. It is a broadly adopted cloud platform that offers several on-demand operations like compute power, database storage, content delivery and so on.

Google Cloud

Google Cloud is Google's cloud computing service offering, providing over 150 products/services to choose from. It consists of a set of physical assets, such as computers and hard disk drives, and virtual resources, such as virtual machines (VMs), that are contained in Google's data centers. It runs on the same infrastructure that Google uses internally for its end-user products, such as Search, Gmail, Google Drive, and YouTube.

Azure

Microsoft Azure is a cloud computing service operated by Microsoft. Azure currently provides more than 200 products and cloud services.

DigitalOcean

DigitalOcean is a cloud computing service offering products and services in Compute, Storage, Managed Databases, Containers & Images and Networking.

Heroku

Heroku is a cloud platform as a service subsidiary of Salesforce. Heroku officially supports Node.js, Ruby, Java, PHP, Python, Go, Scala and Clojure, along with any language that runs on Linux via a third-party build pack.

Linode

Linode is a cloud computing service owned by Akamai Technologies. Linode positions itself as an alternative to AWS, GCP and Azure by offering core services without complexity for most workloads.

Vultr

Vultr is an infrastructure-focused cloud computing service, available in 25 locations worldwide. Vultr compute offers 100% SSD storage and high performance Intel vCPUs.

Alibaba Cloud

Alibaba Cloud is a cloud computing service, offering over 100 products and services with data centers in 24 regions and 74 availability zones around the world.

Availability

Availability is the percentage of time that a system is functional and working as intended, generally referred to as uptime. Availability can be affected by hardware or software errors, infrastructure problems, malicious attacks, and system load. Many cloud providers typically offer their users a service level agreement (SLA) that specifies the exact percentages of promised uptime/downtime. Availability is related to reliability in this sense. For example, a company might promise 99.99% uptime for their services.

To achieve high levels of uptime, it is important to eliminate single points of failure so that a single device failure does not disrupt the entire service. High availability in the cloud is often achieved by creating clusters. Clusters are groups of devices (such as servers) that all have access to the same shared storage and function as one single server to provide uninterrupted availability. This way, if one server goes down, the others are able to pick up the load until it comes back online. Clusters can range from two servers to even multiple buildings of servers.

Data Management

Data management is the key element of cloud applications, and influences most of the quality attributes. Data is typically hosted in different locations and across multiple servers for reasons such as performance, scalability or availability, and this can present a range of challenges. For example, data consistency must be maintained, and data will typically need to be synchronized across different locations.

Additionally data should be protected at rest, in transit, and via authorized access mechanisms to maintain security assurances of confidentiality, integrity, and availability. Refer to the Azure Security Benchmark Data Protection Control for more information.

Design and implementation

Good design encompasses factors such as consistency and coherence in component design and deployment, maintainability to simplify administration and development, and reusability to allow components and subsystems to be used in other applications and in other scenarios. Decisions made during the design and implementation phase have a huge impact on the quality and the total cost of ownership of cloud hosted applications and services.

Management and Monitoring

DevOps management and monitoring entails overseeing the entire development process from planning, development, integration and testing, deployment, and operations. It involves a complete and real-time view of the status of applications, services, and infrastructure in the production environment. Features such as real-time streaming, historical replay, and visualizations are critical components of application and service monitoring.

Advanced Topics

Now that you have covered the basics, next we have the advanced topics such as advanced hook topics, context, refs, portals, error boundaries and more.

React Hooks

Hooks were introduced in React 16.8 and they let us use state and other React features without writing a class

Common Hooks

React also has a lot of hooks that allow you to write more efficient React code.

Writing Custom Hooks

Building your own Hooks lets you extract component logic into reusable functions.

Context

Context provides a way to pass data through the component tree without having to pass props down manually at every level.

In a typical React application, data is passed top-down (parent to child) via props, but such usage can be cumbersome for certain types of props (e.g. locale preference, UI theme) that are required by many components within an application. Context provides a way to share values like these between components without having to explicitly pass a prop through every level of the tree.
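
As a minimal sketch of this API, assuming a hypothetical ThemeContext and components, a value provided near the top of the tree can be read anywhere below it without prop drilling:

import React, { createContext, useContext } from 'react';

// Hypothetical context carrying a UI theme; 'light' is the default value.
const ThemeContext = createContext<'light' | 'dark'>('light');

function Toolbar() {
  // Reads the nearest ThemeContext value without receiving it via props.
  const theme = useContext(ThemeContext);
  return <p>Current theme: {theme}</p>;
}

function App() {
  // Everything below the provider can read the value it supplies.
  return (
    <ThemeContext.Provider value="dark">
      <Toolbar />
    </ThemeContext.Provider>
  );
}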

Refs

Refs provide a way to access DOM nodes or React elements created in the render method.

In the typical React dataflow, props are the only way that parent components interact with their children. To modify a child, you re-render it with new props. However, there are a few cases where you need to imperatively modify a child outside of the typical dataflow. The child to be modified could be an instance of a React component, or it could be a DOM element. For both of these cases, React provides an escape hatch.
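
A minimal sketch of that escape hatch using the useRef hook; the component and element names are illustrative:

import React, { useRef } from 'react';

function SearchBox() {
  // Holds a reference to the underlying <input> DOM node.
  const inputRef = useRef<HTMLInputElement>(null);

  return (
    <>
      <input ref={inputRef} placeholder="Search..." />
      {/* Imperatively focus the input, outside the normal props data flow. */}
      <button onClick={() => inputRef.current?.focus()}>Focus the input</button>
    </>
  );
}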

Render Props

The term “render prop” refers to a technique for sharing code between React components using a prop whose value is a function.

A component with a render prop takes a function that returns a React element and calls it instead of implementing its own render logic.
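
A small sketch of the technique; MouseTracker and its render prop are hypothetical names:

import React, { useState } from 'react';

type Point = { x: number; y: number };

// Owns the mouse-tracking logic and delegates rendering to the function it receives.
function MouseTracker({ render }: { render: (point: Point) => React.ReactElement }) {
  const [point, setPoint] = useState<Point>({ x: 0, y: 0 });
  return (
    <div onMouseMove={(e) => setPoint({ x: e.clientX, y: e.clientY })}>
      {render(point)}
    </div>
  );
}

// Reuses the tracking logic while deciding what to render itself.
const App = () => <MouseTracker render={({ x, y }) => <p>Mouse at {x}, {y}</p>} />;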

High Order Components

A higher-order component (HOC) is an advanced technique in React for reusing component logic. HOCs are not part of the React API, per se. They are a pattern that emerges from React’s compositional nature.

Concretely, a higher-order component is a function that takes a component and returns a new component.
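
A minimal sketch; withLogger is a hypothetical HOC that wraps any component and logs every render:

import React from 'react';

// Takes a component and returns a new component with extra behaviour.
function withLogger<P extends object>(Wrapped: React.ComponentType<P>) {
  return function LoggedComponent(props: P) {
    console.log('Rendering', Wrapped.displayName ?? 'wrapped component');
    return <Wrapped {...props} />;
  };
}

const Hello = ({ name }: { name: string }) => <h1>Hello, {name}!</h1>;

// HelloWithLogging renders exactly like Hello, but logs each render.
const HelloWithLogging = withLogger(Hello);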

Portals

Portals provide a first-class way to render children into a DOM node that exists outside the DOM hierarchy of the parent component.

Error Boundaries

In the past, JavaScript errors inside components used to corrupt React’s internal state and cause it to emit cryptic errors on next renders. These errors were always caused by an earlier error in the application code, but React did not provide a way to handle them gracefully in components, and could not recover from them.

Error boundaries are React components that catch JavaScript errors anywhere in their child component tree, log those errors, and display a fallback UI instead of the component tree that crashed. Error boundaries catch errors during rendering, in lifecycle methods, and in constructors of the whole tree below them.
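
A minimal sketch of an error boundary built on the class component API; the fallback UI here is illustrative:

import React from 'react';

type State = { hasError: boolean };

class ErrorBoundary extends React.Component<{ children?: React.ReactNode }, State> {
  state: State = { hasError: false };

  // Switch to the fallback UI when any child throws during rendering.
  static getDerivedStateFromError(): State {
    return { hasError: true };
  }

  componentDidCatch(error: Error, info: React.ErrorInfo) {
    // Log the error to the console or an error-reporting service.
    console.error(error, info.componentStack);
  }

  render() {
    return this.state.hasError ? <h1>Something went wrong.</h1> : this.props.children;
  }
}

// Usage: <ErrorBoundary><Profile /></ErrorBoundary>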

Fiber Architecture

React 16.0 was released with an update to the React core algorithm. This new core architecture is named “Fiber.” Facebook completely rewrote the internals of React from the ground up while keeping the public API essentially unchanged; in simple terms, it is like changing the engine of a car while it is still running.

React

React is a JavaScript library for building user interfaces. It is an open-source, component-based front end library responsible only for the view layer of the application.

Create React App

Create React App is a CLI-based tool and is the best way to start building a new single-page application in React.

It sets up your development environment so that you can use the latest JavaScript features, provides a nice developer experience, and optimizes your app for production. You’ll need to have Node >= 14.0.0 and npm >= 5.6 on your machine.

JSX

JSX stands for JavaScript XML. It allows writing HTML in JavaScript and converts the HTML tags into React elements.

Components

Components are the building blocks of React applications. They let us split the UI into independent, reusable pieces, and think about each piece in isolation.

Functional Components

Functional components are some of the most common components you will come across while working in React. These are simply JavaScript functions. We can create a functional component in React by writing a JavaScript function. These functions may or may not receive data as parameters. In functional components, the return value is the JSX code to render to the DOM tree. Functional components can also have state, which is managed using React hooks.
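
A minimal sketch of a functional component; Greeting and its name prop are illustrative:

import React from 'react';

// A plain function that receives props and returns JSX.
function Greeting({ name }: { name: string }) {
  return <h1>Hello, {name}!</h1>;
}

// Used like any other element: <Greeting name="Ada" />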

Class Components

Components can be created using either a class-based approach or a functional approach. Class components are simple classes (made up of multiple functions that add functionality to the application). All class-based components are child classes of the Component class in React.

Although the class components are supported in React, it is encouraged to write functional components and make use of hooks in modern React applications.

Props vs State

Props (short for “properties”) and state are both plain JavaScript objects. While both hold information that influences the output of component render, they are different in one important way: props get passed to the component (similar to function parameters) whereas state is managed within the component (similar to variables declared within a function).

Conditional Rendering

In React, you can create distinct components that encapsulate behavior you need. Then, you can render only some of them, depending on the state of your application.

Conditional rendering in React works the same way conditions work in JavaScript. Use JavaScript operators like if or the conditional operator to create elements representing the current state, and let React update the UI to match them.
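
A small sketch, with an assumed isLoggedIn prop:

import React from 'react';

function LoginStatus({ isLoggedIn }: { isLoggedIn: boolean }) {
  // A plain JavaScript condition decides which element to return.
  if (isLoggedIn) {
    return <p>Welcome back!</p>;
  }
  return <p>Please sign in.</p>;
}

// Inline forms also work, e.g. {isLoggedIn ? <Logout /> : <Login />} or {unread > 0 && <Badge />}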

Component Life Cycle

Each component has several “lifecycle methods” that you can override to run code at particular times in the process. You can use this lifecycle diagram as a cheat sheet. In the list below, commonly used lifecycle methods are marked as bold. The rest of them exist for relatively rare use cases.

Lists and Keys

When you render lists in React, you can use the key prop to specify a unique key for each item. React uses these keys to identify which items have changed, been added, or been removed when the list is re-rendered.
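
A minimal sketch; the todo items and their id field are assumptions for the example:

import React from 'react';

type Todo = { id: number; text: string };

function TodoList({ todos }: { todos: Todo[] }) {
  return (
    <ul>
      {/* The key should be a stable, unique identifier for each item. */}
      {todos.map((todo) => (
        <li key={todo.id}>{todo.text}</li>
      ))}
    </ul>
  );
}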

Composition vs Inheritance

React has a powerful composition model, and it is recommended to use composition instead of inheritance to reuse code between components.

Hooks

Hooks were introduced in React 16.8 and they let us use state and other React features without writing a class

useState Hook

useState hook is used to manage the state of a component in functional components. Calling useState returns an array with two elements: the current state value and a function to update the state.
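
A minimal sketch of the hook in a counter component:

import React, { useState } from 'react';

function Counter() {
  // useState returns the current value and a setter; calling the setter triggers a re-render.
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>;
}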

useEffect Hook

useEffect is a special hook that lets you run side effects in React. It is similar to componentDidMount and componentDidUpdate: it runs after the initial render and, depending on its dependency array, again after re-renders when the specified values change.
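
A minimal sketch that synchronises the document title with a prop; the prop name is illustrative:

import React, { useEffect } from 'react';

function DocumentTitle({ title }: { title: string }) {
  // Runs after the initial render and again whenever `title` changes.
  useEffect(() => {
    document.title = title;
    // The optional cleanup runs before the next effect and on unmount.
    return () => {
      document.title = 'React App';
    };
  }, [title]);

  return <p>The tab title is now {title}.</p>;
}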

Ecosystem

Thanks to its popularity, React has been enriched by a vast ecosystem of plugins and tools. A (long) list is available here: awesome-react.

Routing

Routing is an essential concept in Single Page Applications (SPA). When your application is divided into separated logical sections, and all of them are under their own URL, your users can easily share links among each other.

React Router

React router is the most famous library when it comes to implementing routing in React applications.

Reach Router

Reach Router is a small, simple router for React that borrows from React Router, Ember, and Preact Router. Reach Router has a small footprint, supports only simple route patterns by design, and has strong (but experimental) accessibility features.

Server-side rendering

Server-side rendering refers to the process in which the server assembles the complete HTML structure of the page and sends it to the browser, which then binds state and event handlers to it so that it becomes a fully interactive page.

Next.js

Next.js is an open-source development framework built on top of Node.js enabling React based web applications functionalities such as server-side rendering and generating static websites.


Static Site Generators

A static site generator is a tool that generates a full static HTML website based on raw data and a set of templates. Essentially, a static site generator automates the task of coding individual HTML pages and gets those pages ready to serve to users ahead of time. Because these HTML pages are pre-built, they can load very quickly in users' browsers.

Gatsby

Gatsby is a React-based open source framework with performance, scalability and security built-in.


API Calls

There are several options available to make API calls from your React.js applications.

React Query

Powerful asynchronous state management, server-state utilities and data fetching for TS/JS, React, Solid, Svelte and Vue.

use-http

React hook for making isomorphic http requests.

Apollo

Apollo is a platform for building a unified graph, a communication layer that helps you manage the flow of data between your application clients (such as web and native apps) and your back-end services.

Relay Modern

Relay is a JavaScript client used in the browser to fetch GraphQL data. It's a JavaScript framework developed by Facebook for managing and fetching data in React applications. It is built with scalability in mind in order to power complex applications like Facebook. The ultimate goal of GraphQL and Relay is to deliver instant UI-response interactions.

Axios

The most common way for frontend programs to communicate with servers is through the HTTP protocol. You are probably familiar with the Fetch API and the XMLHttpRequest interface, which allows you to fetch resources and make HTTP requests.

Axios is a client HTTP API based on the XMLHttpRequest interface provided by browsers.
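
A small sketch of a GET request with Axios; the URL and response shape here are hypothetical:

import axios from 'axios';

type User = { id: number; name: string };

// Axios returns a promise; the parsed JSON body is available on response.data.
async function fetchUsers(): Promise<User[]> {
  const response = await axios.get<User[]>('https://example.com/api/users');
  return response.data;
}

fetchUsers().then((users) => console.log(users.length, 'users loaded'));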

Unfetch

Unfetch is the bare minimum 500b fetch polyfill.

Superagent

Small progressive client-side HTTP request library, and Node.js module with the same API, supporting many high-level HTTP client features

Mobile


React Native

React Native is an open-source UI software framework created by Meta Platforms, Inc. It is used to develop applications for Android, Android TV, iOS, macOS, tvOS, Web, Windows and UWP by enabling developers to use the React framework along with native platform capabilities.

Forms

Although you can build forms using vanilla React, it normally requires a lot of boilerplate code. This is because the form is built using a combination of state and props. To make it easier to manage forms, we use some sort of library.

React hook form

React hook form is an opensource form library for react. Performant, flexible and extensible forms with easy-to-use validation.

Formik

Formik is another famous opensource form library that helps with getting values in and out of form state, validation and error messages, and handling form submissions.

Final form

High performance subscription-based form state management for React.

Testing

A key to building software that meets requirements without defects is testing. Software testing helps developers know they are building the right software. When tests are run as part of the development process (often with continuous integration tools), they build confidence and prevent regressions in the code.

Jest

Jest is a delightful JavaScript Testing Framework with a focus on simplicity. It works with projects using: Babel, TypeScript, Node, React, Angular, Vue and more!

React Testing Library

The React Testing Library is a very lightweight solution for testing React components. It provides light utility functions on top of react-dom and react-dom/test-utils, in a way that encourages better testing practices. Its primary guiding principle is: The more your tests resemble the way your software is used, the more confidence they can give you.

Cypress

Cypress framework is a JavaScript-based end-to-end testing framework built on top of Mocha – a feature-rich JavaScript test framework running on and in the browser, making asynchronous testing simple and convenient. It also uses a BDD/TDD assertion library and a browser to pair with any JavaScript testing framework.


State Management

Application state management is the process of maintaining knowledge of an application's inputs across multiple related data flows that form a complete business transaction -- or a session -- to understand the condition of the app at any given moment. In computer science, an input is information put into the program by the user and state refers to the condition of an application according to its stored inputs -- saved as variables or constants. State can also be described as the collection of preserved information that forms a complete session.

Context

Context provides a way to pass data through the component tree without having to pass props down manually at every level.

In a typical React application, data is passed top-down (parent to child) via props, but such usage can be cumbersome for certain types of props (e.g. locale preference, UI theme) that are required by many components within an application. Context provides a way to share values like these between components without having to explicitly pass a prop through every level of the tree.

Redux

Redux is a predictable state container for JavaScript apps. It helps you write applications that behave consistently, run in different environments (client, server, and native), and are easy to test. On top of that, it provides a great developer experience, such as live code editing combined with a time traveling debugger.

MobX

MobX is an open source state management tool. MobX is a simple, scalable, and standalone state management library that follows the functional reactive programming (FRP) paradigm and prevents inconsistent state by ensuring that all derivations are performed automatically.

Styling

While 'CSS in JS' is the most predominant way of styling modern frontend applications, there are several different ways to style your React applications, whether it is vanilla CSS, CSS Modules, or CSS-in-JS, and each has several frameworks available.

Chakra UI

Chakra UI is a simple, modular and accessible component library that gives you the building blocks you need to build your React applications.

Material UI

Material-UI is an open-source framework that features React components that implement Google’s Material Design.

Ant design

An enterprise-class UI design language and React UI library with a set of high-quality React components; one of the best React UI libraries for enterprises.

Styled components

Styled-components is a CSS-in-JS library that enables you to write regular CSS and attach it to JavaScript components. With styled-components, you can use the CSS you’re already familiar with instead of having to learn a new styling structure.

Emotion

Emotion is a library designed for writing css styles with JavaScript. It provides powerful and predictable style composition in addition to a great developer experience with features such as source maps, labels, and testing utilities. Both string and object styles are supported.

Typescript Basics

In order to enter the world of Angular application development, TypeScript is necessary, as it is the primary language here. TypeScript is a superset of JavaScript. It comes with design-time support, which is useful for type safety and tooling. Since browsers cannot execute TypeScript directly, it is 'transpiled' into JavaScript using the tsc compiler.

What is Typescript

TypeScript is a strongly typed, object-oriented, compiled programming language that builds on JavaScript. It is a superset of the JavaScript language, designed to give you better tooling at any scale. TypeScript calls itself “JavaScript with syntax for types.” In short, it is JavaScript with some additional features. The secret to the success of TypeScript is in the type checking, ensuring that the data flowing through the program is of the correct kind of data.

TypeScript extends JavaScript, providing a better developer experience. The benefits of using TypeScript over JavaScript include:

  • Static typing – TypeScript comes with optional static typing and a type inference system, which means that a variable declared with no type may be inferred by TypeScript based on its value.
  • Object-oriented programming – TypeScript supports object-oriented programming concepts like classes, inheritance, etc.
  • Compile-time checks – JavaScript is an interpreted language with no compilation step, so errors get caught at runtime. Since TypeScript compiles into JavaScript, errors are reported at compile time rather than at runtime.
  • Code editor support – IDEs or code editors like VS Code provide autocomplete for a TypeScript codebase. They also provide inline documentation and highlight errors.
  • Use existing packages – You might want to use an npm package written in JavaScript. Since TypeScript is a superset of JavaScript, you can import and use that package. Moreover, the TypeScript community creates and maintains type definitions for popular packages that can be used in your project.
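
A small TypeScript sketch of the compile-time checking described above:

// Parameter and return types are verified when the code is compiled.
function add(a: number, b: number): number {
  return a + b;
}

const total = add(2, 3);   // OK; `total` is inferred as number
// add('2', 3);            // compile-time error: 'string' is not assignable to 'number'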

Structural Typing

Type compatibility in TypeScript is based on structural subtyping. Structural typing is a way of relating types based solely on their members. This is in contrast with nominal typing.

TypeScript’s structural type system was designed based on how JavaScript code is typically written. Because JavaScript widely uses anonymous objects like function expressions and object literals, it’s much more natural to represent the relationships found in JavaScript libraries with a structural type system instead of a nominal one.
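
A minimal sketch of structural compatibility; the Named interface and user object are illustrative:

interface Named {
  name: string;
}

function greet(person: Named) {
  console.log(`Hello, ${person.name}`);
}

// `user` never declares that it implements Named, but it has the right shape,
// so it is compatible: structural rather than nominal typing.
const user = { name: 'Ada', role: 'engineer' };
greet(user);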

Type Inference

In TypeScript, there are several places where type inference is used to provide type information when there is no explicit type annotation. For example, in a declaration like let x = 3, the type of the x variable is inferred to be number. This kind of inference takes place when initializing variables and members, setting parameter default values, and determining function return types. In most cases, type inference is straightforward; more involved cases can produce inferred types such as (number | null)[].

Union Types

In TypeScript, we can define a variable that can hold multiple types of values. In other words, TypeScript can combine two or more types of data (e.g. number, string, etc.) into a single union type. Union types are a powerful way to express a variable with multiple possible types. Two or more data types are combined using the pipe ('|') symbol between the types, for example (type1 | type2 | type3 | ... | typeN).
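
A small sketch of a union type and how it narrows; printId is an illustrative function:

// `id` can hold either a number or a string.
function printId(id: number | string) {
  if (typeof id === 'string') {
    console.log(id.toUpperCase()); // narrowed to string here
  } else {
    console.log(id.toFixed(0));    // narrowed to number here
  }
}

printId(101);
printId('abc-202');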

Builtin Types

The Builtin types represent the different types of values supported by the language. The builtin types check the validity of the supplied values before they are stored or manipulated by the program. This ensures that the code behaves as expected. The Builtin types further allow for richer code hinting and automated documentation too.

Type Guards

A type guard is a TypeScript technique used to get information about the type of a variable, usually within a conditional block. Type guards are regular functions that return a boolean, taking a type and telling TypeScript if it can be narrowed down to something more specific. Type guards have the unique property of assuring that the value tested is of a set type depending on the returned boolean.

TypeScript uses built-in JavaScript operators like typeof, instanceof, and the in operator, which is used to determine if an object contains a property. Type guards enable you to instruct the TypeScript compiler to infer a specific type for a variable in a particular context, ensuring that the type of an argument is what you say it is.

Type guards are typically used for narrowing a type and are pretty similar to feature detection, allowing you to detect the correct methods, prototypes, and properties of a value. Therefore, you can quickly figure out how to handle that value.
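
A minimal sketch of a user-defined type guard using a type predicate; the Fish and Bird types are illustrative:

type Fish = { swim: () => void };
type Bird = { fly: () => void };

// The `pet is Fish` return type tells the compiler what a `true` result means.
function isFish(pet: Fish | Bird): pet is Fish {
  return (pet as Fish).swim !== undefined;
}

function move(pet: Fish | Bird) {
  if (isFish(pet)) {
    pet.swim(); // narrowed to Fish
  } else {
    pet.fly();  // narrowed to Bird
  }
}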

RxJS Basics

Reactive Extensions for JavaScript, or RxJS, is a reactive library used to implement reactive programming to deal with async implementation, callbacks, and event-based programs.

The reactive paradigm can be used in many different languages through the use of reactive libraries. These libraries provide APIs for reactive tools like observers and operators. RxJS can be used in your browser or with Node.js.

Observable Pattern

The observer pattern is a software design pattern in which an object, named the subject, maintains a list of its dependents, called observers, and notifies them automatically of any state changes, usually by calling one of their methods.

Angular uses the Observer pattern which simply means — Observable objects are registered, and other objects observe (in Angular using the subscribe method) them and take action when the observable object is acted on in some way.

An observable is a function that acts as a wrapper for a data stream and supports passing messages inside your application. An observable is useless until an observer subscribes to it. An observer is an object which consumes the data emitted by the observable. An observer keeps receiving data values from the observable until the observable completes or the observer unsubscribes; until then, observers can receive data values from the observable continuously and asynchronously. We can therefore perform various operations such as updating the user interface or passing along the JSON response.

There are four stages in the life cycle of an observable: creation, subscription, execution and destruction.
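
A minimal RxJS sketch of that relationship; the observable below simply emits two numbers and completes:

import { Observable } from 'rxjs';

// The observable wraps a data stream; nothing runs until someone subscribes.
const numbers$ = new Observable<number>((subscriber) => {
  subscriber.next(1);
  subscriber.next(2);
  subscriber.complete();
});

// The observer consumes the emitted values until completion (or unsubscription).
numbers$.subscribe({
  next: (value) => console.log('received', value),
  complete: () => console.log('done'),
});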

Marble diagrams

Rxjs vs promises

RxJS Operators

RxJS is mostly useful for its operators, even though the Observable is the foundation. Operators are the essential pieces that allow complex asynchronous code to be easily composed in a declarative manner.

Operators are functions. There are two kinds of operators:

Pipeable Operators are the kind that can be piped to Observables using the syntax observableInstance.pipe(operator()). These include filter(...) and mergeMap(...). When called, they do not change the existing Observable instance. Instead, they return a new Observable, whose subscription logic is based on the first Observable.

A Pipeable Operator is essentially a pure function which takes one Observable as input and generates another Observable as output. Subscribing to the output Observable will also subscribe to the input Observable.

Creation Operators are the other kind of operator, which can be called as standalone functions to create a new Observable. For example: of(1, 2, 3) creates an observable that will emit 1, 2, and 3, one right after another. Creation operators will be discussed in more detail in a later section.

Piping

Pipeable operators are functions, so they could be used like ordinary functions: op()(obs) — but in practice, there tend to be many of them convolved together, and quickly become unreadable: op4()(op3()(op2()(op1()(obs)))). For that reason, Observables have a method called .pipe() that accomplishes the same thing while being much easier to read:

 obs.pipe(op1(), op2(), op3(), op4());

Creation Operators

What are creation operators? Distinct from pipeable operators, creation operators are functions that can be used to create an Observable with some common predefined behavior or by joining other Observables.

A typical example of a creation operator would be the interval function. It takes a number (not an Observable) as input argument, and produces an Observable as output:

import { interval } from 'rxjs';

const observable = interval(1000 /* number of milliseconds */);

Higher-order Observables

Observables most commonly emit ordinary values like strings and numbers, but surprisingly often, it is necessary to handle Observables of Observables, so-called higher-order Observables.
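
A small sketch, assuming RxJS: mapping each outer value to an inner observable produces an observable of observables, which mergeAll flattens back into ordinary values:

import { interval, of } from 'rxjs';
import { map, mergeAll, take } from 'rxjs/operators';

// Each outer value is mapped to an inner observable, giving a higher-order observable...
const higherOrder$ = of('a', 'b').pipe(
  map((id) => interval(100).pipe(take(2), map((i) => `${id}${i}`)))
);

// ...which mergeAll flattens into a first-order observable of strings.
higherOrder$.pipe(mergeAll()).subscribe((value) => console.log(value));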

Filtering

Rate limiting

Transformation

Combination

Angular Basics

Angular is a strongly opinionated front-end JavaScript framework, which means that it enforces a certain style of application development and project structure that developers need to follow to develop apps with Angular. However, it also offers enough flexibility to allow you to structure your project in an understandable and manageable manner.

In this module, we will have a look at some of the most basic concepts that you need to understand before diving into the framework's more advanced concepts.

AngularJS vs. Angular

AngularJS was the older version of Angular, whose support officially ended in January 2022. Angular is a component-based front-end development framework built on TypeScript, which includes a collection of well-integrated libraries that include features like routing, forms management, client-server communication, and more.

Components are the main building block for Angular applications. Each component consists of:

  • An HTML template that declares what renders on the page
  • A TypeScript class that defines the behavior
  • A CSS selector that defines how the component is used in a template
  • Optionally, CSS styles applied to the template
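
A minimal sketch combining those pieces into one component; the selector and class name are illustrative:

import { Component } from '@angular/core';

@Component({
  selector: 'app-hello',                    // how the component is used in a template
  template: '<h1>Hello, {{ name }}!</h1>',  // what renders on the page
  styles: ['h1 { color: steelblue; }'],     // optional styles applied to the template
})
export class HelloComponent {
  name = 'Angular';                         // behaviour and data live in the class
}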

A template is a form of HTML that tells Angular how to render the component. To create many Angular features, special syntax within the templates is used.

Modules in Angular act like a container where we can group the components, directives, pipes, and services, related to the application.

Dependency Injection is one of the fundamental concepts in Angular. DI is wired into the Angular framework and allows classes with Angular decorators, such as Components, Directives, Pipes, and Injectables, to configure dependencies that they need.

Services let you define code or functionalities that are then accessible and reusable in many other components in the Angular project. It also helps you with the abstraction of logic and data that is hosted independently but can be shared across other components.

Routing in Angular allows the users to create a single-page application with multiple views and allows navigation between them.

The Angular CLI is a command-line interface tool that you use to initialize, develop, scaffold, and maintain Angular applications directly from a command shell. We can install the latest Angular CLI using the following command:

npm install -g @angular/cli


ng build

The ng build command can be used to build a project of type 'application' or 'library'. When used to build a library, a different builder is invoked, and only the ts-config, configuration, and watch options are applied. All other options apply only to building applications.


ng serve

The ng serve command builds, serves and continuously watches your code for changes; whenever it finds a change in the code, it rebuilds and serves the application automatically. After coding our Angular apps using TypeScript, we use the Angular CLI commands to build and serve the app.

ng generate is used to create components (and other artifacts) in an Angular project. There are two main ways to generate a new component: ng g c <component_name> and ng generate component <component_name>. Using either of these two commands, a new component can be generated easily, followed by a suitable component name of your choice.

ng test is used to run unit tests in an Angular project.

ng test <project> [options] | ng t <project> [options]

End-to-end testing (E2E) of Angular applications is performed using the Protractor testing framework, which was created by the Angular team. Protractor can perform end-to-end tests on Angular applications running in a real browser by interacting with it, similar to how an end user would.

$ ng new [name]

That's the default usage of the command: it creates a new project folder with the given name. The project created in that folder contains:

  • The default Angular project
  • All dependencies installed in the node_modules folder
  • Testing files for each component

A schematic is a template-based code generator that supports complex logic. It is a set of instructions for transforming a software project by generating or modifying code.

Templates

A template is a form of HTML that tells Angular how to render the component.

Interpolation

Interpolation refers to embedding expressions into marked-up text. By default, interpolation uses the double curly braces {{ and }} as delimiters. For example, in {{ currentCustomer }}, Angular replaces currentCustomer with the string value of the corresponding component property.

  • Angular Official Website (Official Website)

Property binding

Property binding helps you set values for properties of HTML elements or directives. To bind to an element's property, enclose it in square brackets [], which causes Angular to evaluate the right-hand side of the assignment as a dynamic expression.
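
For example, a sketch of binding an element property to a component field (names are illustrative):

import { Component } from '@angular/core';

@Component({
  selector: 'app-save-button',
  // [disabled] binds the button's disabled property to the isSaving field
  template: `<button [disabled]="isSaving">Save</button>`,
})
export class SaveButtonComponent {
  isSaving = true;
}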

Template statements

Template statements are methods or properties that you can use in your HTML to respond to user events. With template statements, your application can engage users through actions such as displaying dynamic content or submitting forms. Enclose the event in () which causes Angular to evaluate the right hand side of the assignment as one or more template statements chained together using semicolon ;.
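
A small sketch of a template statement responding to a click event (names are illustrative):

import { Component } from '@angular/core';

@Component({
  selector: 'app-counter',
  // (click) runs the template statement increment() whenever the button is clicked
  template: `<button (click)="increment()">Clicked {{ count }} times</button>`,
})
export class CounterComponent {
  count = 0;
  increment() { this.count++; }
}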

Binding data props attrs events

In an Angular template, a binding creates a live connection between the view and the model and keeps them both in sync.

  • property: helps you set values for properties of HTML elements or directives.

  • attributes: helps you set values for attributes of HTML elements directly.

  • event: lets you listen for and respond to user actions such as keystrokes, mouse movements, clicks, and touches.

  • data: It's a combination of property and event binding and helps you share data between components.

  • Angular Official Website (Official Website)

Reference vars

Template reference variables help you use data from one part of a template in another part of the template. A template variable can refer to a DOM element within a template, component or directive. In the template, use the hash symbol, #, to declare a template reference variable.

Input output

@Input() and @Output() give a child component a way to communicate with its parent component. @Input() lets a parent component update data in the child component. Conversely, @Output() lets the child send data to a parent component.
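
A minimal parent/child sketch (component, property, and event names are made up):

import { Component, EventEmitter, Input, Output } from '@angular/core';

@Component({
  selector: 'app-child',
  template: `<button (click)="notify.emit('hi from child')">{{ label }}</button>`,
})
export class ChildComponent {
  @Input() label = '';                           // parent -> child
  @Output() notify = new EventEmitter<string>(); // child -> parent
}

@Component({
  selector: 'app-parent',
  template: `<app-child label="Click me" (notify)="onNotify($event)"></app-child>`,
})
export class ParentComponent {
  onNotify(message: string) { console.log(message); }
}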

Rendering topics

Builtin directives

Directives are classes that add additional behavior to elements in your Angular applications. Use Angular's built-in directives to manage forms, lists, styles, and what users see.

  • NgClass: Adds and removes a set of CSS classes.
  • NgStyle: Adds and removes a set of HTML styles.
  • NgModel: Adds two-way data binding to an HTML form element.

Use pipes to transform strings, currency amounts, dates, and other data for display. Pipes are simple functions used in template expressions that accept an input value and return a transformed value. Pipes are useful because you can use them throughout your application. Some common pipes are:

DatePipe | UpperCasePipe | LowerCasePipe | CurrencyPipe | DecimalPipe | PercentPipe
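
For example, a sketch of applying pipes in a template with the | character (the values shown are illustrative, and the component is assumed to be declared in a module that imports CommonModule):

import { Component } from '@angular/core';

@Component({
  selector: 'app-pipe-demo',
  template: `
    <p>{{ 'hello' | uppercase }}</p>      <!-- HELLO -->
    <p>{{ 0.259 | percent }}</p>          <!-- 26% -->
    <p>{{ 1234.5 | currency:'USD' }}</p>  <!-- $1,234.50 -->
    <p>{{ today | date:'longDate' }}</p>  <!-- the current date, spelled out -->
  `,
})
export class PipeDemoComponent {
  today = new Date();
}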

Change detection is the process through which Angular checks to see whether your application state has changed, and if any DOM needs to be updated. At a high level, Angular walks your components from top to bottom, looking for changes. Angular runs its change detection mechanism periodically so that changes to the data model are reflected in an application’s view. Change detection can be triggered either manually or through an asynchronous event.

Forms are used to handle user input in many applications. They enable users to do everything from entering sensitive information to performing various data-entry tasks.

Angular provides two approaches to handling user input through forms: reactive forms and template-driven forms.

Reactive forms in Angular are used to handle input coming from the user. We can define controls by using classes such as FormGroup and FormControl.

A template-driven form is the simplest form we can build in Angular. It is mainly used for creating simple form applications.

It uses two-way data-binding (ngModel) to create and handle the form components.

Routing in Angular allows the users to create a single-page application with multiple views and navigation between them. Users can switch between these views without losing the application state and properties.

Configuration

Router outlets

The router-outlet is a directive that's available from the @angular/router package and is used by the router to mark where in a template a matched component should be inserted.

Thanks to the router outlet, your app will have multiple views/pages, and the app template acts like a shell for your application. Any element you add to the shell will be rendered in each view; only the part marked by the router outlet changes between views.

Router links

Router events

Route Guards

Angular route guards are interfaces provided by Angular which, when implemented, allow us to control the accessibility of a route based on conditions provided in the class implementation of that interface.

Some types of Angular guards are CanActivate, CanActivateChild, CanLoad, CanDeactivate and Resolve.
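
A rough sketch of a CanActivate guard; AuthService here is a hypothetical service standing in for your own authentication logic:

import { Injectable } from '@angular/core';
import { CanActivate, Router, UrlTree } from '@angular/router';
import { AuthService } from './auth.service'; // hypothetical service

@Injectable({ providedIn: 'root' })
export class AuthGuard implements CanActivate {
  constructor(private auth: AuthService, private router: Router) {}

  canActivate(): boolean | UrlTree {
    // allow navigation only when logged in; otherwise redirect to /login
    return this.auth.isLoggedIn() ? true : this.router.parseUrl('/login');
  }
}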

Lazy loading

Lazy loading is a technique in Angular that allows you to load JavaScript components asynchronously when a specific route is activated. It improves the application load time speed by splitting the application into several bundles. The bundles are loaded as required when the user navigates through the app.
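
For example, a route configured with loadChildren only downloads its bundle when the route is first visited (the module path below is illustrative):

import { Routes } from '@angular/router';

export const routes: Routes = [
  {
    path: 'admin',
    // the admin module is fetched and compiled only when /admin is navigated to
    loadChildren: () =>
      import('./admin/admin.module').then((m) => m.AdminModule),
  },
];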

Services

Components shouldn't fetch or save data directly and shouldn't knowingly present fake data. They should focus on presenting data and delegate data access to a service. Service is where all the remote API calls exist to retrieve and provide data to components.

Dependency Injection

Dependency Injection (DI) is a design pattern that creates the dependencies of a class and provides those objects to the class when required. Angular provides a built-in dependency injection mechanism that creates and supplies a runtime version of a dependency value using dependency injectors.
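
A small sketch of a service being created by the injector and handed to a component (names are made up):

import { Component, Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' }) // a single shared instance, created by the injector
export class LoggerService {
  log(message: string) { console.log(`[app] ${message}`); }
}

@Component({
  selector: 'app-dashboard',
  template: `<p>Dashboard loaded</p>`,
})
export class DashboardComponent {
  // Angular's injector supplies the LoggerService instance through the constructor
  constructor(private logger: LoggerService) {
    this.logger.log('dashboard created');
  }
}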

Lifecycle hooks

A component instance has a lifecycle that starts when Angular instantiates the component class and renders the component view along with its child views. The lifecycle continues with change detection, as Angular checks to see when data-bound properties change, and updates both the view and the component instance as needed. The lifecycle ends when Angular destroys the component instance and removes its rendered template from the DOM. Directives have a similar lifecycle, as Angular creates, updates, and destroys instances in the course of execution.

Your application can use lifecycle hook methods to tap into key events in the lifecycle of a component or directive to initialize new instances, initiate change detection when needed, respond to updates during change detection, and clean up before deletion of instances.

Angular provides the following lifecycle hooks:

OnChanges, OnInit, DoCheck, OnDestroy, AfterContentInit, AfterContentChecked, AfterViewInit, AfterViewChecked
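
For example, a component can implement OnInit and OnDestroy to run setup and cleanup code (a sketch):

import { Component, OnDestroy, OnInit } from '@angular/core';

@Component({ selector: 'app-clock', template: `{{ now }}` })
export class ClockComponent implements OnInit, OnDestroy {
  now = new Date().toLocaleTimeString();
  private timerId: any;

  ngOnInit() {
    // runs once after Angular has initialized the component's inputs
    this.timerId = setInterval(() => (this.now = new Date().toLocaleTimeString()), 1000);
  }

  ngOnDestroy() {
    // runs just before the component is removed from the DOM
    clearInterval(this.timerId);
  }
}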

State Management

Application state management is the process of maintaining knowledge of an application's inputs across multiple related data flows that form a complete business transaction -- or a session -- to understand the condition of the app at any given moment. In computer science, an input is information put into the program by the user and state refers to the condition of an application according to its stored inputs -- saved as variables or constants. State can also be described as the collection of preserved information that forms a complete session.

Ngxs

Ngxs is a state management pattern and library for the Angular framework. It acts as a single source of truth for our application. Ngxs is very simple and easily implementable, and it reduces a lot of boilerplate code. It is an alternative to Ngrx: in Ngrx we create state, actions, reducers, and effects, but in Ngxs we create only state and actions instead of all of these. Like Ngrx, Ngxs is also asynchronous, and when we dispatch any action we can get a response back.

Ngrx

Ngrx is a group of Angular libraries for reactive extensions that implements the Redux pattern and it’s supercharged with RXJS.

Zones

Creating a custom X

Learn how to create custom pipes, libraries and directives in Angular.

Directive

Directives are the functions that will execute whenever the Angular compiler finds them. Angular Directives enhance the capability of HTML elements by attaching custom behaviors to the DOM.

From the core concept, Angular directives are categorized into three categories: Attribute Directives, Structural Directives, and Component Directives.

Use pipes to transform strings, currency amounts, dates, and other data for display. Pipes are simple functions used in template expressions that accept an input value and return a transformed value. Pipes are helpful because you can use them throughout your application while only declaring each pipe once. For example, you would use a pipe to show a date as April 15, 1988, rather than the raw string format.

Use the Angular CLI and the npm package manager to build and publish your library as an npm package.

SSR in Angular

A normal Angular application executes in the browser, rendering pages in the DOM in response to user actions. Angular Universal executes on the server, generating static application pages that later get bootstrapped on the client. This means that the application generally renders more quickly, giving users a chance to view the application layout before it becomes fully interactive.

Angular universal

Angular Universal, also known as server-side rendering, is a tool that allows the server to pre-render an Angular application when a user hits your website for the first time.

Angular SSG

SSG (Static Site Generation) builds the full HTML of the website ahead of time, during the build process, so that pre-rendered HTML pages can be served. When a user requests a page, the pre-rendered HTML page is delivered first, and then the Angular app is bootstrapped on top of it. SSG can be used only if your website is static or its content doesn't change frequently.

Scully

Scully is the best static site generator for Angular projects looking to embrace the Jamstack. It will use your application to create a static index.html for each of your pages/routes.

Testing Angular Apps

In any software development process, testing the application plays a vital role. If bugs and crashes are not found and fixed, they can damage the development company's reputation as well as hurt its clients. Angular's architecture, however, comes with built-in testability features. As soon as you create a new project with the Angular CLI, two essential testing tools are installed: Jasmine and Karma. Jasmine is the testing library, which structures individual tests into specifications ('specs') and suites. Karma is the test runner, which enables different browsers to run the tests written with Jasmine; the browsers then report the test results back.

Testing pipes

An Angular Pipe is a special function that is called from a Component template. Its purpose is to transform a value: You pass a value to the Pipe, the Pipe computes a new value and returns it.

Testing services

In an Angular application, Services are responsible for fetching, storing and processing data. Services are singletons, meaning there is only one instance of a Service during runtime. They are fit for central data storage, HTTP and WebSocket communication as well as data validation.

Testing component bindings

Angular processes all data bindings once for each JavaScript event cycle, from the root of the application component tree through all child components. Data binding plays an important role in communication between a template and its component, and is also important for communication between parent and child components.

Testing directives

Directives are classes that add new behavior or modify the existing behavior to the elements in the template. Basically directives are used to manipulate the DOM, for example adding/removing the element from DOM or changing the appearance of the DOM elements.

Testing component templates

With a component template, you can save and reuse component processes and properties and create components from them; template-based components inherit the template's properties and process.

Fundamental Topics

Vue is a JavaScript framework for building user interfaces. It builds on top of standard HTML, CSS and JavaScript, and provides a declarative and component-based programming model that helps you efficiently develop user interfaces, be it simple or complex.

Vue CLI

Vue CLI is a full system for rapid Vue.js development, providing:

  • Interactive project scaffolding via @vue/cli.
  • A runtime dependency (@vue/cli-service) that is:
    • Upgradeable;
    • Built on top of webpack, with sensible defaults;
    • Configurable via in-project config file;
    • Extensible via plugins
  • A rich collection of official plugins integrating the best tools in the frontend ecosystem.
  • A full graphical user interface to create and manage Vue.js projects.

Vue CLI aims to be the standard tooling baseline for the Vue ecosystem. It ensures the various build tools work smoothly together with sensible defaults so you can focus on writing your app instead of spending days wrangling with configurations. At the same time, it still offers the flexibility to tweak the config of each tool without the need for ejecting.

Components

Components allow us to split the UI into independent and reusable pieces, and think about each piece in isolation.

Single File Components

Vue Single-File Components (a.k.a. *.vue files, abbreviated as SFC) is a special file format that allows us to encapsulate the template, logic, and styling of a Vue component in a single file.

Component Registration

A Vue component needs to be 'registered' so that Vue knows where to locate its implementation when it is encountered in a template. There are two ways to register components: global and local.

Props

If we are building a blog, we will likely need a component representing a blog post. We want all the blog posts to share the same visual layout, but with different content. Such a component won't be useful unless you can pass data to it, such as the title and content of the specific post we want to display. That's where props come in.

Props are custom attributes you can register on a component.
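
A minimal sketch of a blog-post component receiving props (names are made up; in-object templates like this assume the full runtime-plus-compiler build of Vue):

import { createApp } from 'vue';

const BlogPost = {
  props: ['title', 'content'], // custom attributes this component accepts
  template: `<article><h2>{{ title }}</h2><p>{{ content }}</p></article>`,
};

createApp({
  components: { BlogPost },
  template: `<blog-post title="My first post" content="Hello!"></blog-post>`,
}).mount('#app'); // assumes an element with id="app" exists on the page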

Events

As we develop our applications we may need to communicate with the parent component in order to notify of some actions e.g. when a user clicks on a button. In order to do this we need to use events.

Attribute Inheritance

Attribute inheritance aka 'fallthrough attributes' is a feature of Vue.js that allows you to inherit attributes from a parent component.

Templates

Vue uses an HTML-based template syntax that allows you to declaratively bind the rendered DOM to the underlying component instance's data. All Vue templates are syntactically valid HTML that can be parsed by spec-compliant browsers and HTML parsers.

Directives

Directives are special attributes with the v- prefix. Vue provides a number of built-in directives.

API Styles

Up until Vue 2, there was one way to create components in Vue. With Vue 3, a new methodology was introduced called the Composition API. Now, if we want to make a component in Vue, we have two ways to do it. You might be wondering what the difference is, exactly, so let’s take a look at how the newer Composition API differs from the Vue 2 methodology, which is now known as the Options API.

Options API

We use the Options API in a Vue application to write and define different components. With this API, we can use options such as data, methods, and mounted.

To state it simply, the Options API is the older way to structure a Vue.js application. Due to some limitations in this API, the Composition API was introduced in Vue 3.

Composition API

With the release of Vue 3, developers now have access to the Composition API, a new way to write Vue components. This API allows features to be grouped together logically, rather than having to organize your single-file components by function. Using the Composition API can lead to more readable code, and gives the developer more flexibility when developing their applications.

App Configurations

Every application instance exposes a config object that contains the configuration settings for that application. You can modify its properties before mounting your application.

Rendering Lists

We can use the v-for directive to render a list of items based on an array. The v-for directive requires a special syntax in the form of item in items, where items is the source data array and item is an alias for the array element being iterated on.

Conditional Rendering

The directive v-if is used to conditionally render a block. The block will only be rendered if the directive's expression returns a truthy value.

Lifecycle Hooks

Each Vue component instance goes through a series of initialization steps when it's created - for example, it needs to set up data observation, compile the template, mount the instance to the DOM, and update the DOM when data changes. Along the way, it also runs functions called lifecycle hooks, giving users the opportunity to add their own code at specific stages.

Forms Handling

You can use the v-model directive to create two-way data bindings on form input elements. It automatically picks the correct way to update the element based on the input type.

Events Handling

When you build a dynamic website with Vue you'll most likely want it to be able to respond to events. For example, if a user clicks a button, submits a form, or even just moves their mouse, you may want your Vue site to respond somehow.

Computed Properties

In-template expressions are very convenient, but they are meant for simple operations. Putting too much logic in your templates can make them bloated and hard to maintain. Computed properties allow us to simplify the complex logic that includes reactive data.

Advanced Topics

Now that you have covered the basics, next we have the advanced topics such as Async Components, Teleports, Provide/Inject, Custom Directives, Custom Events, Plugins, Watchers, Slots and more.

Ref

ref() and reactive() are used to track changes to their arguments. When using them to initialize variables, you give Vue information: “Hey, I want you to re-build or re-evaluate everything that depends on those variables every time they change”.
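
A tiny sketch with the Composition API:

import { ref, reactive } from 'vue';

const count = ref(0);                   // a single reactive value, read and written via .value
const user = reactive({ name: 'Ada' }); // a reactive object, used like a normal object

count.value++;
user.name = 'Grace';
// anything that depends on count or user (templates, computed values, watchers) re-evaluates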

toRefs

toRefs converts a reactive object to a plain object where each property of the resulting object is a ref pointing to the corresponding property of the original object. Each individual ref is created using toRef().

reactive

reactive allows us to create reactive data structures. Reactive objects are JavaScript Proxies and behave just like normal objects. The difference is that Vue is able to track the property access and mutations of a reactive object.

computed

computed takes a getter function and returns a readonly reactive ref object for the returned value from the getter. It can also take an object with get and set functions to create a writable ref object.

watch

watch watches one or more reactive data sources and invokes a callback function when the sources change.
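
For example (a sketch; fetchResults is a made-up helper standing in for a real API call):

import { ref, watch } from 'vue';

// made-up async helper standing in for a real API call
const fetchResults = async (query) => [`result for ${query}`];

const query = ref('');
const results = ref([]);

// runs the callback every time query.value changes
watch(query, async (newQuery, oldQuery) => {
  console.log(`query changed from "${oldQuery}" to "${newQuery}"`);
  results.value = await fetchResults(newQuery);
});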

nextTick

nextTick is a utility for waiting for the next DOM update flush.

Composables

In the context of Vue applications, a 'composable' is a function that leverages Vue's Composition API to encapsulate and reuse stateful logic.

When building frontend applications, we often need to reuse logic for common tasks. For example, we may need to format dates in many places, so we extract a reusable function for that. This formatter function encapsulates stateless logic: it takes some input and immediately returns expected output. There are many libraries out there for reusing stateless logic - for example lodash and date-fns, which you may have heard of.

Async Components

In large applications, we may need to divide the app into smaller chunks and only load a component from the server when it's needed. To make that possible, Vue has a defineAsyncComponent function.

Teleport Components

Sometimes we may run into the following scenario: a part of a component's template belongs to it logically, but from a visual standpoint, it should be displayed somewhere else in the DOM, outside of the Vue application. This is where the <Teleport> component comes in.

Provide / Inject

Usually, when we need to pass data from the parent to a child component, we use props. However, imagine the case where we have a large component tree, and a deeply nested component needs something from a distant ancestor component. With only props, we would have to pass the same prop across the entire parent chain. We can solve props drilling with provide and inject.

Custom Directives

So far you may have covered two forms of code reuse in Vue: components and composables. Components are the main building blocks, while composables are focused on reusing stateful logic. Custom directives, on the other hand, are mainly intended for reusing logic that involves low-level DOM access on plain elements.

Custom Events

Sometimes you may need to define custom events that can be used in your components. Vue.js allows you to do this by emitting custom event objects using $emit.

Plugins

Plugins are self-contained code that usually add app-level functionality to Vue.

Watchers

Computed properties allow us to declaratively compute derived values. However, there are cases where we need to perform 'side effects' in reaction to state changes - for example, mutating the DOM, or changing another piece of state based on the result of an async operation.

With Composition API, we can use the watch function to trigger a callback whenever a piece of reactive state changes.

Slots

In some cases, we may want to pass a template fragment to a child component, and let the child component render the fragment within its own template. The <slot> element is a slot outlet that indicates where the parent-provided slot content should be rendered.

Transition

Vue offers two built-in components that can help work with transitions and animations in response to changing state:

  • <Transition> for applying animations when an element or component is entering and leaving the DOM. This is covered on this page.
  • <TransitionGroup> for applying animations when an element or component is inserted into, removed from, or moved within a v-for list. This is covered in the next chapter.

Transition Group

<TransitionGroup> is a built-in component designed for animating the insertion, removal, and order change of elements or components that are rendered in a list.

Ecosystem

Thanks to its popularity, Vue has been enriched by a vast ecosystem of plugins and tools. A (long) list is available here: awesome-vue.

Routing

Routing is an essential concept in Single Page Applications (SPA). When your application is divided into separated logical sections, and all of them are under their own URL, your users can easily share links among each other.

Vue Router

Vue Router is the official router for Vue.js which allows creating static/dynamic routes, has support for navigation interception, allows for component based configuration and much more.

Forms

Apart from the built-in form-binding support, there are several options available that allow for handling forms and data in a more convenient manner.

Vue formulate

Vue Formulate's built-in validation, error handling, grouped and repeatable fields, form generation, and more make complex form creation a breeze.

Vee validate

OpenSource plugin to handle form validations in Vue.js

Vuelidate

Simple, lightweight model-based validation for Vue.js.

Server-side rendering

Server-side rendering refers to the process in which the server assembles the HTML structure of the page and sends it to the browser, where state and events are then bound to make it a fully interactive page.

Quasar

Quasar Framework is an open-source Vue.js based framework for building apps with a single codebase and deploying them on the web as a SPA, PWA, or SSR app, as a mobile app using Cordova for iOS and Android, and as a desktop app using Electron for Mac, Windows, and Linux.

Nuxt.js

Nuxt.js is a free and open source JavaScript library based on Vue.js, Node.js, Webpack and Babel.js. Nuxt is inspired by Next.js, which is a framework of similar purpose, based on React.js.

Static Site Generators

A static site generator is a tool that generates a full static HTML website based on raw data and a set of templates. Essentially, a static site generator automates the task of coding individual HTML pages and gets those pages ready to serve to users ahead of time. Because these HTML pages are pre-built, they can load very quickly in users' browsers.

Gridsome

Gridsome is a Vue.js powered Jamstack framework for building static generated websites & apps that are fast by default.

Vuepress

VuePress is composed of two parts: a minimalistic static site generator with a Vue-powered theming system and Plugin API, and a default theme optimized for writing technical documentation. It was created to support the documentation needs of Vue's own sub-projects.

State Management

Application state management is the process of maintaining knowledge of an application's inputs across multiple related data flows that form a complete business transaction -- or a session -- to understand the condition of the app at any given moment. In computer science, an input is information put into the program by the user and state refers to the condition of an application according to its stored inputs -- saved as variables or constants. State can also be described as the collection of preserved information that forms a complete session.

Pinia

Pinia is a store library for Vue.js, and can be used in Vue 2 and Vue 3 with the same API, except for SSR and its installation. It allows state sharing between pages and components around the application. As the documentation says, it is extensible, intuitive (by organization), has devtools support (in Vue.js devtools), inferred typed state even in JavaScript, and more. In Pinia you can access and mutate state, replace it, use getters that work like computed properties, use actions, etc. The library is recommended by the official Vue.js documentation.

Mobile Apps

Building a mobile application with Vue.js is not impossible. In fact, you can build production-ready apps that look and feel like native mobile apps with Vue.js.

Capacitor

Since Vue.js is a web framework, it does not natively support mobile app development. So how do we get access to native mobile features such as the camera and geolocation? Ionic has an official native runtime called Capacitor. With Capacitor's plugins, you can access the native API of the device your app is running on and build truly native mobile applications with Ionic Vue.

API Calls

There are several options available to make API calls from your Vue.js applications.

Apollo

Apollo is a platform for building a unified graph, a communication layer that helps you manage the flow of data between your application clients (such as web and native apps) and your back-end services.

Relay Modern

Relay is a JavaScript client used in the browser to fetch GraphQL data. It's a JavaScript framework developed by Facebook for managing and fetching data in React applications. It is built with scalability in mind in order to power complex applications like Facebook. The ultimate goal of GraphQL and Relay is to deliver instant UI-response interactions.

Axios

The most common way for frontend programs to communicate with servers is through the HTTP protocol. You are probably familiar with the Fetch API and the XMLHttpRequest interface, which allows you to fetch resources and make HTTP requests.

Axios is a client HTTP API based on the XMLHttpRequest interface provided by browsers.

Unfetch

Unfetch is the bare minimum 500b fetch polyfill.

Superagent

Small progressive client-side HTTP request library, and Node.js module with the same API, supporting many high-level HTTP client features

Jest

Jest is a delightful JavaScript Testing Framework with a focus on simplicity. It works with projects using: Babel, TypeScript, Node, React, Angular, Vue and more!

Vue Testing Library

The Vue Testing Library is a very lightweight solution for testing Vue components. Its primary guiding principle is: The more your tests resemble the way your software is used, the more confidence they can give you.

Cypress

Cypress framework is a JavaScript-based end-to-end testing framework built on top of Mocha – a feature-rich JavaScript test framework running on and in the browser, making asynchronous testing simple and convenient. It also uses a BDD/TDD assertion library and a browser to pair with any JavaScript testing framework.

Tailwind CSS

CSS Framework that provides atomic CSS classes to help you style components e.g. flex, pt-4, text-center and rotate-90 that can be composed to build any design, directly in your markup.

Vuetify

Vuetify is a Vue UI Library with beautifully handcrafted Material Components. No design skills required — everything you need to create amazing applications is at your fingertips.

Element UI

Element UI is another Vue.js component library with several built-in components to style your Vue.js applications.

JavaScript

JavaScript allows you to add interactivity to your pages. Common examples that you may have seen on the websites are sliders, click interactions, popups and so on. Apart from being used on the frontend in browsers, there is Node.js which is an open-source, cross-platform, back-end JavaScript runtime environment that runs on the V8 engine and executes JavaScript code outside a web browser.

JavaScript

JavaScript, often abbreviated JS, is a programming language that is one of the core technologies of the World Wide Web, alongside HTML and CSS. It lets us add interactivity to pages e.g. you might have seen sliders, alerts, click interactions, popups, etc on different websites -- all of that is built using JavaScript. Apart from being used in the browser, it is also used in other non-browser environments as well such as Node.js for writing server-side code in JavaScript, Electron for writing desktop applications, React Native for mobile applications, and so on.

What is JavaScript?

JavaScript, often abbreviated JS, is a programming language that is one of the core technologies of the World Wide Web, alongside HTML and CSS. It lets us add interactivity to pages e.g. you might have seen sliders, alerts, click interactions, popups, etc on different websites -- all of that is built using JavaScript. Apart from being used in the browser, it is also used in other non-browser environments as well such as Node.js for writing server-side code in JavaScript, Electron for writing desktop applications, React Native for mobile applications, and so on.

History of JavaScript

JavaScript was initially created by Brendan Eich of Netscape and was first announced in a press release by Netscape in 1995. It has a bizarre naming history: initially, it was named Mocha by its creator, and it was later renamed to LiveScript. In 1996, about a year after the release, Netscape decided to rename it to JavaScript in hopes of capitalizing on the Java community (although JavaScript did not have any relationship with Java) and released Netscape 2.0 with official support for JavaScript.

JavaScript was invented by Brendan Eich, and in 1997 it became an ECMA standard. ECMAScript is the official language name. ECMAScript versions include ES1, ES2, ES3, ES5, and ES6.

JavaScript can be run in the browser by including an external script file using the script tag, by writing it within the HTML page (again using the script tag), or by running it in the browser console; you can also use a REPL.

Javascript Variables

Most of the time, a JavaScript application needs to work with information. To store and represent this information in the JavaScript codebase, we use variables. A variable is a container for a value.

Variable Declarations

To use variables in JavaScript, we first need to create them, i.e. declare a variable. To declare variables, we use one of the var, let, or const keywords.

[var] keyword

The var statement declares a function-scoped or globally-scoped variable, optionally initializing it to a value.

[let] keyword

The let declaration declares a block-scoped local variable, optionally initializing it to a value.

[const] keyword

Constants are block-scoped, much like variables declared using the let keyword. The value of a constant can't be changed through reassignment (i.e. by using the assignment operator), and it can't be redeclared (i.e. through a variable declaration). However, if a constant is an object or array its properties or items can be updated or removed.
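
A quick sketch of the differences:

var a = 1;   // function-scoped (or global), can be redeclared and reassigned
let b = 2;   // block-scoped, can be reassigned but not redeclared in the same scope
const c = 3; // block-scoped, cannot be reassigned or redeclared

const user = { name: 'Ada' };
user.name = 'Grace'; // allowed: the binding is constant, the object's contents are not
// c = 4;            // would throw: TypeError: Assignment to constant variable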

Hoisting

JavaScript Hoisting refers to the process whereby the interpreter appears to move the declaration of functions, variables, or classes to the top of their scope, prior to execution of the code.

Naming Rules

A variable name should accurately identify your variable. When you create good variable names, your JavaScript code becomes easier to understand and easier to work with. Properly naming variables is really important. JavaScript also has some rules when it comes to naming variables; read about these rules through the links below.

Scopes

Before ES6 (2015), JavaScript had only Global Scope and Function Scope. ES6 introduced two important new JavaScript keywords: let and const. These two keywords provide Block Scope in JavaScript.

Block Scope

This scope restricts a variable that is declared inside a specific block from being accessed outside of that block. The let and const keywords make variables block-scoped. Variables declared with the var keyword do not have block scope.

Function Scope

When a variable is declared inside a function, it is only accessible within that function and cannot be used outside that function.

Global Scope

Variables declared Globally (outside any function) have Global Scope. Global variables can be accessed from anywhere in a JavaScript program. Variables declared with var, let and const are quite similar when declared outside a block.

Datatypes

Data type refers to the type of data that a JavaScript variable can hold. There are eight basic data types in JavaScript.

Primitive Types

In JavaScript, a primitive (primitive value, primitive data type) is data that is not an object and has no methods or properties. There are 7 primitive data types:

  • string
  • number
  • bigint
  • boolean
  • undefined
  • Symbol
  • null

Most of the time, a primitive value is represented directly at the lowest level of the language implementation.

Object

JavaScript object is a data structure that allows us to have key-value pairs; so we can have distinct keys and each key is mapped to a value that can be of any JavaScript data type. Comparing it to a real-world object, a pen is an object with several properties such as color, design, the material it is made of, etc. In the same way, JavaScript objects can have properties that define their characteristics.

Prototypes

JavaScript is an object-oriented language built around a prototype model. In JavaScript, every object inherits properties from its prototype, if there are any. A prototype is simply an object from which another object inherits properties. To create complex programs using JavaScript, one has to be proficient in working with prototypes — they form the very core of OOP in the language.

Prototypal Inheritance

Prototypal inheritance is a feature in JavaScript used to add methods and properties to objects. It is a mechanism by which an object can inherit the properties and methods of another object. Traditionally, in order to get and set the prototype of an object, we use Object.getPrototypeOf and Object.setPrototypeOf.
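
A small sketch using Object.create and Object.getPrototypeOf:

const animal = {
  describe() { return `${this.name} makes a sound`; },
};

// dog's prototype is animal, so it inherits describe()
const dog = Object.create(animal);
dog.name = 'Rex';

console.log(dog.describe());                        // "Rex makes a sound"
console.log(Object.getPrototypeOf(dog) === animal); // true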

Built-in objects

Built-in objects, or 'global objects', are those built into the language specification itself. There are numerous built-in objects in the JavaScript language, all of which are accessible at the global scope. Some examples are Math, Date, JSON, String, Number, Array, Object, and Promise.

TypeOf Operator

You can use the typeof operator to find the data type of a JavaScript variable.

A Data structure is a format to organize, manage and store data in a way that allows efficient access and modification. JavaScript has primitive (built-in) and non-primitive (not built-in) data structures. Primitive data structures come by default with the programming language and you can implement them out of the box (like arrays and objects). Non-primitive data structures don't come by default and you have to code them up if you want to use them.

Indexed collections

Indexed Collections are collections that have numeric indices i.e. the collections of data that are ordered by an index value. In JavaScript, an array is an indexed collection. An array is an ordered set of values that has a numeric index.

Arrays

Arrays are objects that store a collection of items and can be assigned to a variable. They have their methods that can perform operations on the array.

Typed Arrays

In JavaScript, a typed array is an array-like buffer of binary data. There is no JavaScript property or object named TypedArray, but properties and methods can be used with typed array objects.

Keyed Collections

Keyed collections are data collections that are ordered by key not index. They are associative in nature. Map and set objects are keyed collections and are iterable in the order of insertion.

Map

Map is a collection of keyed data items, just like an Object. But the main difference is that Map allows keys of any type.

Weak map

WeakMap is a Map-like collection of key/value pairs whose keys must be objects; it removes them once they become inaccessible by other means.

Set

The Set object lets you store unique values of any type, whether primitive values or object references. A value in the Set may only occur once; it is unique in the Set's collection.

WeakSet

WeakSet objects are collections of objects. Just as with Sets, each object in a WeakSet may occur only once; all objects in a WeakSet's collection are unique.

Structured data

Structured data is used by search-engines, like Google, to understand the content of the page, as well as to gather information about the web and the world in general.

It is also coded using in-page markup on the page that the information applies to.

JSON

JavaScript Object Notation (JSON) is a standard text-based format for representing structured data based on JavaScript object syntax. It is commonly used for transmitting data in web applications (e.g., sending some data from the server to the client, so it can be displayed on a web page, or vice versa).
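
For example, JSON.stringify converts a JavaScript value into a JSON string, and JSON.parse does the reverse:

const user = { name: 'Ada', languages: ['js', 'ts'] };

const json = JSON.stringify(user); // '{"name":"Ada","languages":["js","ts"]}'
const copy = JSON.parse(json);     // back to a plain object

console.log(copy.languages[1]);    // "ts"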

Type Casting

Type conversion (or typecasting) means the transfer of data from one data type to another. Implicit conversion happens when the compiler (for compiled languages) or runtime (for script languages like JavaScript) automatically converts data types. The source code can also explicitly require a conversion to take place.

Type Conversion/Coercion

Type coercion is the automatic or implicit conversion of values from one data type to another (such as strings to numbers). Type conversion is similar to type coercion because they convert values from one data type to another with one key difference — type coercion is implicit. In contrast, type conversion can be either implicit or explicit.

Explicit Type Casting

Type casting means transferring data from one data type to another by explicitly specifying the type to convert the given data to. Explicit type casting is normally done to make data compatible with other variables. Examples of typecasting methods are parseInt(), parseFloat(), toString().

Implicit Type Casting

Implicit type conversion happens when the compiler or runtime automatically converts data types. JavaScript is a loosely typed language, and most of the time operators automatically convert a value to the right type.

Equality Comparisons

Comparison operators are used in logical statements to determine equality or difference between variables or values. Comparison operators can be used in conditional statements to compare values and take action depending on the result.

isLooselyEqual

isLooselyEqual checks whether its two operands are equal, returning a Boolean result. It attempts to convert and compare operands that are of different types.

isStrictlyEqual

isStrictlyEqual checks whether its two operands are equal, returning a Boolean result. It always considers operands of different types to be different.

Same value zero

SameValueZero equality determines whether two values are functionally identical in all contexts, except that +0 and -0 are also considered equal.

Same value

SameValue equality determines whether two values are functionally identical in all contexts.

Value Comparison Operators

In JavaScript, the == operator does the type conversion of the operands before comparison, whereas the === operator compares the values and the data types of the operands. The Object.is() method determines whether two values are the same value: Object.is(value1, value2).

Object.is() is not equivalent to the == operator. The == operator applies various coercions to both sides (if they are not the same type) before testing for equality (resulting in such behavior as '' == false being true), but Object.is() doesn't coerce either value.

Object.is() is also not equivalent to the === operator. The only difference between Object.is() and === is in their treatment of signed zeros and NaN values. The === operator (and the == operator) treats the number values -0 and +0 as equal but treats NaN as not equal to each other.
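
A few illustrative comparisons:

console.log(1 == '1');            // true  (== coerces the string to a number)
console.log(1 === '1');           // false (different types)
console.log(NaN === NaN);         // false
console.log(Object.is(NaN, NaN)); // true
console.log(0 === -0);            // true
console.log(Object.is(0, -0));    // false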

Loops offer a quick and easy way to do something repeatedly.

You can think of a loop as a computerized version of the game where you tell someone to take X steps in one direction, then Y steps in another. For example, the idea 'Go five steps to the east' could be expressed this way as a loop:

for (let step = 0; step < 5; step++) {
  // Runs 5 times, with values of step 0 through 4.
  console.log('Walking east one step');
}

for...in statement

The for...in statement iterates over all enumerable properties of an object that are keyed by strings (ignoring ones keyed by Symbols), including inherited enumerable properties.

The for...of statement executes a loop that operates on a sequence of values sourced from an iterable object. Iterable objects include instances of built-ins such as Array, String, TypedArray, Map, Set, NodeList (and other DOM collections), and the arguments object, generators produced by generator functions, and user-defined iterables.
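
A quick sketch of the difference:

const scores = { math: 90, art: 75 };
for (const subject in scores) {
  console.log(subject, scores[subject]); // iterates keys: "math" 90, "art" 75
}

const letters = ['a', 'b', 'c'];
for (const letter of letters) {
  console.log(letter); // iterates values: "a", "b", "c"
}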

Break continue

break statement, without a label reference, can only be used to jump out of a loop or a switch block.

continue statement, with or without a label reference, can only be used to skip one loop iteration.

Labeled Statements

JavaScript label statements are used to prefix a label to an identifier. It can be used with break and continue statement to control the flow more precisely.

A label is simply an identifier followed by a colon(:) that is applied to a block of code.

The for loop is a standard control-flow construct in many programming languages, including JavaScript. It's commonly used to iterate over given sequences or iterate a known number of times and execute a piece of code for each iteration.

do...while statement

The do...while statement creates a loop that executes a specified statement until the test condition evaluates to false. The condition is evaluated after executing the statement, resulting in the specified statement executing at least once.

while statement

The while statement creates a loop that executes a specified statement as long as the test condition evaluates to true. The condition is evaluated before executing the statement.

Control Flow

In JavaScript, the Control flow is a way of how your computer runs code from top to bottom. It starts from the first line and ends at the last line unless it hits any statement that changes the control flow of the program such as loops, conditionals, etc.

We can control the flow of the program through any of these control structures:

  • Sequential (default mode)

  • Conditional Statements

  • Exception Handling

  • Loops and Iterations

  • Control Flow - MDN (Read)

Exception Handling

In JavaScript, all exceptions are simply objects. While the majority of exceptions are implementations of the global Error class, any old object can be thrown. With this in mind, there are two ways to throw an exception: directly via an Error object, and through a custom object. (excerpt from Rollbar)

Throw Statement

The throw statement throws a user-defined exception. Execution of the current function will stop (the statements after throw won't be executed), and control will be passed to the first catch block in the call stack. If no catch block exists among caller functions, the program will terminate. (excerpt from MDN)

Try, catch, finally

These are ways of handling errors in your JavaScript code. Inside the try code block we have the code to run, inside the catch block we handle the errors, and inside the finally block we have code that runs after the execution of the previous code blocks, regardless of the result.

Utilizing error objects

When a runtime error occurs, a new Error object is created and thrown. With this Error object, we can determine the type of the Error and handle it according to its type.

Types of Errors:

Besides the generic Error constructor, JavaScript also has other core error constructors, such as RangeError, ReferenceError, SyntaxError, TypeError, and URIError.

Example

try {
  willGiveErrorSometime();
} catch (error) {
  if (error instanceof RangeError) {
    rangeErrorHandler(error);
  } else if (error instanceof ReferenceError) {
    referenceErrorHandle(error);
  } else {
    errorHandler(error);
  }
}

Conditional statements

When you write code, you often want to perform different actions for different decisions. You can use conditional statements in your code to do this. In JavaScript, we have three conditional statements: if, if...else, and switch.

If else

The if statement executes a statement if a specified condition is truthy. If the condition is falsy, another statement in the optional else clause will be executed.

Example

if (condition) {
  statement1
} else {
  statement2
}

Switch Case

The switch statement evaluates an expression, matching the expression's value against a series of case clauses, and executes statements after the first case clause with a matching value, until a break statement is encountered. The default clause of a switch statement will be jumped to if no case matches the expression's value.

Example

switch (expression) {
  case value1:
    //Statements executed when the result of expression matches value1
    break; 
  case value2:
    //Statements executed when the result of expression matches value2
    break; 
  ...
  case valueN:
    //Statements executed when the result of expression matches valueN
    break; 
  default:
    //Statements executed when none of the values match the value of the expression
    break; 
} 

Expressions and Operators

At a high level, an expression is a valid unit of code that resolves to a value. There are two types of expressions: those that have side effects (such as assigning values) and those that purely evaluate. The expression x = 7 is an example of the first type. This expression uses the = operator to assign the value seven to the variable x. The expression itself evaluates to 7. The expression 3 + 4 is an example of the second type. This expression uses the + operator to add 3 and 4 together and produces a value, 7. However, if it's not eventually part of a bigger construct (for example, a variable declaration like const z = 3 + 4), its result will be immediately discarded; this is usually a programmer mistake because the evaluation doesn't produce any effects. As the examples above also illustrate, all complex expressions are joined by operators, such as = and +.

Assignment Operators

An assignment operator assigns a value to its left operand based on the value of its right operand. The simple assignment operator is equal (=), which assigns the value of its right operand to its left operand. That is, x = f() is an assignment expression that assigns the value of f() to x.

Comparison Operators

Comparison operators are the operators that compare values and return true or false. The operators include: >, <, >=, <=, ==, ===, != and !==.

Arithmetic operators

The Arithmetic operators perform addition, subtraction, multiplication, division, exponentiation, and remainder operations.

Arithmetic operators in JavaScript are: addition (+), subtraction (-), multiplication (*), division (/), remainder (%), exponentiation (**), increment (++), and decrement (--).

Bitwise operators

Bitwise operators treat arguments as 32-bits (zeros & ones) and work on the level of their binary representation. Ex. Decimal number 9 has a binary representation of 1001. Bitwise operators perform their operations on such binary representations, but they return standard JavaScript numerical values.

Bitwise operators in JavaScript are: AND (&), OR (|), XOR (^), NOT (~), left shift (<<), sign-propagating right shift (>>), and zero-fill right shift (>>>).

Logical Operators

There are four logical operators in JavaScript: || (OR), && (AND), ! (NOT), ?? (Nullish Coalescing).

BigInt Operators

Most operators that can be used with the Number data type will also work with BigInt values (e.g. arithmetic, comparison, etc.). However, the unsigned right shift >>> operator is an exception and is not supported. Similarly, some operators may have slight differences in behaviour (for example, division with BigInt will round towards zero).

String Operators

In addition to the comparison operators, which can be used on string values, the concatenation operator (+) concatenates two string values together, returning another string that is the union of the two operand strings.

The shorthand assignment operator += can also be used to concatenate strings.

The conditional operator, also known as the ternary operator, is the only JavaScript operator that takes three operands.

The operator evaluates to one of two values based on a condition.

Syntax:

condition ? val_for_true : val_for_false

Comma operators

The comma operator (,) evaluates each of its operands (from left to right) and returns the value of the last operand. This lets you create a compound expression in which multiple expressions are evaluated, with the compound expression's final value being the value of the rightmost of its member expressions. This is commonly used to provide multiple parameters to a for loop.

Unary Operators

JavaScript Unary Operators are the special operators that consider a single operand and perform all the types of operations on that single operand. These operators include unary plus, unary minus, prefix increments, postfix increments, prefix decrements, and postfix decrements.

Relational Operators

Relational operators are also known as comparison operators. They are used to find or compare the relationship between two values; based on the comparison, they yield the result true or false.

Functions

Functions are blocks of code that execute whenever they are invoked. They are useful for placing code snippets executed in different places in the code.

Defining and Calling Functions

Defining:

  • JavaScript function declarations are made by using the function keyword.
  • Functions can also be defined by saving function expressions to a variable. 'Arrow' functions are commonly used in this way.

Calling:

  • A defined function is executed when something invokes (calls) it, for example when it is called from JavaScript code, when an event occurs, or when it invokes itself.

Function Parameters

The parameter is the name given to the variable declared inside the definition of a function. There are two special kinds of syntax: default and rest parameters.

Default Parameters

Default function parameters allow named parameters to be initialized with default values if no value or undefined is passed.

Rest Parameters

The rest parameter syntax allows a function to accept an indefinite number of arguments as an array, providing a way to represent variadic functions in JavaScript.
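
For example:

// greeting falls back to its default when the argument is missing or undefined
function greet(name, greeting = 'Hello') {
  return `${greeting}, ${name}!`;
}

// rest collects any number of remaining arguments into an array
function sum(first, ...rest) {
  return rest.reduce((total, n) => total + n, first);
}

console.log(greet('Ada'));    // "Hello, Ada!"
console.log(sum(1, 2, 3, 4)); // 10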

Arrow Functions

Arrow functions are a newer way of creating functions, using the => operator with a shorter syntax.

IIFE

Immediately-Invoked Function Expression is a function that is executed immediately after it is created.

Arguments object

The arguments object is an Array-like object accessible inside functions that contains the values of the arguments passed to that function, available within all non-arrow functions. You can refer to a function's arguments inside that function by using its arguments object. It has entries for each argument the function was called with, with the first entry's index at 0. But, in modern code, rest parameters should be preferred.

Scope and function stack

Scope

A space or environment in which a particular variable or function can be accessed or used. Accessibility of this variable or function depends on where it is defined.

JavaScript has the following kinds of scopes:

  • Global scope: The default scope for all code running in script mode.
  • Module scope: The scope for code running in module mode.
  • Function scope: The scope created with a function.
  • Block scope: The scope created with a pair of curly braces (a block).

Function Stack (Call stack)

The function stack is how the interpreter keeps track of its place in a script that calls multiple functions, like which function is currently executing and which functions within that function are being called.

Recursion

One of the most powerful and elegant concepts of functions, recursion is when a function invokes itself. Such a function is called a recursive function. As recursion happens, the underlying code of the recursive function gets executed again and again until a terminating condition, called the base case, is fulfilled. As you dive into the world of algorithms, you'll come across recursion in many instances.

Lexical scoping

Before one can make an intuition of closures in JavaScript, it's important to first get the hang of the term 'lexical environment'. In simple words, the lexical environment for a function f simply refers to the environment enclosing that function's definition in the source code.

Closures

Function closures are one of the most powerful, yet most misunderstood, concepts of JavaScript that are actually really simple to understand. A closure refers to a function along with its lexical environment. It is essentially what allows us to return a function A, from another function B, that remembers the local variables defined in B, even after B exits. The idea of closures is employed in nearly every other JavaScript program, hence, it's paramount for a JavaScript developer to know it really well.
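
A classic sketch: the returned function keeps access to count even after makeCounter has finished executing.

function makeCounter() {
  let count = 0;          // local to makeCounter
  return function () {
    count += 1;           // still reachable through the closure
    return count;
  };
}

const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2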

Built in functions

  • A JavaScript method is a property containing a function definition. In other words, when the data stored on an object is a function we call that a method.

  • To differentiate between properties and methods, we can think of it this way: A property is what an object has, while a method is what an object does.

  • Since JavaScript methods are actions that can be performed on objects, we first need to have objects to start with. There are several objects built into JavaScript which we can use.

  • JavaScript Built-in Functions (Read)

  • Built-in Methods in Javascript (Read)

  • Built-in Functions: (Read)

Strict Mode

JavaScript's strict mode is a way to opt-in to a restricted variant of JavaScript, thereby implicitly opting out of 'sloppy mode'. Strict mode isn't just a subset: it intentionally has different semantics from regular code. Browsers not supporting strict mode will run strict mode code with different behavior from browsers that do, so don't rely on strict mode without feature-testing for support for the relevant aspects of strict mode. Strict mode code and non-strict mode code can coexist so that scripts can opt into strict mode incrementally.

Strict mode makes several changes to normal JavaScript semantics:

  • Eliminates some JavaScript silent errors by changing them to throw errors.

  • Fixes mistakes that make it difficult for JavaScript engines to perform optimizations: strict mode code can sometimes run faster than identical code that's not strict mode.

  • Prohibits some syntax likely to be defined in future versions of ECMAScript.

  • Strict mode (Read)

  • Strict mode in JavaScript (Read)

This Keyword

In JavaScript, the this keyword is a little different compared to other languages. It refers to an object, but it depends on how or where it is being invoked. It also has some differences between strict mode and non-strict mode.

  • In an object method, this refers to the object

  • Alone, this refers to the global object

  • In a function, this refers to the global object

  • In a function, in strict mode, this is undefined

  • In an event, this refers to the element that received the event

  • Methods like call(), apply(), and bind() can refer this to any object

  • The JavaScript this Keyword (Read)

  • This Keyword (Read)

Function Borrowing

Function borrowing allows us to use the methods of one object on a different object without having to make a copy of that method and maintain it in two separate places. It is accomplished through the use of .call(), .apply(), or .bind(), all of which exist to explicitly set this on the method we are borrowing.

Methods are properties of an object which are functions. The value of this inside a method is equal to the calling object. In simple words, this value is the object “before dot”, the one used to call the method.

this in a function

The keyword this when used in a function refers to the global object.

Note: in a browser window the global object is the window object.

The keyword this when used alone refers to the global object.

Note: in a browser window the global object is the window object.

The keyword this when used in an event handler refers to the element that received the event.

The keyword this when used in an arrow function refers to the this value of the enclosing lexical scope; arrow functions do not have a this of their own.

Explicit binding is when you use the call or apply methods to explicitly set the value of this in a function. Explicit Binding can be applied using call(), apply(), and bind().
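
A small sketch of explicit binding (the person object and introduce function are made up for illustration):

```js
const person = { name: 'Ada' };

function introduce(greeting, punctuation) {
  return `${greeting}, I am ${this.name}${punctuation}`;
}

// call: arguments passed individually
console.log(introduce.call(person, 'Hello', '!'));  // "Hello, I am Ada!"

// apply: arguments passed as an array
console.log(introduce.apply(person, ['Hi', '.']));  // "Hi, I am Ada."

// bind: returns a new function with `this` permanently set
const boundIntroduce = introduce.bind(person);
console.log(boundIntroduce('Hey', '?'));            // "Hey, I am Ada?"
```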

Asynchronous JavaScript

Asynchronous programming is a technique that enables your program to start a potentially long-running task and still be able to be responsive to other events while that task runs, rather than having to wait until that task has finished. Once that task has finished, your program is presented with the result.

Many functions provided by browsers, especially the most interesting ones, can potentially take a long time, and therefore, are asynchronous. For example:

  • Making HTTP requests using fetch()
  • Accessing a user's camera or microphone using getUserMedia()
  • Asking a user to select files using showOpenFilePicker()

So even though you may not have to implement your own asynchronous functions very often, you are very likely to need to use them correctly.

Event Loop

The Event Loop is one of the most important aspects to understand about Node.js. Why is this so important? Because it explains how Node.js can be asynchronous and have non-blocking I/O, it explains the 'killer feature' of Node.js, which made it this successful.

setTimeout

setTimeout() runs a function after the specified delay expires. Times are declared in milliseconds.

setInterval

The setInterval() method helps us to repeatedly execute a function after a fixed delay. It returns a unique interval ID which can later be used by the clearInterval() method, which stops further repeated execution of the function.

setInterval() is similar to setTimeout(), with one difference: instead of running the callback function once, it runs it forever, at the interval you specify (in milliseconds):
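
For example (the durations and messages below are arbitrary):

```js
// Run once after ~1 second.
setTimeout(() => {
  console.log('one second has passed');
}, 1000);

// Run repeatedly every 500 ms, then stop after five ticks.
let ticks = 0;
const intervalId = setInterval(() => {
  ticks += 1;
  console.log('tick', ticks);
  if (ticks === 5) {
    clearInterval(intervalId); // stops further executions
  }
}, 500);
```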

Callbacks

A callback function is a function passed into another function as an argument, which is then invoked inside the outer function to complete some kind of routine or action.

Callback Hell

Callback hell is when we try to write asynchronous JavaScript in a way where execution happens visually from top to bottom, creating code that has a pyramid shape with many }) at the end.

Promises

Promises are a much better way to work with asynchronous code in JavaScript than the old and error-prone callback approach. They were introduced into JavaScript with ECMAScript 6. Using promises, we can manage extremely complex asynchronous code with rigorous error-handling setup, write code in a more or less synchronous style, and keep ourselves from running into the so-called callback hell.

Async/Await

async/await is a special syntax for working with promises in a more comfortable fashion. We use the async keyword to declare a function that returns a Promise, and the await keyword to make the function wait for a Promise to settle.
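
A small sketch showing the same promise consumed with .then() and with async/await (the delay helper is made up for illustration):

```js
// A small promise-returning helper.
function delay(ms, value) {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

// Consuming it with .then() ...
delay(100, 'first').then((result) => console.log(result));

// ...and with async/await, which reads almost like synchronous code.
async function main() {
  try {
    const result = await delay(100, 'second');
    console.log(result);
  } catch (err) {
    console.error('something went wrong:', err);
  }
}

main();
```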

Classes

Classes are a template for creating objects. They encapsulate data with code to work on that data. Classes in JS are built on prototypes but have some syntax and semantics that are not shared with ES5 class-like semantics.

Javascript Iterators and Generators

Iterators and generators, introduced into JavaScript with ECMAScript 6, represent an extremely useful concept related to iteration in the language. Iterators are objects, abiding by the iterator protocol, that allow us to easily iterate over a given sequence in various ways, such as using the for...of loop. Generators, on the other hand, allow us to use functions and the yield keyword to easily define iterable sequences that are iterators as well.
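
For example, a minimal generator sketch (countTo is an illustrative name):

```js
// A generator function: each `yield` produces the next value of the sequence.
function* countTo(limit) {
  for (let i = 1; i <= limit; i++) {
    yield i;
  }
}

// Generators are iterable, so for...of and the spread operator work on them.
for (const n of countTo(3)) {
  console.log(n); // 1, 2, 3
}
console.log([...countTo(5)]); // [1, 2, 3, 4, 5]
```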

Modules

Modules encapsulate all sorts of code like functions and variables and expose all this to other files. Generally, we use it to break our code into separate files to make it more maintainable. They were introduced into JavaScript with ECMAScript 6.

CommonJS

CommonJS modules are the original way to package JavaScript code for Node.js. Node.js also supports the ESModules standard used by browsers and other JavaScript runtimes, but CJS is still widely used in backend Node.js applications. Sometimes these modules will be written with a .cjs extension.

ESModules is a standard that was introduced with ES6 (2015). The idea was to standardize how JS modules work and implement these features in browsers. This standard is widely used with frontend frameworks such as react and can also be used in the backend with Node.js. Sometimes these modules will be written with a .mjs extension.
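
A rough sketch of the two syntaxes, assuming hypothetical files math.mjs and math.cjs:

```js
// math.mjs — an ES module exporting a single function
export const add = (a, b) => a + b;

// index.mjs — importing it with ESM syntax:
// import { add } from './math.mjs';
// console.log(add(2, 3)); // 5

// The CommonJS equivalent (math.cjs / index.cjs) would be:
// module.exports.add = (a, b) => a + b;
// const { add } = require('./math.cjs');
```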

Memory Management

Low-level languages like C, have manual memory management primitives such as malloc() and free(). In contrast, JavaScript automatically allocates memory when objects are created and frees it when they are not used anymore (garbage collection). This automaticity is a potential source of confusion: it can give developers the false impression that they don't need to worry about memory management.

Memory lifecycle

Regardless of the programming language, the memory life cycle is pretty much always the same:

  • Allocate the memory you need
  • Use the allocated memory (read, write)
  • Release the allocated memory when it is not needed anymore

The second part is explicit in all languages. The first and last parts are explicit in low-level languages but are mostly implicit in high-level languages like JavaScript.

Garbage Collection

Memory management in JavaScript is performed automatically and invisibly to us. We create primitives, objects, functions… All that takes memory. The main concept of memory management in JavaScript is reachability.

Javascript chrome dev tools

These are a set of tools built into the browser to help frontend developers diagnose and solve various issues in their applications, such as JavaScript and logic bugs, CSS styling issues, or even just making quick temporary alterations to the DOM.

To enter the dev tools, right click and click Inspect (or press ctrl+shift+c/cmd+opt+c) to enter the Elements panel. Here you can debug CSS and HTML issues. If you want to see logged messages or interact with JavaScript, enter the Console tab from the tabs above (or press ctrl+shift+j or F12/cmd+opt+j to enter it directly). Another very useful feature in the Chrome dev tools is the Lighthouse tab (for checking performance).

NOTE: This isn't a chrome-specific feature, and most browsers (Chromium based or otherwise) will have their own, largely-similar set of devtools.

Debugging issues

When you're just starting out with JavaScript development, you might use a lot of console.log() statements in your code to log and check the values of variables while debugging. The results show up in the Console panel, along with a reference to the line and file of code that produced them.

However, for quicker and easier debugging of more complex issues (which also doesn't litter your codebase with console.log()s), breakpoints and the Sources panel are your friends.

Debugging Memory Leaks

In JavaScript, memory leaks commonly occur within heap allocated memory, where short lived objects are attached to long lived ones and the Garbage Collector cannot safely de-allocate that memory as it is still referenced from the root set (the global object).

Enter the dev tools and check out the Lighthouse tab. This is essentially a series of tests which analyses the currently open website on a bunch of metrics related to performance, page speed, accessibility, etc. Feel free to run the tests by clicking the Analyse Page Load button (you might want to do this in an incognito tab to avoid errors arising from extensions you're using). Once you have the results, take your time and read through them (and do click through to the reference pages mentioned alongside each test result to know more about it!)

Working with APIs

When working with remote APIs, you need a way to interact with those APIs. Modern JavaScript provides two native ways to send HTTP requests to remote servers, XMLHttpRequest and Fetch.

XMLHttpRequest (XHR) is a built-in browser object that can be used to interact with a server. XHR allows you to update data without having to reload a web page. Despite the word XML in its name, XHR is not limited to XML-formatted data; it can be used with any type of data, like JSON, files, and much more.

The fetch() method in JavaScript is used to send requests to a server and load information on web pages. The request can target any API that returns data in JSON or XML format. This method returns a promise.

Nodejs async programming

Asynchronous code means that things can happen independently of the main program flow; async functions in JavaScript are processed in the background without blocking other requests, ensuring non-blocking code execution. Asynchronous code executes without depending on a fixed order, which improves system efficiency and throughput. Building web apps requires knowledge of asynchronous concepts, since we will be dealing with actions that take some time to be processed.

Promises

A promise is commonly defined as a proxy for a value that will eventually become available. Asynchronous functions use promises behind the scenes, so understanding how promises work is fundamental to understanding how async and await work. Once a promise has been called, it starts in a pending state. This means that the calling function continues executing while the promise is pending, and once the promise resolves, the calling function gets whatever data was being requested.

Creating a Promise: The Promise API exposes a Promise constructor, which you initialize using new Promise().

Using resolve() and reject(), we can communicate back to the caller what the resulting Promise state was, and what to do with it.
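
For example, a minimal sketch (the done flag stands in for the outcome of some real asynchronous work):

```js
const done = true; // stand-in for the result of an asynchronous task

const myPromise = new Promise((resolve, reject) => {
  if (done) {
    resolve('the task finished');          // fulfil the promise with a value
  } else {
    reject(new Error('the task failed'));  // reject it with an error
  }
});

myPromise
  .then((message) => console.log(message))
  .catch((err) => console.error(err.message));
```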

Async/Await

Async/Await is a special syntax to work with promises in a more comfortable fashion. It's easy to understand and use. Adding the keyword async before a function ensures that the function returns a promise and the keyword await makes JavaScript wait until that promise settles and returns the result.

Callbacks

Node.js, being an asynchronous platform, doesn't wait around for things like file I/O to finish - Node.js uses callbacks. A callback is a function called at the completion of a given task; this prevents any blocking, and allows other code to be run in the meantime.

setTimeout

The setTimeout runs a function after the specified period expires. Times are declared in milliseconds.

setInterval

The setInterval() method helps us to repeatedly execute a function after a fixed delay. It returns a unique interval ID which can later be used by the clearInterval() method, which stops further repeated execution of the function.

setInterval() is similar to setTimeout(), with one difference: instead of running the callback function once, it runs it forever, at the interval you specify (in milliseconds):

setImmediate

The setImmediate function delays the execution of a function until the current event loop cycle finishes its execution. It's very similar to calling setTimeout with a 0 ms delay.

process.nextTick()

Every time the event loop takes a full trip, we call it a tick. When we pass a function to process.nextTick(), we instruct the engine to invoke this function at the end of the current operation before the next event loop tick starts.
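
A small sketch comparing the scheduling functions (note that the relative order of setTimeout(0) and setImmediate in the main module is not guaranteed):

```js
console.log('start');

setTimeout(() => console.log('setTimeout (timer phase)'), 0);
setImmediate(() => console.log('setImmediate (after the poll phase)'));
process.nextTick(() => console.log('process.nextTick (before the next tick)'));

console.log('end');

// Typical output:
//   start
//   end
//   process.nextTick (before the next tick)
//   setTimeout (timer phase)        <- order vs setImmediate can vary here
//   setImmediate (after the poll phase)
```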

Event Loop

The Event Loop is one of the most critical aspects of Node.js. Why is this so important? Because it explains how Node.js can be asynchronous and have non-blocking I/O, it explains the 'killer feature' of Node.js, which made it this successful.

Event Emitter

In Node.js, an event can be described simply as a string with a corresponding callback. An event can be 'emitted' (or, in other words, the corresponding callback be called) multiple times or you can choose to only listen for the first time it is emitted.
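
A minimal sketch using Node's built-in events module (the event names are made up):

```js
const EventEmitter = require('events');

const emitter = new EventEmitter();

// Listen every time the event is emitted...
emitter.on('order', (item) => console.log('received order for', item));

// ...or only the first time it is emitted.
emitter.once('ready', () => console.log('ready fired once'));

emitter.emit('order', 'coffee'); // received order for coffee
emitter.emit('ready');           // ready fired once
emitter.emit('ready');           // (no output: the once-listener is gone)
```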

Node.js Introduction

Node.js is an open source, cross-platform runtime environment and library that is used for running web applications outside the client’s browser.

It is used for server-side programming, and primarily deployed for non-blocking, event-driven servers, such as traditional web sites and back-end API services, but was originally designed with real-time, push-based architectures in mind. Every browser has its own version of a JS engine, and node.js is built on Google Chrome’s V8 JavaScript engine.

What is Node.js

Node.js is an open-source and cross-platform JavaScript runtime environment. It is a popular tool for almost any kind of project! Node.js runs the V8 JavaScript engine, Google Chrome's core, outside the browser. This allows Node.js to be very performant. A Node.js app runs in a single process, without creating a new thread for every request. Node.js provides a set of asynchronous I/O primitives in its standard library that prevent JavaScript code from blocking and generally, libraries in Node.js are written using non-blocking paradigms, making blocking behavior the exception rather than the norm.

Why Node.js

Node.js is a cross-platform runtime, perfect for a wide range of use cases. Its huge community makes it easy to get started. It uses the V8 engine to compile JavaScript and runs at lightning-fast speeds. Node.js applications are very scalable and maintainable. Cross-platform support allows the creation of all kinds of applications - desktop apps, software as a service, and even mobile applications. Node.js is perfect for data-intensive and real-time applications since it uses an event-driven, non-blocking I/O model, making it lightweight and efficient. With such a huge community, a vast collection of Node.js packages is available to simplify and boost development.

History of Node.js

Node.js was written initially by Ryan Dahl in 2009, about thirteen years after the introduction of the first server-side JavaScript environment, Netscape's LiveWire Pro Web. The initial release supported only Linux and Mac OS X. Its development and maintenance were led by Dahl and later sponsored by Joyent.

Both the browser and Node.js use JavaScript as their programming language. Building apps that run in the browser is entirely different than building a Node.js application. Even though it's always JavaScript, some key differences make the experience radically different.

Running Node.js Code

The usual way to run a Node.js program is to run the globally available node command (once you install Node.js) and pass the name of the file you want to execute.

We split our code into different files to maintain, organize and reuse code whenever possible. A module system allows us to split and include code and import code written by other developers whenever required. In simple terms, a module is nothing but a JavaScript file. Node.js has many built-in modules that are part of the platform and come with the Node.js installation, for example http, fs, path, and more.

CommonJS vs ESM

CommonJS and ES (EcmaScript) are module systems used in Node. CommonJS is the default module system. However, a new module system was recently added to NodeJS - ES modules. CommonJS modules use the require() statement for module imports and module.exports for module exports while it's import and export for ES.

Custom Modules

Modules are collections of JavaScript code in separate logical files that can be used in external applications based on their related functionality. There are two ways to create modules in Node.js: via CommonJS or ESM.

global keyword

In browsers, the top-level scope is the global scope. This means that within the browser var something will define a new global variable. In Node.js this is different. The top-level scope is not the global scope; var something inside a Node.js module will be local to that module.

npm is the standard package manager for Node.js.

It is two things: first and foremost, it is an online repository for the publishing of open-source Node.js projects; second, it is a command-line utility for interacting with said repository that aids in package installation, version management, and dependency management. A plethora of Node.js libraries and applications are published on npm, and many more are added every day

npx

npx is a very powerful command that's been available in npm starting version 5.2, released in July 2017. If you don't want to install npm, you can install npx as a standalone package. npx lets you run code built with Node.js and published through the npm registry.

Global Install vs Local Install

Node.js and npm allow two methods of installing dependencies/packages: local and global.

A local install is mainly used when adding a package or dependency as part of a specific project you're working on. The package is installed (with its dependencies) in the node_modules folder under your project, and a new line is added for the installed dependency under the dependencies label in package.json. At this point you can start using the package in your Node.js code by importing it.

Unlike a local install, a global install puts packages and dependencies in a system path, and these packages are available to any program that runs on that specific computer. This method is often used for installing command line tools (for example, the npm program itself is a globally installed npm package).

npm provides various features to help install and maintain the project's dependencies. Dependencies get updates with new features and fixes, so upgrading to a newer version is recommended. We use the npm update command for this.

Using Packages

Open source Node modules are very powerful as you can instantly get access to the functionality that you’d otherwise have to write yourself. We normally use CommonJS or ESM to import an installed package.

Running Scripts

In Node.js, npm scripts are used to start a server, build a project, and run tests. We define these scripts in the package.json file of the project. We can also split large scripts into smaller parts when needed.

npm workspaces

Workspace is a generic term that refers to the set of npm CLI features that support managing multiple packages from your local file system from within a singular top-level root package.

Creating Packages

npm packages allow you to bundle some specific functionality into a reusable package which can then be uploaded to some package registry such as npm or GitHub packages and then be installed and reused in projects using npm.

Error handling is a way to find bugs and solve them as quickly as humanly possible. Errors in Node.js can be either operational or programmer errors. Read the articles linked below to understand how to handle different types of errors in Node.js.

Stack Trace

The stack trace is used to trace the active stack frames at a particular instance during the execution of a program. The stack trace is useful while debugging code as it shows the exact point that has caused an error.

Using debugger

Node.js includes a command-line debugging utility. The Node.js debugger client is not a full-featured debugger, but simple stepping and inspection are possible. To use it, start Node.js with the inspect argument followed by the path to the script to debug.

Example - $ node inspect myscript.js

Uncaught Exceptions

When a JavaScript error is not properly handled, an uncaughtException is emitted. These suggest the programmer has made an error, and they should be treated with the utmost priority.

The correct use of uncaughtException is to perform synchronous cleanup of allocated resources (e.g. file descriptors, handles, etc.) before shutting down the process. It is not safe to resume normal operation after uncaughtException because the system may be left in a corrupted state. The best way is to let the application crash, log the error, and then restart the process automatically using nodemon or pm2.
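
A minimal sketch of this pattern (purely illustrative):

```js
process.on('uncaughtException', (err) => {
  // Synchronous cleanup only (close file descriptors, flush logs, ...).
  console.error('Uncaught exception:', err);
  // Then let the process die; a supervisor (nodemon, pm2, systemd) restarts it.
  process.exit(1);
});

// Anything thrown outside a try/catch ends up in the handler above.
setTimeout(() => {
  throw new Error('boom');
}, 100);
```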

Error types

Javascript Errors

JavaScript Errors are used by JavaScript to inform developers about various issues in the script being executed. These issues can be syntax errors, where the developer/programmer has used the wrong syntax, errors caused by wrong user input, or some other problem.

JavaScript has six types of errors that may occur during the execution of the script:

  • EvalError
  • RangeError
  • ReferenceError
  • SyntaxError
  • TypeError
  • URIError

System errors

User specified errors

Assertion errors

Async errors

Errors must always be handled. If you are using synchronous programming you can use a try/catch, but this does not work for asynchronous code: async errors can only be handled inside the callback function!
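
A small sketch showing why try/catch doesn't help with callback-style errors (the file name is a placeholder):

```js
const fs = require('fs');

// This try/catch does NOT catch the error: readFile reports it asynchronously.
try {
  fs.readFile('missing-file.txt', 'utf8', (err, data) => {
    // Errors must be handled here, inside the callback.
    if (err) {
      console.error('handled in the callback:', err.message);
      return;
    }
    console.log(data);
  });
} catch (err) {
  console.error('never reached for async errors');
}
```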

Working with Files

You can programmatically manipulate files in Node.js with the built-in fs module. The name is short for “file system,” and the module contains all the functions you need to read, write, and delete files on the local machine.

Fs module

File System or fs module is a built in module in Node that enables interacting with the file system using JavaScript. All file system operations have synchronous, callback, and promise-based forms, and are accessible using both CommonJS syntax and ES6 Modules.
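
A small sketch of both the callback and promise forms (notes.txt is a placeholder file name):

```js
const fs = require('fs');
const fsPromises = require('fs/promises');

// Callback form
fs.readFile('notes.txt', 'utf8', (err, data) => {
  if (err) return console.error('read failed:', err.message);
  console.log('callback read:', data);
});

// Promise form (works nicely with async/await)
async function readNotes() {
  const data = await fsPromises.readFile('notes.txt', 'utf8');
  console.log('promise read:', data);
}

readNotes().catch(console.error);
```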

path module

The path module provides utilities for working with file and directory paths. It's built-in to Node.js core and can simply be used by requiring it.

process.cwd()

The process.cwd() method returns the current working directory of the Node.js process.

Glob patterns are most commonly used to specify sets of filenames using wildcard characters; matching strings against such patterns is called wildcard matching.

Globby

fs-extra

fs-extra adds file system methods that aren't included in the native fs module and adds promise support to the fs methods. It also uses graceful-fs to prevent EMFILE errors. It should be a drop in replacement for fs.

Chokidar

Chokidar is a fast open-source file watcher for Node.js. You give it a bunch of files, it watches them for changes and notifies you every time an old file is edited or a new file is created.

__dirname

The __dirname in a node script returns the path of the folder where the current JavaScript file resides. __filename and __dirname are used to get the filename and directory name of the currently executing file.

The __filename in Node.js returns the filename of the executed code. It gives the absolute path of the code file. The following approach covers implementing __filename in the Node.js project.

  • Official Docs (Official Docs)

Nodejs command line apps

Exiting and exit codes

Exiting is a way of terminating a Node.js process by using the Node.js process module.

Printing output

Process stdout

The process.stdout property is an inbuilt application programming interface of the process module used to send data out of our program. It is a Writable stream connected to stdout and implements a write() method.

process.stderr

The process.stderr property is an inbuilt application programming interface of the process module that returns a stream connected to stderr.

Chalk

Chalk is a clean and focused library used to do string styling in your terminal applications. With it you can print different styled messages to your console like changing font colors, font boldness, font opacity and also the background of any message printed on your console.

Figlet

This package aims to fully implement the FIGfont spec in JavaScript, which represents the graphical arrangement of characters representing larger characters. It works in the browser and with Node.js.

Cli progress

CLI-Progress is a package that provides a custom progress bar for CLI applications.

Taking input

Node.js provides a few ways to take inputs from user, including the built-in process.stdin and readline module. There are also several third party packages like prompts and Enquirer built on top of readline that provide an easy to use and intuitive interface.

Process stdin

process.stdin is a standard Readable stream that listens for user input and is accessible via the process module. It uses the on() function to listen for input events.
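
A minimal sketch (purely illustrative):

```js
// Listen for user input on stdin; each chunk arrives as a Buffer.
process.stdin.on('data', (chunk) => {
  const text = chunk.toString().trim();
  console.log(`You typed: ${text}`);
  if (text === 'exit') {
    process.stdin.pause(); // stop listening and let the process end
  }
});

console.log("Type something and press Enter ('exit' to quit):");
```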

Prompts

Prompts is a higher-level, user-friendly interface built on top of Node.js's inbuilt readline module. It supports different types of prompts such as text, password, autocomplete, date, etc. It is an interactive module and comes with inbuilt validation support.

Inquirer

Inquirer.js is a collection of common interactive command line interfaces for taking inputs from user. It is promise based and supports chaining series of prompt questions together, receiving text input, checkboxes, lists of choices and much more.

You can use it to empower your terminal applications that need user input or to build your own CLI.

Command line args

process.argv

process.argv is an array of parameters that are sent when you run a Node.js file or Node.js process.
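
For example:

```js
// argv[0] is the node executable, argv[1] is the script path,
// everything after that is what the user actually passed.
const args = process.argv.slice(2);
console.log('arguments:', args);

// e.g. running `node script.js --name Ada` prints:
//   arguments: [ '--name', 'Ada' ]
```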

Commander.js

Commander is a light-weight, expressive, and powerful command-line framework for Node.js. With Commander.js you can create your own command-line interface (CLI).

Environment variables

dotenv is a zero-dependency module that loads environment variables from a .env file into process.env. Storing configuration in the environment separate from code is based on The Twelve-Factor App methodology.

process.env

In Node.js, process.env is a global variable that is injected during runtime. It is a view of the state of the system environment variables. When we set an environment variable, it is loaded into process.env during runtime and can later be accessed.
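
A small sketch (PORT is just an example variable name; the dotenv call assumes that package is installed):

```js
// Read configuration from the environment, with a fallback default.
const port = process.env.PORT || 3000;
console.log('starting on port', port);

// With the dotenv package installed, variables from a .env file
// (e.g. a line `PORT=8080`) are loaded into process.env at startup:
// require('dotenv').config();
```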

APIs

API is the acronym for Application Programming Interface, which is a software intermediary that allows two applications to talk to each other.

Http module

Node.js ships with a built-in http module for transferring data over HTTP. To use the HTTP server in Node, we require the http module using the require() method. The http module can create an HTTP server that listens on a port and sends a response back to the client.
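
A minimal sketch of an HTTP server with the built-in module (the port is arbitrary):

```js
const http = require('http');

// A minimal HTTP server that answers every request with plain text.
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node.js\n');
});

server.listen(3000, () => {
  console.log('Server listening on http://localhost:3000');
});
```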

Express.js

Express is a Node.js web application framework that provides broad features for building web and mobile applications. It is used to build single-page, multi-page, and hybrid web applications.
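
A minimal Express sketch, assuming the express package has been installed locally (npm install express):

```js
const express = require('express');
const app = express();

// One route that responds to GET / with plain text.
app.get('/', (req, res) => res.send('Hello from Express'));

app.listen(3000, () => console.log('Express app listening on port 3000'));
```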

NestJS

NestJS is a progressive Node.js framework for creating efficient and scalable server-side applications.

Fastify

Fastify is a web framework highly focused on providing the best developer experience with the least overhead and a powerful plugin architecture, inspired by Hapi and Express.

Got

Got is a lighter, human-friendly, and powerful HTTP request library explicitly designed to work with Node.js. It supports pagination, RFC compliant caching, makes an API request again if it fails, supports cookies out of the box, etc.

unfetch

unfetch is a tiny 500b fetch 'barely-polyfill'

Axios

Axios is a promise-based HTTP Client for node.js and the browser. Used for making requests to web servers. On the server-side it uses the native node.js http module, while on the client (browser) it uses XMLHttpRequests.

Api calls http

JSON Web Token

JWT, or JSON-Web-Token, is an open standard for sharing security information between two parties — a client and a server. Each JWT contains encoded JSON objects, including a set of claims. JWTs are signed using a cryptographic algorithm to ensure that the claims cannot be altered after the token is issued.

Passport js

Passport.js is authentication middleware for Node.js. It makes implementing authentication in express apps really easy and fast. It is extremely flexible and modular. It uses 'strategies' to support authentication using a username and password, Facebook, Twitter, and a lot of other sites.

Keep App Running

In Node.js, you need to restart the process to make changes take effect. This adds an extra step to your workflow. You can eliminate this extra step by using nodemon to restart the process automatically.

Since Node.js 18.11.0, you can run Node with the --watch flag to reload your app every time a file is changed, so you don't need to use nodemon anymore. See the Node.js 18.11.0 Changelog.

Nodemon

In Node.js, you need to restart the process to make changes take effect. This adds an extra step to your workflow. You can eliminate this extra step by using nodemon or PM2 to restart the process automatically.

nodemon is a command-line interface (CLI) utility developed by @rem that wraps your Node app, watches the file system, and automatically restarts the process.

Template Engines

A template engine helps us create an HTML template with minimal code. It can also inject data into the HTML template on the client side and produce the final HTML.

Some examples of template engines in Node.js are:

  • Nunjucks
  • Jade
  • Vash
  • EJS
  • Handlebars
  • HAML

Marko

Marko is a fast and lightweight HTML-based templating engine that compiles templates to CommonJS modules and supports streaming, async rendering, and custom tags. It is HTML re-imagined as a language for building dynamic and reactive user interfaces.

Pug

Pug is a JavaScript template engine. It is a high-performance template engine heavily influenced by Haml and implemented with JavaScript for Node.js and browsers. Pug was formerly called Jade.

EJS

EJS is a templating language or engine that allows you to generate HTML markup with pure JavaScript. And this is what makes it perfect for Nodejs applications. In simple words, the EJS template engine helps to easily embed JavaScript into your HTML template.

What is Database

A database is an organized collection of structured information, or data, typically stored electronically in a computer system. A database is usually controlled by a database management system (DBMS).

Relational

A relational database is a (most commonly digital) database based on the relational model of data, as proposed by E. F. Codd in 1970. A system used to maintain relational databases is a relational database management system (RDBMS). Many relational database systems are equipped with the option of using the SQL (Structured Query Language) for querying and maintaining the database.

Knex

Knex.js is a 'batteries included' SQL query builder for PostgreSQL, CockroachDB, MSSQL, MySQL, MariaDB, SQLite3, Better-SQLite3, Oracle, and Amazon Redshift designed to be flexible, portable, and fun to use.

TypeORM

TypeORM is an ORM that can run in NodeJS, Browser, Cordova, PhoneGap, Ionic, React Native, NativeScript, Expo, and Electron platforms and can be used with TypeScript and JavaScript (ES5, ES6, ES7, ES8). Its goal is to always support the latest JavaScript features and provide additional features that help you to develop any kind of application that uses databases - from small applications with a few tables to large scale enterprise applications with multiple databases.

TypeORM supports both Active Record and Data Mapper patterns, unlike all other JavaScript ORMs currently in existence, which means you can write high quality, loosely coupled, scalable, maintainable applications the most productive way.

Sequelize

Sequelize is an easy-to-use and promise-based Node.js ORM tool for Postgres, MySQL, MariaDB, SQLite, DB2, Microsoft SQL Server, and Snowflake. It features solid transaction support, relations, eager and lazy loading, read replication and more.

What is an ORM ?

ORM stands for Object Relational Mapper. It is a tool, or a level of abstraction, that maps (converts) data in a relational database into programmatic objects that can be manipulated by a programmer using a programming language (usually an OOP language). ORMs exist to map the details between two data sources that, due to a mismatch, cannot otherwise coexist.

Prisma

Prisma is an ORM that helps app developers build faster and make fewer errors. Combined with its Data Platform developers gain reliability and visibility when working with databases.

Native drivers

In the context of databases, native drivers are the official client libraries published by the database vendors themselves (for example, the MongoDB driver for Node.js). They talk to the database directly, without the extra abstraction layer that an ORM or ODM provides.

Document

A document database is a type of nonrelational database that is designed to store and query data as JSON-like documents. Document databases make it easier for developers to store and query data in a database by using the same document-model format they use in their application code. The flexible, semistructured, and hierarchical nature of documents and document databases allows them to evolve with applications’ needs.

Mongoose

Mongoose is an Object Data Modeling (ODM) library for MongoDB and Node.js. Mongoose provides a straight-forward, schema-based solution to model your application data. It includes built-in type casting, validation, query building, business logic hooks and more, out of the box.

Prisma

Prisma is an open source next-generation ORM in the TypeScript ecosystem. It offers a dedicated API for relation filters and provides an abstraction layer that makes you more productive compared to writing SQL. Prisma currently supports PostgreSQL, MySQL, SQL Server, SQLite, MongoDB and CockroachDB.

Another way to connect to different databases in Node.js is to use the official native drivers provided by the database. For example, here is the list of drivers by MongoDB.

Testing

Software testing is the process of verifying that what we create is doing exactly what we expect it to do. The tests are created to prevent bugs and improve code quality.

The two most common testing approaches are unit testing and end-to-end testing. In the first, we examine small snippets of code, in the second, we test an entire user flow.

Jest is a delightful JavaScript Testing Framework with a focus on simplicity. It works with projects using: Babel, TypeScript, Node, React, Angular, Vue and more!

Mocha

Mocha is an open source JavaScript test framework running on Nodejs and in the browser, making asynchronous testing simple and fun, and it's a great candidate for BDD (Behavior Driven Development).

Cypress is a new front end testing tool built for the modern web. It enables you to write faster, easier and more reliable tests.

Node.js Logging

Logging is an essential part of understanding the complete application life cycle of the Node.js application. We can much more easily and quickly fix errors by looking at logs throughout the development process, from creating to debugging to designing new features. Error, warn, info, and debug are the four basic logging levels in Node.js. Logging involves persistently collecting information about an application's runtime behaviour.

Morgan

Morgan is a Node.js and Express.js middleware for logging HTTP requests and errors, simplifying the debugging process. It provides flexibility in defining the format of log messages and lets you override the output destination for your logs.

Winston

winston is designed to be a simple and universal logging library with support for multiple transports. A transport is essentially a storage device for your logs. Each winston logger can have multiple transports configured at different levels. For example, one may want error logs to be stored in a persistent remote location (like a database), but all logs output to the console or a local file.

Keep your app running in Production

PM2 lets you run your nodejs scripts forever. In the event that your application crashes, PM2 will also restart it for you.

Pm2

PM2 is a production process manager for Node.js applications with a built-in load balancer. It allows you to keep applications alive forever, to reload them without downtime and to facilitate common system admin tasks.

Forever

Forever is a Node.js package for ensuring that a given script runs continuously (i.e. forever) even when the server crashes or stops. It is a CLI tool for managing Node applications and their processes in production environments.

Nohup

Nohup, short for 'no hang up', is a command on Linux systems that keeps processes running by ignoring the SIGHUP signal, even after you exit the shell or terminal.

Nodejs Threads

Node.js runs JavaScript on a single thread, but it gives us ways to run work in parallel with the main process. Even on today's multicore systems, single threading is very memory efficient.

The child_process module gives Node the ability to run child processes, which communicate with the parent through IPC (inter-process communication) and can execute operating system commands.

The three main methods inside this module are: child_process.spawn(), child_process.fork() and child_process.exec().
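
A small sketch of exec() and spawn() (running node --version is just an arbitrary, cross-platform command):

```js
const { exec, spawn } = require('child_process');

// exec: buffers the whole output and hands it to a callback.
exec('node --version', (err, stdout) => {
  if (err) return console.error(err);
  console.log('exec says:', stdout.trim());
});

// spawn: streams output as it arrives; better for long-running commands.
const child = spawn('node', ['--version']);
child.stdout.on('data', (chunk) => {
  console.log('spawn says:', chunk.toString().trim());
});
```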

Cluster

The cluster module allows you to easily create child processes that each run simultaneously on their own thread and share the same server port, letting you distribute workloads among application threads.

A worker thread is a continuous parallel thread that runs and accepts messages until it is explicitly closed or terminated. With worker threads, we can achieve a much more efficient application without creating a deadlock situation. Unlike child processes, workers can share memory.

Nodejs streams

Streams are a way of handling data that lets you read, write or transform chunks of data piece by piece, without keeping it all in memory at once. There are four types of streams in Node.js.

  • Readable: streams from which data can be read.
  • Writable: streams to which we can write data.
  • Duplex: streams that are both Readable and Writable.
  • Transform: streams that can modify or transform the data as it is written and read.

Multiple streams can be chained together using the pipe() method.
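
For example, a small sketch chaining streams with pipe() (input.txt is a placeholder file name; gzip is used just to show a Transform stream):

```js
const fs = require('fs');
const zlib = require('zlib');

// Read a file, compress it as it flows through a Transform stream,
// and write the result chunk by chunk, without loading it all into memory.
fs.createReadStream('input.txt')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('input.txt.gz'))
  .on('finish', () => console.log('compression finished'));
```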

More Debugging

Debugging is a concept to identify and remove errors from software applications. Here, we will learn about the technique to debug a Node.js application.

Why not to use console.log() for debugging?

Using console.log to debug code generally traps you in an endless loop of 'stop the app, add a console.log, start the app again'. Besides slowing down development, it also clutters the code with unnecessary statements. Finally, trying to log variables amid the noise of other logging output can make it difficult to find the values you are actually debugging.

Memory Leaks

A memory leak shows up when your Node.js app's memory (and often CPU) usage increases over time for no apparent reason. In simple terms, a Node.js memory leak is an orphan block of memory on the heap that is no longer used by your app but has not been released by the garbage collector. It's a useless block of memory. These blocks can grow over time and lead to your app crashing because it runs out of memory.

Garbage Collection

Memory management in JavaScript is performed automatically and invisibly to us. We create primitives, objects, functions… All that takes memory. The main concept of memory management in JavaScript is reachability.

Node Inspect

Node.js provides a built-in DevTools-based debugger to allow debugging Node.js applications.

Using APM

As much fun as it is to intercept your container requests with inspect and step through your code, you won’t have this option in production. This is why it makes a lot of sense to try and debug your application locally in the same way as you would in production.

In production, one of your tools would be to login to your remote server to view the console logs, just as you would on local. But this can be a tedious approach. Luckily, there are tools out there that perform what is called log aggregation, such as Stackify.

These tools send your logs from your running application into a single location. They often come with high-powered search and query utilities so that you can easily parse your logs and visualize them.

Nodejs common modules

These are the common modules that come with Node.js out of the box. They provide tools or APIs for performing standard Node.js operations, like interacting with the file system, parsing URLs, or logging information to the console.

Builtin modules

Built-in modules are already installed with Node.js, so you don't need to install them with any package manager (yarn, npm, etc.).

  • fs: dealing with the system files.

  • os: provides information about the operating system.

  • net: to build clients and servers.

  • path: to handle file paths.

  • url: help in parsing URL strings.

  • events: provides a method for interacting with events.

  • http: making Node.js transfer data over HTTP.

  • console: to log information in the console.

  • assert: provides a set of assertion tests.

  • process: provides information about, and control over, the current process.

  • cluster: allows creating child processes that run simultaneously and share the same server port.

  • perf_hooks: provides APIs for performance measurement

  • crypto: to handle OpenSSL cryptographic functions.

  • Buffer: provides APIs for handling streams of binary data.

  • DNS: enables name resolution.

  • events: for handling existing events and creating custom events.

  • child_processes: provides the ability to spawn subprocesses.

  • REPL: provides a Read-Eval-Print-Loop (REPL) implementation that is available both as a standalone program or includible in other applications.

  • readline: provides an interface for reading data from a Readable stream one line at a time.

  • util: supports the needs of Node.js internal APIs.

  • querystring: provides utilities for parsing and formatting URL query strings.

  • string_decoder: provides an API for decoding Buffer objects into strings.

  • tls: provides an implementation of the Transport Layer Security (TLS) and Secure Socket Layer (SSL) protocols.

  • API documentation of Built-in modules (Official Website)

  • Built-in modules - w3schools (Read)

Python

Python is a high-level, interpreted, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically-typed and garbage-collected.

Basic Syntax

Setup the environment for python and get started with the basics.

Variables

Variables are used to store information to be referenced and manipulated in a computer program. They also provide a way of labeling data with a descriptive name, so our programs can be understood more clearly by the reader and ourselves. It is helpful to think of variables as containers that hold information. Their sole purpose is to label and store data in memory. This data can then be used throughout your program.

Data Types

Variables in Python can be of different data types. These data types can be text (str), numeric (int, float, complex), sequence (list, tuple, range), mapping (dict), set (set, frozenset), boolean (boolean), binary (bytes, bytearray, memoryview), or none (None).

Conditionals

Conditional Statements in Python perform different actions depending on whether a specific condition evaluates to true or false. Conditional Statements are handled by IF-ELIF-ELSE statements and MATCH-CASE statements in Python.

Typecasting

The process of converting the value of one data type (integer, string, float, etc.) to another data type is called type conversion. Python has two types of type conversion: Implicit and Explicit.

Exceptions

Python has many built-in exceptions that are raised when your program encounters an error (something in the program goes wrong). When these exceptions occur, the Python interpreter stops the current process and passes it to the calling process until it is handled. If not handled, the program will crash.

Functions

In programming, a function is a reusable block of code that executes a certain functionality when it is called. Functions are integral parts of every programming language because they help make your code more modular and reusable.

In Python, you define a function with the def keyword, then write the function identifier (name) followed by parentheses and a colon.

Lists, Tuples, Sets, and Dictionaries

Lists: Lists are just like dynamically sized arrays declared in other languages (vector in C++ and ArrayList in Java). Lists need not always be homogeneous, which makes them one of the most powerful tools in Python.

Tuple: A Tuple is a collection of Python objects separated by commas. In some ways, a tuple is similar to a list in terms of indexing, nested objects, and repetition but a tuple is immutable, unlike lists that are mutable.

Set: A Set is an unordered collection data type that is iterable, mutable, and has no duplicate elements. Python’s set class represents the mathematical notion of a set.

Dictionary: In Python, a dictionary is an ordered (since Python 3.7; unordered in 3.6 and prior) collection of data values, used to store data like a map. Unlike other data types that hold only a single value per element, a dictionary holds key:value pairs, which makes lookups more optimized.

Data Structures and Algorithms

A data structure is a named location that can be used to store and organize data, and an algorithm is a collection of steps to solve a particular problem. Learning data structures and algorithms allows us to write efficient and optimized computer programs.

Arrays and Linked lists

Arrays store elements in contiguous memory locations, resulting in easily calculable addresses for the elements stored and this allows faster access to an element at a specific index. Linked lists are less rigid in their storage structure and elements are usually not stored in contiguous locations, hence they need to be stored with additional tags giving a reference to the next element. This difference in the data storage scheme decides which data structure would be more suitable for a given situation.

Heaps Stacks and Queues

Stacks: Operations are performed LIFO (last in, first out), which means that the last element added will be the first one removed. A stack can be implemented using an array or a linked list. If the stack runs out of memory, it’s called a stack overflow.

Queue: Operations are performed FIFO (first in, first out), which means that the first element added will be the first one removed. A queue can be implemented using an array.

Heap: A tree-based data structure in which the value of a parent node is ordered in a certain way with respect to the value of its child node(s). A heap can be either a min heap (the value of a parent node is less than or equal to the value of its children) or a max heap (the value of a parent node is greater than or equal to the value of its children).

Hash Tables

Hash Table, Map, HashMap, Dictionary and Associative Array are all names for the same data structure. It is a data structure that implements an associative abstract data type, a structure that can map keys to values.

Binary Search Trees

A binary search tree, also called an ordered or sorted binary tree, is a rooted binary tree data structure with the key of each internal node being greater than all the keys in the respective node's left subtree and less than the ones in its right subtree

Recursion

Recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code.

Sorting Algorithms

Sorting refers to arranging data in a particular format. Sorting algorithm specifies the way to arrange data in a particular order. Most common orders are in numerical or lexicographical order.

The importance of sorting lies in the fact that data searching can be optimized to a very high level, if data is stored in a sorted manner.

Advanced Topics

Now that you have covered the basics of Python, let's move on to some advanced topics. In this section, you will be learning about things like OOP, Lambdas, Decorators, Iterators, Modules, and more.

OOP

In Python, object-oriented programming (OOP) is a programming paradigm that uses objects and classes in programming. It aims to implement real-world entities like inheritance, polymorphism, encapsulation, etc. in programming. The main concept of OOP is to bind the data and the functions that work on it together as a single unit, so that no other part of the code can access this data.

Methods and Dunder

A method in Python is somewhat similar to a function, except it is associated with objects/classes. Methods differ from functions in two major ways:

  • The method is implicitly used for an object for which it is called.
  • The method is accessible to data that is contained within the class.

Dunder or magic methods in Python are methods that have two leading and two trailing underscores in the method name. Dunder here means "Double Under (Underscores)". These are commonly used for operator overloading. A few examples of magic methods are __init__, __add__, __len__, __repr__, etc.

Inheritance

Inheritance allows us to define a class that inherits all the methods and properties from another class.

Classes

A class is a user-defined blueprint or prototype from which objects are created. Classes provide a means of bundling data and functionality together. Creating a new class creates a new type of object, allowing new instances of that type to be made. Each class instance can have attributes attached to it for maintaining its state. Class instances can also have methods (defined by their class) for modifying their state.

Regular Expressions

A regular expression is a sequence of characters that specifies a search pattern in text. Usually such patterns are used by string-searching algorithms for 'find' or 'find and replace' operations on strings, or for input validation.

Decorators

A decorator is a design pattern in Python that allows a user to add new functionality to an existing object without modifying its structure. Decorators are usually called before the definition of the function you want to decorate.

Lambdas

Python lambda functions are anonymous functions, meaning they are functions without a name. Just as the def keyword is used to define a normal function in Python, the lambda keyword is used to define an anonymous function.

Iterators

An iterator is an object that contains a countable number of values and can be iterated upon, meaning that you can traverse through all of its values. Technically, in Python, an iterator is an object that implements the iterator protocol, which consists of the methods __iter__() and __next__().

Modules

Modules refer to a file containing Python statements and definitions. A file containing Python code, for example: example.py, is called a module, and its module name would be example. We use modules to break down large programs into small manageable and organized files. Furthermore, modules provide reusability of code.

Builtin Modules

The Python interpreter has a number of built-in functions that are always available for use in every interpreter session. Many of them have been discussed previously, for example print() and input() for I/O, number conversion functions (int(), float(), complex()), data type conversions (list(), tuple(), set()), etc.

Custom Modules

Modules refer to a file containing Python statements and definitions. A file containing Python code, for example: example.py, is called a module, and its module name would be example. We use modules to break down large programs into small manageable and organized files. Furthermore, modules provide reusability of code.

Version Control Systems

Version control systems allow you to track changes to your codebase/files over time. They allow you to go back to some previous version of the codebase without any issues. Also, they help in collaborating with people working on the same code – if you’ve ever collaborated with other people on a project, you might already know the frustration of copying and merging the changes from someone else into your codebase; version control systems allow you to get rid of this issue.

Git

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

Repo Hosting Services

There are different repository hosting services, the most famous ones being GitHub, GitLab and BitBucket. I would recommend creating an account on GitHub because that is where most of the open-source work is done and where most developers are.

Services Links

GitHub

GitHub is a provider of internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

GitLab

GitLab is a provider of internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

BitBucket

BitBucket is a provider of internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Package Managers

Package managers allow you to manage the dependencies (external code written by you or someone else) that your project needs to work correctly.

PyPI and Pip are the most common contenders but here are some other options available as well:

  • Poetry : Manages dependencies via isolation
  • PIPX : Isolation-based app deployment, so you don't have to affect the system or user PIP libraries. It enables you to try individual python CLI tools without affecting other dependencies.

PyPI

PyPI, typically pronounced pie-pee-eye, is a repository containing several hundred thousand packages. These range from trivial Hello, World implementations to advanced deep learning libraries.

Pip

The standard package manager for Python is pip. It allows you to install and manage packages that aren’t part of the Python standard library.

Python Frameworks

Frameworks provide ready-made implementations of common solutions, which gives users the flexibility to focus on the application logic instead of basic, routine plumbing.

Frameworks make the life of web developers easier by giving them a structure for app development. They provide common patterns in a web application that are fast, reliable and easily maintainable.

Synchronous Frameworks

Synchronous frameworks in Python handle the flow of data in a synchronous manner. On a synchronous request, you make the request and stop executing your program until you get a response from the HTTP server (or an error if the server can't be reached, or a timeout if the server is taking far too long to reply). The interpreter is blocked until the request completes, that is, until you have a definitive answer about what happened with the request: did it go well, was there an error, or did it time out?

Django

Django is a free and open-source, Python-based web framework that follows the model-template-views architectural pattern. It is maintained by the Django Software Foundation, an independent organization established in the US as a 501(c)(3) non-profit.

Flask

Flask is a micro web framework written in Python. It is classified as a microframework because it does not require particular tools or libraries. It has no database abstraction layer, form validation, or any other components where pre-existing third-party libraries provide common functions.
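
As a rough illustration (a minimal sketch, assuming Flask has been installed with pip install flask):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Flask routes map URLs to plain Python functions.
    return "Hello, World!"

if __name__ == "__main__":
    app.run(debug=True)  # starts a development server on http://127.0.0.1:5000
```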

Pyramid

Pyramid is a general-purpose, open source web application development framework built in Python. It allows Python developers to create web applications with ease. Pyramid is backed by the enterprise Knowledge Management System KARL (a George Soros project).

Asynchronous

Asynchronous programming is a type of parallel programming in which a unit of work is allowed to run separately from the primary application thread. When the work is complete, it notifies the main thread about completion or failure of the worker thread. This style is mostly concerned with the asynchronous execution of tasks. Python has several asynchronous frameworks that are used to implement asynchronous programming.

gevent

gevent is a Python library that provides a high-level interface to the event loop. It is based on non-blocking IO (libevent/libev) and lightweight greenlets. Non-blocking IO means requests waiting for network IO won't block other requests; greenlets mean we can continue to write code in synchronous style.

AIOHTTP

aiohttp is a Python 3.5+ library that provides a simple and powerful asynchronous HTTP client and server implementation.

Tornado

Tornado is a scalable, non-blocking web server and web application framework written in Python. It was developed for use by FriendFeed; the company was acquired by Facebook in 2009 and Tornado was open-sourced soon after.

Sanic

Sanic is a Python 3.7+ web server and web framework that's written to go fast. It allows the usage of the async/await syntax added in Python 3.5, which makes your code non-blocking and speedy.

Testing

A key to building software that meets requirements without defects is testing. Software testing helps developers know they are building the right software. When tests are run as part of the development process (often with continuous integration tools), they build confidence and prevent regressions in the code.

PyUnit / Unittest

PyUnit is an easy way to create unit testing programs and unit tests with Python. (Note that docs.python.org uses the name 'unittest', which is also the module name.)

pytest

pytest is a mature full-featured Python testing tool that helps you write better programs.
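
A minimal sketch of a pytest test; the file name test_math.py and the add function are hypothetical, and the test is run with the pytest command:

```python
# test_math.py
def add(a, b):
    return a + b

def test_add():
    # pytest discovers functions starting with `test_` and uses plain asserts.
    assert add(2, 3) == 5
```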

Doctest

Python’s standard library comes equipped with a test framework module called doctest. The doctest module programmatically searches Python code for pieces of text within comments that look like interactive Python sessions. Then, the module executes those sessions to confirm that the code referenced by a doctest runs as expected.
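
A small sketch of a doctest-enabled function; running the file executes the examples embedded in the docstring:

```python
def square(n):
    """Return n squared.

    >>> square(3)
    9
    >>> square(-2)
    4
    """
    return n * n

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs the interactive examples found in docstrings
```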

Nose

Nose is another open-source testing framework that extends unittest to provide a more flexible testing framework.

Learn the Basics

Learn the common concepts of Go like variables, loops, conditional statements, functions, data types, and so on. A good starting point for Go basics is Go's official documentation.

Basic Syntax

Learn about the basic syntax of Go, such as how Go programs are executed, package imports, the main function, and so on. Visit the resources listed below.
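
A minimal sketch of a Go program showing the package declaration, an import, and the main function where execution starts:

```go
// main.go
package main

import "fmt"

func main() {
	fmt.Println("Hello, World!") // program execution starts in main()
}
```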

Variables in Go

A variable is the name given to a memory location that stores a value of a specific type. Go provides multiple ways to declare and use variables.
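
A small sketch of the common declaration forms:

```go
package main

import "fmt"

func main() {
	var age int = 30    // explicit type
	var city = "Berlin" // type inferred from the value
	name := "Ada"       // short declaration, only allowed inside functions

	fmt.Println(name, age, city)
}
```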

For Loop

Go has only one looping construct, the for loop. The basic for loop has three components separated by semicolons:

Range

Range is used with for loops to iterate over each element in arrays, slices, strings, maps and other data structures.
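
A short sketch showing both the three-component for loop and range over a slice:

```go
package main

import "fmt"

func main() {
	// Classic for loop: init; condition; post.
	for i := 0; i < 3; i++ {
		fmt.Println("i =", i)
	}

	// range yields the index and value of each element.
	fruits := []string{"apple", "banana", "cherry"}
	for idx, fruit := range fruits {
		fmt.Println(idx, fruit)
	}
}
```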

Conditional Statements

Conditional statements are used to run code only if a certain condition is true. Go supports the if, else if, else and switch constructs.
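
A small sketch of if/else and switch:

```go
package main

import "fmt"

func main() {
	score := 72

	if score >= 90 {
		fmt.Println("excellent")
	} else if score >= 50 {
		fmt.Println("pass")
	} else {
		fmt.Println("fail")
	}

	// switch is often used in place of long if/else chains.
	switch {
	case score >= 90:
		fmt.Println("grade A")
	case score >= 50:
		fmt.Println("grade B")
	default:
		fmt.Println("grade C")
	}
}
```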

Errors/Panic/Recover

In lieu of adding exception handlers, the Go creators exploited Go’s ability to return multiple values. The most commonly used Go technique for issuing errors is to return the error as the last value in a return.

A panic typically means something went unexpectedly wrong. Mostly used to fail fast on errors that shouldn’t occur during normal operation, or that we aren’t prepared to handle gracefully.

Panic recovery in Go depends on a feature of the language called deferred functions. Go has the ability to guarantee the execution of a function at the moment its parent function returns. This happens regardless of whether the reason for the parent function’s return is a return statement, the end of the function block, or a panic.
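
A rough sketch pulling the three ideas together: an error returned as the last value, a panic, and recover inside a deferred function:

```go
package main

import (
	"errors"
	"fmt"
)

// divide returns an error as its last return value instead of throwing.
func divide(a, b float64) (float64, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func safeCall() {
	// The deferred function runs when safeCall returns, even after a panic.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered from panic:", r)
		}
	}()
	panic("something went unexpectedly wrong")
}

func main() {
	if result, err := divide(10, 2); err == nil {
		fmt.Println("result:", result)
	}
	if _, err := divide(1, 0); err != nil {
		fmt.Println("error:", err)
	}
	safeCall()
	fmt.Println("program continues after recover")
}
```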

Functions

Discover how functions work in Go, the list of resources below will cover :

Packages

Packages are one of the most powerful parts of the Go language. The purpose of a package is to help you design and maintain a large number of programs by grouping related features together into single units, so that they are easy to maintain and understand and independent of other packages. This modularity allows code to be shared and reused. In Go, every package has a name that reflects its functionality; for example, the "strings" package contains only methods and functions that relate to strings.

Type Casting

Go doesn't support automatic type conversion, but it allows type casting, which is the process of explicitly changing the variable type. To learn more about typecasting, visit these resources :

Type Inference

Type inference gives Go the capability to detect the type of a value without it being explicitly indicated, hence the possibility to declare a variable without providing its type up front.
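
A small sketch of type inference and explicit conversions:

```go
package main

import "fmt"

func main() {
	// Type inference: the compiler deduces the types of i and f.
	i := 42  // int
	f := 3.7 // float64

	// Explicit conversions are required; Go never converts implicitly.
	sum := float64(i) + f
	truncated := int(f)

	fmt.Println(sum, truncated) // 45.7 3
}
```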

Arrays

In Go an array is a collection of elements of the same type with a fixed size defined when the array is created.

Slices

Slices are similar to arrays but are more powerful and flexible. Like arrays, slices are also used to store multiple values of the same type in a single variable. However, unlike arrays, the length of a slice can grow and shrink as you see fit.

Maps

Maps are the data structure we use in Go whenever we want to map keys to values. They are flexible in terms of adding and removing elements. Maps do not allow duplicate keys, and their entries are kept unordered.

Make

Go's built-in function make helps us create and initialize slices, maps and channels, depending on the arguments that are provided to the function.

Structs

Structs are user-defined types that help us create a collection of data describing a single entity.

Types

Go is a statically typed programming language, which means each variable has a type defined at first and can only hold values of that type. There are two categories of types in Go: basic types and composite types.

To learn more about types in Go, visit these resources:
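
A short sketch combining the composite types above (array, slice, map created with make, and a struct):

```go
package main

import "fmt"

// point is a user-defined struct type grouping related data.
type point struct {
	X, Y int
}

func main() {
	// Array: fixed size, set when the array is created.
	arr := [3]int{1, 2, 3}

	// Slice: flexible length, can grow with append.
	nums := []int{1, 2}
	nums = append(nums, 3)

	// Map: created with make, stores key/value pairs.
	ages := make(map[string]int)
	ages["ada"] = 36

	p := point{X: 1, Y: 2}

	fmt.Println(arr, nums, ages, p)
}
```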

Go Advanced

Modules

Go modules are a group of related packages that are versioned and distributed together. They specify the requirements of our project, list all the required dependencies, and help us keep track of the specific versions of installed dependencies.

Modules are identified by a module path that is declared in the first line of the go.mod file in our project.

JSON

JSON (JavaScript Object Notation) is a simple data interchange format. Syntactically it resembles the objects and lists of JavaScript. It is most commonly used for communication between web back-ends and JavaScript programs running in the browser, but it is used in many other places, too.
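
A minimal sketch using the standard encoding/json package (errors are ignored for brevity):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type User struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

func main() {
	// Marshal a Go value into JSON bytes.
	u := User{Name: "Ada", Email: "ada@example.com"}
	data, _ := json.Marshal(u)
	fmt.Println(string(data)) // {"name":"Ada","email":"ada@example.com"}

	// Unmarshal JSON back into a Go value.
	var decoded User
	_ = json.Unmarshal([]byte(`{"name":"Grace","email":"grace@example.com"}`), &decoded)
	fmt.Println(decoded.Name) // Grace
}
```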

Type Assertions

Type assertions in Go provide access to the exact concrete type held by an interface value.

Interfaces

An interface in Go is a type that defines a set of methods. If we have a type (e.g. a struct) that implements that set of methods, then that type implements the interface.
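
A small sketch of an interface, a type that satisfies it, and a type assertion:

```go
package main

import "fmt"

// Shape is an interface: any type with an Area() method satisfies it.
type Shape interface {
	Area() float64
}

type Circle struct{ Radius float64 }

func (c Circle) Area() float64 { return 3.14159 * c.Radius * c.Radius }

func main() {
	var s Shape = Circle{Radius: 2}
	fmt.Println(s.Area())

	// Type assertion: recover the concrete type stored in the interface.
	if c, ok := s.(Circle); ok {
		fmt.Println("radius is", c.Radius)
	}
}
```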

Context

The context package provides a standard way to solve the problem of managing the state during a request. The package satisfies the need for request-scoped data and provides a standardized way to handle: Deadlines, Cancellation Signals, etc.
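
A small sketch of cancellation via a timeout, where the slow work is only simulated with time.After:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// The context is cancelled automatically after 50ms.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	select {
	case <-time.After(200 * time.Millisecond): // simulated slow work
		fmt.Println("work finished")
	case <-ctx.Done():
		fmt.Println("gave up:", ctx.Err()) // context deadline exceeded
	}
}
```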

Goroutines

Goroutines allow us to write concurrent programs in Go. Things like web servers handling thousands of requests, or a website rendering new pages while also concurrently making network requests, are a few examples of concurrency.

In Go, each of these concurrent tasks is called a goroutine.

Channels

Channels are the pipes that connect concurrent goroutines. You can send values into channels from one goroutine and receive those values into another goroutine.

Channels are a typed conduit through which you can send and receive values with the channel operator, <- .
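
A minimal sketch of a goroutine sending a value to the main goroutine over a channel:

```go
package main

import "fmt"

func main() {
	messages := make(chan string)

	// The `go` keyword starts a new goroutine running concurrently.
	go func() {
		messages <- "ping" // send a value into the channel
	}()

	msg := <-messages // receive blocks until a value arrives
	fmt.Println(msg)  // ping
}
```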

Buffer

The Buffer type belongs to the bytes package of the Go standard library, and we can use this package to efficiently build and manipulate byte slices and strings.
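
A tiny sketch of bytes.Buffer in use:

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	var b bytes.Buffer // the zero value is ready to use

	b.WriteString("hello, ")
	b.WriteString("buffer")

	fmt.Println(b.String()) // hello, buffer
}
```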

Select

The select statement lets a goroutine wait on multiple communication operations.

A select blocks until one of its cases can run, then it executes that case; it chooses one at random if multiple are ready. The select statement is just like a switch statement, but in a select statement each case refers to a communication, i.e. a send or receive operation on a channel.
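
A small sketch of select waiting on two channels fed by separate goroutines:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	c1 := make(chan string)
	c2 := make(chan string)

	go func() { time.Sleep(100 * time.Millisecond); c1 <- "one" }()
	go func() { time.Sleep(200 * time.Millisecond); c2 <- "two" }()

	// select waits until one of its cases can proceed.
	for i := 0; i < 2; i++ {
		select {
		case msg := <-c1:
			fmt.Println("received", msg)
		case msg := <-c2:
			fmt.Println("received", msg)
		}
	}
}
```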

Mutex

Go allows us to run code concurrently using goroutines. However, when concurrent processes access the same piece of data, it can lead to race conditions. Mutexes are data structures provided by the sync package. They can help us place a lock on different sections of data so that only one goroutine can access it at a time.
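
A small sketch protecting a shared counter with sync.Mutex (a WaitGroup is used only to wait for the goroutines to finish):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)

	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // only one goroutine may hold the lock at a time
			counter++ // safe access to the shared variable
			mu.Unlock()
		}()
	}

	wg.Wait()
	fmt.Println("counter:", counter) // always 100
}
```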

Building CLI Applications

Command line interfaces (CLIs), unlike graphical user interfaces (GUIs), are text-only. Cloud and infrastructure applications are primarily CLI-based due to their easy automation and remote capabilities.

Go applications are built into a single self-contained binary, which makes installing them trivial; programs written in Go run on any supported system without requiring existing libraries, runtimes, or dependencies. Go programs also start immediately, with startup times similar to C or C++ and hard to match in most interpreted or VM-based languages.

Cobra

Cobra is a library for creating powerful modern CLI applications.

Urfave cli

Urfave cli is a simple, fast, and fun package for building command line apps in Go.

ORMs

Object–relational mapping (ORM, O/RM, and O/R mapping tool) in computer science is a programming technique for converting data between type systems using object-oriented programming languages. This creates, in effect, a 'virtual object database', hence a layer of abstraction, that can be used from within the programming language.

The most common ORM library in Go is GORM.

GORM

GORM is a fantastic ORM library for Golang that aims to be developer friendly. It is an ORM library for dealing with relational databases, developed on top of the database/sql package. GORM describes itself as a (nearly) full-featured ORM.

Web Frameworks

There are several famous web frameworks for Go, the most common ones being:

  • Beego
  • Gin
  • Revel
  • Echo

Beego

Beego is used for rapid development of enterprise applications in Go, including RESTful APIs, web apps and backend services. It is inspired by Tornado, Sinatra and Flask. Beego has some Go-specific features such as interfaces and struct embedding.

Gin

Gin is a high-performance HTTP web framework written in Golang (Go). Gin has a martini-like API and claims to be up to 40 times faster. Gin allows you to build web applications and microservices in Go.

Revel

Revel organizes endpoints into Controllers. They provide easy data binding and form validation. Revel makes Go Templates simple to use at scale. Register functionality to be called before or after actions.

Echo

Echo is a performance-focused, extensible, open-source Go web application framework. It is a minimalist web framework that stands between stdlib + router and a full-stack web framework.

Gorilla

Gorilla is a web toolkit for the Go programming language that provides useful, composable packages for writing HTTP-based applications.

Gofiber

Go Fiber is an Express-inspired web framework for Golang. It is built on top of Fasthttp and can be used to handle operations such as routing/endpoints, middleware, server requests, etc.

Buffalo

Buffalo helps you to generate a web project that already has everything from front-end (JavaScript, SCSS, etc.) to the back-end (database, routing, etc.) already hooked up and ready to run. From there it provides easy APIs to build your web application quickly in Go.

Logging

Go has built-in features to make it easier for programmers to implement logging. Third parties have also built additional tools to make logging easier.

Zerolog

The zerolog package provides a fast and simple logger dedicated to JSON output.

Zerolog's API is designed to provide both a great developer experience and stunning performance. Its unique chaining API allows zerolog to write JSON (or CBOR) log events by avoiding allocations and reflection.

Zap

Blazing fast, structured, leveled logging in Go.

Apex

Structured logging package for Go.

Go Realtime Communication

Melody

Melody is a websocket framework based on github.com/gorilla/websocket that abstracts away the tedious parts of handling websockets. It gets out of your way so you can write real-time apps.

Centrifugo

Centrifugo is an open-source scalable real-time messaging server. Centrifugo can instantly deliver messages to online application users connected over supported transports (WebSocket, HTTP-streaming, SSE/EventSource, gRPC, SockJS, WebTransport). Centrifugo has the concept of a channel, so it is a user-facing PUB/SUB server.

API Clients

An API client is a library or set of tools that an application uses to communicate with an API. It lets you reuse well-tested request and response handling when developing a web application rather than reinventing the wheel every time, which is a great way to speed up the development process.

REST

REST (Representational State Transfer) is an architectural style for APIs (Application Programming Interfaces) built on top of HTTP. Clients communicate with a REST API by sending HTTP requests (GET, POST, PUT, DELETE, etc.) to resource URLs, and the server responds with representations of the requested resources, most commonly as JSON.

Heimdall

Heimdall is an HTTP client that helps your application make a large number of requests, at scale. With Heimdall, you can:

  • Use a hystrix-like circuit breaker to control failing requests
  • Add synchronous in-memory retries to each request, with the option of setting your own retrier strategy
  • Create clients with different timeouts for every request

All HTTP methods are exposed as a fluent interface.

Grequests

Grequests is a Golang implementation of the well-known Python Requests library.

Features:

  • Responses can be serialized into JSON and XML

  • Easy file uploads

  • Easy file downloads

  • Support for the following HTTP verbs GET, HEAD, POST, PUT, DELETE, PATCH, OPTIONS

  • GitHub Repository (GitHub Repository)

GraphQL

GraphQL is a query language for APIs; it offers a service that prioritizes giving clients exactly the data they request and no more.

It also means you don't need to worry as much about breaking changes, versioning and backwards compatibility the way you do with REST APIs, and a GraphQL schema largely auto-documents your API.

A GraphQL package for Go.

Gqlgen

According to their documentation, it's a Golang library for building GraphQL servers without much effort.

Testing Go Code

Go has a built-in testing command that we can use to test our program.

Microservices

Microservices are an architectural approach to software development in which a distributed application is composed of independently deployable services that communicate through well-defined APIs. They are often adopted as an alternative to monolithic architectures.

Watermill

Watermill is an event streaming library for handling asynchronous requests in go. It provides multiple sets of implementations for pub/sub. e.g: You can use conventional pub/sub implementations like Kafka or RabbitMQ, but also HTTP or MySQL binlog, if that fits your use case.

Rpcx

Rpcx is an RPC (Remote Procedure Call) framework like Alibaba Dubbo and Weibo Motan. Some of the advantages of using Rpcx:

  • Simple: easy to learn, easy to develop, easy to integrate and easy to deploy

  • Performance: high performance (>= grpc-go)

  • Cross-platform: support raw slice of bytes, JSON, Protobuf and MessagePack. Theoretically it can be used with java, php, python, c/c++, node.js, c# and other platforms

  • Service discovery and service governance: support zookeeper, etcd and consul.

  • Rpcx English Documentation (Read)

  • Rpcx Github (Official Github)

  • Rpcx Official Website (Official Website)

Go kit

Go kit is a programming toolkit for building microservices (or elegant monoliths) in Go. It solves common problems in distributed systems and application architecture so you can focus on delivering business value.

Micro

It is an API first development platform. It leverages the microservices architecture pattern and provides a set of services which act as the building blocks of a platform.

go-zero

go-zero is a web and RPC framework with many engineering best practices built in. It was created to ensure the stability of busy services through resilient design and has been serving sites with tens of millions of users for years.

Protocol Buffers

Protocol Buffers (Protobuf) is a free, open-source, language-neutral, platform-neutral, extensible data format used to serialize structured data. It's like JSON, except it's smaller and faster, and it generates native language bindings.

Some of the advantages of using protocol buffers include:

  • Compact data storage

  • Fast parsing

  • Availability in many programming languages

  • Optimized functionality through automatically-generated classes

  • Protobuf Github (Official Github)

  • Protobuf Doc (Official Website)

  • Protobuf with Go (Read)

gRPC Go

gRPC-Go is the Go language implementation of gRPC (gRPC is a technology for implementing RPC APIs).

Grpc gateway

gRPC-Gateway creates a layer over gRPC services that will act as a RESTful service to a client. It is a plugin of protoc. It reads a gRPC service definition and generates a reverse-proxy server which translates a RESTful JSON API into gRPC.

Twirp

Twirp is a framework for service-to-service communication emphasizing simplicity and minimalism. It generates routing and serialization from API definition files and lets you focus on your application's logic instead of thinking about folderol like HTTP methods and paths and JSON.

Twirp is similar to gRPC, but without the custom HTTP server and transport implementations: it runs on the standard library's extremely well-tested and high-performance net/http server. It can run on HTTP/1.1, not just HTTP/2, and supports JSON serialization for easy debugging.

Java

Java is a general-purpose language, primarily used for Internet-based applications. It was created in 1995 by James Gosling at Sun Microsystems and is one of the most popular options for backend developers.

Java Fundamentals

Java is a programming language and computing platform first released by Sun Microsystems in 1995. Java is a general-purpose, class-based, object-oriented programming language designed to have fewer implementation dependencies. It is a computing platform for application development. Java is fast, secure and reliable; therefore, it is widely used for developing applications on laptops, in data centers, game consoles, scientific supercomputers, cell phones, and more.

Learn about the fundamentals of Java such as basic syntax, data types, variables, conditionals, functions, data structures, packages, etc.

Free Resources

Books

Data Types and Variables

A variable in Java is a data container that stores data values during Java program execution. Every variable is assigned a data type, which designates the type and quantity of values it can hold. A variable is essentially the name of a memory location holding the data. Java variables come in three main kinds: local, instance and static.

Data types are divided into two groups:

  • Primitive - byte, short, int, long, float, double, boolean and char
  • Non-Primitive - String, arrays and classes
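
A small sketch showing primitive and non-primitive variables:

```java
public class Variables {
    static int counter = 42;            // static variable, belongs to the class

    public static void main(String[] args) {
        int age = 30;                   // primitive local variables
        double price = 9.99;
        boolean active = true;
        char grade = 'A';

        String name = "Ada";            // non-primitive (reference) type
        int[] numbers = {1, 2, 3};      // array, also non-primitive

        System.out.println(name + " " + age + " " + price + " " + active
                + " " + grade + " " + numbers.length + " " + counter);
    }
}
```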

Basic Syntax

Understanding the basics is the key to a solid foundation. In this section, learn the basic terminologies, naming conventions, reserved words, conditions, functions, data structures, OOP, packages, etc.

  • To print output use --> System.out.println();
  • To take input from user --> Scanner or BufferedReader class can be used

Conditionals

Java has the following conditional statements:

  • Use if to specify a block of code to be executed, if a specified condition is true
  • Use else to specify a block of code to be executed if the same condition is false
  • Use else if to specify a new condition to test; if the first condition is false
  • Use switch to specify many alternative blocks of code to be executed
  • Use ?,: operator to specify one line condition
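
A small sketch of these conditional forms:

```java
public class Conditionals {
    public static void main(String[] args) {
        int score = 72;

        if (score >= 90) {
            System.out.println("excellent");
        } else if (score >= 50) {
            System.out.println("pass");
        } else {
            System.out.println("fail");
        }

        // Ternary operator for a one-line condition.
        String label = (score >= 50) ? "pass" : "fail";
        System.out.println(label);

        // switch chooses between alternative blocks.
        switch (score / 10) {
            case 9:
                System.out.println("grade A");
                break;
            case 7:
            case 8:
                System.out.println("grade B");
                break;
            default:
                System.out.println("grade C");
        }
    }
}
```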

Functions

A method (or function) is a way to perform some task. In a programming language like Java, a method is a block of code written to perform a specific task repeatedly. It provides reusability of code: we write the method once and use it many times. It follows the 'DRY' principle, i.e. 'Don't Repeat Yourself'.

Steps -

  1. Define function - datatype function_name(parameters){body}
  2. Call function - function_name(values)
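
A minimal sketch following the two steps above (the add and greet methods are hypothetical examples):

```java
public class Functions {
    // 1. Define the function: return type, name, parameters, body.
    static int add(int a, int b) {
        return a + b;
    }

    static void greet(String name) {   // void: no return value
        System.out.println("Hello, " + name);
    }

    public static void main(String[] args) {
        // 2. Call the functions with concrete values.
        int sum = add(2, 3);
        System.out.println(sum);       // 5
        greet("Ada");
    }
}
```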

Datastructures

As the name indicates itself, a Data Structure is a way of organizing the data in the memory so that it can be used efficiently. Some common data structures are array, linked list, stack, hashtable, queue, tree, heap, and graph.

  • Array allocates continuous memory for homogeneous data
  • Linked List stores data in nodes with references
  • Stack follows Last In First Out principle
  • Queue follows First In First Out principle

OOP

Object-oriented programming is at the core of Java programming; it is used to design a program using classes and objects, bundling data together with the code that operates on it.

Packages

A package is a namespace that mainly contains classes and interfaces. For instance, the standard class ArrayList is in the package java.util. For this class, java.util.ArrayList is called its fully qualified name because this syntax has no ambiguity. Classes in different packages can have the same name. For example, you have the two classes java.util.Date and java.sql.Date, which are different. If no package is declared in a class, its package is the default package.

To compile a class into its package directory structure, use: javac -d <directory> <JavaFileName>.java

Files and APIs

Learn how to work with files i.e., reading, writing and deleting, files and folders, etc. Also, learn how to make API calls, parse the incoming response, and so on.

  • FileWriter - this class is useful to create a file by writing characters into it
  • FileReader - this class is useful to read data in form of characters from file

Loops

In Java and other programming languages, loops are used to repeat a part of the program several times. There are four types of loops in Java: for, forEach, while, and do...while.

  • Syntax of the for loop is for(initialization; condition; increment/decrement){}
  • Syntax of the forEach loop is for(data_type variable : array_name){}
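
A small sketch of the four loop types:

```java
public class Loops {
    public static void main(String[] args) {
        // for loop: initialization; condition; increment.
        for (int i = 0; i < 3; i++) {
            System.out.println("i = " + i);
        }

        // for-each loop over an array.
        int[] numbers = {10, 20, 30};
        for (int n : numbers) {
            System.out.println(n);
        }

        // while checks the condition before each iteration;
        // do...while runs the body at least once.
        int count = 0;
        while (count < 2) {
            count++;
        }
        do {
            count--;
        } while (count > 0);
        System.out.println(count);  // 0
    }
}
```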

Exception Handling

Exception Handling in Java is one of the effective means to handle the runtime errors so that the regular flow of the application can be preserved. Java Exception Handling is a mechanism to handle runtime errors such as ClassNotFoundException, IOException, SQLException, RemoteException, etc.

There are three types of exceptions -

  1. Checked Exception - exceptions checked at compile time. Example - IOException
  2. Unchecked Exception - exceptions not checked at compile time; they occur at run time. Example - NullPointerException
  3. Error - It is irrecoverable. Example - OutOfMemoryError
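
A small sketch of catching an unchecked exception with try/catch/finally:

```java
public class ExceptionDemo {
    public static void main(String[] args) {
        try {
            String text = null;
            System.out.println(text.length()); // throws NullPointerException
        } catch (NullPointerException e) {
            // Handle the unchecked exception so the program keeps running.
            System.out.println("Caught: " + e);
        } finally {
            // finally runs whether or not an exception was thrown.
            System.out.println("Cleanup always happens");
        }
    }
}
```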

Java Advanced Topics

Generics

Java Generic methods and generic classes enable programmers to specify, with a single method declaration, a set of related methods, or with a single class declaration, a set of related types, respectively.
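
A rough sketch of a generic method and a generic class (assumes Java 9+ for List.of):

```java
import java.util.List;

public class GenericsDemo {
    // A generic method: works for any element type T.
    static <T> T first(List<T> items) {
        return items.get(0);
    }

    // A generic class: Box can hold a value of any type.
    static class Box<T> {
        private final T value;
        Box(T value) { this.value = value; }
        T get() { return value; }
    }

    public static void main(String[] args) {
        String s = first(List.of("a", "b"));  // T inferred as String
        Integer n = new Box<>(42).get();      // T inferred as Integer
        System.out.println(s + " " + n);
    }
}
```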

Memory Management

In Java, memory management is the process of allocating and de-allocating objects; the JVM performs much of this work automatically through garbage collection.

Collections

The Collection in Java is a framework that provides an architecture to store and manipulate a group of objects. Java Collections can achieve all the operations that you perform on data, such as searching, sorting, insertion, manipulation, and deletion.

Serialization

Serialization is the conversion of the state of an object into a byte stream; deserialization does the opposite. Stated differently, serialization is the conversion of a Java object into a static stream (sequence) of bytes, which we can then save to a database or transfer over a network.

Streams

Java provides a new additional package in Java 8 called java.util.stream. This package consists of classes, interfaces and enum to allows functional-style operations on the elements. You can use stream by importing java.util.stream package.
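
A small sketch of the stream pipeline style (assumes Java 9+ for List.of):

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {
    public static void main(String[] args) {
        List<String> names = List.of("Ada", "Grace", "Alan", "Edsger");

        // Filter, transform and collect in a functional style.
        List<String> aNames = names.stream()
                .filter(n -> n.startsWith("A"))
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(aNames); // [ADA, ALAN]
    }
}
```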

JVM

The Java Virtual Machine is a program whose purpose is to execute other programs. JVMs are available for many hardware and software platforms (i.e. the JVM itself is platform dependent). The JVM is what actually calls the main method present in Java code. The JVM is a part of the JRE (Java Runtime Environment).

Garbage Collection

Java garbage collection is the process by which Java programs perform automatic memory management. Java programs compile to bytecode that can be run on a Java Virtual Machine, or JVM for short. When Java programs run on the JVM, objects are created on the heap, which is a portion of memory dedicated to the program

Threads

A thread in Java is the direction or path that is taken while a program is being executed. Generally, all programs have at least one thread, known as the main thread, which is provided by the JVM (Java Virtual Machine) at the start of the program's execution.

Build Tools

A build tool is a program or command-line utility that automates the process of compiling, assembling, and deploying software.

Build tools are not only limited to compiling code; they can also help with package management, dependency handling, and continuous integration systems.

Gradle

Gradle is an open-source build automation tool that helps software engineers to test, build, and release high-performance software products. In addition, Gradle also supports multi-language development. Currently, the supported languages for Gradle include Java, Kotlin, Groovy, Scala, C/C++, and JavaScript.

Reference Resource

Maven

Maven is an open-source build tool, used primarily for Java projects.

Ant

Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications. Ant supplies a number of built-in tasks allowing to compile, assemble, test and run Java applications. Ant can also be used effectively to build non Java applications, for instance C or C++ applications. More generally, Ant can be used to pilot any type of process which can be described in terms of targets and tasks.

Reference Resource

Web Frameworks

Frameworks are tools with pre-written code, that act as a template or skeleton, which can be reused to create an application by simply filling with your code as needed which enables developers to program their application with no overhead of creating each line of code again and again from scratch.

Spring

Spring is a powerful open-source Java platform (framework), that is used to create and maintain web applications.

Spring Boot

Spring Boot is an open source, microservice-based Java web framework. The Spring Boot framework creates a fully production-ready environment that is completely configurable using its prebuilt code within its codebase. The microservice architecture provides developers with a fully enclosed application, including embedded application servers.

Play Framework

Play Framework is a high-productivity web application framework that follows the model-view-controller pattern. It is written in Scala but can also be used with other programming languages that are compiled and run on the JVM, e.g. Java.

Reference Resource

Spark

Spark is a micro framework for creating web applications in Kotlin and Java 8. Sinatra, a popular Ruby micro framework, was the inspiration for it.

Reference Resource

ORM

ORM is a programming technique to map objects in Java to relational entities in a database; in other words, it converts data between relational databases and object-oriented programming languages. Some popular ORM tools/frameworks in Java are:

JPA

The Jakarta Persistence API provides Java developers with an object/relational mapping facility for managing relational data in Java applications. JPA is not a tool nor a framework, but a set of interfaces for accessing, persisting, and managing data between Java objects and a relational database. Because it is a set of interfaces, it requires an ORM implementation to work with and persist Java objects. Here are the main features of JPA:

  • Cleaner, easier, standardized ORM.
  • Supports inheritance, polymorphism, and polymorphic queries.
  • Supports metadata annotations/XML descriptors to define the mapping (between objects and relational database).
  • Supports a rich, SQL-like query language for static and dynamic queries.
  • Pluggable persistence providers like Hibernate, MyBatis, etc.
  • Caching: JPA supports 2 kinds of cache - first and second levels - to support performance tuning.
  • Read more here.

Note: In 2019, JPA was renamed from Java Persistence API to Jakarta Persistence.

Spring data jpa

Spring Data JPA aims to significantly improve the implementation of data access layers by reducing the effort to the amount that's actually needed. As a developer you write your repository interfaces, including custom finder methods, and Spring will provide the implementation automatically.

Reference Resource

Hibernate

Hibernate is an open source object-relational mapping (ORM) tool that provides a framework to map object-oriented domain models to relational databases for web applications.

Ebean

Ebean is an object-relational mapping tool written in Java. It supports the standard JPA annotations for declaring entities. However, it provides a much simpler API for persisting. In fact, one of the points worth mentioning about the Ebean architecture is that it is sessionless, meaning it does not fully manage entities.

Logging

Logging is an important feature that helps developers trace errors. It provides the ability to capture a log file. Logging provides complete tracing information for the application and records any critical failures that occur. There are three components of logging: loggers, logging handlers or appenders, and layouts or logging formatters.

Reference Resource

Apache Log4j is a Java-based logging utility. Log4j Java library's role is to log information that helps applications run smoothly, determine what's happening, and help with the debugging process when errors occur. Logging libraries typically write down messages to the log file or a database.

Log4j2 is the updated version of the popular and influential Log4j library, used extensively throughout the Java ecosystem for many years. Version 2.x keeps all the logging features of its predecessor and builds on that foundation with some significant improvements, especially in the area of performance.

Reference Resource

Logback

Logback is one of the most widely used logging frameworks in the Java Community. It's a replacement for its predecessor, Log4j. Logback offers a faster implementation, provides more options for configuration, and more flexibility in archiving old log files.

Reference Resource

Slf4j

The SLF4J or the Simple Logging Facade for Java is an abstraction layer for various Java logging frameworks, like Log4j 2 or Logback. This allows for plugging different logging frameworks at deployment time without the need for code changes.

Reference Resource

Tinylog

Tinylog is a lightweight open-source logging framework for Java and Android, optimized for ease of use.

Reference Resource

Java JDBC

JDBC is an API (Application Programming Interface) used in Java programming to interact with databases. The classes and interfaces of JDBC allow the application to send requests made by users to the specified database.

Jdbi3

Jdbi is an open source Java library (Apache license) that uses lambda expressions and reflection to provide a friendlier, higher level interface than JDBC to access the database.

JDBC Template

JdbcTemplate is a central class in the JDBC core package that simplifies the use of JDBC and helps to avoid common errors. It internally uses the JDBC API and eliminates many of its problems. It executes SQL queries or updates, initiates iteration over ResultSets, catches JDBC exceptions and translates them to the generic, more informative exception hierarchy defined in the org.springframework.dao package. It executes the core JDBC workflow, leaving application code to provide SQL and extract results.

Testing

A key to building software that meets requirements without defects is testing. Software testing helps developers know they are building the right software. When tests are run as part of the development process (often with continuous integration tools), they build confidence and prevent regressions in the code.

Mocking

Mocking removes external dependencies from a unit test to create a sense of an entire controlled environment. The traditional method of mocks involves mocking all other classes that interact with the class we want to test. The common targets for mocking are:

  • Database connections
  • Web services
  • Slow Classes
  • Classes with side effects
  • Classes with non-deterministic behavior

Reference Resource

Cucumber JVM

Cucumber is a testing tool that supports Behavior Driven Development (BDD). It offers a way to write tests that anybody can understand, regardless of their technical knowledge.

Cukes

cukes-rest takes the simplicity of Cucumber and provides bindings for the HTTP specification. As sugar on top, cukes-rest adds steps for storing and using request/response content from a file system, variable support in .feature files, context inflation in all steps, and a custom plug-in system that allows users to add additional project-specific content.

Reference Resource

Jbehave

JBehave is a framework for Behaviour-Driven Development (BDD). BDD is an evolution of test-driven development (TDD) and acceptance-test driven design, and is intended to make these practices more accessible and intuitive to newcomers and experts alike. It shifts the vocabulary from being test-based to behaviour-based, and positions itself as a design philosophy.

JUnit

JUnit is a testing framework for Java.

Testng

TestNG is a testing framework inspired by JUnit and NUnit but introducing some new functionalities that make it more powerful and easier to use.

REST Assured

Testing and validating REST services in Java is harder than in dynamic languages such as Ruby and Groovy. REST Assured brings the simplicity of using these languages into the Java domain.

JMeter

Apache JMeter is an Apache project that can be used as a load testing tool for analyzing and measuring the performance of a variety of services, with a focus on web applications.

Reference Resource

Design System Basics

A design system is a set of standards to manage design at scale by reducing redundancy while creating a shared language and visual consistency across different pages and channels.

What is Design System

A Design System is the single source of truth which groups all the elements that will allow the teams to design, realize and develop a product.

Need of design system

Having a solid design system speeds up your work by making the product team more efficient, and it creates consistency and harmony within the product and brand ecosystem. A strong design system takes the burden off individual designers to think through commonly recurring design problems. With a full library of pre-approved elements, designers can focus on bigger problems like creating seamless, intuitive flows that delight users. That kind of efficiency pays huge dividends over time.

Design System vs Component Library

A component library is just a collection of visuals i.e. colours, button stylings, fonts, etc. A Design System takes it to the next level by including standards and documentation around the look and usage of each component. The Design System acts as the single-source of truth.

Atomic Design

Atomic design (by Brad Frost) is a mental model to help you think of user interfaces as a cohesive whole and a collection of parts at the same time. Through the comparison to atoms, molecules, and organisms, we can think of the design of our UI as a composition of self-containing modules put together.

Stakeholders

Building an effective design system is not an individual responsibility, you need more than just designers. Here’s a quick list of the disciplines that can be represented in your team to create an effective design system:

  • Designers: to define the visual elements of the system

  • Frontend Developers: To create modular efficient code

  • Accessibility Experts: Accessibility experts to ensure your system conforms to standards like WCAG

  • Performance Experts: who can ensure your system loads quickly on all devices

  • Content Strategists: who can help the team nail the voice and tone of the system

  • Researchers: who can help you understand customer needs

  • Product Managers: to ensure the system is aligning to customer needs

  • Leaders: (VPs and directors) to champion and align the vision throughout the company including up to executive leadership

  • Designing the Design System (Read)

Design System Examples

Terminology

Design systems can be tricky if you don’t know what certain words mean. Have a look at the roadmap nodes as well as follow the link below to read the glossary.

Component

Components are the reusable building blocks of a design system. Each component meets a specific interaction or UI needs, and is specifically created to work together to create patterns and intuitive user experiences.

Component Library

A component library is a collection of all the components used in a website, software or app. Some of the common tools to showcase and browse components in a component library are given below:

Design Language

A design language or design vocabulary is an overarching scheme or style that guides the design of a complement of products or architectural settings, creating a coherent design system for styling.

Governance

Governance is a framework for clarifying roles, responsibilities, and authority over decisions. Having that clarity ensures that decisions for the design system funnel smoothly through the governance process

Guidelines

Design guidelines are sets of recommendations on how to apply design principles to provide a positive user experience. Designers use such guidelines to judge how to adopt principles such as intuitiveness, learnability, efficiency and consistency so they can create compelling designs and meet and exceed user needs.

Pattern

Patterns are best practice design solutions for specific user-focused tasks and page types. Patterns often use one or more components and explain how to adapt them to the context. Some sample patterns could be user signing in to the application or performing the checkout operation.

Pilot

Pilots are one of the best ways to put your design system through its paces, especially before the design system even gets to a v1. Like television pilots help test audience reactions to a series concept without investing significant resources to create the whole thing, application pilots are a good foundation for ensuring your design system’s design and code are battle-tested.

Token

Design system tokens are the style values of UI elements such as color, typography, spacing, shadows, etc., that are used across products and capable of being converted to a format for any platform (web, mobile, desktop). Tokens are building blocks of the design system—think of them as sub atoms, the smallest pieces of style values that allow designers to create styles for a product.

UI Kit

As it relates to a design system, a UI Kit is a representation of coded components created in a way that designers who don’t know code can create interface mockups. Examples of UI kits are Sketch libraries and Figma design systems.

Making a Design System

First step in building a design system is identifying if you even need a design system.

From Scratch

If you are building a Design System from Scratch, you may skip the 'Existing Design Analysis' node of the roadmap and start with 'Creating Design Language'.

From Existing Design

If you are creating a Design System from pre-existing product design, there is an additional step to perform the existing design analysis, understand the existing design process, perform a visual audit, identify design elements and components and so on.

Existing Design Analysis

First step in creating a design system from an existing design is performing a design analysis and understanding what you will be working with to identify the requirements and prepare a plan. Performing the analysis may consist of:

  • Understanding the Existing Design Process
  • Performing Visual Audit
  • Identifying Design Elements
  • Identify Common Components
  • Understanding the A/B Testing and Experimentation Needs
  • Understanding any Locale or regional requirements (such as LTR/RTL).
  • Documenting your findings

Existing Design Process

To better understand the kind of design system you would like to implement, you need to start by reviewing and analyzing the current approach for design at your company. Find the answers to the following questions:

  • What is the design process that your company follows?
  • What are the existing tools that your company uses?

It’s also recommended to evaluate the level of design maturity of the product teams. This knowledge will help you estimate the time required to introduce the system to your organization.

Visual Audit

Take screenshots of your current product with the help of your team. You can use any presentation software like Google Slides or print and pin them on foam-core boards. Group the screenshots into categories like buttons, navigation, forms, tables, charts, lists etc.

Now, review each category to find inconsistencies and note areas for improvement with your team. Use a tool like CSS Stats to see how many unique colors, typefaces you have in your style sheets.

Identify Design Elements

Use the results of visual audit and prepare a comprehensive list of design elements such as Colors, Typography, Sizes, Spaces, Grid, Layouts etc. These elements will be the building blocks of your components.

Identify Components

Components of the application are created using a composition of design elements gathered in the previous step. Identify the list of components required for the application, which could include buttons, dropdowns, carousels, tabs, icons, alerts, toasts etc. Also, make sure to keep track of the different states of these components as well as different variants and actions.

A/B Tests and Experiments

Understand how the team implements A/B tests and experiments on different screens and if the new design system should accommodate any necessary requirements.

Regional Requirements

Understand any regional requirements such as LTR or any other UX variations that your design system should accommodate.

Documentation

Organize and document the results of visual audit, design elements, components with variations, states, patterns found, any existing documentation, current design process, and considerations. This documentation will be shared across the team and act as a guide when building the new design system.

Design Language

Like any language, a design language is a methodical way of communicating with your audience through your approach to product design. It’s the cornerstone of consistent customer experiences.

Brand

Brand drives every single decision you make when building new products or features. A good brand is much more than a name and a logo. It’s the values that define your unique identity and what makes you stand out from others.

Vision

Identify why you exist, what your values are and how they’ll help guide the future of your product.

Design Principles

The considerations that guide the basis of your practice. They outline how you approach design from a philosophical perspective and help with everyday decisions.

Terminology

Create the standard terms and phrases that need to be kept the same throughout the user experience, speeding up the design process and unifying your voice.

Tone of Voice

A clear tone of voice defines how you speak to your audience at every moment in their journey, helping them get wherever they want to go.

Writing Guidelines

Every consistent experience needs watertight writing. Laying down the foundations for your house style early keeps everything in line with consistent grammar, style choices and action-oriented language to help your design.

Guidelines

Providing guidance on how to approach common UX patterns will allow your organisation to establish a consistent approach and a consistent user experience on any platform.

Accessibility

Guidelines for how you approach accessibility and how you leverage colour, hierarchy and assistive technologies to help your users.

User Onboarding

How you onboard your users to your product or a new feature and give them a great experience from the start.

Microcopy Guidelines

The standard way to write for the components in your design system. These take platform conventions and best practices for writing all into consideration.

Logo

Most customers form an opinion about a product in seconds. In most cases, your logo will be the first brand asset someone sees. It’s all about making the right first impression. A distinctive logo helps users recognise a product immediately and gives them the essence of your branding.

Monochrome Version

A monochrome version of your logo that looks good on top of photography or when it’s printed with a poor quality printer.

Small Use Guidance

Your logo must perform well and be recognisable at all sizes. Tips for using your logo in these cases will minimise the risk of it being misused.

Placement and Clearance Guidance

Your logo must come with clear guidance on how to place it and how to preserve its space since it lives along with other content.

Usage Guidance

These are the logo crimes, providing contextual examples of what to (not) do with your logo.

Different File Formats

Providing a variety of formats for the vector version of your logo will make it easier for others to work and prevent anyone from redrawing it.

Design Tokens

Variables that store values for the base layer of your design system, like colour and typography. They’re used in components, so changes on this level will resonate throughout the whole system.

Layout

A well thought out layout goes a long way. Consistent use of a grid and spacing makes it easier for your users to scan the user interface and grasp the content.

Spacing

Horizontal and vertical rhythm plays a big role in a layout. You should provide easy methods for adding space between interface elements independent of your grid.

Breakpoints

Predefine the screen sizes and orientations your grid will adapt to.

Grid

Every layout should sit on a grid that brings order and hierarchy to the interface. Define a grid separately for mobile, tablet and desktop devices with columns, gutters, and margins so your interface can adapt to any platform easily.

Units

Units are the most granular building blocks for layout. Defining a set of values with consistent increments (such as 4, 8, 12 and 16 for a 4-point system) will provide you with the foundation when you’re designing your grid and spacing values.

Color

Not only an efficient way to showcase your brand, but also an efficient way to communicate with your users. Colour palettes created with purpose over aesthetics in mind can help you create intuitive design patterns by adding meaning to your interface.

Guidelines

Provide guidelines on how and when to use the colours in your palette, what to keep in mind when working with them and how not to use them.

Dark Mode

Preparing a dark mode version of your colour palette will allow your design system to adapt to dark mode and respect what your user wants to see.

Functional Colors

Besides your brand colours, make sure to have colours defined and made into variables for functions like disabled states, backgrounds, actions and high contrast text.

Accessibility

Make sure to have accessible pairings between the main colours in your palette. More importantly, make sure that your background and text colours have at least an AA standard contrast ratio between them.

Iconography

Icons are symbols that represent functionality or content. They’re especially recognisable and helpful in user interfaces since their meaning can be understood at a glance. Though they can be used just for decoration, their full potential is realised when they’re used meaningfully and consistently.

Accessibility

For icons that convey a meaning or serve a function, add the necessary support for screen readers. You can skip this for decorative icons.

Style

Make sure that your icon family makes visual sense as a whole. Picking an outlined or filled style and sticking with it will lead to better visual consistency and predictability.

Naming

Name your icons based on what they are, not what they represent. For instance, a trash icon should be named trash, not delete. You can still add related keywords to improve discoverability.

Grid Relation

Draw your icons in a bounding box that plays well with your grid. This makes for a better pairing with other UI elements. A good example of this would be icons with bounding boxes paired with text.

Sizes

Provide different sizes for icons that correlate to your grid. Provide a minimum size and remove unnecessary detail for your icons for smaller sizes.

Keywords

Adding keywords will improve the discoverability of each icon and provide a better user experience for anyone using your system.

Reserved Icons

Reserving icons that represent common actions will prevent their use in any other context. System icons for navigation or adding and deleting are a good example. This leads to a more intuitive user experience.

Guidelines

Provide guidelines on how and when to use icons, what to keep in mind when working with them and how not to use them.

Typography

Typography is one of the main ways you surface content in products. A clear hierarchy and contrasting styles in your typography scale will make things easier to read and help with the overall structure of your product. It’s also an opportunity to visualise your brand character and presence.

Responsiveness

Desktop devices can usually afford to have bigger font sizes compared to mobile devices. Creating a typography scale that adapts to the viewport size will help with a more meaningful hierarchy and layout.

Grid Relation

Font sizes and leading should match your grid to allow better pairing between text and other UI elements. A good example of this is text paired with icons with bounding boxes.

Readability

Optimising the letter spacing (tracking), line height (leading) and line length for your typography scale will help with the readability of text.

Performance

Custom fonts need to be downloaded before they can be displayed, especially on the web. Make sure that you have sensible fallbacks and fast loading time for your typography assets. Using system fonts solves this performance problem.

Guidelines

Provide guidelines on how and when to use the pairings in your typography scale, what to keep in mind when working with them and how not to use them.

Core Components

Components are the main building blocks for user interfaces. Building a reusable component library enhances your product development workflow by reducing design and tech debt and speeding up the process. Core components can’t be broken down into granular pieces without losing their meaning.

Avatar

Avatars are used to show a thumbnail of a user photo or a visual representation of any other type of content.

  • Image: Avatars should mask an image into their shape and work with any image size since they may get this image from unknown data sources.
  • Image Fallback: There should be fallbacks when there’s no image available. This can be done with placeholder images or initials.
  • Accessibility: Always provide a description for screen readers describing what’s displayed on the avatar image instead of just naming its role.
  • Sizes: There are many contexts to use avatars and they all require different sizes for the component. For average projects use at least 2-3 different sizes and make sure there’s at least a small size available.
  • Icon: Avatars can be used with an icon instead of an image to emphasize areas that don’t necessarily have (or need) an image associated with it.
  • Background Colors: When used with icons or text, there has to be a background colour from the design system colour tokens applied to the avatar shape. Make sure that icons and text have enough contrast ratio with the background according to the WCAG AA standard.

Banner

Banners display an actionable message used as a prominent way of communicating with your users.

  • Appearance: Banners are used to display different types of messages and it’s important to differentiate their visual appearance based on the role they’re playing. If you’re using background colours for role differentiation, make sure there’s enough contrast ratio with the content according to the WCAG AA standard.
  • Area for icons or images: Banners can supplement their message using a supporting icon or image. They shouldn’t be used instead of text content.
  • Actions: Actions in banners should relate to its text and provide a way to react to the message sent to the user.
  • Dismissible Action: Don’t overwhelm the user with banners on the page and include a dismissible action. That may be either a separate close button or one of the actions provided.
  • Accessibility: If a banner dynamically appears on the page, it should be announced to the user by their assistive technology.
  • Responsiveness: Banners should adapt to the viewport size. This usually means that they become full-width for mobile to save some space.

Badge

Badges are elements that represent the status of an object or user input value.

  • Appearance: Badges may play various roles in your product and having a predefined colour for each role should help users understand their meaning. When changing colours, make sure the text has enough contrast ratio with the background according to the WCAG AA standard.
  • Dismissible Action: Badges can be used as a dynamic way to display selected values and there should be a way to dismiss them.

Button

Buttons are interactive elements used for single-step actions.

  • Hover State: Clearly show that the button is interactive when it gets hovered with a mouse cursor.
  • Active State: Used when a button gets pressed. The same state can be used to represent the button responsible for toggling another element on the page while that element is visibly opened.
  • Focused State: Used when a button gets selected through keyboard navigation.
  • Icon Support: Icons easily communicate the purpose of the button when used next to its label or can be used without text when there’s not enough space. Make sure that the accessibility label is provided when used with an icon only.
  • Disabled: Visually shows that a button is not interactive and restricts it from being pressed.
  • Loading: Used when users have to wait for the result of their action after they press a button. If a spinner is used to display this state make sure that it’s not changing the original button width or height.
  • Full Width: By default buttons take the width of their content, but they should also come with a full-width variant that works well on mobile devices.
  • Variants: When using multiple buttons, there should be a way to differentiate between primary and secondary actions. Buttons may play different roles for the user or be used on different types of surfaces and they have to change the way they look.
  • Sizes: Buttons can be used in different areas of the website and may have multiple predefined sizes. On mobile, tappable areas have to be a minimum of 48px to be accessible according to iOS and Android accessibility guidelines.

Card

Cards are used to group information about subjects and their related actions.

  • Supports any type of content: Cards are one of the most used components in the product, so they have to be flexible enough to support any other components placed in them.
  • Information structure: No matter how flexible cards are, it’s important for cards to have a specific structure for their elements for product consistency.
  • Supports media sections: One of the most popular scenarios for using cards is mixing them with media content. The most popular options are having a full-width area on top of the content or full-height area at one of the card’s sides.
  • Supplementary actions: Cards can be used with actions usually placed at the bottom of the card, or the card itself can be tappable and represent an action.
  • Responsiveness: On mobile viewports cards are usually full-width in order to save space for the content.

Carousel

Carousels stack the same type of items and allow scrolling through them horizontally.

  • Navigation Controls: Carousels should have easy-to-find navigation controls for scrolling through content.
  • Supports any content: Carousels can be used in different contexts and shouldn’t be limited to a specific child component. In some scenarios you might want items within the same carousel to differ from each other.
  • Items width customisation: For simple products, it might be fine to use multiple predefined sizes for carousel items. For more flexibility, it’s good to provide a way to define a custom width.
  • Touch events support: Carousels should be scrollable on touch devices. Some of the best practices are to use native scrolling and to make sure you’re supporting the same behaviour for all touch devices, not just mobile phones.
  • Keyboard navigation: It should be possible to scroll through content with keyboard arrows when focused on navigation controls.
  • Responsiveness: It’s good practice to hide or reduce the size of navigation controls for mobile viewports to improve the visibility of the content.

Dropdown

Dropdowns are used to display a contextual subview with a list of actions or content related to the area where the dropdown is.

  • Supports any type of content: Dropdowns may be used in a lot of contexts like date pickers, language selection or other product features.
  • Action Menu: One of the most used scenarios for dropdowns is providing an action menu for the user, so it’s useful to have this layout defined.
  • Focus Trapping: Once the dropdown’s opened, the focus should work only for elements inside the dropdown. When it’s closed, the focus should move to the dropdown trigger.
  • Close Action: Either some actions inside the dropdown should close it or there should be a separate close button. Also, it’s good practice to close the dropdown when a user clicks outside (see the sketch after this list).
  • Keyboard Navigation: It should be possible to navigate through dropdown children elements with the keyboard and close it with an Esc key.
  • Dynamic Position: Dropdown content should be displayed based on the current position of the trigger element on the screen and always visible to the user.
  • Responsiveness: Dropdown content should be adapted for mobile viewports as it may take a lot of space on desktops.
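
A minimal sketch of the close behaviour (Esc key and outside click) using the plain DOM API; the `registerDropdownClose` helper and the `close` callback are illustrative:

```typescript
function registerDropdownClose(dropdown: HTMLElement, close: () => void): () => void {
  function onKeydown(event: KeyboardEvent): void {
    if (event.key === "Escape") close(); // close with the Esc key
  }
  function onClick(event: MouseEvent): void {
    // Close when the user clicks outside of the dropdown container.
    if (!dropdown.contains(event.target as Node)) close();
  }
  document.addEventListener("keydown", onKeydown);
  document.addEventListener("click", onClick);
  // Return a cleanup function to remove the listeners when the dropdown unmounts.
  return () => {
    document.removeEventListener("keydown", onKeydown);
    document.removeEventListener("click", onClick);
  };
}
```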

Icon

The icon component standardises the way iconography static assets are displayed in the product.

  • Sizes: Icons should have a number of predefined sizes to provide a holistic experience across the product. Typography pairings may be used for these size values to ensure that they are aligned with the text sizes.
  • Colors: Icons should be using values from the design system colour palette. Using parent element text colour for icon fill colour can make this automatic.

Input Checkbox

An input checkbox is a form element used for selecting one or multiple options.

  • Checked State: Used when the checkbox is selected and will use its value for the form submission.
  • Disabled State: Prevents checkbox interactions and removes its value from the form submission.
  • Indeterminate State: Used when the checkbox has children selectable elements and only some of them are selected (see the sketch after this list).
  • Label: There should be a text label linked with the checkbox field. Clicking the label should also trigger the checkbox selection.
  • Error State: The error state is used for form validation errors when the error is related to the checkbox field only. Always use a text error along with changing the colour of the field.
  • Keyboard State: Checkbox selections should be triggered with the Space key. Using native elements for this should provide this kind of interaction out of the box.
  • Checkbox Group: Checkboxes can be grouped to work with multiple values at the same time.
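
The indeterminate state can’t be set from HTML; it’s only available as a DOM property. A minimal sketch for a “select all” parent checkbox (the element ids and class names are illustrative):

```typescript
// A parent "select all" checkbox reflecting the state of its children.
const parent = document.querySelector<HTMLInputElement>("#select-all")!;
const children = Array.from(
  document.querySelectorAll<HTMLInputElement>(".row-checkbox")
);

function syncParent(): void {
  const checkedCount = children.filter((c) => c.checked).length;
  parent.checked = checkedCount === children.length;
  // Indeterminate when only some of the children are selected.
  parent.indeterminate = checkedCount > 0 && checkedCount < children.length;
}

children.forEach((c) => c.addEventListener("change", syncParent));
```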

Input Radio

An input radio is a form element used for selecting one option from a list.

  • Checked State: Used when the radio is selected and will use its value for the form submission. A radio input can’t be unselected by pressing it again.
  • Disabled State: Prevents radio interactions and removes its value from the form submission.
  • Label: There should be a text label linked with the radio field. Clicking the label should also trigger the radio selection.
  • Error State: The error state is used for form validation errors when the error is related to the radio field only. Always use a text error along with changing the colour of the field.
  • Keyboard State: A radio selection should be triggered when the Space key is pressed. Using native elements for this should provide this kind of interaction out of the box.
  • Radio Group: Radio inputs should always be used in a group. If one of them is selected, it can be deselected only by choosing another radio.

Input Text

Input text lets users enter and edit text.

  • Disabled State: Prevents input interactions and removes its value from the form submission.
  • Placeholder: When there’s no value entered, show a placeholder with a potential value example. Don’t use placeholders as labels for the inputs.
  • Label: There should be a text label linked with the text field. Clicking the label should move the focus to the field.
  • Error State: The error state is used for form validation errors when the error is related to the text field only. Always use a text error along with changing the colour of the field.
  • Focused State: The focused state should highlight the text field when users start to interact with it. There is always only one focused field in the form.
  • Autocomplete: When applicable, adding support for the HTML autocomplete attribute will allow users to easily enter different data types.
  • Icon Support: Icons are used to describe input methods, express a text field state or provide additional functionality.

Input Switch

Input switches toggle the state of a single item. Compared to the input checkbox, their changes usually apply without any additional submission.

  • Checked State: Used when an input switch is turned on. It’s better to provide an additional way to indicate the checked state besides changing its colour when applicable.
  • Disabled State: Prevents interacting with an input switch.
  • Label: There should be a text label linked with the switch field. Clicking the label should also trigger the input selection.
  • Keyboard State: A switch selection should be triggered when the Space key is pressed.

Select

Select lets users select a value from a list of values in a form context:

  • Disabled State: Prevents input interactions and removes its value from the form submission.
  • Placeholder: When there’s no value entered, show a placeholder with a potential value example. Don’t use placeholders as labels for the inputs.
  • Label: There should be a text label linked with the text field. Clicking the label should move the focus to the field.
  • Error State: The error state is used for form validation errors when the error is related to the text field only. Always use a text error along with changing the colour of the field.
  • Focused State: The focused state should highlight the text field when users start to interact with it. There is always only one focused field in the form.
  • Autocomplete: When applicable, adding support for the HTML autocomplete attribute will allow users to easily enter different data types.
  • Icon Support: Icons are used to describe input methods, express a text field state or provide additional functionality.

Textarea

Text area lets users enter and edit text.

  • Disabled State: Prevents input interactions and removes its value from the form submission.
  • Placeholder: When there’s no value entered, show a placeholder with a potential value example. Don’t use placeholders as labels for the inputs.
  • Label: There should be a text label linked with the text field. Clicking the label should move the focus to the field.
  • Error State: The error state is used for form validation errors when the error is related to the text field only. Always use a text error along with changing the colour of the field.
  • Focused State: The focused state should highlight the text field when users start to interact with it. There is always only one focused field in the form.

List

Lists define the layout of the page content or groups of elements by stacking them vertically or horizontally.

  • Support any type of content: Lists can be used in any context from page-level layout to managing offsets between granular components. They should work with any component used inside.
  • Horizontal Stacking: Lists can be used for inline elements and they have to manage how they’re stacked horizontally, including handling offsets between multiple rows of elements.
  • Divided Variant: Lists with dividers are the best practice advised by many platform guidelines (especially on mobile).
  • Supports Actionable Content: Sometimes lists are used for grouping tappable components, where the whole area of the list item should be clickable.

Loading Indicator

The loading indicator shows that an operation’s being performed and how long the process will take.

  • Linear and non-linear Variants: Depending on the context and the component it’s used for, the loading indicator can be represented either with a linear or a non-linear (e.g. circular) variant.
  • Determinate or indeterminate wait time: In some cases, the wait time can’t be determined. The loading indicator should be shown until the loading finishes or an error happens. In other cases, it’s better to indicate how much time’s left until the loading is done.
  • Light Variant: The loading indicator should respect its parent element background and provide a variant to be used on darker background colours.
  • Reduced Motion: The loading indicator should be synced with the system motion settings and reduce its animation speed when reduced motion settings are turned on.
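
On the web, one common way to honour the reduced-motion requirement is the `prefers-reduced-motion` media query; a minimal sketch (the `.spinner` selector and CSS class are illustrative):

```typescript
// Slow down or stop the spinner animation when the user prefers reduced motion.
const prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)");

function applyMotionPreference(spinner: HTMLElement): void {
  spinner.classList.toggle("spinner--static", prefersReducedMotion.matches);
}

const spinner = document.querySelector<HTMLElement>(".spinner");
if (spinner) {
  applyMotionPreference(spinner);
  // React to the setting changing while the page is open.
  prefersReducedMotion.addEventListener("change", () => applyMotionPreference(spinner));
}
```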

Modal

Modals are containers appearing in front of the main content to provide critical information or an actionable piece of content.

  • Supports any type of Content: Like any other container, modals can be used in different scenarios and you should be able to use them with any other component inside.
  • Supplementary Actions: Since content in the modal may be actionable, it’s important to have an area for action elements. This area is usually located at the bottom of the modal container.
  • Close Action: Modals should provide a clear way to be closed as they’re blocking content when open. This may be either a separate close button or one of the supplementary actions.
  • Information Structure: Even though modals can be used as an empty container for the content, they need a defined information structure to provide a holistic experience. It may include defining how titles and subtitles look by default or where an action element’s area is.
  • Keyboard Navigation Support: It should be possible to close a modal by pressing the Esc key and all the focusable elements inside the modal container should be accessible with keyboard navigation.
  • Focus Trapping: Once a modal is opened, the focus should be moved to the first element inside the modal and should be looped within the modal container. Closing the modal should return the focus to the last focused element on the page.
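
A minimal focus-trapping sketch using the plain DOM API; the selector list is simplified and a production implementation would handle more cases (hidden elements, dynamic content, etc.):

```typescript
const FOCUSABLE =
  'a[href], button:not([disabled]), input, select, textarea, [tabindex]:not([tabindex="-1"])';

function trapFocus(modal: HTMLElement): () => void {
  const previouslyFocused = document.activeElement as HTMLElement | null;
  const focusable = Array.from(modal.querySelectorAll<HTMLElement>(FOCUSABLE));
  focusable[0]?.focus(); // move focus to the first element inside the modal

  function onKeydown(event: KeyboardEvent): void {
    if (event.key !== "Tab" || focusable.length === 0) return;
    const first = focusable[0];
    const last = focusable[focusable.length - 1];
    // Loop focus within the modal container.
    if (event.shiftKey && document.activeElement === first) {
      event.preventDefault();
      last.focus();
    } else if (!event.shiftKey && document.activeElement === last) {
      event.preventDefault();
      first.focus();
    }
  }

  modal.addEventListener("keydown", onKeydown);
  // Return a cleanup function: restore focus to the trigger when the modal closes.
  return () => {
    modal.removeEventListener("keydown", onKeydown);
    previouslyFocused?.focus();
  };
}
```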

Tabs

Tabs organise navigation between multiple pages or content sections.

  • Active Button State: There should be a clear differentiation between selected and unselected tab buttons.
  • Button Icon Support: Icons help show the purpose of the tab buttons when used next to their labels.
  • Equally-sized tab buttons: Tabs can be used in a relatively small-sized container where you need to switch between a definite number of sections. For such scenarios, it’s better to support a variant where the button’s area is divided equally.
  • Keyboard Navigation: All tab buttons should be focusable and navigation between tabs should be accessible from the keyboard.
  • Responsiveness: If all tabs on mobile don’t fit into the viewport, users should still have access to all tab buttons. Ways to solve this can be making the button area scrollable for mobile or showing a More button containing a dropdown with the rest of the buttons.

Toast

Toasts provide short meaningful feedback messages about the action results.

  • Dismissed Automatically: Toast messages shouldn’t interrupt the user flow, block the screen for a long time or require additional action from the user.
  • Action Support: Besides displaying the message, toasts may also provide an action related to the message like undoing an action.
  • Handles Multiple Instances: Even though it doesn’t happen often, toasts can be called from multiple sources at the same time and all resulting toasts should be queued (see the queue sketch after this list). It’s good practice not to show all the messages at the same time.
  • Accessibility: Toast messages should be announced by the voice assistive technology and their action should be easily accessible from the keyboard.
  • Responsiveness: Toasts should be aligned with the mobile viewport and their action should be easily reachable for tapping.
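
A minimal sketch of queuing toast messages from multiple sources so only one is shown at a time; the names and the 4-second duration are illustrative:

```typescript
type Toast = { message: string; action?: () => void };

const queue: Toast[] = [];
let active = false;

export function showToast(toast: Toast, durationMs = 4000): void {
  queue.push(toast);
  if (!active) displayNext(durationMs);
}

function displayNext(durationMs: number): void {
  const next = queue.shift();
  if (!next) {
    active = false;
    return;
  }
  active = true;
  render(next); // placeholder for mounting the toast element
  // Dismiss automatically, then show the next queued message.
  setTimeout(() => displayNext(durationMs), durationMs);
}

function render(toast: Toast): void {
  console.log("toast:", toast.message); // stand-in for real rendering
}
```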

Tooltip

Tooltips are desktop-only components that display additional information when hovering over or focusing on an element.

  • Keyboard Hover Support: Tooltips should be accessible when an element is focused using the keyboard.
  • Dynamic Positioning: Tooltip content should be displayed based on the current position of the trigger element on the screen and always visible to the user.
  • Hover Timeout: Having a small timeout before triggering a tooltip will help to prevent occasionally showing tooltips while users move their mouse cursor.
  • Light Variant: The tooltip should respect its parent element background and provide a variant to be used on darker background colours.
  • Instant Transition for Element Groups: If there’s a group of elements using tooltips, hovering over another element while a tooltip’s already active shouldn’t trigger the animation.

Tooling

To make things efficient for anyone using your design system, tooling is essential. Find the workflows where you can integrate things with the tools people use. This helps organically spread your design system and makes it crucial to people’s daily work.

Development

One of the main challenges in developing a design system isn’t building the components. It’s making your code stable, easy to read and contribute to.

Component Catalog

Isolate your UI components’ environment outside of your product codebase to make sure they’re not dependent on any global dependencies and can be easily reused.

Documentation

Having your code documented is key to driving adoption and reducing the load on the contributors.

Code Style

Having a defined code style helps align the way code’s written in the system and increases development velocity. It should be automated with the tools provided for each platform.

Unit Testing

Every part of the design system should be covered with unit tests. Once your system’s adopted, any change in the isolated environment may affect how the product works.

Accessibility Testing

Design systems should cover accessibility as much as possible. Making this automatic reduces the risk of inaccessible components or user flows in the product.

Semantic Versioning

Version your code with semantic versioning that dictates how version numbers are assigned and incremented.
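
Under semantic versioning (MAJOR.MINOR.PATCH), the part you bump signals the kind of change, for example:

```
1.4.2 -> 1.4.3   PATCH: backwards-compatible bug fix
1.4.3 -> 1.5.0   MINOR: backwards-compatible new functionality
1.5.0 -> 2.0.0   MAJOR: breaking change to the public API
```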

Release Strategy

Design system releases should be automated and ideally use scripts run locally or in remote CI pipelines to prevent broken releases.

Commit Guidelines

Automate the generation of your changelog by adopting commit message guidelines that categorise and define the changes being made.
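
One widely used option is the Conventional Commits format, which encodes the change type (and breaking changes, marked with `!`) in the message so changelog generators can pick them up; the scopes below are illustrative:

```
feat(button): add a full-width variant
fix(modal): return focus to the trigger on close
docs(tokens): document the spacing scale
feat(select)!: change the onChange payload
```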

Pull Request Templates

Create pull request templates that outline the change being proposed to facilitate productive discussions.

Contribution Guidelines

Define the process of contributing to the code of the design system. Document everything in a discoverable place to make it easier for everyone to contribute.

Design

The UI and UX in a design system need to be tied to development as much as possible. The tools in this checklist should help designers and developers work better together.

Design Editor

There are many design editors available in the market today with the most popular names being Sketch, Figma and Adobe XD. If you're unsure which route to go down it's often best to speak with your team.

Plugins

Most popular Design Editors (Sketch and Figma, especially) come with third-party plugin support. Whilst it's best to use the editor's built-in tools for laying out your components, plugins can bring in a range of useful data to populate them.

Version Control

Having your design versioned with semantic versioning will allow you to easily align design with development, roll back faulty changes and release changes in code and design at the same time.

Contribution Guidelines

Define the process of contributing to the UI and UX of the design system and document it in a discoverable place to make it easier for everyone to contribute.

Project Management

Design systems are no different than any other project your team might take on. In order to successfully build and maintain one, you need a clear strategy that’s well executed daily, and you‘ll need to create opportunities for your colleagues to give feedback to help shape your design system together.

Task Management

Solid task management and workflows are a crucial step in executing any project. Adopting a methodology like Agile or Kanban helps you cover a lot of ground.

Ticketing

Make it easier to track your day-to-day progress by using ticketing software like Jira, Trello or GitHub. This’ll make it easier for others to submit feature proposals or bug reports.

Milestones

Define milestones that act as bigger epics in your project management with the help of your roadmap. These will help you understand your progress.

Roadmap

Setting your short and long term vision and mapping things out helps you decide the steps to take, understand your place in the bigger picture and prioritise day-to-day tasks.

Communications

Your users play a great role in shaping your design system. Creating communication channels where they can raise their voices helps you keep track of how they’re using your system. It’ll also improve their sense of ownership and the adoption of your system.

Community Meetings

Arrange community meetings with everyone who uses the design system. Share your knowledge and make proposals to improve the sense of community.

Communication Channel

Most product development work happens digitally, so create a digital channel where people can reach out and ask questions.

Open Hours

Create open hours in which you can engage your audience in a more private setting where you can discuss things in more detail. You can also use these as peer coding or peer design opportunities.

FAQs

To save everyone time, define which questions are asked frequently by your audience and document them in a discoverable place.

Analytics

Data isn’t the only driving factor when it comes to the development of design systems. Keeping a sharp eye on how your system’s used in the development process and the end product can inform your go-forward strategy.

Component Analytics

Track the usage of your components. For design, you can use built-in tools like Figma’s Design System Analytics. For the end product, you can have a separate way of tracking per platform depending on the technology.

Error Logging

Implement a way to track and pinpoint component-related outages in your product.

Tooling Analytics

Track what tools are being used for your design system. Find out which ones are used the most and which features are the most popular.

Service and Health Metrics

Define service and health metrics for your design system to set a benchmark on how well you’re doing. Common examples can be the number of tickets closed, improvements made or bugs fixed.

Blockchain

A blockchain is a decentralized, distributed, and oftentimes public, digital ledger consisting of records called blocks that is used to record transactions across many computers so that any involved block cannot be altered retroactively, without the alteration of all subsequent blocks.

Blockchain

A blockchain is a decentralized, distributed, and oftentimes public, digital ledger consisting of records called blocks that is used to record transactions across many computers so that any involved block cannot be altered retroactively, without the alteration of all subsequent blocks.

Decentralization

In blockchain, decentralization refers to the transfer of control and decision-making from a centralized entity (individual, organization, or group thereof) to a distributed network. Decentralized networks strive to reduce the level of trust that participants must place in one another, and deter their ability to exert authority or control over one another in ways that degrade the functionality of the network.

Why it matters

The nature of blockchain allows for trustless systems to be built on top of it. Users don’t rely on a centralized group of people, such as a bank, to make decisions and allow transactions to flow through. Because the system is decentralized, users know that transactions will never be denied for non-custodial reasons.

This decentralization enables use-cases that were previously impossible, such as parametric insurance, decentralized finance, and decentralized autonomous organizations (DAOs), to name a few. This allows developers to build products that provide immediate value without having to go through a bureaucratic process of applications, approvals, and general red tape.

Blockchain Structure

The blockchain gets its name from its underlying structure. The blockchain is organized as a series of “blocks” that are “chained” together.

Understanding blockchain security requires understanding how the blockchain is put together. This requires knowing what the blocks and chains of blockchain are and why they are designed the way that they are.
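
A toy sketch of the block-and-chain idea in TypeScript using Node’s crypto module; real blockchains add consensus, Merkle trees, peer-to-peer networking and much more:

```typescript
import { createHash } from "crypto";

type Block = {
  index: number;
  previousHash: string; // links this block to the one before it
  data: string;
  hash: string;
};

function hashBlock(index: number, previousHash: string, data: string): string {
  return createHash("sha256").update(`${index}${previousHash}${data}`).digest("hex");
}

function addBlock(chain: Block[], data: string): Block[] {
  const previous = chain[chain.length - 1];
  const index = chain.length;
  const previousHash = previous ? previous.hash : "0".repeat(64);
  const block: Block = { index, previousHash, data, hash: hashBlock(index, previousHash, data) };
  return [...chain, block];
}

// Altering any earlier block changes its hash and breaks every later link.
let chain: Block[] = [];
chain = addBlock(chain, "genesis");
chain = addBlock(chain, "alice -> bob: 5");
console.log(chain);
```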

Basic Blockchain Operations

Operations in decentralized networks are the responsibility of the peer participants and their respective computational nodes. These are specific to each type of blockchain.

Application and uses of Blockchain technology

Blockchain applications go far beyond cryptocurrency and bitcoin. With its ability to create more transparency and fairness while also saving businesses time and money, the technology is impacting a variety of sectors in ways that range from how contracts are enforced to making government work more efficiently.

Blockchain general knowledge

Storage

Unlike a centralized server operated by a single company or organization, decentralized storage systems consist of a peer-to-peer network of user-operators who hold a portion of the overall data, creating a resilient file storage sharing system.

Mining and incentive models

Mining is the process of adding transaction details to the Blockchain, like sender address, hash value, etc. The Blockchain contains all the history of the transactions that have taken place in the past for record purposes, and it is stored in such a manner that it can’t be manipulated.

An Incentive is basically a reward given to a Blockchain Miner for speeding up the transactions and making correct decisions while processing the complete transaction securely.

Decentralization vs trust

Blockchains, cryptocurrency, smart contracts, and oracles have emerged as new technologies for coordinating social and economic activities in a more secure, transparent, and accessible manner. Most importantly, these technologies are revealing the power of cryptographic guarantees—what we often call cryptographic truth—in restoring users’ trust in everyday interactions.

A fork happens whenever a community makes a change to the blockchain’s protocol, or basic set of rules.

Cryptocurrencies

A cryptocurrency, crypto-currency, or crypto is a digital currency designed to work as a medium of exchange through a blockchain, which is not reliant on any central authority, such as a government or bank, to uphold or maintain it.

A cryptocurrency wallet is an application that functions as a wallet for your cryptocurrency.

Cryptography

Cryptography, or cryptology, is the practice and study of techniques for secure communication in the presence of adversarial behavior.

Consensus protocols

Consensus for blockchain is a procedure in which the peers of a Blockchain network reach agreement about the present state of the data in the network. Through this, consensus algorithms establish reliability and trust in the Blockchain network.

Blockchain interoperability

The concept of “blockchain interoperability” refers to the ability of different blockchain networks to exchange and leverage data between one another and to move unique types of digital assets between the networks’ respective blockchains.

Blockchains

Blockchain systems vary considerably in their design, particularly with regard to the consensus mechanisms used to perform the essential task of verifying network data.

Solana

Solana is a public blockchain platform with smart contract functionality. Its native cryptocurrency is SOL.

EVM based

The Ethereum Virtual Machine (EVM) is a dedicated software virtual stack that executes smart contract bytecode and is integrated into each Ethereum node. Simply said, EVM is a software framework that allows developers to construct Ethereum-based decentralized applications (DApps). All Ethereum accounts and smart contracts are stored on this virtual computer.

Many blockchains have forked the Ethereum blockchain and added functionality on top; these blockchains are referred to as EVM-based blockchains.

Ethereum

Ethereum is a programmable blockchain platform with the capacity to support smart contracts, dapps (decentralized apps), and other DeFi projects. The Ethereum native token is the Ether (ETH), and it’s used to fuel operations on the blockchain.

The Ethereum platform launched in 2015, and it’s now the second largest form of crypto next to Bitcoin (BTC).

Polygon

Polygon, formerly known as the Matic Network, is a scaling solution that aims to provide multiple tools to improve the speed and reduce the cost and complexities of transactions on the Ethereum blockchain.

Binance Smart Chain

Binance Smart Chain (also known as BNB Chain) is a blockchain project initiated by Binance as a central piece of their cryptocurrency exchange, which is the largest exchange in the world in terms of daily trading volume of cryptocurrencies.

Gnosis Chain

Gnosis is a blockchain based on Ethereum, which changed the consensus model to PoS to solve major issues on the Ethereum mainnet. While the platform solves problems surrounding transaction fees and speed, it also means that the Gnosis chain is less decentralized, as it is somewhat reliant on the Ethereum chain.

Huobi Eco Chain

Huobi's ECO Chain (also known as HECO) is a public blockchain that provides developers with a low-cost on-chain environment for running decentralized apps (dApps) built on smart contracts and storing digital assets.

Avalanche

Avalanche describes itself as an “open, programmable smart contracts platform for decentralized applications.” What does that mean? Like many other decentralized protocols, Avalanche has its own token called AVAX, which is used to pay transaction fees and can be staked to secure the network.

Fantom

Fantom is a decentralized, open-source smart contract platform that supports decentralized applications (dApps) and digital assets. It's one of many blockchain networks built as a faster, more efficient alternative to Ethereum, and it uses the proof-of-stake consensus mechanism.

Moonbeam Moonriver

Moonbeam is a Polkadot network parachain that promises cross-chain interoperability between the Ethereum and Polkadot networks. More specifically, Moonbeam is a smart contract platform that enables developers to move dApps between the two networks without having to rewrite code or redeploy infrastructure.

Moonriver is an incentivized testnet. It enables developers to create, test, and adjust their protocols prior to launching on Moonbeam. Moonbeam is the mainnet of the ecosystem.

L2 blockchains

Layer-2 refers to a network or technology that operates on top of an underlying blockchain protocol to improve its scalability and efficiency.

This category of scaling solutions entails shifting a portion of Ethereum's transactional burden to an adjacent system architecture, which then handles the brunt of the network’s processing and only subsequently reports back to Ethereum to finalize its results.

Arbitrum

Arbitrum aims to reduce transaction fees and congestion by moving as much computation and data storage off of Ethereum's main blockchain (layer 1) as it can. Storing data off of Ethereum's blockchain is known as a layer 2 scaling solution.

Blockchain Oracles

A blockchain oracle is a third-party service that connects smart contracts with the outside world, primarily to feed information in from the world, but also the reverse. Information from the world encapsulates multiple sources so that decentralised knowledge is obtained.

Hybrid Smart Contracts

Hybrid smart contracts combine code running on the blockchain (on-chain) with data and computation from outside the blockchain (off-chain) provided by Decentralized Oracle Networks.

Chainlink

Chainlink is a decentralized network of oracles that enables smart contracts to securely interact with real-world data and services that exist outside of blockchain networks.

Oracle Networks

By leveraging many different data sources, and implementing an oracle system that isn’t controlled by a single entity, decentralized oracle networks provide an increased level of security and fairness to smart contracts.

Smart Contracts

A smart contract is a computer program or a transaction protocol that is intended to automatically execute, control or document legally relevant events and actions according to the terms of a contract or an agreement.

Smart contracts can be programmed using relatively developer-friendly languages. If you're experienced with Python or any curly-bracket language, you can find a language with familiar syntax.

Solidity is an object-oriented programming language created specifically by the Ethereum network team for constructing smart contracts on various blockchain platforms, most notably Ethereum.

  • It's used to create smart contracts that implement business logic and generate a chain of transaction records in the blockchain system.
  • It acts as a tool for creating machine-level code and compiling it for the Ethereum Virtual Machine (EVM).

Like other programming languages, Solidity has variables, functions, classes, arithmetic operations, string manipulation, and much more.

Vyper

Vyper is a contract-oriented, pythonic programming language that targets the Ethereum Virtual Machine (EVM).

Rust is a multi-paradigm, general-purpose programming language. Rust emphasizes performance, type safety, and concurrency. It is popular on smart contract chains Solana and Polkadot.

Testing smart contracts is one of the most important measures for improving smart contract security. Unlike traditional software, smart contracts cannot typically be updated after launching, making it imperative to test rigorously before deploying contracts onto mainnet.

Unit testing involves testing individual components in a smart contract for correctness. A unit test is simple, quick to run, and provides a clear idea of what went wrong if the test fails.
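
A minimal unit-test sketch, assuming a Hardhat project with the ethers plugin and the Chai matchers from the default toolbox; the `Counter` contract and its `increment`/`count` functions are hypothetical:

```typescript
import { expect } from "chai";
import { ethers } from "hardhat";

describe("Counter", () => {
  it("increments its count", async () => {
    // Deploy a fresh instance to Hardhat's in-memory network for this test.
    const Counter = await ethers.getContractFactory("Counter");
    const counter = await Counter.deploy();

    await counter.increment();
    expect(await counter.count()).to.equal(1);
  });
});
```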

Integration Tests

Integration tests validate interactions between multiple components. For smart contract testing this can mean interactions between different components of a single contract, or across multiple contracts.

Code Coverage

Code coverage is a metric that can help you understand how much of your source is tested. It's a very useful metric that can help you assess the quality of your test suite.

Deployment

Unlike other software, smart contracts don’t run on a local computer or a remote server: they live on the blockchain. Thus, interacting with them is different from more traditional applications.

Monitoring

Monitoring smart contracts allows their authors to view their activity and interactions based on generated transactions and events, allowing verification of the contract's intended purpose and functionality.

Upgrades

Smart contracts are immutable by default. Once they are created there is no way to alter them, effectively acting as an unbreakable contract among participants. However, for some scenarios, it is desirable to be able to modify them.

ERC Tokens

An ‘Ethereum Request for Comments’ (ERC) is a document that programmers use to write smart contracts on Ethereum Blockchain. They describe rules in these documents that Ethereum-based tokens must comply with.

While there are several Ethereum standards, these ERC standards are the most well-known and popular: ERC-20, ERC-721, ERC-1155, and ERC-777.
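
For illustration, the function set required by ERC-20, written as a TypeScript interface the way a client library might model it (the types are simplified; this is not any specific library's API):

```typescript
// The function set required by the ERC-20 standard, modelled for a client.
interface Erc20 {
  totalSupply(): Promise<bigint>;
  balanceOf(owner: string): Promise<bigint>;
  transfer(to: string, amount: bigint): Promise<boolean>;
  transferFrom(from: string, to: string, amount: bigint): Promise<boolean>;
  approve(spender: string, amount: bigint): Promise<boolean>;
  allowance(owner: string, spender: string): Promise<bigint>;
}
```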

Crypto Wallets

A cryptocurrency wallet is a device, physical medium, program, or service which stores the public and/or private keys for cryptocurrency transactions. In addition to this basic function of storing the keys, a cryptocurrency wallet more often also offers the functionality of encrypting and/or signing information.

IDEs

An integrated development environment is a software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of at least a source code editor, build automation tools and a debugger.

Crypto Faucets

A crypto faucet lets users earn small crypto rewards by completing simple tasks. The metaphor is based on how even one drop of water from a leaky faucet could eventually fill up a cup. There are various kinds of crypto faucets, including bitcoin (BTC), Ethereum (ETH), and BNB faucets.

Faucets are common in development environments where developers obtain testnet crypto in order to develop and test their application prior to mainnet deployment.

Decentralized Storage

Decentralized storage is where data is stored on a decentralized network across multiple locations by users or groups who are incentivized to join, store, and keep data accessible. The servers used are hosted by people, rather than a single company. Anyone is free to join, they are kept honest due to smart contracts, and they are incentivized to participate via tokens.

Smart Contract Frameworks

Building a full-fledged dapp requires different pieces of technology. Software frameworks include many of the needed features or provide easy plugin systems to pick the tools you desire.

Hardhat

Hardhat is an Ethereum development environment. It allows users to compile contracts and run them on a development network. Get Solidity stack traces, console.log and more.

Brownie is a Python-based development and testing framework for smart contracts targeting the Ethereum Virtual Machine.

Truffle

A development environment, testing framework, and asset pipeline for blockchains using the Ethereum Virtual Machine (EVM), aiming to make life as a developer easier.

Foundry

Foundry is a smart contract development toolchain. Foundry manages your dependencies, compiles your project, runs tests, deploys, and lets you interact with the chain from the command-line and via Solidity scripts.

Security

Smart contracts are extremely flexible, capable of both holding large quantities of tokens (often in excess of $1B) and running immutable logic based on previously deployed smart contract code. While this has created a vibrant and creative ecosystem of trustless, interconnected smart contracts, it is also the perfect ecosystem to attract attackers looking to profit by exploiting vulnerabilities.

Practices

Smart contract programming requires a different engineering mindset. The cost of failure can be high, and change can be difficult.

Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a smart contract.

Static analysis is the analysis of smart contracts performed without executing them.

Smart contract audits enable developers to provide a thorough analysis of smart contract sets. The main goal of a smart contract audit is to detect and eliminate vulnerabilities, starting with the most common threat vectors.

Source of Randomness Attacks

The security of cryptographic systems depends on some secret data that is known to authorized persons but unknown and unpredictable to others. To achieve this unpredictability, some randomization is typically employed. Modern cryptographic protocols often require frequent generation of random quantities. Cryptographic attacks that subvert or exploit weaknesses in this process are known as randomness attacks.

Tools

Blockchain and smart contract technology is fairly new; therefore, you should expect constant changes in the security landscape as new bugs and security risks are discovered and new best practices are developed. Keeping track of this constantly moving landscape proves difficult, so using tools to aid this mission is important. The cost of failing to properly secure smart contracts can be high, and because change can be difficult, we must make use of these tools.

Slither

Slither is a Solidity static analysis framework written in Python 3. It runs a suite of vulnerability detectors, prints visual information about contract details, and provides an API to easily write custom analyses. Slither enables developers to find vulnerabilities, enhance their code comprehension, and quickly prototype custom analyses.

Manticore

Manticore is a symbolic execution tool for analysis of smart contracts and binaries.

MythX

MythX is a comprehensive smart contract security analysis tool developed by ConsenSys. It allows users to detect security vulnerabilities in Ethereum smart contracts throughout the development life cycle as well as analyze Solidity dapps for security holes and known smart contract vulnerabilities.

Echidna

Echidna is a Haskell program designed for fuzzing/property-based testing of Ethereum smart contracts. It uses sophisticated grammar-based fuzzing campaigns based on a contract ABI to falsify user-defined predicates or Solidity assertions.

Management Platforms

Managing smart contracts in a production environment (mainnet) can prove difficult as users must keep track of different versions, blockchains, deployments, etc. Using a tool for this process eliminates a lot of the risk that comes with manual tracking.

OpenZeppelin

OpenZeppelin Contracts helps you minimize risk by using battle-tested libraries of smart contracts for Ethereum and other blockchains. It includes the most used implementations of ERC standards.

Version Control Systems

Version control/source control systems allow developers to track and control changes to code over time. These services often include the ability to make atomic revisions to code, branch/fork off of specific points, and compare versions of code. They are useful in determining the who, what, when, and why of code changes.

Git

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

Repo Hosting Services

When working on a team, you often need a remote place to put your code so others can access it, create their own branches, and create or review pull requests. These services often include issue tracking, code review, and continuous integration features. A few popular choices are GitHub, GitLab, BitBucket, and AWS CodeCommit.

GitHub

GitHub is a provider of Internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

GitLab

GitLab is a provider of internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Bitbucket

Bitbucket is a Git-based hosting and source code repository service that is Atlassian's alternative to other products like GitHub and GitLab.

Bitbucket offers hosting options via Bitbucket Cloud (Atlassian's servers), Bitbucket Server (the customer's on-premises environment) or Bitbucket Data Center (a number of servers in the customer's on-premises or cloud environment).

dApps

A decentralised application (dApp) is an application that can operate autonomously, through the use of smart contracts that run on a blockchain. Like traditional applications, dApps provide some function or utility to their users.

Frontend Frameworks

Web frameworks are designed for writing web applications. They are collections of libraries and tools that aid in the development of a software product or website. Frameworks vary in their capabilities and functions depending on the tasks at hand: they define the structure, establish the rules, and provide the development tools required.

React

React is the most popular front-end JavaScript library for building user interfaces. React can also render on the server using Node and power mobile apps using React Native.

Angular

Angular is a component based front-end development framework built on TypeScript which includes a collection of well-integrated libraries that include features like routing, forms management, client-server communication, and more.

Vue.js

Vue.js is an open-source JavaScript framework for building user interfaces and single-page applications. It is mainly focused on front end development.

A key to building software that meets requirements without defects is testing. Software testing helps developers know they are building the right software. When tests are run as part of the development process (often with continuous integration tools), they build confidence and prevent regressions in the code.

Like traditional software, testing dApps involves testing the entire stack that makes up the dApp (backend, frontend, db, etc.).

Deploying a dApp involves deployment of all of its layers, generally through a management framework.

Maintenance

dApps can be harder to maintain because the code and data published to the blockchain is harder to modify. It’s hard for developers to make updates to their dapps (or the underlying data stored by a dapp) once they are deployed, even if bugs or security risks are identified in an old version.

Architecture

Unlike Web2 applications, in Web3 there’s no centralized database that stores the application state or user identity, and there’s no centralized web server where the backend logic resides.

dApps face unique security challenges as they run on immutable blockchains. dApps are harder to maintain, and developers cannot modify or update their code once deployed. Therefore, special consideration must be taken before deploying to the blockchain.

Applicability

dApps can be used for just about anything that requires two or more parties to agree on something. When the appropriate conditions are met, the smart contract will execute automatically. An important differentiation is that these transactions are no longer based on trust but they are rather based on cryptographically-backed smart contracts.

DeFi

Decentralized finance offers financial instruments without relying on intermediaries such as brokerages, exchanges, or banks by using smart contracts on a blockchain.

DAOs

A decentralized autonomous organization (DAO) is an emerging form of legal structure. With no central governing body, every member within a DAO typically shares a common goal and attempts to act in the best interest of the entity. Popularized through cryptocurrency enthusiasts and blockchain technology, DAOs are used to make decisions in a bottom-up management approach.

NFTs

A non-fungible token (NFT) is a financial security consisting of digital data stored in a blockchain, a form of distributed ledger. The ownership of an NFT is recorded in the blockchain, and can be transferred by the owner, allowing NFTs to be sold and traded.

Payments

Blockchain technology has the ability to eliminate all the tolls exacted by centralized organizations when transferring payments.

Insurance

Blockchain technology has the ability to automate claims functions by verifying real-world data through the use of an oracle. It also automates payments between parties for claims and thus lowers administrative costs for insurance companies.

Node as a Service (NaaS)

Running your own blockchain node can be challenging, especially when getting started or while scaling fast. There are a number of services that run optimized node infrastructures for you, so you can focus on developing your application or product instead.

Alchemy

Alchemy is a developer platform that empowers companies to build scalable and reliable decentralized applications without the hassle of managing blockchain infrastructure in-house.

Infura

Infura provides the tools and infrastructure that allow developers to easily take their blockchain application from testing to scaled deployment - with simple, reliable access to Ethereum and IPFS.

Moralis

Moralis provides a single workflow for building high performance dapps. Fully compatible with your favorite web3 tools and services.

Quicknode

QuickNode is a Web3 developer platform used to build and scale blockchain applications.

While the bulk of the logic in blockchain applications is handled by smart contracts, all the surrounding services that support those smart contracts (frontend, monitoring, etc.) are often written in other languages.

JavaScript

JavaScript, often abbreviated JS, is a programming language that is one of the core technologies of the World Wide Web, alongside HTML and CSS. It lets us add interactivity to pages e.g. you might have seen sliders, alerts, click interactions, and popups etc on different websites -- all of that is built using JavaScript. Apart from being used in the browser, it is also used in other non-browser environments as well such as Node.js for writing server-side code in JavaScript, Electron for writing desktop applications, React Native for mobile applications and so on.

Python

Python is a well known programming language which is both a strongly typed and a dynamically typed language. Being an interpreted language, code is executed as soon as it is written and the Python syntax allows for writing code in functional, procedural or object-oriented programmatic ways.

Go

Go is an open source programming language supported by Google. Go can be used to write cloud services, CLI tools, APIs, and much more.

You don't need to write every smart contract in your project from scratch. There are many open source smart contract libraries available that provide reusable building blocks for your project that can save you from having to reinvent the wheel.

Ethers.js

The ethers.js library aims to be a complete and compact library for interacting with the Ethereum Blockchain and its ecosystem. It was originally designed for use with ethers.io and has since expanded into a more general-purpose library.
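
A minimal sketch of reading on-chain data with ethers.js, assuming ethers v5 and a placeholder RPC endpoint:

```typescript
import { ethers } from "ethers";

async function main(): Promise<void> {
  // Connect to a node; the URL would come from a provider such as Infura or Alchemy.
  const provider = new ethers.providers.JsonRpcProvider("https://example-rpc.invalid");

  const balance = await provider.getBalance("0x0000000000000000000000000000000000000000");
  console.log(`Balance: ${ethers.utils.formatEther(balance)} ETH`);
}

main().catch(console.error);
```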

Web3.js

web3.js is a collection of libraries that allow you to interact with a local or remote ethereum node using HTTP, IPC or WebSocket.

A library that gives you access to the powerful Moralis Server backend from your JavaScript app.

Client Nodes

A blockchain is a distributed network of computers (known as nodes) running software that can verify blocks and transaction data. The software application, known as a client, must be run on your computer to turn it into a blockchain node.

Geth

Go Ethereum (Geth) is one of the three original implementations (along with C++ and Python) of the Ethereum protocol. It is written in Go, fully open source and licensed under the GNU LGPL v3.

Besu

Besu is an Apache 2.0 licensed, MainNet compatible, Ethereum client written in Java.

Nethermind

Nethermind is a high-performance, highly configurable full Ethereum protocol client built on .NET that runs on Linux, Windows, and macOS, and supports Clique, Aura, Ethash, and Proof-of-Stake consensus algorithms.

Substrate

Substrate is a Software Development Kit (SDK) specifically designed to provide you with all of the fundamental components a blockchain requires so you can focus on crafting the logic that makes your chain unique and innovative.

Due to the limited number of transactions-per-second (TPS) built in to blockchains, a number of alternative mechanisms and technologies have emerged to aid the scaling of blockchain dApps.

State and Payment Channels

State channels refer to the process in which users transact with one another directly outside of the blockchain, or ‘off-chain,’ and greatly minimize their use of ‘on-chain’ operations.

Optimistic Rollups and Fraud Proofs

Optimistic rollups are a layer 2 (L2) construction that improves throughput and latency on Ethereum’s base layer by moving computation and data storage off-chain. An optimistic rollup processes transactions outside of Ethereum Mainnet, reducing congestion on the base layer and improving scalability.

Optimistic rollups allow anyone to publish blocks without providing proofs of validity. However, to ensure the chain remains safe, optimistic rollups specify a time window during which anyone can dispute a state transition.

Zero-knowledge rollups (ZK-rollups) are layer 2 scaling solutions that increase the throughput of a blockchain by moving computation and state-storage off-chain.

Validium

Validium is a scaling solution that enforces integrity of transactions using validity proofs like ZK-rollups, but doesn’t store transaction data on the Ethereum Mainnet. While off-chain data availability introduces trade-offs, it can lead to massive improvements in scalability.

Plasma

Plasma is a framework that allows the creation of child blockchains that use the main Ethereum chain as a layer of trust and arbitration. In Plasma, child chains can be designed to meet the requirements of specific use cases, specifically those that are not currently feasible on Ethereum.

Sidechains

A sidechain is a separate blockchain network that connects to another blockchain – called a parent blockchain or mainnet – via a two-way peg.

Ethereum 2

Ethereum 2.0 marks a long-anticipated upgrade to the Ethereum public mainnet. Designed to accelerate Ethereum’s usage and adoption by improving its performance, Ethereum 2.0 implements Proof of Stake.

On-Chain Scaling

On-chain scaling refers to any direct modification made to a blockchain, like data sharding and execution sharding in the incoming version of Ethereum 2.0. Another type of on-chain scaling would be a sidechain with two-way bridge to Ethereum, like Polygon.

QA Basics

Quality Assurance (QA) also known as QA testing is an activity to ensure that an organization provides the best product or service to the customers. QA testing of a software involves the testing of performance, adaptability, and functionality. Yet, software quality assurance extends beyond software quality; it also comprises the quality process used for developing, testing, and release of software products. QA relies on the software development cycle, which includes the management of software requirements, design, coding, testing, and release.

Test Oracles

A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of the program's output for the test cases. Conceptually, we can consider testing a process in which the test cases are given to the test oracle and the program under test; the outputs of the two are then compared to determine whether the program behaved correctly for those test cases.
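
A small TypeScript illustration: a slow but obviously correct reference implementation acts as the oracle for a faster function under test (both functions are hypothetical):

```typescript
// Fast implementation under test: sum of 1..n using the closed-form formula.
const sumFast = (n: number): number => (n * (n + 1)) / 2;

// Test oracle: slow but obviously correct reference implementation.
const sumOracle = (n: number): number => {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
};

// The outputs of the two are compared for each test case.
for (const n of [0, 1, 2, 10, 1000]) {
  if (sumFast(n) !== sumOracle(n)) {
    throw new Error(`Mismatch for n=${n}: ${sumFast(n)} !== ${sumOracle(n)}`);
  }
}
console.log("All test cases agree with the oracle.");
```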

What is Quality

Quality is extremely hard to define, and it is simply stated: “Fit for use or purpose.” It is all about meeting the needs and expectations of customers concerning the functionality, design, reliability, durability, & price of the product.

What is Assurance

Assurance is nothing but a positive declaration about a product or service, which gives confidence. It is a certainty that a product or a service will work well. It provides a guarantee that the product will work without any problems, as per the expectations or requirements.

Quality Assurance in Software Testing

Quality Assurance in Software Testing is defined as a procedure to ensure the quality of software products or services provided to the customers by an organization. Quality assurance focuses on improving the software development process and making it efficient and effective per the quality standards defined for software products. Quality Assurance is popularly known as QA Testing.

Tester Mindset

As a Quality Assurance Engineer, your job is to look for the weak spots in a product, whatever that product may be, and report them back, so that they can be fixed and thus, the product you are working on can be of the highest quality possible.

To do your job successfully, you need to have a Testing mindset. What does that mean? Among other things, it means you have to think in the most destructive way possible and be as creative as possible.

A few important points:

Test approach has two techniques:

  • Proactive - An approach in which the test design process is initiated as early as possible in order to find and fix the defects before the build is created.
  • Reactive - An approach in which the testing is not started until after design and coding are completed.

Three approaches are commonly used to implement functional testing:

White Box Testing is a technique in which software’s internal structure, design, and coding are tested to verify input-output flow and improve design, usability, and security. In white box testing, code is visible to testers, so it is also called Clear box testing, Open box testing, Transparent box testing, Code-based testing, and Glass box testing.

Gray Box Testing

Gray box testing is a software testing technique to test a software product or application with partial knowledge of the internal structure of the application. The purpose of gray box testing is to search and identify the defects due to improper code structure or improper use of applications.

Black Box Testing is a software testing method in which the functionalities of software applications are tested without having knowledge of internal code structure, implementation details and internal paths. Black Box Testing mainly focuses on input and output of software applications and it is entirely based on software requirements and specifications. It is also known as Behavioral Testing.

Test Prioritization

Test prioritization is the ordering of the test cases to be executed. Prioritizing test cases helps meet two important constraints in software testing, namely time and budget, while detecting faults as early as possible.

Manage Your Testing

Test Management is a process of managing the testing activities in order to ensure high-quality, high-end testing of the software application. The method consists of organizing, controlling, and ensuring traceability and visibility of the testing process in order to deliver a high-quality software application. It ensures that the software testing process runs as expected.

qTest is a test management tool used for project management, bug tracking, and test management. It follows the centralized test management concept, which makes communication easy and assists in the rapid development of tasks across the QA team and other stakeholders.

TestRail

TestRail is a web-based test case management tool. It is used by testers, developers and team leads to manage, track, and organize software testing efforts. TestRail allows team members to enter test cases, organize test suites, execute test runs, and track their results, all from a modern and easy to use web interface.

TestLink is a widely used web-based open-source test management tool. It synchronizes both requirements specification and test specification together. Testers can create test projects and document test cases using this tool. With TestLink you can create accounts for multiple users and assign them different user roles.

Zephyr is a testing solution that improves the quality of your software by managing and monitoring end-to-end testing. It is very effective for managing manual testing. Its key capabilities include:

A project is a temporary endeavor to create a unique product, service, or result. A project is temporary because it has a defined beginning and end time, and it is unique because it has a particular set of operations designed to accomplish a goal.

Project Management is a discipline of planning, organizing, motivating, and controlling the resources to achieve specific project goals. The main objective of project management is to achieve project goals and targets while keeping in mind the project scope, time, quality, and cost. It facilitates the project workflow with team collaboration on a single project.

Jira is a software application used for issue tracking and project management. The tool, developed by the Australian software company Atlassian, has become widely used by agile development teams to track bugs, stories, epics, and other tasks.

Assembla is an extensive suite of applications for software development, enabling distributed agile teams. It allows development teams to manage, initiate and maintain agile projects, applications and websites.

YouTrack is a project management software developed by JetBrains. It’s in the form of a plugin that can be attached to the JetBrains IDEs such as Intellij Idea, and helps create and assign tasks to a development team as well as track the progress of working.

Trello is a popular, simple, and easy-to-use collaboration tool that enables you to organize projects and everything related to them into boards. With Trello, you can find all kinds of information, such as:

Testing Techniques are methods applied to evaluate a system or a component with a purpose to find if it satisfies the given requirements. Testing of a system helps to identify gaps, errors, or any kind of missing requirements differing from the actual requirements. These techniques ensure the overall quality of the product or software including performance, security, customer experience, and so on.

Functional testing is a type of software testing that validates the software system against the functional requirements/specifications. The purpose of Functional tests is to test each function of the software application by providing appropriate input and verifying the output against the Functional requirements.

UAT

User Acceptance Testing (UAT) is a type of testing performed by the end user or the client to verify/accept the software system before moving the software application to the production environment. UAT is done in the final phase of testing after functional, integration and system testing is done.

Exploratory testing is evaluating a product by learning about it through exploration and experimentation, including, to some degree, questioning, study, modeling, observation, inference, and so on.

Sanity Testing

Sanity testing is a kind of software testing performed after receiving a software build with minor changes in code or functionality, to ascertain that the bugs have been fixed and no further issues have been introduced by these changes. The goal is to determine that the proposed functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time and costs involved in more rigorous testing.

Regression Testing is a type of software testing to confirm that a recent program or code change has not adversely affected existing features. Regression testing is a black box testing technique. Test cases are re-executed to check that the previous functionality of the application still works and that the new changes have not introduced any bugs.

Smoke Testing

Smoke Testing is a software testing process that determines whether the deployed software build is stable or not. Smoke testing is a confirmation for QA team to proceed with further software testing. It consists of a minimal set of tests run on each build to test software functionalities.

Unit testing is where individual units (modules, functions/methods, routines, etc.) of software are tested to ensure their correctness. This low-level testing ensures smaller components are functionally sound while taking the burden off of higher-level tests. Generally, a developer writes these tests during the development process and they are run as automated tests.
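
As a rough illustration (not part of the original roadmap content), a minimal unit test in TypeScript using Node's built-in test runner might look like the sketch below; the add function and file names are hypothetical, and Node 18+ is assumed.

```typescript
// math.ts - hypothetical module under test
export function add(a: number, b: number): number {
  return a + b;
}

// math.test.ts - a minimal unit test using Node's built-in test runner
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { add } from './math';

test('add() returns the sum of two numbers', () => {
  assert.equal(add(2, 3), 5);   // happy path
  assert.equal(add(-1, 1), 0);  // edge case: opposite signs
});
```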

Integration Testing

Integration Testing is a type of testing where software modules are integrated logically and tested as a group. A typical software project consists of multiple software modules coded by different programmers. This testing level aims to expose defects in the interaction between these software modules when they are integrated. Integration Testing focuses on checking data communication amongst these modules.

Non-Functional Testing

Non-functional testing is a type of software testing used to check non-functional parameters such as reliability, load handling, performance, and accountability of the software. The primary purpose of non-functional testing is to verify the behavior of the software system, such as its speed, against these non-functional parameters. The parameters of non-functional testing are never tested before functional testing.

Load Testing is a type of Performance Testing that determines the performance of a system, software product, or software application under real-life-based load conditions. Load testing determines the behavior of the application when multiple users use it at the same time. It is the response of the system measured under varying load conditions.

Performance Testing is a subset of Performance Engineering. It is a process of evaluating a system’s behavior under various extreme conditions. The main intent of performance testing is monitoring and improving key performance indicators such as response time, throughput, memory, CPU utilization, and more.

There are three objectives (three S) of Performance testing to observe and evaluate: Speed, Scalability, and Stability.

Types of Performance Testing

Following are the commonly used performance testing types, but not limited to:

Stress Testing is a type of Performance Testing. The objective of stress testing is to identify the breaking point of the application under test under extreme load, beyond normal conditions.

e.g., injecting a high volume of requests per second into an API might disrupt its service, cause it to throw HTTP 503 Service Unavailable, or have other consequences.

Security Testing is a type of Software Testing that uncovers vulnerabilities, threats, or risks in a software application and prevents malicious attacks from intruders. The purpose of Security Tests is to identify all possible loopholes and weaknesses of the software system which might result in a loss of information, revenue, or reputation at the hands of employees or outsiders of the organization.

Accessibility Testing is defined as a type of software testing performed to ensure that the application being tested is usable by people with disabilities such as hearing impairments, color blindness, and low vision, and by other disadvantaged groups such as the elderly.

SDLC

The Software Development Life Cycle (SDLC) is a process followed for a software project, within a software organization. It consists of a detailed plan describing how to develop, maintain, replace and alter or enhance specific software. The life cycle defines a methodology for improving the quality of software and the overall development process.

Waterfall Model is a sequential model that divides software development into pre-defined phases. Each phase must be completed before the next phase can begin with no overlap between the phases. Each phase is designed for performing specific activity during the SDLC phase.

V Model is a highly disciplined SDLC model that has a testing phase parallel to each development phase. The V model is an extension of the waterfall model wherein software development and testing is executed in a sequential way. It's also known as the Validation or Verification Model.

The agile model refers to a software development approach based on iterative development. Agile methods break tasks into smaller iterations or parts that do not directly involve long-term planning. The project scope and requirements are laid down at the beginning of the development process. Plans regarding the number of iterations, the duration, and the scope of each iteration are clearly defined in advance.

The Agile software development methodology is one of the simplest and most effective processes to turn a vision for a business need into software solutions.

Kanban is a very popular framework for development in the agile software development methodology. It provides a transparent way of visualizing the tasks and work capacity of a team. It mainly uses physical and digital boards to allow the team members to visualize the current state of the project they are working on.

A kanban board is an agile project management tool designed to help visualize work, limit work-in-progress, and maximize efficiency.

Scrum in Software Testing is a methodology for building complex software applications. It provides easy solutions for executing complicated tasks. Scrum helps the development team focus on all aspects of software product development, like quality, performance, usability, and so on. It provides transparency, inspection, and adaptation during software development to avoid complexity.

Scaled Agile Framework (SAFe) is a freely available online knowledge base that allows you to apply lean-agile practices at the enterprise level. It provides a simple and lightweight experience for software development. It is a set of organization and workflow patterns intended to guide enterprises in scaling lean and agile practices. It is divided into three segments, which are Team, Program, and Portfolio.

Extreme Programming (XP) is an agile software development framework that aims to produce higher quality software, and higher quality of life for the development team. XP is the most specific of the agile frameworks regarding appropriate engineering practices for software development.

Manual Testing is a type of software testing in which test cases are executed manually by a tester without using any automated tools. The purpose of Manual Testing is to identify the bugs, issues, and defects in the software application. Manual software testing is the most primitive technique of all testing types and it helps to find critical bugs in the software application.

Test Driven Development (TDD) is a software development approach in which test cases are developed to specify and validate what the code will do. In simple terms, test cases for each functionality are created and run first; if a test fails, new code is written to make it pass, keeping the code simple and bug-free.
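
A minimal sketch of the red-green-refactor cycle, assuming a Jest-style test runner; the slugify function and file names are hypothetical.

```typescript
// Step 1 (red): write a failing test for behaviour that does not exist yet.
// slugify.test.ts
import { slugify } from './slugify';

test('slugify lowercases and joins words with dashes', () => {
  expect(slugify('Hello World')).toBe('hello-world');
});

// Step 2 (green): write the simplest implementation that makes the test pass.
// slugify.ts
export function slugify(input: string): string {
  return input.trim().toLowerCase().split(/\s+/).join('-');
}

// Step 3 (refactor): clean up the implementation while keeping the test green.
```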

A Test Plan is a detailed document that describes the test strategy, objectives, schedule, estimation, deliverables, and resources required to perform testing for a software product. Test Plan helps us determine the effort needed to validate the quality of the application under test.

A Test Case is a set of actions executed to verify a particular feature or functionality of your software application. A Test Case contains test steps, test data, precondition, and postcondition developed for a specific test scenario to verify any requirement. The test case includes specific variables or conditions, using which a testing engineer can compare expected and actual results to determine whether a software product is functioning as per the requirements of the customer.

A Test Scenario is defined as any functionality that can be tested. It is a collective set of test cases which helps the testing team to determine the positive and negative characteristics of the project.

The outputs of the QA and testing team can be communicated and interpreted in several different ways. Having a solid reporting stream is essential for all the decisions a stakeholder or manager may take.

Compatibility

Compatibility is nothing but the capability of existing or living together. Compatibility Testing is a type of Software testing to check whether your software is capable of running on different hardware, operating systems, applications, network environments or Mobile devices.

Verification in Software Testing is a process of checking documents, design, code, and program in order to check if the software has been built according to the requirements or not. The main goal of verification process is to ensure quality of software application, design, architecture etc. The verification process involves activities like reviews, walk-throughs and inspection.

Validation in Software Engineering is a dynamic mechanism of testing and validating if the software product actually meets the exact needs of the customer or not. The process helps to ensure that the software fulfills the desired use in an appropriate environment. The validation process involves activities like unit testing, integration testing, system testing and user acceptance testing.

Automation Testing is a software testing technique that performs using special automated testing software tools to execute a test case suite. On the contrary, Manual Testing is performed by a human sitting in front of a computer carefully executing the test steps.

Automated testing is the application of software tools to automate a human-driven manual process of reviewing and validating a software product. Most modern agile and DevOps software projects now include automated testing from inception. To fully appreciate the value of automated testing, however, it helps to understand what life was like before it was widely adopted.

Frontend automation

Front-end automation is a way to characterize automation that streamlines tasks focused on interactivity, websites, and attended processes. Robotic process automation, or RPA, is considered automation on the front end, or from the user-interface (UI) level. Benefits of front-end automation include quick task building with no programming knowledge, no required changes to existing programs or applications, and the fact that individuals who know the keystrokes can easily build the automation task.

Basic introduction

HTML/CSS/JavaScript Basics

HTML stands for HyperText Markup Language. It is used on the frontend and gives structure to the webpage, which you can style using CSS and make interactive using JavaScript.

CSS or Cascading Style Sheets is the language used to style the front end of any website. CSS is a cornerstone technology of the World Wide Web, alongside HTML and JavaScript.

JavaScript allows you to add interactivity to your pages. You may have seen common examples on the websites: sliders, click interactions, popups, and so on.

Browser devtools

Every modern web browser includes a powerful suite of developer tools. These tools do a range of things, from inspecting currently-loaded HTML, CSS and JavaScript to showing which assets the page has requested and how long they took to load. This article explains how to use the basic functions of your browser's devtools.

Ajax

AJAX stands for Asynchronous JavaScript And XML. In a nutshell, it is the use of the XMLHttpRequest object to communicate with servers. It can send and receive information in various formats, including JSON, XML, HTML, and text files.

Caching ensures that resources downloaded once are reused instead of being fetched again. It is useful for increasing subsequent page load speed by reusing cached images, fonts, and other static assets. Caching should not typically be done on dynamic content, for example a list of posts or comments. As part of the testing strategy, both caching and cache invalidation (not getting stale dynamic content) need to be tested.

CSR stands for Client Side Rendering and SSR stands for Server Side Rendering. CSR pages are computed on your machine and then shown by your browser, while in the case of SSR the server sends ready-to-show HTML content directly. React, Vue, and Angular apps are primarily examples of CSR (technically they can also be executed in SSR mode), and almost all older tech stacks are SSR, like PHP, Ruby on Rails, Java, .NET, etc. From the user's standpoint, CSR apps take longer to render initially but compensate by avoiding page reloads later (SPA), while SSR apps often have a faster initial load time but do full page reloads more often.

Automation frameworks

QA Wolf

Cypress

Cypress framework is a JavaScript-based end-to-end testing framework built on top of Mocha – a feature-rich JavaScript test framework running on and in the browser, making asynchronous testing simple and convenient. It also uses a BDD/TDD assertion library and a browser to pair with any JavaScript testing framework.
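
A minimal Cypress end-to-end test might look like the sketch below; the URL, selectors, and error message are hypothetical.

```typescript
// login.cy.ts - a minimal Cypress test (illustrative only)
describe('Login page', () => {
  it('shows an error for invalid credentials', () => {
    cy.visit('https://example.com/login');
    cy.get('input[name="email"]').type('user@example.com');
    cy.get('input[name="password"]').type('wrong-password');
    cy.get('button[type="submit"]').click();
    cy.contains('Invalid email or password').should('be.visible');
  });
});
```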

Free Resources

WebdriverIO

Jasmine

Jasmine is a very popular JavaScript BDD (behavior-driven development) framework for unit testing JavaScript applications. It provides utilities that can be used to run automated tests for both synchronous and asynchronous code. It does not depend on any other JavaScript frameworks. It does not require a DOM.
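
A small Jasmine spec, as a sketch; the Calculator class is hypothetical.

```typescript
// calculator.spec.ts - illustrative Jasmine spec
import { Calculator } from './calculator';

describe('Calculator', () => {
  let calc: Calculator;

  beforeEach(() => {
    calc = new Calculator();
  });

  it('adds two numbers', () => {
    expect(calc.add(2, 3)).toEqual(5);
  });

  it('throws when dividing by zero', () => {
    expect(() => calc.divide(1, 0)).toThrowError();
  });
});
```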

Robot Framework is a Python-based, extensible keyword-driven automation framework for acceptance testing, acceptance test driven development (ATDD), behavior driven development (BDD) and robotic process automation (RPA).

Robot Framework is open and extensible. Robot Framework can be integrated with virtually any other tool to create powerful and flexible automation solutions.

Selenium is an open-source tool that automates web browsers. It provides a single interface that lets you write test scripts in programming languages like Ruby, Java, NodeJS, PHP, Perl, Python, and C#, among others.
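
A rough sketch using Selenium's Node bindings (selenium-webdriver); the URL and selector are examples, and a locally available ChromeDriver is assumed.

```typescript
// search.e2e.ts - minimal selenium-webdriver script (illustrative)
import { Builder, By, Key, until } from 'selenium-webdriver';

async function run(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/search');
    await driver.findElement(By.css('input[name="q"]')).sendKeys('selenium', Key.RETURN);
    await driver.wait(until.titleContains('selenium'), 5000);
    console.log('Title:', await driver.getTitle());
  } finally {
    await driver.quit();
  }
}

run();
```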

Jest

Jest is a delightful JavaScript Testing Framework with a focus on simplicity. It works with projects using: Babel, TypeScript, Node, React, Angular, Vue and more!
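
As an illustration of a Jest-specific feature, the sketch below uses a mock function; sendWelcomeEmail and its signature are hypothetical.

```typescript
// notifier.test.ts - illustrative Jest test with a mock
import { sendWelcomeEmail } from './notifier';

test('sendWelcomeEmail calls the mailer once with the user address', () => {
  const mailer = jest.fn();
  sendWelcomeEmail({ email: 'user@example.com' }, mailer);

  expect(mailer).toHaveBeenCalledTimes(1);
  expect(mailer).toHaveBeenCalledWith('user@example.com', expect.any(String));
});
```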

Puppeteer

Puppeteer is a Node library which provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol. It can also be configured to use full (non-headless) Chrome or Chromium.
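
A minimal Puppeteer sketch; the target URL is an example.

```typescript
// screenshot.ts - illustrative Puppeteer script
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch(); // headless by default
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' }); // capture for visual checks
  console.log('Page title:', await page.title());
  await browser.close();
})();
```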

Playwright

Playwright Test was created specifically to accommodate the needs of end-to-end testing. Playwright supports all modern rendering engines including Chromium, WebKit, and Firefox. Test on Windows, Linux, and macOS, locally or on CI, headless or headed, with native mobile emulation of Google Chrome for Android and Mobile Safari. Playwright leverages the DevTools protocol to write powerful, stable automated tests. Because Playwright can actually see into and control the browser rather than relying on a middle translation layer, it allows for the simulation of more insightful and relevant user scenarios.
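
A minimal Playwright test as a sketch; the URL and expected content are examples.

```typescript
// example.spec.ts - illustrative Playwright test
import { test, expect } from '@playwright/test';

test('home page shows the expected heading', async ({ page }) => {
  await page.goto('https://example.com');
  await expect(page).toHaveTitle(/Example Domain/);
  await expect(page.locator('h1')).toHaveText('Example Domain');
});
```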

Selenium IDE

Selenium IDE is an open source web automation testing tool from the Selenium Suite used primarily for QA purposes. It functions as a Firefox extension and does not require any programming knowledge and test cases can be created simply by interacting with the browser.

Selenium itself is an open-source, automated testing tool used to test web applications across various browsers. It's primarily built in Java and supports several browsers and programming languages. Selenium IDE was developed to speed up the creation of automation scripts. It’s a rapid prototyping tool and can be used by engineers with no programming knowledge whatsoever. Because of its simplicity, Selenium IDE is best used as a prototyping tool and not a complete solution for developing and maintaining complex test suites.

Mobile automation, as the name suggests, refers to 'automation' that is done on mobile devices. Mobile Automation can test a WAP site or an app. As we know, mobile devices consist of hardware and software components, while a mobile application is simply the software. Testing the mobile device is also connected to evaluating the hardware component and the software part.

Appium is an open-source framework that allows QAs to conduct automated app testing on different platforms like Android, iOS, and Windows. It is developed and supported by Sauce Labs to automate native and hybrid mobile apps. It is a cross-platform mobile automation tool, which means that it allows the same test to be run on multiple platforms.

XCUITest

Mobile app testing, and more specifically, app UI testing involves checking how the interface behaves when user actions are performed and then compares results with expected outcomes. Here, testers try to replicate exactly how a user would interact with the application and validate the state of the UI. XCUITest allows them to write test cases for these purposes using two fundamental concepts.

Espresso is a native testing framework for Android to write reliable UI tests. Google released the Espresso framework in October 2013 and, as of release version 2.0, Espresso is part of the Android Support Repository. One of the important features in Espresso is that it automatically synchronizes your test actions with the user interface of your application. The framework also ensures that your activity is started before the test runs. It can also force a test to wait until all observer background activities have finished, which is sometimes a problem with other testing frameworks.

Detox

Detox is a JavaScript mobile testing framework that is built into the application and the test execution starts with app launch. This makes test execution really fast and robust as no external additional tools are needed to orchestrate and synchronize during the test execution.

Backend Automation

Backend Testing is a testing method that checks the server side or database of web applications or software. Backend testing aims to test the application layer or database layer to ensure that the web application or software is free from database defects like deadlock, data corruption, or data loss.

Karate framework

Karate is the only open-source tool to combine API test-automation, mocks, performance-testing and even UI automation into a single, unified framework. The BDD syntax popularized by Cucumber is language-neutral, and easy for even non-programmers. Assertions and HTML reports are built-in, and you can run tests in parallel for speed.

There's also a cross-platform stand-alone executable for teams not comfortable with Java. You don't have to compile code. Just write tests in a simple, readable syntax - carefully designed for HTTP, JSON, GraphQL and XML. And you can mix API and UI test-automation within the same test script.

A Java API also exists for those who prefer to programmatically integrate Karate's rich automation and data-assertion capabilities.

Cypress

Cypress framework is a JavaScript-based end-to-end testing framework built on top of Mocha – a feature-rich JavaScript test framework running on and in the browser, making asynchronous testing simple and convenient. It also uses a BDD/TDD assertion library and a browser to pair with any JavaScript testing framework.

Free Resources

Soap ui

SoapUI is the world's leading Functional Testing tool for SOAP and REST testing. With its easy-to-use graphical interface, and enterprise-class features, SoapUI allows you to easily and rapidly create and execute automated functional, regression, and load tests.

Newman

Postman is an API platform for building and using APIs. Postman simplifies each step of the API lifecycle and streamlines collaboration so you can create better APIs—faster. It is an API client that makes it easy for developers to create, share, test, and document APIs. With this open-source solution, users can create and save simple and complex HTTP/s requests and read their responses.

Newman is a command-line Collection Runner for Postman. It enables you to run and test a Postman Collection directly from the command line. It's built with extensibility to integrate it with your continuous integration servers and build systems.
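
A rough sketch of running a collection programmatically with Newman from Node; the collection path is hypothetical, and TypeScript typings are available separately (e.g. via @types/newman).

```typescript
// run-collection.ts - illustrative programmatic Newman run
import newman from 'newman';

newman.run(
  {
    collection: require('./my-api.postman_collection.json'), // hypothetical collection file
    reporters: 'cli',
  },
  (err, summary) => {
    if (err) throw err;
    console.log('Run complete:', summary.run.failures.length, 'assertion failure(s)');
  }
);
```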

Rest-assured

Rest-assured helps developers and test engineers test REST APIs in Java with ease, using techniques popularized in dynamic languages such as Groovy and Ruby.

Non Functional Testing

In the process of software testing, testing and analyzing only the software's functions does not complete the testing process. There are other attributes that demonstrate the overall quality of the software; they are known as quality characteristics. These characteristics include performance, security, usability, and reliability. Not testing and analyzing these characteristics will not prevent the software from functioning to some degree, but testing these quality characteristics is what is referred to as QA non-functional testing.

Load and Performance Testing

Performance Testing is a subset of Performance Engineering. It is a process of evaluating a system’s behavior under various extreme conditions. The main intent of performance testing is to monitor and improve key performance indicators such as response time, throughput, memory, CPU utilization, and more.

There are three objectives (three S) of Performance testing to observe and evaluate: Speed, Scalability and Stability. Following are the commonly used performance testing types, but not limited to:

  • Load Testing
  • Stress Testing
  • Spike Testing
  • Endurance Testing
  • Volume Testing
  • Scalability Testing
  • Capacity Testing

Load Testing is one type of performance testing. It helps to evaluate the behavior of the application under test, such as response time, throughput, and pass/fail transactions, under a normal workload; e.g., cart checkout response time is 500 milliseconds during typical business hours.

Vegeta

Vegeta is a versatile HTTP load testing tool built out of a need to drill HTTP services with a constant request rate. It can be used both as a command line utility and a library.

JMeter

Apache JMeter is an Apache project that can be used as a load testing tool for analyzing and measuring the performance of a variety of services, with a focus on web applications.

Reference Resource

Locust

Locust is an easy-to-use, scriptable and scalable performance testing tool. You define the behavior of your users in regular Python code instead of being stuck in a UI or restrictive domain-specific language. This makes Locust infinitely expandable and very developer friendly. Given below are some of the features of Locust.

  • Write test scenarios in plain old Python

  • Distributed and scalable - supports hundreds of thousands of concurrent users

  • Web-based UI

  • Can test any system

  • Hackable

  • Locust Website (Official Website)

  • Learn Locust (Course)

Gatling

Gatling is a highly capable load testing tool. It is designed for ease of use, maintainability and high performance.

Out of the box, Gatling comes with excellent support of the HTTP protocol that makes it a tool of choice for load testing any HTTP server. As the core engine is actually protocol agnostic, it is perfectly possible to implement support for other protocols. For example, Gatling currently also ships JMS support.

Gatling’s architecture is asynchronous as long as the underlying protocol, such as HTTP, can be implemented in a non blocking way. This kind of architecture lets us implement virtual users as messages instead of dedicated threads, making them very resource cheap. Thus, running thousands of concurrent virtual users is not an issue.

k6

Grafana k6 is an open-source load testing tool that makes performance testing easy and productive for engineering teams. k6 is free, developer-centric, and extensible.

Using k6, you can test the reliability and performance of your systems and catch performance regressions and problems earlier. k6 will help you to build resilient and performant applications that scale.
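
k6 scripts are plain JavaScript; the sketch below is written so that it is also valid TypeScript if bundled or transpiled before running. The target URL is k6's public demo site.

```typescript
// load-test.ts - illustrative k6 script
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // 10 virtual users
  duration: '30s',  // for 30 seconds
};

export default function () {
  const res = http.get('https://test.k6.io');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```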

Artillery

Artillery is a modern, powerful & easy-to-use performance testing toolkit. Use it to ship scalable applications that stay performant & resilient under high load.

Artillery prioritizes developer productivity and happiness, and follows the 'batteries-included' philosophy.

Features

  • Emulate complex user behavior with scenarios

  • Load testing and smoke testing

  • Batteries included

  • Extensible & hackable

  • Integrations and add-ons

  • Designed for cross-team collaboration

  • Planet-scale testing

  • Artillery Website (Official Website)

  • Learn Artillery (Course)

Lighthouse

Lighthouse is an open-source, automated tool for improving the quality of web pages. You can run it against any web page, public or requiring authentication. It has audits for performance, accessibility, progressive web apps, SEO, and more. You can run Lighthouse in Chrome DevTools, from the command line, or as a Node module. You give Lighthouse a URL to audit, it runs a series of audits against the page, and it then generates a report on how well the page did. From there, use the failing audits as indicators on how to improve the page. Each audit has a reference doc explaining why the audit is important and how to fix it.

WebPageTest

WebPageTest is a web performance tool providing deep diagnostic information about how a page performs under a variety of conditions.

Each test can be run from different locations around the world, on real browsers, over any number of customizable network conditions.

In software QA, accessibility testing is the practice of confirming that an application is usable for as many people as possible, including people with disabilities such as vision impairment, hearing problems and cognitive conditions.

Chrome dev tools

These are a set of tools built into the browser to aid frontend developers in diagnosing and solving various issues in their applications — such as JavaScript and logical bugs, CSS styling issues, or even just making quick temporary alterations to the DOM.

To enter the dev tools, right click and click Inspect (or press ctrl+shift+c/cmd+opt+c) to enter the Elements panel. Here you can debug CSS and HTML issues. If you want to see logged messages or interact with JavaScript, enter the Console tab from the tabs above (or press ctrl+shift+j/cmd+opt+j to enter it directly). Another very useful feature in the Chrome dev tools is Lighthouse (for checking performance) — more on this later.

NOTE: This isn't a chrome-specific feature, and most browsers (Chromium based or otherwise) will have their own, largely-similar set of devtools.

WAVE

Axe

Security Testing

Security testing is a process intended to reveal flaws in the security mechanisms of an information system that protect data and maintain functionality as intended. Due to the logical limitations of security testing, passing the security testing process is not an indication that no flaws exist or that the system adequately satisfies the security requirements.

Typical security requirements may include specific elements of confidentiality, integrity, authentication, availability, authorization and non-repudiation. Actual security requirements tested depend on the security requirements implemented by the system. Security testing as a term has a number of different meanings and can be completed in a number of different ways. As such, a Security Taxonomy helps us to understand these different approaches and meanings by providing a base level to work from.

Authentication is the process of verifying that an individual, entity or website is whom it claims to be. Authentication in the context of web applications is commonly performed by submitting a username or ID and one or more items of private information that only a given user should know.

Authorization may be defined as 'the process of verifying that a requested action or service is approved for a specific entity' (NIST). Authorization is distinct from authentication which is the process of verifying an entity's identity. When designing and developing a software solution, it is important to keep these distinctions in mind. A user who has been authenticated (perhaps by providing a username and password) is often not authorized to access every resource and perform every action that is technically possible through a system.

For example, a web app may have both regular users and admins, with the admins being able to perform actions the average user is not privileged to do, even though they have been authenticated. Additionally, authentication is not always required for accessing resources; an unauthenticated user may be authorized to access certain public resources, such as an image or login page, or even an entire web app.
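
A loose TypeScript sketch of an authorization check that is separate from authentication; the roles, actions, and policy table are all hypothetical.

```typescript
type Role = 'admin' | 'user' | 'anonymous';

interface User {
  id: string;
  role: Role;
}

// Illustrative policy table: which roles may perform which action
const policy: Record<string, Role[]> = {
  'image:read': ['anonymous', 'user', 'admin'], // public resource, no authentication needed
  'post:create': ['user', 'admin'],
  'user:delete': ['admin'],                     // admin-only action
};

function isAuthorized(user: User, action: string): boolean {
  return (policy[action] ?? []).includes(user.role);
}

// An authenticated regular user is still not authorized for admin-only actions:
const alice: User = { id: '42', role: 'user' };
console.log(isAuthorized(alice, 'post:create')); // true
console.log(isAuthorized(alice, 'user:delete')); // false
```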

Vulnerability scanning identifies hosts and host attributes (e.g., operating systems, applications, open ports), but it also attempts to identify vulnerabilities rather than relying on human interpretation of the scanning results. Many vulnerability scanners are equipped to accept results from network discovery and network port and service identification, which reduces the amount of work needed for vulnerability scanning.

Also, some scanners can perform their own network discovery and network port and service identification. Vulnerability scanning can help identify outdated software versions, missing patches, and misconfigurations, and validate compliance with or deviations from an organization’s security policy.

This is done by identifying the operating systems and major software applications running on the hosts and matching them with information on known vulnerabilities stored in the scanners’ vulnerability databases.

The Open Web Application Security Project, or OWASP, is an international non-profit organization dedicated to web application security.

The OWASP Top 10 is a regularly-updated report outlining security concerns for web application security, focusing on the 10 most critical risks. The report is put together by a team of security experts from all over the world. OWASP refers to the Top 10 as an ‘awareness document’ and they recommend that all companies incorporate the report into their processes in order to minimize and/or mitigate security risks.

Reference Resource

This metric reflects the context by which vulnerability exploitation is possible. This metric value (and consequently the Base Score) will be larger the more remote (logically, and physically) an attacker can be in order to exploit the vulnerable component. The assumption is that the number of potential attackers for a vulnerability that could be exploited from across a network is larger than the number of potential attackers that could exploit a vulnerability requiring physical access to a device, and therefore warrants a greater Base Score.

Secrets Management is a systematic way of managing, storing, securing, and retrieving credentials (such as passwords, SSH keys, certificates, API keys, and backup codes) for any system, database, or other service.

Email testing allows you to view your email before sending it out to your subscriber list to verify links, design, spelling errors, and more.

Mailinator

Gmail Tester

QA Reporting

JUnit

JUnit is a unit testing framework for the Java programming language. JUnit has played a crucial role in the development of test-driven development and is one of a family of unit testing frameworks. JUnit is useful for writing repeatable tests for your application code units. JUnit promotes the idea of “testing first, then coding”: test a little, code a little. JUnit helps the programmer by increasing the productivity and the stability of the program's code, which reduces the time the tester spends debugging it.

Allure

Allure Report is a flexible, lightweight multi-language test reporting tool. It provides clear graphical reports and allows everyone involved in the development process to extract the maximum amount of information from the everyday testing process.

TestRail

TestRail is a web-based test management tool used by testers, developers and other stakeholders to manage, track and organize software testing efforts. It follows a centralized test management concept that helps with easy communication and enables rapid development of tasks across the QA team and other stakeholders.

Monitoring and Logs

DevOps monitoring entails overseeing the entire development process from planning, development, integration and testing, deployment, and operations. It involves a complete and real-time view of the status of applications, services, and infrastructure in the production environment. Features such as real-time streaming, historical replay, and visualizations are critical components of application and service monitoring.

Grafana

Grafana is the open-source platform for monitoring and observability. It allows you to query, visualize, alert on and understand your metrics no matter where they are stored.

New Relic is an observability platform that helps you build better software. You can bring in data from any digital source so that you can fully understand your system and how to improve it.

A Simple Tool for Monitoring Complex APIs. Verify that the structure and content of your API calls meet your expectations. Powerful and flexible assertions give you total control over defining a successful API call.

Create simple monitors with dynamic data for even the most complex use cases. More than simple string matching, build API validations without any code and use them across local dev, staging and production environments.

Sentry

Sentry tracks your software performance, measuring metrics like throughput and latency, and displaying the impact of errors across multiple systems. Sentry captures distributed traces consisting of transactions and spans, which measure individual services and individual operations within those services.

Kibana is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps.

Datadog is a monitoring and analytics platform for large-scale applications. It encompasses infrastructure monitoring, application performance monitoring, log management, and user-experience monitoring. Datadog aggregates data across your entire stack with 400+ integrations for troubleshooting, alerting, and graphing.

PagerDuty

Through its SaaS-based platform, PagerDuty empowers developers, DevOps, IT operations and business leaders to prevent and resolve business-impacting incidents for exceptional customer experience. When revenue and brand reputation depend on customer satisfaction, PagerDuty arms organizations with the insight to proactively manage events that may impact customers across their IT environment. With hundreds of native integrations, on-call scheduling and escalations, machine learning, business-wide response orchestration, analytics, and much more, PagerDuty gets the right data in the hands of the right people in real time, every time.

Version control/source control systems allow developers to track and control changes to code over time. These services often include the ability to make atomic revisions to code, branch/fork off of specific points, and to compare versions of code. They are useful in determining the who, what, when, and why of code changes.

Git

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

There are different repository hosting services, with the most famous ones being GitHub, GitLab and BitBucket. I would recommend creating an account on GitHub because that is where most of the open-source work is done and where most developers are.

Services Links

GitLab

GitLab is a provider of internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Bitbucket

Bitbucket is a Git-based hosting and source code repository service that is Atlassian's alternative to other products like GitHub, GitLab, etc.

Bitbucket offers hosting options via Bitbucket Cloud (Atlassian's servers), Bitbucket Server (the customer's on-premises servers), or Bitbucket Data Center (a number of servers in the customer's on-premises or cloud environment).

GitHub

GitHub is a provider of Internet hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

CI / CD

Continuous Integration is a software development method where team members integrate their work at least once daily. An automated build checks every integration to detect errors in this method. In Continuous Integration, the software is built and tested immediately after a code commit. In a large project with many developers, commits are made many times during the day. With each commit, code is built and tested.

Continuous Delivery is a software engineering method in which a team develops software products in a short cycle. It ensures that software can be easily released at any time. The main aim of continuous delivery is to build, test, and release software with good speed and frequency. It helps reduce the cost, time, and risk of delivering changes by allowing for frequent updates in production.

Jenkins is an open-source CI/CD automation server. Jenkins is primarily used for building projects, running tests, static code analysis and deployments.

Travis CI

Travis CI is a CI/CD service that is primarily used for building and testing projects that are hosted on BitBucket and GitHub. Open source projects can utilize Travis CI for free.

CircleCI

CircleCI is a CI/CD service that can be integrated with GitHub, BitBucket and GitLab repositories. The service can be used as a SaaS offering or self-managed using your own resources.

Drone

Drone is a CI/CD service offered by Harness. Each build runs on an isolated Docker container, and Drone integrates with many popular source code management repositories like GitHub, BitBucket and GitLab.

GitLab CI

GitLab offers a CI/CD service that can be used as a SaaS offering or self-managed using your own resources. You can use GitLab CI with any GitLab hosted repository, or any BitBucket Cloud or GitHub repository in the GitLab Premium self-managed, GitLab Premium SaaS and higher tiers.

Bamboo

Bamboo is a CI/CD service provided by Atlassian. Bamboo is primarily used for automating builds, tests and releases in a single workflow.

TeamCity

TeamCity is a CI/CD service provided by JetBrains. TeamCity can be used as a SaaS offering or self-managed using your own resources.

Azure DevOps

Azure DevOps is developed by Microsoft as a full scale application lifecycle management and CI/CD service. Azure DevOps provides developer services for allowing teams to plan work, collaborate on code development, and build and deploy applications.

Headless Testing

Headless testing is when end-to-end tests are performed without loading the browser's user interface. Since the browser operates as a typical browser would but does not make use of the user interface, it is considered highly suitable for automated testing.

A few example cases where one may use headless browser testing include:

Zombie.js allows you to run Unit or Integration tests without a real web browser. Instead, it uses a simulated browser where it stores the HTML code and runs the JavaScript you may have in your HTML page. This means that an HTML page doesn’t need to be displayed, saving precious time that would otherwise be occupied rendering it.

Playwright is an open-source test automation library initially developed by Microsoft contributors. It supports programming languages such as Java, Python, C#, and NodeJS. Playwright comes with Apache 2.0 License and is most popular with NodeJS with Javascript/Typescript.

Puppeteer is a Node library that provides a high-level API to control headless Chrome or Chromium browsers over the DevTools Protocol. It can also be configured to use full (non-headless) Chrome or Chromium.

Cypress framework is a JavaScript-based end-to-end testing framework built on top of Mocha – a feature-rich JavaScript test framework running on and in the browser, making asynchronous testing simple and convenient. It also uses a BDD/TDD assertion library and a browser to pair with any JavaScript testing framework.

Free Resources

Headless Chrome is a way to run the Chrome browser in a headless environment without the full browser UI. One of the benefits of using Headless Chrome (as opposed to testing directly in Node) is that your JavaScript tests will be executed in the same environment as users of your site.

Headless Browser Testing is a process of running browser tests without the browser UI or GUI. In headless browser testing, the tester can run test cases, including cross-browser testing, accurately and successfully without requiring a visible browser on which the application needs to be tested.

HtmlUnit is a 'GUI-Less browser for Java programs'. It models HTML documents and provides an API that allows you to invoke pages, fill out forms, click links, etc... just like you do in your 'normal' browser. It has fairly good JavaScript support (which is constantly improving) and is able to work even with quite complex AJAX libraries, simulating Chrome, Firefox or Internet Explorer depending on the configuration used.

HtmlUnit is not a generic unit testing framework. It is specifically a way to simulate a browser for testing purposes and is intended to be used within another testing framework such as JUnit or TestNG.

Patterns and design principles

CQRS and Eventual Consistency

CQRS (Command Query Responsibility Segregation) is an architecture pattern that comes with the idea of separating read and write operations into two distinct logical processes.
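
A minimal sketch of the idea, separating a write model (commands) from a read model (queries); the order domain, storage, and names are hypothetical.

```typescript
// Commands mutate state through the write model.
interface CreateOrderCommand {
  type: 'CreateOrder';
  orderId: string;
  items: string[];
}

class OrderWriteModel {
  private orders = new Map<string, string[]>();

  handle(cmd: CreateOrderCommand): void {
    this.orders.set(cmd.orderId, cmd.items);
    // In a fuller implementation an event would be published here so that
    // the read model can update itself asynchronously (eventual consistency).
  }
}

// Queries go through a separate, read-optimized model.
class OrderReadModel {
  constructor(private readonly view: Map<string, { itemCount: number }>) {}

  getOrderSummary(orderId: string): { itemCount: number } | undefined {
    return this.view.get(orderId);
  }
}
```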

ACID & CAP Theorem

ACID (Atomicity, Consistency, Isolation, Durability) and CAP (Consistency, Availability, Partition Tolerance) are essential concepts in distributed systems. They are often used to explain the trade-offs between consistency and availability.

CAP is an acronym for Consistency, Availability, and Partition Tolerance. According to the CAP theorem, any distributed system can only guarantee two of the three properties at any time. You can't guarantee all three properties at once.

ACID is an acronym that stands for Atomicity, Consistency, Isolation, Durability. ACID is a set of properties of database transactions intended to guarantee validity even in the event of errors, power failures, etc.

Test driven development (TDD) is the process of writing tests for software's requirements which will fail until the software is developed to meet those requirements. Once those tests pass, then the cycle repeats to refactor code or develop another feature/requirement. In theory, this ensures that software is written to meet requirements in the simplest form, and avoids code defects.

MVC MVP MVVM

Model-view-controller, or MVC, is a pattern used to separate user-interface, data and application logic. It does this by separating an application into three parts: Model, View, and Controller. The model holds the data, the view encompasses the user-interface, and the controller acts as a mediator between the two.

Model-view-presenter, or MVP, was designed to ease automated unit testing and improve the separation of concerns in presentation logic. MVP is a variant of the MVC pattern, though it differs in that it divides the application into the user interface (view), data (model) and presentation logic (presenter). While the model and the view stay the same as in the model-view-controller pattern, the presenter differs from the controller in that it manipulates the model and updates the view.

Another variant of MVC is the model-view-viewmodel pattern. Model-view-viewmodel, or MVVM, separates the application into three core components: Model, View, and View Model. While the view and model represent all that they did in their parent pattern, the view model acts as a link between the model and the view: it retrieves data from the model, exposes it to the view through two-way data binding, and can manipulate the model's data.
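
A toy MVC sketch to make the three roles concrete; the counter domain is hypothetical.

```typescript
class CounterModel {                       // Model: holds the data
  private value = 0;
  increment(): void { this.value += 1; }
  get current(): number { return this.value; }
}

class CounterView {                        // View: renders the data
  render(value: number): void {
    console.log(`Count: ${value}`);
  }
}

class CounterController {                  // Controller: mediates between the two
  constructor(private model: CounterModel, private view: CounterView) {}

  onIncrementClicked(): void {
    this.model.increment();
    this.view.render(this.model.current);
  }
}

const controller = new CounterController(new CounterModel(), new CounterView());
controller.onIncrementClicked(); // Count: 1
```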

Actors

The Actor Model is a model that represents actors as the basic unit of a system. Actors can only communicate through messages, have their own private state, and can also manage other actors, resulting in an encapsulated and fault-tolerant system.
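
A tiny, simplified sketch of the actor idea (private state, communication only via messages); this is not a full actor runtime, and the account domain is hypothetical.

```typescript
type Message =
  | { type: 'deposit'; amount: number }
  | { type: 'getBalance' };

class AccountActor {
  private balance = 0;             // private state, never shared directly
  private mailbox: Message[] = [];

  send(msg: Message): void {       // other actors interact only by sending messages
    this.mailbox.push(msg);
  }

  processNext(): void {            // messages are handled one at a time
    const msg = this.mailbox.shift();
    if (!msg) return;
    if (msg.type === 'deposit') this.balance += msg.amount;
    if (msg.type === 'getBalance') console.log('Balance:', this.balance);
  }
}

const account = new AccountActor();
account.send({ type: 'deposit', amount: 100 });
account.send({ type: 'getBalance' });
account.processNext();
account.processNext(); // Balance: 100
```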

SOLID is a set of principles applied to object-oriented design (OOD) to create maintainable, understandable, and flexible code, while avoiding code smells and defects. The principles are: Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion.

Domain-Driven Design

Domain-driven design (DDD) is a software design approach focusing on modeling software to match a domain according to input from that domain's experts.

In terms of object-oriented programming, it means that the structure and language of software code (class names, class methods, class variables) should match the business domain. For example, if the software processes loan applications, it might have classes like LoanApplication and Customer, and methods such as AcceptOffer and Withdraw.

DDD connects the implementation to an evolving model and it is predicated on the following goals:

  • Placing the project's primary focus on the core domain and domain logic;

  • Basing complex designs on a model of the domain;

  • Initiating a creative collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems.

  • Domain Driven Design Quickly (Official Docs)
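
A loose sketch of the loan-application example mentioned above, with class and method names drawn from the domain language; all details are illustrative.

```typescript
class Customer {
  constructor(public readonly id: string, public readonly name: string) {}
}

class LoanApplication {
  private status: 'pending' | 'accepted' | 'withdrawn' = 'pending';

  constructor(public readonly applicant: Customer, public readonly amount: number) {}

  acceptOffer(): void {            // method named in the language of the domain
    if (this.status !== 'pending') {
      throw new Error('Only pending applications can be accepted');
    }
    this.status = 'accepted';
  }

  withdraw(): void {
    this.status = 'withdrawn';
  }
}
```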

OOP

Object-oriented programming (OOP) is a computer programming model that organizes software design around data, or objects, rather than functions and logic. An object can be defined as a data field that has unique attributes and behavior.

Software Architect Basics

Understand different concepts such as what is software architecture, software architect, different types of architects and so on.

What is Software Architecture?

Describes how an application is built including its components, how they interact with each other, environment in which they operate and so on.

What is Software Architect?

An expert developer who design software solutions from the ground up, making high-level decisions about each stage of the process including technical standards, tools, design principles, platforms to be used, etc., leading a team of engineers to create the final product.

Levels of Architecture

Architecture can be done on several “levels” of abstraction. The level influences the importance of necessary skills. As many categorizations are possible, my favorite segmentation includes these 3 levels:

  • Application Level: The lowest level of architecture. Focus on one single application. Very detailed, low level design. Communication is usually within one development team.
  • Solution Level: The mid-level of architecture. Focus on one or more applications which fulfill a business need (business solution). Some high, but mainly low-level design. Communication is between multiple development teams.
  • Enterprise Level: The highest level of architecture. Focus on multiple solutions. High level, abstract design, which needs to be detailed out by solution or application architects. Communication is across the organization.

Application Level Architecture

The lowest level of architecture. Focus on one single application. Very detailed, low level design. Communication is usually within one development team.

Solution Level Architecture

The mid-level of architecture. Focus on one or more applications which fulfill a business need (business solution). Some high, but mainly low-level design. Communication is between multiple development teams.

Enterprise Level Architecture

The highest level of architecture. Focus on multiple solutions. High level, abstract design, which needs to be detailed out by solution or application architects. Communication is across the organization.

Important Skills

To support the laid-out activities, specific skills are required. From my experience, the books I have read, and discussions, we can boil this down to these ten skills every software architect should have:

  • Design
  • Decide
  • Simplify
  • Code
  • Document
  • Communicate
  • Estimate
  • Balance
  • Consult
  • Market

Design and Architecture

What makes a good design? This is probably the most important and challenging question. I will make a distinction between theory and practice. In my experience, having a mix of both is most valuable. Let’s start with theory:

  • Know the basic design patterns: Patterns are one of the most important tools an architect needs to have to develop maintainable systems. With patterns you can reuse designs to solve common problems with proven solutions. The book “Design Patterns: Elements of Reusable Object-Oriented Software” written by John Vlissides, Ralph Johnson, Richard Helm, and Erich Gamma is a must-read for everyone who is in software development. Although the patterns were published more than 20 years ago they are still the basis of modern software architecture. For example, the Model-View-Controller (MVC) pattern was described in this book; it is applied in many areas and is the basis for newer patterns, e.g. Model-View-ViewModel (MVVM).
  • Dig deeper into patterns and anti-patterns: If you already know all basic Gang-of-Four patterns, then extend your knowledge with more software design patterns or dig deeper into your area of interest. One of my favorite books about application integration is “Enterprise Integration Patterns” written by Gregor Hohpe. This book is applicable in various areas whenever two applications need to exchange data, whether it is an old-school file exchange from some legacy systems or a modern microservice architecture.
  • Know quality measures: Defining architecture is not the end. There are reasons why guidelines and coding standards are defined, applied and controlled. You do this because of quality and non-functional requirements. You want to have a system which is maintainable, reliable, adaptable, secure, testable, scalable, usable, etc. And one piece to achieving all of these quality attributes is applying good architecture work. You can start to learn more about quality measures on Wikipedia. Theory is important. Practice is equally—or even more—important if you do not want to become an Ivory Tower Architect.
  • Try out and understand different technology stacks: I think this is the most important activity if you want to become a better architect. Try out (new) technology stacks and learn their ups and downs. Different or new technology comes with different design aspects and patterns. You most likely do not learn anything from just flipping through abstract slides but by trying it out by yourself and feeling the pain or the relief. An architect should not only have broad, but—also in some areas—deep knowledge. It is not important to master all technology stacks but to have a solid understanding of the most important in your area. Also, try out technology which is not in your area, e.g., if you are deep into SAP R/3 you should also try JavaScript and vice versa. Still, both parties will be surprised about the latest advances in SAP S/4 Hana. For example, you can try it by yourself and take a course at openSAP for free. Be curious and try out new things. Also try out stuff which you did not like some years ago.
  • Analyze and understand applied patterns: Have a look at any current framework, e.g., Angular. You can study a lot of patterns in practice, e.g., Observables. Try to understand how the pattern is applied in the framework and why it was done that way. And if you are really dedicated, have a deeper look into the code and understand how it was implemented.
  • Be curious and attend user groups, e.g., via Meetup.
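
To make the pattern idea concrete, here is a minimal, hedged sketch of the Observer pattern in Python (the names and the integer state are made up for illustration); the same decoupling underlies MVC views observing a model and the observables found in frameworks such as Angular.

```python
# Minimal Observer pattern: a subject notifies registered observers when its state changes.
class Subject:
    def __init__(self):
        self._observers = []
        self._value = 0

    def subscribe(self, observer):
        """Register a callable that will be invoked with the new state."""
        self._observers.append(observer)

    def set_value(self, value):
        self._value = value
        for observer in self._observers:  # push the change to every subscriber
            observer(value)


subject = Subject()
subject.subscribe(lambda v: print(f"view re-rendered with {v}"))
subject.subscribe(lambda v: print(f"audit log: value changed to {v}"))
subject.set_value(42)  # both observers are notified
```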

Decision Making

An architect needs to be able to take decisions and guide projects or the entire organization in the right direction.

  • Know what is important: Do not waste time on unimportant decisions or activities. Learn what is important. To my knowledge there is no book which contains this information. My personal favorites are these two characteristics which I usually consider when evaluating whether something is important or not:
    1. Conceptual Integrity: If you decide to do something in one way, stick to it, even if it is sometimes better to do it differently. Usually, this leads to a more straightforward overall concept, eases comprehensibility and eases maintenance.
    2. Uniformity: If you, for example, define and apply naming conventions, the point is not whether you choose upper- or lowercase, but that the convention is applied everywhere in the same way.
  • Prioritize: Some decisions are highly critical. If they are not taken early enough, workarounds are built up which are often unlikely to be removed later and are a nightmare for maintenance, or, worse, developers simply stop working until a decision is taken. In such situations it is sometimes even better to go with a “bad” decision than to have no decision at all. But before it comes to this, consider prioritizing upcoming decisions. There are different ways to do so. I suggest having a look at the Weighted Shortest Job First (WSJF) model which is widely used within agile software development. Especially the measures of time criticality and risk reduction are useful for estimating the priority of architecture decisions.
  • Know your competence: Do not decide things which are outside your competence. This is critical, as it may significantly damage your position as architect if not respected. To avoid this, clarify with your peers which responsibilities you have and what is part of your role. If there is more than one architect, then you should respect the level of architecture at which you are currently deployed. As a lower level architect you should come up with suggestions for higher level architecture rather than decisions. Further, I recommend always checking critical decisions with a peer.
  • Evaluate multiple options: Always lay out more than one option when it comes to decisions. In the majority of the cases I was involved in, there was more than one possible (good) option. Going with only one option is bad in two respects: first, it looks like you did not do your job properly, and second, it impedes making proper decisions. By defining measures, options can be compared based on facts instead of gut feeling, e.g. license costs or maturity. This usually leads to better and more sustainable decisions, and it also makes it easier to sell the decision to different stakeholders. Besides, if you have not evaluated the options properly you may miss arguments when it comes to discussions.

Simplifying Things

Keep in mind the problem-solving principle Occam’s Razor, which states a preference for simplicity. I interpret the principle as follows: if you have too many assumptions about the problem to solve, your solution will probably be wrong or unnecessarily complex. Assumptions should be reduced (simplified) to arrive at a good solution.

  • Shake the solution: To simplify a solution, it often helps to “shake” it and look at it from different positions. Try to shape the solution by thinking top-down and then again bottom-up. If you have a data flow or process, then first think left to right and then again right to left. Ask questions such as: “What happens to your solution in a perfect world?” Or: “What would company / person X do?” (Where X is probably not your competitor, but one of the GAFA (Google, Apple, Facebook, & Amazon) companies.) Both questions force you to reduce assumptions as suggested by Occam’s Razor.
  • Take a step back: After intense and long discussions, highly complex scribbles are often the result. You should never see these as the final outcome. Take a step back: look at the big picture again (abstract level). Does it still make sense? Then go through it on the abstract level again and refactor. Sometimes it helps to stop a discussion and continue the next day. At least my brain needs some time to process and to come up with better, more elegant and simpler solutions.
  • Divide and Conquer: Simplify the problem by dividing it into smaller pieces. Then solve them independently. Afterwards validate if the small pieces match together. Take the step back to have a look at the overall picture for this.
  • Refactoring is not evil: It is totally OK to start with a more complex solution if no better idea can be found. If the solution causes trouble you can later rethink it and apply what you have learned. Refactoring is not evil. But before you start refactoring, keep in mind to have (1) enough automated tests in place which can ensure the proper functionality of the system and (2) the buy-in from your stakeholders. To learn more about refactoring I suggest reading “Refactoring: Improving the Design of Existing Code” by Martin Fowler.

How to Code

Even as an Enterprise Architect, working at the most abstract level of architecture, you should still know what developers are doing on a daily basis. And if you do not understand how the work is done, you may face two major problems:

  • Developers won’t accept your proposals.
  • You will not understand the challenges and needs of developers.
  • Have a side project: The purpose of this is to try out new technologies and tools to find out how development is done today and in the future. Experience is the combination of observations, emotions and hypotheses (“Experience and Knowledge Management in Software Engineering” by Kurt Schneider). Reading a tutorial or some pros and cons is good, but this is just “book knowledge”. Only if you try things out yourself can you experience the emotions and build up hypotheses about why something is good or bad. And the longer you work with a technology, the better your hypotheses will get. This will help you take better decisions in your day-to-day work. When I started programming I had no code completion and only some utility libraries to speed up development. Obviously, with this background I would make wrong decisions today. Today, we have tons of programming languages, frameworks, tools, processes and practices. Only if you have some experience and a rough overview of the major trends are you able to take part in the conversation and to steer development in the right direction.
  • Find the right things to try out: You cannot try out everything. This is simply impossible. You need a more structured approach. One source I recently discovered is the Technology Radar from ThoughtWorks. They categorize technologies, tools, platforms, languages and frameworks into four categories:
    • Adopt: “strong feeling to be ready for enterprise usage”.
    • Trial: “enterprise should try it in one project that can handle the risk”.
    • Assess: “explore how it affects your enterprise”
    • Hold: “proceed with caution”.

With this categorization it is easier to get an overview of new things and their readiness to better evaluate which trend to explore next.

Documentation

Architectural documentation is sometimes more and sometimes less important. Important documents are, for example, architectural decisions or code guidelines. Initial documentation is often required before coding starts and needs to be refined continuously. Other documentation can be generated automatically, as code can also serve as documentation, e.g. UML class diagrams.

  • Clean Code: Code is the best documentation if done right. A good architect should be capable of distinguishing between good and bad code. A really great resource to learn more about good and bad code is the book “Clean Code” by Robert C. Martin.
  • Generate documentation where possible: Systems change quickly and it is hard to keep the documentation up to date. Whether it is about APIs or system landscapes in the form of CMDBs (configuration management databases): the underlying information often changes too fast to keep the corresponding documentation current by hand. Example: for APIs you could auto-generate documentation from the definition file if you are model-driven, or directly from the source code. A lot of tools exist for that; I think Swagger and RAML are a good starting point to learn more.
  • As much as necessary, as little as possible: Whatever you need to document, e.g., decision papers, try to focus on only one thing at a time and include only the necessary information for that one thing. Extensive documentation is hard to read and to understand. Additional information should be stored in the appendix. Especially for decision papers it is more important to tell a convincing story than to just throw out tons of arguments. Further, this saves you and your co-workers, who have to read it, a lot of time. Have a look at some documentation you have produced in the past (source code, models, decision papers, etc.) and ask yourself the following questions: “Is all the necessary information included to understand it?”, “Which information is really required and which could be omitted?” and “Does the documentation have a clear thread running through it?”.
  • Learn more about architecture frameworks: This point could be applied to all the other “technical” points as well. I put it here, as frameworks like TOGAF or Zachman provide “tools” which feel heavy on the documentation side, although their added value is not limited to documentation. Getting certified in such a framework teaches you to tackle architecture more systematically.

Communication

From my observations this is one of the most underestimated skills. If you are brilliant in design but cannot communicate your ideas, your thoughts are likely to have less impact or even fail to succeed.

  • Learn how to communicate your ideas: When collaborating on a board or flip chart, it is essential to know how to use it properly in order to structure your own and your peers’ thoughts. I found the book “UZMO — Thinking With Your Pen” to be a good resource to enhance my skills in this area. As an architect you usually do not just participate in a meeting; you usually need to drive and moderate it.
  • Give talks to large groups: Presenting your ideas to a small or large group should be doable for you. If you feel uncomfortable with this, start presenting to your best friend. Enlarge the group slowly. This is something which you can only learn by doing and by leaving your personal comfort zone. Be patient with yourself, this process may take some time.
  • Find the right level of communication: Different stakeholders have different interests and views. They need to be addressed individually at their level. Before you communicate, step back and check if the information you want to share has the right level regarding abstractness, content, goals, motivations, etc. Example: a developer is usually interested in the fine details of the solution, whereas a manager prefers to know which option saves the most money.
  • Communicate often: A brilliant architecture is worthless if nobody knows about it. Distribute the target architecture and the thoughts behind it, regularly and on every organizational level. Schedule meetings with developers, architects and managers to show them the desired or defined way.
  • Be transparent: Regular communication only partially mitigates missing transparency. You need to make the reasons behind decisions transparent. Especially if people are not involved in the decision-making process, it is hard for them to understand and follow the decision and the rationale behind it.
  • Always be prepared to give a presentation: There is always someone with questions and you want to give the right answers immediately. Try to always have the most important slides in a consolidated set which you can show and explain. It saves you a lot of time and gives you confidence.

Estimate and Evaluate

  • Know basic project management principles: As architect or lead developer you are often asked for estimates to realize your ideas: how long, how much, how many people, which skills, etc.? Of course, if you plan to introduce new tools or frameworks you need to have answers to these kinds of “management” questions. Initially, you should be able to give a rough estimate, in days, months or years. And do not forget that it is not only about implementing; there are more activities to consider, like requirements engineering, testing and fixing bugs. Therefore, you should know the activities of the software development process being used. One thing you can apply to get better estimates is to use past data and derive your prediction from it. If you do not have past data, you can also try approaches such as COCOMO by Barry W. Boehm (a rough sketch of the basic formula follows this list). If you are deployed on an agile project, learn how to estimate and plan properly: the book “Agile Estimating and Planning” by Mike Cohn provides a solid overview in this area.
  • Evaluate “unknown” architecture: As architect you should also be able to evaluate the suitability of an architecture for the current or future context(s). This is not an easy task, but you can prepare for it by having a set of questions at hand which apply to every architecture. And it is not only about architecture but also about how the system is managed, as this also gives you insights about its quality. I suggest always having some questions prepared and ready to use. Some ideas for general questions:
    • Design practices: Which patterns does the architecture follow? Are they used consistently and correctly? Does the design follow a clear thread or is there uncontrolled growth? Is there a clear structure and separation of concerns?
    • Development practices: Code guidelines in place and followed? How is the code versioned? Deployment practices?
    • Quality assurance: Test automation coverage? Static code analysis in place and good results? Peer reviews in place?
    • Security: Which security concepts are in place? Built-in security? Penetration tests or automated security analysis tools in place and regularly used?
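
As a rough illustration of formula-based estimation, here is a small sketch of the basic COCOMO model with its commonly cited organic-mode coefficients; in practice you would calibrate such a model with your own historical data rather than rely on these defaults.

```python
# Basic COCOMO (organic mode): effort and schedule estimated from size in KLOC.
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    effort = a * kloc ** b      # person-months
    duration = c * effort ** d  # calendar months
    return effort, duration


effort, duration = basic_cocomo(kloc=50)  # e.g. a system estimated at 50,000 lines of code
print(f"~{effort:.0f} person-months over ~{duration:.1f} months")
```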

Balance

  • Quality comes at a price: Earlier I talked about quality and non-functional requirements. If you overdo architecture it will increase costs and probably lower development speed. You need to balance architectural and functional requirements. Over-engineering should be avoided.
  • Solve contradicting goals: A classic example of contradicting goals are short- and long-term goals. Projects often tend to build the simplest solution, whereas an architect has the long-term vision in mind. Often, the simple solution does not fit into the long-term solution and is at risk of being thrown away later (sunk costs). To avoid implementation heading in the wrong direction, two things need to be considered:
    1. Developers and business need to understand the long-term vision and its benefits in order to adapt their solution, and
    2. managers who are responsible for the budget need to be involved in order to understand the financial impact. It is not necessary to have 100% of the long-term vision in place immediately, but the developed piece should fit into it.
  • Conflict management: Architects are often the glue between multiple groups with different backgrounds. This may lead to conflicts on different levels of communication. To find a balanced solution which also reflects long-term, strategic goals, it is often the role of the architect to help overcome the conflict. My starting point regarding communication theory was the “Four-Ears Model” of Schulz von Thun. Based on this model a lot can be shown and deduced. But this theory needs some practice, which should be gained during communication seminars.

Consult and Coach

Being proactive is probably the best thing you can do when it comes to consulting and coaching. If you wait until you are asked, it is often too late. And cleaning up on the architecture side is something you want to avoid. You need to somehow foresee the next weeks, months or even years and prepare yourself and the organization for the next steps.

  • Have a vision: If you are deployed on a project, whether it follows a traditional waterfall-like approach or an agile one, you always need to have a vision of the mid- and long-term goals you want to achieve. This is not a detailed concept, but more a roadmap towards which everyone can work. As you cannot achieve everything at once (it is a journey) I prefer to use maturity models. They give a clear structure which can be easily consumed and show the current status of progress at any time. For different aspects I use different models, e.g. development practices or continuous delivery. Every level in the maturity model has clear requirements which follow the SMART criteria in order to make it easier to measure whether it has been achieved or not. One nice example I found is for continuous delivery.
  • Build a community of practice (CoP): Exchanging experience and knowledge among a common interest group helps distribute ideas and standardize approaches. For example, you could gather all JavaScript developers and architects in one room every three months or so and discuss past and current challenges and how they were tackled, or new methodologies and approaches. Architects can share, discuss and align their visions, developers can share experience and learn from their peers. Such a round can be highly beneficial for the enterprise but also for the individuals themselves, as it helps build a stronger network and distributes ideas. Also check out the article Communities of Practice from the SAFe framework, which explains the CoP concept in an agile setting.
  • Conduct open door sessions: One source of misconceptions or ambiguity is lack of communication. Block a fixed time slot, e.g. 30 minutes every week, for exchanging hot topics with your peers. This session has no agenda; everything can be discussed. Try to solve minor things on the spot and schedule follow-ups for the more complex topics.

Marketing Skills

Your ideas are great and you have communicated them well but still nobody wants to follow? Then you probably lack marketing skills.

  • Motivate and convince: How do companies convince you to buy a product? They demonstrate its value and benefits. But not just with five bullet points. They wrap it nicely and make it as easy as possible to digest.
    • Prototypes: Show a prototype of your idea. There are plenty of tools for creating prototypes. In the context of enterprises that love SAP, check out build.me, in which you can create nice-looking, clickable UI5 apps quickly and easily.
    • Show a video: Instead of “boring slides” you can also show a video which demonstrates your idea or at least the direction. But please, don’t overdo the marketing: in the long term, content is king. If your words do not come true, your reputation will suffer.
  • Fight for your ideas and be persistent: People sometimes do not like your ideas or are just too lazy to follow them. If you are really convinced by your ideas, you should continuously go after them and “fight”. This is sometimes necessary. Architecture decisions with long-term goals are often not the easiest ones: developers do not like them, as they are more complex to develop; managers do not like them, as they are more expensive in the short term. It is your job to be persistent and to negotiate.
  • Find allies: Establishing or enforcing your ideas on your own can be hard or even impossible. Try to find allies who can support you and help convince others. Use your network. If you do not have one yet, start building it now. You could start by talking to your (open-minded) peers about your ideas. If they like it, or at least parts of it, it is likely that they will support your idea when asked by others (“The idea by X was interesting.”). If they don’t like it, ask why: maybe you have missed something? Or your story is not convincing enough? The next step is to find allies with decision power. Ask for an open-minded discussion. If you fear the discussion, remember that sometimes you need to leave your comfort zone.
  • Repeat It, Believe It: “[…] studies show that repeated exposure to an opinion makes people believe the opinion is more prevalent, even if the source of that opinion is only a single person.” (Source: The Financial Brand) If you publish a few messages often enough, it can help convince people more easily. But be aware: from my perspective such a strategy should be used wisely, as it could backfire as a lousy marketing trick.

Technical Skills

  • Experience in software development
  • Experience in project management
  • Knowledge of one or more programming languages, such as Java, Python, JavaScript, Ruby, Rust, and C
  • Knowledge of different development platforms
  • Understanding of web applications, cybersecurity, and open source technologies
  • Proficiency in analyzing code for issues and errors
  • Experience in database platforms
  • Experience with Operations and DevOps Skills

Programming languages

Java / Kotlin / Scala

Python

Python is a multi-paradigm language. Because it is interpreted, code can be run without a separate compilation step, and the Python syntax allows for writing code in functional, procedural or object-oriented styles. Python is frequently recommended as the first language new coders should learn, because of its focus on readability, consistency, and ease of use. This comes with some downsides, as the language is not especially performant for most production tasks. A small sketch of the three styles follows.
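
A small illustrative sketch of what multi-paradigm means in practice: the same computation written procedurally, functionally, and with a class.

```python
# The same task (sum of the squares of even numbers) in three styles.

# Procedural
total = 0
for n in range(10):
    if n % 2 == 0:
        total += n * n

# Functional
total_fn = sum(map(lambda n: n * n, filter(lambda n: n % 2 == 0, range(10))))

# Object-oriented
class SquareSummer:
    def __init__(self, numbers):
        self.numbers = numbers

    def sum_even_squares(self):
        return sum(n * n for n in self.numbers if n % 2 == 0)

assert total == total_fn == SquareSummer(range(10)).sum_even_squares() == 120
```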

Ruby

Ruby is a high-level, interpreted programming language that blends Perl, Smalltalk, Eiffel, Ada, and Lisp. Ruby focuses on simplicity and productivity along with a syntax that reads and writes naturally. Ruby supports procedural, object-oriented and functional programming and is dynamically typed.

Go

Go is an open source programming language supported by Google. Go can be used to write cloud services, CLI tools, APIs, and much more.

JavaScript

JavaScript allows you to add interactivity to your pages. Common examples that you may have seen on websites include sliders, click interactions, popups and so on. Apart from being used on the frontend in browsers, there is Node.js, an open-source, cross-platform, back-end JavaScript runtime environment that runs on the V8 engine and executes JavaScript code outside a web browser.

TypeScript

TypeScript is a strongly typed programming language that builds on JavaScript, giving you better tooling at any scale.


.NET Framework

.NET is an open-source platform with tools and libraries for building web, mobile, desktop, games, IoT, cloud, and microservices.

Officially supported languages in .NET: C#, F#, Visual Basic.

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.


Slack is a messaging app for business that connects people to the information that they need. By bringing people together to work as one unified team, Slack transforms the way that organisations communicate.


Trello is the visual tool that empowers your team to manage any type of project, workflow, or task tracking.


Serverless architecture (also known as serverless computing or function as a service, FaaS) is a software design pattern where applications are hosted by a third-party service, eliminating the need for server software and hardware management by the developer. Applications are broken up into individual functions that can be invoked and scaled individually.
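
As a hedged sketch of the idea, the following function loosely follows the handler signature used by AWS Lambda for Python; the event shape and deployment details are assumptions and vary by provider.

```python
import json


def handler(event, context):
    """A single, independently deployable and scalable function.

    `event` carries the request payload; `context` carries runtime metadata.
    The platform, not the developer, provisions and scales the servers.
    """
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }


# Local smoke test; in production the cloud provider invokes `handler` per request.
print(handler({"name": "architect"}, None))
```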

Microservices are an architectural approach to software development in which a distributed application is composed of independently deployable services that communicate through well-defined APIs. They are commonly adopted as an alternative to monolithic architectures. A minimal sketch of such a service follows.
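
A minimal sketch of one such service using Flask; the endpoint, port and data are illustrative only, and a real microservice would add persistence, configuration and observability.

```python
# A tiny, independently deployable service exposing one well-defined HTTP API.
from flask import Flask, jsonify  # pip install flask

app = Flask(__name__)

ORDERS = {1: {"id": 1, "status": "shipped"}}  # stand-in for the service's own datastore


@app.route("/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)


if __name__ == "__main__":
    # Each service runs, scales and deploys on its own; other services call it over HTTP.
    app.run(port=5001)
```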

Client server architecture

Layered architecture

Distributed systems

Service-oriented architecture (SOA) is an enterprise-wide approach to software development of application components that takes advantage of reusable software components, or services.

SOA provides four different service types:

  1. Functional services (i.e., business services), which are critical for business applications.
  2. Enterprise services, which serve to implement functionality.
  3. Application services, which are used to develop and deploy apps.
  4. Infrastructure services, which are instrumental for backend processes like security and authentication.

OWASP or Open Web Application Security Project is an online community that produces freely-available articles, methodologies, documentation, tools, and technologies in the field of web application security.

Auth strategies

JWT
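
A minimal sketch of token-based authentication with the PyJWT library; the secret and claims are placeholders, and production systems typically use managed keys and often asymmetric algorithms.

```python
# Issue and verify a signed JSON Web Token (JWT).
import time

import jwt  # pip install PyJWT

SECRET = "change-me"  # placeholder; keep real keys out of source code

# The auth server issues a token containing claims about the authenticated user.
token = jwt.encode(
    {"sub": "alice", "role": "admin", "exp": int(time.time()) + 900},
    SECRET,
    algorithm="HS256",
)

# Each service verifies the signature and expiry before trusting the claims.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"], claims["role"])
```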

SAML

OAuth & OpenID Connect

Working with data

Spark, Hadoop MapReduce

Apache Spark is a data processing framework that can quickly perform processing tasks on very large data sets, and can also distribute data processing tasks across multiple computers, either on its own or in tandem with other distributed computing tools.

Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
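
To make the programming model concrete, here is a small PySpark sketch (assuming a local Spark installation); the point is that the same transformation code can run on a laptop or be distributed across a cluster.

```python
# Word count with PySpark; the same code runs locally or on a cluster.
from pyspark.sql import SparkSession, functions as F  # pip install pyspark

spark = SparkSession.builder.appName("word-count").getOrCreate()

lines = spark.createDataFrame([("the quick brown fox",), ("the lazy dog",)], ["line"])
counts = (
    lines.select(F.explode(F.split("line", " ")).alias("word"))
    .groupBy("word")
    .count()
)
counts.show()
spark.stop()
```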

ETL Datawarehouses

In the world of data warehousing, if you need to bring data from multiple different data sources into one, centralized database, you must first:

  • EXTRACT data from its original source
  • TRANSFORM data by deduplicating it, combining it, and ensuring quality, to then
  • LOAD data into the target database

ETL tools enable data integration strategies by allowing companies to gather data from multiple data sources and consolidate it into a single, centralized location. ETL tools also make it possible for different types of data to work together.
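
A deliberately tiny, hand-rolled sketch of the three steps; the file name and schema are made up, and real pipelines would use dedicated ETL tools and a proper data warehouse rather than SQLite.

```python
# A tiny extract-transform-load pipeline: CSV source -> deduplicated rows -> SQLite target.
import csv
import sqlite3


def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(rows):
    seen, cleaned = set(), []
    for row in rows:
        key = row["email"].strip().lower()  # normalize before deduplicating
        if key not in seen:
            seen.add(key)
            cleaned.append((key, row["name"].strip()))
    return cleaned


def load(rows, db_path="warehouse.db"):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS customers (email TEXT PRIMARY KEY, name TEXT)")
    con.executemany("INSERT OR REPLACE INTO customers VALUES (?, ?)", rows)
    con.commit()
    con.close()


load(transform(extract("crm_export.csv")))  # "crm_export.csv" is a hypothetical source file
```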

Sql databases

SQL stands for Structured Query Language. It's used for relational databases. A SQL database is a collection of tables that stores a specific set of structured data.

Examples of SQL Databases

  • MariaDB and MySQL
  • PostgreSQL
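
A small sketch of the relational model using Python's built-in sqlite3 module; the table and rows are illustrative, and MariaDB/MySQL or PostgreSQL work the same way conceptually.

```python
# Structured data in a relational table, queried with SQL (SQLite used for illustration).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL, country TEXT)")
con.executemany(
    "INSERT INTO users (name, country) VALUES (?, ?)",
    [("Ada", "UK"), ("Linus", "FI"), ("Grace", "US")],
)

query = "SELECT name, country FROM users WHERE country != ? ORDER BY name"
for name, country in con.execute(query, ("US",)):
    print(name, country)
```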

NoSQL databases (aka 'not only SQL') are non-tabular databases and store data differently than relational tables. NoSQL databases come in a variety of types based on their data model. The main types are document, key-value, wide-column, and graph. They provide flexible schemas and scale easily with large amounts of data and high user loads.

Types of NoSQL databases

  • Document databases Ex. MongoDB
  • Key-value databases Ex. Amazon S3
  • Wide-column databases Ex. Cassandra
  • Graph databases Ex. Neo4J

Apache Spark is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters.

Hadoop

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models.

Datawarehouses principles

It is based on the assumption that every system should take care of a single concern, and that this concern should be encapsulated by the system itself.

APIs and integrations

gRPC

gRPC is a platform agnostic serialization protocol that is used to communicate between services. Designed by Google in 2015, it is a modern alternative to REST APIs. It is a binary protocol that uses HTTP/2 as a transport layer. It is a high performance, open source, general-purpose RPC framework that puts mobile and HTTP/2 first.

Its main use case is communication between services written in different languages within the same application. You can use Python to communicate with Go, or Java to communicate with C#.

gRPC uses the protocol buffer language to define the structure of the data that is sent and received.

  • gRPC Website (Official Website)

  • gRPC Introduction (Official Website)

  • gRPC Core Concepts (Official Website)

  • Stephane Maarek - gRPC Introduction (Watch)

ESB / SOAP

GraphQL

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
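
A minimal sketch of a GraphQL request from Python; the public countries demo API is used purely for illustration and its exact schema is an assumption, but the shape of the request is the same against any GraphQL endpoint.

```python
# A GraphQL request asks for exactly the fields the client needs, nothing more.
import json
import urllib.request

QUERY = """
{
  country(code: "DE") {
    name
    capital
  }
}
"""

req = urllib.request.Request(
    "https://countries.trevorblades.com/",  # public demo endpoint, for illustration only
    data=json.dumps({"query": QUERY}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # only `name` and `capital` come back
```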

  • Apollo GraphQL Tutorials (Read)

REST

REST, or REpresentational State Transfer, is an architectural style for providing standards between computer systems on the web, making it easier for systems to communicate with each other.

BPM BPEL

BPM: Business Process Management

Medium and large enterprises need robust processes to streamline their business by reducing the cost incurred per process and diminishing the turnaround time for each activity. To achieve this, there are various BPM tools like PEGA, IBM BPM, Appian, etc. Basically these tools automate the processes through robust process modelling and implementation.

BPMN: Business Process Model and Notation

It is a standard for representing business processes graphically. While modelling a process, the notations used comply with BPMN (there are others, like EPC, etc.). So BPMN is a standard notation that BPM consultants follow to model business processes. BPMN is versioned, and BPMN 2.0 is now the standard.

BPEL: Business Process Execution Language

Programmers use BPEL to define how a business process that involves web services will be executed. BPEL messages are typically used to invoke remote services, orchestrate process execution and manage events and exceptions. BPEL is often associated with BPMN: in many organizations, analysts use BPMN to visualize business processes and developers transform the visualizations into BPEL for execution.

Messaging queues

Message queuing makes it possible for applications to communicate asynchronously, by sending messages to each other via a queue. A message queue provides temporary storage between the sender and the receiver so that the sender can keep operating without interruption when the destination program is busy or not connected.
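
A broker-free sketch of the idea using only Python's standard library; a real system would place RabbitMQ, Kafka, SQS or a similar broker between separate processes rather than two threads in one program.

```python
# Producer and consumer decoupled by a queue: the sender never waits for the receiver.
import queue
import threading
import time

messages = queue.Queue()


def producer():
    for i in range(3):
        messages.put(f"order-{i} placed")  # fire and forget
        print(f"sent    order-{i}")


def consumer():
    while True:
        msg = messages.get()  # blocks until a message is available
        time.sleep(0.1)       # simulate slow or intermittent processing
        print(f"handled {msg}")
        messages.task_done()


threading.Thread(target=consumer, daemon=True).start()
producer()
messages.join()  # wait until every message has been processed
```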


Functional programming is a programming paradigm built around pure functions in the mathematical sense. The paradigm focuses on writing programs by composing small, pure functions.
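
A minimal sketch in Python: a pure function (the VAT rate is just an example value) composed with map and reduce instead of mutating shared state.

```python
# Pure functions have no side effects; programs are built by composing them.
from functools import reduce


def add_vat(price, rate=0.19):
    return price * (1 + rate)  # depends only on its inputs, mutates nothing


prices = [10.0, 25.0, 7.5]
total = reduce(lambda acc, p: acc + p, map(add_vat, prices), 0.0)
print(round(total, 2))  # same inputs always produce the same output
```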

Reactive programming describes a design paradigm that relies on asynchronous programming logic to handle real-time updates to otherwise static content. It uses automated data streams as an efficient means of propagating data updates to content whenever a user makes a request.

React

React is the most popular front-end JavaScript library for building user interfaces. React can also render on the server using Node and power mobile apps using React Native.

Vue.js

Vue.js is an open-source JavaScript framework for building user interfaces and single-page applications. It is mainly focused on front end development.

Angular

Angular is a component based front-end development framework built on TypeScript which includes a collection of well-integrated libraries that include features like routing, forms management, client-server communication, and more.

SPA vs SSG vs SSR

  • SPA: A single page application loads only a single web document from the server and then updates the content of that document on demand via JavaScript APIs, without reloading the entire document. React, Vue and Angular are the top frameworks used to create single page applications.

  • SSR: This technique uses a server such as Node.js to fully render the web document upon receiving a request and then send it back to the client. This way the user gets an interactive document with all the necessary information without having to wait for any JavaScript or CSS files to load.

  • SSG: Static site generation renders the web document on the server (like SSR), but the page is rendered at build time. So, instead of rendering the page on the server upon receiving a request, the page is already rendered on the server, waiting to be served to the client.

  • Web design patterns — SSR, SSG, and SPA (Read)

  • Rendering on the Web (Read)

PWA

Progressive Web Apps (PWAs) are websites that are progressively enhanced to function like installed, native apps on supporting platforms, while functioning like regular websites on other browsers.

Microfrontends

Microfrontends are an architectural style in which independently deliverable frontend applications, built by different teams possibly using different technologies, are composed into a greater whole. Simply put, a micro-frontend is a portion of a webpage (not the entire page). In a micro-frontend architecture there is a “host” or “container” page that can host one or more micro-frontends.

W3c and WHATWG Standards

World Wide Web Consortium (W3C) standards define best practices for web development, enabling developers to build rich interactive experiences that are available on any device. These standards range from recommended web technologies such as HTML, CSS and XML to generally accepted principles of web architecture, semantics and services.

Web Hypertext Application Technology Working Group (WHATWG) is another set of web standards that came into existence after W3C announced that it was going to be focusing on XHTML over HTML.

Architect frameworks

Babok

The guide to the Business Analysis Body of Knowledge (BABOK Guide) is a book from the International Institute of Business Analysis (IIBA) that provides business analysts (BAs) with strategies for using data to improve an organization's workflow processes, technology, products and services.

Iaf

The Integrated Architecture Framework (IAF) is an enterprise architecture framework that covers business, information, information system and technology infrastructure.

UML

The Unified Modeling Language, or UML, is a modeling language that is intended to provide a standard way to visualize and describe the design of a system.

Togaf

The TOGAF content framework provides a detailed model of architectural work products, including deliverables, artifacts within deliverables, and the architectural building blocks that artifacts represent.

Management

Certifications

PMI / ITIL / PRINCE2 / RUP

Agile / Scrum

LeSS

SAFe

Networks

OSI and TCP/IP Models

The OSI and TCP/IP models are used to help developers design their systems for interoperability. The OSI model has 7 layers, while the TCP/IP model is a more condensed form of the OSI model, consisting of only 4 layers. This is important if you are trying to design a system that communicates with other systems.

HTTP / HTTPS

HTTP is the TCP/IP based application layer communication protocol which standardizes how the client and server communicate with each other. It defines how the content is requested and transmitted across the internet.

HTTPS (Hypertext Transfer Protocol Secure) is the secure version of HTTP, which is the primary protocol used to send data between a web browser and a website.

HTTPS = HTTP + SSL/TLS

Proxies

In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource.

Firewalls

A Firewall is a network security device that monitors and filters incoming and outgoing network traffic based on an organization's previously established security policies.

Operations knowledge

Infrastructure as Code

Sometimes referred to as IaC, this section refers to the techniques and tools used to define infrastructure, typically in a markup language like YAML or JSON. Infrastructure as code allows DevOps Engineers to use the same workflows used by software developers to version, roll back, and otherwise manage changes.

The term Infrastructure as Code encompasses everything from bootstrapping to configuration to orchestration, and it is considered a best practice in the industry to manage all infrastructure as code. This technique precipitated the explosion in system complexity seen in modern DevOps organizations.

Serverless is a cloud-native development model that allows developers to build and run applications without having to manage servers.

There are still servers in serverless, but they are abstracted away from app development. A cloud provider handles the routine work of provisioning, maintaining, and scaling the server infrastructure. Developers can simply package their code in containers for deployment.

Linux / Unix

Knowledge of UNIX is a must for almost all kinds of development, as most of the code you write is most likely going to be deployed on a UNIX/Linux machine. Linux has been the backbone of the free and open source software movement, providing a simple and elegant operating system for almost all your needs.

Service Mesh

A Service Mesh is a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application. In layman's terms, it's a tool which helps you to control how different services communicate with each other.

CI/CD is a method to frequently deliver apps to customers by introducing automation into the stages of app development. The main concepts attributed to CI/CD are continuous integration, continuous delivery, and continuous deployment. CI/CD is a solution to the problems integrating new code can cause for development and operations teams (AKA 'integration hell').

Containers

Containers are a construct in which cgroups, namespaces, and chroot are used to fully encapsulate and isolate a process. The encapsulated process shares the kernel of the host with other containers and is started from a static artifact called a container image, which allows containers to be significantly smaller and faster than virtual machines.

These images are designed for portability, allowing for full local testing of a static image, and easy deployment to a container management platform.

Cloud Design Patterns

These design patterns are useful for building reliable, scalable, secure applications in the cloud.

Microsoft's catalog of cloud design patterns describes, for each pattern, the problem it addresses, considerations for applying it, and an example based on Microsoft Azure. Most patterns include code samples or snippets that show how to implement the pattern on Azure. However, most patterns are relevant to any distributed system, whether hosted on Azure or on other cloud platforms.

Enterprise software

MS Dynamics

Microsoft Dynamics 365 is a combination of both Enterprise Resource Planning (ERP) software and Customer Relationship Management (CRM) software.

SAP ERP / HANA / Business Objects

EMC DMS

IBM BPM

IBM BPM is a comprehensive business process management platform. It provides a robust set of tools to author, test, and deploy business processes, as well as full visibility and insight for managing those business processes.

Salesforce

Architect Responsibilities

To understand the skills an architect needs, we first need to understand the architect's typical activities. The following list contains, from my perspective, the most important ones:

  • Define and decide development technology and platform
  • Define development standards, e.g., coding standards, tools, review processes, test approach, etc.
  • Support identifying and understanding business requirements
  • Design systems and take decisions based on requirements
  • Document and communicate architectural definitions, design and decisions
  • Check and review architecture and code, e.g., check if defined patterns and coding standards are implemented properly
  • Collaborate with other architects and stakeholders
  • Coach and consult developers
  • Make sure that as implementation takes place, the architecture is being adhered to
  • Play a key part in reviewing code
  • Detail out and refine higher level design into lower level design

Note: Architecture is a continuous activity, especially when it is applied in agile software development. Therefore, these activities are done over and over again.
