FE System Design

Making SEO better

🔹 Meta Title (<title> tag)
<title>Best SEO Tips & Strategies for Higher Rankings | MyWebsite</title>

🔹 Meta Description (<meta name="description"> tag)
<meta name="description" content="Learn the best SEO tips, strategies, and techniques to rank higher on Google. Improve your website’s visibility with expert insights.">

🔹 Viewport Meta Tag (<meta name="viewport"> tag)
<meta name="viewport" content="width=device-width, initial-scale=1">


1. Technical SEO

Improve Site Speed:

  • Use a CDN (Content Delivery Network).
  • Optimize images (use WebP format, compress large files).
  • Minify CSS, JavaScript, and HTML.
  • Enable lazy loading for images (see the sketch after this list).
  • Implement caching using tools like Redis or browser cache.
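
As a quick illustration of the lazy-loading tip above, here is a minimal sketch (the component and image URL are hypothetical) using the browser-native loading="lazy" attribute:

function ProductImage() {
  // WebP keeps the payload small; loading="lazy" defers the fetch until
  // the image nears the viewport; explicit width/height avoid layout shift
  return (
    <img
      src="/images/product.webp"
      alt="Red running shoe, side view"
      loading="lazy"
      width="400"
      height="300"
    />
  );
}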

2. On-Page SEO

Optimize Images with Alt Text:

  • Use descriptive alt text for images to improve accessibility and SEO.

Improve URL Structure:

  • Keep URLs short and keyword-rich (/best-seo-tips instead of /post123).

Use Internal Linking Strategically:

  • Link to relevant pages using anchor text to improve navigation and SEO.

Use semantic HTML tags like <main>, <section>, and <header> so search engines can understand the page structure (see the sketch below).
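
A minimal JSX sketch pulling these on-page tips together (the URLs and copy are illustrative, not from a real site):

function ArticlePage() {
  return (
    <main>
      <header>
        <h1>Best SEO Tips & Strategies</h1>
      </header>
      <section>
        {/* Descriptive alt text improves accessibility and image SEO */}
        <img src="/images/seo-checklist.webp" alt="Checklist of on-page SEO tasks" />
        {/* Short, keyword-rich internal link with meaningful anchor text */}
        <a href="/best-seo-tips">Read our best SEO tips</a>
      </section>
    </main>
  );
}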


Base64 Vs Binary Images from Frontend

  • In FormData - the raw binary Blob (Binary Large Object) is sent directly to the BE.
  • In Base64 - the Blob is first encoded as a Base64 string and sent in the request body (roughly 33% larger than the raw binary).

To send a file from your React frontend to the backend in Base64 format, follow these steps:


Frontend (React) - Convert File to Base64 & Send via Fetch

  1. Select a file using an <input> element.
  2. Convert the file to Base64 using FileReader.
  3. Send the Base64 string in the request body.

React Code (Frontend)

import React, { useState } from "react";

const FileUpload = () => {
  const [fileBase64, setFileBase64] = useState("");

  const handleFileChange = (event) => {
    const file = event.target.files[0];
    if (!file) return;

    const reader = new FileReader();
    reader.onload = () => {
      setFileBase64(reader.result.split(",")[1]); // Strip the "data:<mime>;base64," prefix
    };
    reader.readAsDataURL(file); // Convert to Base64
  };

  const handleUpload = async () => {
    if (!fileBase64) {
      alert("Please select a file first.");
      return;
    }

    const payload = {
      fileName: "example.png", // Can be dynamic
      fileData: fileBase64, // The actual Base64 content
    };

    try {
      const response = await fetch("http://localhost:5000/upload", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });

      const result = await response.json();
      console.log("Upload success:", result);
    } catch (error) {
      console.error("Upload failed:", error);
    }
  };

  return (
    <div>
      <input type="file" onChange={handleFileChange} />
      <button onClick={handleUpload}>Upload</button>
    </div>
  );
};

export default FileUpload;

Backend (Node.js with Express) - Decode Base64 & Save the File

Your backend should:

  1. Accept a JSON payload with fileData (Base64 string) and fileName.
  2. Decode Base64 to binary.
  3. Save the file to disk.

Node.js + Express Backend

const express = require("express");
const fs = require("fs");
const path = require("path");
const app = express();
const PORT = 5000;

// Middleware to parse JSON
app.use(express.json({ limit: "10mb" })); // Increase limit if needed

app.post("/upload", async (req, res) =&gt; {
  try {
    const { fileName, fileData } = req.body;

    if (!fileName || !fileData) {
      return res.status(400).json({ message: "Missing file data" });
    }

    // Convert Base64 to Buffer
    const buffer = Buffer.from(fileData, "base64");

    // Define file path
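    // NOTE: fileName comes from the client; sanitize it (e.g., with path.basename) in real apps to prevent path traversal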
    const filePath = path.join(__dirname, "uploads", fileName);

    // Ensure the uploads directory exists (resolved relative to this file, like filePath above)
    const uploadsDir = path.join(__dirname, "uploads");
    if (!fs.existsSync(uploadsDir)) {
      fs.mkdirSync(uploadsDir);
    }

    // Save file
    fs.writeFileSync(filePath, buffer);
    
    res.json({ message: "File uploaded successfully", filePath });
  } catch (error) {
    res.status(500).json({ message: "Error uploading file", error });
  }
});

app.listen(PORT, () => {
  console.log(`Server running on http://localhost:${PORT}`);
});

How It Works

  1. User selects a file in the React UI.
  2. The file is converted to Base64 using FileReader.
  3. React sends the Base64 string to the backend via fetch().
  4. The backend:
    • Receives the Base64 string.
    • Decodes it into binary format.
    • Saves the file to the uploads directory.

Alternative Approach (FormData)

If your backend does not require Base64, it's more efficient to use FormData (sends the file as binary):

React - Send as FormData

// selectedFile is the File object captured from an <input type="file"> change event
const handleUpload = async () => {
  const formData = new FormData();
  formData.append("file", selectedFile);

  // Do NOT set a Content-Type header here; the browser adds
  // "multipart/form-data" with the correct boundary automatically
  const response = await fetch("http://localhost:5000/upload", {
    method: "POST",
    body: formData,
  });

  const result = await response.json();
  console.log("Upload success:", result);
};

Backend - Express with Multer

const multer = require("multer");
const upload = multer({ dest: "uploads/" });

app.post("/upload", upload.single("file"), (req, res) =&gt; {
  res.json({ message: "File uploaded", filePath: req.file.path });
});

FileReader in Detail

import React, { useState } from "react";
import "./App.css";

function App() {
  const [fileData, setFileData] = useState({
    arrayBuffer: null,
    dataUrl: null,
    text: null,
  });

  const handleFileChange = (event) => {
    const file = event.target.files[0];

    // For ArrayBuffer - for images etc - non text
    const reader1 = new FileReader();
    reader1.onload = (e) => {
      setFileData((prev) => ({ ...prev, arrayBuffer: e.target.result }));
    };
    reader1.readAsArrayBuffer(file);

    // For DataURL -> to show on UI directly
    const reader2 = new FileReader();
    reader2.onload = (e) => {
      setFileData((prev) => ({ ...prev, dataUrl: e.target.result }));
    };
    reader2.readAsDataURL(file);

    // For Text - for text file like csv, .json, .js , .txt etc
    const reader3 = new FileReader();
    reader3.onload = (e) => {
      setFileData((prev) => ({ ...prev, text: e.target.result }));
    };
    reader3.readAsText(file);
  };

  return (
    <div>
      <input type="file" onChange={handleFileChange} />

      {fileData.dataUrl && (
        <img
          src={fileData.dataUrl}
          alt="Preview"
          style={{ maxWidth: "300px" }}
        />
      )}

      {fileData.text && <pre>{fileData.text}</pre>}

      {fileData.arrayBuffer && (
        <p>File size in bytes: {fileData.arrayBuffer.byteLength}</p>
      )}
    </div>
  );
}

export default App;

multipart talks

Why Do We Use multipart/form-data in Headers for File Uploads?

When uploading files via FormData, the request goes out with the header Content-Type: multipart/form-data. The browser sets this header itself, including the per-request boundary parameter, so you should not set it by hand; the format exists so the server can properly parse a single body that mixes text fields and binary files (see the note after the example below).


Does This Mean the File Is Sent in Chunks?

Yes and No

  • Yes, because multipart/form-data splits the request body into multiple parts (text fields + files) using a boundary string.
  • No, because it's not "chunked" like streaming; instead, the entire file is still part of a single HTTP request.

How Does multipart/form-data Work?

  1. Each part of the request (including the file) is separated by a unique boundary string.
  2. The file is sent as raw binary data within the body, making it more efficient than encoding it into base64.
  3. The server extracts the file and other fields based on the boundaries.

Example of multipart/form-data Request Body

When you send a file using FormData, your request body looks something like this:

------WebKitFormBoundaryXyz
Content-Disposition: form-data; name="username"

JohnDoe
------WebKitFormBoundaryXyz
Content-Disposition: form-data; name="file"; filename="image.png"
Content-Type: image/png

(binary file data here)
------WebKitFormBoundaryXyz--

  • Each part has a header (e.g., Content-Disposition) that tells the server what the field represents.
  • The file is sent as raw binary data, not encoded (which keeps it small and fast).
  • The boundary string (------WebKitFormBoundaryXyz) separates parts of the request.
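
One practical gotcha, shown as a minimal sketch (the /upload endpoint and fileInput element are illustrative): when sending FormData with fetch, let the browser generate the Content-Type header, because the boundary value is created per request.

const fileInput = document.querySelector('input[type="file"]');

const formData = new FormData();
formData.append("file", fileInput.files[0]);

// Correct: no Content-Type header; the browser adds
// "multipart/form-data; boundary=----WebKitFormBoundary..." itself
fetch("/upload", { method: "POST", body: formData });

// Wrong: hard-coding the header omits the boundary,
// so the server cannot split the body into parts:
// fetch("/upload", {
//   method: "POST",
//   headers: { "Content-Type": "multipart/form-data" },
//   body: formData,
// });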


HLD - Ticket Booking System

Ticket Booking System - https://excalidraw.com/#json=yjZFXJdRdHu4e4R2-n-9C,AoIqj9Gia_F-MRl8ZJxA0A


CI/CD Process


CI/CD in Frontend Development (Explained in a Detailed & Easy Manner)

CI/CD stands for Continuous Integration (CI) and Continuous Deployment (CD). It is a process that helps developers automate testing and deployment so that code changes can be delivered quickly, safely, and efficiently.


1. Understanding CI (Continuous Integration)

What is CI?

CI is the process where developers frequently push code to a shared repository (like GitHub, GitLab, or Bitbucket). Every time code is pushed, it is automatically built, tested, and verified to ensure nothing breaks.

Steps in CI (Frontend Perspective)

🔹 Step 1: Developer Writes Code

  • You make changes in your frontend code (React, Angular, Vue, etc.).
  • You push the code to a Git repository (e.g., GitHub).

🔹 Step 2: Automated Build Process

  • CI tools (Jenkins, GitHub Actions, GitLab CI, CircleCI) detect the new changes.
  • The build process starts (e.g., Webpack, Vite, Babel compile the code).
  • If there are errors, the build fails immediately.

🔹 Step 3: Automated Testing

  • Runs unit tests (e.g., Jest, Mocha, Cypress for frontend).
  • Runs linting checks (ESLint, Prettier).
  • Runs static code analysis (SonarQube, CodeClimate).
  • If tests pass, the process continues. Otherwise, developers fix issues.

CI ensures every new code change is verified, tested, and bug-free before merging.


2. Understanding CD (Continuous Deployment/Delivery)

What is CD?

CD is the process of automatically deploying the tested and verified code to a staging or production environment.

There are two variations:

  1. Continuous Delivery – Code is deployed to a staging environment, but deployment to production requires manual approval.
  2. Continuous Deployment – Code is automatically deployed to production without manual approval.

Steps in CD (Frontend Perspective)

🔹 Step 4: Deployment to Staging

  • Once CI is successful, the code is deployed to a staging environment.
  • Staging is a testing ground that mimics production.

🔹 Step 5: Manual or Automated Approval for Production

  • If the company uses Continuous Delivery, a human approves before deploying to production.
  • If using Continuous Deployment, the system automatically deploys.

🔹 Step 6: Deployment to Production

  • The frontend code is uploaded to a CDN (e.g., Cloudflare, AWS S3 + CloudFront, Vercel, Netlify).
  • The application is now live for users!

CD ensures that the latest changes reach users quickly and safely.


3. Real-Life Frontend CI/CD Example with GitHub Actions

name: CI/CD Pipeline

on:
  push:
    branches:
      - main  # Runs on main branch push

jobs:
  build_and_test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Install Dependencies
        run: npm install

      - name: Run Linter
        run: npm run lint

      - name: Run Tests
        run: npm run test

      - name: Build Application
        run: npm run build

  deploy:
    needs: build_and_test
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to S3
        run: aws s3 sync ./build s3://my-frontend-bucket --delete

This workflow automatically builds, tests, and deploys frontend code!


Deployments

What Exactly Happens During Deployment in CI/CD?

You already understand CI (Continuous Integration)—it handles build, testing, and linting. Now, let’s go deep into CD (Continuous Deployment/Delivery) and understand what happens internally when we deploy React (frontend), Node.js (backend), CDN (static assets), and Database changes.


1. What is Deployment Actually Doing?

When we say, "Deploy to staging (stg) or production (prod)," the process involves:

  • Packing the app (React build, Node.js files, DB migrations).
  • Uploading to a hosting platform (AWS, Vercel, Netlify, Docker, Kubernetes).
  • Updating services (backend API servers, CDN assets, databases).
  • Switching traffic (blue-green deployment, canary release, etc.).


2. Deployment Internals for React (Frontend Apps)

Scenario: Deploying a React app to AWS S3 + CloudFront

Step-by-Step Internal Process

1️⃣ The Build is Uploaded to a Storage Bucket (AWS S3, Vercel, Netlify, Firebase Hosting)

  • When you run npm run build, React creates a build/ folder with:
    build/
    ├── index.html
    ├── static/
        ├── js/ (hashed JS files)
        ├── css/ (hashed CSS files)
        ├── media/ (optimized images)
    
  • These files are uploaded to AWS S3 or another hosting provider.
    aws s3 sync build/ s3://my-react-app --delete

2️⃣ CDN (CloudFront, Vercel) Caches and Distributes the Files

  • The CDN caches these files across global edge locations.
  • If an old version exists, we invalidate the cache to force updates:
    aws cloudfront create-invalidation --distribution-id ABC123 --paths "/*"
  • This ensures users always get the latest frontend code.

3️⃣ DNS & Routing Updates

  • If using Vercel or Netlify, the deployment happens automatically, and the domain is updated.
  • If using AWS Route 53 (custom domain), we update DNS to point to the latest CloudFront distribution.

Now, when users visit the website, they fetch the latest React files via CDN.


3. Deployment Internals for Node.js (Backend)

Scenario: Deploying a Node.js API (Express.js, NestJS, or Next.js SSR) to AWS EC2

Step-by-Step Internal Process

1️⃣ Package the Code as a Deployable Unit

  • If using plain Node.js, we package the files and node_modules (or use Docker).
  • If using TypeScript, we compile it (tsc generates JS files).

2️⃣ Upload to a Remote Server

  • Using SSH & SCP (manual method):
    scp -r ./dist user@server:/var/www/myapp
  • Using AWS Elastic Beanstalk / Lambda / Docker:
    eb deploy    # Elastic Beanstalk

3️⃣ Start the Node.js Process

  • If using PM2 (Process Manager):
    pm2 start server.js --name "backend"
  • If using Docker, pull the latest image:
    docker pull my-backend:latest
    docker run -d -p 3000:3000 my-backend

4️⃣ Configure Load Balancer and Traffic Routing

  • If using Nginx or AWS ALB, it routes traffic to the Node.js instance.
  • If using Kubernetes, the Ingress Controller updates the routing.

Now, the backend API is live, and client requests are served by the latest Node.js version.


Deployment Strategies: Rolling Update vs. Canary Deployment vs. Blue-Green Deployment

When deploying new versions of an application, we need strategies that ensure zero downtime, smooth rollouts, and safe rollbacks in case of failure. Three common deployment strategies are Rolling Updates, Canary Deployments, and Blue-Green Deployments.


1️⃣ Rolling Update

What is a Rolling Update?

A Rolling Update replaces old application instances gradually, without taking down the entire service.

How It Works?

  1. A new version of the app (v2) is deployed gradually, replacing instances of the old version (v1).
  2. New instances are started, while old instances are terminated in batches.
  3. Traffic is always available because at least some instances are running.
  4. If an issue is detected, the update can be paused or rolled back.

Example: Rolling Update in Kubernetes

Kubernetes updates pods one by one (or in batches) while keeping the service running.

Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4  # 4 instances running
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # At most 1 pod down at a time
      maxSurge: 1  # Add at most 1 new pod at a time
  template:
    spec:
      containers:
        - name: app
          image: my-app:v2  # New version of the app

🚀 What Happens?

  • Step 1: One old pod (v1) is terminated.
  • Step 2: A new pod (v2) is started.
  • Step 3: Repeat until all pods are updated.
  • If an error occurs? Rollback to my-app:v1.

When to Use Rolling Updates?

✔️ Works well if new updates are safe (e.g., UI fixes, minor API changes).
✔️ Ensures zero downtime and smooth rollouts.
❌ If a bug exists in the new version, all instances will eventually have the issue before it's detected.


2️⃣ Canary Deployment

What is Canary Deployment?

A Canary Deployment releases the new version to a small percentage of users first. If everything is stable, the update gradually rolls out to more users.

How It Works?

  1. Deploy v2 to 5% of users while keeping 95% on v1.
  2. Monitor logs, errors, and performance.
  3. If v2 is stable, increase traffic (e.g., 25%, 50%, 100%).
  4. If v2 has issues, rollback to v1 before affecting all users.

Example: Canary Deployment with Nginx

We use Nginx's split_clients directive to send only ~5% of requests to the new version.

# split_clients deterministically buckets clients by hashing the key
# (here the client IP), sending ~5% of traffic to the new version
split_clients "${remote_addr}" $app_upstream {
  5%    new_version;
  *     old_version;
}

upstream old_version {
  server old-app.example.com;
}

upstream new_version {
  server new-app.example.com;
}

server {
  listen 80;

  location / {
    proxy_pass http://$app_upstream;
  }
}

🔹 Blue-Green Deployment (Zero-Downtime Deployment Strategy)

1️⃣ What is Blue-Green Deployment?

Blue-Green Deployment is a zero-downtime deployment strategy where you have two identical environments:

  • 🔵 Blue (Current Live Version)
  • 🟢 Green (New Version Ready for Deployment)

At any time, only one of them is live. When the new version (Green) is ready, traffic is switched instantly from Blue to Green. If issues occur, you can rollback instantly by switching traffic back to Blue.


2️⃣ How Blue-Green Deployment Works?

1️⃣ Run two environments (Blue & Green)

  • Blue serves live traffic.
  • Green runs the new version but does not get live traffic yet.

2️⃣ Test the Green Environment

  • Perform integration tests and monitor logs.
  • Ensure the new version (Green) is stable.

3️⃣ Switch Traffic to Green

  • Modify Load Balancer (NGINX, AWS ALB, Kubernetes Ingress, etc.)
  • Users now access Green instead of Blue.

4️⃣ Keep Blue as a Backup

  • If an issue arises, switch back to Blue instantly.

5️⃣ Delete Blue (Optional)

  • If Green is stable, delete Blue or use it for the next release.

3️⃣ Real-World Example: Blue-Green Deployment

🔹 Example 1: Blue-Green Deployment in AWS (Using ALB)

Amazon Web Services (AWS) Application Load Balancer (ALB) allows traffic switching.

  • Step 1: Deploy Blue (v1) on EC2 / ECS / Kubernetes.
  • Step 2: Deploy Green (v2) on a separate instance.
  • Step 3: Test Green using a temporary staging URL.
  • Step 4: When Green is stable, update the ALB Target Group to redirect all traffic to Green (see the sketch after this list).
  • Step 5: If issues occur, switch back to Blue instantly.
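
To make Step 4 concrete, here is a hedged sketch using the AWS SDK for JavaScript v3 (the ARNs and region are placeholders, not real values): the ALB listener's default action is repointed from the Blue target group to the Green one.

const {
  ElasticLoadBalancingV2Client,
  ModifyListenerCommand,
} = require("@aws-sdk/client-elastic-load-balancing-v2");

const client = new ElasticLoadBalancingV2Client({ region: "us-east-1" });

// Flip all traffic from the Blue target group to the Green one
async function switchToGreen() {
  await client.send(
    new ModifyListenerCommand({
      ListenerArn: "arn:aws:elasticloadbalancing:REGION:ACCOUNT:listener/app/my-alb/LISTENER_ID", // placeholder
      DefaultActions: [
        {
          Type: "forward",
          TargetGroupArn: "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/green/TG_ID", // placeholder
        },
      ],
    })
  );
  // Rolling back is the same call with the Blue target group's ARN
}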


Iframe Access

If you want to access the parent window from an iframe, you can use:

window.parent

If you want to access a child iframe from the parent window, you can use:

iframeElem.contentWindow

Important Note:

Direct access between the parent and child iframe is only possible if both are on the same domain. If they are on different domains, the postMessage API is used to securely send and receive messages.

Example using postMessage:

Parent Window (parent.html):

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Parent Window</title>
</head>
<body>
    <iframe id="myIframe" src="child.html" width="400" height="200"></iframe>
    
    <button onclick="sendMessage()">Send Message to Iframe</button>
    
    <script>
        const iframe = document.getElementById("myIframe");

        function sendMessage() {
            // Prefer a specific target origin over "*" so the message
            // cannot be read by an unexpected document loaded in the frame
            iframe.contentWindow.postMessage("Hello from Parent!", "https://trusted-domain.com");
        }

        window.addEventListener("message", (event) => {
            if (event.origin !== "https://trusted-domain.com") return; // Security check
            console.log("Received from iframe:", event.data);
        });
    </script>
</body>
</html>

Child Iframe (child.html):

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Child Iframe</title>
</head>
<body>
    <script>
        window.addEventListener("message", (event) => {
            if (event.origin !== "https://trusted-domain.com") return; // Security check
            console.log("Received from parent:", event.data);

            // Sending response back to parent
            event.source.postMessage("Hello from Iframe!", event.origin);
        });
    </script>
</body>
</html>

This ensures secure communication between different domains using postMessage.


Design patterns

Design patterns help you write clean, maintainable, and efficient code by following best practices.
They are reusable approaches/solutions to common problems developers face.

In JavaScript and General Front-End

  1. Module Pattern

    • What It Is: A way to organize code into self-contained “modules” that keep details private and only expose what’s necessary.
    • Why It’s Useful: It prevents global variables from cluttering the environment and makes code easier to maintain.
    • Example:
      // Using an IIFE (Immediately Invoked Function Expression)
      const myModule = (function() {
        const privateVar = "I am hidden";
        function privateFunc() {
          console.log(privateVar);
        }
        return {
          publicFunc: function() {
            privateFunc();
          }
        };
      })();
      
      myModule.publicFunc(); // Logs: I am hidden
  2. Observer Pattern

    • What It Is: A pattern where one piece of code (the "subject") keeps a list of dependents ("observers") and notifies them automatically of any changes or events.
    • Why It’s Useful: It’s great for handling events and updating parts of your application when something changes.
    • Example:
      class EventEmitter {
        constructor() {
          this.events = {};
        }
        subscribe(event, listener) {
          if (!this.events[event]) {
            this.events[event] = [];
          }
          this.events[event].push(listener);
        }
        emit(event, data) {
          if (this.events[event]) {
            this.events[event].forEach(listener => listener(data));
          }
        }
      }
      
      const emitter = new EventEmitter();
      emitter.subscribe('greet', message => console.log(message));
      emitter.emit('greet', 'Hello, world!'); // Logs: Hello, world!
  3. Singleton Pattern

    • What It Is: A design that restricts a class to a single instance.
    • Why It’s Useful: When you need one central object (like a configuration or a state manager) that is shared across your application.
    • Example:
      const Singleton = (function() {
        let instance;
        function createInstance() {
          return { timestamp: new Date() };
        }
        return {
          getInstance: function() {
            if (!instance) {
              instance = createInstance();
            }
            return instance;
          }
        };
      })();
      
      const obj1 = Singleton.getInstance();
      const obj2 = Singleton.getInstance();
      console.log(obj1 === obj2); // true
  4. Factory Pattern

    • What It Is: A way to create objects without specifying the exact class of the object that will be created.
    • Why It’s Useful: It allows you to centralize object creation and can help in managing different types of objects based on input.
    • Example:
      function carFactory(type) {
        if (type === "sedan") {
          return { wheels: 4, type: "sedan" };
        } else if (type === "truck") {
          return { wheels: 6, type: "truck" };
        }
      }
      
      const myCar = carFactory("sedan");
      console.log(myCar); // { wheels: 4, type: "sedan" }


GDPR and PCI DSS

GDPR -> General Data Protection Regulation
PCI DSS -> Payment Card Industry Data Security Standard

GDPR:

  • Requires clear user consent and transparency in data collection.
  • Empowers users with rights to access, modify, or delete their data.
  • Enforces secure data storage and strict privacy practices.

PCI DSS:

  • Mandates secure handling and encryption of payment card data.
  • Requires robust network security measures (firewalls, regular scans).
  • Involves periodic audits to ensure ongoing compliance.

In Short: Both standards push you to implement strong security and privacy measures, but they also increase the development and maintenance effort to avoid hefty fines and legal issues.


SSRF - Server-Side Request Forgery

What is SSRF?

  • SSRF (Server-Side Request Forgery) is when an attacker tricks a server into making unintended requests.
  • How it works: The attacker sends a malicious URL to the server, which then fetches that URL, often accessing internal or protected systems.

How Attackers Exploit SSRF:

  • They force the server to access internal resources (like 127.0.0.1, 10.x.x.x, etc.) or cloud services (e.g., metadata endpoints).
  • This can reveal sensitive information or allow further attacks.

Prevention Methods:

  • Input Validation: Only allow trusted URLs (whitelisting) and block internal IP addresses (e.g., 127.0.0.1, 192.168.x.x, etc.).
  • Network Restrictions: Use firewall rules to limit outbound requests.
  • Protocol Restrictions: Allow only necessary protocols (e.g., HTTP/HTTPS).
  • Secure Proxies: Route requests through controlled, secure proxies.
  • Regular Updates: Keep all software and libraries up-to-date.

Here's a concrete example to illustrate SSRF:

Scenario: Image Fetching Feature

The Setup:
Imagine a website that lets users add a profile picture by providing an image URL. When a user submits the URL, the server fetches the image from that URL to display it on the profile.

How It Could Go Wrong (SSRF Exploit):

  1. User Input:

    • A legitimate user enters: http://example.com/picture.jpg
    • The server fetches and displays the image.
  2. Attacker Input:

    • An attacker instead enters: http://127.0.0.1/admin
    • The server, following the same process, makes a request to http://127.0.0.1/admin—an internal address.
    • Result: The server might retrieve sensitive internal data or perform actions meant only for internal use, exposing confidential information.

Prevention Techniques in This Example

  1. Input Validation:

    • Whitelist URLs: Only allow URLs from trusted external domains.
    • Block Internal Addresses: Reject any URLs that point to internal IP addresses like 127.0.0.1, 192.168.x.x, or 10.x.x.x.
  2. Network Restrictions:

    • Configure firewall rules or server settings to prevent outgoing requests to internal IP ranges.
  3. Protocol Filtering:

    • Ensure that only the necessary protocols (e.g., HTTP/HTTPS) are allowed in the URLs.

Pseudocode Example

// Parse the URL and inspect the hostname instead of substring-matching
// the raw string (checks like url.includes("10.") are easy to bypass)
function fetchImage(url) {
  const { protocol, hostname } = new URL(url);

  // Allow only the necessary protocols
  if (protocol !== "http:" && protocol !== "https:") {
    throw new Error("Protocol not allowed.");
  }

  // Block loopback and private ranges
  if (hostname === "localhost" || hostname.startsWith("127.") ||
      hostname.startsWith("10.") || hostname.startsWith("192.168.")) {
    throw new Error("Access to internal resources is not allowed.");
  }

  // Proceed to fetch the image if the URL looks safe
  return http.get(url);
}


SSJI (Server-Side JavaScript Injection)

SSJI (Server-Side JavaScript Injection) occurs when user input is directly executed as JavaScript code on the server. This can allow attackers to run arbitrary code with the same privileges as the server process.

Short Example

app.get('/', (req, res) => {
  // The server expects a simple arithmetic expression from the user
  const userCode = req.query.code; // e.g., "2+2"

  // Vulnerable: Directly evaluating user input
  const result = eval(userCode);
  res.send(`Result: ${result}`);
});

Risk:
If an attacker passes malicious code (e.g., process.exit(1)), they can disrupt or compromise the server by executing unwanted commands.


How to prevent it

  1. Avoid Dynamic Code Execution: Don’t use eval or similar functions on untrusted input.
  2. Validate & Sanitize Inputs: Only allow expected characters/data (whitelisting) to prevent injection.
  3. Use Safe Libraries: Utilize libraries (e.g., math.js for arithmetic) that safely parse and evaluate expressions (see the sketch after this list).
  4. Sandbox Execution: If dynamic evaluation is needed, run code in a restricted environment (e.g., Node’s vm module with strict limitations).
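
As a sketch of point 3 (assuming the math.js npm package, whose evaluate function parses arithmetic expressions without handing them to the JavaScript engine), the vulnerable handler above could be rewritten like this:

const express = require("express");
const { evaluate } = require("mathjs");

const app = express();

app.get("/", (req, res) => {
  const userCode = req.query.code; // e.g., "2+2"

  try {
    // mathjs evaluates the expression in its own parser, so input
    // like "process.exit(1)" throws an error instead of executing
    const result = evaluate(userCode);
    res.send(`Result: ${result}`);
  } catch (err) {
    res.status(400).send("Invalid expression");
  }
});

app.listen(3000);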


SRI - SubResource Integrity

SubResource Integrity (SRI) is a security feature that allows browsers to verify that external resources—such as JavaScript files, CSS files, or other assets—have not been tampered with during transit.

How SRI Works:

  • Hashing the Resource: When you include an external resource, you generate a cryptographic hash (using algorithms like SHA-256, SHA-384, or SHA-512) of the file’s contents.
  • Adding the Integrity Attribute: You then include this hash in your HTML markup using the integrity attribute on the <script> or <link> tag.
  • Browser Verification: When the browser fetches the resource, it computes the hash of the downloaded file and compares it with the hash provided in the integrity attribute.
  • Action Based on Match: If the hashes match, the browser proceeds to use the resource. If they don’t, the browser blocks the resource from executing or rendering, thereby protecting the user from potentially malicious modifications.
<script src="https://example.com/script.js"
        integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxo7DS7FK5U4IuEKJvU03kJjgVnIDT+"
        crossorigin="anonymous"></script>

How hash is generated
Suppose you have a file script.js:

  • You read the entire contents of script.js.
  • You run the content through a hash function like SHA-256.
  • The resulting hash (in binary) is then Base64-encoded to produce a string.
  • This string is then placed in your HTML tag as follows:
<script src="https://example.com/script.js"
        integrity="sha256-Base64EncodedHashValue"
        crossorigin="anonymous"></script>

In summary, the hash is generated solely from the file's content using a secure, standardized cryptographic hash function. This ensures that the browser can verify the integrity of the resource by re-computing the hash when the file is loaded and comparing it to the one provided in the HTML.
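
A minimal Node.js sketch of that hashing step (the file name is illustrative), printing the value to place in the integrity attribute:

const crypto = require("crypto");
const fs = require("fs");

// Read the exact bytes the browser will download
const contents = fs.readFileSync("script.js");

// SHA-384 hash of the contents, Base64-encoded
const hash = crypto.createHash("sha384").update(contents).digest("base64");

console.log(`integrity="sha384-${hash}"`);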


All Security

IFrame Attack
CSP - Content Security Policy

XSS - Cross-Site Scripting
CSRF - Cross-Site Request Forgery

SSRF - Server-Side Request Forgery
SSJI - Server-Side JavaScript Injection

SRI - SubResource Integrity
