System Design Course

Learn how to design systems at scale and prepare for system design interviews.

Challenges

Here are some challenges of event-driven architecture:

  • Guaranteed delivery.
  • Error handling is difficult.
  • Event-driven systems are complex in general.
  • Exactly once, in-order processing of events.

Use cases

Below are some common use cases where event-driven architectures are beneficial:

  • Metadata and metrics.
  • Server and security logs.
  • Integrating heterogeneous systems.
  • Fanout and parallel processing.

Examples

Here are some widely used technologies for implementing event-driven architectures:

  • Apache Kafka
  • NATS
  • RabbitMQ
  • Amazon SNS

Event Sourcing

Instead of storing just the current state of the data in a domain, use an append-only store to record the full series of actions taken on that data. The store acts as the system of record and can be used to materialize the domain objects.


This can simplify tasks in complex domains, by avoiding the need to synchronize the data model and the business domain, while improving performance, scalability, and responsiveness. It can also provide consistency for transactional data, and maintain full audit trails and history that can enable compensating actions.
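
As a minimal sketch of the idea (the bank-account domain and event names here are illustrative assumptions, in TypeScript):

type Event =
  | { type: "Deposited"; amount: number }
  | { type: "Withdrawn"; amount: number };

const eventStore: Event[] = []; // append-only system of record

function append(event: Event): void {
  eventStore.push(event); // events are only ever appended, never updated
}

// Materialize the current state by replaying the full series of events.
function currentBalance(): number {
  return eventStore.reduce(
    (balance, e) => (e.type === "Deposited" ? balance + e.amount : balance - e.amount),
    0
  );
}

append({ type: "Deposited", amount: 100 });
append({ type: "Withdrawn", amount: 30 });
console.log(currentBalance()); // 70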

Event sourcing vs Event-Driven Architecture (EDA)

Event sourcing is frequently confused with event-driven architecture (EDA). Event-driven architecture is about using events to communicate between service boundaries, generally by leveraging a message broker to publish and consume events asynchronously across those boundaries.

Event sourcing, on the other hand, is about using events as state, which is a different approach to storing data: rather than storing the current state, we store the events themselves. Event sourcing is also one of several patterns for implementing an event-driven architecture.

Advantages

Let's discuss some advantages of using event sourcing:

  • Excellent for real-time data reporting.
  • Great for fail-safety, data can be reconstituted from the event store.
  • Extremely flexible, any type of message can be stored.
  • Preferred way of achieving audit logs functionality for high compliance systems.

Disadvantages

Following are the disadvantages of event sourcing:

  • Requires an extremely efficient network infrastructure.
  • Requires a reliable way to control message formats, such as a schema registry.
  • Different events will contain different payloads.

Command and Query Responsibility Segregation (CQRS)

Command Query Responsibility Segregation (CQRS) is an architectural pattern that divides a system's actions into commands and queries. It was first described by Greg Young.

In CQRS, a command is an instruction, a directive to perform a specific task. It is an intention to change something and doesn't return a value, only an indication of success or failure. A query, in contrast, is a request for information that doesn't change the system's state or cause any side effects.


The core principle of CQRS is the separation of commands and queries. They perform fundamentally different roles within a system, and separating them means that each can be optimized as needed, which distributed systems can really benefit from.
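
Here's a minimal sketch of that separation (the in-memory stores and names are illustrative assumptions):

interface User { id: string; name: string }

const writeStore = new Map<string, User>();
const readStore = new Map<string, User>(); // denormalized read model

// Command: changes state, returns only an indication of success or failure.
function createUser(id: string, name: string): boolean {
  if (writeStore.has(id)) return false;
  const user = { id, name };
  writeStore.set(id, user);
  readStore.set(id, user); // in a real system this sync is often asynchronous
  return true;
}

// Query: returns data, causes no side effects.
function getUser(id: string): User | undefined {
  return readStore.get(id);
}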

CQRS with Event Sourcing

The CQRS pattern is often used along with the Event Sourcing pattern. CQRS-based systems use separate read and write data models, each tailored to relevant tasks and often located in physically separate stores.

When used with the Event Sourcing pattern, the store of events is the write model and is the official source of information. The read model of a CQRS-based system provides materialized views of the data, typically as highly denormalized views.

Advantages

Let's discuss some advantages of CQRS:

  • Allows independent scaling of read and write workloads.
  • Easier scaling, optimizations, and architectural changes.
  • Closer to business logic with loose coupling.
  • The application can avoid complex joins when querying.
  • Clear boundaries between the system behavior.

Disadvantages

Below are some disadvantages of CQRS:

  • More complex application design.
  • Message failures or duplicate messages can occur.
  • Dealing with eventual consistency is a challenge.
  • Increased system maintenance efforts.

Use cases

Here are some scenarios where CQRS will be helpful:

  • The performance of data reads must be fine-tuned separately from the performance of data writes.
  • The system is expected to evolve over time and might contain multiple versions of the model, or where business rules change regularly.
  • Integration with other systems, especially in combination with event sourcing, where the temporal failure of one subsystem shouldn't affect the availability of the others.
  • Better security to ensure that only the right domain entities are performing writes on the data.

API Gateway

The API Gateway is an API management tool that sits between a client and a collection of backend services. It is a single entry point into a system that encapsulates the internal system architecture and provides an API that is tailored to each client. It also has other responsibilities such as authentication, monitoring, load balancing, caching, throttling, logging, etc.


Why do we need an API Gateway?

The granularity of APIs provided by microservices is often different than what a client needs. Microservices typically provide fine-grained APIs, which means that clients need to interact with multiple services. Hence, an API gateway can provide a single entry point for all clients with some additional features and better management.
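
As an illustration, here's a minimal gateway sketch that routes requests to backend services by path prefix, using Node's built-in http module (the route table and ports are assumptions):

import http from "http";

const routes: Record<string, { host: string; port: number }> = {
  "/users": { host: "localhost", port: 4001 },
  "/orders": { host: "localhost", port: 4002 },
};

http.createServer((req, res) => {
  const prefix = Object.keys(routes).find((p) => req.url?.startsWith(p));
  if (!prefix) {
    res.writeHead(404);
    res.end("Not found");
    return;
  }
  // Forward the request to the matching backend service.
  const target = routes[prefix];
  const proxy = http.request(
    { host: target.host, port: target.port, path: req.url, method: req.method, headers: req.headers },
    (backendRes) => {
      res.writeHead(backendRes.statusCode ?? 502, backendRes.headers);
      backendRes.pipe(res);
    }
  );
  req.pipe(proxy);
}).listen(8080); // single entry point for all clients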

Features

Below are some desired features of an API Gateway:

  • Authentication and authorization
  • Rate limiting and throttling
  • Load balancing
  • Caching
  • Logging, monitoring, and analytics
  • Request routing

Advantages

Let's look at some advantages of using an API Gateway:

  • Encapsulates the internal structure of an API.
  • Provides a centralized view of the API.
  • Simplifies the client code.
  • Monitoring, analytics, tracing, and other such features.

Disadvantages

Here are some possible disadvantages of an API Gateway:

  • Possible single point of failure.
  • Might impact performance.
  • Can become a bottleneck if not scaled properly.
  • Configuration can be challenging.

Backend For Frontend (BFF) pattern

In the Backend For Frontend (BFF) pattern, we create separate backend services to be consumed by specific frontend applications or interfaces. This pattern is useful when we want to avoid customizing a single backend for multiple interfaces. This pattern was first described by Sam Newman.

Also, sometimes the data returned by the microservices is not in the exact format or filtered as the frontend needs. Instead of putting this reformatting logic in the frontend itself, we can use a BFF to shift it to an intermediate layer.


The primary function of the backend for frontend pattern is to get the required data from the appropriate service, format the data, and send it to the frontend.

GraphQL performs really well as a backend for frontend (BFF).

When to use this pattern?

We should consider using a Backend For Frontend (BFF) pattern when:

  • A shared or general purpose backend service must be maintained with significant development overhead.
  • We want to optimize the backend for the requirements of a specific client.
  • Customizations are made to a general-purpose backend to accommodate multiple interfaces.

Examples

Following are some widely used gateway technologies:

  • Amazon API Gateway
  • Apigee
  • Azure API Management
  • Kong

REST, GraphQL, gRPC

A good API design is always a crucial part of any system. But it is also important to pick the right API technology. So, in this tutorial, we will briefly discuss different API technologies such as REST, GraphQL, and gRPC.

What's an API?

Before we even get into API technologies, let's first understand what an API is.

An API is a set of definitions and protocols for building and integrating application software. It's sometimes referred to as a contract between an information provider and an information consumer, establishing the content required from the producer and the content required by the consumer.

In other words, if you want to interact with a computer or system to retrieve information or perform a function, an API helps you communicate what you want to that system so it can understand and complete the request.

REST

A REST API (also known as a RESTful API) is an application programming interface that conforms to the constraints of the REST architectural style and allows for interaction with RESTful web services. REST stands for Representational State Transfer and was first introduced by Roy Fielding in 2000.

In REST API, the fundamental unit is a resource.

Concepts

Let's discuss some concepts of a RESTful API.

Constraints

In order for an API to be considered RESTful, it has to conform to these architectural constraints:

  • Uniform Interface: There should be a uniform way of interacting with a given server.
  • Client-Server: A client-server architecture managed through HTTP.
  • Stateless: No client context shall be stored on the server between requests.
  • Cacheable: Every response should include whether it is cacheable or not, and for how long responses can be cached on the client side.
  • Layered system: An application architecture needs to be composed of multiple layers.
  • Code on demand: Return executable code to support a part of your application. (optional)

HTTP Verbs

HTTP defines a set of request methods to indicate the desired action to be performed for a given resource. Although they can also be nouns, these request methods are sometimes referred to as HTTP verbs. Each of them implements a different semantic, but some common features are shared by a group of them.

Below are some commonly used HTTP verbs:

  • GET: Request a representation of the specified resource.
  • HEAD: Response is identical to a GET request, but without the response body.
  • POST: Submits an entity to the specified resource, often causing a change in state or side effects on the server.
  • PUT: Replaces all current representations of the target resource with the request payload.
  • DELETE: Deletes the specified resource.
  • PATCH: Applies partial modifications to a resource.

HTTP response codes

HTTP response status codes indicate whether a specific HTTP request has been successfully completed.

There are five classes defined by the standard:

  • 1xx - Informational responses.
  • 2xx - Successful responses.
  • 3xx - Redirection responses.
  • 4xx - Client error responses.
  • 5xx - Server error responses.

For example, HTTP 200 means that the request was successful.

Advantages

Let's discuss some advantages of REST API:

  • Simple and easy to understand.
  • Flexible and portable.
  • Good caching support.
  • Client and server are decoupled.

Disadvantages

Let's discuss some disadvantages of REST API:

  • Over-fetching of data.
  • Sometimes multiple round trips to the server are required.

Use cases

REST APIs are used pretty much universally and are the default standard for designing APIs. Overall, REST APIs are quite flexible and can fit almost all scenarios.

Example

Here's an example usage of a REST API that operates on a users resource.

| URI | HTTP verb | Description |
| --- | --- | --- |
| /users | GET | Get all users |
| /users/{id} | GET | Get a user by id |
| /users | POST | Add a new user |
| /users/{id} | PATCH | Update a user by id |
| /users/{id} | DELETE | Delete a user by id |

There is so much more to learn when it comes to REST APIs; I highly recommend looking into Hypermedia as the Engine of Application State (HATEOAS).

GraphQL

GraphQL is a query language and server-side runtime for APIs that prioritizes giving clients exactly the data they request and no more. It was developed by Facebook and later open-sourced in 2015.

GraphQL is designed to make APIs fast, flexible, and developer-friendly. Additionally, GraphQL gives API maintainers the flexibility to add or deprecate fields without impacting existing queries. Developers can build APIs with whatever methods they prefer, and the GraphQL specification will ensure they function in predictable ways to clients.

In GraphQL, the fundamental unit is a query.

Concepts

Let's briefly discuss some key concepts in GraphQL:

Schema

A GraphQL schema describes the functionality clients can utilize once they connect to the GraphQL server.

Queries

A query is a request made by the client. It can consist of fields and arguments for the query. The operation type of a query can also be a mutation which provides a way to modify server-side data.

Resolvers

A resolver is a collection of functions that generate responses for a GraphQL query. In simple terms, a resolver acts as a GraphQL query handler.
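
For example, a minimal resolver map for a getUser query might look like this (Apollo-style shape; the in-memory data source is an assumption):

const users = [{ id: "123", name: "Karan", city: "San Francisco", state: "CA" }];

const resolvers = {
  Query: {
    // Handles the getUser query defined in the schema.
    getUser: () => users[0],
  },
};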

Advantages

Let's discuss some advantages of GraphQL:

  • Eliminates over-fetching of data.
  • Strongly defined schema.
  • Code generation support.
  • Payload optimization.

Disadvantages

Let's discuss some disadvantages of GraphQL:

  • Shifts complexity to server-side.
  • Caching becomes hard.
  • Versioning is ambiguous.
  • N+1 problem.

Use cases

GraphQL proves to be essential in the following scenarios:

  • Reducing app bandwidth usage as we can query multiple resources in a single query.
  • Rapid prototyping for complex systems.
  • When we are working with a graph-like data model.

Example

Here's a GraphQL schema that defines a User type and a Query type.

type Query {
  getUser: User
}

type User {
  id: ID
  name: String
  city: String
  state: String
}

Using the above schema, the client can request the required fields easily without having to fetch the entire resource or guess what the API might return.

{
  getUser {
    id
    name
    city
  }
}

This will give the following response to the client.

{
  "getUser": {
    "id": 123,
    "name": "Karan",
    "city": "San Francisco"
  }
}

Learn more about GraphQL at graphql.org.

gRPC

gRPC is a modern open-source high-performance Remote Procedure Call (RPC) framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking, authentication and much more.

Concepts

Let's discuss some key concepts of gRPC.

Protocol buffers

Protocol buffers provide a language and platform-neutral extensible mechanism for serializing structured data in a forward and backward-compatible way. It's like JSON, except it's smaller and faster, and it generates native language bindings.

Service definition

Like many RPC systems, gRPC is based on the idea of defining a service and specifying the methods that can be called remotely with their parameters and return types. gRPC uses protocol buffers as the Interface Definition Language (IDL) for describing both the service interface and the structure of the payload messages.

Advantages

Let's discuss some advantages of gRPC:

  • Lightweight and efficient.
  • High performance.
  • Built-in code generation support.
  • Bi-directional streaming.

Disadvantages

Let's discuss some disadvantages of gRPC:

  • Relatively new compared to REST and GraphQL.
  • Limited browser support.
  • Steeper learning curve.
  • Not human readable.

Use cases

Below are some good use cases for gRPC:

  • Real-time communication via bi-directional streaming.
  • Efficient inter-service communication in microservices.
  • Low latency and high throughput communication.
  • Polyglot environments.

Example

Here's a basic example of a gRPC service defined in a *.proto file. Using this definition, we can easily generate code for the HelloService service in the programming language of our choice.

service HelloService {
  rpc SayHello (HelloRequest) returns (HelloResponse);
}

message HelloRequest {
  string greeting = 1;
}

message HelloResponse {
  string reply = 1;
}

REST vs GraphQL vs gRPC

Now that we know how these API design approaches work, let's compare them based on the following parameters:

  • Will it cause tight coupling?
  • How chatty (distinct API calls to get needed information) are the APIs?
  • What's the performance like?
  • How complex is it to integrate?
  • How well does the caching work?
  • Built-in tooling and code generation?
  • What's API discoverability like?
  • How easy is it to version APIs?

| Type | Coupling | Chattiness | Performance | Complexity | Caching | Codegen | Discoverability | Versioning |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| REST | Low | High | Good | Medium | Great | Bad | Good | Easy |
| GraphQL | Medium | Low | Good | High | Custom | Good | Good | Custom |
| gRPC | High | Medium | Great | Low | Custom | Great | Bad | Hard |

Which API technology is better?

Well, the answer is none of them. There is no silver bullet as each of these technologies has its own advantages and disadvantages. Users only care about using our APIs in a consistent way, so make sure to focus on your domain and requirements when designing your API.

Long polling, WebSockets, Server-Sent Events (SSE)

Web applications were initially developed around a client-server model, where the web client is always the initiator of transactions like requesting data from the server. Thus, there was no mechanism for the server to independently send, or push, data to the client without the client first making a request. Let's discuss some approaches to overcome this problem.

Long polling

HTTP long polling is a technique used to push information from the server to a client as soon as possible. Because the connection is held open, the server does not have to wait for the client's next request to deliver new data.

In Long polling, the server does not close the connection once it receives a request from the client. Instead, the server responds only if any new message is available or a timeout threshold is reached.


Once the client receives a response, it immediately sends a new request to the server so that a pending connection is available for the server to push data through, and the operation is repeated. With this approach, the server emulates a real-time server push feature.

Working

Let's understand how long polling works:

  1. The client makes an initial request and waits for a response.
  2. The server receives the request and delays sending anything until an update is available.
  3. Once an update is available, the response is sent to the client.
  4. The client receives the response and makes a new request immediately or after some defined interval to establish a connection again.
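
Here's a minimal client-side sketch of this loop, assuming a hypothetical /poll endpoint that the server holds open until data is available:

async function poll(): Promise<void> {
  while (true) {
    try {
      // The server holds this request open until an update is available or it times out.
      const res = await fetch("/poll");
      if (res.status === 200) {
        const data = await res.json();
        console.log("update:", data);
      }
      // On a timeout response we simply reconnect immediately.
    } catch {
      // Back off briefly on network errors before reconnecting.
      await new Promise((r) => setTimeout(r, 1000));
    }
  }
}

poll();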

Advantages

Here are some advantages of long polling:

  • Easy to implement, good for small-scale projects.
  • Nearly universally supported.

Disadvantages

A major downside of long polling is that it is usually not scalable. Below are some of the other reasons:

  • Creates a new connection each time, which can be intensive on the server.
  • Reliable message ordering can be an issue for multiple requests.
  • Increased latency as the server needs to wait for a new request.

WebSockets

WebSocket provides full-duplex communication channels over a single TCP connection. It is a persistent connection between a client and a server that both parties can use to start sending data at any time.

The client establishes a WebSocket connection through a process known as the WebSocket handshake. If the process succeeds, then the server and client can exchange data in both directions at any time. The WebSocket protocol enables the communication between a client and a server with lower overheads, facilitating real-time data transfer from and to the server.


This is made possible by providing a standardized way for the server to send content to the client without being asked and allowing for messages to be passed back and forth while keeping the connection open.

Working

Let's understand how WebSockets work:

  1. The client initiates a WebSocket handshake process by sending a request.
  2. The request also contains an HTTP Upgrade header that allows the request to switch to the WebSocket protocol (ws://).
  3. The server sends a response to the client, acknowledging the WebSocket handshake request.
  4. A WebSocket connection will be opened once the client receives a successful handshake response.
  5. Now the client and server can start sending data in both directions allowing real-time communication.
  6. The connection is closed once the server or the client decides to close the connection.
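
A minimal browser-side sketch of this flow, using the standard WebSocket API (the endpoint URL is an assumption):

const socket = new WebSocket("wss://example.com/chat");

socket.addEventListener("open", () => {
  // The handshake succeeded; the client can now send data at any time.
  socket.send(JSON.stringify({ type: "hello" }));
});

socket.addEventListener("message", (event) => {
  // The server can also push data to the client at any time.
  console.log("received:", event.data);
});

socket.addEventListener("close", () => {
  console.log("connection closed"); // terminated connections aren't auto-recovered
});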

Advantages

Below are some advantages of WebSockets:

  • Full-duplex asynchronous messaging.
  • Better origin-based security model.
  • Lightweight for both client and server.

Disadvantages

Let's discuss some disadvantages of WebSockets:

  • Terminated connections aren't automatically recovered.
  • Older browsers don't support WebSockets (becoming less relevant).

Server-Sent Events (SSE)

Server-Sent Events (SSE) is a way of establishing long-term communication between client and server that enables the server to proactively push data to the client.


It is unidirectional, meaning that once the client sends the request, it can only receive responses without the ability to send new requests over the same connection.

Working

Let's understand how server-sent events work:

  1. The client makes a request to the server.
  2. The connection between client and server is established and it remains open.
  3. The server sends responses or events to the client when new data is available.
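
A minimal client-side sketch using the browser's standard EventSource API (the /events endpoint is an assumption):

const source = new EventSource("/events");

source.onmessage = (event) => {
  // Fired every time the server pushes a new event over the open connection.
  console.log("new data:", event.data);
};

source.onerror = () => {
  // Unlike WebSockets, EventSource automatically attempts to reconnect.
  console.log("connection error");
};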

Advantages

  • Simple to implement and use for both client and server.
  • Supported by most browsers.
  • No trouble with firewalls.

Disadvantages

  • Unidirectional nature can be limiting.
  • Limitation for the maximum number of open connections.
  • Does not support binary data.

Geohashing and Quadtrees

Geohashing

Geohashing is a geocoding method used to encode geographic coordinates such as latitude and longitude into short alphanumeric strings. It was created by Gustavo Niemeyer in 2008.

For example, San Francisco with coordinates 37.7564, -122.4016 can be represented in geohash as 9q8yy9mf.

How does Geohashing work?

Geohash is a hierarchical spatial index that uses Base-32 alphabet encoding. The first character in a geohash identifies the initial location as one of 32 cells, and each of those cells in turn contains 32 cells. To represent a point, the world is recursively divided into smaller and smaller cells with each additional bit until the desired precision is attained. The precision factor also determines the size of the cell.


Geohashing guarantees that points are spatially closer if their Geohashes share a longer prefix which means the more characters in the string, the more precise the location. For example, geohashes 9q8yy9mf and 9q8yy9vx are spatially closer as they share the prefix 9q8yy9.

Geohashing can also be used to provide a degree of anonymity: we don't need to expose a user's exact location because, depending on the length of the geohash, we only know that they are somewhere within an area.
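
To make the encoding concrete, here's a sketch of a standard geohash encoder using bit interleaving (the default precision is an arbitrary choice):

const BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

// Encode a latitude/longitude pair into a geohash of the given length.
function encodeGeohash(lat: number, lon: number, precision = 8): string {
  const latRange: [number, number] = [-90, 90];
  const lonRange: [number, number] = [-180, 180];
  let hash = "";
  let bit = 0;
  let ch = 0;
  let evenBit = true; // alternate between longitude (even) and latitude (odd) bits

  while (hash.length < precision) {
    const range = evenBit ? lonRange : latRange;
    const value = evenBit ? lon : lat;
    const mid = (range[0] + range[1]) / 2;
    if (value >= mid) {
      ch = (ch << 1) | 1; // point is in the upper half of the range
      range[0] = mid;
    } else {
      ch = ch << 1; // point is in the lower half of the range
      range[1] = mid;
    }
    evenBit = !evenBit;
    if (++bit === 5) {
      hash += BASE32[ch]; // every 5 bits yields one Base-32 character
      bit = 0;
      ch = 0;
    }
  }
  return hash;
}

console.log(encodeGeohash(37.7564, -122.4016)); // shares the "9q8yy9" prefix from the example above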

The cell sizes of the geohashes of different lengths are as follows:

| Geohash length | Cell width | Cell height |
| --- | --- | --- |
| 1 | 5000 km | 5000 km |
| 2 | 1250 km | 1250 km |
| 3 | 156 km | 156 km |
| 4 | 39.1 km | 19.5 km |
| 5 | 4.89 km | 4.89 km |
| 6 | 1.22 km | 0.61 km |
| 7 | 153 m | 153 m |
| 8 | 38.2 m | 19.1 m |
| 9 | 4.77 m | 4.77 m |
| 10 | 1.19 m | 0.596 m |
| 11 | 149 mm | 149 mm |
| 12 | 37.2 mm | 18.6 mm |

Use cases

Here are some common use cases for Geohashing:

  • It is a simple way to represent and store a location in a database.
  • It can also be shared in URLs and on social media, since a geohash is easier to share and remember than a latitude/longitude pair.
  • We can efficiently find the nearest neighbors of a point through very simple string comparisons and efficient searching of indexes.

Examples

Geohashing is widely used and is supported by popular databases such as MySQL and Redis, among others.

Quadtrees

A quadtree is a tree data structure in which each internal node has exactly four children. They are often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions. Each child or leaf node stores spatial information. Quadtrees are the two-dimensional analog of Octrees which are used to partition three-dimensional space.


Types of Quadtrees

Quadtrees may be classified according to the type of data they represent, including areas, points, lines, and curves. The following are common types of quadtrees:

  • Point quadtrees
  • Point-region (PR) quadtrees
  • Polygonal map (PM) quadtrees
  • Compressed quadtrees
  • Edge quadtrees

Why do we need Quadtrees?

Aren't latitude and longitude enough? Why do we need quadtrees? While in theory we can determine how close points are to each other using the Euclidean distance between their coordinates, for practical use cases this is simply not scalable because of its CPU-intensive nature with large datasets.


Quadtrees enable us to search points within a two-dimensional range efficiently, where those points are defined as latitude/longitude coordinates or as Cartesian (x, y) coordinates. Additionally, we can save further computation by only subdividing a node after a certain threshold. And with the application of mapping algorithms such as the Hilbert curve, we can easily improve range query performance.
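
Here's a minimal point quadtree sketch with insertion and range queries (the node capacity threshold is an arbitrary choice):

type Point = { x: number; y: number };
type Box = { x: number; y: number; w: number; h: number }; // x, y = top-left corner

class Quadtree {
  private points: Point[] = [];
  private children: Quadtree[] | null = null;

  constructor(private boundary: Box, private capacity = 4) {}

  private contains(b: Box, p: Point): boolean {
    return p.x >= b.x && p.x < b.x + b.w && p.y >= b.y && p.y < b.y + b.h;
  }

  insert(p: Point): boolean {
    if (!this.contains(this.boundary, p)) return false;
    if (this.points.length < this.capacity && !this.children) {
      this.points.push(p);
      return true;
    }
    // Subdivide into four quadrants only after the capacity threshold is reached.
    if (!this.children) this.subdivide();
    return this.children!.some((child) => child.insert(p));
  }

  private subdivide(): void {
    const { x, y, w, h } = this.boundary;
    const [hw, hh] = [w / 2, h / 2];
    this.children = [
      new Quadtree({ x, y, w: hw, h: hh }, this.capacity),
      new Quadtree({ x: x + hw, y, w: hw, h: hh }, this.capacity),
      new Quadtree({ x, y: y + hh, w: hw, h: hh }, this.capacity),
      new Quadtree({ x: x + hw, y: y + hh, w: hw, h: hh }, this.capacity),
    ];
  }

  // Collect all points that fall inside the query rectangle.
  query(range: Box, found: Point[] = []): Point[] {
    const b = this.boundary;
    const overlaps =
      range.x < b.x + b.w && range.x + range.w > b.x &&
      range.y < b.y + b.h && range.y + range.h > b.y;
    if (!overlaps) return found; // prune subtrees that can't contain matches
    for (const p of this.points) if (this.contains(range, p)) found.push(p);
    this.children?.forEach((child) => child.query(range, found));
    return found;
  }
}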

Use cases

Below are some common uses of quadtrees:

  • Image representation, processing, and compression.
  • Spatial indexing and range queries.
  • Location-based services like Google Maps, Uber, etc.
  • Mesh generation and computer graphics.
  • Sparse data storage.

Circuit breaker

The circuit breaker is a design pattern used to detect failures and encapsulates the logic of preventing a failure from constantly recurring during maintenance, temporary external system failure, or unexpected system difficulties.


The basic idea behind the circuit breaker is very simple. We wrap a protected function call in a circuit breaker object, which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error, without the protected call being made at all. Usually, we'll also want some kind of monitor alert if the circuit breaker trips.

Why do we need circuit breaking?

It's common for software systems to make remote calls to software running in different processes, probably on different machines across a network. One of the big differences between in-memory calls and remote calls is that remote calls can fail, or hang without a response until some timeout limit is reached. What's worse, if we have many callers of an unresponsive supplier, we can run out of critical resources, leading to cascading failures across multiple systems.

States

Let's discuss circuit breaker states:

Closed

When everything is normal, the circuit breaker remains closed, and all requests pass through to the service as normal. If the number of failures increases beyond the threshold, the circuit breaker trips and goes into an open state.

Open

In this state, the circuit breaker returns an error immediately without even invoking the service. It moves into the half-open state after a certain timeout period elapses. Usually, a monitoring system is used to specify this timeout.

Half-open

In this state, the circuit breaker allows a limited number of requests to pass through and invoke the service. If the requests are successful, the circuit breaker goes back to the closed state. However, if the requests continue to fail, it returns to the open state.
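
Putting the three states together, here's a minimal circuit breaker sketch (the failure threshold and reset timeout are arbitrary values):

type State = "CLOSED" | "OPEN" | "HALF_OPEN";

class CircuitBreaker {
  private state: State = "CLOSED";
  private failures = 0;
  private openedAt = 0;

  constructor(
    private failureThreshold = 5,
    private resetTimeoutMs = 30_000
  ) {}

  async call<T>(protectedFn: () => Promise<T>): Promise<T> {
    if (this.state === "OPEN") {
      if (Date.now() - this.openedAt < this.resetTimeoutMs) {
        throw new Error("circuit open"); // fail fast without invoking the service
      }
      this.state = "HALF_OPEN"; // timeout elapsed, allow a trial request
    }
    try {
      const result = await protectedFn();
      this.state = "CLOSED"; // success closes the circuit
      this.failures = 0;
      return result;
    } catch (err) {
      this.failures++;
      if (this.state === "HALF_OPEN" || this.failures >= this.failureThreshold) {
        this.state = "OPEN"; // trip the breaker
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}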

Rate Limiting

Rate limiting refers to preventing the frequency of an operation from exceeding a defined limit. In large-scale systems, rate limiting is commonly used to protect underlying services and resources. Rate limiting is generally used as a defensive mechanism in distributed systems, so that shared resources can maintain availability. It also protects our APIs from unintended or malicious overuse by limiting the number of requests that can reach our API in a given period of time.


Why do we need Rate Limiting?

Rate limiting is a very important part of any large-scale system and it can be used to accomplish the following:

  • Avoid resource starvation as a result of Denial of Service (DoS) attacks.
  • Rate limiting helps in controlling operational costs by putting a virtual cap on the auto-scaling of resources, which, if not monitored, might lead to exponential bills.
  • Rate limiting can be used as defense or mitigation against some common attacks.
  • For APIs that process massive amounts of data, rate limiting can be used to control the flow of that data.

Algorithms

There are various algorithms for API rate limiting, each with its advantages and disadvantages. Let's briefly discuss some of these algorithms:

Leaky Bucket

Leaky Bucket is an algorithm that provides a simple, intuitive approach to rate limiting via a queue. When a request arrives, the system appends it to the end of the queue. Items at the head of the queue are processed at a regular rate, first-in, first-out (FIFO). If the queue is full, additional requests are discarded (or leaked).

Token Bucket

Here we use the concept of a bucket of tokens. When a request comes in, a token must be taken from the bucket and processed. The request will be refused if no token is available in the bucket, and the requester will have to try again later. The bucket is refilled with tokens after a certain time period.
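
For instance, here's a minimal in-process token bucket sketch (capacity and refill rate are arbitrary values):

class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity = 10, private refillPerSecond = 5) {
    this.tokens = capacity;
  }

  allow(): boolean {
    // Refill tokens based on the time elapsed since the last check.
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSecond);
    this.lastRefill = now;

    if (this.tokens >= 1) {
      this.tokens -= 1; // a token is taken for each processed request
      return true;
    }
    return false; // no token available, the request is refused
  }
}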

Fixed Window

The fixed window algorithm uses a window of n seconds to track the request rate. Each incoming request increments a counter for the current window. The request is discarded if the counter exceeds a threshold.

Sliding Log

Sliding Log rate-limiting involves tracking a time-stamped log for each request. The system stores these logs in a time-sorted hash set or table. It also discards logs with timestamps beyond a threshold. When a new request comes in, we calculate the sum of logs to determine the request rate. If the request would exceed the threshold rate, then it is held.

Sliding Window

Sliding Window is a hybrid approach that combines the fixed window algorithm's low processing cost and the sliding log's improved boundary conditions. Like the fixed window algorithm, we track a counter for each fixed window. Next, we account for a weighted value of the previous window's request rate based on the current timestamp to smooth out bursts of traffic.

Rate Limiting in Distributed Systems

Rate Limiting becomes complicated when distributed systems are involved. The two broad problems that come with rate limiting in distributed systems are:

Inconsistencies

When using a cluster of multiple nodes, we might need to enforce a global rate limit policy. Because if each node were to track its rate limit, a consumer could exceed a global rate limit when sending requests to different nodes. The greater the number of nodes, the more likely the user will exceed the global limit.

The simplest way to solve this problem is to use sticky sessions in our load balancers so that each consumer gets sent to exactly one node but this causes a lack of fault tolerance and scaling problems. Another approach might be to use a centralized data store like Redis but this will increase latency and cause race conditions.

Race Conditions

This issue happens when we use a naive "get-then-set" approach, in which we retrieve the current rate limit counter, increment it, and then push it back to the datastore. This model's problem is that additional requests can come through in the time it takes to perform a full cycle of read-increment-store, each attempting to store the increment counter with an invalid (lower) counter value. This allows a consumer to send a very large number of requests to bypass the rate limiting controls.

One way to avoid this problem is to use some sort of distributed locking mechanism around the key, preventing any other processes from accessing or writing to the counter. Though the lock will become a significant bottleneck and will not scale well. A better approach might be to use a "set-then-get" approach, allowing us to quickly increment and check counter values without letting the atomic operations get in the way.
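
As a sketch of this atomic approach, here's a fixed-window counter built on an atomic increment (node-redis style client; the key format and limit are assumptions):

import { createClient } from "redis";

const redis = createClient();
await redis.connect();

async function isAllowed(userId: string, limit = 100): Promise<boolean> {
  // One counter per user per minute window; INCR is atomic on the server,
  // so there is no read-increment-store race.
  const key = `rate:${userId}:${Math.floor(Date.now() / 60_000)}`;
  const count = await redis.incr(key);
  if (count === 1) await redis.expire(key, 60); // clean up the window after a minute
  return count <= limit;
}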

Service Discovery

Service discovery is the detection of services within a computer network. Service Discovery Protocol (SDP) is a networking standard that accomplishes the detection of services by identifying resources.

Why do we need Service Discovery?

In a monolithic application, services invoke one another through language-level methods or procedure calls. However, modern microservices-based applications typically run in virtualized or containerized environments where the number of instances of a service and their locations change dynamically. Consequently, we need a mechanism that enables the clients of service to make requests to a dynamically changing set of ephemeral service instances.

Implementations

There are two main service discovery patterns:

Client-side discovery


In this approach, the client obtains the location of another service by querying a service registry which is responsible for managing and storing the network locations of all the services.

Server-side discovery


In this approach, we use an intermediate component such as a load balancer. The client makes a request to the service via a load balancer which then forwards the request to an available service instance.

Service Registry

A service registry is basically a database containing the network locations of service instances to which the clients can reach out. A Service Registry must be highly available and up-to-date.

Service Registration

We also need a way to obtain service information, often known as service registration. Let's look at two possible service registration approaches:

Self-Registration

When using the self-registration model, a service instance is responsible for registering and de-registering itself in the Service Registry. In addition, if necessary, a service instance sends heartbeat requests to keep its registration alive.
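
Here's a minimal sketch of a registry that expires instances whose heartbeats stop arriving (the TTL value is an arbitrary choice):

type Instance = { host: string; port: number; lastHeartbeat: number };

const TTL_MS = 30_000;
const registry = new Map<string, Instance[]>();

// A service instance registers itself (and re-registers on each heartbeat).
function register(service: string, host: string, port: number): void {
  const instances = (registry.get(service) ?? []).filter(
    (i) => !(i.host === host && i.port === port)
  );
  instances.push({ host, port, lastHeartbeat: Date.now() });
  registry.set(service, instances);
}

// Clients look up healthy instances; stale ones (missed heartbeats) are filtered out.
function lookup(service: string): Instance[] {
  const now = Date.now();
  return (registry.get(service) ?? []).filter((i) => now - i.lastHeartbeat < TTL_MS);
}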

Third-party Registration

The registry keeps track of changes to running instances by polling the deployment environment or subscribing to events. When it detects a newly available service instance, it records it in its database. The Service Registry also de-registers terminated service instances.

Service mesh

Service-to-service communication is essential in a distributed application, but routing this communication, both within and across application clusters, becomes increasingly complex as the number of services grows. A service mesh enables managed, observable, and secure communication between individual services. It works with a service discovery protocol to detect services. Istio and Envoy are some of the most commonly used service mesh technologies.

Examples

Here are some commonly used service discovery infrastructure tools:

  • etcd
  • Consul
  • Apache ZooKeeper

SLA, SLO, SLI

Let's briefly discuss SLA, SLO, and SLI. These are mostly related to the business and site reliability side of things but good to know nonetheless.

Why are they important?

SLAs, SLOs, and SLIs allow companies to define, track and monitor the promises made for a service to its users. Together, SLAs, SLOs, and SLIs should help teams generate more user trust in their services with an added emphasis on continuous improvement to incident management and response processes.

SLA

An SLA, or Service Level Agreement, is an agreement made between a company and its users of a given service. The SLA defines the different promises that the company makes to users regarding specific metrics, such as service availability.

SLAs are often written by a company's business or legal team.

SLO

An SLO, or Service Level Objective, is the promise that a company makes to users regarding a specific metric such as incident response or uptime. SLOs exist within an SLA as individual promises contained within the full user agreement. The SLO is the specific goal that the service must meet in order to comply with the SLA. SLOs should always be simple, clearly defined, and easily measured to determine whether or not the objective is being fulfilled.
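
To make this concrete: an SLO of 99.9% monthly availability leaves an error budget of about 43 minutes of downtime per month, since 0.1% of a 30-day month (43,200 minutes) is roughly 43.2 minutes.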

SLI

An SLI, or Service Level Indicator, is a key metric used to determine whether or not the SLO is being met. It is the measured value of the metric described within the SLO. In order to remain in compliance with the SLA, the SLI's value must always meet or exceed the value determined by the SLO.

Disaster recovery

Disaster recovery (DR) is a process of regaining access and functionality of the infrastructure after events like a natural disaster, cyber attack, or even business disruptions.

Disaster recovery relies upon the replication of data and computer processing in an off-premises location not affected by the disaster. When servers go down because of a disaster, a business needs to recover lost data from a second location where the data is backed up. Ideally, an organization can transfer its computer processing to that remote location as well in order to continue operations.

Disaster Recovery is often not actively discussed during system design interviews but it's important to have some basic understanding of this topic. You can learn more about disaster recovery from AWS Well-Architected Framework.

Why is disaster recovery important?

Disaster recovery can have the following benefits:

  • Minimize interruption and downtime
  • Limit damages
  • Fast restoration
  • Better customer retention

Terms

Let's discuss some important terms relevant to disaster recovery:


RTO

Recovery Time Objective (RTO) is the maximum acceptable delay between the interruption of service and restoration of service. This determines what is considered an acceptable time window when service is unavailable.

RPO

Recovery Point Objective (RPO) is the maximum acceptable amount of time since the last data recovery point. This determines what is considered an acceptable loss of data between the last recovery point and the interruption of service.

Strategies

A variety of disaster recovery (DR) strategies can be part of a disaster recovery plan.

Back-up

This is the simplest type of disaster recovery and involves storing data off-site or on a removable drive.

Cold Site

In this type of disaster recovery, an organization sets up basic infrastructure in a second site.

Hot site

A hot site maintains up-to-date copies of data at all times. Hot sites are time-consuming to set up and more expensive than cold sites, but they dramatically reduce downtime.

Virtual Machines (VMs) and Containers

Before we discuss virtualization vs containerization, let's learn what are virtual machines (VMs) and Containers.

Virtual Machines (VM)

A Virtual Machine (VM) is a virtual environment that functions as a virtual computer system with its own CPU, memory, network interface, and storage, created on a physical hardware system. A software called a hypervisor separates the machine's resources from the hardware and provisions them appropriately so they can be used by the VM.

VMs are isolated from the rest of the system, and multiple VMs can exist on a single piece of hardware, like a server. They can be moved between host servers depending on the demand or to use resources more efficiently.

What is a Hypervisor?

A Hypervisor sometimes called a Virtual Machine Monitor (VMM), isolates the operating system and resources from the virtual machines and enables the creation and management of those VMs. The hypervisor treats resources like CPU, memory, and storage as a pool of resources that can be easily reallocated between existing guests or new virtual machines.

Why use a Virtual Machine?

Server consolidation is a top reason to use VMs. Most operating system and application deployments only use a small amount of the physical resources available. By virtualizing our servers, we can place many virtual servers onto each physical server to improve hardware utilization. This keeps us from needing to purchase additional physical resources.

A VM provides an environment that is isolated from the rest of a system, so whatever is running inside a VM won't interfere with anything else running on the host hardware. Because VMs are isolated, they are a good option for testing new applications or setting up a production environment. We can also run a single-purpose VM to support a specific use case.

Containers

A container is a standard unit of software that packages up code and all its dependencies such as specific versions of runtimes and libraries so that the application runs quickly and reliably from one computing environment to another. Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run. This decoupling allows container-based applications to be deployed easily and consistently, regardless of the target environment.

Why do we need containers?

Let's discuss some advantages of using containers:

Separation of responsibility

Containerization provides a clear separation of responsibility, as developers focus on application logic and dependencies, while operations teams can focus on deployment and management.

Workload portability

Containers can run virtually anywhere, greatly easing development and deployment.

Application isolation

Containers virtualize CPU, memory, storage, and network resources at the operating system level, providing developers with a view of the OS logically isolated from other applications.

Agile development

Containers allow developers to move much more quickly by avoiding concerns about dependencies and environments.

Efficient operations

Containers are lightweight and allow us to use just the computing resources we need.

Virtualization vs Containerization


In traditional virtualization, a hypervisor virtualizes physical hardware. The result is that each virtual machine contains a guest OS, a virtual copy of the hardware that the OS requires to run, and an application and its associated libraries and dependencies.

Instead of virtualizing the underlying hardware, containers virtualize the operating system so each container contains only the application and its dependencies making them much more lightweight than VMs. Containers also share the OS kernel and use a fraction of the memory VMs require.

OAuth 2.0 and OpenID Connect (OIDC)

OAuth 2.0

OAuth 2.0, which stands for Open Authorization, is a standard designed to provide consented access to resources on behalf of the user, without ever sharing the user's credentials. OAuth 2.0 is an authorization protocol and not an authentication protocol, it is designed primarily as a means of granting access to a set of resources, for example, remote APIs or user's data.

Concepts

The OAuth 2.0 protocol defines the following entities:

  • Resource Owner: The user or system that owns the protected resources and can grant access to them.
  • Client: The client is the system that requires access to the protected resources.
  • Authorization Server: This server receives requests from the Client for Access Tokens and issues them upon successful authentication and consent by the Resource Owner.
  • Resource Server: A server that protects the user's resources and receives access requests from the Client. It accepts and validates an Access Token from the Client and returns the appropriate resources.
  • Scopes: They are used to specify exactly the reason for which access to resources may be granted. Acceptable scope values, and which resources they relate to, are dependent on the Resource Server.
  • Access Token: A piece of data that represents the authorization to access resources on behalf of the end-user.

How does OAuth 2.0 work?

Let's learn how OAuth 2.0 works:


  1. The client requests authorization from the Authorization Server, supplying the client id and secret as identification. It also provides the scopes and an endpoint URI to send the Access Token or the Authorization Code.
  2. The Authorization Server authenticates the client and verifies that the requested scopes are permitted.
  3. The resource owner interacts with the authorization server to grant access.
  4. The Authorization Server redirects back to the client with either an Authorization Code or Access Token, depending on the grant type. A Refresh Token may also be returned.
  5. With the Access Token, the client can request access to the resource from the Resource Server.
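
For example, in the authorization code grant, step 1 typically takes the form of a redirect to a URL like the following (the endpoint and values are illustrative; the parameter names come from the OAuth 2.0 specification):

https://auth.example.com/authorize
  ?response_type=code
  &client_id=CLIENT_ID
  &redirect_uri=https://app.example.com/callback
  &scope=read
  &state=xyz123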

Disadvantages

Here are the most common disadvantages of OAuth 2.0:

  • Lacks built-in security features.
  • No standard implementation.
  • No common set of scopes.

OpenID Connect

OAuth 2.0 is designed only for authorization, for granting access to data and features from one application to another. OpenID Connect (OIDC) is a thin layer that sits on top of OAuth 2.0 that adds login and profile information about the person who is logged in.

When an Authorization Server supports OIDC, it is sometimes called an identity provider (IdP), since it provides information about the Resource Owner back to the Client. OpenID Connect is relatively new, resulting in lower adoption and industry implementation of best practices compared to OAuth.

Concepts

The OpenID Connect (OIDC) protocol defines the following entities:

  • Relying Party: The current application.
  • OpenID Provider: This is essentially an intermediate service that provides a one-time code to the Relying Party.
  • Token Endpoint: A web server that accepts the One-Time Code (OTC) and provides an access code that's valid for an hour. The main difference between OIDC and OAuth 2.0 is that the token is provided using JSON Web Token (JWT).
  • UserInfo Endpoint: The Relying Party communicates with this endpoint, providing a secure token and receiving information about the end-user.

Both OAuth 2.0 and OIDC are easy to implement and are JSON based, which is supported by most web and mobile applications. However, the OpenID Connect (OIDC) specification is more strict than that of basic OAuth.
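
For illustration, the decoded payload of an OIDC ID token (a JWT) carries standard claims about the authenticated user; the values below are made up:

{
  "iss": "https://auth.example.com",
  "sub": "248289761001",
  "aud": "client-app",
  "exp": 1679860000,
  "iat": 1679856400,
  "name": "Jane Doe",
  "email": "jane@example.com"
}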

Single Sign-On (SSO)

Single Sign-On (SSO) is an authentication process in which a user is provided access to multiple applications or websites by using only a single set of login credentials. This prevents the need for the user to log separately into the different applications.

The user credentials and other identifying information are stored and managed by a centralized system called Identity Provider (IdP). The Identity Provider is a trusted system that provides access to other websites and applications.

Single Sign-On (SSO) based authentication systems are commonly used in enterprise environments where employees require access to multiple applications of their organizations.

Components

Let's discuss some key components of Single Sign-On (SSO).

Identity Provider (IdP)

User Identity information is stored and managed by a centralized system called Identity Provider (IdP). The Identity Provider authenticates the user and provides access to the service provider.

The identity provider can directly authenticate the user by validating a username and password or by validating an assertion about the user's identity as presented by a separate identity provider. The identity provider handles the management of user identities in order to free the service provider from this responsibility.

Service Provider

A service provider provides services to the end-user. They rely on identity providers to assert the identity of a user, and typically certain attributes about the user are managed by the identity provider. Service providers may also maintain a local account for the user along with attributes that are unique to their service.

Identity Broker

An identity broker acts as an intermediary that connects multiple service providers with various identity providers. Using an identity broker, we can perform single sign-on with any application regardless of the protocol it follows.

SAML

Security Assertion Markup Language (SAML) is an open standard that allows clients to share security information about identity, authentication, and permission across different systems. SAML is implemented with the Extensible Markup Language (XML) standard for sharing data.

SAML specifically enables identity federation, making it possible for identity providers (IdPs) to seamlessly and securely pass authenticated identities and their attributes to service providers.

How does SSO work?

Now, let's discuss how Single Sign-On works:


  1. The user requests a resource from their desired application.
  2. The application redirects the user to the Identity Provider (IdP) for authentication.
  3. The user signs in with their credentials (usually, username and password).
  4. Identity Provider (IdP) sends a Single Sign-On response back to the client application.
  5. The application grants access to the user.

SAML vs OAuth 2.0 and OpenID Connect (OIDC)

There are many differences between SAML, OAuth, and OIDC. SAML uses XML to pass messages, while OAuth and OIDC use JSON. OAuth provides a simpler experience, while SAML is geared towards enterprise security.

OAuth and OIDC use RESTful communication extensively, which is why mobile, and modern web applications find OAuth and OIDC a better experience for the user. SAML, on the other hand, drops a session cookie in a browser that allows a user to access certain web pages. This is great for short-lived workloads.

OIDC is developer-friendly and simpler to implement, which broadens the use cases for which it might be implemented. It can be implemented from scratch pretty fast, via freely available libraries in all common programming languages. SAML can be complex to install and maintain, which only enterprise-size companies can handle well.

OpenID Connect is essentially a layer on top of the OAuth framework. Therefore, it can offer a built-in layer of permission that asks a user to agree to what the service provider might access. Although SAML is also capable of allowing consent flow, it achieves this by hard-coding carried out by a developer and not as part of its protocol.

Both of these authentication protocols are good at what they do. As always, a lot depends on our specific use cases and target audience.

Advantages

Following are the benefits of using Single Sign-On:

  • Ease of use as users only need to remember one set of credentials.
  • Ease of access without having to go through a lengthy authorization process.
  • Enforced security and compliance to protect sensitive data.
  • Simplifying the management with reduced IT support cost and admin time.

Disadvantages

Here are some disadvantages of Single Sign-On:

  • Single Password Vulnerability, if the main SSO password gets compromised, all the supported applications get compromised.
  • The authentication process using Single Sign-On is slower than traditional authentication as every application has to request the SSO provider for verification.

Examples

These are some commonly used Identity Providers (IdP):

  • Okta
  • Auth0
  • Microsoft Azure Active Directory
  • OneLogin

SSL, TLS, mTLS

Let's briefly discuss some important communication security protocols such as SSL, TLS, and mTLS. I would say that from a "big picture" system design perspective, this topic is not very important but still good to know about.

SSL

SSL stands for Secure Sockets Layer, and it refers to a protocol for encrypting and securing communications that take place on the internet. It was first developed in 1995 but since has been deprecated in favor of TLS (Transport Layer Security).

Why is it called an SSL certificate if it is deprecated?

Most major certificate providers still refer to certificates as SSL certificates, which is why the naming convention persists.

Why was SSL so important?

Originally, data on the web was transmitted in plaintext that anyone could read if they intercepted the message. SSL was created to correct this problem and protect user privacy. By encrypting any data that goes between the user and a web server, SSL also stops certain kinds of cyber attacks by preventing attackers from tampering with data in transit.

TLS

Transport Layer Security, or TLS, is a widely adopted security protocol designed to facilitate privacy and data security for communications over the internet. TLS evolved from a previous encryption protocol called Secure Sockets Layer (SSL). A primary use case of TLS is encrypting the communication between web applications and servers.

There are three main components to what the TLS protocol accomplishes:

  • Encryption: hides the data being transferred from third parties.
  • Authentication: ensures that the parties exchanging information are who they claim to be.
  • Integrity: verifies that the data has not been forged or tampered with.

mTLS

Mutual TLS, or mTLS, is a method for mutual authentication. mTLS ensures that the parties at each end of a network connection are who they claim to be by verifying that they both have the correct private key. The information within their respective TLS certificates provides additional verification.

Why use mTLS?

mTLS helps ensure that the traffic is secure and trusted in both directions between a client and server. This provides an additional layer of security for users who log in to an organization's network or applications. It also verifies connections with client devices that do not follow a login process, such as Internet of Things (IoT) devices.

Nowadays, mTLS is commonly used by microservices or distributed systems in a zero trust security model to verify each other.
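
As a sketch, here's how a Node.js server could require and verify client certificates (file paths and port are assumptions):

import https from "https";
import fs from "fs";

const server = https.createServer(
  {
    key: fs.readFileSync("server-key.pem"),
    cert: fs.readFileSync("server-cert.pem"),
    ca: fs.readFileSync("client-ca.pem"), // CA that signed the client certificates
    requestCert: true, // request a certificate from the client during the handshake
    rejectUnauthorized: true, // reject clients whose certificate fails verification
  },
  (req, res) => {
    res.writeHead(200);
    res.end("authenticated in both directions\n");
  }
);

server.listen(8443);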

System Design Interviews

System design is a very extensive topic and system design interviews are designed to evaluate your capability to produce technical solutions to abstract problems, as such, they're not designed for a specific answer. The unique aspect of system design interviews is the two-way nature between the candidate and the interviewer.

Expectations are quite different at different engineering levels as well. Because someone with a lot of practical experience will approach it quite differently from someone who's new in the industry. As a result, it's hard to come up with a single strategy that will help us stay organized during the interview.

Let's look at some common strategies for the system design interviews:

Requirements clarifications

System design interview questions, by nature, are vague or abstract. Asking questions about the exact scope of the problem, and clarifying functional requirements early in the interview is essential. Usually, requirements are divided into three parts:

Functional requirements

These are the requirements that the end user specifically demands as basic functionalities that the system should offer. All these functionalities need to be necessarily incorporated into the system as part of the contract.

For example:

  • "What are the features that we need to design for this system?"
  • "What are the edge cases we need to consider, if any, in our design?"

Non-functional requirements

These are the quality constraints that the system must satisfy according to the project contract. The priority or extent to which these factors are implemented varies from one project to another. They are also called non-behavioral requirements. For example, portability, maintainability, reliability, scalability, security, etc.

For example:

  • "Each request should be processed with the minimum latency"
  • "System should be highly available"

Extended requirements

These are basically "nice to have" requirements that might be out of the scope of the system.

For example:

  • "Our system should record metrics and analytics"
  • "Service health and performance monitoring?"

Estimation and Constraints

Estimate the scale of the system we're going to design. It is important to ask questions such as:

  • "What is the desired scale that this system will need to handle?"
  • "What is the read/write ratio of our system?"
  • "How many requests per second?"
  • "How much storage will be needed?"

These questions will help us scale our design later.
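
For example, with a hypothetical 10 million requests per day, the average load is 10,000,000 / 86,400 ≈ 116 requests per second, and peak traffic is usually estimated as some multiple of that average.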

Data model design

Once we have the estimations, we can start with defining the database schema. Doing so in the early stages of the interview would help us to understand the data flow which is the core of every system. In this step, we basically define all the entities and relationships between them.

  • "What are the different entities in the system?"
  • "What are the relationships between these entities?"
  • "How many tables do we need?"
  • "Is NoSQL a better choice here?"

API design

Next, we can start designing APIs for the system. These APIs will help us define the expectations from the system explicitly. We don't have to write any code, just a simple interface defining the API requirements such as parameters, functions, classes, types, entities, etc.

For example:

createUser(name: string, email: string): User

It is advised to keep the interface as simple as possible and come back to it later when covering extended requirements.

High-level component design

Now that we have established our data model and API design, it's time to identify system components (such as Load Balancers, API Gateway, etc.) that are needed to solve our problem and draft the first design of our system.

  • "Is it best to design a monolithic or a microservices architecture?"
  • "What type of database should we use?"

Once we have a basic diagram, we can start discussing with the interviewer how the system will work from the client's perspective.

Detailed design

Now it's time to go into detail about the major components of the system we designed. As always, discuss with the interviewer which components may need further improvement.

Here is a good opportunity to demonstrate your experience in the areas of your expertise. Present different approaches, advantages, and disadvantages. Explain your design decisions, and back them up with examples. This is also a good time to discuss any additional features the system might be able to support, though this is optional.

  • "How should we partition our data?"
  • "What about load distribution?"
  • "Should we use cache?"
  • "How will we handle a sudden spike in traffic?"

Also, try not to be too opinionated about certain technologies, statements like "I believe that NoSQL databases are just better, SQL databases are not scalable" reflect poorly. As someone who has interviewed a lot of people over the years, my two cents here would be to be humble about what you know and what you do not. Use your existing knowledge with examples to navigate this part of the interview.

Identify and resolve bottlenecks

Finally, it's time to discuss bottlenecks and approaches to mitigate them. Here are some important questions to ask:

  • "Do we have enough database replicas?"
  • "Is there any single point of failure?"
  • "Is database sharding required?"
  • "How can we make our system more robust?"
  • "How to improve the availability of our cache?"

Make sure to read the engineering blog of the company you're interviewing with. This will help you get a sense of what technology stack they're using and which problems are important to them.

URL Shortener

Let's design a URL shortener, similar to services like Bitly and TinyURL.

What is a URL Shortener?

A URL shortener service creates an alias or a short URL for a long URL. Users are redirected to the original URL when they visit these short links.

For example, the following long URL can be changed to a shorter URL.

Long URL: https://karanpratapsingh.com/courses/system-design/url-shortener

Short URL: https://bit.ly/3I71d3o

Why do we need a URL shortener?

A URL shortener saves space in general when we are sharing URLs, and users are also less likely to mistype shorter URLs. Moreover, we can optimize links across devices, which allows us to track individual links.

Requirements

Our URL shortening system should meet the following requirements:

Functional requirements

  • Given a URL, our service should generate a shorter and unique alias for it.
  • Users should be redirected to the original URL when they visit the short link.
  • Links should expire after a default timespan.

Non-functional requirements

  • High availability with minimal latency.
  • The system should be scalable and efficient.

Extended requirements

  • Prevent abuse of services.
  • Record analytics and metrics for redirections.

Estimation and Constraints

Let's start with the estimation and constraints.

Note: Make sure to check any scale or traffic related assumptions with your interviewer.

Traffic

This will be a read-heavy system, so let's assume a 100:1 read/write ratio with 100 million links generated per month.

Reads/Writes Per month

For reads per month:

$$ 100 \times 100 \space million = 10 \space billion/month $$

Similarly for writes:

$$ 1 \times 100 \space million = 100 \space million/month $$

What would be Requests Per Second (RPS) for our system?

100 million requests per month translate into 40 requests per second.

$$ \frac{100 \space million}{(30 \space days \times 24 \space hrs \times 3600 \space seconds)} = \sim 40 \space URLs/second $$

And with a 100:1 read/write ratio, the number of redirections will be:

$$ 100 \times 40 \space URLs/second = 4000 \space requests/second $$

Bandwidth

Since we expect about 40 write requests (URLs) every second, if we assume each request is 500 bytes in size, the total incoming data for write requests would be:

$$ 40 \times 500 \space bytes = 20 \space KB/second $$

Similarly, for the read requests, since we expect about 4K redirections, the total outgoing data would be:

$$ 4000 \space URLs/second \times 500 \space bytes = \sim 2 \space MB/second $$

Storage

For storage, we will assume we store each link or record in our database for 10 years. Since we expect around 100M new requests every month, the total number of records we will need to store would be:

$$ 100 \space million \times 10\space years \times 12 \space months = 12 \space billion $$

As earlier, if we assume each stored record is approximately 500 bytes, we will need around 6 TB of storage:

$$ 12 \space billion \times 500 \space bytes = 6 \space TB $$

Cache

For caching, we will follow the classic Pareto principle also known as the 80/20 rule. This means that 80% of the requests are for 20% of the data, so we can cache around 20% of our requests.

Since we get around 4K read or redirection requests each second, this translates into roughly 350 million requests per day.

$$ 4000 \space URLs/second \times 24 \space hours \times 3600 \space seconds = \sim 350 \space million \space requests/day $$

Hence, we will need around 35GB of memory per day.

$$ 20 \space percent \times 350 \space million \times 500 \space bytes = 35 \space GB/day $$

High-level estimate

Here is our high-level estimate:

| Type | Estimate |
| --- | --- |
| Writes (New URLs) | 40/s |
| Reads (Redirection) | 4K/s |
| Bandwidth (Incoming) | 20 KB/s |
| Bandwidth (Outgoing) | 2 MB/s |
| Storage (10 years) | 6 TB |
| Memory (Caching) | ~35 GB/day |

Data model design

Next, we will focus on the data model design. Here is our database schema:

url-shortener-datamodel

Initially, we can get started with just two tables:

users

Stores user's details such as name, email, createdAt, etc.

urls

Contains the new short URL's properties such as expiration, hash, originalURL, and userID of the user who created the short URL. We can also use the hash column as an index to improve the query performance.

What kind of database should we use?

Since the data is not strongly relational, NoSQL databases such as Amazon DynamoDB, Apache Cassandra, or MongoDB will be a better choice here. If we do decide to use an SQL database, we can use something like Azure SQL Database or Amazon RDS.

For more details, refer to SQL vs NoSQL.

API design

Let us do a basic API design for our services:

Create URL

This API should create a new short URL in our system given an original URL.

createURL(apiKey: string, originalURL: string, expiration?: Date): string

Parameters

API Key (string): API key provided by the user.

Original URL (string): Original URL to be shortened.

Expiration (Date): Expiration date of the new URL (optional).

Returns

Short URL (string): New shortened URL.

Get URL

This API should retrieve the original URL from a given short URL.

getURL(apiKey: string, shortURL: string): string

Parameters

API Key (string): API key provided by the user.

Short URL (string): Short URL mapped to the original URL.

Returns

Original URL (string): Original URL to be retrieved.

Delete URL

This API should delete a given shortURL from our system.

deleteURL(apiKey: string, shortURL: string): boolean

Parameters

API Key (string): API key provided by the user.

Short URL (string): Short URL to be deleted.

Returns

Result (boolean): Represents whether the operation was successful or not.

Why do we need an API key?

As you must've noticed, we're using an API key to prevent abuse of our services. Using this API key we can limit the users to a certain number of requests per second or minute. This is quite a standard practice for developer APIs and should cover our extended requirement.
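To sketch how this might work, here is a minimal in-memory token bucket keyed by API key; the capacity and refill rate are assumed values, and a production system would keep these counters in a shared store such as Redis.

```ts
interface Bucket {
  tokens: number;
  lastRefill: number; // ms since epoch
}

const CAPACITY = 10;      // maximum burst size (assumed)
const REFILL_PER_SEC = 5; // sustained requests per second (assumed)
const buckets = new Map<string, Bucket>();

function allowRequest(apiKey: string): boolean {
  const now = Date.now();
  const bucket = buckets.get(apiKey) ?? { tokens: CAPACITY, lastRefill: now };
  // Refill tokens in proportion to the time elapsed since the last request.
  const elapsedSec = (now - bucket.lastRefill) / 1000;
  bucket.tokens = Math.min(CAPACITY, bucket.tokens + elapsedSec * REFILL_PER_SEC);
  bucket.lastRefill = now;
  const allowed = bucket.tokens >= 1;
  if (allowed) bucket.tokens -= 1; // spend a token; otherwise respond with HTTP 429
  buckets.set(apiKey, bucket);
  return allowed;
}
```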

High-level design

Now let us do a high-level design of our system.

URL Encoding

Our system's primary goal is to shorten a given URL, let's look at different approaches:

Base62 Approach

In this approach, we can encode the original URL using Base62 which consists of the capital letters A-Z, the lower case letters a-z, and the numbers 0-9.

$$ Number \space of \space URLs = 62^N $$

Where,

N: Number of characters in the generated URL.

So, if we want to generate a URL that is 7 characters long, we can generate ~3.5 trillion different URLs.

$$ \begin{gather*} 62^5 = \sim 916 \space million \space URLs \\ 62^6 = \sim 56.8 \space billion \space URLs \\ 62^7 = \sim 3.5 \space trillion \space URLs \end{gather*} $$

This is the simplest solution here, but it does not guarantee non-duplicate or collision-resistant keys.
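Here is a minimal sketch of base62-encoding a numeric ID; the alphabet ordering is an arbitrary choice.

```ts
const ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

// Encode a non-negative integer into base62.
function base62Encode(n: bigint): string {
  if (n === 0n) return ALPHABET[0];
  let out = "";
  while (n > 0n) {
    out = ALPHABET[Number(n % 62n)] + out; // take the least significant digit
    n /= 62n;
  }
  return out;
}

// e.g. base62Encode(123456789n) === "8m0Kx"
```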

MD5 Approach

The MD5 message-digest algorithm is a widely used hash function producing a 128-bit hash value (or 32 hexadecimal digits). We can use these 32 hexadecimal digits for generating 7 characters long URL.

$$ MD5(original\_url) \rightarrow base62encode \rightarrow hash $$

However, this creates a new issue for us, which is duplication and collision. We can try to re-compute the hash until we find a unique one but that will increase the overhead of our systems. It's better to look for more scalable approaches.
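As a sketch of this approach (reusing the base62Encode helper from the previous snippet), note that truncating to 7 characters is precisely what introduces the collision risk, so the insert must still check for duplicates.

```ts
import { createHash } from "crypto";

// Hash the URL, interpret the 128-bit digest as a number, base62-encode it,
// and keep the first 7 characters. Collisions remain possible.
function shortKeyFromURL(originalURL: string): string {
  const digestHex = createHash("md5").update(originalURL).digest("hex");
  const asNumber = BigInt("0x" + digestHex); // 128-bit value
  return base62Encode(asNumber).slice(0, 7); // base62Encode from the sketch above
}
```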

Counter Approach

In this approach, we will start with a single server which maintains the count of keys generated. Once our service receives a request, it can reach out to the counter, which returns a unique number and then increments itself. When the next request comes, the counter again returns a unique number, and this goes on.

$$ Counter(0-3.5 \space trillion) \rightarrow base62encode \rightarrow hash $$

The problem with this approach is that it can quickly become a single point of failure. And if we run multiple instances of the counter, we can have collisions, as it's essentially a distributed system.

To solve this issue we can use a distributed system manager such as Zookeeper which can provide distributed synchronization. Zookeeper can maintain multiple ranges for our servers.

$$ \begin{align*} & Range \space 1: \space 1 \rightarrow 1,000,000 \\ & Range \space 2: \space 1,000,001 \rightarrow 2,000,000 \\ & Range \space 3: \space 2,000,001 \rightarrow 3,000,000 \\ & ... \end{align*} $$

Once a server reaches its maximum range Zookeeper will assign an unused counter range to the new server. This approach can guarantee non-duplicate and collision-resistant URLs. Also, we can run multiple instances of Zookeeper to remove the single point of failure.
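A rough sketch of how an application server might consume such a range; the coordinator is mocked with a local allocator here, whereas in practice the range would be leased from Zookeeper.

```ts
// Mocked range allocator standing in for Zookeeper.
let nextRangeStart = 1;
const RANGE_SIZE = 1_000_000;

function acquireRange(): { current: number; end: number } {
  const start = nextRangeStart;
  nextRangeStart += RANGE_SIZE;
  return { current: start, end: start + RANGE_SIZE - 1 };
}

let range = acquireRange();

function nextKey(): string {
  if (range.current > range.end) {
    range = acquireRange(); // range exhausted: lease a fresh one
  }
  return base62Encode(BigInt(range.current++)); // helper from the Base62 sketch
}
```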

Key Generation Service (KGS)

As we discussed, generating a unique key at scale without duplication and collisions can be a bit of a challenge. To solve this problem, we can create a standalone Key Generation Service (KGS) that generates a unique key ahead of time and stores it in a separate database for later use. This approach can make things simple for us.

How to handle concurrent access?

Once the key is used, we can mark it in the database to make sure we don't reuse it. However, if multiple server instances read data concurrently, two or more servers might try to use the same key.

The easiest way to solve this would be to store keys in two tables. As soon as a key is used, we move it to a separate table with appropriate locking in place. Also, to improve reads, we can keep some of the keys in memory.
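Reduced to in-memory sets, the two-table idea might look like the sketch below; the comments mark where a real KGS would need a transactional move with locking so that two servers cannot claim the same key.

```ts
const unusedKeys = new Set<string>(["abc1234", "xyz9876"]); // pre-generated keys
const usedKeys = new Set<string>();

function reserveKey(): string | undefined {
  const next = unusedKeys.values().next();
  if (next.done) return undefined; // key space exhausted
  const key = next.value;
  // In a real KGS this delete + add is a single database transaction
  // with row-level locking, so concurrent servers can't both claim it.
  unusedKeys.delete(key);
  usedKeys.add(key);
  return key;
}
```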

KGS database estimations

As per our discussion, we can generate up to ~56.8 billion unique 6-character keys, which will result in us having to store roughly 341 GB of keys.

$$ 6 \space characters \times 56.8 \space billion = \sim 341 \space GB $$

While ~341 GB seems like a lot for this simple use case, it is important to remember this is for the entirety of our service lifetime, and the size of the keys database will not grow like our main database.

Caching

Now, let's talk about caching. As per our estimations, we will require around ~35 GB of memory per day to cache 20% of the incoming requests to our services. For this use case, we can use Redis or Memcached servers alongside our API server.

For more details, refer to caching.

Design

Now that we have identified some core components, let's do the first draft of our system design.

url-shortener-basic-design

Here's how it works:

Creating a new URL

  1. When a user creates a new URL, our API server requests a new unique key from the Key Generation Service (KGS).
  2. Key Generation Service provides a unique key to the API server and marks the key as used.
  3. API server writes the new URL entry to the database and cache.
  4. Our service returns an HTTP 201 (Created) response to the user.

Accessing a URL

  1. When a client navigates to a certain short URL, the request is sent to the API servers.
  2. The request first hits the cache, and if the entry is not found there then it is retrieved from the database and an HTTP 301 (Redirect) is issued to the original URL.
  3. If the key is still not found in the database, an HTTP 404 (Not found) error is sent to the user.

Detailed design

It's time to discuss the finer details of our design.

Data Partitioning

To scale out our databases we will need to partition our data. Horizontal partitioning (aka Sharding) can be a good first step. We can use partitioning schemes such as:

  • Hash-Based Partitioning
  • List-Based Partitioning
  • Range-Based Partitioning
  • Composite Partitioning

The above approaches can still cause uneven data and load distribution; we can solve this using Consistent hashing.

For more details, refer to Sharding and Consistent Hashing.
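For a quick illustration of the simplest scheme, here is hash-based partitioning in a few lines; note that changing the shard count remaps almost every key, which is exactly the problem consistent hashing addresses.

```ts
import { createHash } from "crypto";

// Map a key to one of N shards by hashing it.
function shardFor(key: string, shardCount: number): number {
  const digest = createHash("md5").update(key).digest();
  return digest.readUInt32BE(0) % shardCount; // stable for a fixed shardCount
}

// shardFor("abc1234", 4) always yields the same shard index in [0, 4)
```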

Database cleanup

This is more of a maintenance step for our services and depends on whether we keep the expired entries or remove them. If we do decide to remove expired entries, we can approach this in two different ways:

Active cleanup

In active cleanup, we will run a separate cleanup service which will periodically remove expired links from our storage and cache. This will be a very lightweight service like a cron job.

Passive cleanup

For passive cleanup, we can remove the entry when a user tries to access an expired link. This can ensure a lazy cleanup of our database and cache.

Cache

Now let us talk about caching.

Which cache eviction policy to use?

As we discussed before, we can use solutions like Redis or Memcached and cache 20% of the daily traffic but what kind of cache eviction policy would best fit our needs?

Least Recently Used (LRU) can be a good policy for our system. In this policy, we discard the least recently used key first.
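As a sketch, here is a tiny LRU cache built on a Map, which preserves insertion order; in practice we would rely on Redis's or Memcached's built-in eviction policies instead.

```ts
class LRUCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value === undefined) return undefined;
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // The first key in iteration order is the least recently used.
      const lru = this.map.keys().next().value as K;
      this.map.delete(lru);
    }
  }
}
```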

How to handle cache miss?

Whenever there is a cache miss, our servers can hit the database directly and update the cache with the new entries.

Metrics and Analytics

Recording analytics and metrics is one of our extended requirements. We can store and update metadata like the visitor's country, platform, number of views, etc., alongside the URL entry in our database.

Security

For security, we can introduce private URLs and authorization. A separate table can be used to store user ids that have permission to access a specific URL. If a user does not have proper permissions, we can return an HTTP 401 (Unauthorized) error.

We can also use an API Gateway as they can support capabilities like authorization, rate limiting, and load balancing out of the box.

Identify and resolve bottlenecks

url-shortener-advanced-design

Let us identify and resolve bottlenecks such as single points of failure in our design:

  • "What if the API service or Key Generation Service crashes?"
  • "How will we distribute our traffic between our components?"
  • "How can we reduce the load on our database?"
  • "What if the key database used by KGS fails?"
  • "How to improve the availability of our cache?"

To make our system more resilient we can do the following:

  • Running multiple instances of our Servers and Key Generation Service.
  • Introducing load balancers between clients, servers, databases, and cache servers.
  • Using multiple read replicas for our database as it's a read-heavy system.
  • Standby replica for our key database in case it fails.
  • Multiple instances and replicas for our distributed cache.

WhatsApp

Let's design a WhatsApp-like instant messaging service, similar to Facebook Messenger and WeChat.

What is WhatsApp?

WhatsApp is a chat application that provides instant messaging services to its users. It is one of the most used mobile applications on the planet connecting over 2 billion users in 180+ countries. WhatsApp is also available on the web.

Requirements

Our system should meet the following requirements:

Functional requirements

  • Should support one-on-one chat.
  • Group chats (max 100 people).
  • Should support file sharing (image, video, etc.).

Non-functional requirements

  • High availability with minimal latency.
  • The system should be scalable and efficient.

Extended requirements

  • Sent, Delivered, and Read receipts of the messages.
  • Show the last seen time of users.
  • Push notifications.

Estimation and Constraints

Let's start with the estimation and constraints.

Note: Make sure to check any scale or traffic-related assumptions with your interviewer.

Traffic

Let us assume we have 50 million daily active users (DAU) and on average each user sends at least 10 messages to 4 different people every day. This gives us 2 billion messages per day.

$$ 50 \space million \times 40 \space messages = 2 \space billion/day $$

Messages can also contain media such as images, videos, or other files. We can assume that 5 percent of messages are media files shared by the users, which gives us an additional 100 million files we would need to store.

$$ 5 \space percent \times 2 \space billion = 100 \space million/day $$

What would be Requests Per Second (RPS) for our system?

2 billion requests per day translate into 24K requests per second.

$$ \frac{2 \space billion}{(24 \space hrs \times 3600 \space seconds)} = \sim 24K \space requests/second $$

Storage

If we assume each message on average is 100 bytes, we will require about 200 GB of database storage every day.

$$ 2 \space billion \times 100 \space bytes = \sim 200 \space GB/day $$

As per our requirements, we also know that around 5 percent of our daily messages (100 million) are media files. If we assume each file is 100 KB on average, we will require 10 TB of storage every day.

$$ 100 \space million \times 100 \space KB = 10 \space TB/day $$

And for 10 years, we will require about 38 PB of storage.

$$ (10 \space TB + 0.2 \space TB) \times 10 \space years \times 365 \space days = \sim 38 \space PB $$

Bandwidth

As our system is handling 10.2 TB of ingress every day, we will require a minimum bandwidth of around 120 MB per second.

$$ \frac{10.2 \space TB}{(24 \space hrs \times 3600 \space seconds)} = \sim 120 \space MB/second $$

High-level estimate

Here is our high-level estimate:

| Type | Estimate |
| --- | --- |
| Daily active users (DAU) | 50 million |
| Requests per second (RPS) | 24K/s |
| Storage (per day) | ~10.2 TB |
| Storage (10 years) | ~38 PB |
| Bandwidth | ~120 MB/s |

Data model design

This is the general data model which reflects our requirements.

whatsapp-datamodel

We have the following tables:

users

This table will contain a user's information such as name, phoneNumber, and other details.

messages

As the name suggests, this table will store messages with properties such as type (text, image, video, etc.), content, and timestamps for message delivery. The message will also have a corresponding chatID or groupID.

chats

This table basically represents a private chat between two users and can contain multiple messages.

users_chats

This table maps users and chats as multiple users can have multiple chats (N:M relationship) and vice versa.

groups

This table represents a group between multiple users.

users_groups

This table maps users and groups as multiple users can be a part of multiple groups (N:M relationship) and vice versa.

What kind of database should we use?

While our data model seems quite relational, we don't necessarily need to store everything in a single database, as this can limit our scalability and quickly become a bottleneck.

We will split the data between different services each having ownership over a particular table. Then we can use a relational database such as PostgreSQL or a distributed NoSQL database such as Apache Cassandra for our use case.

API design

Let us do a basic API design for our services:

Get all chats or groups

This API will get all chats or groups for a given userID.

getAll(userID: UUID): Chat[] | Group[]

Parameters

User ID (UUID): ID of the current user.

Returns

Result (Chat[] | Group[]): All the chats and groups the user is a part of.

Get messages

Get all messages for a user given the channelID (chat or group id).

getMessages(userID: UUID, channelID: UUID): Message[]

Parameters

User ID (UUID): ID of the current user.

Channel ID (UUID): ID of the channel (chat or group) from which messages need to be retrieved.

Returns

Messages (Message[]): All the messages in a given chat or group.

Send message

Send a message from a user to a channel (chat or group).

sendMessage(userID: UUID, channelID: UUID, message: Message): boolean

Parameters

User ID (UUID): ID of the current user.

Channel ID (UUID): ID of the channel (chat or group) user wants to send a message to.

Message (Message): The message (text, image, video, etc.) that the user wants to send.

Returns

Result (boolean): Represents whether the operation was successful or not.

Join or leave a group

This API will allow the user to join or leave a channel (chat or group).

joinGroup(userID: UUID, channelID: UUID): boolean
leaveGroup(userID: UUID, channelID: UUID): boolean

Parameters

User ID (UUID): ID of the current user.

Channel ID (UUID): ID of the channel (chat or group) the user wants to join or leave.

Returns

Result (boolean): Represents whether the operation was successful or not.

High-level design

Now let us do a high-level design of our system.

Architecture

We will be using microservices architecture since it will make it easier to horizontally scale and decouple our services. Each service will have ownership of its own data model. Let's try to divide our system into some core services.

User Service

This is an HTTP-based service that handles user-related concerns such as authentication and user information.

Chat Service

The chat service will use WebSockets and establish connections with the client to handle chat and group message-related functionality. We can also use a cache to keep track of all the active connections, sort of like sessions, which helps us determine whether a user is online.

Notification Service

This service will simply send push notifications to the users. It will be discussed in detail separately.

Presence Service

The presence service will keep track of the last seen status of all users. It will be discussed in detail separately.

Media service

This service will handle the media (images, videos, files, etc.) uploads. It will be discussed in detail separately.

What about inter-service communication and service discovery?

Since our architecture is microservices-based, services will be communicating with each other as well. Generally, REST over HTTP performs well, but we can further improve performance using gRPC, which is more lightweight and efficient.

Service discovery is another thing we will have to take into account. We can also use a service mesh that enables managed, observable, and secure communication between individual services.

Note: Learn more about REST, GraphQL, gRPC and how they compare with each other.

Real-time messaging

How do we efficiently send and receive messages? We have two different options:

Pull model

The client can periodically send an HTTP request to servers to check if there are any new messages. This can be achieved via something like Long polling.

Push model

The client opens a long-lived connection with the server and once new data is available it will be pushed to the client. We can use WebSockets or Server-Sent Events (SSE) for this.

The pull model approach is not scalable, as it will create unnecessary request overhead on our servers, and most of the time the response will be empty, wasting our resources. To minimize latency, the push model with WebSockets is a better choice, because we can then push new data to the client as soon as it's available, with no delay, as long as the connection is open. Also, WebSockets provide full-duplex communication, unlike Server-Sent Events (SSE), which is only unidirectional.

Note: Learn more about Long polling, WebSockets, Server-Sent Events (SSE).
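To make the push model concrete, here is a minimal sketch using the ws package (an assumption; any WebSocket server would do). Extracting the user ID from a query parameter is hypothetical; in practice it would come from an authenticated handshake.

```ts
import { WebSocketServer, WebSocket } from "ws"; // assumes the `ws` package

const wss = new WebSocketServer({ port: 8080 });
const connections = new Map<string, WebSocket>(); // userID -> live socket

wss.on("connection", (socket, request) => {
  // Hypothetical: derive the user from the connection URL (?user=...).
  const url = new URL(request.url ?? "/", "http://localhost");
  const userID = url.searchParams.get("user") ?? "anonymous";
  connections.set(userID, socket);
  socket.on("close", () => connections.delete(userID));
});

// Push a payload to a recipient if they have an open connection.
function deliver(recipientID: string, payload: object): boolean {
  const socket = connections.get(recipientID);
  if (socket && socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify(payload));
    return true;
  }
  return false; // recipient offline: fall back to a push notification
}
```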

Last seen

To implement the last seen functionality, we can use a heartbeat mechanism, where the client can periodically ping the servers indicating its liveness. Since this needs to be as low overhead as possible, we can store the last active timestamp in the cache as follows:

| Key | Value |
| --- | --- |
| User A | 2022-07-01T14:32:50 |
| User B | 2022-07-05T05:10:35 |
| User C | 2022-07-10T04:33:25 |

This will give us the last time the user was active. This functionality will be handled by the presence service combined with Redis or Memcached as our cache.

Another way to implement this is to track the latest action of the user; once the last activity crosses a certain threshold, such as "user hasn't performed any action in the last 30 seconds", we can show the user as offline with the last recorded timestamp. This is more of a lazy update approach and might benefit us over heartbeats in certain cases.
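A minimal sketch of the heartbeat variant, with timestamps held in a local Map; in production they would live in Redis or Memcached with a TTL, and the 30-second threshold is an assumed value.

```ts
const lastSeen = new Map<string, number>(); // userID -> ms since epoch
const ONLINE_THRESHOLD_MS = 30_000; // assumed 30-second window

// Called whenever a client heartbeat (or any action) arrives.
function heartbeat(userID: string): void {
  lastSeen.set(userID, Date.now());
}

function presence(userID: string): { online: boolean; lastSeen?: Date } {
  const ts = lastSeen.get(userID);
  if (ts === undefined) return { online: false };
  return {
    online: Date.now() - ts < ONLINE_THRESHOLD_MS,
    lastSeen: new Date(ts),
  };
}
```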

Notifications

Once a message is sent in a chat or a group, we will first check whether the recipient is active; we can determine this by taking the user's active connection and last seen timestamp into consideration.

If the recipient is not active, the chat service will add an event to a message queue with additional metadata such as the client's device platform which will be used to route the notification to the correct platform later on.

The notification service will then consume the event from the message queue and forward the request to Firebase Cloud Messaging (FCM) or Apple Push Notification Service (APNS) based on the client's device platform (Android, iOS, web, etc). We can also add support for email and SMS.

Why are we using a message queue?

Most message queues provide best-effort ordering, which ensures that messages are generally delivered in the same order as they're sent, and at-least-once delivery, both of which are important for our service's functionality.

While this seems like a classic publish-subscribe use case, it is actually not, as mobile devices and browsers each have their own way of handling push notifications. Usually, notifications are handled externally via Firebase Cloud Messaging (FCM) or Apple Push Notification Service (APNS), unlike the message fan-out we commonly see in backend services. We can use something like Amazon SQS or RabbitMQ to support this functionality.
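As a sketch of the consumer side, assuming the event shape below; the FCM and APNS senders are only declared here, since each has its own SDK and delivery semantics.

```ts
type Platform = "android" | "ios" | "web";

// Assumed shape of the event the chat service enqueues.
interface NotificationEvent {
  recipientID: string;
  platform: Platform; // routing metadata added by the chat service
  title: string;
  body: string;
}

// Consume an event from the queue and route it to the right provider.
function handleNotificationEvent(event: NotificationEvent): void {
  switch (event.platform) {
    case "android":
    case "web":
      sendViaFCM(event); // hypothetical wrapper around Firebase Cloud Messaging
      break;
    case "ios":
      sendViaAPNS(event); // hypothetical wrapper around Apple Push Notification Service
      break;
  }
}

declare function sendViaFCM(event: NotificationEvent): void;
declare function sendViaAPNS(event: NotificationEvent): void;
```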

Read receipts

Handling read receipts can be tricky. For this use case, we can wait for some sort of Acknowledgment (ACK) from the client to determine whether the message was delivered, and update the corresponding deliveredAt field. Similarly, we will mark the message as seen once the user opens the chat, and update the corresponding seenAt timestamp field.

Design

Now that we have identified some core components, let's do the first draft of our system design.

whatsapp-basic-design

Detailed design

It's time to discuss our design decisions in detail.

Data Partitioning

To scale out our databases we will need to partition our data. Horizontal partitioning (aka Sharding) can be a good first step. We can use partitioning schemes such as:

  • Hash-Based Partitioning
  • List-Based Partitioning
  • Range-Based Partitioning
  • Composite Partitioning

The above approaches can still cause uneven data and load distribution; we can solve this using Consistent hashing.

For more details, refer to Sharding and Consistent Hashing.

Caching

In a messaging application, we have to be careful about using cache, as our users expect the latest data; but since many users will be requesting the same messages, especially in a group chat, we can cache older messages to prevent usage spikes on our resources.

Some group chats can have thousands of messages, and sending all of them over the network would be really inefficient. To improve efficiency, we can add pagination to our system APIs. This decision will also be helpful for users with limited network bandwidth, as they won't have to retrieve old messages unless requested.

Which cache eviction policy to use?

We can use solutions like Redis or Memcached and cache 20% of the daily traffic but what kind of cache eviction policy would best fit our needs?

Least Recently Used (LRU) can be a good policy for our system. In this policy, we discard the least recently used key first.

How to handle cache miss?

Whenever there is a cache miss, our servers can hit the database directly and update the cache with the new entries.

For more details, refer to Caching.

Media access and storage

As we know, most of our storage space will be used for storing media files such as images, videos, or other files. Our media service will be handling both access and storage of the user media files.

But where can we store files at scale? Well, object storage is what we're looking for. Object stores break data files up into pieces called objects. It then stores those objects in a single repository, which can be spread out across multiple networked systems. We can also use distributed file storage such as HDFS or GlusterFS.

Fun fact: WhatsApp deletes media on its servers once it has been downloaded by the user.

We can use object stores like Amazon S3, Azure Blob Storage, or Google Cloud Storage for this use case.

Content Delivery Network (CDN)

Content Delivery Network (CDN) increases content availability and redundancy while reducing bandwidth costs. Generally, static files such as images, and videos are served from CDN. We can use services like Amazon CloudFront or Cloudflare CDN for this use case.

API gateway

Since we will be using multiple protocols like HTTP, WebSocket, TCP/IP, deploying multiple L4 (transport layer) or L7 (application layer) type load balancers separately for each protocol will be expensive. Instead, we can use an API Gateway that supports multiple protocols without any issues.

API Gateway can also offer other features such as authentication, authorization, rate limiting, throttling, and API versioning which will improve the quality of our services.

We can use services like Amazon API Gateway or Azure API Gateway for this use case.

Identify and resolve bottlenecks

whatsapp-advanced-design

Let us identify and resolve bottlenecks such as single points of failure in our design:

  • "What if one of our services crashes?"
  • "How will we distribute our traffic between our components?"
  • "How can we reduce the load on our database?"
  • "How to improve the availability of our cache?"
  • "Wouldn't API Gateway be a single point of failure?"
  • "How can we make our notification system more robust?"
  • "How can we reduce media storage costs"?
  • "Does chat service has too much responsibility?"

To make our system more resilient we can do the following:

  • Running multiple instances of each of our services.
  • Introducing load balancers between clients, servers, databases, and cache servers.
  • Using multiple read replicas for our databases.
  • Multiple instances and replicas for our distributed cache.
  • We can have a standby replica of our API Gateway.
  • Exactly-once delivery and message ordering are challenging in a distributed system; we can use a dedicated message broker such as Apache Kafka or NATS to make our notification system more robust.
  • We can add media processing and compression capabilities to the media service to compress large files similar to WhatsApp which will save a lot of storage space and reduce cost.
  • We can create a group service separate from the chat service to further decouple our services.

Twitter

Let's design a Twitter-like social media service, similar to services such as Facebook, Instagram, etc.

What is Twitter?

Twitter is a social media service where users can read or post short messages (up to 280 characters) called tweets. It is available on the web and mobile platforms such as Android and iOS.

Requirements

Our system should meet the following requirements:

Functional requirements

  • Should be able to post new tweets (can be text, image, video, etc.).
  • Should be able to follow other users.
  • Should have a newsfeed feature consisting of tweets from the people the user is following.
  • Should be able to search tweets.

Non-Functional requirements

  • High availability with minimal latency.
  • The system should be scalable and efficient.

Extended requirements

  • Metrics and analytics.
  • Retweet functionality.
  • Favorite tweets.

Estimation and Constraints

Let's start with the estimation and constraints.

Note: Make sure to check any scale or traffic-related assumptions with your interviewer.

Traffic

This will be a read-heavy system, let us assume we have 1 billion total users with 200 million daily active users (DAU), and on average each user tweets 5 times a day. This gives us 1 billion tweets per day.

$$ 200 \space million \times 5 \space tweets = 1 \space billion/day $$

Tweets can also contain media such as images or videos. We can assume that 10 percent of tweets are media files shared by the users, which gives us an additional 100 million files we would need to store.

$$ 10 \space percent \times 1 \space billion = 100 \space million/day $$

What would be Requests Per Second (RPS) for our system?

1 billion requests per day translate into 12K requests per second.

$$ \frac{1 \space billion}{(24 \space hrs \times 3600 \space seconds)} = \sim 12K \space requests/second $$

Storage

If we assume each tweet on average is 100 bytes, we will require about 100 GB of database storage every day.

$$ 1 \space billion \times 100 \space bytes = \sim 100 \space GB/day $$

We also know that around 10 percent of our daily tweets (100 million) contain media files, as per our assumptions. If we assume each file is 50 KB on average, we will require 5 TB of storage every day.

$$ 100 \space million \times 50 \space KB = 5 \space TB/day $$

And for 10 years, we will require about 19 PB of storage.

$$ (5 \space TB + 0.1 \space TB) \times 365 \space days \times 10 \space years = \sim 19 \space PB $$

Bandwidth

As our system is handling 5.1 TB of ingress every day, we will require a minimum bandwidth of around 60 MB per second.

$$ \frac{5.1 \space TB}{(24 \space hrs \times 3600 \space seconds)} = \sim 60 \space MB/second $$

High-level estimate

Here is our high-level estimate:

| Type | Estimate |
| --- | --- |
| Daily active users (DAU) | 200 million |
| Requests per second (RPS) | 12K/s |
| Storage (per day) | ~5.1 TB |
| Storage (10 years) | ~19 PB |
| Bandwidth | ~60 MB/s |

Data model design

This is the general data model which reflects our requirements.

twitter-datamodel

We have the following tables:

users

This table will contain a user's information such as name, email, dob, and other details.

tweets

As the name suggests, this table will store tweets and their properties such as type (text, image, video, etc.), content, etc. We will also store the corresponding userID.

favorites

This table maps tweets with users for the favorite tweets functionality in our application.

followers

This table maps the followers and followees as users can follow each other (N:M relationship).

feeds

This table stores feed properties with the corresponding userID.

feeds_tweets

This table maps tweets and feed (N:M relationship).

What kind of database should we use?

While our data model seems quite relational, we don't necessarily need to store everything in a single database, as this can limit our scalability and quickly become a bottleneck.

We will split the data between different services each having ownership over a particular table. Then we can use a relational database such as PostgreSQL or a distributed NoSQL database such as Apache Cassandra for our use case.

API design

Let us do a basic API design for our services:

Post a tweet

This API will allow the user to post a tweet on the platform.

postTweet(userID: UUID, content: string, mediaURL?: string): boolean

Parameters

User ID (UUID): ID of the user.

Content (string): Contents of the tweet.

Media URL (string): URL of the attached media (optional).

Returns

Result (boolean): Represents whether the operation was successful or not.

Follow or unfollow a user

This API will allow the user to follow or unfollow another user.

follow(followerID: UUID, followeeID: UUID): boolean
unfollow(followerID: UUID, followeeID: UUID): boolean

Parameters

Follower ID (UUID): ID of the current user.

Followee ID (UUID): ID of the user we want to follow or unfollow.

Returns

Result (boolean): Represents whether the operation was successful or not.

Get newsfeed

This API will return all the tweets to be shown within a given newsfeed.

getNewsfeed(userID: UUID): Tweet[]

Parameters

User ID (UUID): ID of the user.

Returns

Tweets (Tweet[]): All the tweets to be shown within a given newsfeed.

High-level design

Now let us do a high-level design of our system.

Architecture

We will be using microservices architecture since it will make it easier to horizontally scale and decouple our services. Each service will have ownership of its own data model. Let's try to divide our system into some core services.

User Service

This service handles user-related concerns such as authentication and user information.

Newsfeed Service

This service will handle the generation and publishing of user newsfeeds. It will be discussed in detail separately.

Tweet Service

The tweet service will handle tweet-related use cases such as posting a tweet, favorites, etc.

Search Service

The service is responsible for handling search-related functionality. It will be discussed in detail separately.

Media service

This service will handle the media (images, videos, files, etc.) uploads. It will be discussed in detail separately.

Notification Service

This service will simply send push notifications to the users.

Analytics Service

This service will be used for metrics and analytics use cases.

What about inter-service communication and service discovery?

Since our architecture is microservices-based, services will be communicating with each other as well. Generally, REST over HTTP performs well, but we can further improve performance using gRPC, which is more lightweight and efficient.

Service discovery is another thing we will have to take into account. We can also use a service mesh that enables managed, observable, and secure communication between individual services.

Note: Learn more about REST, GraphQL, gRPC and how they compare with each other.

Newsfeed

When it comes to the newsfeed, it seems easy enough to implement, but there are a lot of things that can make or break this feature. So, let's divide our problem into two parts:

Generation

Let's assume we want to generate the feed for user A, we will perform the following steps:

  1. Retrieve the IDs of all the users and entities (hashtags, topics, etc.) user A follows.
  2. Fetch the relevant tweets for each of the retrieved IDs.
  3. Use a ranking algorithm to rank the tweets based on parameters such as relevance, time, engagement, etc.
  4. Return the ranked tweets data to the client in a paginated manner.

Feed generation is an intensive process and can take quite a lot of time, especially for users following a lot of people. To improve the performance, the feed can be pre-generated and stored in the cache, then we can have a mechanism to periodically update the feed and apply our ranking algorithm to the new tweets.
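A sketch of those four steps, with the data-access and ranking helpers left as hypothetical declarations:

```ts
interface Tweet {
  id: string;
  userID: string;
  content: string;
  createdAt: number;
}

async function generateFeed(userID: string, page: number, pageSize = 20): Promise<Tweet[]> {
  const followedIDs = await getFollowedIDs(userID);             // step 1: who/what the user follows
  const tweets = await getRecentTweets(followedIDs);            // step 2: fetch relevant tweets
  const ranked = [...tweets].sort((a, b) => rank(b) - rank(a)); // step 3: apply the ranking algorithm
  return ranked.slice(page * pageSize, (page + 1) * pageSize);  // step 4: return a page of results
}

// Hypothetical service calls, not a real API.
declare function getFollowedIDs(userID: string): Promise<string[]>;
declare function getRecentTweets(ids: string[]): Promise<Tweet[]>;
declare function rank(tweet: Tweet): number;
```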

Publishing

Publishing is the step where the feed data is pushed to each specific user. This can be quite a heavy operation, as a user may have millions of friends or followers. To deal with this, we have three different approaches:

  • Pull Model (or Fan-out on load)

newsfeed-pull-model

When a user creates a tweet, and a follower reloads their newsfeed, the feed is created and stored in memory. The most recent feed is only loaded when the user requests it. This approach reduces the number of write operations on our database.

The downside of this approach is that the users will not be able to view recent feeds unless they "pull" the data from the server, which will increase the number of read operations on the server.

  • Push Model (or Fan-out on write)

newsfeed-push-model

In this model, once a user creates a tweet, it is "pushed" to all the follower's feeds immediately. This prevents the system from having to go through a user's entire followers list to check for updates.

However, the downside of this approach is that it would increase the number of write operations on the database.

  • Hybrid Model

A third approach is a hybrid model between the pull and push model. It combines the beneficial features of the above two models and tries to provide a balanced approach between the two.

The hybrid model allows only users with a smaller number of followers to use the push model; for users with a higher number of followers, such as celebrities, the pull model is used.
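A sketch of the hybrid decision on the write path; the 10,000-follower cutoff and the helper functions are assumptions.

```ts
const CELEBRITY_THRESHOLD = 10_000; // assumed cutoff

async function onTweetCreated(authorID: string, tweetID: string): Promise<void> {
  const followerCount = await getFollowerCount(authorID);
  if (followerCount < CELEBRITY_THRESHOLD) {
    // Push model: fan the tweet out to every follower's precomputed feed.
    const followers = await getFollowerIDs(authorID);
    await Promise.all(followers.map((id) => appendToFeed(id, tweetID)));
  }
  // Otherwise do nothing here: followers pull celebrity tweets at read time.
}

// Hypothetical service calls.
declare function getFollowerCount(userID: string): Promise<number>;
declare function getFollowerIDs(userID: string): Promise<string[]>;
declare function appendToFeed(userID: string, tweetID: string): Promise<void>;
```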

Ranking Algorithm

As we discussed, we will need a ranking algorithm to rank each tweet according to its relevance to each specific user.

For example, Facebook used to utilize the EdgeRank algorithm. Here, the rank of each feed item is described by:

$$ Rank = Affinity \times Weight \times Decay $$

Where,

Affinity: is the "closeness" of the user to the creator of the edge. If a user frequently likes, comments, or messages the edge creator, then the value of affinity will be higher, resulting in a higher rank for the post.

Weight: is the value assigned according to each edge type. A comment can have a higher weight than a like, and thus a post with more comments is more likely to get a higher rank.

Decay: is a measure of the age of the edge. The older the edge, the lower the value of decay and, eventually, the rank.
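As a toy illustration of the formula, with an assumed exponential time decay (the actual weighting was never public):

```ts
// Rank = Affinity x Weight x Decay, with an assumed exponential decay curve.
function edgeRank(affinity: number, weight: number, ageHours: number): number {
  const decay = Math.exp(-ageHours / 24); // older edges score lower (assumed curve)
  return affinity * weight * decay;
}

// A fresh comment from a close friend outranks an old like from a stranger:
// edgeRank(0.9, 3, 1) > edgeRank(0.2, 1, 48)
```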

Nowadays, algorithms are much more complex and ranking is done using machine learning models which can take thousands of factors into consideration.

Retweets

Retweets are one of our extended requirements. To implement this feature we can simply create a new tweet with the user id of the user retweeting the original tweet and then modify the type enum and content property of the new tweet to link it with the original tweet.

For example, the type enum property can be of type tweet, similar to text, video, etc., and content can be the id of the original tweet. Here, the first row indicates the original tweet while the second row shows how we can represent a retweet.

| id | userID | type | content | createdAt |
| --- | --- | --- | --- | --- |
| ad34-291a-45f6-b36c | 7a2c-62c4-4dc8-b1bb | text | Hey, this is my first tweet… | 1658905644054 |
| f064-49ad-9aa2-84a6 | 6aa2-2bc9-4331-879f | tweet | ad34-291a-45f6-b36c | 1658906165427 |

This is a very basic implementation; to improve it, we can create a separate table to store retweets.
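For illustration, the same two rows expressed as a TypeScript type and values:

```ts
type TweetType = "text" | "image" | "video" | "tweet";

interface Tweet {
  id: string;
  userID: string;
  type: TweetType;
  content: string; // for type "tweet", this holds the original tweet's id
  createdAt: number;
}

const original: Tweet = {
  id: "ad34-291a-45f6-b36c",
  userID: "7a2c-62c4-4dc8-b1bb",
  type: "text",
  content: "Hey, this is my first tweet…",
  createdAt: 1658905644054,
};

const retweet: Tweet = {
  id: "f064-49ad-9aa2-84a6",
  userID: "6aa2-2bc9-4331-879f",
  type: "tweet",
  content: original.id, // links back to the original tweet
  createdAt: 1658906165427,
};
```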

Search

Sometimes traditional DBMS are not performant enough; we need something that allows us to store, search, and analyze huge volumes of data quickly and in near real-time, returning results within milliseconds. Elasticsearch can help us with this use case.

Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. It is built on top of Apache Lucene.
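As a sketch, here is a full-text match query against Elasticsearch's standard _search endpoint; the index name, field name, and local address are assumptions.

```ts
// Query an assumed "tweets" index on a local Elasticsearch node.
async function searchTweets(query: string): Promise<unknown[]> {
  const response = await fetch("http://localhost:9200/tweets/_search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: { match: { content: query } }, // standard full-text match query
      size: 20,
    }),
  });
  const result = await response.json();
  return result.hits.hits; // matching documents with relevance scores
}
```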

How do we identify trending topics?

Trending functionality will be based on top of the search functionality. We can cache the most frequently searched queries, hashtags, and topics in the last N seconds and update them every M seconds using some sort of batch job mechanism. Our ranking algorithm can also be applied to the trending topics to give them more weight and personalize them for the user.

Notifications

Push notifications are an integral part of any social media platform. We can use a message queue or a message broker such as Apache Kafka with the notification service to dispatch requests to Firebase Cloud Messaging (FCM) or Apple Push Notification Service (APNS) which will handle the delivery of the push notifications to user devices.

For more details, refer to the WhatsApp system design where we discuss push notifications.

Detailed design

It's time to discuss our design decisions in detail.

Data Partitioning

To scale out our databases we will need to partition our data. Horizontal partitioning (aka Sharding) can be a good first step. We can use partitioning schemes such as:

  • Hash-Based Partitioning
  • List-Based Partitioning
  • Range-Based Partitioning
  • Composite Partitioning

The above approaches can still cause uneven data and load distribution; we can solve this using Consistent hashing.

For more details, refer to Sharding and Consistent Hashing.

Mutual friends

For mutual friends, we can build a social graph for every user. Each node in the graph will represent a user, and a directional edge will represent followers and followees. After that, we can traverse the followers of a user to find and suggest mutual friends. This would require a graph database such as Neo4j or ArangoDB.

This is a pretty simple algorithm; to improve our suggestion accuracy, we would need to incorporate a machine-learning-based recommendation model.

Metrics and Analytics

Recording analytics and metrics is one of our extended requirements. As we will be using Apache Kafka to publish all sorts of events, we can process these events and run analytics on the data using Apache Spark which is an open-source unified analytics engine for large-scale data processing.

Caching

In a social media application, we have to be careful about using cache, as our users expect the latest data. So, to prevent usage spikes on our resources, we can cache the top 20% of the tweets.

To further improve efficiency we can add pagination to our system APIs. This decision will be helpful for users with limited network bandwidth as they won't have to retrieve old messages unless requested.

Which cache eviction policy to use?

We can use solutions like Redis or Memcached and cache 20% of the daily traffic but what kind of cache eviction policy would best fit our needs?

Least Recently Used (LRU) can be a good policy for our system. In this policy, we discard the least recently used key first.

How to handle cache miss?

Whenever there is a cache miss, our servers can hit the database directly and update the cache with the new entries.

For more details, refer to Caching.

Media access and storage

As we know, most of our storage space will be used for storing media files such as images, videos, or other files. Our media service will be handling both access and storage of the user media files.

But where can we store files at scale? Well, object storage is what we're looking for. Object stores break data files up into pieces called objects. It then stores those objects in a single repository, which can be spread out across multiple networked systems. We can also use distributed file storage such as HDFS or GlusterFS.

Content Delivery Network (CDN)

Content Delivery Network (CDN) increases content availability and redundancy while reducing bandwidth costs. Generally, static files such as images, and videos are served from CDN. We can use services like Amazon CloudFront or Cloudflare CDN for this use case.

Identify and resolve bottlenecks

twitter-advanced-design

Let us identify and resolve bottlenecks such as single points of failure in our design:

  • "What if one of our services crashes?"
  • "How will we distribute our traffic between our components?"
  • "How can we reduce the load on our database?"
  • "How to improve the availability of our cache?"
  • "How can we make our notification system more robust?"
  • "How can we reduce media storage costs"?

To make our system more resilient we can do the following:

  • Running multiple instances of each of our services.
  • Introducing load balancers between clients, servers, databases, and cache servers.
  • Using multiple read replicas for our databases.
  • Multiple instances and replicas for our distributed cache.
  • Exactly-once delivery and message ordering are challenging in a distributed system; we can use a dedicated message broker such as Apache Kafka or NATS to make our notification system more robust.
  • We can add media processing and compression capabilities to the media service to compress large files which will save a lot of storage space and reduce cost.

Netflix

Let's design a Netflix-like video streaming service, similar to services like Amazon Prime Video, Disney Plus, Hulu, YouTube, Vimeo, etc.

What is Netflix?

Netflix is a subscription-based streaming service that allows its members to watch TV shows and movies on an internet-connected device. It is available on platforms such as the Web, iOS, Android, TV, etc.

Requirements

Our system should meet the following requirements:

Functional requirements

  • Users should be able to stream and share videos.
  • The content team (or users in YouTube's case) should be able to upload new videos (movies, TV show episodes, and other content).
  • Users should be able to search for videos using titles or tags.
  • Users should be able to comment on a video similar to YouTube.

Non-Functional requirements

  • High availability with minimal latency.
  • High reliability, no uploads should be lost.
  • The system should be scalable and efficient.

Extended requirements

  • Certain content should be geo-blocked.
  • Resume video playback from the point user left off.
  • Record metrics and analytics of videos.

Estimation and Constraints

Let's start with the estimation and constraints.

Note: Make sure to check any scale or traffic-related assumptions with your interviewer.

Traffic

This will be a read-heavy system, let us assume we have 1 billion total users with 200 million daily active users (DAU), and on average each user watches 5 videos a day. This gives us 1 billion videos watched per day.

$$ 200 \space million \times 5 \space videos = 1 \space billion/day $$

Assuming a 200:1 read/write ratio, about 5 million videos will be uploaded every day.

$$ \frac{1}{200} \times 1 \space billion = 5 \space million/day $$

What would be Requests Per Second (RPS) for our system?

1 billion requests per day translate into 12K requests per second.

$$ \frac{1 \space billion}{(24 \space hrs \times 3600 \space seconds)} = \sim 12K \space requests/second $$

Storage

If we assume each video is 100 MB on average, we will require about 500 TB of storage every day.

$$ 5 \space million \times 100 \space MB = 500 \space TB/day $$

And for 10 years, we will require an astounding 1,825 PB of storage.

$$ 500 \space TB \times 365 \space days \times 10 \space years = \sim 1,825 \space PB $$

Bandwidth

As our system is handling 500 TB of ingress every day, we will require a minimum bandwidth of around 6 GB per second.

$$ \frac{500 \space TB}{(24 \space hrs \times 3600 \space seconds)} = \sim 6 \space GB/second $$

High-level estimate

Here is our high-level estimate:

| Type | Estimate |
| --- | --- |
| Daily active users (DAU) | 200 million |
| Requests per second (RPS) | 12K/s |
| Storage (per day) | ~500 TB |
| Storage (10 years) | ~1,825 PB |
| Bandwidth | ~6 GB/s |

Data model design

This is the general data model which reflects our requirements.

netflix-datamodel

We have the following tables:

users

This table will contain a user's information such as name, email, dob, and other details.

videos

As the name suggests, this table will store videos and their properties such as title, streamURL, tags, etc. We will also store the corresponding userID.

tags

This table will simply store tags associated with a video.

views

This table helps us to store all the views received on a video.

comments

This table stores all the comments received on a video (like YouTube).

What kind of database should we use?

While our data model seems quite relational, we don't necessarily need to store everything in a single database, as this can limit our scalability and quickly become a bottleneck.

We will split the data between different services each having ownership over a particular table. Then we can use a relational database such as PostgreSQL or a distributed NoSQL database such as Apache Cassandra for our use case.

API design

Let us do a basic API design for our services:

Upload a video

Given a byte stream, this API enables video to be uploaded to our service.

uploadVideo(title: string, description: string, data: Stream<byte>, tags?: string[]): boolean

Parameters

Title (string): Title of the new video.

Description (string): Description of the new video.

Data (Stream<byte>): Byte stream of the video data.

Tags (string[]): Tags for the video (optional).

Returns

Result (boolean): Represents whether the operation was successful or not.

Streaming a video

This API allows our users to stream a video with the preferred codec and resolution.

streamVideo(videoID: UUID, codec: Enum<string>, resolution: Tuple<int>, offset?: int): VideoStream

Parameters

Video ID (UUID): ID of the video that needs to be streamed.

Codec (Enum<string>): Required codec of the requested video, such as h.265, h.264, VP9, etc.

Resolution (Tuple<int>): Resolution of the requested video.

Offset (int): Offset of the video stream in seconds to stream data from any point in the video (optional).

Returns

Stream (VideoStream): Data stream of the requested video.

Search for a video

This API will enable our users to search for a video based on its title or tags.

searchVideo(query: string, nextPage?: string): Video[]

Parameters

Query (string): Search query from the user.

Next Page (string): Token for the next page, this can be used for pagination (optional).

Returns

Videos (Video[]): All the videos available for a particular search query.

Add a comment

This API will allow our users to post a comment on a video (like YouTube).

comment(videoID: UUID, comment: string): boolean

Parameters

VideoID (UUID): ID of the video user wants to comment on.

Comment (string): The text content of the comment.

Returns

Result (boolean): Represents whether the operation was successful or not.

High-level design

Now let us do a high-level design of our system.

Architecture

We will be using microservices architecture since it will make it easier to horizontally scale and decouple our services. Each service will have ownership of its own data model. Let's try to divide our system into some core services.

User Service

This service handles user-related concerns such as authentication and user information.

Stream Service

The stream service will handle video streaming-related functionality.

Search Service

The service is responsible for handling search-related functionality. It will be discussed in detail separately.

Media service

This service will handle the video uploads and processing. It will be discussed in detail separately.

Analytics Service

This service will be used for metrics and analytics use cases.

What about inter-service communication and service discovery?

Since our architecture is microservices-based, services will be communicating with each other as well. Generally, REST over HTTP performs well, but we can further improve performance using gRPC, which is more lightweight and efficient.

Service discovery is another thing we will have to take into account. We can also use a service mesh that enables managed, observable, and secure communication between individual services.

Note: Learn more about REST, GraphQL, gRPC and how they compare with each other.

Video processing

There are many variables in play when it comes to processing a video. For example, two hours of raw 8K footage from a high-end camera can easily take up to 4 TB, so we need some kind of processing to reduce both storage and delivery costs.

Here's how we can process videos once they're uploaded by the content team (or users in YouTube's case) and are queued for processing in our message queue.

video-processing-pipeline

Let's discuss how this works:

  • File Chunker

This is the first step of our processing pipeline. File chunking is the process of splitting a file into smaller pieces called chunks. It can help us eliminate duplicate copies of repeating data on storage, and reduces the amount of data sent over the network by only selecting changed chunks.

Usually, a video file can be split into equal-size chunks based on timestamps, but Netflix instead splits chunks based on scenes. This slight variation becomes a huge factor for a better user experience: whenever the client requests a chunk from the server, there is a lower chance of interruption, as a complete scene is retrieved.

file-chunking

  • Content Filter

This step checks whether the video adheres to the content policy of the platform. This can be pre-approved, as in the case of Netflix (based on the content rating of the media), or strictly enforced, as on YouTube.

This entire step is done by a machine learning model which performs copyright, piracy, and NSFW checks. If issues are found, we can push the task to a dead-letter queue (DLQ) and someone from the moderation team can do further inspection.

  • Transcoder

Transcoding is a process in which the original data is decoded to an intermediate uncompressed format, which is then encoded into the target format. This process uses different codecs to perform bitrate adjustment, image downsampling, or re-encoding the media.

This results in a smaller file size and a much more optimized format for the target devices. Standalone solutions such as FFmpeg or cloud-based solutions like AWS Elemental MediaConvert can be used to implement this step of the pipeline (see the sketch after this list).

  • Quality Conversion

This is the last step of the processing pipeline and as the name suggests, this step handles the conversion of the transcoded media from the previous step into different resolutions such as 4K, 1440p, 1080p, 720p, etc.

This allows us to fetch the desired quality of the video as per the user's request, and once the media file finishes processing, it will be uploaded to a distributed file storage such as HDFS, GlusterFS, or an object storage such as Amazon S3 for later retrieval during streaming.

Note: We can add additional steps such as subtitles and thumbnails generation as part of our pipeline.
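As a concrete illustration of the transcoding and quality-conversion steps, here is a minimal sketch (TypeScript on Node.js) of a worker invoking FFmpeg to produce a 720p H.264 rendition. The file paths, bitrate, and codec choices are assumptions for illustration, not a prescribed configuration.

```ts
// A hedged sketch of invoking FFmpeg from a Node.js worker to transcode a
// chunk to H.264 at 720p. Input/output paths and encoding settings are
// placeholders for illustration.
import { spawn } from "node:child_process";

function transcodeTo720p(input: string, output: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const ffmpeg = spawn("ffmpeg", [
      "-i", input,           // source chunk
      "-c:v", "libx264",     // target video codec
      "-b:v", "3M",          // bitrate adjustment
      "-vf", "scale=-2:720", // downsample to 720p, keeping aspect ratio
      "-c:a", "aac",         // re-encode audio
      output,
    ]);
    ffmpeg.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`ffmpeg exited with ${code}`))
    );
  });
}
```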

Why are we using a message queue?

Processing videos as a long-running task makes much more sense, and a message queue also decouples our video processing pipeline from the uploads functionality. We can use something like Amazon SQS or RabbitMQ to support this.
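For example, here is a hedged sketch of how the upload path might enqueue a processing task with Amazon SQS once a video lands in storage. The queue URL environment variable and the message shape are hypothetical.

```ts
// A minimal sketch (TypeScript, AWS SDK v3) of enqueueing a processing task
// after an upload completes. Region, queue URL, and message fields are
// assumptions for illustration.
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" }); // assumed region

async function enqueueForProcessing(videoId: string, storagePath: string): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: process.env.VIDEO_PROCESSING_QUEUE_URL!, // hypothetical env var
      MessageBody: JSON.stringify({ videoId, storagePath, enqueuedAt: Date.now() }),
    })
  );
}
```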

Video streaming

Video streaming is a challenging task from both the client and server perspectives. Moreover, internet connection speeds vary quite a lot between different users. To make sure users don't re-fetch the same content, we can use a Content Delivery Network (CDN).

Netflix takes this a step further with its Open Connect program. In this approach, they partner with thousands of Internet Service Providers (ISPs) to localize their traffic and deliver their content more efficiently.

What is the difference between Netflix's Open Connect and a traditional Content Delivery Network (CDN)?

Netflix Open Connect is Netflix's purpose-built Content Delivery Network (CDN) responsible for serving its video traffic. Around 95% of the traffic globally is delivered via direct connections between Open Connect and the ISPs their customers use to access the internet.

Currently, Netflix has Open Connect Appliances (OCAs) in over 1000 separate locations around the world. In case of issues, Open Connect Appliances (OCAs) can fail over, and the traffic can be re-routed to Netflix's servers.

Additionally, we can use adaptive bitrate streaming protocols such as HTTP Live Streaming (HLS), which is designed for reliability and dynamically adapts to network conditions by optimizing playback for the available connection speed.

Lastly, for playing the video from where the user left off (part of our extended requirements), we can simply use the offset property we stored in the views table to retrieve the scene chunk at that particular timestamp and resume the playback for the user.
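As a rough sketch of that lookup, assuming the chunk start times produced by the scene-based chunker are available as per-video metadata:

```ts
// A minimal sketch of resuming playback: given the chunk start times (in
// seconds), find the chunk covering the saved offset via binary search.
// The boundaries array stands in for metadata we would load per video.
function chunkIndexForOffset(chunkStarts: number[], offsetSeconds: number): number {
  let lo = 0, hi = chunkStarts.length - 1, ans = 0;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (chunkStarts[mid] <= offsetSeconds) { ans = mid; lo = mid + 1; }
    else hi = mid - 1;
  }
  return ans;
}

// e.g. scene chunks starting at 0s, 42s, 95s; an offset of 60s resumes at chunk 1
chunkIndexForOffset([0, 42, 95], 60); // -> 1
```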

Searching

Traditional DBMSs are sometimes not performant enough; we need something that allows us to store, search, and analyze huge volumes of data quickly and in near real-time, returning results within milliseconds. Elasticsearch can help us with this use case.

Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. It is built on top of Apache Lucene.

How do we identify trending content?

Trending functionality will be based on top of the search functionality. We can cache the most frequently searched queries in the last N seconds and update them every M seconds using some sort of batch job mechanism.
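One way to sketch this, assuming a Redis instance accessed via the ioredis client, is to keep query counts in a sorted set that a batch job periodically snapshots; the key names here are illustrative.

```ts
// A hedged sketch of the trending approach described above: count query
// frequency in a Redis sorted set and read the top entries.
import Redis from "ioredis";

const redis = new Redis();

async function recordSearch(query: string): Promise<void> {
  await redis.zincrby("search:counts", 1, query.toLowerCase());
}

async function topQueries(n: number): Promise<string[]> {
  // highest-scored (most searched) queries first
  return redis.zrevrange("search:counts", 0, n - 1);
}
```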

Sharing

Sharing content is an important part of any platform; for this, we can have some sort of URL shortener service in place that can generate short URLs for the users to share.

For more details, refer to the URL Shortener system design.

Detailed design

It's time to discuss our design decisions in detail.

Data Partitioning

To scale out our databases, we will need to partition our data. Horizontal partitioning (aka Sharding) can be a good first step. We can use partitioning schemes such as:

  • Hash-Based Partitioning
  • List-Based Partitioning
  • Range-Based Partitioning
  • Composite Partitioning

The above approaches can still cause uneven data and load distribution; we can solve this using Consistent hashing.

For more details, refer to Sharding and Consistent Hashing.
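To make the idea concrete, here is a minimal consistent-hash ring sketch in TypeScript; a production implementation would use a stronger hash, binary search over the ring, and tuned virtual node counts.

```ts
// A minimal consistent-hash ring sketch. Keys map to the first ring point
// clockwise from their hash, so adding/removing a node only remaps the keys
// adjacent to it rather than reshuffling everything.
import { createHash } from "node:crypto";

function hashToInt(key: string): number {
  return parseInt(createHash("md5").update(key).digest("hex").slice(0, 8), 16);
}

class ConsistentHashRing {
  private ring: { point: number; node: string }[] = [];

  addNode(node: string, vnodes = 100): void {
    // virtual nodes smooth out the distribution across physical nodes
    for (let i = 0; i < vnodes; i++) {
      this.ring.push({ point: hashToInt(`${node}#${i}`), node });
    }
    this.ring.sort((a, b) => a.point - b.point);
  }

  nodeFor(key: string): string {
    const h = hashToInt(key);
    // linear scan for clarity; a real ring would binary search
    const entry = this.ring.find((e) => e.point >= h) ?? this.ring[0];
    return entry.node;
  }
}

const ring = new ConsistentHashRing();
["db-1", "db-2", "db-3"].forEach((n) => ring.addNode(n));
ring.nodeFor("video:1234"); // -> a stable owner among db-1..db-3
```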

Geo-blocking

Platforms like Netflix and YouTube use Geo-blocking to restrict content in certain geographical areas or countries. This is primarily done due to legal distribution laws that Netflix has to adhere to when they make a deal with the production and distribution companies. In the case of YouTube, this will be controlled by the user during the publishing of the content.

We can determine the user's location using either their IP address or the region settings in their profile, then use services like Amazon CloudFront, which supports a geographic restrictions feature, or a geolocation routing policy with Amazon Route53 to restrict the content and re-route the user to an error page if the content is not available in that particular region or country.
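The application-level check itself can be as simple as the following sketch, assuming the per-title allowed-region metadata and the user's resolved region are available elsewhere in our system:

```ts
// A minimal geo-block check. Region resolution (IP lookup or profile
// setting) and the allowed-region metadata are assumed to exist elsewhere.
function isPlayable(allowedRegions: Set<string>, userRegion: string): boolean {
  return allowedRegions.has(userRegion);
}

// e.g. a title licensed only for US and IN
isPlayable(new Set(["US", "IN"]), "GB"); // -> false, route to an error page
```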

Recommendations

Netflix uses a machine learning model that uses the user's viewing history to predict what the user might like to watch next; an algorithm like Collaborative Filtering can be used.

However, Netflix (like YouTube) uses its own algorithm called Netflix Recommendation Engine which can track several data points such as:

  • User profile information like age, gender, and location.
  • Browsing and scrolling behavior of the user.
  • Time and date a user watched a title.
  • The device which was used to stream the content.
  • The number of searches and what terms were searched.

For more detail, refer to Netflix recommendation research.

Metrics and Analytics

Recording analytics and metrics is one of our extended requirements. We can capture the data from different services and run analytics on the data using Apache Spark which is an open-source unified analytics engine for large-scale data processing. Additionally, we can store critical metadata in the views table to increase data points within our data.

Caching

In a streaming platform, caching is important. We have to be able to cache as much static media content as possible to improve user experience. We can use solutions like Redis or Memcached but what kind of cache eviction policy would best fit our needs?

Which cache eviction policy to use?

Least Recently Used (LRU) can be a good policy for our system. In this policy, we discard the least recently used key first.

How to handle cache miss?

Whenever there is a cache miss, our servers can hit the database directly and update the cache with the new entries.

For more details, refer to Caching.
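Putting the two answers together, here is a hedged cache-aside sketch using ioredis; the key scheme, TTL, and the fetchFromDb stand-in are assumptions for illustration.

```ts
// A cache-aside sketch for the miss handling described above: check the
// cache first, fall back to the database, then update the cache.
import Redis from "ioredis";

const cache = new Redis();

async function getVideoMetadata(videoId: string): Promise<string> {
  const key = `video:meta:${videoId}`;
  const hit = await cache.get(key);
  if (hit !== null) return hit; // cache hit

  const fresh = await fetchFromDb(videoId);  // cache miss: hit the database
  await cache.set(key, fresh, "EX", 3600);   // update cache with a 1h TTL;
                                             // the server's eviction policy
                                             // (e.g. LRU) handles capacity
  return fresh;
}

// hypothetical database accessor
declare function fetchFromDb(videoId: string): Promise<string>;
```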

Media streaming and storage

Most of our storage space will be used for storing media files such as thumbnails and videos. As discussed earlier, the media service will handle both the upload and processing of media files.

We will use distributed file storage such as HDFS, GlusterFS, or an object storage such as Amazon S3 for storage and streaming of the content.

Content Delivery Network (CDN)

Content Delivery Network (CDN) increases content availability and redundancy while reducing bandwidth costs. Generally, static files such as images and videos are served from the CDN. We can use services like Amazon CloudFront or Cloudflare CDN for this use case.

Identify and resolve bottlenecks

netflix-advanced-design

Let us identify and resolve bottlenecks such as single points of failure in our design:

  • "What if one of our services crashes?"
  • "How will we distribute our traffic between our components?"
  • "How can we reduce the load on our database?"
  • "How to improve the availability of our cache?"

To make our system more resilient we can do the following:

  • Running multiple instances of each of our services.
  • Introducing load balancers between clients, servers, databases, and cache servers.
  • Using multiple read replicas for our databases.
  • Multiple instances and replicas for our distributed cache.

Uber

Let's design an Uber like ride-hailing service, similar to services like Lyft, OLA Cabs, etc.

What is Uber?

Uber is a mobility service provider, allowing users to book rides and a driver to transport them in a way similar to a taxi. It is available on the web and mobile platforms such as Android and iOS.

Requirements

Our system should meet the following requirements:

Functional requirements

We will design our system for two types of users: Customers and Drivers.

Customers

  • Customers should be able to see all the cabs in the vicinity with an ETA and pricing information.
  • Customers should be able to book a cab to a destination.
  • Customers should be able to see the location of the driver.

Drivers

  • Drivers should be able to accept or deny the customer requested ride.
  • Once a driver accepts the ride, they should see the pickup location of the customer.
  • Drivers should be able to mark the trip as complete on reaching the destination.

Non-Functional requirements

  • High reliability.
  • High availability with minimal latency.
  • The system should be scalable and efficient.

Extended requirements

  • Customers can rate the trip after it's completed.
  • Payment processing.
  • Metrics and analytics.

Estimation and Constraints

Let's start with the estimation and constraints.

Note: Make sure to check any scale or traffic-related assumptions with your interviewer.

Traffic

Let us assume we have 100 million daily active users (DAU) with 1 million drivers and on average our platform enables 10 million rides daily.

If on average each user performs 10 actions (such as checking available rides and fares, booking rides, etc.) we will have to handle 1 billion requests daily.

$$ 100 \space million \times 10 \space actions = 1 \space billion/day $$

What would be Requests Per Second (RPS) for our system?

1 billion requests per day translate into 12K requests per second.

$$ \frac{1 \space billion}{(24 \space hrs \times 3600 \space seconds)} = \sim 12K \space requests/second $$

Storage

If we assume each request on average generates 400 bytes of data, we will require about 400 GB of database storage every day.

$$ 1 \space billion \times 400 \space bytes = \sim 400 \space GB/day $$

And for 10 years, we will require about 1.4 PB of storage.

$$ 400 \space GB \times 10 \space years \times 365 \space days = \sim 1.4 \space PB $$

Bandwidth

As our system is handling 400 GB of ingress every day, we will require a minimum bandwidth of around 5 MB per second.

$$ \frac{400 \space GB}{(24 \space hrs \times 3600 \space seconds)} = \sim 5 \space MB/second $$

High-level estimate

Here is our high-level estimate:

| Type | Estimate |
| ---- | -------- |
| Daily active users (DAU) | 100 million |
| Requests per second (RPS) | 12K/s |
| Storage (per day) | ~400 GB |
| Storage (10 years) | ~1.4 PB |
| Bandwidth | ~5 MB/s |

Data model design

This is the general data model which reflects our requirements.

uber-datamodel

We have the following tables:

customers

This table will contain a customer's information such as name, email, and other details.

drivers

This table will contain a driver's information such as name, email, DOB, and other details.

trips

This table represents the trip taken by the customer and stores data such as source, destination, and status of the trip.

cabs

This table stores data such as the registration number, and type (like Uber Go, Uber XL, etc.) of the cab that the driver will be driving.

ratings

As the name suggests, this table stores the rating and feedback for the trip.

payments

The payments table contains the payment-related data with the corresponding tripID.

What kind of database should we use?

While our data model seems quite relational, we don't necessarily need to store everything in a single database, as this can limit our scalability and quickly become a bottleneck.

We will split the data between different services each having ownership over a particular table. Then we can use a relational database such as PostgreSQL or a distributed NoSQL database such as Apache Cassandra for our use case.

API design

Let us do a basic API design for our services:

Request a Ride

Through this API, customers will be able to request a ride.

requestRide(customerID: UUID, source: Tuple<float>, destination: Tuple<float>, cabType: Enum<string>, paymentMethod: Enum<string>): Ride

Parameters

Customer ID (UUID): ID of the customer.

Source (Tuple<float>): Tuple containing the latitude and longitude of the trip's starting location.

Destination (Tuple<float>): Tuple containing the latitude and longitude of the trip's destination.

Cab Type (Enum<string>): Type of cab requested, such as Uber Go, Uber XL, etc.

Payment Method (Enum<string>): Payment method for the ride.

Returns

Ride (Ride): Details of the requested ride, which nearby drivers can then accept or deny.

Cancel the Ride

This API will allow customers to cancel the ride.

cancelRide(customerID: UUID, reason?: string): boolean

Parameters

Customer ID (UUID): ID of the customer.

Reason (string): Reason for canceling the ride (optional).

Returns

Result (boolean): Represents whether the operation was successful or not.

Accept or Deny the Ride

This API will allow the driver to accept or deny the trip.

acceptRide(driverID: UUID, rideID: UUID): boolean
denyRide(driverID: UUID, rideID: UUID): boolean

Parameters

Driver ID (UUID): ID of the driver.

Ride ID (UUID): ID of the customer requested ride.

Returns

Result (boolean): Represents whether the operation was successful or not.

Start or End the Trip

Using this API, a driver will be able to start and end the trip.

startTrip(driverID: UUID, tripID: UUID): boolean
endTrip(driverID: UUID, tripID: UUID): boolean

Parameters

Driver ID (UUID): ID of the driver.

Trip ID (UUID): ID of the requested trip.

Returns

Result (boolean): Represents whether the operation was successful or not.

Rate the Trip

This API will enable customers to rate the trip.

rateTrip(customerID: UUID, tripID: UUID, rating: int, feedback?: string): boolean

Parameters

Customer ID (UUID): ID of the customer.

Trip ID (UUID): ID of the completed trip.

Rating (int): Rating of the trip.

Feedback (string): Feedback about the trip by the customer (optional).

Returns

Result (boolean): Represents whether the operation was successful or not.

High-level design

Now let us do a high-level design of our system.

Architecture

We will be using microservices architecture since it will make it easier to horizontally scale and decouple our services. Each service will have ownership of its own data model. Let's try to divide our system into some core services.

Customer Service

This service handles customer-related concerns such as authentication and customer information.

Driver Service

This service handles driver-related concerns such as authentication and driver information.

Ride Service

This service will be responsible for ride matching and quadtree aggregation. It will be discussed in detail separately.

Trip Service

This service handles trip-related functionality in our system.

Payment Service

This service will be responsible for handling payments in our system.

Notification Service

This service will simply send push notifications to the users. It will be discussed in detail separately.

Analytics Service

This service will be used for metrics and analytics use cases.

What about inter-service communication and service discovery?

Since our architecture is microservices-based, services will also be communicating with each other. Generally, REST over HTTP performs well, but we can further improve performance using gRPC, which is more lightweight and efficient.

Service discovery is another thing we will have to take into account. We can also use a service mesh that enables managed, observable, and secure communication between individual services.

Note: Learn more about REST, GraphQL, gRPC and how they compare with each other.

How is the service expected to work?

Here's how our service is expected to work:

uber-working

  1. Customer requests a ride by specifying the source, destination, cab type, payment method, etc.
  2. Ride service registers this request, finds nearby drivers, and calculates the estimated time of arrival (ETA).
  3. The request is then broadcasted to the nearby drivers for them to accept or deny.
  4. If the driver accepts, the customer is notified about the live location of the driver with the estimated time of arrival (ETA) while they wait for pickup.
  5. The customer is picked up and the driver can start the trip.
  6. Once the destination is reached, the driver will mark the ride as complete and collect payment.
  7. After the payment is complete, the customer can leave a rating and feedback for the trip if they like.

Location Tracking

How do we efficiently send and receive live location data from the client (customers and drivers) to our backend? We have two different options:

Pull model

The client can periodically send an HTTP request to servers to report its current location and receive ETA and pricing information. This can be achieved via something like Long polling.

Push model

The client opens a long-lived connection with the server and once new data is available it will be pushed to the client. We can use WebSockets or Server-Sent Events (SSE) for this.

The pull model approach is not scalable as it will create unnecessary request overhead on our servers and most of the time the response will be empty, thus wasting our resources. To minimize latency, using the push model with WebSockets is a better choice because then we can push data to the client once it's available without any delay given the connection is open with the client. Also, WebSockets provide full-duplex communication, unlike Server-Sent Events (SSE) which are only unidirectional.

Additionally, the client application should have some sort of background job mechanism to ping GPS location while the application is in the background.
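Here is a minimal sketch of the push model using the ws library in TypeScript; the message shapes and the ride-to-socket mapping are assumptions for illustration.

```ts
// A sketch of pushing live driver locations over WebSockets: customers
// subscribe to a ride, drivers report locations, and the server forwards
// each update to the waiting customer.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });
const customersByRide = new Map<string, WebSocket>();

wss.on("connection", (socket) => {
  socket.on("message", (raw) => {
    const msg = JSON.parse(raw.toString()); // assumed message shape
    if (msg.type === "subscribe") {
      customersByRide.set(msg.rideID, socket); // customer watching a ride
    } else if (msg.type === "driverLocation") {
      // push the driver's live location to the subscribed customer, if any
      customersByRide.get(msg.rideID)?.send(
        JSON.stringify({ type: "location", lat: msg.lat, lng: msg.lng })
      );
    }
  });
});
```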

Note: Learn more about Long polling, WebSockets, Server-Sent Events (SSE).

Ride Matching

We need a way to efficiently store and query nearby drivers. Let's explore different solutions we can incorporate into our design.

SQL

We already have access to the latitude and longitude of our customers, and with databases like PostgreSQL and MySQL we can perform a query to find nearby driver locations given a latitude and longitude (X, Y) within a radius (R).

SELECT * FROM locations WHERE lat BETWEEN X-R AND X+R AND long BETWEEN Y-R AND Y+R

However, this approach is not scalable; performing this query on large datasets will be quite slow, and the bounding box above only approximates a true radius search.

Geohashing

Geohashing is a geocoding method used to encode geographic coordinates such as latitude and longitude into short alphanumeric strings. It was created by Gustavo Niemeyer in 2008.

Geohash is a hierarchical spatial index that uses Base-32 alphabet encoding: the first character in a geohash identifies the initial location as one of 32 cells, and each of those cells in turn contains 32 cells. This means that to represent a point, the world is recursively divided into smaller and smaller cells with each additional bit until the desired precision is attained; the precision factor also determines the size of the cell.

geohashing

For example, San Francisco with coordinates 37.7564, -122.4016 can be represented in geohash as 9q8yy9mf.

Now, using the customer's geohash we can determine the nearest available driver by simply comparing it with the driver's geohash. For better performance, we will index and store the geohash of the driver in memory for faster retrieval.
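For illustration, here is a compact geohash encoder sketch in TypeScript along with a naive prefix comparison. Note that plain prefix matching misses neighbors sitting just across a cell boundary, so real systems also query the eight neighboring cells.

```ts
// Standard geohash Base-32 alphabet (digits plus letters, excluding a, i, l, o).
const BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

function geohashEncode(lat: number, lon: number, precision = 8): string {
  const latRange: [number, number] = [-90, 90];
  const lonRange: [number, number] = [-180, 180];
  let hash = "";
  let bits = 0, bit = 0, evenBit = true; // bits alternate: lon, lat, lon, ...

  while (hash.length < precision) {
    const range = evenBit ? lonRange : latRange;
    const value = evenBit ? lon : lat;
    const mid = (range[0] + range[1]) / 2;
    bits <<= 1;
    if (value >= mid) { bits |= 1; range[0] = mid; } else { range[1] = mid; }
    evenBit = !evenBit;
    if (++bit === 5) { hash += BASE32[bits]; bits = 0; bit = 0; } // 5 bits per char
  }
  return hash;
}

geohashEncode(37.7564, -122.4016); // -> "9q8yy9mf" (San Francisco)

// Drivers sharing a longer common prefix are closer together.
function isNearby(a: string, b: string, prefixLen = 5): boolean {
  return a.slice(0, prefixLen) === b.slice(0, prefixLen);
}
```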

Quadtrees

A Quadtree is a tree data structure in which each internal node has exactly four children. They are often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions. Each child or leaf node stores spatial information. Quadtrees are the two-dimensional analog of Octrees which are used to partition three-dimensional space.

quadtree

Quadtrees enable us to search points within a two-dimensional range efficiently, where those points are defined as latitude/longitude coordinates or as cartesian (x, y) coordinates.

We can save further computation by only subdividing a node after a certain threshold.

quadtree-subdivision

A Quadtree seems perfect for our use case; we can update the Quadtree every time we receive a new location update from the driver. To reduce the load on the quadtree servers we can use an in-memory datastore such as Redis to cache the latest updates. And with the application of mapping algorithms such as the Hilbert curve, we can perform efficient range queries to find nearby drivers for the customer.
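Here is a compact point-quadtree sketch that subdivides only past a capacity threshold and answers bounding-box range queries; treating coordinates as planar (x, y) and the capacity value are simplifications for illustration.

```ts
type Point = { id: string; x: number; y: number };          // x = lon, y = lat
type Box = { x: number; y: number; w: number; h: number };  // center + half-extents

class QuadTree {
  private points: Point[] = [];           // points kept in this node
  private children: QuadTree[] | null = null;
  constructor(private boundary: Box, private capacity = 4) {}

  private contains(b: Box, p: Point): boolean {
    return Math.abs(p.x - b.x) <= b.w && Math.abs(p.y - b.y) <= b.h;
  }
  private intersects(a: Box, b: Box): boolean {
    return Math.abs(a.x - b.x) <= a.w + b.w && Math.abs(a.y - b.y) <= a.h + b.h;
  }

  insert(p: Point): boolean {
    if (!this.contains(this.boundary, p)) return false;
    if (this.points.length < this.capacity && !this.children) {
      this.points.push(p);
      return true;
    }
    if (!this.children) this.subdivide(); // split only past the threshold
    return this.children!.some((c) => c.insert(p));
  }

  private subdivide(): void {
    const { x, y, w, h } = this.boundary;
    const hw = w / 2, hh = h / 2;
    this.children = [
      new QuadTree({ x: x - hw, y: y - hh, w: hw, h: hh }, this.capacity),
      new QuadTree({ x: x + hw, y: y - hh, w: hw, h: hh }, this.capacity),
      new QuadTree({ x: x - hw, y: y + hh, w: hw, h: hh }, this.capacity),
      new QuadTree({ x: x + hw, y: y + hh, w: hw, h: hh }, this.capacity),
    ];
  }

  // collect all points inside the query box, descending only into
  // children whose boundary overlaps the query
  rangeQuery(range: Box, found: Point[] = []): Point[] {
    if (!this.intersects(this.boundary, range)) return found;
    for (const p of this.points) if (this.contains(range, p)) found.push(p);
    this.children?.forEach((c) => c.rangeQuery(range, found));
    return found;
  }
}
```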

What about race conditions?

Race conditions can easily occur when a large number of customers request rides simultaneously. To avoid this, we can wrap our ride-matching logic in a Mutex to avoid any race conditions. Furthermore, every action should be transactional in nature.

For more details, refer to Transactions and Distributed Transactions.
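As one way to realize that mutex across multiple matching workers, here is a hedged sketch of a per-ride lock in Redis (SET with NX/PX via ioredis); the key names and TTL are illustrative.

```ts
// Guard the matching step with a per-ride lock so two workers can't assign
// the same ride twice. The lock auto-expires (PX) to avoid deadlocks.
import Redis from "ioredis";
import { randomUUID } from "node:crypto";

const redis = new Redis();

async function matchRideExclusively(rideID: string, doMatch: () => Promise<void>): Promise<boolean> {
  const token = randomUUID();
  // acquire: succeeds only if nobody else holds the lock (NX)
  const ok = await redis.set(`lock:ride:${rideID}`, token, "PX", 5000, "NX");
  if (ok !== "OK") return false; // another worker is matching this ride

  try {
    await doMatch();
    return true;
  } finally {
    // release only if we still own the lock (compare-and-delete via Lua)
    await redis.eval(
      `if redis.call("get", KEYS[1]) == ARGV[1] then return redis.call("del", KEYS[1]) end`,
      1, `lock:ride:${rideID}`, token
    );
  }
}
```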

How to find the best drivers nearby?

Once we have a list of nearby drivers from the Quadtree servers, we can perform some sort of ranking based on parameters like average ratings, relevance, past customer feedback, etc. This will allow us to broadcast notifications to the best available drivers first.
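A simple illustrative scoring function for this ranking might look like the following; the fields and weights are assumptions, not Uber's actual formula.

```ts
// Order nearby drivers by a weighted score before broadcasting the request.
type DriverCandidate = { id: string; avgRating: number; distanceKm: number; acceptRate: number };

function rankDrivers(drivers: DriverCandidate[]): DriverCandidate[] {
  const score = (d: DriverCandidate) =>
    0.5 * (d.avgRating / 5)              // normalized rating
    + 0.3 * d.acceptRate                 // past acceptance behavior
    - 0.2 * Math.min(d.distanceKm / 10, 1); // penalize distance, capped
  return [...drivers].sort((a, b) => score(b) - score(a));
}
```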

Dealing with high demand

In cases of high demand, we can use the concept of Surge Pricing. Surge pricing is a dynamic pricing method where prices are temporarily increased as a reaction to increased demand and mostly limited supply. This surge price can be added to the base price of the trip.

For more details, learn how surge pricing works with Uber.

Payments

Handling payments at scale is challenging; to simplify our system, we can use a third-party payment processor like Stripe or PayPal. Once the payment is complete, the payment processor will redirect the user back to our application, and we can set up a webhook to capture all the payment-related data.
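As a sketch of the webhook capture step, assuming Stripe with Express on Node.js (the endpoint path and secrets are placeholders):

```ts
// Verify and handle payment webhooks. The raw body is required so the
// signature check can validate that the payload really came from Stripe.
import express from "express";
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const app = express();

app.post("/webhooks/stripe", express.raw({ type: "application/json" }), (req, res) => {
  const event = stripe.webhooks.constructEvent(
    req.body,
    req.headers["stripe-signature"] as string,
    process.env.STRIPE_WEBHOOK_SECRET! // placeholder secret
  );
  if (event.type === "payment_intent.succeeded") {
    // persist payment data against the corresponding tripID (application logic)
  }
  res.sendStatus(200);
});

app.listen(3000);
```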

Notifications

Push notifications will be an integral part of our platform. We can use a message queue or a message broker such as Apache Kafka with the notification service to dispatch requests to Firebase Cloud Messaging (FCM) or Apple Push Notification Service (APNS) which will handle the delivery of the push notifications to user devices.

For more details, refer to the WhatsApp system design where we discuss push notifications.
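A minimal sketch of the consuming side, assuming Apache Kafka accessed via the kafkajs client; the topic name, payload shape, and the sendPush wrapper are stand-ins.

```ts
// The notification service consumes ride events from Kafka and hands them
// to a push-delivery function that wraps FCM/APNS.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "notification-service", brokers: ["kafka:9092"] });
const consumer = kafka.consumer({ groupId: "notifications" });

async function run() {
  await consumer.connect();
  await consumer.subscribe({ topic: "ride-events", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value!.toString()); // assumed payload
      await sendPush(event.userID, event.title, event.body); // FCM/APNS wrapper
    },
  });
}

// hypothetical wrapper around FCM/APNS delivery
declare function sendPush(userID: string, title: string, body: string): Promise<void>;

run().catch(console.error);
```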

Detailed design

It's time to discuss our design decisions in detail.

Data Partitioning

To scale out our databases, we will need to partition our data. Horizontal partitioning (aka Sharding) can be a good first step. We can shard our database either based on the partitioning schemes discussed earlier or by region. If we divide the locations into regions using, say, zip codes, we can effectively store all the data for a given region on a fixed node. But this can still cause uneven data and load distribution; we can solve this using Consistent hashing.

For more details, refer to Sharding and Consistent Hashing.

Metrics and Analytics

Recording analytics and metrics is one of our extended requirements. We can capture the data from different services and run analytics on the data using Apache Spark which is an open-source unified analytics engine for large-scale data processing. Additionally, we can store critical metadata in the trips table to increase data points within our data.

Caching

In a location services-based platform, caching is important. We have to be able to cache the recent locations of the customers and drivers for fast retrieval. We can use solutions like Redis or Memcached but what kind of cache eviction policy would best fit our needs?

Which cache eviction policy to use?

Least Recently Used (LRU) can be a good policy for our system. In this policy, we discard the least recently used key first.

How to handle cache miss?

Whenever there is a cache miss, our servers can hit the database directly and update the cache with the new entries.

For more details, refer to Caching.

Identify and resolve bottlenecks

uber-advanced-design

Let us identify and resolve bottlenecks such as single points of failure in our design:

  • "What if one of our services crashes?"
  • "How will we distribute our traffic between our components?"
  • "How can we reduce the load on our database?"
  • "How to improve the availability of our cache?"
  • "How can we make our notification system more robust?"

To make our system more resilient we can do the following:

  • Running multiple instances of each of our services.
  • Introducing load balancers between clients, servers, databases, and cache servers.
  • Using multiple read replicas for our databases.
  • Multiple instances and replicas for our distributed cache.
  • Exactly-once delivery and message ordering are challenging in a distributed system; we can use a dedicated message broker such as Apache Kafka or NATS to make our notification system more robust.

Next Steps

Congratulations, you've finished the course!

Now that you know the fundamentals of System Design, here are some additional resources:

It is also recommended to actively follow engineering blogs of companies putting what we learned in the course into practice at scale:

Last but not least, volunteer for new projects at your company, and learn from senior engineers and architects to further improve your system design skills.

I hope this course was a great learning experience. I would love to hear feedback from you.

Wishing you all the best for further learning!

References

Here are the resources that were referenced while creating this course.
