@ZapDos7
Last active March 7, 2024 15:16

API Design Guides

A good, well-designed API should aim to support:

  • platform independence (use standard protocols; client & web service agree on a data format)
  • service evolution (the API should be able to evolve & add functionality independently from client applications; existing applications should continue to function without modification)

Steps for API dev (from my experience)

  1. Start with API definition
  2. Move to API implementation (controller/view layer)
  3. Each resource should have its own controller
  4. The smaller the class, the better!

SOLID Design Principles

Source

What are the SOLID Design Principles?

SOLID is an acronym that stands for:

  • Single Responsibility Principle (SRP)
  • Open-Closed Principle (OCP)
  • Liskov Substitution Principle (LSP)
  • Interface Segregation Principle (ISP)
  • Dependency Inversion Principle (DIP)

They were introduced by the American computer scientist and instructor Robert C. Martin (a.k.a. Uncle Bob) in a 2000 paper.

SRP

a class, module, or function should have only one reason to change, meaning it should do one thing

This may lead to more code, but it improves readability and maintainability. A developer who didn't write the code can come to it and understand what's going on quicker than if everything were in one class.
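As a minimal sketch (the `Report`/`ReportSaver` classes are hypothetical), splitting persistence out of a reporting class gives each class exactly one reason to change:

```python
# Follows SRP: formatting and persistence are separate responsibilities.
class Report:
    """Changes only when the report format changes."""
    def __init__(self, text: str):
        self.text = text

    def format_html(self) -> str:
        return f"<p>{self.text}</p>"


class ReportSaver:
    """Changes only when the storage mechanism changes."""
    def save(self, report: Report, path: str) -> None:
        with open(path, "w") as f:
            f.write(report.format_html())
```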

OCP

classes, modules, and functions should be open for extension but closed for modification

It means you should be able to extend the functionality of a class, module, or function by adding more code without modifying the existing code. Normally, if you’re using a switch statement, then it’s very likely you will violate the open-closed principle.
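A sketch of the switch-statement smell and an OCP-friendly replacement (the shape classes are made up for illustration) — new shapes are added by extending, not by editing existing code:

```python
# Violates OCP: every new shape forces an edit to this function.
def area_switch(shape: dict) -> float:
    if shape["kind"] == "circle":
        return 3.14159 * shape["r"] ** 2
    if shape["kind"] == "square":
        return shape["side"] ** 2
    raise ValueError("unknown shape")


# Follows OCP: adding a Triangle later means adding a new class;
# the existing classes stay closed for modification.
class Square:
    def __init__(self, side: float):
        self.side = side

    def area(self) -> float:
        return self.side ** 2


class Circle:
    def __init__(self, r: float):
        self.r = r

    def area(self) -> float:
        return 3.14159 * self.r ** 2
```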

LSP

child classes or subclasses must be substitutable for their parent classes or super classes. In other words, the child class must be able to replace the parent class

This has the advantage of letting you know what to expect from your code. The principle was introduced by the computer scientist Barbara Liskov in 1987 and later formalized in a paper she co-authored with Jeannette Wing.
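The classic rectangle/square example shows a violation — a sketch with hypothetical classes, where code written against `Rectangle` silently misbehaves when handed a `Square`:

```python
class Rectangle:
    def __init__(self, w: int, h: int):
        self.w, self.h = w, h

    def set_width(self, w: int) -> None:
        self.w = w

    def area(self) -> int:
        return self.w * self.h


# Violates LSP: Square keeps its sides equal, so setting the width
# also changes the height -- callers assuming Rectangle behaviour break.
class Square(Rectangle):
    def __init__(self, side: int):
        super().__init__(side, side)

    def set_width(self, w: int) -> None:
        self.w = self.h = w


def stretch(rect: Rectangle) -> int:
    rect.set_width(10)
    return rect.area()  # a caller expects 10 * the original height
```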

ISP

clients should not be forced to implement interfaces or methods they do not use

The ISP suggests that software developers should break down large interfaces into smaller, more specific ones, so that clients only need to depend on the interfaces that are relevant to them. This can make the codebase easier to maintain.

This principle is fairly similar to the single responsibility principle (SRP). But it’s not just about a single interface doing only one thing – it’s about breaking the whole codebase into multiple interfaces or components.
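A sketch with hypothetical printer interfaces — splitting one fat "print + scan" interface lets a simple device implement only what it actually uses:

```python
from abc import ABC, abstractmethod

# Follows ISP: two small interfaces instead of one fat interface
# that simple printers would have to stub out.
class Printer(ABC):
    @abstractmethod
    def print_doc(self, doc: str) -> str: ...

class Scanner(ABC):
    @abstractmethod
    def scan(self) -> str: ...

class SimplePrinter(Printer):
    # Depends only on the interface it needs.
    def print_doc(self, doc: str) -> str:
        return f"printed: {doc}"

class MultiFunctionDevice(Printer, Scanner):
    def print_doc(self, doc: str) -> str:
        return f"printed: {doc}"

    def scan(self) -> str:
        return "scanned page"
```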

DIP

decoupling software modules: high-level modules should not depend on low-level modules

Instead, they should both depend on abstractions. Additionally, abstractions should not depend on details, but details should depend on abstractions.

In simpler terms, this means instead of writing code that relies on specific details of how lower-level code works, you should write code that depends on more general abstractions that can be implemented in different ways.

This makes it easier to change the lower-level code without having to change the higher-level code.
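A sketch of the inversion (the notification service and sender names are hypothetical): the high-level module depends on a `MessageSender` abstraction, and the e-mail detail depends on that same abstraction:

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):
    """The abstraction both layers depend on."""
    @abstractmethod
    def send(self, to: str, body: str) -> str: ...

class EmailSender(MessageSender):
    """Low-level detail; depends on the abstraction, not vice versa."""
    def send(self, to: str, body: str) -> str:
        return f"email to {to}: {body}"

class NotificationService:
    """High-level module; knows nothing about e-mail specifics, so the
    sender can be swapped (SMS, push, ...) without changing this class."""
    def __init__(self, sender: MessageSender):
        self.sender = sender

    def notify(self, user: str, text: str) -> str:
        return self.sender.send(user, text)
```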

REST

= Representational State Transfer

It's an architectural style for building distributed systems based on hypermedia.

Principles

  • REST APIs are designed around resources:
    • any kind of object, data or service that can be accessed by the client
    • key abstraction of info (e.g. document, image, temporal service, a person etc)
  • Each resource has a URI that uniquely identifies it, e.g. http://app.com/user/1
  • Clients interact with a service by exchanging representations of resources:
    • state of resource: data, metadata, hypermedia links
    • varying data formats = Media types (define how representation is to be processed)
    • they are self-descriptive
    • e.g. JSON format {"id":1,"name":"John"}
  • Client-server: separation of UI concerns from data storage concerns (more portable & scalable)
  • Uniform interface:
    • on HTTP, standard HTTP verbs to perform operations on resources (decoupled): GET, POST, PUT, PATCH, DELETE
    • simpler system architecture
    • improved visibility of interactions
    • 4 interface components:
      • identification of resources
      • manipulation of resources through representation
      • self-descriptive messages
      • hypermedia as the engine of app state: REST APIs are driven by hypermedia links contained in the representation, e.g. {"id":3, "links":[{"rel":"product", "href":"http...", "action":"GET"}]}
  • Stateless request model:
    • HTTP requests should be independent & may occur in any order ⇒ can't keep transient state information
    • Info stored only in the resources themselves & each request should be an atomic operation ⇒ scalable web services
    • any client's request can be handled by any server
    • (though the backend datastore may limit scalability)
    • each request must contain all info needed to understand it (can't take advantage of any stored context on the server - session state kept entirely on client)
  • Cacheable: cache constraints require that the data within the response to a request are implicitly or explicitly labeled as cacheable or not.
    • if yes: a client cache is given the right to reuse response data for later equivalent requests
  • Layered system: hierarchical layers by constraining component behaviours ⇒ a component can't see beyond the immediate layer with which it interacts.
  • Optional: Code on demand: client functionality can be extended by downloading & executing code (applets, scripts) so clients can be simpler (fewer features need to be pre-implemented)

Richardson Maturity Model (levels of REST maturity):

  • Level 0: define one URI; all operations are POST requests to this URI
  • Level 1: separate URIs for individual resources
  • Level 2: use HTTP methods to define operations on resources
  • Level 3: use Hypermedia (HATEOAS - Hypermedia as the engine of application state) ← true RESTful API

Organize the API around resources

  • Focus on the business entities that the web API exposes:

    https://app.com/entities $${\color{red}Bad \space practice}$$
    https://app.com/entity $${\color{green}Good \space Practice}$$
    https://app.com/create-entity $${\color{red}Bad \space practice}$$
    POST https://app.com/entity $${\color{green}Good \space Practice}$$
    Note: an entity doesn't have to be based on a single physical data item (it could be several tables in the db, but represented as a single entity/DTO). BUT: a client should not be exposed to the internal implementation.
    GET https://app.com/entities Brings the collection of entities; each entity has its own URI as well
    GET https://app.com/entities/5 Plural because collection; brings only the entity with ID = 5
  • Focus on relationships of types of resources

    • e.g. /customers/5/orders : bring all orders of the customer with ID = 5
    • can also be /orders/42/customer : get the customer of the order with ID = 42.
    • It's best to provide navigable links to associated resources in the body of the HTTP response message.
    • Try to keep URIs simple (more flexible & maintainable)
    • denormalize data & group in resources ⇒ less requests ⇒ less chatty API

API Operations & HTTP Methods

| Resource | POST | GET | PUT | DELETE |
| --- | --- | --- | --- | --- |
| /user | create user | get all users | bulk update users | remove all users |
| /user/1 | ERROR | get user 1 | update user 1 if exists | remove user 1 |
| /user/1/orders | create order for user 1 | get all orders of user 1 | bulk update all orders of user 1 | remove all orders of user 1 |

POST, PUT & PATCH

  • POST: create entity
  • PUT: create entity if doesn't exist, update if exists
  • PATCH: partial update of entity if exists (explicitly we can make it create if doesn't exist)

HTTP Semantics

  1. Media Types: specify formats
    • MIME types for non-binary data: JSON or XML ⇒ header Content-Type: application/json or application/xml
    • if the server doesn't support the requested format: error 415 (Unsupported Media Type)
  2. GET:
    • 200 : OK
    • 404 : not found
  3. POST:
    • 201 : created new resource (URI of the new resource included in the Location header of the response; the response body contains a representation of the resource)
    • 204 : no content (if nothing to be returned)
    • 400 : bad request (if invalid data in request)
  4. PUT:
    • 201 : created (if new)
    • 200 : if updated existing
    • 204 : no content (if nothing to be returned)
    • 409 : conflict (if can't update existing resource - also useful for bulk updates)
  5. PATCH
    • we send a patch document, only containing the changes we wish to implement
    • easy to use: JSON file
      • JSON MERGE PATCH: application/merge-patch+json
        Original: { "name" : "foo", "category" : "1", "colour" : "blue", "price" : 10 }
        Patch:    { "price" : 12, "colour" : null, "size" : "SMALL" }
        Result:   { "name" : "foo", "category" : "1", "price" : 12, "size" : "SMALL" }
        ("name" & "category" unchanged; "price" updated; "size" added; "colour" removed because it was null in the patch)
        Not suitable for: 1. resources with EXPLICIT null values 2. updates whose order of application matters
      • JSON PATCH: application/json-patch+json
        • specifies changes as a sequence of operations (add, remove, replace, copy, test)
        • 415 : patch document format not supported
        • 400 : malformed patch document
        • 409 : patch document is valid but the changes can't be applied to the resource in its current state
  6. DELETE
    • 204 : successful
    • 404 : resource doesn't exist
  7. Asynchronous Operations
    • for lengthy in duration operations
    • return 202 (Accepted) for requests that are accepted but not yet completed
    • we should also expose the location URI of a status endpoint for the user to check, as well as an estimated time of completion & a link to cancel the operation
    • if a new resource was created, return 303 (See Other) after the operation completes
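The merge-patch semantics described above (standardized as RFC 7386) fit in a few lines — a minimal Python sketch, assuming plain dicts stand in for JSON objects:

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON Merge Patch.

    null (None) deletes a field, nested objects merge recursively,
    and any non-object patch value simply replaces the target."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means "remove this field"
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result
```

Running it on the earlier product example reproduces the shown result: the null `colour` is removed, `price` is updated, `size` is added.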

Filter & Paginate Data

  • `GET /entities?limit=25&offset=50`
  • setting an upper limit on page size ⇒ prevents DoS attacks (the response metadata contains the total number of entities)
  • also support `sort` & `fields` parameters
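A sketch of server-side handling for these parameters (the function name and the upper limit of 100 are assumptions, not from the source):

```python
def paginate(items, limit=25, offset=0, max_limit=100):
    """Clamp the limit to an upper bound (DoS protection) and return
    one page plus metadata describing the full collection."""
    limit = max(1, min(limit, max_limit))
    page = items[offset:offset + limit]
    return {
        "data": page,
        "metadata": {"total": len(items), "limit": limit, "offset": offset},
    }
```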

Support Partial Responses for Large Binary Resources (files, images etc)

  • sent in chunks in case of unreliable & intermittent connections
  • also improves response times
  • ⇒ the API needs to support the Accept-Ranges header for GET requests on large resources
  • HEAD request: like GET but returns only HTTP headers

Designing RESTful APIs

Chapter 1

How to design an API:

  1. Select which functionality we wish to expose
  2. Figure out the best way to expose it
  3. Test these assumptions (+ user testing)
  4. Repeat steps 1-3 until satisfied

Good Practices:

  • Good/clear naming
  • Clear directions
  • Knowledge of use cases
  • Adaptability
  • Versioning
  • Backwards compatibility

Affordance = something which allows us to perform a task (real-life example: a door knob affords opening a door)

These 3 need to be aligned for the API to be well defined:

  • What the API allows us to do
  • What the API makes easy
  • What the user wants to do

Approaches to adding an API to a system:

  • Bolt-On

    • For existing systems
    • Brute-force approach
    • The fastest way to build something useful
    • +: takes advantage of existing code & systems
    • -: existing problems in the architecture/application leak into the new API
  • Greenfield

    • For new systems
    • API/mobile first approach
    • +: use new technologies
    • -: often requires massive investment upfront before any payoff is visible & it's the most complex way
  • Façade

    • Replacing piece by piece
    • +: ideal for legacy systems
    • -: multiple mindsets in the same system & hard to replicate behaviour

Good API modeling == key to success.

We need to make sure the API's capabilities make sense, are useful & improve our users' lives

Tips for modeling APIs:

  • Don't worry about the tools
  • Have a consistent process
    • Involve teams early as not only will they all work together eventually, but also multiple perspectives will help develop the project (reinforce strengths, uncover weaknesses)
  • Document everything: assumptions, decisions, deferred tasks. Documentation is not negotiable.

Identify business process:

  1. Identify participants

    (= entities who will use the API, aka anyone who initiates an action || anyone who awaits an action to occur)

    Their info:

    • name, role
    • are they internal or external?
    • are they active or passive?
  2. Identify activities

    • Should be clearly defined, not abstractly
    • Should clearly showcase the actors/participants
    • Should have a defined order of events
  3. Break into steps

  4. Create API definitions

  5. Validate API

Drawing boundaries when defining activities:

  • +: clear scope
  • -: might fall into assumptions

Therefore we need to go back and ask our end user or product manager what they actually need (& we have to document the answer).

Chapter 2

We need to define our actions and then group them by entity. Then we map each action to an HTTP verb (GET, POST, etc).

API relationships:

  • Independent (exists w/o any other resource)
  • Dependent (can only exist when another resource does)

We need to validate our API via use cases

  • step through API calls
  • write code as if an API exists
  • look for gaps & potential issues

Use a framework.

Provide API documentation (does not need to be perfect, needs to be concise).

Chapter 3

| SOAP | REST |
| --- | --- |
| A fixed process | Few requirements |
| Lots of documentation up front | Docs discovered as you go |
| Detailed scenarios | Flexible, based on needs |
| Complex error handling | Flexible, based on patterns |

HTTP Headers & Response Codes

| Code | Meaning |
| --- | --- |
| 200 OK | Request successful |
| 201 CREATED | Resource was created |
| 202 ACCEPTED | Action has started |
| 204 NO CONTENT | Resource was deleted |
| 301 | Moved permanently |
| 302 | Moved temporarily |
| 400 | Bad request |
| 401 | Unauthorized (authentication required) |
| 403 | Forbidden |
| 404 | Not found |

Constraints

  1. Client-server architecture
  2. Stateless architecture (each request independent from the other) => stability, scalability, reliability, flexibility
  3. Cacheable (the state of a resource doesn't change after repeating the same action, e.g. GET, PUT, DELETE; whereas with POST the state may change after repeated commands ⇒ non-cacheable)
  4. Layered systems (clients do not connect directly to servers)
  5. Code on demand (when a client requests a resource, they receive also the code to act upon it => flexibility, upgradability, extensibility)
  6. Uniform interfaces (!)
    • Identify resources
    • Manipulate resources
    • Self descriptive messages
    • HATEOAS

Chapter 4

| Authentication | Authorization |
| --- | --- |
| Establishes who you are | Establishes permissions |
| Log in with credentials | Restricted access |

Most common authentication protocol: OAuth (2.0 currently)

Versioning

  • In the URL (more direct & consistent)
  • Via the accept header

Media types & processing content

  • JSON: difficult to extend & add detail about data
  • Hypertext Application Language (HAL): separates the payload into data and _links (references to other objects - easy to lose track); can be verbose
  • Ion Hypermedia Type: specifically for key-data

Hypermedia approaches

Hyper (non linear) + Media (data in various formats).

The user takes a unique journey starting at our API and traversing the links between data.

Advanced Headers

  • Content negotiation
  • Caching

Documentation

  • Goals:
    • snippet-friendly code
    • w/ version control
    • easy to update
    • searchable
  • Tools:
    • MediaWiki, Confluence, etc.

SDK design considerations

If you used HTTP properly & an established JSON media type, a simple HTTP library & JSON parser should be enough. But in reality APIs are more complex.

What makes a great API?

  • concise but precise
  • apply care
  • open source
  • use the patterns & conventions of its language
  • be consistent & repeat common patterns

SDK's goal = make user's life easier

Microservices - Design Patterns

My notes on the LinkedIn Learning course by Frank P. Moley III


1. Decomposition Patterns

How can we divide our program?

  • Domain/Data based decomposition

    • lowest level
    • goal: achieve scalability
    • driven by the data domain model, not the underlying schema
    • focus on data patterns
    • start with the model (how consumed?), not the db
    • evaluate actions (beyond REST/CRUD)
    • build the service, contract first

    Process:

    1. Define model
    2. Define actions
    3. Service boundary & exposure actions as API
    4. Build datastore (model doesn't have to match the schema in db)
  • Business process based

    • Reused business logic, esp. when spanning multiple domains
    • Provides higher level of business functionality
    • Allows encapsulation of related domains
    • They don't have their own db access (best to keep tight boundaries)
    • Distinct functional uses
    • encapsulation to a module is recommended

    Process:

    1. Identify process we wish to expose
    2. Identify domains we will consume
    3. Define API handling the processes (contract focused, not underlying models)
    4. Wire service
  • Atomic transaction based

    • should be avoided due to difficulty/complexity
    • esp. for financial apps
    • needed when spanning many data domains => guarantee atomic, consistent, isolated & durable (ACID) transactions across domains
    • provide failure domains & rollbacks
    • provide blocking until commit
    • don't use distributed transactions (v. complex)

    Process:

    1. Ensure the need for atomic service
    2. Domains must be in shared db
    3. Clearly get the transaction defined, including rollback conditions (& keep those written e.g. in readme)
    4. Implement the service as normal, with fail fast & rollback

Decomposition Strategies:

  • Strangler pattern

    • Break up a monolithic app by "strangling" the dependency on it
    • Can be top-down/bottom-up
    • essentially carving up functionality

    Process:

    1. Start with monolithic app w/ its db & clients
    2. Identify business processes & data accesses (each data domain gets its own db)
    3. Build single new instance db
    4. Build data domain
    5. Sync data
    6. Move clients to use this new db's data (they may still communicate with the monolithic app for other purposes)
    7. When move is complete, remove data & its access & sync logic from monolith
    8. Migrate service: build business process to consume new db data.
    9. Connect client to new business process, remove calls to monolith from client & internal code
    10. Deprecate monolith
  • Sidecar pattern

    • Goal: offload processing through a single deployment
    • e.g. for reporting, logging, monitoring (repeating code paths not accomplished via repeating code)

    Process

    1. Determine process (specific for immediate needs yet generic for other system parts)
    2. Write code
    3. Schedule & deploy w/ appropriate services (parent service manifest)
    4. Functionality appears w/o embedding it (!)

2. Integration Patterns

They allow to solve orchestration & ingress needs across the whole system

  • Gateway pattern

    • is an ingress pattern
    • for clients communicating w/ our system services
    • solves the chaos of this situation (if any client can access anything, we need to control these accesses)
    • provides proxy/facade
    • interface to outside world as a whole, including our clients
    • it can:
      • proxy calls to our API (security & authorization in a single point)
      • mutate calls to our API (e.g. headers, apply decoration or aggregation - NOT business logic)
      • limit calls to our API (e.g. low bandwidth client)
    • can also become a single point of failure (we must ensure robustness & scalability)
    • Client specific & situation specific mutation of payloads
    • movement buffer: contract-driven API point, so internal change doesn't affect clients

    Process

    1. Define contracts
    2. Expose APIs (client focused)
    3. Strict version control & passive changes only (do not affect client)
    4. Implement gateway to call own services & clients call gateway
  • Process aggregation pattern

    • Develop complex processes (2+ with complex payload)
    • Provides clients w/ a single API call for all these processes => simplification
    • Should introduce its own processing logic (otherwise, gateway)
    • can introduce choke point to app/long blocking calls (!)

    Process

    1. Determine business processes needed
    2. Determine processing rules
    3. Design a consolidated model
    4. Design API for these actions (should be easy as long as model is defined well - we can REST or CRUD it)
    5. Wire service & implement internal processing
  • Edge pattern

    • Subset of gateway pattern
    • Problem: client use varies by platform & scaling a gateway presents wasted resources, or clients need special business logic => client-specific gateways
    • isolating calls for client systems

    Process

    1. Identify client
    2. Build contracts
    3. Implement contracts
    4. Maintain passivity as long as client is needed (!)

Gateway vs Edge

  • Edge targets clients => more flexible
  • Edge more scalable (as client needs change)
  • Edge more flexible for new clients (whereas gateway has larger changes when new client comes)
  • Gateway has fewer moving parts => works better when clients are more in sync

Data Patterns

  • Single service database

    • The most common & effective
    • Problem: scalability (db & service needs should be proportional)
    • each service gets its own db
    • datastore distributes w/ service
    • proportional scaling, no impact to the system

    Process:

    1. As service needs grow, we add new instances of a service
    2. Increase load to db => scale db (give more IOPS)
  • Shared service database

    • When shared db must be used
    • Legacy
    • We treat them as separate even if they physically share a db
    • Data distribution handling by db (=> choose db well)
    • Structure data so we can isolate & prepare for future breakup (schemas, similar contract etc)
    • Users don't span schemas (unique creds)
    • Data domain to single schema
  • Command query responsibility segregation (CQRS)

    • The most complex
    • Improves data behaviour very well
    • Multi-model bounded context
    • Multi-interface operations, write vs read
    • Diverges from CRUD
    • Best used for:
      • task based UI ops (write model focuses on tasks, read models based on system state after the interactions w/ that task)
      • eventual consistency is a must
      • event based models
  • Asynchronous eventing

    • used when processes cannot be done in real time (through blocking calls)
    • service API triggers the event
    • event can cascade async from API
    • or event can trigger from messaging
    • v. powerful for distributed systems

Operational Patterns

How you run a system, not how you build it

Logs:

  • We need to know what is going on (debugging, runtime behaviour).
  • They can also be written and linked to other systems' logs, too => logging must be consistent across services & structured
  • Must share taxonomy

Patterns:

  • Log aggregation patterns

    • Problem: logs are everywhere in microservices architectures
    • Also parses logs (structure is important)
    • Correlation of logs (taxonomy is important)
    • (may be optional) indexing of logs (helps search)
  • Metric aggregation patterns

    • Metrics => issue diagnosis
    • requires only a bit of instrumentation
    • Solves: need to see what's going on with the system
    • Taxonomy is key
    • Tips:
      • Build high level dashboards (overview)
      • Build also detailed dashboards (details)
      • embed links to logs
      • Inject events such as deployments
      • trace alarms on your graphs
      • ensure you have runbooks for all alarms
  • Tracing patterns

    • When call stacks span processes, code traces are less valuable
    • Use trace identifiers
    • Tracing <=> recreate call stack
    • should span from edge to db
    • no call is lost

    Process

    1. Use a standards-based approach
    2. Inject at entry point of system (e.g. browser)
    3. Every log message should embed trace ID (structured logs & taxonomy)
    4. Use tracing tools & APM (application performance management) for visualization
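A sketch of the first three steps — a trace ID injected once at the entry point and embedded in every structured log line (function and field names are made up for illustration):

```python
import json
import uuid

def new_trace_id() -> str:
    # Injected once at the system's entry point (e.g. the edge/browser).
    return uuid.uuid4().hex

def log(trace_id: str, service: str, message: str) -> str:
    """Structured log line with a shared taxonomy: every service embeds
    the same trace ID, so the cross-process call stack can be
    reconstructed later by a log aggregator or APM tool."""
    return json.dumps({"trace_id": trace_id, "service": service, "msg": message})
```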
  • External configuration

    • Config outside code helps debug
    • Use consistent tooling like Kubernetes
    • use consistent naming
    • Err on the side of externalization (expose more is better)
    • Protect secrets

    Process

    1. Config is injected or retrieved
    2. App uses externalized values in favour of embedded values (defaults can be useful)
    3. Commons libs/tools help
    4. Read - config - act
  • Service discovery

    • Saves time when large complexity
    • "What service do I need to call?"
    • A central location of all services exists
    • Each service "advertises" what they offer
    • We then find which service offers what we need => URI to this service is returned
    • The URI is then consumed (instead of config)
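The advertise/lookup flow can be sketched as an in-memory registry (class, capability names, and URIs are made up for illustration; real systems use e.g. Consul or Kubernetes DNS):

```python
class ServiceRegistry:
    """Sketch of service discovery: services advertise what they offer,
    clients look up a capability and receive the URI to consume."""
    def __init__(self):
        self.services = {}

    def advertise(self, capability: str, uri: str) -> None:
        # Each service registers ("advertises") what it offers.
        self.services[capability] = uri

    def discover(self, capability: str) -> str:
        # The client asks "what service do I need to call?" and
        # gets back a URI instead of hard-coding it in config.
        return self.services[capability]
```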

API Integration Patterns – The Difference between REST, RPC, GraphQL, Polling, WebSockets and WebHooks

Request-Response Integration

The client initiates the action by sending a request to the server and then waits for a response.

REST (=Representational State Transfer)

  • most popular
  • use a stateless, client-server communication model, wherein each message contains all the information necessary to understand and process the message.
  • use resources (=entities that the API exposes, which can be accessed and manipulated using URL paths)
  • the client initiates requests to the server by specifying exactly what it wants using HTTP methods (such as GET, POST, PUT, DELETE) on specific URLs (the menu items). Each interaction is stateless, meaning that each request from the client to the server must contain all the information needed to understand and process the request. The server then processes the request and returns the appropriate response.

RPC (=Remote Procedure Call)

  • all about actions: the client executes a block of code on the server
  • this might give the client very specific and tailored results, but lacks the flexibility and ease of use of REST.

GraphQL

  • the client specifies exactly what data it needs, which can include specific fields from various resources ( => a high degree of flexibility and only retrieve exactly the data it needs)
  • the server processes this query, retrieves the exact data, and returns it to the client ( => requires the server to be capable of handling more complex and unique queries)
  • more customisable form of REST. You still deal with resources (unlike actions in RPC) but you can customise how you want the resource returned to you.
  • CONs: adds complexity to the API since the server needs to do additional processing to parse complex queries
  • PROs: response payload sizes are typically smaller, which means faster response times

Event Driven Integration

Ideal for services with fast changing data

Polling

  • when the client continuously asks the server if there is new data available, with a set frequency => not efficient because many requests may return no new data, thus unnecessarily consuming resources
  • the more frequently you poll (make requests) the closer the client gets to real-time communication with the server
  • can remain stateless, making it more fault tolerant and scalable
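A sketch of the polling loop (the `fetch` callable stands in for an HTTP request; a real client would sleep for the poll interval between requests):

```python
def poll(fetch, max_polls: int) -> list:
    """Polling sketch: ask the server repeatedly for new data.
    Many requests return nothing (None), which is the wasted-resource
    trade-off noted above."""
    results = []
    for _ in range(max_polls):
        data = fetch()
        if data is not None:
            results.append(data)
    return results
```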

Long Polling

  • The server only responds if there is an update.
  • As long as the client and server agree that the server will hold on to the client’s request, and the connection between the client and server remains open, this pattern works and can be more efficient than simply polling.
  • adds extra complexity to the process by requiring a directory of which server contains the connection to the client, which is used to send data to the client whenever the server is ready.

WebSockets

  • provide a persistent, two-way communication channel between the client and server
  • Once a WebSocket connection is established, both parties can communicate freely, which enables real-time data flows and is more resource-efficient than polling.
  • WebSockets are similar to long polling. They both avoid the wasteful requests of polling, but WebSockets have the added benefit of having a persistent connection between the client and the server.
  • ideal for fast, live streaming data, like real-time chat applications
  • the persistent connection consumes bandwidth, so may not be ideal for mobile applications or in areas with poor connectivity

WebHooks

  • allow the server to notify the client when there's new data available. The client registers a callback URL with the server and the server sends a message to that URL when there is data to send.
  • the client sends requests as usual, but can also listen for and receive requests like a server
  • you get real-time updates from the server once something changes, without having to make frequent, wasteful requests to the server about that change

Onion vs MVC Flows

  • CQRS = command query responsibility segregation

MVC: knows how to save:

+----+
|    |     +-------+    +------------+    +------+
| db |  ↔  | model | ↔  | controller | ↔  | view |
|    |     +-------+    +------------+    +------+
+----+

controller: middleware services

this guarantees more infrastructure coupling

vs

Onion


+---------------------+
| UI                  |
|  +---------------+  |
|  | App           |  |
|  |  +---------+  |  |
|  |  | Domain  |  |  |
|  |  +---------+  |  |
|  +---------------+  |
+---------------------+

more scalable

equals to

UI (Driver)                           write domain flow downwards (one way write flow) ↓
     ↓                                read domain flow both ways (two way read flow) ↑↓
Application
     ↓
Domain
     ↓ (via app)
Driven Adapter (infrastructure)

Architecture Patterns

Backend For Frontend (BFF)

  • Involves creating specific backend services tailored to the requirements of individual frontend applications, optimizing communication and data delivery.
  • Focus: Creating backend services tailored to specific frontend applications.
  • Benefits: Optimizes communication and data delivery.
  • Trade-offs: Can lead to duplicated logic, requires additional maintenance.

Publish/Subscribe Pattern

  • Focus: Decoupling producers and consumers in messaging systems.
  • Benefits: Enhances scalability and system flexibility, supports asynchronous communication.
  • Trade-offs: Increases message management complexity, depends on broker reliability.
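A minimal in-process sketch of the decoupling (a real system would use a broker such as Kafka or RabbitMQ; the `Broker` class here is purely illustrative):

```python
from collections import defaultdict

class Broker:
    """Producers and consumers only know topic names, never each other."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, message) -> None:
        # Fan the message out to every subscriber of the topic.
        for handler in self.subscribers[topic]:
            handler(message)
```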

Sidecar Pattern

  • Focus: Enhancing or extending the functionality of a primary application.
  • Benefits: Isolates and modularizes functionalities, easier maintenance.
  • Trade-offs: Can increase system complexity, additional resource consumption.

Data-Driven Testing

  • Focus: Enhancing testing processes by using data-driven methodologies.
  • Benefits: Increases test coverage, improves efficiency.
  • Trade-offs: Requires thorough data management, potential for data-related errors.

Circuit Breaker

  • Provides a way to handle failures gracefully, preventing a cascade of failures in distributed systems. It acts like an electrical circuit breaker, stopping the flow of requests to a failing service to allow recovery.
  • Focus: Handling failures in distributed systems.
  • Benefits: Prevents cascading failures, allowing services time to recover.
  • Trade-offs: Requires careful threshold settings to avoid false trips.
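A sketch of the idea (deliberately simplified: no half-open state or reset timeout, and the class name is illustrative):

```python
class CircuitBreaker:
    """After `threshold` consecutive failures the circuit opens and
    subsequent calls fail fast without reaching the downstream service,
    giving it time to recover."""
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    def call(self, func):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func()
        except Exception:
            self.failures += 1  # count the failure, let it propagate
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```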

API Gateway

  • Acts as an intermediary layer that manages and routes requests from clients to various microservices, providing a unified interface, security, and other cross-cutting concerns.
  • Focus: Managing requests to microservices.
  • Benefits: Provides a unified interface, enhances security.
  • Trade-offs: Potential single point of failure, increased complexity.

Command Query Responsibility Segregation (CQRS)

  • Separates read and write operations into distinct models, optimizing performance and scalability, especially in complex and high-demand environments.
  • Focus: Separating read and write operations.
  • Benefits: Optimizes performance and scalability.
  • Trade-offs: Adds complexity, may lead to eventual consistency issues.
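A toy sketch of the separation (hypothetical write/read models synchronized through events, which is also where the eventual-consistency trade-off comes from):

```python
class WriteModel:
    """Command side: accepts task-based commands and records events."""
    def __init__(self):
        self.events = []

    def rename_item(self, item_id: int, name: str) -> None:
        self.events.append(("renamed", item_id, name))


class ReadModel:
    """Query side: a separate, denormalized view rebuilt from events.
    It lags the write side until events are applied (eventual consistency)."""
    def __init__(self):
        self.names = {}

    def apply(self, event) -> None:
        kind, item_id, name = event
        if kind == "renamed":
            self.names[item_id] = name
```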

Outbox Pattern

  • Addresses the challenge of ensuring reliable message delivery in distributed systems, particularly in microservices architectures, by temporarily storing messages before forwarding them.
  • Focus: Ensuring reliable message delivery in distributed systems.
  • Benefits: Prevents data loss during transmission failures.
  • Trade-offs: Adds implementation complexity, may introduce latency.

Multi-tenancy

  • Discusses implementing a multi-tenant system using Keycloak for authentication, and Angular and Spring Boot for frontend and backend development, respectively.
  • Focus: Implementing a multi-tenant system using specific technologies.
  • Benefits: Efficient authentication management in SaaS applications.
  • Trade-offs: Complex setup, requiring integration of multiple technologies.

Architecture tools

C4 Model

  • Focuses on providing a comprehensive visualization of software architecture, breaking it down into four levels: Context, Containers, Components, and Code. It aids in understanding and communicating the software structure at different abstraction levels.
  • Focus: Visualization of software architecture across four levels.
  • Benefits: Enhances understanding and communication of software structure.
  • Trade-offs: May be overly complex for smaller systems.

Domain-Driven Design (DDD)

  • Centers on aligning software design closely with domain complexities, using a model-driven approach. It emphasizes collaboration between technical and domain experts to create a common language and shared understanding.
  • Focus: Aligning software design with domain complexities.
  • Benefits: Facilitates collaboration and creates a shared language.
  • Trade-offs: Requires deep understanding of the domain, potentially complex.

Strangler Pattern

  • Aims at gradually replacing parts of a legacy system, allowing for incremental updates and smooth migration to new technologies without disrupting existing functionalities.
  • Focus: Gradual replacement of legacy systems.
  • Benefits: Allows incremental updates without disrupting existing functionalities.
  • Trade-offs: Can be slow and resource-intensive.