SPI_Dev_Notes.md
API developer notes:
Leveraged Partitioning:
Leveraged partitioning is a technique for distributing and storing data across multiple servers or database shards. This approach is commonly employed in distributed systems to manage large datasets efficiently. As an API developer, your role involves designing APIs that interact with the underlying, partitioned data storage.
API tasks related to leveraged partitioning:
a. Data distribution: You need to ensure that data is correctly distributed among the partitions to maintain a balanced load across servers.
b. Shard key management: Decide how to select shard keys or partition keys for efficient data retrieval.
c. Request routing: Implement logic to route API requests to the appropriate shard based on the partition key (see the routing sketch after this list).
d. Fault tolerance: Plan for redundancy and replication across partitions to handle server failures gracefully.
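Below is a minimal sketch of partition-key-based request routing in Python. The shard endpoints, the `shard_for` helper, and the hash-modulo scheme are illustrative assumptions, not a prescribed design.

```python
import hashlib

# Hypothetical shard endpoints; in a real deployment these would come from
# service configuration or a shard directory.
SHARD_DSNS = [
    "postgresql://db-shard-0/app",
    "postgresql://db-shard-1/app",
    "postgresql://db-shard-2/app",
]

def shard_for(partition_key: str) -> str:
    """Map a partition key to one shard using a stable hash, so the same
    key always routes to the same server."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARD_DSNS)
    return SHARD_DSNS[index]

if __name__ == "__main__":
    # Show how requests for different keys would be routed.
    for user_id in ("alice", "bob", "carol"):
        print(user_id, "->", shard_for(user_id))
```

Hash-modulo routing is the simplest option, but it forces a large re-shuffle of keys whenever the number of shards changes; consistent hashing is a common alternative when shards are added or removed over time.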
Caching Solutions (like Memcached):
Caching solutions store frequently accessed data in memory, reducing the need to fetch it from the original source repeatedly. Memcached is a popular caching system used for this purpose. As an API developer, you would integrate caching into your APIs to improve response times and reduce the load on backend resources.
API tasks related to caching solutions:
a. Cache integration: Implement cache storage and retrieval mechanisms within the API codebase (see the cache-aside sketch after this list).
b. Cache eviction: Decide on cache eviction policies to handle memory constraints and manage cache expiration.
c. Cache consistency: Determine how to keep cached data up-to-date, considering data updates in the backend systems.
d. Cache validation: Design strategies to validate cache integrity and re-fetch data from the source when necessary.
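A minimal cache-aside sketch using the pymemcache client is shown below. It assumes a Memcached server on localhost:11211; `load_from_db`, `update_user`, and the `user:` key prefix are hypothetical placeholders for the API's real data-access layer.

```python
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))
TTL_SECONDS = 300  # expiration bounds staleness for infrequently updated data

def load_from_db(user_id: str) -> dict:
    # Placeholder for the real backend query.
    return {"id": user_id, "name": "example"}

def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:            # cache hit: skip the backend entirely
        return json.loads(cached)
    user = load_from_db(user_id)      # cache miss: fetch and populate
    cache.set(key, json.dumps(user).encode("utf-8"), expire=TTL_SECONDS)
    return user

def update_user(user_id: str, fields: dict) -> None:
    # After writing to the backend (omitted here), drop the stale cache entry
    # so the next read repopulates it from the source of truth.
    cache.delete(f"user:{user_id}")
```

Here the TTL handles expiration (task b) and deleting the key on write keeps cached data roughly consistent with the backend (task c); stricter guarantees would need versioned keys or explicit re-validation against the source (task d).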
Connection Pooling Mechanisms:
Connection pooling is a technique used to manage a pool of established connections to a database or another external service. Instead of opening and closing connections for each API request, connection pooling maintains a set of reusable connections, reducing the overhead of connection establishment.
API tasks related to connection pooling mechanisms:
a. Connection management: Integrate a connection pooling library, or implement pooling directly within the API codebase (see the pooling sketch after this list).
b. Pool size optimization: Decide on the optimal size of the connection pool based on the API's expected traffic and resource constraints.
c. Connection reusability: Ensure connections are properly reused and released after serving requests to avoid resource wastage.
d. Connection timeout handling: Handle scenarios where connections become stale or unresponsive.
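A minimal sketch of connection pooling with psycopg2's built-in `ThreadedConnectionPool` follows; it assumes a PostgreSQL backend, and the DSN and pool bounds are illustrative values that would normally come from configuration.

```python
from psycopg2 import pool

db_pool = pool.ThreadedConnectionPool(
    minconn=2,     # keep a few connections warm to avoid setup latency
    maxconn=10,    # cap based on expected concurrency and database limits
    dsn="postgresql://app:secret@localhost:5432/app",
)

def fetch_one(query: str, params: tuple):
    conn = db_pool.getconn()          # reuse an existing connection if one is idle
    try:
        with conn.cursor() as cur:
            cur.execute(query, params)
            return cur.fetchone()
    finally:
        db_pool.putconn(conn)         # always return the connection to the pool
```

The `try/finally` guarantees the connection is returned to the pool even when the query fails (task c), and the `minconn`/`maxconn` bounds are the knobs you would tune against the API's expected traffic and resource constraints (task b).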