Aditya Soni (AdityaSoni19031997), public gists
@AdityaSoni19031997
AdityaSoni19031997 / README.md
Created April 3, 2024 14:55 — forked from nymous/README.md
Logging setup for FastAPI, Uvicorn and Structlog (with Datadog integration)

Logging setup for FastAPI

This logging setup configures Structlog to output pretty logs in development, and JSON log lines in production.

Then, you can use Structlog loggers or standard logging loggers, and they both will be processed by the Structlog pipeline (see the hello() endpoint for reference). That way any log generated by your dependencies will also be processed and enriched, even if they know nothing about Structlog!

Requests are assigned a correlation ID by the asgi-correlation-id middleware (either captured from the incoming request or generated on the fly). All logs are linked to the correlation ID, and to the Datadog trace/span if instrumented. This data, "global to the request", is stored in context vars and automatically added to all logs produced during the request thanks to Structlog. You can add to these request-scoped variables at any point in an endpoint with `structlog.contextvars.bind_contextvars(...)`.
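The full gist also routes standard-library logging through the Structlog pipeline; a minimal sketch of the core configuration, assuming the structlog package (the `setup_logging` helper and this processor list are illustrative, not the gist's exact code):

import structlog

def setup_logging(json_logs: bool) -> None:
    # Pretty console output in development, JSON log lines in production.
    renderer = (structlog.processors.JSONRenderer()
                if json_logs else structlog.dev.ConsoleRenderer())
    structlog.configure(
        processors=[
            structlog.contextvars.merge_contextvars,  # request-scoped context vars
            structlog.processors.add_log_level,
            structlog.processors.TimeStamper(fmt="iso"),
            renderer,
        ],
    )

# Inside an endpoint: bind request-scoped data once, and every log emitted
# during the request carries it (the key name here is illustrative).
structlog.contextvars.bind_contextvars(customer_id=1234)
structlog.get_logger().info("order_created")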

@AdityaSoni19031997
AdityaSoni19031997 / file_streaming_pytorch.py
Created April 21, 2020 02:19
In this gist I try to explain a smart way of loading datasets by streaming them from bytes into PyTorch. It can be achieved in multiple ways, but here the focus is confined to David's idea of streaming records from a bytes file.
import torch
import io
import pandas as pd
import gc
import numpy as np
import transformers
'''
Original Code Author [@dlibenzi](https://github.com/dlibenzi)
'''
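The preview stops here; as a minimal sketch of the idea, assuming a simple length-prefixed record layout (not necessarily David's exact format), records can be serialized into one bytes file and streamed back one at a time:

import io
import struct

import torch

def write_records(path, tensors):
    # Serialize each record with torch.save and prefix it with its byte
    # length, so a reader can stream records without loading the whole file.
    with open(path, "wb") as f:
        for t in tensors:
            buf = io.BytesIO()
            torch.save(t, buf)
            data = buf.getvalue()
            f.write(struct.pack("<q", len(data)))
            f.write(data)

def stream_records(path):
    # Yield one deserialized record at a time from the bytes file.
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if not header:
                break
            (length,) = struct.unpack("<q", header)
            yield torch.load(io.BytesIO(f.read(length)))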
gunicorn run:app --workers=9
gunicorn run:app --workers=9 --worker-class=meinheld.gmeinheld.MeinheldWorker

MacBook Pro 2015, Python 3.7

Framework | Server | Req/s | Max latency | +/- Stdev
@AdityaSoni19031997
AdityaSoni19031997 / kaggle.py
Last active November 16, 2022 15:14
Kaggle Helper Scripts
import seaborn as sns
from sklearn import preprocessing, ensemble
from scipy.stats import kendalltau
import pandas as pd
import random
# TODO: change module name
from tqdm import tqdm
import numpy as np
@AdityaSoni19031997
AdityaSoni19031997 / count_lakes.py
Created September 2, 2022 10:53
count_lakes, a Google interview question
import collections
# 0 is water, 1 is land.
arr = [
[0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0],
[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0],
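# ... (preview truncated; the remaining grid rows are not shown)
]

# A minimal sketch of one standard reading of the question: count the
# connected regions of water that are fully enclosed by land (water touching
# the border is ocean, not a lake). Uses 4-directional flood fill; this is
# an assumed interpretation, not necessarily the gist's exact approach.
def count_lakes(grid):
    rows, cols = len(grid), len(grid[0])
    seen = set()

    def flood(sr, sc):
        # BFS over the connected water region containing (sr, sc).
        queue = collections.deque([(sr, sc)])
        seen.add((sr, sc))
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append((nr, nc))

    # First flood all border-connected water, so it is never counted.
    for r in range(rows):
        for c in range(cols):
            on_border = r in (0, rows - 1) or c in (0, cols - 1)
            if on_border and grid[r][c] == 0 and (r, c) not in seen:
                flood(r, c)

    # Each remaining unseen water cell starts a new enclosed lake.
    lakes = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and (r, c) not in seen:
                lakes += 1
                flood(r, c)
    return lakes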
@AdityaSoni19031997
AdityaSoni19031997 / best_path.py
Created August 14, 2022 15:06 — forked from nlehrer/best_path.py
Best path through a grid of point values
# Nathan Lehrer
def get_best_path(grid):
    # Finds the best path through an M x N grid of point values, and that path's score
    # Input: grid = grid of point values = M x N list of lists
    # Returns: best_score = best possible score = int, path = best possible path = string
    M, N = len(grid), len(grid[0])
    scores = {(0, 0): grid[0][0]}  # best score for a path to each cell; score of (0,0) is its grid value
    trace = {}  # whether we optimally come from up ('U') or left ('L') into each cell
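    # The preview cuts off here; the rest is a minimal sketch of how such a
    # grid DP typically completes (assuming moves go only down or right, with
    # 'U' meaning we came from above and 'L' from the left); it is
    # illustrative, not necessarily the fork's exact code.
    for i in range(M):
        for j in range(N):
            if (i, j) == (0, 0):
                continue
            up = scores[(i - 1, j)] if i > 0 else float('-inf')
            left = scores[(i, j - 1)] if j > 0 else float('-inf')
            if up >= left:
                scores[(i, j)] = up + grid[i][j]
                trace[(i, j)] = 'U'
            else:
                scores[(i, j)] = left + grid[i][j]
                trace[(i, j)] = 'L'

    # Rebuild the path by walking the trace back from the bottom-right cell.
    path, cell = [], (M - 1, N - 1)
    while cell != (0, 0):
        d = trace[cell]
        path.append(d)
        cell = (cell[0] - 1, cell[1]) if d == 'U' else (cell[0], cell[1] - 1)
    best_score = scores[(M - 1, N - 1)]
    return best_score, ''.join(reversed(path))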
@AdityaSoni19031997
AdityaSoni19031997 / grokking_to_leetcode.md
Created August 2, 2022 02:54 — forked from tykurtz/grokking_to_leetcode.md
Grokking the coding interview equivalent leetcode problems

GROKKING NOTES

I liked the way Grokking the Coding Interview organized problems into learnable patterns. However, the course is expensive, and most of the problems are copy-pasted from LeetCode. Since the explanations on LeetCode are usually just as good, the course really boils down to a glorified curated list of LeetCode problems.

So below I made a list of leetcode problems that are as close to grokking problems as possible.

Pattern: Sliding Window
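The preview truncates before the problem list itself. As a quick illustration of the pattern, a minimal fixed-size sliding-window sketch (illustrative, not from the gist):

def max_window_sum(nums, k):
    # Slide a window of size k across nums in O(n): add the element entering
    # the window, drop the one leaving it, and track the best sum seen.
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]
        best = max(best, window)
    return best

# max_window_sum([2, 1, 5, 1, 3, 2], 3) == 9  (the window [5, 1, 3])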

class Node(object):
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None
        self.height = 1

class AVLTree(object):
    def __init__(self):
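        self.root = None

    # The preview breaks off here; what follows is a minimal sketch of a
    # standard AVL insert with rebalancing, reusing the Node fields above
    # (val, left, right, height). It is illustrative, not necessarily the
    # gist's implementation.
    def insert(self, root, val):
        # Ordinary BST insert, then rebalance on the way back up.
        if root is None:
            return Node(val)
        if val < root.val:
            root.left = self.insert(root.left, val)
        else:
            root.right = self.insert(root.right, val)
        root.height = 1 + max(self._height(root.left), self._height(root.right))
        balance = self._height(root.left) - self._height(root.right)
        if balance > 1 and val < root.left.val:     # left-left
            return self._rotate_right(root)
        if balance < -1 and val >= root.right.val:  # right-right
            return self._rotate_left(root)
        if balance > 1:                             # left-right
            root.left = self._rotate_left(root.left)
            return self._rotate_right(root)
        if balance < -1:                            # right-left
            root.right = self._rotate_right(root.right)
            return self._rotate_left(root)
        return root

    def _height(self, node):
        return node.height if node else 0

    def _rotate_left(self, z):
        # Promote z's right child; z adopts that child's left subtree.
        y = z.right
        z.right, y.left = y.left, z
        z.height = 1 + max(self._height(z.left), self._height(z.right))
        y.height = 1 + max(self._height(y.left), self._height(y.right))
        return y

    def _rotate_right(self, z):
        # Mirror image of _rotate_left.
        y = z.left
        z.left, y.right = y.right, z
        z.height = 1 + max(self._height(z.left), self._height(z.right))
        y.height = 1 + max(self._height(y.left), self._height(y.right))
        return y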

Scaling your API with rate limiters

The following are examples of the four types of rate limiters discussed in the accompanying blog post. The examples below use pseudocode-like Ruby, so if you're unfamiliar with Ruby you should be able to easily translate this approach to other languages. Complete examples in Ruby are also provided later in this gist.

In most cases you'll want all these examples to be classes, but I've used simple functions here to keep the code samples brief.

Request rate limiter

This uses a basic token bucket algorithm and relies on the fact that Redis scripts execute atomically. No other operations can run between fetching the count and writing the new count.
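As a minimal sketch of that idea in Python (the gist's own examples are in Ruby), assuming a redis-py client; the key names and refill parameters here are illustrative:

import time

import redis

# The Lua script executes atomically inside Redis, so the refill and the
# token take cannot interleave with another client's request.
TOKEN_BUCKET_LUA = """
local tokens_key = KEYS[1]
local ts_key = KEYS[2]
local rate = tonumber(ARGV[1])      -- tokens refilled per second
local capacity = tonumber(ARGV[2])  -- maximum bucket size
local now = tonumber(ARGV[3])

local tokens = tonumber(redis.call('get', tokens_key)) or capacity
local last = tonumber(redis.call('get', ts_key)) or now
tokens = math.min(capacity, tokens + (now - last) * rate)

local allowed = 0
if tokens >= 1 then
  tokens = tokens - 1
  allowed = 1
end
redis.call('set', tokens_key, tokens)
redis.call('set', ts_key, now)
return allowed
"""

def allow_request(client, user_id, rate=5, capacity=10):
    # Returns True if user_id may make a request right now.
    tokens_key = "ratelimit:%s:tokens" % user_id
    ts_key = "ratelimit:%s:ts" % user_id
    return bool(client.eval(TOKEN_BUCKET_LUA, 2, tokens_key, ts_key,
                            rate, capacity, time.time()))

# client = redis.Redis()
# if not allow_request(client, "user-123"): reject with HTTP 429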