Sushrut Ashtikar (novasush)

@novasush
novasush / index.js
Last active July 26, 2023 15:57
index.js for a Firebase Cloud Function
const functions = require("firebase-functions");
const cors = require("cors");
const express = require("express");
const bodyParser = require("body-parser");
const compression = require("compression");
// Express app config
const tasksApp = express();
tasksApp.use(cors({ origin: true }));
tasksApp.use(compression());
tasksApp.use(bodyParser.json());
// Expose the Express app as an HTTPS Cloud Function (the export name "tasks" is assumed)
exports.tasks = functions.https.onRequest(tasksApp);
Processor: Intel Core i7 (11th Gen)
RAM: 16 GB
SSD: 1 TB
Graphics: NVIDIA RTX 3050
Graphics memory: 4 GB
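Before running the TF-TRT conversion below on this machine, it helps to confirm that TensorFlow can actually see the RTX 3050; a minimal check, assuming a GPU-enabled TensorFlow build is installed:
# List the GPUs visible to TensorFlow (an empty list means no usable GPU)
import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))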
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Instantiate the TF-TRT converter.
# SAVED_MODEL_DIR is the path to the saved model.
# The precision mode can be set to FP32, FP16, or INT8.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=SAVED_MODEL_DIR,
    precision_mode=trt.TrtPrecisionMode.FP32
)
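The converter above still has to be run and its output written to disk; a minimal sketch of the remaining steps, where OUTPUT_SAVED_MODEL_DIR is an assumed placeholder for the output path:
# Convert the graph and serialize the optimized SavedModel
# (OUTPUT_SAVED_MODEL_DIR is a placeholder output directory)
converter.convert()
converter.save(OUTPUT_SAVED_MODEL_DIR)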
@novasush
novasush / Dockerfile
Created March 7, 2021 17:19
Dockerfile for FastAPI on NGINX Unit
# Use the base image provided by NGINX Unit (Python 3.9 variant)
FROM nginx/unit:1.22.0-python3.9
# Alternatively, other tags are listed at https://hub.docker.com/r/nginx/unit
COPY requirements.txt /fastapi/requirements.txt
RUN pip install -r /fastapi/requirements.txt
# Unit applies any configuration placed in /docker-entrypoint.d on first startup
COPY config.json /docker-entrypoint.d/config.json
@novasush
novasush / config.json
Created March 7, 2021 16:59
Config file for NGINX Unit running FastAPI in Docker
{
  "listeners": {
    "*:80": {
      "pass": "applications/fastapi"
    }
  },
  "applications": {
    "fastapi": {
      "type": "python 3.9",
@novasush
novasush / asgi.py
Created March 7, 2021 16:53
A Hello World FastAPI app
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def index():
    """
    A simple Hello World GET request
    """
    return {"message": "Hello World"}
class floatingrange(object):
    """Range-like object for floats, rounded to `decimal` places
    (the method bodies below are a sketch; the intent is assumed from the signature)."""
    def __init__(self, start=None, stop=None, decimal=0):
        # Mirror range(): a single argument is treated as the stop value
        if stop is None:
            start, stop = 0, start
        self.start, self.stop, self.decimal = start, stop, decimal
    def __str__(self):
        return f"floatingrange({self.start}, {self.stop}, decimal={self.decimal})"
    def __repr__(self):
        return self.__str__()
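Assuming the sketched semantics above, basic usage would look like this:
# Construct and print a floatingrange (assumed semantics, see sketch above)
fr = floatingrange(0, 1, decimal=2)
print(fr)  # floatingrange(0, 1, decimal=2)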
import multiprocessing
import tensorflow as tf

# Shuffle and batch the train_dataset. Use a buffer size of 1024
# for shuffling and a batch size of 32 for batching.
train_dataset = train_dataset.shuffle(1024).batch(32)

# Parallelize loading by prefetching the train_dataset.
# Set the prefetch buffer size to tf.data.experimental.AUTOTUNE.
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)

# Get the number of CPU cores.
cores = multiprocessing.cpu_count()
print(cores)

# Parallelize the transformation of the train_dataset by using
# the map operation with the number of parallel calls set to
# the number of CPU cores.
train_dataset = train_dataset.map(read_tfrecord, num_parallel_calls=cores)
def read_tfrecord(serialized_example):
    # Create the feature description dictionary
    feature_description = {
        'image': tf.io.FixedLenFeature((), tf.string, ""),
        'label': tf.io.FixedLenFeature((), tf.int64, -1),
    }
    # Parse the serialized_example and decode the image
    example = tf.io.parse_single_example(serialized_example, feature_description)
    image = tf.io.decode_jpeg(example['image'], channels=3)
    # Return the decoded image together with its label (completion assumed)
    return image, example['label']
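The pipeline snippets above assume a train_dataset already exists; a minimal sketch of creating it from TFRecord files, where the "train-*.tfrec" filename pattern is an assumption rather than something from the gist:
# Build the initial dataset from TFRecord files on disk
# ("train-*.tfrec" is a placeholder pattern)
import tensorflow as tf

filenames = tf.io.gfile.glob("train-*.tfrec")
train_dataset = tf.data.TFRecordDataset(filenames)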