@ClementWalter
ClementWalter / decode_and_serve.py
Created September 23, 2020 15:20
Signatures to decode a base64 image and serve inference
import tensorflow as tf


@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.string),))
def decode(image_bytes):
    """
    Takes a batch of base64 encoded images and returns the preprocessed input tensor ready for inference.
    """
    # not working on GPU if tf.__version__ < 2.3, see https://github.com/tensorflow/tensorflow/issues/28007
    with tf.device("/cpu:0"):
        input_tensor = tf.map_fn(
            lambda x: preprocessing(tf.io.decode_jpeg(contents=tf.io.decode_base64(x), channels=3))["output_0"],
            image_bytes,
            fn_output_signature=tf.float32,  # required: fn maps string -> float32, so map_fn cannot infer the dtype
        )
    return input_tensor

tf.saved_model.save(
    classifier,
    export_dir="classifier/1",
    signatures={
        "serving_default": decode_and_serve,
        "preprocessing": preprocessing,
    },
)
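
The save call references a `decode_and_serve` signature and the `preprocessing` function used inside `decode`, neither of which appears in the snippet. A minimal sketch of what they could look like, assuming `classifier` is a Keras model taking 224x224 RGB inputs; only the names come from the gist, the bodies below are assumptions:

@tf.function(input_signature=(tf.TensorSpec(shape=[None, None, 3], dtype=tf.uint8),))
def preprocessing(input_tensor):
    # assumed body: scale to [0, 1] and resize to the classifier's input size
    output_tensor = tf.image.resize(tf.cast(input_tensor, tf.float32) / 255.0, (224, 224))
    return {"output_0": output_tensor}


@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.string),))
def decode_and_serve(image_bytes):
    # chain the base64/jpeg decoding above with the classifier's forward pass
    return {"predictions": classifier(decode(image_bytes))}
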
@ClementWalter
ClementWalter / batch_request_served_model.py
Created September 23, 2020 15:29
Perform a batch request against a TensorFlow model served with Docker
import requests

response = requests.post(
    "http://localhost:8501/v1/models/classifier:predict",
    json={
        "signature_name": "serving_default",  # can be omitted
        "inputs": {
            "image_bytes": [image.numpy().decode("utf-8") for image in image_bytes][:2],  # batch request
        },
    },
)
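
For completeness, here is a sketch of how the `image_bytes` list could be built. Note that `tf.io.decode_base64` in the decode signature expects web-safe base64, which is exactly what `tf.io.encode_base64` produces; the file paths below are placeholders:

import tensorflow as tf

image_bytes = [
    tf.io.encode_base64(tf.io.read_file(path))  # web-safe base64, as expected by tf.io.decode_base64
    for path in ["image_0.jpg", "image_1.jpg"]  # hypothetical file paths
]
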
@ClementWalter
ClementWalter / Dockerfile
Last active September 23, 2020 16:08
TF Serving Heroku Dockerfile
FROM tensorflow/serving
ENV MODEL_BASE_PATH /models
ENV MODEL_NAME classifier
COPY models/classifier /models/classifier
# Fix: the base tf_serving_entrypoint.sh does not use the $PORT env variable set by Heroku
COPY tf_serving_entrypoint.sh /usr/bin/tf_serving_entrypoint.sh
# CMD is required to run on Heroku
CMD ["/usr/bin/tf_serving_entrypoint.sh"]
@ClementWalter
ClementWalter / tf_serving_entrypoint.sh
Created September 23, 2020 15:36
Modified to use the $PORT env variable
#!/bin/bash
tensorflow_model_server --port=8500 --rest_api_port="${PORT}" --model_name="${MODEL_NAME}" --model_base_path="${MODEL_BASE_PATH}"/"${MODEL_NAME}" "$@"
@ClementWalter
ClementWalter / tf_serving_heroku.md
Created September 23, 2020 15:42
How to deploy a TensorFlow model on Heroku with TensorFlow Serving

How to deploy a TensorFlow model on Heroku with TensorFlow Serving

After spending minutes or hours playing with the many wonderful examples available, for instance on the Google AI Hub, one may want to deploy a model online.

This article presents a fast and clean way of doing it with TensorFlow Serving and Heroku.

Introduction

@ClementWalter
ClementWalter / interactive_eager_few_shot_od_training_colab.ipynb
Last active January 25, 2021 11:18
interactive_eager_few_shot_od_training_colab.ipynb
@ClementWalter
ClementWalter / Chainlink_VRF_V2_unittest.md
Created March 9, 2022 20:12
How to unit-test with Chainlink VRF V2
@ClementWalter
ClementWalter / on_chain_less_1_eth.md
Created March 23, 2022 18:13
How I deployed an on-chain 10k pfp project for less than 0.1 ETH

How I deployed an on-chain 10k pfp NFT project for less than 0.1 ETH

Yes, as little as 0.1 ETH, or more precisely, as you can see on the Etherscan contract transaction page, as little as 0.096212736214 ETH, most of it going to the contract itself (0.075760070358 ETH), i.e. all the general decoding functions that could be embedded once and for all in a library. In other words, the image part of the cost is only about 0.02 ETH!

Of course, the gas price at the time of deployment was low (approximately 20 gwei), but even at a fairly high gas price (say, ten times higher) the image part would have cost only 0.2 ETH.
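
A quick back-of-the-envelope check of that scaling, using only the figures above (cost in ETH is gas used times gas price):

# cost_eth = gas_used * gas_price_gwei * 1e-9
image_gas = 0.02 / (20 * 1e-9)             # ~1,000,000 gas for the image part at ~20 gwei
cost_at_200_gwei = image_gas * 200 * 1e-9  # same gas at a 10x gas price
print(f"{cost_at_200_gwei:.2f} ETH")       # -> 0.20 ETH, matching the estimate above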