@codeperfectplus
Created March 22, 2021 08:57
Post Quantization TFLITE model
# save this file as postQuantization.py
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield ~100 samples so the converter can calibrate activation ranges.
    # Random data is only a placeholder; use real preprocessed images in practice.
    for _ in range(100):
        data = np.random.rand(1, 320, 320, 3)
        yield [data.astype(np.float32)]

saved_model_dir = "output/exported_models/tflite_infernce/saved_model"
tf_lite_model_path = "output/exported_models/model_quant.tflite"  # example destination path

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.allow_custom_ops = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.inference_input_type = tf.uint8   # or tf.int8
converter.inference_output_type = tf.uint8  # or tf.int8

tflite_quant_model = converter.convert()
with tf.io.gfile.GFile(tf_lite_model_path, 'wb') as f:
    f.write(tflite_quant_model)