Convert a frozen graph to tflite on your local machine, compiling TOCO with the Bazel build system if need be. The exact directory structure is not spelled out here, but you get the idea.
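One directory layout consistent with the relative paths used in the script (a hypothetical example only; the folder names are illustrative, so adjust INPUT_FILE and OUTPUT_FILE to your own tree):

workspace/
├── releases/
│   └── v3.0/
│       ├── tflite_graph.pb    (frozen graph exported for TFLite)
│       └── detect.tflite      (written by this script)
└── build/
    └── toco/
        └── tensorflow/        (the clone; run this script from its root)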
#!/bin/bash
# Setup, done once before running this script:
# - installed bazel in a new conda env
# - git clone https://github.com/tensorflow/tensorflow --depth 1
# - moved this file into the repo clone
# - touch WORKSPACE inside the clone
#
# The commented snippet below automates the clone + WORKSPACE steps:
# if [ ! -d tensorflow ]; then
#     echo "tensorflow clone not yet here. cloning it..."
#     git clone https://github.com/tensorflow/tensorflow --depth 1
# fi
# cd tensorflow
# if [ ! -f WORKSPACE ]; then
#     touch WORKSPACE
# fi
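# One way to get bazel into a fresh conda env (assuming the conda-forge
# package is acceptable; bazelisk or a system package manager works too):
# conda create -n toco-build -c conda-forge bazel
# conda activate toco-build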
# this is an adaptation of the bazel instructions found in
# running_on_mobile_tensorflowlite.md
# it has everything you need to know, even how to get a
# frozen graph compatible for tflite conversion
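# For reference, a rough sketch of how such a tflite-compatible frozen graph
# is typically exported with the TF Object Detection API (paths and checkpoint
# names below are placeholders, not part of this gist):
# python object_detection/export_tflite_ssd_graph.py \
#     --pipeline_config_path=path/to/pipeline.config \
#     --trained_checkpoint_prefix=path/to/model.ckpt-XXXX \
#     --output_directory=path/to/releases/v3.0 \
#     --add_postprocessing_op=true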
BAZEL_BIN_PATH="./bazel-bin" # have you compiled TOCO yet?
INPUT_FILE=../../../releases/v3.0/tflite_graph.pb
OUTPUT_FILE=../../../releases/v3.0/detect.tflite
INPUT_ARRAYS=normalized_input_image_tensor
OUTPUT_ARRAYS='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1',\
'TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3'
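# The four OUTPUT_ARRAYS above are the outputs of the custom
# TFLite_Detection_PostProcess op: detection boxes, classes, scores,
# and the number of detections.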
# pick one
# INFERENCE_TYPE=QUANTIZED_UINT8
INFERENCE_TYPE=FLOAT
echo -e "\n\nParameter Config:"
echo "- INPUT_FILE :: $INPUT_FILE"
echo "- OUTPUT_FILE :: $OUTPUT_FILE"
echo "- INPUT_ARRAYS :: $INPUT_ARRAYS"
echo "- OUTPUT_ARRAYS :: $OUTPUT_ARRAYS"
echo "- INFERENCE_TYPE :: $INFERENCE_TYPE"
if [ ! -d "$BAZEL_BIN_PATH" ]; then
    # $BAZEL_BIN_PATH does not exist yet, i.e. TOCO has not been built:
    # let bazel build it and run the conversion in one go.
    echo -e "\n\nBuilding TOCO with bazel + attempting compiling to TFLite\n\n"
    bazel run -c opt tensorflow/lite/toco:toco -- \
        --input_file=$INPUT_FILE \
        --output_file=$OUTPUT_FILE \
        --input_shapes=1,300,300,3 \
        --input_arrays=$INPUT_ARRAYS \
        --output_arrays=$OUTPUT_ARRAYS \
        --mean_values=128 \
        --std_values=128 \
        --inference_type=$INFERENCE_TYPE \
        --change_concat_input_ranges=false \
        --allow_custom_ops
else
    echo -e "\nIt seems that TOCO is built. Skipping to compiling to TFLite\n\n"
    echo $INPUT_FILE
    ./bazel-bin/tensorflow/lite/toco/toco \
        --input_file=$INPUT_FILE \
        --output_file=$OUTPUT_FILE \
        --input_shapes=1,300,300,3 \
        --input_arrays=$INPUT_ARRAYS \
        --output_arrays=$OUTPUT_ARRAYS \
        --mean_values=128 \
        --std_values=128 \
        --change_concat_input_ranges=false \
        --inference_type=$INFERENCE_TYPE \
        --allow_custom_ops
    echo -e "\n\n"
fi
#     --default_ranges_min=0 \  # add these two flags when using QUANTIZED_UINT8
#     --default_ranges_max=1 \
# Note that when INFERENCE_TYPE=QUANTIZED_UINT8 is used without passing
# --default_ranges_min= and --default_ranges_max=,
# TOCO aborts with an error like the following (found on ):
#
# 2019-09-18 10:52:47.997901: F tensorflow/lite/toco/tooling_util.cc:1728]
# Array FeatureExtractor/MobilenetV2/Conv/Relu6, which is an input to the
# DepthwiseConv operator producing the output array
# FeatureExtractor/MobilenetV2/expanded_conv/depthwise/Relu6,
# is lacking min/max data, which is necessary for quantization.
# If accuracy matters, either target a non-quantized output format,
# or run quantized training with your model from a floating point checkpoint
# to change the input graph to contain min/max information.
# If you don't care about accuracy, you can pass
# --default_ranges_min= and --default_ranges_max= for easy experimentation.
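#
# Alternative (a sketch, assuming a TensorFlow 1.x pip install that ships the
# tflite_convert CLI): the same conversion can be attempted without building
# TOCO from source. Note that tflite_convert takes --std_dev_values instead
# of --std_values; the other flags mirror the ones above.
# tflite_convert \
#     --graph_def_file=$INPUT_FILE \
#     --output_file=$OUTPUT_FILE \
#     --input_shapes=1,300,300,3 \
#     --input_arrays=$INPUT_ARRAYS \
#     --output_arrays=$OUTPUT_ARRAYS \
#     --mean_values=128 \
#     --std_dev_values=128 \
#     --inference_type=$INFERENCE_TYPE \
#     --change_concat_input_ranges=false \
#     --allow_custom_ops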