Convert a TensorFlow Object Detection SavedModel to a Web Model for TensorFlow.js
# Currently the TensorFlow ecosystem is full of confusingly similar terms.
# Maybe that is down to my own understanding rather than the naming itself.
# Regardless, I struggled to make this work, so I figured I would write a Gist.
# First, we are trying to convert a previously created TensorFlow frozen graph or checkpoint files.
# Specifically, I wanted to convert some of the TensorFlow Object Detection API models.
# The download from the Object Detection model zoo already contains a SavedModel.
# If you need a SavedModel from your own trained Object Detection model, you will need to export it using
# the exporter script provided by the object_detection module. It is not that well documented, but if you
# pass it a pipeline config along with the correct params it will create a frozen graph as well as a SavedModel.
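# A rough sketch of that export step, assuming the TF1 Object Detection API layout: the
# object_detection/export_inference_graph.py script wraps the exporter module, and the paths and the
# checkpoint number below are placeholders for your own files.
# python object_detection/export_inference_graph.py \
#     --input_type image_tensor \
#     --pipeline_config_path path/to/pipeline.config \
#     --trained_checkpoint_prefix path/to/model.ckpt-XXXX \
#     --output_directory path/to/exported_model
# The exported_model/saved_model directory it writes is the SavedModel used in the rest of this gist.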
# Download the model files.
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
# Untar the model.
tar -xzvf ssd_mobilenet_v2_coco_2018_03_29.tar.gz
# Change into the model directory.
cd ssd_mobilenet_v2_coco_2018_03_29
# Install the dependencies.
pip install tensorflow-gpu  # or just tensorflow for CPU
pip install tensorflowjs
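# Heads up: newer tensorflowjs releases changed the converter CLI, and the --output_node_names flag used
# further down may no longer be accepted. If the conversion command errors on an unknown flag, pinning a
# pre-1.0 release is one way to keep these flags working.
# pip install "tensorflowjs<1.0"
tensorflowjs_converter --help | grep output_node_names  # quick check that the flag exists in your install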
# So now we have a SavedModel in ssd_mobilenet_v2_coco_2018_03_29/saved_model.
# However, we still need to find the model's inputs and outputs. The easiest way to do this with a
# SavedModel is the saved_model_cli tool that ships with the tensorflow pip package.
# This command prints only the information that matters:
saved_model_cli show --dir saved_model --tag_set serve --signature_def serving_default
# Output of the command:
# The given SavedModel SignatureDef contains the following input(s):
#   inputs['inputs'] tensor_info:
#       dtype: DT_UINT8
#       shape: (-1, -1, -1, 3)
#       name: image_tensor:0
# The given SavedModel SignatureDef contains the following output(s):
#   outputs['detection_boxes'] tensor_info:
#       dtype: DT_FLOAT
#       shape: (-1, 100, 4)
#       name: detection_boxes:0
#   outputs['detection_classes'] tensor_info:
#       dtype: DT_FLOAT
#       shape: (-1, 100)
#       name: detection_classes:0
#   outputs['detection_scores'] tensor_info:
#       dtype: DT_FLOAT
#       shape: (-1, 100)
#       name: detection_scores:0
#   outputs['num_detections'] tensor_info:
#       dtype: DT_FLOAT
#       shape: (-1)
#       name: num_detections:0
# Method name is: tensorflow/serving/predict
# This will print all the information stored in the graph:
saved_model_cli show --dir saved_model --all
# The output is the same but much longer.
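# The node names the converter wants are just the tensor names from the signature above with the ":0"
# suffix stripped. As a rough sketch, you can pull them out of saved_model_cli's output with standard
# text tools instead of copying them by hand (order does not matter):
output_nodes=$(saved_model_cli show --dir saved_model --tag_set serve --signature_def serving_default \
    | awk '/name:/ {print $2}' | grep -v '^image_tensor' | sed 's/:0$//' | paste -sd, -)
echo "${output_nodes}"  # e.g. detection_boxes,detection_classes,detection_scores,num_detections
# You could pass "${output_nodes}" to --output_node_names below instead of typing out the list.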
# Now that we know the node names, we can run the TensorFlow.js converter.
current_dir=$(pwd)
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_node_names='detection_boxes,detection_scores,num_detections,detection_classes' \
    --saved_model_tags=serve \
    ${current_dir}/saved_model \
    ${current_dir}/web_model
# After the normal GPU initialization spam you're going to see this:
# Converted 0 variables to const ops.
# Unsupported Ops in the model:
#   TopKV2, Exit, Split, TensorArrayGatherV3,
#   NonMaxSuppressionV2, Assert, TensorArraySizeV3,
#   Unpack, TensorArrayWriteV3, TensorArrayReadV3,
#   All, LoopCond, Merge, Switch, Enter, Where,
#   NextIteration, TensorArrayV3, TensorArrayScatterV3,
#   Rank, StridedSlice, Size
# On the plus side, the SavedModel and the output node names you have now gained can already be used
# with TensorFlow Serving.
# Yay us. Anyway, once these ops are better supported I will update this gist accordingly.
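# A rough sketch of that TensorFlow Serving route, using the stock tensorflow/serving Docker image.
# Serving expects a numeric version directory under the model name, so copy the SavedModel into one
# first; the model name "ssd_mobilenet", the /tmp path, and the port mapping are arbitrary choices.
# Run these separately, since the server stays in the foreground:
# mkdir -p /tmp/serving/ssd_mobilenet/1
# cp -r ${current_dir}/saved_model/* /tmp/serving/ssd_mobilenet/1/
# docker run -p 8501:8501 \
#     --mount type=bind,source=/tmp/serving/ssd_mobilenet,target=/models/ssd_mobilenet \
#     -e MODEL_NAME=ssd_mobilenet -t tensorflow/serving
# curl http://localhost:8501/v1/models/ssd_mobilenet  # REST status endpoint for the loaded model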
# Currently I have tested the above procedure on the following models from the TF Object Detection model zoo
# (a loop for converting several of them in one go is sketched after the list).
# I'll add more as I continue to test the different models.
#   ssd_inception_v2_coco
#   faster_rcnn_inception_v2_coco
#   ssd_mobilenet_v1_coco
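# If you want to run the same conversion over several zoo models, a loop like this is a reasonable
# sketch; the tarball names are the dated file names from the model zoo at the time of writing, so
# check the zoo page before relying on them. Run it from the directory that holds the downloaded
# tarballs (one level up from where we cd'd earlier):
# cd ..
# for model in ssd_inception_v2_coco_2018_01_28 faster_rcnn_inception_v2_coco_2018_01_28 ssd_mobilenet_v1_coco_2018_01_28; do
#     wget http://download.tensorflow.org/models/object_detection/${model}.tar.gz
#     tar -xzvf ${model}.tar.gz
#     tensorflowjs_converter \
#         --input_format=tf_saved_model \
#         --output_node_names='detection_boxes,detection_scores,num_detections,detection_classes' \
#         --saved_model_tags=serve \
#         ${model}/saved_model ${model}/web_model
# done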
CobbleVision commented Jul 18, 2018
Currently the following ops are still unsupported: Assert, CropAndResize, Where, NonMaxSuppressionV2, TopKV2
mrgoonie commented Oct 24, 2018
Have you succeeded in using this converted SavedModel in TensorFlow.js?
Kint-Kang commented Jun 2, 2018
Any update? Still unsupported ops?