@bmabir17
Last active February 6, 2022 20:54
Converts the Mask R-CNN Keras model (https://github.com/matterport/Mask_RCNN/releases/tag/v2.0) to TFLite.
import tensorflow as tf
import numpy as np
import mrcnn.model as modellib  # https://github.com/matterport/Mask_RCNN/
from mrcnn.config import Config
import keras.backend as keras

PATH_TO_SAVE_FROZEN_PB = "./"
FROZEN_NAME = "saved_model.pb"


def load_model(Weights):
    """Build the Mask R-CNN inference model and load the given weights."""
    global model, graph

    class InferenceConfig(Config):
        NAME = "coco"
        NUM_CLASSES = 1 + 80
        IMAGE_META_SIZE = 1 + 3 + 3 + 4 + 1 + NUM_CLASSES
        DETECTION_MAX_INSTANCES = 100
        DETECTION_MIN_CONFIDENCE = 0.7
        DETECTION_NMS_THRESHOLD = 0.3
        GPU_COUNT = 1
        IMAGES_PER_GPU = 1

    config = InferenceConfig()
    Logs = "./logs"
    model = modellib.MaskRCNN(mode="inference", config=config, model_dir=Logs)
    model.load_weights(Weights, by_name=True)
    graph = tf.get_default_graph()


# Reference: https://github.com/bendangnuksung/mrcnn_serving_ready/blob/master/main.py
def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """Freeze the session graph by turning all variables into constants."""
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(
            set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            session, input_graph_def, output_names, freeze_var_names)
        return frozen_graph


def freeze_model(model, name, sess):
    """Write the frozen graph (first four outputs) to PATH_TO_SAVE_FROZEN_PB/name."""
    frozen_graph = freeze_session(
        sess,
        output_names=[out.op.name for out in model.outputs][:4])
    directory = PATH_TO_SAVE_FROZEN_PB
    tf.train.write_graph(frozen_graph, directory, name, as_text=False)


def keras_to_tflite(in_weight_file, out_weight_file):
    sess = tf.Session()
    keras.set_session(sess)
    load_model(in_weight_file)
    global model
    freeze_model(model.keras_model, FROZEN_NAME, sess)
    # https://github.com/matterport/Mask_RCNN/issues/2020#issuecomment-596449757
    input_arrays = ["input_image"]
    output_arrays = ["mrcnn_class/Softmax", "mrcnn_bbox/Reshape"]
    converter = tf.contrib.lite.TocoConverter.from_frozen_graph(
        PATH_TO_SAVE_FROZEN_PB + "/" + FROZEN_NAME,
        input_arrays, output_arrays,
        input_shapes={"input_image": [1, 256, 256, 3]}
    )
    converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
    converter.post_training_quantize = True
    tflite_model = converter.convert()
    open(out_weight_file, "wb").write(tflite_model)
    print("*" * 80)
    print("Finished converting keras model to Frozen tflite")
    print('PATH: ', out_weight_file)
    print("*" * 80)


keras_to_tflite("./mask_rcnn_coco.h5", "./mask_rcnn_coco.tflite")
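
Not part of the original gist: below is a minimal sketch of how the resulting .tflite file could be loaded from Python with the TFLite interpreter, assuming TF 1.15+ with Flex/Select-TF-ops support (the conversion above uses SELECT_TF_OPS).

# Usage sketch (not part of the gist): load and run the converted model with
# the TFLite Python interpreter. Assumes TF 1.15+ and that the Flex/Select-TF-ops
# delegate is available, since the conversion above uses SELECT_TF_OPS.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="./mask_rcnn_coco.tflite")
interpreter.allocate_tensors()  # unsupported-op errors tend to surface here

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image matching the [1, 256, 256, 3] shape used during conversion.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

for d in output_details:
    print(d["name"], interpreter.get_tensor(d["index"]).shape)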
@zactodd commented Apr 17, 2020

How do I obtain the correct image shape for configs that are adapted from Matterport's config? i.e. what should change in {"input_image":[1,1024,1024,3]}?

@bmabir17 (Author)

Sorry, I didn't understand what you meant by that. Do you want to know how I got that shape value?

@zactodd commented Apr 18, 2020

Yes, I have tried it, and that was the only part that failed because I have changed the attributes in the config class. If you had a way to take the config class and derive the shape, that would be cool.

@bmabir17 (Author)

Can you provide the changes you have made to the attributes in the config class? Maybe I can help you find out what to change in this script.

@zactodd commented Apr 21, 2020

We use Mask R-CNN in a variety of projects, but I'll give an example.

Configurations:
"BACKBONE" :resnet101,
"BACKBONE_STRIDES" :[4, 8, 16, 32, 64],
"BATCH_SIZE" :2,
"BBOX_STD_DEV" :[0.1 0.1 0.2 0.2],
"COMPUTE_BACKBONE_SHAPE" :None,
"DETECTION_MAX_INSTANCES" :30,
"DETECTION_MIN_CONFIDENCE" :0.7,
"DETECTION_NMS_THRESHOLD" :0.3,
"FPN_CLASSIF_FC_LAYERS_SIZE" :1024,
"GPU_COUNT" :1,
"GRADIENT_CLIP_NORM" :5.0,
"IMAGES_PER_GPU" :2,
"IMAGE_CHANNEL_COUNT" :3,
"IMAGE_MAX_DIM" :1024,
"IMAGE_META_SIZE" :14,
"IMAGE_MIN_DIM" :1024,
"IMAGE_MIN_SCALE" :0,
"IMAGE_RESIZE_MODE" :square,
"IMAGE_SHAPE" :[1024 1024 3],
"LEARNING_MOMENTUM" :0.9,
"LEARNING_RATE" :0.001,
"LOSS_WEIGHTS" :{'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0},
"MASK_POOL_SIZE" :14,
"MASK_SHAPE" :[28, 28],
"MAX_GT_INSTANCES" :100,
"MEAN_PIXEL" :[123.7 116.8 103.9],
"MINI_MASK_SHAPE" :(56, 56),
"NAME" : TEST0114,
"NUM_CLASSES" :2,
"POOL_SIZE" :7,
"POST_NMS_ROIS_INFERENCE" :1000,
"POST_NMS_ROIS_TRAINING" :2000,
"PRE_NMS_LIMIT" :6000,
"ROI_POSITIVE_RATIO" :0.33,
"RPN_ANCHOR_RATIOS" :[0.8, 1.6, 2.3],
"RPN_ANCHOR_SCALES" :[15, 75, 150, 240, 300],
"RPN_ANCHOR_STRIDE" :1,
"RPN_BBOX_STD_DEV" :[0.1 0.1 0.2 0.2],
"RPN_NMS_THRESHOLD" :0.7,
"RPN_TRAIN_ANCHORS_PER_IMAGE" :128,
"STEPS_PER_EPOCH" :1000,
"TOP_DOWN_PYRAMID_SIZE" :256,
"TRAIN_BN" :False,
"TRAIN_ROIS_PER_IMAGE" :200,
"USE_MINI_MASK" :True,
"USE_RPN_ROIS" :True,
"VALIDATION_STEPS" :200,
"WEIGHT_DECAY" :0.0001

@zactodd commented Apr 21, 2020

I think my issue is that I don't know where the 1 comes from in [1, w, h, c]; I'm guessing w, h and c are the width, height and colour channels.

@bmabir17 (Author) commented May 9, 2020

The 1 is actually the batch size. But in your config above, the batch size is 2.
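
To make that concrete, here is a small sketch (not from the gist) of deriving the converter's input_shapes entry from a custom Matterport config instead of hard-coding it, assuming the standard BATCH_SIZE and IMAGE_SHAPE attributes:

# Sketch: build input_shapes from the config rather than hard-coding [1, 256, 256, 3].
# In the Matterport Config class, BATCH_SIZE = GPU_COUNT * IMAGES_PER_GPU and
# IMAGE_SHAPE is [H, W, C].
config = InferenceConfig()
input_shapes = {"input_image": [config.BATCH_SIZE] + list(config.IMAGE_SHAPE)}
# For the config posted above this gives [2, 1024, 1024, 3].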

@sayakpaul

@bmabir17, thank you for providing this. Could you provide an example that shows how to use the TFLite model in Python?

@ulhaqi12 commented Jun 19, 2020

Hi,
Thank you for this code. I have tried it and obtained a tflite file, but the tflite file is not working with the tflite interpreter in Python. Have you fully succeeded in implementing and deploying the model on Android?
best,
Ikram

@bmabir17 (Author)

Unfortunately, I was not able to get it running in Python or on Android, as I faced an error.
Here is the issue I opened on the TensorFlow repo regarding the error:
tensorflow/tensorflow#39179

@ulhaqi12

@bmabir17 Thank you so much for your response. I think in Mask R-CNN they used a custom layer named "Batch Norm", while the layer available in Keras is named "Batch Normalization". I tried but couldn't succeed in storing the whole model in .h5 format. Plus, there may be some operators used that are not currently supported by TensorFlow Lite.

@bmabir17 (Author) commented Jun 19, 2020

Layer names do not matter, as they will run using their respective platform specification, and the converter does the job of conversion. And batch normalization is not a custom computation layer; it just has a different naming convention in Keras and TFLite.

Yes, there are some ops that are not supported in TFLite; that is why I used tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS (the converter.target_ops line in the script), as suggested in this doc.
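
For reference, a small sketch of the same op-set selection written against the attribute exposed by newer converters (TF 1.15+/2.x use target_spec.supported_ops; the script above uses the older converter.target_ops):

# Same idea as converter.target_ops in the script, but using the newer attribute.
# TFLITE_BUILTINS covers the built-in TFLite kernels; SELECT_TF_OPS falls back to
# TensorFlow (Flex) kernels for ops TFLite does not support natively.
# "converter" is the TFLiteConverter created as in the script.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]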

@ulhaqi12

Oh, okay. Well, thank you for your explanation. I hope you will get it working soon.
Best,
Ikram

@aa7817 commented Jul 7, 2020

@bmabir17

I am getting the below error when using your convert.py. It seems like input_graph_def.node is empty. Your thoughts would be valuable here.

 ---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-57-9b6c22a16bff> in <module>()
     11 #tflite_model = converter.convert()
     12 
---> 13 keras_to_tflite(model_path,"./mask_rcnn_coco.tflite")
     14 
     15 img = load_img("..//..//..//drive//My Drive//sample-images//__HP__2YConfDAHT__1024x768.png")

7 frames
<ipython-input-56-597bc145a9d9> in keras_to_tflite(in_weight_file, out_weight_file)
     51     load_model(in_weight_file)
     52     global model
---> 53     freeze_model(model.keras_model, FROZEN_NAME,sess)
     54     # https://github.com/matterport/Mask_RCNN/issues/2020#issuecomment-596449757
     55     input_arrays = ["input_image"]

<ipython-input-56-597bc145a9d9> in freeze_model(model, name, sess)
     41     frozen_graph = freeze_session(
     42         sess,
---> 43         output_names=[out.op.name for out in model.outputs][:4])
     44     directory = PATH_TO_SAVE_FROZEN_PB
     45     tf.train.write_graph(frozen_graph, directory, name , as_text=False)

<ipython-input-56-597bc145a9d9> in freeze_session(session, keep_var_names, output_names, clear_devices)
     35 
     36         frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants(
---> 37             session, input_graph_def, output_names, freeze_var_names)
     38         return frozen_graph
     39 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    322               'in a future version' if date is None else ('after %s' % date),
    323               instructions)
--> 324       return func(*args, **kwargs)
    325     return tf_decorator.make_decorator(
    326         func, new_func, 'deprecated',

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/graph_util_impl.py in convert_variables_to_constants(sess, input_graph_def, output_node_names, variable_names_whitelist, variable_names_blacklist)
    357   # This graph only includes the nodes needed to evaluate the output nodes, and
    358   # removes unneeded nodes like those involved in saving and assignment.
--> 359   inference_graph = extract_sub_graph(input_graph_def, output_node_names)
    360 
    361   # Identify the ops in the graph.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    322               'in a future version' if date is None else ('after %s' % date),
    323               instructions)
--> 324       return func(*args, **kwargs)
    325     return tf_decorator.make_decorator(
    326         func, new_func, 'deprecated',

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/graph_util_impl.py in extract_sub_graph(graph_def, dest_nodes)
    203   name_to_input_name, name_to_node, name_to_seq_num = _extract_graph_summary(
    204       graph_def)
--> 205   _assert_nodes_are_present(name_to_node, dest_nodes)
    206 
    207   nodes_to_keep = _bfs_for_reachable_nodes(dest_nodes, name_to_input_name)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/graph_util_impl.py in _assert_nodes_are_present(name_to_node, nodes)
    158   """Assert that nodes are present in the graph."""
    159   for d in nodes:
--> 160     assert d in name_to_node, "%s is not in graph" % d
    161 
    162 

AssertionError: mrcnn_detection_3/Reshape_1 is not in graph

@bmabir17 (Author) commented Jul 8, 2020

Are you using the same model weights provided in the Mask_RCNN repo?
If not, can you inspect your model weights with https://lutzroeder.github.io/netron/ and check whether they contain mrcnn_detection_3/Reshape_1?
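
Alternatively, a quick programmatic check is possible; a rough sketch, assuming the model was built with load_model() as in the script above:

# Print the output op names that freeze_model() will request, and check whether a
# node like mrcnn_detection_.../Reshape_1 appears among them for your weights/model.
print([out.op.name for out in model.keras_model.outputs][:4])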

@ulhaqi12 commented Jul 8, 2020

@bmabir17
Hi,
Hope you are doing well. Any progress on using it in an Android application?

@bmabir17 (Author) commented Jul 9, 2020

I am still waiting for the TensorFlow Lite team to get the TF ops running on TFLite. They told me it would work with the nightly build, but the error is still the same.

@akrsrivastava

ValueError: Invalid tensors 'input_image' were found.

Any help with the above error?

@Jainam0 commented Jul 20, 2020

I'm getting this error :( please help me.

FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Attempting to use uninitialized value bn4a_branch2a_8/gamma
[[{{node bn4a_branch2a_8/gamma}}]]
[[res3d_branch2a_8/bias/_2417]]
(1) Failed precondition: Attempting to use uninitialized value bn4a_branch2a_8/gamma
[[{{node bn4a_branch2a_8/gamma}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

FailedPreconditionError Traceback (most recent call last)
/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
1382 '\nsession_config.graph_options.rewrite_options.'
1383 'disable_meta_optimizer = True')
-> 1384 raise type(e)(node_def, op, message)
1385
1386 def _extend_graph(self):

FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Attempting to use uninitialized value bn4a_branch2a_8/gamma
[[node bn4a_branch2a_8/gamma (defined at /tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py:1748) ]]
[[res3d_branch2a_8/bias/_2417]]
(1) Failed precondition: Attempting to use uninitialized value bn4a_branch2a_8/gamma
[[node bn4a_branch2a_8/gamma (defined at /tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py:1748) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'bn4a_branch2a_8/gamma':
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/usr/local/lib/python3.6/dist-packages/traitlets/config/application.py", line 664, in launch_instance
app.start()
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelapp.py", line 499, in start
self.io_loop.start()
File "/usr/local/lib/python3.6/dist-packages/tornado/ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 456, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 486, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 438, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2828, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 81, in
keras_to_tflite("./mask_rcnn_coco.h5","./mask_rcnn_coco.tflite")
File "", line 61, in keras_to_tflite
load_model(in_weight_file)
File "", line 27, in load_model
model_dir=Logs)
File "/content/Mask_RCNN/mrcnn/model.py", line 1837, in init
self.keras_model = self.build(mode=mode, config=config)
File "/content/Mask_RCNN/mrcnn/model.py", line 1901, in build
stage5=True, train_bn=config.TRAIN_BN)
File "/content/Mask_RCNN/mrcnn/model.py", line 194, in resnet_graph
x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a', train_bn=train_bn)
File "/content/Mask_RCNN/mrcnn/model.py", line 150, in conv_block
x = BatchNorm(name=bn_name_base + '2a')(x, training=train_bn)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/topology.py", line 576, in call
self.build(input_shapes[0])
File "/usr/local/lib/python3.6/dist-packages/keras/layers/normalization.py", line 103, in build
constraint=self.gamma_constraint)
File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/topology.py", line 400, in add_weight
constraint=constraint)
File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 380, in variable
v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py", line 258, in call
return cls._variable_v1_call(*args, **kwargs)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py", line 219, in _variable_v1_call
shape=shape)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py", line 197, in
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variable_scope.py", line 2519, in default_variable_creator
shape=shape)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py", line 262, in call
return super(VariableMetaclass, cls).call(*args, **kwargs)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py", line 1688, in init
shape=shape)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py", line 1846, in _init_from_args
shape, self._initial_value.dtype.base_dtype, name=name)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/state_ops.py", line 79, in variable_op_v2
shared_name=shared_name)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/gen_state_ops.py", line 1621, in variable_v2
shared_name=shared_name, name=name)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 1748, in init
self._traceback = tf_stack.extract_stack()

@bmabir17 (Author) commented Jul 21, 2020

@akrsrivastava

ValueError: Invalid tensors 'input_image' were found.

Any help with the above error?

Please check your model with Netron to see whether its input node is named input_image; if not, change it accordingly.
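
A rough sketch of doing that check programmatically instead of with Netron, assuming TF 1.x and the saved_model.pb written by the script:

# List the Placeholder (input) nodes of the frozen graph; the name passed to
# input_arrays must match one of these exactly (e.g. "input_image").
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("./saved_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

print([n.name for n in graph_def.node if n.op == "Placeholder"])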

@bmabir17 (Author)

@Jainam0
It seems the weights file you are using was generated from a custom-modified (or old) model structure rather than the one currently present in the Matterport repo. If you still want to use this weights file, you have to modify this line #L150 in your imported mask-rcnn library path, from 2a -> 2a_8. There might be more lines you have to modify; you will have to find those.

I would rather suggest using the weights given here; it will be easier.

@Jainam0 commented Jul 27, 2020

@bmabir17
The previous issue was solved; it was a session error I was facing. Can you help me with this? I have converted the epoch-10 .h5 into .pb, but now, while converting it into .tflite, it is not working and gives an error like this:
UnicodeDecodeError Traceback (most recent call last)
in ()
4 export_dir1 = "/content/drive/My Drive/NumberPlateDetection/saved_model.pb"
5 with open(export_dir1, 'r', encoding="utf8") as f:
----> 6 save_file = f.read()
7 input_arrays = ["input_image"]
8 #input_arrays = ["input_image","input_anchors","input_image_meta"]

/usr/lib/python3.6/codecs.py in decode(self, input, final)
319 # decode input (taking the buffer into account)
320 data = self.buffer + input
--> 321 (result, consumed) = self._buffer_decode(data, self.errors, final)
322 # keep undecoded input until the next call
323 self.buffer = data[consumed:]

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 46: invalid start byte

@bmabir17 (Author)


Either your model was not saved properly, or your model file has become corrupted.
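
One detail in the snippet above is also worth noting (an aside, not a diagnosis): saved_model.pb is a binary protobuf, so opening it in text mode raises UnicodeDecodeError even for a healthy file, and the converter only needs the file path, never the file contents. A minimal sketch of the binary-mode read:

# Aside: .pb files are binary protobufs, so use "rb" if you really need to read
# the bytes; tf.contrib.lite.TocoConverter.from_frozen_graph() itself only takes
# the path to the file, not its contents.
with open("/content/drive/My Drive/NumberPlateDetection/saved_model.pb", "rb") as f:
    graph_bytes = f.read()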

@Rubeen-Mohammad


Hi @Jainam0! I'm getting this issue again and again, but at different lines. Can you please help me resolve it?
Thank you

@Jainam0 commented Nov 26, 2020


I solved that using tensorflow/tensorflow#28287 (comment), but I was not able to convert it into .tflite; I have only converted SSD models into .tflite.

@lbininhbl

Hi @bmabir17
Thanks to your code I was able to convert the tflite model successfully. But when I ran the tflite model on iOS, I got the below error:
TensorFlow Lite Error: Input tensor 314 lacks data

I looked through the code and noticed this code in the keras_to_tflite function:
input_arrays = ["input_image"]
output_arrays = ["mrcnn_class/Softmax","mrcnn_bbox/Reshape"]

But the real Keras model needs more inputs when the predict function is executed. I don't know if that has anything to do with it. What do you think?
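
For reference, a small sketch (an assumption, not from the gist) of looking up what tensor 314 actually is with the Python interpreter; if it resolves to input_image_meta or input_anchors, those placeholders were left in the frozen graph but never listed in input_arrays:

# Inspect the converted model's tensors and find the one the iOS runtime
# complains about ("Input tensor 314 lacks data").
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="./mask_rcnn_coco.tflite")
for d in interpreter.get_tensor_details():
    if d["index"] == 314 or "input_" in d["name"]:
        print(d["index"], d["name"], d["shape"])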

@kaamlaS commented Jan 27, 2022

Hi @bmabir17,
I can't seem to move past this point:
ValueError: Invalid tensors 'input_image' were found.
I checked the model using model.keras_model.summary() and it shows that the first layer has the name input_image. What do I do?

@Tubhalooter

@kaamlaS I got past this; check out matterport/Mask_RCNN#2020.

Let me know if you get it working.
Also, I can't get the output names to work, so I left them out, but that is stopping the conversion. How did you pass the output names to the parameter?
