
@markdtw
Created December 29, 2017 07:21
Extracting detection features from the TensorFlow Object Detection API.
"""This file extracts faster-rcnn features and bounding box coordinates"""
import pdb
import argparse
import numpy as np
import tensorflow as tf
import PIL.Image as PILI
def session(sess, feat_conv, feat_avg, boxes, classes, scores, image_tensor, image):
feat_conv_out, feat_avg_out, boxes_out, classes_out, scores_out = sess.run([
feat_conv, feat_avg, boxes, classes, scores], feed_dict={image_tensor: image})
feat_conv_out = feat_conv_out.squeeze()
feat_avg_out = feat_avg_out.squeeze()
boxes_out = boxes_out.squeeze()
classes_out = classes_out.squeeze().astype(np.int32)
scores_out = scores_out.squeeze()
return feat_conv_out, feat_avg_out, boxes_out, classes_out, scores_out
def load_graph(graph, ckpt_path):
with graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(ckpt_path, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--img', metavar='', type=str, default=None, help='Image path.')
parser.add_argument('--model', metavar='', type=str, default='frcnn_res101', help='frcnn_incresv2 or frcnn_res101.')
args, unparsed = parser.parse_known_args()
if len(unparsed) != 0: raise SystemExit('Unknown argument: {}'.format(unparsed))
graph = tf.Graph()
if args.model == 'frcnn_incresv2':
ckpt_path = './faster_rcnn_inception_resnet_v2_atrous_coco_2017_11_08/frozen_inference_graph.pb'
load_graph(graph, ckpt_path)
# (1, ?, ?, 3)
image_tensor = graph.get_tensor_by_name('image_tensor:0')
# (100, 8, 8, 1536)
feat_conv = graph.get_tensor_by_name('SecondStageFeatureExtractor/InceptionResnetV2/Conv2d_7b_1x1/Relu:0')
# (100, 1, 1, 1536)
feat_avg = graph.get_tensor_by_name('SecondStageBoxPredictor/AvgPool:0')
elif args.model == 'frcnn_res101':
ckpt_path = './faster_rcnn_resnet101_coco_2017_11_08/frozen_inference_graph.pb'
load_graph(graph, ckpt_path)
# (1, ?, ?, 3)
image_tensor = graph.get_tensor_by_name('image_tensor:0')
# (100, 7, 7, 2048)
feat_conv = graph.get_tensor_by_name('SecondStageFeatureExtractor/resnet_v1_101/block4/unit_3/bottleneck_v1/Relu:0')
# (100, 1, 1, 2048)
feat_avg = graph.get_tensor_by_name('SecondStageBoxPredictor/AvgPool:0')
else:
raise SystemExit('Unknown model: {}'.format(args.model))
boxes = graph.get_tensor_by_name('detection_boxes:0')
scores = graph.get_tensor_by_name('detection_scores:0')
classes = graph.get_tensor_by_name('detection_classes:0')
print ('model: {}'.format(args.model))
# Load tf model into memory
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config, graph=graph)
print ('Detect a single image')
# Load image
image = PILI.open(args.img)
image = np.asarray(image)
# Run session
feat_conv, feat_avg, boxes, classes, scores = session(
sess, feat_conv, feat_avg, boxes, classes, scores, image_tensor, np.expand_dims(image, 0))
print ('Done')
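Usage sketch (the script filename here is an assumption; the flags come straight from the argparse setup above): python extract.py --img ./example.jpg --model frcnn_res101. The script expects the matching pretrained frozen graph to sit in the hard-coded ckpt_path directory next to it.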
@jshi31

jshi31 commented Sep 7, 2020

Feature extraction support seems to have been recently added (in this PR: tensorflow/models#7208). You can use it by re-exporting the existing models. The features extracted from bounding boxes will then be named detection_features:0.

Is there any documentation on how to enable this? From export_inference_graph.py I can see that it is an optional parameter, but I can't figure out how to turn it on.

This stack overflow answer worked for me:
https://stackoverflow.com/a/57536793/1886357

Even if I set output_final_box_features to True in faster_rcnn.proto, I cannot get the 'detection_features' key in output_dict...

Now it works: after modifying faster_rcnn.proto, you have to recompile the protobuf into a Python file again.

Two questions:
1. After modifying faster_rcnn.proto, is recompiling the protobuf into a Python file enough?
2. How do I recompile the protobuf into a Python file?

Hi,
When you install the TF detection API, you first compile the protos as described here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1.md#python-package-installation. So before you compile the protos, you need to set output_final_box_features to true in faster_rcnn.proto: https://github.com/tensorflow/models/blob/master/research/object_detection/protos/faster_rcnn.proto#L180.
After compiling the protos, just follow the demo example from the Stack Overflow answer (https://stackoverflow.com/a/57536793/1886357) and you will get it.
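
A minimal sketch of the re-export route discussed above, assuming a model has already been re-exported with output_final_box_features enabled (tensorflow/models#7208) and the protos recompiled beforehand (e.g. protoc object_detection/protos/*.proto --python_out=. from the research/ directory, per the linked TF1 install guide). The frozen-graph and image paths are placeholders; the detection_features:0 tensor name is the one mentioned above. This mirrors the gist's load_graph logic, not the author's exact code.

# Hedged sketch: load a re-exported frozen graph and fetch 'detection_features:0'
# alongside the standard detection outputs. Paths below are placeholders.
import numpy as np
import tensorflow as tf
import PIL.Image as PILI

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('./exported/frozen_inference_graph.pb', 'rb') as fid:
        graph_def.ParseFromString(fid.read())
    tf.import_graph_def(graph_def, name='')

image_tensor = graph.get_tensor_by_name('image_tensor:0')
boxes = graph.get_tensor_by_name('detection_boxes:0')
scores = graph.get_tensor_by_name('detection_scores:0')
# Per-box features exposed by the re-exported graph (name taken from the PR discussion).
features = graph.get_tensor_by_name('detection_features:0')

with tf.Session(graph=graph) as sess:
    image = np.expand_dims(np.asarray(PILI.open('example.jpg')), 0)
    boxes_out, scores_out, features_out = sess.run(
        [boxes, scores, features], feed_dict={image_tensor: image})
print(features_out.shape)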

@parseh123

Thanks.
How do I set output_final_box_features to true in faster_rcnn.proto? Should it be (optional bool output_final_box_features = 42 [default = true];) or (optional bool output_final_box_features = true [default = false];)?
I tried both, but neither solved it.

@jshi31

jshi31 commented Sep 7, 2020

(optional bool output_final_box_features = 42 [default = true];) is correct. I was using detection API v1 and TensorFlow 1.15, and just did the steps above to enable the box features.

@parseh123

I was using get_tensor_by_name('SecondStageBoxPredictor/AvgPool:0') and obtained a feature vector with dim=2048 for each box, but I don't know in which space these feature vectors are discriminative: Euclidean space, cosine space, or some other space?
