Created August 3, 2020 12:39
# Getting an image from the COCO dataset
import cv2
from google.colab.patches import cv2_imshow  # Colab replacement for cv2.imshow

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog

!wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O input.jpg
im = cv2.imread("./input.jpg")
cv2_imshow(im)

# Create a detectron2 config and a detectron2 `DefaultPredictor` to run inference on this image.
cfg = get_cfg()
# Add project-specific config (e.g., TensorMask) here if you're not running a model in detectron2's core library.
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence threshold for this model
# Pick a model from detectron2's model zoo. You can use a https://dl.fbaipublicfiles... URL as well.
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
outputs = predictor(im)

# Inspect the outputs. See https://detectron2.readthedocs.io/tutorials/models.html#model-output-format for the specification.
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)

# The last and final step is to visualize our processed image.
v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])
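The slicing trick `im[:, :, ::-1]` appears twice above: OpenCV loads and displays images with channels in BGR order, while `Visualizer` expects RGB, so the channel axis is reversed on the way in and back again on the way out. A minimal NumPy sketch (standalone, no detectron2 required; the tiny 1x2 "image" is an illustrative stand-in) shows the effect:

```python
import numpy as np

# Tiny 1x2 BGR "image": one pure-blue pixel and one pure-red pixel.
bgr = np.array([[[255, 0, 0],
                 [0, 0, 255]]], dtype=np.uint8)

# Reversing the last (channel) axis converts BGR <-> RGB.
rgb = bgr[:, :, ::-1]

print(rgb[0, 0])  # the blue pixel reads [0, 0, 255] in RGB order
print(rgb[0, 1])  # the red pixel reads [255, 0, 0] in RGB order
```

Applying the same slice twice is a no-op, which is why the final `cv2_imshow` call converts the visualizer's RGB output back to BGR before display.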