{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SMOT + Re3 motion model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this notebook we will look at how the `Simple Multi-Object Tracking + Re3 Motion Model` algorithm implemented in `holmes` perform on the two currently available ground truth tracking sequences that we have at our disposal."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will start by installing and/or donwloading the rellevant Python packages used in this notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Upgrade pip and conda\n",
"!pip install pip --upgrade\n",
"!conda update -n base conda -y"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install `pythia`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Install Pythia\n",
"!pip install hudl_pythia==0.0.0.master.64"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Proceed to install holmes. Make sure you have correctly set up your local path to the holmes root directory."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"local_holmes_path = '/root/holmes'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Install holmes special dependencies via conda# Insta \n",
"!conda install -c conda-forge opencv==3.2.0 openblas==0.2.19 -y\n",
"\n",
"# Install holmes dependencies, depending on where you are running this you might not want to install tensorflow\n",
"!pip install -e $os.path.join(local_holmes_path, 'requirements.txt')\n",
"\n",
"# Install holmes itself\n",
"!pip install -e $local_holmes_path"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we will `import` all the relevant `packages`, `classes` and `functions` needed to run the entire notebook on a `single cell`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import ipywidgets as widgets\n",
"\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.image as mpimg\n",
"import matplotlib.patches as patches\n",
"from matplotlib import colors as mcolors\n",
"\n",
"import os\n",
"import io\n",
"import json\n",
"import gzip\n",
"import boto3\n",
"import shutil\n",
"import logging\n",
"import zipfile\n",
"\n",
"from PIL import Image\n",
"\n",
"import numpy as np\n",
"\n",
"from copy import deepcopy\n",
"\n",
"from elementary import Annotations, AnnotationsType, Frame, FrameType\n",
"\n",
"from hudl_terrarium import get_config\n",
"\n",
"from hudl_pythia.mot import MotEvaluator\n",
"from hudl_pythia.visualization.static import create_video\n",
"\n",
"from holmes.track import tracker_factory, OnlineTracker\n",
"from holmes.track.online.associate import SpatialAssociator\n",
"from holmes.track.online.tracklet import SimpleTracklet, TrackletBuilder\n",
"from holmes.track.online.tracklet.motion import Re3MotionModel\n",
"\n",
"logger = logging.getLogger()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook assumes that we have access to detection predictions for the 2 sequences we will test the algorithm on. Luckily, we recently ran the `beatrix` `ReInspect` detector on these sequences and stored the results on the following `s3 location`: \n",
"\n",
"- [s3://hudlrd-beatrix/datasets/sequence_watson](https://s3.console.aws.amazon.com/s3/buckets/hudlrd-beatrix/datasets/sequence_watson/?region=us-east-1&tab=overview)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download detections"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first thing we will need to do is to download the pre-saved `ReInspect` detections from `s3`. In order to do that, we can quickly define a function for `downloading` files from the previous `s3` location:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"_s3_bucket = 'hudlrd-beatrix'\n",
"_s3_key = 'datasets/sequence_watson/'\n",
"\n",
"def download_from_s3(local_cache, s3_file, overwrite=False):\n",
" local_file_path = os.path.join(local_cache, s3_file)\n",
" ext = local_file_path.rsplit('.', 1)[-1]\n",
" \n",
" if ext == 'zip' or ext == 'gz':\n",
" local_file_path_final = local_file_path.rsplit('.', 1)[0]\n",
" else:\n",
" local_file_path_final = local_file_path\n",
" \n",
" if os.path.isdir(local_file_path_final) or os.path.isfile(local_file_path_final) and not overwrite:\n",
" logger.info('File {name} already exists in the cache directory {cache} '\n",
" .format(name=s3_file, cache=local_cache))\n",
" return local_file_path_final\n",
" elif not os.path.isdir(local_cache):\n",
" logger.info('Creating cache directory at {}'.format(local_cache))\n",
" os.makedirs(local_cache)\n",
"\n",
" _s3_key2 = os.path.join(_s3_key, s3_file)\n",
" s3 = boto3.client('s3')\n",
"\n",
" logger.info('Downloading {} file...'.format(s3_file))\n",
" s3.download_file(_s3_bucket, _s3_key2, local_file_path)\n",
"\n",
" if ext == 'zip' or ext == 'gz':\n",
" if ext == 'zip':\n",
" with zipfile.ZipFile(local_file_path, 'r') as zip_archive:\n",
" zip_archive.extractall(\n",
" path=os.path.join(local_cache, s3_file.rsplit('.', 1)[0])\n",
" )\n",
" elif ext == 'gz':\n",
" with gzip.open(local_file_path, 'rb') as gz_archive:\n",
" extracted_file = open(os.path.join(local_cache, s3_file.rsplit('.', 1)[0]), 'wb')\n",
" shutil.copyfileobj(gz_archive, extracted_file)\n",
" \n",
" os.remove(local_file_path)\n",
" local_file_path = local_file_path.rsplit('.', 1)[0]\n",
" \n",
" return os.path.join(local_file_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Prior to downloading them we will find it convenient to define a `local data cache` directory where to store them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_data_cache = '/data1/antonis/data/smot_re3/'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let us download the `detections`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"detections_file_68726 = download_from_s3(local_data_cache, '68726_detections.json.gz', overwrite=True)\n",
"print(detections_file_68726)\n",
"\n",
"detections_file_68720 = download_from_s3(local_data_cache, '68720_detections.json.gz', overwrite=True)\n",
"print(detections_file_68720)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can check what exact files have been downloaded:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls $local_data_cache"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And, while we are at it, we will find it convenient to also download the `ground truth sequence` themselves, since we will need them for grading the algorithm (and also for getting a feel of the appropriate magnitude of one of the algorithms hyperparameters (although I recognise that this is not ideal we are very data sparse at the moment so this will have to do)):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"_s3_key = 'datasets/sequence_watson/'\n",
"\n",
"true_file_68726 = download_from_s3(local_data_cache, '68726_corrected_frames_withIDs.json.gz', overwrite=True)\n",
"print(true_file_68726)\n",
"\n",
"true_file_68720 = download_from_s3(local_data_cache, '68720_corrected_frames_withIDs.json.gz', overwrite=True)\n",
"print(true_file_68720)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let us check that they have been indeed downloaded:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls $local_data_cache"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we will define a general function for easily `loading` the previous detection and ground truth files as `elementary` `annotations` objects:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_annotations(file_name):\n",
" try:\n",
" annotations = Annotations.init_from_file(file_name)\n",
" except Exception as e:\n",
" logging.error(e)\n",
" logging.warning('File {} is not an annotations file. Trying to load '\n",
" 'it as a frame list file.'.format(file_name))\n",
"\n",
" with open(file_name, 'r') as f:\n",
" frames_dict = json.load(f)\n",
"\n",
" annotations = Annotations(annotations_type=AnnotationsType.ground_truth)\n",
" for frame_dict in frames_dict:\n",
" annotations.add_frame(Frame.init_from_dict(frame_dict))\n",
"\n",
" return annotations"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Track using SMOT + Re3 motion model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to track the sequences we will start by loading the detection predictions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"detections_seq = []\n",
"\n",
"for frames_file in [detections_file_68726, detections_file_68720]: \n",
" detections_seq.append(get_annotations(frames_file))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we will proceed to define a simple `Re3 motion model`. This motion model will be simply wrapped around the `Re3Tracker` class that whose mission wil be to `predict` the position of the players in the next frame."
]
},
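{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the sketch below illustrates the general shape of such a wrapper. It is *not* the `holmes` implementation (the real `Re3MotionModel` was imported above); the `Re3Tracker` interface shown here is an assumption based on the reference `Re3` implementation, where `track` takes a unique id, the current image and, on the first call only, an initial box:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only -- the real implementation lives in\n",
"# holmes.track.online.tracklet.motion.Re3MotionModel. The assumed\n",
"# Re3Tracker.track(unique_id, image, starting_box) signature is taken\n",
"# from the reference Re3 implementation and may differ in holmes.\n",
"class SketchRe3MotionModel:\n",
"    def __init__(self, re3_tracker):\n",
"        self._tracker = re3_tracker\n",
"        self._initialized = False\n",
"\n",
"    def predict(self, tracklet_id, image, current_box):\n",
"        # On the first call we seed Re3 with the detection box; afterwards\n",
"        # Re3 regresses the box position in the new frame on its own.\n",
"        if not self._initialized:\n",
"            self._initialized = True\n",
"            return self._tracker.track(tracklet_id, image, current_box)\n",
"        return self._tracker.track(tracklet_id, image)"
]
},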
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, we will create a `terrarium configuration object` defining the behaviour of the `simple tracker algorithm` in `holmes`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"online_tracker_config = get_config(OnlineTracker)(\n",
" tracklet_builder=get_config(TrackletBuilder)(\n",
" tracklet=get_config(SimpleTracklet)(\n",
" motion_model=get_config(Re3MotionModel)(\n",
" reinitialize=False\n",
" ),\n",
" lost_threshold=3\n",
" ),\n",
" ),\n",
" associator=get_config(SpatialAssociator)(\n",
" threshold=40\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This configuration specifies important aspects of tracking algorithm:\n",
"\n",
"- The `motion model` used by the tracking algorithm is the `naive` one explained on the previous section (referred here as the `dummy motion model`).\n",
"- Tracklets only `survive` for a `single frame` `without` being `associated` to a detection prediction.\n",
"- The `center distance association` `threshold` is the one specified on the previous section."
]
},
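{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the association step concrete, here is a minimal, hypothetical sketch of greedy center-distance association. This is *not* the `SpatialAssociator` implementation in `holmes`; the function below is purely illustrative:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical sketch of greedy center-distance association (the real logic\n",
"# lives in holmes.track.online.associate.SpatialAssociator). Candidate pairs\n",
"# are matched closest-first; a pair is skipped once either side is already\n",
"# used or its distance exceeds the threshold.\n",
"def greedy_center_association(predicted_boxes, detected_boxes, threshold=40):\n",
"    pairs = []\n",
"    for i, p in enumerate(predicted_boxes):\n",
"        for j, d in enumerate(detected_boxes):\n",
"            dist = np.hypot(p.center[0] - d.center[0], p.center[1] - d.center[1])\n",
"            if dist <= threshold:\n",
"                pairs.append((dist, i, j))\n",
"\n",
"    matches, used_p, used_d = [], set(), set()\n",
"    for dist, i, j in sorted(pairs):\n",
"        if i not in used_p and j not in used_d:\n",
"            matches.append((i, j))\n",
"            used_p.add(i)\n",
"            used_d.add(j)\n",
"    return matches"
]
},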
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Remember that we can easily `print` the configuration object to double check that it has been properly set up and review all its different parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(online_tracker_config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to perform tracking with the `Re3MotionModel` we will need access to the original `image data`. To gain easy access to it, at this point we will load the ground truth tracking data files which contain `s3` links to the image data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"true_seq = []\n",
"\n",
"for true_file in [true_file_68726, true_file_68720]: \n",
" true_seq.append(get_annotations(true_file))\n",
" \n",
" \n",
"factor = 1.5\n",
"\n",
"for frame in true_seq[1]:\n",
" for bb in frame:\n",
" bb._x = bb.x * factor\n",
" bb._y = bb.y * factor\n",
" bb._width = bb.width * factor\n",
" bb._height = bb.height * factor"
]
},
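{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned earlier, the ground truth also lets us get a feel for the magnitude of the association threshold. The cell below is a quick, illustrative sanity check; it assumes ground truth bounding boxes expose the same `id` and `center` attributes as the predicted boxes used later in this notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sanity check (assumes ground truth boxes expose `id` and\n",
"# `center`): the typical frame-to-frame displacement of a player center\n",
"# should sit comfortably below the association threshold of 40 pixels.\n",
"frames = list(true_seq[0])\n",
"displacements = []\n",
"\n",
"for previous_frame, current_frame in zip(frames, frames[1:]):\n",
"    previous_centers = {bb.id: bb.center for bb in previous_frame}\n",
"    for bb in current_frame:\n",
"        if bb.id in previous_centers:\n",
"            dx = bb.center[0] - previous_centers[bb.id][0]\n",
"            dy = bb.center[1] - previous_centers[bb.id][1]\n",
"            displacements.append(np.hypot(dx, dy))\n",
"\n",
"print('Median frame-to-frame displacement: {:.1f} pixels'.format(np.median(displacements)))"
]
},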
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will find it useful to define a simple function to retrieve the image data from `s3`. Here it is:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_image_from_s3(s3_image_path):\n",
" s3_image_path = s3_image_path.split('//')[-1]\n",
" bucket, key = s3_image_path.split('/', 1)\n",
"\n",
" s3 = boto3.resource('s3')\n",
" bucket_object = s3.Bucket(bucket)\n",
" image_object = bucket_object.Object(key)\n",
" image = mpimg.imread(io.BytesIO(image_object.get()['Body'].read()), 'jpg')\n",
" if image.shape != (1080, 1920, 3):\n",
" image = Image.fromarray(image)\n",
" image = image.resize((1920, 1080), Image.BICUBIC)\n",
" image = np.asarray(image)\n",
" \n",
" return image"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we can proceed to retrieve the images in question. Note that this might take a little while."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"images_seq = []\n",
"\n",
"for annotations in true_seq:\n",
" images = []\n",
" for frame in annotations:\n",
" image = get_image_from_s3(frame.image_path)\n",
" images.append(image)\n",
" images_seq.append(deepcopy(images))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point we can proceed to track both sequences:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import time\n",
"\n",
"trajectories_seq = []\n",
"\n",
"for detections, images in zip(detections_seq, images_seq):\n",
" \n",
" start_time = time.time()\n",
" \n",
" trajectories = []\n",
" online_tracker = tracker_factory(online_tracker_config)\n",
" for i, (detection, image) in enumerate(zip(detections, images)):\n",
" if i % 100 == 0:\n",
" logger.info('Tracked {} of {}'.format(i, len(images)))\n",
" \n",
" if i == 0:\n",
" previous_image = image\n",
" else:\n",
" previous_image = images[i-1]\n",
" trajectory = online_tracker.track(detection, image, previous_image, i)\n",
" trajectory.image_path = detection.image_path\n",
" trajectories.append(deepcopy(trajectory))\n",
" trajectories_seq.append(trajectories)\n",
" \n",
" logger.info('Elapsed = {}'.format(time.time() - start_time))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Erase trajectories that are shorter than 10 frames, since they are very unlikely to be useful."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ts = []\n",
"\n",
"for i, annotations in enumerate(trajectories_seq):\n",
" t = {}\n",
" for frame in annotations:\n",
" for box in frame: \n",
" key = box.id\n",
" if key not in t:\n",
" t[key] = [box]\n",
" else:\n",
" t[key].append(box) \n",
" ts.append(t)\n",
" \n",
" \n",
"ts2 = [] \n",
"\n",
"for t in ts:\n",
" t2 = {}\n",
" for k, v in t.items():\n",
" if len(v) >= 15:\n",
" t2[k] = len(v)\n",
" ts2.append(t2)\n",
" \n",
" \n",
"for annotations, t in zip(trajectories_seq, ts2):\n",
" for frame in annotations:\n",
" for i, box in enumerate(frame):\n",
" if box.id not in t:\n",
" frame.bounding_boxes = [bb for bb in frame.bounding_boxes if bb.id != box.id]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Visualize tracking results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have tracked the sequences, the next step will be to visualize the results to double check that no obvious errors have occured and to get an initial feeling of the performance of the algorithm. To do that, we will find it convenient to define a couple of `auxiliary` visualization functions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"colors = list(dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS).values())\n",
"\n",
"\n",
"def display_image(image, prediction, unique, figure_size=(16, 9)):\n",
" fig, ax = plt.subplots(1, figsize=figure_size)\n",
" ax.set_axis_off()\n",
" ax.imshow(image)\n",
" \n",
" for box in prediction:\n",
" i = unique.index(box.id)\n",
"\n",
" rect = patches.Rectangle(\n",
" (box.x, box.y), \n",
" box.width, \n",
" box.height, \n",
" color=colors[i%len(colors)], \n",
" fill=False,\n",
" linewidth=2,\n",
" alpha=1.0\n",
" )\n",
" ax.add_patch(rect)\n",
" \n",
" circle = patches.Circle(\n",
" (box.center[0], box.center[1]), \n",
" radius=40,\n",
" color='orange', \n",
" fill=True, \n",
" alpha=0.2\n",
" )\n",
" ax.add_patch(circle)\n",
" \n",
" ax.annotate(\n",
" box.id[:4], \n",
" (box.x+box.width, box.y+box.height), \n",
" color='white', \n",
" weight='bold', \n",
" fontsize=15, \n",
" ha='center', \n",
" va='center'\n",
" )\n",
" \n",
" plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def notebook_view_sequence(images_seq, trajectories_seq):\n",
" unique_ids = []\n",
" for trajectories in trajectories_seq:\n",
" unique = []\n",
" for frame in trajectories:\n",
" for box in frame:\n",
" unique.append(box.id)\n",
" unique_ids.append(list(set(unique)))\n",
" \n",
" def _view_image(sequence_index, index):\n",
" img = images_seq[sequence_index][index]\n",
" pred = trajectories_seq[sequence_index][index]\n",
" unique = unique_ids[sequence_index]\n",
" display_image(img, pred, unique)\n",
" \n",
" sequence_slider = widgets.IntSlider(\n",
" value=0,\n",
" min=0,\n",
" max=len(images_seq) - 1,\n",
" step=1,\n",
" description='sequence: \\t',\n",
" disabled=False,\n",
" continuous_update=False,\n",
" orientation='horizontal',\n",
" readout=True,\n",
" readout_format='d',\n",
" slider_color='white'\n",
" )\n",
"\n",
" slider = widgets.IntSlider(\n",
" value=0,\n",
" min=0,\n",
" max=len(images_seq[0]) - 1,\n",
" step=1,\n",
" description='image: \\t',\n",
" disabled=False,\n",
" continuous_update=False,\n",
" orientation='horizontal',\n",
" readout=True,\n",
" readout_format='d',\n",
" slider_color='white'\n",
" )\n",
"\n",
" def update_sequence_range(*args):\n",
" i = sequence_slider.value\n",
" slider.max = len(images_seq[i]) - 1\n",
" slider.value = 0\n",
"\n",
" sequence_slider.observe(update_sequence_range)\n",
"\n",
" widgets.interact(_view_image,\n",
" sequence_index=sequence_slider,\n",
" index=slider)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let us visualize the results:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"notebook_view_sequence(images_seq, trajectories_seq)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `bounding boxes` are `colored by trajectory` and that part of the `unique trajectory id` is also plotted at the right bottom of the bounding boxes. Finally, the ligth `orange circles` highlight the `valid association region` for each bounding box."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By visual inspections it seems that this naive algorithm performs `fairly well` on `open play` situations (where the `detector` results are `very accurate`) while it `struggles` on `high density moments` where the `false positives` and `false negative` `break trajectories` and cause `identity switches`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Grade trajectories"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After assessing that the previous results look reasonable, we can proceed to grade them quantitatively. We will do that by using the `MotEvaluator` in `Pythia"
]
},
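{
"cell_type": "markdown",
"metadata": {},
"source": [
"Assuming the `MotEvaluator` reports the standard `CLEAR MOT` metrics, the headline number to look at is `MOTA`:\n",
"\n",
"$$MOTA = 1 - \\frac{\\sum_t \\left( FN_t + FP_t + IDSW_t \\right)}{\\sum_t GT_t}$$\n",
"\n",
"where $FN_t$, $FP_t$ and $IDSW_t$ are the `false negatives`, `false positives` and `identity switches` at frame $t$, and $GT_t$ is the number of ground truth objects at frame $t$."
]
},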
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `MotEvaluator` requires us to especify an `intersection over union` `threshold` to determine if a bounding box is accurate enough (with respect to the ground truth bounding box) to be considered a `true positive`. To start with, we will set this value at `0.5`. Later, on another notebook, we can investigate how different values for this threshold affect the quantitative results."
]
},
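{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the threshold concrete, here is a minimal, illustrative `intersection over union` computation for two axis-aligned boxes in `(x, y, width, height)` format (the evaluator has its own internal implementation):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative IoU for two boxes given as (x, y, width, height) tuples;\n",
"# the MotEvaluator has its own internal implementation.\n",
"def iou(box_a, box_b):\n",
"    xa, ya, wa, ha = box_a\n",
"    xb, yb, wb, hb = box_b\n",
"\n",
"    # Coordinates of the intersection rectangle.\n",
"    ix1 = max(xa, xb)\n",
"    iy1 = max(ya, yb)\n",
"    ix2 = min(xa + wa, xb + wb)\n",
"    iy2 = min(ya + ha, yb + hb)\n",
"\n",
"    intersection = max(0, ix2 - ix1) * max(0, iy2 - iy1)\n",
"    union = wa * ha + wb * hb - intersection\n",
"    return intersection / union if union > 0 else 0.0\n",
"\n",
"\n",
"# A box shifted by half its width against itself gives IoU = 1/3.\n",
"print(iou((0, 0, 100, 200), (50, 0, 100, 200)))"
]
},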
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iou_threshold = 0.5\n",
"\n",
"mot_evaluator = MotEvaluator(iou_threshold=iou_threshold)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the evaluator object has been created we can use it to `grade` the results:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"graded_seq = []\n",
"for true_annotations, predicted_annotations in zip(true_seq, trajectories_seq):\n",
" graded_annotations = mot_evaluator(true_annotations, predicted_annotations)\n",
" graded_seq.append(graded_annotations)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now print the results for the first sequence:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(json.dumps(graded_seq[0].metadata, indent=2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And the results for the second one:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(json.dumps(graded_seq[1].metadata, indent=2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now save these grading annotations to files and load them using the interactive visualization tool in `Python` to examine them further:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"graded_seq[0].to_file(\n",
" os.path.join(\n",
" local_data_cache, \n",
" '68726_' + str(iou_threshold).replace('.', '') + '_graded_smot_re3.json'\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"annotations = deepcopy(graded_seq[1])\n",
"\n",
"for frame in annotations:\n",
" for bb in frame:\n",
" bb._x = bb.x / factor\n",
" bb._y = bb.y / factor\n",
" bb._width = bb.width / factor\n",
" bb._height = bb.height / factor\n",
" \n",
"annotations.to_file(\n",
" os.path.join(\n",
" local_data_cache, \n",
" '68720_' + str(iou_threshold).replace('.', '') + '_graded_smot_re3.json'\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create video results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally we can also use the `create_video` function in `Pythia` to create a videos of the tracking results. Bare in mind this step might take some time."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the first sequence:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"annotations1 = Annotations.init_from_file(\n",
" os.path.join(\n",
" local_data_cache, \n",
" '68726_' + str(iou_threshold).replace('.', '') + '_graded_smot_re3.json'\n",
" )\n",
")\n",
"\n",
"create_video(\n",
" os.path.join(\n",
" local_data_cache, \n",
" '68726_' + str(iou_threshold).replace('.', '') + '_threshold_smot_re3'\n",
" ), \n",
" annotations1\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And for the second:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"annotations2 = Annotations.init_from_file(\n",
" os.path.join(\n",
" local_data_cache, \n",
" '68720_' + str(iou_threshold).replace('.', '') + '_graded_smot_re3.json'\n",
" )\n",
")\n",
"\n",
"create_video(\n",
" os.path.join(\n",
" local_data_cache,\n",
" '68720_' + str(iou_threshold).replace('.', '') + '_threshold_smot_re3'\n",
" ), \n",
" annotations2\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}