@qfgaohao
Last active November 27, 2019 08:06
Generate SSD Prior Boxes.
import collections
import itertools

import numpy as np

SSDBoxSizes = collections.namedtuple('SSDBoxSizes', ['min', 'max'])

Spec = collections.namedtuple('Spec', ['feature_map_size', 'shrinkage', 'box_sizes', 'aspect_ratios'])

# the original SSD300 specs
specs = [
    Spec(38, 8, SSDBoxSizes(30, 60), [2]),
    Spec(19, 16, SSDBoxSizes(60, 111), [2, 3]),
    Spec(10, 32, SSDBoxSizes(111, 162), [2, 3]),
    Spec(5, 64, SSDBoxSizes(162, 213), [2, 3]),
    Spec(3, 100, SSDBoxSizes(213, 264), [2]),
    Spec(1, 300, SSDBoxSizes(264, 315), [2])
]
def generate_ssd_priors(specs, image_size=300, clip=True):
    """Generate SSD Prior Boxes.

    Args:
        specs: Specs about the shapes and sizes of the prior boxes, e.g.
            specs = [
                Spec(38, 8, SSDBoxSizes(30, 60), [2]),
                Spec(19, 16, SSDBoxSizes(60, 111), [2, 3]),
                Spec(10, 32, SSDBoxSizes(111, 162), [2, 3]),
                Spec(5, 64, SSDBoxSizes(162, 213), [2, 3]),
                Spec(3, 100, SSDBoxSizes(213, 264), [2]),
                Spec(1, 300, SSDBoxSizes(264, 315), [2])
            ]
        image_size: the size of the input image.
        clip: whether to clip the prior boxes into the range [0.0, 1.0].
    Returns:
        priors: a numpy array of priors [[center_x, center_y, h, w]]. All the
            values are relative to the image size (300x300).
    """
    boxes = []
    for spec in specs:
        # number of cells along one axis of this feature map
        scale = image_size / spec.shrinkage
        for j, i in itertools.product(range(spec.feature_map_size), repeat=2):
            x_center = (i + 0.5) / scale
            y_center = (j + 0.5) / scale

            # small sized square box
            size = spec.box_sizes.min
            h = w = size / image_size
            boxes.append([x_center, y_center, h, w])

            # big sized square box: the geometric mean of the min and max sizes
            size = np.sqrt(spec.box_sizes.max * spec.box_sizes.min)
            h = w = size / image_size
            boxes.append([x_center, y_center, h, w])

            # change the h/w ratio of the small sized box.
            # Following the original SSD implementation, aspect ratios are
            # only applied to the smallest box size. It looks weird.
            size = spec.box_sizes.min
            h = w = size / image_size
            for ratio in spec.aspect_ratios:
                ratio = np.sqrt(ratio)  # the bare name sqrt was undefined here
                boxes.append([x_center, y_center, h * ratio, w / ratio])
                boxes.append([x_center, y_center, h / ratio, w * ratio])

    boxes = np.array(boxes)
    if clip:
        boxes = np.clip(boxes, 0.0, 1.0)
    return boxes
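
As a sanity check, the number of priors this produces can be computed directly from the specs: each feature-map cell gets two square boxes plus two boxes per aspect ratio, which for the SSD300 specs above yields the familiar 8732 priors. A minimal sketch of that count (using only the feature-map sizes and aspect-ratio lists from the specs):

```python
# (feature_map_size, aspect_ratios) pairs taken from the specs above
layer_specs = [(38, [2]), (19, [2, 3]), (10, [2, 3]), (5, [2, 3]), (3, [2]), (1, [2])]

# per cell: 1 small square + 1 big square + 2 boxes per aspect ratio
num_priors = sum(f * f * (2 + 2 * len(ratios)) for f, ratios in layer_specs)
print(num_priors)  # 8732
```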
chi0tzp commented Jul 16, 2018

Could you please explain what SSDBoxSizes's min and max fields are? In the PyTorch implementation these are called min_sizes and max_sizes, respectively.

What I don't understand is how these numbers came up for the various datasets. It makes sense for them to differ across datasets, but how have they been computed? I'm trying to train SSD on a new dataset which also includes many "small" objects (with sizes at, say, 20% of a person as he/she appears in the image). How should I set them? And why is the maximum for a specific layer the minimum of the next layer?

Many thanks for your time.
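
(For what it's worth: in the original Caffe SSD training scripts, these sizes come from a linear scale rule, with the smallest scale at 20% and the largest at 90% of the image size, evenly spaced across the feature maps, and conv4_3 given a special 10% scale. Each layer's max is the next layer's min so that the "big" square box, sqrt(min * max), interpolates between adjacent scales. A sketch of that computation follows; this is my reading of the Caffe script, not part of this gist, so treat the variable names as illustrative:)

```python
import math

image_size = 300
min_ratio, max_ratio = 20, 90   # scales as percent of the image size
num_source_layers = 6           # feature maps; conv4_3 handled separately
step = int(math.floor((max_ratio - min_ratio) / (num_source_layers - 2)))  # 17

min_sizes, max_sizes = [], []
for ratio in range(min_ratio, max_ratio + 1, step):  # 20, 37, 54, 71, 88
    min_sizes.append(image_size * ratio / 100.0)
    max_sizes.append(image_size * (ratio + step) / 100.0)

# conv4_3 gets a special smaller scale of 10%
min_sizes = [image_size * 10 / 100.0] + min_sizes
max_sizes = [image_size * 20 / 100.0] + max_sizes

print(min_sizes)  # [30.0, 60.0, 111.0, 162.0, 213.0, 264.0]
print(max_sizes)  # [60.0, 111.0, 162.0, 213.0, 264.0, 315.0]
```

These match the SSDBoxSizes values in the specs above, which is also why each layer's max equals the next layer's min.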

@belorenz
Thanks for this code! I was able to find a little bug.
size and image_size are both ints here, so
h = w = size / image_size is an integer division under Python 2.
As a result h and w are always 0, which does not seem correct to me.

It can be fixed with a cast to float, e.g. size = float(spec.box_sizes.min)
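
(A minimal illustration of the pitfall: under Python 2, / between two ints is floor division, while Python 3 always performs true division. The float cast suggested above behaves correctly under both:)

```python
size, image_size = 30, 300

# Python 2: 30 / 300 == 0 (floor division); Python 3: 30 / 300 == 0.1
h = float(size) / image_size  # explicit cast: 0.1 under both Python 2 and 3
print(h)  # 0.1
```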
