The model spec for the baseline CNN model on the WIDER dataset

This gist holds the model spec for the baseline CNN model on the WIDER dataset.

The CNN structure is AlexNet. Network parameters are initialized using a model pretrained on ImageNet.

The weights can be downloaded at cuhk_wider_baseline_cnn.caffemodel.

Please refer to the WIDER Dataset and the CVPR'15 paper for more details.
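
The ImageNet initialization mentioned above follows the standard Caffe fine-tuning recipe: weights from a pretrained AlexNet are copied into the new network by matching layer names, while renamed layers (here fc8_event) start from random initialization. Below is a minimal pycaffe sketch of that step, not part of this gist; the file names wider_solver.prototxt and bvlc_alexnet.caffemodel are placeholders.

# Sketch (assumption, not from the gist): initialize the WIDER training net
# from an ImageNet-pretrained AlexNet with pycaffe.
import caffe

caffe.set_mode_gpu()
solver = caffe.SGDSolver('wider_solver.prototxt')   # placeholder solver file

# Layers whose names match the pretrained model (conv1 ... fc7) are copied over;
# renamed layers such as fc8_event keep their random initialization.
solver.net.copy_from('bvlc_alexnet.caffemodel')      # placeholder weights file
solver.solve()

The same effect can be obtained from the command line with caffe train by passing the pretrained model via the -weights flag; in both cases only name-matched layers are initialized from the pretrained weights.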

# Note: this model has the same structure as AlexNet
name: "WIDER_Baseline_CNN"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 224
input_dim: 224
layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1"
convolution_param {num_output: 96 kernel_size: 11 stride: 4}}
layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1"}
layer { name: "pool1" type: "Pooling" bottom: "conv1" top: "pool1"
pooling_param {pool: MAX kernel_size: 3 stride: 2}}
layer { name: "norm1" type: "LRN" bottom: "pool1" top: "norm1"
lrn_param {local_size: 5 alpha: 0.0001 beta: 0.75}}
layer { name: "conv2" type: "Convolution" bottom: "norm1" top: "conv2"
convolution_param {num_output: 256 pad: 2 kernel_size: 5 group: 2}}
layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2"}
layer { name: "pool2" type: "Pooling" bottom: "conv2" top: "pool2"
pooling_param {pool: MAX kernel_size: 3 stride: 2}}
layer { name: "norm2" type: "LRN" bottom: "pool2" top: "norm2"
lrn_param {local_size: 5 alpha: 0.0001 beta: 0.75}}
layer { name: "conv3" type: "Convolution" bottom: "norm2" top: "conv3"
convolution_param {num_output: 384 pad: 1 kernel_size: 3}}
layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3"}
layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4"
convolution_param {num_output: 384 pad: 1 kernel_size: 3 group: 2}}
layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4"}
layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5"
convolution_param {num_output: 256 pad: 1 kernel_size: 3 group: 2}}
layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5"}
layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5"
pooling_param {pool: MAX kernel_size: 3 stride: 2 }}
layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6"
inner_product_param {num_output: 4096}}
layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6"}
layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6"
dropout_param {dropout_ratio: 0.5}}
layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7"
inner_product_param {num_output: 4096}}
layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7"}
layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7"
dropout_param {dropout_ratio: 0.5}}
layer { name: "fc8_event" type: "InnerProduct" bottom: "fc7" top: "fc8"
inner_product_param {num_output: 61}}
layer { name: "prob" type: "Softmax" bottom: "fc8" top: "prob_raw"}