@mavenlin
Last active February 5, 2023 13:02
Network in Network Imagenet Model

Info

name: Network in Network Imagenet Model

caffemodel: nin_imagenet.caffemodel

caffemodel_url: https://www.dropbox.com/s/cphemjekve3d80n/nin_imagenet.caffemodel?dl=1

license: BSD

caffe_commit: pull request yet to be merged

gist_id: d802a5849de39225bcc6

Descriptions

This model is a 4-layer Network in Network model trained on the ImageNet dataset.

Because the fully connected layers are replaced by a global average pooling layer, the model has far fewer parameters: the snapshot is about 29 MB, roughly one eighth the size of AlexNet's ~230 MB.
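
For a sense of where the savings come from, here is a small Python sketch (not part of the gist; the layer shapes are read off the train_val.prototxt reproduced below) that tallies the learnable parameters of the convolution and 1x1 cccp layers:

# Each entry is (in_channels, out_channels, kernel_size), taken from the prototxt below.
layers = [
    (3,    96,   11),  # conv1
    (96,   96,    1),  # cccp1
    (96,   96,    1),  # cccp2
    (96,   256,   5),  # conv2
    (256,  256,   1),  # cccp3
    (256,  256,   1),  # cccp4
    (256,  384,   3),  # conv3
    (384,  384,   1),  # cccp5
    (384,  384,   1),  # cccp6
    (384,  1024,  3),  # conv4-1024
    (1024, 1024,  1),  # cccp7-1024
    (1024, 1000,  1),  # cccp8-1024
]
n_params = sum(c_out * c_in * k * k + c_out for c_in, c_out, k in layers)
print(n_params)                    # about 7.6 million weights and biases
print(n_params * 4 / 2.0**20)      # about 29 MB stored as 32-bit floats

All of the parameters live in these convolutions; the global average pooling layer at the end has none, which is what keeps the snapshot small.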

The top-1 accuracy of this model on the validation set is 59.36%, slightly better than AlexNet. (Averaging the predictions over 10 crops, (4 corners + 1 center) * 2 mirrors, should give somewhat higher accuracy.)
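
The 10-crop average can be reproduced with pycaffe roughly as sketched below. The deploy.prototxt, the image file name, and the output blob name pool4 (taken from the train_val definition further down) are assumptions, not part of the gist, and mean subtraction is omitted for brevity:

import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'nin_imagenet.caffemodel', caffe.TEST)

img = caffe.io.load_image('val_image.jpg')           # H x W x 3, RGB in [0, 1]
img = caffe.io.resize_image(img, (256, 256))
crops = caffe.io.oversample([img], (224, 224))       # 10 crops: 4 corners + center, each mirrored

blobs = crops.transpose(0, 3, 1, 2)[:, ::-1] * 255   # NHWC -> NCHW, RGB -> BGR, scale to [0, 255]
net.blobs['data'].reshape(*blobs.shape)
net.blobs['data'].data[...] = blobs
scores = net.forward()['pool4'].squeeze()            # 10 x 1000 class scores
pred = scores.mean(axis=0).argmax()                  # average over crops, then take the best class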

Training time is also much shorter than AlexNet's because the model converges faster; training takes 4-5 days on a GTX Titan.

License

BSD

net: "models/nin_imagenet/train_val.prototxt"
test_iter: 1000
test_interval: 1000
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 200000
display: 20
max_iter: 450000
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "models/nin_imagenet/nin_imagenet_train"
solver_mode: GPU
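
The block above is the solver configuration; the network definition it points to (train_val.prototxt) follows. As a rough sketch, training can be launched through pycaffe as below, assuming the solver block is saved as models/nin_imagenet/solver.prototxt (that path is an assumption; only the net path appears in the gist):

import caffe

caffe.set_device(0)
caffe.set_mode_gpu()           # matches solver_mode: GPU

# Hypothetical solver path; equivalently, from the shell:
#   caffe train --solver=models/nin_imagenet/solver.prototxt
solver = caffe.SGDSolver('models/nin_imagenet/solver.prototxt')
solver.solve()                 # runs the full 450000 iterations
# solver.step(1000)            # or advance manually, e.g. 1000 iterations at a time
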
name: "nin_imagenet"
layers {
  top: "data"
  top: "label"
  name: "data"
  type: DATA
  data_param {
    source: "/home/linmin/IMAGENET-LMDB/imagenet-train-lmdb"
    backend: LMDB
    batch_size: 64
  }
  transform_param {
    crop_size: 224
    mirror: true
    mean_file: "/home/linmin/IMAGENET-LMDB/imagenet-train-mean"
  }
  include: { phase: TRAIN }
}
layers {
  top: "data"
  top: "label"
  name: "data"
  type: DATA
  data_param {
    source: "/home/linmin/IMAGENET-LMDB/imagenet-val-lmdb"
    backend: LMDB
    batch_size: 89
  }
  transform_param {
    crop_size: 224
    mirror: false
    mean_file: "/home/linmin/IMAGENET-LMDB/imagenet-train-mean"
  }
  include: { phase: TEST }
}
layers {
  bottom: "data"
  top: "conv1"
  name: "conv1"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "conv1"
  top: "conv1"
  name: "relu0"
  type: RELU
}
layers {
  bottom: "conv1"
  top: "cccp1"
  name: "cccp1"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 96
    kernel_size: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.05
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "cccp1"
  top: "cccp1"
  name: "relu1"
  type: RELU
}
layers {
  bottom: "cccp1"
  top: "cccp2"
  name: "cccp2"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 96
    kernel_size: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.05
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "cccp2"
  top: "cccp2"
  name: "relu2"
  type: RELU
}
layers {
  bottom: "cccp2"
  top: "pool0"
  name: "pool0"
  type: POOLING
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  bottom: "pool0"
  top: "conv2"
  name: "conv2"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.05
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "conv2"
  top: "conv2"
  name: "relu3"
  type: RELU
}
layers {
  bottom: "conv2"
  top: "cccp3"
  name: "cccp3"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 256
    kernel_size: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.05
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "cccp3"
  top: "cccp3"
  name: "relu5"
  type: RELU
}
layers {
  bottom: "cccp3"
  top: "cccp4"
  name: "cccp4"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 256
    kernel_size: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.05
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "cccp4"
  top: "cccp4"
  name: "relu6"
  type: RELU
}
layers {
  bottom: "cccp4"
  top: "pool2"
  name: "pool2"
  type: POOLING
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  bottom: "pool2"
  top: "conv3"
  name: "conv3"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "conv3"
  top: "conv3"
  name: "relu7"
  type: RELU
}
layers {
  bottom: "conv3"
  top: "cccp5"
  name: "cccp5"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 384
    kernel_size: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.05
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "cccp5"
  top: "cccp5"
  name: "relu8"
  type: RELU
}
layers {
  bottom: "cccp5"
  top: "cccp6"
  name: "cccp6"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 384
    kernel_size: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.05
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "cccp6"
  top: "cccp6"
  name: "relu9"
  type: RELU
}
layers {
  bottom: "cccp6"
  top: "pool3"
  name: "pool3"
  type: POOLING
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  bottom: "pool3"
  top: "pool3"
  name: "drop"
  type: DROPOUT
  dropout_param {
    dropout_ratio: 0.5
  }
}
layers {
  bottom: "pool3"
  top: "conv4"
  name: "conv4-1024"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 1024
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.05
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "conv4"
  top: "conv4"
  name: "relu10"
  type: RELU
}
layers {
  bottom: "conv4"
  top: "cccp7"
  name: "cccp7-1024"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 1024
    kernel_size: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.05
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "cccp7"
  top: "cccp7"
  name: "relu11"
  type: RELU
}
layers {
  bottom: "cccp7"
  top: "cccp8"
  name: "cccp8-1024"
  type: CONVOLUTION
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
    num_output: 1000
    kernel_size: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      mean: 0
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layers {
  bottom: "cccp8"
  top: "cccp8"
  name: "relu12"
  type: RELU
}
layers {
  bottom: "cccp8"
  top: "pool4"
  name: "pool4"
  type: POOLING
  pooling_param {
    pool: AVE
    kernel_size: 6
    stride: 1
  }
}
layers {
  name: "accuracy"
  type: ACCURACY
  bottom: "pool4"
  bottom: "label"
  top: "accuracy"
  include: { phase: TEST }
}
layers {
  bottom: "pool4"
  bottom: "label"
  name: "loss"
  type: SOFTMAX_LOSS
  include: { phase: TRAIN }
}
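
One detail worth spelling out (it also comes up in a comment below): with a 224x224 crop, the feature map entering pool4 is 6x6, so the AVE pooling with kernel_size: 6 is effectively global average pooling. A quick sketch (not part of the gist) to verify the spatial sizes; the 1x1 cccp layers do not change them and are omitted:

import math

def conv(size, kernel, stride=1, pad=0):
    return (size + 2 * pad - kernel) // stride + 1

def pool(size, kernel, stride):
    return int(math.ceil((size - kernel) / float(stride))) + 1   # Caffe pooling rounds up

s = 224
s = conv(s, 11, 4)       # conv1       -> 54
s = pool(s, 3, 2)        # pool0       -> 27
s = conv(s, 5, 1, 2)     # conv2       -> 27
s = pool(s, 3, 2)        # pool2       -> 13
s = conv(s, 3, 1, 1)     # conv3       -> 13
s = pool(s, 3, 2)        # pool3       -> 6
s = conv(s, 3, 1, 1)     # conv4-1024  -> 6
print(s)                 # 6: pool4's 6x6 average covers the entire map
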
@krishkoushik

Hi, when I try to load the model in Lua using

model = loadcaffe.load('deploy.prototxt', 'nin_imagenet.caffemodel', 'ccn2')

I get the following error

Successfully loaded nin_imagenet.caffemodel
MODULE data UNDEFINED
warning: module 'data [type 5]' not found
.../torch/install/share/lua/5.1/ccn2/SpatialConvolution.lua:16: Assertion failed: [math.fmod(nOutputPlane, 16) == 0]. Number of output planes has to be a multiple of 16.
stack traceback:
[C]: in function 'error'
.../torch/install/share/lua/5.1/ccn2/SpatialConvolution.lua:16: in function '__init'
/home/krishnan/torch/install/share/lua/5.1/torch/init.lua:54: in function </home/krishnan/torch/install/share/lua/5.1/torch/init.lua:50>
[C]: in function 'SpatialConvolution'
deploy.prototxt.lua:31: in main chunk
[C]: in function 'dofile'
...hnan/torch/install/share/lua/5.1/loadcaffe/loadcaffe.lua:24: in function 'load'
[string "model = loadcaffe.load('deploy.prototxt', 'ni..."]:1: in main chunk
[C]: at 0x7f13f591ce10

I tried changing the last layer's output to 1024 instead of 1000. Still the deploy.prototxt.lua file generated is the same - it has 1000 and not 1024. I can't quite understand what's happening here. Can anyone please help me?

Thanks

@Seinzhu commented Dec 29, 2015

@taoari
Hi, I also evaluated this on ILSVRC2012 with the generated data sources ilsvrc2012_train_lmdb and ilsvrc2012_val_lmdb, but I can only get 21.369% (224 version). Any suggestions would be appreciated :) Thanks!

@bhargavaurala

Hi @mavenlin. I am using the NiN architecture to train on ImageNet 2012. After about 50k iterations, the validation accuracy is around 0.1%, which corresponds to random chance. I am using the same structure and initialization as you. Can you please let me know when (at which iteration number) the validation accuracy starts to increase? This will help me decide whether the network is learning anything useful and whether I should restart with different hyperparameters.

Thanks.

@ProGamerGov

Has anyone else trained any other Network In Network (NIN) models? Or is this the only one?

@mrgloom commented Oct 15, 2016

layers {
  bottom: "cccp8"
  top: "pool4"
  name: "pool4"
  type: POOLING
  pooling_param {
    pool: AVE
    kernel_size: 6
    stride: 1
  }
}

This seems to be an old Caffe .prototxt; do we now need to specify global_pooling: true?
As far as I can see, NIN uses a global average pooling layer, not just average pooling, as described in the paper.

layer {
  name: "pool4"
  type: "Pooling"
  bottom: "cccp8"
  top: "pool4"
  pooling_param {
    pool: AVE
    global_pooling: true
  }
}

@moyix commented Oct 9, 2017

Hi @mavenlin,

I noticed that the SHA1 of the caffe model does not match what's listed here (the SHA1 listed here is 8e89c8fcd46e02780e16c867a5308e7bb7af0803 but the SHA1 of the downloaded model is 2794deb2aada04f667894b7d6d929371b4689ea9). Maybe this should be fixed so that people can be sure their download was successful and they're getting the correct model?
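
(A generic way to check this, not from the thread: compute the SHA1 of the downloaded file and compare it with the value listed in the gist.)

import hashlib

sha1 = hashlib.sha1()
with open('nin_imagenet.caffemodel', 'rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):   # read in 1 MB chunks
        sha1.update(chunk)
print(sha1.hexdigest())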
