name: Network in Network CIFAR10 Model
caffemodel: cifar10_nin.caffemodel
caffemodel_url: https://www.dropbox.com/s/blrajqirr1p31v0/cifar10_nin.caffemodel?dl=1
license: BSD
sha1: 8e89c8fcd46e02780e16c867a5308e7bb7af0803
caffe_commit: c69b3b49084b503e23b95dc387329975245949c2
gist_id: e56253735ef32c3c296d
This model is a 3-layer Network in Network model trained on the CIFAR10 dataset.
The performance of this model on the validation set is 89.6%. Detailed descriptions are in the Network in Network paper.
The preprocessed CIFAR10 data is downloadable in lmdb format here:
The data used to train this model comes from http://www.cs.toronto.edu/~kriz/cifar.html. Please follow the license there if you use the data.
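For anyone who wants to inspect the LMDB data programmatically, here is a minimal sketch using pycaffe and the lmdb Python package; the LMDB path below is a placeholder, not the actual download location.

```python
import lmdb
import caffe
from caffe.proto import caffe_pb2

# Open the CIFAR10 LMDB (path is a placeholder) and decode the first record.
env = lmdb.open('cifar10_train_lmdb', readonly=True)
with env.begin() as txn:
    key, value = next(txn.cursor().iternext())
    datum = caffe_pb2.Datum()
    datum.ParseFromString(value)
    image = caffe.io.datum_to_array(datum)  # array of shape (channels, height, width)
    print(key, image.shape, datum.label)
```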
@diaomin, @PeterPan1990
I would like to share my experience of reproducing the experimental results for Network in Network on CIFAR-10 [1]. In summary, the paper's results are correct and can be reproduced.
There are several open-source implementations:
1) caffe implementation
2) convnet implementation
3) my implementation
4) Others
Some information that may be useful:
From http://arxiv.org/abs/1505.00853, I get results close to those reported in the paper [2]
(ReLU, 200 iterations gives 86% test accuracy). The data are normalized to the range of -1 to 1 by: color = (color - mean) / 128. The code and the Caffe network and solver prototxt files from (3) are used.
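As a rough illustration of that normalization, here is a minimal NumPy sketch; the per-pixel mean and the array shapes are assumptions, not taken from the prototxt files.

```python
import numpy as np

def normalize_to_unit_range(images, mean):
    """Scale 8-bit color values to roughly [-1, 1] via color = (color - mean) / 128."""
    # images: (N, H, W, 3) uint8 array; mean: per-pixel mean computed on the training set
    return (images.astype(np.float32) - mean) / 128.0

# Random data standing in for CIFAR10 images, just to show the call.
images = np.random.randint(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)
mean = images.astype(np.float32).mean(axis=0)
normalized = normalize_to_unit_range(images, mean)
```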
https://github.com/s9xie/DSN/tree/master/tools/extra
Here, the training and testing logs are provided. 89.4% accuracy is reported without data augmentation.
Data is preprocessed using GCN (global contrast normalization) without ZCA whitening. Learning rates of 0.05 and 0.005 are used.
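For reference, a minimal NumPy sketch of per-image GCN; the scale and epsilon values are assumptions and may differ from what the convnet code actually uses.

```python
import numpy as np

def global_contrast_normalize(image, scale=1.0, eps=1e-8):
    """Subtract the per-image mean and divide by the per-image standard deviation."""
    # image: (H, W, 3) array for a single CIFAR10 example
    image = image.astype(np.float32)
    image -= image.mean()
    return scale * image / max(image.std(), eps)

# No ZCA whitening is applied afterwards, matching the setup described above.
```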
http://torch.ch/blog/2015/07/30/cifar.html
https://github.com/szagoruyko/cifar.torch/blob/master/models/nin.lua
Data is preprocessed by converting to the YCrCb color space and normalizing.
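A rough Python sketch of that kind of preprocessing (RGB to YCrCb conversion followed by per-channel standardization), using OpenCV for the color conversion; the per-channel mean/std normalization is an assumption about what the Torch script does.

```python
import cv2
import numpy as np

def preprocess_ycrcb(images):
    """Convert RGB images to YCrCb and standardize each channel over the dataset."""
    # images: (N, H, W, 3) uint8 RGB arrays
    ycrcb = np.stack([cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb) for img in images])
    ycrcb = ycrcb.astype(np.float32)
    mean = ycrcb.mean(axis=(0, 1, 2))  # per-channel mean over the whole set
    std = ycrcb.std(axis=(0, 1, 2))    # per-channel standard deviation
    return (ycrcb - mean) / std
```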
[1]"Network In Network" - M. Lin, Q. Chen, S. Yan, ICLR-2014.
[2] "Empirical Evaluation of Rectified Activations in Convolutional Network"-Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li, arxiv 2015