
@shelhamer
Last active May 20, 2016 01:47
FCN-AlexNet Fully Convolutional Semantic Segmentation on PASCAL
@Eranpaz

Eranpaz commented Jul 5, 2015

@mansirankawat, You can see how to create the lmdb files here: BVLC/caffe#1698
I'm still struggling with how the label file should look (is it HxWx1 or HxWxK, where K is the number of classes).

@mansirankawat

@Eranpaz thanks a lot

@rahman-mdatiqur

@Eranpaz, I am really new to caffe as well as dense prediction. Could you please tell me how to process the ground-truth segmentation images to feed into this network? I see that the link you posted for converting images to lmdb requires a text file listing the path to each image followed by its ground-truth label. But for this task of semantic prediction, how can I convert the images to lmdb when the ground truths are no longer scalar labels but spatial images?

@mansirankawat

@Eranpaz could you figure out whether the label file should be HxWx1 or HxWxK?

@Eranpaz

Eranpaz commented Jul 9, 2015

@mansirankawat, this is the process I've used:

  • You create 2 LMDBs, one for images and another for labels. If an image is HxWxC, your label should be HxWx1, with each value running from 0...#OfClasses.
  • The text file in the script holds only the paths to the images (without labels, they get label 0 by default), and another text file holds the paths to the label images. You need to run the script twice, once for images and once for labels.
  • Once you have 2 LMDBs, your network simply calls them both: images for data and labels for labels (which go into the loss layer). See the sketch below.
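
For concreteness, here is a minimal sketch of that two-LMDB process (my own illustration rather than the exact script; the list-file names images.txt/labels.txt and the map_size value are assumptions):

import lmdb
import numpy as np
from PIL import Image
import caffe

def write_lmdb(db_path, list_file, is_label):
    # Write one LMDB from a text file with one image path per line.
    db = lmdb.open(db_path, map_size=int(1e12))  # map_size is a guess; the default is far too small
    with db.begin(write=True) as txn:
        paths = [line.strip() for line in open(list_file) if line.strip()]
        for idx, path in enumerate(paths):
            arr = np.array(Image.open(path))
            if is_label:
                arr = arr[:, :, np.newaxis]  # HxW class indices -> HxWx1
            else:
                arr = arr[:, :, ::-1]        # RGB -> BGR, Caffe's channel order
            arr = arr.transpose((2, 0, 1))   # HxWxC -> CxHxW for the Datum
            datum = caffe.io.array_to_datum(arr)
            txn.put('{:0>10d}'.format(idx), datum.SerializeToString())
    db.close()

# The same script run twice: once for images, once for labels.
write_lmdb('voc_train_images_lmdb', 'images.txt', is_label=False)
write_lmdb('voc_train_labels_lmdb', 'labels.txt', is_label=True)

As long as the two list files are in the same order, the zero-padded keys keep entry i of the image LMDB aligned with entry i of the label LMDB.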

@mansirankawat

Thanks @Eranpaz, this is really helpful.

@rahman-mdatiqur

Dear @Eranpaz,
thanks a lot for sharing the process to convert images and label files to lmdb.
However, the .caffemodel takes a fixed-size input (say, 500x500x3x1) and produces the same-size output (500x500).
Then:

  1. Do I need to resize the images to that shape before training starts?
  2. Do I need to resize ground-truth label images to that shape for validation? If so, how do I convert the ground-truth labels to a fixed size (500x500), since every pixel here denotes a number from 0 to #number_of_classes?

I would highly appreciate your response. Please reply.

@rahman-mdatiqur

Hi @Eranpaz,
I followed your steps for converting the images and ground truth label files to lmdb database format. As you said above -

"the text file in the script hold only the paths to the images (without labels, they get label 0 by default) ..."

If I do not provide any label after the path to the image in the text file, the lmdb conversion is unsuccessful: the generated Data.mdb file is only 8.0 KB. But if I provide a label (any arbitrary number) after each path, only then does the Data.mdb file have a meaningful size.

So, for this purpose, is it OK to give some unrelated arbitrary number (let's say, -1) to convert the image and ground-truth files to lmdb format?

I would highly appreciate your feedback.

@mansirankawat

Hi @Eranpaz,

I am training the FCN-32 network using pretrained weights from the ILSVRC 16-layer VGG net and finetuning on the PASCAL VOC 11 dataset. Even after 17000 iterations the loss remains constant at 3.04452. Please advise me as to what could be the reason behind the loss not decreasing at all and remaining constant. I have created the 2 lmdb files as you suggested in the post above.

Thanks,
Mansi

@rahman-mdatiqur

Hi, @mansirankawat

I am still finding difficulties in converting the image and ground-truth label files to lmdb format. According to the guideline @Eranpaz provided above, I need to put some label (e.g., any integer number) after the file path in the text files; otherwise, the lmdbs are not generated. Is this the same thing you did while converting your files to lmdb format, or did you not provide any label number after each file path?

Please reply.

@mansirankawat

Hi @atique81,
I didn't have to provide any label after the file path. I think the label files are HxW and you need to convert them to HxWx1 by adding one more axis; this might be the reason why you are not able to convert the label files to lmdb format.

Thanks

@rahman-mdatiqur

Dear @mansirankawat,
Many thanks for such a prompt and valuable reply.

So you mean that while converting the label (.png) files, first I need to convert them from HxW to HxWx1 just by a simple MATLAB reshape() call and then list their paths in a text file? If that is the case, reshape() will not allow me to convert HxW to HxWx1, as the last dimension is a singleton one. How did you actually convert the label images from HxW to HxWx1?

While converting the images (.jpg files) or the labels (.png files), if I do not provide an integer after the file path in the following format,
/home/data/voc2011/2008_000031.jpg 0
then the lmdb conversion (using the script mentioned above) is not successful. The resultant lmdb database generates a data.mdb file that is only 8 KB in size.

Could you please share two or three sample lines from your text files listing the images/labels?

Thanks again for your wonderful reply...

@mansirankawat

Hi, @atique81
I just added the line im = im[:,:,np.newaxis] to the Python script for converting to lmdb, right after opening the image, i.e., after the line im = np.array(Image.open(in_)).
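
In context, the label-loading steps of the conversion script would then look roughly like this (a sketch of just the relevant lines; the path in in_ is a placeholder):

import numpy as np
from PIL import Image
import caffe

in_ = '/path/to/label.png'      # placeholder label path
im = np.array(Image.open(in_))  # label PNG loads as an HxW array of class indices
im = im[:, :, np.newaxis]       # add a singleton axis: HxW -> HxWx1
im = im.transpose((2, 0, 1))    # 1xHxW, the CxHxW order array_to_datum expects
im_dat = caffe.io.array_to_datum(im)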

@rahman-mdatiqur

@mansirankawat... THANKS AGAIN... I got it!

Actually, I was trying with the shell script provided with the Caffe installation in "cafferoot/build/tools/convert_imageset.sh" and was doing everything in MATLAB; that's probably why it was not working. Now I will try with that Python script.

One more thing: did you pass "make runtest" successfully with all the tests after installing this PR?
I don't know why I cannot pass the following test, which generates the errors below:

[----------] 21 tests from NetTest/1, where TypeParam = caffe::DoubleCPU
[ RUN ] NetTest/1.TestUnsharedWeightsDataNet
libprotobuf ERROR google/protobuf/text_format.cc:169] Error parsing text-format caffe.NetParameter: 1:346: Expected identifier.
F0719 20:05:25.849918 19362 test_net.cpp:28] Check failed: google::protobuf::TextFormat::ParseFromString(proto, &param)
*** Check failure stack trace: ***
@ 0x7f7d59fbfa5d google::LogMessage::Fail()
@ 0x7f7d59fc3ef7 google::LogMessage::SendToLog()
@ 0x7f7d59fc1d59 google::LogMessage::Flush()
@ 0x7f7d59fc205d google::LogMessageFatal::~LogMessageFatal()
@ 0x66195d caffe::NetTest<>::InitNetFromProtoString()
@ 0x667967 caffe::NetTest<>::InitUnsharedWeightsNet()
@ 0x68429f caffe::NetTest_TestUnsharedWeightsDataNet_Test<>::TestBody()
@ 0x7961bd testing::internal::HandleExceptionsInMethodIfSupported<>()
@ 0x7880b1 testing::Test::Run()
@ 0x788197 testing::TestInfo::Run()
@ 0x7882d7 testing::TestCase::Run()
@ 0x78d1df testing::internal::UnitTestImpl::RunAllTests()
@ 0x795d6d testing::internal::HandleExceptionsInMethodIfSupported<>()
@ 0x7876da testing::UnitTest::Run()
@ 0x4ae82f main
@ 0x38ee21ed5d (unknown)
@ 0x4ae589 (unknown)
make: *** [runtest] Aborted

@mansirankawat

@atique81 I did pass all the tests in make runtest

@rahman-mdatiqur

@mansirankawat, could you please tell me the protoc version, NVIDIA driver, cuda version and cuDNN version you are using?

@mansirankawat

@atique81 Protobuf version 2.5.0, NVIDIA K20, CUDA 7.0.28, and I am not using cuDNN.

@rahman-mdatiqur

Thanks @mansirankawat.
However, I have now created the lmdb files according to your instructions, using the Python script @Eranpaz provided. But now when I go to train, it generates the following error.

Any idea about what is going wrong?

I0720 20:42:18.998431 20463 layer_factory.hpp:74] Creating layer data
I0720 20:42:18.998450 20463 net.cpp:85] Creating Layer data
I0720 20:42:18.998457 20463 net.cpp:339] data -> data
I0720 20:42:18.998486 20463 net.cpp:114] Setting up data
I0720 20:42:18.998569 20463 db.cpp:34] Opened lmdb /home/atique/caffe-future/sem_seg/data/lmdb_files/voc2011_train_img_lmdb
I0720 20:42:18.999052 20463 data_layer.cpp:67] output data size: 1,3,281,500
I0720 20:42:18.999843 20463 net.cpp:121] Top shape: 1 3 281 500 (421500)
I0720 20:42:18.999852 20463 layer_factory.hpp:74] Creating layer data_data_0_split
I0720 20:42:18.999862 20463 net.cpp:85] Creating Layer data_data_0_split
I0720 20:42:18.999866 20463 net.cpp:381] data_data_0_split <- data
I0720 20:42:18.999873 20463 net.cpp:339] data_data_0_split -> data_data_0_split_0
I0720 20:42:18.999881 20463 net.cpp:339] data_data_0_split -> data_data_0_split_1
I0720 20:42:18.999886 20463 net.cpp:114] Setting up data_data_0_split
I0720 20:42:18.999894 20463 net.cpp:121] Top shape: 1 3 281 500 (421500)
I0720 20:42:18.999898 20463 net.cpp:121] Top shape: 1 3 281 500 (421500)
I0720 20:42:18.999902 20463 layer_factory.hpp:74] Creating layer label
I0720 20:42:18.999909 20463 net.cpp:85] Creating Layer label
I0720 20:42:18.999914 20463 net.cpp:339] label -> label
I0720 20:42:18.999922 20463 net.cpp:114] Setting up label
F0720 20:42:18.999960 20463 db.hpp:109] Check failed: mdb_status == 0 (2 vs. 0) No such file or directory
*** Check failure stack trace: ***
@ 0x7f566d083a5d google::LogMessage::Fail()
@ 0x7f566d087ef7 google::LogMessage::SendToLog()
@ 0x7f566d085d59 google::LogMessage::Flush()
@ 0x7f566d08605d google::LogMessageFatal::~LogMessageFatal()
@ 0x7f5672ada21c caffe::db::LMDB::Open()
@ 0x7f5672a4f9f0 caffe::DataLayer<>::DataLayerSetUp()
@ 0x7f56729f59da caffe::BaseDataLayer<>::LayerSetUp()
@ 0x7f56729f5ae9 caffe::BasePrefetchingDataLayer<>::LayerSetUp()
@ 0x7f5672aa8124 caffe::Net<>::Init()
@ 0x7f5672aaa882 caffe::Net<>::Net()
@ 0x7f5672ab37cf caffe::Solver<>::InitTrainNet()
@ 0x7f5672ab3d3f caffe::Solver<>::Init()
@ 0x7f5672ab4175 caffe::Solver<>::Solver()
@ 0x40d7a8 caffe::GetSolver<>()
@ 0x4077b2 train()
@ 0x405cee main
@ 0x38ee21ed5d (unknown)
@ 0x4053b9 (unknown)
Aborted

@mansirankawat

@atique81 you might want to ask this at the caffe-users Google group. I have no idea why this problem occurs; maybe it's not able to locate the file. I have seen similar questions asked in the caffe-users group.

@rahman-mdatiqur

@mansirankawat,
you are right that people have already raised the same issue on the caffe-users group. I am actually following those posts.

Thank you so much for being so cooperative. I hope to receive your cooperation in the future.

@rahman-mdatiqur

Hi @mansirankawat,
I am now able to run fine-tuning, though I couldn't make cuDNN work; it shows error code 8 (CUDNN_STATUS_EXECUTION_FAILED).

But now the main problem is that the loss is not decreasing; it reports the same value every 20 iterations (Train net output #0: loss = 3.04452 (* 1 = 3.04452 loss)) for all iterations from 1 to 2200. I didn't proceed any further, as the loss was not going down at all. I am training on the PASCAL VOC 2011 train set (1112 images) and testing on the PASCAL VOC 2011 validation set (1111 images).

Could you please advise and share your experience regarding this?

Thanks.

@rahman-mdatiqur

Hi @mansirankawat, @Eranpaz,

Could you guys please provide any pointer on the above-mentioned issue?

Thanks.

@shelhamer
Author

I only just noticed this comment thread, but I suggest looking at this model zoo example https://gist.github.com/shelhamer/80667189b218ad570e82#file-readme-md which includes further details on working with FCNs.

@ShravanTata

I have the Python script for preparing the database; hope this is helpful to all of you:

import caffe
import lmdb
from PIL import Image
import numpy as np
import glob
from random import shuffle

# Initialize the image set
NumberTrain = 1464  # 572  # Number of training images
NumberTest = 1449   # 143  # Number of testing images

Rheight = 380  # Required height
Rwidth = 500   # Required width

RheightLabel = 380  # Height for the label
RwidthLabel = 500   # Width for the label

LabelWidth = 118  # Downscaled width of the label
LabelHeight = 88  # Downscaled height of the label

# Read the files in the data folder
inputs_data_train = sorted(glob.glob("/home/rcar/cnn/caffe/examples/SemanticSegmentation/VOC2012/SegmentationTrainingData/*.jpg"))
inputs_data_valid = sorted(glob.glob("/home/rcar/cnn/caffe/examples/SemanticSegmentation/VOC2012/SegmentationValidationData/*.jpg"))
inputs_label = sorted(glob.glob("/home/rcar/cnn/caffe/examples/SemanticSegmentation/VOC2012/SegmentationClass/*.png"))

shuffle(inputs_data_train)  # Shuffle the dataset
shuffle(inputs_data_valid)  # Shuffle the dataset

inputs_Train = inputs_data_train[:NumberTrain]  # Extract the training data from the complete set
inputs_Test = inputs_data_valid[:NumberTest]    # Extract the testing data from the validation set
                                                # (was [NumberTrain:NumberTrain+NumberTest], which slices past the end of the validation list)

# Creating LMDB for training data
print("Creating Training Data LMDB File ..... ")

in_db = lmdb.open('TrainVOC_Data_lmdb', map_size=int(1e14))  # map_size set explicitly; the default is too small for image data

with in_db.begin(write=True) as in_txn:
    for in_idx, in_ in enumerate(inputs_Train):
        print(in_idx)
        im = np.array(Image.open(in_))  # or load whatever ndarray you need
        Dtype = im.dtype
        im = im[:, :, ::-1]  # RGB -> BGR for Caffe
        im = Image.fromarray(im)
        im = im.resize([Rwidth, Rheight], Image.ANTIALIAS)  # PIL resize takes (width, height)
        im = np.array(im, Dtype)
        im = im.transpose((2, 0, 1))  # HxWxC -> CxHxW
        im_dat = caffe.io.array_to_datum(im)
        in_txn.put('{:0>10d}'.format(in_idx), im_dat.SerializeToString())

in_db.close()

# Creating LMDB for training labels
print("Creating Training Label LMDB File ..... ")

in_db = lmdb.open('TrainVOC_Label_lmdb', map_size=int(1e14))

with in_db.begin(write=True) as in_txn:
    for in_idx, in_ in enumerate(inputs_Train):
        print(in_idx)
        in_label = in_[:-40] + 'SegmentationClass/' + in_[-15:-3] + 'png'  # derive the label path from the image path
        L = np.array(Image.open(in_label))  # open the label image (not in_, which is the data image)
        Dtype = L.dtype
        Limg = Image.fromarray(L)
        Limg = Limg.resize([LabelWidth, LabelHeight], Image.NEAREST)  # NEAREST keeps values as valid class indices
        L = np.array(Limg, Dtype)
        L = L.reshape(L.shape[0], L.shape[1], 1)  # HxW -> HxWx1
        L = L.transpose((2, 0, 1))
        L_dat = caffe.io.array_to_datum(L)
        in_txn.put('{:0>10d}'.format(in_idx), L_dat.SerializeToString())

in_db.close()

# Creating LMDB for testing data
print("Creating Testing Data LMDB File ..... ")

in_db = lmdb.open('TestVOC_Data_lmdb', map_size=int(1e14))

with in_db.begin(write=True) as in_txn:
    for in_idx, in_ in enumerate(inputs_Test):
        print(in_idx)
        im = np.array(Image.open(in_))  # or load whatever ndarray you need
        Dtype = im.dtype
        im = im[:, :, ::-1]
        im = Image.fromarray(im)
        im = im.resize([Rwidth, Rheight], Image.ANTIALIAS)
        im = np.array(im, Dtype)
        im = im.transpose((2, 0, 1))
        im_dat = caffe.io.array_to_datum(im)
        in_txn.put('{:0>10d}'.format(in_idx), im_dat.SerializeToString())

in_db.close()

# Creating LMDB for testing labels
print("Creating Testing Label LMDB File ..... ")

in_db = lmdb.open('TestVOC_Label_lmdb', map_size=int(1e14))

with in_db.begin(write=True) as in_txn:
    for in_idx, in_ in enumerate(inputs_Test):
        print(in_idx)
        in_label = in_[:-40] + 'SegmentationClass/' + in_[-15:-3] + 'png'
        L = np.array(Image.open(in_label))
        Dtype = L.dtype
        Limg = Image.fromarray(L)
        Limg = Limg.resize([LabelWidth, LabelHeight], Image.NEAREST)
        L = np.array(Limg, Dtype)
        L = L.reshape(L.shape[0], L.shape[1], 1)
        L = L.transpose((2, 0, 1))
        L_dat = caffe.io.array_to_datum(L)
        in_txn.put('{:0>10d}'.format(in_idx), L_dat.SerializeToString())

in_db.close()
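
Before training, it can be worth reading one entry back to confirm the shapes; this is a quick sanity check of my own, assuming the LMDB names used above:

import lmdb
import caffe
from caffe.proto import caffe_pb2

db = lmdb.open('TrainVOC_Label_lmdb', readonly=True)
with db.begin() as txn:
    raw = txn.get('{:0>10d}'.format(0))  # first entry
    datum = caffe_pb2.Datum()
    datum.ParseFromString(raw)
    arr = caffe.io.datum_to_array(datum)  # CxHxW ndarray
    print(arr.shape)             # expect (1, 88, 118) for the downscaled labels
    print(arr.min(), arr.max())  # class indices; 255 marks void pixels in PASCAL
db.close()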

@paritosh0908

I am having trouble making a deploy.prototxt file for this model. Can anybody help me with this?

@chriss2401

@paritosh0908 just copy your train_val prototxt, remove the loss function and all the label and data layers, and replace them with:

input: "data"
input_dim: 1
input_dim: 3
input_dim: 500
input_dim: 500
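
Once the deploy.prototxt exists, a quick way to check it is to load it with pycaffe and run a forward pass. A sketch, where the file names and the output blob name 'score' are assumptions (the crop layer's top in these nets is typically called score, as in the logs above):

import numpy as np
import caffe

# File names here are placeholders.
net = caffe.Net('deploy.prototxt', 'fcn-alexnet.caffemodel', caffe.TEST)

# Feed one 500x500 BGR image, preprocessed the same way as in training.
im = np.random.rand(3, 500, 500).astype(np.float32)  # stand-in for a real image
net.blobs['data'].data[0, ...] = im
out = net.forward()

# Per-pixel prediction: argmax over the 21 class channels.
seg = out['score'].argmax(axis=1)[0]
print(seg.shape)  # (500, 500)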

@paritosh0908

Should I do net surgery on this to get better parameters?

@CarrieHui

Using the given caffemodel, I only obtain 39.075 mean IU on PASCAL VOC11 segval (the subset that does not intersect with SBD train), not 48.0. Did anybody test that caffemodel? If yes, what score did you get?
Thanks in advance.

@jpsquare

jpsquare commented Apr 23, 2016

Hi

I am trying to perform semantic segmentation on the PASCAL VOC 2012 dataset using the train_val prototxt files provided in FCN-AlexNet PASCAL and to create a caffemodel. I prepared the lmdb files for the dataset, but when I run training using the caffe-future branch, I get this error associated with the loss layer.

I0428 20:38:35.996023 10452 layer_factory.hpp:76] Creating layer prob
F0428 20:38:36.005141 10452 softmax_loss_layer.cpp:42] Check failed: outer_num_ * inner_num_ == bottom[1]->count() (190000 vs. 10384) Number of labels must match number of predictions; e.g., if softmax axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}.
*** Check failure stack trace: ***
@ 0x7f7c17556778 (unknown)
@ 0x7f7c175566b2 (unknown)
@ 0x7f7c175560b4 (unknown)
@ 0x7f7c17559055 (unknown)
@ 0x7f7c179573c8 caffe::SoftmaxWithLossLayer<>::Reshape()
@ 0x7f7c178d61db caffe::Net<>::Init()
@ 0x7f7c178d7948 caffe::Net<>::Net()
@ 0x7f7c17912b22 caffe::Solver<>::InitTrainNet()
@ 0x7f7c17913e2a caffe::Solver<>::Init()
@ 0x7f7c17914159 caffe::Solver<>::Solver()
@ 0x4117a5 caffe::GetSolver<>()
@ 0x408e13 train()
@ 0x4067b7 main
@ 0x7f7c135b7b45 (unknown)
@ 0x406fb4 (unknown)
@ (nil) (unknown)

Does the error have to do with the way the labels have been assigned? If yes, how should I modify the dimensions of the label layer?

Thanks, any help is appreciated

@snakehaihai

I0427 21:44:38.665948 16601 layer_factory.hpp:76] Creating layer data
I0427 21:44:38.666298 16601 net.cpp:111] Creating Layer data
I0427 21:44:38.666328 16601 net.cpp:434] data -> data
I0427 21:44:38.666980 16604 db_lmdb.cpp:22] Opened lmdb /home/snake/caffe-FCN/segnet/FCN-AlexNet/create_data/Train_Data_lmdb
I0427 21:44:38.667425 16601 data_layer.cpp:44] output data size: 1,3,380,500
I0427 21:44:38.676369 16601 net.cpp:156] Setting up data
I0427 21:44:38.676419 16601 net.cpp:164] Top shape: 1 3 380 500 (570000)
I0427 21:44:38.676439 16601 layer_factory.hpp:76] Creating layer data_data_0_split
I0427 21:44:38.676473 16601 net.cpp:111] Creating Layer data_data_0_split
I0427 21:44:38.676488 16601 net.cpp:478] data_data_0_split <- data
I0427 21:44:38.676507 16601 net.cpp:434] data_data_0_split -> data_data_0_split_0
I0427 21:44:38.676540 16601 net.cpp:434] data_data_0_split -> data_data_0_split_1
I0427 21:44:38.676568 16601 net.cpp:156] Setting up data_data_0_split
I0427 21:44:38.676584 16601 net.cpp:164] Top shape: 1 3 380 500 (570000)
I0427 21:44:38.676594 16601 net.cpp:164] Top shape: 1 3 380 500 (570000)
I0427 21:44:38.676604 16601 layer_factory.hpp:76] Creating layer label
I0427 21:44:38.676654 16601 net.cpp:111] Creating Layer label
I0427 21:44:38.676682 16601 net.cpp:434] label -> label
I0427 21:44:38.677767 16606 db_lmdb.cpp:22] Opened lmdb /home/snake/caffe-FCN/segnet/FCN-AlexNet/create_data/Train_Label_lmdb
I0427 21:44:38.678103 16601 data_layer.cpp:44] output data size: 1,1,88,118
I0427 21:44:38.678669 16601 net.cpp:156] Setting up label
I0427 21:44:38.678704 16601 net.cpp:164] Top shape: 1 1 88 118 (10384)
I0427 21:44:38.678715 16601 layer_factory.hpp:76] Creating layer conv1
I0427 21:44:38.678735 16601 net.cpp:111] Creating Layer conv1
I0427 21:44:38.678766 16601 net.cpp:478] conv1 <- data_data_0_split_0
I0427 21:44:38.678787 16601 net.cpp:434] conv1 -> conv1
I0427 21:44:38.680908 16601 net.cpp:156] Setting up conv1
I0427 21:44:38.680953 16601 net.cpp:164] Top shape: 1 96 143 173 (2374944)
I0427 21:44:38.680979 16601 layer_factory.hpp:76] Creating layer relu1
I0427 21:44:38.680996 16601 net.cpp:111] Creating Layer relu1
I0427 21:44:38.681008 16601 net.cpp:478] relu1 <- conv1
I0427 21:44:38.681021 16601 net.cpp:420] relu1 -> conv1 (in-place)
I0427 21:44:38.681241 16601 net.cpp:156] Setting up relu1
I0427 21:44:38.681254 16601 net.cpp:164] Top shape: 1 96 143 173 (2374944)
I0427 21:44:38.681293 16601 layer_factory.hpp:76] Creating layer pool1
I0427 21:44:38.681308 16601 net.cpp:111] Creating Layer pool1
I0427 21:44:38.681316 16601 net.cpp:478] pool1 <- conv1
I0427 21:44:38.681329 16601 net.cpp:434] pool1 -> pool1
I0427 21:44:38.681351 16601 net.cpp:156] Setting up pool1
I0427 21:44:38.681365 16601 net.cpp:164] Top shape: 1 96 71 86 (586176)
I0427 21:44:38.681375 16601 layer_factory.hpp:76] Creating layer norm1
I0427 21:44:38.681391 16601 net.cpp:111] Creating Layer norm1
I0427 21:44:38.681401 16601 net.cpp:478] norm1 <- pool1
I0427 21:44:38.681411 16601 net.cpp:434] norm1 -> norm1
I0427 21:44:38.681426 16601 net.cpp:156] Setting up norm1
I0427 21:44:38.681437 16601 net.cpp:164] Top shape: 1 96 71 86 (586176)
I0427 21:44:38.681447 16601 layer_factory.hpp:76] Creating layer conv2
I0427 21:44:38.681459 16601 net.cpp:111] Creating Layer conv2
I0427 21:44:38.681469 16601 net.cpp:478] conv2 <- norm1
I0427 21:44:38.681483 16601 net.cpp:434] conv2 -> conv2
I0427 21:44:38.689705 16601 net.cpp:156] Setting up conv2
I0427 21:44:38.689739 16601 net.cpp:164] Top shape: 1 256 71 86 (1563136)
I0427 21:44:38.689757 16601 layer_factory.hpp:76] Creating layer relu2
I0427 21:44:38.689774 16601 net.cpp:111] Creating Layer relu2
I0427 21:44:38.689784 16601 net.cpp:478] relu2 <- conv2
I0427 21:44:38.689795 16601 net.cpp:420] relu2 -> conv2 (in-place)
I0427 21:44:38.689807 16601 net.cpp:156] Setting up relu2
I0427 21:44:38.689818 16601 net.cpp:164] Top shape: 1 256 71 86 (1563136)
I0427 21:44:38.689827 16601 layer_factory.hpp:76] Creating layer pool2
I0427 21:44:38.689839 16601 net.cpp:111] Creating Layer pool2
I0427 21:44:38.689848 16601 net.cpp:478] pool2 <- conv2
I0427 21:44:38.689859 16601 net.cpp:434] pool2 -> pool2
I0427 21:44:38.689874 16601 net.cpp:156] Setting up pool2
I0427 21:44:38.689885 16601 net.cpp:164] Top shape: 1 256 35 43 (385280)
I0427 21:44:38.689895 16601 layer_factory.hpp:76] Creating layer norm2
I0427 21:44:38.689908 16601 net.cpp:111] Creating Layer norm2
I0427 21:44:38.689918 16601 net.cpp:478] norm2 <- pool2
I0427 21:44:38.689929 16601 net.cpp:434] norm2 -> norm2
I0427 21:44:38.689941 16601 net.cpp:156] Setting up norm2
I0427 21:44:38.689951 16601 net.cpp:164] Top shape: 1 256 35 43 (385280)
I0427 21:44:38.689961 16601 layer_factory.hpp:76] Creating layer conv3
I0427 21:44:38.689973 16601 net.cpp:111] Creating Layer conv3
I0427 21:44:38.689982 16601 net.cpp:478] conv3 <- norm2
I0427 21:44:38.689993 16601 net.cpp:434] conv3 -> conv3
I0427 21:44:38.713974 16601 net.cpp:156] Setting up conv3
I0427 21:44:38.714025 16601 net.cpp:164] Top shape: 1 384 35 43 (577920)
I0427 21:44:38.714046 16601 layer_factory.hpp:76] Creating layer relu3
I0427 21:44:38.714061 16601 net.cpp:111] Creating Layer relu3
I0427 21:44:38.714072 16601 net.cpp:478] relu3 <- conv3
I0427 21:44:38.714084 16601 net.cpp:420] relu3 -> conv3 (in-place)
I0427 21:44:38.714099 16601 net.cpp:156] Setting up relu3
I0427 21:44:38.714109 16601 net.cpp:164] Top shape: 1 384 35 43 (577920)
I0427 21:44:38.714119 16601 layer_factory.hpp:76] Creating layer conv4
I0427 21:44:38.714133 16601 net.cpp:111] Creating Layer conv4
I0427 21:44:38.714143 16601 net.cpp:478] conv4 <- conv3
I0427 21:44:38.714155 16601 net.cpp:434] conv4 -> conv4
I0427 21:44:38.731659 16601 net.cpp:156] Setting up conv4
I0427 21:44:38.731683 16601 net.cpp:164] Top shape: 1 384 35 43 (577920)
I0427 21:44:38.731709 16601 layer_factory.hpp:76] Creating layer relu4
I0427 21:44:38.731739 16601 net.cpp:111] Creating Layer relu4
I0427 21:44:38.731750 16601 net.cpp:478] relu4 <- conv4
I0427 21:44:38.731761 16601 net.cpp:420] relu4 -> conv4 (in-place)
I0427 21:44:38.731775 16601 net.cpp:156] Setting up relu4
I0427 21:44:38.731784 16601 net.cpp:164] Top shape: 1 384 35 43 (577920)
I0427 21:44:38.731794 16601 layer_factory.hpp:76] Creating layer conv5
I0427 21:44:38.731808 16601 net.cpp:111] Creating Layer conv5
I0427 21:44:38.731818 16601 net.cpp:478] conv5 <- conv4
I0427 21:44:38.731830 16601 net.cpp:434] conv5 -> conv5
I0427 21:44:38.743054 16601 net.cpp:156] Setting up conv5
I0427 21:44:38.743074 16601 net.cpp:164] Top shape: 1 256 35 43 (385280)
I0427 21:44:38.743101 16601 layer_factory.hpp:76] Creating layer relu5
I0427 21:44:38.743119 16601 net.cpp:111] Creating Layer relu5
I0427 21:44:38.743129 16601 net.cpp:478] relu5 <- conv5
I0427 21:44:38.743139 16601 net.cpp:420] relu5 -> conv5 (in-place)
I0427 21:44:38.743152 16601 net.cpp:156] Setting up relu5
I0427 21:44:38.743162 16601 net.cpp:164] Top shape: 1 256 35 43 (385280)
I0427 21:44:38.743172 16601 layer_factory.hpp:76] Creating layer pool5
I0427 21:44:38.743185 16601 net.cpp:111] Creating Layer pool5
I0427 21:44:38.743193 16601 net.cpp:478] pool5 <- conv5
I0427 21:44:38.743204 16601 net.cpp:434] pool5 -> pool5
I0427 21:44:38.743221 16601 net.cpp:156] Setting up pool5
I0427 21:44:38.743235 16601 net.cpp:164] Top shape: 1 256 17 21 (91392)
I0427 21:44:38.743245 16601 layer_factory.hpp:76] Creating layer fc6
I0427 21:44:38.743262 16601 net.cpp:111] Creating Layer fc6
I0427 21:44:38.743271 16601 net.cpp:478] fc6 <- pool5
I0427 21:44:38.743283 16601 net.cpp:434] fc6 -> fc6
I0427 21:44:39.708747 16601 net.cpp:156] Setting up fc6
I0427 21:44:39.708806 16601 net.cpp:164] Top shape: 1 4096 12 16 (786432)
I0427 21:44:39.708823 16601 layer_factory.hpp:76] Creating layer relu6
I0427 21:44:39.708839 16601 net.cpp:111] Creating Layer relu6
I0427 21:44:39.708850 16601 net.cpp:478] relu6 <- fc6
I0427 21:44:39.708863 16601 net.cpp:420] relu6 -> fc6 (in-place)
I0427 21:44:39.708878 16601 net.cpp:156] Setting up relu6
I0427 21:44:39.708889 16601 net.cpp:164] Top shape: 1 4096 12 16 (786432)
I0427 21:44:39.708897 16601 layer_factory.hpp:76] Creating layer drop6
I0427 21:44:39.708910 16601 net.cpp:111] Creating Layer drop6
I0427 21:44:39.708930 16601 net.cpp:478] drop6 <- fc6
I0427 21:44:39.708942 16601 net.cpp:420] drop6 -> fc6 (in-place)
I0427 21:44:39.708961 16601 net.cpp:156] Setting up drop6
I0427 21:44:39.708974 16601 net.cpp:164] Top shape: 1 4096 12 16 (786432)
I0427 21:44:39.708983 16601 layer_factory.hpp:76] Creating layer fc7
I0427 21:44:39.709010 16601 net.cpp:111] Creating Layer fc7
I0427 21:44:39.709022 16601 net.cpp:478] fc7 <- fc6
I0427 21:44:39.709033 16601 net.cpp:434] fc7 -> fc7
I0427 21:44:40.133029 16601 net.cpp:156] Setting up fc7
I0427 21:44:40.133085 16601 net.cpp:164] Top shape: 1 4096 12 16 (786432)
I0427 21:44:40.133101 16601 layer_factory.hpp:76] Creating layer relu7
I0427 21:44:40.133118 16601 net.cpp:111] Creating Layer relu7
I0427 21:44:40.133129 16601 net.cpp:478] relu7 <- fc7
I0427 21:44:40.133157 16601 net.cpp:420] relu7 -> fc7 (in-place)
I0427 21:44:40.133172 16601 net.cpp:156] Setting up relu7
I0427 21:44:40.133183 16601 net.cpp:164] Top shape: 1 4096 12 16 (786432)
I0427 21:44:40.133193 16601 layer_factory.hpp:76] Creating layer drop7
I0427 21:44:40.133219 16601 net.cpp:111] Creating Layer drop7
I0427 21:44:40.133229 16601 net.cpp:478] drop7 <- fc7
I0427 21:44:40.133240 16601 net.cpp:420] drop7 -> fc7 (in-place)
I0427 21:44:40.133255 16601 net.cpp:156] Setting up drop7
I0427 21:44:40.133265 16601 net.cpp:164] Top shape: 1 4096 12 16 (786432)
I0427 21:44:40.133275 16601 layer_factory.hpp:76] Creating layer score-fr
I0427 21:44:40.133288 16601 net.cpp:111] Creating Layer score-fr
I0427 21:44:40.133297 16601 net.cpp:478] score-fr <- fc7
I0427 21:44:40.133308 16601 net.cpp:434] score-fr -> score-fc7
I0427 21:44:40.133857 16601 net.cpp:156] Setting up score-fr
I0427 21:44:40.133888 16601 net.cpp:164] Top shape: 1 21 12 16 (4032)
I0427 21:44:40.133929 16601 layer_factory.hpp:76] Creating layer upsample
I0427 21:44:40.133965 16601 net.cpp:111] Creating Layer upsample
I0427 21:44:40.133975 16601 net.cpp:478] upsample <- score-fc7
I0427 21:44:40.133990 16601 net.cpp:434] upsample -> bigscore
I0427 21:44:40.134496 16601 net.cpp:156] Setting up upsample
I0427 21:44:40.134526 16601 net.cpp:164] Top shape: 1 21 415 543 (4732245)
I0427 21:44:40.134546 16601 layer_factory.hpp:76] Creating layer crop
I0427 21:44:40.134563 16601 net.cpp:111] Creating Layer crop
I0427 21:44:40.134574 16601 net.cpp:478] crop <- bigscore
I0427 21:44:40.134584 16601 net.cpp:478] crop <- data_data_0_split_1
I0427 21:44:40.134596 16601 net.cpp:434] crop -> score
I0427 21:44:40.134660 16601 net.cpp:156] Setting up crop
I0427 21:44:40.134673 16601 net.cpp:164] Top shape: 1 21 380 500 (3990000)
I0427 21:44:40.134683 16601 layer_factory.hpp:76] Creating layer prob
I0427 21:44:40.134699 16601 net.cpp:111] Creating Layer prob
I0427 21:44:40.134721 16601 net.cpp:478] prob <- score
I0427 21:44:40.134732 16601 net.cpp:478] prob <- label
I0427 21:44:40.134743 16601 net.cpp:434] prob -> loss
I0427 21:44:40.134763 16601 layer_factory.hpp:76] Creating layer prob
F0427 21:44:40.141227 16601 softmax_loss_layer.cpp:42] Check failed: outer_num_ * inner_num_ == bottom[1]->count() (190000 vs. 10384) Number of labels must match number of predictions; e.g., if softmax axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}.
*** Check failure stack trace: ***
@ 0x7f867a93fdaa (unknown)
@ 0x7f867a93fce4 (unknown)
@ 0x7f867a93f6e6 (unknown)
@ 0x7f867a942687 (unknown)
@ 0x7f867ad7f7e0 caffe::SoftmaxWithLossLayer<>::Reshape()
@ 0x7f867acbd3be caffe::Net<>::Init()
@ 0x7f867acbe405 caffe::Net<>::Net()
@ 0x7f867acd28ba caffe::Solver<>::InitTrainNet()
@ 0x7f867acd39f4 caffe::Solver<>::Init()
@ 0x7f867acd3cf9 caffe::Solver<>::Solver()
@ 0x412915 caffe::GetSolver<>()
@ 0x40b61e train()
@ 0x4094a1 main
@ 0x7f8679443ec5 (unknown)
@ 0x409c3b (unknown)
@ (nil) (unknown)
Aborted (core dumped)

The sizes are different: in your sample the label is 1x1x88x118, but here the prediction is 1x21x380x500. I'm a bit confused about the actual label size you intend to set. Anyway, it's not working.
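
For reference, the two counts in the failed check follow directly from the shapes in this log; a short arithmetic note of my own, using the 380x500 prediction and 88x118 label shapes shown above:

# SoftmaxWithLoss expects one integer label per prediction pixel,
# i.e. the label count must equal N*H*W of the score blob.
N, C, H, W = 1, 21, 380, 500   # score blob shape from the log
print(N * H * W)               # 190000 -> labels expected
print(1 * 1 * 88 * 118)        # 10384  -> labels actually in the LMDB
# 190000 != 10384, hence the check fails: the label LMDB has to be written
# at the network's output resolution (380x500 here), not the downscaled 88x118.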
