@ajsander
Last active December 26, 2023 05:14
@qigtang

qigtang commented Jan 1, 2016

I followed the steps exactly, but got the following error when trying to create the solver:
solver = caffe.SGDSolver('fcn_solver.prototxt')

I0101 00:09:39.586221 32459 net.cpp:111] Creating Layer upscore
I0101 00:09:39.586230 32459 net.cpp:478] upscore <- score_classes
I0101 00:09:39.586244 32459 net.cpp:434] upscore -> upscore
F0101 00:09:39.586258 32459 base_conv_layer.cpp:19] Check failed: !conv_param.has_kernel_size() != !(conv_param.has_kernel_h() && conv_param.has_kernel_w()) Filter size is kernel_size OR kernel_h and kernel_w; not both
*** Check failure stack trace: ***
Aborted

@vuptran

vuptran commented Jan 2, 2016

You need to manually copy the Deconvolution layer definition given in the tutorial into your own train/test prototxt files, replacing the Deconvolution definition there. The layer should look something like this:

layer {
  name: "upscore"
  type: "Deconvolution"
  bottom: "score_classes"
  top: "upscore"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 2
    bias_term: true
    kernel_size: 31
    pad: 8
    stride: 16
    weight_filler { type: "bilinear" }
    bias_filler { type: "constant" value: 0.1 }
  }
}
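
Once both fcn_train.prototxt and fcn_test.prototxt carry this definition, the solver should construct without tripping the kernel_size check. A minimal sanity check in pycaffe, assuming the solver file name used earlier in this thread:

import caffe

# Build the nets from the solver definition; if the Deconvolution layer is
# still malformed, Caffe aborts here with the same "Check failed" message.
caffe.set_mode_cpu()
solver = caffe.SGDSolver('fcn_solver.prototxt')

# The upscore blob should now exist with the upsampled spatial size.
print(solver.net.blobs['upscore'].data.shape)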

@qigtang

qigtang commented Jan 2, 2016

Vuptran,

It works if I change both fcn_train.prototxt and fcn_test.prototxt.

Thanks!

@gxr

gxr commented Jan 8, 2016

Has anyone run into a similar problem?

I0108 00:00:32.929188 14232 net.cpp:165] Memory required for data: 34008976
I0108 00:00:32.929191 14232 layer_factory.hpp:77] Creating layer score_classes
I0108 00:00:32.929214 14232 net.cpp:106] Creating Layer score_classes
I0108 00:00:32.929230 14232 net.cpp:454] score_classes <- conv4
I0108 00:00:32.929235 14232 net.cpp:411] score_classes -> score_classes
I0108 00:00:32.930004 14232 net.cpp:150] Setting up score_classes
I0108 00:00:32.930029 14232 net.cpp:157] Top shape: 1 2 17 17 (578)
I0108 00:00:32.930034 14232 net.cpp:165] Memory required for data: 34011288
I0108 00:00:32.930057 14232 layer_factory.hpp:77] Creating layer upscore
I0108 00:00:32.930063 14232 net.cpp:106] Creating Layer upscore
I0108 00:00:32.930066 14232 net.cpp:454] upscore <- score_classes
I0108 00:00:32.930071 14232 net.cpp:411] upscore -> upscore
F0108 00:00:32.930096 14232 base_conv_layer.cpp:36] Check failed: num_kernel_dims == 1 || num_kernel_dims == num_spatial_axes_ kernel_size must be specified once, or once per spatial dimension (kernel_size specified 0 times; 2 spatial dims).
*** Check failure stack trace: ***
[I 00:00:33.708 NotebookApp] KernelRestarter: restarting kernel (1/5)

I am running this in a Jupyter notebook.

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17

The Caffe version is the one on BVLC master; I'm not sure if that's the reason this is happening.

@vuptran

vuptran commented Jan 8, 2016

This tutorial uses a future branch of Caffe that contains unmerged pull requests, including the Crop layer where you're getting the error. Use the Caffe branch mentioned at the start of the tutorial and you should be able to run it without error.

@gxr

gxr commented Jan 8, 2016

When I look at the future branch, though, it seems it has not been updated since February 2015. Is that OK?

@vuptran

vuptran commented Jan 9, 2016

That branch should be fine for the purpose of developing FCN models.

@jedau

jedau commented Jan 15, 2016

I keep getting UnboundLocalError: local variable 'db_imgs' referenced before assignment while running Step 1. I'm not sure what I'm missing. Anybody have any idea?

@vuptran

vuptran commented Jan 15, 2016

It could be that the path to the Sunnybrook dataset is not defined, so the code could not create an LMDB instance. Check the paths to your copy of the Sunnybrook dataset.
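
For example, a quick check to run before Step 1 (the directory names below are placeholders, not the tutorial's actual variable names; point them at wherever you extracted your copy of the Sunnybrook data):

import os

# Placeholder paths -- substitute your own Sunnybrook locations.
SUNNYBROOK_ROOT = os.path.expanduser('~/data/sunnybrook')
CONTOUR_PATH = os.path.join(SUNNYBROOK_ROOT, 'contours')
IMAGE_PATH = os.path.join(SUNNYBROOK_ROOT, 'images')

for path in (SUNNYBROOK_ROOT, CONTOUR_PATH, IMAGE_PATH):
    if not os.path.isdir(path):
        # A wrong path means Step 1 never populates db_imgs, which is what
        # produces the UnboundLocalError above.
        raise IOError('Missing directory: %s' % path)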

@jedau

jedau commented Jan 19, 2016

Oh wow, that was it! Thanks, @vuptran!

@DurhamSmith

Hi, I am having problems building the bleeding-edge version of Caffe.
I always run into this error:
ln: target ‘.build_release/tools/train_net’ is not a directory
make: *** [.build_release/tools/train_net] Error 1

More detailed information on my problem can be found at:
http://stackoverflow.com/questions/34890714/problems-building-the-future-branch-of-caffe-https-github-com-longjon-caffe-g

@vuptran

vuptran commented Jan 21, 2016

It looks like the error occurs when Caffe tries to link the target. Have you checked that your PATH and LD_LIBRARY_PATH environment variables include the necessary libraries not specified in Makefile.config?
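
One quick way to inspect what the build sees, run from the same shell you invoke make in (the CUDA directory below is only an example of a library path you might need):

import os

# Print the linker search path as the environment currently exposes it.
ld_dirs = os.environ.get('LD_LIBRARY_PATH', '').split(':')
print(ld_dirs)

# Example check: warn if the usual CUDA library directory is not listed.
if '/usr/local/cuda/lib64' not in ld_dirs:
    print('Warning: /usr/local/cuda/lib64 is not on LD_LIBRARY_PATH')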

@DurhamSmith

They seem to be correct, and I can compile regular Caffe fine. However, what libraries might be needed in this case, so that I can double-check?

@vuptran

vuptran commented Jan 21, 2016

I looked at your Makefile.config, and it looks like you uncommented the PYTHON_INCLUDE variable, which may conflict with the Anaconda Python distribution:

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
#PYTHON_INCLUDE := /usr/include/python2.7 \
 /usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.

@DurhamSmith

Good spot; unfortunately, this didn't solve the problem.

@vuptran

vuptran commented Jan 22, 2016

I was able to compile the future branch of Caffe on Ubuntu 14.04 using CPU only, so the error is probably related to your environment. Did you make sure to run make clean every time before recompiling?

@DurhamSmith

I do indeed always run make clean before I recompile. However, I have since set up a VM and everything works fine now. Thanks for the help!


ghost commented Feb 4, 2016

Is it possible to get the same functionality as the Crop layer from Caffe's TransformationParameter?

@vuptran

vuptran commented Feb 6, 2016

The Crop layer in Caffe's future branch does not perform cropping like the crop_size parameter from TransformationParameter. The Crop layer tracks/maps coordinates in order to align two blobs to establish a correspondence between input and output. This layer takes two bottom blobs and produces one top, which is a copy of the first bottom cropped to the size of the second so that coordinates exactly correspond. More information on the Crop layer here: BVLC/caffe#1976

For full-image learning, it does not make sense to perform random cropping of the input image before feeding it to the network, because the image and its label need to stay in lockstep during training. A random crop of the image and the corresponding crop of the label could fall out of sync due to the randomness, which would cause the training loss to diverge.
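
As a rough illustration of the behavior, here is a plain NumPy sketch (not Caffe code); the shapes come from the logs in this thread and the offsets are only examples:

import numpy as np

def crop_like(a, reference, offset_h=0, offset_w=0):
    # Crop blob a (N, C, H, W) spatially to the size of reference,
    # starting at the given offsets, so that coordinates line up.
    h, w = reference.shape[2], reference.shape[3]
    return a[:, :, offset_h:offset_h + h, offset_w:offset_w + w]

upscore = np.zeros((1, 2, 271, 271))  # first bottom: the Deconvolution output
data = np.zeros((1, 1, 256, 256))     # second bottom: the blob to match
score = crop_like(upscore, data, offset_h=8, offset_w=8)
print(score.shape)                    # (1, 2, 256, 256)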

@hh1985

hh1985 commented Feb 22, 2016

I tried to follow this tutorial using the future branch, but ended up with an error like Check failed: !conv_param.has_kernel_size() != !(conv_param.has_kernel_h() && conv_param.has_kernel_w()) Filter size is kernel_size OR kernel_h and kernel_w; not both
*** Check failure stack trace: ***

I then switched to the caffe-master branch and merged https://github.com/BlGene/caffe/tree/crop-nd into it to get the Crop layer. The error is now different:

I0222 00:01:04.905822  2873 net.cpp:106] Creating Layer score
I0222 00:01:04.905827  2873 net.cpp:454] score <- upscore
I0222 00:01:04.905833  2873 net.cpp:454] score <- data_data_0_split_1
I0222 00:01:04.905839  2873 net.cpp:411] score -> score
I0222 00:01:04.905879  2873 net.cpp:150] Setting up score
I0222 00:01:04.905897  2873 net.cpp:157] Top shape: 1 2 271 271 (146882)
I0222 00:01:04.905901  2873 net.cpp:165] Memory required for data: 35186344
I0222 00:01:04.905906  2873 layer_factory.hpp:77] Creating layer loss
I0222 00:01:04.905917  2873 net.cpp:106] Creating Layer loss
I0222 00:01:04.905923  2873 net.cpp:454] loss <- score
I0222 00:01:04.905928  2873 net.cpp:454] loss <- label
I0222 00:01:04.905936  2873 net.cpp:411] loss -> loss
I0222 00:01:04.905946  2873 layer_factory.hpp:77] Creating layer loss
F0222 00:01:04.906365  2873 softmax_loss_layer.cpp:47] Check failed: outer_num_ * inner_num_ == bottom[1]->count() (73441 vs. 65536) Number of labels must match number of predictions; e.g., if softmax axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}.
*** Check failure stack trace: ***

The shape of score should be 1 2 256 256, but I have no idea why it is still 1 2 271 271. Any ideas? Otherwise I will go back to the future branch and see if I can make it work.

Thanks a lot!

@sebastian-schlecht

How did you guys get the data? I registered twice but sadly never got an email.

@vuptran

vuptran commented Feb 24, 2016

@hh1985, you should continue working with the future branch for consistency. I read that the implementation of the crop-nd layer may differ from the original implementation of the Crop layer. As for your error, it looks like you did not define the parameters of the deconvolution layer; they need to be manually defined in your train/val prototxt files.
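
For reference, the 1 2 271 271 shape in your log is consistent with the standard deconvolution output formula applied to the layer definition posted earlier in this thread (kernel_size 31, stride 16, pad 8). A small Python sketch, assuming a 17x17 score_classes input as in the earlier logs:

def deconv_output_size(input_size, kernel_size, stride, pad):
    # Standard transposed-convolution (Deconvolution) output size.
    return stride * (input_size - 1) + kernel_size - 2 * pad

# 17x17 score_classes -> 271x271 upscore with the tutorial's parameters.
print(deconv_output_size(17, 31, 16, 8))  # 271

A correctly configured Crop layer is what brings 271x271 back down to the 256x256 label size; if the crop does not happen, the softmax loss compares 271*271 = 73441 predictions against 256*256 = 65536 labels, which is exactly the count mismatch in the error.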

@sebastian-schlecht, the link in the tutorial should lead you to the right registration site. They could be overwhelmed with registration requests...

@xuleiyang

@vuptran, I am trying to run the tutorial but ran into a problem generating _caffe.so, since the folder /caffe_FCN/include/caffe/layers does not exist. Just wondering how to get it, or should I just copy it from the master branch? Thanks a lot.

@xuleiyang

I ran through the tutorial and it works. What I don't understand is how to properly set up the network in order for the Crop layer to work. When I try to modify some parameters, it always shows an error like "Check failed: (crop_map.coefs()[i].first) >= (1)-0.000000000000001L (0.888889 vs. 1)". Can anyone help here? Thanks a lot.

@iskode

iskode commented Jan 18, 2017

@vuptran Hello, thank you for sharing this. I'm now working on this dataset and got lost trying to find the matching between images and labels; your post is helping me a lot to understand that. But my question is: why do you only look for the contour with the highest number? What about the others? Are they not SAX series?
Another question about the SAX series dictionary: what does the association between folder names and values mean?
SAX_SERIES = { # challenge training "SC-HF-I-1": "0004", "SC-HF-I-2": "0106", ...... }
I looked in the SC-HF-I-1 contour-label folder and didn't find any contour named IM-0001-0004-......txt, but I did find it among the corresponding images, so it's as if I have a datum without a label. The same goes for the SC-HF-I-2 folder, which has 0107 as its contour number instead.
Could you please explain a bit?
Thank you so much.

@gautamashwini60

@vuptran The link to the Sunnybrook data is not working, so if any of you have downloaded the data, please share it with me.

@fra-nsabi

The link to the Sunnybrook data is not working, so if any of you have downloaded the data, please share it with me.
