@shelhamer
Last active January 26, 2018 15:55
FCN-32s Fully Convolutional Semantic Segmentation on PASCAL-Context
@acgtyrant

I had to use a quick and dirty approach: hack kDefaultTotalBytesLimit and kDefaultTotalBytesWarningThreshold in coded_stream.h, then recompile and reinstall. Only then could eval.py finally run successfully.

I will try to limit the sizes of the big messages now.

@xuzhenqi

Could anybody explain what the 'Crop' layer is for?

@eswears

eswears commented Dec 17, 2015

What is the train/val split on the PASCAL Context data that is needed to get the 35.1 mean I/U?

There is a train.txt and a val.txt in the VOC2010/ImageSets/Segmentation folder of the PASCAL VOC 2010 download, but these only list 964 images each. The Long et al. paper seems to use half of the 10,103 available images for training and the other half for testing, so I don't think these are the correct files. The PASCAL Context dataset download also doesn't have any files for the train/val split.

@eswears

eswears commented Jan 6, 2016

I was able to get a mean accuracy of 53.5% after 10150 iterations with a loss of 67968. This was with a 50/50 random split for train vs. test. The accuracy was determined by adding an 'accuracy' layer to train_val.prototxt. How is the mean I/U determined? Is there a similar layer that can be added?
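
(For reference, in the FCN scoring code mean I/U is computed offline from a per-class confusion matrix accumulated over the whole validation set, not with a Caffe layer. A minimal numpy sketch of that calculation follows; the names fast_hist and mean_iu are just illustrative.)

import numpy as np

def fast_hist(labels, preds, num_classes):
    # confusion matrix for one image: rows = ground truth, cols = prediction
    mask = (labels >= 0) & (labels < num_classes)  # drop void/ignore labels
    return np.bincount(num_classes * labels[mask].astype(int) + preds[mask],
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iu(hist):
    # per-class intersection over union, averaged over the classes that appear
    hist = hist.astype(float)
    iu = np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))
    return np.nanmean(iu)

# usage: accumulate hist = sum of fast_hist(gt.flatten(), pred.flatten(), n_cl)
# over the validation set, then report mean_iu(hist)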

@ch977

ch977 commented Jan 15, 2016

Why is the learning rate so small? Can anyone explain it, please?

@JoestarK

Which should I use to train my own data, solver.py or solver.prototxt? I used solver.prototxt to train my data, but the loss didn't decrease. Does anyone have a solution? Thanks

@zmonoid

zmonoid commented Jan 16, 2016

Hi, does anyone know where to download the "vgg16fc.caffemodel" referenced in solver.py?

@mhsung

mhsung commented Jan 18, 2016

I would like to share my script for creating a custom dataset in LMDB format. I would appreciate it if you could let me know if you find any bugs in this code.

#!/usr/bin/python

import caffe
import glob
import lmdb
import numpy as np
from PIL import Image
import os
import sys

# Variables
img_width = 500
img_height = 500


# Paths
# PNG images
color_dir = './input/color_image_dir'
# PNG images
# Per-pixel labels are stored in a gray image
label_dir = './input/label_image_dir'
output_dir = './lmdb/'


inputs = glob.glob(color_dir + '/*.png')

color_lmdb_name = output_dir + '/color-lmdb'
if not os.path.isdir(color_lmdb_name):
    os.makedirs(color_lmdb_name)
color_in_db = lmdb.open(color_lmdb_name, map_size=int(1e12))

label_lmdb_name = output_dir + '/label-lmdb'
if not os.path.isdir(label_lmdb_name):
    os.makedirs(label_lmdb_name)
label_in_db = lmdb.open(label_lmdb_name, map_size=int(1e12))

num_images = 0
color_mean_color = np.zeros((3))


with color_in_db.begin(write=True) as color_in_txn:
    with label_in_db.begin(write=True) as label_in_txn:

        for in_idx, in_ in enumerate(inputs):
            # build the full paths to the color and label images
            img_name = os.path.splitext(os.path.basename(in_))[0]
            color_filename = os.path.join(color_dir, img_name + '.png')
            label_filename = os.path.join(label_dir, img_name + '.png')
            print(str(in_idx + 1) + ' / ' + str(len(inputs)))

            # load image
            im = np.array(Image.open(color_filename)) # or load whatever ndarray you need
            assert im.dtype == np.uint8            
            # RGB to BGR
            im = im[:,:,::-1]
            # in Channel x Height x Width order (switch from H x W x C)
            im = im.transpose((2,0,1))

            # compute mean color image
            for i in range(3):
                color_mean_color[i] += im[i,:,:].mean()
            num_images += 1

            #color_im_dat = caffe.io.array_to_datum(im)
            color_im_dat = caffe.proto.caffe_pb2.Datum()
            color_im_dat.channels, color_im_dat.height, color_im_dat.width = im.shape
            assert color_im_dat.height == img_height
            assert color_im_dat.width == img_width
            color_im_dat.data = im.tostring()
            color_in_txn.put('{:0>12d}'.format(in_idx), color_im_dat.SerializeToString())

            im = np.array(Image.open(label_filename)) # or load whatever ndarray you need
            assert im.dtype == np.uint8
            label_im_dat = caffe.proto.caffe_pb2.Datum()
            label_im_dat.channels = 1
            label_im_dat.height, label_im_dat.width = im.shape
            assert label_im_dat.height == img_height
            assert label_im_dat.width == img_width
            label_im_dat.data = im.tostring()
            label_in_txn.put('{:0>12d}'.format(in_idx), label_im_dat.SerializeToString())

label_in_db.close()
color_in_db.close()

color_mean_color /= num_images
np.savetxt(output_dir + '/{}.csv'.format('color-mean'), color_mean_color, delimiter=",", fmt='%.4f')
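
(A small sanity check that may help when debugging the script above, assuming the key format and output directory it uses: read the first entry back out of the color LMDB and confirm its shape.)

import caffe
import lmdb
import numpy as np

env = lmdb.open('./lmdb/color-lmdb', readonly=True)
with env.begin() as txn:
    raw = txn.get('{:0>12d}'.format(0))  # first key written above
    datum = caffe.proto.caffe_pb2.Datum()
    datum.ParseFromString(raw)
    # stored as C x H x W uint8 in BGR order
    im = np.fromstring(datum.data, dtype=np.uint8).reshape(
        datum.channels, datum.height, datum.width)
    print(im.shape)  # expect (3, 500, 500)
env.close()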

@fqnchina

pad: 100 in the conv1_1 layer is wrong.

@masakinakada

When I try to run deploy.prototxt, it takes a very large amount of memory and the process crashes on CPU.
Can anyone help me reduce the memory usage or otherwise solve this problem?

Thanks in advance.

@arasharchor

Hi guys,
How do I fine-tune on a different number of classes?
Thanks
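
(Not an authoritative answer, but the usual recipe with these nets is: set num_output of the score/upscore layers to your class count, rename those layers so Caffe does not try to copy the PASCAL-Context weights into them, and transfer the remaining weights from the released model. A rough pycaffe sketch of the weight transfer; all file and layer names here are hypothetical.)

import caffe

# pretrained net and a modified train net whose score layers were renamed/resized
base_net = caffe.Net('fcn32s-pascalcontext-deploy.prototxt',
                     'fcn32s-pascalcontext.caffemodel', caffe.TEST)
new_net = caffe.Net('my_train_val.prototxt', caffe.TRAIN)

# copy every parameter blob whose layer name and shape match; the renamed
# score layers are skipped and keep their fresh initialization
for name in new_net.params:
    if name not in base_net.params:
        continue
    for i, blob in enumerate(new_net.params[name]):
        if blob.data.shape == base_net.params[name][i].data.shape:
            blob.data[...] = base_net.params[name][i].data

new_net.save('my_init.caffemodel')

(Passing the released .caffemodel to caffe train via --weights does the same copy-by-name, so a script like this is only needed if you want explicit control over what gets transferred.)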

@laotao

laotao commented Mar 25, 2016

Can I fine-tune this model for high-resolution image classification? The input sizes of AlexNet/CaffeNet/GoogLeNet are too small for my application.

@CarrieHui

@weiliu89 Hi, I encountered conflicts when I merged PR #2016; it says "Automatic merge failed". What should I do next? Thank you in advance.

@twtygqyy

Hi, I tried to use a deconv layer with group and bilinear weights for upsampling instead of using the solver script, but could hardly reproduce the result. Does anybody know the reason?
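
(For reference, the solver script sets the deconvolution weights to a bilinear interpolation kernel through the surgery helper; a plain deconv layer needs an equivalent fill, and in the later reference nets those weights are also held fixed with lr_mult: 0, which is another place a setup can diverge. A minimal numpy sketch of the bilinear kernel, in the spirit of that helper:)

import numpy as np

def bilinear_kernel(size):
    # 2-D bilinear upsampling kernel of shape (size, size)
    factor = (size + 1) // 2
    if size % 2 == 1:
        center = factor - 1
    else:
        center = factor - 0.5
    og = np.ogrid[:size, :size]
    return ((1 - abs(og[0] - center) / float(factor)) *
            (1 - abs(og[1] - center) / float(factor)))

# example fill for an ungrouped 32x deconv layer (weight blob of shape
# (n_cl, n_cl, 64, 64)); 'upscore' is a hypothetical layer name:
# net.params['upscore'][0].data[range(n_cl), range(n_cl), :, :] = bilinear_kernel(64)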

@yjc04

yjc04 commented Apr 21, 2016

@acgtyrant
Hi, I am getting this error:
I0421 02:38:28.223543 24891 net.cpp:299] Memory required for data: 1277452160
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 597011289

I installed protobuf using the command given on the Google TensorFlow website:
$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/protobuf-3.0.0b2.post2-cp27-none-linux_x86_64.whl

However, once I uninstalled that and tried to compile protobuf from the Google protobuf GitHub source code, Python couldn't find google protobuf at all. Can you help me find a way forward here?

Thanks in advance

@tianzq

tianzq commented Apr 22, 2016

@yjc04
I had the same error. I just modified "kDefaultTotalBytesLimit" and "kDefaultTotalBytesWarningThreshold" in "/usr/include/google/protobuf/io/coded_stream.h". I didn't recompile or reinstall, and it works well.

@mjohn123

mjohn123 commented Feb 6, 2017

Hello all, has anyone tried to export the prediction image using C++ instead of Python? I am not familiar with Python code. Thanks, all.

@wgeppert

wgeppert commented Feb 12, 2017

Any C++ example for classification or segmentation would be great.

@sara-eb

sara-eb commented Mar 6, 2017

Hi,
I trained FCN-32s from scratch, but the output is all zero values (a black image). Could someone help explain the reason? Thanks
