Last active January 26, 2018 15:55
  • Save shelhamer/80667189b218ad570e82 to your computer and use it in GitHub Desktop.
FCN-32s Fully Convolutional Semantic Segmentation on PASCAL-Context
mhsung commented Jan 18, 2016

I would like to share my script for creating a custom dataset in LMDB format. I would appreciate it if you could let me know if you find any bugs in this code.


import caffe
import glob
import lmdb
import numpy as np
from PIL import Image
import os
import sys

# Variables
img_width = 500
img_height = 500

# Paths
# color images (PNG)
color_dir = './input/color_image_dir'
# per-pixel labels, stored as grayscale PNG images
label_dir = './input/label_image_dir'
output_dir = './lmdb/'

inputs = glob.glob(color_dir + '/*.png')

color_lmdb_name = os.path.join(output_dir, 'color-lmdb')
if not os.path.isdir(color_lmdb_name):
    os.makedirs(color_lmdb_name)
color_in_db =, map_size=int(1e12))

label_lmdb_name = os.path.join(output_dir, 'label-lmdb')
if not os.path.isdir(label_lmdb_name):
    os.makedirs(label_lmdb_name)
label_in_db =, map_size=int(1e12))

num_images = 0
color_mean_color = np.zeros(3)

with color_in_db.begin(write=True) as color_in_txn:
    with label_in_db.begin(write=True) as label_in_txn:

        for in_idx, in_ in enumerate(inputs):
            img_name = os.path.splitext(os.path.basename(in_))[0]
            color_filename = os.path.join(color_dir, img_name + '.png')
            label_filename = os.path.join(label_dir, img_name + '.png')
            print(str(in_idx + 1) + ' / ' + str(len(inputs)))

            # load the color image (or load whatever ndarray you need)
            im = np.array(
            assert im.dtype == np.uint8
            # RGB to BGR
            im = im[:,:,::-1]
            # in Channel x Height x Width order (switch from H x W x C)
            im = im.transpose((2,0,1))

            # compute mean color image
            for i in range(3):
                color_mean_color[i] += im[i,:,:].mean()
            num_images += 1

            # alternatively: color_im_dat =
            color_im_dat = caffe.proto.caffe_pb2.Datum()
            color_im_dat.channels, color_im_dat.height, color_im_dat.width = im.shape
            assert color_im_dat.height == img_height
            assert color_im_dat.width == img_width
   = im.tobytes()
            # lmdb keys must be bytes
            color_in_txn.put('{:0>12d}'.format(in_idx).encode('ascii'),
                             color_im_dat.SerializeToString())

            # load the grayscale label image
            im = np.array(
            assert im.dtype == np.uint8
            label_im_dat = caffe.proto.caffe_pb2.Datum()
            label_im_dat.channels = 1
            label_im_dat.height, label_im_dat.width = im.shape
            assert label_im_dat.height == img_height
            assert label_im_dat.width == img_width
   = im.tobytes()
            label_in_txn.put('{:0>12d}'.format(in_idx).encode('ascii'),
                             label_im_dat.SerializeToString())


color_mean_color /= num_images
np.savetxt(output_dir + '/{}.csv'.format('color-mean'), color_mean_color, delimiter=",", fmt='%.4f')
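As a sanity check on the mean computation above: accumulating `im[i, :, :].mean()` per image and dividing by the image count at the end matches the direct mean over the whole set, as long as all images share the same size. A small NumPy sketch (the random arrays below are stand-ins for real images):

```python
import numpy as np

# Five fake 3-channel "images" in C x H x W order, like the script above.
np.random.seed(0)
images = [np.random.randint(0, 256, (3, 4, 4)).astype(np.uint8) for _ in range(5)]

# Incremental accumulation, as in the script.
mean_color = np.zeros(3)
for im in images:
    for i in range(3):
        mean_color[i] += im[i, :, :].mean()
mean_color /= len(images)

# Direct per-channel mean over the stacked set gives the same result.
direct = np.stack(images).mean(axis=(0, 2, 3))
assert np.allclose(mean_color, direct)
```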

The pad: 100 setting in the conv1_1 layer is wrong.

When I try to run deploy.prototxt, it uses a very large amount of memory, and the process crashes on CPU.
Can anyone help me reduce the memory usage or solve this problem?

Thanks in advance.
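For a rough sense of where the memory goes: each float32 blob costs num x channels x height x width x 4 bytes, and the pad: 100 on the first convolution inflates the early feature maps. A hypothetical back-of-the-envelope sketch (the layer shape below assumes the standard VGG-16-based FCN conv1_1 with kernel 3, stride 1, and a 500x500 input):

```python
def blob_bytes(n, c, h, w):
    """Bytes needed for one float32 blob of shape (n, c, h, w)."""
    return n * c * h * w * 4

# 1 x 3 x 500 x 500 input blob: 3 MB.
data = blob_bytes(1, 3, 500, 500)

# conv1_1 output with kernel 3, pad 100, stride 1:
# output side = 500 + 2*100 - 3 + 1 = 698, with 64 channels.
conv1_1 = blob_bytes(1, 64, 698, 698)

print(data)     # 3000000
print(conv1_1)  # 124724224, about 119 MB for a single layer's output
```

Summing blobs like this across the whole network is how Caffe arrives at a multi-gigabyte "Memory required for data" figure; shrinking the input size is the most direct way to reduce it.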

Hi guys,
how do I fine-tune on a different number of classes?
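A common recipe (sketched here, not part of this gist): copy the net prototxt, change num_output of the scoring layers to your class count, and rename those layers so Caffe reinitializes their weights instead of copying the 21-class PASCAL weights from the pretrained model. The layer names and the class count below are hypothetical, following the usual FCN naming:

```
layer {
  name: "score_fr_mydata"   # renamed so pretrained 21-class weights are not copied
  type: "Convolution"
  bottom: "fc7"
  top: "score_fr"
  convolution_param {
    num_output: 5           # your number of classes (hypothetical value)
    kernel_size: 1
  }
}
layer {
  name: "upscore_mydata"    # renamed for the same reason
  type: "Deconvolution"
  bottom: "score_fr"
  top: "upscore"
  convolution_param {
    num_output: 5           # must match the class count above
    kernel_size: 64
    stride: 32
    bias_term: false
  }
}
```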

laotao commented Mar 25, 2016

Can I fine-tune this model for high-resolution image classification? The input sizes of AlexNet/CaffeNet/GoogLeNet are too small for my application.

@weiliu89 Hi, I ran into conflicts when I merged PR #2016; it says "Automatic merge failed". What should I do next? Thank you in advance.

Hi, I tried to use a deconv layer with group and bilinear weights for upsampling instead of using the solver script, but I could hardly reproduce the result. Does anybody know the reason?
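For reference, the bilinear interpolation weights that FCN-style nets use to initialize their Deconvolution (upsampling) layers can be generated like this. This is a NumPy sketch of the usual net-surgery-style filler, not code from this gist:

```python
import numpy as np

def bilinear_kernel(size):
    """2D bilinear interpolation kernel of shape (size, size),
    as used to initialize FCN upsampling (Deconvolution) weights."""
    factor = (size + 1) // 2
    if size % 2 == 1:
        center = factor - 1
    else:
        center = factor - 0.5
    og = np.ogrid[:size, :size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

k = bilinear_kernel(4)  # e.g. for a stride-2 upsampling layer
print(k)
# Each row/column tapers as [0.25, 0.75, 0.75, 0.25].
```

A group-wise deconvolution with these weights held fixed (lr_mult: 0) performs exact channel-wise bilinear upsampling; if the weights are left trainable or initialized differently, results can diverge from the reference model.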

yjc04 commented Apr 21, 2016

Hi, I am getting this error:
I0421 02:38:28.223543 24891 net.cpp:299] Memory required for data: 1277452160
[libprotobuf WARNING google/protobuf/io/] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/] The total number of bytes read was 597011289

I installed protobuf using the command given on the Google TensorFlow website:
$ pip install --upgrade

However, once I uninstalled that and tried to compile the GitHub source code of Google protobuf, Python couldn't find google.protobuf at all. Can you help me find a way forward here?

Thanks in advance

tianzq commented Apr 22, 2016

I had the same error. I just modified "kDefaultTotalBytesLimit" and "kDefaultTotalBytesWarningThreshold" in "/usr/include/google/protobuf/io/coded_stream.h". I didn't recompile or reinstall, and it works well.

mjohn123 commented Feb 6, 2017

Hello all, has anyone tried to export the prediction image using C++ instead of Python? I am not familiar with Python. Thanks, all.

wgeppert commented Feb 12, 2017

Any C++ example for classification or segmentation would be great.

sara-eb commented Mar 6, 2017

I trained FCN-32s from scratch, but the output is all zero values (a black image). Could someone help me understand the reason? Thanks
