# README Milestone 2

## Input data

The RoboSherlock pipeline consumes data from the `.bag` file you uploaded on Trello, so `rosbag play --loop` it before running the demos.

## What was implemented

Mapping the 2D transformation obtained from the Procrustes analysis into 3D space is done by solving a Perspective-n-Point problem between the CAD model's 3D vertices and their projections onto the plane (transformed on it). This is repeated for each pose hypothesis of each segment. Then, for each pose estimate, the shape's silhouette is extracted and its chamfer distance to the segmented contour is calculated. These distances are used to build a ranking table for each segment. A whole table is rejected if all of its hypotheses have too large a distance; otherwise, the few closest hypotheses are kept for further processing. I've posted a video of how it works on Gitter. If you want to try it yourself, check out commit f2a0421b at https://github.com/klokik/robosherlock.git and launch `transparent_segmentation.launch`.
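As an illustration, here is a minimal sketch of those two steps using OpenCV. The function names, the calibration inputs, and the edge-map convention are my assumptions for the sketch, not the actual annotator code:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Lift one 2D pose hypothesis to 3D: solve PnP between the CAD vertices
// (3D) and their plane-projected, Procrustes-transformed image locations.
bool liftPoseHypothesis(const std::vector<cv::Point3f> &cadVertices,
                        const std::vector<cv::Point2f> &transformedImagePts,
                        const cv::Mat &cameraMatrix, const cv::Mat &distCoeffs,
                        cv::Mat &rvec, cv::Mat &tvec) {
  return cv::solvePnP(cadVertices, transformedImagePts, cameraMatrix,
                      distCoeffs, rvec, tvec);
}

// Rank a hypothesis: mean chamfer distance of the rendered silhouette
// contour to the segment contour, via a distance transform of the edge map.
double chamferDistance(const std::vector<cv::Point> &silhouette,
                       const cv::Mat &segmentEdges /* CV_8U, edge px != 0 */) {
  cv::Mat inverted, dist;
  cv::bitwise_not(segmentEdges, inverted);
  cv::distanceTransform(inverted, dist, cv::DIST_L2, 3);  // CV_32F result
  double sum = 0;
  for (const auto &p : silhouette) sum += dist.at<float>(p);
  return silhouette.empty() ? 0.0 : sum / silhouette.size();
}
```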

For each segment, a few probable poses are visualised in the PCL visualiser (the lighter the colour of the shape, the higher the rank of the pose), and on the RGB visualisation one can see a histogram of the table ranks (red items are rejected, green ones are processed further). I still have to add non-maximum suppression here to get rid of near-duplicate poses of the same object.
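A hedged sketch of what that suppression could look like; the `PoseHypothesis` struct and the translation threshold are illustrative assumptions:

```cpp
#include <Eigen/Geometry>
#include <algorithm>
#include <vector>

struct PoseHypothesis {
  Eigen::Affine3f pose;
  double chamferDistance;  // lower is better
};

// Keep the best-ranked hypotheses, dropping any whose translation falls
// within `minDist` metres of an already accepted one.
std::vector<PoseHypothesis> suppressClosePoses(std::vector<PoseHypothesis> hyps,
                                               float minDist = 0.02f) {
  std::sort(hyps.begin(), hyps.end(), [](const auto &a, const auto &b) {
    return a.chamferDistance < b.chamferDistance;
  });
  std::vector<PoseHypothesis> kept;
  for (const auto &h : hyps) {
    bool tooClose = std::any_of(kept.begin(), kept.end(), [&](const auto &k) {
      return (k.pose.translation() - h.pose.translation()).norm() < minDist;
    });
    if (!tooClose) kept.push_back(h);
  }
  return kept;
}
```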

Now that we have initial pose estimates, we have to refine them. Canny edges are extracted within each segment's area to obtain image-space surface edges. The 3D surface edges are created manually for each object for now (I have a few ideas for generating them automatically; currently they are loaded from a separate PLY file). It looks like this edge model. A custom ICP implementation is used to match the image surface edges to the 3D edges: we solve an error-minimisation problem, updating the test point cloud at each step, since we optimise the error function with respect to the object's pose. The paper suggests using the Levenberg-Marquardt algorithm, but both gradient descent and Gauss-Newton seem to work just fine.
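For reference, the Gauss-Newton loop of such a pose optimisation could look like the sketch below; the 6-vector pose parameterisation and the numeric Jacobian are my simplifying assumptions (analytic derivatives would also work):

```cpp
#include <Eigen/Dense>
#include <functional>

using Vec6 = Eigen::Matrix<double, 6, 1>;  // axis-angle rotation + translation

// `residual` stacks one error term per matched edge point for a given pose.
Vec6 gaussNewton(const std::function<Eigen::VectorXd(const Vec6 &)> &residual,
                 Vec6 params, int maxIters = 30, double h = 1e-6) {
  for (int it = 0; it < maxIters; ++it) {
    Eigen::VectorXd r = residual(params);
    Eigen::MatrixXd J(r.size(), 6);  // numeric Jacobian, column by column
    for (int j = 0; j < 6; ++j) {
      Vec6 p = params;
      p(j) += h;
      J.col(j) = (residual(p) - r) / h;
    }
    // Normal equations: (J^T J) dp = -J^T r.
    Vec6 dp = (J.transpose() * J).ldlt().solve(-J.transpose() * r);
    params += dp;
    if (dp.norm() < 1e-8) break;  // converged
  }
  return params;
}
```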

The implementation in the RoboSherlock annotator does not work properly yet, but I have created a separate demo app for it at https://github.com/klokik/PlaygroundGSoC.git, branch ipc_stuff (should've been icp, not ipc, lol): `cmake . && make && ./icp2d3d`. It loads a mesh from the PLY file, applies a random transformation, and runs the optimisation procedure, fitting the mesh onto a rectangle template. The Jacobian calculation might be slightly optimised by using a distance-transform lookup instead of a KdTree; I will do that once everything else works properly.
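To illustrate the distance-transform idea: build the distance map of the image edges once per frame, and each residual evaluation becomes a constant-time pixel lookup instead of an O(log n) KdTree query. The `poseFromParams` helper and the pinhole projection below are assumptions for the sketch (bounds checking omitted):

```cpp
#include <Eigen/Geometry>
#include <opencv2/imgproc.hpp>
#include <vector>

using Vec6 = Eigen::Matrix<double, 6, 1>;

// Precompute once per frame: distance of every pixel to the nearest edge.
cv::Mat buildEdgeDistanceMap(const cv::Mat &edges /* CV_8U, edge px != 0 */) {
  cv::Mat inverted, dist;
  cv::bitwise_not(edges, inverted);
  cv::distanceTransform(inverted, dist, cv::DIST_L2, cv::DIST_MASK_PRECISE);
  return dist;  // CV_32F
}

// Assumed pose parameterisation: axis-angle rotation, then translation.
Eigen::Affine3d poseFromParams(const Vec6 &p) {
  Eigen::Vector3d w = p.head<3>();
  double angle = w.norm();
  Eigen::Vector3d axis = angle > 1e-12 ? Eigen::Vector3d(w / angle)
                                       : Eigen::Vector3d::UnitX();
  return Eigen::Translation3d(p.tail<3>()) * Eigen::AngleAxisd(angle, axis);
}

// One residual per sampled 3D edge point: project it with the current pose
// and read the nearest-edge distance from the map (O(1) per point).
Eigen::VectorXd edgeResidual(const Vec6 &params,
                             const std::vector<Eigen::Vector3d> &edgePts,
                             const cv::Mat &distMap,
                             const Eigen::Matrix3d &K) {
  Eigen::VectorXd r(edgePts.size());
  Eigen::Affine3d T = poseFromParams(params);
  for (int i = 0; i < static_cast<int>(edgePts.size()); ++i) {
    Eigen::Vector3d c = K * (T * edgePts[i]);
    r(i) = distMap.at<float>(cv::Point(cvRound(c.x() / c.z()),
                                       cvRound(c.y() / c.z())));
  }
  return r;
}
```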

Then a support-plane assumption is applied to put all the meshes onto the table: the object is rotated by the angle between its up direction and the plane normal, and then shifted onto the table. After that we run another ICP pass with the constraint that the object stays on the table (this is not ready yet).
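A minimal sketch of that snap, assuming Eigen, a plane given as a unit normal `n` with offset `d` (points on the plane satisfy `n.dot(x) == d`), and the model's +Z axis as "up":

```cpp
#include <Eigen/Geometry>
#include <algorithm>
#include <limits>
#include <vector>

Eigen::Affine3f snapToPlane(const Eigen::Affine3f &pose,
                            const std::vector<Eigen::Vector3f> &vertices,
                            const Eigen::Vector3f &n /* unit plane normal */,
                            float d /* plane offset: n.dot(x) == d */) {
  // Rotate the object (about its own origin) so its up axis matches the
  // plane normal.
  Eigen::Vector3f up = pose.rotation() * Eigen::Vector3f::UnitZ();
  Eigen::Affine3f aligned = pose;
  aligned.linear() =
      Eigen::Quaternionf::FromTwoVectors(up, n).toRotationMatrix() *
      pose.rotation();
  // Shift along the normal so the lowest vertex sits on the table.
  float minHeight = std::numeric_limits<float>::max();
  for (const auto &v : vertices)
    minHeight = std::min(minHeight, n.dot(aligned * v) - d);
  aligned.pretranslate(-minHeight * n);
  return aligned;
}
```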

A demonstration is on branch ipc2d3d at https://github.com/klokik/robosherlock.git; compile and run the same `transparent_segmentation.launch` :) Now on the RGB visualiser one can see the surface edges used for the 2D-3D ICP matching and the matched pose (in blue; the matcher still does not work properly), and that all the objects stand upright on the table. Their positions are slightly off if the initial estimate was ambiguous; the paper suggests solving another minimisation problem to address this.
