https://github.com/danini/graph-cut-ransac The Graph-Cut RANSAC algorithm proposed in the paper: Daniel Barath and Jiri Matas, "Graph-Cut RANSAC", Conference on Computer Vision and Pattern Recognition (CVPR), 2018. The paper is available at http://openaccess.thecvf.com/content_cvpr_2018/papers/Barath_Graph-Cut_RANSAC_CVPR_2018_paper.pdf
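A minimal numpy sketch of the loop structure this method builds on, using 2D line fitting as a toy model. This is not the paper's algorithm: GC-RANSAC labels inliers by minimizing a graph-cut energy (data term + spatial coherence term) inside the local-optimization step, while here that step is a plain threshold plus least-squares refit, just to show where the graph cut would plug in.

```python
# Hypothetical sketch of RANSAC with a local-optimization step, the loop that
# GC-RANSAC extends. The real method replaces the naive re-thresholding below
# with a graph-cut labeling over a neighborhood graph of the points.
import numpy as np

def fit_line(pts):
    # Total-least-squares line through points: returns unit normal n and
    # offset d such that n @ p + d = 0 for points p on the line.
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]  # direction of least variance = line normal
    return n, -n @ centroid

def ransac_line(points, thresh=0.05, iters=1000, rng=np.random.default_rng(0)):
    best_n, best_d, best_inliers = None, None, np.zeros(len(points), bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), size=2, replace=False)]
        n, d = fit_line(sample)
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            # Local optimization on the consensus set; GC-RANSAC instead
            # solves a graph cut here before refitting the model.
            n, d = fit_line(points[inliers])
            inliers = np.abs(points @ n + d) < thresh
            best_n, best_d, best_inliers = n, d, inliers
    return best_n, best_d, best_inliers

rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, 100)
line_pts = np.c_[t, 0.5 * t + 0.2] + rng.normal(0, 0.01, (100, 2))
outliers = rng.uniform(-1, 1, (40, 2))
n, d, inl = ransac_line(np.vstack([line_pts, outliers]))
print("normal:", n, "offset:", d, "inliers:", inl.sum())
```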

https://github.com/AIBluefisher/GraphSfM Robust and Efficient Graph-based Structure from Motion https://aibluefisher.github.io/GraphSfM/ Our Structure from Motion approach, named Graph Structure from Motion, is aimed at large-scale 3D reconstruction. It also aims to exploit the available computing power and to make SfM easy to transfer to a distributed system. Our work is partially based on an early version of OpenMVG, while being more robust and efficient than state-of-the-art open-source Structure from Motion approaches (we rank 5th on the Tanks and Temples benchmark, the highest rank among open-source 3D reconstruction systems).
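A toy sketch of the divide-and-conquer idea this description points at: treat images as nodes of a view graph weighted by match counts, cut it into clusters small enough to reconstruct independently (and, in a distributed setting, on separate machines), then merge. The greedy weakest-edge split and all names below are illustrative assumptions, not GraphSfM's actual partitioning, which uses a more careful cut with cluster overlap for alignment.

```python
# Illustrative view-graph partitioning for cluster-wise SfM (hypothetical,
# not the GraphSfM implementation).
import networkx as nx

def partition_view_graph(matches, max_cluster_size=3):
    # matches: {(img_a, img_b): number_of_feature_matches}
    g = nx.Graph()
    for (a, b), w in matches.items():
        g.add_edge(a, b, weight=w)
    clusters, queue = [], [set(g.nodes)]
    while queue:
        nodes = queue.pop()
        if len(nodes) <= max_cluster_size:
            clusters.append(nodes)
            continue
        sub = g.subgraph(nodes).copy()
        if sub.number_of_edges() == 0:
            # Degenerate case: no matches left, treat nodes individually.
            queue.extend({n} for n in nodes)
            continue
        # Remove the weakest match edges until the subgraph splits in two.
        for a, b, _ in sorted(sub.edges(data="weight"), key=lambda e: e[2]):
            sub.remove_edge(a, b)
            comps = list(nx.connected_components(sub))
            if len(comps) > 1:
                queue.extend(comps)
                break
    return clusters

matches = {("i0", "i1"): 900, ("i1", "i2"): 850, ("i2", "i3"): 40,
           ("i3", "i4"): 700, ("i4", "i5"): 760, ("i0", "i2"): 500}
print(partition_view_graph(matches))  # e.g. [{'i3','i4','i5'}, {'i0','i1','i2'}]
```

Each cluster would then be reconstructed with a standard incremental SfM pipeline, and the partial models aligned into one global reconstruction.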

-- learning

https://github.com/val-iisc/3d-lmnet Repository for 3D-LMNet: Latent Embedding Matching for Accurate and Diverse 3D Point Cloud Reconstruction from a Single Image [BMVC 2018] https://val-iisc.github.io/3d-lmnet/ 3D-LMNet is a latent embedding matching approach for 3D point cloud reconstruction from a single image. To better incorporate the data prior and generate meaningful reconstructions, we first train a 3D point cloud auto-encoder and then learn a mapping from the 2D image to the corresponding learnt embedding. For a given image, multiple plausible 3D reconstructions may exist depending on the object view. To tackle this uncertainty in the reconstruction, we predict multiple reconstructions that are consistent with the input view by learning a probabilistic latent space using a view-specific 'diversity loss'. We show that learning a good latent space of 3D objects is essential for the task of single-view 3D reconstruction.
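A toy PyTorch sketch of the two-stage latent-matching recipe the summary describes: (1) a point-cloud auto-encoder learns a latent space of shapes, (2) an image encoder is trained to regress into that frozen space. The network sizes, the L2 latent loss, and the brute-force Chamfer distance are assumptions for illustration (the probabilistic latent space and diversity loss are omitted), not the repo's code.

```python
# Hypothetical two-stage training sketch in the spirit of 3D-LMNet.
import torch
import torch.nn as nn

def chamfer(a, b):
    # Symmetric Chamfer distance between point sets a: (B, N, 3), b: (B, M, 3).
    d = torch.cdist(a, b)  # (B, N, M) pairwise distances
    return d.min(2).values.mean() + d.min(1).values.mean()

class PointAE(nn.Module):
    def __init__(self, n_pts=256, z_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pts * 3, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_pts * 3))
        self.n_pts = n_pts
    def forward(self, pts):
        z = self.enc(pts.flatten(1))
        return z, self.dec(z).view(-1, self.n_pts, 3)

# Stage 1: train the auto-encoder so its latent space captures 3D shape.
ae = PointAE()
pts = torch.randn(8, 256, 3)  # stand-in for a batch of shapes
_, recon = ae(pts)
stage1_loss = chamfer(recon, pts)

# Stage 2: freeze the AE; train an image encoder to hit the learnt embedding.
img_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256),
                        nn.ReLU(), nn.Linear(256, 64))
imgs = torch.randn(8, 3, 64, 64)  # stand-in for rendered input views
with torch.no_grad():
    target_z, _ = ae(pts)
stage2_loss = ((img_enc(imgs) - target_z) ** 2).mean()
print(float(stage1_loss), float(stage2_loss))
```

At test time only the image encoder and the AE decoder are needed: image -> latent -> point cloud.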

https://github.com/fangchangma/sparse-to-dense.pytorch This repo implements the training and testing of deep regression neural networks for "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" by Fangchang Ma and Sertac Karaman at MIT. A video demonstration is available on YouTube.
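A hedged PyTorch sketch of the input convention this work is built around: the RGB image and a sparse depth map (zeros where no sample exists) are stacked into a 4-channel "RGBd" input to a depth-regression CNN. The tiny conv net and the masked L1 loss below are illustrative stand-ins, not the repo's ResNet-based model.

```python
# Illustrative sparse-to-dense input/loss setup (hypothetical network).
import torch
import torch.nn as nn

rgb = torch.rand(1, 3, 64, 64)
dense_gt = torch.rand(1, 1, 64, 64) * 10  # ground-truth depth, e.g. metres

# Simulate sparse measurements: keep ~2% of ground-truth depths, zero elsewhere.
mask = (torch.rand_like(dense_gt) < 0.02).float()
sparse_depth = dense_gt * mask

net = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
pred = net(torch.cat([rgb, sparse_depth], dim=1))  # 4-channel RGBd input

valid = dense_gt > 0  # supervise only pixels with valid ground truth
loss = (pred[valid] - dense_gt[valid]).abs().mean()
print("L1 loss:", float(loss))
```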

https://github.com/vcg-uvic/learned-correspondence-release Learning to Find Good Correspondences (CVPR 2018) This repository is a reference implementation for K. Yi*, E. Trulls*, Y. Ono, V. Lepetit, M. Salzmann, and P. Fua, "Learning to Find Good Correspondences", CVPR 2018 (* equal contributions). If you use this code in your research, please cite the paper.
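A rough PyTorch sketch of the setup in this paper: each putative match is a 4-vector (x1, y1, x2, y2), and a permutation-equivariant network scores every match as inlier or outlier. The context-normalization layer follows the paper's idea of normalizing features across the whole match set of an image pair; the layer widths and the plain BCE loss are assumptions for illustration.

```python
# Illustrative correspondence classifier with context normalization
# (hypothetical sizes, not the reference implementation).
import torch
import torch.nn as nn

class ContextNorm(nn.Module):
    # Normalize each feature channel across the N correspondences of one pair.
    def forward(self, x):  # x: (B, N, C)
        return (x - x.mean(1, keepdim=True)) / (x.std(1, keepdim=True) + 1e-6)

class MatchClassifier(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, width), ContextNorm(), nn.ReLU(),
            nn.Linear(width, width), ContextNorm(), nn.ReLU(),
            nn.Linear(width, 1),  # per-match inlier logit
        )
    def forward(self, corr):  # corr: (B, N, 4) putative matches
        return self.net(corr).squeeze(-1)  # (B, N) logits

corr = torch.randn(2, 100, 4)  # 100 putative matches per image pair
labels = (torch.rand(2, 100) < 0.3).float()  # stand-in inlier labels
logits = MatchClassifier()(corr)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
print("logits:", logits.shape, "loss:", float(loss))
```

Because every linear layer acts per match and ContextNorm only pools statistics, the output is equivariant to the ordering of the correspondences, which is what lets one network handle arbitrary match sets.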

https://github.com/phuang17/DeepMVS DeepMVS is a deep convolutional neural network which learns to estimate pixel-wise disparity maps from an arbitrary number of unordered images whose camera poses are already known or estimated.
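A much-simplified numpy sketch of the plane-sweep idea such methods rest on: for each candidate disparity, warp a neighbour image towards the reference and score the photometric agreement, giving a per-pixel cost volume over disparities. DeepMVS builds real plane-sweep volumes from full camera poses across many unordered views and aggregates them with a CNN; the rectified two-view, winner-take-all version below only illustrates the cost-volume construction.

```python
# Illustrative plane-sweep cost volume for a rectified pair (not DeepMVS itself).
import numpy as np

def plane_sweep_disparity(ref, nbr, max_disp=8):
    # ref, nbr: (H, W) grayscale images of a rectified pair.
    h, w = ref.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # Hypothesis: pixel (y, x) in ref appears at (y, x - d) in nbr,
        # so compare ref shifted against nbr photometrically.
        cost[d, :, d:] = np.abs(ref[:, d:] - nbr[:, : w - d])
    return cost.argmin(axis=0)  # winner-take-all disparity per pixel

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
true_disp = 3
nbr = np.roll(ref, -true_disp, axis=1)  # neighbour = ref shifted by 3 px
disp = plane_sweep_disparity(ref, nbr)
print("median disparity:", np.median(disp[:, true_disp:]))  # -> 3.0
```

A learned model replaces the argmin with a network that reasons over the whole volume, which is what makes the estimates robust to ambiguous or textureless matches.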
