-
7* https://en.wikipedia.org/wiki/List_of_datasets_for_machine_learning_research
-
https://github.com/deeplearningais/curfil/wiki/Training-and-Prediction-with-the-NYU-Depth-v2-Dataset
-
http://rgbd.cs.princeton.edu/ SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite
-
http://rgbd.cs.princeton.edu/challenge.html SUN RGB-D 3D Object Detection Challenge
-
http://lsun.cs.princeton.edu/2016/ includes ROOM LAYOUT ESTIMATION
-
6* http://robotvault.bitbucket.org/scenenet-rgbd.html
SceneNet RGB-D: 5M Photorealistic Images of Synthetic Indoor Trajectories with Ground Truth (indoor synthetic dataset, RGB-D)
John McCormac, Ankur Handa, Stefan Leutenegger, Andrew J. Davison
We introduce SceneNet RGB-D, expanding the previous work of SceneNet to enable large-scale photorealistic rendering of indoor scene trajectories. It provides pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection, and also for geometric computer vision problems such as optical flow, depth estimation, camera pose estimation, and 3D reconstruction. Random sampling permits virtually unlimited scene configurations, and here we provide a set of 5M rendered RGB-D images from over 15K trajectories in synthetic layouts with random but physically simulated object poses. Each layout also has random lighting, camera trajectories, and textures. The scale of this dataset is well suited for pre-training data-driven computer vision techniques from scratch with RGB-D inputs, which has previously been limited by the relatively small labelled datasets NYUv2 and SUN RGB-D. It also provides a basis for investigating 3D scene labelling tasks by providing perfect camera poses and depth data as a proxy for a SLAM system.
Physical simulation for cluttered artificial 3D-scene generation.
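The SceneNet abstract above notes that the dataset ships perfect depth maps and camera poses for use in 3D reconstruction and SLAM-style tasks. The standard first step with such RGB-D data is back-projecting a depth map into a 3D point cloud through a pinhole camera model; a minimal sketch (the intrinsics `fx, fy, cx, cy` and the toy depth values are made-up placeholders, not those of SceneNet or any other dataset listed here):

```python
# Hedged sketch: depth map -> point cloud via the pinhole camera model,
# the usual way RGB-D frames are lifted to 3D. All numbers are illustrative.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert an (H, W) depth map in metres to an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy  # and        Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a 4x4 depth map of a flat plane 2 m in front of the camera.
pts = depth_to_points(np.full((4, 4), 2.0), fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3); the pixel at the principal point maps to (0, 0, 2)
```

With ground-truth poses, per-frame clouds like this can be transformed into a common world frame to evaluate reconstruction or scene-labelling pipelines without running a real SLAM front end.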
-
UnrealStereo: A Synthetic Dataset for Analyzing Stereo Vision
-
http://www.cs.nyu.edu/~deigen/deigen-thesis.pdf (2015) Predicting Images using Convolutional Networks: Visual Scene Understanding with Pixel Maps
-
http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf Indoor Segmentation and Support Inference from RGBD Images. Nathan Silberman, Derek Hoiem, Pushmeet Kohli, Rob Fergus
-
http://www.vision.caltech.edu/html-files/EE148-2005/uploads/KoeseckaZhang02Video.pdf (2002) Video Compass. Jana Košecká and Wei Zhang
-
http://cs.nyu.edu/~silberman/projects/indoor_scene_seg_sup.html