Shape2.5D: A Dataset of Texture-less Surfaces for Depth and Normals Estimation
Reconstructing texture-less surfaces poses unique challenges in computer vision, primarily due to the lack of specialized datasets that cater to the nuanced needs of depth and normals estimation in the absence of textural information. We introduce "Shape2.5D," a novel, large-scale dataset designed to address this gap. Comprising 364k frames spanning 2635 3D models and 48 unique objects, our dataset provides depth and surface normal maps for texture-less object reconstruction. The proposed dataset includes synthetic images rendered with 3D modeling software to simulate various lighting conditions and viewing angles. It also includes a real-world subset comprising 4672 frames captured with a depth camera. Our comprehensive benchmarks, performed using a modified encoder-decoder network, showcase the dataset's capability to support the development of algorithms that robustly estimate depth and normals from RGB images. Our open-source data generation pipeline allows the dataset to be extended and adapted for future research.
TL;DR
- For depth and normals estimation on texture-less surfaces
- 302k synthetic frames for 35 3D models
- 62k more synthetic frames for 2600 3D models of 13 common ShapeNet objects
- 4672 real-world frames for 6 clothing and household items
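Because the dataset pairs each RGB frame with both a depth map and a surface normal map, one useful sanity check is deriving approximate normals from depth via finite differences and comparing them against the ground truth. Below is a minimal pure-Python sketch of that geometric relationship; the focal-scale parameters `fx` and `fy` and the toy 4x4 depth grids are placeholders, not values taken from the dataset.

```python
import math

def normals_from_depth(depth, fx=1.0, fy=1.0):
    """Estimate per-pixel unit surface normals from a depth map using
    central finite differences. This is a common approximation; the
    dataset itself ships ground-truth normal maps for supervision."""
    h, w = len(depth), len(depth[0])
    # default: normals pointing toward the camera at border pixels
    normals = [[(0.0, 0.0, 1.0)] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # depth gradients along the image axes
            dzdx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
            dzdy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
            # the normal is perpendicular to the local tangent plane
            nx, ny, nz = -dzdx * fx, -dzdy * fy, 1.0
            norm = math.sqrt(nx * nx + ny * ny + nz * nz)
            normals[y][x] = (nx / norm, ny / norm, nz / norm)
    return normals

# Toy example: a flat, fronto-parallel depth plane, whose interior
# normals should point straight at the camera.
flat = [[2.0] * 4 for _ in range(4)]
n = normals_from_depth(flat)
```

On texture-less surfaces, depth-derived normals like these are often noisy where depth sensors fail, which is one motivation for predicting normals directly from RGB as benchmarked in the paper.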
How to Cite
If you find this dataset useful, please cite it in your work:
@article{khan2024shape25d,
  title={Shape2.5D: A Dataset of Texture-less Surfaces for Depth and Normals Estimation},
  author={Khan, Muhammad Saif Ullah and Afzal, Muhammad Zeshan and Stricker, Didier},
  journal={arXiv preprint arXiv:2406.14370},
  year={2024}
}
Acknowledgements
This dataset was created by the first author as part of his Master's thesis at the German Research Center for Artificial Intelligence (DFKI). We thank DFKI for providing the resources and support for this work.