Depth Reconstruction of Low-Texture Surfaces from a Single View
We propose a deep learning-based method for recovering depth maps and surface normals of low-texture surfaces from a single RGB image. Our approach uses an encoder network with multiple decoders that are trained jointly. It builds on the semantic segmentation network SegNet, with design modifications intended to speed up training. We demonstrate that despite significantly reducing the number of network parameters and the training time, our performance remains comparable to that of the original network. We also present a new dataset of depth maps and surface normals for textureless surfaces.
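The shared-encoder, multi-decoder idea can be sketched as below. This is a minimal illustration, not the authors' exact SegNet-based architecture: the layer sizes, strides, and channel counts are assumptions, and the real network would use SegNet-style pooling indices and many more layers.

```python
import torch
import torch.nn as nn

class MultiHeadDepthNet(nn.Module):
    """Sketch: one shared encoder feeding two jointly trained decoders,
    one for per-pixel depth and one for per-pixel surface normals."""

    def __init__(self):
        super().__init__()
        # Shared encoder: downsample the RGB input to a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )

        def decoder(out_channels):
            # Each decoder upsamples the shared features back to input resolution.
            return nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, out_channels, 4, stride=2, padding=1),
            )

        self.depth_head = decoder(1)   # 1 channel: depth map
        self.normal_head = decoder(3)  # 3 channels: normal vector per pixel

    def forward(self, x):
        z = self.encoder(x)
        return self.depth_head(z), self.normal_head(z)
```

Joint training then simply sums the per-head losses (e.g. an L1 depth loss plus a cosine or L1 normal loss) and backpropagates once, so the shared encoder learns features useful for both tasks.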
Dataset
The dataset contains around 12,000 synchronised RGB images, along with depth maps and surface normal maps, for various textureless surfaces, captured in real-world settings with a Microsoft Kinect v2 camera. The dataset is available for download.
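Because the images, depth maps, and normal maps are synchronised, consumers of the dataset typically need to pair the three modalities per frame. The sketch below shows one way to do that; the `rgb/`, `depth/`, and `normals/` directory names and the shared file stems are assumptions about the archive layout, not a documented structure.

```python
from pathlib import Path

def pair_samples(root):
    """Return (rgb, depth, normals) path triplets whose files share a stem.

    Assumes a hypothetical layout with rgb/, depth/, and normals/
    subdirectories holding PNG files named by frame index.
    """
    root = Path(root)
    rgb = {p.stem: p for p in (root / "rgb").glob("*.png")}
    depth = {p.stem: p for p in (root / "depth").glob("*.png")}
    normals = {p.stem: p for p in (root / "normals").glob("*.png")}
    # Keep only frames present in all three modalities, in stable order.
    common = sorted(rgb.keys() & depth.keys() & normals.keys())
    return [(rgb[s], depth[s], normals[s]) for s in common]
```

Matching by stem rather than by list position makes the pairing robust to a few missing or extra files in any one modality.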
How to Cite
If you use anything from this publication in your work, please cite it as follows:
@inbook{khandrec2021,
place={Kaiserslautern, Germany},
volume={SS 2021},
title={Depth Reconstruction of Low-Texture Surfaces from a Single View},
url={https://ags.cs.uni-kl.de/fileadmin/inf_ags/Project_Seminar/Proceedings_3DCV_SS2021.pdf},
booktitle={Seminar and Project 3D Computer Vision and Augmented Reality - Summer Semester 2021},
publisher={Department Augmented Vision},
author={Khan, Muhammad Saif Ullah and Afzal, Muhammad Zeshan},
year={2021},
pages={92--100}
}