Depth map occlusion filling and scene reconstruction using modified exemplar-based inpainting
16 March 2015
Abstract
RGB-D sensors are relatively inexpensive and commercially available off-the-shelf. However, owing to their low complexity, their depth maps suffer from several artifacts, such as holes, misalignment between the depth and color images, and a lack of sharp object boundaries. Depth maps generated by Kinect cameras also contain a significant number of missing pixels and strong noise, limiting their usability in many computer vision applications. In this paper, we present an efficient hole-filling and damaged-region restoration method that improves the quality of depth maps obtained with the Microsoft Kinect device. The proposed approach is based on modified exemplar-based inpainting and LPA-ICI filtering, exploiting the correlation between color and depth values in local image neighborhoods. As a result, object edges are sharpened and aligned with the objects in the color image. Several examples considered in this paper demonstrate the effectiveness of the proposed approach for removal of large holes as well as recovery of small regions on several test depth maps. We perform a comparative study and show that, statistically, the proposed algorithm delivers superior quality compared to existing algorithms.
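The core idea of color-guided, exemplar-based depth hole filling can be illustrated with a deliberately simplified sketch: for each missing depth pixel, a patch from the registered color image is matched against color patches centered at pixels with known depth, and the depth of the best color match is copied in. This is only a greedy toy version under stated assumptions (NaN marks holes, brute-force SSD matching, no fill-order priority or LPA-ICI filtering), not the authors' actual algorithm; the function name `fill_depth_holes` and its parameters are hypothetical.

```python
import numpy as np

def fill_depth_holes(depth, color, patch=1):
    """Toy color-guided exemplar fill (sketch, not the paper's method).

    depth : 2-D float array, np.nan marks missing pixels.
    color : (H, W, 3) guide image registered to the depth map.
    patch : half-width of the square patch used for matching.

    For every hole pixel, the surrounding color patch is compared (SSD)
    against color patches around every known-depth pixel, exploiting the
    color/depth correlation in local neighborhoods; the depth value of
    the best color match is copied into the hole.
    """
    d = depth.astype(float).copy()
    H, W = d.shape
    # Edge-pad the color image so patches near borders stay in bounds.
    cpad = np.pad(color.astype(float),
                  ((patch, patch), (patch, patch), (0, 0)), mode="edge")
    known = [(y, x) for y in range(H) for x in range(W)
             if not np.isnan(d[y, x])]
    size = 2 * patch + 1
    for y in range(H):
        for x in range(W):
            if not np.isnan(d[y, x]):
                continue
            tgt = cpad[y:y + size, x:x + size]
            best, best_cost = None, np.inf
            for sy, sx in known:  # brute-force search over source patches
                src = cpad[sy:sy + size, sx:sx + size]
                cost = float(np.sum((tgt - src) ** 2))
                if cost < best_cost:
                    best_cost, best = cost, (sy, sx)
            d[y, x] = d[best]
    return d
```

On a synthetic two-region scene (a red plane at depth 1 next to a blue plane at depth 5), holes poked into each region are refilled with that region's depth, because the color patches steer the match to the correct side of the boundary; this is the same correlation the full method exploits, minus the fill-order priorities and edge-adaptive filtering that make it robust on real Kinect data.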
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
V. V. Voronin, V. I. Marchuk, A. V. Fisunov, S. V. Tokareva, K. O. Egiazarian, "Depth map occlusion filling and scene reconstruction using modified exemplar-based inpainting," Proc. SPIE 9399, Image Processing: Algorithms and Systems XIII, 93990S (16 March 2015); https://doi.org/10.1117/12.2076506
Proceedings paper, 11 pages.

