High accuracy hole filling for Kinect depth maps
31 October 2014
Abstract
Hole filling of depth maps is a core technology of Kinect-based visual systems. In this paper, we propose a hole filling algorithm for Kinect depth maps based on separate repair of the foreground and background. The proposed algorithm consists of two stages. First, a fast pre-processing of the holes in the Kinect depth map is performed: the background holes are filled using a deepest depth image, constructed by combining the spatio-temporal information of the pixels in the Kinect depth map with the corresponding color information in the Kinect color image. The second stage enhances the pre-processed depth maps. We propose a depth enhancement algorithm based on joint geometry and color information. Since geometry information is more robust than color, we correct the depth by an affine transform prior to utilizing the color cues. We then determine the filter parameters adaptively from the local features of the color image, which suppresses texture copying and preserves fine structures. Since L1 norm optimization is more robust to data outliers than L2 norm optimization, we take the filtered value to be the solution of an L1 norm optimization. Experimental results show that the proposed algorithm preserves the intact foreground depth, improves the accuracy of depth at object edges, and eliminates the flashing phenomenon of depth at object edges. In addition, the proposed algorithm can effectively fill the large depth-map holes caused by optical reflection.
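The L1-optimal filtered value mentioned above has a well-known closed form: the minimizer of a weighted sum of absolute deviations, Σᵢ wᵢ·|x − dᵢ|, is a weighted median of the neighboring depths. The sketch below illustrates this idea in a joint (color-guided) filtering setting; it is a minimal illustration of the principle, not the paper's implementation, and the function names, the Gaussian color weighting, and the `sigma_c` parameter are illustrative assumptions.

```python
import numpy as np

def weighted_median(values, weights):
    """Return a weighted median of `values`: the value minimizing
    sum_i w_i * |x - v_i|, i.e. the L1-optimal estimate (robust to
    outliers, unlike the weighted mean, which is L2-optimal)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    # first sorted index where cumulative weight reaches half the total
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return v[idx]

def fill_pixel(depth_patch, color_patch, center_color, sigma_c=10.0):
    """Illustrative hole-pixel estimate from a local patch: weight each
    valid neighbour by color similarity to the hole pixel (joint/cross
    filtering idea), then take the weighted median rather than a
    weighted average.  `sigma_c` is an assumed color-range parameter."""
    valid = depth_patch > 0                 # Kinect marks holes as depth 0
    if not valid.any():
        return 0.0
    d = depth_patch[valid].astype(float)    # (n,) valid depths
    dc = color_patch[valid].astype(float) - center_color   # (n, 3)
    w = np.exp(-np.sum(dc ** 2, axis=-1) / (2.0 * sigma_c ** 2))
    return weighted_median(d, w)
```

With equal weights, a patch of depths `[5, 5, 100]` yields 5 rather than the mean 36.7, which is why the L1 solution does not smear outlier depths across object edges.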
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jianxin Wang, Ping An, Yifan Zuo, Zhixiang You, Zhaoyang Zhang, "High accuracy hole filling for Kinect depth maps", Proc. SPIE 9273, Optoelectronic Imaging and Multimedia Technology III, 92732L (31 October 2014); https://doi.org/10.1117/12.2071437
Proceedings, 17 pages.

