Depth map upsampling using joint edge-guided convolutional neural network for virtual view synthesizing
Abstract
In the texture-plus-depth format of three-dimensional visual data, both texture images and depth maps are required to synthesize a desired view via depth-image-based rendering. However, captured or estimated depth maps typically have lower resolution than their corresponding texture images. We introduce a joint edge-guided convolutional neural network that upsamples a depth map while targeting the quality of the synthesized view. The network takes the low-resolution depth map as input, uses a joint edge feature extracted from the depth map and the registered texture image as a reference, and produces a high-resolution depth map. We further impose local constraints that preserve smooth regions and sharp edges, improving the quality of both the depth map and the synthesized view. Finally, a global looping optimization guided by virtual-view quality is performed during the recovery process. We train our model on pairs of depth maps and texture images and test it on other depth maps and video sequences. The experimental results demonstrate that our scheme outperforms existing methods in the quality of both the depth maps and the synthesized views.
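The joint edge feature described in the abstract can be sketched in simplified numpy form. This is an illustrative assumption, not the paper's exact design: it stands in Sobel gradient magnitudes for the learned edge features, nearest-neighbour replication for the network's upsampling, and a min-fusion for the joint guidance, the idea being that an edge is trusted only where the depth map and the registered texture image agree one exists.

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel filters (edge-replicated border)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = p[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def joint_edge_feature(lr_depth, texture, scale=2):
    """Upsample the LR depth map (nearest-neighbour stand-in), then fuse its
    edge map with the texture edge map into a joint guidance feature.
    Taking the pixelwise minimum keeps an edge only where depth and texture
    agree, suppressing texture-copying artefacts in smooth depth regions."""
    hr_depth = np.kron(lr_depth, np.ones((scale, scale)))
    e_depth = sobel_edges(hr_depth)
    e_tex = sobel_edges(texture)
    joint = np.minimum(e_depth / (e_depth.max() + 1e-8),
                       e_tex / (e_tex.max() + 1e-8))
    return hr_depth, joint
```

In the paper's actual pipeline this guidance feature would be fed to the CNN alongside the low-resolution depth input; the sketch only illustrates why a registered texture image helps locate depth discontinuities at the target resolution.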
© 2017 SPIE and IS&T
Yan Dong, Chunyu Lin, Yao Zhao, and Chao Yao, "Depth map upsampling using joint edge-guided convolutional neural network for virtual view synthesizing," Journal of Electronic Imaging 26(4), 043004 (2017). https://doi.org/10.1117/1.JEI.26.4.043004
Received: 24 January 2017; Accepted: 19 June 2017; Published: 11 July 2017
Journal article, 9 pages