Efficient multiview depth video coding using depth synthesis prediction
1 July 2011
Abstract
The view synthesis prediction (VSP) method exploits inter-view correlations in multiview video coding by generating an additional reference frame. This paper describes a multiview depth video coding scheme that incorporates depth view synthesis and additional prediction modes. In the proposed scheme, the reconstructed depth frame of a neighboring view is used to generate an additional reference depth image for the current viewpoint to be coded, using the depth image-based rendering technique. To obtain high-quality reference depth images, we apply depth pre-processing, depth image warping, and one of two hole-filling methods chosen according to the number of available reference views. After synthesizing the additional depth image, we encode the depth video with the proposed additional prediction modes, named VSP modes, which refer to the synthesized depth image. In particular, the VSP_SKIP mode refers to the co-located block of the synthesized frame without coding motion vectors or residual data, and it contributes most of the coding gains. Experimental results demonstrate that the proposed depth view synthesis method provides high-quality depth images for the current view and that the proposed VSP modes yield significant coding gains, especially on anchor frames.
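The warping and hole-filling steps described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a rectified horizontal camera setup, an 8-bit inverse-depth quantization with hypothetical near/far clipping planes, simple z-buffered forward warping, and row-wise background-preferred hole filling; all function names and parameters are illustrative.

```python
import numpy as np

def warp_depth(ref_depth, baseline, focal, z_near, z_far):
    """Forward-warp a reference depth map to a neighboring viewpoint.
    Disparity is derived from the 8-bit depth value via a linear
    inverse-depth model (an assumption, not the paper's exact setup)."""
    h, w = ref_depth.shape
    # Convert 8-bit depth values to metric depth (inverse-depth quantization).
    z = 1.0 / (ref_depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(focal * baseline / z).astype(int)
    warped = np.full((h, w), -1, dtype=int)  # -1 marks disocclusion holes
    for y in range(h):
        for x in range(w):
            xt = x - disparity[y, x]
            if 0 <= xt < w:
                # Z-buffer test: keep the sample closest to the camera
                # (larger 8-bit depth value = closer under this model).
                if warped[y, xt] < ref_depth[y, x]:
                    warped[y, xt] = ref_depth[y, x]
    return warped

def fill_holes(warped):
    """Fill holes with the nearer of the two valid horizontal neighbors'
    background value (smaller depth), since disocclusions expose background."""
    out = warped.copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            if out[y, x] < 0:
                left = next((out[y, i] for i in range(x - 1, -1, -1)
                             if out[y, i] >= 0), None)
                right = next((out[y, i] for i in range(x + 1, w)
                              if out[y, i] >= 0), None)
                cands = [v for v in (left, right) if v is not None]
                out[y, x] = min(cands) if cands else 0
    return out
```

The filled result would then serve as the extra reference frame that the VSP modes index, alongside the ordinary temporal references.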
© (2011) Society of Photo-Optical Instrumentation Engineers (SPIE)
Cheon Lee, Yo-Sung Ho, Byeongho Choi, "Efficient multiview depth video coding using depth synthesis prediction," Optical Engineering 50(7), 077004 (1 July 2011). https://doi.org/10.1117/1.3600575
Journal article, 15 pages.