Adaptive filtering for cross-view prediction in multi-view video coding
29 January 2007
Abstract
We consider the problem of coding multi-view video that exhibits mismatches in frames from different views. Such mismatches can be caused by heterogeneous cameras and/or different shooting positions of the cameras. In particular, we consider focus mismatches across views, i.e., situations in which different portions of a video frame undergo different blurriness/sharpness changes with respect to the corresponding areas in frames from the other views. We propose an adaptive filtering approach for cross-view prediction in multi-view video coding. The disparity fields are exploited as an estimate of scene depth. An expectation-maximization (EM) algorithm is applied to classify the disparity vectors into groups. Based on the classification result, a video frame is partitioned into regions with different scene-depth levels. Finally, for each scene-depth level, a two-dimensional filter is designed to minimize the average residual energy of cross-view prediction over all blocks in the class. The resulting filters are applied to the reference frames to generate better matches for cross-view prediction. Simulation results show that, when encoding across views, the proposed method achieves up to 0.8 dB gain over standard H.264 video coding.
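The two core steps described above — EM classification of disparity vectors into depth levels, and per-class filter design minimizing prediction residual energy — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a 1-D Gaussian mixture over disparity magnitudes (the paper operates on disparity fields of coded blocks) and solves the filter design as an ordinary least-squares problem over flattened reference patches; the function names `em_depth_classes` and `design_filter` are hypothetical.

```python
import numpy as np

def em_depth_classes(x, k=2, iters=50):
    """Classify 1-D disparity magnitudes into k depth levels with EM on a
    Gaussian mixture (a simplified stand-in for the classification step).
    Returns (hard labels, component means)."""
    x = np.asarray(x, dtype=float)
    mu = np.linspace(x.min(), x.max(), k)          # deterministic init
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        d = x[:, None] - mu[None, :]
        logp = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)    # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters from responsibilities
        n = r.sum(axis=0) + 1e-12
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / n + 1e-6
        pi = n / n.sum()
    return r.argmax(axis=1), mu

def design_filter(ref_patches, cur_pixels):
    """Least-squares 2-D filter (flattened taps) minimizing the average
    cross-view prediction residual energy for one depth class.
    ref_patches: (N, h, w) reference neighborhoods; cur_pixels: (N,) targets."""
    A = ref_patches.reshape(len(ref_patches), -1)
    taps, *_ = np.linalg.lstsq(A, cur_pixels, rcond=None)
    return taps
```

In use, each depth class gets its own filter, and the filtered reference frame serves as an additional (better-matched) prediction reference for blocks in that class.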
© (2007) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
PoLin Lai, Yeping Su, Peng Yin, Cristina Gomila, Antonio Ortega, "Adaptive filtering for cross-view prediction in multi-view video coding", Proc. SPIE 6508, Visual Communications and Image Processing 2007, 650814 (29 January 2007); doi: 10.1117/12.707437; https://doi.org/10.1117/12.707437
Proceedings paper, 12 pages.

