Coherent spatial and temporal occlusion generation (17 February 2009)
A rapidly growing number of productions from the entertainment industry are aimed at 3D movie theatres. These productions use a two-view format, primarily intended for eye-wear-assisted viewing in a well-defined environment. To bring this 3D content into the home environment, where a large variety of 3D viewing conditions exists (e.g., different display sizes, display types, and viewing distances), we need a flexible 3D format that can adjust the depth effect. Such a format is the image-plus-depth format, in which a video frame is enriched with depth information for every pixel. This format can be extended with an additional layer for occluded video and associated depth, containing what lies behind the objects in the video. To produce 3D content in this extended format, one has to deduce what is behind those objects. There are various axes along which this occluded data can be obtained. This paper presents a method to automatically detect and fill the occluded areas by exploiting the temporal axis. To obtain visually pleasing results, it is of utmost importance to make the inpainting globally consistent. To do so, we start by analyzing data along the temporal axis and compute a confidence for each pixel. Pixels from the future and the past that are not visible in the current frame are then weighted and accumulated based on the computed confidences. These results are fed to a generic multi-source framework that computes the occlusion layer from the available confidences and occlusion data.
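The confidence-weighted accumulation described above can be sketched as follows. This is a minimal illustration in NumPy, assuming per-pixel candidate colors gathered from past and future frames together with confidences in [0, 1]; the function name and the exact weighting scheme are hypothetical, as the abstract does not specify them:

```python
import numpy as np

def accumulate_occlusion(candidates, eps=1e-6):
    """Confidence-weighted accumulation of occluded-pixel candidates.

    candidates: list of (color, confidence) pairs, where `color` is an
    (H, W) array of pixel values gathered from past/future frames and
    `confidence` is an (H, W) array in [0, 1].
    Returns the accumulated occlusion layer and a combined confidence
    (illustrative choice: clipped sum of per-candidate confidences).
    """
    num = np.zeros_like(candidates[0][0], dtype=np.float64)
    den = np.zeros_like(candidates[0][1], dtype=np.float64)
    for color, conf in candidates:
        num += conf * color          # confidence-weighted contribution
        den += conf                  # total weight per pixel
    layer = num / np.maximum(den, eps)   # weighted average color
    total_conf = np.clip(den, 0.0, 1.0)  # saturate combined confidence
    return layer, total_conf

# Toy example: two temporal candidates for a 2x2 occluded region.
past = (np.full((2, 2), 10.0), np.array([[1.0, 0.5], [0.0, 1.0]]))
future = (np.full((2, 2), 30.0), np.array([[1.0, 0.5], [1.0, 0.0]]))
layer, conf = accumulate_occlusion([past, future])
```

In this toy run, a pixel seen with equal confidence in past and future averages to 20.0, while a pixel visible only in one direction keeps that frame's value; the combined confidence can then be handed to a multi-source framework such as the one the paper describes.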
© 2009 Society of Photo-Optical Instrumentation Engineers (SPIE).
R. Klein Gunnewiek, R.-P. M. Berretty, B. Barenbrug, and J. P. Magalhães, "Coherent spatial and temporal occlusion generation", Proc. SPIE 7237, Stereoscopic Displays and Applications XX, 723713 (17 February 2009); https://doi.org/10.1117/12.806818
