Efficient streaming of stereoscopic depth-based 3D videos
21 February 2013
Abstract
In this paper, we propose a method to extract depth from motion, texture, and intensity cues. We first analyze the depth map to extract a set of depth cues. Then, based on these depth cues, we process the color reference video, using its texture, motion, luminance, and chrominance content, to reconstruct the depth map. Each channel in the YCbCr color space is processed separately. We tested this approach on video sequences with different monocular properties. Our simulations show that the extracted depth maps generate a 3D video whose quality is close to that of the video rendered using the ground-truth depth map. We report objective results using 3VQM and a subjective analysis via comparison of rendered images. Furthermore, we analyze the bitrate savings that result from eliminating the need for two video codecs, one for the reference color video and one for the depth map; in this case, only the depth cues are sent as side information alongside the color video.
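The per-channel YCbCr processing described above can be illustrated with a minimal sketch. This is not the authors' algorithm; it only shows the standard BT.601 RGB-to-YCbCr decomposition and a simple frame-difference motion cue on the luminance channel. The names `rgb_to_ycbcr` and `motion_cue` are illustrative, not from the paper.

```python
import numpy as np

def rgb_to_ycbcr(frame):
    """Convert an RGB frame (H x W x 3, floats in [0, 1]) to YCbCr (BT.601).

    Returns the three channels separately so each can be analyzed on its own,
    as in the per-channel processing described in the abstract.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b             # luminance
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5   # blue-difference chroma
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5   # red-difference chroma
    return y, cb, cr

def motion_cue(y_prev, y_curr):
    """Illustrative per-pixel motion cue: absolute luminance difference
    between consecutive frames (a stand-in for a real motion analysis)."""
    return np.abs(y_curr - y_prev)
```

For example, a pure-red frame maps to Y = 0.299 and Cr = 1.0, and the motion cue is zero wherever the luminance is unchanged between frames.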
© 2013 Society of Photo-Optical Instrumentation Engineers (SPIE).
Dogancan Temel, Mohammed Aabed, Mashhour Solh, and Ghassan AlRegib, "Efficient streaming of stereoscopic depth-based 3D videos," Proc. SPIE 8666, Visual Information Processing and Communication IV, 86660I (21 February 2013); https://doi.org/10.1117/12.2005161
Proceedings paper, 10 pages.