Open Access
Depth error compensation for camera fusion system
9 July 2013
Cheon Lee, Sung-Yeol Kim, Byeongho Choi, Yong-Moo Kwon, Yo-Sung Ho
Abstract
Since three-dimensional (3-D) video systems generate multiview video from depth data to provide more realistic 3-D viewing experiences, accurate depth map acquisition is an important task. To generate a precise depth map in real time, we can build a camera fusion system with multiple color cameras and one time-of-flight (TOF) camera; however, this approach suffers from depth errors such as depth flickering, empty holes in the warped depth map, and mixed pixels around object boundaries. In this paper, we propose three methods to minimize such depth errors. To reduce depth flickering in the temporal domain, we propose a temporal enhancement method using a modified joint bilateral filter at the TOF camera side. Next, we fill the empty holes in the warped depth map by selecting a virtual depth and applying a weighted depth filtering method. After hole filling, we remove mixed pixels and replace them with new depth values using an adaptive joint multilateral filter. Experimental results show that the proposed method reduces depth errors significantly in near real time.
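The joint bilateral filter underlying the temporal enhancement step weights each depth sample by both spatial distance and similarity in a guidance image, so depth is smoothed within regions while edges that coincide with guide-image edges are preserved. A minimal single-channel sketch (not the authors' modified filter; all parameter names and values are illustrative):

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth a depth map using a guidance (e.g., grayscale color) image.

    Spatial weights come from pixel distance; range weights come from
    intensity differences in the guide image, so depth edges aligned
    with guide edges are preserved while flat regions are denoised.
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    # Precompute the spatial Gaussian kernel over the (2r+1)x(2r+1) window
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth.astype(np.float64), radius, mode='edge')
    pad_g = np.pad(guide.astype(np.float64), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            d_win = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights from the guide image, centered on the guide pixel
            rng = np.exp(-((g_win - guide[y, x])**2) / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * d_win) / np.sum(wgt)
    return out
```

Because the range term uses the color guide rather than the (noisy) depth itself, a depth discontinuity that matches a color edge survives filtering even when individual depth samples flicker between frames.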
CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Cheon Lee, Sung-Yeol Kim, Byeongho Choi, Yong-Moo Kwon, and Yo-Sung Ho "Depth error compensation for camera fusion system," Optical Engineering 52(7), 073103 (9 July 2013). https://doi.org/10.1117/1.OE.52.7.073103
CITATIONS
Cited by 6 scholarly publications.
KEYWORDS: Cameras, Imaging systems, Time of flight cameras, Error analysis, Video, Optical engineering, Optical filters