When a three-dimensional (3-D) video system uses multiview video generation with depth data to provide a more realistic 3-D viewing experience, accurate depth map acquisition is essential. To generate a precise depth map in real time, we can build a camera fusion system with multiple color cameras and one time-of-flight (TOF) camera; however, this approach suffers from depth errors such as temporal depth flickering, empty holes in the warped depth map, and mixed pixels around object boundaries. In this paper, we propose three methods to reduce these depth errors. To suppress depth flickering in the temporal domain, we propose a temporal enhancement method using a modified joint bilateral filter at the TOF camera side. We then fill the empty holes in the warped depth map by selecting a virtual depth and applying a weighted depth filtering method. After hole filling, we remove mixed pixels and replace them with new depth values using an adaptive joint multilateral filter. Experimental results show that the proposed methods reduce depth errors significantly in near real time.
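To make the filtering step concrete, the following is a minimal sketch of a standard joint bilateral filter applied to a depth map with a color-intensity guidance image. It is not the paper's modified temporal variant (which also exploits data across frames); the function name, window radius, and sigma parameters are illustrative assumptions.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Smooth `depth` using spatial closeness and intensity similarity in the
    `guide` image, so smoothing stops at object edges visible in the guide."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    pad_d = np.pad(depth.astype(np.float64), radius, mode='edge')
    pad_g = np.pad(guide.astype(np.float64), radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))  # fixed spatial kernel
    for y in range(h):
        for x in range(w):
            d_win = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range weight comes from the guidance image, not the noisy depth
            rng = np.exp(-(g_win - pad_g[y + radius, x + radius])**2
                         / (2.0 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * d_win) / np.sum(wgt)
    return out
```

The key design point of a *joint* bilateral filter is that the range kernel is evaluated on the clean color image rather than on the noisy depth itself, which is why it can denoise depth while preserving boundaries.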
In this paper, we propose a new scheme for transmitting three-dimensional (3-D) mesh models. To obtain a view-dependent representation of 3-D mesh models, we combine sequential and progressive mesh transmission techniques. After partitioning a 3-D mesh model into a hierarchical tree, we determine the amount of information for each submesh. We can then send the 3-D model information by view-dependent selection using mesh merging and splitting operations.
Experimental results demonstrate that the proposed scheme can adapt the transmitted mesh information through mesh merging and splitting operations, providing good visual quality over a limited-bandwidth channel.
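The merge/split selection over a hierarchical submesh tree can be sketched as follows. This is a simplified stand-in, not the paper's actual scheme: the node attributes (`detail_bytes`, `distance`) and the importance heuristic are hypothetical proxies for whatever view-dependent criterion the real system uses.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubmeshNode:
    name: str
    detail_bytes: int        # hypothetical cost of this submesh's refinement data
    distance: float          # hypothetical distance from the viewer
    children: List['SubmeshNode'] = field(default_factory=list)

def select_view_dependent(node, split_threshold):
    """Traverse the hierarchical submesh tree: split (descend into children)
    when a node is important enough to the current view to justify the extra
    detail; otherwise merge, transmitting only the coarse parent submesh."""
    # crude importance proxy: more detail matters less as distance grows
    importance = node.detail_bytes / (node.distance + 1e-9)
    if node.children and importance > split_threshold:
        selected = []
        for child in node.children:
            selected.extend(select_view_dependent(child, split_threshold))
        return selected
    return [node]  # merged: send this submesh at its current resolution
```

Lowering the threshold (or moving the viewer closer) triggers splits and sends finer submeshes; raising it merges children back into their parent, which is how the transmitted detail adapts to the available bandwidth.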