In this paper, we propose a new free-viewpoint video system that generates 3D video from an arbitrary point of view using multiple cameras. When target objects are captured by these cameras, the PC allocated to each capturing camera segments the objects and transmits the masks and color textures to a 3D modeling server over the system's network. The modeling server then generates a 3D model of each object from the gathered masks. Finally, the server renders 3D video at the designated point of view from the 3D models and texture information. In 3D modeling, a reliability-based shape-from-silhouette technique reconstructs a visual hull by carving a 3D space based on intra- and inter-silhouette reliabilities. In final view rendering, we use a cinematographic camera control system and ARToolkit to control the virtual viewpoint.
KEYWORDS: Image segmentation, 3D modeling, RGB color model, Video, Cameras, Optical engineering, Image processing algorithms and systems, Light sources and illumination, Video surveillance, Imaging systems
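The reliability-based shape-from-silhouette step can be pictured as reliability-weighted voxel carving: each voxel is projected into every camera's silhouette mask, and it survives only if its weighted silhouette-consistency score is high enough. The sketch below is a minimal illustration under assumed pinhole projection matrices and a hypothetical keep threshold, not the paper's actual implementation.

```python
import numpy as np

def carve_visual_hull(voxels, cameras, silhouettes, reliabilities, keep_thresh=0.9):
    """Carve a voxel grid: a voxel survives if its reliability-weighted
    silhouette-consistency score exceeds keep_thresh.
    cameras: list of 3x4 projection matrices (assumed pinhole models).
    silhouettes: list of boolean masks; reliabilities: per-view weights."""
    total = float(sum(reliabilities))
    score = np.zeros(len(voxels))
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for P, mask, w in zip(cameras, silhouettes, reliabilities):
        uvw = homo @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, wd = mask.shape
        inside = (u >= 0) & (u < wd) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), bool)
        hit[inside] = mask[v[inside], u[inside]]  # voxel falls inside silhouette
        score += w * hit
    return voxels[score / total >= keep_thresh]
```

With two orthogonal views of a circular silhouette, the surviving voxels approximate the intersection of two cylinders, the classic visual-hull behavior.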
We propose a robust method to extract silhouettes of foreground objects from color-video sequences. To cope with various changes in the background, we model the background as a Laplace distribution and update it with a selective running average and static pixel observation. All pixels in the input video image are classified into four initial regions using background subtraction with multiple thresholds. Shadow regions are eliminated using color components, and the final foreground silhouette is extracted by smoothing the boundaries of the foreground and eliminating errors inside and outside of the regions. Experimental results show that the proposed algorithm works very well in various background and foreground situations.
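One step of the per-pixel background model can be sketched as follows: classify each pixel against the Laplace model with multiple thresholds, then adapt the model only where the pixel still looks like background (the selective running average). The four labels, the thresholds `k1`/`k2`, and the learning rate `alpha` below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def classify_and_update(frame, mu, b, alpha=0.02, k1=2.0, k2=4.0):
    """One update step of a per-pixel Laplace background model (sketch).
    mu: per-pixel location, b: per-pixel Laplace scale.
    Returns labels 0..3 (reliable bg, suspicious bg, suspicious fg,
    reliable fg) and the updated (mu, b)."""
    diff = np.abs(frame - mu)
    labels = np.zeros(frame.shape, dtype=np.uint8)
    labels[(diff <= k1 * b) & (diff > 0.5 * k1 * b)] = 1  # suspicious background
    labels[diff > k1 * b] = 2                             # suspicious foreground
    labels[diff > k2 * b] = 3                             # reliable foreground
    # selective running average: adapt only where the pixel looks like background
    bg = labels <= 1
    mu[bg] = (1 - alpha) * mu[bg] + alpha * frame[bg]
    b[bg] = (1 - alpha) * b[bg] + alpha * diff[bg]
    return labels, mu, b
```

Because foreground pixels are excluded from the update, a person standing still is not gradually absorbed into the background model.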
We propose a fast disparity estimation algorithm using background registration and object segmentation for stereo sequences from fixed cameras. Dense background disparity information is calculated in an initialization step, so that only disparities of moving object regions are updated in the main process. We propose a real-time segmentation technique using background subtraction and interframe differences, and a hierarchical disparity estimation using a region-dividing technique and shape-adaptive matching windows. Experimental results show that the proposed algorithm provides accurate disparity vector fields with an average processing speed of 15 frames/s for 320×240 stereo sequences on an ordinary PC.
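The speed-up comes from touching only the pixels flagged as moving: the registered background disparity is reused everywhere else. A minimal sketch is shown below, where plain SAD block matching stands in for the paper's hierarchical region-dividing search with shape-adaptive windows; the window size and search range are illustrative.

```python
import numpy as np

def update_disparity(left, right, bg_disp, change_mask, max_d=8, win=3):
    """Refresh disparities only inside change_mask (moving-object pixels);
    elsewhere reuse the registered background disparity. Simple SAD block
    matching is a stand-in for the hierarchical region-dividing search."""
    h, w = left.shape
    disp = bg_disp.copy()
    r = win // 2
    for y, x in zip(*np.nonzero(change_mask)):
        if y < r or y >= h - r or x < r + max_d or x >= w - r:
            continue  # skip pixels whose matching window leaves the image
        patch = left[y - r:y + r + 1, x - r:x + r + 1]
        costs = [np.abs(patch - right[y - r:y + r + 1, x - d - r:x - d + r + 1]).sum()
                 for d in range(max_d + 1)]
        disp[y, x] = int(np.argmin(costs))
    return disp
```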
We propose a fast depth reconstruction algorithm for stereo sequences using camera geometry and disparity estimation. In the disparity estimation process, we calculate dense background disparity fields in an initialization step, so that only disparities of moving-object regions are updated in the main process using real-time segmentation and hierarchical disparity estimation techniques. The estimated dense disparity fields are converted into depth information using the camera geometry. Experimental results show that the proposed algorithm provides accurate depth information with an average processing speed of 15 frames/s for 320×240 stereo sequences on a common PC. We also verified the performance of the proposed algorithm by applying it to real applications.
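The disparity-to-depth conversion follows the standard rectified stereo geometry Z = fB/d, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal sketch (the calibration values in the usage below are hypothetical):

```python
import numpy as np

def disparity_to_depth(disp, focal_px, baseline_m, eps=1e-6):
    """Convert a disparity map to metric depth via Z = f*B/d for a
    rectified stereo pair. Zero (or near-zero) disparity maps to
    infinite depth rather than dividing by zero."""
    disp = np.asarray(disp, dtype=float)
    depth = np.full(disp.shape, np.inf)
    valid = disp > eps
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth
```

For example, with an assumed f = 800 px and B = 0.1 m, a disparity of 8 px corresponds to a depth of 10 m.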
A two-stage algorithm is proposed for estimating smooth and detailed disparity vector fields in a stereo image pair. The algorithm consists of hierarchical disparity estimation using a region-dividing technique, followed by edge-preserving regularization. The hierarchical region-dividing disparity estimation increases the efficiency and reliability of the estimation process. At the second stage, the vector fields are regularized with an energy model that produces smooth fields while preserving discontinuities at object boundaries. The minimization problem is addressed by solving the corresponding partial differential equation with a finite-difference method. Experiments show that the proposed algorithm provides accurate and spatially correlated disparity vector fields for various types of stereo images, even in the case of images with large displacements.
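The second stage can be sketched as explicit finite-difference iterations on a simplified edge-weighted smoothness energy E = ∫ g|∇d|², where the weight g ∈ [0, 1] is small at object boundaries so diffusion stops there. This is only an illustration of the discretization idea; the paper's actual energy model and boundary handling are not reproduced.

```python
import numpy as np

def regularize(disp, edge_weight, tau=0.2, iters=50):
    """Explicit finite-difference smoothing of a disparity field with an
    edge-stopping weight g (g small at discontinuities). Each iteration
    applies d <- d + tau * g * Laplacian(d) with replicated borders."""
    d = disp.astype(float).copy()
    g = edge_weight
    for _ in range(iters):
        up = np.roll(d, 1, 0);    up[0] = d[0]
        down = np.roll(d, -1, 0); down[-1] = d[-1]
        left = np.roll(d, 1, 1);  left[:, 0] = d[:, 0]
        right = np.roll(d, -1, 1); right[:, -1] = d[:, -1]
        lap = up + down + left + right - 4 * d  # 4-neighbour Laplacian
        d = d + tau * g * lap
    return d
```

The step size tau ≤ 0.25 keeps the explicit 4-neighbour scheme stable; where g = 0 the field is left untouched, which is how discontinuities survive the smoothing.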
Mixed reality differs from virtual reality in that users can feel immersed in a space composed of not only virtual but also real objects. It is therefore essential to realize seamless integration and mutual occlusion of the virtual and real worlds, which requires depth information of the real scene to perform the synthesis. We propose a depth estimation algorithm with sharp object boundaries for mixed reality systems, based on hierarchical disparity estimation. Initial disparity vectors are obtained from downsampled stereo images using a region-dividing disparity estimation technique. Then, the background region is detected and flattened. With these initial vectors, dense disparities are estimated and regularized with a shape-adaptive window on the full-resolution images. Finally, depth values are calculated from the stereo geometry and camera parameters. As a result, virtual objects can be mixed into the image of the real world by comparing the calculated depth values with the depth information of the generated virtual objects. Experimental results show that occlusion between the virtual and real objects is correctly established with sharp boundaries in the synthesized images, so that the user observes the mixed scene with a considerably natural sensation.
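The final synthesis reduces to a per-pixel occlusion test: a virtual object's pixel is shown only where its depth is smaller than the estimated real-scene depth. A minimal compositing sketch (image shapes and contents below are illustrative):

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel occlusion test for mixed reality: the virtual object
    wins only where it is closer to the camera than the real scene,
    so real objects in front of it correctly occlude it."""
    out = real_rgb.copy()
    virt_wins = virt_depth < real_depth  # boolean visibility mask
    out[virt_wins] = virt_rgb[virt_wins]
    return out
```

Sharp boundaries in the estimated depth map translate directly into sharp occlusion boundaries in this mask, which is why the paper emphasizes boundary accuracy.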