6 March 2014 Integration of multiple view plus depth data for free viewpoint 3D display
Kazuyoshi Suzuki, Yuko Yoshida, Tetsuya Kawamoto, Toshiaki Fujii, Kenji Mase
Proceedings Volume 9011, Stereoscopic Displays and Applications XXV; 901114 (2014) https://doi.org/10.1117/12.2039166
Event: IS&T/SPIE Electronic Imaging, 2014, San Francisco, California, United States
Abstract
This paper proposes a method for constructing a reasonably scaled, end-to-end free-viewpoint video system that captures multiple view-plus-depth data, reconstructs three-dimensional polygon models of objects, and displays them in virtual 3D CG spaces. The system consists of a desktop PC and four Kinect sensors. First, view-plus-depth data are captured simultaneously at four viewpoints by the Kinect sensors. The captured data are then integrated into point cloud data using the camera parameters. The resulting point cloud data are sampled into volume data consisting of voxels. Since volume data generated from point cloud data are sparse, they are densified using a global optimization algorithm. The final step reconstructs surfaces on the dense volume data with the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon model, a simple inpainting method for improving the depth maps is also presented.
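As a rough illustration of the first two pipeline stages the abstract describes (back-projecting a depth map to a point cloud via camera parameters, then sampling the cloud into voxel volume data), here is a minimal NumPy sketch. The function names, the pinhole-intrinsics parameterization, and the voxel size are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) to 3D points using
    assumed pinhole camera intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def voxelize(points, voxel_size=0.01):
    """Quantize points to integer voxel indices and return the
    set of occupied voxels (a sparse occupancy representation)."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(idx, axis=0)
```

In the full system, clouds from all four Kinects would be transformed into a common world frame with each camera's extrinsic parameters before voxelization; the sparse occupancy returned here is what the paper's global optimization step would then densify.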
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kazuyoshi Suzuki, Yuko Yoshida, Tetsuya Kawamoto, Toshiaki Fujii, and Kenji Mase "Integration of multiple view plus depth data for free viewpoint 3D display", Proc. SPIE 9011, Stereoscopic Displays and Applications XXV, 901114 (6 March 2014); https://doi.org/10.1117/12.2039166
KEYWORDS
Cameras, 3D modeling, Imaging systems, Data modeling, Sensors, Clouds, 3D displays