Integration of multiple view plus depth data for free viewpoint 3D display (6 March 2014)
This paper proposes a method for constructing a reasonably scaled, end-to-end free-viewpoint video system that captures multiple view plus depth data, reconstructs three-dimensional polygon models of objects, and displays them in virtual 3D CG spaces. The system consists of a desktop PC and four Kinect sensors. First, view plus depth data are captured simultaneously at four viewpoints by the Kinect sensors. The captured data are then integrated into point cloud data using the camera parameters. The resulting point cloud is sampled into volume data consisting of voxels. Because volume data generated from a point cloud are sparse, they are densified using a global optimization algorithm. The final step reconstructs surfaces on the dense volume data with the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon model, a simple inpainting method for improving depth maps is also presented.
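The first two stages of the pipeline described above, back-projecting each depth map into a world-space point cloud using the camera parameters and then sampling the merged cloud into a voxel grid, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the pinhole camera model, and the occupancy-grid voxelization are assumptions made for the sketch.

```python
import numpy as np

def depth_to_points(depth, K, R, t):
    """Back-project a depth map into world-space 3D points.

    depth : (H, W) array of metric depths (0 marks missing pixels)
    K     : (3, 3) camera intrinsic matrix (assumed pinhole model)
    R, t  : rotation (3, 3) and translation (3,) mapping camera -> world
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    # Homogeneous pixel coordinates, one column per pixel.
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    rays = np.linalg.inv(K) @ pix      # camera-space viewing rays
    pts_cam = rays * z                 # scale each ray by its depth
    pts_world = (R @ pts_cam).T + t    # transform camera -> world
    return pts_world[valid]

def voxelize(points, origin, voxel_size, dims):
    """Sample a point cloud into a boolean occupancy voxel grid."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.asarray(dims)), axis=1)
    grid = np.zeros(dims, dtype=bool)
    grid[tuple(idx[inside].T)] = True
    return grid
```

With four Kinect viewpoints, `depth_to_points` would be called once per sensor with that sensor's calibrated `K`, `R`, `t`, and the four point sets concatenated before voxelization; the resulting sparse grid is what the paper's global optimization step would then densify.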
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kazuyoshi Suzuki, Yuko Yoshida, Tetsuya Kawamoto, Toshiaki Fujii, Kenji Mase, "Integration of multiple view plus depth data for free viewpoint 3D display", Proc. SPIE 9011, Stereoscopic Displays and Applications XXV, 901114 (6 March 2014); https://doi.org/10.1117/12.2039166
