4 November 2014 Partial scene reconstruction using Time-of-Flight imaging
Abstract
This paper addresses the reconstruction of partial 3D scene-point coordinates from time-of-flight (ToF) images. Assuming a stationary camera, only the image coordinates of points are accessible. The exposure time is two trillionths of a second (two picoseconds), and the synthetic visualization shows light propagation at an effective half a trillion frames per second. In global light transport, the direct component consists of light emitted from a source point and reflected from a scene point exactly once. Treating the camera and the light source as the two foci of an ellipsoid whose focal path length is constant at a given time, we exploit two constraints: (1) this path length equals the sum of the distances the light travels between the two foci and the scene point; and (2) the camera's focal point, the scene point, and the corresponding image point are collinear. Calibration is required to obtain the coordinates of the light source, and proceeds in two steps: (1) choose a scene containing pairs of points at the same depth whose positions are known; and (2) substitute these positions into the two constraints above to solve for the light-source coordinates. After computing the coordinates of the scene points, MeshLab is used to build the partial scene model. The proposed approach is well suited to estimating the exact distance between two scene points.
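As an illustration of the two constraints, the following is a minimal sketch (not the authors' code; function name and conventions are hypothetical, with the camera assumed at the origin). Constraint (2) parameterizes the scene point as P = t·u along the known image ray u, and constraint (1), |L − P| + |P| = d, then yields a closed-form solution for t:

```python
import math

def scene_point(light, ray_dir, total_dist):
    """Recover the scene point from one-bounce ToF geometry.

    light      -- (x, y, z) of the calibrated light source (one ellipsoid focus)
    ray_dir    -- unit vector from the camera (at the origin, the other focus)
                  through the image point, enforcing the collinearity constraint
    total_dist -- one-bounce path length |light - P| + |P| measured via ToF
    """
    # Squaring |light - t*u| = total_dist - t cancels the t^2 terms,
    # leaving a linear equation in t.
    L_sq = sum(c * c for c in light)                      # |L|^2
    L_dot_u = sum(l * u for l, u in zip(light, ray_dir))  # L . u
    t = (total_dist ** 2 - L_sq) / (2.0 * (total_dist - L_dot_u))
    return tuple(t * u for u in ray_dir)
```

For example, with the light at (1, 0, 0) and a scene point at (0, 0, 2), the one-bounce path length is 2 + √5, and the function recovers the point from the ray direction (0, 0, 1) and that distance.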
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yuchen Zhang, Hongkai Xiong, "Partial scene reconstruction using Time-of-Flight imaging", Proc. SPIE 9273, Optoelectronic Imaging and Multimedia Technology III, 927315 (4 November 2014); https://doi.org/10.1117/12.2071937
Proceedings, 8 pages