Mesh-based integration of range and color images (3 April 2000)
Abstract
This paper discusses the construction of photorealistic 3D models from multisensor data. The data typically comprise multiple views of range and color images to be integrated into a unified 3D model. The integration process uses a mesh-based representation of the range data, and the advantages of the mesh-based approach over a volumetric approach are discussed. First, two meshes, corresponding to range images taken from two different viewpoints, are registered to the same world coordinate system and then integrated. This process is repeated until all views have been integrated. The integration is straightforward unless the two triangle meshes overlap. Overlapping measurements are detected, and the less confident triangles are removed based on their distance from, and orientation relative to, the camera viewpoint. After the overlapping patches are removed, the meshes are seamed together to build a single 3D model, which is incrementally updated as each new viewpoint is integrated. The color images are used as texture in the finished scene model. The results show that the approach is efficient for the integration of large, multimodal data sets.
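The overlap-removal step described above can be sketched in code. The paper does not give an explicit formula, so the scoring below is a plausible assumption: a triangle is considered more confident when it faces the camera head-on and lies close to it. The function names (`triangle_confidence`, `keep_more_confident`) and the exact score (cosine of the viewing angle divided by distance) are hypothetical, illustrating the selection rule rather than reproducing the authors' implementation.

```python
import math

def triangle_confidence(vertices, viewpoint):
    """Score a range-image triangle: higher when the patch faces the
    camera and lies close to it. Hypothetical scoring, not the authors'
    exact formula."""
    a, b, c = vertices
    # Centroid of the triangle.
    cen = tuple((a[i] + b[i] + c[i]) / 3.0 for i in range(3))
    # Unnormalized face normal via the cross product of two edges.
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    # Vector from the centroid to the camera viewpoint.
    w = tuple(viewpoint[i] - cen[i] for i in range(3))
    dist = math.sqrt(sum(x * x for x in w))
    n_len = math.sqrt(sum(x * x for x in n))
    if dist == 0.0 or n_len == 0.0:
        return 0.0  # degenerate triangle or camera at the centroid
    # |cos| of the angle between the normal and the view direction,
    # scaled by 1/distance: grazing or distant patches score low.
    cos_theta = abs(sum(n[i] * w[i] for i in range(3))) / (n_len * dist)
    return cos_theta / dist

def keep_more_confident(tri1, tri2, vp1, vp2):
    """When triangles from two views overlap, keep the one with the
    higher confidence; the other would be removed before seaming."""
    if triangle_confidence(tri1, vp1) >= triangle_confidence(tri2, vp2):
        return tri1
    return tri2
```

For example, a triangle viewed frontally from one unit away scores twice as high as the same triangle viewed frontally from two units away, so the nearer view's measurement survives the overlap test.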
© 2000 Society of Photo-Optical Instrumentation Engineers (SPIE).
Yiyong Sun, Christophe Dumont, Mongi A. Abidi, "Mesh-based integration of range and color images", Proc. SPIE 4051, Sensor Fusion: Architectures, Algorithms, and Applications IV (3 April 2000); https://doi.org/10.1117/12.381624
Proceedings paper, 8 pages.

