GPGPU-based parallel processing of massive LiDAR point cloud
30 October 2009
Proceedings Volume 7497, MIPPR 2009: Medical Imaging, Parallel Processing of Images, and Optimization Techniques; 749716 (2009) https://doi.org/10.1117/12.833740
Event: Sixth International Symposium on Multispectral Image Processing and Pattern Recognition, 2009, Yichang, China
Abstract
Processing massive LiDAR point clouds is time-consuming because of the magnitude of the data involved and the computationally intensive, iterative nature of the algorithms. Moreover, many current and future applications of LiDAR require real- or near-real-time processing capabilities; relevant examples include environmental studies, military applications, and the tracking and monitoring of hazards. Recent advances in Graphics Processing Units (GPUs) have opened a new era of General-Purpose Computing on Graphics Processing Units (GPGPU). In this paper, we seek to harness the computing power of contemporary GPUs to accelerate the processing of massive LiDAR point clouds. We propose a CUDA-based method for accelerating the processing of massive LiDAR point clouds on CUDA-enabled GPUs. Our experimental results show that the GPGPU-based parallel implementation significantly reduces the time required to construct a TIN (triangulated irregular network) from a LiDAR point cloud, in comparison with current state-of-the-art CPU-based algorithms.
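The abstract does not detail the paper's kernels, but the GPGPU approach it describes rests on a one-thread-per-point data-parallel pattern. The sketch below illustrates that pattern with a common preprocessing step for parallel TIN construction: binning LiDAR points into a uniform 2D grid. It is a minimal, hypothetical example, not the authors' implementation; all names (`binPoints`, `launchBinning`, the grid parameters) are assumptions.

```cuda
// Illustrative sketch only -- NOT the paper's algorithm. Shows the
// one-thread-per-point CUDA pattern typical of LiDAR preprocessing:
// each thread assigns one point to a cell of a uniform 2D grid,
// a common first step before spatially partitioned triangulation.
#include <cuda_runtime.h>

struct Point { float x, y, z; };

__global__ void binPoints(const Point* pts, int n,
                          float minX, float minY, float cellSize,
                          int gridWidth, int* cellIdx)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;                        // guard the partial tail block
    int cx = (int)((pts[i].x - minX) / cellSize);
    int cy = (int)((pts[i].y - minY) / cellSize);
    cellIdx[i] = cy * gridWidth + cx;          // flattened cell index
}

// Host-side launch: one thread per point, 256 threads per block.
void launchBinning(const Point* d_pts, int n, float minX, float minY,
                   float cellSize, int gridWidth, int* d_cellIdx)
{
    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // round up to cover all points
    binPoints<<<blocks, threads>>>(d_pts, n, minX, minY,
                                   cellSize, gridWidth, d_cellIdx);
    cudaDeviceSynchronize();
}
```

Because every point is processed independently, a kernel of this shape scales across the thousands of GPU threads that GPGPU processing exploits, which is the source of the speedups the abstract reports.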
© (2009) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Xun Zeng, Wei He, "GPGPU-based parallel processing of massive LiDAR point cloud", Proc. SPIE 7497, MIPPR 2009: Medical Imaging, Parallel Processing of Images, and Optimization Techniques, 749716 (30 October 2009); https://doi.org/10.1117/12.833740
Proceedings paper, 6 pages.