25 January 2011 Real-time 3D flash ladar imaging through GPU data processing
We present real-time 3D image processing of flash ladar data using our recently developed GPU parallel processing kernels. Our laboratory and airborne experience with flash ladar focal planes has shown that, per laser flash, typically only a small fraction of the pixels on the focal plane array produce a meaningful range signal. Therefore, to optimize overall data processing speed, the large quantity of uninformative data is filtered out and removed from the data stream prior to the mathematically intensive point cloud transformation processing. This front-end pre-processing, which consists largely of control flow instructions, is specific to the particular type of flash ladar focal plane array being used and is performed by the computer's CPU. The valid signals, along with their corresponding inertial and navigation metadata, are then transferred to a GPU device to perform range correction, geo-location, and ortho-rectification on each 3D data point, so that data from multiple frames can be properly tiled together either to create a wide-area map or to reconstruct an object from multiple look angles. The GPU parallel processing kernels were developed using OpenCL. Post-processing to perform fine registration between data frames via complex iterative steps also benefits greatly from this type of high-performance computing. The performance improvements obtained using GPU processing to create corrected 3D images and to perform frame-to-frame fine registration are presented.
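The two-stage pipeline the abstract describes (a CPU-side pre-filter that discards pixels with no usable range return, followed by a data-parallel per-point transform) can be illustrated with the following sketch. This is not the authors' code: the threshold, calibration bias, and platform origin are hypothetical stand-ins, and NumPy vectorization stands in for the OpenCL kernels that would perform the per-point math on a GPU.

```python
import numpy as np

def prefilter(ranges, threshold=0.0):
    """CPU-side pre-filter: mask of pixels with a meaningful range signal.

    In the real system this step is detector-specific control-flow code;
    here a simple threshold (a hypothetical stand-in) marks valid returns.
    """
    return ranges > threshold

def transform_points(rows, cols, ranges, range_bias, origin):
    """Data-parallel stage: range-correct each return and place it in a
    common frame. range_bias and origin are hypothetical calibration and
    navigation inputs; a real system would apply the full sensor model
    and INS attitude per point. Each point is independent, which is what
    makes this stage a good fit for a GPU kernel.
    """
    corrected = ranges - range_bias
    # Toy geometry: treat (col, row, corrected range) as x, y, z and
    # translate by the platform origin.
    pts = np.stack([cols.astype(float), rows.astype(float), corrected], axis=-1)
    return pts + origin

# One simulated 4x4 flash-ladar frame; most pixels return no signal,
# mirroring the "small fraction of valid pixels" observation above.
frame = np.zeros((4, 4))
frame[1, 2] = 105.0
frame[3, 0] = 98.5

mask = prefilter(frame)
rows, cols = np.nonzero(mask)          # only valid returns move on
cloud = transform_points(rows, cols, frame[mask],
                         range_bias=5.0,
                         origin=np.array([100.0, 200.0, 0.0]))
print(cloud.shape)  # (2, 3): two valid returns, three coordinates each
```

The point of the split is that the irregular, branchy filtering stays on the CPU, while the GPU receives a compact array of valid points on which every work item executes the same arithmetic.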
© (2011) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chung M. Wong, Christopher Bracikowski, Brian K. Baldauf, and Steven A. Havstad, "Real-time 3D flash ladar imaging through GPU data processing", Proc. SPIE 7872, Parallel Processing for Imaging Applications, 78720P (25 January 2011).
