DeepOSM-3D: recognition in aerial LiDAR RGBD imagery (23 April 2020)
In this paper, we present a pipeline and prototype vision system for near-real-time semantic segmentation and classification of objects such as roads, buildings, and vehicles in large, high-resolution, wide-area, real-world aerial LiDAR point-cloud and RGBD imagery. Unlike previous works, which have focused on exploiting ground-based sensors or narrowed the scope to detecting the density of large objects, here we address the full semantic segmentation of aerial LiDAR and RGBD imagery by exploiting crowd-sourced labels that densely canvas each image in the 2015 Dublin dataset.1 Our results indicate important improvements to detection and segmentation accuracy with the addition of aerial LiDAR over RGB imagery alone, which has important implications for civilian applications such as autonomous navigation and rescue operations. Moreover, the prototype system can segment and search geographic areas as large as 1 km² in a matter of seconds on commodity hardware with high accuracy (~90%), suggesting the feasibility of real-time scene understanding on small aerial platforms.
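The abstract reports accuracy gains from adding aerial LiDAR to RGB imagery. A common way to realize such fusion (not detailed in the abstract, so the function name and normalization scheme below are illustrative assumptions) is to rasterize the LiDAR point cloud into a height/depth channel and stack it with the RGB tile, producing an RGBD input for a segmentation network:

```python
import numpy as np

def stack_rgbd(rgb, depth):
    """Fuse an RGB tile with a rasterized LiDAR depth channel.

    rgb   : float array of shape (H, W, 3), values in [0, 1]
    depth : float array of shape (H, W), e.g. a LiDAR height raster

    Returns an (H, W, 4) RGBD array. Normalization to [0, 1] is an
    illustrative choice; the paper's actual preprocessing may differ.
    """
    # Min-max normalize the depth channel, guarding against a flat raster.
    span = float(depth.max() - depth.min())
    depth_norm = (depth - depth.min()) / max(span, 1e-8)
    # Append depth as a fourth channel alongside R, G, B.
    return np.concatenate([rgb, depth_norm[..., None]], axis=-1)
```

The resulting 4-channel tiles can be fed to any segmentation model whose first convolution accepts four input channels; this is a minimal sketch of channel-level fusion, not the paper's specific architecture.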
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Abhejit Rajagopal, Noah Stier, William Nelson, Shivkumar Chandrasekaran, and Andrew P. Brown "DeepOSM-3D: recognition in aerial LiDAR RGBD imagery", Proc. SPIE 11398, Geospatial Informatics X, 113980A (23 April 2020);

