Automated semantic labeling of complex urban scenes in remotely sensed 2D and 3D data is one of the most challenging steps in producing realistic 3D scene models and maps. Recent large-scale public benchmark data sets and challenges for semantic labeling with 2D imagery have been instrumental in identifying state-of-the-art methods and enabling new research. 3D data from lidar and multi-view stereo have also been shown to provide valuable additional information that improves semantic labeling accuracy. In this work, we describe the development of a new large-scale data set combining public lidar and multi-view satellite imagery with pixel-level truth for ground labels and instance-level truth for building labels. We demonstrate the use of this data set to evaluate methods for ground and building labeling tasks, establishing performance expectations and identifying areas for improvement. We also discuss initial steps toward further leveraging this data set to enable machine learning for more complex semantic and instance segmentation and 3D reconstruction tasks. All software developed to produce this public data set and to enable metric scoring is also released as open-source code.
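The abstract refers to pixel-level truth labels and open-source metric scoring without detailing the metric itself. As a rough illustration only (not the released scoring code), a per-class intersection-over-union score between a predicted label raster and a truth raster might be computed as in the sketch below; the class IDs, the ignore value, and the use of NumPy arrays as rasters are assumptions made for the example.

```python
import numpy as np

# Hypothetical class IDs for illustration only; the data set's actual
# label conventions may differ.
GROUND = 1
BUILDING = 2

def class_iou(truth, pred, class_id, ignore_value=0):
    """Intersection-over-union for one class between two label rasters.

    truth, pred : 2D integer arrays of per-pixel class labels.
    ignore_value: pixels with this truth value are excluded from scoring
                  (assumed convention for unlabeled pixels).
    """
    valid = truth != ignore_value
    t = (truth == class_id) & valid
    p = (pred == class_id) & valid
    intersection = np.logical_and(t, p).sum()
    union = np.logical_or(t, p).sum()
    return intersection / union if union > 0 else float("nan")

if __name__ == "__main__":
    # Toy 4x4 rasters standing in for real truth and predicted label images.
    truth = np.array([[0, 1, 1, 2],
                      [1, 1, 2, 2],
                      [1, 2, 2, 2],
                      [1, 1, 2, 2]])
    pred = np.array([[1, 1, 2, 2],
                     [1, 1, 2, 2],
                     [1, 1, 2, 2],
                     [1, 1, 2, 2]])
    print("ground IoU  :", class_iou(truth, pred, GROUND))
    print("building IoU:", class_iou(truth, pred, BUILDING))
```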
Myron Brown, Hirsh Goldberg, Kevin Foster, Andrea Leichtman, Sean Wang, Shea Hagstrom, Marc Bosch, and Scott Almes, "Large-scale public lidar and satellite image data set for urban semantic labeling," Proc. SPIE 10636, Laser Radar Technology and Applications XXIII, 106360P (Presented at SPIE Defense + Security: April 18, 2018; Published: 10 May 2018); https://doi.org/10.1117/12.2304403.