Fusion of multiple image types for the creation of radiometrically-accurate synthetic scenes
Abstract
The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles-based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible through the longwave infrared (0.4 to 20 microns). Over the last few years, significant enhancements, such as spectral polarimetric and active Light Detection and Ranging (lidar) models, have been incorporated into the software, providing an extremely powerful tool for algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG's ability to generate scenes "on demand." To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects, and background maps must all be created and attributed manually. To shorten this process, we have developed a comprehensive workflow aimed at reducing the man-in-the-loop requirements for many aspects of synthetic hyperspectral scene construction. Through a fusion of 3D lidar data with passive imagery, we have been able to partially automate many of the tasks required to create high-resolution urban DIRSIG scenes. This paper presents a description of these techniques.
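The fusion step described in the abstract, attributing 3D lidar geometry with values from co-registered passive imagery, can be illustrated with a short sketch. The Python fragment below is not drawn from the paper: it assumes a simple pinhole camera model, and the function names (project_points, color_points) and all parameters are illustrative only. It projects world-frame lidar returns into an RGB frame and samples a color for each point that lands inside the image.

# A minimal sketch (not the authors' code) of one common lidar/imagery
# fusion step: projecting 3D lidar points into a passive RGB frame so
# each point can be attributed with a color. The pinhole camera model
# and all names here are illustrative assumptions.
import numpy as np

def project_points(points_xyz, K, R, t):
    """Project (N, 3) world-frame lidar points into pixel coordinates.

    K    : (3, 3) camera intrinsic matrix
    R, t : world-to-camera rotation (3, 3) and translation (3,)
    Returns (N, 2) pixel coordinates and a mask of points in front of
    the camera.
    """
    cam = points_xyz @ R.T + t          # world -> camera frame
    in_front = cam[:, 2] > 0            # keep points ahead of the lens
    uvw = cam @ K.T                     # apply intrinsics
    pix = uvw[:, :2] / uvw[:, 2:3]      # perspective divide
    return pix, in_front

def color_points(points_xyz, rgb_image, K, R, t):
    """Attach an RGB value to each lidar point that falls in the image."""
    pix, in_front = project_points(points_xyz, K, R, t)
    h, w = rgb_image.shape[:2]
    cols = np.round(pix[:, 0]).astype(int)
    rows = np.round(pix[:, 1]).astype(int)
    valid = in_front & (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    colors = np.zeros((points_xyz.shape[0], 3), dtype=rgb_image.dtype)
    colors[valid] = rgb_image[rows[valid], cols[valid]]
    return colors, valid

# Toy usage: two points in front of an identity-pose camera viewing a
# uniform grey 100 x 100 image.
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 5.0], [0.1, -0.1, 4.0]])
img = np.full((100, 100, 3), 128, dtype=np.uint8)
colors, valid = color_points(pts, img, K, R, t)

In practice this per-point attribution presumes the lidar and imagery have already been registered; the paper's workflow addresses that registration, which this sketch takes as given.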
Stephen R. Lach, John P. Kerekes, and Xiaofeng Fan "Fusion of multiple image types for the creation of radiometrically-accurate synthetic scenes," Journal of Applied Remote Sensing 3(1), 033501 (1 January 2009). https://doi.org/10.1117/1.3075896
Published: 1 January 2009
KEYWORDS
Data modeling, LIDAR, Image registration, Reflectivity, RGB color model, Image segmentation, Sensors
