The goal of this research is to develop a general deep learning solution for atmospheric correction and target detection using multiple hyperspectral scenes. It is assumed that the scenes differ only in range and viewing angle, that they are acquired in rapid sequence by an airborne sensor orbiting a target, and that both the target and the atmosphere remain invariant over the time scale of the collection. Several hundred thousand hyperspectral simulations were performed with the MODTRAN model and used to train the deep learning solution as well as to validate the proposed method. The input to the deep learning solution is a matrix of simulated at-sensor radiances as a function of wavelength and elevation angle; the output is the atmospheric upwelling, downwelling, and transmission. The solution is applied to all pixels in the scene or to a subset of them. We focus on the emissive properties of targets, and simulations are performed in the longwave infrared between 7.5 and 12 μm. Results show that the proposed method is computationally efficient and can characterize the atmosphere and retrieve the target spectral emissivity with errors of one order of magnitude or less when compared with the original MODTRAN simulations.
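The mapping the abstract describes — a per-pixel matrix of at-sensor radiances over wavelength and elevation angle in, and three atmospheric terms (upwelling, downwelling, transmission) out — can be sketched as a small feed-forward network. The sketch below is illustrative only: the layer sizes, single hidden layer, band and angle counts, and random (untrained) weights are all assumptions, not the authors' actual architecture or MODTRAN-trained model.

```python
import numpy as np

# Assumed dimensions, for illustration only (not from the paper).
N_BANDS = 230    # hypothetical LWIR channels spanning 7.5-12 um
N_ANGLES = 8     # hypothetical number of elevation angles per pixel
N_HIDDEN = 256   # hypothetical hidden-layer width

rng = np.random.default_rng(0)

# Input: simulated at-sensor radiance, indexed by (wavelength, elevation angle).
radiance = rng.random((N_BANDS, N_ANGLES))

# Untrained weights; a real model would be fit to the MODTRAN training set.
W1 = rng.normal(0.0, 0.01, (N_HIDDEN, N_BANDS * N_ANGLES))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.01, (3 * N_BANDS, N_HIDDEN))
b2 = np.zeros(3 * N_BANDS)

def forward(x):
    """Map one pixel's radiance matrix to three per-band atmospheric terms."""
    h = np.maximum(0.0, W1 @ x.ravel() + b1)        # ReLU hidden layer
    out = (W2 @ h + b2).reshape(3, N_BANDS)         # stack the three outputs
    upwelling, downwelling, transmission = out
    return upwelling, downwelling, transmission

up, down, trans = forward(radiance)
print(up.shape, down.shape, trans.shape)
```

In this sketch the same `forward` call would simply be repeated for every pixel (or a chosen subset), matching the per-pixel application described in the abstract.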