To better understand the capabilities of hyperspectral imaging spectrometers, a number of organizations planned and carried out a data collection exercise at a desert site in the southwestern United States. As part of this collection, eight soil 'panels' were constructed: four filled with a coarse gravel/sand mixture and four filled with fine soil. Each set of four panels was prepared to represent two moisture and two density conditions: wet versus dry and compacted versus loose. Unlike laboratory soil specimens, which use 'purified' samples, these soil flats contained more variability. They therefore better represented the 'natural' environment that would be viewed by an airborne hyperspectral imaging sensor, while still allowing an experimental study under more controlled conditions. This paper examines how well the eight soil types and conditions can be distinguished based on their VNIR/SWIR reflectance spectra derived from field measurements and from airborne hyperspectral measurements made at nearly the same time. A brief review of the phenomenology of soil reflectance spectra will be given. Based on physical attributes of the soils, some new classification approaches have been developed and were applied to the soil panels. These phenomenological methods include examining contrast in certain broadband features and, based on these, calculating various broadband spectral ratios over subsets of the VNIR/SWIR spectral region. The separability of the reflectance spectra from the eight soil panels was also analyzed by applying the Spectral Angle Mapper (SAM) hyperspectral distance metric to quantify the separations between all pairs of soil types and conditions. Finally, a neural network approach was applied to determine distinguishing features of the spectra. The phenomenological approaches, SAM analyses, and neural network results will be compared.
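The SAM metric used in the pairwise separability analysis can be sketched as follows. This is a minimal illustration with made-up reflectance values, not the measured panel spectra: SAM treats each spectrum as a vector in band space and reports the angle between two spectra, so it is insensitive to overall brightness but sensitive to spectral shape.

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral Angle Mapper (SAM): angle in radians between two spectra
    treated as vectors in band space; insensitive to overall brightness."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    cos_theta = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Illustrative reflectance spectra (rows = soil conditions, columns = bands)
spectra = np.array([
    [0.10, 0.15, 0.22, 0.30],   # hypothetical dark soil
    [0.20, 0.30, 0.44, 0.60],   # same shape, twice as bright: SAM angle ~ 0
    [0.30, 0.25, 0.20, 0.15],   # different spectral shape: large SAM angle
])
angles = np.array([[spectral_angle(a, b) for b in spectra] for a in spectra])
```

Because rows 0 and 1 differ only in brightness, their SAM angle is essentially zero, while the shape difference between rows 0 and 2 produces a large angle.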
This investigation explores how hyperspectral distance metrics may be used as indicators of relative water depth in a coastal region. Spectral reflectance characteristics of near-shore waters imaged by an airborne hyperspectral sensor are examined. Commonly used hyperspectral distance metrics are applied to the data with the goal of distinguishing the spectra derived from various water depths. To improve the separability of the spectra, this study also examines, for one distance metric, the effect of processing only a subset of the spectral bands recorded by the sensor. The concept of selecting a subset of bands extends to improving the performance of algorithms that process hyperspectral data for detection, classification, or estimation. An additional benefit is reducing the dimensionality of the data and, thereby, the computational load. The key to reaching both of these objectives is to understand and match physical processes to appropriate mathematical metrics and performance measures in a comprehensive framework. The overall process is driven both by empirical analysis of hyperspectral data and by mathematical examination of the spectra.
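The band-subset idea can be sketched as follows. The spectra and band indices below are hypothetical, not the sensor's actual bands: when the depth signal is concentrated in a few bands and the remaining bands are nearly identical across depth classes, restricting a metric such as SAM to the informative bands increases the measured separation.

```python
import numpy as np

def sam(a, b, bands=None):
    """Spectral Angle Mapper, optionally restricted to a band subset."""
    if bands is not None:
        a, b = a[bands], b[bands]
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

# Hypothetical spectra for two water depths: the depth signal lives in the
# first two bands; the last two bands are nearly identical for both classes
# and dilute the angle when included.
shallow = np.array([0.06, 0.02, 0.30, 0.30])
deep    = np.array([0.02, 0.06, 0.30, 0.30])

full_angle   = sam(shallow, deep)                      # all bands
subset_angle = sam(shallow, deep, bands=np.array([0, 1]))  # informative bands
```

In this toy case the subset angle is several times larger than the full-band angle, which is the mechanism the study exploits.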
In support of hyperspectral sensor system design and parameter tradeoff investigations, an analytical end-to-end remote sensing system performance forecasting model has been extended to the longwave infrared (LWIR). The model uses statistical descriptions of surface emissivities and temperature variations in a scene and propagates them through the effects of the atmosphere, the sensor, and processing transformations. A resultant system performance metric is then calculated based on these propagated statistics. This paper presents the theory and operation of the extensions made to the model to cover the LWIR. Theory is presented on combining the effects of surface spectral emissivity variation and surface temperature variation on the upwelling radiance measured by a downward-looking LWIR hyperspectral sensor. Comparisons of the model predictions with measurements from an airborne LWIR hyperspectral sensor at the DoE ARM site are presented. Also discussed is the implementation of a plume model and the radiative transfer equations used to incorporate a thin man-made effluent plume in the upwelling radiance. Example parameter trades are included to show the utility of the model for sensor design and operation applications.
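The surface-leaving term that couples emissivity and temperature can be sketched with the Planck function. This is a minimal sketch, omitting atmospheric transmittance and reflected downwelling terms; the wavelength and emissivity values are illustrative, not the model's parameters.

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Planck blackbody spectral radiance in W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * KB * temp_k))

wavelength = 10e-6    # 10 micrometers, mid-LWIR
emissivity = 0.95     # illustrative soil-like emissivity

# Emitted surface-leaving radiance at two temperatures: variation in either
# the emissivity or the temperature perturbs the upwelling radiance.
radiance_300 = emissivity * planck(wavelength, 300.0)
radiance_310 = emissivity * planck(wavelength, 310.0)
```

The propagated scene statistics in the model arise from treating both the emissivity and the temperature in this product as random quantities.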
A number of organizations are using the data collected by the HYperspectral Digital Imagery Collection Experiment (HYDICE) airborne sensor to demonstrate the utility of hyperspectral imagery (HSI) for a variety of applications. The interpretation and extrapolation of these results can be influenced by the nature and magnitude of any artifacts introduced by the HYDICE sensor. A short study was undertaken which first reviewed the literature for discussions of the sensor's noise characteristics and then extended those results with additional analyses of HYDICE data. These investigations used unprocessed image data from the onboard Flight Calibration Unit (FCU) lamp and ground scenes taken at three different sensor altitudes and sample integration times. Empirical estimates of the sensor signal-to-noise ratio (SNR) were compared to predictions from a radiometric performance model. The spectral band-to-band correlation structure of the sensor noise was studied. Using an end-to-end system performance model, the impact of various noise sources on subpixel detection was analyzed. The results show that, although a number of sensor artifacts exist, they have little impact on the interpretations of HSI utility derived from analyses of HYDICE data.
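The empirical SNR estimate from repeated calibration-lamp looks can be sketched as follows. The frame count, signal levels, and noise level are simulated stand-ins, not HYDICE FCU values: for a stable source, the band-by-band ratio of the temporal mean to the temporal standard deviation estimates the SNR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stack of repeated looks at a stable calibration lamp:
# 200 frames x 4 spectral bands, constant signal plus additive sensor noise.
signal = np.array([1000.0, 800.0, 600.0, 400.0])   # illustrative DN levels
frames = signal + rng.normal(scale=10.0, size=(200, 4))

# Empirical band-by-band SNR: temporal mean over temporal standard deviation.
snr = frames.mean(axis=0) / frames.std(axis=0, ddof=1)
```

With a noise sigma of 10 DN, the recovered SNR is close to signal/10 in each band, and the same statistic applied to real lamp frames can be compared against a radiometric performance model.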
In support of hyperspectral sensor system design and parameter tradeoff investigations, an analytical end-to-end remote sensing system performance forecasting model is being developed. The model uses statistical descriptions of class reflectances in a scene and propagates them through the effects of the atmosphere, the sensor, and any processing transformations. A resultant system performance metric is then calculated based on these propagated statistics. The model divides a remote sensing system into three main components: the scene, the sensor, and the processing algorithms. Scene effects modeled include the solar illumination, atmospheric transmittance, shade effects, adjacency effects, and overcast clouds. Sensor effects modeled include the following radiometric noise sources: shot noise, thermal noise, detector readout noise, quantization noise, and relative calibration error. The processing component includes atmospheric compensation, various linear transformations, and a spectral matched filter used to obtain detection probabilities. This model has been developed for the HYDICE airborne imaging spectrometer covering the reflective solar spectral region from 0.4 to 2.5 micrometers. The paper presents the theory and operation of the model, and provides the results of validation studies comparing the model predictions to results obtained using HYDICE data. An example parameter trade study is also included to show the utility of the model for system design and operation applications.
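The spectral matched filter at the end of the processing chain can be sketched as follows. The data are random stand-ins and the band count is arbitrary: with the normalization w = C^-1 s / (s^T C^-1 s), the filter output increases by exactly one when a full-pixel target signature s is added to a background pixel, which is what makes the output convenient for thresholding into detection probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)
bands = 5

# Zero-mean background samples and their spectral covariance
background = rng.normal(size=(1000, bands))
C = np.cov(background, rowvar=False)

s = np.ones(bands)                       # hypothetical target signature
Cinv_s = np.linalg.solve(C, s)
w = Cinv_s / (s @ Cinv_s)                # matched filter weights

pixel_bg = background[0]                 # a background pixel
pixel_tg = pixel_bg + s                  # same pixel with target added
score_bg = w @ pixel_bg
score_tg = w @ pixel_tg
```

Detection probability then follows from comparing the score distribution on background pixels with the distribution shifted by one on target pixels.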
Understanding the sources of uncertainty in GOES Imager IR data is important to meteorologists and scientists who develop meteorological products. One component of radiometric uncertainty that is not well characterized, unlike noise and calibration errors, arises from the sensor's MTF. To understand this effect it is necessary to know the amount of power at high spatial frequencies in a typical scene. The sensor MTF, however, acts as a low-pass filter on the scene spatial frequency content, passing low frequencies and attenuating higher frequencies. To study the effect of the higher spatial frequencies in a scene, a model of both the sensor MTF and the scene spatial frequency content has been developed. The scene model is based on data from the MODIS Airborne Simulator (MAS), a 50-channel radiometer-imager flown aboard a NASA ER-2. The MAS sensor has a 50 m IFOV at nadir, compared to the GOES 4 km IFOV. The data sets from which the scene model was developed contain various combinations of land and clouds from several flights. Overlapping power spectral densities from the two sensors validate the use of MAS data for a GOES scene model at high spatial frequencies. The sensor MTF model is based both on measurements made during pre-launch testing and on theoretical calculations from sensor f-number, detector size, and electronics filtering. The MTFs of Imager channels 2 and 4 are compared. Their difference is applied to the scene power spectra to evaluate the average radiometric error due to MTF differences.
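The radiometric effect of an MTF difference can be sketched as follows. The Gaussian MTF shape, cutoff frequencies, and power-law scene PSD here are illustrative stand-ins, not the measured Imager channel 2 and 4 MTFs: the error contribution is the scene power passed by one channel's MTF but attenuated by the other's.

```python
import numpy as np

f = np.linspace(0.01, 0.5, 500)          # spatial frequency (cycles/sample)
scene_psd = f ** -2.0                    # power-law scene spectrum (illustrative)

def mtf(f, fc):
    """Hypothetical Gaussian MTF with cutoff parameter fc."""
    return np.exp(-(f / fc) ** 2)

mtf_ch2 = mtf(f, 0.30)                   # stand-in for channel 2 (wider MTF)
mtf_ch4 = mtf(f, 0.20)                   # stand-in for channel 4 (narrower MTF)

# Scene power passed by one MTF but not the other (|MTF|^2 weights the PSD);
# integrated with a simple Riemann sum over frequency.
df = f[1] - f[0]
delta_power = np.sum(scene_psd * (mtf_ch2**2 - mtf_ch4**2)) * df
```

Because the wider MTF passes more high-frequency scene power, the integrated difference is positive, and its magnitude depends on how much power the scene holds at those frequencies.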
The use of a range/passive-IR histogram as an approach to pixel-level fusion for cued target detection is discussed. Target detection algorithms for laser radar range imagery often use a number of computationally-intensive operations to locate targets in an image. These steps may include performing global geometric transforms, locating the ground plane, or applying size filters with associated rules. Each pixel in the image is processed multiple times, a time-consuming chore. An alternative to examining every pixel is to cue such detailed algorithms directly to an image location which is likely to contain a target. Cueing reduces the burden of searching or processing the entire image for regions of interest, greatly decreasing the number of computations needed for target detection. Cueing can be accomplished by combining registered laser radar range and passive-IR data. Since these data are taken simultaneously and are co-registered by Lincoln Laboratory's airborne laser radar, it is possible to combine them to form a powerful set of discriminants. There are two possible approaches to fusing the range and passive-IR data: (1) the domains can be processed for detections in two parallel streams and the resulting detection maps combined, or (2) the domains can be fused first and then processed as a single stream for detections. In the first method, sometimes called 'image-level' fusion, the processing algorithm for each domain can be optimized to obtain the best combination of detection and false alarm statistics. In the second method, the domains are combined at the pixel level, and the result is processed directly for target detection. The latter approach also reduces the number of computations since only data of interest to both domains is processed further. In this article, the multi-dimensional laser radar sensor and ongoing efforts to develop an automatic target recognition (ATR) system are described.
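The pixel-level fusion step can be sketched with a joint range/passive-IR histogram. The synthetic data, IR threshold, and bin counts below are illustrative: each pixel is binned jointly by range and IR intensity, and the sparsely populated joint bins that are hot in IR become cueing candidates for the detailed detection algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Flattened, co-registered images: range (m) and passive-IR intensity.
range_img = rng.uniform(100.0, 1000.0, n)     # background range values
ir_img = rng.normal(50.0, 5.0, n)             # background IR intensities
# Inject a small, hot, range-compact target region (50 pixels)
ir_img[:50] = 90.0
range_img[:50] = 550.0

# Joint range/passive-IR histogram
hist, r_edges, i_edges = np.histogram2d(range_img, ir_img, bins=(20, 20))

# Cue on occupied joint bins with high IR intensity: only those few pixels
# are handed to the expensive range-domain detection processing.
hot = i_edges[:-1] >= 80.0                    # illustrative IR threshold
cue_bins = np.argwhere((hist > 0) & hot[np.newaxis, :])
```

In this toy scene only the injected target occupies a hot joint bin, so the cue limits detailed processing to a tiny fraction of the image.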
The processing of pixel-registered laser radar range and passive-IR imagery for cued target detection using the range/passive-IR histogram is discussed. The authors present results for IR imagery with positive passive-IR target-to-background contrast to study the performance of the algorithm in real-world scenarios. These results are compared with a more typical detection method.