A 16-band plenoptic camera allows for the rapid exchange of filter sets via a 4x4 filter array on the front aperture of the lens. This ability to swap filters allows an operator to quickly adapt to different locales or threat intelligence. Typically, such a system incorporates a default set of 16 equally spaced flat-topped filters. When the operating theater or the likely targets of interest are known, it becomes advantageous to tune the filters. We propose using a modified beta distribution to parameterize the space of possible filters and differential evolution (DE) to search over that space of filter designs. The modified beta distribution allows us to jointly optimize the width, taper, and wavelength center of each single- or multi-pass filter in the set over a number of evolutionary steps. Further, by constraining the function parameters we can develop solutions that are not just theoretical but manufacturable. We examine two independent tasks: general spectral sensing and target detection. In the general spectral sensing task we utilize the theory of compressive sensing (CS) and find filters that generate codings which minimize the CS reconstruction error based on a fixed spectral dictionary of endmembers. For the target detection task, given a set of known targets, we train the filters to optimize the separation of the background and target signatures. We compare our results to the default 16-filter, flat-topped, non-overlapping set that ships with the plenoptic camera and to previously acquired full-resolution hyperspectral data.
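As an illustration of this style of parameterization, a single filter's transmission curve can be sketched with a symmetric beta-like kernel; the function name, the symmetric-taper simplification, and all parameter names below are our own assumptions, not the paper's actual parameterization:

```python
import numpy as np

def beta_filter(wavelengths, center, width, taper):
    """Hypothetical transmission profile from a symmetric beta-style kernel.

    `center` and `width` are in the same units as `wavelengths` (e.g. nm);
    `taper` = 1 yields a flat-topped (boxcar) filter, while larger values
    round the edges toward a smooth bell shape."""
    # Map each wavelength into [0, 1] over the filter's support.
    x = (wavelengths - (center - width / 2.0)) / width
    t = np.zeros_like(wavelengths, dtype=float)
    inside = (x > 0) & (x < 1)
    # Peak-normalized symmetric beta kernel: [x(1-x)]^(taper-1).
    t[inside] = (x[inside] * (1 - x[inside])) ** (taper - 1)
    return t / (0.25 ** (taper - 1))
```

Under this sketch, a DE optimizer would evolve the `(center, width, taper)` triples of all 16 filters jointly, with box constraints keeping each triple in a manufacturable range.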
Commercial multispectral satellite sensors spend much of their time over the oceans. NRL has demonstrated an automatic processing system for finding ships at sea using commercially available multispectral data. To distinguish ships from whitecaps and clouds, a water/cloud clutter subspace is estimated and a continuum-fusion-derived anomaly-detection algorithm is applied. This provides a maritime awareness capability with an acceptable detection rate while maintaining a low false-alarm rate. The system also provides a confidence metric, which can be used to further limit the false-alarm rate.
Proc. SPIE. 7334, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV
KEYWORDS: Target detection, Signal to noise ratio, Hyperspectral imaging, Detection and tracking algorithms, Sensors, Image segmentation, Single mode fibers, Data conversion, Hyperspectral target detection, Algorithms
Irregular illumination across a hyperspectral image makes it difficult to detect targets in shadows, perform change
detection, and segment the contents of the scene. To correct the data in shadow, we first convert the data from
Cartesian space to a hyperspherical coordinate system. Each N-dimensional spectral vector is converted to N-1 spectral
angles and a magnitude representing the illumination value of the spectra. Similar materials will have similar angles and
the differences in illumination will be described mostly by the magnitude.
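The coordinate change described above can be sketched as follows; the function name and the particular hyperspherical angle convention are our own choices, since the abstract does not specify one:

```python
import numpy as np

def to_hyperspherical(v):
    """Convert an N-dimensional spectral vector to N-1 angles plus a magnitude.

    angles[i] is the arctangent of the norm of the remaining tail against
    component i (one common hyperspherical convention)."""
    v = np.asarray(v, dtype=float)
    r = np.linalg.norm(v)          # magnitude carries the illumination
    angles = np.empty(v.size - 1)
    for i in range(v.size - 1):
        tail = np.linalg.norm(v[i + 1:])
        angles[i] = np.arctan2(tail, v[i])
    return angles, r
```

Note that a pixel `v` and a dimmer copy `c * v` (with `c > 0`) map to identical angles and differ only in magnitude, which is exactly the property the shadow correction relies on.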
In the data analyzed, we found that the distribution of illumination values is well approximated by the sum of two
Gaussian distributions, one for shadow and one for non-shadow. The Levenberg-Marquardt (LM) algorithm is used to fit the
empirical illumination distribution to the theoretical Gaussian sum. The LM algorithm is an iterative technique that
locates the minimum of a multivariate function that is expressed as the sum of squares of non-linear real-valued
functions.
Once the shadow and non-shadow distributions have been modeled, we find the optimal point to be one standard
deviation out on the shadow distribution, allowing for the selection of about 84% of the shadows. This point is then used
as a threshold to decide if the pixel is shadow or not. Corrections are made to the shadow regions and a spectral matched
filter is applied to the image to test target detection in shadow regions. Results show a signal-to-noise gain over other
illumination suppression techniques.
Small object detection with a low false alarm rate remains a challenge for automated hyperspectral detection algorithms
when the background environment is cluttered. In order to approach this problem we are developing a compact
hyperspectral sensor that can be fielded from a small unmanned airborne platform. This platform is capable of flying low
and slow, facilitating the collection of hyperspectral imagery that has a small ground-sample distance (GSD) and small
atmospheric distortion. Using high-resolution hyperspectral imagery we simulate various ranges between the sensor and
the objects of interest. This numerical study aids in analysis of the effects of stand-off distance on detection versus false
alarm rates when using standard hyperspectral detection algorithms. Preliminary experimental evidence supports our findings.
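One simple way to simulate a larger stand-off distance from high-resolution imagery is block-averaging the spatial axes to coarsen the effective GSD; the abstract does not say how the range simulation was done, so the function below is only a plausible sketch with invented names:

```python
import numpy as np

def simulate_standoff(cube, factor):
    """Simulate a longer stand-off range by block-averaging the spatial
    axes of a (rows, cols, bands) hyperspectral cube by an integer factor,
    multiplying the effective ground-sample distance by `factor`."""
    r, c, b = cube.shape
    r2, c2 = r // factor, c // factor
    trimmed = cube[:r2 * factor, :c2 * factor, :]   # drop ragged edges
    return trimmed.reshape(r2, factor, c2, factor, b).mean(axis=(1, 3))
```

Running a detection algorithm on the cube at several `factor` values then traces out detection rate versus false-alarm rate as a function of simulated range.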
We have proposed a new method for illumination suppression in hyperspectral image data. This involves transforming
the data into a hyperspherical coordinate system, segmenting the data cloud into a large number of classes according to
the radius dimension, and then demeaning each class, thereby eliminating the distortion introduced by differential
absorption in shaded regions. This method was evaluated against two other illumination-suppression methods using two
metrics: visual assessment and spectral similarity of similar materials in shaded and fully illuminated regions. The
proposed method shows markedly superior performance by each of these metrics.
Designing and testing algorithms to process hyperspectral imagery is a difficult process due to the sheer volume of the
data that needs to be analyzed. Processing is not only time-consuming and memory-intensive, but also consumes a great amount
of disk space and makes results difficult to track. We present a system that addresses these issues by storing all
information in a centralized database, routing the processing of the data to compute servers, and presenting an intuitive
interface for running experiments on multiple images with varying parameters.
Hyperspectral focal plane arrays typically contain many pixels that are excessively noisy, dead, or exhibit poor
signal-to-noise performance in comparison to the average pixel. These bad pixels can significantly impair the performance of
spectral target-detection algorithms. Even a single missed bad pixel can lead to false alarms. If the bad pixels are
sparsely populated across the focal plane, the over-sampling in both spatial and spectral dimensions of the array can be
capitalized upon to replace these pixels without significant loss of information. However, bad pixels are frequently
localized in clusters, requiring a replacement strategy that, rather than providing a good estimate of the missing data,
instead minimizes artifacts that may negatively affect the performance of spectral detection algorithms. In this paper, we
evaluate a robust method to automatically identify bad pixels for short-wavelength infrared (SWIR) hyperspectral
sensors. In addition, we introduce a novel procedure for the replacement of these pixels, which we demonstrate
provides a better estimate of the original pixel value compared to interpolation methods for bad pixels found as both
isolated individuals and in clusters. The advantages of our technique are discussed and demonstrated with data from
several different airborne sensor systems.
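A simple stand-in for the identify-and-replace pipeline is sketched below; the robust-noise criterion, the neighbor-averaging replacement, and all names are our own illustrative choices, not the paper's actual procedure:

```python
import numpy as np

def find_bad_pixels(dark_frames, k=5.0):
    """Flag focal-plane elements whose dark-frame temporal noise deviates
    from the median by more than k robust standard deviations (MAD-based),
    a simple stand-in for a full identification procedure."""
    noise = dark_frames.std(axis=0)            # per-element temporal noise
    med = np.median(noise)
    mad = np.median(np.abs(noise - med)) * 1.4826 + 1e-12
    return np.abs(noise - med) > k * mad

def replace_bad_pixels(frame, bad):
    """Replace each flagged element with the mean of its valid neighbors
    in a 3x3 window (ignoring other bad pixels)."""
    out = frame.astype(float).copy()
    rows, cols = frame.shape
    for r, c in zip(*np.nonzero(bad)):
        vals = [frame[rr, cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
                if not bad[rr, cc]]
        if vals:
            out[r, c] = np.mean(vals)
    return out
```

Note that simple neighbor averaging degrades for clustered bad pixels, which is precisely the case the paper's replacement procedure is designed to handle better.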
Covariance equalization (CE) is a method by which one can predict the change in an object's hyperspectral signature
due to changes in sun position, atmospheric conditions, and viewing angle and range. Specifically, CE produces a linear
transformation that relates the object's signature as measured at the sensor at a particular time to that measured at
another time and under different conditions. The transformation is based on the background statistics of a scene imaged
at the two times. Although CE was derived under the assumption that the two images cover mostly the same geographic
area, it also has been found to work well for objects that have moved from one location to another. The CE technique
has been previously verified with data from a nadir-viewing visible hyperspectral camera. In this paper, however, we
show results from the application of CE to highly oblique hyperspectral SWIR data. We evaluate the utility of CE
primarily through its effectiveness in transforming signatures acquired under one set of conditions for application to
matched-filter object detection under a second set of conditions (e.g., view angle, slant range, altitude, atmospheric
conditions, and time of day). Object detection with highly oblique sensors (75 deg. to 80 deg. off-nadir) is far more
difficult than with nadir-viewing sensors for several reasons: increased atmospheric optical thickness, which results in
lower signal-to-noise and higher adjacency effects; fewer pixels on object; the effects of the nonuniformity of the
bidirectional reflectance function of most man-made objects; and the change in pixel size when measurements are taken at
different slant ranges.
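A common form of the CE transformation uses the symmetric square roots of the two scenes' background covariances; the sketch below assumes that form, and its function names and the use of an eigendecomposition for the matrix square root are our own choices:

```python
import numpy as np

def ce_transform(bg1, bg2):
    """Build a covariance-equalization map from background statistics of
    two scenes (rows = background pixels). Returns f such that f(s)
    predicts how a signature measured under scene-1 conditions should
    appear under scene-2 conditions."""
    m1, m2 = bg1.mean(axis=0), bg2.mean(axis=0)
    c1 = np.cov(bg1, rowvar=False)
    c2 = np.cov(bg2, rowvar=False)

    def sqrtm(c, inv=False):
        # Symmetric matrix square root (or inverse root) via eigendecomposition.
        w, v = np.linalg.eigh(c)
        w = np.clip(w, 1e-12, None)          # guard small eigenvalues
        return (v * w ** (-0.5 if inv else 0.5)) @ v.T

    t = sqrtm(c2) @ sqrtm(c1, inv=True)      # C2^(1/2) C1^(-1/2)
    return lambda s: t @ (s - m1) + m2
```

The transformed signature would then serve as the reference for matched-filter detection in the second scene, which is how the abstract's evaluation is framed.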
In deep, homogeneous waters with no internal sources at a particular wavelength, the vertical profiles of the reflectance R(z) and the downward diffuse attenuation coefficient K_d(z) approach asymptotic values that are inherent optical properties (IOPs) of the water. The apparent optical properties R(z) and K_d(z) are obtained from the upward and downward monospectral irradiance measurements E_u(z) and E_d(z) that are commonly available to optical oceanographers. Given a specific scattering phase function, there are unique correlations between these asymptotic IOPs and the absorption and scattering coefficients a and b that can be derived from the radiative transfer equation. Here we evaluate a method for first determining the asymptotic IOPs from E_u(z) and E_d(z) and then using the correlations to estimate the absorption and scattering coefficients a and b. At depths near the asymptotic radiance regime, both R(z) and K_d(z) can be fitted to a three-parameter model that sometimes helps in the determination of the asymptotic IOPs. A good estimate of a can be obtained from the asymptotic IOPs even when the scattering phase function is unknown; however, estimates of b are highly dependent on the assumed phase function.
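The apparent optical properties can be computed from the irradiance profiles with the standard definitions K_d(z) = -d ln E_d(z)/dz and R(z) = E_u(z)/E_d(z); the sketch below assumes those definitions and uses an invented function name:

```python
import numpy as np

def kd_and_r(z, ed, eu):
    """Apparent optical properties from irradiance profiles:
    K_d(z) = -d ln E_d / dz  and  R(z) = E_u(z) / E_d(z).
    For irradiances decaying at a single exponential rate, both are
    depth-independent, mimicking the asymptotic regime."""
    kd = -np.gradient(np.log(ed), z)
    r = eu / ed
    return kd, r
```

Fitting the depth dependence of these profiles (e.g. with the three-parameter model mentioned above) then extrapolates them to their asymptotic values, from which a and b are estimated via the phase-function-specific correlations.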