An adaptive algorithm is described for deriving constant false alarm rate (CFAR) detection thresholds based on
statistically motivated models of actual spectral detector output distributions. The algorithm dynamically tracks the
distribution of detector observables and fits the observed distribution to a suitable mixture density model function. The
fitted distribution model is used to compute numerical detection thresholds that achieve a constant probability of false
alarm (Pfa) per pixel. Typically, gamma mixture densities are used to model the outputs of anomaly detectors based on
quadratic decision statistics, while normal mixture densities are used for linear matched-filter-type detectors. To
achieve the computational efficiency required for real-time implementations of the algorithm on mainstream
microprocessors, a robust yet considerably less complex exponential mixture model was recently developed as a general
approximation to common long-tailed detector distributions. Within the region of operational interest, namely between
the primary mode and the far tail, this approximation serves as an accurate model while providing significant reduction
in computational cost. We compare the performance of the exponential approximation against the full gamma
and normal models. We also demonstrate the false alarm regulation performance of the adaptive CFAR algorithm using
anomaly and matched-filter detector outputs derived from actual VNIR-band hyperspectral imagery collected by the Civil
Air Patrol (CAP) Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) system.
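The threshold computation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two-component exponential mixture parameters (weights and rates) are arbitrary placeholders standing in for fitted values, and the threshold for a target Pfa is found by simple bisection on the mixture's tail probability.

```python
import math

# Hypothetical fitted two-component exponential mixture:
#   P(X > t) = sum_i w_i * exp(-lam_i * t)
# The weights/rates below are illustrative placeholders, not fitted values.
weights = [0.95, 0.05]
rates = [5.0, 0.8]      # the low-rate component models the long tail

def tail_prob(t):
    """Tail probability P(X > t) of the exponential mixture."""
    return sum(w * math.exp(-lam * t) for w, lam in zip(weights, rates))

def cfar_threshold(pfa, lo=0.0, hi=100.0, iters=200):
    """Bisection for the threshold t satisfying P(X > t) = pfa.

    tail_prob is monotone decreasing, so the root is bracketed
    as long as tail_prob(hi) < pfa < tail_prob(lo).
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if tail_prob(mid) > pfa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

threshold = cfar_threshold(1e-4)
```

Because the mixture tail has a closed form, evaluating it is far cheaper than evaluating gamma or normal mixture tails, which is the computational advantage the abstract refers to.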
The theory of asymptotic eigenvalue distributions of sample covariance matrices has been applied to array processing and model identification problems that require characterization of signal and noise modes in vector-valued observations. It naturally applies in cases where the dimensionality of the observation space is large compared with the signal model order. A similar situation holds for most hyperspectral image observations. Hyperspectral data is frequently described in terms of a "signal" component composed of linear combinations of endmember basis spectra, plus random additive "noise" from the sensor and environment. The number of resolvable signal modes is typically much smaller than the number of spectral bands, and most of the orthogonal spectral dimensions generated by a principal components analysis are dominated by noise.
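The asymptotic behavior referred to here can be illustrated with the Marchenko-Pastur law, the canonical result for sample covariance eigenvalues of pure noise. In the sketch below (dimensions are arbitrary, not tied to any sensor), the eigenvalues of a noise-only sample covariance matrix concentrate on the predicted support, which is what makes noise-dominated principal components identifiable:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 100, 1000            # "bands" x "pixels"; aspect ratio gamma = p/n
sigma2 = 1.0                # true noise variance (known here by construction)

# Pure-noise observations and their sample covariance
X = rng.standard_normal((p, n)) * np.sqrt(sigma2)
S = X @ X.T / n
eig = np.linalg.eigvalsh(S)

# Marchenko-Pastur support edges for noise-only eigenvalues:
# [sigma2 * (1 - sqrt(gamma))^2, sigma2 * (1 + sqrt(gamma))^2]
gamma = p / n
lo_edge = sigma2 * (1 - np.sqrt(gamma)) ** 2
hi_edge = sigma2 * (1 + np.sqrt(gamma)) ** 2
```

For finite n the extreme eigenvalues fluctuate slightly around the edges, but the bulk lies well inside the predicted interval.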
Analytical characterization of the "noise eigenmodes" of a hyperspectral data cube supports the development of objective methods for estimating image noise statistics, signal-to-noise ratio, and the complexity and content of the underlying spectral scene. This paper reviews some fundamental results in eigenvalue distribution theory for high-dimensional data, and explores potential applications of the theory to hyperspectral data analysis. Specific applications developed and illustrated in the paper include scene-based estimation of noise-equivalent spectral radiance (NESR), and automated selection of signal-bearing and noise-limited subspaces for spectral analysis.
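One way the subspace-selection idea can be sketched is to count eigenvalues exceeding the Marchenko-Pastur upper edge computed from an in-scene noise-variance estimate. Everything below is an illustrative heuristic, not the paper's method: the synthetic "endmember" data, the median-eigenvalue noise estimate, and the 1.1 safety factor are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 50, 2000             # bands x pixels
k_true = 3                  # number of synthetic signal modes (assumed)

# Synthetic data: k_true strong "endmember" directions plus white noise
A = rng.standard_normal((p, k_true)) * 5.0      # endmember spectra
abund = rng.random((k_true, n))                 # abundances in [0, 1)
X = A @ abund + rng.standard_normal((p, n))     # unit-variance noise

S = np.cov(X)               # np.cov treats rows as variables -> p x p
eig = np.sort(np.linalg.eigvalsh(S))[::-1]

# Scene-based noise-variance estimate: the median eigenvalue is
# noise-dominated when the signal order is small relative to p.
sigma2_hat = np.median(eig)

# Eigenvalues above the Marchenko-Pastur upper edge (with a small
# illustrative safety margin) are declared signal-bearing.
edge = sigma2_hat * (1 + np.sqrt(p / n)) ** 2 * 1.1
k_hat = int(np.sum(eig > edge))
```

The estimated dimension `k_hat` then partitions the principal components into a signal-bearing subspace (the top `k_hat` eigenvectors) and a noise-limited complement, in the spirit of the subspace selection described above.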