Proc. SPIE. 7334, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV
KEYWORDS: Target detection, Signal to noise ratio, Hyperspectral imaging, Detection and tracking algorithms, Sensors, Image segmentation, Single mode fibers, Data conversion, Hyperspectral target detection, Algorithms
Irregular illumination across a hyperspectral image makes it difficult to detect targets in shadows, perform change
detection, and segment the contents of the scene. To correct the data in shadow, we first convert the data from
Cartesian space to a hyperspherical coordinate system. Each N-dimensional spectral vector is converted to N-1 spectral
angles and a magnitude representing the illumination value of the spectrum. Similar materials will have similar angles and
the differences in illumination will be described mostly by the magnitude.
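As a rough illustration of this coordinate change (the paper does not state which angle convention it uses), a standard hyperspherical transform can be sketched as follows; the function name and convention are assumptions for illustration only.

```python
import numpy as np

def to_hyperspherical(x):
    """Convert an N-band spectral vector into a magnitude and N-1 spectral angles.

    Uses the standard convention theta_k = arccos(x_k / sqrt(x_k^2 + ... + x_N^2));
    the magnitude is the Euclidean norm, interpreted here as the illumination value.
    """
    x = np.asarray(x, dtype=float)
    tail = np.sqrt(np.cumsum(x[::-1] ** 2)[::-1])    # tail[k] = ||(x_k, ..., x_N)||
    r = tail[0]                                      # overall magnitude (illumination)
    safe = np.where(tail[:-1] == 0, 1.0, tail[:-1])  # avoid division by zero
    angles = np.arccos(np.clip(x[:-1] / safe, -1.0, 1.0))
    return r, angles
```

Similar materials under different illumination should then map to nearly identical angle vectors, with the difference absorbed into r.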
In the data analyzed, we found that the distribution of illumination values is well approximated by the sum of two
Gaussian distributions, one for shadow and one for non-shadow. The Levenberg-Marquardt algorithm is used to fit the
empirical illumination distribution to the theoretical Gaussian sum. The LM algorithm is an iterative technique that
locates the minimum of a multivariate function that is expressed as the sum of squares of non-linear real-valued functions.
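A minimal sketch of this fit, assuming the illumination values have been histogrammed first; scipy's curve_fit runs Levenberg-Marquardt when called with method="lm", and the parameter names and initial guesses below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(r, a1, mu1, s1, a2, mu2, s2):
    # Sum of a shadow Gaussian and a non-shadow Gaussian.
    return (a1 * np.exp(-0.5 * ((r - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((r - mu2) / s2) ** 2))

def fit_illumination_histogram(magnitudes, bins=256):
    counts, edges = np.histogram(magnitudes, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Rough starting point: shadow mode in the lower quartile, lit mode in the upper.
    p0 = [counts.max(), np.percentile(magnitudes, 25), magnitudes.std() / 4,
          counts.max(), np.percentile(magnitudes, 75), magnitudes.std() / 4]
    params, _ = curve_fit(two_gaussians, centers, counts, p0=p0, method="lm")
    return params  # (a_shadow, mu_shadow, s_shadow, a_lit, mu_lit, s_lit)
```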
Once the shadow and non-shadow distributions have been modeled, we find the optimal point to be one standard
deviation out on the shadow distribution, which captures about 84% of the shadow pixels. This point is then used
as a threshold to decide whether each pixel is in shadow. Corrections are made to the shadow regions and a spectral matched
filter is applied to the image to test target detection in shadow regions. Results show a signal-to-noise gain over other
illumination suppression techniques.
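The thresholding step and a conventional spectral matched filter can be sketched as follows; the shadow-correction step itself is not reproduced here, and the filter shown is the usual covariance-whitened matched filter rather than anything specific to the paper.

```python
import numpy as np

def shadow_mask(magnitudes, mu_shadow, s_shadow):
    # Pixels below one standard deviation past the shadow mean (~84% of shadow pixels).
    return magnitudes < (mu_shadow + s_shadow)

def spectral_matched_filter(cube, target):
    # cube: (rows, cols, bands) hyperspectral image; target: (bands,) signature.
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(pixels, rowvar=False))
    d = target - mu
    w = cov_inv @ d / (d @ cov_inv @ d)   # unit response to the target signature
    return ((pixels - mu) @ w).reshape(cube.shape[:2])
```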
Automatic target detection (ATD) is a very challenging problem for the Army in ground-to-ground scenarios using
infrared (IR) sensors. I propose an ATD algorithm based on vector quantization (VQ). VQ is typically used for image
compression, where a codebook is created from an example image using the Linde-Buzo-Gray (LBG) algorithm. The
codebook will be trained on clutter images containing no targets, thus creating a clutter codebook. The idea is to encode
and decode new images using the clutter codebook and calculate the VQ error. The error due solely to the compression
will be approximately consistent across the image. In the areas that contain new objects in the scene (objects the
codebook has not been trained on), we should see the consistent compression error plus an increased "non-training error"
because pixel blocks representing the new object are not included in the codebook. After the decoding
process, areas in the image with large overall error will correspond to pixel blocks not in the codebook. The Kolmogorov-Smirnov distance is then used to classify new objects by comparing the image's error distribution against a reference clutter error distribution. Because the VQ algorithm trains on clutter rather than on targets, it avoids the novel-target problem common to many target-trained algorithms. The algorithm is run over a data
set of images and the results show that the VQ detection algorithm performs as well as the Army benchmark algorithm.
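A compact sketch of the detection pipeline, assuming a grayscale IR image cut into square pixel blocks; the block size, codebook size, the use of scipy's k-means as a stand-in for the LBG splitting procedure, and computing the KS distance over the whole image rather than locally are all assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq
from scipy.stats import ks_2samp

def image_blocks(img, block=8):
    # Cut the image into non-overlapping block x block patches, flattened to vectors.
    h, w = img.shape[0] // block * block, img.shape[1] // block * block
    patches = img[:h, :w].reshape(h // block, block, w // block, block)
    return patches.transpose(0, 2, 1, 3).reshape(-1, block * block).astype(float)

def train_clutter_codebook(clutter_images, k=256):
    # k-means minimizes the same distortion as LBG, so it serves here as a proxy.
    obs = np.vstack([image_blocks(im) for im in clutter_images])
    codebook, _ = kmeans(obs, k)
    return codebook

def vq_error(img, codebook):
    # Per-block distance to the nearest clutter codeword (encode/decode error).
    _, dist = vq(image_blocks(img), codebook)
    return dist

def detection_score(img, codebook, reference_clutter_error):
    # KS distance between this image's error distribution and the clutter reference;
    # larger values suggest pixel blocks the codebook was never trained on.
    stat, _ = ks_2samp(vq_error(img, codebook), reference_clutter_error)
    return stat
```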
We have proposed a new method for illumination suppression in hyperspectral image data. This involves transforming
the data into a hyperspherical coordinate system, segmenting the data cloud into a large number of classes according to
the radius dimension, and then demeaning each class, thereby eliminating the distortion introduced by differential
absorption in shaded regions. This method was evaluated against two other illumination-suppression methods using two
metrics: visual assessment and spectral similarity of similar materials in shaded and fully illuminated regions. The
proposed method shows markedly superior performance by each of these metrics.
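A minimal sketch of the transform-segment-demean procedure, assuming the radius classes are taken as equal-population quantile bins (the paper only specifies a large number of classes along the radius dimension):

```python
import numpy as np

def demean_by_radius(cube, n_classes=100):
    # cube: (rows, cols, bands). Bin pixels by their hyperspherical radius
    # (illumination magnitude), then subtract each bin's mean spectrum.
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    radius = np.linalg.norm(pixels, axis=1)
    edges = np.quantile(radius, np.linspace(0.0, 1.0, n_classes + 1))
    labels = np.clip(np.searchsorted(edges, radius, side="right") - 1, 0, n_classes - 1)
    out = np.empty_like(pixels)
    for c in range(n_classes):
        idx = labels == c
        if idx.any():
            out[idx] = pixels[idx] - pixels[idx].mean(axis=0)
    return out.reshape(cube.shape)
```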
Designing and testing algorithms to process hyperspectral imagery is a difficult process due to the sheer volume of the
data that needs to be analyzed. The work is not only time-consuming and memory-intensive, but also consumes a great deal
of disk space and makes results difficult to track. We present a system that addresses these issues by storing all
information in a centralized database, routing the processing of the data to compute servers, and presenting an intuitive
interface for running experiments on multiple images with varying parameters.
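As a purely hypothetical sketch of what the central bookkeeping might look like (the paper does not describe its schema or interface), a single experiments table could record each run's image, algorithm, parameters, assigned compute server, and result location:

```python
import sqlite3

# Illustrative schema only; the table and column names are assumptions.
conn = sqlite3.connect("hsi_experiments.db")
conn.execute("""CREATE TABLE IF NOT EXISTS experiments (
    id INTEGER PRIMARY KEY,
    image TEXT, algorithm TEXT, parameters TEXT,
    compute_server TEXT, result_path TEXT, status TEXT)""")

def queue_experiment(image, algorithm, parameters, server):
    # Register a run centrally; a compute server would later pick it up,
    # process the image, and record where its output lives.
    conn.execute(
        "INSERT INTO experiments (image, algorithm, parameters, compute_server, status)"
        " VALUES (?, ?, ?, ?, 'queued')",
        (image, algorithm, parameters, server))
    conn.commit()
```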