Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656501 (2007) https://doi.org/10.1117/12.741104
This PDF file contains the front matter associated with SPIE
Proceedings Volume 6565, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. (Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.) To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656502 (2007) https://doi.org/10.1117/12.718789
The basic multivariate anomaly detector ("the RX algorithm") of Kelly and Reed remains little altered after nearly 30 years and performs reasonably well with hyperspectral imagery. However, better performance can be achieved in spectral applications by recognizing a deficiency in the hypothesis test that generates RX. The problem is commonly associated with the improved performance that results from deleting several high-variance clutter dimensions before applying RX, a procedure not envisioned in the original algorithm. There is, moreover, a better way to enhance detection than simply deleting the offending subspace. Instead of invoking the "additive target" model, one can exploit expected differences in spectral variability between target and background signals in the clutter dimensions. Several methods are discussed for achieving detection gain using this principle. Two of these are based on modifications to the RX hypothesis test. One results in Joint Subspace Detection, the other in an algorithm with a similar form but which does not postulate a clutter subspace. Each of these modifies the RX algorithm to incorporate clutter-dependent weights, including "anti-RX" terms in the clutter subspace. A newer approach is also described, which effects a nonlinear suppression of false alarms that are detected by an RX-type algorithm, employed as a preprocessor. Both techniques rely ultimately on the incorporation of simple spectral phenomenology into the detection process.
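For context, the baseline RX statistic referred to above is the squared Mahalanobis distance of each pixel from the background mean. A minimal sketch in NumPy, on synthetic data (the band count, pixel count, and implanted offset are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 1000 background pixels in 20 bands, plus one
# implanted anomaly (all values are illustrative assumptions).
bands = 20
X = rng.normal(size=(1000, bands))
X[0] += 8.0  # implant an anomalous pixel at index 0

# RX statistic: squared Mahalanobis distance from the background mean.
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
rx = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

top = int(np.argmax(rx))  # the implanted pixel should score highest
```

The subspace-weighted variants discussed in the abstract modify this quadratic form; the plain statistic above is only the common starting point.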
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656503 (2007) https://doi.org/10.1117/12.719952
For known signals that are linearly superimposed on Gaussian backgrounds, the linear adaptive matched filter (AMF) is well-known to be the optimal detector. The AMF has furthermore proved to be remarkably effective in a broad range of circumstances where it is not optimal, and for which the optimal detector is not linear. In these cases, nonlinear detectors are theoretically superior, but direct estimation of nonlinear detectors in high-dimensional spaces often leads to flagrant overfitting and poor out-of-sample performance. Despite this difficulty in the general case, we will describe several situations in which nonlinearity can be effectively combined with the AMF to detect weak signals. This allows improvement over AMF performance while avoiding the full force of dimensionality's curse.
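The linear AMF itself is a whitened correlation with the known signature. A minimal sketch on synthetic data (signature, dimensions, and target strength are invented for illustration; the nonlinear extensions the abstract describes are not shown):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: a known signature s added weakly to one pixel
# of a Gaussian background sample.
bands = 30
s = rng.normal(size=bands)            # assumed-known target signature
B = rng.normal(size=(2000, bands))    # background pixels
B[0] += 1.5 * s                       # weak additive target at index 0

# Linear AMF: correlate whitened pixels with the whitened signature.
mu = B.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(B, rowvar=False))
w = cov_inv @ s
amf = (B - mu) @ w / np.sqrt(s @ w)   # unit variance under the background

detected = int(np.argmax(amf))
```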
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656504 (2007) https://doi.org/10.1117/12.719932
In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral
imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an
inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected
onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three
common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant
(FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition
to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the
four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a
nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick, all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectrum and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation between the projection of the current test pixel spectrum and that of the OWR mean spectrum is greater than a certain threshold.
Comparisons are made using receiver operating characteristics (ROC) curves.
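The core projection-and-separation scoring step can be sketched as follows, using PCA of the outer window as the projection basis (synthetic spectra; the dual-window extraction, the kernelized variants, and the FLD/EST bases are omitted, and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic outer-window (OWR) background spectra; 12 bands is arbitrary.
bands = 12
owr = rng.normal(size=(200, bands))
test_bg = rng.normal(size=bands)            # background-like test pixel
test_anom = rng.normal(size=bands) + 8.0    # strongly anomalous test pixel

# Projection basis: leading principal components of the centered OWR data.
owr_mean = owr.mean(axis=0)
_, _, Vt = np.linalg.svd(owr - owr_mean, full_matrices=False)
P = Vt[:3]                                   # top-3 PCA directions

def score(pixel):
    """Separation between the projected pixel and the projected OWR mean."""
    return float(np.linalg.norm(P @ (pixel - owr_mean)))

s_bg, s_anom = score(test_bg), score(test_anom)
```

Thresholding `score` over every pixel position, with the basis recomputed per dual window, yields the anomaly map the abstract describes.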
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656505 (2007) https://doi.org/10.1117/12.720766
An algorithm called the constrained signal detector (CSD) was
recently introduced for the purpose of target detection in
hyperspectral images. The CSD assumes that hyperspectral pixels
can be modeled as linear mixtures of material signatures and
stochastic noise. In theory, the CSD is superior to the popular
orthogonal subspace projection (OSP) technique.
The CSD requires knowledge of the spectra of the background
materials in a hyperspectral image. But in practice the background
material spectra are often unknown due to uncertainties in
illumination, atmospheric conditions, and the composition of the
scene being imaged. In this paper, estimation techniques are used
to create an adaptive version of the CSD. This adaptive algorithm
uses training data to develop a description of the image
background and adaptively implement the CSD. The adaptive CSD only
requires knowledge of the target spectrum. It is shown through
simulations that the adaptive CSD performs nearly as well as the
CSD operating with complete knowledge of the background material
spectra. The adaptive CSD is also tested using real hyperspectral
image data and its performance is compared to OSP.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656506 (2007) https://doi.org/10.1117/12.718381
The use of hyperspectral imaging (HSI) technology to support a variety of civilian, commercial, and military remote sensing applications is growing. The rich spectral information present in HSI allows for more accurate ground cover
identification and classification than with panchromatic or multispectral imagery. One class of problems where
hyperspectral images can be exploited, even when no a priori information about a particular ground cover class is
available, is anomaly detection. Here spectral outliers (anomalies) are detected based on how well each hyperpixel
(spectral irradiance vector for a given pixel position) fits within some background statistical model. Spectral anomalies
may correspond to areas of interest in a given scene. In this work, we compare several anomaly detectors found in the
literature in novel experiments. In particular, we study the performance of the anomaly detectors in detecting several
man-made painted panels in a natural background using visible/near-infrared hyperspectral imagery. The data were collected over the course of a nine-month period, allowing us to test the robustness of the anomaly detectors to seasonal change. The detectors considered include the simple Gaussian anomaly detector, a Gaussian mixture model
(GMM) anomaly detector, and the cluster-based anomaly detector (CBAD). We examine the effect of the number of
components for the GMM and the number of clusters for the CBAD. Our preliminary results suggest that the use of a
CBAD yields the best results for our data.
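The GMM detector can be sketched with scikit-learn: model the background as a mixture and score pixels by their negative log-likelihood. The two-cluster scene and the "panel" spectrum below are invented for illustration; they are not the paper's data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Two-cluster synthetic background in 8 bands (e.g. two ground-cover types).
bg = np.vstack([rng.normal(0.0, 1.0, size=(300, 8)),
                rng.normal(6.0, 1.0, size=(300, 8))])
panel = np.full((1, 8), 3.0)   # a "panel" spectrum between the clusters

gmm = GaussianMixture(n_components=2, random_state=0).fit(bg)

# Anomaly score: negative log-likelihood under the background mixture.
scores = -gmm.score_samples(np.vstack([bg, panel]))
panel_score = float(scores[-1])
```

The panel lies between the two background modes, so a single-Gaussian model could miss it while the mixture scores it as the most anomalous sample.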
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656507 (2007) https://doi.org/10.1117/12.718601
This study presents a new theoretical approach for anomaly detection using a priori information about targets. This a
priori knowledge deals with the general spectral behavior and the spatial distribution of targets. In this study, we
consider subpixel and isolated targets which are spectrally anomalous in one region of the spectrum but not in another.
This method differs fundamentally from matched filters, which are sensitive to even small errors in the target spectral signature. We incorporate the spectral a priori knowledge into a new detection distance, and we propose a Bayesian approach with a Markovian regularization to suppress potential targets that do not respect the spatial a priori knowledge. The merit of the method is illustrated on simulated data consisting of realistic anomalies superimposed on a real
HyMap hyperspectral image.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656509 (2007) https://doi.org/10.1117/12.718394
This paper presents an object detection algorithm based on the stochastic expectation-maximization (SEM) algorithm. The SEM algorithm iterates stochastic, expectation, and maximization steps to estimate class parameters in many applications, including hyperspectral data cubes (HDCs). However, applying the SEM algorithm directly to the classification of hyperspectral imagery becomes impractical because of the huge amount of data (e.g. 512 x 512 x 220).
To avoid this problem, we propose a preprocessing step that rapidly pre-classifies the data cube, yielding an object detection algorithm based on SEM for detecting small objects in hyperspectral imagery. In the proposed preprocessing step, we use the exponential of the Euclidean distance to rapidly separate the data cube into a potential object-of-interest class and a background class. The SEM algorithm is then employed to classify the potential object-of-interest class further into subclasses in order to detect the object-of-interest class. In experiments conducted on real hyperspectral imagery, the proposed algorithm shows a comparatively low false alarm rate even in challenging scenarios.
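A minimal sketch of the distance-based pre-classification step follows. The abstract does not specify the reference spectrum or the cut-off, so the mean spectrum and the 1% quantile below are assumptions, and the cube is synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Flattened synthetic cube: 64 x 64 pixels, 10 bands.
cube = rng.normal(size=(64 * 64, 10))
cube[:5] += 6.0                        # a small implanted object (5 pixels)

# Similarity: exponential of the Euclidean distance to the mean spectrum.
mean_spec = cube.mean(axis=0)
sim = np.exp(-np.linalg.norm(cube - mean_spec, axis=1))

# Pixels least similar to the background become candidates for SEM.
threshold = np.quantile(sim, 0.01)     # illustrative 1% cut
candidates = np.flatnonzero(sim < threshold)
```

Only the small candidate set is then passed to the iterative SEM classifier, which is what makes the full pipeline tractable on large cubes.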
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650A (2007) https://doi.org/10.1117/12.720230
This paper presents an Independent Component Analysis (ICA) based linear unmixing algorithm for target detection applications. ICA is a relatively new method that attempts to separate statistically independent sources from a mixed dataset. The developed algorithm contains two steps. In the first step, ICA-based linear unmixing is used to discriminate statistically independent sources in order to determine endmembers in a given dataset as well as their corresponding abundance images. In the second step, the unmixing results are analyzed to identify abundance images that correspond to the target class. The performance of the developed algorithm has been evaluated with several real-life hyperspectral image datasets.
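The first step can be illustrated with scikit-learn's FastICA on a toy linear mixture (the two-source scene and the mixing matrix are hypothetical; real cubes have many more bands and endmembers):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)

# Two independent, non-Gaussian abundance sources mixed into 3 "bands"
# by a hypothetical endmember matrix (all values invented for illustration).
S = rng.uniform(0.0, 1.0, size=(2000, 2))
A = np.array([[0.9, 0.3],
              [0.2, 0.8],
              [0.5, 0.5]])
X = S @ A.T                              # observed pixel spectra

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)             # recovered "abundance images"

# Up to sign and permutation (the usual ICA ambiguities), each recovered
# component should correlate strongly with one true source.
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
best = corr.max(axis=1)
```

The second step of the algorithm, identifying which recovered abundance image corresponds to the target class, is not shown here.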
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650B (2007) https://doi.org/10.1117/12.719779
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme: a good initialization can improve convergence speed and determine whether a global minimum is found and whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
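The effect of initialization on such iterative factorizations can be illustrated with a generic non-negative matrix factorization; scikit-learn's NMF stands in for the cPMF here (it is not the authors' algorithm), and the synthetic scene and settings are invented:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)

# Synthetic non-negative scene: 500 pixels mixing 3 endmembers (25 bands).
E = rng.uniform(0.1, 1.0, size=(3, 25))      # endmember spectra
A = rng.dirichlet(np.ones(3), size=500)      # per-pixel abundances
X = A @ E

# Compare a random initialization against an SVD-based one (NNDSVD).
errors = {}
for init in ("random", "nndsvd"):
    model = NMF(n_components=3, init=init, max_iter=1000, random_state=0)
    model.fit(X)
    errors[init] = float(model.reconstruction_err_)
```

Running both initializations on the same data exposes exactly the kind of sensitivity the paper studies: differing reconstruction errors and, on harder data, differing recovered endmembers.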
Novel Spectral Data Representation and Performance Assessment Tools
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650C (2007) https://doi.org/10.1117/12.717657
We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP*) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily-programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally-integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just an arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would reduce the need for expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels of a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e. de-convolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650D (2007) https://doi.org/10.1117/12.719348
There are many reconnaissance tasks which involve an image analyst viewing data from hyperspectral imaging systems and attempting to interpret it. Hyperspectral image data is intrinsically hard to understand, even when armed with mathematical knowledge and a range of current processing algorithms. This research is motivated by the search for new ways to convey information about the spectral content of imagery to people. In order to develop and assess the novel algorithms proposed, we have developed a tool for transforming different aspects of spectral imagery into sounds that an analyst can hear. Trials have been conducted which show that the use of these sonic mappings can assist a user in tasks such as rejecting false alarms generated by automatic detection algorithms. This paper describes some of the techniques used and reports on the results of user trials.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650E (2007) https://doi.org/10.1117/12.721051
The Hyperspectral Image Analysis Toolbox (HIAT) is a collection of algorithms that extend the capability of the MATLAB numerical computing environment for the processing of hyperspectral and multispectral imagery. The purpose of the Toolbox is to provide a suite of information extraction algorithms to users of hyperspectral and multispectral imagery. HIAT has been developed as part of the NSF Center for Subsurface Sensing and Imaging (CenSSIS) Solutionware, which seeks to develop a repository of reliable and reusable software tools that can be shared by researchers across research domains. HIAT provides easy access to feature extraction/selection, supervised and unsupervised classification, unmixing, and visualization algorithms developed at the Laboratory of Remote Sensing and Image Processing (LARSIP). This paper presents an overview of the tools and applications available in HIAT, using an AVIRIS image as an example. In addition, we present the new HIAT developments: unmixing, a new oversampling algorithm, true-color visualization, a crop tool, and GUI enhancements.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650F (2007) https://doi.org/10.1117/12.719363
In this paper an end-to-end hyperspectral imaging system model is described which has the ability to predict the
performance of hyperspectral imaging sensors in the visible through to the short-wave infrared regime for sub-pixel
targets. The model represents all aspects of the system including the target signature and background, the atmosphere,
the optical and electronic properties of the imaging spectrometer, as well as details of the processing algorithms
employed. It provides an efficient means of Monte-Carlo modelling for sensitivity analysis of model parameters over a
wide range. It is also capable of representing certain types of
non-Gaussian hyperspectral clutter arising from
heterogeneous backgrounds. The capabilities of the model are demonstrated in this paper by considering Uninhabited
Airborne Vehicle scenarios and comparing both multispectral and hyperspectral sensors. Both anomaly detection and
spectral matched-filter algorithms are characterised in terms of Receiver Operating Characteristic curves. Finally, some
results from a preliminary validation exercise are presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650G (2007) https://doi.org/10.1117/12.718134
Radiance Technologies has developed and integrated a multispectral/hyperspectral data analysis toolbox into an easy-to-use operator interface. HyperPACS™ (Hyperspectral data Processing Algorithm Comparison Software) allows the data analyst to process spectral data in multiple input formats with many different spectral algorithms and/or different algorithm parameters and options. Results are compared to user-supplied ground truth, and Receiver Operating Characteristic (ROC) curves providing a direct comparison of algorithm performance are generated. The HyperPACS™ GUI makes the software easy to use, and the architecture readily allows for the integration of custom algorithms provided by the analyst. Radiance has used HyperPACS™ in the evaluation of hyperspectral and multispectral algorithms in support of an ongoing program. A description of HyperPACS™, the GUI, and processing examples is given.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650H (2007) https://doi.org/10.1117/12.717037
We developed four new techniques to visualize hyperspectral image data for man-in-the-loop target detection. The methods, respectively: (1) display the subsequent bands as a movie ("movie"), (2) map the data onto three channels and display these as a colour image ("colour"), (3) display the correlation between the pixel signatures and a known target signature ("match"), and (4) display the output of a standard anomaly detector ("anomaly"). The movie technique requires no assumptions about the target signature and involves no information loss. The colour technique produces a single image that can be displayed in real time; a disadvantage of this technique is loss of information. A display of the match between a target signature and the pixels can be interpreted easily and quickly, but this technique relies on precise knowledge of the target signature. The anomaly detector flags pixels with signatures that deviate from the (local) background. We performed a target detection experiment with human observers to determine their relative performance with the four techniques. The results show that the "match" presentation yields the best performance, followed by "movie" and "anomaly", while performance with the "colour" presentation was the poorest. Each scheme has its advantages and disadvantages and is more or less suited for real-time and post-hoc processing. The rationale is that the final interpretation is best done by a human observer. In contrast to automatic target recognition systems, the interpretation of hyperspectral imagery by the human visual system is robust to noise and image transformations and requires a minimal number of assumptions (about the signatures of target and background, target shape, etc.). When more knowledge about target and background is available, this may be used to help the observer interpret the data (aided target detection).
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650I (2007) https://doi.org/10.1117/12.719982
We propose a method for selecting an optimal spatial filter based on both spectral and spatial information
to improve the discriminability of hyperspectral textures. The feature vector for each texture class contains
the covariance matrix elements in filtered versions of the texture. The new method reduces the length of the
representation by selecting an optimal subset of bands and also uses an optimized spatial filter to maximize
the distance between feature vectors for the different texture classes. Band selection is performed based on the
stepwise reduction of bands. We have applied this method to a database of textures acquired under different
illumination conditions and analyzed the classification results.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650J (2007) https://doi.org/10.1117/12.745429
In this paper we present a new topology-based algorithm for anomaly detection in dimensionally large datasets.
The motivating application is hyperspectral imaging, where the dataset can be a collection of ~10^6 points in R^k, representing the reflected (or radiometric) spectra of electromagnetic radiation. The algorithm begins
by building a graph whose edges connect close pairs of points. The background points are the points in the
largest components of this graph and all other points are designated as anomalies. The anomalies are ranked
according to their distance to the background. The algorithm is termed Topological Anomaly Detection (TAD).
The algorithm is tested on hyperspectral imagery collected with the HYDICE sensor which contains targets of
known reflectance and spatial location. Anomaly maps are created and compared to results from the common
anomaly detection algorithm RX. We show that the TAD algorithm performs better than RX by achieving
greater separation of the anomalies from the background for this dataset.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650L (2007) https://doi.org/10.1117/12.719034
In this paper, we report our recent investigation of principal components analysis (PCA) and JPEG2000 for hyperspectral image compression, in which PCA is used for spectral coding and JPEG2000 for spatial coding of the principal component (PC) images (referred to as PCA+JP2K). We find that such an integrated scheme significantly outperforms the commonly used 3-dimensional (3D) JPEG2000 (3D-JP2K), in which the discrete wavelet transform (DWT) is used for spectral coding, in rate-distortion performance. We also find that the best rate-distortion performance occurs when a subset of PCs is used instead of all the PCs. In the AVIRIS experiments, PCA+JP2K brings about a 5-10 dB increase in SNR compared to 3D-JP2K, whose SNR in turn is about 0.5 dB greater than that of other popular wavelet-based compression approaches, such as 3D-SPIHT and 3D-SPECK. The performance of data analysis using the reconstructed data is also evaluated; we find that using PCA for spectral decorrelation provides better performance, in particular at low bitrates. Schemes for low-complexity PCA are also presented, including spatial down-sampling in the estimation of the covariance matrix and the use of data with a non-zero mean. The compression performance on radiance and reflectance data is also compared, and instructive suggestions for practical applications are provided.
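The spectral-decorrelation step, keeping only a subset of PCs, can be sketched as follows (PCA via SVD on a synthetic low-rank cube; the JPEG2000 spatial coding of the PC images, which the paper pairs with this step, is omitted):

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic cube flattened to (pixels, bands): rank-4 spectra plus noise.
X = rng.normal(size=(4096, 4)) @ rng.normal(size=(4, 64)) \
    + 0.01 * rng.normal(size=(4096, 64))

# PCA via SVD of the mean-removed data; keep a small subset of PCs.
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 4
X_hat = (U[:, :k] * s[:k]) @ Vt[:k] + mu

# Reconstruction SNR from spectral truncation alone (no spatial coding).
snr_db = 10 * np.log10(np.sum((X - mu) ** 2) / np.sum((X - X_hat) ** 2))
```

In the full scheme, the k retained PC images would each be passed through a JPEG2000 spatial coder, and k would be tuned for rate-distortion rather than fixed.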
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650M (2007) https://doi.org/10.1117/12.719221
Foreign object detection processes are improving thanks to imaging spectroscopy techniques, through the employment of hyperspectral systems such as prism-grating-prism spectrographs. These devices offer a valuable but sometimes huge and redundant amount of spectral and spatial information that facilitates and speeds up the classification and sorting of materials in industrial production chains. In this work, different supervised and unsupervised Principal Components Analysis (PCA) algorithms are thoroughly applied to experimentally acquired hyperspectral images. The evaluated PCA versions implement different statistical mechanisms to maximize class separability. The PCA alternatives (traditional "m-method", "J-measure", SEPCOR, and "Supervised PCA") are compared by examining how the achieved spectral compression affects classification performance in terms of accuracy and execution time. Throughout the process, the classification stage is fixed and performed by an Artificial Neural Network (ANN). The developed techniques have been tested and successfully validated in the tobacco industry, where plastics, cords, cardboard, paper, textile threads, etc. must be detected so that only tobacco leaves enter the industrial chain.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650N (2007) https://doi.org/10.1117/12.719533
We present some new grating designs for use in a computed tomographic imaging spectrometer (CTIS) and discuss how they differ from previous gratings. One advantage of the new designs is that they provide added flexibility for a tunable CTIS instrument, and we show some preliminary data illustrating this advantage.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650O (2007) https://doi.org/10.1117/12.719900
A Computed Tomography Imaging Spectrometer (CTIS) is an imaging spectrometer which can acquire a multi-spectral
data set in a single snapshot (one focal plane array integration time) with no moving parts. Currently, CTIS instruments
use a specially designed computer generated hologram (CGH) to disperse the light from a given spectral band into a
grid of diffraction orders. The capabilities of the CTIS instrument can be greatly improved by replacing the static CGH
dispersing element with a reconfigurable liquid crystal spatial light modulator. The liquid crystal spatial light modulator
is tuned electronically, enabling the CTIS to remain a non-scanning imaging spectrometer with no moving parts. The
ability to rapidly reconfigure the dispersing element of the CTIS allows the spectral and spatial resolution to be changed by
varying the number of diffraction orders, diffraction efficiency, etc. In this work, we present the initial results of using
a fully addressable, 2-D liquid crystal spatial light modulator as the dispersing element in a CTIS instrument.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650P (2007) https://doi.org/10.1117/12.719629
PAR Government Systems Corporation (PAR), with Advanced Coherent Technologies, LLC (ACT), has developed
affordable, narrow-band polarimetry sensor hardware and software based upon the PAR Mission Adaptable Narrowband
Tunable Imaging Sensor (MANTIS). The sensor has been deployed in multiple environments, and polarimetric imagery of
the clear blue sky and the sea surface has been collected. In addition, a significant amount of calibration data has been
collected to correctly calibrate the sensor for real-time Stokes vector imaging. Data collected with the MANTIS
polarization sensor have been compared to modeled data. The sensor hardware and software are described, and some
representative calibration data are presented and compared to a developing model.
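The Stokes vector imaging mentioned above can be illustrated with the standard reduction of four polarizer-angle intensity images to the linear Stokes parameters. This is the generic textbook formulation, not necessarily the MANTIS calibration pipeline:

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Reduce four polarizer-angle intensity images to linear Stokes images.

    Standard reduction (generic; the MANTIS calibration chain is not
    described in the abstract):
        S0 = (I0 + I45 + I90 + I135) / 2
        S1 = I0 - I90
        S2 = I45 - I135
    Returns S0, S1, S2 and the degree of linear polarization (DoLP).
    """
    s0 = (i0 + i45 + i90 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    # Guard against division by zero in dark pixels.
    dolp = np.hypot(s1, s2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dolp
```

For fully polarized light aligned with the 0° polarizer, this yields S0 = S1 and DoLP = 1, which is a convenient sanity check on a calibrated sensor.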
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650R (2007) https://doi.org/10.1117/12.719247
Laser multi-spectral polarimetric diffuse scattering (LAMPODS) imaging is an approach that maps object intrinsic optical
scattering properties rather than the scattered-light intensity mapped by conventional imaging. The technique involves
comprehensive measurements to parameterize object optical responses with respect to wavelength, polarization, and
diffuse scattering. The derived parametric properties constitute LAMPODS images, which are more fundamental than
conventional images. The application is to uncover and discriminate features that are not obvious or obtainable with
conventional imaging. The experiments were performed for a number of targets using near-infrared lasers. A system
architecture, configured similarly to an optical wireless network, is described; it can serve as the design for a
LAMPODS "camera". The results for a number of targets indicate unique LAMPODS capabilities to distinguish and
classify features based on optics principles rather than phenomenological image processing. Examples of uncovering,
enhancing, and interpreting target features, which are unseen or ambiguous in conventional images, are described.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650S (2007) https://doi.org/10.1117/12.718559
Results of field testing of a LWIR hyperspectral imager based on a Sagnac interferometer and an uncooled microbolometer
array are presented. Signal-to-noise ratios of nearly 1000 are found for 30 bands across the LWIR (8-14 microns) at
useful data collection rates (7000 spectra/second). Spectra are compared to those collected simultaneously with a
cryogenic LWIR hyperspectral system (U. Hawaii AHI).
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650U (2007) https://doi.org/10.1117/12.721049
Benthic habitats are the different bottom environments as defined by distinct physical, geochemical, and biological
characteristics. Hyperspectral remote sensing has great potential to map and monitor the complex dynamics associated
with estuarine and nearshore benthic habitats. However, utilizing hyperspectral unmixing to map these areas requires
compensating for variable bathymetry and water optical properties. In this paper, we compare two methods to unmix
hyperspectral imagery in estuarine and nearshore benthic habitats. The first method is a two-stage method where
bathymetry and optical properties are first estimated using Lee's inversion model and linear unmixing is then performed
using variable endmembers derived from propagating bottom spectral signatures to the surface using the estimated
bathymetry and optical properties. In the second, a nonlinear optimization approach is used to simultaneously
retrieve abundances, optical properties, and bathymetry. Preliminary results are presented using AVIRIS data from
Kaneohe Bay, Hawaii. SHOALS data from the area are used to evaluate the accuracy of the retrieved bathymetry, and
comparisons between abundance estimates for sand, algae, and coral are performed. These results show the potential of
the nonlinear approach to provide better estimates of bottom coverage, but at a significantly higher computational price.
The experimental work also points to the need for a well-characterized site for testing and validating unmixing
algorithms.
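The linear-unmixing stage of the two-stage method can be sketched as a constrained least-squares problem. This illustrative version assumes the bottom endmembers have already been propagated to the surface (the Lee-inversion step is not reproduced) and imposes only a sum-to-one constraint:

```python
import numpy as np

def linear_unmix(pixel, endmembers):
    """Least-squares abundance estimate with a sum-to-one constraint.

    pixel: (bands,) surface spectrum.  endmembers: (bands, n_em) matrix of
    endmember spectra, assumed here to be already propagated through the
    water column to the surface.  The sum-to-one constraint is imposed by
    augmenting the system with a heavily weighted row of ones (a common
    trick; the weight value is an illustrative choice).
    """
    bands, n_em = endmembers.shape
    w = 1e3  # weight on the sum-to-one constraint row
    A = np.vstack([endmembers, w * np.ones((1, n_em))])
    b = np.concatenate([pixel, [w]])
    abundances, *_ = np.linalg.lstsq(A, b, rcond=None)
    return abundances
```

The nonlinear alternative described in the abstract would instead fold bathymetry and water optical properties into the optimization variables, which explains its higher computational price.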
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650V (2007) https://doi.org/10.1117/12.719885
LIDAR data taken over the Elkhorn Slough region in central California were analyzed for terrain classification. Data were collected on April 12, 2005 over a 10 km × 20 km region of mixed-use agriculture and wetlands. LIDAR temporal information (elevation values), intensity of the returned light, and the distribution of point returns (in both vertical and spatial dimensions) were used to distinguish land-cover types. Terrain classification was performed using LIDAR data alone, multi-spectral QuickBird data alone, and a combination of the two data types. Results are compared against extensive ground truth information.
Jan M. H. Hendrickx, Jan Kleissl, Jesús D. Gómez Vélez, Sung-ho Hong, José R. Fábrega Duque, David Vega, Hernán A. Moreno Ramírez, Fred L. Ogden
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650W (2007) https://doi.org/10.1117/12.718124
Accurate estimation of sensible and latent heat fluxes as well as soil moisture from remotely sensed satellite images poses a great challenge. Yet it is critical to face this challenge, since estimating the spatial and temporal distributions of these parameters over large areas is impossible using only ground measurements. A major difficulty for the calibration and validation of operational remote sensing methods such as SEBAL, METRIC, and ALEXI is the ground measurement of sensible heat fluxes at a scale similar to the spatial resolution of the remote sensing image. While the spatial length scale of remote sensing images covers a range from 30 m (Landsat) to 1000 m (MODIS), direct methods to measure sensible heat fluxes such as eddy covariance (EC) only provide point measurements at a scale that may be considerably smaller than the estimate obtained from a remote sensing method. The Large Aperture Scintillometer (LAS) flux footprint area is larger (up to 5000 m long) and its spatial extent better constrained than that of EC systems. Therefore, scintillometers offer the unique possibility of measuring the vertical flux of sensible heat averaged over areas comparable with several pixels of a satellite image (up to about 40 Landsat thermal pixels or about 5 MODIS thermal pixels). The objective of this paper is to present our experiences with an existing network of seven scintillometers in New Mexico and a planned network of three scintillometers in the humid tropics of Panama and Colombia.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650X (2007) https://doi.org/10.1117/12.723211
Advances in hyperspectral imaging (HSI) sensors offer new avenues for precise detection, identification, and
characterization of materials or targets of military interest. HSI technologies are capable of exploiting tens to hundreds of
images of a scene, collected at contiguous or selective spectral bands, to seek out mission-critical objects. In this paper,
we develop and analyze several HSI algorithms for detection, recognition, and tracking of dismounts, vehicles, and other
objects. Preliminary work on detection, classification, and fingerprinting of dismounts, vehicles, and UAVs has been
performed using visible-band HSI data. The results indicate improved performance with HSI when compared to
traditional EO processing. All the detection and classification results reported in this paper are based on a single HSI
pixel used for testing. Furthermore, the close-in hyperspectral data for the experiments were collected indoors or
outdoors by the authors, under different lighting conditions, using a visible HSI sensor. The
algorithms studied for performance comparison include PCA, Linear Discriminant Analysis (LDA), a quadratic
classifier, and Fisher's Linear Discriminant; comprehensive results are included in terms of confusion matrices
and Receiver Operating Characteristic (ROC) curves.
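Fisher's Linear Discriminant, one of the classifiers compared above, can be sketched for two classes of single-pixel spectra, together with the confusion-matrix evaluation the abstract mentions. All names here are illustrative, and the regularization term is an assumption added for numerical stability:

```python
import numpy as np

def fisher_ld(X0, X1):
    """Fisher's linear discriminant direction for two pixel classes.

    X0, X1: (n_i, bands) arrays of single-pixel spectra per class.
    Returns the projection vector w and a midpoint decision threshold.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix (sum of per-class scatters).
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    # Small ridge term for numerical stability (illustrative assumption).
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thresh

def confusion_matrix(X0, X1, w, thresh):
    """2x2 confusion matrix: rows = true class, cols = predicted class."""
    pred0 = (X0 @ w) > thresh
    pred1 = (X1 @ w) > thresh
    return np.array([[np.sum(~pred0), np.sum(pred0)],
                     [np.sum(~pred1), np.sum(pred1)]])
```

Sweeping the threshold instead of fixing it at the midpoint is what generates the ROC curves reported in the paper.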
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650Y (2007) https://doi.org/10.1117/12.719798
As the interest in polarization-sensitive imaging systems increases, the modeling tools used to perform instrument trade studies and to generate data for algorithm testing must be adapted to correctly predict polarization signatures. The incorporation of polarization into the image chain simulated by these tools must address the modeling of the natural illuminants (e.g. Sun, Moon, sky), background sources (e.g. adjacent objects), the polarized Bidirectional Reflectance Distribution Function (pBRDF) of surfaces, atmospheric propagation (extinction, scattering, and self-emission), and sensor effects (e.g. optics, filters). Although each of these links in the image chain may utilize unique modeling approaches, they must be integrated under a framework that addresses important aspects such as a unified coordinate space and a common polarization state convention. This paper presents a modeling framework for the prediction of polarized signatures within a natural scene. The proposed image chain utilizes community-developed modeling tools, including an experimental version of MODTRAN and BRDF models that have been either derived or extended for polarization (e.g. Beard-Maxwell, Priest-Germer, etc.). This description also includes the theory utilized in the modeling tools incorporated into the image chain model to integrate these links into a full signature prediction capability. Analytical and experimental lab studies are presented to demonstrate the correct implementation and integration of the described image chain framework within the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65650Z (2007) https://doi.org/10.1117/12.721207
Comparisons have been made showing that modeled multi- and hyperspectral imagery can approach the complexity
of real data, and the use of modeled data to perform algorithm testing and sensor modeling is well established.
With growing interest in the acquisition and exploitation of polarimetric imagery, there is a need to perform
similar comparisons for this imaging modality.
This paper describes efforts to reproduce polarimetric imagery of a real-world scene in a
synthetic image generation environment. Real data were collected with the Wildfire Airborne Sensor
Program-Lite (WASP-Lite) imaging system, which uses three separate cameras to simultaneously acquire three polarization
orientations. Modeled data were created using the Digital Imaging and Remote Sensing Image Generation
(DIRSIG) model. This model utilizes existing tools such as polarized bi-directional reflectance distribution
functions (pBRDF), polarized atmospheric models, and
polarization-sensitive sensor models to recreate polarized
imagery. Results will show comparisons between the real and synthetic imagery, highlighting successes in the
model as well as areas where improved fidelity is required.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656510 (2007) https://doi.org/10.1117/12.720973
In this work, we develop a new method for multispectral and hyperspectral texture synthesis using the multiband
distribution and power spectral densities (PSDs). Existing approaches to this problem are mostly case-specific and
include histogram explosion, equalization in HSV or another color space, and equalization based on the
earth mover's distance. For multiband images, the usual practice is to define the power spectral density for
each band separately. While this captures the in-band autocorrelations, the cross-band correlations are not
captured. Cross-PSDs are sometimes defined when cross-band correlations are known to be important; however,
as the number of bands increases, this method becomes computationally prohibitive. We propose a method that
expresses PSDs for multiband images using a 3D Fourier transform. An iterative scheme is used to equalize the
histogram and PSDs of an input and a target image. Our experiments show that the iteration tends to converge
after 5-10 steps. The proposed method is computationally efficient and yields satisfactory results. We compare
synthesized multispectral textures with real multispectral data.
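The core idea of a joint 3D-transform PSD, rather than per-band PSDs plus pairwise cross-PSDs, can be sketched as a single equalization step that imposes a target cube's 3D magnitude spectrum on a source cube. The full method iterates this together with histogram equalization; this sketch shows one PSD step only:

```python
import numpy as np

def impose_psd_3d(source, target):
    """Impose the 3-D power spectral density of `target` onto `source`.

    Both are (rows, cols, bands) cubes.  A 3-D FFT treats the spectral
    axis jointly with the spatial axes, so cross-band correlations are
    captured without forming explicit cross-PSDs.  One equalization step:
    keep the source's Fourier phase, replace its magnitude with the
    target's.
    """
    S = np.fft.fftn(source)
    mag_t = np.abs(np.fft.fftn(target))
    phase = np.exp(1j * np.angle(S))
    # For real inputs the product is conjugate-symmetric, so the inverse
    # transform is real up to floating-point error.
    return np.fft.ifftn(mag_t * phase).real
```

Because only FFTs of the full cube are involved, the cost grows near-linearly in the number of bands, in contrast to the quadratic number of pairwise cross-PSDs.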
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656511 (2007) https://doi.org/10.1117/12.718666
Determining the temperature of an internal surface within cavernous targets, such as the interior wall of a mechanical draft cooling tower, from remotely sensed imagery is important for many surveillance applications that provide input to process models. The surface leaving radiance from an observed target is a combination of the self-emitted radiance and the reflected background radiance. The self-emitted radiance component is a function of the temperature-dependent blackbody radiation and the view-dependent directional emissivity. The reflected background radiance component depends on the bidirectional reflectance distribution function (BRDF) of the surface, the incident radiance from surrounding sources, and the BRDF for each of these background sources. Inside a cavity, the background radiance emanating from any of the multiple internal surfaces will be a combination of the self-emitted and reflected energy from the other internal surfaces as well as the downwelling sky radiance. This scenario provides for a complex radiometric inversion problem in order to arrive at the absolute temperature of any of these internal surfaces. The cavernous target has often been assumed to be a blackbody, but in field experiments it has been determined that this assumption does not always provide an accurate surface temperature. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) modeling tool is being used to represent a cavity target. The model demonstrates the dependence of the radiance reaching the sensor on the emissivity of the internal surfaces and the multiple internal interactions between all the surfaces that make up the overall target. The cavity model is extended to a detailed model of a mechanical draft cooling tower. 
The predictions of derived temperature from this model are compared to those derived from actual infrared imagery acquired with a helicopter-based broadband infrared imaging system over an operating tower at the Savannah River National Laboratory site.
Julio M. Duarte-Carvajalino, Guillermo Sapiro, Miguel Vélez-Reyes, Paul Castillo
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656512 (2007) https://doi.org/10.1117/12.721036
This paper presents an algorithm that generates a scale-space representation of hyperspectral imagery using Algebraic Multigrid (AMG) solvers. The scale-space representation is obtained by solving with AMG a vector-valued anisotropic diffusion equation, with the hyperspectral image as its initial condition. AMG also provides the necessary structure to obtain a hierarchical segmentation of the image. The scale-space representation of the hyperspectral image can be segmented in linear time complexity. Results in the paper show that improved segmentation is achieved. The proposed methodology for solving vector PDEs can be used to extend a number of techniques currently being developed for the fast computation of geometric PDEs and their application to the processing of hyperspectral and multispectral imagery.
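The vector-valued anisotropic diffusion that generates the scale space can be illustrated with a simple explicit update. The paper solves the equation implicitly with AMG; this sketch only shows how a single edge-stopping coefficient, driven by the joint all-band gradient, couples the bands:

```python
import numpy as np

def diffuse_step(cube, dt=0.1, k=0.1):
    """One explicit step of vector-valued anisotropic diffusion on a
    (rows, cols, bands) cube.

    Illustrative only: the paper uses an implicit AMG solve rather than
    this explicit update, and the Perona-Malik-style edge-stopping
    function is an assumed choice.  The coefficient is driven by the
    joint (all-band) gradient magnitude, so edges are preserved
    consistently across the spectrum.
    """
    grads = []
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        grads.append(np.roll(cube, shift, axis=axis) - cube)
    # Joint gradient magnitude across all bands (vector-valued coupling).
    g2 = sum((g ** 2).sum(axis=2) for g in grads)
    c = np.exp(-g2 / (k ** 2))[..., None]      # edge-stopping coefficient
    return cube + dt * c * sum(grads)
```

Repeating this step with increasing cumulative time produces the progressively smoothed images that make up the scale-space stack.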
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656513 (2007) https://doi.org/10.1117/12.719002
Hyperspectral imaging (HSI) data in the 0.4-2.5 micrometer spectral range allow direct identification of materials using
their fully resolved spectral signatures; however, spatial coverage is limited. Multispectral imaging (MSI) data are
spectrally undersampled and may not allow unique identification, but they do provide synoptic spatial coverage. This
paper summarizes an approach that uses coincident HSI/MSI data to extend HSI results to cover larger areas. Airborne
Visible/Infrared Imaging Spectrometer (AVIRIS)/Hyperion and multispectral ASTER/MASTER data, supported by field spectral measurements,
are used to allow modeling and extension of hyperspectral signatures to multispectral data. Full-scene mapping using the
modeled signatures allows subsequent mapping of extended areas using the multispectral data. Both the hyperspectral
and multispectral data are atmospherically corrected using commercial-off-the-shelf (COTS) atmospheric correction
software. Hyperspectral data are then analyzed to determine spectral endmembers and their spatial distributions, and
validated using the field spectral measurements. Spectral modeling is used to convert the hyperspectral spectral
signatures to the multispectral data response. Reflectance calibrated multispectral data are then used to extend the
hyperspectral mapping to the larger spatial coverage of the multispectral data. Field verification of mapping results is
conducted and accuracy assessment performed. Additional sites are assessed with multispectral data using the modeling
methodology based on scene-external HSI and/or field spectra (but without scene-specific a priori hyperspectral analysis
or knowledge). These results are further compared to field measurements and subsequent hyperspectral analysis and
mapping to validate the spectral modeling approach.
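The step that converts hyperspectral signatures to a multispectral sensor's response can be sketched as a response-weighted spectral average. The Gaussian band responses below are an illustrative assumption; real sensors such as ASTER publish tabulated spectral response functions:

```python
import numpy as np

def resample_to_msi(hsi_spectra, hsi_wavelengths, msi_srfs):
    """Convolve hyperspectral signatures to a multispectral sensor's bands.

    hsi_spectra: (n, n_hsi_bands) reflectance signatures.
    hsi_wavelengths: (n_hsi_bands,) band-center wavelengths (nm).
    msi_srfs: list of (center, fwhm) pairs; each multispectral band is
    modeled here with a Gaussian spectral response (an assumption made
    for illustration).
    """
    out = np.empty((hsi_spectra.shape[0], len(msi_srfs)))
    for j, (center, fwhm) in enumerate(msi_srfs):
        sigma = fwhm / 2.3548  # FWHM -> Gaussian sigma
        r = np.exp(-0.5 * ((hsi_wavelengths - center) / sigma) ** 2)
        out[:, j] = hsi_spectra @ (r / r.sum())  # response-weighted average
    return out
```

Applying this to the hyperspectral endmember signatures yields the "modeled signatures" that can then be matched against the reflectance-calibrated multispectral scene.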
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656514 (2007) https://doi.org/10.1117/12.722211
Robust imagery conflation, co-registration, and geo-referencing are critical in many applications, such as the fusion of multispectral data from multiple sensors. An algorithm that matches linear features from two images can be very accurate because it produces many matched points, but the selection of robust and invariant points is a long-standing challenge. This paper defines several concepts of invariance and robustness of image matching algorithms relative to pairs of transformations. A new affine-invariant and noise-robust registration/conflation algorithm (the EAD algorithm), based on algebraic structures of linear and area features, is proposed. A class of Equal Area Divider (EAD) points is a major new component of the EAD-based registration/conflation algorithm. These points are both affine invariant and robust to noise. EAD points augment several known invariant or robust points, such as the Ramer point (R-point, the curve point with maximum distance from its chord), the curve middle (CM) point, and equal shoulders (ES) points, which we have used in our structural algorithms previously. The R-point is affine invariant but not noise robust; CM and ES points are noise robust but not affine invariant. It is shown in this paper that if CM and ES points are computed after an affine transform of the first image to the second one using EAD points, then the CM and ES points are the same as (or in the T-robust vicinity of) the correct CM and ES points found in the matched feature in the second image. This statement is formalized and used in the EAD algorithm design.
A. Schaum, Eric Allman, John Kershenstein, Drew Alexa
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656515 (2007) https://doi.org/10.1117/12.729632
A new class of hyperspectral detection algorithms based on elliptically contoured distributions (ECDs) is described. ECDs have been studied previously, but only for modeling the tails of background clutter distributions to better approximate constant false alarm performance. Here ECDs are exploited to produce new target detection algorithms with performance no worse than the best prior methods. The ECD model affords two principal advantages over older methods: (1) its selective decision surface automatically rejects outliers that are not easily modeled, and (2) it has no free parameters needing optimization. A particularly simple version of ECD has been applied to assist in automatic change detection in extreme (unnatural) clutter. The ECD version of change detection can detect low-spectral-contrast targets that are not easily found by standard methods, even when those methods use signature information. Preliminary results indicate, furthermore, that the approximate forms of the component algorithms that have been implemented in deployed systems should be avoided: they can substantially degrade detection performance in high-clutter environments.
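For context, the classical Gaussian RX anomaly score that the ECD detectors generalize can be sketched as a squared Mahalanobis distance to the scene mean. The ECD algorithms themselves are not reproduced here, and the ridge term is an added assumption for numerical stability:

```python
import numpy as np

def rx_scores(cube):
    """Classical RX anomaly scores for a (rows, cols, bands) cube.

    Each pixel's score is its squared Mahalanobis distance to the scene
    mean under a global covariance estimate.  This is the Gaussian
    baseline that elliptically contoured distribution (ECD) detectors
    generalize by relaxing the tail behavior.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # Small ridge for invertibility (illustrative assumption).
    cov_inv = np.linalg.inv(cov + 1e-9 * np.eye(bands))
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(rows, cols)
```

Thresholding these scores gives the constant-false-alarm behavior only if the background is truly Gaussian; heavy clutter tails are exactly where the ECD model is argued to help.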
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656516 (2007) https://doi.org/10.1117/12.723161
Change Detection (CD) is the process of identifying temporal or spectral changes in signals or images. Detection and analysis of change provide valuable information about transformations in a scene. Hyperspectral sensors provide spatially and spectrally rich information that can be exploited for Change Detection. This paper develops and analyzes various CD
algorithms for the detection of changes using single-pass and multi-pass Hyperspectral images. For the validation and
performance comparisons, the detected changes are compared using the conventional similarity correlation coefficient as well as traditional change detection algorithms, such as image differencing, image ratioing, and principal component analysis
(PCA) methods. Another main objective is to incorporate Kernel based optimization by using a nonlinear mapping
function. Development of nonlinear versions of linear algorithms allows exploiting nonlinear relationships present in the
data. The nonlinear versions, however, become computationally intensive due to the high dimensionality of the feature
space resulting in part from application of the nonlinear mapping function. This problem is overcome by implementing
these nonlinear algorithms in the high-dimensional feature space in terms of kernels. Kernelization of a similarity
correlation coefficient algorithm for Hyperspectral change detection has been studied. Preliminary work on dismount
tracking using change detection over successive HSI bands has shown promising results. CD between multipass HSI
image cubes elicits the changes over time, whereas changes between spectral bands for the same cube illustrate the
spectral changes occurring in different image regions, and results for both cases are given in the paper.
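The kernelization idea above can be illustrated with a minimal sketch. Assumed details that are not from the paper: a polynomial kernel and no feature-space centering; the paper's actual kernel choice and normalization may differ.

```python
import numpy as np

def corr_coeff(x, y):
    """Conventional similarity correlation coefficient between two spectra."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc)))

def poly_kernel(x, y, d=2):
    """Polynomial kernel: inner product in an implicit nonlinear feature space."""
    return (x @ y + 1.0) ** d

def kernel_corr_coeff(x, y, d=2):
    """Kernelized similarity: normalized feature-space inner product, computed
    entirely via kernel evaluations, never forming the high-dimensional map."""
    return poly_kernel(x, y, d) / np.sqrt(poly_kernel(x, x, d) * poly_kernel(y, y, d))
```

The point of the kernel trick is visible in the last function: the feature space may be enormous, but only three scalar kernel evaluations are needed per pixel pair, so the cost stays proportional to the spectrum length.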
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656517 (2007) https://doi.org/10.1117/12.719686
This work studies the end-to-end performance of hyperspectral classification and unmixing systems. Specifically, it compares widely used current state-of-the-art algorithms with those developed at the University of Puerto Rico. These include algorithms for image enhancement, band subset selection, feature extraction, supervised and unsupervised classification, and constrained and unconstrained abundance estimation. The end-to-end performance of different combinations of algorithms is evaluated. The classification algorithms are compared in terms of percent correct classification. This method, however, cannot be applied to abundance estimation, as the binary evaluation used for supervised and unsupervised classification is not directly applicable to unmixing performance analysis. A procedure to evaluate unmixing performance is described in this paper and tested using coregistered data acquired by various sensors at different spatial resolutions. Performance results are generally specific to the image used. In an effort to generalize the results, a formal description of the complexity of the images used for the evaluations is required. Techniques for image complexity analysis currently available for automatic target recognizers are included and adapted to quantify the performance of the classifiers for different image classes.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656518 (2007) https://doi.org/10.1117/12.720884
The radiance spectrum corresponding to a single pixel in an airborne or space-based hyperspectral image is
dependent on the reflectance spectra and orientations of the material surfaces within the pixel area. We develop
a hyperspectral demixing algorithm that estimates the pixel area fractions of multiple materials present within a
pixel. The algorithm exploits a nonlinear physics-based image formation model that allows surfaces with multiple
orientations within the pixel area. Geometric constraints are derived in conjunction with the image formation
model. The algorithm involves solving a constrained nonlinear optimization problem to estimate the pixel area
fractions and the surface orientation parameters. An experiment using simulated radiance spectra is presented to demonstrate the utility of the algorithm.
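The paper's model is nonlinear and jointly estimates surface orientations; as a simpler grounding example, the classic linear area-fraction problem with the same sum-to-one and non-negativity constraints can be sketched as follows (the endmember matrix E and the final clipping step are illustrative simplifications, not the paper's constrained nonlinear solver):

```python
import numpy as np

def unmix_fractions(E, y):
    """Estimate area fractions a with sum(a) == 1 such that E @ a ~ y,
    for an endmember matrix E (bands x materials).  Sum-to-one constrained
    least squares in closed form via a Lagrange multiplier, followed by a
    crude clip-and-renormalize step for non-negativity."""
    G = np.linalg.inv(E.T @ E)
    f = G @ E.T @ y                    # unconstrained least-squares solution
    ones = np.ones(E.shape[1])
    lam = (ones @ f - 1.0) / (ones @ G @ ones)
    a = f - lam * (G @ ones)           # enforce sum(a) == 1
    a = np.clip(a, 0.0, None)          # crude non-negativity fix-up
    return a / a.sum()
```

With exact data and a full-rank endmember matrix this recovers the true fractions; the nonlinear, orientation-aware model in the paper replaces the linear system E @ a with a physics-based image formation model and solves the resulting problem numerically.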
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656519 (2007) https://doi.org/10.1117/12.718362
The characterization of the separation between object spectral distributions by any divergence-derived method, such as Informational Difference, is problematic due to the relative sparsity of those distributions. The existence of zero-probability points renders the calculated result meaningless, since the separation is either infinite or undefined. A method to surmount this problem using available experimental data is proposed.
We consider the statistical nature of measurement for all available visual data, e.g. pixel values, and model the spectral distributions of these pixels as an aggregate of Gaussian-distributed measurements. The inherent smoothness of Gaussian distributions smooths over the zero-probability points of the original discrete distribution, resolving the divergence problem. The parameters of the Gaussian smoothing are determined experimentally.
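The smoothing step can be sketched as follows; the kernel width sigma here stands in for the experimentally determined parameters the abstract mentions. Because the Gaussian kernel is strictly positive, every bin of the smoothed distribution is positive and the divergence becomes finite.

```python
import numpy as np

def gaussian_smooth(hist, sigma=1.0):
    """Convolve a discrete spectral distribution with a Gaussian kernel and
    renormalize; the strictly positive kernel removes zero-probability bins."""
    idx = np.arange(len(hist), dtype=float)
    kern = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
    sm = kern @ np.asarray(hist, dtype=float)
    return sm / sm.sum()

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence; finite once p, q have no zeros."""
    return float(np.sum(p * np.log(p / q)))
```

On raw sparse histograms with non-overlapping zero bins the divergence would be infinite or undefined; after smoothing, both arguments are everywhere positive and the calculation is well defined.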
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651A (2007) https://doi.org/10.1117/12.719543
A recently introduced concept, virtual dimensionality (VD), has shown promise in many applications of hyperspectral data exploitation. It was originally developed for estimating the number of spectrally distinct signal sources. This paper explores the utility of the VD from various signal processing perspectives and further investigates four techniques for estimating it: Gershgorin radius (GR), orthogonal subspace projection (OSP), signal subspace estimation (SSE), and Neyman-Pearson detection (NPD). In particular, the OSP-based VD estimation technique is new and has several advantages over the other methods. To evaluate their performance, a comparative study and analysis is conducted via synthetic and real image experiments.
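As a rough illustration of the eigenvalue-based flavor of such estimators (in the spirit of Neyman-Pearson-style detection; the paper's exact formulations, thresholds, and the new OSP variant are not reproduced here), the number of signal sources can be estimated by comparing correlation and covariance eigenvalues, since nonzero-mean signal components raise the former but not the latter:

```python
import numpy as np

def estimate_vd(pixels, threshold):
    """Count eigenvalues of the sample second-moment (correlation) matrix R
    that exceed the corresponding covariance eigenvalues by more than a
    threshold.  Signal sources contribute mean energy to R but not to K,
    so the count approximates the number of distinct signal sources."""
    X = np.asarray(pixels, dtype=float)        # (n_pixels, n_bands)
    R = X.T @ X / len(X)                       # sample second-moment matrix
    K = np.cov(X, rowvar=False)                # sample covariance matrix
    lr = np.sort(np.linalg.eigvalsh(R))[::-1]  # eigenvalues, descending
    lk = np.sort(np.linalg.eigvalsh(K))[::-1]
    return int(np.sum(lr - lk > threshold))
```

The threshold plays the role of the Neyman-Pearson false-alarm control in the actual NPD technique; here it is simply a fixed margin.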
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651B (2007) https://doi.org/10.1117/12.718932
Several studies have reported that the use of derived spectral features, in addition to the original hyperspectral data, can
facilitate the separation of similar classes. Linear and nonlinear transformations are employed to project data into
mathematical spaces with the expectation that the decision surfaces separating similar classes become well defined.
Therefore, the problem of discerning similar classes in expanded space becomes more tractable. Recent work presented
by one of the authors discusses a dimension expansion technique based on generating real and imaginary complex
features from the original hyperspectral signatures. A complex spectral angle mapper was employed to classify the data.
In this paper, we extend this method to include other approaches that generate derivative-like and wavelet-based spectral
features from the original data. These methods were tested with several supervised classification methods with two
Hyperspectral Image (HSI) cubes.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651C (2007) https://doi.org/10.1117/12.720750
A readily automated procedure for testing and calibrating the wavelength scale of a scanning hyperspectral imaging camera is described. The procedure is a laboratory calibration method that uses the absorbance features of a commercial didymium oxide filter as a wavelength standard. The procedure was used to accurately determine the pixel positions, and an algorithm was developed to accurately determine the center wavelength for any given abscissa. During this investigation we determined that the sampled pixels show both trend and serial correlation as a function of the spatial dimensions, the trend being more significant than the serial correlation. In this paper, the trend is filtered out by modeling it with an efficient global linear regression model of different order for each spectral band. The order is selected automatically, and different criteria for selecting it are discussed. Experimental results are presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651D (2007) https://doi.org/10.1117/12.720123
Many hyperspectral sensors collect data using multiple spectrometers to span a broad spectral range. These
spectrometers can be fed by optical fibers on the same or separate focal planes. The Modular Imaging Spectrometer
Instrument (MISI), a 70 band line scanner built by the Rochester Institute of Technology, is configured
in this manner. Visible and near infrared spectrometers at the primary focal plane are each fed by their own
optical fiber. The spatial offset between the two fibers on the focal plane causes an inherent misregistration
between the two sets of spectral bands. This configuration causes a relatively complicated misregistration which
cannot be corrected with a simple shift or rotation of the data. This mismatch between spectral channels has
detrimental effects on the spectral purity of each pixel, especially when dealing with the application of sub-pixel
target detection. A geometric model of the sensor has been developed to solve for the misregistration and achieve
image rectification. This paper addresses the issues in dealing with the misregistration and techniques used to
improve spectral purity on a per pixel basis.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651E (2007) https://doi.org/10.1117/12.720050
Hyperspectral focal plane arrays typically contain many pixels that are excessively noisy, dead, or exhibit poor signal-to-noise performance in comparison to the average pixel. These bad pixels can significantly impair the performance of spectral target-detection algorithms; even a single missed bad pixel can lead to false alarms. If the bad pixels are
sparsely populated across the focal plane, the over-sampling in both spatial and spectral dimensions of the array can be
capitalized upon to replace these pixels without significant loss of information. However, bad pixels are frequently localized in clusters, requiring a replacement strategy that, rather than providing a good estimate of the missing data, instead minimizes artifacts that may negatively affect the performance of spectral detection algorithms. In this paper, we
evaluate a robust method to automatically identify bad pixels for short-wavelength infrared (SWIR) hyperspectral
sensors. In addition, we introduce a novel procedure for the replacement of these pixels, which we demonstrate
provides a better estimate of the original pixel value compared to interpolation methods for bad pixels found as both
isolated individuals and in clusters. The advantages of our technique are discussed and demonstrated with data from
several different airborne sensor systems.
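A minimal sketch of one common approach to this problem (robust median-absolute-deviation thresholds on dark-frame statistics for identification, and valid-neighbour averaging for replacement); the paper's actual identification and replacement procedures are more sophisticated and are not reproduced here:

```python
import numpy as np

def flag_bad_pixels(dark_frames, k=5.0):
    """Flag detector elements whose dark-frame mean or temporal noise
    deviates from the focal-plane median by more than k robust sigmas."""
    mu = dark_frames.mean(axis=0)          # per-pixel mean over frames
    sd = dark_frames.std(axis=0)           # per-pixel temporal noise

    def outliers(a):
        med = np.median(a)
        mad = np.median(np.abs(a - med)) + 1e-12
        return np.abs(a - med) > k * 1.4826 * mad   # MAD-based robust sigma

    return outliers(mu) | outliers(sd)

def replace_bad(image, bad):
    """Replace each flagged pixel with the mean of its valid 8-neighbours."""
    out = image.astype(float).copy()
    H, W = image.shape
    for i, j in zip(*np.nonzero(bad)):
        nb = [out[a, b]
              for a in range(max(i - 1, 0), min(i + 2, H))
              for b in range(max(j - 1, 0), min(j + 2, W))
              if not bad[a, b]]
        if nb:
            out[i, j] = np.mean(nb)
    return out
```

Simple neighbour averaging is exactly the kind of interpolation the paper argues breaks down for clustered bad pixels, which motivates its alternative replacement procedure.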
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651F (2007) https://doi.org/10.1117/12.718591
Hyperspectral imaging (HSI) sensors suffer from spatial misregistration, an artifact that prevents the accurate acquisition
of the spectra. Physical considerations let us assume that the influence of the spatial misregistration on the acquired data
depends both on the wavelength and on the across-track position. A scene-based edge-detection method is therefore proposed. The procedure measures the variation in the spatial location of an edge between its various monochromatic projections, yielding an estimate of the spatial misregistration and also allowing misalignments to be identified. The method has been applied to several hyperspectral sensors, of both prism- and grating-based designs. Results confirm the assumed dependence on λ and θ, the spectral wavelength and the across-track pixel position respectively. Suggestions for correcting the spatial misregistration are also given.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651G (2007) https://doi.org/10.1117/12.719755
Unlike straightforward registration problems encountered in broadband imaging, spectral imaging in fielded instruments
often suffers from a combination of imaging aberrations that make spatial co-registration of the images a challenging
problem. Depending on the sensor architecture, typical problems to be mitigated include differing focus, magnification,
and warping between the images in the various spectral bands due to optics differences; scene shift between spectral
images due to parallax; and scene shift due to temporal misregistration between the spectral images. However, typical
spectral images sometimes contain scene commonalities that can be exploited in traditional ways. As a first step toward
automatic spatial co-registration for spectral images, we exploit manually-selected scene commonalities to produce
transformation parameters in a four-channel spectral imager. The four bands consist of two mid-wave infrared channels
and two short-wave infrared channels. Each of the four bands is blurred differently due to differing focal lengths of the
imaging optics, magnified differently, warped differently, and translated differently. Centroid location techniques are
used on the scene commonalities in order to generate sub-pixel values for the fiducial markers used in the
transformation polygons, and conclusions are drawn about the effectiveness of such techniques in spectral imaging
applications.
Atmospheric Instrumentation, Measurements, and Forecasting
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651H (2007) https://doi.org/10.1117/12.718448
The AIRS Science Team Version 5.0 retrieval algorithm will become operational at the Goddard DAAC in early 2007 in the near real-time analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments are:
1) the development and implementation of a very accurate Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control; and 3) development of an accurate AIRS-only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm⁻¹ to 750 cm⁻¹ are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm⁻¹ to 2395 cm⁻¹ are used for temperature sounding purposes. The new methodology is described briefly and results are shown, including comparison with those using AIRS Version 4, as well as a forecast impact experiment assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS 5 Data Assimilation System.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651I (2007) https://doi.org/10.1117/12.718466
Satellites provide an ideal platform to study the Earth-atmosphere system on practically all spatial and temporal
scales. Thus, one may expect that their rapidly growing datasets could provide crucial insights not only for
short-term weather processes/predictions but into ongoing and future climate change processes as well. Though
Earth-observing satellites have been around for decades, extracting climatically reliable information from their
widely varying datasets faces rather formidable challenges. AIRS/AMSU is a state-of-the-art
infrared/microwave sounding system that was launched on the EOS Aqua platform on May 4, 2002, and has
been providing operational quality measurements since September 2002. In addition to temperature and
atmospheric constituent profiles, outgoing longwave radiation [OLR] and basic cloud parameters are also
derived from the AIRS/AMSU observations. However, so far the AIRS products have not been rigorously
evaluated/validated on a large scale. Here we present preliminary assessments of climatically important
"Level3" (monthly and 8-day means, 1° x 1° gridded) AIRS "Version 4.0" retrieved products (available to the
public through the DAAC at NASA/GSFC) to assess their utility for climate studies. Though the current AIRS
climatology covers only ~4.5 years, it will hopefully extend much further into the future. First we present
"consistency checks" by evaluating the ~4.5-yr long time series of global and tropical means, as well as grid-scale
variability and "anomalies" (relative to the first full year's worth of AIRS "climate statistics") of several climatically important retrieved parameters. Finally, we also present preliminary results regarding
interrelationships of some of these geophysical variables, to assess to what extent they are consistent with the
known physics of climate variability/change. In particular, we find at least one observed relationship which
contradicts current general circulation climate (GCM) model results: the global water vapor climate feedback
which is expected to be strongly positive is deduced to be slightly negative (shades of the "Lindzen effect"?).
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651J (2007) https://doi.org/10.1117/12.718121
The Atmospheric Infrared Sounder (AIRS), together with the Advanced Microwave Sounding Unit (AMSU), represents
one of the most advanced space-based atmospheric sounding systems. Aside from monitoring changes in Earth's
climate, one of the objectives of AIRS is to provide sounding information with sufficient accuracy such that the
assimilation of the new observations, especially in data sparse regions, will lead to an improvement in weather forecasts.
The combined AIRS/AMSU system provides radiance measurements used as input to a sophisticated retrieval scheme
which has been shown to produce temperature profiles with an accuracy of 1 K over 1 km layers and humidity profiles
with accuracy of 10-15% in 2 km layers in both clear and partly cloudy conditions. The retrieval algorithm also provides
estimates of the accuracy of the retrieved values at each pressure level, allowing the user to select profiles based on the
required error tolerances of the application. The purpose of this paper is to describe a procedure to optimally assimilate
high-resolution AIRS profile data in a regional analysis/forecast model. The paper focuses on a U.S. East-Coast cyclone
from November 2005. Temperature and moisture profiles, containing information about the quality of each temperature layer, from the prototype version 5.0 Earth Observing System (EOS) science team retrieval algorithm are
used in this study. The quality indicators are used to select the highest quality temperature and moisture data for each
profile location and pressure level. AIRS data are assimilated into the Weather Research and Forecasting (WRF)
numerical weather prediction model using the Advanced Regional Prediction System (ARPS) Data Analysis System
(ADAS), to produce near-real-time regional weather forecasts over the continental U.S. The preliminary assessment of
the impact of the AIRS profiles will focus on intelligent use of the quality indicators, analysis impact, and forecast
verification against rawinsondes and precipitation data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651K (2007) https://doi.org/10.1117/12.723408
The Atmospheric Infrared Sounder (AIRS) flying on NASA's EOS-AQUA platform has channels sensitive to both sulfur
dioxide (SO2) and nitric acid (HNO3). We have developed a simple regression retrieval for both of these gases that
illustrates the potential for AIRS, and other hyperspectral sounders, to retrieve these two gases. We have cross-validated
the SO2 retrievals against those from the OMI instrument flying on the EOS AURA platform. Similarly, we have cross-validated the HNO3 retrievals against limb retrievals of HNO3 from the MLS instrument, also flying on the AURA platform.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651L (2007) https://doi.org/10.1117/12.720837
Determining the extent to which large power plant emission sources interacting with atmospheric
constituents affect the environment could play a significant role in future U.S. energy production
policy. The effects on the environment caused by the interaction between power plant emissions
and atmospheric constituents has not been investigated in depth due to the lack of calibrated
spectral data on a suitable temporal and spatial scale. The availability of NASA's space-based
Atmospheric Infrared Sounder (AIRS) data makes it possible to explore, and begin the first steps toward establishing, a correlation between known emission sources and environmental indicators. An exploratory study was conducted in which a time series of 26 cloud-free AIRS scenes containing two coal-fired power plants in northern New Mexico was selected, acquired, and analyzed for SO2 emissions. A generic forward modeling process was also developed to derive an estimate of the expected AIRS pixel radiance containing the SO2 emissions from the two power plants, based on published combustion analysis data for coal and available power plant documentation. Analysis of the AIRS NEΔR calculated in this study and subsequent comparison with the radiance values for SO2 calculated from the forward model provided essential information regarding the suitability and risk in the use of a modified AIRS configuration for monitoring anthropogenic point source emissions. The results of this study, along with its conclusions and recommendations, in conjunction with additional research collaboration on several specific topics, will provide guidance for the development of the next-generation infrared spectrometer system that NASA is considering building for environmental monitoring.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651M (2007) https://doi.org/10.1117/12.718788
Development of a next-generation imager/sounder requires advances in optics, focal planes, mechanical systems and
electronics. Technologies exist today to meet the requirements of next generation imager/sounders, but they allow a
wide range of configurations. On one hand, the instrument aperture can be as small as diffraction will allow, but this
will require a very large field of view of the optical system and a large focal plane assembly. On the other hand, the
aperture can be large, minimizing the field of view and number of detectors. In this paper, we examine the relationship
between aperture size, field of view of the optical system and focal plane assembly size and the number of detector
elements needed to meet the requirements of a next generation imager/sounder system.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651N (2007) https://doi.org/10.1117/12.717546
Recent work has demonstrated the feasibility of neural network estimation techniques for atmospheric profiling in
partially cloudy atmospheres using combined microwave (MW) and hyperspectral infrared (IR) sounding data.
In this paper, the global retrieval performance of the stochastic cloud-clearing / neural network (SCC/NN)
method is examined using atmospheric fields provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) and in situ measurements from the NOAA radiosonde database. Furthermore, the retrieval performance of the neural network method is compared with the AIRS Level 2 algorithm (Version 4). Comparisons with both forecast and radiosonde data indicate that the neural network retrieval performance is
similar to or exceeds that of the AIRS Level 2 (version 4) profile products, substantially so in very cloudy areas.
A novel statistical method for the global retrieval of atmospheric temperature and water vapor profiles in
cloudy conditions has been developed and evaluated with sounding data from the Atmospheric InfraRed Sounder
(AIRS) and the Advanced Microwave Sounding Unit (AMSU). The present work focuses on the cloud impact on
the AIRS radiances and explores the use of Stochastic Cloud Clearing (SCC) together with neural network estimation.
A stand-alone statistical algorithm will be presented that operates directly on cloud-impacted AIRS/AMSU
data, with no need for a physical cloud clearing process. The algorithm is implemented in three stages. First,
the infrared radiance perturbations due to clouds are estimated and corrected by combined processing of the
infrared and microwave data using the SCC method. The cloud clearing of the infrared radiances was performed
using principal components analysis of infrared brightness temperature contrasts in adjacent fields of view and
microwave-derived estimates of the infrared clear-column radiances to estimate and correct the radiance contamination
introduced by clouds. Second, a Projected Principal Components (PPC) transform is used to reduce the
dimensionality of and optimally extract geophysical profile information from the cloud-cleared infrared radiance
data. Third, an artificial feedforward neural network (NN) is used to estimate the desired geophysical parameters
from the projected principal components.
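The last two stages of this pipeline can be sketched in miniature. In the following Python example, every dimension, the synthetic data, and the tiny network are hypothetical stand-ins rather than the configuration used in the paper; it projects synthetic "cloud-cleared" radiances onto retained principal components and trains a small feedforward network on the projections:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 200 cloud-cleared radiance spectra with 50 channels,
# each generated from 5 underlying "profile" parameters.
n, channels, params = 200, 50, 5
profiles = rng.normal(size=(n, params))            # geophysical parameters
A = rng.normal(size=(params, channels))
radiances = profiles @ A + 0.01 * rng.normal(size=(n, channels))

# Stage 2 (sketch): principal-component projection to reduce dimensionality.
mu = radiances.mean(axis=0)
_, _, Vt = np.linalg.svd(radiances - mu, full_matrices=False)
k = 10                                             # retained components (assumed)
pc = (radiances - mu) @ Vt[:k].T
pc /= pc.std(axis=0)                               # standardize for stable training

# Stage 3 (sketch): one-hidden-layer feedforward network trained by plain
# gradient descent to map the projections to the profile parameters.
h = 16
W1 = 0.1 * rng.normal(size=(k, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.normal(size=(h, params)); b2 = np.zeros(params)

def forward(x):
    z = np.tanh(x @ W1 + b1)
    return z, z @ W2 + b2

loss0 = np.mean((forward(pc)[1] - profiles) ** 2)
for _ in range(500):
    z, y = forward(pc)
    g = 2.0 * (y - profiles) / n                   # dLoss/dy
    gz = (g @ W2.T) * (1.0 - z ** 2)               # backprop through tanh
    W2 -= 0.1 * (z.T @ g);  b2 -= 0.1 * g.sum(axis=0)
    W1 -= 0.1 * (pc.T @ gz); b1 -= 0.1 * gz.sum(axis=0)
loss1 = np.mean((forward(pc)[1] - profiles) ** 2)
print(loss1 < loss0)                               # training reduced the fit error
```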
The performance of this method was evaluated using global (ascending and descending) EOS-Aqua orbits
co-located with ECMWF fields for a variety of days throughout 2002 and 2003. Over 500,000 fields of regard
(3x3 arrays of footprints) over ocean and land were used in the study. The NOAA radiosonde database was
also used to assess performance - approximately 2000 global, quality-controlled radiosondes were selected for
the comparison. The SCC/NN method requires significantly less computation (up to a factor of three orders
of magnitude) than traditional variational retrieval methods, while achieving comparable global performance.
Accuracy in areas of severe clouds (cloud fractions exceeding about 60 percent) is particularly encouraging.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651O (2007) https://doi.org/10.1117/12.721184
Testing MODTRAN™5 (MOD5) capabilities against state-of-the-art NASA satellite radiance and irradiance
measurements has recently been undertaken. New solar data have been acquired from the SORCE satellite team,
providing measurements of variability over solar rotation cycles, plus an ultra-narrow calculation for a new solar
source irradiance, extending over the full MOD5 spectral range. Additionally, a MOD5-AIRS analysis has been
undertaken with appropriate channel response functions. Thus, MOD5 can serve as a surrogate for a variety of
perturbation studies, including two different modes for incorporating variations in the solar source function I₀: (1) ultra-high
spectral resolution and (2) with and without solar rotation. The comparison of AIRS-related MOD5
calculations, against a suite of 'surrogate' data generated by other radiative transfer algorithms, all based upon
simulations supplied by the AIRS community, provides validation in the Long Wave Infrared (LWIR). All ~2400
AIRS instrument spectral response functions (ISRFs) are expected to be supplied with MODTRAN™5. These
validation studies show MOD5 replicates line-by-line (LBL) brightness temperatures (BT) for 30 sets of
atmospheric profiles to approximately -0.02 K average offset and <1.0 K RMS.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651P (2007) https://doi.org/10.1117/12.721769
Compared to nadir viewing, off-nadir viewing of the ground from a high-altitude platform provides opportunities to increase area coverage and to reduce revisit times, although at the expense of spatial resolution. In this study, the ability to atmospherically compensate off-nadir hyperspectral imagery taken from a space platform was evaluated for a worst-case viewing geometry, using EO-1 Hyperion data collected with an off-nadir angle of 63° at the sensor, corresponding to six air masses along the line of sight. Reasonable reflectance spectra were obtained using both
first-principles (FLAASH) and empirical (QUAC)
atmospheric-compensation methods. Some refinements to FLAASH that enable visibility retrievals with highly off-nadir imagery, and also improve accuracy in nadir viewing, were developed and are described.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651Q (2007) https://doi.org/10.1117/12.719784
In hyperspectral thermal data analysis, temperature-emissivity separation plays the same role as reflectance retrieval in the visible and shortwave infrared. The problem, however, is more complicated, since in the thermal region the surface both emits and reflects radiation. The measured radiance is a function of the material's surface emissivity and temperature, the reflected downwelling radiance (clear sky, clouds, environment), and the path radiance (temperature and gas profiles, e.g. water vapor and ozone). The current implementation of the Automatic Retrieval of Temperature and EMIssivity using Spectral Smoothness (ARTEMISS) uses look-up tables (LUTs) to infer the best-fitting atmosphere, i.e. the one yielding the smallest residual relative to the In-Scene Atmospheric Compensation (ISAC) estimated transmission. Over the last few years we have developed an end-to-end simulation of the hyperspectral exploitation process: generating synthetic data to simulate datasets with "known" ground truth, modeling propagation through the atmosphere, adding sensor effects (telescope, detector, read-out electronics), performing radiometric and spectral calibration, and testing the temperature-emissivity separation algorithm. We present an error analysis in which we shifted the band centers, varied the full-width at half maximum (FWHM) of the spectral response function, changed the spectral resolution, added noise, and varied the atmospheric model. We also discuss a general method to retrieve the spectral smile as a function of wavelength, and the FWHM, from hyperspectral data with only approximate spectral calibration. We found that our algorithm has trouble finding a unique solution when the water vapor exceeds about 3 g/cm², and we discuss remedies for this situation.
To speed up the LUT generation, we have developed fast and robust initial estimators of the atmospheric parameters (water vapor, ozone, near-surface atmospheric layer temperature) based on channel ratios and brightness temperatures in LWIR atmospheric absorption regions.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651R (2007) https://doi.org/10.1117/12.720842
The ability to rapidly calculate at-sensor radiance over a large number of lines of sight (LOSs) is critical for scene
simulations, which are increasingly used for sensor design, performance evaluation, and data analysis. We have recently
demonstrated a new radiation transport (RT) capability that combines an efficient multiple-LOS multiple scattering
algorithm with a broad-bandpass correlated-k methodology called kURT-MS. The multiple-LOS capability is based on
DISORT and exploits the existing MODTRAN-DISORT interface. kURT-MS is a new sensor-specific correlated-k (c-k)
ultra-fast radiative transfer formalism for UV-visible to LWIR wavelengths that is derived from MODTRAN's
correlated-k parameters. Scattering parameters, blackbody and solar functions are cast as compact k-dependent source
terms and used in the radiance computations. Preliminary transmittance results are within 2% of MODTRAN with a
two-orders-of-magnitude computational savings. Preliminary radiance computations in the visible spectrum are within a
few percent of MODTRAN results, but with orders of magnitude speed up over comparable MODTRAN runs. This new
RT capability has potential applications for hyperspectral scene simulations as well as target acquisition algorithms for
near earth scenarios.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651S (2007) https://doi.org/10.1117/12.716324
This study examines the effectiveness of specific hyperspectral change detection algorithms on scenes with different
illumination conditions such as shadows, low sun angles, and seasonal vegetation changes with specific emphasis placed
on background suppression. When data sets for the same spatial scene on different occasions exist, change detection
algorithms utilize linear predictors such as chronochrome and covariance equalization in an attempt to suppress
background and improve detection of atypical manmade changes. Using a push-broom style imaging spectrometer
mounted on a pan and tilt platform, visible to near infrared data sets of a scene containing specific objects are gathered.
Hyperspectral system characterization and calibration are performed to ensure the production of viable data. Data
collection occurs over a range of months to capture a myriad of conditions including daily illumination change, seasonal
illumination change, and seasonal vegetation change. For chosen reference images, the degree of background suppression produced under various time-2 scene conditions is examined for different background classes. A single global predictor produces a higher degree of suppression when conditions between the reference and time-2 scenes remain similar; suppression decreases as drastic illumination and vegetation alterations appear. Manual spatial segmentation of the scene, coupled with the application of a different linear predictor for each class, can improve suppression.
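The linear predictors named above can be illustrated with a small sketch. In the following example (synthetic 5-band pixel pairs; all sizes, transforms, and noise levels are made-up assumptions), chronochrome forms L = C_yx C_xx⁻¹ and covariance equalization forms L = C_yy^(1/2) C_xx^(-1/2); either is applied to time-1 pixels and subtracted from time-2 pixels to suppress background:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 5-band pixels observed at two times, where time-2 is an
# affine transform of time-1 plus noise (an illumination-change stand-in).
n, bands = 2000, 5
x = rng.normal(size=(n, bands)) @ rng.normal(size=(bands, bands))
T = rng.normal(size=(bands, bands)); offset = rng.normal(size=bands)
y = x @ T + offset + 0.05 * rng.normal(size=(n, bands))

mx, my = x.mean(axis=0), y.mean(axis=0)
xc, yc = x - mx, y - my
Cxx = xc.T @ xc / n
Cyy = yc.T @ yc / n
Cyx = yc.T @ xc / n

# Chronochrome predictor: L = C_yx C_xx^{-1}
L_cc = Cyx @ np.linalg.inv(Cxx)

# Covariance equalization: L = C_yy^{1/2} C_xx^{-1/2} (symmetric square roots)
def sqrtm_sym(C, inv=False):
    w, V = np.linalg.eigh(C)
    p = -0.5 if inv else 0.5
    return (V * w ** p) @ V.T

L_ce = sqrtm_sym(Cyy) @ sqrtm_sym(Cxx, inv=True)

# Residuals after background suppression; changes would be pixels whose
# residual norm is large relative to the suppressed background.
res_cc = yc - xc @ L_cc.T
res_ce = yc - xc @ L_ce.T
print(np.mean(res_cc ** 2), np.mean(res_ce ** 2))
```

Chronochrome is the least-squares-optimal predictor here, so its residual drops to roughly the noise floor; covariance equalization only matches second-order statistics and does not require pixel-level correspondence.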
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651T (2007) https://doi.org/10.1117/12.716659
Hyperspectral change detection has been shown to be a promising approach for detecting subtle targets in complex
backgrounds. Reported change detection methods are typically based on linear predictors that assume a space-invariant
affine transformation between image pairs. Unfortunately, several physical mechanisms can lead to significant space
variance in the spectral change associated with background clutter, including shadowing and other illumination
variations as well as seasonal impacts on the spectral nature of vegetation, and this can lead to poor change detection
performance. This paper outlines a methodology to deal with such space-variant change using spectral clustering and
other related least-squares optimization techniques. Several specific algorithms are developed and applied to change
imagery captured under controlled conditions, and the impacts on clutter suppression are quantified and compared. The
results indicate that such techniques can provide markedly increased clutter suppression and change detection
performance when the environmental conditions associated with the image pairs are substantially different.
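A minimal illustration of the space-variant idea follows, assuming the background classes are already known (in the paper they come from spectral clustering; here the labels, sizes, and per-class transforms are synthetic assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scene: two background classes whose time-1 -> time-2 mapping
# differs (e.g. sunlit vs. shadowed), so one global predictor is a compromise.
n, bands = 1000, 4
labels = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, bands))
T0, T1 = np.eye(bands) * 1.5, np.eye(bands) * 0.5
y = np.where(labels[:, None] == 0, x @ T0, x @ T1) + 0.02 * rng.normal(size=(n, bands))

def ls_predict(xs, ys, xq):
    # Affine least-squares predictor fitted on (xs, ys), applied to xq.
    Xa = np.hstack([xs, np.ones((len(xs), 1))])
    W, *_ = np.linalg.lstsq(Xa, ys, rcond=None)
    return np.hstack([xq, np.ones((len(xq), 1))]) @ W

# Global predictor vs. class-wise predictors.
res_global = y - ls_predict(x, y, x)
res_class = np.empty_like(y)
for c in (0, 1):
    m = labels == c
    res_class[m] = y[m] - ls_predict(x[m], y[m], x[m])

print(np.mean(res_global ** 2) > np.mean(res_class ** 2))
```

Because no single affine map can match both class transforms, the clustered predictors leave a much smaller background residual, mirroring the clutter-suppression gains reported above.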
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651U (2007) https://doi.org/10.1117/12.719972
We investigate the problem of identifying pixels in pairs of co-registered images that correspond to real changes on the ground. Changes that are due to environmental differences (illumination, atmospheric distortion, etc.) or sensor differences (focus, contrast, etc.) will be widespread throughout the image, and the aim is to avoid these changes in favor of changes that occur in only one or a few pixels. Formal outlier detection schemes (such as the one-class support vector machine) can identify rare occurrences, but will be confounded by pixels that are "equally rare" in both images: they may be anomalous, but they are not changes. We describe a resampling scheme we have developed that formally addresses both of these issues, and reduces the problem to a binary classification, a problem for which a large variety of machine learning tools have been developed. In principle, the effects of misregistration will manifest themselves as pervasive changes, and our method will be robust against them - but in practice, misregistration remains a serious issue.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651V (2007) https://doi.org/10.1117/12.721282
The Image Space Reconstruction Algorithm (ISRA) has been used in hyperspectral imaging applications to
monitor changes in the environment and specifically, changes in coral reef, mangrove, and sand in coastal areas.
This algorithm is one of a set of iterative methods used in the hyperspectral imaging area to estimate abundance.
However, ISRA is computationally intensive, making it difficult to obtain results in a timely manner. We present the
use of specialized hardware in the implementation of this algorithm, specifically the use of VHDL and FPGAs
in order to reduce the execution time. The implementation of ISRA algorithm has been divided into hardware
and software units. The hardware units were implemented on a Xilinx Virtex II Pro XC2VP30 FPGA and the
software was implemented on the Xilinx Microblaze soft processor. This case study illustrates the feasibility
of this alternative design for iterative hyperspectral imaging algorithms. The main bottleneck found in these implementations was data transfer. To reduce or eliminate this bottleneck, we introduced block RAMs (BRAMs) to buffer data and keep it readily available to the ISRA algorithm. The combination of DDR memory and BRAMs improved the speed of the implementation.
Results show that the C-language implementation still outperforms both FPGA implementations. Nevertheless, a closer look reveals that the FPGA results are close to those of the C implementation and could be further improved by adding memory capabilities to the FPGA board; the two implementations show no significant difference in execution time.
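For reference, the ISRA iteration itself is compact. A minimal NumPy sketch (the endmember spectra and abundances below are synthetic stand-ins for the coral/mangrove/sand signatures) applies the multiplicative update a ← a · (Mᵀy) / (MᵀMa), which keeps the abundance estimates nonnegative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical unmixing problem: a 30-band pixel as a nonnegative mixture of
# three endmember spectra (stand-ins for coral, mangrove, and sand).
bands, ends = 30, 3
M = rng.uniform(0.1, 1.0, size=(bands, ends))      # endmember matrix (positive)
a_true = np.array([0.5, 0.3, 0.2])
y = M @ a_true + 0.001 * rng.normal(size=bands)

# ISRA: multiplicative update that preserves nonnegativity of the abundances.
a = np.ones(ends) / ends                           # uniform initial estimate
for _ in range(2000):
    a = a * (M.T @ y) / (M.T @ (M @ a))

print(np.round(a, 2))
```

The per-iteration cost is dominated by the two matrix-vector products, which is exactly the data-movement-heavy kernel the hardware implementation above targets.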
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651W (2007) https://doi.org/10.1117/12.718373
One of the primary motivations for statistical LWIR background characterization studies is to support
the design, evaluation, and implementation of algorithms for the detection of various types of ground
targets. Typically, detection is accomplished by comparing the detection statistic for each test pixel
to a threshold. If the statistic exceeds the threshold, a potential target is declared. The threshold is
usually selected to achieve a given probability of false alarm. In addition, in surveillance applications,
it is almost always required that the system will maintain a constant false alarm rate (CFAR) as
the background distribution changes. This objective is usually accomplished by adaptively estimating
the background statistics and adjusting the threshold accordingly. In this paper we propose and
study CFAR threshold selection techniques, based on tail extrapolation, for a detector operating on
hyperspectral imaging data. The basic idea is to obtain reliable estimates of the background statistics
at low false alarm rates, and then extend these estimates beyond the range supported by the data to
predict the thresholds at lower false alarm rates. The proposed techniques are based on the assumption
that the distribution in the tail region of the detection statistics is accurately characterized by a member
of the extreme value distributions. We focus on the generalized Pareto distribution. The evaluation of
the proposed techniques will be done with both simulated data and real hyperspectral imaging data
collected using the Army Night Vision Laboratory COMPASS sensor.
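The tail-extrapolation idea can be sketched as follows. Everything below is an illustrative assumption rather than the estimators studied in the paper: the detection statistics are synthetic exponentials, and a simple method-of-moments fit stands in for whatever GPD estimator is actually used. A generalized Pareto model is fitted to exceedances over a data-supported threshold and then inverted at a false-alarm rate far beyond the sample size:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical detection statistics with an exponential tail, so the exact
# PFA = 1e-6 threshold is -log(1e-6) ~ 13.8; we try to recover it from only
# 1e5 samples, which cannot support that quantile empirically.
scores = rng.exponential(size=100_000)

u = np.quantile(scores, 0.99)            # high but data-supported threshold
exc = scores[scores > u] - u             # exceedances over u
zeta = exc.size / scores.size            # empirical P(X > u)

# Method-of-moments GPD fit to the exceedances (a simple stand-in estimator).
m, v = exc.mean(), exc.var()
xi = 0.5 * (1.0 - m * m / v)             # shape parameter
beta = 0.5 * m * (m * m / v + 1.0)       # scale parameter

def gpd_threshold(pfa):
    # Invert P(X > t) = zeta * (1 + xi (t - u) / beta)^(-1/xi) for t.
    if abs(xi) < 1e-6:                   # exponential limit of the GPD
        return u + beta * np.log(zeta / pfa)
    return u + beta / xi * ((pfa / zeta) ** (-xi) - 1.0)

t_extrap = gpd_threshold(1e-6)           # extrapolated CFAR threshold
print(round(t_extrap, 1))
```

The extrapolated threshold lands near the analytic value despite the target false-alarm rate being an order of magnitude rarer than anything in the sample, which is the essence of the proposed CFAR technique.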
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651X (2007) https://doi.org/10.1117/12.718162
Reduction of the potential health risks to consumers caused by food-borne infections is a very important food safety
issue of public concern; one of the leading causes of food-borne illnesses is fecal contamination. We consider detecting
fecal contaminants on chicken carcasses using hyperspectral imagery. We introduce our new improved floating forward
selection (IFFS) algorithm for feature selection of the wavebands to use in hyperspectral data for classification. Our
IFFS algorithm is an improvement on the state-of-the-art sequential floating forward selection (SFFS) algorithm. Our
initial results indicate that our method gives an excellent detection rate and performs better than other quasi-optimal
feature selection algorithms.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651Y (2007) https://doi.org/10.1117/12.720158
This paper describes a novel smart camera algorithm capable of detecting important changes in scenes. These changes
take the form of observed crowd dynamics, group behavior, and mounted and unmounted traffic flow. Using webcam
imagery of a football game, these cameras successfully exhibited scene understanding and detected anomalies without
prior training or examples. Our algorithm and results are summarized in this paper.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 65651Z (2007) https://doi.org/10.1117/12.719059
This study determined the relationship between in situ and remote sensing observation to derive an algorithm for PM10
mapping. The main objective of this study was to test the feasibility of using Landsat TM imagery captured on 17
January 2002 for PM10 mapping over Penang Island, Malaysia. A new algorithm was developed based on the aerosol
characteristic for air quality estimation. The corresponding PM10 data were measured simultaneously with the
acquisition of the satellite scene for algorithm regression analysis. Accuracy of the retrieved surface reflectance values is very important for determining the atmospheric component from the remotely sensed data. In this study, we computed the surface component properties using ATCOR2 in the PCI Geomatica 9.1 image processing software. The proposed algorithm produced a high correlation coefficient (R) and a low root-mean-square error (RMS) between the measured and
estimated PM10 values. A PM10 map was generated using the proposed algorithm. Finally, the created PM10 map was
geometrically corrected and colour-coded for visual interpretation. This study indicated the usefulness of remotely
sensed data for air quality studies using the proposed algorithm.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656520 (2007) https://doi.org/10.1117/12.719064
In this study, we used Landsat TM data captured on 9 March 2006 for the retrieval of PM10 over the water surface of the Penang Straits, Malaysia. PM10 measurements were collected using a handheld DustTrak™ meter simultaneously with the remotely sensed data acquisition. The PCI Geomatica version 9.1 digital image processing software was used for all image-processing analysis. An algorithm was developed based on the atmospheric optical characteristics. Digital numbers were extracted at the ground-truth locations for each band and converted into radiance and reflectance values. The surface reflectance was subtracted from the reflectance measured at the satellite [top-of-atmosphere reflectance, ρ(TOA)] to obtain the atmospheric reflectance, which was then related to the PM10 measurements through regression analysis and used to calibrate the PM10 algorithm. The proposed algorithm produced a high correlation coefficient (R) and a low root-mean-square error (RMS). A PM10 concentration map was generated using this algorithm over the water surface of the Penang Straits.
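The calibration step can be sketched generically. The reflectance values and the linear PM10 relation below are synthetic stand-ins, not the study's data; the sketch regresses the atmospheric reflectance (TOA minus surface reflectance) against co-located PM10 readings and reports R and RMSE:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical calibration set: 40 ground-truth stations with co-located
# handheld PM10 readings and per-band reflectance estimates.
n = 40
rho_toa = rng.uniform(0.10, 0.30, size=n)       # top-of-atmosphere reflectance
rho_surf = rng.uniform(0.02, 0.05, size=n)      # surface reflectance estimate
rho_atm = rho_toa - rho_surf                    # atmospheric component
pm10 = 400.0 * rho_atm + rng.normal(0.0, 5.0, size=n)  # synthetic "truth"

# Least-squares fit PM10 = a * rho_atm + b, then report R and RMSE.
A = np.vstack([rho_atm, np.ones(n)]).T
(a, b), *_ = np.linalg.lstsq(A, pm10, rcond=None)
pred = a * rho_atm + b
R = np.corrcoef(pm10, pred)[0, 1]
rmse = np.sqrt(np.mean((pm10 - pred) ** 2))
print(round(R, 3), round(rmse, 1))
```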
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656521 (2007) https://doi.org/10.1117/12.718761
The statistics of natural backgrounds extracted from an Airborne Visible and Infrared Imaging Spectrometer (AVIRIS)
hyperspectral datacube collected over Fort AP Hill, VA, were used to demonstrate the effects of the two atmospheric
components of a statistical end-to-end performance prediction model. New capabilities in MODTRAN™5 were used to
generate coefficients for linear transformations used in the atmospheric transmission and compensation components of a
typical end-to-end model. Model radiance statistics, calculated from the reflectance data, are found to be similar to those of the original AVIRIS radiance data. Moreover, if identical atmospheric conditions are applied in the atmospheric
transmission and in the atmospheric compensation model components and the effects of sensor noise are disregarded, the
resulting reflectance statistics are identical to the original reflectance statistics.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656522 (2007) https://doi.org/10.1117/12.719082
A ground-to-ground, sensor-to-object viewing perspective presents a major challenge for autonomous window-based object detection, since object scales cannot be approximated at this viewing perspective. In this paper, we present a fully autonomous parallel approach to address this challenge. Using hyperspectral (HS) imagery as input, the approach
autonomous parallel approach to address this challenge. Using hyperspectral (HS) imagery as input, the approach
features a random sampling stage, which does not require secondary information (range) about the targets; a parallel
process is introduced to mitigate the inclusion by chance of target samples into clutter background classes during random
sampling; and a fusion of results. The probability of sampling targets by chance within the parallel processes is modeled
by the binomial distribution family, which can assist on tradeoff decisions. Since this approach relies on the effectiveness
of its core algorithmic detection technique, we also propose a compact test statistic for anomaly detection, which is based
on a principle of indirect comparison. This detection technique has been shown to preserve meaningful detections (genuine anomalies in the scene) while significantly reducing the number of false positives (e.g., transitions between background regions). To capture the influence of parametric changes using both the binomial distribution family and actual HS imagery, we conducted a series of rigorous statistical experiments and present the results in this paper.
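The binomial model of sampling targets by chance can be sketched as follows; the scene size, target count, sample size, and number of parallel processes are hypothetical, and sampling is approximated as with-replacement:

```python
# Hypothetical numbers: a scene of N pixels containing t target pixels; each
# parallel process draws s pixels at random to model the background class.
N, t, s = 100_000, 50, 2_000
p_target = t / N

# Binomial model (with-replacement approximation): probability that a single
# draw of s pixels includes at least one target pixel.
p_hit = 1.0 - (1.0 - p_target) ** s

# With k independent parallel processes, the probability that every one of
# them is contaminated -- the only case the fusion stage cannot repair.
k = 5
p_all_contaminated = p_hit ** k
print(round(p_hit, 3), p_all_contaminated)
```

Even though any single random sample is quite likely to be contaminated, running several independent samplings drives the probability that all of them are contaminated down geometrically, which is the tradeoff the binomial model quantifies.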
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656523 (2007) https://doi.org/10.1117/12.719555
Endmember extraction has received considerable interest in recent years. Of particular interest is the Pixel Purity Index (PPI) because of its publicity and availability in the ENVI software, and many variants of the PPI have been developed. Among them is an interesting endmember extraction algorithm (EEA) called vertex component analysis (VCA), developed by Dias and Nascimento, which extends the PPI to a simplex-based EEA while using orthogonal subspace projection (OSP) as the projection criterion rather than the simplex volume used by another well-known EEA, the N-finder algorithm (N-FINDR) developed by Winter. Interestingly, this paper shows that the VCA is essentially the same algorithm as the Automatic Target Generation Process (ATGP), recently developed for automatic target detection and classification by Ren and Chang, except for the initial condition used to initialize the algorithm. To substantiate our findings, experiments using synthetic and real images are conducted for a comparative study and analysis.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656524 (2007) https://doi.org/10.1117/12.720101
In evaluating the performance of detectors such as the orthogonal subspace projection (OSP) detector, it is often assumed that the model under which the detector is constructed is the correct model. In practice, however, the ability to identify all background endmembers may be limited, so the OSP detector would use only a subset of them, clearly making the detector suboptimal. In this paper, we perform analytical calculations that allow us to assess how much detection power is lost due to the unidentified background endmembers. An analytical comparison is made between two OSP detectors: one using only the identified endmembers and the other using an even smaller set (to simulate the situation in which only the smaller set has been identified).
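The effect of unidentified endmembers on an OSP-style detector can be illustrated numerically; the dimensions, signatures, and abundances below are synthetic assumptions, not the paper's analytical setup. The OSP background annihilator P = I - B(BᵀB)⁻¹Bᵀ removes only the subspace it is given, so any endmember missing from B leaks through as background residue:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical setting: 20 bands and 4 true background endmembers, of which
# only the first 2 have been identified.
bands = 20
B_full = rng.normal(size=(bands, 4))           # all background endmembers
B_sub = B_full[:, :2]                          # identified subset only

def annihilator(B):
    # P = I - B (B^T B)^{-1} B^T projects onto the complement of span(B);
    # the OSP statistic then applies the target signature d^T to P x.
    return np.eye(bands) - B @ np.linalg.pinv(B)

# Background-only pixel (no target) with a little sensor noise.
abund = np.array([0.4, 0.3, 0.2, 0.1])
x_bg = B_full @ abund + 0.01 * rng.normal(size=bands)

leak_full = np.linalg.norm(annihilator(B_full) @ x_bg)
leak_sub = np.linalg.norm(annihilator(B_sub) @ x_bg)
print(leak_full < leak_sub)
```

With the full endmember set, the background pixel is annulled down to the noise level; with the subset, the two unidentified endmembers survive the projection and inflate the detector output, which is the source of the lost detection power analyzed above.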
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656525 (2007) https://doi.org/10.1117/12.718378
Remote detection of chemical vapors in the atmosphere has a wide range of civilian and military
applications. In the past few years there has been significant interest in the detection of effluent
plumes using hyperspectral imaging spectroscopy in the 8-13 µm atmospheric window. A major obstacle
in the full exploitation of this technology is the fact that everything in the infrared is a source of
radiation. As a result, the emission from the gases of interest is always mixed with emission by the
more abundant atmospheric constituents and by other objects in the sensor field of view. The radiance
fluctuations in this background emission constitute an additional source of interference which is much
stronger than the detector noise. In this paper we develop and evaluate parametric models for the
statistical characterization of LWIR hyperspectral backgrounds. We consider models based on the
theory of elliptically contoured distributions. These models can handle heavy tails, which is a key
statistical feature of hyperspectral imaging backgrounds. The paper provides a concise description of
the underlying models, the algorithms used to estimate their parameters from the background spectral
measurements, and the use of the developed models in the design and evaluation of chemical warfare
agent detection algorithms.
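As a hedged aside (the paper's specific parametric models are not reproduced here): elliptically contoured families such as the multivariate t generalize the Gaussian while allowing heavy tails, and the difference shows up directly in the Mahalanobis-distance statistic that RX-style detectors threshold. A small simulation, with all parameters invented for illustration:

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance of each row of x."""
    diff = x - mean
    return np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)

rng = np.random.default_rng(0)
p, n, df = 5, 20000, 3.0
z = rng.standard_normal((n, p))          # Gaussian background sample

# Multivariate t (an elliptically contoured law): a Gaussian scaled by a
# random chi-squared mixing variable -> same ellipsoids, heavier tails.
u = rng.chisquare(df, size=n)
t = z * np.sqrt(df / u)[:, None]

r2_gauss = mahalanobis_sq(z, z.mean(0), np.cov(z, rowvar=False))
r2_t     = mahalanobis_sq(t, t.mean(0), np.cov(t, rowvar=False))

# Threshold at the empirical 99th percentile of the Gaussian statistic:
# the heavy-tailed sample exceeds it far more often, i.e. a Gaussian
# background model badly underestimates the false-alarm rate.
thresh = np.quantile(r2_gauss, 0.99)
print((r2_t > thresh).mean())  # noticeably larger than 0.01
```

This excess-exceedance behavior is why heavy-tail modeling matters for setting detection thresholds against LWIR backgrounds.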
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, 656526 (2007) https://doi.org/10.1117/12.717923
Analysis of hyperspectral imagery requires the extraction of certain basis spectra called endmembers,
which are assumed to be the pure signatures in the image data.
The N-FINDR algorithm developed by Winter is one of the most widely used techniques for endmember extraction. This algorithm is based on the fact
that, in L spectral dimensions, the L-dimensional volume contained by a simplex formed from the purest
pixels is larger than any volume formed from any other combination of pixels. A recently proposed
algorithm based on virtual dimensionality (VD) determines the number of endmembers present in the
dataset, where an endmember initialization algorithm (EIA) is used to select an appropriate set of pixels for
initializing the N-FINDR process. In this paper, we propose a fast algorithm to implement the N-FINDR
technique, with much better computational efficiency than the existing techniques. In the proposed
technique, we use the VD to find the number of endmembers N. We then reduce the dimensionality of
the hyperspectral dataset to N−1 using the principal component transformation (PCT) and divide all the
pixels into N classes using the spectral angle mapper (SAM). We extract the N purest
pixels from each group using the classical N-FINDR algorithm with exhaustive search.
This yields N² pixels that are most likely to be the actual endmembers. The classical N-FINDR algorithm
is then applied again to these selected pixels to find the final N endmembers. Grouping the pixels into
several classes makes the computation very fast. Since we extract N pixels from each group
by exhaustive search, there is no possibility of losing an endmember due to the classification.
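The simplex-volume criterion and the swap search at the heart of classical N-FINDR can be sketched as follows. This is a toy illustration on synthetic 2-D data (i.e., N = 3 endmembers after dimensionality reduction), not the authors' optimized grouped algorithm:

```python
import math
import numpy as np

def simplex_volume(points):
    """Volume of the simplex spanned by N points in N-1 dimensions:
    |det| of the 1-augmented matrix, divided by (N-1)!."""
    n = points.shape[0]
    aug = np.vstack([np.ones(n), points.T])   # shape (N, N)
    return abs(np.linalg.det(aug)) / math.factorial(n - 1)

def n_findr(pixels, n_end, seed=0):
    """Classical N-FINDR: keep swapping each current endmember for any
    pixel that enlarges the simplex volume, until no swap helps."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pixels), n_end, replace=False)
    improved = True
    while improved:
        improved = False
        for j in range(n_end):
            for p in range(len(pixels)):
                trial = idx.copy()
                trial[j] = p
                if simplex_volume(pixels[trial]) > simplex_volume(pixels[idx]):
                    idx, improved = trial, True
    return idx

# Synthetic scene: 50 mixed pixels inside a triangle, plus its 3 vertices
# (the pure pixels) appended at rows 50..52.
rng = np.random.default_rng(1)
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
mixed = rng.dirichlet(np.ones(3), 50) @ verts   # convex combinations
pixels = np.vstack([mixed, verts])

print(sorted(n_findr(pixels, 3)))  # [50, 51, 52]: the pure pixels
```

The cost of the swap search grows with the number of pixels scanned per endmember, which is why pre-grouping pixels with SAM and searching within each small group, as the paper proposes, speeds the process up.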