Hyperspectral and multispectral imagery in optical remote sensing utilizes wavelengths ranging from the visible to the reflective shortwave infrared. Inverse processes based on machine learning are applied to the recorded spectral profiles for target detection, material identification, and associated environmental applications, which are the main purposes of remote sensing. This Field Guide covers the fundamentals of remote sensing spectral imaging for image understanding; image processing for correction and quality improvement; and image analysis for information extraction at the subpixel, pixel, superpixel, and image levels, including feature mining and feature reduction. Basic concepts and fundamental understanding are emphasized to prepare the reader for exploring advanced methods.
Guest editors Kun Tan, Xiuping Jia, and Antonio J. Plaza summarize the Special Section on Satellite Hyperspectral Remote Sensing: Algorithms and Applications.
To account for within-class endmember variability, it is more realistic to model a pure class with multiple endmembers. We propose an advanced multi-endmember unmixing algorithm based on twin support vector machines (UTSVM), which derives the abundances from the distances between the mixed pixels and each classification hyperplane. Unmixing uncertainty, an issue often neglected in multi-endmember unmixing, is also analyzed quantitatively for UTSVM. Two types of unmixing uncertainty are introduced: abundance overlap (different mixed pixels have the same abundances) and model overlap (one mixed pixel may be unmixed into different abundances). The abundance overlap angle and the abundance variability scale (AVS) are defined as two uncertainty indexes to measure abundance overlap and model overlap, respectively. The relationship between within-class endmember variability and unmixing uncertainty is discussed. When the unmixing uncertainty is high, we propose using the mean of the abundances within the AVS as the abundance estimate to obtain the best compromise. Experimental results show the feasibility and effectiveness of the proposed method.
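As a rough, hypothetical illustration of deriving abundances from distances to classification hyperplanes, the sketch below trains one standard linear SVM per endmember class (a stand-in for the twin SVMs actually used in UTSVM) on synthetic spectra and converts hyperplane distances into normalized pseudo-abundances; the data and the distance-to-abundance mapping are assumptions, not the authors' formulation.

```python
# Hypothetical sketch: hyperplane-distance-based pseudo-abundances.
# Uses standard linear SVMs (not twin SVMs) purely for illustration.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_bands, n_per_class = 50, 100

# Synthetic pure-class spectra with within-class variability (assumed data).
endmember_means = rng.uniform(0.1, 0.9, size=(3, n_bands))
X = np.vstack([m + 0.02 * rng.standard_normal((n_per_class, n_bands))
               for m in endmember_means])
y = np.repeat(np.arange(3), n_per_class)

# One linear SVM per class (one-vs-rest); decision_function gives signed
# distances (up to the weight norm) from a pixel to each hyperplane.
svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

def pseudo_abundances(pixels):
    """Map hyperplane distances to non-negative, sum-to-one abundances."""
    d = svm.decision_function(np.atleast_2d(pixels))
    d = np.clip(d - d.min(axis=1, keepdims=True), 1e-12, None)
    return d / d.sum(axis=1, keepdims=True)

# Example: a mixed pixel that is 60/30/10 of the three endmember means.
mix = (0.6 * endmember_means[0] + 0.3 * endmember_means[1]
       + 0.1 * endmember_means[2])
print(pseudo_abundances(mix))
```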
It is often useful to fuse remotely sensed data taken from different sensors. However, before this multi-sensor data fusion can be performed, the data must first be registered. In this paper we investigate the use of a new information-theoretic similarity measure known as Cross-Cumulative Residual Entropy (CCRE) for multi-sensor registration of remote sensing imagery. Our experiments show that the CCRE registration algorithm was able to automatically register images captured with SAR and optical sensors with a 100% success rate for initial registration errors of up to 30 pixels, requiring at most 80 iterations in the successful cases. These results demonstrate a significant improvement over a recent mutual-information-based technique.
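For intuition, a minimal discrete sketch of the CCRE similarity measure computed from a binned joint histogram is given below; the bin count and the survival-function formulation are simplifying assumptions, and in an actual registration loop this score would be maximized over the transform parameters.

```python
# Hypothetical sketch of cross-cumulative residual entropy (CCRE) between
# two images, computed from a discrete joint histogram.
import numpy as np

def cross_cumulative_residual_entropy(img_a, img_b, bins=64):
    """CCRE(A; B) = CRE(A) - E_B[ CRE(A | B) ] on binned intensities."""
    a = np.digitize(img_a.ravel(), np.histogram_bin_edges(img_a, bins)) - 1
    b = np.digitize(img_b.ravel(), np.histogram_bin_edges(img_b, bins)) - 1
    joint = np.zeros((bins, bins))
    np.add.at(joint, (np.clip(a, 0, bins - 1), np.clip(b, 0, bins - 1)), 1)
    joint /= joint.sum()
    p_a = joint.sum(axis=1)                      # marginal of A
    p_b = joint.sum(axis=0)                      # marginal of B

    def cre(p):
        surv = 1.0 - np.cumsum(p)                # survival function P(A > a)
        surv = surv[surv > 1e-12]
        return -np.sum(surv * np.log(surv))

    cond = sum(p_b[j] * cre(joint[:, j] / p_b[j])
               for j in range(bins) if p_b[j] > 1e-12)
    return cre(p_a) - cond

rng = np.random.default_rng(1)
ref = rng.random((128, 128))
warped = np.roll(ref, 2, axis=1) + 0.05 * rng.standard_normal((128, 128))
print(cross_cumulative_residual_entropy(ref, warped))
```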
The detection of objects against a cluttered background using remote sensing data may produce many false alarms if the target object and the background have overlapping spectra. In this study, we propose an integrated approach that combines pixel-based spectral labeling with object-based spatial property measures. A hierarchical structure is developed in which multilevel attributes and decision rules can be implemented, and the targets are then extracted progressively. Experimental results show a substantial reduction in the number of false alarms with the proposed method.
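A minimal sketch of the two-level idea follows, using a simple intensity threshold as a stand-in for the pixel-level spectral labeling and a connected-component area rule as a stand-in for the object-level spatial attributes; the synthetic scene, threshold, and area limit are assumptions, not the paper's decision rules.

```python
# Hypothetical sketch: pixel-level spectral rule followed by an object-level
# spatial attribute rule to suppress false alarms.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
scene = rng.normal(0.2, 0.05, size=(200, 200))       # cluttered background
scene[80:90, 60:75] += 0.5                            # a compact "target"
scene[rng.random((200, 200)) > 0.999] += 0.5          # isolated false alarms

# Level 1: pixel-based labeling (threshold stands in for a spectral classifier).
candidate = scene > 0.5

# Level 2: object-based spatial attributes on connected components.
labels, n = ndimage.label(candidate)
sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))
keep = [i + 1 for i, s in enumerate(sizes) if s >= 20]   # area rule (assumed)
targets = np.isin(labels, keep)
print(f"{n} candidate objects, {len(keep)} kept, {int(targets.sum())} target pixels")
```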
In this paper, a new image denoising method based on the uHMT (universal hidden Markov tree) model in the wavelet domain is proposed. The MAP (maximum a posteriori) estimate is adopted to deal with ill-posed problems such as image denoising in the wavelet domain, with the uHMT model serving as the prior for the MAP estimate. Using the conjugate gradient optimization method, the closest approximation to the true image is obtained. The results show that images restored by our method are better and sharper than those of other methods, both visually and quantitatively.
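To illustrate only the wavelet-domain setting (not the paper's uHMT prior or its MAP conjugate-gradient solver), here is a hypothetical baseline sketch using simple soft thresholding of wavelet coefficients; the wavelet, level, and universal threshold are assumptions.

```python
# Hypothetical baseline sketch: wavelet-domain denoising by soft thresholding.
# This is a simple shrinkage stand-in, not the uHMT-based MAP estimator.
import numpy as np
import pywt

rng = np.random.default_rng(3)
clean = np.kron(rng.random((16, 16)), np.ones((8, 8)))    # blocky test image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

coeffs = pywt.wavedec2(noisy, "db4", level=3)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745         # noise estimate (MAD)
thr = sigma * np.sqrt(2 * np.log(noisy.size))               # universal threshold
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db4")[:128, :128]

print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```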
For multi-sensor registration, previous techniques typically use mutual information (MI) rather than the sum of squared differences (SSD) as the similarity measure. However, the optimization of MI is much less straightforward than for SSD-based algorithms. A new image registration technique has recently been proposed that uses an information-theoretic measure called the Cross-Cumulative Residual Entropy (CCRE). In this paper we show that using CCRE for multi-sensor registration of remote sensing imagery provides an optimization strategy that converges to a global maximum in significantly fewer iterations than existing techniques and is much less sensitive to the initial geometric disparity between the two images to be registered.
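For reference, the MI baseline that CCRE is compared against can be sketched from a binned joint histogram as below; the bin count and the synthetic nonlinear intensity relation between the two modalities are assumptions.

```python
# Hypothetical sketch of the mutual-information (MI) similarity baseline,
# computed from a binned joint intensity histogram.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    joint /= joint.sum()
    p_a = joint.sum(axis=1, keepdims=True)
    p_b = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log(joint[nz] / (p_a @ p_b)[nz]))

rng = np.random.default_rng(4)
optical = rng.random((128, 128))
# Nonlinear intensity relation stands in for a different sensor modality.
sar_like = np.exp(optical) + 0.1 * rng.standard_normal(optical.shape)
print(mutual_information(optical, sar_like))
```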
In this paper we introduce and test a new similarity measure for use in a template matching process for target detection and recognition. The measure, known as phase mutual information (PMI), was recently developed for multi-modal registration of medical images. The key advantage of PMI is that it is invariant to lighting conditions, to the ratio between foreground and background intensity, and to the level of background clutter, which is critical for target detection and recognition in surveillance images acquired by various sensors. Several experiments were conducted on real and synthetic datasets to compare PMI with a number of commonly used similarity measures, including mean squared difference, gradient error, and intensity mutual information. Our results show that PMI consistently provided the most accurate detection and recognition performance.
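A minimal sketch of the PMI idea, i.e., mutual information computed between local-phase images rather than raw intensities, is shown below; the Gabor frequency, bin count, and the single-position comparison are assumptions (a full detector would slide the template over the scene and keep the best-scoring location).

```python
# Hypothetical sketch of phase mutual information (PMI): mutual information
# between local-phase images obtained from a complex Gabor filter.
import numpy as np
from skimage.filters import gabor

def local_phase(img, frequency=0.1):
    real, imag = gabor(img, frequency=frequency)
    return np.arctan2(imag, real)

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    joint /= joint.sum()
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log(joint[nz] / (pa @ pb)[nz]))

rng = np.random.default_rng(5)
template = rng.random((32, 32))
bright_view = 2.0 * template + 1.0          # same content, different lighting
score = mutual_information(local_phase(template), local_phase(bright_view))
print(score)
```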
In this study, we propose a sampling strategy for a single-step land cover change detection method. The sampling strategy facilitates the derivation of samples of detailed "from-to" land cover change and no-change classes from images of multiple dates. It consists of two steps. First, classes of interest are defined and their training samples are derived separately from the two dates' data sets. Second, the two sets of class signatures are combined in pairs into a single set covering both change and no-change land cover classes. As a result, a full list of possible land cover change and no-change classes is effectively trained. The strategy is simple and eliminates land cover change directions that expert knowledge deems impossible. Our case study of the Drayton Coal Mine and surrounding area demonstrated that the sampling strategy, used together with the single-step classification method, yielded a much more meaningful and cleaner land cover change map than the traditional two-step post-classification method. In addition, the one-step classification also provided higher overall testing accuracy than the two-step post-classification (82.3% vs. 78.8%). By contrast, the resultant map of the traditional two-step post-classification is more fragmented, and the area of land cover change is clearly overestimated (close to 50%). A further concern is that the two-step post-classification generated a large proportion of land cover change classes that do not exist in the study area. This problem can be overcome by the developed training strategy.
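A minimal sketch of the single-step "from-to" training idea follows: per-date class signatures are paired into combined change/no-change classes and one classifier is trained on the stacked two-date feature vector. The class names, band counts, transition list, and random forest classifier are assumptions for illustration only.

```python
# Hypothetical sketch: pairing per-date class signatures into "from-to"
# classes and training a single classifier on stacked two-date features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
n_bands, n_samples = 6, 60

def class_samples(mean):
    return mean + 0.03 * rng.standard_normal((n_samples, n_bands))

# Per-date class signatures (assumed spectra for three classes).
date1 = {"forest": class_samples(0.3), "mine": class_samples(0.7),
         "water": class_samples(0.1)}
date2 = {"forest": class_samples(0.3), "mine": class_samples(0.7),
         "water": class_samples(0.1)}

# Only plausible transitions are enumerated; impossible ones are omitted.
pairs = [("forest", "forest"), ("mine", "mine"), ("water", "water"),
         ("forest", "mine")]                      # e.g., mine expansion

X, y = [], []
for frm, to in pairs:
    X.append(np.hstack([date1[frm], date2[to]]))  # stacked two-date vector
    y += [f"{frm}->{to}"] * n_samples
X = np.vstack(X)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(np.hstack([date1["forest"][:1], date2["mine"][:1]])))
```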
Super-resolution (SR) recovery has been an important research area for remote sensing images ever since T.S. Huang first published his frequency-domain method in 1984, and with the development of computing technology, increasingly efficient algorithms have been put forward in recent years. The iterative back-projection (IBP) method is one of the popular SR methods. In this paper, a modified IBP is proposed for Advanced Land Observing Satellite (ALOS) imagery. ALOS is a Japanese satellite launched in January 2006 that carries three sensors: the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2), and the Phased Array type L-band Synthetic Aperture Radar (PALSAR). PRISM has three independent optical systems viewing nadir, forward, and backward so as to produce a stereoscopic image along the satellite's track. While PRISM is mainly used to construct 3-D scenes, here we use the three panchromatic low-resolution (LR) images captured by the nadir-, backward-, and forward-viewing sensors to reconstruct one SR image.
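For orientation, a minimal sketch of plain IBP from several low-resolution observations is given below; the synthetic scene, sub-pixel shifts, Gaussian blur, 2x factor, and step size are assumptions, and this is not the modified IBP developed for PRISM imagery.

```python
# Hypothetical sketch of plain iterative back-projection (IBP) super-resolution
# from several shifted low-resolution (LR) observations.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
hr_true = ndimage.gaussian_filter(rng.random((64, 64)), 2)   # synthetic scene
shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]                # assumed offsets
scale = 2

def degrade(hr, shift):
    """Shift, blur, and decimate the HR image to simulate one LR observation."""
    warped = ndimage.shift(hr, shift, order=1, mode="nearest")
    return ndimage.gaussian_filter(warped, 1)[::scale, ::scale]

lr_images = [degrade(hr_true, s) for s in shifts]

# Initial HR estimate: upsampled first LR image.
hr = ndimage.zoom(lr_images[0], scale, order=1)
for _ in range(20):
    for lr, s in zip(lr_images, shifts):
        error = lr - degrade(hr, s)                   # residual in LR space
        up = ndimage.zoom(error, scale, order=1)      # back-project residual
        hr += 0.5 * ndimage.shift(up, (-s[0], -s[1]), order=1, mode="nearest")

print("RMSE vs. true HR:", np.sqrt(np.mean((hr - hr_true) ** 2)))
```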