It is often useful to fuse remotely sensed data acquired by different sensors. However, before this multi-sensor data
fusion can be performed, the data must first be registered. In this paper we investigate the use of a new information-theoretic
similarity measure known as Cross-Cumulative Residual Entropy (CCRE) for multi-sensor registration of remote sensing
imagery. Our experiments show that the CCRE registration algorithm automatically registered images captured with SAR
and optical sensors with a 100% success rate for initial registration errors of up to 30 pixels, requiring at most 80
iterations in the successful cases. These results demonstrate a significant improvement over a recent
mutual-information-based technique.
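CCRE replaces the Shannon entropy used in MI with cumulative residual entropy, which is computed from the survival function of the intensity distribution rather than the density. A minimal numpy sketch of the measure (the bin count, clipping constant, and histogram discretization are our illustrative choices, not taken from the CCRE paper):

```python
import numpy as np

def cre(p):
    """Cumulative residual entropy of a discrete distribution p over
    ordered bins: -sum_x P(X > x) * log P(X > x)."""
    surv = np.clip(1.0 - np.cumsum(p), 1e-12, 1.0)  # survival function P(X > x)
    return -np.sum(surv * np.log(surv))

def ccre(img_a, img_b, bins=32):
    """Cross-Cumulative Residual Entropy: CRE of A minus the expected
    CRE of A given B, estimated from a joint intensity histogram."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = h / h.sum()
    p_b = p_ab.sum(axis=0)    # marginal of image B
    p_a = p_ab.sum(axis=1)    # marginal of image A
    cond = sum(p_b[j] * cre(p_ab[:, j] / p_b[j])
               for j in range(bins) if p_b[j] > 0)
    return cre(p_a) - cond
```

Registration would then search for the transform parameters that maximize `ccre(fixed, warped_moving)`; the measure peaks when the two images are statistically most dependent.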
Detecting objects against a cluttered background in remote sensing data can produce many false alarms when the
target object and the background have overlapping spectra. In this study, we propose an integrated approach that combines
pixel-based spectral labeling with object-based spatial property measures. A hierarchical structure is developed in which
multi-level attributes and decision rules can be implemented, and the targets are then extracted progressively.
Experimental results show a substantial reduction in the number of false alarms with the proposed method.
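The two-level idea (pixel-wise spectral labeling, then object-wise spatial screening of the resulting connected components) can be sketched as follows; the thresholds and the crude elongation measure are illustrative stand-ins, not the paper's actual decision rules:

```python
import numpy as np
from scipy import ndimage

def detect_targets(spectral_score, score_thresh=0.7, min_area=20, max_elong=4.0):
    """Two-level detection sketch.
    spectral_score: per-pixel likelihood of the target class in [0, 1]."""
    # Level 1: pixel-based spectral labeling.
    mask = spectral_score > score_thresh
    # Level 2: object-based spatial screening of connected components.
    labels, n = ndimage.label(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        area = ys.size
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        elong = max(h, w) / min(h, w)   # bounding-box elongation
        if area >= min_area and elong <= max_elong:
            keep[labels == i] = True    # object survives both levels
    return keep
```

Small specks and implausibly elongated objects that pass the spectral test are rejected at the object level, which is where the false-alarm reduction comes from.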
In this paper, a new image denoising method based on the uHMT (universal Hidden Markov Tree) model in the wavelet
domain is proposed. The MAP (Maximum a Posteriori) estimate is adopted to deal with ill-posed problems such as image
denoising in the wavelet domain, with the uHMT model serving as the prior for the MAP estimate. The conjugate gradient
method is used to optimize the objective and obtain the closest approximation to the true image. The results show that
images restored by our method are sharper and of higher quality than those restored by other methods, both visually and
quantitatively.
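The abstract's pipeline is: form a MAP objective (likelihood plus prior) and minimize it by conjugate gradient. The uHMT wavelet prior itself is involved, so as a structural stand-in the sketch below uses a simple quadratic smoothness prior, which keeps the same solve-by-CG shape while being far simpler than uHMT:

```python
import numpy as np

def lap(x):
    """D^T D for the 2-D forward-difference gradient operator
    (a discrete Laplacian with zero-flux boundaries)."""
    out = np.zeros_like(x)
    out[:, :-1] += x[:, :-1] - x[:, 1:]
    out[:, 1:]  += x[:, 1:]  - x[:, :-1]
    out[:-1, :] += x[:-1, :] - x[1:, :]
    out[1:, :]  += x[1:, :]  - x[:-1, :]
    return out

def denoise_map_cg(y, lam=1.0, iters=60):
    """MAP denoising sketch: minimize ||x - y||^2 + lam * ||D x||^2,
    i.e. solve (I + lam * D^T D) x = y by conjugate gradient."""
    A = lambda x: x + lam * lap(x)
    x = y.copy()
    r = y - A(x)
    p = r.copy()
    rs = np.sum(r * r)
    for _ in range(iters):
        Ap = A(p)
        alpha = rs / np.sum(p * Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.sum(r * r)
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In the paper's method, the quadratic prior would be replaced by the uHMT model on wavelet coefficients; the CG machinery is unchanged.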
For multi-sensor registration, previous techniques typically use mutual information (MI) rather than the sum of squared
differences (SSD) as the similarity measure. However, optimizing MI is much less straightforward than optimizing SSD.
A new image registration technique has recently been proposed that uses an information-theoretic measure called
Cross-Cumulative Residual Entropy (CCRE). In this paper we show that using CCRE for multi-sensor registration of
remote sensing imagery provides an optimization strategy that converges to the global maximum in significantly fewer
iterations than existing techniques and is much less sensitive to the initial geometric disparity between the two images
to be registered.
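The general shape of similarity-driven registration can be illustrated with a toy translation-only search; any similarity measure (CCRE, MI, or negative SSD) is supplied as a callable, and the exhaustive search and search range are our simplifications of the paper's iterative optimization:

```python
import numpy as np

def register_translation(fixed, moving, sim, max_shift=10):
    """Return the integer shift (dy, dx) that maximizes
    sim(fixed_overlap, moving_overlap) over all shifts up to max_shift.
    `sim` is any similarity callable on two equal-shape arrays."""
    best, best_s = (0, 0), -np.inf
    H, W = fixed.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping windows of the two images under shift (dy, dx)
            f = fixed[max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)]
            m = moving[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
            s = sim(f, m)
            if s > best_s:
                best, best_s = (dy, dx), s
    return best
```

For mono-modal pairs negative SSD suffices; for SAR-to-optical pairs a statistical measure such as MI or CCRE must take its place, since intensities are not directly comparable across modalities.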
In this paper we introduce and test a new similarity measure for use in a template matching process for target detection
and recognition. The measure has recently been developed for multi-modal registration of medical images and is known
as phase mutual information (PMI). The key advantage of PMI is that it is invariant to lighting conditions, the ratio
between foreground and background intensity, and the level of background clutter, which is critical for target detection
and recognition in surveillance images acquired from various sensors. Several experiments were conducted using
real and synthetic datasets to evaluate the performance of PMI when compared with a number of commonly used
similarity measures including mean squared difference, gradient error and intensity mutual information. Our results show
that PMI consistently provided the most accurate detection and recognition performance.
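PMI amounts to computing ordinary mutual information on local-phase images rather than on raw intensities, which is what buys the intensity invariance. In the sketch below, phase is taken row-wise from the analytic signal via a Hilbert transform; this is a 1-D simplification of the 2-D phase estimators (monogenic or Gabor filters) typically used in the PMI literature:

```python
import numpy as np
from scipy.signal import hilbert

def local_phase(img):
    """Row-wise analytic-signal phase (a 1-D simplification)."""
    analytic = hilbert(img - img.mean(axis=1, keepdims=True), axis=1)
    return np.angle(analytic)

def mutual_information(a, b, bins=16):
    """Histogram estimate of MI between two equal-shape arrays."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

def pmi(a, b, bins=16):
    """Phase mutual information: MI computed on local-phase images."""
    return mutual_information(local_phase(a), local_phase(b), bins)
```

Because phase is unchanged by scaling or offsetting the intensities, `pmi(a, 2*a + 5)` equals `pmi(a, a)` exactly, which is the invariance property exploited for detection under varying illumination.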
In this study, we propose a sampling strategy for a single-step land cover change detection method. The strategy
facilitates the derivation of training samples for detailed "from-to" land cover change and no-change classes from images
of multiple dates. It consists of two steps. First, classes of interest are defined and their training samples are derived
separately from the two dates' data sets. Second, the two sets of class signatures are paired artificially into a single set
covering both change and no-change land cover classes. As a result, a full list of possible change and no-change classes
is effectively trained. The strategy is simple and can eliminate land cover transitions deemed impossible by expert
knowledge. Our case study on the Drayton Coal Mine and surrounding area demonstrated that the sampling strategy,
when used together with the single-step classification method, yielded a more meaningful and cleaner land cover change
map than the traditional two-step post-classification method. The single-step classification also provided higher overall
testing accuracy than the two-step post-classification (82.3% vs. 78.8%). In contrast, the map produced by the traditional
two-step post-classification is more fragmented, and the area of land cover change is clearly over-estimated (close to
50%). A further problem is that the two-step post-classification generated a large proportion of land cover change classes
that do not exist in the study area; this problem is overcome by the proposed training strategy.
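The pairing step above can be sketched directly; the class names and the impossible-transition set below are hypothetical examples for illustration, not those of the Drayton case study:

```python
from itertools import product

def build_change_classes(classes_t1, classes_t2, impossible=()):
    """Pair per-date class signatures into combined "from-to" change and
    no-change classes, dropping transitions ruled out by expert knowledge.
    Returns (label, (from_class, to_class)) tuples."""
    combined = []
    for c1, c2 in product(classes_t1, classes_t2):
        if (c1, c2) in impossible:
            continue  # e.g. an active mine pit reverting to native forest
        label = f"{c1}->{c2}" if c1 != c2 else f"{c1} (no change)"
        combined.append((label, (c1, c2)))
    return combined
```

Each combined class would then be trained on the paired signatures from the two dates, so the single-step classifier sees the full, pruned list of plausible transitions.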
Super-resolution (SR) recovery has been an important research area for remote sensing images ever since T.S. Huang
first published his frequency-domain method in 1984. With advances in computing, increasingly efficient algorithms
have been put forward in recent years. Iterative Back-Projection (IBP) is one of the popular SR methods. In this paper,
a modified IBP is proposed for Advanced Land Observing Satellite (ALOS) imagery. ALOS is a Japanese satellite
launched in January 2006 that carries three sensors: the Panchromatic Remote-sensing Instrument for Stereo Mapping
(PRISM), the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2), and the Phased Array type L-band
Synthetic Aperture Radar (PALSAR). PRISM has three independent optical systems for viewing nadir, forward, and
backward, so as to produce a stereoscopic image along the satellite's track. While PRISM is mainly used to construct a
3-D scene, here we use the three panchromatic low-resolution (LR) images captured by the nadir, backward, and forward
sensors to reconstruct one SR image.
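The classical IBP iteration, which the paper modifies, can be sketched for several LR frames as follows. The block-average downsampling model, replication back-projection, and perfectly co-registered inputs are simplifying assumptions; real ALOS PRISM views would additionally require sub-pixel registration and view-dependent point spread functions:

```python
import numpy as np

def downsample(x, s):
    """Simulate LR acquisition: s-by-s block average of the HR image."""
    H, W = x.shape
    return x.reshape(H // s, s, W // s, s).mean(axis=(1, 3))

def upsample(y, s):
    """Back-project an LR image or error map by pixel replication."""
    return np.kron(y, np.ones((s, s)))

def ibp(lr_images, scale=2, iters=30, step=1.0):
    """Iterative Back-Projection sketch for co-registered LR frames:
    repeatedly simulate LR images from the current HR estimate and
    back-project the residual errors."""
    x = upsample(np.mean(lr_images, axis=0), scale)  # initial HR guess
    for _ in range(iters):
        err = np.zeros_like(x)
        for y in lr_images:
            err += upsample(y - downsample(x, scale), scale)
        x += step * err / len(lr_images)
    return x
```

At convergence the HR estimate reproduces every observed LR frame under the assumed imaging model; with genuinely shifted nadir, forward, and backward views, the complementary sub-pixel samples are what add resolution beyond any single frame.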