This PDF file contains the front matter associated with SPIE Proceedings Volume 6748, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
A pixel in a multispectral image is highly correlated with its neighboring pixels, both spatially and spectrally. Hence, a data
transformation is applied before pan-sharpening. Principal component analysis (PCA) has been a popular
choice for the spectral transformation of low-resolution multispectral images. Current PCA-based pan-sharpening methods
assume that the first principal component (PC), the one of highest variance, is the ideal component to replace or inject
with high spatial details from the high-resolution, histogram-matched panchromatic (Pan) image. However, this paper,
using statistical measures on the datasets, shows that the low-resolution first PC is not always an ideal
choice for substitution. This paper presents a new method to improve the quality of the resultant images that are obtained
using the PCA-based pan-sharpening methods. This approach is based on adaptively selecting the PC component
required to be replaced or injected with high spatial details. The pan-sharpened image obtained by the proposed method
is evaluated using well-known quality indexes. Results show that the proposed method increases the quality of the
resultant fused images when compared to the standard approach.
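The substitution step described above can be sketched as follows. The correlation criterion used here to pick the component adaptively is an assumed proxy for the paper's statistical measures, not the published rule:

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """Sketch of PCA pan-sharpening with adaptive PC selection.

    ms  : (H, W, B) low-resolution multispectral image, resampled to Pan size
    pan : (H, W) high-resolution panchromatic image
    Instead of always replacing PC1, the PC most correlated with Pan is
    substituted (correlation is an assumed stand-in for the paper's criterion).
    """
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    xc = x - mu
    # eigendecomposition of the band covariance gives the PC loadings
    cov = np.cov(xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]           # sort PCs by decreasing variance
    vecs = vecs[:, order]
    pcs = xc @ vecs                          # (H*W, B) component scores
    p = pan.reshape(-1).astype(float)
    # adaptively pick the PC most correlated (in absolute value) with Pan
    corr = [abs(np.corrcoef(pcs[:, i], p)[0, 1]) for i in range(b)]
    k = int(np.argmax(corr))
    # histogram-match Pan to the selected PC (mean/std matching), then substitute
    pm = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, k].std() + pcs[:, k].mean()
    pcs[:, k] = pm
    fused = pcs @ vecs.T + mu                # inverse PCA transform
    return fused.reshape(h, w, b), k
```

The function also returns the index of the substituted component, so the adaptive choice can be inspected per scene.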
We developed a new hierarchical joint segmentation technique, which provides an effective fusion of a sequence of
multitemporal single-channel SAR images of a given area with a multispectral optical image over the same target area.
The proposed segmentation method is totally unsupervised, and it allows identifying regions that are homogeneous with
respect to the whole data set (both optical and multitemporal SAR images). This is accomplished, first, by modeling the
statistics of the joint distribution of SAR and optical data, then treating the multi-channel input images as a single entity,
and performing the segmentation using information from all channels simultaneously. To this end, we consider two
different statistical models: 1) multivariate Gaussian model for the multiband optical images and gamma distribution for
the SAR images, 2) again multivariate Gaussian model for the multiband optical images and multivariate log-normal
distribution for the SAR images.
The proposed segmentation algorithm is based on a fast multi-scale iterated weighted aggregation method,
generalized here to multispectral remote sensing data. A quantitative analysis of the proposed joint segmentation
technique for the fusion of multitemporal SAR and multispectral optical images is carried out using real images. To this
end, any desired classification scheme can be applied after the segmentation step on the identified homogeneous
regions, which allows the full exploitation of the spatial-temporal information available in the multitemporal and
multisource data. Results show that the proposed joint segmentation technique, combined with even simple
classification methods, greatly improves the discrimination capability of the classifier.
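The first statistical model above (multivariate Gaussian for the optical bands, gamma for the SAR channels) can be illustrated by scoring one candidate region; the method-of-moments gamma fit and the independence of the two modalities are assumptions of this sketch, not details taken from the paper:

```python
import numpy as np
from math import lgamma

def region_loglik(optical, sar):
    """Log-likelihood of one region under model 1 of the joint segmentation:
    multivariate Gaussian for the optical bands, gamma for each SAR channel
    (modalities assumed independent; gamma fit by method of moments).

    optical : (N, B) optical samples of one region
    sar     : (N, T) multitemporal SAR samples of the same region (positive)
    """
    n, b = optical.shape
    mu = optical.mean(axis=0)
    cov = np.cov(optical, rowvar=False) + 1e-6 * np.eye(b)  # regularized
    diff = optical - mu
    sign, logdet = np.linalg.slogdet(cov)
    maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    ll = -0.5 * (n * (b * np.log(2 * np.pi) + logdet) + maha.sum())
    for t in range(sar.shape[1]):
        x = sar[:, t]
        m, v = x.mean(), x.var() + 1e-12
        k, theta = m * m / v, v / m            # method-of-moments gamma fit
        ll += ((k - 1) * np.log(x) - x / theta).sum() \
              - n * (k * np.log(theta) + lgamma(k))
    return ll
```

A segmentation driven by such a score would prefer merges that keep the joint likelihood of both modalities high.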
Change-detection methods represent powerful tools for monitoring the evolution of the state of the Earth's surface.
In order to optimize the accuracy of the change maps, a multiscale approach can be adopted, in which
observations at coarser and finer scales are jointly exploited. In this paper, a multiscale contextual unsupervised
change-detection method is proposed for optical images, which is based on discrete wavelet transforms and
Markov random fields. Wavelets are applied to the difference image to extract multiscale features and Markovian
data fusion is used to integrate both these features and the spatial contextual information in the change-detection
process. Expectation-maximization and Besag's algorithms are used to estimate the model parameters. Experiments
on real optical images point out the improved effectiveness of the method, as compared with single-scale
approaches.
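The multiscale feature extraction on the difference image can be sketched with a simple Haar-style pyramid; this 2x2 averaging is an illustrative stand-in for the paper's discrete wavelet transform:

```python
import numpy as np

def multiscale_features(img1, img2, levels=3):
    """Illustrative multiscale feature stack for change detection: the
    difference image is repeatedly smoothed by 2x2 averaging (the Haar
    approximation subband) and upsampled back to full size, giving one
    feature plane per scale. A stand-in for the paper's wavelet transform.
    """
    d = img1.astype(float) - img2.astype(float)
    feats = [d]                              # scale 0: the raw difference
    cur = d
    for _ in range(levels):
        h, w = cur.shape
        # 2x2 average pooling = Haar approximation (LL) subband
        cur = cur[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        # upsample back to the original size so all scales can be stacked
        f = 2 ** len(feats)
        up = np.kron(cur, np.ones((f, f)))
        feats.append(up[: d.shape[0], : d.shape[1]])
    return np.stack(feats, axis=-1)
```

The stacked planes are the per-pixel feature vectors that Markovian data fusion would then combine with spatial context.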
This paper presents a parcel-based multiscale technique robust to registration noise for unsupervised change detection in
multitemporal very high geometrical resolution images. The proposed technique is based on the analysis of the statistical
behavior of registration noise present in multitemporal images at different scales. In particular, the method exploits a
differential analysis of the direction distributions of spectral change vectors (SCVs) computed at different resolution
levels in the polar domain. The multiscale analysis makes it possible to separate sectors associated with true changes from sectors
associated with residual registration noise. In order to improve the change-detection accuracy, the presented approach
exploits the spatial-contextual information contained in the neighborhood of each pixel by defining multitemporal
"parcels" (i.e. small homogeneous regions shared by both original images). Change detection is achieved by applying a
specific comparison algorithm to each pixel of the images at full resolution, by considering both the information on
registration noise obtained from the differential analysis and the spatial-contextual information contained in the parcels.
In particular, the computed change-detection map shows a high geometrical fidelity in detail representation and a sharp
reduction in false alarms due to the residual registration noise. Experimental results confirm the effectiveness of the
proposed approach.
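The polar representation of the spectral change vectors that the differential analysis operates on can be sketched as follows (two bands are used purely for illustration):

```python
import numpy as np

def scv_polar(img1, img2):
    """Spectral change vectors (SCVs) of a two-band bitemporal pair,
    expressed in the polar domain: per-pixel magnitude and direction.
    The multiscale sector analysis of the paper works on this
    representation computed at several resolution levels.
    """
    d = img2.astype(float) - img1.astype(float)   # (H, W, 2) change vectors
    mag = np.hypot(d[..., 0], d[..., 1])          # magnitude (radius)
    ang = np.arctan2(d[..., 1], d[..., 0])        # direction in [-pi, pi]
    return mag, ang
```

Binning `ang` into angular sectors, separately at each scale, is the starting point for separating true-change sectors from registration-noise sectors.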
Radiometric normalization is a vital stage in the pre-processing of multi-temporal imagery. It aims to ensure a reliable
exploitation of images acquired under different imaging conditions. In this study, we investigate whether a relative
normalization can replace atmospheric correction. The investigation was done using a time series of eighteen SPOT 5
images acquired over Reunion Island and intended to be used for sugarcane monitoring. An automatic method for
relative normalization is introduced, and its results are compared to atmospherically corrected data. The relative method
is based on the reflectances of invariant targets (IT) that are selected automatically. The atmospheric correction is carried
out by the 6S code. The comparison was performed a) by using a set of manually selected invariant targets (MSIT), and
b) by assessing the NDVI behavior of a set of sugarcane fields. An excellent correlation is obtained between relatively
and atmospherically corrected data: the coefficient of determination (R2) is higher than 0.96 for all spectral bands and for
the NDVI. Moreover, a comparable impact is observed on the temporal profiles of MSIT and on the NDVI trajectories of
sugarcane fields.
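Once the invariant targets are available, the per-band normalization reduces to a gain/offset regression; the least-squares fit below is one common form of such a relative method, and the automatic IT selection step is assumed done elsewhere:

```python
import numpy as np

def relative_normalize(band, ref_band, it_mask):
    """Relative radiometric normalization of one band against a reference
    acquisition, using invariant targets (IT): a linear gain/offset is fit
    on the IT pixels and applied to the whole band.

    band, ref_band : (H, W) target and reference images
    it_mask        : boolean (H, W) mask of invariant-target pixels
    """
    x = band[it_mask].astype(float)
    y = ref_band[it_mask].astype(float)
    gain, offset = np.polyfit(x, y, 1)     # y ~ gain * x + offset on the ITs
    return gain * band.astype(float) + offset
```

Applying this to each band of each date in the series brings the whole time series onto the radiometry of the reference acquisition.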
The iteratively re-weighted multivariate alteration detection (IR-MAD) transformation is proving to be very
successful for multispectral change detection and automatic radiometric normalization applications in remote
sensing. Various alternatives exist in the way in which the weights (no-change probabilities) are calculated
during the iteration procedure. These alternatives are compared quantitatively on the basis of multispectral
imagery from different sensors under a range of ground cover conditions exhibiting wide variations in the amount
of change present, as well as with a partially artificial data set simulating truly time-invariant observations. A
best re-weighting procedure is recommended.
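One widely used re-weighting rule in IR-MAD treats the sum of squared, variance-normalized MAD variates of a no-change pixel as chi-square distributed, so the no-change probability is an upper-tail probability; this sketch shows that rule, not necessarily the procedure the paper finally recommends:

```python
import numpy as np
from scipy.stats import chi2

def nochange_weights(mad, sigma):
    """Chi-square re-weighting for IR-MAD: for a no-change pixel the sum
    of squared standardized MAD variates follows a chi-square law with N
    degrees of freedom, so the weight (no-change probability) is the
    upper tail of that distribution.

    mad   : (P, N) MAD variates for P pixels, N components
    sigma : (N,) no-change standard deviations of the MAD variates
    """
    z = (mad / sigma) ** 2                  # squared standardized variates
    return chi2.sf(z.sum(axis=1), df=mad.shape[1])
```

These weights are fed back into the next iteration's weighted means and covariances, which is what makes the procedure iterative.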
This paper formulates the problem of distinguishing changed from unchanged pixels in remote sensing images as a
minimum enclosing ball (MEB) problem with changed pixels as the target class. The definition of the sphere-shaped
decision boundary with minimal volume that embraces changed pixels is approached in the context of the support vector
formalism adopting a support vector domain description (SVDD) one-class classifier. The SVDD maps the data into a
high dimensional feature space where the spherical support of the high dimensional distribution of changed pixels is
computed. The proposed formulation of the SVDD uses both target and outlier samples for defining the MEB, and is
included here in an unsupervised system for change detection. For this purpose, nearly certain examples for the classes of
both targets (i.e., changed pixels) and outliers (i.e., unchanged pixels) for training are identified based on thresholding
the magnitude of spectral change vectors. Experimental results obtained on two different multitemporal and multispectral
remote sensing images pointed out the effectiveness of the proposed method.
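The unsupervised selection of nearly certain training examples can be sketched by thresholding the magnitude of the spectral change vectors; the quantile thresholds below are illustrative assumptions, and the SVDD training itself is omitted:

```python
import numpy as np

def select_pseudolabels(img1, img2, low_q=0.2, high_q=0.95):
    """Unsupervised selection of nearly certain training examples for the
    SVDD: pixels with very high spectral-change-vector magnitude are taken
    as changed (targets), very low as unchanged (outliers). The quantile
    thresholds are illustrative; the paper's thresholding rule may differ.

    img1, img2 : (H, W, B) co-registered multispectral images
    Returns boolean masks (changed, unchanged).
    """
    mag = np.linalg.norm(img2.astype(float) - img1.astype(float), axis=-1)
    lo, hi = np.quantile(mag, [low_q, high_q])
    return mag >= hi, mag <= lo
```

The two masks provide the target and outlier samples that the MEB formulation uses to fit the minimal enclosing sphere in feature space.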
Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from
being a sparse research tool into a commodity product available to a broad user community. As a result, there is
an emerging need for standardized data processing techniques, able to take into account the special properties of
hyperspectral data and to take advantage of latest-generation sensor instruments and computing environments.
The goal of this paper is to provide a seminal view on recent advances in techniques for hyperspectral data
classification. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the
data, and to integrate the spatial and spectral information. The performance of the proposed techniques is evaluated
in different analysis scenarios, including land-cover classification, urban mapping and spectral unmixing.
To satisfy time-critical constraints in many remote sensing applications, parallel implementations for some of
the discussed algorithms are also developed. Combined, these parts provide a snapshot of the state-of-the-art in
those areas, and offer a thoughtful perspective on the potential and emerging challenges in the design of robust
hyperspectral data classification algorithms.
Kernel-based Orthogonal Subspace Projection (KOSP) provides good results in the field of classification of
hyperspectral images. However, an open problem is how to derive, from the ground-truth samples, the
prototypes that best represent the classes. In the original formulation of KOSP, this preliminary (training)
stage is very simple since for each class the prototype is computed as the centroid of the ground-truth samples.
In order to improve KOSP performances, in this paper we introduce a minimization problem to evaluate the
best prototypes from a given ground truth of a specific classification problem. K-fold cross-validation is used to
avoid overfitting. The performance of the proposed methodology is tested by classifying the widely used 'Indian
Pine' hyperspectral dataset collected by the AVIRIS spectrometer.
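The simple baseline training stage that the paper improves upon, each prototype as the centroid of its ground-truth samples, amounts to:

```python
import numpy as np

def centroid_prototypes(samples, labels):
    """Training stage of the original KOSP formulation: each class
    prototype is the centroid of that class's ground-truth samples
    (the paper replaces this with an optimized prototype selection).

    samples : (N, B) spectra, labels : (N,) integer class labels
    Returns a (C, B) array of prototypes, one row per class in sorted order.
    """
    classes = np.unique(labels)
    return np.vstack([samples[labels == c].mean(axis=0) for c in classes])
```

The proposed method instead searches for the prototypes that minimize the classification error, with k-fold cross-validation guarding against overfitting.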
Remote sensing hyperspectral sensors are important and powerful instruments for addressing land-cover classification
problems, as they permit a detailed characterization of the spectral behavior of the considered information classes.
However, the processing of hyperspectral data is particularly complex both from the theoretical viewpoint (e.g. problems
related to the Hughes phenomenon [1]) and from the computational perspective. In this context, although many
investigations on feature reduction and feature extraction in hyperspectral data have been presented in the literature, only
a few studies have analyzed the role of the spectral resolution on the classification accuracy in different application domains. In
this paper, we present an empirical study aimed at understanding the relationships among spectral resolution, classifier
complexity, and classification accuracy obtained with hyperspectral sensors in classification of forest areas. On the basis
of this study, important conclusions can be derived on the choice of the spectral resolution of hyperspectral sensors for
forest applications, also in relation to the complexity of the adopted classification methodology. These conclusions can
be exploited both in the context of the design of hyperspectral sensors (or for programming spectral channels of the
available instruments) and in the phase of development of classification system for hyperspectral data.
Use of a Mean Class Propagation Model (MCPM) has been shown to be an effective approach in the expedient
propagation of hyperspectral data scenes through the atmosphere. In this approach, real scene data are spatially subdivided
into regions of common spectral properties. Each sub-region, which we call a class, possesses two important
attributes: (1) the mean spectral radiance and (2) the spectral covariance. The use of these attributes can significantly
improve the throughput performance of computing systems over conventional pixel-based methods.
However, this approach assumes that background clutter can be approximated as having multivariate Gaussian
distributions. Under such conditions, covariance propagations can be effectively performed from ground through the
atmosphere. This paper explores this basic assumption using real-scene Airborne Visible/Infrared Imaging Spectrometer
(AVIRIS) data and examines how the partitioning of the scene into smaller and smaller segments influences local clutter
characterization.
It also presents a clutter characterization metric that helps explain the migration of the magnitude of statistical clutter
from parent-class to child-subclass populations. It is shown that such a metric can be directly related to an
approximate invariant between the parent class and its child classes.
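The two class attributes the model propagates are straightforward to compute; this minimal sketch shows them for one class of pixels:

```python
import numpy as np

def class_attributes(pixels):
    """The two attributes an MCPM class carries: the mean spectral
    radiance and the spectral covariance of its member pixels.

    pixels : (N, B) spectra of one class
    Returns (mean, covariance) with shapes (B,) and (B, B).
    """
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)   # (B, B) spectral covariance
    return mean, cov
```

Under the multivariate Gaussian clutter assumption examined in the paper, these two quantities fully describe the class, which is what makes class-level propagation so much cheaper than pixel-level propagation.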
Anomaly and Target Detection in Hyperspectral Images
This work presents a comparative experimental analysis of different Anomaly Detectors (ADs) carried out on a high
spatial resolution data set acquired by the prototype hyperspectral sensor SIM-GA. The benchmark AD for hyperspectral
anomaly detection is the Reed-Xiaoli (RX) algorithm. Its main limitation is the assumption that the local background can
be modeled by a Gaussian distribution. In the literature, several ADs have been presented, most of them trying to cope
with the problem of non-Gaussian background. Despite the variety of works carried out on such algorithms, it is difficult
to find a comparative analysis of these methodologies performed on the same data set and therefore in identical operating
conditions. In this work, the best-known ADs, such as the RX, Orthogonal Subspace Projection (OSP) based algorithms,
the Cluster Based AD (CBAD), and the Signal Subspace Processing AD (SSPAD) are analyzed and compared,
highlighting their most interesting characteristics. The performance is evaluated on a new data set relative to a rural
scenario, in which several man-made targets have been embedded. The non-homogeneous nature of the background,
enhanced by the high spatial resolution of the sensor, and the presence of man-made artifacts, like buildings and
vehicles, make the anomaly detection process very challenging. Performance comparison is carried out on the basis of a
joint analysis of the Receiver Operating Characteristics and the image statistics. For this data set, the best performance
is obtained by the strong background suppression ability of the OSP-based algorithm.
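The benchmark RX algorithm mentioned above reduces to a Mahalanobis distance under the Gaussian background assumption; this sketch uses global scene statistics, whereas the local variant would estimate them in a sliding window:

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector: the Mahalanobis distance of each pixel
    spectrum from the scene background mean, under the Gaussian background
    assumption the paper discusses.

    cube : (H, W, B) hyperspectral image
    Returns an (H, W) anomaly score map (larger = more anomalous).
    """
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)  # regularized
    d = x - mu
    scores = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return scores.reshape(h, w)
```

Thresholding the score map yields the detection mask; the non-Gaussian, non-homogeneous backgrounds described in the abstract are exactly where this model breaks down.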
Physical growth processes give rise to a number of hyperspectral vegetation background clutter properties which degrade
the ability to detect targets in such backgrounds. In order to gain insight into this complex problem a novel three-fold
method is proposed: the first appeals to growth processes to produce generative models of the background clutter; the
second examines the phenomenology of these models and compares it to real data; and the third devises new anomaly
detectors to mitigate the effects of these background clutter properties. Studies of model cellular automata are reported
here. These models aim to replicate the local conditions necessary for successful growth of the vegetation species and as
a result produce spatial correlations that match real vegetation. Non-competitive and competitive growth models, in
particular, are studied and produce hyperspectral images through the use of Cameosim. In general, no degrading effects
of using an enhanced spectral library were observed, suggesting that the dominant factor in reducing anomaly detectors'
effectiveness is the spatial inhomogeneity of vegetation abundances. In addition, evidence for several important
properties of the hyperspectral background is also reported. These support the conclusion that vegetation background
clutter distributions are non-Gaussian. Insight gleaned from these studies has been used to develop many new improved
anomaly detectors and their results are also reported and bench-marked against existing algorithms.
The ability to rapidly calculate at-sensor radiance over a large number of lines of sight (LOSs) is critical for
hyperspectral and multispectral scene simulations and look-up table generation, both of which are
increasingly used for sensor design, performance evaluation, data analysis, and software and systems
evaluations. We have demonstrated a new radiation transport (RT) capability that combines an efficient
multiple-LOS (MLOS) multiple scattering (MS) algorithm with a broad-bandpass correlated-k methodology
called kURT-MS, where kURT stands for correlated-k-based Ultra-fast Radiative Transfer. The MLOS
capability is based on DISORT and exploits the existing MODTRAN-DISORT interface. kURT-MS is a new
sensor-specific fast radiative transfer formalism for UV-visible to LWIR wavelengths that is derived from
MODTRAN's correlated-k parameters. Scattering parameters, blackbody and solar functions are cast as a
few sensor-specific and bandpass-specific k-dependent source terms for radiance computations. Preliminary
transmittance results are within 2% of MODTRAN with a two-orders-of-magnitude computational savings.
Preliminary radiance computations in the visible spectrum are within a few percent of MODTRAN results,
but with orders of magnitude speed up over comparable MODTRAN runs. This new RT capability
(embodied in two software packages: kURT-MS and MODTRAN-kURT) has potential applications for
remote sensing applications such as hyperspectral scene simulation and look-up table generation for
atmospheric compensation analysis as well as target acquisition algorithms for near earth scenarios.
A new non-negative factorization method has been developed. The method is based on the concept of non-negative
rank (NNR). Bounds for the NNR of certain non-negative matrices are determined relative to the
rank of the matrix, with the lower bound being equal to the rank. The method requires that the data matrix be
non-negative and have a large first singular value. Unlike other non-negative factorization methods, the
approach does not assume or require that the factors be linearly independent and no assumption of statistical
independence is required. The rank of the matrix provides the number of linearly independent components
present in the data while the non-negative rank provides the number of non-negative independent components
present in the data. The method is described and illustrated in application to hyperspectral data sets.
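For comparison, the most familiar non-negative factorization baseline, Lee-Seung multiplicative updates, can be written in a few lines; note this is a standard NMF sketch, not the NNR-based method the abstract describes:

```python
import numpy as np

def nmf(v, r, iters=500, seed=0):
    """Baseline non-negative factorization V ~ W @ H via Lee-Seung
    multiplicative updates: all factors stay non-negative throughout.
    (Illustrative baseline only; the paper's NNR-based method differs.)

    v : (M, N) non-negative data matrix, r : target inner dimension
    """
    rng = np.random.default_rng(seed)
    m, n = v.shape
    w = rng.random((m, r)) + 0.1
    h = rng.random((r, n)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        h *= (w.T @ v) / (w.T @ w @ h + eps)   # multiplicative H update
        w *= (v @ h.T) / (w @ h @ h.T + eps)   # multiplicative W update
    return w, h
```

Against this baseline, the NNR perspective is that `r` should be chosen as the non-negative rank, which may exceed the ordinary rank of the matrix.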
Estimation and Regression of Biophysical Parameters
Land surface temperature (LST) and sea surface temperature (SST) are important quantities for many environmental
models, and remote sensing is a feasible and promising way to estimate them on a regional and global
scale. In order to estimate LST and SST from satellite data many algorithms have been devised, most of which
require a-priori information about the surface and the atmosphere. However, the high variability of surface and
atmospheric parameters causes these traditional methods to produce significant estimation errors, thus making
their application on a global scale critical. A recently proposed approach involves the use of support vector
machines (SVMs). Based on satellite data and corresponding in-situ measurements, they generate an approximation
of the relation between them, which can be used subsequently to estimate unknown surface temperatures
for additional satellite data. Such a strategy requires the user to set several internal parameters.
In this paper a method is proposed for automatically setting these parameters to values that lead to minimum
estimation errors. This is achieved by minimizing a functional correlated with regression errors (i.e., the "span bound"
upper bound on the leave-one-out error) which can be computed using only the training set, without the
need for a further validation set. In order to minimize this functional, Powell's algorithm is used, because
it is applicable also to nondifferentiable functions. Experimental results generated by the proposed method turn
out to be very similar to those obtained by cross-validation and by a grid search for the parameter configuration
yielding the best test-set accuracy, although with a dramatic reduction in the computational times.
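The optimization strategy can be sketched with SciPy's Powell implementation; the toy surrogate below (a nondifferentiable absolute-value bowl) stands in for the span-bound functional, whose actual computation is specific to the SVM:

```python
import numpy as np
from scipy.optimize import minimize

def tune_parameters(error_bound, x0):
    """Automatic parameter setting in the spirit of the paper: minimize a
    (possibly nondifferentiable) functional correlated with the regression
    error using Powell's derivative-free algorithm. `error_bound` stands
    in for the span-bound on the leave-one-out error.
    """
    res = minimize(error_bound, x0, method='Powell')
    return res.x

# toy surrogate: nondifferentiable bowl with minimum at (1.0, -2.0),
# e.g. (log C, log gamma) in an SVM parameterization (hypothetical values)
surrogate = lambda p: abs(p[0] - 1.0) + abs(p[1] + 2.0)
best = tune_parameters(surrogate, x0=np.array([0.0, 0.0]))
```

Because Powell's method needs no gradients, it tolerates the kinks that bounds like the span bound exhibit as support vectors enter and leave the solution.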
Hyperspectral unmixing methods aim at the decomposition of a hyperspectral image into a collection of endmember
signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance
fractions at each pixel in the image.
This paper introduces a new unmixing method termed dependent component analysis (DECA). This method
is blind and fully automatic and it overcomes the limitations of unmixing methods based on Independent Component
Analysis (ICA) and on geometrical based approaches.
DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures
weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet
densities, thus enforcing the non-negativity and sum-to-one constraints imposed by the acquisition process.
The endmember signatures are inferred by a generalized expectation-maximization (GEM) type algorithm. The
paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.
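The generative side of the model DECA assumes is easy to simulate: Dirichlet draws give abundances that are non-negative and sum to one by construction. This sketch is a data generator for the linear mixture model, not the DECA inference algorithm itself:

```python
import numpy as np

def simulate_mixtures(endmembers, n_pixels, alpha=None, seed=0):
    """Generate pixels under the linear mixture model with Dirichlet
    abundances: each pixel is a convex combination of endmember
    signatures, so non-negativity and sum-to-one hold by construction.

    endmembers : (E, B) signature matrix
    Returns (pixels, abundances) with shapes (n_pixels, B), (n_pixels, E).
    """
    rng = np.random.default_rng(seed)
    e = endmembers.shape[0]
    if alpha is None:
        alpha = np.ones(e)                   # symmetric Dirichlet by default
    a = rng.dirichlet(alpha, size=n_pixels)  # rows sum to 1, entries >= 0
    return a @ endmembers, a
```

Synthetic scenes produced this way are the natural test bed for comparing DECA against ICA-based and geometrical unmixing approaches.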
This paper describes the application of a geostatistical method to quantify noise in a Compact Airborne Spectrographic Imager (CASI) data set. Estimating the noise contained within a remote sensing image is essential in order to quantify the effects of noise contamination. Noise was estimated from the CASI imagery by calculating it as the square root of the nugget variance, a parameter of a fitted semivariogram model. The signal-to-noise ratio (SNR) can then be estimated by dividing the mean value by the square root of the nugget variance. Three wavebands, 0.46-0.49 μm (blue), 0.63-0.64 μm (red) and 0.70-0.71 μm (near-infrared), were used in the analysis. A total of five land covers were selected, each representing a common land cover type in the area: i) bracken, ii) conifer woodland, iii) grassland, iv) heathland and v) deciduous woodland. The results show that the noise varies across land cover types and wavelengths.
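The nugget-based SNR estimate can be sketched as follows; a linear extrapolation of the empirical semivariogram to lag zero stands in here for fitting a full semivariogram model:

```python
import numpy as np

def snr_from_nugget(img, max_lag=5):
    """Geostatistical SNR estimate: compute the empirical semivariogram of
    an image band at small horizontal lags, extrapolate linearly to lag
    zero to get the nugget variance, and take SNR = mean / sqrt(nugget).
    (The linear extrapolation is a simplification of semivariogram
    model fitting.)
    """
    img = img.astype(float)
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((img[:, h:] - img[:, :-h]) ** 2)
                      for h in lags])             # semivariance per lag
    slope, nugget = np.polyfit(lags, gamma, 1)    # extrapolate to lag 0
    nugget = max(nugget, 1e-12)                   # guard against a negative intercept
    return img.mean() / np.sqrt(nugget)
```

Because the nugget captures the spatially uncorrelated component of the variance, it isolates sensor noise from genuine land-cover texture, which is why the estimate varies with cover type and waveband.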
The problem of distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is
investigated. Distortion is generally measured either as maximum absolute deviation (MAD) for near-lossless
methods, e.g. differential pulse code modulation (DPCM), or as mean square error (MSE) for lossy methods (e.g.
spectral decorrelation followed by JPEG 2000). Also the absolute angular error, or spectral angle mapper (SAM),
is used to quantify spectral distortion. A band add-on (BAO) technique was recently introduced to calculate
a modified version of SAM. Spectral bands are iteratively selected in order to increase the angular separation
between two pixel spectra by exploiting a mathematical decomposition of SAM. As a consequence, only a subset
of the original hyperspectral bands contributes to the new distance metrics, referred to as BAO-SAM, whose
operational definition guarantees its monotonicity as the number of bands increases. Two strategies of interband
distortion allocation are compared: given a target average bit rate, the distortion, either MAD or MSE, may
be held constant across wavelengths. Otherwise, it may be allocated proportionally to the noise level
on each band, according to the virtually-lossless protocol. Thus, a different quantization step size, depending
on the estimated standard deviation of the noise, is used to quantize either prediction residuals (DPCM) or
wavelet coefficients (JPEG 2000) of each spectral band, thereby determining band-varying MAD/MSE values.
Comparisons with the uncompressed originals show that the average spectral angle mapper (SAM) is minimized
by constant distortion allocation. Conversely, the average BAO-SAM is minimized by the noise-adjusted variable
spectral distortion allocation according to the virtually lossless protocol. Preliminary results of simulations
performed on reflectance data obtained from compressed radiance data show that, for a given compression ratio,
the virtually-lossless approach minimizes both BAO-SAM and SAM; hence, discrimination of spectrally similar
materials, e.g. clays, is significantly facilitated.
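The two metrics above can be sketched in a few lines; the greedy add-on rule in `bao_sam` is our assumption about the band selection step, not the authors' exact decomposition:

```python
import numpy as np

def sam(x, y):
    """Spectral angle mapper: angle (radians) between two pixel spectra."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    if nx == 0.0 or ny == 0.0:
        return 0.0
    return float(np.arccos(np.clip(np.dot(x, y) / (nx * ny), -1.0, 1.0)))

def bao_sam(x, y):
    """Band add-on sketch: a band is added only when it increases the
    angle, which keeps the metric monotone as the number of bands grows."""
    chosen = [0]                      # seed with the first band (assumption)
    angle = sam(x[chosen], y[chosen])
    for b in range(1, len(x)):
        cand = sam(x[chosen + [b]], y[chosen + [b]])
        if cand > angle:
            chosen.append(b)
            angle = cand
    return angle
```

By construction, only the selected subset of bands contributes to the final angle, mirroring the monotonicity property noted above.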
In 2005, the Consultative Committee for Space Data Systems (CCSDS) approved a new Recommendation (CCSDS 122.0-B-1) for Image Data Compression. Our group has designed a new file syntax for the Recommendation. The proposal
consists of adding embedded headers. Such modification provides scalability by quality, spatial location, resolution and
component. The main advantages of our proposal are: 1) the definition of multiple types of progression order, which enhances
abilities in transmission scenarios, and 2) the support for the extraction and decoding of specific windows of interest
without needing to decode the complete code-stream. In this paper we evaluate the performance of our proposal. First, we
measure the impact of the embedded headers on the encoded stream. Second, we compare the compression performance of
our technique to JPEG 2000.
This paper discusses robust classification of hyperspectral images. Both methods for dimensionality reduction
and robust estimation of classifier parameters in full dimension are presented. A new approach to dimensionality
reduction that uses piecewise constant function approximation of the spectral curve is compared to conventional
dimensionality reduction methods like principal components, feature selection, and decision boundary
feature extraction. Computing robust estimates of the decision boundary in full dimension is an alternative to
dimensionality reduction. Two recently proposed techniques for covariance estimation based on the eigenvector
decomposition and the Cholesky decomposition are compared to Support Vector Machine classifiers, simple
regularized estimates, and regular quadratic classifiers. The experimental results on four different hyperspectral
data sets demonstrate the importance of using simple, sparse models. The sparse model using Cholesky decomposition
in full dimension performed slightly better than dimensionality reduction. However, if speed is an issue,
the piecewise constant function approximation method for dimensionality reduction could be used.
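The piecewise constant reduction lends itself to a very small sketch; uniform segment boundaries are an assumption here, as the paper may place them adaptively:

```python
import numpy as np

def piecewise_constant_features(spectrum, n_segments):
    """Reduce a spectral curve to per-segment means (hedged sketch)."""
    bounds = np.linspace(0, len(spectrum), n_segments + 1).astype(int)
    return np.array([spectrum[a:b].mean()
                     for a, b in zip(bounds[:-1], bounds[1:])])
```

Each pixel's spectrum is thus summarized by a handful of segment means, which is what makes the method fast.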
This paper investigates an ensemble framework proposed for accurate classification of hyperspectral
data. The usefulness of the method, designed to be a simple and robust supervised classification tool, is assessed
on real data characterized by classes with very similar spectral responses and a limited amount of ground-truth
labeled training samples. The method is inspired by the framework of the Random Forests method proposed
by Breiman (2001). The success of the method relies on the use of support vector machines (SVMs) as base
classifiers, the freedom of random selection of input features to create diversity in the ensemble, and the use of
the weighted majority voting scheme to combine classification results. Although not fully optimized, a simple
and feasible solution is adopted for tuning the SVM parameters of the base classifiers, with practical
applications in mind. Moreover, the effect of an additional pre-processing module for initial feature reduction is
investigated. Encouraging results suggest that the proposed method is promising, in addition to being easy to
implement.
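The weighted majority voting combiner at the heart of the ensemble can be sketched as follows; training the SVM base classifiers on random feature subsets is omitted, and the per-member weights are assumed to come from, e.g., validation accuracies:

```python
import numpy as np

def weighted_majority_vote(predictions, weights):
    """Combine ensemble outputs: predictions is an (n_members, n_samples)
    array of class labels, weights holds one weight per member."""
    predictions = np.asarray(predictions)
    weights = np.asarray(weights, float)
    labels = np.unique(predictions)
    # score[l, s] = total weight voting for label l on sample s
    score = np.stack([((predictions == lab) * weights[:, None]).sum(axis=0)
                      for lab in labels])
    return labels[score.argmax(axis=0)]
```

Diversity in the ensemble would come from each member seeing a different random feature subset, as in Random Forests.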
This work focuses on an assessment of quality parameters characterizing a hyperspectral image collected by a new-generation
high-resolution sensor named Hyper-SIMGA, which is a spectrometer operating in the push-broom
configuration. By resorting to Shannon's information theory, the concept of quality is related to the information
conveyed to a user by the hyperspectral data, which can be objectively defined from both the signal-to-noise ratio (SNR)
and the mutual information between the unknown noise-free digitized signal and the corresponding noise-affected
observed digital samples. The mutual information has been estimated by resorting to lossless compression of the
dataset. In fact, the bit rate achieved by the reversible compression process is a suitable
approximation of the decorrelated data entropy, which takes into account both the contribution of the "observation"
noise, i.e. information regarded as statistical uncertainty, whose relevance is null to a user, and the intrinsic information
of hypothetically noise-free samples. Noise estimation can be obtained once a suitable parametric model of the noise,
assumed to be possibly non-Gaussian, has been preliminarily determined. Noise amplitude has been assessed by means
of two independent estimators relying on two automatic procedures based on a scatterplot method and a bit-plane
algorithm. Noise autocorrelation has been taken into account on the three allowed directions of the available data-volume.
Results are reported and discussed employing a hyperspectral image (768 spectral bands) recorded by the new
Hyper-SIMGA imaging spectrometer.
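The compression-based entropy estimate can be illustrated with any reversible coder; zlib stands in here for the lossless compressor used in the paper:

```python
import zlib
import numpy as np

def bits_per_sample(data_uint8):
    """Lossless-compression bit rate as an operational estimate of the
    data entropy (upper bound); zlib is a stand-in for the paper's coder."""
    raw = data_uint8.tobytes()
    return 8.0 * len(zlib.compress(raw, 9)) / len(raw)
```

A noise-free, highly structured signal compresses to a low bit rate, while observation noise pushes the rate toward the raw word length, which is exactly the split the paper exploits.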
In this paper, we focus on different kinds of regularization for Linear Discriminant Analysis (LDA) in the
context of ill-posed remote sensing image classification problems. Several LDA-based classifiers are studied
theoretically and tested on various remote sensing datasets. In addition, we introduce an efficient version of
the standard regularized LDA recently presented in Ref. 1 to cope with high-dimensional small sample size
(ill-posed) problems. Experimental results demonstrate the suitability of the proposal.
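One common form of regularized LDA shrinks the within-class scatter toward a scaled identity; the sketch below is a generic two-class version under that assumption, not the specific estimator of Ref. 1:

```python
import numpy as np

def regularized_lda_direction(X0, X1, gamma=0.1):
    """Two-class LDA direction with ridge-style shrinkage of the
    within-class scatter (hedged sketch for ill-posed, small-sample cases)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    d = X0.shape[1]
    # shrink toward a scaled identity so Sw is invertible even when n < d
    Sw_reg = (1 - gamma) * Sw + gamma * (np.trace(Sw) / d) * np.eye(d)
    w = np.linalg.solve(Sw_reg, m1 - m0)
    return w / np.linalg.norm(w)
```

With gamma > 0 the regularized scatter is positive definite, so the direction is well defined even when the sample count is far below the band count.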
A hyperspectral image consists of a great number of spectral images. Although these spectral images are obtained
simultaneously, due to a number of optical effects the hyperspectral bands are not co-registered. This has been found to
be particularly prominent in images of regions near the absorption bands of atmospheric gases.
We have developed a method using subpixel image registration that enables us to identify the spatial misregistration
between spectral bands and correct it. We use one band as a reference band and match all other spectral images to this
band. We use the average row and column disparity to correct the spatial misregistration.
Besides the spatial correction, we also obtain a mapping of the atmospheric gas absorption features.
This is done by plotting the average disparities as a function of wavelength. Local peaks in this plot are clear evidence of
absorption features of the atmospheric gases. Our method is robust to the low SNR in the atmospheric absorption images
with low transmission. The correction results in a hyperspectral cube with all the bands spatially aligned. The algorithm
has been applied to the following hyperspectral imagers: AISA Hawk, AISA Eagle, AVIRIS and HYPERION.
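Subpixel registration of a band against the reference band is often done by phase correlation; the following sketch (our assumption about the matching step) estimates a translation with parabolic sub-pixel refinement:

```python
import numpy as np

def subpixel_shift(ref, img):
    """Phase correlation: the normalized cross-power spectrum peaks at the
    translation of img relative to ref; a parabolic fit around the peak
    adds sub-pixel precision. Hedged sketch; real pipelines also window
    the images to reduce edge effects."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    F /= np.abs(F) + 1e-12
    corr = np.abs(np.fft.ifft2(F))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = []
    for ax, p in enumerate(peak):
        n = corr.shape[ax]
        def val(k, idx=list(peak), ax=ax, n=n):
            j = idx.copy()
            j[ax] = k % n
            return corr[tuple(j)]
        c0, c1, c2 = val(p - 1), val(p), val(p + 1)
        denom = c0 - 2.0 * c1 + c2
        frac = 0.5 * (c0 - c2) / denom if denom != 0 else 0.0
        s = p + frac
        if s > n / 2:          # unwrap: large positive peaks mean negative shifts
            s -= n
        shifts.append(s)
    return tuple(shifts)
```

Averaging such per-band row and column disparities gives the correction described above.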
Neural networks have received much attention in the field of remote sensing. Topology identification remains however
one of the major difficulties in the efficient application of neural networks. Currently, topology determination is based
on trial and error, on heuristics that amalgamate past experience and on weight pruning algorithms. It is argued in this
paper that global search methods such as genetic algorithms can be deployed in discovering near optimal network
topologies. An example of multisource classification for land cover mapping is presented. The results indicate that the
global search paradigm is worth further exploration, especially as computing becomes more and more powerful.
Water resources in Northern Italy have dramatically diminished in the past 10 to 20 years, and recent phenomena connected to
climate change have further accentuated the trend. To match the observed and collected information with this
experience and find methodologies to improve the water management cycle in the Lombardy Region, University of Milan
Bicocca, Fondazione Lombardia per l'Ambiente and ARPA Lombardia are currently funding a project, named "Regional
Impact of Climatic Change in Lombardy Water Resources: Modelling and Applications" (RICLIC-WARM). In the
framework of this project, the analysis of the fraction of water available and provided to the whole regional network by the
snow cover of the Alps will be investigated by means of remotely sensed data. While there are already a number of
algorithms devoted to this task for data coming from various sensors in the visible and infrared regions, no
operative comparison and analysis of the advantages and drawbacks of using different data has been attempted.
This idea will pave the way for a fusion of the available information as well as a multi-source mapping procedure which
will be able to exploit successfully the huge quantity of data available for the past and the even larger amount that may be
accessed in the future. To this aim, a comparison on selected dates covering the whole 2000-2006 period was performed.
The classification of remote-sensing images based on multiple information sources offers a consistent method for the
automatic cartography of forest stands. However, fusion models reveal problems of combinatorial explosion due to the
calculation of the assignment functions. This article proposes an information-fusion approach that responds to the need
for updating the forest inventory, based on belief theory. It illustrates a solution that overcomes the problem of
combinatorial explosion that arises when evaluating evidence-mass functions over the events of the frame of
discernment. This solution is based on a refinement of the frame of discernment through the determination of all
focal elements (singleton or composite hypotheses with non-null masses). Thus, the combination of information-source
masses involves only the focal-element masses. In the approach proposed here, the notions of fuzzy logic and
possibility theory have been used for the calculation of masses and combinations between classes as an intermediary
phase in arriving at belief functions. The result of the application of our fusion approach revealed a significant
improvement in optimizing the calculation of mass evidence functions and thus achieving a satisfactory classification.
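Restricting Dempster's rule of combination to focal elements is what keeps the combination tractable; a minimal sketch with masses stored as dicts keyed by frozenset hypotheses (the hypothesis names below are illustrative):

```python
def combine(m1, m2):
    """Dempster's rule restricted to focal elements: each mass function is a
    dict mapping frozenset hypotheses to mass, so the double loop only
    visits elements with non-null mass (hedged sketch)."""
    raw, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict  # renormalize by the non-conflicting mass
    return {h: v / k for h, v in raw.items()}
```

The cost is quadratic in the number of focal elements rather than exponential in the size of the frame of discernment, which is the point of the refinement described above.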
The use of satellite imagery and specifically SAR data for vessel detection and identification has attracted
researchers during the last decade. The objective of this work is to provide a novel approach for ship identification based
mainly on polarimetric data, taking into consideration the different behaviour of a ship in various polarizations. For
this purpose, new measures and, accordingly, a new feature vector are proposed. The feature vector is employed to
create a vessel-signature database, and its efficiency is tested on ASAR data.
Multi-sensor fusion of data from maritime surveillance assets yields a consolidated surveillance picture that provides
a basis for downstream semi-automated anomaly-detection algorithms. The fusion approach that we pursue in this
paper leverages technology previously developed at NURC for undersea surveillance. We provide illustrations of the
potential of these techniques with data from recent at-sea experimentation.
In recent years, NOAA images have provided very useful information about ecosystems, climate, weather and
water from all over the world. In order to use NOAA images, they need to be transformed from image coordinate system
into map coordinate system. This paper proposes a method that corrects the errors caused by this transformation. First,
elevation values are read from the GTOPO30 database and examined to divide the data into flat and rough blocks. The
elevation errors of all blocks are then calculated based on the elevation values. After correcting elevation errors, residual
errors are specified by GCP template matching. On the flat blocks, residual errors are corrected by affine transformation;
on the rough blocks, residual errors are corrected by applying a Radial Basis Function transformation to the residual
errors of the blocks that match GCP templates. With this correction method, residual errors are corrected precisely and
the errors of the interpolation process are reduced. This method was applied to correct the errors for NOAA images
received in Tokyo, Bangkok and Ulaanbaatar. The results proved that this is a highly accurate geometric correction
method.
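Radial Basis Function interpolation of the GCP residual errors can be sketched as follows; the Gaussian kernel and its width are assumptions, as the abstract does not fix them:

```python
import numpy as np

def rbf_interpolate(points, values, queries, eps=1.0):
    """Interpolate scattered residual errors with Gaussian radial basis
    functions: solve for weights at the GCPs, then evaluate at queries."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = np.exp(-(eps * d) ** 2)          # kernel matrix between GCPs
    w = np.linalg.solve(A, values)       # weights reproducing the residuals
    dq = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=-1)
    return np.exp(-(eps * dq) ** 2) @ w
```

By construction the interpolant reproduces the measured residuals exactly at the GCP locations and varies smoothly between them, which suits rough terrain blocks.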
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration
Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a
Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which
gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands.
The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra,
which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the
GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures
can be subdivided into four blocks. In the first block, the measured raw interferograms are first corrected for
the detector nonlinearity distortion, followed by the complex filtering and decimation procedure. In the second
block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting
imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional
random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected
spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram
scans for both ambient and hot blackbody references. To continue with the calibration, we compute the spectral
responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody
(HBB), and scene spectra can be obtained. We can then estimate the noise equivalent spectral radiance (NESR)
from the calibrated ABB and HBB spectra. The correction schemes that compensate for the fore-optics offsets
and off-axis effects are also implemented. In the third block, we develop an efficient method of generating
pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel
performance evaluation. Finally, in the fourth block, the single pixel algorithms are applied to the entire FPA.
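The phase-correction step can be illustrated with a Mertz-style sketch (an assumption; the actual GIFTS Level 1B algorithm is more involved): estimate a slowly varying phase from a smoothed spectrum and rotate it out, leaving mostly noise in the imaginary part.

```python
import numpy as np

def phase_correct(spectrum, smooth=9):
    """Rotate each spectral sample by a low-resolution phase estimate so
    the signal concentrates in the real part (Mertz-style sketch)."""
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(spectrum, kernel, mode="same")
    return spectrum * np.exp(-1j * np.angle(smoothed))
```

Because the applied factor has unit magnitude, the operation preserves the spectral magnitude while redistributing signal and noise between the real and imaginary parts.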
GOSAT (Greenhouse Gases Observing Satellite) is a satellite that measures greenhouse gases from space with a Fourier
Transform Spectrometer (FTS). It measures spectra of Earth-reflected solar radiation with a high spectral resolution of
about 0.2 cm-1, covering four spectral bands in the 0.76, 1.6, 2.0 and 14 micron wavelength regions. In the first three
bands, two detectors measure the two components of polarization. The acquisition of one interferogram
takes 4 seconds or less, depending on the measurement mode. Since the satellite moves at high speed, an
image motion compensation (IMC) mirror continuously stares at the same position on the Earth's surface during this
period of acquisition. To stare at the same position, the mirror is controlled by two-axis motors. The staring position can
fluctuate slightly around the correct position, making the position of the instantaneous field of view (IFOV) vibrate with an
amplitude of a few hundred meters. Since the optical characteristics (such as the albedo) of the IFOV change
from place to place, the intensity of the IFOV can change due to this fluctuation. The intensity can also change
due to changes in the reflection angle, wind on the water surface, or other causes. During the acquisition period,
the optical path length and the Doppler shift caused by the satellite motion can also change. In this paper, we examine the
effects of some of these disturbances to the interferogram signals on the resultant spectra and the retrieval accuracy
of CO2, and discuss correction methods for the interferograms and spectra.
Information contained in fully polarimetric SAR data is plentiful. How to exploit the information to improve accuracy is
important in segmentation of fully polarimetric SAR images. Several frequently used feature vectors and methods are
investigated, and a novel method for segmenting multi-look fully polarimetric SAR images is proposed in this paper,
starting from the statistical characteristics of the data and the interaction between adjacent pixels. In order to fully use the statistical a
priori knowledge of the data and the spatial relation of neighboring pixels, the Wishart distribution of the covariance
matrix is integrated with a Markov random field, and the iterated conditional modes (ICM) algorithm is used to implement
maximum a posteriori estimation of pixel labels. Although ICM has good robustness and fast convergence, it is
easily affected by initial conditions, so Wishart-based ML classification is used to obtain the initial segmentation map, in order to
fully exploit the statistical a priori knowledge in the initial segmentation step. Using multi-look fully polarimetric
SAR images acquired by the NASA/JPL AIRSAR sensor, the new approach is compared with several other commonly
used ones; better segmentation performance and higher accuracy are observed.
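The Wishart-based ML rule assigns each pixel covariance to the class minimizing a log-likelihood distance; dropping constant terms gives the classic form:

```python
import numpy as np

def wishart_distance(C, sigma):
    """Wishart-based ML distance between a pixel covariance matrix C and a
    class-mean covariance sigma: ln|sigma| + tr(sigma^-1 C)."""
    _, logdet = np.linalg.slogdet(sigma)
    return logdet + np.trace(np.linalg.solve(sigma, C))

def wishart_classify(C, class_covs):
    """Assign the pixel to the class whose covariance minimizes the distance."""
    return min(range(len(class_covs)),
               key=lambda k: wishart_distance(C, class_covs[k]))
```

In the paper's scheme this per-pixel rule only provides the initial map; the Markov random field term then rewards label agreement between neighbors during ICM.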
This work deals with unsupervised change detection in bi-date Synthetic Aperture Radar (SAR) images. Whatever
the indicator of change used, e.g. log-ratio or Kullback-Leibler divergence, we have observed poor quality
change maps for some events when using the Hidden Markov Chain (HMC) model we focus on in this work.
The main reason comes from the stationarity assumption involved in this model (and in most Markovian models,
such as Hidden Markov Random Fields), which cannot be justified in most observed scenes: changed areas
are not necessarily stationary in the image. Besides the few non-stationary Markov models proposed in the
literature, the aim of this paper is to describe a pragmatic solution that tackles non-stationarity by using a sliding
window strategy. In this algorithm, the criterion image is scanned pixel by pixel, and a classical HMC model is
applied only on neighboring pixels. By moving the window through the image, the process is able to produce
a change map which can better exhibit non stationary changes than the classical HMC applied directly on the
whole criterion image. Special care is devoted to the estimation of the number of classes in each window, which
can vary from one (no change) to three (positive change, negative change and no change) by using the corrected
Akaike Information Criterion (AICc) suited to small samples. The quality assessment of the proposed approach
is achieved with speckle-simulated images in which simulated changes are introduced. The windowed strategy is
also evaluated with a pair of RADARSAT images bracketing the Nyiragongo volcano eruption event in January
2002. The available ground truth confirms the effectiveness of the proposed approach compared to a classical
HMC-based strategy.
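The small-sample criterion itself is compact; candidate models with one to three classes per window would be compared by their AICc scores:

```python
def aicc(log_lik, n_params, n_samples):
    """Corrected Akaike Information Criterion: AIC plus a small-sample
    penalty term 2k(k+1)/(n-k-1)."""
    aic = -2.0 * log_lik + 2.0 * n_params
    return aic + 2.0 * n_params * (n_params + 1) / (n_samples - n_params - 1)
```

The number of classes in a window is then the argmin of AICc over the fitted one-, two- and three-class models; the correction term matters precisely because each sliding window contains few samples.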
In this study, we propose an automatic approach for detecting clouds, cloud shadows and mist present on optical remote
sensing images such as SPOT/HRVIR ones. This detection is necessary so that their signal is not taken into account in land
studies from remote sensing data, such as land cover/land use classification and vegetation and soil moisture monitoring.
The adopted approach is based on Markov Random Field (MRF) modeling at two levels: pixel and object. The algorithm
is parameterized by six parameters that are rather robust, since their values were kept identical for the processing of 39
SPOT/HRVIR images corresponding to various acquisition conditions, seasons, and landscapes. Our method makes
use of three main cloud/shadow features:
- Clouds (or shadows) can be viewed as connected objects;
- Each cloud generates a shadow with similar shape and area;
- The direction of the relative position of a cloud and its shadow in the image is determined by acquisition conditions.
The first feature is modeled using an MRF on the pixel graph, and we show that the proposed model leads to the use of
hysteresis thresholding or region growing as far as local optimization is concerned. The last two features are
modeled using an MRF on the graph of cloud and shadow objects (detected in the previous step at pixel level), and we
show that the proposed model corresponds to the mutual validation of cloud and shadow detections.
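The hysteresis thresholding that the pixel-level model leads to can be sketched with two thresholds and iterative growth:

```python
import numpy as np

def hysteresis(img, low, high):
    """Two-threshold hysteresis: pixels above `high` seed regions that grow
    into 4-connected pixels above `low`. Simplified sketch: np.roll wraps
    at the image borders, which a production version would mask out."""
    weak = img >= low
    grown = img >= high
    while True:
        neigh = (np.roll(grown, 1, 0) | np.roll(grown, -1, 0) |
                 np.roll(grown, 1, 1) | np.roll(grown, -1, 1))
        new = grown | (neigh & weak)
        if np.array_equal(new, grown):
            return new
        grown = new
```

Isolated weak responses are discarded while weak pixels connected to a confident seed survive, which matches the connected-object feature above.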
A water wake detection method for aerial photographs is proposed, based on two-dimensional principal component analysis
(2DPCA) of the polar Fourier spectrum. The method improves on traditional principal component analysis by obtaining
the image direction from the Fourier power spectrum and transforming the Fourier spectrum to polar coordinates based on
that direction, so the polar Fourier spectrum is translation and rotation invariant. In contrast to the previous method
of partitioning the Fourier spectrum to obtain texture features, row 2DPCA, column 2DPCA and an improved
2DPCA are used to analyze the polar Fourier spectrum. Experimental results on 40 images show that the
proposed algorithm can extract the wake texture precisely.
There are two problems when associating multiple targets in remote sensing images. First, with low-temporal-resolution
observation, the target's kinematic state cannot be estimated accurately, and the classical Kalman-filtering
association algorithms are no longer applicable. Second, the classical image feature-based target matching algorithms
cannot deal with the ambiguity of multiple targets' correspondence, since they do not take into account the uncertainty of
feature extraction. To resolve the above problems, a novel multiple-target association method is proposed, based on Multi-scale
Autoconvolution (MSA) feature matching and global association cost optimization through a simulated annealing (SA)
algorithm. Experiments with remote sensing images show the applicability of the method for multiple-target
association.
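Global association cost optimization by simulated annealing over one-to-one pairings can be sketched as follows; the swap neighborhood and geometric cooling schedule are assumptions, not the paper's exact setup:

```python
import math
import random

def anneal_assignment(cost, T0=1.0, cooling=0.995, steps=5000, seed=0):
    """Minimize total association cost over one-to-one target pairings by
    simulated annealing on permutations (hedged sketch).
    cost[i][j] is the cost of pairing target i with candidate j."""
    rng = random.Random(seed)
    n = len(cost)
    perm = list(range(n))
    total = sum(cost[i][perm[i]] for i in range(n))
    best, best_total = perm[:], total
    T = T0
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        delta = (cost[i][perm[j]] + cost[j][perm[i]]
                 - cost[i][perm[i]] - cost[j][perm[j]])
        # accept downhill moves always, uphill moves with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / max(T, 1e-12)):
            perm[i], perm[j] = perm[j], perm[i]
            total += delta
            if total < best_total:
                best, best_total = perm[:], total
        T *= cooling
    return best, best_total
```

In the paper's setting, the pairwise costs would come from MSA feature dissimilarities, so the annealer trades off individual matches against the global assignment.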
"HYCLASS", a new hybrid classification method for remotely sensed multi-spectral images is proposed. This
method consists of two procedures, the textural edge detection and texture classification. In the textural edge
detection, the maximum likelihood classification (MLH) method is employed to find "the spectral edges", and
the morphological filtering is employed to process the spectral edges into "the textural edges" by sharpening the
opened curve parts of the spectral edges. In the texture classification, the supervised texture classification method
based on normalized Zernike moment vector that the authors have already proposed. Some experiments using a
simulated texture image and an actual airborne sensor image are conducted to evaluate the classification accuracy
of the HYCLASS. The experimental results show that the HYCLASS can provide reasonable classification results
in comparison with those by the conventional classification method.
The Landsat thermal band measures the emitted radiation of the Earth's surface. In many studies, the ETM+ thermal band,
with its 60-meter resolution, is excluded from processing and classification despite its valuable information content.
Two different Bayesian segmentation methods were used with different band combinations. Sequential
Maximum a Posteriori (SMAP) is a Bayesian image segmentation algorithm which, unlike traditional Maximum
Likelihood (ML) classification, attempts to improve accuracy by taking contextual information into account rather than
classifying pixels separately.
Landsat 7 ETM+ data (Path/Row 186-26), acquired 30 September 2000, were used. To study the role of the thermal band with these methods, two data sets, with and without the thermal band, were used. Nine band combinations including ETM+ and Principal Component (PC) data were selected based on the highest values of the Optimum Index Factor (OIF). Using visual and digital analysis, field observation data, and auxiliary map data such as CORINE land cover, 14 land cover classes were identified. Spectral signatures were derived for every land cover class. Spectral signatures as well as feature space analysis were used for a detailed analysis of the efficiency of the reflective and thermal bands.
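The OIF used for this band selection is a simple statistic: the sum of the band standard deviations divided by the sum of the absolute pairwise correlations, so combinations with high variance and low redundancy score highest. A sketch with synthetic bands (the data here are illustrative, not the ETM+ bands):

```python
import numpy as np

def oif(bands):
    """Optimum Index Factor: sum of band standard deviations over the
    sum of absolute pairwise correlation coefficients. Higher OIF
    means more variance and less redundancy in the combination."""
    b = np.asarray(bands, dtype=float).reshape(len(bands), -1)
    stds = b.std(axis=1, ddof=1)
    corr = np.corrcoef(b)
    pairs = np.triu_indices(len(bands), k=1)
    return stds.sum() / np.abs(corr[pairs]).sum()

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
redundant = [a, a + 0.05 * rng.normal(size=1000), a + 0.05 * rng.normal(size=1000)]
independent = [a, rng.normal(size=1000), rng.normal(size=1000)]
print(oif(independent) > oif(redundant))  # True: less redundancy, higher OIF
```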
The results show that SMAP, as the superior method, improves Kappa values over the ML algorithm for all band combinations by 17% on average. Using all 7 bands, the SMAP and ML classification algorithms achieved their highest Kappa accuracies of 80.37% and 64.36%, respectively. Eliminating the thermal band decreased the Kappa values by about 8% for both algorithms. The band combination including PC1, 2, 3, and 4 (PCA calculated for all 7 bands) produced the same Kappa as bands 3, 4, 5, and 6. The Kappa value for the band combination 3, 4, 5, and 6 was also about 4% higher than for the 6 bands without the thermal band, for both algorithms.
Contextual classification algorithms such as SMAP can significantly improve classification results. The thermal band carries information complementary to the other spectral bands and, despite its lower spatial resolution, improves classification accuracy.
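The Kappa values reported above follow from the classification confusion matrix, correcting overall agreement for the agreement expected by chance. A minimal computation, using a hypothetical two-class matrix for illustration:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    observed = np.trace(c) / n                                 # overall agreement
    expected = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2  # chance agreement
    return (observed - expected) / (1.0 - expected)

# Hypothetical two-class confusion matrix, for illustration only
cm = [[45, 5],
      [10, 40]]
print(round(cohens_kappa(cm), 2))  # 0.7
```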
During an 11-day mission in February 2000, the Shuttle Radar Topography Mission (SRTM) collected data over 80% of the Earth's land surface, covering all areas between 60 degrees N and 56 degrees S latitude. Since SRTM data became available, many studies have utilized them for topographic and morphometric landscape analysis. Exploiting SRTM data for the recognition and extraction of topographic features is a challenging task and can provide useful information for landscape studies at different scales.
In this study the 3 arc-second SRTM digital elevation model was projected onto a UTM grid with 90-meter spacing for a mountainous terrain at the Polish-Ukrainian border. Terrain (morphometric) parameters such as slope, maximum curvature, minimum curvature, and cross-sectional curvature were derived by fitting a bivariate quadratic surface within a 5×5 window, corresponding to 450 meters on the ground. These morphometric parameters are strongly related to topographic features and geomorphological processes, and they allow topographic features to be enumerated in a way meaningful for landscape analysis.
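Slope and the curvature measures follow from the coefficients of the local quadratic fit; here is a least-squares sketch. The 5×5 window and 90 m cell spacing come from the text; the synthetic tilted-plane test data are illustrative.

```python
import numpy as np

def morphometric_params(window, cell=90.0):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    to a square elevation window; returns the slope in degrees and
    the quadratic coefficients from which the curvature measures
    are derived."""
    n = window.shape[0]
    coords = (np.arange(n) - n // 2) * cell
    X, Y = np.meshgrid(coords, coords)
    A = np.column_stack([X.ravel() ** 2, Y.ravel() ** 2, (X * Y).ravel(),
                         X.ravel(), Y.ravel(), np.ones(n * n)])
    a, b, c, d, e, f = np.linalg.lstsq(A, window.ravel(), rcond=None)[0]
    slope = np.degrees(np.arctan(np.hypot(d, e)))  # gradient magnitude -> degrees
    return slope, (a, b, c)

# Synthetic tilted plane with a 10% grade: recovered slope ~5.71 degrees
coords = (np.arange(5) - 2) * 90.0
X, Y = np.meshgrid(coords, coords)
slope, _ = morphometric_params(0.1 * X)
print(round(slope, 2))  # 5.71
```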
The Kohonen Self-Organizing Map (SOM), an unsupervised neural network algorithm, was used to classify these morphometric parameters into 10 classes representing landform elements such as ridge, channel, crest line, planar area, and valley bottom. These classes were analyzed and interpreted based on spectral signatures, feature space, and 3D presentations of the area. Texture content was enhanced by separating the 10 classes into individual maps and applying occurrence filters with a 9×9 window to each map. This procedure produced 10 new inputs to the SOM. The SOM was trained again, yielding a map with four dominant landforms: mountains with steep slopes, plane areas with gentle slopes, dissected ridges and lower valleys with moderate to very steep slopes, and main valleys with gentle to moderate slopes. Both landform maps were evaluated by superimposing contour lines.
The results showed that the Self-Organizing Map is a very promising and efficient tool for such studies, with very good agreement between the identified landforms and the contour lines. This new procedure is encouraging and offers new possibilities for studying both types of terrain features: general landforms and landform elements.
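The unsupervised step can be sketched with a minimal 1-D Kohonen SOM. The two-pass, 10-class workflow of the study is not reproduced; the cluster data and node count here are illustrative.

```python
import numpy as np

def train_som(data, n_nodes=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 1-D Kohonen SOM: each node holds a prototype vector;
    the best-matching unit (BMU) and its neighbors on the node line
    move toward every presented sample, with learning rate and
    neighborhood width decaying over the epochs."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_nodes, replace=False)].astype(float).copy()
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))
            dist = np.arange(n_nodes) - bmu
            h = np.exp(-dist ** 2 / (2.0 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
    return w

# Two well-separated synthetic clusters standing in for morphometric classes
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(5.0, 0.1, (50, 2))])
w = train_som(data, n_nodes=4)
# mean squared distance from each sample to its best-matching node
qerr = np.mean(np.min(((data[:, None, :] - w[None, :, :]) ** 2).sum(-1), axis=1))
print(qerr < 1.0)  # prototypes have settled near the clusters
```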
Multichannel (multi- and hyperspectral, dual- and multi-polarization, multitemporal) remote sensing (RS) is widely used in many applications. Noise is one of the basic factors that degrade RS data quality and prevent the retrieval of useful information. Because of this, image pre-filtering is a typical stage of multichannel RS data pre-processing. The most efficient modern filters and other image processing techniques employ a priori information on the noise type and its statistical characteristics, such as variance. Thus, there is an obvious need for automatic (blind) techniques for determining the noise type and its characteristics. Although several such techniques have already been developed, not all of them perform adequately when the considered images contain a large percentage of textured regions and other locally active areas. Recently we designed a method for blind determination of noise variance based on the minimal inter-quantile distance. However, it turned out that its accuracy could be further improved. In this paper we describe and analyze several ways to do this. One option is a better approximation of the inter-quantile distance curve. Another concerns the use of image pre-segmentation before forming the initial set of local estimates of noise variance. Both approaches are studied on model data and test images. Numerical simulation results confirm the improved estimation accuracy of the proposed approach.
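The first stage, forming local variance estimates and extracting a robust scalar from them, can be sketched as follows. This uses a simplified quantile-based stand-in, not the paper's minimal inter-quantile-distance method, and the block size and quantile are illustrative.

```python
import numpy as np

def blind_noise_variance(img, block=8, q=0.2):
    """Blind noise-variance estimate: compute sample variances over
    non-overlapping blocks, then take a low quantile so that blocks
    dominated by texture (inflated variance) are suppressed."""
    h, w = img.shape
    local = [img[i:i + block, j:j + block].var(ddof=1)
             for i in range(0, h - block + 1, block)
             for j in range(0, w - block + 1, block)]
    return float(np.quantile(local, q))

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 40.0 * rng.random((64, 32))          # textured half of the image
noisy = clean + rng.normal(0.0, 5.0, clean.shape)    # true noise variance is 25
print(blind_noise_variance(noisy))                   # close to 25 despite the texture
```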
In this paper, we consider the use of circular moments for the invariant classification of images blurred by motion. The test images used here were acquired while the objects were vibrating at different frequencies. A comparative analysis using Zernike and Wavelet-Fourier moment sets is presented. An intensity normalization of the input images is performed to homogenize them, compensating for the inhomogeneous illumination introduced during acquisition. The classification method is tested using images of objects that have intrinsically small differences between them. Experimental results show that the proposed classification method based on Zernike and Wavelet-Fourier moments is well suited to grading motion-smeared images of objects under high-frequency vibrations.
This paper is part of the development of software that predicts the surface-emitted radiance of ground objects by considering solar irradiation and atmospheric convection. The radiance emitted from a surface can be calculated using the temperature and optical characteristics of the surface together with the spectral atmospheric transmittance. Thermal modeling is essential for identifying objects in scenes obtained from satellites, and the temperature distribution over an object is used to obtain its infrared image in contrast to the background. We considered the composite heat transfer modes, including conduction, convection, and spectral solar radiation, for objects within a scene to calculate the surface temperature distribution. The software developed in this study models the thermal energy balance to obtain the temperature distribution over the object by considering the direct and diffuse solar irradiances and by treating conduction within the object as one-dimensional heat transfer into the depth. LOWTRAN7 is used to model the spectral solar radiation, including the direct and diffuse solar energy components. The object considered is assumed to consist of several materials with different properties, such as conductivity, absorptivity, density, and specific heat. The resulting spectral radiances in the MWIR region arriving at the sensor are shown to be strongly dependent on the spectral surface properties of the objects.
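The one-dimensional conduction model with a solar-heated, convectively cooled surface can be sketched with an explicit finite-difference scheme. All material values and the constant solar flux below are illustrative, and the spectral LOWTRAN7 part is omitted.

```python
import numpy as np

def surface_temperature(q_solar, h_conv, t_air, k=1.0, rho_c=2.0e6,
                        depth=0.5, nz=25, dt=60.0, steps=1440):
    """Explicit finite-difference solution of 1-D conduction into the
    ground, with a surface node driven by absorbed solar flux,
    convective exchange with the air, and conduction downward; the
    bottom node is held at the air temperature. Returns the surface
    temperature in kelvin after `steps` time steps."""
    dz = depth / (nz - 1)
    alpha = k / rho_c                       # thermal diffusivity
    assert alpha * dt / dz ** 2 < 0.5       # explicit stability limit
    T = np.full(nz, float(t_air))
    for _ in range(steps):
        Tn = T.copy()
        # interior nodes: plain conduction
        Tn[1:-1] += alpha * dt / dz ** 2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        # surface node: solar gain minus convection minus conduction into node 1
        flux = q_solar - h_conv * (T[0] - t_air) - k * (T[0] - T[1]) / dz
        Tn[0] += dt * flux / (rho_c * dz)
        T = Tn                              # bottom node stays at t_air
    return float(T[0])

# A sunlit surface ends up warmer than the ambient air
print(surface_temperature(q_solar=500.0, h_conv=10.0, t_air=290.0) > 290.0)  # True
```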
Change detection and analysis, as a remote sensing application, is based on multi-temporal and/or multi-sensor approaches. However, the accuracy of such change detection activities can be limited by several factors. A key variable that may reduce the accuracy of change detection is the misregistration error between the images used. By accounting for the spatial variation in geometric and misregistration errors, there is the potential to reduce their effects during change detection, increasing the accuracy of land cover change mapping. The effect of misregistration on land cover mapping and change detection could be more accurately predicted, and ultimately removed, if this spatial variation in error were modeled. In this study, the effect of misregistration on ASTER-derived land cover types was estimated. The proposed methodology is based on comparing the regression correlation coefficients between two images derived either from a single band or from two bands. To check the level of correlation, the geometric position of a single band, or of two different bands at the same or different resolutions, was modified. To obtain this artificial degradation, a translation in three directions, along the x axis, along the y axis, and along both, was applied to one image compared with itself or with another. The study was performed for different scales, different land cover types, and different levels of complexity to evaluate the most influential factors. This approach allowed quantification of inappropriate image georeferencing, as well as quantitative estimation of the size of the distortion in the final results when comparing images from different dates and different sensors.
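The core probe, shifting one image against another along x and/or y and tracking the correlation coefficient over the overlap, can be sketched as follows. The synthetic band and the sign conventions for dx and dy are implementation choices, not taken from the paper.

```python
import numpy as np

def shift_correlation(img, dx, dy):
    """Pearson correlation between an image and a copy of itself
    shifted by (dx, dy) pixels, evaluated over the overlapping
    region only."""
    h, w = img.shape
    a = img[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
    b = img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Smooth synthetic band: correlation is 1 at zero shift and decays with shift
x = np.linspace(0.0, 4.0 * np.pi, 128)
band = np.sin(x)[None, :] + np.cos(x)[:, None]
print(round(shift_correlation(band, 0, 0), 6))                        # 1.0
print(shift_correlation(band, 3, 0) < shift_correlation(band, 1, 0))  # True
```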