This PDF file contains the front matter associated with SPIE
Proceedings Volume 7455, including the Title Page, Copyright
information, Table of Contents, Introduction (if any), and the
Conference Committee listing.
This paper investigates a new concept, called Hyperspectral Information Compression (HIC), as opposed to the Hyperspectral Data Compression (HDC) commonly used in the literature. A key feature that differentiates HIC from HDC is its focus on compressing information rather than data size. Such information compression is completely determined by a specific application in data exploitation. Accordingly, HIC can also be referred to as exploitation-based compression. In order to substantiate the utility of HIC, experiments are conducted to demonstrate that HIC is indeed more effective than traditional HDC.
For optimum compression performance on multispectral and hyperspectral images, algorithms must exploit both the spectral and the spatial correlation of the data. Different approaches are possible: a spectral decorrelation preprocessing stage followed by an image compressor applied to the decorrelated bands, or an integrated solution dealing with the three dimensions simultaneously. For example, Part II of the JPEG2000 standard introduces a multi-component transform capability applied prior to the Part I spatial wavelet decomposition and coding. This article proposes to use the CCSDS Image Data Compression Recommendation together with a spectral transform to perform multicomponent compression. Depending on the number of spectral bands, an efficient spectral transform, such as the DCT, is applied first and the CCSDS algorithm then encodes each decorrelated band. We compare the performance of such a scheme with JPEG2000 and also with a comparable scheme using a very simple decorrelation stage. Thanks to bit-plane coding of blocks of wavelet coefficients, the CCSDS encoder is a good tool for controlling the quality or the rate of these transformed bands. We present results for different types of sensors, both multispectral and hyperspectral. This work is part of the CNES contribution to the new CCSDS Multispectral and Hyperspectral Data Compression Working Group.
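Below is a minimal sketch of the decorrelation stage described above, assuming a 1-D DCT along the spectral axis of a (bands, rows, cols) cube; the CCSDS 122.0 coder that would then encode each decorrelated band is stood in for by a simple uniform quantizer, and the data and step size are hypothetical.

```python
import numpy as np
from scipy.fft import dct, idct

def spectral_dct_decorrelate(cube):
    """Apply a 1-D DCT along the spectral axis of a (bands, rows, cols) cube."""
    return dct(cube, axis=0, norm='ortho')

def spectral_dct_recombine(coeffs):
    """Inverse of spectral_dct_decorrelate."""
    return idct(coeffs, axis=0, norm='ortho')

# Toy example: each decorrelated "band" would then be fed to a 2-D coder
# (the CCSDS 122.0 Recommendation in the paper; a uniform quantizer here).
cube = np.random.rand(32, 64, 64).astype(np.float32)   # hypothetical data
coeffs = spectral_dct_decorrelate(cube)
step = 0.01                                             # assumed quantization step
quantized = np.round(coeffs / step)                     # stand-in for per-band coding
reconstructed = spectral_dct_recombine(quantized * step)
print("max abs error:", np.abs(cube - reconstructed).max())
```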
Hyperspectral images have weak spatial correlation and strong spectral correlation; to exploit the spectral redundancy sufficiently, the data must be pre-processed. In this paper, a new algorithm for lossless compression of hyperspectral images based on adaptive band regrouping is proposed. First, the affinity propagation (AP) clustering algorithm is chosen for band regrouping according to inter-band correlation. Then a linear prediction algorithm based on context prediction is applied to the hyperspectral images in the different groups. Finally, experimental results show that the proposed algorithm achieves a performance gain of 1.12 bpp over the conventional algorithm.
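As an illustration of the band-regrouping step, the sketch below clusters bands with affinity propagation using the inter-band correlation matrix as the similarity measure; the preference setting, the toy cube, and the resulting group structure are assumptions, and the context-based linear predictor that follows in the paper is not reproduced.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def regroup_bands(cube):
    """Group spectral bands by inter-band correlation using affinity propagation.

    cube: ndarray of shape (bands, rows, cols).
    Returns a dict mapping group label -> list of band indices.
    """
    bands = cube.reshape(cube.shape[0], -1)
    similarity = np.corrcoef(bands)            # inter-band correlation as similarity
    # Low preference favors fewer, larger groups (a design choice, not the paper's).
    ap = AffinityPropagation(affinity='precomputed',
                             preference=similarity.min(), random_state=0)
    labels = ap.fit_predict(similarity)
    groups = {}
    for band, label in enumerate(labels):
        groups.setdefault(int(label), []).append(band)
    return groups

# Toy usage: two artificial groups of highly correlated bands.
rng = np.random.default_rng(0)
base1, base2 = rng.random((16, 16)), rng.random((16, 16))
cube = np.stack([base1 + 0.05 * rng.random((16, 16)) for _ in range(10)]
                + [base2 + 0.05 * rng.random((16, 16)) for _ in range(10)])
print(regroup_bands(cube))
```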
More than a decade has passed since the Consultative Committee for Space Data Systems (CCSDS) made its recommendation for lossless data compression. The CCSDS standard is commonly used for scientific missions because it is a general-purpose lossless compression technique with a low computational cost that yields acceptable compression ratios. At the core of this compression algorithm is the Rice coding method. Its performance rapidly degrades in the presence of noise and outliers, as the Rice coder is conceived for noiseless data following geometric distributions. To overcome this problem we present here a new coder, the so-called Prediction Error Coder (PEC), as well as its fully adaptive version (FAPEC), which we show to be a reliable alternative to the CCSDS standard. We show that PEC and FAPEC achieve large compression ratios even when high levels of noise are present in the data. This is done by testing our compressors on synthetic and real data, and comparing the compression ratios and processor requirements with those obtained using the CCSDS standard.
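The abstract does not disclose the internals of PEC or FAPEC, so the sketch below only illustrates the Golomb-Rice coding of zigzag-mapped prediction errors that underlies the CCSDS recommendation, and why a single outlier inflates the unary part of the code.

```python
def zigzag(e):
    """Map a signed prediction error to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(errors, k):
    """Golomb-Rice code each mapped error: unary quotient + k-bit remainder."""
    bits = []
    for e in errors:
        v = zigzag(e)
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append('1' * q + '0' + format(r, f'0{k}b'))
    return ''.join(bits)

# Well-behaved (geometric-like) errors code compactly, while one outlier
# inflates the unary part of its codeword dramatically.
print(len(rice_encode([0, 1, -1, 2, 0, -2], k=1)))     # short code
print(len(rice_encode([0, 1, -1, 2000, 0, -2], k=1)))  # outlier blows up the length
```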
In this paper, we propose a bit allocation method for 2D compression of hyperspectral images to enhance classification performance. First, we select a number of classes from the original hyperspectral images; the classes can be selected automatically by applying an unsupervised segmentation method. Then, we apply a feature extraction method and determine the dominant discriminant feature vectors. By examining the feature vectors, we determine the discriminant usefulness of each spectral band. Finally, based on the discriminant usefulness of the spectral bands, we determine the bit allocation for each spectral band. Experimental results show that it is possible to enhance the discriminant information at the expense of PSNR. Depending on the application, one can either minimize the mean squared error or choose to preserve the classification capability of the hyperspectral images.
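The exact allocation rule is not given in the abstract; the sketch below assumes a simple proportional rule in which each band receives a share of a total bit budget according to a per-band discriminant score, clipped to a minimum and maximum bit depth.

```python
import numpy as np

def allocate_bits(scores, total_bits, min_bits=1, max_bits=12):
    """Distribute a bit budget across spectral bands in proportion to their
    discriminant usefulness (hypothetical rule; the paper's exact criterion
    is not reproduced here)."""
    scores = np.asarray(scores, dtype=float)
    weights = scores / scores.sum()
    bits = np.clip(np.round(weights * total_bits), min_bits, max_bits).astype(int)
    return bits

# Example: bands with higher discriminant scores receive more bits.
scores = [0.5, 2.0, 0.1, 1.4]
print(allocate_bits(scores, total_bits=24))
```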
This paper develops a new concept, called Progressive Dimensionality Reduction (PDR), which performs dimensionality reduction progressively in terms of information preservation. Two procedures can be designed to perform PDR in a forward or backward manner, referred to as forward PDR (FPDR) and backward PDR (BPDR) respectively, where FPDR starts with a minimum number of spectrally transformed dimensions and increases the number of dimensions progressively, whereas BPDR begins with a maximum number of spectrally transformed dimensions and decreases it progressively. Both procedures terminate when a stopping rule is satisfied. In order to carry out DR in a progressive manner, DR must be prioritized in accordance with the significance of information, so that the information retained after DR is either increased progressively by FPDR or decreased progressively by BPDR. To accomplish this task, Projection Pursuit (PP)-based DR techniques are further developed, where the Projection Index (PI) designed to find a direction of interestingness is used to prioritize the directions of the Projection Index Components (PICs), so that DR can be performed by retaining the PICs with high priorities via FPDR or BPDR. In the context of PDR, two well-known component analysis techniques, Principal Component Analysis (PCA) and Independent Component Analysis (ICA), can be considered as special cases when they are used for DR.
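As a concrete special case, the sketch below runs FPDR with PCA as the projection-index ranking: components are prioritized by variance and added one at a time until a stopping rule, assumed here to be a cumulative-explained-variance threshold, is satisfied.

```python
import numpy as np

def forward_pdr_pca(pixels, preserve=0.99):
    """Forward PDR with PCA as the projection-index ranking.

    pixels: (n_samples, n_bands) matrix of spectra.
    Adds principal components one at a time until the cumulative explained
    variance reaches the `preserve` threshold (the stopping rule assumed here).
    """
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # prioritize components by variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratio = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(ratio, preserve) + 1)
    return centered @ eigvecs[:, :k], k

# Usage: 1000 random spectra with 50 bands.
reduced, k = forward_pdr_pca(np.random.rand(1000, 50), preserve=0.95)
print("retained dimensions:", k)
```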
Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both spectral and spatial information (useful for incorporating contextual information in the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial-spectral endmember extraction (SSEE) techniques. Experiments are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada, with detailed ground-truth information available from the U.S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.
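A common way to quantify the effect of compression on extracted endmembers is the spectral angle between each endmember obtained from the original cube and its counterpart obtained from the compressed cube; the sketch below shows that comparison (the specific quality measures used by the authors are not reproduced).

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two endmember signatures."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def compare_endmembers(original, compressed):
    """Angle between each original endmember and its counterpart extracted
    from the lossy-compressed cube (assumes matching order)."""
    return [spectral_angle(o, c) for o, c in zip(original, compressed)]

# Toy usage with two 5-band endmembers.
print(compare_endmembers([[1, 2, 3, 4, 5], [5, 4, 3, 2, 1]],
                         [[1.1, 2.0, 2.9, 4.1, 5.0], [5, 4, 3, 2, 1]]))
```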
FPGA- or GPU-based Data Compression: Joint Session with Conference 7458
The amount of data generated by hyper- and ultraspectral imagers is so large that considerable savings in data storage and transmission bandwidth can be achieved using data compression. Because of the large amount of data, compression time is also important. The increasing programmability of commodity Graphics Processing Units (GPUs) allows them to be used for General-Purpose computation on Graphics Processing Units (GPGPU). GPUs offer the potential for a considerable increase in computation speed in applications that are data parallel. Data-parallel computation on image data executes the same program on many image pixels in parallel. We have implemented a spectral image data compression method called Linear Prediction with Constant Coefficients (LP-CC) using Nvidia's CUDA parallel computing architecture. CUDA is a parallel programming architecture designed for data-parallel computation. CUDA hides the GPU hardware from developers and does not require programmers to explicitly manage threads, which simplifies the programming model. Our GPU implementation is experimentally compared to a native CPU implementation; the speed-up factor was over 30 compared to a single-threaded CPU version.
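The LP-CC coefficients and the CUDA kernels are not given in the abstract; the sketch below only illustrates the data-parallel character of constant-coefficient spectral prediction in vectorized NumPy, where every pixel is processed independently, which is what maps well onto a GPU.

```python
import numpy as np

def lpcc_residuals(cube, coeff=1.0):
    """Predict band b from band b-1 with a constant coefficient and return
    the prediction residuals; every pixel is processed independently, which
    makes the method a good fit for GPU data parallelism."""
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]                        # first band stored as-is
    residuals[1:] = cube[1:] - coeff * cube[:-1]  # same operation on all pixels
    return residuals

def lpcc_reconstruct(residuals, coeff=1.0):
    cube = np.empty_like(residuals)
    cube[0] = residuals[0]
    for b in range(1, residuals.shape[0]):
        cube[b] = residuals[b] + coeff * cube[b - 1]
    return cube

cube = np.random.rand(8, 32, 32)
assert np.allclose(lpcc_reconstruct(lpcc_residuals(cube)), cube)
```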
Wavelet image compression provides high compression ratios without producing noticeable visible artifacts. However,
wavelet image compression can be expensive in terms of memory usage and computational complexity. This paper
describes a low-cost algorithm suitable for implementation in Field-Programmable Gate Arrays (FPGAs).
In this paper, we propose a lifting architecture based on a basic lifting unit whose structure performs lifting operations in a repetitive way. By analyzing the computational processes of lifting in detail, a reusable Basic Lifting Element (BLE) is presented. The BLE structure is designed and optimized from the viewpoint of hardware implementation, and the proposed lifting processor is built by arranging BLEs repeatedly. Experimental results show that the proposed architecture can transform tiles of any size with the 9/7 filter and the 5/3 filter for lossy and lossless compression, respectively. The lifting processor is designed in Verilog HDL and synthesized into a Xilinx FPGA, where it can run at up to 130 MHz.
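For reference, the predict/update arithmetic that a basic lifting element would implement for the lossless 5/3 filter is shown below as a one-level, 1-D integer lifting step with an assumed symmetric boundary extension; the 9/7 case adds two more lifting steps and a scaling stage.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the integer 5/3 (LeGall) lifting transform on a 1-D signal
    of even length, using symmetric extension at the borders."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd - floor((left_even + right_even) / 2)
    right = np.append(even[1:], even[-1])
    d = odd - ((even + right) >> 1)
    # Update step: approx = even + floor((left_d + d + 2) / 4)
    left = np.insert(d[:-1], 0, d[0])
    s = even + ((left + d + 2) >> 2)
    return s, d

def lift_53_inverse(s, d):
    left = np.insert(d[:-1], 0, d[0])
    even = s - ((left + d + 2) >> 2)
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.randint(0, 256, size=16)
s, d = lift_53_forward(x)
assert np.array_equal(lift_53_inverse(s, d), x)   # perfect (lossless) reconstruction
```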
In this paper, we develop a computationally efficient approach for lossy compression of remotely sensed hyperspectral images that has been specifically tuned to preserve the relevant information required in spectral mixture analysis (SMA) applications. The proposed method is based on two steps: 1) endmember extraction, and 2) linear spectral unmixing. Two endmember extraction algorithms, the pixel purity index (PPI) and the automatic morphological endmember extraction (AMEE), and a fully constrained linear spectral unmixing (FCLSU) algorithm have been considered in this work to devise the proposed lossy compression strategy. The proposed methodology has been implemented on NVIDIA graphics processing units (GPUs). Our experiments demonstrate that it can achieve very high compression ratios when applied to standard hyperspectral data sets, and can also retain the relevant information required for spectral unmixing in a computationally efficient way, achieving speedups on the order of 26 on an NVIDIA GeForce 8800 GTX graphics card when compared to an optimized implementation of the same code on a dual-core CPU.
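A minimal sketch of the unmixing-based compression idea is given below: each pixel is replaced by its abundance vector with respect to a small set of endmembers, so the stored payload is the endmember matrix plus the abundance maps. Plain non-negative least squares (scipy.optimize.nnls) stands in for FCLSU, which additionally enforces the abundances to sum to one, and endmember extraction (PPI/AMEE) is treated as given.

```python
import numpy as np
from scipy.optimize import nnls

def compress_by_unmixing(cube, endmembers):
    """Represent each pixel by its abundance vector with respect to a small set
    of endmembers: a (bands, rows, cols) cube becomes (p, rows, cols) abundances.
    Plain non-negative least squares is used here; the paper's FCLSU also
    enforces the abundances to sum to one."""
    bands, rows, cols = cube.shape
    pixels = cube.reshape(bands, -1)
    E = np.asarray(endmembers, dtype=float)       # shape (bands, p)
    abundances = np.stack([nnls(E, pixels[:, i])[0] for i in range(pixels.shape[1])],
                          axis=1)
    return abundances.reshape(E.shape[1], rows, cols)

def reconstruct(endmembers, abundances):
    E = np.asarray(endmembers, dtype=float)
    p, rows, cols = abundances.shape
    return (E @ abundances.reshape(p, -1)).reshape(E.shape[0], rows, cols)

# Toy usage: 3 endmembers, 10 bands; the stored payload (E plus abundances)
# is far smaller than the original cube when p << bands.
E = np.random.rand(10, 3)
cube = reconstruct(E, np.random.rand(3, 8, 8))
a = compress_by_unmixing(cube, E)
print(np.abs(reconstruct(E, a) - cube).max())
```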
Principal component analysis (PCA) is the most efficient spectral decorrelation approach for hyperspectral imagery. In conjunction with JPEG2000 for optimal bit allocation and spatial coding, the resulting PCA+JPEG2000 can yield superior rate-distortion performance and subsequent data analysis performance. However, the overhead bits consumed by the large transformation matrix may affect performance at low bit rates, particularly when the image spatial size is relatively small compared to the spectral dimension. In this paper, we propose to apply segmented principal component analysis (SPCA) to mitigate this effect. The resulting SPCA+JPEG2000 may improve the compression performance even when PCA+JPEG2000 is applicable.
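The grouping criterion used by SPCA is not detailed in the abstract; the sketch below simply partitions the bands into contiguous segments and applies a separate PCA to each, which conveys the key point that several small transform matrices carry far fewer overhead bits than one full-size matrix.

```python
import numpy as np

def segmented_pca(pixels, segments, keep):
    """Apply a separate PCA to each contiguous group of bands.

    pixels:   (n_samples, n_bands) matrix.
    segments: list of band-index slices, e.g. [slice(0, 60), slice(60, 120)].
    keep:     components retained per segment.
    Returns the concatenated component scores and the per-segment bases.
    """
    scores, bases = [], []
    for seg in segments:
        block = pixels[:, seg]
        block = block - block.mean(axis=0)
        _, _, vt = np.linalg.svd(block, full_matrices=False)
        bases.append(vt[:keep])                  # small per-segment transform matrix
        scores.append(block @ vt[:keep].T)
    return np.hstack(scores), bases

# Usage: 120 bands split into two segments, 4 components kept in each.
data = np.random.rand(500, 120)
scores, bases = segmented_pca(data, [slice(0, 60), slice(60, 120)], keep=4)
print(scores.shape, [b.shape for b in bases])
```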
Hyperspectral images used in remote sensing can reach hundreds of megabytes due to the large number of components and the high spatial and bit-depth resolution. When these images have to be transmitted, interactive transmission is necessary to deliver only those portions of the image that the client has requested. In such a scenario, compression is a useful tool to reduce the required amount of network bandwidth.
JPEG2000 is a powerful image and video coding standard that, among other features, provides scalability by spatial location, component, quality, and resolution. This is used by the JPEG2000 Interactive Protocol (JPIP) to enable the interactive transmission of imagery. One of the most important aspects of a JPIP server is the rate-control algorithm used to select the portions of the compressed code-stream that will be delivered to the client.
The purpose of this research is to introduce new rate-control methods for the JPIP server aimed at achieving optimal performance. We focus our attention on the adaptation of two rate-control methods developed for the JPEG2000 coder and decoder. Experimental results suggest that both methods significantly improve coding performance without penalizing computational load.
A novel theory of information acquisition, "compressive sampling" (CS), is applied in this paper; it goes against the common wisdom in data acquisition based on the Shannon sampling theorem. CS theory asserts that one can recover certain signals and images perfectly from far fewer samples or measurements than traditional methods use. This paper presents an improved genetic algorithm used in place of the matching pursuit algorithm, in consideration of the enormous computational complexity of sparse decomposition. The whole image is divided into small blocks that can be processed by sparse decomposition, and the end of the decomposition is determined adaptively by a PSNR threshold. Finally, experimental results show that good image reconstruction performance is achieved with lower computational complexity.
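The improved genetic algorithm itself is not described in the abstract; the sketch below shows the baseline it replaces, a greedy matching pursuit over one image block with an adaptive PSNR-based stopping rule, using a hypothetical random dictionary.

```python
import numpy as np

def matching_pursuit(signal, dictionary, psnr_target=35.0, max_atoms=64):
    """Greedy matching pursuit over one image block (flattened to a vector).
    The paper replaces this greedy search with an improved genetic algorithm;
    the stopping rule here mimics its adaptive PSNR threshold."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    peak = signal.max() if signal.max() > 0 else 1.0
    for _ in range(max_atoms):
        mse = np.mean(residual ** 2)
        if mse == 0 or 10 * np.log10(peak ** 2 / mse) >= psnr_target:
            break                                  # block is "good enough"
        products = dictionary.T @ residual
        k = int(np.argmax(np.abs(products)))
        coeffs[k] += products[k]
        residual -= products[k] * dictionary[:, k]
    return coeffs

# Toy usage: a random unit-norm dictionary and an 8x8 block.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
block = rng.random(64) * 255
c = matching_pursuit(block, D, psnr_target=30.0)
print("atoms used:", int(np.count_nonzero(c)))
```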
By using fixed-length codewords and not using an adaptive dictionary, Tunstall coding stands out as an error-resilient lossless compression technique. Our previous work has shown that use of a non-exhaustive parse tree can improve the compression ratio of Tunstall coding without compromising its error resilience. As a change of codeword assignment affects only the error propagation, not the compression ratio, in this work we experimented with various codeword assignment methods to minimize the error propagation of our revised Tunstall coding. The results show that breadth-first assignment of codewords to the Tunstall parse tree gives the lowest percent error rate. Also, linear prediction followed by the non-exhaustive Tunstall parse tree and an alignment marker (LP+Tunstall non-exhaustive) has the best performance in terms of compression ratio and percent error rate when compared to other LP+Bitcut+Tunstall methods, JPEG2000, and the CCSDS Rice coding scheme.
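For context, the sketch below builds a standard (exhaustive) Tunstall codebook by repeatedly expanding the most probable leaf of the parse tree and then assigns fixed-length codewords in breadth-first order of the tree; the non-exhaustive tree and the alignment markers studied in the paper are not reproduced.

```python
import heapq

def tunstall_codebook(probs, code_bits):
    """Build an exhaustive Tunstall parse tree: repeatedly expand the most
    probable leaf until the number of leaves fits in fixed-length codewords
    of `code_bits` bits.  probs maps source symbols to probabilities."""
    max_leaves = 2 ** code_bits
    # Heap of (-probability, parsed string); start with the single-symbol leaves.
    heap = [(-p, sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    n_leaves = len(heap)
    while n_leaves + len(probs) - 1 <= max_leaves:
        neg_p, word = heapq.heappop(heap)          # most probable leaf
        for sym, p in probs.items():               # expand it into children
            heapq.heappush(heap, (neg_p * p, word + sym))
        n_leaves += len(probs) - 1
    # Assign fixed-length codewords in breadth-first order of the parse tree
    # (shortest parsed strings first); the paper studies which assignment
    # minimizes error propagation.
    words = sorted((word for _, word in heap), key=lambda w: (len(w), w))
    return {word: format(i, f'0{code_bits}b') for i, word in enumerate(words)}

# Toy usage on a binary source.
book = tunstall_codebook({'a': 0.7, 'b': 0.3}, code_bits=3)
for word, code in book.items():
    print(word, '->', code)
```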
Endmember extraction has recently received considerable attention in hyperspectral data exploitation, since endmembers represent crucial and vital information for hyperspectral data analysis. An endmember is defined as an idealized signature and may or may not exist as a data sample or an image pixel. The interest in endmember extraction arises from the use of hundreds of contiguous spectral channels that allow a hyperspectral imaging sensor to uncover many subtle substances in diagnostic bands. However, finding such substances also presents a great challenge to hyperspectral data analysts, and becomes imperative for satellite communication when a hyperspectral imaging sensor is operated on a space platform, where the bandwidth of satellite links may be very limited and downloading all the data may not be realistic in many practical applications. Many endmember extraction algorithms have been developed and designed in the past to address this need, but no work has been reported on how to implement endmember extraction algorithms in real time. This paper investigates this issue by designing algorithms for real-time processing of endmember extraction, and develops several endmember extraction algorithms derived from the widely used N-finder algorithm (N-FINDR) that can be implemented in real time.
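A sequential reference version of N-FINDR is sketched below: pixels are projected into a (p-1)-dimensional PCA subspace and the algorithm greedily swaps simplex vertices to maximize the simplex volume. The real-time variants derived in the paper are not reproduced; the data and parameters are toy assumptions.

```python
import numpy as np

def nfindr(pixels, p, iterations=3, seed=0):
    """Simplified N-FINDR: search for the p pixels whose simplex has maximum
    volume in a (p-1)-dimensional PCA-reduced space.  This is the sequential
    reference version; the paper derives real-time variants from it."""
    rng = np.random.default_rng(seed)
    # Reduce spectra to p-1 dimensions with PCA.
    centered = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    reduced = centered @ vt[:p - 1].T                     # (n_pixels, p-1)

    def volume(idx):
        simplex = np.vstack([np.ones(p), reduced[idx].T]) # (p, p) augmented matrix
        return abs(np.linalg.det(simplex))

    idx = rng.choice(len(reduced), size=p, replace=False) # random initial simplex
    best = volume(idx)
    for _ in range(iterations):
        for j in range(p):                                 # try to replace each vertex
            for candidate in range(len(reduced)):
                trial = idx.copy()
                trial[j] = candidate
                v = volume(trial)
                if v > best:
                    idx, best = trial, v
    return pixels[idx]                                     # endmember signatures

# Toy usage: 500 random 20-band pixels, 4 endmembers.
E = nfindr(np.random.rand(500, 20), p=4)
print(E.shape)
```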
The extraction of roads from high spatial resolution remote sensing images remains a problem, even though much effort has been made in this area. High spatial resolution remote sensing images represent the surface of the earth in detail; as the spatial resolution increases, the spectral variability within road cover units becomes complex, and traditional pixel-based remote sensing image processing methods are no longer suitable. This paper studies automatic road extraction from remote sensing images based on the Pulse-Coupled Neural Network (PCNN) and mathematical morphology. The PCNN is a useful biologically inspired algorithm whose linking field and dynamic threshold make similar neurons generate pulses simultaneously: a neuron captures neighboring neurons that are in similar states, while pulses within unconnected neuron regions remain independent. Mathematical morphology is based on the principle of using a structuring element to measure and extract the corresponding forms in an image. In this paper, a simplified PCNN is applied as the image segmentation algorithm, and morphological transformations are used to purify the road information and to extract the road centerlines. Experimental results show that this method is efficient for road extraction from remote sensing images.
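A minimal sketch of a simplified PCNN iteration is given below, assuming generic parameter values: each neuron is fed by its pixel intensity, linked to neighboring pulses, and fires when its internal activity exceeds a decaying dynamic threshold, so that pixels in similar, connected regions tend to fire together.

```python
import numpy as np
from scipy.ndimage import convolve

def simplified_pcnn(image, iterations=10, beta=0.2, alpha=0.7, v_theta=20.0):
    """Simplified pulse-coupled neural network: each neuron is fed by its pixel
    intensity, linked to its neighbours' pulses, and fires when its internal
    activity exceeds a decaying dynamic threshold.  Returns a first-firing-time
    map that can be thresholded into segments (hypothetical parameter values)."""
    S = image.astype(float) / image.max()          # feeding input (normalized)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    Y = np.zeros_like(S)                           # pulse output
    theta = np.ones_like(S)                        # dynamic threshold
    fire_time = np.zeros_like(S)
    for t in range(1, iterations + 1):
        L = convolve(Y, kernel, mode='constant')   # linking from neighbouring pulses
        U = S * (1.0 + beta * L)                   # internal activity
        Y = (U > theta).astype(float)              # neurons that fire this step
        theta = alpha * theta + v_theta * Y        # decay, then jump where fired
        fire_time[(fire_time == 0) & (Y == 1)] = t # record first firing time
    return fire_time

# Toy usage: the brighter, connected square fires earlier than the background.
img = np.zeros((32, 32)); img[8:24, 8:24] = 200; img += 20
print(np.unique(simplified_pcnn(img)))
```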
In this paper, we develop several parallel techniques for hyperspectral image processing that have been specifically
designed to be run on massively parallel systems. The techniques developed cover the three relevant areas of
hyperspectral image processing: 1) spectral mixture analysis, a popular approach to characterize mixed pixels in
hyperspectral data addressed in this work via efficient implementation of a morphological algorithm for automatic
identification of pure spectral signatures or endmembers from the input data; 2) supervised classification of hyperspectral
data using multi-layer perceptron neural networks with back-propagation learning; and 3) automatic
target detection in the hyperspectral data using orthogonal subspace projection concepts. The scalability of the
proposed parallel techniques is investigated using Barcelona Supercomputing Center's MareNostrum facility, one
of the most powerful supercomputers in Europe.
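As a pointer for the third item, the sketch below shows the core of an OSP-based detector: the background signatures are annihilated by the projector P = I - U U+ (with U+ the pseudo-inverse), and each pixel is then correlated with the desired target signature; the parallel implementation developed in the paper is not reproduced.

```python
import numpy as np

def osp_detector(pixels, target, background):
    """Orthogonal subspace projection target detector: project out the
    background signatures U, then correlate with the desired target d."""
    d = np.asarray(target, float)
    U = np.asarray(background, float)              # (bands, n_background)
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U) # annihilates the background
    return np.array([d @ P @ x for x in pixels])   # one score per pixel

# Toy usage: pixels mixing a target and one background signature.
rng = np.random.default_rng(1)
d, b = rng.random(10), rng.random(10)
pixels = [0.8 * b, 0.5 * d + 0.5 * b, d]
print(osp_detector(pixels, d, b[:, None]))         # score grows with target content
```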
According to the data characteristics of remote sensing stereo image pairs, a novel compression algorithm based on the combination of feature-based image matching (FBM), area-based image matching (ABM), and region-based disparity estimation is proposed. First, the Scale Invariant Feature Transform (SIFT) and the Sobel operator are used for texture classification. Second, an improved ABM is used in areas with flat terrain (flat areas), while disparity estimation, a combination of quadtree decomposition and FBM, is used in areas with alpine terrain (alpine areas). Furthermore, radiation compensation is applied in each area. Finally, the disparities, the residual image, and the reference image are compressed together by JPEG2000. The new algorithm provides a reasonable prediction in different areas according to the characteristics of the image textures, which improves the precision of the sensed image. Experimental results show that the PSNR of the proposed algorithm gains up to about 3 dB over the traditional algorithm at low and medium bit rates, and the subjective quality is obviously enhanced.
In order to effectively store and transmit MODIS multispectral data, a lossless compression method based on mixed coding and the integer wavelet transform (IWT) is proposed in this paper. First, the algorithm computes the correlation coefficients between the spectral bands of the MODIS data. Using a proper coefficient threshold, the original bands are divided into two groups: one group uses a spectral prediction method and then compresses the residual error, while the other group is directly compressed by a standard compressor. For the spectral prediction group, the band with the greatest correlation with the previous band is found by examining the correlation coefficients, so the optimal spectral prediction sequence is obtained by band reordering. The predicted band data is computed from the previous band data and an optimal linear predictor, so the spectral redundancy is eliminated by spectral prediction. To further reduce the residual differences, a block optimal linear predictor is designed in this paper. Next, except for the first band of the spectral prediction sequence, the residual errors of the other bands are encoded by the IWT and SPIHT. The directly compressed bands and the first band of the spectral prediction sequence are compressed by JPEG2000. Finally, the coefficients of the block optimal linear predictor and other side information are encoded by adaptive arithmetic coding. Experimental results show that the proposed method is efficient and practical for MODIS data.
An adaptive Group of Pictures (GOP) structure helps increase video encoding efficiency by taking into account the characteristics of the video content. This paper proposes a method for adaptive GOP structure selection for video encoding based on motion coherence, which extracts key frames according to motion acceleration and assigns a coding type to each key and non-key frame accordingly. Motion deviation is then used instead of motion magnitude in selecting the number of B frames. Experimental results show that the proposed method for adaptive GOP structure selection achieves a performance gain of 0.2-1 dB over a fixed GOP and has the advantage of better transmission resilience. Moreover, this method can be used in real-time video coding due to its low complexity.
Our research focuses on reducing the complexity of hyperspectral image codecs based on transform and/or subband coding, so that they can run on board a satellite. It is well known that the Karhunen-Loève Transform (KLT) can be sub-optimal in transform coding for non-Gaussian data; however, it is generally recommended as the best calculable linear coding transform in practice. Recently, the concept and computation of optimal coding transforms (OCT), under weakly restrictive hypotheses at high bit rates, were carried out and adapted to a compression scheme compatible with both the JPEG2000 Part 2 standard and the CCSDS recommendations for on-board satellite image compression, leading to the concept and computation of Optimal Spectral Transforms (OST). These linear transforms are optimal for reducing the spectral redundancies of multi- or hyperspectral images when the spatial redundancies are reduced with a fixed 2-D Discrete Wavelet Transform (DWT). The problem with OST is its heavy computational cost. In this paper we present the coding performance of a quasi-optimal spectral transform, called exogenous OrthOST, obtained by learning an orthogonal OST on a sample of superspectral images from the MERIS spectrometer. The performance is presented in terms of bit rate versus distortion for four different distortion measures and compared to that of the KLT. We observe good performance of the exogenous OrthOST, as was the case for Hyperion hyperspectral images in previous work.
Satellite images and aerial images have different radiation attributes, so conclusions about compression effects on aerial images may not be applicable to satellite images. In this paper we study the effects of lossy compression on the digital terrain model (DTM) derived from a selected stereo pair of SPOT-5 satellite images. The satellite images are compressed with the Kakadu JPEG2000 compression software at compression ratios of 2, 4, 6, and 8. The Imageinfo Pixelgrid V2.0 software is used as the DTM generator. The DTM results derived from the original images are compared with those derived from the compressed images in terms of the mean error and the standard deviation. The experiment indicates that when the compression ratio rises to 4, both the DTM generation and human stereo observation are affected dramatically.
This paper addresses the assessment of different processing techniques for target detection in hyperspectral images. An ad-hoc quality assessment approach is adopted to compare different noise reduction techniques for hyperspectral images in target detection applications. Two different noise reduction techniques are applied to a datacube collected over a well-studied area with man-made targets. The quality of these noise-reduced datacubes in preserving the identity of the targets of interest is compared with that of the original datacube. This is achieved by applying different measures to the datacubes. First, the Virtual Dimensionality is used, and the results for both noise reduction methods are compared with those of the original datacube for several false-alarm probabilities. Then the Maximum Noise Fraction is applied to the datacubes, and its capability in finding a transform in which the information of the datacube is represented in a smaller number of bands is assessed. Finally, using set measures and knowing the locations of the targets, different classes are defined and the intraclass and interclass distances for each datacube are measured.
Automatic target detection in hyperspectral images is a task that has attracted a lot of attention recently. In the last few years, several algorithms have been developed for this purpose, including the well-known RX algorithm for anomaly detection and the automatic target detection and classification algorithm (ATDCA), which uses an orthogonal subspace projection (OSP) approach to extract a set of spectrally distinct targets automatically from the input hyperspectral data. Depending on the complexity and dimensionality of the analyzed image scene, the target/anomaly detection process may be computationally very expensive, a fact that limits the possibility of utilizing these algorithms in time-critical applications. In this paper, we develop computationally efficient parallel versions of both the RX and ATDCA algorithms for near-real-time exploitation. In the case of ATDCA, we use several distance metrics in addition to the OSP approach. The parallel versions are quantitatively compared in terms of target detection accuracy, using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the World Trade Center in New York five days after the terrorist attack of September 11th, 2001, and also in terms of parallel performance, using a massively parallel Beowulf cluster available at NASA's Goddard Space Flight Center in Maryland.
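For reference, the global RX anomaly detector used as one of the two algorithms is simply the Mahalanobis distance of each pixel spectrum from the scene mean; a serial sketch is shown below (the parallel versions developed in the paper are not reproduced).

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global RX anomaly detector: Mahalanobis distance of every pixel
    spectrum from the scene mean, (x - mu)^T C^{-1} (x - mu)."""
    bands, rows, cols = cube.shape
    X = cube.reshape(bands, -1).T                  # (n_pixels, bands)
    mu = X.mean(axis=0)
    C_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    scores = np.einsum('ij,jk,ik->i', diff, C_inv, diff)
    return scores.reshape(rows, cols)

# Toy usage: an implanted anomalous pixel gets the highest score.
rng = np.random.default_rng(0)
cube = rng.normal(size=(20, 16, 16))
cube[:, 8, 8] += 5.0
scores = rx_anomaly_scores(cube)
print(np.unravel_index(scores.argmax(), scores.shape))   # -> (8, 8)
```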
In this paper, we investigate automated display methods for hyperspectral images using unsupervised segmentation. First, we apply an unsupervised segmentation method, which produces a number of unlabeled classes, and retain the classes whose sizes are larger than a threshold value. We then apply a feature extraction method to the chosen classes and find the dominant discriminant features, which are used to display the hyperspectral images. We also exploit the use of principal component analysis for the display of hyperspectral images. Experiments show that the color images produced by the proposed methods exhibit interesting characteristics compared to conventional pseudo-color images.
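The PCA-based display mentioned above can be sketched as follows: the first three principal components of the cube are stretched and mapped to the R, G, and B channels (the segmentation-driven feature extraction of the paper is not reproduced; the min-max stretch is an assumption).

```python
import numpy as np

def pca_false_color(cube):
    """Map the first three principal components of a (bands, rows, cols)
    hyperspectral cube to R, G, B for display."""
    bands, rows, cols = cube.shape
    X = cube.reshape(bands, -1).T
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ vt[:3].T                              # first three component scores
    # Stretch each component independently to the [0, 1] display range.
    lo, hi = pcs.min(axis=0), pcs.max(axis=0)
    rgb = (pcs - lo) / (hi - lo)
    return rgb.reshape(rows, cols, 3)

rgb = pca_false_color(np.random.rand(30, 16, 16))
print(rgb.shape, rgb.min(), rgb.max())
```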
Cloud is one of the common noise sources in MODIS remote sensing images; because of cloud interference, much important information covered by cloud cannot be obtained. In this paper, an effective method is proposed to detect and remove thin clouds from a single MODIS image. The proposed method involves two processing steps: thin cloud detection and thin cloud removal. For thin cloud detection, by analyzing the cloud spectral characteristics in the thirty-six MODIS bands, we conclude that the spectral reflectances of ground and cloud differ across the MODIS bands. Hence, the cloud and ground areas can be identified separately based on MODIS multispectral analysis. Then, a region labeling algorithm is used to label thin clouds among the candidate objects. After the cloud detection processing, the thin cloud removal method is applied to each cloud region. Compared with traditional methods, the proposed method can realize thin cloud detection and removal with a single remote sensing image. Additionally, the cloud removal processing is applied mainly to the labeled cloud regions rather than the whole image, which improves the processing efficiency. Experimental results show that the method can effectively remove thin cloud from MODIS images.
In most applications, signals or images are corrupted by additive noise. As a result, there are many methods for removing additive noise, while few approaches work well for multiplicative noise. This paper presents an improved MAP-based filter for multiplicative noise using an adaptive-window denoising technique. A Gamma noise model is discussed, and a preprocessing technique that differentiates matured and un-matured pixels is applied to obtain an accurate estimate of the Equivalent Number of Looks. Adaptive local window growth and three different denoising strategies are also applied to smooth the noise while keeping subtle information, according to the local statistical features. Simulation results show that the performance is better than existing filters, and several image experiments demonstrate its theoretical performance.
Signal space diversity (SSD) has been viewed as a very efficient method to increase the diversity order and the reliability of wireless transmission over fading channels. Non-binary LDPC codes can provide powerful error protection over fading channels, and the combination of non-binary LDPC codes and SSD is an effective technique against fading and strong noise on fading channels. This paper discusses signal space diversity with a precoder, obtained by adding a simple encoder before applying the rotation matrix, which can be viewed as SSD with a rate below 1. We propose three design criteria and two implementation methods. We further consider non-binary LDPC-coded signal space diversity with precoding and the rate re-allocation between channel coding and SSD. It is shown that our scheme outperforms conventional coded SSD over the fading channel when a better rate allocation, which depends on the specific channel code, is obtained.
In this paper, we propose an embedded satellite image compression method using weighted zeroblock coding and optimal sorting. Unlike conventional quad-tree coding methods such as the Set Partitioned Embedded bloCK coder (SPECK) and the Embedded ZeroBlock Coder (EZBC), in the proposed weighted zeroblock coding (WZBC): 1) we use an unfixed scanning order to code a significant block-set with fewer bits and achieve variable-length quad-tree coding; 2) we exploit the significance degree of sub-blocks, predicted by a novel context-based weighting strategy, to optimize the scanning order of the variable-length quad-tree coding and obtain the weighted zeroblock coding; and 3) the rate-distortion performance of WZBC is also optimized by using the above-mentioned weights.
In the binary mode, the new method does not employ arithmetic coding and has a fairly low complexity. Experimental results show that the proposed WZBC in the binary coding mode provides excellent coding performance compared with SPECK and Set Partitioning In Hierarchical Trees (SPIHT) with arithmetic coding, and can even closely approach JPEG2000. When arithmetic coding is also used, the proposed method obtains a more obvious gain.
In this paper, we propose a method based on both the 3D wavelet transform and low-density parity-check codes to realize the compression of hyperspectral images within the framework of Distributed Source Coding (DSC). The new approach, which combines DSC and the 3D wavelet transform technique, makes it possible to achieve low encoding complexity at the encoder and efficient hyperspectral image compression performance. The experimental results for hyperspectral image coding show that the new method performs better than 3D-SPIHT and outperforms 2D-SPIHT and JPEG2000.