The availability of hyperspectral images has increased in recent years; they are used in military and civilian applications
such as target recognition, surveillance, geological mapping, and environmental monitoring. Because of their large data
volume and special importance, existing lossless compression methods for hyperspectral images mainly exploit
the strong spatial or spectral correlation. C-DPCM-APL achieves the highest lossless compression ratio on
the CCSDS hyperspectral images acquired in 2006, but it consumes the longest processing time among existing lossless
compression methods because it determines the optimal prediction length for each band. C-DPCM-APL obtains its
compression performance mainly by using the optimal prediction length, while ignoring the correlation between the
reference bands and the current band, which is a crucial factor influencing prediction precision. Considering this, we
propose a method that selects reference bands according to the atmospheric absorption characteristics of hyperspectral
images. Experiments on the CCSDS 2006 image data set show that the proposed method greatly reduces computational
complexity without degrading lossless compression performance compared to C-DPCM-APL.
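The heart of such clustered-DPCM schemes is a per-band linear predictor whose coefficients are fit by least squares against one or more reference bands. A minimal Python sketch of this idea (the function name, single-reference setup, and synthetic data are illustrative, not the paper's exact formulation):

```python
import numpy as np

def predict_band(ref_bands, cur_band):
    """Least-squares inter-band prediction: model the current band as a
    linear combination of reference bands plus a bias, then return the
    integer prediction residual used for lossless coding."""
    n = cur_band.size
    # Design matrix: one column per reference band, plus a constant column.
    X = np.column_stack([b.ravel() for b in ref_bands] + [np.ones(n)])
    coeffs, *_ = np.linalg.lstsq(X, cur_band.ravel(), rcond=None)
    prediction = (X @ coeffs).reshape(cur_band.shape)
    return cur_band - np.rint(prediction)

# Synthetic strongly correlated band pair: residual energy should be
# far smaller than the raw band energy, which is what makes the
# residuals cheap to encode losslessly.
rng = np.random.default_rng(0)
ref = rng.integers(0, 4096, size=(32, 32)).astype(float)
cur = 0.9 * ref + 50 + rng.normal(0, 2, size=ref.shape)
res = predict_band([ref], cur)
print(res.std() < cur.std())
```

Selecting reference bands that are well correlated with the current band (e.g. avoiding atmospheric absorption bands, as proposed above) directly improves this fit and hence the compressibility of the residuals.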
JPEG2000 is an important image compression technique that has been successfully used in many fields. Due to the increasing spatial, spectral and temporal resolution of remotely sensed imagery data sets, fast decompression of remotely sensed data is becoming a very important and challenging objective. In this paper, we develop an implementation of JPEG2000 decompression on graphics processing units (GPUs) for fast decoding of a codeblock-based parallel compression stream. We use one CUDA block to decode one frame. Tier-2 is still decoded serially, while Tier-1 and the IDWT are processed in parallel. Since our encoded stream is codeblock-parallel, meaning each codeblock is independent of the others, we process each codeblock in Tier-1 with one thread. For the IDWT, we use one CUDA block to process one line and one CUDA thread to process one pixel. We investigate the speedups that can be gained by the GPU implementation with respect to a serial CPU implementation. Experimental results reveal that our implementation achieves significant speedups compared with serial implementations.
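The IDWT stage parallelizes naturally because each line can be transformed independently. A minimal single-line sketch of JPEG2000's reversible 5/3 lifting transform (forward and inverse), written in Python rather than CUDA for readability; boundary handling is simplified to whole-sample symmetric extension and an even-length line:

```python
def mirror(i, n):
    """Whole-sample symmetric index extension."""
    if i < 0:
        return -i
    if i >= n:
        return 2 * n - 2 - i
    return i

def fwd_53(x):
    """Forward reversible 5/3 lifting on one even-length line."""
    n, half = len(x), len(x) // 2
    xe = lambda i: x[mirror(i, n)]
    # Predict step: high-pass (detail) coefficients.
    d = [x[2*i + 1] - (xe(2*i) + xe(2*i + 2)) // 2 for i in range(half)]
    de = lambda i: d[mirror(i, half)]
    # Update step: low-pass (smooth) coefficients.
    s = [x[2*i] + (de(i - 1) + d[i] + 2) // 4 for i in range(half)]
    return s, d

def inv_53(s, d):
    """Inverse lifting: undo the update step, then the predict step."""
    half = len(s)
    n = 2 * half
    x = [0] * n
    de = lambda i: d[mirror(i, half)]
    for i in range(half):
        x[2*i] = s[i] - (de(i - 1) + d[i] + 2) // 4
    xe = lambda i: x[mirror(i, n)]
    for i in range(half):
        x[2*i + 1] = d[i] + (xe(2*i) + xe(2*i + 2)) // 2
    return x

line = [3, 1, 4, 1, 5, 9, 2, 6]
s, d = fwd_53(line)
print(inv_53(s, d) == line)   # → True (integer-exact round trip)
```

Because each line is independent, a GPU version simply assigns one line per CUDA block, with one thread per pixel inside the lifting steps.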
Spectral unmixing is an active research topic in remote sensing hyperspectral image applications. The unmixing process comprises the extraction of spectrally pure signatures (also called endmembers) and the determination of the abundance fractions of the endmembers. Due to the inconspicuous signatures of pure spectra and the challenge of inadequate spatial resolution, sparse regression (SR) techniques have been adopted to solve the linear spectral unmixing problem. However, spatial information has not been fully utilized by state-of-the-art SR-based solutions. In this paper, we propose a new unmixing algorithm that incorporates more suitable spatial correlations into the sparse unmixing formulation for hyperspectral images. Our algorithm integrates spectral and spatial information using Adapting Markov Random Fields (AMRF), which are introduced to exploit spatial-contextual information. The experimental results show that, compared with other SR-based linear unmixing methods, the method proposed in this paper not only improves the characterization of mixed pixels but also obtains better accuracy in hyperspectral image unmixing.
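The linear mixing model underlying all of these methods writes each pixel spectrum as a weighted sum of endmember spectra. A tiny noise-free illustration (the endmember matrix and abundances are made-up numbers; real unmixing adds non-negativity, sum-to-one, or sparsity constraints on top of this inversion):

```python
import numpy as np

# Hypothetical endmember matrix: 5 spectral bands x 3 endmembers.
M = np.array([[0.9, 0.1, 0.3],
              [0.8, 0.2, 0.4],
              [0.3, 0.7, 0.5],
              [0.2, 0.9, 0.6],
              [0.1, 0.8, 0.7]])
true_a = np.array([0.6, 0.3, 0.1])   # abundance fractions, sum to one
y = M @ true_a                        # mixed-pixel spectrum (no noise)

# Unconstrained least squares suffices in this clean, overdetermined case.
est_a, *_ = np.linalg.lstsq(M, y, rcond=None)
print(np.allclose(est_a, true_a))
```

With noise and many more library endmembers than bands, the inversion becomes ill-posed, which is exactly why sparse regression and spatial regularizers such as MRFs are brought in.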
Many algorithms have been proposed to automatically find spectral endmembers in hyperspectral data sets. Perhaps
one of the most popular ones is the pixel purity index (PPI), available in the ENVI software from Exelis Visual
Information Solutions. Although the algorithm has been widely used in the spectral unmixing community, it is highly
time-consuming because its precision increases only asymptotically with the number of iterations. Due to its high computational complexity, the PPI algorithm
has been recently implemented in several high performance computing architectures including commodity clusters,
heterogeneous and distributed systems, field programmable gate arrays (FPGAs) and graphics processing units (GPUs).
In this letter, we present an improved GPU implementation of the PPI algorithm which provides real-time performance
for the first time in the literature.
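The PPI algorithm's structure explains why it maps so well to GPUs: it repeatedly projects every pixel onto random unit vectors ("skewers") and counts how often each pixel lands at an extreme. A minimal Python sketch of that core loop (names and the synthetic simplex data are illustrative):

```python
import numpy as np

def ppi_counts(pixels, n_skewers=500, seed=0):
    """Pixel Purity Index sketch: project all pixels onto random unit
    vectors and count how often each pixel is the extreme projection."""
    rng = np.random.default_rng(seed)
    n, bands = pixels.shape
    skewers = rng.normal(size=(n_skewers, bands))
    skewers /= np.linalg.norm(skewers, axis=1, keepdims=True)
    proj = pixels @ skewers.T                      # (n, n_skewers)
    counts_min = np.bincount(np.argmin(proj, axis=0), minlength=n)
    counts_max = np.bincount(np.argmax(proj, axis=0), minlength=n)
    return counts_min + counts_max

# Pure simplex vertices plus interior mixtures: the vertices should
# accumulate far higher purity counts than any mixed pixel.
rng = np.random.default_rng(1)
verts = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
mixes = rng.dirichlet([2, 2, 2], size=50) @ verts
scores = ppi_counts(np.vstack([verts, mixes]))
print(scores[:3].sum() > scores[3:].sum())
```

Each skewer's projection is independent of the others, so on a GPU the skewers (and the pixel dot products within each skewer) can be distributed across thousands of threads.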
The Consultative Committee for Space Data Systems (CCSDS) Rice Coding is a recommendation
for lossless compression of satellite data. It was also integrated with HDF (Hierarchical Data Format)
software for lossless compression of scientific data, and was proposed for lossless compression of
medical images. The CCSDS Rice coding is an approximate adaptive entropy coder. It uses a subset of
the family of Golomb codes to produce a simpler, suboptimal prefix code. The default preprocessor is a
unit-delay predictor with positive mapping. The adaptive entropy coder concurrently applies a set of
variable-length codes to a block of consecutive preprocessed samples. The code option that yields the
shortest codeword sequence for the current block of samples is then selected for transmission. A unique
identifier bit sequence is attached to the code block to indicate to the decoder which decoding option to
use. In this paper we explore the parallel efficiency of the CCSDS Rice code running on Graphics
Processing Units (GPUs) with Compute Unified Device Architecture (CUDA). The GPU-based
CCSDS Rice encoder will process several codeword blocks in a massively parallel fashion on different
GPU multiprocessors. We parallelized the CCSDS Rice coding by using reduction sum for code option
selection, prefix sum for intra-block and inter-block bit stream concatenation as well as asynchronous
data transfer. For NASA AVIRIS hyperspectral data, the speedup is near 6× as compared to the
single-threaded CPU counterpart. The CCSDS Rice coding contains many flow-control instructions, which
significantly reduce instruction throughput by causing threads of the same CUDA warp to
diverge. Consequently, the different execution paths must be serialized, increasing the total number of
instructions executed within the same warp. We conclude that this branch-divergence issue is
the bottleneck of Rice coding and leads to a smaller speedup than other entropy coders achieve on GPUs.
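The per-block option selection described above can be sketched in a few lines. This is a simplified illustration of the k-split (Golomb-power-of-2) family only; the actual CCSDS 121.0 recommendation also includes options such as the zero-block and second-extension codes:

```python
def map_residual(e):
    """Positive mapping of signed prediction residuals."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_word(v, k):
    """Rice codeword: unary quotient, a '0' terminator, then k LSBs."""
    q, r = v >> k, v & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def encode_block(block, k_options=range(4)):
    """Try every code option on the block of preprocessed samples and
    keep the one yielding the shortest codeword sequence."""
    best_k = min(k_options,
                 key=lambda k: sum(len(rice_word(v, k)) for v in block))
    return best_k, "".join(rice_word(v, best_k) for v in block)

samples = [map_residual(e) for e in [3, -2, 0, 5, -1, 2, 1, 0]]
k, bits = encode_block(samples)
print(k, len(bits))   # chosen option identifier and coded length in bits
```

The data-dependent branching is visible even in this sketch: the unary part makes each codeword's length depend on the sample value, which is precisely what causes warp divergence and the serialized bit-stream concatenation that the prefix-sum step addresses.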
Due to the limited resources on board, compression methods with low complexity are desirable for hyperspectral
images. A low-complexity scalar coset coding based distributed compression method (s-DSC) has been proposed for
hyperspectral images. However, much redundancy remains, since the bitrate of the block to be encoded is
determined by its maximum prediction error. In this paper, a classified coset coding based lossless compression method
is proposed to further reduce the bitrate. The current block is classified so that pixels with similar spectral
correlation cluster together, and each class of pixels is then coset coded separately. The experimental results show that the
classification reduces the bitrate effectively.
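Scalar coset coding rests on a simple idea: the encoder transmits only the k low-order bits of each pixel, and the decoder disambiguates them using its own prediction (the side information). A small sketch under the assumption of integer pixel values (the names and the example numbers are illustrative):

```python
def coset_encode(x, k):
    """Transmit only the k least-significant bits: the coset index."""
    return x & ((1 << k) - 1)

def coset_decode(idx, side_info, k):
    """Recover x as the coset member closest to the side information;
    this is correct whenever |x - side_info| < 2**(k-1)."""
    step = 1 << k
    n = round((side_info - idx) / step)
    candidates = [idx + (n + d) * step for d in (-1, 0, 1)]
    return min(candidates, key=lambda c: abs(c - side_info))

x, side_info, k = 1234, 1229, 4      # prediction error |1234 - 1229| = 5 < 8
rec = coset_decode(coset_encode(x, k), side_info, k)
print(rec == x)
```

Since k must be large enough to cover the worst prediction error in a block, grouping pixels with similar error statistics into separate classes lets each class use a smaller k on average, which is the motivation for the classification step above.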
Based on an analysis of interferential multispectral imagery (IMI), a new compression algorithm based on
distributed source coding is proposed. There are apparent push motions between the IMI sequences, so the relative shift
between two images is detected by a block matching algorithm at the encoder. Our algorithm estimates the rate of
each bitplane with the estimated side-information frame. The algorithm then adopts an ROI coding scheme in which
the rate-distortion lifting procedure is carried out in the rate allocation stage. Using our algorithm, the FBC can be
removed from the traditional scheme. The compression algorithm developed in this paper can obtain up to a 3 dB gain
compared with JPEG2000, and significantly reduces complexity and storage consumption compared with
3D-SPIHT at the cost of a slight degradation in PSNR.
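For pure push motion, the block matching step at the encoder reduces to a one-dimensional search for the row shift with minimum residual. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def detect_shift(ref, cur, max_shift=8):
    """Exhaustive block matching along the push direction: return the
    integer row shift s minimizing the mean absolute difference, where
    cur[i] is matched against ref[i - s] over the overlapping rows."""
    h = ref.shape[0]
    def mad(s):
        a = cur[max(0, s): h + min(0, s)]
        b = ref[max(0, -s): h + min(0, -s)]
        return np.abs(a - b).mean()
    return min(range(-max_shift, max_shift + 1), key=mad)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 32)).astype(float)
shifted = np.roll(frame, 3, axis=0)   # simulate a push shift of 3 rows
print(detect_shift(frame, shifted))   # → 3
```

The detected shift is what lets the decoder align the previous frame as side information, so the rate of each bitplane can be estimated without a feedback channel.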