Thanks to recent technological improvements, next-generation instruments carried onboard spacecraft collect an increasing quantity of information at ever higher rates. The huge amount of data generated onboard competes with the limited channel resources available for transmitting those data to the ground. As a result, onboard payload data compression is gaining increasing importance in the framework of spacecraft design.
In remote sensing systems, a major problem lies in the limited bandwidth and resources available for the acquisition, processing, and transmission of the images of a given terrestrial area, as acquired by sensors mounted on an airborne or spaceborne platform. Since the spatial, spectral, and radiometric resolutions of optical and radar sensors keep improving, the amount of collected data is huge, and new-generation sensors aim at increasing the resolution even further. As a consequence, powerful compression algorithms are required to match the available channel resources. Moreover, many Earth observation satellites must transmit data to the ground in real time, which calls for compression devices with very high throughput. In science missions, one of the challenges is the capability to transmit a large amount of data to the ground stations through a limited downlink. Thanks to onboard data compression, it becomes possible to transmit all of the data generated onboard, without restriction, maximizing the science return.
This special section provides a picture of state-of-the-art research in the area of onboard payload data compression. We wish to express our deep appreciation to all authors and reviewers for their high-quality contributions and enthusiastic efforts. Many of the papers are extended versions of material presented at the 3rd Onboard Payload Data Compression (OBPDC) workshop, held in Barcelona in the fall of 2012, which gathered individuals from academia and industry to share the most recent developments in this exciting field.
The special section contains nine papers. Three papers consider general compression methods. “Information-theoretic assessment of on-board near-lossless compression of hyperspectral data” by B. Aiazzi et al. addresses modeling of compression performance. In particular, a rate-distortion model is developed to measure the impact of high-rate compression of raw data on the information available once the compressed data have been received and decompressed. “Randomized methods in lossless compression of hyperspectral data” by Q. Zhang et al. applies recently developed randomized matrix decomposition methods to fast lossless compression of hyperspectral images. The authors build on simple random projection methods and develop a new double-random projection technique, showing promising lossless compression results. “Prediction Error Coder: a fast lossless compression method for satellite noisy data” by A. Villafranca et al. addresses the problem of handling outliers in the encoding of prediction residuals. They propose PEC, a fast and noise-resilient semiadaptive entropy coder that can achieve better performance than the CCSDS standard in the presence of noise, or when the input data contain a sizable fraction of outliers, while requiring very low processing resources.
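To illustrate the outlier problem that motivates noise-resilient entropy coders such as PEC, the following sketch (our own toy example, not the PEC algorithm or any coder from the papers above) shows how a few large residuals inflate the cost of a Rice/Golomb coder whose parameter was tuned on clean data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth instrument signal: a previous-sample predictor leaves small residuals.
signal = np.cumsum(rng.normal(0, 2, 10_000)).astype(np.int64)
residuals = np.diff(signal)

# Corrupt 1% of the residuals with large spikes (e.g., cosmic-ray hits).
noisy = residuals.copy()
hits = rng.choice(noisy.size, size=noisy.size // 100, replace=False)
noisy[hits] += rng.integers(10_000, 100_000, size=hits.size)

def rice_bits(res, k):
    """Total bits to Rice-code the residuals with parameter k
    (zigzag map to non-negative, then unary quotient + k remainder bits)."""
    u = np.where(res >= 0, 2 * res, -2 * res - 1)
    return int(np.sum((u >> k) + 1 + k))

# Tune k on the clean residuals, as a non-adaptive coder effectively does.
k = min(range(16), key=lambda k: rice_bits(residuals, k))
print(f"clean:       {rice_bits(residuals, k) / residuals.size:.2f} bits/sample")
print(f"1% outliers: {rice_bits(noisy, k) / noisy.size:.2f} bits/sample")
```

The unary part of the code grows linearly with the residual magnitude, so even a 1% outlier fraction dominates the total bit budget; coders that escape or segment large residuals avoid this blow-up.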
Another group of papers is concerned with CCSDS standards. In particular, “Discrete wavelet transform fully adaptive prediction error coder: image data compression based on CCSDS 122.0 and fully adaptive prediction error coder” by G. Artigues et al. employs a fully adaptive prediction error coder aimed at achieving excellent compression ratios in almost any situation, including large fractions of outliers in the data and large sample sizes. A new image compression solution is presented, based on the DWT stage of the CCSDS 122.0 recommendation followed by the FAPEC entropy coder, thus replacing the BPE stage. “Efficient implementation of the CCSDS 122.0-B-1 compression standard on a space-qualified field programmable gate array” by Li et al. presents an efficient FPGA architecture for CCSDS 122.0-B-1 image data compression. The implementation provides both lossless and lossy compression with a simple and efficient rate control mechanism; the achievable throughput depends on how the core is embedded in the application architecture and is typically higher than 100 Mbps. “Predictor analysis for onboard lossy predictive compression of multispectral and hyperspectral images” by M. Ricci and E. Magli presents a study of prediction error statistics for various predictors and images, including the predictor employed in the upcoming CCSDS 123 standard for lossless multi- and hyperspectral image compression. “Performance impact of parameter tuning on the CCSDS-123 lossless multi- and hyperspectral image compression standard” by E. Augé et al. investigates the effect of the various parameters available in the CCSDS 123 standard on the obtained compression performance, and it is useful to help users of the standard achieve the best possible performance.
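A toy example helps convey why predictor choice matters so much in this setting. The sketch below (our own illustration on a synthetic cube, not the CCSDS-123 adaptive predictor) compares the empirical entropy of residuals from a purely spatial previous-pixel predictor against a spectral previous-band predictor when bands are strongly correlated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "hyperspectral" cube: 32 bands that are scaled copies of one
# spatial scene plus small noise (purely illustrative, not real sensor data).
scene = rng.normal(0, 50, (64, 64))
gains = np.linspace(0.8, 1.2, 32)
cube = np.round(scene[None] * gains[:, None, None]
                + rng.normal(0, 1, (32, 64, 64))).astype(np.int64)

def entropy_bits(res):
    """Empirical first-order entropy of integer residuals, in bits/sample."""
    _, counts = np.unique(res, return_counts=True)
    p = counts / res.size
    return float(-(p * np.log2(p)).sum())

spatial = cube[:, :, 1:] - cube[:, :, :-1]   # previous-pixel predictor
spectral = cube[1:] - cube[:-1]              # previous-band predictor

print(f"spatial residuals:  {entropy_bits(spatial):.2f} bits/sample")
print(f"spectral residuals: {entropy_bits(spectral):.2f} bits/sample")
```

On such band-correlated data the spectral residuals have far lower entropy than the spatial ones, which is the kind of gap that predictor analyses for multi- and hyperspectral compression quantify on real imagery.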
Finally, two papers present algorithm implementations on graphics processing units (GPUs). In particular, “Multiparallel decompression simultaneously using multicore central processing unit and graphic processing unit” by A. Petta et al. proposes a decoder for a wavelet-based compression algorithm in which the GPU and multiple CPU threads run in parallel. Maximum throughput is obtained using an additional workload balancing algorithm; through pipelined CPU and GPU heterogeneous computing, the entire decoding system approaches a speedup of 15 times over its single-threaded CPU counterpart. “Lossy hyperspectral image compression on a graphics processing unit: parallelization strategy and performance evaluation” by L. Santos et al. presents a CUDA GPU implementation of an algorithm for lossy compression of hyperspectral images, focusing on the entropy coding and bit packing phases, for which a more sophisticated strategy is necessary due to the existing data dependencies. Experimental results show a speedup of about 15 times.