Data compression is essential for remote sensing with hyperspectral sensors, owing to the increasing data volumes generated by modern instrumentation. The CCSDS has issued the 123.0 recommendation for lossless hyperspectral compression, and a new lossy hyperspectral compression recommendation is under preparation. We have developed multispectral and hyperspectral pre-processing stages for FAPEC, a data compression algorithm based on an entropy coder. A prediction-based lossless stage can be selected, offering excellent results and speed. Alternatively, a DWT-based stage supporting both lossless and lossy compression can be chosen, which also offers excellent results, albeit at the cost of longer compression times. Finally, a stage based on our HPA algorithm can be selected; it is currently lossless, with a lossy option in preparation. Here we present the overall design of these data compression systems and the results obtained on a variety of real data, including compression ratios, speed and quality.
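FAPEC's actual prediction stage is not detailed above; as a rough sketch of the general idea behind prediction-based pre-processing, the following Python snippet predicts each hyperspectral sample from the co-located sample of the previous band and then compresses the residuals (zlib stands in for the entropy coder, and the band data are synthetic):

```python
import random
import zlib

def predict_residuals(bands):
    """Inter-band prediction: each sample is predicted by the co-located
    sample of the previous band; the first band is kept as-is."""
    residuals = [list(bands[0])]
    for prev, cur in zip(bands, bands[1:]):
        residuals.append([c - p for p, c in zip(prev, cur)])
    return residuals

def to_bytes(bands):
    # Zig-zag map signed values to unsigned so small magnitudes
    # (the common case after prediction) become small byte values.
    out = bytearray()
    for band in bands:
        for v in band:
            out.append(((v << 1) ^ (v >> 31)) & 0xFF)  # assumes |v| < 128
    return bytes(out)

# Synthetic bands: band1 is strongly correlated with band0.
random.seed(1)
band0 = [random.randrange(0, 120) for _ in range(4096)]
band1 = [v + random.choice((0, 1)) for v in band0]
raw = to_bytes([band0, band1])
pre = to_bytes(predict_residuals([band0, band1]))
# zlib stands in for the entropy coder: residuals compress far better.
assert len(zlib.compress(pre)) < len(zlib.compress(raw))
```

Because neighbouring bands are highly correlated, the residual stream concentrates around zero and compresses much better than the raw samples.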
The LOFT mission concept is one of four candidates selected by ESA for the M3 launch opportunity of the Medium-size missions of the Cosmic Vision programme. The launch window is currently planned for between 2022 and 2024. LOFT is designed to exploit the diagnostics of rapid X-ray flux and spectral variability that directly probe the motion of matter down to distances very close to black holes and neutron stars, as well as the physical state of ultradense matter. These primary science goals will be addressed by a payload composed of a Large Area Detector (LAD) and a Wide Field Monitor (WFM). The LAD is a collimated (<1 degree field of view) experiment operating in the 2-50 keV energy range, with a peak effective area of 10 m2 and an energy resolution of 260 eV at 6 keV. The WFM will operate in the same energy range as the LAD, enabling simultaneous monitoring of a field of view of a few steradians with an angular resolution of <5 arcmin. Together, the LAD and WFM will allow us to investigate variability from sub-millisecond QPOs to year-long transient outbursts. In this paper we report the current status of the project.
The Large Observatory for X-ray Timing (LOFT) is one of the four candidate ESA M3 missions considered for
launch in the 2022 timeframe. It is specifically designed to perform fast X-ray timing and to probe the state of
matter near black holes and neutron stars. The LOFT scientific payload consists of a Large Area Detector (LAD)
and a Wide Field Monitor (WFM).
The LAD is a 10 m2-class pointed instrument with high spectral (200 eV @ 6 keV) and timing (<10 μs)
resolution over the 2-80 keV range. It is designed to observe persistent and transient X-ray sources with a very
large dynamic range, from a few mCrab up to an intensity of 15 Crab.
An unprecedentedly large throughput (~280,000 cts/s from the Crab) is achieved with a segmented detector,
making pile-up and dead-time, which often worry or limit focusing experiments, secondary issues.
We present the on-board data handling concept, which follows the highly segmented and hierarchical structure
of the instrument from the front-end electronics to the on-board software. The system features customizable
observation modes, ranging from event-by-event data for sources below 0.5 Crab to individually adjustable
time-resolved spectra for brighter sources. On-board lossless data compression will be applied before transmitting
the data to ground.
Proc. SPIE 8183, High-Performance Computing in Remote Sensing
KEYWORDS: Environmental monitoring, Data storage, Computing systems, Computer programming, Data processing, Telecommunications, Pollution control, Java, Data communications, Computer programming languages
Java is a commonly used programming language, although its use in High Performance Computing (HPC) remains
relatively low. One of the reasons is a lack of libraries offering specific HPC functions to Java applications. In
this paper we present a Java-based framework, called DpcbTools, designed to provide a set of functions that fill
this gap. It includes a set of efficient message-passing data communication functions which, when a low-latency
network such as Myrinet is available, provide higher throughputs and lower latencies than the standard
solutions used by Java. DpcbTools also includes routines for launching, monitoring and managing Java
applications on several computing nodes, making use of JMX to communicate with remote Java VMs. The
Gaia Data Processing and Analysis Consortium (DPAC) is a real case where scientific data from the ESA Gaia
astrometric satellite will be entirely processed using Java. Here we describe the main elements of DPAC
and its usage of the DpcbTools framework. We also assess the usefulness and performance of DpcbTools through
benchmarks and an analysis of its impact on some DPAC systems deployed on the MareNostrum
supercomputer (Barcelona Supercomputing Center).
Future space missions are based on a new generation of instruments that often generate vast amounts of data.
Transferring these data to ground and, once there, between different computing facilities is by no means an
easy task. A clear example of such missions is Gaia, an ESA space astrometry mission. To carry out the
data reduction tasks on ground, an international consortium has been set up. Among its tasks perhaps the most
demanding one is the Intermediate Data Updating, which will have to repeatedly re-process nearly 100 TB of
raw data received from the satellite using the latest instrument calibrations available. On the compression side,
one of the best solutions available is the Prediction Error Coder, a highly optimized entropy coder that
performs very well on data following realistic statistics. Regarding file formats, HDF5 provides a completely
indexed, easily customizable file format with quick, parallel access, a convenient presentation format and
multi-platform compatibility. It is thus a powerful environment in which to store data compressed with
the aforementioned coder. Here we show the integration of both systems for the storage of Gaia raw data.
This integration, however, is generic and can be applied to the efficient storage of any kind of data. Moreover,
we show that the file sizes obtained with this solution are similar to those obtained with other compression
algorithms that require more computing power.
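Neither PEC nor the HDF5 bindings are reproduced here, but the key property exploited above, independently compressed and indexed blocks that can be fetched without touching the rest of the file, can be sketched in a few lines of Python (zlib stands in for PEC, and the container layout is a deliberately simplified stand-in for HDF5 chunking):

```python
import io
import struct
import zlib

def write_container(records):
    """Write records as independently compressed blocks followed by an
    index of (offset, length) pairs, so any record can be fetched
    without decompressing the others (the property HDF5 chunking gives)."""
    buf = io.BytesIO()
    index = []
    for rec in records:
        blob = zlib.compress(rec)
        index.append((buf.tell(), len(blob)))
        buf.write(blob)
    index_off = buf.tell()
    for off, length in index:
        buf.write(struct.pack("<QQ", off, length))
    buf.write(struct.pack("<QQ", index_off, len(index)))  # trailer
    return buf.getvalue()

def read_record(container, i):
    """Random access: read the trailer, then one index entry, then
    decompress only the requested block."""
    index_off, n = struct.unpack_from("<QQ", container, len(container) - 16)
    off, length = struct.unpack_from("<QQ", container, index_off + 16 * i)
    return zlib.decompress(container[off:off + length])

# Hypothetical record names, for illustration only.
data = [b"astrometric field " * 50, b"photometric field " * 50]
blob = write_container(data)
assert read_record(blob, 1) == data[1]
```

HDF5 provides this same random access to compressed chunks, plus the self-describing metadata and parallel I/O mentioned in the abstract, which is why it pairs naturally with a fast entropy coder.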
Future space missions are based on a new generation of instruments. These missions face a serious constraint in the
telemetry system, which cannot downlink to ground the large volumes of data generated. Hence, data compression
algorithms are often mandatory in space, despite the modest processing power usually available on-board. We present
here a compact hardware solution for such missions. FAPEC is a lossless compressor that typically outperforms
the CCSDS 121.0 recommendation on realistic data sets. With efficiencies above 90% of the Shannon limit in
most cases, even in the presence of noise or outliers, FAPEC has been successfully validated in its software
version as a robust, low-complexity alternative to the recommendation. This work describes the FAPEC
implementation on an FPGA, targeting the space-qualified Actel RTAX family. We show that FAPEC is
hardware-friendly and that it does not require external memory. We also verify the correct operation of the prototype at an initial
throughput of 32 Mbit/s with very low power consumption (about 20 mW). Finally, we discuss further potential
applications of FAPEC, and we set the basis for the improvements that will boost its performance beyond the
100 Mbit/s level.
Proc. SPIE 7810, Satellite Data Compression, Communications, and Processing VI
KEYWORDS: Data compression, Satellites, Receivers, Data transmission, Navigation systems, High dynamic range imaging, Satellite communications, Data communications, Geophysics, Global Positioning System
The Global Positioning System (GPS) has long been used as a scientific tool, and it has turned into a very powerful
technique in domains like geophysics, where it is commonly used to study the dynamics of a large variety of
systems, like glaciers, tectonic plates and others. In these cases, the large distances between receivers as well
as their remote locations usually pose a challenge for data transmission. The standard format for scientific
applications is a compressed RINEX file - a raw data format which allows post-processing. Its associated
compression algorithm is based on a pre-processing stage followed by a commercial data compressor. In this
paper we present a new compression method which achieves better compression ratios and faster operation.
We have improved the pre-processing stage, split the resulting file into two, and applied the most appropriate
compressor to each file. FAPEC, a highly resilient entropy coder, is applied to the observables file. The results
obtained so far demonstrate that average compression gains of about 35% with respect to the original
compressor are possible.
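The actual RINEX pre-processing and the FAPEC coder are not shown here; the following Python sketch merely illustrates why splitting the file and tailoring the treatment of the observables pays off: slowly varying observables are delta-encoded so that the compressor (zlib here, standing in for FAPEC) sees small, highly redundant values. The observable values are synthetic.

```python
import zlib

def delta_encode(values):
    """Differences between consecutive observables: slowly varying
    pseudoranges become small numbers an entropy coder packs tightly."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

def pack(values):
    # Serialize as text; a real coder would emit variable-length bits.
    return ",".join(str(v) for v in values).encode()

# Synthetic "observables": a smooth pseudorange-like ramp plus jitter.
obs = [23_000_000 + 150 * i + (i * i) % 7 for i in range(2000)]
direct = zlib.compress(pack(obs))
split = zlib.compress(pack(delta_encode(obs)))
assert delta_decode(delta_encode(obs)) == obs  # lossless round-trip
assert len(split) < len(direct)  # the delta stream compresses far better
```

The same reasoning motivates routing the observables file, once pre-processed, to an entropy coder such as FAPEC while the remaining (mostly textual) file is handled separately.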
The Consultative Committee for Space Data Systems (CCSDS) recommends the use of a two-stage strategy
for lossless data compression in space. At the core of the second stage is the Rice coding method. The Rice
compression ratio rapidly decreases in the presence of noise and outliers, since this coder is specifically conceived for
noiseless data following geometric distributions. This, in turn, makes the CCSDS recommendation too sensitive
to outliers in the data, leading to suboptimal ratios in realistic scenarios. In this paper we propose
to replace the Rice coder of the CCSDS recommendation with a subexponential coder. We show that this
solution offers high compression ratios even when large amounts of noise are present in the data. This is done
by testing both compressors with synthetic and real data. The performance is actually similar to that obtained
with the FAPEC coder, although with slightly higher processing requirements. This solution therefore represents
a simple improvement to the current CCSDS standard with an excellent return.
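In the textbook formulation (not necessarily the exact variant used here), the contrast between the two coders is visible directly in their code lengths: the Rice code grows linearly with the encoded value, while the subexponential code of Howard and Vitter grows only logarithmically:

```python
def rice_length(n, k):
    """Bits used by the Rice code of order k: a unary quotient plus
    k remainder bits; grows linearly with n, so outliers are costly."""
    return (n >> k) + 1 + k

def subexp_length(n, k):
    """Bits used by the subexponential code of order k (Howard & Vitter):
    identical to Rice for n < 2**k, but growing only logarithmically
    beyond that, which bounds the damage done by outliers."""
    if n < (1 << k):
        return k + 1
    b = n.bit_length() - 1          # floor(log2 n)
    return 2 * b - k + 2

k = 2
assert rice_length(3, k) == subexp_length(3, k)  # agree on small values
# A single outlier of 100 costs 28 bits under Rice but only 12 bits
# under the subexponential code.
assert rice_length(100, k) == 28
assert subexp_length(100, k) == 12
```

For samples well inside the geometric bulk the two codes cost the same, but outliers are penalized only logarithmically by the subexponential code, which is why it degrades gracefully on noisy data.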
More than a decade has passed since the Consultative Committee for Space Data Systems (CCSDS) made its recommendation for lossless data compression. The CCSDS standard is commonly used for scientific missions because it is a general-purpose lossless compression technique with a low computational cost which yields acceptable compression ratios. At the core of this compression algorithm lies the Rice coding method. Its performance rapidly degrades in the presence of outliers, as the Rice coder is conceived for noiseless data following geometric distributions. To overcome this problem we present here a new entropy coder, the so-called Prediction Error Coder (PEC), as well as its fully adaptive version (FAPEC), which we show to be a reliable alternative to the CCSDS standard. We show that PEC and FAPEC achieve high compression ratios even when large amounts of outliers are present in the data. This is done by testing our compressors with synthetic and real data, and comparing the compression ratios and processor requirements with those obtained using the CCSDS standard.