Several recent NASA missions have used the state-of-the-art wavelet-based ICER Progressive Image Compressor
for lossy image compression. In this paper, we describe a methodology for using evolutionary computation to
optimize the wavelet and scaling numbers describing reconstruction-only multiresolution analysis (MRA) transforms.
These transforms accept as input test images compressed by ICER software at a reduced bit rate (e.g., 0.99 bits
per pixel [bpp]), and produce as output images whose average quality, measured by mean squared error (MSE),
equals that of images produced by ICER’s reconstruction transform when applied to the same test images
compressed at a higher bit rate (e.g., 1.00 bpp). This improvement can be attained without modification to ICER’s
compression, quantization, encoding, decoding, or dequantization algorithms, and with very small modifications to
existing ICER reconstruction filter code. As a result, future NASA missions will be able to transmit greater amounts
of information (i.e., a greater number of images) over channels with equal bandwidth, thus achieving a no-cost
improvement in the science value of those missions.
The research described in this paper uses the CMA-ES evolution strategy to optimize matched forward and inverse
transform pairs for the compression and reconstruction of images transmitted from Mars rovers under conditions subject
to quantization error. Our best transforms outperform the 2/6 wavelet (whose integer variant was used onboard the
rovers), substantially reducing error in reconstructed images without allowing increases in compressed file size. This
result establishes a new state-of-the-art for the lossy compression of images transmitted over the deep-space channel.
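The core of the approach above is an evolutionary loop that treats transform coefficients as a real-valued vector and minimizes reconstruction error. As a minimal sketch of that loop (the paper uses CMA-ES, which also adapts the full covariance matrix of the mutation distribution; the simplified (1, λ) strategy, the toy fitness function, and all names below are illustrative assumptions, not the actual implementation):

```python
import random

# Toy stand-in for "average MSE of reconstructed images": distance of a
# candidate coefficient vector from a hidden target. In the actual research,
# fitness would be computed by compressing and reconstructing training
# images with the candidate transform under quantization.
TARGET = [0.5, -0.25, 1.0, 0.75]  # hypothetical "ideal" filter coefficients

def fitness(coeffs):
    return sum((c - t) ** 2 for c, t in zip(coeffs, TARGET))

def evolve(x0, sigma=0.5, offspring=20, generations=200, seed=1):
    """Minimal (1, lambda) evolution strategy: mutate the parent with
    Gaussian noise, keep the best child, and shrink the step size."""
    rng = random.Random(seed)
    parent = list(x0)
    for _ in range(generations):
        children = [[p + rng.gauss(0.0, sigma) for p in parent]
                    for _ in range(offspring)]
        parent = min(children, key=fitness)
        sigma *= 0.98  # simple step-size decay in place of CMA-ES adaptation
    return parent

best = evolve([0.0, 0.0, 0.0, 0.0])
print(fitness(best))  # much smaller than the starting fitness of 1.875
```

The fixed step-size decay here stands in for the covariance and step-size adaptation that gives CMA-ES its robustness on ill-conditioned coefficient landscapes.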
The 9/7 wavelet is used for a wide variety of image compression tasks. Recent research, however, has established a
methodology for using evolutionary computation to evolve wavelet and scaling numbers describing transforms that
outperform the 9/7 under lossy conditions, such as those brought about by quantization or thresholding. This paper
describes an investigation into which of three possible approaches to transform evolution produces the most effective
transforms. The first approach uses an evolved forward transform for compression, but performs reconstruction using the
9/7 inverse transform; the second uses the 9/7 forward transform for compression, but performs reconstruction using an
evolved inverse transform; the third uses simultaneously evolved forward and inverse transforms for compression and
reconstruction. Three image sets are independently used for training: digital photographs, fingerprints, and satellite
images. Results strongly suggest that it is impossible for evolved transforms to substantially improve upon the
performance of the 9/7 without evolving the inverse transform.
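To make the notion of "matched" forward and inverse transforms concrete, here is a minimal single-level lifting sketch (the function names, the two-coefficient parameterization, and the Haar-like values below are illustrative assumptions; the actual transforms use full wavelet and scaling coefficient sets over multiple MRA levels). A forward transform parameterized by (p, u) is inverted exactly only by the inverse that undoes the same steps with the same coefficients, which is why pairing an evolved forward transform with a fixed 9/7 inverse (or vice versa) constrains what evolution can achieve:

```python
def forward(signal, p, u):
    """One lifting step: split into even/odd samples, predict the odds
    from the evens, then update the evens from the prediction residuals.
    (p, u) play the role of evolvable transform coefficients."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - p * e for o, e in zip(odd, even)]      # predict step
    approx = [e + u * d for e, d in zip(even, detail)]   # update step
    return approx, detail

def inverse(approx, detail, p, u):
    """Matched inverse: undo the lifting steps in reverse order."""
    even = [a - u * d for a, d in zip(approx, detail)]
    odd = [d + p * e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

x = [4.0, 6.0, 10.0, 12.0]
a, d = forward(x, p=1.0, u=0.5)      # Haar-like choice of (p, u)
print(inverse(a, d, p=1.0, u=0.5))   # matched pair: reconstructs x exactly
print(inverse(a, d, p=0.9, u=0.5))   # mismatched inverse: residual error
```

Under lossless conditions the matched pair achieves perfect reconstruction; quantization between the forward and inverse steps is what opens the door for an evolved inverse to outperform the analytic one.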
State-of-the-art image compression and reconstruction schemes utilize wavelets. Quantization and thresholding are
commonly used to achieve additional compression, but cause irreversible information loss. This paper
describes an investigation into whether evolutionary computation (EC) may be used to optimize forward
(compression-only) transforms capable of matching or exceeding the compression capabilities of a selected wavelet,
while reducing the aggregate error in images subsequently reconstructed by that wavelet. Transforms are
independently trained and tested using three sets of images: digital photographs, fingerprints, and satellite images.
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES),
of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the
9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and
reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces
the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while
maintaining the average information entropy (IE) of compressed images at 99.57% of the wavelet’s value. In
addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from
the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital
photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our
evolved transform greatly improves the quality of reconstructed images without substantial loss of compression
capability over a broad range of image classes.
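The percentage MSE reductions and dB figures quoted above are related by the standard decibel ratio: a fractional MSE reduction r corresponds to a gain of 10·log10(1/(1−r)) dB. A short sketch of the conversion (the helper name is ours):

```python
import math

def mse_reduction_db(r):
    """Convert a fractional MSE reduction r (e.g., 0.3378 for 33.78%)
    into the equivalent decibel improvement: 10*log10(MSE_old/MSE_new)."""
    return 10.0 * math.log10(1.0 / (1.0 - r))

# The three reductions reported above, in the same order:
for r in (0.3378, 0.4988, 0.4235):
    print(round(mse_reduction_db(r), 2))  # 1.79, 3.0, 2.39
```

The converted values match the reported 1.79 dB, 3.00 dB, and 2.39 dB gains, confirming the two sets of figures are consistent.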
A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard and the JPEG-2000 image compression standard, utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research automatically learns to exploit specific attributes common to the class of images represented in the training population.
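The "quantization level of 64" above refers to scalar quantization of transform coefficients with step size 64. A minimal sketch of why this loss is irreversible (the simple round-to-nearest-multiple quantizer below is an illustrative assumption, not ICER's or JPEG-2000's actual quantizer, which use dead zones and progressive bit planes):

```python
def quantize(coeffs, step=64):
    """Uniform scalar quantization: map each coefficient to the nearest
    multiple of `step`. Distinct inputs falling in the same bin become
    identical, so the original values cannot be recovered on reconstruction."""
    return [step * round(c / step) for c in coeffs]

coeffs = [100.0, 130.0, -20.0, 3.0]
print(quantize(coeffs))  # [128, 128, 0, 0]
```

Because 100.0 and 130.0 both map to 128 (and small coefficients vanish entirely), no inverse transform can undo the loss; evolved transforms can only shape the coefficient distribution so that the damage done by quantization is minimized.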