With their capacity for feature extraction and nonlinear representation, deep neural networks can remove spatial redundancy efficiently and perform well in visible-image compression. Multispectral images, however, require reducing both spatial and spectral redundancy. We therefore propose an end-to-end feature-domain residual coding network for multispectral image compression based on interspectral prediction. Specifically, a spatial-spectral feature extraction network and an interspectral prediction network are built on a pyramid structure, which captures and fuses multi-scale features from coarse to fine. These modules exploit spectral correlation to predict bands accurately and, combined with the feature-domain residual coding network, further reduce spatial-spectral redundancy. A single loss function jointly optimizes all modules in the network. Extensive experiments on 8-band and 12-band multispectral image datasets demonstrate that the proposed method outperforms traditional compression methods (JPEG2000, 3D-SPIHT, Versatile Video Coding, PCA+JPEG2000) on all evaluation metrics and even surpasses advanced learned multispectral image compression algorithms.
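The core idea of interspectral residual coding can be illustrated with a toy sketch. This is not the paper's network; it is a minimal, hypothetical example assuming a simple least-squares linear predictor between adjacent bands, showing why coding the prediction residual instead of the raw band is cheaper: the residual carries much less energy.

```python
import numpy as np

# Toy sketch (NOT the paper's method): predict each band from the
# previous band with a least-squares gain, then code only the
# prediction residual, which has far lower energy than the raw band.
rng = np.random.default_rng(0)
base = rng.normal(size=(32, 32))
# Synthetic 3-band image with strong interspectral correlation.
bands = np.stack([base * s + rng.normal(scale=0.1, size=(32, 32))
                  for s in (1.0, 0.8, 0.6)])

residual_energy, raw_energy = 0.0, 0.0
for k in range(1, bands.shape[0]):
    prev, cur = bands[k - 1].ravel(), bands[k].ravel()
    # Optimal least-squares gain: alpha = <prev, cur> / <prev, prev>.
    alpha = prev @ cur / (prev @ prev)
    residual = cur - alpha * prev
    residual_energy += residual @ residual
    raw_energy += cur @ cur

# Lower residual energy means fewer bits at the same quality.
print(residual_energy < raw_energy)  # True
```

The proposed network generalizes this idea: prediction and residual coding happen in a learned feature domain rather than the pixel domain, with a nonlinear multi-scale predictor in place of the scalar gain used here.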
Keywords: Image compression, Multispectral imaging, Image restoration, Feature extraction, Video coding, Image quality, Education and training