A multimodal image fusion method based on the joint sparse model (JSM), multiscale dictionary learning, and the structural similarity index (SSIM) is presented. As an effective signal representation technique, JSM is derived from distributed compressed sensing and has been successfully employed in many image-processing applications, such as image classification and fusion. In traditional JSM-based image fusion, a single, highly redundant dictionary often fails to capture the correlations between source images. The proposed fusion model therefore learns a more compact multiscale dictionary that combines the multiscale analysis of the nonsubsampled contourlet transform with single-scale joint sparse representation in the image domain, addressing the limitations of single-scale sparse fusion and improving fusion quality. Experimental results demonstrate that the proposed method achieves state-of-the-art performance in terms of both subjective visual quality and objective metrics, especially when fusing multimodal images.
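To make the joint sparse decomposition concrete, the following minimal Python sketch codes a pair of co-registered patch vectors over the joint dictionary [[D, D, 0], [D, 0, D]], splitting them into a common component and two innovation components. The dictionary D, the sparsity level k, and the choose-max rule on the innovation parts are illustrative assumptions; the multiscale (contourlet) analysis and SSIM-based rules of the actual method are omitted.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def jsm_fuse_patch(x1, x2, D, k=8):
    """Fuse two co-registered patch vectors via the joint sparse model.

    [x1; x2] is coded over the joint dictionary [[D, D, 0], [D, 0, D]]:
    zc captures the component common to both modalities, while z1 and
    z2 capture the modality-specific innovations.
    """
    n, m = D.shape
    Z = np.zeros_like(D)
    Dj = np.block([[D, D, Z],
                   [D, Z, D]])
    norms = np.linalg.norm(Dj, axis=0)        # OMP expects unit-norm atoms
    y = np.concatenate([x1, x2])
    z = orthogonal_mp(Dj / norms, y, n_nonzero_coefs=k) / norms
    zc, z1, z2 = z[:m], z[m:2 * m], z[2 * m:]
    # Assumed fusion rule: keep the common part plus the more active
    # (larger L1 norm) innovation component.
    zi = z1 if np.abs(z1).sum() >= np.abs(z2).sum() else z2
    return D @ (zc + zi)
```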
A method for geometric distortion correction and space- and time-varying blur reduction is proposed that can recover a high-quality image from a single frame distorted by atmospheric turbulence. First, a U-Net-like deep-stacked autoencoder neural network is proposed, composed of two deep convolutional autoencoder (CAE) networks and a U-Net: the first CAE extracts features, the U-Net performs feature deconvolution, and the second CAE reconstructs the image. To reduce the loss of image information, transposed convolution is used in the U-Net instead of upsampling. Moreover, to provide sufficient feature information for reconstruction, the first and last CAEs are connected by symmetric skip connections, which not only fuse low-level and high-level information but also largely preserve the integrity of the image information. Then, a training scheme that progresses gradually from simple to complex data is proposed to overcome convergence difficulties on smaller training sets: by gradually increasing the complexity of the training data, the network converges and matures enough to restore severely turbulence-degraded images. Experimental results on real observations and simulated data show that the algorithm is more robust to noise and recovers image details and sharpens image edges more effectively. In particular, for the restoration of images severely degraded by atmospheric turbulence, the peak signal-to-noise ratio is about 10% higher on average than that of state-of-the-art methods.
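The two architectural choices highlighted above, transposed convolution in place of interpolation upsampling and symmetric skip connections, can be sketched in PyTorch as follows. The channel widths and the two-convolution refinement block are assumptions for illustration, not the authors' exact layer configuration.

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One decoder stage: learnable 2x upsampling by transposed
    convolution (instead of interpolation) whose output is fused with
    the feature map of the symmetric encoder stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(2 * out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                    # transposed conv, not upsampling
        x = torch.cat([x, skip], dim=1)   # symmetric skip connection
        return self.conv(x)
```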
An adaptive joint sparsity model (JSM) is presented for multimodal image fusion. As a multisignal modeling technique derived from distributed compressed sensing, JSM has been successfully employed in multimodal image fusion. In traditional JSM-based fusion, the single dictionary learned by K-singular value decomposition (K-SVD) has high coherence, which can cause visual confusion and misleading artifacts in the fused image. In the proposed model, we first learn a set of subdictionaries and classify image patches with a supervised, gradient-based approach. Then, the matching subdictionary is adaptively applied in JSM to obtain the common and innovation sparse coefficients. Finally, the fused image is reconstructed from the fused sparse coefficients and the adaptively selected dictionary. Infrared-visible and medical image pairs were used to test the proposed approach, and the results were compared with those of traditional methods, including multiscale transform-based methods, the JSM-based method, and the adaptive sparse representation (ASR) model-based method. Experimental results on multimodal images demonstrate that the proposed fusion method outperforms the conventional JSM-based and ASR-based methods in terms of both visual quality and objective assessment.
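The adaptive step can be sketched as below: a 2-D patch is assigned to one of several subdictionaries from its gradient statistics before JSM coding. The paper uses a supervised, gradient-based classifier; as an assumed stand-in, this sketch bins the magnitude-weighted mean gradient orientation into as many classes as there are subdictionaries.

```python
import numpy as np

def select_subdictionary(patch, subdicts):
    """Pick a subdictionary from a gradient-based patch class.

    Assumed heuristic stand-in for the paper's supervised classifier:
    bin the magnitude-weighted mean gradient orientation of the 2-D
    patch into len(subdicts) classes.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    if mag.sum() < 1e-8:                  # smooth patch: default class
        return subdicts[0]
    theta = np.arctan2(gy, gx) % np.pi    # orientation in [0, pi)
    mean_theta = float((theta * mag).sum() / mag.sum())
    k = min(int(mean_theta / np.pi * len(subdicts)), len(subdicts) - 1)
    return subdicts[k]
```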
Optical and infrared imaging are often used in ground-based observation of space targets, and fusing the two image types for more detailed observation is a key problem. A space target multimodal image fusion scheme based on the joint sparsity model is proposed that takes into account the correlations among the multisource images as well as the native sparse characteristics and clarity features of each image. First, using an overcomplete dictionary, the source images are represented as a combination of a shared sparse component and exclusive sparse components. Second, a method for extracting image clarity features is proposed to design the fusion rule for the exclusive sparse components and obtain their fused counterparts. Finally, the fused image is reconstructed from the fused sparse components and the overcomplete dictionary. The proposed method was tested on space target and natural scene image data sets. Compared with traditional methods, including multiscale transform-based, sparse representation-based, and joint sparsity representation-based methods, the experimental results demonstrate that our method outperforms existing state-of-the-art methods in both human visual effect and objective evaluation indexes. In particular, the QAB/F and QE scores are nearly 10% higher than those of traditional methods, indicating that the fused image produced by our method has better edge clarity.
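A minimal sketch of a clarity-driven fusion rule for the exclusive sparse components is given below. The paper's exact clarity feature is not specified here, so Laplacian energy, a common sharpness proxy, stands in for it, and the choose-max rule is an assumption.

```python
import numpy as np

def clarity(patch):
    """Assumed clarity feature: Laplacian energy, a common sharpness
    proxy (periodic boundaries via np.roll keep the sketch short)."""
    p = patch.astype(float)
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)
    return float((lap ** 2).sum())

def fuse_exclusive(z1, z2, patch1, patch2):
    """Choose-max rule: keep the exclusive sparse coefficients of the
    source patch with the higher clarity score."""
    return z1 if clarity(patch1) >= clarity(patch2) else z2

# Reconstruction then combines the shared and fused exclusive parts:
#   x_fused = D @ (z_shared + fuse_exclusive(z1, z2, p1, p2))
```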