Multimodal image fusion with adaptive joint sparsity model
Abstract
An adaptive joint sparsity model (JSM) is presented for multimodal image fusion. As a multisignal modeling technique derived from distributed compressed sensing, JSM has been successfully employed in multimodal image fusion. In traditional JSM-based fusion, the single dictionary learned by K-singular value decomposition (K-SVD) has high coherence, which may cause visual confusion and misleading artifacts in the fused image. In the proposed model, we first learn several subdictionaries and classify image patches with a supervised approach based on gradient information. Then, one of the learned subdictionaries is adaptively selected within JSM to obtain the common and innovative sparse coefficients. Finally, the fused image is reconstructed from the fused sparse coefficients and the adaptive dictionary. Infrared-visible and medical image pairs were used to test the proposed approach, and the results were compared with those of conventional methods, including multiscale transform-based methods, the JSM-based method, and the adaptive sparse representation (ASR) model-based method. Experimental results on multimodal images demonstrate that the proposed fusion method outperforms the conventional JSM-based and ASR-based methods in terms of both visual quality and objective assessment.
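The abstract outlines a per-patch pipeline: classify each patch by its gradient information, select one learned subdictionary, code the two source patches jointly under the JSM-1 model (common component plus per-image innovative components), fuse the coefficients, and reconstruct. The sketch below illustrates this idea under stated assumptions that are not the authors' settings: placeholder 2-D DCT subdictionaries stand in for K-SVD-trained ones, a simple gradient-energy threshold replaces the supervised classifier, patches are non-overlapping, and a choose-max rule combines the innovative coefficients. The block dictionary [D D 0; D 0 D] is the standard JSM-1 construction from distributed compressed sensing.

# Minimal sketch of adaptive-JSM fusion for two registered grayscale images.
# Assumptions (not from the paper): DCT placeholder subdictionaries instead of
# K-SVD-trained ones, a gradient-energy threshold instead of the supervised
# classifier, non-overlapping 8x8 patches, and a choose-max fusion rule.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

PATCH, SPARSITY, GRAD_THRESH = 8, 8, 2.0

def dct_dictionary(patch=PATCH, atoms=PATCH):
    """Separable 2-D DCT dictionary used as a stand-in for a learned one."""
    base = np.zeros((patch, atoms))
    for k in range(atoms):
        v = np.cos(np.arange(patch) * k * np.pi / atoms)
        if k > 0:
            v -= v.mean()
        base[:, k] = v / np.linalg.norm(v)
    return np.kron(base, base)                     # (patch^2, atoms^2)

D_FULL = dct_dictionary()
# Subdictionary 0 (smooth patches): 16 low-frequency atoms; 1: full dictionary.
LOW = [k1 * PATCH + k2 for k1 in range(4) for k2 in range(4)]
SUBDICTS = [D_FULL[:, LOW], D_FULL]

def classify(vec):
    """Pick a subdictionary index from the patch's mean gradient magnitude."""
    gy, gx = np.gradient(vec.reshape(PATCH, PATCH))
    return 0 if np.hypot(gx, gy).mean() < GRAD_THRESH else 1

def jsm_fuse_patch(y1, y2, D):
    """JSM-1 coding: y_i = D a_c + D a_i, solved jointly with OMP."""
    Z = np.zeros_like(D)
    A = np.block([[D, D, Z], [D, Z, D]])           # stacked joint dictionary
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3 * SPARSITY)
    omp.fit(A, np.concatenate([y1, y2]))
    a_c, a_1, a_2 = np.split(omp.coef_, 3)
    # Keep the common part; take the innovative part with the larger activity.
    a_f = a_c + (a_1 if np.abs(a_1).sum() >= np.abs(a_2).sum() else a_2)
    return D @ a_f

def fuse(img1, img2):
    """Fuse two same-sized images patch by patch with an adaptive dictionary."""
    fused = np.zeros(img1.shape)
    for r in range(0, img1.shape[0] - PATCH + 1, PATCH):
        for c in range(0, img1.shape[1] - PATCH + 1, PATCH):
            y1 = img1[r:r + PATCH, c:c + PATCH].astype(float).ravel()
            y2 = img2[r:r + PATCH, c:c + PATCH].astype(float).ravel()
            m1, m2 = y1.mean(), y2.mean()          # code zero-mean patches
            D = SUBDICTS[max(classify(y1), classify(y2))]
            f = jsm_fuse_patch(y1 - m1, y2 - m2, D) + (m1 + m2) / 2
            fused[r:r + PATCH, c:c + PATCH] = f.reshape(PATCH, PATCH)
    return fused

# Toy usage with random stand-ins for registered source images:
a, b = np.random.rand(64, 64) * 255, np.random.rand(64, 64) * 255
print(fuse(a, b).shape)                            # (64, 64)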
© 2019 SPIE and IS&T 1017-9909/2019/$25.00
Chengfang Zhang, Liangzhong Yi, Ziliang Feng, Zhisheng Gao, Xin Jin, and Dan Yan "Multimodal image fusion with adaptive joint sparsity model," Journal of Electronic Imaging 28(1), 013043 (21 February 2019). https://doi.org/10.1117/1.JEI.28.1.013043
Received: 9 December 2018; Accepted: 31 January 2019; Published: 21 February 2019