Salient feature multimodal image fusion with a joint sparse model and multiscale dictionary learning (28 December 2019)
Abstract

A multimodal image fusion method based on the joint sparse model (JSM), multiscale dictionary learning, and the structural similarity index (SSIM) is presented. As an effective signal representation technique, JSM is derived from distributed compressed sensing and has been successfully applied in many image-processing tasks, such as image classification and fusion. In traditional JSM-based image fusion, a highly redundant single-scale dictionary often fails to capture the correlations between source images. The proposed fusion model therefore learns a more compact multiscale dictionary that combines the multiscale analysis of the nonsubsampled contourlet transform (NSCT) with single-scale joint sparse representation in the image domain, overcoming the limitations of single-scale sparse fusion and improving fusion quality. Experimental results demonstrate that the proposed method achieves state-of-the-art performance in both subjective visual quality and objective metrics, especially when fusing multimodal images.
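At its core, JSM-based fusion decomposes two aligned source patches into a shared common sparse component plus per-image innovation components over a stacked dictionary, then recombines them with a salience rule. The sketch below illustrates only that single-scale decomposition step with plain NumPy: the orthogonal matching pursuit routine, the random dictionary `D`, the sparsity level `k`, and the max-L1 fusion rule are illustrative assumptions, not the authors' implementation, which additionally operates on NSCT subbands with a learned multiscale dictionary and an SSIM-based strategy.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: k-sparse code of y over dictionary D."""
    residual = y.copy()
    idx = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-correlated atom
        if j in idx:
            break
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)  # re-fit on support
        residual = y - D[:, idx] @ coef
    x[idx] = coef
    return x

def jsm_fuse(y1, y2, D, k=4):
    """Fuse two aligned patch vectors with a joint sparse model (sketch):
    one common sparse component shared by both patches, plus one
    innovation component per patch, coded over the stacked dictionary
    [D D 0; D 0 D]."""
    n, m = D.shape
    Z = np.zeros((n, m))
    Djoint = np.block([[D, D, Z],
                       [D, Z, D]])
    x = omp(Djoint, np.concatenate([y1, y2]), k)
    xc, x1, x2 = x[:m], x[m:2 * m], x[2 * m:]
    # Illustrative max-L1 rule: keep the more salient innovation.
    xi = x1 if np.abs(x1).sum() >= np.abs(x2).sum() else x2
    return D @ (xc + xi)

# Toy usage with a random unit-norm dictionary (assumed, for illustration).
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)
fused = jsm_fuse(rng.standard_normal(16), rng.standard_normal(16), D)
```

In a full pipeline this per-patch step would run over sliding patches of each NSCT subband, with the fused patches averaged back into the subband before the inverse transform.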

© 2020 Society of Photo-Optical Instrumentation Engineers (SPIE) 0091-3286/2020/$28.00 © 2020 SPIE
Chengfang Zhang, Ziliang Feng, Zhisheng Gao, Xin Jin, Dan Yan, and Liangzhong Yi "Salient feature multimodal image fusion with a joint sparse model and multiscale dictionary learning," Optical Engineering 59(5), 051402 (28 December 2019). https://doi.org/10.1117/1.OE.59.5.051402
Received: 30 May 2019; Accepted: 19 November 2019; Published: 28 December 2019
Journal article, 19 pages.

