Multimodal image fusion with joint sparsity model (1 June 2011)
Image fusion combines multiple images of the same scene into a single image suitable for human perception and practical applications. Different images of the same scene can be viewed as an ensemble of intercorrelated images. This paper proposes a novel multimodal image fusion scheme based on the joint sparsity model derived from distributed compressed sensing. First, the source images are jointly sparsely represented as common and innovation components using an over-complete dictionary. Second, the common and innovation sparse coefficients are combined into the jointly sparse coefficients of the fused image. Finally, the fused result is reconstructed from the obtained sparse coefficients. Furthermore, the proposed method is compared with several popular image fusion methods, such as multiscale transform-based methods and the simultaneous orthogonal matching pursuit-based method. The experimental results demonstrate the effectiveness of the proposed method in terms of visual quality and quantitative fusion evaluation indexes.
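The three-step pipeline the abstract describes can be sketched per patch as follows. This is a minimal illustration, not the paper's implementation: it assumes a simple orthogonal matching pursuit (OMP) solver, a joint dictionary of the JSM-1 form [D D 0; D 0 D] separating one common and two innovation components, and a max-energy rule for combining the innovation coefficients (the paper's exact combination rule is not stated in the abstract).

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: select up to k atoms of A to fit y."""
    residual = y.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j in idx:  # no new atom improves the fit; stop early
            break
        idx.append(j)
        # least-squares fit over the selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

def fuse_patches(y1, y2, D, k=8):
    """Fuse two vectorized source patches via a joint sparsity model.

    D is an (n x m) over-complete dictionary for a single patch.
    """
    n, m = D.shape
    # Joint dictionary [D D 0; D 0 D]: the first m columns model the
    # common component, the next two blocks the per-source innovations.
    Dj = np.block([[D, D, np.zeros((n, m))],
                   [D, np.zeros((n, m)), D]])
    y = np.concatenate([y1, y2])
    x = omp(Dj, y, k)
    xc, x1, x2 = x[:m], x[m:2 * m], x[2 * m:]
    # Assumed fusion rule: common coefficients plus the higher-energy innovation.
    xi = x1 if np.linalg.norm(x1) >= np.linalg.norm(x2) else x2
    return D @ (xc + xi)
```

In practice the scheme would run over all overlapping patches of the registered source images and average the reconstructed patches back into the fused image; the sketch above covers only the per-patch joint sparse coding and coefficient combination.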
©(2011) Society of Photo-Optical Instrumentation Engineers (SPIE)
Shutao Li and Haitao Yin "Multimodal image fusion with joint sparsity model," Optical Engineering 50(6), 067007 (1 June 2011).
Published: 1 June 2011