Multifocus image fusion combines multiple partially focused images of the same scene into a single all-in-focus image, compensating for the limited depth of field of optical lenses. However, many state-of-the-art fusion methods fail to preserve all the significant features of the source images in the fused result. In addition, most of these methods are sensitive to noise, which often leads to distortion and information loss in the fused image. We propose a multifocus image fusion method based on two-scale decomposition and global sparse features. Each source image is decomposed in two ways: gradient-domain-guided image filtering separates it into base and detail layers, and robust principal component analysis separates it into principal and sparse components. Joint decision maps, constructed from the salient features of these layers and sparse components, guide the fusion of the base and detail layers, which are then integrated to form the final fused image. Experimental results demonstrate that the proposed method achieves better fusion performance than other reported methods.
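The pipeline described above, two-scale decomposition followed by decision-map-guided fusion of the base and detail layers, can be sketched in a minimal form. This sketch makes simplifying assumptions not taken from the paper: a box filter stands in for the gradient-domain-guided filter, the local energy of the detail layer stands in for the joint decision maps built from sparse components, and the RPCA step is omitted entirely.

```python
import numpy as np

def box_blur(img, r):
    # Separable-in-spirit box average with edge padding; a crude stand-in
    # (assumption) for the paper's edge-preserving gradient-domain-guided
    # filter, used here only to split each image into base + detail.
    pad = np.pad(img, r, mode="edge")
    k = 2 * r + 1
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_two_scale(a, b, r=4):
    # 1) Two-scale decomposition: low-frequency base + high-frequency detail.
    base_a, base_b = box_blur(a, r), box_blur(b, r)
    det_a, det_b = a - base_a, b - base_b
    # 2) Focus measure: smoothed local energy of the detail layer
    #    (assumption: a proxy for the paper's salient-feature decision maps).
    sal_a = box_blur(det_a ** 2, r)
    sal_b = box_blur(det_b ** 2, r)
    mask = (sal_a >= sal_b).astype(float)
    # 3) Fuse both layers with the same decision map, then recombine.
    base = mask * base_a + (1.0 - mask) * base_b
    det = mask * det_a + (1.0 - mask) * det_b
    return base + det
```

In the paper the decision map is built jointly from the guided-filter layers and the RPCA sparse components, which makes the selection more robust to noise than the plain detail-energy measure used in this sketch.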
Keywords: Image fusion, Image filtering, Discrete wavelet transforms, Visualization, Image quality, Principal component analysis, Image processing