Multisensor image fusion using multiresolution analysis and pixel-level weights
Abstract
The goal of image fusion is to create new images that are better suited to human visual perception, object detection, and target recognition. For Automatic Target Recognition (ATR), multi-sensor data including visible and infrared images can be used to increase the recognition rate. In this paper, we propose a new multiresolution data fusion scheme based on a Daubechies Wavelet Basis (DWB) and pixel-level weights, including thermal weights and visual weights. We use multiresolution decompositions to represent the input images at different scales and present a multiresolution/multimodal segmentation to partition the image domain at these scales. The crucial idea is to use this segmentation to guide the fusion process. Physical thermal weights and perceptive visual weights serve as the segmentation modalities, and Daubechies wavelets (at different levels) are chosen as the wavelet basis. Experimental results confirm that the proposed algorithm gives the best image sharpening among the compared methods and best preserves the spectral information of the original infrared image. The proposed technique also outperforms the other methods in the literature, proving more robust and effective in terms of both subjective visual effects and objective statistical analysis.
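
Since the full paper is not reproduced here, the following is only a minimal sketch of the general wavelet-domain fusion idea the abstract describes, written in Python with NumPy and PyWavelets. The weight maps, the fusion rules, and all function names below are illustrative assumptions for a generic multiresolution fusion pipeline, not the authors' actual thermal/visual weighting or segmentation-guided method.

import numpy as np
import pywt


def fuse_images(visible, infrared, wavelet="db4", level=3):
    """Fuse two registered, same-size grayscale images in the wavelet domain.

    Approximation bands are blended with per-pixel weights derived from local
    energy (a stand-in for the paper's thermal/visual weights); detail bands
    are fused by keeping the coefficient with the larger magnitude.
    """
    cv = pywt.wavedec2(visible.astype(float), wavelet, level=level)
    ci = pywt.wavedec2(infrared.astype(float), wavelet, level=level)

    fused = []
    # Coarse approximation band: weighted average with hypothetical
    # pixel-level weights.
    w = _activity_weight(cv[0], ci[0])
    fused.append(w * cv[0] + (1.0 - w) * ci[0])

    # Detail bands at every scale: choose the coefficient with the larger
    # absolute value at each position (a common maximum-selection rule).
    for (hv, vv, dv), (hi, vi, di) in zip(cv[1:], ci[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in ((hv, hi), (vv, vi), (dv, di))
        ))

    return pywt.waverec2(fused, wavelet)


def _activity_weight(a, b, eps=1e-6):
    """Hypothetical per-pixel weight favouring the band with higher local energy."""
    ea, eb = a ** 2, b ** 2
    return ea / (ea + eb + eps)

In this sketch the pixel-level weighting acts only on the coarse approximation band, while detail bands use a maximum-absolute-value rule; the paper's scheme instead applies its thermal and visual weights through a segmentation-guided fusion process at each scale.
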
© (2005) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jin Wu, Ya Qiu, Jian Liu, Jinwen Tian, "Multisensor image fusion using multiresolution analysis and pixel-level weights", Proc. SPIE 5637, Electronic Imaging and Multimedia Technology IV, (8 February 2005); doi: 10.1117/12.576851; https://doi.org/10.1117/12.576851

