Infrared and visible image fusion aims to obtain an integrated image that contains more recognizable information. To attain this goal, an effective infrared and visible image fusion algorithm based on multi-level co-occurrence filtering is proposed in this paper. Firstly, the input images are decomposed into three layers through a co-occurrence filtering decomposition model. Secondly, a gradient-domain-based pulse-coupled neural network (PCNN) fusion strategy is applied to the three layers. Finally, the fused image is reconstructed from the three fused layers. Experiments show that the proposed algorithm outperforms most state-of-the-art algorithms in both qualitative and quantitative measures.
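The gradient-domain PCNN strategy is not detailed in the abstract; as a minimal sketch of the underlying mechanism, a simplified PCNN can be run on each layer and its cumulative firing map used as the activity measure deciding which source contributes each pixel. All parameter values (`beta`, `alpha_theta`, `V_theta`, the iteration count) are illustrative assumptions, and the 8-neighbour linking uses circular shifts for brevity:

```python
import numpy as np

def pcnn_firing_times(S, beta=0.2, alpha_theta=0.2, V_theta=20.0, iters=30):
    """Simplified pulse-coupled neural network: returns per-pixel cumulative
    firing counts, commonly used as an activity measure for fusion."""
    S = (S - S.min()) / (S.max() - S.min() + 1e-12)  # normalize stimulus to [0, 1]
    Y = np.zeros_like(S, dtype=float)                # firing state
    theta = np.ones_like(Y)                          # dynamic threshold
    fire_count = np.zeros_like(Y)
    for _ in range(iters):
        # linking input: sum of the 8-neighbour firings
        # (circular shifts -- a boundary simplification)
        L = sum(np.roll(np.roll(Y, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        U = S * (1.0 + beta * L)                     # modulated internal activity
        Y = (U > theta).astype(float)                # pulse generation
        theta = np.exp(-alpha_theta) * theta + V_theta * Y  # decay / jump on firing
        fire_count += Y
    return fire_count
```

A fusion rule would then take, per pixel, the layer value from whichever source image has the larger firing count.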
With the development of sensor technologies, imaging technology is advancing rapidly, and image processing has consequently found widespread use in many kinds of applications, such as video surveillance, medical diagnosis, remote sensing detection, and object tracking. As a sub-field of image processing, image fusion is one of the most studied technologies. The aim of image fusion is to acquire an integrated image that contains more information. Such an integrated image is more conducive for a human or a machine to understand and to mine the information contained in the image. Among all kinds of image fusion, infrared (IR) and visible (VIS) image fusion is one of the most valuable multisource fusion tasks. When the same scene is imaged by both IR and VIS imaging systems, more information can be obtained, but more redundant information is also generated. The IR sensor acquires the thermal radiation of objects in a scene, so objects can be detected even when lighting conditions are poor. The image acquired by a VIS light sensor has more spectral information, clearer texture details, and higher spatial resolution. Thus, the scene can be described more completely by integrating the IR and VIS images into one image; meanwhile, the scene can be readily understood by observers, and its information can be easily perceived. In this paper, an effective IR and VIS image fusion algorithm via the non-subsampled shearlet transform (NSST) and a pulse-coupled neural network (PCNN) in the multi-scale morphological gradient (MSMG) domain is proposed. First, the low-frequency sub-image and high-frequency sub-images are obtained through the NSST. Then, the low-frequency and high-frequency sub-images are fused via an MSMG-domain PCNN (MSMG-PCNN) strategy. Finally, the fused image is reconstructed by the inverse NSST.
Experimental results demonstrate that the proposed MSMG-PCNN-NSST algorithm performs effectively in most cases under both qualitative and quantitative evaluation.
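The MSMG operator itself has a compact form: a weighted sum of morphological gradients (dilation minus erosion) computed with structuring elements of growing size. A minimal NumPy sketch, assuming square structuring elements and 1/(2k+1) scale weights (both illustrative choices, not necessarily those of the paper):

```python
import numpy as np

def _dilate(img, r):
    """Grey-scale dilation with a (2r+1) x (2r+1) square structuring element."""
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    out = img.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = np.maximum(out, p[r + dy:r + dy + h, r + dx:r + dx + w])
    return out

def _erode(img, r):
    """Grey-scale erosion, expressed by duality with dilation."""
    return -_dilate(-img, r)

def msmg(img, scales=3):
    """Multi-scale morphological gradient: weighted sum of (dilation - erosion)
    over structuring elements of increasing radius."""
    img = img.astype(float)
    g = np.zeros_like(img)
    for k in range(1, scales + 1):
        w = 1.0 / (2 * k + 1)  # smaller weight for larger scales
        g += w * (_dilate(img, k) - _erode(img, k))
    return g
```

The resulting gradient map responds strongly at edges and can serve as the stimulus or linking input of the PCNN in the fusion strategy.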
To overcome the high manufacturing cost and the large volume and weight limitations of large-aperture systems, one solution is to arrange multiple sub-aperture optical systems according to certain spatial rules; the stacked image of the sub-aperture beams on the focal plane is then equivalent to that of a large-aperture optical system. However, owing to the discretized pupil distribution of a sparse-aperture optical system, the signal-to-noise ratio of the image is reduced, the modulation transfer function decreases at mid-band spatial frequencies, and optical system errors increase. To address the poor imaging quality of sparse-aperture optical systems, this article proposes a restoration algorithm based on an improved Wiener filter and the optimization of adjacent frames, which yields restored video images with higher definition. The algorithm mainly contains four aspects: analysis of the image degradation process, establishment of the image restoration model, evaluation of the restored image's definition, and optimization of adjacent frame images. Firstly, synthesizing the effects of atmospheric transmission and the array structure on image degradation, an image degradation model is constructed and the degradation function under this model is calculated. Then, the restoration model based on the Wiener filter is established and improved. Moreover, a no-reference definition evaluation factor is built to measure the quality of the restored image. Finally, a mapping relation between the adaptive constant K in the Wiener filter and the definition evaluation factor is constructed to continually optimize the restoration quality. In high-altitude reconnaissance, remote sensing imaging, and other fields, cameras are required to have very high resolution, so the algorithm in this article has great research value.
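The interplay between the Wiener filter's constant K and a definition score can be sketched as follows. Here K plays its usual role as an approximation of the noise-to-signal power ratio, and a simple mean-gradient score stands in for the paper's definition evaluation factor; the function names and the candidate-K grid are illustrative assumptions:

```python
import numpy as np

def wiener_restore(degraded, psf, K=0.01):
    """Frequency-domain Wiener restoration:
    F_hat = conj(H) / (|H|^2 + K) * G, with K ~ noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=degraded.shape)  # zero-padded transfer function
    G = np.fft.fft2(degraded)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))

def sharpness(img):
    """No-reference definition score: mean gradient magnitude (a simple
    stand-in for the definition evaluation factor described above)."""
    gy, gx = np.gradient(img)
    return float(np.mean(np.hypot(gx, gy)))

def restore_with_adaptive_K(degraded, psf, Ks=(1e-4, 1e-3, 1e-2, 1e-1)):
    """Pick the K whose restoration maximizes the definition score."""
    return max((wiener_restore(degraded, psf, K) for K in Ks), key=sharpness)
```

In a video setting, the score of each restored frame could further guide the choice of K for adjacent frames, mirroring the frame-to-frame optimization described above.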
Both common information and unique information are contained in infrared polarization (IRP) images and infrared intensity (IRI) images. To address two disadvantages that arise during the fusion of IRP and IRI images, (1) loss of detail information and (2) poor discrimination of fused image information, a method based on multi-scale sparse representation and a pulse-coupled neural network is proposed. The method combines non-local means (NLM) filtering with the sparse representation of images and an adaptive pulse-coupled neural network (PCNN). Firstly, the non-local means filter is used to obtain image information from the source images at different scales. Secondly, a non-subsampled directional filter bank (NSDFB) is used to decompose the high-frequency information at each scale into multiple high-frequency direction sub-bands. For these sub-bands, the spatial frequency (SF) transformation is first performed, and the PCNN is then used to fuse the high-frequency sub-bands according to their significance, where the link strength of the PCNN is adaptively adjusted by the region variance. Then, the joint matrix composed of the low-frequency components is trained by the K-singular value decomposition (K-SVD) method to obtain a redundant dictionary. Common information and unique information are distinguished by the positions of the non-zero values in the sparse coefficients and are fused with different methods. Finally, the fused high- and low-frequency sub-bands are inversely transformed to obtain the fused image. Experimental results demonstrate that the proposed fusion algorithm can not only highlight the common information of the source images but also retain their unique information; meanwhile, the fused image has higher contrast and richer detail information.
In addition, the fused image performs well in terms of average gradient (AG), edge intensity (EI), information entropy (IE), standard deviation (STD), spatial frequency (SF) and image definition (IDEF).
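Several of the metrics above, as well as the SF transformation applied to the high-frequency sub-bands, rest on the spatial frequency measure, which combines the row- and column-wise first-difference energy of an image. A minimal sketch (here normalized over the difference arrays, a common variant that differs from the textbook 1/MN normalization only at the boundary):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency (SF): root of combined row-frequency and
    column-frequency energy; higher SF indicates richer detail/activity."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.hypot(rf, cf))
```

A flat region has SF = 0, while textured or focused regions score high, which is why SF works both as a fusion activity measure and as a quality metric.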
Pixel-level image fusion, which is widely used in remote sensing, medical imaging, surveillance, and other fields, directly combines the original information in the source images. As a pixel-level method, multi-focus image fusion is designed to combine partially focused images into one fully fused image, which is expected to be more informative for human or machine perception. To achieve this purpose, an algorithm using a spatial frequency (SF) measure and the discrete wavelet transform (DWT) for multi-focus image fusion is proposed. In this work, the source images are decomposed into low-frequency and high-frequency components using the DWT. Then the spatial frequency of the low-frequency components is calculated and used to identify the focused regions, followed by morphological and median filtering, which yields the fused low-frequency component. The high-frequency components are fused using a traditional method. Finally, the fused image is obtained by the inverse discrete wavelet transform. For comparison, the proposed algorithm is evaluated against several existing fusion algorithms in qualitative and quantitative ways. Experimental results demonstrate that our method is competitive with, or even outperforms, the methods in comparison.
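The decompose-fuse-reconstruct pipeline above can be sketched end to end with a one-level Haar wavelet (a simple stand-in for whichever DWT basis the paper uses). The low band is fused by a per-pixel local-activity rule approximating the spatial-frequency comparison, and the high bands by the common max-absolute rule; apart from the overall pipeline structure, every choice here is an illustrative assumption:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform on an even-sized image.
    Returns (LL, (LH, HL, HH))."""
    a = (x[0::2] + x[1::2]) / 2.0  # rows: low-pass
    d = (x[0::2] - x[1::2]) / 2.0  # rows: high-pass
    LL, LH = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL, HH = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Exact inverse of haar_dwt2."""
    LH, HL, HH = bands
    a = np.zeros((LL.shape[0], LL.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.zeros((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse_multifocus(img1, img2):
    """Toy DWT fusion: low band chosen by local activity (absolute gradient,
    a stand-in for the spatial-frequency rule), high bands by max-abs."""
    LL1, hi1 = haar_dwt2(img1.astype(float))
    LL2, hi2 = haar_dwt2(img2.astype(float))
    act1 = np.abs(np.gradient(LL1)[0]) + np.abs(np.gradient(LL1)[1])
    act2 = np.abs(np.gradient(LL2)[0]) + np.abs(np.gradient(LL2)[1])
    LLf = np.where(act1 >= act2, LL1, LL2)
    hif = tuple(np.where(np.abs(h1) >= np.abs(h2), h1, h2)
                for h1, h2 in zip(hi1, hi2))
    return haar_idwt2(LLf, hif)
```

The paper additionally cleans the focus decision map with morphological and median filters before recombination, a step omitted here for brevity.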
The traditional histogram equalization method often leads to gray-level reduction and loss of details. In this paper, an efficient and self-adaptive image enhancement algorithm based on the Canny operator and histogram equalization is proposed. The Canny operator is used to extract the detail information to be preserved in the enhanced image, so that the shortcomings of histogram equalization can be overcome. Experimental results on infrared images show that our method can preserve more image details, improve the image contrast, and suppress noise effectively, which indicates better performance for infrared image enhancement.
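As context for the shortcoming being addressed, global histogram equalization reduces to a lookup table built from the normalized CDF of the intensity histogram, which can merge gray levels and wash out fine detail. A minimal sketch, with a simple high-pass detail layer added back as an illustrative stand-in for the Canny-guided detail preservation (the averaging kernel and weight are assumptions):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization via the CDF of the 8-bit histogram."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)  # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)                 # gray-level mapping
    return lut[img]

def enhance_preserving_detail(img, weight=0.5):
    """Equalize, then add back a high-pass detail layer so equalization does
    not flatten fine structure (a stand-in for the Canny-guided step)."""
    img = img.astype(float)
    # cross-shaped local mean (circular shifts at the border, a simplification)
    blur = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    detail = img - blur  # high-frequency detail layer
    eq = hist_equalize(np.clip(img, 0, 255).astype(np.uint8)).astype(float)
    return np.clip(eq + weight * detail, 0, 255)
```

The paper's version extracts the detail layer with the Canny operator instead of a fixed high-pass residual, which makes the preservation edge-aware and self-adaptive.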