The Scale-Invariant Feature Transform (SIFT) algorithm has been widely used for its excellent stability under rotation, scale, and affine transformation, and the local SIFT descriptor offers excellent accuracy and robustness. However, it is computed only from gray-scale intensity and ignores the overall color information of the image, so it performs poorly on images with rich color detail. In this paper we propose an optimized SIFT algorithm that shows superior performance in feature extraction and matching. RGB color space normalization is first applied to eliminate the effects of illumination position and intensity on the image. We then propose a novel similarity retrieval method that applies a K-nearest-neighbor search strategy, built on a K-D tree (k-dimensional tree), to the key points extracted from the normalized color space. The key points of the RGB channels are filtered and combined efficiently. Experimental results demonstrate that the optimized algorithm clearly outperforms the original SIFT algorithm in matching: the average matching accuracy on the test samples is 87.05%, an average increase of 18.21%.
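The abstract does not give the exact normalization or matching formulas, so the following is only a minimal sketch of the two ingredients it names: chromaticity-style RGB normalization (which cancels a global illumination-intensity factor) and K-nearest-neighbor descriptor matching over a K-D tree with Lowe's ratio test. The function names and the ratio threshold are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.spatial import cKDTree

def normalize_rgb(image):
    """Chromaticity normalization: divide each channel by R+G+B.

    Scaling every channel by a constant illumination factor leaves the
    normalized values unchanged, which is the intensity invariance the
    abstract refers to (exact scheme in the paper may differ)."""
    img = image.astype(np.float64)
    s = img.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on pure-black pixels
    return img / s

def knn_match(desc_a, desc_b, ratio=0.8):
    """Match descriptor sets with a K-D tree and a ratio test.

    For each descriptor in desc_a, query its two nearest neighbors in
    desc_b; accept the match only if the nearest is clearly closer than
    the second nearest (ratio threshold is an assumed value)."""
    tree = cKDTree(desc_b)
    dists, idx = tree.query(desc_a, k=2)
    return [(i, int(idx[i, 0]))
            for i in range(len(desc_a))
            if dists[i, 0] < ratio * dists[i, 1]]
```

A quick sanity check: normalizing an image and a uniformly brightened copy of it gives identical results, and near-duplicate descriptors match back to themselves.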
Image fusion aims to produce a single fused image that contains all the important information from source images of the same scene. Multi-scale transforms and sparse representation (SR) are the two most effective techniques for image fusion. However, SR-based fusion methods are time-consuming and do not take the structural information of the source images into account, while the various multi-scale-transform-based methods each have their own unresolved defects. In this paper, we therefore propose a new image fusion method that combines the nonsubsampled contourlet transform (NSCT) with SR. A decision map for the low-frequency coefficients is constructed from the high-frequency coefficients to overcome these problems; it also reduces the computational cost of the fusion algorithm while retaining as much useful information from the source images as possible. Compared with conventional multi-scale-transform-based methods and SR-based methods using a fixed or learned dictionary, the proposed method achieves better fusion performance on medical images.
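NSCT implementations are not part of the standard Python scientific stack, so the sketch below illustrates only the decision-map idea the abstract describes, using a simple two-band stand-in (Gaussian low-pass plus residual detail) instead of NSCT: local high-frequency energy decides which source supplies each low-frequency coefficient. Function names, window size, and sigma are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def fuse_two_band(a, b, sigma=2.0, win=7):
    """Fuse two images using a high-frequency-driven decision map.

    Two-band stand-in for a multi-scale transform: low = Gaussian
    blur, high = residual detail (sigma/win are illustrative)."""
    low_a, low_b = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
    high_a, high_b = a - low_a, b - low_b
    # Local high-frequency energy decides, per pixel, which image's
    # low-frequency coefficient to keep -- the decision-map idea.
    e_a = uniform_filter(high_a ** 2, win)
    e_b = uniform_filter(high_b ** 2, win)
    low = np.where(e_a >= e_b, low_a, low_b)
    # High-frequency band: conventional absolute-max selection.
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low + high
```

With one textured source and one flat source, the decision map selects the textured source everywhere, so the fused result reproduces it exactly.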
Image fusion has been widely used in medicine, computer vision, and other fields. However, traditional image fusion systems based on PCs, FPGAs, and DSPs cannot satisfy the requirements of portability, low power consumption, and low cost. The Raspberry Pi is a new type of ARM-based microcomputer; compared with traditional image fusion systems, its volume, price, and power consumption are very low. With the Raspberry Pi as the core, together with the dedicated Raspberry Pi camera, a router, a PC, mouse and keyboard hardware, C++ and OpenCV software, and the Yeelink cloud platform, an innovative image fusion system can be built that meets the requirements of small volume, low power, and low price. Yeelink is a new Internet-of-Things platform that provides sensor data access, storage, and display services, so the end user can observe the required information in real time over a local area network. The nonsubsampled contourlet transform (NSCT) is multi-scale, multi-directional, and multi-resolution, and has good shift invariance. Because of its downsampling steps, the traditional contourlet transform causes the Gibbs phenomenon; NSCT overcomes this disadvantage and obtains a better fused image. This paper makes full use of the characteristics of the Raspberry Pi and Yeelink to construct a new image fusion and scene monitoring system; images are processed with wavelet, contourlet, and NSCT algorithms, and the results are analyzed. The new system has great research and application value.
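Of the three transforms this abstract compares, the wavelet case is the easiest to sketch in a self-contained way. The following is a minimal one-level Haar-wavelet fusion, assuming even-sized gray-scale inputs and the common rules of averaging the approximation band and taking the absolute maximum in the detail bands; it is an illustration of wavelet fusion in general, not the paper's implementation (which the abstract says runs in C++/OpenCV on the Raspberry Pi).

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar decomposition (even-sized input assumed)."""
    a = (x[0::2] + x[1::2]) / 2          # row-wise average
    d = (x[0::2] - x[1::2]) / 2          # row-wise detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2   # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def haar_fuse(img1, img2):
    """Average the approximation band, pick max-magnitude details."""
    c1, c2 = haar2d(img1), haar2d(img2)
    fused = [(c1[0] + c2[0]) / 2] + [
        np.where(np.abs(b1) >= np.abs(b2), b1, b2)
        for b1, b2 in zip(c1[1:], c2[1:])
    ]
    return ihaar2d(*fused)
```

Because the transform reconstructs perfectly, fusing an image with itself returns the image unchanged, which is a convenient correctness check.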