A performance metric for infrared and visible image fusion is proposed based on Weber's law. Two Weber components are defined to characterize the stimulus of the source images: differential excitation, which reflects the spectral signal of the visible and infrared images, and orientation, which captures the structural features of the scene. By comparing the corresponding Weber components of the infrared and visible images, each source pixel is labeled with its dominant property, intensity or structure. Pixels sharing the same dominant-property label are grouped, and the mutual information (MI) between the dominant source image and the fused image is computed on the corresponding Weber component within each group. The final fusion metric is then obtained by weighting the group-wise MI values by the number of pixels in each group. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms existing fusion metrics.
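The pipeline above (Weber components, dominance labeling, group-wise MI weighted by group size) can be sketched as follows. This is a minimal illustrative reading, not the paper's implementation: the 3x3 excitation window, the magnitude-based dominance rule, the histogram MI estimator, and all function names are assumptions.

```python
import numpy as np

def differential_excitation(img):
    """Weber differential excitation: arctan of the summed relative
    intensity difference in a 3x3 neighbourhood (assumed window size)."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    diff = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                diff += pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] - img
    return np.arctan(diff / (img + 1e-6))

def orientation(img):
    """Weber orientation: local gradient direction."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.arctan2(gy, gx)

def mutual_information(a, b, bins=32):
    """Histogram-based MI estimate between two component maps."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def weber_fusion_metric(ir, vis, fused, bins=32):
    """Group pixels by dominant source per Weber component, then weight
    group-wise MI with the fused image by group size."""
    score = 0.0
    for comp in (differential_excitation, orientation):
        c_ir, c_vis, c_f = comp(ir), comp(vis), comp(fused)
        # Assumed dominance rule: the source whose component has the
        # larger magnitude dominates at that pixel.
        ir_dom = np.abs(c_ir) >= np.abs(c_vis)
        n = ir_dom.size
        for src, mask in ((c_ir, ir_dom), (c_vis, ~ir_dom)):
            if mask.any():
                score += (mask.sum() / n) * mutual_information(src[mask], c_f[mask], bins)
    return score / 2.0  # average over the two Weber components
```

A higher score indicates that the fused image preserves more of the intensity and structure information carried by whichever source dominates at each pixel.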
In this paper, we propose a vision-based target detection scheme built on video saliency to find rescue targets automatically. A new definition of video saliency is first proposed to characterize target properties based on the structural similarity among adjacent video frames. A cascaded target detection scheme is then devised that combines local image features with regional video saliency: salient objects in the aerial images serve as basic candidates, and false candidates are filtered out using region-based video saliency. The proposed method can improve the search and rescue of targets and reduce the economic losses caused by marine accidents.
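The two-stage cascade described above can be sketched as follows. Everything here is an illustrative assumption rather than the paper's algorithm: the block-based candidate detector, the single-window SSIM used as the inter-frame structural-similarity measure, the rule that persistent regions are kept while transient clutter (e.g. waves or glints) is discarded, and all thresholds and names.

```python
import numpy as np

def region_ssim(a, b, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two equally sized regions."""
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return (((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2)))

def frame_candidates(frame, block=8, k=1.5):
    """Stage 1 (local image feature): a block is a salient candidate if
    its mean intensity deviates from the global mean by more than k
    standard deviations (assumed saliency criterion)."""
    h, w = frame.shape
    mu, sd = frame.mean(), frame.std() + 1e-9
    boxes = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if abs(frame[y:y + block, x:x + block].mean() - mu) > k * sd:
                boxes.append((y, x, block, block))
    return boxes

def saliency_filter(boxes, frame, prev_frame, ssim_thresh=0.9):
    """Stage 2 (region-based video saliency): keep a candidate only if
    its region is structurally consistent with the previous frame,
    discarding transient false candidates."""
    kept = []
    for (y, x, bh, bw) in boxes:
        cur = frame[y:y + bh, x:x + bw]
        prev = prev_frame[y:y + bh, x:x + bw]
        if region_ssim(cur, prev) >= ssim_thresh:
            kept.append((y, x, bh, bw))
    return kept
```

The design choice sketched here is that real floating targets persist across adjacent frames while sea-surface clutter changes rapidly, so inter-frame structural similarity separates the two; the actual saliency definition in the paper may differ.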