We present an efficient approach for measuring similarity between visible and IR images based on matching internal self-similarities. What is correlated across the two images is the internal layout of local self-similarities at multiple scales, even in the presence of geometric distortions. These internal self-similarities are efficiently captured by a compact local "self-similarity descriptor". We compare our measure against the commonly used SURF. Experimental results show that the proposed algorithm achieves rotation invariance, scale invariance, and robustness to occlusion, and matches shapes between IR and visible images efficiently and correctly.
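As a rough illustration of the idea, the sketch below computes a local self-similarity descriptor in the spirit of Shechtman and Irani: a small patch is correlated (via SSD) with every patch in its surrounding region, and the resulting correlation surface is max-pooled into log-polar bins. The patch/region sizes and bin counts here are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def self_similarity_descriptor(img, cy, cx, patch=5, region=41, n_rad=4, n_ang=8):
    """Sketch of a local self-similarity descriptor: correlate the patch at
    (cy, cx) with every patch in its surrounding region, then max-pool the
    correlation surface into log-polar bins. Parameters are assumptions."""
    r, p = region // 2, patch // 2
    center = img[cy - p:cy + p + 1, cx - p:cx + p + 1].astype(float)
    ssd = np.full((region, region), np.inf)
    for dy in range(-r + p, r - p + 1):
        for dx in range(-r + p, r - p + 1):
            y, x = cy + dy, cx + dx
            cand = img[y - p:y + p + 1, x - p:x + p + 1].astype(float)
            ssd[dy + r, dx + r] = np.sum((center - cand) ** 2)
    # normalize SSD into a correlation surface in (0, 1]
    var_noise = max(ssd[np.isfinite(ssd)].std(), 1.0)
    corr = np.exp(-ssd / var_noise)
    corr[~np.isfinite(ssd)] = 0.0
    # max-pool into log-polar bins -> compact descriptor
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    rad, ang = np.hypot(ys, xs), np.arctan2(ys, xs)
    rad_edges = np.logspace(0, np.log10(r + 1), n_rad + 1)
    desc = np.zeros(n_rad * n_ang)
    for i in range(n_rad):
        for j in range(n_ang):
            mask = ((rad >= rad_edges[i]) & (rad < rad_edges[i + 1]) &
                    (ang >= -np.pi + j * 2 * np.pi / n_ang) &
                    (ang < -np.pi + (j + 1) * 2 * np.pi / n_ang))
            if mask.any():
                desc[i * n_ang + j] = corr[mask].max()
    # linear stretch to [0, 1] so descriptors are comparable across points
    if desc.max() > desc.min():
        desc = (desc - desc.min()) / (desc.max() - desc.min())
    return desc
```

Because the descriptor records only where self-similar structure lies around each point, not raw intensities, it can be compared directly between visible and IR imagery.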
This paper proposes an automatic method for matching IR and visible images without any assumption about initial alignment. We detail our interest-region extraction method for optical images as well as the efficient region-matching component. An improved shape context descriptor is constructed. The algorithm introduces uniform patterns to reduce the number of extracted components to 20 while preserving rotation invariance. Experiments on several IR and visible image pairs illustrate the effectiveness of the proposed method even under considerable geometric distortion. Even for images taken at different times and under adverse weather conditions, it achieves a higher average correct-matching ratio than SURF, better robustness, and high efficiency with a short running time.
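For intuition on the uniform-pattern reduction, the sketch below shows the classical rotation-invariant uniform (riu2) labeling from LBP: with P neighbors, the 2^P raw binary patterns collapse to P + 2 rotation-invariant labels. This uses P = 8 (10 labels) purely as an illustration; the paper's 20-component descriptor presumably uses a different neighborhood or pattern set, which is not specified here.

```python
import numpy as np

def lbp_riu2(img):
    """Illustrative rotation-invariant uniform pattern labeling (riu2 LBP,
    P = 8 neighbors): each pixel's 256 possible binary neighbor patterns
    collapse to P + 2 = 10 rotation-invariant labels."""
    P = 8
    # 8-neighborhood offsets, traversed in circular order
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    labels = np.zeros((h - 2, w - 2), dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            bits = [int(img[y + dy, x + dx] >= img[y, x]) for dy, dx in offs]
            # a pattern is 'uniform' if it has at most two 0->1 / 1->0 transitions
            trans = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
            # uniform patterns keep their bit count; the rest share one label
            labels[y - 1, x - 1] = sum(bits) if trans <= 2 else P + 1
    return labels
```

Because the label depends only on the number of set bits in uniform patterns, rotating the image permutes the neighbor bits circularly without changing the label, which is the source of the rotation invariance.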