Visible and infrared images captured by unmanned aerial vehicles (UAVs) are complementary, and fusing them can improve the reliability of target detection, recognition, and related tasks. UAV imagery is characterized by high dynamics and complex air-to-ground target backgrounds, so the two different-source images must be registered at the pixel level before fusion. This paper therefore proposes an improved matching algorithm that combines an improved Shi–Tomasi detector with a shape context (SC)-based descriptor. First, the Shi–Tomasi algorithm detects feature points in scale space; the tangential direction of the edge contour on which a feature point lies is taken as its main direction, which guarantees the algorithm's rotational invariance. Then, a block description is constructed for each extracted feature point within the neighborhood of its edge contour to obtain its descriptor. Finally, a fast library for approximate nearest neighbors (FLANN) matching algorithm is adopted to match all feature points. Experimental results show that, in scenes where the shape of the main target is clear, the algorithm achieves good matching and registration results for infrared and visible images that have been transformed by rotation, translation, or scaling.
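The Shi–Tomasi detection step underlying the pipeline can be illustrated with a minimal NumPy sketch of the corner response: for each pixel, form the local structure tensor from image gradients and keep the smaller eigenvalue min(λ1, λ2), which is the Shi–Tomasi criterion. This is only an illustration under simplifying assumptions (central differences, a plain box window, no scale space, no edge-tangent orientation assignment); the function name and parameters are hypothetical, not from the paper.

```python
import numpy as np

def shi_tomasi_response(img, window=3):
    """Shi-Tomasi corner response min(lambda1, lambda2) of the local
    structure tensor at every pixel. Minimal sketch: central-difference
    gradients and a box window; the paper's scale-space detection and
    main-direction assignment are omitted."""
    img = img.astype(np.float64)
    # Image gradients (axis 0 = y, axis 1 = x)
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    # Sum gradient products over a local window (box filter via shifts)
    k = window // 2
    def box(a):
        out = np.zeros_like(a)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)

    # Eigenvalues of [[Sxx, Sxy], [Sxy, Syy]]; keep the smaller one
    tr = Sxx + Syy
    det = Sxx * Syy - Sxy * Sxy
    disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 - disc
```

On a synthetic step-corner image, the response is large at the corner, near zero on straight edges (where one eigenvalue vanishes), and zero in flat regions, which is why thresholding this map yields corner-like feature points rather than edge points.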