Robust night target tracking via infrared and visible video fusion (17 September 2018)
Night target tracking often fails for various reasons, including insufficient light, appearance change, motion blur, illumination variation, and deformation. Because infrared (IR) and visible video data provide complementary information that can be exploited efficiently, we explore a novel framework that combines correlation filter-based visible tracking with Markov chain Monte Carlo (MCMC)-based IR tracking to overcome these challenges. In this framework, the two types of video are asynchronous, and the frame rate of the visible video is several times higher than that of the IR video. The visible video is first used for location and scale estimation by solving a ridge regression problem efficiently in the correlation filter domain. When an IR frame is recorded, we use a specially designed shape context feature descriptor to estimate the best location and scale of the IR target with an MCMC particle filter. We then apply candidate-region location-scale fusion rules for the final tracking update. In addition, we build an accurately labeled IR and visible target tracking dataset for our experiments. The results show that our approach outperforms state-of-the-art trackers for night target tracking and significantly improves re-tracking performance when drift occurs.
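The correlation-filter step mentioned in the abstract rests on a standard result: for a circulant sample matrix built from cyclic shifts of a patch, the ridge regression solution diagonalizes in the Fourier domain, so training and detection reduce to element-wise operations on FFTs. The abstract does not give the authors' exact formulation, so the following is only a minimal single-channel sketch of that generic technique (in the spirit of MOSSE/KCF-style linear filters); the function names and the regularization value are illustrative, not from the paper.

```python
import numpy as np

def train_correlation_filter(x, y, lam=1e-4):
    """Solve ridge regression over all cyclic shifts of patch x.

    In the Fourier domain the closed-form solution is
        W = conj(X) * Y / (conj(X) * X + lam),
    where X, Y are the 2-D DFTs of the patch and the desired response.
    """
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def detect(W, z):
    """Correlate the learned filter with a new patch z; return the
    location of the response peak, i.e. the estimated translation."""
    response = np.real(np.fft.ifft2(W * np.fft.fft2(z)))
    return np.unravel_index(np.argmax(response), response.shape)

# Toy usage: train on a random patch with a delta response at (0, 0),
# then localize a circularly shifted copy of the same patch.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))
y = np.zeros((32, 32))
y[0, 0] = 1.0                       # desired peak at the origin
W = train_correlation_filter(x, y)
z = np.roll(np.roll(x, 5, axis=0), 3, axis=1)  # shift by (5, 3)
print(detect(W, z))                 # recovers the applied shift: (5, 3)
```

Because every step is element-wise in the frequency domain, training and detection each cost only a few FFTs, which is why correlation filters can run at the high frame rates the visible stream requires.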
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Keyan Ren, Xiao Zhang, Yu Han, and Yibin Hou "Robust night target tracking via infrared and visible video fusion", Proc. SPIE 10752, Applications of Digital Image Processing XLI, 1075206 (17 September 2018);

