Night target tracking often fails for reasons such as insufficient light, appearance change, motion blur, illumination variation, and deformation. Because infrared (IR) and visible video provide complementary information that can be exploited efficiently, we explore a novel framework that combines correlation filter-based visible tracking with Markov chain Monte Carlo (MCMC)-based IR tracking to overcome these challenges. In this framework, the two videos are asynchronous, and the frame rate of the visible video is several times higher than that of the IR video. The visible video is first used for location and scale estimation by efficiently solving a ridge regression problem in the correlation filter domain. For the IR frames, we use a specially designed shape context feature descriptor to estimate the best target location and scale with an MCMC particle filter. We then apply candidate-region location-scale fusion rules for the final tracking update. In addition, we build an accurately labeled IR-visible target tracking dataset for the experiments. The results show that our approach outperforms state-of-the-art trackers for night target tracking and significantly improves re-tracking performance after drift occurs.
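The correlation-filter step mentioned above admits a closed-form frequency-domain solution to the ridge regression. The sketch below is a minimal single-channel illustration in the MOSSE/KCF style, not the authors' exact implementation: all function names and parameters (the Gaussian label width `sigma`, the regularizer `lam`) are illustrative assumptions.

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    # Desired response: a 2-D Gaussian, shifted so its peak sits at (0, 0),
    # which is the convention for circular-correlation outputs.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.roll(g, (-cy, -cx), axis=(0, 1))

def train_filter(patch, label, lam=1e-2):
    # Closed-form ridge regression per frequency bin:
    # W = conj(X) * Y / (conj(X) * X + lambda)
    X = np.fft.fft2(patch)
    Y = np.fft.fft2(label)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def detect(W, patch):
    # Correlate the learned filter with a new patch; the peak of the
    # real-valued response map is the estimated target translation.
    resp = np.real(np.fft.ifft2(W * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

Training and detection each cost only a few FFTs, which is why the visible-channel tracker can run at the higher frame rate; scale estimation (not shown) is typically handled by repeating detection over a small pyramid of resampled patches.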