In this paper, we propose a robust visual tracking algorithm based on online learning of a joint sparse dictionary. The joint sparse dictionary consists of positive and negative sub-dictionaries, which model the foreground object and the background, respectively. An online dictionary learning method is developed to update the joint sparse dictionary by selecting positive and negative bases from bags of positive and negative image patches/templates collected during tracking. A linear classifier is trained on the sparse coefficients of image patches in the current frame, computed with respect to the joint sparse dictionary, and is then used to locate the target in the next frame. Experimental results show that our tracking method is robust against object variation, occlusion, and illumination change.
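The classification step described above can be sketched minimally as follows, assuming a joint dictionary D = [D_pos | D_neg] of unit-norm atoms. The greedy matching-pursuit encoder and the plain positive-minus-negative linear score used here are simplified stand-ins for the paper's learned sparse coder and trained classifier; all sizes and names are illustrative assumptions.

```python
import numpy as np

def sparse_code(x, D, n_nonzero=3):
    """Greedy matching pursuit: approximate x with a few dictionary atoms."""
    residual = x.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        corr = D.T @ residual               # correlation with each atom
        k = np.argmax(np.abs(corr))         # best-matching atom
        coef[k] += corr[k]
        residual = residual - corr[k] * D[:, k]  # remove its contribution
    return coef

rng = np.random.default_rng(0)
# Hypothetical joint dictionary: 8 positive (foreground) + 8 negative atoms.
D_pos = rng.normal(size=(32, 8))
D_neg = rng.normal(size=(32, 8))
D = np.hstack([D_pos, D_neg])
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms

# A patch synthesized from positive atoms should load on the positive block.
patch = 2.0 * D_pos[:, 0] + 1.0 * D_pos[:, 3]
c = sparse_code(patch / np.linalg.norm(patch), D)

# Toy linear score: total positive mass minus total negative mass
# (a stand-in for the trained linear classifier in the paper).
score = np.abs(c[:8]).sum() - np.abs(c[8:]).sum()
print(score > 0)  # a foreground-like patch scores positive
```

During tracking, patches sampled around the predicted location would each be encoded this way and the patch with the highest classifier score taken as the new target position.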
This paper presents a robust method for finding correct SIFT keypoint matches with an adaptive distance ratio threshold. First, the reference image is analyzed by extracting characteristics of its SIFT keypoints, such as their distance to the object boundary and the number of neighboring keypoints; the matching credit of each keypoint is evaluated from these characteristics. Second, an adaptive distance ratio threshold is determined for each keypoint based on its matching credit and used to decide whether its best match in the source image is correct. The adaptive threshold loosens the matching conditions for keypoints with high matching credit and tightens them for keypoints with low matching credit. Our approach improves SIFT keypoint matching by applying an adaptive distance ratio threshold rather than a global threshold, which ignores the differing matching credits of individual keypoints. Experimental results show that our algorithm outperforms the standard SIFT matching method in complicated object recognition cases, discarding more false matches while preserving more correct ones.
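The adaptive ratio test can be sketched as below. This is a hedged illustration, not the paper's exact rule: the linear interpolation of the threshold between `t_low` and `t_high` from a credit value in [0, 1], and the synthetic descriptors, are assumptions made for the example; a real pipeline would use actual SIFT descriptors and the paper's credit evaluation.

```python
import numpy as np

def adaptive_ratio_match(desc_ref, desc_src, credits, t_low=0.6, t_high=0.8):
    """Lowe-style distance ratio test with a per-keypoint threshold.

    credits: one value in [0, 1] per reference keypoint; high-credit
    keypoints get a looser (larger) ratio threshold, low-credit ones a
    tighter (smaller) one. The interpolation rule is an assumption.
    """
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_src - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        thresh = t_low + (t_high - t_low) * credits[i]
        if best < thresh * second:          # adaptive distance ratio test
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(1)
desc_src = rng.normal(size=(20, 16))        # synthetic "source" descriptors
# Reference descriptors: noisy copies of two source descriptors, so their
# true matches are source keypoints 2 and 7.
desc_ref = desc_src[[2, 7]] + 0.01 * rng.normal(size=(2, 16))
credits = np.array([0.9, 0.1])              # hypothetical matching credits
matches = adaptive_ratio_match(desc_ref, desc_src, credits)
print(matches)  # both true correspondences pass the test
```

Compared with a single global threshold, the per-keypoint threshold lets a distinctive, high-credit keypoint keep a borderline match that a fixed 0.6 cutoff would discard, while still rejecting ambiguous matches from low-credit keypoints.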