Robust visual tracking based on online learning of joint sparse dictionary
24 December 2013
Qiaozhe Li, Yu Qiao, Jie Yang, Li Bai
Proceedings Volume 9067, Sixth International Conference on Machine Vision (ICMV 2013); 90671E (2013) https://doi.org/10.1117/12.2051541
Event: Sixth International Conference on Machine Vision (ICMV 13), 2013, London, United Kingdom
Abstract
In this paper, we propose a robust visual tracking algorithm based on online learning of a joint sparse dictionary. The joint sparse dictionary consists of positive and negative sub-dictionaries, which model the foreground object and the background, respectively. An online dictionary learning method is developed to update the joint sparse dictionary by selecting positive and negative bases from bags of positive and negative image patches/templates during tracking. A linear classifier is trained on the sparse coefficients of image patches in the current frame, computed with the joint sparse dictionary, and is then used to locate the target in the next frame. Experimental results show that our tracking method is robust against object variation, occlusion, and illumination change.
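The pipeline outlined in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the dictionary atoms below are random placeholders rather than bases selected by the paper's online learning step, the sparse coding uses scikit-learn's OMP solver, the classifier is a LinearSVC, and all shapes and sample counts are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import sparse_encode
from sklearn.svm import LinearSVC

# Hypothetical sizes: each image patch is flattened to a d-dimensional vector.
d, n_pos, n_neg = 256, 50, 50

# Joint dictionary = positive (foreground) and negative (background) bases,
# stacked as (n_components, n_features) for scikit-learn's sparse_encode.
D_pos = np.random.randn(n_pos, d)   # placeholder for learned foreground bases
D_neg = np.random.randn(n_neg, d)   # placeholder for learned background bases
D_joint = np.vstack([D_pos, D_neg])
D_joint /= np.linalg.norm(D_joint, axis=1, keepdims=True)  # unit-norm atoms

def encode(patches, n_nonzero=10):
    """Sparse-code flattened patches against the joint dictionary (OMP)."""
    return sparse_encode(patches, D_joint, algorithm="omp",
                         n_nonzero_coefs=n_nonzero)

# Training: patches sampled around the target (label 1) and from the
# background (label 0) in the current frame; random data stands in here.
pos_patches = np.random.randn(40, d)
neg_patches = np.random.randn(80, d)
X = encode(np.vstack([pos_patches, neg_patches]))
y = np.concatenate([np.ones(40), np.zeros(80)])

clf = LinearSVC()          # linear classifier on the sparse coefficients
clf.fit(X, y)

# Tracking: score candidate patches in the next frame, keep the best one.
candidates = np.random.randn(200, d)
scores = clf.decision_function(encode(candidates))
best = int(np.argmax(scores))
print("most target-like candidate index:", best)
```

In the method described by the paper, the joint dictionary would additionally be updated online each frame by selecting new positive and negative bases from the bags of collected patches; the sketch above omits that update step.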
© (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Qiaozhe Li, Yu Qiao, Jie Yang, and Li Bai, "Robust visual tracking based on online learning of joint sparse dictionary," Proc. SPIE 9067, Sixth International Conference on Machine Vision (ICMV 2013), 90671E (24 December 2013); https://doi.org/10.1117/12.2051541
Proceedings paper, 5 pages.