Robust visual multitask tracking via composite sparse model
Abstract
Recently, multitask learning was applied to visual tracking by learning sparse particle representations in a joint task, leading to the multitask tracking algorithm (MTT). Although MTT achieves impressive tracking performance by mining the interdependencies between particles, it underuses the individual features of each particle: its L1,q norm regularization assumes that all features are shared across all particles, which yields nearly identical representation coefficients in the nonsparse rows. We propose a composite sparse multitask tracking algorithm (CSMTT). We develop a composite sparse model that formulates the object appearance as a combination of a shared feature component, an individual feature component, and an outlier component. The composite sparsity is achieved via L1,∞ and L1,1 norm minimization and is optimized by the alternating direction method of multipliers (ADMM), which provides favorable reconstruction accuracy and high computational efficiency. Moreover, a dynamic dictionary update scheme is proposed to capture appearance changes. CSMTT is tested on real-world video sequences under various challenges; experimental results show that the composite sparse model achieves noticeably lower reconstruction errors and higher computational speed than traditional sparse models, and that CSMTT consistently outperforms seven state-of-the-art trackers.
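The abstract only names the ingredients; as a rough, non-authoritative sketch, the Python code below implements one plausible reading of the composite model under ADMM. It assumes the objective min_{P,Q,E} ||P||_{1,∞} + λ1·||Q||_{1,1} + λ2·||E||_{1,1} subject to X = D(P + Q) + E, where X holds the particle observations as columns, D is the dictionary, P is the shared (row-sparse) component, Q the individual component, and E the outliers. The auxiliary variable Z, the update order, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a composite sparse decomposition via ADMM.
# Assumed model (not taken from the paper's equations):
#   min_{P,Q,E}  ||P||_{1,inf} + lam1*||Q||_{1,1} + lam2*||E||_{1,1}
#   s.t.         X = D @ (P + Q) + E
import numpy as np


def project_l1_ball(v, radius=1.0):
    """Euclidean projection of vector v onto the l1 ball of the given radius."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    cumsum = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (cumsum - radius) / k > 0)[0][-1]
    theta = (cumsum[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)


def prox_linf(v, tau):
    """Prox of tau*||.||_inf, via Moreau decomposition with the l1-ball projection."""
    if tau <= 0:
        return v.copy()
    return v - tau * project_l1_ball(v / tau)


def soft_threshold(A, tau):
    """Elementwise prox of tau*||.||_{1,1} (soft thresholding)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)


def composite_sparse_decompose(X, D, lam1=0.1, lam2=0.5, mu=1.0, n_iter=100):
    """Split Z = P + Q with X = D @ Z + E; alternate prox and least-squares steps."""
    m, n = X.shape
    d = D.shape[1]
    Z = np.zeros((d, n)); P = np.zeros((d, n)); Q = np.zeros((d, n))
    E = np.zeros((m, n)); Y1 = np.zeros((m, n)); Y2 = np.zeros((d, n))
    # Cache the Cholesky factor used by the Z least-squares update.
    G = np.linalg.cholesky(D.T @ D + np.eye(d))
    for _ in range(n_iter):
        # Z-update: ridge-type least squares coupling both constraints.
        rhs = D.T @ (X - E + Y1 / mu) + (P + Q - Y2 / mu)
        Z = np.linalg.solve(G.T, np.linalg.solve(G, rhs))
        # P-update: row-wise prox of the L1,inf norm (shared features).
        V = Z - Q + Y2 / mu
        P = np.vstack([prox_linf(V[i], 1.0 / mu) for i in range(d)])
        # Q-update: elementwise shrinkage (individual features).
        Q = soft_threshold(Z - P + Y2 / mu, lam1 / mu)
        # E-update: elementwise shrinkage (outliers).
        E = soft_threshold(X - D @ Z + Y1 / mu, lam2 / mu)
        # Dual ascent on both constraints.
        Y1 += mu * (X - D @ Z - E)
        Y2 += mu * (Z - P - Q)
    return P, Q, E
```

Note how the L1,∞ prox separates across rows of P, which matches the shared-feature reading of the model: each dictionary atom is either used jointly by many particles (a surviving nonzero row) or suppressed entirely, while Q and E absorb particle-specific coefficients and gross outliers through elementwise shrinkage.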
© 2014 SPIE and IS&T
Bo Jin, Zhongliang Jing, Meng Wang, Han Pan, "Robust visual multitask tracking via composite sparse model," Journal of Electronic Imaging 23(6), 063022 (24 December 2014). https://doi.org/10.1117/1.JEI.23.6.063022