The development of deep convolutional neural networks (CNNs) has given rise to a family of Siamese network-based tracking methods. Such trackers generally comprise two basic modules: an appearance model for target representation and a discriminative classifier to determine the target location. Existing methods pay much attention to feature fusion for a more robust appearance representation, but they ignore the intrinsic relationship between features from multiple layers. Furthermore, the imbalance between positive and negative samples in offline training limits the discriminative ability of the CNN-based classifier. We investigate the joint learning of representation and discriminative classification under the Siamese network for robust tracking. We integrate top-down modulation into feature fusion to exploit the intrinsic relationship between features from different layers. To address the data imbalance problem, we propose an advanced hinge loss objective function that mines hard examples during offline training, which helps to improve the robustness of the similarity measure. Experimental results on public tracking benchmark datasets show that the proposed method obtains favorable tracking accuracy against state-of-the-art trackers at a real-time tracking speed.
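The idea of a hinge loss with hard example mining can be sketched as follows. This is a minimal illustration under assumed details, not the paper's exact formulation: the margin value, the kept-negative ratio, and the function name are all assumptions made for the example.

```python
import numpy as np

def hinge_loss_hard_mining(scores, labels, margin=1.0, neg_keep_ratio=0.25):
    """Hinge loss over similarity scores with hard negative mining.

    scores: predicted similarity responses (higher = more target-like).
    labels: +1 for positive samples, -1 for negative samples.
    Only the largest-loss fraction of negatives is kept, which counters
    the imbalance between the few positives and the many negatives.
    """
    # Standard hinge term: max(0, margin - y * s)
    per_sample = np.maximum(0.0, margin - labels * scores)
    pos_loss = per_sample[labels > 0]
    neg_loss = per_sample[labels < 0]
    # Hard example mining: sort negatives by loss, keep the hardest k.
    k = max(1, int(neg_keep_ratio * neg_loss.size))
    hard_neg = np.sort(neg_loss)[::-1][:k]
    # Average over the positives and the retained hard negatives.
    return (pos_loss.sum() + hard_neg.sum()) / (pos_loss.size + k)
```

Easy negatives (those already scored well below the margin) contribute zero loss, so discarding them focuses the gradient on the ambiguous negatives that actually limit the classifier's discriminative ability.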
Keywords: Optical tracking, Mining, Modulation, Time division multiplexing, Video, Feature extraction, Multilayers