Visual object tracking via precise localization
Abstract

Recently, trackers composed of a target estimation module and a target classification module have achieved excellent accuracy with high efficiency. However, they underperform under background semantic interference and large scale variation, and in long-term tracking scenarios. To address these problems, we propose a two-stage tracking framework. First, we introduce an objective function better suited to tracking tasks, termed metrizable intersection over union, which accounts for both the alignment mode and the center distance between two bounding boxes. Second, multilevel features are used to eliminate semantic ambiguity by exploiting diverse semantic information. Third, a meta-synthetic decision strategy is proposed to determine the optimal location of the target. In comprehensive experiments on OTB100, LaSOT, TrackingNet, TColor-128, UAV123, and UAV20L, our method performs favorably against state-of-the-art trackers.
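The objective function described in the abstract combines a box-overlap term with a dependence on the distance between box centers. The sketch below is only an illustration of that general idea, not the authors' exact formulation: it computes plain IoU and subtracts a normalized center-distance penalty (the widely used DIoU-style construction); the function name and box convention are assumptions made for the example.

```python
def center_distance_iou(box_a, box_b):
    """Illustrative IoU with a center-distance penalty (DIoU-style).

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2; this is an
    assumed convention, not taken from the paper.
    """
    # Intersection area.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union area and plain IoU.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0

    # Squared distance between the two box centers.
    cax, cay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cbx, cby = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    center_dist_sq = (cax - cbx) ** 2 + (cay - cby) ** 2

    # Squared diagonal of the smallest enclosing box, used to
    # normalize the center distance to [0, 1].
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    diag_sq = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2

    # IoU penalized by how far apart the centers are.
    return iou - center_dist_sq / diag_sq if diag_sq > 0 else iou


# Example: of two candidates with the same overlap, the one whose
# center lies closer to the ground-truth center scores higher.
print(center_distance_iou((0, 0, 10, 10), (2, 2, 12, 12)))
print(center_distance_iou((0, 0, 10, 10), (0, 0, 10, 10)))
```

Penalizing the center offset in this way rewards candidates that are well centered on the target, which is one plausible reading of how an alignment- and distance-aware overlap measure aids precise localization.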

© 2020 SPIE and IS&T. 1017-9909/2020/$28.00.
Xiaodong Liu, Min Jiang, and Jun Kong "Visual object tracking via precise localization," Journal of Electronic Imaging 29(3), 033018 (26 June 2020). https://doi.org/10.1117/1.JEI.29.3.033018
Received: 28 December 2019; Accepted: 12 June 2020; Published: 26 June 2020
CITATIONS
Cited by 1 scholarly publication and 3 patents.
KEYWORDS
Optical tracking

Visualization

Chemical species

Video

Image filtering

Feature extraction

Classification systems
