Tracker fusion for robustness in visual feature tracking
15 September 1995
Kentaro Toyama, Gregory D. Hager
Proceedings Volume 2589, Sensor Fusion and Networked Robotics VIII; (1995) https://doi.org/10.1117/12.220965
Event: Photonics East '95, 1995, Philadelphia, PA, United States
Abstract
Task-directed vision obviates the need for general image comprehension by focusing attention only on features which contribute useful information to the task at hand. Window-based visual tracking fits into this paradigm as motion tracking becomes a problem of local search in a small image region. While the gains in speed from such methods allow for real-time feature tracking on off-the-shelf hardware, they lose robustness by giving up a more global perspective: window-based feature trackers are prone to such problems as distraction, illumination changes, fast features, and so forth. To add robustness to feature tracking, we present "tracker fusion," where multiple trackers simultaneously track the same feature while watching for various problematic circumstances and combine their estimates in a meaningful way. By categorizing different situations in which mistracking occurs, finding appropriate trackers to deal with each such situation, and fusing the resulting trackers together, we construct robust feature trackers which maintain the speed of simple window-based trackers, yet afford greater resistance to mistracking.
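The abstract describes the mechanism only at a high level; the fusion rule itself is not specified here. The sketch below is a minimal illustration of the idea, assuming each tracker returns a position estimate plus a self-assessed confidence, and that fusion is a confidence-weighted average that discards any tracker flagging a problematic circumstance. The names (`Estimate`, `fuse`, `min_confidence`) and the specific weighting scheme are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Estimate:
    """One tracker's report for the current frame (fields are illustrative)."""
    x: float           # estimated feature position, image column
    y: float           # estimated feature position, image row
    confidence: float  # self-assessed reliability in [0, 1]; near 0 signals mistracking

def fuse(estimates: List[Estimate],
         min_confidence: float = 0.1) -> Optional[Tuple[float, float]]:
    """Confidence-weighted fusion of simultaneous estimates of one feature.

    A tracker that detects a problematic situation (distraction, illumination
    change, fast motion) reports low confidence and is excluded, so the
    remaining trackers carry the feature through that situation.
    """
    live = [e for e in estimates if e.confidence >= min_confidence]
    if not live:
        return None  # every tracker reports trouble: declare the feature lost
    total = sum(e.confidence for e in live)
    return (sum(e.x * e.confidence for e in live) / total,
            sum(e.y * e.confidence for e in live) / total)

if __name__ == "__main__":
    # e.g., a window-based tracker distracted by a similar patch reports low
    # confidence, while two other trackers still agree on the true feature
    reports = [Estimate(120.0, 84.0, 0.05),  # distracted window tracker
               Estimate(143.2, 90.1, 0.9),   # edge-based tracker
               Estimate(142.7, 89.6, 0.8)]   # color-based tracker
    print(fuse(reports))  # fused position near (143, 90)
```

Under these assumptions, the fused estimate degrades gracefully: any single mistracking tracker is outvoted as long as at least one other tracker remains confident.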
© 1995 Society of Photo-Optical Instrumentation Engineers (SPIE).
Kentaro Toyama, Gregory D. Hager, "Tracker fusion for robustness in visual feature tracking," Proc. SPIE 2589, Sensor Fusion and Networked Robotics VIII (15 September 1995); https://doi.org/10.1117/12.220965

