Object trackers for full-motion video (FMV) need to handle object occlusions (partial and short-term full), rotation, scaling, illumination changes, complex background variations, and perspective variations. Unlike traditional deep learning trackers that require extensive training time, the proposed Progressively Expanded Neural Network (PENNet) tracker methodology utilizes a modified variant of the extreme learning machine, which incorporates polynomial expansion and state-preserving methodologies. This significantly reduces the training time required for online training of the object model. The proposed algorithm is evaluated on the DARPA Video Verification of Identity (VIVID) dataset, wherein the selected high-value targets (HVTs) are vehicles.
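For context, the extreme-learning-machine (ELM) family that PENNet builds on trains in closed form: input weights are random and fixed, and only the output weights are solved by least squares, which is why online training is fast. The sketch below illustrates this generic ELM step only; the polynomial expansion and state-preserving mechanisms of PENNet itself are not shown, and all names here are illustrative rather than from the paper.

```python
import numpy as np

def train_elm(X, T, n_hidden=64, seed=0):
    """Fit ELM output weights in one closed-form step (no iterative epochs)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, fixed input weights
    b = rng.standard_normal(n_hidden)                # random, fixed biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit a simple nonlinear mapping with a single pseudo-inverse solve.
X = np.random.default_rng(1).standard_normal((200, 5))
T = X[:, :1] ** 2 + 0.5 * X[:, 1:2]
W, b, beta = train_elm(X, T, n_hidden=128)
err = np.mean((predict_elm(X, W, b, beta) - T) ** 2)
```

Because only `beta` is learned, retraining on new appearance samples costs one linear solve, which is the property that makes ELM-style models attractive for online tracking.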
Evan Krieger, Theus Aspiras, Vijayan K. Asari, Kevin Krucki, Bryce Wauligman, Yakov Diskin, and Karl Salva, "Vehicle tracking in full motion video using the progressively expanded neural network (PENNet) tracker," Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 106490I (presented at SPIE Defense + Security, 18 April 2018; published 30 April 2018); https://doi.org/10.1117/12.2305391.