Object trackers for full-motion video (FMV) must handle object occlusions (partial and short-term full), rotation, scaling, illumination changes, complex background variations, and perspective changes. Unlike traditional deep learning trackers that require extensive training time, the proposed Progressively Expanded Neural Network (PENNet) tracker utilizes a modified variant of the extreme learning machine that incorporates polynomial expansion and state-preserving methodologies. This significantly reduces the training time required for online learning of the object's appearance. The proposed algorithm is evaluated on the DARPA Video Verification of Identity (VIVID) dataset, in which the selected high-value targets (HVTs) are vehicles.
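The abstract's claim about reduced training time follows from how an extreme learning machine is trained: the hidden-layer weights are random and fixed, so only the output weights need to be solved, which has a closed-form least-squares solution instead of iterative backpropagation. The sketch below illustrates that core idea with an elementwise polynomial feature expansion; it is a minimal illustration, not the authors' PENNet implementation, and all names (`ELMRegressor`, `polynomial_expand`), the hidden-layer size, and the ridge term are assumptions for demonstration.

```python
import numpy as np

def polynomial_expand(X, degree=2):
    """Append elementwise powers x, x^2, ..., x^degree (illustrative expansion)."""
    return np.hstack([X ** d for d in range(1, degree + 1)])

class ELMRegressor:
    """Minimal extreme learning machine: random fixed hidden weights,
    output weights solved in closed form (no backpropagation)."""

    def __init__(self, n_hidden=64, degree=2, ridge=1e-3, seed=0):
        self.n_hidden = n_hidden
        self.degree = degree
        self.ridge = ridge
        self.rng = np.random.default_rng(seed)
        self.W = None       # random hidden-layer weights, set lazily
        self.beta = None    # learned output weights

    def _hidden(self, X):
        Xp = polynomial_expand(X, self.degree)
        if self.W is None:
            self.W = self.rng.standard_normal((Xp.shape[1], self.n_hidden))
        return np.tanh(Xp @ self.W)  # fixed random projection + nonlinearity

    def fit(self, X, y):
        H = self._hidden(X)
        # Regularized least squares for the output layer only:
        # this single linear solve is what makes training fast.
        self.beta = np.linalg.solve(
            H.T @ H + self.ridge * np.eye(self.n_hidden), H.T @ y
        )
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage: fit a 1-D nonlinear function with one linear solve.
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
model = ELMRegressor().fit(X, y)
mse = np.mean((model.predict(X) - y) ** 2)
```

Because training reduces to one regularized linear solve per update, the model can be refit online each frame as new appearance samples arrive, which is the property the abstract highlights.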
Evan Krieger, Theus Aspiras, Vijayan K. Asari, Kevin Krucki, Bryce Wauligman, Yakov Diskin, and Karl Salva, "Vehicle tracking in full motion video using the progressively expanded neural network (PENNet) tracker," Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 106490I (presented at SPIE Defense + Security: 18 April 2018; published: 30 April 2018); https://doi.org/10.1117/12.2305391.