Presentation + Paper
27 April 2018
Learning a dictionary of activities from motion imagery tracking data
John M. Irvine, Richard J. Wood
Abstract
Target tracking derived from motion imagery enables automated activity analysis. In this paper, we develop methods for automatically exploiting track data to detect and recognize activities, build models of normal behavior, and detect departures from normalcy. We represent activities through syntactic analysis of the track data by “tokenizing” each track, i.e., converting the kinematic information into strings of symbols amenable to further analysis. This syntactic representation of target tracks is the foundation for constructing an expandable “dictionary of activities.” Through unsupervised learning on the syntactic representations, we discover the canonical activities in a corpus of motion imagery data; the probability distribution over the learned activities is the “dictionary.” Newly acquired track data are compared to the dictionary to flag atypical behaviors as departures from normalcy. We demonstrate the methods with relevant data.
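The pipeline the abstract describes (tokenize kinematics into symbols, learn a distribution over symbol patterns, score new tracks against it) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the three-symbol alphabet, the step-length thresholds, the bigram statistics, and the smoothing floor are all assumptions chosen for clarity.

```python
# Illustrative sketch of a track-tokenization / activity-dictionary pipeline.
# The symbol alphabet, thresholds, n-gram order, and smoothing are
# hypothetical choices, not the parameters used in the paper.
import math
from collections import Counter

def tokenize(track):
    """Convert a track of (x, y) points into a symbol string:
    'S' = stopped, 'L' = slow, 'F' = fast, based on step length."""
    symbols = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        if step < 0.1:
            symbols.append('S')
        elif step < 1.0:
            symbols.append('L')
        else:
            symbols.append('F')
    return ''.join(symbols)

def learn_dictionary(tracks, n=2):
    """Estimate a probability distribution over symbol n-grams from a
    corpus of tracks -- a stand-in for the 'dictionary of activities'."""
    counts = Counter()
    for track in tracks:
        s = tokenize(track)
        counts.update(s[i:i + n] for i in range(len(s) - n + 1))
    total = sum(counts.values())
    return {gram: c / total for gram, c in counts.items()}, n

def normalcy_score(track, dictionary, n):
    """Average log-probability of the track's n-grams under the
    dictionary; lower scores suggest departure from normalcy."""
    s = tokenize(track)
    grams = [s[i:i + n] for i in range(len(s) - n + 1)]
    if not grams:
        return 0.0
    floor = 1e-6  # smoothing probability for unseen n-grams
    return sum(math.log(dictionary.get(g, floor)) for g in grams) / len(grams)

# Example: a corpus of slow, steady tracks defines 'normal'; a fast
# sprint contains n-grams the dictionary has never seen and scores lower.
normal_corpus = [[(i * 0.5, 0.0) for i in range(20)] for _ in range(10)]
dictionary, n = learn_dictionary(normal_corpus)
steady = normalcy_score([(i * 0.5, 0.0) for i in range(20)], dictionary, n)
sprint = normalcy_score([(i * 5.0, 0.0) for i in range(20)], dictionary, n)
```

In this toy setup the steady track scores 0 (every bigram has probability 1 under the learned distribution) while the sprint falls to the smoothing floor, so thresholding the score flags it as atypical.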
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
John M. Irvine and Richard J. Wood "Learning a dictionary of activities from motion imagery tracking data", Proc. SPIE 10645, Geospatial Informatics, Motion Imagery, and Network Analytics VIII, 1064508 (27 April 2018); https://doi.org/10.1117/12.2306006
KEYWORDS: Video, Machine learning, Kinematics, Sensors, Video processing, Data modeling, Matrices
