Paper
23 May 2013
Learning and detecting coordinated multi-entity activities from persistent surveillance
Georgiy Levchuk, Matt Jacobsen, Caitlin Furjanic, Aaron Bobick
Abstract
In this paper, we present our enhanced model of multi-entity activity recognition, which operates on person and vehicle tracks, converts them into motion and interaction events, and represents activities via multi-attributed role networks encoding the spatial, temporal, contextual, and semantic characteristics of coordinated activities. The model is flexible enough to capture variations in behavior and is used both for learning repetitive activity patterns in a semi-supervised manner and for detecting activities in data with large ambiguity and a high ratio of irrelevant to relevant tracks and events. We demonstrate the model on activities captured in CLIF persistent wide-area motion data collections.
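The first stage of the pipeline summarized above, converting raw tracks into motion events, can be illustrated with a minimal sketch. This is not the authors' implementation: the speed threshold, the `"move"`/`"stop"` event vocabulary, and the track representation are illustrative assumptions only.

```python
# Illustrative sketch (NOT the paper's method): segment a single entity
# track into coarse motion events by thresholding instantaneous speed.
# The threshold value and event labels are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class MotionEvent:
    kind: str    # "move" or "stop" (assumed event vocabulary)
    start: float # event start timestamp
    end: float   # event end timestamp

def track_to_events(track, stop_speed=0.5):
    """track: list of (t, x, y) samples sorted by time."""
    events = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dt = t1 - t0
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        kind = "stop" if speed < stop_speed else "move"
        if events and events[-1].kind == kind:
            events[-1].end = t1  # extend the ongoing event
        else:
            events.append(MotionEvent(kind, t0, t1))
    return events

# A vehicle that idles, drives, then idles again:
track = [(0, 0, 0), (1, 0, 0.1), (2, 5, 0), (3, 10, 0), (4, 10, 0.2)]
print([(e.kind, e.start, e.end) for e in track_to_events(track)])
# → [('stop', 0, 1), ('move', 1, 3), ('stop', 3, 4)]
```

In the paper's setting, event streams like this, together with interaction events between entities, would feed the role-network matching and semi-supervised learning stages.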
© (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Georgiy Levchuk, Matt Jacobsen, Caitlin Furjanic, and Aaron Bobick "Learning and detecting coordinated multi-entity activities from persistent surveillance", Proc. SPIE 8745, Signal Processing, Sensor Fusion, and Target Recognition XXII, 87451L (23 May 2013); https://doi.org/10.1117/12.2014875
CITATIONS
Cited by 4 scholarly publications.
KEYWORDS
Data modeling
Electro-optical modeling
Motion models
Detection and tracking algorithms
Surveillance
Computer programming
Expectation maximization algorithms