Learning intent and behavior models from motion trajectories for unsupervised semantic labeling
12 April 2021
Trajectory data has numerous commercial applications, e.g., location-based services, travel forecasting, health monitoring, land use analysis, urban planning, and robotics. However, traditional trajectory mining algorithms do not explain how and why the motion was generated, limiting their utility in GEOINT applications where data is unlabeled, noisy, and lacks contextual layers. In this paper, we describe a methodology that analyzes spatiotemporal trajectory data to produce semantic labels. We learn the behavior models that most likely generated the input trajectories and use these models to transfer labels across unlabeled, ambiguous tracks. Behavior models include both a mover’s intent, encoded as a motion reward function, and a behavior policy, encoded as a state-conditioned movement action distribution. We show that learned behavior models provide an efficient mechanism for relating noisy tracks, allowing accurate semi-supervised learning (>90% f-score over labeling outcomes) with just a few labeled examples per type of motion behavior. We further hypothesize that learned behavior models contain latent statistical and structural information that may be exploited to label trajectories in a completely unsupervised manner in the future, which would allow military analysts or civilian consumers to explain observed trajectory data, derive semantic motion-based features to improve object and region classification, and reason about motion changes in different contexts.
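The label-transfer idea above can be illustrated with a minimal sketch: fit a state-conditioned action distribution (the "behavior policy" half of the model) per labeled behavior class, then assign an unlabeled track to the class whose policy gives it the highest likelihood. All names, the grid-cell state abstraction, and the add-one smoothing are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: likelihood-based label transfer via learned policies.
# A trajectory is a list of (state, action) pairs; states are assumed to be
# discretized (e.g., grid cells) and actions drawn from a small move set.
from collections import defaultdict, Counter
import math

def fit_policy(trajectories):
    """Estimate P(action | state) counts from labeled trajectories."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for state, action in traj:
            counts[state][action] += 1
    return counts

def log_likelihood(policy, traj, n_actions=4, alpha=1.0):
    """Log-likelihood of a track under a policy, with add-one smoothing
    so unseen (state, action) pairs get nonzero probability."""
    ll = 0.0
    for state, action in traj:
        c = policy[state]
        ll += math.log((c[action] + alpha) / (sum(c.values()) + alpha * n_actions))
    return ll

def label_track(track, models):
    """Transfer the label whose learned policy best explains the track."""
    return max(models, key=lambda lbl: log_likelihood(models[lbl], track))
```

For example, a policy fit on eastward "transit" tracks assigns higher likelihood to a new eastward track than a policy fit on back-and-forth "loiter" tracks, so `label_track` returns `"transit"`. The paper's full models additionally recover intent as a reward function; this sketch covers only the policy side.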
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Georgiy Levchuk and Louis Penafiel "Learning intent and behavior models from motion trajectories for unsupervised semantic labeling", Proc. SPIE 11756, Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, 117560O (12 April 2021);
