Trajectory data has numerous commercial applications, e.g., location-based services, travel forecasting, health monitoring, land-use analysis, urban planning, and robotics. However, traditional trajectory mining algorithms do not explain how and why the motion was generated, limiting their utility in GEOINT applications where data is unlabeled, noisy, and lacks contextual layers. In this paper, we describe a methodology that analyzes spatiotemporal trajectory data to produce semantic labels. The methodology learns the behavior models that most likely generated the input trajectory data and uses these models to transfer labels across unlabeled, ambiguous tracks. Behavior models include both a mover's intent, encoded as a motion reward function, and its behavior policy, encoded as a state-conditioned distribution over movement actions. We show that learned behavior models provide an efficient mechanism for relating noisy tracks, enabling accurate semi-supervised learning (>90% F-score on labeling outcomes) with just a few labeled examples per type of motion behavior. We further hypothesize that learned behavior models contain latent statistical and structural information that may be exploited to label trajectories in a completely unsupervised manner in the future, which would allow military analysts or civilian consumers to explain observed trajectory data, derive semantic motion-based features to improve object and region classification, and reason about motion changes across contexts.
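The label-transfer idea described above can be illustrated with a minimal sketch. Here a behavior policy is estimated as a state-conditioned action distribution from a handful of labeled trajectories, and an unlabeled track receives the label of the model under which it is most likely. The state/action discretization, the `fit_policy` and `transfer_label` helpers, and the toy trajectories are illustrative assumptions, not the paper's actual implementation (which also learns reward functions).

```python
import numpy as np

# Hypothetical sketch: a behavior policy as a state-conditioned action
# distribution P(action | state), estimated from labeled trajectories and
# used to transfer labels to unlabeled tracks by maximum likelihood.
# Discretization and toy data are illustrative assumptions.

def fit_policy(trajectories, n_states, n_actions, alpha=1.0):
    """Estimate P(action | state) with additive (Laplace) smoothing."""
    counts = np.full((n_states, n_actions), alpha)
    for traj in trajectories:
        for state, action in traj:
            counts[state, action] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(track, policy):
    """Log-probability of a track's actions under a learned policy."""
    return sum(np.log(policy[s, a]) for s, a in track)

def transfer_label(track, models):
    """Assign the label of the behavior model most likely to generate the track."""
    return max(models, key=lambda lbl: log_likelihood(track, models[lbl]))

# Toy example: 4 discrete states x 2 actions; "loiter" repeats action 0,
# "transit" repeats action 1.
loiter = [[(0, 0), (1, 0), (2, 0)], [(1, 0), (2, 0)]]
transit = [[(0, 1), (1, 1), (3, 1)], [(2, 1), (3, 1)]]
models = {
    "loiter": fit_policy(loiter, 4, 2),
    "transit": fit_policy(transit, 4, 2),
}
print(transfer_label([(0, 0), (1, 0)], models))  # → loiter
```

Smoothing keeps unseen state-action pairs from zeroing out a likelihood, which matters when labeled examples are scarce, as in the few-shot setting the abstract describes.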
In both military and commercial domains, tasks are increasingly entrusted to autonomous systems and robots. These artificial intelligence (AI) systems are expected to be safe and intelligent, adapt to changing environments, and interact with other actors, both automated and human. In this paper, we present a framework and corresponding analytics for developing AI agents that possess (1) cognitive skills, including the ability to perform counterfactual reasoning and self-assessment and to exhibit human-like curiosity, biases, and errors; (2) the ability to learn complex tasks quickly with limited feedback; (3) the ability to coordinate and co-learn with human or AI teammates; and (4) the ability to function well over long time horizons (e.g., hours or days). Our framework is based on active inference, a theory of adaptive behavior developed in computational neuroscience and psychology. Together with learnable deep factorized representations, active inference provides the objective function, high-capacity predictions, and scalable computational mechanisms that enable AI agents to execute four processes fundamental to human cognition: learning, perception, planning, and simulation. We demonstrate the advantages of our AI solution in the domain of planning multi-agent maneuvers for area control missions. Our model learns faster than a reinforcement learning baseline, producing faster point accumulation and a higher game win rate.
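The active-inference objective mentioned above can be sketched in its standard discrete form: an agent scores each action by its expected free energy, which decomposes into risk (divergence of predicted observations from preferred ones) plus ambiguity (expected observation uncertainty), and selects the minimizer. The matrices below are toy assumptions for illustration only, not the paper's deep factorized model.

```python
import numpy as np

# Minimal discrete active-inference step (illustrative assumption, not the
# paper's implementation). Expected free energy per action:
#   G(a) = KL( Q(o|a) || C )          # risk: deviation from preferences C
#        + E_{Q(s|a)}[ H(P(o|s)) ]    # ambiguity: expected obs. entropy
# The agent picks the action with minimal G.

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def entropy(p):
    return float(-np.sum(p * np.log(p)))

def expected_free_energy(q_s, B, A, C):
    """G for each action: belief q_s, transitions B[a], likelihood A, preferences C."""
    G = []
    for Ba in B:
        qs_next = Ba @ q_s   # predicted state distribution after the action
        qo = A @ qs_next     # predicted observation distribution
        risk = kl(qo, C)
        ambiguity = sum(qs_next[s] * entropy(A[:, s]) for s in range(len(qs_next)))
        G.append(risk + ambiguity)
    return np.array(G)

# Toy world: 2 states, 2 observations, 2 actions.
A = np.array([[0.9, 0.1],    # P(o=0|s)
              [0.1, 0.9]])   # P(o=1|s)
B = [np.array([[1.0, 1.0], [0.0, 0.0]]),   # action 0 drives toward state 0
     np.array([[0.0, 0.0], [1.0, 1.0]])]   # action 1 drives toward state 1
C = np.array([0.95, 0.05])   # the agent prefers observation 0
q_s = np.array([0.5, 0.5])   # current belief over states

G = expected_free_energy(q_s, B, A, C)
print(int(np.argmin(G)))  # → 0 (the action steering toward the preferred observation)
```

Replacing a reward-maximization objective with this single free-energy functional is what lets the same machinery drive both goal pursuit (risk) and curiosity-like information seeking (ambiguity reduction), the behaviors the framework targets.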