There is a strong push to develop intelligent unmanned autonomy that complements human reasoning for applications as diverse as wilderness search and rescue, military surveillance, and robotic space exploration. More than just replacing humans for `dull, dirty and dangerous' work, autonomous agents are expected to cope with a whole host of uncertainties while working closely together with humans in new situations. The robotics revolution firmly established the primacy of Bayesian algorithms for tackling challenging perception, learning and decision-making problems. Since the next frontier of autonomy demands the ability to gather information across stretches of time and space that are beyond the reach of a single autonomous agent, the next generation of Bayesian algorithms must capitalize on opportunities to draw upon the sensing and perception abilities of humans-in/on-the-loop. This work summarizes our recent research toward harnessing `human sensors' for information gathering tasks. The basic idea is to allow human end users (i.e. non-experts in robotics, statistics, machine learning, etc.) to directly `talk to' the information fusion engine and perceptual processes aboard any autonomous agent. Our approach is grounded in rigorous Bayesian modeling and fusion of flexible semantic information derived from user-friendly interfaces, such as natural language chat and locative hand-drawn sketches. This naturally enables `plug and play' human sensing with existing probabilistic algorithms for planning and perception, and has been successfully demonstrated with human-robot teams in target localization applications.
This work addresses the problem of localizing a mobile intruder on a road network with a small UAV through fusion of event-based `hard data' collected from a network of unattended ground sensors (UGS) and `soft data' provided by human dismount operators (HDOs) whose statistical characteristics may be unknown. Current approaches to road network intruder detection/tracking have two key limitations: predictions become computationally expensive with highly uncertain target motions and sparse data, and they cannot easily accommodate fusion with uncertain sensor models. This work shows that these issues can be addressed in a practical and theoretically sound way using hidden Markov models (HMMs) within a comprehensive Bayesian framework. A formal procedure is derived for automatically generating sparse Markov chain approximations for target state dynamics based on standard motion assumptions. This leads to efficient online implementation via fast sparse matrix operations for non-Gaussian localization aboard small UAV platforms, and also leads to useful statistical insights about stochastic target dynamics that could be exploited by autonomous UAV guidance and control laws. The computational efficiency of the HMM can be leveraged in Rao-Blackwellized sampling schemes to address the problem of simultaneously fusing and characterizing uncertain HDO soft sensor data via hierarchical Bayesian estimation. Simulation results are provided to demonstrate the proposed approach.
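The core of the HMM-based localization described above is a Bayes filter whose prediction step is a sparse matrix-vector product over a discretized road network. The following is a minimal illustrative sketch, not the paper's implementation: the network size, transition probabilities, and UGS detection likelihood are all invented for the example, and a simple ring-shaped road topology is assumed.

```python
import numpy as np
from scipy import sparse

# Illustrative sketch of one HMM forward-filtering step for road-network
# target localization, using a sparse Markov chain approximation of the
# target dynamics. All states, probabilities, and the sensor model below
# are hypothetical.

n = 6  # number of discretized road-network states (illustrative)

# Sparse transition matrix A[i, j] = P(x_k = j | x_{k-1} = i):
# the target either stays put or advances to the next road segment.
rows, cols, vals = [], [], []
for i in range(n):
    rows += [i, i]
    cols += [i, (i + 1) % n]
    vals += [0.6, 0.4]
A = sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))

# Likelihood of an event-based UGS detection near state 2
# (toy sensor model with a small false-alarm floor).
likelihood = np.full(n, 0.05)
likelihood[2] = 0.9

def hmm_forward_step(prior, A, likelihood):
    """One Bayes filter step: sparse prediction, then measurement update."""
    predicted = A.T @ prior          # fast sparse matrix-vector product
    posterior = likelihood * predicted
    return posterior / posterior.sum()

prior = np.full(n, 1.0 / n)          # uniform initial belief over states
posterior = hmm_forward_step(prior, A, likelihood)
print(posterior.argmax())            # most likely target state -> 2
```

Because the transition matrix has only a handful of nonzeros per row, each prediction step costs O(nnz) rather than O(n^2), which is what makes online non-Gaussian filtering feasible aboard a small UAV.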