Presentation + Paper
Multi-modal sensor fusion and selection for enhanced situational awareness
12 April 2021
Brian Reily, Christopher Reardon, and Hao Zhang
Abstract
Collaborative multi-sensor perception enables a sensor network to provide multiple views or observations of an environment. To combine these observations into a cohesive display, they must be intelligently fused. We briefly describe our existing approach to sensor fusion and selection, in which a weighted combination of observations is used to recognize a target object. The optimal weights identified by this approach control the fusion of multiple sensors while also selecting those that provide the most relevant or informative observations. In this paper, we propose a system that uses these optimal sensor fusion weights to control the display of observations to a human operator, providing enhanced situational awareness. The proposed system displays observations based on the physical locations of the sensors, enabling a human operator to better understand where observations originate in the environment. The optimal sensor fusion weights are then used to scale the displayed observations, highlighting those that are informative and making less relevant observations easy for the operator to ignore.
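The abstract describes the pipeline only at a high level: an optimizer produces per-sensor weights, those weights fuse observations and select informative sensors, and the same weights scale how observations are rendered for the operator. As a rough illustration under those assumptions, the Python sketch below shows one way such weights could drive fusion, selection, and display scaling; the function names, threshold, and normalization choices are hypothetical and are not the authors' formulation.

import numpy as np

# Illustrative sketch only: a weighted combination of per-sensor features,
# weight-based sensor selection, and weight-driven display scaling.
# All names and parameter choices are assumptions, not the paper's method.

def fuse_observations(features, weights):
    # features: (num_sensors, feature_dim) array, one row per sensor
    # weights:  (num_sensors,) nonnegative fusion weights
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                  # normalize to a convex combination
    return w @ features              # weighted sum -> fused feature vector

def select_sensors(weights, threshold=0.1):
    # Treat sensors whose relative weight falls below the threshold as
    # uninformative and exclude them from the display.
    w = np.asarray(weights, dtype=float)
    return np.flatnonzero(w / w.max() >= threshold)

def display_scales(weights, min_scale=0.2, max_scale=1.0):
    # Map weights to display scales so highly weighted observations are
    # rendered prominently and low-weight ones fade toward min_scale.
    w = np.asarray(weights, dtype=float)
    span = np.ptp(w)
    norm = (w - w.min()) / span if span > 0 else np.ones_like(w)
    return min_scale + norm * (max_scale - min_scale)

# Example: five sensors, with weights produced by some upstream optimizer.
weights = np.array([0.05, 0.40, 0.30, 0.02, 0.23])
features = np.random.rand(5, 128)    # placeholder observation features
fused = fuse_observations(features, weights)
print(select_sensors(weights))       # indices of informative sensors
print(display_scales(weights))       # per-sensor marker scales for the UI

Normalizing the weights to a convex combination keeps the fused feature on the same scale as any individual observation, and mapping weights into a [min_scale, max_scale] range lets low-weight observations remain faintly visible rather than vanishing from the display entirely.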
© (2021) Society of Photo-Optical Instrumentation Engineers (SPIE).
Brian Reily, Christopher Reardon, and Hao Zhang "Multi-modal sensor fusion and selection for enhanced situational awareness", Proc. SPIE 11759, Virtual, Augmented, and Mixed Reality (XR) Technology for Multi-Domain Operations II, 117590K (12 April 2021); https://doi.org/10.1117/12.2587985
KEYWORDS: Situational awareness sensors, Robots, Sensor fusion, Environmental sensing