Multisource deep learning for situation awareness
Presentation + Paper, 14 May 2019
Abstract
The resurgence of interest in artificial intelligence (AI) stems from impressive deep learning (DL) performance, such as hierarchical supervised training using a convolutional neural network (CNN). Current DL methods should also provide contextual reasoning, explainable results, and repeatable understanding, all of which require suitable evaluation methods. This paper discusses DL techniques that use multimodal (or multisource) information and extend measures of performance (MOP). Examples of joint multimodal learning include imagery and text, video and radar, and other common sensor combinations. Joint multimodal learning challenges many current methods, and care is needed when applying machine learning in this setting. Results from Deep Multimodal Image Fusion (DMIF) using electro-optical (EO) and infrared (IR) data demonstrate performance modeling as a function of target distance, providing a better understanding of DL robustness and quality in support of situation awareness.
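The abstract describes feature-level fusion of EO and IR imagery with a deep network. The sketch below is only a rough illustration of that idea, not the authors' DMIF architecture: the two-branch layout, layer widths, input sizes, and class count are all assumptions made for the example.

```python
# Minimal sketch of feature-level EO/IR fusion with a two-branch CNN.
# NOT the paper's DMIF model; layer widths, input shapes, and the
# concatenation fusion step are illustrative assumptions.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Small convolutional encoder for one modality (EO or IR)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> (N, 32, 1, 1)
        )

    def forward(self, x):
        return self.net(x).flatten(1)  # (N, 32)

class EoIrFusionNet(nn.Module):
    """Joint EO/IR classifier: encode each modality, fuse by concatenation."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.eo = Branch(in_channels=3)   # EO assumed to be RGB
        self.ir = Branch(in_channels=1)   # IR assumed single-channel
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, eo_img, ir_img):
        fused = torch.cat([self.eo(eo_img), self.ir(ir_img)], dim=1)
        return self.head(fused)

# Example forward pass on random tensors standing in for registered EO/IR chips.
model = EoIrFusionNet(num_classes=10)
eo = torch.randn(4, 3, 64, 64)
ir = torch.randn(4, 1, 64, 64)
logits = model(eo, ir)   # shape (4, 10)
```

A performance-modeling study like the one described could then evaluate such a fused classifier across target distances (image resolutions) to characterize robustness.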
Erik Blasch, Zheng Liu, Yufeng Zheng, Uttam Majumder, Alex Aved, and Peter Zulch "Multisource deep learning for situation awareness", Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880M (14 May 2019); https://doi.org/10.1117/12.2519236
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Image fusion, Molybdenum, Information fusion, Data modeling, Image processing, Mid-IR, Data fusion
