1. Air Force Office of Scientific Research (United States); 2. The Univ. of British Columbia Okanagan (Canada); 3. Alcorn State Univ. (United States); 4. Air Force Research Lab. (United States)
The resurgence of interest in artificial intelligence (AI) stems from impressive deep learning (DL) performance, such as hierarchical supervised training using a Convolutional Neural Network (CNN). Current DL methods should provide contextual reasoning, explainable results, and repeatable understanding, all of which require evaluation methods. This paper discusses DL techniques that use multimodal (or multisource) information to extend measures of performance (MOP). Examples of joint multimodal learning include imagery and text, video and radar, and other common sensor types. Issues with joint multimodal learning challenge many current methods, and care is needed when applying machine learning methods. Results from Deep Multimodal Image Fusion (DMIF) using electro-optical (EO) and infrared (IR) data demonstrate performance modeling based on distance, to better understand DL robustness and quality for situation awareness.
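To make the multimodal fusion idea concrete, the following is a minimal sketch of feature-level EO/IR fusion with a two-stream CNN in PyTorch. It is an illustration under assumptions, not the authors' DMIF implementation: the encoder depths, channel counts, and class count are hypothetical, chosen only to show how per-modality features can be extracted and jointly classified.

```python
# Hypothetical two-stream fusion sketch; not the paper's DMIF method.
import torch
import torch.nn as nn


class TwoStreamFusionCNN(nn.Module):
    """Feature-level fusion of electro-optical (EO) and infrared (IR) imagery.

    Each modality passes through its own convolutional encoder; the
    resulting feature vectors are concatenated and classified jointly.
    """

    def __init__(self, num_classes: int = 10):
        super().__init__()

        def encoder(in_channels: int) -> nn.Sequential:
            # Small per-modality encoder ending in a 32-dim descriptor.
            return nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),  # global average pool to (N, 32, 1, 1)
            )

        self.eo_encoder = encoder(in_channels=3)  # 3-band EO imagery
        self.ir_encoder = encoder(in_channels=1)  # single-band IR imagery
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, eo: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        eo_feat = self.eo_encoder(eo).flatten(1)   # (N, 32)
        ir_feat = self.ir_encoder(ir).flatten(1)   # (N, 32)
        fused = torch.cat([eo_feat, ir_feat], dim=1)  # feature-level fusion
        return self.classifier(fused)


# Example forward pass on random EO/IR tensors (batch of 4, 64x64 images).
model = TwoStreamFusionCNN(num_classes=5)
eo = torch.randn(4, 3, 64, 64)
ir = torch.randn(4, 1, 64, 64)
logits = model(eo, ir)
print(logits.shape)  # torch.Size([4, 5])
```

Concatenation after per-modality encoding is one simple fusion point; fusing earlier (pixel level) or later (decision level) trades off registration requirements against robustness when one modality degrades, which is the kind of distance-dependent behavior the performance modeling above examines.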
Erik Blasch, Zheng Liu, Yufeng Zheng, Uttam Majumder, Alex Aved, Peter Zulch, "Multisource deep learning for situation awareness," Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880M (14 May 2019); https://doi.org/10.1117/12.2519236