Recent years have seen significant advances in artificial intelligence (AI) and machine learning (ML) technologies applicable to coalition situational understanding (CSU). However, state-of-the-art ML techniques based on deep neural networks require large volumes of training data; unfortunately, representative training examples of situations of interest in CSU are usually sparse. Moreover, to be useful, ML-based analytic services must be capable of explaining their outputs. We describe an integrated CSU architecture that combines neural networks with symbolic learning and reasoning to address the problem of sparse training data. We also demonstrate how explainability can be achieved for deep neural networks operating on multimodal sensor feeds. The work focuses on real-time decision-making settings at the tactical edge, with both the symbolic and neural network parts of the system, including the explainability approaches, able to deal with temporal features.
Situational understanding is impossible without causal reasoning and reasoning under, and about, uncertainty: probabilistic reasoning combined with an assessment of confidence in the uncertainty estimates themselves. We therefore consider the case of subjective (uncertain) Bayesian networks. In previous work we observed that when observations are out of the ordinary, confidence decreases: the relevant training data (the effective instantiations used to determine the probabilities of unobserved variables given the observed ones) forms a significantly smaller subset of the full training data (the total number of instantiations). For the ultimate goal of situational understanding, it is therefore of primary importance to efficiently determine the reasoning paths that lead to low confidence, whenever and wherever it occurs: this can guide targeted data collection exercises to reduce such uncertainty. We propose three methods to this end, and we evaluate them on a case study developed in collaboration with professional intelligence analysts.
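The effect underlying this argument can be sketched numerically. If the estimate of a conditional probability is modeled as a Beta posterior over the effective instantiations (a common move in subjective-logic treatments of uncertain Bayesian networks), the posterior variance grows as the number of relevant instances shrinks, even when the estimated probability itself barely moves. The function name and the uniform Beta(1, 1) prior below are illustrative assumptions for this sketch, not the paper's actual model.

```python
def beta_mean_var(successes: int, failures: int) -> tuple[float, float]:
    """Posterior mean and variance of a Bernoulli parameter under a
    uniform Beta(1, 1) prior, given observed counts (illustrative)."""
    a, b = successes + 1, failures + 1
    n = a + b
    mean = a / n
    var = (a * b) / (n * n * (n + 1))
    return mean, var

# Out-of-the-ordinary observations: only 10 effective instantiations
# remain relevant out of a 1000-instantiation training set.
mean_few, var_few = beta_mean_var(7, 3)        # 10 effective instances
mean_many, var_many = beta_mean_var(700, 300)  # full training set

# Roughly the same probability estimate (~0.7), but far higher
# posterior variance, i.e. far lower confidence, for the sparse case.
print(f"few:  mean={mean_few:.3f}  var={var_few:.5f}")
print(f"many: mean={mean_many:.3f} var={var_many:.5f}")
```

In this toy run the sparse case has a posterior variance roughly two orders of magnitude larger than the full-data case, which is precisely the confidence collapse the paper proposes to localize along reasoning paths.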