Head-mounted displays (HMDs) may prove useful for synthetic training and augmentation of military C5ISR decision-making. Motion sickness caused by such HMD use is detrimental, resulting in decreased task performance or total user dropout. The onset of sickness symptoms is often measured using paper surveys, which are difficult to deploy in live scenarios. Here, we demonstrate a new way to track sickness severity using machine learning on data collected from heterogeneous, non-invasive sensors worn by users who navigated a virtual environment while remaining stationary in reality. We discovered that two models, one trained on heterogeneous sensor data and another trained only on electroencephalography (EEG) data, were able to classify sickness severity with over 95% accuracy and were statistically comparable in performance. Greedy feature optimization was used to maximize accuracy while minimizing the feature subspace. We found that, across models, the features with the most weight had previously been reported in the literature as being related to motion sickness severity. Finally, we discuss how models constructed on heterogeneous vs. homogeneous sensor data may be useful in different real-world scenarios.
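The greedy feature optimization mentioned above can be sketched as simple forward selection: repeatedly add whichever feature most improves a score until no addition helps. This is a minimal illustration, not the authors' implementation; the function name and the toy per-feature scoring weights are our own assumptions standing in for a real classifier's cross-validated accuracy.

```python
def greedy_select(features, score_fn, max_features=None):
    """Greedy forward selection: repeatedly add the single feature that
    most improves the score, stopping when no addition helps."""
    selected, remaining = [], list(features)
    best_score = float("-inf")
    while remaining and (max_features is None or len(selected) < max_features):
        candidate, candidate_score = None, best_score
        for f in remaining:
            s = score_fn(selected + [f])
            if s > candidate_score:
                candidate, candidate_score = f, s
        if candidate is None:
            break  # no remaining feature improves the score
        selected.append(candidate)
        remaining.remove(candidate)
        best_score = candidate_score
    return selected, best_score

# Toy stand-in for classifier accuracy: hypothetical per-feature
# contributions (illustrative values, not measured results).
weights = {"eeg_alpha": 0.6, "heart_rate_var": 0.3, "noisy_channel": -0.1}
score = lambda subset: sum(weights[f] for f in subset)
chosen, final_score = greedy_select(weights, score)
```

Because the harmful `noisy_channel` feature never improves the score, the loop terminates with a smaller feature subspace than the full set, which is the accuracy-vs-dimensionality trade-off the abstract describes.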
Collaborative decision-making remains a significant research challenge, one made even more complicated in real-time or tactical problem contexts. Advances in technology have dramatically improved the ability of computers and networks to support the decision-making process (i.e., intelligence, design, and choice). In the intelligence phase of decision-making, mixed reality (MxR) has shown a great deal of promise through implementations of simulation and training. However, little research has focused on applying MxR to support the entire scope of the decision cycle, let alone collaboratively and in a tactical context. This paper presents the design and initial implementation of the Defense Integrated Collaborative Environment (DICE), an experimental framework for supporting theoretical and empirical research on MxR for tactical decision-making support.
Intelligent agents are devices, software, and simulations that perceive the environment and take actions to achieve a goal through the use of artificial intelligence. These AI agents are increasingly incorporated into every aspect of our lives. This is particularly true for soldiers and analysts, who must increasingly perform tasks in varied, dynamic, and fast-paced operational environments. There is a common expectation that, in the future, the pace of operations will far exceed soldiers' or analysts' ability to react to extreme, complex activities. Accelerated decision-making in Army operations will rely on AI agents and enabling technologies such as autonomous systems and simulations. However, what happens when the decisions from these AI agents are wrong, produce results contrary to expectations, or simply disagree with a person? Explanations can help resolve these issues. Any errors or uncertainty from the AI agent in an accelerated environment will present unique and unforeseen challenges that may inhibit analysts' or soldiers' ability to make decisions effectively and efficiently. Providing explanations for AI outputs, predictions, or behaviors is challenging: algorithms or techniques frequently obfuscate which features matter and how actions are decided. In addition, results from these systems do not always include uncertainty information related to the factors that influenced the actions or decisions. Therefore, explicitly including uncertainty information in the explanation is necessary. We explore the use of abductive reasoning to provide explanations for situations where an agent's answers are not in line with human assessment or lack the uncertainty information needed for human interpretation. The primary goal of this work is to strengthen the communication of information and increase the effectiveness of interactions between humans and non-human agents.
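One way to read the abductive step described above is as selecting the hypothesis that best covers the available observations, with the coverage fraction doubling as a crude uncertainty measure to surface alongside the explanation. The sketch below is a toy illustration under that reading; the hypothesis names, observation labels, and scoring rule are our own assumptions, not the paper's method.

```python
def best_explanation(observations, hypotheses):
    """Toy abductive step: pick the hypothesis that explains the largest
    fraction of the observations, and return that fraction as a crude
    uncertainty estimate to report with the explanation."""
    def coverage(name):
        return len(observations & hypotheses[name]) / len(observations)
    best = max(hypotheses, key=coverage)
    return best, coverage(best)

# Illustrative data: what was observed, and which observations each
# candidate hypothesis would account for (hypothetical scenario).
obs = {"sensor_dropout", "late_report", "route_change"}
hyps = {
    "jamming": {"sensor_dropout", "late_report"},
    "weather": {"late_report"},
}
explanation, confidence = best_explanation(obs, hyps)
```

Reporting `confidence` together with `explanation` is the point of the exercise: the human consumer sees not only the best available explanation but also how much of the evidence it actually accounts for.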
Tone mapping operators compress high dynamic range images to improve the picture quality on a digital display when the dynamic range of the display is lower than that of the image. However, tone mapping operators have been largely designed and evaluated based on the aesthetic quality of the resulting displayed image or how perceptually similar the compressed image appears relative to the original scene. They also often require per-image tuning of parameters depending on the content of the image. In military operations, however, the amount of information that can be perceived is more important than the aesthetic quality of the image, and any parameter adjustment needs to be as automated as possible regardless of image content. We have conducted two studies to evaluate the perceivable detail of a set of tone mapping algorithms, and we apply our findings to develop and test an automated tone mapping algorithm that demonstrates a consistent improvement in the amount of perceived detail. An automated, and thereby predictable, tone mapping method enables a consistent presentation of perceivable features, can reduce the bandwidth required to transmit the imagery, and can improve the accessibility of the data by reducing the needed expertise of the analyst(s) viewing the imagery.
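As background on what a fully automated global operator looks like, consider a generic Reinhard-style curve (this is a standard textbook operator, not the algorithm developed in the paper): the image's own log-average luminance supplies the only scene-dependent quantity, so the mapping runs without per-image tuning.

```python
import math

def tonemap_global(luminance, key=0.18, eps=1e-6):
    """Reinhard-style global operator: scale each pixel by the scene's
    log-average luminance, then compress with L / (1 + L) into [0, 1).
    The log-average is computed from the image itself, so the operator
    runs unattended on arbitrary HDR content."""
    log_avg = math.exp(sum(math.log(eps + l) for l in luminance) / len(luminance))
    scaled = [key * l / log_avg for l in luminance]
    return [l / (1.0 + l) for l in scaled]

# Hypothetical luminances spanning several orders of magnitude.
hdr = [0.01, 0.5, 10.0, 5000.0]
ldr = tonemap_global(hdr)
```

The compressed values are monotonic in the input, so relative brightness ordering (and hence detail ordering) is preserved while the dynamic range collapses to the display's gamut.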