Intelligent agents are devices, software, and simulations that use artificial intelligence to perceive their environment and take actions to achieve a goal. These AI agents are increasingly incorporated into every aspect of our lives. This is particularly true for soldiers and analysts, who must increasingly perform tasks in varied, dynamic, and fast-paced operational environments. There is a common expectation that, in the future, the pace of operations will far exceed soldiers’ or analysts’ ability to react to extreme, complex activities. Accelerated decision making in Army operations will rely on AI agents and enabling technologies such as autonomous systems and simulations. However, what happens when the decisions from these AI agents are wrong, produce results contrary to expectations, or simply disagree with a person’s assessment? Explanations can help resolve these issues. Any errors or uncertainty from an AI agent in an accelerated environment present unique and unforeseen challenges that may inhibit analysts’ or soldiers’ ability to make decisions effectively and efficiently. Providing explanations for AI outputs, predictions, or behaviors is challenging: algorithms and techniques frequently obfuscate which features matter and how actions are decided. In addition, results from these systems do not always include uncertainty information about the factors that influenced the actions or decisions. Explanations therefore need to convey this uncertainty explicitly. We explore the use of abductive reasoning to provide explanations in situations where an agent’s answers are not in line with human assessment or lack the uncertainty information needed for human interpretation. The primary goal of this work is to strengthen the communication of information and increase the effectiveness of interactions between humans and non-human agents.
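
As a minimal illustration only (not the system described here), abductive explanation can be sketched as selecting the hypothesis that best accounts for an observation while reporting an explicit uncertainty score alongside the answer. All hypothesis names, priors, and likelihoods below are invented for the sketch:

```python
# Hypothetical sketch: abductive explanation with explicit uncertainty.
# Every hypothesis name and probability here is illustrative, not real data.

def abduce(observation, hypotheses):
    """Return the hypothesis that best explains the observation,
    with its normalized posterior as an uncertainty measure."""
    # Score each hypothesis by prior * likelihood of the observation.
    scores = {h: prior * likelihood.get(observation, 0.0)
              for h, (prior, likelihood) in hypotheses.items()}
    total = sum(scores.values())
    if total == 0:
        return None, 0.0  # no hypothesis explains the observation
    best = max(scores, key=scores.get)
    return best, scores[best] / total

# Toy knowledge base: hypothesis -> (P(hypothesis), P(observation | hypothesis))
hypotheses = {
    "sensor_fault": (0.2, {"no_signal": 0.9, "weak_signal": 0.4}),
    "jamming":      (0.1, {"no_signal": 0.7, "weak_signal": 0.8}),
    "normal":       (0.7, {"no_signal": 0.01, "weak_signal": 0.1}),
}

explanation, confidence = abduce("no_signal", hypotheses)
```

Returning the confidence together with the chosen explanation lets a human judge whether to accept the agent's answer or to override it, which is the interaction pattern this work aims to support.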