This research is intended to contribute to the development of automated and human-in-the-loop systems for higher-level fusion that respond to the information requirements of command decision making. In tactical situations with short time constraints, the analysis of information requirements may take place in advance for certain classes of problems, and the results may be provided to commanders and their staffs as part of the control and communications systems that come with sensor networks. In particular, certain standing orders may be able to assume the role of Priority Intelligence Requirements. Standing orders to a sensor network are analogous to standing orders to Soldiers. Trained Soldiers presumably do not need to be told, for example, to report contact with hostiles, or to report any sighting of civilians with weapons. Such standing orders define design goals and engineering requirements for sensor networks and their control and inference systems. Because such standing orders can be defined in advance for a class of situations, they minimize the need for situation-specific human analysis. Thus, standing orders should be able to drive automatic control of some network functions, automated fusion of sensor reports, and automated dissemination of fused information. We define example standing orders and, drawing on our experience in multisensor fusion, outline an algorithm for responding to one of them.
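The idea of standing orders driving automated dissemination can be sketched as predicate-action rules evaluated as fused sensor reports arrive. This is a hypothetical illustration under assumed report fields (`type`, `armed`, `uniformed`, `hostile`), not the algorithm outlined in the paper:

```python
# Hypothetical sketch: a standing order as a predicate over a fused sensor
# report plus a dissemination message, checked automatically per report.
STANDING_ORDERS = [
    # "Report any sighting of civilians with weapons."
    (lambda r: r["type"] == "person" and r["armed"] and not r["uniformed"],
     "alert: armed civilian sighted"),
    # "Report contact with hostiles."
    (lambda r: r["type"] == "person" and r.get("hostile", False),
     "alert: hostile contact"),
]

def process_report(report):
    """Return the alert messages triggered by a single fused sensor report."""
    return [msg for predicate, msg in STANDING_ORDERS if predicate(report)]

report = {"type": "person", "armed": True, "uniformed": False, "hostile": False}
assert process_report(report) == ["alert: armed civilian sighted"]
```

Because the rules are fixed in advance for a class of situations, they can be evaluated without situation-specific human analysis, which is the point of the analogy above.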
This research is part of a proposed shift in emphasis in decision support from optimality to robustness. Computer simulation is emerging as a useful tool in planning courses of action (COAs). Simulations require domain models, but there is an inevitable gap between models and reality: some aspects of reality are not represented at all, and what is represented may contain errors. As models are aggregated from multiple sources, the decision maker is further insulated from even an awareness of model weaknesses. To realize the full power of computer simulations to support decision making, decision support systems should help the planner explore the robustness of COAs in the face of potential weaknesses in simulation models.
This paper demonstrates a method of exploring the robustness of a COA with respect to specific model assumptions whose accuracy the decision maker might doubt. The domain is peacekeeping in a country where three different demographic groups co-exist in tension. An external peacekeeping force strives to achieve stability, an improved economy, and a higher degree of democracy in the country. A proposed COA for such a force is simulated multiple times while the assumptions are varied, and a visual data analysis tool is used to explore COA robustness. The aim is to help the decision maker choose a COA that is likely to succeed even in the face of potential errors in the models' assumptions.
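The "simulate repeatedly while varying the assumptions" procedure can be sketched as a parameter sweep. The simulation function here is an invented stand-in (outcome quality degrading with an assumed hostility parameter), not the paper's peacekeeping model:

```python
import random
import statistics

def simulate_coa(coa_strength, hostility):
    # Hypothetical stand-in for the peacekeeping simulation: outcome quality
    # in [0, 1] degrades as the uncertain assumption (hostility) grows.
    noise = random.gauss(0, 0.05)
    return max(0.0, min(1.0, coa_strength - 0.8 * hostility + noise))

def robustness_sweep(coa_strength, hostility_values, runs_per_value=30):
    """Run the simulation repeatedly at each assumed value of the uncertain
    parameter, returning the mean outcome per value."""
    results = {}
    for h in hostility_values:
        outcomes = [simulate_coa(coa_strength, h) for _ in range(runs_per_value)]
        results[h] = statistics.mean(outcomes)
    return results

random.seed(0)
sweep = robustness_sweep(0.9, [0.1, 0.3, 0.5])
# A robust COA keeps acceptable outcomes even at pessimistic assumption values.
worst_case = min(sweep.values())
```

In the paper, the per-value outcomes would feed a visual data analysis tool rather than a single worst-case number; the sweep structure is the same.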
The ability of contemporary military commanders to estimate and understand complicated situations already suffers from information overload, and the situation can only grow worse. We describe a prototype application that uses abductive inferencing to fuse information from multiple sensors, evaluating the evidence for higher-level hypotheses that are close to the levels of abstraction needed for decision making (approximately JDL levels 2 and 3). Abductive inference (abduction, inference to the best explanation) is a pattern of reasoning that occurs naturally in diverse settings such as medical diagnosis, criminal investigations, scientific theory formation, and military intelligence analysis. Because abduction is part of common-sense reasoning, implementations of it can produce reasoning traces that are readily understandable to humans. Automated abductive inferencing can be deployed to augment human reasoning, taking advantage of computation to process large amounts of information and to bypass the limits of human attention and short-term memory.
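One minimal way to sketch inference to the best explanation is to score each hypothesis by how much of the observed evidence it would explain, weighted by its prior plausibility. The hypothesis names, reports, and scoring rule below are invented for illustration and are not the prototype's actual knowledge base or algorithm:

```python
def best_explanation(hypotheses, reports):
    """Pick the hypothesis with the highest coverage * plausibility score.
    hypotheses: {name: (plausibility, set of report ids it would explain)}
    reports: set of observed sensor report ids."""
    def score(item):
        name, (plausibility, explains) = item
        coverage = len(explains & reports) / len(reports)
        return coverage * plausibility
    return max(hypotheses.items(), key=score)[0]

reports = {"muzzle_flash", "vehicle_noise", "crowd_dispersal"}
hypotheses = {
    "ambush_forming":   (0.6, {"muzzle_flash", "vehicle_noise", "crowd_dispersal"}),
    "civilian_protest": (0.8, {"crowd_dispersal"}),
    "routine_traffic":  (0.9, {"vehicle_noise"}),
}
# ambush_forming explains all three reports (score 0.6), beating the more
# plausible but less explanatory alternatives (0.27 and 0.30).
assert best_explanation(hypotheses, reports) == "ambush_forming"
```

The trade-off the scores make explicit, between explanatory coverage and prior plausibility, is also what makes the reasoning trace easy for a human to audit.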
We illustrate the workings of the prototype system by describing an example of its use for small-unit military operations in an urban setting. Knowledge was encoded as it might be captured prior to engagement through the standard military decision-making process (MDMP) and analysis of the commander's priority intelligence requirements (PIR). The system is able to reasonably estimate the evidence for higher-level hypotheses based on information from multiple sensors. Its inference processes can be examined closely to verify correctness, and decision makers can override conclusions at any level, with changes propagating appropriately.