We developed the Situation awareness-based Agent Transparency (SAT) model to support human operators' situation awareness of the mission environment through teaming with intelligent agents. The model includes the agent's current actions and plans (Level 1), its reasoning process (Level 2), and its projection of future outcomes (Level 3). Human-in-the-loop simulation experiments have been conducted (Autonomous Squad Member and IMPACT) to illustrate the utility of the model for human-autonomy team interface designs. Across studies, the results consistently showed that human operators' task performance improved as the agents became more transparent, and that operators perceived more transparent agents as more trustworthy.
Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to operate with consistently perfect reliability. Even systems that are less than 100% reliable can provide significant benefit to humans, but this benefit depends on a human operator's ability to understand a robot's behaviors and states. The notion of system transparency is examined as a vital aspect of robotic design for maintaining humans' trust in, and reliance on, increasingly automated platforms. System transparency is described as the degree to which a system's action, or the intention behind an action, is apparent to human operators and/or observers. While the physical designs of robotic systems have been shown to greatly influence humans' impressions of robots, the determinants of transparency between humans and robots are not solely robot-centric. Our approach considers transparency as an emergent property of the human–robot system. In this paper, we present insights from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots. These near-futuristic teams are those in which robot agents will autonomously collaborate with humans to achieve task goals. This paper demonstrates how factors such as human–robot communication and human mental models of robots affect a human's ability to recognize the actions or states of an automated system. Furthermore, we discuss the implications of system transparency for other critical HRI factors such as situation awareness, operator workload, and perceptions of trust.