Intelligent autonomous robotic systems (unmanned aerial/ground/surface/underwater vehicles) are attractive for military applications because they relieve humans from tedious or dangerous tasks. These systems require awareness of the environment and of their own performance to reach a mission goal. This awareness enables them to adapt their operations to unexpected changes in the environment and to uncertainty in assessments. Components of an autonomous system cannot rely on perfect awareness or actuator execution, and mistakes in one component can affect the entire system. A robust system therefore requires a system-wide approach, together with a realistic model of all aspects of the system and its environment. In this paper, we present our study on the design and development of a fully functional autonomous system, consisting of sensors, observation processing and behavior analysis, an information database, a knowledge base, communication, planning processes, and actuators. The system behaves as a teammate of a human operator and can perform tasks independently with minimal interaction. It keeps the human informed about relevant developments that may require human assistance, and the human can always redirect the system with high-level instructions. The communication behavior is implemented as a Social AI Layer (SAIL). The autonomous system was tested in a simulation environment to support rapid prototyping and evaluation. The simulation is based on the Robot Operating System (ROS), with fully modelled sensors and actuators, and the 3D graphics-enabled physics simulation software Gazebo. In this simulation, various flying and driving autonomous systems can execute their tasks in a realistic 3D environment with scripted or user-controlled threats. The results show the performance of autonomous operation as well as the interaction with humans.
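The component pipeline described above (sensors, observation processing, planning, actuators, plus a human-teammate interface) can be sketched as a minimal sense-plan-act loop. This is purely an illustration under assumed names; the classes and methods below are hypothetical, not the system or the SAIL implementation described in the paper.

```python
# Minimal illustrative sense-plan-act loop. All class and method names are
# assumptions for illustration, not the paper's actual architecture.

class AutonomousSystem:
    def __init__(self, sensors, planner, actuators, operator):
        self.sensors = sensors      # produce raw observations
        self.planner = planner      # turns the world model into actions
        self.actuators = actuators  # execute planned actions
        self.operator = operator    # human-teammate interface (cf. SAIL)
        self.world_model = {}       # stands in for information/knowledge base

    def step(self):
        # Sense: gather observations into the world model.
        for sensor in self.sensors:
            self.world_model.update(sensor.observe())
        # Inform: notify the human only when assistance may be needed.
        if self.world_model.get("needs_assistance"):
            self.operator.notify(self.world_model)
        # The human can always redirect the system with a high-level instruction.
        instruction = self.operator.poll_instruction()
        # Plan and act, taking any human instruction into account.
        action = self.planner.plan(self.world_model, instruction)
        self.actuators.execute(action)
```

In practice each box (sensor models, planner, actuators) would be a separate ROS node exchanging messages, with Gazebo providing the simulated physics, but the control flow per cycle follows this shape.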
In recent years, advances in machine learning methods such as deep learning have led to significant improvements in our ability to track people and vehicles, and to recognise specific individuals. Such technology has enormous potential to enhance the performance of image-based security systems. However, widespread use of such technology has important legal and ethical implications, not least for individuals' right to privacy. In this paper, we describe a technological approach to balancing the two competing goals of system efficacy and privacy. We describe a methodology for constructing a “goal function” that reflects the operator's preferences for detection performance and anonymity. This goal function is combined with an image-processing system that provides tracking and threat-assessment functionality and a decision-making framework that assesses the potential value gained by providing the operator with de-anonymized images. The framework takes a probabilistic approach, combining user preferences, a world-state model, possible user actions, and threat-mitigation effectiveness, and suggests the user action with the largest estimated utility. We show results of operating the system in a perimeter-protection scenario.
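The decision step described above, suggesting the user action with the largest estimated utility under a probabilistic world-state model, can be sketched as a standard expected-utility maximisation. The state probabilities, candidate actions, and the toy goal function below are illustrative assumptions, not the paper's actual model or numbers.

```python
# Hypothetical sketch of the expected-utility decision step.
# States, actions, and utilities are illustrative assumptions only.

def expected_utility(action, states, utility):
    """Expected utility of an action over a probabilistic world-state model."""
    return sum(p * utility(action, s) for s, p in states.items())

def best_action(actions, states, utility):
    """Suggest the action with the largest estimated utility."""
    return max(actions, key=lambda a: expected_utility(a, states, utility))

# Assumed world-state model: probability that a tracked person is a threat.
states = {"threat": 0.3, "benign": 0.7}

# Assumed operator actions in a perimeter-protection setting.
actions = ["keep_anonymous", "de_anonymize"]

# Toy goal function trading detection performance against anonymity:
# de-anonymizing helps mitigate a threat (+1.0) but always costs privacy (-0.2).
def utility(action, state):
    if action == "de_anonymize":
        return (1.0 if state == "threat" else 0.0) - 0.2
    return 0.0

print(best_action(actions, states, utility))  # with these numbers: de_anonymize
```

The operator's preferences enter through the utility (goal) function; shifting the privacy cost or the threat probability changes which action the framework suggests.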