The military is looking to adopt artificial intelligence (AI)-based computer vision for autonomous systems and decision support. This transition requires test methods to ensure the safe and effective use of such systems. Performance assessment of deep learning (DL) models, such as object detectors, typically requires extensive datasets. Simulated data offers a cost-effective alternative for generating large image datasets without the need for access to potentially restricted operational data. However, to effectively use simulated data as a virtual proxy for real-world testing, the suitability and appropriateness of the simulation must be evaluated. This study evaluates the use of simulated data for testing DL-based object detectors, focusing on three key aspects: comparing performance on real versus simulated data, assessing the cost-effectiveness of generating simulated datasets, and evaluating the accuracy of simulations in representing reality. Using two automotive datasets, one publicly available (KITTI) and one internally developed (INDEV), we conducted experiments with both real and simulated versions. We found that although simulations can approximate real-world performance, evaluating whether a simulation accurately represents reality remains challenging. Future research should focus on developing validation approaches that are independent of real-world datasets to enhance the reliability of simulations in testing AI models.
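To make the first aspect concrete, the following Python sketch compares detector performance on a real test set and on a simulated counterpart using precision and recall at an IoU threshold of 0.5. It is a minimal illustration rather than the study's evaluation pipeline: the image identifiers, bounding boxes, and detections are placeholder values, not KITTI or INDEV data, and a full evaluation would typically use a metric such as mean average precision over the complete test sets.

# Illustrative sketch (not from the paper): compare detector output on a real
# test set versus its simulated counterpart via IoU-matched precision/recall.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def precision_recall(detections, ground_truth, iou_thr=0.5):
    """Greedy one-to-one matching of detections to ground-truth boxes per image."""
    tp = fp = fn = 0
    for image_id, gt_boxes in ground_truth.items():
        unmatched = list(gt_boxes)
        for det in detections.get(image_id, []):
            best = max(unmatched, key=lambda g: iou(det, g), default=None)
            if best is not None and iou(det, best) >= iou_thr:
                unmatched.remove(best)  # matched ground truth: true positive
                tp += 1
            else:
                fp += 1                 # no sufficiently overlapping ground truth
        fn += len(unmatched)            # ground truth left unmatched: missed objects
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Placeholder annotations and detections for one image per test set.
real_gt  = {"real_000001": [(100, 120, 180, 200)]}
real_det = {"real_000001": [(102, 118, 182, 205)]}
sim_gt   = {"sim_000001":  [(100, 120, 180, 200)]}
sim_det  = {"sim_000001":  [(130, 150, 210, 230)]}  # deliberately offset placeholder box

for name, det, gt in [("real", real_det, real_gt), ("simulated", sim_det, sim_gt)]:
    p, r = precision_recall(det, gt)
    print(f"{name:10s} precision={p:.2f} recall={r:.2f}")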
Intelligent robotic autonomous systems (unmanned aerial/ground/surface/underwater vehicles) are attractive for military application to relieve humans from tedious or dangerous tasks. These systems require awareness of the environment and of their own performance to reach a mission goal. This awareness enables them to adapt their operations to handle unexpected changes in the environment and uncertainty in assessments. Components of the autonomous system cannot rely on perfect awareness or actuator execution, and mistakes made by one component can affect the entire system. To obtain a robust system, a system-wide approach is needed, together with a realistic model of all aspects of the system and its environment. In this paper, we present our study on the design and development of a fully functional autonomous system, consisting of sensors, observation processing and behavior analysis, an information database, a knowledge base, communication, planning processes, and actuators. The system behaves as a teammate of a human operator and can perform tasks independently with minimal interaction. The system keeps the human informed about relevant developments that may require human assistance, and the human can always redirect the system with high-level instructions. The communication behavior is implemented as a Social AI Layer (SAIL). The autonomous system was tested in a simulation environment to support rapid prototyping and evaluation. The simulation is based on the Robot Operating System (ROS), with fully modelled sensors and actuators, and on the 3D graphics-enabled physics simulation software Gazebo. In this simulation, various flying and driving autonomous systems can execute their tasks in a realistic 3D environment with scripted or user-controlled threats. The results show the performance of autonomous operation as well as the interaction with humans.
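As a purely illustrative sketch of the simulation stack mentioned above, the following minimal ROS 1 (rospy) node subscribes to a camera stream from a Gazebo-simulated platform and publishes simple status messages intended for a human operator. The topic names, node name, and message contents are assumptions made for illustration; the paper's Social AI Layer (SAIL), planning, and behavior-analysis components are not represented here.

#!/usr/bin/env python
# Illustrative sketch only: a minimal ROS 1 node bridging a Gazebo-simulated
# camera topic and a human-operator status topic. Topic names are assumed.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String

class OperatorStatusNode:
    def __init__(self):
        # Assumed topic names; adjust to the simulated platform's actual topics.
        self.status_pub = rospy.Publisher("/operator/status", String, queue_size=10)
        rospy.Subscriber("/uav/camera/image_raw", Image, self.on_image)
        self.frame_count = 0

    def on_image(self, msg):
        # Placeholder for observation processing / behavior analysis; here we
        # only report that imagery is being received from the simulation.
        self.frame_count += 1
        if self.frame_count % 100 == 0:
            self.status_pub.publish(String(
                data="received %d frames (%dx%d)" % (self.frame_count, msg.width, msg.height)))

if __name__ == "__main__":
    rospy.init_node("operator_status_node")
    OperatorStatusNode()
    rospy.spin()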