The Army Research Laboratory’s Robotics Collaborative Technology Alliance is a program intended to change robots from tools that soldiers use into teammates with whom soldiers can work. This requires the integration of fundamental and applied research in robotic perception, intelligence, manipulation, mobility, and human-robot interaction. In this paper, we present the results of assessments conducted in 2016 to evaluate the capabilities of a new robot, the Robotic Manipulator (RoMan), and of a cognitive architecture (ACT-R). The RoMan platform was evaluated on its ability to conduct a search and grasp task under a variety of conditions. Specifically, it was required to search for and recognize a gas can placed on the floor, and then pick it up. The RoMan showed the potential to be a good platform for autonomous manipulation, but the autonomy used in these experiments will require improvement to make full use of the platform’s capabilities. The cognitive architecture was evaluated on how well it could learn to select an appropriate set of features for a classification task. The task was to classify emotions that had been encoded using the Facial Action Coding System, with ACT-R learning to select the most effective set of features for correct classification. ACT-R learned rules that required it to observe about half of the available features to make a decision, and the subsequent decisions had an accuracy ranging from 76% to 93% (depending on the emotion).
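The feature-selection behavior described above can be illustrated with a minimal sketch. This is not the ACT-R production system itself; the emotion prototypes, the action-unit numbers, the observation order, and the confidence threshold are all illustrative assumptions. The sketch shows the core idea: observe features one at a time and commit to a classification after seeing only a subset of them.

```python
# Hypothetical sketch of sequential feature selection for emotion
# classification over FACS-style action units (AUs). Illustrative only;
# the prototypes and threshold are assumptions, not the paper's model.

# Toy emotion prototypes expressed as sets of active AUs.
PROTOTYPES = {
    "happiness": {6, 12},
    "surprise": {1, 2, 5, 26},
    "anger": {4, 5, 7, 23},
}

def classify(observed_aus, order, threshold=0.75):
    """Observe AUs one at a time (in a learned order) and stop as soon
    as one emotion's match score crosses the threshold.

    Returns (emotion, number_of_features_observed)."""
    seen = set()
    for au in order:
        seen.add(au)
        for emotion, proto in PROTOTYPES.items():
            relevant = proto & seen            # prototype AUs examined so far
            matched = relevant & observed_aus  # of those, how many are present
            if relevant and len(matched) / len(proto) >= threshold:
                return emotion, len(seen)
    return None, len(seen)

# A face showing AUs 6 and 12 is classified as "happiness" after
# observing only 2 of the 9 candidate features in this toy ordering.
emotion, n_seen = classify({6, 12}, [6, 12, 1, 2, 4, 5, 7, 23, 26])
```

Under a utility-learning account, the observation order and stopping threshold would themselves be learned from classification feedback, which is what allows a decision after observing only a fraction of the available features.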
The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess the progress of this research in standalone as well as integrated form. Because the research is still evolving, and robotic platforms with unique mobility and dexterous manipulation are in early development and very expensive, an alternative approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging that is unavailable by many other means. This paper explores the maturity of robotic simulation systems for application to real-world problems in robotic systems research. Open-source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all four domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models.
To capture the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models that provide the necessary simulation fidelity. However, the Perception domain remains the most problematic for adequate simulation performance, due to the often cartoonish nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real time.