As the component technologies for unmanned aerial vehicles mature, increased attention is being paid to the problem of
command and control. Many UAVs, even small lightweight versions, are seeing significant operational time as a result
of the Iraq war, and consequently, users are becoming increasingly proficient with the platform technologies and are
considering new and more elaborate tactics, techniques, and procedures (TTPs), as well as concepts of operations
(CONOPS), for their use, both individually and in teams. This paper presents one such concept and summarizes the
progress made toward that goal in a recent research program. In particular, the means by which a team of UAVs can
be considered a tactical information resource is investigated, and initial experimental results are summarized.
Proc. SPIE. 4741, Battlespace Digitization and Network-Centric Warfare II
KEYWORDS: Mathematical modeling, Defense and security, Control systems, Software development, Embedded systems, Simulink, Control systems design, Systems modeling, Model-based design, Device simulation
The network-centric 'system-of-systems' concept popular in current defense programs has been viewed from a very functional perspective. However, the heart of such a system is going to be an embedded software infrastructure of unprecedented complexity, and the technology for developing and testing this software needs at least as much immediate attention as the concept of operations for the envisioned applications. Such an embedded software system will need to be infinitely scalable, modular, verifiable, and distributed, yet satisfy the myriad hard real-time performance constraints imposed by each of perhaps many different device types and service demands. It is suggested here that the only path to a robust design methodology for such systems is with model-based design. Model-based embedded system design is the focus of the Model-Based Integration of Embedded Software (MoBIES) program, currently underway at the Defense Advanced Research Projects Agency (DARPA), managed by the author. This paper will motivate the model-based approach to large-scale embedded software design and explain how projects funded under MoBIES are contributing to the development of interoperable model-based design tool components. An application for such technology is provided in the context of digital flight control systems for aggressive aircraft maneuvers, which is the subject of another DARPA sponsored program, Software-Enabled Control (SEC).
Regardless of the level of autonomy provided by the individual robots, teams of mobile robots will always require human command and control for tasking, monitoring, error remediation, and coordination. Multiple mobile robot systems are hard real-time multi-agent systems and are often employed in hazardous or inaccessible environments. Thus, it is necessary that human-robot interfaces (HRIs) be developed and tailored specifically to multiple mobile robot systems. These HRIs will be different from single-robot control interfaces because of the heterogeneity of the robots, the multi-tasking nature of the application, and the divided attention demands on the operator. In addition to the requisite command and control functions of the unit (e.g., a graphical user interface, communications network management, and sensor data acquisition channels), the HRI will require efficient multi-agent planning and tasking interfaces and a feedback display and execution monitoring system able to fuse diverse data. In this paper we will discuss the characteristics of multirobot HRIs and present data that should guide the design of such interfaces. In particular, we will discuss the design of multi-modal command/feedback interfaces, alert-driven presentation managers, and graphical user interfaces. Experiments and ongoing work from the DARPA Tactical Mobile Robot (TMR) program will be presented that indicate current progress in the design and implementation of such systems.
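The alert-driven presentation manager mentioned above can be illustrated with a small sketch: feedback from many robots competes for a single operator display, and higher-priority alerts preempt routine status traffic. This is an assumed illustration, not the TMR program's implementation; the class name, priority scheme, and fields are all hypothetical.

```python
import heapq

class PresentationManager:
    """Hypothetical alert-driven presentation manager: robots post
    alerts with priorities, and the operator display always shows
    the most urgent pending item first."""

    def __init__(self):
        self._queue = []   # min-heap keyed on (-priority, arrival order)
        self._seq = 0      # tie-breaker so equal priorities stay FIFO

    def post(self, robot_id, priority, message):
        """A robot (or its agent) posts feedback for the operator."""
        heapq.heappush(self._queue, (-priority, self._seq, robot_id, message))
        self._seq += 1

    def next_alert(self):
        """Pop the most urgent pending alert, or None if the queue is empty."""
        if not self._queue:
            return None
        _, _, robot_id, message = heapq.heappop(self._queue)
        return robot_id, message
```

A routine "battery low" posted before a high-priority "collision imminent" would still be displayed second, which is the essence of alert-driven (rather than arrival-order) presentation.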
A differentially steered three-wheeled vehicle has proven to be an effective platform for outdoor navigation. Many applications for this vehicle configuration, including planetary exploration and landmine/UXO location, require accurate localization. In spite of known problems, odometry, also called dead reckoning, remains one of the least expensive and most popular methods for localization. This paper presents the results of an investigation into the benefits of instrumenting the rear caster wheel to supplement the drive wheel encoders in odometry. A linear observer is used to fuse the data between the drive wheel encoders and the caster data. This method can also be extended using the standard form of the Kalman filter to allow for noise. Improvements in position estimation in the face of common problems such as slip and dimensional errors are quantified.
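The fusion idea can be sketched as follows. This is a simplified stand-in for the paper's linear observer, not its actual implementation: the static blend gain `alpha`, the track width, and the assumption that the caster provides a slip-free forward-speed measurement are all illustrative.

```python
import math

TRACK = 0.5  # assumed distance between the two drive wheels (m)

def fuse_speed(v_drive, v_caster, alpha=0.7):
    """Static blend standing in for the linear observer: weight the
    caster measurement, which is unaffected by drive-wheel slip."""
    return (1.0 - alpha) * v_drive + alpha * v_caster

def dead_reckon(pose, v_left, v_right, v_caster, dt):
    """Advance the pose (x, y, theta) one time step for a
    differentially steered vehicle, using the fused forward speed."""
    x, y, theta = pose
    v_drive = 0.5 * (v_left + v_right)        # speed from drive encoders
    omega = (v_right - v_left) / TRACK        # yaw rate from drive encoders
    v = fuse_speed(v_drive, v_caster)         # slip-corrected forward speed
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)
```

When the drive wheels slip (encoders report 1.0 m/s while the caster reports the true 0.8 m/s), the fused estimate is pulled toward the caster value, which is the benefit the paper quantifies.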
The advent of more complex mechatronic systems in industry has introduced new opportunities for entry-level and practicing engineers. Today, a select group of engineers is striving to become more knowledgeable in a wide variety of technical areas, both mechanical and electrical. A new curriculum in mechatronics developed at Virginia Tech is starting to bring students from both the mechanical and electrical engineering departments together, providing them with an integrated perspective on electromechanical technologies and design. The course is cross-listed and team-taught by faculty from both departments. Students from different majors are grouped together throughout the course, each group containing at least one mechanical and one electrical engineering student. This gives group members the ability to learn from one another while working on labs and projects.
The Autonomous Vehicle Team of Virginia Tech is an undergraduate and graduate research and design project in the College of Engineering at Virginia Tech. The goal of the team is self-education and research in the area of autonomous vehicles and navigation systems. The team also hopes to achieve the complementary goal of winning the annual unmanned ground robotics competition, which is sponsored by the Association of Unmanned Vehicle Systems International. The competition rules require that the entrants compete in two events. The design competition is an assessment of the vehicle control system architecture, component design and implementation, and overall vehicle design. The dynamic competition requires that the autonomous vehicles circumnavigate an outdoor obstacle course. Vehicle performance is evaluated based on the distance traveled through the course and the ability to avoid obstacles. In order to build effective autonomous vehicles and enrich the interdisciplinary curriculum of the university, the Autonomous Vehicle Team of Virginia Tech is composed of faculty advisors and undergraduate and graduate students drawn from engineering and communications studies. Team members may choose to join more than one of four major technical subgroups of the team and thus gain exposure to technologies they would otherwise not encounter in the conventional curriculum of their respective departments. This paper is an account of the education and research of the interdisciplinary team and the means by which these activities are executed.
The subsumption architecture is used to provide an autonomous vehicle with the means to stay within the boundaries of a course while avoiding obstacles. A three-layered network has been devised incorporating computer vision, ultrasonic ranging, and tactile sensing. The computer vision system operates at the lowest level of the network to generate a preliminary vehicle heading based upon detected course boundaries. The network's next level performs long-range obstacle detection using an array of ultrasonic sensors. The range map created by these sensors is used to augment the preliminary heading. At the highest level, tactile sensors are used for short-range obstacle detection and serve as an emergency response to obstacle collisions. The computer vision subsystem is implemented on a personal computer, while both ranging systems reside on a microcontroller. Sensor fusion within a subsumption framework is also executed on the microcontroller. The resulting outputs of the subsumption network are actuator commands to control steering and propulsion motors. The major contribution of this paper is as a case study of the application of the subsumption architecture to the design of an autonomous ground vehicle.
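The three-layer arrangement can be sketched in a few lines. This is an illustrative assumption about how the layers compose, not the microcontroller code itself; the sensor representations, gains, and thresholds are invented for the sketch, and the real system emits motor commands rather than a bare heading.

```python
def vision_layer(course_boundaries):
    """Lowest layer: preliminary heading (deg) from detected course
    boundaries, given as (left, right) lateral clearances in meters."""
    left, right = course_boundaries
    return 10.0 * (right - left)             # steer toward the open side

def sonar_layer(heading, range_map, threshold=1.5):
    """Middle layer: augment the heading when the ultrasonic range map
    (list of (bearing_deg, range_m) pairs) shows a nearby obstacle."""
    nearest_bearing, nearest_range = min(range_map, key=lambda rm: rm[1])
    if nearest_range < threshold:
        return heading - nearest_bearing      # veer away from the obstacle
    return heading                            # otherwise pass vision through

def tactile_layer(heading, bumper_hit):
    """Highest layer: on contact, subsume the lower layers entirely."""
    if bumper_hit:
        return 180.0                          # emergency reversal command
    return heading

def subsumption_step(course_boundaries, range_map, bumper_hit):
    """One pass through the network: each layer may override the one below."""
    h = vision_layer(course_boundaries)
    h = sonar_layer(h, range_map)
    return tactile_layer(h, bumper_hit)
```

The key subsumption property shown here is that higher layers do not negotiate with lower ones: the tactile layer simply replaces the output when it fires.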
The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture; i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world-model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.
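The match-and-post cycle with a non-persistent message board can be sketched minimally. The ternary string encoding with `#` wildcards is a common LCS convention assumed here; the paper's actual encoding, and its discovery mechanism (rule generation), are not shown.

```python
def matches(condition, message):
    """A classifier condition matches a message position by position,
    with '#' acting as a don't-care wildcard."""
    return all(c == '#' or c == m for c, m in zip(condition, message))

def lcs_step(classifiers, board):
    """One time interval of the LCS: every rule (condition, action)
    whose condition matches a posted message posts its action message.
    The previous board is then wiped clean, so no state persists on
    the shared board between intervals."""
    new_board = []
    for condition, action in classifiers:
        if any(matches(condition, msg) for msg in board):
            new_board.append(action)
    return new_board            # old messages are discarded entirely
```

Because the board is rebuilt from scratch each interval, extending the LCS to the DLCS amounts to broadcasting these short per-interval messages between agents instead of keeping them internal, which is why the scheme needs so little bandwidth.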