Mobile robots are prime examples of systems that must exhibit a high degree of autonomy. They are often loosely supervised by humans
who are not intimately familiar with the inner workings of the robot. We generally cannot predict in advance the exact environmental conditions in which the robot will operate. This means that the behavior
must be adapted in the field. Untrained individuals cannot (and
probably should not) program the robot to effect these changes. We
need a system that will (a) allow re-tasking, and (b) allow adaptation of the behavior to the specific conditions in the field. In this paper we concentrate on (b). We will describe how to assemble
controllers from high-level descriptions of the behavior. We will show how the resulting behavior can be tuned by a human who does not know how the underlying code is put together. We will also show how this tuning can be done automatically, using reinforcement learning, and point out the problems that must be overcome for this approach to work.
Fielded mobile robot systems will inevitably suffer hardware and software failures. Failures in a single subsystem can often disable the entire robot, especially if the controlling application does not consider such failures. Often simple measures, such as a software restart or the use of a secondary sensor, can solve the problem. However, these fixes must generally be applied by a human expert, who might not be present in the field. In this paper, we describe a recovery-oriented framework for mobile robot applications which
addresses this problem in two ways. First, fault isolation automatically provides graceful degradation of the overall system as individual software and hardware components fail. Second, subsystems are monitored for known failure modes and aberrant behavior. The framework responds to detected or imminent failures by restarting or replacing the suspect component in a manner transparent to the application programmer and the robot's operator.
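The monitor-and-restart behavior described above can be sketched as a simple supervisor loop. This is an illustrative sketch only, not the framework's actual API: the `Component` wrapper, `heartbeat_ok` check, and `max_restarts` policy are all hypothetical names chosen for the example.

```python
class Component:
    """Hypothetical wrapper around a monitored subsystem (e.g., a sensor driver)."""

    def __init__(self, name):
        self.name = name
        self.failures = 0

    def heartbeat_ok(self):
        # A real monitor would check liveness and known failure signatures here.
        return True

    def restart(self):
        # A real restart would re-launch the driver process; here we just count it.
        self.failures += 1


def supervise(components, max_restarts=3):
    """One pass of a supervisor loop: restart suspect components, and isolate
    (drop) those that keep failing so the rest of the system degrades gracefully."""
    healthy = []
    for c in components:
        if c.heartbeat_ok():
            healthy.append(c)
        elif c.failures < max_restarts:
            c.restart()          # transparent to the application and operator
            healthy.append(c)
        # else: the component is isolated; the application continues without it
    return healthy
```

A component that repeatedly fails its heartbeat is restarted up to `max_restarts` times and then silently dropped, which is one simple way to realize both graceful degradation and transparent recovery in the same loop.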
Writing control code for mobile robots can be a very time-consuming process. Even for apparently simple tasks, it is often difficult to specify in detail how the robot should accomplish them. Robot control code is typically full of magic numbers that must be painstakingly set for each environment in which the robot must operate. The idea of having a robot learn how to accomplish a task, rather than being told explicitly, is an appealing one. It seems easier and much more intuitive for the programmer to specify what the robot should be doing, and let it learn the fine details of how to do it. In this paper, we describe JAQL, a framework for efficient learning on mobile robots, and present the results of using it to learn control policies for simple tasks.
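As a rough illustration of the kind of learning involved (a generic tabular Q-learning sketch, not JAQL itself), the agent improves a value table from experience instead of being given the control details. The `env_step(state, action)` interface below is an assumed stand-in for the robot's environment; real robot tasks have continuous state spaces and need function approximation, which is precisely where a dedicated framework earns its keep.

```python
import random


def q_learning(env_step, n_states, n_actions, episodes=200,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning sketch. `env_step(s, a)` is an assumed interface
    returning (next_state, reward, done)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < epsilon:            # explore
                a = random.randrange(n_actions)
            else:                                    # exploit current estimate
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = env_step(s, a)
            # standard Q-learning backup toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

On a toy task (e.g., a short corridor where one action moves toward a goal), the learned table comes to prefer the goal-directed action at every state, with no hand-set magic numbers describing the route itself.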
With the increasing amount of computer power available in civilian flight decks, it is becoming feasible to use some of this power for non-flight-critical systems. One area that could benefit greatly from additional computer assistance is the pilot-machine interface. We describe the ARCHIE project, an attempt to make man-machine interfaces more robust and reliable. The initial target areas of this project are civilian glass-cockpit flight decks and air traffic control stations.