Recent studies of the types, numbers, and roles of robotic systems for use in the Space Exploration Initiative (SEI), with a focus on planet surface systems (PSS), are summarized in this paper. These high-level systems engineering, modeling, and analysis activities have supported trade studies and the development of preliminary requirements for intelligent systems, including supervised autonomous robotic systems. The analyses are summarized, results are presented, and conclusions and recommendations are made. One conclusion is that SEI will be `enabled' by the use of supervised intelligent systems on the planet surfaces. These intelligent systems include capabilities for control and monitoring of all elements, including supervised autonomous robotic systems. With the proper level of intelligent systems, the number and skills of humans on the planet surface will be determined predominantly by surface science and technology objectives and requirements rather than by outpost objectives and requirements. Current studies also indicate a broad range of robotic system uses in Earth orbit or during space transport, including the assembly of very large spacecraft systems such as propulsion systems and aerobraking structures; maintenance is another robotic system use being studied. The requirements for these and other space robotic systems are contrasted with those for current industrial robotic systems, and improvements in safety, reliability, and maintainability for these remote systems are stressed. Space robotics, especially systems being developed to operate on planetary surfaces, can be considered a form of the emerging terrestrial technology of field robotics. The solutions developed to make the exploration of our solar system possible and practical will apply to the many Earth-based problems that require operating in hazardous environments, and to critically improving human productivity in many fields.
For the last several years, NASA has been developing a telerobotic system as part of the Flight Telerobotic Servicer (FTS) Project at Goddard Space Flight Center. A development test flight of a robotic manipulator, labeled DTF-1, was planned for a shuttle mission in 1993. The purpose was to evaluate the design of the manipulator and workstation and to correlate system performance in space with ground tests. Although the funding for DTF-1 was eliminated in September 1991, the design of the DTF-1 system has been completed and flight hardware is now in different stages of development, with some items, such as the gripper, already built, qualified, and delivered. With its manipulator, gripper, cameras, computer, and operator control station, the DTF-1 system design incorporates the fundamental building blocks of the original FTS, the end product of which was to have been a lightweight, dexterous telerobotic device that would evolve into an autonomous robot. The approach was to adapt current teleoperation and robotic technologies into a system that could operate in space. This was a new undertaking for NASA, one that had never been done before and was full of challenges. This paper describes the DTF-1 system design and discusses the technical, operational, and safety considerations that affected the design. It also discusses the `lessons' that were learned during the design and early development stages, in an effort to capture some of the knowledge from the program.
Autonomous landing capabilities will be critical to the success of planetary exploration missions, and in particular to the exploration of Mars. Past studies have indicated that the probability of failure associated with open-loop landings is unacceptably high. Two approaches to achieving autonomous landings with higher probabilities of success are currently under analysis. If a landing site has been certified as hazard free, then navigational aids can be used to facilitate a precision landing. When only limited surface knowledge is available and landing areas cannot be certified as hazard free, then a hazard detection and avoidance approach can be used, in which the vehicle selects hazard free landing sites in real-time during its descent. Issues pertinent to both approaches, including sensors and algorithms, are presented. Preliminary results indicate that one promising approach to achieving high accuracy precision landing is to correlate optical images of the terrain acquired during the terminal descent phase with a reference image. For hazard detection scenarios, a sensor suite comprised of a passive intensity sensor and a laser ranging sensor appears promising as a means of achieving robust landings.
This paper discusses the use of system identification of manipulator dynamics (especially manipulator and payload mass properties) for improvement of manipulator performance. The results presented are based on realistic error bounds on the sensor data. While there are many differences between terrestrial and space robots, one aspect in particular is most important from the standpoint of the development of effective control laws: payload range. For a typical industrial manipulator, the payload never exceeds 10% of the mass of the manipulator; in many cases it is far less. For manipulators employed in space applications, however, the payload mass may be orders of magnitude larger than that of the manipulator. While this has an obvious impact on the performance of the control system, only now is this problem being seriously addressed.
This paper surveys past architectures accommodating autonomy and projects future directions in these architectures. In recent years, research toward autonomous systems has been stimulated by Space Station Freedom, SDI, and DARPA's Strategic Computing Initiative. More recently, the Mars Rover studies and the Human Exploration Initiative are driving the needs for onboard computer systems which provide either autonomous or supervised autonomous operations. While early work focused on defining functional requirements for such systems and on developing algorithms for each functional element, current research focuses on integrated sensorimotor control and on techniques to assure that the processing architectures executing these onboard functions respect well-defined volume, weight, and power budgets. The success of programs which demonstrate autonomous systems, such as the Martin Marietta Autonomous Land Vehicle, as well as large-scale laboratory demonstrations of supervised autonomy, shows this can be done. Integration requires many disciplines to be jointly considered: vision, planning, control, computer systems, and platform management. The systems engineering discipline needed to balance the design imperatives of each within a well-engineered solution must advance as well. One of the intriguing aspects of this problem is that the approach and resulting architecture must accommodate changes to the mission and to associated key mission timing parameters. Therefore, the ease of evolving both the architecture and the mission contributes design imperatives of its own. This paper discusses processing architectures for autonomy and lessons learned in our past work, the impact of emerging techniques such as neural networks, and our recent work to exploit custom hardware to accommodate the increased number and complexity of onboard functions required for autonomous platforms while respecting stringent volume, weight, and power considerations.
Processing requirements for the cooperative robot systems necessary for space application are well beyond those of ground-based systems. The volume, weight, and power constraints for space-based robots restrict the designer's choices. This paper discusses the use of custom electronics to resolve the processing issues while maintaining the timing requirements for such systems. There are many robotic algorithms suitable for ASIC implementation -- inverse kinematic solution, state estimation, Riccati's equation, Kalman filtering, force/torque sensing, and force/torque prediction/reflection. We present a solution to a force/torque sensing problem.
Autonomous and teleautonomous operations have been defined in a variety of ways by different groups involved with remote robotic operations. For example, Conway describes architectures for producing intelligent actions in teleautonomous systems. Applying neural nets in such systems is similar to applying them in general. However, for autonomy, learning or learned behavior may become a significant system driver. Thus, artificial neural networks are being evaluated as components in fully autonomous and teleautonomous systems. Feedforward networks may be trained to perform adaptive signal processing, pattern recognition, data fusion, and function approximation -- as in control subsystems. Certain components of particular autonomous systems become more amenable to implementation using a neural net due to a match between the net's attributes and desired attributes of the system component. Criteria have been developed for distinguishing such applications and then implementing them. The success of hardware implementation is a crucial part of this application evaluation process. Three basic applications of neural nets -- autoassociation, classification, and function approximation -- are used to exemplify this process and to highlight procedures that are followed during the requirements, design, and implementation phases. This paper assumes some familiarity with basic neural network terminology and concentrates upon the use of different neural network types while citing references that cover the underlying mathematics and related research.
This paper addresses the impact of neural networks on autonomous systems. Several neural network models are used to illustrate the effectiveness and suitability of these networks for space exploration, with their fault tolerance and self-learning capabilities serving as examples of that suitability. The advantages and disadvantages of using neural networks in autonomous systems are discussed and contrasted with the conventional systems currently in use.
An evaluation system called the associative rule memory (ARM) that operates with an interactive or automatic planner in a robot-based world, such as the world of the NASA Flight Telerobotic Servicer (FTS), is described. The ARM is constructed from a neural network model called a Boltzmann Machine, and ranks alternative robotic actions based on the probability that the action works as expected in achieving a desired effect. The system is experience-based, and can predict the probability of achieving a desired effect for robotic actions that have not been explicitly tested in the past. The ARM is designed to quickly and efficiently find robotic actions with a high probability of producing a given desired effect. This paper details the construction of the ARM for the NASA FTS robotic environment. Examples are also provided that demonstrate the use of the ARM within a current NASA symbolic planning system.
Despite progress in visual servo control of robot motions, to date the corresponding motion planning problem has not been investigated. In this paper, we present an implemented planner for the special case of a polyhedral world, extending previous preimage-type planners to exploit visual constraint surfaces in a fixed-camera robotic system featuring closed-loop visual servo control. We present the mathematics of a hybrid (visual/position feedback) resolved-rate motion control strategy for executing these plans, featuring projection equations defined solely in terms of a small set of observable parameters that are directly obtained from our calibration process. We conclude with experimental results, a description of ongoing research, and the contribution of our work to date.
Many structures in space exhibit nonlinear behavior. These nonlinearities arise due to coupling of rigid body and structural flexibility effects. Large angle attitude maneuvers such as slewing, tracking, or precision pointing (under certain operating conditions) involve nonlinear dynamics associated with rigid body kinematics. The resulting dynamical equations of motion of such systems are coupled and highly nonlinear. However, the current state of the art in design of control laws for these systems is based on linear control theory. Nonlinear systems, therefore, need to be linearized before attitude control laws can be applied. Recent research in control theory has led to nonlinear control laws capable of completely decoupling the flexible and rigid body modes. The goal of our research effort is to assess the effectiveness of these linearizing and decoupling control laws. The results of this effort will serve to provide techniques for modeling complex flexible nonlinear space systems such as space robotic systems. In general, the paper addresses nonlinear inversion based on feedback linearization and robust stabilization of the unobserved dynamics of the linearized and decoupled system. Simulation results with some interesting insights are presented for space systems subject to rapid, large angle slewing and precision pointing. The contribution of this research lies in the development of a unified approach for modeling of a nonlinear multibody flexible system based on nonlinear inversion. The approach is valid for modeling space robotic systems, which are nonlinear multilink flexible systems.
The method of cell-to-cell mapping for nonlinear dynamics analysis has been receiving increasing attention in recent years. The possibility of using cell-to-cell mapping for optimal control problems has also been explored by several researchers. In this paper we present results of applying cell-to-cell mapping methods to solve the problem of optimal trajectory generation for coordinated robotic manipulators handling a common object along specified geometric paths. This method uses the cell state concept together with discretized controls and cost functions. The optimal trajectories and the corresponding controls are determined by using the cell-to-cell mapping in a simple search process.
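As an illustrative sketch only (the abstract's formulation is for coordinated manipulators on geometric paths; here a one-dimensional integrator stands in, and all names are hypothetical), the cell state concept with discretized controls and costs can be searched as follows:

```python
import heapq

import numpy as np

def cell_mapping_optimal(f, x_min, x_max, n_cells, controls, dt, start, goal, cost):
    """Cell-to-cell mapping: each (cell, control) pair maps to a successor
    cell under the dynamics f, and a Dijkstra search over cells recovers a
    minimum-cost control sequence from the start cell to the goal cell."""
    width = (x_max - x_min) / n_cells

    def cell_of(x):
        return int(np.clip((x - x_min) / width, 0, n_cells - 1))

    def center(c):
        return x_min + (c + 0.5) * width

    start_c, goal_c = cell_of(start), cell_of(goal)
    dist, prev = {start_c: 0.0}, {}
    pq = [(0.0, start_c)]
    while pq:
        d, c = heapq.heappop(pq)
        if c == goal_c:
            break
        if d > dist.get(c, np.inf):
            continue
        for u in controls:
            nc = cell_of(center(c) + dt * f(center(c), u))  # successor cell
            nd = d + cost(center(c), u)
            if nd < dist.get(nc, np.inf):
                dist[nc], prev[nc] = nd, (c, u)
                heapq.heappush(pq, (nd, nc))
    seq, c = [], goal_c
    while c != start_c:
        c, u = prev[c]
        seq.append(u)
    return list(reversed(seq))

# Drive a single integrator x' = u across ten cells at minimum control effort.
seq = cell_mapping_optimal(lambda x, u: u, 0.0, 1.0, 10, [-1.0, 0.0, 1.0],
                           0.1, 0.05, 0.95, lambda x, u: abs(u))
print(seq)
```

The same cell/control discretization carries over to higher-dimensional joint spaces, at the cost of cell counts that grow exponentially with dimension.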
Inverse kinematics of robotic manipulators poses a challenging problem, especially near singular configurations where the joint velocities tend to become extremely high, even if the minimum-norm pseudo-inverse solution is used. The singularity robust inverse (SRI), which arises from the damped least-squares technique, damps joint velocities using a damping factor but causes some deviation of the end-effector from its specified trajectory. The trade-off between obtaining an accurate solution and a feasible one is decided by the damping factor. In this work, we present a new optimal method of computing the damping factor which yields minimum end-effector deviation while ensuring feasible joint velocities. This method is computationally efficient with an added advantage in that it can be implemented even if the SVD of the Jacobian is not available. The method is effective for both planar and spatial manipulators, redundant or non-redundant. This is borne out by the simulations presented at the end of this paper.
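The damped least-squares computation underlying the SRI can be written directly (a generic illustration, not the authors' optimal damping-factor selection method; the function name and example Jacobian are invented for the demonstration):

```python
import numpy as np

def sri_joint_velocities(J, xdot, damping):
    """Damped least-squares (singularity robust inverse) solution
    qdot = J^T (J J^T + damping^2 I)^(-1) xdot, which bounds joint
    velocities near singularities at the cost of some end-effector
    trajectory deviation."""
    m = J.shape[0]
    A = J @ J.T + damping ** 2 * np.eye(m)
    return J.T @ np.linalg.solve(A, xdot)

# Near-singular planar Jacobian (arm almost fully extended).
J = np.array([[1.0, 1.0],
              [0.0, 1e-4]])
xdot = np.array([0.0, 0.1])
undamped = np.linalg.pinv(J) @ xdot           # joint rates blow up
damped = sri_joint_velocities(J, xdot, 0.05)  # joint rates stay feasible
print(np.linalg.norm(undamped), np.linalg.norm(damped))
```

Note that this form never computes an SVD of the Jacobian, consistent with the implementability point made in the abstract.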
As robots are called upon to perform increasingly complex tasks, such as intelligent assembly, maintenance, and inspection in space, the problems of efficient task planning, contingency detection, and recovery become increasingly difficult. While work has progressed toward meeting some of these goals for robotic systems another, for the most part unrelated, area of research in artificial intelligence, qualitative reasoning (QR), has simultaneously been addressing similar reasoning tasks for different classes of complex systems. This paper presents some techniques for bridging the gap between the two research areas, QR representations for reasoning about robotic systems, and examples demonstrating some successful results.
We define the model based matching problem in terms of the correspondence and transformation that relate the model and scene and the search and evaluation measures needed to find the best correspondence and transformation. Simulated annealing is proposed as a method for search and optimization, and the minimum representation size criterion is used as the evaluation measure in an algorithm that finds the best correspondence. An algorithm based on simulated annealing is presented and evaluated. This algorithm is viewed as a part of an adaptive, hierarchical approach which provides robust results for a variety of model based matching problems.
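A generic simulated-annealing correspondence search of this kind can be sketched as below; a plain sum-of-squared-distances energy stands in for the minimum representation size criterion, and all names are hypothetical:

```python
import math
import random

import numpy as np

def anneal_correspondence(model, scene, n_iter=5000, t0=1.0, alpha=0.999):
    """Search over model-to-scene correspondences by simulated annealing,
    proposing random swaps and accepting uphill moves with Metropolis
    probability exp(-dE/T) under a geometrically cooling temperature."""
    rng = random.Random(0)
    n = len(model)
    perm = list(range(n))

    def energy(p):
        return sum(float(np.sum((model[i] - scene[p[i]]) ** 2)) for i in range(n))

    e = energy(perm)
    best, best_e = perm[:], e
    temp = t0
    for _ in range(n_iter):
        i, j = rng.randrange(n), rng.randrange(n)
        perm[i], perm[j] = perm[j], perm[i]
        e_new = energy(perm)
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new
            if e < best_e:
                best, best_e = perm[:], e
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo rejected swap
        temp *= alpha
    return best, best_e

model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
scene = model[[2, 0, 3, 1]]  # same points, shuffled
best, best_e = anneal_correspondence(model, scene)
print(best, best_e)
```

In the full matching problem the energy would also account for the transformation between model and scene, with the representation-size criterion trading residual error against model complexity.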
Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.
Construction, repair, and maintenance of space-based structures will require extensive planning of operations in order to effectively carry out these tasks. The path planning algorithm described here is a general approach to generating paths that guarantee collision avoidance for a single chain nonredundant or redundant robot. The algorithm uses a graph search of feasible points in position space followed by a local potential field method that guarantees collision avoidance among objects, structures, and the robot arm as well as the conformance to joint limit constraints. This algorithm is novel in its computation of goal attractive potential fields in Cartesian space and computation of obstacle repulsive fields in robot joint space. These effects are combined to generate robot motion. Computation is efficiently implemented through the computation of the robot arm Jacobian and not the full inverse arm kinematics. These planning algorithms have been implemented and evaluated using existing space-truss designs and are being integrated into the RPI-CIRSSE testbed environment.
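The combination described — a goal-attractive potential computed in Cartesian space and a repulsive potential computed in joint space, merged through the arm Jacobian rather than full inverse kinematics — might be sketched as follows (hypothetical names; joint-limit repulsion stands in for obstacle repulsion):

```python
import numpy as np

def potential_field_step(q, J, x, x_goal, q_lo, q_hi,
                         k_att=1.0, k_rep=0.5, step=0.01):
    """One motion step combining a goal-attractive field computed in
    Cartesian space (pulled into joint space through the Jacobian
    transpose, avoiding full inverse kinematics) with a repulsive field
    computed in joint space near the joint limits."""
    f_att = -k_att * (x - x_goal)  # Cartesian attraction toward the goal
    tau_att = J.T @ f_att          # mapped to joint space via J^T
    # Repulsion grows without bound as either joint limit is approached.
    tau_rep = k_rep * (1.0 / (q - q_lo + 1e-6) - 1.0 / (q_hi - q + 1e-6))
    return q + step * (tau_att + tau_rep)

# Two joints with an identity Jacobian for illustration (so x == q).
q = np.array([0.5, 0.5])
goal = np.array([0.6, 0.5])
q_new = potential_field_step(q, np.eye(2), q, goal, np.zeros(2), np.ones(2))
print(q_new)
```

In the planner described, such local steps refine a path found first by a graph search over feasible points, which guards against the local minima that pure potential descent can fall into.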
The spherically extended polytope (S-tope) model is an extension of the polytope model: objects are represented as the convex hull of a set of spheres. An efficient algorithm for calculating the distance between S-topes is presented. The S-tope model is particularly well suited for modeling dextrous manipulators and other anthropomorphic forms. As a practical example, the Utah/MIT dextrous manipulator is represented using this method, and the time to calculate distances between the manipulator and various objects is measured.
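For the simplest S-topes — a single generating sphere, and a capsule generated by two spheres of equal radius — the distance computation reduces to a point-segment distance minus the radii; a minimal sketch (not the paper's general convex-hull algorithm):

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment from a to b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def stope_distance(center, r_sphere, a, b, r_capsule):
    """Separation between a one-sphere S-tope (a sphere) and a two-sphere
    S-tope (a capsule); negative values indicate interpenetration."""
    return point_segment_distance(center, a, b) - r_sphere - r_capsule

d = stope_distance(np.array([0.0, 3.0, 0.0]), 0.5,
                   np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), 0.5)
print(d)  # → 2.0
```

Capsules of this kind are a natural fit for manipulator links, which is why the representation suits anthropomorphic forms such as the Utah/MIT hand.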
Despite the recent progress in the motion and force control of multiple manipulators, there has been a continuing question about which physical force should be and can be controlled. The frequently used orthogonal decomposition is plagued by a unit inconsistency problem, rendering its physical interpretation difficult if not impossible. This paper introduces a new and intuitively appealing concept of internal force, and shows how its regulation can be handled by the existing approaches.
Intelligent robots equipped with sophisticated sensors and online reasoning capabilities are expected to play major roles in future space operations. Such robots use their sensors to perceive their surroundings, reason about what action to take next, and execute the resulting action plan. We extended the capability of our initial system and successfully demonstrated the extended system in the control of a PUMA 762 robot, equipped with a wrist-mounted CCD camera and a wrist-mounted force sensor, in placing an orbital replacement unit (ORU) onto its base at Goddard Space Flight Center. We employed a simple vision subsystem in both systems. The vision subsystem proceeds as follows: (1) digitizes images, (2) extracts compact homogeneous regions, (3) matches these regions against the stored object model, and (4) computes the pose of the object. While developing these systems, we found that the performance of the vision subsystem is critical to the overall success of the ORU placement task. Two of the most important performance issues demanding further analysis are the robustness of the vision subsystem under various lighting conditions and the accuracy of the computed 3-D pose of objects. In this paper, we report on analyses of the vision subsystem. We found that a simple adaptive global thresholding method works quite well for images taken under various lighting conditions. We also found that the pose computed using the quadrangle method is reasonable for real images, although its accuracy can be significantly reduced when image feature noise is present. We found that this problem can be solved by using a 3-D marker and an alternative pose computation algorithm. Simulation results are included.
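The abstract does not specify which adaptive global thresholding method was used; an isodata-style iteration is one common choice, sketched here under that assumption:

```python
import numpy as np

def adaptive_global_threshold(image, tol=0.5):
    """Isodata-style global threshold: starting from the mean gray level,
    iterate T <- average of the two class means until convergence.  The
    threshold adapts to overall scene brightness, assuming a roughly
    bimodal (object/background) intensity distribution."""
    t = float(image.mean())
    while True:
        lo, hi = image[image <= t], image[image > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

img = np.array([10.0] * 50 + [200.0] * 50)  # dark background, bright object
print(adaptive_global_threshold(img))  # → 105.0
```

Because the threshold is recomputed from each image's own statistics, a uniform brightening or dimming of the scene shifts the threshold with it, which is what makes such a method robust across lighting conditions.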
The ability to automatically locate objects using vision is a key technology for flexible, intelligent robotic operations. The vision task is facilitated by placing optical targets or markings in advance on the objects to be located. A number of researchers have advocated the use of circular target features as the features that can be most accurately located. This paper describes extensive analysis on circle centroid accuracy using both simulations and laboratory measurements. The work was part of an effort to design a video positioning sensor for NASA's Flight Telerobotic Servicer that would meet accuracy requirements. We have analyzed the main contributors to centroid error and have classified them into the following: (1) spatial quantization errors, (2) errors due to signal noise and random timing errors, (3) surface tilt errors, and (4) errors in modeling camera geometry. It is possible to compensate for the errors in (3) given an estimate of the tilt angle, and the errors from (4) by calibrating the intrinsic camera attributes. The errors in (1) and (2) cannot be compensated for, but they can be measured and their effects reduced somewhat. To characterize these error sources, we measured centroid repeatability under various conditions, including synchronization method, signal-to-noise ratio, and frequency attenuation. Although these results are specific to our video system and equipment, they provide a reference point that should be a characteristic of typical CCD cameras and digitization equipment.
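An intensity-weighted centroid of the kind analyzed can be computed as below (a generic sketch; the video positioning sensor's details are not reproduced):

```python
import numpy as np

def weighted_centroid(image):
    """Intensity-weighted centroid of a bright target region.  Subpixel
    accuracy is limited mainly by spatial quantization and signal noise,
    the two error sources the analysis finds cannot be compensated for."""
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    w = image.astype(float)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# A symmetric synthetic blob centered on pixel (2, 2).
img = np.zeros((5, 5))
img[2, 2] = 4.0
img[1, 2] = img[3, 2] = img[2, 1] = img[2, 3] = 1.0
cx, cy = weighted_centroid(img)
print(cx, cy)  # → 2.0 2.0
```

Because the estimate averages over every pixel in the target, its quantization error shrinks with target diameter, which is one reason circular features can be located to subpixel accuracy.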
Many computer vision tasks can be simplified if special image features are placed on the objects to be recognized. A review of special image features that have been used in the past is given, and a new image feature, the concentric contrasting circle, is presented. The concentric contrasting circle has several advantages: it is easily manufactured; it is easily and robustly extracted from the image (true targets are found while few false targets are reported); it is a passive feature; and its centroid is completely invariant to the three translational and one rotational degrees of freedom and nearly invariant to the remaining two rotational degrees of freedom. There are several examples of existing parallel implementations which perform most of the extraction work. Extraction robustness was measured by recording the probability of correct detection and the false alarm rate in a set of images of scenes containing mockups of satellites, fluid couplings, and electrical components. A typical application of concentric contrasting circle features is to place them on modeled objects for monocular pose estimation or object identification. This feature is demonstrated against a visually challenging background: a specular but wrinkled surface similar to a multilayered insulation spacecraft thermal blanket.
To support the study of dynamics and control for long-reach, space-based manipulators, an experimental planar manipulator has been developed. The arm has a 15 ft reach with flexible links at the shoulder and elbow joints. The arm's equations of motion are derived with the aid of TREETOPS, a multibody dynamics analysis program. The resulting model is validated against experimental data. To serve as a baseline for future work, two classically designed controllers have been implemented. One relies on sensors collocated with the joint actuators, while the other uses an end-point sensor measuring Cartesian displacements. Comparison of the controllers' experimental closed-loop responses demonstrates the performance improvements achievable using end-point position feedback; most notable is the more than twofold increase in control bandwidth. Experimental and simulation results also demonstrate the end-point controller's improved `Cartesian impedance.'
A vision technique applicable to the six degree-of-freedom navigation of free-flying space robots is discussed. The technique consists of a feature finder, which matches points in the environment with image locations, and a recursive estimator based on the extended Kalman filter, which uses measurements of the image locations to update the state estimate recursively. Experimental results are presented which demonstrate the convergence and state tracking properties of the system. Results include the finding that a vision navigator can be implemented with current off-the-shelf equipment if a sufficiently simple object in the environment acts as the navigation target.
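The recursive measurement update at the core of such an estimator can be sketched as follows (a generic Kalman update with an assumed linear measurement model, not the paper's full extended filter; names are illustrative):

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """One Kalman measurement update: a measured image location z with
    measurement matrix H refines the state estimate x and covariance P."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # corrected state
    P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x, P

# Recursively estimate a fixed 2-D position from repeated noisy observations.
rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0])
x, P = np.zeros(2), np.eye(2) * 10.0  # vague prior
H, R = np.eye(2), np.eye(2) * 0.1     # direct, noisy measurements
for _ in range(50):
    z = truth + rng.normal(scale=0.3, size=2)
    x, P = kf_update(x, P, z, H, R)
print(x, np.trace(P))
```

In the extended filter, H would be the Jacobian of the camera projection evaluated at the current estimate, so each matched feature point tightens the six degree-of-freedom pose estimate in the same way.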
Space based robotic systems require sensing technology that is robust, flexible, and lightweight. This paper presents a sensing system concept utilizing miniature fiber optic sensors capable of being integrated directly into robotic end effectors. This system is capable of providing range (distance) and pose (orientation) measurements independent of all lighting conditions, including direct sunlight. Range measurements are achieved via a frequency modulated laser radar scheme which utilizes the fiber sensor end reflection as the local oscillator. Range sensors can be multiplexed via a fiber optic switch to provide pose information or configured as a 'smart skin' for collision avoidance applications. Force measurements are also provided via an interferometric path length matching geometry in which the length of a miniature compliant sensing cell is determined and converted to an applied force. The sensing system's coherent optical configuration provides for flexibility in sensor allocation and immunity from environmental perturbations while allowing a single controller to be shared among a large number of sensors for efficient situation assessment and robotic control.
This paper describes new methods for dealing with uncertainty in the position and orientation of objects in the world model in the context of robot teleoperation. A virtual obstacle (object) is defined to represent an object together with its uncertainty bounds; it is constructed by the operator, using a geometric database and a 6-d.o.f. input device while viewing video displays, and is updated as the cameras move. Also, a new method to build the world model, called `flying-and-matching', is introduced. Experiments have been performed with human subjects to evaluate the proposed schemes.
This paper provides a description of the Robotic Research Program being conducted at the Lockheed Research and Development Division Laboratories. It details the approach taken to fuse autonomy with teleoperative control. The component/enabling technologies are defined and the status of the development of those technologies is reported. CASE tools used in an accelerated development environment are identified and discussed.
This paper describes a concept called the virtual image and discusses its applications to telerobotics. The virtual image is a graphical tool which allows the operator to see the environment from any desired perspective. This concept proves critical in two situations: (1) when a sensor from which the operator needs information (e.g., a wrist camera) is obscured and the information is not directly provided by any other sensor, and (2) when the task requires moving a selected object relative to another object which is itself moving. These two problems are discussed in detail along with the method by which the virtual image concept offers a solution. A simulation was created in which the virtual image concept was tested under a varying set of parameters for a teleoperational task. The parameter categories are camera/graphical views, graphical display content, written display content, and operator interface. The simulation is described in detail along with the results and conclusions. In addition, the possibility of introducing automation as an aid for teleoperation is discussed and a relevant simulation scenario is analyzed. Throughout the paper, the real-world applicability of these ideas is emphasized.
A project to develop a telerobotic `virtual control' capability, currently underway at the University of Toronto, is described. The project centers on a new mode of interactive telerobotic control based on the technology of combining computer generated stereographic images with remotely transmitted stereoscopic video images. A virtual measurement technique, in conjunction with a basic level of digital image processing, comprising zooming, parallax adjustment, edge enhancement, and edge detection has been developed to assist the human operator in visualization of the remote environment and in spatial reasoning. The aim is to maintain target recognition, tactical planning, and high-level control functions in the hands of the human operator with the computer performing low-level computation and control. Control commands initiated by the operator are implemented through manipulation of a virtual image of the robot system, merged with a live video image of the remote scene. This paper discusses the philosophy and objectives of the project, with emphasis on the underlying human factor considerations in the design, and reports the progress made to date in this effort.
A new method is proposed for tele-controlling the movement of a remote robot when the movement involves contact with objects and the control involves a significant time delay. In this method, a vision system is incorporated into the tele-autonomous system. The vision system feeds visual sensory information back to update the local world model and implements a relative-move mode to control the remote robot. This method effectively overcomes some of the limitations of current tele-robot control systems.
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a `script' of preprogrammed commands. These commands usually assume that the location of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task space database is difficult for operators using Cartesian coordinates alone. This paper describes the interactive scene analysis module (ISAM) developed to provide task space database initialization and verification utilizing 3-D graphic overlay modeling, video imaging, and laser radar based range imaging. Through the fusion of task space database information and image sensor data, a verifiable task space model is generated providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.
The problem of providing model-based proximity cues using force reflection for teleoperation under time delay is addressed. A novel use of artificial potential fields is proposed as a teleoperator aid to efficiently provide a predictive tactile display. Several new artificial potential models are presented which are used to convey accurate shape and proximity information by generating handcontroller forces based on the potential gradient. These new potential gradients are shown to have an efficient implementation via exact computation as a neural network. A real-time prototype implementation and integration with Martin Marietta's teleautonomous testbed is discussed. Evaluations are made with human operators performing tasks using industrial manipulators under time delay scenarios.
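The gradient-to-force idea above can be made concrete with one classic artificial potential model (the Khatib-style repulsive field; the paper itself develops several new models not reproduced here). The reflected hand-controller force is the negative gradient of the potential, so it grows smoothly as the tool point approaches an obstacle and vanishes outside the influence distance.

```python
import math

def repulsive_force(p, obstacle, k=1.0, d0=0.5):
    """Classic repulsive artificial potential (illustrative only):
    U(d) = 0.5*k*(1/d - 1/d0)**2 for d < d0, else 0, where d is the
    distance from tool point p to the obstacle and d0 is the
    influence distance.  The reflected force is -grad U, pushing the
    operator's hand away from the obstacle as proximity increases."""
    diff = [a - b for a, b in zip(p, obstacle)]
    d = math.hypot(*diff)
    if d >= d0 or d == 0.0:
        return [0.0] * len(diff)
    # -dU/dd = k*(1/d - 1/d0)/d**2, directed along the unit vector diff/d
    mag = k * (1.0 / d - 1.0 / d0) / d ** 2
    return [mag * c / d for c in diff]
```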
This paper presents an analysis of the mechanics of multifingered grasps of planar and solid objects. Squeezing and frictional effects between the fingers and the grasped object are fully characterized through our approach. An algorithm for qualitatively choosing the grasp points is developed based on the mechanics of grasping. It is shown further that our method can be easily extended to the soft-fingered grasp model, in which torsional moments along the contact normals can be transmitted through the grasp points.
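A standard building block of such grasp-point analyses is the Coulomb friction cone; the following 2-D two-finger squeeze test is an illustrative sketch of that idea, not the paper's algorithm. A squeezing grasp can be held without slip when the line joining the two contact points lies inside both friction cones, i.e. makes an angle of at most atan(mu) with each inward contact normal.

```python
import math

def antipodal_grasp_ok(p1, n1, p2, n2, mu):
    """Illustrative 2-D antipodal squeeze check (Coulomb friction,
    point contacts, coefficient mu): the segment joining the contacts
    must lie inside both friction cones of half-angle atan(mu) about
    the inward contact normals n1 and n2."""
    half_angle = math.atan(mu)

    def angle(u, v):
        c = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
        return math.acos(max(-1.0, min(1.0, c)))

    line12 = (p2[0] - p1[0], p2[1] - p1[1])   # contact 1 toward contact 2
    line21 = (-line12[0], -line12[1])          # contact 2 toward contact 1
    return angle(line12, n1) <= half_angle and angle(line21, n2) <= half_angle
```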
This paper presents a multiarmed robotic testbed for space servicing applications. Additional developments in the testbed have increased the intelligent capabilities of the overall robotic system. The testbed includes an autonomous diagnosis capability which is required for advanced space servicing tasks. Modifications to the testbed components that facilitate the additional autonomous operations are presented. The NASREM-compliant architecture of the testbed is key to the ease of testbed upgrade and experiment definition. Seamless integration of machine intelligence for autonomous operation and teleoperator control is key to the testbed's utility in exploring the design of multiarm cooperative intelligent space robots.
An automated structures assembly testbed developed by Langley Research Center to study the practical problems associated with the automated assembly of large space structures using robotic manipulators is described. Emphasis is placed on the requirements and features of system upgrades and their impact on system performance, flexibility, and reliability. The current research program is aimed at evolving the baseline assembly system into a flexible, robust, sensor-based system capable of assembling more complex truss-supported satellite systems. To achieve this objective, five system upgrades have been developed: a machine vision capability to eliminate taught robot-arm positions; an on-board end-effector microprocessor to reduce communications; a second-generation end-effector to construct contoured trusses for antennas and install payloads; the installation of reflector-type panels on the truss to produce a complete and functional system; and an expert system to significantly reduce the amount of software code required for system operation and provide greater flexibility in implementing new features.
The ground-based demonstrations of the Extra Vehicular Activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, are designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) that have accidentally separated from the space station. The EVA Retriever software is required to autonomously plan and execute a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles, with subsequent object handover. The software architecture incorporates a hierarchical decomposition of the control system that is horizontally partitioned into five major functional subsystems: sensing, perception, world model, reasoning, and acting. The design provides for supervised autonomy as the primary mode of operation. It is intended to be an evolutionary system, improving in capability over time as it earns crew trust through reliable and safe operation. This paper gives an overview of the hardware, a focus on the software, and a summary of results achieved recently from both computer simulations and air bearing floor demonstrations. Limitations of the technology used are evaluated. Plans for the next phase, in which moving targets and obstacles drive real-time behavior requirements, are discussed.
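One tick of the horizontally partitioned hierarchy described above can be sketched as a simple pipeline; every callable here is a hypothetical stand-in for the corresponding subsystem, not the EVA Retriever's actual software. The supervised-autonomy point is that an operator command, when present, preempts the autonomous planner at the reasoning stage.

```python
def control_cycle(sense, perceive, update_world, plan, act, world,
                  operator_cmd=None):
    """Minimal sketch of one control tick through the five functional
    subsystems (sensing, perception, world model, reasoning, acting);
    all callables are assumed stand-ins.  A supplied operator_cmd
    overrides the planner, modeling supervised autonomy."""
    features = perceive(sense())            # sensing -> perception
    world = update_world(world, features)   # perception -> world model
    cmd = operator_cmd if operator_cmd is not None else plan(world)
    act(cmd)                                # reasoning -> acting
    return world, cmd
```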
Although significant technical challenges still remain, perhaps the pacing item in the development of space telerobotics is the definition of a programmatic need, that is, demonstrating capabilities that can lead to a body of committed and eager users in the operational community. In the parallel area of extra-vehicular activity (EVA), such capabilities are routinely demonstrated through the use of neutral buoyancy simulation. This paper addresses the underlying rationale behind the use of neutral buoyancy simulation in telerobotics research, including details of the well-modeled dynamic environment; the existence of a sizable database on EVA operations in neutral buoyancy with correlation to flight experience; and routine access to a number of high-fidelity mockups of past and planned operational spacecraft. Details are presented on the compromises necessary for the design and construction of neutral buoyancy telerobotics systems, and data are summarized from a number of past simulations, including correlation of neutral buoyancy structural assembly with EVA flight data and some preliminary tests of telerobotic servicing of the Hubble Space Telescope. Finally, details are given of a future telerobotic vehicle currently under detailed design, aimed at providing low-cost space flight data on telerobotics and extending the EVA paradigm of neutral buoyancy simulation, based on known correlation factors to flight data.