The robotic testbed facility within Rensselaer's Center for Intelligent Robotic Systems for Space Exploration consists of two PUMA robot manipulators, each mounted on a moving platform. Each platform has tilt and rotate capability and can move along a two-rail linear track. One purpose of this system is to demonstrate the coordination of two arms in relative motion to each other. The system is also useful for investigating the integration problems that occur in an environment that is heavily dependent upon the acquisition, transmission, and processing of data in the form of information, knowledge, and control commands.
This work presents the dynamic equations of motion for two or more cooperating manipulators on a freely moving mobile platform. The formulation includes the full dynamic interactions from arms to platform and arm tip to arm tip, as well as the possible translation and rotation of the platform. The system of cooperating robot manipulators forms a closed kinematic chain in which the forces of interaction are included in the simulation of robot and platform dynamics. The forces of interaction are outputs of the analysis, giving force/torque sensor values at the tip of each manipulator. The equations of motion are shown to be identical in structure to the fixed-platform cooperative manipulator dynamics. The structure of the closed-chain dynamics allows the use of any solution for the open topological tree of base and manipulator links; because of the large number of links, linear recursive methods become more efficient.
Various approaches to three-dimensional vision are reviewed, including passive and active techniques. Emphasis is on the redundant 3-D vision system designed for CIRSSE, which uses a controllable subset of five cameras, programmable structured-light patterns, and sophisticated calibration routines. The purpose is the design and development of a 3-D vision system which can evaluate the space environment and correlate complete or incomplete object views to CAD-based models. The application stresses real-time operation, human supervisory intervention, and the use of 3-D vision to enhance the performance of cooperating robotic arms. Two techniques for estimating the location of a point using 3-D vision are discussed, as is a computer simulation study which sought to optimize the positioning of multiple cameras to minimize noise perturbations caused by various forms of vibration. Finally, the unique challenges of multiple-camera CAD-based 3-D vision in the space environment and criteria for selecting the instantaneous subset of cameras are discussed.
The CIRSSE two-finger gripper is designed to be used with a 6-axis wrist-mounted force-torque sensor as part of an experimental testbed for cooperative robotic manipulation. The gripper system consists of a dedicated controller and a pneumatically powered gripper that is servoed in position or grasping force. The gripper also has a between-fingertip light-beam sensor. The major features of this new design are: high grasping force-to-weight ratio (25:1) and compactness (14 cm base to fingertip, 0.8 kg); a low-cost dedicated slave controller; and a simple host-controller communications protocol to minimize command interpretation delay. Shared software functions between the host and the controller processor permit the host to select servo mode, servo gain-parameter values, and calibration offsets. Hidden primitive self-test, calibration, and servo confidence functions are included.
A hierarchical planning paradigm for space truss assemblies is proposed which generates sequences of assembly operations and is suitable for human, teleoperator, and autonomous robotic implementation. Subgoals satisfying geometric (accessibility) and structural (rigidity) constraints are developed using a hierarchical state representation, and special cost functions are used to conduct a search for locally optimal sequences. Results of two case studies involving assemblies with up to 31 nodes and 102 struts are presented.
A research program to evaluate telerobotic methods for in-space assembly of large truss structures has been initiated at the NASA Langley Research Center. A commercial robot is mounted on a carriage positioning system, and a tetrahedral truss composed of 102 members, each 2 m long, is assembled on a rotating motion base. The facility system is described and the current status of the assembly tests is discussed. Observations from these tests indicate that no problems have been encountered that would prohibit automated telerobotic assembly from being a viable in-space construction method.
A methodology for applying the Kennedy Space Center vehicle processing experience to similar operations at Space Station Freedom is described. First, the required on-orbit processing tasks are identified. These tasks are then evaluated for automation suitability, and robotic manipulator and artificial intelligence technologies are investigated to automate selected physical and cognitive tasks. Effects on processing times, extra-vehicular activity savings, and required resources for incorporating these automation enhancements are identified. Results of the following case studies are included: Phobos Gateway Vehicle On-Orbit Assembly and Launch, Lunar Evolution Vehicle On-Orbit Refurbishment, and Mars Mission Vehicle On-Orbit Assembly and Launch.
A multiarm robotic testbed for space servicing applications is presented. The system provides the flexibility for autonomous control with operator interaction at different levels of abstraction. Key technologies from the areas of artificial intelligence, robotic control, computer vision, and human factors have been integrated in an architecture which has proven useful for resolving issues related to space-based servicing tasks. A system-level breakdown of testbed components is presented, outlining the function and role of each technology area. A key feature of the architecture is that it facilitates efficient transfer of teleoperation control to all levels in the system hierarchy, enabling the study of the relationship between the human operator and the remote system. This includes the ability to perform autonomous situation assessment so that operator control activities at lower levels can be interpreted in terms of system model updates at higher levels.
The areas of space exploration in which robotic devices will play a part are identified, and progress to date in the space agency plans to acquire this capability is briefly reviewed. Roles and functions on orbit for robotic devices include well-known activities, such as inspection and maintenance, assembly, docking, berthing, deployment, retrieval, materials handling, orbital replacement unit exchange, and repairs. Missions that could benefit from a robotic capability are discussed.
Telepresence from a manned central base to unmanned rovers is discussed as a possible solution to the problem of human presence in planetary field geology. Some issues that are essential to planetary surface field work are examined with reference to results of the Amboy field study. The discussion emphasizes the exploration behavior and user-based requirements for effective telepresence systems for planetary exploration.
This extended abstract documents our ongoing work on determining what autonomy a planetary rover should exhibit and how that autonomy can be interfaced to the rover's human users. A key constraint on the autonomy is that the human user and the machine must be able to cooperatively carry out planetary geological sampling and field work. We describe an approach that uses an icon language to advise an autonomous rover and to present rover feedback to the user. The icon language can be translated to plan networks in the 7S model. We present an extensive 'pencil and paper' example of using the interface to carry out cooperative exploration.
Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive-real condition, such MRAC procedures are designed so that a feedforward-augmented output follows the reference model output, resulting in an ultimately bounded rather than zero output error. Modifications are therefore suggested and tested that (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady-state model-following error and thus encourage further use of MRAC for more complex flexible robotic systems.
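To illustrate the basic MRAC mechanism the abstract builds on, here is a minimal sketch: a scalar first-order plant with Lyapunov-rule gain adaptation, where a sudden change in the plant's input gain stands in for a load change. This is not the paper's command-generator-tracker formulation; the model parameters, adaptation gain, and switch time are all assumed for illustration.

```python
import numpy as np

# Direct MRAC for a first-order plant dy = a_p*y + b_p*u (a_p, b_p unknown),
# tracking the reference model dym = a_m*ym + b_m*r. A sudden change in b_p
# at t = 20 s plays the role of a load change.
dt, T = 0.001, 40.0
a_m, b_m = -2.0, 2.0           # reference model (assumed values)
gamma = 5.0                    # adaptation gain (assumed)

y = ym = 0.0
th_r = th_y = 0.0              # adjustable feedforward and feedback gains
a_p, b_p = -1.0, 1.0           # "unknown" plant parameters
errs = []
for k in range(int(T / dt)):
    t = k * dt
    if t > 20.0:
        b_p = 0.5              # sudden load change halves the input gain
    r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave reference (excitation)
    u = th_r * r + th_y * y
    e = y - ym
    # Lyapunov adaptive laws (sign of b_p assumed known and positive)
    th_r += dt * (-gamma * e * r)
    th_y += dt * (-gamma * e * y)
    y  += dt * (a_p * y + b_p * u)          # forward-Euler integration
    ym += dt * (a_m * ym + b_m * r)
    errs.append(abs(e))

print(max(errs[-1000:]))       # model-following error over the final second
```

After the gain change the adaptation re-converges, so the model-following error over the final second of the run is small, illustrating the recovery behavior the paper's modifications aim to improve.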
The objective of the Gross Motion Control project at the Air Force Institute of Technology (AFIT) Robotic Systems Laboratory is to investigate alternative control approaches that will provide payload-invariant high-speed trajectory tracking for non-repetitive motions in free space. Our research has concentrated on modifications to the model-based control structure. We are actively pursuing development and evaluation of both adaptive primary (inner loop) and robust secondary (output loop) controllers. In-house developments are compared and contrasted to the techniques proposed by other researchers. The case study for our evaluations is the first three links of a PUMA-560. Incorporating the principles of multiple model adaptive estimation, artificial neural networks, and Lyapunov theory into the model-based paradigm has shown the potential for enhanced tracking. Secondary controllers based on Quantitative Feedback Theory or augmented with auxiliary inputs significantly improve the robustness to payload variations and unmodeled drive system dynamics. This paper presents an overview of the different concepts under investigation and highlights our latest experimental results.
It has been experimentally verified that the jerk of the desired trajectory adversely affects the performance of tracking control algorithms for robotic manipulators. In this paper we investigate the reasons behind this effect and state an optimization problem that minimizes joint jerk over a prespecified Cartesian-space trajectory. The necessary conditions are derived and a numerical algorithm is presented.
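To make the quantity concrete, the following sketch numerically evaluates joint jerk along a prespecified Cartesian path. The planar two-link arm, its link lengths, and the quintic time scaling are illustrative assumptions, not the paper's manipulator or optimization algorithm.

```python
import numpy as np

L1, L2 = 0.5, 0.4  # link lengths in metres (assumed for illustration)

def ik_two_link(x, y):
    """Closed-form inverse kinematics for a planar two-link arm (elbow-down)."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return q1, q2

# Prespecified Cartesian trajectory: a straight-line segment traversed with a
# smooth quintic time scaling (zero velocity and acceleration at both ends).
t = np.linspace(0.0, 1.0, 501)
dt = t[1] - t[0]
s = 10 * t**3 - 15 * t**4 + 6 * t**5
x = 0.4 + 0.3 * s
y = 0.2 + 0.1 * s
q = np.array([ik_two_link(xi, yi) for xi, yi in zip(x, y)])

# Joint jerk via three successive finite-difference derivatives
jerk = np.gradient(np.gradient(np.gradient(q, dt, axis=0), dt, axis=0), dt, axis=0)
rms_jerk = np.sqrt(np.mean(jerk**2))
print(rms_jerk)
```

A cost of this form (e.g. the integral of squared joint jerk along the path) is the kind of objective such an optimization problem would minimize by reparametrizing the motion in time.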
This paper summarizes the stability of the PUMA-560 robot manipulator under dynamic model mismatch resulting from incomplete knowledge of the link masses, centers of mass, and radii of gyration. PD and PID controllers are used. Model-based control of robotic manipulators eliminates the nonlinearities in the manipulator equations under perfect knowledge of the dynamic model parameters. When this is the case, the manipulator model matches the real robot arm completely. However, robot manipulators are in general extremely complicated to model even approximately. Moreover, to keep the system model within practical and acceptable limits, one has to accept (and control) unmodeled dynamics. Model mismatch may also result from incomplete knowledge of manipulator hardware parameters. This paper summarizes the stability of the PUMA-560 manipulator under model mismatch using PD and PID controllers. Craig's method is directly applied when a PD controller is used; it is also extended and modified to incorporate PID controllers. The PUMA-560 robot arm has been selected because of our knowledge of its real-time behavior [6, 7, 10, 11]. All desired trajectories are achievable in real time, do not violate arm speed, acceleration, structural, or hardware limits, and have been repeatedly tested and used for real-time control of the PUMA robot arm. A software package (robot simulation package) has been built on top of the original software given to us by Dr.
Robotic applications in unstructured environments in general, and in space in particular, require robot systems that possess a high degree of autonomy. To achieve such a degree of autonomy, a robot system must possess (i) robots with a versatile physical structure, (ii) perception, and (iii) elaborate techniques for (a) task decomposition, distribution, and localization and (b) the dynamic specification of the distributed semantics involved in sensorimotor synchronization and the coordination of multiple robots. In this paper we present our developments for the dynamic specification of the distributed semantics of hierarchical multiagent systems and the synchronization of their component agents during task execution. More specifically, we present a distributed model of concurrency based on Petri net theory. The model is then applied to the hierarchical decomposition, distribution, and localization of a bracket assembly task. Each level of the resulting hierarchy horizontally contains the synchronization structure of task execution and vertically is a generalization of the level below and a specialization of the level above. The horizontal synchronization structures developed by the Petri net model maintain the desirable properties of safeness and liveness by construction.
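As a toy illustration of checking safeness and liveness on a Petri net, the sketch below enumerates the reachable markings of a small two-robot mutual-exclusion net (the places and transitions are illustrative, not the paper's bracket-assembly hierarchy) and verifies 1-boundedness (safeness) and freedom from deadlock.

```python
# Minimal Petri-net reachability check for a two-robot shared-resource net.
places = ["idle1", "busy1", "idle2", "busy2", "res"]
transitions = {                      # name: (consumed places, produced places)
    "start1": (("idle1", "res"), ("busy1",)),
    "end1":   (("busy1",), ("idle1", "res")),
    "start2": (("idle2", "res"), ("busy2",)),
    "end2":   (("busy2",), ("idle2", "res")),
}
m0 = {"idle1": 1, "busy1": 0, "idle2": 1, "busy2": 0, "res": 1}

def enabled(m, t):
    pre, _ = transitions[t]
    return all(m[p] >= 1 for p in pre)

def fire(m, t):
    pre, post = transitions[t]
    m2 = dict(m)
    for p in pre:
        m2[p] -= 1
    for p in post:
        m2[p] += 1
    return m2

def reachable(m0):
    """Breadth over all markings reachable from m0 by firing transitions."""
    key = lambda m: tuple(m[p] for p in places)
    seen, frontier, out = {key(m0)}, [m0], [m0]
    while frontier:
        m = frontier.pop()
        for t in transitions:
            if enabled(m, t):
                m2 = fire(m, t)
                if key(m2) not in seen:
                    seen.add(key(m2))
                    frontier.append(m2)
                    out.append(m2)
    return out

R = reachable(m0)
safe = all(v <= 1 for m in R for v in m.values())   # every place 1-bounded
deadlock_free = all(any(enabled(m, t) for t in transitions) for m in R)
print(len(R), safe, deadlock_free)
```

For nets that are safe and live by construction, as the abstract claims for its synchronization structures, such an exhaustive check succeeds on every reachable marking.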
This paper describes the design of a model-based autonomous planning system that will enable robots to manage a space-borne chemical laboratory. In a model-based planning system, knowledge is encapsulated in the form of models at the various layers to support the predefined system objectives. Thus the model-based approach can be considered as an extended planning paradigm which is able to base its planning, control, diagnosis, repair, and other activities on a variety of objectives-related models. We employ a System Entity Structure/Model Base framework to support autonomous system design through the ability to generate a family of planning alternatives as well as to build hierarchical event-based control structures. The model base is a multi-level, multi-abstraction, and multi-formalism system organized through the use of system morphisms to integrate related models.
An architecture for implementing complex real-time systems has been developed in which a society of multiple cooperating agents provides a structure that allows distributed sensor/plan/actuator subsystems to function under non-ideal and error conditions. The structure allows modifications to be made and new techniques and methods to be incorporated without significant disruption to the overall system. A novel supporting parallel hardware architecture based on rotating dataflow provides real-time capability.
An approach to adding autonomy to a teleoperated robotic system designed for on-orbit proximity operations is discussed. In particular, a control system for performing a highly automated approach-to-dock proximity operation is described which has been developed using the subsumption architecture. The control system has been built by successively adding layers of continuously operating autonomy to a teleoperated system in which human operator control is treated as a level of autonomy exactly like the machine levels. The result is a control system evolution that smoothly allows the addition of levels of competence to ease the operator's control burden. The operation of the control system is demonstrated using a simulation of the orbital environment and two spacecraft.
A method is presented for building an internal world model of a local on-orbit reference frame using typical sensor data available to a robotic spacecraft. In accordance with the approach proposed here, the model is driven by the Clohessy-Wiltshire (CW) equations to predict future states for maneuver planning and monitoring during sensor blackout. Errors due to inaccuracies in the CW equations, atmospheric drag, and nonuniformities in the earth's gravitational field are discussed with reference to simulation results.
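The CW equations admit a well-known closed-form solution for relative motion about a circular target orbit, which is the kind of propagation such a world model would use to predict states through a sensor blackout. The sketch below implements that standard solution; the mean motion and initial relative state are illustrative, not taken from the paper's simulations.

```python
import numpy as np

def cw_propagate(state0, n, t):
    """Closed-form Clohessy-Wiltshire propagation of the relative state
    [x, y, z, vx, vy, vz] (x radial, y along-track, z cross-track)
    about a circular target orbit with mean motion n (rad/s)."""
    x0, y0, z0, vx0, vy0, vz0 = state0
    s, c = np.sin(n * t), np.cos(n * t)
    x  = (4 - 3*c)*x0 + (s/n)*vx0 + (2/n)*(1 - c)*vy0
    y  = 6*(s - n*t)*x0 + y0 + (2/n)*(c - 1)*vx0 + (1/n)*(4*s - 3*n*t)*vy0
    z  = z0*c + (vz0/n)*s
    vx = 3*n*s*x0 + c*vx0 + 2*s*vy0
    vy = 6*n*(c - 1)*x0 - 2*s*vx0 + (4*c - 3)*vy0
    vz = -z0*n*s + vz0*c
    return np.array([x, y, z, vx, vy, vz])

n = 0.00113                                       # roughly a 400 km LEO orbit
state0 = np.array([100.0, -200.0, 50.0, 0.0, 0.1, 0.0])   # metres, m/s
print(cw_propagate(state0, n, 60.0))              # predicted state 60 s ahead
```

Note the secular along-track drift term 6(sin nt - nt)x0: radial offsets produce growing along-track error, which is one reason predictions degrade over long blackouts even before drag and gravity-field nonuniformities are considered.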
This paper describes on-going work in using range and motion data generated at video-frame rates as the basis for long-range perception in a mobile robot. A current approach in the artificial intelligence community to achieving time-critical perception for situated reasoning is to use low-level perception for motor reflex-like activity and higher-level but more computationally intense perception for path planning, reconnaissance, and retrieval activities. Typically, inclinometers and a compass or an infra-red beacon system provide stability and orientation maintenance, and ultrasonic or infra-red sensors serve as proximity detectors for obstacle avoidance. For distant ranging and area-occupancy determination, active imaging systems such as laser scanners can be prohibitively expensive, and heretofore passive systems typically performed more slowly than the cycle time of the control system, causing the robot to halt periodically along its way. However, a recent stereo system developed by Nishihara, known as PRISM (Practical Real-time Imaging Stereo Matcher), matches stereo pairs using a sign-correlation technique that gives range and motion at video frame rates. We are integrating this technique with constant-time control software for distant ranging and object detection at a speed that is comparable with the cycle times of the low-level sensors. Possibilities for a variety of uses in a leader-follower mobile robot situation are discussed.
A methodology and an algorithmic implementation are proposed for the choice of feasible grasp points on irregular objects. In particular, the three-fingered grasp of a planar object is considered. Grasp points are chosen on the basis of an analysis of the internal grasping forces using a point-contact model. It is shown that the frictional constraint at each finger can be easily satisfied by tuning the magnitude of the normal components of the finger forces. Thus, the forces to be generated at each finger during manipulation can be efficiently calculated.
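A minimal sketch of the idea that the friction constraints can be met by tuning the normal force components: minimum-norm finger forces balance the external wrench, and a symmetric internal squeeze along the inward contact normals (a zero-wrench internal force) is increased until every contact satisfies its friction cone. The disc-shaped object, contact placement, and friction coefficient below are assumptions, not the paper's algorithm.

```python
import numpy as np

mu = 0.3  # Coulomb friction coefficient (assumed)

# Three point contacts on a unit disc (stand-in for an irregular planar object)
ang = np.deg2rad([90.0, 210.0, 330.0])
P = np.stack([np.cos(ang), np.sin(ang)], axis=1)   # contact points
N = -P                                             # inward unit normals
T = np.stack([-N[:, 1], N[:, 0]], axis=1)          # unit tangents

def grasp_map():
    """3x6 planar grasp map: net force (2 rows) and net moment (1 row)."""
    G = np.zeros((3, 6))
    for i, p in enumerate(P):
        G[0:2, 2*i:2*i+2] = np.eye(2)
        G[2, 2*i:2*i+2] = [-p[1], p[0]]            # planar cross product p x f
    return G

def finger_forces(w_ext, squeeze):
    """Minimum-norm finger forces balancing the external wrench, plus an
    internal squeeze along each inward normal (which has zero net wrench
    for this symmetric contact placement)."""
    f = np.linalg.pinv(grasp_map()) @ (-w_ext)
    return f.reshape(3, 2) + squeeze * N

def friction_ok(f):
    fn = np.sum(f * N, axis=1)                     # normal (pressing) parts
    ft = np.sum(f * T, axis=1)                     # tangential parts
    return bool(np.all(fn > 0) and np.all(np.abs(ft) <= mu * fn))

w = np.array([1.0, -2.0, 0.3])                     # external wrench on object
squeeze = 0.0
while not friction_ok(finger_forces(w, squeeze)):  # tune the normal magnitudes
    squeeze += 0.5
print(squeeze, finger_forces(w, squeeze))
```

Because the squeeze adds only to the normal components while leaving the tangential components untouched, the loop always terminates: each friction cone is eventually satisfied, mirroring the tuning argument in the abstract.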
A portable pose measurement system designed as a simple inexpensive system for performing robot manipulator calibration and performance testing is described. The system is capable of tracking from 1 to 7 passive circular targets, with complete 3D information provided for each point with a resolution of 1 part in 25,000. The system uses two standard CCD cameras and a PC-based data acquisition system. The general design of the system and preliminary test results are presented.
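One common way to recover a 3-D point from two calibrated CCD cameras is linear (DLT) triangulation. The sketch below illustrates the principle with assumed intrinsics and a 0.5 m baseline; it is not the described system's actual calibration or target-tracking pipeline.

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 camera matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two pixel observations."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)                    # null vector of A
    X = Vt[-1]
    return X[:3] / X[3]                            # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two cameras with parallel axes and a 0.5 m baseline (assumed intrinsics)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
P2 = projection_matrix(K, np.eye(3), np.array([-0.5, 0, 0]))

X_true = np.array([0.2, -0.1, 2.0])                # a target 2 m away
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)
```

With noise-free image coordinates the reconstruction is exact; in practice, target-centroid localization error and calibration error set the achievable resolution, which is what the quoted 1-part-in-25,000 figure characterizes.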
The applicability of H∞ control design methods to the analysis and design of manipulator impedance control is presented. Direct application of existing H∞ control design algorithms is shown to result in an ill-posed MIMO impedance design problem. A three-part impedance control design methodology is developed and shown to be well-posed. This methodology utilizes H∞ control design tools to find the controller minimizing the weighted distance, measured in the H∞ sense, between the realized and the specified manipulator impedances. A connection is made between the design of the three parts of the controller and torque-based versus position-based impedance control design. A design example derived using this three-part design methodology is presented. Although the design procedure guarantees that the resulting impedance controller will be stable for the design conditions, it is shown that the resulting controller may be non-passive even if the desired impedance is passive and the distance between the realized impedance and the desired impedance is small.
The use of kinematic redundancy in control algorithms to avoid singularities, evade obstacles, and minimize joint torques, manipulator kinetic energy, end-effector contact forces, etc., has been among the most active research topics in robotics in the past few years. However, these approaches have been associated mainly with rigid manipulators, where there are no unpredictable flexible motions. When dealing with flexible manipulators, the flexibility of the system will cause undesired inaccuracy in end-effector motion. If these manipulators are kinematically redundant, their kinematic redundancy can be used to compensate for the end-effector motion inaccuracy and, in many cases, help damp out the vibrations. This paper examines this issue and introduces new control algorithms designed to regulate the flexibility while maintaining precise tracking of the end-effector trajectory. The dynamic model used is a special case of the general multi-body dynamics, designed to maximize its computational efficiency.
Design and construction of an extremely lightweight manipulator are discussed. This 3 m, 4 kg arm can manipulate a payload of 20 N, half its weight on earth and almost twice its weight on Mars. Limited manipulator performance resulting from design constraints requires innovative control techniques. These include flexible control (for the non-stiff structure), endpoint control (for the compliant mechanisms), and arm/vehicle cooperation (for dynamic interactions and limited payload capability). Synergistic use of sensors and planners between the two semi-autonomous arm and vehicle systems is discussed as it affected prototype design. Use of the NASREM standard functional architecture improved system integration and allows installation on both our wheeled and walking rovers.
A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system which would demonstrate the feasibility of highly interactive operator-coached machine vision in a realistic task environment and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
An area of increasing interest in AI, robotics, and computer vision is applying integrated techniques from these fields to the problem of controlling autonomous systems. Space-based systems such as NASA's robotic assembly of Orbital Replacement Units (ORUs) provide a complex, realistic domain for this integration research. In this paper we report on current MITRE research in the use of situated control for autonomous robotic assembly of ORUs. A wrist-mounted camera is used to acquire the pose of ORUs. An on-line control module uses the pose data to refine the on-going robot actions so that the planned task can be executed both safely and robustly. Experimental results on a Cincinnati Milacron T3 industrial robot at the Goddard Space Flight Center (GSFC) Intelligent Robotic Laboratory are included.
Modern telerobotic systems proposed for space applications will require the flexibility for transfer of human operator control between multiple levels of abstraction in the system hierarchy. In the presence of higher levels of autonomy such as task planning and path planning, the transfer of control requires that the integrity of abstract models of the work environment be maintained. In current systems the model updates are provided manually by the user or are hard-coded in the task definition. In this paper we present a hierarchical approach to a system for telerobotic situation assessment in which teleoperator action interpretations are automatically generated in terms of geometric model updates. At the lowest level of the hierarchy, high-frequency robot data containing synchronous position and force information is preprocessed so that interesting events can be detected, with filtered data flowing up to the next higher level of abstraction. As intermediate levels receive information from lower levels, comparisons are made with expectations from the current context to produce world model updates. Sequences of events potentially interesting to the next higher level are then reported so that abstractions can progress to the topmost level. The autonomous model updates increase the effectiveness of performing teleoperation tasks using different levels of autonomy, since manual updates to the world model are unnecessary.
High-fidelity computer graphics preview and predictive displays provide substantial aid in four major space telerobotic activities: overall workspace design, including viewing conditions; specific workspace analysis for given tasks in a fixed setting; operator training; and reduction of operator uncertainty and operation time under communication time-delay conditions. Computer graphics displays are also very useful for visualizing a variety of non-visual sensor data for the operator and for providing an iconic operator interface to a number-based telerobot control system. These applications are illustrated and analyzed by examples implemented in real-time operation at the JPL Advanced Teleoperator project. A video tape recording of these displays is available.
During interaction with remote environments, the operator may benefit from the addition of force feedback to the ubiquitous visual feedback. However, the apparatus required for reactive force feedback (feedback which imposes the remote environment's motion constraints on the user by applying joint torques) is cumbersome and expensive, especially when implemented in conjunction with high degree-of-freedom precision joint motion sensing. Non-reactive, tactile feedback can provide similar information, and can be implemented at much lower cost. The purposes of this research were (1) to design and demonstrate an inexpensive tactile feedback system, and (2) to determine the extent to which such a system could aid in the performance of a simple teleoperation task. After experimentation with different display technologies and preliminary design, a vibrotactile display was chosen because of its low weight, small size, and low cost. The final design consisted of two voice coils, one each for the thumb and the index finger, which were driven by a 250 Hz variable-amplitude signal produced by an analog electronics unit controlled by a PC. Experimental results are provided to show that the addition of the tactile display provides a small but significant improvement in manual tracking performance over the use of the visual display alone, and that the tracking task may be performed with only the tactile display. In further experiments the tactile display is compared with reactive force feedback and is shown to confer most of the reactive display's performance improvement over tracking with only a visual display.
For practical use of robotics in space, machine vision is required to automatically locate objects in order to guide an autonomous control system or to assist a human teleoperator. The vision task is facilitated by placing optical targets or markings in advance on the objects to be located. We have experimented with techniques to locate such optical targets from monocular images. These include camera calibration, determination of the camera-to-tool-endpoint transformation, feature extraction, and pose estimation. This paper describes our experiences with these techniques as they were implemented in an integrated robot-vision system to perform tasks characteristic of space construction and servicing. The accuracies of the target location techniques were also verified separately using an optical bench and positioning devices. The usefulness of the optical target location system was demonstrated in the autonomous control of two Cincinnati Milacron T3-726 robot arms.
Most available manual controllers used in bilateral or force-reflecting teleoperator systems can be characterized by their bulky size, heavy weight, high cost, low magnitude of reflecting force, lack of smoothness, insufficient transparency, and simplified architectures. A compact, smooth, lightweight, portable universal manual controller could provide a markedly improved level of transparency and be able to drive a broad spectrum of slave manipulators. This implies that a single stand-off position could be used for a diverse population of remote systems and that a standard environment for training of operators would result in reduced costs and higher reliability. In the implementation presented in this paper, a parallel 3 degree-of-freedom (DOF) spherical structure (for compactness and reduced weight) is combined with high gear-ratio reducers using a force control algorithm to produce a "power steering" effect for enhanced smoothness and transparency. The force control algorithm has the further benefit of minimizing the effect of the system friction and non-linear inertia forces. The fundamental analytical description of the spherical force-reflecting manual controller, including forward position analysis, reflecting-force transformation, and the applied force control algorithm, is presented. A brief description of the system integration, its actual implementation, and preliminary test results are also presented in the paper.
Space robotics in the nineties and beyond will need a variety of sensor-based monitoring and control systems attached to the manipulators in order to perform different tasks. The manipulator service units which will perform tasks in unstructured environments will require information to implement procedures for handling objects of different shapes and sizes. An important component of such a procedure is the identification of control parameters which can be used to automate simple grasping and releasing operations. This paper describes a novel scheme to identify control parameters given the task status of a grasping or releasing operation. A prototype tactile sensing gripper was used to obtain experimental data while grasping and releasing a set of sample objects. From the tactile images formed from these data, task status was determined and identified in terms of four parameters: grasping level, releasing level, and a confidence factor associated with grasping and with releasing. A set of control parameters was identified such that they may be obtained from the task status parameters of two successive tactile images. An expert system was designed to determine a value for the selected parameters. The detection of object displacement with respect to the gripper surface during a task was identified in terms of static and dynamic displacements. These in turn were used to determine the control decision parameter for recommending movement of the grippers based on task status. The selection
This paper presents a summary of tests involving either pure telerobotics or telerobots in conjunction with extravehicular activity (EVA) applied to some of the servicing tasks of the Hubble Space Telescope (HST). This research was conducted in a neutral buoyancy environment using the high-fidelity mockup used for astronaut crew training, along with two telerobots developed in the Space Systems Laboratory. These tests showed that current telerobots are capable of a limited subset of the tasks required for EVA servicing of HST, and that specific developments in robotic capability are required, particularly in high torque output and positioning in constrained volumes, before telerobotic systems are capable replacements for EVA. A more immediately promising application is the use of telerobots to enhance and extend EVA capabilities by acting as "assistants" to the EVA crewmen performing the dexterous and high-force tasks.