A three degree-of-freedom direct-drive mini robot has been developed for biomedical applications. The design approach of the mini robot relies heavily upon electromechanical components from the Winchester disk drive industry. In the current design, the first joint is driven by actuators from a 5.25-inch drive, and the remaining joints are driven by actuators typical of 3.5-inch drives. The system has 5-10 micrometers of position repeatability and resolution in all three axes. A mini gripper attachment has been fabricated for the robot to explore manipulation of objects ranging from 50 to 500 micrometers. Mounted on the robot, the gripper has successfully performed pick-and-place operations under teleoperated control. The mini robot serves to precisely position the gripper, and a needle-like finger of the gripper deflects so the fingers can grip a target object. The movable gripper finger is fabricated with a piezoelectric bimorph crystal which deflects under an applied DC voltage. The experimental results are promising, and the mini gripper may be modified for future biomedical and microassembly applications.
EXOS has developed two state-of-the-art prototype master controllers for controlling robot hands and manipulators under the Small Business Innovation Research (SBIR) program with NASA. One such device is a two degree-of-freedom Sensing and Force Reflecting Exoskeleton (SAFiRE) worn on the operator's hand. The device measures the movement of the index finger and reflects the interaction forces between the slave robot and the environment to the human finger. The second device is a position-sensing Exoskeleton ArmMaster (EAM) worn by a human operator. The device simultaneously tracks the motions of the operator's three-DOF shoulder and two-DOF elbow. Both of these devices are currently used to control robots at NASA. We are currently developing a full-fingered SAFiRE and a position-sensing and force-reflecting EAM under two second-phase SBIR grants with NASA. This paper will include discussions of: (1) the design of the current prototypes, (2) the kinematics of the EAM and force control of the SAFiRE, (3) design issues that must be addressed in developing more advanced versions, and (4) our progress to date in addressing these issues.
One application of teleoperation principles is a manipulator that might be used to augment function in a disabled person. An individual with a paralyzing injury may have complete loss of motor and sensory function in his or her arms, which limits his or her ability to interact with the environment and perform simple tasks such as feeding or turning pages. One way of enhancing functionality is to employ a telemanipulator that might take the place of a caregiver, thus providing the person with increased independence. This paper describes how a high-level spinal cord injured individual would use head movement to control a robot. It is felt that the key to successful manipulation is attaining a sense of force and position proprioception. This natural proprioception exists in cable-operated prosthetic arms and simple tools such as mouthsticks or laser beam pointers, where the user is physically linked to the device. This sense of proprioception is being emulated using a head-controlled master-slave arrangement. The goal is for the disabled individual to operate a manipulator and utilize proprioceptive as well as visual feedback. This would lessen the mental burden on the user and ultimately make the device more acceptable.
This paper presents some recent work on the development of a workstation for teaching and therapy in manual and manipulative skills. The experimental workstation, MANUS, as well as the overall concept are described. State-of-the-art aspects of the workstation under development are introduced.
A telerobotic control system must include teleoperational, shared, and autonomous modes of control in order to provide a robot platform for incorporating the rapid advances that are occurring in telerobotics and associated technologies. These modes, along with the ability to modify the control algorithms, are especially beneficial for telerobotic control systems used for research purposes. The paper describes an application of the PC/AT platform to the control system of a telerobotic test cell and discusses the suitability of the PC/AT as a platform for a telerobotic control system. The discussion is based on the many factors affecting the choice of a computer platform for a real-time control system, including I/O capabilities, simplicity, popularity, computational performance, and communication with external systems. The paper also includes a description of the actuation, measurement, and sensor hardware of both the master manipulator and the slave robot, as well as the PC-Bus interface cards developed by the researchers in the KAT Laboratory specifically for interfacing to the master manipulator and slave robot. Finally, a few different versions of the low-level telerobotic control software are presented. This software incorporates both shared control and traded control between supervisory systems and the human operator.
This paper treats a basic system which realizes a scaled manipulation environment. Tools for scaled manipulation are necessary when manipulating an extremely large or small target. Such a tool has to scale size and force up or down in a bilateral way. Time scaling should also be considered in order to build an ideal environment in which the operator feels as though his or her body were scaled to the target size. This paper proposes several possible time scaling methods which can be used in bilateral force-position controls.
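The bilateral scaling idea can be made concrete with a minimal sketch. The function names and the scale factors below (kp for position, kf for force, kt for time) are illustrative assumptions, not values or methods from the paper:

```python
# Sketch of bilateral scaling between a human-scale master and a
# micro-scale slave.  kp, kf, and kt are assumed example factors.

def master_to_slave_position(x_master, kp=1e-3):
    """Scale a master displacement down into the slave workspace."""
    return kp * x_master

def slave_to_master_force(f_slave, kf=1e3):
    """Scale a slave-side contact force up so the operator can feel it."""
    return kf * f_slave

def master_to_slave_time(t_master, kt=0.1):
    """Time scaling: map master time onto a slower slave timeline."""
    return kt * t_master
```

In a bilateral loop these three maps would be applied every servo cycle, so that position commands flow down-scaled to the slave while contact forces flow up-scaled back to the master.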
A system for kinesthetic and visual display of virtual environments has been developed which includes a four degree-of-freedom, force-controlled manipulandum and a parallel processing architecture for performing real-time environment simulation, manipulation control, and graphics display. The system allows a user to interact with a virtual environment via a virtual hand tool. 'Forces' at the handle of the virtual tool are experienced as real forces at the handle of the manipulandum. The problem of modeling environments composed of rigid bodies with intermittent contact has been addressed. Dynamic equations for environment simulations are generated and solved in real time by an array of interconnected microprocessors. Additional microprocessors safeguard against dangerous motor accelerations and potentially damaging manipulator configurations.
In this article an on-line procedure for the robust path planning of telemanipulators is presented. The approach is based on solving a linear system of equations which incorporates an original scheme for the appropriate perturbation of the pseudoinverse matrix, together with a null-space vector for obstacle avoidance. This method allows obstacle avoidance and singularity prevention to be pursued simultaneously, in real time, in a sensor-based environment. These properties make it suitable for fully autonomous or telerobotic system operations.
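A minimal sketch of this style of redundancy resolution, assuming a damping-type perturbation of the pseudoinverse and a gradient-following null-space term (the specific perturbation scheme, objective gradient, and gains are assumptions, not the paper's formulation):

```python
import numpy as np

def redundant_ik_step(J, dx, grad_h, damping=0.01):
    """One velocity-level IK step for a redundant arm.

    J       : m x n Jacobian (n > m, redundant)
    dx      : desired end-effector velocity, shape (m,)
    grad_h  : gradient of an obstacle-clearance objective, shape (n,)
    damping : perturbation keeping the inverse well conditioned near
              singularities (assumed scheme, illustrative value)
    """
    m, n = J.shape
    # Perturbed (damped) pseudoinverse: J^T (J J^T + lambda^2 I)^-1
    J_pinv = J.T @ np.linalg.inv(J @ J.T + damping**2 * np.eye(m))
    # Null-space projector: joint motion along grad_h (away from
    # obstacles) that does not disturb the end-effector velocity.
    N = np.eye(n) - J_pinv @ J
    return J_pinv @ dx + N @ grad_h
```

With damping = 0 this reduces to the exact pseudoinverse solution; a small positive damping trades a little tracking accuracy for bounded joint rates near singular configurations.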
The ability of a teleoperated manipulator to behave in a compliant manner when performing tasks which require contact with the environment is essential. One method of providing compliance is through the use of impedance control. Impedance control, as opposed to passively compliant devices, allows the apparent dynamic characteristics of the manipulator to be altered in real time to suit the demands of the task. The most common technique of implementing impedance control is to compute the control torques which transform the existing manipulator dynamics into the desired compliant system. This technique assumes that a dynamic model of the manipulator is available and that the actuators are torque controlled. In this paper we present a new model reference impedance controller which requires neither a dynamic model of the robot nor torque-controlled actuators.
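For comparison, a generic position-based (admittance-style) impedance loop can be sketched in a few lines: it realizes a target impedance M*a + B*v + K*(x - x_ref) = f_ext through an inner position loop rather than torque control. This is a textbook sketch with assumed gains, not the model reference controller proposed in the paper:

```python
# One Euler step of a position-based target impedance.  Feeding x_new
# to an inner position servo yields compliant behavior without
# torque-controlled joints.  M, B, K, dt are illustrative values.

def impedance_step(x, v, x_ref, f_ext, M=1.0, B=20.0, K=100.0, dt=0.001):
    a = (f_ext - B * v - K * (x - x_ref)) / M   # target-impedance dynamics
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new
```

Under a constant contact force the commanded position settles at a deflection of f_ext / K from the reference, which is exactly the spring-like compliance the task demands.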
The proposed paper describes a new method to synthesize a 'generalized bilateral master-slave system'. It uses two criteria: a transparency distance, which measures the performance objective, and a passivity distance, which measures the stability objective. These criteria are useful for better understanding or extending previous results published on the subject. The method relies on the passivity concept, on impedance control, and on H-infinity robust control. We split the synthesis of the master-slave system into two distinct problems. The first is the synthesis of a bilateral control law which defines a passive linkage between the two manipulators. The second is the synthesis of a control law, specific to each manipulator, which increases the sensitivity of the manipulator to external forces in order to provide backdrivability; it must also preserve the passivity of the closed-loop system. Since the first problem is now well understood, we focus on the second problem and propose a new synthesis solution. In the control design we explicitly take into account the facts that manipulators have flexible joints and that they suffer from dry friction, sensor noise, and neglected dynamics. The synthesis of the control is clearly related to an H-infinity optimization problem for which viable computer algorithms are available. Experimental results illustrate the proposed controller design.
An advanced teleoperator control system is designed based on the paradigm of telemonitoring and the Smith principle. By incorporating a model of human dynamics reacting to visual and force feedback into the control loop, a systematic and analytic way of obtaining an optimal teleoperator controller supporting the operator's ease of control, as well as of designing an optimal human/machine interface, is established. The designed teleoperator control system also provides proper coordination between the human operator and the manipulators under shared control through monitored force feedback. The Smith principle is used to construct an optimal teleoperator control system under time delay, which makes it possible not only to interpret the role of simulators in teleprogramming under time delay but also to provide an analytic result concerning what kind of sensory information is required and how that information should be fed back to the human operator. Simulation results are shown.
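The Smith principle invoked above can be illustrated with a minimal discrete-time sketch for a first-order plant with a known transport delay; the plant parameters, gain, and delay length are illustrative assumptions, and the paper's teleoperator formulation is far richer than this loop:

```python
from collections import deque

# Minimal discrete Smith-predictor loop for a plant
# y[k+1] = a*y[k] + b*u[k-d].  All parameter values are illustrative.

class SmithPredictor:
    def __init__(self, a=0.9, b=0.1, delay=10, kp=2.0):
        self.a, self.b, self.kp = a, b, kp
        self.model_free = 0.0                          # delay-free model output
        self.buf = deque([0.0] * delay, maxlen=delay)  # delayed model outputs

    def step(self, ref, y_meas):
        y_model_delayed = self.buf[0]
        # Feed back the fast (undelayed) model output, corrected by the
        # mismatch between the measured plant and the delayed model.
        y_fb = self.model_free + (y_meas - y_model_delayed)
        u = self.kp * (ref - y_fb)
        self.model_free = self.a * self.model_free + self.b * u
        self.buf.append(self.model_free)
        return u
```

Because the controller acts on the undelayed model output, the delay is removed from the feedback loop whenever the model is accurate, which is the same mechanism that lets a simulator stand in for the remote site in delayed teleoperation.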
IRAS is a robot control system designed for space robot applications under operator supervision. The main goal of IRAS is the development of a general robot control architecture, including a prototype implementation, for task-oriented A&R operations. The system supports interactive task-level programming, adjustment of local robot action sequences, and online tuning of command parameters. A functional, three-level hierarchical architecture is proposed to cover these objectives. The general control concept, with horizontal and vertical information flow, allows forward control, nominal feedback, and non-nominal feedback. A testbed has been implemented at MBB/ERNO to demonstrate the capabilities of the system. The IRAS system is tested in a testbed environment consisting of an industrial robot performing manipulations in an MTFF (COLUMBUS). Several exceptions have been studied and analyzed to improve the performance of the IRAS system. The project is supported by the European Space Agency ESA/ESTEC.
A shared control system has been developed and implemented which mixes teleoperator commands and autonomously generated commands to assist a teleoperator in accomplishing elemental tasks with a six-axis force-reflecting telemanipulator system operating under position-to-position control. The basic control structure has been developed to accommodate many tasks, but only peg-insertion-type tasks are presented here. The system uses predefined position-constraint configuration types which have been identified in Cartesian space and which maximize the effectiveness of elemental tasks. The operator selects one of these types and loads the system with corresponding parameters which define the degrees of freedom allowed within the position constraints. Loading can be done by teaching through teleoperation, selecting from memory, or automatically from a model of the working environment. The shared controller accepts only those components of position commands from the teleoperator device which lie within the position constraints. Components of force at the manipulator which lie within the allowed degrees of freedom are reflected to the teleoperator device. Perpendicular forces are generated to increase telepresence by driving the teleoperator device to a position corresponding to the autonomously controlled position of the manipulator.
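The command-mixing idea can be sketched as a projection: the operator's Cartesian command is projected onto the allowed directions, while the autonomous command owns the complementary directions. The matrix of allowed unit directions below is a hypothetical stand-in for the paper's predefined position-constraint configurations:

```python
import numpy as np

def share_command(x_op, x_auto, allowed_dirs):
    """Mix operator and autonomous Cartesian commands.

    allowed_dirs : rows are orthonormal unit vectors spanning the
                   degrees of freedom granted to the operator
                   (assumed representation, illustrative only).
    """
    A = np.asarray(allowed_dirs, dtype=float)
    P = A.T @ A                      # projector onto the allowed subspace
    I = np.eye(P.shape[0])
    # Operator commands pass only within the constraints; the
    # autonomous controller fills the perpendicular directions.
    return P @ x_op + (I - P) @ x_auto
```

For a peg insertion along z, for example, allowed_dirs = [[0, 0, 1]] would let the operator command insertion depth while autonomy holds the lateral alignment.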
Free-flying space robotic devices, in which manipulators are mounted on a thruster-equipped spacecraft, will assist in the construction, repair and maintenance of satellites and future space stations. Operation in a free-floating mode, in which spacecraft thrusters are turned off, increases a system's life. The teleoperation of free-floating systems is complicated due to the dynamic coupling between a system's manipulator and its spacecraft. Controlling such a system using visual feedback is not straightforward, especially when the task is to move the end-effector with respect to an inertially fixed target. In addition, free-floating systems are subject to path-dependent Dynamic Singularities, which restrict the paths by which Path Dependent Workspace points can be reached. These characteristics can result in increased operator burden during system teleoperation. A teleoperation planning system to assist the operator of a free-floating space robot is presented. Given an initial and a target end-effector location, and an optimal connecting path, this system examines the feasibility of reaching the target from the initial location. If a problem is detected, the system proposes an alternative feasible path. An example demonstrates the value of such a teleoperator planning aid.
Piloting and many related control activities, especially remote manipulation via teleoperation and robotics, stand to benefit substantially from better means of communication between controller and controlled. We have investigated one such approach: the use of augmented displays on a cathode ray tube (CRT) for controlling simulated motion in microgravity. Such displays, which have been shown to be highly effective in a variety of applications, provide information to the operator which goes beyond that found in nature, and thereby emphasize important aspects of a task and minimize irrelevant ones. Using this approach, we attempted to develop stylized graphical displays, incorporating augmented feedback by distorting the background of the displayed scene, for purposes of flight control and/or control of a robotic arm. Besides attempting to utilize transformations of the scene itself for informational purposes, the displays we developed represent significant departures from previous methods in two notable respects. First, we have attempted to design our instrumentation to make use of peripheral rather than exclusively foveal vision, thus broadening the bandwidth of perception by vision. Second, we attempted to incorporate optical illusions intended to enhance the perception of depth and apparent motion, providing better and more compelling feedback for the operator performing the task.
Rapid development in imaging technology has made useful and affordable solutions possible for applications requiring operation and analysis of remote and virtual environments. Research in human and machine vision has shown the importance of stereopsis (depth perception) in the vision process. Empirical research also has shown the benefits of stereopsis in teleoperation tasks through the use of stereoscopic display technology. The practical value of this technology for real-world applications can be greatly improved through the use of unobtrusive autostereoscopic displays. This applied research explores the various applications of teleoperation, specifically those in which stereo vision is of critical importance. Investigation of stereoscopic imaging system requirements and properties helped identify areas which could potentially benefit from such a system. A testing site using a remotely operated underwater vehicle was used to perform empirical experiments to evaluate the performance benefits derived from the use of the autostereoscopic display. These results were used to define more formal experiments which were conducted. Ergonomic issues associated with the display were also explored through a subjective user survey.
Teleoperation workstations can be designed so that a human operator can use 'natural' hand/eye coordination in performing precision operations with a remote robot. 'Natural' means that the manipulator interfaces correctly with the remote environment, in space and time, and that the human operator uses his innate abilities without becoming unduly disoriented or fatigued.
In this research effort, we are investigating various issues related to visual feedback and operator performance when controlling a teleoperated vehicle in a remote environment. Because these human factors issues are difficult to test in a real teleoperation environment, computer modeling has been used to simulate the dynamics of a remote vehicle. The real-time computer simulation also provides a visual feedback display to the operator, who manually controls the motion of the vehicle. The primary interface issue investigated is the format of the remote vehicle image. The operator views a real-time display of the environment as viewed from the vehicle, as if there were a camera mounted on the front of the vehicle. Display configurations being investigated include a video monitor display, a head-mounted stereoscopic display, and a head-mounted stereoscopic display with head tracking, which provides an isomorphic transformation between the orientation of the operator's head and the direction of view 'out of' the remote vehicle.
This paper describes a testbed and method for characterizing the dynamic response of the type of spatial displacement transducers commonly used in VE applications. The testbed consists of a motorized rotary swing arm that imparts known displacement inputs to the VE sensor. The experimental method involves a series of tests in which the sensor is displaced back and forth at a number of controlled frequencies that span the bandwidth of volitional human movement. During the tests, actual swing arm angle and reported VE sensor displacements are collected and time stamped. Because of the time stamping technique, the response time of the sensor can be measured directly, independent of latencies in data transmission from the sensor unit and any processing by the interface application running on the host computer. Analysis of these experimental results allows sensor time delay and gain characteristics to be determined as a function of input frequency. Results from tests of several different VE spatial sensors are presented here to demonstrate use of the testbed and method.
In this paper we examine the role of force bandwidth in the performance of close-tolerance peg-in-hole insertion. The experiments use a two-fingered teleoperated hand system with finger-level force feedback. Low-pass filters are used to vary the frequency content of the force feedback signal. Task completion times and error rates decrease as force reflection bandwidth increases. Most of the benefit appears between 2 and 8 Hz of bandwidth, although some improvement is seen up to 32 Hz, the highest frequency examined. These experiments also indicate that even low-bandwidth force feedback improves the operator's ability to moderate task forces. However, force feedback does not enable the operator to minimize grasp force, since this requires information about the friction at the contact between the grasped object and the slave fingertip.
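Band-limiting a reflected force signal, as in these experiments, can be done with a one-pole low-pass filter. The sketch below is a generic digital implementation with assumed sample and cutoff rates, not the authors' apparatus:

```python
import math

# One-pole low-pass filter for band-limiting a force feedback channel.
# cutoff_hz would be swept (e.g. 2-32 Hz in the study above); the
# sample rate is an assumed example.

class ForceLowPass:
    def __init__(self, cutoff_hz, sample_hz):
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # filter time constant
        dt = 1.0 / sample_hz
        self.alpha = dt / (rc + dt)              # smoothing coefficient
        self.y = 0.0

    def step(self, f_raw):
        # Exponential smoothing: unity DC gain, -3 dB at cutoff_hz.
        self.y += self.alpha * (f_raw - self.y)
        return self.y
```

Running the raw slave-finger force through one filter instance per channel, at a fixed servo rate, reproduces the bandwidth-limiting manipulation used in the experiments.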
Our recent work indicates that normal strain data generally provides insufficient information for reconstructing object geometry. For some classes of tactile tasks, the problem of object recognition is both underdetermined and, even if fully determined by the addition of shear data, is not stably invertible. Using both traditional theoretical analysis and finite-element methods to study the solid mechanics of a contact, a series of geometric indentors are applied to a tactile sensor model. In underdetermined cases, adding tangential (shear) components to the normal components of the sensed strains may allow discrimination of fine-form geometries. This indicates that in providing tactile displays to a human operator, both tangential and normal forces or displacements should be considered.
It is proposed that a hybrid sensory feedback system, comprising a visual peripheral component together with a haptic component corresponding to visual foveal information, is equivalent to full visual sensory feedback. Such a system is constructed, and the ability of subjects to perceive objects using it is investigated by observing and classifying their search strategies. Although the provision of a peripheral component provides advantages over a purely haptic system, it is concluded that subjects rely heavily on the haptic data, and the resulting hybrid system is not equivalent to full vision.
This paper describes a tactile sensing and feedback system which was developed by the Kansas Augmented Telerobotics (KAT) Laboratory for its dual-arm telerobotic cell. The system includes a tactile sensing array incorporated into the slave's parallel-jawed grippers, a computer system and electronics used to scan the array and command the feedback systems, and three different methods of communicating the tactile information to the operator. The three methods of tactile feedback used were kinesthetic, cutaneous, and visual. The kinesthetic feedback mechanism provides the operator with a sense of how tightly an object is being held within the gripper. The cutaneous feedback mechanism, an array of vibrating solenoids, helps the operator perform grip verification and recognize slip. The visual display gives kinesthetic and cutaneous information through the use of a graphic display. One of the three types of feedback, kinesthetic, posed an interesting problem for which two different methods of control were investigated: simple proportional control and fuzzy logic control. Characteristics of each method are discussed.
In implementing tactile feedback, we are considering the problems of tactile transduction, signal processing, and tactile stimulation. While some of these problems--e.g., sensing--have been addressed in the past, tactile stimulation, or the 'tactile display,' as an informational medium is still at a very early stage of development. Our work has focused on the design of a 5 x 5 tactile display and the fabrication problems surrounding high spatio-temporal resolution. Using pneumatic actuators and a mix of conventional and micromachining techniques, we have prototyped and characterized the display and created a linked sensor-display system. The display was characterized in the usual manner of a linear system, and the ability of human subjects to discriminate patterns, forces, and displacements was measured. The display was found to have a maximum force output of 340 millinewtons at each element, a force resolution of 4.4 bits, and a frequency response of 7 Hz. Human subjects were able to recognize simple geometric patterns presented on the display, discriminate forces with 3.3 bits of resolution, and sense displacements of 0.1 mm (5% of the array spacing).
A computer aid is proposed in this paper for planning paths for redundant robots. The aid, called a Path Planning Editor (PPE), allows a user to specify the complete configuration for a redundant robot over a desired path. Powerful visualization features in the PPE help the user make decisions on configuration selection for the robot. Visualization is offered not only of the robot motion but also of the optimization criterion function plotted over the redundant space. A demonstration of these ideas has been implemented using two graphic workstations for 3-D visualization and a third computer for the user interface designed using the X Window system. The robot chosen for the application is the 8 degrees of freedom AAI robot used in the Advanced Teleoperation Laboratory of JPL. Details of the implementation are described and suggestions are made for improvement of the PPE in this paper.
A Computer Aided Teleoperation (CAT) system provides the human operator with manual, automatic and mixed control modes. With such a system, the human intervenes at an execution level (in manual and mixed modes) and at a supervisory level in order to select, monitor and sequence the implemented control modes. For the latter activity, some aiding is required since the operation of the basic CAT system implies specialized robotic knowledge which is impracticable for the operator to use on-line. A solution to this problem is to assist the human with a computer possessing some description of the task: the computer is thus able to support a high-level man-machine dialogue hiding the execution details. However, a tedious and rigid 'programming' phase is inappropriate as it would jeopardize the flexibility of the teleoperation system. The current paper focuses on remote task description. Based on experiments actually performed with the TAO-2 system in nuclear maintenance and dismantling, we suggest a standard for describing tasks in a way which is relevant both for the human and the computer. We then discuss how such a task description may be generated and refined through process qualification, mission preparation and on-line supervision. We finally outline the influence of this approach on the present design of the TAO-2 supervision system.
For teleoperation in the presence of communication delays, the detection and correction of error conditions is critically important. This paper discusses an extension of the Teleprogramming system for aiding an operator in understanding error conditions. A 'shadow robot' driven by symbolic statements generated at the remote site has been developed. Also developed is an operational mode in which the operator may replay a sequence of motions leading to an error condition. In this mode the input device is constrained to move along the trajectory corresponding to the remote trajectory. The remaining degrees of freedom available to the input device are exploited to relay kinesthetic information to the operator. Experimental results indicating the usefulness of these additions in diagnosing error conditions are presented.
Modern robotic research has presented the opportunity for enhanced teleoperative systems. Teleprogramming has been developed for teleoperation in time-delayed environments, but it can also lead to increased productivity in non-delayed teleoperation. Powered tools are used to increase the abilities of the remote manipulator. However, tools add to the complexity of the system, both in terms of control and sensing. Teleprogramming can be used to simplify the operator's interaction with the manipulator/tool system. Further, the adaptive sensing algorithm of the remote-site system (using an instrumented compliant wrist for feedback) simplifies the sensory requirements of the system. This document describes the current remote-site implementation of a teleprogramming tool-usage strategy that simplifies tool use. The use of powered tools in teleoperation tasks is illustrated by two examples, one using an air-powered impact wrench and the other using an electric winch. Both of these tools are implemented at our remote-site workcell, consisting of a Puma 560 robot working on the task of removing the top of a large box.
This paper considers issues related to the design of a system which can aid the operator in interacting with the uncertain real world. This takes several complementary approaches. The first is to reduce discrepancies between the master station's initial, imprecise model of the slave site and the real remote world. This may be achieved by utilizing information received from the slave manipulator's kinesthetic interaction with the environment. The second is to make the operator aware of uncertainty by using color cues in the graphical user interface to provide an indication of uncertainty; this should allow an operator to compromise between speed and accuracy. The final approach is to reduce uncertainty in the position of the master arm by actively guiding it along one or more degrees of freedom such that it conforms to predefined, task-dependent geometric primitives.
A world-model collision avoidance system has been developed in the Kansas Augmented Telerobotics Laboratory (KATL) at the University of Kansas. Collision avoidance is implemented on a Kraft Telerobotics master/slave system. The two primary components of the system discussed here are the building of the obstacle model and the scheme for distributed sampling of the obstacle model by the slave model. The system runs in real time on a PC-AT platform. The collision avoidance system samples the locations of objects in the slave's surroundings from the KATL world model. The system then converts a simplified constructive solid geometry (CSG) representation of the world model into an octree representation of the obstacle model. The world model represents objects with variable amounts of detail, allowing the user to select the amount of detail that is passed to the collision avoidance system, which in turn determines the amount of detail in the obstacle model. At run time, the future position of the slave is predicted. The collision avoidance system resolves each link of the slave into an octree structure and requests the octree of the obstacle model in the vicinity of the slave across an ARCNET LAN. The system uses a fast algorithm to determine whether a collision will occur. If a collision is imminent, feedback forces are applied to the master to avoid the potential collision.
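An octree occupancy query of the kind described can be sketched with a toy nested structure. A real system would derive the tree from the CSG world model and intersect whole link volumes, so the hand-built tree and point query below are purely illustrative:

```python
# Toy octree occupancy test.  An internal node is a list of 8 children
# (indexed by the x/y/z half of the cell); leaves are FULL or EMPTY.

FULL, EMPTY = 'full', 'empty'

def occupied(node, x, y, z, size):
    """True if point (x, y, z), in a cube [0, size)^3, lies in a FULL cell."""
    if node == FULL:
        return True
    if node == EMPTY:
        return False
    half = size / 2.0
    ix, iy, iz = x >= half, y >= half, z >= half
    child = node[int(ix) | (int(iy) << 1) | (int(iz) << 2)]
    # Recurse into the child octant with coordinates made local to it.
    return occupied(child,
                    x - half * ix, y - half * iy, z - half * iz,
                    half)
```

Checking a predicted slave-link position against such a tree is logarithmic in the resolution of the obstacle model, which is what makes octrees attractive for real-time collision prediction.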
This paper describes the results of an investigation into the effects of providing a telerobotic operator with three types of tactile information: kinesthetic, cutaneous, and visual. A full factorial experiment was conducted using sixteen test subjects divided into four test groups. The subjects were trained and tested performing two different tasks. They were not allowed direct visual contact with the task site, but instead had to use four different camera views. During task execution, a computer-based data acquisition system was used to record the telerobotic slave joint torque values. The performance of the system was measured using three criteria: joint torque variation, maximum joint torque value, and task completion time. A standard analysis of variance (ANOVA) test was used to determine whether the different modes of tactile feedback caused a change in system performance. The ANOVA results showed that the kinesthetic feedback improved operator performance of the tasks in terms of both torque variation and maximum torque value, while cutaneous feedback had no measurable effect. Visual feedback was not included in the test because it required the operator to continually shift eye contact between the feedback display and the displays showing the task site. Neither the kinesthetic feedback nor the cutaneous feedback had an effect upon task completion time for the two tasks performed.
Traditional force feedback or force reflection, which applies forces to a human operator's hand or arm muscles, has been shown in several studies to be beneficial to a person performing remote manipulation tasks with a teleoperation system. However, force reflection has its disadvantages, including operator-induced instabilities in the presence of time delays. The use of tactile and auditory displays to present force feedback will be discussed. These displays can provide the human operator with force information without some of the disadvantages of force reflection. The design of the displays is explained, as well as an experimental study on the effectiveness of the displays for remote manipulation tasks. These displays compared favorably to traditional force reflection for basic force perception tests and improved the human operator's sensitivity for detecting small forces. With a time delay, the displays improved operator performance for peg-in-hole tasks without instabilities. They also improved performance during degraded visual conditions. The benefits of using such displays for telemanipulation tasks are discussed, as well as potential applications and future research.