The problem addressed is controlling the relative position between a robot and an object using a sequence of operations specified at a high level. The goal is telecommanding of robots over a communication network with low bandwidth, non-negligible time delay, and time jitter. Using an eye-in-hand range sensor, motion commands can be entered and executed relative to a range map generated from measurements. The ratio of task-relevant information to bandwidth is high for this sensor (it returns range in a plane between the fingers of the gripper). Conventional images are sent to the operator only at `fax rate.' Some results obtained are: (1) Using a connection over the Internet, an operator in Lulea, Sweden, controlled a SCARA robot in Linkoping, Sweden, 1,100 km away. During the experiments several objects were successfully handled by the operator. (2) Automatically applying a sensor, say, perpendicular to a selected surface on a machine under diagnosis; the sensing may be for vibrations, shape, surface properties, etc. Note that telecommanding is not conventional telemanipulation using high-bandwidth `live' images and force feedback. Instead, high-level commands (graphical input) relative to a 3D model or existing measurements are used. The robot is controlled in a local feedback loop closed at full bandwidth using the range sensor mounted in the gripper.
Literal teleoperation doesn't work very well. Limited bandwidth, long latencies, and non-anthropomorphic mappings all make the effort of teleoperation tedious at best and ineffective at worst. Instead, users of teleoperated and semi-autonomous systems want their robots to `just do it for them,' without sacrificing the operator's intent. Our goal is to maximize human strategic control in teleoperator-assisted robotics. In our teleassisted regime, the human operator provides high-level contexts for low-level autonomous robot behaviors. The operator wears an EXOS hand master to communicate via a natural sign language, such as pointing to objects and adopting a grasp preshape. Each sign indicates an intention, e.g., reaching or grasping, and, where applicable, a spatial context, e.g., the pointing axis or preshape frame. The robot, a Utah/MIT hand on a Puma arm, acts under local servo control within the prescribed contexts. This paper extends earlier work [Pook & Ballard 1994a] by adding remote visual sensors to the teleassistance repertoire. To view the robot site, the operator wears a Virtual Research helmet that is coupled to binocular cameras mounted on a second Puma 760. The combined hand-head sensors allow teleassistance to be performed remotely. The example task is to open a door. We also demonstrate the flexibility of the teleassistance model by bootstrapping a `pick and place' task from the door-opening task.
A new remotely operated semiautomatic robot system has been developed for safe and efficient installation of aerial telecommunication cable. The system permits aerial cable to be installed by a field robot that is controlled from the ground instead of by skilled linesmen. Employing three-dimensional position detection using images, two auxiliary arms attached to the robot, and a manipulator gripper that can be taught complex sequences of actions, the robot achieves semiautomatic operation and attitude control.
This paper describes a technology for remote sharing of intelligent electro-mechanical devices. An architecture and actual system have been developed and tested, based on the proposed National Information Infrastructure (NII) or Information Highway, to facilitate programming and control of intelligent programmable machines (such as robots and machine tools). Using appropriate geometric models, integrated sensors, video systems, and computing hardware, computer-controlled resources owned and operated by different (in a geographic as well as a legal sense) entities can be individually or simultaneously programmed and controlled from one or more remote locations. Remote programming and control of intelligent machines will create significant opportunities for sharing of expensive capital equipment. Using the technology described in this paper, university researchers, manufacturing entities, automation consultants, design entities, and others can directly access robotic and machining facilities located across the country. Disparate electro-mechanical resources will be shared in a manner similar to the way supercomputers are accessed by multiple users. Using this technology, it will be possible for researchers developing new robot control algorithms to validate models and algorithms right from their university labs without ever owning a robot. Manufacturers will be able to model, simulate, and measure the performance of prospective robots before selecting robot hardware optimally suited for their intended application. Designers will be able to access CNC machining centers across the country to fabricate prototype parts during product design validation. A prototype architecture and system have been developed and proven. Programming and control of a large gantry robot located at Sandia National Laboratories in Albuquerque, New Mexico, was demonstrated from such remote locations as Washington D.C., Washington State, and Southern California.
This paper discusses the development of a virtual reality platform for the simulation of medical procedures which involve needle insertion into human tissue. The paper's focus is the hardware and software requirements for haptic display of a particular medical procedure known as epidural analgesia. To perform this delicate manual procedure, an anesthesiologist must carefully guide a needle through various layers of tissue using only haptic cues for guidance. As a simplifying aspect for the simulator design, all motions and forces involved in the task occur along a fixed line once insertion begins. To create a haptic representation of this procedure, we have explored both physical modeling and perceptual modeling techniques. A preliminary physical model was built based on CT-scan data of the operative site. A preliminary perceptual model was built based on current training techniques for the procedure provided by a skilled instructor. We compare and contrast these two modeling methods and discuss the implications of each. We select and defend the perceptual model as a superior approach for the epidural analgesia simulator.
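To make the perceptual-modeling idea concrete, the axial resistance of such a simulator can be rendered as a piecewise force profile over insertion depth, with layer boundaries, stiffnesses, and a `pop' at puncture tuned to match an instructor's description rather than CT data. The sketch below is a minimal Python illustration; the layer names, depths, and gains are invented placeholders, not values from the paper.

```python
# Illustrative 1-DOF force profile for a perceptually tuned needle-insertion
# simulator; all layer boundaries and gains are invented placeholders.

LAYERS = [
    # (depth where layer ends [m], resistance gain [N/m], puncture pop [N])
    (0.010, 300.0, 0.4),   # skin
    (0.025, 150.0, 0.0),   # subcutaneous tissue
    (0.035, 800.0, 1.5),   # ligamentum flavum: stiff, then sudden release
    (0.040,  20.0, 0.0),   # epidural space: near loss of resistance
]

def axial_force(depth_m: float) -> float:
    """Return the resisting force (N) felt at a given insertion depth."""
    layer_start = 0.0
    for layer_end, gain, pop in LAYERS:
        if depth_m <= layer_end:
            # Force rises within the layer, plus a small `pop' near its end.
            force = gain * (depth_m - layer_start)
            if layer_end - depth_m < 0.001:
                force += pop
            return force
        layer_start = layer_end
    return 0.0  # past the modeled region

if __name__ == "__main__":
    for d_mm in range(0, 41, 5):
        print(f"{d_mm:2d} mm -> {axial_force(d_mm / 1000.0):5.2f} N")
```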
Hard disk drives have evolved rapidly with computer miniaturization into highly compact and integrated electromechanical systems. Hard drives contain many precision mechanical parts which may prove useful in the design of small precision robots. The advantages of parts taken from hard disks include low cost, miniaturization, high quality, and, for some applications, cleanliness. We report the results of engineering tests on flat-coil head-positioning actuators taken from hard drives ranging from 5.25-inch to 1.8-inch media diameter. We also perform a simple analysis which suggests that requirements for torque per unit mass are lower for small robot arms. The results suggest ways that hard disk actuators can be utilized in mini robotic designs and point the way toward improved versions of these designs for robotic purposes.
In this quantitative investigation of telepresence, human test subjects performed a 2-DOF manual task, similar to a Fitts task, and then responded to questions about the experience after each trial. The task involved using a position input device to manipulate a virtual object on a computer screen. The experimental arrangement made it possible to modify the relationship between what the subject's hand did and what his/her eyes saw. Three different control/sensory transformations were investigated: time delay, rotation, and linear scaling. The subjects' responses were used as the basis for measuring the degree of subjective telepresence (equal to the probability that the human operator will detect the transformation). Subjects also made a direct subjective rating of the transformation in one experiment. Task time served as the measure of task performance. No general relationship between subjective telepresence and task performance was discovered.
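For concreteness, the three transformations amount to simple manipulations of the hand-to-display mapping. A minimal sketch (the sampling rate, delay length, angle, and scale below are assumed for illustration, not the experimental values):

```python
import math
from collections import deque

def make_transform(delay_steps=0, rotation_deg=0.0, scale=1.0):
    """Return a function mapping raw 2-DOF hand input to the displayed cursor
    position, applying (in order) a time delay, a rotation about the origin,
    and a linear scaling. All parameter values are illustrative."""
    buffer = deque([(0.0, 0.0)] * delay_steps)
    theta = math.radians(rotation_deg)

    def transform(x, y):
        if delay_steps:
            buffer.append((x, y))
            x, y = buffer.popleft()   # emit the sample from delay_steps ago
        xr = x * math.cos(theta) - y * math.sin(theta)
        yr = x * math.sin(theta) + y * math.cos(theta)
        return scale * xr, scale * yr

    return transform

# Example: 100 ms delay at 60 Hz sampling, 15 degree rotation, 1.5x scaling.
t = make_transform(delay_steps=6, rotation_deg=15.0, scale=1.5)
print(t(0.10, 0.05))
```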
An ecological study of haptic perception and action in tool use has been proposed. The result of such a study would be a theory of tool use to guide haptic interface, telemanipulator, and virtual environment design. As a first step in this study, we conducted an experimental study of haptic information pickup in a single degree-of-freedom positioning task. The task consisted of moving the handle of a one degree-of-freedom manipulandum to a target location using haptic perception. The manipulandum was controlled to exhibit impedances characterizing viscous drag, or damping. Damping in the target region was made to be different from the damping in the surrounding environment (ambient damping). Subjects were instructed to move to, and stop in, the target zone as rapidly as possible. The results of the experiments show that with our apparatus subjects could detect targets designated by differences in target and ambient damping greater than 2.27 N·s/m. For very large differences in target and ambient damping, subjects performed almost as well using haptic perception alone as they did when they could also see the target.
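A minimal sketch of how a damping-designated target can be rendered on such a manipulandum (the damping coefficients and target window below are assumptions, not the experimental settings): the commanded handle force is simply a region-dependent damping coefficient times the handle velocity.

```python
def damping_force(position_m, velocity_mps,
                  target=(0.08, 0.10),   # assumed target zone [m]
                  b_target=3.0,          # target damping [N*s/m], assumed
                  b_ambient=0.5):        # ambient damping [N*s/m], assumed
    """Resistive force commanded to a 1-DOF manipulandum whose target zone
    is designated purely by a change in damping."""
    in_target = target[0] <= position_m <= target[1]
    b = b_target if in_target else b_ambient
    return -b * velocity_mps   # oppose motion

# At 0.2 m/s the handle feels 0.1 N of drag outside the target and 0.6 N
# inside it; the 0.5 N difference exceeds the ~0.45 N implied at that speed
# by the reported 2.27 N*s/m detection threshold.
print(damping_force(0.05, 0.2), damping_force(0.09, 0.2))
```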
The focus of this study is the application of teleoperation to live-line maintenance work on power distribution lines. The research team's objectives are to measure and compare human performance, the levels of mental workload and the degree of satisfaction corresponding to three working techniques: the hot-stick technique (S), the direct-vision teleoperation technique with and without force feedback (T1 and T2, respectively) and the ground-level teleoperation technique (G). Three linemen with substantial experience with S, little experience with T1 and none with T2 or G took part in a study in which they had to perform a typical task with each of the three techniques. The results show that, compared to S, the productivity ratios for teleoperation are approximately 0.6 for T1, 0.5 for T2 and 0.3 for G. Extrapolation of the results shows that these productivity levels will increase with practice but not to the point where the teleoperation techniques would be as rapid as the hot-stick technique. Technical improvements in the near future are expected to help increase these ratios. The mental workload is higher with T2 and G than with S and T1. Lastly, T1 is the preferred technique and T2 the least appreciated.
An empirical study was performed in which human subjects were asked to execute a peg-insertion task through a telepresence link with force feedback. Subjects controlled a remote manipulator through natural hand motions by using an anthropomorphic upper-body exoskeleton. The force-reflecting exoskeleton could present haptic sensations in six degrees of freedom. Subjects viewed the remote site through a high-fidelity stereo vision system. Subjects performed the peg-insertion task under three different conditions: (1) in person (direct manipulation), (2) through the telepresence link (telemanipulation), and (3) through the telepresence link while using abstract virtual haptic overlays known as `virtual fixtures' (telemanipulation with virtual fixturing). Five different haptic overlays were tested, which included virtual surfaces, virtual damping fields, virtual snap-to-planes, and virtual snap-to-lines. Results of subject testing confirmed that human performance was significantly degraded when comparing telepresence manipulation to direct in-person manipulation. Results also confirmed that by introducing abstract haptic overlays into the telepresence link, operator performance could be restored closer to natural in-person capacity. The use of 3D haptic overlays was found to as much as double manual performance in the standard peg-insertion task.
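As a hedged illustration of one overlay type, a snap-to-line fixture can be realized as an attractive force proportional to the tool's perpendicular distance from a task-aligned line. The gain, saturation, and geometry below are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def snap_to_line_force(p, line_point, line_dir, k=200.0, f_max=5.0):
    """Attractive fixture force (N) pulling the tool toward a line.
    p and line_point are 3-vectors [m]; line_dir is the line direction;
    the stiffness k [N/m] and saturation f_max [N] are illustrative."""
    d = np.asarray(line_dir, dtype=float)
    d = d / np.linalg.norm(d)
    v = np.asarray(p, dtype=float) - np.asarray(line_point, dtype=float)
    perp = v - np.dot(v, d) * d          # component perpendicular to the line
    f = -k * perp                        # spring pulling back onto the line
    norm = np.linalg.norm(f)
    if norm > f_max:
        f *= f_max / norm                # saturate for safety
    return f

# A tool 1 cm off the insertion axis feels a 2 N pull back toward it.
print(snap_to_line_force([0.01, 0.0, 0.30], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))
```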
In controlling current and future space telerobotic systems, the human operator can encounter challenging human-machine interface problems. These include operating the arm with limited positioning cues, avoiding joint limits or singularities, and operating the arm a single joint at a time rather than with the hand controllers, which can be required due to system failures. We are developing a multi-mode manipulator display system (MMDS) that addresses these problems. The first mode, the manipulator position display (MPD) mode, provides the operator with positioning cues that are particularly helpful during operations with constrained viewing conditions. The second mode, the joint angle display (JAD) mode, assists the operator with avoiding joint limits and singularities, and can provide cues to alleviate these conditions once they occur. The single joint operations display (SJOD) mode is the third mode and provides cues to assist the operator when operating the manipulator a single joint at a time. The fourth mode of the MMDS is the sensory substitution (SS) mode, which can provide force feedback information through vibrotactile or auditory displays. The MMDS has been designed for space-based applications, but can be extended to a variety of human-machine telerobotic applications including toxic waste cleanup, undersea robotic operations, manufacturing systems, and control of prosthetic devices.
Automated mining has been proposed as a solution to reducing mining costs associated with labor and development. Quite simply, no-one will need to work underground. A series of special-purpose mining vehicles is currently being designed for both semi-autonomous operation and teleoperation. A preliminary implementation at INCO's North Mine complex in Copper Cliff, Ontario, Canada, has met with great success. Improvements are required, however, in the presentation and integration of feedback from the remotely operated vehicle due to the poor video image quality. Depth cues in particular have been found to be deficient. Work currently underway at the University of Waterloo involves the development of a graphics engine responsible for the integration and rendering of data from various sources including: live video (analog and/or digital), range-finding data, an intelligent vision system, CAD mine models, and supervisory control and data acquisition systems. Graphical overlays on a live video feed are being examined as a means of enhancing and augmenting the human operator's visual input. We are investigating a tool-set which addresses the requirements of teleoperation in the context of a mining environment. This includes the integration of data from a number of different sources for the purpose of interactive mine planning and operation.
The interface between the human controller and the remotely operated device is a crux of telerobotic investigation today. This human-to-machine connection is the means by which we communicate our commands to the device, as well as the medium for decision-critical feedback to the operator. The amount of information transferred through the user interface is growing. This can be seen as a direct result of our need to support added complexities, as well as a rapidly expanding domain of applications. A user interface, or UI, is therefore subject to increasing demands to present information in a meaningful manner to the user. Virtual reality and multi-degree-of-freedom input devices lend us the ability to augment the man/machine interface and handle burgeoning amounts of data in a more intuitive and anthropomorphically correct manner. Along with the aid of 3-D input and output devices, there are several visual tools that can be employed as part of a graphical UI to enhance and accelerate our comprehension of the data being presented. Thus an advanced UI that features these improvements would reduce teleoperator fatigue, increase safety, facilitate learning, augment control, and potentially reduce task time. This paper investigates the cutting-edge concepts and enhancements that lead to the next generation of telerobotic interface systems.
The goal of the project is to investigate new features and devices for teleoperated control and man-machine interfaces, with the aim of improving efficiency and productivity in a real task. The telepresence project reported here was realized jointly by the Robotics Laboratory of CESI and the Robotics Laboratory of the Milan Polytechnic. In this application we have fused different technologies coming from the remote control field and virtual reality technology, while keeping the system simple and cheap. The real task application used as a reference for the project was a robotized mechanical blade, built on a tracked mobile platform, with an on-board VME-based controller. The system has been equipped with a stereo vision tool, a system for position and orientation computation, and other on-board sensors. A control architecture implementing remote human supervision and on-board security checks has been realized using a self-made telepresence system with head-mounted displays and a head movement tracking system. This telepresence technology could be applied in several fields and for several purposes; in our activity range we have noted practical applications both in underwater teleoperated vehicles and in live-line inspection and maintenance systems for high-voltage lines. Several test series have been carried out in order to evaluate the efficiency of the teleoperated system and the efficacy of the man-machine interface. These tests, which covered navigation, manipulation, and visual inspection, have shown the benefits that a teleoperated system can obtain from a special tool devoted to integrating different sensor technologies and features with the human sensorial apparatus.
Keywords: telepresence technology, teleoperated control, man-machine interfaces, head-mounted displays
The new technologic revolution in medicine is based upon information technologies, and telemanipulation, telepresence and virtual reality are essential components. Telepresence surgery returns the look and feel of `open surgery' to the surgeon and promises enhancement of physical capabilities above normal human performance. Virtual reality provides basic medical education, simulation of surgical procedures, medical forces and disaster medicine practice, and virtual prototyping of medical equipment.
Two kinematically dissimilar robots for laparoscopic surgery have been designed and built through a collaborative effort between IBM Research and the Johns Hopkins University School of Medicine. The two mechanisms represent two distinct design approaches and a number of different engineering design decisions. In this paper we describe the mechanical design and kinematic structure of the two robots and report on the results of laboratory evaluations of the two mechanisms. The two systems are compared in terms of safety, ergonomics, ease of control, accuracy, and mechanical stiffness. In each of the categories we attempt to separate the impact of the particular design decisions made in the construction of each mechanism from the more general issue of the fundamental potential and limitations of each of the design approaches towards satisfying the particular design criterion. Based on our experience, we offer some conclusions and recommendations regarding the design of surgical robots for laparoscopy.
A teleoperated microsurgical robot has been developed together with a virtual environment for microsurgery on the eye. Visual and mechanical information is relayed via bidirectional pathways between the slave and master of the microsurgical robot. The system permits surgeons to operate in one of three alternative modes: on real tissue, on physically simulated tissue in a mannequin, or on a computer based physical model contained within the ophthalmic virtual environment. In all three modalities, forces generated during tissue manipulation (i.e. resecting, probing) are fed back to the surgeon via a force reflecting interface to give the haptic sensations (i.e. `feel') appropriate to the actions being performed. The microsurgical robot has been designed so that the master and slave systems can be in physically separate environments which permits remote surgery to be performed. The system attempts to create an immersive environment for the operator by including not only visual and haptic feedback, but also auditory, cutaneous, and, ultimately, olfactory sensations.
We are developing a new robotic system applicable to micro- and minimally invasive surgeries. The goal is a dexterity-enhancing master-slave system that will refine the scale of current microsurgeries and minimize the effects of involuntary tremor and jerk in surgeons' hands. As a result, new procedures of the eye, ear, brain, and other critical faculties will become possible, and the positive outcome rates in conventional procedures will improve. In its nominal configuration, this new robot-assisted microsurgery (RAMS) system has a surgeon's hand controller immediately adjacent to the robot. The RAMS system is also potentially applicable to `telesurgery' -- surgeries carried out in local-remote settings and time-delayed operating theaters -- as considered important in field emergencies and displaced-expertise scenarios. As of August 1994 we have developed and demonstrated a new 6-degree-of-freedom robot (slave) for the RAMS system. The robot and its associated Cartesian controls enable relative positioning of surgical tools to approximately 25 microns within a non-indexed and singularity-free work volume of approximately 20 cubic centimeters. This implies the capability to down-scale hand motion inputs by two to three times, and the consequent performance of delicate procedures in such areas as vitreo-retinal surgery, for which clinical trials of this robot are planned in 1996. Further, by virtue of an innovative drive actuation, the robot can sustain full-extent loads up to three pounds, making it applicable to both fine manipulation of microsurgical tools and the dexterous handling of larger powered devices of minimally invasive surgery. In this paper, we overview the robot mechanical design, controls implementation, and our preliminary experimentation with them. Our accompanying oral presentation includes a five-minute videotape of some engineering laboratory results achieved to date.
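As a rough illustration of motion down-scaling with tremor attenuation (the scale factor, cutoff, and sampling rate are assumptions, not RAMS parameters), incremental master motions can be scaled and low-pass filtered before being sent to the slave:

```python
import math

class ScaledTremorFilter:
    """Down-scale incremental hand motion and attenuate tremor with a simple
    first-order low-pass filter. The scale and cutoff are illustrative values,
    not those of the RAMS system."""

    def __init__(self, scale=1.0 / 3.0, cutoff_hz=2.0, sample_hz=200.0):
        self.scale = scale
        # First-order low-pass coefficient for the given cutoff and rate.
        self.alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_hz)
        self.filtered = 0.0

    def step(self, master_delta_m):
        """Map one incremental master displacement (m) to a slave command."""
        self.filtered += self.alpha * (master_delta_m - self.filtered)
        return self.scale * self.filtered

f = ScaledTremorFilter()
# A steady 75-micron hand increment settles toward a ~25-micron slave command.
for _ in range(300):
    cmd = f.step(75e-6)
print(cmd)
```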
The paper describes the implementation of shared control in a force reflecting telerobotic system which has been carried out during the EC TELEMAN project INGRID. The experimental facility which has been developed at Newcastle comprises a Puma 762 robot which can be manually teleoperated by a Puma 260 robot, functioning as a generalized bilateral (or force reflecting) controller. The slave robot has been configured to support several autonomous force controlled tasks, which have been developed specifically for repair and maintenance operations in nuclear and/or other hazardous environments. The control architecture is based on a network of parallel processors, and the man-machine interface provides `soft-switching' of any axis to facilitate mixed mode and shared control which can be configured in both teleoperator and task based modes. A graphic display is also included to provide a visual indication of the magnitude of forces/torques exerted at the tool/environment interface.
A hybrid position/force controller is presented for a 6-DOF hydraulic manipulator. The controller has been implemented on a Kraft telerobotic slave which has been modified to accommodate a 6-DOF force/torque sensor at the wrist. The controller is implemented within the task frame, and both position and force are controlled with a non-conventional hybrid control method. Positional accuracy is maintained at the joint level by using a joint error prediction method based on measured joint torques which have been low-pass filtered. This prediction method eliminates the need for integral gains, which introduce unwanted limit cycling. Dynamic stability is maintained and Cartesian positional error is held to less than 0.2 inches. Conventional hybrid control is based on the ability to control joint torques, but hydraulic actuator torque cannot generally be directly controlled. We instead employ a feedback loop which adjusts positional commands along force-controlled DOFs until the desired end-effector forces/moments have been realized. This feedback loop has been implemented in both joint and Cartesian space. The joint space feedback method is based on the observed joint error versus joint torque characteristics used in the positional accuracy control portion. The joint space method has better force tracking capabilities than the Cartesian method, but is not stable for all robot configurations. The Cartesian space feedback method has sufficient force tracking for a useful range of tasks, and is stable for all configurations.
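A minimal sketch of the Cartesian-space version of that feedback loop (the gain, step limit, and interfaces are assumptions): since actuator torque cannot be commanded directly, the position setpoint along each force-controlled DOF is nudged each cycle in proportion to the force error.

```python
def force_outer_loop(x_cmd, f_desired, f_measured,
                     force_controlled, k_f=1e-4, step_max=1e-3):
    """Adjust a Cartesian position command (per axis, in m) so the measured
    end-effector force tracks the desired force along force-controlled DOFs.
    The gain k_f [m/N] and step limit [m] are illustrative, not the paper's."""
    new_cmd = list(x_cmd)
    for i, is_force_axis in enumerate(force_controlled):
        if not is_force_axis:
            continue                      # position-controlled axis: leave it
        error = f_desired[i] - f_measured[i]
        step = max(-step_max, min(step_max, k_f * error))
        new_cmd[i] += step                # nudge setpoint toward desired force
    return new_cmd

# Push 10 N along z while holding x and y under position control.
cmd = [0.50, 0.10, 0.30]
for _ in range(20):
    cmd = force_outer_loop(cmd, [0.0, 0.0, -10.0], [0.0, 0.0, -6.0],
                           force_controlled=[False, False, True])
print(cmd)
```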
In this paper, we examine the role of two force scaling laws in the performance, by human operators, of scaled teleoperated pick and place tasks. The experiments used a hybrid hardware-software telemanipulation system with force feedback. Human subjects were provided with visual and varying types of force feedback to perform the desired task. The force feedback depended on the scaling law used. Our results show that impedance scaling improved the performance with respect to constant (i.e., geometric only) scaling or no force feedback.
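To make the distinction concrete (the formulas below are one plausible reading with assumed coefficients, not the paper's exact laws): constant, geometric-only scaling reflects the slave force multiplied by a fixed factor, while an impedance scaling law additionally shapes the reflected force with a velocity-dependent term so the impedance felt by the operator matches the scaled task.

```python
def constant_scaling(f_slave_n, force_scale=50.0):
    """Geometric-only law: reflect the slave contact force scaled by a fixed
    factor. The factor is illustrative."""
    return force_scale * f_slave_n

def impedance_scaling(f_slave_n, v_master_mps, b_scaled=2.0, force_scale=50.0):
    """One plausible impedance-type law (an assumption, not the paper's exact
    formula): scale the slave force and add a velocity-dependent term so the
    environment impedance presented to the hand is scaled as well."""
    return force_scale * f_slave_n + b_scaled * v_master_mps

# Reflected force for a 0.02 N contact at 0.1 m/s hand speed.
print(constant_scaling(0.02), impedance_scaling(0.02, 0.1))
```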
Certain important advantages of three-dimensional television systems over two-dimensional viewing have made them the superior choice in many applications including remote inspection, telemanipulation, robot guidance, and medicine. Earlier work on 3D TV systems, which are based on the use of a stereo-pair of cameras for the provision of two different view points, has revealed that many aspects of their performance depend on such parameters as the camera separation and the focal length of the camera lenses. A drawback of these systems is that the required setting of the camera separation to meet the specification of a given task may not always be easy to achieve. Furthermore, in order for a system of this type to perform correctly, the two lenses should be completely matched. This paper details the development of a single-camera 3D TV system. It is shown that the implementation of such a system may be successfully achieved through an appropriate optical arrangement and the time-shifting of electronic images. The system is believed to be extremely attractive for applications in environments where size is a limiting factor.
A telerobot control system using stereoscopic viewing has been developed. The objective of the system is to implement a world-model mapping capability using live stereo video to provide an operator with three-dimensional image information of an unstructured environment and to use stereo computer graphics renderings of wire-frame models of sought-after features in the environment in order to relate robotic task information. The operator visually correlates or matches the stereo video image of the environment with the graphic image to register the world model space to the manipulator's environment. This allows operator control of the manipulator through teleoperation in unstructured environments with a change-over to autonomous operation when the operation can be restricted and a task becomes repetitive. Details of the robot control, stereo imaging, and system control components are provided.
Among the mechanical parameters that are important for dexterous manipulation, shape is useful for both object recognition and control purposes. To investigate the role of shape information in telemanipulation we have created a tactile shape display. This prototype consists of a regular 6 x 4 array of pin elements or `tactors' which rest against the operator's fingertip. Shape memory alloy (SMA) wires raise individual tactors to approximate the desired surface shape on the skin. We have implemented a feedforward control law and air-cooling that improve the bandwidth of the otherwise slow SMA wires. The hysteretic and nonlinear nature of the SMA actuators has also led us to implement a closed-loop controller with position feedback using an optical emitter-receiver pair. The resulting SMA actuator performance has a -3 dB bandwidth between 6 and 7 Hz. We have interfaced the display with a capacitive tactile array sensor and are able to convey simple shapes from a remote environment through the display. The results of simple tactile feature localization experiments show the ability of the shape relay system to convey shape information.
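A rough sketch of that control structure, one feedforward term plus proportional feedback from the optical position sensor per tactor; the gains, stroke, and duty-cycle interface are placeholders, not the display's calibration.

```python
def tactor_command(height_des_mm, height_meas_mm, kp=0.5, ff_gain=0.8,
                   u_max=1.0):
    """Actuation command (normalized heating duty, 0..1) for one SMA tactor:
    a feedforward term from the desired height plus proportional feedback
    from the optical emitter-receiver pair. All constants are placeholders."""
    feedforward = ff_gain * (height_des_mm / 3.0)   # assume ~3 mm full stroke
    feedback = kp * (height_des_mm - height_meas_mm)
    u = feedforward + feedback
    return max(0.0, min(u_max, u))                  # clamp to heater limits

# A tactor lagging 0.5 mm behind a 2 mm target gets extra heating.
print(tactor_command(2.0, 1.5))
```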
Task definitions for manipulators or robotic systems (conventional or mobile) usually suffer from poor performance and are sometimes impossible to design. `On-line' programming methods are often time-consuming or risky for the human operator or the robot itself. On the other hand, `off-line' techniques are tedious and complex. In a virtual reality robotics environment (VRRE), users are not asked to write down complicated functions to specify robotic tasks. However, a VRRE is only effective if all the environment changes and object movements are fed back to the virtual manipulating system. Thus some kind of visual or multi-sensor feedback is needed. This paper describes a semi-autonomous robot system composed of an industrial 5-axis robot and its virtual equivalent. The user is immersed in a 3-D space built from models of the robot's environment. The user directly interacts with the virtual `components' in an intuitive way, creating trajectories and tasks and dynamically optimizing them. A vision system is used to recognize the position and orientation of the objects in the real robot workspace, and updates the VRRE through a bi-directional communication link. Once the tasks have been optimized in the VRRE, they are sent to the real robot, and a semi-autonomous process ensures their correct execution thanks to a camera mounted directly on the robot's end effector. Therefore, errors and drifts due to transmission delays can be locally processed and successfully avoided. The system can execute the tasks autonomously, independently of small environmental changes due to transmission delays. If the environmental changes are too important, the robot stops, updates the VRRE with the new environmental configuration, and waits for task redesign.
This paper introduces the field of augmented reality as a prolog to the body of papers in the remainder of this session. I describe the use of head-mounted display technologies to improve the efficiency and quality of human workers' performance of engineering design, manufacturing, construction, testing, and maintenance activities. This technology is used to `augment' the visual field of the wearer with information necessary for the performance of the current task. The enabling technology is head-up (see-through) display head sets (HUDsets) combined with head position sensing, real-world registration systems, and database access software. A primary difference between virtual reality (VR) and `augmented reality' (AR) is in the complexity of the perceived graphical objects. In AR systems, only simple wire frames, template outlines, designators, and text are displayed. An immediate result of this difference is that augmented reality systems can be driven by standard and inexpensive microprocessors. Many research issues must be addressed before this technology can be widely used, including tracking and registration, human 3D perception and reasoning, and human task performance issues.
In this paper we discuss augmented reality (AR) displays in a general sense, within the context of a reality-virtuality (RV) continuum, encompassing a large class of `mixed reality' (MR) displays, which also includes augmented virtuality (AV). MR displays are defined by means of seven examples of existing display concepts in which real objects and virtual objects are juxtaposed. Essential factors which distinguish different MR display systems from each other are presented, first by means of a table in which the nature of the underlying scene, how it is viewed, and the observer's reference to it are compared, and then by means of a three dimensional taxonomic framework comprising: extent of world knowledge, reproduction fidelity, and extent of presence metaphor. A principal objective of the taxonomy is to clarify terminology issues and to provide a framework for classifying research across different disciplines.
One of the most promising and challenging future uses of head-mounted displays (HMDs) is in applications where virtual environments enhance rather than replace real environments. To obtain an enhanced view of the real environment, the user wears a see-through HMD to see 3D computer-generated objects superimposed on his/her real-world view. This see-through capability can be accomplished using either an optical or a video see-through HMD. We discuss the tradeoffs between optical and video see-through HMDs with respect to technological, perceptual, and human factors issues, and discuss our experience designing, building, using, and testing these HMDs.
For the past three years, we have been developing augmented reality technology for application to a variety of touch labor tasks in aircraft manufacturing and assembly. The system would be worn by factory workers to provide them with better-quality information for performing their tasks than was previously available. Using a see-through head-mounted display (HMD) whose optics are set at a focal length of about 18 in., the display and its associated head tracking system can be used to superimpose and stabilize graphics on the surface of a work piece. This technology would obviate many expensive marking systems now used in aerospace manufacturing. The most challenging technical issue with respect to factory applications of AR is head position and orientation tracking. It requires high-accuracy, long-range tracking in a high-noise environment. The approach we have chosen uses a head-mounted miniature video camera. The user's wearable computer system utilizes the camera to find fiducial markings that have been placed on known coordinates on or near the work piece. The system then computes the user's position and orientation relative to the fiducial marks. It is referred to as a `videometric' head tracker. In this paper, we describe the steps we took and the results we obtained in the process of prototyping our videometric head tracker, beginning with analytical and simulation results, and continuing through the working prototypes.
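A compressed sketch of the videometric idea: detect fiducial marks of known 3D position in the head-camera image and solve the resulting perspective-n-point problem for the camera (hence head) pose. The paper does not describe the tracker at this level; the OpenCV calls, camera matrix, and fiducial coordinates below are assumptions used only to illustrate the pose-from-fiducials step.

```python
import numpy as np
import cv2  # OpenCV, used here only to illustrate pose recovery from fiducials

# Known workpiece coordinates of four fiducial marks (meters) -- placeholders.
FIDUCIALS_3D = np.array([[0.0, 0.0, 0.0],
                         [0.3, 0.0, 0.0],
                         [0.3, 0.2, 0.0],
                         [0.0, 0.2, 0.0]], dtype=np.float64)

def head_pose_from_fiducials(image_points, camera_matrix, dist_coeffs=None):
    """Given the pixel locations of the fiducials seen by the head-mounted
    camera, return the rotation matrix and translation of the workpiece frame
    expressed in the camera frame."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(FIDUCIALS_3D, image_points,
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)     # rotation vector -> 3x3 matrix
    return rotation, tvec

# Example with an assumed 640x480 camera and detected fiducial pixels.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
pixels = np.array([[300.0, 250.0], [420.0, 252.0],
                   [418.0, 170.0], [302.0, 168.0]], dtype=np.float64)
R, t = head_pose_from_fiducials(pixels, K)
print(t.ravel())
```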
We describe a theoretical basis for combining absolute and incremental position and orientation data, based on optimization and the maximum likelihood principle. We present algorithms for carrying out the computations, and discuss associated computational issues. We treat separately the translation and rotation problems. For the translation problem, we postulate that we have a sensor of absolute (position) and a sensor of first-difference (velocity) data. We also bring in the second-difference (acceleration) when we consider a possible dynamics assumption. For the rotation problem, we postulate only that we have a sensor of orientation and a sensor of first-order rotation changes. We obtain sensor averages by solving a nonlinearly constrained quadratic optimization problem.
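A simplified instance of the translation case (one axis, independent Gaussian noise, no dynamics term; the variances are assumptions): choose the position sequence that best explains both the absolute readings and the first-difference (velocity) readings in a weighted least-squares sense, which is the maximum-likelihood estimate under these assumptions.

```python
import numpy as np

def fuse_positions(abs_meas, vel_meas, dt, var_abs=1e-2, var_vel=1e-4):
    """Maximum-likelihood estimate (one axis, Gaussian noise) of positions
    x_0..x_{n-1} given absolute measurements of each x_k and measurements of
    each first difference (x_{k+1} - x_k)/dt. Variances are illustrative."""
    n = len(abs_meas)
    rows, rhs, weights = [], [], []
    # Absolute (position) sensor equations: x_k = abs_meas[k]
    for k in range(n):
        r = np.zeros(n); r[k] = 1.0
        rows.append(r); rhs.append(abs_meas[k]); weights.append(1.0 / var_abs)
    # Incremental (velocity) sensor equations: (x_{k+1} - x_k)/dt = vel_meas[k]
    for k in range(n - 1):
        r = np.zeros(n); r[k + 1] = 1.0 / dt; r[k] = -1.0 / dt
        rows.append(r); rhs.append(vel_meas[k]); weights.append(1.0 / var_vel)
    A, b, w = np.array(rows), np.array(rhs), np.sqrt(np.array(weights))
    # Weighted least squares == ML under independent Gaussian noise.
    x_hat, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return x_hat

abs_readings = [0.00, 0.11, 0.19, 0.31]   # noisy absolute positions [m]
vel_readings = [1.0, 1.0, 1.0]            # incremental readings [m/s]
print(fuse_positions(abs_readings, vel_readings, dt=0.1))
```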
We investigate the problem of predicting future head orientations from past and current data, because the use of raw sensor data in a virtual environment creates visual misalignment due to system time lag. We develop a form of a generalized calculus in which we can investigate trajectories of orientations in their most natural setting. Using a generalization of the Taylor expansion, we derive first- and second-order dynamics, which we then test against real data. Empirically, we discovered that both kinds of dynamics give fairly accurate predictions and that the first-order dynamic gives consistently better predictions than the second-order dynamic. We explain this result by forming a hypothesis: changes in the orientation of a human head tend to be very simple. Except for very brief surges of muscle energy when acceleration or deceleration occurs, the orientation of a human head is either fixed, or it changes in a linear, constant-angle motion about a fixed axis. We also test our first-order predictor against a published extended Kalman filter and find that the first-order dynamic predictions are approximately 20% more accurate and have smaller variance.
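A sketch of what a first-order (constant angular velocity) predictor amounts to when orientations are represented as quaternions; the axis-angle extrapolation is standard, but the interface, sample interval, and lag value are assumptions, not the paper's formulation.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def predict_first_order(q_prev, q_curr, dt, lag):
    """Predict the head orientation `lag` seconds ahead by assuming the
    rotation observed over the last sample interval continues about a fixed
    axis at a constant rate (a first-order, constant-angular-velocity model)."""
    q_prev = np.asarray(q_prev, float)
    q_curr = np.asarray(q_curr, float)
    # Relative rotation over the last step: q_curr = dq (x) q_prev.
    q_prev_inv = q_prev * np.array([1.0, -1.0, -1.0, -1.0])  # unit-quat inverse
    dq = quat_mul(q_curr, q_prev_inv)
    dq /= np.linalg.norm(dq)
    # Extract axis and angle, scale the angle by lag/dt, and re-apply.
    angle = 2.0 * np.arccos(np.clip(dq[0], -1.0, 1.0))
    axis = dq[1:] / (np.linalg.norm(dq[1:]) + 1e-12)
    scaled = angle * lag / dt
    dq_scaled = np.concatenate([[np.cos(scaled / 2.0)],
                                np.sin(scaled / 2.0) * axis])
    q_pred = quat_mul(dq_scaled, q_curr)
    return q_pred / np.linalg.norm(q_pred)

# One degree of yaw per 10 ms sample, predicted 50 ms ahead: ~5 more degrees.
half = np.radians(1.0) / 2.0
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1 = np.array([np.cos(half), 0.0, 0.0, np.sin(half)])
print(predict_first_order(q0, q1, dt=0.010, lag=0.050))
```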
This paper describes our studies on perception of virtual object locations. We explore the behavior of various factors related to depth perception, especially the interplay of fuzziness and binocular disparity. Experiments measure this interplay with the use of a three-dimensional display. Image fuzziness is ordinarily seen as an effect of aerial perspective in two-dimensional displays of distant objects. We found that it can also be a source of depth representation in three-dimensional display space. This effect occurs with large individual variations, and for subjects who have good stereopsis it does not significantly affect their depth perception. However, it can be a very strong depth cue for subjects who have weak stereopsis. A subsequent experiment measured the effects when both size and brightness of sharp stimuli are adjusted to a standard fuzzy stimulus. The results suggest that the fuzziness cue at short range is explained by other cues (i.e., the size cue and the brightness cue). This paper presents results of a series of such experiments.
This paper describes the major components of the Grasp augmented vision system. Grasp is an object-oriented system written in C++, which provides an environment both for exploring the basic technologies of augmented vision and for developing applications that demonstrate the capabilities of these technologies. The hardware components of Grasp include video cameras, 6-D tracking devices, a frame grabber, a 3-D graphics workstation, a scan converter, and a video mixer. The major software components consist of classes that implement geometric models, rendering algorithms, calibration methods, file I/O, a user interface, and event handling.
There has been an increasing need for faster and more reliable inspection of nuclear reactor vessels during scheduled inspection/refueling/repair outages. The inspection of such a complicated environment presents many challenges. During scheduled outages, the inspection platform (in our system a remotely operated miniature submarine) must be piloted efficiently to remote inspection locations, images and measurements of inspection locations presented to the human inspector, past information about each location recalled and viewed, and decisions made regarding repair activity scheduling. We are developing a new integrated system that employs augmented reality techniques to allow the inspection system operator to efficiently and reliably carry out these tasks. We describe a system that creates a real-time animated synthetic image of the underwater environment being inspected (drawn from CAD models of reactor components) in which a synthesized image of the inspection platform moves. The image is created with respect to an operator-selected viewing point. A sensor measures the position and orientation of the actual mini-submarine, and these data are used by the graphics computer to continuously update the animated image. The images to be viewed can be either two or three dimensional. This information is used to assist in guiding the vehicle through the environment. The system display also integrates current inspection data (such as live video images) with past video frames or with past inspection reports and past data to allow fast and reliable inspection decisions to be made. Examples of typical operator display screens are included.
Telerobotic inspection can be used in environments that are too hazardous, removed, or expensive for direct human inspection. Telerobotic inspection is a complex task requiring an operator to control and coordinate a robot and sensors, while monitoring and interpreting sensor data to detect flaws. A virtual window telepresence system has been developed to aid the operator in performing these inspections. While the operator is looking at a monitor displaying stereo video from cameras mounted on the robot, the system tracks operator head position and moves the robot to create the illusion that the operator is looking out a window. This interface allows the operator to naturally specify desired viewpoint and enables him to concentrate on the visual examination of the area that may contain a flaw.
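A minimal sketch of the virtual-window coupling (the frames, offsets, and scale are assumptions): treat the monitor as a window frame and command the remote camera platform so that its viewpoint relative to the inspected surface mirrors the operator's tracked head position relative to the screen.

```python
def camera_setpoint(head_pos_m, window_center_m=(0.0, 0.0, 0.6),
                    motion_scale=1.0, cam_home_m=(0.0, 0.0, 0.5)):
    """Map a tracked head position (x right, y up, z toward the screen, in the
    monitor frame) to a commanded camera position in the remote frame so that
    moving the head around the screen `window' shifts the viewpoint the same
    way. All frames, offsets, and the scale are illustrative assumptions."""
    dx = head_pos_m[0] - window_center_m[0]
    dy = head_pos_m[1] - window_center_m[1]
    dz = head_pos_m[2] - window_center_m[2]
    return (cam_home_m[0] + motion_scale * dx,
            cam_home_m[1] + motion_scale * dy,
            cam_home_m[2] + motion_scale * dz)

# Operator leans 5 cm to the right: the camera translates 5 cm the same way.
print(camera_setpoint((0.05, 0.0, 0.6)))
```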