Robotics is a subject which captures the imagination of undergraduate students in many disciplines, including computer science and mechanical engineering. Despite this interest, when the topic is covered at all in an undergraduate program, the critical hands-on component of the course is often omitted, for reasons ranging from the complexity of the subject to the availability of equipment. This paper describes a project that allows experimentation with robots over the Internet for the purpose of undergraduate education. An experimental system has been developed which links computer science students at the University of Wisconsin-La Crosse with robots and mechanical engineering students at Wilkes University, Wilkes-Barre, PA. Unlike systems constructed for experimentation with time delay in deep space or undersea manipulation, the focus of this system is the education of undergraduate students. This paper discusses how the educational goals affect the design of the system as well as the selection of tasks. Although there are clear advantages in capital and maintenance costs to sharing equipment, the emphasis here is on the significant educational benefits of this type of system. We show that remote operation leads to an understanding of the complexity and difficulty of specifying robot motions for an uncontrolled environment. This understanding is very difficult to achieve in a simulated or local setting, where students have much greater control over the execution environment of the robot. The system was constructed in the summer of 1995, with experiments performed during the fall semester of 1995. Results of the experiments run by the joint undergraduate research groups, as well as the associated educational outcomes, are presented.
A concept referred to as object-resolved telerobotics (ORT) is introduced in this paper, in which the human interface to the master is a hand-held proxy for the object to be manipulated. A twist or wrench applied to the master by the operator becomes a twist or wrench command to the object. The advantages of ORT are: (1) a projected improvement in operator performance resulting from direct command of the object, (2) a reduced amount of information that must be transmitted to and from the remote site, and (3) the opportunity to apply novel forms of shared control and kinesthetic feedback and to use simpler force-reflecting masters. The concept has broad application in human supervision of any semi-autonomous system. In this paper, its use is demonstrated in producing cross-axis kinesthetic feedback to an astronaut or ground controller to accomplish spacecraft (s/c) rendezvous, a task that is normally performed with only visual feedback. In cross-axis kinesthetic feedback, s/c attitude and lateral misalignment are kinesthetically sensed by the operator as a reduction in the programmed velocity in the nominal approach direction. The influence which misalignment has on the programmed velocity increases as the closing distance decreases, safely 'funneling' the s/c into docking position. The master required to accomplish this has mixed unilateral/bilateral functionality, which is demonstrated using a laboratory prototype in conjunction with a computer simulation of s/c rendezvous.
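The abstract does not spell out the funnel law, so the following Python sketch is only a guess at its shape: the commanded closing velocity is the nominal approach rate minus a misalignment penalty whose weight grows as the closing distance shrinks. The function name, the gains, and the inverse-distance weighting are all assumptions made for illustration, not the paper's formulation.

```python
def approach_velocity(v_nominal, lateral_err, attitude_err, distance,
                      k_lat=0.5, k_att=0.5, d_ref=10.0):
    """Hypothetical cross-axis 'funnel' law: the commanded closing speed is
    the nominal approach rate reduced by a misalignment penalty whose
    weight grows as the closing distance shrinks (assumed 1/d shaping).

    v_nominal     nominal closing speed (m/s)
    lateral_err   lateral misalignment (m)
    attitude_err  attitude misalignment (rad)
    distance      remaining closing distance (m)
    """
    tightening = d_ref / max(distance, 1e-3)   # penalty weight rises near dock
    penalty = tightening * (k_lat * abs(lateral_err) + k_att * abs(attitude_err))
    # Never command a negative closing speed; stop and wait for realignment.
    return max(v_nominal - penalty, 0.0)

# Far away, misalignment barely slows the approach; up close it halts it
# until the operator corrects the alignment.
for d in (100.0, 10.0, 1.0):
    print(d, approach_velocity(0.1, lateral_err=0.05, attitude_err=0.02, distance=d))
```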
The use of unmanned free-swimming submersibles can increase both the effectiveness and the safety of undersea operations in deep or hostile waters. Because autonomous submersibles are beyond the state of the art in artificial intelligence and computer vision, remotely operated submersibles must be employed. Due to the inherent time delay between the command and execution of a task (caused by the acoustic data link), conventional teleoperation of the submersible's manipulator arm(s) is a demanding task requiring the operator to maintain an accurate, up-to-date mental model of the remote environment. As a result, conventional teleoperation tends to disorient operators and dramatically decreases their performance. To eliminate these effects, we are developing a Virtual Environment for Undersea Telepresence (VEUTel) for controlling a manipulator arm on a remotely operated submersible. VEUTel addresses the problems associated with low bandwidth by totally immersing the operator in a virtual-environment simulation of the remote environment. VEUTel intercepts operator commands and provides an instantaneous graphical simulation of the effect of each command on the manipulator arm. The integrity of the simulation is ensured by using visual imagery and other sensor data from the remote environment to maintain synchronization between the virtual and remote environments.
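A minimal sketch of the predictive-display idea the abstract describes, reduced to a single joint for brevity (an assumption; VEUTel drives a full manipulator model and synchronizes using imagery and sensor data): commands move the virtual arm instantly, reach the remote arm only after the link delay, and telemetry nudges the virtual state back into sync.

```python
from collections import deque

class PredictiveDisplay:
    """Single-joint sketch of a predictive display over a delayed link.
    All names and gains are illustrative assumptions."""

    def __init__(self, delay_steps=5, blend=0.2):
        self.virtual_q = 0.0                     # virtual joint angle (instant)
        self.remote_q = 0.0                      # true remote joint angle
        self.link = deque([0.0] * delay_steps)   # acoustic-link delay line
        self.blend = blend                       # sync correction gain (assumed)

    def step(self, velocity_cmd, dt=0.1):
        # 1. Instant graphical prediction of the command's effect.
        self.virtual_q += velocity_cmd * dt
        # 2. The command reaches the remote arm only after the link delay.
        self.link.append(velocity_cmd)
        self.remote_q += self.link.popleft() * dt
        # 3. Telemetry corrects the virtual state to keep the simulation
        #    honest (the return-path delay is folded in here for brevity).
        self.virtual_q += self.blend * (self.remote_q - self.virtual_q)
        return self.virtual_q, self.remote_q

pd = PredictiveDisplay()
for _ in range(10):
    v, r = pd.step(velocity_cmd=1.0)
print(v, r)   # virtual state responds instantly; telemetry keeps it from drifting
```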
This paper presents a 3D graphic interface for the inspection of cracks along a dam. The monitoring of concrete dams is restricted by the accessibility of the various parts of the structure. Since the upstream face of a dam is not usually exposed, as in our case at Hydro-Québec, systematic and even ad hoc inspection becomes extremely complex. Piloting an ROV (Remotely Operated Vehicle) underwater is like driving in a snowstorm: the view from the camera offers roughly the visibility a driver would have in one. Sensor fusion has to be performed by the operator, since each sensor is specialized for its own task. Even with a 2D positioning system or sonar scan, the approach to the inspection area is very tedious. A new 3D interface has been developed using augmented reality, exploiting the fact that the position and orientation of the vehicle are known. The point of view of the observer can easily be changed while the ROV is being maneuvered. A shared-memory-based server accesses the position data of the ROV and updates the graphics in real time. The graphic environment can also be used to drive the ROV along computer-generated trajectories. A video card will be added to the Silicon Graphics workstation to display the view from the camera fixed to the ROV. This visual feedback will only be available when the ROV is close enough to the dam. The images will be calibrated, since the position of the camera is known. The operator interface also includes a set of stereoscopic cameras, hydrophonic (sound) feedback, and imaging tools for measuring cracks.
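A minimal sketch of the shared-memory pose server described above, assuming a flat six-double pose layout and the block name rov_pose (both invented for illustration): the positioning process writes the ROV pose into a shared block, and the graphics loop polls it at display rate.

```python
import struct
from multiprocessing import shared_memory

POSE_FMT = "6d"  # x, y, z, roll, pitch, yaw as doubles (assumed layout)
shm = shared_memory.SharedMemory(create=True, size=struct.calcsize(POSE_FMT),
                                 name="rov_pose")

def write_pose(pose):
    """Called by the positioning/sonar process whenever a new fix arrives."""
    struct.pack_into(POSE_FMT, shm.buf, 0, *pose)

def render_frame():
    """Called by the graphics loop; a real system would update the 3D
    scene graph from this pose, here we just report it."""
    x, y, z, roll, pitch, yaw = struct.unpack_from(POSE_FMT, shm.buf, 0)
    print(f"ROV at ({x:.2f}, {y:.2f}, {z:.2f}), rpy=({roll:.2f}, {pitch:.2f}, {yaw:.2f})")

write_pose((12.0, -3.5, 40.2, 0.0, 0.1, 1.57))
render_frame()
shm.close()
shm.unlink()
```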
Techniques for outage-free maintenance of electrical power lines have been developed and used over the years. In these techniques, workers must operate near energized components, with the hazards of electric shock and falls. In this context, a teleoperator system is being developed to increase both the safety of the workers and overall efficiency. It consists of two hydraulically driven master-slave teleoperated manipulators mounted on top of an insulated boom on a truck. The operator commands the manipulators from a cabin on the truck via a pair of master arms, and receives visual feedback through a stereo vision system. A multimedia display, voice recognition and synthesis, stereo vision, and force feedback are some of the features being implemented in order to achieve telepresence for the operator. The paper addresses the implementation of these features in a teleoperation system prepared to perform repair- and maintenance-related tasks in a semi-structured hazardous environment.
In applying VR to teleoperation, the most important requirement is to keep the virtual environment consistent with the real environment for as much of the operating time as possible. If this can be ensured, a good part of practical teleoperation can be performed automatically. Following this idea, we have developed a model-based dynamic calibration algorithm to maintain the consistency of the two environments. First, a model of the real environment is created by moving a camera through the environment, using a multi-position-based stereo vision technique. During rehearsal, a path is planned by the operator for a specific task. Places whose structures complicate teleoperation, such as turning corners and narrow spaces, are designated key positions by the operator, and local landmarks are set there for later dynamic calibration. During practical teleoperation, the robot moves along the planned paths everywhere except at the selected key positions. At the key positions, the 3D position of the robot is calibrated by interactively matching features of the landmarks in the images with their corresponding features in the model. Errors detected between the virtual and real environments are recorded to amend the planned path for the robot's subsequent movements. When the robot is approaching the target, it is difficult for the operator to determine the accurate position of the target through teleoperation alone. We propose and implement a marker-based algorithm for automatic location of the target in such cases. Preliminary experiments have been carried out using a setup consisting of a 4 DOF movable platform with a camera and a laser gun mounted on it. Several small cubes are used as landmarks and placed in a room-like environment containing several polyhedrons as obstacles. The target is a laser-light receiver placed inside a small hole on a planar surface of arbitrary pose. The task is to shoot a laser beam from the laser gun into the receiver. The experimental results were quite satisfactory.
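The following sketch illustrates one plausible form of the path-amendment step, assuming the calibration error detected at a key position is applied as a rigid offset to the remaining waypoints; the paper's actual correction may differ.

```python
import numpy as np

def amend_path(planned, key_index, observed):
    """At a key position, landmark matching yields the robot's true pose.
    The offset between the real pose and the planned (virtual) pose is
    recorded and applied to the remaining waypoints so the virtual
    environment stays consistent with the real one (assumed rigid-shift
    correction for illustration)."""
    planned = np.asarray(planned, dtype=float)
    error = np.asarray(observed, dtype=float) - planned[key_index]  # real minus virtual
    amended = planned.copy()
    amended[key_index:] += error        # shift all waypoints still to come
    return amended, error

waypoints = [(0, 0), (1, 0), (2, 0), (3, 0)]     # planned 2-D path
new_path, err = amend_path(waypoints, key_index=2, observed=(2.1, -0.05))
print(err)        # drift detected at the key position
print(new_path)   # remaining waypoints corrected by that drift
```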
For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator only have to be found once, and they were often found empirically according to the tasks to be performed. In telemanipulation, the range of tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base may even need to move during the execution of a task. At present, the initial position of the teleoperator is usually chosen empirically, which can be sufficient for an easy or repetitive task. Otherwise, the time spent moving the teleoperator support platform has to be taken into account in the execution of the task. Automatic optimization of the position/orientation of the platform, or a better-designed robot configuration, could minimize these movements and save time. This paper presents two algorithms. The first optimizes the position and orientation of a given manipulator (or manipulators) with respect to the environment in which a task has to be executed. The second optimizes the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.
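As a toy illustration of the first algorithm's objective, the sketch below scores candidate base positions by how many digitized task points a planar two-link arm could reach, and keeps the best. The reachability model, link lengths, and grid search are stand-ins chosen for brevity; the paper works from an octree task representation and optimizes numerically.

```python
import numpy as np

L1, L2 = 0.7, 0.5                      # assumed link lengths (m)

def reachable(base, point):
    """A planar 2-link arm reaches an annulus around its base."""
    r = np.linalg.norm(np.asarray(point) - np.asarray(base))
    return abs(L1 - L2) <= r <= (L1 + L2)

def best_base(task_points, candidates):
    """Score each candidate base pose by the number of task points it can
    reach, and return the best (a dexterity measure would refine this)."""
    scores = [sum(reachable(b, p) for p in task_points) for b in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

task = [(1.0, 0.2), (1.1, 0.4), (0.9, 0.6)]          # digitized task points
grid = [(x, y) for x in np.linspace(0, 2, 21) for y in np.linspace(-1, 1, 21)]
base, score = best_base(task, grid)
print(base, score)    # base position reaching the most task points
```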
Numerous path planning and collision avoidance techniques have been proposed in the robotics literature. Global techniques provide optimal paths but do not adapt to changes in the environment. Local techniques provide non-optimal paths but can adapt quickly to dynamic environments and usually require less computing time. One of the most promising techniques in the second category is the potential field approach. Unfortunately, it is well known that local minima may trap the robot away from its goal. Our work aims to overcome this drawback and to give full autonomy to a potential field path planning strategy that ensures collision avoidance. This paper presents a study of path planning and collision avoidance strategies and defines a general framework for implementing a potential field path planning strategy in an unknown and dynamic environment. Our work is concerned with telemanipulation tasks in hazardous environments such as electricity distribution networks. A volumetric model of the scene is built incrementally from the information provided by a dynamic computer vision setup. It is used to help a human operator guide the manipulator without colliding with components of the scene. The impact of various path planning techniques on the behavior of the robot is analyzed, as well as their dependency on the volumetric model. Finally, it is shown that potential field techniques can achieve very good performance, especially when combined with other appropriate tools such as a reliable model of the environment.
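The potential field approach named above has a standard textbook form: an attractive force toward the goal plus a repulsive force that is active within a cutoff distance of each obstacle. The sketch below uses that classic formulation with arbitrary gains, not the paper's tuning.

```python
import numpy as np

def potential_step(q, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, step=0.05):
    """One gradient step of the classic attractive/repulsive potential
    field (gains and cutoff d0 are assumed values)."""
    q, goal = np.asarray(q, float), np.asarray(goal, float)
    force = -k_att * (q - goal)                     # attraction to the goal
    for obs in obstacles:
        diff = q - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                           # repulsion inside cutoff
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    # Note: an obstacle placed symmetrically between start and goal can
    # still trap q in a local minimum, the drawback the paper addresses.
    return q + step * force

q = np.array([0.0, 0.0])
for _ in range(200):
    q = potential_step(q, goal=(5.0, 0.0), obstacles=[(2.5, 0.3)])
print(q)   # ends near the goal, having skirted the obstacle
```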
The paper presents several modules of a computer-vision-assisted robotic system for the maintenance of live electrical power lines. The basic scene of interest is composed of generic components such as a crossarm, a power line, and a porcelain insulator. The system is under the supervision of an operator, who validates each subtask. The system uses a 3D range finder mounted at the end effector of a 6 DOF manipulator to acquire range data on the scene. Since more than one view is required to obtain enough information about the scene, a view integration procedure is applied to merge the data into a single reference frame. A volumetric description of the scene, in this case an octree, is built from the range data. The octree is transformed into an occupancy grid, which is used to avoid collisions between the manipulator and the components of the scene during the line manipulation step. The collision avoidance module uses the occupancy grid to create a discrete electrostatic potential field representing the various goals (e.g. objects of interest) and obstacles in the scene. The algorithm takes the articular limits of the robot into account and uses a redundant manipulator to ensure that the collision avoidance constraints do not compete with the task, which is to reach a given goal with the end effector. A pose determination algorithm, Iterative Closest Point (ICP), is presented. The algorithm computes the pose of the various components of the scene, allowing the robot to manipulate these components safely. The system has been tested on an actual scene. The manipulation was successfully implemented using a synchronized-geometry range finder mounted on a PUMA 760 robot manipulator under the control of Cartool.
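Iterative Closest Point has a well-known point-to-point form, sketched below: alternate nearest-neighbor matching with the SVD (Kabsch) alignment step. This is the textbook algorithm, not the paper's exact implementation, which operates on range-finder data of the scene components.

```python
import numpy as np

def icp(source, target, iters=20):
    """Minimal point-to-point ICP: returns R, t aligning source onto
    target. A production version would subsample and reject outliers."""
    src = np.asarray(source, float).copy()
    tgt = np.asarray(target, float)
    R_total, t_total = np.eye(src.shape[1]), np.zeros(src.shape[1])
    for _ in range(iters):
        # 1. Match each source point to its closest target point.
        d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
        matched = tgt[np.argmin(d, axis=1)]
        # 2. Best rigid transform for these matches (Kabsch/SVD).
        cs, cm = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (matched - cm))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cm - R @ cs
        src = src @ R.T + t               # apply and accumulate
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

pts = np.random.rand(30, 3)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
R, t = icp(pts, pts @ Rz.T + np.array([0.1, -0.2, 0.05]))
print(np.abs(R - Rz).max())   # small: the applied motion is recovered
```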
This paper describes the design, development, and implementation of a telepresence system for hazardous environment applications. Its primary feature is a high-performance active stereo vision system slaved to the motion of the operator's head. To simulate the presence of an operator in a remote, hazardous environment, it is necessary to provide sufficient visual information about that environment, and the operator must be able to interact with it to carry out manipulative tasks. To achieve an enhanced sense of visual perception, we have developed a pan-and-tilt stereo vision system tightly integrated with a head-mounted display. The motion of the operator's head is monitored by a six DOF sensor, which provides the demand signals used to servo-control the active vision system. The result is a compact yet high-performance system employing mechatronic principles, small enough to be mounted on a small mobile platform. We have also developed an open-architecture controller for the dynamic, active vision system, which exhibits the dynamic performance characteristics of the human head-eye system so as to form a natural and intuitive interface. A series of tests has been conducted to establish the system latency and to explore the effectiveness of remote 3D human perception, particularly with regard to manipulation tasks and navigation. The results of these tests are presented.
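A head-slaving servo of this kind can be sketched as a rate-limited proportional loop per axis; the gain and rate limit below are invented values, not the measured system's, and the lag of this loop is precisely the kind of latency the paper's tests quantify.

```python
def slave_pan_tilt(head_pan, head_tilt, cam_pan, cam_tilt,
                   dt=0.01, max_rate=3.0, gain=20.0):
    """One 100 Hz cycle of a head-slaving servo (illustrative proportional
    law with assumed gain and actuator rate limit, in rad and rad/s)."""
    def axis(target, current):
        rate = gain * (target - current)             # proportional demand
        rate = max(-max_rate, min(max_rate, rate))   # actuator rate limit
        return current + rate * dt
    return axis(head_pan, cam_pan), axis(head_tilt, cam_tilt)

cam = (0.0, 0.0)
for _ in range(100):                 # simulate 1 s at 100 Hz
    cam = slave_pan_tilt(0.5, -0.2, *cam)
print(cam)                           # camera converged on the head pose
```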
People with disabilities such as quadriplegia can use mouth-sticks and head-sticks as extension devices to perform desired manipulations. These extensions provide extended proprioception, which allows users to directly feel forces and other perceptual cues, such as texture, present at the tip of the stick. Such devices are effective for two principal reasons: their close contact with the user's tactile and proprioceptive sensing abilities, and the fact that they tend to be lightweight and very stiff and can thus convey tactile and kinesthetic information with high bandwidth. Unfortunately, traditional mouth-sticks and head-sticks are limited in workspace and in the mechanical power that can be transferred, because of user mobility and strength limitations. We describe an alternative implementation: a virtual head-stick, a head-controlled bilateral force-reflecting telerobot. In this system the end effector of the slave robot moves as if it were at the tip of an imaginary extension of the user's head. The design goal is for the system to have the same intuitive operation and extended proprioception as a regular mouth-stick effector, but with an augmented workspace volume and greater mechanical power. Input is through a specially modified six DOF master robot (a PerForce™ hand controller) whose joints can be back-driven to apply forces at the user's head. The manipulation tasks in the environment are performed by a six DOF slave robot (the Zebra-ZERO™) with a built-in force sensor. We describe the prototype hardware/software implementation of the system, the control system design, safety and disability issues, and initial evaluation tasks.
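One control cycle of the virtual head-stick can be sketched as a forward position path and a reflected force path; the scale factors below are assumptions chosen only to show the structure (workspace and power amplification forward, attenuated force reflection back), not the paper's tuned values.

```python
import numpy as np

def headstick_cycle(head_pose, anchor, contact_force,
                    scale=3.0, force_gain=0.5):
    """One bilateral cycle of a virtual head-stick (skeleton form):
    the slave tip is commanded as if rigidly attached to an imaginary
    stick extending from the head, and contact forces at the slave are
    reflected back to the head through the back-drivable master."""
    head_pose = np.asarray(head_pose, float)
    anchor = np.asarray(anchor, float)
    # Forward path: head motion, amplified, becomes the slave tip setpoint.
    slave_setpoint = anchor + scale * (head_pose - anchor)
    # Return path: the slave's force-sensor reading, scaled down, is
    # applied to the user's head by driving the master's joints.
    master_force = force_gain * np.asarray(contact_force, float)
    return slave_setpoint, master_force

tip, f = headstick_cycle(head_pose=(0.02, 0.0, 0.01), anchor=(0, 0, 0),
                         contact_force=(0.0, 0.0, -2.0))
print(tip, f)   # 2 cm of head motion maps to 6 cm at the tip; 2 N reflects as 1 N
```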
The goal of our research is the development of an assistive telerobotic system which integrates human-computer interaction with reactive planning. The system is intended to operate in an unstructured environment rather than in a structured workcell, allowing the user considerable freedom and flexibility in terms of control and operating ease. Our approach is based on the assumption that while the user's world is unstructured, the objects within it are reasonably predictable. We reflect this arrangement by providing a means of determining the superquadric shape representation of the scene, together with an object-oriented knowledge base and reactive planner which superimposes information about common objects in the world. A multimodal user interface interprets deictic gesture and speech inputs with the goal of identifying the object that is of interest to the user. The multimodal interface performs a critical disambiguation function by binding the spoken words to a locus in the physical workspace. The spoken input also supplants the need for general-purpose object recognition: 3D shape information is augmented by the user's spoken words, which may also invoke the appropriate inheritance of object properties through the adopted hierarchical object-oriented representation scheme. The underlying planning mechanism results in a reactive, intelligent, and 'instructible' telerobot. We describe our approach to an intelligent assistive telerobotic system (MUSIIC) for unstructured environments: speech and deictic gesture control integrated with a knowledge-driven reactive planner and a stereo vision system.
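A toy version of the disambiguation step, under an assumed data model (a class label from speech, a pointing ray from the deictic gesture, object centroids from the vision system; none of these structures come from the paper): binding selects the object of the spoken class closest to the pointing ray.

```python
import numpy as np

objects = [  # (class label from the speech vocabulary, 3-D centroid)
    ("cup",  np.array([0.4, 0.1, 0.0])),
    ("cup",  np.array([0.1, 0.5, 0.0])),
    ("book", np.array([0.4, 0.12, 0.0])),
]

def dist_to_ray(point, origin, direction):
    """Perpendicular distance from a point to the pointing ray."""
    d = direction / np.linalg.norm(direction)
    v = point - origin
    return np.linalg.norm(v - (v @ d) * d)

def resolve(spoken_class, ray_origin, ray_dir):
    """Bind the spoken word to a locus: among objects matching the spoken
    class, pick the one nearest the gesture ray."""
    candidates = [(dist_to_ray(c, ray_origin, ray_dir), c)
                  for label, c in objects if label == spoken_class]
    return min(candidates, key=lambda t: t[0])[1] if candidates else None

# "Pick up that cup" while pointing toward (0.4, 0.1, 0): speech rules out
# the book even though it lies almost on the same ray.
print(resolve("cup", np.array([0.0, 0.0, 0.3]), np.array([1.0, 0.25, -0.7])))
```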
Remote manually operated tasks, such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or cursor). When man-machine movements are distorted by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel filtering framework in which digital equalizers are optimally designed from pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: (1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination, and (2) movement signals exhibit highly ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. A new performance indicator is introduced, the F-MSEd, and the optimal equalizer according to this new criterion is developed. The ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with both a person with a tremor disability and a vibration-inducing device show significant results.
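For orientation, the sketch below designs a least-squares FIR equalizer from a tracking record and stabilizes the ill-conditioned autocorrelation matrix with a ridge term. Both choices are standard stand-ins: ridge regularization is not the paper's pulled-optimization, and the mean-squared-error criterion is exactly the one the paper argues is inefficient and replaces with the F-MSEd.

```python
import numpy as np

def design_equalizer(noisy, intended, taps=50, ridge=1e-2):
    """Least-squares FIR equalizer from a pursuit-tracking record: find
    h minimizing ||intended - h * noisy||^2. The ridge term keeps the
    ill-conditioned autocorrelation matrix invertible (a common stand-in,
    NOT the paper's pulled-optimization or F-MSEd criterion)."""
    X = np.column_stack([np.roll(noisy, k) for k in range(taps)])
    X[:taps] = 0.0                           # zero out wrapped-around samples
    R = X.T @ X + ridge * np.eye(taps)       # regularized autocorrelation
    return np.linalg.solve(R, X.T @ intended)

t = np.linspace(0, 4, 800)                               # 200 Hz record
intended = np.sin(2 * np.pi * 0.5 * t)                   # slow pursuit motion
tremor = 0.3 * np.sin(2 * np.pi * 8.0 * t)               # 8 Hz tremor
h = design_equalizer(intended + tremor, intended)
cleaned = np.convolve(intended + tremor, h)[:len(t)]
# Tracking error before vs. after equalization: the filter attenuates
# the tremor band while mostly preserving the intended motion.
print(np.std(tremor), np.std(cleaned - intended))
```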
Force-reflective controllers can be divided into two classes, active and passive, with active being the more common. Active force-feedback controllers are prone to self-actuation, which can generate unintended commands and may injure the user. A six-degree-of-freedom positional input device was designed and constructed that provides force feedback passively through the use of six magnetic hysteresis brakes. Special hardware and control strategies were developed to account for some of the limitations of a passive system and for the characteristics of hysteresis brakes. The force-feedback input device has been interfaced to a six-degree-of-freedom robot to perform a variety of tasks. Initial research was conducted with a peg-in-hole task; future research is to include contour-following and bead-on-wire tests. Initial results indicated that force feedback may only be an improvement in situations where visual cues are not clear, and may actually be a hindrance when a clear line of sight exists.
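The defining constraint of the passive approach is that a brake can dissipate energy but never add it, which is why it cannot self-actuate. Reduced to one axis, the command logic looks roughly like the sketch below; the proportional engagement rule and torque limit are assumptions, not the device's control strategy.

```python
def passive_brake_command(desired_force, velocity, max_brake=5.0):
    """Core rule of passive force feedback on one axis: a hysteresis
    brake may resist the user's motion but can never push. If the
    desired feedback force opposes the current velocity, engage the
    brake proportionally; otherwise the best a passive device can do
    is release, and this inherent limitation is what the special
    control strategies must work around."""
    if velocity * desired_force < 0:                 # force opposes motion
        return min(abs(desired_force), max_brake)    # brake torque magnitude
    return 0.0                                       # cannot actively push

print(passive_brake_command(desired_force=-3.0, velocity=+0.2))  # 3.0: resists
print(passive_brake_command(desired_force=+3.0, velocity=+0.2))  # 0.0: cannot push
```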
The objective of this study is to compare human performance in executing tasks with a helmet-mounted display interface using different visual cues for depth perception. The study involves two experiments: the first with direct viewing, the second with a helmet-mounted display (HMD). These experiments are designed to assess the subject's stereoacuity in an alignment task involving two rods, one mobile, the other fixed. In both experiments, the subject has no time constraints and simply has to perform the task as well as possible. The dependent variable is the depth positioning error. Ten subjects with a stereoacuity of 20 arc-seconds or less and 20/20 visual acuity (Snellen test), corrected or uncorrected, took part in this study. In all experiments, the subject was exposed to four viewing conditions in direct view or HMD: mono stationary, stereo stationary, mono with motion parallax, and stereo with motion parallax. The independent variables are the presence of stereo (with vs. without), the presence of motion parallax (with vs. without), and the session (session 1 or session 2). A 2 × 2 × 2 ANOVA is used for statistical analysis.
Dynamic multi-user interactions in a shared networked virtual environment suffer from abrupt state transitions caused by communication delays arising from network latency: an action by one user becomes apparent to another user only after the communication delay. This results in a temporal suspension of the environment for the duration of the delay (the virtual world 'hangs'), followed by an abrupt jump to make up for the time lost to the delay, so that the current state of the virtual world is displayed. These discontinuities appear unnatural and disconcerting to the users. This paper proposes a novel method of warping the times associated with users to ensure that each user views a continuous version of the virtual world, with no hangs or jumps despite other users' interactions. Objects passed between users within the environment are parameterized not by real time but by a virtual local time, generated by continuously warping real time. This virtual time periodically realigns itself with real time as the virtual environment evolves. The concept of a local user dynamically warping local time is also introduced. As a result, users are shielded from viewing discontinuities within their virtual worlds, enhancing the realism of the virtual environment.
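A piecewise-linear caricature of the time-warp idea (the paper warps time continuously; the function name and rate are assumptions): an object arriving with delay-old state starts its virtual clock behind real time and runs it slightly fast until the two realign, so the local user sees continuous, briefly slowed motion instead of a hang followed by a jump.

```python
def virtual_time(t, t_arrive, delay, rate=1.25):
    """Virtual local time for an object handed over by a remote user.
    The object arrives at t_arrive carrying state that is `delay`
    seconds old; its virtual clock starts `delay` behind and runs at
    rate > 1 until it catches up with real time."""
    t_sync = t_arrive + delay / (rate - 1.0)   # instant the warp ends
    if t < t_arrive:
        return None                            # object not yet visible here
    if t < t_sync:
        return t_arrive - delay + rate * (t - t_arrive)   # catching up
    return t                                   # realigned with real time

# Virtual time advances continuously from 0.7 to 2.2, with no jump.
for t in (1.0, 1.4, 1.8, 2.2, 2.6):
    print(t, virtual_time(t, t_arrive=1.0, delay=0.3, rate=1.25))
```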
This paper deals with the use of an arm exoskeleton as a haptic interface to realize the simplest non-trivial kind of force-based interaction that can help virtual environment (VE) operators complete exploratory tasks: the interaction of the operator's hand with rigid bodies whose positions in the VE are fixed. Our interest was to record operators' behaviors and impressions and to determine the set of interactions which can be reproduced in a satisfactory way with our system. The experimental results show the influence of the system sampling time and of the force generation model on the realism of contacting and following virtual surfaces.
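A force generation model for a fixed rigid body is commonly a spring-damper penalty law, and that standard form is sketched below; the stiffness and damping values are arbitrary, not the paper's, and the comments note the sampling-time interplay the experiments examine.

```python
def wall_force(hand_pos, hand_vel=0.0, wall_pos=0.0, k=800.0, b=5.0):
    """Penalty-style rendering of a fixed rigid surface along one axis
    (free space at hand_pos > wall_pos). Penetration produces a spring
    restoring force, damping removes energy. At a given sampling time,
    too high a stiffness k makes the rendered wall feel active or
    buzzy, which is the sampling-time/realism trade-off studied here."""
    depth = wall_pos - hand_pos          # penetration depth (>0 inside wall)
    if depth <= 0.0:
        return 0.0                       # no force in free space
    return k * depth - b * hand_vel      # spring push minus damping

print(wall_force(hand_pos=-0.005))       # 4 N resisting 5 mm of penetration
print(wall_force(hand_pos=+0.010))       # 0 N just outside the surface
```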
Teleoperation of a complex environment from a remote place requires careful attention from the operator. In environments such as space stations, where operation errors can cause heavy damage, operators need to be well trained and experienced before undertaking practical teleoperation. Robot vision can help greatly in recognizing and locating objects in such environments, and in that context feature extraction is very important: real-time processing depends on quick and correct feature extraction. If we place a man-made marker on an object, feature extraction becomes much easier than directly extracting natural features of the object. We have developed a practical and robust method for object recognition and location using specially designed markers. The circle, rectangle, and triangle are chosen as three primitives, and a marker is formed by combining any two of these primitives. Primitive combination is used because it both increases the number of markers without increasing the number of primitives and distinguishes markers from complex backgrounds. Edge information is mainly utilized in the recognition process, because edges are relatively long and stable compared with corners. A marker is recognized by first recognizing each primitive in it and then verifying their positional relation. After recognition, the marker is back-projected and its 3D pose is calculated by solving spatial plane equations, with the aid of the parameters of each primitive in the marker (such as the diameter of a circle) and the equations of their images in the image plane. Extensive experiments have been carried out to verify the effectiveness of the proposed method, and quite good results are obtained in both indoor and outdoor environments, for both recognition and location of objects with markers.
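A toy version of the two-primitive scheme, with an invented code table and a simple proximity test standing in for the paper's positional-relation check: three primitives yield nine ordered pairs, so the marker vocabulary grows without adding primitives, and requiring a nearby valid pair rejects lone shapes in the background.

```python
# Invented marker code table: every ordered pair of primitives is a code.
PRIMITIVES = ("circle", "rectangle", "triangle")
MARKER_CODES = {(a, b): i for i, (a, b) in enumerate(
    (a, b) for a in PRIMITIVES for b in PRIMITIVES)}

def recognize(detections, max_gap=50.0):
    """detections: list of (shape, (x, y)) produced by edge-based
    primitive detectors. Returns (code, pair) for the first two
    primitives close enough to form a marker, else None. The distance
    test is an assumed stand-in for the real positional-relation check."""
    for i, (s1, p1) in enumerate(detections):
        for s2, p2 in detections[i + 1:]:
            gap = ((p1[0] - p2[0])**2 + (p1[1] - p2[1])**2) ** 0.5
            if gap <= max_gap:                 # plausible marker geometry
                return MARKER_CODES[(s1, s2)], (s1, s2)
    return None

scene = [("triangle", (400.0, 310.0)),         # background clutter, far away
         ("circle", (120.0, 80.0)), ("rectangle", (150.0, 85.0))]
print(recognize(scene))    # (1, ('circle', 'rectangle')): the adjacent pair
```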