The goal of Earth and planetary exploration through telescience, telerobotics, and telepresence is to broaden our understanding of the universe. Because education is a proven avenue for disseminating information, the goal of the EventScope project at Carnegie Mellon University is to merge educational software with a telescience/telerobotics/telepresence mission interface for use in classroom settings, providing more direct connections to new information. Answering individual scientific questions requires the ability to interact with a mission on a first-person level: for instance, a student can glean a wealth of information by remotely exploring the contours of a particular rock formation. A limitation on scientific inquiry using robotics is that a physical machine can be in only one place at one time. EventScope addresses this scalability issue by enabling dynamic interaction with mission information through tools that allow users to navigate independently of the spatio-temporal course of a robotic expedition. Further, interface communication tools allow science educators and scientists to mark representations of remote sites to convey their own experiences to students on a mass scale.
In-situ observation and exploration of the deep-sea environment present considerable challenges and hazards. Teleoperation of remotely piloted vehicles (RPVs) offers an opportunity for continuous telepresence; however, such missions are energy-intensive, both for propulsion and for illumination. Tethered vehicles are limited both in range and by the need for a weather-dependent surface support ship. An approach is presented that utilizes a shore-based power-line/fiber-optic cable connected to a deep-sea recharge site. Free-flying RPVs periodically recharge their batteries and send video and data back to the surface. The recharge site can be relocated to expand the exploration area, and the entire system remains underwater for the duration of the mission. The Hudson submarine canyon provides an ideal test site due to its proximity to a large user population (New York City) and its geological and biological diversity. Alternate test sites and vehicle design issues are detailed. An Internet access-fee structure for general public participation is discussed, and the possibility of an economically self-supporting venture, when conducted on a sufficiently large scale, is also considered.
This paper describes a proposed collaborative research effort between the University of Illinois at Chicago and Roger Williams University in Bristol, Rhode Island. The goal of this effort is to develop a conceptual framework and an experimental test bed for the analysis and design of systems through which a human can have haptic interaction with a geographically remote environment. The paper discusses the initial work on an experimental test bed for a Haptic Information Communication System (HICS) at Roger Williams University. We present design considerations for a system capable of remote haptic interaction, along with initial conceptual designs of the experimental platform.
Precision work and the manipulation of micro-objects are demanding jobs for operators, both mentally and physically. A master-slave system is well suited to executing such jobs smoothly and naturally, because both position and force can be scaled up or down between the two sides. In this study we develop a master-slave system in which the slave robot is very small and is levitated by magnetic forces. Unlike ordinary master-slave systems, the levitated robot receives no contact forces from its surroundings. We therefore introduce a method based on an impedance model for constructing the master-slave system, and we confirmed the effectiveness of the positioning control algorithm through experiments.
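As a rough illustration of the control scheme the abstract names, the sketch below runs a virtual impedance model between a scaled master command and the levitated slave's position. All parameter values, the scaling factor, and the Euler update are assumptions for the example, not the authors' design.

```python
import numpy as np

# Illustrative impedance parameters for the levitated slave; the paper's
# actual values are not given in the abstract.
M, B, K = 0.01, 0.5, 20.0   # virtual mass (kg), damping (N*s/m), stiffness (N/m)
SCALE = 1e-3                # position scaling from master down to the micro slave
DT = 1e-3                   # 1 kHz control loop

def impedance_step(x, v, x_ref, f_ext=0.0):
    """One Euler step of M*e'' + B*e' + K*e = f_ext, with e = x - x_ref.

    Because the slave is magnetically levitated, f_ext contains no
    unmodelled contact forces from supports or cables."""
    a = (f_ext - B * v - K * (x - x_ref)) / M
    v = v + a * DT
    x = x + v * DT
    return x, v

# Usage: the slave tracks the scaled-down master trajectory.
x, v = 0.0, 0.0
for master_pos in np.sin(np.linspace(0.0, 2.0 * np.pi, 1000)):
    x, v = impedance_step(x, v, SCALE * master_pos)
```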
Force feedback is being used as an interface between humans and material handling equipment to provide an intuitive method for controlling large and bulky payloads. Powered actuation in the lift-assist device compensates for the inertial characteristics of the manipulator and the payload to provide effortless control and handling of manufacturing parts, components, and assemblies. The use of these Intelligent Assist Devices (IADs) is being explored to prevent worker injury, enhance material handling performance, and increase productivity in the workplace. The IAD also provides the capability to shape and control motion in the workspace during routine operations: virtual barriers can be created to protect fixed objects, and regions can be programmed that attract the work piece to a certain position and orientation. However, the robot remains under the complete control of the human operator, with the trajectory determined and commanded according to the operator's judgment of the given task. In many cases, the IAD is built in a configuration that has singular points inside the workspace. These singularities can cause problems when unstructured trajectory commands from the human produce interaction between the IAD and the virtual walls and fixtures at positions close to them. The research presented here explores the stability effects of operator-driven interactions between the powered manipulator and the virtual surfaces. Because the operator's decisions determine the work-piece path in real time, manipulator singularities that coincide with the virtual surfaces raise stability issues in the performance around those singularities. We examine these issues in the context of a particular IAD configuration and present analytic results for the performance and stability of these systems in response to real-time trajectory modification by the human operator.
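To make the interaction concrete, here is a minimal sketch of a virtual wall rendered through a planar two-link arm's Jacobian transpose, with the smallest singular value used to flag singularity proximity. The arm model, wall location, and stiffness roll-off heuristic are all illustrative assumptions; the paper's analysis is not reproduced here.

```python
import numpy as np

# A 2-link planar arm stands in for the IAD; link lengths and the wall
# location are assumptions for the sketch.
L1, L2 = 1.0, 0.8
K_WALL = 2000.0      # virtual-wall stiffness (N/m)
SIGMA_MIN = 0.05     # manipulability threshold flagging singularity proximity

def jacobian(q1, q2):
    """Planar 2R Jacobian mapping joint velocities to tip velocity."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def wall_torques(q1, q2, wall_x=1.5):
    """Joint torques rendering a virtual wall at x = wall_x."""
    tip = np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                    L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])
    pen = tip[0] - wall_x
    f = np.array([-K_WALL * pen, 0.0]) if pen > 0.0 else np.zeros(2)
    # The smallest singular value of J measures distance to a singularity;
    # near it, wall forces map into ill-conditioned joint torques.
    J = jacobian(q1, q2)
    sigma = np.linalg.svd(J, compute_uv=False).min()
    if sigma < SIGMA_MIN:
        f = f * (sigma / SIGMA_MIN)   # crude stiffness roll-off (an assumption)
    return J.T @ f

print(wall_torques(0.1, 0.05))   # nearly outstretched arm with the tip past the wall
```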
This paper presents the design and testing of a multi-channel vibrotactile display composed of a cylindrical handle with four embedded vibrating elements driven by piezoelectric beams. The experimental goal of the paper is to analyze the performance of the device during a teleoperated, force-controlled task. As a test bed, a teleoperator system composed of two PHANToM haptic devices is used to trace a rectangular path while the operator attempts to maintain a constant force at the remote manipulator's tip. Four sensory modalities are compared: visual feedback alone, and visual feedback combined with vibration, with force feedback, and with force feedback plus vibration. Comparisons among these four modes are presented in terms of mean force error. Results show that force feedback combined with vibration provides the best feedback for the task. They also indicate that the vibrotactile device provides a clear benefit in the intended application, reducing the mean force error by 35 percent compared to visual feedback alone.
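The error metric behind such comparisons can be stated in a few lines. The sketch below computes mean absolute force error per feedback mode; the trial values are invented placeholders that show the shape of the computation, not the paper's data.

```python
import numpy as np

def mean_force_error(forces, f_target):
    """Mean absolute deviation of measured tip forces from the target force."""
    return np.mean(np.abs(np.asarray(forces) - f_target))

# Hypothetical per-trial tip forces (N) for the four modalities; placeholder
# numbers only, chosen to mirror the qualitative ordering in the abstract.
trials = {
    "vision only":            [1.4, 0.6, 1.3, 0.7],
    "vision + vibration":     [1.2, 0.8, 1.1, 0.9],
    "vision + force":         [0.9, 1.1, 0.95, 1.05],
    "vision + force + vibr.": [0.98, 1.03, 0.99, 1.02],
}
for mode, forces in trials.items():
    print(f"{mode:24s} mean |error| = {mean_force_error(forces, 1.0):.3f} N")
```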
We are interested in finding out whether haptic interfaces will be useful in portable and hand-held devices. Such systems will have severe constraints on force output, so our first step is to investigate the lower limits at which haptic effects can be perceived. In this paper we report on experiments studying the effects of varying the amplitude, size, shape, and pulse duration of a haptic feature. Using a specific haptic device, we measure the smallest detectable haptic effects, with active exploration of saw-tooth-shaped icons sized 3, 4, and 5 mm, a sine-shaped icon 5 mm wide, and static pulses 50, 100, and 150 ms in width. Smooth-shaped icons resulted in a detection threshold of approximately 55 mN, almost twice that of saw-tooth-shaped icons, which had a threshold of 31 mN.
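For readers unfamiliar with how such thresholds are typically estimated, below is a sketch of a simple one-up/one-down adaptive staircase run against a simulated observer whose true threshold sits near the 31 mN figure. The procedure and all of its parameters are assumptions for illustration; the abstract does not state which psychophysical method the authors used.

```python
import random

def staircase(detects, amp=60.0, step=4.0, n_reversals=8):
    """1-up/1-down staircase: lower the amplitude after a detection,
    raise it after a miss, and average the reversal amplitudes."""
    last, revs = None, []
    while len(revs) < n_reversals:
        hit = detects(amp)
        if last is not None and hit != last:
            revs.append(amp)          # response flipped: record a reversal
        last = hit
        amp += -step if hit else step
        amp = max(amp, step)          # keep the stimulus positive
    return sum(revs) / len(revs)

# Simulated observer: a logistic psychometric function centred at 31 mN.
observer = lambda a: random.random() < 1.0 / (1.0 + 2.0 ** ((31.0 - a) / 4.0))
print(f"estimated threshold: {staircase(observer):.1f} mN")
```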
In teleoperation, it is very important to have close cooperation between the human operator and the control mechanism on the remote manipulator. This cooperation becomes especially crucial when the teleoperation system is used to execute compliance tasks, in which simultaneous control of both position and force may be required and contact with the environment is inevitable. In this paper, we focus on the issues of planning, execution, and coordination between the human operator and the remote controller in teleoperating a compliance task. We are also interested in how the intelligence in planning and control should be distributed between them according to the specific characteristics of a given compliance task. For this study, a VR-based (virtual reality) telerobotic system for compliance tasks was developed. The system provides both haptic and visual information and possesses a remote intelligent controller capable of compliance control. Experiments based on various types of compliance tasks are performed for the investigation.
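One common way to realize compliance control of the kind the abstract refers to is a hybrid scheme: position servoing on free axes, force servoing on the contact axis. The sketch below shows that split; the abstract does not specify the authors' controller, so the gains and axis assignment are illustrative assumptions.

```python
import numpy as np

KP = 50.0      # position gain (1/s), illustrative
KF = 0.002     # force gain (m/(N*s)), illustrative
DT = 1e-3      # control period (s)

def compliance_step(x, x_des, f_meas, f_des, force_axis=2):
    """Advance the commanded Cartesian pose one step: position servo on
    the free axes, force servo on the contact axis (e.g. pressing down
    on a surface while sliding along it)."""
    v = KP * (x_des - x)                   # position control everywhere
    v[force_axis] = KF * (f_des - f_meas)  # force control on the contact axis
    return x + v * DT

# Usage: slide toward (0.1, 0.05) while regulating a 5 N contact force.
x = np.zeros(3)
for _ in range(1000):
    x = compliance_step(x, np.array([0.1, 0.05, 0.0]), f_meas=4.2, f_des=5.0)
print(x)
```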
This paper presents the design and implementation of a force-feedback hand master called AirGlove for virtual reality applications. The device uses six air nozzles mounted on a glove in a Cartesian coordinate frame to apply a point force to the user's hand. At any given time, up to three nozzles may exhaust air jets. The magnitude and direction of the applied force are controlled by changing the flow rates of the air jets and by activating different nozzles. Experimental results indicate that the device's air-jet controllers can maintain air flow rates in spite of disturbances, and that the AirGlove can improve the realism of a virtual reality simulation by allowing the weight of virtual objects to be felt by the user.
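The nozzle-selection logic follows directly from the geometry: with one nozzle pair per Cartesian axis, a desired force needs at most one nozzle per axis, hence at most three active jets. The sketch below shows that mapping under an assumed linear jet model; the constant and the naming are illustrative, not figures from the paper.

```python
import numpy as np

def nozzle_commands(f_des, k_jet=0.5):
    """Map a desired force on the hand to flow rates on at most three of
    the six axis-aligned nozzles.

    A jet exhausting along -x thrusts the hand along +x, so the nozzle
    opposing each force component fires. k_jet (N per unit flow rate) is
    an assumed linear jet model."""
    cmds = {}
    for axis, comp in zip("xyz", np.asarray(f_des, dtype=float)):
        if abs(comp) > 1e-9:
            nozzle = ("-" if comp > 0 else "+") + axis   # exhaust direction
            cmds[nozzle] = abs(comp) / k_jet
    return cmds

# Simulating the weight of a 200 g virtual object plus a slight sideways push:
print(nozzle_commands([0.3, -0.2 * 9.81, 0.0]))   # e.g. {'-x': 0.6, '+y': ~3.9}
```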
Certain minimally invasive surgical procedures involve the treatment of highly precise target locations within deformable tissues. While preoperative MRI and CT models can be used for surgical planning, they provide only coarse guidance during surgery due to their limited resolution and to tissue deformation. Ultrasound imaging is a promising means of obtaining real-time intraoperative data for target localization, and it is particularly well suited to minimally invasive surgery due to its portability, speed, and safety. This paper presents a system in which ultrasound images are used to guide a manipulator to a surgical site. Electromagnetic tracking of the ultrasound probe is used to orient the images, which are then segmented in real time to determine target locations. Finally, the target coordinates are used to produce control inputs that drive the manipulator to the target site. The potential of the approach is demonstrated experimentally using a manipulator arm, a phantom target, and a commercial ultrasound machine.
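The pipeline decomposes into three steps: segment a target in the image, lift it into world coordinates via the tracked probe pose, and generate a control input toward it. A minimal sketch follows; the threshold segmentation, pixel scale, and proportional controller are stand-in assumptions for the paper's actual methods.

```python
import numpy as np

def segment_centroid(image, thresh=128):
    """Naive intensity-threshold segmentation returning the target's
    pixel centroid (a stand-in for the paper's real-time segmentation)."""
    ys, xs = np.nonzero(image > thresh)
    return np.array([xs.mean(), ys.mean()])

def image_to_world(probe_pose, px, mm_per_px=0.2):
    """Lift a pixel target into world coordinates using the
    electromagnetically tracked probe pose (4x4 homogeneous matrix).
    The image plane is taken as the probe frame's z = 0 plane."""
    p = np.array([px[0] * mm_per_px, px[1] * mm_per_px, 0.0, 1.0])
    return (probe_pose @ p)[:3]

def velocity_command(target_w, tip_w, gain=1.0):
    """Proportional Cartesian velocity input toward the segmented target."""
    return gain * (target_w - tip_w)

# Usage with a synthetic image and an identity probe pose for the demo:
img = np.zeros((480, 640)); img[200:220, 300:330] = 255.0
target = image_to_world(np.eye(4), segment_centroid(img))
print(velocity_command(target, tip_w=np.zeros(3)))
```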
Manual assembly of minute parts is currently done using simple devices such as tweezers and magnifying glasses, so the operator requires a great deal of concentration for successful assembly. Teleoperated micro-assembly systems are a promising means of overcoming this scaling barrier. However, most of today's telepresence systems are based on proprietary, one-of-a-kind solutions. Frameworks that supply the basic functions of a telepresence system, e.g. establishing flexible communication links that depend on bandwidth requirements or synchronizing distributed components, are not currently available, so large amounts of time and money have to be invested to create task-specific teleoperated micro-assembly systems from scratch. For this reason, an object-oriented framework for telepresence systems, based on CORBA as a common middleware, was developed at the Institute for Machine Tools and Industrial Management (iwb). The framework follows a distributed architectural concept and is realized in C++. External hardware components such as haptic, video, or sensor devices are coupled to the system by means of defined software interfaces. Here, the special requirements of teleoperation systems have to be considered, e.g. dynamic parameter settings for sensors during operation. Consequently, an architectural concept based on logical sensors has been developed to achieve maximum flexibility and to enable a task-oriented integration of hardware components.
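The logical-sensor idea can be illustrated with a small interface sketch. The framework itself is C++ over CORBA; the Python below only mirrors the concept, and every name in it is illustrative rather than part of the iwb framework's API.

```python
from abc import ABC, abstractmethod

class LogicalSensor(ABC):
    """Hardware-independent sensor facade: clients see one uniform
    interface, while device parameters stay adjustable at runtime."""

    @abstractmethod
    def read(self) -> bytes: ...

    @abstractmethod
    def set_parameter(self, name: str, value) -> None: ...

class CameraSensor(LogicalSensor):
    """Concrete device behind the facade (stubbed for the sketch)."""
    def __init__(self, device_id=0):
        self.device_id = device_id
        self.params = {"exposure_ms": 10}

    def read(self) -> bytes:
        return b"\x00" * (640 * 480)      # stub frame

    def set_parameter(self, name, value):
        self.params[name] = value         # dynamic re-parameterization

cam = CameraSensor()
cam.set_parameter("exposure_ms", 25)      # changed during operation
frame = cam.read()
```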
A computer-based algorithm has been developed that uses preoperative images to provide a surgeon with a list of feasible port triplets, ranked according to tool dexterity and endoscopic view quality at each surgical site involved in a procedure. A computer simulation allows the surgeon to select from among the proposed port locations. The procedure selected for the development of the system is the coronary artery bypass graft (CABG). In this procedure, the internal mammary artery (IMA) is mobilized from the interior chest wall, and one end is attached to the coronary arteries to provide a new blood supply for the heart. Approximately 10-20 cm of the artery is dissected free using blunt dissection and a harmonic scalpel or electrocautery. At present, the port placement system is being evaluated in clinical trials.
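The ranking step can be sketched as a search over candidate triplets scored at every surgical site. The scoring callables, the weighting, and the worst-site aggregation below are assumptions for illustration; the paper's actual dexterity and view-quality metrics are derived from the preoperative images.

```python
from itertools import combinations

def rank_port_triplets(ports, sites, dexterity, view_quality, w=0.5):
    """Rank {two tool ports + one camera port} triplets over all sites.

    dexterity(port, site) and view_quality(port, site) are assumed
    callables returning scores in [0, 1]. A triplet is scored by its
    worst site so that no surgical site is starved."""
    ranked = []
    for tools in combinations(ports, 2):
        for cam in ports:
            if cam in tools:
                continue
            score = min(w * min(dexterity(p, s) for p in tools)
                        + (1.0 - w) * view_quality(cam, s)
                        for s in sites)
            ranked.append((score, tools, cam))
    return sorted(ranked, reverse=True)

# Toy usage with made-up scoring functions:
ports = ["P1", "P2", "P3", "P4"]
sites = ["anastomosis", "IMA_takedown"]
dex = lambda p, s: 0.9 if p in ("P1", "P2") else 0.5
view = lambda p, s: 0.8 if p == "P3" else 0.4
print(rank_port_triplets(ports, sites, dex, view)[0])
```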
Robot arms are widely used in industry, and many of them require a remote human operator to accomplish their tasks successfully. The operator knows what is happening with the robot arm through sensors; this information has to reach the user, and if it consists of video, the system needs high bandwidth to deliver it in real time. The present system uses simulation feedback instead of video: this type of feedback conveys as much information as video at a much lower bandwidth. The simulation is based on the Virtual Reality Modeling Language (VRML), which is used to model a robot arm that reproduces the movements of the real robot. This method of feedback has the advantage of requiring little information while giving the user a realistic view of the system. The proposed system lets the user move the robot arm along different point-to-point trajectories with different movement options. The aim of this kind of laboratory is to facilitate access for students and professionals in the field of robotics. The system is used for teaching robotics to university students; it improves their training by giving them access to a real robot, which would be impossible for universities to afford if each student needed a dedicated robot for practice. This paper presents a remote laboratory approach for experimentation with a real robot using web communication techniques.
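The bandwidth argument is easy to make concrete. The sketch below packs a six-joint state update and compares its data rate at an assumed 20 Hz with typical compressed video; the figures are illustrative back-of-the-envelope values, not measurements from the paper.

```python
import struct

# State feedback vs. video, in miniature: six joint angles packed as
# little-endian floats and streamed at an assumed 20 Hz update rate.
joints = [0.12, -0.54, 1.05, 0.00, 0.78, -0.33]   # radians
packet = struct.pack("<6f", *joints)               # 24 bytes per update
print(len(packet) * 20, "bytes/s for state feedback")   # 480 bytes/s
# Even heavily compressed video needs tens of kilobytes per second,
# roughly two orders of magnitude more than the VRML model's state stream.
```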
In 2004, the European COLUMBUS module is to be attached to the International Space Station. On the way to the successful planning, deployment, and operation of the module, computer-generated and animated models are being used to optimize performance. Under contract to the German Space Agency (DLR), it has become IRF's task to provide a Projective Virtual Reality system: a virtual world, built after the planned layout of the COLUMBUS module, that lets astronauts and experimenters practice operational procedures and the handling of experiments. The key features of the system currently being realized comprise distributed multi-user access to the virtual lab and the visualization of real-world experiment data. Because the virtual world can be shared, cooperative operations can be practiced easily, and trainers and trainees can work together more effectively in the same virtual environment. The capability to visualize real-world data will be used to introduce measured experiment data into the virtual world online, in order to interact realistically with the science-reference-model hardware: the user's actions in the virtual world are translated into corresponding changes of the inputs to the science-reference-model hardware, and the measured data is then in turn fed back into the virtual world. During the operation of COLUMBUS, the capabilities for distributed access and for visualizing measured data, through metaphors and augmentations of the virtual world, may be used to provide virtual access to the COLUMBUS module, e.g. via the Internet. Finishing touches are currently being put to the system; the virtual world is scheduled to be operational in November 2001, so that first experimental results can be presented alongside the design and the key ideas.
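The data-feedback loop the abstract describes reduces to a simple cycle, sketched below with a stand-in hardware model. Every class and variable name here is hypothetical; the sketch only shows the virtual-action-in, measured-data-out pattern.

```python
class ScienceReferenceModel:
    """Stand-in for the science-reference-model hardware: it accepts
    inputs and returns measured data (here a trivial first-order lag)."""
    def __init__(self):
        self.state = 20.0                  # e.g. experiment temperature, deg C

    def apply(self, setpoint):
        self.state += 0.1 * (setpoint - self.state)

    def measure(self):
        return self.state

def virtual_lab_step(user_setpoint, hw, virtual_world):
    """One cycle of the loop: the user's action in the virtual world
    drives the hardware, and the measured response is fed back into
    the virtual world for visualization."""
    hw.apply(user_setpoint)
    virtual_world["experiment_reading"] = hw.measure()

hw, world = ScienceReferenceModel(), {}
for _ in range(50):
    virtual_lab_step(37.0, hw, world)
print(world)
```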
This paper presents an update on the Ranger Telerobotic Shuttle Experiment (RTSX) and the associated key robotics technologies within the Ranger program. Ranger TSX will operate from a Spacelab logistics pallet inside the cargo bay of the shuttle and will demonstrate space station and on-orbit servicing operations, including extravehicular activity (EVA) worksite setup, an orbital replacement unit (ORU) exchange, and various task-board experiments. The flight system will be teleoperated both from the middeck inside the shuttle and from a ground control station at NASA Johnson Space Center. This paper addresses the technical and programmatic status of the flight experiment and describes progress on the engineering test unit, Ranger Neutral Buoyancy Vehicle II (RNBV II), currently in fabrication. Also described are associated technologies that support this effort, including a flight robot mockup built to practice EVA stowage and Ranger NBV I, a free-flight prototype vehicle.
We used an Audio Feedback System (AFS) to present telemetric data to human operators as auditory information in robot arm experiments with Engineering Test Satellite VII (ETS-VII). Our intention was to provide information that assists in easier and safer operation. We believe that the human-machine interface presented to operators should correspond to different tasks and to different operator skill levels, and we had the opportunity to assess the AFS for two tasks: those of a Commander and of a Monitor. The Commander operates the robot arm by transmitting teleoperation commands, while the Monitor checks the indications of telemetric data on a status display; during the experiment, both used a status display to check information on the robot arm. Seven human operators, four Commanders and three Monitors, performed their respective tasks. In order to assess the effectiveness of the AFS across skill levels, an astronaut with a very high level of skill in controlling the robot arm was included among the Monitors. In determining the effectiveness of the AFS, we focused on the eye movements of the operators, which we measured with an eye mark recorder (EMR). When auditory information was given, the average fixation times required to confirm telemetric data on the status display were shortened, except in the case of the astronaut, on whose performance the AFS had no effect.
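One common way to sonify a telemetry channel is a logarithmic value-to-pitch map, sketched below. The abstract does not describe the AFS's actual mapping; this function and its ranges are purely illustrative.

```python
def telemetry_to_pitch(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Map one telemetry channel onto a two-octave pitch range so an
    operator can monitor it by ear instead of fixating on the display.
    The logarithmic mapping matches how pitch is perceived; all ranges
    here are assumptions, not the AFS design."""
    t = (value - v_min) / (v_max - v_min)
    t = min(1.0, max(0.0, t))            # clamp out-of-range telemetry
    return f_min * (f_max / f_min) ** t

print(telemetry_to_pitch(0.35, 0.0, 1.0))   # about 357 Hz
```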
Robonaut, NASA's latest anthropomorphic robot, is designed to work in the hazards of the space environment as both an astronaut assistant and, in certain situations, an astronaut surrogate. This highly dexterous robot is now performing complex tasks under telepresence control in the Dexterous Robotics Laboratory at the Johnson Space Center that previously could be carried out only directly by humans. With 43 degrees of freedom (DOF), Robonaut is a state-of-the-art, human-size telemanipulator system. It has a three-DOF articulated waist and two seven-DOF arms, giving it an impressive workspace for interacting with its environment, and its two five-fingered hands allow manipulation of a wide range of common tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems. Telepresence control is the main mode of operation for Robonaut: the teleoperator dons a variety of sensors that map hand, head, arm, and body motions onto the robot. A distributed object-oriented network architecture links the various computers used to gather posture and joint-angle data from the human operator, to control the robot, to generate video displays for the operator, and to recognize and generate human voice inputs and outputs. The distributed object-oriented software allows the same telepresence gear to be used on different robots and allows interchangeable telepresence gear in the laboratory environment; new telepresence gear and new robots need only implement a standard software interface. The Robonaut implementation is a two-tiered system using Java/Jini for distributed commands and a commercial off-the-shelf data-sharing protocol for high-speed data transmission. Experimental telepresence gear is being developed and evaluated, with a focus on force feedback devices and techniques, and their effects on teleoperator performance of typical space operations tasks are being measured. In particular, augmenting the baseline Robonaut teleoperation control techniques with force feedback information is shown to significantly reduce potentially damaging contact forces and to improve operator consistency.
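The "standard software interface" idea can be shown in miniature: any tracker that can report a posture and accept force feedback can drive any robot on the network. The interface and type names below are illustrative inventions; the real system uses Java/Jini distributed objects, not this Python.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Posture:
    head_pan_tilt: tuple     # radians
    arm_joints: tuple        # 7 angles per arm
    finger_joints: tuple     # per-hand joint angles

class TelepresenceGear(ABC):
    """Contract every new piece of gear implements, so trackers and
    robots stay interchangeable across the laboratory network."""

    @abstractmethod
    def read_posture(self) -> Posture: ...

    @abstractmethod
    def render_forces(self, wrench) -> None: ...

class SimpleGlove(TelepresenceGear):
    def read_posture(self) -> Posture:
        return Posture((0.0, 0.0), (0.0,) * 7, (0.0,) * 12)

    def render_forces(self, wrench) -> None:
        pass   # a force-reflecting device would actuate here

gear: TelepresenceGear = SimpleGlove()
print(gear.read_posture())
```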