Transparency is a method proposed to quantify the telepresence performance of bilateral teleoperation systems. It is practically impossible to achieve transparency at all frequencies; however, previous research has shown that, by proper manipulation of the individual transfer functions, systems that are transparent over limited frequency bands can be designed. In this paper we introduce a different approach. We first study the problem of designing systems that are transparent only for a given value of the output impedance; then, by combining this concept with time-adaptive impedance estimation, we propose a new strategy for the design of transparent systems. In the proposed method, the output impedance estimate is updated at each time instant using adaptive ARMA modeling based on either the LMS or RLS algorithm. The current estimate of the output impedance is used to update some free design parameters of the system so that the system tends toward transparency. We refer to this strategy as asymptotic transparency. An example of how to use this strategy in the design of a system with position-forward and force-feedback paths is included.
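As a rough illustration of the on-line estimation step, the following sketch (our own, not taken from the paper) uses a normalized LMS update to fit a low-order ARMA model relating sampled velocity and force on the remote side; the signal names, model orders, and the synthetic spring-damper environment are assumptions.

```python
import numpy as np

def nlms_arma_impedance(v, f, na=2, nb=2, mu=0.5):
    """Track ARMA coefficients theta so that
    f[k] ~ sum_i a_i*v[k-i] + sum_j b_j*f[k-1-j], one normalized-LMS step per sample."""
    theta = np.zeros(na + nb)                    # [a_0..a_{na-1}, b_0..b_{nb-1}]
    history = []
    for k in range(max(na - 1, nb), len(f)):
        phi = np.concatenate((v[k - na + 1:k + 1][::-1],     # recent velocities
                              f[k - nb:k][::-1]))            # past forces
        err = f[k] - phi @ theta                             # prediction error
        theta = theta + mu * err * phi / (1e-9 + phi @ phi)  # normalized LMS update
        history.append(theta.copy())
    return np.array(history)

# Synthetic spring-damper environment: f = b*v + k*x, with x integrated from v.
rng = np.random.default_rng(0)
v = rng.standard_normal(2000)
x = np.cumsum(v) * 1e-3
f = 5.0 * v + 200.0 * x + 0.01 * rng.standard_normal(2000)
print(nlms_arma_impedance(v, f)[-1])    # final coefficient estimates
```

Each new coefficient vector would then be mapped to whatever free design parameters the chosen architecture leaves adjustable, which is the step the paper formalizes.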
This paper summarizes our efforts to apply robotics and automation technology to assist, enhance, quantify, and document neuro-rehabilitation. It reviews a pilot clinical trial involving twenty stroke patients with a prototype robot-aided rehabilitation facility developed at MIT and tested at Burke Rehabilitation Hospital. In particular, we present a few results on (a) the patients' tolerance of the procedure, (b) whether peripheral manipulation of the impaired limb influences brain recovery, and (c) the development of a robot-aided assessment procedure.
A master-slave telerobotic surgery system has been developed in the Human Machine Systems Lab at MIT. This system is composed of a master-slave telerobotic system, a two-way video/audio transmission link, a control data link, and a laparoscopic surgery simulation platform. With video, audio, and force feedback, a surgeon can conduct telelaparoscopic surgery on a remote 'patient' by means of the master-slave telerobotic system. However, the force feedback can become unstable when the communication time delay of the control data link exceeds roughly 0.2 seconds. Designing a stable force-feedback controller is therefore an important issue for a telerobotic surgery system. This paper proposes a new approach to achieve stable force-reflecting teleoperation control under time delay: fuzzy sliding control (FSC). FSC is based on conventional fuzzy control and sliding-mode control, both of which have been proven robust and stable. The design methodology of FSC includes the following major parts: a fuzzy sliding control law, rule tuning in the phase plane, and soft boundary-layer tuning. FSC can easily be modified and applied to deal with the uncertainties and human interactions in teleoperation. In our research, a novel control structure consisting of FSC and a fuzzy supervisor has been implemented in our high-bandwidth master-slave telerobotic system. This approach has been shown to give stable force reflection and good tracking accuracy for loop delays of up to 2 seconds. Experimental results are described in the paper.
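The following sketch (our own reading, not the authors' controller) shows the general shape such a fuzzy sliding control law can take: a sliding surface s = de + lambda*e, a coarse rule table that scales the switching gain with |s| as a crisp stand-in for fuzzy rules, and a soft boundary layer; all gains and breakpoints are illustrative.

```python
import numpy as np

def fuzzy_gain(s, levels=(0.5, 1.0, 2.0), breakpoints=(0.2, 1.0)):
    """Crisp stand-in for fuzzy rules: small / medium / large |s| -> gain factor."""
    a = abs(s)
    if a < breakpoints[0]:
        return levels[0]
    if a < breakpoints[1]:
        return levels[1]
    return levels[2]

def fsc_force(e, de, lam=5.0, k0=10.0, phi=0.05):
    """Force command from tracking error e and its rate de."""
    s = de + lam * e                          # sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)         # soft boundary layer
    return -k0 * fuzzy_gain(s) * sat          # fuzzy-scaled switching term

# usage inside a control loop: e = slave position minus the (delayed) master command
print(fsc_force(e=0.1, de=-0.2))
```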
Recent research in haptic systems has begun to focus on the generation of textures to enhance haptic simulations. Synthetic texture generation can be achieved through the use of stochastic modeling techniques to produce random and pseudo-random texture patterns. These models are based on techniques used in computer graphics texture generation and in textured-image analysis and modeling. The goal of this project is to synthesize haptic textures that are perceptually distinct. Two new rendering methods for haptic texturing are presented for implementing stochastic texture models on a 3-DOF point-interaction haptic interface. The synthesized textures can be used in a myriad of applications, including haptic data visualization for blind individuals and overall enhancement of haptic simulations.
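As one illustration of the stochastic-modeling idea (our own example, not one of the paper's two rendering methods), a first-order autoregressive process can generate a correlated height profile that a point-interaction device renders as a penalty force; all parameters below are assumed.

```python
import numpy as np

def ar1_texture(n, rho=0.95, sigma=1e-4, seed=0):
    """Correlated height profile: h[k] = rho*h[k-1] + w[k], w ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    h = np.zeros(n)
    for k in range(1, n):
        h[k] = rho * h[k - 1] + sigma * rng.standard_normal()
    return h

def texture_force(probe_x, probe_z, h, cell=0.0005, k=800.0):
    """Spring force pushing the probe out of the textured surface (0 if above it)."""
    i = int(probe_x / cell) % len(h)
    penetration = h[i] - probe_z
    return k * max(penetration, 0.0)

h = ar1_texture(2048)
print(texture_force(probe_x=0.01, probe_z=-1e-4, h=h))
```

Varying rho and sigma changes the correlation length and roughness, which is one way to produce textures that feel perceptually distinct.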
Robot manipulators are the natural choice for transmitting computer-generated dynamic forces from a virtual environment to a human user immersed in that environment. Control schemes for this type of interactive interface are extensions of the techniques used in teleoperation systems, with the forces applied between the master and the human generated by the dynamic model rather than measured from sensors on the slave. Stability problems arise when the robot inertia and dynamics are coupled to the complex and time-varying human dynamic characteristics. In this work, a new local feedback error control technique is applied to an interface robot to allow motion only in free directions in the virtual space. This virtual force interface does not require the use of an end effector force transducer. As in all local control schemes, the dynamic characteristics and cross-coupling effects of the manipulator are neglected, so that actual motion may deviate slightly from the desired trajectory during high speed operation. Experimental results are presented showing the use of this haptic device moving within a cubic space representing a fish tank. Motion of the hand controls the motion of virtual fish within the tank, and as the fish contact the walls of the virtual tank, the human feels the hard surface of the tank walls. Stability for this scheme is based on the stability of position error feedback for the manipulator.
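A minimal sketch of the free-direction idea, under our own assumptions (the box geometry and gain are illustrative): the commanded endpoint is clamped to the inside of the virtual tank, and plain position-error feedback supplies the wall reaction the user feels, with no end-effector force transducer.

```python
import numpy as np

TANK_MIN = np.array([-0.1, -0.1, -0.1])   # virtual tank corners (metres, assumed)
TANK_MAX = np.array([0.1, 0.1, 0.1])

def constrained_command(x_hand):
    """Free motion inside the tank; the command is clamped at the walls."""
    return np.clip(x_hand, TANK_MIN, TANK_MAX)

def wall_force(x_hand, kp=2000.0):
    """Position-error feedback: proportional restoring force toward the box."""
    return kp * (constrained_command(x_hand) - x_hand)

print(wall_force(np.array([0.12, 0.0, 0.0])))   # pushes back along -x at the +x wall
```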
This paper presents results from an ongoing collaboration between Wilkes University and the University of Wisconsin-La Crosse in using the Internet for undergraduate education in robotics. An interface has been developed which allows computer science students at UW-La Crosse to control a robotic manipulator on the Wilkes University campus using images transmitted from Wilkes. The focus of this paper is the interface which monitors the image transmission and the control which the student user has over that transmission. An option in the interface allows the user to crop the image to a desired size in order to focus on a specific feature. Results of experiments performed by the joint undergraduate research groups at both institutions in using this component, as well as the associated educational outcomes, are presented here.
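For illustration only (a hypothetical helper, not the course software), the crop option amounts to requesting a student-selected rectangle of the transmitted frame so that attention stays on a specific feature:

```python
from PIL import Image

def crop_to_feature(image_path, box):
    """box = (left, upper, right, lower) in pixels, chosen by the student."""
    return Image.open(image_path).crop(box)

# e.g. focus on the manipulator's gripper area of a received frame
# roi = crop_to_feature("workcell_frame.jpg", (120, 80, 360, 260))
```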
This study was designed to compare a new, autostereoscopic, 'glasses-free' three-dimensional screen with a passive-glasses three-dimensional screen and a two-dimensional imaging system. An objective analysis consisting of two experiments was conducted. The first experiment was designed to assess depth perception; both 3-D imaging systems were compared with each other and with a 2-D imaging system. A statistically significant difference existed between the 2-D screen and both of the 3-D screens (p less than 0.001). There was no statistical difference between the 'glasses-free' 3-D screen and the passive-glasses 3-D screen in either experiment. The second experiment was task-oriented, designed to compare the new 'glasses-free' 3-D imaging screen with the passive-glasses 3-D screen. In this inanimate setting, the task of passing a needle and suture through a series of hoops was performed faster with the passive-glasses 3-D screen, but the difference was not statistically significant. Conclusion: the 3-D screens clearly produce a more accurate assessment of depth than the 2-D screen, and the new 'glasses-free' 3-D screen produced results comparable to the established passive-glasses 3-D screen.
Intelligent viewing systems are required if efficient and productive teleoperation is to be applied to dynamic manufacturing environments. These systems must automatically provide remote views to an operator which assist in the completion of the task. This assistance increases the productivity of the teleoperation task if the robot controller is responsive to the unpredictable dynamic evolution of the workcell. Behavioral controllers can be utilized to give reactive 'intelligence.' The inherent complex structure of current systems, however, places considerable time overheads on any redesign of the emergent behavior. In industry, where the remote environment and task frequently change, this continual redesign process becomes inefficient. We introduce a novel behavioral controller, based on an 'ego-behavior' architecture, to command an active camera (a camera mounted on a robot) within a remote workcell. Using this ego-behavioral architecture the responses from individual behaviors are rapidly combined to produce an 'intelligent' responsive viewing system. The architecture is single-layered, each behavior being autonomous with no explicit knowledge of the number, description or activity of other behaviors present (if any). This lack of imposed structure decreases the development time as it allows each behavior to be designed and tested independently before insertion into the architecture. The fusion mechanism for the behaviors provides the ability for each behavior to compete and/or co-operate with other behaviors for full or partial control of the viewing active camera. Each behavior continually reassesses this degree of competition or co-operation by measuring its own success in controlling the active camera against pre-defined constraints. The ego-behavioral architecture is demonstrated through simulation and experimentation.
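One way to read the fusion mechanism (our own sketch with hypothetical behaviors, not the authors' architecture) is that every behavior independently proposes a camera velocity together with a self-assessed weight, and the controller takes a normalized weighted sum, so behaviors compete or cooperate for the camera without any explicit knowledge of one another:

```python
import numpy as np

class Behavior:
    def propose(self, state):
        """Return (camera_velocity, weight) for the current workcell state."""
        raise NotImplementedError

class TrackGripper(Behavior):
    """Steers the view toward the robot gripper; weight grows with viewing error."""
    def propose(self, state):
        err = state["gripper_pos"] - state["view_centre"]
        return 1.5 * err, float(np.linalg.norm(err))

class KeepStandoff(Behavior):
    """Backs the camera away when it gets too close to the workpiece."""
    def propose(self, state):
        too_close = state["standoff"] < 0.3
        v = np.array([0.0, 0.0, -0.2]) if too_close else np.zeros(3)
        return v, 1.0 if too_close else 0.0

def fuse(behaviors, state):
    proposals = [b.propose(state) for b in behaviors]
    total = sum(w for _, w in proposals) + 1e-9
    return sum(w * v for v, w in proposals) / total        # fused camera command

state = {"gripper_pos": np.array([0.5, 0.2, 0.0]),
         "view_centre": np.array([0.3, 0.2, 0.0]),
         "standoff": 0.25}
print(fuse([TrackGripper(), KeepStandoff()], state))
```

Because each behavior only reads the shared state and reports its own weight, a new behavior can be written and tested in isolation before being dropped into the fusion step.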
Tasks carried out remotely via a telerobotic system are typically complex, occur in hazardous environments, and require fine control of the robot's movements. Telepresence systems provide the teleoperator with a feeling of being physically present at the remote site. Stereoscopic video has been successfully applied to telepresence vision systems to increase the operator's perception of depth in the remote scene, and this sense of presence can be further enhanced using computer-generated stereo graphics to augment the visual information presented to the operator. Over seven years the Mechatronic Systems and Robotics Research Group has developed a number of high-performance active stereo vision systems, culminating in the latest, a four-degree-of-freedom stereohead. This carries two miniature color cameras and is controlled in real time by the motion of the operator's head; the operator views the stereoscopic video images on an immersive head-mounted display or a stereo monitor. The stereohead is mounted on a mobile robot, the movement of which is controlled by a joystick interface. This paper describes the active telepresence system and the development of a prototype augmented reality (AR) application to enhance the operator's sense of presence at the remote site. The initial enhancements are a virtual map and compass to aid navigation in degraded visual conditions and a virtual cursor that provides a means for the operator to interact with the remote environment. The results of preliminary experiments using the initial enhancements are presented.
The MIT remote microscope is a telemicroscopy system that allows users to remotely control and view a microscope over the internet with a graphical interface that runs on an ordinary workstation computer. The microscope server consists of an automated Zeiss microscope controlled by a personal computer, while the client interface is implemented with custom software developed at MIT. The system was designed primarily to provide remote inspection capabilities for semiconductor researchers during the remote fabrication of integrated circuits, but it can also be used as a general-purpose instrument for remote inspections. The remote microscope also allows any number of clients to view the microscope simultaneously in a conference inspection mode, enabling collaboration among distant viewers. Because clients require no special hardware, the internet remote microscope is extremely accessible and easy to use, yet provides powerful remote inspection capabilities, collaboration opportunities, and easy access to hard-to-reach locations such as clean-room environments for semiconductor processing.
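A minimal sketch of the conference-inspection idea, under our own assumptions about the protocol (this is not MIT's implementation): one client at a time holds control of the instrument while every connected client receives each new image frame.

```python
import queue

class RemoteMicroscopeServer:
    def __init__(self):
        self.clients = {}           # client_id -> outgoing frame queue
        self.controller = None      # client currently allowed to drive the microscope

    def connect(self, client_id):
        self.clients[client_id] = queue.Queue()
        if self.controller is None:
            self.controller = client_id     # first client gets control by default

    def command(self, client_id, cmd):
        if client_id != self.controller:
            return "view-only"              # other conference members can only watch
        return f"executing {cmd}"           # e.g. move stage, switch objective, focus

    def publish_frame(self, frame_bytes):
        for q in self.clients.values():
            q.put(frame_bytes)              # broadcast the same image to every viewer
```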
In teleoperation situations where virtual environments are employed and fine operation is needed, it is crucial to dynamically keep the virtual environment consistent with the real remote environment. This is especially important when the remote site is at a great distance, such as in a space station, and therefore large time delays exist during the process of teleoperation. In this paper, we propose an automatic calibration method which dynamically determines the difference in 3D position and orientation between virtual and real environments by using a new color image matching technique which is based on gradients of both gray levels and color information. During the process of model building, significant color features in the real environment, either natural or specially prepared, are picked up and mapped onto the corresponding environment model positions. During the process of teleoperation, color images are taken by a camera mounted on a manipulator. These images are analyzed and features are extracted and matched with those in the model in real time. The 3D poses and positions of the camera in the real environment are calculated and then compared with those in the virtual environment in order to determine differences between them. Feature correspondences are determined based on color attributes and geometric relations. A simplified closed-form solution for 3D location of a 4-DOF mobile camera is given. Experimental results show the effectiveness of this dynamic calibration approach.
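By way of illustration (our own sketch; the weights, feature attributes, and the translation-only correction are assumptions, and the paper's closed-form 4-DOF solution is not reproduced here), feature matching can combine gray-level gradient and color differences, and the matched correspondences then yield the virtual-to-real offset:

```python
import numpy as np

def feature_score(candidate, model_feat, w_grad=0.5, w_color=0.5):
    """Lower is better: combined gradient-magnitude and RGB color difference."""
    d_grad = abs(candidate["grad_mag"] - model_feat["grad_mag"])
    d_color = np.linalg.norm(np.asarray(candidate["rgb"], float) -
                             np.asarray(model_feat["rgb"], float))
    return w_grad * d_grad + w_color * d_color

def match_feature(candidates, model_feat):
    return min(candidates, key=lambda c: feature_score(c, model_feat))

def translation_offset(real_pts, virtual_pts):
    """Mean translation to bring the virtual model onto the matched real features."""
    return np.mean(np.asarray(real_pts) - np.asarray(virtual_pts), axis=0)

model = {"grad_mag": 40.0, "rgb": (200, 30, 30)}
cands = [{"grad_mag": 10.0, "rgb": (30, 30, 200), "xyz": (0.1, 0.0, 0.5)},
         {"grad_mag": 42.0, "rgb": (190, 35, 25), "xyz": (0.3, 0.1, 0.5)}]
print(match_feature(cands, model)["xyz"])
```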
This paper reports on the current status of the multimodal user supervised interface and intelligent control (MUSIIC) project, which is working towards the development of an intelligent assistive telemanipulative system for people with motor disabilities. Our MUSIIC strategy overcomes the limitations of previous approaches by integrating a multimodal RUI (robot user interface) with a semi-autonomous reactive planner, allowing users with severe motor disabilities to manipulate objects in an unstructured domain. The multimodal user interface is a speech- and deictic (pointing) gesture-based control that guides the operation of a semi-autonomous planner controlling the assistive telerobot. MUSIIC uses a vision system to determine the three-dimensional shape, pose, and color of objects and surfaces in the environment, as well as an object-oriented knowledge base and planning system that superimposes information about common objects on the three-dimensional world. This approach allows users to identify objects and tasks via a multimodal user interface that interprets their deictic gestures and restricted, natural-language-like speech input. The multimodal interface eliminates the need for general-purpose object recognition by binding the user's speech and gesture input to a locus in the domain of interest. The underlying knowledge-driven planner combines information obtained from the user, the stereo vision system, and the knowledge bases to adapt previously learned plans to new tasks and to manipulate objects newly introduced into the workspace. The result is a flexible and intelligent telemanipulative system that functions as an assistive robot for people with motor disabilities.
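A toy sketch of the binding step (the objects, attributes, and threshold are hypothetical, not MUSIIC's knowledge base): the pointed 3D locus narrows the candidates, and the spoken words select among them.

```python
import numpy as np

known_objects = [
    {"name": "cup",  "color": "red",  "pos": np.array([0.40, 0.10, 0.02])},
    {"name": "book", "color": "blue", "pos": np.array([0.45, 0.12, 0.00])},
]

def resolve(point_locus, spoken_words, radius=0.15):
    """Pick the known object near the pointed locus whose attributes match the words."""
    candidates = [o for o in known_objects
                  if np.linalg.norm(o["pos"] - point_locus) < radius
                  and all(w in (o["name"], o["color"]) for w in spoken_words)]
    if not candidates:
        return None                 # a planner would ask the user to clarify
    return min(candidates, key=lambda o: np.linalg.norm(o["pos"] - point_locus))

print(resolve(np.array([0.42, 0.10, 0.0]), ["red", "cup"]))   # -> the cup entry
```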
This paper presents results of research on the development of multimedia interfaces for teleoperated robots. Several experiments have been conducted to evaluate operator interaction with several devices. Interaction with the operator takes place through operator input devices and operator output devices, both studied in order to improve the performance of teleoperated task execution. Inputs are the way for the operator to send commands to the robots and other devices, such as cameras, tools, or the interface configuration itself. Three kinds of inputs have been evaluated: voice, master arms, and graphical utilities. Outputs are the way to present the evolution of the remote task to the operator. These outputs are linked to the operator's senses, exciting vision, hearing, and touch according to information from the remote site. The experiments have been conducted on the same task with different configurations, so as to obtain the performance of each interface configuration. In addition, the experiments vary the information sent to and received from the operator, in order to evaluate the different devices independently.
Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports on experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR, and the contribution of a knowledge database, robust control, and the combination of intensity and range imaging to build such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated in a VRR system as a hypothesis-generating, knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images, which checks object pose and texture. Finally, we show how this system has been implemented and operates in a practical VRR environment used for an assembly task.
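The overall control flow can be pictured as below (a structural sketch only; the two placeholder routines stand in for the paper's free-form geometric matcher and intensity-based check and are not its implementation):

```python
def geometric_match(range_image, model):
    """Placeholder: would return candidate poses from free-form geometric matching."""
    return model.get("candidate_poses", [])

def verify_pose_and_texture(intensity_image, model, pose):
    """Placeholder: would render the model at 'pose' and compare pose and texture."""
    return pose.get("score", 0.0)

def recognize(range_image, intensity_image, models, threshold=0.8):
    accepted = []
    for model in models:
        for pose in geometric_match(range_image, model):        # hypothesis generation
            if verify_pose_and_texture(intensity_image, model, pose) >= threshold:
                accepted.append((model["name"], pose))          # hypothesis verified
    return accepted

models = [{"name": "bracket", "candidate_poses": [{"score": 0.9}, {"score": 0.4}]}]
print(recognize(range_image=None, intensity_image=None, models=models))
```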
Processing harvesters are forestry telemanipulators that can fell, delimb, cut, and pile trees in a single sequence of operations. They are complex machines, and operators need between 4 and 6 months of practice to become productive. This paper describes work in progress on the development of a computerized environment that uses 3D graphics, audio feedback, and real-time interactivity to create a virtual environment (VE) similar to that of a processing harvester. This virtual environment will be used both to train operators and to test new user interfaces that could enhance performance and/or reduce learning time, as well as improve overall productivity by allowing operators to train without taking machines out of service.
One aspect of applying teleoperators to a surgical scenario that has been overlooked to date is how best to coordinate the actions of a tele-surgeon and an assistant on the scene of the operation. This research uses laparoscopic surgery as a model, and examines performance of the surgeon/assistant pair on simulated surgical tasks under different conditions of time delay and surgical tool assignment. Our experiments suggest that under time delay conditions, tasks are completed most quickly when the tele-surgeon controls only a laparoscopic camera, and instructs the assistant in how to complete the tasks. This paper also describes the teleoperator system used and a proof-of-concept demonstration that will be conducted between MIT and Massachusetts General Hospital in late 1996. The demonstration will make use of three ISDN telephone lines to transmit audio, video, and control signals.
A growing concern with the rapid advances in technology is that robotic systems will become so complex that operators will be overwhelmed by the complexity and number of controls. Thus, there is a need within the field of remote and teleoperated robotics for better man-machine interfaces. Telepresence attempts to bring real-world senses to the operator, especially where the scale and orientation of the robot are very different from those of a human operator. This paper reports on research performed at the INEL which identified and evaluated current developments in telepresence best suited for nuclear applications by surveying national laboratories and universities and by evaluating commercial products available in industry. The resulting telepresence system, VirtualwindoW, attempts to minimize the complexity of robot controls and to give the operator the 'feel' of the environment without actually contacting items in the environment. The authors recommend that a prolonged-use study be conducted on the VirtualwindoW to determine and benchmark the length of time users can be safely exposed to this technology. In addition, it is proposed that a stand-alone system be developed which combines the existing multi-computer platform into a single-processor telepresence platform. The stand-alone system would provide a standard camera interface and allow the VirtualwindoW to be ported to other telerobotic systems.
The Ranger Telerobotic Flight Experiment is planned for launch into low Earth orbit on the Space Shuttle in 1998. During its mission, the Ranger space telerobot will demonstrate a wide variety of spacecraft servicing capabilities and will set the stage for future telerobotic spacecraft servicing missions in Earth orbit. With four highly advanced robot manipulators and the capability for free flight in the space environment, the Ranger spacecraft will be able to perform on-orbit servicing tasks previously accomplishable only by astronauts. An underwater analog to the Ranger spacecraft has already been developed and is collecting data for comparison against the flight experiment data. It is anticipated that the results of the Ranger mission will lead to a new class of dexterous extravehicular telerobots for use in on-orbit spacecraft servicing operations.
The expansion of robotic systems' capabilities, as well as the need for such machines to work in complex environments (hazardous, small, distant, etc.), creates a need for user interfaces that permit efficient teleoperation. Virtual-reality-based interfaces provide the user with a new method for robot task planning and control: he or she can define tasks in a very intuitive way by interacting with a 3D computer-generated representation of the world, which is continuously updated thanks to the fusion and analysis of multiple sensors. The Swiss Federal Institute of Technology has successfully tested different kinds of teleoperation. In the early 90s, a transatlantic teleoperation of a conventional robot manipulator with a vision feedback system to update the virtual world was achieved. This approach was then extended to the teleoperation of several mobile robots (Khepera, Koala) as well as to the control of microrobots used for microsystems assembly in the micrometer range. One of the problems encountered with such an approach is the need to program a specific kinematic algorithm for each kind of manipulator. To provide a more general solution, we started a project aiming at the design of a 'kinematic generator' (CINEGEN) for the simulation of generic serial and parallel mechanical chains. With CINEGEN, each manipulator is defined by an ASCII description file and its attached graphics files; inserting a new manipulator simply requires a new description file, and none of the existing tools require modification. To obtain real-time behavior, we have chosen a numerical method based on the pseudo-Jacobian to generate the inverse kinematics of the robot. The results obtained with an object-oriented implementation on a graphics workstation are presented in this paper.
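As a minimal illustration of the pseudo-Jacobian approach (our own sketch for a planar two-link chain, not CINEGEN itself; the link lengths stand in for the kind of data an ASCII description file would supply), the joint angles are updated iteratively with the Jacobian's pseudo-inverse:

```python
import numpy as np

L = np.array([0.4, 0.3])       # link lengths, as a description file might provide

def forward(q):
    """End-effector position of the planar 2-link chain."""
    return np.array([L[0]*np.cos(q[0]) + L[1]*np.cos(q[0] + q[1]),
                     L[0]*np.sin(q[0]) + L[1]*np.sin(q[0] + q[1])])

def jacobian(q):
    s01, c01 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L[0]*np.sin(q[0]) - L[1]*s01, -L[1]*s01],
                     [ L[0]*np.cos(q[0]) + L[1]*c01,  L[1]*c01]])

def ik_pseudo_jacobian(target, q0=(0.3, 0.3), iters=100, alpha=0.5):
    """Iterate q <- q + alpha * pinv(J) * (target - f(q))."""
    q = np.array(q0, float)
    for _ in range(iters):
        q = q + alpha * np.linalg.pinv(jacobian(q)) @ (target - forward(q))
    return q

q = ik_pseudo_jacobian(np.array([0.5, 0.2]))
print(q, forward(q))            # joint angles and the reached position
```

Because only forward(q) and jacobian(q) are mechanism-specific, swapping in a new chain amounts to regenerating those two routines from its description, which is the generality the paper is after.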