In this paper the design of controllers for haptic interfaces is discussed. A general methodology for modeling and designing such controllers is presented, and useful tools for analyzing the stability and performance of haptic interfaces interacting with virtual environments are introduced. As an example, the design of a torque controller for a joint of an electrically actuated, tendon-driven haptic interface is analyzed.
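One widely used analysis tool of the kind the abstract refers to is the sampled-data passivity bound for a rendered virtual wall. The sketch below assumes Colgate's passivity condition and illustrative gain values; it is not the authors' specific controller.

```python
# Passivity check for a virtual wall rendered by a sampled-data haptic
# interface (Colgate's condition b > T*K/2 + |B|): T is the servo period,
# K and B are the virtual wall's stiffness and damping, and b is the
# device's inherent physical damping. Values below are illustrative only.

def virtual_wall_is_passive(b, T, K, B):
    """True if the sampled virtual wall (K, B) is passive on hardware with damping b."""
    return b > T * K / 2.0 + abs(B)

# Example: 1 kHz servo rate, two candidate wall stiffnesses
print(virtual_wall_is_passive(b=2.0, T=0.001, K=2000.0, B=0.5))  # True
print(virtual_wall_is_passive(b=2.0, T=0.001, K=8000.0, B=0.5))  # False
```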
User interfaces play an increasingly important role in robot applications. This paper presents the design and implementation of a multi-modal user interface for a mobile manipulator consisting of an autonomous vehicle carrying a manipulator. The system is capable of navigating in the real world and performing useful manipulations, and is intended to provide services for handicapped people; the user interface is crucial for such a user to benefit from these services. The interface takes advantage of multimedia technology and combines graphics, speech and visualization into a coherent multi-modal whole, serving as a user command interpreter, robot monitor and simulator. A user-centered design approach is adopted for enhanced understandability and usability.
This paper focuses on the functionalities and architecture of a real-time environment (RTE) dedicated to mission programming for telerobotic systems. The environment constitutes a framework providing the human operator with tools for the preparation, simulation and execution of missions in a non-cooperative environment. To overcome the communication time delay and to carry out a mission at minimum cost and with maximum safety, the preparation and simulation phases are essential for obtaining a well-tried scenario for the execution phase. An application in which a mobile manipulator is teleoperated in a known, structured outdoor environment is also presented.
Most augmented reality systems enhance the user's view of their immediate surroundings, either through an optical see-through head-mounted display (HMD) or using a camera mounted on the HMD to provide input to the video displays. In our system the user, wearing an immersive HMD, views stereoscopic video images of a remote scene, provided by a pair of miniature CCD cameras mounted on a stereohead located in the remote environment. This four-degree-of-freedom stereohead was developed for our active telepresence system and is controlled in real time by the motion of the operator's head. Off-the-shelf software designed for generating and controlling virtual environments is used to create the stereographic overlays for our augmented reality application. For the virtual images to be accurately registered with the real scene, they must be drawn from the same viewpoint and with the same perspective as the cameras. The paper reviews the calibration methods employed in other augmented reality systems to determine these viewpoint parameters and presents the results of our initial calibration experiments.
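To see why the viewpoint parameters matter, consider a minimal pinhole-camera sketch: if the virtual camera's intrinsics differ from the calibrated values of the real camera, the overlay lands at the wrong pixel. All numbers here are illustrative, not the paper's calibration results.

```python
# Pinhole projection: the virtual camera must use the same intrinsics
# (focal lengths fx, fy and principal point cx, cy) as the real camera,
# or the overlay is misregistered by the pixel offset printed below.
import numpy as np

def project(point_cam, fx, fy, cx, cy):
    """Project a 3D point (camera coordinates, metres) to pixel coordinates."""
    X, Y, Z = point_cam
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

K_real = dict(fx=800.0, fy=800.0, cx=320.0, cy=240.0)   # calibrated camera
K_guess = dict(fx=700.0, fy=700.0, cx=320.0, cy=240.0)  # uncalibrated guess

p = (0.1, 0.05, 1.0)          # a point 1 m in front of the camera
print(project(p, **K_real))    # where the real camera images the point
print(project(p, **K_guess))   # where a mismatched virtual camera draws it
```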
This paper presents experimental investigations in robotics and telerobotics using a cooperative, complementary supervisory mode of control. The system allows a very wide range of applications, provided by the design of an advanced graphical user interface (GUI) and the use of a CCD camera and a force sensor mounted together on the end effector of a Kuka361 industrial robot. The stereo-vision effect is obtained from two consecutive images separated by a known displacement of the CCD camera. Through the developed GUI, tasks can be carried out in 3D using a point-and-direct method. However, since the vision system is very sensitive to the varying conditions of the real world, a cooperative error recovery scheme (CERS) is proposed to efficiently improve the robustness of the system. This scheme places the supervisor and the computer in a partnership and establishes a dialogue between them for recovering from errors, avoiding failures and saving time. For simulation purposes, a CAD model of the robot environment can be obtained from the 2D images and exploited in the CAD-based reaction-force simulator software, called ROSI.
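The "stereo-vision effect" from two consecutive images with a known camera displacement reduces to standard depth-from-disparity geometry; a minimal sketch with illustrative values follows.

```python
# With a pure horizontal translation B (the known camera displacement)
# between the two shots, a feature's disparity d (pixels) gives its depth
# Z = f * B / d, where f is the focal length in pixels. Values illustrative.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (m) of a feature, from focal length (px), baseline (m), disparity (px)."""
    return f_px * baseline_m / disparity_px

print(depth_from_disparity(f_px=800.0, baseline_m=0.05, disparity_px=20.0))  # 2.0 m
```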
In this paper the development of a master-slave robotic system is presented. The development is part of a research project devoted to the intelligent automation of in-service inspection of welded seams in nuclear plants using non-destructive, ultrasonic-based techniques. The main feature of the system is a shared explicit control scheme for the contact force during interaction of the end-effector with the remote environment. This unilateral master-slave operational scheme does not suffer from the drawbacks of bilateral force-reflection implementations; moreover, it prevents the operator from damaging the remote manipulator through erroneous maneuvers caused by imperfect video feedback. The paper describes the control structure applied (belonging to the class of explicit force control) and the hardware-software architecture of the system. Experimental results are given on the Ansaldo Olasand manipulator.
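As a minimal sketch of the class of controller the abstract names (explicit force control), the measured contact force can be fed back through a PI law that outputs a velocity set-point for the slave's inner motion loop. Gains, rates and the interface to the motion loop below are illustrative assumptions, not the paper's actual design.

```python
# One step of an explicit force controller: the contact-force error drives
# a PI law whose output is a velocity command along the contact normal.

def force_control_step(f_desired, f_measured, integral, kp=0.002, ki=0.01, dt=0.001):
    """One 1 kHz control step; returns (velocity command in m/s, updated integral)."""
    error = f_desired - f_measured      # contact-force error (N)
    integral += error * dt              # integral of the force error
    v_cmd = kp * error + ki * integral  # velocity set-point for the inner loop
    return v_cmd, integral
```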
Surface following is achieved through repetitive computation of the 6-degree-of-freedom pose an object must take to rest on the contour of a surface; this is needed to simulate a 3D vehicle 'riding' over virtual terrain. A novel algorithm is described that allows real-time following of the contour of a surface represented by range images, or point clouds. The algorithm is robust enough to compensate for holes or discontinuities in the range images by filling them in through surface approximation. Using a grid representation, the information is compressed into a lookup table, allowing real-time operation. A three-point representation of the base of the object following the surface contour is used to determine its pose at a given position over the grid.
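A minimal sketch of the three-point idea: sample the terrain height under the three base points, then recover the height, roll and pitch of the supporting plane. The height lookup and sign conventions below are illustrative stand-ins for the paper's precomputed grid and lookup table.

```python
import numpy as np

def pose_from_three_points(p1, p2, p3):
    """Pose of the supporting plane through three 3D contact points."""
    n = np.cross(p2 - p1, p3 - p1)      # plane normal from the two edge vectors
    n = n / np.linalg.norm(n)
    if n[2] < 0:                        # keep the normal pointing up
        n = -n
    centroid = (p1 + p2 + p3) / 3.0
    roll = np.arctan2(n[1], n[2])       # rotation about the x axis
    pitch = -np.arctan2(n[0], n[2])     # rotation about the y axis
    return centroid[2], roll, pitch     # height plus two attitude angles

# Example: heights sampled from a hypothetical terrain lookup
z_lookup = lambda x, y: 0.1 * x        # stand-in for the grid lookup table
pts = [np.array([x, y, z_lookup(x, y)]) for x, y in [(0, 0), (1, 0), (0, 1)]]
print(pose_from_three_points(*pts))
```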
There has been increased interest in recent years in the use of the sensation of force as a supplement to the visual and auditory feedback normally found in virtual environments. To produce a realistic feel of interacting with moving synthetic objects, the interactions between the objects, and those between the user and the environment, need to be based on physical laws. In this work, we present techniques for interaction of a human with a dynamic virtual environment through the haptic channel, specifically by integrating an articulated force-feedback arm with a graphical, physically based interactive simulation system. A distributed simulation and control model is used to separate the two fundamental requirements of graphics rendering and force control. We describe techniques for elegant force display in a dynamic environment and a physically based algorithm to model surface friction between the probe and the objects. Three schemes for a local model update are also presented and compared.
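As a minimal sketch of physically based haptic rendering with friction, a penalty (spring-damper) normal force can be combined with a Coulomb term opposing tangential probe motion. This is a generic stand-in for the paper's friction algorithm; all parameters are illustrative.

```python
import numpy as np

def contact_force(penetration, n, v_probe, k=2000.0, b=5.0, mu=0.4):
    """Force on the probe; n is the unit outward surface normal, penetration in m."""
    f_n = (k * penetration - b * float(np.dot(v_probe, n))) * n  # spring-damper normal force
    v_t = v_probe - float(np.dot(v_probe, n)) * n                # tangential probe velocity
    speed = np.linalg.norm(v_t)
    f_t = -mu * np.linalg.norm(f_n) * v_t / speed if speed > 1e-6 else np.zeros(3)
    return f_n + f_t                                             # normal force plus Coulomb friction

# Probe 2 mm into a horizontal surface, sliding in +x while pressing down
print(contact_force(0.002, np.array([0.0, 0.0, 1.0]), np.array([0.05, 0.0, -0.01])))
```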
The reproduction of the human tactile sense is highly desirable for successful telemanipulation of irregularly shaped objects. Advanced tactile feedback systems could significantly simplify manipulation tasks in the macro domain (e.g. telerobotics) as well as in the micro domain (e.g. minimally invasive surgery). To realize tactile feedback systems, there is increasing interest in sophisticated tactile sensing systems capable of detecting and processing mechanical and non-mechanical contact parameters, such as normal and shear contact forces and temperature, and, for the purpose of self-protection of the system, reproducing a sensation similar to mechanically and thermally induced pain in humans. Because the tactile sensor system must interact with human operators within the framework of a tactile feedback system, an anthropomorphic approach was chosen. Based on investigation and mechanical modeling of tactile reception mechanisms, conclusions are drawn regarding the working principle, structure, number, arrangement, optimal placement and desired parameters of the sensing elements, as well as the strategies for signal and information processing, for the general design of a tactile sensing system. The realization of single sensor elements by microstructuring and microfabrication is investigated. The applicability of different transduction principles is discussed, with the result that, given the intended tasks, the state of the art in microsystems technology, and the requirements of assembly and packaging, capacitive and piezoresistive sensors are the most promising for mechanical contact sensing. The design and technology of a simple capacitive/piezoresistive 3D contact force sensor element and of a more complex capacitive contact force sensing element are presented, their feasibility is demonstrated, and their integration into a sensor system is discussed.
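A back-of-the-envelope model shows the working principle of a capacitive contact-force element: an applied normal force compresses the dielectric gap, raising the parallel-plate capacitance. All dimensions and the spring constant below are illustrative assumptions, not the fabricated design.

```python
# Parallel-plate model: C = eps0 * eps_r * A / d, with the gap d reduced
# by the deflection F/k of the sensing membrane under a normal force F.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(force_n, area_m2=1e-6, gap0_m=5e-6, k_n_per_m=2e4, eps_r=1.0):
    """Capacitance (F) of the element under a normal force (N)."""
    gap = gap0_m - force_n / k_n_per_m   # membrane deflects toward the counter-electrode
    return EPS0 * eps_r * area_m2 / gap

print(capacitance(0.0))    # ~1.8 pF unloaded
print(capacitance(0.05))   # larger under a 50 mN load
```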
Tactile textures have been recognized to have an important impact on the sensation of immersion in virtual reality systems, and they play a subtle role in telerobotic applications such as remote probing, feature recognition and quality assessment. Discerning and modeling them in the context of virtual reality systems and telerobotic applications has been of increasing interest to a number of researchers and has so far proven very challenging. This paper provides a brief review of recent progress in modeling and rendering tactile textures and presents preliminary results of ongoing research at Surrey that uses random-field models to characterize the approximately repeatable patterns commonly encountered in tactile textures. It is believed that such an approach can capture the underlying trend of patterns in tactile textures and give a more quantitative description of them, which will be valuable for achieving consistent feature extraction, registration and recognition in virtual reality systems and telerobotic applications. Interpretation of preliminary experimental results indicates that the approach is viable and results in the recognition of different tactile texture patterns.
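As a sketch of the random-field idea, one can synthesize a correlated height map (a Gaussian random field standing in for a measured tactile texture) and characterize it by its normalized autocorrelation at a few lags, usable as a feature vector for pattern recognition. All parameters are illustrative, not the Surrey experiments.

```python
import numpy as np

# Gaussian random field: low-pass filter white noise in the Fourier domain.
rng = np.random.default_rng(0)
spectrum = np.fft.fft2(rng.normal(size=(64, 64)))
fy, fx = np.meshgrid(np.fft.fftfreq(64), np.fft.fftfreq(64), indexing="ij")
texture = np.real(np.fft.ifft2(spectrum * np.exp(-(fx**2 + fy**2) / 0.02)))

def autocorr_features(img, lags=(1, 2, 4)):
    """Normalized autocorrelation at a few horizontal pixel lags."""
    img = img - img.mean()
    denom = (img ** 2).sum()
    return [float((img[:, :-l] * img[:, l:]).sum() / denom) for l in lags]

print(autocorr_features(texture))   # smooth texture: values near 1
```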
In augmented reality systems, the user looks at real objects, through a camera or a see-through HMD, and is provided with an enhanced view: a CAD model of the object, once properly registered, is superimposed on the direct view. Hidden internal parts can be shown, components can be highlighted, or procedures can be demonstrated. Real-time registration of a 3D model with its corresponding view in the image is thus a key feature of such systems. The goal of this paper is to explore the coupling of electromagnetic tracking and contour-based image processing to robustly track 3D objects in real time. The method keeps the robustness of electromagnetic tracking and adds precision using video information, without adding landmarks to the scene. First we explain how to determine the different coordinate systems, using contour information to refine the camera pose estimate. Second we investigate the use of real-time pose estimation techniques, based on point information or on line contours.
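A minimal sketch of the hybrid idea: take the electromagnetic tracker's pose as the initial guess and refine it from image measurements of known 3D model points, here via OpenCV's solvePnP with useExtrinsicGuess. Note the paper's own refinement is contour-based rather than point-based, and the correspondences, camera matrix and coordinates below are all illustrative assumptions.

```python
import cv2
import numpy as np

model_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]], np.float64)
image_pts = np.array([[320, 240], [399, 242], [322, 319], [321, 239]], np.float64)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

rvec0 = np.zeros((3, 1))                 # rotation guess from the magnetic tracker
tvec0 = np.array([[0.0], [0.0], [1.0]])  # translation guess from the magnetic tracker
ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None, rvec0, tvec0,
                              useExtrinsicGuess=True, flags=cv2.SOLVEPNP_ITERATIVE)
print(ok, rvec.ravel(), tvec.ravel())    # pose refined from video information
```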
A Polhemus-based telemanipulation system is being developed and tested in the Mechatronic Systems and Robotics Research group at the University of Surrey. The system provides natural, intuitive operator input and easy remote control of a robot manipulator. The technique developed here will be applied to a telepresence system developed in the research group for performing teleoperation in unstructured, remote and potentially hazardous environments. This paper provides an overview of the system architecture; experiments conducted to evaluate the telemanipulation system and their results are also presented.
In teleoperation technology, various techniques have been proposed to alleviate the effects of time-delayed communication and to avoid instability of the system. This paper describes a different approach to robotic teleoperation with large time delay: a teleoperation system based on the teleprogramming paradigm has been developed with the intent of improving slave autonomy and decreasing the amount of information exchanged between the master and slave systems. The goal concept, specific to AI, has been used. To minimize total task completion time, a prediction system called Merlino has been introduced, able to anticipate the slave's choices by taking into account both the operator's actions and information about the remote environment. When the environment changes, the prediction system determines whether the slave can still achieve the goal; otherwise, Merlino signals a 'fail situation.' Experiments have been carried out by means of an advanced human-machine interface with force feedback, designed at the PERCRO Laboratory of Scuola Superiore S. Anna, which gives a better sensation of presence in the remote environment.
At small undergraduate institutions, resources are scarce and the educational challenges are great. In robotics, the need for physical experimentation to reinforce and validate theoretical concepts is particularly strong, yet the requirements of maintaining a robotics laboratory can be onerous for teaching faculty. Experimental robotics often requires software sophistication well beyond what can be expected of undergraduate mechanical engineers, who are most often required only to write simple programs in manufacturer-supplied languages. This paper describes an effort to provide an undergraduate robotics research experience in the presence of these challenges. We have teamed undergraduate mechanical engineers at Wilkes University with undergraduate computer scientists at the University of Wisconsin-La Crosse in a collaborative experimental effort. The goal of this project is to remotely control a PUMA 760 robot located at Wilkes University from an operator station located at UW-La Crosse.
A new haptic interface device has been developed that uses Lorentz-force magnetic levitation for actuation. With this device, the user grasps a levitated rigid body to interact with the system. The moving part grasped by the user contains curved oval-wound coils and LEDs embedded in a hemispherical shell with a handle fixed at its center. The stationary base contains magnet assemblies facing the flotor coils and optical position sensors facing the flotor LEDs. The device is mounted in the top cover of a desk-side cabinet enclosure containing all the amplifiers, control hardware, processors, and power supplies needed for operation. A network connection provides communication with a workstation, allowing interaction with simulated 3D environments in real time. Ideally, a haptic interface device should reproduce the dynamics of the modeled or remote environment with such high fidelity that the user cannot distinguish interaction with the device from interaction with a real object in a real environment. In practice, this ideal can only be approached, with a fidelity that depends on the device's dynamic properties, such as position and force bandwidths, maximum forces and accelerations, position resolution, and realizable impedance range. The motion range of the moving part is approximately 25 mm and 15 - 20 degrees in all directions. A current of 0.75 A is required in three of the six coils to generate the vertical force to lift the 850 g levitated mass, dissipating only 13.5 W. Peak forces of over 50 N and torques of over 6 Nm are achievable with the present amplifiers without overheating the actuator coils. Other measured performance results include a stiffness range from 0.005 N/mm to 25.0 N/mm and a position control bandwidth of approximately 75 Hz.
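A quick back-of-the-envelope check of the quoted levitation figures: three coils each carrying 0.75 A support the 850 g flotor while dissipating 13.5 W in total. The per-coil resistance derived below follows from those numbers and is not stated in the paper.

```python
# Sanity-check the quoted levitation operating point.
g = 9.81
mass = 0.850                          # levitated mass (kg)
weight = mass * g                     # ~8.3 N of lift required
I, n_coils, P_total = 0.75, 3, 13.5   # quoted current, coil count, dissipation
R_coil = (P_total / n_coils) / I**2   # P = I^2 * R per coil -> 8 ohms implied
print(f"lift force ~ {weight:.1f} N, implied per-coil resistance ~ {R_coil:.1f} ohm")
```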
We are investigating a field of research that we call ubiquitous telepresence, which involves the design and implementation of low-cost robotic devices that can be programmed and operated from anywhere on the Internet. These devices, which we call ubots, can be used for academic purposes (e.g., a biologist could remotely conduct a population survey), commercial purposes (e.g., a real-estate agent could show a house remotely), and for recreation and education (e.g., someone could tour a museum remotely). We anticipate that such devices will become increasingly common due to recent changes in hardware and software technology. In particular, current hardware technology enables such devices to be constructed very cheaply (less than $500), and current software and network technology allows highly portable code to be written and downloaded across the Internet. In this paper, we present our prototype system architecture and the ubot implementation we have constructed based on it. The hardware we use is the Handy Board, a 6811-based controller board with digital and analog inputs and outputs. Our software includes a network layer based on TCP/IP and software layers written in Java, and enables users across the Internet to program the behavior of the vehicle and to receive image feedback from a camera mounted on it.
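A minimal sketch of the client side of such a TCP/IP network layer is shown below. The project's layers are written in Java; this Python sketch, the host name, and the one-line wire protocol are illustrative assumptions only.

```python
import socket

def send_command(host, port, command):
    """Send one text command to a ubot over TCP and return its one-line reply."""
    with socket.create_connection((host, port), timeout=5.0) as sock:
        sock.sendall((command + "\n").encode("ascii"))  # e.g. "FORWARD 20"
        return sock.makefile("rb").readline()           # robot's acknowledgement

# Hypothetical usage:
# reply = send_command("ubot.example.edu", 9000, "FORWARD 20")
```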
Many current web-based telerobotic interfaces use HyperText Markup Language (HTML) forms to assert user control over a robot. While acceptable for some tasks, a Java interface can provide better client-server interaction. The Puma Paint project is a joint effort between the Department of Computing Sciences at Villanova University and the Department of Mechanical and Materials Engineering at Wilkes University. The project uses a Java applet to control a Unimation PUMA 760 robot in the task of painting on a canvas. The interface allows the user to control the paint strokes as well as the pressure of the brush on the canvas and how deeply the brush is dipped into a paint jar. To provide immediate feedback, a virtual canvas models the effects of the controls as the artist paints. Live color video feedback is also provided, allowing the user to view the actual results of the robot's motions. Unlike the step-at-a-time model of many web forms, the application permits the user to assert interactive control. The greater the complexity of the interaction between the robot and its environment, the greater the need for high-quality information presentation to the user; the use of Java allows the sophistication of the user interface to be raised to the level required for satisfactory control. This paper describes the Puma Paint project, including its interface and communications model, and examines the challenges of using the Internet as the medium of communication and of encoding free-ranging motions for transmission from the client to the robot.
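One way to encode a free-ranging stroke for transmission is to serialize it as a list of canvas points plus brush pressure and dip depth. The field names and packing format below are assumptions for illustration, not the project's actual protocol.

```python
import struct

def encode_stroke(points, pressure, dip_depth):
    """points: [(x, y), ...] in canvas pixels; pressure, dip_depth in [0, 1]."""
    header = struct.pack("!ffH", pressure, dip_depth, len(points))  # network byte order
    body = b"".join(struct.pack("!HH", x, y) for x, y in points)    # 4 bytes per point
    return header + body

msg = encode_stroke([(10, 10), (15, 12), (20, 15)], pressure=0.6, dip_depth=0.3)
print(len(msg), "bytes")  # 22 bytes for a three-point stroke
```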
The helmet-mounted display (HMD), often used in non-vehicle-based virtual environments (VEs), can be configured as either a stereoscopic or a bi-ocular display. As a stereoscopic display, the computer modeling the VE calculates a different view for each eye, based on the views each eye normally receives due to their separation in the head. Alternatively, the same view can be presented to each eye, resulting in a bi-ocular display. The normally linked processes of accommodation and vergence must be decoupled when viewing through an HMD, and this way of perceiving may lead to physiological problems. For example, a common problem with VEs is simulator sickness. Its symptoms are similar to those experienced in motion sickness and include eyestrain, disorientation, and nausea. A study was conducted in which relative differences in simulator sickness and performance were examined for walking, tracking, distance estimation, and micromanipulation tasks. Using the self-report simulator sickness questionnaire (SSQ), the data revealed that the stereoscopic condition was more nauseogenic. In addition, post-experimental disorientation, oculomotor discomfort and total severity measures correlated significantly with completion time on the task that required more near-far focal transitions within a short period of time than any other task.
This paper discusses a new testbed developed at the Stanford Aerospace Robotics Laboratory (ARL) to address some of the key issues associated with semi-autonomous construction in a hazardous environment such as space. The testbed consists of a large two-link manipulator carrying two smaller two-link arms. This macro/mini combination was developed to be representative of actual space manipulators, such as the SSRMS/SPDM planned for the Space Station. The testbed will allow us to investigate several key issues associated with space construction, including teleoperation versus supervised autonomy, dexterous control of a robot with flexibility, and construction with multiple robots. A supervised autonomy approach has several advantages over the traditional teleoperation mode, including operation under time delay, smart control of a redundant manipulator, and improved contact control. To mimic the dynamics found in space manipulators, the main arm was designed to include joint flexibility. The arm operates in 2D, with the end-point floating on an air bearing; this setup allows cooperation with existing free-flying robots in the ARL. The paper reports the first experiments with the arm, which explore the advantages of moving from teleoperation, or human-in-the-loop control, to human supervisory, or task-level, control. A simple task, capturing a satellite-like object floating on the table, is attempted first with the human directly driving the end-point and second with the human directing the robot at the task level. Initial experimental results of the two control approaches are presented and compared.
This paper updates the status of the Ranger Telerobotic Shuttle Experiment (TSX). Originally planned as the Ranger Telerobotic Flight Experiment (TFX), a free-flying spacecraft-servicing telerobot to be launched on an expendable launch vehicle, the Ranger mission has been recast as a Space Shuttle-based flight experiment to demonstrate key telerobotic technologies associated with servicing the International Space Station (ISS) and other orbital assets. Several modifications have been made to the flight system configuration and the operational concept, but the primary payload package -- namely the four manipulator arms and their supporting structure -- has remained relatively unchanged. This paper addresses these technical and operational modifications and lays out plans for the future of the flight experiment.