This paper updates the status of the Ranger Telerobotic Shuttle Experiment. The first Ranger mission is a Space Shuttle-based flight experiment to demonstrate key telerobotic technologies for servicing assets in Earth orbit. The flight system will be teleoperated from on board the Space Shuttle and from a ground control station at the NASA Johnson Space Center. The robot, along with supporting equipment and task elements, will be located in the Shuttle payload bay. A number of relevant servicing operations will be performed, including extravehicular activity worksite setup, orbital replaceable unit exchange, and other dexterous tasks. The program is underway toward an anticipated launch date in CY2000, and the hardware and software for the flight article and a neutral buoyancy functional equivalent are transitioning from design to manufacture. This paper addresses the technical and programmatic status of the flight experiment and lays out plans for the future.
Antenna assembly by space robots is an effective method for producing large antennas in space, and it offers several advantages over inflatable and deployable antennas. The Communications Research Laboratory has developed such assembled antennas as a means of producing large space antennas, and the first on-board experiments were performed on Engineering Test Satellite VII (ETS-VII), which was launched in 1997. In this paper, we outline the antenna-assembly experiments on ETS-VII and present initial experimental results.
Excavation tasks present one of the more challenging areas in robotics research: the environment is highly unstructured, the forces that appear are very large, the detection of underground obstacles is critical to avoid damage, and the hydraulic actuators are highly nonlinear and difficult to model. In recent years, remote control of excavation has found applications in environments that are very dangerous for humans, such as nuclear power plants, nuclear and chemical waste facilities, and terrestrial and extraterrestrial mining. Some degree of intelligence is required to handle unexpected situations. The first approach to the problem is to put a human being in the loop, that is, teleoperation. The next step toward full automation of excavation is supervisory control of the task. In this scheme, the operator acts as a supervisor, providing high-level commands and checking the development and accomplishment of the task. This paper presents the solutions that DISAM has developed, as well as future work toward the full automation of excavation tasks.
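A minimal sketch of the supervisory scheme described above, assuming a hypothetical Excavator interface (start/done/status/abort) and an illustrative command set; the abstract does not specify DISAM's actual commands:

```python
# Minimal sketch of a supervisory control loop for remote excavation.
# The command names and the Excavator interface are illustrative.

HIGH_LEVEL_COMMANDS = {"DIG", "DUMP", "MOVE_TO", "STOP"}

def supervise(excavator, command, **params):
    """Dispatch an operator's high-level command and monitor its progress."""
    if command not in HIGH_LEVEL_COMMANDS:
        raise ValueError(f"unknown command: {command}")
    excavator.start(command, **params)
    while not excavator.done():
        status = excavator.status()            # e.g. bucket force, pose
        if status.force > status.force_limit:  # possible buried obstacle
            excavator.abort()                  # hand control back to operator
            return "OBSTACLE_SUSPECTED"
    return "DONE"
```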
Heavy lift assist devices are an important part of manufacturing facilities that handle large, heavy, or bulky material. Many available devices provide lift but not motive force augmentation; the physical strength of the operator is used to move and position the work piece. Due to the large inertia of the work piece, inertial contributions from the lift device itself, and misuse of the assist manipulator, injuries may still occur. This research presents an approach that provides reduced-authority actuation to the motive joints of the lift device, augmenting the human motion forces, providing a means of correcting injurious ergonomic interactions, and allowing high-rate energy dissipation for payload trajectory control and emergency situations. The approach provides low-torque input controlled by operator hand motions. These hand motions move the payload under a centralized trajectory generation scheme that uses modulated braking commands to impose motion constraints, such as object avoidance, resonance attenuation, and ergonomic trajectory enhancement. The system is implemented in a virtual reality robot simulator that allows investigation of dynamic characteristics prior to the prototype stage.
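The modulated-braking idea can be illustrated with a short sketch. The gains, torque limit, and constraint test below are assumptions for illustration, not the paper's values:

```python
import numpy as np

# One motive joint of a lift-assist device: the operator's hand motion sets
# a desired velocity, a low-torque motor assists it, and a controllable
# brake dissipates energy whenever a motion constraint becomes active.

def joint_command(v_desired, v_actual, constraint_active,
                  k_assist=5.0, k_brake=50.0, tau_max=2.0):
    """Return (assist_torque, brake_torque) for one joint."""
    assist = np.clip(k_assist * (v_desired - v_actual), -tau_max, tau_max)
    brake = 0.0
    if constraint_active:                  # e.g. near an obstacle
        brake = k_brake * abs(v_actual)    # brake only opposes motion
        assist = 0.0                       # never drive into the constraint
    return assist, brake
```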
Inadequacies in sensor and image processing technology have limited the capabilities of autonomous robots in complex environments. However, the tedium and fatigue commonly encountered in manual teleoperation have prompted research into sensor- and computer-assisted teleoperation. A previously introduced method of sensor-assisted teleoperation, consisting of automatic selection of a variable velocity mapping applied to the execution of a Fitts task, is described, and the theoretically predicted increase in performance is compared with the increase actually observed. The experimental results are used to support a method of choosing mapping parameters that provides optimal assistance when there are known inaccuracies in the available sensor data, according to the initially described concept.
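As an illustration of a variable velocity mapping for a Fitts-type task, the sketch below shrinks the master-to-slave velocity gain as the sensed distance to the target decreases, discounted by the known sensor error; the gain bounds and reference distance are assumptions, while the Fitts movement-time prediction MT = a + b log2(2D/W) is standard:

```python
import math

def velocity_gain(d_target, d_err, g_min=0.2, g_max=2.0, d_ref=0.5):
    """Coarse speed far from the target, fine control near it."""
    d_safe = max(d_target - d_err, 0.0)  # discount known sensor inaccuracy
    return g_min + (g_max - g_min) * min(d_safe / d_ref, 1.0)

def fitts_time(a, b, distance, width):
    """Fitts' law movement-time prediction: MT = a + b*log2(2D/W)."""
    return a + b * math.log2(2.0 * distance / width)
```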
Image-directed radiation therapy potentially offers significant improvement over current open-loop radiotherapy techniques. Using real-time imaging of tumors, it may be possible to direct a treatment beam to achieve better localization of radiation dose. Since real-time imaging offers relatively poor fidelity, automated analysis of the images is formidable. However, experienced physicians can take advantage of visual cues and knowledge of how cancer spreads to infer the location of tumors in partially occluded or otherwise ambiguous scenes. At the Cleveland Clinic, an image-directed radiation treatment system, consisting of a relatively compact linear accelerator manipulated by a 6-degree-of-freedom robot, is in use for treatment of brain tumors. This same system could be applied to teleoperated radiation treatment of non-stationary tumors. To evaluate the prospects for operator-interactive, image-directed therapy, a simulator was constructed to determine the effectiveness of emulated human-in-the-loop treatments. Early performance results based on video recordings of actual lung tumors show that image-directed treatment can offer significant improvements over current practice, motivating development of teleoperated treatment systems.
An overview of the design and application of a unique mobile hybrid telepresence and virtual reality system is first provided, followed by a description of each of the integrated sub-systems. These include the telepresence and teleoperation sub-system, comprising display, control, and communication elements together with camera platforms and a mobile vehicle; a virtual reality module capable of modeling a 4D civil engineering environment, in this case a construction site; and the image compression and decompression techniques that allow video from the remote site to be transmitted across a very low bandwidth mobile phone network. The mobile telepresence system can be located on a real-world construction site to observe work in progress. This video information can be accessed by a user from any remote location and compared with the VR model of planned progress. The user can then guide the vehicle and camera system to any desired viewpoint. Illustrations of the first trials of the full system, comments on problems experienced, and suggestions for further work are provided.
This paper describes ongoing research on a telerobotic system developed in the Mechatronic Systems and Robotics Research group at the University of Surrey. Because a human operator's manual control of remote robots suffers from reduced performance and difficulty perceiving information from the remote site, a system with a certain level of intelligence and autonomy can help solve some of these problems, and this system has been developed for that purpose. It also serves as an experimental platform for testing the idea of combining human and computer intelligence in teleoperation and for finding the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system, and a graphical user interface that connects the operator with the remote robot. This paper gives a description of the system as well as preliminary experimental results from its evaluation.
This paper deals with the use of mixed reality as a new assistance and training tool for performing teleoperation tasks in hostile environments. It describes the virtual reality techniques involved and tackles the problem of scene registration using a man-machine cooperative and multisensory vision system. During a maintenance operation, a telerobotic task requires exact knowledge of the remote scene in which the robot operates. The system therefore provides the operator with powerful sensory feedback as well as appropriate tools to build and automatically update the geometric model of the perceived scene. This local model is the world over which the robot works; it also serves for mission training or planning and permits observation from any viewpoint. We describe a new interactive approach combining image analysis and mixed reality techniques for assisted 3D geometric and semantic modeling. We also tackle the problem of pose recovery and object tracking using a stereoscopic system mounted on a robot arm. The proposed model-based approach can be used for both real-time tracking and accurate static fitting of complex parametric curved objects. It therefore constitutes a unified tool for building and maintaining the local geometric model of the remote environment.
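Model-based static fitting of the kind described can be illustrated as minimization of reprojection error. The linearized rotation, pinhole projection, and use of scipy's generic least-squares solver below are simplifying assumptions, not the authors' formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def project(points, pose, f=800.0):
    """Rotate (small angles), translate, and project 3D model points."""
    wx, wy, wz, tx, ty, tz = pose
    R = np.array([[1, -wz, wy], [wz, 1, -wx], [-wy, wx, 1]])  # linearized
    p = points @ R.T + np.array([tx, ty, tz])
    return f * p[:, :2] / p[:, 2:3]

def fit_pose(model_pts, image_pts, pose0):
    """pose0 must place the model in front of the camera (tz > 0)."""
    res = lambda pose: (project(model_pts, pose) - image_pts).ravel()
    return least_squares(res, pose0).x
```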
The VIDET project aims at the design of a tactile vision substitution system for the visually impaired. An important enhancement would be color rendering. Here we examine the possibility of conveying color information to the fingers by means of vibrations. Experiments on sighted and blind subjects confirm the validity of the idea.
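One plausible realization of the idea, offered purely as a sketch: hue is mapped to vibration frequency and lightness to amplitude. The 50-300 Hz band (within fingertip vibrotactile sensitivity) and the specific mapping are assumptions; the paper's actual encoding may differ:

```python
import colorsys

def color_to_vibration(r, g, b, f_lo=50.0, f_hi=300.0):
    """Map an RGB color (0-255 per channel) to (frequency, amplitude)."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    frequency = f_lo + h * (f_hi - f_lo)  # hue -> vibration frequency
    amplitude = 1.0 - l                   # darker color -> stronger vibration
    return frequency, amplitude
```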
The University of Surrey is engaged in developing augmented reality systems and teleoperation techniques for enhanced visual analysis and task performance in hostile environments. One particular current project in the UK is addressing the development of stereo vision systems, augmented reality, image processing techniques and specialist robotic vehicles which may be used for the future examination and maintenance of underground sewage pipes. This paper describes the components of the stereo vision and augmented reality system and illustrates some preliminary results of the use of the stereo robotic system mounted on a mobile laboratory vehicle and calibrated using a pin-hole camera model.
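For reference, the pin-hole camera model used in the calibration maps a world point to pixel coordinates through an intrinsic matrix K and extrinsics [R | t]; the numeric values in this sketch are placeholders:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # fx, skew, cx
              [0.0, 800.0, 240.0],   # fy, cy
              [0.0, 0.0, 1.0]])

def pinhole_project(X_world, R, t):
    """Project a 3D world point to pixel coordinates."""
    x_cam = R @ X_world + t           # world -> camera frame
    u, v, w = K @ x_cam               # camera -> homogeneous pixels
    return np.array([u / w, v / w])   # perspective division
```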
In traditional vision-based Augmented Reality tracking systems, artificially designed fiducials have been used as camera tracking primitives. The 3D positions of these fiducials must be pre-calibrated, which limits the range of tracking views. Fortunately, advances in computer vision combined with new point-position estimation techniques enable natural features to be detected, and calibrated, for use as camera tracking primitives. This paper describes how these technologies are used to track in an unprepared environment for Augmented Reality.
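A minimal sketch of natural-feature tracking in this spirit, using standard corner detection and pyramidal optical flow; the parameter values are placeholders, and the paper's detection and calibration methods may differ:

```python
import cv2

def track_features(prev_gray, next_gray):
    """Detect corners in an unprepared scene and track them to the next
    frame; matched pairs can later be triangulated and used as tracking
    primitives."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8)
    moved, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                corners, None)
    ok = status.ravel() == 1          # keep only successfully tracked points
    return corners[ok], moved[ok]
```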
Assembly operations require high speed and precision at low cost. The manufacturing industry has recently turned its attention to the possibility of investigating assembly procedures using graphical displays of CAD parts. For these tasks, some form of feedback to the person is invaluable in providing a real sense of interaction with virtual parts. This research develops the use of a commercial assembly robot as the haptic display in such tasks. For demonstration, a peg-hole insertion task is studied. Kane's method is employed to derive the dynamics of the peg and the contact motions between the peg and the hole. A handle modeled as a cylindrical peg is attached to the end effector of a PUMA 560 robotic arm, which is equipped with a six-axis force/torque transducer. The user grabs the handle, and the user-applied forces are recorded. A 300 MHz Pentium computer is used to simulate the dynamics of the virtual peg and its interactions as it is inserted into the virtual hole. Computed torque control is then employed to convey the full dynamics of the task to the user's hand. Visual feedback is also incorporated to help the user in the process of inserting the peg into the hole. Experimental results are presented showing several contact configurations for this virtually simulated task.
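The computed-torque step can be sketched as follows, where M, C, and g denote the arm's inertia matrix, Coriolis/centrifugal terms, and gravity vector; the gain values are illustrative:

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g,
                    Kp=np.diag([100.0] * 6), Kv=np.diag([20.0] * 6)):
    """Standard computed-torque law: the arm tracks the simulated peg
    trajectory, so the user's hand feels the simulated task dynamics.
    M, C, g are callables returning the arm's dynamic terms."""
    e, ed = q_des - q, qd_des - qd
    return M(q) @ (qdd_des + Kv @ ed + Kp @ e) + C(q, qd) + g(q)
```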
In teleoperation, automatic identification of remote environment properties such as object weight, size, and friction can assist the teleoperator in determining optimal manipulation strategies. Similarly, virtual training systems can be calibrated using such an automatic identification procedure. For those properties that can be described by parameterized constraint equations, this paper provides a method by which the active constraints can be determined during each portion of the remote manipulator's data stream. The parameterized properties can then be estimated from the appropriate data stream segments. The approach is validated for peg-in-hole insertion using a desktop teleoperator system.
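The estimation step can be sketched for one property: once a data-stream segment is labeled with an active constraint, the parameter is a linear least-squares fit over that segment. The Coulomb friction model below is an illustrative choice, not necessarily one of the paper's parameterizations:

```python
import numpy as np

def estimate_friction(f_normal, f_tangential):
    """Fit mu in f_t = mu * f_n over one constraint-active segment
    (arrays of sensed normal and tangential contact forces)."""
    A = f_normal.reshape(-1, 1)
    mu, *_ = np.linalg.lstsq(A, f_tangential, rcond=None)
    return float(mu[0])
```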
Research in the field of scientific visualization has begun to articulate systematic methods for mapping data to a perceivable form. Most work in this area has focused on the visual sense. For blind people in particular, systematic visualization methods that utilize other senses need further development. In this work we develop methods for adding aural feedback to an existing haptic force-feedback interface to create a multimodal visualization system. We examine some fundamental components of a visualization system: characterization of the data, definition of user directives and interpretation aims, cataloging of sensory representations of information, and finally, matching the data and user interpretation aims with the appropriate sensory representations. We focus on the last two components as they relate to the aural and haptic senses. The cataloging of sensory representations is drawn from current research in sonification and haptics. The matching procedure can be thought of as a type of encoding that should be the inverse of the decoding mechanism of our aural and haptic systems. Topics in human perception are discussed, and issues related to the interaction between the two senses are addressed. We have implemented a visualization system in the form of a Windows NT application using a sound card and a 3-DOF point-interaction haptic interface. The system presents a 2D data set primarily as a polygonal haptic surface, although other capabilities of the haptic sense, such as texture discernment, are also utilized. In addition, real-time aural feedback is presented as the user explores the surface; parameters of sound such as pitch and spectral content are used to convey information. Evaluation of the system's effectiveness as an assistive technology for the blind reveals that combining aural and haptic feedback improves visualization over using either of the two senses alone.
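As a sketch of the pitch mapping, the data value at the probe position can be placed on a logarithmic frequency scale, since pitch perception is roughly logarithmic; the two-octave band starting at 220 Hz is an assumption:

```python
import numpy as np

def value_to_pitch(value, v_min, v_max, f_lo=220.0, octaves=2.0):
    """Map a data value to a pitch frequency on a log scale."""
    x = np.clip((value - v_min) / (v_max - v_min), 0.0, 1.0)
    return f_lo * 2.0 ** (octaves * x)   # e.g. 220 Hz up to 880 Hz
```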
Information concerning the remote environment is important for the completion of a teleoperation task. To achieve high performance, it is desirable for this information to be presented to the operator so that an adaptive control strategy can be applied. This paper reports on a study using a haptic interface to relay the contact force of a telerobot to an operator performing remote arbitrary path-following operations. The telerobotic system contains a haptic user interface, a PUMA 560 robot, and a force sensing device. The path-following operation was directly controlled by the operator, who could feel the contact force and direct the robot's movement through the haptic interface. The paper presents the configuration of the system used in the study and the results of a set of experiments designed to evaluate the effectiveness of the haptic feedback in improving performance of the given path-following task.
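A minimal force-reflection loop of the kind described, with a fully hypothetical device API and illustrative scale factors:

```python
def reflect(haptic, robot, k_force=0.3, k_motion=1.0):
    """Contact forces measured at the robot's wrist are scaled and
    displayed on the haptic interface, while the operator's hand
    position commands the robot along the path."""
    while robot.tracking():
        f_contact = robot.read_wrist_force()       # from the force sensor
        haptic.display_force(k_force * f_contact)  # operator feels contact
        robot.command_position(k_motion * haptic.read_position())
```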
This paper proposes a system that helps an operator control a manipulator using a direct teleguidance method, which allows the operator to grasp with a data glove and teach the real manipulator to assemble and disassemble mechanical parts. Direct teleguidance is realized by calculating joint angles from the position and orientation of the end effector, which are specified with the data glove. The task environment is captured by two pan-tilt TV cameras that are controlled according to the operator's head movement, and the operator sees a stereoscopic image of it through an HMD. When the operator judges that the end effector has approached an object to be grasped, or that a grasped part has come close to a target object, he need only use a haptic device to continue the operation. The device transmits to the operator the force and torque values registered by the force/torque sensor attached to the end effector. This information, together with the visual information, allows the operator to intuitively recognize the state of the end effector and makes it possible to control it precisely.
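The step "calculating joint angles from the position and orientation of the end effector" is an inverse-kinematics solve. As a minimal illustration (not the paper's manipulator), the closed-form solution for a 2-link planar arm with link lengths l1 and l2:

```python
import math

def ik_2link(x, y, l1, l2):
    """Joint angles (q1, q2) placing a 2-link planar arm's tip at (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)                              # elbow-down branch
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2
```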
A number of methods are available for the visualization of scientific data. Most of these methods use computer graphics for the visual representation of the data; such visual methods cannot be used by a blind person. Haptic interface technology makes it possible for a user to explore haptically rendered data, so a haptic interface can be used to effectively present data to a blind person. However, large and complex datasets, if rendered without processing, are often confusing to the user. Additionally, haptic devices often offer only point interaction, so the amount of information conveyed through the device is far less than that obtained through a visual display, making exploration difficult. Multiresolution methods provide a solution to the problems that arise from the low information capacity of these devices: the user can feel the data at low resolution and then add detail by increasing the resolution. These techniques are particularly useful for the visually impaired because complex local details of the data often prevent the user from obtaining an overall view of the haptic plot. Wavelets are a common technique for generating multiresolution data. However, the wavelet decomposition uses linear filters, which smooth edges. Since nonlinear filters are known to preserve edges, we have used an affine median filter in a structure similar to that used for the evaluation of wavelet coefficients. The affine median filter is a hybrid filter: its characteristics can be varied from the nonlinear median filter to a linear filter. Thus a flexible multiresolution technique with controllable characteristics is proposed. The technique is used to haptically render 2D evenly sampled data at different resolutions. The standard wavelet multiresolution technique is also applied to the same data sets and compared with the hybrid technique. The advantage of the hybrid method is that, within the same multiresolution structure, one can move from a linear wavelet decomposition to a completely nonlinear multiresolution decomposition, and the edge-smoothing artifacts are smaller for the nonlinear techniques.
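A hedged sketch of the hybrid decomposition, taking the affine median filter (by assumption, since the paper does not define it here) as a convex blend of a median and a linear average, so that alpha sweeps the filter from fully nonlinear (alpha = 1) to fully linear (alpha = 0):

```python
import numpy as np

def affine_median(signal, alpha=0.5, k=5):
    """Blend of a k-sample median (nonlinear, edge-preserving) and a
    k-sample mean (linear, smoothing) over a sliding window."""
    pad = k // 2
    s = np.pad(signal, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(s, k)
    return alpha * np.median(win, axis=1) + (1 - alpha) * win.mean(axis=1)

def decompose(signal, alpha, levels=3):
    """Coarse-to-fine approximations, as in a wavelet pyramid."""
    pyramid = [signal]
    for _ in range(levels):
        pyramid.append(affine_median(pyramid[-1], alpha)[::2])  # filter + decimate
    return pyramid
```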
A new type of touch sensor for detecting contact pressure at the human fingertips is presented. Fingernails are instrumented with miniature LEDs and photodetectors in order to measure changes in nail color when the fingers are pressed against a surface. Because it is mounted on the nail rather than the finger pad, this new sensor allows the fingers to directly contact the environment without obstructing the human's natural haptic sense. Photo-reflective plethysmography is used to measure the nail color. A prototype fingernail sensor is constructed and used to create a fingertip-free electronic glove. Using these new touch sensors, a novel human-machine interface, termed the 'virtual switch', is developed and applied to robot programming. The virtual switch detects the human's intention of pressing a switch by measuring the finger touch signal and the hand location. Instead of embedding a physical switch in a wall or panel, the virtual switch requires merely an image of a switch posted on the surface, and hence can be placed on any surface where one wants to place switches.
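The virtual-switch decision can be sketched as a conjunction of two tests; the threshold and rectangular region below are assumptions:

```python
def virtual_switch_pressed(touch_signal, hand_xy, region,
                           touch_threshold=0.6):
    """A press is registered only when the fingernail sensor reports
    touch AND the tracked hand lies inside the posted switch's region."""
    (x0, y0, x1, y1) = region                  # switch image on the surface
    touching = touch_signal > touch_threshold  # nail-color change => pressure
    inside = x0 <= hand_xy[0] <= x1 and y0 <= hand_xy[1] <= y1
    return touching and inside
```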
At small, undergraduate institutions, resources are scarce and the educational challenges are great. In the area of robotics, the need for physical experimentation to reinforce and validate theoretical concepts is particularly strong, yet the requirements of maintaining a robotics laboratory can be onerous for teaching faculty. Experimental robotics often requires software sophistication well beyond what can be expected of undergraduate mechanical engineers, who are most often only required to write simple programs in manufacturer-supplied languages. This paper is the third in a series describing an effort to provide an undergraduate robotics research experience in the presence of these challenges. For the last three years we have teamed undergraduate mechanical engineers at Wilkes University with undergraduate computer scientists at the University of Wisconsin - La Crosse in a collaborative experimental effort. The goal of this project is to remotely control a PUMA 760 robot located at Wilkes University from an operator station located at UW-La Crosse.
The real challenge in developing Internet-based robotic systems is to devise a planning and control method that can cope with random communication time delay and is independent of the human operator. The technical difficulties include the lack of a realistic mathematical model of Internet time delay that could be used to design a controller; probabilistic analysis based on a stochastic model is usually not acceptable for applications such as telemedicine. It is even more difficult to study the common nature of the role of human operators beyond individual mechanical characteristics. This paper explores a new method for planning and control of Internet-based telerobotic systems. The significance of the method is that it effectively deals with the random time delay of the Internet; in addition, the system's stability and dynamic performance are independent of the human operator. First, a novel non-time-referenced action control scheme is introduced: instead of using time as the action reference, a new sensor-based action reference is developed. As a result, the communication delay has little impact on operations, in particular on the stability of the system; furthermore, stability does not depend on a model of the human operator in the system. These results lead to a theoretical foundation for the stability of Internet-based telerobotic systems. The implementations and experimental outcomes presented herein verify the theoretical results.
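The non-time action reference can be sketched as follows: the desired trajectory is indexed by a monotone progress variable s computed from sensed robot state rather than by clock time, so random delay stalls progress instead of destabilizing the loop. The path representation and progress measure are illustrative assumptions:

```python
import numpy as np

def progress(q_actual, path):
    """s = index of the path point closest to the sensed robot state."""
    d = np.linalg.norm(path - q_actual, axis=1)
    return int(np.argmin(d))

def next_reference(q_actual, path, step=1):
    """Advance the reference only as the robot actually moves, so a
    delayed command simply holds the current setpoint."""
    s = progress(q_actual, path)
    return path[min(s + step, len(path) - 1)]
```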
As part of a joint project with the Fisher Art Gallery at USC, we have constructed a teleoperated robotic Web site that allows remote positioning and binocular viewing of statues and other non-planar art objects. This system has been designed to provide interactive remote access to 3D art objects in real time, so that anyone with a Web connection and a head-mounted display can view and study binocular images of art objects anywhere in the world. A pair of video cameras, carried by a robot arm, is aimed at the statue, which rests on a rotary table. The combination of table rotation and robotic camera positioning makes it possible to observe the work of art from any desired position and orientation. The opening exhibit of the USC Digital Museum features a life-size marble statue called the 'Drinking Maiden', by the German sculptor Ernst Gustav Alexander Wenck. We use a 6-degree-of-freedom robot arm and a linked vergence head to position two CCD cameras. The statue is placed on a rotating platform that can be commanded to one of 12 positions. The robot is controlled via a graphical, user-friendly interface written in Java, which allows the user to position the cameras anywhere in the allowed workspace of the robot. Once the positions of the cameras are established, the system takes two pictures of the statue and returns them to the user, while simultaneously composing a stereo image suitable for viewing with an HMD. The paper describes the hardware and software architecture of the system and its major features.
The PumaPaint project is a Web robot that allows users to create original artwork on the WWW. The site allows control of a PUMA 760 robot equipped with four paintbrushes; jars of red, green, blue, and yellow paint; and white paper attached to a vertical easel. Users must download a Java interface allowing interactive control of the robot. This interface contains two windows showing live camera views of the work site and various controls for connecting to and disconnecting from the robot, viewing the task status, and controlling the painting task. Approximately fifteen hundred unique hosts downloaded the interface in the first four months of twenty-four-hour-a-day operation beginning June 3, 1998. This paper describes the background of the PumaPaint project, presents hardware and software details, and discusses the authors' experiences in managing the site over the first four months of operation.
This paper introduces an Augmented Reality interface that can be used for supervisory control of a robot over the Internet. The operator's client interface requires only a modest computer to run the Java control applet. Operators can completely specify the remote manipulator's path using an Augmented Reality stick cursor superimposed on multiple monoscopic images of the workspace. The same cursor is used to identify objects in the workspace, allowing interactive modeling of the workspace environment: operators place predefined wireframe models of objects on an image of the workspace and use the cursor to align them with the corresponding objects in the image.
The advent of the WWW presents an opportunity for a wide audience to make use of telemanipulation. Unfortunately, current efforts in Web-based telemanipulation are primarily undertaken by individual groups and lead to a plethora of specific solutions instead of a general, reusable framework. Because of this, any group seeking to enter this field must spend a large amount of time building the interfaces necessary for remote monitoring and manipulation. Our proposed solution to this problem is to build a general framework of Java-based components that allows researchers to focus on their particular applications instead of building the infrastructure for Web interaction themselves. These components only require researchers to build an interface to our framework, instead of implementing a complete end-to-end solution. The framework is designed to enable the manipulation of simple to medium-complexity devices via the WWW. Example application domains include small robotic vehicles and robotic arms.
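The paper's framework is Java-based; purely to illustrate the shape of such a component contract, here is an analogous sketch (in Python, for consistency with the other sketches in this section). The method names are invented for illustration:

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """A device author implements this once and inherits the framework's
    Web monitoring and manipulation plumbing."""

    @abstractmethod
    def commands(self) -> list[str]:
        """Names of the commands this device accepts."""

    @abstractmethod
    def execute(self, command: str, **params) -> str:
        """Run one command and return a status string."""

    @abstractmethod
    def state(self) -> dict:
        """Snapshot published to remote monitoring clients."""
```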
A workcell environment consisting of a robot and an image acquisition system is set up for control over the Internet. The first phase of the project explores supervisory control of the workcell through CGI, a standard for external gateway programs to interface with HTTP servers. Robot motion consists of movement to preprogrammed points and limited fine motion in 3 axes. Supervisory control is also extended to programming a series of motions for the robot to follow. The current phase of the project abandons CGI and the batch-oriented regime of workcell control: a protocol is developed for the end-user interface program to communicate with a workcell server directly. Commands are sent to the server, which in turn directs the robot to perform the associated motion. The workcell server also provides information for end-user interface programs to generate a 3D model of the robot setup, and a separate HTTP-based video feedback system dispatches images to workcell supervisors. The work here provides a basis on which to build a more general protocol for Internet control not only of robots but also of other devices. A component approach to building the workcell allows for hardware independence and ease of integration of other feedback elements.
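A sketch of the direct, non-CGI phase: the interface program opens a socket to the workcell server and exchanges line-oriented commands. The port and command vocabulary are illustrative, not the paper's:

```python
import socket

def send_command(host, command, port=5000):
    """Send one line-oriented command to the workcell server and return
    its reply, e.g. "OK" or "ERR ..."."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall((command + "\n").encode())
        return sock.recv(1024).decode().strip()

# e.g. send_command("workcell.example.edu", "MOVE 10.0 -5.0 2.5")
```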