A framework for object recognition via combinations of nonrigid deformable appearance models is described. An object category is represented as a combination of deformed prototypical images. An object in an image can be represented in terms of its geometry (shape) and its texture (visual appearance). We employ finite-element-based methods to represent the shape deformations more reliably, and we automatically register the object images by warping them onto the underlying finite element mesh for each prototype shape. Vectors of objects from the same class (such as faces) can be thought of as defining an object subspace. Assuming that we have enough prototype images to encompass the major variations within the class, we can span the complete object subspace. Thereafter, by virtue of this subspace assumption, we can express any novel object from the same class as a combination of the prototype vectors. We present experimental results to evaluate this strategy and, finally, explore the usefulness of the combination parameters for analysis, recognition, and low-dimensional object encoding.
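The core computation this implies is small: stack the registered prototype vectors as the columns of a matrix and solve a least-squares problem for the combination coefficients of a novel object. A minimal sketch follows, assuming the vectors have already been warped onto the common finite element mesh; the matrix shapes and data are illustrative placeholders, not the paper's.

```python
# Minimal sketch: approximate a novel object vector x as a linear
# combination of prototype vectors (the columns of P).
import numpy as np

def combination_coefficients(P, x):
    """Least-squares coefficients c such that P @ c ~= x.

    P : (d, k) matrix whose columns are k prototype vectors.
    x : (d,)   novel object vector from the same class.
    """
    c, _residual, _rank, _sv = np.linalg.lstsq(P, x, rcond=None)
    return c

# Example: 3 prototypes spanning a toy 5-dimensional "object subspace".
P = np.random.default_rng(0).normal(size=(5, 3))
x = P @ np.array([0.5, 0.3, 0.2])     # a novel object inside the subspace
c = combination_coefficients(P, x)    # recovers the combination parameters
print(c)                              # -> approximately [0.5, 0.3, 0.2]
```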
Object tracking consists of reconstructing the configuration of an articulated body from a sequence of images provided by one or more cameras. In this paper we present a general method for pose estimation based on evidential reasoning. The proposed framework integrates different levels of description of the object to improve robustness and precision, overcoming the limitations of approaches that use single-feature representations. Several image descriptions extracted from a single-camera view are fused using the Dempster-Shafer 'theory of evidence'. Feature data are expressed as belief functions over the set of their possible values. No a priori assumptions about the model of the object are needed. Learned refinement maps between the feature spaces and the parameter space Q describing the configuration of the object characterize the relationships among distinct representations of the pose and play the role of the model. During training the object follows a sample trajectory in Q. Each feature space is reduced to a discrete frame of discernment (FOD), and refinements are built by mapping these FODs into subsets of the sample trajectory. During tracking, new sensor data are converted to belief functions, which are projected and combined in the approximate state space. The resulting degrees of belief indicate the best pose estimate at the current time step. The choice of a sufficiently dense (in a topological sense) sample trajectory is a critical problem. Experimental results for a simple tracking system are shown.
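Dempster's rule of combination, which fuses the per-feature belief functions, admits a compact sketch over a small discrete frame of discernment. The frames and mass assignments below are hypothetical, not taken from the paper's experiments.

```python
# Dempster's rule of combination for two mass functions, each given as a
# dict mapping frozenset-of-hypotheses -> mass.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two features voting over three candidate poses {p1, p2, p3}.
m_edge  = {frozenset({"p1", "p2"}): 0.7, frozenset({"p1", "p2", "p3"}): 0.3}
m_color = {frozenset({"p2"}): 0.6,       frozenset({"p1", "p2", "p3"}): 0.4}
print(dempster_combine(m_edge, m_color))   # mass concentrates on p2
```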
A computer vision method is presented for recognizing the non-rigid motion of objects moving in a 3D environment. The method is embedded in a more complete mechanism that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. Multiple moving objects are observed via a single, uncalibrated video camera. A Kalman filter formulation is used to estimate the relative 3D motion trajectories. The recursive estimation process provides a prediction and error measure that is exploited in the higher-level stages. In this paper we concentrate on the action recognition stage. The 3D trajectory, occlusion, and segmentation information are used to extract stabilized views of the moving object. Trajectory-guided recognition (TGR) is then proposed as an efficient method for adaptive classification of action. The TGR approach is demonstrated using 'motion history images' that are recognized via a mixture-of-Gaussians classifier. The system was tested in recognizing various dynamic human outdoor activities, e.g., running, walking, roller blading, and cycling.
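For readers unfamiliar with the representation, a 'motion history image' can be sketched in a few lines: pixels that moved recently are bright, and older motion decays toward zero. The decay duration and difference threshold below are illustrative assumptions.

```python
# Incremental update of a motion history image (MHI) from two consecutive
# grayscale frames. `mhi` should be a signed integer array the same shape
# as the frames.
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=30, diff_thresh=25):
    """Update an MHI in place: stamp new motion at tau, decay the rest."""
    moving = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    mhi[moving] = tau          # current motion at full intensity
    mhi[~moving] -= 1          # older motion fades linearly
    np.clip(mhi, 0, tau, out=mhi)
    return mhi
```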
This paper presents the current developments at PERCRO in the field of gesture recognition of ergodic actions. At PERCRO this concept has been applied to coding and detecting the meaning of long sequences of movements. The concept of gesture recognition has been introduced in teleoperation systems and in virtual environments. These systems already have a structure for detailed acquisition of user movements; therefore, gesture recognition algorithms can be inserted into them with little effort.
A telepresence system immerses a user in a remote environment. As a result, the display of that environment and the interaction channels available are of paramount importance. In fact, such a system is primarily designed and evaluated according to its user interface, not only in terms of the plenitude of interaction channels, or media, but also in terms of their combination to deliver an interactive experience, or mode, to the user. In this paper we look at the requirements for a telepresence system, the modes and media of interaction matching these requirements, and the overall interface architecture. We present some results of early experiments with voice input and gesture input.
This paper describes a study of whether haptic feedback can be used to represent information that is normally difficult to obtain via visual feedback in a telerobotic system. Problems of manipulator kinematic condition, such as singularities and joint limits, have been well known for a long time. The kinematic condition of a manipulator is difficult to recognize visually, and poor kinematic condition often causes trajectory error or other undesirable effects in the system. This problem is especially significant in telerobotics, since a fully pre-planned path that completely excludes poor kinematic conditions is usually not available.
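One standard way to quantify the kinematic condition such a haptic channel would convey is Yoshikawa's manipulability measure, which drops toward zero near a singularity. The sketch below, with an assumed threshold and gain, shows how that measure could be mapped to a resistive force; it illustrates the general idea, not the paper's specific feedback law.

```python
# Map Yoshikawa's manipulability measure w = sqrt(det(J J^T)) to a
# resistive haptic force that grows as the arm approaches a singularity.
import numpy as np

def singularity_resistance(J, w_min=0.05, gain=10.0):
    """Resistive force magnitude; zero while manipulability stays healthy.

    J     : manipulator Jacobian at the current joint configuration.
    w_min : manipulability threshold below which resistance starts
            (hypothetical value).
    gain  : maximum resistive force, reached as w -> 0 (hypothetical).
    """
    w = np.sqrt(max(np.linalg.det(J @ J.T), 0.0))  # manipulability measure
    if w >= w_min:
        return 0.0
    return gain * (w_min - w) / w_min              # ramps from 0 up to gain
```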
Haptic interfaces that fit on the desktop and provide cursor control with 2D position convey added feel sensations to the user in a GUI, enhancing the efficiency of human-computer interaction for both single-user and collaborative applications. The merit of such devices is determined by the adaptability of the handle to different tasks and users, the ergonomics of the 2D workspace, the overall compactness of the device with respect to the usable workspace, and the quality of the force feedback. A novel device has been designed along these guidelines. A new tendon-driven 5-bar mechanism is used to improve the isotropy of the kinematic performance over the workspace and the maximum generated force while reducing the device's encumbrance. The present paper illustrates the kinematic and mechanical design of the device and reports the specifications achieved, as evaluated on the CAD model.
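A common score for the kinematic isotropy such a design optimizes is the inverse condition number of the velocity Jacobian, which equals 1 at perfectly isotropic configurations and approaches 0 near singularities. The sketch below evaluates the worst case over a sampled workspace; the `jacobian(q)` callback is a placeholder for the actual 5-bar kinematics, which the paper derives.

```python
# Isotropy index 1/kappa(J) and a worst-case design score over sampled
# joint configurations.
import numpy as np

def isotropy_index(J):
    """1/kappa(J) in [0, 1]; higher means more isotropic transmission."""
    s = np.linalg.svd(J, compute_uv=False)   # singular values, descending
    return 0.0 if s[0] == 0 else s[-1] / s[0]

def worst_case_isotropy(jacobian, joint_samples):
    """Minimum isotropy over a sampled workspace, a typical design score."""
    return min(isotropy_index(jacobian(q)) for q in joint_samples)
```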
The Hand Force Feedback System is an anthropomorphic haptic interface for replicating the forces that arise during grasping and fine manipulation. It is composed of four independent finger dorsal exoskeletons that wrap around four fingers of the human hand (the little finger is excluded). Each finger exoskeleton possesses three electrically actuated DOFs placed in correspondence with the human finger flexion axes, and a passive DOF allowing finger abduction movements.
Force feedback from remote or virtual operations is needed for numerous technologies, including robotics, teleoperated surgery, and games. To address this need, the authors are investigating electrorheological fluids (ERFs), whose viscosity changes under electrical stimulation. This property offers the capability to produce haptic feedback devices that can be controlled in response to remote or virtual stiffness conditions. Forces applied at a robot end-effector by a compliant environment can be reflected to the user using such an ERF device, in which the system viscosity changes in proportion to the force to be transmitted. This paper describes the analytical modeling and experiments currently underway to develop an ERF-based force-feedback element.
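As a rough illustration of the reflection principle (not the authors' model), a Bingham-type ER brake whose yield force grows approximately linearly with field strength could be commanded as follows; the linear coefficient and field limit are assumptions.

```python
# Field command for an ER-fluid brake so its resistive (yield) force
# tracks the force sensed at the remote end-effector.
def erf_field_command(f_remote, f_yield_per_kv, e_max_kv=4.0):
    """Field strength (kV/mm) whose yield force matches the remote force.

    Assumes a Bingham-like model where the yield force grows roughly
    linearly with field strength: f_yield = f_yield_per_kv * E.
    Both coefficients are hypothetical, not measured ERF parameters.
    """
    e = f_remote / f_yield_per_kv
    return min(max(e, 0.0), e_max_kv)   # saturate at the device limit
```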
There is a potential to achieve increased safety and higher productivity in some real-world applications (e.g., operations in harsh environments) by deploying mobile robots. Some of the tasks executed in such applications (e.g., mining) are time consuming, which directly affects operational productivity. The requirements for complex mechanical interactions with unstructured environments and the need for operator skill and intuition preclude, at this stage, the possibility of full automation. In order to develop realistic alternatives to present-day operations, telerobotics based on the concept of sensorimotor augmented reality has been favored for such applications. This telerobotic technique uses intelligent mediation between task specification and task execution to enhance operational efficiency using shared-mode control.
Teleoperated systems are becoming more and more complex. The integration of simulations, operator interfaces, different control levels, and hardware, as well as increasing real-time requirements, is challenging. This paper presents an approach to cope with these demands by applying methods and paradigms of large-scale industrial control. The first part of the paper deals with the integrated control architecture (ICa), which is designed as a framework for the development of distributed control systems. Using ICa, each component of the teleoperated system is implemented as an independent agent that uses the ICa broker as an object bus to communicate with the rest of the agent community. In the second part the teleoperated system is described. It consists of a simulation of a 7-degree-of-freedom anthropomorphic manipulator, a control agent, and a master system to teleoperate it. Finally, experiments are presented that demonstrate the performance and future potential of the applied control architecture in the development of teleoperated systems.
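The agent-on-an-object-bus pattern that ICa embodies can be sketched generically; the class and method names below are illustrative stand-ins, not the actual ICa API.

```python
# Generic broker/agent pattern: agents register with a broker, which
# routes messages between them (the "object bus" role).
class Broker:
    """Routes messages between registered agents."""
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def send(self, sender, recipient, message):
        self.agents[recipient].receive(sender, message)

class Agent:
    def __init__(self, name, broker):
        self.name, self.broker = name, broker
        broker.register(self)

    def receive(self, sender, message):
        print(f"{self.name} <- {sender}: {message}")

bus = Broker()
master, control = Agent("master", bus), Agent("control", bus)
bus.send("master", "control", {"cmd": "move_joint", "joint": 3, "angle": 0.2})
```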
Restoration activities after disasters such as landslides or rock avalanches require rapid action, but in most cases these activities are very inefficient because of the danger of secondary disasters. A system that can operate reconstruction machinery by remote control was therefore developed and installed on general-purpose construction machines (backhoe shovels). Control performance experiments and field experiments were carried out on the developed system, and its effectiveness was confirmed.
In this paper we present our progress in the research and development of an augmented reality (AR) system for the remote inspection of hazardous environments. It specifically addresses one particular application with which we are involved--improving the inspection of underground sewer pipes using robotic vehicles and 3D graphical overlays coupled with stereoscopic visual data. Traditional sewer inspection using a human operator and CCTV systems is a mature technology, though the task itself is difficult, subjective, and prone to error. The work described here proposes not to replace the expert human inspector, but to enhance and increase the information that is available to him and to augment that information with other previously stored data. We describe our current system components, which comprise a robotic stereo head device, a simulated sewer-crawling vehicle, and our AR system. We then discuss the lengthy calibration procedures which are necessary to align any graphical overlay information with live video data. Some experiments in determining alignment errors under head motion and some investigations into the use of a calibrated virtual cursor are then described.
An efficient and reliable interface for managing telemetry information is most important in the teleoperation of space robots. Operators need to be able to recognize and verify large amounts of telemetry information quickly and accurately. Visual information around the workspace of space robots is very limited, and the detailed position of work is uncertain. These difficulties raise the load on operators. We have been running experiments on assembling antennas using the Engineering Test Satellite VII (ETS-VII), so we are very much aware of the need for an effective man-machine interface to handle telemetry information. We have developed an audio interface system for the efficient operation of ETS-VII. Unlike a visual interface, this audio interface allows an operator to (1) perceive information even while paying little attention to it, and (2) easily identify trends and changes. The system analyzes telemetry information in real time, converts changes in the status of information into voice data, and converts changes in the magnitude of forces into the frequency of a motor noise. The effectiveness of this audio interface was verified in operations of ETS-VII by monitoring eye movements over time, with time measured as the mean interval between status changes and command submissions. An eye mark recorder recorded the eye movements. The data suggest significant benefits from the audio interface system.
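The force-to-sound mapping can be sketched as a simple monotone function from force magnitude to the pitch of the synthesized motor noise; the force range and frequency band below are assumptions, not the ETS-VII values.

```python
# Sonification sketch: map a sensed force magnitude onto the pitch of a
# synthetic motor noise, clamped to the audible band chosen for the cue.
def force_to_frequency(force_n, f_min=200.0, f_max=2000.0, force_max=50.0):
    """Map a force magnitude (N) linearly onto an audio frequency (Hz)."""
    ratio = min(max(force_n / force_max, 0.0), 1.0)   # clamp to [0, 1]
    return f_min + ratio * (f_max - f_min)
```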
A set of tools addressing the problems specific to the control and monitoring of remote robotic systems from extreme distances has been developed. The tools include the capability to model and visualize the remote environment, to generate and edit complex task scripts, to execute the scripts in supervisory control mode, and to monitor and diagnose equipment from multiple remote locations. Two prototype systems were implemented for demonstration. The first demonstration, using a prototype joint design called Dexter, shows the applicability of the approach to space robotic operation in low Earth orbit. The second demonstration uses a remotely controlled excavator in an operational open-pit tar sand mine, showing that the tools developed can be used for planetary exploration as well as terrestrial mining applications.
A recent progression in heavy lift assist devices is to place the human operator closer to the end effector, providing close coupling between the operator input and the payload. This close coupling of the human's control and the power of a heavy lift assist device provides improved accuracy and ease of handling for heavy and bulky objects. However, collisions with obstacles may still occur in a crowded manufacturing environment due to the large inertia of the work piece, inappropriate motion commands, or inattention or fatigue of the human operator. In this research, a fictitious force field is assigned to each obstacle in the workspace. As a work piece moves closer to an obstacle, an impedance force is calculated and combined with the control forces in order to prevent collisions. In addition, a set of impedance fields is developed and applied that associates desired trajectories with the layout of the workspace. Thus, the force fields guide the work piece to advantageous orientations and positions during the material handling operation. This includes adjusting the height of the work piece for placement on tables, orienting it to preset positions, and optimizing the configuration of the lift assist robot during motion. Experimental results show that this approach to augmentation provides the operator with a natural and effective interface to the heavy lift assist device.
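A classic way to realize such a fictitious force field is a repulsive potential that activates inside an influence distance around each obstacle. The sketch below uses the standard potential-field form with assumed gain and range parameters, as one plausible instance of the impedance forces described, not the paper's exact field.

```python
# Repulsive force on the work piece from one obstacle, derived from the
# classic potential U = 0.5 * eta * (1/d - 1/d0)^2 for d < d0, else 0.
import numpy as np

def repulsive_force(p_piece, p_obstacle, d0=0.5, eta=1.0):
    """Force on the work piece; zero beyond the influence distance d0."""
    delta = p_piece - p_obstacle
    d = np.linalg.norm(delta)
    if d >= d0 or d == 0.0:
        return np.zeros_like(delta)
    magnitude = eta * (1.0 / d - 1.0 / d0) / d**2   # -grad U along delta
    return magnitude * (delta / d)                  # pushes away from obstacle
```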
This paper reviews different forms of human-computer co-operative control in teleoperation. The need to define a new form of co-operative control is addressed due to the increased capabilities of computers in perception, decision-making, and learning. A brief description of shared and traded control and of supervisory control is given, along with their differences from co-operative control. A co-operative control concept based on a less strict sense of supervisory control is proposed. In this concept, human operators and computers can interact and co-operate in the operation at both the intelligence and execution levels, exploiting humans' and computers' distinctive and overlapping advantages. A framework for developing co-operative control systems is introduced, with its functional components depicted. A telerobotic system with a demonstration task has been developed as a test-bed to investigate different issues of co-operative control. The task description and operational mode of the system are also given.
Orientation of a camera onboard an uninhabited aerial vehicle (UAV) used for reconnaissance is performed manually by an operator using a two-degree-of-freedom joystick that commands camera azimuth and elevation. The flight path of the UAV is handled autonomously by an autopilot that transforms camera orientation into guidance commands that cause the UAV to fly to a destination, loiter, or track a target as instructed by the operator. This control mode permits single-person operation of the UAV mission. In the manual mode, the aircraft circles the target at a fixed standoff distance determined by the instantaneous camera orientation and, if available, ranging information to the target; the operator must continually track the target in this mode. In the shared control mode, the target location in an earth-fixed frame is determined from the camera orientation at a single point in time, in conjunction with the concurrent UAV position, the latter assumed to be available from GPS or an onboard inertial guidance system. This leaves the operator free to pan for other targets or perform other tasks, and the operator can update the target location or switch between the manual and shared modes at any time. This method also provides the added benefit that if the feed from the remote operator is lost, the aircraft will continue on its current heading or loiter, similar to current UAV operation. This teleoperation concept is being validated in Wright State University's CAVE automated virtual environment located at Wright-Patterson Air Force Base.
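The geolocation step of the shared mode reduces to intersecting the camera's look ray with the ground. A minimal sketch under flat-terrain and north-east-down (NED) frame assumptions follows; a real system would also account for aircraft attitude and terrain elevation.

```python
# Earth-fixed target position from UAV position and camera angles, by
# intersecting the look ray with the ground plane (down = 0).
import numpy as np

def locate_target(uav_ned, azimuth_rad, depression_rad):
    """Target position in a local NED frame.

    uav_ned        : UAV position; the down component is -altitude.
    azimuth_rad    : camera azimuth measured from north.
    depression_rad : look-down angle below horizontal (must be > 0).
    """
    direction = np.array([np.cos(azimuth_rad) * np.cos(depression_rad),
                          np.sin(azimuth_rad) * np.cos(depression_rad),
                          np.sin(depression_rad)])   # unit look vector
    s = -uav_ned[2] / direction[2]    # scale to reach the ground plane
    return uav_ned + s * direction

# UAV at 1000 m altitude looking 45 deg east of north, 30 deg down.
print(locate_target(np.array([0.0, 0.0, -1000.0]),
                    np.radians(45), np.radians(30)))
```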
The Telemanufacturing Workcell Over The Internet project consists of a robotic arm that can be remotely monitored and controlled via the Internet. The project was tested at two exhibitions in Singapore, one using an ATM network and the other a modem dial-up connection. The issues encountered and some possible solutions are presented in this paper. One of the main issues was the use of a virtual model, both as feedback and to control the robot, versus the use of images and video.
The PumaPaint project is a web robot that allows users to create original artwork over the World Wide Web. The site allows control of a PUMA 760 robot equipped with four paintbrushes; jars of red, green, blue, and yellow paint; and white paper attached to an easel. Users must download a Java(TM) interface allowing interactive control of the robot. This interface contains two windows showing live camera views of the work site, and various controls for connecting to and disconnecting from the robot, viewing the task status, and controlling the painting task. During the first year of operation of the site, June 3, 1998 to June 2, 1999, approximately 5,000 users produced 390 canvases. This paper presents summary data from one year of operation, discusses the author's experiences in operating the site, and examines some of the artwork produced.
This paper describes a robotics technology--the Autonomous Observer (AO)--developed to facilitate experimentation over the Internet. The AO is a mobile robot equipped with visual sensors. It applies visual tracking and motion planning techniques to track a designated moving object (the target) in an environment cluttered by obstacles and to repeatedly measure the target's pose. This pose is sent over the Internet to remote users, who can observe 3D real-time graphic renderings of the target's motion in its environment from individually selected viewpoints. The AO was used to set up an experiment in which a can-collecting robot (playing the role of the target), equipped with a range sensor and a simple arm, automatically detects coke cans and collects them in a bag.
Teleoperation of remote devices on the World Wide Web is becoming more common and feasible. Prices on devices ranging from digital cameras to LEGO RCXs(tm) have dropped, making them available to a much wider audience. Increasing availability of remotely operable devices removes one barrier to ubiquitous telepresence, but leaves others intact. One of the remaining barriers is the need for a user to develop and deploy an end-to-end solution for device manipulation. The goal of our research is to reduce this barrier by making a flexible end-to-end solution accessible to a wide audience of potential Web device developers.
This work presents a methodology for the development of teleoperated robotic systems over the Internet. First, a bibliographical review of telerobotic systems that use the Internet as the means of control is presented. The methodology is then implemented and tested through the development of two systems. The first, called RobWebCam, is a manipulator with two degrees of freedom commanded remotely over the Internet. The second, called RobWebLink, teleoperates an ABB (Asea Brown Boveri) industrial robot with six degrees of freedom.
Robotic systems can be controlled remotely through the use of telerobotics. This work presents a through-the-Internet teleoperation system for remotely operating the IRB2000 industrial robot. The IRB2000 controller allows external access through an RS232 serial communication link based on a 42-function proprietary communication protocol. The proposed teleoperation system uses this communication capability by connecting it to a local area network based on TCP/IP (Transmission Control Protocol/Internet Protocol). The system was implemented using a client/server architecture, with a UNIX (Linux) platform as the server.
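The bridge this describes can be sketched as a TCP server that relays client commands to the controller's serial port and returns the replies. The port number, message framing, and serial settings below are assumptions, and the real system speaks the controller's 42-function proprietary protocol over this link.

```python
# Minimal TCP-to-serial bridge: forward each client command to the robot
# controller's RS232 link and relay the controller's reply back.
import socket
import serial  # pyserial

def serve(tcp_port=5000, serial_dev="/dev/ttyS0"):
    # Serial settings are placeholders, not the IRB2000's actual values.
    link = serial.Serial(serial_dev, baudrate=9600, timeout=1.0)
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", tcp_port))
        srv.listen(1)
        while True:
            conn, _addr = srv.accept()
            with conn:
                while (cmd := conn.recv(256)):       # b"" means client closed
                    link.write(cmd)                  # forward to controller
                    conn.sendall(link.read(256))     # relay controller reply
```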
This paper presents a summary of the design and integration of a haptic interface with a nuclear-industry-accepted control system and manipulator. The control system is a UK Robotics Advanced Teleoperation Controller, and the manipulator is a Schilling Titan II hydraulic arm. Operator performance has been studied for peg-in-hole, grinding, and drilling tasks, both with and without haptic communication. The results of these experiments are presented.