This paper addresses the problem of using multiple robots together as a hand to reorient objects in the plane. A planner is described which generates a sequence of intermediate object motions and robot grasps, called a grasp gait, sufficient to achieve a final desired object orientation. These gaits are implemented using three PHANToM haptic interfaces modified to act as robot fingers. The core control structure of the implementation is presented. Finger force and position errors, as well as object position errors, are discussed, and additional control algorithms are developed to correct for these errors. With these additional control algorithms, the gaits are demonstrated to be robust to small disturbances.
Cooperative multirobot systems are capable of performing fairly complex tasks even while operating in an autonomous or semi-autonomous mode. The multiple views of an object afforded by a multirobot system enhance the overall level of information that is vital for success in tasks such as satellite retrieval operations. Our previous studies along these lines indicated that although simple control strategies were sufficient for the success of such a mission, limitations in the accuracy of the visual sensors led to misses during the grasping phase. This paper extends the cooperative multirobot system framework that was previously presented to more complex retrieval operations, such as the recovery of a tumbling satellite. The subsumption architecture used for control of the multirobot system is capable of recovering a satellite spinning about its long axis. In this paper, it is shown that it is impractical for a free-floating multirobot system to perform the same task for a tumbling satellite: the reach-space limitations of free-floating platforms dictate a free-flying approach to the problem. In addition, with current visual sensing technology, such a system must also include a teleoperated interface because of the accuracy concerns noted in the previous study. We also present the results of an experimental simulation study of a tumbling-satellite retrieval by three cooperating robots. This simulation includes full orbital dynamics effects such as atmospheric drag and nonspherical gravitational-field perturbations.
Natural selection is responsible for the creation of robust and adaptive control systems. Nature's control systems are created only from primitive building blocks. Using insect neurophysiology as a guide, a neural architecture for leg coordination in a hexapod robot has been developed. Reflex chains and sensory feedback mechanisms from various insects and crustacea form the basis of a pattern generator for intra-leg coordination. The pattern generator contains neural oscillators which learn from sensory feedback to produce stepping patterns. Using sensory feedback as the source of learning information allows the pattern generator to adapt to changes in the leg dynamics due to internal or external causes. A coupling between six of the single leg pattern generators is used to produce the inter-leg coordination necessary to establish stable gaits.
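A minimal sketch of how inter-leg coupling can stabilize a gait, using generic phase-coupled (Kuramoto-style) oscillators rather than the paper's learned reflex-chain pattern generators; the tripod grouping, gains, and update rule below are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's neural model): six phase-coupled
# oscillators, one per leg, with antiphase coupling between the two
# tripods so that a stable alternating-tripod gait emerges.
import numpy as np

N_LEGS = 6
# Legs 0, 2, 4 form one tripod; legs 1, 3, 5 form the other.
TRIPOD = np.array([0, 1, 0, 1, 0, 1])

def step(phases, omega=2 * np.pi, k=2.0, dt=0.01):
    """One Euler step of Kuramoto-style dynamics.

    Same-tripod legs are pulled into phase; opposite-tripod legs
    are pushed half a cycle apart.
    """
    dphi = np.full(N_LEGS, omega)
    for i in range(N_LEGS):
        for j in range(N_LEGS):
            if i == j:
                continue
            # Desired offset: 0 within a tripod, pi across tripods.
            target = 0.0 if TRIPOD[i] == TRIPOD[j] else np.pi
            dphi[i] += k * np.sin(phases[j] - phases[i] - target)
    return (phases + dt * dphi) % (2 * np.pi)

phases = np.random.uniform(0, 2 * np.pi, N_LEGS)
for _ in range(5000):
    phases = step(phases)

# A leg is in swing while its phase is in the first half-cycle.
print("swing legs:", np.where(phases < np.pi)[0])
```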
A new, very fast algorithm for the synthesis of discrete-time neural networks (DTNN) is proposed. For this purpose the following concepts are employed: (i) introduction of interaction activation functions, (ii) time-varying distribution of DTNN weights, (iii) synthesis in the discrete-time domain, and (iv) a one-step learning iteration approach. The proposed DTNN synthesis procedure is useful for applications to identification and control of nonlinear, very fast dynamical systems. In this sense, a DTNN for nonlinear robot control is designed. The contributions of the paper are as follows. A nonlinear, discrete-time state representation of a neural structure is proposed for one-step learning. Within the structure, interaction activation functions are introduced which can be combined with input and output activation functions. A new, very fast algorithm for one-step learning of DTNN is introduced, in which interaction activation functions are employed. The functionality of the proposed DTNN structure is demonstrated with a numerical example in which a DTNN model for nonlinear robot control is designed. This DTNN model is trained to imitate a nonlinear robot control algorithm based on the dynamics of the full robot model of RRTR structure. The simulation results show the satisfactory performance of the trained DTNN model.
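The paper's interaction-activation formulation is not reproduced here, but the flavor of one-step (non-iterative) learning can be sketched: fix random hidden weights and solve the output weights by linear least squares in a single step. Everything below (the toy target, layer sizes, tanh activations) is an assumption for illustration.

```python
# Hedged sketch of one-step (non-iterative) learning: random hidden
# weights, output weights solved by least squares in a single step.
# The paper's interaction activation functions and DTNN state
# representation are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# Training data: imitate a nonlinear "control law" u = f(x), a toy
# stand-in for the robot control algorithm the DTNN imitates.
X = rng.uniform(-1, 1, size=(500, 3))          # states
u = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]        # target control signal

n_hidden = 50
W_in = rng.normal(size=(3, n_hidden))          # fixed random input weights
b = rng.normal(size=n_hidden)

H = np.tanh(X @ W_in + b)                      # hidden activations
# One-step learning: output weights via linear least squares.
W_out, *_ = np.linalg.lstsq(H, u, rcond=None)

u_hat = H @ W_out
print("training RMS error:", np.sqrt(np.mean((u - u_hat) ** 2)))
```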
An intelligent robotic architecture that autonomously synthesizes goal-oriented behaviors, while connecting sensing and action in real time, is presented with applications to loosely defined planetary sampling missions. By goal-oriented behaviors, we mean sequences of actions generated from automatic task monitoring and replanning toward set goals in the presence of uncertainties as well as errors and faults. This architecture is composed of perception and action nets interconnected in closed loops. The perception net, represented as a hierarchy of features that can be extracted from physical as well as logical sensors, manages uncertainties with sensor fusion, sensor planning, and consistency maintenance. The action net, represented as a hierarchy of state transitions in which all the possible system behaviors are embedded, generates robust and fault-tolerant system behaviors with on-line adaptive task monitoring and replanning. The proposed intelligent robotic architecture is significant for autonomous planetary robotic sampling -- and related robotic tasks in unstructured environments -- that require robust and fault-tolerant behaviors because of expected uncertainties as well as errors in sensing, actuation, and environmental constraints. We use a typical Mars planetary sampling scenario to evaluate the proposed architecture: autonomous soil science, in which a robot arm trenches soil in order to examine it and deposit soil samples into lander-based science instrumentation.
Perception for an intelligent system is partially accomplished through fusing information from a number of complementary and redundant sensors. The application of intelligent system technology to sense, reason, and control unsupervised autonomous microgravity experiments requires a sensor fusion unit (SFU) that is fault-tolerant, highly available, and intelligent. Also, it should have a generic architecture to enhance the system development methodology. A generic architecture for fusion of environmental sensors for autonomous deployment of microgravity experiments is proposed in this paper. The proposed SFU has the characteristics of high data integrity, resource redundancy, on-line autonomous serviceability, and operating status reporting ability. A discrete event system (DES) model to quantify the performance of the SFU in terms of functionality (i.e., predictable redundancy management), reliability, and availability has been developed.
Intelligent control, inspired by biological and AI (artificial intelligence) principles, has increased our understanding of how to control complex processes without a precise mathematical model of the controlled process. Through customized applications, intelligent control has demonstrated that it is a step in the right direction. However, intelligent control has yet to provide a complete solution to the problem of integrated manufacturing systems via intelligent reconfiguration of robotic systems. The aim of this paper is to present an intelligent control architecture and design methodology based on the biological principles that govern self-organization of autonomous agents. Two key structural elements of the proposed control architecture have been tested individually in key pilot applications and have shown promising results. The proposed intelligent control design is inspired by observed individual and collective biological behavior in colonies of living organisms that are capable of self-organizing into groups of specialized individuals able to collectively achieve a set of prescribed or emerging objectives. The nervous-system and brain analog in the proposed control architecture is based on reinforcement-learning principles and conditioning, and is modeled using adaptive neurocontrollers. Mathematical control theory (e.g., optimal control, adaptive control, and neurocontrol) is used to coordinate the interactions of multiple robotic agents.
Robot reliability has become an increasingly important issue in the last few years, in part because of the increased application of robots in hazardous and unstructured environments. However, much of this work leads to complex and nonintuitive analysis, with the result that many techniques are impractical due to computational complexity or the lack of appropriately detailed models for the manipulator. In this paper, we consider the application of notions and techniques from fuzzy logic, fault trees, and Markov modeling to robot fault tolerance. Fuzzy logic lends itself to quantitative reliability calculations in robotics: the crisp failure rates that are usually assumed are not actually known, whereas fuzzy logic, by working with the approximate (fuzzy) failure rates actually available during the design process, avoids making too many unwarranted assumptions. Fault trees are a standard reliability tool that can easily assimilate fuzzy logic. Markov modeling allows evaluation of multiple failure modes simultaneously, and is thus an appropriate method of modeling failures in redundant robotic systems. However, no method of applying fuzzy logic to Markov models was previously known to the authors; this paper therefore develops new reliability techniques that combine Markov modeling with fuzzy logic.
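A minimal sketch of the flavor of combining fuzzy failure rates with Markov modeling, assuming a textbook two-state (working/failed) Markov availability model with steady-state availability A = mu/(lambda + mu) and alpha-cut intervals standing in for fuzzy rates; this is not the authors' formulation.

```python
# Two-state Markov model (working <-> failed) evaluated with fuzzy
# (interval-valued) rates at a given alpha-cut instead of crisp rates.
# Availability A = mu / (lambda + mu) is monotone in both rates, so
# interval endpoints map directly to interval endpoints.
def availability_interval(lam, mu):
    """lam, mu: (low, high) alpha-cut intervals for failure/repair rates."""
    lam_lo, lam_hi = lam
    mu_lo, mu_hi = mu
    # A decreases with lambda and increases with mu.
    a_lo = mu_lo / (lam_hi + mu_lo)
    a_hi = mu_hi / (lam_lo + mu_hi)
    return a_lo, a_hi

# Approximate (fuzzy) design-stage rates per hour, e.g. "about 1e-4".
lam = (0.8e-4, 1.2e-4)   # failure-rate interval
mu = (0.05, 0.10)        # repair-rate interval
print("availability in [%.6f, %.6f]" % availability_interval(lam, mu))
```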
Joint position sensors are necessary for most robot control systems, and a single position-sensor failure in a typical robot system can greatly degrade performance. This paper presents a method to obtain position information from Cartesian accelerometers without integration. Depending on the number and location of the accelerometers, the proposed system can tolerate the loss of multiple position sensors. A solution technique suitable for real-time implementation is presented. Simulations were conducted using five triaxial accelerometers to recover from the loss of up to four joint position sensors on a seven-degree-of-freedom robot moving in general three-dimensional space. The simulations show good estimation performance using non-ideal accelerometer measurements.
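A hedged toy sketch of the underlying idea for the static, planar case (the paper treats general 3D motion with five triaxial accelerometers): an accelerometer fixed to a link at rest measures gravity expressed in the link frame, so cumulative joint angles can be recovered algebraically, without integration. The two-link geometry and noise levels below are assumptions.

```python
# Planar, static illustration: each link-mounted accelerometer senses
# gravity rotated into its link frame, giving the cumulative joint
# angle directly (no integration of acceleration).
import numpy as np

g = np.array([0.0, -9.81])

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# True joint angles of a planar 2-link arm (unknown to the estimator).
q_true = np.array([0.7, -0.3])

# Accelerometer on link i measures gravity in the link frame, rotated
# by the cumulative angle q1 + ... + qi (plus measurement noise).
rng = np.random.default_rng(1)
a1 = rot(q_true[0]).T @ g + rng.normal(0, 0.05, 2)
a2 = rot(q_true[0] + q_true[1]).T @ g + rng.normal(0, 0.05, 2)

def angle_from_gravity(a):
    # Solve R(theta).T @ g = a for theta.
    return np.arctan2(g[1], g[0]) - np.arctan2(a[1], a[0])

c1 = angle_from_gravity(a1)            # estimate of q1
c2 = angle_from_gravity(a2)            # estimate of q1 + q2
q_est = np.array([c1, c2 - c1])
print("true:", q_true, "estimated:", q_est)
```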
This paper proposes an estimation technique which employs measures of information for nonlinear systems. General recursive estimation, and in particular the Kalman filter, is discussed. A Bayesian approach to probabilistic information fusion is outlined, and the notion and measures of information are defined. This leads to the derivation of the algebraic equivalent of the Kalman filter, the linear information filter. The characteristics of this filter and the advantages of information-space estimation are discussed. State estimation for systems with nonlinearities is then considered and the extended Kalman filter is treated. Linear information space is extended to nonlinear information space by deriving the extended information filter, establishing all the mathematical tools required for exhaustive information-space estimation. The advantages of the extended information filter over the extended Kalman filter are presented and demonstrated. This extended information filter constitutes an original and significant contribution of this paper to estimation theory, and it forms the basis of decentralized data fusion techniques that can be applied to a modular wheeled mobile robot.
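A minimal sketch of the linear information filter update, in the standard information-space form Y = P^{-1} (information matrix) and y = Y x (information vector); the appeal for the decentralized fusion the paper builds on is that each observation enters as a purely additive term. The state, sensors, and noise values below are illustrative.

```python
# Linear information filter: the algebraic equivalent of the Kalman
# filter update, written in information space so fusion is additive.
import numpy as np

def information_update(y, Y, z, H, R):
    """Fuse observation z = H x + v, v ~ N(0, R), into (y, Y)."""
    Rinv = np.linalg.inv(R)
    i = H.T @ Rinv @ z        # information contribution of z
    I = H.T @ Rinv @ H        # information-matrix contribution
    return y + i, Y + I       # additive fusion step

# Prior on a 2D state, written in information form.
x0 = np.array([0.0, 0.0])
P0 = np.diag([10.0, 10.0])
Y = np.linalg.inv(P0)
y = Y @ x0

# Two sensors each observe one component of the state.
for z, H, r in [(1.9, np.array([[1.0, 0.0]]), 0.5),
                (-0.7, np.array([[0.0, 1.0]]), 0.2)]:
    y, Y = information_update(y, Y, np.array([z]), H, np.array([[r]]))

x_hat = np.linalg.solve(Y, y)   # recover the state estimate
print("fused estimate:", x_hat)
```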
Occupancy grids are a common representation for mobile robot activities such as obstacle avoidance, map making, localization, and place recognition. An important issue is how to update the grid with new sensor readings accurately and rapidly enough to support real-time navigation. The HIMM/VFH methodology works well for a robot navigating at high speeds, but the algorithms show poor performance at lower speeds in cluttered areas. Our approach to overcoming these deficiencies is twofold. First, Dempster-Shafer theory is used for fusion because it provides a well-understood updating scheme and has been demonstrated to have additional desirable properties. Second, the number of grid elements updated varies as a function of the robot's velocity. Experiments with Clementine, a Denning-Branch MRV4 mobile robot, demonstrate that varying the beam width with the robot's velocity improves Dempster-Shafer occupancy-grid updating relative to HIMM. Furthermore, the Dempster-Shafer method tends to handle noise better and to make smoother, more realistic maps.
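A sketch of the Dempster-Shafer updating scheme for a single occupancy-grid cell, assuming the common frame {occupied, empty} with residual mass on the whole frame representing "unknown"; the particular sensor mass assignments below are illustrative, not the paper's sonar model.

```python
# Dempster's rule of combination for one occupancy-grid cell -- the
# updating scheme used in place of HIMM's increment/decrement counters.
def combine(m1, m2):
    """m = (m_occ, m_emp, m_theta); each mass vector sums to 1."""
    o1, e1, t1 = m1
    o2, e2, t2 = m2
    conflict = o1 * e2 + e1 * o2          # mass assigned to the empty set
    k = 1.0 - conflict                    # normalization constant
    occ = (o1 * o2 + o1 * t2 + t1 * o2) / k
    emp = (e1 * e2 + e1 * t2 + t1 * e2) / k
    theta = (t1 * t2) / k
    return occ, emp, theta

cell = (0.0, 0.0, 1.0)                    # start fully unknown
# Three sonar readings: two suggest "occupied", one weakly "empty".
for reading in [(0.6, 0.0, 0.4), (0.5, 0.1, 0.4), (0.1, 0.3, 0.6)]:
    cell = combine(cell, reading)
print("belief(occ)=%.3f belief(emp)=%.3f unknown=%.3f" % cell)
```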
This paper, the second of a two-paper series, summarizes some key theoretical results in distributed decision fusion and provides new experimental evidence that validates these results. The experimental evidence is important in that it provides a guideline for designing and optimizing a distributed decision fusion system. Furthermore, it demonstrates that a properly designed distributed decision fusion system may perform comparably to optimal centralized fusion. The objective of this paper is to benchmark centralized and distributed hypothesis-testing algorithms and validate theoretical results from distributed decision fusion using the experimental multifrequency radar data from the Rome Lab Predetection Fusion Program. In a series of papers, Thomopoulos et al. have designed and evaluated a robust CFAR detector (code-named RobCFAR) for the distributed fusion of multifrequency radar data from the Rome Lab Predetection Fusion Program. In this paper, the optimal centralized and distributed detectors for the same multifrequency radar data are developed and their performance is compared with that of the RobCFAR detectors. Several problems that arise from the necessity of on-line evaluation of the data statistics are addressed. The experimental results are used to validate several theoretical results from distributed decision fusion and to benchmark the performance of a CFAR fusion design against the optimal centralized and distributed data fusion designs.
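For readers unfamiliar with CFAR detection, here is a sketch of a plain cell-averaging CFAR detector; the RobCFAR robust variant evaluated in the paper is not reproduced. Window sizes, the scale factor, and the noise model below are assumptions.

```python
# Cell-averaging CFAR: each cell is compared with a threshold scaled
# from the noise level estimated in neighboring reference cells, which
# keeps the false-alarm rate constant as the background level drifts.
import numpy as np

def ca_cfar(x, n_ref=16, n_guard=2, scale=3.0):
    """Return boolean detections for 1D power samples x."""
    n = len(x)
    hits = np.zeros(n, dtype=bool)
    for i in range(n_ref + n_guard, n - n_ref - n_guard):
        lead = x[i - n_guard - n_ref : i - n_guard]
        lag = x[i + n_guard + 1 : i + n_guard + 1 + n_ref]
        noise = (lead.sum() + lag.sum()) / (2 * n_ref)
        hits[i] = x[i] > scale * noise
    return hits

rng = np.random.default_rng(2)
x = rng.exponential(1.0, 1000)     # exponential noise-power samples
x[500] += 25.0                     # inject a target
print("detections at:", np.where(ca_cfar(x))[0])
```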
The use of Riemann surfaces offers an alternative approach to the characterization of object classes. In a situation where data from multiple sensors is available, the sensor observables can be used as the coordinates which define the space occupied by the class surfaces. The curvature of the surfaces will be governed by the underlying correlations between the phenomenologies of interest and the viewing conditions. The result is a natural coordinate system in which to implement classification algorithms. In this paper, a simple two-dimensional example is presented which introduces the underlying mathematics of the approach. A traditional statistical classifier is then used to examine classification performance. Extending the approach to include non-collocated sensors, sensor measurement error, and noise sources is also briefly discussed.
In this work, mobile robot distributed detection systems are described that use multiple sources of information to construct an internal representation of the environment. We begin by considering a decision fusion model employing the parallel fusion topology. Based on their observations, local sensors make local binary decisions and transmit them to the decision fusion center, where they are combined to yield the global decision. Decision rules are obtained using different probabilistic methods. The optimal decision scheme at the fusion center is derived by optimizing three criteria: the mean square error, the maximum a posteriori error, and the Bayes risk. We then consider an optimal data fusion scheme in which the local decisions are simply added. Finally, an application to the fusion of ultrasound sonar data is presented and discussed.
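A sketch of the optimal fusion rule for this parallel topology in its standard (Chair-Varshney) form: local binary decisions are weighted by the log-likelihood ratios implied by each sensor's detection and false-alarm probabilities, then compared against a prior-dependent threshold, yielding the MAP global decision. The sensor qualities below are illustrative.

```python
# Chair-Varshney fusion of local binary decisions at the fusion center.
import numpy as np

def fuse(u, pd, pf, p1=0.5):
    """u: local decisions (0/1); pd, pf: per-sensor P_D and P_F."""
    u, pd, pf = map(np.asarray, (u, pd, pf))
    llr = np.where(u == 1,
                   np.log(pd / pf),
                   np.log((1 - pd) / (1 - pf))).sum()
    threshold = np.log((1 - p1) / p1)   # MAP threshold from the priors
    return int(llr > threshold)

# Three sonar-like detectors of varying quality vote 1, 1, 0.
u = [1, 1, 0]
pd = [0.9, 0.8, 0.6]
pf = [0.1, 0.2, 0.3]
print("global decision:", fuse(u, pd, pf))
```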
Architecture and Programming of Distributed Robotic Agents
Object orientation has been widely used in the development of robotic systems for its benefits of modularity and reusability. Modular robotics systems are designed to be flexible, reusable, easily extendible sets of robotic resources that may be structured in various fashions to solve tasks. We describe the role of object oriented methods in the development of a modular robotic system and show how such a system supports collaborative working through a networked laboratory environment. We present an architectural framework for modular robotics, which employs and emphasizes object oriented techniques.
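A toy sketch of the object-oriented principle being described, with hypothetical class names not taken from the paper: each robotic resource implements a common interface, so modules can be composed, swapped, and shared without changing task-level code.

```python
# Hypothetical illustration of modular robotic resources behind a
# common interface; task code depends only on the interface.
from abc import ABC, abstractmethod

class RoboticResource(ABC):
    """Common interface every module implements."""
    @abstractmethod
    def execute(self, command: str) -> str: ...

class Gripper(RoboticResource):
    def execute(self, command: str) -> str:
        return f"gripper: {command} done"

class Camera(RoboticResource):
    def execute(self, command: str) -> str:
        return f"camera: {command} -> image captured"

def run_task(resources, commands):
    # Modules are interchangeable as long as they honor the interface.
    return [r.execute(c) for r, c in zip(resources, commands)]

print(run_task([Gripper(), Camera()], ["close", "snapshot"]))
```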
This paper describes a system to support a wireless vision platform moving within a large space spanned by multiple wireless cells. The system offers a cost-effective and scalable design for a distributed active vision system consisting of mobile cameras, a compute cluster of high-performance workstations interconnected by a high-bandwidth network, and a wireless network divided into distinct cells that connects the mobile platforms with the computing cluster. Position information available at the mobile platforms [e.g., via the NAVSTAR Global Positioning System (GPS)], together with our 'position-aware' division algorithms and routing protocol, ensures continuous quality of service (QoS) as the mobile platforms move from cell to cell. This paper extends a real-time QoS communication protocol developed in our earlier work to provide continuous quality of service. We also present vision algorithms that minimize the hardware needed on the mobile platforms by using the GPS position information and by off-loading computation to the compute cluster.
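A hedged sketch of the position-aware idea: with GPS reporting the platform's location, the next cell can be selected (and its resources reserved) before a boundary crossing instead of reacting to signal loss. The cell layout, names, and nearest-cell policy below are hypothetical simplifications of the paper's division algorithms and routing protocol.

```python
# Hypothetical position-aware cell handoff driven by GPS positions.
import numpy as np

CELLS = {"A": (0.0, 0.0), "B": (100.0, 0.0), "C": (0.0, 100.0)}

def nearest_cell(pos):
    return min(CELLS, key=lambda c: np.hypot(pos[0] - CELLS[c][0],
                                             pos[1] - CELLS[c][1]))

def handoff_plan(track):
    """Given a predicted position track, emit cell transitions early."""
    plan, current = [], None
    for pos in track:
        cell = nearest_cell(pos)
        if cell != current:
            plan.append((pos, cell))   # reserve QoS in `cell` here
            current = cell
    return plan

track = [(10, 5), (40, 5), (70, 5), (95, 5)]
print(handoff_plan(track))
```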
The aim of this project is to telecontrol the movements in 3D space of a microscope in order to manipulate and measure microsystems or micro-parts, aided by a multi-user virtual reality (VR) environment. Microsystems are presently gaining in interest: they are small, independent modules incorporating various functions, such as electronic, micromechanical, data-processing, optical, chemical, medical, and biological functions. Even as manufacturing technologies improve, measurement of the small structures, needed to ensure process quality, remains key information for development. To date, measuring microstructures has required high-magnification microscopes, and the use of highly magnifying computerized microscopes is expensive. To ensure high-quality measurements and distribute the acquired information to multiple users, our proposed system is divided into three parts. The first is the virtual reality microscopic environment (VRME)-based user interface on an SGI workstation, used to prepare the manipulations and measurements. The second is the computerized light microscope with a vision system that inspects the scene and acquires images of the specimen; newly developed vision algorithms analyze microstructures in the scene against a known a priori model, extracting the position and shape of the objects, which are then transmitted as feedback to the user of the VRME system to update the virtual environment. The third part, an internet daemon, distributes information about the position of the microstructures, their shape, and the images to the connected users, who may themselves interact with the microscope (turn and displace the specimen on a moving platform, or add their own structures to the scene for comparison). The key idea behind the VRME project is to use the intuitiveness and 3D visualization of VR environments, coupled with a vision system, to perform measurements of microstructures with high accuracy. The direct feedback between the real microscope and the VR environment, with vision as the internal loop, enables realistic, real-time distribution of measured and analyzed microstructure images.
Multi-agent robot systems for the real world must handle negotiations between agents. In this paper, we present a robot language which makes it easy to describe negotiation processes. The language provides concurrency and synchronization based on the logic programming language KL1. We incorporate into the language a look-ahead facility for handling emergent situations, so that reactive actions can be described in the language. We illustrate these facilities through cooperative tasks in pick-and-place problems.
Fusion of Visual Information Sources in Robotic Workspaces
The paper describes a practical new means by which a remote user can, from a single image of the object surface of interest, specify the objectives of a task described in terms of positioning particular tool junctures relative to user-specified surface points. Also included is a robust means by which an uncalibrated remote system -- consisting of a manipulator, two or more cameras, and a pan/tilt-mounted laser pointer -- may be used to carry out the maneuver with very high precision and reliability. This means is based on the method of 'camera-space manipulation,' in which a conveniently located but uncalibrated laser pointer, placed on an uncalibrated, autonomously actuated pan/tilt unit, is used to create the compatible maneuver objectives in the participating remote cameras, as required by camera-space manipulation. The paper discusses the range of tasks that should be achievable with the paradigm under discussion, as well as some comparisons between this paradigm and alternatives such as teleoperation and virtual-reality-based approaches. Finally, results from several experimental implementations of the method are presented.
This paper presents a study of the sensitivity of a vision-based control method known as camera-space manipulation to inaccuracies in the specified target objectives. For the application considered, the target objectives represent the image-plane appearance of a common 3D physical-space point in the cameras that participate in the task. Inaccurate or incompatible target objectives arise because of the manner in which this information is relayed to the robot system from a remote operator. A simulation-based sensitivity analysis is therefore presented and discussed with respect to the level of terminal positioning precision that can be achieved in light of the incompatible target objectives.
Imaging parameters such as focus strongly influence data quality and the performance of content-extraction techniques. A narrow depth of field gives clear focus, but only over a short range of depths. This paper shows results from an algorithm that uses computer-controlled focus and pan camera movement to obtain a composite scene image that is in focus at every point. The goal is to explore possible algorithms for using both intrinsic (focus) and extrinsic (pan and tilt) camera movements to generate an image sequence and then efficiently obtain a fused composite.
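A sketch of one common way to perform the compositing step, assuming an already-aligned focal stack: per pixel, keep the value from whichever frame has the highest local sharpness, here measured with a smoothed squared-Laplacian response. The focus measure, window size, and toy data are assumptions, not the paper's algorithm.

```python
# Focus compositing over an aligned focal stack: pick, per pixel, the
# frame with the strongest local Laplacian (sharpness) response.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_composite(stack):
    """stack: (n_frames, H, W) grayscale focal stack, already aligned."""
    sharpness = np.stack([
        uniform_filter(laplace(frame.astype(float)) ** 2, size=9)
        for frame in stack
    ])
    best = np.argmax(sharpness, axis=0)          # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Toy stack: two frames, each "in focus" (sharp) on only one half.
rng = np.random.default_rng(3)
h, w = 64, 64
base = rng.random((h, w))
f0 = base.copy()
f0[:, w // 2:] = uniform_filter(f0[:, w // 2:], 5)   # right half blurred
f1 = base.copy()
f1[:, :w // 2] = uniform_filter(f1[:, :w // 2], 5)   # left half blurred
out = focus_composite(np.stack([f0, f1]))
print("composite shape:", out.shape)
```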
A new method of constructing 3D maps based on the relative magnification and blurring of a pair of images is presented, where the images are taken at two camera positions separated by a small displacement. The method, referred to here as 'depth from magnification and blurring,' aims at generating a precise 3D map of a local scene of objects to be manipulated by a robot arm with a hand-eye camera. The method uses a single standard camera with a telecentric lens, and assumes neither active illumination nor active control of camera parameters. The proposed depth-extraction algorithm is computationally simple. By fusing the two disparate sources of depth information, magnification and blurring, the proposed method provides more accurate and robust depth estimation. Experimental results demonstrate the effectiveness of the proposed method.
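The paper's magnification and blur estimators are not reproduced here, but the fusion step it motivates can be sketched generically: given two independent depth estimates with known variances, the minimum-variance combination is the inverse-variance weighted average.

```python
# Generic fusion of two depth cues (here standing in for magnification
# and blur estimates): the inverse-variance weighted average is the
# minimum-variance unbiased combination of independent estimates.
def fuse_depth(z_mag, var_mag, z_blur, var_blur):
    w_mag = 1.0 / var_mag
    w_blur = 1.0 / var_blur
    z = (w_mag * z_mag + w_blur * z_blur) / (w_mag + w_blur)
    var = 1.0 / (w_mag + w_blur)
    return z, var

# E.g. magnification says 0.52 m (sigma 2 cm), blur says 0.50 m (sigma 1 cm).
z, var = fuse_depth(0.52, 0.02 ** 2, 0.50, 0.01 ** 2)
print("fused depth: %.4f m, sigma: %.4f m" % (z, var ** 0.5))
```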