Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of it. The robots maintain position estimates within a global coordinate frame using landmark recognition, which allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates run at 3 Hz, and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of the shared map and posts updates to the common map when it returns to observe a landmark at home base. Issues addressed include synchronization, mutual localization, navigation, exploration, map registration, merging of repeated views (fusion), and centralized versus decentralized maps.
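The abstract does not give the map update rule; the sketch below only illustrates the kind of conservative occupancy-grid update such stereo mappers typically use, in log-odds form. The cell size, sensor-model increments and map dimensions are assumptions chosen for illustration, not values from the paper.

import numpy as np

# Minimal occupancy-grid update sketch (illustrative values, not the paper's).
# A stereo range hit raises the log-odds of the cell it lands in and lowers the
# log-odds of the cells along the ray from the robot to the hit (free space).

CELL = 0.10                      # grid resolution in meters (assumed)
L_OCC, L_FREE = 0.9, -0.4        # log-odds increments (assumed sensor model)

def to_cell(x, y):
    return int(round(x / CELL)), int(round(y / CELL))

def update(grid, robot_xy, hit_xy):
    """Update a log-odds grid with one stereo range hit."""
    r0, c0 = to_cell(*robot_xy)
    r1, c1 = to_cell(*hit_xy)
    n = max(abs(r1 - r0), abs(c1 - c0), 1)
    for i in range(n):                       # cells along the ray are free
        r = r0 + round(i * (r1 - r0) / n)
        c = c0 + round(i * (c1 - c0) / n)
        grid[r, c] += L_FREE
    grid[r1, c1] += L_OCC                    # the hit cell is occupied
    return grid

grid = np.zeros((200, 200))                  # a shared map both robots post into
update(grid, robot_xy=(5.0, 5.0), hit_xy=(7.0, 6.5))

Because each robot keeps its position in the common global frame, updates from either robot can be posted into the same grid.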
In this paper we discuss a new approach for developing useful robotic systems. The main idea is to simplify robotic systems by eliminating the sensory and control hardware from the robotic platform. Such `sensor-less' systems have significant advantages over the more traditional `sensor-driven' systems. The paper demonstrates the efficacy of this new approach for the autonomous navigation of roving robots that require no onboard sensing or control.
Advanced manipulation skills that enable cooperating robots in a work cell to recognize, handle and assemble arbitrarily placed objects require sensory information on both the environment and the assembly process. With standard approaches it becomes difficult to coordinate sensor usage and data fusion for a given task when the number of sensors becomes large or changes at run-time. We present a solution to both coordination and fusion based on the multi (sensor) agent paradigm: each agent implements a sensory skill, a negotiation protocol and a physical communication interface to all other agents. Within this sensor network, teams cooperate on a common task. They are formed dynamically after a negotiation phase following a specific task formulation. Our framework consists of a formal specification of the requirements that the sensory skill of an agent has to meet and a comprehensive library of C++ objects encapsulating all of the negotiation protocol and communications. This separation makes it very easy to implement individual sensory skills. We show how the abstract concepts of this approach and the metaphor of `negotiation' work in a real-world network: several uncalibrated cameras are used to guide a manipulator towards a target. We also show how agent teams may easily (self-)reconfigure during task execution in the case of unexpected events. The framework is distributed free of charge and can be obtained over the Internet at http://magic.uni-bielefeld.de.
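The negotiation protocol and the C++ library themselves are not reproduced in the abstract; the sketch below only illustrates the general idea of announcement-and-bid team formation (a contract-net-style simplification, not the framework's actual protocol). The agent names, skills and scoring are invented for illustration.

# Contract-net-style simplification of dynamic team formation (illustrative only).

class SensorAgent:
    def __init__(self, name, skill, quality):
        self.name, self.skill, self.quality = name, skill, quality

    def bid(self, task):
        """Return a bid if this agent's sensory skill matches the task."""
        return self.quality if self.skill in task["needs"] else None

def form_team(task, agents, team_size):
    bids = [(a.bid(task), a) for a in agents]
    bids = [(b, a) for b, a in bids if b is not None]
    bids.sort(key=lambda ba: ba[0], reverse=True)     # best bids first
    return [a for _, a in bids[:team_size]]

agents = [SensorAgent("cam0", "vision", 0.8),
          SensorAgent("cam1", "vision", 0.6),
          SensorAgent("ft0", "force", 0.9)]
task = {"name": "guide manipulator to target", "needs": {"vision"}}
team = form_team(task, agents, team_size=2)
print([a.name for a in team])        # the cameras negotiate themselves into a team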
We present an architecture for the behavioral organization of autonomous robots. Using navigation as an example, we describe how complex behavior can be broken up into multiple elementary behaviors. The overall behavior is generated by activating and deactivating the elementary behaviors depending on both the sensor input and the intrinsic logic of the behavioral plan needed to fulfill the task. The elementary behaviors, as well as their organization into behavioral sequences, are achieved by appropriately designed nonlinear dynamical systems. We show how intrinsically discrete functionalities such as counting and decision making can be realized by nonlinear dynamical systems, and how these dynamics can be coupled stably and flexibly.
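As a concrete, much simplified illustration of behavior generated by a nonlinear dynamical system, the sketch below integrates the well-known attractor dynamics of heading direction: the target direction acts as an attractor and an obstacle direction as a repellor. The gains, ranges and directions are arbitrary, and the paper's actual dynamical systems (including the discrete counting and decision dynamics) are not reproduced here.

import math

# Simplified attractor dynamics of heading direction (illustrative parameters):
# the target direction attracts the heading, an obstacle direction repels it.

def heading_rate(phi, psi_target, psi_obs, k_tar=1.0, k_obs=2.0, sigma=0.5):
    attract = -k_tar * math.sin(phi - psi_target)
    repel = k_obs * (phi - psi_obs) * math.exp(-((phi - psi_obs) ** 2) / (2 * sigma ** 2))
    return attract + repel

phi, dt = 0.0, 0.05
for _ in range(200):                     # forward-Euler integration
    phi += dt * heading_rate(phi, psi_target=1.2, psi_obs=0.3)
print(round(phi, 2))                     # settles near the target, deflected away from the obstacle

Activating or deactivating an elementary behavior then amounts to switching its contribution to the heading dynamics on or off.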
This paper presents an analysis of a decentralized coordination strategy for organizing and controlling a team of mobile robots performing collective search. The alpha-beta coordination strategy is a family of collective search algorithms that allow teams of communicating robots to implicitly coordinate their search activities through a division of labor based on self-selected roles. In an alpha-beta team, alpha agents are motivated to improve their status by exploring new regions of the search space. Beta agents are conservative, and rely on the alpha agents to provide advanced information on favorable regions of the search space. An agent selects its current role dynamically based on its current status value relative to the current status values of the other team members. Status is determined by some function of the agent's sensor readings, and is generally a measurement of source intensity at the agent's current location. Variations on the decision rules determining alpha and beta behavior produce different versions of the algorithm that lead to different global properties. The alpha-beta strategy is based on a simple finite-state machine that implements a form of Variable Structure Control (VSC). The VSC system changes the dynamics of the collective system by abruptly switching at defined states to alternative control laws. In VSC, Lyapunov's direct method is often used to design control surfaces which guide the system to a given goal. We introduce the alpha-beta algorithm and present an analysis of its equilibrium point and global stability based on Lyapunov's method.
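The exact decision rules are a design variable of the algorithm family; the sketch below shows one simple variant for illustration only: an agent whose status is at or above the team median acts as an alpha and explores, otherwise it acts as a beta and moves toward the most favorable reported location. The median threshold and step sizes are assumptions, not the paper's rules.

import random

# Illustrative alpha-beta role selection and motion step (simplified decision rule).

def select_role(my_status, team_statuses):
    median = sorted(team_statuses)[len(team_statuses) // 2]
    return "alpha" if my_status >= median else "beta"

def step(pos, role, best_known_pos, step_size=0.5):
    if role == "alpha":                  # explore: random move into new territory
        return tuple(p + random.uniform(-step_size, step_size) for p in pos)
    # beta: conservative move toward the most favorable reported region
    return tuple(p + 0.5 * (b - p) for p, b in zip(pos, best_known_pos))

team_statuses = [0.2, 0.7, 0.4, 0.9]     # e.g. source intensity at each robot
print(select_role(0.7, team_statuses))   # -> alpha
print(step((0.0, 0.0), "beta", best_known_pos=(2.0, 1.0)))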
The design and construction of mobile robots is as much art as science. In hardware architecture, researchers tend to construct a low-cost and reliable platform equipped with various sensory systems that sense changes in the environment and provide useful information to the navigation system. An autonomous navigation system plays the same role in a mobile robot that the brain plays in a human being: it generates action commands from the sensory data provided by the perception system to direct the mobile robot to desired positions or to accomplish useful tasks in the real world without human intervention. An important problem in autonomous navigation is the need to cope with the large amount of uncertainty inherent in natural environments. The development of techniques for autonomous navigation in real-world environments therefore constitutes one of the major trends in current robotics research. Inspired by the concepts of software agents, reactive control and behavior-based control, a modular architecture for mobile navigation, called Auto-agent, is proposed. Its main characteristics are as follows. Behavioral agents cooperate by communicating intermittently with other behavioral agents to achieve their local goals and the goals of the community as a whole, because no agent individually has sufficient competence, resources and information to solve the entire problem. Auto-agent gains advantages from the characteristics of distributed systems: it offers the possibility of finding an acceptable solution within a reasonable range of time and complexity. In addition, the modular structure makes it convenient for an engineer to construct a new behavioral agent and add it to Auto-agent.
This paper describes the development of advanced rover navigation and manipulation techniques for use by NASA's Sample Return Rover. These techniques include an algorithm for estimating the change in the rover's position and orientation by registering successive range maps from the rover's hazard avoidance stereo camera pair and the fusion of this information with the rover's wheel odometry. This map registration technique is also extended to register range maps to an a priori model-based range map for relative rover position and orientation determination. Finally, a technique for the robust and precise positioning of a rover-mounted manipulator using visual feedback from the rover's stereo pair is presented. Experimental results for each of these techniques are documented in this paper.
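The fusion equations are not given in the abstract; the following is a minimal, one-dimensional, covariance-weighted sketch of combining a wheel-odometry displacement with a displacement obtained from range-map registration. All variances are invented for illustration.

# Covariance-weighted fusion of a wheel-odometry estimate with a pose estimate
# from range-map registration (1-D sketch; variances are illustrative).

def fuse(x_odom, var_odom, x_reg, var_reg):
    """Return the minimum-variance combination of two independent estimates."""
    w = var_reg / (var_odom + var_reg)       # weight on the odometry estimate
    x = w * x_odom + (1.0 - w) * x_reg
    var = (var_odom * var_reg) / (var_odom + var_reg)
    return x, var

# Odometry says the rover moved 1.00 m (prone to slip, high variance);
# registration of successive stereo range maps says 0.93 m (lower variance).
x, var = fuse(1.00, 0.04, 0.93, 0.01)
print(round(x, 3), round(var, 4))            # fused estimate leans on registration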
In this paper, state and information space estimation methods used in both linear and nonlinear systems are compared. General recursive estimation, and in particular the Kalman filter, is discussed, and a Bayesian approach to probabilistic information fusion is outlined. The notion and measures of information are defined. This leads to the derivation of the algebraic equivalent of the Kalman filter, the linear information filter. The characteristics of this filter and the advantages of information space estimation are discussed. Examples are then implemented in software to illustrate the algebraic equivalence of the Kalman and information filters, and the benefits of information space are explored in these case studies. State estimation for systems with nonlinearities is considered and the extended Kalman filter is treated. Linear information space is then extended to nonlinear information space by deriving the extended information filter. This establishes all the mathematical tools required for exhaustive information space estimation. The advantages of the extended information filter over the extended Kalman filter are presented and demonstrated; this extended information filter constitutes an original and significant contribution to estimation theory made in this paper. Examples of systems involving both nonlinear state evolution and nonlinear observations are simulated. Thus, the algebraic equivalence of the two filters is illustrated and the benefits of nonlinear information space are demonstrated.
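For readers unfamiliar with the information form, the standard relationships behind the claimed algebraic equivalence can be summarized as follows (textbook notation, which may differ from the paper's). The information matrix and information state vector are

Y(i|j) = P(i|j)^{-1}, \qquad \hat{y}(i|j) = P(i|j)^{-1}\,\hat{x}(i|j).

For the linear observation model z(k) = H x(k) + v(k), v \sim N(0, R), the update step becomes purely additive,

\hat{y}(k|k) = \hat{y}(k|k-1) + H^\top R^{-1} z(k), \qquad Y(k|k) = Y(k|k-1) + H^\top R^{-1} H,

while prediction through x(k) = F x(k-1) + w(k), w \sim N(0, Q) requires

Y(k|k-1) = \left[\, F\,Y(k-1|k-1)^{-1}F^\top + Q \,\right]^{-1}, \qquad \hat{y}(k|k-1) = Y(k|k-1)\,F\,Y(k-1|k-1)^{-1}\,\hat{y}(k-1|k-1).

The additive update is what makes the information form attractive for multisensor fusion; the extended information filter is obtained by replacing F and H with the Jacobians of the nonlinear state-transition and observation models, linearized about the current estimate.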
Range-vision sensor systems can incorporate range images or single point measurements. Research incorporating point range measurements has focused on the area of map generation for mobile robots. These systems can utilize the fact that the objects sensed tend to be large and planar. The approach presented in this paper fuses information obtained from a point range measurement with visual information to produce estimates of the relative 3D position and orientation of a small, non-planar object with respect to a robot end-effector. The paper describes a real-time sensor fusion system for performing dynamic visual servoing using a camera and a point laser range sensor. The system is based upon the object model reference approach. This approach, which can be used to develop multi-sensor fusion systems that fuse dynamic sensor data from diverse sensors in real-time, uses a description of the object to be sensed in order to develop a combined observation-dependency sensor model. The range-vision sensor system is evaluated in terms of accuracy and robustness. The results show that the use of a range sensor significantly improves the system performance when there is poor or insufficient camera information. The system developed is suitable for visual servoing applications, particularly robot assembly operations.
Recently, multisensor data fusion has proven its necessity for computer vision and robotics applications. 3D scene reconstruction and model building have been greatly improved in systems that employ multiple-sensor and/or multiple-cue data fusion and integration. In this paper, we present a framework for integrating registered multiple sensory data, namely sparse range data from laser range finders and dense depth maps from shape from shading on intensity images, to improve the 3D reconstruction of visible surfaces of 3D objects. Two methods are used for data integration and surface reconstruction. In the first method, data are integrated using a local error propagation algorithm, which we develop in this paper. In the second method, the integration process is carried out using a feedforward neural network with the backpropagation learning rule. It is found that the integration of sparse depth measurements greatly enhances the 3D visible surface obtained from shape from shading in terms of metric measurements. We also review current research in the area of multisensor/multicue data fusion for 3D object reconstruction.
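The local error propagation algorithm itself is not given in the abstract; the toy sketch below shows only the simplest flavor of such an integration, pulling a dense shape-from-shading depth map toward sparse laser range samples with a confidence weight that decays with image distance. The grid size, weights and measurements are invented for illustration.

import numpy as np

# Toy integration of a dense shape-from-shading depth map with sparse range
# points: each sparse point corrects nearby depth values, with influence
# decaying with image distance (weights chosen for illustration only).

def integrate(depth_sfs, sparse_points, sigma=5.0):
    corrected = depth_sfs.copy()
    rows, cols = np.indices(depth_sfs.shape)
    for (r, c, z) in sparse_points:               # z = measured range at pixel (r, c)
        d2 = (rows - r) ** 2 + (cols - c) ** 2
        w = np.exp(-d2 / (2 * sigma ** 2))        # confidence in the correction
        corrected += w * (z - depth_sfs[r, c])    # propagate the local error
    return corrected

depth_sfs = np.full((50, 50), 1.0)                # SFS depth, correct only up to scale
sparse = [(10, 10, 1.4), (40, 30, 1.2)]           # range samples (row, col, meters)
print(integrate(depth_sfs, sparse)[10, 10])       # pulled toward the measured 1.4 m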
An ARTMAP neural network is used to integrate visual information and ultrasonic sensory information on a B14 mobile robot. Training samples for the neural network are acquired without human intervention. Sensory snapshots are retrospectively associated with the distance to the wall, provided by on-board odometry as the robot travels in a straight line. The goal is to produce a more accurate measure of distance than is provided by the raw sensors. The neural network effectively combines sensory sources both within and between modalities. The improved distance percept is used to produce occupancy grid visualizations of the robot's environment. The maps produced point to specific problems of raw sensory information processing and demonstrate the benefits of using a neural network system for sensor fusion.
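The ARTMAP network is not reproduced here; as a hedged stand-in, the sketch below mimics the same self-supervised setup (sensory snapshots labeled retrospectively with odometry-derived wall distance) using a simple nearest-neighbor regressor in place of ARTMAP. The feature values and distances are invented for illustration.

import numpy as np

# Stand-in for the described setup: snapshots of sonar + visual features are
# labeled with the wall distance reported by odometry, and a trained model then
# fuses both modalities into one distance estimate. A nearest-neighbor
# regressor is used purely for illustration (the paper uses ARTMAP).

def train(snapshots, distances):
    return np.asarray(snapshots, dtype=float), np.asarray(distances, dtype=float)

def predict(model, snapshot, k=3):
    X, y = model
    d = np.linalg.norm(X - np.asarray(snapshot, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]
    return float(y[nearest].mean())               # fused distance estimate

# Each snapshot: [sonar_range, visual_feature]; label: odometry distance to wall.
X = [[2.1, 0.30], [1.6, 0.42], [1.1, 0.58], [0.7, 0.80]]
y = [2.0, 1.5, 1.0, 0.6]
model = train(X, y)
print(predict(model, [1.2, 0.55]))                # ~1.0 m, combining both modalities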
The objective of the ongoing work is to fuse information from uncertain environmental data taken by cameras and short-range sensors, including infrared and ultrasound sensors, for strategic target recognition and task-specific action in mobile robot applications. Our present goal in this paper is to demonstrate target recognition for a service robot in a simple office environment. It is proposed to fuse all sensory signals obtained from multiple sensors over a fully layer-connected sensor network system that provides an equal-opportunity competitive environment for sensory data, in which data bearing less uncertainty, less complexity and fewer inconsistencies with the overall goal survive, while others fade out. In our work, this task is achieved as a decision fusion using the Fractal Inference Network (FIN), in which information patterns or units--modeled as textured belief functions bearing a fractal dimension due to uncertainty--propagate while being processed at the nodes of the network. Each local process of a node generates a multiresolutional feature fusion. In this model, the environment is observed by multiple sensors of different types, resolutions and spatial locations, without a prescheduled sensing scenario for data gathering. Node activation and flow control of information over the FIN is performed by a neuro-controller, a concept that has been developed recently as an improvement over the classical Fractal Inference Network. In this paper, the mathematical closed-form representation for decision fusion over the FIN is developed in a way suitable for analysis and is applied to a NOMAD mobile robot servicing an office environment.
When autonomous systems with multiple agents are considered, conventional control and supervision technologies are often inadequate because the available information is presented in a way that effectively overwhelms the user with displayed data. New virtual reality (VR) techniques can help to cope with this problem, because VR offers the chance to convey information in an intuitive manner and can combine supervision capabilities with new, intuitive approaches to the control of autonomous systems. In the approach taken, control and supervision issues were equally stressed and finally led to the new ideas and the general framework of Projective Virtual Reality. The key idea of this new approach to an intuitively operable man-machine interface for decentrally controlled multi-agent systems is to let the user act in the virtual world, detect the changes, and have an action planning component automatically generate task descriptions for the agents involved, so that actions carried out by the user in the virtual world are projected into the physical world, e.g. with the help of robots. Thus the Projective Virtual Reality approach splits the job between task deduction in the VR and task `projection' onto the physical automation components by the automatic action planning component. Besides describing the realized Projective Virtual Reality system, the paper also describes in detail the metaphors and visualization aids used to present different types of information (e.g. sensor data) in an intuitively comprehensible manner.
Smart man-machine interfaces are turning out to be a key technology for service robots and for automation applications in industrial environments, as well as in future scenarios for applications in space. In all of these fields, the use of virtual reality (VR) techniques has shown great potential. At the IRF, a virtual reality system was developed and implemented which allows the intuitive control of a multi-robot system and different automation systems under one unified VR framework. As the developed multi-robot system is also employed for space applications, the intuitive commanding of inspection and teleoperation sequences is of great interest. In order to facilitate teleoperation and inspection, we make use of several metaphors and a vision system as an `intelligent sensor'. One major metaphor presented in the paper is the `TV-view into reality', where a TV set is displayed in the virtual world with images of the real world mapped onto its screen as textures. The user can move the TV set in the virtual world and, as the image-generating camera is carried by a robot, the camera viewpoint changes accordingly. Thus the user can explore the physical world `behind' the virtual world, which is ideal for inspection and teleoperation tasks. By means of real-world images and the different measurement services provided by the underlying 3D vision system, the user can interactively build up or refine the virtual world according to the physical world he is watching through the TV set.
This paper proposes a novel human-robot interface that integrates real-time visual tracking and microphone-array signal processing. The proposed interface is intended to be used as a speech input method for a human-collaborative robot. Using it, the robot can clearly hear its human master's voice from a distance, as if a wireless microphone were placed just in front of the master. A novel technique for forming an `acoustic focus' at the human face is developed. To track and locate the face dynamically, real-time face tracking and stereo vision are utilized. To form the acoustic focus at the face, a microphone array is utilized: setting the gain and delay of each microphone properly forms an acoustic focus at the desired location, and the gains and delays are determined from the location of the face. Results of preliminary experiments and simulations demonstrate the feasibility of the proposed idea.
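The array-processing details are not given in the abstract; the sketch below shows a basic delay-and-sum scheme of the kind described, in which per-microphone delays computed from the tracked face position time-align the channels so that their sum emphasizes sound arriving from the face. The array geometry, sample rate and gains are assumptions for illustration.

import numpy as np

# Delay-and-sum sketch: delays are chosen so that sound emitted at the tracked
# face position arrives time-aligned across microphones, then the channels are
# averaged. Array geometry, sample rate and gains are illustrative only.

C = 343.0            # speed of sound, m/s
FS = 16000           # sample rate, Hz (assumed)

def steering_delays(mic_positions, focus_xyz):
    dists = np.linalg.norm(mic_positions - focus_xyz, axis=1)
    delays = (dists - dists.min()) / C            # relative propagation delays, s
    return np.round(delays * FS).astype(int)      # in samples

def focus(signals, delays):
    """Advance each channel by its steering delay and sum with equal gains."""
    n = signals.shape[1] - delays.max()
    aligned = np.stack([s[d:d + n] for s, d in zip(signals, delays)])
    return aligned.mean(axis=0)

mics = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [0.3, 0, 0]])
face = np.array([1.5, 0.4, 0.2])                  # from stereo face tracking
delays = steering_delays(mics, face)
signals = np.random.randn(4, FS)                  # placeholder microphone signals
print(focus(signals, delays).shape)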
The ability of a small rover to operate semi-autonomously in a hazardous planetary environment was recently demonstrated by the Sojourner mission to Mars in July of 1997. Sojourner stayed within a 50 meter radius of the Pathfinder lander. Current NASA plans call for extended year-long, multikilometer treks for the 2003 and 2005 missions. A greater degree of rover autonomy is required for such missions. We have recently developed a hybrid wavelet/neural network based system called BISMARC (Biologically Inspired System for Map-based Autonomous Rover Control) that is capable of such autonomy. Simulations reported at this meeting last year demonstrated that the system is capable of controlling multiple rovers involved in a multiple cache recovery scenario. This robust behavior was obtained through the use of a free-flow hierarchy as an action selection mechanism. This paper extends BISMARC to include fault tolerance in the rover's sensing and mechanical subsystems. The results of simulation studies in a Mars environment are also reported.
It is argued that for real-world applications action selection should be satisficing, i.e. merely `good enough' rather than optimal. It is then demonstrated that multiple objective decision theory provides a suitable framework for formulating action selection mechanisms that are satisficing. A set of experiments demonstrate the potential advantages of the proposed action selection mechanisms.
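As a small concrete illustration of satisficing, multi-objective action selection (not necessarily the paper's exact mechanism), the sketch below accepts the first candidate action whose objective vector meets all aspiration levels instead of searching for an optimum. The objectives and aspiration levels are invented.

# Satisficing selection over multiple objectives (illustrative aspiration levels):
# accept an action as soon as it is `good enough' on every objective, rather
# than optimizing a single combined score.

ASPIRATIONS = {"progress": 0.5, "clearance": 0.3, "energy": 0.4}

def good_enough(scores):
    return all(scores[k] >= v for k, v in ASPIRATIONS.items())

def select(actions):
    for name, scores in actions:          # actions in order of preference
        if good_enough(scores):
            return name
    # fall back to the best compromise if nothing satisfices
    return max(actions, key=lambda a: min(a[1].values()))[0]

actions = [
    ("go_straight", {"progress": 0.9, "clearance": 0.2, "energy": 0.8}),
    ("veer_left",   {"progress": 0.6, "clearance": 0.5, "energy": 0.6}),
    ("stop",        {"progress": 0.0, "clearance": 1.0, "energy": 1.0}),
]
print(select(actions))                    # -> veer_left: good enough on all counts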
With the use of multi-robot systems, new chances and perspectives are revealed for industrial, space and underwater applications. At the IRF, a versatile multi-robot control which fully exploits the inherent flexibility of a multi-robot system has been developed. In order to guarantee optimized system throughput and increased autonomy, the system builds on a new resource-based action planning approach to coordinate the different robot manipulators and other automation components in the workcell. An important aspect of the realized action planning component, making it applicable to real-world problems, is that it is realized as an integral part of the hierarchical multi-robot control structure IRCS (Intelligent Robot Control System). The IRCS is the multi-robot control that was chosen by the German Space Agency (DLR) for major space automation projects. In this structure the resource-based action planning component is tightly integrated with components for coordinated task execution and collision avoidance to guarantee safe operation of all agents in the multi-robot system. As the action planning component `understands' task descriptions at a high level of abstraction, it is also the perfect counterpart to a Projective Virtual Reality (VR) system. The paper describes the mechanism of resource-based action planning, the practical experience gained from the implementation for the IRCS, and its services in support of VR-based man-machine interfaces.
The aim of this work is to create a framework for the dynamic planning of sensor actions for an autonomous mobile robot. The framework uses Bayesian decision analysis, i.e., a decision-theoretic method, to evaluate possible sensor actions and select the most appropriate ones given the available sensors and what is currently known about the state of the world. Since sensing changes the knowledge of the system, and since the current state of the robot (task, position, etc.) determines what knowledge is relevant, the evaluation and selection of sensing actions is an ongoing process that effectively determines the behavior of the robot. The framework has been implemented on a real mobile robot and has proven able to control the sensor actions of the system in real time. In current work we are investigating methods to reduce or automatically generate the model information needed by the decision-theoretic method to select the appropriate sensor actions.
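The evaluation itself is not spelled out in the abstract; the sketch below shows the general shape of such a decision-theoretic selection: each candidate sensing action is scored by its expected utility under the current belief over world states, minus its cost, and the best-scoring action is executed. All states, values and costs are invented for illustration.

# Bayesian decision-analysis sketch for sensor-action selection (all numbers
# illustrative): pick the sensing action with the highest expected utility,
# i.e. the expected value of the information it yields minus its cost.

def expected_utility(action, belief):
    # belief: P(world state); action["value"]: utility of sensing in each state
    evi = sum(p * action["value"][s] for s, p in belief.items())
    return evi - action["cost"]

def select_sensor_action(actions, belief):
    return max(actions, key=lambda a: expected_utility(a, belief))

belief = {"corridor": 0.7, "open_room": 0.3}       # current world-state belief
actions = [
    {"name": "sweep_sonar", "cost": 0.1, "value": {"corridor": 0.6, "open_room": 0.2}},
    {"name": "grab_image",  "cost": 0.3, "value": {"corridor": 0.4, "open_room": 0.9}},
    {"name": "do_nothing",  "cost": 0.0, "value": {"corridor": 0.0, "open_room": 0.0}},
]
print(select_sensor_action(actions, belief)["name"])   # -> sweep_sonar here

Because the belief and the robot's task change after every action, this evaluation is repeated continuously, which is what makes the selection an ongoing process.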
The goal of sensor planning in 3D object reconstruction is to acquire enough 2D information about an object to create a 3D model of that object. This requires the acquisition of images from different viewpoints, the registration of each view, and the integration of all acquired information into a single model. Our focus is on the acquisition stage: we want to determine sensor poses, or viewpoints, that provide the next best view in an object reconstruction task. The Next Best View (NBV) is the determination of a new sensor position that reveals an optimal amount of unknown information about the object being modeled. The goal of an NBV reconstruction system is usually to model an object using the smallest number of views. Our approach is to examine a volumetric representation of the model after each new view is obtained. Since the volumetric model consists of known and unknown information, we want to find the viewpoint from which the largest amount of unknown data can be acquired. The NBV algorithm is integrated into a system that acquires synthetic range data from a computer object model, calculates new viewpoints according to an objective function, and reconstructs a complete volumetric model of the object of interest. The NBV algorithm is described in depth and experimental results are given. Also included is a review of related work.
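The objective function is not given in the abstract; the toy sketch below shows the simplest volumetric NBV criterion in that spirit: score each candidate viewpoint by the number of currently unknown voxels it would observe and choose the maximum. The voxel labels, visibility masks and grid size are placeholders (a real system would ray-cast into the volume to build the masks).

import numpy as np

# Toy volumetric Next-Best-View criterion: score each candidate viewpoint by the
# number of currently unknown voxels it would see, and pick the maximum.
# Voxel labels here: 0 = unknown, 1 = empty, 2 = occupied (illustrative encoding).

UNKNOWN = 0

def visible_unknown(volume, viewpoint_mask):
    """Count unknown voxels inside the candidate view's visibility mask."""
    return int(np.count_nonzero((volume == UNKNOWN) & viewpoint_mask))

def next_best_view(volume, candidates):
    scores = {name: visible_unknown(volume, mask) for name, mask in candidates.items()}
    return max(scores, key=scores.get), scores

volume = np.zeros((20, 20, 20), dtype=int)        # everything unknown at first
volume[:, :, :10] = 1                             # front half already seen empty
candidates = {
    "front": np.zeros_like(volume, dtype=bool),
    "back":  np.zeros_like(volume, dtype=bool),
}
candidates["front"][:, :, :12] = True             # mostly re-observes known space
candidates["back"][:, :, 8:] = True               # covers the unknown half
print(next_best_view(volume, candidates)[0])      # -> back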
In this paper, a new approach for coordinating the motion axes of a mobile manipulator, based on fuzzy behavioral algorithms, and its implementation on a physical demonstrator are presented. The kinematic redundancy of the overall system (consisting of a 7-DOF manipulator and a 3-DOF mobile robot) is used for autonomous and reactive motion of the mobile manipulator within poorly structured and even dynamically changing surroundings. Sensors around the mobile platform and along the manipulator provide the necessary information for navigation and perception of the environment.
We propose an approach for learning manipulation skills of robot arms based on complex sensor data. The first component of the model projects high-dimensional input data into an eigenspace using Principal Component Analysis. Complex sensor data can be efficiently compressed if the robot movements achieving optimal manipulation tasks are constrained to a local scenario. The second component is an adaptive B-spline model whose input space is the eigenspace and whose outputs are robot motion parameters. In the offline learning phase, an appropriate eigenspace is built by extracting eigenvectors from a sequence of sampled sensor patterns. The B-spline model is then trained for smooth and correct interpolation. In the online application phase, through the two cascaded components, a sensor pattern can be mapped into a robot action for performing the specified task. This approach makes tasks such as visually guided positioning much easier to implement: instead of undergoing cumbersome hand-eye calibration, our system is trained in a supervised learning procedure using systematic perturbation motions around the optimal manipulation pose. If more sensors, or robust geometric features from image processing, are available, they can also be added to the input vector; the proposed model can therefore integrate multiple sensors and multiple modalities. Our experimental setup is a two-arm robotic system with `self-viewing' hand-eye cameras and force/torque sensors mounted on each parallel-jaw gripper. Implementations of one-hand grasping and two-hand assembly based on visual and force sensors show that the method works even when no robust geometric features can be extracted from the sensor pattern.
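The sketch below illustrates the two cascaded components at their simplest: a PCA projection of high-dimensional sensor patterns into an eigenspace, followed by a learned map from eigenspace coordinates to motion parameters. A linear least-squares map stands in for the paper's adaptive B-spline model, and all data are synthetic.

import numpy as np

# Cascade sketch: (1) PCA compresses raw sensor patterns into an eigenspace,
# (2) a regressor maps eigenspace coordinates to robot motion parameters.
# A linear least-squares map stands in for the B-spline model; data are synthetic.

rng = np.random.default_rng(0)
patterns = rng.normal(size=(200, 1024))           # e.g. flattened image patches
motions = rng.normal(size=(200, 3))               # e.g. (dx, dy, dtheta) corrections

# 1) Offline: build the eigenspace from sample sensor patterns.
mean = patterns.mean(axis=0)
U, S, Vt = np.linalg.svd(patterns - mean, full_matrices=False)
basis = Vt[:10]                                   # top-10 eigenvectors

def project(p):
    return (p - mean) @ basis.T                   # eigenspace coordinates

# 2) Offline: fit the eigenspace -> motion mapping (a B-spline model in the paper).
Z = project(patterns)
W, *_ = np.linalg.lstsq(Z, motions, rcond=None)

# Online: map a new sensor pattern to a motion command.
new_pattern = rng.normal(size=1024)
print(project(new_pattern) @ W)                   # predicted motion parameters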
In this paper we propose a system for studying the emergence of motion patterns in autonomous mobile robotic systems. The system implements instance-based reinforcement learning control. Three spaces are of importance in the formulation of the control scheme: the work space, the sensor space, and the action space. An important feature of our system is that all of these spaces are assumed to be continuous. The core part of the system is a classifier system. Based on the sensory state space analysis, the control is decentralized and is specified at the lowest level of the control system. However, the local controllers are implicitly connected through the perceived environment information and therefore constitute a dynamic environment with respect to each other. The proposed control scheme is tested in simulation for a mobile robot in a navigation task. It is shown that some patterns of global behavior--such as collision avoidance, wall following, and light seeking--can emerge from the local controllers.
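The classifier system is not described in enough detail in the abstract to reproduce; the sketch below shows a generic instance-based scheme in the same spirit: experiences in continuous sensor and action spaces are stored as instances, candidate actions are rated by a kernel-weighted sum of stored payoffs, and occasional random actions keep the robot exploring. The constants are illustrative.

import math, random

# Generic instance-based RL sketch (not the paper's exact classifier system):
# sensor and action spaces are continuous; experiences are stored as instances
# and candidate actions are rated by a kernel-weighted sum of stored payoffs.

memory = []                                       # (sensor_state, action, payoff)

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rate(state, action):
    return sum(v * math.exp(-5.0 * (dist(s, state) + abs(a - action)))
               for s, a, v in memory)

def select_action(state, candidates, epsilon=0.1):
    if random.random() < epsilon:                 # keep exploring the action space
        return random.choice(candidates)
    return max(candidates, key=lambda a: rate(state, a))

def learn(state, action, reward):
    memory.append((state, action, reward))        # store the experience

state = (0.4, 0.8)                                # e.g. two normalized sonar readings
learn(state, action=0.3, reward=1.0)              # turning slightly avoided a wall
print(select_action(state, candidates=[-0.3, 0.0, 0.3]))   # usually 0.3, the rewarded turn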
In this paper we investigate a model for self-organizing modular robotic systems based upon dynamical systems theory. Sonar sensing is used as a case study, and the effects of nonlinear interactions between sonar sensing modules are examined. We present and analyze an initial set of results based upon an implementation of the model in simulation. The results show that the sonar sensors organize the relative phase of their sampling in response to changes in the demand placed on them for sensory data. Efficient sampling rates are achieved by the system adapting to take advantage of features in the environment. We investigate the types of phase patterns that emerge, and examine their relationship with symmetries present in the environment.
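The dynamical model is not given in the abstract; as a rough illustration of phase self-organization, the sketch below couples the firing phases of simulated sonar modules repulsively so that initially coincident pings drift apart in time and stop interfering (a standard coupled-oscillator scheme, not the paper's model). The coupling constant and rates are arbitrary.

import math, cmath

# Rough illustration of sampling phases self-organizing (not the paper's model):
# each sonar module's firing phase advances at a common rate and is pushed away
# from the other modules' phases, so initially coincident pings drift apart.

N, K, OMEGA, DT = 4, 0.5, 2.0 * math.pi, 0.01     # modules, coupling, rate, step
phases = [0.0, 0.1, 0.2, 0.3]                     # nearly simultaneous at start

def coherence(ph):
    return abs(sum(cmath.exp(1j * p) for p in ph)) / len(ph)

print(round(coherence(phases), 2))                # close to 1: pings coincide
for _ in range(5000):
    phases = [p + DT * (OMEGA + K * sum(math.sin(p - q) for q in phases))
              for p in phases]
print(round(coherence(phases), 2))                # much lower: pings spread out in time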
This paper is devoted to the control problem of a robot manipulator for a class of constrained motions in an unknown environment. To accomplish a task in the presence of uncertainties, we propose a new guidance and control strategy based on multisensor fusion. Three different sensors--robot joint encoders, a wrist force-torque sensor and a vision system--are utilized for our task. First, a sensor-based hybrid position/force control scheme is proposed for an unknown contact surface. Second, a new multisensor fusion scheme is utilized to handle an uncalibrated workcell, in which the surface carrying the path to be followed by the robot is assumed to be unknown but visible to the vision system, and the precise position and orientation of the camera(s) with respect to the base frame of the robot are also assumed to be unknown. Our work is related to areas such as visual servoing, multisensor fusion and robot control for constrained motion. The main features of the proposed approach are: (1) multisensor fusion is used both for two disparate sensors (i.e. force-torque and visual sensors) and for complementary observed data rather than redundant data, as in the traditional approach; (2) visual servoing is realized on the tangent space of the unknown surface; (3) calibration of the camera with respect to the robot is not needed.
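The control law is not given in the abstract; for orientation, a textbook hybrid position/force scheme of the kind referred to can be written (in notation that may differ from the paper's) as

\tau = J^\top\!\left[\, S\, f_{\mathrm{pos}} + (I - S)\, f_{\mathrm{force}} \right], \qquad f_{\mathrm{pos}} = K_p\,(x_d - x) + K_d\,(\dot{x}_d - \dot{x}), \qquad f_{\mathrm{force}} = f_d + K_f\,(f_d - f_m),

where \tau is the joint torque vector, J the manipulator Jacobian, S a diagonal selection matrix assigning each task-frame axis to position or force control, x and f_m the measured task-frame position and contact force, and x_d and f_d their desired values. In the approach described above, the task frame (and hence S) would be aligned with the estimated tangent space of the unknown surface, with the visual servoing acting in the position-controlled, tangential directions.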
The body of work called `Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D `success' of the desired motion, i.e., the end effector of the manipulator engaging a target at a particular location with a particular orientation, is guaranteed when there is camera-space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera space manipulation by considering a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth-moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and nonholonomic kinematics are fundamentally `path dependent'. This work focuses on the sensor-space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.
This paper discusses advancements in the speed of a semi-autonomous, calibration-free robot-control system which uses camera-space manipulation. Robust and precise vision-based manipulation using camera-space manipulation requires the establishment of 2D camera-space objectives, in each uncalibrated camera, which are consistent with the task objectives. The increased speed of the system stems from the rapid placement of `common points' for the determination of compatible maneuver objectives in the image planes of each of the participant, widely separated, uncalibrated cameras. The previous system used a single-spot laser pointer mounted on a two-axis pan/tilt unit to create the `common points' in each of the participating cameras; in a time-consuming process, the pan/tilt unit servoed the laser spot to each of the specified locations in a selection camera. The current, improved system uses structured lighting to create a grid of laser spots from a single laser source in the 2D image planes of each camera, and the matching of spots among cameras is discussed. In regions of greater surface curvature, more laser spots need to be placed to better `capture' the local curvature of the workpiece. This is achieved by a user-prescribed density of laser spots, as well as a user-specified interpolation model. The interpolation model is used for the determination of the image-plane task objectives of the participant cameras from targets specified in the selection camera, using a number of the `common points'. For relatively flat regions a linear model is used, whereas a quadratic model may be employed for regions of higher curvature. The paper also presents results using the new system which demonstrate its speed and its high level of accuracy in positioning and orienting an end effector with respect to a 3D body of arbitrary position and orientation.
Animals are thought to develop the ability to interpret the images captured on their retinas by themselves, gradually, from birth, without an external supervisor. We think that this visual function is acquired together with the development of hand reaching and grasping operations, which are executed through active interaction with the environment. From the viewpoint that the hand teaches the eye, this paper shows how visual space perception develops in a simulated robot. The robot has a simplified human-like structure used for hand-eye coordination. The experimental results may make it possible to validate this description of how the visual space perception of biological systems develops. In addition, the description suggests a way to self-calibrate the vision of an intelligent robot in a learn-by-doing manner without external supervision.