This paper presents an exact solution to the delayed data problem for the information form of the Kalman filter, together with its application to decentralised sensing networks. To date, the most common method of handling delayed data in sensing networks has been to use a conservative time alignment of the observation data with the filter time. However, by accounting for the correlation between the late data and the filter over the delayed period, an exact solution is possible. The inclusion of this information correlation term adds little extra complexity, and may be applied in an information filter update stage which is associative. The delayed data algorithm can also be used to handle data that is asequent or out of order. The asequent data problem is presented in a simple recursive information filter form. The information filter equations presented in this paper are applied in a decentralised picture compilation problem. This involves multiple aircraft tracking multiple ground targets and the construction of a single common tactical picture.
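As a point of reference, a minimal sketch of the standard additive information-filter update is given below (Python/NumPy). The function and variable names are illustrative; the paper's exact delayed-data correction, which carries an additional correlation term over the delay period, is not reproduced here.

import numpy as np

# Minimal sketch of the standard information-filter update (not the paper's
# exact delayed-data correction). All names are illustrative.

def info_update(y, Y, z, H, R):
    """Fuse one observation z = H x + v, v ~ N(0, R), in information form.

    y : information vector (Y @ x_hat)
    Y : information (inverse covariance) matrix
    The update is additive, which is why fusing contributions from many
    sensors, or applying a late contribution after the fact, is associative
    and order-independent.
    """
    Rinv = np.linalg.inv(R)
    i = H.T @ Rinv @ z      # observation information vector
    I = H.T @ Rinv @ H      # observation information matrix
    return y + i, Y + I

# Example: 2-state estimate, scalar position observation
Y = np.eye(2) * 0.5
y = Y @ np.array([1.0, 0.0])
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
y, Y = info_update(y, Y, np.array([1.2]), H, R)
x_hat = np.linalg.solve(Y, y)   # recover the state estimate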
Many future missions for mobile robots demand multi-robot systems which are capable of operating in large environments for long periods of time. A critical capability is that each robot must be able to localize itself. However, GPS cannot be used in many environments (such as within city streets, under water, indoors, beneath foliage or extra-terrestrial robotic missions) where mobile robots are likely to become commonplace. A widely researched alternative is Simultaneous Localization and Map Building (SLAM): the vehicle constructs a map and, concurrently, estimates its own position. In this paper we consider the problem of building and maintaining an extremely large map (of one million beacons). We describe a fully distributed, highly scalable SLAM algorithm which is based on distributed data fusion systems. A central map is maintained in global coordinates using the Split Covariance Intersection (SCI) algorithm. Relative and local maps are run independently of the central map and their estimates are periodically fused with the central map.
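For orientation, a minimal sketch of plain Covariance Intersection is given below in Python/NumPy. Split CI, as used in the paper, additionally partitions each covariance into independent and correlated parts, which this sketch omits; all names and numbers are illustrative.

import numpy as np

# Sketch of plain Covariance Intersection (CI); Split CI additionally splits
# each covariance into independent and possibly correlated parts (not shown).

def covariance_intersection(xa, Pa, xb, Pb, omega):
    """Fuse two estimates with unknown cross-correlation.

    omega in [0, 1] weights the two information contributions; it is commonly
    chosen to minimize the trace or determinant of the fused covariance.
    """
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    P = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
    x = P @ (omega * Pa_inv @ xa + (1.0 - omega) * Pb_inv @ xb)
    return x, P

# Pick omega by a coarse search that minimizes the fused covariance trace
xa, Pa = np.array([1.0, 2.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([1.2, 1.8]), np.diag([3.0, 1.0])
omegas = np.linspace(0.01, 0.99, 99)
best = min(omegas, key=lambda w: np.trace(covariance_intersection(xa, Pa, xb, Pb, w)[1]))
x, P = covariance_intersection(xa, Pa, xb, Pb, best)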
This paper is concerned with the simultaneous localization and map building (SLAM) problem. The SLAM problem asks if it is possible for an autonomous vehicle to start in an unknown location in an unknown environment and then to incrementally build a map of this environment while simultaneously using this map to compute absolute vehicle location. Conventional approaches to this problem are plagued with a prohibitively large increase in computation with the size of the environment. This paper offers a new solution to the SLAM problem that is both consistent and computationally feasible. The proposed algorithm builds a map expressing the relationships between landmarks which is then transformed into landmark locations. Experimental results are presented employing the new algorithm on a subsea vehicle using a scanning sonar sensor.
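As a toy illustration of the "relative map to landmark locations" step, the sketch below recovers absolute 1-D landmark positions from noisy relative offsets and one anchored landmark by linear least squares. This is only a generic graph-style reconstruction under assumed data, not the paper's filter.

import numpy as np

# Recover absolute landmark positions from estimated pairwise offsets and one
# anchored landmark (1-D, purely illustrative).

def relative_to_absolute(n, pairs, offsets, anchor_idx=0, anchor_pos=0.0):
    """pairs: list of (i, j); offsets[k] estimates x[j] - x[i]."""
    A = np.zeros((len(pairs) + 1, n))
    b = np.zeros(len(pairs) + 1)
    for k, (i, j) in enumerate(pairs):
        A[k, i], A[k, j], b[k] = -1.0, 1.0, offsets[k]
    A[-1, anchor_idx], b[-1] = 1.0, anchor_pos      # fix the gauge freedom
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

pairs = [(0, 1), (1, 2), (0, 2)]
offsets = [1.0, 2.1, 3.0]        # noisy relative estimates
x = relative_to_absolute(3, pairs, offsets)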
This paper describes the development of a two-tier state estimation approach for NASA/JPL's FIDO Rover that utilizes wheel odometry, inertial measurement sensors, and a sun sensor to generate accurate estimates of the rover's position and attitude throughout a rover traverse. The state estimation approach makes use of a linear Kalman filter to estimate the rate sensor bias terms associated with the inertial measurement sensors and then uses these estimated rate sensor bias terms to compute the attitude of the rover during a traverse. The estimated attitude terms are then combined with the wheel odometry to determine the rover's position and attitude through an extended Kalman filter approach. Finally, the absolute heading of the vehicle is determined via a sun sensor which is then utilized to initialize the rover's heading prior to the next planning cycle for the rover's operations. This paper describes the formulation, implementation, and results associated with the two-tier state estimation approach for the FIDO rover.
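A heavily simplified 1-D sketch of the bias-estimation idea follows: a linear Kalman filter estimates a gyro rate bias while the vehicle is stationary, and the bias-corrected rate is then integrated to propagate heading. The numbers, noise levels and names are illustrative assumptions, not the FIDO implementation.

import numpy as np

# Illustrative 1-D sketch: estimate a gyro rate bias with a linear Kalman
# filter while the rover is stationary, then integrate the bias-corrected
# rate to propagate heading during a traverse.

def kf_bias_update(b, P, z_rate, Q=1e-6, R=1e-3):
    """One KF step for a nearly constant bias b, observed directly by the
    rate measurement z_rate when the true rate is known to be zero."""
    P = P + Q                      # random-walk bias model
    K = P / (P + R)                # scalar Kalman gain
    b = b + K * (z_rate - b)       # measurement update
    P = (1.0 - K) * P
    return b, P

b, P = 0.0, 1.0
standstill_rates = 0.01 + 0.03 * np.random.randn(200)   # bias 0.01, noisy gyro
for z in standstill_rates:
    b, P = kf_bias_update(b, P, z)

# During a traverse, propagate heading with the bias-corrected rate
heading, dt = 0.0, 0.1
for z in (0.2 + 0.01 + 0.03 * np.random.randn(50)):     # true rate 0.2 rad/s plus bias
    heading += (z - b) * dt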
This paper discusses the development of unmanned airships for military use during the past decade, and the current status of the Small Airship Surveillance System, Low Intensity Target Exploitation (SASS LITE) platform. Topics covered will also include various missions planned and conducted, and technological advances expected to be implemented on unmanned airships in the near future.
Estimation of superresolution models is a problem of great interest across a broad range of applications in computer vision and robot perception. However, approaches to superresolution model estimation tend to have very high computational complexity. In this paper, we address the superresolution model estimation problem using a general modeling approach based on two-layer Bayesian or causal networks. Sensor nodes encode stochastic sensor models, while model nodes encode probabilistic inferences made about their state. The model nodes are arranged as an MRF spatial lattice. We derive optimal estimation procedures for several classes of superresolution world models, including single and multiple observation models, and analyze their computational complexity. We subsequently introduce three suboptimal estimation methods: Reinjection of Marginals (ROM), Independent Opinion Pool (IOP), and Non-Propagation of Neighbors (NPN). These methods, although suboptimal, are extremely efficient and provide high-quality superresolution estimates. We conclude by presenting results from the application of these procedures to the fusion of multiple aerial images to form highly accurate superresolution images for airborne surveying and monitoring applications.
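As a generic illustration of the opinion-pool idea, the sketch below fuses per-cell discrete probability distributions from several observations by a normalized product; the paper's full MRF lattice formulation is not reproduced, and the data shapes are assumptions for illustration.

import numpy as np

# Generic independent-opinion-pool fusion: the fused per-cell distribution is
# the normalized product of the individual observers' distributions.

def independent_opinion_pool(opinions):
    """opinions: array of shape (n_obs, n_cells, n_states)."""
    log_p = np.sum(np.log(np.clip(opinions, 1e-12, None)), axis=0)
    log_p -= log_p.max(axis=-1, keepdims=True)      # numerical stability
    p = np.exp(log_p)
    return p / p.sum(axis=-1, keepdims=True)

# Two noisy observations of 4 cells, each a distribution over 3 states
obs = np.random.dirichlet(np.ones(3), size=(2, 4))
fused = independent_opinion_pool(obs)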
Robotic unmanned aerial vehicles have an enormous potential as observation and data-gathering platforms for a wide variety of applications. This paper discusses components of a perception architecture being developed for AURORA (Autonomous Unmanned Remote Monitoring Robotic Airship). The AURORA project focuses on the development of the technologies required for substantially autonomous unmanned aerial vehicles, and for robotic airships in particular. We describe our approach to spatial representation, which incorporates a Markov Random Field (MRF) model used for encoding spatial inferences obtained from sensor imagery. We present a dynamic approach to target recognition that uses a cycle of hypothesis formulation, experiment planning for hypothesis validation, experiment execution, and hypothesis evaluation to confirm or reject the classification of targets into object classes. We also discuss an approach to automatic hovering and landing using visual servoing techniques and interaction matrices, and present preliminary experimental results from our work.
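For context, the classic image-based visual servoing law built on an interaction matrix is sketched below. It is the textbook control rule, not AURORA's specific implementation, and the interaction matrix used in the example is a random placeholder.

import numpy as np

# Classic image-based visual servoing: command a camera velocity
# v = -lambda * pinv(L) @ (s - s_star), where L is the interaction matrix.

def ibvs_velocity(s, s_star, L, gain=0.5):
    """s, s_star: current and desired image-feature vectors; L maps camera
    velocity (6-vector) to feature velocities."""
    e = s - s_star
    return -gain * np.linalg.pinv(L) @ e

# Example with two point features (4 feature coordinates, 6-DoF camera)
L = np.random.randn(4, 6)               # placeholder interaction matrix
s = np.array([0.1, 0.2, -0.1, 0.05])
s_star = np.zeros(4)
v_cam = ibvs_velocity(s, s_star, L)     # commanded camera twist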
We believe that solar powered, autonomous airships with the capability to embark on extended duration sampling missions will serve as valuable tools for environmental science. In this paper, we outline our vision for an autonomous airship and discuss some of the applications for which such a craft would be well suited. We also report on our efforts to date to realize this vision. Specifically, we discuss the use of solar energy as a renewable source of power for airships. We also describe the configuration of a nine meter airship that we will use as a testbed for environmental sampling and autonomy research. We conclude by outlining directions for future research.
We have designed and built a set of miniature robots and developed a distributed software system to control them. We present experimental results on a surveillance task in which multiple robots patrol an area and watch for motion. We discuss how the limited communication bandwidth affects robot performance in accomplishing the task and analyze how performance depends on the number of robots that share the bandwidth.
The need for lightweight yet highly mobile robotic platforms is driven by the limitation of available power. With unlimited energy, surface exploration missions could survive for months or years and greatly exceed their current productivity. The Sun-Synchronous Navigation project is developing long-duration solar-powered robot exploration through research in planning techniques and low-mass robot configurations. Hyperion is a rover designed and built for experiments in sun-synchronous exploration. This paper details Hyperion's steering mechanism and control, which features 4-wheel independent drive and an innovative passively articulated steering joint for locomotion.
In this paper, we describe an architecture for the development of autonomy software for multi-robot distributed control and collective estimation. CAMPOUT, the Control Architecture for Multi-Robot Planetary OUTposts, provides communication facilities for sharing state information across robots, and it uses a behavior network to represent and execute group activities as well as the activities of a single robot. In our research, we have shown that CAMPOUT provides a level of abstraction that enables us to develop multi-robot software in a manner very similar to single-robot software development. We showcase the main architectural components by describing two multi-robot tasks: planetary construction and collective cliff descent. For both tasks, we show how behavior networks can be used to describe group activities and how publish/subscribe and other communication mechanisms can be used to share state information across multiple robots.
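As a generic illustration only (not the CAMPOUT API), the toy publish/subscribe bus below shows how one robot's state can be mirrored into another robot's behavior layer; all class, topic and variable names are invented for the example.

# Toy publish/subscribe bus for sharing state between robot processes.

from collections import defaultdict

class Bus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for cb in self._subs[topic]:
            cb(message)

bus = Bus()
shared_state = {}

# Robot B mirrors robot A's pose so its behaviors can react to it
bus.subscribe("robotA/pose", lambda pose: shared_state.update(robotA_pose=pose))
bus.publish("robotA/pose", (1.0, 2.0, 0.5))   # x, y, heading from robot A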
In this paper, we describe a framework for coordinating multiple robots in the execution of a cooperative manipulation task. The coordination is distributed among the robots, which use modular, hybrid controllers to execute the task. Different plans and models of the environment are built using a mix of global and local information and can be shared by the team members using wireless communication. Our framework uses models and metrics of computation, control, sensing and information to dynamically assign roles to the robots and to define the most suitable control hierarchy given the requirements of the task and the characteristics of the robots. To test our framework, we have developed an object-oriented simulator that allows the user to create different types of robots, define different environments and specify various types of control algorithms and communication protocols. Using the simulator, we have been able to test the execution of this task in different scenarios with different numbers of robots. We can verify the efficacy of the robot controllers, the effects of different parameters on system performance and the ability of the robots to adapt to changes in the environment. Results from experiments with two real robots and with the simulator are used to explore multi-robot coordination in a cooperative manipulation task.
In this paper we present two approaches for tracking people in dynamic environments with a moving sensor system. The trajectories of persons are used to extract simple motion patterns. Object detection and tracking are based on range and color information provided by a laser range finder and an omnidirectional color camera. Without any predefined person model, the system acquires an internal representation of the person during an initialization phase. This representation is tracked in real time in dynamic environments. During the tracking procedure, illumination conditions are continuously monitored. The tracking approaches were implemented on a robotic wheelchair and evaluated experimentally in different dynamic environments.
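A generic sketch of fusing color and range cues is given below: a reference color histogram learned at initialization is compared to candidates via the Bhattacharyya coefficient and weighted by range consistency. The scoring function, parameters and data are assumptions for illustration, not the paper's method.

import numpy as np

# Score candidate detections by color similarity to a learned reference
# histogram, weighted by consistency with the predicted range.

def bhattacharyya(h1, h2):
    return np.sum(np.sqrt(h1 * h2))

def score_candidate(cand_hist, ref_hist, cand_range, pred_range, sigma_r=0.3):
    color_term = bhattacharyya(cand_hist, ref_hist)
    range_term = np.exp(-0.5 * ((cand_range - pred_range) / sigma_r) ** 2)
    return color_term * range_term

ref_hist = np.random.dirichlet(np.ones(16))        # learned at initialization
candidates = [(np.random.dirichlet(np.ones(16)), 2.0 + 0.2 * np.random.randn())
              for _ in range(5)]
best = max(candidates, key=lambda c: score_candidate(c[0], ref_hist, c[1], 2.0))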
This study is a preliminary step toward developing a multi-purpose, robust autonomous carrier mobile robot that can transport trolleys or heavy goods and serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion (sonar, CCD camera and IR sensors) for map building and navigation, and to describe an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent thorough books and review papers already cover this topic. Instead we focus on the main results of relevance to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We conclude by discussing some possible future extensions of the project. We first deal with the general principles of the navigation and guidance architecture, and then with the detailed functions of environment map updating, obstacle detection and motion assessment, together with the first results from the simulation runs.
On the software side of modular robotics, there are many challenging steps towards the ultimate intelligent and autonomous robot module (sub-component). Examples of such challenges are deriving distributed task planning, distributed motion planning and distributed control methods. In this paper we focus on the derivation of a distributed, or truly modular, local motion planner. Such a motion planner must take as input tasks for the assembled modular robot and, without any central intelligence, deliver the necessary robot motion. More specifically, we illustrate how the method of artificial forces, together with a new description of the robot kinematics, can be used to develop a distributed local motion planner. The motion planner is truly distributed, as the software on each module only needs information about the module itself and about the modules physically directly connected to it. Although the method is essentially very general, for simplicity we illustrate it only for planar K-linked modular robots. We present the motion planning algorithms for link and joint processors for this special case and show output of simulations. Furthermore, we present results from applying the motion planner to a planar truly modular robot consisting of 4 links and 4 revolute joints, each with its own on-board processor.
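A minimal sketch of the artificial-force idea for a single module is shown below, using a standard attractive/repulsive potential-field rule; the constants and the force law are illustrative assumptions rather than the paper's formulation.

import numpy as np

# Each module computes a force from information it holds locally (its own
# position, a task/goal point, nearby obstacles) and follows that force.
# This is a generic potential-field rule, not the paper's exact method.

def artificial_force(p, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    f = k_att * (goal - p)                      # attraction toward the task
    for o in obstacles:
        d = np.linalg.norm(p - o)
        if 1e-9 < d < d0:                       # repel only when close
            f += k_rep * (1.0 / d - 1.0 / d0) * (p - o) / d**3
    return f

p = np.array([0.0, 0.0])
goal = np.array([2.0, 1.0])
obstacles = [np.array([1.0, 0.4])]
step = 0.05
for _ in range(100):
    p = p + step * artificial_force(p, goal, obstacles)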
A basic decentralized control algorithm for modular manipulators was introduced to achieve robustness against partial failures, a reduced calculation load, and flexibility with regard to configuration and obstacle avoidance. The algorithm uses a coupled nonlinear spring as a model for a manipulator and calculates the shape of the manipulator to balance with the coupled spring. Because encoding using coupled nonlinear dynamics can explain various shapes very simply, we can reduce the complexity of obstacle avoidance and adaptation to partial faults.
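As a loose illustration only, the sketch below relaxes neighboring joint angles coupled by virtual linear springs until their torques balance, so each joint needs only its neighbors' angles; the paper's coupled nonlinear spring dynamics are not reproduced, and the gains and target curvature are invented for the example.

import numpy as np

# Relax joint angles of a planar chain under virtual springs that prefer a
# common curvature; each joint update uses only its neighbors' angles.

def relax_shape(theta, target_curvature=0.2, k=1.0, steps=200, dt=0.05):
    theta = theta.copy()
    for _ in range(steps):
        torque = np.zeros_like(theta)
        # spring torque from each neighbor, preferring a common angle increment
        torque[1:]  += k * (theta[:-1] - theta[1:] + target_curvature)
        torque[:-1] += k * (theta[1:] - theta[:-1] - target_curvature)
        theta += dt * torque
    return theta

theta0 = np.zeros(8)               # 8-joint planar manipulator, initially straight
theta_eq = relax_shape(theta0)     # settles toward a uniform-curvature shape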
We have designed and are developing a modular robot system called PetRo (Pet Robot) as part of the ReLIVE project. In this paper we briefly introduce the ReLIVE project to give an overall picture of the context within which we are developing PetRo. We compare the design and functions of PetRo to the modular robots we have surveyed, and we list the degrees of freedom our configuration delivers. Several issues had to be addressed during the development of PetRo, such as the design of the shape and the joints. We present the results we have achieved as well as the simulations we have run to analyze the mobility and self-assembly of the system in combinations of one, two and four modules. More specifically, we present the outcomes of fabricating the first module, together with the design changes these results made necessary. Several trade-offs had to be made between the complexity of the design and the simplicity of the actuation and control; we present these alongside our reasons for selecting the current configuration, with self-configuration and overall mobility in mind. We also put forward proposals for including an array of sensors for autonomous behavior, and we explain our vision of a herd of PetRos with social behaviors.
In this paper we describe the application of the MARS model, for modelling and reasoning about modular robot systems, to modular manipulators. The MARS model provides a mechanism for describing robotic components and a method for reasoning about the interaction of these components in modular manipulator configurations. It specifically aims to articulate functionality that is a property of the whole manipulator, but which is not represented in any one component. This functionality arises, in particular, through the capacity for modules to inherit functionality from each other. The paper also uses the case of modular manipulators to illustrate a number of features of the MARS model, including the use of abstract and concrete module classes, and to identify some current limitations of the model. The latter provide the basis for ongoing development of the model.
An experimental modeling language for general purpose simulation, robotic control and factory automation is presented. The advantages of a visual programming interface and data flow architecture are examined. Primarily, the model based organization is proposed as a means of integrating various intelligent systems disciplines to enhance problem solving abilities and improve system utility.
Sensors, Controls, Modeling, and Man-Machine Interfaces
This paper considers the processing of thermal images taken during an arc spraying process. It describes a filtering method for reconstructing an estimate of the required surface temperature from a sequence of images that contain unpredictable amounts of blurring and obscuration. A practical use of the filtering to control average surface temperature during metal arc spraying is also described.
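As a stand-in illustration only (the paper's filter is not reproduced), the sketch below runs a per-pixel recursive estimator with a simple innovation gate that ignores pixels in frames that appear blurred or obscured; the array sizes, gain and gate are assumptions.

import numpy as np

# Per-pixel recursive temperature estimator with a gate on the innovation so
# that heavily obscured pixels do not corrupt the estimate.

def update_estimate(est, frame, alpha=0.2, gate=15.0):
    """est, frame: 2-D temperature arrays in the same units. Pixels whose
    innovation exceeds `gate` are treated as obscured and left unchanged."""
    innovation = frame - est
    valid = np.abs(innovation) < gate
    return est + alpha * np.where(valid, innovation, 0.0)

est = np.full((64, 64), 200.0)                 # initial surface estimate
for _ in range(30):
    frame = 220.0 + 5.0 * np.random.randn(64, 64)
    est = update_estimate(est, frame)
mean_temperature = est.mean()                  # value a controller might act on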
In Optical Computerized Tomography (OCT) reconstruction, the well-known restrictions are positivity of the field and the boundary conditions. In this paper, extra information, namely the probed values at a set of isolated points (readily available in industrial practice), is introduced into OCT using information fusion techniques. To resolve the incompatibility between the different measurement systems, which introduces high-frequency disturbances into the reconstruction, an adaptive algorithm is developed that makes good use of this pointwise prior. The algorithm, derived from the third Algebraic Reconstruction Technique (ART3), is not only friendly to information from various measuring systems but also very robust to irregular noise and even to missing projections. On the basis of simulated experiments, we strongly suggest that OCT reconstruction should take account of information from external sources.
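For orientation, a basic ART (Kaczmarz) sweep with positivity and pointwise constraints is sketched below; ART3 proper uses interval constraints on each projection, so this is only a simplified illustration of how probed point values can be injected between sweeps. The matrices and indices are invented for the example.

import numpy as np

# Basic ART (Kaczmarz) sweeps with positivity and probed-point constraints.

def art_with_point_prior(A, b, known_idx, known_val, n_sweeps=20, relax=1.0):
    """A: projection matrix, b: measured projections,
    known_idx/known_val: indices and probed values of isolated field points."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):             # one Kaczmarz sweep
            a = A[i]
            denom = a @ a
            if denom > 0:
                x += relax * (b[i] - a @ x) / denom * a
        x = np.clip(x, 0.0, None)               # positivity of the field
        x[known_idx] = known_val                # enforce probed point values
    return x

A = np.abs(np.random.randn(30, 16))
x_true = np.clip(np.random.randn(16) + 1.0, 0.0, None)
b = A @ x_true
x_rec = art_with_point_prior(A, b, known_idx=[3, 10], known_val=x_true[[3, 10]])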
Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors at different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization, built on an open source real-time operating system, is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.
This paper describes the construction of a sensor network system for measuring and accumulating human behavior in a room. The network system consists of three layers: a sensing layer, a processing layer and an integration layer. It is capable of distributed processing in each module, where a module is a group of several sensors. The implemented Robotic Room 2 system, based on distributed objects, allows not only easy integration and accumulation of sensor information but also parallel execution of the integrated measurement and accumulation systems.
This paper proposes an integrated virtual space control system for an intelligent room, which consists of many kinds of appliances and information systems intended to enrich human life. The main features of the system are: 1) the user controls the system via a large display which represents the system state linked to the real appliances, 2) the system uses hand gestures to realize intuitive interaction, and 3) the virtual space on the display uses a 3D drag-and-drop metaphor. As the core component of the proposed system, the paper reports the implementation of a hand-motion tracking system using multiple views and a controllable camera. Experimental results show that the tracking system avoids the problem of system scale explosion while achieving accuracy and stability in real-time tracking.