The complementary nature of LADAR, FLIR and color data for Automatic Target Recognition (ATR) is being explored by new algorithms in a three-stage recognition system. The stages are initial detection, target class and pose hypothesis generation, and precise model-to-multisensor coregistration matching. Coregistration globally aligns 3D target models with range, IR and color imagery while simultaneously refining registration parameters between sensors. This model-directed approach is expected to improve ATR performance for occluded targets, targets seen at unusual angles, and targets in cluttered settings. Color is used for initial target detection under daylight conditions. Camouflage learned from training examples generalizes across vehicles and distinguishes targets from natural terrain. Target class and pose hypothesis generation will draw upon existing LADAR boundary matching work, extended to tolerate more occlusion, clutter and viewpoint variation. New model-to-multisensor coregistration algorithms appear robust in early tests and are the basis for future coregistration matching. A new interactive 3D visualization environment allows inspection of multisensor data and coregistration, and monitoring of recognition.
Research on the formulation of invariant features for model-based object recognition has mostly been concerned with geometric constructs, either of the object or in the imaging process. We describe a new method that identifies invariant features computed from long-wave infrared imagery. These features are called thermophysical invariants and depend primarily on the material composition of the object. We use this approach to identify objects or changes in scenes viewed in downward-looking infrared imagery. Features are defined that are functions of only the thermophysical properties of the imaged materials. A physics-based model is derived from the principle of conservation of energy applied at the surface of the imaged regions. A linear form of the model is used to derive features that remain constant despite changes in scene parameters/driving conditions. Simulated and real imagery, as well as ground-truth thermocouple measurements, were used to test the behavior of such features. A method of change detection in outdoor scenes is investigated. The invariants are used to detect when a hypothesized material no longer exists at a given location. For example, one can detect when a patch of clay/gravel has been replaced with concrete at a given site.
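As a rough illustration of how a linear energy-balance model supports change detection, the sketch below flags a change when the hypothesized material's coefficient vector no longer annihilates the measured energy-balance terms. All array layouts, coefficient values and the threshold are assumptions for illustration, not quantities taken from the paper.

```python
import numpy as np

def change_detected(measurements, material_coeffs, threshold=0.1):
    """Flag a material change at a monitored site.

    measurements    : (n_conditions, n_terms) array; each row holds the
                      energy-balance terms observed under one set of driving
                      conditions (hypothetical layout).
    material_coeffs : (n_terms,) thermophysical coefficient vector for the
                      hypothesized material, assumed known from training.
    If the material is still present, the linear model predicts M @ a ~= 0
    regardless of the driving conditions; a large normalized residual means
    the hypothesized material no longer explains the observations.
    """
    M = np.asarray(measurements, dtype=float)
    a = np.asarray(material_coeffs, dtype=float)
    residual = np.linalg.norm(M @ a) / (np.linalg.norm(M) * np.linalg.norm(a) + 1e-12)
    return residual > threshold

# Toy usage: rows consistent with the clay/gravel hypothesis (orthogonal to its
# coefficient vector, plus noise) pass; rows from a different material do not.
rng = np.random.default_rng(0)
a_clay = np.array([1.0, -0.4, 0.7])
null_basis = np.array([[0.4, 1.0, 0.0],
                       [0.7, 0.0, -1.0]])           # both rows orthogonal to a_clay
consistent = rng.normal(size=(5, 2)) @ null_basis + 0.01 * rng.normal(size=(5, 3))
changed = rng.normal(size=(5, 3))
print(change_detected(consistent, a_clay))          # expected: False
print(change_detected(changed, a_clay))             # expected: True
```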
Considerable attention has been focused lately on using neural networks to optimize the solution to data association. A neural network has been shown to provide a good approximation to the joint probabilistic data association method. This paper details how a Hopfield neural network can be used for data association in the multitarget tracking problem. In addition, a comparable optimal control theory solution is presented. Both the Hopfield neural network and the optimal control theory approach were shown to provide adequate results in optimizing the data association portion of the multitarget tracking problem, with neither method proving to be superior. In execution time, the optimal control theory approach is the preferred method. The purpose of this paper is not to state that optimal control theory is superior to the Hopfield neural network in solving constrained optimization problems. Optimal control theory cannot be used in cases where all goals are weighted equally, since no one goal can be viewed as a constraint. In conclusion, in certain data association problems, the optimal control theory approach is shown to be significantly more efficient than the Hopfield neural network with the same measure of accuracy.
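To make the Hopfield formulation concrete, here is a minimal continuous Hopfield-style sketch for a small assignment problem. The energy terms, gain, penalty weight and learning rate are illustrative choices, not the parameters or formulation used in the paper.

```python
import numpy as np

def hopfield_associate(cost, n_iters=500, gain=5.0, penalty=2.0, lr=0.05, seed=0):
    """Approximate track-to-measurement assignment with a continuous
    Hopfield-style network (an illustrative sketch only).

    cost[i, j] : association cost between track i and measurement j, e.g. a
                 gated statistical distance.
    The network descends the energy
        E = sum_ij cost[i,j] V[i,j]
            + (penalty/2) sum_i (sum_j V[i,j] - 1)^2
            + (penalty/2) sum_j (sum_i V[i,j] - 1)^2
    where V = sigmoid(gain * u), using the simplified dynamics du/dt = -dE/dV.
    """
    rng = np.random.default_rng(seed)
    u = 0.01 * rng.standard_normal(cost.shape)          # internal neuron states
    for _ in range(n_iters):
        V = 1.0 / (1.0 + np.exp(-gain * u))             # neuron outputs in (0, 1)
        row_err = V.sum(axis=1, keepdims=True) - 1.0    # one measurement per track
        col_err = V.sum(axis=0, keepdims=True) - 1.0    # one track per measurement
        u -= lr * (cost + penalty * (row_err + col_err))
    V = 1.0 / (1.0 + np.exp(-gain * u))
    return (V > 0.5).astype(int), V

# Toy scenario with three tracks and three measurements: the low costs on the
# diagonal should be recovered as the assignment.
cost = np.array([[0.1, 2.0, 3.0],
                 [2.5, 0.2, 2.0],
                 [3.0, 2.2, 0.3]])
assignment, V = hopfield_associate(cost)
print(assignment)
```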
Task-directed vision obviates the need for general image comprehension by focusing attention only on features which contribute useful information to the task at hand. Window-based visual tracking fits into this paradigm as motion tracking becomes a problem of local search in a small image region. While the gains in speed from such methods allow for real-time feature tracking on off-the-shelf hardware, they lose robustness by giving up a more global perspective: Window-based feature trackers are prone to such problems as distraction, illumination changes, fast features, and so forth. To add robustness to feature tracking, we present `tracker fusion,' where multiple trackers simultaneously track the same feature while watching for various problematic circumstances and combine their estimates in a meaningful way. By categorizing different situations in which mistracking occurs, finding appropriate trackers to deal with each such situation, and fusing the resulting trackers together, we construct robust feature trackers which maintain the speed of simple window-based trackers, yet afford greater resistance to mistracking.
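A minimal sketch of the fusion step: each window-based tracker reports a position estimate and a confidence that drops when it detects a problematic condition, and the fused estimate is a confidence-weighted combination. The weighting policy and the loss-of-track rule below are assumptions for illustration, not the paper's specific scheme.

```python
import numpy as np

def fuse_tracker_estimates(estimates):
    """Combine per-tracker position estimates into one feature position.

    estimates : list of (position, confidence) pairs, where position is an
                (x, y) array and confidence lies in [0, 1].  A tracker that
                detects a problematic condition (distraction, illumination
                change, excessive motion, ...) reports a low confidence.
    Returns the confidence-weighted mean position, or None if every tracker
    has effectively given up (illustrative policy).
    """
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([c for _, c in estimates], dtype=float)
    if weights.sum() < 1e-6:
        return None                    # declare loss of track; trigger re-acquisition
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()

# Example: a correlation tracker distracted by a nearby edge reports low
# confidence, so the fused estimate stays close to the two agreeing trackers.
ssd       = (np.array([104.0, 57.0]), 0.2)   # distracted
edge      = (np.array([ 98.5, 60.2]), 0.9)
intensity = (np.array([ 99.1, 59.8]), 0.8)
print(fuse_tracker_estimates([ssd, edge, intensity]))
```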
In this paper we describe a new framework for controlling an active camera platform in order to improve the performance of tasks such as active stereo vision. This framework encompasses both calibrated and uncalibrated operations. We show how to control the camera system by making measurements, as far as possible, directly on compressed video streams. Coarse depth information, object segmentation, and focus and zoom control can be obtained from JPEG/MPEG streams without complete image-stream decompression. When better accuracy is required, finer-resolution depth maps can be reconstructed in the usual way by completely recovering the pixel information in the frames of the video stream. Our experimental results show the potential of these strategies for active vision systems.
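As one concrete example of a measurement that can be made on the compressed representation, the sketch below derives a block-wise sharpness map from 8x8 DCT coefficients, the kind of quantity a focus/zoom loop can use. In a real decoder the coefficients would come straight from the entropy-decoding stage; this standalone version simply recomputes them, and scipy is assumed available.

```python
import numpy as np
from scipy.fft import dctn

def block_sharpness(image, block=8):
    """Coarse sharpness map from 8x8 DCT blocks (illustrative sketch).

    Sharpness per block = energy in the AC coefficients, a common focus
    measure; the mean of this map can serve as feedback for a focus loop.
    """
    h = image.shape[0] // block * block
    w = image.shape[1] // block * block
    img = np.asarray(image[:h, :w], dtype=float)
    sharp = np.zeros((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(img[i:i + block, j:j + block], norm='ortho')
            ac_energy = (coeffs ** 2).sum() - coeffs[0, 0] ** 2   # drop the DC term
            sharp[i // block, j // block] = ac_energy
    return sharp

# A sharply focused region carries far more AC energy than a blurred one, so
# maximizing the mean of this map over focus settings drives the lens to focus.
```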
This paper presents the development of a real-time system for object recognition. In contrast to current approaches, which mostly rely on specialized multiprocessor architectures for fast processing, we adopt a quite different approach, using a distributed network architecture to support parallelism and attain real-time performance. Building on our previous work, this paper details a simple but effective and efficient matching approach to measure the degree of similarity between two image sets that are superimposed on one another. The novelty of our algorithm is to introduce techniques used in distributed systems for the parallel implementation of a hierarchical image matching scheme. The entire implementation runs on general-purpose message-passing architectures available on most existing computer systems. The system performance is evaluated in terms of recognition accuracy and execution time. Our investigation shows that a distributed-memory multicomputer can meet the high computation and memory-access demands of real-time imaging.
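The coarse-to-fine, master/worker decomposition can be illustrated with an ordinary process pool standing in for the networked message-passing nodes. The correlation score and grid parameters below are illustrative, not the paper's matching measure.

```python
import numpy as np
from multiprocessing import Pool

def match_score(args):
    """Normalized-correlation score of the model at one scene offset."""
    scene, model, (r, c) = args
    win = scene[r:r + model.shape[0], c:c + model.shape[1]]
    a, b = win - win.mean(), model - model.mean()
    return (r, c), float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def hierarchical_match(scene, model, workers=4, coarse_step=8, keep=16):
    """Coarse pass on a sparse offset grid, fine pass around the best hits.
    (Wrap calls in an `if __name__ == '__main__':` guard on spawn-based OSes.)"""
    offsets = [(r, c)
               for r in range(0, scene.shape[0] - model.shape[0] + 1, coarse_step)
               for c in range(0, scene.shape[1] - model.shape[1] + 1, coarse_step)]
    with Pool(workers) as pool:
        coarse = pool.map(match_score, [(scene, model, o) for o in offsets])
        coarse.sort(key=lambda t: -t[1])
        fine_offsets = {(r + dr, c + dc)
                        for (r, c), _ in coarse[:keep]
                        for dr in range(-coarse_step, coarse_step + 1)
                        for dc in range(-coarse_step, coarse_step + 1)
                        if 0 <= r + dr <= scene.shape[0] - model.shape[0]
                        and 0 <= c + dc <= scene.shape[1] - model.shape[1]}
        fine = pool.map(match_score, [(scene, model, o) for o in fine_offsets])
    return max(fine, key=lambda t: t[1])    # ((row, col), best score)
```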
Data fusion provides tools for solving problems characterized by distributed and diverse information sources. Many robotic applications need to retrieve particular properties from a scene, so it is necessary to use multiple knowledge sources, since a single sensory modality cannot capture all of the physical causes of a given edge feature. In this paper we focus on the problem of extracting features such as image discontinuities from both synthetic and real images. Since edge detection and surface reconstruction are ill-posed problems in the sense of Hadamard, Tikhonov's regularization paradigm is proposed as the basic tool for solving this inversion problem and restoring well-posedness. The proposed framework includes (1) a review of 2D regularization, (2) extension of the standard Tikhonov regularization method by allowing space-variant regularization parameters, and (3) further extension of the regularization paradigm by adding multiple data sources for different sensing modalities. The theoretical approach is complemented by developing a regularized hybrid fusion algorithm for solving the early vision problems of edge detection and surface reconstruction. An evaluation of these methods reveals that this new analytical data fusion technique reconstructs a smooth filtered surface in noisy regions while preserving the edge characteristics needed for extracting object features. Results indicate the fusion technique is beneficial for combining edge features from different types of sensory data to locate and identify objects of interest.
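A minimal one-dimensional sketch of items (2) and (3): a space-variant Tikhonov functional fusing two data sources, solved as a linear system. The per-source weights, regularization values and synthetic data are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def regularized_fusion(sources, weights, lam):
    """Space-variant Tikhonov reconstruction of a 1-D surface from several
    sensing modalities (illustrative sketch of the framework in the text).

    sources : list of length-n data vectors d_k (one per modality)
    weights : list of per-modality confidences w_k
    lam     : length-(n-1) vector of regularization parameters; small values
              near suspected edges let discontinuities survive the smoothing.
    Minimizes  sum_k w_k ||u - d_k||^2 + sum_i lam_i (u_{i+1} - u_i)^2,
    i.e. solves  (sum_k w_k I + D^T diag(lam) D) u = sum_k w_k d_k.
    """
    n = len(sources[0])
    D = np.diff(np.eye(n), axis=0)                     # first-difference operator
    A = sum(weights) * np.eye(n) + D.T @ np.diag(lam) @ D
    b = sum(w * np.asarray(d, float) for w, d in zip(weights, sources))
    return np.linalg.solve(A, b)

# Toy example: a step edge observed by two noisy modalities; the regularization
# weight is reduced around the suspected edge so it is not smeared away.
rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(25), np.ones(25)])
d1 = truth + 0.15 * rng.standard_normal(50)            # e.g. stereo depth
d2 = truth + 0.05 * rng.standard_normal(50)            # e.g. range sensor
lam = np.full(49, 5.0)
lam[22:27] = 0.01                                       # edge hypothesis from an edge detector
u = regularized_fusion([d1, d2], [1.0, 4.0], lam)
```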
This paper introduces an approach that details how data from a variety of different sources can be combined to produce more reliable and accurate segmentation. By this we mean a surface estimation consisting of surface properties (e.g. orientation, curvature, etc.) and a precise boundary of the surface. Information from more than one source can be useful in that data from one source can overcome a deficiency in another source. These concepts are extended here to include more sources of data, including shape-from-shading and passive stereo techniques, to give us further information. Bayesian networks are used to process the variety of data that is available in order to provide the best segmentation results, extracting the most valuable information from the source images by assessing the plausibility of hypotheses made about the object's surfaces and their interaction. Other papers have dealt with the construction and definition of the Bayesian network, whereas this paper deals in more depth with the reasoning process when new information is incorporated into the network, and with its performance in the segmentation process.
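The flavor of incorporating new evidence can be shown with a much-simplified, conditionally independent update for a single surface hypothesis. The paper's Bayesian network also models interactions between hypotheses, which this sketch omits; all numbers are invented.

```python
def fuse_surface_evidence(prior, likelihoods):
    """Posterior plausibility that two regions lie on the same surface.

    prior       : P(same surface) before any evidence.
    likelihoods : list of (P(evidence | same), P(evidence | different)) pairs,
                  one per cue (range discontinuity, shading, stereo disparity,
                  ...).  Cues are treated as conditionally independent, which
                  is the simplifying assumption behind this sketch.
    """
    odds = prior / (1.0 - prior)
    for p_same, p_diff in likelihoods:
        odds *= p_same / p_diff            # multiply in each likelihood ratio
    return odds / (1.0 + odds)

# New information is incorporated simply by extending the evidence list:
print(fuse_surface_evidence(0.5, [(0.8, 0.3)]))                 # range data only
print(fuse_surface_evidence(0.5, [(0.8, 0.3), (0.6, 0.7)]))     # plus a shading cue
```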
Multisensor fusion is a method of current importance for improving sensor reliability. Because individual sensors are prone to transient errors, mechanical failures, and noise, as well as being of limited accuracy, it is advisable to fuse readings from many heterogeneous sensors. This allows several different sensor technologies to be used together to measure the value of a physical variable. Using a multitude of sensor technologies makes the overall system less sensitive to the failure of any one technology. Unfortunately, it is a non-trivial task to glean the best interpretation from a large number of partially contradictory sensor readings. A number of methods exist for finding the best approximate match for this type of redundant, but possibly faulty, data. This paper presents a new algorithm which finds the best possible interpretation of partially contradictory sensor readings, some of which are incorrect, for data of more than two dimensions. Currently available algorithms return interpretations larger than the optimum in order to avoid excessive computational complexity. The algorithm presented here is based on data structures from computational geometry and provides the smallest possible region satisfying the constraints of the problem with reasonable computational complexity.
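For intuition, the classic one-dimensional version of the problem can be solved with a simple endpoint sweep: return the span of points consistent with at least n - f of the abstract sensor intervals. The paper's contribution concerns the tight higher-dimensional case, which this sketch does not attempt; the example readings are invented.

```python
def fuse_intervals(intervals, max_faulty):
    """Fuse 1-D abstract sensor intervals when up to `max_faulty` may be wrong.

    Returns the span of all points covered by at least n - f of the intervals
    (the classic one-dimensional fusion rule), or None if no point has that
    much support.
    """
    need = len(intervals) - max_faulty
    events = sorted([(lo, +1) for lo, _ in intervals] +
                    [(hi, -1) for _, hi in intervals])
    depth, lo_out, hi_out = 0, None, None
    for x, delta in events:
        if delta == +1:
            depth += 1
            if depth >= need and lo_out is None:
                lo_out = x                  # first point with enough support
        else:
            if depth >= need:
                hi_out = x                  # latest point with enough support
            depth -= 1
    return None if lo_out is None else (lo_out, hi_out)

# Four sensors report intervals for the same physical variable; the last one
# is faulty, but the fused result still isolates the correct region.
readings = [(2.0, 4.0), (2.5, 4.5), (3.0, 3.5), (9.0, 9.5)]
print(fuse_intervals(readings, max_faulty=1))     # -> (3.0, 3.5)
```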
We describe two Artificial Neural Network (ANN) models for audio-visual data fusion. For the first model, we start ANN training with an a priori chosen static architecture together with a set of weighting parameters for the visual and auditory paths. Those weighting parameters, called attentional parameters, are tuned to achieve the best performance even if the acoustic environment changes. This model is called the Performance Model (PM). For the second model, we start without any units in the hidden layer of the ANN. We then incrementally add new units, each partially connected to either the visual path or the auditory one, and reiterate this procedure until the global error cannot be reduced any further. This model is called the Competence Model (CM). CM and PM are trained and tested with acoustic data and the corresponding visual parameters (defined as the vertical and horizontal lip widths and the lip-opening area) for audio-visual speech recognition of the 10 French vowels in adverse conditions. In both cases, we note the recognition rate and analyze the complementarity between the visual and auditory information, in terms of the number of hidden units connected to the visual or auditory inputs versus signal-to-noise ratio (SNR), and in terms of the tuning of the attentional parameters versus SNR.
An important issue that arises in the automation of many large-scale surveillance and reconnaissance tasks is that of tracking the movements of (or maintaining passive contact with) objects navigating in a bounded area of interest. Oftentimes in these problems, the area to be monitored will move over time or will not permit fixed sensors, thus requiring a team of mobile sensors--or robots--to monitor the area collectively. In these situations, the robots must not only have mechanisms for determining how to track objects and how to fuse information from neighboring robots, but they must also have distributed control strategies for ensuring that the entire area of interest is continually covered to the greatest extent possible. This paper focuses on the distributed control issue by describing a proposed decentralized control mechanism that allows a team of robots to collectively track and monitor objects in an uncluttered area of interest. The approach is based upon an extension to the ALLIANCE behavior-based architecture that generalizes from the domain of loosely-coupled, independent applications to the domain of strongly cooperative applications, in which the action selection of a robot is dependent upon the actions selected by its teammates. We conclude the paper by describing our ongoing implementation of the proposed approach on a team of four mobile robots.
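A simplified reading of the ALLIANCE-style motivation mechanism is sketched below: motivation for a behavior set grows with impatience and is reset by gating conditions, activating the behavior once it crosses a threshold. The parameter names, gating policy and numbers are illustrative, not Parker's exact formulation.

```python
def update_motivation(m_prev, impatience, threshold, task_applicable,
                      busy_elsewhere, teammate_progressing, acquiesce):
    """One step of a simplified ALLIANCE-style motivation update.

    Motivation grows at the robot's impatience rate and is reset to zero
    whenever the task is inapplicable, the robot is busy with another
    behaviour set, a teammate is making adequate progress, or the robot
    chooses to acquiesce.  The behaviour set activates once the motivation
    crosses the threshold.
    """
    allowed = (task_applicable and not busy_elsewhere
               and not teammate_progressing and not acquiesce)
    m = m_prev + impatience if allowed else 0.0
    return m, m >= threshold

# Example: a robot grows impatient about an unmonitored sub-area of the
# surveillance region and eventually activates the behaviour set to cover it.
m, active = 0.0, False
for _ in range(12):
    m, active = update_motivation(m, impatience=1.0, threshold=10.0,
                                  task_applicable=True, busy_elsewhere=False,
                                  teammate_progressing=False, acquiesce=False)
print(m, active)   # motivation has crossed the threshold -> behaviour activates
```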
This paper describes a cost effective and scalable design for a distributed active vision system that consists of mobile cameras, a cluster of high-performance workstations interconnected by a high bandwidth network, and a wireless LAN that connects mobile cameras with the computing cluster. The goal is to support a wireless vision platform that can quickly be moved within a large wireless cell-space by hand, via remote control, or autonomously. The vision algorithms are redesigned to minimize the hardware needed on the mobile platform, and to make use of a distributed network of high-speed nodes accessible by the wireless host machine. This work addresses three problem areas in a distributed active vision system: the design of distributed algorithms for real-time vision processing, the need for network protocol support to achieve real-time guarantees across the wireless network connection, and development of a distributed processing framework. In particular, we present designs to address three aspects of the wireless network protocol problem: providing deterministic guarantees over a multiple access wireless channel, efficient recovery from lost or corrupted packets, and synchronization of video streams. We use active stereo vision and dynamic focus control as examples of the kind of vision tasks that can be supported by this environment.
A trend is emerging, as detailed by McKee, towards the use of networks of smaller distributed robots for complicated tasks. A number of areas need to be addressed before such systems can be put into practical environments. Among these are the transfer and sharing of information between robots, control strategies for sensing and movement, interfaces for teleoperator assistance to the multirobot systems, and degree of autonomy. This paper presents a cooperative multirobot system framework that has a flexible degree of autonomy, depending on the complexity of the task that is to be performed. The system uses a wavelet-based method to address the pose and orientation calculations for robot positioning. Our previous work in this area demonstrated that reasonable sensor integration can be done within the wavelet domain at the coarse level. Augmented finite state machines are used under a subsumption architecture for control and integration of local and global maps for the multirobot system. This allows us to explicitly include the teleoperator interface in the system design. We also present the results of an experimental simulation study of a spinning satellite retrieval by three cooperating robots. This simulation includes full orbital dynamics effects such as atmospheric drag and non-spherical gravitation field perturbations.
A model suitable for the integration of heterogeneous multisensory information in robots is proposed. This model is supported by a structured world representation which describes the contents of the physical workspace at multiple levels of abstraction through a hierarchy of oriented bounding boxes (OBBs). This hierarchical representation is implemented by means of a frame-based system. The latter provides a high degree of flexibility and facilitates the incorporation of the world representation into a larger knowledge-based robotic system. The proposed representation supports the integration of global and local heterogeneous information. Global information encompasses geometric, parametric and procedural information valid or applicable to the contents of each OBB as a whole. Local information provides a detailed description of the shape of 3D surfaces associated with individual OBBs, and scalar data associated with specific points lying on those surfaces. That information is supported by triangular meshes of control points. The proposed model allows the estimation of both the original surfaces and scalar functions defined over those surfaces through a heterogeneous fusion technique that allows for uncertainty.
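One way the hierarchy described above might be laid out as a data structure is sketched here: each node carries frame-style global slots, an optional triangular mesh with per-vertex scalar data, and child boxes. All slot names and values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List
import numpy as np

@dataclass
class OBBNode:
    """One oriented bounding box in a hierarchical world representation.

    Global information (geometric, parametric, procedural) applies to the box
    as a whole and lives in `slots`; local information is carried by a
    triangular mesh of control points with per-vertex scalar data.
    """
    center: np.ndarray                      # (3,) box centre in the parent frame
    axes: np.ndarray                        # (3, 3) orthonormal box axes
    extents: np.ndarray                     # (3,) half-lengths along the axes
    slots: Dict[str, Any] = field(default_factory=dict)            # global data
    vertices: np.ndarray = None             # (n, 3) mesh control points
    faces: np.ndarray = None                # (m, 3) triangle vertex indices
    scalars: Dict[str, np.ndarray] = field(default_factory=dict)   # per-vertex data
    children: List["OBBNode"] = field(default_factory=list)

    def add_child(self, child: "OBBNode") -> None:
        self.children.append(child)

# Workspace root containing one object box with a coarse mesh and a per-vertex
# temperature field (all numbers illustrative).
root = OBBNode(np.zeros(3), np.eye(3), np.array([5.0, 5.0, 2.0]),
               slots={"label": "workspace"})
obj = OBBNode(np.array([1.0, 0.5, 0.2]), np.eye(3), np.array([0.3, 0.3, 0.3]),
              slots={"label": "part", "grasp_procedure": "top_approach"},
              vertices=np.array([[0.7, 0.2, 0.0], [1.3, 0.2, 0.0],
                                 [1.3, 0.8, 0.0], [0.7, 0.8, 0.0]]),
              faces=np.array([[0, 1, 2], [0, 2, 3]]),
              scalars={"temperature": np.array([20.1, 20.3, 19.8, 20.0])})
root.add_child(obj)
```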
Nets of cellular robots are robots that consist of many interconnected robot elements. Such robots can become versatile, due to the large numbers of degrees of freedom and the variety of ways they can be interconnected. However, one has to be careful when connecting two elements, in order not to lose versatility by introducing unnecessary kinematical constraints through the joint. For instance, a cellular robot connected as a linear structure should be able to move a region of its body, while the rest of the robot remains in place. The problem is simple in two dimensions, when joints are bending joints, but considerably harder in three dimensions, where joints are universal joints with two degrees of freedom. We will discuss several essential properties of joints for `snake' robots, derived from physical constraints, and tasks that the robot is expected to perform. We will also give examples of tasks that robots conforming to these requirements can perform.
We report on work currently underway to put a robotics laboratory onto the Internet in support of teaching and research in robotics and artificial intelligence in higher education institutions in the UK. The project is called Netrolab. The robotics laboratory comprises a set of robotics resources including a manipulator, a mobile robot with an on-board monocular active vision head and a set of sonar sensing modules, and a set of laboratory cameras to allow the user to see into the laboratory. The paper will report on key aspects of the project aimed at using multimedia tools and object-oriented techniques to network the robotics resources and to allow them to be configured into complex teaching and experimental modules. The paper will outline both the current developments of Netrolab and provide a perspective on the future development of networked virtual laboratories for research.
Architectures for Intelligent and Networked Robots
Intelligent systems are required to perform increasingly complicated tasks and to interact with a variety of complex systems more often today than in the past. As systems become more complicated, the integration challenges become more demanding. Large, complicated intelligent systems are generally composed of smaller components of lesser complexity. These smaller components are integrated into the larger system to perform specific tasks. The key components of current manufacturing environments consist of such diverse elements as production machinery, communications hardware and software, sensors, computers, databases, file systems, operator interfaces, and production management software. In order to fully automate such manufacturing systems, these components must be able to work together in an integrated way to provide satisfactory product quality at a reasonable price. This paper discusses the development of an information architecture that uses an `agent-based' approach to put together systems, connected to local or remote networks, faster, better, and cheaper. This approach utilizes standardization of communication protocols and subsystem interfaces to allow maximum flexibility on the part of the computer modelers. Modeling resources are integrated through the use of communication interfaces. Software drivers (translators) translate generic commands and information into the special instructions required by each software agent. Equally important is the ability to seamlessly merge the simulation environment with the real environment. This is accomplished by defining interfaces that allow the virtual models to communicate in the same manner as the hardware modules. Several intelligent systems consisting of robots, sensors, operator interfaces and input devices have been successfully configured and integrated via networks using this approach.
This paper describes the synthesis of a protective operating system shell environment for robots. The approach is designed to protect the robot from the effects of errors in the lower level manipulator system and the higher level plan. This shell is composed of a number of fault tolerance tasks at a discrete decision making level. In order to separate the various functions of a robotic system and to better monitor the interactions of the components, we use models from the design of computer operating systems. The user communicates with a `shell' which is wrapped around a `kernel.' One of the duties of the robot fault tolerance shell should be to enforce a protocol between the user commands and the robot fault tolerance capabilities. In the paper, we describe an implementation that conforms to a formal protocol which explicitly includes fault tolerance. Each of the discrete layers in the robot control system will be modeled by a separate finite state machine (FSM). The FSMs encapsulate the redundancy and fault tolerance capabilities of the system in a uniform manner. Our FSMs will be designed to grow and contract dynamically, for example as new sensors are added or as sensors fail. From these FSMs we can develop a shell capability analysis utility that monitors the current fault tolerance status of the robot system. For example, the process of checking whether the fault-reconfigured robot can still complete its plan can be approached as a conformance testing problem. When faults cause joints to be lost, the reduced robot will be viewed as a subset of the original robot. Test sequences can be developed to determine whether the reduced robot conforms to the original robot specification with respect to the user's original plan. A `critic' utility in the shell can also check for obstacles and will halt the robot to protect it from possible damage.
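A toy version of the dynamically growing and contracting FSM and of the capability check is sketched below; the states, events and plan are hypothetical and stand in for the shell's conformance-testing machinery.

```python
class RobotFSM:
    """A small, dynamically modifiable finite state machine (illustrative).

    Transitions can be added when new sensors or capabilities appear and
    removed when a fault takes them away; `can_complete` is a toy stand-in
    for the shell's capability-analysis / conformance check.
    """
    def __init__(self, initial):
        self.state = initial
        self.transitions = {}            # (state, event) -> next state

    def add_transition(self, state, event, nxt):
        self.transitions[(state, event)] = nxt

    def remove_transitions_using(self, event):
        """Contract the FSM when the resource providing `event` fails."""
        self.transitions = {k: v for k, v in self.transitions.items()
                            if k[1] != event}

    def step(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

    def can_complete(self, plan, start):
        """Conformance-style check: can the (possibly reduced) FSM still
        execute the user's plan, given as a sequence of events?"""
        s = start
        for event in plan:
            if (s, event) not in self.transitions:
                return False
            s = self.transitions[(s, event)]
        return True

fsm = RobotFSM("idle")
fsm.add_transition("idle", "move_joint_3", "approaching")
fsm.add_transition("approaching", "grasp", "holding")
plan = ["move_joint_3", "grasp"]
print(fsm.can_complete(plan, "idle"))          # True with all joints healthy
fsm.remove_transitions_using("move_joint_3")   # joint 3 fault: contract the FSM
print(fsm.can_complete(plan, "idle"))          # False -> shell must replan or halt
```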
The Unified Telerobotics Architecture Project (UTAP) proposes a standard control architecture with well-defined interfaces as a means of promoting software reusability and component-based controller technology. Sensor-integration is a primary consideration within a telerobotic standard since teleoperation (or shared) control depends on integrating sensor feedback with motion control. A major hurdle to realizing a standard sensor-integration model involves the provision for ranges of capability, in effect, scaling the interfaces. Another problem is the need within UTAP applications to allow hybrid control, in which the system must accommodate position control in some axes as well as integrate force-sensing with motion control in other axes. This paper will examine the role of sensors as they apply to the UTAP standard telerobotic control architectures.
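For context, a common textbook form of selection-matrix hybrid position/force control is sketched below. It is meant only to illustrate the axis-wise mixing that a scalable interface must accommodate, not the control law specified by UTAP; gains and setpoints are invented.

```python
import numpy as np

def hybrid_command(S, x_des, x_meas, f_des, f_meas, kp=1.0, kf=0.5):
    """One cycle of selection-matrix hybrid position/force control.

    S : diagonal 0/1 selection vector; 1 = position-controlled axis,
        0 = force-controlled axis.
    Returns a Cartesian velocity command: position error drives the selected
    axes, force error drives the complementary axes.
    """
    S = np.diag(np.asarray(S, dtype=float))
    I = np.eye(S.shape[0])
    return S @ (kp * (x_des - x_meas)) + (I - S) @ (kf * (f_des - f_meas))

# Example: keep x, y position while regulating contact force along z.
cmd = hybrid_command(S=[1, 1, 0],
                     x_des=np.array([0.40, 0.10, 0.0]), x_meas=np.array([0.41, 0.09, 0.0]),
                     f_des=np.array([0.0, 0.0, 5.0]),  f_meas=np.array([0.0, 0.0, 3.0]))
print(cmd)    # small x/y corrections plus a push along z
```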
In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, distributed job scheduling, etc. While travelling along a street segment can be done autonomously by each robot, crossing an intersection, as a shared resource, forces a robot to coordinate its actions with those of other robots, e.g. by means of negotiation. We discuss the influence of cooperation on the design of a robot control architecture. Task- and sensor-specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Within each level, control cycles run in parallel and provide fast reaction to events. Internal cooperation may occur between cycles of the same level. Altogether, the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle, we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario which combines aspects of active vision and cooperation illustrates our approach. Two vision-guided vehicles are faced with line following, intersection recognition and negotiation.
Visual acts are patterns of viewing displayed by an operator carrying out a remote manipulation operation. Automated camera control under the guidance of these visual acts means that the operator can concentrate on the manipulation aspect of the task. Initial theoretical studies have suggested an approach to deriving visual acts based on exploiting human perceptual models of visual discrimination. This paper reports on initial studies aimed at implementing an automated viewing system based on a multi-agent architecture. The paper reviews the automated viewing model we are proposing and explores the nature of the agent-based architectures that we are considering for the realization of the automated viewing system.
In previous work, we have developed a generate, test, and debug methodology for detecting, classifying, and responding to sensing failures in autonomous and semi-autonomous mobile robots. An important issue has arisen from these efforts: how much time is there available to classify the cause of the failure and determine an alternative sensing strategy before the robot mission must be terminated? In this paper, we consider the impact of time for teleoperation applications where a remote robot attempts to autonomously maintain sensing in the presence of failures yet has the option to contact the local for further assistance. Time limits are determined by using evidential reasoning with a novel generalization of Dempster-Shafer theory. Generalized Dempster-Shafer theory is used to estimate the time remaining until the robot behavior must be suspended because of uncertainty; this becomes the time limit on autonomous exception handling at the remote. If the remote cannot complete exception handling in this time or needs assistance, responsibility is passed to the local, while the remote assumes a `safe' state. An intelligent assistant then facilitates human intervention, either directing the remote without human assistance or coordinating data collection and presentation to the operator within time limits imposed by the mission. The impact of time on exception handling activities is demonstrated using video camera sensor data.
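For readers unfamiliar with the underlying machinery, standard Dempster's rule of combination is sketched below. The paper's generalization of Dempster-Shafer theory for estimating remaining time is not reproduced here, and the failure-cause frame and masses in the example are invented.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    m1, m2 : dicts mapping frozenset hypotheses (subsets of the frame of
             discernment) to mass.
    """
    combined, conflict = {}, 0.0
    for a, mass_a in m1.items():
        for b, mass_b in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mass_a * mass_b
            else:
                conflict += mass_a * mass_b          # mass assigned to the empty set
    k = 1.0 - conflict                               # normalization constant
    return {h: v / k for h, v in combined.items()}

# Invented frame of sensing-failure causes: lens obscured vs. low light.
OBSCURED, LOWLIGHT = frozenset({"obscured"}), frozenset({"lowlight"})
EITHER = OBSCURED | LOWLIGHT
from_histogram = {OBSCURED: 0.6, EITHER: 0.4}        # evidence from image statistics
from_motion    = {LOWLIGHT: 0.3, EITHER: 0.7}        # evidence from a second cue
print(dempster_combine(from_histogram, from_motion))
```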
Combining signal detection decisions from multiple sensors is useful in some practical communications, radar, and sonar applications. The optimum schemes for generating and combining the detector decisions have been studied for cases with independent observations from sensor to sensor. Designing schemes for cases with dependent observations from sensor to sensor is a much more difficult problem, and to date very little progress has been made. The design approaches that have been suggested for these cases are quite complicated. Here a simple adaptive design approach is outlined for the important and difficult task of detecting a weak random signal in additive, possibly non-Gaussian noise. The approach is based on considering sensor decision rules and fusion rules which contain some unknown parameters. These rules have previously been shown to be optimum for cases with a large number of observations. These previous results also show that the best parameters minimize the mean square error fit to the best centralized signal detection scheme. Based on these ideas, a gradient descent algorithm is proposed for learning the best parameters. Results of the training are compared to known results for multisensor detection schemes.
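The following sketch shows the general shape of such a learning procedure: parametric sensor rules and a linear fusion rule are fit by gradient descent to a centralized reference statistic, here a normalized energy detector. The parameterization, reference statistic and training data are illustrative assumptions, not the rules derived in the paper.

```python
import numpy as np

def train_distributed_detector(X, n_iters=2000, lr=0.01, seed=0):
    """Fit per-sensor rules u_i = tanh(a_i x_i^2 + b_i) and a fusion rule
    y = sum_i c_i u_i so that y approximates a centralized statistic in the
    mean-square sense (illustrative parameterization).

    X : (n_samples, n_sensors) observations collected under the operating
        conditions of interest.  The centralized reference statistic is taken
        here to be the normalized energy detector T = mean_i x_i^2, a natural
        choice for a weak random signal in additive noise.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    a = 1.0 + rng.normal(0.0, 0.1, n)
    b = np.zeros(n)
    c = np.ones(n)
    T = (X ** 2).mean(axis=1)                     # centralized statistic to imitate
    for _ in range(n_iters):
        Z = a * X ** 2 + b                        # per-sensor pre-activations
        U = np.tanh(Z)                            # statistics sent to the fusion centre
        y = U @ c                                 # fused test statistic
        err = (y - T)[:, None]
        sech2 = 1.0 - U ** 2                      # derivative of tanh
        grad_c = 2.0 * (err * U).mean(axis=0)
        grad_a = 2.0 * (err * c * sech2 * X ** 2).mean(axis=0)
        grad_b = 2.0 * (err * c * sech2).mean(axis=0)
        a, b, c = a - lr * grad_a, b - lr * grad_b, c - lr * grad_c
    return a, b, c

# Train on a mixture of noise-only samples and (weak random) signal-plus-noise
# samples, then threshold the fused statistic y to make the final decision.
rng = np.random.default_rng(1)
noise_only  = rng.standard_normal((500, 3))
signal_plus = 1.5 * rng.standard_normal((500, 3))
a, b, c = train_distributed_detector(np.vstack([noise_only, signal_plus]))
```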