In order for a mobile robot to acquire a shape model of an unknown object, it must be able to view the entire exterior of the object. However, in an unstructured environment, it is impossible to know the extent to which the robot can circumnavigate the object. If the entire object cannot be seen, then it is impractical to discuss creating object models which contain only viewable object surfaces. In fact, it is easy to conceive of an object which possesses exterior surfaces that are hidden from any reasonable viewpoint. However, it is generally possible to establish limits to the volume of space that the object can occupy. Such a volume represents the combination of space occluded from view with space actually taken up by the object. A model of this volume is valuable in that it is a complete, enclosed boundary description. Object recognition routines, for example, may require complete boundary descriptions to work with. Even if complete boundary descriptions are not required, knowing the maximum possible extent of the object could prove valuable, perhaps in differentiating between several partial object model matches. Processing a single view, we build an 'OPUS' (object plus unseen space) by combining 'object surfaces'--defining the fraction of the exterior of the object that can actually be seen--with 'occlusion surfaces'--indicating the limits to the volume of space which is occluded from view.
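The construction can be made concrete with a minimal sketch. Assuming an orthographic depth map and a voxelized workspace (all names here are illustrative, not from the paper), every voxel at or behind the measured surface along a viewing ray belongs to the object-plus-unseen-space volume:

```python
import numpy as np

def opus_volume(depth, z_max, dz=1.0):
    """Build an 'object plus unseen space' (OPUS) occupancy volume from a
    single orthographic depth map: a voxel lies inside the OPUS if it is
    at or behind the measured surface along its viewing ray, since the
    sensor cannot rule out object matter there.

    depth : (H, W) array of range readings along +z
    z_max : far limit of the workspace along z
    """
    z = np.arange(int(np.ceil(z_max / dz))) * dz        # slice depths
    # volume[i, j, k] is True where z[k] >= depth[i, j]
    return z[None, None, :] >= depth[:, :, None]

# Toy scene: a flat background at z=9 with a box-shaped object at z=3.
depth = np.full((64, 64), 9.0)
depth[20:40, 20:40] = 3.0
v = opus_volume(depth, z_max=10.0)
print(v.shape, "OPUS voxels:", int(v.sum()))   # box column plus far slab
```

The column of voxels behind the box is exactly the space occluded from this view; fusing such volumes over several views would shrink the OPUS toward the true object.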
Previously, we described modal analysis, an efficient, physically-based solution for recovering, tracking, and recognizing solid models from 2-D and 3-D sensor data. The underlying representation consists of two levels: modal deformations, which describe the overall shape of a solid, and displacement maps, which provide local and fine surface detail. In this paper, we give details of the mathematics behind implicit function and displacement map calculations. In addition, we describe an extension which can be used to incorporate measurement uncertainty in the recovered modal deformation parameters. The result is an energy-based implicit function; as a consequence, collision detection, path planning, dynamic simulation, and model comparisons can frequently be performed in closed form--even for complex shapes.
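The paper's energy-based implicit function is not reproduced here, but the appeal of closed-form queries can be conveyed with a standard superquadric inside-outside function, a common base shape for modal deformation schemes (a sketch; parameters are generic):

```python
import numpy as np

def inside_outside(points, a=(1.0, 1.0, 1.0), eps=(1.0, 1.0)):
    """Superquadric inside-outside function F(x): F < 1 inside the model,
    F = 1 on the surface, F > 1 outside. Point-membership and collision
    tests reduce to evaluating F, with no mesh intersection needed."""
    x, y, z = (np.asarray(points) / np.asarray(a)).T
    e1, e2 = eps
    return ((np.abs(x) ** (2 / e2) + np.abs(y) ** (2 / e2)) ** (e2 / e1)
            + np.abs(z) ** (2 / e1))

pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.5, 0.5, 0.0]])
print(inside_outside(pts))   # [0.0, 4.0, 0.5]: inside, outside, inside
```

Collision detection between a point set and the recovered model then amounts to checking the sign of F - 1, which is what makes closed-form comparison and simulation attractive.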
Recent work in qualitative shape recovery and object recognition has focused on solving the 'what is it' problem while avoiding the 'where is it' problem. In contrast, typical CAD-based recognition systems have focused on the 'where is it' problem while assuming they know what the object is. Although each approach addresses an important aspect of the 3-D object recognition problem, each falls short of addressing the complete problem of recognizing and localizing 3-D objects from a large database. In this paper, we synthesize a new approach to shape recovery for 3-D object recognition that decouples recognition from localization by combining basic elements from these two approaches. Specifically, we use qualitative shape recovery and recognition techniques to provide strong fitting constraints on physics-based deformable model recovery techniques. On one hand, integrating qualitative knowledge of the object being fitted to the data, along with knowledge of occlusion, supports much more robust and accurate fitting. On the other hand, recovering object pose and quantitative surface shape not only provides a richer description for indexing, but also supports interaction with the world when object manipulation is required. This paper presents the approach in detail and applies it to real imagery.
An important pathway to solve several computer vision problems may be through qualitative vision. Progress in qualitative vision has been very limited due to the difficulties in modeling and analyzing qualitativeness. In this paper, we consider the issue of representing shape in the qualitative sense. A robust representation is important to enable the fusion of qualitative information that is obtained from different sources. We begin with the simple scheme of storing relative positions in space. This representation is compact and can be updated easily. Probabilistic, relaxation-based schemes for fusion are possible. However, we show that this representation is not unique. In particular, we show that two objects with different qualitative shapes could have the same representation. We indicate how the representation can be augmented to overcome this difficulty. We point out the need to identify minimum information requirements for representation and other tasks.
User control of robotic or graphic objects containing many internal degrees of freedom is difficult--existing input devices do not map well onto the parameters of highly articulated objects. When a high degree of precision is not required, as is often the case when driving graphics for 3-D animation, we show that the 3-D position and orientation of the object and the values of its joints can be recovered from 2-D sketches of the object. Such freehand sketches represent projections of the object onto the picture plane as the user wants to see it. The result is a very intuitive method for 'sketching in 3-D'. To test the robustness with respect to freehand drawing, a particularly 'noisy' form of sensory data, we experiment with freehand strokes that artists sketch to directly position, orient, and control the joints of 3-D human-like stick figures.
Two approaches to the combination of range and intensity data for scene description and object recognition are described. First, low level iconic intensity and depth data are combined into a single low level image description consisting of semantically labelled edges and reconstructed depth data. Second, boundary and surface primitives derived from depth and intensity images are used to identify and locate models in a cooperative, message passing system. Each approach is illustrated by specific examples.
In the emerging paradigm of animate vision, visual processes are not thought of as being independent of cognitive or motor processing, but as an integrated system within the context of visual behavior. Intimate coupling of sensory and motor systems has been found to improve significantly the performance of behavior-based vision systems. To conduct research in animate vision, one requires an active image acquisition platform that can change the geometric and optical parameters of its sensors under computer control. This has led to the development of several robotic sensory-motor systems with multiple degrees of freedom (DOF). In this paper we describe the status of ongoing work in developing a sensory-motor robotic system, R2H, with ten DOF for research in active vision. A Graphical Simulation and Animation (GSA) environment is also presented. The objective of building the GSA system is to create an environment that aids researchers in developing high-performance, reliable software and hardware in the most effective manner. The GSA includes a complete kinematic simulation of the R2H system, its sensors, and its workspace. The GSA environment is not meant to substitute for real experiments but to complement them; thus it will be an integral part of the total research effort. With the aid of the GSA environment, Depth from Defocus (DFD), Depth from Vergence, and Depth from Stereo modules have been implemented and tested. The power and usefulness of the GSA system as a research tool is demonstrated by acquiring and analyzing stereo images in the virtual world.
Decontamination and Decommissioning (D&D) is an important requirement of the U.S. Department of Energy's Environmental Restoration and Waste Management (ERWM) program. Means need to be devised to minimize radiation exposure to humans involved in D&D tasks. The research presented in this paper describes a human-machine system which can be employed for performing radiation scan and pipe cutting operations in a typical D&D environment. Using the Advanced Servomanipulator (ASM) from the Oak Ridge National Laboratory (ORNL), we have designed a hybrid telerobotic pipe-cutting module. The module, when fully integrated, will allow users of the ASM to exploit the original functionality of the telerobot when our pipe-cutting system is not in use. Comprising the pipe-cutting system are interactive three-dimensional object localization, graphical task modeler, arm control, human-machine interface, radiation sensor, and cut-tool sub-systems. Only the task modeler and interactive object localization modules are discussed in this paper. The goal of these modules is to interactively localize an object, usually a pipe, and display it in a three-dimensional rendering of the work space. Through interaction with these modules, the supervisor coordinates a task-specific sequence of actions that the lower-level sub-systems will perform.
Lack of good design tools has made designing, modeling, and controlling long-reach, lightweight, ultrahigh-speed robotic structures extremely difficult. This paper describes a research effort towards development of a comprehensive design tool to study the dynamic behavior of flexible robotic structures. The graphical simulation and animation environment presented will help researchers design and evaluate alternate geometries and control algorithms, and will fully complement laboratory-based experimentation. This paper explores the advantages of using graphical simulation and animation techniques for the design of flexible robotic structures, presents an implementation of such an environment, and demonstrates its capabilities by simulating the behavior of a flexible beam driven by a motor and controlled using a simple PD controller.
This paper describes work conducted at the JPL Advanced Teleoperation Laboratory in an experiment that demonstrated the value of auditory cues in teleoperation as part of a simulated Solar Maximum Satellite Repair (SMSR). Based on the apparent significance of auditory signals, an experiment was designed to examine a specific teleoperation task: unbolting an electrical connector screw. Visual and kinesthetic feedback have usually been the primary modes for cueing operator manual control actions in remote manipulation tasks; however, auditory information may have further beneficial effects on operator performance. In addition to the visual cues available from a pair of stereoscopic cameras and contact force feedback cues from the operator's manual hand controller, we gave the operator an amplified microphone feed from the task site. In general, sounds within the robot workspace are not heard in the operator control room, and such auditory cues had not been used in the Advanced Teleoperation Laboratory (ATOP) prior to this experiment. Six subjects participated in the experiment, which examined the performance benefits of vision, force, and sound feedback. Our data indicate that audio cues can make a significant difference in task completion time.
We propose a 3D object manipulation and layout technique for a 3D virtual space that combines an abstract natural language with hand pointing to recognize the intent of the manipulator. The technique quantitatively transforms verbal semantics into a spatial region using probability functions corresponding to each indicator word. It also uses knowledge about each object, e.g., that the back of a bookshelf must be attached to the wall. An operation is then performed with the indications from natural language and a hand movement, sensed using a 3D position tracker and DataGlove(TM), to resolve the ambiguous selection of a candidate in a cooperative work space.
An approach to the design of sensor-based robotic systems based on sensori-motor modules is proposed. These modules are motivated by a horizontal sensori-motor organization of the brain. Each module performs a specific function, which involves the extraction of some specific item of information from the environment. The proposed approach fits in with a task-driven approach to the design of sensor-based robotic systems. The sensori-motor modules are described and their composition and integration in the design of sensor-based robotic systems is discussed. The proposal offers the potential for the development of a systematic approach to the design of sensor-based robotic systems and the provision of a set of 'off the shelf' building blocks for their practical implementation.
Much interest currently exists in the use of two or more simultaneous spectral bands to create a 'fused' image of a scene to aid in detection and discrimination of targets. The use of multiple simultaneous spectral bands in an image fusion processor provides sufficient spectral diversity to allow discernment of objects that are difficult to detect in any single band. Feature enhancement processes, some of which can be implicit in the fusion algorithms, can also be applied as appropriate to highlight shadows and objects or areas of interest. Fusion processing has been implemented and demonstrated on various combinations of FLIR, TV, and laser radar imagery.
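One simple pixel-level scheme in this family (a sketch of the general idea, not the processor described above) keeps, per pixel, whichever registered band shows the greater local contrast, so a target visible in either band survives in the fused image:

```python
import numpy as np

def fuse_bands(a, b, k=5):
    """Fuse two registered bands by per-pixel local-contrast selection."""
    def contrast(img):
        pad = np.pad(img.astype(float), k // 2, mode="edge")
        mean = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(k) for j in range(k)) / (k * k)
        return np.abs(img - mean)          # deviation from local mean
    return np.where(contrast(a) >= contrast(b), a, b)

rng = np.random.default_rng(0)
flir, tv = rng.random((128, 128)), rng.random((128, 128))
print(fuse_bands(flir, tv).shape)
```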
Psychocybernetic systems engineering design conceptualization is mimicking the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. Human efficacy for innovation in architectural design has always reflected the projected perceptual vision of the designer vis-à-vis the hierarchical spirit of the design process. In pursuing better ways to build and design things, we have found surprising success in exploring certain more esoteric applications. One of those applications is the vision of an artistic approach in and around creative problem solving. Our research into vision and visual systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those qualitative and quantitative parameters of engineering design, particularly as they relate to multi-faceted and interdisciplinary design practice. Discussion will cover areas of cognitive ergonomics, natural modeling sources, and an open architectural process of means and goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process. One hypothesis is that the kinematic simulation of perceived connections between hard and soft sciences, centering on the life sciences and life in general, has become a very effective foundation for design theory and application.
This paper describes a geometric approach to underwater 3-D scene reconstruction using sonar range sensing. Our goal is to recover explicit geometric surface descriptions for man-made objects, by focusing the geometric constraints of multiple sonar returns obtained from different sensing locations by a moving autonomous underwater vehicle (AUV). We employ a simplified physical model of the sonar sensing process, based on the geometrical acoustics high-frequency approximation of acoustic scattering. Our current research effort is directed to support the task of locating and retrieving rigid objects from the deep ocean seafloor, using an untethered AUV. The key open problems concern the development of (1) robust methods for 3-D shape recovery, (2) an effective data association procedure, and (3) directed sensing strategies to control the data acquisition process.
The use of multiple distributed imaging sensors to track point or blob features is motivated by a desire to estimate 3-dimensional kinematics that are not easily derived from individual imaging sensors. Several distinct processing steps can be identified. First, the desired features are tracked based only on data collected by each sensor individually (sensor-level tracks). Next, these sensor-level tracks are compared with one another and those corresponding to common features are identified. Based on this data correspondence, the sensor-level tracks are combined to form measurements of 3-D kinematic parameters. Finally, the resulting time-sequence of measurements is smoothed or filtered to improve accuracy as well as to estimate 3-D parameters not measured directly. This paper briefly reviews algorithms for each of these processing steps and describes a tracking system that results from considering the interaction among the individual steps.
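A compact sketch of the middle two steps, with correspondence assumed already resolved (function names are illustrative): corresponded sensor-level measurements are triangulated into a 3-D point by least squares, and the resulting time sequence is smoothed with an alpha-beta filter:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of N rays (one per imaging sensor).
    Assumes the rays are not all parallel."""
    dirs = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, d in zip(origins, dirs):
        P = np.eye(3) - np.outer(d, d)     # projector orthogonal to ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

def alpha_beta(zs, dt=1.0, alpha=0.85, beta=0.005):
    """Smooth the fused 3-D points; velocity comes out as a byproduct."""
    x, v, out = zs[0], np.zeros(3), [zs[0]]
    for z in zs[1:]:
        x_pred = x + v * dt
        r = z - x_pred                     # innovation
        x = x_pred + alpha * r
        v = v + (beta / dt) * r
        out.append(x)
    return np.array(out)

o = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
d = np.array([[1.0, 1.0, 0.0], [-1.0, 1.0, 0.0]])
print(triangulate(o, d))   # ~ [5, 5, 0]
```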
This paper presents a motion estimation and analysis technique for moving target detection in image sequences acquired from a moving electro-optical sensor. Imagery from the visible or infrared region is processed to yield a ranked target list consisting of detected target centroid, bounding box dimensions, and a confidence measure for each detection.
The recent trend towards dynamic vision has led to the need for real-time performance in various vision and control algorithms. Some of the burden placed on algorithms using purely visual input can be lessened by using multiple disparate sensors. Research into the integration of information from disparate sensors while moving through an environment has for the most part concentrated on static environments. Moving obstacles complicate tasks such as avoidance and path planning. In this paper we present a system which integrates range and visual sensory inputs for the dynamic analysis of motion within the field of view of an autonomous platform. The approach we follow combines some recently developed neural network motion analysis algorithms with an epipolar plane image technique. We report the results of some experiments on a synthesized visible/range sequence.
This paper presents a method intended to reconstruct a scene composed of cylindrical objects and to simultaneously estimate the position of the moving camera used to acquire the image sequence. The iterated extended Kalman filter used to perform this task is supplied with the discrete sequence of monocular images of the scene and poor a priori knowledge of the camera motion between successive shooting positions. Simulations performed on synthetic scenes show good filter behavior when 20% camera motion uncertainty and 2-pixel Gaussian image noise are assumed. A test on a real scene is also presented and shows accurate results.
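For reference, a generic iterated EKF measurement update of the kind the paper builds on (a sketch; the cylinder/camera measurement model itself is not reproduced):

```python
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, n_iter=5):
    """One iterated EKF measurement update. Re-linearizing h about the
    current iterate (rather than only about the prior, as in the plain
    EKF) helps with strongly nonlinear projections of 3-D scene models."""
    xi = x.copy()
    for _ in range(n_iter):
        H = H_jac(xi)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        xi = x + K @ (z - h(xi) - H @ (x - xi))
    return xi, (np.eye(len(x)) - K @ H_jac(xi)) @ P

# Toy usage: recover a scalar from a squared measurement, z = s**2 + noise.
h = lambda s: np.array([s[0] ** 2])
H = lambda s: np.array([[2.0 * s[0]]])
x1, P1 = iekf_update(np.array([1.5]), np.eye(1), np.array([4.0]),
                     h, H, R=0.01 * np.eye(1))
print(x1)   # converges near 2.0
```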
Fusion of information from multiple sources is an increasingly important area of research and application. This problem is often complicated by various sensors having different limitations and fields of view. Further complications result from the absence of prior knowledge. In addition to fusing diverse information, it is also necessary to manage multiple sensors with various limitations efficiently for optimal overall system performance. We have solved this set of problems using the MLANS neural network, which employs a model-based approach and fuzzy decision logic.
In this paper we present an information-based framework for addressing multi-sensor data fusion and its management. The basis for this approach is the notion of Bayesian information update, from which we present a probabilistic model for data fusion and its management. We proceed to outline how architectures and algorithms can be derived from the information update. This leads to a framework for sensor management that uses information as the expected utility of taking actions. We show how Fisher information, and more generally entropy, can be used to quantify information. We conclude by briefly outlining a vehicle application making use of the data fusion algorithms and sensor management techniques presented.
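In the linear-Gaussian case the entropy computation is closed-form, which makes the "information as expected utility" rule easy to illustrate (a sketch consistent with the framework described, not the paper's implementation):

```python
import numpy as np

def gaussian_entropy(P):
    """Differential entropy of a Gaussian with covariance P."""
    n = P.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(P))

def select_sensor(P, sensors):
    """Pick the sensor (H, R) whose update most reduces posterior
    entropy, i.e. maximizes expected information gain."""
    best, best_gain = None, -np.inf
    for i, (H, R) in enumerate(sensors):
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        P_post = (np.eye(P.shape[0]) - K @ H) @ P
        gain = gaussian_entropy(P) - gaussian_entropy(P_post)
        if gain > best_gain:
            best, best_gain = i, gain
    return best, best_gain

P = np.diag([4.0, 1.0])                     # prior: first state least known
sensors = [(np.array([[1.0, 0.0]]), np.array([[1.0]])),
           (np.array([[0.0, 1.0]]), np.array([[0.5]]))]
print(select_sensor(P, sensors))            # sensor 0 wins: bigger gain
```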
Recently, there has been increased emphasis on employing reactive actions in robot task planning. The principal reasons for this change are to increase the robustness of robot actions by making them sensor-controlled, and to accommodate dynamic, unpredictable environments. However, in many cases, supporting reactive mechanisms requires choosing sensor inputs for the reactive procedure. This paper addresses the issue of planning the sensors required to carry out a reactive robot program. A preliminary framework for planning is presented, and sensor planning is illustrated for the problem of replacing a mechanically attached plug in a space environment.
A distributed (or decentralized) multiple sensor system is considered under binary hypothesis environments. The system is deployed with a host sensor (HS) and multiple slave sensors (SSs). All sensors have their own independent decision makers which are capable of declaring local decisions based solely on their own observation of the environment. The communication between the HS and the SSs is conditional upon the HS's command. Each communication that takes place involves a communication cost which plays an important role in the approaches taken in this study. The conditional communication with its cost initiates the team strategy for making the final decisions at the HS. The objectives are not only to apply the team strategy method in the decision making process, but also to minimize the expected system cost (or the probability of error in making decisions) by optimizing thresholds in the HS. The analytical expression of the expected system cost (C) is numerically evaluated for Gaussian statistics over threshold locations in the HS to find an optimal threshold location for a given communication cost. Computer simulations of various sensor systems for Gaussian observations are also performed in order to understand the behavior of each system with respect to correct detections, false alarms, and target misses.
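The threshold-optimization step can be illustrated in miniature for a single Gaussian detector (a sketch; the paper's cost additionally includes the communication term and the team interaction):

```python
import numpy as np
from math import erf

def Q(x):
    """Gaussian tail probability P(N(0,1) > x)."""
    return 0.5 * (1.0 - erf(x / np.sqrt(2.0)))

def expected_cost(t, mu1=2.0, c_fa=1.0, c_miss=1.0, p1=0.5):
    """Bayes cost at threshold t with H0 ~ N(0,1), H1 ~ N(mu1,1)."""
    pf = Q(t)                    # false-alarm probability
    pm = 1.0 - Q(t - mu1)        # miss probability
    return (1 - p1) * c_fa * pf + p1 * c_miss * pm

ts = np.linspace(-2.0, 4.0, 601)
costs = [expected_cost(t) for t in ts]
print("optimal threshold ~", ts[int(np.argmin(costs))])   # ~ mu1/2 = 1.0
```

Adding a per-message cost shifts the optimum, which is the kind of effect the numerical evaluation described above explores.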
Segmenting an image of an object using a single information source (e.g. depth data, light intensity) or a single processing method (e.g. determining edges) can prove to be unreliable, as each approach has its own advantages and disadvantages. However, if these sources of data or processes are combined, the advantages of each can be harnessed to give more accurate results. For example, depth data gives explicit three-dimensional geometric information, while light intensities can give a more accurate edge representation than many three-dimensional sensing methods. The process of combining sources of information results in greater amounts of data needing analysis. Bayesian networks may be used to guide the segmentation process and to extract the most valuable information from each source image by assessing the plausibility of hypotheses made about the object's surfaces and their interaction. The believability of these hypotheses can then be estimated by examining the original source images and utilizing this information as complementary or contrasting evidence.
In this paper a model of lateral coordination control in sensor networks is proposed. It is based on the notion of negotiated cooperation between pairs of equal and autonomously acting sensor nodes. The actual communication phase is preceded by a bidding scheme to establish appropriate communication links. This model incorporates the aspect of network self-organization in order to adapt to changing environmental conditions. The cooperation is modelled on human behavior in the case of a task being worked on sequentially by team members with equal rights but different capabilities. To this end, a generalized approach to the organization of distributed systems is given and a cooperation protocol is described to achieve the desired lateral coordination. The qualitative reasoning is supplemented by simulation results to support the superiority of lateral over pure vertical coordination, particularly under severe environmental conditions, such as sensor failure.
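A toy rendering of the bidding phase (hypothetical scoring; the paper's protocol and self-organization rules are richer than this):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    node: str
    capability: float   # self-assessed fitness for the announced task
    load: float         # current utilization, 0..1

def award(bids, w_cap=0.7, w_load=0.3):
    """The announcing node scores each bid and links only with the
    winner, keeping coordination lateral (peer to peer) rather than
    imposed by a vertical hierarchy."""
    return max(bids, key=lambda b: w_cap * b.capability - w_load * b.load).node

bids = [Bid("sonar_2", 0.9, 0.8), Bid("ir_5", 0.7, 0.1), Bid("cam_1", 0.4, 0.2)]
print(award(bids))   # 'ir_5': capable enough and lightly loaded
```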
To recognize objects and to determine their poses in a scene we need to find correspondences between the features extracted from the image and those of the object models. Models are commonly represented by describing a few characteristic views of the object representing groups of views with similar properties. Most feature-based matching schemes assume that all the features that are potentially visible in a view will appear with equal probability, and the resulting matching algorithms have to allow for 'errors' without really understanding what they mean. PREMIO is an object recognition system that uses CAD models of 3D objects and knowledge of surface reflectance properties, light sources, sensor characteristics, and feature detector algorithms to estimate the probability of the features being detectable and correctly matched. The purpose of this paper is to describe the predictions generated by PREMIO, how they are combined into a single probabilistic model, and illustrative examples showing its use in object recognition.
This work is motivated by the observation that Computer Vision and Image Understanding processes are not very robust. Small changes in exposure parameters or in internal parameters of algorithms used can lead to significantly different results. A combination (fusion) of these results is, under many aspects, profitable. We introduce an extended fusion concept dealing with different sources of information at external (world, scene, image) and internal levels (image description, scene description, world description) and define the process of fusion. Related work in the field is reviewed and connected with our model. Each of our levels requires its own quality measures and information fusion algorithms in order to yield a combination of components from several sources, so that we start investigating fusion at isolated levels. Two application examples from our own work are discussed: remote sensing (improvement of classification results by fusion at the image level), and medical image processing of ocular fundus images (automatic control point selection by fusion at the image description level). Our results with experiments at isolated levels encourage the incorporation of the complete fusion model into a complex image understanding system.
Multisensor information fusion is defined as the integration of data and information from different sensors with the goal of producing a consistent description of the environment being sensed. Most methods make the routine assumption that the evidences to be combined are independent. The problem of dependent evidences has received little attention in the literature. In this paper, we propose a generalized method for integrating dependent evidences represented by interval probabilities. A dependency parameter (DP) of uncertain evidences is first introduced; the DP, too, can be represented as an interval. Four types of dependency relation are considered: minimum dependence, maximum dependence, independence, and unknown dependence. Based on the DP parameter, an algorithm to combine two evidences with dependency information is presented. The proposed method is particularly well suited to computer implementation in the presence of dependency information and yields satisfactory hypothesis values.
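The paper's interval algorithm is not given here, but the role of the four dependency cases can be sketched with the classical Fréchet bounds on the joint support of two evidences with point supports p and q:

```python
def combine(p, q, dependence="unknown"):
    """Bounds on the combined (conjunctive) support of two evidences.
    With unknown dependence, the Frechet inequalities give
    max(0, p + q - 1) <= P(A and B) <= min(p, q)."""
    lo, hi = max(0.0, p + q - 1.0), min(p, q)
    if dependence == "independent":
        return (p * q, p * q)
    if dependence == "maximum":      # perfectly positively dependent
        return (hi, hi)
    if dependence == "minimum":      # maximally negatively dependent
        return (lo, lo)
    return (lo, hi)                  # unknown dependence: an interval

print(combine(0.8, 0.7))                  # (0.5, 0.7)
print(combine(0.8, 0.7, "independent"))   # (0.56, 0.56)
```

The combination collapses to a point exactly when the dependency is known, which is why an interval representation is natural for the unknown-dependence case.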
For sensor-guided robotic tasks, a calibration procedure must be performed to determine the relationship between information in sensor coordinates and the position and orientation in robot coordinates of the parts to be manipulated. This paper reports on the first stage of research and development of a straightforward approach to the calibration problem in which the robot 'calibrates' the sensors by performing a series of known, carefully-chosen manipulations under observation of the sensors. The data from calibration represent a mapping which relates changes in feature location in sensor coordinates to changes in part position and orientation in robot coordinates. Calibration is completed by solving for the best-fit transformation representing this relationship. In each cycle of the production process, sensor data for the presented part are operated on by the calibration transformation to determine the position and orientation of the grasped part. The key to this procedure of direct calibration is obtaining from the calibration data the best-fit mapping relating changes in feature location in sensor coordinates to changes in part position and orientation in robot coordinates. Simulations were conducted using a simple three-layer artificial neural network to process data from multiple distance sensors to predict changes in position and orientation of a windshield-sized rectangular body. In these simulations, two approaches to supervised learning were used for network training during calibration. In production, the network must be iteratively inverted to predict the location of the body from sensor data. Results from these preliminary simulations were encouraging: using data from only four sensor units, changes in position and orientation of the rectangular body were estimated to within reasonable accuracy for planar part-presentation perturbations spanning an envelope of +/- 50 mm and 10 degrees. Sources of error and the effects of the different training methods on performance of the network are discussed.
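As a sketch of the simulation setup (all data synthetic and hypothetical; for brevity this trains the sensor-to-pose inverse map directly, whereas the paper trains a forward network and inverts it iteratively in production):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant: 4 distance sensors respond almost linearly to a
# planar perturbation (dx, dy, dtheta) of the part.
A = rng.normal(size=(4, 3))
sensors_from_pose = lambda p: (A @ p) + 0.05 * (A @ p) ** 2

# Calibration phase: known robot moves observed by the sensors.
poses = rng.uniform(-1, 1, size=(500, 3))
readings = np.array([sensors_from_pose(p) for p in poses])

# Three-layer network, trained by plain batch gradient descent.
W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 3)); b2 = np.zeros(3)
lr = 0.05
for _ in range(2000):
    h = np.tanh(readings @ W1 + b1)
    err = h @ W2 + b2 - poses                 # prediction error on poses
    dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
    W2 -= lr * h.T @ err / len(poses);  b2 -= lr * err.mean(0)
    W1 -= lr * readings.T @ dh / len(poses); b1 -= lr * dh.mean(0)

test = np.array([0.3, -0.2, 0.1])
pred = np.tanh(sensors_from_pose(test) @ W1 + b1) @ W2 + b2
print(test, "->", np.round(pred, 2))          # approximately recovered pose
```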
Humans can recover the three-dimensional structure of moving objects from their changing two-dimensional structure (Wallach and O'Connell 1953; Braunstein 1976). In this paper, we describe a patient, A.F., with bilateral lesions involving the visual cortex who is severely impaired on computing local-speed and global-motion fields, but who can recover structure from motion. The data suggest that although possibly useful, global-motion fields are not necessary for deriving structure from motion. We discuss these results from the perspective of theoretical models for this computation.
The initial transformation of light into neural signals is known to introduce nonlinearities in the spatiotemporal responses of retinal cells. In spite of these early nonlinearities, at least one class of retinal ganglion cells (the X cells first reported by Enroth-Cugell & Robson) behaves as if all processing prior to the ganglion cell layer were linear. Similarly, frequency analyses show that cortical simple and complex cells are largely unaffected by well-known nonlinearities in the ganglion cell output. A push-pull model of retinal processing can reconcile these paradoxes by showing how ganglion cells can be selectively tuned to transient or sustained components of their input signals, independently of contrast or average retinal illuminance, and in spite of arbitrary nonlinear preprocessing. Theoretical considerations suggest that similar push-pull connectivity should also exist in the pathway joining ganglion cells to visual cortex. This model differs from other push-pull mechanisms in that each cell is described by nonlinear membrane equations, but response is linearized by the convergence of push-pull inputs.
There has been growing interest in developing efficient and reliable distributed detection systems for target recognition and communications. Chair and Varshney derived an optimal decision rule for fusing decisions based on the Bayesian criterion. To implement the rule, the probability of detection PD and the probability of false alarm PF for each detector must be known, but this information is not always available in practice. This paper presents an adaptive fusion model which estimates PD and PF adaptively during the decision fusion process. The estimation is implemented by a simple statistical method: the estimates of PD and PF for the ith detector are obtained by counting the number of its decisions that are considered to be correct and incorrect, respectively. Since reference signals are not given, whether the decision of a local detector is considered correct or incorrect is arbitrated by the fused decision of all the other local detectors; that is, the fused decision of all other local detectors is used as the reference for the ith detector. Furthermore, in this work, the fused results of the other local decisions are classified as 'reliable' or 'unreliable', and only reliable decisions are used to develop the decision rule. An analysis of classifying the fused decisions in terms of reducing estimation error is given, and simulation results which conform to our analysis are presented.
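A sketch of the counting scheme (the reliability rule here is a simple vote-margin test, an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 5, 20000
pd_true = rng.uniform(0.60, 0.95, N)    # unknown to the fusion center
pf_true = rng.uniform(0.05, 0.30, N)

hits = np.zeros(N); n_h1 = np.zeros(N)  # counters behind the PD estimates
fas = np.zeros(N); n_h0 = np.zeros(N)   # counters behind the PF estimates

for _ in range(T):
    target = rng.random() < 0.5
    u = rng.random(N) < (pd_true if target else pf_true)  # local decisions
    for i in range(N):
        others = np.delete(u, i)        # reference: all other detectors
        votes = others.sum()
        if abs(votes - len(others) / 2) < 1.5:
            continue                    # fused reference 'unreliable': skip
        if votes > len(others) / 2:     # reference says target present
            n_h1[i] += 1; hits[i] += u[i]
        else:                           # reference says target absent
            n_h0[i] += 1; fas[i] += u[i]

print("PD est:", np.round(hits / n_h1, 2), "true:", np.round(pd_true, 2))
print("PF est:", np.round(fas / n_h0, 2), "true:", np.round(pf_true, 2))
```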
A robot must have an internal representation of the local space it occupies to use for both navigation and obstacle localization. In addition, it must be possible to build and update the map in real-time so that it can be used in feedback control loops. A robot's notion of local space must bridge the gap between symbolic and continuous control. To satisfy both real-time constraints and the needs of high-level navigation and object recognition, the map building system must use a simple representation that can be computed quickly yet will support the construction of more involved maps over longer timescales. A complete system also requires control behaviors that can use the simple representation to drive the robot through its immediate surroundings in service of higher-level local navigation goals generated from the more detailed map. This paper describes a system based on building simple geometric occupancy maps from multiple sensors in real-time and using them for control. The mapping and local navigation algorithms presented were used to control the University of Chicago mobile robot at the AAAI-92 Robot Competition.
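The simple-representation idea can be sketched as a log-odds occupancy grid updated per range reading, which is cheap enough for control loops (an illustration, not the Chicago robot's exact map):

```python
import numpy as np

def update_grid(log_odds, pose, bearing, r_meas, cell=0.1,
                l_occ=0.85, l_free=-0.4, r_max=5.0):
    """Fold one range reading into a log-odds occupancy grid: cells along
    the beam are evidence of free space; the cell at the measured range
    is evidence of an obstacle (skipped for max-range returns)."""
    x, y = pose
    def mark(r, delta):
        i = int((x + r * np.cos(bearing)) / cell)
        j = int((y + r * np.sin(bearing)) / cell)
        if 0 <= i < log_odds.shape[0] and 0 <= j < log_odds.shape[1]:
            log_odds[i, j] += delta
    for s in range(int(min(r_meas, r_max) / cell)):
        mark(s * cell, l_free)
    if r_meas < r_max:
        mark(r_meas, l_occ)

grid = np.zeros((100, 100))
update_grid(grid, pose=(5.0, 5.0), bearing=0.0, r_meas=2.3)
print(int((grid > 0).sum()), int((grid < 0).sum()))  # 1 occupied, free ray
```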
Much of the work in robotics assumes that sensors provide very little data, and/or that the data is both unreliable and difficult to extract. This has led to a great deal of research on combining the information presented by a large collection of sensors, a difficult and often expensive proposition. In this paper I will argue that low-cost vision systems can often provide a large amount of useful data directly, thus reducing the need to fuse information across modalities.
Work on constructing robots which operate in unstructured environments has of late produced a number of approaches for transforming sensor readings into activity in the world. Most of these approaches provide no formal semantics for discussing the way in which the internal state of the robot maps to the desired state of the world. We have been investigating the use of the GAPPS programming language as a mechanism for defining robotic reactions. This work has resulted in the creation of reactive modules which mediate between discrete statements about world states to achieve or maintain and the required continuous activity. While relatively complex goals have been achieved with this approach, the syntax and semantics of the GAPPS language are inappropriate for complicated, dynamically changing goals. As a result, we have begun investigating the use of Reactive Action Packages (RAPs) as a mechanism for sequencing the activation of GAPPS-based reactive skills. The motivation for using RAPs is twofold. First, the syntax and semantics of the RAPs language integrates smoothly with a traditional non-linear planning system, allowing the construction and execution of plans for increasingly complex tasks. Second, GAPPS-based reactions fulfill a missing component of a RAPs-based controller system, namely the transformation of discrete RAP primitives (e.g., (maintain grasp ?thing)) into continuous physical activity. This paper presents the approach we are taking and discusses some of the issues involved in integrating these two systems.
This paper describes the control system for Rocky IV, a prototype microrover designed to demonstrate proof-of-concept for a low-cost scientific mission to Mars. Rocky IV uses a behavior-based control architecture which implements a large variety of functions displaying various degrees of autonomy, from completely autonomous long-duration conditional sequences of actions to very precisely described actions resembling classical AI operators. The control system integrates information from infrared proximity sensors, proprioceptive encoders which report on the state of the articulation of the rover's suspension system and other mechanics, a homing beacon, a magnetic compass, and contact sensors. In addition, significant functionality is implemented as 'virtual sensors', computed values which are presented to the system as if they were sensor values. The robot is able to perform a variety of useful tasks, including soil sample collection, removal of surface weathering layers from rocks, spectral imaging, instrument deployment, and sample return, under realistic mission-like conditions in Mars-like terrain.
Our work focuses on local, decentralized sensing and behavior strategies for controlling a collection of twenty mobile robots. Each of the robots is equipped with a suite of simple sensors--infra-red, radio, and bump sensors--and programmed with a collection of interacting behaviors, based on the distributed style of the subsumption architecture. The goal of this work is to taxonomize a set of basic collective behaviors, as well as to identify simple control strategies for producing them, in order to use them as building blocks for more complex behaviors.
A sensor system designed to support autonomous navigation should provide a stable, robust model of the environment. We propose and illustrate an approach in which multiple concurrent descriptions of objects are used to construct such a stable model. The principal idea is that several different representations are used to describe the same object in order to support different visual tasks, and to ensure an appropriate match between the data, the model, and the task. The use of multiple representations to describe objects requires that the system be able to decide which descriptions of an object are valid. In our approach we use stability over time to indicate validity. To illustrate the power of this approach we have implemented a system, 'TraX', that constructs and refines models of outdoor objects detected in sequences of range data.
For reliable navigation, a mobile robot needs to be able to recognize where it is in the world. We previously described an efficient and effective image-based representation of perceptual information for place recognition. Each place is associated with a set of stored image signatures, each a matrix of numbers derived by evaluating some measurement functions over large blocks of pixels. One difficulty, though, is the large number of inherently ambiguous signatures which bloats the database and makes recognition more difficult. Furthermore, since small differences in orientation can produce very different images, reliable recognition requires many images. These problems can be ameliorated by using active methods to select the best signatures to use for the recognition. Two criteria for good images are distinctiveness (is the scene distinguishable from others?) and stability (how much do small viewpoint motions change image recognizability?). We formulate several heuristic distinctiveness metrics which are good predictors of real image distinctiveness. These functions are then used to direct the motion of the camera to find locally distinctive views for use in recognition. This method also produces some modicum of stability, since it uses a form of local optimization. We present the results of applying this method with a camera mounted on a pan-tilt platform.
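One plausible form for the signature and a distinctiveness score (the paper formulates several heuristic metrics; this is an illustrative one):

```python
import numpy as np

def signature(image, blocks=(4, 4)):
    """Image signature: one measurement (here the mean intensity) per
    large block of pixels, as in the representation described above."""
    H, W = image.shape
    bh, bw = H // blocks[0], W // blocks[1]
    return np.array([image[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                     for i in range(blocks[0])
                     for j in range(blocks[1])])

def distinctiveness(sig, database):
    """Distance to the nearest signature stored for other places: the
    larger it is, the less ambiguous this view. A pan-tilt head can
    sweep viewpoints and keep the view that maximizes this score."""
    return np.linalg.norm(database - sig, axis=1).min()

rng = np.random.default_rng(2)
db = rng.random((50, 16))                       # other places' signatures
views = rng.random((8, 64, 64))                 # candidate viewpoints
scores = [distinctiveness(signature(v), db) for v in views]
print("most distinctive view:", int(np.argmax(scores)))
```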
Dexterous robotic hands have numerous sensors distributed over a flexible high-degree-of-freedom framework. Control of these hands often relies on a detailed task description that is either specified a priori or computed on-line from sensory feedback. Such controllers are complex and may use unnecessary precision. In contrast, one can incorporate plan cues that provide a contextual backdrop in order to simplify the control task. To demonstrate, a Utah/MIT dexterous hand mounted on a Puma 760 arm flips a plastic egg, using the finger tendon tensions as the sole control signal. The completion of each subtask, such as picking up the spatula, finding the pan, and sliding the spatula under the egg, is detected by sensing tension states. The strategy depends on the task context but does not require precise positioning knowledge. We term this qualitative manipulation to draw a parallel with qualitative vision strategies. The approach is to design closed-loop programs that detect significant events to control manipulation but ignore inessential details. The strategy is generalized by analyzing the robot state dynamics during teleoperated hand actions to reveal the essential features that control each action.
This paper discusses the technical feasibility of proximity-sensor-based control for kinematically redundant robot arm manipulators. In contrast to model-based approaches, sensor-based control techniques require no a priori knowledge of the operational environment. The method described in this paper uses infrared proximity sensors located about the periphery of a three-degree-of-freedom planar mechanism to provide real-time knowledge of the environment near the manipulator. The control algorithm produces a collision-free path around detected obstacles based on this information, while allowing the end effector to reach the desired goal position. A fully functional collision avoidance system for a redundant planar manipulator was constructed and tested. The testbed incorporated a SCARA-type robot manipulator with a sensor 'skin' comprised of 49 infrared sensor pairs about its periphery. A standard desktop computer served as the process controller. This work is currently being extended to redundant spatial manipulators under a NASA Phase II SBIR research grant. Also, related work into proximity sensor technology and distributed sensor data processing has recently been completed, in which the performance characteristics of infrared, capacitive, and ultrasonic sensing devices were measured. A distributed processing electronics system and supporting communications protocol was developed and successfully tested.
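A minimal sketch of how skin readings can be turned into an avoidance command (illustrative constants; the implemented controller is more involved):

```python
import numpy as np

def avoidance_velocity(readings, normals, d_safe=0.15, gain=0.5):
    """Map proximity-skin readings to a repulsive Cartesian velocity.
    Each sensor seeing an obstacle closer than d_safe pushes the link
    opposite its outward normal, scaled by penetration of the margin.
    readings : (N,) measured distances in metres
    normals  : (N, 2) outward unit normals of the sensor sites (planar)"""
    v = np.zeros(2)
    for d, n in zip(readings, normals):
        if d < d_safe:
            v -= gain * (d_safe - d) / d_safe * n  # push away from obstacle
    return v

# Hypothetical data: two sensors on the link's underside see an obstacle.
readings = np.array([0.05, 0.10, 0.50])
normals = np.array([[0.0, -1.0], [0.0, -1.0], [1.0, 0.0]])
print(avoidance_velocity(readings, normals))   # [0.  0.5]: push upward
```

For a redundant arm, such a velocity is typically folded into the joint solution through the Jacobian null space, so the end effector can keep moving toward its goal while the links shy away from obstacles.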
This paper describes the development of a technique for using data from multiple sensors to plan the path of the end effector of a tool-handling robot. The method examines data from several sensors with a knowledge-based expert system that uses heuristic rules. Rather than fusing raw sensor data, the data are first reduced to individual sensor results, which are then fused together with previously established knowledge. The knowledge base of heuristic rules is used to resolve conflicting information, eliminate unnecessary information, and infer additional information from the sensor results. Further heuristic rules then produce parameters for motion instructions to the robot system. An example application of the technique is described.
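To make the result-level fusion concrete, here is an illustrative sketch; the rule contents, sensor names, and output fields are hypothetical examples of the pattern, not the paper's actual rule base.

```python
# Illustrative sketch of rule-based fusion: each sensor is first reduced to a
# symbolic result, then heuristic rules resolve conflicts and emit motion
# parameters for the robot system.
def fuse(results, knowledge):
    """results: per-sensor conclusions, e.g.
    {'vision': 'path_clear', 'proximity': 'obstacle_left'}.
    knowledge: previously established facts such as speed limits."""
    # Rule 1: a proximity-reported obstacle overrides vision (shorter range,
    # higher reliability near the tool).
    if results.get("proximity", "").startswith("obstacle"):
        return {"speed": knowledge["slow_speed"], "offset": "veer_right"}
    # Rule 2: with no conflict, trust vision and use the nominal speed.
    if results.get("vision") == "path_clear":
        return {"speed": knowledge["nominal_speed"], "offset": "none"}
    # Rule 3: insufficient information -- stop and re-sense.
    return {"speed": 0.0, "offset": "none"}
```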
Robotic controllers frequently operate under constraints that are often imperfectly known or completely unknown. In this paper, the Lagrangian dynamics of a planar robot arm are expressed as a function of a globally unknown hard constraint. Laser sensors produce estimates of local constraints and guide the end effector over the unknown object; this is denoted the learning phase. The learning phase generates noisy data from the joint position encoders and tachometers. An extended continuous-discrete Kalman-filter-based estimator processes the measurements to compute an estimated parameterization of the constraint. The output of the estimator is input to a suboptimal combiner. The gradient of the estimated parameter vector is equivalent to the tactile sensory data. During the learning phase, the combiner computes a weighted combination of estimated and sensed constraints. The controller uses the constraint estimate to guide the robot arm; thus a feedback loop is closed around the constraints. As the statistics of the estimated constraint vector become favorable compared to the stationary statistics of the sensors, the learned constraints gradually replace the need for sensory data. A block diagram of the controller, estimator, combiner, sensors, and constraints is shown. Comparative simulations are given for various combinations of ideal and noisy data.
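A minimal sketch of the combiner stage, assuming per-component variances are available from the filter covariance and the sensor specification (the paper's exact weighting is not given in the abstract):

```python
# Inverse-variance weighting: as the estimator's covariance drops below the
# stationary sensor covariance, the learned constraint automatically takes
# over from the sensed one.
import numpy as np

def combine(estimated, var_est, sensed, var_sensor):
    """Blend the Kalman-filter constraint estimate with the sensed
    (tactile-equivalent) constraint, component-wise."""
    w_est = 1.0 / np.asarray(var_est)
    w_sen = 1.0 / np.asarray(var_sensor)
    return (w_est * estimated + w_sen * sensed) / (w_est + w_sen)
```

This weighting reproduces the behavior described above: early in the learning phase var_est is large and the sensed constraint dominates; as learning proceeds the estimate's weight grows and sensory data become unnecessary.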
We derive a suboptimal, time-variant tracking control law that provides guidance for a car's steering system. A proportional-plus-derivative controller uses an observer that employs a novel sensor fusion regime. The observer fuses sensors whose reliability is inversely related to their availability, using an erroneous plant model subject to nonholonomic and kinematic constraints. The observer samples the plant model, which carries 25% systematic error, at 100 Hz. Internal sensors are sampled at 20 Hz and have 10% error; external sensors are sampled at 5 Hz and have no error. The observer performs dead reckoning during the intervals when the external sensors are not available. There are two internal sensors: an odometer, used to compute a linear approximation to the speed of the plant, and a steering-wheel angle meter, which measures the steering-wheel deflection relative to the longitudinal centerline of the plant. Three external sensors measure the pose of the plant.
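The dead-reckoning interval can be sketched with a kinematic bicycle model, as below. The wheelbase, the correction gain, and the pose-blending step are assumptions; the paper's plant model, error injection, and observer gains are not reproduced here.

```python
# Minimal dead-reckoning sketch under a kinematic bicycle model: propagate
# pose from the internal sensors between the 5 Hz external pose fixes.
import numpy as np

def dead_reckon(pose, speed, steer_angle, dt, wheelbase=2.5):
    """pose = (x, y, heading). speed from the odometer (20 Hz);
    steer_angle from the steering-wheel angle meter."""
    x, y, theta = pose
    x += speed * np.cos(theta) * dt
    y += speed * np.sin(theta) * dt
    theta += speed * np.tan(steer_angle) / wheelbase * dt
    return (x, y, theta)

def correct(pose, external_pose, gain=0.8):
    """Pull the reckoned pose toward an external fix when one arrives
    (the external sensors are modeled as error-free above)."""
    return tuple(p + gain * (e - p) for p, e in zip(pose, external_pose))
```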
In this paper we describe the development and integration of a point laser sensor for precise proximity measurements and surface characterization in a robotic workcell. The system uses a commercially available point laser device; its small size and light weight allow easy mounting on the end-effector of an industrial robotic arm. The system has three operational modes: point measurement, line profile measurement, and surface image measurement. All of the necessary calibration techniques, computer and robot controller interfaces, and data acquisition and processing algorithms have been developed and extensively tested. These tests indicate that the system provides submillimeter measurement accuracy and is very efficient: point measurements can be made nearly instantaneously, and line and grid measurements can be performed in seconds to minutes, depending upon the desired resolution. These features, along with low cost and the versatility of its three operational modes, make the system of considerable practical utility. It has proven very valuable in proximity-sensing research and in the active exploration of robotic workcells.
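The core geometric step in all three modes is mapping a single range reading into workcell coordinates; a sketch follows, assuming calibrated homogeneous transforms (the matrix names are placeholders, and the sensor is assumed to measure along its own +z axis).

```python
# Sketch: convert one point-laser range reading into workcell coordinates
# using 4x4 homogeneous transforms from hand-eye calibration.
import numpy as np

def laser_point_to_world(range_mm, T_world_ee, T_ee_sensor):
    """T_world_ee: end-effector pose from the robot controller.
    T_ee_sensor: fixed sensor mounting transform (from calibration)."""
    p_sensor = np.array([0.0, 0.0, range_mm, 1.0])  # homogeneous point
    return (T_world_ee @ T_ee_sensor @ p_sensor)[:3]
```

Line and grid modes then amount to repeating this computation while the arm steps the sensor along a line or over a raster.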
The Multivision system consists of two different sensor systems combined into a multisensor system through shared data processing. The first is a 2-D picture-processing system; the second is a 3-D laser range finder module. Via the X/Y scanner, this module feeds digital picture data--information on the object's position along the Z axis and its tilt and turn angles--to the interface. The laser range finder provides absolute range values at the interface, and the laser spot on the surface of the measured object is detected by the camera system. Any information about the scene provided by the camera system (e.g., edge detection, edge description) can be used to steer the laser spot for the 3-D measurement. The location of a scan point measured with the laser scanner can thus be transformed into the camera system, so that its position in the camera image can be calculated. A convenient way to describe the geometric information of such points is through coordinate systems; the multisensor system is therefore modeled by a set of them: the scanner coordinate system, the camera coordinate system, and a Cartesian transfer coordinate system. The paper deals with geometric modelling, the control system architecture, the practical system design, and some accuracy considerations. The first applications addressed in this paper are the navigation of autonomous vehicles and obstacle detection in such environments.
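A sketch of the scanner-to-camera chain described above: a scan point passes through the Cartesian transfer frame and is then projected with a pinhole model to predict where the laser spot appears in the image. The transform and intrinsic matrices stand in for calibrated values not given in the abstract.

```python
# Predict the image position of a laser scan point via the transfer frame.
import numpy as np

def project_scan_point(p_scanner, T_cam_transfer, T_transfer_scanner, K):
    """p_scanner: 3-vector in scanner coordinates.
    T_*: 4x4 homogeneous transforms; K: 3x3 camera intrinsic matrix."""
    p = np.append(p_scanner, 1.0)                        # homogeneous
    p_cam = (T_cam_transfer @ T_transfer_scanner @ p)[:3]
    uvw = K @ p_cam                                      # pinhole projection
    return uvw[:2] / uvw[2]                              # pixel coordinates
```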
This paper deals with an intelligent inspection system capable of combining intensity and range data to solve inspection tasks in the area of non-tactile three-dimensional coordinate measurement and object recognition. The two sensors, a laser range finder (triangulation system) and a CCD camera, are mounted on a four-axis displacement unit consisting of stepper-motor-driven high-precision translation and rotary stages. Intensity data from the camera's CCD sensor is segmented by edge detection and Kalman-filter-based contour tracking. The segmentation result is compressed by approximating the extracted contours with a sequence of linear and circular geometric features. This symbolic edge description enables the system to classify both objects and features on their surfaces very rapidly, because the recognition task is reduced to scanning the scene for a small number of geometric features. The edge description is also used to guide the laser range finder to interesting areas in the scene, yielding a significant saving in the time required for range sensing, i.e., for the 3-D inspection task in general. Because the sensor data integration depends heavily on the quality of the sensor models, the paper characterizes the important measurement errors and uncertainties introduced by the camera and the range finder.
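The contour-compression stage can be illustrated by recursive chord splitting for the linear features (the circular-arc fitting stage is omitted, and the tolerance is an assumed parameter; the paper's exact approximation algorithm is not given in the abstract).

```python
# Polyline approximation of a tracked contour by recursive splitting.
import numpy as np

def split_contour(points, tol=1.5):
    """points: Nx2 array of contour pixels. Returns approximating vertices."""
    a, b = points[0], points[-1]
    chord = b - a
    norm = np.linalg.norm(chord)
    if norm > 0:
        # Perpendicular distance of every point from the chord a-b.
        d = np.abs(chord[0] * (points[:, 1] - a[1])
                   - chord[1] * (points[:, 0] - a[0])) / norm
    else:  # closed contour (coincident endpoints): split at farthest point
        d = np.linalg.norm(points - a, axis=1)
    i = int(np.argmax(d))
    if d[i] <= tol:
        return [a, b]                      # one linear feature suffices
    left = split_contour(points[: i + 1], tol)
    right = split_contour(points[i:], tol)
    return left[:-1] + right               # merge, dropping the shared vertex
```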
This paper presents an efficient method for calibrating multiple sensors with a planar calibration object. First, a coordinate system (PCS) is established on the calibration object. Then the coordinate transformations from each camera system and range-finder system into the PCS are calibrated. From these transformations, the coordinate transformations from any one sensor system to the others are computed.
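The final composition step can be written in one line of matrix algebra; a sketch, assuming 4x4 homogeneous transforms with the naming convention T_pcs_from_x (mapping sensor-x coordinates into the PCS):

```python
# Sensor-to-sensor calibration by composing through the shared PCS frame.
import numpy as np

def sensor_to_sensor(T_pcs_from_a, T_pcs_from_b):
    """Return the 4x4 transform taking points in sensor A's frame directly
    into sensor B's frame: T_b_from_a = inv(T_pcs_from_b) @ T_pcs_from_a."""
    return np.linalg.inv(T_pcs_from_b) @ T_pcs_from_a
```

The advantage of the shared PCS frame is that n sensors need only n calibrations against the plane, rather than n(n-1)/2 pairwise calibrations.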
In this paper the verification condition for an estimated 3-D camera pose is discussed. We argue that the usual test, which uses the projection and/or back-projection relations between image points and their correspondences in the reference frame, is not sufficient. Due to noise in the input data and its impact on the 2D-3D mapping relation, different combinations of camera intrinsic and extrinsic parameters may represent the same camera pose, and all the perspective transformation matrices derived from them satisfy the 2D-3D mapping constraints well. We call this the compensation of camera intrinsic and extrinsic parameters; it makes the determination of the 3-D camera pose ambiguous. Two-view calibration should therefore be adopted, using intrinsic parameter consistency (INPAC) constraints, to estimate a reliable 3-D camera pose. Camera rigid-motion constraints are introduced to confirm the 3-D camera pose data of the two positions, in addition to the usual projection and back-projection tests.
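A sketch of a two-view check in the spirit of the INPAC constraint: accept a pose pair only if the reprojection error is small in both views and the intrinsics recovered for the two views agree. The thresholds and the function interface are assumptions, not the paper's formulation.

```python
# Reject intrinsic/extrinsic combinations that merely "compensate" each
# other by demanding intrinsic consistency across two views.
import numpy as np

def reprojection_error(K, R, t, pts_3d, pts_2d):
    """Mean pixel error of projecting Nx3 reference points into one view."""
    proj = (K @ (R @ pts_3d.T + t.reshape(3, 1))).T
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.mean(np.linalg.norm(proj - pts_2d, axis=1)))

def verify(view1, view2, pts_3d, max_err=1.0, max_dK=1e-2):
    """Each view is a tuple (K, R, t, pts_2d)."""
    errs_ok = all(reprojection_error(K, R, t, pts_3d, p) < max_err
                  for (K, R, t, p) in (view1, view2))
    K1, K2 = view1[0], view2[0]
    intrinsics_ok = np.linalg.norm(K1 - K2) / np.linalg.norm(K1) < max_dK
    return errs_ok and intrinsics_ok
```

Note that the usual single-view test corresponds to errs_ok alone; it is the intrinsics_ok term that breaks the compensation ambiguity.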
We report on the development of a PC-based optical array detector system. Hardware and software designs using a modular approach are discussed, and calibration of the detector array is described. Several applications, including image sensor calibration, are discussed.
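The abstract does not detail the calibration procedure; for illustration, a standard array-detector calibration step is dark-frame subtraction with flat-field normalization, sketched here under that assumption.

```python
# Illustrative flat-field correction for an array detector: remove the dark
# offset, then divide out the pixel-to-pixel response non-uniformity.
import numpy as np

def flat_field_correct(raw, dark, flat):
    """raw: frame to correct; dark: frame with shutter closed;
    flat: frame of a uniformly illuminated target."""
    gain = (flat - dark).astype(float)
    gain /= gain.mean()                   # unity-mean pixel response
    return (raw - dark) / np.maximum(gain, 1e-6)
```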