Model-based and data-driven approaches to automatic target recognition each provide a methodology to determine the class of an unknown target. Model-based recognition is a goal-driven approach that compares a representation of the unknown target to a reference library of known targets. A comparator algorithm determines a degree of 'match' to each reference target. Data-driven approaches use a numeric algorithm to process a set of characterization features to produce a class likelihood estimate. Each approach has advantages and limitations that should be considered for a specific implementation. This research compares a specific implementation of each of these approaches, developed for an automatic target recognition system that processes multispectral imagery representing military targets. To provide a valid baseline for comparing the performance of each approach, a common target set, characterization feature set, and set of performance metrics are used.
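To make the contrast concrete, here is a minimal Python sketch of the two scoring styles (all names are hypothetical; the paper's actual comparator and classifier are not specified here). A model-based comparator scores an unknown image chip against each known reference template, while a data-driven classifier maps a feature vector to per-class likelihood estimates.

```python
import numpy as np

def model_based_match(target_chip, reference_library):
    """Model-based scoring: normalized correlation of the unknown target
    chip against each known reference template (assumes equal shapes)."""
    scores = {}
    t = (target_chip - target_chip.mean()) / (target_chip.std() + 1e-9)
    for name, ref in reference_library.items():
        r = (ref - ref.mean()) / (ref.std() + 1e-9)
        scores[name] = float((t * r).mean())  # degree of 'match' in [-1, 1]
    return scores

def data_driven_likelihood(features, W, b):
    """Data-driven scoring: a linear classifier mapping characterization
    features to class likelihood estimates via softmax (illustrative)."""
    z = W @ features + b
    e = np.exp(z - z.max())
    return e / e.sum()
```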
We present an algorithm for tracking remotely sensed objects moving over terrain, using geographic information system (GIS) data. The proposed model specifically accounts for the influence of the environment (rivers, roads, elevation, etc.) on the propagation of the probability densities. We make a number of assumptions about the motion of targets, but do not assume a priori knowledge of goal locations.
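One way the environment can shape density propagation is to weight the transition probabilities of a grid-based tracker by GIS-derived traversability. The following Python sketch shows that idea under stated assumptions (4-connected motion, a traversability map in [0, 1]); it is an illustration of the general mechanism, not the paper's specific model.

```python
import numpy as np

def predict_density(p, traversability):
    """One prediction step of a grid-based tracker: probability mass
    diffuses to 4-connected neighbors, weighted by GIS traversability
    (e.g., near 0 on rivers, near 1 on roads)."""
    rows, cols = p.shape
    q = np.zeros_like(p)
    for r in range(rows):
        for c in range(cols):
            if p[r, c] == 0.0:
                continue
            # candidate moves: stay put or step to a 4-neighbor
            moves = [(r, c), (r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            moves = [(i, j) for i, j in moves if 0 <= i < rows and 0 <= j < cols]
            w = np.array([traversability[i, j] for i, j in moves])
            if w.sum() == 0:
                q[r, c] += p[r, c]  # fully blocked: mass stays put
                continue
            w = w / w.sum()
            for (i, j), wk in zip(moves, w):
                q[i, j] += p[r, c] * wk
    return q / q.sum()
```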
This paper presents alternative strategies, framed within a recursive structure, for enhancing multisensor system performance through fusion of individual decisions derived from imperfect sensors. A confidence measure can be associated with each such decision, and decisions obtained with a confidence below an acceptable threshold value are treated as non-decisions in forming the fused decision. The initial and asymptotic performances of these alternative strategies are analyzed and compared against the baseline of single-sensor performance.
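A minimal Python sketch of the thresholding idea follows (the threshold value and the confidence-weighted vote are illustrative choices, not the paper's exact strategy):

```python
def fuse_decisions(decisions, confidences, threshold=0.7):
    """Fuse per-sensor decisions, treating any decision whose confidence
    falls below the threshold as a non-decision (abstention). An empty
    vote set or a tie yields None (no fused decision)."""
    votes = {}
    for d, c in zip(decisions, confidences):
        if c >= threshold:                     # keep only confident decisions
            votes[d] = votes.get(d, 0.0) + c   # confidence-weighted vote
    if not votes:
        return None
    best = max(votes, key=votes.get)
    top = [d for d, v in votes.items() if v == votes[best]]
    return best if len(top) == 1 else None
```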
This paper presents a new stereo matching method. The stereo disparity is estimated using a symmetric phase-only matched filter applied to foveal views of binocular stereo images. The foveal view at an image point is a simulation of the human retinal image with non-uniform spatial resolution, where the spatial resolution is highest at the foveal center and decreases monotonically towards the periphery. Since spectral phase preserves the location of image structure while ignoring spatial intensity correlation, symmetric phase-only matched filtering applied to nearly identical images yields a very sharp correlation peak, which makes detection of the image shift reliable and easy. Determining stereo disparity using foveal-view symmetric phase-only matched filtering is quite robust and accurate. The good performance of our stereo matching algorithm has been demonstrated in determining the disparity of remote sensing stereo images.
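The phase-only filtering step itself is standard and compact enough to sketch in Python (same-size patches assumed; the foveal resampling stage is omitted here):

```python
import numpy as np

def phase_only_shift(img_a, img_b):
    """Symmetric phase-only matched filtering: whiten both spectra so
    only phase survives, then the inverse transform of their product
    peaks at the relative shift (disparity)."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = (Fa / (np.abs(Fa) + 1e-12)) * np.conj(Fb / (np.abs(Fb) + 1e-12))
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap the peak location to signed shifts
    dy = peak[0] if peak[0] <= img_a.shape[0] // 2 else peak[0] - img_a.shape[0]
    dx = peak[1] if peak[1] <= img_a.shape[1] // 2 else peak[1] - img_a.shape[1]
    return dy, dx
```

Because both spectra are normalized to unit magnitude, the correlation surface for nearly identical inputs approaches a delta function at the true shift, which is exactly why the peak is so sharp.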
The computation of 3D structure from motion using a monocular sequence of images, within the paradigm of active vision, is presented in this paper. Robotic tasks such as navigation, manipulation, and object recognition all require a 3D description of the scene. The 3D description for these tasks varies in resolution, accuracy, robustness, range, and time. A robotic system capable of performing a wide range of applications must be able to actively control the imaging parameters so that a 3D description sufficient for the task at hand is generated. In the approach presented here, the 3D structure is determined in two steps. In the first step, based on analysis of the spatial and temporal gradients of an image stream, a characterization of 3D information in terms of camera displacements that result in a fixed disparity is obtained. In the second step, extrapolated disparity values between the first and last frames of the image stream are refined using normalized cross-correlation. The length of the image stream, the interframe camera displacement, and the disparity value are actively controlled to obtain 3D structure of the desired quality. This approach has been implemented in a pipeline-based computing environment to provide real-time performance. Extensive experiments have been conducted to verify the performance and capabilities of this approach.
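The second-step refinement can be sketched in a few lines of Python (rectified frames, an interior pixel, and the window/search sizes are assumptions of this sketch, not values from the paper):

```python
import numpy as np

def refine_disparity_ncc(first, last, row, col, d0, search=2, w=5):
    """Refine an extrapolated disparity d0 between the first and last
    frames by maximizing normalized cross-correlation over a small
    horizontal search range around d0."""
    def patch(img, r, c):
        return img[r - w:r + w + 1, c - w:c + w + 1].astype(float)
    ref = patch(first, row, col)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best_d, best_score = d0, -np.inf
    for d in range(d0 - search, d0 + search + 1):
        cand = patch(last, row, col - d)
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        score = (ref * cand).mean()          # NCC in [-1, 1]
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score
```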
This paper deals with a data fusion technique for depth reconstruction that integrates regularization by variational methods with stochastic optimization based on Kalman filtering. A framework for the fusion of multiple regularized depth maps is proposed for on-line integration of many views of the visible scene. As discussed in the paper, this approach has several advantages over similar ones: it does not use optical flow, camera modeling, or an explicit motion equation, and it can stochastically fuse both sparse and dense depth data, obtaining reliable estimates over the whole image domain.
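The per-pixel stochastic fusion step can be illustrated with a scalar Kalman update in Python (a generic sketch of Kalman-style depth fusion, not the paper's full variational pipeline; the infinite-variance convention for missing data is an assumption of this sketch):

```python
import numpy as np

def fuse_depth_kalman(z_est, p_est, z_new, r_new):
    """Per-pixel Kalman update fusing a new regularized depth map into
    the running estimate. z_est/z_new are depth maps, p_est/r_new their
    variances; non-finite r_new marks pixels with no data (sparse input)."""
    k = p_est / (p_est + r_new)          # Kalman gain, elementwise
    z = z_est + k * (z_new - z_est)      # updated depth
    p = (1.0 - k) * p_est                # updated variance
    # where the new view has no measurement, keep the prior untouched
    missing = ~np.isfinite(r_new)
    z[missing], p[missing] = z_est[missing], p_est[missing]
    return z, p
```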
Computationally economical techniques for employing confidence measures to generate depth maps are presented in this paper. The NASA-JSC PRISM system has been successfully applied to experiments in manipulation (EVA Helper Retriever) and mobility (Mobile Robot Lab). Results from these experiments and future plans are also presented.
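One computationally cheap confidence measure of this kind is the peak ratio of the correlation scores along the search axis; a Python sketch follows (illustrative only, not the PRISM system's specific measure):

```python
import numpy as np

def peak_ratio_confidence(corr_scores):
    """How far the best correlation score stands above the runner-up
    along the last (search) axis; assumes nonnegative scores. Values
    near 0 flag ambiguous matches that should be masked out of the
    depth map."""
    s = np.sort(corr_scores, axis=-1)
    best, second = s[..., -1], s[..., -2]
    return 1.0 - (second + 1e-9) / (best + 1e-9)
```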
In segmentation, the goal is to partition a given 2D image into regions corresponding to the meaningful surfaces in the underlying physical scene. Segmentation is frequently a crucial step in analyzing and interpreting image data acquired by a variety of automated systems ranging from indoor robots to orbital satellites. In this paper, we present results of a study of segmentation by means of cooperative fusion of registered range and intensity images acquired using a prototype amplitude-modulated CW laser radar. In our approach, we consider three modalities--depth, reflectance, and surface orientation. These modalities are modeled as sets of coupled Markov random fields for pixel and line processes. Bayesian inference is used to impose constraints of smoothness on the pixel process and linearity on the line process. The latter constraint is modeled using an Ising Hamiltonian. We solve the constrained optimization problem using a form of simulated annealing termed quenched annealing. The resulting model is illustrated in this paper in the rapid-quench, or iterated conditional modes, limit for several laboratory scenes.
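To show what the rapid-quench (iterated conditional modes) limit looks like in practice, here is a Python sketch of one ICM sweep for the pixel process alone; the paper's coupled line process and Ising term are deliberately omitted, and the quadratic data term and Potts-style smoothness penalty are assumptions of this sketch:

```python
import numpy as np

def icm_step(labels, data, class_means, beta=1.0):
    """One iterated-conditional-modes sweep: each pixel takes the label
    minimizing a data term plus beta times the number of disagreeing
    4-neighbors (smoothness constraint on the pixel process)."""
    rows, cols = labels.shape
    out = labels.copy()
    for r in range(rows):
        for c in range(cols):
            best_k, best_e = out[r, c], np.inf
            for k, mu in enumerate(class_means):
                e = (data[r, c] - mu) ** 2              # data (likelihood) term
                for i, j in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= i < rows and 0 <= j < cols and out[i, j] != k:
                        e += beta                       # smoothness penalty
                if e < best_e:
                    best_k, best_e = k, e
            out[r, c] = best_k
    return out
```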
This paper describes an interactive tool for applying 3D sensor data to a robotic work environment. The tool provides all of the necessary functions to build an accurate world model of a workspace. It is developed with special consideration for teleoperated or semi-autonomous applications, such as those expected in future space missions. Means are provided for collecting 3D images (depth maps), filtering that data, and performing fine positioning or registration between the observed data and geometric models of objects in the scene. A graphical interface allows a human supervisory operator to identify objects and correct for any errors that occur. The system has been implemented in a semi-autonomous robotics testbed at Rensselaer. Results show that it is capable of working with a wide variety of object types in a time frame that is suitable for human supervisory interaction.
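The fine-positioning step between observed 3D data and a geometric model reduces, once correspondences are fixed (for example, by the operator's object identification), to a least-squares rigid fit. The following Python sketch uses the standard Kabsch/Procrustes solution; it is a generic registration step, not necessarily the tool's exact algorithm:

```python
import numpy as np

def rigid_fit(model_pts, observed_pts):
    """Least-squares rigid alignment of corresponding observed 3D points
    (N x 3) to model points (N x 3): returns R, t mapping observed
    points into the model frame, with a reflection guard."""
    mu_m, mu_o = model_pts.mean(0), observed_pts.mean(0)
    H = (observed_pts - mu_o).T @ (model_pts - mu_m)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_o
    return R, t
```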
The problem addressed in this paper is that of estimating the tracks of dynamic obstacles in the environment of a helicopter operating in hazardous conditions. Fuzzy logic and neural networks have shown their strength in recent years in the solutions to non-linear problems. The aim of this paper is to present neuro-fuzzy data fusion algorithms which can be used to fuse information provided by multiple spatially separate sensors engaged in the tracking of obstacles whose dynamics are a priori unknown.
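As a rough flavor of fuzzy-weighted fusion, the Python sketch below grades each sensor's position report by a Gaussian-shaped membership function of its agreement with the current prediction and forms a membership-weighted mean. It is a minimal stand-in for the paper's neuro-fuzzy scheme; the membership shape and the spread parameter are assumptions of this sketch.

```python
import numpy as np

def fuzzy_fuse(measurements, predicted, spread=1.0):
    """Fuse position reports (N x d) from spatially separate sensors:
    each report's membership grade decays with its distance from the
    predicted state, and the fused estimate is the weighted mean."""
    m = np.asarray(measurements, dtype=float)
    grades = np.exp(-np.sum((m - predicted) ** 2, axis=-1) / (2 * spread ** 2))
    grades = grades / (grades.sum() + 1e-12)
    return (grades[:, None] * m).sum(axis=0)
```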
The world we live in is in constant motion: an agent that wishes to interact with the environment must be able to interpret visual motion. Motion information is extracted by identifying the temporal signature associated with textured objects in the scene. In this paper, we present a new computational framework for motion perception, based on a mathematical description of a moving texture pattern in 3D x-y-t space (an x-y image moving along time t). Our methodology applies spatiotemporal frequency-domain analysis to extract optical flow information. A detailed analytical description of this model is presented, together with results that highlight and evaluate its salient properties.
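The frequency-domain fact such frameworks rest on can be stated in one worked equation (standard Fourier analysis, not specific to this paper):

```latex
% A pattern translating with velocity (u, v),
I(x, y, t) = I_0(x - u t,\; y - v t),
% has the 3D spatiotemporal Fourier transform
\hat{I}(\xi_x, \xi_y, \omega)
  = \hat{I}_0(\xi_x, \xi_y)\,\delta(\omega + u\,\xi_x + v\,\xi_y),
% so all spectral energy is confined to the plane
\omega = -\,(u\,\xi_x + v\,\xi_y),
% whose orientation in (\xi_x, \xi_y, \omega) space encodes the flow (u, v).
```

Estimating the orientation of this spectral plane is therefore equivalent to estimating the optical flow of the moving texture.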
In air-to-ground applications, the detection of targets is a difficult problem due to complex backgrounds, where classical detection algorithms generate a large number of false alarms. This paper addresses the detection of moving targets based on motion-compensated sequences. In the presence of noisy image acquisition and motion discontinuities, the estimation of optical flow is reformulated in a robust estimation framework. The motion estimation is based on a robust optical flow algorithm developed in a pyramidal Markov random field framework. We present the results of this detection algorithm on real-world airborne IR image sequences.
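Robust estimation here means replacing the least-squares penalty on flow residuals with a saturating one, so outliers at motion boundaries stop dominating the fit. A Python sketch of one common choice follows (the Geman-McClure norm; the paper's exact robust norm is not specified here):

```python
def geman_mcclure(residual, sigma=1.0):
    """Robust penalty and IRLS influence weight for a flow residual:
    the penalty saturates at 1 for large residuals, so outliers at
    motion discontinuities are down-weighted instead of squared."""
    r2 = residual ** 2
    rho = r2 / (r2 + sigma ** 2)                   # penalty, saturates at 1
    weight = sigma ** 2 / (r2 + sigma ** 2) ** 2   # proportional to psi(r)/r
    return rho, weight
```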
Using surface and subsurface sensing, we have developed a perception system for autonomous retrieval of buried objects. The subsurface sensing system uses Ground Penetrating Radar (GPR) to locate buried objects. A 2D laser rangefinder generates an elevation map, and using this map a robotic arm positions the GPR antenna; this setup allows us to automate GPR data collection. An image processing algorithm is used to locate the object of interest in the GPR data. Once it is located, we use a sense-and-dig cycle to retrieve the object.
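A common first step in locating an object in GPR data is mean-trace background subtraction followed by an energy peak search; the Python sketch below shows that step (one standard processing step, not necessarily the paper's specific algorithm):

```python
import numpy as np

def locate_in_bscan(bscan):
    """Locate a buried object in a GPR B-scan (depth samples x traces):
    subtract the mean trace to suppress flat-layer background, then
    return the (depth_sample, trace) index of peak residual energy."""
    residual = bscan - bscan.mean(axis=1, keepdims=True)  # background removal
    energy = residual ** 2
    return np.unravel_index(np.argmax(energy), energy.shape)
```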
In target detection using multiple-scene clutter rejection, pixel data from separate observations of the same scene are used to reduce or remove the background clutter. This is generally accomplished by registering the scene (converting observed electromagnetic inputs to pixel vector signals) and filtering the vectors via algorithmic processing. When a scene is misregistered, its pixels are mislocated in the observation vector, leading to processing errors that degrade target detection. In this paper, misregistration degradation is evaluated for several basic cases and used to scope the alignment errors that can be tolerated from scene to scene.
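The degradation mechanism is easy to demonstrate numerically: differencing a scene against a misregistered copy of itself leaves residual clutter that grows roughly with the product of local gradient and shift. A toy Python sketch (integer one-axis shift, wrap-around boundary) is below; it illustrates the effect, not the paper's evaluation cases:

```python
import numpy as np

def residual_clutter_energy(scene, shift_px):
    """Difference a scene with a copy shifted by shift_px columns.
    With perfect registration (shift_px = 0) the clutter cancels
    exactly; misregistration leaks clutter into the residual."""
    shifted = np.roll(scene, shift_px, axis=1)
    residual = scene - shifted
    return float(np.mean(residual ** 2))   # leaked clutter energy
```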
A theory is developed that incorporates the piezoelectric effect into slewing flexible composite materials using classical laminate theory. Using the piezoelectric material as a modal sensor allows placement of all of the poles of the system without the need for a state-observer design. Pole placement is applied to a numeric example involving a graphite-epoxy beam and a DC motor. Critical damping of both the motor and the beam is achieved.
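For a single mode, full-state pole placement reduces to matching coefficients of the desired characteristic polynomial. The Python sketch below does this for a second-order plant in controllable canonical form; the plant numbers are hypothetical and the paper's beam/motor model is considerably more elaborate:

```python
import numpy as np

def place_two_poles(a1, a0, p1, p2, b=1.0):
    """State-feedback pole placement for a plant s^2 + a1*s + a0 driven
    through input gain b: choose u = -K x so the closed loop has poles
    p1, p2 (a repeated real pole gives critical damping)."""
    desired = np.polymul([1.0, -p1], [1.0, -p2])   # s^2 + d1*s + d0
    d1, d0 = desired[1].real, desired[2].real
    k1 = (d0 - a0) / b    # gain on the position-like state
    k2 = (d1 - a1) / b    # gain on the velocity-like state
    return np.array([k1, k2])
```

With modal sensing providing the full state directly, these gains can be applied without the observer the abstract says is avoided.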
This paper focuses on the application of moment invariants as pixel-level characterization features. An innovative machine learning paradigm used to automatically learn information fusion models is briefly presented.
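For reference, the first two Hu moment invariants (translation-, scale-, and rotation-invariant) can be computed over an image patch as in the Python sketch below; which invariants and at what neighborhood scale the paper uses as pixel-level features is not specified here:

```python
import numpy as np

def hu_moments(img):
    """First two Hu moment invariants of a nonnegative image patch."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                      # central moments
        return ((x - cx) ** p * (y - cy) ** q * img).sum()
    def eta(p, q):                     # normalized central moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```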
Detection, classification, and identification of high-value targets are ongoing challenges for the defense research community. Many automatic target recognition approaches exist, each with specific advantages and limitations. The approach considered here first segments potential targets at the pixel level, then applies several hierarchical levels of object classification and identification. This paper discusses a specific aspect of this paradigm--the heuristic assessment of object classification likelihood estimates.
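One simple form such a heuristic assessment could take is gating on both the top likelihood and its margin over the runner-up; the Python sketch below is purely illustrative, with hypothetical threshold values, and is not the paper's specific heuristic:

```python
def assess_classification(likelihoods, min_top=0.5, min_margin=0.2):
    """Accept the top class only if its likelihood is high enough and
    sufficiently separated from the runner-up; otherwise return None
    to defer to a higher level of the hierarchy. Needs >= 2 classes."""
    ranked = sorted(likelihoods.items(), key=lambda kv: kv[1], reverse=True)
    (top_cls, top), (_, second) = ranked[0], ranked[1]
    if top >= min_top and (top - second) >= min_margin:
        return top_cls
    return None   # defer / flag for further analysis
```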
An improved volume intersection method for constructing 3D models is presented. The 3D modeling scheme is the basis for a unified approach to the prediction of multisensory imagery and features. A modified volume-surface octree is used to simulate the physical processes that affect the generation of visual, thermal, and laser radar imagery. The accuracy of the object model is improved by a new technique that detects and eliminates false volumes in octrees constructed from silhouettes. This technique partitions the input silhouettes at concavities based on line drawing information. The partitioned silhouettes form the three principal views for each of a number of subparts of the object. Octrees are constructed for each subpart, and a union operation is used to combine these subparts. Examples of the improved object model and the multisensory imagery produced by this scheme are presented.
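The baseline volume intersection step, whose false volumes the paper's technique removes, is simple silhouette carving: a voxel survives only if it projects inside every silhouette. A Python sketch follows (the per-view projector functions are hypothetical camera models supplied by the caller):

```python
import numpy as np

def carve_volume(silhouettes, projectors, grid_pts):
    """Volume intersection over candidate voxel centers (N x 3).
    Each projector maps 3D points to (row, col) pixel coordinates in
    its view. Concavities produce false volumes here, which is what
    the silhouette-partitioning technique then eliminates."""
    keep = np.ones(len(grid_pts), dtype=bool)
    for sil, project in zip(silhouettes, projectors):
        r, c = project(grid_pts)                  # per-view projection
        inside = (0 <= r) & (r < sil.shape[0]) & (0 <= c) & (c < sil.shape[1])
        occ = np.zeros(len(grid_pts), dtype=bool)
        occ[inside] = sil[r[inside].astype(int), c[inside].astype(int)] > 0
        keep &= occ                               # must lie in every silhouette
    return grid_pts[keep]
```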
Over The Horizon Radar (OTHR) uses ionospheric layers as a reflection medium. Multiple ionospheric layers cause several tracks per target to be observed at the receiver. The objective of this research is to associate tracks that belong to the same target. We adopt a data fusion approach to track association based on knowledge of human perceptual grouping mechanisms (Gestalt psychology). To facilitate fusion of the tracks using affinity information derived from human perceptual grouping principles, we developed a clustering algorithm based on a refined self-organizing neural network. This network, which we call the dynamic clustering scheme, automatically controls the allocation of clusters in response to the novelty of each input.
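The novelty-driven allocation idea can be sketched with a simple leader-style clusterer in Python: a new cluster is created whenever an input track feature lies farther than a novelty radius from every existing cluster center. This is a minimal stand-in for the paper's self-organizing scheme, not its actual network:

```python
import numpy as np

def dynamic_cluster(tracks, novelty_radius):
    """Assign track feature vectors to clusters, allocating a new
    cluster whenever an input is too novel; existing centers are
    updated as running means of their members."""
    centers, counts, labels = [], [], []
    for t in np.asarray(tracks, dtype=float):
        if centers:
            d = [np.linalg.norm(t - c) for c in centers]
            k = int(np.argmin(d))
        if not centers or d[k] > novelty_radius:
            centers.append(t.copy())               # allocate a new cluster
            counts.append(1)
            k = len(centers) - 1
        else:
            counts[k] += 1
            centers[k] += (t - centers[k]) / counts[k]   # running mean update
        labels.append(k)
    return centers, labels
```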
This paper presents a relative assessment of the multisensor system performance obtainable under the two alternative fusion strategies presented in Part I of this study. Both the initial and asymptotic performances of these strategies are compared and discussed to delineate a possible overall operational strategy.