In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test and the problems of classification and feature fusion is not well understood. Thus, the contribution of this work is a method of classifying an unlabeled vector of fused features (i.e., detecting a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test set to zero and the upper log-threshold determined by the expected distance between false alerts. It is of interest to estimate the delay (or, equivalently, the stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown for both different numbers of simulated training data and different numbers of relevant features for each class.
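The repeated-SPRT structure described in this abstract is the CUSUM recursion underlying Page's test, and is easy to sketch. The Gaussian mean-shift log-likelihood ratio and the threshold below are illustrative assumptions, not the paper's actual feature model:

```python
def page_test(samples, llr, upper_threshold):
    """Page's test: a CUSUM of log-likelihood ratios whose lower
    log-threshold is clipped at zero; alert (classify) when the
    statistic crosses the upper log-threshold."""
    s = 0.0
    for k, x in enumerate(samples, start=1):
        s = max(0.0, s + llr(x))      # restart at zero, as in a repeated SPRT
        if s >= upper_threshold:
            return k                  # stopping time: delay to the decision
    return None                       # no alert within the observed samples

# Illustrative LLR for a shift from N(0, 1) to N(1, 1): the log-ratio is x - 0.5.
llr = lambda x: x - 0.5
stop = page_test([0.0] * 20 + [1.5] * 10, llr, upper_threshold=3.0)
```

Raising `upper_threshold` lengthens the mean time between false alerts at the cost of a larger delay to decision, which is exactly the trade-off the paper plots.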
In this paper we present methods to enhance the classification rate in decision fusion with partially redundant information by manipulating the input to the fusion scheme using a priori performance information. Intuitively, it makes sense to trust a more reliable tool more than a less reliable one, without discounting the less reliable one completely. For a multi-class classifier, the reliability per class must be considered. In addition, complete ignorance for any given class must also be factored into the fusion process to ensure that all faults are equally well represented. However, overly trusting the best classifier will not permit the fusion tool to achieve results beyond the best classifier's performance. We assume that the performance of the classifiers to be fused is known, and show how to take advantage of this information. In particular, we glean pertinent performance information from the classifier confusion matrices and their cousin, the relevance matrix. We further demonstrate how to integrate a priori performance information within a hierarchical fusion architecture. We investigate several schemes for these operations and discuss the advantages and disadvantages of each. We then apply the concepts introduced to the diagnostic realm, where we aggregate the output of several different diagnostic tools. We present results motivated by diagnosing on-board faults in aircraft engines.
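One simple way to realize per-class trust from a confusion matrix (a generic sketch, not the authors' relevance-matrix scheme) is to read off column-wise precision and use it to weight each classifier's vote:

```python
import numpy as np

def class_reliability(confusion):
    """Per-class reliability: of all the times the classifier declared
    class j, the fraction that were truly class j (column-wise precision).
    Rows are true classes, columns are declared classes."""
    conf = np.asarray(confusion, dtype=float)
    declared = conf.sum(axis=0)                 # totals per declared class
    with np.errstate(invalid="ignore", divide="ignore"):
        rel = np.where(declared > 0, conf.diagonal() / declared, 0.0)
    return rel

def weighted_vote(decisions, reliabilities):
    """Fuse hard class decisions by summing each classifier's
    reliability for the class it declared."""
    scores = np.zeros(len(reliabilities[0]))
    for decision, rel in zip(decisions, reliabilities):
        scores[decision] += rel[decision]
    return int(scores.argmax())

conf_a = [[8, 2], [1, 9]]                       # a fairly reliable classifier
conf_b = [[5, 5], [4, 6]]                       # a weaker one
rels = [class_reliability(conf_a), class_reliability(conf_b)]
fused = weighted_vote([0, 1], rels)             # A declares class 0, B class 1
```

Because classifier A's declarations of class 0 have been right far more often, its vote outweighs B's without B being discounted entirely.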
Machine learning, one of the data mining and knowledge discovery tools, addresses the automated extraction of knowledge from data, expressed here in the form of production rules. The paper describes a method for improving the accuracy of rules generated by an inductive machine learning algorithm by generating an ensemble of classifiers. It generates multiple classifiers using the CLIP4 algorithm and combines them using a voting scheme. The generation of a set of different classifiers is performed by injecting controlled randomness into the learning algorithm, but without modifying the training data set; our method is based on the characteristic properties of the CLIP4 algorithm. The SPECT heart image analysis system is used as a case study in which improving accuracy is very important. Benchmarking results on other well-known machine learning datasets, and a comparison with an algorithm that uses boosting to improve its accuracy, are also presented. Unlike boosting, the proposed method always improved accuracy when compared with a single classifier generated by the CLIP4 algorithm, and the results obtained are comparable with other state-of-the-art machine learning algorithms.
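CLIP4 itself is not reproduced here, but the combination step, unweighted voting over an ensemble of independently randomized classifiers, can be sketched generically with stand-in classifiers:

```python
from collections import Counter

def ensemble_vote(classifiers, x):
    """Combine an ensemble by unweighted majority voting; ties are
    broken in favor of the first label seen."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three toy "classifiers" (stand-ins for rule sets produced by a
# randomized learner) that disagree on some inputs:
ensemble = [lambda x: x > 0, lambda x: x > 1, lambda x: x > -1]
label = ensemble_vote(ensemble, 0.5)   # votes: True, False, True
```

The ensemble corrects the middle classifier's error on this input, which is the mechanism by which voting over diverse classifiers raises accuracy.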
In this paper, the problem of fusing logic-based technical-specification (or model-based diagnosis) knowledge components of a physical device or process is investigated. It is shown that most standard logic approaches to belief fusion are not adequate in this context, since some rules should be merged even when the fusion is consistent. Accordingly, we discuss the various types of formulas that should be merged during a fusion process in order to prevent necessary conditions for the absence of failure from becoming sufficient conditions. This transformation is then described formally. It can be performed as an efficient preprocessing step on the knowledge components to be fused. Finally, a series of subsumption tests is proposed, preventing conditions for the absence of failure from being overridden by subsumption.
The goal of this paper is to present an approach to automatic target recognition (ATR) that allows for efficient updating of the recognition algorithm of a fusion agent when new symbolic information becomes available. This information may, for instance, provide additional characterization of a known type of target, or supply a description of a new type of target. The new symbolic information can be either posted on a web page or provided by another agent; the sensory information can be obtained from two imaging sensors. In our scenario the fusion agent, after noticing such an event, processes the new symbolic information and incorporates it into its recognition rules. To achieve this the fusion agent needs to understand the symbolic information, a capability achieved through the use of an ontology. Both the fusion agent and the knowledge provider (which may be another software agent or a human annotator) know the ontology, and the web-based information is annotated using that ontology. In this paper we describe the approach, provide examples of symbolic target descriptions, describe an ATR scenario, and show some initial results of simulations for the selected scenario. The discussion shows the advantages of the proposed approach over one in which the recognition algorithm is fixed.
In this paper, we describe an ontology-based multi-agent approach to data fusion from heterogeneous data sources. We assume that the data can come from various sensors or databases. In our approach, each data source is handled by one agent. The agent is able to deliver the data and a description of the data, provided in the form of a specialized ontology. This description is the basis for fusion and integration of data from different sources or agents: a specialized agent fuses the data by evaluating these descriptions under the given query. Since the data source agents themselves maintain the descriptions of their data, the whole system of data sources is extensible, meaning that a new data source agent may enter the system at any time; after proper registration with the fusion agent, a new agent can contribute to the overall process. The fusion agent is requested to provide data from different sources with type and location specification. The possible process outcomes are fused data, transformation rules or a failure message. After finding fusion rules, the agent is able to provide data continuously. In our example ontologies we describe primarily numerical data sources and their relations; however, symbolic data fusion is also considered.
In this paper, a series of knowledge fusion operators is motivated and analyzed. They are defined in a semantic way, although syntactical facets of knowledge are taken into account. More precisely, they rely on a rank-ordering of interpretations that is based on the number of formulas that the interpretations falsify. It is briefly discussed how these operators could be refined by taking into account various distribution policies of the falsified information among the knowledge sources, syntactical properties of the formulas to be fused, and forms of integrity constraints and preference among literals.
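A toy sketch of the core ranking: enumerate interpretations and score each by the number of formulas it falsifies across all sources (the refinements the abstract mentions, such as distribution policies and syntactic preferences, are omitted):

```python
from itertools import product

def rank_interpretations(knowledge_bases, variables):
    """Rank every interpretation by the total number of formulas it
    falsifies across all sources; return the best-ranked interpretations
    and their falsification count. Formulas are boolean predicates over
    a dict mapping variable names to truth values."""
    best, best_count = [], None
    for values in product([False, True], repeat=len(variables)):
        interp = dict(zip(variables, values))
        falsified = sum(1 for kb in knowledge_bases for f in kb if not f(interp))
        if best_count is None or falsified < best_count:
            best, best_count = [interp], falsified
        elif falsified == best_count:
            best.append(interp)
    return best, best_count

# Two sources over {p, q}: source 1 asserts p; source 2 asserts not-p and q.
kb1 = [lambda i: i["p"]]
kb2 = [lambda i: not i["p"], lambda i: i["q"]]
models, count = rank_interpretations([kb1, kb2], ["p", "q"])
```

The sources conflict on p, so no interpretation satisfies everything; the operator keeps those falsifying the fewest formulas, all of which here agree on q.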
The major drawback of Dempster-Shafer theory of evidence is its computational burden. Indeed, Dempster's rule of combination involves an exponential number of focal elements, which can be unmanageable in many applications. To avoid this problem, approximation rules and algorithms have been explored that both reduce the number of focal elements and keep as much information as possible in the belief function to be combined next. Studies comparing approximation algorithms have already been carried out, but the criteria used always involve pignistic transformations, and thereby a loss of information in both the original belief function and the approximated one. In this paper, we propose to analyze some approximation methods by computing the distance between the original belief function and the approximated one. This true distance makes it possible to quantify the quality of the approximation. We also compare this criterion to other error criteria, often based on pignistic transformations. We show results of Monte-Carlo simulations, as well as of an application to target identification.
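The source of the exponential blow-up is visible in a direct implementation of Dempster's rule: every pair of focal elements can create a new focal element. A minimal sketch over a two-element frame (the masses are illustrative):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets; conflicting (empty-intersection) mass is
    renormalized away."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    k = 1.0 - conflict
    return {a: m / k for a, m in combined.items()}

# Two sources over the frame {plane, helicopter}:
P, H = frozenset({"plane"}), frozenset({"helicopter"})
PH = P | H
fused = dempster_combine({P: 0.6, PH: 0.4}, {H: 0.3, PH: 0.7})
```

With n singletons a belief function can have up to 2^n - 1 focal elements, and combination multiplies the pairs out, which is exactly why the approximation algorithms studied in the paper prune focal elements.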
For several years, researchers have explored the unification of the theories enabling the fusion of imperfect data, and have finally considered two frameworks: the theory of random sets and conditional events algebra. Traditionally, the information is modeled and fused in one of the known theories: Bayesian, fuzzy sets, possibilistic, evidential, or rough sets. Previous work has shown what kind of imperfect data each of these theories can best deal with, so, depending on the quality of the available information (uncertain, vague, imprecise, ...), one particular theory seems to be the preferred choice for fusion. However, in a typical application, a variety of sources provide different kinds of imperfect data. The classical approach is then to model and fuse the incoming data in a single, previously chosen theory. In this paper, we first introduce the various kinds of imperfect data and then the theories that can be used to cope with the imperfection. We also present the existing relationships between them and detail the most important properties of each theory. We finally propose random sets theory as a possible framework for unification, and show how the individual theories can fit in this framework.
This paper presents some important differences between the theories that support uncertainty management in data fusion. The main comparative results illustrated in this paper are the following. Incompatibility between decisions obtained from probabilities and decisions obtained from credibilities is highlighted. In the dynamic frame, as remarked in [19] or [17], the belief and plausibility of the Dempster-Shafer model do not bracket the Bayesian probability; this bracketing can, however, be obtained with the Modified Dempster-Shafer approach, or in the Bayesian framework either by simulation techniques or by studentization. The uncommitted mass in the Dempster-Shafer approach, i.e., the mass accorded to ignorance, provides a mechanism similar to reliability in the Bayesian model: uncommitted mass in Dempster-Shafer theory and reliability in Bayes theory both act like a filter that weakens extracted information and improves robustness to outliers. It is therefore logical to observe, on examples such as the one presented by D.M. Buede, faster convergence of a Bayesian method that does not take reliability into account than of a Dempster-Shafer method that uses uncommitted mass. However, on Bayesian masses, if reliability is taken into account at the same level as the uncommitted mass, i.e., F = 1 - m, we observe an equivalent convergence rate. When the Dempster-Shafer and Bayes operators are informed by uncertainty, faster or slower convergence can be exhibited on non-Bayesian masses. This is due to positive or negative synergy between the information delivered by the sensors, a direct consequence of non-additivity when considering non-Bayesian masses. Finally, lack of knowledge of the prior in Bayesian techniques can be quickly compensated by the information accumulated over time by a set of sensors. All these results are presented on simple examples, and developed where necessary.
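The filtering mechanism the abstract equates across the two frameworks, relating reliability to uncommitted mass via F = 1 - m, corresponds to classical Shafer discounting, which can be sketched as follows (the frame and masses are illustrative):

```python
def discount(mass, frame, alpha):
    """Shafer discounting: weaken a mass function by reliability alpha,
    moving the discounted mass 1 - alpha onto the whole frame (ignorance).
    This is the 'filter' role attributed to both the uncommitted DS mass
    and the Bayesian reliability coefficient."""
    theta = frozenset(frame)
    out = {a: alpha * m for a, m in mass.items() if a != theta}
    out[theta] = 1.0 - alpha + alpha * mass.get(theta, 0.0)
    return out

frame = {"friend", "foe"}
F = frozenset({"friend"})
m = {F: 0.9, frozenset(frame): 0.1}
weakened = discount(m, frame, alpha=0.8)
```

Lower alpha pushes more mass onto ignorance, weakening the sensor's report; this is the robustness-versus-convergence-speed trade-off the abstract describes.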
This paper presents an approach to multisensor data fusion based on the use of Support Vector Machines (SVM). The approach is investigated using simulated generic sensor data, representative of data imperfections that may be encountered in multisensor fusion applications. In particular the issue of data incompleteness is addressed and a method exploiting vicinity of training points is proposed for incompleteness correction. The paper also investigates applicability of vicinal kernels in SVM-based sensor data fusion.
Coatings damage in shipboard tanks is presently assessed using Certified Coatings Inspectors. Before a coatings inspector enters a tank, the tank must be emptied and certified gas free. These requirements, combined with the limited number of certified coatings inspectors available at shipyards and naval bases, significantly increase the cost and the logistical requirements associated with performing shipboard tank inspections. There is additionally significant variation in damage assessments made by different inspectors. To overcome these difficulties, the Naval Research Laboratory has developed two video inspection systems that obviate the requirements for certifying tanks gas free and for emptying the tank prior to performing an inspection; these systems also obviate the requirement for inspector presence during tank inspections. The Naval Research Laboratory has also developed an automatic corrosion detection algorithm. It currently employs two independent algorithms that individually assess the tank coatings damage; the independent damage assessments are then fused to attain a single coatings damage value. In testing performed to date, the corrosion detection algorithm has been shown to significantly reduce the effect of inspector-to-inspector variability and to provide an accurate assessment of tank coatings damage. This in turn makes it significantly easier to prioritize ship maintenance.
The joint histogram of two images is required to uniquely determine the mutual information (MI) between the two images. It has been pointed out that, under certain conditions, existing joint histogram estimation algorithms such as partial volume interpolation (PVI) and linear interpolation may produce different types of artifact patterns in the MI-based registration function by introducing spurious maxima. These artifacts may hamper the global optimization process and limit registration accuracy. In this paper we present an extensive study of interpolation-induced artifacts using simulated brain images, and show that similar artifact patterns also exist when other intensity interpolation algorithms, such as cubic convolution interpolation and cubic B-spline interpolation, are used. A new joint histogram estimation scheme named generalized partial volume estimation (GPVE) is proposed to eliminate the artifacts. The scheme involves a kernel function; when the first-order B-spline is chosen as the kernel, it is equivalent to PVI. A clinical brain image database furnished by Vanderbilt University is used to compare the accuracy of our algorithm with that of PVI. Our experimental results show that the use of higher-order kernels can effectively remove the artifacts and, in cases where the MI-based registration result suffers from them, registration accuracy can be improved significantly.
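Once a joint histogram has been estimated (by PVI, GPVE or otherwise), mutual information follows directly from its marginals. A minimal sketch with two hand-made histograms (not the paper's brain data):

```python
import numpy as np

def mutual_information(joint_hist):
    """Mutual information from a joint intensity histogram:
    I(A;B) = sum_{a,b} p(a,b) * log( p(a,b) / (p(a) * p(b)) )."""
    p = joint_hist / joint_hist.sum()
    pa = p.sum(axis=1, keepdims=True)      # marginal of image A
    pb = p.sum(axis=0, keepdims=True)      # marginal of image B
    nz = p > 0                             # 0 * log(0) terms contribute nothing
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())

# Perfectly correlated intensity bins (as at correct alignment) vs. a joint
# histogram that factors into its marginals (statistically independent images):
aligned = np.array([[5.0, 0.0], [0.0, 5.0]])
independent = np.array([[2.5, 2.5], [2.5, 2.5]])
```

MI peaks when the joint histogram is concentrated, which is why interpolation artifacts that artificially sharpen or blur the histogram create spurious maxima in the registration function.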
Screening mammography is the most efficient and cost-effective method available for detecting the signs of early breast cancer in asymptomatic women between the ages of 50 and 69. To improve the detection rate and reduce the number of unnecessary biopsies, many different computer-aided diagnosis techniques have been developed. Many of these techniques use image processing algorithms to automatically segment and classify the images. The decision-making process associated with the evaluation of mammograms is complex and incorporates multiple sources of information from standard medical knowledge and radiology to pathology. The use of this information combined with the results of image processing offers new challenges to the field of data and information fusion. In this paper, we describe the different information sources and their data as well as the framework that is needed to support this type of fusion. A database of breast cancer screening cases forms the basis of the resulting fusion model. The database and decision-level fusion techniques will facilitate unique and specialized approaches for efficient and sophisticated diagnosis of breast cancer.
This paper briefly identifies some application areas that require very reliable and precise estimates of real-time depth information from visual sensors. Among the available techniques, stereoscopy is popularly employed to extract 3-D structure information about imaged objects. In such applications there is a strong need to evaluate the uncertainty that remains in the stereoscopic information, both for fusing it with other sensor modalities in multi-sensor systems and for reducing the uncertainty to any pre-assigned value, if required. Both approaches are realized in this paper. In the first, partial stereo information obtained from a single camera is considered for fusion using a generalized camera model, and the uncertainty ellipsoid of this information is derived. In the second, a multiple-camera (or multiple-baseline) stereo system is considered, in which the correctness and precision of multi-baseline stereo matching are improved by applying fusion concepts. The improvement of the trade-off between precision and ambiguity through fusion of depth estimates is illustrated using a particular intensity function for the images. Through fusion, fewer images are required to obtain the same level of precision.
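For independent depth estimates with known uncertainties, the standard minimum-variance fusion rule illustrates why combining baselines buys precision (a generic sketch, not the paper's uncertainty-ellipsoid derivation):

```python
def fuse_depths(estimates):
    """Minimum-variance (inverse-variance weighted) fusion of independent
    depth estimates, each given as a (depth, variance) pair. The fused
    variance never exceeds the smallest single-estimate variance, which
    is how fusing several baselines buys precision with fewer images."""
    inv = sum(1.0 / var for _, var in estimates)
    depth = sum(d / var for d, var in estimates) / inv
    return depth, 1.0 / inv

# Two baselines agreeing on roughly 2 m depth with different precision:
fused_d, fused_var = fuse_depths([(2.0, 0.04), (2.1, 0.01)])
```

The fused estimate is pulled toward the more precise measurement, and its variance (0.008 here) is below either input's, tightening the precision/ambiguity trade-off.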
This paper suggests recognition algorithms for multilevel images of multicharacter identification objects. These algorithms are based on the application of linear (nonlinear) equivalent (nonequivalent) space-dependent similarity measures of normalized matrix data as criterial (discriminant) functions. Modeling and experimental results have shown that such nonlinear-equivalent algorithms possess higher discriminant properties and better operating characteristics, especially in the case of considerable (up to 50%) noise levels in the images.
During the last decades, research in the sensor fusion area has mainly focused on fusion methods and feature selection methods. A possible further development is to incorporate a process referred to as active perception, in which the system is able to manipulate its sensing mechanisms to focus on selected information in the surrounding environment. This process may also handle feature selection, with respect to both which features to use and how many. This paper presents a model that contains a decision system based on active perception, integrated with previous sensor fusion algorithms. The human body has perhaps one of the most advanced perceptual processing systems. The human perceptual process can be divided into sensation (measurement collection) and perception (interpretation of the surroundings). During sensation, a huge amount of data reflecting the environment is collected from different sensors; this information then has to be interpreted effectively, i.e., in the fusion process. The interpretation, together with a decision system that controls the sensors to focus on important information, corresponds to the (active) perception process. The model presented in this paper capitalizes on the properties of its biological counterpart to achieve more human-like sensor fusion. Finally, the paper presents the testing of the model in two examples. The applications have a safety orientation: fire indication, identification and decision-making. The goal is to extend a conventional fire alarm system so that it not only detects fire but also proposes different actions, for example for a human in a dangerous area.
Reliability is a measure of the degree of trust in a given measurement. We analyze and compare: ML (classical maximum likelihood), MLE (maximum likelihood weighted by entropy), MLR (maximum likelihood weighted by reliability), MLRE (maximum likelihood weighted by reliability and entropy), DS (credibility/plausibility), and DSR (DS weighted by reliabilities). The analysis is based on a model of a dynamic fusion process composed of three sensors, each of which has its own discriminatory capacity, reliability rate, unknown bias and measurement noise. The knowledge of the uncertainties is also severely corrupted, in order to analyze the robustness of the different fusion operators. Two sensor models are used: the first type of sensor is able to estimate the probability of each elementary hypothesis (probabilistic masses), while the second type delivers masses on unions of elementary hypotheses (DS masses). In the second case, probabilistic reasoning leads to abusively sharing the mass between elementary hypotheses. Compared to classical ML or DS, which achieve just 50% correct classification in some experiments, DSR, MLE, MLR and MLRE reveal very good performance on all experiments (more than an 80% correct classification rate). The experiments were performed with large variations of the reliability coefficients for each sensor (from 0 to 1), and with large variations in the knowledge of these coefficients (from 0 to 0.8). All four operators reveal good robustness, but MLR proves to be uniformly dominant over all the experiments in the Bayesian case and achieves the best mean performance under incomplete a priori information.
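The paper's exact MLR weighting is not reproduced here, but one common way to fold a reliability coefficient into maximum-likelihood fusion is to mix each sensor's normalized likelihood toward the uniform distribution in proportion to 1 - r before multiplying across sensors:

```python
def mlr_fuse(likelihoods, reliabilities):
    """Reliability-weighted maximum likelihood (an illustrative form,
    not necessarily the paper's): each sensor's likelihood vector is
    mixed toward the uninformative uniform distribution in proportion
    to 1 - r, the sensors are combined multiplicatively, and the
    best-scoring hypothesis is returned."""
    n = len(likelihoods[0])
    scores = [1.0] * n
    for lik, r in zip(likelihoods, reliabilities):
        total = sum(lik)
        for j in range(n):
            scores[j] *= r * (lik[j] / total) + (1.0 - r) / n
    return max(range(n), key=scores.__getitem__)

# A reliable sensor favoring hypothesis 0 outweighs an unreliable one favoring 1:
best = mlr_fuse([[0.8, 0.2], [0.1, 0.9]], reliabilities=[0.9, 0.2])
```

An unreliable sensor's vote is flattened toward uniform and so carries little weight, the same filtering effect the abstract credits for the robustness of the weighted operators.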
Data fusion architectures can be categorized by their characteristics into data-level fusion, feature-level fusion and decision-level fusion. In this paper, we provide a new target identification fusion technique that adopts not only a feature-level fusion approach but also a decision-level fusion approach, in order to accommodate sensors' uncertain reports and improve fusion performance. In the feature-level fusion stage, we apply fuzzy set theory and Bayesian theory to the sensor data, such as sensor parameters and detected target information. In the decision-level fusion stage, we apply advanced Bayesian theory to decide the final target identification. Experimental results with various kinds of sensor data have verified the robustness of our algorithms compared with conventional feature-level and decision-level fusion algorithms.
The paper deals with the problem of multifunction radar resource management, which consists of target/task ranking and task scheduling. The paper focuses on target ranking using a data fusion approach. Data from the radar (object velocity, range, altitude, direction, etc.), the IFF system (Identification Friend or Foe) and the ESM system (Electronic Support Measures, which reports a threat's electromagnetic activity) are used to decide the importance assigned to each detected target. The main problem is the multiplicity of types of input information: the information from the radar carries imperfection of the probabilistic or ambiguous type, while the IFF information is of the evidential type. To take advantage of these information sources, an advanced data fusion system is necessary, one able to fuse evidential with fuzzy information and evidential with a priori information. The paper describes a system that fuses fuzzy and evidential information without first converting them to a common type, and also proposes the use of dynamic fuzzy qualifiers. The results of preliminary tests of the system are shown.
In many missile and fire control applications, targets of interest may be acquired and tracked over some finite period of time with one or more sensors. This allows sequential segments, or frames, of temporal information to be collected per sensor as well as across sensors. By appropriately processing this information, target detection and classification performance can be considerably increased. In this work we have developed a novel design for an integrated spatio-temporal multi-sensor fusion system that combines inputs from different sensors as well as from the different time frames of each sensor. We have also developed new fusion strategies (additive and MINMAX fusion) in addition to the traditional strategies. Our test and analysis results show that temporal fusion can improve target classification as effectively as spatial fusion.
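The additive strategy simply sums classification scores across sensors and frames. The exact MINMAX rule used in the paper is not spelled out in this abstract, so the min-over-frames / max-over-sensors form sketched below is an assumption for illustration:

```python
import numpy as np

def additive_fusion(scores):
    """Sum classification scores over sensors and time frames.
    scores: array of shape (n_sensors, n_frames, n_classes)."""
    return scores.sum(axis=(0, 1))

def minmax_fusion(scores):
    """Illustrative MINMAX rule (assumed form): per sensor keep the weakest
    frame score, then take the strongest sensor."""
    return scores.min(axis=1).max(axis=0)

# hypothetical scores: two sensors, three frames, two candidate classes
scores = np.array([[[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]],
                   [[0.6, 0.4], [0.5, 0.5], [0.9, 0.1]]])
```
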
This paper examines the requirement for accurate estimates of the statistical correlations between measurements in a distributed air-to-ground targeting system. The study uses results from a distributed multi-platform targeting simulation based on a level-1 data fusion system to assess the extent to which correlated measurements can degrade system performance, and the degree to which these effects need to be included to obtain a required level of accuracy. The data fusion environment described in the paper incorporates a range of target tracking and data association algorithms, including several variants of the standard Kalman filter, probabilistic association techniques and Reid's multiple hypothesis tracker. A variety of decentralized architectures are supported, allowing comparison with the performance of equivalent centralized systems. In the analysis, consideration is given to constraints on the computational complexities of the fusion system, and the availability of estimates of the measurement correlations and platform-dependent biases. Particular emphasis is placed on the localisation accuracy achieved by different algorithmic approaches and the robustness of the system to errors in the estimated covariance matrices.
In this paper we describe the nature of the problem of surveillance of airport surface movement. We describe the characteristics, performance, and unique problems of the various airport sensors available, and the need for a fusion system that provides an integrated surveillance picture. Parallel sensor fusion developments are described in terms of their applicability to the sensor fusion task in surface surveillance. Paradigms for sensor fusion, including alternative architectures, algorithms, and performance metrics, are described. Finally, we describe the system implementation and quantitative performance of sensor fusion applied to the surface surveillance problem at demonstrations at Atlanta Hartsfield International Airport (1998, ATL) and Dallas Fort Worth International Airport (1999, 2000, DFW), along with in-progress and planned future developments in sensor fusion.
A technique to virtually recreate speech signals entirely from the visual lip motions of a speaker is proposed. By using six geometric parameters of the lips obtained from the Tulips1 database, a virtual speech signal is recreated by using a 3.6s audiovisual training segment as a basis for the recreation. It is shown that the virtual speech signal has an envelope that is directly related to the envelope of the original acoustic signal. This visual signal envelope reconstruction is then used as a basis for robust speech separation where all the visual parameters of the different speakers are available. It is shown that, unlike previous signal separation techniques, which required an ideal mixture of independent signals, the mixture coefficients can be very accurately estimated using the proposed technique in even non-ideal situations.
In the present study, General Dynamics Canada (formerly Computing Devices Canada) investigates Bayesian inference for improved fusion of multiple scanning sensors in the detection of buried anti-tank (AT) mines. The algorithm uses statistical data taken from trials to construct conditional probabilities for the individual sensors, in order to better discriminate landmines.
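A minimal sketch of the kind of computation this implies: per-sensor detection probabilities estimated from trial data feed a Bayesian posterior for the presence of a mine. The probability values and the sensor-independence assumption below are illustrative, not taken from the study:

```python
def mine_posterior(prior_mine, detections, p_det_given_mine, p_det_given_clutter):
    """Posterior P(mine | sensor detections), assuming independent sensors.

    detections         : list of booleans, one per sensor
    p_det_given_mine   : per-sensor P(detect | mine) from trial statistics
    p_det_given_clutter: per-sensor P(detect | clutter) from trial statistics
    """
    lm, lc = prior_mine, 1.0 - prior_mine
    for d, pm, pc in zip(detections, p_det_given_mine, p_det_given_clutter):
        lm *= pm if d else (1.0 - pm)       # likelihood under "mine"
        lc *= pc if d else (1.0 - pc)       # likelihood under "clutter"
    return lm / (lm + lc)

# hypothetical: two sensors both fire on a spot with a 10% prior
p = mine_posterior(0.1, [True, True], [0.9, 0.8], [0.2, 0.3])
```

Two agreeing detections lift a weak prior substantially, which is the essence of the fusion gain.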
The human brain fuses information from a variety of modalities to locate, track, and identify targets. Vision-based tracking, which uses a 2D signal, can accurately identify and locate objects, but it requires more processing time than 1D auditory systems. Auditory systems can locate and identify objects by fusing the interaural time difference (ITD) and interaural intensity difference (IID). To investigate the advantages of a neurophysiology-based fusion model, we seek to localize a target from a 1D signal analysis conducted over repeated measurements in which a user is allowed to move the sensors. Similar to the human fusing auditory information from the two ears, we fuse information over time and space from two sensors monitoring a single target. Through spatiotemporal fusion of the 1D analysis, we show how ITD and IID fusion functions support a Level 5 User Refinement task.
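The two auditory cues can be estimated from a pair of 1D signals in a few lines. This cross-correlation sketch is a standard textbook formulation, not the paper's specific fusion functions:

```python
import numpy as np

def estimate_itd(sig_left, sig_right, fs):
    """Interaural time difference (seconds) from the cross-correlation peak."""
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_right) - 1)   # lag in samples
    return lag / fs

def estimate_iid(sig_left, sig_right):
    """Interaural intensity difference in dB from the RMS ratio."""
    rms = lambda s: np.sqrt(np.mean(np.asarray(s, dtype=float) ** 2))
    return 20.0 * np.log10(rms(sig_left) / rms(sig_right))

# synthetic check: identical pulses, the right channel delayed by 5 samples
fs = 1000
left, right = np.zeros(32), np.zeros(32)
left[10], right[15] = 1.0, 1.0
```
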
This paper deals with multisensor statistical interval estimation fusion, that is, data fusion from multiple statistical interval estimators for the purpose of estimating a parameter θ. A multisensor convex linear statistical fusion model for optimal interval estimation fusion is established, and a Gauss-Seidel iteration algorithm for finding the fusion weights is proposed. In particular, we suggest convex combination minimum variance fusion, which greatly reduces the computation of the fusion weights, generally yields near-optimal estimation performance, and may achieve exactly optimal performance for some specific distributions of the observation data. Numerical examples are provided that give additional support to these results.
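The convex combination minimum variance idea can be sketched for scalar point estimates (the paper works with interval estimators; this simplified version only illustrates the inverse-variance weighting):

```python
import numpy as np

def min_variance_fusion(estimates, variances):
    """Convex combination of point estimates with inverse-variance weights.

    Weights w_i = (1/var_i) / sum_j (1/var_j) minimize the variance of the
    fused estimate when the sensor errors are uncorrelated.
    """
    inv = 1.0 / np.asarray(variances, dtype=float)
    w = inv / inv.sum()                     # convex: nonnegative, sums to 1
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / inv.sum()             # never worse than the best sensor
    return fused, fused_var

fused, fused_var = min_variance_fusion([1.0, 3.0], [1.0, 1.0])
```

Note that with equal variances this degenerates to a plain average, and the fused variance is half the single-sensor variance.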
Considering a target traveling with constant acceleration in two-dimensional space, a mathematical observability criterion for the target states is established in this paper. In deriving the criterion, the nonlinear bearing measurement equation is first cast into a linear framework through a series of mathematical transformations. Next, a high-order nonlinear differential equation is converted into a set of first-order linear differential equations by exploiting properties of the full-rank matrix. Finally, the observability criterion given by previous researchers for a target traveling in two-dimensional space with constant velocity is shown to follow directly from the criterion presented here.
In a multi-node distributed decision system, conditions sometimes permit little or no information exchange between the nodes, which makes information fusion and the final decision difficult. If a node is viewed as an agent that stores, in additional case bases, other nodes' historical experience and problem-solving knowledge, it can use case-based reasoning (CBR) together with transposition reasoning to infer the likely viewpoints or decisions of those nodes and then perform the information fusion by itself. This may reduce the subjectivism that is the weakness of pure transposition reasoning.
Modern technology provides a great amount of information. In computer monitoring or control systems, especially real-time expert systems, keeping the situation in hand requires one or two parameters that express the quality and/or security of the whole system. This paper presents a principle for synthesizing measurements of multiple system parameters into a single parameter. The principle has been successfully applied to the monitoring of an ultra-energy-efficient house in Canada, among other applications.
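The abstract does not state the synthesis principle itself. As one plausible instantiation only (the normalization bands, weights, and "higher is better" orientation are all assumptions), several monitored parameters can be collapsed into a single quality index like this:

```python
import numpy as np

def quality_index(values, lo, hi, weights):
    """Collapse several monitored parameters into one quality score in [0, 1].
    Each value is normalized against its acceptable [lo, hi] band (higher is
    assumed better), then the normalized scores are combined by a weighted
    average."""
    values, lo, hi = (np.asarray(a, dtype=float) for a in (values, lo, hi))
    v = np.clip((values - lo) / (hi - lo), 0.0, 1.0)    # per-parameter score
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, v) / w.sum())

# hypothetical: a temperature at the top of its band and a half-scale flow rate
q = quality_index([25.0, 0.5], lo=[15.0, 0.0], hi=[25.0, 1.0], weights=[2.0, 1.0])
```
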
An array of thick film pH sensor electrodes has been fused using two separate fuser designs: the feedforward neural network and Nadaraya-Watson kernel estimator. In both cases the fuser is based on empirical data rather than analytical sensor models. Complementary sensor responses have been obtained by fabricating sensors using different metal oxides. This approach provides some immunity to interference caused by the ionic composition of the solution being sensed. The Nadaraya-Watson estimator is shown to provide a useful alternative to the feedforward neural network for multisensor fusion where sensor distributions are unknown. Indicative test results are provided for the measurement of pH in printing ink. The results confirm that the fused results are more accurate than those obtained using the single best sensor, or simple fusion schemes such as averaging or majority voting.
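Used as a fuser, the Nadaraya-Watson estimator maps the vector of raw electrode readings to a fused estimate as a kernel-weighted average of calibration targets. A minimal sketch, in which the Gaussian kernel and the bandwidth value are illustrative choices rather than the paper's:

```python
import numpy as np

def nadaraya_watson(train_x, train_y, query, bandwidth=1.0):
    """Kernel-regression fuser: weighted average of training targets, with
    Gaussian weights based on distance from the query to each training input.

    train_x : (n, d) calibration sensor-reading vectors
    train_y : (n,)   reference values (e.g. true pH) for each calibration point
    query   : (d,)   new vector of raw sensor readings to fuse
    """
    d2 = np.sum((np.asarray(train_x, float) - np.asarray(query, float)) ** 2,
                axis=1)                      # squared distance to each point
    w = np.exp(-d2 / (2.0 * bandwidth ** 2)) # Gaussian kernel weights
    return float(np.dot(w, train_y) / w.sum())

# toy 1-sensor calibration set, purely for illustration
X, y = np.array([[0.0], [1.0]]), np.array([0.0, 1.0])
```

Because it interpolates directly from empirical calibration data, no parametric model of the individual sensor responses is required, which is the property the paper exploits.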