Given two legacy exploitation systems whose performances are known, one might wish to determine whether combining them under some rule would yield a new exploitation system with improved performance. This is the fusion process. Often several performance objectives are considered in this process. We investigate the fusion process based upon multiple performances. This is related to multi-objective optimization, but differs in some aspects. In this paper we pose a multi-performance problem for combining two classification systems and derive the multi-performance fusion theory. A classification system with M possible output labels will have M(M-1) possible errors. The Receiver Operating Characteristic (ROC) manifold was created to quantify all of these errors. The assumption of independence is usually made to simplify the mathematics of combining the individual systems into one system. Boolean rules do not exist for multiple symbols; thus, Boolean-like rules were created that yield label fusion rules. An M-label system will have M! consistent rules. The formula for the resultant ROC manifold of the fused classification system, which incorporates the individual classification systems, was derived previously. For the multi-performance problem we show how the set of permutations of the label set is used to generate all of the consistent rules and how the permutation matrix is incorporated into a single formula for the ROC manifold. Examples are given that demonstrate how the solution to the multi-performance fusion problem relates to the solution of the single-performance fusion problem.
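The count of M! consistent rules can be illustrated with a small sketch. This is not the paper's derivation; it assumes one plausible reading of a consistent Boolean-like rule, namely that each permutation of the label set acts as a priority order and the fused label is the higher-priority of the two declared labels (the label names are hypothetical):

```python
from itertools import permutations

def rule_from_permutation(perm):
    """Each permutation of the label set induces a priority order;
    the fused label is whichever input label ranks higher.
    (An illustrative reading of 'consistent Boolean-like rules'.)"""
    rank = {label: i for i, label in enumerate(perm)}
    return lambda a, b: a if rank[a] <= rank[b] else b

labels = ("target", "decoy", "clutter")           # M = 3 labels
rules = [rule_from_permutation(p) for p in permutations(labels)]
print(len(rules))                                 # M! = 6 consistent rules

# Under the identity permutation, "target" dominates either input:
identity_rule = rule_from_permutation(labels)
print(identity_rule("decoy", "target"))           # -> "target"
```

With M = 2 this recovers the two familiar Boolean rules: the order ("target", "non-target") behaves like OR, and the reverse order like AND.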
A classification system such as an Automatic Target Recognition (ATR) system with N possible output labels (or decisions)
will have N(N-1) possible errors. The Receiver Operating Characteristic (ROC) manifold was created to quantify all
of these errors. Finite truth data will produce an approximation to a ROC manifold. How well does the approximate
ROC manifold approximate the true ROC manifold? Several metrics exist that quantify the approximation ability, but
researchers really wish to quantify the confidence in the approximate ROC manifold. This paper will review different
confidence definitions for ROC curves and will derive an expression for confidence of a ROC manifold. The foundation of
the confidence expression is based upon the Chebyshev inequality.
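The flavor of a Chebyshev-based confidence statement can be sketched as follows. This is a generic bound, not the paper's derived expression: for an empirical probability estimated from n i.i.d. truth samples, Chebyshev's inequality with the worst-case Bernoulli variance p(1-p)/n <= 1/(4n) lower-bounds the confidence that the estimate is within a tolerance eps of the true value (the sample size and tolerance below are hypothetical):

```python
def chebyshev_confidence(n, eps):
    """Lower bound on P(|p_hat - p| < eps) for an empirical probability
    p_hat computed from n i.i.d. samples, via Chebyshev's inequality:
    P(|p_hat - p| >= eps) <= Var(p_hat) / eps^2 <= 1 / (4 * n * eps^2)."""
    return max(0.0, 1.0 - 1.0 / (4.0 * n * eps * eps))

# e.g. 2000 ground-truth samples, +/-0.05 tolerance on one ROC coordinate:
conf = chebyshev_confidence(2000, 0.05)
print(round(conf, 3))   # -> 0.95
```

The bound is distribution-free, which is what makes it usable for every coordinate of a ROC manifold, at the price of being loose compared with, say, a binomial confidence interval.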
The Receiver Operating Characteristic (ROC) curve is typically used to quantify the performance of Automatic Target Recognition (ATR) systems. When multiple systems are to be fused, assumptions are made in order to mathematically combine the individual ROC curves of these ATR systems into the ROC curve of the fused system. Often, one of these assumptions is independence between the systems. However, correlation may exist between the classifiers, processors, sensors, and the outcomes used to generate each ROC curve. This paper will demonstrate a method for creating a ROC curve of the fused system that incorporates the correlation existing between the individual systems. Specifically, we will use the derived covariance matrix between multiple systems to compute the existing correlation and level of dependence between pairs of systems. The ROC curve for the fused system is then produced, adjusting for this level of dependency, using a given fusion rule. We generate the formula for the Boolean OR and AND rules, giving the exact ROC curve for the fused system, that is, not a bound.
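A minimal sketch of correlation-adjusted Boolean fusion, assuming the standard bivariate-Bernoulli identity P(A and B) = p1*p2 + rho*sqrt(p1(1-p1)p2(1-p2)) applied coordinate-wise to the (Pfa, Pd) operating points; the numbers are hypothetical and this is not the paper's full covariance-matrix construction:

```python
import math

def fuse_and(p1, p2, rho):
    """P(A and B) for two Bernoulli events with marginals p1, p2 and
    correlation rho. rho = 0 recovers the independence formula p1*p2."""
    return p1 * p2 + rho * math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))

def fuse_or(p1, p2, rho):
    """P(A or B) = p1 + p2 - P(A and B)."""
    return p1 + p2 - fuse_and(p1, p2, rho)

# Apply coordinate-wise to the operating points of two systems:
pd = fuse_or(0.80, 0.70, 0.0)    # independent OR: 0.8 + 0.7 - 0.56 = 0.94
pfa = fuse_or(0.10, 0.20, 0.0)   # 0.1 + 0.2 - 0.02 = 0.28
print(pd, pfa)
```

Sweeping rho between its admissible bounds shows how the exact fused curve moves between the independence result and the extremes that earlier bound-based analyses capture.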
Performance measures for families of classification systems that rely upon the analysis of receiver operating
characteristics (ROCs), such as area under the ROC curve (AUC), often fail to fully address the issue of risk,
especially for classification systems involving more than two classes. For the general case, we denote matrices
of class prevalence, costs, and class-conditional probabilities, and assume costs are subjectively fixed, acceptable
estimates for expected values of class-conditional probabilities exist, and mutual independence between a variable
in one such matrix and those of any other matrix. The ROC Risk Functional (RRF), valid for any finite number
of classes, has an associated parameter argument, which specifies a member of a family of classification systems,
and for which there is an associated classification system minimizing Bayes risk over the family. We typify
joint distributions for class prevalences over standard simplices by means of uniform and beta distributions, and
create a family of classification systems using actual data, testing independence assumptions under two such
class prevalence distributions. Examples are given where the risk is minimized under two different sets of costs.
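The risk computation underlying such a functional can be sketched concretely. The matrices below are hypothetical, and the costs are taken as 0-1 so the risk reduces to overall error probability; this illustrates the general expression, not the RRF itself:

```python
def bayes_risk(prevalence, cost, cond_prob):
    """Expected risk  R = sum_i pi_i * sum_j C[i][j] * P(decide j | class i).
    prevalence: class priors pi_i (sums to 1)
    cost:       C[i][j], cost of deciding j when the truth is class i
    cond_prob:  P[i][j], class-conditional decision probabilities."""
    return sum(
        pi * sum(c * p for c, p in zip(cost_row, prob_row))
        for pi, cost_row, prob_row in zip(prevalence, cost, cond_prob)
    )

# 3-class example with 0-1 costs (risk = overall probability of error):
prevalence = [0.5, 0.3, 0.2]
cost = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
cond = [[0.9, 0.05, 0.05],
        [0.1, 0.8,  0.1],
        [0.1, 0.1,  0.8]]
print(bayes_risk(prevalence, cost, cond))  # 0.5*0.1 + 0.3*0.2 + 0.2*0.2 = 0.15
```

Minimizing this quantity over the parameter that indexes the family of classification systems selects the Bayes-risk-optimal member; averaging it over a prevalence distribution (uniform or beta on the simplex) gives the expected-risk version discussed above.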
An Automatic Target Recognition (ATR) system with N possible output labels (or decisions) will have N(N − 1) possible
errors. The Receiver Operating Characteristic (ROC) manifold was created to quantify all of these errors. When multiple
ATR systems are fused, the assumption of independence is usually made in order to mathematically combine the individual
ROC manifolds for each system into one ROC manifold. This paper will investigate the label fusion (also called decision
fusion) of multiple classification systems that have the same number of output labels. Boolean rules do not exist for multiple
symbols; thus, we will derive possible Boolean-like rules, as well as other rules, that yield label fusion rules. The
formula for the resultant ROC manifold of the fused classification systems which incorporates the individual classification
systems will be derived. Specifically, given a fusion rule and two classification systems, the ROC manifold for the fused
system is produced. We generate formulas for the Boolean-like OR rule, Boolean-like AND rule, and other rules and give
the resultant ROC manifold for the fused system. Examples will be given that demonstrate how each formula is used.
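A minimal sketch of how the independence assumption combines two systems' class-conditional decision probabilities under a given label fusion rule. The two-label OR rule and the numeric values are hypothetical; the pattern, though, is the generic one: sum the product of the two systems' conditionals over every label pair the rule maps to each output label:

```python
from itertools import product

def fuse_conditionals(p1, p2, rule, labels):
    """Class-conditional decision probabilities of the fused system,
    assuming the two systems are independent given the true class:
    P_f(k | i) = sum over label pairs (a, b) with rule(a, b) = k
                 of P1(a | i) * P2(b | i)."""
    fused = {}
    for i in labels:                          # true class
        row = {k: 0.0 for k in labels}
        for a, b in product(labels, labels):  # each system's declared label
            row[rule(a, b)] += p1[i][a] * p2[i][b]
        fused[i] = row
    return fused

labels = ("t", "n")                                # target / non-target
OR = lambda a, b: "t" if "t" in (a, b) else "n"    # Boolean-like OR rule
p1 = {"t": {"t": 0.8, "n": 0.2}, "n": {"t": 0.1, "n": 0.9}}
p2 = {"t": {"t": 0.7, "n": 0.3}, "n": {"t": 0.2, "n": 0.8}}
fused = fuse_conditionals(p1, p2, OR, labels)
print(fused["t"]["t"])   # fused Pd  = 0.8 + 0.7 - 0.56 = 0.94
print(fused["n"]["t"])   # fused Pfa = 0.1 + 0.2 - 0.02 = 0.28
```

With N labels the same loop runs over all N^2 label pairs, and sweeping the two systems' operating parameters traces out the fused ROC manifold.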
The Receiver Operating Characteristic (ROC) curve can be used to quantify the performance of Automatic Target Recognition
(ATR) systems. When multiple classification systems are fused, the assumption of independence is usually made
in order to mathematically combine the individual ROC curves for each of these classification systems into one fused
ROC curve. However, correlation may exist between the classification systems and the outcomes used to generate each
ROC curve. This paper will demonstrate a method for creating a ROC curve of the fused classification systems which
incorporates the correlation that exists between the individual classification systems. Specifically, we will use the derived
covariance between multiple classification systems to compute the existing correlation and thus the level of dependence
between pairs of classification systems. Then, given a fusion rule, two systems, and the correlation between them, the ROC
curve for the fused system is produced. We generate the formula for the Boolean OR and AND rules, giving the resultant
ROC curve for the fused system. This paper extends our previous work in which bounds for the ROC curve of the fused,
correlated classification systems were presented.
An Automatic Target Recognition (ATR) system's performance is quantified using Receiver Operating Characteristic (ROC) curves (or ROC manifolds for more than two labels) and typically the prior probabilities of each labeled event occurring. In real-world problems, one does not know the prior probabilities and must approximate or guess them, although one usually knows their range or distribution. We derive an objective functional that quantifies the robustness of an ATR system given: (1) a set of prior probabilities, and (2) a distribution over a set of prior probabilities. The ATR system may have two labels or more. We demonstrate the utility of this objective functional with examples and show how it can be used to determine the optimal ATR system from a family of systems.
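One simple instance of such an objective functional can be sketched for the two-label case: average the probability of error over a distribution of unknown target priors, and prefer the system with the lower average. The beta prior and the two operating points below are hypothetical, and this Monte Carlo average stands in for whatever closed-form functional the paper derives:

```python
import random

def expected_error(pd, pfa, prior_t):
    """Probability of error for a two-label system at operating point
    (pfa, pd) when the target prior probability is prior_t."""
    return prior_t * (1.0 - pd) + (1.0 - prior_t) * pfa

def robustness(pd, pfa, prior_sampler, n=20000):
    """Illustrative objective functional: mean error over a distribution
    of (unknown) target priors. Lower values indicate more robustness."""
    return sum(expected_error(pd, pfa, prior_sampler()) for _ in range(n)) / n

random.seed(0)
sampler = lambda: random.betavariate(2, 5)   # assumed prior distribution
# Compare two candidate ATR operating points from a family of systems:
r1 = robustness(0.90, 0.10, sampler)
r2 = robustness(0.80, 0.05, sampler)
best = "system 1" if r1 < r2 else "system 2"
print(round(r1, 3), round(r2, 3), best)
```

Note that system 1, with pd = 0.90 and pfa = 0.10, has error exactly 0.10 for every prior, so its robustness score is prior-independent; system 2 trades detection rate for false alarms and wins under this particular prior distribution.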
Significant advances in the performance of ATR systems can be made by fusing individual classification systems into a single combined classification system. Often, these individual systems are dependent, or correlated, with one another. Additionally, these systems typically assume that two outcome labels (for instance, "target" and "non-target") exist. Little is known about the performance of fused classification systems when multiple outcome labels are used. In this paper, we propose a methodology for quantifying the performance of a fused classifier system using multiple labels. Specifically, a performance measure for a fused classification system using two classifiers and multiple labels will be developed. The performance measure is based on the Receiver Operating Characteristic (ROC) curve. The ROC curve in a two-label system has been well defined and used extensively, not only in ATR applications, but also in other engineering and biomedical applications. A ROC manifold is defined and used to incorporate the multiple labels. An example of this performance measure for a given fusion rule and multiple labels is given.
Data fusion as a science has been described in the literature in great detail by many authors, particularly over the last
two decades. These descriptions are, for the vast majority, non-mathematical in nature and have lacked the symbolism
and clarity of mathematical precision. This paper demonstrates a way of describing the science of data fusion using
diagrams and category theory. The description begins using category theory to develop a clear definition of what
fusion is in a mathematical sense. The definitions of fusion rules and fusors show how a notion of "betterness" can
be defined by developing appropriate functionals. Using a simple diagram of a multisensor process, an explanation
is given of how receiver operating characteristic (ROC) curves can provide an appropriate functional for comparing
fusion rules, fusors, and even classifiers. A partial ordering of a finite number of fusors can then be created.
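One concrete way a ROC-based comparison yields only a partial order can be sketched as follows: say one fusor dominates another when its detection rate is at least as high at every common false-alarm rate. The curves below are hypothetical samples on a shared grid, not from any particular system:

```python
def dominates(roc_a, roc_b):
    """Partial order on fusors: roc_a >= roc_b if, at every common
    false-alarm rate, roc_a's detection rate is at least roc_b's.
    ROC curves are given as dicts {pfa: pd} on a shared grid."""
    return all(roc_a[pfa] >= roc_b[pfa] for pfa in roc_b)

roc_or  = {0.1: 0.70, 0.2: 0.85, 0.5: 0.97}
roc_and = {0.1: 0.55, 0.2: 0.75, 0.5: 0.90}
roc_x   = {0.1: 0.60, 0.2: 0.90, 0.5: 0.95}   # crosses roc_or

print(dominates(roc_or, roc_and))  # True: this fusor ranks above the other
print(dominates(roc_or, roc_x))    # False
print(dominates(roc_x, roc_or))    # False: curves cross, so incomparable
```

Because crossing curves are incomparable under this relation, the resulting order on a finite set of fusors is partial rather than total, which matches the claim above; collapsing each curve to a scalar functional such as AUC would instead force a total preorder.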