Sensitivity of fusion performance to classifier model variations (1 April 2003)
During the design of classifier fusion tools, it is important to evaluate the performance of the fuser. In many cases, the output of the classifiers needs to be simulated to provide the range of fusion input that allows an evaluation throughout the design space. One fundamental question is how the output should be distributed, in particular for multi-class continuous-output classifiers. Using the wrong distribution may lead to fusion tools that are either overly optimistic or that otherwise distort the outcome. Either case may lead to a fuser that performs sub-optimally in practice. It is therefore imperative to establish the bounds of different classifier output distributions. In addition, one must take into account a design space that may be of considerable complexity. Exhaustively simulating the entire design space may be a lengthy undertaking; therefore, the simulation has to be guided to populate the relevant areas of the design space. Finally, it is crucial to quantify performance throughout the design of the fuser. This paper addresses these issues by introducing a simulator that allows the evaluation of different classifier distributions in combination with a design-of-experiments setup and a built-in performance evaluation. We show results from an application of diagnostic decision fusion on aircraft engines.
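The abstract does not show the paper's simulator, but the core idea of generating multi-class continuous classifier outputs under an assumed distribution and evaluating a fuser on them can be sketched minimally. The sketch below is hypothetical: it assumes Dirichlet-distributed class-confidence vectors (one assumption among the distributions the paper compares) biased toward the true class, and a simple averaging fuser; the function names and the concentration parameters are illustrative, not from the paper.

```python
import numpy as np

def simulate_classifier_outputs(n_samples, n_classes, true_class, concentration, rng):
    """Simulate multi-class continuous classifier outputs.

    Assumption (for illustration only): outputs are Dirichlet-distributed
    confidence vectors, with extra concentration on the true class so the
    simulated classifier is better than chance.
    """
    alpha = np.ones(n_classes)
    alpha[true_class] = concentration
    return rng.dirichlet(alpha, size=n_samples)  # rows sum to 1

def fuse_average(outputs_list):
    """A simple averaging fuser: mean of the classifiers' confidence vectors."""
    return np.mean(outputs_list, axis=0)

def accuracy(confidences, true_class):
    """Fraction of samples whose top-scoring class is the true class."""
    return float(np.mean(np.argmax(confidences, axis=1) == true_class))

rng = np.random.default_rng(0)
# Two simulated classifiers with different (assumed) output sharpness.
clf_outputs = [
    simulate_classifier_outputs(1000, 3, true_class=0, concentration=c, rng=rng)
    for c in (3.0, 5.0)
]
fused = fuse_average(clf_outputs)
print(accuracy(fused, true_class=0))
```

Varying the assumed distribution family (e.g., Dirichlet vs. clipped Gaussian confidences) in such a harness is one way to probe how sensitive the fuser's measured performance is to the choice of output distribution, which is the question the paper raises.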
© 2003 Society of Photo-Optical Instrumentation Engineers (SPIE).
Kai F. Goebel, "Sensitivity of fusion performance to classifier model variations", Proc. SPIE 5099, Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2003 (1 April 2003); https://doi.org/10.1117/12.487284
