3 April 2000 Expanding context against weighted voting of classifiers
In this paper we propose a new method for integrating the predictions of multiple classifiers in data mining and machine learning tasks. The method assumes that each classifier operates in its own context and that the contexts are partially ordered. The order is defined by a monotonic quality function that maps each context to a value in the interval [0,1]. A classifier whose context has higher quality is expected to predict better than one whose context has lower quality. The objective is to generate the opinion of a `virtual' classifier that operates in a context with quality equal to 1; owing to that best context, the virtual classifier should achieve the best prediction accuracy. To obtain it, we build a regression in which each prediction is weighted by the quality of the corresponding classifier's context; evaluating this regression at the point 1 yields the virtual classifier's opinion. Experiments on vowel recognition tasks demonstrate the validity of the approach.
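The quality-weighted regression described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scalar predictions, a linear regression of prediction against context quality, and point weights equal to the qualities themselves; the function name and example numbers are hypothetical.

```python
import numpy as np

def virtual_classifier_opinion(qualities, predictions):
    """Fit a quality-weighted linear regression of classifier
    predictions against their context qualities, then evaluate
    it at quality = 1 (the hypothetical best context)."""
    q = np.asarray(qualities, dtype=float)
    y = np.asarray(predictions, dtype=float)
    # Each point is weighted by its context quality, so classifiers
    # in better contexts pull the fit more strongly.
    coeffs = np.polyfit(q, y, deg=1, w=q)
    # The regression's value at quality 1 is the "virtual" opinion.
    return np.polyval(coeffs, 1.0)

# Three classifiers whose contexts have increasing quality:
opinion = virtual_classifier_opinion([0.3, 0.6, 0.9], [0.40, 0.55, 0.70])
print(opinion)  # extrapolates the trend to the quality-1 context
```

Here the three (quality, prediction) points lie on a line, so the weighted fit simply extrapolates that line to quality 1, giving 0.75.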
© (2000) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Vagan Terziyan, Boris Omelayenko, and Seppo Jumani Puuronen "Expanding context against weighted voting of classifiers", Proc. SPIE 4051, Sensor Fusion: Architectures, Algorithms, and Applications IV, (3 April 2000);