Paper
12 April 2004
Techniques for evaluating classifiers in application
Amy L. Magnus and Mark E. Oxley
Abstract
In gauging the generalization capability of a classifier, a good evaluation technique should adhere to certain principles. First, the technique should evaluate a specific classifier, not merely an architecture. Second, a solution should be assessable both at the classifier's design and throughout its application. Additionally, the technique should be insensitive to data presentation and should cover a significant portion of the classifier's domain. These principles call for methods beyond supervised learning and statistical training techniques such as cross validation. In this paper, we discuss the evaluation of generalization in application. As an illustration, we present a method for the multilayer perceptron (MLP) that can be derived from the unlabeled data collected during a classifier's operational use. These conclusions support self-supervised learning and computational methods that isolate unstable, nonrepresentational regions in the classifier.
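The sketch below is not the authors' method; the abstract does not specify one. It is a minimal illustration, assuming a scikit-learn MLP and synthetic data, of the general idea the abstract describes: using unlabeled inputs gathered in operational use to probe where a trained classifier behaves unstably. Here instability is approximated by how often a point's predicted class flips under small input perturbations; the instability function, the perturbation scale eps, and the 0.3 threshold are all hypothetical choices for illustration only.

# Hedged sketch, not the paper's algorithm: flag unlabeled operational
# inputs whose MLP prediction is unstable under small perturbations.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Labeled design-time data (synthetic, purely for illustration).
X_design = rng.normal(size=(500, 2))
y_design = (X_design[:, 0] + X_design[:, 1] > 0).astype(int)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
mlp.fit(X_design, y_design)

# Unlabeled data collected "in application" (no labels available).
X_operational = rng.normal(loc=0.5, scale=1.5, size=(200, 2))

def instability(model, X, eps=0.05, n_probes=20):
    """Fraction of perturbed copies of each point whose predicted class
    differs from the prediction at the point itself."""
    base = model.predict(X)
    flips = np.zeros(len(X))
    for _ in range(n_probes):
        noisy = X + rng.normal(scale=eps, size=X.shape)
        flips += (model.predict(noisy) != base)
    return flips / n_probes

scores = instability(mlp, X_operational)
unstable = X_operational[scores > 0.3]  # threshold chosen arbitrarily
print(f"{len(unstable)} of {len(X_operational)} operational points look unstable")

Note that no labels for the operational data are used: the score depends only on the trained model and the unlabeled inputs, which is the setting the abstract targets.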
© (2004) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Amy L. Magnus and Mark E. Oxley "Techniques for evaluating classifiers in application", Proc. SPIE 5421, Intelligent Computing: Theory and Applications II, (12 April 2004); https://doi.org/10.1117/12.542596
KEYWORDS
Machine learning
