Statistical learning from nonrecurrent experience with discrete input variables and recursive-error-minimization equations
1 August 1990
Abstract
Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by one to two orders of magnitude relative to standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these effects.

1. THE I-4I PROBLEM

The statistical classification problem we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the I-4I problem. Both classes have equal probability of occurrence, and samples from either class may appear anywhere in the d-dimensional space. Most samples near the origin of the coordinate system are from the first class, while most samples far from the origin are from the second class. Since the two classes completely overlap, it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
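The I-4I setup described above can be made concrete with a short sketch. Both classes are zero-mean Gaussians (covariances I and 4I), so the Bayes-optimal rule reduces to a threshold on the squared distance from the origin: deciding class 1 when ||x||^2 < 8 d ln 2 / 3, which follows from equating the two densities. The sketch below (not from the paper; the dimensionality d = 8 and sample count are illustrative choices) samples both classes and estimates the Bayes error empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8        # dimensionality (illustrative choice, not from the paper)
n = 20000    # samples drawn per class

# Class 1 ~ N(0, I); class 2 ~ N(0, 4I). Equal priors, full overlap.
x1 = rng.standard_normal((n, d))
x2 = 2.0 * rng.standard_normal((n, d))

# Equating the two Gaussian densities gives the Bayes rule:
# decide class 1 iff ||x||^2 < 8 * d * ln(2) / 3.
threshold = 8.0 * d * np.log(2.0) / 3.0

err1 = np.mean(np.sum(x1**2, axis=1) >= threshold)  # class-1 points misassigned
err2 = np.mean(np.sum(x2**2, axis=1) < threshold)   # class-2 points misassigned
bayes_err_estimate = 0.5 * (err1 + err2)
print(f"estimated Bayes error for d={d}: {bayes_err_estimate:.3f}")
```

Because the classes overlap everywhere, this estimate is strictly positive no matter how large n grows; it is the floor against which any trained classifier on this problem is measured.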
© 1990 Society of Photo-Optical Instrumentation Engineers (SPIE).
Jeffrey R. Carter, Wayne E. Simon, "Statistical learning from nonrecurrent experience with discrete input variables and recursive-error-minimization equations", Proc. SPIE 1294, Applications of Artificial Neural Networks (1 August 1990); https://doi.org/10.1117/12.21211