Probabilistic neural network with reflected kernels (1 September 1993)
George W. Rogers, Carey E. Priebe, Jeffrey L. Solka
Abstract
Probabilistic neural networks (PNNs) build internal density representations based on the kernel (Parzen) estimator and apply Bayesian decision theory to construct arbitrarily complex decision boundaries. As with the classical kernel estimator, training is performed in a single pass over the data and asymptotic convergence is guaranteed. Asymptotic convergence, while necessary, says little about finite-sample estimation errors, which can be quite large. One problem arising with either the kernel estimator or the PNN occurs when one or more of the densities being estimated has a discontinuity, for example at the boundary of its support. This commonly leads to an expected L∞ error in the estimated pdf on the order of the size of the discontinuity, which can in turn lead to significant classification errors. By using the method of reflected kernels, we have developed a PNN model that does not suffer from this problem. The theory of reflected-kernel PNNs, along with their relation to reflected-kernel Parzen estimators, is presented together with finite-sample examples.
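The reflection idea is simple to illustrate for the univariate case: each kernel placed on a sample near a known support boundary is accompanied by a mirror-image kernel reflected about that boundary, so the probability mass that a plain kernel would leak past the boundary is folded back in. The sketch below is a minimal illustration rather than the authors' implementation; it assumes a Gaussian kernel, a single known boundary at x = 0, and hypothetical function names (parzen_estimate, reflected_parzen_estimate, pnn_classify).

```python
import numpy as np

def parzen_estimate(x, samples, bandwidth):
    # Standard Parzen / kernel density estimate with a Gaussian kernel.
    u = (np.asarray(x)[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (
        samples.size * bandwidth * np.sqrt(2.0 * np.pi))

def reflected_parzen_estimate(x, samples, bandwidth, boundary=0.0):
    # Reflected-kernel estimate for a density supported on [boundary, inf):
    # each sample also contributes the kernel of its mirror image about the
    # boundary, so the estimate no longer drops toward half the true density
    # value at the discontinuity.
    mirrored = 2.0 * boundary - samples
    est = (parzen_estimate(x, samples, bandwidth)
           + parzen_estimate(x, mirrored, bandwidth))
    return np.where(np.asarray(x) >= boundary, est, 0.0)

def pnn_classify(x, class_samples, priors, bandwidth, boundary=0.0):
    # Bayes decision rule over reflected-kernel class densities:
    # assign each point to the class maximizing prior * estimated density.
    scores = np.stack([p * reflected_parzen_estimate(x, s, bandwidth, boundary)
                       for s, p in zip(class_samples, priors)])
    return scores.argmax(axis=0)

# Example: two classes on [0, inf), both with a density discontinuity at 0.
rng = np.random.default_rng(0)
c0 = rng.exponential(scale=1.0, size=300)  # class 0 training samples
c1 = rng.exponential(scale=3.0, size=300)  # class 1 training samples
grid = np.linspace(0.0, 6.0, 200)
labels = pnn_classify(grid, [c0, c1], priors=[0.5, 0.5], bandwidth=0.3)
```

Near the boundary, the plain estimate converges to roughly half the true density value, an error on the order of the discontinuity itself; the reflected estimate avoids this boundary bias, which is the effect the abstract describes.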
© (1993) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
George W. Rogers, Carey E. Priebe, Jeffrey L. Solka, "Probabilistic neural network with reflected kernels", Proc. SPIE 1962, Adaptive and Learning Systems II, (1 September 1993); https://doi.org/10.1117/12.150591
Proceedings paper, 11 pages

