1 November 1998 Conjugate gradient and approximate Newton methods for an optimal probabilistic neural network for food color classification
Abstract
The probabilistic neural network (PNN) is based on the estimation of probability density functions. The estimation of these density functions uses smoothing parameters that represent the widths of the activation functions. A two-step numerical procedure is developed for optimizing the smoothing parameters of the PNN: a rough optimization by the conjugate gradient method and a fine optimization by the approximate Newton method. The objective is to compare the classification performance of the improved PNN with that of the standard back-propagation neural network (BPNN). Comparisons are performed on a food quality problem: French fry classification into three color classes (light, normal, and dark). The optimized PNN correctly classifies 96.19% of the test data, whereas the BPNN classifies only 93.27% of the same data. Moreover, the PNN is more stable than the BPNN with respect to random initialization. However, the optimized PNN requires 1464 s for training, compared with only 71 s for the BPNN.
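The abstract does not give implementation details, but the two-step procedure it describes can be illustrated in outline. The following is a minimal Python sketch, assuming Gaussian Parzen-window kernels, one smoothing parameter per class, a synthetic three-class data set standing in for the fry color features, and SciPy's "CG" and "BFGS" optimizers (BFGS serving here as a quasi-Newton stand-in for the paper's approximate Newton step); none of these specific choices are taken from the paper itself.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy three-class data standing in for the light/normal/dark color
# features (hypothetical; the paper's actual feature set is not given here).
X_train = np.vstack([rng.normal(m, 0.5, size=(30, 2)) for m in (0.0, 2.0, 4.0)])
y_train = np.repeat([0, 1, 2], 30)
X_val = np.vstack([rng.normal(m, 0.5, size=(10, 2)) for m in (0.0, 2.0, 4.0)])
y_val = np.repeat([0, 1, 2], 10)

def class_scores(X, log_sigmas):
    """Gaussian Parzen-window density estimate per class; one smoothing
    parameter sigma per class (an assumption made for this sketch)."""
    sigmas = np.exp(log_sigmas)  # parameterize in log space to keep sigma > 0
    scores = np.empty((X.shape[0], 3))
    for c in range(3):
        P = X_train[y_train == c]                              # class-c patterns
        d2 = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1)
        scores[:, c] = np.exp(-d2 / (2.0 * sigmas[c] ** 2)).mean(axis=1)
    return scores

def objective(log_sigmas):
    """Negative log-likelihood of the validation labels under the PNN."""
    s = class_scores(X_val, log_sigmas)
    p = s[np.arange(len(y_val)), y_val] / s.sum(axis=1).clip(1e-300)
    return -np.log(p.clip(1e-300)).sum()

# Step 1: rough optimization of the smoothing parameters by conjugate gradient.
rough = minimize(objective, x0=np.zeros(3), method="CG", options={"maxiter": 20})

# Step 2: fine optimization by a quasi-Newton method, starting from the
# conjugate-gradient result (BFGS stands in for the approximate Newton step).
fine = minimize(objective, x0=rough.x, method="BFGS")

sigmas = np.exp(fine.x)
acc = (class_scores(X_val, fine.x).argmax(axis=1) == y_val).mean()
print(f"optimized sigmas: {sigmas}, validation accuracy: {acc:.2%}")
```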
Younes Chtioui, Suranjan Panigrahi, and Ronald A. Marsh, "Conjugate gradient and approximate Newton methods for an optimal probabilistic neural network for food color classification," Optical Engineering 37(11) (1 November 1998). https://doi.org/10.1117/1.601972