Massively parallel implementation of neural network architectures
1 June 1991
Abstract
In recent years neural networks have been used to solve some of the difficult real-time character recognition problems. These SIMD implementations of the networks have achieved some success, but the real potential of neural networks is yet to be realized. Several well-known neural network architectures have been modified, implemented, and applied to character recognition, and the performance of the resulting parallel character recognition systems is compared and contrasted. Feature localization and noise reduction are achieved using least-squares-optimized Gabor filtering. The filtered images are then presented to a FAUST-based learning algorithm, which produces the self-organizing sets of neural-network-generated features used for character recognition. Implementation of these algorithms on a highly parallel computer with 1024 processors allows high-speed character recognition at 2.3 ms/image, with greater than 99% accuracy on machine print and 89% accuracy on unconstrained hand-printed characters. These results are achieved using identical parallel processor programs, demonstrating that the method is truly font independent. A back-propagation network is included to allow comparison with more conventional neural network character recognition methods. This network has one hidden layer with multiple concurrent feedback paths from the output layer to the hidden layer and from the hidden layer to the input layer. This concurrent feedback and weight adjustment is only possible on a SIMD computer.
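The abstract does not give the Gabor filter parameters or the least-squares optimization procedure, so the following Python sketch only illustrates the general form of a Gabor filter bank used for feature localization and noise reduction. The kernel size, orientations, wavelength, and the 32x32 image size are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, sigma, theta, wavelength, gamma=0.5, psi=0.0):
    """Build a real-valued Gabor kernel of shape (size, size)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the filter orientation theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

def filter_image(image, kernels):
    """Convolve an image with each kernel; return the stacked responses."""
    return np.stack([convolve2d(image, k, mode="same", boundary="symm")
                     for k in kernels])

# Example: a small bank of four orientations applied to a 32x32 character image.
rng = np.random.default_rng(0)
char_image = rng.random((32, 32))              # placeholder for a binarized character
bank = [gabor_kernel(size=9, sigma=2.5, theta=t, wavelength=6.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
responses = filter_image(char_image, bank)      # shape (4, 32, 32)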
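For the conventional back-propagation baseline mentioned above, a minimal one-hidden-layer network might look like the sketch below. It does not model the concurrent output-to-hidden and hidden-to-input feedback or the SIMD weight-update scheme described in the abstract; the layer sizes, learning rate, and squared-error loss are assumptions for illustration only.

import numpy as np

class OneHiddenLayerNet:
    """Conventional one-hidden-layer back-propagation network (baseline sketch)."""

    def __init__(self, n_in=1024, n_hidden=64, n_out=10, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.lr = lr

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, x):
        self.h = self._sigmoid(x @ self.W1)        # hidden activations
        self.y = self._sigmoid(self.h @ self.W2)   # output activations
        return self.y

    def backward(self, x, target):
        # Standard delta rule for a sigmoid network with squared-error loss.
        delta_out = (self.y - target) * self.y * (1.0 - self.y)
        delta_hid = (delta_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * np.outer(self.h, delta_out)
        self.W1 -= self.lr * np.outer(x, delta_hid)

# Example: one training step on a flattened 32x32 character image.
net = OneHiddenLayerNet()
x = np.random.default_rng(1).random(1024)
t = np.zeros(10)
t[3] = 1.0                                         # one-hot target class
net.forward(x)
net.backward(x, t)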
Omid M. Omidvar, Charles L. Wilson, "Massively parallel implementation of neural network architectures", Proc. SPIE 1452, Image Processing Algorithms and Techniques II (1 June 1991); https://doi.org/10.1117/12.45412