High-resolution synaptic weights and hardware-in-the-loop learning
28 March 1995
Taher Daud, Tuan A. Duong, Mua D. Tran, Harry Langenbacher, Anilkumar P. Thakoor
Proceedings Volume 2424, Nonlinear Image Processing VI; (1995) https://doi.org/10.1117/12.205250
Event: IS&T/SPIE's Symposium on Electronic Imaging: Science and Technology, 1995, San Jose, CA, United States
Abstract
Artificial neural network paradigms are derived from the biological nervous system and are characterized by massive parallelism. These networks have demonstrated the ability to perform input-output mapping operations even when the transformation rules are unknown, partially known, or ill-defined. For high-speed processing, we have fabricated neural network architectures as building-block chips containing either a 32 × 32 matrix of synapses or a 32 × 31 array of synapses with 32 neurons along the diagonal, completing a 32 × 32 matrix. Reconfigurability permits a variety of architectures, from fully recurrent to fully feedforward, including constructive architectures such as cascade correlation. Further, a variety of gradient-descent learning algorithms have been implemented. Because the chips are cascadable, larger networks are easily assembled. An innovative scheme that combines two identical synapses on two respective chips in parallel nominally doubles the bit resolution from 7 bits (6-bit + sign) to 13 bits (12-bit + sign). We describe a feedforward network, assembled from eight chips on a board with nominally 13 bits of resolution, used for hardware-in-the-loop learning of a feature-classification problem involving map data. This neural network hardware, with 27 analog inputs and 7 outputs, learns to classify the features and produce the required output map at high speed with 89% accuracy. Despite the hardware's lower weight precision, this result compares favorably with the 92% accuracy obtained both by a neural network software simulation (with floating-point synaptic weights) and by the statistical technique of k-nearest neighbors.
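The coarse/fine weight-combination idea can be illustrated with a minimal numerical sketch. The abstract states only that two identical 7-bit (6-bit + sign) synapses combined in parallel yield a nominal 13-bit (12-bit + sign) weight; the scale factor of 2^6 and the function name below are illustrative assumptions consistent with that bit arithmetic, not details taken from the paper.

```python
# Hedged sketch: one synapse is assumed to contribute the high-order
# (coarse) bits and the other the low-order (fine) bits, giving an
# effective 12-bit-magnitude + sign weight from two 6-bit + sign cells.

COARSE_SCALE = 64  # 2**6: assumed scaling of the coarse synapse's current

def combined_weight(w_coarse: int, w_fine: int) -> int:
    """Effective weight of a coarse/fine synapse pair.

    Each input is an integer in [-63, 63] (6-bit magnitude + sign).
    """
    for w in (w_coarse, w_fine):
        assert -63 <= w <= 63, "each synapse holds 6-bit magnitude + sign"
    return COARSE_SCALE * w_coarse + w_fine

# Full scale: 64*63 + 63 = 4095, i.e. 12-bit magnitude + sign (13 bits).
print(combined_weight(63, 63))    # 4095
print(combined_weight(-63, -63))  # -4095
```

With both synapses at full scale the combined weight spans ±4095, which matches the nominal 13-bit (12-bit + sign) resolution quoted in the abstract.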
© 1995 Society of Photo-Optical Instrumentation Engineers (SPIE).
Taher Daud, Tuan A. Duong, Mua D. Tran, Harry Langenbacher, and Anilkumar P. Thakoor "High-resolution synaptic weights and hardware-in-the-loop learning", Proc. SPIE 2424, Nonlinear Image Processing VI, (28 March 1995); https://doi.org/10.1117/12.205250
CITATIONS
Cited by 10 scholarly publications.
KEYWORDS
Neurons
Neural networks
Analog electronics
Composites
Mirrors
Computing systems
Computer simulations
