30 October 1992 Modified TAG neural network for large-scale optical implementation
Proceedings Volume 1812, Optical Computing and Neural Networks; (1992) https://doi.org/10.1117/12.131195
Event: International Symposium on Optoelectronics in Computers, Communications, and Control, 1992, Hsinchu, Taiwan
Abstract
The training by adaptive gain (TAG) neural network model, developed for optical implementation of large-scale artificial neural networks, is further extended for better performance, and its feasibility is demonstrated by a small-scale electro-optic implementation. For fully interconnected single-layer neural networks with N input and M output neurons, the modified TAG model contains two different types of interconnections: MN fixed global interconnections and βN + M adaptive local interconnections. In the original TAG model the parameter β was set to 1, and the adaptive local interconnections were interpreted as adaptive gains. For 2-dimensional input and output patterns, the fixed global interconnections may be realized with page-oriented holograms, and the adaptive local interconnections with spatial light modulators. The original and modified TAG models require far fewer adaptive elements than the popular perceptron model with fully adaptive global interconnections, and thus make large-scale artificial neural networks implementable with some sacrifice in performance. The training algorithm is based on gradient descent and error back-propagation, and is easily extensible to multilayer architectures. Computer simulation and an electro-optic implementation demonstrate much better performance of the modified TAG model than of the original TAG model.
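The parameter counts stated in the abstract can be made concrete with a short sketch. The function name and the example sizes below are illustrative assumptions, not from the paper; only the formulas MN (fully adaptive perceptron) and βN + M (TAG adaptive elements, alongside MN fixed ones) come from the abstract.

```python
def adaptive_counts(n_inputs, m_outputs, beta=1):
    """Number of *adaptive* interconnections for a fully interconnected
    single-layer network with n_inputs (N) inputs and m_outputs (M) outputs.

    - Fully adaptive perceptron: M*N adaptive global interconnections.
    - (Modified) TAG model: beta*N + M adaptive local interconnections,
      plus M*N fixed global interconnections (e.g. a page-oriented
      hologram), which need no adaptive hardware.
    The original TAG model corresponds to beta = 1.
    """
    perceptron = m_outputs * n_inputs
    tag = beta * n_inputs + m_outputs
    return perceptron, tag

# Hypothetical example sizes: for N = M = 1024 and beta = 4, the TAG
# model needs 5120 adaptive elements instead of 1048576.
p, t = adaptive_counts(1024, 1024, beta=4)
```

The gap grows with network size: the adaptive-element count scales linearly in N and M for TAG but quadratically for the fully adaptive perceptron, which is the abstract's argument for large-scale optical implementation.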
© (1992) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Soo-Young Lee, Hyeuk-Jae Lee, Sang-Yung Shin, "Modified TAG neural network for large-scale optical implementation", Proc. SPIE 1812, Optical Computing and Neural Networks, (30 October 1992); doi: 10.1117/12.131195; https://doi.org/10.1117/12.131195
Proceedings paper, 5 pages.