Neural network correspondencies of engineering principles (30 March 2000)
Applications of neural networks have been reported widely in recent years, but research on reliable guidelines for designing neural networks is still in its infancy. This work provides some ideas on how to find useful predefined network structures for at least certain parts of a neural net. By partially opening up the so-called black-box character of the neural net, the performance of the networks can be improved, and at the same time the solutions of the nets become more transparent and understandable. Additionally, the ability of the neural nets to generalize from training patterns to unlearned data regions is improved substantially. In this work, two commonly used engineering principles, the technique of dimensional analysis and the Laplace transformation, are used to identify suitable topologies for neural networks. The integration of dimensional analysis in the context of feed-forward neural networks is presented. In the second part of this work, the use of the Laplace transformation in neural networks is demonstrated. Although so far the application of this technique has only been shown for a linear time-invariant process, a future use of this method in nonlinear systems is considered.
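The idea of building dimensional analysis into a feed-forward network can be sketched as a fixed, non-trainable preprocessing layer that collapses the dimensional inputs into dimensionless groups (Buckingham-pi theorem) before the trainable layers. The following minimal sketch is an illustrative assumption, not the authors' implementation; the example problem (pipe flow characterized by a Reynolds number) and all variable names are hypothetical.

```python
import math

def dimensionless_inputs(v, L, rho, mu):
    """Fixed 'dimensional analysis layer' (assumed design): collapse
    four dimensional quantities (velocity, length, density, viscosity)
    into one dimensionless group, the Reynolds number."""
    return [rho * v * L / mu]

def feed_forward(x, w1, b1, w2, b2):
    """Minimal one-hidden-layer feed-forward net with tanh activation,
    operating only on the dimensionless inputs."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return sum(wi * hi for wi, hi in zip(w2, h)) + b2

# Two physically different but dynamically similar flows map to the
# same dimensionless input, so the network treats them identically
# by construction -- the kind of built-in generalization the abstract
# describes.
x_a = dimensionless_inputs(v=2.0, L=0.1, rho=1000.0, mu=1e-3)
x_b = dimensionless_inputs(v=1.0, L=0.2, rho=1000.0, mu=1e-3)
assert x_a == x_b  # identical Reynolds number -> identical input
```

In this sketch the dimensionless layer carries the known physics, so only the layers after it need to be learned, which both shrinks the training problem and makes the network's input representation physically interpretable.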
© (2000) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Georg Schneider, Detlef Korte, and Stephan Rudolph "Neural network correspondencies of engineering principles", Proc. SPIE 4055, Applications and Science of Computational Intelligence III, (30 March 2000);