Implementation issues that arise in mapping neural network formulations onto available computer architectures are unique. In conventional signal and image processing, for example, many of the frequently performed mathematical operations, such as convolutions and transformations, are matrix-vector operations. The computational requirements of these algorithms are met by parallel hardware architectures tailored for efficient handling of vector data. Many neural network systems, however, require more than simply parallelizing the computation. The implementation of neural network architectures such as Grossberg's Boundary Contour System involves the solution of hundreds of coupled nonlinear ordinary differential equations. A neurocomputer whose basic computational unit performs integration, rather than the usual arithmetic operations, would be ideal for these systems. Similarly, the training of the back-propagation network can be expressed as the problem of solving a system of stiff, coupled ordinary differential equations for the weights, which are treated as continuous functions of time. The ability to solve differential equations efficiently on digital and parallel computers is therefore quite important for the implementation of artificial neural networks.

The central idea of the mapping described in this paper is the replacement of the differentiation operators and functions in a given equation with so-called differentiation matrices and function matrices, respectively. With the help of a so-called projection matrix, a differential equation is transformed into a rectangular vector-matrix equation, which can then be solved on a systolic processor. The algorithm is computationally efficient and enables one to compute numerically, and separately, the general solution of the homogeneous part and a particular solution for any specified forcing function.
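To illustrate the differentiation-matrix idea in its simplest form, the sketch below (not the paper's exact algorithm; grid size, problem coefficients, and the backward-difference stencil are assumptions for illustration) replaces the operator d/dt with a matrix D acting on samples of y, so the scalar ODE y'(t) + a y(t) = f(t), y(0) = y0, becomes a single linear system that an ordinary solver, or in principle a systolic array, can handle.

```python
import numpy as np

n = 200                        # number of grid points (illustrative choice)
a, y0 = 1.0, 1.0               # decay rate and initial value (assumed example)
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]

# First-order backward-difference differentiation matrix:
# row i computes (y[i] - y[i-1]) / h.
D = (np.eye(n) - np.eye(n, k=-1)) / h

f = np.zeros(n)                # homogeneous case: zero forcing

# The ODE becomes the matrix equation (D + a*I) y = f; the first row
# is overwritten to impose the initial condition y(0) = y0.
A = D + a * np.eye(n)
A[0, :] = 0.0
A[0, 0] = 1.0
f[0] = y0

y = np.linalg.solve(A, f)

# The exact homogeneous solution is y0 * exp(-a*t); the discrete
# solution should agree to O(h).
err = np.max(np.abs(y - y0 * np.exp(-a * t)))
print(err < 0.05)
```

A particular solution for a nonzero forcing function is obtained from the same matrix A by simply substituting the sampled forcing into f, which is the separation between homogeneous and particular solutions that the text refers to.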