A feedforward network is composed of neurons arranged in layers, as illustrated by Fig. 1.10 in the introduction. Data enter the system through an input layer, are processed in one or more intermediate (hidden) layers, and emerge from the network’s final (output) layer. The transfer function within each neuron can be almost any function. In this appendix, we describe the mathematics behind the feedforward neural network and the backpropagation algorithm that is commonly used to train feedforward networks with sigmoidal transfer functions. We also mention some alternatives to backpropagation for training feedforward networks.
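To make the structure concrete, the following is a minimal sketch in pure Python of a feedforward network with one hidden layer and sigmoidal transfer functions, trained by backpropagation with a squared-error loss. The class and method names (`FeedforwardNet`, `train_step`), the learning rate, and the layer sizes are illustrative assumptions, not taken from the text.

```python
import math
import random

def sigmoid(x):
    # Sigmoidal transfer function used in every neuron of this sketch.
    return 1.0 / (1.0 + math.exp(-x))

class FeedforwardNet:
    """Input layer -> one hidden layer -> output layer, all sigmoidal."""
    def __init__(self, n_in, n_hid, n_out, seed=0):
        rng = random.Random(seed)
        # w1[j]: weights from each input to hidden neuron j; last entry is the bias.
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
        # w2[k]: weights from each hidden neuron to output neuron k; last entry is the bias.
        self.w2 = [[rng.uniform(-1, 1) for _ in range(n_hid + 1)] for _ in range(n_out)]

    def forward(self, x):
        # Propagate a pattern through the hidden layer, then the output layer.
        h = [sigmoid(sum(w[i] * xi for i, xi in enumerate(x)) + w[-1]) for w in self.w1]
        y = [sigmoid(sum(w[j] * hj for j, hj in enumerate(h)) + w[-1]) for w in self.w2]
        return h, y

    def train_step(self, x, target, lr=1.0):
        # One backpropagation update for a single pattern; returns the squared error.
        h, y = self.forward(x)
        # Output deltas: dE/dnet = (y - t) * y * (1 - y) for squared error with sigmoid.
        d_out = [(yk - tk) * yk * (1 - yk) for yk, tk in zip(y, target)]
        # Hidden deltas: propagate the output deltas back through the outgoing weights.
        d_hid = [hj * (1 - hj) * sum(d_out[k] * self.w2[k][j] for k in range(len(d_out)))
                 for j, hj in enumerate(h)]
        for k, w in enumerate(self.w2):
            for j, hj in enumerate(h):
                w[j] -= lr * d_out[k] * hj
            w[-1] -= lr * d_out[k]
        for j, w in enumerate(self.w1):
            for i, xi in enumerate(x):
                w[i] -= lr * d_hid[j] * xi
            w[-1] -= lr * d_hid[j]
        return sum((yk - tk) ** 2 for yk, tk in zip(y, target))
```

As a usage example, repeatedly calling `train_step` over the four XOR patterns drives the total squared error down from its initial random-weight value.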
For the mathematical derivations in this appendix, we use the notation given in Table A.1, which follows the terminology found in many sources that deal with backpropagation. Superscripts indicate the layer index; subscripts index neurons and patterns (data samples).