Neural Networks are part of a revived technology which has received a lot of hype in recent years.
As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application
difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and non-contrived
problems. For example, one net has been trained to "read", translating English text into phoneme
sequences. Other applications of Neural Networks include database manipulation and the solving of routing
and classification types of optimization problems.
Neural Networks are constructed from neurons, which in electronics or software attempt to model, but
are not constrained by, the real thing, i.e., the neurons in our gray matter. Neurons are simple processing units
connected to many other neurons over pathways that modify the incoming signals. A single synthetic neuron
typically sums its weighted inputs, runs this sum through a non-linear function, and produces an output. In
the brain, neurons are connected in a complex topology; in hardware and software the topology is typically much
simpler, with neurons lying side by side in layers, each layer feeding its outputs to the layer that
receives them. This simplified model is much easier to construct than the real thing, and yet it can
solve real problems.
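The synthetic neuron just described -- a weighted sum pushed through a non-linear function -- can be sketched in a few lines of Python. This is only an illustration of the idea, not code from the text; the sigmoid is one common choice of non-linear function, and the function names are ours:

```python
import math

def neuron(inputs, weights, bias=0.0):
    # Sum the weighted inputs, then squash through a non-linear
    # function (here a sigmoid, which maps any sum into (0, 1)).
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weight_rows):
    # A layer is just a row of neurons, each seeing the same inputs
    # but applying its own weights; its outputs feed the next layer.
    return [neuron(inputs, w) for w in weight_rows]

# Two-layer feed-forward arrangement, side-by-side neurons in layers:
hidden = layer([1.0, 0.0], [[0.5, -0.4], [0.3, 0.8]])
output = layer(hidden, [[1.2, -0.7]])
```

With all weights at zero the weighted sum is zero and the sigmoid returns 0.5, the midpoint of its range.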
The information in a network, or its "memory", is completely contained in the weights on the
connections from one neuron to another. Establishing these weights is called "training" the network. Some
networks are trained by design -- once constructed, no further learning takes place. Other types of networks
require iterative training once wired up, but are not trainable once taught. Still other types of networks can
continue to learn after initial construction.
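To make the idea of iterative training concrete, here is a minimal sketch using the classic perceptron learning rule (our choice of example; the text does not name a particular training algorithm). Weights are nudged whenever the network's output disagrees with the desired target, so the "memory" gradually settles into the weights; the learning rate and epoch count are illustrative values:

```python
def train_perceptron(samples, epochs=20, rate=0.1):
    # samples: list of (inputs, target) pairs with targets 0 or 1.
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Hard-threshold output: fire (1) if the weighted sum is positive.
            out = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
            # Adjust each weight in proportion to its input and the error.
            err = target - out
            weights = [w + rate * err * x for w, x in zip(weights, inputs)]
            bias += rate * err
    return weights, bias

# Teach the network the logical AND function from examples.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
```

After training, the learned weights alone reproduce the AND behavior -- the training data is no longer consulted, only the weights, which is the sense in which the weights are the network's memory.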
The main benefit of using Neural Networks is their ability to work with conflicting or incomplete
("fuzzy") data sets. This ability and its usefulness will become evident in the following discussion.