**Publications** (53)


Given M input pattern vectors U_m, m=1 to M, the output P-bit binary classification vectors V_m, m=1 to M, of a one-layered, feed-forward neural network (OLNN) can be represented geometrically by P dichotomizing hyper-planes going through the origin of the N-dimensional Euclidean coordinate system in the N-space. In general, these P planes divide the N-space into 2^P hyper-cones. Each cone contains one U_m, and each cone corresponds to one V_m. Learning of the OLNN is then equivalent to establishing these P planes geometrically in the N-space such that, after the learning, if a test pattern vector T, not necessarily equal to any class pattern U_m, falls into the m-th cone (m=1 to 2^P) established by these P planes, this T will also be recognized as V_m. The robustness of this recognition is thus equivalent to the geometrically allowed variation range of U_m in the m-th cone. This allowable range can be systematically adjusted for each cone during the learning process. This paper reports the optimum method of adjusting these variation ranges such that any unknown T containing environmental noise not included in the training can still be recognized with maximum accuracy.
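The geometry described in this abstract can be sketched in a few lines of Python. This is a hedged toy illustration, not the paper's implementation: the weight matrix W, the pattern U, and the test vector T below are invented, and each row of W plays the role of one dichotomizing hyperplane through the origin.

```python
# Minimal sketch of a one-layered sign-function network: V = sign(W U).
# Each row of W is the normal of a hyperplane through the origin; the P
# rows partition the N-space into up to 2**P cones, one per sign pattern.

def sign(x):
    return 1 if x >= 0 else -1

def classify(W, U):
    """Return the P-bit classification vector (cone label) for input U."""
    return [sign(sum(w * u for w, u in zip(row, U))) for row in W]

# Two hyperplanes (P=2) in 3-space (N=3): up to 4 hyper-cones.
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]

U = [0.5, -2.0, 1.0]          # a trained class pattern (invented)
print(classify(W, U))          # sign pattern identifies its cone: [1, -1]

# A noisy test vector that stays inside the same cone gets the same V.
T = [0.7, -1.5, 0.9]
print(classify(W, T))          # [1, -1] again
```

The allowed noise on T is exactly the cone's angular extent, which is what the abstract's "variation range" adjustment controls.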

Given a set of standard pattern vectors {U_m, m=1 to M, each U_m represents a distinct pattern}, if N >> M, then a one-layered sign-function neural network (OLNN) is sufficient to do a very robust, yet very accurate, noniterative learning of all patterns. After the learning is done, the OLNN will make an accurate identification of an untrained test pattern even when the test pattern varies within a certain dynamic range of a particular standard pattern learned during the noniterative learning process. The analytical foundation that makes this dynamic neural network pattern recognition possible is the following. If we know that a standard pattern U_m will vary gradually among K boundary patterns U_m1 to U_mK, then we can train the neural network noniteratively to learn JUST THE BOUNDARY vectors {U_mi, i=1 to K} for each pattern U_m. Then, due to a distinctive property of noniterative learning, for a test input pattern U_t equal to any gradual change within the boundaries (i.e., U_t = any CONVEX combination of the boundary set {U_mi, i=1 to K, m fixed}), the OLNN can still automatically recognize this changed pattern even though all these gradually changed patterns are NOT learned step by step in the noniterative learning.
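The convex-combination property this abstract relies on follows from linearity: W(Σ c_i U_mi) = Σ c_i (W U_mi), and nonnegative weights c_i summing to 1 preserve a sign common to all boundary responses. A minimal sketch, where W and the boundary vectors are invented for illustration:

```python
# Sketch of the convex-combination property: if every boundary vector of
# class m lies on the same side of each learned hyperplane, then any
# convex combination of the boundary set yields the same sign pattern.

def sign(x):
    return 1 if x >= 0 else -1

def classify(W, U):
    return [sign(sum(w * u for w, u in zip(row, U))) for row in W]

W = [[1.0, 1.0], [1.0, -1.0]]          # learned hyperplane normals (invented)
boundary = [[2.0, 0.5], [3.0, 1.0]]    # boundary vectors U_m1, U_m2 (invented)

V_m = classify(W, boundary[0])
assert all(classify(W, b) == V_m for b in boundary)

# Any convex combination of the boundary set is recognized as the same V_m,
# even though it was never presented during learning.
c = (0.3, 0.7)
U_t = [c[0] * boundary[0][k] + c[1] * boundary[1][k] for k in range(2)]
print(classify(W, U_t) == V_m)   # True
```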

**image extraction** technique. It will be verified in each experiment by reconstructing the original image from the compactly extracted analog data file.

**EXPERIMENTALLY**, using this program, it reports in detail many interesting, novel, 5D-geometrical properties. These include the ring structures of the boundary vectors, topological symmetries of the boundaries, different ways of joining the boundary planes (4D hyper-planes) in the 5-space, and the topological order of the extreme edges and of the boundary planes located around the cone.

Given the mapping {U_m mapped to V_m, m=1 to M}, where U_m is an N-dimensional analog (pattern) vector and V_m is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn this mapping is that each i-set in {Y_mi, m=1 to M} (where Y_mi ≡ V_mi U_m, and V_mi = +1 or -1 is the i-th bit of V_m; i=1 to P, so there are P such sets) is POSITIVELY, LINEARLY INDEPENDENT, or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of older learning machines, we know that if a set of N-dimensional analog vectors forms a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRON CONE. This paper reports a new method and a new algorithm for finding the boundary vectors of a convex set of ND analog vectors.
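In this usage, a set {Y_i} is PLI exactly when some weight vector w satisfies w · Y_i > 0 for every i, i.e., the whole set fits strictly inside one open half-space. A hedged sketch of that feasibility test using a simple perceptron-style iteration; the vectors and the iteration cap are illustrative assumptions, not the paper's new boundary-finding algorithm:

```python
# Test whether a set of vectors admits a w with w . y > 0 for all y
# (positive linear independence in the half-space sense). The classic
# perceptron update converges iff such a w exists.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def find_positive_halfspace(Y, max_iter=1000):
    """Return w with dot(w, y) > 0 for all y in Y, or None if not found."""
    w = [0.0] * len(Y[0])
    for _ in range(max_iter):
        updated = False
        for y in Y:
            if dot(w, y) <= 0:              # y violates the half-space
                w = [wi + yi for wi, yi in zip(w, y)]
                updated = True
        if not updated:
            return w                         # all strictly positive
    return None

pli_set = [[1.0, 0.2], [0.8, 1.0], [1.0, -0.1]]       # one half-space fits
print(find_positive_halfspace(pli_set) is not None)    # True

not_pli = [[1.0, 0.0], [-1.0, 0.0]]                    # contains y and -y
print(find_positive_halfspace(not_pli) is None)        # True
```

The second set fails because any nonnegative combination of y and -y can reach zero, which is precisely what PLI forbids.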

This paper reports the theory and the design of this NOVEL PCTLP system.

If the input pattern vectors {U_m} are closely related and inseparable (according to some targeted binary output vectors {V_m}) by a one-layered perceptron (OLP), then {U_m} must be linearly dependent, and the output-augmented {U_m} must be positively, linearly dependent. The learning of this OLP is then impossible no matter what learning rule we use, because the solution of the connection matrix simply does not exist. However, we can always use a parallel-cascaded, two-layered perceptron (PCTLP) to realize this inseparable mapping. The design of this PCTLP is derived from the positive-linear-independency condition we previously studied. It is a very intriguing mathematical derivation, and the design of the PCTLP is much more efficient than that of the conventional series-cascaded, three-layered neural networks. Also, its robustness in recognizing any untrained, closely related patterns can be controlled and maximized. The physical origin, the theory, and the design of this novel, `universal' perceptron pattern recognition system are discussed in detail in this paper.

^{k} in U can be used as an automatic feature extraction process in this noniterative-learning system. This paper reports the theoretical derivation, the design, and experiments of a superfast-learning, optimally robust neural network pattern recognition system utilizing this novel feature extraction process. An unedited video movie showing the speed of learning and the robustness in recognition of this novel pattern recognition system is demonstrated live. A comparison with other neural network pattern recognition systems is discussed.

A one-layered perceptron containing P neurons (P >= log_2 M) is generally sufficient to do the pattern recognition job. The connection matrix between the input (linear) layer and the neuron layer can be calculated in a noniterative manner. Real-time pattern recognition experiments implementing this theoretical result were reported at this and other national conferences last year. These experiments demonstrated that the noniterative training is very fast (it can be done in real time) and that the recognition of untrained patterns is very robust and very accurate. The present paper concentrates on the theoretical foundation of this noniteratively trained perceptron. The theory starts from an N-dimensional Euclidean-geometry approach. An optimally robust learning scheme is then derived. The robustness and the speed of this optimal learning scheme are compared with those of conventional iterative learning schemes.
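A noniterative calculation of the connection matrix can be illustrated with a one-shot least-squares solve, W = V Uᵀ (U Uᵀ)⁻¹. This closed form is a standard pseudo-inverse sketch assumed here for illustration; the paper's exact optimally robust scheme is not reproduced, and the toy patterns are invented:

```python
# One-shot ("noniterative") training sketch: solve W U = V directly
# instead of iterating. Pure-Python linear algebra for a tiny N=2 case.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def inv2(A):
    """Inverse of a 2x2 matrix (enough for this toy example)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# M=2 patterns as columns of U (N=2); targets as columns of V (P=2 bits).
U = [[2.0, 0.0],
     [0.0, 3.0]]
V = [[1, -1],
     [1,  1]]

# Noniterative solve: W = V U^T (U U^T)^(-1).
W = matmul(matmul(V, transpose(U)), inv2(matmul(U, transpose(U))))

sign = lambda x: 1 if x >= 0 else -1
for m in range(2):
    u_m = [U[0][m], U[1][m]]
    v_m = [V[0][m], V[1][m]]
    out = [sign(sum(W[i][k] * u_m[k] for k in range(2))) for i in range(2)]
    assert out == v_m   # each learned pattern reproduces its target bits
print("noniterative training reproduces all targets")
```

No gradient loop appears anywhere, which is the sense in which such training "can be done in real time."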

A one-layered perceptron containing P neurons (P >= log_2 M) is generally sufficient to accomplish the learning-recognition task. The recognition is very robust and very fast if an optimum noniterative learning scheme is applied to the perceptron learning process. This paper concentrates on two special characteristics of this novel pattern recognition system: automatic feature extraction and automatic feature competition. An unedited video movie recorded during a series of learning-recognition experiments demonstrates these characteristics of the novel system in real time.
