Paper
1 July 1992
Improving convergence and performance of Kohonen's self-organizing scheme
Nikhil R. Pal, James C. Bezdek, Eric C.K. Tsao
Abstract
Kohonen-like clustering algorithms (e.g., learning vector quantization) suffer from several major problems. For this class of algorithms, the output often depends on the initialization: if the initial values of the cluster centers lie outside the convex hull of the input data, such an algorithm, even if it terminates, may not produce meaningful prototypes for clustering. This is because it updates only the winner prototype with each input vector. In this paper we propose a generalization of learning vector quantization (which we call a Kohonen clustering network, or KCN) that, unlike other methods, updates all of the nodes with each input vector. Moreover, the network attempts to find a minimum of a well-defined objective function. The learning rules depend on the degree of match to the winner node: the lower the degree of match with the winner, the greater the impact on the non-winner nodes. Our numerical results show that the generated prototypes do not depend on the initialization, the learning coefficient, or the number of iterations (provided KCN runs for at least 200 passes through the data). We use Anderson's IRIS data to illustrate our method and compare our results with the standard Kohonen approach.
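The paper's exact learning rules follow from its objective function and are not reproduced in the abstract; the NumPy sketch below only illustrates the core idea of updating every prototype with each input vector, with the non-winner updates growing as the winner's match degrades. The `mismatch` weighting and the linearly decaying learning coefficient here are illustrative assumptions, not the published KCN update rule.

```python
import numpy as np

def kcn_sketch(X, c, alpha0=0.5, max_passes=200, seed=0):
    """Sketch of a KCN-style scheme: unlike plain LVQ, every prototype is
    updated for every input vector, with non-winners weighted by how
    poorly the winner matches (assumed weighting, for illustration)."""
    rng = np.random.default_rng(seed)
    # Initialize c prototypes from random data points; since all nodes are
    # updated on every presentation, the starting positions matter little.
    V = X[rng.choice(len(X), size=c, replace=False)].astype(float)
    for t in range(max_passes):
        alpha = alpha0 * (1.0 - t / max_passes)  # decaying learning coefficient
        for x in X:
            d = np.sum((V - x) ** 2, axis=1)  # squared distances to prototypes
            w = int(np.argmin(d))             # winner node
            # A poor winner match (large d[w] relative to the total distance)
            # pushes the non-winner nodes harder, per the abstract's rule.
            mismatch = d[w] / (d.sum() + 1e-12)
            for i in range(c):
                rate = alpha if i == w else alpha * mismatch
                V[i] += rate * (x - V[i])
    return V

# Hypothetical usage on 4-D Iris-style features (file name is illustrative):
# X = np.loadtxt("iris_features.csv", delimiter=",")
# V = kcn_sketch(X, c=3)  # three prototypes, one per Iris species
```

Because every node moves on every presentation, prototypes initialized outside the convex hull of the data are still pulled toward it, which is the behavior the abstract credits for the method's insensitivity to initialization.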
© (1992) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Nikhil R. Pal, James C. Bezdek, and Eric C.K. Tsao "Improving convergence and performance of Kohonen's self-organizing scheme", Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); https://doi.org/10.1117/12.140118
CITATIONS
Cited by 6 scholarly publications.
KEYWORDS
Prototyping
Iris data
Neural networks
Quantization
Stochastic processes
Artificial neural networks
Associative arrays