Fuzzy algorithms for learning vector quantization: generalizations and extensions (6 April 1995)
Abstract
This paper presents a general methodology for the development of fuzzy algorithms for learning vector quantization (FALVQ). These algorithms can be used to train feature maps to perform pattern clustering through an unsupervised learning process. The development of FALVQ algorithms is based on the minimization of a fuzzy objective function, formed as the weighted sum of the squared Euclidean distances between an input vector, which represents a feature vector, and the weight vectors of the map, which represent the prototypes. This formulation leads to the development of genuinely competitive algorithms, which allow all prototypes to compete for matching each input. The FALVQ 1, FALVQ 2, and FALVQ 3 families of algorithms are developed by selecting admissible generalized membership functions with different properties. The efficiency of the proposed algorithms is illustrated by their use in codebook design required for image compression based on vector quantization.
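The formulation above — minimizing a weighted sum of squared Euclidean distances so that all prototypes compete for each input — can be sketched as a single unsupervised update step. This is a minimal illustrative sketch using generic inverse-distance memberships, not the specific FALVQ 1, FALVQ 2, or FALVQ 3 membership functions derived in the paper; the function name and learning-rate parameter are assumptions for illustration.

```python
import numpy as np

def fuzzy_lvq_step(x, prototypes, lr=0.1, eps=1e-12):
    """One genuinely competitive update: every prototype moves toward
    the input x, weighted by its fuzzy membership.

    Memberships here are illustrative normalized inverse squared
    distances, standing in for the paper's admissible generalized
    membership functions.
    """
    # squared Euclidean distances from the input to each prototype
    d = np.sum((prototypes - x) ** 2, axis=1)
    # fuzzy memberships: closer prototypes receive larger weights
    u = 1.0 / (d + eps)
    u /= u.sum()
    # all prototypes compete: each is attracted to x in proportion
    # to its membership, rather than only the single winner moving
    prototypes += lr * u[:, None] * (x - prototypes)
    return prototypes, u
```

In codebook design for vector quantization, such a step would be applied repeatedly over the training feature vectors, with the final prototypes serving as the codebook entries.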
© (1995) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Nicolaos B. Karayiannis, Pin-I Pai, "Fuzzy algorithms for learning vector quantization: generalizations and extensions", Proc. SPIE 2492, Applications and Science of Artificial Neural Networks, (6 April 1995); https://doi.org/10.1117/12.205133
Proceedings paper, 12 pages.
Keywords: Prototyping, Algorithm development, Quantization, Fuzzy logic, Information operations, Image compression, Image quality