The disadvantages of the generalized learning vector quantization (GLVQ) and fuzzy generalized learning vector quantization (FGLVQ) algorithms are discussed, and a revised GLVQ (RGLVQ) algorithm is proposed. Because the iterative coefficients of the proposed algorithms are properly bounded, their performance is invariant under uniform scaling of the entire data set, unlike Pal's GLVQ, and their initial learning rate is not sensitive to the number of prototypes, unlike Karayiannis's FGLVQ. The proposed algorithms are tested and evaluated on the IRIS data set. Their efficiency is also illustrated by their use in codebook design for image compression based on vector quantization. The training time of the RGLVQ algorithm is reduced by 20% compared with Karayiannis's FGLVQ, while the performance is similar.
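For readers unfamiliar with the family of algorithms being revised, the following is a minimal sketch of a classical LVQ1-style prototype update, the basic competitive step that GLVQ, FGLVQ, and RGLVQ all generalize. It is illustrative only and does not reproduce the authors' RGLVQ; the learning rate and the prototype layout are assumptions for the sketch.

```python
# Minimal LVQ1-style prototype update (illustrative sketch, NOT the
# authors' RGLVQ). The learning rate `lr` and the example prototypes
# below are assumptions, not values from the paper.
import numpy as np

def lvq1_update(prototypes, labels, x, y, lr=0.1):
    """Move the nearest prototype toward x if its label matches y,
    otherwise move it away (classical LVQ1 rule)."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    i = int(np.argmin(dists))                  # winning prototype
    sign = 1.0 if labels[i] == y else -1.0     # attract or repel
    updated = prototypes.copy()
    updated[i] += sign * lr * (x - updated[i])
    return updated

# Two prototypes for two classes; present one labeled sample.
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = [0, 1]
new_protos = lvq1_update(protos, labels, np.array([0.2, 0.0]), y=0)
```

Here the winning prototype `[0, 0]` shares the sample's label, so it moves a fraction `lr` of the way toward the input, yielding `[0.02, 0.0]`; the losing prototype is untouched. GLVQ-style methods replace this winner-take-all step with updates to all prototypes weighted by iterative coefficients, which is where the boundedness property highlighted in the abstract matters.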