Extension of the generalized Hebbian algorithm for principal component extraction
13 October 1998
Abstract
Principal component analysis (PCA) plays an important role in various areas. In many applications it is necessary to adaptively compute the principal components of the input data. Over the past several years, numerous neural network approaches have been proposed for adaptively extracting principal components. One of the most popular learning rules for training a single-layer linear network for principal component extraction is Sanger's generalized Hebbian algorithm (GHA). We have extended the GHA (EGHA) by including a positive-definite symmetric weighting matrix in the representation error-cost function that is used to derive the learning rule for training the network. The EGHA allows different weighting factors to be placed on the principal component representation errors. Specifically, if prior knowledge is available about the variances of the individual terms of the input vector, this statistical information can be incorporated into the weighting matrix. We have shown that by using a weighted representation error-cost function, where the weighting matrix is diagonal with the reciprocals of the standard deviations of the input on the diagonal, the EGHA yields more accurate results than the GHA.
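For context, the following Python sketch implements Sanger's GHA and illustrates one simple way the diagonal weighting described in the abstract could be realized, by scaling each input term by the reciprocal of its standard deviation before applying the update. The function name `gha` and the elementwise-scaling approach are illustrative assumptions; the paper's exact EGHA update rule, derived from the weighted error-cost function, is not reproduced here.

```python
import numpy as np

def gha(X, n_components, lr=0.01, epochs=50, weights=None, seed=0):
    """Sanger's generalized Hebbian algorithm on zero-mean data X (samples x features).

    If `weights` is given (a per-feature vector, e.g. reciprocal standard
    deviations), each input is scaled elementwise before the update -- a
    simple stand-in for a diagonal weighting matrix; this is an assumption,
    not the paper's derived EGHA rule.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))  # rows ~ components
    for _ in range(epochs):
        for x in X:
            if weights is not None:
                x = weights * x                  # diagonal input weighting
            y = W @ x                            # linear network outputs
            # Sanger's rule: Hebbian term minus lower-triangular decorrelation,
            # dW = lr * (y x^T - LT[y y^T] W)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Example: correlated synthetic data, weighted by reciprocal standard deviations.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) @ np.array([[3.0, 0, 0], [1.0, 1, 0], [0, 0, 0.5]])
X -= X.mean(axis=0)                              # GHA assumes zero-mean inputs
W = gha(X, n_components=2, weights=1.0 / X.std(axis=0))
```

After convergence, the rows of `W` approximate the leading principal components of the (weighted) input correlation matrix, ordered by decreasing variance.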
© 1998 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Fredric M. Ham and Inho Kim, "Extension of the generalized Hebbian algorithm for principal component extraction," Proc. SPIE 3455, Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation (13 October 1998); https://doi.org/10.1117/12.326722
Proceedings paper, 12 pages.

