Open Access
Locality preserving projections plus affinity propagation: a fast method for face recognition
Chunhua Du, Jie Yang, Qiang Wu, Tianhao Zhang, Shengyang Yu
Abstract
This letter develops a novel and fast framework called APLPP that tackles the face recognition task by combining locality preserving projections (LPP) with the affinity propagation (AP) clustering method. Unlike LPP, which compares a test image against all training images for identification, our affinity propagation LPP (APLPP) uses far fewer, noise-free images, and is therefore more computationally efficient than LPP. Comparative experiments also indicate that APLPP outperforms LPP in terms of recognition rate.

1. Introduction

Face recognition (FR) has received increasing attention in recent years due to its potential applications, and many FR systems have been devised in the past few decades. One of the most popular techniques for FR is subspace analysis, whose two most important and representative methods are principal component analysis (PCA)1 and linear discriminant analysis (LDA).2 Many other methods, such as 2DPCA,3 2DLDA,4 ICA,5 (2D)2LDA,6 BDPCA+LDA,7 and IPCA-ICA,8 are extensions of them. LPP9 is also a subspace analysis method; as stated in Ref. 9, it performs better than LDA in terms of error rate. However, all the methods mentioned above, including locality preserving projections (LPP), are inefficient, because they identify a test face image by comparing it with all training images. To overcome this disadvantage, we propose to combine LPP with affinity propagation (AP)10 to give a new FR framework, APLPP, which uses far fewer training images, free of noise, to identify a test image. Thus APLPP is more computationally efficient than LPP.

The reason we propose to combine AP with LPP is as follows. AP can detect a representative sample for each class; therefore, we can use AP to detect a representative face image for each subject and use only the representative images, rather than all training images, for identification. However, the original AP takes as the similarity between two images their negative Euclidean distance computed directly on the gray pixel values. As we stated in Ref. 11, such a definition of similarity is unreasonable. To solve this problem, we propose to compute the similarity on low-dimensional features derived from LPP, not only because LPP can be used for dimensionality reduction and can best detect the essential face manifold structure, but also because it can reduce the unwanted variations resulting from changes in lighting, facial expression, and pose. By applying LPP to the high-dimensional face data, more effective low-dimensional features can be obtained, and the similarity computed on these features should be better than that computed directly from the gray pixel values. In addition, it needs much less computational time, because the dimension has been greatly reduced. In view of this analysis, we propose to combine AP with LPP and make full use of their respective advantages.

The rest of this letter is organized as follows: AP and LPP are reviewed in Secs. 2 and 3, respectively. In Sec. 4, we describe the APLPP method. Experimental results are given in Sec. 5, and the final section gives the conclusions.

2. Affinity Propagation

AP10 is an efficient clustering method. It first builds a similarity matrix $s$, in which the similarity $s(i,k)$ between data points $x_i$ and $x_k$ is their negative Euclidean distance. Before clustering, each point $B$ also needs to be assigned a number $P(B)$, which characterizes the prior knowledge of how good point $B$ is as a representative example. Data points with larger values of $P(B)$ are more likely to be chosen as representative examples. These values are referred to as preferences. If all data points are equally suitable as representative examples, the preferences should be set to a common value, which can be varied to produce different numbers of clusters; generally, that value is the median of $s$. After the construction of the similarity matrix and the setting of the preferences, two kinds of messages (responsibility and availability) are passed between data points. The responsibility $r(i,k)$, sent from data point $x_i$ to the candidate representative example $x_k$, reflects the accumulated evidence for how well suited point $x_k$ is to serve as the representative example for point $x_i$. It is updated using the rule

Eq. 1

$$r(i,k) \leftarrow s(i,k) - \max_{k'\,:\,k' \neq k}\bigl\{a(i,k') + s(i,k')\bigr\}.$$
The availability $a(i,k)$, sent from the candidate representative example $x_k$ to point $x_i$, reflects the accumulated evidence for how appropriate it would be for point $x_i$ to choose point $x_k$ as its representative example. It is computed by the rule

Eq. 2

$$a(i,k) \leftarrow \min\Bigl\{0,\; r(k,k) + \sum_{i'\,:\,i' \notin \{i,k\}} \max\bigl\{0,\, r(i',k)\bigr\}\Bigr\}.$$

Availabilities and responsibilities can be combined to identify representative examples at any time during affinity propagation. For point $x_i$, the value of $k$ that maximizes $a(i,k) + r(i,k)$ identifies point $x_k$ as the representative example for point $x_i$.
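To make the message-passing updates concrete, the following is a minimal NumPy sketch of AP, not the authors' implementation: the damping factor, the iteration count, and the option of a per-point preference vector are illustrative assumptions, and the default preference is the median of $s$, as stated above.

```python
import numpy as np

def affinity_propagation(S, preference=None, damping=0.5, n_iter=200):
    """Minimal AP sketch. S is an (N, N) similarity matrix
    (negative Euclidean distances in the original formulation)."""
    N = S.shape[0]
    S = S.copy()
    # Preferences go on the diagonal; the median of S is the common default.
    S[np.arange(N), np.arange(N)] = (
        np.median(S) if preference is None else preference
    )
    R = np.zeros((N, N))  # responsibilities r(i, k)
    A = np.zeros((N, N))  # availabilities  a(i, k)

    for _ in range(n_iter):
        # Eq. 1: r(i,k) <- s(i,k) - max_{k' != k} {a(i,k') + s(i,k')}
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first_max = AS[np.arange(N), idx]
        AS[np.arange(N), idx] = -np.inf          # exclude the maximizer
        second_max = np.max(AS, axis=1)
        R_new = S - first_max[:, None]
        R_new[np.arange(N), idx] = S[np.arange(N), idx] - second_max
        R = damping * R + (1 - damping) * R_new  # damped update

        # Eq. 2: a(i,k) <- min{0, r(k,k) + sum_{i' not in {i,k}} max{0, r(i',k)}}
        Rp = np.maximum(R, 0)
        Rp[np.arange(N), np.arange(N)] = R[np.arange(N), np.arange(N)]
        A_new = np.sum(Rp, axis=0)[None, :] - Rp
        dA = np.diag(A_new).copy()               # self-availability is not clipped
        A_new = np.minimum(A_new, 0)
        A_new[np.arange(N), np.arange(N)] = dA
        A = damping * A + (1 - damping) * A_new

    # Each point's exemplar is the k that maximizes a(i,k) + r(i,k).
    return np.argmax(A + R, axis=1)
```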

3. Locality Preserving Projections

LPP, or Laplacianface,9 is also a subspace analysis method for recognition. Here, it is formulated as follows. Suppose we have a set of face images $\{x_1, x_2, \ldots, x_N\}$, and let $S$ be a similarity matrix defined on the data points. Laplacianface can be obtained by solving the minimization problem

Eq. 3

$$W_{\mathrm{opt}} = \arg\min_{W} \sum_{i=1}^{N} \sum_{j=1}^{N} \bigl(W^{T}x_i - W^{T}x_j\bigr)^{2} S_{ij} = \arg\min_{W}\, W^{T} X L X^{T} W$$
with the constraint $W^{T}XDX^{T}W = 1$, where $L = D - S$ is the graph Laplacian and $D$ is a diagonal matrix whose entries $D_{ii} = \sum_j S_{ij}$ measure the local density around $x_i$. Here $S$ is computed with the heat kernel, $S_{ij} = \exp(-\|x_i - x_j\|^{2}/t)$ for neighboring points $x_i$ and $x_j$, and $S_{ij} = 0$ otherwise. (Note that the similarity matrix used in LPP is different from that in AP.) Finally, the basis functions of Laplacianface are the eigenvectors associated with the smallest eigenvalues of the generalized eigenproblem

Eq. 4

$$X L X^{T} W = \lambda X D X^{T} W.$$

Once the eigenvectors are computed, let $W = [w_1, w_2, \ldots, w_k]$ be the transformation matrix. The low-dimensional features can then be obtained as $Y = W^{T}X$.
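The following is a compact sketch of this procedure under stated assumptions, not the authors' code: a k-nearest-neighbor graph defines the heat-kernel neighborhoods, the kernel width t and neighborhood size k are illustrative choices, and a small ridge keeps $XDX^{T}$ positive definite when the image dimension exceeds the number of samples.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=38, k=5, t=1.0):
    """LPP sketch. X is (d, N): one face image per column, as in Y = W^T X."""
    d, N = X.shape
    # Pairwise squared distances, then heat-kernel weights on k-NN pairs.
    sq = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)  # (N, N)
    S = np.zeros((N, N))
    nn = np.argsort(sq, axis=1)[:, 1:k + 1]  # skip self at position 0
    for i in range(N):
        S[i, nn[i]] = np.exp(-sq[i, nn[i]] / t)
    S = np.maximum(S, S.T)        # symmetrize the neighborhood graph
    D = np.diag(S.sum(axis=1))    # D_ii = sum_j S_ij
    L = D - S                     # graph Laplacian

    # Generalized eigenproblem X L X^T w = lambda X D X^T w (Eq. 4);
    # the ridge on B is an assumption to keep it well posed when d > N.
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-6 * np.eye(d)
    vals, vecs = eigh(A, B)       # eigenvalues in ascending order
    W = vecs[:, :n_components]    # eigenvectors of the smallest eigenvalues
    return W                      # low-dimensional features: W.T @ X
```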

4. Proposed Affinity Propagation Locality Preserving Projections

Suppose there are $N$ training images $\{x_1, x_2, \ldots, x_N\}$ belonging to $c$ different subjects. First, select $n$ ($n > 1$) images per person (hence $nc$ in total) to form the training set, and apply LPP to the training set to get the transformation matrix $W$ and the $nc$ corresponding reduced features. Second, use AP to cluster these $nc$ reduced features into $c$ different classes and obtain a representative reduced feature for each class. There will thus be $c$ representative features $A_1, A_2, \ldots, A_c$, such that each feature is assigned an identity $\omega_i$ ($1 \le i \le c$) and each class has exactly one representative feature. The reason these features should be clustered into $c$ different classes can be found in Ref. 11. Finally, convert the test face image $T$ into the low-dimensional feature $y = W^{T}T$, and identify it using either a nearest-neighbor classifier (NNC) or AP. Two methods, APLPP(NNC) and APLPP(AP), are thus generated. APLPP(NNC) uses NNC with the Euclidean distance to find which representative face image is nearest to the test face image. APLPP(AP), on the other hand, uses AP again to cluster the $c+1$ features (the $c$ representative features plus the test feature) into $c$ classes and to find which representative feature the test one belongs to. This clustering must satisfy two conditions: (a) the number of classes after clustering must equal $c$; (b) the representative features obtained in the training stage must remain representative after clustering. To achieve this, recall that points with larger preferences are more likely to be chosen as representative examples, while the test feature is supposed to be assigned to one of the existing representatives; we therefore assign large preference values (e.g., 1) to $A_1, A_2, \ldots, A_c$ and a very small value (e.g., $-1000$) to $y$, and then perform the clustering. A sketch of both classification rules follows.
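Below is a hedged sketch of the APLPP training and testing stages, reusing the affinity_propagation and lpp sketches above. The helper names, the way rep_labels would be derived from training labels, and the preference values (+1 and -1000, following the text) are assumptions; the AP preference in training would be tuned until exactly c clusters emerge, as in Ref. 11.

```python
import numpy as np

def train_aplpp(X_train, lpp_fn, ap_fn):
    """Training stage: project with LPP, then let AP pick one
    representative feature per subject (c exemplars in total)."""
    W = lpp_fn(X_train)                       # transformation matrix from LPP
    feats = W.T @ X_train                     # (k, n*c) reduced features
    # AP on negative squared Euclidean distances between reduced features.
    sq = np.sum((feats[:, :, None] - feats[:, None, :]) ** 2, axis=0)
    exemplar_of = ap_fn(-sq)                  # exemplar index of each point
    reps = feats[:, np.unique(exemplar_of)]   # c representative features
    # rep_labels would be the training labels of these exemplar images.
    return W, reps

def classify_nnc(W, reps, rep_labels, test_img):
    """APLPP(NNC): nearest representative feature by Euclidean distance."""
    y = W.T @ test_img
    d = np.sum((reps - y[:, None]) ** 2, axis=0)
    return rep_labels[np.argmin(d)]

def classify_ap(W, reps, rep_labels, test_img, ap_fn):
    """APLPP(AP): cluster the c representatives plus the test feature.
    Large preferences (+1) keep the representatives as exemplars; a very
    small preference (-1000) forces the test feature to join one of them."""
    y = W.T @ test_img
    pts = np.column_stack([reps, y])          # (k, c+1), test point last
    sq = np.sum((pts[:, :, None] - pts[:, None, :]) ** 2, axis=0)
    pref = np.full(pts.shape[1], 1.0)
    pref[-1] = -1000.0                        # test point cannot be an exemplar
    exemplar_of = ap_fn(-sq, preference=pref)
    return rep_labels[exemplar_of[-1]]        # label of the test point's exemplar
```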

Since APLPP(NNC) and APLPP(AP) use only $c$ representative features for recognition, their recognition time depends mainly on the number of subjects: it increases linearly with $c$, whereas the recognition time of LPP increases linearly with the number of training images, $nc$. Both of our methods are therefore more computationally efficient than LPP, as the toy comparison below illustrates.
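As a purely illustrative check of this claim (the sizes and the random features are made up, not measurements from the paper), matching a test feature against c representatives can be timed against matching it against all nc training features:

```python
import time
import numpy as np

k, c, n = 38, 38, 50                        # feature dim, subjects, images/subject
rng = np.random.default_rng(0)
reps = rng.standard_normal((k, c))          # c representative features (APLPP)
train = rng.standard_normal((k, n * c))     # all n*c training features (LPP)
y = rng.standard_normal((k, 1))             # a test feature

t0 = time.perf_counter()
np.argmin(np.sum((reps - y) ** 2, axis=0))  # compare with c features
t1 = time.perf_counter()
np.argmin(np.sum((train - y) ** 2, axis=0)) # compare with n*c features
t2 = time.perf_counter()
print(f"APLPP matching: {t1 - t0:.2e}s, LPP matching: {t2 - t1:.2e}s")
```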

5. Experiments

In this section, experiments that validate the effectiveness of APLPP and compare it with LPP are described, using the extended YaleB face database.12 The database contains around 64 near-frontal images for each of 38 distinct subjects. All images were preprocessed11 so that the final images have size $32 \times 32$, with 256 gray levels per pixel. Different numbers of images per individual were taken to form the training set; the rest of the database was used as the testing set. The experiments were repeated 20 times, and the average accuracy was recorded. In general, the performance of LPP and APLPP varies with the number of dimensions; only the best results obtained by each method are shown in Table 1, where the values in parentheses denote the number of dimensions needed to attain the top recognition rate. As can be seen, APLPP(AP) performs uniformly better than APLPP(NNC), which in turn outperforms LPP. This suggests that AP is more suitable than NNC as a classifier in these cases. Moreover, both of our methods improve the FR performance by large margins over LPP.

Table 1

Performance comparison on the extended YaleB database. Here $n$ is the number of training face images per person; each entry gives the recognition accuracy and, in parentheses, the number of dimensions needed to attain it.

FR approach    n=10            n=20            n=30            n=40            n=50
LPP            76.4602% (38)   70.8313% (36)   82.1154% (86)   93.9597% (44)   96.3521% (38)
APLPP(NNC)     76.674% (62)    71.058% (60)    82.2135% (38)   94.8937% (38)   96.5564% (38)
APLPP(AP)      79.8869% (62)   72.3519% (80)   82.5432% (38)   96.0011% (38)   98.1907% (38)

The experiments reveal some interesting points:

  • 1. The recognition rates of all methods generally increase with the number of training samples: the more training samples, the better the performance that can be achieved.

  • 2. Both APLPP(NNC) and APLPP(AP) perform better than LPP by a large margin. The underlying reason is that AP can detect a good representative face image for each subject; the more training images per person, the better the representative face images that can be detected, and thus the better the performance attained. Another important reason is that APLPP uses only representative face images, free of noise, for identification, so it avoids the effects of noise.

  • 3. Since APLPP uses only the representative samples in the testing stage, it is more computationally efficient than LPP. Moreover, the speed difference between APLPP and LPP becomes more significant as the number of training images per person increases.

6. Conclusions and Further Work

We have proposed combining LPP with AP to give a new FR framework. In contrast with LPP, which identifies a test image by comparing it with all training images, our proposed method uses only representative face images. We also apply the AP clustering method itself for identification. The new framework has been evaluated on the extended YaleB database, and comparative experimental results show its effectiveness. Further work will include applying the proposed method to human gait recognition and character recognition.

Acknowledgments

The authors would like to thank the anonymous reviewers for their critical and constructive comments and suggestions. This research has been supported by the National Natural Science Foundation of China (No. 60675023, No. 60602012) and the China 863 project (No. 2007AA01Z164).

References

1. M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 586–591 (1991).

2. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces versus Fisherfaces: recognition using class specific linear projection," IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 711–720 (1997). https://doi.org/10.1109/34.598228

3. J. Yang, D. Zhang, A. F. Frangi, and J. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Trans. Pattern Anal. Mach. Intell. 26(1), 131–137 (2004).

4. J. Yang, D. Zhang, Y. Xu, and J. Y. Yang, "Two-dimensional discriminant transform for face recognition," Pattern Recogn. 38(7), 1125–1129 (2005).

5. M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face recognition by independent component analysis," IEEE Trans. Neural Netw. 13(6), 1450–1464 (2002). https://doi.org/10.1109/TNN.2002.804287

6. S. Noushath, G. H. Kumar, and P. Shivakumara, "(2D)2LDA: an efficient approach for face recognition," Pattern Recogn. 39(7), 1396–1400 (2006).

7. W. Zuo, D. Zhang, J. Yang, and K. Wang, "BDPCA plus LDA: a novel fast feature extraction technique for face recognition," IEEE Trans. Syst., Man, Cybern., Part B: Cybern. 36(4), 946–953 (2006).

8. I. Dagher and R. Nachar, "Face recognition using IPCA-ICA algorithm," IEEE Trans. Pattern Anal. Mach. Intell. 28(6), 996–1000 (2006).

9. X. He, S. Yan, Y. Hu, P. Niyogi, and H. J. Zhang, "Face recognition using Laplacianfaces," IEEE Trans. Pattern Anal. Mach. Intell. 27(3), 328–340 (2005). https://doi.org/10.1109/TPAMI.2005.55

10. B. J. Frey and D. Dueck, "Clustering by passing messages between data points," Science 315(5814), 972–976 (2007). https://doi.org/10.1126/science.1136800

11. C. Du, J. Yang, Q. Wu, and F. Li, "Integrating affinity propagation clustering method with linear discriminant analysis for face recognition," Opt. Eng. 46(11), 110501 (2007). https://doi.org/10.1117/1.2801735

12. K. C. Lee, J. Ho, and D. J. Kriegman, "Acquiring linear subspaces for face recognition under variable lighting," IEEE Trans. Pattern Anal. Mach. Intell. 27(5), 684–698 (2005). https://doi.org/10.1109/TPAMI.2005.92
© 2008 Society of Photo-Optical Instrumentation Engineers (SPIE)

Chunhua Du, Jie Yang, Qiang Wu, Tianhao Zhang, and Shengyang Yu, "Locality preserving projections plus affinity propagation: a fast method for face recognition," Optical Engineering 47(4), 040501 (1 April 2008). https://doi.org/10.1117/1.2903838