Human face pose estimation has a variety of applications, such as face recognition, face tracking, and human-computer interaction (HCI). Because the 3-D quantities estimated from 2-D data are of limited quality, estimating face poses from 2-D face images is difficult. In addition, many factors exacerbate the problem, for example, illumination conditions, face expressions, and spatial scale. More importantly, the appearance of the human head can change drastically across different viewing angles, mainly because of nonlinear deformations during in-depth rotations of the head.1 Many different approaches have been proposed to solve this problem. Generally, the existing pose estimation methods can be broadly classified into two categories: feature-based2 and appearance-based methods.3, 4
There are four major problems to be solved in the existing approaches mentioned before. The first is that the face region must be extracted from the whole image; it is very difficult to locate the face region in a side or profile face image. The second is that original face images are normalized manually; however, manual normalization is tedious and costly. The third is the difficulty of extracting face features accurately, which is even harder for a side face image than for a frontal one. Lastly, face images with varying intrinsic features such as illumination, face pose, and face expression are considered to constitute highly nonlinear manifolds in the high-dimensional observation space. Therefore, pose estimation systems using linear approaches [for example, principal component analysis (PCA)] will miss subtleties of these manifolds, and manifold learning algorithms are better alternatives. However, the discriminant ability of the low-dimensional subspaces obtained by manifold learning is often lower than that of the subspaces obtained by traditional dimensionality reduction approaches. Furthermore, the original feature vectors may include high-order correlations, which cannot be removed by manifold learning algorithms. Therefore, a new approach based on manifold learning is proposed to address the four problems mentioned before. In our proposed approach, face images, with the background left intact, are first transformed by Gabor filters. Then, a novel supervised locality preserving projection (SLPP) is proposed to project the Gabor-based data into a common low-dimensional subspace. For simplicity, the combination of Gabor filters (GF) and SLPP is abbreviated to GF+SLPP. Finally, the support vector machine (SVM) classifier is applied to estimate the face pose.
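The final stage of the pipeline described above is a standard multiclass SVM on the low-dimensional features. The following is a minimal sketch of that step only; the toy features, class count, and random seed are hypothetical stand-ins for the Gabor+SLPP features of the real system:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in for Gabor+SLPP features: 70 training samples in a
# 6-dimensional discriminant subspace, 7 pose classes (10 samples per class).
rng = np.random.default_rng(0)
y_train = np.repeat(np.arange(7), 10)
X_train = rng.normal(scale=0.3, size=(70, 6)) + y_train[:, None]

# The paper's final stage: an SVM classifier on the low-dimensional features.
clf = SVC(kernel="rbf").fit(X_train, y_train)
pred = clf.predict(X_train)
```

In practice the classifier would be fit on the training collection's projected features and evaluated on the projected testing collection.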
Proposed Combination Approaches of Gabor Filters and the Supervised Locality Preserving Projection
Gabor filters are particularly appropriate for face pose estimation because they incorporate smoothing and can reduce sensitivity to spatial misalignment and illumination change. The Gabor wavelet transform (GWT) can also yield image representations that are locally normalized in intensity and decomposed in spatial frequency and orientation.5 In addition, Gabor filters can enhance pose-specific face features. Moreover, Gabor filters transform the face images into the frequency domain, where information that is unnoticeable in the spatial domain becomes clear. The transformed face images help improve the discriminant ability of SLPP.
In our studies, the system processes face images as follows. A set of Gabor kernels is specified, and the original image $I$ is convolved with those kernels at each pixel. The result is a set of 2-D coefficient arrays $\{O_{u,v}\}$, where $O_{u,v}(x,y) = I(x,y) * \psi_{u,v}(x,y)$ is the convolution result corresponding to the Gabor kernel $\psi_{u,v}$ at scale $u$ and orientation $v$, and $*$ denotes the convolution operator.
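A sketch of this filtering stage is shown below. The kernel parameterization (`kmax`, `f`, `sigma`, kernel size, and the assumption of eight orientations) follows a common Gabor wavelet convention and is not specified by the text, so treat those values as assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(u, v, size=15, kmax=np.pi / 2, f=np.sqrt(2), sigma=2 * np.pi):
    """Complex Gabor kernel at scale u and orientation v (common parameterization)."""
    k = (kmax / f**u) * np.exp(1j * v * np.pi / 8)   # 8 orientations assumed
    kx, ky = k.real, k.imag
    ks = kx**2 + ky**2
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    wave = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma**2 / 2)
    return (ks / sigma**2) * np.exp(-ks * (x**2 + y**2) / (2 * sigma**2)) * wave

def gabor_responses(image, scales=5, orientations=8):
    """Convolve the image with every kernel; return magnitude arrays O_{u,v}."""
    return {(u, v): np.abs(fftconvolve(image, gabor_kernel(u, v), mode="same"))
            for u in range(scales) for v in range(orientations)}
```

Taking the magnitude of the complex response is one common choice; the text does not state which part of the complex output is kept.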
Since the outputs consist of different localities, scales, and orientation features, we concatenate all of them into a feature vector $x$. Without loss of generality, assume each output $O_{u,v}$ is a column vector, constructed by concatenating the rows (or columns) of the output array. Before the concatenation, each output is down-sampled by a factor $\rho$ to reduce the dimensionality of the original vector space, and then normalized to zero mean and unit variance. Let $O'_{u,v}$ denote a normalized output; the feature vector is then defined as $x = (O'^{\,T}_{0,0}, O'^{\,T}_{0,1}, \ldots)^{T}$, where $T$ is the transpose operator. The feature vector thus encompasses all the outputs as important discriminating information.
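A minimal sketch of this down-sample/normalize/concatenate step, assuming the filter responses are stored in a dict keyed by (scale, orientation) as in the previous stage:

```python
import numpy as np

def gabor_feature_vector(outputs, rho=4):
    """Down-sample each output by rho, normalize it to zero mean and unit
    variance, and concatenate all outputs into one feature vector."""
    parts = []
    for key in sorted(outputs):                       # fixed (u, v) ordering
        o = outputs[key][::rho, ::rho].ravel()        # down-sample, then vectorize
        o = (o - o.mean()) / o.std()                  # zero mean, unit variance
        parts.append(o)
    return np.concatenate(parts)
```

Sorting the keys simply fixes a consistent ordering of the $O'_{u,v}$ blocks across all images.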
After high-order information features are extracted by the Gabor filters, an immediate problem is to reduce the dimensionality and uncover the intrinsic low-dimensional manifold. In this work, we propose an SLPP approach.
LPP seeks a transformation $A$ to project high-dimensional input data $X = [x_1, x_2, \ldots, x_n]$ into a low-dimensional subspace $Y = A^T X$. The linear transformation $A$ can be obtained by minimizing an objective function as follows:6
$$\min_A \sum_{i,j} (y_i - y_j)^2 W_{ij},$$
where $W_{ij}$ evaluates the local structure of the data space. It can be defined as $W_{ij} = \exp(-\|x_i - x_j\|^2 / t)$, where $t$ is a suitable constant. The minimization problem can be converted to solving a generalized eigenvalue problem as follows:
$$X L X^T a = \lambda X D X^T a,$$
where $D$ is a diagonal matrix with $D_{ii} = \sum_j W_{ij}$, and $L = D - W$. For a more detailed derivation and justification of LPP, refer to Ref. 6.
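The objective and eigenproblem above can be sketched as plain (unsupervised) LPP; the neighborhood size `k`, heat-kernel constant `t`, and the small ridge on $XDX^T$ are assumed values, and the paper's supervised variant (SLPP) would additionally exploit class labels when building the affinities $W_{ij}$:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, d, t=1.0, k=5):
    """Locality Preserving Projection sketch.
    X: (D, n) data matrix, one sample per column; returns the (D, d) projection A."""
    n = X.shape[1]
    dist = cdist(X.T, X.T, "sqeuclidean")
    W = np.exp(-dist / t)                              # heat-kernel affinities
    # keep only k-nearest-neighbor affinities (symmetrized), zero elsewhere
    mask = np.zeros_like(W, dtype=bool)
    idx = np.argsort(dist, axis=1)[:, 1:k + 1]         # skip self at column 0
    for i in range(n):
        mask[i, idx[i]] = True
    W = np.where(mask | mask.T, W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                                          # graph Laplacian
    # generalized eigenproblem X L X^T a = lambda X D X^T a; keep the
    # eigenvectors with the smallest eigenvalues
    A_mat, B_mat = X @ L @ X.T, X @ D @ X.T
    vals, vecs = eigh(A_mat, B_mat + 1e-6 * np.eye(X.shape[0]))
    return vecs[:, :d]
```

`scipy.linalg.eigh` returns eigenvalues in ascending order, so the first `d` columns are the minimizers of the LPP objective.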
The $d$-dimensional data from LPP are further mapped into a $d'$-dimensional discriminant subspace through the linear discriminant analysis (LDA) algorithm. To minimize the intraclass distances while maximizing the interclass distances of the face manifold, the column vectors of the discriminant matrix $W_{\mathrm{lda}}$ are calculated as the eigenvectors of $S_w^{-1} S_b$ associated with the largest eigenvalues, where $S_b$ is the between-class scatter matrix and $S_w$ is the within-class scatter matrix. The matrix $W_{\mathrm{lda}}$ then projects vectors $y$ in the low-dimensional face subspace into the common discriminant subspace, which can be formulated as $z = W_{\mathrm{lda}}^T y$, where $z$ encodes classification information.
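The LDA step can be sketched directly from the scatter-matrix definitions; the small ridge added to $S_w$ before solving the generalized eigenproblem is an assumption for numerical stability, not part of the original formulation:

```python
import numpy as np
from scipy.linalg import eigh

def lda(Y, labels, d_prime):
    """LDA on the LPP outputs Y (d, n): the eigenvectors of S_w^{-1} S_b with
    the largest eigenvalues span the discriminant subspace."""
    d, n = Y.shape
    mean = Y.mean(axis=1, keepdims=True)
    Sb = np.zeros((d, d))                              # between-class scatter
    Sw = np.zeros((d, d))                              # within-class scatter
    for c in np.unique(labels):
        Yc = Y[:, labels == c]
        mc = Yc.mean(axis=1, keepdims=True)
        Sb += Yc.shape[1] * (mc - mean) @ (mc - mean).T
        Sw += (Yc - mc) @ (Yc - mc).T
    # generalized eigenproblem S_b w = lambda S_w w; keep the largest eigenvalues
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:d_prime]]
```

With seven pose classes, $S_b$ has rank at most six, which is why $d'$ is bounded by $c-1$ as noted below.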
In this section, we manually selected two collections of face images from the JDL-PEAL face database.7 Both include 130 randomly selected subjects, each with seven differently posed face images varying in intrinsic features such as pose, illumination, and expression. The first collection is used as a training set, and the second as a testing set. All face images were resized to a uniform size. Some samples are illustrated in Fig. 1. Before performing the proposed approach, several parameters need to be fixed. First, for the Gabor filters, we chose five scales and eight orientations, and the down-sampling factor is set to 4. Second, the two reduced dimensions $d$ and $d'$ of the proposed method are fixed: $d$ is set to 20, and the reduced discriminant dimension $d'$ is generally no more than $c-1$, where $c$ denotes the number of face poses.
We compared our proposed GF+SLPP algorithm with PCA+LDA, GF+PCA+LDA, and SLPP. For PCA+LDA, the algorithm is applied to obtain the subspace from the training set directly. For SLPP, we utilize the SLPP approach without Gabor filters to learn the subspace from the training set. For GF+PCA+LDA, the approach is similar to GF+SLPP, but the dimensionality reduction step is replaced by PCA+LDA.
In the GF+SLPP approach, the reduced discriminant dimension $d'$ influences the performance. It can be seen from Fig. 2 that as $d'$ increases, GF+SLPP achieves a higher accuracy rate.
The experimental results with the optimal reduced dimensions are listed in Table 1. It can be seen from Table 1 that the discriminant ability of the SLPP approach is better than that of the PCA+LDA approach, and the GF+SLPP method achieves the best performance.
Table 1. Accuracy rate (%) of the combination of dimensionality reduction and SVM classification, with $d = 20$ and $d' = 6$; the columns correspond to the seven face poses (left to right).

SLPP accuracy rate:      75.23  78.51  78.84  83.85  79.15  77.58  73.39
PCA+LDA accuracy rate:   58.21  59.68  61.18  64.38  61.23  58.92  57.98
The research is sponsored by the Fundamental Project of the Committee of Science and Technology, Shanghai, under contract 03DZ14015.