In the past few years, some nonlinear manifold learning algorithms have been proposed to discover the nonlinear structure of the manifold by investigating the local geometry of samples, such as isomap,1 locally linear embedding (LLE),2, 3 and Laplacian eigenmap.4 These methods are defined only on training data, and the issue of how to map new test data remains difficult. Therefore, they cannot be applied directly to recognition problems.
Recently, some manifold-based linear algorithms, like neighborhood preserving embedding (NPE),5 resolve the difficulty by finding a mapping on the whole data space, not just on the training data. However, they have a common inherent limitation: they deemphasize discriminant information that is very important for the recognition task. In addition, the basis vectors of these methods are statistically correlated, so the extracted features contain considerable redundancy. As a result, the overlapping information can distort the distribution of the features and even degrade recognition performance.
In this work, a new feature extraction algorithm, called kernel uncorrelated neighborhood discriminative embedding (KUNDE), is proposed to address the problems mentioned earlier. On the one hand, the method explicitly considers both the within-class neighboring information and the between-class scatter information, thereby emphasizing the discriminant information. On the other hand, the method obtains statistically uncorrelated features with minimum redundancy by putting a simple uncorrelated constraint on the computation of the basis vectors. By mapping the input data into a high-dimensional feature space with the kernel technique, the method extracts nonlinear features.
Kernel Uncorrelated Neighborhood Discriminative Embedding
Given a data set $X = \{x_1, x_2, \ldots, x_N\}$ in $\mathbb{R}^D$, suppose that each data point $x_i$ belongs to one of the $c$ classes $\{1, 2, \ldots, c\}$, and each class $t$ contains $n_t$ samples. The data are then mapped into an implicit high-dimensional feature space $F$ by a nonlinear mapping function $\phi: \mathbb{R}^D \rightarrow F$. The problem is to find a transformation matrix $V$ that maps these points to new points $y_i = V^\top \phi(x_i)$ in $\mathbb{R}^d$, where $d \ll D$.
The procedure of the KUNDE algorithm is as follows:
1. Construct the kernel matrix $K$ whose elements are $K_{ij} = k(x_i, x_j)$, where $k(\cdot, \cdot)$ is a kernel function that satisfies $k(x_i, x_j) = \phi(x_i)^\top \phi(x_j)$.
2. Compute the weight matrix $W$ by minimizing the reconstruction error $\varepsilon(W) = \sum_i \|\phi(x_i) - \sum_j W_{ij}\,\phi(x_j)\|^2$, where $\sum_j W_{ij} = 1$, and $W_{ij} \neq 0$ only if $\phi(x_j)$ is one of the $k$ identical-label nearest neighbors of $\phi(x_i)$; otherwise, $W_{ij} = 0$. An efficient way to minimize this error is described in Ref. 3.
3. Compute matrices $M$, $B$, and $T$ as follows: $M = (I - W)^\top (I - W)$, $B = E - \frac{1}{N} e e^\top$, and $T = I - \frac{1}{N} e e^\top$, where $I$ is an identity matrix, $E_{ij} = 1/n_t$ if $x_i$ and $x_j$ belong to the $t$th class; otherwise, $E_{ij} = 0$, and $e = (1, 1, \ldots, 1)^\top$.
4. Solve the generalized eigenvalue problem $K(M - B)K\,\alpha = \lambda\, K T K\,\alpha$, with $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_d$, and let the corresponding eigenvectors $\alpha_1, \alpha_2, \ldots, \alpha_d$ constitute the matrix $A = [\alpha_1, \alpha_2, \ldots, \alpha_d]$.
5. For any data point $x$ in $\mathbb{R}^D$, the embedded feature $y$ in $\mathbb{R}^d$ is given by $y = A^\top \big(k(x_1, x), k(x_2, x), \ldots, k(x_N, x)\big)^\top$.
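The five steps above can be sketched in NumPy/SciPy. This is a minimal illustration under stated assumptions, not the authors' code: the function names, the Gaussian-kernel choice, and the small regularization terms are ours; the per-point weight solve applies the standard LLE least-squares of Ref. 3 in feature space via the kernel trick.

```python
import numpy as np
from scipy.linalg import eigh, solve

def gaussian_kernel(X, Y, sigma=1.0):
    """K_ij = exp(-||x_i - y_j||^2 / (2 sigma^2)) (step 1)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def reconstruction_weights(K, labels, k):
    """Step 2: LLE-style weights over k same-class neighbors, via the kernel trick."""
    N = K.shape[0]
    W = np.zeros((N, N))
    diag = np.diag(K)
    # squared feature-space distances ||phi(x_i) - phi(x_j)||^2
    d2 = diag[:, None] + diag[None, :] - 2.0 * K
    for i in range(N):
        same = np.where((labels == labels[i]) & (np.arange(N) != i))[0]
        nbrs = same[np.argsort(d2[i, same])[:k]]
        # local Gram matrix G_jl = (phi_i - phi_j)^T (phi_i - phi_l)
        G = K[i, i] - K[i, nbrs][None, :] - K[i, nbrs][:, None] + K[np.ix_(nbrs, nbrs)]
        G = G + 1e-6 * np.trace(G) * np.eye(len(nbrs))  # regularize for stability
        w = solve(G, np.ones(len(nbrs)))
        W[i, nbrs] = w / w.sum()  # enforce the sum-to-one constraint
    return W

def kunde_fit(X, labels, d, k=3, sigma=1.0):
    labels = np.asarray(labels)
    N = X.shape[0]
    K = gaussian_kernel(X, X, sigma)                      # step 1
    W = reconstruction_weights(K, labels, k)              # step 2
    I = np.eye(N)
    M = (I - W).T @ (I - W)                               # step 3
    E = (labels[:, None] == labels[None, :]).astype(float)
    for c in np.unique(labels):
        mask = labels == c
        E[np.ix_(mask, mask)] /= mask.sum()               # E_ij = 1/n_t within class t
    B = E - np.ones((N, N)) / N                           # between-class term
    T = I - np.ones((N, N)) / N                           # total-scatter term
    # step 4: generalized eigenproblem K(M - B)K a = lambda * K T K a
    left = K @ (M - B) @ K
    right = K @ T @ K + 1e-6 * I                          # regularized to stay definite
    _, vecs = eigh(left, right)                           # eigenvalues in ascending order
    return vecs[:, :d]                                    # A: the d smallest eigenvectors

def kunde_transform(A, X_train, sigma, X_new):
    """Step 5: y = A^T [k(x_1, x), ..., k(x_N, x)]^T."""
    return gaussian_kernel(X_new, X_train, sigma) @ A

# smoke test on two well-separated classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (6, 3)), rng.normal(4.0, 0.5, (6, 3))])
labels = np.array([0] * 6 + [1] * 6)
A = kunde_fit(X, labels, d=2, k=3)
Y = kunde_transform(A, X, 1.0, X)
print(Y.shape)  # (12, 2)
```

Note that everything is expressed through $K$ alone; the mapping $\phi$ is never evaluated explicitly.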
In this section, we provide the theoretical analysis of the KUNDE algorithm. As mentioned earlier, the objective of KUNDE is to preserve the within-class neighboring geometry while maximizing the between-class scatter in the low-dimensional space.
First, we characterize the within-class geometry of each data point $\phi(x_i)$ in the feature space by linear coefficients that reconstruct the data point from its $k$ identical-label nearest neighbors. To preserve the within-class neighboring relations, the basic idea is that the same weights that reconstruct the point $\phi(x_i)$ in $F$ should also reconstruct its embedded counterpart $y_i = V^\top \phi(x_i)$ in $\mathbb{R}^d$. Therefore, we should minimize the following cost function:

$$ J_1(V) = \sum_{i=1}^{N} \Big\| V^\top \phi(x_i) - \sum_{j=1}^{N} W_{ij}\, V^\top \phi(x_j) \Big\|^2. \qquad (1) $$
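Assuming the standard NPE-style algebra, the cost in Eq. 1 collapses to a compact trace form; a sketch, with $\Phi = [\phi(x_1), \ldots, \phi(x_N)]$:

```latex
J_1(V) = \sum_{i=1}^{N} \Big\| V^\top\phi(x_i) - \sum_{j=1}^{N} W_{ij}\, V^\top\phi(x_j) \Big\|^2
       = \big\| V^\top \Phi (I - W)^\top \big\|_F^2
       = \operatorname{tr}\big( V^\top \Phi (I - W)^\top (I - W) \Phi^\top V \big)
       = \operatorname{tr}\big( V^\top \Phi M \Phi^\top V \big),
```

with $M = (I - W)^\top (I - W)$ as in step 3 of the algorithm.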
Second, since the purpose of KUNDE is to solve classification problems, we should make the embedded vectors from different classes far from each other. Here, we propose to maximize the between-class scatter:

$$ J_2(V) = \operatorname{tr}\big(V^\top S_b^\phi V\big), \qquad (2) $$

where $S_b^\phi$ is the between-class scatter matrix in $F$. According to Ref. 6, the total scatter matrix $S_t^\phi$, within-class scatter matrix $S_w^\phi$, and between-class scatter matrix $S_b^\phi$ can be represented as follows:

$$ S_t^\phi = \Phi T \Phi^\top, \quad S_w^\phi = \Phi (I - E) \Phi^\top, \quad S_b^\phi = \Phi B \Phi^\top, \qquad (3) $$

where $\Phi = [\phi(x_1), \ldots, \phi(x_N)]$. Thus, Eq. 2 can be rewritten as

$$ J_2(V) = \operatorname{tr}\big(V^\top \Phi B \Phi^\top V\big). \qquad (4) $$

Noting that Eq. 1 equals $\operatorname{tr}(V^\top \Phi M \Phi^\top V)$, and combining Eqs. 1 and 4, we should minimize the following objective function:

$$ J(V) = \operatorname{tr}\big(V^\top \Phi (M - B) \Phi^\top V\big). \qquad (5) $$
Now, we turn to the statistically uncorrelated constraint. Assuming that any two different components $y^{(m)}$ and $y^{(n)}$ of the extracted feature $y$ are uncorrelated, this means that

$$ v_m^\top S_t^\phi v_n = 0, \qquad m \neq n, \qquad (6) $$

where $v_m$ and $v_n$ are two different columns of the matrix $V$. Besides, each $v_m$ should be normalized. Without loss of generality, let $v_m$ satisfy

$$ v_m^\top S_t^\phi v_m = 1. \qquad (7) $$

Combining Eqs. 6 and 7, we get

$$ V^\top S_t^\phi V = I. \qquad (8) $$
As a result, KUNDE can be formulated as the following constrained minimization problem:

$$ \min_{V}\ \operatorname{tr}\big(V^\top \Phi (M - B) \Phi^\top V\big) \quad \text{s.t.}\quad V^\top S_t^\phi V = I. \qquad (9) $$

With $S_t^\phi = \Phi T \Phi^\top$, Eq. 9 is equivalent to

$$ \min_{V}\ \operatorname{tr}\big(V^\top \Phi (M - B) \Phi^\top V\big) \quad \text{s.t.}\quad V^\top \Phi T \Phi^\top V = I. \qquad (10) $$

Since each column of $V$ should lie in the span of all training samples in $F$, there exist coefficients $\alpha_{mi}$ such that $v_m = \sum_{i=1}^{N} \alpha_{mi}\,\phi(x_i) = \Phi \alpha_m$, where $\alpha_m = (\alpha_{m1}, \ldots, \alpha_{mN})^\top$, i.e., $V = \Phi A$ with $A = [\alpha_1, \ldots, \alpha_d]$. Therefore, Eq. 10 becomes

$$ \min_{A}\ \operatorname{tr}\big(A^\top K (M - B) K A\big) \quad \text{s.t.}\quad A^\top K T K A = I, \qquad (11) $$

where $K = \Phi^\top \Phi$ is the kernel matrix, which is defined in Sec. 2.
Last, the constrained minimization problem is reduced to a generalized eigenvalue problem, as follows:

$$ K (M - B) K\, \alpha = \lambda\, K T K\, \alpha. \qquad (12) $$

$A$ is determined by the eigenvectors corresponding to the first $d$ smallest eigenvalues of Eq. 12. Once $A$ is obtained, for any data point $x$ in the input space, the nonlinear feature is given as $y = A^\top \big(k(x_1, x), \ldots, k(x_N, x)\big)^\top$.
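As a quick numerical illustration of this last step (a toy symmetric pencil, not the radar data), `scipy.linalg.eigh` solves the symmetric-definite generalized problem and returns eigenvectors normalized exactly as the uncorrelated constraint of Eq. 11 requires:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
S1 = rng.normal(size=(5, 5))
P = S1 @ S1.T                       # stands in for K(M - B)K (symmetric)
S2 = rng.normal(size=(5, 5))
Q = S2 @ S2.T + 5.0 * np.eye(5)     # stands in for KTK (positive definite)

vals, vecs = eigh(P, Q)             # generalized eigenvalues, ascending
A = vecs[:, :2]                     # eigenvectors of the two smallest eigenvalues

# eigh returns Q-orthonormal eigenvectors, i.e. A^T Q A = I,
# so the constraint A^T K T K A = I is satisfied for free.
print(np.allclose(A.T @ Q @ A, np.eye(2)))  # True
```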
In this section, experiments are performed on radar target recognition with measured and simulated range profiles, respectively. The performance of the KUNDE algorithm is compared with those of kernel NPE (KNPE),5 KPCA,7 and KFDA.8 The Gaussian kernel $k(x_i, x_j) = \exp\!\big(-\|x_i - x_j\|^2 / (2\sigma^2)\big)$ is adopted, and the parameter $\sigma$ is simply set to 1. Since we focus only on feature extraction, the nearest-neighbor classifier with the Euclidean metric is employed for classification, for the sake of simplicity.
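A minimal sketch of these two fixed choices, the Gaussian kernel with $\sigma = 1$ and Euclidean nearest-neighbor classification (function names are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)); sigma = 1 as in the text
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nn_classify(train_feats, train_labels, test_feats):
    # nearest-neighbor rule under the Euclidean metric
    d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    return train_labels[np.argmin(d2, axis=1)]

# toy check: two well-separated clusters
train = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
test = np.array([[0.05, 0.1], [4.9, 5.1]])
print(nn_classify(train, labels, test))  # [0 1]
```

In the experiments below, `train_feats` and `test_feats` would be the embedded features produced by each method under comparison.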
Experiments on Measured Data
The measured data are from three flying airplanes, including An-26, Yark-42, and Cessna Citation S/II. Each airplane has 260 range profiles. In the experiments, each profile is preprocessed by energy normalization. Then, for each airplane, one third of all profiles are used for training and the rest for testing. Figure 1 shows the plots of recognition rates obtained by each method versus the reduced dimensionality. Note that the feature dimensionality of KFDA is at most $c - 1$, where $c$ is the number of classes.
From Fig. 1, it can be seen that KUNDE achieves the best recognition results at each feature dimensionality and KFDA performs second best, while KNPE and KPCA are relatively poor since neither of them considers discriminant information. This indicates that, by incorporating the within-class neighboring information and the between-class scatter information, KUNDE has more discriminative power than the other three methods, and that the statistically uncorrelated features are very helpful for improving recognition performance.
Experiments on Simulated Data
The simulated profiles are from six airplanes: Mirage, IDF, F16, J8II, SU27, and E2C, and each airplane has 60 profiles. Similarly, each profile is normalized to unit energy, and 20 profiles per target are used for training and the rest for testing. The recognition results are shown in Fig. 2. The results suggest that KUNDE again outperforms all the other methods. In addition, it can be seen that as the feature dimensionality increases, KUNDE maintains a higher recognition rate, while KPCA and KNPE can also give satisfactory results.
A novel algorithm called kernel uncorrelated neighborhood discriminative embedding (KUNDE) is presented for pattern recognition. KUNDE has two prominent characteristics. First, it integrates the within-class neighboring information and the between-class scatter information, which enables its powerful discriminative ability. Second, it extracts statistically uncorrelated features with minimum redundancy by introducing an uncorrelated constraint, which is helpful for improving recognition performance. Experimental results on radar target recognition show that KUNDE achieves better recognition performance than the other methods considered.
The authors would like to thank the anonymous reviewer and editors for their helpful comments and suggestions. This work is partially supported by the Key Project of the Chinese Ministry of Education (Grant No. 105150) and the foundation of the ATR Key Lab (Grant No. 51483010305DZ0207).