Sparse representation classification (SRC) has been widely applied to target detection in hyperspectral images (HSI). However, owing to the curse of dimensionality and the redundant information in HSI, SRC methods fail to achieve high classification performance when operating directly on the large number of spectral bands. Selecting a subset of predictive features in a high-dimensional space is a challenging problem for hyperspectral image classification. In this paper, we propose a novel discriminant feature selection (DFS) method for hyperspectral image classification in the eigenspace. First, the proposed DFS method selects a subset of discriminant features by solving a combined spectral and spatial hypergraph Laplacian quadratic problem, which preserves the intrinsic structure of the unlabeled pixels as well as both the inter-class and intra-class constraints defined on the labeled pixels in the projected low-dimensional eigenspace. Then, to further improve the classification performance of SRC, we exploit the well-known simultaneous orthogonal matching pursuit (SOMP) algorithm to obtain the sparse representation of the pixels; SOMP incorporates interpixel correlation into the classical OMP by assuming that neighboring pixels usually consist of similar materials. Finally, the recovered sparse reconstruction errors are used directly to determine the labels of the pixels. The selected discriminant features can be used in conjunction with established SRC methods and significantly improve their performance for HSI classification. Experiments conducted on hyperspectral data sets under different experimental settings show that the proposed method increases classification accuracy and outperforms state-of-the-art feature selection and classification methods.
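The SOMP step described above can be sketched in a few lines. The snippet below is a minimal illustration on synthetic data, not the authors' implementation: `somp` greedily selects a common set of dictionary atoms for all pixels in a neighborhood, and `classify_by_residual` (a hypothetical helper name) labels the neighborhood by the class whose sub-dictionary yields the smallest reconstruction error.

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP sketch: all columns of Y (pixels of one
    neighborhood) share the same k atoms of dictionary D."""
    residual = Y.copy()
    support, coeffs = [], np.zeros((0, Y.shape[1]))
    for _ in range(k):
        # Pick the atom most correlated with the joint residual
        # (row-wise L2 norm across all pixels in the neighborhood).
        corr = np.linalg.norm(D.T @ residual, axis=1)
        corr[support] = 0.0  # do not reselect an atom
        support.append(int(np.argmax(corr)))
        # Joint least-squares fit on the current support.
        coeffs, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ coeffs
    return support, coeffs

def classify_by_residual(D, labels, Y, k):
    """Label a neighborhood by the class whose selected atoms
    reconstruct it with the smallest error (illustrative helper)."""
    support, coeffs = somp(D, Y, k)
    errs = {}
    for c in set(labels):
        rows = [i for i, a in enumerate(support) if labels[a] == c]
        approx = (D[:, [support[i] for i in rows]] @ coeffs[rows]
                  if rows else np.zeros_like(Y))
        errs[c] = np.linalg.norm(Y - approx)
    return min(errs, key=errs.get)
```

In a full SRC pipeline, `Y` would hold the spectra of a pixel and its spatial neighbors, and `D` the labeled training spectra.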
Sparse representation (SR) has received an increasing amount of interest in recent years. It aims to find the sparsest representation of each data vector, one that captures high-level semantics, among the linear combinations of the bases in a given dictionary. To further improve classification performance, joint SR, which incorporates the interpixel correlation of neighborhoods, has been proposed for image pixel classification. However, joint SR incurs a high computational cost. To improve the performance and computational efficiency of SR and joint SR, we propose a seeded Laplacian sparse representation (SeedLSR) framework for hyperspectral image classification, in which a hypergraph Laplacian explicitly takes into account the local manifold structure of the hyperspectral pixels in a spatial-type weighted graph. Given the training data in a dictionary, the SeedLSR algorithm first finds the sparse representation of the hyperspectral pixels, which is used to define the spectral-type affinity matrix of an undirected graph. Then, using the training data as user-defined seeds, the final classification is obtained by solving a combined spectral and spatial hypergraph Laplacian quadratic problem. To assess the effectiveness of the proposed SeedLSR method, experiments were performed on scene data under daylight illumination. Compared with the SR algorithm, the classification results of SeedLSR vary smoothly along the geodesics of the data manifold.
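The seeded Laplacian quadratic step can be sketched for a single scalar label function. The snippet below is an illustrative simplification, not the paper's method: it uses a plain graph Laplacian with hard seed constraints (harmonic label propagation), omits the combined spectral and spatial hypergraph construction, and the name `seeded_laplacian` is ours.

```python
import numpy as np

def seeded_laplacian(W, seed_idx, seed_vals):
    """Minimize f^T L f subject to f being fixed on the seeds,
    where L = D - W is the graph Laplacian of affinity matrix W.
    Equivalent to harmonic label propagation from the seeds."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    s = np.asarray(seed_idx)
    u = np.array([i for i in range(n) if i not in set(seed_idx)])
    f = np.zeros(n)
    f[s] = seed_vals
    # Stationarity on the unlabeled nodes: L_uu f_u = -L_us f_s.
    f[u] = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, s)] @ f[s])
    return f
```

On a 5-node path graph with seeds 0 and 1 at the two endpoints, the solution interpolates linearly along the chain, which is the smoothness-along-the-manifold behavior the abstract refers to.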
Sparse representation (SR) is an effective classification method. Given a set of data vectors, SR aims at finding the sparsest representation of each data vector among the linear combinations of the bases in a given dictionary. To further improve classification performance, joint SR, which incorporates the interpixel correlation of neighborhoods, has been proposed for image pixel classification. However, SR and joint SR demand a significant amount of computational time and memory, especially when classifying a large number of pixels. To address this issue, we propose a superpixel sparse representation (SSR) algorithm for target detection in hyperspectral imagery. We first cluster hyperspectral pixels into nearly uniform superpixels using a proposed patch-based SLIC approach that exploits both their spectral and spatial information. The sparse representations of these superpixels are then obtained by simultaneously decomposing the superpixels over a given dictionary consisting of both target and background pixels. The class of a hyperspectral pixel is determined by a competition between its projections on the target and background subdictionaries. One key advantage of the proposed superpixel representation algorithm over pixelwise and joint sparse representation algorithms is that it reduces computational cost while maintaining competitive classification performance. We demonstrate the effectiveness of the proposed SSR algorithm through experiments on target detection in indoor and outdoor scene data under daylight illumination, as well as in remote sensing data. Experimental results show that SSR generally outperforms state-of-the-art algorithms both quantitatively and qualitatively.
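The competition between the target and background subdictionaries can be sketched per superpixel. The snippet below is a hedged illustration, not the SSR algorithm itself: it replaces the simultaneous sparse decomposition with an ordinary least-squares fit on each subdictionary and scores a superpixel by the difference of reconstruction residuals; `superpixel_detect` is a hypothetical name, and the SLIC clustering step is omitted.

```python
import numpy as np

def superpixel_detect(Y, D_target, D_background):
    """Score one superpixel (columns of Y are its pixel spectra) by
    competing reconstructions from the target and background
    subdictionaries. A positive score means the target subdictionary
    reconstructs the superpixel better, i.e. 'target detected'."""
    def resid(D):
        # Least-squares stand-in for the joint sparse decomposition.
        C, *_ = np.linalg.lstsq(D, Y, rcond=None)
        return np.linalg.norm(Y - D @ C)
    return resid(D_background) - resid(D_target)
```

Because all pixels of a superpixel are decomposed together, one such score decides the label of every pixel in the superpixel, which is where the computational saving over pixelwise SR comes from.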
Principal component analysis (PCA) transforms a set of possibly correlated variables into uncorrelated variables and is widely used for dimensionality reduction and feature extraction. In some applications of dimensionality reduction, the objective is to use a small number of principal components to represent most of the variation in the data. The main purpose of feature extraction, on the other hand, is to facilitate subsequent pattern recognition and machine learning tasks, such as classification. Selecting principal components for classification therefore aims at more than dimensionality reduction: the capability of distinguishing different classes is another major concern, and components with larger eigenvalues do not necessarily have better distinguishing capabilities. In this paper, we investigate a strategy of selecting principal components based on the Fisher discriminant ratio. The ratio of between-class variance to within-class variance is calculated for each component, and the components with large Fisher discriminant ratios are selected so that adequate class separability is obtained. The number of selected components is determined by the classification accuracy on validation data; to alleviate the overfitting that is common when few training data are available, we use a cross-validation procedure. The selection method is evaluated by face recognition experiments.
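The Fisher-ratio-based selection can be sketched as follows. This is a minimal illustration under our own naming (PCA via SVD, per-component ratio of between-class to within-class variance); the cross-validation loop that chooses the number of components is omitted.

```python
import numpy as np

def fisher_ratio_per_component(Z, y):
    """Between-class over within-class variance for each column of the
    PCA score matrix Z, given class labels y."""
    classes = np.unique(y)
    ratios = []
    for j in range(Z.shape[1]):
        z, mu = Z[:, j], Z[:, j].mean()
        between = sum((z[y == c].mean() - mu) ** 2 * (y == c).sum()
                      for c in classes)
        within = sum(((z[y == c] - z[y == c].mean()) ** 2).sum()
                     for c in classes)
        ratios.append(between / within)
    return np.array(ratios)

def select_components(X, y, m):
    """PCA via SVD, then keep the m components with the largest Fisher
    discriminant ratios (not the largest eigenvalues)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt.T  # scores on all principal components
    order = np.argsort(fisher_ratio_per_component(Z, y))[::-1]
    return order[:m], Z[:, order[:m]]
```

On data where the class-separating direction has small variance, this selection keeps a low-eigenvalue component that plain eigenvalue-ranked PCA would discard, which is exactly the point the abstract makes.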