Nonlinear graph-based dimensionality reduction (DR) algorithms have been shown to be very effective at yielding low-dimensional representations of hyperspectral image data. However, the steps of graph construction and eigenvector computation often suffer from prohibitive computational and memory requirements. In this paper, we develop a semi-supervised deep auto-encoder network (SSDAN) that is capable of generating mappings that approximate the embeddings computed by the nonlinear DR methods. The SSDAN is trained with only a small subset of the original data and enables an expert user to provide constraints that can bias data points from the same class towards being mapped closely together. Once the SSDAN is trained on a small subset of the data, it can be used to map the rest of the data to the lower dimensional space, without requiring complicated out-of-sample extension procedures that are often necessary in nonlinear DR methods. Experiments on publicly available hyperspectral imagery (Indian Pines and Salinas) demonstrate that SSDANs compute low-dimensional embeddings that yield good results when input to pixel-wise classification algorithms.
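The core idea can be illustrated with a toy sketch: compute a reference nonlinear embedding (here, Laplacian-Eigenmaps style) on a small subset, train a network to regress the embedding coordinates from the spectra, and then apply the trained network to all pixels, sidestepping any out-of-sample extension. This is a minimal stand-in, not the SSDAN architecture from the paper: the data are synthetic two-class "spectra", and the network is a single hidden layer trained with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pixel" data: two noisy spectral classes (stand-ins for HSI pixels).
X = np.vstack([rng.normal(0.0, 0.1, (50, 5)),
               rng.normal(1.0, 0.1, (50, 5))])

# --- Reference nonlinear DR on a small subset (Laplacian Eigenmaps style) ---
subset = rng.choice(len(X), 40, replace=False)
Xs = X[subset]
d2 = ((Xs[:, None] - Xs[None, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)                      # Gaussian affinity graph
L = np.diag(W.sum(1)) - W                  # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)
Y = vecs[:, 1:3]                           # 2-D embedding (skip trivial eigvec)

# --- Train a small network to regress embedding coords from spectra ---
# (A simplified stand-in for the SSDAN: one hidden layer, squared error.)
H, lr = 16, 0.1
W1 = rng.normal(0, 0.5, (5, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)
for _ in range(2000):
    A = np.tanh(Xs @ W1 + b1)
    P = A @ W2 + b2
    G = 2 * (P - Y) / len(Xs)              # gradient of MSE w.r.t. P
    gW2 = A.T @ G; gb2 = G.sum(0)
    GA = (G @ W2.T) * (1 - A ** 2)         # backprop through tanh
    gW1 = Xs.T @ GA; gb1 = GA.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Once trained, the network maps *all* pixels in one forward pass,
# with no separate out-of-sample extension procedure.
Z = np.tanh(X @ W1 + b1) @ W2 + b2
```

The same pattern extends to the paper's setting by replacing the toy regressor with a deeper auto-encoder and adding the expert-provided class constraints to the training objective.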
Proc. SPIE. 9840, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXII
KEYWORDS: Target detection, Hyperspectral imaging, Principal component analysis, Detection and tracking algorithms, Image segmentation, Silver, Control systems, Algorithm development, Hyperspectral target detection, RGB color model
The Biased Normalized Cuts (BNC) algorithm is a useful technique for detecting targets or objects in RGB imagery. In this paper, we propose modifying BNC for the purpose of target detection in hyperspectral imagery. Unlike other target detection algorithms, which typically encode target information prior to dimensionality reduction, our proposed algorithm encodes target information after dimensionality reduction, enabling a user to detect different targets interactively. To assess the proposed BNC algorithm, we utilize hyperspectral imagery (HSI) from the SHARE 2012 data campaign, and we explore the relationship between the number and position of expert-provided target labels and the precision/recall of the remaining targets in the scene.
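The interactive step can be sketched as follows: the graph eigenvectors are computed once, and each new set of user-marked seed pixels only requires re-weighting those eigenvectors, in the style of the biased-cut formulation of Maji et al. This toy example uses a synthetic two-cluster graph rather than real HSI, and a simplified seed vector; it is an illustration of the eigenvector re-weighting, not the paper's full pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy affinity graph: two tight clusters of "pixels" (target vs. background).
X = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(1, 0.1, (20, 3))])
d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
np.fill_diagonal(W, 0.0)

D = W.sum(1)
Dm = np.diag(1.0 / np.sqrt(D))
Lsym = np.eye(len(X)) - Dm @ W @ Dm        # symmetric normalized Laplacian
vals, V = np.linalg.eigh(Lsym)
U = Dm @ V                                 # generalized eigvecs: L u = lam D u

# Seed vector s: a few user-marked "target" pixels, made D-orthogonal
# to the constant vector as the biased-cut formulation requires.
s = np.zeros(len(X)); s[:3] = 1.0
s -= (s @ D) / D.sum()

# Biased cut: combine eigenvectors, each weighted by its correlation with
# the seeds and by 1/(lambda_i - gamma), with gamma < lambda_2.
gamma = -0.1 * vals[1]
x = np.zeros(len(X))
for i in range(1, 10):                     # skip the trivial eigenvector
    w = (U[:, i] * D) @ s / (vals[i] - gamma)
    x += w * U[:, i]

# Pixels with the largest x fall in the cluster containing the seeds;
# a new seed set reuses vals/U, so re-detection is cheap and interactive.
top = np.argsort(-x)[:20]
```

Because the eigendecomposition is seed-independent, changing the marked targets costs only the weighted sum, which is what makes the interactive mode practical.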
Nonlinear graph-based dimensionality reduction algorithms such as Laplacian Eigenmaps (LE) and Schroedinger Eigenmaps (SE) have been shown to be very effective at yielding low-dimensional representations of hyperspectral image data. However, the steps of graph construction and eigenvector computation required by LE and SE can be prohibitively costly as the number of image pixels grows. In this paper, we propose pre-clustering the hyperspectral image into Simple Linear Iterative Clustering (SLIC) superpixels and then performing LE- or SE-based dimensionality reduction with the superpixels as input. We then investigate how different choices of superpixel size and regularity trade off computational efficiency against the accuracy of subsequent classification using the low-dimensional representations.
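The efficiency gain comes from running the eigendecomposition on superpixel mean spectra instead of individual pixels. The sketch below uses a synthetic cube and regular block averaging as a stand-in for SLIC (real SLIC adapts superpixel boundaries to image content), then applies Laplacian Eigenmaps to the 16 superpixels rather than all 400 pixels.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 20x20 "hyperspectral" cube with 6 bands: left half vs. right half
# have different mean spectra (a stand-in for two material classes).
cube = np.empty((20, 20, 6))
cube[:, :10] = rng.normal(0.0, 0.05, (20, 10, 6))
cube[:, 10:] = rng.normal(1.0, 0.05, (20, 10, 6))

# --- "Superpixel" step: average spectra over regular 5x5 blocks. ---
# (A simplification of SLIC; the graph below sees 16 nodes, not 400.)
blocks = cube.reshape(4, 5, 4, 5, 6).mean(axis=(1, 3)).reshape(16, 6)

# --- Laplacian Eigenmaps on the superpixel mean spectra. ---
d2 = ((blocks[:, None] - blocks[None, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)                      # Gaussian affinity between blocks
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W                  # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)
Y = vecs[:, 1:3]                           # 2-D embedding of the superpixels

# Each original pixel then inherits the embedding of its superpixel,
# so the per-pixel cost of the eigendecomposition is amortized away.
```

Larger superpixels shrink the graph further (and the eigenproblem with it), at the price of mixing spectra from different materials inside one node, which is the trade-off the paper quantifies.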