Band selection improves performance in hyperspectral applications such as target detection, spectral unmixing, and classification. Signal-to-noise ratio estimation (SNRe) is a method that can be adapted to different specific applications. SNRe is usually employed in a preprocessing stage to remove low-SNR bands from the original hyperspectral data, after which other band selection methods are applied to the remaining high-SNR bands to make subsequent operations more efficient. In this paper, we take advantage of SNRe to select the bands that contain the largest amount of information. The wavelet transform is first used to separate signal from noise and obtain the noise standard deviation of each band, and then the SNRs of all bands are computed in turn. Because the time-consuming operations in the SNRe algorithm cannot satisfy real-time applications but are well suited to parallel high-performance computing (HPC), we design a new massively parallel algorithm to accelerate SNR estimation on graphics processing units (GPUs) using the Compute Unified Device Architecture (CUDA) language. In addition, our GPU-based SNRe implementation fully exploits the parallelism available in the original C code and has been carefully debugged to verify its correctness and efficiency. Experiments are conducted on two sets of real hyperspectral images and considerable acceleration is obtained.
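The per-band SNR estimation described above can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a one-level Haar detail transform and the median-absolute-deviation (MAD) rule as a stand-in for the paper's wavelet-based signal-noise separation, and all function names (`estimate_band_snr`, `rank_bands`) are hypothetical.

```python
import numpy as np

def estimate_band_snr(band):
    """Estimate the SNR (in dB) of one band (2-D array).

    The MAD of the finest-scale Haar detail coefficients gives a
    robust estimate of the noise standard deviation; the signal power
    is then the total power minus the estimated noise power.
    """
    x = band.ravel().astype(float)
    if x.size % 2:
        x = x[:-1]
    # One-level Haar detail coefficients: (x[2i] - x[2i+1]) / sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    sigma = np.median(np.abs(d)) / 0.6745      # MAD -> noise std (Gaussian rule)
    signal_power = max(np.mean(x ** 2) - sigma ** 2, 1e-12)
    return 10.0 * np.log10(signal_power / sigma ** 2)

def rank_bands(cube):
    """Rank the bands of a (bands, rows, cols) cube by estimated SNR,
    highest first. Returns (ordering, per-band SNRs)."""
    snrs = np.array([estimate_band_snr(b) for b in cube])
    return np.argsort(snrs)[::-1], snrs
```

On a GPU, the per-band loop in `rank_bands` is the natural unit of parallelism: each band's detail transform and reduction can be assigned to independent thread blocks.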
Target detection is one of the most important applications in hyperspectral remote sensing image analysis. Sparse representation methods have proven effective in hyperspectral target detection. In these methods, a sparse representation of a pixel in hyperspectral imagery is a linear combination of a few data vectors from a dictionary. A training dictionary consisting of both target and background samples in the same feature space is first constructed, and test pixels are sparsely represented by decomposition over this dictionary. Although sparse representation is considered to preserve the main information of most pixels, inevitable indeterminacy may lead to different representations of the same or similar pixels. In this paper, a manifold-regularized sparsity model is proposed to address this problem. A graph regularization term is incorporated into the sparsity model under the manifold assumption that similar pixels should have similar sparse representations. A modified simultaneous version of the subspace pursuit algorithm (SSP) is then applied to recover the sparse vectors, which contain coefficients corresponding to both the target sub-dictionary and the background sub-dictionary. Once the sparse vectors are obtained, the residual between the original test samples and the estimate recovered from the target sub-dictionary, as well as the residual between the original test samples and the estimate recovered from the background sub-dictionary, are computed to determine each test pixel's class. The proposed algorithm is applied to a real hyperspectral image to detect targets of interest. Experimental results show that the proposed model achieves more accurate target detection than conventional sparse models.
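The residual-based decision rule above can be sketched compactly. This is an illustrative toy, not the paper's manifold-regularized SSP: it recovers the sparse vector with plain orthogonal matching pursuit over the concatenated dictionary (no graph regularization, no simultaneous recovery), and the names `omp` and `detect` are hypothetical.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: select k atoms of D that best
    explain x and return the dense coefficient vector (zeros elsewhere)."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[support] = coef
    return alpha

def detect(x, A_t, A_b, k=3):
    """Score pixel x by comparing reconstruction residuals from the
    target sub-dictionary A_t and the background sub-dictionary A_b.
    A positive score suggests a target pixel."""
    alpha = omp(np.hstack([A_t, A_b]), x, k)
    a_t, a_b = alpha[:A_t.shape[1]], alpha[A_t.shape[1]:]
    r_t = np.linalg.norm(x - A_t @ a_t)   # residual w.r.t. target atoms
    r_b = np.linalg.norm(x - A_b @ a_b)   # residual w.r.t. background atoms
    return r_b - r_t
```

The pixel is labeled a target when the background residual exceeds the target residual by a chosen threshold, mirroring the two-residual comparison the abstract describes.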