Hyperspectral imagery in the thermal infrared domain provides information, such as temperature and emissivity, about different kinds of materials. This information can be used for a wide range of applications, such as mineral mapping, bathymetry, and indoor and outdoor detection of chemicals. However, because of the limited spatial resolution and the characteristics of thermal infrared sensors, the data contain many mixed pixels, whose temperature, emissivity, and component abundances are hard to estimate. In this paper, a new method to estimate these parameters in pure and mixed pixels is proposed based on linear and nonlinear optimization. First, the standard temperature and emissivity separation (TES) algorithm is applied to pure pixels of different materials, selected by supervised or unsupervised methods, to obtain initial temperatures. Second, the emissivity in each band is retrieved by minimizing the reconstruction error, from which a more accurate temperature is then optimized. The emissivity in one band is trained with samples from the same band across different pixels, while the temperature is trained with different bands of one pixel. Lastly, the abundances and temperatures of the components in mixed pixels are estimated from a linear mixture model of the bottom-of-atmosphere radiance, posed as a fully constrained linear optimization problem and a nonlinear optimization problem. The method's sensitivity to noise and the influence of different parameters on the estimation errors are also analyzed.
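The reconstruction-error minimization described above can be illustrated with a minimal sketch (the band wavelengths, emissivities, and grid-search fitter below are invented for the example, not the paper's actual optimizer): given per-band emissivities, the pixel temperature is chosen so that the Planck-law radiance best reconstructs the measured radiance across all bands.

```python
import numpy as np

# Planck radiation constants for wavelength in micrometres
C1 = 1.191042e8   # W um^4 m^-2 sr^-1
C2 = 1.4387752e4  # um K

def planck(wl_um, T):
    """Blackbody spectral radiance at wavelength wl_um (micrometres), temperature T (K)."""
    return C1 / (wl_um**5 * (np.exp(C2 / (wl_um * T)) - 1.0))

def fit_temperature(wl_um, radiance, emissivity,
                    T_grid=np.arange(250.0, 350.0, 0.05)):
    """Pick the temperature whose reconstructed radiance eps * B(wl, T)
    minimises the squared reconstruction error summed over all bands."""
    errs = [np.sum((emissivity * planck(wl_um, T) - radiance) ** 2) for T in T_grid]
    return T_grid[int(np.argmin(errs))]

# Synthetic pure pixel: five TIR bands, true temperature 300 K
wl = np.array([8.3, 8.6, 9.1, 10.6, 11.3])
eps = np.array([0.96, 0.95, 0.97, 0.98, 0.97])
L = eps * planck(wl, 300.0)
print(fit_temperature(wl, L, eps))  # ≈ 300.0
```

In the paper's scheme this per-pixel temperature fit would alternate with a per-band emissivity fit across pixels; the sketch shows only the temperature half of that alternation.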
Classification of real-world remote sensing images is a challenging task because of the complex spectral–spatial information carried in high-dimensional feature vectors. Most traditional classification approaches treat the data directly as vectors, which usually leads to a small-sample-size problem and abundant redundant information, inevitably degrading classifier performance. To overcome these drawbacks, we exploit the benefits of local scatter and tensor representation and propose a framework for hyperspectral image (HSI) classification that combines local tensor discriminant analysis (LTDA) with spectral–spatial feature extraction. First, we use a well-known spectral–spatial feature extraction approach to extract abundant spectral–spatial features as feature tensors. Then, based on class-label information, LTDA is used to eliminate redundant information and extract discriminant feature tensors for the subsequent classification. Two real HSIs are used as experimental datasets. The results indicate that the proposed method performs well even with a small number of training samples.
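A toy illustration of forming spectral–spatial feature tensors (the cube dimensions and patch size are invented for the example; the abstract does not specify the actual feature extractor): each pixel is represented not as a bare spectral vector but as the tensor of its spatial neighborhood across all bands, which LTDA would then reduce to discriminant tensors.

```python
import numpy as np

def extract_patch_tensors(cube, half=2):
    """Represent every pixel of an H x W x B hyperspectral cube by the
    (2*half+1, 2*half+1, B) tensor of its spatial neighborhood, using
    reflection padding at the image borders."""
    H, W, B = cube.shape
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="reflect")
    w = 2 * half + 1
    tensors = np.empty((H, W, w, w, B))
    for i in range(H):
        for j in range(W):
            tensors[i, j] = padded[i:i + w, j:j + w, :]
    return tensors

# Toy 10 x 12 cube with 6 spectral bands
cube = np.random.rand(10, 12, 6)
feats = extract_patch_tensors(cube)
print(feats.shape)  # (10, 12, 5, 5, 6)
```

Keeping the patch as a tensor, rather than flattening it into a long vector, is exactly what lets tensor methods such as LTDA avoid the small-sample-size problem the abstract mentions.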
KEYWORDS: Mathematical modeling, Hyperspectral imaging, Minerals, Detection and tracking algorithms, Data modeling, Vegetation, Signal processing, Information technology, Statistical modeling, RGB color model
Current endmember extraction algorithms generally require the number of endmembers to be set manually. In practical applications, however, the number of endmembers is unknown, so this paper proposes an automated, iterative endmember extraction algorithm to solve the problem. First, because the spectral information of an endmember is similar to that of its neighbors whereas noise is independent across pixels, we analyze the correlation between pixels and the endmember within a concentric sliding window centered at each test endmember, in order to suppress the influence of noise. Then, exploiting the independence among endmembers, a candidate set of the endmembers extracted so far is maintained. Each time, we compute the correlation between the new endmember and the candidates in the set; if the largest correlation is small, the new one is added to the set. If the new endmember cannot join the set directly, it may instead replace an existing member so as to increase the separation among the endmembers. Finally, the iteration stops when the set remains unchanged for several iterations. Experiments show that the improved algorithm achieves endmember extraction accuracy close to that of the traditional algorithm while weakening the influence of noise on the extraction.
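The candidate-set update rule can be sketched as follows (the correlation measure and threshold are assumptions for illustration; the replacement branch, which the paper also uses, is only noted in a comment): a new endmember joins the set only if its largest correlation with the existing candidates stays below a threshold.

```python
import numpy as np

def update_candidate_set(candidates, new_em, threshold=0.95):
    """Add new_em to the candidate endmember set when its largest absolute
    Pearson correlation with the existing candidates is below threshold."""
    if not candidates:
        return candidates + [new_em]
    corrs = [abs(np.corrcoef(new_em, c)[0, 1]) for c in candidates]
    if max(corrs) < threshold:
        return candidates + [new_em]
    return candidates  # too similar: rejected (the paper may instead
                       # replace the closest member to spread the set)

# Two dissimilar toy spectra, then a scaled near-duplicate of the first
spectra = [np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
           np.array([5.0, 1.0, 4.0, 2.0, 3.0])]
s = []
for em in spectra + [spectra[0] * 1.01]:
    s = update_candidate_set(s, em)
print(len(s))  # 2: the near-duplicate is rejected
```

Because Pearson correlation is invariant to scaling, the duplicate correlates perfectly with its original and is kept out of the set, which is the behavior the iterative algorithm relies on to stop growing.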
In hyperspectral image processing, anomaly detection is a valuable way of searching for targets whose spectral characteristics are unknown, and estimating the background signal is the key step. Given the high dimensionality and complexity of hyperspectral images, dimensionality reduction and background suppression are necessary. In addition, the complementarity of different anomaly detection algorithms can be exploited to improve detection effectiveness. In this paper, we propose a novel anomaly detection method based on optimized K-means clustering and decision-level fusion. In the proposed method, pixels with similar features are first clustered using an optimized K-means method. Second, dimensionality reduction is performed with principal component analysis to reduce the amount of computation. Then, to increase detection accuracy and decrease the false-alarm rate, both the Reed–Xiaoli (RX) and kernel RX algorithms are applied to the processed image. Lastly, decision-level fusion is performed on the detection results. A simulated hyperspectral image and a real one are used to evaluate the performance of the proposed method. Visual analysis and quantitative analysis of receiver operating characteristic (ROC) curves show that our algorithm outperforms other classic and state-of-the-art approaches.
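The RX stage of the pipeline can be sketched with the classic global formulation (the synthetic cube and implanted anomaly are invented for the example; the paper applies RX per cluster after K-means and PCA): each pixel is scored by its Mahalanobis distance from the scene mean under the background covariance.

```python
import numpy as np

def rx_detector(cube):
    """Global Reed–Xiaoli detector: Mahalanobis distance of each pixel
    spectrum from the scene mean under the background covariance."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov + 1e-6 * np.eye(B))  # small ridge for stability
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, inv, d)  # per-pixel quadratic form
    return scores.reshape(H, W)

# Synthetic 20 x 20 scene, 5 bands of unit-variance noise,
# with one bright anomaly implanted at (10, 10)
rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, (20, 20, 5))
cube[10, 10] += 8.0
scores = rx_detector(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # (10, 10)
```

The kernel RX variant used in the paper replaces this quadratic form with one computed in a kernel-induced feature space; the decision-level fusion then combines the two score maps.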