Hyperspectral imagery denoising is a classical problem in hyperspectral remote sensing applications. In our previous work, we combined principal component analysis with wavelet shrinkage and obtained good denoising results. Here, we combine the minimum noise fraction (MNF) transform with video block matching and 3D filtering (VBM3D), a powerful video denoising method. After the MNF transform, we automatically select k0, the index of the spectral band image that serves as the denoising threshold. We denoise the spectral band images of the MNF-transformed data from band k0 to the last band and leave the first k0 − 1 band images untouched. Finally, we perform an inverse MNF transform to obtain the denoised data cube. We compare our MNF + VBM3D method with denoising methods such as bivariate wavelet shrinkage (BivShrink), non-local means, SURELET, and block matching and 3D filtering (BM3D). Experimental results demonstrate that MNF + VBM3D achieves the best denoising results in nearly all cases for three testing data cubes at different noise levels.
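The flow of the band-selective denoising step can be sketched as follows. This is only an illustrative outline, not the authors' implementation: PCA stands in for the MNF transform and a simple Gaussian filter stands in for VBM3D, so that only the overall structure (forward transform, denoising of bands k0 onward, inverse transform) is shown.

    # Illustrative sketch only: PCA replaces MNF and a Gaussian filter replaces
    # VBM3D; the actual method uses the MNF transform and the VBM3D video denoiser.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def denoise_cube(cube, k0, sigma=1.0):
        """cube: (rows, cols, bands) array; k0: first band (1-based) to denoise."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)        # PCA as an MNF stand-in
        comp = (Xc @ Vt.T).reshape(rows, cols, bands)            # transformed band images
        for b in range(k0 - 1, bands):                           # leave bands 1..k0-1 untouched
            comp[:, :, b] = gaussian_filter(comp[:, :, b], sigma)  # VBM3D stand-in
        Xd = comp.reshape(-1, bands) @ Vt + mean                 # inverse transform
        return Xd.reshape(rows, cols, bands)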
Hyperspectral image (HSI) classification has many applications in diverse research fields. We propose a method for HSI classification using principal component analysis (PCA), 2D spatial convolution, and a support vector machine (SVM). Our method exploits correlation in both the spatial and spectral domains of an HSI data cube simultaneously. We use PCA to reduce the dimensionality of an HSI data cube. We then apply spatial convolution to the dimension-reduced data cube once, and then apply it a second time to the convolved data cube. As a result, two convolved PCA output data cubes are generated in a multiresolution manner. We feed the two convolved data cubes to the SVM to classify each pixel into one of the known classes. Experiments on three widely used hyperspectral data cubes (i.e., Indian Pines, Pavia University, and Salinas) demonstrate that our method improves the classification accuracy significantly when compared to a few existing methods. Our method is also relatively fast in terms of central processing unit (CPU) time.
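A minimal sketch of this pipeline is given below, assuming an averaging kernel, 30 principal components, and a default RBF SVM; these are illustrative choices, not the settings used in the paper.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def classify_cube(cube, labels, n_components=30, win=5):
        """cube: (rows, cols, bands); labels: (rows, cols), 0 = unlabeled."""
        rows, cols, bands = cube.shape
        # 1) PCA to reduce the spectral dimension
        pcs = PCA(n_components=n_components).fit_transform(cube.reshape(-1, bands))
        pcs = pcs.reshape(rows, cols, n_components)
        # 2) Spatial convolution applied once, then again to the convolved cube
        conv1 = np.stack([uniform_filter(pcs[:, :, c], size=win)
                          for c in range(n_components)], axis=-1)
        conv2 = np.stack([uniform_filter(conv1[:, :, c], size=win)
                          for c in range(n_components)], axis=-1)
        # 3) Both convolved cubes (two resolutions) form the per-pixel features
        feats = np.concatenate([conv1, conv2], axis=-1).reshape(-1, 2 * n_components)
        mask = labels.reshape(-1) > 0
        clf = SVC(kernel='rbf').fit(feats[mask], labels.reshape(-1)[mask])
        return clf.predict(feats).reshape(rows, cols)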
Assessment of image quality is critical for many image processing tasks, such as image acquisition, compression, restoration, enhancement, and reproduction. In general, image quality assessment algorithms are classified into three categories: full-reference (FR), reduced-reference (RR), and no-reference (NR) algorithms. The design of NR metrics is extremely difficult and little progress has been made. FR metrics are easier to design, and the majority of image quality assessment algorithms are of this type. An FR metric requires the reference image and the test image to have the same size. This may not be the case in real-life image processing. In spatial resolution enhancement of hyperspectral images, such as pan-sharpening, the size of the enhanced image is larger than that of the original image, so an FR metric cannot be applied directly. A common approach in practice is to first down-sample an original image to a low resolution image, and then to spatially enhance the down-sampled low resolution image using the enhancement technique under study. In this way, the original image and the enhanced image have the same size and an FR metric can be applied to them. However, this common approach can never directly assess the quality of the spatially enhanced image produced directly from the original image. In this paper, a new RR metric is proposed for measuring the visual fidelity of an image with higher spatial resolution. It does not require the reference image and the test image to have the same size. The iterative back projection (IBP) technique was chosen to enhance the spatial resolution of an image. Experimental results show that the proposed RR metric works well for measuring the visual quality of spatial resolution enhanced hyperspectral images and is consistent with the corresponding FR metrics.
KEYWORDS: 3D modeling, 3D acquisition, Data modeling, MATLAB, Error analysis, Sensors, Data acquisition, 3D image processing, Optical engineering, 3D scanning
The acquisition of a three-dimensional (3-D) model in a real-world environment by scanning only sparsely can save a great amount of range-sensing time. We propose a new method for inferring missing range data based on a given intensity image and sparse range data. It is assumed that the known range data are given on a number of scan lines with 1 pixel width. This assumption is natural for a range sensor acquiring range data in a 3-D real-world environment. Both edge information from the intensity image and linear interpolation of the range data are used. Experiments show that this method gives very good results in inferring missing range data. It outperforms both the previous method and bilinear interpolation when a very small percentage of the range data is known.
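A minimal sketch of the linear-interpolation component mentioned above is given below; it fills each column between known scan lines and omits the edge-guided refinement from the intensity image. The function name and interface are illustrative.

    import numpy as np

    def fill_range_by_columns(range_map, known_rows):
        """range_map: (H, W) with valid data only on `known_rows` (1-pixel-wide scan lines)."""
        H, W = range_map.shape
        rows = np.asarray(sorted(known_rows))
        all_rows = np.arange(H)
        filled = range_map.astype(float).copy()
        for col in range(W):
            # 1-D linear interpolation along this column using the scan-line samples
            filled[:, col] = np.interp(all_rows, rows, range_map[rows, col])
        return filled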
Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the autocorrelation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance: a small shift in the input signal can produce very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. Gaussian white noise is added to the noise-free images at different signal-to-noise ratios. Experimental results show that the proposed wavelet-based moments outperform Zernike moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. The proposed wavelet-based moments perform excellently even when the noise levels are very high.
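The standard normalization mentioned above can be sketched as follows: normalized central moments are invariant to translation and scale, which is the role such normalization plays here. The wavelet-based rotation-invariant moments themselves are not reproduced.

    import numpy as np

    def normalized_central_moment(img, p, q):
        """Translation- and scale-invariant moment eta_{pq} of a 2-D array."""
        y, x = np.indices(img.shape)
        m00 = img.sum()
        cy, cx = (y * img).sum() / m00, (x * img).sum() / m00  # centroid removes translation
        mu = (((y - cy) ** p) * ((x - cx) ** q) * img).sum()   # central moment
        return mu / m00 ** (1 + (p + q) / 2.0)                 # scale normalization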
Increasing the spatial resolution of panchromatic and multispectral images is a classical problem in remote sensing. However, spatial resolution enhancement of hyperspectral imagery is still in its infancy. In this paper, we propose a new method for increasing the spatial resolution of a hyperspectral data cube using an iterative back projection (IBP) based approach. We also develop a new metric to measure the visual quality of the enhanced images. This metric is well suited to measuring the visual quality of an image for which a full-reference image is not available but the low spatial resolution image is. Experimental results confirm the superiority of the proposed method.
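A minimal per-band sketch of iterative back projection is shown below, assuming an integer enhancement factor and bicubic resampling; the blur model, step size, and stopping rule of the actual method may differ.

    import numpy as np
    from scipy.ndimage import zoom

    def ibp_enhance(low, factor=2, n_iter=20, step=1.0):
        """low: (h, w) low-resolution band; returns an (h*factor, w*factor) estimate."""
        high = zoom(low.astype(float), factor, order=3)    # initial high-resolution guess
        for _ in range(n_iter):
            simulated = zoom(high, 1.0 / factor, order=3)  # simulate the down-sampling
            error = low - simulated                        # residual in the low-res domain
            high += step * zoom(error, factor, order=3)    # back-project the residual
        return high

Applying such a step band by band to a hyperspectral cube yields a spatially enhanced cube whose visual quality the new metric is intended to assess.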
In this paper, we study Locally Linear Embedding (LLE) for nonlinear dimensionality reduction of hyperspectral data. We improve the existing LLE in terms of both computational complexity and memory consumption by introducing a spatial neighbourhood window for calculating the k nearest neighbours. The improved LLE can process larger hyperspectral images than the existing LLE and is also faster. We conducted endmember extraction experiments to assess the effectiveness of the dimensionality reduction methods. Experimental results show that the improved LLE is better than PCA and the existing LLE in identifying endmembers: it finds more endmembers than either when the Pixel Purity Index (PPI) based endmember extraction method is used. Better detection results are also obtained.
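The spatial-window neighbour search that underlies this improvement can be sketched as follows; the window size and k are arbitrary examples, and the subsequent LLE weight and embedding steps are unchanged and not shown.

    import numpy as np

    def windowed_knn(cube, k=10, half_win=5):
        """cube: (rows, cols, bands); returns (rows, cols, k) flat indices of neighbours."""
        rows, cols, bands = cube.shape
        neighbours = np.zeros((rows, cols, k), dtype=np.int64)
        for i in range(rows):
            for j in range(cols):
                # Restrict the candidate set to a local spatial window
                r0, r1 = max(0, i - half_win), min(rows, i + half_win + 1)
                c0, c1 = max(0, j - half_win), min(cols, j + half_win + 1)
                rr, cc = np.mgrid[r0:r1, c0:c1]
                rr, cc = rr.ravel(), cc.ravel()
                d = np.sum((cube[rr, cc].astype(float) - cube[i, j]) ** 2, axis=1)
                d[(rr == i) & (cc == j)] = np.inf          # exclude the pixel itself
                order = np.argsort(d)[:k]
                neighbours[i, j] = rr[order] * cols + cc[order]
        return neighbours

Restricting the search to a small window keeps both the distance computations and the stored neighbour lists bounded regardless of image size, which is where the time and memory savings come from.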
We propose an invariant descriptor for recognizing complex patterns and objects composed of closed regions, such as printed Chinese characters. The method transforms a 2D image into 1D line moments, performs a wavelet transform on the moments, and then applies a Fourier transform to each level of the wavelet coefficients and to the average. The essential advantage of the descriptor is that a multiresolution querying strategy can be employed in the recognition process and that it is invariant to shift, rotation, and scaling of the original image. Experimental results show that the proposed descriptor is a reliable tool for recognizing Chinese characters.
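A hedged sketch of this pipeline is given below. The paper's 1-D line moments are stood in for by a first-order angular moment signature sampled around the centroid, and the wavelet ('db2', via PyWavelets) and number of angular bins are arbitrary example choices rather than those of the paper.

    import numpy as np
    import pywt

    def descriptor(img, n_angles=64, wavelet='db2'):
        """img: 2-D non-negative array; returns a 1-D feature vector."""
        y, x = np.indices(img.shape)
        m = img.sum()
        cy, cx = (y * img).sum() / m, (x * img).sum() / m      # centroid
        r = np.hypot(y - cy, x - cx)
        theta = np.arctan2(y - cy, x - cx)
        # 1-D signature: first-order radial moment of the pixels in each angular bin
        bins = ((theta + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
        sig = np.bincount(bins.ravel(), weights=(r * img).ravel(), minlength=n_angles)
        # Wavelet decomposition of the signature, then the Fourier magnitude of
        # each coefficient level, following the structure described above
        coeffs = pywt.wavedec(sig, wavelet)
        return np.concatenate([np.abs(np.fft.fft(c)) for c in coeffs])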