Target detection is an important issue in hyperspectral remote sensing image processing. This paper proposes a method for hyperspectral target detection that uses data field theory to simulate the data interaction in hyperspectral images (HSIs). We then build a data field model to unify spectral and spatial information. Furthermore, a support vector detector based on the data field model is proposed. Compared with traditional methods, our method achieves superior performance for hyperspectral target detection, and it describes a target class with a more accurate and flexible high-potential region. Moreover, in contrast to traditional hyperspectral detectors, the proposed method achieves integrated spectral–spatial target detection and shows superior robustness to signal-to-noise-ratio decline and spectral resolution degradation. The experimental results show that our method is more accurate and efficient for target detection problems in HSIs.
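The data-field idea above can be illustrated with a minimal numpy sketch, assuming the common Gaussian (nuclear-form) potential from data field theory; the function name, the impact factor `sigma`, and the uniform masses are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def data_field_potential(samples, queries, sigma=1.0, masses=None):
    """Potential of a Gaussian-form data field:
    phi(y) = sum_i m_i * exp(-(||y - x_i|| / sigma)**2).

    samples : (n, d) array of field sources (e.g. target training spectra)
    queries : (m, d) array of points at which the potential is evaluated
    sigma   : assumed impact factor controlling the field's reach
    masses  : optional per-sample masses; defaults to 1/n each
    """
    samples = np.asarray(samples, dtype=float)
    queries = np.asarray(queries, dtype=float)
    n = samples.shape[0]
    if masses is None:
        masses = np.full(n, 1.0 / n)
    # pairwise Euclidean distances between every query and every sample
    diff = queries[:, None, :] - samples[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    # each sample contributes a Gaussian-decaying potential
    return (masses * np.exp(-(dist / sigma) ** 2)).sum(axis=1)
```

Pixels spectrally close to the target samples fall in the high-potential region, which is what the support vector detector then delineates.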
Bearings-only tracking has been the focus of distinct research interest in the last few decades. To handle target maneuvering and measurement nonlinearity, most existing methods adopt an interacting multiple model (IMM) with a bank of nonlinear filtering recursions, such as the extended Kalman filter, the unscented Kalman filter (UKF), or the particle filter. However, these nine-state coupled-tracking methods usually suffer from low tracking accuracy, due to inseparable model probabilities, and from a high computational burden, because of the need to invert 9×9 matrices. The present work focuses on target tracking from the bearings-only measurements of two platforms in an axially decoupled way. First, the passive ranging formula is derived, with the conversion error proven to be Gaussian noise. Then, to reduce the computational burden while maintaining high tracking accuracy, a diagonal least upper bound for the covariance matrix of the conversion error is given, which makes the decoupled tracking algorithm applicable. To handle target maneuvering, a decoupled IMM that uses a bank of passive-ranging-based Kalman filters is applied along each axis separately. Finally, the decoupled tracking algorithm is compared with the conventional coupled IMM in Cartesian coordinates with respect to tracking accuracy, model probability, and computational burden.
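The passive-ranging step can be sketched as plain two-platform triangulation; this is a hedged illustration of the geometric idea only, and does not reproduce the paper's conversion-error analysis or its diagonal covariance bound. Bearings are assumed here to be measured from the x-axis:

```python
import numpy as np

def passive_ranging(p1, theta1, p2, theta2):
    """Triangulate a 2-D target position from two bearings-only
    measurements taken at known platform positions.

    p1, p2         : 2-D platform positions
    theta1, theta2 : bearings in radians, measured from the x-axis
    Returns the Cartesian target estimate.
    """
    u1 = np.array([np.cos(theta1), np.sin(theta1)])
    u2 = np.array([np.cos(theta2), np.sin(theta2)])
    # solve p1 + r1*u1 = p2 + r2*u2 for the two ranges r1, r2
    A = np.column_stack([u1, -u2])
    r = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + r[0] * u1
```

The resulting pseudo-range measurement is what allows ordinary Kalman filters to run independently along each axis.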
The optimization of image fusion is investigated. Based on the properties of the nonsubsampled contourlet transform (NSCT), namely shift invariance and multiscale, multidirectional expansion, the fusion parameters of the multiscale decomposition scheme are optimized. To meet the requirement of feedback optimization, a new image fusion quality metric, the image quality index normalized edge association (IQI-NEA), is built. A polynomial model is adopted to establish the relationship between the IQI-NEA metric and the decomposition levels. The optimal fusion comprises four steps. First, the source images are decomposed in the NSCT domain for several given levels. Second, principal component analysis is adopted to fuse the low-frequency coefficients, the maximum fusion rule is utilized to fuse the high-frequency coefficients, and the fused result is reconstructed from the obtained fused coefficients. Third, the fusion quality metric IQI-NEA is calculated for the source and fused images. Finally, the optimal fused image and optimal level are obtained through the extremum properties of the polynomial function. The visual and statistical results show that the proposed method improves fusion performance compared to existing fusion schemes, in terms of both visual effects and quantitative fusion evaluation indexes.
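The second step's fusion rules can be sketched independently of the transform; the snippet below assumes the NSCT low- and high-frequency subbands are supplied by a library (the transform itself is not reimplemented here), and applies PCA weighting to the low band and the absolute-maximum rule to the high bands:

```python
import numpy as np

def fuse_coefficients(low_a, low_b, highs_a, highs_b):
    """Fuse multiscale coefficients from two source images.

    low_a, low_b     : low-frequency subbands, fused by PCA weights
    highs_a, highs_b : lists of high-frequency subbands, fused by the
                       absolute-maximum rule
    """
    # PCA weights: principal eigenvector of the 2x2 covariance of the bands
    data = np.stack([low_a.ravel(), low_b.ravel()])
    eigvals, eigvecs = np.linalg.eigh(np.cov(data))
    w = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = w / w.sum()
    fused_low = w[0] * low_a + w[1] * low_b
    # absolute-maximum rule: keep the coefficient of larger magnitude
    fused_highs = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                   for ha, hb in zip(highs_a, highs_b)]
    return fused_low, fused_highs
```

Reconstruction from `fused_low` and `fused_highs` would then use the inverse transform of whatever multiscale decomposition produced them.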
With the rapid development of image fusion technology, image fusion quality evaluation plays a very important guiding role in selecting or designing image fusion algorithms. Objective image quality assessment is an active research subject in this field; the ideal objective evaluation method is consistent with human perceptual evaluation. A new fused-image quality assessment method based on the human visual system (HVS) and the discrete cosine transform (DCT) is introduced. First, the Sobel operator is used to compute gradient images for the source images and the fused image; the gradient images are divided into 8×8 blocks, and the DCT coefficients of each block are calculated. Then, based on the characteristics of the HVS, the luminance masking and contrast masking are computed to form the perceptual error matrix between the input images and the fused image. Finally, the perceptual error matrix is weighted using the structural similarity. Experiments demonstrate that the new assessment maintains better consistency with human subjective perception.
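The front end of this pipeline, Sobel gradients followed by 8×8 block DCT, can be sketched in numpy as follows; the HVS masking and perceptual-error stages are not reproduced, and the orthonormal DCT-II normalization is an assumption of this sketch:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (8x8 for the blocks used here)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def sobel_gradient(img):
    """Gradient magnitude via the Sobel operator (zero-padded borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1)
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def block_dct(img, n=8):
    """Split an image into n x n blocks and DCT-transform each block."""
    C = dct_matrix(n)
    h, w = img.shape
    blocks = img[:h - h % n, :w - w % n].reshape(h // n, n, w // n, n)
    # per-block 2-D DCT: C @ B @ C.T, batched over all blocks
    return np.einsum('ar,irjc,bc->iajb', C, blocks, C)
```

The perceptual error matrix would then be formed by comparing these block spectra under luminance- and contrast-masking thresholds.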
Template matching is the process of determining the presence and the location of a reference image or an object
inside a scene image under analysis by a spatial cross-correlation process. Conventional cross-correlation type
algorithms are computationally expensive. In this paper, an algorithm for a robust template matching method based on
the combination of the wavelet transform and SIFT is proposed. A discrete wavelet transform is first applied to the
reference image and the template image, and their low-frequency parts are extracted. Harris corner detection is then
used to detect interest points in the low-frequency parts so as to determine the matching candidate region of the
template image within the reference image. SIFT features are extracted from the matching candidate region and the
template image and matched using a k-d tree and a bidirectional matching strategy. Experiments show that the
algorithm improves matching accuracy while reducing the computational load.
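The corner-detection stage can be sketched as the classical Harris response on the low-frequency band; this is a numpy illustration only (the constant `k = 0.04` and the 3×3 box smoothing are conventional assumptions), and in practice the SIFT extraction and k-d tree matching steps would come from a library such as OpenCV:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)**2, where M is
    the 3x3-box-smoothed structure tensor of the image gradients."""
    # image gradients by central differences
    gy, gx = np.gradient(img.astype(float))

    def box(a):  # 3x3 box filter with zero padding
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    sxx, syy, sxy = box(gx * gx), box(gy * gy), box(gx * gy)
    det = sxx * syy - sxy ** 2
    tr = sxx + syy
    return det - k * tr ** 2
```

Local maxima of `harris_response` above a threshold give the interest points used to localize the candidate region.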
Invariants are widely used in object recognition due to their good performance under circumstances such as changing viewpoints. Previous methods of computing invariants from 3-D points and lines have had limited success because of computational expense or hard constraints on spatial positions. To overcome these drawbacks, analyses are first carried out of the general situations in which the 3-D projective invariants of points and lines can be computed from images. The numbers of images required under the possible situations are then determined. Based on these analyses, a novel mathematical model is derived to compute 3-D point and line invariants in general positions. The validity of the model is verified by experiments in which the invariants derived from five points and one line are adopted. Simulation results on real images show that the invariants remain stable and accurate.
Invariance is widely used in 3-D object recognition due to its good performance under changes of viewpoint. A method of computing 3-D invariants of seven points from two images is presented, which can be used to achieve reliable recognition of 3-D objects and scenes. Based on the matrix representation of the projective transformation between 3-D and 2-D points, geometric invariants are derived as determinant ratios. First, the general reasoning about invariants is presented. Second, the general method of deriving 3-D invariants from images is proposed. Simulation results on real images show that the derived invariants remain stable and are quite robust and accurate.
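The determinant-ratio construction behind both abstracts can be illustrated with a minimal six-point example (a smaller configuration than the five-point/one-line and seven-point cases above, chosen only to keep the sketch short). Because each point index appears equally often in numerator and denominator, both the individual homogeneous scale factors and the determinant of the projective transformation cancel:

```python
import numpy as np

def det4(points, idx):
    """Determinant of the 4x4 matrix whose columns are the homogeneous
    coordinates of the four points selected by idx."""
    return np.linalg.det(np.stack([points[i] for i in idx], axis=1))

def projective_invariant(points):
    """One determinant-ratio invariant of six 3-D points.

    points : (6, 4) array of homogeneous coordinates in general position.
    """
    num = det4(points, (0, 1, 2, 3)) * det4(points, (0, 1, 4, 5))
    den = det4(points, (0, 1, 2, 4)) * det4(points, (0, 1, 3, 5))
    return num / den
```

Applying any invertible 4×4 transformation and arbitrary per-point rescalings leaves the returned value unchanged, which is the property the recognition scheme relies on.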
Most previous spatial-domain methods for deblurring rotary motion blur suffer from an overregularization problem in the deconvolution. We construct a frequency-domain framework to formulate rotary motion blur. The well-conditioned frequency components are protected so as to avoid overregularization. Wiener filtering is then applied to yield the optimal estimate of the original pixels under different noise levels. Identification of the rotary motion parameters is also presented. To detect the rotary center, we develop a zero-interval searching method that works on the degraded pixel spectrum and is robust to noise. The blur angle is iteratively calibrated by a novel divide-and-conquer method, which is computationally efficient. Furthermore, this paper presents a shape-recognition and linear surface fitting method to interpolate the missing pixels caused by circular fetching. Experimental results show that the proposed algorithm outperforms spatial algorithms by 0.5–4 dB in peak signal-to-noise ratio and improvement of signal-to-noise ratio, and demonstrate that the methods for missing-pixel interpolation and parameter identification are effective.
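The Wiener filtering step can be sketched in the standard frequency-domain form; the rotary-blur PSF construction and the parameter-identification methods above are out of scope here, so the sketch takes a centered PSF of the same size as the image and an assumed noise-to-signal ratio `nsr`:

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener filter:
    F_hat = conj(H) * G / (|H|**2 + NSR),
    where H is the blur transfer function and G the degraded spectrum.

    blurred : degraded image
    psf     : point spread function, same shape as the image, centered
    nsr     : assumed noise-to-signal power ratio
    """
    G = np.fft.fft2(blurred)
    # shift the PSF so its center sits at the origin before transforming
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

Larger `nsr` regularizes more heavily; protecting well-conditioned components corresponds to keeping the filter close to a pure inverse where |H| is large.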
Infrared (IR) images of cloudy skies are often too spatially varied for tiny targets to be detected, especially in single-frame detection. Exploiting the nonlinear regression, discrimination, and self-learning capabilities of neural networks (NNs), an NN-based method is proposed for tiny point-target detection in single-frame IR images with heavy background clutter. First, the background is estimated by an improved NN-based morphological filter whose structure element is optimized by a two-layer NN. Second, the noise characteristics are studied, and a two-level segmentation is presented to remove noise as well as remaining background components. Last, images with several potential targets are fed to a back-propagation (BP) NN that predicts the identity of each input, which is either a target or a pseudo-target. These two neural networks separate targets from the background and from pseudo-targets, respectively, with different training objectives, thus avoiding the over-training problem. Results on real data indicate that, for a given false alarm probability, the detection probability of this method reaches 98.3%, an improvement of 11.02% over the traditional approach with a fixed structure element and without a trained NN.
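The first two stages can be sketched with a classical fixed structure element standing in for the NN-optimized one (the NN training itself is not reproduced, and the two thresholds are illustrative parameters): grayscale opening estimates the background, and a two-level threshold segments the residual.

```python
import numpy as np

def _morph(img, se, op):
    """Flat grayscale erosion (op=np.minimum) or dilation (op=np.maximum)."""
    h, w = img.shape
    r, c = se.shape
    fill = np.inf if op is np.minimum else -np.inf
    pad = np.pad(img.astype(float), ((r // 2,), (c // 2,)),
                 constant_values=fill)
    out = np.full((h, w), fill)
    for i in range(r):
        for j in range(c):
            if se[i, j]:
                out = op(out, pad[i:i + h, j:j + w])
    return out

def background_estimate(img, se):
    """Grayscale opening (erosion then dilation); small bright targets
    are removed, so subtracting the result isolates them."""
    return _morph(_morph(img, se, np.minimum), se, np.maximum)

def detect_candidates(img, se, t_high, t_low):
    """Two-level segmentation of the residual: seeds above t_high are
    grown by one dilation pass into pixels above t_low."""
    residual = img - background_estimate(img, se)
    seeds = residual > t_high
    grown = _morph(seeds.astype(float), np.ones((3, 3), bool), np.maximum) > 0
    return seeds | (grown & (residual > t_low))
```

The surviving candidate regions are what would then be passed to the BP classifier for the target versus pseudo-target decision.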