Automatic recognition based on image fusion techniques is widely used to integrate a lower-spatial-resolution multispectral image with a higher-spatial-resolution panchromatic image. Earthquake events were first studied in this way after the Kocaeli earthquake of 1999, which showed that spatial imagery from various satellites could be exploited. Advances in remote sensing, in terms of both spatial resolution and data processing, open new possibilities for natural hazard assessment. However, existing techniques either distort the spectral properties of the image or involve complicated and time-consuming frequency decomposition and reconstruction. To address these problems, we present a study of an IHS (intensity-hue-saturation) transform and intensity modulation algorithm. The algorithm is further optimized with the proposed objective of minimizing the error rate. Experiments on recognizing building damage caused by earthquakes show that the algorithm provides better recognition accuracy than the alternatives. Although some environmental factors, such as the influence of sunlight, need further research, the proposed method can benefit further applied study.
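As a minimal illustration of the IHS-style intensity-modulation fusion described above (a generic sketch, not the paper's exact algorithm; the mean-of-RGB intensity and the `ihs_fuse` name are assumptions):

```python
import numpy as np

def ihs_fuse(ms_rgb, pan):
    """Replace the IHS intensity of the multispectral image with the pan
    band via intensity modulation; band ratios (hue/saturation) are kept.
    ms_rgb: (rows, cols, 3) in [0, 1], already resampled to the pan grid."""
    intensity = ms_rgb.mean(axis=-1)      # simple mean-of-RGB intensity
    ratio = pan / (intensity + 1e-6)      # modulation factor per pixel
    return np.clip(ms_rgb * ratio[..., None], 0.0, 1.0)

# toy example: flat 2x2 scene in [0, 1]
ms = np.full((2, 2, 3), 0.2)
pan = np.full((2, 2), 0.4)
fused = ihs_fuse(ms, pan)
```

Because each band is scaled by the same per-pixel factor, the band ratios (and hence hue and saturation) are preserved while spatial detail comes from the pan image.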
This paper takes advantage of QuickBird imagery for damage identification in urban areas, focusing on the buildings that collapsed in the Bam earthquake. We present our results on remote sensing image fusion for identifying earthquake-caused building damage using the IHS transform and intensity modulation. We begin with an inventory of buildings, treated as objects, within high-resolution QuickBird satellite imagery captured before the event. The number of collapsed buildings is then computed from the distinctive statistical characteristics of these buildings in the 'after' scene. The promising results of this analysis indicate that combining improved spatial detail with spectral information is a viable methodology for automated identification of building damage.
This study analyzed texture features in multispectral image data. Recent developments in the mathematical theory of the wavelet transform have received considerable attention from image analysts. This study evaluated the ability of the wavelet transform and other texture analysis algorithms to perform feature extraction and classification. The algorithms examined were the wavelet transform, the spatial co-occurrence matrix, fractal analysis, and spatial autocorrelation. The performance of these approaches with different features was investigated. The wavelet transform was found to be considerably more efficient than the other spatial methods.
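Wavelet texture features in studies of this kind are often the subband energies of a multi-level decomposition. A minimal sketch using a plain Haar transform (an assumption, since the study does not name its wavelet here):

```python
import numpy as np

def haar_level(img):
    """One level of the 2-D Haar transform (image sides must be even).
    Returns the approximation (LL) and the three detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_texture_features(img, levels=2):
    """Texture descriptor: mean energy of each detail subband at each
    decomposition level, a common wavelet texture signature."""
    feats = []
    cur = img
    for _ in range(levels):
        cur, lh, hl, hh = haar_level(cur)   # recurse on the LL band
        feats += [np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)]
    return np.array(feats)
```

The resulting feature vector (three detail energies per level) can then be fed to any standard classifier, analogous to co-occurrence or fractal features.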
Automatic image registration is important for many multiframe-based image analysis applications. With an increasing
number of images collected every day from different sensors, automated registration of multi-sensor/multi-spectral
images has become an important issue. A wide range of registration techniques exists for different types of applications
and data sources, however no algorithm is known that can accurately register multi-source images consistently. This
research addresses this problem by investigating the development of a fully automatic registration system for remote
sensing images. The development of this new automatic image registration method is based on the extraction and
matching of common features that are visible in both images. The algorithm involves the following five steps: noise
removal, edge extraction, edge linking, pattern extraction, and pattern matching.
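The first and last steps of this pipeline can be sketched as follows (an illustrative stand-in, assuming box-filter smoothing, Sobel-style edges, and a brute-force translation search rather than the paper's actual edge-linking and pattern-matching stages):

```python
import numpy as np

def sobel_edges(img, thresh=0.2):
    """Steps 1-2 (sketch): light smoothing, then a central-difference
    gradient magnitude, thresholded to a binary edge map."""
    pad = np.pad(img, 1, mode='edge')           # 3x3 box filter (denoise)
    sm = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)) / 9.0
    gx = np.zeros_like(sm); gy = np.zeros_like(sm)
    gx[1:-1, 1:-1] = sm[1:-1, 2:] - sm[1:-1, :-2]
    gy[1:-1, 1:-1] = sm[2:, 1:-1] - sm[:-2, 1:-1]
    return np.hypot(gx, gy) > thresh

def match_translation(edges_ref, edges_in, max_shift=5):
    """Step 5 (sketch): find the integer shift maximising edge overlap,
    a crude stand-in for the paper's pattern-matching stage."""
    best, best_shift = -1, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(edges_in, dy, axis=0), dx, axis=1)
            score = np.sum(edges_ref & shifted)   # count coinciding edges
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

Matching on edge maps rather than raw intensities is what makes this kind of approach usable across sensors: edges are the common features visible in both images even when radiometry differs.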
In this paper, we present a new method for image denoising using the GHM discrete multiwavelet transform. Developments in wavelet theory have given rise to the wavelet thresholding method for extracting a signal from noisy data, and signal denoising via wavelet thresholding has become popular. Multiwavelets have recently been introduced; they offer simultaneous orthogonality, symmetry, and short support, which makes them well suited to various image processing applications, especially denoising. The approach is based on thresholding multiwavelet coefficients, analogous to thresholding in the standard scalar orthogonal wavelet transform, but it takes into account the covariance structure of the transform. Denoising of images via thresholding of the multiwavelet coefficients that result from preprocessing and the discrete multiwavelet transform is carried out by treating the transform output appropriately. The form of the threshold is carefully formulated and is the key to the excellent results obtained in extensive numerical simulations of image denoising. We apply the multiwavelet-based method to remote sensing image denoising. The multiwavelet transform is a relatively new technique, and it has a significant advantage over the alternatives: it distorts the spectral characteristics of the image less. The experimental results show that multiwavelet-based image denoising schemes outperform wavelet-based methods both subjectively and objectively.
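A minimal sketch of wavelet-domain threshold denoising, using an orthonormal scalar Haar transform with the Donoho-Johnstone universal threshold as a stand-in for the GHM multiwavelet machinery (the function names are illustrative):

```python
import numpy as np

def haar2(img):
    """One level of the orthonormal 2-D Haar transform."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)
    d = (img[0::2] - img[1::2]) / np.sqrt(2)
    return ((a[:, 0::2] + a[:, 1::2]) / np.sqrt(2),   # LL
            (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2),   # LH
            (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2),   # HL
            (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2))   # HH

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2 (the transform is orthonormal)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1])); d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2); a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2); d[:, 1::2] = (hl - hh) / np.sqrt(2)
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2] = (a + d) / np.sqrt(2); out[1::2] = (a - d) / np.sqrt(2)
    return out

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, sigma):
    """Soft-threshold the detail bands with the universal threshold
    t = sigma * sqrt(2 log N); the approximation band is kept intact."""
    ll, lh, hl, hh = haar2(img)
    t = sigma * np.sqrt(2 * np.log(img.size))
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

The multiwavelet version follows the same thresholding pattern but operates on vector-valued coefficients, which is where the covariance structure mentioned above enters.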
Detection and recognition of dim, small moving targets in infrared image sequences containing cloud clutter is an important research area, especially for infrared search and track (IRST) surveillance applications. In this paper, we propose a new, high-performance algorithm for detecting small moving targets in infrared image sequences containing cloud clutter. The novelty of the algorithm is that it fuses features of the moving targets in both the spatial and temporal domains. The computation can be carried out by two independent units. Another advantage of the method is that it achieves better detection precision than several existing methods. We also present an algorithm, based on image fusion and Kalman tracking, that can track a number of very small, low-contrast objects through an image sequence taken from a static camera.
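The Kalman tracking stage can be sketched with a standard constant-velocity filter over 2-D position measurements (a generic textbook filter, not the paper's exact formulation; the noise levels `q` and `r` are illustrative):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter; state is [x, y, vx, vy].
    `measurements` is a sequence of (x, y) detections, one per frame."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt           # motion model
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0   # we observe position
    Q = q * np.eye(4); R = r * np.eye(2)
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    track = [x[:2].copy()]
    for z in measurements[1:]:
        x = F @ x; P = F @ P @ F.T + Q              # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)         # update with detection
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)
```

For very small targets the per-frame detections are noisy and intermittent; the filter's predicted position is what lets the tracker associate a weak detection with an existing track.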
An essential determinant of the value of digital images is their quality. Over the past years, there have been many attempts to develop models or metrics for image quality that incorporate elements of human visual sensitivity. However, there is no current standard, objective definition of spectral image quality. This paper proposes a reliable automatic method for objective image quality measurement based on the wavelet transform and the human visual system. In this way, the proposed measure differentiates between random and signal-dependent distortions, which have different effects on a human observer. The performance of the proposed quality measure is illustrated by examples involving images with different types of degradation. The technique provides a means of relating the quality of an image to its interpretation and quantification throughout the frequency range, in which the noise level is estimated for quality evaluation. Experimental results from using this method for image quality measurement show good correlation with subjective visual quality assessments.
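One simple form such a wavelet/HVS measure can take is a subband-weighted error score, with the high-frequency band down-weighted to mimic reduced visual sensitivity (the weights below are illustrative assumptions, not the paper's calibrated values):

```python
import numpy as np

def haar_bands(img):
    """One level of a 2-D Haar decomposition: LL, LH, HL, HH."""
    a = (img[0::2] + img[1::2]) / 2; d = (img[0::2] - img[1::2]) / 2
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def hvs_quality(ref, test, weights=(1.0, 0.5, 0.5, 0.25)):
    """Weighted per-subband MSE between reference and test images,
    reported as a dB-style score (higher = better).  The eye is less
    sensitive to high-frequency error, hence the small HH weight."""
    err = sum(w * np.mean((b1 - b2) ** 2)
              for w, b1, b2 in zip(weights, haar_bands(ref), haar_bands(test)))
    return -10.0 * np.log10(err + 1e-12)
```

Because the error is accumulated per frequency band, noise concentrated in bands the eye barely resolves is penalised less than the same energy placed in visually dominant bands.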
In this paper, we propose to generalize the saccade target method and argue that perceptual stability in general arises from learning the effects one's actions have on sensor responses. The apparent visual stability of color percepts across saccadic eye movements can be explained by positing that perception involves observing how sensory input changes in response to motor activities. The changes related to self-motion can be learned and, once learned, used to form stable percepts. The variation of sensor data in response to a motor act is therefore a requirement for stable perception rather than something that must be compensated for in order to perceive a stable world. We provide a simple implementation of this sensorimotor contingency view of perceptual stability, and show how a straightforward application of temporal-difference reinforcement learning yields color percepts that are stable across saccadic eye movements, even though the raw sensor input may change radically.
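The sensorimotor-contingency idea can be sketched with a tiny tabular agent that learns, via a delta-rule (one-step temporal-difference) update, what its sensor will report after each motor command; the prediction, not the raw input, is what stays stable. The 1-D 'retina' and all constants below are assumptions for illustration:

```python
import numpy as np

# A 1-D "retina" of four colour values; an action shifts gaze left/right.
world = np.array([0.1, 0.5, 0.9, 0.3])
n, actions, alpha = len(world), [-1, 1], 0.5
# pred[s, a_idx]: learned expectation of the sensor reading after
# taking action `actions[a_idx]` while fixating state s.
pred = np.zeros((n, len(actions)))

rng = np.random.default_rng(0)
s = 0
for _ in range(2000):
    a_idx = rng.integers(len(actions))
    s2 = (s + actions[a_idx]) % n            # gaze lands on a new patch
    target = world[s2]                       # new raw sensor value
    pred[s, a_idx] += alpha * (target - pred[s, a_idx])  # delta-rule update
    s = s2
# After learning, the agent anticipates its post-saccade input exactly,
# so the percept is stable despite the raw sensor value jumping.
```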
In this paper, a new technique for improving the spatial resolution of hyperspectral image data is presented. The technique combines a high-resolution panchromatic image with a lower-spatial-resolution hyperspectral image to produce a product that has the spectral properties of the hyperspectral image at a spatial resolution approaching that of the panchromatic image. Hyperspectral imaging systems are assuming greater importance for a wide variety of commercial and military systems. There have been several approaches to using a single higher-spatial-resolution band to improve the spatial resolution of hyperspectral data. The proposed algorithm offers a new approach to the problem of combining hyperspectral data with high-resolution images, and it generally shows lower levels of error than statistically based algorithms.
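One simple way to realize band-by-band sharpening of this kind, shown purely as a sketch (not necessarily the paper's algorithm), is to modulate each hyperspectral band by the ratio of the pan image to its own low-pass version, injecting pan detail while preserving each band's spectral shape:

```python
import numpy as np

def sharpen_hyperspectral(hs_cube, pan, eps=1e-6):
    """hs_cube: (rows, cols, bands), already resampled to the pan grid.
    Each band is scaled by pan / lowpass(pan), so the added detail is
    exactly the high-frequency content of the pan image."""
    # low-pass the pan image (3x3 mean) toward the hyperspectral scale
    pad = np.pad(pan, 1, mode='edge')
    low = sum(pad[i:i + pan.shape[0], j:j + pan.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    ratio = pan / (low + eps)              # high-frequency modulation
    return hs_cube * ratio[..., None]      # same factor for every band
```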
The purpose of image fusion is to merge information from multiple sensors and to improve the capabilities of information analysis and feature extraction. In this paper, a new image fusion algorithm, based on the discrete multiwavelet transform, for fusing multi-sensor images is presented. The discussion focuses on the CL (Chui-Lian) multiwavelet, a multiwavelet with two wavelet functions and two scaling functions, which is used to carry out the image fusion. CL multiwavelets have several advantages over scalar wavelets, so they are employed to decompose and reconstruct images in this algorithm. When images are merged in multiwavelet space, different frequency ranges are processed differently. The algorithm can merge information from the original images adequately and improve the capabilities of information analysis and feature extraction in the remote sensing area. Experiments, including the fusion of registered visible (VIS) / infrared (IR) images, are presented in this paper. Compared with other image fusion methods, this method obtains satisfactory results on both objective and subjective performance measures.
Image fusion refers to techniques that integrate complementary information from multiple image sensors such that the new images are more suitable for human visual perception and computer processing tasks. In this paper, a new image fusion algorithm, based on the multiple-wavelet (multiwavelet) transform, for fusing multispectral images is presented. Multiwavelets are extensions of scalar wavelets and have several unique advantages in comparison with them, so the multiwavelet transform is employed to decompose and reconstruct images in this algorithm. Image fusion is performed here at the pixel level; other types of image fusion schemes, such as feature- or decision-level fusion, are not considered. In this fusion algorithm, a feature-based fusion rule is used to combine the original subimages and to form a pyramid for the fused image. When images are merged in multiwavelet space, different frequency ranges are processed differently, so the algorithm can merge information from the original images adequately and improve the capabilities of information analysis and feature extraction. An experiment on the fusion of registered SPOT multispectral Panchromatic / XS3 band images is presented in this paper. The experimental results show that this fusion algorithm, based on the multiwavelet transform, is an effective approach in the image fusion area.
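A pixel-level wavelet-domain fusion rule of the kind described in the two abstracts above can be sketched with a scalar Haar transform standing in for the multiwavelet: average the approximation bands, and keep the larger-magnitude coefficient in each detail band (a common activity-based rule, assumed here for illustration):

```python
import numpy as np

def haar2(img):
    """One level of a 2-D Haar decomposition: [LL, LH, HL, HH]."""
    a = (img[0::2] + img[1::2]) / 2; d = (img[0::2] - img[1::2]) / 2
    return [(a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2]

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2 above."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1])); d = np.empty_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def fuse(img_a, img_b):
    """Different frequency ranges are processed differently:
    approximation bands are averaged, detail bands take whichever
    source has the stronger (larger-magnitude) coefficient."""
    ca, cb = haar2(img_a), haar2(img_b)
    fused = [(ca[0] + cb[0]) / 2]                      # LL: mean
    for da, db in zip(ca[1:], cb[1:]):                 # details: max-abs
        fused.append(np.where(np.abs(da) >= np.abs(db), da, db))
    return ihaar2(*fused)
```

The max-abs rule favours whichever sensor carries the sharper edge at each location, which is why the fused image retains detail from both sources.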
In this paper, we present a new method for image denoising using the 2-D discrete multiwavelet transform. Developments in wavelet theory have given rise to the wavelet thresholding method for extracting a signal from noisy data, and signal denoising via wavelet thresholding has become popular. Multiwavelets have recently been introduced; they offer simultaneous orthogonality, symmetry, and short support, which makes them well suited to various image processing applications, especially denoising. The approach is based on thresholding multiwavelet coefficients, analogous to thresholding in the standard scalar orthogonal wavelet transform, but it takes into account the covariance structure of the transform. Denoising of images via thresholding of the multiwavelet coefficients that result from preprocessing and the discrete multiwavelet transform is carried out by treating the transform output appropriately. The form of the threshold is carefully formulated and is the key to the excellent results obtained in extensive numerical simulations of image denoising. The performance of multiwavelets is compared with that of scalar wavelets. Simulations reveal that multiwavelet-based image denoising schemes outperform wavelet-based methods both subjectively and objectively.
The purpose of multispectral image fusion is to merge information from multiple sensors and to improve the capabilities of information analysis and feature extraction. The discrete wavelet transform offers a more precise way to analyze images than other multi-resolution analyses: it decomposes an image into a low-frequency band and high-frequency bands at each level, and the image can also be reconstructed gradually, level by level. However, the standard method re-decomposes only the low-frequency band at each higher scale, so it omits some useful details of the image. In this paper, we investigate an improved discrete wavelet transform that also decomposes the high-frequency bands at higher scales, which standard wavelet analysis does not do. We apply it to image data and give a pixel-level fusion method. By merging remote sensing images of different wavebands, taken by multiple sensors of the same object, with the improved wavelet analysis, we obtain a fused image. The method successfully fuses the details of the input images and clearly displays the information of each input image. Compared with other image fusion methods, this method obtains satisfactory results on both objective and subjective performance measures.
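Re-decomposing the high-frequency bands as well as the low-frequency band amounts to a full wavelet-packet tree. A sketch with a Haar filter (an assumption, as the abstract does not name its filter):

```python
import numpy as np

def haar2(img):
    """One level of a 2-D Haar decomposition: [LL, LH, HL, HH]."""
    a = (img[0::2] + img[1::2]) / 2; d = (img[0::2] - img[1::2]) / 2
    return [(a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2]

def packet_decompose(img, levels=2):
    """Full wavelet-packet decomposition: at each level, EVERY band is
    split again.  Standard wavelet analysis would recurse only on the
    first (LL) band, discarding structure inside the detail bands."""
    bands = [img]
    for _ in range(levels):
        bands = [sub for b in bands for sub in haar2(b)]
    return bands
```

At `levels` scales this yields 4**levels equally sized subbands instead of the 3*levels + 1 of the standard pyramid, which is exactly the extra high-frequency detail the improved method exploits for fusion.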