In remote sensing, accurate identification of concealed far-away objects is difficult. Here, to detect concealed objects from a distance, wideband technology is utilized. Because wideband data spans a broad range of frequencies, it can reveal information about both the surface of an object and its contents. To better detect the object and to improve the accuracy of target identification, the collected wideband data is processed in the wavelet domain. Information about the target is spread over different wavelet subbands, making it possible to better discriminate the target from a background whose frequency content lies in the same frequency range. Simulations are performed at different frequency ranges and power levels to identify targets. In conclusion, wavelet-based processing of the collected wideband data helps to appropriately estimate the presence of a target in the scene and improves the process of target identification.
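As a concrete illustration of the subband idea (a toy example of my own, not the paper's processing chain), the sketch below decomposes a simulated wideband return with a Haar wavelet and reports per-subband energies; a higher-frequency target echo concentrates its energy in the finer detail subbands, which is what makes subband-level discrimination possible.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def subband_energies(x, levels=3):
    """Energy in each detail subband (finest first), then the
    final approximation subband."""
    energies = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(float(np.sum(d ** 2)))
    energies.append(float(np.sum(a ** 2)))
    return energies

# Illustrative scene: a slow background return plus a weaker,
# higher-frequency target echo (all numbers are made up).
t = np.linspace(0.0, 1.0, 256, endpoint=False)
background = np.sin(2 * np.pi * 4 * t)
target = 0.5 * np.sin(2 * np.pi * 60 * t)
print(subband_energies(background + target))
```

Because the Haar transform is orthonormal, the subband energies sum to the signal energy, so nothing is lost by inspecting the subbands separately.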
Spectral images have relatively low spatial resolution compared to high-resolution single-band panchromatic (PAN) images. Therefore, fusing a spectral image with a PAN image has been widely studied as a way to produce a high-resolution spectral image. However, raw spectral images are too large to process and contain redundant information that is not utilized in the fusion process. In this study, we propose a novel fusion method that employs spectral band reduction
and contourlets. The band reduction begins with the best two-band combination, which is subsequently augmented to three, four, and more bands until the desired number of bands is selected. The adopted band selection algorithm, based on the endmember extraction concept, employs a sequential forward search strategy. Next, the
image fusion is performed with two different spectral images based on the frequency components newly obtained by the contourlet transform (CT). One spectral image used as a dataset is a multispectral (MS) image and the other is a hyperspectral (HS) image. Each original spectral image is pre-processed by spectrally integrating over the entire spectral range to obtain a PAN source image for the fusion process. This way, we can eliminate the image co-registration step, since the obtained PAN image is already perfectly aligned with the spectral image. Next, we fuse the band-reduced spectral images with the PAN images using a contourlet-based fusion framework. The resultant fused
image provides enhanced spatial resolution while preserving the spectral information. To analyze the band reduction performance, the original spectral images are fused with the same PAN images to serve as reference images, which are then compared to the band-reduced fusion results using six different quality metrics.
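The sequential forward search described above can be sketched as follows. This is a toy version: a simple total-variance criterion stands in for the paper's endmember-extraction-based score, and the data cube and band scales are illustrative.

```python
import numpy as np

def forward_band_selection(cube, n_bands, score):
    """Greedy sequential forward search over the last (band) axis:
    start from the best two-band combination, then add one band at a
    time until n_bands are selected."""
    bands = list(range(cube.shape[-1]))
    # Exhaustive search for the best initial pair.
    best = max(((i, j) for i in bands for j in bands if i < j),
               key=lambda ij: score(cube[..., list(ij)]))
    selected = list(best)
    while len(selected) < n_bands:
        remaining = [b for b in bands if b not in selected]
        nxt = max(remaining, key=lambda b: score(cube[..., selected + [b]]))
        selected.append(nxt)
    return selected

# Toy criterion standing in for the endmember-based score.
score = lambda sub: float(np.var(sub))

rng = np.random.default_rng(0)
# Synthetic 8x8 cube with 6 bands of differing variance (illustrative).
cube = rng.normal(size=(8, 8, 6)) * np.array([1, 5, 1, 3, 1, 4])
sel = forward_band_selection(cube, 4, score)
print(sel)
```

With the variance criterion, the search first picks the two most variable bands and then keeps adding the next most informative one, mirroring the two-then-three-then-four growth described above.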
To simultaneously compress multichannel climate data, the Wavelet Subbands Arranging Technique (WSAT) is studied. The proposed technique is based on the wavelet transform and has been designed to improve the transmission of voluminous climate data. The WSAT method significantly reduces the number of transmitted or stored bits in a bit stream while preserving the required quality. In the proposed technique, the arranged wavelet subbands of the input channels provide more efficient compression for multichannel climate data by building appropriate parent-offspring relations among wavelet coefficients. To test and evaluate the proposed technique, data from the Nevada climate change database is utilized. Based on the results, the proposed technique can be an appropriate choice for the compression of multichannel climate data, achieving a significantly high compression ratio at low error.
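A minimal sketch of the arranging idea, assuming a Haar decomposition and a simple grouping of like subbands across channels (the paper's actual wavelet and coefficient arrangement may differ; this only illustrates how corresponding subbands from several channels can be placed adjacently in one stream):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT."""
    x = np.asarray(x, dtype=float)
    return ((x[0::2] + x[1::2]) / np.sqrt(2.0),
            (x[0::2] - x[1::2]) / np.sqrt(2.0))

def arrange_subbands(channels, levels=2):
    """Decompose every channel, then group like subbands from all
    channels next to each other: all level-1 details first, then all
    level-2 details, then the approximations (hypothetical layout)."""
    per_channel = []
    for ch in channels:
        a, bands = np.asarray(ch, dtype=float), []
        for _ in range(levels):
            a, d = haar_dwt(a)
            bands.append(d)
        bands.append(a)
        per_channel.append(bands)
    stream = [np.concatenate([pc[k] for pc in per_channel])
              for k in range(levels + 1)]
    return np.concatenate(stream)

# Two toy "channels" of climate samples (illustrative values).
chans = [np.arange(16.0), np.ones(16)]
out = arrange_subbands(chans)
print(out.size)  # 32 coefficients in total
```

Grouping like subbands keeps related coefficients near one another, which is the property a subsequent zerotree-style coder exploits through parent-offspring relations.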
Image fusion involves merging two or more images in such a way as to retain the most desirable characteristics of each.
There are various image fusion methods and they can be classified into three main categories: i) Spatial domain, ii)
Transform domain, and iii) Statistical domain. We focus on the transform domain in this paper as spatial domain
methods are primitive and statistical domain methods suffer from a significant increase of computational complexity. In
the field of image fusion, performance analysis is important since the evaluation result gives valuable information which
can be utilized in various applications, such as military, medical imaging, remote sensing, and so on. In this paper, we
analyze and compare the performance of fusion methods based on four different transforms: i) wavelet transform, ii)
curvelet transform, iii) contourlet transform, and iv) nonsubsampled contourlet transform. The fusion framework and scheme are explained in detail, and two different sets of images are used in our experiments. Furthermore, various performance evaluation metrics are adopted to quantitatively analyze the fusion results. The comparison results show that the nonsubsampled contourlet transform method performs better than the other three. During the experiments, we also found that a decomposition level of 3 gave the best fusion performance, and decomposition levels beyond level 3 did not significantly affect the fusion results.
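To illustrate the fusion scheme these transform-domain methods share, here is a single-level wavelet (Haar) fusion sketch using the common rules of averaging the approximation coefficients and keeping the larger-magnitude detail coefficient. The compared methods differ mainly in the transform used, and the experiments above use 3 decomposition levels rather than the single level shown.

```python
import numpy as np

def dwt2(img):
    """One level of the 2D orthonormal Haar DWT: (LL, LH, HL, HH)."""
    a = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2.0)  # column lowpass
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)  # column highpass
    LL = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    LH = (a[0::2] - a[1::2]) / np.sqrt(2.0)
    HL = (d[0::2] + d[1::2]) / np.sqrt(2.0)
    HH = (d[0::2] - d[1::2]) / np.sqrt(2.0)
    return LL, LH, HL, HH

def idwt2(LL, LH, HL, HH):
    """Exact inverse of dwt2."""
    a = np.empty((LL.shape[0] * 2, LL.shape[1]))
    d = np.empty_like(a)
    a[0::2], a[1::2] = (LL + LH) / np.sqrt(2.0), (LL - LH) / np.sqrt(2.0)
    d[0::2], d[1::2] = (HL + HH) / np.sqrt(2.0), (HL - HH) / np.sqrt(2.0)
    img = np.empty((a.shape[0], a.shape[1] * 2))
    img[:, 0::2] = (a + d) / np.sqrt(2.0)
    img[:, 1::2] = (a - d) / np.sqrt(2.0)
    return img

def fuse(img1, img2):
    """Average approximations; keep the stronger detail coefficient."""
    c1, c2 = dwt2(img1), dwt2(img2)
    LL = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(c1[1:], c2[1:])]
    return idwt2(LL, *details)

rng = np.random.default_rng(1)
x = rng.random((8, 8))
print(np.allclose(fuse(x, x), x))  # prints True: the transform is invertible
```

Replacing `dwt2`/`idwt2` with a curvelet, contourlet, or nonsubsampled contourlet pair changes only the decomposition; the coefficient-combination rules stay the same.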
Typical systems used for the detection of Weapons of Mass Destruction (WMD) sense objects using gamma rays or neutrons. Nonetheless, depending on environmental conditions, current methods for detecting fissile materials have a limited effective distance. Moreover, gamma-ray radiation can be easily shielded.
Here, the detection of concealed WMD from a distance is simulated and studied based on radar, especially WideBand (WB) technology. The WB-based method capitalizes on the fact that electromagnetic waves penetrate different materials at different rates. While low-frequency waves can pass through objects more easily, high-frequency waves are absorbed by objects at a higher rate, making object recognition easier. Measuring the penetration depth allows one to identify the sensed material.
During simulation, the radar waves and the propagation area, including free space and the objects in the scene, are modeled. In fact, each material is modeled as a layer with a certain thickness. At the start of the simulation, a modeled radar wave is radiated toward the layers. At the receiver side, each layer can be identified based on the signals received from it. When an electromagnetic wave passes through an object, the wave's power is attenuated to a degree that depends on the object's characteristics. The simulation is performed using radar signals at different frequencies (in the MHz-GHz range) and power levels to identify the different layers.
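The layered attenuation model can be sketched as below. The frequency scaling, attenuation coefficients, and layer thicknesses are all hypothetical placeholders, chosen only to show how per-layer losses accumulate and why higher frequencies come back weaker.

```python
import math

def received_power(p_tx_dbm, layers, freq_ghz):
    """One-way power after passing through a stack of layers.
    Each layer is (attenuation in dB/cm at 1 GHz, thickness in cm);
    the coefficient is scaled by sqrt(frequency) as a hypothetical
    stand-in for frequency-dependent absorption."""
    p = p_tx_dbm
    for alpha_db_per_cm, thickness_cm in layers:
        p -= alpha_db_per_cm * math.sqrt(freq_ghz) * thickness_cm
    return p

# Two illustrative layers: a light cover and a denser one.
layers = [(0.5, 2.0), (2.0, 1.0)]
for f_ghz in (0.5, 1.0, 5.0):
    print(f_ghz, received_power(30.0, layers, f_ghz))
```

Comparing the received power across frequencies is what lets the simulation distinguish layers: each material leaves a characteristic, frequency-dependent loss.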
Target detection is difficult when the target is concealed or placed under ground or water. To detect and identify
concealed objects from a distance, the analysis of HyperSpectral Imaging (HSI) and Wideband (WB) data is studied.
While the HSI analysis may render surface information about objects, the WB data can reveal information about inner
layers of the object and its content.
Two of the challenging issues with object identification using HSI are (i) computational complexity of the analysis
and (ii) signature mismatch. Here, the robust matched filter is emphasized for HSI processing. In addition, wideband technology is utilized to provide more information about the concealed target and to support spectral processing for more effective object uncovering.
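A baseline spectral matched filter can be sketched as follows. Diagonal loading of the covariance is used here as a simple, common surrogate for a robust variant; the synthetic cube, target signature, and loading constant are all illustrative, not the paper's data or exact filter.

```python
import numpy as np

def matched_filter_scores(cube, target_sig, eps=1e-3):
    """Spectral matched filter: score each pixel x by
    (x - mu)^T C^{-1} (s - mu), where mu and C are the background
    mean and (diagonally loaded) covariance."""
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    C = Xc.T @ Xc / X.shape[0] + eps * np.eye(X.shape[1])
    w = np.linalg.solve(C, np.asarray(target_sig, dtype=float) - mu)
    return (Xc @ w).reshape(cube.shape[:-1])

# Synthetic 16x16 cube with 8 bands and one planted target pixel.
rng = np.random.default_rng(2)
cube = rng.normal(0.0, 0.1, size=(16, 16, 8))
sig = np.linspace(1.0, 2.0, 8)      # hypothetical target signature
cube[5, 7] += sig                   # bury the target at one pixel
scores = matched_filter_scores(cube, sig)
loc = tuple(int(i) for i in np.unravel_index(int(np.argmax(scores)),
                                             scores.shape))
print(loc)  # the planted pixel: (5, 7)
```

The loading term keeps the filter stable when the estimated covariance is ill-conditioned, which is one of the practical concerns behind robust matched filtering under signature mismatch.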
During simulation, electromagnetic waves and propagation areas are modeled. In fact, an object is modeled as
different layers with different thicknesses.
The existence of a target is estimated by the detection of spectral signatures relating to materials used in the target. In
other words, the simultaneous presence of spectral signatures corresponding to the main materials of the target in the
hyperspectral data helps detect the target.
The reflected higher frequency signals provide information about exterior layers of both an object and the
background; in addition, the reflected lower frequency signals provide information about interior layers of the object. To
identify different objects, the simulation is performed using HSI and WB technology at different frequencies (MHz-GHz) and power levels. Based on the simulation results, the proposed method can be a promising approach to detecting targets.
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting the spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) the image fusion method and ii) quality analysis of the fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. The fusion framework and scheme used in the study are discussed in detail. Second, quality analysis of the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than conventional wavelet-based fusion methods.
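The detail-injection principle behind pansharpening can be sketched with a crude box-filter standing in for the contourlet decomposition. Everything here is illustrative, not the paper's actual transform or injection rule: the spatial high-pass of the PAN image is added to the (upsampled) spectral band.

```python
import numpy as np

def lowpass(img, k=2):
    """Crude low-pass: k-by-k block averaging, then nearest upsampling."""
    h, w = img.shape
    small = img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, k, axis=0), k, axis=1)

def inject_details(ms_band, pan):
    """Add the PAN high-pass (spatial detail) component to the band."""
    return ms_band + (pan - lowpass(pan))

rng = np.random.default_rng(3)
pan = rng.random((8, 8))
ms = lowpass(pan)            # pretend the MS band is a blurred PAN
fused = inject_details(ms, pan)
print(np.allclose(fused, pan))  # prints True in this idealized case
```

A multiscale directional transform such as the contourlet separates the PAN details far more selectively than this box filter, which is why the contourlet-based method preserves spectral content better while injecting spatial detail.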