<p>Binarization is the first step of document analysis and recognition systems. A binarization method is proposed for degraded historical document images. The methodology is based on the joint use of the nonsubsampled contourlet transform (NSCT) for enhancement and <italic>k</italic>-means clustering for binarization. The input degraded image is decomposed by the NSCT into coefficients, which are processed through a weighting scheme to highlight significant features. The reconstructed enhanced image is then binarized by mapping pixels to foreground (text) or background (non-text) using <italic>k</italic>-means clustering. Experiments are conducted on document image binarization competition datasets using blind and unblind evaluation protocols. Unblind evaluation is performed on four specific types of degradation: stain, ink bleed-through, nonuniform background, and ink intensity variation. The results show the effectiveness of the proposed scheme in terms of objective and subjective evaluations, as well as its stability with respect to other well-known methods.</p>
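The binarization step described above can be sketched as a two-cluster <italic>k</italic>-means on pixel intensities. This is a minimal illustration only: the NSCT enhancement stage is omitted, and a simple 1-D k-means on grayscale values (a common simplification) is assumed in place of whatever feature space the paper actually clusters.

```python
import numpy as np

def kmeans_binarize(img, iters=20):
    """Binarize a grayscale image by 2-means clustering of pixel
    intensities; the darker cluster is taken as foreground text.
    (Sketch only: assumes 1-D k-means on intensities.)"""
    pixels = img.astype(np.float64).ravel()
    # Initialize the two centroids at the extreme intensities.
    c = np.array([pixels.min(), pixels.max()])
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (0 or 1).
        labels = (np.abs(pixels - c[0]) > np.abs(pixels - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = pixels[labels == k].mean()
    labels = (np.abs(pixels - c[0]) > np.abs(pixels - c[1])).astype(int)
    # Foreground (text) = darker centroid -> 0; background -> 255.
    fg = int(np.argmin(c))
    return np.where(labels.reshape(img.shape) == fg, 0, 255).astype(np.uint8)

# Toy example: a dark stroke on a bright background.
img = np.full((8, 8), 200, dtype=np.uint8)
img[2:6, 3] = 30          # dark vertical stroke (text)
out = kmeans_binarize(img)
```

In the full method this clustering would operate on the NSCT-enhanced image rather than the raw input.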
In this paper, we investigate change detection approaches at the pixel level and the object level. The pixel-level approach is based on the simultaneous analysis of multitemporal data, while the object-level approach relies on a comparative analysis of independently produced classifications of the data. In both cases, the comparison is established using a multilayer neural network classifier. Usually, the backpropagation (BP) algorithm is used as the training rule. Here, we investigate the use of Kalman filtering (KF) as the training algorithm for detecting changes in remotely sensed imagery. Using SPOT images and evaluation criteria, a detailed comparison indicates that the KF algorithm is preferable to the BP algorithm in terms of convergence rate, stability, and change detection accuracy.
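The object-level (post-classification) comparison mentioned above can be sketched independently of how the classifiers are trained. The sketch below only covers the comparison of two already-produced classification maps; the paper's actual contribution, training the multilayer network with a Kalman filter instead of backpropagation, is not shown, and the from-to transition matrix is an assumed illustrative output.

```python
import numpy as np

def post_classification_change(map_t1, map_t2):
    """Object-level change detection: compare two independently produced
    classification maps (integer class labels) and return a change mask
    plus a from-to transition matrix."""
    assert map_t1.shape == map_t2.shape
    change_mask = map_t1 != map_t2
    n_classes = int(max(map_t1.max(), map_t2.max())) + 1
    # Transition matrix: rows = class at time 1, cols = class at time 2.
    trans = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(trans, (map_t1.ravel(), map_t2.ravel()), 1)
    return change_mask, trans

# Toy 2x3 classification maps from the two dates.
m1 = np.array([[0, 0, 1], [1, 2, 2]])
m2 = np.array([[0, 1, 1], [1, 2, 0]])
mask, trans = post_classification_change(m1, m2)
```

Each off-diagonal entry of `trans` counts pixels that moved from one class to another between the two dates.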
Radar imaging offers an important advantage for Earth change observation: independence from weather conditions. However, the recognition of some features, such as roads, is more difficult, leading to an ambiguous interpretation of the scene. To compensate for the lack of features, a high-spatial-resolution panchromatic image is often used as complementary data for improving the quality of the radar image. The basic idea consists in extracting features (details) from the panchromatic image by means of a High Pass Filter (HPF) and incorporating them into the radar image. However, the difficult choice of the size and shape of the filter prevents a significant enhancement of the missing features in the radar image. To resolve this problem, we propose the use of the <i>à trous</i> algorithm as an alternative approach for extracting features from the panchromatic image. Its advantage lies in the local characterization of features by the wavelet coefficients. The radar-panchromatic composite image produced by the à trous algorithm gives a better detection of lines, edges, and field boundaries compared to the HPF method.
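The à trous decomposition used to extract panchromatic details can be sketched as follows. This is a minimal numpy version assuming the standard B3-spline kernel and edge-replicated borders; the paper's exact filter and border handling may differ.

```python
import numpy as np

# B3-spline scaling kernel commonly used with the a trous algorithm.
H = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0

def atrous_detail_planes(img, levels=2):
    """Undecimated (a trous) wavelet decomposition: at each level the
    image is smoothed with the kernel dilated by 2**level (inserting
    holes), and the detail plane is the difference between successive
    approximations. Returns (detail planes, final approximation)."""
    approx = img.astype(np.float64)
    details = []
    for j in range(levels):
        step = 2 ** j
        # Dilated ("with holes") separable kernel for level j.
        k = np.zeros(4 * step + 1)
        k[::step] = H
        smooth = approx
        # Separable convolution: rows then columns, edges replicated.
        for axis in (0, 1):
            smooth = np.apply_along_axis(
                lambda m: np.convolve(np.pad(m, 2 * step, mode='edge'),
                                      k, mode='same')[2 * step:-2 * step],
                axis, smooth)
        details.append(approx - smooth)
        approx = smooth
    return details, approx

# The decomposition is exactly invertible: image = approx + sum(details).
pan = np.random.default_rng(0).random((16, 16))
details, approx = atrous_detail_planes(pan, levels=2)
```

A composite image would then be formed by adding the panchromatic detail planes to the (co-registered, resampled) radar image, e.g. `radar + sum(details)`.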
The Intensity Hue Saturation (IHS) transform is a widely used method for enhancing the spatial resolution of multispectral images by substituting the intensity component with the high-resolution panchromatic image. However, such a direct substitution introduces important modifications of the spectral properties. A more rigorous approach consists in enhancing the spatial resolution of the intensity component through an appropriate combination with the panchromatic image. Such a combination is performed in the redundant wavelet domain using a fusion model. SPOT images are used to illustrate the superiority of our approach over the IHS method in preserving spectral properties.
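The baseline IHS substitution criticized above can be sketched in a few lines. This assumes the common linear-IHS simplification, where I = (R+G+B)/3 and substitution is equivalent to adding the injection P − I to every band; it is the distortion-prone baseline, not the paper's wavelet-domain method.

```python
import numpy as np

def ihs_substitution(ms, pan):
    """Classical IHS-style pansharpening by component substitution:
    replace the intensity I = (R+G+B)/3 with the panchromatic band.
    Additive equivalent: each band receives the injection P - I.
    (Baseline sketch with linear IHS; assumes ms has shape (3, H, W).)"""
    I = ms.mean(axis=0)                    # intensity component
    # Histogram-match P to I (mean/std) to reduce radiometric bias.
    p = (pan - pan.mean()) * (I.std() / (pan.std() + 1e-12)) + I.mean()
    return ms + (p - I)                    # broadcast over the 3 bands

# Toy case: if P carries no extra detail beyond I, bands are unchanged.
rng = np.random.default_rng(1)
ms = rng.random((3, 8, 8))                 # R, G, B bands
pan = ms.mean(axis=0)
fused = ihs_substitution(ms, pan)
```

The spectral distortion of this baseline comes from injecting the full difference P − I; the wavelet-domain approach instead injects only the high-frequency part of P into I.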
In this paper, we propose the joint use of a color space and the wavelet transform to improve the spatial resolution of multispectral images. The principle consists in transforming the Red Green Blue (RGB) image components into independent IHS components. The I component is merged with the panchromatic (P) image in the wavelet domain via an appropriate model, and an inverse IHS transformation is then performed to produce new high-resolution multispectral images. For this purpose, the redundant wavelet transform may be used, since it ensures that significant details coming from P are well injected into the I component. Another advantage of this approach lies in rejecting the noise present in the two components. The merging of I and P considered here is based on local detail mean matching, which adjusts the high-resolution details of the P image to the low resolution of the I component. Two criteria are used to evaluate the results of our merging method: the correlation coefficient and the deviation index.
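The detail mean-matching merge of I and P can be sketched as below. Two simplifications are assumed: a separable moving-average low-pass stands in for the redundant wavelet approximation, and the matching gain is computed globally rather than locally as in the paper.

```python
import numpy as np

def smooth(x, w=3):
    """Separable moving-average low-pass (a stand-in for the redundant
    wavelet approximation plane)."""
    k = np.ones(w) / w
    for axis in (0, 1):
        x = np.apply_along_axis(
            lambda m: np.convolve(np.pad(m, w // 2, mode='edge'),
                                  k, mode='same')[w // 2:-(w // 2)],
            axis, x)
    return x

def merge_mean_matched(I, P, w=3):
    """Inject the detail plane of P into I after matching mean detail
    amplitudes, so P's high-resolution details fit I's dynamics.
    (Global mean matching assumed; the paper matches locally.)"""
    dP = P - smooth(P, w)                  # high-frequency details of P
    dI = I - smooth(I, w)                  # details already present in I
    gain = (np.abs(dI).mean() + 1e-12) / (np.abs(dP).mean() + 1e-12)
    return I + gain * dP

# Sanity case: a detail-free P leaves I unchanged.
I = np.random.default_rng(2).random((8, 8))
out = merge_mean_matched(I, np.zeros((8, 8)))
```

The merged plane would replace I before the inverse IHS transform back to RGB.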
In this paper, we propose a method for integrating radar information into multispectral images without disturbing their spectral content. The main problem is to define a fusion rule that takes into account the characteristics of these images. Accordingly, the main purpose of this paper is to define a new fusion rule performed in the redundant wavelet domain. This rule is based on the Mahalanobis distance applied to the wavelet coefficients. Instead of a coefficient-to-coefficient comparison, a distance-to-distance comparison is performed: the coefficient selected for the fused image is the one presenting the largest distance. This approach is applied to fusing the infrared band of SPOT with, respectively, RADARSAT and ERS images. The results show that the spectral information is well preserved and that the texture and area roughness are better represented.
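The distance-to-distance selection rule can be sketched on scalar coefficient planes. This assumes each coefficient is a scalar, in which case the Mahalanobis distance to its band's coefficient distribution reduces to a normalized deviation |c − mean|/std; the paper may compute it over vector-valued coefficients.

```python
import numpy as np

def mahalanobis_select(coef_a, coef_b):
    """Fusion rule on wavelet coefficient planes: at each position keep
    the coefficient whose Mahalanobis distance to its own band's
    coefficient distribution is largest, i.e. the more salient detail.
    (Scalar simplification: distance = |c - mean| / std.)"""
    def dist(c):
        return np.abs(c - c.mean()) / (c.std() + 1e-12)
    return np.where(dist(coef_a) >= dist(coef_b), coef_a, coef_b)

# Toy detail planes: 'a' holds a salient spike, 'b' is flat.
a = np.array([[0.0, 0.0], [0.0, 10.0]])
b = np.array([[1.0, 1.0], [1.0, 1.0]])
fused = mahalanobis_select(a, b)
```

The flat plane has zero deviation everywhere, so the rule retains the plane carrying the salient detail at every position.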