Many regions of the sky are currently being observed in a large number of spectral bands with various instruments, and this trend will intensify in the coming years. Even with few wavelength bands, it is not easy to match the objects identified in each individual image, to obtain consistent measurements, or to classify the objects spectrally. We propose a detection scheme based on fused images and test different algorithms for building such an image. The best results in terms of object detection are obtained with those involving a deconvolution based on a wavelet approach. Another way to sort the pixels of astronomical images into a coherent set of physical sources is to classify them under some basic assumptions. The main difficulty comes from the fact that astronomical sources fill a multidimensional continuum of spectral classes, so the number of color classes grows rapidly with the number of bands. The spectral behaviour of the pixels is assumed here to be a superposition of several pure elements. The resulting mixing categories define a set of intermediate classes, and applying the matching pursuit algorithm then yields class percentages, providing very promising results for the analysis of multi-wavelength astronomical images.
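The unmixing step can be illustrated with a greedy matching pursuit over a dictionary of pure-element spectra. The sketch below is a toy illustration (the dictionary, band count and percentage normalization are assumptions, not taken from the paper):

```python
import numpy as np

def matching_pursuit(spectrum, dictionary, n_iter=3):
    """Greedy matching pursuit: decompose `spectrum` over the columns
    of `dictionary` (pure-element spectra, assumed unit-norm)."""
    residual = spectrum.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        # pick the atom most correlated with the current residual
        corr = dictionary.T @ residual
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual

# toy example: two orthonormal "pure element" spectra over 4 bands
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])
pixel = 0.7 * D[:, 0] + 0.3 * D[:, 1]      # a 70% / 30% mixture
c, r = matching_pursuit(pixel, D)
percentages = 100.0 * c / c.sum()          # class percentages per pixel
```

With an orthonormal toy dictionary the residual vanishes and the recovered percentages match the mixture exactly; with real, correlated spectra the greedy selection only approximates the mixture.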
In this paper, we propose a segmentation algorithm for multi-component images. The technique is based on the combination of three principles: it is an interband approach, exploiting the correlations between the different image components; it is a multi-resolution technique, applied in the wavelet domain; and it is a model-based segmentation technique, applying a multinormal model to the multi-component image, with model parameters estimated by maximum likelihood. From this procedure, a region-merging segmentation technique emerges, employing a generalized likelihood ratio test for the merging. The procedure is embedded in a larger segmentation framework for multi-component images. This framework contains anisotropic diffusion noise filtering, watershed-based segmentation and a multiscale region-merging procedure. All techniques are multiscale procedures and work in the wavelet domain. Moreover, they are all multicomponent techniques, making use of the correlation between the different image components. To demonstrate the proposed procedure, it is applied to a 3-band color image.
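The merging criterion can be illustrated with a generalized likelihood ratio test under a multinormal model: two regions are merged when one jointly fitted Gaussian explains their pixels nearly as well as two separately fitted ones. The sketch below is a generic illustration (region sizes, merge threshold and the wavelet-domain embedding are left out), not the authors' exact estimator:

```python
import numpy as np

def glrt_merge_stat(r1, r2):
    """Generalized likelihood ratio statistic for merging two regions
    of multi-component pixels (rows = pixels, cols = bands), under a
    multivariate normal model with ML-estimated mean and covariance.
    Small values argue for merging the regions."""
    n1, n2 = len(r1), len(r2)
    both = np.vstack([r1, r2])
    def logdet_cov(x):
        c = np.cov(x, rowvar=False, bias=True)  # ML (biased) estimate
        return np.linalg.slogdet(np.atleast_2d(c))[1]
    # -2 log(likelihood ratio): zero iff one joint model fits as well
    return ((n1 + n2) * logdet_cov(both)
            - n1 * logdet_cov(r1) - n2 * logdet_cov(r2))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(200, 3))   # same distribution as b
b = rng.normal(0.0, 1.0, size=(200, 3))
c = rng.normal(5.0, 1.0, size=(200, 3))   # shifted mean
stat_same = glrt_merge_stat(a, b)   # small: merge
stat_diff = glrt_merge_stat(a, c)   # large: keep separate
```

In a region-merging pass, the statistic would be compared against a threshold; regions drawn from the same distribution give a much smaller value than regions with different statistics.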
The wavelet transform is widely applied to multisensor image fusion because of its properties: multiresolution analysis, exact reconstruction, and similarity to human visual perception. However, the choice of wavelet filter and decomposition scheme affects the overall performance of multisensor image fusion, and also results in different computational complexity. Experiments were carried out using different wavelet filters (mainly Daubechies and biorthogonal wavelets) as well as the decimated and undecimated versions of the transform (the Mallat algorithm and the à trous algorithm). Optical and SAR images are used as test data. The fusion results were compared and analyzed. It is shown that choosing an inappropriate wavelet filter and decomposition scheme can significantly decrease the quality of the fusion results. Based on the conducted experiments and on computational complexity, recommendations are given on choosing a particular wavelet transform to produce satisfactory fusion results for SAR and optical images in different applications.
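A typical coefficient-level fusion rule of the kind compared in these experiments can be sketched with a single-level Haar transform in pure NumPy: the approximation subbands are averaged, and at each detail position the coefficient of larger magnitude is kept. This is a generic illustration, not the paper's setup (which uses Daubechies and biorthogonal filters over several levels); even image dimensions and registered inputs are assumed.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform (image size assumed even)."""
    a = (img[0::2] + img[1::2]) / 2.0   # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0   # row pairs: difference
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img1, img2):
    """Fuse two registered images: average the approximations,
    keep the detail coefficient with the larger magnitude."""
    c1, c2 = haar2(img1), haar2(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(c1[1:], c2[1:])]
    return ihaar2(ll, *details)
```

The max-magnitude detail rule tends to carry the sharper of the two sensors' edges into the fused result, which is why the filter choice matters: a poorly chosen filter smears the detail coefficients the rule selects from.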
In the following work we present a family of wavelets that never delocalizes edge detection. This result is obtained by considering a new model that introduces a close edge. The mutual influence induced by the close edge is discussed. Two properties of the regularization filter are proposed to avoid edge delocalization. The edge detector that never delocalizes is selected to generate a family of wavelets. Each wavelet has n vanishing moments and is useful for different applications. This work is conducted entirely in the discrete domain.
In this paper, a wavelet-based enhancement method for multicomponent images or image series is proposed. The method applies Bayesian estimation, using a high-resolution noise-free grey-scale image as prior information. The resulting estimator statistically exploits the correlation between the image series and the high-resolution noise-free image to enhance the image series, i.e., to improve its signal-to-noise ratio and spatial resolution. To validate and demonstrate the procedure, results are shown on a color image. The idea of using an auxiliary image can be applied in many different domains. As examples, experiments are conducted in two application domains: resolution enhancement of multispectral remote sensing images and improvement of brain activity measurements on functional MRI image time series.
This paper reviews recent work on the use of Gabor filters in industrial applications. After a brief review of Gabor filter basics, the two usual approaches are recalled: the filter-bank approach and the filter-design approach. The third part presents recently published work, domain by domain. A fourth part presents our own work with Gabor filters for defect detection on semiconductors. A short conclusion summarizes the paper.
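In the filter-bank approach, the image is convolved with a small set of Gabor kernels at fixed orientations and frequencies. A minimal sketch of such a kernel and bank, with illustrative (not application-tuned) parameter values:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a sinusoidal plane wave at
    orientation `theta` under an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

# a small bank at four orientations, as in the filter-bank approach
bank = [gabor_kernel(15, wavelength=8.0, theta=t, sigma=4.0)
        for t in np.deg2rad([0, 45, 90, 135])]
```

Convolving a textured image with each kernel in the bank and comparing the response energies per orientation is the usual first step before defect thresholding.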
The manufacturing processes for paper and similar non-woven fiber webs can affect end-use properties. In this paper, we document a new wavelet-based product diagnostic method. The method combines one-dimensional Morlet and isotropic two-dimensional Mexican hat wavelets with wavelet-based filtering and denoising techniques. Two samples produced in pilot machines by different forming methods are examined for variations in their mass per unit area, the grammage. The grammage maps are decomposed into three layers: one associated with the nearly-periodic grammage streaks at large scale, one associated with flocs or related medium-size structures, and the background, which combines pixel-size fluctuations and large-scale stochastic modulations. By correlating the structure and background fluctuations with the local phase of the streaks, we show that one sample exhibits formation streaks (statistical variations in properties other than mean grammage) that are not found in the other.
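The isotropic 2-D Mexican hat used for the floc layer can be sketched as follows; the grid size and scale value are illustrative, not those of the study:

```python
import numpy as np

def mexican_hat_2d(size, scale):
    """Isotropic 2-D Mexican hat wavelet (negative Laplacian of a
    Gaussian) sampled on a (size x size) grid; `scale` selects the
    floc-sized structures the wavelet responds to (size assumed odd)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = (x**2 + y**2) / float(scale) ** 2
    return (2.0 - r2) * np.exp(-r2 / 2.0)

kernel = mexican_hat_2d(41, 4.0)
# correlating a grammage map with `kernel` (e.g. via FFT convolution)
# gives large responses wherever a floc of roughly that scale sits
```

The kernel has near-zero mean (the wavelet admissibility condition), so the uniform grammage level drops out and only floc-scale fluctuations remain in the response.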
Our article presents a new way to characterize texture: Wavelet Geometrical Features, which extract structural measurements from wavelet sub-bands, whereas most wavelet-based methods found in the literature use only statistical ones. We first describe the method used to compute our features, and then compare them to thirteen other standard texture features in a classification experiment on the whole Brodatz texture database. We show that our method produces the best results, especially compared with the wavelet energy signature and the method it originated from, the Statistical Geometrical Features of Chen.
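For comparison, the wavelet energy signature baseline mentioned above reduces each detail subband to a single mean-energy statistic. A minimal Haar-based sketch (the actual experiments may use other filters and depths):

```python
import numpy as np

def energy_signature(img, levels=2):
    """Wavelet energy signature: mean energy of each Haar detail
    subband, a purely statistical texture descriptor (the baseline
    the geometrical features are compared against)."""
    feats = []
    a = img.astype(float)
    for _ in range(levels):
        rl = (a[0::2] + a[1::2]) / 2.0          # row-pair averages
        rh = (a[0::2] - a[1::2]) / 2.0          # row-pair differences
        ll = (rl[:, 0::2] + rl[:, 1::2]) / 2.0
        lh = (rl[:, 0::2] - rl[:, 1::2]) / 2.0  # horizontal detail
        hl = (rh[:, 0::2] + rh[:, 1::2]) / 2.0  # vertical detail
        hh = (rh[:, 0::2] - rh[:, 1::2]) / 2.0  # diagonal detail
        feats += [np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)]
        a = ll                                  # recurse on approximation
    return np.array(feats)
```

Two levels give a 6-dimensional descriptor; a flat image yields all zeros, while a vertically striped texture concentrates energy in the horizontal-detail entries.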
A wavelet-based feature extraction method for Fourier transform infrared (FTIR) cancer data analysis is presented in this paper. A set of low-frequency wavelet basis functions is used to represent the FTIR data, reducing the data dimension and removing noise. The fuzzy C-means algorithm is used to classify the data. Experiments are conducted to compare classification performance using the wavelet features and the original FTIR data, provided by the Derby City General Hospital in the UK. The experiments show that only 30 wavelet features are needed to represent the 901 wave numbers of the FTIR data and still produce good clustering results.
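The clustering step uses the standard fuzzy C-means iteration: alternating soft-membership and centroid updates with a fuzzifier m. A minimal textbook sketch (initialization details, the wavelet feature extraction and the paper's parameter choices are not reproduced):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Basic fuzzy C-means on feature rows X: returns the soft
    membership matrix u (n_samples x c) and the cluster centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)           # rows sum to 1
    for _ in range(n_iter):
        w = u ** m                              # fuzzified weights
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                # guard exact hits
        u = 1.0 / (d ** (2.0 / (m - 1.0)))      # inverse-distance update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

# toy usage: two tight groups of "feature vectors"
X = np.vstack([np.zeros((20, 2)), 10.0 + np.zeros((20, 2))])
u, centers = fuzzy_c_means(X)
labels = u.argmax(axis=1)   # hard labels from the soft memberships
```

In the paper's setting, each row of X would be the 30-dimensional wavelet feature vector of one FTIR spectrum rather than these toy points.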
This paper deals with the enhancement of CCD- (charge-coupled device) based thermal images by applying multiresolution image denoising methods. The main focus of this experimental work is the attempt to determine and visualize the surface temperature of heated metal parts in the temperature range of approximately 300°C to 500°C. The aim is to measure the temperature distribution of metallic objects at the lower physical limits of silicon-based detectors at a very high spatial resolution. It is shown that the examined filter methods lead to an improved and highly reproducible spatial temperature resolution (NETD, noise-equivalent temperature difference). A precondition for correct application of these denoising filters is an exact noise characterization of the imaging system. This characterization is based on the "Photon Transfer Technique", which clearly demonstrates the Poisson characteristic to be the determining factor of the image formation process, i.e., the random nature of photon emission and detection is the dominant source of noise in the imaging system presented. Based on these results, examples of density and intensity estimates of Poisson noise images with multiresolution methods (wavelets, platelets) are presented, showing the improved image quality and temperature resolution after denoising.
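One common way to apply Gaussian-oriented multiresolution denoisers to Poisson-dominated data of this kind (not necessarily the exact procedure used here) is to first variance-stabilize with the Anscombe transform:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson counts to approximately
    unit-variance Gaussian data, after which standard Gaussian
    wavelet thresholding can be applied."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # simple algebraic inverse (an unbiased inverse is more involved)
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

For photon counts of the order reached by heated metal parts, the stabilized data has variance very close to one regardless of the local intensity, which is exactly the regime in which Gaussian-noise wavelet filters behave predictably.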
This paper presents a novel approach for defect detection using a wavelet-domain Hidden Markov Tree (HMT) model and a level-set segmentation technique. The background, which is assumed to contain homogeneous texture, is modeled off-line with the HMT. Using this model, a region map of the defect image is produced on-line through likelihood calculations, accumulated in a coarse-to-fine manner in the wavelet domain. As expected, the region map is basically separated into two regions: 1) the defects, and 2) the background. A level-set segmentation technique is then applied to this region map to locate the defects. This approach is tested with images of defective fabric, as well as x-ray images of cotton with trash. The proposed method shows promising preliminary results, suggesting that it may be extended to a more general approach of defect detection.
In this paper a new denoising technique for gray-valued images is presented. The proposed technique is best suited for flat or textured images affected by relatively low noise levels, where we aim at high-quality reconstruction of tiny image structures and fine details. To avoid the attenuation of these fine details, we replace the common wavelet thresholding and shrinking rules by an averaging step over a region of consistent edge directions. This region is obtained by first extracting the pixels that belong to an "oriented structure". We develop a classification algorithm which extracts the oriented structures using directional information from the wavelet detail images. After this classification step we perform adaptive averaging: each pixel is averaged over a window that depends on the detected structures in its neighbourhood. We demonstrate the visual improvement of our method over two spatially adaptive wavelet shrinkage methods.
In this paper, we review an implementation of the ridgelet transform: the Discrete Analytical Ridgelet Transform (DART). This transform uses a Fourier strategy to compute the associated 2-D and 3-D discrete Radon transforms. The innovative step is the definition of a discrete 3-D transform and the construction of discrete analytical lines in the Fourier domain. These discrete analytical lines have a parameter, the arithmetical thickness, allowing us to define a DART adapted to a specific application. Indeed, the DART representation is not orthogonal; it is associated with a flexible redundancy factor. The DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In previous publications we showed that the 2-D and 3-D DART perform well for grey-level image restoration; here we therefore turn to 2-D/3-D color image restoration. We compare the restoration results as a function of the color space used and of the amount of white Gaussian noise. We assess our results with two different measures: the signal-to-noise ratio, and perceptual measures evaluating the perceptual colour difference between the original and denoised images. These experimental results show that simple thresholding of the DART coefficients is competitive with classical denoising techniques.
This paper presents a method to design a wavelet filter that minimizes the entropy of the wavelet transform. Filters that minimize entropy in images tend to filter out texture while highlighting features of interest. The design of the wavelet filter is couched as a non-convex optimization problem, which is solved using a hybridized genetic algorithm. As an example, three distinct filters are tuned to detect horizontal, vertical and blob defects in woven fabrics. The effects of shifting on the optimized set of coefficients are also explored.
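The fitness driving such a search can be sketched as the Shannon entropy of the normalized coefficient energies: a filter that concentrates energy in few coefficients scores low. The function below is a generic sketch of this cost (the paper's exact objective and the hybridized genetic algorithm are not reproduced):

```python
import numpy as np

def coefficient_entropy(coeffs):
    """Shannon entropy (bits) of the normalized energies of a set of
    wavelet coefficients; a genetic algorithm would search for filter
    coefficients that minimize this value on training images."""
    e = coeffs.ravel() ** 2
    p = e / e.sum()          # energy distribution over coefficients
    p = p[p > 0]             # 0 * log(0) taken as 0
    return -np.sum(p * np.log2(p))

sparse = coefficient_entropy(np.array([4.0, 0.0, 0.0, 0.0]))  # one spike
dense = coefficient_entropy(np.ones(4))                       # spread out
```

A response with all energy in one coefficient has entropy 0, while a uniform response over n coefficients has entropy log2(n), so minimizing this cost drives the filter toward sparse, defect-highlighting responses.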
The wavelet transform is well suited to the approximation of two-dimensional functions with certain smoothness characteristics. Point singularities, e.g. texture-like structures, can also be compactly represented by wavelet methods. However, when representing line singularities that follow a smooth curve in the domain (and could therefore be characterized by a few parameters), the number of wavelet coefficients needed rises dramatically, since the fine-scale tensor-product wavelets that catch these steep transitions have small local support. Nonetheless, for images consisting of smoothly colored regions separated by smooth contours (e.g. sketches), most of the information is contained in the line singularities. For this class of images, wavelet methods have a suboptimal approximation rate due to their inability to exploit the way the point singularities line up to form a smooth line singularity.
To compensate for these shortcomings of tensor-product wavelets, several schemes have already been developed, such as curvelets, ridgelets and bandelets. This paper proposes a nonlinear normal offset decomposition method which partitions the domain such that line singularities are approximated by piecewise curves made up of the borders of the subdomains resulting from the partitioning. Although more general domain partitions are possible, we chose a triangulation of the domain, which approximates the contours by polylines formed by triangle edges. The nonlinearity lies in the fact that the normal offset method searches from the midpoint of each edge of a coarse mesh along the normal direction until it pierces the image. These piercing points have the property of being attracted towards steep color-value transitions; as a consequence, triangle edges line up against the contours.
The expansion of cell phones provides an additional channel for digital video content distribution: music clips, news and sports events are increasingly transmitted to mobile users. Consequently, from the watermarking point of view, a new challenge must be taken up: very low bitrate content (e.g. as low as 64 kbit/s) now has to be protected. Within this framework, the paper investigates for the first time mathematical models for two random processes: the original video to be protected, and a very harmful attack that any watermarking method must face, the StirMark attack. By applying an advanced statistical investigation (combining the Chi-square, Ro, Fisher and Student tests) in the discrete wavelet domain, it is established that the popular Gaussian assumption applies only very restrictively to the former process and not at all to the latter. As these results can a priori determine the performance of several watermarking methods, both of the spread-spectrum and informed-embedding types, they should be considered at the design stage.
In the RESUME project (Reconfigurable Embedded Systems for Use in Multimedia Environments) we explore the benefits of an implementation of scalable multimedia applications using reconfigurable hardware by building an FPGA implementation of a scalable wavelet-based video decoder. The term "scalable" refers to a design that can easily accommodate changes in quality of service with minimal computational overhead. This is important for portable devices that have different Quality of Service (QoS) requirements and have varying power restrictions.
The scalable video decoder consists of three major blocks: a Wavelet Entropy Decoder (WED), an Inverse Discrete Wavelet Transformer (IDWT) and a Motion Compensator (MC). The WED decodes entropy-encoded parts of the video stream into wavelet-transformed frames. These frames are decoded bit layer by bit layer; the more bit layers are decoded, the higher the image quality (quality scalability). Resolution scalability is obtained as an inherent property of the IDWT. Finally, frame-rate scalability is achieved through hierarchical motion compensation.
In this article we present the results of our investigation into the hardware implementation of such a scalable video codec. In particular we found that the implementation of the entropy codec is a significant bottleneck. We present an alternative, hardware-friendly algorithm for entropy coding with excellent data locality (both temporal and spatial), streaming capabilities, a high degree of parallelism, a smaller memory footprint and state-of-the-art compression while maintaining all required scalability properties. These claims are supported by an effective hardware implementation on an FPGA.
MESHGRID is a novel, compact, multi-scalable and animation-friendly 3D object representation method, which is part of MPEG-4, and which resides in the Animation Framework Extensions (AFX) toolset. The paper introduces the novel concept of local error control for arbitrary mesh encoding. In this sense, the paper proposes a new wavelet-based L∞-constrained coding technique for MESHGRID models, generating a fully scalable L∞-oriented bit-stream. The advantages of scalable L∞-oriented coding over L2 coding are experimentally demonstrated.
In this paper, we propose a passive error concealment scheme for the reconstruction of wavelet-coded images damaged by packet loss. The proposed interpolation scheme calculates a lost coefficient from its neighbors while adapting the interpolation weights to the image correlation in each direction. All subbands are processed independently, which allows fast, parallel execution. This is interesting for real-time video applications such as two-way video communication. The results demonstrate that the proposed scheme outperforms existing schemes of similar complexity, both in terms of mean squared error and visually.
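The direction-adaptive interpolation can be sketched as follows: the lost coefficient is estimated from its horizontal and vertical neighbors, with each direction weighted inversely to its local variation, so the smoother (more correlated) direction dominates. This is an illustrative sketch, not the paper's exact weighting:

```python
import numpy as np

def conceal(band, i, j):
    """Estimate a lost coefficient band[i, j] from its horizontal and
    vertical neighbors; the direction with less local variation is
    assumed more correlated and receives the larger weight."""
    h = 0.5 * (band[i, j - 1] + band[i, j + 1])   # horizontal average
    v = 0.5 * (band[i - 1, j] + band[i + 1, j])   # vertical average
    dh = abs(band[i, j - 1] - band[i, j + 1])     # horizontal variation
    dv = abs(band[i - 1, j] - band[i + 1, j])     # vertical variation
    wh, wv = 1.0 / (1e-6 + dh), 1.0 / (1e-6 + dv)
    return (wh * h + wv * v) / (wh + wv)

# toy subband: rows are smooth, columns cross an edge; the estimate
# for the "lost" center value therefore follows the horizontal pair
band = np.array([[0.0, 0.0, 0.0],
                 [5.0, -1.0, 5.0],    # -1.0 marks the lost coefficient
                 [9.0, 9.0, 9.0]])
est = conceal(band, 1, 1)
```

Because each lost coefficient depends only on its own neighborhood within one subband, all subbands (and all losses within a subband) can be concealed in parallel, matching the real-time motivation above.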