In the case of textured images, and more particularly of directional textures, a new parametric technique is proposed to estimate the orientation field of textures. It consists of segmenting the image into regions with homogeneous orientations and estimating the orientation inside each of these regions. This allows us to maximize the size of the samples used to estimate the orientation without being corrupted by the presence of boundaries between regions. For that purpose, the local (hence noisy) orientations of the texture are first estimated using small filters (3×3 pixels). The segmentation of the obtained orientation field then relies on a generalization of a minimum-description-length-based segmentation technique to the case of π-periodic circular data modeled with von Mises probability density functions. This leads to a fast segmentation algorithm with no tuning parameters in the optimized criterion. The accuracy of the orientations estimated with the proposed method is then compared with that of other approaches on synthetic images, and an application to the processing of real images is finally addressed.
This paper deals with textured images, and more particularly with directional textures. We propose a new parametric technique to estimate the orientation field of textures. It consists in partitioning the image into regions with homogeneous orientations and then estimating the orientation inside each of these regions, which allows us to maximize the size of the samples used to estimate the orientation without being corrupted by the presence of boundaries between regions. Once the local (hence noisy) orientations of the texture have been estimated using small filters (3×3 pixels), image partitioning is based on the minimization of the stochastic complexity (minimum description length principle) of the orientation field. The orientation fluctuations are modeled with von Mises probability density functions, leading to a fast and unsupervised partitioning algorithm. The accuracy of the orientations estimated with the proposed method is then compared with that of other approaches on synthetic images. An application to the processing of real images is finally addressed.
We review new results on the recently proposed intrinsic degrees of coherence of partially polarized light.
We show that they can be derived from an invariance principle equivalent to that used to derive the standard
degree of coherence of scalar fields. We compare them with the degree of coherence recently proposed by Wolf.
We illustrate the difference and the complementarity between these two definitions on a simple example of optical
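The difference between the two definitions can be illustrated numerically. The sketch below assumes the standard formulations: Wolf's degree of coherence as the normalized trace of the 2×2 mutual coherence matrix, and the intrinsic degrees of coherence as the singular values of that matrix normalized by the local polarization matrices. The specific configuration chosen (unpolarized light whose polarization state is rotated by 90° between the two points) is an illustrative assumption, not necessarily the example discussed in the text.

```python
import numpy as np

def wolf_degree(W12, G1, G2):
    """Wolf's degree of coherence: normalized trace of the mutual
    coherence matrix W12, with G1, G2 the local polarization matrices."""
    return np.trace(W12) / np.sqrt(np.trace(G1).real * np.trace(G2).real)

def intrinsic_degrees(W12, G1, G2):
    """Intrinsic degrees of coherence: singular values of
    G1^{-1/2} W12 G2^{-1/2} (assumed nonsingular G1, G2)."""
    def inv_sqrt(G):
        w, v = np.linalg.eigh(G)
        return v @ np.diag(1.0 / np.sqrt(w)) @ v.conj().T
    M = inv_sqrt(G1) @ W12 @ inv_sqrt(G2)
    return np.linalg.svd(M, compute_uv=False)
```

For unpolarized light (identity polarization matrices) deterministically rotated by 90° between the two points, the mutual coherence matrix is a rotation matrix: Wolf's degree vanishes while both intrinsic degrees equal 1, exhibiting the complementarity between the two definitions.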
The presented study, based on the continuous wavelet transform and time-frequency representations, introduces new algorithms that perform different kinds of separation processing depending on the nature of the seismic data. When dealing with a one-dimensional recorded signal (one sensor), we propose a segmentation of its time-scale representation. This leads to the automatic detection and separation of the different waves. This algorithm can be applied to a whole seismic profile containing several sensors by tracking the segmentation features in the time-scale image sequence. The resulting separation algorithm is efficient as long as the patterns of the different waves do not overlap in the time-scale plane. We then exploit the redundancy of information in higher-dimensional data to increase the separation possibilities in the presence of interference. In the case of vector sensors, we use the polarization information to separate the different waves using phase shifts, rotations, and amplifications. Finally, in the case of linear array data, we use the propagation velocity information to separate dispersive waves with overlapping patterns. For this purpose, we propose a new time-scale representation which enables the estimation of the wave dispersion function from a small array of sensors.
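As a sketch of the kind of time-scale representation on which the segmentation operates, here is a minimal FFT-based continuous wavelet transform. The Morlet wavelet is an illustrative choice (common in seismic processing; the text does not name the wavelet), and the scale grid is arbitrary.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """FFT-based continuous wavelet transform with a Morlet wavelet.

    Returns a (len(scales), len(signal)) complex time-scale image; the
    modulus of such an image is the kind of representation that can be
    segmented to isolate the patterns of different seismic waves.
    """
    n = len(signal)
    freqs = np.fft.fftfreq(n)            # cycles/sample
    sig_hat = np.fft.fft(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Fourier transform of the (analytic, approximate) Morlet
        # wavelet dilated by scale s; negligible at negative frequencies.
        psi_hat = (np.pi ** -0.25 * np.sqrt(2 * np.pi * s)
                   * np.exp(-0.5 * (2 * np.pi * s * freqs - w0) ** 2))
        out[i] = np.fft.ifft(sig_hat * psi_hat)
    return out
```

A pure tone of frequency f0 produces a horizontal ridge near the scale s = w0 / (2π f0); a dispersive wave, by contrast, traces a curved ridge, which is what makes overlapping dispersive patterns separable across an array.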