This paper proposes a class of random functions that models multidimensional scenes (microscopy, macroscopy, video sequences, etc.) in a particularly adequate way. In the triplet (V, σ, P) that defines a random function, the σ-algebra is the one introduced by G. Matheron in his theory of the upper (or lower) semi-continuous functions from a topological space E into R. The set V of mathematical objects, on the other hand, is the class Lω of equicontinuous functions of a given modulus, ω say, that map E, assumed to be metric, into R or R^n. For a comprehensive number of metrics on R, class Lω is a compact subset of the u.s.c. functions E → R, on which the topology reduces to that of pointwise convergence. In addition, class Lω is closed under the usual dilations, erosions, and morphological filters, as well as under convolutions g such that ∫|g(dx)| = 1. Examples of the soundness of the model are given.
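As a point of reference, the class of equicontinuous functions with modulus ω admits the standard formulation below. This is a minimal restatement consistent with the abstract, not the paper's exact definition; it assumes (E, d) is the metric space mentioned above and that ω is an increasing function with ω(0) = 0.

```latex
% Standard definition of the class L_omega of functions that are
% equicontinuous with modulus omega on the metric space (E, d):
L_{\omega} \;=\; \bigl\{\, f : E \to \mathbb{R} \;\bigm|\;
    |f(x) - f(y)| \,\le\, \omega\bigl(d(x,y)\bigr)
    \ \ \text{for all } x, y \in E \,\bigr\}
```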
The aim of this paper is to reproduce the texture of rough surfaces by means of simulations of probabilistic random function models. The studied texture is obtained by a physical process, electro-erosion discharge. After an introduction to this process and to the characterization of random functions, different morphological models are reviewed and tested: the Boolean random function, the dead leaves random function, the sequential alternate random function, and the dilution random function. They all combine a Poisson point process with elementary patterns, the primary random functions. For each model, the case of cylinder primary functions is first considered for illustration; then simulations with roughness craters are compared to rough surfaces. The final model introduces a repulsion distance into the simulation, as could be expected from the variogram of the data.
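To make the construction concrete, the sketch below simulates a Boolean random function on a discrete grid: germ points drawn from a homogeneous Poisson point process each carry a cylinder primary function, and the cylinders are combined by supremum. The grid size, intensity, and cylinder radius/height ranges are illustrative choices, not the parameters used in the paper, and the repulsion distance of the final model is not included.

```python
import numpy as np

rng = np.random.default_rng(0)

def boolean_random_function(shape=(256, 256), intensity=2e-3,
                            radius=8, height_range=(0.5, 1.5)):
    """Boolean random function with cylinder primary functions:
    Z(x) = sup_k g_k(x - x_k), germs x_k from a Poisson point process."""
    ny, nx = shape
    z = np.zeros(shape)                              # background level taken as 0
    n_germs = rng.poisson(intensity * nx * ny)       # Poisson number of germs
    cx = rng.uniform(0, nx, n_germs)
    cy = rng.uniform(0, ny, n_germs)
    heights = rng.uniform(*height_range, n_germs)    # random cylinder heights
    yy, xx = np.mgrid[0:ny, 0:nx]
    for x0, y0, h in zip(cx, cy, heights):
        cylinder = np.where((xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2, h, 0.0)
        z = np.maximum(z, cylinder)                  # combine primary functions by supremum
    return z

surface = boolean_random_function()
```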
Morphological size distributions and densities are frequently used as descriptors of granularity or texture within an image. They have been successfully employed in a number of image processing and analysis tasks, including shape analysis, multiscale shape representation, texture classification, and noise filtering. In most cases, however, it is not possible to analytically compute these quantities. In this paper, we study the problem of estimating the (discrete) morphological size distribution and density of random images, by means of empirical as well as Monte Carlo estimators. Theoretical and experimental results demonstrate clear superiority of the Monte Carlo estimation approach. Examples illustrate the usefulness of the proposed estimators in traditional image processing and analysis problems.
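A minimal sketch of the estimation idea, assuming a discrete size distribution defined through openings by growing square structuring elements: compute the pattern spectrum of each realization and average it over independent simulations. The random-image model, window sizes, and scipy-based implementation below are illustrative stand-ins, not the estimators analyzed in the paper.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def size_distribution(img, max_size=6):
    """Pattern spectrum: fraction of foreground surviving openings by
    growing square structuring elements."""
    areas = [img.sum()]
    for k in range(1, max_size + 1):
        se = np.ones((2 * k + 1, 2 * k + 1), dtype=bool)
        areas.append(ndimage.binary_opening(img, structure=se).sum())
    areas = np.asarray(areas, dtype=float)
    return areas / areas[0]

def monte_carlo_estimate(n_realizations=50, shape=(128, 128), density=0.03):
    """Average the size distribution over independent realizations of a
    purely illustrative random binary image model."""
    total = np.zeros(7)
    for _ in range(n_realizations):
        img = ndimage.binary_dilation(rng.random(shape) < density)  # toy random set
        total += size_distribution(img)
    return total / n_realizations

print(monte_carlo_estimate())
```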
Representation of set operators by artificial neural networks and design of such operators by inference of network parameters are popular techniques in binary image analysis. We propose an alternative to this approach: automatic programming of morphological machines (MMachs) by the design of statistically optimal operators. We propose a formulation of the procedure for designing set operators that extends the one stated by Dougherty for binary image restoration, show how this new formulation relates to the one stated by Haussler for learning Boolean concepts in the context of machine learning theory (which is usually applied to neural networks), present a new learning algorithm for Boolean concepts represented as MMach programs, and give some application examples in binary image analysis.
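A minimal sketch of the statistically optimal design idea for a binary window operator, under the plug-in rule: for every observed window pattern, output the label that occurs most often in training pairs of (observed, ideal) images. The window size and dictionary representation are illustrative; the MMach program representation and the learning algorithm proposed in the paper are not reproduced here.

```python
import numpy as np
from collections import Counter, defaultdict

def learn_window_operator(observed, ideal, window=(3, 3)):
    """Plug-in estimate of the optimal binary window operator: for each
    observed window pattern, output the most frequent ideal label."""
    wy, wx = window
    py, px = wy // 2, wx // 2
    counts = defaultdict(Counter)
    padded = np.pad(observed, ((py, py), (px, px)))
    for y in range(observed.shape[0]):
        for x in range(observed.shape[1]):
            pattern = tuple(padded[y:y + wy, x:x + wx].astype(int).ravel())
            counts[pattern][int(ideal[y, x])] += 1
    return {p: c.most_common(1)[0][0] for p, c in counts.items()}

def apply_window_operator(operator, observed, window=(3, 3), default=0):
    """Apply the learned pattern -> label table to a new binary image."""
    wy, wx = window
    py, px = wy // 2, wx // 2
    padded = np.pad(observed, ((py, py), (px, px)))
    out = np.zeros(observed.shape, dtype=int)
    for y in range(observed.shape[0]):
        for x in range(observed.shape[1]):
            pattern = tuple(padded[y:y + wy, x:x + wx].astype(int).ravel())
            out[y, x] = operator.get(pattern, default)
    return out
```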
We describe a new technique for the simulation of large communication networks. This technique is based on a new approach to modeling mobile communication networks by means of point processes and stochastic geometry tools. The simulator ARC, developed by the authors and described in the paper, uses recent, efficient geometric algorithms for computing the topology of the system model. New algorithms based on the stochastic gradient technique and implemented in the simulator allow the evaluation of certain performance characteristics of the system and can be used for optimization of the system parameters. We analyze the estimator used and give its asymptotic variance.
Lattice gas models use particles moving and interacting on a graph. Developed to simulate complex flows, they can be used to generate random structures on a physical basis. By adding chemical reactions between species to the fluid motion, reaction-diffusion models can be generated. This is illustrated by simulations of random textures obtained from a creation-annihilation model and from a multi-species model. Finally, by means of the immiscible lattice gas coupled with an aggregation process, stratified media reproducing the deposition of droplets are generated.
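A toy illustration in the spirit of a creation-annihilation model: particles perform synchronous random walks on a periodic square lattice, pairs meeting on a site annihilate, and new particles are created at a constant rate. The rules and rates below are illustrative simplifications, not the lattice gas dynamics used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def creation_annihilation(shape=(64, 64), steps=200, create_rate=0.01):
    """Toy creation-annihilation dynamics on a periodic square lattice."""
    occupied = rng.random(shape) < 0.05                  # initial random occupation
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    for _ in range(steps):
        counts = np.zeros(shape, dtype=int)
        for y, x in zip(*np.nonzero(occupied)):
            dy, dx = moves[rng.integers(4)]              # synchronous random walk step
            counts[(y + dy) % shape[0], (x + dx) % shape[1]] += 1
        occupied = (counts % 2).astype(bool)             # pairs meeting on a site annihilate
        occupied |= rng.random(shape) < create_rate      # constant-rate creation
    return occupied

texture = creation_annihilation()
```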
Developing generic models to simulate complex behaviors is one of the great challenges of numerical simulation. We present here a generic approach based on particle systems that has proved to be efficient in various kinds of complex situations, such as
crowd simulation or airbag deployment. In crowd simulation, each particle represents one person in the crowd, whose behavior is assigned according to the class it belongs to. In airbag deployment, both the gas mixture and the bag are modeled by particles. In this case, particles do not represent the molecules but abstract blocks chosen to reflect the macroscopic behavior of the system. By describing these two concrete applications, we intend to illustrate how our generic model can be successfully applied to a larger class of problems.
We present a study concerning the practical possibilities of using homomorphic filtering for color image enhancement. Two of the most popular color models, RGB and C-Y (color difference), are employed and the results are comparatively discussed. Homomorphic filtering has proven to be a viable tool for both color models considered.
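A minimal sketch of homomorphic filtering for a single channel (for example, a luminance or intensity channel), assuming a Gaussian-shaped high-emphasis transfer function; the cutoff and gain values are illustrative, and the paper's specific treatment of the RGB and C-Y components is not reproduced here.

```python
import numpy as np

def homomorphic_filter(channel, cutoff=0.1, gain_low=0.5, gain_high=1.5, eps=1e-6):
    """Homomorphic filtering of one channel: log -> frequency-domain
    high-emphasis filter -> exp."""
    log_img = np.log(channel + eps)
    fy = np.fft.fftfreq(channel.shape[0])[:, None]
    fx = np.fft.fftfreq(channel.shape[1])[None, :]
    radius_sq = fx ** 2 + fy ** 2
    # Gaussian-shaped transfer function: attenuates the slowly varying
    # illumination component, boosts high-frequency reflectance detail.
    transfer = gain_low + (gain_high - gain_low) * (1.0 - np.exp(-radius_sq / (2 * cutoff ** 2)))
    filtered = np.real(np.fft.ifft2(np.fft.fft2(log_img) * transfer))
    return np.exp(filtered) - eps
```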
Quantization noise prevalent in transform encoded images becomes increasingly objectionable as the required bit rate for the compressed image representation is reduced. The perceptual effect of this coding noise is not uniform throughout the image, however, being highly dependent on the local behavior of the signal on which it is superimposed. An adaptive, nonlinear postprocessing algorithm is described, which is shown to appreciably enhance the
subjective quality of the reconstructed image. A three-component image model is adopted, according to which any image is considered to be composed of nonoverlapping strong edge, textured, and monotone components. The foundation of the postprocessing algorithm is a computationally efficient edge classifier capable of resolving an image into its three components. The classifier operates by exploiting the characteristic shape of the histogram of pixel luminance values in a strong edge region to distinguish between strong and textured edges. The postprocessing algorithm consists of a combination of adaptive α-trimmed mean filtering (where the α value and window size are determined by the output of the edge classifier) and a cosine transform domain dithering technique. The results presented confirm the efficacy of the proposed approach.
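A minimal sketch of the α-trimmed mean filter at the core of the postprocessing: sort the samples in a window, discard the α fraction at each end, and average the remainder. In the paper the α value and window size come from the edge classifier; here they are fixed, illustrative parameters, and the DCT-domain dithering stage is omitted.

```python
import numpy as np

def alpha_trimmed_mean(img, window=3, alpha=0.2):
    """Alpha-trimmed mean: average the window samples after discarding
    the alpha fraction of smallest and of largest values."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    n = window * window
    trim = int(alpha * n)                        # samples dropped from each end
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            block = np.sort(padded[y:y + window, x:x + window], axis=None)
            out[y, x] = block[trim:n - trim].mean()
    return out
```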
The wavelet transform, which provides a multiresolution representation of images, has been widely used in image compression. A new image coding scheme using the wavelet transform and lattice vector quantization is presented. The input image is first decomposed into a hierarchy of three layers containing 10 subimages by discrete wavelet transform. The lowest resolution low-frequency
subimage is scalar quantized with 8 bits/pixel. High-frequency subimages are encoded by lattice vector quantization. A pyramidal piecewise uniform companding approach is used to design the lattice
quantizer according to a piecewise constant approximation to the probability density function of the input source. Owing to the fast lattice quantization algorithm, the computational complexity is greatly reduced compared to vector quantizers based on the Linde-Buzo-Gray (LBG) algorithm. Computer simulations show that the proposed coding scheme can achieve a high compression ratio while maintaining good reconstruction image quality (both objectively and subjectively).
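As an example of why lattice quantization is fast, the sketch below finds the nearest point of the D4 lattice (integer vectors with even coordinate sum) by simple rounding rules. D4 is shown only as a common lattice choice; the specific lattice, the pyramidal companding, and the bit allocation used in the paper are not reproduced.

```python
import numpy as np

def nearest_d4(v):
    """Nearest point of the D4 lattice (integer 4-vectors with even
    coordinate sum), using the Conway-Sloane rounding rule."""
    v = np.asarray(v, dtype=float)
    nearest = np.round(v)
    if int(nearest.sum()) % 2 != 0:
        # Sum is odd: re-round the coordinate with the largest rounding
        # error to its second-nearest integer.
        err = v - nearest
        i = int(np.argmax(np.abs(err)))
        nearest[i] += np.sign(err[i]) if err[i] != 0 else 1.0
    return nearest

print(nearest_d4([0.6, 0.4, -0.1, 0.2]))
```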
A modified error-diffusion algorithm that eliminates the false-texture contour phenomenon that usually appears in halftone images is presented. The main idea behind our modification is to introduce a local filter that smooths the false-texture contours in the low-variation regions of images generated by the standard error-diffusion algorithm. Experimental results are included.
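For reference, the standard error-diffusion loop (Floyd-Steinberg weights) that such modifications build on is sketched below; the local smoothing filter proposed in the paper is not included, and the threshold and weights are the usual textbook values.

```python
import numpy as np

def floyd_steinberg(img):
    """Standard Floyd-Steinberg error diffusion to a bilevel image."""
    work = img.astype(float).copy()
    h, w = work.shape
    halftone = np.zeros_like(work)
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            halftone[y, x] = new
            err = old - new
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16           # right
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16   # below-left
                work[y + 1, x] += err * 5 / 16           # below
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16   # below-right
    return halftone
```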
An active range imaging system based on triangulation
between a projected grid and an image viewed by a camera is discussed. The geometry of the imaging system is designed to readily accommodate a calibration of both the camera and the projected
grid using 2-D calibration objects. A new, improved grid design is presented to increase the volume in which the correspondence of range points can be resolved unambiguously. The steps needed to extract range information from the camera image are reviewed, including the image processing techniques used to locate range points in the camera image and to resolve the correspondence between projected and imaged range points using the principle of connectivity and a simple look-up table produced off-line during calibration. Finally, the calculation of range data based on triangulation is presented, together
with experiments to demonstrate the general performance of the imaging system.
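A minimal sketch of the final triangulation step, assuming each imaged grid point back-projects to a camera ray and each projected grid line defines a light plane from the projector: the range point is the ray-plane intersection. The calibration quantities below are illustrative placeholders, not values obtained with the 2-D calibration objects described in the paper.

```python
import numpy as np

def triangulate(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect a camera ray with a projector light plane to get a 3-D range point."""
    ray_origin = np.asarray(ray_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the light plane")
    t = np.dot(plane_normal, np.asarray(plane_point, dtype=float) - ray_origin) / denom
    return ray_origin + t * ray_dir

# Illustrative placeholder calibration values (not from the paper)
camera_center = np.array([0.0, 0.0, 0.0])
pixel_ray = np.array([0.1, 0.05, 1.0])            # back-projected image point direction
plane_point = np.array([0.2, 0.0, 0.0])           # a point on the projected light plane
plane_normal = np.array([1.0, 0.0, 0.3])
print(triangulate(camera_center, pixel_ray, plane_point, plane_normal))
```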