In the context of colorimetric matching, the intent of color scanner and printer calibrations is to relate the device-dependent responses to device-independent representations such as CIEXYZ or CIE 1976 L*a*b* (CIELAB). Usually, this is accomplished by a two-step process of gray balancing and a matrix transformation, using a transfer matrix obtained from multiple polynomial regression. Color calibrations, printer calibrations in particular,
are highly nonlinear. Thus, a new technique, a neural network with the Cascade Correlation learning architecture, is employed to represent the map from device values to CIE standards. Neural networks are known for their ability to learn highly nonlinear
relationships from presented examples. Excellent results are obtained using this particular neural net; in most training sets, the average color differences are about one ΔE*ab unit. This approach is compared to polynomial approximations ranging from a 3-term
linear fit to a 14-term cubic equation. The results from training sets indicate that the neural net outperforms the polynomial approximation. However, the comparison is not made on equal footing,
and the generalizations, using the trained neural net to predict relationships it has not been trained with, are sometimes rather poor. Nevertheless, the neural network is a very promising tool for use in
color calibrations and other color technologies in general.
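As a point of reference for the polynomial baseline discussed above, the sketch below fits a least-squares transfer matrix from device RGB to CIELAB. The term counts (3-term linear through 14-term cubic) follow the abstract, but the exact term lists and the synthetic patch data are assumptions, not the authors' setup.

```python
import numpy as np

def design_matrix(rgb, degree=1):
    """Expand device RGB values into polynomial regression terms."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    terms = [r, g, b]                                 # 3-term linear fit
    if degree >= 2:
        terms += [r * g, r * b, g * b, r * r, g * g, b * b]
    if degree >= 3:
        terms += [np.ones_like(r), r * g * b, r**3, g**3, b**3]  # 14 terms total
    return np.stack(terms, axis=1)

# Hypothetical training patches: device RGB and measured CIELAB values.
rgb = np.random.rand(200, 3)
lab = np.random.rand(200, 3) * [100, 255, 255] - [0, 128, 128]

X = design_matrix(rgb, degree=3)
M, *_ = np.linalg.lstsq(X, lab, rcond=None)           # transfer matrix

pred = X @ M
delta_e = np.linalg.norm(pred - lab, axis=1)          # CIELAB color difference
print("mean ΔE*ab on training patches:", delta_e.mean())
```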
An application of marker-controlled segmentation in petroleum engineering is presented. The images to be segmented originate from high-resolution conductivity measurements of borehole walls. These measurements reflect the composition and structure of the rock formation through which the well was drilled. In this
application, we detect and measure small cavities, or vugs, in the walls. We use the tools provided by mathematical morphology. Our strategy is based on gradient image modification using markers and on the watershed transformation. First, the vugs are automatically marked, as well as the background. These markers together delineate areas of interest in which we know there is one contour per vug. To find the vug contour and perform measurements, we modify
the gradient image in such a way that only a single edge is kept between the vug and the background markers. We perform the final step of edge detection using the watershed transformation of the modified gradient image. The final result is one closed contour per marked vug. This strategy is presented in detail, experimental results are shown, and artifact elimination is discussed.
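A minimal sketch of this marker-and-watershed pipeline, with scikit-image standing in for the authors' morphological toolkit; the conductivity image and the marker thresholds are placeholders:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

image = np.random.rand(128, 128)              # stand-in for a conductivity image
gradient = sobel(image)                       # gradient image to be flooded

# Mark each candidate vug (dark regions) and the background (bright regions).
vugs, n_vugs = ndi.label(image < 0.2)         # one marker label per vug
markers = np.where(vugs > 0, vugs + 1, 0)
markers[image > 0.8] = 1                      # background marker

# Flooding the gradient from the markers leaves exactly one watershed
# line, i.e., one closed contour, between each vug and the background.
labels = watershed(gradient, markers)
areas = np.bincount(labels.ravel())[2:]       # a simple per-vug measurement
```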
Inverse halftoning is the method by which an approximation of a gray-scale image is reconstructed from a binary, halftoned version of the original. Several inverse-halftone algorithms are described, including a three-level cascade algorithm. We demonstrate that a priori knowledge of the halftone technique is not essential, but can be used if available. Finally, we demonstrate the results of applying inverse-halftone operations to both computer
synthesized and photographic images.
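The cascade algorithm itself is not spelled out in the abstract, so the sketch below shows only the simplest baseline consistent with it: low-pass filtering a halftone to estimate the gray-scale original, with no knowledge of the screen used. The Bayer mask and the sigma value are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def halftone(gray):
    """Simple dispersed-dot halftone via a fixed 2x2 Bayer threshold mask."""
    bayer = np.array([[0, 2], [3, 1]]) / 4.0
    mask = np.tile(bayer, (gray.shape[0] // 2, gray.shape[1] // 2))
    return (gray > mask).astype(float)

gray = np.linspace(0, 1, 256)[None, :].repeat(256, axis=0)   # test ramp
binary = halftone(gray)

# Inverse halftoning: smooth away the dot pattern; note that no
# knowledge of the Bayer mask is used at this step.
estimate = gaussian_filter(binary, sigma=1.5)
print("mean absolute error:", np.abs(estimate - gray).mean())
```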
This study addresses how to generate high spatial resolution image data from a low spatial resolution imaging source. An edge-restricted spatial interpolation algorithm is developed to increase the image resolution while enhancing the sharp edges and details of the original imaging source. The algorithm is based on a cubic spline-under-tension interpolation kernel. The weights of the interpolation kernel can be adjusted adaptively
according to the edge information in the neighborhood of the interpolated pixels. The algorithm can be applied to a relatively low spatial resolution image source, such as video, to generate high-resolution
image data for high-quality printing devices.
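As a rough 1-D illustration of edge-restricted interpolation (not the paper's spline-under-tension kernel), the sketch below tightens the interpolation weights when a strong edge lies between the bracketing samples, so the edge is not smeared or overshot. The Keys cubic kernel and the edge threshold are stand-ins.

```python
import numpy as np

def cubic_kernel(t):
    """Keys cubic convolution kernel (a = -0.5), a common stand-in."""
    t = np.abs(t)
    return np.where(t < 1, 1.5 * t**3 - 2.5 * t**2 + 1,
           np.where(t < 2, -0.5 * t**3 + 2.5 * t**2 - 4 * t + 2, 0.0))

def interpolate(samples, x, edge_thresh=0.3):
    """Interpolate at x; near a detected edge, fall back to linear weights."""
    i = int(np.floor(x))
    idx = np.clip([i - 1, i, i + 1, i + 2], 0, len(samples) - 1)
    if abs(samples[idx[2]] - samples[idx[1]]) > edge_thresh:
        f = x - i                              # edge: two nearest samples only
        return (1 - f) * samples[idx[1]] + f * samples[idx[2]]
    w = cubic_kernel(x - (i + np.array([-1, 0, 1, 2])))
    return float(np.dot(w, samples[idx]))

row = np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.9])   # a sharp edge
print([round(interpolate(row, x), 3) for x in np.linspace(1, 4, 7)])
```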
A new mapping transformation method is presented to
scale the control points of discrete polygons so as to locate the transformed coordinates of the control points as accurately as possible. In traditional methods, the area of a control point is regarded as zero, which is suitable only for scaling analog polygons (or a single polygon). By using our proposed technique, the ratio of black points to total points in the resultant polygons can closely approach that of the original discrete polygons. Our technique eliminates the mapping distortion by taking into account the topology relation among the discrete polygons. We apply our technique to Chinese fonts, English fonts, and mathematical expression symbols, and we compare it to three other transformation techniques.
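The evaluation criterion above, the black-to-total point ratio, can be made concrete with a small sketch. The area-coverage scaler below treats each point as a unit square rather than a zero-area point; it illustrates the idea only and is not the authors' mapping transformation.

```python
import numpy as np

def black_ratio(bitmap):
    """Ratio of black points to total points in a discrete polygon bitmap."""
    return bitmap.sum() / bitmap.size

def scale_by_coverage(bitmap, factor):
    """Downscale by averaging the black area covered by each target pixel."""
    h, w = bitmap.shape
    H, W = int(h * factor), int(w * factor)
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            y0, y1 = int(y / factor), max(int((y + 1) / factor), int(y / factor) + 1)
            x0, x1 = int(x / factor), max(int((x + 1) / factor), int(x / factor) + 1)
            out[y, x] = bitmap[y0:y1, x0:x1].mean()
    return (out >= 0.5).astype(int)

glyph = (np.random.rand(64, 64) > 0.6).astype(int)   # stand-in for a font bitmap
scaled = scale_by_coverage(glyph, 0.5)
print(black_ratio(glyph), black_ratio(scaled))       # compare the two ratios
```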
Maximizing the minimum absolute contrast-to-noise ratio (CNR) between a desired feature and multiple interfering processes, by linear combination of images in a magnetic resonance imaging (MRI) scene sequence, is attractive for MRI analysis and
interpretation. A general formulation of the problem is presented, along with a novel solution utilizing the simple and numerically stable method of Gram-Schmidt orthogonalization. We derive explicit solutions for the case of two interfering features first, then for three interfering features, and, finally, using a typical example, for an arbitrary number of interfering features. For the case of two interfering features, we also provide simplified analytical expressions for the signal-to-noise ratios (SNRs) and CNRs of the filtered images. The technique is demonstrated through its applications to simulated and acquired MRI scene sequences of a human brain with a cerebral infarction. For these applications, a 50 to 100% improvement in the smallest absolute CNR is obtained.
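A simplified reading of the Gram-Schmidt construction is sketched below: the interfering features' signatures across the scene sequence are orthonormalized and projected out of the desired feature's signature. The actual method balances the minimum absolute CNR rather than nulling interference outright, and the signatures here are hypothetical.

```python
import numpy as np

def gram_schmidt_filter(desired, interferers):
    """Project the interfering signatures out of the desired signature."""
    basis = []
    for v in interferers:
        u = v.astype(float).copy()
        for b in basis:                       # orthogonalize the interferer set
            u -= (u @ b) * b
        basis.append(u / np.linalg.norm(u))
    w = desired.astype(float).copy()
    for b in basis:                           # remove interference from filter
        w -= (w @ b) * b
    return w / np.linalg.norm(w)

# Hypothetical signatures over a 4-image MRI scene sequence.
lesion = np.array([1.0, 0.8, 0.3, 0.9])      # desired feature
wm     = np.array([0.9, 0.2, 0.7, 0.4])      # interfering feature 1
csf    = np.array([0.1, 1.0, 0.9, 0.2])      # interfering feature 2

w = gram_schmidt_filter(lesion, [wm, csf])
print("lesion response:", w @ lesion)        # preserved
print("interference:", w @ wm, w @ csf)      # ~0 after filtering
```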
In an image coding method using the discrete cosine transform with 8 x 8 pixel blocks, blocking effects are found. Filtering is applied after image reconstruction, without modifying the coding method. We have studied several different linear filters, varying
the number of neighboring pixels and varying the value of the coefficients of each point. We compare, in each case, Laplacian filters with other filters, and we show the blocking effects and then the
filtering effects with the EARTH016 and EARTH125 images. Finally, we plot the curve of the mean square error of the reconstructed image as a function of the coefficient values used with the One Neighbor Accounting Filter. Various tests have been done with varying images, compression ratios, and MSEs. In conclusion, we compare each filter in each case to determine the best.
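As an illustration of post-reconstruction deblocking, the sketch below is a generic boundary smoother, not the One Neighbor Accounting Filter itself: each pixel on an 8 x 8 block boundary is blended with its one neighbor across the boundary, with the coefficient a playing the role of the varied filter coefficients.

```python
import numpy as np

def deblock(image, a=0.25, block=8):
    """Blend each boundary pixel with its neighbor across the block edge."""
    out = image.astype(float).copy()
    for x in range(block, out.shape[1], block):       # vertical boundaries
        left, right = out[:, x - 1].copy(), out[:, x].copy()
        out[:, x - 1] = (1 - a) * left + a * right
        out[:, x] = (1 - a) * right + a * left
    for y in range(block, out.shape[0], block):       # horizontal boundaries
        top, bottom = out[y - 1, :].copy(), out[y, :].copy()
        out[y - 1, :] = (1 - a) * top + a * bottom
        out[y, :] = (1 - a) * bottom + a * top
    return out

recon = np.random.rand(64, 64)    # stand-in for a DCT-reconstructed image
orig = np.random.rand(64, 64)     # stand-in for the original
mse = np.mean((deblock(recon, a=0.25) - orig) ** 2)   # MSE vs. coefficient a
```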
Visual programming using reusable software components
and a dataflow model of computation is proposed as a method to reduce software development time and cost in the construction of image processing applications. A graphical editor is described that allows interactive definition of logical relationships among components of systems through direct manipulation of on-screen icons. This editor manages the syntax of dataflow graphs and catalogs of component parts while the semantics of the operation of these parts is provided by a separate postprocessor. The postprocessor includes a user extensible set of primitive (i.e., low-level) component parts for image processing from which more complex algorithms
may be constructed. This environment provides a powerful visual programming system for rapid interactive development of machine vision algorithms and management of reusable image processing
software. A case study describing construction of a medical image processing application is included.
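The dataflow model underlying such an editor reduces to a toy executor: components are functions, edges carry images, and an acyclic graph runs each node once all of its inputs are available. The node names and operations below are placeholders, not the system's actual component catalog.

```python
import numpy as np

class Node:
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

def run(graph, source):
    """Evaluate each node once its inputs are ready (graph must be acyclic)."""
    results, pending = {"source": source}, list(graph)
    while pending:
        for node in pending:
            if all(i in results for i in node.inputs):
                results[node.name] = node.fn(*(results[i] for i in node.inputs))
                pending.remove(node)
                break
    return results

# A three-node pipeline: threshold, invert, count foreground pixels.
graph = [
    Node("binary", lambda im: (im > 0.5).astype(int), ["source"]),
    Node("inverted", lambda b: 1 - b, ["binary"]),
    Node("count", lambda b: int(b.sum()), ["inverted"]),
]
out = run(graph, np.random.rand(32, 32))
print(out["count"])
```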
The efficient encoding and transmission of information for facsimile communication relies on redundancy in the scanned pixels. Halftone images, especially those rendered by high-quality dispersed dot techniques, are "busy" with alternating black and white pixels and shorter run lengths as compared to text information. Because of this, it is desirable to increase the redundancy and decrease the entropy of those images for efficient encoding and transmission. We propose a novel technique whereby both transmitting
and receiving fax devices have in memory a halftone screen such as the "blue noise mask" (BNM). The BNM is a halftone screen that produces a visually appealing, unstructured, isotropic dispersed dot pattern. When both the transmitting and receiving fax devices have the same halftone screen in ROM, the problem of halftone image encoding can be reduced to that of transmitting the mean gray value of blocks, or subimages, followed by a sparse halftone error image with increased redundancy and
run-lengths compared to the original halftone. Examples show that by using the proposed technique, image entropy can be reduced to 0.2 bits/pixel, and typical run-lengths can increase by a factor of
5. The increase in image quality, combined with increased transmission speed, could add considerably to the utilization and acceptance of halftone fax images.
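A sketch of the shared-screen scheme, with a random mask standing in for a true blue noise mask: the sender transmits 8 x 8 block means plus the XOR between the true halftone and the halftone the receiver can regenerate from those means. The block size and the mask are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mask = rng.random((256, 256))                 # stand-in for the shared BNM

def halftone(gray):
    """Screen the image with the shared mask held by both fax devices."""
    return (gray > mask).astype(np.uint8)

image = rng.random((256, 256))                # stand-in continuous-tone page
sent = halftone(image)

# Receiver-side prediction from 8x8 block means only.
means = image.reshape(32, 8, 32, 8).mean(axis=(1, 3))
predicted = halftone(np.kron(means, np.ones((8, 8))))

error = sent ^ predicted                      # sparse halftone error image
print("error density:", error.mean())         # far sparser on smooth real pages
```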
The process of digital halftoning replaces a visually
continuous-tone image with a binary image. This procedure must be accomplished in such a way as to give the illusion of multiple gray levels while introducing a minimum amount of artifacts or structure
not present in the original continuous-tone image. In this investigation, nonperiodic noise patterns that were uniformly distributed, so as to maintain good continuous-tone reproduction, were generated and used as random halftone screens. The noise patterns also had a prescribed two-dimensional spatial correlation, chosen in an attempt to reduce the undesirable artifacts normally introduced by the halftone process. Noise that has a correlation such that its
spectrum is lacking low-frequency power is sometimes referred to as "blue noise." An iterative method of generating random correlated noise patterns is described, and some examples of the resulting halftoned images are presented.
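One simple iterative scheme in this spirit, a void-and-cluster-style refinement rather than necessarily the paper's exact correlation constraint, repeatedly moves a dot from the tightest cluster to the largest void, as judged by a low-pass filtered view of the pattern:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
pattern = (rng.random((64, 64)) < 0.5).astype(float)

for _ in range(500):
    # Local dot density; wrap mode keeps the screen tileable.
    density = gaussian_filter(pattern, sigma=1.5, mode="wrap")
    ones = np.where(pattern == 1)
    zeros = np.where(pattern == 0)
    # Remove the dot sitting in the densest neighborhood...
    k = np.argmax(density[ones])
    pattern[ones[0][k], ones[1][k]] = 0
    # ...and place it in the emptiest one.
    k = np.argmin(density[zeros])
    pattern[zeros[0][k], zeros[1][k]] = 1

# Thresholding gray levels against the resulting screen yields a
# dispersed, unstructured dot pattern with little low-frequency power.
```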