Three image-quality metrics are evaluated: Hamerly's
edge raggedness, or tangential edge profile; Granger and Cupery's subjective quality factor (SQF) derived from the second moment of the line spread function; and SQF derived from Gur and O'Donnell's reflectance transfer function. These metrics are only a few of the many in the literature. Standard office papers from North America and Europe, representing a broad spectrum of what is commercially available, were printed with a 300-dpi Hewlett-Packard Deskjet
printer. An untrained panel of eight judges viewed text in a variety of fonts and a graphics target, and assigned each print an integer score based on its overall quality. Analysis of the metrics revealed that Granger's SQF had the highest correlation with panel rank and achieved a level of precision approaching single-judge error, that is, the ranking error made by an individual judge. While the other measures correlated in varying degrees, they were less precise. This paper reviews their theory, measurement, and performance.
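As a rough illustration of how an SQF-style figure of merit can be computed, the sketch below integrates a measured MTF over log spatial frequency within a visually weighted band. The band limits, the synthetic MTF, and the function name are illustrative assumptions, not the exact procedure used in the paper.

```python
import numpy as np

def sqf(freq_cpd, mtf, band=(3.0, 12.0)):
    """SQF-style metric: integrate the MTF over log spatial frequency
    within a visually important band.
    freq_cpd : spatial frequencies in cycles/degree (ascending)
    mtf      : modulation transfer factor at each frequency (0..1)
    band     : integration band in cycles/degree (assumed limits)
    """
    lo, hi = band
    mask = (freq_cpd >= lo) & (freq_cpd <= hi)
    f, m = freq_cpd[mask], mtf[mask]
    # Integrate over ln(f); normalize by the log-band width so a
    # perfect system (MTF == 1 everywhere) scores 1.0.
    return np.trapz(m, np.log(f)) / (np.log(hi) - np.log(lo))

# Illustrative use with a synthetic Gaussian-like MTF.
f = np.linspace(0.5, 30.0, 300)
mtf = np.exp(-(f / 15.0) ** 2)
print(f"SQF \u2248 {sqf(f, mtf):.3f}")
```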
A theoretical analysis of threshold modulation in error diffusion is given. It is shown that spatial modulation of the threshold is mathematically identical to processing an equivalent input image
with the standard error diffusion algorithm. The equivalent input is the sum of the original image and a high-pass-filtered version of the threshold spatial modulation. The filter is a function only of the weights used to distribute the errors. This result can be used to explain several published observations of the effects of threshold modulation, such as edge enhancement and the effects of adding noise to the threshold.
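The stated equivalence can be checked numerically. The 1-D toy sketch below (with an invented two-tap error filter, not the paper's derivation) runs error diffusion with a modulated threshold, then runs plain error diffusion on an equivalent input formed by adding the threshold signal filtered by (delta - h), where h are the error weights; the two binary outputs agree.

```python
import numpy as np

def error_diffusion(x, h, t=None, thresh=0.5):
    """1-D error diffusion with optional threshold modulation t.
    The threshold at sample n is lowered by t[n] (equivalently, t[n]
    is added to the quantizer input). h are the causal error weights.
    """
    n = len(x)
    t = np.zeros(n) if t is None else t
    e = np.zeros(n)    # quantization errors fed forward
    b = np.zeros(n)    # binary output
    for i in range(n):
        # modified input: original sample plus diffused past errors
        u = x[i] + sum(h[k] * e[i - 1 - k]
                       for k in range(len(h)) if i - 1 - k >= 0)
        b[i] = 1.0 if u + t[i] >= thresh else 0.0
        e[i] = u - b[i]
    return b

rng = np.random.default_rng(0)
h = np.array([0.7, 0.3])                              # illustrative weights (sum to 1)
x = np.clip(0.5 + 0.1 * rng.standard_normal(200), 0, 1)
t = 0.05 * np.sin(2 * np.pi * np.arange(200) / 8)     # threshold modulation

# Equivalent input: x plus t filtered by (delta - h), a high-pass filter
# whose DC gain is zero because the weights h sum to one.
t_filtered = t.copy()
for k, hk in enumerate(h):
    t_filtered[k + 1:] -= hk * t[:len(t) - 1 - k]
x_equiv = x + t_filtered

# The two halftones are identical (barring exact floating-point ties
# at the threshold, which do not occur with this data).
print("outputs identical:",
      np.array_equal(error_diffusion(x, h, t=t), error_diffusion(x_equiv, h)))
```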
We present a new approach for estimating printer model
parameters that can be applied to a wide variety of laser printers. Recently developed "model-based" digital halftoning techniques depend on accurate printer models to produce high-quality images using
standard laser printers (typically 300 dpi). Since printer characteristics vary considerably, e.g., write-black versus write-white laser printers, the model parameters must be adapted to each individual
printer. Previous approaches for estimating the printer model parameters are based on a physical understanding of the printing mechanism. One such approach uses the "circular dot-overlap model,"
which assumes that the laser printer produces circularly shaped dots of ink. The circular dot-overlap model is an accurate model for many printers but cannot describe the behavior of all printers. The new approach is based on measurements of the gray level produced by various test patterns and makes very few assumptions about the laser printer. We use a reflection densitometer to measure the average reflectance of the test patterns and then solve a constrained optimization problem to obtain the printer model parameters. To demonstrate the effectiveness of the approach, the model parameters
of two laser printers with very different characteristics were estimated. The printer models were then used with both the modified error diffusion and the least-squares model-based approach to produce
printed images with the correct gray-scale rendition. We also derived an iterative version of the modified error diffusion algorithm that improves its performance.
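As a minimal sketch of the measurement-driven idea, and not the paper's actual model or data: if the average absorptance of each test pattern is assumed to be a linear combination of known coverage features with unknown, physically bounded parameters, the parameters can be fit to densitometer readings by constrained least squares. The feature matrix, bounds, and readings below are placeholders.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical setup: rows are test patterns, columns are coverage features
# (e.g., fraction of isolated dots, adjacent dot pairs, ...); values invented.
features = np.array([
    [0.25, 0.00, 0.00],
    [0.50, 0.25, 0.00],
    [0.50, 0.00, 0.25],
    [1.00, 0.50, 0.50],
])
measured = np.array([0.31, 0.66, 0.61, 1.00])   # fictitious densitometer readings

# Constrained least squares: bounds keep the fitted parameters physical.
fit = lsq_linear(features, measured, bounds=(0.0, 2.0))
print("estimated printer-model parameters:", np.round(fit.x, 3))
```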
We present a novel color processor with programmable
interpolation by small memory (PRISM). The input/output signals to/from the devices are flexibly converted by a 3-D look-up table (LUT) with a PRISM interpolator. The PRISM architecture provides a simple computation algorithm with sufficient accuracy. The performance of PRISM interpolation is compared with other conventional methods. In practice, PRISM is less complicated than CUBE
and PYRAMID, and more accurate than PYRAMID and
TETRAHEDRON. PRISM drastically cuts the memory size of the LUT compared with a full-size LUT method and brings with it a large-scale-integration color processor operating at higher than video rate. The PRISM structure is most suitable for perceptual color spaces such as YCrCb or CIELAB and is very useful for device-independent color reproduction and transmission. Typical applications of a PRISM color processor are presented.
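For readers unfamiliar with prism interpolation, the sketch below shows one common formulation: the LUT cell around the input point is split by the diagonal plane of the (x, y) face into two triangular prisms, a barycentric interpolation is done on the triangle, and a final linear interpolation is done along z. The vertex naming and choice of splitting plane are assumptions; the PRISM processor's exact decomposition and fixed-point arithmetic may differ.

```python
import numpy as np

def prism_interp(frac, c):
    """Interpolate inside one LUT cell using a prism decomposition.
    frac : (dx, dy, dz) fractional position inside the cell, each in [0, 1]
    c    : dict of the eight cell-corner values keyed by (i, j, k) bits
    """
    dx, dy, dz = frac
    if dx >= dy:   # prism over the triangle (0,0)-(1,0)-(1,1)
        lo = c[0, 0, 0] + dx * (c[1, 0, 0] - c[0, 0, 0]) + dy * (c[1, 1, 0] - c[1, 0, 0])
        hi = c[0, 0, 1] + dx * (c[1, 0, 1] - c[0, 0, 1]) + dy * (c[1, 1, 1] - c[1, 0, 1])
    else:          # prism over the triangle (0,0)-(0,1)-(1,1)
        lo = c[0, 0, 0] + dy * (c[0, 1, 0] - c[0, 0, 0]) + dx * (c[1, 1, 0] - c[0, 1, 0])
        hi = c[0, 0, 1] + dy * (c[0, 1, 1] - c[0, 0, 1]) + dx * (c[1, 1, 1] - c[0, 1, 1])
    return (1.0 - dz) * lo + dz * hi   # linear interpolation along the prism axis

# Cell whose corner values equal x + 2y + 4z: any function that is linear
# over the cell is reproduced exactly.
corners = {(i, j, k): i + 2 * j + 4 * k for i in (0, 1) for j in (0, 1) for k in (0, 1)}
print(prism_interp((0.3, 0.6, 0.5), corners))   # -> 3.5
```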
We describe a linear scanner model that provides a useful
characterization of the response of a scanner to diffusely reflecting surfaces. We show how the linear model can be used to estimate that portion of the scanner sensor responsivities that falls within the
linear space spanned by the input signals. We also describe how the model can be extended to characterize a scanner's response to surfaces that fluoresce under the scanner illuminant.
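A small numerical sketch of this projection idea (the synthetic data and variable names are assumptions): regressing the scanner responses onto the measured surface reflectances recovers exactly the component of the responsivities that lies in the subspace spanned by those reflectances.

```python
import numpy as np

rng = np.random.default_rng(1)
n_surfaces, n_wavelengths, n_channels = 20, 31, 3

R = rng.random((n_surfaces, n_wavelengths))        # surface reflectance spectra
S_true = rng.random((n_wavelengths, n_channels))   # unknown sensor responsivities
D = R @ S_true                                     # noiseless scanner responses

# Least-squares estimate of the responsivities from (reflectance, response) pairs.
S_hat = np.linalg.pinv(R) @ D

# Component of the true responsivities lying in the row space of R
# (the linear space spanned by the input spectra).
P = np.linalg.pinv(R) @ R                          # projector onto row space of R
S_within = P @ S_true

print("estimate matches within-space component:", np.allclose(S_hat, S_within))
```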
The task of instrumental measurement of the color of nonimpact printing is addressed. The majority of the instrument and sample parameters that can introduce systematic errors in the instrumental readings are identified. For selected cases, proper measurement procedures are specified. In other cases, the user is
warned of the problem and must devise a measurement methodology for minimizing the effects of the identified parameters. Without proper concern for these parameters, high-fidelity device-independent
color reproduction is not fully achievable.
A new color interchange mechanism in a networked color
system is proposed. Compared with the current color interchange mechanism adopted in international standards such as open document architecture and standard page description language, it considers
recent algorithms for color adaptation correction and gamut mapping, e.g., Nayatani's color adaptation correction method and linear gamut mapping in the CIELAB color space. Also, simpler calibration data for the CMY(K) color space based on 3×3 (or 3×4)
matrices and 1-D LUTs are proposed. Because Nayatani's method and linear gamut mapping in CIELAB can be done by 3×3 matrices and 1-D LUTs, the total calculation can be executed by a chain of matrices and 1-D LUTs. In the case where the interchange color
space is fixed to CIELAB, this new mechanism can be much simplified.
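As context for the "chain of matrices and 1-D LUTs" remark, the sketch below applies one stage of such a chain (per-channel 1-D LUTs followed by a 3×3 matrix). The LUT contents and the matrix are placeholders, not the calibration data proposed in the paper.

```python
import numpy as np

def apply_stage(rgb, luts, matrix):
    """One stage of a calibration chain: per-channel 1-D LUTs, then a 3x3 matrix.
    rgb    : (..., 3) values in [0, 1]
    luts   : list of three 1-D arrays sampled uniformly over [0, 1]
    matrix : 3x3 linear transform applied after the LUTs
    """
    shaped = np.stack([np.interp(rgb[..., c], np.linspace(0, 1, len(luts[c])), luts[c])
                       for c in range(3)], axis=-1)
    return shaped @ matrix.T

# Placeholder stage: a gamma-like 1-D LUT per channel and an RGB-to-XYZ-style matrix.
luts = [np.linspace(0, 1, 256) ** 2.2 for _ in range(3)]
matrix = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
print(apply_stage(np.array([0.5, 0.5, 0.5]), luts, matrix))
```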
Lossy plus lossless techniques for image compression
split an image into a low-bit-rate lossy representation and a residual that represents the difference between this low-rate lossy image and
the original. Conventional schemes encode the lossy image and its lossless residual in an independent manner. We show that making use of the lossy image to encode the residual can lead to significant savings in bit rate. Further, the complexity increase to attain these savings is minimal. The savings are achieved by capturing the inherent structure of the image in the form of a noncausal prediction model that we call a prediction tree. This prediction model is then used to transmit the lossless residual. Simulation results show that a reduction in bit rate of 0.5 to 1.0 bit/pixel can be achieved compared to the conventional approach of independently encoding the residual.
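The bit-rate argument can be illustrated with a toy entropy estimate; this is not the paper's prediction-tree coder, and the quoted 0.5 to 1.0 bit/pixel savings come from the paper's own simulations, not from this sketch. When the residual statistics depend on the local lossy value, the conditional entropy is lower than the unconditional one, which is the room a dependent coder can exploit.

```python
import numpy as np

def entropy(symbols):
    """Empirical entropy in bits/symbol."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(2)
n = 200_000

# Toy model: the lossy value also indicates local activity, and the residual
# is larger where activity is high (fictitious statistics).
lossy = rng.integers(0, 4, n)                  # coarse 2-bit lossy values
scale = np.array([1, 2, 4, 8])[lossy]          # residual spread per context
residual = np.rint(scale * rng.standard_normal(n)).astype(int)

h_uncond = entropy(residual)
h_cond = sum((lossy == c).mean() * entropy(residual[lossy == c]) for c in range(4))
print(f"unconditional: {h_uncond:.2f} bits/pixel, "
      f"conditioned on lossy image: {h_cond:.2f} bits/pixel")
```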
The classical Hough transform, the generalized Hough
transforms, and their extensions are quite robust for detection of a large class of objects that can be categorized as industrial parts. These objects are rigid and have fixed shapes, i.e., different instances
of the same object are more or less identical. These techniques, and indeed most current techniques, however, do not adequately handle shapes that are more flexible. These shapes are widely found in nature and are characterized by the fact that different instances of the same shape are similar, but not identical, e.g., leaves and flowers. We present a new technique, based on principal component analysis, to recognize natural shapes. A set of basis shapes is obtained using principal component analysis. A Hough-like technique is used to detect the basis shapes. The results are then combined to locate the shape in the image. Experimental results show that the approach is robust, accurate, and fast.
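The basis-shape step can be pictured with a few lines of linear algebra. The contour family, sampling, and alignment below are invented for illustration; the point is only that principal component analysis of aligned contour vectors yields a mean shape plus a small number of basis shapes, which a Hough-like stage can then vote on.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assume each training shape is an aligned contour sampled at 64 points,
# flattened to a 128-dimensional vector (64 x- and 64 y-coordinates).
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
def leaf(width):                     # fictitious family of leaf-like contours
    r = 1.0 + width * np.cos(2 * theta)
    return np.concatenate([r * np.cos(theta), r * np.sin(theta)])

shapes = np.stack([leaf(w) for w in rng.uniform(0.1, 0.4, 30)])   # 30 x 128

mean_shape = shapes.mean(axis=0)
_, s, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
basis_shapes = vt[:3]                # first few principal components

# Any instance is approximated as mean_shape plus weighted basis shapes.
explained = (s[:3] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by 3 basis shapes: {explained:.1%}")
```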
The acceptance of digital workstations as primary diagnostic devices for chest radiographs will be precluded if there is a reduction in the radiologist's accuracy of diagnosis compared to that
achieved with conventional screen-film images. Reduction in diagnostic efficacy is believed to be partially due to a reduction in contrast resolution on video monitors. We present the results of a pilot
study that tests the ability of the contrast-enhancement algorithm artifact-suppressed adaptive histogram equalization (ASAHE) to compensate for reduced contrast resolution. The ASAHE algorithm
is compared to a computed radiographic algorithm that previously delivered observer performance inferior to conventional screen-film images. The algorithms are compared on the basis of five readers interpreting an image set consisting of 45 clinical cases, 23 of which
are confirmed as demonstrating pneumothoraces. Detection efficacy, measured by the area under a receiver operating characteristic (ROC) curve, is not significantly different for the two algorithms. The
average ROC curves for the algorithms have different shapes, suggesting that the ASAHE algorithm is affecting diagnostic performance in a way that is not well understood. The results of the pilot study indicate that a test with higher statistical power would need to be performed using this algorithm to form a final estimate of its usefulness.
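For reference, the area under an ROC curve can be estimated nonparametrically from reader confidence ratings as in the sketch below. The ratings shown are fabricated purely for illustration and are unrelated to the study's data.

```python
import numpy as np

def auc_from_ratings(ratings_pos, ratings_neg):
    """Nonparametric AUC: probability that a randomly chosen positive case
    receives a higher rating than a randomly chosen negative case
    (ties count one half); equivalent to the Mann-Whitney U statistic.
    """
    pos = np.asarray(ratings_pos)[:, None]
    neg = np.asarray(ratings_neg)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# Fabricated 5-point confidence ratings, for illustration only.
pneumothorax_present = [5, 4, 5, 3, 4, 2, 5, 4]
pneumothorax_absent  = [1, 2, 3, 1, 2, 4, 1, 2, 3, 1]
print(f"AUC = {auc_from_ratings(pneumothorax_present, pneumothorax_absent):.2f}")
```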