Now that a number of color prepress systems are on the market and the concept of making editorial corrections in appearance variables rather than in ink amounts is beginning to be accepted, it is appropriate to see what further improvements can be made. The required operations are reviewed. It is concluded that the main challenges are a more convenient operator interface, an accurate soft proof, and full (instant) interactivity. Calibration of input and output devices to the requisite accuracy is also important but easier to accomplish. The problem of gamut compression that is usually required to accommodate the smaller dynamic range of the printer also calls for attention. Printing with more than four inks is discussed as well. The extent to which the editorial process can be made automatic has been the subject of much speculation and some research. After listing the separate operations and discussing the ease (or difficulty) of automating them, it is concluded that fully automatic adjustment is neither possible nor desirable. Instead, it is suggested that most efforts should be applied to making the operator's job easier and faster.
With the increasing use of desktop color scanners for digitizing color images it has become desirable to obtain device independent color information from these scanners. In order to achieve this goal the scanner spectral sensitivity needs to be estimated. This paper describes the application of signal processing techniques to the problem of estimating the scanner sensitivity. Results obtained by applying the methods described to an actual commercial scanner are presented and the performance of two different techniques is compared.
Three-dimensional interpolation can minimize calculations when converting images from one device-independent color space to another or converting information between device-dependent and device-independent color spaces; this makes 3D interpolation a suitable way to implement many kinds of color space transformations. This paper analyzes trilinear interpolation and several tetrahedral interpolation schemes that extract data from a cubical packing of space, including a five-tetrahedron scheme proposed by Kanamori and Kotera, a six-tetrahedron method due to Clark, and three variations on the Clark arrangement. Also analyzed are two versions of the disphenoid extraction from the body-centered-cubic packing proposed by Kasson, Plouffe, and Nin, and the PRISM method reported by Kanamori et al. The interpolation-performance test of the earlier paper is applied to a large set of color space conversions and lattice granularities, allowing meaningful conclusions about average and worst-case performance.
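As an illustration of the family of methods analyzed above, here is a minimal sketch (not taken from the paper) of tetrahedral interpolation within a single lattice cell, using the common decomposition in which sorting the fractional coordinates selects one of the six tetrahedra sharing the cell's main diagonal; all function and variable names are invented for illustration.

```python
def tetrahedral_interp(corners, f):
    """Tetrahedral interpolation in one lattice cell.

    corners: dict mapping (i, j, k) in {0, 1}^3 to a scalar sample.
    f: fractional position (fx, fy, fz) within the cell, each in [0, 1].

    Sorting the fractional coordinates in descending order selects one
    of six tetrahedra sharing the cell's main diagonal; the value is
    then accumulated along the corresponding chain of cube vertices.
    """
    order = sorted(range(3), key=lambda a: f[a], reverse=True)
    vert = [0, 0, 0]
    prev = corners[(0, 0, 0)]
    value = prev
    for a in order:
        vert[a] = 1          # step to the next vertex on the chain
        cur = corners[tuple(vert)]
        value += f[a] * (cur - prev)
        prev = cur
    return value
```

Because each tetrahedron uses only four of the eight cell corners, this needs fewer memory accesses per pixel than trilinear interpolation, which is one reason the schemes above are attractive for real-time color conversion.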
With electronic-imaging input and output devices describing colors in their own native quantized triplets and color management systems translating between them through conversion models to a reference color space, calibration to the reference should be optimized in some sense within the constraints of the model. When a model allows for three channel-specific functions in addition to a linear channel-mixing transformation, the optimization of functions and transformation separately can be improved upon by their simultaneous optimization. As an example, the calibration of scanner RGB values to CIE XYZ values employing both separate and simultaneous optimization will be demonstrated and compared using color difference measures. An iterative technique to perform simultaneous optimization when a closed-form solution cannot be found will be presented. Simultaneous optimization improves calibration accuracy because the responses of input and output devices are functions of their own particular color spaces, which are inferred in the process, rather than of the reference space.
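The idea of iterating when no closed form exists can be sketched in one dimension. In the toy model below, y = a·x^g stands in for one channel function plus a gain: `a` has a closed-form least-squares update for fixed `g`, while `g` is refined by a coarse line search. This is an illustrative stand-in for alternating optimization, not the paper's actual algorithm or data.

```python
def fit_channel(xs, ys, iters=100):
    """Alternating optimization for the toy model y = a * x**g.

    Each iteration: (1) with g fixed, solve for a in closed form
    (ordinary least squares on the basis x**g); (2) with a fixed,
    refine g by a coarse line search over small offsets.  The error
    never increases because the zero offset is always a candidate.
    """
    g = 1.0
    for _ in range(iters):
        # closed-form least-squares update of a given g
        num = sum(y * x ** g for x, y in zip(xs, ys))
        den = sum((x ** g) ** 2 for x in xs)
        a = num / den

        # coarse line-search update of g given a
        def err(gv):
            return sum((y - a * x ** gv) ** 2 for x, y in zip(xs, ys))

        g = min((g + d for d in (-0.1, -0.01, 0.0, 0.01, 0.1)), key=err)
    return a, g
```

In the real three-channel problem the closed-form step would be a linear least-squares fit of the 3x3 mixing matrix, alternated with refinement of the three channel functions in the same spirit.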
With the increasing popularity of device-independent color reproduction systems, obtaining predictable data from color scanning devices plays an important role in both workflow productivity and maintaining consistency of color communication. A major obstacle lies in the fact that characteristics of scanners vary from one manufacturer to another and even across units of the same model. The problem is further complicated by the phenomenon of scanner metamerism. This paper describes the results of using an optical filter assembly to spectrally characterize a desktop film scanner. Together with effective spectral representation schemes for different media types, captured color across various scanners and media types can be made predictable and consistent.
Colorimetric reproduction requires calibrated color output devices. One way to characterize a color output device is with a 3D look-up table that maps the tristimulus values, t, to the control values, c, of the output device. The functional form of the output device can be written in vector notation as t = F(c). The purpose of calibration is to define an inverse mapping from tristimulus values to control values. Since the function F(·) has no closed form, it is defined by interpolation from a table of values. Given a set of control values {ci} on a regular grid and the corresponding set of tristimulus values {ti} obtained from data collection, we wish to find the {cg} for different {tg} on a grid in the tristimulus space. The grid is obtained from a relatively sparse data set with an appropriately defined interpolation scheme. This interpolation scheme can be complex, since it is used only once to compute the grid. The regular finer grid can then be used in real time with simple interpolation. While the functions that represent the device are usually well behaved and smoothly varying, the truncation of the data can cause a problem with interpolation near the edge of the gamut. An approach to solving this problem is to extrapolate the data outside the gamut using bandlimited or linear extrapolation methods. The extrapolated points, along with the measured data, are used in a single interpolation algorithm over the entire gamut of the device. The results of this method are comparable to those of other interpolation methods, but it is simpler to implement. It has the additional advantage of allowing physical constraints, such as bandlimits, to be easily imposed.
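A one-dimensional analogue of this approach, assuming a strictly monotone device response, might look like the following sketch: samples of t = F(c) are inverted onto a regular grid in t, and targets outside the measured range fall back to linear extrapolation from the end segments, so a single interpolation rule covers the whole grid (all names are illustrative).

```python
def build_inverse_table(cs, ts, t_grid):
    """Build a regular inverse table c(t) from samples of a monotone
    1-D device response t = F(c) (ts must be strictly increasing).

    Targets in t_grid that fall outside [ts[0], ts[-1]] are clamped to
    the first or last segment, whose line is extended -- the 1-D
    analogue of extrapolating data beyond the gamut edge so that one
    interpolation scheme serves the entire grid.
    """
    out = []
    for t in t_grid:
        # locate the bracketing segment, stopping at the end segments
        # so out-of-range targets are linearly extrapolated
        i = 0
        while i < len(ts) - 2 and ts[i + 1] < t:
            i += 1
        t0, t1 = ts[i], ts[i + 1]
        c0, c1 = cs[i], cs[i + 1]
        out.append(c0 + (t - t0) * (c1 - c0) / (t1 - t0))
    return out
```

In three dimensions the extrapolated points would be appended to the measured set before running the (possibly expensive) scattered-data interpolation that fills the regular grid.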
The main subject of this report is the identification of computable image properties related to perceptual criteria involved in approving the quality of satellite color images. Such image properties can be used, together with other parameters, to compute estimates of tolerance for image degradation in automatic systems.
This paper documents the approach we took to address color reproduction problems starting from several base assumptions: that the inputs to the system would come from a variety of unknown and uncharacterizable sources; that these existing images would primarily be in the form of scanned or machine-generated raster files (i.e. continuous-tone RGB); and that no color management system was available. Our goal was to develop a system that would give reasonable results on a relatively low-cost output device, despite the fact that we could not depend on calibrated inputs. We discuss the characterization and color correction techniques we used, some of the strengths and weaknesses of these techniques, how we chose to deal with the disparity of input sources, our goals for speed and colorimetric accuracy, how we dealt with some of the device limitations, and how well these goals have been met.
In device-independent color imaging systems, it is necessary to relate device color coordinates to and from standard colorimetric or appearance based color spaces. Such relationships are determined by mathematical modeling techniques with error estimates commonly quoted with the CIELAB ΔE metric. Due to performance considerations, a lookup table (LUT) is commonly used to approximate the model. LUT approximation accuracy is affected by the number of LUT entries, the distribution of the LUT data, and the interpolation technique used (full linear interpolation using cubes or hypercubes versus partial linear interpolation using tetrahedrons or hypertetrahedrons). Error estimates of such LUT approximations are not widely known. An overview of the modeling process and lookup table approximation technique is given with a study of relevant error analysis techniques. The application of such error analyses is shown for two common problems (converting scanner RGB and prepress proofing CMYK color definitions to CIELAB). In each application, ΔE statistics are shown for LUTs based on the above contributing factors. An industry recommendation is made for a standard way of communicating error information about interpolation solutions that will be meaningful to both vendors and end users.
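As a concrete example of this kind of error reporting, the sketch below computes CIE 1976 ΔE*ab between reference and LUT-approximated CIELAB values and summarizes the mean, 95th-percentile, and maximum error over a test set. The particular statistics chosen here are illustrative, not the paper's recommended standard.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 ΔE*ab: Euclidean distance between two CIELAB triplets."""
    return math.dist(lab1, lab2)

def lut_error_stats(reference, approximated):
    """Summarize LUT approximation error over paired CIELAB test sets:
    mean, 95th-percentile, and maximum ΔE*ab."""
    errs = sorted(delta_e_ab(a, b)
                  for a, b in zip(reference, approximated))
    n = len(errs)
    return {
        "mean": sum(errs) / n,
        "p95": errs[min(n - 1, int(0.95 * n))],
        "max": errs[-1],
    }
```

Reporting a percentile alongside the mean and maximum matters because worst-case LUT errors concentrate near gamut boundaries, where the mean alone can be misleading.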
Adding RGB inks to the traditional set of CMYK inks increases the attainable color gamut. But the added complexity poses a challenge in generating suitable separations for rendering of color images. The approach taken in this study reduces the dimensionality of the problem by subdividing the 7 inks into smaller groupings. A series of 4-ink subsets from the 7-ink superset of CMYKRGB was individually characterized, and a colorimetric transform from ink to color was obtained for each subset. In color space, the 4-ink subsets represent adjacent and overlapping subgamuts of the 7-ink gamut. By utilizing these characterizations both individually and in combination with one another, an ink table which transforms color to ink was generated. In the darker tonal region, 4 inks/color permits access to regions of the full 7-ink gamut which are inaccessible to schemes employing 3 inks/color.
This paper presents an overview of the architecture of Apple Computer's forthcoming color management system release. ColorSync™ 2.0 is the first of a new generation of operating-system-based color management systems. The major goal of this architecture is to provide a scalable, flexible, and extensible solution to managing color on the desktop for both the end user and the application developer. The ambitious scope of this goal necessitated a level of complexity far beyond the first release of ColorSync. ColorSync 1.0 was introduced at MacWorld in January 1993. It is a "plug-in" framework which permits third parties to ship color-matching software that users may employ without the need for a complete proprietary environment. After this introduction, Apple actively solicited feedback from third-party developers for several months. An agreement was reached on the profile format at the FOGRA meeting at the Seybold conference in October 1993. This agreement lays the foundation for version 2.0 of the Extension software. Acceptance of the profile format has since been announced by several platform vendors. The ColorSync 2.0 profile format incorporates significant changes from version 1.0. The new format is a disk-based tagged-element structure allowing selective access to profile data, whereas the ColorSync 1.0 profile format is a memory-resident structure. The 1.0 default CMM uses algorithmic matching derived from work by Apple ATG. This method is relatively fast and accurate for monitors, but is less than optimal for printers. The new format supports the use of multidimensional color lookup table transforms. ColorSync 2.0 requires either a 68020 or later processor or a 601 PowerPC or later processor. In addition, version 7.1 or later of the Macintosh Operating System (which includes Apple's Component Manager) is also required.
ColorSync™ 2.0 relies on the Component Manager for the basis of the framework that allows plug-and-play capability for third-party color-modeling implementations. ColorSync 2.0 consists of an Extension file, from which the resident dispatcher is installed at system startup, and a separate Control Panel that provides the user control of the ColorSync System Profile. The ColorSync 2.0 architecture is divided into five parts: (1) the resident dispatcher, (2) profile management, (3) color conversion methods, (4) color modeling methods (CMMs), and (5) the ColorSync profile format. The following sections describe each of these five parts individually. The profile manager, color conversion methods, and color modeling methods are all implemented using the Apple Component Manager. Components are used to dynamically load and run executable object code on demand. In addition, the Component Manager provides database facilities to track each component in the system, allowing the resident dispatcher to call the appropriate component without forcing unreasonable memory requirements on each application. This paper places an emphasis on the profile format, since this format provides cross-platform capabilities for color management systems.
Much has been said and written in the past few years regarding color management in general, as well as specific color management systems offered from a variety of vendors. However, considerably less attention has been paid to how these individual color management systems interact and work together in a system that includes applications, drivers, and printers with their own color management capabilities. This paper describes such a system architecture. In addition, support for color rendering intents in PostScript is discussed.
The CIELab system of color coordinates is not optimal for use in desktop publishing (DTP) systems, because it is non-uniform, not well matched to human visual dynamics, and computationally inconvenient. CIELab uses cube roots of differences of color matching functions. In complex scenes, perceived lightness varies quadratically, not cubically, with intensity. Chroma varies in a more complex way but is also not well represented by cubic polynomials. As a result, CIELab exaggerates highlights and compresses shadows. It distorts the concentric circles and radii of the Munsell chart into ellipses and curves. This makes it difficult to achieve appearance equivalence during gamut compression. Gamut compression is an essential element in working with the variety of inexpensive DTP devices. Users expect DTP operations to proceed rapidly on inexpensive equipment. This dictates heavy reliance on integer arithmetic and lookup tables. These techniques do not mesh well with the computational complexities of CIELab.
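The cube-root behavior at issue can be made concrete. The sketch below implements the standard CIE 1976 lightness function (including its linear segment near black); comparing the luminance intervals spanned by equal L* steps near white and near black makes the unequal treatment described above easy to inspect numerically.

```python
def lstar(y):
    """CIE 1976 lightness as a function of relative luminance Y/Yn:
    L* = 116 * (Y/Yn)**(1/3) - 16 for Y/Yn > (6/29)**3, with the
    standard linear segment below that threshold."""
    d = 6 / 29
    if y > d ** 3:
        f = y ** (1 / 3)
    else:
        f = y / (3 * d * d) + 4 / 29   # linear segment near black
    return 116 * f - 16
```

Inverting the cube root shows, for example, that the ten L* units below white (L* 90 to 100) cover roughly ten times the luminance range of the ten units from L* 10 to 20, which is the kind of nonuniform allocation the argument above turns on.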
An historical background is provided for the ATD model for color perception and visual adaptation, whose developing variations have appeared over a period of several years, and which is applied to several sets of chromatic adaptation data in the succeeding companion paper.
Using a slightly modified version of an earlier model, and using a new rule for simulating the effects of simultaneous or successive chromatic adaptation, excellent predictions are made for experimental data that offer especially strong challenges to models for chromatic adaptation.
Companion papers show why the CIELab system of color coordinates is not optimal for use in desktop publishing (DTP) systems and how ATD remedies most of the problems of CIELab. Since DTP deals primarily with the reproduction of images, DTP applications do not need to model human vision. They need only reproduce accurately in the image the reflective properties of the original. The eye does the rest. Linear arithmetic is adequate for this task. An intermediate color space decouples scanner calibration from printer control, leading to a system that is nearly device independent. Plots of Munsell and other color grids show the intermediate space to be uniform. The uniformity of the intermediate space and the linearity of the model lead to accurate gamut compression and negligible transformation errors using integer arithmetic operations on the 8-bit quantities associated with inexpensive DTP equipment. The resulting images are substantially better than those produced by current DTP programs.
CRT monitors are often used as soft-proofing devices for hard-copy image output. However, what the user sees on the monitor does not match the output, even if the monitor and the output device are calibrated with CIE XYZ or CIE Lab. This is especially obvious when the correlated color temperature (CCT) of the CRT monitor's white point differs significantly from that of the ambient light. In a typical office environment, one uses a computer graphics monitor having a CCT of 9300 K in a room lit by white fluorescent light of 4150 K CCT. In such a case, the human visual system is partially adapted to the CRT monitor's white point and partially to the ambient light. Visual experiments were performed on the effect of the ambient lighting. A practical method for soft-copy color reproduction that matches the hard-copy image in appearance is presented in this paper. This method is fundamentally based on a simple von Kries adaptation model and takes into account the human visual system's partial adaptation and contrast matching.
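The partial-adaptation idea can be sketched as follows, assuming cone (LMS) signals and a single ratio blending the monitor white with the ambient white. The blend ratio and all names here are placeholders for illustration, not the paper's fitted values or complete model (which also includes contrast matching).

```python
def partial_von_kries(lms, lms_monitor_white, lms_ambient_white, ratio=0.5):
    """Von Kries scaling with partial chromatic adaptation.

    The effective adapting white is a weighted mix of the monitor
    white and the ambient white; `ratio` is the assumed degree of
    adaptation toward the monitor (a placeholder value).  Each cone
    (LMS) response is then divided by the corresponding component of
    the effective white, per the von Kries rule.
    """
    white = [ratio * m + (1 - ratio) * a
             for m, a in zip(lms_monitor_white, lms_ambient_white)]
    return [c / w for c, w in zip(lms, white)]
```

A stimulus equal to the effective white maps to (1, 1, 1), i.e., it is seen as achromatic under this mixed adaptation state; matching soft copy to hard copy then amounts to equating these adapted signals for the two viewing conditions.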
This paper presents a feedforward multilayer neural network approach capable of characterizing the colorimetric model of a color CRT monitor driven by a digital red, green, and blue signal source. The model consists of forward and inverse transforms. The forward transform predicts the CIE display color for a given triad of digital RGB video signals. The inverse transform determines the digital RGB drive signals necessary to produce a color of specific CIE coordinates. However, the identification of both forward and inverse transforms becomes intractable when nonlinear distortions are involved in the colorimetric model. These distortions result from violations of the assumptions of phosphor constancy, gun independence, spatial uniformity, and a monitor characteristic function expressible as a power function. Feedforward neural networks have the capability to learn arbitrary nonlinearities and show great potential for determining the forward transform model of a color monitor. To avoid inverting the complicated nonlinear forward transform directly, an alternative approach to the inverse transform identification is proposed on the basis of Widrow and Winter's neural-based inverse system control scheme. The performance of both transforms was evaluated by the prediction and measurement of 29 test colors chosen to adequately sample the color gamut reproducible by the CRT phosphors.
The paper deals with the comparison of subjective visual color evaluation and color measurements. For this purpose, several subjective color test charts were developed by a class of design students. In addition, an objective color chart was used to test the color accuracy of several color copiers. The results of both test procedures were evaluated, ranked, and compared with each other. The purely subjective results were also evaluated, and an analysis was carried out to find similarities between these different test methods. The results are discussed with reference also to the structure of modern color management systems.
Has anyone achieved device-independent color (DIC)? Some recent papers would seem to indicate that it has been achieved in a few instances. A more detailed reading of these papers, however, raises some doubts. All of the schemes for attaining DIC depend on instrumental colorimetric measurement and calibration of transparency and reflective hard-copy targets and exhibits. A paper presented by Zawacki at the 1993 annual TAGA meeting reports that colorimetric measurements made with spectrophotometers manufactured by the same company and with the same geometry can differ by 0.7 to 1.7 CIELAB ΔE, and instruments made by different manufacturers with the same geometry can differ by 1.5 to 3.0 ΔE. Clearly, schemes based on instrumental measurement that achieve graphic arts DIC may be eliminating one device dependency while introducing other device dependencies.
Color calibration involves the assessment of a tristimulus vector describing a surface. A color calibration device must return this tristimulus vector under various surface modes and/or illuminants. For reflective surfaces, the illuminant must be known in order to assess the surface reflectance. For luminous surfaces, a pseudo-illuminant may be used to account for the adaptation state of the observer. In each separate case, the color calibration device requires a different set of color matching functions. These several sets do not span a single three-dimensional space. Ideally, a device capable of operating under k modes should have basis functions spanning a 3k-dimensional space. This paper considers the selection of fewer than 3k basis functions for operation under k modes. It discusses the selection of error criteria and a strategy to minimize the error due to the reduced-dimensional approximation.
There is a common requirement among the various proprietary color management systems (CMS): the need to characterize the response of many different color output devices. Characterizations are typically the net result of thousands of measurements of a test target across several units each of a given color output device. Color output devices can be classified as soft proof (monitor), digital proof, or conventional output (film to off-press proof and to press sheet). In this paper we focus on characterization of reflective hard-copy output. Measurements of this reflective copy can be taken with any of several instruments: (1) spectrophotometers (0/45 or sphere), (2) spectral-based colorimeters, (3) tristimulus colorimeters (with 3 or 4 filters). We compare the colorimetric identity of an array of color output devices as measured by spectral-based and tristimulus color measurement instrumentation. The output devices include a variety of pre-proofs, digital proofs, and conventional proofs, as well as ink-on-paper press sheets. Colorimetric assessment capabilities of less-expensive filter-based colorimeters versus those of more costly spectral-based instrumentation are evaluated. Comparisons take the form of simple scatter plots in commonly used CIE color spaces. The motivation for this research comes from the need to minimize the cost and effort expended by CMS users in creating localized custom characterizations for their mix of color output devices.
The present paper studies two different table-based approaches to the calibration of electronic imaging systems. The first approach, the classical one, uses the device-independent CIE XYZ colorimetric space as an intermediate standard space. Input and output devices such as scanners, displays, and printers are calibrated separately with respect to the objective CIE XYZ space. The calibration process requires establishing a 3-dimensional mapping between the scanner's device-dependent RGB space and a device-independent colorimetric space such as CIE XYZ. Measured samples belonging to the calibration set are used for splitting the colorimetric space into Delaunay tetrahedra. The second approach, the so-called closed-loop approach, directly calibrates scanner-printer pairs, without any reference to an objective colorimetric space. It enables a 3D mapping to be built between the scanner's RGB space and the printer's CMY space without requiring any colorimetric measurement. It offers very accurate calibrated output for input samples having the same characteristics (halftone dot, ink spectral reflectance) as the printed samples used for the calibration process. When a desktop scanner's RGB sensitivities are not a linear transform of the CIE x̄, ȳ, z̄ color-matching functions, an accurate calibration can only be made if input color patches are based on the same primary inks as the patches used for device calibration.