The viewing conditions in different types of display, such as reflection-copy and cathode-ray tubes, affect the appearance of colors; these effects must be allowed for in designing signals intended to produce color images. A model of color vision has been devised to predict the appearance of colors in different viewing conditions, in terms of their hues, brightnesses, lightnesses, colorfulnesses, saturations, and chromas. The model has been tested by comparing its predictions to published color appearance data. These data include magnitude scaling of the hue, lightness, brightness, and colorfulness of colors displayed as reflection samples, on cathode-ray tubes, on projected slides, and on transparencies viewed on light boxes. On the whole, the predictions given by the model compare reasonably well with the precision with which color appearance data can be determined.
This paper addresses the shortcomings of trying to achieve device independence through colorimetry alone. Existing color models do not take into account the fundamental problems of the printing process, nor the need for a true graphic arts color exchange space. A new meta printing space is proposed for color exchange that will offer a single exchange standard. This standard will allow the blind interchange of data between elements of the color reproduction process. The meta representation is based on human vision and is able to encompass the gamuts of the dye sets used in the reproduction of images.
A new color space, RLAB, for cross-media color reproduction has been developed. This space is a modification of the CIE 1976 L*a*b* (CIELAB) color space which is widely used in a variety of industries. The RLAB modification of the CIELAB space allows for more accurate predictions of changes in color appearance due to chromatic adaptation, prediction of differences due to the types of media, and adjustment for changes in the relative luminance of the stimulus surround. In addition, the RLAB space can be used for the accurate calculation of color differences through a modification of the CIELAB color difference equation. These enhancements allow useful application of the CIELAB color space to problems of device-independent color imaging. This paper describes the formulation of the RLAB color space and its implementation. It is based on the chromatic adaptation model previously published by Fairchild and the Bartleson and Breneman corrections for image surround. In addition, psychophysical results comparing use of the RLAB color space to other techniques and color appearance models are presented.
Different kinds of color models were tested in typical monitor image manipulation applications. Considerable attention was paid to different ways of adjusting the tone rendering (or luminance component) of color images. Other image manipulation measures, such as the adjustment of grey balance and saturation, were also discussed. The color models studied included the RGB, HSV, HLS and CIELUV color co-ordinate systems, and the new Ls(alpha) space, which was developed specifically for image manipulation purposes. In the tone rendering adjustment tests, the color co-ordinate systems were primarily assessed according to how well they allowed free manipulation of brightness without affecting other components of color (i.e. hue and saturation). In these visual tests, the L component of the Ls(alpha) color space proved to be the most suitable co-ordinate for adjustment with a tone rendering curve.
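The manipulation being compared can be sketched as follows: apply a tone-rendering curve to a lightness-like component only, leaving hue and saturation untouched. The Ls(alpha) space itself is not publicly specified here, so HLS stands in for it; this is an illustrative sketch, not the study's procedure.

```python
import colorsys

# Sketch: apply a tone-rendering curve to the lightness component of
# an HLS decomposition, leaving hue and saturation untouched.  HLS
# stands in here for the Ls(alpha) space, which is not reproduced.

def adjust_tone(rgb, curve):
    """Apply curve() to the L component of an HLS decomposition."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, curve(l), s)

# A gamma-like brightening curve; hue and saturation are preserved.
brighter = adjust_tone((0.2, 0.4, 0.6), lambda l: l ** 0.8)
```

The point of the test described above is precisely how well the chosen co-ordinate isolates brightness: in spaces with poor separation, the same adjustment visibly shifts hue or saturation.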
Device-independent color reproduction depends on matching colors that arise from different spectral power distributions. The spectral differences of matching lights are particularly great between light produced by VDU phosphors and light reflected by pigmented materials. The least frequently questioned assumption in this difficult situation is the Grassmann additivity rule, that if matching lights are added to two sides of a match, the result is also a match. However, recent experiments by Thornton suggest there is strong Grassmann additivity failure for human color vision. In particular, a color-match of a test light with one set of primaries does not persist when each of these primaries is replaced by a matching mixture of a second set of primaries. The present paper examines two covering theories for Grassmann additivity in search of consistency with Thornton's results. The theories are as follows: (1) Power-function theory. Each cone performs a weighted wavelength sum of an exponentiated value of photon flux over wavelength (not simply a weighted count on the photons, in which the implicit exponent is 1); and (2) Photodepletion theory. The density of unbleached photopigment in a cone is decreased under the action of light, and this leads to an integral-equation expression for the quantum catch instead of a simple spectrum integral. When Thornton's color-matching data are examined in terms of both these theories, the conventional Grassmann theory is found to fit the data best.
Properties of the visual system were described and used to produce colorimetric data for equidistant color series and color thresholds (just noticeable differences). There is a complex relationship between equidistant color series and color thresholds. The luminous reflectances for gray series were discussed for both equidistant color series of separated samples and color thresholds of touching samples. The threshold depends strongly on viewing or adaptation time. Modern film recorders with 3600 lines per inch (lpi) allow dots or lines of 0.007 mm diameter. This high resolution allows line-screen exposure and printing of high stability at 225 lpi, filling the halftone cell of a square (0.1 mm x 0.1 mm) with 16 x 16 dots (or lines combined with dots). The luminous reflectance therefore varies theoretically over 16 x 16 = 256 steps. This allows the production of only about 16 equidistant visual steps of separated samples on a gray surround between black and white for both the dot and line-screen techniques. Most difficulties with stability appear near black. The stability of the printing process is increased using line screens compared to dot screens: dots vary in all directions during the printing process, whereas line width varies in only two directions. Four-color printing with cyan blue, yellow, magenta red, and black can be based on line screens with vertical, horizontal, and two diagonal directions. Some properties are discussed using PostScript programs and by studying multicolor prints produced by the line-screen technique.
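The step-count arithmetic above can be sketched numerically. The lightness conversion below is the standard CIE 1976 L* formula; the cell size comes from the text, but the framing as a small script is an illustration, not the paper's procedure.

```python
# Sketch: why a 16 x 16 halftone cell yields only about 16 visually
# equidistant gray steps.  The L* formula is the standard CIE 1976
# lightness function; the framing is illustrative.

def cell_levels(n=16):
    """Nominal reflectance levels of an n x n halftone cell (0..n*n)."""
    return [k / (n * n) for k in range(n * n + 1)]

def lightness(y):
    """CIE 1976 L* as a function of luminous reflectance y in [0, 1]."""
    return 116.0 * y ** (1.0 / 3.0) - 16.0 if y > 0.008856 else 903.3 * y

levels = cell_levels()                     # 257 printable levels
lstars = [lightness(y) for y in levels]

# Visually equidistant steps are roughly uniform in L*, not in
# reflectance, so near black many printable levels collapse into one
# perceptual step -- consistent with the stability problems near black.
step = (lstars[-1] - lstars[0]) / 16.0
```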
In this paper, a top-down data placement methodology for a large interactive multimedia information system (MMIS) on a single-spindle multi-disk environment such as a jukebox is presented. The objective of this work is to minimize average disk seek time as well as the number of platter switches for the jukebox. A large data placement problem can be divided into a number of small data placement problems by weighted graph decomposition. The Kernighan-Lin partitioning algorithm is recursively applied for this purpose. Once the graph is fully partitioned, the objects in the same subgraph are assigned to the same disk. The data placement within a disk is divided into two stages, global data placement and detailed data placement. The expected access patterns of global data placement are modeled as a time-homogeneous ergodic Markov chain, from which the stationary probability for each node of the browsing graph can be found. Based on these probabilities, we define an expected access cost. Then, the problem of global data placement is posed as an optimization problem, and various clustering and storage layout algorithms are proposed.
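The stationary-probability step can be sketched as follows. The transition matrix is a toy browsing graph invented for illustration, not data from the paper.

```python
import numpy as np

# Sketch: for an ergodic browsing graph modeled as a time-homogeneous
# Markov chain, the stationary distribution pi satisfies pi = pi P.
# The row-stochastic transition matrix below is a toy example.

P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])

def stationary(P, iters=2000):
    """Power-iterate a row-stochastic matrix to its stationary vector."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi / pi.sum()

pi = stationary(P)
# Nodes with high pi are hot objects; an expected access cost weights
# seek distances by these probabilities, which is what the placement
# optimization then minimizes.
```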
Color calibration technology is being incorporated into both Apple and Microsoft's operating systems. These color savvy operating systems will produce a market pull towards 'smart color' scanners and printers which, in turn, will lead towards a distributed architecture for color management systems (CMS). Today's desktop scanners produce red-green-blue color signals that do not accurately describe the color of the object being scanned. Future scanners will be self-calibrating and communicate their own 'device profile' to the operating system based CMS. This paper describes some of the key technologies required for this next generation of smart color scanners. Topics covered include a comparison of colorimetric and conventional scanning technologies, and the impact of metamerism, dye fluorescence and chromatic adaptation on device independent color scanning.
Device independent color requires that all color imaging devices 'speak' in CIE colorimetric terms. A color image scanner is one component of the desktop color imaging chain that needs to be colorimetric if color is to be device independent. Most commercial color scanners are not colorimetric. Instead, they are built as color densitometers, emulating commercial graphic arts scanners. There are a wide variety of spectral response options if the design goal is a colorimetric scanner. This paper explores 'almost' color mixture spectral response options using Neugebauer's Colorimetric Quality Factor (CQF) as a design merit function. A set of empirical equations relating the average CIEL*a*b* color difference to the CQF under three CIE illuminants (A, F2 and D65) is developed, based on a database of over 1000 colors representing a wide variety of colorants and color imaging technologies. Our goal is to empirically bridge this gap between the CQF specification for each channel and the scanner's colorimetric performance, in terms of CIEL*a*b* color difference. For spectral response functions that have a high CQF, a 'universal' transformation matrix is described for transforming 'almost' colorimetric scanner RGB data to approximate CIE XYZ tristimulus data. Scanner peak wavelength and bandwidth requirements for colorimetric and densitometric applications are also reported.
The problem of calibrating a hard copy color device for a number of viewing illuminants is discussed. Four methods are introduced and implemented using a HP PaintJet XL300 printer. In each case it is assumed that the printer is calibrated for one particular illuminant, via a look-up-table or similar approach. The methods for calibrating with respect to a number of illuminants involve performing simple transformations on the input tristimulus values.
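One common candidate for such a simple transformation is a von Kries-style white-point rescaling of the tristimulus values. The paper's four actual methods are not reproduced here; the white-point values below are standard published figures for CIE illuminants D65 and A, used purely for illustration.

```python
import numpy as np

# Sketch of one plausible "simple transformation" for re-targeting a
# printer calibrated under one illuminant to another: a von
# Kries-style diagonal scaling of XYZ by the ratio of the illuminant
# white points.  Illustrative only; not one of the paper's methods.

def rescale_white(xyz, white_src, white_dst):
    """Scale tristimulus values by the per-channel white-point ratio."""
    xyz = np.asarray(xyz, dtype=float)
    s = np.asarray(white_dst, dtype=float) / np.asarray(white_src, dtype=float)
    return xyz * s

# CIE D65 and A white points (Y normalized to 100).
white_d65 = [95.047, 100.0, 108.883]
white_a = [109.85, 100.0, 35.585]
adapted = rescale_white([50.0, 50.0, 50.0], white_d65, white_a)
```

By construction the source white maps exactly onto the destination white, which is the minimal requirement for any of the illuminant transforms.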
A simple method of converting scanner (RGB) responses to estimates of object tristimulus (XYZ) coordinates is to apply a linear transformation to the RGB values. The transformation parameters are selected subject to minimization of some relevant error measure. While the linear method is easy, it can be quite imprecise. Linear methods are only guaranteed to work when the scanner sensor responsivities are within a linear transformation of the human color-matching functions. In studying the linear transformation methods, we have observed that the error distribution between the true and estimated XYZ values is often quite regular: plotted in tristimulus coordinates, the error cloud is a highly eccentric ellipse, often nearly a line. We will show that this observation is expected when the collection of surface reflectance functions is well-described by a low-dimensional linear model, as is often the case in practice. We will discuss the implications of our observation for scanner design and for color correction algorithms that encourage operator intervention.
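The linear method and its error cloud can be sketched with synthetic data. The matrix and the mild nonlinearity below are invented stand-ins, not measured scanner responsivities.

```python
import numpy as np

# Sketch of the linear method: fit a 3x3 matrix M mapping scanner RGB
# to XYZ by least squares over a training set, then examine the
# residual (error) cloud.  All data here are synthetic.

rng = np.random.default_rng(0)
rgb = rng.uniform(0.0, 1.0, size=(100, 3))
true_M = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
# A mild per-channel nonlinearity stands in for non-colorimetric
# sensor behavior.
xyz = rgb @ true_M.T + 0.01 * rgb ** 2

M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)   # solves rgb @ M ~ xyz
errors = xyz - rgb @ M

# If reflectances lie near a low-dimensional linear model, the error
# cloud is highly eccentric; its principal axes and eccentricity come
# from the singular values of the centered residuals.
_, singular_values, _ = np.linalg.svd(errors - errors.mean(axis=0))
```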
The characterization of a highly non-linear color print device can involve a large number of measurements of printed color output. If the measurement process is not automated this can be a significant fraction of the cost of developing a color model for a device. One way to limit the number of measurements required is to ensure that in any given region, only enough measurements are made to adequately characterize the local behavior. With no prior knowledge of the behavior, this requires an adaptive approach to the sampling. An adaptive sampling technique developed for this work, termed Model Accuracy Moderated Adaptive Sampling (MAMAS), is described. Simulation tests with and without measurement noise are presented and the results are compared to measurements using uniform regular sampling. The technique is also applied to a real printer, the Canon CLC500, for which some results are presented. The color model used for the print device is based on an interpolated look up table (ILUT). Because of the highly non-linear nature of the device being modeled a flexible technique is required to translate the irregular measurement samples into a regularly gridded model. A method based on a regularized linear spline was developed. Appropriate choice of the penalty function for the regularization can achieve a compromise between fitting the measured points and reducing the impact of measurement noise. A brief overview of the technique is presented.
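The adaptive idea, though not the MAMAS algorithm itself, can be illustrated in one dimension: refine a sample interval only where a local model (here, linear interpolation between its endpoints) fails to predict the midpoint measurement.

```python
# Illustrative 1-D version of adaptive sampling: subdivide only where
# the local linear model's error at the midpoint exceeds a tolerance.
# This is a sketch of the principle, not the MAMAS algorithm.

def adaptive_sample(measure, lo, hi, tol, max_depth=12):
    """Return sorted sample positions for measure() on [lo, hi]."""
    mid = 0.5 * (lo + hi)
    predicted = 0.5 * (measure(lo) + measure(hi))
    if max_depth == 0 or abs(measure(mid) - predicted) <= tol:
        return [lo, hi]
    left = adaptive_sample(measure, lo, mid, tol, max_depth - 1)
    right = adaptive_sample(measure, mid, hi, tol, max_depth - 1)
    return left + right[1:]          # drop the duplicated midpoint

# A curved response draws more samples than a nearly linear one, so
# measurement effort concentrates where the device is most nonlinear.
samples = adaptive_sample(lambda x: x ** 3, 0.0, 1.0, tol=0.01)
```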
The technique of barycentric interpolation, commonly used for finite element analysis, is extended to its three dimensional form and applied to color space transforms. This is an improved method for transforming three dimensional image data from an RGB color space to a CMY or CMYK ink or dye color system. The 3-D CMY or CMYK 'color cube' from the desired marking technology is efficiently divided into a space-filling matrix of unequal irregular hexahedra delineated by sample nodes. Node locations are placed in a manner which maximizes sampling and interpolation efficiency. Test charts are printed using the colors represented by each node. The colors are measured and a color vs. ink amount table is formed. Each hexahedra is then partitioned into a space-filling aggregation of irregular tetrahedrons. The four possible hexahedra partitioning modes are derived and the optimum one is determined. Within each tetrahedron the color data associated with its vertices are then used to perform a three-dimensional interpolation. This provides efficient, seamless, artifact-free creation of 3-D color transformation look-up tables while reducing the quantity of printed colors required to achieve a desired accuracy. This method has been successfully incorporated into several applications, most notably Aldus Corporation's PrePrint.
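The core interpolation step inside a single tetrahedron can be sketched as follows. The vertex positions and per-vertex ink values are invented for illustration; the hexahedral subdivision and partitioning-mode selection described above are not reproduced.

```python
import numpy as np

# Sketch of barycentric interpolation inside one tetrahedron of the
# subdivided color cube.  Vertex data are invented for illustration.

def tetra_interpolate(p, vertices, values):
    """Interpolate per-vertex values at point p inside a tetrahedron."""
    v = np.asarray(vertices, dtype=float)      # 4 x 3 vertex positions
    # Barycentric weights w solve: sum_i w_i v_i = p  and  sum_i w_i = 1.
    A = np.vstack([v.T, np.ones(4)])           # 4 x 4 linear system
    b = np.append(np.asarray(p, dtype=float), 1.0)
    w = np.linalg.solve(A, b)
    return w @ np.asarray(values, dtype=float)

# Unit tetrahedron with a scalar "ink amount" at each vertex.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
inks = [0.0, 1.0, 2.0, 3.0]
mid = tetra_interpolate((0.25, 0.25, 0.25), verts, inks)
```

Because the weights sum to one and vary linearly across each face, adjacent tetrahedra agree on their shared faces, which is what makes the resulting look-up tables seamless.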
Three-dimensional interpolation is often employed to minimize calculations when approximating mathematically defined complex functions or producing intermediate results from sparse empirical data. Both situations occur when converting images from one device- independent color space to another, or converting information between device-dependent and device-independent color spaces; this makes three-dimensional interpolation an appropriate solution to many kinds of color space transformations. Interpolation algorithms can be analyzed by considering them as consisting of three parts: packing, in which the domain of interest of the input space is populated with sample points; extraction, which consists of selecting the sample points necessary to approximate the function for a particular input value; and calculation, which accepts the input point and the extracted points and carries out calculations to approximate the function. Those algorithms that extract four points and perform tetrahedral interpolation yield the fewest calculations. The paper presents a test for interpolation algorithm accuracy, and provides a normalization which allows various packing and extraction schemes to be compared. When subjected to the normalized accuracy test, different packing and extraction schemes yield different accuracies. The paper describes a packing and an extraction algorithm that yields accurate results for many conversions. The performance of this scheme is compared to that of several well-known packing and extraction algorithms.
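The calculation stage for the eight-point extraction case can be sketched as trilinear interpolation over a unit cube; tetrahedral schemes reach the same point from only four extracted samples, which is why they yield the fewest calculations. The corner values below are invented.

```python
# Sketch of the "calculation" stage when extraction selects all 8
# corners of the enclosing cube: trilinear interpolation.  Corner
# values are invented for illustration.

def trilinear(p, corners):
    """corners[i][j][k] holds the sample at cube corner (i, j, k)."""
    x, y, z = p
    acc = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((x if i else 1 - x) *
                     (y if j else 1 - y) *
                     (z if k else 1 - z))
                acc += w * corners[i][j][k]
    return acc

corners = [[[0.0, 1.0], [2.0, 3.0]], [[4.0, 5.0], [6.0, 7.0]]]
center = trilinear((0.5, 0.5, 0.5), corners)
```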
The present paper studies the chromaticity gamut of multi-color printing processes. Heptatone (7-color) printing - the most promising variant of multi-color printing - offers a significantly larger gamut than a conventional CMYK printing process, approaching CRT and film gamuts. The behavior of the process in the device-independent CIE-XYZ and CIE-L*u*v* colorimetric spaces is explored using the compound Neugebauer model developed for this purpose. A simple and straightforward Moire-free separation process is proposed. The strong point of the proposed separation process is the fact that only 3 different screen layers are needed for any odd number of basic colors including black.
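A minimal single-constant Neugebauer sketch for a three-ink process illustrates the structure of such models: the patch color is the coverage-weighted average of the 2^n Neugebauer primaries. Dot gain is ignored and the primary reflectances are invented; the paper's compound model for seven inks is not reproduced.

```python
from itertools import product

# Simplified Neugebauer sketch for three inks: the predicted
# reflectance of a halftone patch is the Demichel-weighted average of
# its 2^3 Neugebauer primaries.  Primary values below are invented.

def neugebauer(coverages, primaries):
    """coverages: dict ink -> fraction; primaries: dict frozenset -> R."""
    total = 0.0
    for combo in product([0, 1], repeat=len(coverages)):
        w = 1.0
        on = []
        for (ink, c), bit in zip(sorted(coverages.items()), combo):
            w *= c if bit else (1.0 - c)      # Demichel area weight
            if bit:
                on.append(ink)
        total += w * primaries[frozenset(on)]
    return total

primaries = {frozenset(): 0.9, frozenset("c"): 0.3, frozenset("m"): 0.35,
             frozenset("y"): 0.8, frozenset("cm"): 0.1,
             frozenset("cy"): 0.25, frozenset("my"): 0.3,
             frozenset("cmy"): 0.05}
patch = neugebauer({"c": 0.5, "m": 0.2, "y": 0.0}, primaries)
```

A seven-ink compound model follows the same pattern with 2^7 primaries per subdomain, which is what makes the gamut exploration above tractable.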
Color gamut mapping is required whenever two imaging devices do not have coincident color gamuts or viewing conditions. Two major gamut mapping techniques include lightness and chroma manipulations. Lightness mapping accounts for differences in white level, black level, and viewing conditions while chroma mapping accounts for differences in gamut volume. As a three dimensional space in which color gamut mapping is implemented, the 1991 Hunt model of color appearance was used, with dimensions of lightness, chroma, and hue. This model accounts for viewing conditions in addition to the usual device independent specification. The mapping techniques were applied to back-lit photographic transparencies in order to reproduce images using a dye diffusion thermal transfer printer. In the first experiment, lightness mapping was evaluated. Three different lightness mapping techniques, a linear technique and two non-linear techniques, were tested for four images. The psychophysical method of paired comparison was used to generate interval scales of preferred color reproduction. In general, the preferred technique depended on the amount of lightness mapping required and on the original image's lightness histograms. For small amounts of compression, the preferred technique was a clipping type. For large amounts of compression, the preferred technique was image dependent; low preference was caused by loss of detail or apparent fluorescence of high chroma image areas.
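The two families of lightness mapping can be sketched as follows. The range endpoints are hypothetical; the paper's two non-linear techniques are not reproduced.

```python
# Sketch of the two basic lightness-mapping families compared above:
# linear compression of the full lightness range versus clipping
# out-of-range lightness to the printable range.  Endpoints invented.

def linear_map(L, src=(0.0, 100.0), dst=(10.0, 90.0)):
    """Linearly compress lightness from the src range into dst."""
    s0, s1 = src
    d0, d1 = dst
    return d0 + (L - s0) * (d1 - d0) / (s1 - s0)

def clip_map(L, dst=(10.0, 90.0)):
    """Leave in-range lightness unchanged; clip the rest."""
    return min(max(L, dst[0]), dst[1])

# Clipping leaves mid-tones untouched (clip_map(50.0) == 50.0), which
# is consistent with the finding that a clipping type was preferred
# for small amounts of compression.
```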
I shall describe an approach to Device Independent Color Transformations using standard input and output targets specified by the ANSI-IT8 group. This approach allows colorimetric calibration from input scanners into CIE-XYZ color space and to various output processes like color prints in a production environment. A quantitative error analysis is presented for the suitability of a variety of input scanners as colorimeters as well as a detailed look at possible problem areas on the scanning side. A precise control flow has to be established and followed for repeatable results, especially if the scanners are capable of producing more than 8 bits per color channel. An equivalent analysis is made for the output side with selected colors being followed through the entire system from input to output.
Many people believe that in the future of high fidelity color reproduction, a Device-Independent Color Space (DICS) will be used to represent color image information. Discussions of DICS have focused on what CIE color space is most appropriate as a standard, how to calibrate input devices, and how to characterize output devices. However, the definition of Device-Independent Color has aroused a great deal of confusion in the literature. Does it mean a device-independent definition of color measurement? Does it mean a standardized color space such as a CIE space? Does it mean a method of color transformation such as EFI Color? Does it mean automated (no operator intervention) color matching? Does it mean open system architecture? Does a DICS truly represent color appearance? Does a DICS guarantee device-independency of color? In other words, is the device-dependency of color related only to the color coordinates that are used? If not, where does device-dependency come from? And how does one deal with device-dependency of color in DICS? In this paper, I will address all these issues.
Before ColorSync, the QuickDraw graphics model had no specific means to define device independent color. That is, colors in QuickDraw were defined in terms of RGB values without reference to a specific, objective definition of RGB. This means that devices have no way of knowing how to interpret the incoming RGB values. When QuickDraw asks for a red value of 300, say, it asks the device to output 300 / 65536 of full red. The problem is that the full red has not been defined, so the color is relative to the full red of the destination device. The end result is that the same color triplet imaged on different output devices looks different. Without the capability of device independent color definition, achieving consistent color across varying devices is impossible.
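The relative-red problem can be illustrated numerically: the same 16-bit triplet, interpreted through two different device primary matrices, produces different XYZ stimuli. Both matrices below are hypothetical, invented purely to show the effect.

```python
import numpy as np

# Sketch of the problem described above: the same 16-bit QuickDraw RGB
# triplet, interpreted relative to two devices with different
# (hypothetical) RGB-to-XYZ primary matrices, yields different XYZ.

def quickdraw_to_xyz(triplet, primaries):
    """triplet: 0..65535 per channel; primaries: 3x3 RGB->XYZ matrix."""
    rgb = np.asarray(triplet, dtype=float) / 65536.0   # fraction of full drive
    return np.asarray(primaries, dtype=float) @ rgb

device_a = [[0.41, 0.36, 0.18], [0.21, 0.72, 0.07], [0.02, 0.12, 0.95]]
device_b = [[0.49, 0.31, 0.20], [0.18, 0.81, 0.01], [0.00, 0.01, 0.99]]

triplet = (300, 0, 0)            # "a red value of 300" from the text
xyz_a = quickdraw_to_xyz(triplet, device_a)
xyz_b = quickdraw_to_xyz(triplet, device_b)
```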
Researching color documents and the systems that produce them requires a new kind of laboratory, unlike the ones we are accustomed to working in. With this in mind, Xerox has created a 1700 square-foot Color Systems Studio at the Xerox Webster Research Center. The Studio is designed to provide an environment in which researchers and customers can meet to coexplore the requirements for color documents in the workplace. Involving customers is a key feature of the studio and its investigations into the ecology of color documents. Creation of this new research environment is based on the premise that color documents cannot be separated from the context in which they are created and used. The Studio is a 'living' lab equipped with up-to-date computer platforms, networks and printers. This makes it possible to assemble complete document systems, from creation to printing, using a mix of commercially available and experimental equipment. In this environment, researchers can study not only the performance of individual components, but the work process itself. This paper will describe the reasoning that led to the creation of the Color Systems Studio and our early experiences with it.
A color management system (CMS) plays an important role in color image processing systems, maintaining consistent color appearance among various devices on a network. In a network environment, a conventional CMS depends deeply on the configuration of the network and the system. This paper describes an object-oriented CMS architecture and its implementation strategy for a distributed environment, named Network CMS (NCMS). NCMS keeps device profiles consistent and need not be installed on every workstation on a network. A discussion of the effectiveness of NCMS shows that it is well suited to color management on a distributed system due to its object-oriented features and its Client-Server model implementation. This system will be a good test bed for exploring advanced color document processing systems and future color interchange facilities.
Device-independent color imaging is in use at the Armstrong Laboratory, Aircrew Training Research Division as a means of individually tailoring colors for each of the display-channel devices used to present visuals for flight simulation. Specifically, an accurate color match across multiple display-channel boundaries is desired. This complex system of color control encompasses the collection of display-channel color characterization data, the processing of that data into individualized Red-Green-Blue (RGB) color tables, and the utilization of those tables to match the colors of images across display-channel boundaries.
An accurate and inexpensive method for calibrating computer displays is introduced. The characterization is done entirely by visual comparisons and the results are equivalent to characterizations made by commercially available instruments. The method is based on a parametric CRT model, requiring that the user perform only a limited number of comparisons.
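A common parametric CRT model of the kind the abstract refers to is the gain-offset-gamma (GOG) form, in which channel luminance is L = L_max · (gain · d + offset)^γ for a normalized digital count d. The sketch below evaluates and inverts such a model; the parameter values are illustrative, and the paper's visual-comparison procedure for estimating them is not reproduced here:

```python
def crt_luminance(d, gain=1.0, offset=0.0, gamma=2.2, l_max=100.0):
    """Gain-offset-gamma model of one CRT channel:
    L = l_max * (gain * d + offset) ** gamma, with d the digital
    count normalized to [0, 1]. Parameter values are illustrative."""
    v = gain * d + offset
    return l_max * max(v, 0.0) ** gamma

def count_for_luminance(l_target, gain=1.0, offset=0.0, gamma=2.2, l_max=100.0):
    """Invert the model: the normalized count that yields l_target."""
    v = (l_target / l_max) ** (1.0 / gamma)
    return (v - offset) / gain
```

Because the model has only a few parameters, a limited number of visual matches suffices to pin them down, which is what makes the instrument-free characterization practical.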
The Photo CD system provides an inexpensive means of participating in the information age with personal digital images. At the heart of the system is a 35 mm, 2000 pixels/inch scanner that acquires image data from a variety of photographic media. The raw data are then processed into a calibrated color metric, decomposed into a hierarchy of five resolutions, compressed, encoded, and written to a compact disc using a high-speed CD writer. These discs can then be quickly displayed on a consumer television with a low-cost Photo CD player, which is also a high-quality CD-audio player. Additionally, Photo CD discs can be read into a computer using a CD-ROM XA drive and manipulated on the desktop for importing into documents or for hard-copy printing to a variety of devices. The 35 mm-based system described above is only a part of a more generalized architecture. An add-on component capable of scanning larger-format negatives and slides will soon be available and will utilize extensions to the Photo CD format appropriate for these new image modalities. In particular, the formation of a sixth resolution and its relationship to the existing hierarchy will be discussed. Additionally, the structure of future extensions currently under development will be outlined in the context of the specific applications they are designed to support.
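The multi-level resolution hierarchy can be illustrated with a toy pyramid built by repeated 2× downsampling. This sketch simply averages 2×2 blocks of a grayscale image; the actual Photo CD decomposition uses more sophisticated processing, so this is only a structural illustration:

```python
def halve(image):
    """Downsample a grayscale image (list of rows of floats) by
    averaging 2x2 blocks; assumes even width and height."""
    return [[(image[2 * r][2 * c] + image[2 * r][2 * c + 1] +
              image[2 * r + 1][2 * c] + image[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(len(image[0]) // 2)]
            for r in range(len(image) // 2)]

def hierarchy(image, levels=5):
    """Full-resolution image plus successively halved versions."""
    out = [image]
    for _ in range(levels - 1):
        image = halve(image)
        out.append(image)
    return out
```

Adding a sixth, larger base level on top of such a hierarchy extends the same structure rather than replacing it, which is why the larger-format extension can remain compatible with existing discs.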
The color definition activities of Subcommittee 4 of ANSI Standards Committee IT8 (Digital Data Exchange Standards) are described. This subcommittee is charged with the responsibility of developing color calibration tools, color definitions, reference targets, and data sets necessary to facilitate color data exchange within the graphic arts electronic prepress application area. Two standards (IT8.7/1 and IT8.7/2) have been completed which define targets for input scanner calibration with transparency and reflection photographic products. A third standard (IT8.7/3) has been completed which provides a reference data set for CMYK to printed-color characterization. Work is still continuing on the development of a scanner calibration target for use with negative films and on the development of default definitions for three-component color data exchange.
With the advent of color management systems, and other methods of controlling the color reproduction of computer systems and their imaging peripherals, the use of colorimetric techniques to encode color information is gaining prominence. The calculation of colorimetric quantities to describe reflective materials always involves the use of an illuminant spectrum. This paper examines how the results of these calculations differ as the illuminant is changed from the standard CIE D50 daylight spectrum to a more realistic fluorescent viewing booth lamp intended to simulate D50. Color difference formulae are used to quantify the difference for a wide variety of color printing devices. In the case of certain colors, differences of greater than 6 (Delta) E*ab units are found for virtually all of the devices considered. The interaction of the illuminant spectra with the spectral reflectance of printer colorants is analyzed in some detail. Also, a method is presented which allows for the device specific transformation of colorimetric variables appropriate for one of the illuminants into those appropriate for another illuminant. Finally, recommendations are made concerning the standardization of practical D50 illuminants and their importance to the digital communication of color images.
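The colorimetric calculations at issue reduce to illuminant-weighted summation of reflectance against the color-matching functions, followed by a color-difference computation. A minimal sketch, using hypothetical coarse spectral samples rather than real CIE tables:

```python
def tristimulus(reflectance, illuminant, cmf):
    """Illuminant-weighted tristimulus sum over sampled wavelengths.
    `cmf` is a list of (xbar, ybar, zbar) samples; all spectra here
    are hypothetical coarse samples, not CIE tables. The factor k
    normalizes a perfect reflector to Y = 100."""
    k = 100.0 / sum(s * y for s, (_, y, _) in zip(illuminant, cmf))
    return tuple(
        k * sum(r * s * bar[i] for r, s, bar in zip(reflectance, illuminant, cmf))
        for i in range(3))

def delta_e_ab(lab1, lab2):
    """CIE 1976 Delta E*ab between two L*a*b* coordinates."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5
```

Substituting a spiky fluorescent simulator spectrum for the smooth D50 spectrum in `tristimulus` changes the products under the sum wherever colorant reflectance varies rapidly, which is the mechanism behind the reported differences of more than 6 ΔE*ab units.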
A theory of decision making with perception through parallel information channels is presented. Decision making is considered a parallel competitive process in which every channel can confirm or reject a decision concept, and different channels exert different influences on specific concepts depending on the goals and individual cognitive features. All concepts are divided into semantic clusters according to the goals and the system defaults. Clusters can be alternative or complementary: 'winner-take-all' firing of concept nodes takes place within an alternative cluster, while concepts can be activated independently within a complementary cluster. A cognitive channel affects a decision concept by sending an activating or inhibitory signal. The complementary clusters serve to build up complex concepts by superimposing activation received from various channels, while decision making is carried out in the alternative clusters: every active concept in an alternative cluster tends to suppress its competitors by sending inhibitory signals to the other nodes of the cluster. The model accounts for a time delay in signal transmission between the nodes, and it explains the decrease in reaction time when information is confirmed by different channels and the increase in reaction time when conflicting information is received from the channels.
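The winner-take-all competition within an alternative cluster can be sketched as an iterative update in which each node's activation is reduced by inhibition from the other active nodes until only one survives. The update rule and parameter values below are illustrative, not the paper's exact formulation:

```python
def winner_take_all(activations, inhibition=0.2, steps=20):
    """Competition within an alternative cluster: at each step, every
    node's activation is reduced by inhibition proportional to the
    total activation of the other nodes, clipped at zero. Update rule
    and parameter values are illustrative, not the paper's model."""
    a = list(activations)
    for _ in range(steps):
        total = sum(a)
        a = [max(0.0, x - inhibition * (total - x)) for x in a]
    return a
```

Starting from unequal activations, the weaker nodes are driven to zero while the strongest node settles at a positive value, i.e. the cluster commits to a single concept.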
Direct-to-plate imaging is rapidly becoming a reality, and plates, processors, and imagesetters are becoming increasingly available. There are, however, practical considerations in making direct-to-plate an effective solution. Such a system is not simply a matter of putting a film image onto a different medium: PostScript must be adapted and becomes a more critical factor; process control must be provided for; the plates themselves are an emerging commodity with a variety of characteristics; and plate handling presents its own problems. This paper explores these and other problems and some of their solutions.
An important issue in the advancement of Image Processing and Computer Vision (IP/CV) algorithm development is the ability to test, verify, and compare a newly developed algorithm's performance against other functionally equivalent algorithms and against ground truth in a complete and meaningful way. In this paper, we explore several criteria for estimating the performance of IP/CV algorithms on databases of synthetic and real data, and we use statistical analysis to determine performance. Formal system analysis along with Monte Carlo testing is used to rigorously compare the algorithms. A case study on the familiar edge detection algorithms illustrates the techniques and also shows how such algorithms can achieve good average performance while performing very poorly in certain cases.
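The Monte Carlo approach can be illustrated with a toy one-dimensional example: run a detector over many synthetic noisy step edges with known ground truth and record both the mean and the worst-case localization error, which is exactly the average-versus-worst-case distinction the case study highlights. The detector and noise model here are hypothetical stand-ins, not the paper's:

```python
import random

def detect_edge(signal, threshold=0.5):
    """Toy 1-D edge detector: first index where the forward
    difference exceeds a threshold (a hypothetical stand-in for a
    real operator such as Sobel or Canny)."""
    for i in range(len(signal) - 1):
        if abs(signal[i + 1] - signal[i]) > threshold:
            return i
    return None

def monte_carlo_score(trials=500, n=64, noise=0.05, seed=1):
    """Score the detector on synthetic noisy step edges with known
    ground truth; return (mean error, worst-case error)."""
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        true_edge = rng.randrange(1, n - 1)
        signal = [(0.0 if i <= true_edge else 1.0) + rng.gauss(0.0, noise)
                  for i in range(n)]
        found = detect_edge(signal)
        errors.append(abs(found - true_edge) if found is not None else n)
    return sum(errors) / len(errors), max(errors)
```

Raising the noise level in such a harness separates algorithms whose mean errors are similar but whose worst-case behavior differs sharply.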
We have developed a document storage system using a one-terabyte Mass Storage System that holds 1,600 rewritable optical disks. The system can store up to 20 million document pages. A file management subsystem frees the user from file and disk management. The document entry time is five seconds per page. The remote retrieval time for one page is 1.5 seconds over a 10-Mbps Local Area Network and two seconds over a 1.5-Mbps Integrated Services Digital Network. Documents can also be entered from distant G3 and G4 facsimile machines and can be sent to them.
The purpose of this paper is to lead the reader through the essentials of the CSPM process by trying to strike an optimum balance between concept and detail. The intent is to bypass details which are not required to explain the basic concepts, yet retain those necessary to explain the essence of the process. Except where needed for clarity, no detailed operational procedures are given.
Comparison of document users' real underlying type handling and imaging requirements with the industry-supplied solutions reveals an industry-imposed nightmare for the customer. This paper suggests the real and the imposed requirements, uses Xerox as an example of a vendor odyssey moving from bitmap to outline technology and from host-based to networked printing, and discusses the challenges of next steps in satisfying the 'real requirements', including the move away from emulation of the typographer's typebox in computer-based solutions.
Kinematic animation of articulated objects, such as robots, is accomplished by the use of key framing, which requires that certain key positions of an object's motion path are chosen to be representative of the complete pathway. Commercial animation software then smoothly interpolates the motion path between the chosen keys. Dynamical analysis is used for simulating motion, rather than animating the pathways. The joints of the objects are affected by gravity plus an array of torques and forces. The analysis of the geometry of the trajectory behaviors belongs within the mathematical framework of dynamical systems. Here, an articulated robot simulated the motion behavior of a human as it swung around a fixed, single, horizontal bar and performed a reverse somersault dismount.
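Key framing as described above amounts to interpolating joint parameters between chosen key positions. A minimal sketch with linear interpolation (commercial systems typically use smoother spline interpolation, and the key values below are hypothetical):

```python
def interpolate_keys(keys, t):
    """Linearly interpolate a joint angle between key frames.
    `keys` is a list of (time, angle) pairs sorted by time; real
    animation packages typically use smoother spline interpolation."""
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, a0), (t1, a1) in zip(keys, keys[1:]):
        if t <= t1:
            u = (t - t0) / (t1 - t0)
            return a0 + u * (a1 - a0)
    return keys[-1][1]

# Hypothetical key positions (seconds, degrees) for a swing about the bar
swing_keys = [(0.0, 0.0), (1.0, 180.0), (2.0, 360.0)]
```

Dynamical simulation differs in that the in-between positions are not interpolated at all but integrated from torques, forces, and gravity, which is why it simulates rather than merely animates the pathway.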
The US Army Engineer Waterways Experiment Station (WES) has been exploring potential uses for virtual environments (VE) in engineering and scientific applications for approximately two years. The initial research involved commercial software, and while researchers at WES feel that the software available at that time had potential in a variety of application areas, it was found to lack specific features and flexibility needed for their target applications. The research then concentrated on designing and developing a software system which will provide VE designers at WES and the Engineering Research Center (ERC) a foundation of basic research upon which future development and application systems could be based. In implementing the prototype system, many graphical and software techniques and their application toward VE in general were evaluated. Several of these techniques are discussed in terms of performance and effectiveness. A specific application was developed based on the prototype system, and its effectiveness in relating meaningful information to the research scientist is discussed in the body of the paper. Finally, general observations concerning the potency of VE in engineering and scientific visualization are made.
At present, most visualization tools use traditional flat sequential files for handling scientific data. This makes storage, access, and use inefficient and ineffective for large, complex data sets, particularly in applications such as scientific visualization. The available data models for scientific visualization, including CDF, netCDF, and HDF, support only some common data types; the relationships among data are not considered, yet they are important for revealing the insight contained in a scientific data set. In this paper, we present the architecture of an object-oriented semantic data flow system (OSDF). OSDF concentrates on scientific data representation and access in a data flow system. A Scientific Data Model (SDM) is put forward to model the scientific data set. In SDM, the semantics of data are described by Associations and Data Constructors: an Association describes the relationships among data, and a Data Constructor constructs new data types. All data objects of an application are stored in an object base, where data are organized and accessed by their semantics, and an interface for accessing data in the object base is supplied. To attain the best visualization effect, rules and principles for selecting visualization techniques are integrated into the data objects.
Color negative films are designed to be intermediate records of photographed objects rather than the final reproduction. Their light absorption serves only to attenuate the printing exposure of another photographic material. They are low gamma, nonviewable films designed to be printed onto high gamma print materials, and the spectral sensitivities of these print materials differ significantly from the human visual system. Input scanners optimized for viewable transmission and reflection materials are not necessarily optimized for color negative films. Their wide exposure latitude and good signal/noise performance result in color negative films being good image capture media. Images having excellent color reproduction, tone scale and image structure characteristics can be extracted from scans of color negative films, but the complexity of recovering optimum pictorial results has resulted in these films being somewhat of an enigma in the desktop publishing and graphic arts arena. The IT8 SC4 committee is in the initial stages of appraising the necessity and considering the specification and design of an input target that would permit calibrated scans to be obtained from color negative films. This paper presents an overview of the negative-positive system and summarizes several specification schemes which could be used as a basis for the design of an IT8 color negative film input calibration target.
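The low-gamma, intermediate nature of a negative can be illustrated by the basic inversion a scanner pipeline must perform: convert scanned transmittance to density, remove the base-plus-fog minimum density, and divide by the negative's gamma to recover relative log exposure. The parameter values below are illustrative, not characteristic of any real film:

```python
import math

def negative_to_log_exposure(transmittance, d_min=0.2, neg_gamma=0.6):
    """Recover relative log exposure from a color-negative scan:
    density D = -log10(T), minus the base-plus-fog minimum density,
    divided by the negative's gamma. Parameter values are
    illustrative, not characteristic of any real film."""
    density = -math.log10(transmittance)
    return (density - d_min) / neg_gamma
```

An IT8-style calibration target would supply patches of known exposure from which values such as `d_min` and `neg_gamma` could be estimated per film, which is precisely what makes calibrated scans of negatives feasible.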