This paper describes the performance of an image capture simulator. The general model underlying the simulator assumes that (a) the image capture device contains multiple classes of sensors with different spectral sensitivities and (b) each sensor responds linearly to light intensity over most of its operating range. We place no restrictions on the number of sensor classes, their spectral sensitivities, or their spatial arrangement. The input to the simulator is a set of narrow-band images of the scene taken with a custom-designed hyperspectral camera system. The parameters for the simulator are the number of sensor classes, the sensor spectral sensitivities, the noise statistics and number of quantization levels for each sensor class, the spatial arrangement of the sensors, and the exposure duration. The output of the simulator is the raw image data that would have been acquired by the simulated image capture device. To test the simulator, we acquired images of the same scene both with our hyperspectral camera and with a calibrated Kodak DCS-200 digital color camera. We used the simulator to predict the DCS-200 output from the hyperspectral data. The agreement between simulated and acquired images validated the image capture response model, the spectral calibrations, and our simulator implementation. We believe the simulator will provide a useful tool for understanding the effect of varying the design parameters of an image capture device.
Digital cameras are gaining popularity in many applications of multimedia information processing. But the CCD sensor used by digital cameras does not provide all three red, green, and blue primaries at each pixel. Instead, it uses an interlaced sampling scheme with only one primary per pixel. This article considers the problem of reconstructing a 24-bit/pixel color image from the interlaced samples. A simple, efficient, and effective algorithm for color restoration from digital camera data is proposed. The proposed algorithm uses a pattern matching technique to reconstruct the missing color primaries based on the pixel contexts. Experimental results show that the proposed algorithm outperforms the technique of color interpolation.
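The abstract does not detail the pattern-matching algorithm, but the one-primary-per-pixel sampling it starts from, and the color-interpolation baseline it is compared against, can be sketched. The Python sketch below assumes a Bayer RGGB arrangement (one of several possible layouts, not necessarily the one used in the paper) and reconstructs the missing primaries by plain bilinear interpolation:

```python
import numpy as np

def conv3(img, k):
    """3x3 convolution with zero padding (pure NumPy, symmetric kernels)."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def bayer_mosaic(rgb):
    """Sample a full RGB image down to one primary per pixel (RGGB)."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w, 3), dtype=bool)
    mask[0::2, 0::2, 0] = True          # R
    mask[0::2, 1::2, 1] = True          # G
    mask[1::2, 0::2, 1] = True          # G
    mask[1::2, 1::2, 2] = True          # B
    mosaic = np.zeros((h, w))
    for c in range(3):
        mosaic[mask[..., c]] = rgb[..., c][mask[..., c]]
    return mosaic, mask

def bilinear_demosaic(mosaic, mask):
    """Baseline color interpolation: average the nearest known samples."""
    k_g = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]])
    k_rb = np.array([[.25, .5, .25], [.5, 1, .5], [.25, .5, .25]])
    out = np.zeros(mask.shape)
    for c, k in zip(range(3), (k_rb, k_g, k_rb)):
        plane = np.where(mask[..., c], mosaic, 0.0)
        out[..., c] = conv3(plane, k)
    return out
```

A context-based pattern-matching method, as proposed in the paper, would replace the fixed averaging kernels with estimates chosen from the local pixel neighborhood.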
In this paper, a novel wavelet-based approach to recover continuous tone images from halftone images is presented. Wavelet decomposition of the halftone image facilitates a series of spatially and frequency selective processing steps that preserve most of the original image content while eliminating the halftone noise. Furthermore, optional non-linear filtering can be applied as a post-processing stage to create the final aesthetic contone image. This approach lends itself to practical applications since it is independent of parameter estimation and hence universal to all types of halftoned images, including those obtained by scanning printed halftones.
A number of applications employing lossy compression require a mechanism to control the size of a compressed image so that it does not exceed the capacity of a fixed-size buffer. This situation arises in digital cameras, where a user expects to store a predefined number of pictures in a fixed-size buffer, or in computer peripheral systems like a laser printer, where a fixed-size buffer is used to store the rendered page image. We describe a fully JPEG-compliant two-pass scheme that can compress an arbitrary image to a predetermined, fixed-size file. The advantages of the proposed method are that it is fully compliant with the JPEG standard image compression algorithm and that it requires a smaller working buffer than other rate-control schemes.
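The details of the two-pass scheme are left to the paper, but the core of any fixed-size rate control is a monotone search over a quantizer scale: coarser quantization yields a smaller file. A minimal sketch of that search, with a hypothetical `compressed_size` callback standing in for an actual JPEG encoding pass:

```python
def fit_to_budget(compressed_size, budget, lo=1.0, hi=50.0, iters=24):
    """Bisect for the smallest quantization-table scale factor whose
    compressed output fits the budget. compressed_size(scale) must be
    non-increasing in scale (coarser quantization -> smaller file),
    and the budget is assumed reachable at scale `hi`."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if compressed_size(mid) <= budget:
            hi = mid   # fits: try finer quantization
        else:
            lo = mid   # too big: quantize more coarsely
    return hi
```

A JPEG-compliant implementation would apply the resulting scale to the standard quantization tables and re-encode in the second pass; the paper's contribution is achieving this while keeping the working buffer small.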
We present a general method for lossless/lossy coding of bi-level images. The compression and decompression method is analogous to JBIG, a current international standard for bi-level image compression, and is based on arithmetic coding and a template to determine the coding state. Loss is introduced in a pre-process on the encoding side by flipping pixels in a controlled manner. The method is primarily aimed at halftoned images, as a supplement to the specialized soft pattern matching techniques, which work better for text. The new algorithm also works well on documents of mixed content, e.g. halftones and text, without any segmentation of the image. The decoding is usually slower than JBIG due to a more spread-out template. A decoding output of more than 1 Mpixel per second can be obtained in software implementations. We present a greedy 'rate-distortion' algorithm for flipping, as well as less complex algorithms intended for relatively fast encoding and moderate latency. In the less complex algorithms, flipping and encoding are carried out in the same pass. The potential risk of flipping avalanches is minimized by conditioning flipping on the sign and magnitude of the local gray-scale error computed by a forgetful error diffusion algorithm. Template-based refinement coding is applied for a lossy-to-lossless refinement step. The (de)coding method is proposed as part of JBIG-2, an emerging international standard for lossless/lossy compression of bi-level images.
We are currently developing a high-speed photographic system that images objects engulfed in high-radiance backgrounds. Using a copper vapor laser as a pulsed illuminator, in conjunction with a narrow-band interference filter and an electro-optic shutter as spectral and temporal filters respectively, the background radiance is diminished by eight orders of magnitude. Consequently, the back-scattered imaging photons from the laser overpower the radiant background, resulting in a sufficiently high imaging-to-background ratio. Such high levels of background discrimination are made possible by the copper vapor laser, which illuminates with great intensity within a 7 GHz spectral line centered on the 1 nm wide transmission band of the interference filter. The laser emits 30 ns pulses at a repetition rate of 20 kHz, synchronized with an equivalent open-aperture period of the electro-optic shutter. Images are captured by an electronic camera at a rate of 20,000 frames per second and are available for further digital image processing.
A digital high-speed camera and recording system for 2D UV-laser spectroscopy was recently completed at the Bremen drop tower. At the moment the primary users are microgravity combustion researchers. The current project studies the reaction zones during the process of combustion. In particular, OH radicals are detected in 2D using the method of laser-induced predissociation fluorescence (LIPF). A pulsed high-energy excimer laser system combined with a two-stage intensified CCD camera allows a repetition rate of 250 images per second, matching the maximum laser pulse repetition rate. The laser system is integrated at the top of the 110 m high evacuatable drop tube. Motorized mirrors are necessary to maintain a stable beam position within the area of interest during the drop of the experiment capsule. The duration of one drop is 4.7 seconds. About 1500 images are captured and stored onboard the drop capsule's 96 Mbyte RAM image storage system. After recovery of the capsule and data, special PC-based image processing software visualizes the movies and extracts physical information from the images. Now, after two and a half years of development, the system is fully operational and capable of high-temporal-resolution 2D LIPF measurements of OH, H2O, O2 and CO concentrations and of the 2D temperature distribution of these species.
Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality, or of tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. We have now developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system can drive a camera based on camera movement data obtained by the real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results are also reported.
The lightness values of white papers cover an approximate range of fifteen just-noticeable differences (JND). The tone range of scanners is adjusted for the lightest possible substrate. Therefore, a scan is usually preceded by a preview operation in which the image is subsampled and the data is analyzed to determine the actual tone range. In color facsimile and sheet-fed scanners that do not buffer the entire image, such an operation is not possible. We present a technique in which statistical methods are used to estimate the tone level of the paper. This estimate is used to set the parameters for a tone reproduction curve. The technique is incremental: the statistical data is gathered during the scan. As the scan progresses, the estimate is refined based on the increasing amount of data available from the accumulated histogram. This also has the advantage that artifacts due to lamp warming during slow scans are automatically compensated.
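The statistical details are in the paper; purely as an illustration of the incremental idea, the sketch below (with a hypothetical percentile heuristic in place of the paper's estimator) accumulates a luminance histogram scanline by scanline and re-estimates the paper tone from the bright end of the running histogram:

```python
import numpy as np

class PaperWhiteEstimator:
    """Incrementally estimate the substrate tone of a scan in progress."""

    def __init__(self, bins=256, percentile=0.95):
        self.hist = np.zeros(bins, dtype=np.int64)
        self.percentile = percentile

    def add_scanline(self, line):
        # accumulate the running histogram; no full-image buffer is needed
        self.hist += np.bincount(line, minlength=len(self.hist))

    def paper_white(self):
        # level below which `percentile` of the pixels seen so far fall;
        # assumes the substrate dominates the highlight end of the histogram
        cdf = np.cumsum(self.hist)
        return int(np.searchsorted(cdf, self.percentile * cdf[-1]))
```

Each new scanline refines the estimate, which in turn refreshes the tone reproduction curve (e.g. a linear stretch mapping `[0, paper_white]` to the full output range); slow lamp-warming drift is absorbed the same way.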
This paper introduces a new approach to inverse halftoning using nonorthogonal wavelets. The distinct features of this wavelet-based approach are: a) edge information in the highpass wavelet images of a halftone is extracted and used to assist inverse halftoning, b) cross-scale correlations in the multiscale wavelet decomposition are used for removing background halftoning noise while preserving important edges in the wavelet lowpass image, c) experiments show that our simple wavelet-based approach outperforms the best results obtained from inverse halftoning methods published in the literature, which are iterative in nature.
The use of color imaging devices continues to expand rapidly as the use of personal computers grows. Understanding the characteristics of color imaging devices such as displays, scanners, digital cameras, printers, etc. is key to their proper use. The purpose of this report is not to delineate all of the varied aspects of these devices but to concentrate on one interesting aspect, viz. color volume. The many other aspects that could be discussed, such as gray scale range, density range, page size, colorant stability, etc., are also important, but for the purposes of this report only color volume is discussed.
A technique is proposed for estimating the surface of the color gamut of an output device, in 3D colorimetric space. The method employs a modified convex hull algorithm. This approach is shown to be more general, and more accurate, than existing techniques. Simple numerical metrics are derived from this surface description: namely the gamut volume in 3D space; and the percentage of colors from the Pantone Matching System which fall within the gamut.
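As a plain-hull illustration of the metrics described (the paper's modified algorithm, which handles concave gamut surfaces, is not reproduced here), the volume and coverage computations might look like:

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def gamut_volume(lab_points):
    """Volume of the convex hull of measured device colors, in CIELAB
    units cubed. Note: a plain convex hull overestimates gamuts whose
    true surface is concave, which motivates the paper's modification."""
    return ConvexHull(np.asarray(lab_points)).volume

def gamut_coverage(device_points, test_points):
    """Fraction of test colors (e.g. Pantone patches) inside the hull,
    via Delaunay point location."""
    tri = Delaunay(np.asarray(device_points))
    return float((tri.find_simplex(np.asarray(test_points)) >= 0).mean())
```

In practice the device points would be colorimetric measurements of printed patches spanning the device's colorant combinations.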
In this study, gamut mapping between real print processes was investigated. The processes were assumed to have equal lightness ranges; hence no lightness mapping was applied. Several chroma mapping methods, both modifying and not modifying lightness, were systematically evaluated. Pure clipping of chroma proved to be the best technique when lightness was left untouched. However, a few images required adjustments of lightness in order to retain higher chroma. These investigations were used to gain practical experience with a new, analytical color gamut representation recently published. The new representation method displayed no anomalies and hence proved to be fully practical.
The color appearance of an object depends on its reflectance spectrum and on the spectral characteristic of the illumination. In order to offer correct color under different illuminations at the reproduction stage it is necessary to transmit information about the whole reflectance spectrum. Knowing the tristimulus color values for a set of illuminations it is possible to calculate a reflectance spectrum that leads to the desired colors for these illuminations and linear combinations of them. The relatively small changes in color with changing illumination allow the use of a DPCM coding scheme. The performance is improved by predicting the color appearance of the object from one illumination to another. To achieve this we employ the white adaptation mechanism of the human eye and extend this by statistical modeling of the underlying reflectance spectra. We evaluate our approach using a set of reflectance spectra as well as a multispectral image.
The proliferation of cheap color peripherals such as printers, scanners and digital cameras, has rendered them accessible to home users who can connect them to capture color images, and print these out on a color hard-copy device such as an ink-jet printer. Color consistency between the scanned original and the printed image is a well-known problem that has been approached by various techniques. Industry standards for color-management systems attempt to provide a device-independent framework for the consistent transfer of color images between devices in a manner that is transparent to the user. These systems require the specification of device characteristics that are often unavailable for older and/or inexpensive peripherals. In this paper, a closed-loop color-matching system to achieve consistent color reproduction between an original color image and a printed copy of the scanned original, is presented. The technique is designed for home users of scanners and printers who may not have access to expensive equipment for conducting photometric measurements. The algorithm requires minimal user intervention and is computationally efficient. To perform a calibration run, a small set of reference color patches printed by the printer are scanned using the scanner. A computer program analyzes the correspondence between the printed and scanned values to generate the necessary mappings.
Standards relating to color data definition continue to be a dominant theme in both the US and international graphic arts standards activity. There is a growing understanding of the role that metrology and printing process definition play in helping define stable conditions to which color characterization data can be related. Standards have been published to define color measurement and computation requirements, scanner input characterization targets, four- color output characterization, and graphic arts applications for both transmission and reflection densitometry. Work continues on standards relating to ink testing, reference ink color specifications, and printing process definition. In addition, efforts are underway to document, in ANSI and ISO Technical Reports, colorimetric characterization data for those printing processes having broad-based usage. These include various applications of offset, gravure, and flexographic printing processes. Such data is key to the success of color profiles developed in accordance with the specifications being developed by the International Color Consortium. The published graphic arts imaging and color- related standards and technical reports are summarized and the current status of the work in progress is reviewed. In addition, the interaction of the formal standards programs and other industry-driven color activities is discussed.
Numerous papers have been written regarding techniques to translate color measurements from an RGB device into some standard color space. The papers seem to ignore the mathematical 'truth'...that the translation is impossible. Do these color transforms work? Under what conditions do they work, and what are the limitations? Why do they work? In this paper, light emitting diodes (LEDs) are viewed with a color video camera. It is seen that, for the spectra of LEDs, transforming from the camera color space to the XYZ tristimulus space leads to very large errors. The problem stems from the fact that the RGB filter responses are not a linear combination of the XYZ responses. Next, it is shown that the transformation of CMYK halftones does not pose such difficulties. Here, it is found that a simple linear transform is relatively accurate, and some options to improve this accuracy are investigated. Several explanations are offered as to why transforms of CMYK are more accurate than transforms of LEDs. To determine which of the explanations is the most likely, linear transforms are applied to a variety of collections of colors.
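The linear transforms discussed can be fitted by least squares. In this sketch (illustrative data, not the paper's measurements), a 3×3 matrix is fitted from corresponding RGB and XYZ measurements; when the camera responses truly are a linear combination of the XYZ color matching functions the residual vanishes, and the large LED errors arise precisely because they are not:

```python
import numpy as np

def fit_rgb_to_xyz(rgb, xyz):
    """Least-squares 3x3 matrix M minimizing ||rgb @ M - xyz||_F.
    rgb, xyz: (N, 3) arrays of corresponding color measurements."""
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M

def rms_error(rgb, xyz, M):
    """Root-mean-square residual of the fitted linear transform."""
    return float(np.sqrt(np.mean((rgb @ M - xyz) ** 2)))
```

Applying `rms_error` to a set of LED measurements versus a set of CMYK halftone measurements would quantify the accuracy gap the paper reports.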
The production of color images has grown in recent years due to the impact of digital technology. Access and equipment affordability are now bringing a new generation of color producers into the marketplace. Many traditional questions concerning color attributes are repeatedly asked by individuals: color fidelity, quality, measurement, and device characterization pose daily dilemmas. Curriculum components should be offered in an educational environment that enhance the color foundations required of knowledgeable managers, researchers and technicians. The printing industry is adding many of the new digital color technologies to its vocabulary pertinent to color production. This paper presents current efforts being made to integrate color knowledge in a four-year program of undergraduate study. Specific topics include: color reproduction, device characterization, material characterization and the role of measurements as a linking attribute. The paper also details efforts to integrate the color specification/measurement and analysis procedures used by students and their subsequent application in color image production. A discussion of measurement devices used in the learning environment is also presented. The investigation involves descriptive data on colorants typically used in printing inks and color.
Over the past five years, the author has built several automated measuring systems for measurement of the color and/or density of scanner and printer calibration targets. These systems position portable spectrophotometers with a computer-controlled three-axis XYZ motorized stage. Also during the past five years, the author has published many papers on the mechanisms of lateral diffusion measurement error. At the Society of Plastics Engineers Regional Technical Conference on Color and Appearance in St. Louis, October 1996, he presented a paper on a lateral diffusion error correction method that employs data from a second measurement made with the sample moved back from the normal measuring position. The present paper reports on efforts to incorporate this error correction method into an automated graphic arts color measurement system.
A fluorescence dot area meter is an instrument that measures halftone dot sizes on printing plates that contain one or more fluorescent compounds. Currently, dot sizes on printing plates are measured with a reflection densitometer, which provides questionable results. In the proposed instrument, a printing plate is excited by visible light in the blue-green region and the amount of fluorescent emission is measured. This method provides a high signal-to-noise ratio and freedom from directional effects.
The ANSI IT8.7/1-1993 standard defines the requirements for color film targets, intended for the characterization of color input scanners for graphic arts applications. These targets are created by the film manufacturer, based on their understanding of the capabilities of the specific photographic film product used, and in accordance with the guidelines of the standard. The manufacturer of the target is required to furnish colorimetric data describing the targets, either for individual targets or as batch average data. The standard makes provision for the reporting of the colorimetric parameters based on either integrating sphere-referenced measurements or opal-glass diffuser-referenced measurements. The integrating sphere is the more conventional geometry for colorimetric data, while opal glass is the specified geometry for densitometric data. Many new spectral measurement instruments, intended to provide both density and colorimetric data, use an opal-glass diffuser.
The luminance of a given display pixel depends not only on the present input voltage but also on the input voltages for the preceding pixel or pixels along the display raster. This effect, which we refer to as the adjacent pixel nonlinearity, is never compensated for when 2D stimulus patterns are presented on standard display monitors. To compensate for the adjacent pixel nonlinearity, we summarize in this paper the methods for generating a 2D lookup table which corrects for the nonlinearity over most of the display's luminance range. This table works even if the current pixel luminance depends on more than one preceding pixel. The creation of a 2D lookup table involves making a series of calibration measurements and a least-squares data fitting procedure to determine the parameters for a model of the adjacent pixel nonlinearity proposed by Mulligan and Stone. Once the parameters are determined for a particular display, the 2D lookup table is created. To increase the available mean luminance, we have evaluated the utility of the 2D lookup table when multiple color guns are in use.
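The calibration measurements and the Mulligan-Stone model fit are described in the paper; the sketch below only illustrates the table-building step. It assumes a simplified, hypothetical forward model (a gamma response attenuated when the beam must rise from a darker preceding pixel) and inverts it by direct search over output codes:

```python
import numpy as np

GAMMA, K = 2.2, 0.15   # illustrative display parameters, not measured values

def modeled_luminance(v_prev, v_cur):
    """Hypothetical adjacent-pixel model: normalized luminance of the
    current pixel dips when it is brighter than the preceding pixel."""
    return (v_cur ** GAMMA) * (1.0 - K * np.maximum(0.0, v_cur - v_prev))

def build_2d_lut(levels=64):
    """lut[prev, desired] -> corrected code: for each preceding code,
    pick the output code whose modeled luminance is closest to the
    luminance the desired code would produce on an ideal display."""
    codes = np.linspace(0.0, 1.0, levels)
    targets = codes ** GAMMA                 # ideal (uncorrupted) luminances
    lut = np.zeros((levels, levels), dtype=np.intp)
    for p, vp in enumerate(codes):
        lums = modeled_luminance(vp, codes)  # achievable luminances
        lut[p] = np.abs(lums[None, :] - targets[:, None]).argmin(axis=1)
    return lut
```

At display time the table is indexed by the previous pixel's code and the desired code of the current pixel, so the correction adapts along the raster.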
In 1990, the first monochrome print-on-demand (POD) systems were successfully brought to market. Subsequent color versions have been less successful, in my view mostly because they require a different workflow than traditional systems and the highly skilled specialists have not been trained. This hypothesis is based on the observation that direct-to-plate systems for short-run printing, which do not require a new workflow, are quite successful in the marketplace. The Internet and the World Wide Web are the enabling technologies that are fostering a new print model that is very likely to replace color POD before the latter can establish itself. In this model the consumers locate the material they desire from a content provider, pay through a digital cash clearinghouse, and print the material at their own cost on their local printer. All the basic technologies for this model are in place; the main challenge is to make the workflow sufficiently robust for individual use.
The Internet is rapidly changing the traditional means of creation, distribution and retrieval of information. Today, information publishers leverage the capabilities provided by Internet technologies to rapidly communicate information to a much wider audience in unique, customized ways. As a result, the volume of published content has been increasing astronomically. This, in addition to the ease of distribution afforded by the Internet, has resulted in more and more documents being printed. This paper introduces several axes along which Internet printing may be examined and addresses some of the technological challenges that lie ahead. Some of these axes include: (1) submission--the use of Internet protocols for selecting printers and submitting documents for print, (2) administration--the management and monitoring of printing engines and other print resources via Web pages, and (3) formats--printing document formats whose spectrum now includes HTML documents with simple text, layout-enhanced documents with Style Sheets, documents that contain audio, graphics and other active objects, as well as the existing desktop and PDL formats. The format axis of Internet printing becomes even more exciting when one considers that Web documents are inherently compound and that traversal into the various pieces may uncover various formats. The paper also examines some imaging-specific issues that are paramount to Internet printing. These include formats and structures for representing raster documents and images, compression, font rendering, and color spaces.
I would like to share my experience of using the computer for creating art. I am a graphic designer originally trained without any exposure to the computer. I graduated in July of 1994 from a four-year curriculum of graphic design at the Istituto Europeo di Design in Milan, Italy. Italy is famous for its excellent design capability. Art and beauty influence the life of nearly every Italian. Everywhere you look on the streets there is art, from grandiose architecture to the displays in shop windows. A keen esthetic sense and a search and appreciation for quality permeate all aspects of Italian life, manifesting in the way people cut their hair, the style of their clothes, and how furniture and everyday objects are designed. Italian taste is fine-tuned to the appreciation of refined textiles, and quality materials are often enhanced by simple design. The Italian culture has a long history of excellent artisanship, and good craftsmanship is highly appreciated. Gadgets have never been popular in Italian society. Gadgets are considered useless objects which add nothing to a person's life, and since they cost money they are actually viewed as a waste. The same is true for food: except in the big cities filled with tourists, fast food chains have never survived. Genuine and simple food is what people truly desire. A typical Italian sandwich, for example, is minimalist; the essential ingredients are left alone without additional sauces, because if something is delicious by itself why would anyone want to disguise its taste?
Printing from the World Wide Web has been a difficult and disjointed process. Users are required to print each separate HTML file as an individual document. Many web documents are published with each chapter or section stored as a separate HTML file. Standard print engines are not accustomed to this multi-document paradigm. In addition, standard printing methods are not aware of hypertext links. The information in HTML files is written and structured to be viewed on computer monitors and in interactive web browsers. HTML files contain a lot of data that is not viewed or is only meaningful on-line, and it is difficult to determine what to print. This paper discusses the problems with printing HTML-based documents and describes some solutions to these problems.
The quality of typical error diffused images can be improved by designing an error diffusion filter that minimizes a frequency-weighted mean squared error between the continuous tone input and the halftone output. Previous approaches to this design are typically based on the assumption that the binary quantizer error is a white noise source. We propose in this paper an iterative method for designing an optimum error diffusion kernel without such an assumption on the spectral characteristics of the binary quantizer error. In particular, we use a set of training images, and iterate the two steps of designing the error diffusion filter and evaluating the spectrum of the quantizer error. Experimental results are shown for error diffusion filters designed using this iterative method.
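For reference, the classic Floyd-Steinberg kernel, whose design implicitly rests on the white-noise assumption the paper drops, can be sketched as below; the paper's iterative procedure would replace `FS_KERNEL` with trained weights:

```python
import numpy as np

# (row offset, column offset, weight) for the Floyd-Steinberg kernel
FS_KERNEL = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]

def error_diffuse(img, kernel=FS_KERNEL):
    """Raster-order error diffusion; img values in [0, 1]."""
    f = img.astype(float).copy()
    out = np.zeros_like(f)
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - out[y, x]
            # push the quantizer error onto unprocessed neighbors
            for dy, dx, wgt in kernel:
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    f[yy, xx] += err * wgt
    return out
```

In the iterative design described, this inner loop is run on training images, the spectrum of `err` is measured, and the kernel weights are re-optimized against the frequency-weighted error criterion.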
Two approaches to describing the error diffusion (ED) technique are presented, based on the image regions formed during the ED process: the processed pixel area and the unprocessed pixel area. According to the first approach, the ED method consists of thresholding the gray value of the current pixel to decide the binary output, and the error produced by selection of the binary output is then diffused forward to the unprocessed pixels. According to the second approach, ED selects the value of the output pixel to minimize the error between the continuous input image and the equivalent gray level corresponding to the binary output. This paper presents an ED method that unifies the two approaches. The neighbors of the current pixel to be processed are classified into two classes that are treated distinctly: on one hand, the processed neighbors are used to minimize the error between the input and the equivalent gray-level output; on the other hand, the unprocessed pixels absorb the error produced by the selection of the binary output. Since all neighbors of the current pixel are involved in the computation, this ED approach is called symmetric error compensation (SEC). The SEC method can progress in an arbitrary direction through the image area. This advantage enables the derivation of a new hybrid method combining SEC and the pulse density modulation method.
Various modifications of the Floyd-Steinberg error diffusion algorithm have been proposed to reduce undesirable artifacts or to enhance the edges in the error diffused image. Most existing error diffusion techniques use an error diffusion kernel defined on the causal image plane with respect to the raster scanning directions. In this paper, an error diffusion kernel containing non-causal neighbors is proposed for edge enhancement. The error diffusion kernel for each gray level is first estimated by minimizing an error criterion defined for an input gray-level ramp image and its output binary image. The proposed error diffusion kernel is then calculated by averaging the estimated kernels representing mid-tone gray levels. Experiments are performed to examine the proposed error diffusion method. Experimental results indicate that the binary images obtained with the proposed error diffusion kernel exhibit enhanced edges compared to those from existing error diffusion techniques.
This paper describes a new model-based error diffusion method that compensates for dot overlap. We designed 32 test patterns to measure printer non-linearity. The effects of four neighboring pixels are considered in the standard error diffusion. The effect of dot overlap is measured exactly under various distributions of ON/OFF neighboring pixels. A new model-based error diffusion built on the measured non-linearity shows good reproduction of the gray scale. It is worth noting that the method makes few assumptions about dot radius, homogeneity inside the dot, or the behavior of overlapped areas. The proposed method is therefore applicable to real environments with few restrictions on the printer dot model.
Color halftoning using a conventional screen requires careful selection of screen angles to avoid moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or moire patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM, as well as for using mutually exclusive BNMs, for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that having one adaptive color channel gives increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
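Halftoning with a mask is a pure point process: each contone value is compared against the corresponding mask entry. The sketch below uses a random permutation mask purely as a stand-in (constructing an actual blue noise mask requires the Yao-Parker procedure) and shows the shift strategy for decorrelating a second color plane:

```python
import numpy as np

def mask_halftone(plane, mask):
    """Screen a contone plane (values in [0, 1]) against a dither mask
    with thresholds in [0, 1): one compare per pixel, no error feedback."""
    return (plane > mask).astype(np.uint8)

def shifted_mask(mask, dy, dx):
    """Toroidally shifted copy of the mask for another color plane,
    one of the decorrelation strategies evaluated in the paper."""
    return np.roll(mask, (dy, dx), axis=(0, 1))
```

The dot-on-dot, inverted-mask, and mutually exclusive schemes compared in the paper differ only in how the masks for the color planes are derived from one another; the adaptive channel would instead choose its dots to minimize a colorimetric error measure.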
Dithering with a blue noise mask is free of the artifacts of error diffusion and is very simple to implement, but it does not represent local edge properties well. In this paper, a local adaptive masking technique is proposed. By controlling the local adaptive factors, the amounts of the high-pass-filtered and blue-noise components are controlled. The proposed algorithm is shown to behave like simple thresholding in background regions, blue noise masking at mid levels, and high-pass filtering in edge regions. The resulting image shows good gray-tone reproduction as well as good edge characteristics.
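A rough sketch of the local adaptive idea, with illustrative parameters that are not the paper's: a local edge-activity factor blends the blue-noise mask threshold with a high-pass term, so flat areas are mask-dithered while edges are effectively high-pass filtered.

```python
import numpy as np

def adaptive_mask_dither(img, mask, edge_gain=4.0):
    """Local adaptive masking (sketch only; 'edge_gain' is an
    illustrative parameter, not from the paper).

    img  : 2-D float array in [0, 1]
    mask : blue-noise threshold array, same shape, values in [0, 1]
    """
    # 3x3 box mean as a cheap estimate of the local average
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    local_mean = sum(pad[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)) / 9.0
    highpass = img - local_mean
    # k ~ 0 in flat areas (plain blue-noise masking), k -> 1 near edges
    k = np.clip(np.abs(highpass) * edge_gain, 0.0, 1.0)
    # Flat regions compare against the mask; edge regions push the
    # threshold opposite the high-pass signal (edge enhancement).
    threshold = (1.0 - k) * mask + k * (0.5 - highpass)
    return (img >= threshold).astype(np.uint8)
```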
In a drop-on-demand thermal ink-jet printer, the dot size of an ink droplet expelled from the printhead depends on the absorption of the paper. This causes severe differences between output images on different paper materials. In this paper, a color matching algorithm for different papers is proposed. To achieve corresponding color reproduction, dot gain compensation based on saturation is applied to predict the printer's color reproduction. As the dot gain of the pigment increases, the white portion of the paper decreases while the saturation value increases monotonically. Dot gain compensation may introduce an intensity change; therefore, an intensity compensation without any hue variation follows, to match the colors on different substrates.
One of the major problems encountered in color halftoning on ink-jet printers is ink duty. Most commercial ink-jet printers produce ink droplets that exceed the physical area of a single pixel. While limiting the continuous-tone ink is a valid solution for dispersed-dot or blue-noise dithering, that method fails for cluster dot dithering. Our method deals with the ink duty problem in the cluster dot case by 'punching out' holes in the halftone pattern without altering its appearance, and in a time-efficient manner. Given a printer, we first determine the ink duty constants. A dither matrix is then used to remove enough dots locally to satisfy the alpha, beta, and gamma constraints.
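The punch-out idea can be sketched as follows; the single `duty_limit` parameter and the fixed 8x8 block size are simplifying assumptions standing in for the paper's alpha, beta, and gamma constraints.

```python
import numpy as np

def punch_out_holes(halftone, dither, duty_limit, block=8):
    """'Punch out' dots from a binary cluster-dot halftone so that no
    block exceeds an ink-duty limit. Removal order follows a dither
    matrix, so the holes form a consistent spatial pattern."""
    out = halftone.copy()
    h, w = out.shape
    dh, dw = dither.shape
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = out[by:by + block, bx:bx + block]   # view into out
            max_on = int(duty_limit * blk.size)
            n_remove = int(blk.sum()) - max_on
            if n_remove <= 0:
                continue
            ys, xs = np.nonzero(blk)
            # Rank ON pixels by the tiled dither value and remove the
            # lowest-ranked ones first.
            order = np.argsort(dither[(by + ys) % dh, (bx + xs) % dw])
            blk[ys[order[:n_remove]], xs[order[:n_remove]]] = 0
    return out
```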
In this paper, a novel and unified hardware structure to implement various binarization algorithms is proposed. It is designed to perform: 1) simple thresholding, 2) high-pass filtering, 3) dithering, 4) blue noise masking, 5) error diffusion, 6) threshold modulated error diffusion, and 7) edge enhanced error diffusion. In general, these algorithms have been implemented with several separate logic blocks. We found that a single data-path architecture can be used to implement all of them. The new structure is designed to have the same data flow for each algorithm so that the blocks can be shared. All processing is possible in the proposed unified architecture, which is based on the threshold modulated and edge enhanced error diffusion scheme. The structure contains error filter coefficient registers, error memory, threshold memory, arithmetic units, etc. This paper shows that the proposed hardware structure efficiently reduces the number of gates, and the unified control logic and data path reduce hardware design and debugging complexity.
An adaptive error diffusion technique for color ink-jet printing has been developed to improve halftone smoothness and quality. The technique makes the output color dots spread smoothly apart from each other, which reduces color interference noise. In addition, it increases the dot frequency at light tones, resulting in better visual quality. Threshold values for the different color planes are determined adaptively depending on the color values and their accumulated errors. For a given CMY(K) input pixel, the CMY(K) values are summed with the errors diffused from previously processed neighboring pixels. The summed CMY(K) values suggest the priority for turning colors 'ON'. Threshold values for CMY(K) are therefore ranked and assigned different values. Changing a threshold value in a single color channel does not visibly change the output tones; this characteristic lets the adaptive thresholds produce the same tones with a smoother dot distribution. In addition, the reduction of printer mechanical registration errors by the presented error diffusion is discussed. It achieves overall quality improvement. Experimental results are provided to show the effectiveness of the technique.
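The per-pixel threshold ranking can be illustrated as below; `base` and `spread` are made-up illustrative numbers, not the paper's values.

```python
def rank_thresholds(values_plus_errors, base=0.5, spread=0.125):
    """Assign per-channel thresholds by priority: the channel with the
    largest value-plus-accumulated-error gets the lowest threshold and
    so turns ON first. 'base' and 'spread' are illustrative only."""
    order = sorted(range(len(values_plus_errors)),
                   key=lambda i: values_plus_errors[i], reverse=True)
    thresholds = [0.0] * len(values_plus_errors)
    for rank, ch in enumerate(order):
        thresholds[ch] = base + rank * spread
    return thresholds
```

For an error-augmented CMY pixel (0.9, 0.2, 0.6), cyan receives the lowest threshold and magenta the highest, so cyan dots are placed first where they are most needed.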
A method will be shown to incorporate digital watermarks in printed halftone images using stochastic screens. The watermark is not visible to the eye and introduces no loss in image quality. Although it cannot be seen, the watermark can be extracted at a later time with post-processing. Watermarks of high contrast are incorporated in the image by varying the statistics of the stochastic screen. The watermark information can be made visible by comparing the relative changes in spatial correlation in the halftone texture of the image. Watermarking allows a printed image to be tested for the purposes of identifying the owner or the source of the image. Arbitrary customer information can be incorporated into the image, including variable information such as the date or time of day. The technique is robust to copying of the printed image, and the watermark can be detected in reproductions of the halftoned image.
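One way the screen-statistics idea can be sketched (using two independent screens as a hypothetical stand-in for the paper's modified-statistics screens): embed a bit per region through the choice of screen, then detect it from the spatial correlation between the halftone texture and each screen.

```python
import numpy as np

def embed(img, screen_a, screen_b, bits):
    """Halftone each region with one of two stochastic screens chosen
    by the watermark bit (a simplified stand-in for the paper's
    modified-statistics screens)."""
    h, w = img.shape
    bh, bw = h // bits.shape[0], w // bits.shape[1]
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(bits.shape[0]):
        for j in range(bits.shape[1]):
            ys = slice(i * bh, (i + 1) * bh)
            xs = slice(j * bw, (j + 1) * bw)
            screen = screen_b if bits[i, j] else screen_a
            out[ys, xs] = img[ys, xs] >= screen[:bh, :bw]
    return out

def detect(halftone, screen_a, screen_b, shape):
    """Recover the bits: a thresholded texture anti-correlates with the
    screen that produced it, so pick the screen with the more negative
    correlation in each region."""
    h, w = halftone.shape
    bh, bw = h // shape[0], w // shape[1]
    bits = np.zeros(shape, dtype=np.uint8)
    for i in range(shape[0]):
        for j in range(shape[1]):
            blk = halftone[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(float).ravel()
            ca = np.corrcoef(blk, screen_a[:bh, :bw].ravel())[0, 1]
            cb = np.corrcoef(blk, screen_b[:bh, :bw].ravel())[0, 1]
            bits[i, j] = 1 if cb < ca else 0
    return bits
```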
Recent years have witnessed the development of laser electrophotography as one of the major technologies for document printing, serving a wide range of market applications. With the evolution of color and market demand for color hard copy, electrophotography is again taking center stage to serve customer needs in quality, cost, and convenience. Today, electrophotographic technology is used to offer products for color document printing in desktop, mid-volume, and high-speed applications. Total cost of ownership, convenience, and quality today favor the use of this technology over alternatives in many applications. Development of higher-speed color electrophotographic engines demands very high-speed raster image processors and pre-press applications, which are expected to become available in the market during the next five years. This presentation will cover the changing environment of office communication and the continuing role of electrophotography in color document printing.
This paper discusses the use of multi-pass print modes in an ink-jet device. Multi-pass printing techniques are a class of pixel timing/sequencing methods used in many commercial ink-jet printers. When these techniques are used, only part of the final printed image is actually printed during each pass of the printhead over the substrate. After a sufficient number of passes, the entire final image is printed. There are a variety of algorithms that decide how the image is built up after each successive pass; the particular algorithm used depends on the output results desired from the printer. The following sections illustrate how these methods are commonly applied to offer enhanced printer capabilities, improve print quality, and increase system tolerances and latitudes.
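As the abstract notes, many interleave algorithms exist; one common checkerboard-style rule, assigning each pixel to exactly one pass, can be sketched as:

```python
def pass_masks(height, width, n_passes):
    """One simple multi-pass interleave rule: pixel (y, x) is printed
    on pass (y + x) mod n_passes. Real printers use many variants
    (row-based, random, shingled); this is only one possible choice."""
    return [[[1 if (y + x) % n_passes == p else 0
              for x in range(width)]
             for y in range(height)]
            for p in range(n_passes)]
```

Summing the masks pixel-by-pixel yields all ones, i.e. the passes together print the complete image exactly once.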
We have been studying a kind of word processor that can create Japanese character strings, Kanji or Hiragana, in the cursive style using an electronic writing-brush model. In this paper, we describe in detail the operational characteristics of the electronic writing brush we have proposed. We define the touch-shape pattern of the electronic writing brush as the form projected by a circle and a cone. The brush follows points on the skeleton of the character figure, which is given as skeleton data. The thickness of the line is determined by a diameter that varies with brush pressure. Our progressive action model can rotate the direction of the brush tip according to the angle between the direction of the tip and the direction of the brush movement, and can also model the softness of the brush to express the technique called side-brush writing. The front and back sides of the brush can be expressed in a calligraphic drawing. With our technique, we can draw characters in the actual stroke order on a virtual computer plane as if they were written by an actual brush.
A technique to segment dark text on the light background of mixed-mode color documents is presented. The process does not perceptually change graphics and photo regions. Color documents are scanned and printed from various media that usually do not have a clean background. This is especially the case for printouts generated from thin magazine samples; these printouts usually include text and figures from the back of the page, an artifact called bleeding. Removal of bleeding artifacts improves the perceptual quality of the printed document and reduces color ink usage. By detecting the light background of the document, these artifacts are removed from background regions. Detection of dark text regions also enables the halftoning algorithms to use true black ink for black text pixels instead of composite black. The processed document contains sharp black text on a white background, resulting in improved perceptual quality and better ink utilization. The described method is memory efficient and requires only a small number of scan lines of the high-resolution color document during processing.
Image transformation computations are typically the most computationally intensive operations in raster image processing, the process of converting a page description language such as PostScript to a bitmap. Researchers have traditionally assumed a uniprocessor model when optimizing the execution time of image transformations. In multiprocessing environments, in addition to computational speed, other factors such as the frame-buffer access bottleneck and processor utilization have to be considered. A block-based approach that takes these factors into consideration is presented, along with techniques to choose optimal block dimensions. We focus on the single-chip multiprocessing environment provided by the Texas Instruments TMS320C80, which has a RISC processor and four digital signal processors on the same chip. We compare the performance of the block-based approach with a conventional scan-line-based approach.
Today, non-impact printers have become a common office peripheral. With this increase in market acceptance, the understanding of computer imaging has left the realm of black magic, known only to a select group of scientists and engineers, and entered the mainstream of computer literacy.
An image registration method is proposed which provides high visual magnification of micro-position errors so that they may be easily quantified with the unaided eye. The method consists of overprinting a pair of fine-line or screen patterns of the same spatial frequency, in which one pattern functions as a variable mask that unveils a stepped image embedded in the second pattern in proportion to the registration error. The larger-scale image can be composed of one symbol or a series of symbols, each designed to be unmasked at a specific registration error threshold. Pattern structure and duty-cycle differences are used to create the masking function. The use of embedded images allows a wide variety of visual symbols to be displayed at precise error thresholds. The large symbol dimensions provide the high visual magnification, typically a factor of about 100. Advantages over moiré techniques are that the symbol outline is sharp, its shape does not depend on interference effects, and it can be independently designed to optimize intuitive visual impact. Applications include a visual pass/fail mark, a numerical scale providing coarse error measurement, and the printed equivalent of a quadrature detector indicating the direction and magnitude of the error. The position-sensing image pair is compact and suitable for photographic, offset printing, or other image generation/replication processes where registration is critical. Imagesetter results for a 0.3 mil registration threshold sensor will be presented.
A novel dry hard copy system for digital medical imaging is reviewed in this paper. This recording system was developed in conjunction with Tektronix Corporation and is based on phase change ink jet technology. The system uses a novel approach to construct gray scale resulting in > 300 dpi resolution and up to 10 bit gray scale output. Special media and inks have been developed to enhance durability and meet requirements for stability and image color. Field evaluations at several hospitals and clinics confirm full diagnostic image quality and show equivalence to conventional wet-processed silver halide laser recording systems.
The science of airborne and satellite reconnaissance for both military and commercial applications is a technology consisting of three primary functional components: the imaging sensor, the softcopy display, and the hardcopy film recorder. Each of these functional components is constantly being improved. The sensors, or viewing component, known as the "eye in the sky," have historically been the focus of much attention. But of what benefit are state-of-the-art sensors if the quality of the images cannot be maintained through the transmission of the electronic image to the ground and the subsequent reconstruction of the images by a softcopy display or a ground-based film recorder? The other components of reconnaissance, located on the ground to display and record the transmitted video, are therefore of equal importance. These reconstructed images, which are the products actually used by individuals, are claimed to be the products of the airborne or satellite sensors when, in fact, they are the output of a video display or the pictures produced by a film recorder. Until now, the recorders used for reconstructing the images have not shared in equal publicity, since they lacked the sophistication of the airborne or satellite sensors.
An analytic method of constructing high-fidelity clustered halftone dots is presented. A phase-addressable cell space is created which provides numerically precise 2D edge position information. Exposures are produced with variable intensity modulation to phase-shift process-direction dot edges, and fine-granularity timing is used to adjust fast-scan-direction edges. This allows symmetrically thresholded halftone dots instead of sequentially thresholded dots. Advantages of this technique are more binary halftone gray levels, reduced labor content, spline-based tone correction, device independence, and arbitrary screen angle and frequency. An equation which provides the desired shape and tone response over the intensity range is developed and analyzed. Shape information is extracted which is independent of the target printer characteristics. Highly accurate tone information is obtained by integrating the continuous shape function. A printer-dependent, gray-enabled modulator drive function is utilized, and step-wedge prints are made with a nominal tone reproduction curve (TRC). Data from densitometer measurements are converted to spline format and used to correct the nominal TRC with high relative accuracy. The resulting contourless tone-corrected prints show very good linearity and shape definition over the entire intensity range.
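A much-simplified sketch of shape-function thresholding, using a circular dot (the paper's shape equation is more general and phase-addressable): the cell is thresholded symmetrically about its center so that the covered area matches the requested gray level.

```python
import math

def symmetric_dot_cell(size, gray):
    """Circular-dot sketch of analytic clustered-dot construction: a
    pixel is ON when its center lies within the radius whose circle
    area equals the requested coverage. Illustrates symmetric
    shape-from-equation thresholding only, not the paper's method."""
    # Radius r such that pi * r^2 = gray * (cell area)
    r = math.sqrt(gray * size * size / math.pi)
    c = (size - 1) / 2.0          # geometric center of the cell
    return [[1 if math.hypot(x - c, y - c) <= r else 0
             for x in range(size)]
            for y in range(size)]
```

Because the dot grows symmetrically from the center as `gray` increases, successive gray levels nest inside one another, which is the property sequential thresholding lacks.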