In this paper, we propose a new, fast, and effective approach for automatic visibility enhancement of images with poor
global and local contrast. Initially, we developed the technique for scanned images with dark and light background
regions and low visibility of foreground objects in both types of regions. The proposed algorithm carries out locally
adaptive tone mapping by means of a variable S-shaped curve, implemented as a cubic Hermite spline. The starting and
ending points of the spline depend on global brightness contrast, whereas the tangents depend on the local distribution of
background and foreground pixels. The variation of the tangents between adjacent areas is smoothed to avoid visible
artifacts. We describe several optimization tricks that enable a high-speed implementation of the algorithm. We compare
the proposed method with several well-known image enhancement techniques by estimating the Michelson contrast
(also known as the visibility metric) on a number of test patterns; the proposed algorithm outperforms the tested
alternatives. Finally, we extend the method to photo enhancement and correction of hazy images.
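As a minimal sketch of the variable S-shaped curve, the snippet below evaluates a single cubic Hermite segment whose endpoints and tangents are free parameters; the paper's actual rules for deriving the endpoints from global contrast and the tangents from local background/foreground statistics are not reproduced, and all parameter values shown are illustrative.

```python
import numpy as np

def hermite_tone_curve(x, p0=0.0, p1=1.0, m0=0.5, m1=0.5):
    """Cubic Hermite segment mapping normalized brightness x in [0, 1].

    p0, p1 are the start/end values of the curve; m0, m1 are the tangents.
    Steeper tangents in the mid-tones yield a stronger S-shape.
    """
    t = np.clip(x, 0.0, 1.0)
    h00 = 2 * t**3 - 3 * t**2 + 1      # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# Example: brighten shadows and compress highlights of an 8-bit image.
img = np.random.randint(0, 256, (4, 4)).astype(np.float64) / 255.0
enhanced = hermite_tone_curve(img, p0=0.02, p1=0.98, m0=1.8, m1=0.6)
print(np.round(enhanced * 255).astype(np.uint8))
```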
This paper is devoted to a novel high-performance algorithm for automatic segmentation and skew correction of several objects on a scanned image. The multi-stage technique includes preprocessing, initial segmentation, classification of connected regions, merging of fragmented regions by a heuristic procedure, bounding-box detection, and deskewing of rectangular objects. Our method is highly efficient owing to the unification of most operations in a single pass. The algorithm provides users with additional functionality and convenience, and it is evaluated with the proposed quantitative quality criteria.
This paper is devoted to an algorithm for generating PDF files with vector symbols from scanned documents. The
multi-stage technique includes segmentation of the document into text/drawing areas and background, conversion of
symbols to lines and Bezier curves, and storage of the compressed background and foreground. In the paper we concentrate on
symbol conversion, which comprises segmentation of symbol bodies with resolution enhancement, contour tracing, and
approximation. The presented method outperforms competing solutions and achieves the best compression-rate/quality ratio.
Scaling the initial document to other sizes, as well as several print/scan-to-PDF iterations, exposes the advantages of the
proposed way of handling document images. A numerical vectorization quality metric was developed. OCR results and a
user opinion survey confirm the high quality of the proposed method.
KEYWORDS: Printing, Biomimetics, Edge detection, Image enhancement, Visual process modeling, Human vision and color perception, Detection and tracking algorithms, Raster graphics, Image processing, Color printing
Reducing toner/ink consumption is an important task in modern printing devices, with a positive ecological and social
impact. We propose a technique for converting print-job pictures into recognizable and pleasant color sketches. Drawing a
"pencil sketch" from a photo belongs to a special area of image processing and computer graphics: non-photorealistic
rendering. We describe a new approach for automatic sketch generation that creates well-recognizable sketches while
partially preserving the colors of the initial picture. Our sketches contain significantly fewer color dots than the initial
images, which helps to save toner/ink. Our bio-inspired approach is based on a sophisticated edge-detection technique for
mask creation and on multiplication of the contrast-enhanced source image by this mask. To construct the mask we use
DoG edge detection, obtained by blending the initial image with its blurred copy through an alpha channel created from a
saliency map according to a pre-attentive human vision model. Measurements of the percentage of toner saved and a user
study prove the effectiveness of the proposed technique for toner saving in an eco-friendly printing mode.
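A minimal sketch of the mask-and-multiply idea, under the assumption that a fixed pair of Gaussian scales stands in for the saliency-driven alpha blending described above; the kernel sizes, gain, and contrast factor are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sketch_mask(gray, sigma_fine=1.0, sigma_coarse=3.0, gain=4.0):
    """Difference-of-Gaussians edge response turned into a [0, 1] mask.

    In the paper the blend between the two scales is driven by a saliency
    map; here a fixed pair of sigmas is used instead (an assumption).
    """
    dog = gaussian_filter(gray, sigma_fine) - gaussian_filter(gray, sigma_coarse)
    return np.clip(1.0 - gain * np.abs(dog), 0.0, 1.0)  # dark strokes at edges

def color_sketch(rgb, contrast=1.3):
    gray = rgb.mean(axis=2)
    mask = sketch_mask(gray)
    boosted = np.clip(0.5 + contrast * (rgb - 0.5), 0.0, 1.0)  # increase contrast
    return boosted * mask[..., None]  # multiply source by the edge mask

rgb = np.random.rand(64, 64, 3)
print(color_sketch(rgb).shape)
```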
Reducing toner consumption is an important task in modern printing devices and has a significant positive ecological
impact. Existing toner-saving approaches have two main drawbacks: the appearance of the hardcopy in toner-saving mode
is worse than in normal mode, and processing of the whole rendered page bitmap requires significant computational
cost.
We propose adding small holes of various shapes and sizes at random places inside the character bitmaps stored in the
font cache. This random perforation scheme is built into the RIP processing pipeline of the standard printer languages
PostScript and PCL. Processing text characters only, and moreover processing each character of a given font and size
only once, is extremely fast. The approach does not deteriorate halftoned bitmaps or business graphics and provides toner
savings of up to 15-20% for typical office documents. The toner-saving rate is adjustable.
The alteration of the resulting characters' appearance is almost indistinguishable from solid black text, owing to the
random placement of small holes inside the character regions. The suggested method automatically skips small fonts to
preserve their quality. Text processed by the proposed method remains readable, and OCR programs process the scanned
hardcopy successfully.
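A toy sketch of the random perforation idea, assuming square holes and a uniform glyph; the real scheme varies hole shapes, adapts to the requested saving rate, and skips small fonts, none of which is reproduced here.

```python
import numpy as np

def perforate_glyph(glyph, hole_fraction=0.15, max_hole=2, seed=0):
    """Punch small square holes at random places inside a 1-bit glyph bitmap.

    glyph: 2-D array of 0/1, where 1 marks toner-covered pixels.
    hole_fraction roughly controls the toner-saving rate (an assumption).
    """
    rng = np.random.default_rng(seed)
    out = glyph.copy()
    ys, xs = np.nonzero(glyph)
    n_holes = int(hole_fraction * ys.size / (max_hole * max_hole))
    for i in rng.choice(ys.size, size=n_holes, replace=False):
        h = rng.integers(1, max_hole + 1)            # random hole size
        y, x = ys[i], xs[i]
        out[y:y + h, x:x + h] = 0                    # clear a small block
    return out

glyph = np.ones((20, 12), dtype=np.uint8)            # a solid dummy glyph
saved = 1.0 - perforate_glyph(glyph).sum() / glyph.sum()
print(f"toner saved: {saved:.1%}")
```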
When a bound document such as a book is scanned or copied with a flatbed scanner, two kinds of defects appear in the scanned
image: geometric and photometric distortion. The root cause of these defects is imperfect contact between the book being scanned
and the scanner glass plate. The gap between the book center and the glass plate causes the optical path between the surface of the
book and the imaging unit (CCD/CIS) to differ from the optimal condition.
In this paper, we propose a method for restoring bound-document scan images without any additional information or sensors. We
correct the bound-document images based on estimation of a boundary feature and a background profile. The boundary feature is
obtained by calculating and analyzing the minimum bounding rectangle that encloses the whole foreground content, and the
extracted feature is used to correct geometric distortion: de-skewing, de-warping, and page separation.
The background profile is estimated from the gradient map and is used to correct photometric distortion: the exposure problem.
Experimental results show the effectiveness of the proposed method.
When scanning a document that is printed on both sides, the image on the reverse can show through with high luminance.
We propose an adaptive method of removing show-through artifacts based on histogram analysis. Instead of attempting
to measure the physical parameters of the paper and the scanning system, or making multiple scans, we analyze the color
distribution to remove unwanted artifacts, using an image of the front of the document alone. First, we accumulate
histogram information to find the lightness distribution of pixels in the scanned image. Using this data, we set thresholds
on both luminance and chrominance to determine candidate regions of show-through. Finally, we classify these regions
into foreground and background of the image on the front of the paper, and show-through from the back. The
background and show-through regions become candidates for erasure, and they are adaptively updated as the process
proceeds. This approach preserves the chrominance of the image on the front of the paper without introducing artifacts.
It does not make the whole image brighter, which is what happens when a fixed threshold is used to remove show-through.
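A condensed sketch of the thresholding idea, assuming a CIELAB-like input and fixed illustrative thresholds; the paper derives the luminance and chrominance thresholds adaptively from the accumulated histogram and updates the candidate regions as processing proceeds.

```python
import numpy as np

def remove_show_through(lab, lum_pct=75, chroma_thr=8.0):
    """Whiten low-chroma, high-luminance pixels that likely show through.

    lab: H x W x 3 array in CIELAB-like coordinates (L in [0, 100]).
    lum_pct and chroma_thr are illustrative; the paper sets these
    adaptively from histogram analysis.
    """
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    lum_thr = np.percentile(L, lum_pct)              # data-driven luminance cut
    chroma = np.hypot(a, b)
    background_L = np.percentile(L, 95)              # estimate of paper white
    candidate = (L > lum_thr) & (chroma < chroma_thr)
    out = lab.copy()
    out[..., 0] = np.where(candidate, background_L, L)   # erase toward paper white
    return out

lab = np.dstack([np.random.uniform(40, 100, (32, 32)),
                 np.random.uniform(-20, 20, (32, 32)),
                 np.random.uniform(-20, 20, (32, 32))])
print(remove_show_through(lab).shape)
```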
KEYWORDS: Halftones, Image filtering, Image processing, RGB color model, Linear filtering, Visualization, Optical filters, High dynamic range imaging, Detection and tracking algorithms, Printing
A screen or halftone pattern appears on the majority of images printed on electrophotographic and inkjet printers as well
as on offset machines. When such a halftoned image is scanned, a noisy effect called a moiré pattern often appears in the
image. Many methods have been proposed for descreening images; the common approach is adaptive smoothing of the
scanned image. However, descreening techniques face the following dilemma: deep screen reduction and restoration of
contone images blur the sharp edges of text and other graphic primitives, while insufficient smoothing leaves the screen
in halftoned areas.
We propose a novel descreening algorithm that is primarily intended to preserve the sharpness and contrast of text
edges and to accurately restore contone images from halftoned ones. The proposed technique for descreening scanned
images comprises five steps. The first step decreases the edge transition slope length via local tone mapping with
ordering; it is carried out before adaptive smoothing and allows better preservation of edges. The adaptive low-pass filter
applies a simplified idea of the Non-Local Means filter for area classification: similarity is calculated between the central
block of the window and a randomly selected adjacent block. If the similarity is high, the current pixel belongs to a flat
region; otherwise it belongs to an edge region. To prevent edge blurring, flat regions are smoothed more strongly than
edge regions. Random selection of blocks avoids the computational overhead of exhaustive directional edge
detection.
The final three stages are an additional decrease of edge transition slope length using local tone mapping, an increase of
local contrast via a modified unsharp mask filter that uses a bilateral filter with a special edge-stop function for modest
smoothing of edges, and global contrast stretching. These stages compensate for the loss of sharpness and contrast caused
by low-pass filtering and enhance the visual quality of the scanned image.
A test target and criteria are proposed for adjusting parameters for different scanning resolutions and for comparison with
existing techniques. The quality of the proposed approach is also evaluated by surveying observers' opinions. According
to the obtained results, the proposed algorithm demonstrates good descreening capability.
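A minimal sketch of the randomized block-similarity test used for area classification, assuming a mean-absolute-difference similarity measure and an illustrative threshold; the block size and window size are likewise assumptions.

```python
import numpy as np

def classify_pixel(window, block=3, thr=8.0, rng=None):
    """Return 'flat' if the central block matches a random adjacent block.

    window: square grayscale neighborhood centered on the pixel of interest.
    A low mean absolute difference means the area is flat and can be
    smoothed strongly; a high one marks an edge region.
    """
    rng = rng or np.random.default_rng()
    c = window.shape[0] // 2
    half = block // 2
    center = window[c - half:c + half + 1, c - half:c + half + 1]
    # Pick a random adjacent block that still fits inside the window.
    dy, dx = rng.integers(-block, block + 1, size=2)
    cy, cx = c + dy, c + dx
    adj = window[cy - half:cy + half + 1, cx - half:cx + half + 1]
    if adj.shape != center.shape:
        return "edge"                                  # fell off the window edge
    sad = np.abs(center - adj).mean()
    return "flat" if sad < thr else "edge"

flat_win = np.full((9, 9), 128.0)
print(classify_pixel(flat_win))   # always 'flat': every block matches the center
```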
This paper relates to a method for effective reduction of the artifacts caused by lossy compression algorithms based on
block-based discrete cosine transform (DCT) coding, known as JPEG coding. The most common artifacts produced by this
type of coding are blocking and ringing artifacts. To reduce the effect of coding artifacts caused by significant
information loss, a variety of algorithms and methods have been suggested. However, the majority of solutions
process all blocks in the image, which increases processing time and resource requirements and over-blurs blocks that are
not affected by blocking artifacts. Techniques for ringing-artifact detection usually rely on an edge-detection step, a
complicated procedure with unknown optimal parameters.
In this paper we describe effective procedures for detecting artifacts and subsequently correcting them. This
approach saves a notable amount of computational resources, since not all blocks are involved in the correction
procedures. Detection is performed in the frequency domain, using only the DCT coefficients of the image. Numerous
examples have been analyzed and compared with existing solutions, and the results prove the effectiveness of the proposed
technique.
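A simplified sketch of frequency-domain block screening, with heuristic energy measures and thresholds of my own choosing rather than the paper's detectors: near-zero AC energy flags a blocking-artifact candidate, and strong high-frequency content flags a ringing candidate.

```python
import numpy as np
from scipy.fft import dctn

def classify_block(block, flat_thr=25.0, ring_thr=400.0):
    """Screen one 8x8 pixel block using only its DCT coefficients.

    Very low AC energy -> heavily quantized, a blocking-artifact candidate.
    Significant energy outside the low-frequency corner -> a ringing candidate.
    Thresholds are illustrative only.
    """
    coeffs = dctn(block, norm="ortho")
    ac = coeffs.copy()
    ac[0, 0] = 0.0                                 # drop the DC term
    ac_energy = np.sum(ac ** 2)
    hf_energy = ac_energy - np.sum(ac[:4, :4] ** 2)
    if ac_energy < flat_thr:
        return "blocking-candidate"
    if hf_energy > ring_thr:
        return "ringing-candidate"
    return "skip"                                  # leave untouched, saves work

flat = np.full((8, 8), 120.0)
edge = np.tile(np.r_[np.zeros(4), np.full(4, 255.0)], (8, 1))
print(classify_block(flat), classify_block(edge))
```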
Several measurable image quality attributes contribute to the perceived resolution of a printing system. These
contributing attributes include addressability, sharpness, raggedness, spot size, and detail rendition capability. This
paper summarizes the development of evaluation methods that will become the basis of ISO 29112, a standard for the
objective measurement of monochrome printer resolution.
In this paper we propose an effective approach for creating pleasing photographic images of high-dynamic-range scenes
from a set of photos captured with exposure bracketing. Details in the dark parts of the scene are usually preserved in the
over-exposed shot, while details in brightly illuminated parts are visible in the under-exposed photos. The proposed
method preserves those details by first constructing a gradient field, mapping it with a special function, and then
integrating it to restore lightness values using the Poisson equation. The resulting image can be printed or displayed on
conventional displays.
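A toy sketch of the gradient-domain pipeline, assuming a largest-magnitude gradient merge, a power-law attenuation function, and a plain Jacobi iteration for the Poisson integration; the paper's actual mapping function and solver are not specified here.

```python
import numpy as np

def fuse_gradients(exposures, alpha=0.7, iters=2000):
    """Tone-fuse bracketed exposures in the gradient domain (toy version).

    exposures: list of H x W luminance images of the same scene.
    alpha < 1 compresses large gradients; Jacobi iteration integrates the
    mapped field by solving the Poisson equation  lap(I) = div(G).
    """
    logs = [np.log(np.clip(e, 1e-4, None)) for e in exposures]
    gx = np.stack([np.gradient(l, axis=1) for l in logs])
    gy = np.stack([np.gradient(l, axis=0) for l in logs])
    mag = np.hypot(gx, gy)
    pick = np.argmax(mag, axis=0)                      # strongest detail wins
    idx = np.indices(pick.shape)
    Gx, Gy = gx[pick, idx[0], idx[1]], gy[pick, idx[0], idx[1]]
    m = np.hypot(Gx, Gy) + 1e-8
    scale = m ** (alpha - 1.0)                         # attenuate strong gradients
    Gx, Gy = Gx * scale, Gy * scale
    div = np.gradient(Gx, axis=1) + np.gradient(Gy, axis=0)
    I = np.zeros_like(div)
    for _ in range(iters):                             # Jacobi relaxation
        I = 0.25 * (np.roll(I, 1, 0) + np.roll(I, -1, 0)
                    + np.roll(I, 1, 1) + np.roll(I, -1, 1) - div)
    I -= I.min()
    return I / max(I.max(), 1e-8)                      # normalize for display

exposures = [np.random.rand(32, 32) * s for s in (0.25, 1.0, 4.0)]
print(fuse_gradients(exposures, iters=200).shape)
```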
This paper relates to content-aware image resizing and to fitting images into predetermined areas.
The problem is to transform an image to a new size, with or without a change of aspect ratio, in a manner that preserves
the recognizability and proportions of the important features of the image. The closest prior-art solutions include, along
with standard linear scaling (down-sampling and up-sampling), image cropping, image retargeting, seam carving, and
special manipulations similar to image retouching. The present approach provides a method for digital image retargeting
by erasing or adding less significant image pixels. This retargeting approach can easily be used for image shrinking;
for image enlargement, however, there are limitations such as stretching artifacts. A history map with relaxation is
introduced to avoid this drawback and to overcome some known limits of retargeting. The proposed approach also takes
measures to preserve important objects, which significantly improves the resulting retargeting quality. Retargeting
applications for devices such as displays, copiers, facsimile machines, and photo printers are described as well.
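The "erasing of less significant pixels" is in the spirit of seam carving; the sketch below removes one minimum-energy vertical seam by dynamic programming. The history map with relaxation and the importance weights for object preservation are not reproduced.

```python
import numpy as np

def remove_vertical_seam(gray):
    """Remove the minimum-energy vertical seam from a grayscale image."""
    gy, gx = np.gradient(gray)
    energy = np.abs(gx) + np.abs(gy)                 # simple gradient energy
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):                            # dynamic programming pass
        left = np.r_[np.inf, cost[y - 1, :-1]]
        up = cost[y - 1]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, up), right)
    # Backtrack the cheapest seam from bottom to top.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    # Drop the seam pixel from every row.
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return gray[keep].reshape(h, w - 1)

img = np.random.rand(40, 60)
print(remove_vertical_seam(img).shape)   # (40, 59)
```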
Sharpness is an important attribute that contributes to the overall impression of printed photo quality. Often it is
impossible to estimate sharpness prior to printing. Sometimes it is a complex task for a consumer to obtain accurate
sharpening results by editing a photo on a computer.
A novel adaptive sharpening method aimed at photo printers is proposed. Our approach includes three key techniques:
sharpness level estimation, local tone mapping, and boosting of local contrast. No-reference automatic sharpness
estimation is based on analysis of the variation of edge histograms, where the edges are produced by high-pass filters with
various kernel sizes; an array of integrals of the logarithm of the edge histograms characterizes photo sharpness, and
machine learning is applied to choose optimal parameters for a given printing size and resolution. Local tone mapping
with ordering is applied to decrease the edge transition slope length without noticeable artifacts and with some noise
suppression. An unsharp mask based on a bilateral filter is applied to boost local contrast; this stage does not produce the
strong halo artifacts typical of the traditional unsharp mask filter.
The quality of the proposed approach is evaluated by surveying observers' opinions. According to the obtained replies,
the proposed method enhances the majority of photos.
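A compact sketch of the bilateral-filter-based unsharp mask, assuming a brute-force bilateral implementation and illustrative sigmas and gain; only the detail that survives edge-preserving smoothing is amplified, which is why halos around strong edges stay weak.

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Brute-force bilateral filter for a grayscale image in [0, 1]."""
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, 0), dx, 1)
            w = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                 * np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2)))
            out += w * shifted
            norm += w
    return out / norm

def bilateral_unsharp(img, gain=0.8):
    """Boost local contrast: amplify only the detail the bilateral filter removes."""
    base = bilateral(img)
    detail = img - base          # strong edges stay in 'base', so halos are weak
    return np.clip(img + gain * detail, 0.0, 1.0)

img = np.random.rand(48, 48)
print(bilateral_unsharp(img).shape)
```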
Printer resolution is an important attribute for determining print quality, and it has frequently been equated with hardware optical resolution. However, the spatial addressability of a hardcopy is not directly related to optical resolution, because it is affected by the printing mechanism, the media, and software data processing such as resolution enhancement techniques (RET). The international organization ISO/IEC SC28 addresses this issue and is working to develop a new metric to measure this effective resolution. As part of that development process, this paper proposes a candidate metric for measuring printer resolution. The slanted-edge method has been used to evaluate image sharpness for scanners and digital still cameras; in this paper, it is applied to monochrome laser printers. A test chart is modified to reduce the effect of halftone patterns. Using
a flatbed scanner, the spatial frequency response (SFR) is measured and modeled with a spline function. The frequency corresponding to an SFR of 0.1 is used as the metric for printer resolution. The stability of the metric is investigated in five separate experiments: (1) page-to-page variations, (2) different ROI locations, (3) different ROI sizes, (4) variations of toner density, and (5) correlation with visual quality. The 0.1-SFR frequencies of ten printers are analyzed. Experimental results show a strong correlation between the proposed metric and perceptual quality.
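As a worked illustration of the metric itself (not of the slanted-edge measurement), the snippet below fits a smooth curve to a sampled SFR and reports the frequency at which it drops to 0.1; the SFR values are synthetic.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def freq_at_sfr(freq_cpm, sfr, target=0.1):
    """Frequency (cycles/mm) at which the spline-modeled SFR falls to `target`."""
    spline = CubicSpline(freq_cpm, sfr)
    dense_f = np.linspace(freq_cpm[0], freq_cpm[-1], 2000)
    dense_s = spline(dense_f)
    below = np.nonzero(dense_s <= target)[0]
    return dense_f[below[0]] if below.size else np.nan

# Synthetic SFR sampled at a few frequencies (illustrative numbers only).
freq = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
sfr = np.array([1.00, 0.85, 0.60, 0.35, 0.18, 0.08, 0.03])
print(f"0.1-SFR frequency: {freq_at_sfr(freq, sfr):.2f} cycles/mm")
```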
Red-eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user
intervention, making photos more pleasant to the observer, is an important task.
A novel, efficient technique for automatic red-eye correction aimed at photo printers is proposed. The algorithm is
independent of face orientation and is capable of detecting paired red eyes as well as single red eyes. The approach is
based on 3D tables of typicalness levels for red eyes and human skin tones, and on directional edge-detection filters
applied to the redness image. Machine learning is applied for feature selection. Red-eye regions are classified with a
cascade of classifiers that includes a Gentle AdaBoost committee of Classification and Regression Trees (CART).
The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementation
variants are possible, trading off detection and correction quality, processing time, and memory consumption.
A numerical quality criterion for automatic red-eye correction is proposed. This quality metric is constructed by applying
the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes, and it was used to choose
algorithm parameters via an optimization procedure.
Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing
solutions.
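A minimal sketch of the retouching stage only, assuming a given red-eye mask: the red channel is replaced with a neutral tone, the region is darkened, and the result is blended with the original. The detection cascade and the exact blending weights are not reproduced.

```python
import numpy as np

def retouch_red_eye(rgb, mask, darken=0.85, alpha=0.9):
    """Desaturate, darken, and blend red-eye pixels with the original image.

    rgb: H x W x 3 float image in [0, 1]; mask: H x W boolean red-eye region.
    darken and alpha are illustrative values.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    neutral = 0.5 * (g + b)                       # replace red with a neutral tone
    corrected = rgb.copy()
    corrected[..., 0] = np.where(mask, neutral * darken, r)
    corrected[..., 1] = np.where(mask, g * darken, g)
    corrected[..., 2] = np.where(mask, b * darken, b)
    # Soft blend so the corrected pupil does not look pasted on.
    return alpha * corrected + (1.0 - alpha) * rgb

rgb = np.random.rand(16, 16, 3)
mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True
print(retouch_red_eye(rgb, mask).shape)
```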
KEYWORDS: Printing, RGB color model, Calibration, 3D acquisition, CMYK color model, 3D printing, Detection and tracking algorithms, Color reproduction, Visualization, Inkjet technology
We propose a novel color mapping method that generates smooth color transitions and can accommodate color
preferences. The method consists of two stages: rough calibration and black generation. The rough calibration process
generates a three-dimensional (3-D) look-up table (LUT) converting input RGB data to printable CMY values. When the 3-D
LUT is created, a new color-mapping intent, the target color, is used internally. The target color is predefined from a
reference color book based on the color preferences of testers involved in the target-definition phase. All of the input
data of the 3-D LUT are mapped to printable values for the printer based on the target color and then converted
to CMYK values. We evaluated the proposed algorithm against a commercial printer profiler and found that it produces
better printing quality.
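To show how such a 3-D LUT is typically applied, the sketch below performs trilinear interpolation in a small table; the placeholder RGB-to-CMY complement stands in for the rough calibration and target-color mapping that populate the table in the paper.

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Trilinear interpolation of an N x N x N x 3 LUT for rgb values in [0, 1]."""
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                                   # fractional position in the cell
    out = np.zeros(rgb.shape[:-1] + (3,))
    for cr, wr in ((lo[..., 0], 1 - f[..., 0]), (hi[..., 0], f[..., 0])):
        for cg, wg in ((lo[..., 1], 1 - f[..., 1]), (hi[..., 1], f[..., 1])):
            for cb, wb in ((lo[..., 2], 1 - f[..., 2]), (hi[..., 2], f[..., 2])):
                out += (wr * wg * wb)[..., None] * lut[cr, cg, cb]
    return out

# Placeholder LUT: a naive RGB -> CMY complement. The paper's rough calibration
# and target-color mapping would populate this table instead.
n = 9
grid = np.linspace(0.0, 1.0, n)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
lut = 1.0 - np.stack([r, g, b], axis=-1)

pixels = np.random.rand(5, 3)
print(apply_3d_lut(pixels, lut))
```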
Recently, the speed and resolution of electrophotographic printer engines have improved significantly. In today's
market, it is not difficult to find low- to mid-range electrophotographic printers with spatial resolution greater than 600
dpi and/or bit depth greater than 1 bit. Printing speed is determined by the processing time on the computer, the data
transmission time between the computer and the printer, and the processing and printing time on the printer. When
halftoning is performed on the computer side, the halftoned data are compressed and sent to the printer. In this case,
increases in spatial and bit-depth resolution increase the data size to be transmitted and the memory required at the
printer. For a high-speed printer, the increased transmission time may limit the throughput of the imaging chain. One
possible solution to this problem is to develop resolution enhancement techniques. In this paper, a fast and efficient
spatial resolution enhancement technique is proposed. Its objectives are to reduce the data size for transmission and to
minimize image quality deterioration. In the proposed technique, the number of black pixels in the halftoned data is
binary coded for data reduction. At the printer, a black-pixel placement algorithm is applied to the binary-coded data. For
non-edge areas, the screen order is used for black-pixel placement; in areas identified as edges, the locations of black
pixels are selected by an edge order designed by a genetic algorithm.
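A toy sketch of the count-and-place idea, assuming each low-resolution cell transmits only its black-pixel count and the printer refills the cell in a fixed screen order; the edge-order path designed by the genetic algorithm is omitted.

```python
import numpy as np

def encode_counts(halftone, cell=2):
    """Encode a binary halftone as per-cell black-pixel counts (data reduction)."""
    h, w = halftone.shape
    blocks = halftone.reshape(h // cell, cell, w // cell, cell)
    return blocks.sum(axis=(1, 3)).astype(np.uint8)   # 0..cell*cell per cell

def decode_counts(counts, screen_order):
    """Reconstruct each cell by filling pixels in a fixed screen order."""
    cell = screen_order.shape[0]
    h, w = counts.shape
    out = np.zeros((h * cell, w * cell), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            fill = screen_order < counts[y, x]        # first k positions turn black
            out[y * cell:(y + 1) * cell, x * cell:(x + 1) * cell] = fill
    return out

screen_order = np.array([[0, 2],
                         [3, 1]])                     # clustered-dot fill order
halftone = (np.random.rand(8, 8) > 0.5).astype(np.uint8)
counts = encode_counts(halftone)
print(decode_counts(counts, screen_order))
```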
An AM-FM hybrid screen is a clustered-dot halftone screen whose dot clusters are aperiodic and whose dot growth pattern
is irregular. Unlike an AM ordered screen, an AM-FM hybrid screen is free of inter-screen and subjective moiré. However,
it produces a brightness variation known as stochastic noise. In this paper, a new AM-FM hybrid screen design technique is
proposed. In the proposed method, the locations of dot cluster centers are determined first by a distance-measure-based
algorithm; they serve as initial patterns for constructing AM-FM hybrid screens. The number of dot cluster centers is chosen
based on the maximum screen frequency allowed for a given printer. Dot cluster centers should be distributed
homogeneously within and between color channels, and there should be no noticeable tiling artifacts. The dot orders of
the highlight area are decided during the construction of the initial patterns. For gray levels darker than that of the initial
patterns, dot clusters start to form. To expand dot clusters while maintaining green-noise characteristics with a
peak at the principal frequency of the cluster centers, an optimal dot growth filter is designed; it determines a set of
candidate pixel locations. To reduce brightness variation and low-frequency noise, a channel-dependent dot growth
algorithm is proposed: among the set of candidates, the pixel location that minimizes brightness
variation is selected.
I propose a halftone screen design method based on a human visual system model and the characteristics of the electro-photographic (EP) printer engine. Generally, screen design methods based on human visual models produce dispersed-dot type screens while design methods considering EP printer characteristics generate clustered-dot type screens. In this paper, I propose a cost function balancing the conflicting characteristics of the human visual system and the printer. By minimizing the obtained cost function, I design a model-based clustered-dot screen using a modified direct binary search algorithm. Experimental results demonstrate the superior quality of the model-based clustered-dot screen compared to a conventional clustered-dot screen.
KEYWORDS: Linear filtering, Photography, Modulation transfer functions, Image restoration, Scanners, Cameras, Image filtering, Digital imaging, Image processing, Point spread functions
We consider the problem of restoring a noisy blurred image using an adaptive unsharp mask filter. Starting with a set of very high quality images, we use models for both the blur and the noise to generate a set of degraded images. With these image pairs, we optimally train the strength parameter of the unsharp mask to smooth flat areas of the image and to sharpen areas with detail. We characterize the blur and the noise for a specific hybrid analog/digital imaging system in which the original image is captured on film with a low-cost analog camera. A silver-halide print is made from this negative; and this is scanned to obtain a digital image. Our experimental results for this imaging system demonstrate the superiority of our optimal unsharp mask compared to a conventional unsharp mask with fixed strength.
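A toy sketch of the training idea, assuming a Gaussian blur plus white noise degradation model and a grid search over a single global strength; the paper's film/print/scan characterization and activity-adaptive strength are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp(img, strength, sigma=1.5):
    """Classic unsharp mask: add back a scaled high-pass component."""
    return img + strength * (img - gaussian_filter(img, sigma))

def train_strength(originals, blur_sigma=1.2, noise_std=0.01,
                   candidates=np.linspace(0.0, 3.0, 31), seed=0):
    """Pick the strength that best restores degraded copies of the originals."""
    rng = np.random.default_rng(seed)
    degraded = [gaussian_filter(o, blur_sigma) + rng.normal(0, noise_std, o.shape)
                for o in originals]
    errors = [np.mean([(unsharp(d, s) - o) ** 2
                       for d, o in zip(degraded, originals)])
              for s in candidates]
    return candidates[int(np.argmin(errors))]

originals = [np.random.rand(64, 64) for _ in range(3)]
print(f"trained strength: {train_strength(originals):.2f}")
```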
A model for the human visual system (HVS) is an important component of many halftoning algorithms. Using the iterative direct binary search (DBS) algorithm, we compare the halftone texture quality provided by four different HVS models that have been reported in the literature. Choosing one HVS model as the best for DBS, we then develop an approximation to that model which significantly improves computational performance while minimally increasing the complexity of the code. By varying the parameters of this model, we find that it is possible to tune it to the gray level being rendered and thus yield superior halftone quality across the tone scale. We then develop a dual-metric DBS algorithm that effectively provides a tone-dependent HVS model without a large increase in computational complexity.
We have developed a highly responsive probing system for inspecting the electrical properties of assembled PCBs. However, since the impact between a probe and a solder joint on the PCB is very short, it is difficult to control the harmful peak impact force and the slip motion of the probe to a sufficient level using only high-gain force feedback control. To overcome these disadvantages of the prototype, some information about the solder joint must be obtained in advance, before contact. In addition, to guarantee the reliability of the probing task, the probing system is required to measure several points around the probable target point at high speed. To meet these requirements, we propose a new non-contact sensor capable of simultaneously detecting, in real time, the positions and normal vectors of multiple points around the probable target point. Using this information, we can prepare a control strategy for stable contact motion on impact. In this paper, we describe the measuring principle, design, and development of the sensor. The effectiveness of the proposed sensor is verified through a series of experiments.