An original algorithm is presented that matches the segments of a pair of stereoscopic images and then improves the edge segmentation of the images to ease the interpretation process. Rather than considering each segment independently during matching, the method groups and matches the segments simultaneously. First, the linked segments are grouped and matched. The same process is then repeated while considering increasingly distant segments. At each step, criteria such as the segment connection type and the relative position of the segments are used to select the correct matches among all the matching hypotheses. The segmentation improvement takes into account the geometric and topological properties of objects: the computed sets of segments are considered projections of the faces of these objects. The segmentation improvement method relies on the comparison of the matched sets obtained previously.
Conventional polygonal approximation techniques have a fixed parameter that makes it difficult to extract the minimum number of critical points that faithfully represent complex contours with varying levels of detail. We propose multistep polygonal approximation algorithms that integrate line segments detected at different scales or resolutions by means of scale or resolution partitioning of contour patterns. Via computer simulation, we show that the proposed multistep methods approximate contours better than conventional methods with a fixed parameter.
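As an illustration of the multiscale idea (not the authors' algorithm), one can run a conventional fixed-parameter split method such as Ramer-Douglas-Peucker at several tolerances and merge the critical points found at each scale; the contour and tolerance values below are arbitrary toy data:

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / norm

def rdp(points, eps):
    """Conventional fixed-parameter split approximation (Ramer-Douglas-Peucker)."""
    if len(points) < 3:
        return list(points)
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= eps:
        return [points[0], points[-1]]
    return rdp(points[:i + 1], eps)[:-1] + rdp(points[i:], eps)

def multiscale_critical_points(points, tolerances):
    """Merge the critical points detected at each scale, keeping contour order."""
    keep = set()
    for eps in tolerances:
        keep.update(rdp(points, eps))
    return [p for p in points if p in keep]

contour = [(0, 0), (1, 0.1), (2, 0), (3, 2), (4, 0), (5, 0.05), (6, 0)]
# keeps the large spike while discarding sub-tolerance jitter
print(multiscale_critical_points(contour, [1.0, 0.2]))
```

A coarse tolerance captures the gross shape, a fine one recovers small details; the union approximates what a single well-chosen parameter cannot do for contours mixing both.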
We present a new technique for blind restoration of images degraded by a smooth, spatially invariant, zero-phase blur function. The restored image is obtained by using the information preserved in the phase of the blurred image to form an initial estimate. Successive estimates are produced by iteratively refining the initial estimate. For many images, only a few iterations are required to produce good-quality deconvolved images. We provide several examples demonstrating the effectiveness of the new blind deconvolution approach and discuss the relative merits of the method.
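The property the method exploits can be checked numerically: a symmetric, zero-phase blur has a real transfer function, so wherever that transfer function is positive, the Fourier phase of the blurred image equals that of the original, which is what makes the blurred phase usable as an initial estimate. A minimal 1-D sketch of this check (not the authors' iterative refinement; all signals are toy data):

```python
import numpy as np

n = 64
x = np.zeros(n)
x[10], x[30] = 1.0, 0.5                  # toy 1-D "image"

k = np.zeros(n)
k[0], k[1], k[-1] = 0.5, 0.25, 0.25      # symmetric kernel -> zero-phase blur
H = np.fft.fft(k)                        # transfer function 0.5 + 0.5*cos(w), real >= 0

b = np.fft.ifft(np.fft.fft(x) * H).real  # circularly blurred signal

good = H.real > 1e-6                     # bands the blur did not annihilate
phase_b = np.angle(np.fft.fft(b))
phase_x = np.angle(np.fft.fft(x))
# on those bands the blurred phase matches the original phase, so it can
# seed the initial estimate of the restoration
```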
We describe an algorithm for separating joined handwritten digits at locations where the external contour curvature is high. An iterative classification technique that reduces the error rate is outlined. With a rejection rate below 2%, the substitution rate of separated digits is below 11%.
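A hedged sketch of the curvature test, where the specific estimate and threshold are assumptions rather than the paper's: discrete contour curvature can be approximated by the turning angle at each point, and points where it is high become candidate split locations.

```python
import math

def turning_angles(contour):
    """|turning angle| at each interior point of a polyline contour."""
    angles = []
    for i in range(1, len(contour) - 1):
        (x0, y0), (x1, y1), (x2, y2) = contour[i - 1], contour[i], contour[i + 1]
        d = math.atan2(y2 - y1, x2 - x1) - math.atan2(y1 - y0, x1 - x0)
        d = (d + math.pi) % (2 * math.pi) - math.pi   # wrap to (-pi, pi]
        angles.append(abs(d))
    return angles

def split_candidates(contour, threshold=1.0):
    """Interior indices whose turning angle exceeds the (assumed) threshold."""
    return [i + 1 for i, a in enumerate(turning_angles(contour)) if a > threshold]
```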
The history of halftoning technology is presented as revealed in the U.S. patent literature, with emphasis on clustered-dot halftoning. With the goal of gaining insight into current trends in halftoning, I have partitioned the history into four eras, and have categorized halftoning patents by their "scores" on a checklist of five factors that can influence the decision to turn on a microdot to construct a halftone dot. I have learned these lessons: First, both of the two major halftoning technologies have their origins in early work. The 19th-century work on classical screening underlying today's electronic clustered-dot halftoning is perhaps better known than the work at RCA in the 1920s underlying error diffusion. Second, digital electronic halftoning by thresholding, currently the dominant technique for clustered-dot halftoning, suffers in some respects by comparison with two of its antecedents. To overcome these problems, the initially elegant machinery of halftoning by thresholding has been encumbered with inelegant modifications. Third, these problems also create an incentive for fundamental innovation. Two prominent approaches are to extend the list of factors that influence the decision to turn on a microdot, or to use irregular dot locations (as is already done in dispersed-dot techniques).
The purpose of this study is to obtain quantitative measures of the applicability of several color mixing models to a halftone printer. The printer, a Canon Color Laser Copier 500 (CLC-500), is treated as a black box, and the measures are the difference between the calculated and measured spectra and ΔEab. Well-known color mixing theories, namely the Neugebauer equations, the Yule-Nielsen model (YN), Clapper-Yule multiple internal reflections (CY), the Beer-Bouguer law (BB), and the Kubelka-Munk theories (KM), are applied to the CLC-500 to see how well they can fit the experimental data. Results indicate that the spectral 8-color Neugebauer model has marginal success in fitting the experimental data and that the relaxed 3-color version does not fit the data well. Both the YN and CY approaches can fit the data within printer variability. The fits are rather poor for BB and KM. By using the halftone correction factor, good agreement is obtained for the BB and single-constant KM models.
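For reference, the plain spectral Neugebauer prediction with the Yule-Nielsen n-factor can be sketched as follows; the primaries' "spectra," coverages, and n value are made-up illustration data, not CLC-500 measurements:

```python
from itertools import product

def demichel_weights(c, m, y):
    """Fractional area of the 8 Neugebauer primaries (Demichel equations)."""
    w = {}
    for bits in product((0, 1), repeat=3):
        p = 1.0
        for on, a in zip(bits, (c, m, y)):
            p *= a if on else (1.0 - a)
        w[bits] = p
    return w

def yule_nielsen(primaries, c, m, y, n=1.0):
    """Predicted spectral reflectance; n = 1 reduces to the plain Neugebauer model."""
    w = demichel_weights(c, m, y)
    nbands = len(next(iter(primaries.values())))
    return [sum(w[k] * primaries[k][band] ** (1.0 / n) for k in w) ** n
            for band in range(nbands)]

# toy 3-band "spectra": paper reflects 0.9, each overprinted ink darkens by 0.25
prims = {bits: [0.9 - 0.25 * sum(bits)] * 3 for bits in product((0, 1), repeat=3)}
R = yule_nielsen(prims, c=0.5, m=0.2, y=0.0, n=1.7)
```

The n-factor empirically accounts for optical dot gain (light scattering in the paper); fitting n per device is what lets YN-type models track measured spectra more closely than the relaxed Neugebauer form.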
We present a modified halftoning algorithm. The basic idea is to apply error diffusion not on a fixed raster, but on a raster adapted to the properties of the original gray-tone image, e.g., the local intensity. The choice of a coarser raster in the high- and low-level input regions leads to the suppression of wormlike textures and to a decrease in computational effort. The advantages and problems of this approach are discussed and examples are shown.
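A minimal 1-D sketch of the raster-adaptation idea (the thresholds, block size, and 1-D simplification are assumptions, not the paper's parameters): error is diffused as usual, but in very dark or very light regions blocks of pixels are quantized together, i.e., on a coarser raster.

```python
def adaptive_error_diffusion(row, lo=0.15, hi=0.85, block=2):
    """1-D error diffusion; pixel blocks are quantized together in extreme regions."""
    out, err, i = [], 0.0, 0
    while i < len(row):
        # coarsen the raster where the input is near black or near white
        step = block if (row[i] < lo or row[i] > hi) else 1
        chunk = row[i:i + step]
        v = sum(chunk) / len(chunk) + err
        q = 1.0 if v >= 0.5 else 0.0
        err = v - q                     # carry the full quantization error forward
        out.extend([q] * len(chunk))
        i += len(chunk)
    return out
```

In the extreme regions isolated dots are then forced apart by at least the block size, which is the mechanism that breaks up the wormlike error diffusion textures.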
Undesired moiré patterns may appear in color printing for various reasons. One of the most important is interference between the superposed halftone screens of the different primary colors, due to an improper alignment of their frequencies or orientations. We explain the superposition moiré phenomenon using a spectral model based on Fourier analysis. After examining the basic case of cosinusoidal grating superpositions we advance, step by step, through the cases of binary gratings, square grids, and dot screens, and discuss the implications for moirés between halftone screens in color separation. Then, based on these results, we approach the moiré phenomenon from a different angle, the dynamic point of view: We introduce the moiré parameter space and show how changes in the parameters of the superposed layers vary the moiré patterns in the superposition. This leads us to an algorithm for moiré minimization that provides stable moiré-free screen combinations for color separation.
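The lowest-order superposition moiré described by the spectral model can be computed directly: its frequency vector is the difference of the two screens' frequency vectors, so two equal-frequency screens at angular separation θ produce a moiré of frequency 2 f sin(θ/2). A small sketch (screen values are illustrative, not from the text):

```python
import math

def freq_vector(freq, angle_deg):
    """Frequency vector of a screen with the given frequency and orientation."""
    a = math.radians(angle_deg)
    return (freq * math.cos(a), freq * math.sin(a))

def moire_frequency(f1, a1, f2, a2):
    """Magnitude of the (1,-1) moiré frequency vector, i.e. |f1 - f2|."""
    u1, v1 = freq_vector(f1, a1)
    u2, v2 = freq_vector(f2, a2)
    return math.hypot(u1 - u2, v1 - v2)

# two 150 lpi screens: 30 degrees apart vs. only 1 degree apart
safe = moire_frequency(150, 0, 150, 30)   # high-frequency moiré, barely visible
bad = moire_frequency(150, 0, 150, 1)     # low-frequency, highly visible moiré
```

This is why misaligned orientations are so damaging: as the angle difference shrinks, the difference vector collapses toward the origin of the spectrum, where the eye is most sensitive.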
Advances in the technology of high-performance polygonal scanners, both for laser beam typesetting and for the projection of computer-generated images, meet the requirements for laser beam projection of high-definition television (HDTV) onto large screens, with screen widths on the order of 30 m (approximately 100 ft). We illustrate the interrelationship between the scanned-image quality and resolution for laser-beam-projected TV and HDTV and the scanner design and manufacturing tolerances, both spatial and temporal. Guidelines are provided for system designers to calculate and trade off the specification tolerances for a polygonal scanning subsystem.