A method of selecting optimal color filters to perform accurate multi-illuminant color correction is reviewed. The transmittances for a set of filters obtained by the method were provided to a color filter manufacturer. The manufacturer used a dichroic filter modeling program to produce transmittances that satisfied the physical constraints of the manufacturing process and approximated the optimal filter transmittances. The ideal and manufacturable filters are compared through computer simulation and their accuracy assessed using the CIE L*a*b* ΔE measure. The results show that the unconventional shapes of the optimal filters can be well approximated by actual filters with only slight degradation in performance.
We describe a method of enhancing color images by applying histogram equalization to the saturation component in the color difference (C-Y) color space. When histogram equalization is applied to the saturation component of a 24-bit image, the transform often leads to red, green, and blue components that exceed the realizable RGB intensities. The histogram equalization algorithm presented reduces this problem by taking into account the relationship that exists between luminance and saturation and how the luminance value limits the range of possible saturations. This method also retains a more uniform distribution of color saturation once the components are transformed back into the RGB space. This is important for images that contain high-luminance, low-saturation features.
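A minimal sketch of the idea, assuming a float RGB image in [0, 1]. The C-Y saturation magnitude, the 256-bin CDF mapping, and the final clip (a crude stand-in for the luminance-aware saturation bound the abstract describes) are illustrative simplifications, not the authors' exact procedure:

```python
import numpy as np

def equalize_saturation(rgb):
    """Histogram-equalize saturation in a color-difference (C-Y) space.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Sketch only: the luminance-dependent saturation limit is
    approximated here by a final clip into the RGB cube.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b           # luminance
    dr, db = r - y, b - y                            # color-difference axes
    sat = np.sqrt(dr ** 2 + db ** 2)                 # saturation magnitude

    # 256-bin histogram equalization of the saturation values.
    hist, edges = np.histogram(sat, bins=256, range=(0.0, sat.max() + 1e-12))
    cdf = hist.cumsum() / hist.sum()
    sat_eq = np.interp(sat, edges[:-1], cdf) * sat.max()

    # Rescale each pixel's color-difference vector to the equalized
    # saturation, then rebuild G from the luminance definition.
    scale = np.where(sat > 1e-6, sat_eq / np.maximum(sat, 1e-6), 1.0)
    out = np.empty_like(rgb)
    out[..., 0] = y + dr * scale
    out[..., 2] = y + db * scale
    out[..., 1] = (y - 0.299 * out[..., 0] - 0.114 * out[..., 2]) / 0.587
    return np.clip(out, 0.0, 1.0)
```

The clip at the end is exactly the step the paper improves upon: a luminance-aware limit on `sat_eq` would avoid pushing pixels outside the RGB cube in the first place.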
A fractal-based method for color image compression is presented. The method transforms the direct color components into three approximate principal components and applies a fractal-based compression method developed for gray-scale images to each new component. The main principal component, which contains a large amount of energy, is coded with high accuracy, while the other two components can be coded at lower accuracy and a very high compression ratio. The principal-component-based method gives an overall higher quality in the reconstructed image at a similar compression rate compared with compression based on other linear transforms of the color space.
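The decorrelation step can be sketched as an eigen-decomposition of the 3×3 channel covariance matrix; the fractal coder applied to each component afterward is not shown, and the function names are illustrative:

```python
import numpy as np

def principal_color_components(rgb):
    """Transform RGB channels into approximate principal components.

    rgb: float array of shape (H, W, 3). Returns the component image
    (pc[..., 0] carries most of the energy), the basis, and the mean.
    """
    h, w, _ = rgb.shape
    x = rgb.reshape(-1, 3)
    mean = x.mean(axis=0)
    xc = x - mean
    cov = xc.T @ xc / xc.shape[0]                # 3x3 channel covariance
    eigval, eigvec = np.linalg.eigh(cov)         # ascending eigenvalues
    basis = eigvec[:, eigval.argsort()[::-1]]    # principal component first
    return (xc @ basis).reshape(h, w, 3), basis, mean

def reconstruct(pc, basis, mean):
    """Invert the transform after each component has been (de)coded."""
    h, w, _ = pc.shape
    return (pc.reshape(-1, 3) @ basis.T + mean).reshape(h, w, 3)
```

Because most of the energy concentrates in the first component, the gray-scale fractal coder can spend its bit budget there and code the other two components much more coarsely.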
An overview of the wavelet scalar quantization (WSQ) and Joint Photographic Experts Group (JPEG) image compression algorithms is given. Results of application of both algorithms to a database of 60 fingerprint images are then discussed. Signal-to-noise ratio (SNR) results for WSQ, JPEG with quantization matrix (QM) optimization, and JPEG with standard QM scaling are given at several average bit rates. In all cases, optimized-QM JPEG is equal or superior to WSQ in SNR performance. At 0.48 bit/pixel, which is in the operating range proposed by the Federal Bureau of Investigation (FBI), WSQ and QM-optimized JPEG exhibit nearly identical SNR performance. In addition, neither was subjectively preferred on average by human viewers in a forced-choice image-quality experiment. Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of the existing international-standard JPEG, it appears possible that this superiority was due more to lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm. Furthermore, substantial worldwide support for JPEG has developed due to its status as an international standard, and WSQ is significantly slower than JPEG in software implementation. Still, it is possible that WSQ enhanced with an optimal quantizer-design algorithm could outperform JPEG. This is a topic for future research.
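For reference, an SNR figure of this kind is a power ratio in decibels between the original image and the coding error. The study's exact definition is not reproduced here; the following is the standard form:

```python
import numpy as np

def snr_db(original, reconstructed):
    """Signal-to-noise ratio in dB between an original image and its
    compressed/decompressed reconstruction (standard power-ratio form;
    the exact normalization used in the study may differ)."""
    original = original.astype(float)
    noise = original - reconstructed.astype(float)
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))
```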
Among the various digital halftoning methods, carrier procedures have the advantage of being fast and requiring few computational resources. However, because they are pixel-oriented algorithms, they offer less flexibility than more complex algorithms that involve the information from a neighborhood or the entire image in the quantization of each pixel. By introducing noninteger ratios between the carrier and raster period, the carrier procedure can be adapted to the spectral characteristics of the visual system. The spectral noise distribution can be optimized in this regard for two-dimensional, periodic carriers with arbitrary shape.
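The pixel-oriented nature of a carrier procedure can be sketched as thresholding each pixel independently against a periodic carrier surface. The cosine carrier shape, the period of 4.6 pixels (noninteger relative to the raster, as the abstract describes), and the screen angle are all arbitrary illustrative choices:

```python
import numpy as np

def carrier_halftone(gray, period=4.6, angle=np.pi / 7):
    """Carrier halftoning sketch: quantize each pixel against a
    two-dimensional periodic carrier.

    gray: float array in [0, 1]. A noninteger `period` gives a
    noninteger ratio between carrier and raster period, which shifts
    the quantization noise spectrum.
    """
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Rotated periodic carrier used as a per-pixel threshold surface.
    u = xx * np.cos(angle) + yy * np.sin(angle)
    v = -xx * np.sin(angle) + yy * np.cos(angle)
    carrier = 0.5 + 0.25 * (np.cos(2 * np.pi * u / period)
                            + np.cos(2 * np.pi * v / period))
    # Each pixel is quantized independently: fast, no neighborhood state.
    return (gray > carrier).astype(np.uint8)
```

Note that each output pixel depends only on its own input value and position, which is why carrier procedures are fast but less flexible than neighborhood-based algorithms such as error diffusion.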
In a raster scanning printer, a laser beam is scanned across a photoreceptor in a direction perpendicular to the photoreceptor motion. When there is vibratory motion of the photoreceptor or wobble in the polygon mirror, the raster lines on the photoreceptor will not be evenly spaced. We analyze the positioning error and show that the fractional raster spacing error is equal to the fractional photoreceptor velocity error. These raster position errors can result in various print defects, of which halftone banding is the dominant one. The dependence of halftone banding on system parameters is examined using a first-order geometry-based printing model, an exposure model, and a more sophisticated laser imaging model coupled with a xerography model. The system model is used to calculate print reflectance modulation due to vibrations in both charged-area and discharged-area development modes using insulative or conductive development. System parameters examined are halftone frequency, raster frequency, average reflectance, vibration frequency, and multiple-beam interlace spacing.
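The first-order result quoted above can be checked directly: the spacing between successive scan lines is velocity times line time, so the line time cancels out of the fractional error. A minimal numerical illustration, with an assumed 1% sinusoidal velocity vibration:

```python
import numpy as np

def fractional_spacing_error(velocity, nominal_velocity):
    """First-order model: raster spacing = velocity * line_time, so the
    fractional spacing error equals the fractional velocity error,
    independent of the line time."""
    line_time = 1.0  # arbitrary; cancels in the ratio below
    spacing = velocity * line_time
    nominal_spacing = nominal_velocity * line_time
    return (spacing - nominal_spacing) / nominal_spacing

# Photoreceptor velocity with a 1% sinusoidal vibration (illustrative).
t = np.linspace(0.0, 1.0, 200)
v = 1.0 + 0.01 * np.sin(2 * np.pi * 5 * t)
err = fractional_spacing_error(v, 1.0)
```

The resulting `err` is exactly the 1% velocity perturbation, confirming the stated equality at first order.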
We propose an algorithm for the computation of a region-based measure of image edge profile (IEP) acutance based on gray-level variations across the boundary of an object. A procedure to calculate the acutance based on region growing and a root-mean-squared gradient measure across region boundaries has been designed and implemented. After testing the algorithm on various images, it is shown that this measure of acutance can accurately reflect changes in the appearance of objects due to blurring and sharpening operations. Using this technique, it should be possible to quantify the level of enhancement in a digital image by calculating the acutance before and after the enhancement operation. The measure should also be useful in comparing specific features or regions of interest in images produced by different imaging systems.
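The core of the measure can be sketched as an RMS gradient sampled only at region-boundary pixels. The region is assumed already segmented (e.g. by region growing); the boundary definition and normalization below are illustrative choices, not the authors' exact formulation:

```python
import numpy as np

def boundary_acutance(image, mask):
    """RMS gray-level gradient across a region boundary.

    image: 2-D gray-level array; mask: boolean array marking the
    segmented object region. Sketch of the IEP-acutance idea only.
    """
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    # Boundary pixels: mask pixels with at least one non-mask 4-neighbor.
    pad = np.pad(mask, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1]
                & pad[1:-1, :-2] & pad[1:-1, 2:])
    boundary = mask & ~interior
    return np.sqrt(np.mean(grad_mag[boundary] ** 2))
```

Blurring an edge lowers the gradient magnitude at the boundary and hence the acutance, while sharpening raises it, which is the behavior the abstract reports.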
In recent years several algorithms have been reported for automating fringe data collection in photomechanics using the technique of digital image processing (DIP). Recent advances in phase shifting interferometry have offered some hope for full automation of static problems. However, for real-time dynamic studies, conventional recording of fringes remains necessary. Fringe thinning is a crucial step in extracting data for further processing. The various DIP algorithms for fringe thinning are surveyed and an attempt is made to better explain the mechanism of fringe skeleton extraction by the various algorithms. The algorithm of Ramesh and Pramod is improved to extract fringe skeletons from saddle points in the fringe field. A comparative performance evaluation of these algorithms is discussed with respect to the quality and accuracy of the fringe skeleton extracted and the processing time. Performance evaluation is done on a few computer-generated test images and also on images recorded by the technique of photoelasticity. The improved version of the algorithm of Ramesh and Pramod is found to give better fringe skeletons; it is also the fastest, with a processing time an order of magnitude less than that of the other algorithms. It is proposed that these computer-generated test images could be used as standard test images for checking the performance of any new fringe thinning algorithm.
In the coming years, profound changes are expected in computer and communication technologies that will challenge the medical imaging systems (MIS) industry to develop advanced, high-performance telemedicine applications. The medical industry, vendors, and specialists need to agree on a universal MIS structure that will provide a stack of functions, protocols, and interfaces suitable for coordination and management of high-level image consults, reports, and review activities. Doctors and engineers have worked together to determine the types, targets, and range of such activities within a medical group working domain and to posit their impact on MIS structure. As a result, the fundamental MIS functions have been posed and organized in the form of a general MIS architecture, denoted ELPIDA. The structure of this architecture was kept as simple as possible to allow its extension to diverse multimode operational schemes handling medical and conversational audiovisual information of different classes. The fundamentals of ELPIDA and pulmonary image diagnostic aspects have been employed for the development of a prototype MIS.