The Association for Information and Image Management's (AIIM) electronic image management standards are described, and a new AIIM effort, a database of standards projects covering a broad framework that includes image capture, recording, processing, duplication, distribution, display, evaluation, preservation, and media, is presented.
A mechanism is presented to achieve adaptive scene contrast enhancement, a common problem in TV and IR imager applications, by automatically controlling camera gain and pedestal. The goal of adaptivity, precisely stated, is "content windowing," in which image signals are selectively extracted and contrast enhanced, with respect to both dynamic-range compression and expansion. We adopt an image-analysis strategy,
distinct from classical electronic methods (e.g., automatic gain control circuitry), in which the overall behavior of the frame's pixels (e.g., the image histogram) is optimized for feedback control of camera gain and pedestal in a live video process. The video formation process is modeled linearly so that we can derive an automatic control law that meets the proposed image-quality criterion, which is simple and flexible enough for practical use in a variety of applications. Experiments show that our method adapts well in dynamic environments and can easily be implemented in hardware.
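A rough sketch of one feedback step of such a histogram-driven control loop follows. The linear model pixel = gain × scene + pedestal comes from the abstract; the percentile targets, smoothing factor, and function name are illustrative assumptions, not the authors' design.

```python
import numpy as np

def update_gain_pedestal(frame, gain, pedestal,
                         lo_target=10.0, hi_target=245.0, alpha=0.3):
    """One feedback step of histogram-based gain/pedestal control.
    Assumed linear video model: pixel = gain * scene + pedestal.
    The frame's 5th/95th percentiles are steered toward the target
    display range; alpha low-passes the update for loop stability."""
    p5, p95 = np.percentile(frame, [5, 95])
    spread = max(p95 - p5, 1e-6)
    # gain that would stretch the observed spread onto the target range
    desired_gain = gain * (hi_target - lo_target) / spread
    # pedestal that maps the scene value behind p5 to lo_target
    desired_ped = lo_target - desired_gain * (p5 - pedestal) / max(gain, 1e-6)
    gain += alpha * (desired_gain - gain)
    pedestal += alpha * (desired_ped - pedestal)
    return gain, pedestal
```

In a live video process this step would run once per frame, so the camera tracks slow scene changes without oscillating.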
The present 18-mm active-diameter proximity-focused microchannel plate (MCP) image tube design has been modified to produce significantly higher limiting spatial resolution. A glass input window of the "bull's-eye" design with a blackened glass border, reduced cathode-to-MCP spacing, reduced channel center-to-center distance, reduced MCP-to-phosphor-screen spacing, a brushed P20 phosphor screen, and a fiber optic output window were used to achieve a limiting resolution in excess of 50 lp/mm.
Test results, showing limiting resolution versus applied potentials, are correlated with a simple physical model of performance. The low-light-level white-light sine-wave modulation transfer function,
T(f), has been measured to be T(f) = exp[-(f/21.5)^1.46], where f is the spatial frequency in cycles per millimeter.
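The fitted MTF model above is a one-line function of spatial frequency; the following minimal sketch evaluates it (constants taken directly from the abstract):

```python
import math

def mtf(f, f0=21.5, p=1.46):
    """Sine-wave MTF model T(f) = exp[-(f/21.5)^1.46] from the
    abstract; f in cycles per millimeter."""
    return math.exp(-((f / f0) ** p))
```

At f = 0 the response is unity and it decays monotonically, dropping to 1/e at f = 21.5 cycles/mm.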
An algorithm is presented for rendering volumetric data sets. The aim of the algorithm is to maximize image variance in a volumetric rendering in which a three-dimensional data set is projected onto a view plane through a perspective mapping. The pixel values in the rendered image are associated with a variable-size attribute vector extracted along a line in the volumetric data set. Several algorithms are presented for transforming this variable-size attribute vector into a fixed-size attribute vector. The fixed-size attribute vectors provide a multispectral image representation, which is processed with the Karhunen-Loève transformation in order to separate the information content into orthogonal components that are ordered according to the associated eigenvalues. The components in the Karhunen-Loève transform can be displayed individually
as intensity images or three components can be selected and mapped into a coloring scheme such as the hue-saturation-value color model.
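The Karhunen-Loève step described above is the classical eigendecomposition of the attribute covariance matrix; a minimal sketch (the function name and array layout are assumptions):

```python
import numpy as np

def kl_components(vectors):
    """Karhunen-Loeve transform of fixed-size attribute vectors.
    vectors: (n_pixels, n_attributes) array.
    Returns (components, eigenvalues) with components ordered by
    descending eigenvalue, i.e., descending variance."""
    X = vectors - vectors.mean(axis=0)        # center the data
    cov = np.cov(X, rowvar=False)             # attribute covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]         # reorder descending
    return X @ eigvecs[:, order], eigvals[order]
```

The first three columns of the result could then be mapped into a hue-saturation-value triple for color display, as the abstract suggests.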
In black-and-white printing the page image can be represented within a computer as an array of binary values indicating whether or not pixels should be inked. The Boolean operators of AND, OR, and EXCLUSIVE-OR are often used when adding new objects to the image array. For color printing the page may be
represented as an array of "continuous-tone" color values, and the generalization of these logic functions to gray-scale or full-color images is, in general, not defined or understood. When incrementally
composing a page image, new colors can replace old in an image buffer, or new colors and old can be combined according to some mixing function to form a composite color, which is stored. This paper examines the properties of the Boolean operations and suggests
full-color functions that preserve the desired properties. These functions can be used to combine colored images in ways that preserve information about object shapes when the shapes overlap.
The relationships between the proposed functions and physical models of color mixing are also discussed.
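The abstract does not specify the proposed functions; one commonly used continuous generalization, shown here purely as an illustration, applies per-channel min, max, and absolute difference to colors in [0, 1], which reduce to Boolean AND, OR, and EXCLUSIVE-OR on the values {0, 1}:

```python
def c_and(a, b):
    """Gray-scale/full-color analog of AND: per-channel minimum."""
    return tuple(min(x, y) for x, y in zip(a, b))

def c_or(a, b):
    """Analog of OR: per-channel maximum."""
    return tuple(max(x, y) for x, y in zip(a, b))

def c_xor(a, b):
    """Analog of EXCLUSIVE-OR: per-channel absolute difference."""
    return tuple(abs(x - y) for x, y in zip(a, b))
```

For example, c_xor leaves a visible seam wherever two overlapping shapes differ, which is one way shape information can survive compositing.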
Displaying natural images on an 8-bit computer monitor
requires a substantial reduction of the number of physically distinct colors. Simple minimum mean squared error quantization with 8 levels of red and green and 4 levels of blue yields poor image quality. A powerful
means to improve the subjective quality of a quantized image is error diffusion. Error diffusion works by shaping the spectrum of the display error. Considering an image in raster ordering, this is done
by adding a weighted sum of previous quantization errors to the current pixel before quantization. These weights form an error diffusion filter. We propose a method to find visually optimized error
diffusion filters for monochrome and color image display applications. The design is based on the low-pass characteristic of the contrast sensitivity of the human visual system. The filter is chosen so that a cascade of the quantization system and the observer's visual modulation transfer function yields a whitened error spectrum. The resulting images contain mostly high-frequency components of the display error, which are less noticeable to the viewer. This corresponds well with previously published results about the visibility of halftoning patterns. An informal comparison with other error diffusion algorithms shows less artificial contouring and increased image quality.
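To make the mechanism concrete, here is a minimal sketch of error diffusion in raster order using the classical Floyd-Steinberg weights (7/16, 3/16, 5/16, 1/16); the paper's contribution is to replace these fixed weights with visually optimized ones, which this sketch does not attempt:

```python
import numpy as np

def error_diffuse(img, levels=4):
    """Raster-order error diffusion with Floyd-Steinberg weights.
    img: 2-D float array in [0, 1]; returns the quantized image."""
    out = img.astype(float).copy()
    h, w = out.shape
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            # quantize the pixel plus previously diffused errors
            q = round(out[y, x] / step) * step
            q = min(1.0, max(0.0, q))
            err = out[y, x] - q
            out[y, x] = q
            # push the quantization error onto future pixels
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

For color display the same loop would run per channel, with the filter weights chosen per the visual-model criterion described above.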
The design, hardware implementation, and simulation of
a shift-invariant pattern recognizer based on a modified higher order neural network (MHONN) are presented. When the MHONN is integrated with centroid calculation and logarithmic spiral mapping subsystems,
translation, rotation around the optical axis, and scaling invariant pattern recognition can be achieved. The design objective is to deal with large-scale images with possible pattern deformation, noise, and highly textured backgrounds. Images are acquired with a 256 x 256 infrared sensor. We describe the theory of the MHONN, its hardware implementation, and simulation results.
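The spiral-mapping subsystem mentioned above relies on a standard property of log-polar resampling: about the centroid, rotation becomes a circular shift along the angle axis and scaling a shift along the log-radius axis. A nearest-neighbour sketch (grid sizes and function name are assumptions, not the authors' implementation):

```python
import math

def log_spiral_sample(img, cx, cy, n_r=32, n_theta=32, r_min=1.0):
    """Resample img (list of rows) on a log-polar grid about the
    centroid (cx, cy). Rotation about the optical axis becomes a
    circular shift in theta; scaling becomes a shift in log-r."""
    h, w = len(img), len(img[0])
    r_max = min(cx, cy, w - 1 - cx, h - 1 - cy)
    out = [[0] * n_theta for _ in range(n_r)]
    for i in range(n_r):
        # radii spaced exponentially between r_min and r_max
        r = r_min * (r_max / r_min) ** (i / (n_r - 1))
        for j in range(n_theta):
            t = 2 * math.pi * j / n_theta
            x = int(round(cx + r * math.cos(t)))
            y = int(round(cy + r * math.sin(t)))
            if 0 <= x < w and 0 <= y < h:
                out[i][j] = img[y][x]
    return out
```

Centering the grid on the computed centroid removes translation, so the downstream network only has to cope with shifts of this map.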
A predictive vector quantization scheme exploiting the
intervector correlations of adjacent blocks (vectors) of pixels is developed. The model presented utilizes the statistical dependencies of the previously encoded pairs of adjacent blocks to predict future
blocks of picture elements. The state of the vector predictor is represented by a subcodebook composed of a finite number of code vectors. These patterns constitute the most probable candidates for encoding purposes. The entries of the subcodebook are replenished at each state by exploiting interblock dependencies. To further increase the performance of the quantizer, the difference between the predicted pixels and the original image samples is vector quantized in a second stage. Excellent subjective performance and SNRs were achieved for monochrome still images, at bit rates lower than those of memoryless vector quantization schemes.
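The second-stage residual quantization described above can be sketched in a few lines; the codebook here is a toy stand-in and the function names are assumptions:

```python
import numpy as np

def vq_encode(vector, codebook):
    """Nearest-codeword search: index of the code vector with the
    smallest squared error to the input vector."""
    d = ((codebook - vector) ** 2).sum(axis=1)
    return int(np.argmin(d))

def predictive_vq_step(block, prediction, residual_codebook):
    """Quantize the difference between the predicted block and the
    original samples, then reconstruct as prediction + residual."""
    idx = vq_encode(block - prediction, residual_codebook)
    return idx, prediction + residual_codebook[idx]
```

Only the residual index (and the predictor state) needs transmitting, which is where the bit-rate saving over memoryless VQ comes from.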
The Joint Photographic Experts Group (JPEG) baseline
system, which is scheduled to be standardized in 1992, is applied to character images, and the characteristics of the application are investigated. The JPEG system is suited to continuous-tone images; however, continuous-tone images are usually accompanied by characters. The image quality of characters is investigated for various magnitudes of quantization tables, and the deterioration mechanisms are discussed. A method of image quality improvement that is accomplished by density transformation after decoding is proposed, and its effects are confirmed.
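One simple form such a post-decoding density transformation could take is a piecewise tone curve that pulls near-black and near-white values back to the extremes, suppressing ringing around character strokes. The thresholds below are illustrative assumptions, not the paper's parameters:

```python
def density_transform(pixels, lo=64, hi=192):
    """Illustrative post-decoding density transformation for 8-bit
    gray values: clip values <= lo to black and >= hi to white,
    linearly stretching the midtones in between."""
    out = []
    for p in pixels:
        if p <= lo:
            out.append(0)
        elif p >= hi:
            out.append(255)
        else:
            out.append(round((p - lo) * 255 / (hi - lo)))
    return out
```

Applied to a decoded character region, this restores the bimodal black/white distribution that JPEG's quantization smears out.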
Progressive transmission of images based on the lapped
orthogonal transform (LOT), adaptive classification, and human visual sensitivity (HVS) weighting is proposed. HVS weighting for LOT basis functions is developed. This technique is quite general and can be applied to any orthogonal transform. The method is compared with discrete cosine transform (DCT)-based progressive image transmission (PIT). It is shown that LOT-based PIT yields subjectively improved images compared to those based on DCT. This is consistent with the reduction in block structure characteristic of LOT image coding.