Mobile imagers now possess multi-megapixel sensors. Blur caused by camera motion during the exposure is becoming
more pronounced because the exposure time for the smaller pixel sizes has been increased to attain the same photon count per pixel.
We present a method of measuring human hand-eye coordination for mobile imagers. The results indicate that when a
user tries to hold a steady position with no panning intended, the hand motion comprises a distinct linear-walk
component and a distinct random-walk component. Using the video capture mode, we find that the frame-to-frame
variation is typically less than 2.5 pixels (0.149 degrees). An algorithm has been devised that permits the camera
to determine in real time the optimum moment for the exposure to begin so as to minimize motion blur.
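The exposure-timing idea can be sketched as follows. The window length, threshold, and the name `should_expose` are illustrative assumptions for this sketch, not the parameters or implementation of the algorithm described above:

```python
from collections import deque
import math

def should_expose(motion_history, window=3, threshold_px=1.0):
    """Trigger exposure when recent frame-to-frame motion is small.

    motion_history: sequence of (dx, dy) frame-to-frame shifts in pixels.
    Returns True when the mean motion magnitude over the last `window`
    frames falls below `threshold_px` (hypothetical criterion).
    """
    if len(motion_history) < window:
        return False
    recent = list(motion_history)[-window:]
    mean_mag = sum(math.hypot(dx, dy) for dx, dy in recent) / window
    return mean_mag < threshold_px

# Simulated hand shake: larger shifts settling into a quiet interval.
history = deque(maxlen=10)
for dx, dy in [(2.1, 0.5), (1.8, -0.3), (0.4, 0.2), (0.3, -0.1), (0.2, 0.1)]:
    history.append((dx, dy))
    if should_expose(history):
        print("expose now")
```

In a real pipeline the (dx, dy) values would come from frame-to-frame registration of the video preview stream.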
We also examined edge-rendering differences between fully populated "direct" image sensors and Bayer-pattern sensors. Because
the dominant linear motions are horizontal and vertical, chromatic shifts are observed in the Bayer sensor along the
direction of motion for certain color transitions.
Edition 2 of ISO 12233, Resolution and Spatial Frequency Response (SFR) for Electronic Still Picture Imaging, is likely
to offer a choice of techniques for determining the spatial resolution of digital cameras that differ from those in the initial standard.
These choices include 1) the existing slanted-edge gradient SFR protocols but with low-contrast features, 2) a polar-coordinate sine-wave SFR technique using a Siemens star element, and 3) visual resolution threshold criteria using continuous linear spatial frequency bar patterns. A comparison of these methods will be provided. To establish the level of consistency between the results of these methods, theoretical and laboratory experiments were performed by members of the ISO TC42/WG18 committee. Test captures were performed on several consumer and SLR digital cameras using the on-board image processing pipelines. All captures were done in a single session with the same lighting conditions and camera operator. Generally, there was good conformance between methods, albeit with some notable differences. Speculation on the reasons for these differences, and on how they can be diagnostic in digital camera evaluation, will be offered.
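As background, the core of the slanted-edge gradient SFR method in 1) can be sketched in a simplified one-dimensional form. The standard's construction of a 4x-oversampled edge profile from the edge slant is omitted here, so this is a sketch of the principle, not the ISO 12233 procedure itself:

```python
import numpy as np

def sfr_1d(esf):
    """Simplified SFR: differentiate the edge-spread function (ESF) to get
    the line-spread function (LSF), window it, and take the FFT magnitude
    normalized to 1 at DC."""
    lsf = np.diff(esf)
    lsf = lsf * np.hamming(lsf.size)   # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]      # normalized frequency response

# A synthetic edge blurred into a linear ramp: its SFR falls off with
# spatial frequency, as expected for a blurred system.
x = np.linspace(-10, 10, 101)
esf = np.clip((x + 1.0) / 2.0, 0.0, 1.0)
sfr = sfr_1d(esf)
print(f"SFR at DC: {sfr[0]:.2f}, at Nyquist: {sfr[-1]:.2f}")
```

The low-contrast features proposed in Edition 2 change the target, not this computation; they reduce the influence of nonlinear in-camera tone processing on the measured response.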
Although it is well known that luminance resolution is most important, the ability to accurately render colored details, color textures, and colored fabrics cannot be overlooked. This includes the ability to accurately render single-pixel color details as well as to avoid color aliasing. All consumer digital cameras on the market today record in color, and the scenes people photograph are usually in color. Yet almost all resolution measurements made on color cameras are done using a black-and-white target. In this paper we present several methods for measuring and quantifying color resolution. The first method, detailed in a previous publication, uses a slanted-edge target of two colored surfaces in place of the standard black-and-white edge pattern. The second method employs the standard black-and-white targets recommended in the ISO standard, but records them onto the camera through colored filters, thus giving modulation between black and one particular color component; red, green, and blue color-separation filters are used in this study. The third method, conducted at Stiftung Warentest, an independent German consumer organization, uses a white-light interferometer to generate fringe-pattern targets of varying color and spatial frequency.
We compared the Spatial Frequency Response (SFR) of image sensors that use the Bayer color filter pattern and Foveon X3 technology for color image capture. Sensors for both consumer and professional cameras were tested. The results show that the SFR for Foveon X3 sensors is up to 2.4x higher. In addition to the standard SFR method, we also applied the SFR method using a red/blue edge. In this case, the X3 SFR was 3-5x higher than that for Bayer filter pattern devices.
In this paper we describe the benefit of using a sharp (spectrally sharpened) transformation in the context of a color appearance model. The proposed scheme is shown to perform better than other models for the limited set of conditions tested. The testing method is similar to that described by Braun and Fairchild, involving paired comparisons between prints under different illumination conditions and images calculated by the models for rendering on a CRT. Our testing shows that a model that employs spectral sharpening for illuminant color compensation achieves better results than previous methods.
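A minimal sketch of von Kries adaptation in a sharpened sensor space, which is the mechanism spectral sharpening supports. The matrix values are the commonly cited Sharp adaptation matrix; treat them as illustrative rather than the paper's exact transform:

```python
import numpy as np

# Commonly cited "Sharp" adaptation matrix (illustrative values).
M_SHARP = np.array([[ 1.2694, -0.0988, -0.1706],
                    [-0.8364,  1.8006,  0.0357],
                    [ 0.0297, -0.0315,  1.0018]])

def adapt(xyz, white_src, white_dst, M=M_SHARP):
    """Map XYZ under the source illuminant to XYZ under the destination
    illuminant: sharpen, scale each channel by the white-point ratio
    (diagonal von Kries), then transform back."""
    rho_src = M @ white_src
    rho_dst = M @ white_dst
    scale = np.diag(rho_dst / rho_src)
    return np.linalg.inv(M) @ scale @ M @ xyz

# A surface matching the source white maps exactly to the destination white.
d65 = np.array([0.9505, 1.0000, 1.0890])
a   = np.array([1.0985, 1.0000, 0.3558])
print(adapt(d65, d65, a))   # ≈ illuminant A white point
```

The point of sharpening is that the diagonal scaling step models illuminant change more accurately in the sharpened space than in raw cone or XYZ coordinates.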
The MIT holographic video display can be converted to color by illuminating the three acoustic channels of the acousto-optic modulator (AOM) with laser light corresponding to the red, green, and blue parts of the visible spectrum. The wavelengths selected are 633 nm (red), 532 nm (green), and 442 nm (blue). Since the AOM is operated in the Bragg regime, each wavelength is diffracted over a different angular range, resulting in a final image in which the three color primaries do not overlap. This situation can be corrected by shifting the diffracted spatial frequencies with a holographic optical element (HOE). This HOE, which consists of a single grating, is placed immediately after the AOM in the optical setup. Calculation of the required spatial frequency for the HOE must take into account the optical activity of the TeO2 crystal used in the AOM. The HOE introduces distortions in the final image, but these are small enough to be visually negligible. The final images are of good quality and exhibit excellent color registration. The horizontal view zone, however, diminishes for the shorter wavelengths.
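The wavelength dependence of the diffraction angle can be illustrated with the small-angle AOM relation θ ≈ λf/v. The acoustic velocity below is a typical TeO2 slow-shear value and the drive frequency is an assumed figure, not the display's actual band:

```python
import math

# First-order AOM deflection angle (small-angle): theta ≈ lambda * f / v.
V_ACOUSTIC = 617.0   # m/s, approximate TeO2 slow shear-wave velocity
F_DRIVE = 50e6       # Hz, illustrative acoustic drive frequency

for name, wavelength in [("red", 633e-9), ("green", 532e-9), ("blue", 442e-9)]:
    theta = wavelength * F_DRIVE / V_ACOUSTIC   # radians
    print(f"{name}: {math.degrees(theta):.2f} deg")
```

Because θ scales linearly with λ, the red channel is deflected over a wider angular range than the blue, which is why the three primaries fail to overlap without the corrective HOE.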
We present true-color holographic stereograms made using multiple layers of Du Pont OmniDex photopolymer. Red, green, and blue color separations are reproduced at optimum replay wavelengths by exposing in blue and post-swelling using monomer color-tuning films. This material is also used to record pseudo-color and true-color holograms of real-life scenes. A theoretical analysis of the color reproduction is applied to the presented technique and compared to results obtained with other materials. The signal-to-noise ratio, color rendering, and color-gamut area are shown to be comparable to those obtained with dichromated gelatin and considerably better than those of holograms recorded in silver halide materials.
The one-step ultragram, a flexible-format computer-graphic holographic stereogram, is demonstrated in full color. Geometric limitations due to the parallel diffusion-screen and integral exposure planes are discussed. Image-processing and translation schemes that allow for multicolor registration are demonstrated. Emphasis is placed on the pre- and post-swelling color techniques used to register color-separated, predistorted component images in order to produce full-color reflection-format stereograms. Experiments using a single laser wavelength to produce a one-step white-light-viewable image for each component color are presented. The wide view angle afforded by the flexible ultragram recording geometry, combined with a true-color technique, is shown to result in a practical hardcopy display of three-dimensional computer-generated scenes.
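The swelling-based color tuning used in the stereograms above rests on the Bragg condition for reflection holograms: the replay wavelength scales with the fringe spacing, so changing the emulsion thickness by a factor s shifts the replay wavelength roughly proportionally. A sketch, assuming this simple linear scaling and an illustrative 458 nm recording wavelength:

```python
# Replay-wavelength tuning by emulsion swelling (reflection holograms):
# a thickness change by factor s shifts lambda_replay ≈ s * lambda_record.
RECORD_NM = 458.0   # nm, single blue recording laser (illustrative)

for target in (633.0, 532.0, 442.0):   # desired replay primaries, nm
    s = target / RECORD_NM
    print(f"target {target:.0f} nm -> thickness factor {s:.2f}")
```

Reaching a red replay primary from a blue recording thus requires the largest swelling factor, which is why the red component places the greatest demand on the tuning films.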
The fidelity of color reproduction achievable in reflection holograms is analyzed by an in-depth theoretical and experimental treatment. A theoretical model is described which incorporates color-rendering analysis, the effects of bandwidth, the signal-to-noise ratio for color holograms, and wavelength shifting; this analysis considers the effect of the holographic process on color, which has previously been neglected. The model compares octagons formed by points on a CIE diagram corresponding to eight Munsell colored chips when reproduced by the holographic image and when lit by a standard light source. The theory is shown to compare well with experimental results obtained using both sandwiched Ilford films and a single panchromatic film. The model is then employed to predict holographic image color reproduction for all possible recording wavelengths. From this analysis, optimum wavelength combinations are obtained. It is predicted that the color reproduction obtained by a practical set of recording wavelengths (458, 532, and 633 nm, for example) can actually be more accurate in a holographic image than under laser light, albeit with a reduced gamut area. A brief discussion explains why a short blue component with low diffraction efficiency achieves superior color reproduction.
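The octagon comparison reduces to polygon areas in chromaticity coordinates, computable with the shoelace formula. The coordinates below are invented for illustration, not the measured Munsell chip values:

```python
def polygon_area(points):
    """Shoelace formula: area of the polygon through `points`, taken in
    order around the boundary (here, chromaticities of eight chips)."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

# Illustrative octagons: a reproduced gamut uniformly shrunk by 20%
# toward the white point relative to the reference octagon.
reference = [(0.45, 0.33), (0.41, 0.42), (0.33, 0.48), (0.26, 0.42),
             (0.23, 0.32), (0.26, 0.25), (0.33, 0.22), (0.41, 0.25)]
white = (0.333, 0.333)
reproduced = [(white[0] + 0.8 * (x - white[0]),
               white[1] + 0.8 * (y - white[1])) for x, y in reference]
ratio = polygon_area(reproduced) / polygon_area(reference)
print(f"gamut-area ratio: {ratio:.2f}")   # 0.64 = 0.8**2
```

A gamut-area ratio below 1 quantifies the reduced gamut noted for the holographic image, while the accuracy comparison additionally depends on how far each reproduced chip lands from its reference position.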