Digital cameras are of increasing significance for professional applications in photo studios, where high-quality fashion, portrait, product, catalog and advertising photographs have to be taken. The eyelike is a digital camera system that has been developed for such applications. It is capable of working online at high frame rates with images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time that is likewise variable from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approximately 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. When it is used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors; alternatively, the eyelike can be used as a back on most commercial 4" by 5" view cameras. This paper describes the eyelike camera concept with its essential system components and finishes with a description of the software that is needed to bring the high quality of the camera to the user.
The resolution of digital images is limited by the camera's sampling interval, and their visual quality depends on the level of degradation introduced from acquisition, through quantization, transmission and digital filtering, to display. This paper presents the information metric as an effective image quality assessment tool for the design of high-resolution digital imaging systems. It shows the metric's capabilities under any set of design constraints by assessing the electro-optical-digital imaging process as a unified system. It ties improvements in the resolution and clarity of the final image representation to increases in the acquired information, by correlating the loss of resolution with the loss of information in the assessed system.
The coming information society will require images at the high end of the quality range. We are investigating the physical factors that are important for the difficult task of reproducing a high-order, high-quality sensation in the electronic capture and display of images. We have found a key assessment term, 'image depth', that appropriately describes the high-order subjective sensation indispensable for the display of extra high quality images. In connection with image depth, we have discovered a new physical factor, and determined the degree of precision required of already known physical factors, for the display of extra high quality images. Cross modulation among the R, G, and B signals is the newly discovered physical factor affecting the quality of an electronic display. In addition, we have found that very strict control of distortion in both the gamma and the step response is necessary, and that aliasing of the displayed images also destroys image depth. This paper first outlines the overall objective of our work, and then describes the specific effects of cross modulation distortion, gamma, step response and aliasing as they relate to image depth, which is important for extra high quality imaging.
In images, anomalies such as edges or object boundaries take on a perceptual significance that is far greater than their numerical energy contribution to the image. The wavelet transform highlights these anomalies by representing them with significant coefficients. The contribution of a wavelet coefficient to the perceptual quality of the image is related to its magnitude, and degradation in image quality due to image compression manifests itself as a reduction in the magnitude of the wavelet coefficients. Since significant wavelet coefficients appear across different scales and orientations, it is important to observe the wavelet transform at different scales and orientations. In this paper, the wavelet transforms of a given image and of the reconstructed images at various quality levels are represented in the form of energy density plots, as suggested in reference one. A quality metric is proposed based on the absolute difference between the energy densities corresponding to the original and reconstructed images. Preliminary results obtained using this scale-based image quality evaluation strategy are reported.
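As an illustration of the kind of scale-based metric this abstract describes, here is a minimal sketch that applies one level of an unnormalized Haar transform and scores quality as the absolute difference of detail-energy densities. The single-scale simplification and all function names are our own assumptions, not the paper's method.

```python
def haar_step(rows):
    """One 1-D Haar analysis step over each row (unnormalized)."""
    out = []
    for r in rows:
        avg = [(r[i] + r[i + 1]) / 2 for i in range(0, len(r), 2)]
        dif = [(r[i] - r[i + 1]) / 2 for i in range(0, len(r), 2)]
        out.append(avg + dif)
    return out

def haar2d(img):
    """One 2-D Haar level: transform rows, then columns (via transpose)."""
    t = haar_step(img)
    t = [list(c) for c in zip(*t)]          # transpose
    t = haar_step(t)
    return [list(c) for c in zip(*t)]       # transpose back

def detail_energy(img):
    """Energy of the detail (non-LL) coefficients after one Haar level."""
    n = len(img) // 2
    c = haar2d(img)
    return sum(c[i][j] ** 2
               for i in range(len(c)) for j in range(len(c[i]))
               if not (i < n and j < n))

def scale_metric(orig, recon):
    """Absolute difference of detail-energy densities at the first scale.
    Larger values indicate more loss of edge detail in the reconstruction."""
    area = len(orig) * len(orig[0])
    return abs(detail_energy(orig) - detail_energy(recon)) / area
```

A blurred reconstruction loses detail-coefficient magnitude relative to the original, so its energy-density difference is nonzero, while an identical image scores zero.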
Traditional image quality rating schemes use descriptive scales applicable to wide ranges of quality. These scales, based on equal-interval verbal descriptors, cannot be used for the restricted ranges of quality now encountered in image compression studies. Although numerical category scales have been successfully used in some studies for quantifying small variations in quality arising from lossy image compression, problems arise in more general image coding applications. In this work, we propose a double-anchored numerical category scale based on a 3-context visual assessment scheme for image coding applications. The goal is to devise a common subjective scale applicable to a set of images produced from multiple scenes compressed by multiple coding algorithms; the contexts therefore lie in the use of distinct coders and distinct images. The first two contexts, which use a specific image scene, are the visibility of artifacts induced by a specific coder and the visibility of artifacts arising from different coders. In the third context, artifact visibility is considered in terms of the content of different image scenes. Separate scales are obtained for images differing in scene content and for each coding algorithm, using numerical category scaling with explicit high and low anchors. These scales are linked using pairwise matching techniques to obtain a robust image quality scale.
This paper presents a newly developed resolution transformation method that achieves scalable resolution transformation of bi-level images with high image quality, real-time processing and small circuitry. The spread of networked multifunctional hard-copy products for printing images from sources with various resolutions, such as facsimile machines, PCs, scanners and digital cameras, has created an urgent need for scalable resolution transformation with high image quality. The proposed method applies outlining and rendering to scalable resolution transformation, and it minimizes the size of the circuitry by modifying their algorithms. Outlines are generated from the bi-level bitmap of the source image by fitting approximated B-spline curves to edge pixels. The bi-level bitmap image at a different resolution is then generated from the outlines by local rendering. Fitting approximated B-spline curves and local rendering make it possible to reduce the circuitry. As a result, real-time scalable resolution transformation of high quality is achieved with small circuitry. The quality of images transformed from 200 dpi to 600 dpi by the proposed method is almost equivalent to that of a genuine 600 dpi image.
In this paper, we present a novel approach that represents an image by using high-curvature points of the image surface. The basic concept and the morphological methods for finding these high-curvature points are introduced. This representation faithfully preserves spatial information. The reconstruction of the original image from these high-curvature points is successfully developed and demonstrated. We also show a primitive linking process and experimental results that demonstrate the promising possibility of using this approach to represent an image compactly.
In this paper we present a wavelet packet transform algorithm for color still images and show how, and with what performance, the transformed image tree can be pruned and the 'imagets' altered in order to obtain a compression/decompression algorithm respectful of human psychovisual image perception. In the first part we present the basic assumptions about human vision on which we have constructed our algorithm; the second part deals with the color transformation we used before applying the wavelet packet transform; and in the third part we show that a quasi-lossless compression/decompression scheme can easily be obtained with compression ratios of up to 1:10 (the quantization step was not considered here). Finally, we propose a new quality criterion based upon the results of our tests on human sensitivity to image details of various scales and colors. This figure of merit can be considered a multiresolution version of the commonly used PSNR criterion.
In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times, so data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry, only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images, in which case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper a new lossless color transform is proposed, based on the Karhunen-Loeve Transform (KLT). This transform removes redundancies in the color representation of each pixel and can be combined with many existing compression schemes; here it is combined with a prediction scheme that exploits spatial redundancies. The results presented in this paper show that the color transform effectively decorrelates the color components and that it typically saves about half a bit to two bits per pixel compared to a purely predictive scheme.
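The paper's transform is an image-dependent lossless KLT. As a simpler, fixed stand-in that illustrates reversible integer color decorrelation of the same flavor, here is a sketch of the reversible color transform used in JPEG 2000; this is explicitly not the proposed KLT-based transform.

```python
def rct_forward(r, g, b):
    """Reversible color transform: integer, exactly invertible.
    Y approximates luminance; U, V are color differences."""
    y = (r + 2 * g + b) >> 2   # floor((R + 2G + B) / 4)
    u = r - g
    v = b - g
    return y, u, v

def rct_inverse(y, u, v):
    """Exact inverse: recovers the original integer R, G, B."""
    g = y - ((u + v) >> 2)     # Python's >> floors for negatives too
    r = u + g
    b = v + g
    return r, g, b
```

Because the forward and inverse steps use matching floor operations, the round trip is bit-exact, which is the property a lossless color transform must provide before spatial prediction is applied.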
In the color image compression field, it is well known that the information in an image is statistically redundant. This redundancy is a handicap in terms of dictionary construction time. One way to counterbalance this time-consuming effect is to reduce the redundancy within the original image while keeping the image quality: one can extract a random sample of the initial training set and construct from it a codebook whose quality is equal to that of the codebook generated from the entire training set. We applied this idea in the context of the color vector quantization (VQ) compression scheme and propose an algorithm to reduce the complexity of the standard LBG technique. We searched for a measure of relevance for each block in the entire training set. Under the assumption that the measure of relevance is an independent random variable, we applied the Kolmogorov statistical test to determine the smallest size of a random sample, and then the sample itself. Finally, from the blocks associated with each measure of relevance in the random sample, we run the standard LBG algorithm to construct the codebook. Psychophysical and statistical measures of image quality allow us to find the best measure of relevance for reducing the training set while preserving the image quality and decreasing the computational cost.
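The sampling step can be sketched as follows, assuming a two-sample Kolmogorov-Smirnov criterion comparing the sample's relevance values against the full set; the exact statistic, threshold constant and growth schedule here are illustrative assumptions, not the paper's procedure.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max distance between ECDFs."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        fa = sum(1 for v in a if v <= x) / len(a)
        fb = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(fa - fb))
    return d

def smallest_representative_sample(relevance, coeff=1.36, seed=0):
    """Grow a random sample of relevance values until its distribution
    passes the KS test against the full set (c(0.05) = 1.36)."""
    rng = random.Random(seed)
    n = len(relevance)
    m = max(8, n // 16)                 # illustrative starting size
    while m < n:
        sample = rng.sample(relevance, m)
        threshold = coeff * ((n + m) / (n * m)) ** 0.5
        if ks_statistic(sample, relevance) < threshold:
            return sample
        m *= 2                          # illustrative growth schedule
    return list(relevance)
```

Only the blocks whose relevance values land in the returned sample would then be fed to LBG, which is where the computational saving comes from.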
A new algorithm for color quantizing video sequences is described. It creates multiple color palettes for a video sequence yet avoids the extreme case of creating a palette for each frame, exploiting the fact that during a video sequence there are many instances where only minor color changes occur between consecutive frames. The algorithm calculates elementary statistics for each frame to help ascertain whether or not a new color palette needs to be created. The proposed algorithm performs clustering by grouping the colors of a frame into partitions using a method based on principal components. Prior to clustering, a statistical test determines whether or not the colors in the current frame are significantly different from those of the frame where clustering last occurred. If the algorithm detects a significant difference, it performs a clustering phase to calculate a new color palette; otherwise it skips the clustering phase. Quantized frames are generated by mapping each color in the original frame to its nearest neighbor in the color palette.
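The frame-statistics gate can be sketched as below; the per-channel mean comparison and the fixed threshold are illustrative stand-ins for the (unspecified here) statistical test, not the paper's actual criterion.

```python
def frame_stats(frame):
    """Per-channel mean of an RGB frame given as a list of (r, g, b) pixels."""
    n = len(frame)
    return tuple(sum(p[c] for p in frame) / n for c in range(3))

def needs_new_palette(curr, ref, threshold=10.0):
    """Trigger re-clustering only if some channel mean drifts beyond the
    threshold relative to the frame where clustering last occurred."""
    cs, rs = frame_stats(curr), frame_stats(ref)
    return max(abs(c - r) for c, r in zip(cs, rs)) > threshold
```

Frames with only minor color changes keep the existing palette, so the expensive clustering phase runs only on significant scene changes.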
We present a novel halftoning technique for the transformation of continuous-tone images into binary halftoned separations. The algorithm is based on a successive assessment of a near-optimal sequence of positions to render. The impact of each rendered point is fed back to the process as a distribution function, thereby influencing the following evaluations. The distribution function is not constant over the density range: in order to separate the dots adequately in the highlights, the width or radius of the distribution has to be made larger than in the mid-tones. The human visual system and the effect of dot gain are also taken into account in this algorithm, and the notion of incremental dot gain is introduced. Since the series of positions to render is not known in advance, the final necessary dot gain compensation is impossible to assess beforehand. However, the incremental dot gain can be computed in advance for each configuration of dots and taken into account in the process of generating the output. Some aspects of the process bear a certain resemblance to error-distribution-based algorithms; however, the raster-scanning sequence of rendering the output points in usual error diffusion algorithms is completely different from the image-dependent traversal described in this paper.
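For contrast, the raster-scan error diffusion that the abstract distinguishes itself from can be sketched as standard Floyd-Steinberg; this is the conventional baseline, not the paper's image-dependent traversal.

```python
def floyd_steinberg(img):
    """Standard raster-scan Floyd-Steinberg error diffusion on a
    grayscale image (values 0..255), returning a binary halftone."""
    h, w = len(img), len(img[0])
    f = [list(row) for row in img]          # working copy holding errors
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = f[y][x]
            new = 255 if old >= 128 else 0  # threshold at mid-gray
            out[y][x] = new
            err = old - new
            # distribute the quantization error to unvisited neighbors
            if x + 1 < w:
                f[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1][x - 1] += err * 3 / 16
                f[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1][x + 1] += err * 1 / 16
    return out
```

The fixed left-to-right, top-to-bottom order is exactly what makes the final dot pattern predictable enough for precomputed compensation, whereas the abstract's method chooses the rendering order from the image content itself.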