Pixel level processing promises many significant advantages, including high SNR, low power, and the ability to adapt image capture and processing to different environments by processing signals during integration. However, the severe limitation on pixel size has precluded its mainstream use. In this paper we argue that CMOS technology scaling will make pixel level processing increasingly popular. Since pixel size is limited primarily by optical and light collection considerations, as CMOS technology scales, an increasing number of transistors can be integrated at the pixel. We first demonstrate that our argument is supported by the evolution of CMOS image sensors from PPS to APS. We then briefly survey existing work on analog pixel level processing and pixel level ADC. We classify analog processing into intrapixel and interpixel. Intrapixel processing is mainly used to improve sensor performance, while interpixel processing is used to perform early vision processing. We briefly describe the operation and architecture of our recently developed pixel level MCBS ADC. Finally, we discuss future directions in pixel level processing. We argue that interpixel analog processing is not likely to become mainstream, even for computational sensors, due to the poor scaling of analog circuits. Pixel level ADC, on the other hand, is likely to become popular, since it minimizes analog processing and requires only simple and imprecise circuits to implement. We then discuss the inclusion of digital memory and interpixel digital processing in future technologies to implement programmable digital pixel sensors.
This paper presents a comparison between primary (RGB) and complementary (CYMG) CCD color filter arrays as applied to digital photography. Our analysis is based upon the measured spectral characteristics of the primary and complementary color versions of the Matsushita MN3776 CCD. The important role of the color correction matrix in the quality of the image is considered both in terms of noise and color saturation. Our calculations show that there is a tradeoff between color saturation and ISO speed when complementary filters are used. Complementary color filters gain an ISO speed advantage only when the color saturation is low. When the color correction matrix is chosen to make the ISO speeds of the two filter systems equivalent, the well capacity of the complementary CCD must be significantly higher because of the higher overall transmission of its color filters. Our comparison includes ISO speed calculations and plots of the color gamut for primary and complementary color filters with various color correction matrices. We conclude that primary color filters are superior for digital photography.
We have developed a software simulator to create physical models of a scene, compute camera responses, render the camera images, and measure the perceptual color errors between the scene and the rendered images. The simulator can be used to measure color reproduction errors and analyze the contributions of different sources to the error. We compare three color architectures for digital cameras: (a) a sensor array containing three interleaved color mosaics, (b) an architecture using dichroic prisms to create three spatially separated copies of the image, and (c) a single sensor array coupled with a time-varying color filter, measuring three images sequentially in time. Here, we analyze the color accuracy of several exposure control methods applied to these architectures. The first exposure control algorithm simply stops image acquisition when one channel reaches saturation. In a second scheme, we determine the optimal exposure time for each color channel separately, resulting in a longer total exposure time. In a third scheme, we restrict the total exposure duration to that of the first scheme, but preserve the optimum ratio between color channels. Simulator analyses measure the color reproduction quality of these different exposure control methods as a function of illumination, taking into account photon and sensor noise, quantization, and color conversion errors.
The increase in the popularity of digital cameras over the past few years has provided motivation to improve all elements of the digital photography signal chain. As a contribution towards this common goal, we present a new CFA recovery algorithm, which recovers full-color images from single-detector digital color cameras more accurately than previously published techniques. This CFA recovery algorithm uses a threshold-based variable number of gradients. In order to recover missing color information at each pixel, we measure the gradient in eight directions based on a 5 × 5 neighborhood surrounding that pixel. Each gradient value is defined as a linear combination of the absolute differences of the like-colored pixels in this neighborhood. We then consider the entire set of eight gradients to determine a threshold of acceptable gradients. For all of the gradients that pass the threshold test, we use color components from the corresponding areas of the 5 × 5 neighborhood to determine the missing color values. We test our CFA recovery algorithm against bilinear interpolation and a single-gradient method. Using a set of standard test images, we show that our CFA recovery algorithm reduces the MSE by over 50 percent compared to conventional color recovery algorithms. In addition, the resolution test we developed shows that the new CFA recovery algorithm increases the resolution by over 15 percent. The subjective quality of test images recovered using the new algorithm also shows noticeable improvement.
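The threshold step of the variable-number-of-gradients approach can be sketched as follows. This is a minimal illustration assuming the commonly used threshold form T = k1·min + k2·(max − min); the abstract does not give the paper's exact constants or gradient definitions, so these are assumptions.

```python
import numpy as np

def vng_threshold(gradients, k1=1.5, k2=0.5):
    """Select the set of acceptable (low) gradients, threshold-based VNG style.

    The threshold T = k1*min + k2*(max - min) is a common formulation;
    the paper's exact constants are an assumption here.
    """
    g = np.asarray(gradients, dtype=float)
    T = k1 * g.min() + k2 * (g.max() - g.min())
    return g <= T

# Hypothetical gradient values for the 8 compass directions around a pixel.
# Color components from the directions passing the test would then be
# averaged to estimate the missing color value.
grads = [12.0, 15.0, 80.0, 14.0, 90.0, 13.0, 85.0, 11.0]
mask = vng_threshold(grads)
```

With these illustrative values, only the five low-gradient (smooth) directions pass the test, so interpolation avoids averaging across the three high-gradient (edge) directions.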
The read noise characteristics of a 3T photodiode-based CMOS active pixel image sensor IC are described. The sensor is fabricated in Hewlett-Packard's standard 0.5 µm, 3.3 V mixed-signal process. The read noise of the analog signal path is theoretically estimated by adding together the noise contributions of the pixel, column amplifier, and programmable gain amplifier (PGA). The read noise of the imager is then measured as a function of the on-chip programmable gain with an HP9494 mixed-signal production tester. An analysis of the measured read noise is performed to separate the noise contribution into pre-PGA and post-PGA components. The measured pre-PGA noise component is compared to the calculated estimate of the analog signal path noise. The measured pre-PGA noise is found to be much smaller than the calculated estimate. Consistency is substantially improved if pixel kTC reset noise is excluded from the calculated estimate.
Techniques for characterizing CCD imagers have been developed over many years. These techniques have recently been modified and extended to CMOS PPS and APS imagers. With the scaling of CMOS technology, an increasing number of transistors can be added to each pixel. A promising way to utilize these transistors is to perform pixel level ADC. The authors have designed and prototyped two imagers with pixel level Nyquist rate ADC. The ADCs operate in parallel and output data one bit at a time. The data is read out of the imager array one bit plane at a time, in a manner similar to a digital memory. Existing characterization techniques could not be directly used for these imagers, however, since there is no facility to read out the analog pixel values before ADC, and the ADC resolution is limited to only 8 bits. Fortunately, the ADCs are fully testable electrically, without the need for any light or optics. This makes it possible to obtain the ADC transfer curve, which greatly simplifies characterization. In this paper we describe how we characterize our pixel level ADC imagers. To estimate QE, we measure the imager photon to DN transfer curve and the ADC transfer curve. We find that both curves are quite linear. Using an estimate of the sense node capacitance, we then estimate sensitivity and QE. To estimate FPN, we model it as the outcome of the sum of two uncorrelated random processes, one representing the ADC FPN and the other representing the photodetector FPN, and develop estimators for the model parameters from imager data under uniform illumination. We report characterization results for a 640 by 512 imager fabricated in a 0.35 µm standard digital CMOS process.
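Because the two FPN components are modeled as uncorrelated, their variances add, which suggests a simple decomposition. The following synthetic sketch assumes illustrative noise levels and that the ADC component can be isolated by a purely electrical measurement; it is not the paper's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data under the two-component model: per-pixel FPN is the sum
# of uncorrelated ADC FPN and photodetector FPN. The standard deviations
# (2.0 and 3.0 DN) are illustrative, not from the paper.
adc_fpn = rng.normal(0.0, 2.0, size=(512, 640))
pd_fpn = rng.normal(0.0, 3.0, size=(512, 640))

electrical_frame = adc_fpn            # electrical-only test sees the ADC alone
flat_field_frame = adc_fpn + pd_fpn   # uniform illumination sees both

var_adc = electrical_frame.var()
var_total = flat_field_frame.var()
var_pd = var_total - var_adc          # uncorrelated processes: variances add
```

With enough pixels, `var_pd` converges to the photodetector FPN variance (here 9.0 DN²) even though that component is never observed in isolation.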
Portability and miniaturization are key factors in the electronic product market. A fast growing market is digital photography, with a variety of small-size and low-weight electronic products for professional and personal photography, medical imaging, and video conferencing. The development of low-cost imaging devices drives the market for low-cost packages for opto-electronic imaging devices. A recent packaging technology development (1,2) offers a substantial reduction of both the cost and the physical dimensions of the opto-electronic package.
In image acquisition, the captured image is often the result of the object being convolved with a blur function. Deconvolution is necessary in order to undo the effects of the blur. However, in real life we may have very little knowledge of the blur, and therefore we have to perform blind deconvolution. One major challenge for existing iterative blind deconvolution algorithms is the enforcement of the convolution constraint. In this paper we describe a method whereby this constraint can be much more easily implemented in the frequency domain. This is possible because of Parseval's theorem, which allows us to relate projection in the space and frequency domains. Our algorithm also incorporates regularization of the estimated image through the use of Wiener filters. The restored images are compared to the original and noisy blurred images, and we find that the restoration process indeed provides an enhancement in visual quality.
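The Wiener regularization step can be sketched in a few lines of frequency-domain code. This is a generic, noiseless demonstration with a known PSF, not the paper's full blind-deconvolution loop; the constant noise-to-signal ratio is a simplifying assumption.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Wiener-regularized deconvolution in the frequency domain.

    Applies W(f) = H*(f) / (|H(f)|^2 + NSR) to the blurred image's
    spectrum; a constant noise-to-signal ratio NSR is assumed.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

# Noiseless demo: blur an impulse with a centered 3x3 box PSF, then restore.
img = np.zeros((32, 32)); img[8, 8] = 1.0
psf = np.zeros((32, 32)); psf[15:18, 15:18] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf, nsr=1e-9)  # tiny NSR: no noise here
```

In the noiseless case the impulse is recovered almost exactly; with real noise a larger NSR trades resolution for noise suppression.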
In still image compression, the JPEG lossy compression standard for continuous tone images is often used. JPEG achieves compression efficiency by decorrelating an image using the Discrete Cosine Transform (DCT). The resulting quantized DCT coefficients are then entropy coded for higher compression efficiency. In the quest for higher compression efficiency, wavelets are an attractive alternative, at the cost of higher computational complexity. We propose that DCT followed by Embedded Zerotree Coding (EZC) of the DCT coefficients can be competitive with wavelet compression schemes, the basis of competitiveness being the achievable visual quality at the medium bit rates relevant to digital camera applications. As the computational complexity of the proposed compression scheme is still dominated by the DCT operations, we also propose using a very efficient DCT algorithm that has both low computational complexity and good inherent parallelism. Finally, we propose a VLSI design that harnesses the inherent parallelism of this efficient DCT algorithm for digital camera applications.
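The energy compaction that makes quantized DCT coefficients amenable to zerotree coding can be illustrated with the orthonormal 8×8 DCT-II used by JPEG: for a flat image block, all of the energy lands in the single DC coefficient, leaving long runs of (near-)zero coefficients for the entropy coder.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used by JPEG's 8x8 transform."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)     # DC row scaling for orthonormality
    return C * np.sqrt(2 / n)

C = dct_matrix()
block = np.full((8, 8), 128.0)     # a flat 8x8 image block
coeffs = C @ block @ C.T           # 2D DCT: separable row/column transform
```

Here `coeffs[0, 0]` holds all of the block's energy (128 × 64 / 8 = 1024) while every AC coefficient is numerically zero, which is exactly the sparsity that EZC exploits.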
In this paper, an auto-focusing system based on image processing is introduced. Image processing methods such as the differential coefficient and maximum variance are used to extract edge information, and the statistical behavior of this information under different focusing states is used to control the focus and obtain the best focusing state. The system automatically reads the distance information of the worktable and drives the stepping motor to achieve auto-focusing. The auto-focusing system is installed on the original CCD-based microscopic measuring system, a practical measuring example is given, and it is shown that the approach described above improves precision. A higher-resolution, higher-precision image acquisition board, a CCD camera with higher resolution, and a higher-precision stepping motor would make the auto-focusing and the measurement of the system more accurate. The system can be applied in practice, and further refinement of the algorithms and software will give the system more functions.
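Contrast-based focus metrics of the kind described can be sketched as a gradient-energy measure; the paper's exact differential-coefficient and maximum-variance formulations may differ in detail, so this is an illustrative stand-in.

```python
import numpy as np

def focus_measure(img):
    """Gradient-energy focus metric: sum of squared pixel differences.

    A sharper (better focused) image yields a larger value. An autofocus
    loop would step the motor to maximize this measure.
    """
    g = img.astype(float)
    gx = np.diff(g, axis=1)
    gy = np.diff(g, axis=0)
    return float((gx ** 2).sum() + (gy ** 2).sum())

# A sharp step edge versus the same edge spread into a gradual ramp,
# standing in for in-focus and out-of-focus views of the same feature:
sharp = np.zeros((16, 16)); sharp[:, 8:] = 255.0
ramp = np.tile(np.linspace(0.0, 255.0, 16), (16, 1))
```

The step edge concentrates all of its contrast in one large difference, so it scores far higher than the defocused ramp, which is the property the focusing loop relies on.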
In this paper, we propose a new scene-adaptive exposure control algorithm for digital still cameras that achieves fast convergence to a targeted average luminance level. We focus especially on an electrical shutter control method, which enables the widest range of control. We experimented to find the relationship between the electrical shutter speed, which is determined by the number of CCD reset pulses, and the luminance. We composed various luminance environments and generated the exposure data for every combination of luminance environment and number of reset pulses. We found that as the number of reset pulses increases, the average luminance level of the captured image decreases. The method we propose is the secant method, an iterative method that we use for fast and stable convergence to the targeted luminance level. On the plane whose axes are the average luminance value and the number of reset pulses, a straight line is defined by two points. One point is computed from the image captured at t0. The other point has the minimum luminance value, under the assumption that the maximum number of reset pulses yields a luminance value of 0. The new shutter speed on the straight line makes a new point, t1. By repeating this, we converge to the targeted luminance stably and quickly.
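The secant update described above can be sketched directly. The linear luminance response and the specific numbers used here are illustrative assumptions, under which a single update lands exactly on the target; a real camera response is nonlinear and needs a few iterations.

```python
def next_reset_pulses(n0, y0, n1, y1, y_target):
    """One secant-method update: intersect the line through (n0, y0) and
    (n1, y1) with y = y_target to propose the next reset-pulse count."""
    return n1 + (y_target - y1) * (n0 - n1) / (y0 - y1)

# Illustrative camera response (assumed, not from the paper): average
# luminance falls linearly from 200 at 0 reset pulses to 0 at 1000 pulses.
def luminance(n):
    return max(0.0, 200.0 * (1.0 - n / 1000.0))

target = 120.0
n0, y0 = 0, luminance(0)      # point from the image captured at t0
n1, y1 = 1000, 0.0            # assumed point: max pulses -> luminance 0
n2 = next_reset_pulses(n0, y0, n1, y1, target)
```

With the assumed linear response, `n2` is 400 pulses and `luminance(n2)` hits the 120 target in one step; iterating the update gives the stable, fast convergence the abstract describes.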
In consumer digital cameras, the primary tasks in the image capture data path include automatic focus, automatic white balance, and automatic exposure. Auto-focus is implemented using maximum contrast, ranging, or sonar; white balance using color gamut determinations and 'gray value estimations'; and auto-exposure using scene evaluations. We evaluate the system implications of implementing one of these tasks, namely auto-exposure, on an embedded system - a digital camera. These include, among other things, design approach, power consumption, leverage from film cameras, and component count. Commercially available digital cameras and their choices of AE implementation are discussed where appropriate. Such an evaluation will assist, we hope, anyone designing or building a digital camera sub-system.
A digital imaging system that produces high image quality is not created by inspirational design alone, but evolves over time as the challenges posed by new applications are met. Our experience over the last ten years has led to a versatile, high-quality imaging system. In this talk, we describe some of the system challenges encountered and the enhancements incorporated to address them.
Camera calibration is a crucial problem for many industrial applications that incorporate visual sensing. In this paper, we present an approach to computing the intrinsic and extrinsic camera parameters that takes radial lens distortion into account. The approach consists of directly searching for the camera parameters that best project 3D points of a calibration pattern onto intensity edges in a 2D image of this pattern, without explicitly extracting the edges. Our approach can be considered an extension of Robert's method to obtain a more accurate camera model that adjusts for lens distortion. This approach tolerates less accurate image features and avoids heavy dependence on individual, strongly localized features, since feature localization is instead included as part of the error measure used in the optimization process. After describing the details of our approach, the paper presents experiments that evaluate its performance in terms of accuracy, sensitivity to initial conditions, and reliability.
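The forward projection being optimized can be sketched with a pinhole model plus a single radial distortion coefficient. The single-coefficient parameterization and all numeric values here are assumptions for illustration; the abstract does not specify the paper's exact distortion model.

```python
def project(point_3d, f, cx, cy, k1):
    """Pinhole projection with one radial distortion coefficient k1.

    Calibration would search over (f, cx, cy, k1) plus the extrinsic pose
    so that projected calibration points land on image intensity edges.
    """
    X, Y, Z = point_3d
    x, y = X / Z, Y / Z              # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                # radial distortion factor
    return f * d * x + cx, f * d * y + cy

# Hypothetical parameters: 800-pixel focal length, 640x480 principal point,
# mild barrel distortion (k1 < 0).
u, v = project((0.1, 0.2, 1.0), f=800.0, cx=320.0, cy=240.0, k1=-0.2)
```

A negative `k1` pulls points toward the principal point (barrel distortion), so here (u, v) lands slightly inside the undistorted position (400, 400).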
Color signal outputs from digital cameras can be calculated from the spectral distribution of an illumination, the spectral reflectance of the photographed object, and the spectral responsivity of the camera. The methods for measuring the spectral distribution of illuminations and the spectral reflectance of objects have been established unambiguously, and their characteristics are available from various databases. However, no accurate methods have been clearly defined for measuring the spectral responsivity characteristics of digital cameras. For objective assessment of the performance of digital cameras, which capture color images and output the corresponding color information as red-green-blue digital image data, previously proposed methods incorporate measurements of spectral responsivity and related characteristics. In this paper, by adopting compensation of the tone characteristics of each pixel, the authors develop yet another new method of measurement that overcomes some possible defects in the previously proposed methods. The paper describes the arrangement of equipment, the definition of the test chart, and raw data handling, together with some worked examples. The newly developed method has made it possible to measure the spectral responsivity characteristics of digital cameras accurately.
This paper describes a digital image capture simulator that incorporates a lens model based on point-spread function (PSF) data from a commercial lens design package. This lens model has significant advantages over a simple MTF-based model because it accounts for all of the image-degrading aberrations commonly encountered in image capture systems. Lens design data in the form of a set of highly resolved PSFs are generated off-line using the lens design package. The data are computed for a range of wavelengths, image plane locations, and field positions. To simulate a specific imaging system, these PSFs are re-sampled according to the sensor pixel pitch, system spectral sensitivities, sensor location, etc. The input to the simulator can be either a digital test target or a digital image of a real scene that is highly over-sampled with respect to the final simulated image and whose spatial and spectral characteristics are well known. The simulator output, in the form of the raw data generated by an actual digital capture device, will be highly useful in the parametric study of the design parameters of image capture systems. The performance and limitations of the lens component of the simulator are described.