CMOS active pixel sensor technology, now dominant in digital imaging, is based on analog pixels. A transition to digital pixel sensors can boost signal-to-noise ratios and enhance image quality, but it can increase pixel area to dimensions that are impractical for the high-volume consumer electronics market. There are two main approaches to digital pixel design. The first uses digitization methods that largely rely on photodetector properties and so are unique to imaging. The second adapts a classical analog-to-digital converter (ADC) for in-pixel data conversion. Imaging systems for medical, industrial, and security applications are emerging lower-volume markets that can benefit from these in-pixel ADCs. In these applications, larger pixels are typically acceptable, and imaging may be done in invisible spectral bands.
A delta-sigma, or sigma-delta, analog-to-digital converter (ADC) comprises a modulator, which implements oversampling and noise shaping, and a decimator, which implements low-pass filtering and downsampling. Whereas these ADCs are ubiquitous in audio applications, their use in video applications is emerging. Because of oversampling, it is preferable to integrate delta-sigma ADCs at the pixel level of megapixel video sensors. Moreover, in pixel-level applications, area usage per ADC matters far more than in chip-level applications, where there are only one or a few ADCs per chip. Recently, a small-area decimator was presented that is suitable for pixel-level applications. However, though this design is small enough for invisible-band video sensors, it is too large for visible-band ones. As shown here, nanoscale CMOS processes offer a solution to this problem. Given constant specifications, small-area decimators are designed, simulated, and laid out, full custom, for 180, 130, and 65 nm standard CMOS processes. Area usage of the whole decimator is analyzed to establish a roadmap for the design and to demonstrate that it could be competitive with other digital pixel sensors, based on Nyquist-rate ADCs, that are being commercialized.
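The division of labor between modulator and decimator can be illustrated with a toy model: a first-order delta-sigma modulator turns a DC input into a 1-bit stream whose running mean tracks the input, and a boxcar decimator (low-pass filter plus downsampling by the oversampling ratio) recovers the value. This is a minimal sketch of the principle, not the small-area decimator discussed here; the oversampling ratio and input value are arbitrary.

```python
def ds_modulate(x, n_samples):
    """First-order delta-sigma modulator: produce a +/-1 bitstream
    whose short-term mean tracks the DC input x in [-1, 1]."""
    integ, fb, bits = 0.0, 0.0, []
    for _ in range(n_samples):
        integ += x - fb            # integrate the error between input and feedback
        bit = 1 if integ >= 0 else -1
        bits.append(bit)
        fb = bit                   # 1-bit DAC feedback
    return bits

def decimate(bits, osr):
    """Boxcar low-pass filter plus downsampling by the oversampling ratio."""
    return [sum(bits[i:i + osr]) / osr
            for i in range(0, len(bits) - osr + 1, osr)]

osr = 64                           # arbitrary oversampling ratio for illustration
bits = ds_modulate(0.3, 50 * osr)
out = decimate(bits, osr)          # each output is within ~2/osr of the input 0.3
```

Because the modulator's integrator stays bounded, the error of each boxcar average is limited to roughly twice the integrator range divided by the oversampling ratio, which is why longer (or higher-order) decimation filters yield more resolution.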
The pixel array in a conventional image sensor performs worse than the human retina mainly in dynamic range and dark limit. These limitations may be overcome at the cost of introducing others; given biological precedent and inspiration, however, we aim to overcome them without such compromises. Whereas conventional image sensors use linear analog pixels in a single-tier process, we design nonlinear digital pixels for multiple-tier processes. A wide dynamic range is easily achieved with nonlinearity, while image quality is ensured through digital signal processing and in-pixel analog-to-digital conversion. For a low dark limit and high spatial resolution, we exploit the high fill factor and heterogeneous integration of emerging multiple-tier processes. Our progress is demonstrated with experimental results from three image sensor prototypes, which provide supporting evidence for the proposed approach.
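To illustrate why nonlinearity eases the dynamic range problem, the sketch below models a logarithmic pixel, in which each decade of photocurrent costs a fixed voltage step, so many decades of scene intensity fit in a sub-volt output swing. The subthreshold slope factor, thermal voltage, and current scale are assumed placeholder values, not measurements from the prototypes.

```python
import math

VT = 0.026    # thermal voltage at room temperature (V)
NSUB = 1.5    # subthreshold slope factor (assumed)
I0 = 1e-16    # current scale constant (A, illustrative)

def log_pixel_voltage(i_photo):
    """Logarithmic pixel response: output grows by a fixed step
    (NSUB * VT * ln(10), about 90 mV here) per decade of photocurrent."""
    return NSUB * VT * math.log(i_photo / I0)

# Six decades of photocurrent (1 fA to 1 nA) compress into a small swing.
lo = log_pixel_voltage(1e-15)
hi = log_pixel_voltage(1e-9)
swing = hi - lo                    # NSUB * VT * ln(1e6), roughly 0.54 V
```

A linear pixel would need a 10^6 : 1 ratio between its largest and smallest representable signals to cover the same range, which is what makes logarithmic (and other nonlinear) responses attractive despite the image quality challenges they introduce.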
All things considered, electronic imaging systems do not rival the human visual system, despite notable progress over the 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies the limiting factors of the electronic system by benchmarking it against the human one. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
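The figure of merit can be sketched as follows: express each parameter's eye-versus-camera gap in orders of magnitude, then report the gap of the weakest camera parameter. The parameter values below are illustrative placeholders, not the paper's measured data; parameters are assumed to be pre-converted so that higher is better (e.g., dark limit inverted into a sensitivity).

```python
import math

# Illustrative placeholder values (NOT the paper's measurements):
# each entry is (human-eye value, digital-camera value), higher is better.
params = {
    "dynamic_range":      (1e6, 1e4),   # camera trails by 2 decades
    "dark_sensitivity":   (1e4, 10.0),  # camera trails by 3 decades
    "spatial_resolution": (1e7, 1e7),   # parity
}

def gap_orders(eye, cam):
    """Performance gap in orders of magnitude (positive = camera trails the eye)."""
    return math.log10(eye / cam)

gaps = {name: gap_orders(*pair) for name, pair in params.items()}

# Figure of merit: the gap of the weakest camera parameter.
fom = max(gaps.values())
weakest = max(gaps, key=gaps.get)
```

With these placeholder numbers the weakest parameter is dark sensitivity, at a 3-order-of-magnitude gap, consistent in spirit with the paper's finding that dynamic range and dark limit limit real sensors by 1.6 to 4.5 orders of magnitude.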
Image sensors can benefit from 3D IC fabrication methods because photodetectors and electronic circuits may
be fabricated using significantly different processes. When fabricating the die that contains the photodetectors,
it is desirable to avoid pixel-level patterning of the light-sensitive semiconductor. But without a physical
border between adjacent photodetectors, lateral currents may flow between neighboring devices, which is called
"crosstalk". This work introduces circuits that can be used to reduce crosstalk in vertically-integrated (VI)
CMOS image sensors with an unpatterned photodetector array. It treats the case of a VI-CMOS image sensor
composed of a silicon die with CMOS readout circuits and a transparent die with an unpatterned array of
photodetectors. A reduction in crosstalk can be achieved by maintaining a constant electric potential at all
nodes at which the photodetector array connects to the readout circuit array. This can be implemented by
designing a pixel circuit that uses an operational amplifier with logarithmic feedback to control the voltage
at the input node. The work presents several alternative configurations for the pixel circuit, and identifies
the most power-efficient one. It then uses a simplified small-signal model of the pixel circuit
to address stability and compensation issues. Lastly, the method is validated through circuit simulation for a
standard CMOS process.
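A toy numerical model can show why logarithmic feedback pins the input node. Assume an op-amp with finite DC gain A and a subthreshold MOSFET in the feedback path, whose gate-source drop must be NSUB*VT*ln(i/I0) to sink the photocurrent i; all parameter values below are placeholders, not the paper's design values.

```python
import math

VT, NSUB, I0 = 0.026, 1.3, 1e-15   # assumed subthreshold parameters (V, -, A)
V_REF, A = 1.0, 1e4                # reference voltage and op-amp DC gain (assumed)

def pixel_nodes(i_photo):
    """Toy model of an op-amp with a subthreshold MOSFET in logarithmic feedback.

    Two equations describe the DC operating point:
      v_out - v_in = NSUB * VT * ln(i_photo / I0)   (transistor sinks i_photo)
      v_out = A * (V_REF - v_in)                    (op-amp gain equation)
    Solving them gives the input-node and output-node voltages."""
    drop = NSUB * VT * math.log(i_photo / I0)
    v_in = (A * V_REF - drop) / (A + 1)
    v_out = v_in + drop
    return v_in, v_out

# Across four decades of photocurrent the output swings logarithmically
# (by ~0.3 V here), but the input node barely moves: every pixel's
# photodetector contact sits at essentially V_REF, so no lateral
# potential differences drive crosstalk currents in the unpatterned film.
currents = (1e-14, 1e-13, 1e-12, 1e-11, 1e-10)
v_ins = [pixel_nodes(i)[0] for i in currents]
spread = max(v_ins) - min(v_ins)   # tens of microvolts for A = 1e4
```

The spread of the input-node voltage scales as the output swing divided by the op-amp gain, which is why a modest-gain amplifier per pixel suffices to hold all array nodes at effectively the same potential.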
There is an emerging interest in vertically-integrated CMOS (VI-CMOS) image sensors. This trend arises from
the difficulty in achieving high SNR, high dynamic range, and high frame rate with planar technologies while
maintaining small pixel sizes, since the photodetector and electronics have to share the same pixel area and
use the same technology. Fabrication methods for VI-CMOS image sensors add new degrees of freedom to
the photodetector design. A model that closely approximates device behavior under different operating
conditions is important for device optimization. This work presents a new approach to
photodetector modeling, and uses it to optimize the thickness of the photosensitive layer in VI-CMOS image
sensors. We consider a simplified structure of an a-Si:H photodetector, and develop an analytical solution and a
numerical solution to state equations taken from semiconductor physics, which are shown to agree closely. If
the photosensitive layer is too thin, our model shows that the contact resistances dominate the device and, if it
is too thick, most charge carriers recombine on their way to the contacts. Therefore, an optimum thickness can