CMOS image sensors are generally considered particularly well suited to the harsh space environment, provided their performance can be brought up to CCD levels. Recent developments indicate that this objective can be achieved.
This paper presents the current state of the art in CMOS Active Pixel Sensors (APS) for space applications at Fillfactory, and also highlights some commercial and industrial developments that may be of interest to the space community.
CCDs have been used for many years for hyperspectral imaging missions and have been extremely successful. These include the Medium Resolution Imaging Spectrometer (MERIS) on Envisat, the Compact High Resolution Imaging Spectrometer (CHRIS) on Proba and the Ozone Monitoring Instrument operating in the UV spectral region. ESA is also planning a number of further missions that are likely to use CCD technology (Sentinel 3, 4 and 5). However, CMOS sensors have a number of advantages which mean that they will probably be used for hyperspectral applications in the longer term.
CMOS sensors offer two main advantages. First, a hyperspectral image consists of spectral lines with large differences in intensity; in a frame transfer CCD, the faint spectral lines have to be transferred through the part of the imager illuminated by intense lines. This can lead to cross-talk, and whilst the problem can be reduced by split frame transfer and faster line rates, CMOS sensors do not require a frame transfer and hence are inherently free of it. Second, with a CMOS sensor the intense spectral lines can be read multiple times within a frame, giving a significant increase in dynamic range.
We will describe the design and initial testing of a CMOS sensor for use in hyperspectral applications. This device has been designed to give as high a dynamic range as possible with minimum cross-talk. The sensor has been manufactured on high resistivity epitaxial silicon wafers and has been back-thinned, but left relatively thick, in order to obtain the maximum quantum efficiency across the entire spectral range.
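The multiple-read scheme for extending dynamic range can be illustrated with a small sketch (a hedged illustration only; the function name, the normalisation to a full scale of 1.0 and the saturation threshold are our assumptions, not from the paper): pixels that saturate in a long read are replaced by a short, non-destructive read scaled by the exposure ratio.

```python
import numpy as np

def combine_reads(long_read, short_read, exposure_ratio, full_well=0.9):
    """Combine a long and a short non-destructive read of the same frame.

    Pixels that saturate in the long read are replaced by the short read
    scaled up by the exposure ratio. Signals are normalised to a full
    scale of 1.0; `full_well` is the saturation threshold.
    """
    long_read = np.asarray(long_read, dtype=float)
    short_read = np.asarray(short_read, dtype=float)
    saturated = long_read >= full_well
    return np.where(saturated, short_read * exposure_ratio, long_read)
```

Combining reads with an exposure ratio R extends the dynamic range by roughly 20·log10(R) dB, since the faintest detectable signal is unchanged while the brightest representable signal grows by R.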
For scientific and Earth observation space missions, weight and power consumption are usually critical factors. In order to obtain better vehicle integration, efficiency and controllability for large format NIR/SWIR detector arrays, a prototype ASIC has been designed. It performs multiple detector array interfacing, power regulation and data acquisition operations inside the cryogenic chamber. Both operation commands and imaging data are communicated via the SpaceWire interface, which significantly reduces the number of wires going in and out of the cryogenic chamber. This ASIC prototype is realized in 0.18 μm CMOS technology and is designed for radiation hardness.
The success of the next generation of instruments for ELT class telescopes will depend upon improving the image quality by exploiting sophisticated Adaptive Optics (AO) systems. One of the critical components of the AO systems for the European Extremely Large Telescope (E-ELT) has been identified as the Large Visible Laser/Natural Guide Star AO Wavefront Sensing (WFS) detector. The combination of large format, 1600x1600 pixels to finely sample the wavefront and the spot elongation of laser guide stars (LGS), fast frame rate of 700 frames per second (fps), low read noise (⪅ 3e-), and high QE (⪆ 90%) makes the development of this device extremely challenging. Design studies identified a highly integrated Backside Illuminated CMOS Imager built on High Resistivity silicon as the most suitable technology.
Two generations of the CMOS Imager are planned: a) a smaller 'pioneering' device of ⪆ 800x800 pixels capable of meeting the first light needs of the E-ELT; the NGSD, the topic of this paper, is the first iteration of this device; b) the larger full sized device called the LGSD. The NGSD has come out of production; it has been thinned to 12μm, backside processed and packaged in a custom 370-pin Ceramic PGA (Pin Grid Array). Results of comprehensive tests performed both at e2v and ESO are presented that validate the choice of CMOS Imager as the correct technology for the E-ELT Large Visible WFS Detector, along with plans for a second iteration to address two issues: hot pixels and cross-talk.
An ASIC has been developed for the control and data quantization of large format NIR/SWIR detector arrays. Both cryogenic and space radiation environment issues were considered during the design. It can therefore be integrated in the cryogenic chamber, which significantly reduces the number of long wires going in and out of the chamber, benefiting EMI and noise performance as well as the power consumption of the cooling system and interfacing circuits. In this paper we describe the development of this prototype ASIC for image sensor driving and signal processing, as well as its testing at both room and cryogenic temperatures.
The success of the next generation of instruments for ELT class telescopes will depend upon improving the image quality by exploiting sophisticated Adaptive Optics (AO) systems. One of the critical components of the AO systems for the E-ELT has been identified as the optical Laser/Natural Guide Star WFS detector. The combination of large format, 1760×1680 pixels to finely sample the wavefront and the spot elongation of laser guide stars, fast frame rate of 700 frames per second (fps), low read noise (< 3e-), and high QE (> 90%) makes the development of this device extremely challenging. Design studies identified a highly integrated Backside Illuminated CMOS Imager built on High Resistivity silicon as the technology most likely to succeed. Two generations of the CMOS Imager are being developed: a) the already designed and manufactured NGSD (Natural Guide Star Detector), a quarter-sized pioneering device of 880×840 pixels capable of meeting the first light needs of the E-ELT; b) the LGSD (Laser Guide Star Detector), the larger full size device. The detailed design is presented, including the approach of using massive parallelism (70,400 ADCs) to achieve the low read noise at high pixel rates of ~3 Gpixel/s, and the 88 channel LVDS 220 Mbps serial interface to get the data off-chip. To enable read noise closer to the goal of 1e- to be achieved, a split wafer run has allowed the NGSD to be manufactured with the more speculative, but much lower read noise, Ultra Low Threshold transistors in the unit cell. The NGSD has come out of production; it has been thinned to 12μm, backside processed and packaged in a custom 370-pin Ceramic PGA (Pin Grid Array). First results of tests performed both at e2v and ESO are presented.
The success of the next generation of instruments for 8 to 40-m class telescopes will depend upon improving the image
quality (correcting the distortion caused by atmospheric turbulence) by exploiting sophisticated Adaptive Optics (AO)
systems. One of the critical components of the AO systems for the E-ELT has been identified as the Laser/Natural Guide
Star (LGS/NGS) WaveFront Sensing (WFS) detector. The combination of large format, 1760x1680 pixels to finely
sample (84x84 sub-apertures) the wavefront and the spot elongation of laser guide stars, fast frame rate of 700 (up to
1000) frames per second, low read noise (< 3e-), and high QE (> 90%) makes the development of such a device
extremely challenging. Design studies by industry identified a thinned and backside-illuminated CMOS Imager as
the most promising technology. This paper describes the multi-phased development plan that will ensure devices are
available on-time for E-ELT first-light AO systems; the different CMOS pixel architectures studied; measured results of
technology demonstrators that have validated the CMOS Imager approach; the design explaining the approach of
massive parallelism (70,000 ADCs) needed to achieve low noise at high pixel rates of ~3 Gpixel/s; the 88 channel
LVDS data interface; the restriction that stitching (required due to the 5x6cm size) posed on the design and the solutions
found to overcome these limitations. Two generations of the CMOS Imager will be built: a pioneering quarter sized
device of 880x840 pixels capable of meeting first light needs of the E-ELT called NGSD (Natural Guide Star Detector);
followed by the full size device, the LGSD (Laser Guide Star Detector). Funding sources: OPTICON FP6 and FP7 from
European Commission and ESO.
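The quoted pixel rate follows directly from the format and frame rate, and shows why such massive ADC parallelism keeps each converter slow enough for low noise. A back-of-the-envelope sketch (the exact ADC count is taken as 70,400 = 1760 × 40, an assumption consistent with the quoted ~70,000 total; the partitioning is not stated in the abstract):

```python
# Sanity check of the LGSD pixel-rate and parallelism figures.
ROWS, COLS = 1680, 1760        # full-size LGSD format
FPS_MAX = 1000                 # "700 (up to 1000) frames per second"
N_ADCS = 70_400                # assumed exact count (~70,000 quoted)

pixels_per_frame = ROWS * COLS                 # 2,956,800 pixels
pixel_rate = pixels_per_frame * FPS_MAX        # ~2.96e9, i.e. ~3 Gpixel/s
pixels_per_adc = pixels_per_frame // N_ADCS    # 42 pixels per ADC per frame
adc_rate = pixels_per_adc * FPS_MAX            # 42,000 conversions/s per ADC
```

Each converter then runs at only tens of kiloconversions per second, which is what makes sub-3e- read noise plausible at a ~3 Gpixel/s aggregate rate.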
Avalanche photodiodes are very well suited to, and extensively used for, low light applications. In this paper we present a device using avalanche photodiodes in conjunction with a pulsed laser source to be used as an optical altimeter. The extreme sensitivity of a dedicated silicon SPAD array is combined with a versatile standard CMOS readout circuit to achieve unique performance. This imaging device is able to perform ranging with four centimeters accuracy over a five kilometer distance. It is also capable of delivering quantum limited images. The development of the readout circuit is described, as well as measurement results obtained on the final device.
We present the performance characteristics of a CMOS image sensor manufactured on wafers with a specially designed multiple epitaxial layer. At the homo-junction between two consecutive epitaxial layers, a small potential drop or electric field forms a barrier for electrons diffusing towards the back of the wafer. The multiple epitaxial layer stack thus results in a net drive, or confinement, of photo-charges towards the surface. As a result there is anisotropic diffusion of charges that are generated deep in the silicon, e.g. by near infrared (NIR) or X-ray radiation. The spectral response is an order of magnitude higher than for the same image sensor on "regular" wafers. The anisotropic diffusion results in limited MTF degradation compared to wafers with a single thick epitaxial layer.
We will present a 3044 x 4556 pixels CMOS image sensor with a pixel array of 36 x 24 mm2, equal to the size of 35
mm film. Though primarily developed for digital photography, the compatibility of the device with standard optics for
film cameras makes the device also attractive for machine vision applications as well as many scientific and
high-resolution applications. The sensor makes use of a standard rolling shutter 3-transistor active pixel in standard
0.35 μm CMOS technology. On-chip double sampling is used to reduce fixed pattern noise. The pixel is 8 μm in size, has 60,000
electrons full well charge and a conversion gain of 18.5 μV/electron. The product of quantum efficiency and fill factor
of the monochrome device is 40%. Temporal noise is 35 electrons, offering a dynamic range of 65.4 dB. Dark current is
4.2 mV/s at 30 degrees C. Fixed pattern noise is less than 1.5 mV RMS over the entire focal plane and less than 1 mV
RMS in local windows of 32 x 32 pixels. The sensor is read out over 4 parallel outputs at 15 MHz each, offering 3.2
images/second. The device runs at 3.3 V and consumes 200 mW.
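The headline figures above can be cross-checked against each other with simple arithmetic (variable names are ours; note that 20·log10(60000/35) ≈ 64.7 dB, slightly below the quoted 65.4 dB, so the quoted full-well and noise values appear to be rounded):

```python
import math

FULL_WELL_E = 60_000     # electrons, full well charge
READ_NOISE_E = 35        # electrons, temporal noise
CONV_GAIN = 18.5e-6      # V/electron, conversion gain
DARK_RATE_V = 4.2e-3     # V/s, dark current at 30 degrees C

dynamic_range_db = 20 * math.log10(FULL_WELL_E / READ_NOISE_E)  # ~64.7 dB
dark_rate_e = DARK_RATE_V / CONV_GAIN    # ~227 electrons/s of dark current
sat_voltage = FULL_WELL_E * CONV_GAIN    # ~1.11 V saturation signal
```

The same conversion gain links the voltage-domain and electron-domain figures: the 4.2 mV/s dark current corresponds to roughly 227 electrons per second per pixel.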
We present a 1.3 megapixel CMOS active pixel sensor dedicated to industrial vision. It features both rolling and synchronous shutter modes. Full frame readout time is 33 ms, and readout speed can be boosted by windowed region of interest (ROI) readout. High dynamic range scenes can be captured using the double and multiple slope functionality. All operation modes and settings can be programmed over a serial or a parallel interface.
In this paper we discuss the dark current increase in CMOS Active Pixel Sensors (APS) due to total dose and proton induced damage. We describe measurement results on several diodes that were used to investigate the degradation of the pixel photodiode under ionizing radiation. This study resulted in the design of radiation tolerant pixels that have proven to tolerate at least 200 kGy(Si) total dose from a 60Co source. Standard APS sensors already show large degradation after less than 100 Gy(Si) due to a strong surface leakage current increase. Standard CMOS imagers were also evaluated with respect to proton induced damage. Highly energetic protons can displace atoms from their lattice positions, giving rise to an increase in the mean level of dark current and in non-uniformity.
60Co irradiations have been carried out on test structures for the development of CMOS Active Pixel Sensors that can be used in a radiation environment. The basic mechanisms that may cause failure are presented. Ionization induced damage effects such as field leakage currents and dark current increase are discussed in detail. Two different approaches to overcome these problems are considered, and their advantages and disadvantages are compared. Total dose results are presented on a pixel that can tolerate more than 200 kGy(Si) (20 Mrad(Si)) from a 60Co source.
We describe a compact algorithm that can detect and correct, on the fly, isolated missing pixels in the output stream of an image sensor, without significantly degrading the image quality. The algorithm is in essence a small-kernel non-linear filter. It is based on predicting the allowed range of gray values for a pixel from the gray values of that pixel's neighborhood. A few examples illustrate the effect of the algorithm on realistic images.
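A minimal software sketch of such a range-prediction filter (the function name, the margin parameter and the 8-neighbour kernel are illustrative assumptions; the paper's actual kernel and hardware implementation may differ):

```python
import numpy as np

def correct_isolated_defects(img, margin=0.0):
    """Clamp each interior pixel into the gray-value range spanned by its
    eight neighbours (widened by `margin`): healthy pixels, which normally
    lie within that range, pass through unchanged, while an isolated
    defective pixel falls outside it and is pulled back in."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = np.delete(img[y-1:y+2, x-1:x+2].ravel(), 4)  # drop centre
            lo, hi = neigh.min() - margin, neigh.max() + margin
            out[y, x] = min(max(img[y, x], lo), hi)
    return out
```

Because the filter only clamps outliers rather than smoothing everything, ordinary image detail survives while isolated stuck or dead pixels are replaced by plausible values.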
The paper describes the results of the first phase of the ESPRIT LTR project SVAVISCA. The aim of the project was to add color capabilities to a previously developed monochromatic version of a retina-like CMOS sensor. In such a sensor, the photosites are arranged in concentric rings, with a size varying linearly with the distance from the geometric center. Two different technologies were investigated: 1) the use of Ferroelectric Liquid Crystal filters in front of the lens; 2) the deposition of color microfilters on the surface of the chip itself. The main conclusion is that the solution based on microdeposited filters is preferable in terms of both color quality and frame rate. The paper describes in more detail the design procedures and the test results obtained.
A color CMOS image sensor has been developed which meets the performance of mainstream CCDs. The pixel combines a high fill factor with a low diode capacitance. This yields a high light sensitivity, expressed by the conversion gain of 9 μV/electron and the quantum efficiency fill factor product of 28 percent. The temporal noise is 63 electrons, and the dynamic range is 67 dB. An offset compensation circuit in the column amplifiers limits the peak-to-peak fixed pattern noise to 0.15 percent of the saturation voltage.
Random access active pixel CMOS image sensors generally suffer from non-uniformity in their pixel outputs. This document describes a simple mixed analogue-digital integrated circuit for fixed-pattern-noise compensation. The method has been applied to the range of sensors developed by IMEC, and improves their operation beyond mere static noise suppression.
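The compensation described is performed by a mixed analogue-digital circuit on chip; the underlying offset-subtraction idea can be sketched digitally (function names and the dark-frame calibration procedure are our illustrative assumptions):

```python
import numpy as np

def calibrate_fpn(dark_frames):
    """Average several dark frames into a per-pixel offset map that
    captures the static (fixed-pattern) part of the pixel outputs."""
    return np.mean(np.asarray(dark_frames, dtype=float), axis=0)

def compensate(frame, offset_map):
    """Subtract the stored per-pixel offsets from a raw frame."""
    return np.asarray(frame, dtype=float) - offset_map
```

Averaging several dark frames suppresses temporal noise in the offset map, so the subtraction removes fixed-pattern non-uniformity without adding significant noise of its own.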
A new image sensor, using CMOS technology, has been designed and fabricated. The pixel distribution of this sensor follows a log-polar mapping: the pixel concentration is maximum at the center, with the number of pixels reducing towards the periphery, giving a resolution of 56 rings with 128 pixels per ring. The design of this kind of sensor raises special issues related to the space-variant nature of the pixel distribution. The main one is the varying pixel size, which requires scaling mechanisms to achieve the same output independently of the pixel size. This paper presents some study results on the scaling mechanisms of this kind of sensor. A mechanism for current scaling is presented. This mechanism has been studied along with the logarithmic response of this special kind of sensing cell. The chip has been fabricated using a standard 0.7 micrometer CMOS technology.
The paper presents a low cost, miniature sensor that is able to compute in real time (up to 1000 frames/sec) motion parameters such as the degree of translation, expansion or rotation present in the observed scene, as well as the so-called time-to-crash (TTC), that is, the time required for a moving object to collide with the sensor. The sensing principle is that of computing and analyzing the optical flow projected by the scene on the sensor focal plane, through a novel algorithmic technique based on sparse sampling of the image and one-dimensional correlation. The hardware implementation of the algorithm is based on two custom VLSI chips: one is a CMOS image sensor with a nonstandard pixel geometry, while the other is a digital correlator that computes the optical flow vectors at high speed. The high-level control and communication tasks are managed by a microcontroller, guaranteeing a high level of flexibility and adaptability of the sensor properties towards different application requirements and/or variable external conditions.
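The two ingredients named above, one-dimensional correlation on sparse samples and TTC estimation, might be sketched in software as follows (the chip implements these in custom hardware; function names, the integer-shift search and the expansion-based TTC model are our assumptions):

```python
import numpy as np

def estimate_shift(line_a, line_b, max_shift):
    """Estimate the integer displacement between two 1-D scanlines by
    maximising their correlation over candidate shifts."""
    line_a = np.asarray(line_a, dtype=float)
    line_b = np.asarray(line_b, dtype=float)
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            score = np.dot(line_a[s:], line_b[:len(line_b) - s])
        else:
            score = np.dot(line_a[:s], line_b[-s:])
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

def time_to_crash(expansion_per_frame, frame_period):
    """TTC from the measured radial expansion rate: for an object
    approaching at constant speed, image scale grows by a factor
    (1 + e) per frame, and TTC ~ frame_period / e."""
    return frame_period / expansion_per_frame
```

Note that the TTC estimate needs only the relative expansion rate of image features, not the object's absolute distance or speed, which is what makes a purely image-based collision warning possible.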
In this article we discuss the trade-offs for the design, fabrication and interfacing of fast pixel addressable (random-access) cameras. In order to benefit most from the random addressability, the interface must be optimized for access through a data bus/address bus structure. Measures to correct the camera's inherent non-uniformity must not slow down the interface speed.
We report on the design, design issues, fabrication and performance of a log-polar CMOS image sensor. The sensor was developed for use in a videophone system for deaf and hearing impaired people, who cannot communicate through a 'normal' telephone. The system allows 15 detailed images per second to be transmitted over existing telephone lines. This frame rate is sufficient for conversations by means of sign language or lip reading. The pixel array of the sensor consists of 76 concentric circles with (up to) 128 pixels per circle, 8013 pixels in total. The interior pixels have a pitch of 14 micrometers, growing to 250 micrometers at the border. The 8013-pixel image is mapped (log-polar transformation) onto an X-Y addressable 76 by 128 array.
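The geometry can be sketched numerically (a simplified model: we assume all rings carry the full 128 pixels and that the pitch grows geometrically from 14 μm to 250 μm, whereas the real device drops pixels on the innermost rings to reach its 8013 total):

```python
import math

def log_polar_grid(n_rings=76, pixels_per_ring=128,
                   inner_pitch=14e-6, outer_pitch=250e-6):
    """Pixel centres (x, y) in metres on concentric rings whose pitch
    grows geometrically from inner_pitch to outer_pitch; each ring's
    radius is set so `pixels_per_ring` pixels of that pitch fit around
    its circumference."""
    growth = (outer_pitch / inner_pitch) ** (1.0 / (n_rings - 1))
    centres = []
    for ring in range(n_rings):
        pitch = inner_pitch * growth ** ring
        radius = pitch * pixels_per_ring / (2 * math.pi)
        for i in range(pixels_per_ring):
            ang = 2 * math.pi * i / pixels_per_ring
            centres.append((radius * math.cos(ang), radius * math.sin(ang)))
    return centres
```

Because the pitch grows geometrically, ring radii also grow geometrically, which is exactly the log-polar property: a uniform grid in (log r, angle) space.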
Two CMOS image sensor concepts, developed for motion extraction, are proposed. The algorithm implemented in each pixel is either the calculation of the temporal variation of the difference of the logarithm of intensity in two adjacent pixels, or a more general implementation of spatial and temporal filtering over the local neighborhood. Temporal differencing yields a peak in the response of pixels with changing intensity. Spatial differencing provides high-pass filtering and invariance to time-varying external lighting. We also compare two ways to use this sensor module to compute the velocity of edges moving along the sensor. In one implementation, the sensor is used as the input for a correlation algorithm that calculates the optical flow vector. The other possibility is to detect motion locally in each pixel and to measure the time of switching between adjacent pixels that detect the motion.
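The first pixel algorithm, the temporal variation of the log-intensity difference of adjacent pixels, can be sketched as follows (an illustrative software model; the sensor computes this in per-pixel analogue circuitry). Taking logarithms first makes the spatial difference invariant to a common illumination scale, since log(kI1) − log(kI2) = log I1 − log I2.

```python
import numpy as np

def motion_response(prev_frame, cur_frame, eps=1e-9):
    """Temporal change of the spatial difference of log-intensity
    between horizontally adjacent pixels; `eps` avoids log(0)."""
    def spatial_log_diff(frame):
        logf = np.log(np.asarray(frame, dtype=float) + eps)
        return logf[:, 1:] - logf[:, :-1]
    return spatial_log_diff(cur_frame) - spatial_log_diff(prev_frame)
```

A static scene under a globally changing light level gives a near-zero response, while a moving edge produces a strong peak.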
For time-critical industrial machine vision applications, a dedicated imager has been developed. The camera can be programmed to operate at several resolutions by binning the signal charges of neighboring pixels on the sensor plane itself. Additionally, the readout window was made programmable and an electronic shutter function was implemented. This square 256 × 256 imager was fabricated in a standard 1.5 micrometer CMOS technology. The readout occurs in two phases. After transferring a row of charges in parallel to 256 charge sensitive amplifiers, these signals are coupled to a single output amplifier. By controlling the sequence of addresses and reset pulses of the amplifiers, charges of different pixels are accumulated. In this way, multiple resolutions can be programmed. The imager is operated at data rates up to 10 MHz, providing about 125 full images per second. At lower resolutions, even higher frame rates are obtained. The signal to noise ratio is about 35 dB. This paper reports on the fixed pattern noise, response, speed and smear behavior of this imager.
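The resolution-for-speed trade made by accumulating neighboring charges can be mimicked digitally (on the sensor the summation happens in the charge domain before amplification; this sketch, with names of our choosing, only illustrates the block-summation idea):

```python
import numpy as np

def bin_pixels(frame, factor):
    """Sum non-overlapping `factor` x `factor` blocks of pixels,
    reducing resolution by `factor` in each direction."""
    frame = np.asarray(frame, dtype=float)
    h, w = frame.shape
    assert h % factor == 0 and w % factor == 0, "frame must tile evenly"
    return frame.reshape(h // factor, factor,
                         w // factor, factor).sum(axis=(1, 3))
```

Binning by a factor of 2 quarters the number of samples to read out, which is why lower-resolution modes reach correspondingly higher frame rates.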
The course covers many basic aspects of CMOS image sensors, starting from device physics and the basic laws of light sensitivity and noise, through other image sensor technologies, to CMOS pixel and array architectures. Emphasis is on circuit principles, noise sources, how noise enters the sensor's signal, and the technological and optical limits on optical performance.
This course covers many aspects of CMOS image sensors. Starting with device physics and basic laws of light sensitivity and noise, moving to other image sensor technologies, the course describes CMOS pixel and array architectures. Emphasis is on circuit principles, noise sources, how noise enters the sensor's signal, and technological and optical limits on the optical performance.