A micro-digital sun sensor (μDSS) is a sun detector that senses a satellite’s instantaneous attitude angle with respect to the sun. The core of this sensor is a system-on-chip imaging chip referred to as APS+. The APS+ integrates a CMOS active pixel sensor (APS) array of 368 × 368 pixels, a 12-bit analog-to-digital converter, and digital signal processing circuits. The μDSS is designed particularly for microsatellite applications; low power consumption is therefore the major design consideration. The APS+ reduces power consumption mainly through profiling and windowing methods, which are facilitated by the specific active-pixel design. A prototype of the APS+, designed in a standard 0.18-μm CMOS process, is presented. The APS+ consumes 21 mW at 10 fps, which is 10 times less than the state of the art. To improve noise performance, a reset-noise reduction method, quadruple sampling (QS), is implemented. QS reduces the effect of the reset noise compared to the conventional delta double sampling method, even in a 3-transistor active-pixel structure. With the QS method, the APS+ obtains an accuracy of 0.01 deg.
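As background to the DDS comparison in the abstract above, a toy Monte Carlo can show why delta double sampling on a 3-transistor pixel does not cancel kTC reset noise: the reset level available for subtraction belongs to the *next* frame and is uncorrelated with the reset that preceded integration. The numbers below are illustrative assumptions, and the QS scheme itself is not reproduced here.

```python
import numpy as np

# Toy model: a correlated reference cancels reset noise, an uncorrelated
# one (as in DDS on a 3T pixel) adds it in quadrature instead.
rng = np.random.default_rng(0)
n = 100_000
sigma_reset = 5.0        # kTC reset noise, arbitrary units (assumed)
signal = 100.0           # photo-generated signal, arbitrary units (assumed)

reset_k = rng.normal(0, sigma_reset, n)    # reset level before integration, frame k
reset_k1 = rng.normal(0, sigma_reset, n)   # reset level of the next frame

video = reset_k + signal                   # sampled signal level, frame k

cds = video - reset_k                      # correlated reference: noise cancels
dds = video - reset_k1                     # uncorrelated reference: noise doubles in variance

print(f"correlated residual noise: {cds.std():.2f}")   # ≈ 0
print(f"DDS residual noise:        {dds.std():.2f}")   # ≈ sigma_reset * sqrt(2) ≈ 7.07
```

The factor-of-√2 penalty of the uncorrelated reference is the effect that the QS method is reported to reduce.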
We analyze the "ageing" effect on image sensors introduced by neutrons present in the natural (terrestrial) cosmic environment. The results obtained at sea level are corroborated for the first time with accelerated neutron-beam tests and for various image-sensor operating conditions. The results reveal many fascinating effects that these rays introduce on
It is generally known that active pixel sensors (APS) have a number of advantages over CCD detectors when it comes to cost for mass production, power consumption, and ease of integration. Nevertheless, most space applications still use CCD detectors because they tend to give better performance and have a successful heritage. In this respect a change may be at hand with the advent of deep sub-micron processed APS imagers (< 0.25-micron feature size). Measurements performed on test structures at the University of Delft have shown that the imagers are very radiation tolerant even if made in a standard process without the use of special design rules. Furthermore, it was shown that the 1/f noise associated with deep sub-micron imagers is reduced compared to previous-generation APS imagers, due to the improved quality of the gate oxides. Considering that end-of-life performance will have to be guaranteed, that only a limited budget for adding shielding metal will be available for most applications, and that low-power operation is always seen as a positive characteristic in space applications, deep sub-micron APS imagers seem to have a number of advantages over CCDs that will probably cause them to replace CCDs in those applications where radiation tolerance and low-power operation are important.
CMOS APS technology allows signal processing to be included in the sensor array. Inclusion of functionality, however, comes at a cost, both financially and in terms of limited applicability. Based on two real-world examples (a micro digital sun-sensor core and a lightning-flash detector for Meteosat Third Generation (MTG)), it will be demonstrated that large system gains can be obtained by devising smart focal planes. It is therefore felt that the advantages outweigh the disadvantages for some applications, making it worthwhile to spend the effort on system integration.
An image sensor for an ultra-high-speed video camera was developed. The maximum frame rate, pixel count, and number of consecutive frames are 1,000,000 fps, 720 × 410 (= 295,200) pixels, and 144 frames, respectively. A micro-lens array will be attached to the chip, which increases the fill factor to about 50%. In addition to the ultra-high-speed image-capture operation, which stores image signals in the in-situ storage area adjacent to each pixel, a standard parallel readout operation at 1,000 fps for full-frame readout is also introduced with sixteen readout taps; in this mode the image signals are transferred to and stored in a large-capacity storage device outside the sensor. The aspect ratio of the frame is about 16:9, which is equal to that of the HDTV format. Therefore, a video camera with four ISIS-V4 sensors, arranged to form the Bayer color-filter array, realizes an ultra-high-speed video camera of a semi-HDTV format.
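From the figures quoted above, the per-tap bandwidth of the continuous 1,000 fps readout mode can be checked with simple arithmetic, assuming (my assumption, not stated in the abstract) that the sixteen taps share the pixel stream evenly:

```python
# Back-of-the-envelope readout rate for the ISIS-V4 continuous mode.
pixels = 720 * 410        # 295,200 pixels per frame
fps_continuous = 1_000    # parallel full-frame readout mode
taps = 16                 # readout taps, assumed to share the load evenly

total_rate = pixels * fps_continuous   # pixels per second for the whole sensor
per_tap = total_rate / taps            # pixels per second per tap

print(f"total: {total_rate / 1e6:.1f} Mpixel/s, per tap: {per_tap / 1e6:.2f} Mpixel/s")
```

Under that even-split assumption, each tap handles roughly 18.45 Mpixel/s, a rate well within reach of conventional off-chip storage interfaces.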
Although the number of pixels in image sensors is increasing exponentially, production techniques have only been able to linearly reduce the probability that a pixel will be defective. The result is a rapidly increasing probability that a sensor will contain one or more defective pixels. Sensors with defects are often discarded after fabrication because they may not produce aesthetically pleasing images. To reduce the cost of image sensor production, defect correction algorithms are needed that allow the utilization of sensors with bad pixels. We present a relatively simple defect correction algorithm, requiring only a small 7 × 7 kernel of raw color-filter-array data, that effectively corrects a wide variety of defect types. Our adaptive edge algorithm delivers high quality, uses few image lines, is adaptable to a variety of defect types, and is independent of other on-board DSP algorithms. Results show that the algorithm produces substantially better results in high-frequency image regions compared to conventional one-dimensional correction methods.
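To make the 7 × 7 raw-CFA setting concrete, here is a minimal baseline sketch that replaces a known bad pixel with the median of its same-color neighbors inside a 7 × 7 window on a Bayer mosaic. This is not the paper's adaptive edge algorithm (the edge-adaptive weighting is omitted), and the function name and interface are my own illustrative assumptions.

```python
import numpy as np

def correct_defect(cfa, r, c):
    """Replace the known-defective pixel (r, c) in a raw Bayer CFA with the
    median of its same-color neighbors inside a 7x7 window.

    Simplified stand-in: the paper's adaptive edge method additionally adapts
    to the local edge direction, which is omitted here.
    """
    h, w = cfa.shape
    vals = []
    for dr in range(-3, 4):          # 7x7 window centered on (r, c)
        for dc in range(-3, 4):
            rr, cc = r + dr, c + dc
            if (dr, dc) == (0, 0) or not (0 <= rr < h and 0 <= cc < w):
                continue
            # Same CFA color site: row and column parity both match.
            if rr % 2 == r % 2 and cc % 2 == c % 2:
                vals.append(cfa[rr, cc])
    return int(np.median(vals))
```

A median over same-color neighbors already handles isolated hot or dead pixels; the advantage reported for the adaptive edge approach lies in high-frequency regions, where an undirected median blurs across edges.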
A new high-speed CCD sensor, capable of capturing 103 consecutive images at a speed of 1 million frames per second, was developed by the authors. To reach this high frame rate, 103 CCD storage cells are placed next to each image pixel. Sensors utilizing this on-chip memory concept can be called In-situ Storage Image Sensors, or ISIS. The ISIS is built in standard CCD technology. To check whether this technology could be used for an ISIS, a test sensor called the ISIS V1 was designed first. The ISIS V1 is a simple modification of an existing standard CCD sensor and is capable of taking 17 consecutive images. The new sensor, called the ISIS V2, is a dedicated design in the existing technology. It is equipped with storage CCD cells that are also used in the standard CCD sensor, large light-sensitive pixels, an overwriting mechanism to drain old image information, and a CCD switch that allows part of the storage cells to also be used as vertical readout registers. The new parts of the architecture nevertheless had to be simulated with a 3-D device simulator. Simulation results and characteristic parameters of the ISIS CCD, as well as applications of the camera, are given.
A building-block ("bouwblok") concept is described which allows one to fabricate several large-area CCD image sensors from a single mask set. The size of the various imagers can differ both horizontally and vertically. The new method drastically reduces the development time and the associated cost of a new sensor. Because all imagers use the same basic pixel structure, the characteristics of new configurations can be fairly well predicted.
A technology is described which allows the application of real-time imaging in combination with mega-pixel CCDs. This technology is based on the following characteristics: high-speed transport of the video information through the parallel CCDs in the imaging section, very high-speed transport of the charge packets through the serial section of the devices, and high-speed conversion of the electrons to a measurable voltage by the output amplifier. Key competencies to comply with these requirements are low-resistance CCD gates, low-capacitance CCD gates, and output stages with high bandwidth and a low noise floor.
This course intends to support engineers who need to develop and design color imaging applications. The participants will get an idea of how differently image sensors behave in comparison to the human visual system and how the processing pipeline has to deal with the issues involved. Examples are auto white balance, color demosaicing, color matrixing, and vignetting. Not only these items will be discussed; a major part of the course will be devoted to the correction of artifacts introduced by the image sensor, such as dark current, defects, fixed-pattern noise, and temporal noise. "There's more to the picture than meets the eye" (Neil Young, 1977) could be the title of the course as well: many processing steps take place on the raw data delivered by the image sensor before the RGB data is shown to the end user, and the course deals with all of these processing steps.
SC760: CCD Technology/Digital Photographic Systems Technology
In the past, the quality of a picture taken, for instance, by a DSC was determined to a large extent by the quality of the lens and of the image sensor, but digital-signal-processing power has made a lot of progress over the last years. The understanding of the physics behind the various defects and limitations of the components that make up a DSC has also grown rapidly. The combination of these two factors allows many defects in a DSC to be corrected. Along the road from photons IN to digital numbers OUT, the signal can pass through several calculation and correction cycles to improve the quality of the end result. This course will study the main artifacts introduced by the lens and the image sensor, and show how they can be corrected or compensated. Examples are lens vignetting, white balance, color sampling, non-ideal color filters, temporal noise, fixed-pattern noise, dead pixels, dark current, etc. Applications can range from consumer cameras (mobile imaging, DSC, camcorders, etc.) to professional or scientific applications (medical, broadcast, astronomy, metrology, etc.). All artifacts and correction algorithms will be demonstrated by means of images. A basic understanding of the working principles of image sensors will be reviewed very briefly in the class.