MOSAD provides a low-power on-focal-plane analog-to-digital (A/D) process. In this approach, an oversampled A/D converter is placed at each pixel site, with resultant benefits to response linearity and noise performance. An architecture for a visible-light imaging sensor using silicon charge-well detection was developed for video conferencing applications. There are a total of 76,800 A/D converters on the chip. The device is a monolithic integrated circuit that includes the sensors, A/D converters, and readout circuitry. A production 1.2 micron CCD/CMOS process was used in its construction. The array was designed with a 320 X 240 format with the pixels placed on 16 micron centers. The A/D had negligible impact on the pixel area, such that a fill factor of 67 percent was achieved with front-side illumination. On-chip power consumption is under 15 milliwatts. Pixels are read in the same manner as accessing the bit locations of a DRAM. As each row of pixels is accessed, the pixels put ones or zeros on the output columns, which are sensed and passed onto the output bus. The A/D design is based on the patented MOSAD technology. It uses charge-well switching at the pixel to convert the accumulated analog signal to digital data. Because of its high noise immunity, no pixel buffer amplifier is required, thus preserving fill factor. Another unique characteristic is the output data format, which is directly compatible with Stream Vision, a patented digital display method. This format was adopted to produce a low-cost, all-digital system from camera to display.
New technologies to increase the photo-sensitivity and reduce the shutter voltage of the vertical overflow drain (VOD) have been developed for CCD image sensors. A 40 percent photo-sensitivity increase was obtained by forming an anti-reflection film over the photodiode, in addition to reducing the thickness of the P+ layer formed at the photodiode surface. A VOD shutter voltage reduction from 31 V to 18 V was successfully obtained by using an epitaxially grown substrate with double impurity-concentration layers. We found that a stacked film structure of Si3N4 on SiO2 was suitable as the anti-reflection film to obtain the maximum increase in sensitivity. Suitable film thicknesses were estimated by using an optical multiple-reflection analysis simulator, resulting in a 10 nm SiO2 film and a 50 nm Si3N4 film. As a result, a 30 percent higher photo-sensitivity than that of the conventional structure was obtained. Additionally, by reducing the depth of the P+ junction formed at the photodiode surface, a 10 percent photo-sensitivity increase was obtained for a 15 percent depth reduction. The VOD shutter voltage reduction was achieved by preventing the photodiode depletion layer from spreading deeply into the substrate. An epitaxially grown substrate with double impurity-concentration layers, in which the impurity concentration of the bottom layer is 10 times higher than that of the top layer, was adopted.
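The kind of multiple-reflection estimate described above can be sanity-checked with a normal-incidence transfer-matrix reflectance calculation. The sketch below assumes nominal refractive indices (roughly 2.0 for Si3N4, 1.46 for SiO2, and 4.0 for Si near 550 nm); the authors used a full optical simulator, so this is only a minimal illustration, not their tool.

```python
import cmath

def ar_reflectance(lam, layers, n_sub, n0=1.0):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic (transfer) matrix method.  `layers` is a list of
    (refractive index, thickness) pairs ordered from the incidence
    side down toward the substrate of index n_sub."""
    B, C = 1.0, complex(n_sub)
    for n, d in reversed(layers):       # multiply matrices bottom-up
        delta = 2 * cmath.pi * n * d / lam
        cd, sd = cmath.cos(delta), cmath.sin(delta)
        B, C = cd * B + 1j * sd / n * C, 1j * n * sd * B + cd * C
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# Si3N4 (50 nm) on SiO2 (10 nm) on Si, green light
stack = [(2.0, 50e-9), (1.46, 10e-9)]
R_coated = ar_reflectance(550e-9, stack, 4.0)
R_bare = ar_reflectance(550e-9, [], 4.0)   # uncoated silicon for comparison
```

With these nominal indices the bare-Si reflectance is about 36 percent, while the coated stack drops it to a few percent, consistent with the sensitivity gain the abstract reports.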
A high-frame-rate, optically shuttered CCD camera for radiometric imaging of transient optical phenomena has been designed, and several prototypes have been fabricated and are now in the evaluation phase. The camera design incorporates stripline-geometry image intensifiers for ultra-fast image shutters capable of 200 ps exposures. The intensifiers are fiber-optically coupled to a multiport CCD capable of 75 MHz pixel clocking to achieve a 4 kHz frame rate for 512 X 512 pixels from simultaneous readout of 16 individual segments of the CCD array. The intensifier, a Philips XX1412MH/E03, is generically a Generation II proximity-focused microchannel plate intensifier (MCPII) redesigned for high-speed gating by Los Alamos National Laboratory and manufactured by Philips Components. The CCD is a Reticon HSO512 split-storage device with a bi-directional vertical readout architecture. The camera mainframe is designed around a multilayer motherboard that transports CCD video signals and clocks via embedded stripline buses designed for 100 MHz operation. The MCPII gate duration and gain variables are controlled and measured in real time and updated for data logging each frame, with 10-bit resolution, selectable either locally or by computer. The camera provides both analog and 10-bit digital video. The camera's architecture, salient design characteristics, and current test data depicting resolution, dynamic range, shutter sequences, and image reconstruction will be presented and discussed.
A 2 inch matrix of 12,288 light sensors was realized on a plastic substrate by low-temperature thin-film technology. In comparison to a glass realization, the sensor matrix has reduced weight and is unbreakable and flexible. Each sensor cell contains a photoconductor as a resistive sensor and a TFT for reading out the resistor values line by line. By using amorphous silicon for both the channel of the TFT and the photoconductor, it is possible to fabricate the sensor matrix with a low number of masks. Amorphous silicon nitride is used as the gate insulator. The above-mentioned films were deposited in a PECVD reactor in the same vacuum at 180 degrees C, which is suitable for PES foils. The mismatch of the thermal expansion coefficients of the thin films and the plastic substrate causes peeling of the films and cracks. The remedy is the use of sputtered adhesion layers for the metallization and PECVD layers. To qualify the different adhesion layers, a tape test method was used. Cracks were avoided by reducing the internal stress. For this reason a new layout was designed in which all PECVD layers are patterned in islands, providing a minimum of internal stress. The TFTs show an ON/OFF ratio of 6 decades and a mobility of 0.3 cm2/Vs. The change of the a-Si:H conductivity under illumination is 6 nS/W/cm2. This value allows for a dynamic range of the sensor matrix of 60 dB. The new flexible sensor offers a wide variety of applications, such as an electronic eye with an all-around view for the observation of work pieces and traffic signs.
Avalanche photodiode (APD) imaging arrays offering programmable gain are a long-awaited achievement in electronic imaging. In view of the recent boom in CMOS imaging, a logical next step for increasing responsivity was to integrate APDs in CMOS. Once the feasibility of these diodes had been proven, we could combine the devices with control and readout circuitry, thus creating an integrated 2D APD array. Such arrays exploit the sub-Geiger mode, where the applied voltage is just slightly less than the breakdown voltage. The diodes used in the 2D array were implemented in a standard 2 micrometer BiCMOS process. To keep the readout circuitry simple, a small transimpedance amplifier was designed, taking into account that there is a significant trade-off between noise performance and silicon area. As with other CMOS imagers, we use a random-access active pixel sensor readout. The complete imaging array consists of 12 by 24 pixels, each of size 71.5 micrometers by 154 micrometers, to fit on a 5 mm2 chip. First images prove the feasibility of avalanche photodiode imaging using standard BiCMOS technology, and important data for improving sensor operation have been collected. The complexity of the imager design is increased by special noise and high-voltage requirements. Area and calibration restrictions must also be considered for this photo-sensor array.
High-speed sensors have been developed for industrial color and position analysis. These sensors consist of Si PIN diodes covered by miniaturized RGB interference filters, a micro-lens array, an imaging micro-lens, and a glass cover plate. The basic color receiver comprises three rhombic Si PIN diodes arranged in a basic hexagonal pattern with a diameter of 700 micrometers. To allow simultaneous color and position control of still samples as well as moving samples, e.g. samples lying on assembly lines or conveyor belts, the basic pattern has been arranged in two different receiver arrays. Both diode patterns, each consisting of 64 diodes, are arranged in user-defined arrays making efficient use of the available sensor area. The diodes are individually contacted to provide rapid access to certain pixels or pixel clusters. Using interference filters, the desired spectral band is transmitted to the detector almost without losses and the undesired band is blocked by reflection. The imaging micro-lens determines object distance, total visual field, and resolution. By using a coated glass cover plate, the effective range is restricted to the desired spectral region. The object distance is adjustable between 10 mm and 50 mm and the diameter of the visual field is between 5 mm and 50 mm. The maximum variance in object height is 0.5 mm and the maximum lateral resolution is 0.5 mm by 0.5 mm. The operating frequency of the total sensor system depends on the data processing unit and is between 2 kHz and 10 kHz. These sensors make high-speed color analysis possible in a frequency range in which CCD systems are too slow.
A new type of image sensor featuring a unique structure is studied with the aim of achieving both super-high sensitivity and ultrahigh-definition. This image sensor combines a field emitter array (FEA) and a high-gain avalanche rushing amorphous photoconductor target. We investigated the conditions for improving resolution in a vacuum chamber by inserting a mesh electrode between the FEA and the target. The results indicate that the resolution can be improved by strengthening the accelerating electric field between the FEA gate and the mesh, and by placing the mesh closer to the FEA. We also propose a new parallel readout system that is suitable for an ultrahigh-definition image sensor. Dividing the target into multiple segments and reading out signals for each segment simultaneously enables us to decrease the drive frequency. In our first attempt, we synthesized a good 60 X 60 pixel image from two 30 X 60 pixel segments.
A new type of sensor has been developed for applications in high-radiation environments such as space. In this paper we present the pixel structure, fabrication cycle, and measured performance of a family of active pixel charge injection devices designed in PMOS and CMOS technology, respectively. A simple 8 by 8 prototype was developed in 1996. This was followed by a 40 by 54 array having a 90 micrometer pixel size. This device has address decoders integrated on chip and a transfer gate included in each pixel in order to eliminate feed-through noise. These circuits were fabricated at RIT using a 6 micrometer PMOS double-polysilicon technology. A third, 128 by 128 array having a 41 micrometer pixel size has been designed and manufactured at a commercial foundry using 2 micrometer CMOS technology. The on-chip decoders allow resetting of selected regions of the chip.
This paper describes the performance of a family of full-frame sensor designs in which a transparent electrode replaces one of the polysilicon gates. The sensors are all fabricated with a true two-phase buried-channel CCD process that is optimized for operation in multi-pinned-phase mode for low dark current. The true two-phase architecture provides many advantages such as progressive scan, square pixels, high charge capacity, and simplified drive requirements. The uncomplicated structure allows large-area arrays to be fabricated with reasonable yield. Inclusion of a transparent gate increases the response by a factor of 10 at 400 nm and by 50 percent at 600 nm.
Proc. SPIE 3649, Time-dependent multiwavelength single-frame chemical imaging spectroscopy of laser plumes using a dimension-reduction fiber optic array, 0000 (27 April 1999); https://doi.org/10.1117/12.347064
A single-frame approach to chemical imaging with high spectroscopic resolution is described that makes use of a second-generation dimension-reduction fiber-optic array. Laser-induced plume images are focused onto a 17 X 32 rectangular array of square, close-packed, 25 micrometer cross-section f/2 optical fibers that are drawn into a 544 X 1 distal array with serpentine ordering. The 544 X 1 side of the array is imaged with an f/2 spectrograph equipped with a holographic grating and a gated intensified charge-coupled device (ICCD) camera for spectral analysis. Software is used to extract the spatial/spectral information contained in the ICCD images and deconvolute it into wavelength-specific univariate reconstructed images or position-specific spectra that span an 86 nm wavelength space.
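The dimension-reduction remapping described above amounts to an index translation between the 17 X 32 input face and the 544 X 1 distal line. A minimal sketch follows; the direction-reversal convention is an assumption, since the as-built fiber ordering is encoded in the authors' software.

```python
def serpentine_to_2d(k, n_cols=32):
    """Map a fiber's position k (0..543) along the 544 x 1 distal array
    back to its (row, col) on the 17 x 32 input face, assuming the
    serpentine ordering reverses direction on every other row."""
    row, col = divmod(k, n_cols)
    if row % 2:                       # odd rows run right-to-left
        col = n_cols - 1 - col
    return row, col
```

Each ICCD row k then carries the full spectrum of one image position; selecting a single wavelength column across all 544 rows and scattering the values through this map yields one univariate reconstructed image.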
Multispectral imaging is an enabling technology for many emerging applications. To accomplish this, multiple images of a single field of view must be captured, where each image is based on the reflected light in a specific spectral band. A number of issues must be addressed to achieve an effective imaging system for multispectral applications. The design of a multispectral camera and numerous applications are described. Applications include fruit and grain sorting, lumber grading, weed identification, precision farming, harsh-environment monitoring, and advanced security. A family of multispectral cameras based on beam-splitting optics and three CCD sensors has been developed. The family includes line-scan and area-array cameras utilizing common electronics and mechanical hardware, providing an economical solution for most multispectral applications. Up to five spectral bands within the 400 to 1100 nm sensitivity range of the CCD sensor can be imaged with the area-array camera, and three bands within this same spectral range can be imaged with the line-scan camera. The cameras acquire all of the spectral images simultaneously, eliminating temporal discrepancies. The common optical aperture results in an identical field of view for each imaging channel. Output is available in analog and digital format. On-board signal processing enables real-time processing of the image data within the camera. Available image processing functions include thresholding, addition/subtraction of images, color-space conversion, false-color mapping, and area-of-interest processing.
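The thresholding and image-subtraction operations listed above are the building blocks of classifications like weed identification. The sketch below combines them on two co-registered band images; the band names and threshold value are hypothetical illustrations, not the camera's actual on-board functions.

```python
import numpy as np

def band_difference_mask(band_a, band_b, thresh=0.2):
    """Normalized difference of two co-registered spectral-band images,
    followed by thresholding -- e.g. a vegetation mask from NIR and red
    channels.  Inputs are float arrays of identical shape."""
    nd = (band_a - band_b) / np.clip(band_a + band_b, 1e-6, None)
    return nd > thresh

nir = np.array([[0.8, 0.3]])   # hypothetical near-IR reflectances
red = np.array([[0.2, 0.3]])   # hypothetical red reflectances
mask = band_difference_mask(nir, red)
```

Because all bands are captured through a common aperture at the same instant, no registration or temporal correction step is needed before such per-pixel arithmetic.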
The Bonn University Simultaneous CAmera (BUSCA) is a CCD camera system which allows simultaneous direct imaging of the same sky area in four colors. The optics are designed for an f/8 beam, and four 4K X 4K CCDs with 15 micrometer pixels cover a field of view of 12 arcmin X 12 arcmin at a 2 m class telescope. In September 1998 BUSCA saw first light. The instrument is based on three dichroic beam splitters which separate optical wavelength bands such that standard astronomical intermediate-band filter systems can be used. The dichroics are made of plane-parallel glass plates mounted at an angle of 45 degrees. Astigmatism in the transmitted beams is completely canceled by identical plane-parallel glass plates of suitable orientation. BUSCA offers new perspectives in astronomical multicolor photometry: i) the broadband spectral properties of astronomical objects in the optical can be determined with high reliability even in non-photometric atmospheric conditions; ii) precious observing time is used very efficiently; iii) with the large field of view, extended objects like globular and open star clusters or galaxies are covered in a single exposure; iv) each exposure results in a complete data set.
Medical fluoroscopy is a set of radiological procedures used in medical imaging for functional and dynamic studies of the digestive system. Major components in the imaging chain include an image intensifier that converts x-ray information into an intensity pattern on its output screen and a CCTV camera that converts the output-screen intensity pattern into video information to be displayed on a TV monitor. Responding properly to such a wide dynamic range in real time, as a fluoroscopy procedure requires, is very challenging. Also, as in all other medical imaging studies, detail resolution is of great importance. Without proper contrast, spatial resolution is compromised. The many inherent advantages of the CCD make it a suitable choice for dynamic studies. Recently, CCD cameras have been introduced as the camera of choice for medical fluoroscopy imaging systems. The objective of our project was to investigate a newly installed CCD fluoroscopy system in the areas of contrast resolution, detail, and radiation dose.
Implementation and test results of an array for imaging applications with full-frame analog memory are presented. The array was implemented using a 1.0 micrometer double-metal, single-poly, n-well standard CMOS technology. The sensor consists of a square array of 24 by 24 pixels and circuitry for random-access readout. A pixel is composed of a phototransistor and control circuitry to regulate the phototransistor's exposure time to light. Each pixel also includes an analog memory implemented using MOSFET capacitors. The output buffer drives the capacitance of the output line. The system requires a total core area of 5 mm2. Tests were performed for each individual pixel and for the complete array. The voltage output as a function of integration time under different illumination levels shows a linear behavior. By varying the exposure time it is possible to change the detector sensitivity. The fixed-pattern noise was 0.58 percent of the saturation level. The memory capabilities were also tested, allowing non-destructive reading and a storage time of over a few seconds without significant degradation.
An experimental beam combiner (BC) is being developed to support the space interferometry program at JPL. The beam combiner forms the part of an interferometer where starlight collected by the siderostats or telescopes is brought together to produce white-light fringes, and it provides wavefront tilt information via guiding spots and beam-walk information via shear spots. The assembly and alignment of the BC have been completed. The characterization tests were performed under laboratory conditions with an artificial star and an optical delay line. Part of each input beam was used to perform star tracking. White-light interference fringes were obtained over the selected wavelength range from 450 nm to 850 nm. A least-squares fit process was used to analyze the fringe initial phase, fringe visibilities, and shift errors of the optical path difference in the delay line using the dispersed white-light fringes at different OPD positions.
An adaptive optics system is being developed which uses integrated circuit technology along with diffractive optics to create a very compact system. A lenslet array focuses incoming light onto individual actuators. Phase modulation is applied with electrostatic attraction. Gratings on the mirrors split off a part of the light for wavefront sampling. Optics on the back side of the lenslet array combine neighboring beams and focus them onto detector elements. This creates a shearing measurement in two orthogonal directions. A resistive grid network reconstructs the wavefront from the individual measurements, and a feedback system nulls the outgoing wave. This paper contains simulations and analysis of the system. A 1D array was simulated, including the wavefront measurement and correction. A sine wave was input to the system, and the resulting phase and point spread function were calculated. System analysis of the wavefront reconstruction and feedback is discussed. Test results for a non-shearing interferometer are presented. Some test results from a test chip are also provided.
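The shear-to-wavefront reconstruction step can be sketched in discrete 1D form, assuming simple integration of finite-difference slope measurements; the actual chip performs the equivalent operation with an analog resistive grid network, so this is only a conceptual stand-in.

```python
import numpy as np

def reconstruct_1d(shears, pitch=1.0):
    """Integrate a 1D array of shear (finite-difference slope)
    measurements into a phase profile.  The unknown piston term is
    fixed by removing the mean of the reconstructed phase."""
    phase = np.concatenate(([0.0], np.cumsum(shears) * pitch))
    return phase - phase.mean()

# Simulated sine-wave input, as in the paper's 1D simulation
true_phase = np.sin(np.linspace(0.0, 2.0 * np.pi, 16))
recovered = reconstruct_1d(np.diff(true_phase))
```

Up to the unobservable piston offset, the recovered profile matches the input exactly in this noise-free case; with noisy shears, a least-squares (or resistive-grid) solution averages the redundant measurements instead of accumulating error along a single integration path.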
This paper discusses the performance of a new VGA-resolution color CMOS imager developed by Motorola on a 0.5 micrometer/3.3 V CMOS process. This fully integrated, high-performance imager has an on-chip timing, control, and analog signal processing chain for digital imaging applications. The picture elements are based on 7.8 micrometer active CMOS pixels that use pinned photodiodes for higher quantum efficiency and low noise performance. The image processing engine includes a bank of programmable-gain amplifiers, line-rate clamping for dark-offset removal, real-time auto white balancing, per-column gain and offset calibration, and a 10-bit pipelined RSD analog-to-digital converter with a programmable input range. Post-ADC signal processing includes features such as bad-pixel replacement based on user-defined threshold levels, 10-to-8-bit companding, and 5-tap FIR filtering. The sensor can be programmed via a standard I2C interface that runs on 3.3 V clocks. Programmable features include variable frame rates using a constant-frequency master clock, electronic exposure control, continuous or single-frame capture, and progressive or interlaced scanning modes. Each pixel is individually addressable, allowing region-of-interest imaging and image subsampling. The sensor operates with master clock frequencies of up to 13.5 MHz, resulting in 30 FPS. A total programmable gain of 27 dB is available. The sensor power dissipation is 400 mW at full speed of operation. The low-noise design yields a measured system-on-a-chip dynamic range of 50 dB, thus giving over 8 true bits of resolution. An extremely high conversion gain results in an excellent peak sensitivity of 22 V/(uJ/cm2), or 3.3 V/lux-sec. This monolithic image capture and processing engine represents a complete imaging solution, making it a true 'camera on a chip'. Yet in its operation it remains extremely easy to use, requiring only one clock and a 3.3 V power supply.
Given the available features and performance levels, this sensor will be suitable for a variety of color imaging applications including still/full motion imaging, security/surveillance, and teleconferencing/multimedia among other high performance, cost sensitive, low power consumer applications.
A smart image sensor with a size of 6.5 mm X 8.2 mm has been designed in a 2-metal, 1-poly, 0.7 micrometer CMOS process. It consists of a 16 by 16 photodiode array for image pick-up, and each pixel is paired with a time-multiplexed edge detector and a 4-bit memory cell. This parallel structure results in a fill factor of 1.79 percent. Four LPGCP-ALUs are used for motion vector estimation over the whole focal sensor plane, with switching logic that shifts the LPGCP-ALU window into the area where motion vectors are detected at a speed equal to the frame rate.
Temporal noise sets a fundamental limit on image sensor performance, especially under low illumination and in video applications. In a CCD image sensor, temporal noise is well studied and characterized. It is primarily due to the photodetector shot noise and the thermal and 1/f noise of the output charge-to-voltage amplifier. In a CMOS APS, several additional sources contribute to temporal noise, including the noise due to the pixel reset, follower, and access transistors. The analysis of noise is further complicated by the nonlinearity of the APS charge-to-voltage characteristics, which is becoming more pronounced as CMOS technology scales, and by the fact that the reset transistor operates below threshold for most of the reset time. The paper presents an accurate analysis of temporal noise in APS. We analyze the noise for each stage of the sensor operation and identify the noise contribution from each source. We analyze noise due to photodetector shot noise taking nonlinearity into consideration. We find that nonlinearity improves SNR, and that reset transistor shot noise is at most half the commonly quoted value. Using HSPICE simulation, we find the noise due to the follower and access transistors. As expected, we find that at low illumination reset noise dominates, while at high illumination photodetector shot noise dominates. Finally, we present experimental results from test structures fabricated in 0.35 micrometer CMOS processes. We find that both measured peak SNR and reset noise values match well with the results of our analysis.
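The finding that subthreshold reset halves the usual kTC noise power can be illustrated numerically. The sketch below compares the conventional kT/C estimate with the kT/2C value; the 5 fF sense-node capacitance in the example is an assumed, representative figure, not taken from the paper.

```python
from math import sqrt

k_B = 1.380649e-23   # Boltzmann constant [J/K]
T = 300.0            # room temperature [K]

def reset_noise_rms(C, soft_reset=True):
    """RMS reset ('kTC') noise voltage on a sense capacitance C [F].
    Soft (subthreshold) reset gives kT/2C noise power -- half the
    commonly quoted kT/C value."""
    factor = 0.5 if soft_reset else 1.0
    return sqrt(factor * k_B * T / C)

v_hard = reset_noise_rms(5e-15, soft_reset=False)
v_soft = reset_noise_rms(5e-15)
```

For a 5 fF node the two estimates are roughly 0.9 mV and 0.6 mV RMS, a factor of sqrt(2) apart, which directly shifts where reset noise stops dominating over photodetector shot noise.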
This paper describes a new, improved method of employing an amplifier per pixel that eliminates the FET threshold and gain variation problems of the prior art. Existing amplifier-per-pixel designs utilize 3 or 4 FETs per pixel, and the amplifier consists of a source follower. The source follower is problematic in 2D arrays due to threshold variations, and the resulting per-pixel gain variations require extensive peripheral circuitry and/or software to correct. The active column sensor employs a true unity-gain amplifier per pixel, eliminating threshold and gain variations. The simplified pixel electronics allow for smaller and/or more sensitive pixels, all at lower cost through improved yields. Disclosures of 1.5-FET double-poly, 1.5-FET single-poly, and photodiode configurations are given, with results on various pixels.
Dynamic range is a critical figure of merit for image sensors. Often a sensor with higher dynamic range is regarded as higher quality than one with lower dynamic range. For CCD and CMOS sensors operating in the integration mode, the sensor SNR monotonically increases with the signal. Therefore, a sensor with higher dynamic range generally produces higher quality images than one with lower dynamic range. This, however, is not necessarily the case when dynamic range enhancement schemes are used. For example, using the well capacity adjusting scheme, dynamic range is enhanced but at the expense of substantial degradation in SNR. On the other hand, using multiple sampling, dynamic range can be enhanced without degrading SNR. Therefore, even if both schemes achieve the same dynamic range, the latter can produce higher image quality than the former. The paper provides a quantitative framework for comparing SNR for image sensors with enhanced dynamic range. We introduce a simple model to describe the sensor output response as a function of the photogenerated signal, dark signal, and noise for sensors operating in integration mode with and without dynamic range enhancement schemes. We use the model to quantify and compare dynamic range and SNR for three sensor operation modes: integration with shuttering, the well capacity adjusting scheme, and multiple sampling.
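A minimal version of such an integration-mode SNR model can be sketched as follows, assuming shot noise from the photo and dark currents plus additive read noise; the parameter values are illustrative and not taken from the paper, which develops a fuller model covering the enhancement schemes.

```python
from math import log10, sqrt

q = 1.602176634e-19  # electron charge [C]

def snr_db(i_ph, t_int, i_dc=1e-16, sigma_r=20.0):
    """SNR [dB] of an integrating sensor: photocurrent i_ph [A]
    integrated for t_int [s], against shot noise of the photo and dark
    currents plus read noise sigma_r [e- RMS]."""
    signal = i_ph * t_int / q                              # electrons
    noise = sqrt((i_ph + i_dc) * t_int / q + sigma_r ** 2)
    return 20.0 * log10(signal / noise)

snr_dim = snr_db(1e-14, 0.03)    # dim scene
snr_bright = snr_db(1e-13, 0.03) # 10x brighter scene, same exposure
```

Below saturation this SNR rises monotonically with the signal, which is why two schemes reaching the same dynamic range endpoint can still deliver very different image quality: well capacity adjusting flattens this curve at high signal levels, while multiple sampling preserves it.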
In order to provide its customers with sub-micron CMOS fabrication solutions for imaging applications, Tower Semiconductor initiated a project to characterize the optical parameters of Tower's 0.5-micron process. A special characterization test chip was processed using the TS50 process. The results confirmed a high-quality process for optical applications. Perhaps the most important result is the process' very low dark current of 30-50 pA/cm2 across the entire process window. This very low dark current characteristic was confirmed for a variety of pixel architectures. Additionally, we have succeeded in reducing and virtually eliminating the white spots on large sensor arrays. As a foundry, Tower needs to support fabrication of many different imaging products. Therefore we have developed a fabrication methodology that is adjusted to the special needs of optical applications. In order to establish in-line process monitoring of the optical parameters, Tower places a scribe-line optical test chip that enables wafer-level measurements of the most important parameters, ensuring the optical quality and repeatability of the process. We have developed complementary capabilities such as in-house deposition of color filters and fabrication of very large area dice using sub-micron CMOS technologies. Shellcase and Tower are currently developing a new CMOS image sensor optical package.
We have developed two kinds of electron-bombardment CCD cameras, employing full-frame-transfer and frame-transfer electron-bombardment sensors made by Hamamatsu Photonics. In order to make the cameras practicable for various applications, we improved the image resolution and measured the life of the frame-transfer sensor. With regard to resolution, there are many factors which degrade it. Here, we reduced the gap between the photocathode and the CCD to 60 percent of its original length; hence, the sensor achieved a resolution greater than 450 TV lines. Furthermore, the life of the sensor is extended because of the MPP CCD structure. The C7162-20 frame-transfer electron-bombardment CCD camera performs as a standard TV-rate camera. In terms of performance, it can replace the SIT tube camera and the I-CCD camera. To demonstrate this, the camera was compared with a GEN 4 I-CCD camera which uses a small-aperture MCP and a blue GaAs photocathode coupled to the CCD through an optical fiber, and we show the results of the comparison.
We have developed an optical approach for modeling the quantum efficiency (QE) of back-illuminated CCD optical imagers for astronomy. Beyond its simplicity, it has the advantage of providing a complete fringing description for a real system. Standard thin-film calculations are extended by (a) considering the CCD itself as a thin film, and (b) treating the refractive index as complex. The QE is approximated as the fraction of the light neither transmitted nor reflected, which effectively assumes that all absorbed photons produce e-h pairs and each photoproduced e or h is collected. Near-surface effects relevant to blue response must still be treated by standard semiconductor modeling methods. A simple analytic expression describes the QE of a CCD without antireflective (AR) coatings. With AR coatings the system is more easily described by transfer matrix methods. A two-layer AR coating is tuned to give a reasonable description of standard thinned CCDs, while the measured QE of prototype LBNL totally depleted thick CCDs is well described with no adjustable parameters. Application to the new LBNL CCDs indicates that these devices will have QE > 70 percent at (lambda) equals 1000 nm and negligible fringing in optical systems faster than approximately f/4.0.
We report on the design of a system used to measure the multispectral intrapixel response of imaging sensor arrays. An Airy disk spot size of approximately 4 micrometers has been achieved for wavelength bands that extend from the visible blue to the near IR. The automated system performs rapid intrapixel row and/or column spatial mapping of individual pixels as well as rastered 2D spatial scans over multi-pixel grids. Commercially available equipment, including a photometric eyepiece, a reflective objective, programmable pushers, and light-emitting diodes, is utilized in the system. Scanned results using the system are presented for both front- and back-illuminated charge-coupled device imagers. The intrapixel response of a front-illuminated device shows good correlation with the physical cross section of the devices tested.