The intrapixel response is the signal detected by a single pixel illuminated by a Dirac distribution, as a function of the position of that Dirac inside the pixel. It is also known as the pixel response function (PRF). This function measures the sensitivity variation at the subpixel scale and provides a spatial map of the sensitivity across a pixel. The variation of pixel sensitivity has its origin in the physical properties of the pixel, and its measurement gives information on effects that occur after the absorption of the incoming photons by the material. Knowledge of this function is therefore of great interest: on the one hand for the technologist, who can check the quality of the pixels and study the phenomena related to detection; on the other hand for the instrument scientist, who can evaluate the impact of the spatial filtering of the pixel on image quality.
Knowledge of the intrapixel sensitivity is also important for controlling and reducing systematic errors in under-sampled instruments. Indeed, in these instruments, the variation of pixel sensitivity limits the performance of the system and contributes to the instrument error budget [1,2]. All these reasons bring ESA to require the measurement of the intrapixel sensitivity with a resolution close to one-tenth of a pixel in detector technology development activities.
The techniques used to measure the intrapixel response are based on the projection of optical probes onto the detector. They can be divided into two main approaches: the direct approach and the indirect approach.
In the direct approach, we can cite the spot-scan techniques, which consist of projecting a single spot and scanning it across the detector under test (DUT). These techniques require high-quality optics coupled with high-precision scanning systems. They are effective for understanding subpixel properties, but they are time-consuming; this is why the results published in the literature are commonly limited to only a few pixels. To overcome this limitation, Biesiadzinski et al. designed a method based on a multi-spot projector, the spots-O-Matic. The spots-O-Matic enables a two-dimensional scan of every pixel in a detector by projecting 160,000 spots (a 400×400 grid) onto the device.
In the indirect approach, the principle consists of estimating the Modulation Transfer Function (MTF) in the Fourier domain and computing the detector Point Spread Function (PSF), or its Line Spread Function (LSF), by inverse Fourier transform. The MTF is estimated by projecting onto the detector a pattern with a known spatial-frequency content. Guérineau et al. have developed an original technique based on this Fourier-transform approach. It relies on the projection of high-spatial-resolution periodic patterns onto the whole sensor without classical optics such as lenses or other focusing components, using instead the self-imaging property (known as the Talbot effect) of a particular class of diffracting gratings illuminated by a plane wave. The main advantage of this approach is that the projection of the patterns is optics-less. However, this technique yields the MTF (and hence the PSF or LSF) of an average pixel corresponding to an area of the detector. Ketchazo et al. presented an adaptation of this technique for the characterization of the intrapixel response of each pixel of a detector. The idea is to couple the projection of interferograms made of subpixel details with a precise micro-scanning process. Viale et al. presented the experimental demonstration and preliminary results. In the following section, we discuss the physical origins of the sensitivity variations. This is followed by a comparison of the main measurement techniques that allow one to evaluate the intrapixel sensitivity. We conclude with a discussion of the potential of these techniques to fulfill the requirements of astronomical applications.
THE ORIGIN OF THE VARIATION OF THE INTRAPIXEL SENSITIVITY
The variation of the pixel sensitivity finds its origin in the physical processes undergone by the charges created by the interaction of the photons with the material. A photon with sufficient energy generates at least one electron-hole (e-h) pair in the semiconductor. This e-h pair is free to move and can diffuse in the structure. In the ideal case, the signal carriers are collected in the pixel where they are created, without spreading into neighbouring pixels. However, lateral charge diffusion into other pixels can occur; in this case, the charge carriers are collected by the adjacent pixels. This effect, known as diffusion pixel crosstalk, widens the pixel response function and degrades image quality. The last process that can occur after the generation of signal carriers is the recombination of the e-h pairs before they are collected, which leads to a loss of information.
We should note that the pixel sensitivity changes with wavelength. Finally, other effects can degrade the pixel sensitivity, among them pixel defects, reflection losses of the incoming light and gate absorption (in front-illuminated CCDs, for example).
Therefore, to improve the charge collection efficiency and the sensitivity, in the case of CCDs for example, back-illuminated devices are preferred: the substrate is thinned enough to reduce the epitaxial region and minimize the field-free regions. In this case, charges are created close to the depletion region and are easily collected by the target pixel. However, to increase the detector quantum efficiency (QE) and improve the absorption of shorter and longer wavelengths, the photosensitive thickness is often made larger. For example, thick-substrate detectors are developed to achieve high QE at red and near-infrared wavelengths. To overcome the problem of charge diffusion, such CCDs are deep- or fully depleted and an appropriate bias is applied to the substrate to minimize the diffusion. Hence there is a trade-off between the various aspects of the detector that affect its efficiency.
This discussion demonstrates that the pixel response function is far from being a top-hat function with a constant value within the boundaries of a pixel (i.e. uniform pixel sensitivity) and zero outside (i.e. zero crosstalk). It also raises the question of the homogeneity of the intrapixel responses among the pixels of the detector. One might expect no difference among the pixels (i.e. pixel A has the same intrapixel response as pixel B) because the detectors are fabricated by photolithography processes using different masks on the same wafer. However, these processes can suffer from mask alignment defects, and the masks themselves can present geometric defects; these can introduce dispersion in pixel sizes and shapes. It is therefore important to measure the pixel response function and check its variation at the scale of a detector. The techniques developed to measure the pixel response function are derived from the techniques commonly used to evaluate the spatial properties of electro-optical systems (cameras, imagers, ...). They are based on the projection of optical probes or input images onto the DUT. Since detectors have a periodic, pixelated structure, any input image is sampled at a fixed spatial frequency. This pixel spatial sampling can create aliasing and "fold" high frequencies of the modulation transfer function. To overcome these limitations, super-resolution techniques are often implemented and the results are corrected for the pixel sampling effect.
In the next section, we will review the classical techniques used to characterize the spatial properties of the detectors and we will highlight those which are well adapted for the study of the intra-pixel response.
OVERVIEW OF MAIN MEASUREMENT TECHNIQUES
In the following paragraphs, we first define the criteria on which the comparative study is carried out. This definition is followed by a review of the measurement techniques.
Measurement figures of merit
The Pixel Response Function (PRF) is the parameter to be measured. In practice, the PRF is sampled and determined on a subpixel grid of pitch pech/M, where pech is the sampling pitch of the focal plane array and M is an oversampling factor.
The Fourier transform of the PRF is the pixel Optical Transfer Function (OTF), whose modulus gives the pixel MTF. The pixel MTF describes the ability of the pixel to reproduce the contrast modulation present in the scene (here, the pixel field of view) at any frequency. Knowing the PRF at a pitch of pech/M corresponds to exploring the pixel MTF up to M times the Nyquist frequency fNy, where fNy = 1/(2·pech).
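This relation between the subpixel sampling pitch and the maximum explored frequency can be illustrated numerically. The sketch below (not from the paper; the Gaussian PRF, pitch and M value are illustrative assumptions) samples a toy PRF at pech/M and shows that its discrete Fourier transform reaches M times the detector Nyquist frequency:

```python
import numpy as np

# Illustrative sketch: a Gaussian stand-in for a PRF profile, sampled on a
# subpixel grid of pitch p_ech / M (all numerical values are assumptions).
p_ech = 12e-3   # focal-plane sampling pitch in mm (12 um pixel)
M = 8           # oversampling factor
n = 256         # number of subpixel samples
dx = p_ech / M  # subpixel sampling step

x = (np.arange(n) - n // 2) * dx
prf = np.exp(-0.5 * (x / (0.4 * p_ech)) ** 2)   # toy PRF profile
prf /= prf.sum()

# Pixel MTF = |Fourier transform of the PRF|, normalized at zero frequency.
mtf = np.abs(np.fft.rfft(prf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(n, d=dx)                # cycles / mm

f_ny = 1.0 / (2.0 * p_ech)                      # detector Nyquist frequency
print(f"Nyquist frequency: {f_ny:.1f} mm^-1")
print(f"maximum explored frequency: {freqs[-1]:.1f} mm^-1"
      f" = {freqs[-1] / f_ny:.1f} x f_Ny")      # equals M x f_Ny
```

Sampling the PRF M times more finely thus gives direct access to the pixel MTF well beyond the detector Nyquist frequency.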
In the following paragraphs, the techniques are compared, as far as possible, with respect to the PRF oversampling factor and the accuracy of the PRF estimation.
Overview of the measurement techniques
As already mentioned in section I, the measurement techniques can be divided into two approaches. Hereinafter, the first three techniques we present belong to the direct approach. We conclude this section with a discussion of the indirect techniques.
Part 1: the direct techniques
1. The spot scan: it consists of projecting a point source (an infinitesimal pulse, i.e. a Dirac distribution in the ideal case) onto the detector. The projection is often achieved with a microscope objective lens [9,10,11], which de-magnifies and focuses the pinhole illumination onto the DUT. The size of the spot depends on the wavelength and the aperture of the optics (~λ/NA). The scanning of the spot across the DUT is usually achieved by displacing the detector with two motorized stages. The spot scan provides the entire two-dimensional PRF in a single measurement. Niemi et al. measured, with 3σ uncertainties, the PRF of one version of the CCD-273 using spot projection. The CCD-273 is the device that will populate the visible channel of the Euclid payload. Niemi et al. estimated the CCD PRF by fitting the data to a model using a Bayesian approach. They showed that the PRF size changes with wavelength and narrows towards longer wavelengths. They also showed that the PSF size grows as a function of intensity. One of the major drawbacks of this technique is that point-source objects often provide too little flux, which, coupled with the multiple noise sources (source fluctuations, sensor noise, ...), can lead to a poor signal-to-noise ratio (SNR).
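The principle of the spot scan can be sketched with a toy simulation (illustrative only; the sensitivity map, spot width and grid size are hypothetical, not from any of the cited setups). A pixel is modelled on a subpixel grid with a nonuniform sensitivity map; recording one pixel's signal for each spot position builds its PRF:

```python
import numpy as np

# Toy spot-scan simulation on a single pixel (all parameters hypothetical).
sub = 16                                   # subpixel samples per pixel side
yy, xx = np.mgrid[0:sub, 0:sub] / sub      # normalized intra-pixel coords

# Hypothetical sensitivity map: response drops towards the pixel edges.
sensitivity = np.cos(np.pi * (xx - 0.5)) * np.cos(np.pi * (yy - 0.5))

def spot(cx, cy, sigma=0.08):
    """Small Gaussian spot centred at (cx, cy), in pixel units."""
    s = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return s / s.sum()

# Scan the spot across the pixel; the pixel signal vs spot position is
# (a spot-blurred estimate of) the pixel response function.
prf = np.array([[np.sum(sensitivity * spot(cx, cy))
                 for cx in np.linspace(0, 1, sub)]
                for cy in np.linspace(0, 1, sub)])

print(prf[sub // 2, sub // 2], prf[0, 0])  # peaks near centre, low at corner
```

The finite spot size blurs the underlying sensitivity map, which is the trade-off between spot size and SNR mentioned above.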
2. The line response function scan/tilt: instead of a spot, a line (slit image) can be projected and scanned across the device so as to produce a line response function. The line response function (LRF) is the two-dimensional convolution of the line source object with the detector PRF. The detector MTF along one direction (perpendicular to the line source) is then deduced from the LRF by a Fourier transform. Compared with a spot source, a line source gives a better SNR, but the information is one-dimensional, so several slit angles are needed to obtain a 2D MTF. Instead of successively displacing the line by a small fraction of a pixel pitch, a static alternative known as the Vernier technique has been devised: the line is tilted slightly, which avoids aliasing effects. Successive pixel rows sample the response function at the pixel pitch, but each set of samples is displaced by a fraction of a pixel per row. The technique is used at e2v, where the LRF is sampled with a resolution of 1/8 pixel.
e2v estimated the MTF of the CCD-273 devices as part of the ESA pre-development phase for the Euclid VIS CCD detectors. The slit is focused using a 20x objective lens, and the measured LRF points are interpolated to obtain a regularly sampled LRF. The MTF results obtained for horizontal and vertical orientations of the slit are presented; the MTFs are corrected for the optical MTF and the results at the pixel Nyquist frequency are given for different wavelengths.
3. The knife-edge technique: this technique uses an image of a sharp, high-contrast edge. This image is the convolution of the PSF with a step function and is called the Edge Spread Function (ESF). The LRF is deduced by differentiating the ESF, and the one-dimensional MTF is then obtained as the 1-D Fourier transform of the LRF. Estimating the 2D MTF requires several one-dimensional slices at different angles. To overcome aliasing, the knife edge is tilted slightly and the oversampled ESF is constructed at subpixel spacing. Experimentation showed that a subpixel bin of one-quarter pixel gave the best results. Recently, a virtual knife edge was proposed by Karcher et al.: instead of a real Foucault knife edge, they selected a grid of pixels and measured the variation of the total charge in the grid as the beam was scanned across a grid edge.
We note that the estimation of the PRF through the ESF must be done carefully to avoid bias in the result, as the intensity profile (an erf function) and its derivative (a Gaussian) are very sensitive to noise.
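The ESF-to-MTF chain, and the noise sensitivity of the differentiation step, can be illustrated on synthetic data (the Gaussian LSF, the quarter-pixel bin and the noise level are illustrative assumptions, not measured values):

```python
import numpy as np

# The ESF is the integral of the LSF, so differentiating the oversampled
# ESF recovers the LSF, whose 1-D Fourier transform gives the MTF.
dx = 0.25                               # subpixel bin of one-quarter pixel
x = np.arange(-8.0, 8.0, dx)
lsf_true = np.exp(-0.5 * (x / 0.7) ** 2)
esf = np.cumsum(lsf_true) * dx          # synthetic edge spread function

lsf = np.gradient(esf, dx)              # derivative of the ESF -> LSF
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                           # normalize at zero frequency

# Differentiation amplifies noise, hence the caution in the text above:
rng = np.random.default_rng(0)
esf_noisy = esf + rng.normal(0.0, 1e-3, esf.size)
noise_out = np.gradient(esf_noisy, dx) - lsf
print(f"output noise ~{np.std(noise_out) / 1e-3:.1f}x the input noise")
```

With a central difference over a bin of dx pixels, a noise standard deviation sigma on the ESF becomes roughly sigma/(dx*sqrt(2)) on the LSF, which is why the subpixel bin size matters.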
Summary and comparison of the techniques
| Technique | Spot scan | Line response function (Vernier technique) | Tilted knife edge |
|---|---|---|---|
| Sub-pixel sampling |  | Pixel/8 (e2v) | Pixel/4 |
| Intrapixel analysis compatibility | Yes | No | No |
Among all these techniques, the spot scan is the most effective for investigating the intrapixel sensitivity. Indeed, although all these techniques allow the exploration of subpixel properties, only the spot scan is local enough to permit this analysis for each pixel considered individually. However, with the spot scan, the time required to create a detector-wide map is prohibitive. To overcome this limitation, Biesiadzinski et al. designed a method based on a multi-spot projector, the spots-O-Matic.
Principle of the Spots-O-Matic
The principle of this system is derived from the spot-o-matic, first designed by the same authors to project and scan a single spot. Unlike the spot-o-matic, the spots-o-matic projects around 100,000 spots by means of a 7 in. × 7 in. photolithography mask containing an array of circular apertures of 17 µm diameter each (a pinhole array), back-illuminated by narrow-band laser diodes.
The pinhole array is de-magnified and imaged onto the detector by a camera lens. The pinhole array is mounted at room temperature, while the detector is mounted in a cryogenic dewar and operated at 140 K. The lens is affixed in front of the cryostat window, on the room-temperature side. The setup suffers from spherical aberration and distortion, so the spot-array image may not be co-planar with the detector and the focal surface is curved. Hence, the authors restrict their analysis to a local region at the image centre. For example, for the H2RG-236 detector (18 µm pitch), they restrict their measurement to a 1024×1024-pixel region at the image centre, a quarter of the total number of pixels. Nevertheless, the authors announced that it would take only a few weeks to characterize all the pixels of the entire device with a proper focusing procedure for each zone of the device.
The PRF is characterized by scanning the spots vertically and horizontally with a step of 2 µm. The spot diameter is 7 µm with a laser diode centred at 1050 nm. During the scan, each pixel is sampled by different spots, and an algorithm based on a matched-filter approach estimates the pixel response function.
Part 2: indirect techniques
In the indirect approach, the principle consists of estimating the Modulation Transfer Function (MTF) in the Fourier domain and computing the Point Spread Function (PSF) by inverse Fourier transform. The MTF is estimated by projecting onto the detector a pattern with a known spatial-frequency content. The pattern usually consists of fringes generated by a double-aperture interferometer or by a modified Michelson interferometer [19,20]. These techniques are global, as the interferogram can be made large enough to characterize the whole sensitive surface of the detector. Their main drawback is that they often require complex optics to produce the fringes. Guérineau et al. developed an original technique based on the projection of high-spatial-resolution patterns onto the whole sensor without classical optics, using the self-imaging property (known as the Talbot effect) of a diffracting element called a Continuously Self-Imaging Grating (CSIG). The CSIG is illuminated by a plane wave; the projection of the pattern onto the detector is optics-less, and the alignment budget of the test bench is low. Figure 5 below gives the principle of the test bench.
The CSIG is a diffractive element; a complete description of its properties can be found in . Depending on the substrate material, different CSIGs can be manufactured to cover the visible, near-infrared and long-wave infrared bands. CSIGs allow the spatial response of the detector to be explored up to frequencies several times higher than the Nyquist frequency. The highest reachable frequency depends on the CSIG properties, the pixel pitch and the wavelength.
Viale et al. presented the results of an estimation of the pixel response function of a Dalsa camera (1024×1024 CCD, 12 µm pitch) using a highly resolved CSIG (cutoff frequency higher than 500 mm⁻¹) illuminated by white light. The CSIG allows one to explore the detector MTF up to 12 times the detector Nyquist frequency (i.e. the oversampling factor M is equal to 12) and therefore to estimate the PRF with a resolution of pixel/12 (namely 1 µm).
Two measurement procedures can be considered: global and local.
The global procedure: hypothesis of identical pixels
The procedure is based on the assumption that all the pixels under study are identical. The mean PRF corresponding to at least an area of one CSIG period can then be estimated. To overcome aliasing, the super-resolution procedure consists of displacing the CSIG in front of the DUT in steps of pixel/M and acquiring M×M images. The combination of these M×M images yields the oversampled interferogram; Viale et al. present the procedure in detail. The deconvolution is based on a Bayesian approach which takes into account the noise in the image and some prior information on the object to be restored, the latter being estimated in an unsupervised way from the interferogram itself. The SNR of the reconstructed interferogram must be high enough to ensure an accurate PRF estimation: simulations showed that with an SNR of 500, the maximum estimation error is below 2%.
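The combination step of this micro-scanning procedure can be sketched as follows (an illustrative toy, not the actual reconstruction pipeline; the cosine pattern and the small M and detector size are assumptions). M×M low-resolution frames, each acquired with the pattern shifted by pixel/M, are interleaved into one interferogram oversampled at pixel/M:

```python
import numpy as np

M, n = 4, 8                     # oversampling factor, detector width (pixels)

def frame(shift_x, shift_y):
    """Hypothetical low-resolution frame of a shifted periodic pattern."""
    yy, xx = np.mgrid[0:n, 0:n].astype(float)
    return np.cos(2 * np.pi * (xx + shift_x) / n) * \
           np.cos(2 * np.pi * (yy + shift_y) / n)

over = np.zeros((n * M, n * M))
for j in range(M):              # subpixel shift j/M along y
    for i in range(M):          # subpixel shift i/M along x
        # Pixel (p, q) with shift (i/M, j/M) samples the pattern at
        # (q + i/M, p + j/M), i.e. fine-grid position (p*M + j, q*M + i).
        over[j::M, i::M] = frame(i / M, j / M)

print(over.shape)               # (32, 32): interferogram sampled at pixel/M
```

Each pixel of each shifted frame lands at a distinct subpixel phase, so the interleaved image reproduces the pattern on a grid M times finer than the pixel pitch.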
A demonstrator test bench has been developed to validate the measurement technique. The experimental results are given in Figure 6 hereafter: on the left, one period (a0 = 380 µm) of the oversampled interferogram considered for the estimation of the pixel response function; on the right, the reconstructed pixel response function, with an error bar estimated at 8.5%.
Applying this methodology to the CCD-273 (4096 × 4132 pixels, 12 µm pitch) would allow the estimation of 16800 PRFs, each corresponding to an area of 380 × 380 µm (32×32 pixels). These fairly good statistics are obtained in only 2.4 h.
The local procedure: hypothesis of different pixels
The objective is to evaluate the intrapixel response of each pixel considered individually. To get rid of aliasing effects, the super-resolution procedure consists of acquiring M(a0/pixel) × M(a0/pixel) images taken at different subpixel positions of the CSIG (e.g. for the H2RG-236, this amounts to acquiring 253×253 images). These images are then combined to obtain an interferogram over a period a0 for each pixel of the FPA. With the H2RG, the detector-wide intrapixel map could be obtained in less than 1 week. This calculation assumes a 100 kpix/s output frequency, 8 outputs and an integration time of 2 s.
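Under these hypotheses, the quoted figures can be checked with a short back-of-the-envelope calculation (the 2048 × 2048 H2RG format is an assumption here, as are a0 = 380 µm and M = 12, carried over from the demonstrator):

```python
# Back-of-the-envelope check of the local-procedure acquisition time.
n_pix = 2048 * 2048                       # assumed H2RG format
n_outputs, pix_rate = 8, 100e3            # 8 outputs at 100 kpix/s
t_int = 2.0                               # integration time per frame (s)

steps = round(12 * 380 / 18)              # M * (a0 / pitch) -> 253
n_frames = steps ** 2                     # 253 x 253 images

t_read = n_pix / (n_outputs * pix_rate)   # ~5.2 s readout per frame
t_total_days = n_frames * (t_read + t_int) / 86400.0
print(f"{steps} steps per axis, ~{t_total_days:.1f} days")  # under 1 week
```

The result, roughly five and a half days, is consistent with the "less than 1 week" figure quoted above.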
The implementation of this ultimate measurement requires, among other things, precise and reliable motion of the CSIG, a stable (i.e. vibration-free) test bench and high-quality optics.
ONERA/DOTA and CEA/Irfu are currently developing a test bench named Intrapix to satisfy these requirements. The displacement of the CSIG will be ensured by two piezo nano-positioners (ANP series, Attocube) used in closed-loop control based on fibre-optic Fabry-Perot interferometers (FPS, Attocube). The optics will be made of RSA 6061 (RSA stands for Rapidly Solidified Aluminium) and aligned so that the wavefront error at the exit of the collimator is better than λ/10 RMS.
COMPARISON BETWEEN SPOTS-O-MATIC AND INTRAPIX BENCHES
We have presented two experimental test benches for the measurement of the intrapixel sensitivity. Both use highly resolved probes and require precise scanning systems. The Spots-O-Matic is based on focusing an array of spots, while the Intrapix approach projects patterns using the self-imaging properties of gratings.
The table below summarizes and compares the main characteristics of these test-benches.
Summary and comparison of the Spots-o-matic and the Intrapix test-benches
| Bench | Approach | Principle | Setup alignment budget | Large-format detector compatible | Multiband compatible | PRF measurement resolution | PRF estimation accuracy | Time-consuming |
|---|---|---|---|---|---|---|---|---|
| Spots-o-Matic | Direct | Focusing an array of spots | High | No (optics aberrations reduce the field of measurement) | Yes (optics must be changed) | 7 µm spot size for 18 µm pixel pitch | Not available | Yes |
| Intrapix | Indirect | Projection of 2D patterns | Low | Yes | Yes (CSIG must be changed) | 1 µm | 2% (simulation); 8.5% achieved experimentally in the demonstrator | No |
Because of the field limitations due to the use of optics in the Spots-o-Matic, the Intrapix test bench appears more appropriate for characterizing the intrapixel sensitivity of large detectors.
We have reviewed different techniques commonly applied to characterize the spatial properties of detectors in laboratory conditions. The aim was to evaluate their compatibility with intrapixel measurement and to compare their performances. We have shown that even if a technique permits the exploration of subpixel properties, it is not necessarily compatible with the measurement of the intrapixel response. The line response function and edge spread function techniques are not local enough to allow the investigation of the sensitivity of an individual pixel, since the line or the edge is sampled over 4 or 8 pixels respectively. In the scenario where the need is to measure a mean PRF of a detector, the tilted-edge technique nevertheless appears the most appropriate (high contrast, high SNR, easy to implement).
The spot scan, however, is effective for measuring the intrapixel response. The measurement resolution depends mainly on the spot size and on the performance of the translation stages: the smaller the spot (and its displacement pitch), the better resolved the PRF. However, a trade-off has to be made between the size of the spot and the SNR.
In the case of astronomical detectors composed of millions of pixels, a purely local technique such as the spot scan is not appropriate, because it is important to check the uniformity of the intrapixel response over the whole detector. We have shown that in these cases global techniques coupled with a micro-scanning process are required: the technique must address the entire FPA while also providing a local assessment of every pixel. We have presented and compared two test benches, the spots-o-matic and the Intrapix. The comparison leads to the conclusion that the Intrapix bench is the one that could really permit the study of intrapixel variations at the scale of a large detector.
REFERENCES
[1] J. G. Ingalls, J. E. Krick, S. J. Carey, S. Laine, J. A. Surace, W. J. Glaccum, et al., "Intra-pixel gain variations and high-precision photometry with the Infrared Array Camera (IRAC)", Proc. of SPIE, vol. 8442, no. 84421Y, pp. 1–13, 2012.
[2] J. Anderson, I. R. King, "Toward high-precision astrometry with WFPC2. I. Deriving an accurate PSF", Publ. Astron. Soc. Pac., vol. 112, pp. 1360–1382, 2000.
[3] N. Barron, M. Borysow, K. Beyerlein, M. Brown, C. Weaverdyck, W. Lorenzon, et al., "Sub-pixel response measurement of near-infrared sensors", Publ. Astron. Soc. Pac., vol. 119, pp. 466–475, 2007.
[4] T. P. T. G. Biesiadzinski, M. J. S. M. Howe, W. Lorenzon, C. Weaverdyck and J. Larson, "A method for the characterization of subpixel response of near-infrared detectors", Proc. of SPIE, vol. 7742, no. 77421M, pp. 1–9, 2010.
[5] N. Guérineau, S. Rommeluère, E. Di Mambro, I. Ribet and J. Primot, "New techniques of characterisation", C. R. Physique, vol. 4, pp. 1175–1185, 2003.
[6] C. Ketchazo, T. Viale, O. Boulade, G. Druart, V. Moreau, L. Mugnier, et al., "A new technique of characterization of the intrapixel response of astronomical detectors", Proc. of SPIE, vol. 9154, no. 91541Y, 2014.
[7] T. Viale, C. Ketchazo, N. Guérineau, O. Boulade, F. de la Barrière, V. Moreau, et al., "High accuracy measurements of the intrapixel sensitivity of VIS to LWIR astronomical detectors: experimental demonstration", Proc. of SPIE, vol. 9915, no. 991517, 2016.
[8] J. R. Janesick, "Scientific charge-coupled devices", SPIE - The International Society for Optical Engineering, 2001.
[9] D. Kavaldjiev, Z. Ninkov, "Subpixel sensitivity map for a charge-coupled device sensor", Opt. Eng., vol. 37, pp. 948–954, 1998.
[10] I. Swindells, R. Wheeler, S. Darby, S. Bowring, D. Burt, R. Bell, et al., "MTF and PSF measurements of the CCD273-84 detector for the Euclid visible channel", Proc. of SPIE, vol. 9143, pp. 1–8, 2014.
[11] S.-M. Niemi, M. Cropper, M. Szafraniec, T. Kitching, "Measuring a charge-coupled device point spread function", Exp. Astron., vol. 39, pp. 207–231, 2015.
[12] J. Endicott, S. Darby, S. Bowring, D. Burt, T. Eaton, A. Grey, et al., "Charge-Coupled Devices for the ESA Euclid M-class mission", Proc. of SPIE, vol. 8453, pp. 1–8, 2012.
[13] Technical note on the MTF of CCD sensors, A1A-CCDTN105, Issue 5, e2v technologies.
[14] S. E. Reichenbach, S. K. Park, R. Narayanswamy, "Characterizing digital image acquisition devices", Optical Engineering, vol. 30, no. 2, pp. 170–177, 1991.
[15] P. W. Nugent, J. A. Shaw, M. R. Kehoe, C. W. Smith, T. S. Moon, R. C. Swanson, "Measuring the modulation transfer function of an imaging spectrometer with rooflines of opportunity", Optical Engineering, vol. 49, no. 10, 2010.
[16] A. Karcher, C. J. Bebek, W. F. Kolbe, D. Maurath, V. Prasad, M. Uslenghi, et al., "Measurement of lateral charge diffusion in thick, fully depleted, back-illuminated CCDs", IEEE Trans. Nucl. Sci., vol. 51, no. 5, pp. 2231–2237, 2004.
[17] T. P. Biesiadzinski, "Near-infrared instrumentation and millimetre-wave simulation for cosmological surveys", Doctoral thesis, 2013.
[18] M. I. Andersen, A. N. Soresen, "An interferometric method for measurement of the detector MTF", Exp. Astron., vol. 8, pp. 9–12, 1998.
[19] M. Willemin, N. Blanc, G. K. Lang, S. Lauxtermann, P. Schwider, P. Seitz, et al., "Optical characterization methods for solid-state image sensors", Optics and Lasers in Engineering, vol. 36, pp. 185–194, 2001.
[20] P. Z. Tackacs, I. Kotov, J. Franck, P. O'Connor, V. Radeka, D. M. Lawrence, "PSF and MTF measurement methods for thick CCD sensor characterization", Proc. of SPIE, vol. 7742, no. 774207, 2010.
[21] N. Guérineau, B. Harchaoui, J. Primot, "Generation of achromatic and propagation-invariant spot arrays by use of continuously self-imaging gratings", Opt. Lett., vol. 26, no. 7, 2001.
[22] L. Mugnier, "From Data to Object: the Inverse Problem", chap. 9, sec. 6 of "Observational Astrophysics", pp. 575–596, by P. Léna et al., Springer, 2012.
[23] D. Gratadour, D. Rouan, L. M. Mugnier, T. Fusco, Y. Clénet, E. Gendron and F. Lacombe, "Near-infrared adaptive optics dissection of the core of NGC 1068 with NAOS-CONICA", A&A, vol. 446, pp. 813–825, 2006.