6 February 2018 Contrast computation methods for interferometric measurement of sensor modulation transfer function
Abstract
Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
Battula, Georgiev, Gille, and Goma: Contrast computation methods for interferometric measurement of sensor modulation transfer function

1.

Motivation

Refinements of traditional film and digital photography, such as light-field capture,1 superresolution,2 high dynamic range, etc., were once confined to professional photographers and optics researchers. Now, these features are becoming available in consumer cameras, especially the ubiquitous cell phone camera. In addition, there is a push toward reducing the size of the optical system, including lenses, focusing mechanisms, sensors, and their pixels. It is the reduced sensor pixel size that motivates the study and measurement of sensor quality.

Image sensors are a critical part of digital cameras and computational photography in general. They are usually characterized by signal-to-noise ratio (SNR), wavelength response, and dynamic range. These metrics are derived from intrinsic parameters, such as noise, quantum efficiency, and full-well capacity. However, there are additional, less-commonly considered sensor parameters influencing image quality. We will focus on the sensor modulation transfer function (MTF) with corresponding intrinsic parameters pixel crosstalk3 and pixel fill factor.4

Generally, camera MTF is a critical characteristic related to the image quality of any camera. It is essentially the product of lens MTF and sensor MTF (assuming the imaging system is linear and shift independent). The quality of results in computational photography applications such as stereo depth, multi-image superresolution, and deconvolution, where a goal is to capture subpixel information, specifically requires a high MTF of both sensor and lens well above the Nyquist frequency.1,2,5 Examination of measured sensor MTF at a wide range of frequencies allows us to understand pixel active area and shape in detail3 and to assess sensor utility for an intended application.

At the sensor level, ideal sensor MTF is reduced by the finite pixel size due to photon integration over the active pixel area. Sensor MTF is also reduced by crosstalk. Generally, crosstalk between pixels occurs when light falling on one pixel influences the response of its neighbors. Overall, pixel crosstalk is part optical crosstalk and part electrical crosstalk.4 Optical crosstalk occurs when some of the photons falling on a given pixel propagate to neighboring pixels. Electrical crosstalk occurs when free electrons from a given pixel diffuse to neighboring pixels.

As sensor pixels shrink, evaluating the sensor MTF using common test chart-based methods (as defined in ISO 12233) may exhibit limitations on measurability or accuracy at high frequencies. This can happen due to limitations of the optics that create the image on the sensor. In this paper, we expand our previous work6 and describe the interferometric methods we have developed to handle ultrasmall pixels and solve problems encountered with earlier approaches. We utilize the detailed mathematics describing sensor response by considering the effects of discretization and spectral leakage (Sec. 3.2.3) and explicitly derive Fourier contrast in terms of Dirichlet kernels7 [Eq. (22)].

2.

Introduction

Sensor MTF is generally defined as the spatial frequency response of the image sensor (in cycles per pixel pitch) in the absence of optics. This motivates an interferometric approach to measure the accurate MTF.8 Typically, interferometers can generate a wide range of sinusoidal signals on the surface of a sensor. One advantage of such an approach to measure the MTF is that there is no mechanical touching of the sensor surface as may be required with the lens-free slanted-edge method.9 Because the approach is lens-free, sensor MTF is measured directly and not confounded by lens MTF. Also, local pixel response and its spatial uniformity across the sensor can be analyzed at the same time as the MTF is measured.

2.1.

Setup

Our proposed interferometric setup is a combination10 of Mach–Zehnder and Young interferometers.11,12 One improvement, compared to prior work,8 that directly relates to our approach is that with this setup, Fig. 1, we produce a reliable and clean signal in the following way. A Young interferometer12 produces a clean optical signal, free of both noise and speckle, because it has no optical elements other than pinholes. However, the output signal has very low intensity due to the light passing through two stages of pinholes. Also, the Young interferometer usually has a path length on the order of meters and so may be impractical due to high intensity requirements. To address these issues, we considered the Mach–Zehnder interferometer;13 however, the traditional setup introduces speckle and noise due to imperfections of the lens/prism surfaces and surface dust. Our design is essentially a Mach–Zehnder interferometer with microscope objectives and pinholes at the output. It has the advantages of both Mach–Zehnder and Young in having high-energy output and being clean of optical artifacts, especially speckle.

Fig. 1

(a) Our interferometer for measuring sensor MTF (not to scale). (b) Picture of our interferometer built on a breadboard with a 25-mm mounting-hole grid.


Another element in our design is that we are using a polarized source and polarizers just before the final objectives. This produces nearly complete constructive and destructive interference, resulting in 98% modulation of the optical signal.

Figure 1(a) shows a diagram of our interferometer, and Fig. 1(b) shows a picture of our current implementation. The beam splitter and first-surface mirrors arranged as in a Mach–Zehnder interferometer split the laser beam 50–50. Polarizers maximize the interference. We are equalizing the beam intensities after splitting using a neutral density filter. Fine adjustment of the remaining small difference is done computationally based on the measurements performed.14

Two microscope objectives and two pinholes are used at the end of the optical path to create fringes as in a Young interferometer. Each pair of mirrors after the beam splitter spans the full four degrees of freedom in the light field. This design achieves easy and fully adjustable aiming of the beams into the objectives.

2.2.

Interference

Interference of electromagnetic waves is produced by the addition of the electric fields of two or more beams. Intensity is proportional to the electric field squared. Details can be found in Ref. 15. The resulting intensity, I, at the sensor can be written as

(1)

I = I1 + I2 + 2αγ·√(I1·I2)·cos(ϕ1 − ϕ2),
where I1 and I2 are the time-averaged intensities of the two beams. The cos(ϕ1 − ϕ2) term produces the sinusoidal fringes based on the path difference between the beams, where ϕ1 − ϕ2 is the phase difference. The parameter α is the degree of coherence between the two light sources, with a maximum value of 1 for perfect coherence. The parameter γ is the cosine of the angle between the polarizations of the two electric field vectors.

2.3.

Pixel Sampling Equation

For simplicity, our equations are written for a single spatial dimension x on the sensor. Equations can be extended to two dimensions.3,14 Let Πε(x) represent the active pixel response function, the rectangle function of pixel width ε in the x-direction. It can be written as Πε(x) ≜ H(x + ε/2) − H(x − ε/2), where H(x) represents the unit step function. Similarly, Πl(x) denotes a rectangle function for sensor width l.

The optical signal f(x), i.e., the intensity I from Eq. (1), is sampled at pixel pitch p, integrated over the active pixel size ε, and windowed by the sensor length l. The result can be written as

(2)

g(x) = { ∫_{x−ε/2}^{x+ε/2} f(ξ) dξ } · Шp(x) · Πl(x).

The equation can also be written as

(3)

g(x) = { ∫ f(ξ)·Πε(x − ξ) dξ } · Шp(x) · Πl(x),
where Шp(x) is the Dirac comb function of period p. The integral can be written as the convolution f*Πε, thus

(4)

g(x) = {f * Πε}(x) · Шp(x) · Πl(x).

In the case of a single-frequency sinusoidal signal, f(x) will be f(x)=Acos(2πu0x+ϕ0)+B, where A is the AC amplitude, B is the DC offset, and u0 is the frequency of the optical fringes formed on the sensor (in cycles per pixel pitch). We assume an odd number of pixels and origin in the middle of the central pixel. The term ϕ0 is the phase difference between the sinusoidal fringe and pixel grid measured at the center of the sensor. Figure 2 shows the sampling. The captured signal after sampling is

(5)

g(x) = { ∫_{x−ε/2}^{x+ε/2} [A·cos(2πu0ξ + ϕ0) + B] dξ } · Шp(x) · Πl(x).

Fig. 2

Pixel sampling of the sinusoidal optical signal.


Integrating in the above equation and simplifying using the normalized sinc function sinc(t) ≜ sin(πt)/(πt),

(6)

g(x) = [Aε·sinc(u0ε)·cos(2πu0x + ϕ0) + Bε] · Шp(x) · Πl(x).

Taking x at discrete pixel locations x = np, where n is an integer index and p is the pixel pitch,

(7)

g(n) = { 0,                                    |n| > N/2
       { Bε + Aε·sinc(εu0)·cos(2πu0np + ϕ0),   |n| ≤ N/2,
where N is the number of pixels in the x-direction on the sensor, i.e., l = Np.
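As a check on Eq. (7), the pixel-integrated samples can be computed directly. The following sketch assumes NumPy; the parameter values are hypothetical illustrations, not our experimental settings.

```python
import numpy as np

def sampled_fringe(u0, phi0=0.0, A=1.0, B=1.0, eps=1.0, p=1.0, N=1001):
    """Evaluate g(n) from Eq. (7): pixel-integrated samples of a sinusoidal fringe.

    u0 is the fringe frequency in cycles per pixel pitch, and eps is the active
    pixel width. Integrating the cosine over each active area yields the
    sinc(eps*u0) attenuation factor (np.sinc is the normalized sinc).
    """
    n = np.arange(N) - N // 2  # odd N, origin at the central pixel
    return B * eps + A * eps * np.sinc(eps * u0) * np.cos(2 * np.pi * u0 * n * p + phi0)

g = sampled_fringe(u0=0.25)
```

The AC amplitude of the returned samples is attenuated by sinc(εu0) relative to the optical fringe amplitude A, exactly the factor that the MTF analysis below sets out to measure.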

3.

Contrast Computation Methods

To describe the captured image on the sensor, Eq. (7) takes the following form:

(8)

g(n) = { 0,                            |n| > N/2
       { Bε + Mε·cos(2πu0np + ϕ0),     |n| ≤ N/2.

The variables are M, B, u0, ϕ0, and ε. In an ideal case with no crosstalk, M = A·sinc(εu0). In a real case, M could be some other function of u0 and ε. For a given measurement, u0 is fixed. It can be calculated from the setup geometry by the formula u0 = p/d, where d is the distance between fringes, as explained with Eq. (28). ϕ0 is controlled as described in Sec. 3.1.3. ε is a pixel parameter that affects the shape of the MTF, as we will see in Sec. 4.1. From each measurement, we could estimate the values of M and B based on spatial or frequency analysis. However, we need only their ratio. Our analysis proceeds by measuring g(n) for many values of u0.

Contrast, C, is defined as the ratio of AC component to DC component

(9)

C = M/B,
and the MTF is calculated as contrast as a function of frequency. We obtain the MTF experimentally by estimating the M/B ratio using the contrast methods explained later. A captured image contains intensity data for N samples, giving us N equations in five unknowns for each measured frequency (e.g., N pixels in the x-direction of the sensor at Nyquist). This can be posed as a harmonic regression or curve-fitting problem that may have multiple solutions due to aliasing.

This paper proposes improvements to two traditional contrast calculation methods for sinusoidal fringes captured as sensor images. If the active pixel response is a Π function with no crosstalk, the MTF will be a sinc function. In practice, active pixel response may not be a rectangle function, but our current analysis estimates equivalent pixel width ε under that assumption. This is the first step toward a more detailed analysis of active pixel shape and size.

3.1.

Michelson Contrast

Michelson contrast, CMichelson, is defined as

(10)

CMichelson = (gmax − gmin) / (gmax + gmin),
where gmax and gmin are the extrema of signal g(n). In the case of a sinusoidal signal, such as g(n) in Eq. (8), extrema are observed when the cosine term evaluates to +1 and −1, respectively. It is easy to see that for well-sampled data, Michelson contrast will be equal to M/B.

Discretization of the signal produces different maximum and minimum values, based on sampling frequency and initial phase ϕ0. The extremal values in the sampled data also depend on the number of sinusoidal periods that fit in the sensor width. When the optical fringe period exceeds sensor width, the contrast may not result in M/B.

3.1.1.

Noise effect on Michelson contrast

Michelson contrast is highly sensitive to noise. Since it is calculated only from extrema, i.e., from two pixels, noise directly affects the contrast value [e.g., Fig. 3(a)]. To reduce the effect of noise on Michelson contrast, finding an average consensus of contrast from microregions is useful.

Fig. 3

(a) A captured fringe image and its extrema and (b) dust particle artifact in the image.


Artifacts affect the location of extrema leading to wrong results [e.g., Fig. 3(b)]. By avoiding such regions or taking the average of microcontrasts, we reduce noise and suppress deviations resulting from artifacts.
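The microregion idea can be sketched in a few lines. This is a minimal NumPy illustration with a hypothetical window size and a synthetic dust dropout, not the exact procedure of our implementation.

```python
import numpy as np

def michelson(g):
    return (g.max() - g.min()) / (g.max() + g.min())

def michelson_micro(g, win=50):
    """Average of Michelson contrasts over non-overlapping microregions."""
    chunks = [g[i:i + win] for i in range(0, len(g) - win + 1, win)]
    return float(np.mean([michelson(c) for c in chunks]))

n = np.arange(1000)
g = 1.0 + 0.5 * np.cos(2 * np.pi * 0.05 * n)   # true contrast 0.5
g_dirty = g.copy()
g_dirty[400] = 0.05                            # simulated dust-particle dropout

c_global = michelson(g_dirty)      # global extrema are badly corrupted
c_micro = michelson_micro(g_dirty)  # only one microregion is affected
```

Only the one microregion containing the artifact reports a wrong value, so the averaged estimate stays close to the true contrast while the global extrema-based value is far off.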

3.1.2.

Phase

In the case where the optical fringe period is comparable to pixel size, Michelson contrast is also sensitive to the phase difference, ϕ0, between the pixel grid and the optical fringe signal. Figure 4 shows pixel sampling and Michelson contrast variation for different ϕ0 values at the Nyquist and half-Nyquist frequencies.

Fig. 4

Pixel sampling shown at (a and b) Nyquist frequency and (c and d) half Nyquist.


The effects of phase difference ϕ0 can be demonstrated with a simple simulation. Figure 5 shows the simulated Michelson contrast for two different choices of initial phase ϕ0. We take sinusoidal signals of different spatial frequencies for 1000 logarithmically spaced data points. Pixel pitch is 1 unit. We integrate each signal assuming pixel pitch equal to the active pixel size and zero crosstalk. We sample the data at 1000 pixels, calculate Michelson contrast, and plot the MTF.

Under these conditions, we observe sudden drops at certain frequencies and phases. Figures 5(a) and 5(b) demonstrate sudden drops for phases 0 and π/2. Contrast varies based on the maxima and minima of the term g(n) in Eq. (8), observed over 1000 pixels. Maximum contrast will be observed when cos(2πu0np + ϕ0) reaches the maximum +1 and minimum −1 for at least some values of n.

Fig. 5

Simulated MTF plotted with Michelson contrast: (a) ϕ0 = π/2 and (b) ϕ0 = 0.


For some frequency and phase combinations, the cosine term can result in repetitive values over n. For example, when u0 = 1/2 and ϕ0 = π/2, the cos(nπ + π/2) term will be 0 for all n [Fig. 4(b)]. When u0 = 1/4 and ϕ0 = π/4, the resulting cos[(2n + 1)π/4] term will be either 1/√2 or −1/√2 [Fig. 4(c)]. Similar behavior is often observed at other integer fractions of the Nyquist frequency. These are the locations of the observed drops.

For neighboring frequencies, very close to the observed-drop frequencies, the cosine term will not result in a constant value but will be varying over n across 1000 pixels. Maximum and minimum picked up over 1000 pixels will typically result in a significantly larger contrast value. This change, when observed with limited granularity, will appear as a sudden contrast drop.
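The Nyquist-phase behavior above can be reproduced in a few lines. This sketch assumes NumPy, unit pitch, ε = p, and A = B = 1, per Eq. (8).

```python
import numpy as np

def michelson(g):
    return (g.max() - g.min()) / (g.max() + g.min())

def sampled(u0, phi0, N=1000):
    # Pixel-integrated samples per Eq. (8): unit pitch, active size = pitch.
    n = np.arange(N)
    return 1.0 + np.sinc(u0) * np.cos(2 * np.pi * u0 * n + phi0)

# At Nyquist (u0 = 0.5), phase 0 samples the fringe extrema on every pixel,
# while phase pi/2 samples only the zero crossings: contrast collapses.
c_aligned = michelson(sampled(0.5, 0.0))         # ~ sinc(0.5) = 2/pi
c_quadrature = michelson(sampled(0.5, np.pi / 2))
```

The aligned phase recovers the full sinc-limited contrast 2/π, while the quadrature phase yields essentially zero, which is exactly the sudden drop seen in Fig. 5(a) at Nyquist.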

3.1.3.

Moiré pattern

For each optical fringe frequency measurement, an optimal pixel grid-to-fringes alignment (or phase) should result in a maximal contrast. In this way, we could plot the perfect curve without sudden drops. Figure 5 shows varying drops for two phases. To set the alignment required for maximal contrast, additional individual measurements with varying initial phase (ϕ0) are required.

To obviate problems with this procedure, we use an alternative solution based on a moiré pattern approach. This approach allows for the collection of data from multiple phases in a single two-dimensional (2-D) image. The sensor needs to be slightly rotated relative to the fringes of the optical signal. For a small angle of rotation of the image sensor, the fringe lines and pixel grid sampling form beats in the 2-D space. Figure 6 shows moiré pattern beats formed horizontally for vertical fringes. Each horizontal pixel row has a different contrast resulting from a different phase alignment. This helps in choosing the maximum contrast line for calculating the MTF values unaffected by the phase difference variation (ϕ0).
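A small simulation illustrates the idea; the rotation angle and image size below are hypothetical, and point sampling is used for brevity instead of pixel integration. A slight rotation makes the fringe phase advance slightly from row to row, so scanning the rows sweeps through all phase alignments.

```python
import numpy as np

theta = 0.01        # rad, slight sensor rotation (hypothetical)
u0 = 0.5            # fringe frequency in cycles per pixel pitch (Nyquist)
rows, cols = 400, 400
y, x = np.mgrid[0:rows, 0:cols]

# Vertical fringes sampled on the rotated pixel grid.
g = 1.0 + np.cos(2 * np.pi * u0 * (x * np.cos(theta) + y * np.sin(theta)))

# Per-row Michelson contrast: each row sees a different effective phase phi0.
gmax, gmin = g.max(axis=1), g.min(axis=1)
row_contrast = (gmax - gmin) / (gmax + gmin)
```

Some rows land near perfect alignment (high contrast) and some near quadrature (contrast near zero); picking the best row gives the phase-independent MTF value.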

Fig. 6

Moiré pattern beats formed on slightly rotated sensor by vertical fringes: (a) simulated image and (b) actual captured image from our sensor.


3.2.

Fourier Contrast

For a continuous signal in the following form:

(11)

g(x) = Bε + Mε·cos(2πu0x + ϕ0) = Bε + (Mε/2)·[e^(i(2πu0x+ϕ0)) + e^(−i(2πu0x+ϕ0))],
contrast is defined in Sec. 3 as the ratio of M and B values. Fourier analysis is a well-known method for finding the AC and DC components in unknown frequency signal data. The Fourier transform of the spatial signal is calculated, and a simple peak detection algorithm can be used for identifying dominant frequencies. The estimation of contrast, using the measured magnitudes of the fundamental frequency and the DC, is correct in an ideal case. However, it is subject to deviations due to discretization and spectral leakage as explained in Sec. 3.2.3.

3.2.1.

Formulation in terms of periodic sinc

The Fourier transform G(u) of the sampled signal g(x) from Eq. (4), using engineering notation for convolutions,16 is

(12)

G(u) = {F(u)·ε·sinc(εu)} * (1/p)·Ш1/p(u) * l·sinc(lu),
where p is the pixel pitch, l is the length of sensor Np, ε is the pixel active size, and u is the function variable in the frequency domain. The Fourier transform of the optical signal f(x) is

(13)

F(u) = F{B + A·cos(2πu0x + ϕ0)} = B·δ(u) + (A/2)·[δ(u + u0)·e^(−iϕ0) + δ(u − u0)·e^(iϕ0)].
The discrete time Fourier transform, substituting Eq. (13) in Eq. (12), is

(14)

G(u) = {(B·δ(u) + (A/2)·[δ(u + u0)·e^(−iϕ0) + δ(u − u0)·e^(iϕ0)])·ε·sinc(εu)} * (1/p)·Ш1/p(u) * l·sinc(lu),

(15)

G(u) = (lε/p)·{B·sinc(ε·0)·δ(u) + (A/2)·[sinc(εu0)·δ(u + u0)·e^(−iϕ0) + sinc(εu0)·δ(u − u0)·e^(iϕ0)]} * sinc(lu) * Ш1/p(u).

Further, after convolving with the sinc function

(16)

G(u) = (lε/p)·{B·sinc(lu) + (A/2)·sinc(εu0)·sinc(l(u + u0))·e^(−iϕ0) + (A/2)·sinc(εu0)·sinc(l(u − u0))·e^(iϕ0)} * Ш1/p(u).
The convolution with the term Ш1/p(u) represents the Nyquist folding in the frequency domain, and sinc(u0ε) is the modulation term.

3.2.2.

Periodic sinc (Dirichlet kernel)

The convolution of a finite impulse train and a sinc function is a periodic sinc function or Dirichlet kernel,7,17 written as

(17)

(1/p)·Ш1/p(u) * l·sinc(lu) = ∫ Шp(x)·ΠNp(x)·e^(−iαx) dx = Σ_{k=−(N−1)/2}^{(N−1)/2} e^(−iαk) = sin(Nα/2)/sin(α/2),
where α ≜ 2πu. For the Dirichlet kernel, we will be using the notation DN(α) ≜ sin(Nα/2)/sin(α/2).
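The finite-sum identity in Eq. (17) can be verified numerically. This sketch assumes NumPy; N and α are arbitrary illustration values.

```python
import numpy as np

def dirichlet(alpha, N):
    """D_N(alpha) = sin(N*alpha/2) / sin(alpha/2) (valid away from alpha = 2*pi*m)."""
    return np.sin(N * alpha / 2) / np.sin(alpha / 2)

N = 11                                        # odd pixel count, as assumed in Eq. (17)
alpha = 0.3
k = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
finite_sum = np.exp(-1j * alpha * k).sum()    # the impulse-train side of Eq. (17)
```

The symmetric sum is real and matches the closed form; for the even N = 1000 used in the simulations, DN(π) evaluates to 0, which is the Nyquist case used later in Eq. (23).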

Using Eq. (17) to simplify Eq. (15), the resulting G(u) can be expressed as

(18)

G(u) = ε·{B·DN(2πpu) + (A/2)·sinc(u0ε)·DN[2πp(u + u0)]·e^(−iϕ0) + (A/2)·sinc(u0ε)·DN[2πp(u − u0)]·e^(iϕ0)}.

3.2.3.

Discrete Fourier transform and spectral leakage

To calculate an N-point discrete Fourier transform (DFT), we consider the transform at discrete frequencies u = k/l, where k is an integer varying over the N integers in the observed window, and l = Np is the length of the sensor.

Let Ĝ(k) be the discrete transform of G(u)

(19)

Ĝ(k) = ε·{B·DN(2πk/N) + (A/2)·sinc(εu0)·DN[2π(k + lu0)/N]·e^(−iϕ0) + (A/2)·sinc(εu0)·DN[2π(k − lu0)/N]·e^(iϕ0)}.
The Dirichlet kernel DN(2πk/N) evaluates to ±N when k is an integer multiple of N and evaluates to 0 for all other integer k.

In evaluating the Dirichlet kernel terms DN[2π(k ± lu0)/N] in Eq. (19), two cases arise:

  • i. If lu0 is an integer:

    This case happens when the spatial signal fits the sensor width in an integer number of periods. The Dirichlet terms evaluate to ±N at the integer multiples of the fundamental frequency and 0 at all other discrete frequencies. In this case, M can be calculated as follows: [Fig. 7(a)]

    (20)

    M = |G(u0)| + |G(−u0)|.

  • ii. If lu0 is not an integer:

    This case happens when the spatial signal does not fit the sensor width in an integer number of periods, and the Dirichlet terms evaluate to nonzero values at the sampled frequencies. The result appears as a combination of sinusoids at different frequencies. This phenomenon is known as spectral leakage18 [see Fig. 7(b)]. In this case, M ≠ |G(u0)| + |G(−u0)|.
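The two cases can be seen directly with an FFT. This sketch assumes NumPy; the amplitude and fringe frequencies are hypothetical.

```python
import numpy as np

N = 1000                       # pixels; unit pitch, so l = N
n = np.arange(N)

def two_bin_amplitude(cycles, A=0.4, B=1.0):
    """Estimate M from only the +/-u0 DFT bins, as in Eq. (20)."""
    g = B + A * np.cos(2 * np.pi * (cycles / N) * n)
    G = np.fft.fft(g) / N
    k = int(np.floor(cycles + 0.5))   # DFT bin nearest the fringe frequency
    return abs(G[k]) + abs(G[-k])

m_fit = two_bin_amplitude(50.0)    # l*u0 = 50 is an integer: no leakage
m_leak = two_bin_amplitude(50.5)   # l*u0 = 50.5: energy leaks into other bins
```

With an integer number of periods the two bins recover M = 0.4 exactly; at a half-bin offset the same estimator falls well short, because the leaked energy is spread across many bins.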

Fig. 7

DFT of a finite length sinusoidal sampled signal: (a) when the period fits the sensor width and (b) when the period does not fit the sensor width.


3.2.4.

Contrast

When spectral leakage occurs for certain frequencies, as mentioned in Sec. 3.2.3, the value of M, the AC amplitude, cannot be represented as a sum of individual magnitude values.

When spectral leakage occurs, the shape of the spectrum distribution follows from the Dirichlet kernel. The power at a single frequency is spilled to other nearby frequencies in such a way that the combined spectral power from all the frequencies remains the same. This can be understood by considering Parseval's theorem. The value of M, the AC component, can be evaluated by summing the power at nonzero frequencies and taking the square root. The DC component can be obtained from the FFT value at 0 frequency. In the case of spectral leakage, the |Ĝ(0)| value is contaminated by the AC signal as well (see Appendix A). For large data, B ≈ |Ĝ(0)|.

Concluding from the above, Fourier contrast, CFourier, can be defined as

(21)

CFourier ≜ √(Σk |Ĝ[k]|² − |Ĝ[0]|²) / |Ĝ[0]|.

This approach has an added advantage in terms of simplicity. According to Parseval’s theorem, normalized power in the Fourier domain and power in the spatial domain are equal. Thus, we can calculate and utilize power in the spatial domain in the case of a clean signal. Also, Fourier contrast is conceptually similar to RMS contrast.19 We divide the standard deviation by the mean of the data, which is similar to the coefficient of variation, used in statistics.
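As a sketch (assuming NumPy), Eq. (21) and the Parseval/RMS view give identical numbers for a clean fringe; the signal parameters below are hypothetical.

```python
import numpy as np

def fourier_contrast(g):
    """Fourier contrast, Eq. (21): sqrt(sum_k |G[k]|^2 - |G[0]|^2) / |G[0]|."""
    G = np.fft.fft(g) / len(g)     # normalized DFT
    dc = np.abs(G[0])
    return np.sqrt(np.sum(np.abs(G) ** 2) - dc ** 2) / dc

n = np.arange(1000)
g = 1.0 + np.cos(2 * np.pi * 0.05 * n)   # 50 full periods: no leakage

c_fourier = fourier_contrast(g)
c_rms = g.std() / g.mean()               # RMS contrast (coefficient of variation)
```

By Parseval's theorem, the total normalized spectral power equals the mean squared signal, so the Fourier-domain expression reduces exactly to standard deviation over mean, i.e., RMS contrast.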

Figure 8 shows the simulation of Fourier contrast for different frequencies (in cycles per pixel pitch). We follow the same simulation setup as mentioned for the Michelson contrast simulation in Fig. 5. The curve has a sudden jump at Nyquist (and its odd harmonics), which when examined closely is seen to be an oscillation with a strong peak exactly at Nyquist.

Fig. 8

Simulated Fourier contrast—MTF.


This sudden-jump oscillation can be understood in the frequency domain in terms of aliasing as the optical fringe frequency approaches Nyquist. The spectral leakage distribution [Fig. 7(b)] from the central window overlaps with its neighboring window’s distribution. In the spatial domain, this can be observed in the signal energy variation with beats formation as the optical fringe frequency approaches Nyquist. Near Nyquist frequency, the number of beats within the sensor width decreases and energy increases, with oscillation. At Nyquist frequency, the signal will not beat (i.e., infinite period beat) and energy reaches maximum, resulting in contrast value 2A·cos(ϕ0)/(πB).

Using algebraic manipulations, the Fourier contrast value in Eq. (21) can be calculated as follows (see Appendix A; again, α ≜ 2πu):

(22)

CFourier = M·√(1 + DN(2α)·cos(2ϕ0)/N − 2·DN²(α)·cos²(ϕ0)/N²) / {√2·B·[1 + M·DN(α)·cos(ϕ0)/(BN)]}.

For an active pixel response function Πε, M = A·sinc(εu0). We recognize several distinct cases in this contrast evaluation.

  • i. At Nyquist frequency (u0 = 0.5): DN(α) = DN(π) = 0 and DN(2α) = DN(2π) = N; using the trigonometric identity 1 + cos(2ϕ0) = 2cos²(ϕ0), the contrast evaluates to

    (23)

    CFourier = M·√(1 + N·cos(2ϕ0)/N − 0/N²) / [√2·B·(1 + 0)] = √2·M·|cos ϕ0| / (√2·B) = A·sinc(0.5ε)·|cos ϕ0| / B.

    • a. When ϕ0 = 0 or π, CFourier = A·sinc(0.5ε)/B, and it will be 2/π when ε = p and A = B.

    • b. When ϕ0 = π/2, CFourier = 0.

  • ii. At the sampling frequency (u0 = 1): α = 2π, DN(α) = N, and DN(2α) = N

    (24)

    CFourier = M·√(1 + DN(2α)·cos(2ϕ0)/N − 2·DN²(α)·cos²(ϕ0)/N²) / {√2·B·[1 + M·DN(α)·cos(ϕ0)/(BN)]}
             = M·√(1 + N·cos(2ϕ0)/N − 2·N²·cos²(ϕ0)/N²) / {√2·B·[1 + M·N·cos(ϕ0)/(BN)]}
             = M·√(1 + cos(2ϕ0) − 2·cos²(ϕ0)) / {√2·B·[1 + M·cos(ϕ0)/B]} = 0.

    By the trigonometric identity 1 + cos(2ϕ0) = 2cos²(ϕ0), the numerator evaluates to 0 for any phase ϕ0.

  • iii. Very close to the Nyquist frequency, CFourier oscillates based on the Dirichlet kernel values.20

  • iv. At a low and non-Nyquist frequency, contrast approximately evaluates to

    (25)

    CFourier ≈ M·√(1 + 0 − 0) / [√2·B·(1 + 0)] = A·sinc(εu0) / (√2·B) = sinc(εu0)/√2 for A = B.
    The maximum contrast is reached at very low frequency (MTF at 0 frequency), where this formula evaluates to 1/√2. There are many definitions of contrast, and Michelson contrast is widely used. To match our definition of Fourier contrast with Michelson contrast, multiplication by a factor of √2 is required as normalization. With this normalization, the maximum contrast will be equal to 1, and the normalized Fourier contrast will match Michelson contrast whenever there are no sudden jumps. Including this normalization, Fourier contrast becomes

    (26)

    CFourier_norm = √2·CFourier = √2·√(Σk |Ĝ[k]|² − |Ĝ[0]|²) / |Ĝ[0]|.

4.

Simulations

4.1.

Varying Pixel Size

Figure 9 shows theoretical simulated sensor Fourier MTF plots for varying active pixel size ε with constant pixel pitch p = 1, with initial phase values of ϕ0 = 0 and ϕ0 = π/2. An ideal symmetric Πε active pixel response function is assumed. We use the same simulation settings as mentioned for the Fig. 5 plots. Equation (26) is used for calculation of Fourier contrast.

Fig. 9

Theoretical Fourier contrast MTF plots for pixels of the same pitch p = 1 but with three different active area sizes: (a) ϕ0 = 0 and (b) ϕ0 = π/2.


In the ideal case of an infinite sensor and infinitesimal sampling, the graph generated would follow sinc(εu0). However, in the graphs where pixel pitch is different from pixel active size, a sudden drop is observed at integer multiples of the sampling frequency. This drop results from the fact that contrast Eq. (26) evaluates to 0, by substituting u0=1 in the Dirichlet kernel terms. The spatial explanation is that, at u0=1, the optical fringe signal period is 1 pixel. This means that for any phase, the integration of the optical signal over the active pixel area results in the same constant value at each pixel. The contrast for this uniform sampled signal data would be zero.

Figure 13 in Sec. 6.1 shows the measured values for our proposed methods. The sudden drop at the sampling frequency confirms agreement with the theoretical plot. In the case where sensor pixel pitch and pixel active size are unknown, precise measurement of the MTF at high frequencies can be studied, with the goal of estimating both values. The locations of sudden drops can be used to estimate the sampling frequency and thus estimate true pixel pitch. With the assumption of rectangular pixel response, active pixel size can be calculated by fitting a sinc function to the data and finding its zero. For example, the location of zero in the red plot is calculated to be 1.25, indicating an active pixel size of 0.8 times pixel pitch.
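Under the rectangular-pixel assumption, that estimation step can be sketched as follows. The "measured" MTF here is synthetic (assuming NumPy), with a hypothetical active size of 0.8 pixel pitches standing in for real data.

```python
import numpy as np

eps_true = 0.8                        # active pixel size, units of pixel pitch
u = np.linspace(0.05, 1.5, 2000)      # frequency axis, cycles per pixel pitch
mtf = np.abs(np.sinc(eps_true * u))   # synthetic stand-in for a measured MTF

# The first zero of |sinc(eps*u)| sits at u = 1/eps, so locating the first
# minimum of the curve recovers the equivalent active pixel size.
u_zero = u[np.argmin(mtf)]
eps_est = 1.0 / u_zero
```

With ε = 0.8, the zero falls at u = 1.25 cycles per pixel pitch, reproducing the value read off the red plot in the text.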

In general, the MTF curve is a “Fourier fingerprint” of the pixel response function. In our current work, we estimate the pixel response function assuming a rectangle function, Π, or the equivalent sinc function in the Fourier space. In principle, a similar derivation can be applied to different pixel response functions, which can be estimated from experimental data. In Eq. (8), M is a function of u0 and ε. This function can be modeled and estimated within a multiparametric representation. Also, the pixel active area could be irregular in shape in 2-D, and a related analysis can be done using directional optical fringes on the sensor. This approach would require a larger number of measurements using multiple angles and an involved analysis to estimate a 2-D response function and could be explored in detail in future work.

4.2.

Crosstalk Modeling Example

Interpixel crosstalk deteriorates the MTF shape. As pixel size decreases, the deterioration plays a significant role in sensor resolution. The active pixel response function for an ideal pixel is a rectangle function, Πε, with value 1 inside the pixel and 0 outside. For a simple illustration, we show how crosstalk affects the MTF in a one-dimensional (1-D) case. In a hypothetical model where ε=p and 20% of light energy is lost symmetrically to the immediately neighboring pixels, the pixel response kernel can be written as [0.1 0.8 0.1]. Alternatively, the overall active pixel response function can be written as 0.7Πε+0.1Π3ε (Fig. 10).

Fig. 10

Active pixel response function Πε, where ε=p: ideal function and function modeled with 20% crosstalk.


The Fourier transform of this crosstalk model is a combination of sinc functions

(27)

F[0.7Πε + 0.1Π3ε](u) = 0.7·sinc(εu) + 0.3·sinc(3εu).

Figure 11 shows the simulated MTF for this crosstalk model. We observe that the MTF value at Nyquist frequency is reduced, and this reduction can be used as a measure for the degree of crosstalk.
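The Nyquist reduction can be checked numerically (a sketch assuming NumPy): the symmetric 3-tap kernel [0.1 0.8 0.1] has frequency response 0.8 + 0.2·cos(2πu), which multiplies the ideal pixel sinc.

```python
import numpy as np

def mtf_crosstalk(u, eps=1.0):
    """Sec. 4.2 model: ideal pixel sinc times the [0.1 0.8 0.1] kernel response."""
    kernel_response = 0.8 + 2 * 0.1 * np.cos(2 * np.pi * u)
    return np.abs(kernel_response * np.sinc(eps * u))

nyq_ideal = np.sinc(0.5)        # 2/pi without crosstalk
nyq_xtalk = mtf_crosstalk(0.5)  # reduced by the kernel factor 0.8 - 0.2 = 0.6
```

At Nyquist the kernel response is 0.8 − 0.2 = 0.6, so this 20% crosstalk cuts the Nyquist MTF to 60% of the ideal value, in agreement with the sinc combination of Eq. (27).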

Fig. 11

MTF variation with and without crosstalk, ε=p.


4.3.

Modulation Transfer Function Simulations

We have generated simulations of a lens-free implementation of the slanted-edge method for measuring sensor MTF. Figure 12 compares simulations of the slanted-edge method to the Fourier contrast method. For the Fourier contrast simulation, we used the same simulation setup as for Fig. 8, with ε = p = 1 and the 20% crosstalk kernel from Sec. 4.2, yielding 0.7·sinc(εu) + 0.3·sinc(3εu) as the theoretical MTF. For the slanted-edge method, rather than using a captured image, we generated synthetic images where a sharp edge is placed at 5-deg, 10-deg, and 25-deg angles to the vertical axis.21 We applied the same 20% crosstalk kernel to the synthetic images and used the resulting images to evaluate the slanted-edge MTF using Imatest software.22

Fig. 12

MTF slanted-edge simulation compared to our Fourier contrast simulation. 20% crosstalk example.


Our Fourier contrast method applied to simulated sinusoidal fringes data with crosstalk produces results that match with the theoretical formula curve. The jump at Nyquist frequency comes as part of the Fourier method for a finite number of pixel samples in the sensor. This is explained as part of Fig. 8. The slanted-edge method applied to the simulated pixels with the same crosstalk follows the theoretical curve with a small deviation. This deviation increases with the tilt angle. It is known that by increasing the tilt angle, the MTF calculated by the software implementation of ISO12233 deteriorates.21,23 The close match between the simulation curves for the 5-deg tilt confirms that the Fourier contrast method and the slanted-edge method produce comparable results.

5.

Implementation

The interferometer was set up as described in Sec. 2.1. The sensor was attached to the rail as shown in Fig. 1(b). The interference fringes are created on the sensor surface, and the optical fringe spatial frequency is varied by moving the sensor on the rail. The spacing between optical fringes, d, can be written as15

(28)

d = λ / [2·sin(θ/2)] ≈ zλ/D,
where λ is the wavelength of the laser light source, z is the distance of the sensor from the pinholes, D is the separation between the pinholes, and θ is the view angle from a point on the sensor to the pinholes.

In this formulation, u0 = p/d, where u0 is in cycles per pixel pitch.
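For concreteness, the geometry can be sketched numerically. All numbers below are hypothetical; the wavelength and distances of our actual setup are not restated here.

```python
# Small-angle fringe spacing, Eq. (28): d = z * lambda / D, and u0 = p / d.
wavelength = 532e-9   # m, hypothetical green laser line
D = 1e-3              # m, hypothetical pinhole separation
z = 0.01              # m, hypothetical pinhole-to-sensor distance
p = 1.0e-6            # m, hypothetical pixel pitch

d = z * wavelength / D      # fringe spacing on the sensor
u0 = p / d                  # fringe frequency, cycles per pixel pitch
lp_per_mm = 1e-3 / d        # one fringe period = one line pair
```

Moving the sensor along the rail changes z, and with it d, which is how a single setup sweeps the wide frequency range quoted in the abstract.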

To create a high-contrast fringe signal, light cones from the pinholes are directed such that central concentration of each beam falls on the sensor. For each position z on the rail (each spatial frequency), four images are captured: both pinholes closed, both open, left closed, and right closed.

We capture multiple sensor images of optical fringes over a wide range of frequencies under darkroom conditions. Any remaining ambient illumination and fixed noise in the sensor are captured when both pinholes are closed, and this image is subtracted from the fringe and single-pinhole images, creating corrected images.

The measurements taken with each pinhole closed in turn are used to calculate a correction factor for contrast in the following way. When the light intensities of the interfering beams differ, there is a reduction in contrast. For example, for the optical fringe signal per Eq. (1), $I = I_1 + I_2 + 2\alpha\gamma\sqrt{I_1 I_2}\cos(\phi_1 - \phi_2)$, the contrast will be $\frac{2\alpha\gamma\sqrt{I_1 I_2}}{I_1 + I_2}$, where $\alpha$ is related to coherence, $\gamma$ is related to polarization, and $I_1$ and $I_2$ are the beam intensities.

For a perfect setup, the highest contrast is achieved when $I_1 = I_2$, $\alpha = 1$, and $\gamma = 1$. In general, due to imperfections, a correction factor is needed to compensate for the reduction. For our setup, we assume $\alpha = 1$ and $\gamma = 1$. We obtain $I_1$ and $I_2$ from the corrected single-pinhole images. Hence, we apply $\frac{I_1 + I_2}{2\sqrt{I_1 I_2}}$ as the multiplication factor to the measured contrast, on a per-pixel basis using the intensity values at each pixel. Phase effects explained in Sec. 3.1.2 are obviated by rotating the sensor slightly relative to the optical fringe orientation, producing a moiré pattern of beats in the image, as described in relation to Fig. 6.
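The dark-frame subtraction and the per-pixel beam-imbalance correction described above might be sketched as follows, assuming the four captures are available as NumPy arrays. The function names are ours, not from the original software.

```python
import numpy as np

def corrected_images(fringe, left_closed, right_closed, dark):
    """Subtract the dark (both-pinholes-closed) frame from the raw captures."""
    g  = fringe - dark         # corrected fringe image
    i1 = right_closed - dark   # beam intensity I1 (only the left pinhole open)
    i2 = left_closed - dark    # beam intensity I2 (only the right pinhole open)
    return g, i1, i2

def contrast_correction(i1, i2):
    """Per-pixel factor (I1 + I2) / (2*sqrt(I1*I2)) compensating unequal beams."""
    return (i1 + i2) / (2.0 * np.sqrt(i1 * i2))
```

For equal beams the factor is exactly 1, and it grows as the beam intensities diverge, restoring the contrast that the imbalance removed.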

We find minimum and maximum intensity in each sensor image row and calculate Michelson contrast with correction

(29)

$$ C_{\text{Michelson\_corr}} = \frac{I_1 + I_2}{2\sqrt{I_1 I_2}} \cdot \frac{g_{\max} - g_{\min}}{g_{\max} + g_{\min}}. $$

For Fourier contrast, we calculate the 1-D FFT of each row of the sensor image to find the AC and DC components and take their ratio. We apply a multiplication factor of 2, as discussed in the analysis of Eq. (22); i.e., using Eq. (26), we calculate $C_{\text{Fourier\_norm}}$. Thus, Fourier contrast with correction is

(30)

$$ C_{\text{Fourier\_corr}} = 2\,\frac{I_1 + I_2}{2\sqrt{I_1 I_2}} \cdot \frac{\sqrt{\sum_k |\hat{G}[k]|^2 - |\hat{G}(0)|^2}}{|\hat{G}(0)|}. $$

For each method, we choose the rows with the highest contrast and use the average of their contrast values as the final measure. The averaging produces a measure more robust to noise and artifacts.
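The two per-row contrast measures, without the beam-imbalance correction factor that multiplies both, might be sketched as follows. `fourier_contrast` implements the full-spectrum sum with the factor of 2 under our reading of Eqs. (29) and (30); the row-selection helper is an illustrative assumption.

```python
import numpy as np

def michelson_contrast(row):
    """Michelson contrast from the row extrema [second factor of Eq. (29)]."""
    gmax, gmin = row.max(), row.min()
    return (gmax - gmin) / (gmax + gmin)

def fourier_contrast(row):
    """Fourier contrast [second factor of Eq. (30)]: square root of the AC
    spectral power over the DC component, times the factor of 2."""
    G = np.fft.fft(row)
    ac_power = np.sum(np.abs(G) ** 2) - np.abs(G[0]) ** 2
    return 2.0 * np.sqrt(ac_power) / np.abs(G[0])

def best_rows_contrast(image, contrast_fn, n_rows=8):
    """Average the contrast of the n_rows rows with the highest contrast."""
    values = sorted((contrast_fn(r) for r in image), reverse=True)
    return float(np.mean(values[:n_rows]))
```

In practice the rows would come from the dark-frame-corrected fringe image, and the correction factor would be applied to the result.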

Choosing optimal contrast lines also causes Fourier contrast, at Nyquist frequency, to fall into case i(a) in Sec. 3.2.4, where the Dirichlet kernel has its maximum value. This is predicted as a sudden jump in the Fourier contrast MTF. No jump is predicted for Michelson contrast, which uses only the extrema values of the signal within the sensor width; the presence of beats does not affect the extrema, so the beat variation does not produce a sudden jump.

5.1.

Crosstalk Measurement

To estimate crosstalk24 for a given image sensor, the MTF value at Nyquist frequency (or MTF50) is usually used. At Nyquist, in the absence of confounding factors, the deviation of the observed MTF from the theoretical MTF quantifies the degree of crosstalk. A single measurement at Nyquist frequency suffices to determine the crosstalk. Refer to Sec. 4.2 for a simulation.
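Under the no-confounding-factors assumption, this estimate is the deviation of the observed value from the theoretical sinc model at Nyquist. The following is a hypothetical sketch; the helper name and the simple difference metric are our assumptions.

```python
import numpy as np

def crosstalk_estimate(measured_mtf_at_nyquist, eps):
    """Crosstalk expressed as the deviation of the measured MTF at Nyquist
    (u = 0.5 cycles per pixel pitch) from the theoretical no-crosstalk pixel
    MTF |sinc(eps*u)|, with eps the active pixel size in pitch units."""
    theoretical = abs(np.sinc(eps * 0.5))  # np.sinc is the normalized sinc
    return theoretical - measured_mtf_at_nyquist
```

A larger deviation indicates stronger crosstalk; zero deviation means the sensor behaves like an ideal pixel of size `eps`.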

For optical fringes at Nyquist, using the moiré pattern, contrast can be computed with simple linear filters based on the following approach. We calculate contrast for each pixel from its 1×3 row neighborhood using the following kernels. This contrast measure is equivalent to Michelson contrast

$$ \begin{aligned} \text{Local difference:}\quad & g_{\text{diff}}(x) = \tfrac{1}{4}\left[ 2g(x) - g(x-1) - g(x+1) \right] \\ \text{Local average:}\quad & g_{\text{avg}}(x) = \tfrac{1}{4}\left[ 2g(x) + g(x-1) + g(x+1) \right] \\ \text{Local contrast:}\quad & g_{\text{contrast}}(x) = \frac{g_{\text{diff}}(x)}{g_{\text{avg}}(x)} = \frac{g * [-1\;\;2\;\;-1]}{g * [\;1\;\;2\;\;1\;]}, \end{aligned} $$
where $*$ denotes convolution. For frequencies other than Nyquist, similar kernel-based calculations are difficult.
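The kernel formulation above maps directly onto discrete convolution; a sketch assuming a NumPy row vector (at Nyquist, the absolute value of the result equals the Michelson contrast $M/B$):

```python
import numpy as np

def nyquist_local_contrast(row):
    """Per-pixel contrast at Nyquist from 1x3 kernels:
    g_diff = g * [-1, 2, -1] / 4 and g_avg = g * [1, 2, 1] / 4."""
    gdiff = np.convolve(row, [-1.0, 2.0, -1.0], mode="valid") / 4.0
    gavg  = np.convolve(row, [1.0, 2.0, 1.0], mode="valid") / 4.0
    return gdiff / gavg
```

Both kernels are symmetric, so the kernel flip performed by `np.convolve` has no effect, and `mode="valid"` simply drops the two border pixels that lack a full neighborhood.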

6.

Results

Several predictions for the measured MTFs result from the theoretical discussion in Sec. 3 and simulations in Sec. 4. The MTF should follow a curve related to those in Figs. 8 and 9. There should be observable troughs at odd multiples of 1/ε, ε being the active pixel size, and sudden drops at odd multiples of the sampling frequency. To the extent that there is crosstalk, there should be a reduction in contrast from the no-crosstalk expectation near Nyquist frequency as in Fig. 11. We also expect a sudden jump at Nyquist frequency for the Fourier contrast MTF (Fig. 8). As the frequency approaches 0, the MTF value should approach 1. The MTF curves would be modulated by the Fourier fingerprint of the pixel shape, crosstalk “shape” and strength, and effects of noise.

6.1.

Measured Results

We used a XIMEA grayscale CMOS sensor MQ013RG-E2 for comparison to theoretical results. The pixel pitch is 5.3 μm. The captured image is 1.3 MP with 1280 × 1024 pixels. We removed the cover glass on the sensor to avoid noise and interference fringes from the glass. We used the original camera board and camera software that come with the sensor. We selected ISO 100 for the lowest noise and the raw image setting to capture unaltered data.

Consider Eq. (28), $d \approx z\lambda/D$. For varying z, we used a rail of length 2 m, which enforced an upper limit on z. The lower limit for z was 100 mm for practical purposes. The distance between pinholes (D) was 35 mm. With the current sensor and setup, we were able to measure contrast for frequencies ranging from 0.2 to 2.3 cycles per pixel pitch (or 37 to 434 line pairs per mm). The sensor was held in a mechanical stage that allowed rotation and tilt for the experiments. We used a 633-nm HeNe laser. To avoid stray light, we conducted the experiments in a dark room and placed black bellows-type paper in the areas of light reflection. We used sorbothane isolators to support the setup table for isolation from environmental vibrations. We repeated the experiments three times and verified the consistency of the measurements. In the present paper, we publish two typical sets of measurements.

Figure 13 shows the comparison between Michelson and Fourier contrast results and a theoretical sinc curve based on estimated pixel active size. Both Michelson and Fourier contrast values are calculated in the presence of noise [Eqs. (29) and (30)]. Michelson contrast values are calculated from extrema and, therefore, tend to slightly overestimate the contrast compared to Fourier contrast as can be seen in the plot.

Fig. 13

Comparing measured Michelson and Fourier contrast to theoretical curve. The difference between the theoretical and the measured curve is a measure of crosstalk.

JEI_27_1_013015_f013.png

We observe the sudden drop at the sampling frequency as predicted. However, the trough of the MTF is not at the sampling frequency. This behavior suggests that the active pixel size is not the same as the pixel pitch, in line with the simulations in Sec. 4.1. By approximate interpolation, we estimated the first trough location of the MTF to be at 1.3 cycles per pixel pitch. This suggests ε ≈ 0.77 (in pixel-pitch units), and we use this value to plot sinc(0.77u) for reference in the plot of Fig. 13.
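The active-pixel-size estimate follows from the first sinc trough; a short sketch of the arithmetic, with illustrative function names ($u$ in cycles per pixel pitch, $\epsilon$ in pitch units):

```python
import numpy as np

def active_pixel_size_from_trough(u_trough):
    """The first zero of sinc(eps*u) is at u = 1/eps, so eps = 1/u_trough."""
    return 1.0 / u_trough

def pixel_mtf(u, eps):
    """Theoretical MTF of a uniform active area: |sinc(eps*u)|,
    with np.sinc the normalized sinc, sin(pi*x)/(pi*x)."""
    return np.abs(np.sinc(eps * u))

eps = active_pixel_size_from_trough(1.3)   # trough observed at 1.3 cycles/pitch
```

With the observed trough at 1.3 cycles per pixel pitch, this gives eps of about 0.77, the value used for the reference curve.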

Fourier contrast has a sudden jump at Nyquist, as can be seen in Fig. 13. The jump at Nyquist corresponds to the sudden variation in the $\left[1 + \frac{D_N(2\alpha)\cos(2\phi_0)}{N} - \frac{2 D_N^2(\alpha)\cos^2(\phi_0)}{N^2}\right]$ term in Eq. (22). This jump is explained in the simulation in Fig. 9 and discussed in Sec. 3.2.4.

Note that our theoretical MTF curve assumes no crosstalk. The deviation of the measured curve from the theoretical MTF curve provides an estimate of the pixel crosstalk, as explained by simulations of crosstalk influence in Sec. 4.2.

In this study, noise influence is reduced using several methods. However, for very low optical fringe frequencies, the large distance from pinholes to sensor and our limited-power laser lead to a low SNR and underestimated contrast. Thus, although the tendency of the measured MTF at low frequencies continues upward toward a predicted value of 1, these data can be considered less reliable for detailed analysis.

6.2.

Slanted-Edge Comparison

“Slanted edge” is the traditional method for evaluating sensor MTF. It can be performed using a lens to project an image of a sharp edge onto the sensor or by laying the sharp edge directly on the sensor surface. The lens-projection method necessarily confounds the lens MTF with the sensor MTF. For this reason, we used the lens-free version in our experimental setup to validate our work relative to prior results in the literature.9,25 We carefully cut a short piece of stainless steel razor blade and placed it on the sensor such that the sharp edge touched the silicon die. A XIMEA CMOS sensor MQ013RG-E2 was used as in Sec. 6.1. We illuminated the sensor using a parallel beam produced with the same 633-nm HeNe laser that was used to produce the fringes. The beam was formed as a narrow-angle cone steered directly from the spatial filter. No beam expander was used. The beam uniformly covered an area larger than the sensor. We captured multiple images with the edge orientation varying between −10 deg and 10 deg. We used Imatest software22 conforming to the ISO 12233 standard to compute the sensor MTF. We computed the average MTF. For more details, refer to our previous publication.6

Figure 14 shows the comparison between our methods and the lens-free slanted-edge method. Measurements with our methods reveal more granular detail at high frequencies.

Fig. 14

Our calculated MTF results compared with slanted-edge method.

JEI_27_1_013015_f014.png

The slanted-edge method uses a line estimation algorithm in the sampled image and collects data points from each row, from which the edge spread function is estimated. The MTF generated by the ISO 12233 method uses interpolation and thus misses granular details. Our methods capture these fine details, where a sudden drop or jump is expected from the theoretical simulations.

Although in simulation (Sec. 4.3) the slanted-edge and Fourier contrast methods produce comparable results, here, in the experimental measurements, they do not. At Nyquist frequency, we observe an interferometric MTF value of 0.52 and a lens-free slanted-edge MTF value of 0.41. One possible explanation for our setup is as follows. The passivation layer and anything else that distances the blade from the sensor surface (e.g., dust particles or microlenses under the blade) will cause the shadow of the edge to be blurred by diffraction, thus lowering the measured contrast. We observed microlenses on the surface of our sensor using a Leica DM6000M microscope and measured their thickness to be 1.7 μm using a Zygo Zegage optical profiler. We calculated the contrast loss for a sharp edge placed at a separation distance of 1.7 μm using the knife-edge diffraction formulas.26 We found a contrast reduction factor of 0.9 at Nyquist frequency for the setup parameters mentioned in Sec. 5. This is one of the factors explaining why the lens-free slanted-edge observations are lower than the interferometric MTF values. The passivation layer or microlenses do not affect the formation of sinusoidal fringes on the sensor surface, which are due to the interference of two plane waves.

7.

Conclusions

Computational photography is grounded in the image sensor. Precise MTF calculation is an important step in camera calibration and sensor evaluation. A camera design will always benefit from a better sensor and especially a sensor with higher MTF and reduced crosstalk. A true MTF curve can provide insight into pixel shape and fill factor. The well-estimated models of pixel response from the MTF can be used for deconvolution or simple sharpening in the image-processing pipeline of raw camera data.

Our system and methods allow for precise sensor MTF calculation over a wide range of frequencies. In industry, crosstalk is inferred from the MTF50 metric, and recent studies indicate a need for a better metric.23,25 Our theoretical analysis gives scope for an improved metric to evaluate pixel crosstalk quantified in terms of the deviation of observed values from theoretical values.

We have designed and implemented an interferometer for measuring sensor MTF and pixel crosstalk. Our setup solves problems in previous designs by removing spurious fringes resulting from double reflection in the optics. It produces clear, speckle-free images and a strong optical signal with 100% contrast.

Using interferometric images, we analyze sensor MTF with both Michelson contrast and Fourier contrast methods. For the analysis, we have developed a mathematical framework that predicts peculiarities and fine detail features in the simulated MTF plots and hence in the measurement data. Our results are based on comparing simulations from our theoretical framework (based on aliasing, spectral leakage, and Dirichlet kernel terms) to the measured data.

Our simulations and our measurements show the following features in the MTF: a sudden drop at the sampling frequency, a trough at $1/\epsilon$, and, for Fourier contrast, a sudden jump at Nyquist frequency. In previous works,8 such features have been observed but were ignored or considered noise.

We compared our results to the standard ISO 12233 slanted-edge approach for measuring MTF below sampling frequency and observed that the lens-free slanted-edge MTF was lower than the interferometer MTF. This may be partly explained by the thickness of observed microlenses. Our interferometric method has high precision with high granularity, covering a wide range of frequencies reaching far beyond Nyquist.

Appendices

Appendix A:

Calculations—Dirichlet Kernel

The DC component of the sampled signal $g[n]$ is

(31)

$$ \hat{G}(0) = \sum_{n=-\frac{N-1}{2}}^{\frac{N-1}{2}} g[n] = \sum_n g[n] = \sum_n \left[ B + M\cos(\alpha n + \phi_0) \right], $$
where $\alpha = 2\pi u_0$ and $u_0$ is in cycles per pixel pitch

(32)

$$ \hat{G}(0) = \sum_n B + \sum_n M\,\frac{e^{i\alpha n} e^{i\phi_0} + e^{-i\alpha n} e^{-i\phi_0}}{2}. $$

The sinusoidal series using Lagrange identity can be written as

(33)

$$ \sum_{m=-\frac{N-1}{2}}^{\frac{N-1}{2}} e^{i\alpha m} = \frac{\sin\!\left(\frac{N\alpha}{2}\right)}{\sin\!\left(\frac{\alpha}{2}\right)}. $$
Using the above equation in Eq. (32) and considering the symmetric nature of the summation over $n$,

(34)

$$ \hat{G}(0) = BN + M\,\frac{\sin\!\left(\frac{N\alpha}{2}\right)}{\sin\!\left(\frac{\alpha}{2}\right)} \cdot \frac{e^{i\phi_0} + e^{-i\phi_0}}{2} = BN + M D_N(\alpha)\cos(\phi_0), $$
where $D_N(\alpha)$ is the Dirichlet kernel (periodic sinc function), $D_N(\alpha) \equiv \frac{\sin(N\alpha/2)}{\sin(\alpha/2)}$.

As per Parseval’s theorem, the spectral power can be found from the equivalent power summation in the spatial domain

(35)

$$ \begin{aligned} \sum_k |\hat{G}[k]|^2 &= N \sum_n \left(g[n]\right)^2 = N \sum_n \left[ B + M\cos(\alpha n + \phi_0) \right]^2 \\ &= N \sum_n \left\{ B^2 + M^2\cos^2(\alpha n + \phi_0) + 2BM\cos(\alpha n + \phi_0) \right\} \\ &= N \sum_n \left\{ B^2 + M^2\,\frac{1 + \cos(2\alpha n + 2\phi_0)}{2} + 2BM\cos(\alpha n + \phi_0) \right\}. \end{aligned} $$

Using Lagrange trigonometric identities, for cos(2αnp+2ϕ0), cos(αnp+ϕ0) terms

(36)

$$ \sum_k |\hat{G}[k]|^2 = N^2 B^2 + \frac{N^2 M^2}{2} + \frac{N M^2 \sin(N\alpha)}{2\sin(\alpha)}\cos(2\phi_0) + 2BNM\,\frac{\sin\!\left(\frac{N\alpha}{2}\right)}{\sin\!\left(\frac{\alpha}{2}\right)}\cos(\phi_0). $$

This right-hand side can be written in short notation as

(37)

$$ \sum_k |\hat{G}[k]|^2 = N^2 B^2 + \frac{N^2 M^2}{2} + \frac{N M^2}{2} D_N(2\alpha)\cos(2\phi_0) + 2BNM D_N(\alpha)\cos(\phi_0). $$

Considering the Fourier contrast definition [Eq. (25)]

$$ C_{\text{Fourier}} = \frac{\sqrt{\sum_k |\hat{G}[k]|^2 - |\hat{G}(0)|^2}}{|\hat{G}(0)|}. $$
Substituting Eqs. (34) and (37) into it, we get the following for $C_{\text{Fourier}}$:

(38)

$$ \begin{aligned} C_{\text{Fourier}} &= \frac{\sqrt{N^2 B^2 + \frac{N^2 M^2}{2} + \frac{N M^2}{2} D_N(2\alpha)\cos(2\phi_0) + 2BNM D_N(\alpha)\cos(\phi_0) - \left[ BN + M D_N(\alpha)\cos(\phi_0) \right]^2}}{BN + M D_N(\alpha)\cos(\phi_0)} \\ &= \frac{\sqrt{N^2 B^2 + \frac{N^2 M^2}{2} + \frac{N M^2}{2} D_N(2\alpha)\cos(2\phi_0) + 2BNM D_N(\alpha)\cos(\phi_0) - \left[ N^2 B^2 + M^2 D_N^2(\alpha)\cos^2(\phi_0) + 2BNM D_N(\alpha)\cos(\phi_0) \right]}}{BN + M D_N(\alpha)\cos(\phi_0)} \\ &= \frac{M\sqrt{\frac{N^2}{2} + \frac{N}{2} D_N(2\alpha)\cos(2\phi_0) - D_N^2(\alpha)\cos^2(\phi_0)}}{BN + M D_N(\alpha)\cos(\phi_0)}, \end{aligned} $$

(39)

$$ C_{\text{Fourier}} = \frac{M\sqrt{1 + \frac{D_N(2\alpha)\cos(2\phi_0)}{N} - \frac{2 D_N^2(\alpha)\cos^2(\phi_0)}{N^2}}}{\sqrt{2}\,B\left[1 + \frac{M D_N(\alpha)\cos(\phi_0)}{BN}\right]}. $$
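As a numerical sanity check of this derivation, the closed form of Eq. (39) can be compared against a direct DFT evaluation of Eq. (25). This is a sketch under our assumptions: $\alpha$ is the phase increment per pixel sample, and $N$ is odd so that the symmetric sample positions are integers.

```python
import numpy as np

def dirichlet(alpha, N):
    """Dirichlet kernel D_N(alpha) = sin(N*alpha/2) / sin(alpha/2)."""
    return np.sin(N * alpha / 2.0) / np.sin(alpha / 2.0)

def cfourier_closed_form(B, M, alpha, phi0, N):
    """Eq. (39) for g[n] = B + M*cos(alpha*n + phi0)."""
    num = M * np.sqrt(1.0
                      + dirichlet(2.0 * alpha, N) * np.cos(2.0 * phi0) / N
                      - 2.0 * dirichlet(alpha, N) ** 2 * np.cos(phi0) ** 2 / N ** 2)
    den = np.sqrt(2.0) * B * (1.0 + M * dirichlet(alpha, N) * np.cos(phi0) / (B * N))
    return num / den

def cfourier_direct(B, M, alpha, phi0, N):
    """Eq. (25) by DFT over the symmetric range n = -(N-1)/2 .. (N-1)/2.
    The DC bin and the total power depend only on the sample values,
    so the index shift used by np.fft.fft does not matter here."""
    n = np.arange(N) - (N - 1) / 2.0
    g = B + M * np.cos(alpha * n + phi0)
    G = np.fft.fft(g)
    ac_power = np.sum(np.abs(G) ** 2) - np.abs(G[0]) ** 2
    return np.sqrt(ac_power) / np.abs(G[0])
```

For $\alpha$ away from 0 and $\pi$, the Dirichlet terms are $O(1)$ and Eq. (39) reduces to $M/(\sqrt{2}B)$ for large $N$, consistent with the factor-of-2 normalization discussed in Sec. 5.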

Acknowledgments

We would like to thank Amber Sun for her contributions to the original design of the interferometer and valuable help in the early stages of the project. We would like to thank Lyubomir Baev for helping build and fine-tune the interferometer setup. We would also like to thank Biay-Cheng Hseih for supporting us with sensors and useful background information for our study.

References

1. T. Georgiev, G. Chunev and A. Lumsdaine, “Superresolution with the focused plenoptic camera,” Proc. SPIE 7873, 78730X (2011). http://dx.doi.org/10.1117/12.872666

2. T. E. Bishop, S. Zanetti and P. Favaro, “Light field superresolution,” in IEEE Int. Conf. on Computational Photography (ICCP) (2009). http://dx.doi.org/10.1109/iccphot.2009.5559010

3. M. Estribeau and P. Magnan, “Pixel crosstalk and correlation with modulation transfer function of CMOS image sensor,” Proc. SPIE 5677, 98–108 (2005). http://dx.doi.org/10.1117/12.588382

4. E.-S. Eid, “Study of limitations on pixel size of very high resolution image sensors,” in Proc. of the Eighteenth National Radio Science Conf. (NRSC ’01) (2001). http://dx.doi.org/10.1109/nrsc.2001.929154

5. T. Georgiev, “Plenoptic camera resolution,” in Imaging and Applied Optics 2015 (2015). http://dx.doi.org/10.1364/aoms.2015.jth4a.2

6. T. Georgiev et al., “Interferometric measurement of sensor MTF and crosstalk,” Electron. Imaging 2017(15), 52–57 (2017). http://dx.doi.org/10.2352/ISSN.2470-1173.2017.15.DPMI-079

7. A. M. Bruckner, J. B. Bruckner and B. S. Thomson, Real Analysis, Prentice-Hall, Upper Saddle River (1997).

8. M. Marchywka and D. G. Socker, “Modulation transfer function measurement technique for small-pixel detectors,” Appl. Opt. 31(34), 7198 (1992). http://dx.doi.org/10.1364/AO.31.007198

9. P. D. Burns, “Slanted-edge MTF for digital camera and scanner analysis,” in Proc. IS&T PICS Conf., pp. 135–138, Society for Imaging Science and Technology (2000).

10. P. Hariharan, Basics of Interferometry, Elsevier Academic Press, Amsterdam (2007).

11. D. Malacara, Optical Shop Testing, Vol. 59, Wiley, Hoboken, New Jersey (2007).

12. M. Born, E. Wolf and A. B. Bhatia, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, Cambridge University Press, Cambridge (2016).

13. J. D. Jackson, Classical Electrodynamics, Wiley, Hoboken, New Jersey (2013).

14. J. E. Greivenkamp and A. E. Lowman, “Modulation transfer function measurement of sparse-array sensors using a self-calibrating fringe pattern,” Appl. Opt. 33(22), 5029 (1994). http://dx.doi.org/10.1364/AO.33.005029

15. E. Hecht and A. Zajac, Optics, Addison-Wesley, Reading, Massachusetts (1982).

16. S. W. Smith, The Scientist and Engineer’s Guide to Digital Signal Processing, California Technical Publishing, San Diego, California (1997).

17. R. L. Easton, Fourier Methods in Imaging, John Wiley and Sons, Chichester (2010).

18. F. Harris, “On the use of windows for harmonic analysis with the discrete Fourier transform,” Proc. IEEE 66(1), 51–83 (1978). http://dx.doi.org/10.1109/PROC.1978.10837

19. E. Peli, “Contrast in complex images,” J. Opt. Soc. Am. A 7(10), 2032 (1990). http://dx.doi.org/10.1364/JOSAA.7.002032

20. P. Öffner, T. Sonar and M. Wirz, “Detecting strength and location of jump discontinuities in numerical data,” Appl. Math. 4(12A), 1–14 (2013). http://dx.doi.org/10.4236/am.2013.412A001

21. J. K. M. Roland, “A study of slanted-edge MTF stability and repeatability,” Proc. SPIE 9396, 93960L (2015). http://dx.doi.org/10.1117/12.2077755

22. “Imatest Master,” Imatest, http://www.imatest.com/products/imatest-master/ (27 July 2017).

23. S. Birchfield, “Reverse-projection method for measuring camera MTF,” Electron. Imaging 2017(12), 105–112 (2017). http://dx.doi.org/10.2352/ISSN.2470-1173.2017.12.IQSP-254

24. F. Li, H. Eliasson and A. Dokoutchaev, “Comparison of objective metrics for image sensor crosstalk characterization,” Proc. SPIE 7876, 78760L (2011). http://dx.doi.org/10.1117/12.872494

25. D. Williams, “Benchmarking of the ISO 12233 slanted-edge spatial frequency response plug-in,” in Proc. IS&T PICS Conf., pp. 133–136 (1998).

26. R. Bansal, Fundamentals of Engineering Electromagnetics, Taylor & Francis, Boca Raton, Florida (2006).

Biography

Tharun Battula is a senior engineer at Qualcomm Technologies Inc. He received his bachelor’s degree in electrical engineering from the Indian Institute of Technology, Kharagpur, in 2011, and his master’s degree in computer engineering from Texas A&M University in 2016. His current research interests include computational imaging, light-field imaging, and image processing. He is a member of SPIE.

Todor Georgiev received his PhD in molecular science from Southern Illinois University. He worked at Adobe on Photoshop, where he authored several of its features, for example, the healing brush. Currently, he is a principal engineer at Qualcomm working on a range of computational imaging problems.

Jennifer Gille is a senior staff engineer at Qualcomm QTI, where she works on projects within display image processing, depth sensing, virtual reality, and computational camera. She holds her bachelor’s degree in mathematics and her PhD in vision science with an emphasis on color and spatial vision, both from UCLA.

Sergio Goma is a senior director at Qualcomm Inc. in San Diego, where he leads the multimedia R&D/standardization group for imaging, driving Qualcomm’s vision on future imaging technologies. His research interests include computational photography and programmable hardware architectures. Before Qualcomm, Sergio developed the image processing solution present in AMD’s Imageon series of chips. He received his PhD in reliability and fault tolerance of computers and holds several US patents on image processing algorithms and architectures.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Tharun Battula, Todor Georgiev, Jennifer Gille, Sergio Goma, "Contrast computation methods for interferometric measurement of sensor modulation transfer function," Journal of Electronic Imaging 27(1), 013015 (6 February 2018). https://doi.org/10.1117/1.JEI.27.1.013015 Submission: Received 10 May 2017; Accepted 20 December 2017

