Statistical photocalibration of photodetectors for radiometry without calibrated light sources
25 January 2018
Abstract
Calibration of CCD arrays for identifying bad pixels and achieving nonuniformity correction is commonly accomplished using dark frames. This kind of calibration does not achieve radiometric calibration of the array, since only the relative response of the detectors is computed. For that purpose, a second calibration is sometimes performed by observing sources with known radiances. This process can be used to calibrate photodetectors as long as a calibration source is available and well-characterized. A previous attempt at creating a procedure for calibrating a photodetector using the underlying Poisson nature of photodetection required calculations of the skewness of the photodetector measurements. Reliance on the third moment of the measurements meant that thousands of samples could be required to compute that moment. A photocalibration procedure is defined here that requires only the first and second moments of the measurements. The technique is applied to image data containing a known light source so that its accuracy can be assessed. It is shown that the algorithm estimates the predicted number of photons to within 2.7% using only 100 frames of image data.

1. Introduction

A refinement of the Static Scene Statistical Nonuniformity Correction (S3NUC) method and a method of leveraging the nonuniformity correction for radiometric calibration are developed and tested in this research. The refinement of S3NUC to the Statistically Applied Nonuniformity Correction (SANUC) algorithm drastically decreases the processing time requirements for low photocount NUC. Simultaneously, the algorithmic refinement allows radiometric quantification of the data in terms of photoelectron count without substantial additional computational burden.

The S3NUC method computes the gain and bias of CCD pixels based on linear gain and bias assumptions. While it is known that photodetectors do not exhibit a strictly linear response,1 there are many examples of successfully modeling them as linear, such as Refs. 2–4, when input signals are held over a small enough range, and these assumptions are retained in the refined SANUC method. The innovative feature of the S3NUC algorithm is the employment of higher-order statistical moments to negate the requirement for uniform calibrated targets or statistically rich datasets. Instead, it requires two sufficiently static datasets of the same scene at different integration times.5 However, as originally implemented, the S3NUC algorithm can require thousands of frames to produce sufficiently accurate gain and bias estimates, due explicitly to the need to produce accurate estimates of higher-order moments. It is, therefore, desirable to improve the S3NUC algorithm’s accuracy and calculation speed while reducing its data requirements. To accomplish this, the method is modified to preclude the need for higher-order statistical moments, which in turn reduces the number of frames needed to achieve satisfactory calibration. Other techniques have been introduced for achieving nonuniformity correction,2–4 but these methods do not allow absolute radiometry without the addition of a calibrated light source. They remove the pixel-to-pixel differences in photodetector response but do not provide an estimate of the system gain that allows the true number of photons to be determined from the detector measurement. This research is concerned with demonstrating absolute radiometry without the use of calibrated light sources.
For this reason, the nonuniformity correction capability of the technique is not investigated or compared to other nonuniformity correction algorithms that do not achieve absolute radiometry without the use of a calibrated light source.

2. S3NUC Method

The S3NUC method models the data reported by a photodetector as a linear combination of random processes. The method requires two different, statistically independent datasets, D1 and D2, encoded by the photodetector according to the following equation:

(1)

D1(i,j,k) = G(i,j)K1(i,j,k) + B(i,j) + n1(i,j,k),
D2(i,j,k) = G(i,j)[K2(i,j,k) + ΔK(i,j,k)] + B(i,j) + n2(i,j,k).

In this equation, photon inputs K1, K2, and ΔK are multiplied by some gain G, with additive bias, B, and readout noises, n1 and n2, on a two-dimensional pixel array index i, j and frame number, k, basis. The variable ΔK represents an unknown difference in the photocount input between the two sets of data K1 and K2. The data from each pixel are considered random variables and are assumed to be independent of the data in all other pixels in the array. The initially unknown G and B values are considered to be constant in time, while K1, K2, and ΔK are assumed to be statistically independent Poisson random variables, hence the need to differentiate K1 and K2. Similarly, the zero-mean additive white Gaussian noise (AWGN) terms n1 and n2 are assumed to be statistically independent. These definitions of the underlying model terms maintain the statistical independence of the datasets D1 and D2.
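The measurement model in Eq. (1) can be turned into a short simulation for testing; below is a minimal NumPy sketch (the function and parameter names are my own, not from the paper), with frames stored along the last array axis:

```python
import numpy as np

def simulate_datasets(gain, bias, k_mean, dk_mean, read_sigma, n_frames, rng):
    """Generate D1, D2 per Eq. (1): D = G*K + B + n, Poisson K, Gaussian n."""
    shape = gain.shape + (n_frames,)
    k1 = rng.poisson(k_mean[..., None], shape)   # photon counts, dataset 1
    k2 = rng.poisson(k_mean[..., None], shape)   # independent draw, same mean
    dk = rng.poisson(dk_mean[..., None], shape)  # extra photons in dataset 2
    n1 = rng.normal(0.0, read_sigma, shape)      # zero-mean AWGN readout noise
    n2 = rng.normal(0.0, read_sigma, shape)
    d1 = gain[..., None] * k1 + bias[..., None] + n1
    d2 = gain[..., None] * (k2 + dk) + bias[..., None] + n2
    return d1, d2
```

Because K1 and K2 are drawn independently with a shared mean, D1 and D2 remain statistically independent, as the model requires.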

The moments of the quantities in Eq. (1) are computed to generate a system of equations to solve. The AWGN terms n1 and n2 are zero mean, and the means of the first and second datasets, D̄1(i,j) and D̄2(i,j), their variances, σ²D1(i,j) and σ²D2(i,j), and their third moments, γD1(i,j) and γD2(i,j), respectively, are calculated using the Poisson moment theorem.6 This theorem allows all higher-order moments of a Poisson random variable to be calculated solely from its mean. The separate Poisson random variables K1 and K2 are constrained to share a common mean value K̄, producing Eqs. (2)–(7) via appropriate algebraic substitutions

(2)

D̄1(i,j) = E[D1(i,j)] = G(i,j)K̄(i,j) + B(i,j),

(3)

D̄2(i,j) = G(i,j)[K̄(i,j) + ΔK̄(i,j)] + B(i,j),

(4)

σ²D1(i,j) = E{[D1(i,j) − D̄1(i,j)]²} = G²(i,j)K̄(i,j) + σ²n,

(5)

σ²D2(i,j) = G²(i,j)[K̄(i,j) + ΔK̄(i,j)] + σ²n,

(6)

γD1(i,j) = E{[D1(i,j) − D̄1(i,j)]³}/σ³D1(i,j) = G³(i,j)K̄(i,j)/σ³D1(i,j),

(7)

γD2(i,j) = G³(i,j)[K̄(i,j) + ΔK̄(i,j)]/σ³D2(i,j).

The result is a system of six equations in five unknowns, instead of three equations in four unknowns.5 The overdetermined system is then algebraically reduced to the set of equations shown in Eqs. (8)–(12), which may be solved for the desired system parameters G and B and the intermediate residual values K̄, ΔK̄, and σ²n

(8)

G(i,j) = [σ²D2(i,j) − σ²D1(i,j)] / [D̄2(i,j) − D̄1(i,j)],

(9)

K̄(i,j) = γD1(i,j)σ³D1(i,j) / G³(i,j),

(10)

B(i,j) = D̄1(i,j) − G(i,j)K̄(i,j),

(11)

ΔK̄(i,j) = [D̄2(i,j) − D̄1(i,j)] / G(i,j),

(12)

σ²n(i,j) = σ²D1(i,j) − G²(i,j)K̄(i,j).
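The closed-form solution in Eqs. (8)–(12) can be evaluated directly from sample moments; the following is a minimal NumPy sketch (function and variable names are my own), with frames along the last array axis:

```python
import numpy as np

def s3nuc(d1, d2):
    """S3NUC parameter estimates per Eqs. (8)-(12)."""
    m1, m2 = d1.mean(-1), d2.mean(-1)
    v1, v2 = d1.var(-1), d2.var(-1)
    # Third central moment of dataset 1; equals gamma_D1 * sigma_D1^3 in Eq. (6).
    mu3 = ((d1 - d1.mean(-1, keepdims=True)) ** 3).mean(-1)
    g = (v2 - v1) / (m2 - m1)       # Eq. (8)
    k_bar = mu3 / g ** 3            # Eq. (9): gamma*sigma^3 / G^3
    b = m1 - g * k_bar              # Eq. (10)
    dk_bar = (m2 - m1) / g          # Eq. (11)
    var_n = v1 - g ** 2 * k_bar     # Eq. (12)
    return g, k_bar, b, dk_bar, var_n
```

Note that the third central moment enters the K̄ estimate, which is exactly the quantity whose slow sample convergence motivates the SANUC refinement below.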

3. SANUC Method

The SANUC algorithm represents a special case and specific modification of the S3NUC approach. If ΔK¯(i,j) is selected such that the change in the average number of photons between the datasets is equal to K¯(i,j), the average number of photons expected in the first dataset, then the moments reported in Eqs. (2)–(5) reduce to the results shown in Eqs. (13)–(16). In practice, this can be readily accomplished by doubling the integration time of the sensor being used to gather the data

(13)

D̄1(i,j) = E[D1(i,j)] = G(i,j)K̄(i,j) + B(i,j),

(14)

D̄2(i,j) = 2G(i,j)K̄(i,j) + B(i,j),

(15)

σ²D1(i,j) = E{[D1(i,j) − D̄1(i,j)]²} = G²(i,j)K̄(i,j) + σ²n,

(16)

σ²D2(i,j) = 2G²(i,j)K̄(i,j) + σ²n.

This modification, implemented by controlling the measurement parameters, reduces the overdetermined S3NUC system of equations, which included an explicit ΔK¯ term, to a fully determined system of equations in only four unknowns.

The system is then algebraically reduced to the set of equations shown in Eqs. (17)–(20) and may be solved for the desired system parameters G and B and the intermediate residual values K̄ and σ²n. In Eq. (17), the solution for the gain, G, is identical to that produced by the S3NUC approach

(17)

G(i,j) = [σ²D2(i,j) − σ²D1(i,j)] / [D̄2(i,j) − D̄1(i,j)].

The solution for K̄ is shown in Eq. (18) and differs from the S3NUC solution in that it depends only on first moments of the measured data rather than the skewness of the data. This significantly reduces the uncertainty, since sample estimates of first moments converge far faster than sample estimates of third moments, and in practice it drastically reduces the number of data frames necessary to achieve an accurate estimate of the bias, B

(18)

K̄(i,j) = [D̄2(i,j) − D̄1(i,j)] / G(i,j).

The calculation of the bias, B, is carried out in the same way as the S3NUC solution. However, since the estimate of K¯ is more robust in the SANUC method, the estimate of B is also more accurate than the corresponding S3NUC estimate for lower numbers of data frames

(19)

B(i,j) = D̄1(i,j) − G(i,j)K̄(i,j).

Finally, Eq. (20) shows that the readout noise variance is again computed in the same way that it is in the S3NUC approach, but again since an estimate of K¯ is utilized, the SANUC solution has significantly lower variance for lower numbers of data frames

(20)

σ²n(i,j) = σ²D1(i,j) − G²(i,j)K̄(i,j).
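Equations (17)–(20) reduce to a few lines of NumPy; the sketch below (names are my own) assumes the second dataset was recorded at twice the integration time of the first, so that ΔK̄ = K̄:

```python
import numpy as np

def sanuc(d1, d2):
    """SANUC parameter estimates per Eqs. (17)-(20).

    d2 must be recorded at twice the integration time of d1, so the extra
    mean photocount equals K-bar; frames run along the last array axis.
    """
    m1, m2 = d1.mean(-1), d2.mean(-1)
    v1, v2 = d1.var(-1), d2.var(-1)
    g = (v2 - v1) / (m2 - m1)       # Eq. (17): identical to the S3NUC gain
    k_bar = (m2 - m1) / g           # Eq. (18): first moments only, no skewness
    b = m1 - g * k_bar              # Eq. (19)
    var_n = v1 - g ** 2 * k_bar     # Eq. (20)
    return g, k_bar, b, var_n
```

Substituting Eq. (18) into Eq. (19) shows that the bias estimate collapses to 2·D̄1 − D̄2, which is why it avoids the cascading error of the S3NUC bias.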

The SANUC method is first validated by directly comparing estimates of the unknown gain, G, and bias, B, to known simulated data. To generate each of an arbitrary number of 100×100 pixel frames of data, a gain, G, is set for each pixel based on a Gaussian distribution with mean 1 and standard deviation 0.05. Similarly, a bias, B, is set for each pixel based on a Gaussian distribution with mean 100 and standard deviation 1. A fixed ΔK̄ equal to K̄ is set for all pixels and frames, based on the requirements of Eq. (14). The K̄ parameter is used to generate the photonic input in each pixel, in each frame, K, by realizing a Poisson random variable with K̄ as the mean. The additive noise terms in each pixel, n1,2, are drawn independently for each frame from a Gaussian distribution with zero mean, unity variance, and an amplitude weighting of 10. This test simulation is looped over a varying number of frames, from 1 to 500 in increments of 10, to assess the impact of frame count on the gain and bias estimates.
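A single point of this validation can be sketched as follows, comparing the bias error of the two estimators at 100 frames under the stated simulation parameters. The mean photocount K̄ = 100 is my assumption, since the text does not state the value used:

```python
import numpy as np

rng = np.random.default_rng(1)
P, F, K = (100, 100), 100, 100.0          # pixels, frames, assumed K-bar

G = rng.normal(1.0, 0.05, P)              # per-pixel gain, N(1, 0.05)
B = rng.normal(100.0, 1.0, P)             # per-pixel bias, N(100, 1)

def dataset(scale):
    """One stack of F frames: D = G*K_poisson + B + 10*N(0,1)."""
    k = rng.poisson(scale * K, P + (F,))  # scale=2 mimics doubled integration
    return G[..., None] * k + B[..., None] + rng.normal(0.0, 10.0, P + (F,))

d1, d2 = dataset(1), dataset(2)
m1, m2 = d1.mean(-1), d2.mean(-1)
v1, v2 = d1.var(-1), d2.var(-1)
g = (v2 - v1) / (m2 - m1)                 # shared gain estimate, Eqs. (8)/(17)

mu3 = ((d1 - m1[..., None]) ** 3).mean(-1)   # third central moment of d1
b_s3nuc = m1 - mu3 / g ** 2                  # Eqs. (9)-(10): B = m1 - mu3/G^2
b_sanuc = 2.0 * m1 - m2                      # Eqs. (18)-(19): B = 2*m1 - m2

print("S3NUC mean |bias error|:", np.abs(b_s3nuc - B).mean())
print("SANUC mean |bias error|:", np.abs(b_sanuc - B).mean())
```

At 100 frames the skewness-based S3NUC bias is dominated by the sampling noise of the third moment (and by noisy per-pixel gains entering its denominator), while the SANUC bias uses only the two frame means, reproducing the behavior reported in Fig. 2.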

In Figs. 1 and 2, data points are spaced every 10 frames, with error bars showing one standard deviation. The results in Fig. 1 show that the error in G follows the same curve for both S3NUC and SANUC. This is as expected because the equation used to recover the gain is the same in both algorithms.

Fig. 1

Absolute logarithmic error of gain.


Fig. 2

Absolute logarithmic error of bias.


In Fig. 2, the error of the bias between S3NUC and SANUC shows a stark difference in results. S3NUC demonstrates much higher error than SANUC at low frame count numbers. SANUC also demonstrates comparatively flat error standard deviation. The higher error in S3NUC is attributed to the cascading error in the bias estimate.

Figure 3 shows a similar trend to Fig. 2 in the differences between S3NUC and SANUC. As with the bias, the S3NUC method relies on the recovered gain to calculate the readout noise variance, while SANUC still only uses the statistics of the data itself, negating a cascading error.

Fig. 3

Absolute logarithmic error of readout noise variance.


It is readily concluded that the SANUC method substantially outperforms the original S3NUC method for low frame counts, producing higher accuracy results with drastically lower frame count requirements. Additionally, the improved reliability of the higher-order variables makes further data exploitation tractable.

4. Demonstration of the Radiometric Accuracy of the SANUC Algorithm

The intermediate outputs K̄ and ΔK̄ from S3NUC, Eqs. (9) and (11), and K̄ from SANUC, Eq. (18), represent estimates of the number of photons received by a photodetector. In both cases, the accuracy of this estimate is primarily dependent upon the accuracy of the gain estimate. However, due to the overall advantages of SANUC for nonuniformity correction, the radiometric accuracy of the K̄ estimate is assessed against the output of a known light source only for the SANUC method. Figure 4 shows the simple imaging system, observing a light source with a known power output, which was used to assess the radiometric accuracy of this method.

Fig. 4

Setup used to test the radiometric accuracy of the SANUC algorithm.


Using well-established radiometry techniques,7 the expected number of photons received by the sensor from a well-characterized light source and optical train can be calculated analytically as

(21)

E[K] = (Ip r² Pt Δt) / (Iavg hν [tan(θt) R]²),
where K is the number of photons at the detector, Pt is the power produced by the source, Δt is the integration time of the sensor, hν is Planck’s constant times the frequency of the light from the source, r is the radius of the receiver aperture, R is the distance from the source to the receiver aperture, and θt is half the divergence angle of the source. When the tangent of this angle is multiplied by R, it gives the radius of the circle over which the radiation of the diode is distributed. This approach assumes that the light given off by the source produces an illumination pattern that is uniform over the divergence angle of the beam. The ratio of Ip to Iavg accounts for the variation in the light beam created by the diode: Ip is the peak intensity value in the projected beam, while Iavg is the average value in the beam. The light source used in this experiment was the Thor Labs LED555L. The relevant technical specifications of this glass-lens LED are listed in Table 1, and the power output is well-characterized with respect to input current and bias, viewing angle, and spectrum. For this experiment, the LED was driven with a forward pulsed diode current maintained at 50 mA to generate a consistent 1-mW optical output.
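As a numerical check, Eq. (21) can be evaluated with the experiment's nominal values. The aperture radius r = 100 μm below is my assumption based on the 200-μm pinhole diameter quoted for the camera setup, and the peak-to-average ratio is taken as its nominal value of 2:

```python
import math

h = 6.62607015e-34            # Planck's constant (J s)
c = 2.99792458e8              # speed of light (m/s)
wavelength = 555e-9           # peak LED wavelength (m)
P_t = 1e-3                    # source optical output power (W)
dt = 100e-3                   # integration time (s)
r = 100e-6                    # receiver aperture radius (m), assumed
R = 4.33                      # source-to-aperture distance (m)
theta_t = math.radians(20.0)  # half divergence angle of the source
ip_over_iavg = 2.0            # peak-to-average beam intensity ratio

photons_emitted = P_t * dt / (h * c / wavelength)         # Pt*dt / (h*nu)
capture_fraction = r ** 2 / (math.tan(theta_t) * R) ** 2  # aperture vs. beam area
E_K = ip_over_iavg * photons_emitted * capture_fraction
print(f"expected photons: {E_K:.3e}")
```

With these inputs the result lands on the order of 2.2×10⁶ photons, consistent with the theoretical value reported in Table 2.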

Table 1

Technical specifications for the Thor Labs LED555L.8

Specification | Min | Typical | Max
Forward voltage at 50 mA (V) | — | 3.5 | 4
Continuous operating current (mA) | — | 20 | 30
Optical output power at 50 mA (mW) | — | 1 | —
Viewing half-angle (deg) | — | 20 | —
Peak wavelength (nm) | 545 | 555 | 565
Bandwidth FWHM (nm) | — | 40 | —

The diode does not produce a uniform illumination pattern. Figure 5 shows the normalized illumination pattern as a function of viewing angle, as measured by the manufacturer. Numerically integrating this pattern over the x-axis yields a value of 0.5, making the average illumination one-half the maximum. Since Eq. (21) computes the number of photons expected from a source that distributes its photons equally over the range from −20 deg to 20 deg, the theoretical number of photons shown in Table 2 is adjusted upward by a factor of 2, because the 200-μm aperture is placed directly at the center of the maximum intensity in the experiment. The aperture subtends a viewing angle of only 0.15 deg in the region around the center of the diode light beam. This implies that over the entire aperture, twice as many photons should be measured as would be calculated by Eq. (21).

Fig. 5

Measured intensity distribution of the Thor Labs LED555L. It shows that the region near the center of the beam has twice the average intensity across the rest of the pattern (estimated by numerical integration of the pattern).8


Table 2

SANUC estimates and radiometry results for fully bright LED.

Quantity | Value
Background gain estimate | 661.8228
Background bias estimate | 42.46
Spot gain estimate | 83.5506
Spot bias estimate | 110.56
Theoretical photons | 2.2406×10⁶
Theoretical lower bound | 2.0234×10⁶
Theoretical upper bound | 2.4745×10⁶
SANUC estimated photons | 2.1844×10⁶
Absolute error | 2.7%

Images of the source are captured by an AVT Stingray F-504B camera through a 200-μm-diameter pinhole and an 18-cm focal-length lens, illuminated from a distance of 4.33 m. This camera features a monochrome CCD sensor with a 2452 (h) × 2056 (v) image-output resolution and pixel dimensions on the CCD sensor measuring 3.45 μm × 3.45 μm.9 The image of the pinhole illuminates an area 200 pixels in diameter, which allows for a sufficient set of independent pixel measurements for statistical analysis. The base exposure time was set at 100 ms and the software-driven camera gain to 20 dB.

Both the output of the radiometry equation using the source specifications and the magnitudes reported in the images are in units of photoelectrons, and the radiometry equation accounts for all of the photons from the light source that enter the aperture. After the image is corrected by removing the gain and bias via the general SANUC method, the corrected image in K̄ is summed to collect the entire photoelectron count. The total photoelectron count is then divided by the quantum efficiency of the detector to convert the number of photoelectrons into photons. Table 2 details the results of using SANUC for a radiometry estimate on the average of 100 frames of the fully bright LED light source. The experimental results agree with the theoretical radiometry calculation to within 2.7% absolute error of the nominal computed value for theoretical photons shown in Table 2.

Table 2 also shows upper and lower bounds for the theoretical photon value. These bounds are computed by using the extreme parameter values shown in Table 3 in Eq. (21). The extreme values for R are obtained by estimating up to 5 mm of error in using the measuring tape. The extreme values for the diode wavelength are obtained from the manufacturer. The extreme values for the diode current are obtained from the estimated accuracy of the current meter used to measure the diode current; the diode optical power is expected to be linearly related to the diode current. The ratio of peak to average beam intensity is obtained using minimum and maximum values on each of the Riemann-sum values obtained from the plot of the measured beam intensity shown in Fig. 5. The Riemann sum was computed by sampling that plot in increments of 5 deg. The mean value over each subinterval was used to compute the average irradiance in the pattern, and the peak of the pattern was then divided by that average to obtain the nominal value of the ratio.
The maximum values on each subinterval were used to compute the minimum value of the ratio, and the minimum value on each subinterval was used to compute the maximum of the ratio. The upper and lower bounds on the integration time were estimated based on the number of digits available for controlling the integration time in the camera software.
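The correction-and-summation step described above (remove gain and bias, sum the corrected image, divide by the quantum efficiency) can be sketched as follows; the function name and quantum-efficiency parameter are illustrative, with the QE value coming from the sensor's datasheet:

```python
import numpy as np

def estimate_photons(frames, gain, bias, quantum_efficiency):
    """Total-photon estimate from raw frames using recovered per-pixel G and B.

    frames: (rows, cols, n_frames) raw camera data.
    gain, bias: per-pixel calibration estimates (e.g., from SANUC).
    """
    k_hat = (frames.mean(-1) - bias) / gain   # mean photoelectrons per pixel
    return k_hat.sum() / quantum_efficiency   # photoelectrons -> photons
```

Averaging the frames before correcting reduces the readout-noise contribution to the summed count, matching the paper's use of a 100-frame average.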

Table 3

Parameter values used to compute upper and lower bounds for photons in Eq. (21).

Parameter | Lower-bound value | Upper-bound value
R (diode to camera aperture distance) (m) | 4.335 | 4.325
Diode wavelength (nm) | 545 | 565
Diode optical power (mW) | 0.95 | 1.05
Ip/Iavg (peak-to-average radiance ratio) | 1.96 | 2.041
Integration time (ms) | 99.9 | 100.1

5. Conclusions

The S3NUC method was refined into the SANUC method, drastically reducing the error in the bias and readout-noise-variance estimates, as demonstrated on simulated data in Figs. 2 and 3. The refined SANUC method exhibits orders-of-magnitude better performance than the original S3NUC method for calibrations taken with 100 frames of data, improving its suitability for in-situ operations and radiometric calibration. The radiometric estimates of the number of photons collected from the diode by the camera in the experiment described are in good agreement (<2.7% absolute error) with those predicted by radiometry, as shown in Table 2. The accuracy of the theoretical photocount is bounded by the upper and lower photon calculations, and the number of photons estimated by the SANUC algorithm falls within these bounds, showing that the method produces a photocount estimate consistent with theoretical predictions. This result confirms that with as little as 100 frames of data, an accurate estimate of the number of photons from an optical source can be attained.

References

1. I. K. Baldry, Time-Series Spectroscopy of Pulsating Stars, University of Sydney, Sydney (1999).

2. J. G. Harris and Y.-M. Chiang, “Nonuniformity correction of infrared image sequences using the constant-statistics constraint,” IEEE Trans. Image Process. 8(8), 1148–1151 (1999). http://dx.doi.org/10.1109/83.777098

3. M. M. Hayat et al., “Statistical algorithm for nonuniformity correction in focal plane arrays,” Appl. Opt. 38(5), 772–780 (1999). http://dx.doi.org/10.1364/AO.38.000772

4. R. C. Hardie et al., “Scene-based nonuniformity correction with video sequences and registration,” Appl. Opt. 39(8), 1241–1250 (2000). http://dx.doi.org/10.1364/AO.39.001241

5. A. Catarius and M. Seal, “Static scene statistical algorithm for nonuniformity correction in focal-plane arrays,” Opt. Eng. 54(10), 104111 (2015). http://dx.doi.org/10.1117/1.OE.54.10.104111

6. J. Goodman, Statistical Optics, John Wiley and Sons, Inc., New York (1985).

7. R. Richmond and S. Cain, Direct-Detection LADAR Systems, SPIE Press, Bellingham, Washington (2010).

8. Thor Labs, “LED555L spec sheet,” Newton, New Jersey (2016).

9. Allied Vision Technologies, “AVT Stingray F-504B / F-504C product pamphlet,” Stadtroda, Germany (2008).

Biographies for the authors are not available.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Nicholas J. Yielding, Stephen C. Cain, Michael D. Seal, "Statistical photocalibration of photodetectors for radiometry without calibrated light sources," Optical Engineering 57(1), 014107 (25 January 2018). https://doi.org/10.1117/1.OE.57.1.014107. Received: 10 August 2017; Accepted: 3 January 2018.