1. Introduction

A refinement of the Static Scene Statistical Nonuniformity Correction (S3NUC) method and a method of leveraging the nonuniformity correction for radiometric calibration are developed and tested in this research. The refinement of S3NUC into the Statistically Applied Nonuniformity Correction (SANUC) algorithm drastically decreases the processing time required for low-photocount NUC. Simultaneously, the algorithmic refinement allows radiometric quantification of the data in terms of photoelectron count without substantial additional computational burden. The S3NUC method computes the gain and bias of CCD pixels based on linear gain and bias assumptions. While it is known that photodetectors do not exhibit a strictly linear response,1 there are many examples of successfully modeling them linearly, such as Refs. 2–4, when input signals are held over a small enough range, and these assumptions are retained in the refined SANUC method. The innovative feature of the S3NUC algorithm is the employment of higher-order statistical moments to negate the requirement for uniform calibrated targets or statistically rich datasets. Instead, it requires two sufficiently static datasets of the same scene at different integration times.5 However, as originally implemented, the S3NUC algorithm can require thousands of frames to produce sufficiently accurate gain and bias estimates, due explicitly to the need to produce accurate estimates of higher-order moments. It is, therefore, desirable to improve the S3NUC algorithm's accuracy and calculation speed while reducing its data requirements. To accomplish this, the method is modified to preclude the need for higher-order statistical moments, which in turn reduces the large frame counts required to achieve satisfactory calibration. Other techniques have been introduced for achieving nonuniformity correction,2–4 but these methods do not allow absolute radiometry to be achieved without the addition of a calibrated light source.
They allow the pixel-to-pixel differences in photodetector response to be removed but do not provide an estimate of the gain of the system that allows the true number of photons to be determined from the detector measurement. This research is concerned with demonstrating absolute radiometry without the use of calibrated light sources. For this reason, the nonuniformity correction capability of the technique is not investigated or compared against other nonuniformity correction algorithms that do not achieve absolute radiometry without a calibrated light source.

2. S3NUC Method

The S3NUC method models the data reported by a photodetector as a linear combination of random processes. The method requires two different, statistically independent datasets, $Y_1$ and $Y_2$, encoded by the photodetector according to the following equation:

$$Y_1[m,n,k] = G[m,n]\,X_1[m,n,k] + B[m,n] + N_1[m,n,k],$$
$$Y_2[m,n,k] = G[m,n]\,\{X_2[m,n,k] + \Delta X[m,n,k]\} + B[m,n] + N_2[m,n,k]. \tag{1}$$

In this equation, photon inputs $X_1$, $X_2$, and $\Delta X$ are multiplied by some gain $G$, with additive bias $B$ and readout noises $N_1$ and $N_2$, on a two-dimensional pixel array index $[m,n]$ and frame number $k$ basis. The variable $\Delta X$ represents an unknown difference in the photocount input between the two sets of data $Y_1$ and $Y_2$. The data from each pixel are considered random variables and are assumed to be independent of the data in all other pixels in the array. The initially unknown $G$ and $B$ values are considered to be constant in time, while $X_1$, $X_2$, and $\Delta X$ are assumed to be statistically independent Poisson random variables, hence the need to differentiate $X_1$ and $X_2$. Similarly, the zero-mean additive white Gaussian noise (AWGN) terms $N_1$ and $N_2$ are assumed to be statistically independent. These definitions of the underlying model terms maintain the statistical independence of the datasets $Y_1$ and $Y_2$. The moments of the quantities in Eq. (1) are computed to generate a system of equations for solution.
The AWGN terms $N_1$ and $N_2$ are zero mean, and the means of the first and second datasets, $\mu_1$ and $\mu_2$, the variances, $\sigma_1^2$ and $\sigma_2^2$, as well as the third moments, $s_1$ and $s_2$, respectively, are calculated using the Poisson moment theorem.6 This theorem allows all higher-order moments of a Poisson random variable to be calculated solely from the mean. The separate Poisson random variables $X_1$ and $X_2$ are constrained to share a common mean value $\bar{x}$ to produce Eqs. (2)–(7) via appropriate algebraic substitutions. The result is a system of six equations in five unknowns, instead of three equations in four unknowns.5 The overdetermined system is then algebraically reduced to the set of equations shown in Eqs. (8)–(12), which may be solved for the desired system parameters $G$ and $B$ and the intermediate residual values $\bar{x}$, $\Delta\bar{x}$, and $\sigma_R^2$.

3. SANUC Method

The SANUC algorithm represents a special case and specific modification of the S3NUC approach. If $\Delta\bar{x}$ is selected such that the change in the average number of photons between the datasets is equal to $\bar{x}$, the average number of photons expected in the first dataset, then the moments reported in Eqs. (2)–(5) reduce to the results shown in Eqs. (13)–(16). In practice, this can be readily accomplished by doubling the integration time of the sensor being used to gather the data. This modification, implemented by controlling the measurement parameters, reduces the overdetermined S3NUC system of equations, which included an explicit $\Delta\bar{x}$ term, to a fully determined system of equations in only four unknowns. The system is then algebraically reduced to the set of equations shown in Eqs. (17)–(20) and may be solved for the desired system parameters $G$ and $B$ and the intermediate residual values $\bar{x}$ and $\sigma_R^2$. In Eq. (17), the solution for the gain, $G$, is identical to that produced by the S3NUC approach. The solution for $\bar{x}$ is shown in Eq. (18) and differs from the S3NUC solution in that it depends only on first moments of the measured data rather than the skewness of the data.
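The moment relations that underlie these solutions can be sketched in code. The function below is a minimal illustration, not the paper's implementation: it assumes the linear model described above, with the second frame stack collected at twice the integration time of the first, and the variable names (`G`, `B`, `xbar`, `sigma_r2`) are illustrative choices rather than the paper's notation.

```python
import numpy as np

def sanuc_estimate(y1, y2):
    """Estimate per-pixel gain, bias, mean photocount, and readout-noise
    variance from two frame stacks y1, y2 (shape: frames x rows x cols),
    where y2 is captured at twice the integration time of y1.

    Sketch of the moment relations implied by the linear model
    y1 = G*x1 + B + n1 and y2 = G*(x2 + dx) + B + n2, with the extra
    Poisson photons dx sharing the mean of x1 (doubled integration).
    """
    mu1, mu2 = y1.mean(axis=0), y2.mean(axis=0)    # first moments
    var1, var2 = y1.var(axis=0), y2.var(axis=0)    # second central moments

    # Gain: the extra photons add G*xbar to the mean and G^2*xbar to the
    # variance, so the ratio of the differences isolates G.
    G = (var2 - var1) / (mu2 - mu1)
    # Mean photocount of the first dataset, from first moments only.
    xbar = (mu2 - mu1) / G
    # Bias: B = mu1 - G*xbar, which reduces to 2*mu1 - mu2.
    B = 2.0 * mu1 - mu2
    # Readout-noise variance: var1 = G^2*xbar + sigma_r^2.
    sigma_r2 = var1 - G**2 * xbar
    return G, B, xbar, sigma_r2
```

Note that the bias and readout-noise lines reduce to pure differences of measured moments, which is the source of the robustness advantage discussed above.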
This significantly reduces the uncertainty, since sample estimates of first moments are always more reliable than sample estimates of third moments, and in practice it drastically reduces the number of data frames necessary to achieve an accurate estimate. The calculation of the bias, $B$, is carried out in the same way as in the S3NUC solution. However, since the estimate of $\bar{x}$ is more robust in the SANUC method, the estimate of $B$ is also more accurate than the corresponding S3NUC estimate for lower numbers of data frames. Finally, Eq. (20) shows that the readout noise variance is again computed in the same way as in the S3NUC approach, but since the more reliable estimate of $\bar{x}$ is utilized, the SANUC solution has significantly lower variance for lower numbers of data frames.

The SANUC method is first validated by directly comparing estimates of the unknown variables, gain $G$ and bias $B$, to known simulated data. To generate each of an arbitrary number of frames of data, a gain $G$ is set for each pixel based on a Gaussian distribution with mean 1 and standard deviation 0.05. Similarly, a bias $B$ is set for each pixel based on a Gaussian distribution with mean 100 and standard deviation 1. A fixed $\Delta\bar{x}$ equal to $\bar{x}$ is set for all pixels and frames, based on the requirements of Eq. (14). The parameter $\bar{x}$ is used to generate the photonic input in each pixel, in each frame, by realizing a Poisson random variable with $\bar{x}$ as the mean. The additional additive noise terms in each pixel are taken from a draw from a Gaussian distribution with zero mean, unity variance, and an amplitude weighting of 10, independently for each frame. This test simulation is looped over a varying number of frames to assess the impact of frame count on the gain and bias estimates. Frame counts in the range of 1 to 500 are chosen in increments of 10. In Figs. 1 and 2, data points are spaced every 10 frames, with error bars showing one standard deviation. The results shown in Fig.
1 clearly show that the error in $G$ follows the same curve for both S3NUC and SANUC. This is as expected, because the equation used to recover the gain is the same in both algorithms. In Fig. 2, the error of the bias shows a stark difference between S3NUC and SANUC: S3NUC demonstrates much higher error than SANUC at low frame counts, while SANUC also demonstrates a comparatively flat error standard deviation. The higher error in S3NUC is attributed to cascading error in the bias estimate. Figure 3 shows a trend similar to Fig. 2 in the differences between S3NUC and SANUC. As with the bias, the S3NUC method relies on the recovered gain to calculate the readout noise variance, while SANUC uses only the statistics of the data itself, negating the cascading error. It is readily concluded that the SANUC method substantially outperforms the original S3NUC method for low frame counts, producing higher-accuracy results with drastically lower frame count requirements. Additionally, the improved reliability of the higher-order variables makes further data exploitation tractable.

4. Demonstration of the Radiometric Accuracy of the SANUC Algorithm

The intermediate outputs $\bar{x}$ and $\Delta\bar{x}$ from S3NUC, Eqs. (9) and (11), and $\bar{x}$ from SANUC, Eq. (18), represent estimates of the number of photons received by a photodetector. In both cases, the accuracy of this estimate is primarily dependent upon the accuracy of the gain estimate. However, due to the overall advantages of SANUC for nonuniformity correction, the radiometric accuracy of the estimate is assessed against the output of a known light source only for the SANUC method. Figure 4 shows the simple imaging system, observing a light source with a known power output, which was used to assess the radiometric accuracy of this method.
Using well-established radiometry techniques,7 the expected number of photons received by the sensor from a well-characterized light source and optical train can be calculated analytically as

$$N = \frac{P\,t}{h\nu} \cdot \frac{r^2}{[R\tan(\theta)]^2} \cdot \frac{I_{\mathrm{max}}}{I_{\mathrm{avg}}}, \tag{21}$$

where $N$ is the number of photons at the detector, $P$ is the power produced by the source, $t$ is the integration time of the sensor, $h\nu$ is Planck's constant times the frequency of the light from the source, $r$ is the radius of the receiver aperture, $R$ is the distance from the source to the receiver aperture, and $\theta$ is half the divergence angle of the source. When the tangent of this angle is computed and multiplied by $R$, it becomes the radius of a circle over which the radiation of the diode is distributed. This approach assumes that the light given off by the source produces an illumination pattern that is uniform over the divergence angle of the beam. The ratio of $I_{\mathrm{max}}$ to $I_{\mathrm{avg}}$ accounts for the variation in the light beam created by the diode: $I_{\mathrm{max}}$ is the peak intensity value in the projected beam, while $I_{\mathrm{avg}}$ is the average value in the beam. The light source used in this experiment was the Thor Labs LED555L. The relevant technical specifications of this glass-lens LED are listed in Table 1, and its power output is well characterized with respect to input current and bias, viewing angle, and spectrum. For this experiment, the LED was driven with a forward pulsed diode current maintained at 50 mA to generate a consistent 1-mW optical output.

Table 1. Technical specifications for the Thor Labs LED555L.8
The diode does not produce a uniform illumination pattern. Figure 5 shows the normalized illumination pattern as a function of viewing angle, as measured by the manufacturer. Numerically integrating this pattern over the viewing-angle axis yields a value of 0.5, making the average illumination one-half the maximum. Since Eq. (21) computes the number of photons expected for a source that distributes its photons equally over its full divergence angle out to 20 deg, the theoretical number of photons shown in Table 2 is adjusted upward by a factor of 2, because the 200-mm aperture is placed directly at the center of the maximum intensity in the experiment. The aperture itself has a viewing angle of 0.15 deg in the region around the center of the diode light beam. This implies that, over the entire aperture, twice as many photons should be measured as would be calculated by Eq. (21).

Table 2. SANUC estimates and radiometry results for the fully bright LED.
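The photon-count calculation, including the factor-of-2 peak-to-average adjustment, can be evaluated numerically. The sketch below uses the values quoted in the text (1-mW output, 100-ms integration, 555-nm light, 4.33-m range, 20-deg half divergence, peak-to-average ratio of 2); the 5-mm aperture radius is a placeholder assumption, not the experiment's value.

```python
import math

def expected_photons(P, t, wavelength, r, R, theta, peak_to_avg):
    """Photons expected at an aperture of radius r at distance R from a
    source of power P observed for time t, assuming the light is spread
    uniformly over a disk of radius R*tan(theta) (theta = half divergence
    angle), scaled by the beam's peak-to-average intensity ratio."""
    h, c = 6.626e-34, 2.998e8                         # Planck's constant, speed of light
    photons_emitted = P * t / (h * c / wavelength)    # P*t / (h*nu)
    capture_fraction = r**2 / (R * math.tan(theta))**2
    return photons_emitted * capture_fraction * peak_to_avg

# Values quoted in the text; the 5-mm aperture radius is a placeholder.
N = expected_photons(P=1e-3, t=0.1, wavelength=555e-9,
                     r=5e-3, R=4.33, theta=math.radians(20.0),
                     peak_to_avg=2.0)
```

Because the expression is a simple product, the sensitivity of $N$ to each parameter in the error-bound analysis below is linear (or quadratic, for the lengths), which is why extreme parameter values map directly to upper and lower photon bounds.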
Images of the source are captured by an AVT Stingray F-504B camera through a pinhole and an 18-cm focal-length lens, illuminated from a distance of 4.33 m. This camera features a monochrome CCD sensor.9 The image of the pinhole illuminates an area large enough to provide a sufficient set of independent pixel measurements for statistical analysis. The base exposure time was set at 100 ms and the software-driven camera gain at 20 dB. Both the output of the radiometry equation using the source specifications and the magnitudes reported in the images are in units of photoelectrons, and the radiometry equation accounts for all of the photons from the light source that enter the aperture. After the image is corrected by removing the gain and bias via the general SANUC method, the corrected image is simply summed to collect the entire photoelectron count. The total photoelectron count is then divided by the quantum efficiency of the detector to convert the number of photoelectrons into photons. Table 2 details the results of using SANUC for a radiometry estimate on the average of 100 frames of the fully bright LED light source. The experimental results agree with the theoretical radiometry calculation, with only 2.7% absolute error from the nominal computed value for theoretical photons shown in Table 2. Table 2 also shows upper and lower bounds for the theoretical photon value. These bounds are computed using extreme values, shown in Table 3, for the parameters used in Eq. (21). The extreme values for $R$ are obtained by estimating up to 5 mm of error in the measuring-tape distance measurement. The extreme values for the diode wavelength are obtained from the manufacturer. The extreme values for the diode current are obtained from the estimated accuracy of the current meter used to measure the diode current.
It is expected that the diode optical power should be linearly related to the diode current. The bounds on the ratio of peak to average beam intensity are obtained by using the minimum and maximum values on each of the Riemann-sum subintervals taken from the plot of the measured beam intensity shown in Fig. 5. The Riemann sum was computed by sampling that plot in increments of 5 deg. The mean value over each subinterval was used to compute the average irradiance in the pattern, and the peak of the pattern was then divided by that average to obtain the nominal value of the ratio. The maximum values on each subinterval were used to compute the minimum value of the ratio, and the minimum values on each subinterval were used to compute the maximum of the ratio. The upper and lower bounds on the integration time were estimated based on the number of digits available for controlling the integration time in the camera software.

Table 3. Parameter values used to compute upper and lower bounds for the photon count in Eq. (21).
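The Riemann-sum procedure just described can be sketched compactly. The sample values below are hypothetical placeholders standing in for the digitized Fig. 5 pattern, not the measured LED555L data; only the bounding logic is the point of the example.

```python
import numpy as np

def ratio_bounds(intensity):
    """Nominal, lower, and upper peak-to-average intensity ratios from a
    beam pattern sampled at fixed angular increments. Subinterval means
    give the nominal average irradiance; using subinterval maxima
    (minima) instead inflates (deflates) the average, which yields the
    lower (upper) bound on the peak-to-average ratio."""
    peak = intensity.max()
    mids = (intensity[:-1] + intensity[1:]) / 2.0         # subinterval means
    highs = np.maximum(intensity[:-1], intensity[1:])     # subinterval maxima
    lows = np.minimum(intensity[:-1], intensity[1:])      # subinterval minima
    nominal = peak / mids.mean()
    lower = peak / highs.mean()
    upper = peak / lows.mean()
    return nominal, lower, upper

# Hypothetical normalized pattern sampled every 5 deg (not measured data).
pattern = np.array([1.0, 0.9, 0.6, 0.3, 0.1])
nom, lo, hi = ratio_bounds(pattern)
```

The same inflate/deflate reasoning applies whatever the angular sampling increment, so refining the 5-deg sampling would only tighten the bounds.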
5. Conclusions

The S3NUC method was refined into the SANUC method, drastically reducing the error in the bias and readout noise variance estimates, as demonstrated with simulated data in Figs. 2 and 3. The resulting refined SANUC method exhibits orders-of-magnitude better performance than the original S3NUC method for calibrations taken with low numbers of frames of data, improving its suitability for in-situ operations and radiometric calibration. The radiometric estimates of the number of photons collected from the diode by the camera in the experiment described are in good agreement (2.7% absolute error) with those predicted by radiometry, as shown in Table 2. The accuracy of the theoretical photocount is bounded by the upper and lower photon calculations. The estimated number of photons from the SANUC algorithm falls within these bounds, showing that the method produces a photocount estimate consistent with theoretical predictions. This result confirms that with as few as 100 frames of data, an accurate estimate of the number of photons from an optical source can be attained.

References

I. K. Baldry, Time-Series Spectroscopy of Pulsating Stars, University of Sydney, Sydney
(1999).
J. G. Harris and Y.-M. Chiang, "Nonuniformity correction of infrared image sequences using the constant-statistics constraint," IEEE Trans. Image Process. 8(8), 1148–1151 (1999). http://dx.doi.org/10.1109/83.777098
M. M. Hayat et al., "Statistical algorithm for nonuniformity correction in focal plane arrays," Appl. Opt. 38(5), 772–780 (1999). http://dx.doi.org/10.1364/AO.38.000772
R. C. Hardie et al., "Scene-based nonuniformity correction with video sequences and registration," Appl. Opt. 39(8), 1241–1250 (2000). http://dx.doi.org/10.1364/AO.39.001241
A. Catarius and M. Seal, "Static scene statistical algorithm for nonuniformity correction in focal-plane arrays," Opt. Eng. 54(10), 104111 (2015). http://dx.doi.org/10.1117/1.OE.54.10.104111
J. Goodman, Statistical Optics, John Wiley and Sons, New York (1985).
R. Richmond and S. Cain, Direct-Detection LADAR Systems, SPIE, Bellingham, Washington (2010).
"LED555L spec sheet," Thorlabs (2016).
"AVT Stingray F-504B / F-504C product pamphlet," Allied Vision Technologies, Stadtroda (2008).