Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes

A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and to solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments, along with flat fielding results, spectral response measurements, and absolute radiometric calibration results, are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. [DOI: 10.1117/1.JEI.26.1.013014]


Introduction
The Raspberry Pi Foundation provides low-cost, high-performance single-board Raspberry Pi computers to educate and solve real-world problems. As of early 2016, over 8 million Raspberry Pis had been sold, making it one of the most popular single-board computers on the market. 1 These small single-board computers are quickly moving from the do-it-yourself (DIY) community into mainstream technology development. Many are being used to acquire a wide range of measurements and are being incorporated into instruments for a multitude of applications, including medical support and e-health, 2-7 robotics, 8 surveillance monitoring, 9 and food production optical sorting. 10 The advent of open-source software (and some hardware) has only quickened this trend. The Raspberry Pi credit-card-sized computer supports several accessories, including a camera module containing the Sony IMX219 sensor. This computer and camera configuration is of particular interest since it can provide raw-data format imagery that can be used for a multitude of applications, including computer vision, biophotonics, medical testing, remote sensing, astronomy, improved image quality, high dynamic range (HDR) imaging, and security monitoring. This paper evaluates the characteristics of the Raspberry Pi V2.1 camera based on the Sony IMX219 sensor and the radiometric performance of its raw-data format imagery, so the system can be effectively used for scientific imaging and engineering purposes.
The Raspberry Pi 3 is the third generation single-board Raspberry Pi computer and became available to consumers in February 2016. Some of the more significant Raspberry Pi attributes, including interfaces, are described in Table 1. At the time of this writing, a Raspberry Pi 3 sold for about $35 USD and the V2.1 camera module sold for approximately $25 USD. 1,11 The Raspberry Pi Foundation provides several operating systems for the Raspberry Pi 3, including Raspbian, a Debian-based Linux distribution, as well as third-party Ubuntu, Windows 10 IoT Core, RISC OS, and specialized distributions for download.
To understand the scientific and engineering potential of these versatile imaging sensors, a comprehensive laboratory-based radiometric characterization was performed on a small number of Raspberry Pi V2.1 camera modules. The camera is based on the Sony IMX219 silicon CMOS back-illuminated sensor and produces 8 megapixel images that are 3280 × 2464 pixels in size. The IMX219 sensor operates in the visible spectral range (400 to 700 nm) and uses a Bayer array with a BGGR pattern. Sensor specifications are detailed in Table 2. 12 The Raspberry Pi Foundation also provides a visible and near-infrared version of the Sony IMX219 camera called the NoIR camera. This camera has no infrared (NoIR) filter on the lens, which allows imaging beyond the visible range. In this paper, the NoIR version was not considered.
The V2 camera module operates at a fixed focal length (3.04 mm) and a single f-number (F2.0), typically focused from the near field to infinity. Images can be captured at ISO settings between 100 and 800 in manually set increments of 100 (although not verified above 600 in this investigation) and camera exposure times between 9 μs and 6 s (although not verified above 1 s in this investigation) using a rolling shutter. Some of the more significant camera specifications are shown in Table 3. In addition to still photos, the Raspberry Pi Sony IMX219 sensor supports a cropped 1080p format at 30 frames per second (fps) and full-frame video at up to 15 fps, but not in raw-data format. The entire camera board is small (25 mm × 25 mm × 9 mm) and weighs about 3 g. It connects directly to the Raspberry Pi 3 through a 15-pin mobile industry processor interface (MIPI) camera serial interface and is shown alongside a Raspberry Pi 3 in Fig. 1.

Radiometric Characterization Overview
Several scientific and engineering applications require raw-data format imagery with known and calibrated radiometric properties. A camera's radiometric characterization typically includes dark frame assessments, linearity, image noise assessments, exposure or electronic shutter stability assessments, flat fielding, spectral response measurements, and an absolute radiometric calibration. Dark frame knowledge and flat fielding improve image quality by correcting for fixed pattern noise (FPN) and other spatial effects such as vignetting. Linearity characterization is essential for scientific and engineering applications. Understanding noise as a function of signal level is important for properly exposing imagery, determining the number of samples required for a particular application, and optimizing denoising algorithms. Spectral response information is used in traditional photographic color balancing 13 and for spectroscopy, 14,15 remote sensing, 16 astronomy, 17,18 and many other science and engineering applications. 19-21 Absolute calibration relates image acquisition conditions (including illumination and viewing geometry), exposure time, ISO, and pixel digital number (DN) value to spectral radiance.
To perform the radiometric characterizations described in this paper, the camera was accessed and controlled with software from within the Python programming language using the PiCamera application programming interface (API). While finer grain control of the camera can be achieved through low level C libraries, such as OpenMax IL, all of the functionality necessary for the activities in this paper is available from the PiCamera API. Raw-data format images were preprocessed on the Raspberry Pi with a Python script utilizing the NumPy library and saved in the NumPy file format. The preprocessed raw images were transferred to a separate computer and read into MATLAB with a NumPy data format reader. All further processing was accomplished using MATLAB. The radiometric characterizations described in this investigation include dark frame assessments at multiple ISO and exposure settings, camera linearity assessments as a function of ISO setting and exposure time, sensor image noise as a function of ISO setting, exposure stability assessments, spectral band specific flat fielding function measurements, camera spectral response measurements, and an absolute radiometric calibration to tie measured camera DN values to NIST-traceable SI radiance units. Further information on the techniques that were utilized is described in Ref. 22.
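The raw preprocessing step can be sketched in NumPy. The paper's actual scripts are not reproduced here; this is a minimal illustration assuming the raw frame has already been unpacked into a 2-D integer array (on the Pi, for example, via picamera.array.PiBayerArray). The function name and layout below are illustrative, not from the paper.

```python
import numpy as np

def split_bggr(raw):
    """Split a BGGR Bayer mosaic (2-D array) into its four color planes.

    Illustrative sketch: the sensor's BGGR pattern means blue samples sit
    on even rows/even columns, red on odd rows/odd columns, and the two
    green samples on the remaining diagonal positions.
    """
    b = raw[0::2, 0::2]    # blue:  even rows, even cols
    g1 = raw[0::2, 1::2]   # green: even rows, odd cols
    g2 = raw[1::2, 0::2]   # green: odd rows, even cols
    r = raw[1::2, 1::2]    # red:   odd rows, odd cols
    return b, g1, g2, r
```

Each returned plane is half the mosaic's size in each dimension; the planes can then be saved in the NumPy file format for downstream analysis.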

Dark Frame Assessment
A camera dark frame assessment was performed to quantify and correct for the camera's fixed-pattern noise bias. Camera ISO settings were varied from 100 to 600 in steps of 100 at two different exposure times, 5 and 50 ms. After camera warm-up, defined in this investigation as 200 exposures, groups of 250 images were acquired in a dark room at ∼25°C with black cloth covering the camera aperture for each camera setting. The dark frame statistical properties were analyzed and are shown in Tables 4 and 5. The entire image frame was used in this assessment. As expected, the dark images became noisier with increasing ISO setting. Data taken at the higher exposure setting are also slightly noisier. In all cases, the mean and median values were essentially identical.
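The statistics reported in Tables 4 and 5 can be computed from a stack of dark frames with a few lines of NumPy; the helper below is an illustrative sketch (names are not from the paper's code).

```python
import numpy as np

def dark_frame_stats(stack):
    """Per-pixel mean dark frame plus summary statistics for a dark
    frame assessment. `stack` is an (N, H, W) array of N dark frames."""
    mean_frame = stack.mean(axis=0)  # 250-frame mean dark image
    stats = {
        "mean": float(mean_frame.mean()),
        "median": float(np.median(mean_frame)),
        "std": float(mean_frame.std()),
    }
    return mean_frame, stats
```

The mean and median being essentially identical, as observed in the tables, indicates a symmetric dark-level distribution.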
A 250-frame mean dark image was generated at each ISO setting. Since dark frames are temperature dependent, they were acquired at the same experimental conditions as the bright frames. Histogram plots generated for the mean dark images with the lowest and highest ISO setting further describe the noise variation in ISO and are shown in Fig. 2 for the 50-ms dataset. The ISO 600 histogram is slightly broader, and, while not shown, the tails are significantly longer. A 500 × 500 pixel subset of the corresponding 250-frame mean dark images is presented in Fig. 3 to show the fine scale spatial structure. As indicated in the tables, histogram comparisons between the two different ISO settings are nearly identical between imagery acquired at 5 and 50 ms.

Camera Linearity
Raspberry Pi camera linearity was evaluated as a function of both exposure time and ISO setting. These measurements were obtained by imaging an in-house developed 1.5-m diameter large integrating sphere lamped with Luxeon Rebel 4000 K white-light LED sources mounted on relatively large 40-mm diameter heat sinks to maintain temperature stability. 23 In an integrating sphere, light rays from a source (input) are uniformly scattered by highly reflective diffuse inner walls, as shown in Fig. 4, to produce uniform illumination across the camera field of view (placed at the output). The sphere's spectral radiance was monitored with a NIST-traceable spectrometer using a bare fiber. The LED sources were powered using a stable power supply. LED current was set so that the measured DN value at the center of the image in the green band was ∼80% of the maximum DN value at the longest exposure time or highest ISO setting depending on the test sequence. Since the product of the light source spectral shape and sensor response peaks in the green spectral region, 23 green band pixels have larger DN values than red or blue band pixels. Reducing the exposure time or ISO setting (depending on the test sequence) from this set point enabled the camera to be tested over an extended portion of its dynamic range. For this test, the camera was positioned in front of the sphere, as shown in Fig. 5.
Since the focal length of the lens is 3.04 mm, the camera was effectively focused at infinity in this position. In this assessment, the data were normalized using the mean of a 200 × 200 pixel region in the center of the image.

Linearity with Exposure Time
Camera linearity with exposure time was determined at an ISO setting of 100. In this set of measurements, five images were taken at each exposure time setting. The bright images were temporally and spatially averaged to establish a mean DN value within the center 200 × 200 pixel region. The raw data were found to be linear with respect to exposure time for the green band, as shown in Fig. 6. Table 6 summarizes the linear fit through the data. In this table and subsequent tables, root mean square error (RMSE) is defined as

RMSE = sqrt[(1∕n) Σ e_i²],   (1)

where n is the number of data points and e_i is the residual, or the difference between the model (a straight-line fit in this case) and the measured data, at each point.
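The straight-line fit and the RMSE defined above can be reproduced directly in NumPy; the helper name below is illustrative.

```python
import numpy as np

def linear_fit_rmse(x, y):
    """Fit y = m*x + b and return (m, b, rmse), where
    rmse = sqrt((1/n) * sum(e_i**2)) over the fit residuals e_i."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m, b = np.polyfit(x, y, 1)          # least-squares straight line
    e = y - (m * x + b)                 # residuals against the model
    rmse = float(np.sqrt(np.mean(e ** 2)))
    return m, b, rmse
```

Applied to mean green-band DN versus exposure time (or versus ISO in the next section), a near-zero RMSE relative to the signal range indicates the linearity reported in Tables 6 and 7.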

Linearity with ISO
Camera linearity with ISO setting was determined at an exposure time setting of 10 ms. In this set of measurements, five images were taken at each ISO setting. As with the previous linearity assessment, the bright images were temporally and spatially averaged within a 200 × 200 pixel region in the center of each image to establish a mean DN value. The raw data were found to be linear with respect to ISO setting for the green band raw data, as shown in Fig. 7. Table 7 summarizes the linear fit through the data.

Sensor Image Noise
Total sensor image noise (S_Total) can be expressed in terms of photon shot noise (S_Shot), read noise (S_Read), and FPN (S_FPN), as described in Eq. (2): 24

S_Total = sqrt(S_Shot² + S_Read² + S_FPN²).   (2)

In this investigation, the team used a mean-variance method to characterize noise as a function of signal. This method, which plots pixel variance against the mean signal on a linear plot, yields results that are relatively simple to interpret. A more detailed description of various methods, including the photon transfer method, is described in Ref. 25.
Sensor noise characterization is usually performed on single or pairs of frames of data by acquiring imagery within an integrating sphere without a lens or optic in place. The near perfectly uniform illumination field produces a near-uniform mean signal (DN) across the FPA that is independent of position with the exception of FPN. Using this technique, a mean signal and variance are calculated for each frame of data acquired. Sphere illumination (radiance level) is varied to generate means and variances across the dynamic range of the sensor.
While some third-party camera boards give users the ability to change lenses, the camera module provided by Raspberry Pi has a fixed (glued) lens, not easily removable, in front of the IMX219 sensor, which introduces signal roll-off with field angle (see Sec. 7). This spatially varying roll-off effect prevents one from obtaining a near-uniform mean signal within a single frame. In this investigation, temporal mean signal and pixel variance values were instead determined by analyzing N frames of data (all pixels) acquired at a fixed set of conditions (ISO, exposure time, and sphere illumination), as described by Ref. 24. While a large amount of data are needed, this technique removes FPN from the assessment and derives the lowest possible sensor image noise value, comprised solely of shot and read noise.
After warm-up, 250 frames of data were acquired at five different illumination levels (including dark frames) spanning the dynamic range of the sensor. Pixel locations were then sampled across the FPA at every 1000 pixels so that each 8 megapixel image produced ∼8000 data points of mixed RGB. Data were acquired at ISO values of 100, 200, and 400 at 5-ms exposure times. The resulting mean-variance plots are shown in Fig. 8. Linear fits were made through the data, as shown in Table 8. As expected, the slope scales with ISO setting.
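The temporal mean-variance computation described above can be sketched as follows; because the statistics are taken over the frame axis (time) rather than over space, FPN drops out of the variance. The function and parameter names are illustrative.

```python
import numpy as np

def temporal_mean_variance(stack, step=1000):
    """Per-pixel temporal mean and variance from an (N, H, W) frame
    stack, thinned by keeping every `step`-th pixel (the sampling
    scheme that yields ~8000 points from an 8 megapixel frame)."""
    mean = stack.mean(axis=0).ravel()[::step]
    var = stack.var(axis=0, ddof=1).ravel()[::step]  # sample variance
    return mean, var
```

Plotting `var` against `mean` and fitting a line gives the mean-variance slope, which (as Table 8 shows) scales with the ISO setting.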

Camera Exposure Stability
Although one would expect that after turning the camera on, the electronic shutter should be very stable, the team saw some unexpected variation in camera output and decided to measure camera stability. Raspberry Pi camera exposure stability was tested at an exposure setting of 5 ms and an ISO setting of 100. Frames were acquired every 2.0 s. Illumination to the sphere was set such that the center green pixels measured ∼800 DN. Four hundred frames were acquired, spatially averaged, and normalized to the steady state temporal mean (mean of the last 150 data points). These values, shown as a percentage of the steady state temporal mean, are plotted as a function of time in Fig. 9 (dark frames) and Fig. 10 (bright frames). Turning the camera on and taking images can cause changes in output due to sensor warming. The plots show that after approximately a 200 frame warm-up period, data values reach steady state.
The data were modeled as the solution to a thermal lumped circuit driven by a step function at the initiation of data acquisition: 26

Signal% = A (1 − e^(−F∕F_c)) + A_0,   (3)

where A is a scale factor, F is the frame number, F_c is a frame constant (analogous to a time constant), and A_0 is an offset constant. A time constant can be calculated by multiplying F_c by the frame interval (2.0 s here). The team expects that results will change slightly if the rate at which data are taken is changed. For the dark frame data, the fitted values for A, F_c, and A_0 are −0.008, 53.1, and 100.0, respectively. The data show that dark frames change on the order of 0.01%, which is negligible for almost any potential application. For the bright frame data, the fitted values for A, F_c, and A_0 are 0.309, 58.9, and 99.697, respectively. Although the bright frame transient behavior is small compared to photon noise, the data do show that one should allow the camera to come to equilibrium for some applications.
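The warm-up behavior can be checked numerically. Assuming the saturating-exponential step response implied by the fitted constants quoted in the text (a form inferred here, not copied from the paper), the bright-frame values predict roughly how many frames the transient takes to fall below 0.01% of steady state:

```python
import numpy as np

def warmup_signal(frame, A, F_c, A0):
    """Step-response warm-up model, Signal% = A*(1 - exp(-F/F_c)) + A0.
    (Functional form inferred from the fitted constants in the text.)"""
    return A * (1.0 - np.exp(-frame / F_c)) + A0

# Bright-frame fit from the text: A = 0.309, F_c = 58.9, A0 = 99.697.
A, F_c, A0 = 0.309, 58.9, 99.697

# Frames needed for the remaining transient A*exp(-F/F_c) to drop
# below 0.01 percentage points of the steady-state signal:
frames_to_settle = F_c * np.log(A / 0.01)
```

This evaluates to roughly 200 frames, consistent with the warm-up period adopted throughout the investigation.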

Flat Fielding
As part of this investigation, flat fielding surfaces were developed for the Raspberry Pi camera for each demosaicked RGB band. These measurements were acquired at an ISO setting of 100 and an exposure time of 20 ms. To reduce any local integrating sphere surface defects, the sphere was imaged at four different azimuthal positions and three different view angles. To reduce image noise from the flat fielding surface, three images were acquired at each azimuthal/view angle position. Median images were then generated based on these 36 images (4 azimuthal positions × 3 view angle positions × 3 images per position). To eliminate the influence of one band on another, a simple bilinear demosaicking algorithm was used. 27 Since lens roll-off is the dominant feature in the flat fielding surface, a new flat fielding surface will need to be acquired each time the lens is changed. The resulting RGB demosaicked flat fielding surfaces are shown as images and three-dimensional surfaces in Figs. 11-13. All surfaces were peak normalized to one. For visualization purposes, the mesh plot (three-dimensional surface) sampling was reduced by displaying the mean value of each 16 × 16 pixel block.
Diagonal transects were taken across each of the three flat fielding surfaces, from top right to bottom left and from bottom right to top left. These diagonal transects overlay each other, showing optical symmetry, as seen in Figs. 14-16. Each figure contains transects through a single image alongside transects through the 36-image median image. The red band transect, while similar, is not identical to the blue and green transects, as shown in Fig. 17. This may be caused by the red band filter attenuating the signal as a function of field angle and warrants additional study. The RGB flat fielding surfaces shown in Figs. 11-13 were fit using the functional form shown in Eq. (4).
While higher order terms were considered within this Fourier series expansion, surface noise began to be fit in addition to the general shape of the surface, and the overall fit did not improve:

f(x) = a_0 + a_1 cos(wx) + b_1 sin(wx) + a_2 cos(2wx) + b_2 sin(2wx).   (4)

The coefficients that were obtained when fitting these functions are shown in Table 9. Parameters that measure the goodness of fit are also included in the table. Note that the coefficients for the green and blue bands are nearly identical and are consistent with the curves shown in Fig. 17.
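A fit of this two-term Fourier form can be sketched with ordinary linear least squares if the angular frequency w is held fixed (a simplifying assumption made here for illustration; a full fit would also adjust w):

```python
import numpy as np

def fit_fourier2(x, y, w):
    """Least-squares fit of
    f(x) = a0 + a1*cos(wx) + b1*sin(wx) + a2*cos(2wx) + b2*sin(2wx)
    with w fixed, so the problem is linear in the five coefficients."""
    X = np.column_stack([
        np.ones_like(x),
        np.cos(w * x), np.sin(w * x),
        np.cos(2 * w * x), np.sin(2 * w * x),
    ])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # [a0, a1, b1, a2, b2]
```

Applying this along a transect coordinate recovers the kind of coefficients tabulated in Table 9, with goodness of fit available from the least-squares residual.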

Spectral Response
A camera's spectral response is a measure of how each detector responds to a given input illumination as a function of wavelength. The Raspberry Pi camera's spectral response was determined by imaging a quartz tungsten halogen lamp filtered using a monochromator, as shown in Fig. 18, and then comparing those measurements to those obtained with a calibrated power meter. Illumination wavelength was varied from 350 to 800 nm. At each wavelength step, the monochromator provided 1.5 to 2.0 nm spectrally wide illumination. Illumination from the monochromator exit slit was centered on the FPA to remove lens roll-off (vignetting) variability from the assessment. The light beam exiting the monochromator was also diffused using a few small sheets of lens paper. The acquired spectral response was peak normalized and is shown in Fig. 19 in arbitrary units. These measured spectral responses are broad and significantly overlap each other. Spectral response measurements of two different Raspberry Pi cameras were taken. These measurements showed very similar results, as displayed in Figs. 20-22.

Absolute Radiometric Calibration
An absolute radiometric calibration was performed on a single Raspberry Pi V2.1 camera, which enables one to convert camera acquired DN values into engineering units of radiance. An absolute radiometric calibration can be used to quantify the brightness of objects in a scene and enables a user to preset and optimize camera parameters, such as exposure time, ISO, and f-number, before image acquisition.
The absolute radiometric calibration is based on a general radiometric equation for a well-behaved (or correctable) pixel at a fixed ISO or gain setting, within a linearly behaved (or correctable) sensor. 22,28 Since a pixel's DN (count) is proportional to the number of signal electrons N_e within a pixel, 28 the generalized radiometric equation for a dark frame subtracted image, where the bias has been removed and the electronic gain is unity, can be written as

DN = [π τ A_d ∕ (4 (f#)² QSE h c)] ∫ L(λ) T(λ) η(λ) λ dλ,   (5)

where QSE is the quantum scale equivalence, 28 which relates counts to electrons, τ is the exposure time, A_d is the detector area, f# is the camera's f-number, h is Planck's constant, c is the speed of light, λ is the wavelength of light, L(λ) is the spectral radiance, T(λ) is the optical transmission, and η(λ) is the quantum efficiency. In this equation, the solid angle is approximated by π∕[4(f#)²], which yields an ∼4% error from the exact expression at F2.0. This error is corrected as part of the calibration process. To simplify the above equation, one can define the camera's spectral response S(λ), which is related to amps per watt, as

S(λ) = T(λ) η(λ) λ.   (6)

In many cases, one does not know the exact quantum efficiency or optical transmission of a camera, and what is measured (in DN) is actually a signal that is proportional to S(λ). If one peak normalizes S(λ) to unity, the integral of S(λ) over wavelength is the effective spectral width Δλ of the spectral response. 29 This allows one to define the average spectral radiance as

L̄ = ∫ L(λ) S(λ) dλ ∕ ∫ S(λ) dλ.   (7)

Using a parameter like the QSE, 28 which relates the number of electrons to counts, and including the ISO gain, one can rewrite Eq. (5) as

DN = π τ A_d (ISO∕100) L̄ Δλ ∕ [4 (f#)² QSE h c].   (8)

The QSE can be defined as

QSE = N_well ∕ N_DR,   (9)

where N_well is a pixel's well capacity in electrons and N_DR is the digital count range (1024 for a 10-bit system, minus the dark frame offset). Usually, QSE is defined for cases where electronic gain is unity. When ISO is used, this assumption is not always kept, but, for simplicity, we have used the ratio of ISO to QSE as a generalization to include electronic gain.
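The QSE defined above is simple enough to sketch directly. The full-well capacity and dark offset below are illustrative assumptions, not values reported in this paper:

```python
def qse(n_well, n_dr):
    """Quantum scale equivalence: electrons per digital count,
    QSE = N_well / N_DR."""
    return n_well / n_dr

# Illustrative numbers only (assumed, not measured in the paper):
n_dr = 1024 - 64          # 10-bit count range minus an assumed dark offset
q = qse(4500.0, n_dr)     # assumed ~4500 e- full-well capacity
```

With the QSE in hand, the forward model relates a known radiance, exposure time, and ISO setting to a predicted mean DN for each band.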
To perform an absolute camera calibration, the I2R 1.5-m diameter integrating sphere was illuminated with white-light Luxeon Rebel 4000 K LEDs (as before) and imaged by the Raspberry Pi camera nearly simultaneously with a NIST-traceable spectrometer, calibrated to better than 5% absolute accuracy, which measured the sphere's spectral radiance.
When acquiring imagery for the calibration, camera exposure was set to 10 ms and ISO was incrementally set to 300, 400, and 500. As mentioned earlier, current to the LEDs illuminating the 1.5-m sphere was set to maximize camera DN in the green band without causing saturation.
Five dark images and 60 bright images (4 azimuthal positions × 3 view angles × 5 images per position) of the sphere were acquired. As with the linearity measurements, these multiple bright images were acquired to reduce local integrating sphere surface defects and image noise. The entire image was used in this assessment. The bright images were dark frame subtracted, flat field corrected, and then temporally and spatially averaged to establish a mean DN value.
If we define a calibration coefficient C as

C = π A_d Δλ ∕ (4 × 100 × QSE h c),   (10)

we can rewrite Eq. (8) as

DN = τ ISO L̄ C ∕ (f#)².   (11)

The calibration coefficient can then be determined for each RGB band from the measured mean DN and the spectrometer-derived radiance:

C = DN (f#)² ∕ (τ ISO L̄).   (12)

Using F2.0, the resulting three-point mean calibration coefficients, determined at three different ISO values, are shown in Table 10.
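The calibration relation above (DN proportional to exposure time, ISO, and radiance, divided by the f-number squared) inverts directly to recover radiance from a corrected DN value. A sketch with illustrative argument names:

```python
def dn_to_radiance(dn, exposure_s, iso, f_number, C):
    """Invert the calibration relation DN = tau * ISO * L * C / (f#)^2
    to recover the band-average radiance L from a dark-subtracted,
    flat-fielded mean DN. C is the per-band calibration coefficient
    (as tabulated in Table 10); argument names here are illustrative."""
    return dn * f_number ** 2 / (exposure_s * iso * C)
```

A round trip through the forward and inverse forms is a quick sanity check: predicting DN from a known radiance and then inverting should return the same radiance.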
To keep the Raspberry Pi cameras radiometrically calibrated, this type of assessment would have to be performed periodically. The frequency of this calibration would depend on the radiometric accuracy required, camera operation, and operation conditions.

Results
A comprehensive radiometric characterization was performed on the Raspberry Pi V2.1 camera module. The camera was found to be stable over short periods, measured in days, and performance was repeatable between multiple cameras. Camera exposure was extremely stable (<0.1% variation) after warm-up. Raw-data format DN values were linear with ISO and exposure time over the regions investigated. Flat fielding surfaces were symmetric, indicating that the optical center of the camera was well aligned to the geometric center of the FPA. Without flat fielding corrections, raw-data format image brightness decreased ∼75% when transecting from the center to the edge of the image.
To qualitatively evaluate the overall effect of applying dark frame subtraction, flat fielding, and absolute radiometric calibration, a "typical" raw image was acquired at an ISO setting of 100 and an exposure time of 20 ms. The raw-data format image was demosaicked using a simple bilinear algorithm and displayed in RGB, as shown in Fig. 23. The image was then dark frame subtracted and flat fielded using the functions described above (Fig. 24) and finally radiometrically calibrated using the calibration coefficients provided in this paper (Fig. 25). The final image is a radiometrically correct image that can be converted to units of radiance. Note the improvement in color quality and brightness uniformity when all the corrections are applied.
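The per-pixel correction chain described above can be sketched as a single NumPy expression (function name illustrative; the flat field is assumed peak-normalized to one, so values at the image center are essentially unchanged):

```python
import numpy as np

def correct_frame(raw, dark_mean, flat):
    """Dark-frame subtraction followed by flat-field division.

    raw:       raw-data frame (one demosaicked band)
    dark_mean: mean dark frame at the same ISO/exposure settings
    flat:      peak-normalized flat fielding surface for that band
    """
    return (raw.astype(np.float64) - dark_mean) / flat
```

Multiplying the result by the appropriate inverse calibration factor then yields radiance units, completing the chain shown in Figs. 23-25.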

Conclusion
The Raspberry Pi V2.1 camera module, operated using the Raspberry Pi 3 single-board computer, has been radiometrically calibrated to produce high quality imagery appropriate for scientific and engineering use. The radiometric calibration coefficients determined in this investigation were applied to imagery acquired with the V2.1 camera module to recover information in SI units of radiance. This finding opens up a wide range of scientific applications associated with computer vision, biophotonics, remote sensing, HDR imaging, and astronomy, to name a few. While the camera modules appeared stable after warm-up over the few-month investigation, the camera's value to the scientific community will be determined in part by longer term stability. The small number of camera modules that were investigated produced consistent, repeatable results. A larger scale investigation involving many more cameras will need to be performed before the community can feel confident that the results of this investigation can be applied to other Raspberry Pi V2.1 camera modules. It should be noted that each camera module will be slightly different, and, for some applications, each individual camera module will have to be characterized.
Mary Pagnutti is president and cofounder of Innovative Imaging and Research. She received her BE and ME degrees in mechanical