Open Access
28 December 2018
Performance comparison of state-of-the-art high-speed video cameras for scientific applications
Julien Manin, Scott A. Skeen, Lyle M. Pickett
Abstract
Time-resolved visualization of fast processes using high-speed digital video cameras has been widely used in most fields of scientific research for over a decade. In many applications, high-speed imaging is used not only to record the time history of a phenomenon but also to quantify it, hence requiring dependable equipment. Important aspects of two-dimensional imaging instrumentation used to qualitatively or quantitatively assess fast-moving scenes include sensitivity, linearity, and signal-to-noise ratio (SNR). Under certain circumstances, the weaknesses of commercially available high-speed cameras (e.g., sensitivity, linearity, image lag) complicate the experiment and add uncertainty. Our study evaluated two advanced CMOS-based, continuous-recording, high-speed cameras available at the time of writing. Various parameters, potentially important for accurate time-resolved measurements and photonic quantification, have been measured under controlled conditions on the bench, using scientific instrumentation. Testing procedures to measure sensitivity, linearity, SNR, shutter accuracy, and image lag are proposed and detailed. The results of the tests, comparing the two high-speed cameras under study, are also presented and discussed. Results show that, with careful implementation and understanding of their performance and limitations, these high-speed cameras are reasonable alternatives to scientific CCD cameras, while also delivering time-resolved imaging data.

1.

Introduction

Since the end of the 19th century, our knowledge has benefited from the progress made in high-speed photography.1 Instrumentation limitations were a barrier to fully understanding certain phenomena for almost 100 years, but high-speed photography now provides the level of performance researchers need to track high-speed processes. Beyond their extensive use in movies and sports broadcasting, high-speed cameras are used in a multitude of applications for advanced research and development, including ballistics, explosions, fluid dynamics, combustion, and vehicle crash-testing.2–4

Many techniques have been proposed to increase temporal resolution since photography was invented in the 1840s. The poor sensitivity of the photographic supports of the time (such as copper or glass plates) prevented any action-type shooting from being performed with normal lighting. To overcome the lack of light sensitivity and the associated motion blur, short illumination durations were implemented in the early days via the use of flashes.5,6 Almost a century and a half ago, Marey7 designed a photographic rifle able to shoot a succession of frames at speeds up to 60 frames per second (fps). High-speed photography was advanced by more light-sensitive emulsions, as well as by the introduction of the roll film at the end of the 19th century. The roll film (or reel in cinematography) permitted the large-scale development of the cinema or motion picture along with high-speed photography. The intermittent camera design, similar to the cinematograph, was improved to increase the framerate to several hundred and even 1000 fps in the early 1930s.8 For higher speeds, the rotary prism camera was able to record “registered” images at speeds up to 18 kfps with a four-facet prism and 16-mm film in the mid-1960s.9 This technology synchronizes the prism and the film such that both move at proportional speeds. Rotating-mirror cameras were originally used for streak photography,10,11 until Miller solved the “streak” effect for 2-D photography by using relay lenses to refocus the images onto the film.12 The design has been refined to acquire photographs of atomic explosions at acquisition speeds over 10 Mfps.13

The progress in semiconductor technology and the invention of the charge-coupled device (CCD)14 opened the door to the digital imaging era. Invented around the same time, the complementary metal-oxide semiconductor (CMOS) technology did not see the same development as the CCD for imaging applications until the early 1990s. The breakthrough for the high-speed CMOS camera came via a design from Etoh.15 The camera commercialized by Photron was able to acquire 4500 fps at full resolution (256×256  pix2) by using 16 parallel readouts and a microchannel plate intensifier. Although film cameras still offered better performance, digital high-speed imaging was an important step forward at the time because the digital format provided immediate results without requiring film development. The main advantage of CMOS cameras over CCD technology for high-speed imaging is that the charge or voltage is read out at each pixel, whereas in a CCD sensor it is transferred from pixel to pixel and to the register. This pixel-to-pixel transfer allows each photosensitive area to be larger but substantially decreases the pixel readout rate. The working principle of CMOS cameras “from photon to count” is briefly described in the next section.

Still, depending on the architecture, both CCD and CMOS technologies have been proven to be valid options for high-speed camera designs.16 Most of the methods for high-speed photography described above have been applied to digital imaging. A 25-Mfps rotating-mirror camera was designed by Chin et al.17 by using 128 CCD sensors instead of film. Higher speeds, above several hundred million fps, can be achieved with this technology, but the very low amount of light requires the use of gated intensifiers. These devices, called framing cameras, can record only a limited number of frames, generally ranging from 12 to 128 images. Another design is that of Etoh et al.,18 who developed a 1-Mfps camera using an in-situ storage image CCD sensor. The camera was able to record 100 frames at full resolution (312×260  pix2) at various speeds up to the aforementioned framerate. Commercial in-situ storage image sensor cameras can currently be found with enhanced resolution and rated speeds up to 10 Mfps, using modified or hybrid CMOS sensors. Streak cameras belong to a class of their own due to their different approach and perspective on imaging. Nevertheless, Gao et al.19 recently pushed the limits of high-speed 2-D imaging further by introducing compressed ultrafast photography, allowing transient scenes to be recorded at up to 100 Gfps using a streak approach. This approach relies on a streak camera to acquire the image of an object that has been spatially encoded by a digital micromirror device placed upstream. Recently, Ehn et al.20 captured images at a rate of trillions of frames per second by reconstructing, from a single exposure, the time-dependent pattern orientation of a coded illumination of a scene with an ultrafast laser.

This study concentrates on commercially available high-performance, high-speed CMOS cameras for continuous recording, meaning that the total number of frames depends only on resolution and storage capacity; framing or on-chip storage cameras were therefore not considered in this evaluation. A total of five high-speed cameras from four manufacturers were tested, but performance and reliability limited the comparison to the respective top-of-the-line products (at the time of writing) from the two main high-speed camera manufacturers. The objective of this manuscript is to present the testing procedures used to evaluate the cameras, as well as the results of the evaluation. The results will focus on characteristics and performance from a general perspective, with an emphasis on high-speed applications. Given the proprietary nature of each design, we mostly refrain from attempting to describe the underlying structure and characteristics of these imaging devices. Another important objective of this report is to communicate camera specificities that affect quantification for scientific applications, thereby assisting researchers in future experimentation as well as identifying opportunities for future development by camera manufacturers.

The document has been divided into five sections. Following the present introductory section, a basic description of the working principle of CMOS cameras is provided, followed by a nonexhaustive list of the parameters and characteristics important to high-speed scientific imaging. Section 3 details the procedures used to evaluate the different parameters that characterize high-speed cameras. Section 4 presents the results of the characterization, emphasizing the differences between the two cameras under study and also offering comparisons to other scientific imaging devices. Section 5 concludes this manuscript by summarizing the results of the present investigation.

2.

High-Speed Digital Imaging

As mentioned in Sec. 1, high-speed digital cameras have progressed quickly over the last decade. Some background on basic camera operation is required to understand the important parameters of high-speed cameras and their relationship to the various metrics available to evaluate their performance.

2.1.

Overview of CMOS Sensors Working Principle

The two cameras presented and evaluated in this work are equipped with CMOS sensors. The advantage of CMOS over CCD regarding speed has been mentioned in Sec. 1. As of today, CMOS technology dominates the field of imaging sensors, even though high-speed cameras represent a small fraction of the market. The manufacturers of the two cameras under test in this study have been producing high-speed cameras and pushing the limits of the technology with every new iteration.

A CMOS (or CCD) sensor is based on the metal-oxide semiconductor, meaning that field-effect transistors are employed to gate the charge coming from the semiconductor, i.e., the photosensitive area. The photosite material (semiconductor) used in most modern cameras is silicon. Silicon is used because its valence-to-conduction energy gap, or bandgap, of 1.1 eV is near ideal for wavelengths in the visible or near-infrared. This means that when a photon of higher energy (above 1.1 eV, or a wavelength below 1127 nm) hits the surface of the silicon, that photon will be absorbed in the silicon and produce a charge, assuming ideal conversion. The charge is related to the number of photons hitting the photosensitive area and being converted by the semiconductor. The silicon is doped with different materials carrying positive and negative charges to create a diode-like structure. As explained in Sec. 1, the main difference between a CMOS and a CCD sensor is the way charges are moved out of the photosite to the readout part of the sensor (or camera electronics). A CMOS sensor reads the charge out in the form of a voltage or current directly next to the photosite on the pixel. This allows parallel readout, a major advantage when it comes to speed. Several transistors are used to perform the necessary operations to move and read the charge produced by the semiconductor: reset, switch, and readout. The reset transistor allows the photosite (photodiode) to be reset to the initial potential, the switch transistor allows the charge (photoelectrons) to be placed on the readout bus, and the readout transistor converts the charge to an output voltage that is placed on the readout bus. This generic three-transistor active pixel architecture is still used in many cameras, but the lack of a global shutter, i.e., the typical rolling shutter, makes this layout unsuitable for high-speed imaging. Other architectures are commonly employed, using more transistors to accomplish the different features necessary for high-speed digital imaging, such as gate transfer or global shutter, meaning that all pixels are exposed at the same time. After the charge has been converted to voltage, it is further amplified and sent to the several on-chip analog-to-digital converters (ADCs). Each pixel's photonic-derived voltage is converted into digital units based on the level and the ADC bit depth (e.g., 12 bit). The digital information is then transferred to the storage unit of the camera. The on-board memory is a crucial part of continuously recording high-speed camera systems such as the ones under study. With the high continuous pixel readout rates of such cameras, the amount of information being stored in the memory every second is beyond typical transfer rates, and specifically designed solid-state drives must be used to sustain them.
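As a rough illustration of this photon-to-count chain, the following toy model steps a single pixel through the conversions described above (photons to photoelectrons through the QE, charge to voltage through a conversion gain, voltage to counts through a 12-bit ADC). All numerical parameters are illustrative assumptions, not the specifications of the cameras evaluated here.

```python
import numpy as np

def photons_to_counts(n_photons, qe=0.45, full_well=45_000,
                      conv_gain_uV_per_e=30.0, adc_range_uV=1.4e6,
                      adc_bits=12, rng=None):
    """Toy single-pixel model of the photon -> count chain in a CMOS sensor.

    All parameter values are illustrative assumptions, not measured
    properties of the cameras discussed in the text.
    """
    rng = rng or np.random.default_rng()
    # Photon shot noise: the number of detected photons is Poisson-distributed.
    detected = rng.poisson(n_photons * qe)
    # The photosite saturates at the full well capacity.
    electrons = min(detected, full_well)
    # On-pixel conversion to voltage (conversion gain), then digitization.
    voltage = electrons * conv_gain_uV_per_e
    counts = int(round(voltage / adc_range_uV * (2 ** adc_bits - 1)))
    return min(counts, 2 ** adc_bits - 1)

# Example: 50,000 incident photons on one pixel during one exposure.
print(photons_to_counts(50_000))
```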

2.2.

Specifics of High-Speed Cameras

Evaluating high-speed digital cameras is, in many aspects, similar to evaluating any digital imaging device. The high-speed part of the evaluation deals with time-critical parameters, such as digital exposure time or minimum interframe time. Because the sensors have been optimized for speed, their design and architecture may deviate from typical CMOS sensors. As such, it may be difficult to compare the results of the tests with single-shot cameras.

The characteristics and performance of high-speed digital cameras are linked, but the quantities provided on the manufacturer’s specification document only represent a small part of the actual camera performance. The specifications of the two cameras evaluated in this work are provided in Sec. 4 (see Table 1), but the following paragraphs provide a description of the different parameters related to general camera characteristics and performance.

Table 1

Characteristics and specifications for the two high-speed cameras as provided by the respective camera manufacturers. The ISO sensitivity is reported per the ISO 12232 S_sat method21 for tungsten illumination. The spectral response is given at 10% QE.

Properties                    Camera A         Camera B
Sensor resolution (pix2)      1280×800         1024×1024
Sensor technology             CMOS             CMOS
Sensor type                   Monochrome       Monochrome
Pixel size                    28 μm            20 μm
Bit depth                     12-bit           12-bit
Maximum pixel readout         26.3 Gpix/s      21.5 Gpix/s
Maximum framerate             1×10^6 fps       2.1×10^6 fps
Minimum shutter               265 ns           159 ns
Minimum interframe            375 ns           500 ns
Maximum ISO sensitivity       100,000          50,000
Spectral response (nm)        365–965          380–910
Fill factor                   65%              58%
Peak QE                       51%              46%

One of the first characteristics of digital imaging is the pixel count or sensor resolution. The number of pixels is analogous to the number of lines on a television or computer screen: the higher the number, the more spatial information the image can contain. Recent professional or scientific digital CMOS cameras can pack over 50 megapixels onto a 35-mm format (full-frame) sensor. The ratio of the sensor's physical size to the number of pixels along one characteristic dimension provides the pixel pitch, or the distance between two pixel centers. Most sensors use square pixels, but rectangular pixels are also quite common. In many cases, the pixel size is used instead of the pitch, therefore assuming that there is no gap between pixels. The size of the pixel (or pitch) for standard lens-mount devices can range from just a few microns for high-resolution scientific cameras to over 30 μm for high-speed systems. As described earlier, the size of the pixel does not directly translate into the dimension of the photosensitive area because of the on-pixel electronics. The fill factor provides a measure of the photosite dimension as the ratio between the photosensitive area and the pixel area. To overcome the reduced light-sensitive area of a pixel, a microlens array is commonly arranged atop the sensor to improve light collection and increase the effective fill factor. Pixel size and fill factor are two key parameters for high-speed imaging, as large pixels and high fill factors collect more light. The bit depth of the camera digitization unit (i.e., the ADC) is another very important parameter of digital imaging. In the case of high-speed cameras, the digital dynamic range (bit depth) is the result of a compromise between image quality (dynamic range, noise, etc.) and conversion speed (pixel readout).
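As a simple numerical illustration, the sketch below computes the effective photosensitive area per pixel from the pixel sizes and fill factors listed in Table 1, taking the pitch equal to the pixel size (i.e., assuming no gap between pixels) and ignoring any microlens gain.

```python
# Effective photosensitive area per pixel, using the pixel sizes and fill
# factors of Table 1. The pitch is taken equal to the pixel size and any
# microlens gain is ignored (illustrative simplification).
pixels = {"camera A": {"pitch_um": 28.0, "fill_factor": 0.65},
          "camera B": {"pitch_um": 20.0, "fill_factor": 0.58}}

for name, p in pixels.items():
    area_um2 = p["pitch_um"] ** 2 * p["fill_factor"]  # light-sensitive silicon
    print(f"{name}: {area_um2:.0f} um^2 photosensitive area per pixel")
```

The resulting ratio (roughly 2.2 to 1 in favor of camera A) reappears in the SNR and sensitivity comparisons of Sec. 4.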

Not always indicated on the specification sheet, the photonic conversion efficiency is another important factor in digital imaging. The efficiency of conversion from photons to photoelectrons, commonly called quantum efficiency (QE), is wavelength-dependent and represents the percentage of photons converted to photoelectrons or charge by the semiconductor. Many properties concerning sensor operation or performance, highly relevant to overall camera performance, are not always disclosed by the manufacturers due to intellectual property protection. One often overlooked characteristic of the sensor is the full well capacity (or full well depth). This quantity, generally reported in electrons, provides information regarding the capacitance properties of the photosite through the number of electrons that one pixel can handle before saturation. It is related to the actual dynamic range of the sensor, if the photoelectron noise is known. The photoelectron noise comes from photon noise, read noise, and dark current, and represents the noise induced in the semiconductor and conversion electronics housed by the pixel. The photon noise comes from the statistical noise in the number of photons hitting the sensor; it is generally given in electrons. The read noise, also given in electrons, corresponds to the noise produced by the conversion from photoelectrons to voltage, as well as the on-chip amplification noise. The dark current is produced by several thermal processes occurring inside the semiconductor; it is commonly provided in electrons/pixel/s and is independent of the amount of light hitting the sensor. Another type of noise important to imaging is the fixed pattern noise,22 which represents the spatial nonuniformities in intensity observed across the sensor. This noise comes from manufacturing tolerances regarding silicon doping, transistor performance (switching speed, jitter, etc.), on-pixel amplifier gain differences, or other spatially dependent variables, such as multichannel amplification and analog-to-digital conversion. The conversion gain applied at the pixel site is another interesting characteristic regarding digital camera performance; unfortunately, it is rarely reported on the specification document. As the name indicates, the conversion gain refers to the factor applied to convert photoelectrons into voltage. The overall gain should also include the off-chip amplification stages such that voltage can be linked to photons. Electron-to-voltage conversion and amplification have improved substantially over the last decades of active pixel sensor (APS) development, bringing noise down by orders of magnitude, allowing higher gains, and offering better low-light performance.
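To illustrate how these noise sources combine, the sketch below evaluates the usual CMOS temporal-noise model, in which photon shot noise, dark-current shot noise, and read noise add in quadrature. The read-noise and dark-current figures are generic assumptions for illustration, not values for either camera.

```python
import numpy as np

def snr_db(signal_e, read_noise_e=15.0, dark_current_e_per_s=100.0,
           exposure_s=1e-5):
    """SNR (dB) of a single pixel under the usual CMOS temporal-noise model.

    signal_e: mean collected photoelectrons; the noise terms are illustrative
    assumptions (read noise in e- rms, dark current in e-/pixel/s).
    """
    dark_e = dark_current_e_per_s * exposure_s
    # Shot noise on photons and dark current follows Poisson statistics,
    # so their variances equal their means; read noise adds in quadrature.
    noise_e = np.sqrt(signal_e + dark_e + read_noise_e ** 2)
    return 20.0 * np.log10(signal_e / noise_e)

for s in (100, 1_000, 10_000, 40_000):
    print(f"{s:>6d} e-  ->  SNR = {snr_db(s):4.1f} dB")
```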

All the elements of a digital high-speed camera described above result in digital images of different qualities based on the characteristics and performance of each part. As such, metrics can be used to evaluate the quality of the final product: the images. An ideal camera would be expected to provide a measure of the number of photons that hit the sensor from the visualized scene under all conditions. Typical quantities related to photonic quantification in CMOS-based cameras, and tested in this work, include signal-to-noise ratio (SNR), camera intensity linearity, framerate (or pixel throughput), or light sensitivity. Based on past experience, other parameters affecting light quantification in high-speed CMOS cameras need to be investigated. The procedures of the different tests are detailed in Sec. 3.

3.

Characterization Methodologies

This section details the methods proposed to evaluate digital cameras. Even though these tests were aimed at high-speed cameras, they are well adapted to lower speed or high resolution camera systems. The equipment developed and used for the tests is also described in this section.

The equipment developed for these tests includes a spatially uniform light source with short-pulse capabilities. The different experimental setups offered highly adjustable and controllable parameters to mimic different lighting and camera acquisition strategies. The cameras and optics were firmly attached to an 8×4  ft. (roughly 2400×1200  mm) damped optical table. The laboratory temperature and pressure were controlled over the duration of the tests. The room light was kept to a minimum for testing purposes and was verified not to affect the acquired data. Accurate positioning to control distances, particularly important when quantifying photonic collection, was ensured by an arrangement of three-axis micrometric stages. A 4-in. square light-emitting diode (LED) panel equipped with royal blue emitters centered around 460 nm was used for continuous, diffuse, and nearly monochromatic lighting. An 8-in. Hoffman Optics integrating sphere, equipped with a tungsten light bulb and an adjustable shutter (micrometric accuracy), was employed for diffuse, broadband illumination. The integrating sphere is equipped with a photodetector calibrated under the nominal tungsten-filament supply-current condition. A picoammeter monitors the photodetector illumination, corresponding to the integrating sphere output radiance. An ultrafast LED system, equipped with either a 9-mm2 green emitter (centered around 520 nm) or a 1-mm2 violet emitter (centered at 405 nm), was employed when a point-like, pulsed, and nearly monochromatic illumination was required. Figure 1 shows the normalized spectral radiances of the different illumination sources used in the experiments.

Fig. 1

Normalized radiance as a function of wavelength for the blue LED panel, the integrating sphere, and the green and violet LED systems used as illumination sources.


The ultrafast LED systems are able to deliver very short pulses at megahertz repetition rates. When equipped with the small (1  mm2) violet emitter, the system can produce light pulses as short as 10 ns, with enough intensity to illuminate the camera sensors to digital saturation under most conditions. On the other hand, the larger (9  mm2) green emitter can produce sub-100-ns light pulses with peak optical power in excess of 40 W. The output of the calibrated integrating sphere peaks around 1060 nm, with a maximum spectral radiance of 0.616  W/sr/nm/m2.

Accurate timing is paramount when evaluating high-speed camera shutter and interframe performance, as well as jitter. An 80-MHz arbitrary waveform generator was used to ensure timing accuracy. Because of the complex electronics employed to generate the ultrafast LED pulses, the time delay between the command signal and the actual optical output of the LED emitter has been measured. Figure 2 reports the command signal and the LED pulse signal acquired by a high-speed (150-MHz bandwidth), 1-mm2 silicon photodetector. The two signals were recorded by a 1-GHz bandwidth digital oscilloscope. The LED system was driven by a 30-V supply voltage and a 5-V, 20-ns long command signal.

Fig. 2

Command signal and measured light output for a short pulse using the violet LED. The LED system is driven by a 30-V supply voltage and a 20-ns long command signal (black curve). The optical output is delayed by approximately 60 ns from the command signal.


Figure 2 shows that the delay between the command signal and the actual light output from the LED system is approximately 60 ns. Note that changing the supply voltage modifies the delay; for instance, a 15-V supply voltage would delay the light output another 15 ns, for a 75-ns total delay. The LED pulse width is slightly shorter (15 ns) than the 20-ns command (both evaluated at full-width at half-maximum). We believe that the low-level tail observed on the measured LED pulse is due to capacitance effects in the photodiode at high output levels, despite the high bandwidth of the device.

The two cameras have been tested with and without an objective attached to the Nikon F-type front lens mount. When a lens was used, a versatile Nikkor 50 mm, f/1.2 lens was mounted. The objective was used at different speed settings, depending on the testing (illumination) requirement: f/1.2, f/2, and f/8. The use of the lens, as well as the settings—diaphragm (f-stop) and focusing ring position—are detailed next in the description of the different tests performed.

As mentioned earlier, the cameras underwent a series of tests aiming at evaluating their performances under different types of applications. The following paragraphs will describe the different tests, as well as the procedures employed for each one of them in this study.

3.1.

Readout Performance

The effective pixel throughput rate takes into account both the frame acquisition time and the frame readout time. To provide a more universal metric, the effective pixel throughput is presented here in pixel/s. It is defined mathematically as the image resolution $\mathrm{Res}_{\mathrm{img}}$ times the associated maximum framerate $F_{\mathrm{acq}}$:

Eq. (1)

$R_{\mathrm{pix}} = \mathrm{Res}_{\mathrm{img}} \times F_{\mathrm{acq}}$,
with $R_{\mathrm{pix}}$ the effective pixel throughput. The resolution is simply obtained by counting the number of pixels acquired in the image. The framerate is the acquisition frequency, in hertz (Hz) or fps. Note that the actual pixel readout rate can be extracted from the effective pixel throughput if the actual image acquisition and interframe times are known. This will be discussed as part of the electronic shutter performance testing.
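As a worked example of Eq. (1), the snippet below evaluates the effective pixel throughput at full resolution for the two cameras of Table 1; the full-resolution framerates are back-calculated from the maximum pixel readout figures and are therefore approximate, not quoted specifications.

```python
# Worked example of Eq. (1): effective pixel throughput R_pix = Res_img x F_acq.
# Full-resolution framerates below are approximate, back-calculated from the
# maximum pixel readout values in Table 1.
configs = [
    ("camera A, full resolution", 1280 * 800, 25_700),
    ("camera B, full resolution", 1024 * 1024, 20_500),
]
for name, res_img, f_acq in configs:
    r_pix = res_img * f_acq  # pixel/s
    print(f"{name}: {r_pix / 1e9:.1f} Gpix/s")
```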

3.2.

Intensity Response Linearity

There are different ways to test the response of a camera to different levels of light intensity. The EMVA 1288 standard23 can be used as a guideline to assess and report the linearity of digital acquisition systems. From a practical point of view, an easy way to measure camera response is to simply vary the exposure time, covering the dynamic range, while keeping the illumination constant. This method assumes that the actual exposure gate times match the set durations.

In this work, camera response to illumination intensity has been tested using the calibrated integrating sphere described earlier. The output radiance has been varied from zero to saturation with the integrating sphere placed right against the cameras' F-mount, i.e., at the flange, without a lens, as shown in Fig. 3. Because the actual spectral photonic conversions of the two cameras are unknown, the digital camera responses are presented as a function of the normalized radiance. The intensity response is, in general, an intrinsic characteristic of the sensor, but the tests were nevertheless repeated at several framerates and exposure durations.

Fig. 3

Schematic showing the calibrated integrating sphere in front of the camera, with the output port located at the lens mounting flange.


Note that, as for most results reported in this work, the digital intensity level covers the range from 0 to 4000 Cts, rather than the 4096 levels suggested by the bit depth of the cameras (12 bit). This is because, on the one hand, dark-field correction and noise-induced variation in pixel intensity limit the bottom and top ends of the dynamic range, respectively. On the other hand, as discussed in Sec. 4, camera A resets the dark image to a positive value (to preserve the digital intensity distribution around the reset value), which in turn limits the usable dynamic range to slightly below the expected 12-bit depth.

3.3.

Image Signal-to-Noise Ratio

The SNR of an imaging device is a paramount piece of information. It becomes especially important in conjunction with the light sensitivity of the camera. In this work, the SNR was measured as a function of digital level (light intensity in Cts) using the following expression, in units of decibels:

Eq. (2)

$\mathrm{SNR} = 20 \cdot \log_{10}\!\left(\frac{\mu_S}{\sigma_S}\right)$.

In Eq. (2), $\mu_S$ and $\sigma_S$ are the mean and standard deviation of the signal $S$, respectively. In order to measure the SNR across the dynamic range of the camera, the sensor was illuminated with a diagonal intensity gradient covering the entire dynamic range. The SNR was then computed by applying Eq. (2) to all pixels of the sensor, except those at the extreme ends of the dynamic range (0 and 4095 Cts). The advantage of this method compared with changing the illumination intensity uniformly across the sensor is that the SNR can be obtained over the entire dynamic range from a single set of images (100 images in this case). Note that the results were confirmed at several intensities with the sensor uniformly illuminated, using the data recorded for the linearity assessment described above.
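A minimal sketch of this per-pixel evaluation of Eq. (2) over a 100-image set is given below; the synthetic data and array names are placeholders standing in for recorded frames.

```python
import numpy as np

def per_pixel_snr_db(stack, bit_depth=12):
    """Per-pixel SNR (dB) from a stack of images, following Eq. (2).

    stack: array of shape (n_frames, height, width), e.g., 100 frames
    acquired while the sensor is illuminated with an intensity gradient.
    """
    mean = stack.mean(axis=0)
    std = stack.std(axis=0, ddof=1)
    # Exclude pixels pinned at the extremes of the dynamic range (0 and 4095
    # counts for a 12-bit camera), as done in the text.
    valid = (mean > 0) & (mean < 2 ** bit_depth - 1) & (std > 0)
    snr = np.full(mean.shape, np.nan)
    snr[valid] = 20.0 * np.log10(mean[valid] / std[valid])
    return mean, snr

# Example with synthetic data standing in for 100 recorded frames.
rng = np.random.default_rng(0)
frames = rng.poisson(lam=1500.0, size=(100, 64, 64)).astype(float)
mean, snr = per_pixel_snr_db(frames)
print(f"average SNR near {mean.mean():.0f} Cts: {np.nanmean(snr):.1f} dB")
```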

3.4.

Camera Sensitivity

When acquiring at high frequency, sensitivity is crucial to imaging due to the inherent lack of integration time. Because of the shortcomings of sensitivity standards inherited from film photography (discussed further below), detailed procedures should be laid out to objectively compare the devices. Methods to evaluate camera sensor sensitivity differ in whether a camera lens is used, whether the illumination is broadband or nearly monochromatic, continuous or pulsed, whether the light source is placed in the near or far field, etc. The selected method to measure camera sensitivity is similar to the way this parameter is generally tested in sensor evaluation standards (cf. ISO 1223221), using a calibrated illumination source, such as the tungsten-based integrating sphere described earlier. Similar to the arrangement employed to evaluate camera linearity, the source is placed directly against the F-mount flange (see Fig. 3). Because the integrating sphere is a continuous source, variation in exposure time between cameras can compromise the validity of the tests. The cameras were operated at 1000 fps, relying on a relatively long exposure time (50  μs) to limit differences between cameras, and any exposure time difference was accounted for during data analysis (cf. Sec. 3.5). The light intensity was varied such that the digital dynamic range of the camera would be fully evaluated. Only the uniformly lit central region of the sensor was averaged and quantified, to avoid the effect of intensity fall-off near the edges. Because the two cameras have different pixel dimensions, the radiant flux emitted by the source was corrected for pixel area.
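One plausible form of this pixel-area correction is sketched below, scaling the calibrated-source reading (photodiode current) by the pixel area so that the quantity compared across cameras is proportional to the flux collected per pixel; the current value is a placeholder, not measured data.

```python
# One plausible form of the pixel-area correction described above: scale the
# calibrated-source reading (photodiode current, uA) by the pixel area so the
# quantity compared across cameras is proportional to flux per pixel.
# The current value below is a placeholder, not measured data.
pixel_area_um2 = {"camera A": 28.0 ** 2, "camera B": 20.0 ** 2}
photodiode_uA = 3.2  # example source level read by the picoammeter

for name, area in pixel_area_um2.items():
    flux_per_pixel = photodiode_uA * area  # arbitrary units (uA * um^2)
    print(f"{name}: area-corrected source level = {flux_per_pixel:.0f} (a.u.)")
```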

It is difficult to quantify irradiance in this case because of unknown parameters, such as the spectral response or photonic conversion of both cameras. The photodiode current (in μA) measured on the calibrated light source can be used instead of the photonic irradiance. This method presents a major drawback: with a broadband illumination source, dissimilarities in camera spectral response may be interpreted as differences in sensitivity, which is especially important in the near-infrared. It must also be noted that camera linearity affects the results of these tests, inducing errors if the cameras behave differently.

Another important note on sensitivity is that it should not be dissociated from SNR: similar to the increased noise observed in highly sensitive films (high ISO ratings), digital sensors can offer high sensitivity but poor noise performance (depending on the electron-to-count conversion). This means that sensitivity should be compared at equal SNR values, as tested by the S_noise method of the ISO 12232:2006 standard. Unfortunately, the ISO 12232:2006 S_sat method relies on the saturation level instead of a given SNR value, thereby foregoing this crucial information. In neither case does the ISO 12232:2006 standard account for differences in pixel size, which makes comparisons between different sensors very difficult.

3.5.

Electronic Shutter Performance

The accuracy of the exposure gate time is another important parameter for high-speed cameras. Because the exposure time can be very short (below 1  μs in some cases), the rise and fall times of the gate must be kept very short. The accuracy has been tested by sweeping a short light pulse in time through the exposure gate. The violet LED light source was used at the conditions represented in Fig. 2 (30 V, 20 ns), producing a 15-ns long light pulse. The light source was placed 60 mm away from the 50-mm lens (at f/1.2) attached to the camera; an engineered diffuser was placed 25 mm away from the lens to uniformly distribute the light. Two exposure times have been tested: 2.5 and 50  μs, at 100 and 10 kHz, respectively. Note that the targeted digital intensity in the middle of the exposure gate corresponds to half the dynamic range. Another aspect of gate timing is precision, referred to as jitter, i.e., how repeatable the exposure gate is with respect to the image trigger (or frame period). Both accuracy and precision have been measured with the testing procedure described above.
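From such a sweep, the gate profile is obtained as the mean image intensity versus pulse delay, and the rise/fall times (10% to 90%) and effective gate width (full-width at half-maximum) follow directly. The sketch below illustrates this extraction on a synthetic profile; the function and test data are illustrative, not the authors' processing code.

```python
import numpy as np

def gate_metrics(delay_ns, response):
    """Rise/fall times (10%-90%) and FWHM of a measured exposure gate.

    delay_ns: pulse delay relative to the frame trigger (ns), increasing.
    response: mean image intensity recorded at each delay.
    """
    r = (response - response.min()) / (response.max() - response.min())

    def first_above(level):  # delay at which the gate opens past `level`
        return delay_ns[np.where(r >= level)[0][0]]

    def last_above(level):   # delay at which the gate drops back below `level`
        return delay_ns[np.where(r >= level)[0][-1]]

    rise = first_above(0.9) - first_above(0.1)
    fall = last_above(0.1) - last_above(0.9)
    fwhm = last_above(0.5) - first_above(0.5)
    return rise, fall, fwhm

# Synthetic stand-in for a swept ~2.5-us gate sampled every 100 ns.
delay = np.arange(0, 3500, 100.0)
profile = np.clip((delay - 200) / 200, 0, 1) * np.clip((3000 - delay) / 200, 0, 1)
print("rise, fall, FWHM (ns):", gate_metrics(delay, profile))
```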

3.6.

Image Lag

The effect of a frame (n) on the subsequent ones (n+1, n+2, etc.) is generally called image ghosting or image lag. This “memory” effect has been a recurrent problem in digital imaging systems, and high-speed CMOS cameras also suffer from image lag.24,25 The effects vary from camera to camera (or sensor to sensor), but one typical manifestation is the appearance of a dimmed version of the previous image. Figure 4 provides a visual example of the effects of image lag in high-speed camera images, showing the Sandia Thunderbird on a back-illuminated background. Figure 4(a) shows the back-illuminated Sandia logo, while Fig. 4(b) shows the subsequent image when the illumination was turned off. The intensity range has been adjusted to highlight the effects of image lag and is reported in the top-left corner of both images.

Fig. 4

Example images showing the effects of image lag. (a) A back-illuminated object (Sandia Thunderbird logo) and (b) the subsequent nonilluminated frame and the effects of image lag. Note that the digital intensity range, reported in the top-left corner, has been adjusted to highlight these effects.


In this example, the effects of image lag can be appreciated outside of the Sandia logo in Fig. 4(b), with darker regions corresponding to lighted areas in the previously acquired image. In this case, image lag causes the intensity in the subsequent nonilluminated frame to decrease with respect to the expected level. It can be noted that this set of images was not acquired with the cameras investigated in the present work. Depending on the sensor or light configuration (from light to dark or dark to light), this “ghost” version of the previous image can either be positive (the subsequent image intensity is higher than expected) or negative (the subsequent image intensity is lower than expected). Because of the different manifestations of this lag, several hypotheses have been put forth to explain it. Most explanations agree that some charge is not depleted to the readout circuitry and is left over in the silicon layers or possibly in the semiconductor. This charge is then read out on a subsequent frame and produces the ghosting effect. Image lag potentially affects every frame but becomes particularly apparent when the intensity varies significantly between images. Studies have shown that image lag tends to increase with photodiode size (pixel area).26 Thus, addressing image lag has certainly been a great challenge for the respective design teams of the two camera sensors because of their large pixels. There are many ways to evaluate and measure image lag, but describing and quantifying the many effects would substantially extend this article. The authors are still investigating image lag and are working on implementing correction procedures for both cameras.

The present study addresses the spatial dependence of image lag but also quantifies the amount of lag in terms of image intensity. Image lag has been evaluated in this work by lightly and uniformly illuminating a diffuser screen with the blue LED panel and by using the pulsed high-power green LED driver to illuminate a small region of the image. The cameras were equipped with the 50-mm lens (at f/1.2) and focused onto the diffuser placed approximately 500 mm from the respective cameras' F-mount flanges. A schematic of the setup showing the various LED light sources, the diffuser screen, and the camera equipped with the 50-mm lens is provided in Fig. 5. The green LED source was turned on once every five frames, leaving four lightly illuminated images between light pulses. The opposite schedule was also tested, with all but one frame illuminated by the LED light pulse over a five-frame sequence. The repeatability of the LED pulse is critical to this test, and its consistency has been verified through monitoring of the pulses with a high-bandwidth photodiode. The pulsed LED system was temperature-controlled to increase pulse repeatability. Both illumination systems (pulsed and continuous LEDs) were adjusted to produce similar digital intensity levels on both cameras. The lightly illuminated background is necessary to prevent the camera intensity distribution from dropping to the bottom of the digital intensity scale. More information on image lag and its effects is given later along with the results.
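One simple way to quantify lag from such a sequence is to compare the first nonilluminated frame after a light pulse with a background-only reference and express the residual as a percentage of the pulsed signal. The sketch below implements this metric under that assumption, using synthetic frames; it is not necessarily the metric retained by the authors.

```python
import numpy as np

def image_lag_percent(frame_after_pulse, background_frame, pulsed_frame):
    """Residual signal in the first nonilluminated frame after a light pulse,
    expressed as a percentage of the pulsed-frame signal above background.

    All frames are assumed to be averages over many repetitions of the pulse
    schedule described in Sec. 3.6.
    """
    residual = frame_after_pulse - background_frame
    excitation = pulsed_frame - background_frame
    # Only evaluate the illuminated spot; avoid dividing by near-zero values.
    mask = excitation > 0.1 * excitation.max()
    return 100.0 * residual[mask].mean() / excitation[mask].mean()

# Synthetic example: a 3000-Ct spot on a 200-Ct background, with 1% lag.
bg = np.full((128, 128), 200.0)
spot = bg.copy()
spot[40:80, 40:80] += 3000.0
after = bg.copy()
after[40:80, 40:80] += 30.0
print(f"image lag: {image_lag_percent(after, bg, spot):.1f}%")
```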

Fig. 5

Schematic showing the arrangement used to evaluate camera image lag. (a) The diffuser screen and LED light sources are shown on the left, (b) while the camera is on the right.


4.

Test Results and Comparisons

The results presented in this section show the outcomes of the tests performed following the procedures detailed in the previous section for two state-of-the-art high-speed cameras. It is important to note that, because digital camera technology is constantly evolving, the tests and results reported in this work correspond to the high-end models of the two main manufacturers of such high-speed cameras as of the submission date of this document. As mentioned in Sec. 1, high-end cameras from other vendors have been tested, but neither their performance nor their usability proved comparable to the two units evaluated herein: the Phantom v2512 and the Photron SA-Z. The results of the tests presented hereafter are camera specific, and other models from the same manufacturer may behave very differently. To avoid confusion between these two cameras, they are referred to as camera A for the Phantom v2512 and camera B for the Photron SA-Z. The characteristics and specified performances of the two cameras under test are provided in Table 1.

One can quickly notice that the two cameras are different, with most parameters listed in Table 1 taking distinct values, although many converge due to the high-speed nature of the devices. The specifications already reveal some interesting design differences, with camera B featuring a square sensor with smaller pixels compared to camera A and its widescreen sensor with larger pixels (28 versus 20  μm). It must be noted that, although high-speed cameras generally use sensors with similar pixel sizes, these are large compared to most CMOS sensors. Both cameras use unfiltered (monochrome) CMOS sensors, but the differences in reported sensitivity (ISO 12232 standard21) are substantial. The sensitivity values reported in Table 1 may come from different testing procedures. Another weakness is that the ISO test does not account for effective pixel size (pitch) differences between cameras. It is evident that larger pixels will collect more light than smaller ones, all other parameters being equivalent. As such, the authors do not believe that the reported ISO quantities should be used to evaluate a camera's light sensitivity. With respect to speed, despite the fact that the published maximum readout of camera A surpasses that of camera B, the latter can reach higher framerates. Camera B features a shorter minimum electronic exposure time, but its minimum interframe time is longer than that of camera A. As expected based on sensor technology, both cameras present similar maximum QEs and spectral ranges, with camera A having a slight edge in that regard. It is important to note that, even though the two cameras are different, they both offer state-of-the-art performance on paper. Some of the numbers in Table 1 are, in fact, a factor of 2 to an order of magnitude higher than those of similar high-end, high-speed cameras available about a decade ago.

Other important considerations not revealed by the specifications concern the way the cameras format and output the data for postanalysis. Each camera is different, and high-speed cameras generally offer their own proprietary format to output the data, in addition to common image formats (e.g., tiff, jpeg, png). For this study, the native high-speed packaged formats from the respective cameras have been used to process the data. The data contained in these formats are unprocessed and uncompressed.

Both cameras are equipped with a flat-field correction of the background intensity, the aim of which is to bring the background level down to zero counts by offsetting the intensity level of all pixels by the dark-image intensity level. On the one hand, this feature provides flatter, nicer-looking images by effectively canceling fixed pattern noise, and it also ensures that the 12-bit digital levels are fully used in the recorded images. The drawback is that, because all pixels are expected to be at zero counts, the noise distribution is half clipped, thus artificially producing a positive average intensity value. Another problem is that it is difficult to know the actual digital level corresponding to light intensity near the bottom end of the scale. Note that both cameras offer different ways for the user to work around this issue. Camera A offsets the digital level by 64 counts (Cts) in the saved raw data, thus providing the full noise distribution and a “real” average value for the zero-light intensity level. Camera B allows the user to turn the dark-field correction off, therefore providing the actual dark image. The main issues with this method are significant fixed pattern noise and a reduction of the digital dynamic range due to the relatively high dark level.
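In practice, this means the raw data from the two cameras call for slightly different preprocessing before intensities can be compared. The sketch below reflects the behavior described above (a fixed 64-count offset for camera A, and a user-recorded dark frame for camera B with the correction turned off); the function names are hypothetical.

```python
import numpy as np

def to_light_counts_camera_a(raw, offset=64):
    """Camera A: raw data carry a fixed positive offset so the full noise
    distribution around zero light is preserved; subtract it (values may
    legitimately become slightly negative because of noise)."""
    return raw.astype(float) - offset

def to_light_counts_camera_b(raw, dark_frame):
    """Camera B with dark-field correction turned off: subtract a measured
    dark frame, which also removes most of the fixed pattern noise."""
    return raw.astype(float) - dark_frame.astype(float)

# Example with a small synthetic camera A frame around the 64-count offset.
raw_a = np.array([[70, 64], [64, 58]])
print(to_light_counts_camera_a(raw_a))
```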

There are many ways to evaluate or test acquisition systems, and the results are dependent upon the methodologies employed during testing. The main objective of the present tests was to compare two commercial cameras in a specific class under highly controlled environment, conditions, and procedures. Because the cameras’ specific designs are proprietary information, it is sometimes complicated to explain the results of the tests, as mentioned in Sec. 1. In such cases, the impact of camera performance deficiency will be interpreted as practical issues encountered during high-speed imaging experiments.

4.1.

Acquisition Rate

Probably the first piece of information that comes to mind regarding high-speed cameras is how fast a specific model can acquire. However, the maximum framerate of a camera does not necessarily provide a complete answer when evaluating its acquisition speed performance. The pixel throughput provides a more universal quantity, combining the acquisition speed and the size of the images. The data plotted in Fig. 6 did not require any specific testing procedure, but simply apply Eq. (1) to the framerate and associated image resolution. The symbols represent the maximum data rates the cameras can acquire at full resolution, at all available resolutions with a 2:1 aspect ratio, as well as at the maximum resolution at the absolute maximum camera framerate. Other aspect ratios were tested; a 1:1 aspect ratio provides poorer performance for both cameras, while wider formats generally improve global throughput. A 2:1 aspect ratio was used for its practicality and easy comparison between cameras.

Fig. 6

Pixel throughput as a function of framerate for both cameras. The symbols represent the actual data, while the lines correspond to linear fits of the respective data for both cameras.


The bottom axis is represented in log scale, showing a monotonic, near-linear relationship for both cameras. The pixel throughputs reported in Fig. 6 show that both cameras are comparable, with total throughput upwards of 20  Gpix/s and maximum framerates of 1 Mfps or more. At the same time, it clearly appears that camera A performs better than camera B at most acquisition frequencies. Applying a linear fit to the reported data points, the crossing point lies between 600 and 700 kHz. Both cameras can acquire a million fps or more, with camera B providing a noticeable advantage over camera A at ultra-high speeds. At this framerate, camera A has a peak readout slightly above 4  Gpix/s, while camera B outputs almost 6  Gpix/s under the same conditions, supporting camera B's superior performance at higher framerates.

4.2.

Intensity Response

The linear response of a detector is paramount for any light quantification attempt. Two-dimensional extinction imaging, for instance, is a type of experiment where a linear intensity response is desirable to avoid corrections in postprocessing. The results of the linearity tests applying the method described above are shown in Fig. 7. The responses of both cameras, in terms of digital intensity level, are reported as a function of the normalized illumination intensity from the calibrated light source. Note that the digital levels are reported up to 4000 Cts, rather than 4095 Cts (12 bit), mainly to avoid saturation. The black dashed line represents a straight line joining both extremes (0 and 4000 Cts); this is the so-called end-point method, commonly used to visually assess camera intensity response. Linear regressions have been computed to provide least-squares estimators of the camera responses with a zero digital-level intercept and are also plotted.

Fig. 7

Digital intensity level as a function of normalized illumination intensity for both cameras. The dashed black line represents the end-point fit, while the colored dashed lines are least-squares linear regressions with a zero intercept for both camera responses.


Both cameras present fairly good responses to illumination intensity, with a noticeable advantage for camera A. Nevertheless, even though the cameras behave well, they are not perfectly linear, as can be seen when compared to the different fits plotted in Fig. 7. The coefficients of determination returned by the least-squares fits were both above R² = 0.99. Looking at the deviation from the end-point line, the error in measured intensity stays below 4% for camera A, while it is more than double that figure for camera B. It is worth adding that other tests performed under different camera configurations (e.g., framerates, exposure times) returned identical profiles. Uncertainty is not represented in this plot because the combined deviation coming from the camera intensity readout accuracy (see the SNR section below, suggesting a readout error below 0.6%) and the uncertainty in the output radiance of the calibrated light source (as measured by the photodiode) is insignificant in the test results of Fig. 7.

A choice needs to be made when applying a correction to make the camera response linear with photonic intensity. The end-point method is widely used to that end. The advantage of the end-point-based correction is that it keeps the dynamic range within the same scale, which can be useful (or necessary) when working with 8-, 12-, or 16-bit integers. On the other hand, the magnitude of the correction is maximal (up to nearly 10% for camera B) around the middle of the dynamic range, which arguably corresponds to the most used intensity range. The least-squares regressions featured in Fig. 7 limit the magnitude of the correction over the entire range, such that measured digital intensities are altered by a minimal amount. The caveat of this method is that corrected intensities may extend beyond the native bit depth of the imaging device, but the intensities can always be scaled down to match the original digital dynamic range of the instrument.
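Both correction strategies amount to mapping measured counts back through the recorded response curve and rescaling by a gain; the sketch below illustrates the two options on a synthetic, mildly nonlinear response (the curve is assumed for illustration, not measured data).

```python
import numpy as np

# Synthetic, mildly nonlinear response: normalized radiance -> digital counts.
radiance = np.linspace(0.0, 1.0, 21)
counts = 4000.0 * (radiance + 0.08 * radiance * (1.0 - radiance))

# End-point correction: force the line through (0, 0) and (max, 4000 Cts).
end_point_gain = counts[-1] / radiance[-1]

# Least-squares correction with a zero intercept, as used in Fig. 7.
ls_gain = np.sum(radiance * counts) / np.sum(radiance ** 2)

def linearize(measured_counts, gain):
    """Map measured counts back to a quantity proportional to radiance,
    then rescale by the chosen gain to obtain corrected counts."""
    return np.interp(measured_counts, counts, radiance) * gain

mid = 2000.0  # a mid-range measured intensity, in counts
print("end-point corrected:", linearize(mid, end_point_gain))
print("least-squares corrected:", linearize(mid, ls_gain))
```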

4.3.

Signal-to-Noise Ratio

The SNR of an imaging system is another important metric. The SNR of a digital imaging system is also affected by pixel dimension and can be reported as a function of light density (i.e., per unit area); a more common way is to plot the SNR as a function of light intensity. In the case of Fig. 8, and to offer a visual comparison of the cameras' SNR, the SNR of each pixel has been plotted as a function of the mean digital level of that same pixel, applying Eq. (2). Plotting all pixels produces a cloud of points and thus provides a reference regarding the deviation among pixels. The dashed lines represent the average SNR of each camera over all pixels, as a function of the read digital level.

Fig. 8

SNR as a function of digital intensity for both cameras. The data-points have been calculated from Eq. (2). The “acceptable” and “excellent” SNR thresholds are based on the guidance given by the ISO 12232 standard.


The first observation to be made about Fig. 8 is the high SNR levels reached by both cameras. The pixel-averaged peak SNR for camera A lies above 45 dB, while it almost reaches 42 dB for camera B. Above a quarter of the dynamic range, the difference in SNR is approximately 4 dB in camera A's favor. According to the ISO 12232 standard previously mentioned, both cameras present “excellent SNR” (SNR = 40 dB), but camera A achieves this level at about half its dynamic range, while camera B reaches the 40-dB threshold at 80% of its dynamic range. The “acceptable SNR” line at SNR = 10 dB has also been located based on the ISO standard. Because of the different pixel sizes, the higher SNR achieved by camera A is expected: larger pixels can accommodate larger photosites, in addition to superior fill factors. The 4-dB difference between the two cameras is consistent with the factor of 2 in pixel area, which, together with the higher fill factor, more than doubles the effective photosensitive area of camera A compared to camera B. The profiles of the SNR curves from both cameras are very similar, and their shape suggests that the SNR is driven by shot noise for both cameras, as is typical for digital imaging systems.
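As a back-of-the-envelope check of this argument, assuming a shot-noise-limited SNR and a collected charge proportional to the effective photosensitive area (pixel area times fill factor from Table 1), the expected gap is about 3.4 dB, of the same order as the measured difference:

```python
import math

# Effective photosensitive area per pixel (pixel area x fill factor, Table 1).
area_a = 28.0 ** 2 * 0.65   # camera A, um^2
area_b = 20.0 ** 2 * 0.58   # camera B, um^2

# For a shot-noise-limited pixel, SNR ~ sqrt(signal), so the expected SNR gap
# in dB is 20*log10(sqrt(area ratio)) = 10*log10(area ratio).
gap_db = 10.0 * math.log10(area_a / area_b)
print(f"area ratio: {area_a / area_b:.2f}, expected SNR gap: {gap_db:.1f} dB")
```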

4.4.

Camera Sensitivity

As detailed above, the sensitivity of a high-speed camera is critical to most experiments. Because of the different pixel sizes, the flux coming from the source has been corrected for the difference in pixel area between the two cameras. The area-corrected source radiance has been plotted in Fig. 9 as a function of the digital level achieved by each camera. These results were obtained by correcting the radiant flux from the calibrated integrating sphere measured by the photodiode for pixel area and comparing it to the average digital intensity of the central region of the chip. As mentioned earlier, even though the source radiance is known, differences in camera spectral response limit the interpretation. Note that the two cameras do not require the same radiance to reach digital saturation, as expected because of the different pixel areas. As detailed earlier, the maximum digital level reported is 4000 Cts, compared to the 4095 Cts expected with these 12-bit cameras, to avoid pixel saturation.

Fig. 9

Illumination source radiance (based on photodiode current) corrected for pixel area as a function of digital level for both cameras, using a calibrated, broadband light source. A lower source intensity to reach a given digital level indicates superior sensitivity.


When the source radiance is corrected for pixel area, both cameras present relatively similar sensitivity performance, with a slight advantage for camera B, which reaches digital saturation before camera A at the same radiant flux. Uncertainty is not represented here for the reasons given above for Fig. 7: it is insignificant and does not affect the conclusions. Without the SNR information reported above, these results would be surprising considering that fill factor, QE, and extended spectral response favor camera A. Going back to the SNR results of Fig. 8, the advantage of camera B vanishes when the cameras are compared at iso-SNR levels. The 4-dB SNR advantage of camera A more than compensates for the slight deficiency in sensitivity reported in Fig. 9. The larger pixel area of camera A, which presumably translates into larger photosites (assuming the electronics occupy the same space), is certainly responsible for the superior SNR, as suggested above. Camera B must therefore rely on a higher conversion gain setting to achieve the specified sensitivity. The higher gain eventually results in a degradation of the maximum SNR, as shown in Fig. 8.

On the other hand, the smaller pixels of camera B should provide higher image (digital) resolution when the cameras use the same optics (magnification). The size of the pixel becomes important when using the camera with a microscope lens,27 where the higher digital resolution resulting from the smaller pixels may provide increased image detail under certain conditions. Pixel size is one parameter of the sensor affecting camera performance, but it will depend on the situation whether smaller or larger pixels should be preferred. We need to stress, once again, that these measurements were done with a broadband light source, and that the spectral responses of the cameras affect these results. To measure whether camera A or B is more sensitive from a more practical perspective (experiments), the same test should be performed with monochromatic light matching the wavelength of the experiments.

4.5.

Global Shutter Performance

The accuracy of the exposure time may or may not be critical, depending on the experiment. However, the repeatability of the exposure gate is another very important parameter of high-speed cameras. As described earlier, the profile of the exposure gate has been measured for both cameras under different operating conditions. The results of these tests are plotted in Fig. 10, showing the normalized pixel intensity envelopes for the respective cameras at a 100-kHz framerate with a 2.5-μs exposure time (software setting). The solid lines correspond to the normalized mean intensity over the illuminated region monitored at each time step (every 100 ns). The lower and upper envelopes report the overall dispersion around the mean for all uniformly illuminated pixels. The intensities have been compiled and then averaged over 100 images, as for most experiments reported in this document.

Fig. 10

Normalized signal amplitude as a function of time from frame trigger for both cameras (exposure gate profile). The areas correspond to the envelopes of the normalized pixel intensities across a uniformly illuminated region. Both cameras were set to 100 kfps and 2.5  μs exposure time.


The exposure gates of the two cameras plotted in Fig. 10 present some differences. Camera B starts opening slightly earlier than camera A, but its slope is slower, allowing camera A to reach full exposure earlier. Similar observations can be made during the closing transient, with a slower closing slope for camera B. From a quantitative standpoint, camera A presents rise and fall times (10% to 90%) of approximately 190 ns, while camera B opens in approximately 630 ns and closes in about 540 ns. The actual shutter duration (based on the full-width at half-maximum) for camera A matched the set value, with a 2.50-μs long gate. Camera B stayed open a little longer than specified, with an exposure duration of 2.79  μs. Exploring other exposure times and camera framerates, camera A provided gate widths in line with the camera-indicated duration (which can differ from the set value) throughout the tests. On the other hand, camera B consistently kept the gate open approximately 0.3  μs longer than the set value. These differences are minimal and should not affect experiments for all but the shortest shutter times. Under extreme conditions, the rise and fall times of the two cameras can be expected to limit the dynamic range during submicrosecond exposures.

The dispersion envelopes of Fig. 10 also differ between the two cameras, with camera B showing more deviation in intensity than camera A. It can be observed that the transients (opening and closing) of camera B present large deviations compared to camera A. A closer look into this aspect revealed that not all pixels open and close at the same time, as would be expected with a global shutter implementation. In other words, some pixels open sooner than others. Interestingly, most pixels present a similar exposure gate width, meaning that pixels opening early will also close early, and vice versa. This behavior is highlighted in Figs. 11(a) and 11(b), in which maps of normalized intensity about halfway through the opening transient are displayed for both cameras. Figure 11(c) shows the corresponding histograms of intensity for the two maps.

Fig. 11

(a) and (b) Normalized spatial intensity distributions for both cameras during the transient opening period of the exposure gate. (c) Histogram of the normalized intensity distributions associated with the two maps displayed above. Both cameras were set to 100 kfps and 2.5-μs exposure time, imaging a 512×256 pix² region.


The normalized intensity maps of Fig. 11 show that the intensity across the image of camera B during the opening period of the electronic shutter varies widely compared to that of camera A. The variation in intensity does not present a specific pattern, but rather makes the image look as if it were contaminated by “speckle.” This is a sign that not all pixels open at the same time, as suggested by the large deviation shown in Fig. 10. This behavior is believed to originate from variations in characteristics between the electronics (global shutter transistors) contained in each pixel, most likely unmatched transistor switching times or actuation voltage thresholds. The “speckle” pattern is consistent throughout an image sequence for both cameras, with the same pixels opening and closing early. Likewise, pixels opening slightly later than average close later, and do so consistently. Similar observations were made under different camera configurations, with the exposure behavior being pixel-dependent: a pixel that opens and closes early in one configuration (i.e., camera framerate, resolution, and exposure time) is expected to open and close early in another, and vice versa.

To quantify the impact of this scatter in pixel opening time, the histograms of intensities for the two maps are reported at the bottom of the figure. The intensity distributions, taken in the middle of the shutter opening period, show that the spread is wider for camera B than for camera A: the distribution of camera B is almost twice as wide, at 20% of the intensity range (full-width at half-maximum) versus 12% for camera A. The distributions [Fig. 11(c)] during the transients nonetheless present relatively large spreads for both cameras, which may limit intensity quantification of short, intermittent light sources. Even though the differences reported in Fig. 11 between the two cameras are substantial, they are not nearly as pronounced as the results of Fig. 10 suggest: the spread in intensity during the opening and closing transients is comparable, whereas the timing of shutter opening and closing for camera B is significantly less accurate than that of camera A.
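
The pixel-level maps and histogram widths discussed above can be computed along the following lines. This is a sketch under stated assumptions: the sequence has already been averaged over repetitions, each pixel is normalized by its own fully open response to isolate timing differences from gain differences, and the sample indices for the half-open and fully open instants are chosen by the user.

```python
import numpy as np

def transient_spread(mean_seq, i_half, i_open, bins=200):
    """Spread in pixel opening time, evaluated mid-way through the transient.

    mean_seq : ndarray, shape (n_samples, ny, nx)
        Gate-sampled image sequence, already averaged over repetitions.
    i_half, i_open : sample indices near the middle of the opening transient
        and at full exposure, respectively.
    """
    # Normalize each pixel by its own fully open response so that only
    # timing differences (not fixed-pattern gain differences) remain
    half_map = mean_seq[i_half] / mean_seq[i_open]

    hist, edges = np.histogram(half_map.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    above = centers[hist >= hist.max() / 2.0]
    fwhm = float(above.max() - above.min())   # width of the distribution at half maximum
    return half_map, fwhm
```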

4.6.

Image Lag

When gradients in illumination intensity occur between successive images, the intensity distribution (global or spatial) of one image may affect the subsequent frame(s). High-speed imaging aims at visualizing fast and transient phenomena, in which case the monitored light intensity may vary widely between images. Another typical experimental arrangement presenting large luminosity differences between frames is when the system is operated with an image-straddling schedule, similar to the one shown in Fig. 4. As explained earlier, image lag is a complex process and covering it fully would extend beyond the scope of this manuscript. The results presented hereafter provide the reader with basic information regarding image lag and its effects on the two cameras tested. Figure 12 shows two sets of four images, each set corresponding to camera A or B. As detailed in the methodology applied to assess image lag, the cameras imaged a lightly illuminated surface to purposely offset image counts above zero. This background illumination is uniform across the image, with an intensity of approximately 200 Cts (roughly 5% of the dynamic range), as represented in the top left image of each set. A pulsed light spot illuminated the imaged surface with a five-on, five-off schedule, such that five consecutive illuminated frames were preceded and followed by five consecutive nonilluminated frames. The top right image of each set in Fig. 12 shows the pulsed light spot (with the lightly illuminated background removed). The intensity scale is provided on the right side in digital levels (Cts), and the pulsed illumination was adjusted to reach 80% of the cameras’ digital dynamic range. Both cameras were set to 1000 fps and 50-μs exposure time; even though the full field of view of each camera was recorded, the results presented here correspond to an 800×800 pix² region for comparison purposes. Each map in Fig. 12 is the mean computed over 100 repetitions.
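
As a sketch of how the maps of Fig. 12 can be assembled, the snippet below phase-averages a recorded sequence over the repeating 10-frame illumination cycle (five pulsed, five dark frames) so that each position in the cycle is averaged over the 100 repetitions. The array layout and the alignment of the cycle with the first recorded frame are assumptions to be adapted to the actual recording.

```python
import numpy as np

def phase_average(frames, period=10, n_cycles=100, start=0):
    """Mean image at each position within the repeating illumination cycle.

    frames : ndarray, shape (n_frames, ny, nx), recorded at a constant rate
             while the pulsed spot follows a 5-frames-on / 5-frames-off schedule.
    Returns an array of shape (period, ny, nx) with the mean image at each
    phase of the cycle.
    """
    usable = frames[start:start + n_cycles * period]
    return usable.reshape(n_cycles, period, *frames.shape[1:]).mean(axis=0)
```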

Fig. 12

Effect of lag on the image spatial intensity distribution for both cameras. The top row shows the background intensity and the characteristics of the light pulse. The bottom row shows the image lag for frame n+1 following either a lighted or a dark frame. Note the different intensity scales.


The bottom left map of each set shows the effect of lag when the “lag” image (n+1) is not illuminated but immediately follows a set of illuminated frames (with a light pulse), denoted pulse and corresponding to frame n. The intensity maps for both cameras are obtained via the following expression:

Eq. (3)

$I_{\mathrm{Light}} = I_{n+1} - I_{\mathrm{BG}}$.

In Eq. (3), $I_{\mathrm{Light}}$ is the background-corrected intensity for frame n+1, assessing the image lag for a nonilluminated frame following a lighted image. The variables $I_{n+1}$ and $I_{\mathrm{BG}}$ correspond to the intensities of the “lag” frame and of the background image, respectively. The background image intensity $I_{\mathrm{BG}}$ corresponds to the lightly illuminated background, as detailed above. We validated this approach by verifying that the background had recovered from any lag after a light-pulse sequence.

The last image of each set (bottom right) in Fig. 12 shows the opposite case, with a pulsed image immediately following a dark frame. The magnitude of the image lag in this case is obtained via the following expression:

Eq. (4)

$I_{\mathrm{Dark}} = I_{n+1} - I_{\mathrm{Pulse}}$.

Equation (4) provides the image lag intensity $I_{\mathrm{Dark}}$ for an illuminated image n+1 following a set of lightly illuminated (dark) ones. $I_{\mathrm{Pulse}}$ is the image featuring the light pulse; it is taken as the image preceding the last pulse-lighted frame of the sequence. This frame was shown to present an intensity distribution and magnitude in line with a continuous (no skipped pulses) light-pulse sequence.
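
With the phase-averaged cycle from the earlier sketch, the two lag maps follow directly from Eqs. (3) and (4). The cycle indices below assume that frames 0 to 4 of the cycle are pulsed and frames 5 to 9 contain only the background; this layout is an assumption and must be matched to the actual trigger alignment.

```python
# Assumed cycle layout: indices 0-4 pulsed, 5-9 background only (adjust as needed)
cycle = phase_average(frames)      # (10, ny, nx) mean images, from the sketch above

I_bg    = cycle[8]                 # steady background-only frame, well after the pulses
I_pulse = cycle[3]                 # steady pulsed frame (preceding the last pulse)
I_light = cycle[5] - I_bg          # Eq. (3): first dark frame after the pulse train
I_dark  = cycle[0] - I_pulse       # Eq. (4): first pulsed frame after the dark frames
```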

The bottom row of Fig. 12 shows that both cameras suffer from image lag, affecting the image intensity on the order of 70 Cts. A closer look at the spatial distribution of the lag-affected images reveals that the cameras behave differently. The illumination pattern from the pulsed LED at the top of Fig. 12 (pulse) shows the region over which the pulsed light induces an intensity gradient (with either the preceding or following image being dark). The size of the spot also highlights the difference in digital spatial resolution between the two cameras: with its larger pixels, camera A presents a smaller light spot than camera B. In addition, the two cameras have slightly different flange distances (distance from sensor to F-mount flange), which affects the effective magnification of the system and hence the imaged spot size.

The two pairs of images at the bottom of Fig. 12 present the effects of image lag on the spatial intensity distribution. The bottom left images show that the intensity drops below the original background level for a nonilluminated frame following an image with a light pulse. The magnitude of the intensity change is similar for cameras A and B, although camera A exhibits a slightly more severe lag. For camera A, the “ghost” image is in spatial agreement with the illumination pattern, with the lag appearing at the location of the pulse and with a similar shape. This is different for camera B, where the lag appears to be concentrated around the center of the sensor. Further analysis revealed a slight spatial dependence for camera B as well, but most of the lag is observed in the center of the chip. The drop in intensity after an illuminated frame causes problems with the dark-field correction, as the pixel values would fall below the zero reset value (bottom of the digital scale). This is why a light, uniform background illumination is necessary to perform these tests and quantify the magnitude of the image lag.

The behavior is reversed for the dark–light illumination schedule: the intensity of the first illuminated frame increases above the steady illumination value. In contrast to the previous case, where the intensity decreased, this behavior is referred to as positive image lag. The magnitude of the image lag for camera B is, in this case, similar to that measured on the dark frame. The effect is visible on camera A as well, but camera A appears less affected than camera B in this configuration. Before examining the magnitude of the lag in the cases studied above, it is worth mentioning that, for both cameras, it takes more than one image for the sensor to fully recover and stabilize back to the baseline level.

Figure 13 presents the histograms of intensity quantifying the image lag under the two configurations tested. These histograms represent the intensity of the lag in the maps reported under $I_{\mathrm{Light}}$ and $I_{\mathrm{Dark}}$ for both cameras in Fig. 12. All pixels of the affected area, as shown in Fig. 12, are included, thereby capturing the overall intensity dispersion. Again, these results were averaged over 100 repetitions to limit the influence of shot-to-shot variability and uncertainty.

Fig. 13

Histograms of intensity quantifying the effects of lag on image intensity for both cameras under reciprocal lighting configurations. (a) Camera A and (b) camera B.


It can be seen that under the light–dark illumination schedule ($I_{\mathrm{Light}}$), the lag reduces the image level by as much as 70 Cts for both cameras (when the illumination is set to 75% of the dynamic range). On the other hand, when an illuminated frame follows a dark one ($I_{\mathrm{Dark}}$), the behavior is reversed for camera B, while camera A does not show noticeable effects, with most of the distribution centered around 0 Cts. A complete description of the image lag for camera B is difficult because of its dependence on the global intensity (area coverage, resolution, etc.), as mentioned earlier. Conversely, the effects on camera A appear easier to correct, and a relationship can be built based on the frame-to-frame intensity difference. Further testing measured the maximum intensity drop due to image lag for camera A at just below 100 Cts (for a 4000 Cts intensity difference), about 2.5% of the dynamic range. These results were consistent throughout the different test configurations.
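
To illustrate how such an intensity-difference relationship might be applied, the sketch below implements a hypothetical first-order correction in which the undershoot on a frame is modeled as a fixed fraction of the preceding frame-to-frame intensity drop. The 2.5% coefficient is taken from the worst case measured here for camera A (about 100 Cts for a 4000 Cts step); the linear, per-pixel form of the model is our assumption rather than a validated correction, and it would not capture the global-intensity dependence observed for camera B.

```python
import numpy as np

K_LAG = 0.025  # fraction of the intensity drop re-added to the following frame
               # (illustrative value: ~100 Cts lag for a 4000 Cts step on camera A)

def correct_negative_lag(frames, k=K_LAG):
    """Hypothetical per-pixel correction of the undershoot following an intensity drop.

    frames : ndarray, shape (n_frames, ny, nx), dark-field-corrected counts.
    Only negative steps are corrected, since camera A showed little overshoot
    when a bright frame followed a dark one.
    """
    f = frames.astype(np.float64)
    corrected = f.copy()
    drop = np.clip(f[:-1] - f[1:], 0.0, None)   # positive where the intensity decreased
    corrected[1:] += k * drop                   # re-add the estimated undershoot
    return corrected
```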

During the various tests on image lag, we noticed that the lag for camera B appears to be related to the total intensity on the chip, based on its magnitude and area coverage. When the illumination covers the entire chip, for instance, the effects are significantly larger than when only a small portion is illuminated (as in the present case). Similar behavior was observed when the camera operates at lower resolution, which generally results in milder lag effects. Nonetheless, the digital intensities measured under a skipped-illumination schedule consistently differ from those of continuously illuminated images.

The results presented above show that the two cameras behave differently when it comes to image lag, but that, in both cases, the lag appears to be related to the sensor electronics. Overshoot or undershoot depending on the illumination schedule is typically observed in amplifiers, especially in high-bandwidth systems, as expected for high-speed cameras reading out more than 20 billion pixels per second.

We have investigated image lag on many high-speed cameras, understand its complexity, and have identified effective procedures to correct these systems and make them more reliable. A follow-up article will detail the results of this analysis and propose correction methods to obtain quantitative photonic-intensity information from these state-of-the-art high-speed imaging devices.

5.

Summary and Conclusions

Two high-performance high-speed cameras produced by the two largest companies in the business have been evaluated and compared. The two cameras currently represent the state of the art in imaging technology for high-speed continuous recording. The evaluation consisted of a series of tests measuring the characteristics commonly used to evaluate machine vision systems, as well as other parameters specific to high-speed digital imaging. The procedures of the different tests have been comprehensively described to clarify the methods, provide guidance, and potentially set guidelines for future camera characterization.

The results of the tests demonstrated that both cameras perform very well in most respects, although they achieve comparable levels of performance through different design decisions and hardware implementations. Both cameras present similar pixel throughput, meaning that they perform similarly in terms of pixel readout rate. Camera A (Phantom v2512) achieves a slightly higher pixel throughput at lower framerates (and higher resolution), but the maximum framerate of camera B (Photron SA-Z) is more than double that of camera A. From an intensity-response perspective, both cameras feature very good linearity, with an R² above 0.99. Camera A proved noticeably more linear over the full range, with a maximum error below 4%, less than half the dispersion exhibited by camera B. Both cameras demonstrated excellent SNR, with peak SNRs around 45 and 42 dB for cameras A and B, respectively, a difference expected from the larger pixels of camera A. With their relatively large pixel areas, both cameras are also very sensitive, a key requirement for high-speed imaging, with camera A having the edge in raw sensitivity owing to its larger pixels. Nevertheless, normalizing by pixel area gives camera B a slight advantage in sensitivity, although this appears to be offset by its lower SNR. The cameras behaved differently when tested for shutter accuracy and precision, with camera A being both more accurate and more precise than camera B. Both cameras showed variation in pixel opening and closing timings, with a larger variance observed for camera B. The manifestations of image lag also differ between the two cameras, even though the magnitudes of the effects are similar under certain test conditions. Camera A presents a spatial dependence of the lag, while the effects for camera B are concentrated toward the center of the sensor. The effects of image lag are difficult to quantify exhaustively for either camera, but the present tests revealed that the image intensity can be affected by a few percent. Future work will aim at describing image lag in detail, with the objective of providing correction procedures to produce photonically accurate images.

Acknowledgments

The authors would like to acknowledge Vision Research and Photron for their collaboration and assistance during testing and data analysis. This study was performed at the Combustion Research Facility, Sandia National Laboratories. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.

References

1. P. W. W. Fuller, “An introduction to high speed photography and photonics,” Imaging Sci. J. 57(6), 293–302 (2009).

2. S. T. Thoroddsen, T. G. Etoh, and K. Takehara, “High-speed imaging of drops and bubbles,” Annu. Rev. Fluid Mech. 40, 257–285 (2008). https://doi.org/10.1146/annurev.fluid.40.111406.102215

3. I. Eriksson et al., “New high-speed photography technique for observation of fluid flow in laser welding,” Opt. Eng. 49(10), 100503 (2010). https://doi.org/10.1117/1.3502567

4. P. R. Slangen et al., “High-speed imaging optical techniques for shockwave and droplets atomization analysis,” Opt. Eng. 55(12), 121706 (2016). https://doi.org/10.1117/1.OE.55.12.121706

5. J. Muybridge, “The horse in motion,” Nature 25, 605 (1882). https://doi.org/10.1038/025605b0

6. H. E. Edgerton and J. R. Killian, Flash!: Seeing the Unseen by Ultra High-Speed Photography, 2nd ed., Charles T. Branford, Newton, Massachusetts (1954).

7. E. J. Marey, “Determination experimentale du mouvement des ailes des insectes pendant le vol,” C. R. Acad. Sci. Paris 67, 1341–1345 (1868).

8. G. J. Pendley, “High speed imaging technology: yesterday, today, and tomorrow,” Proc. SPIE 4948, 110–113 (2003). https://doi.org/10.1117/12.516992

9. J. H. Waddell, “The rotating-prism camera: an historical survey,” J. SMPTE 75(7), 666–674 (1966). https://doi.org/10.5594/J07135

10. M. Sultanoff, “A 100,000,000-frame-per-second camera,” Rev. Sci. Instrum. 21(7), 653–656 (1950). https://doi.org/10.1063/1.1745678

11. M. M. Frocht, P. D. Flynn, and D. Landsberg, “Dynamic photoelasticity by means of streak photography,” Proc. SESA 14(2), 81–90 (1957).

12. C. D. Miller, “Half-million stationary images per second with refocused revolving beams,” J. Soc. Motion Pict. Eng. 53(5), 479–488 (1949). https://doi.org/10.5594/J11690

13. S. F. Ray, High Speed Photography and Photonics, SPIE Press, Bellingham, Washington (1997).

14. W. S. Boyle and G. E. Smith, “Charge coupled semiconductor devices,” Bell Syst. Tech. J. 49(4), 587–593 (1970). https://doi.org/10.1002/bltj.1970.49.issue-4

15. T. Etoh, “High-speed video camera of 4,500 pps,” Jpn. Telev. Assoc. 46(5), 543–545 (1992).

16. R. Hain, C. J. Kahler, and C. Tropea, “Comparison of CCD, CMOS and intensified cameras,” Exp. Fluids 42(3), 403–411 (2007). https://doi.org/10.1007/s00348-006-0247-1

17. C. T. Chin et al., “Brandaris 128: a digital 25 million frames per second camera with 128 highly sensitive frames,” Rev. Sci. Instrum. 74(12), 5026–5034 (2003). https://doi.org/10.1063/1.1626013

18. T. G. Etoh et al., “An image sensor which captures 100 consecutive frames at 1000000 frames/s,” IEEE Trans. Electron Devices 50(1), 144–151 (2003). https://doi.org/10.1109/TED.2002.806474

19. L. Gao et al., “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516(7529), 74–77 (2014). https://doi.org/10.1038/nature14005

20. A. Ehn et al., “Frame: femtosecond videography for atomic and molecular dynamics,” Light Sci. Appl. 6(9), e17045 (2017). https://doi.org/10.1038/lsa.2017.45

21. “Photography—digital still cameras—determination of exposure index, ISO speed ratings, standard output sensitivity, and recommended exposure index” (2006).

22. J. Lukas, J. Fridrich, and M. Goljan, “Digital camera identification from sensor pattern noise,” IEEE Trans. Inf. Forensics Secur. 1(2), 205–214 (2006). https://doi.org/10.1109/TIFS.2006.873602

23. “Standard for characterization and presentation of specification data for image sensors and cameras” (2005).

24. E. R. Fossum, “Charge transfer noise and lag in CMOS active pixel sensors,” in Proc. IEEE Workshop on CCDs and Advanced Image Sensors (2003).

25. Y. Junting et al., “Two-dimensional pixel image lag simulation and optimization in a 4-T CMOS image sensor,” J. Semicond. 31(9), 094011 (2010). https://doi.org/10.1088/1674-4926/31/9/094011

26. Y. Xu and A. J. P. Theuwissen, “Image lag analysis and photodiode shape optimization of 4T CMOS pixels,” in Proc. IISW (2013).

27. J. Manin et al., “Microscopic investigation of the atomization and mixing processes of diesel sprays injected into high pressure and temperature environments,” Fuel 134, 531–543 (2014). https://doi.org/10.1016/j.fuel.2014.05.060


© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE)
Julien Manin, Scott A. Skeen, and Lyle M. Pickett "Performance comparison of state-of-the-art high-speed video cameras for scientific applications," Optical Engineering 57(12), 124105 (28 December 2018). https://doi.org/10.1117/1.OE.57.12.124105
Received: 10 July 2018; Accepted: 5 December 2018; Published: 28 December 2018
Keywords: Cameras; Signal-to-noise ratio; Sensors; High-speed cameras; Digital imaging; Light-emitting diodes; Camera shutters