Fluorescence lifetime imaging microscopy (FLIM) is an intrinsically quantitative tool to image the lifetime of molecular fluorescence. Changes in fluorescence lifetime are an important biomedical indicator, as the lifetime can change with, for instance, the presence of oxygen or ions,1,2 changes in local pH,3 and interactions between proteins in living cells.4,5
There are two main approaches to estimate fluorescence lifetime, one in the time domain (TD), and the other in the frequency domain (FD).6 In TD-FLIM, a train of pulsed light, the width of which should be significantly smaller than the decay time of the fluorescent sample, is used for excitation. The decay curve of the emission photons is detected using a time-resolved detection system.7–9 It is an inherently direct measurement of the fluorescence decay. The data analysis of TD-FLIM is typically achieved by fitting the experimental data to a linear combination of decaying exponentials, as shown in Eq. (1):

I(t) = Σ_i a_i exp(−t/τ_i). (1)

The values of τ_i represent the different lifetime components in the sample under study, and the values of a_i are their proportional contributions. The fitting process not only costs computation time but generally requires a high level of expertise to obtain reliable results.10 The TD-FLIM system is also relatively expensive, since it requires short pulsed lasers and fast, sensitive detection systems.
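As a minimal illustration of the fitting in Eq. (1), the sketch below fits a single-component decay by log-linear least squares. The lifetime, amplitude, and time window are hypothetical values chosen for the example, not measurements from this work; a real multi-component fit would use nonlinear optimization.

```python
import numpy as np

# Simulate a mono-exponential decay (one term of Eq. (1)): I(t) = a * exp(-t / tau)
tau_true = 2.5e-9          # 2.5 ns lifetime (hypothetical value)
a_true = 1000.0            # amplitude in photon counts (hypothetical)
t = np.linspace(0, 20e-9, 200)          # 20 ns time window
decay = a_true * np.exp(-t / tau_true)

# Log-linear least-squares fit: ln I(t) = ln a - t / tau
coeffs = np.polyfit(t, np.log(decay), 1)
tau_fit = -1.0 / coeffs[0]              # recovered lifetime
a_fit = np.exp(coeffs[1])               # recovered amplitude
```

On noisy photon-counting data, the log transform distorts the error weighting, which is one reason the fitting expertise mentioned above is needed.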
An alternative way is through the frequency-domain approach. FD-FLIM uses periodically modulated light for the excitation and deduces the lifetime values from the phase change and/or the modulation depth change between the excitation and emission signals, as shown in Eqs. (2) and (3):6,12–17

τ_φ = (1/ω) tan(Δφ), (2)

τ_m = (1/ω) √(1/m² − 1), (3)

where ω is the angular modulation frequency, Δφ is the measured phase shift, and m is the relative modulation depth of the emission. FD-FLIM also places fewer demands on the instrumentation. Most TD-FLIM measurements are performed using confocal microscopes, whereas FD-FLIM can also be done on wide-field microscopes. For future applications in medical diagnostics, industrial inspection, and agriculture, this has obvious advantages. The use of the confocal microscope not only increases the cost of TD-FLIM systems, but also significantly increases the acquisition time for images. In standard FD-FLIM systems such as the one we use as a reference system, image acquisition can be faster than in a TD-FLIM system with an equivalent image size: typically 10 min per lifetime image for a TD-FLIM system versus 5 s for an FD-FLIM system. The fast acquisition time makes it easier for FD-FLIM to monitor fast lifetime changes in cellular images.
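Equations (2) and (3) can be checked numerically. The sketch below assumes a mono-exponential emitter, for which the phase shift is Δφ = arctan(ωτ) and the modulation depth is m = 1/√(1 + (ωτ)²); the frequency and lifetime values are illustrative.

```python
import math

f_mod = 40e6                    # 40 MHz modulation frequency (illustrative)
omega = 2 * math.pi * f_mod
tau = 4e-9                      # 4 ns lifetime (as for fluorescein, used later for calibration)

# Forward model for a mono-exponential emitter
delta_phi = math.atan(omega * tau)                 # phase shift of the emission
m = 1.0 / math.sqrt(1.0 + (omega * tau) ** 2)      # relative modulation depth

# Eq. (2): lifetime from the phase change
tau_phi = math.tan(delta_phi) / omega
# Eq. (3): lifetime from the modulation depth change
tau_m = math.sqrt(1.0 / m ** 2 - 1.0) / omega
```

For a mono-exponential decay, τ_φ and τ_m coincide; for a mixture of lifetimes they differ, a fact used later to diagnose lifetime heterogeneity in GFP.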
To retrieve the phase change and the modulation depth change, the sensitivity of the detector is modulated at the same frequency, and a series of images are taken at different phase offsets.18–20 The current state-of-the-art FD-FLIM system requires an image intensifier, the use of which is necessary for low light levels and MHz demodulation frequencies.11,17,21 The demodulation is done by controlling the voltage of the photo cathode in the image intensifier. Although this technique is well developed and has been commercialized, there are still several fundamental drawbacks introduced by this technique. These will be described in the following section.
We propose improving FD-FLIM instrumentation by replacing the image intensifier–based charge-coupled device (CCD) camera with an application-specific CCD design. We have designed, built, and tested such a CCD camera, which can be modulated at the pixel level and which we have named the MEM-FLIM camera (modulated electron-multiplied all-solid-state camera for fluorescence lifetime imaging microscopy). In the current version of our CCD design, the electron multiplication principle has not yet been implemented; this will occur in the next version.
Theory and Principle
In conventional FD-FLIM systems, the fluorescent molecule is illuminated by an amplitude-modulated light source, and the generated fluorescent light is demodulated by using a microchannel plate (MCP) image intensifier. The main disadvantage of FD-FLIM is the requirement of the MCP image intensifier. The image intensifier consists of a photo cathode that converts the incident photons to electrons, an MCP that accelerates and multiplies the electrons, and a phosphor screen that converts the electrons back to photons. An illustration of the image intensifier structure is shown in Fig. 1. The image intensifier is then coupled to a CCD image sensor by using a fiber optic taper. The demodulation is done by changing the photo cathode voltage.
As we see in Fig. 1, high voltages of up to several kilovolts are needed to operate the image intensifier. The spatial resolution is compromised by the photo cathode and the MCP.22 To modulate the sensitivity of the camera, a periodic demodulation signal is applied to the cathode. A higher voltage on the photo cathode compared to the one on the entrance of the MCP will let none of the electrons through, whereas a lower voltage will open the intensifier. This means that during the demodulation, half of the signal is lost. A high voltage, up to several kilovolts, must also be applied to the phosphor screen. Furthermore, the system is relatively costly, bulky, and vulnerable to overexposure.
Our noncooled MEM-FLIM sensor has been designed for pixel-level modulation, which means that the demodulation is done on the camera pixel itself, instead of on an image intensifier sitting in front of the CCD camera as in the conventional method. The principle of the MEM-FLIM camera at the pixel level is illustrated in Fig. 2. Demodulation signals with a 180-deg phase difference are applied to two adjacent toggling gates of one pixel. In the first half of the demodulation cycle, the photo-generated charge is transferred to one storage gate (STG), and in the second half of the cycle to the other STG. In this way, two phase images are obtained in one integration and read-out cycle. The readout image thus contains these two phase images interleaved with each other, which we call the "phase one" image and the "phase two" image, as shown in Fig. 2(c). There is no dedicated register for transferring the charge to the horizontal register during the readout. The photo gates (PGs), toggle gates (TGs), STGs, and barrier gates (BGs) are all used for vertical transport during the readout. The chip, therefore, resembles a frame transfer sensor.
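The two-gate toggling can be sketched numerically: charge from a modulated photon flux is routed to STG1 during the first half of each demodulation cycle and to STG2 during the second half, so the two accumulated charges encode the phase of the incoming light. The flux model and phase value below are illustrative assumptions, not the actual pixel electronics.

```python
import numpy as np

f_mod = 25e6                       # modulation frequency (25 MHz, as in the MEM-FLIM camera)
T = 1.0 / f_mod
t = np.linspace(0, T, 1000, endpoint=False)
phase = np.pi / 3                  # arbitrary phase of the incoming fluorescence (assumed)

# Modulated photon flux arriving at one pixel (offset keeps it non-negative)
flux = 1.0 + 0.5 * np.cos(2 * np.pi * f_mod * t - phase)

# Toggle gates route charge to STG1 during the first half cycle, STG2 during the second
stg1 = flux[t < T / 2].sum()
stg2 = flux[t >= T / 2].sum()
```

The difference stg1 − stg2 depends on the phase of the fluorescence relative to the demodulation signal, which is what makes the two interleaved phase images useful for lifetime estimation.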
The incoming light is thereby captured by modulated pixels, recording two phase images at once. This is in contrast to an image intensifier with a duty cycle of about 50% when recording a single phase image. By removing the intensifier and fiber/lens coupling from the camera, a noise source is eliminated as well as a source of image distortion.
Our system has been designed with a variable integration time T_int of at least 1 ms. The choice of T_int is related to the strength of the fluorescent signal. The image is then read out before the next integration cycle begins. The time for integration plus the readout time plus a user-chosen delay is referred to as the frame time T_frame, that is, T_frame = T_int + T_readout + T_delay.
We are not the first group to use the approach of demodulation at the pixel level. In 2002, Mitchell et al.23,24 demonstrated the feasibility of measuring fluorescence lifetime with a modified CCD camera. By modulating the gain of a CCD at a frequency of 100 to 500 kHz, images were recorded with an increasing delay. This camera, however, was not really suitable for FLIM, since the maximum modulation frequency was only 500 kHz. The "sweet spot" for the modulation frequency in an FD-FLIM system lies at approximately ωτ ≈ 1, which for typical nanosecond lifetimes translates to about 30 MHz. The value of 500 kHz is clearly too low.
In 2003, Nishikata et al.25 succeeded in taking two phase images simultaneously at a modulation frequency of 16 kHz. Again, the modulation frequency is much too low but the two-phase approach can be found in our work as well.
Later, Esposito et al.26,27 developed this technique further and performed FLIM measurements at 20 MHz using a CCD/complementary metal-oxide-semiconductor (CMOS) hybrid sensor (SwissRanger SR-2). The SR-2 was originally developed for full-field 3-D vision in real time.28 Later in this manuscript, we will compare the performance of this camera to our implementation for FD-FLIM.
A solid-state camera can also be used in TD-FLIM. The MEGA frame project, started in 2006, is time-domain based; within it, a prototype CMOS single-photon avalanche diode (SPAD)-based camera has been developed for TD-FLIM.29,30
Materials and Methods
System Configuration and Materials
Our reference FLIM system includes an Olympus IX-71 inverted microscope, a LIFA system (Lambert Instruments, Roden, The Netherlands), which includes an intensified CCD camera (GenII with S25 photocathode) as the reference camera, and a Dell computer running the Windows XP operating system. The MEM-FLIM system replaces the reference camera with our MEM-FLIM camera, while the rest of the system remains the same.
The reference FLIM system is controlled via the LI-FLIM software version 1.2.6 developed by Lambert Instruments. The MEM-FLIM camera is controlled via LabVIEW 8.5. MATLAB 7.9.1 (R2009b) is used to convert image data to the .fli file format, which is used by the LI-FLIM software. The converted image data are then processed by the LI-FLIM software to extract lifetime measurements. The lifetime measurement results from the MEM-FLIM system are compared to those from the reference FLIM system.
A single-band excitation filter (Semrock, Rochester, New York), a 495-nm LP dichroic mirror (Semrock), and a single-band emission filter (Semrock) are used in the fluorescence filter cube. An Olympus oil objective with a numerical aperture (NA) of 0.6 is used in the resolution measurement. A Zeiss air objective and a Zeiss oil objective are used in the lifetime measurements. A light-emitting diode (LED) (Luxeon Rebel, LXML-PR01-0225) with a peak wavelength of 460 nm can be controlled (modulated) by both the reference FLIM system and the MEM-FLIM system. The MEM-FLIM camera has a pixel size of 17 × 17 μm; the reference system has an effective pixel size of 20.6 × 20.6 μm. A stage micrometer (Coherent 11-7796) is used for measuring the sampling density of the cameras.
To determine the phase change and the modulation change introduced by the system itself, the system has to be calibrated with a fluorescent material with a known lifetime before carrying out subsequent lifetime experiments. We have used a 10 μM fluorescein solution (Sigma Aldrich 46955) with a known lifetime of 4 ns31,32 for the system calibration. The fluorescein is dissolved in 0.1 M Tris buffer, and the pH is adjusted to 10 using NaOH. Fixed U2OS (osteosarcoma) cells that express green fluorescent protein (GFP) (supplied by Leiden University Medical Center) and GFP-actin–stained live cells (provided by the Netherlands Cancer Institute) were used for the fluorescence lifetime measurements.
Camera Characteristics: Background
Linearity of photometric response
It is highly desirable for a scientific camera to have a linear response to the incident light. The linearity of the photometric response of a camera is gauged by the coefficient of regression, calculated from a straight-line fit of the intensity readout data under various exposure times. Below saturation, a CCD is usually photometrically linear. The closer the coefficient of regression is to 1, the better the linearity of the camera.
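The coefficient of regression can be computed as sketched below. The exposure times, slope, and noise values are invented for illustration; in practice the intensities would be mean pixel values measured at each exposure.

```python
import numpy as np

# Hypothetical mean intensity readout (ADU) vs. exposure time (ms)
exposure = np.array([10, 20, 40, 80, 160, 320], dtype=float)
intensity = 12.0 * exposure + 50.0          # ideal linear response, 50 ADU offset (assumed)
intensity += np.array([0.5, -0.3, 0.2, -0.4, 0.1, -0.1])   # small measurement noise

# Straight-line fit and coefficient of regression (R^2)
slope, offset = np.polyfit(exposure, intensity, 1)
r = np.corrcoef(exposure, intensity)[0, 1]
r_squared = r ** 2
```

A value of r_squared close to 1 indicates a photometrically linear camera over the tested range.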
Sampling density

Sampling density refers to the physical scale between pixels in the digitized microscope image. An image of N × N pixels that covers a physical area of D × D μm has a sampling density of N/D samples per micron in both directions; equivalently, the sample distance along either direction is D/N μm. The sampling densities along the horizontal and the vertical directions are preferably the same.33 The sampling densities of the MEM-FLIM camera and the reference camera are measured by using a stage micrometer. A 0.5-NA objective lens is used in the experiment.
Optical transfer function

Owing to inevitable aberrations and diffraction phenomena, the image of an object observed with an optical system will be somewhat degraded. As a rule, the bright areas in the image will not be as bright as in the original pattern, and the dark areas will not be as dark as in the original pattern; there will be a smooth transition along originally high-contrast edges. The optical transfer function (OTF) is a commonly used quantity for describing the resolution and performance of an optical system.34 One way to measure the OTF is to use a test pattern such as that shown in Fig. 3; a higher OTF indicates a better-performing optical system. Using the method described in Ref. 33, the OTF can be calculated from the edge response. Our measurements are made in both the horizontal and the vertical direction. The MEM-FLIM and reference FLIM systems share the same system settings (microscope, filter cube, illumination), except that the fluorescence emission is switched and directed into the two different camera ports; thus the OTF directly reflects the performance of the camera. All OTF measurements have been made with a 0.6-NA objective lens and a 180-ms integration time. The test pattern was illuminated with transmitted white light.
The OTF can be influenced by effects such as the misdiffusion of the electrons generated outside the depletion layer, nonideal charge transfer effects, the photosensitivity of the device, and so on.35
The main possible noise sources for digitized fluorescence images can be characterized as: photon noise due to the random arrival of photons, dark current noise due to random generation of electrons by thermal vibrations, readout noise due to the on-chip amplifier that converts the electrons into a change in analog voltage, and quantization noise due to quantizing the pixels of a sensed image into a number of discrete levels.
The Poisson distribution assumption for the photon noise is validated as follows. Two images are taken under the same illumination condition, and their difference is computed so that deterministic pixel variability in the image (e.g., shading) is eliminated. Half of the variance in the difference image then represents the variance of a single pixel. This procedure is repeated for a series of illumination intensities. If the Poisson assumption is valid, this variance should be linear in the mean intensity.6
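The difference-image procedure can be sketched with simulated data. The illumination levels and image size are arbitrary choices for the example; for ideal Poisson noise, half the variance of the difference image should track the mean intensity.

```python
import numpy as np

rng = np.random.default_rng(0)
means = [50, 200, 800, 3200]               # four illumination levels in photo-electrons (assumed)
measured = []
for mu in means:
    # Two images under identical illumination
    img_a = rng.poisson(mu, size=(256, 256))
    img_b = rng.poisson(mu, size=(256, 256))
    diff = img_a - img_b                    # subtraction removes fixed-pattern variation
    measured.append(diff.var() / 2.0)       # half the variance of the difference image

# For Poisson noise, each entry of `measured` should be close to the corresponding mean
```

Plotting `measured` against `means` and checking for a straight line through the origin reproduces the validation described above.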
Dark current noise
Dark current noise refers to the creation of electron-hole pairs due to thermal vibrations.37 It is intrinsic to semiconductors and is a stochastic process with a Poisson distribution; its variance thus equals its mean. It reduces the dynamic range of the camera, since it produces an offset to the readout value, and can be a substantial source of noise. Cooling the camera reduces the dark current significantly.
The dark current can be influenced by the previously defined integration time and frame time in the MEM-FLIM camera, and it is, therefore, necessary to evaluate their individual contributions. This can be accomplished by varying each of these times in turn. The linearity of the dark current noise in the integration time is also validated using the same method as in Sec. 3.2.1. Because the name "dark current" refers to the electron-hole pairs that are created when the camera is not exposed to light, measuring dark current is relatively simple and requires no optical setup.
Readout noise

Readout noise is a fundamental trait of CCD cameras caused by the CCD on-chip electronics before digitizing. It is independent of the integration time but dependent on the readout bandwidth. By validating the linearity of the dark current noise in the integration time, the mean and variance of the readout noise can be deduced from the fit by extrapolating the noise level to the limit of zero integration time.
Quantization noise

Quantization noise is the roundoff error introduced when the analog-to-digital converter (ADC) converts a sensed image to a finite number of discrete levels; for a quantization step q, it has zero mean and a variance of q²/12. Quantization noise is inherent in the quantization process. For a well-designed ADC with more than 8 bits (the MEM-FLIM camera has 14 bits, and the reference camera has 12 bits), the quantization noise can be ignored, as the signal-to-noise ratio (SNR) due to quantization alone is approximately 6.02N + 1.76 dB for an N-bit converter.6,37,38
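The standard full-scale quantization-SNR formula makes it easy to see why the ADC noise is negligible for both cameras:

```python
import math

def quantization_snr_db(bits):
    # SNR of an ideal N-bit quantizer for a full-scale sinusoidal signal: 6.02*N + 1.76 dB
    return 6.02 * bits + 1.76

snr_mem_flim = quantization_snr_db(14)   # MEM-FLIM camera ADC -> 86.04 dB
snr_reference = quantization_snr_db(12)  # reference camera ADC -> 74.00 dB
```

Both values are far above the photon-noise-limited SNR of typical fluorescence images, so quantization noise can indeed be ignored.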
Sensitivity

Sensitivity relates the A/D converter units (ADU) of a digital camera system to the number of photo-electrons produced by incident photons reaching the pixels.
Comparing camera sensitivities
To compare the sensitivities of the two cameras, a LED from which the intensity can be finely controlled by the LED current setting is used for illumination.39 The camera readout is compared with the intensity from the LED, which is measured by a photodiode placed next to the LED. In this way, the sensitivity of the MEM-FLIM camera and the reference camera can be compared in the same optical setup.
The "sensitivity" of a camera can also be described by the minimum light level that can be detected. When the detected signal is smaller than the noise floor of the camera, the signal will be buried in the noise. Thus the noise floor, such as readout noise and dark current noise, determines the limit of the camera sensitivity. Assuming the photon noise is Poisson distributed, a signal with mean μ has a standard deviation of √μ. We note that the total noise floor is composed of several independent terms (readout noise, dark current noise, and photon noise).
When the integration time is small, the noise floor is determined by the readout noise of the camera. We assume that the requirement for a signal not being buried in the noise floor is that the difference between the signal level μ_s and the noise level μ_n is at least k times bigger than the standard deviation of the signal, Eq. (6):

μ_s − μ_n ≥ k√μ_s. (6)
At a longer integration time, the influence of the dark current noise cannot be ignored, since the dark current noise increases with the integration time. Concurrently, the signal level also increases linearly with the integration time. We note that, given an integration time T, the Poisson character of the photon signal and the dark current means that their variances equal their means. We assume that the signal can be distinguished from the noise floor if the range of the signal does not overlap with the range of the noise, which gives us Eq. (7):

(e_s + e_d)T − k√((e_s + e_d)T) ≥ e_d T + k√(e_d T), (7)

where e_s and e_d are the rates of signal and dark current electron generation. Thus when e_s and e_d meet the condition in Eq. (7), the signal will be above the noise floor and can be detected by the camera.

It is clear from this result that for a long integration time T, the signal can be detected if e_s T ≥ k[√((e_s + e_d)T) + √(e_d T)].
Results and Discussion
Camera Characteristics: Performance
Linearity of photometric response

A linear regression line is fitted to the intensity data for various exposure times, as shown in Fig. 4. The MEM-FLIM camera exhibits a linear photometric response over almost the entire dynamic range, resulting in a coefficient of regression close to 1. Since one image consists of two phase images (named the phase one image and the phase two image), we split these two phase images and analyze them separately.
Sampling density

As shown in Fig. 5(a) and 5(b), the sampling densities of the MEM-FLIM camera are the same in both the horizontal and the vertical directions; the camera thus has square sampling. The 170 μm corresponds to the actual dimension of the section of the stage micrometer that is scanned (Fig. 5). Dividing the pixel size (17 μm) by the magnification of the objective lens does not reproduce the measured sample distance, owing to internal demagnification in the microscope. The internal demagnifications in the light paths of the MEM-FLIM system and the reference system also differ, since the light paths of the two systems are not exactly the same.
Both the pixel size and the pixel number in the MEM-FLIM camera are the same in the horizontal and vertical directions; however, the readout image has a rectangular shape. This is because every image contains two phase images, as described in Sec. 2. These two phase images can be separated. If we assign green to one thresholded phase image and red to the other, overlapping the two shows that they match very well, resulting in the yellow color shown in Fig. 5(c) and 5(d). Less than 2% of the pixels, as shown in Fig. 5, differ between the two thresholded phase images. The images of Fig. 5(a) and 5(b) appear stretched because two square image pixels in the vertical direction correspond to a single square pixel on the sensor with two storage areas.
Optical transfer function

The comparison of the OTFs of the MEM-FLIM camera and the reference camera is shown in Fig. 6. The use of the stage micrometer (as in Fig. 5), together with knowledge of the actual CCD pixel size, makes it possible to determine the absolute physical frequency axis in cycles/mm shown in Fig. 6. The effect of the differing optical magnification between the two systems is thereby compensated. The OTF of the MEM-FLIM camera is higher than that of the reference camera; as a consequence, the image quality of the MEM-FLIM camera is better. Actual images will be shown later in this manuscript. The (incoherent) diffraction cutoff frequency of the lens40 is 2NA/λ, which for green light (λ ≈ 500 nm) and NA = 0.6 gives about 2400 cycles/mm. The limiting factor in the OTF above is, therefore, not the objective lens but the camera system. The slight increase of the MEM-FLIM OTF above the objective lens OTF has two sources. First, all three curves have been normalized to unity, although the exact transmission at zero frequency for the two cameras is probably less than 1; second, there is a slight amount of partial coherence associated with the condenser optics.
Poisson noise distribution
The validation of the Poisson distribution of the noise source is shown in Fig. 7. The linear fit indicates that the variance of the difference images increases linearly with the mean intensity, which shows that the noise source in the image is Poisson distributed. The integration time is 180 ms.
Dark current noise
Figure 8(a) shows the relationship between dark current and integration time when the frame time is fixed. The mean value of each column in a dark image is calculated and plotted for different integration times. By subtracting two images obtained at the same setting, the offset and the fixed pattern of each image can be eliminated. Because dark current noise follows Poisson statistics, the variance in this difference image equals twice the average intensity in one image.6 The generated dark current is linear in the integration time, as plotted in Fig. 8(b). When the integration time is 600 ms, the dark current is only a small fraction of the full dynamic range. Since the electron-to-ADU conversion factor is known from the absolute sensitivity experiment, the dark current can also be expressed in electrons per second. By fixing the integration time and varying the frame time, we see in Fig. 9 that the dark current is not influenced by the frame time and can be neglected.
Readout noise

Readout noise can be obtained from the fits in Fig. 8(a). When the integration time goes to zero, the noise contribution of the dark current is eliminated; thus the constant terms in the fits represent the readout noise. The readout noise is independent of the integration time. Determined in the same way (figure not shown), the average readout noise of the reference system is lower: the MEM-FLIM value is about a factor of 1.7 higher. This factor is most likely because we are working with the first version of the MEM-FLIM chip/camera, while the reference system, as an existing commercial product, is already well optimized.
The sensitivity measurement of the MEM-FLIM camera is shown in Fig. 7. The linear fit indicates that the noise source in the image is Poisson distributed, as explained in Sec. 4.1.4, and the slope of the fit represents the sensitivity of the camera. The same procedure was applied to the reference camera. To compare the two, one needs to multiply the sensitivity of the MEM-FLIM camera by 2^(12−14) = 1/4, so that the bit difference of the two cameras (the MEM-FLIM camera 14 bits, the reference camera 12 bits) is taken into account. For these experiments, the analog gain of the MEM-FLIM camera was set to 6 dB, and the MCP voltage of the reference camera was set to 400 V.
Comparing camera sensitivities
The camera readout has a linear photometric response to the LED intensity. By fitting a straight line to the camera readout for various LED intensities, the slope of the fit indicates the ability of the camera to convert photo-electrons to ADU. Sensitivity can be increased by increasing the electronic gain of the camera, as shown in Fig. 10(a). A comparison of the sensitivities of the MEM-FLIM camera and the reference camera is shown in Table 1. To increase the sensitivity of the reference camera, we can use different MCP voltages, while with the MEM-FLIM camera we can adjust the analog electronic gain. If we define the sensitivity of the reference camera at an MCP voltage of 400 V as 1 and take the bit difference of the two cameras into account, we can compare the sensitivities of the two cameras. When comparing the sensitivities of the reference camera at different MCP voltages, one only needs to divide the slope of the fit at the higher MCP voltage by the slope at 400 V (2.07). For example, the slope of the reference camera at an MCP voltage of 500 V is 8.88, which gives its relative sensitivity as 4.29. For the sensitivity of the MEM-FLIM camera, one needs to correct for the bit difference first by multiplying the slope of the MEM-FLIM camera by 2^(12−14) = 1/4 before dividing it by 2.07. For example, the MEM-FLIM camera at 6-dB analog gain is 2.99 times as sensitive as the reference camera at an MCP voltage of 400 V. From Table 1 we can see that the MEM-FLIM camera at 42-dB analog gain (59.88) is about as sensitive as the reference camera at 800 V (56.71).
Table 1. Sensitivity comparison of the reference camera and the MEM-FLIM camera.

Camera setting             Slope of fit    Relative sensitivity
Reference camera, 400 V    2.07            1
Reference camera, 500 V    8.88            4.29
Reference camera, 800 V    117.28          56.71
MEM-FLIM camera, 6 dB      24.77           2.99
MEM-FLIM camera, 42 dB     495.3           59.88
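Assuming the conversion just described (slope × 2^(12−14) for the MEM-FLIM camera, then division by the 400-V reference slope), the relative sensitivities in Table 1 can be reproduced:

```python
# Reproduce the relative sensitivities in Table 1 (slope values taken from the table)
ref_slope_400v = 2.07          # reference camera at 400 V defines relative sensitivity 1
bit_factor = 2 ** (12 - 14)    # 1/4: MEM-FLIM has a 14-bit ADC, the reference 12-bit

def ref_sensitivity(slope):
    return slope / ref_slope_400v

def mem_flim_sensitivity(slope):
    return slope * bit_factor / ref_slope_400v

s_ref_500v = ref_sensitivity(8.88)         # about 4.29
s_mem_6db = mem_flim_sensitivity(24.77)    # about 2.99
s_mem_42db = mem_flim_sensitivity(495.3)   # about 59.8 (59.88 in Table 1, within rounding)
```

Small discrepancies against the table (e.g., 59.82 versus 59.88) presumably come from rounding of the published slopes.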
The cost of using a higher analog gain in the MEM-FLIM system is a reduced SNR, as shown in Fig. 10(b). The SNR in the image obtained from the MEM-FLIM camera at a small analog gain (6 dB) is higher than the SNR (20.8) in the image obtained from the reference camera at its lowest MCP voltage (400 V). For the reference camera, the SNR goes down when the MCP voltage increases. When the MEM-FLIM camera is set at a high analog gain (42 dB) and the reference camera at a high MCP voltage (800 V), the SNRs are comparable.
We can get the minimum signal that can be detected by the MEM-FLIM camera from Eq. (6). When the integration time is short, the noise floor is dominated by the readout noise, whose level is known from Figs. 7 and 8(b). We assume that the signal can be distinguished from the noise floor if the difference between the signal level and the noise floor is k times bigger than the standard deviation of the signal [Eq. (6)]. When k = 5, based upon the Chebyshev inequality,41 the probability that the signal level can be mistakenly identified as noise is at most 1/k² = 4%. The Chebyshev inequality is distribution free, so it is not necessary to know the probability distribution of the signal. If we make use of the assumption that the signal has a Poisson distribution and that the average value of the signal is sufficiently high, then the probability given above drops by several orders of magnitude, which means signal detection at this level is essentially guaranteed. In this case, the minimum signal that can be detected by the MEM-FLIM camera, and likewise by the reference camera, follows from Eq. (6).
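Assuming Eq. (6) takes the form μ_s − μ_n ≥ k√μ_s, the minimum detectable mean signal solves a quadratic in √μ_s. The noise-floor value below is a hypothetical number for illustration, not the measured readout noise of either camera.

```python
import math

def min_detectable_signal(noise_floor, k=5.0):
    # Solve mu - k*sqrt(mu) = noise_floor for mu (Poisson signal, k-sigma criterion).
    # Substituting x = sqrt(mu) gives x^2 - k*x - noise_floor = 0.
    x = (k + math.sqrt(k * k + 4.0 * noise_floor)) / 2.0
    return x * x

# Hypothetical readout-noise floor of 40 electrons (illustrative value)
mu_min = min_detectable_signal(40.0)
```

The returned μ_min satisfies the detection criterion with equality, so any stronger signal is detectable under this model.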
We have measured the fluorescence lifetime of various objects, e.g., fluorescent solutions and biological samples. Below are examples of lifetime measurements on biological samples: fixed U2OS (osteosarcoma) cells expressing GFP, supplied by Leiden University Medical Center, and GFP-actin–stained live HeLa cells and GFP-H2A–stained live U2OS cells, provided by the Netherlands Cancer Institute. In all experiments, a calibration is done to determine the phase and modulation change introduced by the system itself, using a fluorescein solution at 10 μM, the lifetime of which is known to be 4 ns.31,32 The modulation frequency of the MEM-FLIM system is at this time hardwired in the MEM-FLIM camera to 25 MHz. Results from the reference system served as the basis for comparison. The typical fluorescence lifetime of GFP is 2 to 3 ns.42,43
The fluorescence lifetime measurements are carried out in the following steps: (1) change the phase delay between the camera demodulation signal and the LED-modulated input signal and take a number of phase images—in our case, six original phase images; (2) separate two phase images from one original phase image taken in the first step (six original phase images thus produce 12 phase images), and put them in the right order; (3) correct for the background image; (4) convert the image data to the .fli format and read the file into LI-FLIM software; and (5) choose the region of interest and analyze the data.
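The phase-stepping analysis behind these steps can be sketched numerically. The code below simulates a single pixel's intensity over 12 phase images for a mono-exponential emitter and recovers the phase, modulation, and lifetime from the first Fourier harmonic; the lifetime and source modulation depth are illustrative assumptions, and an ideal (already calibrated) system is assumed.

```python
import numpy as np

f_mod = 25e6                            # MEM-FLIM modulation frequency
omega = 2 * np.pi * f_mod
tau_true = 2.5e-9                       # hypothetical GFP-like lifetime

n_steps = 12                            # 12 phase images after de-interleaving
phases = 2 * np.pi * np.arange(n_steps) / n_steps

# Simulated per-pixel intensities for each phase image (mono-exponential emitter)
dphi = np.arctan(omega * tau_true)
m = 1.0 / np.sqrt(1.0 + (omega * tau_true) ** 2)
samples = 100.0 * (1.0 + 0.9 * m * np.cos(phases - dphi))  # 0.9: assumed source modulation

# First-harmonic (DFT) estimate of phase and modulation
c = np.sum(samples * np.cos(phases))
s = np.sum(samples * np.sin(phases))
dc = np.mean(samples)
phase_est = np.arctan2(s, c)
mod_est = 2.0 * np.hypot(c, s) / (n_steps * dc) / 0.9      # divide out source modulation

tau_phi = np.tan(phase_est) / omega     # lifetime from phase, Eq. (2)
tau_m = np.sqrt(1.0 / mod_est ** 2 - 1.0) / omega          # lifetime from modulation, Eq. (3)
```

In the real pipeline, the same first-harmonic fit is applied per pixel after background correction, and the calibration measurement on fluorescein removes the instrumental phase and modulation offsets.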
GFP-stained fixed U2OS cells
The comparative lifetime measurement was performed on the fixed GFP-expressing U2OS cells shown in Fig. 11. U2OS is a human osteosarcoma cell line. A Zeiss objective with a numerical aperture of 0.5 was used for this experiment. The integration time in both systems was set to 100 ms.
To compare images from the two cameras, the histograms of the two images are stretched over the range 0 to 2^BN − 1. One maps the intensity value I_low to 0 and I_high to 2^BN − 1 by the linear transformation44

I'(x, y) = [I(x, y) − I_low]/(I_high − I_low) × (2^BN − 1).

In our case, we choose I_low and I_high as the 5% and 99.9% percentiles of the histogram to exclude outliers. BN is chosen to be 8, so the mapped intensity range is from 0 to 255. Note that the mapped intensity values are floating point numbers.
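The percentile stretch can be sketched as follows; the random test image is a stand-in for the camera data, and out-of-range values are clipped to the output range.

```python
import numpy as np

def stretch(image, low_pct=5.0, high_pct=99.9, bits=8):
    # Map the low_pct percentile to 0 and the high_pct percentile to 2**bits - 1
    i_low, i_high = np.percentile(image, [low_pct, high_pct])
    out = (image - i_low) / (i_high - i_low) * (2 ** bits - 1)
    return np.clip(out, 0, 2 ** bits - 1)   # floating point result, clipped to the range

# Stand-in for a camera image (arbitrary mean and spread)
rng = np.random.default_rng(1)
img = rng.normal(1000.0, 100.0, size=(128, 128))
stretched = stretch(img)
```

Using percentiles rather than the raw minimum and maximum keeps a few hot or dark pixels from compressing the displayed contrast of the whole image.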
We can see in Fig. 11(a) and 11(c) that the field of view of the reference camera is bigger than that of the MEM-FLIM camera, but in Fig. 11(b) and 11(d) the resolution of the MEM-FLIM camera is significantly better than that of the reference camera. Detailed structure inside the cell can be seen in the image taken with the MEM-FLIM camera; this structure is not readily visible in the image from the reference camera.
The lifetime images from both cameras are compared in Fig. 11(e)–11(h). The MEM-FLIM camera clearly yields a better spatial resolution in the lifetime images. A 10 × 10 pixel area was used for the lifetime statistics of both cameras. The lifetimes derived from the phase change for the reference and MEM-FLIM systems agree well, as do the lifetimes derived from the modulation depth change. The lifetime uncertainty is the standard deviation of the 100 lifetimes in the area. The modulation on the sample for the reference camera reached 0.64, while the modulation for the MEM-FLIM system is 0.55. The difference between the lifetimes derived from the phase change and those derived from the modulation change can be explained by the heterogeneity of the GFP lifetime components. By doing multifrequency measurements on the reference system, the lifetime components in the sample are determined to be 1.24 ns (41%) and 5.00 ns (59%). These data are consistent with the values in the literature (1.34 ns [46%] and 4.35 ns [54%]).45
The fluorescence lifetimes recorded with the MEM-FLIM camera are in good agreement with the values from the reference camera. The lifetime uncertainties measured with the MEM-FLIM camera are higher than those from the reference camera, since the modulation depth for the MEM-FLIM camera is not (yet) as good as in the reference camera. However, the image quality (detail) of the MEM-FLIM camera is significantly better than that of the reference system.
GFP-actin–stained HeLa cells
For these experiments, we imaged HeLa cells stably expressing GFP-tagged actin with the MEM-FLIM and reference cameras. The actin expression in these cells is quite low, so they represent a typical low-intensity preparation. A Zeiss oil-immersion objective with a numerical aperture of 1.3 was used for this experiment. The integration time for both the reference camera and the MEM-FLIM camera was 1000 ms. The intensity images undergo the same gray-value stretching process as described in Sec. 4.2.1.
The lifetimes derived from the phase change for the reference camera and the MEM-FLIM camera are and , and the lifetimes derived from the modulation depth change are and , respectively. The modulation on the sample for the reference system reached 1.05, while the value for the MEM-FLIM camera was 0.38. From Fig. 12, we can see that the MEM-FLIM camera has a higher resolution and a better image quality than the reference camera. The fibers in the cell can be seen in the MEM-FLIM image but not in the reference image.
The lifetime images derived from the phase change of both cameras are also compared in Figs. 12(d)–12(f). In the lifetime image of the MEM-FLIM camera, the spatial variation within the cell can be seen: just above the middle of the image, the lifetime (color) differs from the surrounding cellular material (as shown within the white rectangle). This structure can also be seen in the intensity image. This detail is blurred in the lifetime image from the reference camera.
GFP-H2A–stained live U2OS cells
For these experiments, we imaged U2OS cells stably expressing GFP-H2A with the MEM-FLIM and reference cameras. A Zeiss oil-immersion objective with a numerical aperture of 1.3 was used for this experiment. The image comparison in Fig. 13 again shows that the MEM-FLIM camera has a higher resolution than the reference camera, while the reference camera has a larger field of view. The integration time for both cameras was 200 ms, and the phase-based lifetime results from the two cameras are comparable. The intensity images undergo the same gray-value stretching process as described in Sec. 4.2.1.
Discussion and Conclusion
We have designed, built, and tested an all-solid-state CCD-based image sensor and camera for fluorescence lifetime imaging. A detailed comparison between the MEM-FLIM and reference cameras is shown in Table 2. Using the MEM-FLIM camera, we successfully measured the lifetimes of various fluorescent objects, including biological samples.
Comparison of the MEM-FLIM and the reference cameras.
| | MEM-FLIM camera | Reference camera |
| --- | --- | --- |
| Fill factor (%) | 44 | >50 |
| CCD pixel size (µm) | 17 | 20.6^a |
| Active pixel number | 212×212 | 696×520 |
| Modulation frequency (MHz) | 25 | 0.001–120 |
| ADC readout frequency (MHz) | 25 | 11 |
| Sampling density (samples/µm @ 20×) | 1.24×1.24 | 1.07×1.07 |
| OTF @ 500 cycles/mm | 0.75 | 0.39 |
| Compared sensitivity (ΔADU/ΔI) | ≥2.99 | ≥1 |
| Detection limit at short integration time (e−) | 51.4 | 35.4 |
| σ_readout in ADU (e−) | 5.9 (13.72) | 3.4 (5.67) |
| Dark current (e−/ms) | 0.29 | 0.08 |
^a The pixel size of the CCD sensor itself is 6.45 µm; we use 2×2 binned mode, which gives 12.9 µm, and the pixels, as "projected" onto the photocathode by the fiber-optic taper, are magnified 1.6×, arriving at an effective pixel size of 20.6 µm for the intensified camera system.
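The footnote's arithmetic can be checked directly (variable names are ours):

```python
# Effective pixel size of the intensified reference camera system.
ccd_pixel_um = 6.45                    # native CCD pixel (µm)
binned_um = 2 * ccd_pixel_um           # 2x2 binning -> 12.9 µm
taper_magnification = 1.6              # fiber-optic taper onto photocathode
effective_um = binned_um * taper_magnification  # ~20.6 µm, as in Table 2
```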
The MEM-FLIM results are comparable to those of the reference system, and the MEM-FLIM system has several advantages over the reference system. (1) The camera can be modulated at the pixel level, permitting the recording of two phase images at once. The acquisition time can thus be shortened, which reduces photobleaching of the biological sample. (2) The MEM-FLIM camera does not need high-voltage sources or RF amplifiers, and the system is more compact than the reference system. (3) In the MEM-FLIM system, one can change the integration time and the analog gain without affecting the optical system itself. In the conventional frequency-domain FLIM system, one must control both the integration time and the MCP voltage to make use of the full dynamic range of the camera. However, changing the MCP voltage by more than approximately 50 V (depending on the intensifier and the MCP voltages used) changes the system itself, which means that a calibration done at another MCP voltage is no longer reliable. One therefore needs to pay extra attention when adjusting the settings on a conventional frequency-domain FLIM system. (4) Possible sources of noise and geometric distortion are significantly reduced. (5) The image quality of the MEM-FLIM camera is much better than that of the conventional intensifier-based CCD camera, so the MEM-FLIM camera reveals more detailed structures in biological samples. (6) The quantum efficiency of the MEM-FLIM camera is much higher than that of the reference camera. For the MEM-FLIM camera, the quantum efficiency is determined by the characteristics of the front-illuminated CCD: about 30%, 50%, and 70% at 500, 600, and 700 nm, respectively. For the reference camera, the quantum efficiency of the photocathode at 500 nm is around 11%, and there are further losses in other parts of the system, including the fiber optics and the CCD camera, not all of which can be attributed to true quantum effects.
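The phase and modulation images on which both systems rely are conventionally extracted from a stack of homodyne images taken at equally spaced phase steps. The sketch below is a generic textbook-style illustration of that analysis, not the MEM-FLIM firmware or the reference system's actual software:

```python
import numpy as np

def phase_and_modulation(frames):
    """Estimate per-pixel phase and modulation depth from K homodyne
    images acquired at equally spaced phase steps over one period.

    frames: array of shape (K, H, W). Returns (phase, modulation), where
    modulation is the AC amplitude divided by the DC level.
    """
    frames = np.asarray(frames, dtype=float)
    k = frames.shape[0]
    steps = 2 * np.pi * np.arange(k) / k
    dc = frames.mean(axis=0)
    ac_cos = (frames * np.cos(steps)[:, None, None]).mean(axis=0)
    ac_sin = (frames * np.sin(steps)[:, None, None]).mean(axis=0)
    phase = np.arctan2(ac_sin, ac_cos)
    modulation = 2 * np.sqrt(ac_cos**2 + ac_sin**2) / dc
    return phase, modulation

# Synthetic check: uniform image with known phase and modulation depth
k, phi0, m0 = 12, 0.6, 0.5
steps = 2 * np.pi * np.arange(k) / k
frames = 100 * (1 + m0 * np.cos(steps[:, None, None] - phi0)) * np.ones((k, 4, 4))
ph, m = phase_and_modulation(frames)
```

The recovered phase and modulation then feed Eqs. (2) and (3) to give the two lifetime estimates; the pixel-level modulation of the MEM-FLIM sensor simply lets two of these phase steps be recorded simultaneously.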
It is also interesting to compare our results to the previously developed CCD camera described in Refs. 26 and 27, as shown in Table 3. Both the SR-2 and the MEM-FLIM cameras are able to measure fluorescence lifetimes, and the modulation depth and lifetime results are comparable. The quantum efficiencies of the two cameras are comparable, since both are determined by the characteristics of a front-illuminated CCD. The MEM-FLIM camera nevertheless offers substantial improvements over the SR-2 camera. Although both cameras are uncooled, the influence of dark current on the SR-2 camera is clearly visible: the edge artifact in the phase images in Figs. 2(e) and 2(f) of Ref. 26 and Fig. 3 of Ref. 27 can be attributed to dark current. The MEM-FLIM camera, in contrast, has a uniform phase response across the sensor, and the influence of dark current can be ignored. The MEM-FLIM camera also has more than twice as many pixels, smaller pixels for a better spatial sampling density, and a fill factor 2.75 times that of the SR-2. The modulation frequency of the MEM-FLIM camera described in this manuscript is 25 MHz, while that of the SR-2 camera is 20 MHz. As mentioned in Refs. 26 and 27, the modulation frequency can, in principle, be significantly increased for both cameras, but all measurements of camera performance would have to be re-evaluated at any higher frequency. At this time we can only compare performance at the frequencies that have been used.
Comparison of the MEM-FLIM camera and the SR-2 camera.
| | MEM-FLIM camera | SR-2 camera |
| --- | --- | --- |
| Sensor type | CCD | CCD/CMOS hybrid |
| Pixel size (µm) | 17×17 | 40×55 |
| Fill factor (%) | 44 | 16 |
| Modulation frequency (MHz) | 25 | 20 |
| Measured GFP lifetime from phase (ns) | 2.6±0.4 | 2.6±0.4 |
| Measured modulation depth | 55±2% | 50±3% |
| Dark current influence | Can be ignored | Cannot be ignored |
Besides transferring the photo-generated charge alternately to the two adjacent CCD storage registers in the vertical direction at the modulation frequency (as shown in Fig. 2), we also tried another architecture: transferring the charge to registers located in the horizontal direction. The former architecture is called vertical toggling, the latter horizontal toggling. The architecture of the horizontal toggling sensor is similar to that of an interline CCD. The advantage of the horizontal toggling design over the vertical one is that, in the vertical design, the light source must be switched off during the image-transfer period, since the photogate of the sensor is also used for charge transfer. In the horizontal design, dedicated registers are used to transfer the charge, so there is no smear effect if the light is left on during image transfer. Since this disadvantage of the vertical design can be overcome with a properly designed light source, and the vertical toggling design outperformed the horizontal design (data not shown), we chose vertical toggling as the architecture for the system.
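The effect of the toggling readout can be illustrated with a small numerical model. This is our idealization, not the sensor's actual charge-transfer logic: modulated emission is integrated into two bins gated by a square wave, and the normalized bin difference carries the phase information.

```python
import numpy as np

def toggle_bins(phi, m, psi, n=100_000):
    """Integrate one modulation period of the emission
    I(t) = 1 + m*cos(wt - phi) into two storage bins toggled by a square
    wave with phase offset psi: bin A collects the half-period where the
    gate is high, bin B the other half."""
    wt = (np.arange(n) + 0.5) / n * 2 * np.pi        # wt over one period
    signal = 1 + m * np.cos(wt - phi)
    gate_a = ((wt - psi) % (2 * np.pi)) < np.pi      # square-wave gate
    a = signal[gate_a].sum() / n
    b = signal[~gate_a].sum() / n
    return a, b

# The normalized bin difference follows (2m/pi)*sin(phi - psi), so
# stepping psi through the period recovers the emission phase and
# modulation depth from the toggled bin pairs.
a, b = toggle_bins(phi=0.8, m=0.5, psi=0.3)
contrast = (a - b) / (a + b)
```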
The MEM-FLIM camera is able to measure fluorescence lifetimes, but the modulation frequency is currently limited to 25 MHz. We intend to achieve higher modulation frequencies in the next-generation camera, which will also have larger pixels (better light gathering) and more pixels (a larger field of view) than the current design. An improved chip-level mask design should also improve the modulation depth.
The camera is not perfect, and there is still room for improvement. For example, the lifetime derived from the phase change is quite stable, but when the integration time of the experiment is increased, the lifetime derived from the modulation depth change tends to increase as well. This effect can be explained by a known defect in this version of the MEM-FLIM sensor chip. The chip has a mask protecting parts of the surface from exposure to photons; in the current version, this mask is slightly displaced from its intended position. This means that the photoelectrons we measure are, to a certain extent, caused by contributions from the wrong source. This defect will be corrected in the next version of the sensor chip.
Funding from Innovation-Oriented Research Program (IOP) of The Netherlands (IPD083412A) is gratefully acknowledged. We thank Dr. Vered Raz of the Leiden University Medical Center for providing us with the U2OS cells.
J. E. M. Vermeer, E. B. van Munster, and N. O. Vischer, "Probing plasma membrane microdomains in cowpea protoplasts using lipidated GFP-fusion proteins and multimode FRET microscopy," J. Microsc. 214(2), 190–220 (2004). http://dx.doi.org/10.1111/j.0022-2720.2004.01318.x
J. W. Borst et al., "ATP changes the fluorescence lifetime of cyan fluorescent protein via an interaction with His148," PLoS ONE 5(11), e13862 (2010). http://dx.doi.org/10.1371/journal.pone.0013862
Q. Zhao, I. T. Young, and J. G. S. de Jong, "Photon budget analysis for fluorescence lifetime imaging microscopy," J. Biomed. Opt. 16(8), 086007 (2011). http://dx.doi.org/10.1117/1.3608997
S. Brustlein, F. Devaux, and E. Lantz, "Picosecond fluorescence lifetime imaging by parametric image amplification," Eur. Phys. J. Appl. Phys. 29(2), 161–165 (2005). http://dx.doi.org/10.1051/epjap:2004204
A. Leray et al., "Quantitative comparison of polar approach versus fitting method in time domain FLIM image analysis," Cytometry Part A 79(2), 149–158 (2011). http://dx.doi.org/10.1002/cyto.a.v79a.2
T. W. J. Gadella, A. J. van Hoek, and A. J. W. G. Visser, "Construction and characterization of a frequency-domain fluorescence lifetime imaging microscopy system," J. Fluoresc. 7(1), 35–43 (1997). http://dx.doi.org/10.1007/BF02764575
P. J. Verveer, A. Squire, and P. I. H. Bastiaens, "Global analysis of fluorescence lifetime imaging microscopy data," Biophys. J. 78(4), 2127–2137 (2000). http://dx.doi.org/10.1016/S0006-3495(00)76759-2
P. J. Verveer and Q. S. Hanley, Frequency Domain FLIM Theory, Instrumentation, and Data Analysis, Vol. 33, pp. 59–61, Elsevier B.V., Oxford, United Kingdom (2009).
O. Holub et al., "Fluorescence lifetime imaging (FLI) in real-time: a new technique in photosynthesis research," Photosynthetica 38(4), 581–599 (2000). http://dx.doi.org/10.1023/A:1012465508465
M. J. Booth and T. Wilson, "Low-cost, frequency-domain, fluorescence lifetime confocal microscopy," J. Microsc. 214(1), 36–42 (2004). http://dx.doi.org/10.1111/j.0022-2720.2004.01316.x
A. Esposito, H. C. Gerritsen, and F. S. Wouters, "Optimizing frequency-domain fluorescence lifetime sensing for high-throughput applications: photon economy and acquisition speed," J. Opt. Soc. Am. A 24(10), 3261–3273 (2007). http://dx.doi.org/10.1364/JOSAA.24.003261
A. D. Elder et al., "Calibration of a wide-field frequency-domain fluorescence lifetime microscopy system using light emitting diodes as light sources," J. Microsc. 224(2), 166–180 (2006). http://dx.doi.org/10.1111/jmi.2006.224.issue-2
B. Q. Spring and R. M. Clegg, "Image analysis for denoising full-field frequency-domain fluorescence lifetime images," J. Microsc. 235(2), 221–237 (2009). http://dx.doi.org/10.1111/jmi.2009.235.issue-2
A. Elder, S. Schlachter, and C. F. Kaminski, "Theoretical investigation of the photon efficiency in frequency-domain fluorescence lifetime imaging microscopy," J. Opt. Soc. Am. A 25(2), 452–462 (2008). http://dx.doi.org/10.1364/JOSAA.25.000452
Q. S. Hanley et al., "Fluorescence lifetime imaging: multi-point calibration, minimum resolvable differences, and artifact suppression," Cytometry 43(4), 248–260 (2001). http://dx.doi.org/10.1002/(ISSN)1097-0320
Andor Technology, "Digital Camera Technology," http://ebookbrowse.com/gdoc.php?id=60264070&url=bc86b192f6a974f978c100a05ab12c25
A. Mitchell et al., "Direct modulation of the effective sensitivity of a CCD detector: a new approach to time-resolved fluorescence imaging," J. Microsc. 206(3), 225–232 (2002). http://dx.doi.org/10.1046/j.1365-2818.2002.01029.x
A. Mitchell et al., "Measurement of nanosecond time-resolved fluorescence with a directly gated interline CCD camera," J. Microsc. 206(3), 233–238 (2002). http://dx.doi.org/10.1046/j.1365-2818.2002.01030.x
K. Nishikata, Y. Kimura, and Y. Takai, "Real-time lock-in imaging by a newly developed high-speed image processing charged coupled device video camera," Rev. Sci. Instrum. 74(3), 1393–1396 (2003). http://dx.doi.org/10.1063/1.1542663
A. Esposito et al., "All-solid-state lock-in imaging for wide-field fluorescence lifetime sensing," Opt. Express 13(24), 9812–9821 (2005). http://dx.doi.org/10.1364/OPEX.13.009812
A. Esposito et al., "Innovating lifetime microscopy: a compact and simple tool for life sciences, screening, and diagnostics," J. Biomed. Opt. 11(3), 034016 (2006). http://dx.doi.org/10.1117/1.2208999
T. Oggier et al., "An all-solid-state optical range camera for 3D real-time imaging with sub-centimeter depth resolution," Proc. SPIE 5249, 534–545 (2004). http://dx.doi.org/10.1117/12.513307
D.-U. Li et al., "Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm," J. Biomed. Opt. 16(9), 096012 (2011). http://dx.doi.org/10.1117/1.3625288
D.-U. Li et al., "Time-domain fluorescence lifetime imaging techniques suitable for solid-state imaging sensor arrays," Sensors 12(5), 5660–5669 (2012). http://dx.doi.org/10.3390/s120505650
D. Magde, R. Wong, and P. G. Seybold, "Fluorescence quantum yields and their relation to lifetimes of rhodamine 6G and fluorescein in nine solvents: improved absolute standards for quantum yields," Photochem. Photobiol. 75(4), 327–334 (2002). http://dx.doi.org/10.1562/0031-8655(2002)075<0327:FQYATR>2.0.CO;2
T. French et al., "Two-photon fluorescence lifetime imaging microscopy of macrophage-mediated antigen processing," J. Microsc. 185(3), 339–353 (1997). http://dx.doi.org/10.1046/j.1365-2818.1997.d01-632.x
I. T. Young, "Calibration: sampling density and spatial resolution," in Current Protocols in Cytometry, J. P. Robinson et al., Eds., Vol. 1, pp. 2.6.1–2.6.14, John Wiley & Sons, Inc., New York (1997).
I. T. Young, "Image fidelity: characterizing the imaging transfer function," in Fluorescence Microscopy of Living Cells in Culture, Part B, D. L. Taylor and Y. L. Wang, Eds., pp. 2–45, Elsevier, San Diego (1989).
A. J. P. Theuwissen, Solid-State Imaging with Charge-Coupled Devices, Kluwer Academic, The Netherlands (1996).
D. Marcuse, Engineering Quantum Electrodynamics, Harcourt, Brace & World, New York (1970).
J. C. Mullikin et al., "Methods for CCD camera characterization," in SPIE Symp. on Electronic Imaging Science and Technology, Vol. 2173, pp. 73–74, SPIE, Bellingham, Washington (1994).
F. R. Boddeke, "Quantitative fluorescence microscopy," Ph.D. thesis, Delft University of Technology (1998).
J. W. Goodman, Introduction to Fourier Optics, 3rd ed., Roberts & Company, Englewood, CO (2005).
R. D. Yates and D. J. Goodman, Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers, Wiley & Sons, Hoboken, NJ (2005).
V. Ghukasyan et al., "Fluorescence lifetime dynamics of enhanced green fluorescent protein in protein aggregates with expanded polyglutamine," J. Biomed. Opt. 15(1), 016008 (2010). http://dx.doi.org/10.1117/1.3290821
T. Nakabayashi et al., "Application of fluorescence lifetime imaging of enhanced green fluorescent protein to intracellular pH measurements," Photochem. Photobiol. Sci. 7(6), 668–670 (2008). http://dx.doi.org/10.1039/b800391b
I. T. Young, J. Gerbrands, and L. van Vliet, Fundamentals of Image Processing, Delft University of Technology, The Netherlands (1998).