## 1. Introduction

The rate of acquisition of fluorescent signals in 3-D using digital imaging-based microscopy is a critical factor in a number of applications, particularly in high-throughput microscopy. Because the depth of focus is relatively shallow, and becomes shallower as the resolving power of the objective lens improves (as the numerical aperture increases), it is necessary to acquire image samples at multiple levels of focus in the object of interest to avoid missing potentially important sources of signal. In applications requiring large sample sizes, the rate of acquisition sets an upper limit on possible throughput. In applications where the object of interest moves during acquisition, this rate correlates directly with the amount of blur in the acquired images. The method presented here is designed specifically to increase the rate of acquisition of signal originating throughout 3-D samples of interest.

Current practice is to acquire individual 2-D image planes at discrete levels in the specimen by moving focus between each acquisition, then to combine these planes to produce 3-D stacks (or “volumes”), which are processed further as needed. Collection of multiple individual planes introduces delays because: 1. quickly changing focus introduces vibrations in the sample, which degrade the image, but waiting for the vibrations to dampen or slowly moving focus slows the process; and 2. images must be collected, digitized, and transmitted individually. In wide-field microscopy, delays from the second of these problems arise in part from time intervals required to open and close illumination and camera shutters. In confocal scanning microscopy (CSM) and multiphoton fluorescence excitation microscopy (MPFEM), long image-collection time is due to the need to scan the illumination over the specimen. Shorter confocal acquisition times may be possible with alternative confocal methods, but images made with each of the confocal methods include only a thin layer of the specimen, and it is likely that not all the structures of interest are contained in that thin layer. In this regard, image volumes acquired using wide-field microscopy benefit from detecting fluorescence further away from each plane of focus, while simultaneously suffering image degradation by the addition of out-of-focus blur. Deconvolution reduces the blur in 3-D image volumes, but is computationally burdensome.

An extended depth of focus (EDOF) method, *i.e.*, one that would allow obtaining 2-D images in which all portions of a thick specimen appear in focus, would greatly benefit high-throughput microscopy. An existing method for EDOF is wavefront coding. In this method, a phase mask is placed in the objective back focal plane (BFP) (or at a plane conjugate to the BFP) that distorts the imaging characteristics of the objective in such a way that, although they are severely degraded, they are practically independent of the depth at which the objective is focused.^{1} Wavefront coding, however, is not yet applicable to the high numerical aperture (NA) objectives necessary for high-resolution imaging. Here we present a method for EDOF that is applicable to any objective and, although slower than wavefront coding, is more rapid than the existing methods outlined before. The method is based on collecting a through-focus image as a single 2-D image and then deconvolving this image using existing algorithms for deconvolution. Part of this work is described by us in a United States patent.^{2} This work is organized as follows. Section 2 describes the mathematical foundations of the proposed method. Section 3 presents the results of applying the proposed method to fluorescent images. In Sec. 4 we describe research projects in which we have successfully applied the EDOF method we developed. Section 5 compares results using different deconvolution methods, in particular linear versus nonlinear. In Sec. 6 we analyze the sensitivity of our method to using a point spread function (PSF) that is not at a depth representative of the specimen, and show that the method is rather robust to a mismatch in the PSF depth. In Sec. 7 we present important earlier developments that relate to the proposed method, but that in one way or another fell short of our goal of a fast and robust method for rendering 2-D images with extended depth of focus. Lastly, Sec. 8 summarizes our conclusions and describes directions for possible enhancements to the proposed method.

## 2. Description of the Method

Under imaging conditions that often hold, fluorescent image formation can be approximated as the 3-D convolution

## Eq. 1

$$g\left(\mathbf{x}_{i}\right)=\int_{\mathcal{O}}s\left(\mathbf{x}_{o}\right)h(\mathbf{x}_{i}-\mathbf{x}_{o})\,d\mathbf{x}_{o}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}s\left(\mathbf{x}_{o}\right)h(\mathbf{x}_{i}-\mathbf{x}_{o})\,d\mathbf{x}_{o},$$

where $s$ is the specimen function, $h$ is the 3-D PSF of the microscope, $\mathbf{x}_{o}=(\mathbf{r}_{o},z_{o})$ and $\mathbf{x}_{i}=(\mathbf{r}_{i},z_{i})$ are 3-D coordinates in object and image space, respectively, and $\mathcal{O}$ is the region occupied by the specimen.

In our method, the image is not collected as a 3-D stack, but as a 2-D image. More specifically, it is collected by opening the camera and illumination shutters and moving the microscope stage or objective through focus. This has the effect of accumulating in the charge-coupled device (CCD) chip the intensity at all the planes through which the microscope is focused. The resulting 2-D image is

## Eq. 2

$$g_{2}\left(\mathbf{r}_{i}\right)=\int_{z_{i}\in\mathcal{I}}g\left(\mathbf{x}_{i}\right)dz_{i}=\int_{z_{i}\in\mathcal{I}}dz_{i}\int_{\mathcal{O}}d\mathbf{x}_{o}\,s\left(\mathbf{x}_{o}\right)h(\mathbf{x}_{i}-\mathbf{x}_{o}),$$

where $\mathcal{I}$ is the axial range of the focal excursion. When this range extends well beyond the specimen, it can be taken as infinite, and exchanging the order of integration gives

## Eq. 3

$$g_{2}\left(\mathbf{r}_{i}\right)=\int_{(-\infty,-\infty)}^{(\infty,\infty)}d\mathbf{r}_{o}\int_{-\infty}^{\infty}dz_{o}\,s\left(\mathbf{x}_{o}\right)\int_{-\infty}^{\infty}dz_{i}\,h(\mathbf{x}_{i}-\mathbf{x}_{o}).$$

The innermost integral is independent of $z_{o}$ (as a change of variables shows) and depends only on the lateral displacement, so it defines a 2-D PSF

## Eq. 4

$$h_{2}(\mathbf{r}_{i}-\mathbf{r}_{o})=\int_{-\infty}^{\infty}dz_{i}\,h(\mathbf{x}_{i}-\mathbf{x}_{o}).$$

Likewise, the axial projection of the specimen function is

## Eq. 5

$$s_{2}\left(\mathbf{r}_{o}\right)=\int_{-\infty}^{\infty}s\left(\mathbf{x}_{o}\right)dz_{o}.$$

Substituting Eqs. 4, 5 into Eq. 3 shows that the recorded through-focus image is the 2-D convolution

## Eq. 6

$$g_{2}\left(\mathbf{r}_{i}\right)=\int_{\mathbf{r}_{o}\in\mathcal{O}}h_{2}(\mathbf{r}_{i}-\mathbf{r}_{o})\,s_{2}\left(\mathbf{r}_{o}\right)d\mathbf{r}_{o},$$

and thus the projected specimen function $s_{2}$ can be estimated from $g_{2}$ by 2-D deconvolution. There are many approaches for deconvolving incoherent imagery. For the images presented here, we use a maximum-likelihood (ML) approach based on the expectation-maximization (EM) formalism of Dempster, Laird, and Rubin.^{3} The resulting ML-EM algorithm can be found in Ref. 4. This algorithm, however, is somewhat slow for routine application, and thus in our regular research we use faster constrained ML deconvolution algorithms that we have derived.^{5}
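For Poisson-distributed incoherent images, the ML-EM iteration is equivalent to the well-known Richardson-Lucy update. The following is a minimal 2-D sketch of that iteration; the function and parameter names are ours, and it illustrates the idea rather than reproducing the constrained algorithms of Ref. 5:

```python
import numpy as np
from scipy.signal import fftconvolve

def mlem_deconvolve(g2, h2, n_iter=200, eps=1e-12):
    """ML-EM (Richardson-Lucy) estimate of the projected specimen s2
    from the recorded through-focus image g2 and the 2-D PSF h2."""
    h2 = h2 / h2.sum()                      # PSF normalized to unit sum
    h2_flip = h2[::-1, ::-1]                # adjoint (correlation) kernel
    s2 = np.full_like(g2, g2.mean())        # flat, non-negative start
    for _ in range(n_iter):
        blurred = fftconvolve(s2, h2, mode="same")
        ratio = g2 / np.maximum(blurred, eps)
        correction = fftconvolve(ratio, h2_flip, mode="same")
        s2 *= np.maximum(correction, 0.0)   # enforce non-negativity
    return s2
```

Each iteration preserves non-negativity of the estimate, a constraint that linear filters cannot enforce.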

## 3. Experimental Methods and Results

We tested the method in two ways. First, we collected a 3-D stack of images and then collapsed the stack into a 2-D integrated-intensity projection image that was deconvolved as described before. The original 3-D stack was also deconvolved with the same algorithm. This approach allows us to roughly compare the resolution obtainable with the method presented here against that of the much slower method of 3-D data acquisition and deconvolution. The second test was done with images collected by leaving the camera and illumination shutters open while the microscope stage moved through focus. In both cases, the 3-D PSF of the microscope was computed from the model of Gibson and Lanni^{6} using Gauss-Kronrod quadrature^{7, 8} to numerically evaluate the integrals. The 2-D PSF was calculated from the 3-D PSF using Euler's method. We are currently investigating how the choice of more precise quadrature methods affects the PSF and, more importantly, the deconvolved image.
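Both the Euler (rectangle-rule) projection of the 3-D PSF and the collapse of a 3-D stack into an integrated-intensity image amount to a sum along the axial axis. A minimal sketch, assuming arrays laid out as (nz, ny, nx); the function names are ours:

```python
import numpy as np

def project_psf(psf3d, dz):
    """Approximate the 2-D PSF of Eq. 4 by a rectangle-rule (Euler)
    sum of the 3-D PSF over its axial samples, spaced dz apart."""
    return psf3d.sum(axis=0) * dz

def collapse_stack(stack):
    """Integrated-intensity projection of a 3-D stack: the 2-D image
    that through-focus acquisition records in a single exposure."""
    return stack.sum(axis=0)
```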

### 3.1. Three-Dimensional Stack

The first test image is from fluorescently labeled actin filaments in yeast cells collected with a $60\times/1.4\,\mathrm{NA}$ objective. The image has $128\times 128\times 128$ cubic pixels $0.11\,\mu\mathrm{m}$ on each side (3-D stack courtesy of Karpova at the National Cancer Institute, National Institutes of Health, Bethesda, Maryland). Figure 1 shows the recorded and deconvolved EDOF images. Figure 1a is a 2-D projection of the 3-D stack, obtained by adding all the 2-D optical slices into a single image. Figure 1b is the 2-D deconvolution of Fig. 1a using the 2-D PSF defined in Eq. 4 and 3000 iterations of the EM-ML algorithm. Figures 1c and 1d were obtained by deconvolving the 3-D stack using a 3-D PSF and the EM-ML algorithm for 1000 and 3000 iterations, respectively. The time required by the 2-D deconvolution in Fig. 1b is about two orders of magnitude shorter than that required for Fig. 1d, which uses the same number of iterations. Despite this large time difference, the results obtained by the two methods are similar.

### 3.2. Two-Dimensional Projection

Figure 2 shows images of 4′,6-diamidino-2-phenylindole (DAPI)-stained nuclei from an approximately $7.1\text{-}\mu\mathrm{m}$-thick section of mouse kidney. The images were collected with a $100\times/1.4\,\mathrm{NA}$ oil-immersion objective and a Roper Scientific (Trenton, New Jersey) Quantix57 CCD camera driven by custom-written software. The pixels are $80\text{-}\mathrm{nm}$ squares. The through-focus excursion was $60\,\mu\mathrm{m}$ (from approximately $26.5\,\mu\mathrm{m}$ above to approximately $26.5\,\mu\mathrm{m}$ below the specimen). For the PSF computation, the fluorescent wavelength was $\lambda_{\mathrm{f}}=456\,\mathrm{nm}$, and we assumed that the specimen was immediately below the coverslip. We obtained nearly identical results assuming the specimen to be $3.5\,\mu\mathrm{m}$ under the coverslip.

Two nuclei that appear bright and well focused when the focus is set at the top of the section [Fig. 2a] are blurred when the focus is set at the bottom of the section [Fig. 2b] and, in one case (in the upper right-hand corner), obscured by an in-focus nucleus. In the through-focus image [Fig. 2c], all the nuclei are visible but are severely blurred. The blur, however, is removed in the deconvolved image [Fig. 2d].

## 4. Application to Our Research

The method presented here was developed in response to our need to obtain location information from thick specimens when the time to collect a 3-D stack is prohibitively long. Here we list two examples in which the acquisition of full stacks is limiting: 1. time-lapse acquisition of fluorescent spots that rapidly move out of focus, and 2. identification of labeled cells within a relatively large subject. In both instances, our approach significantly increases throughput at the imaging stage. Acquisition time is reduced because our method avoids multiple start-and-stop focusing movements, delays caused by repeated shutter openings and closings, and multiple data transfers from camera to computer to storage. Furthermore, deconvolution times and data storage requirements are considerably reduced. Deconvolution time is reduced not only because of the reduction from 3-D to 2-D, but also because for routine work, instead of the EM algorithm, we use the faster algorithms we developed for maximum-likelihood constrained deconvolution.^{5} In addition, although the result is a 2-D image, it is entirely adequate for common research problems.

In our time-lapse acquisition of fluorescent spots, the approach presented here allows us to track fluorescent protein-tagged chromosome regions as they move around the living *Saccharomyces cerevisiae* nucleus, imaging at a rate of two *full* frames of $1392\times 1040$ pixels per second. With our equipment, this is a significantly better temporal resolution than we can achieve with 7-slice stacks using only 1/8 of the frame, and thus greatly increases our efficiency in terms of imaging time per cell. In this experiment, the issues are to identify when rapid movements begin and end, as assessed in large numbers of cells so that rare events are not missed, and to determine whether the rates of movement differ between cell types. The method presented in this work has proven completely acceptable in this regard and, because of its more rapid throughput, superior to full stacks.

For the identification of labeled cells within a relatively large subject, the approach presented here provides an image adequate to identify the presence or absence of a fluorescently labeled cell in large numbers of *C. elegans* adults (where image throughput is important) without first having to identify the proper plane of focus. The need to first identify such a plane would be detrimental for automated screening of large populations of worms.

## 5. Linear Versus Nonlinear Deconvolution

It is often argued that for 2-D deconvolution of microscopic images it is unnecessary to use nonlinear methods, such as the EM-ML described earlier or the Jansson-van Cittert method of repeated convolution,^{9, 10, 11} the argument being that the 2-D optical transfer function (OTF) does not suffer from the missing cone that affects the 3-D OTF of wide-field microscopes. In fact, the 2-D OTF corresponding to either $h(\mathbf{r}_{o},0)$ (for thin specimens) or to $h_{2}\left(\mathbf{r}_{o}\right)$ given in Eq. 4 is nonzero over a circle of radius $2\,\mathrm{NA}/\lambda$ in the spatial frequency domain, and there are no regions inside this circle where the OTF is zero. However, because the OTF is exactly zero outside this circle, linear deconvolution methods (whether iterative or not) cause artifacts in the estimated specimen function; the main artifact is pixels with negative values, which result from enhancing frequency components within the passband of the OTF without recovering the out-of-band components necessary for a non-negative specimen estimate. We tested two of the most widely used linear deconvolution algorithms, namely the Moore-Penrose pseudo-inverse (sometimes referred to as the linear least-squares algorithm^{12}) and a filter based on the Wiener filter (or, more properly, the Wiener-Helstrom filter; see, for example, pages 206 to 210 of Ref. 13). In either case, the estimated specimen function is obtained by multiplying the Fourier transform of the recorded image by a filter $W\left(\boldsymbol{\rho}\right)$ and inverse transforming. The two filters are

## Eq. 7

$$W_{\mathrm{MP}}\left(\boldsymbol{\rho}\right)=\begin{cases}1/H_{2}\left(\boldsymbol{\rho}\right) & \mathrm{for}\ \left|H_{2}\left(\boldsymbol{\rho}\right)\right|\ge\epsilon\\ 0 & \mathrm{otherwise}\end{cases},$$

## Eq. 8

$$W_{\mathrm{WH}}\left(\boldsymbol{\rho}\right)=H_{2}^{*}\left(\boldsymbol{\rho}\right)/\left[\left|H_{2}\left(\boldsymbol{\rho}\right)\right|^{2}+\epsilon\right],$$

where $H_{2}$ is the 2-D OTF and $\epsilon$ is a small regularizing constant. In contrast to these linear filters, it has been shown by us^{4} and by others^{14} that the EM-ML algorithm uses the frequency components passed by the imaging system to estimate the frequency components that are either blocked or severely attenuated by the imaging system. This capability is of great help in the current application, where the practical cut-off frequency is severely reduced by the projection in $z$.
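Eqs. 7, 8 can be implemented directly in the discrete Fourier domain. A sketch (function and parameter names are ours), assuming the PSF array is centered and the same size as the image:

```python
import numpy as np

def linear_deconvolve(g2, h2, method="wiener-helstrom", eps=1e-3):
    """Linear deconvolution with the Moore-Penrose filter (Eq. 7) or the
    Wiener-Helstrom filter (Eq. 8).  h2 must be centered and the same
    shape as g2; the estimate is the inverse transform of W * G2."""
    H2 = np.fft.fft2(np.fft.ifftshift(h2))    # OTF with origin at pixel (0, 0)
    G2 = np.fft.fft2(g2)
    absH = np.abs(H2)
    if method == "moore-penrose":
        W = np.zeros_like(H2)
        passband = absH >= eps
        W[passband] = 1.0 / H2[passband]      # invert only inside the passband
    else:                                     # Wiener-Helstrom
        W = np.conj(H2) / (absH**2 + eps)
    return np.real(np.fft.ifft2(W * G2))
```

Either filter can produce the negative pixels discussed above, since neither recovers out-of-band frequency components.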

## 6. Sensitivity to Point Spread Function-Depth Mismatch

We obtain the PSF we use for deconvolution from the theoretical model of Gibson and Lanni^{6} at a representative depth of the specimen under the coverslip. For example, if a $10\text{-}\mu\mathrm{m}$-thick specimen extends from $15$ to $25\,\mu\mathrm{m}$ under the coverslip, ideally we would use a PSF calculated for a depth of $20\,\mu\mathrm{m}$. Unfortunately, it is impossible to know *a priori* the depth spanned by the specimen, and thus the depth at which we calculate the PSF (the mid-depth of the volume collected) is unlikely to be halfway through the specimen and might even be outside the specimen depth. We assessed the sensitivity of the deconvolved image to the mismatch between the depth of the specimen and the depth at which the PSF is calculated with a simulation: we generated images at different depths and then deconvolved them with PSFs also calculated at a range of depths. For this simulation, we generated 3-D synthetic images using PSFs that vary with the depth under the coverslip as

## Eq. 9

$$g\left(\mathbf{x}_{i}\right)=\int_{\mathcal{O}}h(\mathbf{r}_{i}-\mathbf{r}_{o},z_{i},z_{o})\,s\left(\mathbf{x}_{o}\right)d\mathbf{x}_{o}.$$

## 7. Earlier Developments

A method to render images with EDOF was proposed by Holmes,^{15} in which a 3-D stack of images $g\left(\mathbf{x}_{i}\right)$ is collected and its Fourier transform $G(\xi,\eta,\zeta)$ is computed. From the Fourier transform, the central slice $G_{2}\left(\boldsymbol{\rho}\right)=G(\xi,\eta,0)$ is extracted and, likewise, the central slice of the OTF, $H_{2}\left(\boldsymbol{\rho}\right)=H(\xi,\eta,0)$. From these central slices, the Fourier transform of the estimated specimen function is calculated. Although this approach and ours^{15} are mathematically equivalent (by the Fourier central-slice theorem, the $\zeta=0$ slice of the 3-D transform equals the 2-D transform of the axial projection), their approach requires the collection of a 3-D stack, which greatly increases the data acquisition time; avoiding that acquisition is one of the main reasons for using an EDOF image instead of the full 3-D image.
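The equivalence rests on the Fourier central-slice (projection-slice) theorem: the $\zeta=0$ slice of the 3-D transform equals the 2-D transform of the axial projection. This is easy to verify numerically (a self-contained check of the theorem, not code from Ref. 15):

```python
import numpy as np

rng = np.random.default_rng(1)
stack = rng.random((8, 16, 16))        # synthetic 3-D stack, axes (z, y, x)

G = np.fft.fftn(stack)                 # 3-D Fourier transform G(xi, eta, zeta)
central_slice = G[0]                   # the zeta = 0 plane, as used by Holmes

projection = stack.sum(axis=0)         # axial projection, as in our method
G2 = np.fft.fft2(projection)           # its 2-D Fourier transform

assert np.allclose(central_slice, G2)  # the two are identical
```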

Häusler^{16} and Häusler and Körner^{17} suggest a method in which an image is acquired in the way we propose, that is, by keeping the camera shutter open while the specimen moves through focus. Häusler^{16} processes the resulting image with an optical filter: the recorded image is projected through a lens that has a mask in its back focal plane, with an optical density proportional to the function $W_{\mathrm{WH}}\left(\boldsymbol{\rho}\right)$. This processing, however, assumes that the image was formed under coherent illumination, a situation that does not apply to fluorescence microscopy. Häusler and Körner^{17} suggest processing the image by analog electronics using the so-called $\rho$-filter, that is, a filter whose amplitude is proportional to $\rho=\Vert\boldsymbol{\rho}\Vert=\sqrt{\xi^{2}+\eta^{2}}$. The selection of this filter is based on the observation that the projection OTF (see Fig. 4) is proportional to $1/\rho$ except for $\rho\approx 0$. The $\rho$-filter, however, is known to greatly amplify high-frequency noise, and Häusler and Körner suggest homomorphic filtering to reduce it. Even with this noise reduction, their method is limited by the use of a linear filter for deconvolution, which at best enhances components up to about 1/4 of the cut-off frequency of the imaging system.
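The high-frequency noise amplification of the $\rho$-filter can be seen in a small numerical experiment (entirely our own illustration): applying a frequency-domain gain proportional to $\rho$ to white noise shifts its spectral energy toward high spatial frequencies.

```python
import numpy as np

n = 64
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
rho = np.sqrt(fx**2 + fy**2)              # radial spatial frequency

rng = np.random.default_rng(2)
noise = rng.standard_normal((n, n))       # white noise image

# rho-filter: frequency-domain gain proportional to rho
filtered = np.real(np.fft.ifft2(np.fft.fft2(noise) * rho))

def hi_frac(img, cutoff=0.25):
    """Fraction of spectral energy above a radial frequency cutoff."""
    power = np.abs(np.fft.fft2(img))**2
    return power[rho > cutoff].sum() / power.sum()
```

For white noise the input spectrum is flat, so the filtered spectrum grows as $\rho^{2}$ and the high-frequency energy fraction increases.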

Neither the method proposed by Holmes^{15} nor that of Häusler and Körner^{16, 17} can render an EDOF image with the resolution afforded by the method we propose.

We have similarly acquired through-focus fluorescent image data using a spinning disk confocal microscope. In this application, fluorescent signals from above and below the volume of interest do not interfere with the signals of interest, so deconvolution is not required; moreover, by translating along $x$ as well as $z$ during two overlapping acquisitions (positively along $x$ for the first image, negatively along $x$ for the second), we have acquired stereo pair images. However, this approach is not as sensitive as the method described before, and the focus and translation movements must be relatively slow to avoid streaking introduced by the moving pinholes.

## 8. Conclusions and Future Work

We presented a method to obtain extended depth-of-focus images that combines a new way to acquire images with a constrained deconvolution algorithm. The method allows for fast data acquisition by collecting the whole image volume within a single image, thus eliminating the dead time that results from stopping and starting focus changes and from opening and closing the illumination and camera shutters. There are, however, several issues to investigate to optimize the throughput of the method. For example, how far above and below the specimen is it necessary to collect light? If the recorded volume is too small, then Eq. 4 is no longer independent of the axial coordinates $z_{o}$ and $z_{i}$, and the recorded image can no longer be approximated as a 2-D convolution. On the other hand, a thick volume requires a longer exposure time, which increases photobleaching and risks saturating the CCD camera. Another important issue is the choice of the deconvolution algorithm. The EM-ML algorithm we currently use is flexible and robust, but slow. We have developed deconvolution algorithms that are significantly faster but less robust (in particular to truncated data),^{5} and for the routine application of our method we now use these faster algorithms. Finally, we will develop rapid, high-resolution methods for producing stereo pairs of images.

## Acknowledgments

The authors thank Danny J. Smith for his help setting up and maintaining the computers used for this research; Ben Fowler for his expert help with the microscopes; and Mary Flynn and Adrienne Gidley for their help with the everyday running of our laboratories. The support of NIH grants R01 GM49798, R01 GM55708, and R01 EB002344; NSF grant EPS-0132534; the College of Engineering of the University of Oklahoma, and the Oklahoma Medical Research Foundation is gratefully acknowledged.