Advances in light sources, computational power, and molecular tags have led to renewed interest in improving microscopy. In this work, we focus on four measures of microscope performance: (1) resolution of smaller objects, even below the diffraction limit, (2) increased field of view (FOV) toward gigapixel images, (3) the ability to “section” a turbid medium, rejecting out-of-focus objects and thus providing three-dimensional images, and (4) high contrast without exogenous materials, which enables in vivo imaging. For example, in imaging of skin cancers, the clinician would like to resolve subcellular components to at least below the dermoepidermal junction ( deep), with as large an FOV as possible. High-numerical-aperture (NA) objectives generally have high magnification, thus limiting the FOV. Confocal reflectance microscopy1 uses a high-NA objective to provide resolution and sectioning without added contrast, but the high NA and consequent high magnification mean that an increased FOV requires mosaics of images.
One relatively new and practical solution that provides quick and accurate sectioning is structured illumination microscopy (SIM).2 By projecting a high-frequency modulation pattern through the illumination path onto the specimen, discrete planes can be isolated, eliminating clutter that would otherwise obscure useful information. Because of the high frequency of the modulation patterns, high-resolution information beyond the diffraction limit, previously inaccessible, can be recovered through appropriate processing. Thus, we can reduce the NA and increase the FOV while still maintaining most of the resolution and sectioning ability of the confocal microscope. We demonstrate the use of random modulation patterns of incoherent [light-emitting diode (LED)] light for resolution enhancement and sectioning. The optimal solution for forming an image with high lateral resolution requires a pattern at the Nyquist limit, which, after reconstruction in frequency space, allows for double the resolution.3 In practice, there is a trade-off between sectioning depth and the total resolution of the system based on the spatial frequency of the modulation pattern.4
Original developments of super-resolution using structured illumination required precise movements of a sine pattern with a discrete phase and frequency. To illuminate the specimen fully, precise phase shifts of the period must be applied to the sine pattern, requiring an extremely finely tuned mechanical device. Additionally, the pattern must be rotated to at least two other positions to provide resolution enhancement in both lateral (x and y) directions. Recently, attempts have been made to remove the need for a well-defined sine pattern. Removing this dependency reduces the overall cost and complexity of manufacturing and manipulating the pattern on a micron scale and removes the need for perfect phase alignment within turbid media.5–8 Dynamic speckle illumination microscopy uses a speckle illumination pattern to provide sectioning at significant depths but has been demonstrated using fluorescence only.9,10 Instead, we use a random pattern in reflectance, comprising many different samples. This method not only avoids the need for exact phase shifts but also allows for in vivo imaging without any tagging.4,11 However, without discrete frequencies and phase shifts, the original method for super-resolution, developed by Gustafsson, cannot be applied to random patterns. Thus, we develop two methods that provide both sectioning at depth and resolution enhancement using a random pattern.
Two algorithms are demonstrated for producing sectioning and super-resolution with an unknown random pattern. The first method we consider is the maximum a posteriori probability (MAP) processing technique.5 MAP processing produces a high-resolution, unsectioned image and a low-resolution, sectioned image, which are then combined to produce a high-resolution, sectioned image. Typically, when MAP is used in conjunction with SIM, a sinusoidal pattern with a known frequency and phase is required to produce sectioning. The literature does not extend this work to sectioning at depth and makes no attempt to demonstrate its functionality against a subsurface object. Additionally, the transfer function of the system must be reasonably well characterized to accurately estimate the modulation pattern at the detector. To alleviate some of these constraints, we modify the algorithm to use random patterns that are not known ahead of time. To validate the techniques developed here, we evaluate the quality of both the sectioning and the super-resolution.
Second, we consider a different deblurring technique that relies on sparse priors.12 Because the point spread function (PSF) varies as a function of depth, deblurring a thick specimen can be tedious and error prone, as it requires multiple estimations of the PSF to reconstruct the image correctly. By sectioning the image using SIM first, we account for the depth dependency of the PSF. When this sectioned image is deblurred, we show that the results for thick specimens are significantly better than with deblurring alone. The sparsity prior assumes that the object is composed mostly of sharp edges, allowing us to deconvolve a generic PSF with excellent results. Throughout this paper, we use the same nomenclature as Levin et al.,12 referring to this technique as one of sparse priors. To our knowledge, these two techniques have not been used for subsurface imaging in biological specimens, and they provide a very promising approach for extracting both axial and lateral information about a specimen, even within a turbid medium.
The deblurring techniques leveraged in this paper fall under a class of algorithms known as deconvolution, in which an estimated PSF is decoupled from a signal in the presence of noise.13 Each algorithm requires multiple iterations to converge on a solution, constrained by its respective regularization function. To further simplify these methods, we implement all of them using an incoherent light source (reducing cost and increasing safety) and take all images without fluorescent tagging, making extension to most in vivo applications extremely practical. Mainstream techniques typically rely on fluorescent tagging, which alleviates many signal-to-noise constraints but limits use in in vivo applications. The methods described throughout this paper are applied to images taken in reflectance only. It is understood that less information will be obtained using reflectance; however, given the limited number of available exogenous labels, a label-free technique is of great value. Using a 1951 Air Force resolution test chart (AFT), we demonstrate that the resolution and sectioning improvements are robust at the surface as well as within a turbid, semiopaque medium.
In short, this paper demonstrates an MAP-based algorithm that produces both sectioning and super-resolution without capturing data about the patterns a priori. Further, we show that this MAP processing algorithm benefits from the use of a random pattern for sectioning, as three-phase sectioning does not section well within turbid media. Second, we apply a sparse priors algorithm to a sectioned image. We demonstrate that this provides superior sectioning of targets compared with deconvolution alone and does not require adjustment of the PSF for variations in depth, as previous algorithms did.
Data Collection and Processing
In all cases, we assume that the measurements are constructed in the following manner:14

$$ I_n(x,y) = \left[ s(x,y)\, p_n(x,y) \right] \otimes h(x,y) + \eta_n(x,y), \tag{1} $$

where $s$ is the sharp image of the specimen, $p_n$ is the $n$'th illumination pattern, $h$ is the PSF, $\otimes$ denotes convolution, and $\eta_n$ is additive noise. Further, without a priori knowledge of the pattern, any errors can be exacerbated when estimating the sharp image $s$. Selection of the proper regularization parameters is critical to achieving the best results. Given the ill-posed nature of deconvolution (i.e., numerous possible solutions), regularization constrains the algorithm to ensure that the final result does not vary significantly from the original image. For all experiments, a regularization value was selected empirically to provide the best resolution without oversmoothing the image or deviating drastically from the wide-field image. The exact values are described further in their respective sections. Beyond deblurring, structured illumination is used to isolate a particular plane of interest, as is done in confocal microscopy,15 a capability known as optical sectioning. Optical sectioning in structured illumination is achieved by modulating an xy plane, at a discrete z-axis position (the focal plane), with a high-frequency illumination pattern. At locations where the illumination pattern is not in focus, i.e., above and below the focal plane, blurring occurs. Typically, the modulated (AC) portion of the signal is isolated from the unmodulated (DC) portion through the use of the differencing scheme2,3,5

$$ I_{AC} = \left[ (I_1 - I_2)^2 + (I_2 - I_3)^2 + (I_3 - I_1)^2 \right]^{1/2}, \tag{2} $$

where $I_1$, $I_2$, and $I_3$ are images taken with the pattern shifted by one-third of a period between exposures.
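The three-phase differencing scheme of Eq. (2) can be sketched in a few lines. The following is a minimal illustration in Python/NumPy (rather than the MATLAB used in this work), with the frame shape and modulation depth chosen arbitrarily: the modulated component survives, and the unmodulated background cancels.

```python
import numpy as np

def section_three_phase(i1, i2, i3):
    """Classic three-phase differencing [cf. Eq. (2)]: recovers the
    modulated (AC) signal, present only near the focal plane, and
    cancels the unmodulated (DC) background common to all frames."""
    return np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2)
```

For a sinusoidal pattern shifted by exactly one-third of a period between exposures, the recovered AC amplitude is uniform across the field; three identical (unmodulated) frames return zero everywhere.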
Our experimental setup is specifically designed to allow manipulation of the pupil on both the transmit (before the specimen) and receive (after the specimen) sides of the illumination path. Because the pupil plane contains the Fourier transform of the image, a simple aperture placed in each pupil can band limit the spatial frequencies of the microscope.16 Constricting the aperture changes the diffraction limit of the system. This adjustment allows the raw, high-NA images to be compared with their processed low-NA counterparts. The following sections show how each processing technique is conducted and how well the results compare to the ground truth.
In all experiments, 80 random, unknown patterns are used to thoroughly illuminate the entire specimen. The random patterns are binary, generated using the rand() function in MATLAB® with a uniform distribution and thresholded to a 25% fill factor. Using a random pattern, the frequency content of the pattern spans the entire bandwidth of the system and is limited only by the optical transfer function (OTF). This fill factor was selected to ensure that the specimen was adequately illuminated and that strong contrast could be achieved at depth. This contrast is demonstrated in Sec. 2.2, where we show that objects can still be resolved even through a semiopaque medium. Using a lower fill factor allows deeper sectioning but requires additional images.17 The number of samples used, 80, was selected empirically. There is a trade-off between image contrast and processing time, both of which increase with the number of images. Previous work has shown that in vivo sectioning is possible with as few as 30 frames.4 However, here, the higher number of samples is chosen to best demonstrate the processes developed in this paper.
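As a sketch, the pattern generation described above might look as follows in Python/NumPy, mirroring the MATLAB rand()-and-threshold approach; the grid size and seed are illustrative choices, not the paper's DMD parameters.

```python
import numpy as np

def make_patterns(n=80, shape=(512, 512), fill=0.25, seed=0):
    """Binary random illumination patterns at a given fill factor:
    draw uniform random values and threshold so that ~fill of the
    DMD mirrors are 'on'. Shape and seed are illustrative."""
    rng = np.random.default_rng(seed)
    return (rng.random((n, *shape)) < fill).astype(np.uint8)
```

With fill=0.25, roughly a quarter of the pixels in each pattern are on, and the ensemble of 80 patterns covers the field with high probability.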
A 635-nm incoherent LED with a linewidth of 17 nm was used as the light source. The patterns were projected onto the specimen using a TI LightCrafter digital micromirror device (DMD). The images were then captured using an Allied Vision Guppy PRO F-125 CCD camera at a sampling rate of 22 Hz, providing all 80 samples in just over 3.5 s. All data were processed in MATLAB® R2013a, using an Intel Core i7-2675QM CPU at 2.20 GHz. The objective has a nominal magnification of in air with an NA of 0.25. During the experimentation, the NA of the system was significantly lowered by adjusting the apertures in the pupil planes to test for super-resolution. A layout of the microscope is shown in Fig. 1. Before constricting the apertures, the maximum resolution of the system was measured as , in agreement with Zemax simulations of the system. All images are compared with their wide-field equivalent, which is constructed by taking the average of all samples as described by

$$ I_{WF} = \frac{1}{N} \sum_{n=1}^{N} I_n. \tag{3} $$

Figure 1 provides a graphical representation of the microscope. For the structured illumination processing to work properly, the pattern projected by the DMD must be conjugate with the focal plane as well as the detection plane. The dashed lines in the image planes (labeled with an “”) denote the location of the pattern in space. All other sections labeled with an “” denote the pupil planes, which contain the Fourier transform of the image. In and , an aperture is added to low-pass filter the frequency content of both pattern and specimen before they arrive at the camera.
Many different techniques have been developed to provide sectioning from three phases.8,18–21 However, as shown by Hoffman and DiMarzio,11 the susceptibility to phase misalignment in turbid media renders most images useless. Using a random pattern, rather than one with discrete phase changes, ensures that sectioning at depth produces nominally artifact-free images.4 To decouple the modulated and unmodulated portions of the light, the following pair-wise comparison [Eq. (4)] is applied, which results in uniformly sectioned images when using multiple, random, modulated patterns:

$$ I_{AC} = \left[ \frac{2}{N(N-1)} \sum_{j=1}^{N} \sum_{k=j+1}^{N} \left( I_j - I_k \right)^2 \right]^{1/2}. \tag{4} $$

This is an extension of the typical differencing scheme of Eq. (2) but ensures that all 80 images are subtracted from one another.
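A minimal NumPy sketch of such a pair-wise differencing scheme follows; this is one plausible realization (the normalization and data layout are our assumptions, not necessarily those of the published implementation). Unmodulated, out-of-focus light is common to every frame and cancels in each difference, so only the in-focus, modulated signal survives.

```python
import numpy as np

def section_pairwise(stack):
    """RMS over all pairwise frame differences of an (N, H, W) stack.
    Out-of-focus (unmodulated) light cancels in each difference,
    leaving only the in-focus, modulated content."""
    n = len(stack)
    acc = np.zeros_like(stack[0], dtype=float)
    for j in range(n):
        for k in range(j + 1, n):
            acc += (stack[j] - stack[k]) ** 2
    return np.sqrt(2.0 * acc / (n * (n - 1)))
```

The double loop is O(N²) in the number of frames; for the 80 frames used here that is 3160 subtractions, which is still inexpensive compared with the deconvolution steps.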
Additionally, we rely heavily on the use of a random modulation pattern, as this ensures that we are working at the limits of the OTF. The use of random patterns increases the amount of data required for sectioning but simplifies the hardware and the system characterization required for proper functioning. An experiment was run to quantify the axial resolution of this sectioning technique. The random pattern is projected onto a mirror at a focal plane where the pattern is completely in focus. The location of the mirror relative to the focal plane is then adjusted in increments of . The results are plotted in Fig. 2. At , there is a drop in amplitude across all frequencies of about 1 to 4 dB, enough to blur the pattern but not likely to significantly impact the sectioning. However, at about from the focal plane, the amplitude drops by 5 to 15 dB throughout the spectrum. With this much loss, there is no longer a detectable signal from the modulation pattern; thus, all light from these depths is rejected by our sectioning algorithm. From this, we expect a strong drop in signal outside of (above and below the focal plane), yielding a sectioning thickness of about .
Another advantage of a random broadband pattern is that the maximum frequency present will always be limited by the OTF of the system. This ensures that the image will always be modulated at the Nyquist limit of the system, which can be visualized in Fig. 3. Here, we show experimentally the frequency spectra of three patterns projected onto a mirror. Figure 3(a) contains a low-frequency discrete square pattern well within the limits of the OTF (first-order sidebands are shown with white arrows). Because this is a square wave, the higher-order harmonics make up the diagonal line in the figure. The black circle in the figure represents the approximate OTF of this system. Figure 3(b) is a high-frequency square pattern much closer to the edge of the OTF. This configuration provides much better resolution enhancement during reconstruction. As demonstrated by Gustafsson,3 a total resolution improvement of can be obtained when the frequency of the pattern lies at the edge of the OTF. Should the OTF shrink, this pattern may begin to alias, no longer providing sectioning or super-resolution. However, the random pattern in Fig. 3(c) conforms to the OTF, continuing to provide both sectioning and super-resolution at any OTF size. This feature obviates the need to characterize the system before designing the projection patterns. Ultimately, there is a trade-off between resolution and depth of sectioning. For a high-frequency modulation pattern, we are able to approach the full improvement demonstrated by Gustafsson. However, within a turbid medium, contrast is rapidly lost, producing poor sectioning. In contrast, a low-frequency pattern provides much better sectioning at depth but limited resolution improvement.
Maximum a posteriori probability
The first technique that we explore is MAP processing; the algorithm is outlined in Fig. 4. Previous implementations of MAP processing required the pattern to be known a priori; however, we show that we can still perform sectioning and super-resolution even without prior knowledge of the patterns. Starting with 80 samples, each modulated with a random pattern, a wide-field image is produced using Eq. (3), and a sectioned image is produced using Eq. (4). The wide-field image is then subtracted from each sample to produce an estimate of the random modulation pattern,

$$ \hat{p}_n = I_n - I_{WF}. $$

This operation is completed in the image domain and ideally removes the specimen component of the image from the pattern-plus-specimen product, returning an estimate of the pattern only. The results of this process can be seen in Fig. 5. The MAP processing then minimizes the mean square error between the estimated high-resolution image and the observed images, finding the most probable high-resolution image required to produce the sampled array of low-resolution images. As described by Lukeš et al.,5 a gradient descent algorithm is applied to iteratively minimize21,22

$$ \hat{s} = \arg\min_{s} \sum_{n=1}^{N} \left\| I_n - H\!\left( \hat{p}_n\, s \right) \right\|^2 + \lambda \, \| s \|^2, $$

where $H$ denotes blurring by the PSF and $\lambda$ is the regularization weight. The final result is formed by extracting the low-frequency components of the sectioned image and the high-frequency components of the MAP estimate and adding the two together in the frequency domain.
Typical implementations of MAP expect the modulation pattern to be known a priori.8 In the case of imaging in turbid media, the exact position of the pattern will not be consistent from specimen to specimen. Thus, to remove this constraint, we estimate the modulation after the data have been collected. This removes the need, present in previous implementations of MAP, to characterize the pattern positions ahead of time; those positions will change within turbid media. The MAP processing algorithm implemented throughout the rest of this paper is based on a modified version of the SIMToolbox, developed by the Multimedia Technology Group, Czech Technical University, Prague.22 The algorithm has been modified to use the pattern estimation technique described above. On average, the MAP processing took iterations to converge on a solution, in a total time of 35 s. All experiments going forward use our modified version of the MAP processing algorithm, which allows for blind modulation patterns and requires no a priori knowledge of the pattern. The equations above describe the modifications applied in this paper; we refer readers to the original paper for additional details.
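As a toy illustration of a Gaussian-prior minimization of this kind (not the SIMToolbox implementation, which also folds in the estimated patterns), a Tikhonov-regularized gradient descent with the blur applied in the frequency domain might look like the following; the step size, regularization weight, and iteration count are illustrative.

```python
import numpy as np

def map_estimate(y, otf, lam=0.01, step=0.5, iters=100):
    """Gradient descent on ||H x - y||^2 + lam ||x||^2, with the blur H
    applied as multiplication by the OTF in the frequency domain.
    A simplified stand-in for the MAP step described in the text."""
    x = y.copy()
    Y = np.fft.fft2(y)
    for _ in range(iters):
        X = np.fft.fft2(x)
        # gradient of the data term, H^T (H x - y), plus the Tikhonov term
        grad = np.real(np.fft.ifft2(np.conj(otf) * (otf * X - Y))) + lam * x
        x -= step * grad
    return x
```

Because the OTF magnitude is at most 1, a step size of 0.5 keeps the iteration stable; the Gaussian (Tikhonov) prior is what gives this estimate its characteristic smoothing behavior, noted later in the bead results.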
Next, we apply a deconvolution technique that relies on sparse priors. In sparse reconstruction, images are assumed to be mostly binary in nature, and their details of interest are expected to have sharp edges. These assumptions allow for high-quality reconstruction of small point scatterers and thin lines, such as fibers. Objects such as these, which are beyond the working diffraction limit, typically reconstruct nicely. The deconvolution processing is based on the algorithm developed by Levin et al.12 and is implemented as described therein. To estimate the super-resolution image, we minimize the following:12

$$ \hat{x} = \arg\min_{x} \left\| x \otimes k - y \right\|^2 + \lambda \sum_{i} \left| x \otimes g_i \right|^{0.8}, $$

where $y$ is the sectioned image, $k$ is the PSF, and the $g_i$ are derivative filters; the exponent $0.8 < 1$ enforces sparsity of the image gradients. In previous implementations, this algorithm relied on varying the PSF to account for depth. Our research shows that, with sectioning, this depth dependency can be removed, requiring only a single PSF estimate for deconvolving the image. Additionally, within turbid media, the deconvolution process is far less effective without sectioning first.
The algorithm is outlined in Fig. 6. After the 80 modulated images are collected, the sectioned image is produced. The sectioned image is then deconvolved from an estimated PSF. The PSF is iteratively refined until the algorithm converges. An assumption of sparse priors is applied in this technique. There are two reasons why sectioning first is necessary in this algorithm: first, the PSF will vary as a function of depth. By sectioning first (isolating a single depth plane), the PSF should be approximately uniform across the entire image, greatly simplifying the deconvolution procedure. Second, the use of sparse priors will constrain the edges of single point objects. In our case, the modulation pattern itself contains many point objects. By sectioning first, the pattern is nominally removed, leaving only the specimen to be processed. Similar to the MAP processing, this algorithm is run iteratively until convergence, continuously updating the PSF. For our application, we run this process about 200 times in 28 s, including sectioning, which provides a reasonable trade-off between overall convergence and minimal processing time. Adjustment of the total number of iterations or increasing the size of the PSF will have a dramatic effect on how long it takes to complete the processing. The process has been shown to work well in other microscopy techniques in various biological specimens.23–25
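The core update of a sparse-prior deconvolution can be sketched as follows. This is a smoothed gradient-descent surrogate for the hyper-Laplacian gradient prior (Levin et al. use iteratively reweighted least squares); all parameters, and the smooth sqrt penalty standing in for |grad|^0.8, are our illustrative assumptions.

```python
import numpy as np

def sparse_deconv(y, otf, lam=0.002, step=0.4, iters=200, eps=1e-3):
    """Gradient descent on ||H x - y||^2 + lam * sum sqrt(grad^2 + eps),
    a smoothed stand-in for the sparse (hyper-Laplacian) gradient prior.
    Applied to the *sectioned* image, a single OTF suffices."""
    x = y.copy()
    Y = np.fft.fft2(y)
    for _ in range(iters):
        X = np.fft.fft2(x)
        data = np.real(np.fft.ifft2(np.conj(otf) * (otf * X - Y)))
        gx = np.roll(x, -1, axis=1) - x           # forward differences
        gy = np.roll(x, -1, axis=0) - x
        wx = gx / np.sqrt(gx**2 + eps)            # d/dg sqrt(g^2 + eps)
        wy = gy / np.sqrt(gy**2 + eps)
        prior = (np.roll(wx, 1, axis=1) - wx) + (np.roll(wy, 1, axis=0) - wy)
        x -= step * (data + lam * prior)
    return x
```

The small weight on the gradient penalty preserves isolated points and sharp edges while suppressing ringing, which is why this style of prior separates adjacent beads where a Gaussian prior merges them.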
For this experiment, the PSF is modeled as an Airy disk,26

$$ h(r) = \left[ \frac{2 J_1(v)}{v} \right]^2, \qquad v = \frac{2\pi}{\lambda}\,\mathrm{NA}\; r, $$

and its Fourier transform is then taken to produce the OTF estimate.
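A sketch of this PSF model and the resulting OTF follows; the grid size, NA, and sampling below are illustrative values rather than the paper's exact system parameters, and SciPy supplies the Bessel function.

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def airy_psf(n=256, na=0.05, wavelength=0.635, dx=0.5):
    """Airy-disk PSF, h(r) = [2 J1(v) / v]^2 with v = (2*pi/lambda)*NA*r.
    wavelength and dx are in microns; n, na, and dx are illustrative."""
    ax = (np.arange(n) - n // 2) * dx
    xx, yy = np.meshgrid(ax, ax)
    v = 2 * np.pi * na * np.hypot(xx, yy) / wavelength
    v = np.where(v == 0, 1e-12, v)        # limit of 2 J1(v)/v is 1 at v = 0
    h = (2 * j1(v) / v) ** 2
    return h / h.sum()                    # normalize to unit energy

def otf_from_psf(h):
    """OTF estimate: Fourier transform of the PSF, normalized to DC = 1."""
    H = np.fft.fft2(np.fft.ifftshift(h))
    return H / H[0, 0]
```

Because the PSF is nonnegative, the OTF magnitude peaks at DC and falls off toward the cutoff set by the NA, which is the band limit discussed in connection with Fig. 3.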
A 1951 USAF resolution test chart (AFT) is used to evaluate and compare the two algorithms. For the experiments going forward, we define the resolution as the distance between two black lines on the chart, regardless of the thickness of the lines. Figure 7 compares the full-resolution image at the full NA of 0.25 (a) with the low-resolution image obtained when the aperture in the detection path is constricted (b). At full resolution, the elements can be resolved, with the smallest lines in width (group 7, element 6). The aperture is then closed, making the smallest resolvable lines in width (group 6, element 3), corresponding to an NA of . Both the MAP (c) and sparse processing (d) provide a resolution improvement. The MAP processing reveals elements with a linewidth of (group 6, element 6), increasing the resolution by . The sparse prior processing reveals elements with a linewidth of (group 7, element 1), increasing the resolution by about . With respect to total resolution improvement, the sparse processing produced an image that more accurately reflects the pattern seen in the ground-truth image (a). However, both techniques introduce a few artifacts in very low-frequency regions. These artifacts can be seen in the background, as well as in the black box between the 7 and the 6. The cause of these artifacts is likely nonuniform illumination from the random patterns. These artifacts can also be seen in the wide-field image (b), where some structure is still present. Further, relative to the MAP processing, the sparse deconvolution overregularizes the three lines, reconstructing them as slightly thinner than those seen in the ground-truth image. Cross sections from two elements of the AFT are plotted in Fig. 8: (a) group 6, element 5, and (b) group 7, element 1.
Here, it is clear how well the three elements are resolved, as well as the contrast improvement provided by each of the techniques.
Next, polystyrene microbeads, in diameter, are placed on top of a mirror, as shown in Fig. 9. The resolution of the system is reduced (), and the wide-field image is shown in Fig. 9(a). The region boxed in red is shown in the image below it [Fig. 9(d)]. The MAP-processed image is shown in (b) and (e), whereas the sparse-priors-processed image is shown in (c) and (f). In the low-resolution image, individual beads cannot be discerned when they are adjacent to one another. The MAP processing creates contrast and sharpness around the edges. However, because the MAP processing uses a Gaussian prior as its regularization function, it has a tendency to smooth over edges, making many beads appear as one large blob. Although it sharpens large areas of contiguous beads, it is incapable of segregating adjacent beads from one another. The sparse priors technique, by contrast, does an excellent job of differentiating each bead. Arrows superimposed onto the figure show two regions where the resolution has undergone marked improvement. In Fig. 10, a cross section of the beads shows almost a resolution improvement from the original image to the sparse priors result. We measure the full width at half maximum (FWHM) of the low-resolution image to be , as expected. After the sparse processing, there are two well-defined beads with a FWHM of , demonstrating the accuracy of this procedure. As for the MAP processing, the deconvolution process merges the individual beads in the image, making it difficult to isolate them from one another.
Given the desire to apply these techniques in vivo, a biological specimen is investigated to ensure that both the super-resolution and sectioning aspects are present in the reconstructed image. Figure 11 shows a layer of onion cells placed above the AFT. The space between the two targets is filled with ultrasound gel (). Previous experiments with our sectioning technique have shown a sectioning resolution of . Therefore, we expect to remove scattered light from the AFT at depth, as well as scattered light from the onion cells a few microns below the surface. In the first image (a), dark areas are seen below the onion cells due to absorption of light by the AFT beneath the cells. Additionally, contrast among the cells is reduced by light scattered throughout the sample. The sectioning process is verified by noting that the dark regions have been removed in both (c) and (d) and that the interfaces between cells are clarified. In both of the processed images, distinct lines between the cells are seen, giving strong contrast in the image. Comparing the MAP processing in (c) with the sparse processing in (d), many of the point-scattering objects, as well as the regions between the cells, are even further defined. To ensure that the sectioning and super-resolution are complementary, the deconvolution is also applied directly to the wide-field image (b). Without sectioning first, the scattered light from the onion cells and the AFT below is not rejected; as such, the super-resolution technique actually enhances the contrast of the AFT below. It is worth noting the contrast at the edges of the onion cells: in the wide-field image, the regions among cells are almost indistinguishable, whereas both processing techniques increase the contrast greatly.
Reviewing the sparse priors result (d), the edges are further narrowed, producing high-contrast, high-frequency edges between each of the cells. In the deconvolution-only image, some of the lines are enhanced slightly; however, they are not nearly as well defined without the sectioning.
To further verify the technique, the focus is adjusted to a depth below the onion cells, as demonstrated in Fig. 12. The lines are distorted as a result of the index-of-refraction change from the air objective to the onion cells. Use of an index-matching material would likely mitigate this distortion, greatly improving the results. The sparse priors technique gives additional, albeit modest, resolution improvement, consistent with previous results. Specifically, the three lines in group 8, element 6 (lower-right side) are now resolvable, which are not visible in the other three images. There are some air bubbles in the ultrasound gel between the onion cells and the resolution chart. Because of the index-of-refraction change, in some areas the modulation pattern does not focus on the correct plane; as a result, there are some dark lines and spots in the image. On the whole, however, the sectioning has removed much of the scattered light from the cells above the resolution chart, providing better contrast.
Last, we compare these methods with two other common deconvolution techniques applied without any sectioning: Lucy–Richardson and maximum-likelihood deconvolution. Each method was implemented in MATLAB® 2013a (MathWorks), using a template PSF similar to the one applied in the sparse deconvolution method. The results are plotted in Fig. 13, which shows cross sections from the resolution bars of Fig. 12, group 6, element 6 (bottom-right section of the imaged resolution chart). The left plot of Fig. 13 shows a cross section of the vertical bars. Both the sparse and MAP processing methods greatly increase the contrast of these bars, whereas the other two methods produce results similar to no processing at all. The right plot shows the cross section of the horizontal bars. The sparse method succeeds in producing contrast between the lines, while the MAP processing improvement is modest at best. The other two deconvolution methods, without sectioning, again do not produce quality results. This experiment supports the original hypothesis that applying the sectioning first allows for superior deconvolution and better overall results.
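For reference, the Lucy–Richardson baseline used in this comparison is the standard multiplicative algorithm (the one behind MATLAB's deconvlucy); the NumPy sketch below is a generic textbook version, not the exact MATLAB configuration used here.

```python
import numpy as np

def lucy_richardson(y, psf, iters=30):
    """Textbook Lucy-Richardson deconvolution. psf must be centered,
    the same shape as y, and sum to 1; y should be strictly positive."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda img, K: np.real(np.fft.ifft2(np.fft.fft2(img) * K))
    x = np.full_like(y, y.mean(), dtype=float)   # flat positive start
    for _ in range(iters):
        ratio = y / np.maximum(conv(x, H), 1e-12)
        x = x * conv(ratio, np.conj(H))          # correlate with the PSF
    return x
```

The multiplicative update keeps the estimate nonnegative and approximately conserves total flux, but, as the comparison in Fig. 13 indicates, no choice of iteration count recovers the contrast that sectioning-plus-deconvolution provides in turbid media.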
Random patterns of incoherent light can be used in SIM to provide resolution enhancement and sectioning. These experiments demonstrate that, even with limited a priori knowledge of the pattern and the optical system, we can, with the right processing, achieve both optical sectioning and spatial resolution beyond the diffraction limit. Because the algorithm is robust with respect to variations in the pattern, the severe alignment constraints of many structured illumination approaches are eliminated. Whereas traditional three-phase SIM and super-resolution SIM do not work at depth, we show that random patterns are capable of providing both sectioning and super-resolution there. Recognizing that only a few fluorescent probes are approved for in vivo imaging, we have chosen to test our system and algorithms with reflectance imaging, which requires no exogenous materials. In this work, we achieved a transverse resolution enhancement of a factor of 1.4 using the MAP algorithm with Gaussian priors and 1.6 with sparse priors.
Considering the particular application of imaging skin, to obtain results comparable to conventional laboratory examination of hematoxylin-and-eosin-stained biopsy specimens, we need resolution of about 1 μm to match the laboratory microscope and sectioning of about 5 μm to mimic typical microtome sections. Using this approach to SIM, we can reduce the NA and magnification of the objective to improve the FOV while recovering much of the resolution and sectioning ability.
The sparse priors approach is particularly suited to isolated point objects, such as the subcellular organelles, and provides resolution enhancement of a factor of 1.6, even without knowledge of the random patterns or the PSF of the microscope. For less sparse objects, the Gaussian priors algorithm provides resolution enhancement of a factor of 1.4. We have demonstrated a particularly useful technique for producing high-quality sectioned images at resolutions and FOVs greater than conventional microscopy, with minimal knowledge about the system a priori. This advance should prove to be extremely useful in the context of low-cost, low-maintenance microscopy in in vivo applications.
The authors would like to thank Guoan Zheng, Kaikai Guo, and Joseph Hollmann for their input on the paper, as well as Kalina Yang for helping with additional data collection. This work was done in part under the National Science Foundation Grant No. CBET-1510281.
Zachary R. Hoffman is a PhD candidate at Northeastern University, where he works in Professor Charles DiMarzio’s lab. His research focuses on developing and improving methods of structured illumination microscopy for resolving subsurface information in vivo. He is also working as an engineer at Draper Laboratory, developing light-based inertial sensors.
Charles A. DiMarzio holds degrees from the University of Maine, WPI, and Northeastern University. After 14 years at Raytheon Company working on coherent laser radar, he joined Northeastern University, where he is an associate professor of electrical and computer engineering, mechanical and industrial engineering, and bioengineering. He is the author of the textbook Optics for Engineers. His interests include confocal microscopy, structured illumination, and the interaction of light and sound.