In this paper, we introduce a technique that produces confocal-like optical sectioning from a single structured illumination microscopy (SIM) image. This new method requires only a single sample, so acquisition time is limited only by the frame rate of the camera. This allows us to section samples at physiologically relevant time scales, in contrast to confocal microscopy and traditional structured illumination, which require sequential point scanning or at least three frames of data, respectively. The method outlined also produces more robust sectioning within a turbid medium than traditional structured illumination.
Optical sectioning has provided pathologists and clinicians with the ability to image biological samples noninvasively, at or below the surface. In cases such as skin cancer, malignant cells are often located below the stratum corneum, a layer of cornified epithelial cells that occludes living subsurface cells.1 This makes it difficult for a pathologist to determine the health of cells without excising them for further analysis. Sectioning is used to build a depth map of a specimen, measuring axial information to provide a three-dimensional, or tomographic, map of subsurface objects.2,3 Additionally, optical sectioning produces higher contrast images by rejecting scattered light from out-of-focus planes.4
Currently, the most common method of optical sectioning is confocal microscopy. Confocal microscopy works by passing received light through a pinhole, which prevents out-of-focus light from reaching the detector. While confocal microscopy produces sectioning, it also rejects a disproportionately large amount of light, requiring a high-powered source to function properly. Additionally, because the pinhole only allows a single pixel to be imaged at a time, raster scanning is required to build a full two-dimensional image.4,5 Recently, a new method of optical sectioning known as SIM, which does not require the use of a pinhole, has been developed. SIM has the advantage of being a widefield imaging technique, eliminating the need to raster scan. A high-frequency pattern is used to modulate the plane of interest. Optical sectioning is achieved by decoupling the AC light (in-focus) from the DC light (out-of-focus) of a given image. Decoupling is achieved by phase shifting the pattern to at least three different positions and then pairwise subtracting the images from one another.2,3 SIM, however, has not been used for imaging within highly scattering media, as issues with contrast and phase alignment at depth produce very weak sectioning.6–8 There have been attempts to overcome these issues using many more samples, such as tens of random modulation patterns, but this comes at the cost of requiring much more data to produce sectioning.9 These factors have greatly limited the usage of SIM, especially with regard to in vivo imaging.
The methods described above require multiple samples over a period of time to produce sectioning, making in vivo imaging difficult. Also, because SIM requires alignment of three different phases, small differences in optical path length can introduce strong artifacts, particularly at depth, as we will show later in this paper. This paper will show that only a single phase is required to produce sectioning at depth, providing axial information about the specimen and increasing the contrast.
Using a specialized two-dimensional Hilbert transform, Nadeau et al. showed that only two images are required to decouple the signals from one another.10–12 We directly apply the Hilbert demodulation technique to show that it also works for sectioning on a micron scale. Additionally, we extend the work by reducing the number of images required from two to one, making single-shot optical sectioning possible. We show that subsurface objects can be sectioned through a turbid medium, generating better contrast and resolution than the traditional three-phase SIM at depth and providing evidence for application to in vivo skin imaging.
Structured Illumination—Conventional Approach
Structured illumination is achieved by projecting high frequency, spatially patterned light onto a specimen. The typical setup for SIM and for all experimentation throughout this paper is sketched in Fig. 1. The pattern lies on a plane conjugate to both the CCD and a discrete plane of focus at the specimen. As a result, light scattered from the in-focus plane is modulated, separating it from the out-of-focus light. Separating these two components allows for the removal of unwanted light from regions above and below the plane of interest. This is accomplished by measuring a total of three images, typically with phases 0, 120, and 240 deg, and then processing them with the differencing scheme:2
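As an illustrative sketch (not the authors' code), the three-phase differencing scheme can be written in a few lines of Python with NumPy; the root-sum-of-pairwise-differences form below, standard for three-phase SIM, recovers the in-focus modulation amplitude up to a constant scale factor:

```python
import numpy as np

def three_phase_section(i1, i2, i3):
    """Three-phase SIM differencing: pairwise differences cancel the
    unmodulated (out-of-focus) light, and the root sum of squares
    recovers the in-focus modulation amplitude at every pixel."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
```

With phase images taken at 0, 120, and 240 deg, any out-of-focus background common to all three frames cancels in the differences, leaving only the modulated in-focus light.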
When selecting the frequency of the modulation pattern, there is a trade-off between sectioning depth and sectioning resolution. A high-frequency pattern will produce higher sectioning resolution as it quickly blurs away from the conjugate plane. However, it becomes difficult to resolve at depth, limiting the absolute sectioning depth. Using a lower frequency pattern will result in poorer sectioning resolution but good sectioning depth. The absolute values of the resolutions will depend on the numerical aperture (NA) of the system. However, for this paper, we use a single modulation pattern of for all depths and samples to simplify comparison across all scenarios. This particular value was selected to achieve a reasonable sectioning resolution () and sectioning depth (). At these values, we are able to section at biologically relevant scales and depths, specifically when considering skin cells and depths located near the junction of the epidermis and dermis.1 Also, because we have used only a single modulation frequency, data acquisition time and processing will remain constant for all targets and depths.
Spatial Frequency Domain Imaging—Using Hilbert Transform
To achieve single-image sectioning, we extend the spatial frequency domain imaging (SFDI) work done by Nadeau et al.12 Similar to SIM, SFDI functions by modulating an image with a known frequency and phase. SFDI works to separate the absorption coefficient () and reduced scattering coefficient () of a material, which have differing sensitivities to the spatial frequency of the projected light. As a result, and can be decoupled using the DC and AC portions of the signal, respectively.13–15 In SIM, we use the same principle to decouple in-focus and out-of-focus light.
Conventional SFDI requires three phases to be measured along with one image without modulation, for a total of four images. Recent advancements in signal processing and SFDI have produced a method of demodulating an image of unknown phase, frequency, and angle using the 2-D Hilbert transform. Larkin et al.10,11 developed a spiral function that demodulates a 2-D fringe pattern of unknown frequency and phase. In the two-dimensional case, the Hilbert transform is applied using the spiral function, where and are positional indices within the frequency domain,12 to perform fast, accurate SFDI processing. With this method, only two images are required to decouple and . SIM and SFDI differ in that the higher frequency modulation of SIM quickly goes out of focus away from the focal plane. As such, we treat the in-focus and out-of-focus regions as separate, distinct regions.
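A minimal Python/NumPy sketch of this spiral-function demodulation (our own illustration, not the authors' implementation): the spiral phase factor is applied in the frequency domain, and combining the original fringes with the magnitude of the resulting vortex signal yields the local fringe envelope:

```python
import numpy as np

def spiral_demodulate(fringes):
    """Estimate the envelope of a zero-mean 2-D fringe pattern of
    unknown frequency and orientation via the spiral phase function
    S(u, v) = (u + iv) / sqrt(u^2 + v^2) of Larkin et al."""
    ny, nx = fringes.shape
    u = np.fft.fftfreq(nx)[None, :]   # horizontal frequency index
    v = np.fft.fftfreq(ny)[:, None]   # vertical frequency index
    r = np.hypot(u, v)
    spiral = np.where(r > 0, (u + 1j * v) / np.where(r > 0, r, 1.0), 0.0)
    vortex = np.fft.ifft2(spiral * np.fft.fft2(fringes))
    # cos^2 + sin^2 combination recovers the amplitude envelope
    return np.sqrt(fringes ** 2 + np.abs(vortex) ** 2)
```

Note that the DC (unmodulated) content must be removed before demodulation, since the spiral function is undefined at the origin of the frequency domain.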
Using a one-dimensional simulation, we describe how the in-focus light (AC) and out-of-focus light (DC) components of a signal can be isolated from one another. For the simulation, the “Hilbert” function from Mathworks MATLAB v2015a is used; it serves as a suitable one-dimensional stand-in for the spiral function applied to the two-dimensional cases going forward.
First, synthetic data are generated to demonstrate the algorithm. Here, the synthetic data represent a spatially varying signal in one direction, a simplified version of the two-dimensional images that will be considered later on. This is the signal that will lie in the conjugate plane, just beyond the objective in Fig. 1. A random signal is constructed with a mean of 0.5, which represents the signal we hope to extract from a given focal plane (shown in green in Fig. 2). A modulation pattern is constructed as , which is the patterned light that will be projected by the digital micromirror device (DMD). This modulation pattern represents an irradiance pattern projected onto the sample; it has a mean of 0.5, ranges from 0.25 to 0.75 (shown by the dashed blue line), and is multiplied by the in-focus signal (). Independently, a second low spatial frequency, random pattern is created to represent the out-of-focus light (, shown in red). This signal is generated with low spatial frequency, as it represents the background scattered light from regions outside the conjugate plane.
To simulate some error, we couple about 2% of the modulation into the out-of-focus light to visualize how small errors, such as imperfect focusing, will impact the final reconstruction. At one extreme, 100% of the modulation pattern would exist in the out-of-focus signal, making it impossible to decouple the two signals and thus producing no sectioning at all, whereas an error of 0% would produce a perfect reconstruction, with infinitely small sectioning resolution. The 2% value represents the small amount of error expected in real-world data, ensuring that our simulation accurately models these imperfections while still demonstrating the accuracy of the technique. The in-focus and out-of-focus signals are then added together to construct the simulated signal, shown in black. The construction of the signal is thus
The modulated in-focus portion of the signal is recovered by subtracting the unmodulated combination of the in-focus and out-of-focus signals:
The unmodulated portion, , can be independently measured without projecting a modulation pattern [simulated here, ] or estimated by filtering out the modulation pattern, as we will do later in this paper. In Eq. (4), we are left with the modulated in-focus portion of light. The in-focus section can then be reconstructed using the “Hilbert” function to demodulate the signal as follows:
We have now successfully decoupled the in-focus light from the background. To show the accuracy of the technique, Fig. 2(d) shows an estimation of the in-focus light compared to the original signal. We note that there is some loss in accuracy due to modulation from some of the out-of-focus regions. This type of error will manifest itself as a loss of contrast, particularly in regions of high-spatial frequency. In the above simulation, was known and subtracted from the modulated signal. This can be achieved by measuring the signal twice: one measurement with modulation and one without. However, similar results can also be achieved with only one measurement, as will be shown in the next section. Rather than trying to measure the unmodulated signal, it is estimated by filtering out the modulated portion of the image. In both scenarios, this unmodulated portion is then subtracted from the modulated measurement.
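The full 1-D pipeline above can be reproduced with a short Python script (a sketch under assumed signal shapes; SciPy's `hilbert` stands in for the MATLAB function, and a smooth in-focus signal replaces the random one so the recovery can be checked directly):

```python
import numpy as np
from scipy.signal import hilbert

n = 2048
x = np.arange(n)

# Slowly varying "in-focus" signal with a mean of 0.5 (stand-in for
# the random conjugate-plane signal used in the paper's simulation).
infocus = 0.5 + 0.2 * np.sin(2 * np.pi * 3 * x / n)

# Projected irradiance pattern: mean 0.5, swinging from 0.25 to 0.75.
carrier = 2 * np.pi * 120 * x / n
modulation = 0.5 + 0.25 * np.sin(carrier)

# Low spatial frequency out-of-focus background (unmodulated here; the
# paper also couples ~2% of the pattern into it to model imperfection).
outfocus = 0.4 + 0.1 * np.cos(2 * np.pi * 2 * x / n)

measured = infocus * modulation + outfocus    # combined simulated signal
unmod = infocus * 0.5 + outfocus              # unmodulated measurement
ac = measured - unmod                         # modulated in-focus light
recovered = np.abs(hilbert(ac)) / 0.25        # envelope / modulation depth
```

The envelope of the analytic signal, divided by the modulation depth, recovers the in-focus signal without any knowledge of the pattern's phase.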
Structured Illumination—Single Image Approach
By projecting a high-frequency modulation pattern under planar illumination, at a plane conjugate to the CCD, all components of this plane are spatially shifted outside the baseband. The scattered out-of-focus light remains centered in the baseband of the spatial frequency domain because the pattern blurs outside of the focal plane. In Sec. 3.2, a method of decoupling the in-focus and out-of-focus signals, which can readily be adapted to this type of 2-D application, is described. Our algorithm, outlined in Fig. 3, describes the process required to separate the modulated and unmodulated images, producing a high-contrast sectioned image. In the interest of developing an optical sectioning system with the fewest samples required, we have extended this work to function using only a single sample. Using a single sample will prove to be extremely useful in real-time biological imaging. Specifically, motion artifacts constrain any method that requires multiple samples of data for reconstruction. As long as there is minimal movement within the integration time of the frame, a single sample will suffice to produce high-quality sectioned images in vivo. Also, in comparison to the three-phase SIM, there is no need to finely align multiple phases of data, making our method particularly robust at depth.
In the 1-D simulation, we directly measure the unmodulated signal. With this new method, only a single image, which contains both the in-focus and out-of-focus components and the modulation pattern, is measured. Using a combination of a low-pass filter and a notch filter, which are defined based on the frequency of the modulation pattern, the low frequency information from the out-of-focus light is estimated, as shown in Fig. 4. It should be noted that, for these experiments, a generic Gaussian filter is applied to eliminate the modulation pattern. However, the selection of this filter will differ depending on the frequency and angle of the modulation pattern applied, i.e., the region of data that will be filtered within the frequency domain. For our experimental data, the frequency () and angle (30 deg) are known ahead of time and can be tuned to filter out the first orders of the modulation pattern. From multiple experimental datasets, we have empirically selected a Gaussian filter with a full width at half maximum of 10 cycles, which provides the best sectioning results. This estimated unmodulated image can then be subtracted from the measured image, leaving only the modulated in-focus signal. As described in Sec. 3.2, we apply the Hilbert transform using the spiral function technique to remove the modulation pattern from the in-focus signal. The result is a sectioned image that has been demodulated. Depending on how well is estimated, the success of the demodulation will vary; for instance, we may find some residual banding if the modulation pattern cannot be completely filtered out. However, it is shown that our method provides good sectioning over a wide variety of situations.
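The single-image pipeline can be sketched end to end in Python/NumPy (our own illustration; an ideal low-pass filter stands in for the tuned Gaussian/notch combination, and the modulation depth of 0.25 is an assumed value):

```python
import numpy as np

def lowpass_dc_estimate(img, cutoff):
    """Estimate the unmodulated (DC) image by keeping only spatial
    frequencies below `cutoff` cycles per image; an ideal low-pass
    stand-in for the paper's tuned Gaussian/notch filter."""
    ny, nx = img.shape
    u = np.fft.fftfreq(nx, d=1.0 / nx)[None, :]
    v = np.fft.fftfreq(ny, d=1.0 / ny)[:, None]
    keep = np.hypot(u, v) <= cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * keep))

def spiral_envelope(ac):
    """Spiral-phase (2-D Hilbert) demodulation of the zero-mean,
    modulated in-focus signal."""
    ny, nx = ac.shape
    u = np.fft.fftfreq(nx)[None, :]
    v = np.fft.fftfreq(ny)[:, None]
    r = np.hypot(u, v)
    spiral = np.where(r > 0, (u + 1j * v) / np.where(r > 0, r, 1.0), 0.0)
    vortex = np.fft.ifft2(spiral * np.fft.fft2(ac))
    return np.sqrt(ac ** 2 + np.abs(vortex) ** 2)

def single_image_section(img, cutoff, mod_depth=0.25):
    """Single-shot sectioning: estimate DC, subtract, demodulate."""
    ac = img - lowpass_dc_estimate(img, cutoff)
    return spiral_envelope(ac) / mod_depth
```

Because the demodulation needs neither the phase nor the exact orientation of the pattern, only the filter has to be matched to the modulation frequency.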
A 635-nm LED with a line width of 17 nm was used as the light source. The patterns were projected onto the specimen using a TI Lightcrafter DMD. The images were then captured using an Allied Vision Guppy PRO F-125 CCD camera. The objective has a nominal magnification of in air with an NA of 0.25. When combined with the tube lens, the system has an overall magnification of . The configuration of the microscope is outlined in Fig. 1 and is used for all data captured throughout this paper.
To show that this technique is comparable to ordinary SIM, we must be able to verify that we can accurately extract topographic and tomographic information. In cases of biological samples, such as skin imaging, we would like to isolate planes of focus located within the sample, which would otherwise be occluded by scattered light from surrounding layers. To test the topographic capabilities of the system, a paper card, which contains multiple layers of fibers positioned at various unknown depths, is imaged. Each single image result is compared against the typical three-phase sectioning method of SIM to ensure the accuracy of the results. A widefield image is also constructed by summing each of the three phase images together. This simulates an image of the specimen as it would be seen through a conventional microscope without any sectioning. Next, a tomographic phantom is constructed based on Glazowski and Zavislan.16 A 1951 Air Force resolution chart is placed below a piece of ground glass, which serves as a source of scattering. Between the two planes, ultrasonic gel with an index of refraction of 1.33 is used to simulate water contained within the skin tissue.
Single Image Results
We start by sectioning a business card, which contains multiple layers. To compare with the three-phase SIM, three total images are taken with phases of 0, 120, and 240 deg. Each image encompasses a region, with a square-wave modulation pattern of frequency projected at an arbitrary angle of 30 deg. The images are processed using Eq. (1) to produce the demodulated AC signal. Then, only a single phase image is processed using the Hilbert technique developed above.
Figure 5 shows a comparison of the widefield image (a) versus the three-image sectioning (b) and single-image sectioning (c). It is clear that both methods remove a great deal of light from the out-of-focus regions, isolating a single plane of interest. Additionally, the contrast is greatly improved in the remaining regions. There are some small artifacts in the single-image sectioning. Specifically, some banding remains from an imperfect estimation of the DC image. Additionally, there are some edge effects from the application of the Hilbert function. Taking the difference between the three-phase SIM and single-phase SIM, there are only small changes between images (, where the images are scaled from 0 to 1). However, on the whole, we produce a high-quality sectioned image, providing good isolation of a single plane, quite comparable to the three-phase reconstruction. Furthermore, we have been able to produce this image with one-third the number of samples required by traditional SIM, decreasing the measurement time from 150 to 50 ms. In this experiment, Fig. 6 quantifies the difference between the actual and estimated DC images, with the vast majority of pixels within a few percent ().
Image Results from Multiple Depths
By building a z-stack from 25 individually processed images, a full map has been built of each specimen across a depth of at increments, as shown in Fig. 7. The same modulation frequency is used across all depths, where three phases are taken at each depth. These data can be used to verify the axial resolution and isolation of planes through the construction of a high depth-of-field (DoF) image. To show the extended DoF, we use a maximum intensity projection, which combines the pixels with the highest amplitude from each depth into a single image. This projection removes all scattered light and shows the entire specimen in focus along the z-axis, helping to visualize the two-dimensional structure without any occlusion from the layers above or below the focal plane.
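The extended-DoF projection and a height map can each be sketched in a line of NumPy per pixel stack (the z-step `dz` below is a hypothetical placeholder for the actual increment):

```python
import numpy as np

def max_intensity_projection(zstack):
    """Extended depth-of-field image: each pixel keeps its brightest
    value across the depth axis of a (nz, ny, nx) stack."""
    return np.max(zstack, axis=0)

def height_map(zstack, dz=1.0):
    """Height map: depth index of the brightest value per pixel,
    scaled by the z-step `dz` (hypothetical units)."""
    return np.argmax(zstack, axis=0) * dz
```

Because each sectioned slice already rejects out-of-focus light, the brightest value per pixel marks the depth at which that structure is in focus.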
For these data, we use the three-phase SIM as our ground truth and qualitatively compare it to our single-image method. We should expect to see individual fibers existing on discrete planes, as well as continuity along each of the fibers. Reviewing Fig. 7, we see that the single-image method sections the specimen well. It is clear that the fibers are well isolated from one another, on par with typical SIM methods. Figure 8 shows the structure of the fibers. Here, even the smaller fibers are retained, providing good resolution and reconstruction of the card. When comparing the two height maps to one another, we find that all layers sectioned using the single-image technique are within . Similar to the single image above, the large DoF image matches within a few percent (, where the image amplitude is scaled from 0 to 1).
Sectioning Versus Depth
Additionally, we want to ensure that this method works at depth, even when the plane of interest is occluded by scattering layers above and below it. To model skin imaging, we have built a phantom, as developed and demonstrated by Glazowski and Zavislan,16 for testing sectioning in highly scattering media. This phantom consists of a 1951 Air Force resolution chart at a depth of below a piece of highly scattering ground glass. The space between the target and the glass has been impregnated with a gel () to simulate water. The objective lens has been increased to ; otherwise, all other components of the optical setup are the same, as described in Sec. 3.5.
As demonstrated in Fig. 9, the resolution chart was imaged using both the three-phase and single-phase SIM. In the widefield image (d), we see the structure present from the resolution chart below, but the contrast is low due to scattered light from the ground glass above. Reviewing the three-phase SIM (e), we notice that the sectioning is poor. This is likely due to small changes in the phases of the modulation patterns as they pass through multiple surfaces before reaching the focal plane.7,8 Because three images are used, sectioning is highly dependent on how well each of the three phases overlaps at the focal plane. Any changes in phase are liable to produce artifacts at the intended target. This can be slightly mitigated using additional phases or multiple random patterns, but at the cost of additional samples.9,17 For this experiment, the phase changes are likely due to differences in the optical path length from the rough surface of the ground glass above the target. However, with in vivo imaging, phase misalignment may be exacerbated by a host of factors, such as vibration, diffusion, and small changes in index of refraction, all of which can be overcome using our single-phase sectioning system.
As expected, the single-phase sectioning provides much better resolution, as it is much less sensitive to phase. Because the spiral function used for the Hilbert transform does not require a priori knowledge, small deviations in the frequency or angle of the pattern do not negatively impact the sectioning. This results in a much more robust method of sectioning at depth. Figures 9(a)–9(c) show good contrast at the target, isolating a single plane from the scattered light. We show that the processing works regardless of the phase, as long as the modulation pattern is present [(a) 0 deg; (b) 120 deg; (c) 240 deg]. By taking the mean of all three phases after they have been individually processed, we see a further improvement in noise reduction and contrast [Fig. 9(f)]. Figure 10 shows the contrast improvement, using a cross-section of the image along group 6, element 4, with a line width of [outlined in red in Fig. 9(d)]. There is a clear difference in the contrast of the signal, as the scattered light from above the target has been removed. Note that there is a small trend along the resolution chart; this is due to slight nonuniformities in the lighting of our sample and is not related to the target itself. By calculating the relative contrast (max–min) for each cycle, there is a improvement in the vertical direction and a improvement in the horizontal direction. The three-phase sectioning has been left out, as the bars are no longer resolvable at this depth.
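The relative-contrast metric used here can be sketched as follows (Python; the cycle period in samples is an assumed parameter, not a value from the paper):

```python
import numpy as np

def relative_contrast(profile, period):
    """Relative contrast (max - min) per modulation cycle along a 1-D
    cross-section of a bar target; `period` is the bar-pair width in
    samples."""
    n_cycles = len(profile) // period
    cycles = profile[: n_cycles * period].reshape(n_cycles, period)
    return cycles.max(axis=1) - cycles.min(axis=1)
```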
A measurement of the resolution chart was also taken at the surface () to verify that the system is diffraction limited (). We find that the minimum resolution at the surface is , which agrees with our expectations. As we attempt to image the chart at depth, there will be considerable resolution loss due to the scattering of light above the target. In Fig. 9(d), it can be seen that, without any sectioning, the resolution varies quite a bit but, at best, is about (group 6, element 4, highlighted in red). Reviewing Fig. 9(f), it can be seen that the next element down (group 6, element 3) is now visible, improving the lateral resolution to . Therefore, at a depth of , we find there to be a loss of resolution. However, after sectioning, this can be modestly improved.
Three-Phase Structured Illumination Microscopy at Depth
To better understand why the three-phase sectioning suffers within a turbid medium, we have imaged the square pattern on a mirror at the surface () and within our skin phantom (; Fig. 11). Taking cross-sections, we can see how the relative phase and frequency begin to diverge. The amplitude from the surface measurements (left) extends from 0.1 to , the total dynamic range of the camera, whereas at depth, the signal contrast is cut in half. As a result, the sectioning contrast of traditional SIM is greatly degraded, achieving a maximum amplitude of 0.5. Additionally, the phase of each signal is shifted slightly, creating a low frequency modulation that does not actually exist on the mirror. As we attempt to image the pattern beyond , these errors are greatly exacerbated, rendering the final three-phase sectioned image nearly useless. The alignment of these patterns is critical for the success of the three-phase SIM. However, our single-phase technique is robust to small changes in frequency and phase and, therefore, produces high-quality sectioning in these scenarios.
Sectioning Biological Samples
To further validate the method, sectioning was applied to a number of biological samples, including onion cells and, in vivo, the skin of a human arm. Again, a square modulation pattern of is projected horizontally to modulate the image. Two layers of onion cells were placed directly on top of one another. The system was focused on the top layer of cells, as shown in Fig. 12. The widefield image [Fig. 12(a)] shows multiple cells aligned diagonally across the image. Scattered light from the cells below the surface can be seen in the image. The single-image sectioning [Fig. 12(c)] succeeds in removing the scattered light from below. However, there are strong banding artifacts due to an imperfect demodulation. This imperfect demodulation comes as a result of data loss where the patterned light goes to zero. This banding is exacerbated by choosing a low frequency modulation pattern. A higher frequency pattern would produce more uniform sectioning; however, it would decrease our sectioning depth. In clinical application, adjusting the frequency of the modulation pattern relative to the imaging depth might be worth considering. Adding in the other two phases ameliorates this issue by filling gaps of data loss. By processing the other two phases and adding them together [Fig. 12(d)], we see that the results are nearly identical to the three-phase sectioning [Fig. 12(b)]. In comparison to the traditional three-phase SIM, the results show that similar sectioning and resolution can be achieved through the methods developed above.
One particularly strong advantage of our method is the robustness of sectioning within a turbid medium. Figure 13 shows the layer of onion cells about below the cells shown in Fig. 12. The scattered light from the cells above can be seen clearly in the widefield image (A). Again, the single-image sectioning (C) produces high-quality sectioning with only minimal artifacts. When adding all three phases together (D), the advantages over the traditional SIM method (B) can be seen. First, many artifacts in traditional SIM can be seen around the edges, reducing total contrast within the image; additionally, there is some residual modulation pattern within the image. After the sectioning is applied, the two layers are completely separated from one another. Given that onion cells have a thickness of to , this experiment demonstrates that our axial resolution is .
Finally, we apply our sectioning method directly to in vivo sectioning of human skin. The results are shown in Fig. 14. The widefield image shows very little detail, as there is a great deal of scattering. Given the frame rate (21 Hz) of the camera, there is motion from frame to frame. The motion results from many factors, including muscle twitches beneath the skin, as well as larger movements due to breathing and heartbeats. Due to all of this motion, the traditional three-phase imaging (B) is completely distorted. The differencing scheme relies on only the modulation changing phase but does not account for motion within the specimen itself. With the specimen changing location over the three frames, the result does not contain useful information. Reviewing Fig. 14(b), we see only highly reflective portions of the image in the processed result. However, there is no indication of structure, such as nuclei, which would present themselves as dark circles within the tissue. This is an important detail, as even minor changes in the position of the pattern can result in major degradation of the sectioned image.
Applying the single-image processing (C) provides good sectioning and contrast of the cells below the surface. Here, we can see the granular layer of the skin, where the dark spots (marked by the superimposed arrows) indicate the nuclei of the cells. The depth, dimension, and relative sizes appear to agree with those shown by Rajadhyaksha et al.4 Furthermore, the three phases can be added together (D) to increase the contrast of the image, even when motion is present. As compared to the differencing method (B), there are only minor motion artifacts; otherwise, the method is much more robust to small changes in the pattern position.
To show the motion from frame to frame, Fig. 15 plots an RGB map of the three images used in Fig. 14(d). At the bottom left edges of the structures within the image, the color is predominantly red. The red represents the first frame in the stack and shows that only some of the cells are in this location for a moment. By the third frame (shown in blue), the cells have shifted toward the upper right corner. It can be seen that, within those three frames, the subject moved a few microns diagonally, resulting in an imperfect reconstruction for the multiframe sectioning techniques.
Traditional structured illumination has provided an alternative strategy for producing optical sectioning when compared to confocal microscopy. However, limitations in overall speed still restrict its application in biological imaging. Here, it is shown that, using the Hilbert transform, optical sectioning can be produced using only a single image. The sectioning works well for extracting 3-D information about highly structured material, as well as subsurface objects within highly turbid media. Through a number of experiments, we have shown that the methods developed are able to provide both lateral and axial information about a specimen. Additionally, we can isolate the plane of interest with quality similar to, or in turbid media better than, that of the typical three-phase SIM, as shown in the experiments above. These results improve further when there is a component of motion, which would otherwise misalign the phases of the modulation pattern in the traditional three-phase SIM. More importantly, the processing algorithm relies on only one frame of data, limiting the overall speed only by the integration time of the camera. This opens up the possibility of fast imaging that relies on stroboscopic light sources, which were previously incompatible with SIM. The process now allows for real-time confocal-like imaging of biological samples. Additionally, we have shown that, within a turbid medium, the sectioning ability far exceeds that of traditional SIM, greatly enhancing its resolution at depth and making in vivo applications possible. The algorithm described in this paper takes SIM one step closer to producing high-contrast images at depth, approaching the quality of confocal microscopy with the advantage of sectioning in real time.
Using structured illumination for sectioning within turbid media has always been difficult due to loss of contrast and the need for exact alignment of phases. As a result of this, and the need for multiple images to produce high-quality sectioning, SIM has never been a serious tool in skin imaging. However, the methods we have developed in this paper overcome many of these limitations, taking SIM much closer to producing confocal-like images. Using only a single frame, we have shown that high-quality sectioning can be produced even tens of microns deep within highly scattering media, such as skin. This method makes SIM a powerful new tool for in vivo and real-time imaging of biological samples.
The authors would like to thank Kyle Nadeau, Will Goth, Joseph Hollmann, Guoan Zheng, and Kaikai Guo for their input on the paper. This work was done in part under an NSF grant from the Division of Chemical, Bioengineering, Environmental, and Transport Systems (CBET); Award No. 1510281.
Zachary R. Hoffman is a PhD candidate at Northeastern University, where he works in Professor Charles DiMarzio’s lab. His research focuses on developing and improving methods of structured illumination microscopy for resolving subsurface information in vivo. He is also working as an engineer at Draper Laboratory, developing light-based inertial sensors.
Charles A. DiMarzio holds degrees from the University of Maine, WPI, and Northeastern University. After 14 years at Raytheon Company working in coherent laser radar, he joined Northeastern University. He is an associate professor of electrical and computer engineering, mechanical and industrial engineering, and bioengineering at Northeastern University. He is the author of a textbook, Optics for Engineers. His interests include confocal microscopy, structured illumination, and the interaction of light and sound.