Open Access
Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display
21 June 2018
Abstract
Here, we present dual-dimensional microscopy, which simultaneously captures both a two-dimensional (2-D) image and a light-field image of an in-vivo sample, synthesizes an upsampled light-field image from them, and visualizes the result with a computational light-field display system, all in real time. Compared with conventional light-field microscopy, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit, compensating for the image degradation there. The whole process from capture to display runs in real time with a parallel computation algorithm, which enables observation of the sample’s three-dimensional (3-D) movement and direct interaction with the in-vivo sample. We demonstrate a real-time 3-D interactive experiment with Caenorhabditis elegans.

1. Introduction

The observation of the three-dimensional (3-D) behavior of animals and the 3-D movement of living cells has significant implications for neurobiology and medical science,1,2 both of which require a microscope with high spatiotemporal resolution. Scanning-type 3-D microscopy, represented by confocal microscopy, provides high-resolution 3-D images up to the Abbe limit but has a fundamental frame-rate limitation because of its scanning time.3 A number of approaches have been proposed to capture 3-D information by reducing the scanning time4 or by mapping the axial information onto a planar capturing device.5 However, previous approaches focused on reconstituting the obtained 3-D information with post-processing rather than visualizing it with a 3-D display system in real time, which would enable observation of the sample’s 3-D behavior and direct interaction.

In 3-D microscopy, how to display the 3-D information is as vitally important as how to obtain it. The best way to deliver 3-D information to the observer is to present whole 3-D images with a 3-D display system. Various 3-D display systems, including stereoscopic display, multiview display, integral imaging, and holographic display, have been designed to reconstruct microscopic samples in 3-D.6–8 Light-field microscopy (LFM)1,6,9–13 is an adequate method for watching the 3-D behavior of a sample; it obtains 3-D information in a single shot with a microlens array (MLA) at the image plane. The pixels behind each microlens record both the position and the direction of light rays, and the captured four-dimensional (4-D) light field can be reconstructed to a 3-D scene in real time not only computationally1,9–12 but also optically6,14 with a 3-D display system. Based on the structural symmetry between LFM and integral imaging, in-vivo micro-objects have been reconstructed in 3-D space in real time.15,16

However, despite its special and powerful characteristics, LFM has not been used practically in biological and medical imaging because of its reduced lateral resolution. The MLA in front of a charge-coupled device (CCD) sacrifices the lateral resolution to enhance the depth of field (DOF), and the total amount of obtained information is limited by the number of pixels and the diffraction limit.9 Recently, a light-field deconvolution microscope (LFDM) was introduced to deconvolve a high-resolution 3-D scene using a point spread function of LFM computed with wave optics.1,10 However, the post-processing time was too long for real-time observation, and the improved resolution (up to 600 cycles/mm with a 20×/0.5 NA objective and a 125-μm f/20 MLA) was still worse than that of a two-dimensional (2-D) optical microscope (OM, 2000 cycles/mm with a 20×/0.5 NA objective). Furthermore, the maximum resolution at the native object plane was limited by the pitch of the MLA, and the method suffered from reconstruction artifacts around the native object plane.

Here, we present dual-dimensional microscopy (DDM), which captures both 2-D and light-field images of an in-vivo sample simultaneously and optically visualizes them with a 3-D display system in real time. Building on our preliminary approach,17 we propose a real-time upsampling algorithm in which an upsampled light field is synthesized from the captured light-field and 2-D images based on the Fourier slice photography theorem.18,19 The whole process from capture to display runs in real time with a parallel computation algorithm. The upsampled light-field images are optically reconstructed with a computational light-field display (LFD). A wave optics simulation verifies that the DDM provides higher resolution, up to the diffraction limit, and higher correspondence to the reference 3-D data than LFM. Compared with conventional LFM, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit, compensating for the image degradation there. A DDM setup is implemented by appending a dual-view observation attachment to an LFM setup. We present a real-time 3-D interactive experiment with Caenorhabditis elegans (C. elegans) in which we observe the 3-D behavior (spatiotemporal behavior with the enhanced depth of field) of C. elegans via a computational LFD and track it with the stage. We also suggest a bandwidth-reshaping method that applies an additional aperture inside the DDM to deconvolve the 3-D volume without reconstruction artifacts. The reconstruction-artifact region can be eliminated by increasing the DOF of the OM-path.

2. Dual-Dimensional Microscopy and Light-Field Upsampling

Figure 1(a) shows the schematic diagram of a DDM setup. The light rays originating from the specimen are collimated by the infinity-corrected objective lens. Then, the collimated light beam is divided into two at the beam splitter. One path is identical to an LFM (LFM-path), and the other path is an optical microscope (OM-path). The transmitted light beam is converged by a tube lens, and a light-field image is obtained with an MLA located at the image plane and a CCD (CCD1) focusing on the back focal plane of the MLA. The reflected light beam is reflected again by a mirror and converged after a tube lens, and a 2-D image is captured with the other CCD (CCD2) located at the image plane. Note that the MLA and the CCD2 are both located at the native image plane. CCD1 and CCD2 are synchronized with an external signal. With this optical configuration, the DDM setup can capture 2-D images and light-field images simultaneously.

Fig. 1

The schematic diagram of the DDM and the principle of the light-field upsampling: (a) optical configuration of the DDM setup. A beam splitter divides the LFM-path and OM-path. An MLA located at the image plane delivers light-field information to CCD1, which focuses on the back focal plane of the MLA through the relay lenses. CCD2 is located at the image plane of the OM-path. The two CCDs are synchronized by external signals. The two captured images can be fused and upsampled in the spectral domain. For example, if the sample’s light field is (b) and its light-field spectrum is (c), CCD1 captures (d) a low-resolution light-field image and CCD2 captures (f) a high-resolution 2-D image. (e) The 4-D Fourier spectrum of the captured light field (only a 2-D light field is shown) and (g) the 2-D Fourier slice obtained from the captured 2-D image can be synthesized under the Fourier slice photography theorem. (h) The upsampled light field is obtained by the 4-D inverse Fourier transform of (i) the synthesized light-field spectrum.


The obtained 2-D and light-field images can be synthesized into an upsampled light-field image with an algorithm based on the Fourier slice photography theorem.18–20 Assume that the 4-D light field of a sample at the object plane (z=0) is l(x,y,θ,φ). Then, the 2-D image i(x,y) captured from CCD2 can be expressed as follows:

Eq. (1)

$$ i(x,y) = k \int_{-\theta_m}^{\theta_m} \int_{-\phi_m}^{\phi_m} \bar{l}(x,y,\theta,\phi) \, d\phi \, d\theta, $$
where θm and φm denote the maximum acceptance angles of the objective lens, set by its numerical aperture (NA), in the x- and y-directions, respectively, l̄ is the normalized light field at the image plane considering the angle between the radiance and the image plane, and k is a capturing constant. In the DDM setup, CCD1 in the LFM-path directly samples l̄, and the resolution is limited by the lenslet pitch and the number of pixels. From the light-field image captured with CCD1, the normalized light-field spectrum L̄(fx,fy,fθ,fφ) can be calculated as follows:18

Eq. (2)

$$ \bar{L}(f_x,f_y,f_\theta,f_\phi) = \int_{-x_m}^{x_m}\!\int_{-y_m}^{y_m}\!\int_{-\theta_m}^{\theta_m}\!\int_{-\phi_m}^{\phi_m} \bar{l}(x,y,\theta,\phi)\, e^{-2\pi i (f_x x + f_y y + f_\theta \theta + f_\phi \phi)}\, d\phi\, d\theta\, dy\, dx, $$
where xm and ym are the maximum fields of view in the x- and y-directions, respectively. From Eq. (2), the Fourier slice L_S of the light field can be generated with the slicing operator.18,20 In particular, the Fourier slice L_S,cen located on the fx–fy plane (fθ = 0, fφ = 0) is interpreted as follows:

Eq. (3)

$$ L_{S,\mathrm{cen}} = \bar{L}(f_x,f_y,0,0) = \int_{-x_m}^{x_m}\!\int_{-y_m}^{y_m} \left[ \int_{-\theta_m}^{\theta_m}\!\int_{-\phi_m}^{\phi_m} \bar{l}(x,y,\theta,\phi)\, d\phi\, d\theta \right] e^{-2\pi i (f_x x + f_y y)}\, dy\, dx. $$

The term inside the brackets can be substituted with the captured 2-D image i(x,y), as shown in Eq. (1), as follows:

Eq. (4)

$$ L_{S,\mathrm{cen}} = \int_{-x_m}^{x_m}\!\int_{-y_m}^{y_m} \frac{1}{k}\, i(x,y)\, e^{-2\pi i (f_x x + f_y y)}\, dy\, dx, $$
and Eq. (3) can be expressed as the 2-D Fourier transform of i(x,y) as follows:

Eq. (5)

$$ L_{S,\mathrm{cen}} = \frac{1}{k}\, I(f_x, f_y), $$
where I(fx,fy) indicates the Fourier spectrum of i(x,y). Note that i(x,y) is simply zero beyond the field of view, so Eq. (4) is identical to a full Fourier transform. As the MLA and CCD2 are both located at the native image plane, the DDM always satisfies Eq. (5). Therefore, the captured light-field image and 2-D image can be fused in the light-field Fourier spectral domain; we name this process light-field upsampling because the total bandwidth of the light-field spectrum is enhanced by the additional 2-D image.
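The identity of Eq. (5) can be checked numerically in a few lines. The NumPy sketch below is purely illustrative (a random array stands in for l̄, with k = 1 and unit sampling steps) and is not part of the published implementation:

```python
import numpy as np

# Random nonnegative light field standing in for the normalized light
# field l_bar(x, y, theta, phi); the shape is illustrative.
lf = np.random.rand(16, 16, 5, 5)

# Central Fourier slice of the 4-D spectrum: f_theta = f_phi = 0 sits at
# index 0 along the angular axes because no fftshift is applied.
LF = np.fft.fftn(lf)
central_slice = LF[:, :, 0, 0]

# Eq. (1) with k = 1 and unit integration steps: the 2-D image is the
# light field integrated over the angular axes.
img = lf.sum(axis=(2, 3))
I = np.fft.fft2(img)

# Eq. (5): the central slice equals the 2-D spectrum of the image.
assert np.allclose(central_slice, I)
```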

Figures 1(b)–1(i) show an example of light-field upsampling in DDM. Assume a sample whose normalized light field at the image plane and its light-field spectrum are shown in Figs. 1(b) and 1(c), respectively. Using DDM, a light-field image with a low resolution [Fig. 1(d)] and a 2-D image with a high resolution [Fig. 1(f)] are obtained. The resolution of the obtained light-field image l̃(x,y,θ,φ) is Nlx×Nly×Npx×Npy, where Nlx and Nly are the numbers of microlenses in the x- and y-directions, respectively, and Npx and Npy are the numbers of pixels behind each lens in the x- and y-directions, respectively. The resolution of the captured 2-D image ĩ(x,y) is W×H, where W and H are the numbers of pixels in CCD2 in the x- and y-directions, respectively.

To fuse these two images in the light-field spectral domain with Eq. (5), the lateral resolution of l̃ should be matched to that of ĩ (W×H) during the light-field upsampling. Various super-resolution algorithms can be applied with consideration of the image characteristics of biological samples.21,22 In this paper, a simple zero-padding method in the Fourier domain is applied, without any texture assumptions, for real-time calculation and robustness.23 Figure 1(e) shows the zero-padded light-field spectrum L̃(fx,fy,fθ,fφ) generated from l̃. Meanwhile, the Fourier slice of the captured 2-D image, Ĩ(fx,fy), is derived with the 2-D Fourier transform, as shown in Fig. 1(g). By substituting the high-resolution Fourier slice Ĩ(fx,fy) for the low-resolution Fourier slice L̃(fx,fy,0,0) of the light-field spectrum, as shown in Fig. 1(i), the upsampled light-field spectrum is achieved. Here, a constant is multiplied to match the intensities of the spectra. Finally, the upsampled light field is derived with the 4-D inverse Fourier transform of the spectrum. As shown in Fig. 1, the upsampled light field from DDM [Fig. 1(h)] has a higher resolution and higher correspondence to the original light field [Fig. 1(b)] than the result from the LFM alone [Fig. 1(d)]. As the upsampling process is composed of repeated Fourier transforms and inverse Fourier transforms, real-time calculation is available with parallel computation.24,25 Previously, a similar light-field upsampling algorithm was introduced by Lu et al.11 They captured a 2-D image and a light-field image with a CCD and iteratively updated the light-field spectrum with the high-resolution 2-D image. Compared with the previous method, we built a noniterative light-field upsampling algorithm for real-time observation; this simplicity reduces the total calculation time and enables real-time 3-D observation.
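The whole procedure condenses into a short NumPy sketch, shown below. This is a minimal single-threaded illustration of the zero-padding and slice-substitution steps; the function name, the even array dimensions, and the heuristic intensity-matching constant are assumptions for illustration, whereas the authors' implementation runs the equivalent transforms with cuFFT on a GPU (Sec. 3).

```python
import numpy as np

def upsample_light_field(lf, img):
    """Light-field upsampling sketch: zero-pad the 4-D spectrum in (fx, fy)
    and substitute the central Fourier slice with the 2-D image spectrum.

    lf  : (Nly, Nlx, Npy, Npx) light field captured through the MLA
    img : (H, W) 2-D image from the OM-path, with H >= Nly and W >= Nlx
    Assumes even dimensions so that spectrum centering is exact.
    Returns an (H, W, Npy, Npx) upsampled light field.
    """
    Nly, Nlx, Npy, Npx = lf.shape
    H, W = img.shape

    # 4-D spectrum of the captured light field [Eq. (2)], zero frequency
    # centered on every axis.
    LF = np.fft.fftshift(np.fft.fftn(lf))

    # Zero-pad the spatial-frequency axes (fx, fy) to the 2-D image size.
    LF_pad = np.zeros((H, W, Npy, Npx), dtype=complex)
    y0, x0 = (H - Nly) // 2, (W - Nlx) // 2
    LF_pad[y0:y0 + Nly, x0:x0 + Nlx] = LF

    # High-resolution Fourier slice from the OM-path image [Eq. (5)].
    I = np.fft.fftshift(np.fft.fft2(img))

    # Substitute the central (f_theta = f_phi = 0) slice; the scale factor
    # matching the spectrum intensities is a simple heuristic here.
    cy, cx = Npy // 2, Npx // 2
    scale = np.abs(LF_pad[:, :, cy, cx]).sum() / (np.abs(I).sum() + 1e-12)
    LF_pad[:, :, cy, cx] = I * scale

    # 4-D inverse transform yields the upsampled light field.
    return np.fft.ifftn(np.fft.ifftshift(LF_pad)).real
```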

3. Real-Time Three-Dimensional Visualization of Upsampled Light-Field Image Using a Computational Light-Field Display

In an LFD, the light field reconstructed from stacked LCD panels can be expressed as a multiplication of pixel values. In a computational LFD with two LCD panels, the light field ld(i,j,s,t) reproduced by a pixel f(i,j) on the frontal panel and a pixel r(s,t) on the rear panel can be represented as ld(i,j,s,t) = f(i,j) × r(s,t). The whole light-field distribution is the multiplication over all possible pixel pairs within the maximum viewing angle. The layer images f and r are optimized with an iterative nonnegative matrix factorization (NNMF) algorithm.26–29 Here, additive update rules are applied for the factorization. The update rules for f and r are defined as follows:

Eq. (6)

$$ f = f - \frac{\Phi^{T}\,(\tilde{L}_d - L_T)}{\Phi^{T}\Phi} \quad \text{and} \quad r = r - \frac{\Psi^{T}\,(\tilde{L}_d - L_T)}{\Psi^{T}\Psi}, $$
where L̃d and LT are the reconstructed light field at the current iteration and the target light field, respectively, and

Eq. (7)

$$ \Phi = \mathrm{diag}(G * r) * H \quad \text{and} \quad \Psi = \mathrm{diag}(H * f) * G, $$
where H and G are the projection matrices of the frontal and rear panels, and diag and * denote the diagonal operator and matrix multiplication, respectively.

The upsampled light field was provided to the observer with a computational LFD in real time based on parallel computation.24,25 For real-time 3-D visualization, both the light-field upsampling and the layer image optimization should be performed in real time. Figure 2(a) shows the parallel computation algorithm for light-field upsampling. As the light-field upsampling for one scene is composed of a 4-D Fourier transform, a 2-D Fourier transform, and a 4-D inverse Fourier transform, the key to real-time computation was the parallelization of the multidimensional Fourier transforms. The algorithm ran multiple one-dimensional (1-D) fast Fourier transform threads on a GPU (cuFFT) with CUDA programming.25

Fig. 2

(a) The parallel computation algorithm for light-field upsampling. The 4-D and 2-D Fourier transforms were divided into multiple 1-D Fourier transforms, and each 1-D Fourier transform was parallelized with the cuFFT function in CUDA. Note that the zero padding was performed in the fx and fy directions only. (b) The parallel computation algorithm for layer image optimization in the computational LFD and (c) the average computation time for each step in light-field upsampling and layer image optimization. Each value is an average of 100 measurements.


On the GPU device, the light-field upsampling was performed in parallel, as shown in Fig. 2(a). Each 2-D and 4-D Fourier transform was divided into 1-D Fourier transforms, and each 1-D Fourier transform was parallelized with the cuFFT function provided by CUDA. For example, the 4-D FT of an (Nlx×Nly×Npx×Npy) light field was done with four batched 1-D cuFFTs. In the first 1-D cuFFT, over Nlx, the batch size was the product of the other three dimensions (Nly×Npx×Npy). After each 1-D cuFFT, a transpose was applied to prepare the next 1-D cuFFT. After the 4-D Fourier transform of the light-field image and the 2-D Fourier transform of the 2-D image, the two spectra were fused in the light-field spectral domain. Here, the high-frequency components were zero-padded for resolution matching. After fusing, the 4-D inverse FFT was performed in parallel in the same way. The upsampled light field was then converted to the optimized layer images of a computational LFD. A sketch of this axis-by-axis decomposition is given below.
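The following NumPy sketch mirrors that decomposition for readers who want to trace it; NumPy stands in for cuFFT, and the function name and axis-rotation scheme are illustrative assumptions rather than the authors' CUDA code:

```python
import numpy as np

def fft4d_batched_1d(lf):
    """4-D FFT via four passes of batched 1-D FFTs, mirroring the cuFFT
    strategy described above.

    Each pass flattens three axes into a batch, runs a 1-D FFT along the
    remaining contiguous axis, and then rotates the axes so the next one
    becomes contiguous (the "transpose" step between cuFFT calls).
    """
    out = lf.astype(complex)
    for _ in range(out.ndim):  # four passes for a 4-D light field
        shape = out.shape
        batch = out.reshape(-1, shape[-1])   # batch of independent rows
        batch = np.fft.fft(batch, axis=1)    # one batched 1-D FFT pass
        out = batch.reshape(shape)
        out = np.moveaxis(out, 0, -1)        # rotate axes for the next pass
    return out  # matches np.fft.fftn(lf) up to floating-point rounding

# Sanity check on a small light field:
lf = np.random.rand(8, 8, 4, 4)
assert np.allclose(fft4d_batched_1d(lf), np.fft.fftn(lf))
```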

The real-time layer image optimization method for the computational LFD was introduced in previous works.26–28 Figure 2(b) shows the parallel algorithm of the layer image optimization. The target light field LT was set to the upsampled light field (mn×1), where m is the number of total viewpoints and n is the number of pixels of a single layer (W×H). At first, memory was allocated on the GPU device for the frontal layer image f (n×1), the rear layer image r (n×1), the projection matrices H and G (mn×n), the contributions of the frontal and rear layers to the light field, Fd and Rd (mn×1), and the reconstructed light field L̃d (mn×1). Then, each layer image was initialized. The initial condition could be random values between 0 and 1 or an estimate from the target light field; here, the frontal and rear layer images f and r were initialized with the central view image of the target light field LT. Then, the layer images were updated with the iterative update rules [Eq. (6)]. Between iterations, the reconstructed light field L̃d was updated. As the frontal and rear layer images are updated in series, each update step can be fully parallelized. The number of iterations was set to 5 for the convergence of the NNMF and real-time calculation. Finally, the optimized frontal and rear layer images were obtained. A small-scale sketch of this iteration follows.
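The NumPy sketch below follows the structure of Eqs. (6) and (7) under two stated assumptions that the paper does not specify: the projections are built as explicit dense matrices (feasible only for toy sizes, whereas the GPU implementation applies them as parallel operators), and the normalization ΦᵀΦ is taken element-wise along its diagonal. The function name and initialization are illustrative.

```python
import numpy as np

def optimize_layers(Lt, H, G, iters=5):
    """Two-layer factorization with the additive updates of Eq. (6).

    Lt   : (mn,) flattened nonnegative target light field
    H, G : (mn, n) projection matrices of the frontal and rear panels
    Returns layer images f, r of length n, clipped to displayable [0, 1].
    """
    n = H.shape[1]
    f = np.full(n, 0.5)  # the paper initializes from the central view image
    r = np.full(n, 0.5)
    for _ in range(iters):
        Phi = (G @ r)[:, None] * H                  # diag(G*r)*H, Eq. (7)
        Ld = Phi @ f                                # reconstructed light field
        f -= Phi.T @ (Ld - Lt) / ((Phi * Phi).sum(axis=0) + 1e-12)
        f = np.clip(f, 0.0, 1.0)                    # keep pixels displayable

        Psi = (H @ f)[:, None] * G                  # diag(H*f)*G, Eq. (7)
        Ld = Psi @ r
        r -= Psi.T @ (Ld - Lt) / ((Psi * Psi).sum(axis=0) + 1e-12)
        r = np.clip(r, 0.0, 1.0)
    return f, r
```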

4. Simulation Results

4.1. Wave Optics Simulation of Dual-Dimensional Microscopy

To quantitatively verify the light-field upsampling and DDM, wave optics simulations were performed with reference 3-D data. A focal stack of a Convallaria sample, obtained with a confocal microscope, was used as the reference data. The images of the DDM setup for the reference data were simulated with the point spread functions of the LFM and the OM, which can be calculated with wave optics.10,23,30 The resultant light-field and 2-D images were generated by integrating the convolutions of the focal stacks with the point spread functions. Note that the point spread functions in LFM vary with both the axial and lateral positions of the sample, whereas those in OM vary with the axial position only. Further information about image simulations of LFM can be found in previous works.1,10 Figure 3(a) shows a simulated light-field image and 2-D image of the reference data captured with a DDM setup. A 20×/0.5 NA objective lens and a 125-μm pitch, f/20 MLA are assumed in the simulation. The reference 3-D data of Convallaria obtained with a confocal microscope (ZEISS, LSM-700) were resized to 375 μm × 375 μm × 31 μm. The resolution of the light-field image was 60×60×15×15, and that of the 2-D image was 900×900. As the aperture of the objective lens creates small circles behind each microlens, as shown in Fig. 3(a), we utilized only the central 5×5 view images for the upsampling.

Fig. 3

The simulation results of DDM. (a) The light-field and 2-D images captured with the DDM setup. A 20×/0.5 NA objective lens was assumed. (b) The perspective views generated from (left) the original reference data, (center-left) the light-field image, (center-right) the upsampled light field generated from the light-field image and the 2-D image, and (right) the reconstructed light-field image with a computational LFD. The numbers inside each image indicate the PSNR value relative to the reference data. (c) The enlarged center-view images of (left) the reference, (center-left) LFM, (center-right) DDM, and (right) LFD. (d) The optimized frontal and rear layer images generated from the upsampled light-field image for the computational LFD.


The upsampled light-field image [Fig. 3(b), center-right] was generated from these two images and compared with the original light field [Fig. 3(b), left] and the light field obtained with the LFM only [Fig. 3(b), center-left]. For a fair comparison, the light-field images from the LFM (center-left column) were also upsampled with the zero-padding method. The simulation results were evaluated with the peak signal-to-noise ratio (PSNR) relative to the original ground truth, which is defined as follows:

Eq. (8)

$$ \mathrm{PSNR} = 10 \log_{10}\!\left( \frac{\mathrm{MAX}_I^2}{\mathrm{MSE}} \right), $$
where MAX_I is the maximum possible pixel value of the image and MSE is the mean squared error. The numbers inside the images in Fig. 3(b) are the PSNR values relative to the original light field. All view images of the upsampled light fields from DDM show higher coincidence with the original light fields than those from the LFM only (a difference of about 6 dB). As shown in Fig. 3(c), DDM could reproduce detailed high-resolution components compared with the LFM results.
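For reference, Eq. (8) transcribes directly into code; this is a trivial sketch, where the function name and the default max_i (the peak pixel value of 8-bit images) are assumptions:

```python
import numpy as np

def psnr(ref, test, max_i=255.0):
    """PSNR of Eq. (8) in dB; max_i is the maximum possible pixel value."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)
```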

4.2. Three-Dimensional Visualization of Upsampled Light Field with a Computational Light-Field Display

The layer images are optimized with the iterative NNMF to reproduce the target light fields.26–28 Figure 3(d) shows the simulation results of layer image optimization in the computational LFD for DDM. The frontal and rear layer images were generated with the NNMF algorithm using the additive update rules. In the simulation, we assumed a computational LFD system composed of two 22-in. LCDs with 1920×1080 resolution and a 12-mm gap, which was the configuration actually used in the experiments. The layer images were generated to reconstruct 5×5 perspective view images, and the maximum viewing angle was 12.19 deg. The simulation results of the perspective images reconstructed with the LFD are shown in the right column of Fig. 3(b). The PSNR values relative to the original light field are slightly lower than those of the upsampled light-field images but still much higher than those of the LFM. As biomedical samples usually show high correspondence between directional view images, the reconstructed light-field images agree well with the target light-field images. As the LFD reproduced the upsampled light-field images with high correspondence, the observer can perceive the in-vivo samples directly through the computational LFD.

5. Experimental Results

5.1. Experimental Setup

A DDM was implemented based on a transmissive OM (Olympus, BX-51T), as shown in Fig. 4(a). A side-by-side observation body (Olympus, BX2-SDO) was attached to the microscope to separate the LFM-path and the OM-path; it is composed of a beam splitter and a mirror, as introduced in Fig. 1(a). All experiments were performed with a 40×/0.75 NA dry apochromat objective (Olympus, UPLFLN40X). A 125-μm MLA with a 2.5-mm focal length (FresnelTech) was fixed in a customized MLA holder on a one-axis stage. The relay lens in the LFM-path was composed of two camera lenses connected nose-to-nose (Canon, EF 100 mm f/2.8 Macro USM). Two 5.5-μm pixel pitch, 32-Hz frame-rate CCDs with 2336×1752 pixels were utilized (Allied Vision Technology, Prosilica GX2300C). One focused on the back focal plane of the MLA through the relay lens, whereas the other was located at the image plane with a camera adapter (Olympus). The MLA holder, relay lenses, and CCD1 were aligned with an optical jig mounted on the tube lens. The capturing of the two CCDs was synchronized by external signals generated by a data acquisition board (National Instruments). The captured 2-D images and light-field images were transferred to a GPU device (NVIDIA, GTX 1080) with the VIMBA SDK. For the LFD, two IPS-LCD monitors (LG 22MP57HQ-P) were utilized. The system was implemented without an additional light source by disassembling one LCD (frontal layer) and using the backlight unit of the other (rear layer). In an IPS-LC panel, horizontal and vertical linear polarizers are attached to the frontal and rear sides, respectively. Therefore, by mounting the frontal panel upside down, we could use the IPS-LC panels without detaching the polarizers. The two panels were stacked with precise lateral and angular calibration. The pixel pitch of the LCD was 254 μm, the resolution was 1920×1080, and the gap between the panels was 12 mm. The maximum viewing angle was 12.49 deg, and 5×5 light-field images were utilized as the target light field. The frontal and rear panels were driven by the GPU and showed the optimized layer images calculated on the GPU device.

Fig. 4

The DDM setup and experimental results with C. elegans. (a) The implemented DDM setup appended to an LFM. A side-by-side observation body was attached to the LFM. The implemented DDM captures (82×62×12×12) light-field images and (984×744) 2-D images at 20 Hz with synchronization (Video 1). The experimental results of LFM and DDM at (b) t=0.00 s and (c) t=5.20 s were obtained with the DDM setup (Videos 2 and 3). “Left,” “center,” and “right” sub-images indicate the directions of the view images. (d) The optimized layer images (top) and the reconstructed light-field images with a computational LFD (bottom). The implemented DDM-LFD system can optically reconstruct upsampled light-field images of C. elegans in 3-D and in real time (20 Hz) (Video 4) (Video 1, MOV, 2268 KB [URL: https://doi.org/10.1117/1.JBO.23.6.066502.1]; Video 2, MOV, 1507 KB [URL: https://doi.org/10.1117/1.JBO.23.6.066502.2]; Video 3, MOV, 5392 KB [URL: https://doi.org/10.1117/1.JBO.23.6.066502.3]; Video 4, MOV, 5284 KB [URL: https://doi.org/10.1117/1.JBO.23.6.066502.4]).


5.2. Dual-Dimensional Microscope

The implemented DDM provides (82×62×12×12) light-field images and (984×744) 2-D images at 20 Hz with synchronization. The 3-D behavior of C. elegans was captured with the DDM setup continuously at 20 Hz (see Video 1). Figures 4(b) and 4(c) show light-field images upsampled with the zero-padding algorithm alone (left) and with the proposed light-field upsampling algorithm (right) at t=0.00 s and t=5.20 s, respectively. The central 5×5 view images are used for the light-field upsampling. Note that only the captured light-field image was utilized for the left result, whereas both the light-field and 2-D images were used for the right result. The results from the DDM setup provided higher resolution and more detailed information about C. elegans compared with the conventional LFM (see Video 2). As the perspective views were generated from a single exposure, we could obtain a high-resolution video of C. elegans while changing the perspective views freely in real time. Video 3 shows an example of the reconstitution of C. elegans movement with changes of perspective view and time frame. The DDM makes it possible to observe the 3-D movement of a live sample at a higher resolution.

A direct comparison between the reconstructed perspective view images and the true perspective view images was impossible because we cannot obtain the exact depth map of the moving C. elegans. Nevertheless, we can draw two conclusions from the experiments. First, the implemented DDM setup provides light-field videos with a higher resolution than that of an LFM. Second, the experimental results accord well with the simulation results, which verified the correspondence between the images from the DDM setup and the reference data.

5.3. Real-Time Light-Field Upsampling and Layer Image Optimization with Parallel Computing

Figure 2(c) shows the average computation time for every step in the light-field upsampling. The image size indicates the resolution of the 2-D image, and each value is the average of 100 calculations. The total computation time increased with the image size because the 1-D array in every 1-D fast Fourier transform stage [O(n log n)] becomes longer as well. For the same reason, most of the time was consumed at the final 4-D inverse Fourier transform stage, which dealt with the largest light field (W×H×L×L). However, the total calculation time for a 70×50×5×5 light-field image and an 840×600 2-D image was 225 ms, which could provide the upsampled light field at 4.4 Hz with a PC and a GPU (NVIDIA, GTX 1080). The lag could be reduced further using multiple GPUs.

5.4. Real-Time Three-Dimensional Observation with a Computational Light-Field Display

With the implemented DDM-LFD system, the observer could watch the 3-D movement of C. elegans in real time, and direct operations such as tracking or focus changing were also possible. Figure 4(d) shows the optimized layer images and the 3-D images of C. elegans reconstructed via the computational LFD at t=0. The frontal and rear layer images were calculated in real time, so the behavior of C. elegans was visualized in 3-D and in real time. The reconstructed 3-D images showed correct parallax, as shown in Fig. 4(d) (Video 4). Compared with the target light field (Video 3), the reconstructed perspective view images showed uniform color tones across the viewpoints. As a computational LFD presents high-resolution light-field images beyond its data capacity (number of pixels) by exploiting the correspondence between perspective view images, the reconstructed 3-D images might lose some information, as shown in Video 4. However, the DDM-LFD provided correct 3-D images in real time, which were sufficient for watching the 3-D behavior of C. elegans. Furthermore, the original 2-D and light-field images and the upsampled light field could be saved to a solid-state drive.

6. Discussion

The simulation and experimental results showed that the DDM captures a very high-resolution layer (984×744) and a low-resolution volume (an 82×62×12×12 light field) together. The experimental results showed that the reconstructed view images from DDM provide higher resolution than those from LFM in real time. However, light-field images from LFM contain more detailed information, and a higher-resolution 3-D image can be restored with deconvolution.1,10,12 Here, we analyze the theoretical depth-dependent band limit of the DDM. It is known that the band limit of the LFM is inversely proportional to depth for large |z|, where a point light source forms a diffraction-limited spot.10 Furthermore, the band limit has its minimum value at z=0, where the LFM captures images only at the MLA sampling rate.9,10 In the intermediate region, the band limit is not well established but is known to have a quasiuniform resolution similar to the peak resolution. The whole depth-dependent band limit can be expressed as follows:

Eq. (9)

$$ \nu_{lf}(z) = \begin{cases} \dfrac{M}{2 p_l}, & z = 0 \\[4pt] \dfrac{M}{0.47\, p_l}, & 0 < |z| \le p_l^2 / (2 M^2 \lambda) \\[4pt] \dfrac{p_l}{0.94\, \lambda M |z|}, & |z| > p_l^2 / (2 M^2 \lambda), \end{cases} $$
where M is the magnification of the objective, λ is the wavelength, pl is the MLA pitch, and the Sparrow two-point criterion is assumed.23 Meanwhile, the band limit of the OM within the DOF Dom is given by Eq. (10), as follows:

Eq. (10)

$$ \nu_{om}(z) = \frac{\mathrm{NA}}{0.47\,\lambda}, \quad |z| \le D_{om}/2, \qquad D_{om} = \frac{n \lambda}{\mathrm{NA}^2} + \frac{n\, p_p}{M \cdot \mathrm{NA}}, $$
where n is the refractive index, pp is the CCD pixel pitch, and the Sparrow two-point criterion is again assumed. Figure 5(a) shows the depth-dependent band limits and DOFs of the OM, LFM, and DDM. In LFM, the band limit is determined by the lenslet sampling rate at the native object plane (z=0), and reconstruction artifacts often occur in the deconvolution process around that plane [|z| ≤ pl²/(2M²λ)]. Therefore, the additional OM-path in the DDM compensates for this degradation successfully. As the light-field upsampling algorithm simply substitutes the information from the native object plane, the band limit of the DDM follows the larger of νlf(z) and νom(z). The DDM captures information up to the objective diffraction limit at the object plane and at the peak resolution of the LFM for Dom/2 < |z| ≤ pl²/(2M²λ), and the band limit is inversely proportional to |z| for the rest, as follows:

Eq. (11)

$$ \nu_{ddm}(z) = \begin{cases} \dfrac{\mathrm{NA}}{0.47\,\lambda}, & |z| \le D_{om}/2 \\[4pt] \dfrac{M}{0.47\, p_l}, & D_{om}/2 < |z| \le p_l^2 / (2 M^2 \lambda) \\[4pt] \dfrac{p_l}{0.94\,\lambda M |z|}, & |z| > p_l^2 / (2 M^2 \lambda). \end{cases} $$

Fig. 5

(a) The simulated depth-dependent band limits of DDM, LFM, and OM. A 20×/0.5 NA objective lens and a 125-μm/2.5-mm MLA were assumed. The band limit of DDM follows the larger value between those of LFM and OM. Note that the additional 2-D information from the OM-path greatly compensates for the image degradation at z=0. (b) The band-limit changes when an additional circular aperture is applied to the OM-path. The band limit of DDM can be reshaped for various purposes. This system can compensate not only for the image loss at the native object plane (z=0) but also for the reconstruction artifacts in |z| ≤ pl²/(2M²λ).


The resolution upper bound of the obtained perspective view images in DDM is the wide-field diffraction limit at the native object plane.
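To make Eqs. (9) to (11) easy to explore, the following sketch evaluates the three band limits under the objective and MLA parameters of Fig. 5(a); the wavelength, refractive index, and CCD pixel pitch are assumptions for illustration, the function names are hypothetical, and the OM band limit is simply treated as zero outside the DOF:

```python
# Parameters of Fig. 5(a): 20x/0.5 NA objective, 125-um pitch MLA;
# wavelength, index, and pixel pitch below are assumed values.
M, NA, p_l = 20.0, 0.5, 125e-6        # magnification, NA, MLA pitch [m]
lam, n, p_p = 550e-9, 1.0, 5.5e-6     # wavelength [m], index, pixel pitch [m]

z_t = p_l**2 / (2 * M**2 * lam)               # LFM transition depth, Eq. (9)
D_om = n * lam / NA**2 + n * p_p / (M * NA)   # OM depth of field, Eq. (10)

def nu_lf(z):
    """Depth-dependent band limit of the LFM, Eq. (9), in cycles/m."""
    z = abs(z)
    if z == 0:
        return M / (2 * p_l)              # MLA sampling limit at z = 0
    if z <= z_t:
        return M / (0.47 * p_l)           # quasiuniform peak-resolution band
    return p_l / (0.94 * lam * M * z)     # falloff inversely proportional to z

def nu_om(z):
    """Band limit of the OM-path, Eq. (10): diffraction limit inside the DOF."""
    return NA / (0.47 * lam) if abs(z) <= D_om / 2 else 0.0

def nu_ddm(z):
    """DDM band limit, Eq. (11): the larger of the two paths at each depth."""
    return max(nu_lf(z), nu_om(z))
```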

If the experimenter wants to focus on information from other depths, this is easily achieved by moving the stage axially. Compared with the conventional OM, the observer can navigate to the desired depth plane while watching the low-resolution image over the large DOF with the DDM setup. This helps in understanding the 3-D behavior of the specimen and dramatically reduces the experiment time.

Nevertheless, the discontinuity of the band limit over depth makes the view image unnatural. The blur unexpectedly increases at the boundary (|z| = Dom/2), which differs from images produced by conventional capturing devices. From this point of view, we can reshape the band limit by applying an additional aperture to the OM-path only, in front of the tube lens. As this additional aperture results in a smaller NA of the OM-path (NAom), we can extend the DOF by sacrificing the maximum resolution. Figure 5(b) shows the band-limit changes obtained by applying different apertures in the OM-path. When the NA is large, the high-frequency components and high resolution are preserved but the DOF becomes narrow; when the NA is small, the opposite holds. We can freely change the NAs of the LFM-path and OM-path according to the need and the experimental circumstances.
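Continuing the sketch above (same assumed parameters M, n, lam, and p_p), the reshaped limit of Fig. 5(b) follows by recomputing the OM-path term with a hypothetical stopped-down NA_om:

```python
def nu_om_stopped(z, NA_om=0.25):
    """OM-path band limit with an additional aperture, for Fig. 5(b).

    NA_om is a hypothetical stopped-down NA: a smaller value lowers the
    in-focus resolution (NA_om / 0.47 lambda) but widens the DOF, which
    reshapes the combined DDM band limit accordingly.
    """
    D = n * lam / NA_om**2 + n * p_p / (M * NA_om)   # widened DOF, Eq. (10)
    return NA_om / (0.47 * lam) if abs(z) <= D / 2 else 0.0
```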

When we change NAom, the light-field upsampling algorithm cannot be applied directly because the DOF is changed. Instead, we can reconstruct the depth information with deconvolution. Compared with the results in previous works,1,10 the additional information from the OM-path can greatly compensate for the image loss not only at the native object plane but also in the reconstruction-artifact zone. The two apertures, the MLA and the circular aperture, are similar to coded apertures designed for different depths, and the methodology of multiple coded apertures could be applied directly to the conventional LFDM method.31,32

DDM is not just a high-resolution 3-D imaging method; it is also a real-time interactive 3-D observation method for in-vivo samples. The real-time 3-D observation of micro-objects with a computational LFD enables real-time 3-D interactive experiments. As shown in Videos 3 and 4, the experimenter can capture the full 3-D behavior of C. elegans with instant tracking and focus changing while watching the 3-D videos of C. elegans. Due to its structural simplicity, DDM could be even more effective with various fluorescence technologies. The combination of whole-brain Ca2+ imaging and DDM could capture the neuronal activity of the entire nervous system of C. elegans with a high resolution. Our real-time 3-D interactive system is applicable not only to biological experiments but also to practical clinical fields, such as endoscopy,33 in which disease is examined and treatment is given directly. Finally, the dual-dimensional imaging scheme is not limited to microscopy. It can be applied to real-scale imaging systems, such as light-field cameras and integral imaging systems. The light-field upsampling algorithm can be applied directly to real-scale objects, which can dramatically enhance the lateral resolution.

7. Conclusion

Here, we demonstrated a real-time high-resolution in-vivo 3-D observation method dubbed DDM. A higher-resolution light-field image was obtained in real time by combining a light-field image from the LFM-path and a 2-D image from the OM-path. The upsampled light-field images of in-vivo objects were shown with a stacked LFD in real time. Two synchronized CCDs captured both (82×62×12×12) light-field images and (984×744) 2-D images at 20 Hz, and the upsampled (960×720×5×5) light fields were generated from them. Then, the optimized layer images for the LFD were generated from the upsampled light fields. The 3-D images were optically reconstructed with the LFD, so the observer could watch the 3-D movement of C. elegans through the DDM setup and directly interact with it by moving the stage.

The simulation results showed that DDM greatly enhances the lateral resolution at the native object plane up to the diffraction limit, compensating for the image degradation there. As DDM provides a very high-resolution layer and a low-resolution volume together, the experimenter can navigate to the desired depth plane by axially moving the stage. Furthermore, the band limit of DDM can be reshaped for various purposes by applying an additional aperture in the OM-path. The structural simplicity of DDM encourages various applications across the field of microscopy. DDM can also be applied to endoscopy33 or real-scale light-field cameras.34

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

This research was supported by the Projects for Research and Development of Police Science and Technology under the Center for Research and Development of Police Science and Technology and the Korean National Police Agency (Grant No. PA-H000001). We wish to thank Professor Junho Lee and Dr. Daehan Lee (Department of Biological Sciences, Seoul National University) for the generous donation of the C. elegans samples used in this study.

References

1. R. Prevedel et al., "Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy," Nat. Methods 11(7), 727–730 (2014). https://doi.org/10.1038/nmeth.2964

2. H. Lee et al., "Nictation, a dispersal behavior of the nematode Caenorhabditis elegans, is regulated by IL2 neurons," Nat. Neurosci. 15(1), 107–112 (2012). https://doi.org/10.1038/nn.2975

3. M. Rajadhyaksha et al., "In vivo confocal scanning laser microscopy of human skin: melanin provides strong contrast," J. Invest. Dermatol. 104(6), 946–952 (1995). https://doi.org/10.1111/1523-1747.ep12606215

4. M. B. Ahrens et al., "Whole-brain functional imaging at cellular resolution using light-sheet microscopy," Nat. Methods 10(5), 413–420 (2013). https://doi.org/10.1038/nmeth.2434

5. S. Abrahamsson et al., "Fast multicolor 3D imaging using aberration-corrected multifocus microscopy," Nat. Methods 10(1), 60–63 (2013). https://doi.org/10.1038/nmeth.2277

6. J. Kim et al., "Real-time integral imaging system for light field microscopy," Opt. Express 22(9), 10210–10220 (2014). https://doi.org/10.1364/OE.22.010210

7. Y. Kumagai et al., "Magnifying endoscopy, stereoscopic microscopy, and the microvascular architecture of superficial esophageal carcinoma," Endoscopy 34(5), 369–375 (2002). https://doi.org/10.1055/s-2002-25285

8. J.-H. Park, "Recent progress in computer-generated holography for three-dimensional scenes," J. Inf. Disp. 18(1), 1–12 (2017). https://doi.org/10.1080/15980316.2016.1255672

9. M. Levoy et al., "Light field microscopy," ACM Trans. Graphics 25(3), 924–934 (2006). https://doi.org/10.1145/1141911

10. M. Broxton et al., "Wave optics theory and 3-D deconvolution for the light field microscope," Opt. Express 21(21), 25418–25439 (2013). https://doi.org/10.1364/OE.21.025418

11. C.-H. Lu, S. Muenzel, and J. Fleischer, "High-resolution light-field microscopy," in Computational Optical Sensing and Imaging, CTh3B-2 (2013).

12. M. Levoy, Z. Zhang, and I. McDowall, "Recording and controlling the 4D light field in a microscope using microlens arrays," J. Microsc. 235(2), 144–162 (2009). https://doi.org/10.1111/jmi.2009.235.issue-2

13. J. Kim et al., "F-number matching method in light field microscopy using an elastic micro lens array," Opt. Lett. 41(12), 2751–2754 (2016). https://doi.org/10.1364/OL.41.002751

14. J.-H. Jung, J. Kim, and B. Lee, "Solution of pseudoscopic problem in integral imaging for real-time processing," Opt. Lett. 38(1), 76–78 (2013). https://doi.org/10.1364/OL.38.000076

15. J. Kim et al., "Real-time capturing and 3D visualization method based on integral imaging," Opt. Express 21(16), 18742–18753 (2013). https://doi.org/10.1364/OE.21.018742

16. F. Okano et al., "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36(7), 1598–1603 (1997). https://doi.org/10.1364/AO.36.001598

17. J. Kim et al., "A single-shot 2D/3D simultaneous imaging microscope based on light field microscopy," Proc. SPIE 9655, 96551O (2015). https://doi.org/10.1117/12.2185253

18. R. Ng, "Fourier slice photography," ACM Trans. Graphics 24(3), 735–744 (2005). https://doi.org/10.1145/1073204

19. R. Ng et al., "Light field photography with a hand-held plenoptic camera," Comput. Sci. Tech. Rep. 2(11), 1–11 (2005).

20. J.-H. Park and K.-M. Jeong, "Frequency domain depth filtering of integral imaging," Opt. Express 19(19), 18729–18741 (2011). https://doi.org/10.1364/OE.19.018729

21. S. Farsiu et al., "Advances and challenges in super-resolution," Int. J. Imaging Syst. Technol. 14(2), 47–57 (2004). https://doi.org/10.1002/(ISSN)1098-1098

22. M. Bertero et al., "Image deblurring with Poisson data: from cells to galaxies," Inverse Prob. 25(12), 123006 (2009). https://doi.org/10.1088/0266-5611/25/12/123006

23. J. W. Goodman, Introduction to Fourier Optics, Roberts and Company Publishers, New York (2005).

24. A. Eklund and P. Dufort, "Non-separable 2D, 3D, and 4D filtering with CUDA," in GPU Pro 5: Advanced Rendering Techniques, 469–492, AK Peters/CRC Press, New York (2014).

25. S. Al Umairy et al., "On the use of small 2D convolutions on GPUs," in Computer Architecture, 52–64, Springer, Berlin (2011).

26. S. Lee et al., "Additive light field displays: realization of augmented reality with holographic optical elements," ACM Trans. Graphics 35(4), 60 (2016). https://doi.org/10.1145/2897824.2925971

27. G. Wetzstein et al., "Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting," ACM Trans. Graphics 31(4), 1–11 (2012). https://doi.org/10.1145/2185520

28. D. Lanman et al., "Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization," ACM Trans. Graphics 29(6), 163 (2010). https://doi.org/10.1145/1882261

29. S. Moon et al., "Depth-fused multi-projection display using scattering polarizers," in Digital Holography and Three-Dimensional Imaging, W2A-18, Optical Society of America, Washington, D.C. (2017).

30. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, Elsevier, New York (1980).

31. C. Zhou, S. Lin, and S. Nayar, "Coded aperture pairs for depth from defocus," in IEEE 12th Int. Conf. on Computer Vision, 325–332 (2009).

32. A. Levin et al., "Image and depth from a conventional camera with a coded aperture," ACM Trans. Graphics 26(3), 70 (2007). https://doi.org/10.1145/1276377

33. J. Liu et al., "Light field endoscopy and its parametric description," Opt. Lett. 42(9), 1804–1807 (2017). https://doi.org/10.1364/OL.42.001804

34. Y. Jeong et al., "Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera," Appl. Opt. 54(35), 10333–10341 (2015). https://doi.org/10.1364/AO.54.010333

Biography

Jonghyun Kim is a research scientist at NVIDIA Research. He received his BS degree from the School of Electrical Engineering, Seoul National University in 2011, and his PhD in the Department of Electrical Engineering and Computer Science, Seoul National University, in 2017. He is the author of more than 20 journal papers and 35 conference papers. His current research interests include light-field microscopy, light-field display, augmented reality display, and holographic display. This work was conducted while he was at Seoul National University as a postdoc researcher.

Seokil Moon received his BS degree in electrical engineering from Pohang University of Science and Technology, Pohang, South Korea. Currently, he is working toward his PhD in the School of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea. His current research interests focus on light-field imaging technique and visualization.

Youngmo Jeong received his BS degree in electrical and computer engineering from Seoul National University, Korea, in 2013 and is currently working toward his PhD in electrical engineering, Seoul National University, Korea. His primary research interests are in the areas of 3-D display, optical information processing, and augmented reality.

Changwon Jang received his BS degree in electrical engineering from Seoul National University, Seoul, Korea, in 2013, where he is currently working toward his PhD at the School of Electrical and Computer Engineering. His primary research interests focus on the areas of 3-D display and digital holography.

Youngmin Kim is a senior research engineer at Korea Electronics Technology Institute, Korea. He received his BS degree in 2005 and his PhD in February 2011 in electrical engineering from Seoul National University, Seoul, Korea. He is the author of more than 30 journal papers and has written two book chapters. His current research interests include 3-D display, holography, VR/AR display, and visual fatigue associated with 3-D display. He is a fellow of the Optical Society of Korea.

Byoungho Lee received his PhD from the University of California at Berkeley in 1993. Since September 1994, he has been on the faculty of the School of Electrical Engineering, Seoul National University, where he is currently the head of the department. He received the Jinbojang National Badge of Korea (2016). He is a fellow of SPIE, OSA, and IEEE and the president-elect of the Optical Society of Korea.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Jonghyun Kim, Seokil Moon, Youngmo Jeong, Changwon Jang, Youngmin Kim, and Byoungho Lee "Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display," Journal of Biomedical Optics 23(6), 066502 (21 June 2018). https://doi.org/10.1117/1.JBO.23.6.066502
Received: 14 February 2018; Accepted: 30 May 2018; Published: 21 June 2018
KEYWORDS: 3D displays, Microscopy, 3D image processing, In vivo imaging, Fourier transforms, Image resolution, Video