## 1. Introduction

Integral imaging is a three-dimensional (3-D) sensing and display technique, first proposed by Lippmann in 1908.^{1} Unlike stereoscopic 3-D display or holography,^{2–6} integral imaging can provide full-parallax, continuous-viewing 3-D images without any special glasses or coherent light.^{7–11} An integral-imaging system utilizes a microlens array (MLA) to pick up and reconstruct lifelike true 3-D images. One of the main problems in integral imaging is its limited depth-of-field (DOF), and many researchers have proposed useful methods to improve it.^{12–16} One approach is to reduce the numerical aperture of the microlenses; however, such a reduction also degrades the lateral resolution of the elemental images.^{17}

It has been known for some time that obstructing the center of the aperture of an optical system—i.e., using an annular aperture—makes the central maximum of the Airy pattern narrower and increases the DOF.^{18}^{,}^{19} Also, Martínez-Corral et al.^{20} proposed an amplitude-modulating method and presented a simulating experiment to show that the DOF of an integral-imaging pickup system was significantly enhanced by simply placing an opaque circular mask behind each microlens. But, to our knowledge, no optical integral-imaging pickup experiments have been presented to verify this method so far.

In this paper, we analyze the light intensity distribution in the amplitude-modulating pickup system and implement an optical pickup experiment using an amplitude-modulated sensor array (SA) to generate the DOF-enhanced elemental image array (EIA). The obtained EIA is then used for computational reconstruction to produce the 3-D images with extended DOF.

## 2. Intensity Distribution in Integral-Imaging Pickup Process

We assume that the integral-imaging system is linear and shift invariant and that it is illuminated by a monochromatic light source with wavelength $\lambda $.

Figure 1 shows the schematic of the DOF-extending method in the integral-imaging pickup process: an opaque mask with diameter $q$ is placed in front of each sensor to obstruct the central part of that sensor. The 3-D object point $O({x}_{0},{y}_{0})$ is located off the reference plane at a depth value ${z}_{0}$ and produces a blurred image on the complementary metal-oxide-semiconductor (CMOS) sensor. The central sensor is denoted as the (0th, 0th) sensor. The pitch of the SA is $p$ and the focal length of the SA is $f$. The distances $l$ and $g$ are related by the Gaussian lens law $1/l+1/g-1/f=0$.

The pupil function for the (0th, 0th) sensor in Fig. 1 is given by

## (1)

$${P}_{00}(x,y)=\mathrm{Circ}(x,y;p)-\mathrm{Circ}(x,y;q),$$where $p>q\ge 0$ and the function $\mathrm{Circ}$ is defined as

## (2)

$$\mathrm{Circ}(x,y;p)=\begin{cases}1,&\sqrt{{x}^{2}+{y}^{2}}\le p/2\\ 0,&\text{otherwise}\end{cases}$$

Accordingly, the pupil function for the ($m$’th, $n$’th) sensor can be expressed as ${P}_{mn}(x,y)={P}_{00}(x-mp,y-np)$, where $m$ and $n$ are the indices of that sensor. For any monochromatic channel with wavelength $\lambda $, the phase transformation of the ($m$’th, $n$’th) sensor is written as
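The pupil functions above can be sketched numerically. A minimal example, assuming the annular pupil ${P}_{00}(x,y)=\mathrm{Circ}(x,y;p)-\mathrm{Circ}(x,y;q)$ and using the paper's $p=8.8\,\mathrm{mm}$ and $q=6.2\,\mathrm{mm}$ (the function names `circ` and `pupil` are ours):

```python
import numpy as np

def circ(x, y, d):
    """Circ(x, y; d) of Eq. (2): 1 inside a circle of diameter d, else 0."""
    return (np.sqrt(x**2 + y**2) <= d / 2).astype(float)

def pupil(x, y, p=8.8, q=6.2, m=0, n=0):
    """Annular pupil of the (m'th, n'th) sensor: full aperture of diameter p
    minus the opaque mask of diameter q, centered on the sensor at (mp, np)."""
    xs, ys = x - m * p, y - n * p
    return circ(xs, ys, p) - circ(xs, ys, q)

# Sample the central (0th, 0th) pupil on a 10 mm x 10 mm grid
v = np.linspace(-5.0, 5.0, 501)
X, Y = np.meshgrid(v, v)
P = pupil(X, Y)
```

Summing the sampled pupil over the grid and dividing by the full-aperture sum approximates the open-area fraction $1-{(q/p)}^{2}\approx 0.504$.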

## (3)

$${T}_{mn}(x,y)={P}_{00}(x-mp,y-np)\,\mathrm{exp}\left\{-j\frac{k}{2f}\left[{(x-mp)}^{2}+{(y-np)}^{2}\right]\right\},$$where $k=2\pi /\lambda $ is the wave number. According to the paraxial approximation and the Fresnel diffraction theory, the light intensity distribution on the CMOS plane $({x}^{\prime },{y}^{\prime })$ can be obtained as

## (4)

$$I({x}^{\prime },{y}^{\prime };{z}_{0})={\left|\frac{1}{{\lambda }^{2}g(l+{z}_{0})}{\iint }_{-\infty }^{+\infty }\mathrm{exp}\left\{\frac{jk}{2(l+{z}_{0})}\left[{(x-{x}_{0})}^{2}+{(y-{y}_{0})}^{2}\right]\right\}{T}_{mn}(x,y)\,\mathrm{exp}\left\{\frac{jk}{2g}\left[{({x}^{\prime }-x)}^{2}+{({y}^{\prime }-y)}^{2}\right]\right\}\mathrm{d}x\,\mathrm{d}y\right|}^{2},$$where the external pure phase factors have been dropped.
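Equation (4) can be evaluated numerically. The following sketch, for the central sensor and an on-axis object point, folds the three quadratic phases into one chirp and performs the Fresnel integral with a single FFT; the function name, grid size, and sampling window are our assumptions, and the parameter values are those used in Sec. 3.

```python
import numpy as np

def intensity_on_cmos(z0, p=8.8, q=6.2, f=50.0, g=60.4, l=289.6,
                      lam=5.5e-4, n=512, span=12.0):
    """Numerically evaluate Eq. (4) for the central (0th, 0th) sensor and an
    on-axis object point at depth z0 (all lengths in mm): spherical wave from
    the point, annular pupil, lens phase, then a single-FFT Fresnel propagation
    over the distance g to the CMOS. External pure phase factors are dropped,
    so only the intensity |.|^2 is meaningful."""
    k = 2 * np.pi / lam
    v = np.linspace(-span / 2, span / 2, n)
    x, y = np.meshgrid(v, v)
    r2 = x**2 + y**2
    r = np.sqrt(r2)
    annulus = ((r <= p / 2) & (r > q / 2)).astype(float)
    # combined chirp: diverging wave 1/(l+z0), lens -1/f, Fresnel kernel 1/g
    field = annulus * np.exp(1j * k * r2 / 2 * (1 / (l + z0) - 1 / f + 1 / g))
    u = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    return np.abs(u) ** 2
```

At $z_0=0$ the three quadratic phases cancel by the Gaussian lens law, so the result reduces to the far-field diffraction pattern of the annular aperture, peaked at the center of the CMOS.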

Note that the DOF-extending method works at the expense of light efficiency. Since only the annular region of each aperture transmits light, the light efficiency is given by the ratio of the open annulus area to the full aperture area

## (5)

$$\eta =\frac{{p}^{2}-{q}^{2}}{{p}^{2}}=1-{\left(\frac{q}{p}\right)}^{2}.$$
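Assuming Eq. (5) expresses the ratio of the open annulus area to the full aperture area (consistent with the 50.4% quoted in Sec. 3 for $p=8.8\,\mathrm{mm}$ and $q=6.2\,\mathrm{mm}$), the light efficiency can be computed as:

```python
def light_efficiency(p, q):
    """Transmitted fraction of an annular aperture: open area over full area,
    (p**2 - q**2) / p**2 = 1 - (q/p)**2."""
    return 1.0 - (q / p) ** 2

eta = light_efficiency(p=8.8, q=6.2)  # ≈ 0.504, i.e. 50.4%
```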

## 3. Depth-of-Field of the Integral-Imaging Pickup System

For the DOF calculation, we only take into account the rear DOF, i.e., the region behind the reference plane. For an object point with depth value ${z}_{0}$, the diffraction intensity pattern on the CMOS can be computed using Eq. (4); we define the pattern diameter as that of the circle at which the intensity has dropped to $1/{2}^{1/2}$ of the maximum intensity of the pattern. By repeating this for object points with different depth values ${z}_{0}$, we obtain a set of distinct pattern diameters. With the system parameters $p=8.8\,\mathrm{mm}$, $q=6.2\,\mathrm{mm}$, $f=50.0\,\mathrm{mm}$, $g=60.4\,\mathrm{mm}$, and $\lambda =5.5\times {10}^{-4}\,\mathrm{mm}$, we calculated the pattern diameters as functions of the depth value ${z}_{0}$ for the DOF-extending method (red line) and the conventional method (green line), as shown in Fig. 2. Figure 2 shows that the minimal pattern diameter on the CMOS is obtained when the object point lies on the reference plane (${z}_{0}=0$) and that the diameter gradually grows as the object point moves away from the reference plane (${z}_{0}$ increases).

The DOF of the integral-imaging pickup system can be defined as the distance over which the object may be axially shifted before an intolerable blur is produced.^{21} The size of the critical tolerable pattern is given by combining the least distance of distinct vision of a normal eye (about 250.0 mm)^{22} with the minimum angular resolution of human eyes (about $2.9\times {10}^{-4}\,\mathrm{rad}$),^{17} which yields a tolerable pattern diameter of $72.5\,\mu \mathrm{m}$ (blue line in Fig. 2). The DOFs of the DOF-extending and conventional methods can therefore be read off as the abscissas of the intersections of the blue line with the red line and the green line, respectively, as shown in Fig. 2. The results show that the DOF of the DOF-extending method (about 60.0 mm) is almost twice that of the conventional method (about 30.0 mm) with an obscuration ratio $q/p\approx 1/{2}^{1/2}$. The corresponding light efficiency is 50.4% according to Eq. (5).
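The tolerable-pattern size above is a one-line calculation; a small sketch (function name ours):

```python
def tolerable_blur_diameter_um(distance_mm=250.0, angular_res_rad=2.9e-4):
    """Critical tolerable pattern diameter: the spot that an eye with the given
    angular resolution just resolves at the least distance of distinct vision."""
    return distance_mm * angular_res_rad * 1e3  # convert mm to micrometers

d_tol = tolerable_blur_diameter_um()  # 250.0 mm * 2.9e-4 rad = 72.5 um
```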

## 4. Experiments and Discussions

To further verify the effectiveness of the DOF-extending pickup method based on the amplitude-modulated SA, we implemented an optical pickup experiment under white-light illumination and a computational reconstruction experiment. As shown in Fig. 3, a Canon EOS 60D sensor with a Canon EF-S 18 to 55 mm $f/3.5$ to 5.6 IS II lens was fixed onto a Lyseiki motorized translation stage to perform the optical pickup process. The stage controller drove the stage step by step in both the horizontal and vertical directions with a step length of 5.0 mm. The focal length, exposure time, ISO, $F$-number, and CMOS size were 50.0 mm, $1/25\,\mathrm{s}$, 1000, $F/5.6$, and $22.3\,\mathrm{mm}\times 14.9\,\mathrm{mm}$, respectively. The sensor was focused on the first object, and the distance between the sensor CMOS and the first object was 350.0 mm. According to the Gaussian lens law, the distance between the sensor CMOS and the equivalent principal plane of the sensor objective was 60.4 mm, and the distance between that principal plane and object 1 was 289.6 mm.
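The two conjugate distances quoted above follow from the Gaussian lens law together with the 350.0-mm total distance; a small solver sketch (thin-lens model with the principal-plane separation neglected, function name ours):

```python
import math

def conjugate_distances(f, total):
    """Split the total CMOS-to-object distance into the image distance g and
    the object distance l with 1/l + 1/g = 1/f and l + g = total.
    Substituting l = total - g gives g**2 - total*g + f*total = 0;
    the smaller root is the image distance. Returns (l, g) with g < l."""
    disc = math.sqrt(total * total - 4.0 * f * total)
    g = (total - disc) / 2.0
    return total - g, g

l, g = conjugate_distances(f=50.0, total=350.0)  # l ≈ 289.6 mm, g ≈ 60.4 mm
```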

Note that the opaque mask for amplitude modulation should be placed exactly at the aperture plane of the sensor objective, which was not accessible in our setup. Thus, we introduced an additional diaphragm with a diameter of 8.8 mm onto the first surface of the objective to shift the aperture plane to that surface. As shown in Figs. 4(a) and 4(b), an 8.8-mm aperture stop and a 6.2-mm opaque mask were printed on photographic film for the optical integral-imaging pickup process. According to the DOF results in Fig. 2 and Sec. 3, the rear DOFs of the DOF-extending and conventional experimental setups are 60.0 mm and 30.0 mm, respectively.

As shown in Fig. 4(c), we built a 3-D object consisting of three planar objects located at different depth positions, with a distance of 30.0 mm between every two adjacent objects. To ensure that the pickup system has the same angular resolution at the three planar objects, the lateral sizes of object 1, object 2, and object 3 were designed as 2.5 mm, 2.8 mm, and 3.0 mm, respectively. For each method, $7\times 7$ images were captured by the SA as the original elemental images, each with a resolution of $5184\times 3456$ pixels. Table 1 shows the parameters used in the experiment.

## Table 1

Parameters used in the experiment.

| Parameters | Values |
|---|---|
| Focal length of the sensor lens | 50.0 mm |
| $F$-number | $F/5.6$ |
| Diameter of the opaque mask | 6.2 mm |
| Diameter of the aperture stop | 8.8 mm |
| Sensor step size | 5.0 mm |
| Focal length of the virtual MLA | 22.0 mm |
| Pitch of the virtual MLA | 5.0 mm |
| Distance between every two adjacent 3-D objects | 30.0 mm |
| Distance between the CMOS and object 1 | 350.0 mm |
| Number of the captured elemental images | $7\times 7$ |
| Resolution of the original elemental images | $5184\times 3456$ pixels |
| Resolution of the shrunk elemental images | $1000\times 1000$ pixels |
| Resolution of the EIA | $7000\times 7000$ pixels |

Note: MLA, microlens array; EIA, elemental image array.

Since the virtual MLA used in the computational reconstruction process had a focal length $f=22.0\,\mathrm{mm}$ and a pitch $p=5.0\,\mathrm{mm}$, the obtained original elemental images needed to be resized and shrunk to a resolution of $1000\times 1000$ pixels. Figure 5 shows the obtained EIAs and the enlarged elemental images. Each EIA contains $7\times 7$ elemental images and has a resolution of $7000\times 7000$ pixels. It can be seen that the EIA and elemental images obtained from the DOF-extending method are much clearer than those of the conventional method, especially for the three high-contrast black fringes. However, the light efficiency is decreased because the central part of each sensor is obscured.
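The resizing-and-tiling step can be sketched as follows. This is a dependency-free illustration using nearest-neighbor resampling (the real pipeline may use a different interpolation), with small dummy arrays standing in for the captured frames:

```python
import numpy as np

def assemble_eia(elemental_images, size=1000):
    """Shrink each captured elemental image to size x size pixels
    (nearest-neighbor resampling) and tile the grid of images into
    one elemental image array (EIA)."""
    rows = []
    for row in elemental_images:
        resized = []
        for img in row:
            h, w = img.shape[:2]
            yi = np.arange(size) * h // size   # nearest source row indices
            xi = np.arange(size) * w // size   # nearest source column indices
            resized.append(img[np.ix_(yi, xi)])
        rows.append(np.hstack(resized))
    return np.vstack(rows)

# 7 x 7 dummy tiles stand in for the captured 3456 x 5184 pixel frames
tiles = [[np.full((60, 90), i * 7 + j, float) for j in range(7)] for i in range(7)]
eia = assemble_eia(tiles, size=100)
```

With `size=1000` and the real $7\times 7$ capture, the same routine yields the $7000\times 7000$-pixel EIA described above.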

Also, to demonstrate the DOF-extending effect more intuitively, we took the two enlarged elemental images in Fig. 5 as examples and plotted the normalized intensity profiles of the three objects along the sampling path shown in Fig. 6(a). As shown in Fig. 6(b), the normalized intensity profile of object 1 obtained by the DOF-extending method [red lines in Fig. 6(b)] is quite similar to that obtained by the conventional method [blue lines in Fig. 6(b)], which means that object 1 is recorded clearly, with sharp edges and a high contrast ratio, by both methods. In Fig. 6(c), however, the intensity troughs of the two methods, which correspond to the color fringes on object 2, are separated, while the intensity peaks, which correspond to the blank areas on object 2, remain quite close to each other. This is particularly apparent for the three intensity troughs in the right part of Fig. 6(c), which represent the three black fringes. These observations indicate that the image of object 2 obtained from the DOF-extending method has a higher contrast ratio and sharper edges than that obtained from the conventional method and is thus more faithful to the shape of the original object. The effect is even more obvious in Fig. 6(d). We can therefore conclude that the DOF is evidently enhanced by amplitude modulation.

After obtaining the EIAs, we conducted a computational reconstruction experiment; the reconstructed images obtained at different virtual imaging planes, which were 12.4 mm, 49.4 mm, and 85.4 mm away from the original reference plane of the sensor, are shown in Fig. 7. These reconstructed images are slightly shifted with respect to their theoretical positions because of experimental errors introduced in the optical pickup process. From the results of the conventional method in Fig. 7(b), we can see that the reconstructed image of object 1 looks quite clear since it lies on the focusing plane of the sensor, object 2 starts getting blurred since it lies on the marginal depth plane, and object 3 is too blurry to be observed since it lies outside the depth range. By contrast, the image of object 3 in Fig. 7(a) is almost as clear as the image of object 2 in Fig. 7(b); the DOF-extending method has thus successfully moved the marginal depth plane from object 2 to object 3, which means that the DOF is increased from about 30.0 mm to about 60.0 mm, as estimated in Fig. 2 and Sec. 3. Moreover, as shown in Fig. 8, the normalized intensity profiles of the reconstructed images were obtained using the same method as in Fig. 6. The intensity distributions of these reconstructed images are quite similar to those shown in Fig. 6, which indicates that the reconstructed images obtained from the DOF-extending method are clearer than those obtained from the conventional method, with sharper edges and higher contrast ratios. Thus, the effectiveness of the DOF-extending method was finally confirmed. Note that in the experiment, the DOF is enhanced at the expense of losing about 49.6% of the light. Care should therefore be taken when applying this DOF-extending method in situations where high light efficiency is required.
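The computational reconstruction itself is not specified in detail here. A common shift-and-sum scheme, given as an illustrative sketch rather than the authors' exact algorithm (the disparity parameter `shift_px` is our simplification of the depth-dependent magnification), looks like:

```python
import numpy as np

def reconstruct_plane(eia, n=7, size=1000, shift_px=0):
    """Simplified shift-and-sum computational reconstruction: each elemental
    image in the n x n EIA is translated in proportion to its index, with
    shift_px encoding the per-index disparity of the chosen depth plane, and
    the results are averaged. Features at the matching depth align and stay
    sharp; features at other depths smear out. np.roll wraps at the borders,
    which a full implementation would crop instead."""
    acc = np.zeros((size, size))
    c = n // 2
    for m in range(n):
        for k in range(n):
            ei = eia[m * size:(m + 1) * size, k * size:(k + 1) * size]
            acc += np.roll(ei, ((m - c) * shift_px, (k - c) * shift_px), axis=(0, 1))
    return acc / (n * n)
```

Sweeping `shift_px` sweeps the virtual imaging plane through depth, which is how the planes at 12.4 mm, 49.4 mm, and 85.4 mm can be selected in turn.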

It is noteworthy that the DOF-extending method performs better at recording the high-frequency components of the object information because of its bandpass characteristics. Generally, for a 3-D scene with a small depth range, observers pay more attention to the details of the 3-D object, which are mostly resolved by the high-frequency information.^{23} For a 3-D scene with a large depth range, however, the optical transfer function of the DOF-extending method suffers severe attenuation and oscillation, which seriously degrade the image quality.^{24} Therefore, the DOF-extending pickup method is more suitable for enhancing the DOF of a 3-D scene with a small depth range. After repeating the experiment with different 3-D objects of different depth ranges, we found that 60.0 mm [shown in Fig. 4(c)] is almost the largest depth range for which the effectiveness of the DOF-extending method can be demonstrated with the pickup system shown in Fig. 3. The deeper the 3-D object is, the less effective the DOF-extending method becomes.

## 5. Conclusion

We have analyzed the light intensity distributions and propagation characteristics of the DOF-extending and conventional integral-imaging pickup processes. Experimental results of the optical pickup process and the computational reconstruction process have shown that the DOF-extending method works effectively when recording a 3-D scene with a small depth range. Note that in the optical pickup experiment, the DOF is enhanced at the expense of light efficiency; care should therefore be taken when applying this DOF-extending method in situations where light efficiency is critical. Also, this method is currently difficult to apply to MLAs for optical pickup or reconstruction because of the limited aperture of each microlens, but in time it should become possible for manufacturers to fabricate amplitude-modulating masks directly onto MLAs. In our future work, DOF-extending methods for recording and displaying 3-D scenes with large depth ranges will be presented.

## Acknowledgments

This work is supported by the “973” Program under Grant No. 2013CB328802, the NSFC under Grant Nos. 61225022 and 61320106015, and the “863” Program under Grant Nos. 2012AA011901 and 2012AA03A301. The authors would like to thank Prof. Manuel Martínez-Corral, Prof. Bahram Javidi, and Dr. Xiao Xiao for valuable suggestions on designing the optical pickup experiment.

## References

## Biography

**Cheng-Gao Luo** is currently pursuing his PhD in optical engineering at Sichuan University, Chengdu, China. He worked as a visiting research scholar at the University of Connecticut from 2012 to 2013. His recent research interest is information display technologies including 3-D displays.

**Qiong-Hua Wang** is a professor of optics at the School of Electronics and Information Engineering, Sichuan University, China. She was a postdoctoral research fellow at the School of Optics/CREOL, University of Central Florida, from 2001 to 2004. She has published more than 200 papers on information displays. She is the associate editor of *Optics Express* and *Journal of the Society for Information Display*. Her recent research interests include optics and optoelectronics, especially display technologies.

**Huan Deng** is a lecturer of optics at the School of Electronic and Information Engineering, Sichuan University. She received her PhD from Sichuan University in 2012. She has published more than 10 papers. She is a member of Society for Information Display. Her recent research interest is information display technologies including 3-D displays.