## 1. Introduction

Glasses-free three-dimensional display has developed at an unprecedented pace in recent years, mainly because three-dimensional display provides more perceptual depth cues than conventional two-dimensional display.^{1,2} Two perceptual cues are crucial for humans observing objects: stereo parallax (each eye-pupil sees a different image) and motion parallax (the image changes as the viewer moves). Providing a high-quality three-dimensional experience with both stereo and motion parallax is ideal but challenging.

At present, auto-stereoscopic display based on the multi-view technique is attainable without any special aids and has been widely used in commercial applications.^{3–6} Multi-view three-dimensional display generates a number of individual images, each visible only in a particular viewing zone. These viewing zones are sophisticatedly designed to abut one another, so that at the optimal viewing distance and for certain viewpoints, different images are seen by different eyes to achieve stereoscopy. However, one of the main limitations of multi-view displays is the limited size of the viewing zones. Perfect stereo motion parallax can only be achieved at a small number (usually several to dozens) of viewing positions, so these three-dimensional displays cannot provide the required continuous motion parallax. Despite its easier implementation, conventional auto-stereoscopic display is also limited in other imaging-quality factors, namely ray source resolution, angular sampling resolution, and viewing angle.^{7}

To enable motion parallax and conform to the way humans observe objects in real visual space, displays with large (even 360-deg all-around) viewing zones and smooth motion parallax have become a major direction of three-dimensional display technology during the past few years, and they undoubtedly suggest tremendous application potential. Among the diverse reported methods, volumetric display and light field display are two representative large-viewing-zone three-dimensional display technologies and are considered natural three-dimensional displays.^{8,9} For conventional volumetric displays, such as static-volume and swept-volume displays, progress in computational photography and electronic manufacturing indicates great potential, although the products are still in the experimental phase.^{10–12} However, the massive data and high-performance modulation devices that holographic display requires make it impossible to manufacture soon.^{8}

The light field display reproduces the rays of the actual light field of virtual objects and therefore produces the best visual quality. In 2009 and 2010, Zhejiang University proposed two systems that achieved an omni-directional-view three-dimensional display effect using a high-speed projector.^{13,14} In this method, a high-speed projector projects, in a time-sequential fashion, images of different views onto a rotating selective-diffusing screen, so observers obtain delicate three-dimensional images all around the display system. Another approach to all-around three-dimensional display is to create a cylindrical display area by rotating an LED array or other display devices.^{15} However, most existing large-viewing-zone three-dimensional displays require high-speed projectors, rotation mechanisms, precise optoelectronic modulators, and huge volumes of data to transfer and display.^{8} These factors have so far resulted in small-sized but inordinately high-cost systems.

An effective alternative for achieving large-scale display is the multi-projector light field approach. Several studies have already provided continuous horizontal parallax to enhance volumetric understanding with this method.^{16–18} This multi-projector configuration succeeds in large-scale display, but existing systems, such as the Holografika systems,^{16,17} have not yet achieved a 360-deg display effect. Moreover, there is far less discussion in the literature of the specific light field reconstruction and image synthesis needed to achieve a large-scale cylindrical, or even 360-deg, display and viewing area.

In this paper, a large-sized light field three-dimensional display system using multi-projectors and a directional diffuser is proposed. Stereo parallax and smooth horizontal motion parallax are achieved by this approach. The viewing area of this system is much larger than that of conventional auto-stereoscopic displays and can easily be extended to 360-deg display performance. In the rest of the paper, we first introduce the system, which generates the light field of a three-dimensional scene by projecting processed images onto a cylindrical directional diffuser screen. Both the optical characteristics of the directional diffuser and the specific image synthesis algorithm for producing a 360-deg light field are discussed to help realize the prototype; few reports in the literature have treated these together in a way that lets general readers understand the light field display clearly. In the third section, a prototype with 100 mini-projectors and a nearly 2-m-diameter cylindrical directional diffuser screen is developed to validate this approach. Finally, results are presented to demonstrate that the three-dimensional scene reconstructed by the prototype can be observed over quite a wide range around the diffuser screen by any number of observers.

## 2. Proposed System

The display system proposed in this paper is a light field display with horizontal parallax that employs a large number of projectors and a directional diffuser screen to generate rays of light in the light field. In this section, the light field reconstruction principle is introduced first, followed by description of the hardware configuration and parameters. The algorithm designed for 360-deg display to create the images projected by projectors is given at the end of the section.

### 2.1. Principle of Light Field Display

The light field, a mapping from rays to radiance, is an important tool in optical system simulation and analysis.^{19,20} Rays in the light field can be parameterized in different ways. In this paper, we use a concentric light field parameterization in which a ray is represented by two spatial points, $P$ and $A$. The point $P$ lies on a circle and the point $A$ on the virtual three-dimensional scene that the light field displays, as shown in Fig. 1.

To display these rays in a light field, the proposed system discretizes the circle into spatial points and uses an array of $N$ projectors aligned in the same horizontal plane, with lens pupils positioned at ${P}_{1}$ to ${P}_{N}$. However, because the number of projectors is limited, rays are projected discontinuously in the horizontal direction from these discretely positioned projectors. That is to say, without a cylindrical directional diffuser screen, observers would only obtain a series of discontinuous emitting exit-pupils of the projectors. To smooth this discontinuity, a cylindrical directional diffuser screen is set in front of the projectors, as illustrated in Fig. 1(a). Rays in the light field are generated by the projectors and projected onto the diffuser screen. Any point on the diffuser screen within the viewing zone is illuminated by a set of projectors. The directional diffuser screen ensures that rays arriving from different directions are emitted in carefully controlled angular sections, so that each ray leaves in its correct direction and forms the light field. Such scattering smooths the gaps between rays and produces continuous imagery.

Given a virtual three-dimensional object in the light field, the light emitted from the object is represented by rays projected from the projectors. The task is to determine the direction and magnitude of each possible ray, denoted Ray. The light field distribution can then be described by the vector set $S$ defined in Eq. (1):

## (1)

$$S=\left\{\mathrm{Ray}\left(\overrightarrow{{P}_{ij}{A}_{k}}\right)\,|\,i=1,2,\cdots ,N;\;j=1,2,\cdots ,M;\;k=1,2,\cdots ,L\right\}.$$

Take any two viewing positions ${V}_{1}$ and ${V}_{2}$ in the viewing area, for example. By the light field reconstruction principle, the rays that make the point ${A}_{1}$, ${A}_{2}$, or ${A}_{3}$ on the virtual three-dimensional object visible from ${V}_{1}$ and ${V}_{2}$ are emitted from different projecting points, as illustrated in Fig. 1(b). This produces the stereo parallax.

In this system, the distribution of light rays is not designed for particular viewing positions, as it is in multi-view three-dimensional displays. Ideally, by distributing the rays as uniformly as possible, adjacent light rays can be made sufficiently close that the viewed imagery remains continuous, without jumps, as the viewer moves. This produces the smooth motion parallax. The specific image mapping algorithm is given in Sec. 2.3.
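The reconstruction rule above can be sketched in a few lines of code: for a given point on the diffuser screen, the projectors a viewer actually sees are those whose ray through that point leaves within the horizontal diffuse angle of the line toward the viewer. The function below is a minimal two-dimensional (top-view) illustration; the function name, the coordinates, and the 0.9-deg default diffuse angle (taken from the prototype in Sec. 3) are our assumptions, not part of the original derivation.

```python
import math

def seen_projectors(q, v, projector_pupils, eps_h=0.9):
    """Return indices of projectors whose ray through screen point q reaches
    viewer v, for a transmissive directional diffuser that spreads each ray
    by eps_h degrees horizontally (illustrative 2-D top-view sketch)."""
    hits = []
    for i, p in enumerate(projector_pupils):
        d_in = (q[0] - p[0], q[1] - p[1])      # projector -> screen point
        d_out = (v[0] - q[0], v[1] - q[1])     # screen point -> viewer
        ang = math.degrees(
            math.atan2(d_out[1], d_out[0]) - math.atan2(d_in[1], d_in[0]))
        ang = (ang + 180.0) % 360.0 - 180.0    # wrap to (-180, 180]
        if abs(ang) <= eps_h / 2.0:
            hits.append(i)
    return hits
```

For example, a projector directly behind the screen point on the viewer's line of sight is seen, while one offset by a few degrees is not; summing the contributing projectors over all screen points reproduces the continuous imagery described above.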

### 2.2. Characteristics of the Diffuser Screen

Images from the projectors are projected onto the directional diffuser screen from different directions simultaneously. The special directional diffuser functions much like holographic functional materials, which are applied to control the diffuse angle.^{21,22} Because the projectors are arranged in a horizontal arc and only horizontal parallax is considered, the directional diffuser is designed with a certain small diffuse angle horizontally and a large diffuse angle vertically. Hence, one projector contributes a narrow vertical stripe of image for a given viewing position [Fig. 2(a)]. The horizontal diffuse angle is an important characteristic of the diffuser screen for achieving smoother images.
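As an illustrative model only (the paper specifies the diffuse angles but not the profile shape), the anisotropic scattering can be pictured as a separable Gaussian whose full widths at half maximum equal the horizontal and vertical diffuse angles. The Gaussian form is our assumption; the default angles (0.9 deg and 55 deg) come from the prototype measurements in Sec. 3.

```python
import math

def diffuser_transmittance(theta_h, theta_v, eps_h=0.9, eps_v=55.0):
    """Relative luminance of a ray leaving the directional diffuser at
    horizontal/vertical deviations theta_h, theta_v (degrees) from the
    incident direction.  Separable Gaussian whose FWHM equals the diffuse
    angles: an assumed profile, since the paper specifies only the angles."""
    k = 4.0 * math.log(2.0)   # chosen so the FWHM of exp(-k*(x/w)**2) is w
    return math.exp(-k * (theta_h / eps_h) ** 2) * \
           math.exp(-k * (theta_v / eps_v) ** 2)
```

By construction the luminance falls to half its peak at 0.45 deg horizontally but only at 27.5 deg vertically, matching the narrow-stripe behavior described above.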

Based on the geometrical relation in Fig. 2(b), the two principal rays of adjacent projectors, shown in different colors, should be diffused through a certain angle so that they meet accurately, as the shaded regions show. We can then obtain the relation between the interval angle of adjacent projectors $\delta $ and the diffuse angle of the special diffuser $\epsilon $, defined by Eq. (2):

where $R$ is the radius of the directional diffuser, and $D$ is the distance from the projector to the center of the system $O$. Since $\delta $ and $\epsilon $ are rather small, the small-angle approximation yields Eq. (3). Finally, the optimum diffuse angle of the special diffuser $\epsilon $ is defined by Eq. (4):

In addition, the boundary condition on $\delta $, which must be satisfied to attain stereo parallax and motion parallax as the head moves across different viewing positions, is given by Eq. (5), where $d$ is the distance between the two eye-pupils, and ${R}_{V}$ is the optimum viewing distance from the center of the system $O$.

### 2.3. Image Synthesis Algorithm

The task of the image synthesis algorithm is to produce the images projected by the projectors so as to correctly represent the light emitted from the virtual three-dimensional object. In contrast to multi-view three-dimensional displays, our system does not generate view images for particular viewing positions, so it cannot directly use the standard multiple-center-of-projection (MCOP) algorithm that has generally been used in previous three-dimensional displays.^{23} The key feature of our image synthesis lies in its ability to generate 360-deg projecting images with a more feasible and flexible algorithm. Based on the adaptation proposed in Ref. 21, we present our modified image synthesis algorithm here.

The scene point ${A}_{k}$ is first projected to the projector array to find its projected pixel ${P}_{ij}$, as well as the projector it belongs to. The intersection of the ray $\overrightarrow{{P}_{ij}{A}_{k}}$ with the arc diffuser screen is then computed and regarded as the shading point for observation. Such a process is widely adopted in computer graphics.

Specifically, take a central projector in the conventional $xyz$ coordinate system first, and deduce the algorithm in the $xy$ plane and $yz$ plane individually; the image synthesis algorithm is illustrated in Fig. 3. Based on the divergence angle $\omega $ of each projector, assume an image plane, which is also the projected image, located on the $yz$ plane and defined as ${Q}_{1}{Q}_{2}$ in the figure. The choice of location of this computing image plane does not affect the final mapping results. A certain pixel of a projector has a horizontal and a vertical position and is denoted ${P}_{\mathrm{ist}}$, where $s$ and $t$ are the horizontal and vertical sequence numbers of the pixel in the projector, respectively. The product of the maximum $s$ and $t$ equals $M$ mentioned above. The intersection of the vector $\overrightarrow{{P}_{\mathrm{ist}}{A}_{k}}$ and ${Q}_{1}{Q}_{2}$ gives the pixel ${P}_{\mathrm{ist}}$ that needs to be projected. In the given $xyz$ coordinate system, the pixel ${P}_{\mathrm{ist}}$ $(0,y,z)$ can be inferred from the known ${A}_{k}$ $({x}_{1},{y}_{1},{z}_{1})$ and ${P}_{m}$ $(-D,0,0)$, as given by Eq. (6):

Then we further get the value of $s$ and $t$, which can map a certain pixel in the projector it belongs to, defined by Eq. (7):

## (7)

$$(s,t)=\mathrm{round}\left[\left(\frac{y}{{Q}_{1}{Q}_{2}}+1\right)\cdot \frac{{M}_{1}}{2},\left(\frac{z}{{Q}_{1}{Q}_{2}}+1\right)\cdot \frac{{M}_{2}}{2}\right],$$

where *round* rounds the number to the nearest integer, and ${M}_{1}$ and ${M}_{2}$ are the total pixel numbers of the projector in horizontal and vertical, as mentioned above. Hence we get the final equation [Eq. (8)] by substituting all the known parameters:

## (8)

$$(s,t)=\mathrm{round}\left\{\left[\frac{{y}_{1}\cdot \mathrm{cot}(\omega /2)}{{x}_{1}+D}+1\right]\cdot \frac{{M}_{1}}{2},\left[\frac{{z}_{1}\cdot \mathrm{cot}(\omega /2)}{{x}_{1}+D}+1\right]\cdot \frac{{M}_{2}}{2}\right\}.$$

Traversing all the scene points ${A}_{k}$ then yields all the intersection points ${P}_{\mathrm{ist}}$ for this projector, which is chosen as the basic reference. After the mapping relation is attained for this projector, the analogous relation for an arbitrary projector ${P}_{i}$ is easily obtained by rotating the coordinate system from $xyz$ to $uvz$ around the origin $O$ by an angle of $\frac{(N-i)\cdot \delta}{2}$, where $\delta $ is the interval angle of adjacent projectors mentioned above. Under this rotation, ${A}_{k}$ $({x}_{1},{y}_{1},{z}_{1})$ becomes ${A}_{k}{}'$ $({u}_{1},{v}_{1},{z}_{1})$, and the mapping results are attained by substituting ${A}_{k}{}'$ into Eq. (8). Hence we finally obtain the full series of projected source images, even for 360-deg display requirements.
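The mapping of Eqs. (7) and (8), together with the coordinate rotation for an arbitrary projector, can be transcribed almost directly. The sketch below is ours (the function and parameter names are assumptions, and the rotation sense is our reading of the text), with the projector pupil at $(-D,0,0)$ and angles in degrees.

```python
import math

def pixel_for_point(a, D, omega, M1, M2, i=None, N=None, delta=None):
    """Map a scene point a = (x1, y1, z1) to the pixel (s, t) of the central
    projector via Eq. (8).  If i, N, and delta are supplied, first rotate the
    point into projector i's frame by (N - i) * delta / 2 deg, as described
    in the text (rotation sense assumed)."""
    x1, y1, z1 = a
    if i is not None:
        phi = math.radians((N - i) * delta / 2.0)
        x1, y1 = (x1 * math.cos(phi) - y1 * math.sin(phi),
                  x1 * math.sin(phi) + y1 * math.cos(phi))
    cot = 1.0 / math.tan(math.radians(omega) / 2.0)   # cot(omega / 2)
    s = round((y1 * cot / (x1 + D) + 1.0) * M1 / 2.0)
    t = round((z1 * cot / (x1 + D) + 1.0) * M2 / 2.0)
    return s, t
```

For instance, a point at the system center maps to the central pixel of the central projector, and looping the function over all scene points and all $i=1,\dots,N$ yields the full set of source images.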

## 3. Prototype Equipment and Experimental Results

To verify this display principle and system design, a prototype is developed with 100 synchro-controlled LCoS mini-projectors and a nearly 2-m-diameter arc directional diffuser. The prototype is shown in Fig. 4. The 100 mini-projectors are set in four rows staggered horizontally to bring the adjacent exit-pupils closer in the horizontal direction. The prototype displays three-dimensional images with numerous continuous horizontal viewing positions around the diffuser within a large vertical observing area. The interval angle between adjacent projectors is designed as 0.8 deg, matched to the horizontal diffuse angle of the special directional diffuser. The present resolution of the two-dimensional image from each projector is $320\times 240$ pixels, while the three-dimensional image is as large as 110 cm in width and 90 cm in height. The specifications of the prototype are listed in Table 1. A computer-graphics real-time image synthesis algorithm is applied to generate the projected source images for light field reconstruction, as introduced in Sec. 2.3.

## Table 1

Specifications of the prototype equipment.

| Number of projectors | 100 |
|---|---|
| Interval angle of adjacent projectors | 0.8 deg |
| Two-dimensional image resolution of projector | 320×240 pixels |
| Three-dimensional image size | Nearly 1.1×0.9 m in central view |
| Directional diffuse angle | Approximately 0.9 deg horizontal, 55 deg vertical |
| Optimum viewing area | 1.0 to 1.5 m from the diffuser screen |

As analyzed in Sec. 2.2, the directional diffuser plays a crucial part in forming continuous images from many element images. The luminance angular distribution of the light rays emitted through the special directional diffuser therefore strongly influences the uniformity of the three-dimensional images, as well as the splicing accuracy. Here we assemble two differently designed perpendicular lenticular sheets (supplied by a Chinese manufacturer) to attain the required horizontal and vertical diffuse angles. The characteristics are measured with a narrow parallel beam of incoherent rays (an LED source), which passes through the directional diffuser and then illuminates a band area on a Lambertian screen.

The image of the illuminated band area is then captured by a camera, as illustrated in Fig. 5(a). After the image is processed with engineering software, the luminance angular distribution is obtained, as illustrated in Fig. 5(b). The curve demonstrates that the diffuser spreads the light into a controlled angular distribution in both the horizontal and vertical directions. In the figure, the horizontal axis represents the angle of the diffused light, and the vertical axis the relative luminance; the horizontal and vertical angular distributions are shown by the dotted and solid lines, respectively. From the distribution, it can be inferred that the small horizontal diffuse angle, about 0.9 deg taking the half-width of the angular distribution, matches the interval angle of adjacent viewing positions. Meanwhile, the vertical diffuse angle is sufficiently large to create a comfortable observing area. Because of this characteristic, when an element image is projected onto the special diffuser screen, only a narrow stripe of the image is observed.
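For readers reproducing the measurement, the half-width read from a curve like Fig. 5(b) is an ordinary full width at half maximum of the sampled luminance profile. The helper below is an illustrative sketch (not the authors' processing software), using linear interpolation between samples.

```python
def half_width(angles, lum):
    """Full width at half maximum of a sampled luminance angular
    distribution (angles in degrees, ascending order).  Walks outward from
    the peak and linearly interpolates the two half-maximum crossings."""
    peak = max(lum)
    half = peak / 2.0

    def cross(i0, step):
        # Walk from index i0 in the given direction while still above half.
        i = i0
        while 0 <= i + step < len(lum) and lum[i + step] >= half:
            i += step
        j = i + step
        if not (0 <= j < len(lum)):
            return angles[i]                       # curve never drops below
        f = (lum[i] - half) / (lum[i] - lum[j])    # linear interpolation
        return angles[i] + f * (angles[j] - angles[i])

    ipk = lum.index(peak)
    return cross(ipk, 1) - cross(ipk, -1)
```

Applied to the measured horizontal profile, such a computation gives the roughly 0.9-deg value quoted above.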

Figures 6(a) to 6(c) show the designed results of a three-dimensional scene, each rotated by 2.4 deg. A three-dimensional scene of a color cartoon tiger and a color cartoon dragon is designed for display. Figures 7(a) to 7(c) show photos of a three-dimensional scene displayed by the prototype, captured at different horizontal viewing positions. Bright, large-sized, vivid three-dimensional scenes displayed by the system can clearly be observed at different horizontal viewing positions around the cylindrical display area. Observers gain depth perception from delicate stereo parallax and horizontal motion parallax.

This method can also achieve a floating effect for interaction to some degree. Figures 8(a) and 8(b) present photos of an observer interacting, or "playing," with the vivid virtual characters from two different perspectives. Moreover, other interaction devices can be attached to build future three-dimensional human–machine interaction systems, which could be widely applied in many areas, including medicine, advertising, and public exhibition.

The number of views and the image resolution are two crucial problems for natural three-dimensional display, mainly because the resolution of the underlying light field display is shared among the views. As mentioned above, this light field display approach produces smooth motion parallax, but the approximations introduced may cause distortion between two projector exit-pupils, a distinct disadvantage compared with multi-view auto-stereoscopic displays. The multi-projection style is thus verified to compensate for such problems and to potentially achieve large-sized all-around display.

In terms of imaging performance, flat images projected onto an arc screen of such large curvature tend to lose pixel fineness through defocus and may suffer image distortion, which causes problems such as stripe images not splicing accurately. Appropriate and fast geometric calibration is the next task of our research. In addition, the occurrence of dark stripes, visible in the experimental results, is another noticeable problem; unstable performance of the experimental mini-projectors and aberrations of the system configuration are its main causes. Meanwhile, the optimum match between the hardware parameters and the diffuser characteristics should be designed in further research, considering the splicing accuracy and the luminance uniformity of the images.

## 4. Conclusions

In conclusion, our research demonstrates an approach to large-sized light field three-dimensional display with potential use in future natural three-dimensional cinema. The multi-projection style is verified in the prototype to provide large-sized, even 360-deg, natural imaging performance with motion parallax and stereo parallax. The directional diffuser screen performs well in diffusing light over different ranges in the horizontal and vertical directions, and its luminance uniformity and acceptable transmittance demonstrate a degree of commercial feasibility. More refined and delicate 360-deg all-around reconstructed imaging performance can be expected by utilizing more mini-projectors or pico-projectors. Bright, large-sized, floating three-dimensional scenes reconstructed by this approach can therefore be observed, with less fatigue, at continuous horizontal viewing positions around the arc diffuser screen. This approach makes a significant engineering contribution toward 360-deg displays with motion parallax in the near future.

## Acknowledgments

This work was supported by research grants from the National Key Fundamental Research Program of 863 (2012AA011902), the National Natural Science Foundation (61177015), and the Fundamental Research Funds for the Central Universities of China (2012XZZX013). We thank the State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, for providing the necessary support.

## References

1. N. A. Dodgson, "Autostereoscopic 3D displays," IEEE Computer 38(8), 31–36 (2005). http://dx.doi.org/10.1109/MC.2005.252

2. F. L. Kooi and A. Toet, "Visual comfort of binocular and 3D displays," Displays 25(2), 99–108 (2004). http://dx.doi.org/10.1016/j.displa.2004.07.004

3. J. Cobb, "Autostereoscopic desktop display: an evolution of technology," Proc. SPIE 5664, 139–149 (2005). http://dx.doi.org/10.1117/12.585053

4. Y. Takaki and N. Nago, "Multi-projection of lenticular displays to construct a 256-view super multi-view display," Opt. Express 18(9), 8824–8835 (2010). http://dx.doi.org/10.1364/OE.18.008824

5. J. Lee et al., "Optical performance analysis method of auto-stereoscopic 3D displays," in SID Int. Symp. Digest, pp. 327–330, SID, Seattle (2010).

6. W. X. Zhao et al., "Autostereoscopic display based on two-layer lenticular lenses," Opt. Lett. 35(24), 4127–4129 (2010). http://dx.doi.org/10.1364/OL.35.004127

7. J. Hong et al., "Three-dimensional display technologies of recent interest: principles, status, and issues," Appl. Opt. 50(34), H87–H115 (2011). http://dx.doi.org/10.1364/AO.50.000H87

8. N. S. Holliman et al., "Three-dimensional displays: a review and applications analysis," IEEE Trans. Broadcast. 57(2), 362–370 (2011). http://dx.doi.org/10.1109/TBC.2011.2130930

9. J. H. Park, N. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48(34), H77–H94 (2009). http://dx.doi.org/10.1364/AO.48.000H77

10. G. E. Favalora, "Volumetric 3D displays and application infrastructure," IEEE Computer 38(8), 37–44 (2005). http://dx.doi.org/10.1109/MC.2005.276

11. C. J. Yan et al., "Omnidirectional multiview three-dimensional display based on direction-selective light-emitting diode array," Opt. Eng. 50(3), 034003 (2011). http://dx.doi.org/10.1117/1.3552664

12. X. Y. Xie, X. Liu, and Y. F. Lin, "The investigation of data voxelization for a three dimensional volumetric display system," J. Opt. A: Pure Appl. Opt. 11(4), 045707 (2009). http://dx.doi.org/10.1088/1464-4258/11/4/045707

13. C. J. Yan et al., "Color three-dimensional display with omni-directional view based on a light-emitting diode projector," Appl. Opt. 48(22), 4490–4495 (2009). http://dx.doi.org/10.1364/AO.48.004490

14. X. X. Xia et al., "Omnidirectional-view three-dimensional display system based on cylindrical selective-diffusing screen," Appl. Opt. 49(26), 4915–4920 (2010). http://dx.doi.org/10.1364/AO.49.004915

15. S. M. Liu, C. F. Chen, and K. C. Chou, "The design and implementation of a low-cost 360-degree color LED display system," IEEE Trans. Consum. Electron. 57(2), 289–296 (2011). http://dx.doi.org/10.1109/TCE.2011.5955158

16. T. Balogh, "Method and apparatus for generating 3D images," U.S. Patent No. 7,959,294 (2011).

17. T. Balogh, "The HoloVizio system," Proc. SPIE 6055, 60550U (2006). http://dx.doi.org/10.1117/12.650907

18. J. A. I. Guitián, E. Gobbetti, and F. Marton, "View-dependent exploration of massive volumetric models on large-scale light field displays," Vis. Comput. 26(6), 1037–1047 (2010). http://dx.doi.org/10.1007/s00371-010-0453-y

19. R. G. Yang et al., "Toward the light field display: autostereoscopic rendering via a cluster of projectors," IEEE Trans. Vis. Comput. Graph. 14(1), 84–96 (2008). http://dx.doi.org/10.1109/TVCG.2007.70410

20. J. Stewart, E. P. Bennett, and L. McMillan, "PixelView: a view-independent graphics rendering architecture," in Proc. ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware, pp. 75–84, ACM, New York (2004).

21. A. Jones et al., "Rendering for an interactive 360° light field display," in Proc. ACM SIGGRAPH Emerging Technologies, Vol. 40, ACM, New York (2007).

22. S. Li et al., "Full-parallax three-dimensional display using new directional diffuser," Chin. Opt. Lett. 9(8), 081202 (2011). http://dx.doi.org/10.3788/COL

23. M. W. Halle, "Multiple viewpoint rendering," in Proc. ACM SIGGRAPH, pp. 243–254, ACM, New York (1998).

## Biography

**Yi-fan Peng** received his BS degree in information engineering from Zhejiang University, China, in 2010. Currently, he is studying for his MS degree in the State Key Laboratory of Modern Optical Instrumentation, Zhejiang University. He is a student member of the Society for Information Display and the Optical Society of America. His research interests include three-dimensional image acquisition and display, and display characteristic testing.

**Hai-feng Li** is a professor in the Department of Optical Engineering, Zhejiang University, China. He received his BA and MS in physics, in 1988 and 1991, respectively, from Nankai University, and his PhD in optical engineering from Zhejiang University, China in 2002. He has been a faculty member at Zhejiang University since 1991. His current research works include three-dimensional light field display, projection display, and liquid crystal devices.

**Qing Zhong** received his BS degree in information engineering from Zhejiang University, China in 2011. Currently, he is studying for his PhD degree in State Key Laboratory of Modern Optical Instrumentation, Zhejiang University. His research interests include projection techniques, three-dimensional display, and display characteristic testing.

**Xin-Xing Xia** received his BS degree in information engineering from Zhejiang University in 2008. Currently, he is studying for his PhD degree in State Key Laboratory of Modern Optical Instrumentation, Zhejiang University. His research interests include three-dimensional imaging acquisition and display, and display characteristics.

**Xu Liu** obtained his BS and MS degrees from Zhejiang University, China, in 1984 and 1986, respectively, and his PhD from ENSPM, Université d'Aix-Marseille III, France, in 1990. He is now a professor in the Department of Optical Engineering, Zhejiang University, the director of the State Key Laboratory of Modern Optical Instrumentation, and the executive dean of the Faculty of Information Science of Zhejiang University. He is the vice chair of the Chinese Optics Society. His research fields are optical thin films and thin-film technology, optical precision detection, projection display, and three-dimensional display.