30 May 2018

Review of real-time reconstruction techniques for aerial-projection holographic displays
Abstract
Electroholography enables the projection of three-dimensional (3-D) images using a spatial-light modulator. The extreme computational complexity and load involved in generating a hologram make real-time production of holograms difficult. Many methods have been proposed to overcome this challenge and realize real-time reconstruction of 3-D motion pictures. We review two real-time reconstruction techniques for aerial-projection holographic displays. The first reduces the computational load required for a hologram by using an image-type computer-generated hologram (CGH) because an image-type CGH is generated from a 3-D object that is located on or close to the hologram plane. The other technique parallelizes CGH calculation via a graphics processing unit by exploiting the independence of each pixel in the holographic plane.
Kakue, Wagatsuma, Yamada, Nishitsuji, Endo, Nagahama, Hirayama, Shimobaba, and Ito: Review of real-time reconstruction techniques for aerial-projection holographic displays

1.

Introduction

Holography1 involves the recording and reconstruction of three-dimensional (3-D) images using the interference and diffraction of light. Holography records 3-D images of an object as interference fringe patterns called holograms. Since holography can satisfy all of the physiological factors required for 3-D recognition, such as binocular disparity, motion parallax, focus adjustment, and convergence, it can display 3-D images without causing eye fatigue.

Although classical holography can reconstruct high-quality 3-D images using high-resolution holographic recording materials, dynamic objects are difficult to record and reconstruct because rapid rewriting of these materials is difficult. Electroholography has been proposed to address this challenge.2 Because electroholography uses a spatial-light modulator (SLM), which can dynamically switch displayed patterns, to display holograms, it can reconstruct 3-D motion pictures. Here, hologram patterns need to be computed to display holograms on an SLM. Computer-generated holograms (CGHs)3,4 can be produced by numerically simulating the propagation of light from the 3-D object to the hologram plane with diffraction calculations. Several methods are available to calculate CGHs, such as the point-cloud method, the polygon method, and the multiview-image method, but they all entail computational costs and complexity that render dynamic reconstruction of 3-D objects impractical. CGH calculation must be accelerated or made more efficient to make real-time electroholographic displays feasible. With the point-cloud method, CGHs can be calculated using graphics processing units (GPUs)5–9 because the calculations can be parallelized owing to the independence of each pixel in the holographic plane. However, for the point-cloud method, real-time calculation of a high-resolution CGH of a large 3-D object remains computationally prohibitive. Hence, reducing the computational complexity of the point-cloud method could enable rapid, real-time CGH calculation.

A recurrence-relation algorithm10–13 can reduce computational complexity by replacing multiplication operations with additions in its calculations. This algorithm exploits the phase differences between neighboring pixels in the hologram plane and is well suited to hardware implementations of CGH calculations. Some researchers have developed special-purpose computers, referred to as “HORN” (holographic reconstruction) systems, to distribute the calculations. The recurrence-relation algorithm was adopted by the HORN system, and CGH calculations were successfully accelerated by implementing a pipeline structure on parallel hardware, such as field-programmable gate arrays.14–17 The latest version of the HORN system, the eight-board HORN-8 cluster, accelerated the CGH calculations by more than 1000 times compared to the speed achieved with a central-processing unit (CPU) making full use of four cores.17

A patch-model algorithm18,19 also exploits the recurrence relations between neighboring pixels. Although the patch-model algorithm can effectively accelerate the CGH calculations when there is a large number of point-light sources, the approximation error becomes large when the distance between a 3-D object and the CGH plane is large. A look-up table (LUT) algorithm can also be used to reduce computational complexity.20–28 The LUT algorithm precalculates values of complex amplitude and stores them in an LUT. Although storing the precalculated results in the LUT requires large memory allocation, this strategy can significantly accelerate CGH calculations and can be combined with other approaches. An algorithm based on image-type holograms has also successfully reduced computational complexity.29,30 This algorithm calculates CGH patterns on only a small region of the CGH plane because the 3-D object is placed close to the CGH plane. The wavefront-recording-plane (WRP) algorithm31–37 similarly requires that the 3-D object be placed near the WRP. Although the WRP algorithm requires diffraction calculations for the region between the WRP and the CGH plane in addition to CGH calculations based on the point-cloud method, the diffraction calculations can be accelerated using fast Fourier transforms (FFTs). Because the computational complexity of the FFT can be expressed as O(N log N), where N indicates the sampling number and is a power of 2, the WRP algorithm can drastically accelerate the CGH calculations when the CGH resolution is large. However, the acceleration ratio of the WRP algorithm decreases as the distance from the 3-D object increases.

An algorithm based on the point symmetry of a zone plate, i.e., a CGH generated from a single point-light source, can also reduce the computational complexity.38,39 An adaptive point-spread spherical wave synthesis algorithm40,41 accelerates CGH calculations by reducing the number of point-light sources using data-compression ideas from graphics processing. An algorithm based on the sparsity of hologram patterns42 can accelerate CGH calculations by using sparse FFTs, whose computational complexity is lower than that of standard FFTs. A wavelet shrinkage-based superposition algorithm accelerates CGH calculations by approximating point-spread functions with a representative sample of wavelet coefficients.43 Table 1 summarizes the advantages and disadvantages of several representative CGH-calculation algorithms used in the point-cloud method.

Table 1

Advantages and disadvantages of representative CGH-calculation algorithms for the point-cloud method.

Algorithm | Advantages | Disadvantages
Recurrence relation | Easy and suitable for implementation in hardware with a pipeline structure | Errors accumulate with the length of the recurrence relation
Patch model | Faster when the number of point-light sources is large | Large errors when the distance between the 3-D object and the CGH plane is large
LUT | Much faster CGH generation; can be combined with other approaches | Requires significant memory allocation to store the precalculated results
WRP | Very low computational complexity when the resolution of the CGH is large | High computational complexity when the 3-D object has a large depth
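The LUT idea in Table 1 can be sketched in a few lines: precompute the spherical-wave patch for each depth once, then reuse it by shift-and-add for every point-light source at that depth. The following NumPy sketch is illustrative only; the patch size, wavelength, pitch, and function names are our assumptions, not taken from the cited papers.

```python
import numpy as np

WAVELENGTH = 532e-9   # light wavelength in meters (assumed)
PITCH = 8e-6          # SLM pixel pitch in meters (assumed)
PATCH = 128           # half-width of the precomputed patch, in pixels

def zone_plate(z):
    """Complex amplitude of a single point source at depth z (spherical wave)."""
    n = np.arange(-PATCH, PATCH)
    x, y = np.meshgrid(n * PITCH, n * PITCH)
    r = np.sqrt(x**2 + y**2 + z**2)
    return np.exp(1j * 2 * np.pi / WAVELENGTH * np.sign(z) * r) / r

def cgh_with_lut(points, shape, depths):
    """Accumulate precomputed patches instead of evaluating exp() per point."""
    lut = {z: zone_plate(z) for z in depths}        # precalculation step
    h = np.zeros(shape, dtype=np.complex128)
    for px, py, z, amp in points:                   # px, py: patch-center pixel
        h[py - PATCH:py + PATCH, px - PATCH:px + PATCH] += amp * lut[z]
    return np.angle(h)                              # phase-modulation hologram
```

The memory-versus-speed trade-off in Table 1 is visible here: the LUT stores one full complex patch per depth level, but the inner loop over points contains no transcendental-function evaluation.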

In this paper, we review two strategies for achieving electroholographic display systems that can project aerial 3-D images in real time. Both strategies are based on the point-cloud method. Although the point-cloud method requires a huge number of point-light sources to obtain a realistic model of a 3-D object, the method is well suited to hardware implementation because the CGH-calculation processes can be parallelized with a pipeline structure. Miniaturization or modularization of the CGH-calculation system is required for applications to head-mounted displays and other mobile devices. For example, our research group aims to create a chip dedicated to electroholography to upgrade the HORN system.17 Hence, we adopted the point-cloud method in the two real-time electroholographic display systems described herein. The first reduces the computational load required for each CGH, and the second parallelizes the CGH calculations using a GPU instead of the HORN system.

2.

Electroholography and Computer-Generated Holograms

2.1.

Electroholography

Figure 1 is a schematic of the optical setup used for electroholography. Light from the optical source passes through the beam expander and illuminates the SLM. The SLM diffracts the light according to the pattern of the hologram. The first-order diffracted light is referred to as an object wave and corresponds to the light emitted from the 3-D object while recording the hologram. An observer perceives a 3-D object when receiving these object waves. Although Fig. 1 shows a monochromatic reconstruction system for simplicity, full-color reconstruction is possible with multiple optical sources of red, green, and blue light. Many studies have been published regarding color electroholography using several SLMs44–49 or using a single SLM with space-division multiplexing,50,51 depth-division multiplexing,52,53 or time-division multiplexing.54–59

Fig. 1

Schematic of an optical setup for electroholography.


2.2.

Computer-Generated Holograms

A CGH3,4 is a hologram that is generated by numerical simulation of light diffraction and interference patterns. CGHs can be calculated via methods such as the point-cloud method, the polygon method, and the multiview-image method.

2.2.1.

Point-cloud method

Figure 2 is a schematic of the point-cloud method. A 3-D object is represented as an aggregate of point-light sources, known as a point cloud. We denote the coordinates of the l’th point-light source Pl and of the corresponding pixel on the hologram plane by Pl(xl, yl, zl) and H(xH, yH, 0), respectively. Then, ul(xH, yH, 0) indicates the complex amplitude of the light emitted from Pl at H:

(1)

u_l(x_H, y_H, 0) = \frac{A_l}{r_l} \exp\left\{ i \frac{2\pi}{\lambda} \operatorname{sgn}(z_l)\, r_l \right\},

(2)

r_l = \sqrt{(x_l - x_H)^2 + (y_l - y_H)^2 + z_l^2}.

Fig. 2

Schematic of the point-cloud method.


Here, Al, i, λ, and sgn(R) indicate the amplitude of the light emitted from Pl, the imaginary unit, the wavelength of the light, and the signum function of a real number R, respectively. We can calculate h(xH, yH, 0), the complex amplitude formed by the 3-D object at H, by summing ul(xH, yH, 0) over all point-light sources:

(3)

h(x_H, y_H, 0) = \sum_{l=1}^{L} \frac{A_l}{r_l} \exp\left\{ i \frac{2\pi}{\lambda} \operatorname{sgn}(z_l)\, r_l \right\},
where L is the number of point-light sources. The hologram pattern can finally be calculated from h(xH, yH, 0). Since commercially available SLMs can modulate only the intensity or the phase of light, not the complex amplitude, amplitude-modulation or phase-modulation holograms need to be generated. We represent the amplitude-modulation hologram h_amplitude(xH, yH, 0) and the phase-modulation hologram h_phase(xH, yH, 0) with the following equations:

(4)

h_{\mathrm{amplitude}}(x_H, y_H, 0) = \operatorname{Re}[h(x_H, y_H, 0)],

(5)

h_{\mathrm{phase}}(x_H, y_H, 0) = \arg[h(x_H, y_H, 0)].

Here, Re[C] and arg[C] indicate the real part and the argument of a complex number C, respectively. Since Eq. (1) is calculated independently for each pixel on the hologram plane and for each point-light source of the 3-D object, we can accelerate the calculation using parallel processing with the multiple cores of a CPU or a GPU. We can also accelerate CGH computations by using fast algorithms for the point-cloud calculations.6,7,10–13,18–43
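Eqs. (1)–(5) map directly onto a few lines of array code. The NumPy sketch below (the default parameter values are assumptions, not from the text) superposes the spherical waves of all point-light sources and extracts a phase-modulation hologram; because every hologram pixel is independent, the same loop body is what a GPU kernel would execute with one thread per pixel.

```python
import numpy as np

def point_cloud_cgh(points, amps, res=(256, 256), pitch=8e-6, wl=532e-9):
    """Phase-modulation CGH from a point cloud, following Eqs. (1)-(3), (5)."""
    ny, nx = res
    xh = (np.arange(nx) - nx / 2) * pitch            # hologram-plane coordinates
    yh = (np.arange(ny) - ny / 2) * pitch
    XH, YH = np.meshgrid(xh, yh)
    h = np.zeros(res, dtype=np.complex128)
    for (xl, yl, zl), al in zip(points, amps):
        rl = np.sqrt((xl - XH)**2 + (yl - YH)**2 + zl**2)              # Eq. (2)
        h += al / rl * np.exp(1j * 2 * np.pi / wl * np.sign(zl) * rl)  # Eqs. (1), (3)
    return np.angle(h)                               # Eq. (5): phase hologram
```

Swapping `np.angle(h)` for `h.real` yields the amplitude-modulation hologram of Eq. (4) instead.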

2.2.2.

Polygon method

Figure 3 is a schematic of the polygon method.60–64 The polygon method expresses a 3-D object as an aggregate of planar polygons. We can obtain h(xH, yH, 0) by accumulating the light diffracted from each polygon onto the hologram plane. Light diffraction is calculated by the Fresnel diffraction method or the angular-spectrum method. We assume that the complex amplitudes of the source plane and the destination plane are denoted as us(xs, ys, 0) and ud(xd, yd, z), respectively. The Fresnel diffraction method65 can be expressed by

(6)

u_d(x_d, y_d, z) = \mathrm{IFT}\big[\, \mathrm{FT}[u_s(x_s, y_s, 0)] \cdot \mathrm{FT}[g(x_s, y_s, z)] \,\big],

(7)

g(x_s, y_s, z) = \exp\left[ i \frac{\pi}{\lambda z} \left\{ (x_d - x_s)^2 + (y_d - y_s)^2 \right\} \right].

Fig. 3

Schematic of the polygon method.


Here, FT[C] and IFT[C] indicate the Fourier transform and inverse Fourier transform of a complex number C, respectively. The angular-spectrum method65 can be expressed by

(8)

u_d(x_d, y_d, z) = \mathrm{IFT}\left[ \mathrm{FT}[u_s(x_s, y_s, 0)] \exp\left( i 2\pi z \sqrt{\frac{1}{\lambda^2} - f_x^2 - f_y^2} \right) \right],
where fx and fy indicate the spatial frequencies along the x- and y-axes, respectively. Since Eqs. (6) and (8) are based on Fourier transforms, we can accelerate the calculations using FFTs. However, the Fresnel diffraction method and the angular-spectrum method can be used only when the sampling pitches of the source and destination planes are equal and the planes are parallel. To overcome this limitation, several algorithms have been proposed: the shifted Fresnel diffraction method,66 the aliasing-reduced shifted and scaled Fresnel diffraction method,67 the scaled angular-spectrum method,68,69 etc. In addition, by applying a nonuniform sampling method70–72 to the Fresnel diffraction and angular-spectrum calculations, we can calculate diffraction between planes that are not uniformly sampled.
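As a concrete sketch of Eq. (8), angular-spectrum propagation amounts to two FFTs and one transfer-function multiply. The NumPy sketch below is illustrative; the default pitch and wavelength are assumed values, and evanescent components (where 1/λ² < fx² + fy²) are simply set to zero.

```python
import numpy as np

def angular_spectrum(us, z, pitch=8e-6, wl=532e-9):
    """Propagate the source field us over distance z per Eq. (8)."""
    ny, nx = us.shape
    fx = np.fft.fftfreq(nx, d=pitch)                 # spatial frequencies fx
    fy = np.fft.fftfreq(ny, d=pitch)                 # and fy
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wl**2 - FX**2 - FY**2                # term under the square root
    tf = np.where(arg > 0,
                  np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0))),
                  0.0)                               # suppress evanescent waves
    return np.fft.ifft2(np.fft.fft2(us) * tf)        # IFT[ FT[us] * H ]
```

Because the transfer function has unit magnitude for propagating components, energy is conserved whenever no frequency falls in the evanescent region, which is a handy sanity check for an implementation.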

2.2.3.

Multiview-image method

Figure 4 shows a schematic of the multiview-image method. This method generates a hologram pattern from a series of 2-D images captured from different viewing positions on the hologram plane. Although the multiview-image method can be realized with holographic-stereogram73–76 or ray-wavefront-conversion77–84 approaches, both acquire the complex amplitudes of a 3-D scene from the 2-D images using Fourier transforms. Hence, CGH calculations in the multiview-image method can be accelerated with FFTs. Ichihashi et al.79 succeeded in reconstructing a 3-D scene in real time from CGHs with 7680×4320 pixels at 12 frames/s using the ray-wavefront-conversion approach.

Fig. 4

Schematic of the multiview-image method.


2.2.4.

Other methods

The red, green, blue, and depth (RGB-D) image method represents a 3-D scene by composing sectional or layered color 2-D images at different depths.85–87 Each RGB-D image is a set of a color 2-D image and a depth image. We can obtain RGB-D images using commercially available RGB-D cameras, such as the Microsoft Kinect. A hologram can then be generated from diffraction calculations for the light passing from each sectional color 2-D image to the hologram plane. A 3-D Fourier-spectrum approach88,89 can yield planar as well as cylindrical holograms.

2.2.5.

Comparison

Table 2 summarizes the advantages and disadvantages of three CGH-calculation methods: the point-cloud, polygon, and multiview-image methods. The point-cloud method can be used to reconstruct very-high-definition 3-D images because the complex amplitude on the CGH plane is calculated based on wave optics. However, the computational cost associated with the point-cloud method is very high because the computational complexity is proportional to both the number of point-light sources and the number of pixels in the CGH. Hence, it is important to reduce the computational complexity of the point-cloud method by implementing algorithms such as a recurrence-relation algorithm,10–13 a patch-model algorithm,18,19 an LUT algorithm,20–28 or other algorithms. Like the point-cloud method, the polygon method is based on wave optics and can likewise be used to reconstruct high-definition 3-D images; however, it is also associated with a high computational cost. By contrast, the multiview-image method is based on geometrical optics, and, because various techniques commonly used for 3-D computer graphics based on geometrical optics can be applied to generate 2-D images for different viewpoints, the computational cost is low. Moreover, occlusion culling (or hidden-surface removal), shading, and texture expression can easily be performed in the multiview-image method. However, the resolution of the reconstructed 3-D image is relatively low because the phase information of the light is lost.

Table 2

Advantages and disadvantages of CGH-calculation methods.

Method | Image quality | Computational cost | Ease of occlusion culling, shading, and texture expression
Point-cloud | Very high | Very high | Very complicated
Polygon | High | High | Complicated
Multiview-image | Low to medium | Low | Easy

3.

Real-Time Holographic Aerial-Projection System Based on Image-Type Holograms

In this section, we review a real-time holographic aerial-projection system that yields computation times short enough to allow real-time calculation of CGHs and manipulation of aerial 3-D images. Image-type holograms29 are generated from a 3-D object that is located on or close to the hologram plane. The computational load required for point-cloud calculations generally increases with the square of zl, the distance between a 3-D object and the hologram plane, as shown in Fig. 5, because the range-dependent calculation of Eq. (1) is limited by the pixel pitch p of the SLM. We can reduce the computational load with image-type holograms, but they are difficult to project as aerial 3-D images because image-type holograms can only reconstruct images close to the hologram plane. To address this issue, we included two parabolic mirrors to achieve aerial projection of 3-D images. Figure 6(a) shows a schematic of the proposed holographic aerial-projection system. Two parabolic mirrors, each with a hole at its center, are placed face-to-face on top of each other. An SLM is positioned at the lower hole of the parabolic mirrors. The SLM is inclined at an angle of 20 deg relative to the horizontal plane such that the diffracted waves from the CGHs are incident on the upper parabolic mirror. An image-type CGH is displayed on the SLM, and a holographic image is reconstructed near the SLM plane. The parabolic mirrors then project a real image of the reconstructed holographic image at the upper hole, allowing a view of the holographic image. Figure 6(b) shows an overview of the experimental setup. We used a green laser operated at 532 nm as the optical source. The outer diameter, hole diameter, height, and focal length of each parabolic mirror are 288, 80, 55, and 100 mm, respectively.
We used a phase-modulation SLM (Holoeye Photonics AG) with a pixel pitch of 8.0 μm × 8.0 μm, a resolution of 1920×1080 pixels, a refresh rate of 60 Hz, and 256 gray levels. The SLM was positioned 20 mm away from the bottom of the parabolic mirrors. Figure 7(a) is a schematic of the 3-D object used to confirm the system’s functionality: the skeleton of a fish comprising 1498 point-light sources. The 3-D object was virtually placed 20 mm away from the CGH plane. Figures 7(b)–7(d) show images of the projected motion picture captured by a digital video camera. The position and size of the fish were varied in real time using keyboard operations. In Figs. 7(b) and 7(c), the position of the 3-D object was varied along the y- and z-axes, respectively. Because the focal distance of the camera was fixed, the projected images became out of focus as the 3-D object was moved along the z-axis. In Fig. 7(d), the size of the 3-D object was increased. The results demonstrate that real-time holographic aerial projection is possible with image-type CGHs and two parabolic mirrors.
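The quadratic growth of the load with zl can be checked with back-of-envelope arithmetic: the maximum diffraction angle of an SLM with pixel pitch p is θ = asin(λ/2p), so a point source at depth zl only contributes to a disc of radius zl·tan θ on the hologram plane, and the pixel count inside that disc grows as zl². A minimal sketch (the helper name and default values are ours):

```python
import math

def calc_region_pixels(z, wavelength=532e-9, pitch=8e-6):
    """Pixels on the hologram plane affected by one point source at depth z."""
    theta = math.asin(wavelength / (2 * pitch))   # maximum diffraction angle
    radius_px = z * math.tan(theta) / pitch       # disc radius, in pixels
    return math.pi * radius_px**2                 # disc area, in pixels

# Moving the object from 2 mm to 20 mm multiplies the per-point load by 100.
ratio = calc_region_pixels(20e-3) / calc_region_pixels(2e-3)
```

This is why the image-type configuration, which keeps zl on the order of millimeters, makes real-time calculation tractable even for large point clouds.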

Fig. 5

Range-dependent calculation of Eq. (1).


Fig. 6

Real-time holographic aerial-projection system based on image-type holograms: (a) a schematic and (b) a photograph of the setup.


Fig. 7

Experimental results from (a) a 3-D object comprising 1498 point-light sources and the obtained holographic images using keyboard operations in real time as (b) the 3-D object is moved along the y-axis, (c) the 3-D object is moved along the z-axis, and (d) the 3-D object is enlarged.


To evaluate the real-time performance of the experimental system, we measured the processing time for CGH calculation while changing the number of point-light sources and the distance between the 3-D object and the CGH plane. Here, the pixel pitch and the resolution of the CGHs were set to the same values as in the experiment described above. We applied Nishitsuji’s algorithm39 to accelerate the image-type CGH calculation. We used the following software environment: Microsoft Windows 7 Professional 64 bit; a 3.5-GHz Intel Core i5-4690 CPU with 32 GB of memory (all four cores were used); and Microsoft Visual C++ 2013. Figure 8 shows the measurement results summarizing the relationship between the number of points and the calculation time depending on zl. The processing time for displaying a CGH was small enough to be negligible relative to the CGH calculation time. For simplicity, we used the same value of zl for every point-light source in this measurement. In Fig. 8, both the horizontal and vertical axes use a common-logarithmic scale. The calculation time was averaged over 1000 measurements. Figure 8 shows that the CGH calculation time tends to be proportional to the number of point-light sources when there are more than 100,000 points. By contrast, the acceleration efficiency is low when the number of point-light sources is small. This behavior stems from the calculation process of Nishitsuji’s algorithm, which consists mainly of two steps: generation of zone-plate patterns and synthesis of the patterns. While the calculation time of the former step is independent of the number of point-light sources, the calculation time of the latter step is proportional to the number of point-light sources. Hence, the constant cost of the former step becomes negligible when the number of point-light sources is large.
Figure 8 also shows that the CGH calculation time is mostly proportional to the square of zl and that real-time reconstruction of a 3-D object consisting of 100,000 point-light sources can be realized at 30 frames/s when zl = 2 mm.

Fig. 8

Measurement results summarizing the relationship between the number of points and the calculation time depending on zl.


4.

Application in a Holographic Augmented-Reality Display

Next, we review a real-time holographic display that computes CGHs with a GPU performing the calculations in parallel. The point-cloud method can be accelerated by processing the calculations of Eq. (3) in parallel since Eq. (3) can be calculated independently for each pixel. Hence, we can realize real-time reconstruction of holographic images by the point-cloud method using a GPU.5–9 However, even further acceleration of CGH calculation is required for large holographic displays. In addition, the pixel pitch of commercially available SLMs is too coarse to achieve large reconstructed images with large viewing angles. To avoid this issue, head-mounted and near-eye holographic displays have been researched.86,90–94 A near-eye display does not require large SLMs; the SLMs only need to cover the field of view of a human eye. Since the viewpoint is fixed in a near-eye display, a large viewing angle is not needed. Because of these two benefits, we chose to demonstrate dynamic augmented reality (AR) with a near-eye display. Figures 9(a) and 9(b) show a schematic of a holographic AR display system and a photograph of its prototype for preliminary experiments, respectively. We constructed the system based on Ichikawa’s Fourier-transform-based optical setup.95 We used 532-nm green lasers as optical sources. Since the optical path is bilaterally symmetric in Figs. 9(a) and 9(b), we explain only the left side of the setup. Light emitted from the laser is expanded by the microscope objective; is reflected by the first half mirror; is collimated by the collimator lens, whose focal length is 300 mm; and illuminates the SLM. The SLM has the same specifications as that used in Sec. 3. Reconstructed light from the SLM passes through the lens and the first half mirror and is reflected by the second half mirror. By observing the reconstructed light, holographic images can be seen behind the second half mirror.
Here, we applied the single-sideband method in the display system and half-zone-plate processing for CGH calculations. These methods allow us to reconstruct clear holographic images by filtering out nondiffracted light and the conjugate image with a shading plate.96,97

Fig. 9

Top view of (a) schematic and (b) photograph of the holographic AR display system for preliminary experiment. MO: microscope objective; HM: half mirror; CL: collimator lens; SP: shading plate; and M: mirror.


First, we confirmed that the constructed system could project a holographic image at variable distances. We positioned real objects 0.4 and 0.6 m away from the viewpoint for the left eye and reconstructed a CGH generated from a framework model of a cube’s edges comprising 284 point-light sources. Figure 10 shows images captured by a digital camera positioned at the viewpoint. The value (0.4 or 0.6 m) next to the real object in each figure indicates the distance between the real object and the viewpoint; we focused the digital camera on the real object when capturing the images of Fig. 10. In Figs. 10(a) and 10(b), we projected the CGH of a cube positioned 0.4 m away from the viewpoint. In Figs. 10(c) and 10(d), we reconstructed a CGH of a cube positioned 0.6 m away from the viewpoint. Observe that the reconstructed images of Figs. 10(a) and 10(d) are in focus, whereas those of Figs. 10(b) and 10(c) are out of focus. This test demonstrates the feasibility of projecting holographic images at varying distances for use in AR applications.

Fig. 10

Images projected by the holographic AR display system. (a) and (b) Holographic images are projected 0.4 m away from the viewpoint. (c) and (d) Holographic images are projected 0.6 m away from the viewpoint. Panels (a) and (c) are focused at 0.4 m away from the viewpoint. Panels (b) and (d) are focused at 0.6 m away from the viewpoint.


Next, we introduced a motion sensor into the constructed system to allow a viewer to interactively manipulate holographic images.9,98–100 Figure 11 shows the processing steps for the interactive holographic AR display system. First, we obtain the initial coordinates of a virtual 3-D object and calculate the CGH from the object as described above. The calculated CGH is displayed with the SLM, and a holographic image of the 3-D object is projected. Next, the system detects the horizontal and depth positions of a viewer’s forefinger using a motion sensor. After detecting the position of the forefinger, the system changes the horizontal and depth coordinates of the virtual object so that the projected holographic image follows the forefinger. Subsequently, the system calculates a hologram of the object using the changed coordinates and displays the CGH on the SLM. Repeating these steps in a loop allows interaction with the hologram. We used the same object as in the above experiments with the near-eye display, and we used a Leap Motion sensor (Leap Motion, Inc.) as the motion sensor.101 In the same environment as in Sec. 3, an NVIDIA GeForce GTX 980 GPU (1279-MHz GPU clock, 3505-MHz memory clock, 4096 MB memory, and 2048 cores) was used to enable real-time calculation of CGHs. Compute unified device architecture (CUDA) version 8.0 was used as the integrated development environment for the GPU. Figure 12 illustrates the experimental results. A motion picture of the projected holographic images was captured by a digital camera at a fixed focal plane; 15 frames were extracted from the motion picture. In Figs. 12(a) and 12(b), the forefinger was moved from left to right and from right to left, respectively. Although the holographic image exhibited a short latency due to the CGH calculation time, it followed the movement of the forefinger in real time. In Fig. 12(c), the forefinger was moved from front to back along the depth direction.
Because the focal distance of the camera was fixed, the holographic image was initially out of focus, gradually came into focus, passed the focal distance, and gradually became unfocused once again. These results demonstrate that interactive handling of holographic AR images is feasible with the developed apparatus.

Fig. 11

Processing procedure for the interactive holographic AR display system.


Fig. 12

Images of the interactive holographic AR display system when the forefinger was moved (a) from left to right, (b) from right to left, and (c) from front to back.


To characterize the interactive handling performance of the system, the processing time of the holographic AR display system was evaluated. Here, only the CGH calculation time was measured because the processing time for forefinger detection was less than 0.1 ms and thus considered negligible. Figure 13 shows the measurement results as zl was varied in 100-mm increments over the range of 100 to 500 mm. The calculation time was constant except when zl = 100 mm. This is because the range-dependent calculation of Eq. (1), shown in Fig. 5, covers the whole CGH when zl is larger than 200 mm; that is, the region requiring CGH calculation corresponds to the entire CGH size. Figure 13 also shows that interactive handling can be realized with the constructed system at 30 frames/s when the number of point-light sources is 2000. In the program, “if” statements were used to judge whether the calculation of Eq. (1) is necessary, which reduces the speed-up ratio of the GPU implementation because of branch divergence. Hence, for comparison, the CGH calculation time was also measured without the “if” statements. The results show that the calculation is 1.4 to 2.0 times faster without “if” statements than with them.

Fig. 13

Measurement results summarizing the relationship between the number of points and the calculation time depending on zl in the holographic AR display system.


5.

Conclusion

We reviewed two strategies for improving the real-time performance of electroholographic systems that project aerial 3-D images. The first used parabolic mirrors to project holographic images reconstructed from image-type CGHs in the air without increasing the computational burden. We showed that holographic aerial projection can be realized in real time with image-type CGHs and two parabolic mirrors while the position and size of a 3-D fish model were varied using keyboard operations. The processing time for the CGH calculation was measured to evaluate the real-time performance of the system; the measurements indicate that the proposed apparatus can reconstruct a 3-D object consisting of 100,000 point-light sources at 30 frames/s when the distance between the 3-D object and the hologram plane is about 2 mm. The second system parallelizes CGH calculations using a GPU instead of the HORN system. A holographic AR display system was constructed based on a near-eye display, and the feasibility of projecting holographic images at varying distances was demonstrated for AR applications. In addition, a Leap Motion sensor was incorporated into the system to allow a viewer to manipulate holographic images interactively. Although the holographic image exhibited a short latency due to the CGH calculation time at 30 frames/s with 2000 point-light sources, it followed the forefinger in real time, demonstrating that interactive handling of holographic AR images can be realized with the proposed apparatus. In future studies, we aim to develop a mobile-sized multimodal and multifunctional 3-D display system by incorporating the HORN system17 together with sensing and haptic devices into the holographic display system.

Acknowledgments

This work was partially supported by JSPS Grant-in-Aid No. 25240015 and the Institute for Global Prominent Research, Chiba University.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). https://doi.org/10.1038/161777a0

2. P. St. Hilaire et al., “Electronic display system for computational holography,” Proc. SPIE 1212, 174–182 (1990). https://doi.org/10.1117/12.17980

3. B. R. Brown and A. W. Lohmann, “Complex spatial filtering with binary masks,” Appl. Opt. 5(6), 967–969 (1966). https://doi.org/10.1364/AO.5.000967

4. W. J. Dallas, “Computer generated holograms,” in The Computer in Optical Research, B. R. Frieden, Ed., pp. 291–366, Springer-Verlag, Berlin, Germany (1980).

5. N. Masuda et al., “Computer generated holography using a graphics processing unit,” Opt. Express 14(2), 603–608 (2006). https://doi.org/10.1364/OPEX.14.000603

6. N. Takada et al., “Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system,” Appl. Opt. 51(30), 7303–7307 (2012). https://doi.org/10.1364/AO.51.007303

7. H. Niwase et al., “Real-time spatiotemporal division multiplexing electroholography with a single graphics processing unit utilizing movie features,” Opt. Express 22(23), 28052–28057 (2014). https://doi.org/10.1364/OE.22.028052

8. H. Niwase et al., “Real-time electroholography using a multiple-graphics processing unit cluster system with a single spatial light modulator and the InfiniBand network,” Opt. Eng. 55(9), 093108 (2016). https://doi.org/10.1117/1.OE.55.9.093108

9. S. Yamada et al., “Interactive holographic display based on finger gestures,” Sci. Rep. 8, 2010 (2018). https://doi.org/10.1038/s41598-018-20454-6

10. T. Shimobaba and T. Ito, “An efficient computational method suitable for hardware of computer-generated hologram with phase computation by addition,” Comput. Phys. Commun. 138(1), 44–52 (2001). https://doi.org/10.1016/S0010-4655(01)00189-8

11. H. Yoshikawa, “Fast computation of Fresnel holograms employing difference,” Opt. Rev. 8(5), 331–335 (2001). https://doi.org/10.1007/s10043-001-0331-y

12. J. Weng et al., “Fast recurrence relation for computer-generated-hologram,” Comput. Phys. Commun. 183(1), 46–49 (2012). https://doi.org/10.1016/j.cpc.2011.08.015

13. T. Nishitsuji et al., “Fast calculation of computer-generated hologram using run-length encoding based recurrence relation,” Opt. Express 23(8), 9852–9857 (2015). https://doi.org/10.1364/OE.23.009852

14. T. Shimobaba, S. Hishinuma and T. Ito, “Special-purpose computer for holography HORN-4 with recurrence algorithm,” Comput. Phys. Commun. 148(2), 160–170 (2002). https://doi.org/10.1016/S0010-4655(02)00473-3

15. T. Ito et al., “Special-purpose computer HORN-5 for a real-time electroholography,” Opt. Express 13(6), 1923–1932 (2005). https://doi.org/10.1364/OPEX.13.001923

16. Y. Ichihashi et al., “HORN-6 special-purpose clustered computing system for electroholography,” Opt. Express 17(16), 13895–13903 (2009). https://doi.org/10.1364/OE.17.013895

17. T. Sugie et al., “High-performance parallel computing for next-generation holographic imaging,” Nat. Electron. 1(4), 254–259 (2018). https://doi.org/10.1038/s41928-018-0057-5

18. Y. Ogihara and Y. Sakamoto, “Fast calculation method of a CGH for a patch model using a point-based method,” Appl. Opt. 54(1), A76–A83 (2015). https://doi.org/10.1364/AO.54.000A76

19. T. Sugawara, Y. Ogihara and Y. Sakamoto, “Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit,” Appl. Opt. 55(3), A160–A166 (2016). https://doi.org/10.1364/AO.55.00A160

20. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). https://doi.org/10.1117/12.133376

21. S. C. Kim and E. S. Kim, “Effective generation of digital holograms of three-dimensional objects using a novel look-up table method,” Appl. Opt. 47(19), D55–D62 (2008). https://doi.org/10.1364/AO.47.000D55

22. S. C. Kim, J. M. Kim and E. S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012). https://doi.org/10.1364/OE.20.012021

23. J. Jia et al., “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. 52(7), 1404–1412 (2013). https://doi.org/10.1364/AO.52.001404

24. S. C. Kim and E. S. Kim, “Fast one-step calculation of holographic videos of three-dimensional scenes by combined use of baseline and depth-compensating principal fringe patterns,” Opt. Express 22(19), 22513–22527 (2014). https://doi.org/10.1364/OE.22.022513

25. S. C. Kim, X. B. Dong and E. S. Kim, “Accelerated one-step generation of full-color holographic videos using a color-tunable novel-look-up-table method for holographic three-dimensional television broadcasting,” Sci. Rep. 5, 14056 (2015). https://doi.org/10.1038/srep14056

26. M. W. Kwon, S. C. Kim and E. S. Kim, “Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes,” Appl. Opt. 55(3), A22–A31 (2016). https://doi.org/10.1364/AO.55.000A22

27. J. Song et al., “Fast generation of a high-quality computer-generated hologram using a scalable and flexible PC cluster,” Appl. Opt. 55(13), 3681–3688 (2016). https://doi.org/10.1364/AO.55.003681

28. S. Jiao, Z. Zhuang and W. Zou, “Fast computer generated hologram calculation with a mini look-up table incorporated with radial symmetric interpolation,” Opt. Express 25(1), 112–123 (2017). https://doi.org/10.1364/OE.25.000112

29. T. Yamaguchi and H. Yoshikawa, “Computer-generated image hologram,” Chin. Opt. Lett. 9(12), 120006–120009 (2011). https://doi.org/10.3788/COL

30. T. Kakue et al., “Aerial projection of three-dimensional motion-picture by electro-holography and parabolic mirrors,” Sci. Rep. 5, 11750 (2015). https://doi.org/10.1038/srep11750

31. T. Shimobaba, N. Masuda and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34(20), 3133–3135 (2009). https://doi.org/10.1364/OL.34.003133

32. T. Shimobaba et al., “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18(19), 19504–19509 (2010). https://doi.org/10.1364/OE.18.019504

33. A. H. Phan et al., “Generation speed and reconstructed image quality enhancement of a long-depth object using double wavefront recording planes and a GPU,” Appl. Opt. 53(22), 4817–4824 (2014). https://doi.org/10.1364/AO.53.004817

34. Y. Zhao et al., “Fast calculation method of computer-generated cylindrical hologram using wave-front recording surface,” Opt. Lett. 40(13), 3017–3020 (2015). https://doi.org/10.1364/OL.40.003017

35. A. Symeonidou et al., “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23(17), 22149–22161 (2015). https://doi.org/10.1364/OE.23.022149

36. N. Hasegawa et al., “Acceleration of hologram generation by optimizing the arrangement of wavefront recording planes,” Appl. Opt. 56(1), A97–A103 (2017). https://doi.org/10.1364/AO.56.000A97

37. D. Arai et al., “An accelerated hologram calculation using the wavefront recording plane method and wavelet transform,” Opt. Commun. 393, 107–112 (2017). https://doi.org/10.1016/j.optcom.2017.02.038

38. Z. Yang et al., “A new method for producing computer generated holograms,” J. Opt. 14(9), 095702 (2012). https://doi.org/10.1088/2040-8978/14/9/095702

39. T. Nishitsuji et al., “Fast calculation of computer-generated hologram using the circular symmetry of zone plates,” Opt. Express 20(25), 27496–27502 (2012). https://doi.org/10.1364/OE.20.027496

40. Y. Mori and T. Nomura, “Fast hologram pattern generation by adaptive point-spread spherical wave synthesis,” IEEE J. Disp. Technol. 12(8), 815–821 (2016). https://doi.org/10.1109/JDT.2016.2542262

41. Y. Mori and Y. Arai, “Fast computer hologram generation by flexible-ratio adaptive point-spread spherical wave synthesis,” J. Opt. Soc. Am. A 34(7), 1080–1084 (2017). https://doi.org/10.1364/JOSAA.34.001080

42. H. G. Kim, H. Jeong and Y. M. Ro, “Acceleration of the calculation speed of computer-generated holograms using the sparsity of the holographic fringe pattern for a 3D object,” Opt. Express 24(22), 25317–25328 (2016). https://doi.org/10.1364/OE.24.025317

43. T. Shimobaba and T. Ito, “Fast generation of computer-generated holograms using wavelet shrinkage,” Opt. Express 25(1), 77–86 (2017). https://doi.org/10.1364/OE.25.000077

44. K. Takano and K. Sato, “Color electro-holographic display using a single white light source and a focal adjustment method,” Opt. Eng. 41(10), 2427–2433 (2002). https://doi.org/10.1117/1.1504460

45. A. Shiraki et al., “Simplified electroholographic color reconstruction system using graphics processing unit and liquid crystal display projector,” Opt. Express 17(18), 16038–16045 (2009). https://doi.org/10.1364/OE.17.016038

46. F. Yaraş, H. Kang and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48(34), H48–H53 (2009). https://doi.org/10.1364/AO.48.000H48

47. H. Nakayama et al., “Real-time color electroholography using multiple graphics processing units and multiple high-definition liquid-crystal display panels,” Appl. Opt. 49(31), 5993–5996 (2010). https://doi.org/10.1364/AO.49.005993

48. H. Sasaki et al., “Large size three-dimensional video by electronic holography using multiple spatial light modulators,” Sci. Rep. 4, 6177 (2014). https://doi.org/10.1038/srep06177

49. J. Roh, “Full-color holographic projection display system featuring an achromatic Fourier filter,” Opt. Express 25(13), 14774–14782 (2017). https://doi.org/10.1364/OE.25.014774

50. T. Ito et al., “Holographic reconstruction with a 10-μm pixel-pitch reflective liquid-crystal display by use of a light-emitting diode reference light,” Opt. Lett. 27(16), 1406–1408 (2002). https://doi.org/10.1364/OL.27.001406

51. D. Wang et al., “Color holographic display method based on a single-spatial light modulator,” Opt. Eng. 53(4), 045104 (2014). https://doi.org/10.1117/1.OE.53.4.045104

52. M. Makowski, M. Sypek and A. Kolodziejczyk, “Colorful reconstructions from a thin multi-plane phase hologram,” Opt. Express 16(15), 11618–11623 (2008). https://doi.org/10.1364/OE.16.011618

53. M. Makowski et al., “Experimental evaluation of a full-color compact lensless holographic display,” Opt. Express 17(23), 20840–20846 (2009). https://doi.org/10.1364/OE.17.020840

54. T. Shimobaba and T. Ito, “A color holographic reconstruction system by time division multiplexing with reference lights of laser,” Opt. Rev. 10(5), 339–341 (2003). https://doi.org/10.1007/s10043-003-0339-6

55. T. Shimobaba et al., “Interactive color electroholography using the FPGA technology and time division switching method,” IEICE Electron. Express 5(8), 271–277 (2008). https://doi.org/10.1587/elex.5.271

56. M. Oikawa et al., “Time-division color electroholography using one-chip RGB LED and synchronizing controller,” Opt. Express 19(13), 12008–12013 (2011). https://doi.org/10.1364/OE.19.012008

57. T. Senoh et al., “Viewing-zone-angle-expanded color electronic holography system using ultra-high-definition liquid crystal displays with undesirable light elimination,” IEEE J. Disp. Technol. 7(7), 382–390 (2011). https://doi.org/10.1109/JDT.2011.2114327

58. H. Araki et al., “Real-time time-division color electroholography using a single GPU and a USB module for synchronizing reference light,” Appl. Opt. 54(34), 10029–10034 (2015). https://doi.org/10.1364/AO.54.010029

59. H. Araki et al., “Fast time-division color electroholography using a multiple-graphics processing unit cluster system with a single spatial light modulator,” Chin. Opt. Lett. 15(12), 120902 (2017). https://doi.org/10.3788/COL

60. H. Nishi, K. Matsushima and S. Nakahara, “Rendering of specular surfaces in polygon-based computer-generated holograms,” Appl. Opt. 50(34), H245–H252 (2011). https://doi.org/10.1364/AO.50.00H245

61. Y. Pan et al., “Fast polygon-based method for calculating computer-generated holograms in three-dimensional display,” Appl. Opt. 52(1), A290–A299 (2013). https://doi.org/10.1364/AO.52.00A290

62. K. Matsushima, M. Nakamura and S. Nakahara, “Silhouette method for hidden surface removal in computer holography and its acceleration using the switch-back technique,” Opt. Express 22(20), 24450–24465 (2014). https://doi.org/10.1364/OE.22.024450

63. W. Lee et al., “Semi-analytic texturing algorithm for polygon computer-generated holograms,” Opt. Express 22(25), 31180–31191 (2014). https://doi.org/10.1364/OE.22.031180

64. H. Nishi and K. Matsushima, “Rendering of specular curved objects in polygon-based computer holography,” Appl. Opt. 56(13), F37–F44 (2017). https://doi.org/10.1364/AO.56.000F37

65. J. W. Goodman, Introduction to Fourier Optics, Roberts and Company, Englewood, Colorado (2004).

66. R. P. Muffoletto, J. M. Tyler and J. E. Tohline, “Shifted Fresnel diffraction for computational holography,” Opt. Express 15(9), 5631–5640 (2007). https://doi.org/10.1364/OE.15.005631

67. T. Shimobaba et al., “Aliasing-reduced Fresnel diffraction with scale and shift operations,” J. Opt. 15(7), 075405 (2013). https://doi.org/10.1088/2040-8978/15/7/075405

68. S. Odate et al., “Angular spectrum calculations for arbitrary focal length with a scaled convolution,” Opt. Express 19(15), 14268–14276 (2011). https://doi.org/10.1364/OE.19.014268

69. T. Shimobaba et al., “Scaled angular spectrum method,” Opt. Lett. 37(19), 4128–4130 (2012). https://doi.org/10.1364/OL.37.004128

70. T. Shimobaba et al., “Nonuniform sampled scalar diffraction calculation using nonuniform fast Fourier transform,” Opt. Lett. 38(23), 5130–5133 (2013). https://doi.org/10.1364/OL.38.005130

71. Y.-H. Kim et al., “Exact light propagation between rotated planes using non-uniform sampling and angular spectrum method,” Opt. Commun. 344, 1–6 (2015). https://doi.org/10.1016/j.optcom.2015.01.029

72. Y. Xiao et al., “Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes,” J. Opt. Soc. Am. A 33(10), 2027–2033 (2016). https://doi.org/10.1364/JOSAA.33.002027

73. T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. 15(11), 2722–2729 (1976). https://doi.org/10.1364/AO.15.002722

74. H. Kang, T. Yamaguchi and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. 47(19), D44–D54 (2008). https://doi.org/10.1364/AO.47.000D44

75. Y. Ohsawa et al., “Computer-generated holograms using multiview images captured by a small number of sparsely arranged cameras,” Appl. Opt. 52(1), A167–A176 (2013). https://doi.org/10.1364/AO.52.00A167

76. Y. Takaki and K. Ikeda, “Simplified calculation method for computer-generated holographic stereograms from multi-view images,” Opt. Express 21(8), 9652–9663 (2013). https://doi.org/10.1364/OE.21.009652

77. T. Mishina, M. Okui and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. 45(17), 4026–4036 (2006). https://doi.org/10.1364/AO.45.004026

78. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19(10), 9086–9101 (2011). https://doi.org/10.1364/OE.19.009086

79. Y. Ichihashi et al., “Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms,” Opt. Express 20(19), 21645–21655 (2012). https://doi.org/10.1364/OE.20.021645

80. K. Wakunami, H. Yamashita and M. Yamaguchi, “Occlusion culling for computer generated hologram based on ray-wavefront conversion,” Opt. Express 21(19), 21811–21822 (2013). https://doi.org/10.1364/OE.21.021811

81. S. K. Lee et al., “Hologram synthesis of three-dimensional real objects using portable integral imaging camera,” Opt. Express 21(20), 23662–23670 (2013). https://doi.org/10.1364/OE.21.023662

82. Y. Endo et al., “Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera,” Opt. Commun. 356, 468–471 (2015). https://doi.org/10.1016/j.optcom.2015.08.004

83. S. Igarashi, T. Nakamura and M. Yamaguchi, “Fast method of calculating a photorealistic hologram based on orthographic ray-wavefront conversion,” Opt. Lett. 41(7), 1396–1399 (2016). https://doi.org/10.1364/OL.41.001396

84. H. Sato et al., “Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration,” Sci. Rep. 8, 1500 (2018). https://doi.org/10.1038/s41598-018-19361-7

85. J. S. Chen and D. P. Chu, “Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications,” Opt. Express 23(14), 18143–18155 (2015). https://doi.org/10.1364/OE.23.018143

86. J. S. Chen and D. Chu, “Realization of real-time interactive 3D image holographic display [Invited],” Appl. Opt. 55(3), A127–A134 (2016). https://doi.org/10.1364/AO.55.00A127

87. P. Su et al., “Fast computer-generated hologram generation method for three-dimensional point cloud model,” IEEE J. Disp. Technol. 12(12), 1688–1694 (2016). https://doi.org/10.1109/JDT.2016.2553440

88. Y. Sando, D. Barada and T. Yatagai, “Fast calculation of computer-generated holograms based on 3-D Fourier spectrum for omnidirectional diffraction from a 3-D voxel-based object,” Opt. Express 20(19), 20962–20969 (2012). https://doi.org/10.1364/OE.20.020962

89. Y. Sando et al., “Fast calculation method for computer-generated cylindrical holograms based on the three-dimensional Fourier spectrum,” Opt. Lett. 38(23), 5172–5175 (2013). https://doi.org/10.1364/OL.38.005172

90. G. L. Xue et al., “Multiplexing encoding method for full-color dynamic 3D holographic display,” Opt. Express 22(15), 18473–18482 (2014). https://doi.org/10.1364/OE.22.018473

91. Z. Chen et al., “Acceleration for computer-generated hologram in head-mounted display with effective diffraction area recording method for eyes,” Chin. Opt. Lett. 14(8), 080901–80905 (2016). https://doi.org/10.3788/COL

92. Q. Gao et al., “Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter,” Opt. Express 25(7), 8412–8424 (2017). https://doi.org/10.1364/OE.25.008412

93. R. Haussler et al., “Large real-time holographic 3D displays: enabling components and results,” Appl. Opt. 56(13), F45–F52 (2017). https://doi.org/10.1364/AO.56.000F45

94. A. Maimone, A. Georgiou and J. S. Kollin, “Holographic near-eye displays for virtual and augmented reality,” ACM Trans. Graph. 36(4), 1–16 (2017). https://doi.org/10.1145/3072959

95. T. Ichikawa, T. Yoneyama and Y. Sakamoto, “CGH calculation with the ray tracing method for the Fourier transform optical system,” Opt. Express 21(26), 32019–32031 (2013). https://doi.org/10.1364/OE.21.032019

96. O. Bryngdahl and A. Lohmann, “Single-sideband holography,” J. Opt. Soc. Am. 58(5), 620–624 (1968). https://doi.org/10.1364/JOSA.58.000620

97. T. Mishina, F. Okano and I. Yuyama, “Time-alternating method based on single-sideband holography with half-zone-plate processing for the enlargement of viewing zones,” Appl. Opt. 38(17), 3703–3713 (1999). https://doi.org/10.1364/AO.38.003703

98. Z. Tomori et al., “Holographic Raman tweezers controlled by hand gestures and voice commands,” Opt. Photonics J. 3, 331–336 (2013). https://doi.org/10.4236/opj.2013.32B076

99. L. Shaw, D. Preece and H. Rubinsztein-Dunlop, “Kinect the dots: 3D control of optical tweezers,” J. Opt. 15(7), 075303 (2013). https://doi.org/10.1088/2040-8978/15/7/075303

100. M. Yamaguchi, “Full-parallax holographic light-field 3-D displays and interactive 3-D touch,” Proc. IEEE 105(5), 947–959 (2017). https://doi.org/10.1109/JPROC.2017.2648118

101. Leap Motion Inc., “Leap motion,” https://www.leapmotion.com/ (3 November 2017).

Biography

Takashi Kakue is an assistant professor at Chiba University. He received his BE, ME, and DE degrees from Kyoto Institute of Technology in 2006, 2008, and 2012, respectively. His current research interests include holography, three-dimensional display, three-dimensional measurement, and high-speed imaging. He is a member of SPIE, the Optical Society (OSA), the Institute of Electrical and Electronics Engineers (IEEE), the Optical Society of Japan (OSJ), and the Institute of Image Information and Television Engineers (ITE).

Yoshiya Wagatsuma is a master's student at the Graduate School of Engineering, Chiba University. He received his BE degree from Chiba University in 2017. His current research interests include holography and three-dimensional measurement.

Shota Yamada is a master's student at the Graduate School of Engineering, Chiba University. He received his BE degree from Chiba University in 2017. His current research interests include computer holography and its applications. He is a member of OSJ and the Information Processing Society of Japan (IPSJ).

Takashi Nishitsuji is an assistant professor at Tokyo Metropolitan University. He received his BE, ME, and DE degrees from Chiba University in 2011, 2013, and 2016, respectively. His current research interests include computer holography and its applications. He is a member of OSA, OSJ and IPSJ.

Yutaka Endo is an assistant professor at Kanazawa University. He received his BE, ME, and DE degrees from Chiba University in 2012, 2014, and 2017, respectively. His current research interests include holography and image processing. He is a member of Association for Computing Machinery (ACM), OSA, and OSJ.

Yuki Nagahama is a research fellow of the Japan Society for the Promotion of Science. He received his BE, ME, and DE degrees from Chiba University in 2012, 2015, and 2017, respectively. His current research interests include computer holography and its applications. He is a member of OSJ.

Ryuji Hirayama is a research fellow of the Japan Society for the Promotion of Science, working at Chiba University. He received his PhD in engineering from Chiba University in 2017. His current research interests include three-dimensional display technologies. He is a member of ACM, OSA, and the Japan Society of Applied Physics (JSAP).

Tomoyoshi Shimobaba is an associate professor at Chiba University. He received his PhD degree in fast calculation of holography using special-purpose computers from Chiba University in 2002. His current research interests include computer holography and its applications. He is a member of SPIE, OSA, the Institute of Electronics, Information, and Communication Engineers (IEICE), OSJ, and ITE.

Tomoyoshi Ito is a professor at Chiba University. He received his PhD degree in special-purpose computers for many-body systems in astrophysics and in molecular dynamics from the University of Tokyo in 1994. His current research interests include high-performance computing and its applications for electroholography. He is a member of ACM, OSA, OSJ, ITE, IEICE, IPSJ, and the Astronomical Society of Japan.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Takashi Kakue, Yoshiya Wagatsuma, Shota Yamada, Takashi Nishitsuji, Yutaka Endo, Yuki Nagahama, Ryuji Hirayama, Tomoyoshi Shimobaba, and Tomoyoshi Ito, "Review of real-time reconstruction techniques for aerial-projection holographic displays," Optical Engineering 57(6), 061621 (2018). https://doi.org/10.1117/1.OE.57.6.061621
Received: 12 November 2017; Accepted: 14 May 2018; Published: 30 May 2018