Real-time generation of computer-generated hologram for 360-degree panoramic views using cylindrical object light

Abstract. The enormous amount of time required to generate hologram data in electro-holography is a problem that hinders the real-time display of holographic video. With the aim of achieving holometric video streaming, which enables volumetric video streaming by means of holography, we propose a method that corrects cylindrical object light using a graphics processing unit (GPU) and generates electro-holograms in real time in accordance with observer movement. We confirmed through experiments using an optical system that the proposed method enables real-time 360-deg panoramic viewing of three-dimensional video by multiple users.


Introduction
The use of augmented reality (AR) and virtual reality (VR) content with a head-mounted display (HMD) has been increasing. However, current three-dimensional (3D) display technology using HMDs does not satisfy some of the physiological factors by which humans perceive a stereoscopic effect, which can cause VR sickness and eyestrain.
Electro-holography, which electronically displays reconstructed images, is an ideal 3D display technology that can satisfy all physiological factors contributing to stereoscopic vision in humans. HMDs that use electro-holography (holo-HMDs) are now attracting attention as next-generation HMDs that can display ideal 3D video. [2,3] A key problem in electro-holography is the enormous amount of time required to generate holographic video. The use of an HMD requires the real-time generation and display of video in accordance with the direction of the user's head, but calculating hologram data in real time with a holo-HMD has been difficult. Various methods for making such real-time calculations have been studied; they can be broadly divided into methods that devise and speed up algorithms [4-7] and those that use high-speed calculation hardware such as a cluster machine. [8] There has been much activity in achieving high-speed calculations using a graphics processing unit (GPU), [9-12] which has made it possible to generate hologram data of simple objects in real time. In practical terms, the display of realistic reconstructed images that include complex objects and hidden-surface removal is important, [13-16] but such images cannot yet be generated in real time. Supercomputers that can execute even faster calculations have been considered, but using such expensive equipment for a single HMD is unrealistic. In addition, the amount of hologram data is considerable, and many problems remain even in terms of compression.
To transmit hologram data that differ among multiple users, the amount of communication can be enormous if each holo-HMD requires a 1:1 connection. Holometric video streaming has been proposed to solve these problems. [17] In this technique, object-light data, which form the basis of hologram data, are calculated on a large-scale computer, delivered to users simultaneously in a broadcast format, and used on each user's side to generate hologram data for any viewpoint in accordance with the user's viewing direction. Following this idea, a method has been proposed that assumes a 360-deg cylindrical surface enclosing the target object and broadcasts the object light on that surface. [18] This method enables the high-speed generation of hologram data for holo-HMD use in accordance with the user's position from the broadcast cylindrical object light so that the object can be observed from 360 deg. This method, however, cannot provide 360-deg images (360-deg panoramic views) of the user's surroundings and therefore cannot provide holographic video within an immersive environment.
We propose a holometric video-streaming method that enables a user wearing a holo-HMD to view holographic video within an immersive environment. Our method corrects cylindrical object-light data, calculated from objects outside the cylinder surrounding the user, into planar object-light data.

Holometric Video Streaming
Volumetric video streaming has been studied for capturing a target object using multiple cameras or a range camera to obtain 3D information and transmitting that information to a user on the receiving side wearing a device such as an HMD to enable free-viewpoint viewing. This technology is expected to find use in a variety of fields, such as entertainment and medical care. [19] In contrast, holometric video streaming records and transmits all light waves (object light) emitted from an object in place of the object's 3D information to enable viewing of 3D video. There are two holometric video-streaming methods for measuring the light waves (object light) reflected off a 3D object:
1. Digital holography, which directly measures light waves from a real object.
2. The computer-generated hologram (CGH) method, which captures an object using multiple cameras or a range camera to obtain 3D information and uses those data to calculate the reflected light waves through computer simulation.
With digital holography, measurements can only be done in a dark room that outside light cannot enter, which is a major limitation. The CGH method, in contrast, can be used under natural light, but an enormous amount of calculation is needed to generate the object light. In Ref. 18, a method was therefore proposed for calculating the object light (the most time-consuming step) on a high-performance, large-scale server or computer cluster, transmitting the calculated object-light data using high-speed broadcast technology such as 5G, and using the transmitted data to generate a free-viewpoint hologram on the client side consisting, for example, of an HMD (Fig. 1). Since this method transmits object-light data instead of a generated hologram, one of its advantages is that it can display different 3D videos for multiple users more efficiently than calculating and transmitting a hologram frame-by-frame for each viewpoint. The shape of the object light to be transmitted can be freely determined, and an ordinary hologram can be calculated using a planar shape matched to the device displaying the hologram. In addition to a planar shape [20] for the object light, calculation using a cylindrical shape has been proposed; the details are given in the following section.
The range of movement available to a user wearing an HMD is expressed in degrees of freedom (DoF). Commonly used HMDs enable 3- or 6-DoF. [21] As shown in Fig. 2 (in this paper, we use a left-handed coordinate system, which is often used in computer graphics (CG)), 3-DoF corresponds to three types of head rotation, pitch, yaw, and roll, which correspond to the vertical movement of the neck, the sideways movement of the neck, and the sideways tilting of the neck, respectively. 6-DoF adds forward-backward, left-right, and up-down movement of the user to these rotations. Displaying 3D video corresponding to these movements can heighten the observer's sense of immersion. Our proposed method enables observation corresponding to 3-DoF.
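As a concrete illustration of how these rotations can drive hologram generation, the following sketch (ours, not from the paper) composes the three rotations into the basis vectors of the hologram plane used later in the correction calculations; the rotation order and sign conventions are assumptions, since the paper only fixes the left-handed frame of Fig. 2.

    #include <math.h>

    // Compose the 3-DoF head rotations into a 3x3 matrix whose columns are
    // the hologram plane's right, up, and normal (viewing) axes. The order
    // yaw (about y), then pitch (about x), then roll (about z) is assumed.
    void planeBasis(float yaw, float pitch, float roll, float R[3][3])
    {
        float cy = cosf(yaw),   sy = sinf(yaw);
        float cp = cosf(pitch), sp = sinf(pitch);
        float cr = cosf(roll),  sr = sinf(roll);
        // R = Ry(yaw) * Rx(pitch) * Rz(roll), written out explicitly.
        R[0][0] = cy * cr + sy * sp * sr;  R[0][1] = -cy * sr + sy * sp * cr;  R[0][2] = sy * cp;
        R[1][0] = cp * sr;                 R[1][1] = cp * cr;                  R[1][2] = -sp;
        R[2][0] = -sy * cr + cy * sp * sr; R[2][1] = sy * sr + cy * sp * cr;   R[2][2] = cy * cp;
    }

With 3-DoF, only this orientation changes while the plane stays at the cylinder center; 6-DoF would additionally translate the plane's origin.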

Conventional Method (Viewing of Object from 360 deg)
The method of Ref. 18 can be cited as a conventional method using cylindrical object light. As shown in Fig. 3, this method assumes that an object is arranged near the center of a cylinder and viewed from any position outside the cylinder. Given that the method uses cylindrical object light, the planar display mounted on ordinary HMDs cannot be used directly. It is therefore necessary to generate planar object light from cylindrical object light through correction calculations.
This conventional method converts cylindrical object light into planar object light using an approximation in which light waves propagate from a point at the center of the cylinder. It is assumed that the object and the planar-object-light surface are small compared with the radius of the cylinder and that the planar-object-light surface is situated near the cylinder. The wavefront is therefore considered to propagate radially from the center of the object. Under this approximation, the point corresponding to the light wave of a single pixel on the planar object light lies on a line through the center of the object, and the wave at the point where that line intersects the cylindrical surface propagates outward. As shown in Fig. 4, the light wave of a pixel P on the planar object light is propagated from the light wave at the single point C where the straight line connecting the origin O and point P intersects the cylinder. The propagation calculation uses the path difference l between P and C. Denoting the complex amplitude of the cylindrical object light as O_c and the wave number as k, the planar object light O_p can be determined as

$$O_p = O_c \exp(ikl). \tag{1}$$
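In code form, the geometry of this correction reduces to a few lines. The sketch below is our illustration (with the cylinder axis taken along y), not the authors' implementation; it finds C and l for a plane pixel P so that Eq. (1) can be applied.

    #include <math.h>

    // Conventional correction (Ref. 18): the wave at plane pixel P is taken
    // from the cylinder point C on the line O->P and propagated over the
    // path difference l. Cylinder of radius R, axis along y, observer outside.
    void radialCorrection(const float P[3], float R, float C[3], float* l)
    {
        float s = R / sqrtf(P[0] * P[0] + P[2] * P[2]); // scale that puts P onto the cylinder
        C[0] = s * P[0];  C[1] = s * P[1];  C[2] = s * P[2];
        // Radial-wave approximation: l is the leftover distance from C to P.
        *l = (1.0f - s) * sqrtf(P[0] * P[0] + P[1] * P[1] + P[2] * P[2]);
        // Then O_p(P) = O_c(C) * exp(i k l), as in Eq. (1).
    }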

Proposed Method

Overview
As shown in Fig. 5, our method situates the user at the center of the cylinder and has the user observe objects defined outside the cylinder. The viewpoint of the proposed method is thus the opposite of that of the conventional method. Compared with the conventional method, our method has the advantage of enabling the arrangement and observation of objects 360 deg around the user, making it easy for the user to feel a sense of immersion. Calculations of O_c are conducted using the point-light method. Although there are other methods for performing object-light calculations at high speed, for example, using the fast Fourier transform, [22-26] the problem is that they are not fast enough to calculate in real time.
In the environment envisioned in our study, however, object-light calculations are planned to run on a large-scale, high-performance server, so we used the point-light method, [27] which can reconstruct detailed objects and thus enhance the user's sense of immersion. We calculated O_c by breaking down a CG model generated with polygons into a point cloud and applying the point-light method to that cloud. Denoting cylindrical coordinates as (x_c, y_c, z_c), point-light-source coordinates as (x_l, y_l, z_l), amplitude as a_i, light wavelength as λ, and the initial phase of the point-light source as φ_i, the complex amplitude distribution u_i on the cylindrical surface emitted from the light source can be calculated as

$$u_i(x_c, y_c, z_c) = a_i \exp\left[i\left(\frac{2\pi}{\lambda} r_i + \phi_i\right)\right], \tag{2}$$

where $r_i = \sqrt{(x_c - x_l)^2 + (y_c - y_l)^2 + (z_c - z_l)^2}$ is the distance from the point-light source to the point on the cylindrical surface. When the virtual object is defined as N point-light sources, the cylindrical object light O_c(x_c, y_c, z_c) at any point on the cylindrical surface can be expressed as

$$O_c(x_c, y_c, z_c) = \sum_{i=1}^{N} u_i(x_c, y_c, z_c). \tag{3}$$

In the conventional method, light waves are assumed to be emitted radially from the center of the cylinder. Our proposed method, however, must allow the object to be placed at any position outside the cylinder, so it is impossible to assume that the object wave is emitted from the center of the cylinder. We therefore introduce a "phase-correction point," from which light waves are assumed to be emitted in the correction calculations for computing O_p. With this concept, the correction calculation used in the conventional method can also be used in our method. The phase-correction point is basically set at the center of the object; when two or more objects are to be displayed on the screen, it is set at the center of the generated hologram (the viewpoint direction). We tested through an experiment how the reconstructed image changes with the position of the phase-correction point relative to the target objects. When converting from O_c to O_p, correction calculations are conducted using l, as in Ref. 18. O_p can be generated after determining where the line connecting the phase-correction point with each pixel (x_p, y_p, z_p) on the plane, set by the position and rotation of the plane, intersects the cylinder at (x_c, y_c, z_c) (Fig. 6). In the correction calculation of O_c at any coordinate, O_p can be calculated from the positional relationship among the phase-correction point, O_p, and O_c using O_c and l:

$$O_p(x_p, y_p, z_p) = O_c(x_c, y_c, z_c) \exp(ikl). \tag{4}$$

Since the maximum spatial frequency of the sampled object light on the cylindrical surface is determined by the sampling theorem, there is a limit on the size of the zone plate, called the zone-plate limitation. [28] When calculating object light with respect to the cylinder, the zone-plate limitation must be imposed to prevent high-order diffracted images that hinder observation. In the calculations, denoting a certain pixel on the cylinder as p_i, its neighboring pixel as p_{i+1}, and the distances between those pixels and the point-light source as r_i and r_{i+1}, respectively, high-order diffracted images can be prevented by not conducting any object-light calculation in the range where the difference between r_i and r_{i+1} exceeds λ/2 (Fig. 7).
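As a sketch of how Eqs. (2) and (3) and the λ/2 test can be evaluated on a GPU, the following hypothetical CUDA kernel assigns one thread to each cylindrical pixel and accumulates the contributions of all N point-light sources; the array layout, the axis convention (cylinder axis along y), and all names are our own choices, not the authors' code.

    #include <cuda_runtime.h>
    #include <cuComplex.h>

    // One thread per cylindrical pixel (ia: angle index, ih: height index).
    // Each source contributes a_i * exp(i(k r_i + phi_i)) as in Eq. (2); the
    // sum over sources is Eq. (3). Sources whose distance changes by more
    // than lambda/2 between neighboring pixels are skipped (Fig. 7).
    __global__ void cylindricalObjectLight(const float4* src,  // (x_l, y_l, z_l, a_i) per source
                                           const float* phi0,  // initial phases phi_i
                                           int N, cuFloatComplex* Oc,
                                           int W, int H,       // angular and vertical samples
                                           float R, float height, float lambda)
    {
        int ia = blockIdx.x * blockDim.x + threadIdx.x;
        int ih = blockIdx.y * blockDim.y + threadIdx.y;
        if (ia >= W || ih >= H) return;

        const float PI = 3.14159265358979f;
        float k = 2.0f * PI / lambda;
        float dphi = 2.0f * PI / W;                 // angular pitch; 0 deg at +x
        float y = (ih / (float)H - 0.5f) * height;
        float3 p  = make_float3(R * cosf(ia * dphi), y, R * sinf(ia * dphi));
        float3 pn = make_float3(R * cosf((ia + 1) * dphi), y, R * sinf((ia + 1) * dphi));

        cuFloatComplex acc = make_cuFloatComplex(0.0f, 0.0f);
        for (int i = 0; i < N; ++i) {
            float4 s = src[i];
            float r  = norm3df(p.x - s.x,  p.y - s.y,  p.z - s.z);
            float rn = norm3df(pn.x - s.x, pn.y - s.y, pn.z - s.z);
            if (fabsf(rn - r) > 0.5f * lambda) continue; // zone-plate limitation
            float sn, cs;
            sincosf(k * r + phi0[i], &sn, &cs);
            acc = cuCaddf(acc, make_cuFloatComplex(s.w * cs, s.w * sn));
        }
        Oc[ih * W + ia] = acc;
    }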
As a result of this zone-plate limitation, the area over which object light is recorded on the surface of the cylinder is limited. For the sake of simplicity, we consider points on the z-axis. We denote the coordinates of the point-light source as (0, 0, z_l) and those of the point on the cylinder used for correction as (x_{A'}, 0, z_{A'}). The distance L between these two points can be expressed as

$$L = \sqrt{x_{A'}^2 + (z_{A'} - z_l)^2}. \tag{5}$$

If we assume that x is small compared with z_l and z, the path difference l in the horizontal direction can be given as the product of the directional derivative of L and the pixel pitch p as follows.
$$l = \frac{\partial L}{\partial x}\,p = \frac{x + (z - z_l)\frac{\partial z}{\partial x}}{L}\,p, \tag{6}$$

where z changes in accordance with the position on the surface of the cylinder as

$$z = \sqrt{R^2 - x^2}. \tag{7}$$

Since l is expressed through ∂z/∂x, and assuming that R ≫ x, ∂z/∂x can be approximated as

$$\frac{\partial z}{\partial x} \approx -\frac{x}{R}. \tag{8}$$

Therefore, substituting Eqs. (7) and (8) into Eq. (6) and using z ≈ R and L ≈ z_l − R near the z-axis, the difference in distance between the point-light source and each of these pixels can be calculated as

$$l \approx \frac{px}{z_l - R} + \frac{px}{R}. \tag{9}$$

The zone-plate limitation when calculating object light with respect to a planar shape falls in a range such that the first term of Eq. (9) does not exceed λ/2; thus, the range in which recording can be executed on a cylindrical shape is slightly smaller. The zone-plate-limited area x_max1/2 can be expressed as

$$x_{\mathrm{max}1/2} = \frac{\lambda}{2p}\,\frac{R(z_l - R)}{z_l}. \tag{10}$$
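As a numerical check of Eq. (10), take the experimental parameters given later in Sec. 4.1 (λ = 512 nm, p = 8 μm, R = 0.25 m) and a point-light source at z_l = 0.5 m, the object distance used in Sec. 4.2:

$$x_{\mathrm{max}1/2} = \frac{512\times 10^{-9}}{2 \times 8\times 10^{-6}} \times \frac{0.25 \times 0.25}{0.5} = 0.032 \times 0.125 = 0.004\ \mathrm{m},$$

which is the per-source value of x_max1/2 used in the experiments.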
On the basis of the above, we can consider the field of view (FOV) that can show the maximum definable size of an object, referring to Fig. 8. Denoting the pixel pitch on the hologram surface as d, the maximum electro-holographic diffraction angle θ can be expressed as

$$\sin\theta = \frac{\lambda}{2d}. \tag{11}$$

From this, using the plane depth z_p and θ, we obtain

$$H_{\mathrm{max}1/2} = z_p \tan\theta. \tag{12}$$

The Q_x in the figure can be expressed as follows from the equation for the zone-plate limitation on the plane:

$$Q_x = \frac{\lambda (z_l - z_p)}{2d}. \tag{13}$$

The maximum FOV V_max1/2 can therefore be expressed as

$$V_{\mathrm{max}1/2} = H_{\mathrm{max}1/2} + Q_x. \tag{14}$$

When calculating O_p from O_c, H_max1/2 must be taken into account. If the range to be corrected exceeds H_max1/2, high-order diffracted images will be generated, so processing that skips calculations outside this range is needed.
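For the same parameters (λ = 512 nm, d = 8 μm), Eq. (11) gives sin θ = 512 × 10⁻⁹ / (2 × 8 × 10⁻⁶) = 0.032, i.e., θ ≈ 1.8 deg, and for a plane at z_p = 0.25 m, Eq. (12) gives H_max1/2 ≈ 0.25 × 0.032 = 0.008 m, the correction range quoted in Sec. 4.1.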

Implementation
Directly calculating the object light on a hologram plane from point-light sources has computational order O(NXY), where X denotes the number of pixels in the horizontal direction on the hologram plane and Y the number in the vertical direction. Calculations for a complex object, with many hologram pixels and a large number of point-light sources, therefore incur extremely high computational complexity. In contrast, the proposed method involves processing on a 2D plane between the cylindrical surface and the hologram surface that is unrelated to N. The time needed to generate the hologram is therefore short, with computational order O(XY). While this order is sufficiently low, calculations using a central processing unit (CPU) have not reached a real-time level (30 fps) according to Ref. 18.
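For example, for the 1920 × 1080 SLM used in Sec. 4 (XY ≈ 2.1 × 10⁶ pixels) and a hypothetical cloud of N = 10⁴ point-light sources, direct calculation requires on the order of 2 × 10¹⁰ distance evaluations per frame, whereas the correction requires only about 2 × 10⁶ pixel operations regardless of N.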
With the proposed method, we aim to conduct the correction calculations of O_c at high speed, in real time, using a GPU, which features many calculation cores enabling massively parallel computing. Since the proposed method computes the object light at each pixel of the hologram surface independently, we can achieve high-speed processing through parallel computing in units of pixels. In the experiments described below, the calculations were implemented with NVIDIA's Compute Unified Device Architecture (CUDA) and executed on CUDA cores.
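The following hypothetical CUDA kernel sketches this per-pixel parallelization under our assumptions (phase-correction point q outside the cylinder, cylinder axis along y, and the same array layout as the sketch in Sec. 3.1); it illustrates the technique and is not the authors' implementation.

    #include <cuda_runtime.h>
    #include <cuComplex.h>

    // One thread per hologram pixel: intersect the line from the
    // phase-correction point q through the pixel with the cylinder,
    // fetch O_c there, and propagate it over the path difference l (Eq. 4).
    __global__ void correctToPlanar(const cuFloatComplex* Oc, cuFloatComplex* Op,
                                    int X, int Y, float pitch,           // hologram pixels and pitch
                                    int W, int H, float R, float height, // cylindrical sampling
                                    float3 q,                            // phase-correction point
                                    float3 p0, float3 ex, float3 ey,     // plane center and in-plane axes
                                    float k)                             // wave number 2*pi/lambda
    {
        int ix = blockIdx.x * blockDim.x + threadIdx.x;
        int iy = blockIdx.y * blockDim.y + threadIdx.y;
        if (ix >= X || iy >= Y) return;

        // World position of this hologram pixel.
        float u = (ix - 0.5f * X) * pitch, v = (iy - 0.5f * Y) * pitch;
        float3 P = make_float3(p0.x + u * ex.x + v * ey.x,
                               p0.y + u * ex.y + v * ey.y,
                               p0.z + u * ex.z + v * ey.z);

        // Intersect q + t (P - q) with x^2 + z^2 = R^2. We assume q lies
        // outside the cylinder (object-center case); l is signed, so pixels
        // of a tangent plane lying just outside the surface back-propagate.
        float3 d = make_float3(P.x - q.x, P.y - q.y, P.z - q.z);
        float a = d.x * d.x + d.z * d.z;
        float b = 2.0f * (q.x * d.x + q.z * d.z);
        float c = q.x * q.x + q.z * q.z - R * R;
        float disc = b * b - 4.0f * a * c;
        cuFloatComplex o = make_cuFloatComplex(0.0f, 0.0f);
        if (disc >= 0.0f) {
            float t = (-b - sqrtf(disc)) / (2.0f * a);      // crossing nearest q
            float3 C = make_float3(q.x + t * d.x, q.y + t * d.y, q.z + t * d.z);
            float l = (1.0f - t) * norm3df(d.x, d.y, d.z);  // remaining path C -> P

            // Nearest cylindrical sample at C (angle from +x axis, height along y).
            const float PI = 3.14159265358979f;
            float phi = atan2f(C.z, C.x);
            if (phi < 0.0f) phi += 2.0f * PI;
            int ia = (int)floorf(phi / (2.0f * PI) * W) % W;
            int ih = (int)floorf((C.y / height + 0.5f) * H);
            if (ih >= 0 && ih < H) {
                float sn, cs;
                sincosf(k * l, &sn, &cs);
                o = cuCmulf(Oc[ih * W + ia], make_cuFloatComplex(cs, sn)); // O_c * exp(ikl)
            }
        }
        Op[iy * X + ix] = o;
    }

A launch of, e.g., dim3 block(16, 16) and dim3 grid((X + 15) / 16, (Y + 15) / 16) covers the hologram with one thread per pixel, matching the O(XY) order discussed above.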

Experiments

Experimental Equipment
We conducted an experiment using an optical system to demonstrate the effectiveness of the proposed method.
Figures 9 and 10 show a photo and a block diagram of the electro-holography system with a 4f optical system. The spatial light modulator (SLM) that displays the hologram has 1920 × 1080 pixels (W × H) with a pixel pitch of 8 × 8 μm, the optical wavelength is 512 nm, and the cylinder radius is 0.25 m. The pixel pitches of O_p and O_c are the same. Rotation was assumed to occur in the counterclockwise direction, with the positive direction of the x-axis at 0 deg. With respect to the range of the correction calculations described in Sec. 3.1 (Eq. 12), x_max1/2 per single point-light source in Eq. (10) was taken to be 0.004 m as a parameter for this experiment. The range of the correction calculations did not exceed H_max1/2 = 0.008 m, so no special processing was needed.

360-Deg Panoramic Views
We determined whether O_p corresponding to the rotations of roll, pitch, and yaw could be generated using the proposed method. As shown in Fig. 11, a teapot was placed 0.5 m from the origin in the z direction. The phase-correction point was set 0.50 m from the origin, the same as the distance to the objects, at a position corresponding to the viewpoint direction rather than at the center of an object. The plane was rotated about its center in increments of 0.1 deg. The defined objects could be observed seamlessly with the proposed method. On this basis, a user's surroundings can be observed in the horizontal direction with a planar device using O_c. These results confirm that a hologram can be generated on the user's side in accordance with user head rotation. Since different reconstructed images can be observed from the same O_c, it should be possible to obtain 3-DoF reconstructed images for each user in the manner of holometric video streaming.

Testing of Reconstructed Images
We also conducted experiments to examine the change in reconstructed images when the phase-correction point cannot be placed at the center of an object. In the following two experiments, the center of the teapot was defined to be at the position (0, 0, 0.50). We first examined the results of changing the phase-correction point in the depth direction while fixing the x-y position of the point at the center of the object. Specifically, we compared the reconstructed image obtained from direct calculation of O_p with the reconstructed images obtained by moving the phase-correction point away from the origin in intervals of 0.5 m (Fig. 15).
The results are shown in Fig. 16. Figure 16(a) shows the reconstructed image from a hologram obtained from direct calculation of O_p with no correction calculations, and Figs. 16(b)-16(i) show the reconstructed images of holograms from O_p generated by correction calculations. The phase-correction point was defined to be at the center of the object in Fig. 16(b) and was moved deeper in +0.5-m intervals in Figs. 16(c)-16(i). When the position of the phase-correction point was changed from the center of the object up to +1.5 m away, no major changes were observed compared with the reconstructed image obtained from direct calculation of O_p. However, the brightness of the reconstructed image began to drop once the position of the phase-correction point exceeded +2.0 m, and the reconstructed images became increasingly blurry from that point on. These results indicate the need to place the phase-correction point within 1.5 m when defining multiple objects.

Fig. 16 Reconstructed images: (a) direct calculation of planar object light; phase-correction point at (b) (0, 0, 0.50), (c) (0, 0, 1.00), (d) (0, 0, 1.50), (e) (0, 0, 2.00), (f) (0, 0, 2.50), (g) (0, 0, 3.00), (h) (0, 0, 3.50), and (i) (0, 0, 4.00).

Next, we examined the results of changing the phase-correction point in the horizontal and vertical directions. The SLM size was ~0.016 × 0.008 m, and following the concept of the phase-correction point described in Sec. 3.1, the correction was carried out from the vicinity of the FOV. The phase-correction points were therefore set at the four corners of a 0.004-m square and a 0.008-m square, as shown in Fig. 17, and the changes in the reconstructed image were tested under these conditions. The position of the plane was defined to be 0.25 m. The results are shown in Fig. 18. Figure 18(a) shows the reconstructed image of a hologram obtained from direct calculation of O_p with no correction calculations, and Figs. 18(b)-18(j) show the reconstructed images of holograms from O_p generated by the correction calculations. Similar to the results obtained for changes in the depth direction, changes in brightness were observed in accordance with the position of the phase-correction point, but no major changes were observed with respect to the reconstructed image obtained from direct calculation of O_p. These results indicate that degradation of the reconstructed image can be suppressed if the phase-correction point is in the vicinity of the FOV, so it is appropriate to define it at the center of the object being displayed or at the center of the FOV.

Evaluation of Depth Expression
As shown in Fig. 19(a), we defined two stars with a depth difference of 0.1 m, calculated O_c, conducted correction calculations with respect to the plane tangent to the cylinder to generate holograms, and examined the reconstructed images. The results of this experiment are shown in Figs. 19(b) and 19(c). Figure 19(b) shows the reconstructed image when the camera was focused on the front star, and Fig. 19(c) shows the reconstructed image when the camera was focused on the back star. When the camera's focal length was changed in this way, one of the objects was blurry while the other was in focus. These results indicate that depth expression can be obtained even for holograms created from O_p generated by correction calculations.

Computation Time
Finally, we measured the computation time for generating O_p with the proposed method. We compared the computation times of direct calculation of O_p using a GPU, the proposed method with a CPU, and the proposed method with a GPU. The CPU was an Intel® Core™ i7-8700K (3.20 GHz) with 16.0 GB of memory, the operating system was Windows 10 Pro 64-bit, and the GPU was an NVIDIA GeForce GTX 1080 with 8 GB of memory. For each scenario, we took the average of five computation-time measurements. The results are shown in Fig. 20.
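For reference, GPU kernel timings of this kind are typically taken with CUDA events. The sketch below is a generic pattern (with a placeholder launch where our hypothetical correction kernel would go, not the authors' code) that averages several runs, as in the measurement procedure above.

    #include <cuda_runtime.h>

    // Average the elapsed time of a kernel over several runs using CUDA events.
    float timeKernelMs(int runs)
    {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        float total = 0.0f;
        for (int i = 0; i < runs; ++i) {
            cudaEventRecord(start);
            // correctToPlanar<<<grid, block>>>(/* ... */);  // placeholder launch
            cudaEventRecord(stop);
            cudaEventSynchronize(stop);  // wait for the kernel to finish
            float ms = 0.0f;
            cudaEventElapsedTime(&ms, start, stop);
            total += ms;
        }
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        return total / runs;
    }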
As discussed in Sec. 3.2, the computation time of directly calculating O_p increased in proportion to the number of point-light sources, whereas with the proposed method the computation time stayed nearly constant, since the correction calculations can be conducted regardless of the number of point-light sources. While the calculations took ~170 ms using the CPU, they took ~6 ms using the GPU, meaning that the calculations were about 28 times faster. Table 1 lists the measured computation times, which are consistent with the theoretical considerations. The frame rate with the CPU was ~6 fps, but that with the GPU was ~166 fps, exceeding the target for real-time calculation (30 fps). These results indicate that the correction calculations for generating O_p from O_c can be conducted in real time.

Conclusion
We proposed a method for generating 360-deg panoramic views of holographic images in real time as a step toward the practical implementation of holometric video streaming. The proposed method enables the observation of reconstructed images that accompany head rotation and differ among multiple users by calculating planar object light through correction calculations from cylindrical object light. It was confirmed through optical experiments that reconstructed images corresponding to head rotation could be displayed using these correction calculations. It was also shown that the amount of calculation in our method is small and that a frame rate of 166 fps could be achieved using a GPU. We anticipate systems that can provide the same VR experience as those commonly used today by transmitting cylindrical object light, conducting the correction calculations in real time on an HMD to display 3D video, and enabling 360-deg observation of an object in combination with the method of Ref. 18.

Disclosures
There are no potential conflicts of interest, financial or otherwise, identified for this study.

Fig. 6 Overview of correction calculation for a surrounding object.

Fig. 17 Horizontal and vertical change in phase-correction point.

Table 1 Computation times.