Holographic head-mounted display with correct accommodation and vergence stimuli
Takuo Yoneyama, Eishin Murakami, Yuki Oguro, Hibiki Kubo, Kazuhiro Yamaguchi, and Yuji Sakamoto
Open Access | Published 30 May 2018
Abstract
We developed a holographic head-mounted display with a see-through structure that enables the user to view augmented reality scenes binocularly. It has right and left optical systems, one for each eye, with a horizontal sliding structure to adapt to each observer's interpupillary distance. Reconstructed images are colorized using the field sequential color method and are enlarged using a Fourier transform optical system. This paper also describes a calibration method to correct installation errors arising from the optical elements. The results of objective and subjective evaluations show that the reconstructed images are located at the correct depths and provide correct accommodation and vergence stimuli as well.

1.

Introduction

Head-mounted displays (HMDs) with see-through vision that display a variety of information and images superimposed on the real-world view are being developed with the advent of augmented reality (AR). Some HMDs show only two-dimensional (2-D) images, but some can also display three-dimensional (3-D) visualizations, which create a scene with depth. A conventional 3-D HMD is based on a stereoscopic imaging technique that shows slightly different images to each eye. However, the 3-D image is optically represented at the depth of the screen, and the focus cue (accommodation stimulus) is not equivalent to the perceived depth. The difference in depths causes asynchronous stimuli between the movement of the eyes (vergence) and accommodation, resulting in the so-called vergence accommodation conflict, which causes eyestrain and fatigue for viewers of such 3-D images.1

Holography is known as an ideal 3-D display technology that satisfies all the human physiological requirements for recognizing objects in 3-D without such conflicts. Holography is a technology for recording and reconstructing 3-D images using diffraction and interference of two light waves: the object light propagated from the object and the reference light propagated from a high-coherence light source.

The interference makes a pattern on a hologram that we call a “fringe pattern.” In electroholography, the fringe pattern is displayed on an electronic device such as a spatial light modulator (SLM); the SLM reconstructs the light wave of 3-D objects when it is illuminated by reconstruction light whose wavelength and position are the same as those of the reference light. Fringe patterns that represent virtual objects are generated by computer simulation and are called computer-generated holograms (CGHs). Many reports have studied desktop-type displays that enable several observers to view the 3-D images simultaneously.2,3 However, these displays are disadvantageous due to the large size of the optical system and the narrow visual fields; a trade-off relationship exists between the visual field angle and the viewing zone angle. HMDs can have larger visual fields than desktop-type displays because they require only a narrow viewing zone.

Holographic HMDs for personal use have also been studied.4–8 The first holographic HMD was proposed by Takemori,5 and it was small and practical, with a wide visual field. An HMD for both eyes requires adjustments to generate vergence stimuli that match individual interpupillary distances (PDs). However, the Takemori HMD did not consider the synchronicity of accommodation and vergence stimuli because the individual PD was not taken into account, so the reconstructed images could not be matched to human vision correctly. Development of holographic HMDs is in progress, but at present, there is no HMD arrangement that generates accurate vergence and accommodation stimuli for specific observers.

This paper proposes a holographic HMD with accurate vergence and accommodation stimuli that provides full-color images and a see-through optical system for AR representation. Moreover, we clarify that the HMD has correct accommodation and vergence characteristics without conflict. We discuss the compact and lightweight optical system for the HMD in Sec. 2 and the calculation method for the fringe patterns of the optical system in Sec. 3, including the generation of accommodation and vergence stimuli. Then, in Sec. 4, we describe the structures and devices that make up the HMD system. Finally, the experimental results with our HMD are presented in Sec. 5.

2.

Optical System

The optical system for an HMD needs to be compact and lightweight. To satisfy these requirements, we adopted a Fourier transform optical system (FTOS) and the field sequential color method. Although the HMD is binocular and uses left and right optical systems of the same structure, we explain a single optical system in this section.

2.1.

Fourier Transform Optical Systems

In electroholography, the pixel density of the display device for a hologram determines the visual field and viewing zone, which indicate the maximum displayable size and the maximum area that viewers can see, respectively. Figure 1(a) outlines an ordinary arrangement of electroholography, where a hologram is illuminated by parallel light. The light is diffracted by the hologram, and the visual field spreads out from the hologram plane. In Fig. 1(a), the visual field angle θ is given by

Eq. (1)

$$\theta = 2\sin^{-1}\!\left(\frac{\lambda}{2p}\right),$$
where λ is the wavelength of the light and p is the pixel pitch of the display device. Equation (1) indicates that θ depends on the pixel pitch p, and this factor makes it difficult to expand the viewing angle of usual holographic displays, unless SLMs with higher resolution are used.

Fig. 1

Outline of the optical reconstruction arrangements: (a) conventional and (b) FTOS.


To improve this, we adopted a reconstruction method based on an FTOS. The FTOS consists of an SLM, a lens, and a point light source. This simple structure is a great advantage for developing small holographic displays. When the SLM is reflective, the structure is as represented in Fig. 1(b). The point light source is arranged at the focal point of the lens, so the light emitted from the point light source is reflected by the SLM and converges at the focal point of the lens. The fringe pattern on a hologram used in the FTOS differs from that on an ordinary hologram, as described in Sec. 3. The light diffracted by the hologram reconstructs images around the focal point of the lens. All the light reconstructing the images passes through a specified area in front of the SLM, called the viewing window. The width of the viewing window w is given as

Eq. (2)

$$w = \frac{\lambda f}{p},$$
where f is the focal length of the lens. When the viewpoint is inside the viewing window, an observer can see the entire image. This window represents the viewing zone of the FTOS, which is narrower than that of the optical system shown in Fig. 1(a).

The visual field θF is enlarged as follows:

Eq. (3)

$$\theta_F = 2\tan^{-1}\!\left(\frac{L}{2f}\right),$$
where L is the hologram size. Equation (3) suggests that the visual field of the FTOS is larger than that of the arrangement in Fig. 1(a); when a large display and a lens with a short focal length are used, the visual field is expanded. Unexpected light such as zeroth-order light and ghost images converges at the focal point of the lens and is easily removed by arranging a barrier in front of the view point. Therefore, the FTOS can effectively enlarge the visual field in a small device.
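As a numerical illustration of Eqs. (1)–(3), the short Python sketch below evaluates the conventional visual field, the FTOS viewing window, and the FTOS visual field using the green wavelength and the SLM and lens parameters that appear later in Table 1; the script and its variable names are ours and are not part of the original work.

```python
import math

# Illustrative parameters taken from Table 1 (green wavelength as an example)
wavelength = 525e-9      # lambda: wavelength of the reconstruction light [m]
pixel_pitch = 9.6e-6     # p: pixel pitch of the SLM [m]
focal_length = 75e-3     # f: focal length of the Fourier-transform lens [m]
hologram_size = 1280 * pixel_pitch  # L: horizontal size of the hologram [m]

# Eq. (1): visual field angle of the conventional arrangement
theta = 2 * math.asin(wavelength / (2 * pixel_pitch))

# Eq. (2): width of the viewing window of the FTOS
w = wavelength * focal_length / pixel_pitch

# Eq. (3): visual field angle of the FTOS
theta_F = 2 * math.atan(hologram_size / (2 * focal_length))

print(f"conventional visual field : {math.degrees(theta):.2f} deg")
print(f"FTOS viewing window width : {w * 1e3:.2f} mm")
print(f"FTOS visual field         : {math.degrees(theta_F):.2f} deg")
```

With these values, the sketch reproduces the 4.1-mm green viewing zone and the 9.4-deg horizontal visual field listed in Table 1.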

2.2.

Colorization Method

There are two colorization methods in electroholography. One is a method in which images of the primary colors are spatially overlapped to reconstruct full-color images.3,9,10 This method has the advantage of being able to use SLMs with a low refresh rate. However, the size of the optical system tends to be large because optical components, such as lenses, SLMs, and point light sources, are necessary for each of the primary colors. The other colorization method is the field sequential color method,11,12 in which images of the primary colors are successively reconstructed in the same position. Although an SLM with a high refresh rate is needed to reconstruct full-color images without flickering, this method has the advantage of a small reconstruction unit because it uses only one SLM to display the holograms for all the primary colors. For the HMD we developed, the field sequential color method was implemented for colorizing the reconstructed images.

With the field sequential color method, the calculated hologram of each primary color needs to be displayed only while the reconstruction light of that color is on. These successively reconstructed images of the primary colors are integrated into a full-color image by human vision. Sequential images with a frequency of more than 60 Hz allow observation of full-color images without flickering. A 180-Hz SLM was used in our experiments to display the holograms so that each set of the three colors is displayed at 60 Hz.

2.3.

Resolution of Reconstructed Image

The resolution of reconstructed images displayed by a hologram needs to be higher than that of the human visual system because insufficient resolution creates image blur in reconstructed images. In the FTOS, the size of the viewing window and the aperture size of the light source for reconstruction light determine the resolution.

When the size of the viewing window w determined by Eq. (2) is smaller than that of the pupil, the hologram cannot provide light to the whole aperture area of the pupil, and the resolution becomes lower than that of human vision. Under this condition (w smaller than the pupil aperture), the resolution of the reconstructed images is

Eq. (4)

$$\delta x = \frac{\lambda z_{\mathrm{obj}}}{w},$$
where zobj is the distance between the view point and an object. To obtain a higher resolution, a high-resolution SLM is necessary.

In theory, the reconstruction light is assumed to be an ideal point source, but an actual reconstruction light has a finite aperture size dA. This aperture size limits the resolution as follows:

Eq. (5)

$$\delta x_A \approx \frac{z_{\mathrm{obj}}\, d_A}{f}.$$

The lower of the two resolutions determines the resolution of the system.

Our system uses RGB LEDs as the reconstruction light, and their spectral bandwidths are broad, which affects the resolution of the reconstructed image. Because the diffraction angle at the hologram plane is approximately proportional to the wavelength, the reconstructed depth is approximately inversely proportional to it, and the depth of the reconstructed image changes as follows:

Eq. (6)

$$\delta z_L \approx \frac{z_{\mathrm{obj}}\, \delta\lambda}{\lambda_o},$$
where δzL is the depth resolution, δλ is the spectral bandwidth of the LED, and λo is its center wavelength. This change in image depth blurs the image, and the corresponding lateral resolution is

Eq. (7)

$$\delta x_{\delta\lambda} \approx \frac{w\, \delta\lambda}{\lambda_o}.$$
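The following sketch (ours, with illustrative numbers drawn from Table 1 and the 0.2-mm source aperture of Sec. 4.2) evaluates the resolution limits of Eqs. (4), (5), and (7) for a green image at a depth of 500 mm; it is only a rough check under our reading of these equations.

```python
# Illustrative values (green channel, target at 500 mm) based on Table 1 and Sec. 4.2
wavelength = 525e-9     # lambda_o: center wavelength [m]
bandwidth = 34e-9       # delta_lambda: spectral bandwidth of the green LED [m]
w = 4.1e-3              # viewing-window width from Eq. (2) [m]
aperture = 0.2e-3       # d_A: aperture size of the point light source [m]
focal_length = 75e-3    # f: focal length of the lens [m]
z_obj = 0.5             # distance between the viewpoint and the object [m]

# Eq. (4): resolution limited by the viewing window
dx_window = wavelength * z_obj / w

# Eq. (5): resolution limited by the finite source aperture
dx_aperture = z_obj * aperture / focal_length

# Eq. (7): lateral blur caused by the LED spectral bandwidth
dx_bandwidth = w * bandwidth / wavelength

# The coarsest (largest) of these values limits the system resolution
print(f"window-limited    : {dx_window * 1e3:.3f} mm")
print(f"aperture-limited  : {dx_aperture * 1e3:.3f} mm")
print(f"bandwidth-limited : {dx_bandwidth * 1e3:.3f} mm")
```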

3.

Calculation Method

3.1.

Point Source Method

There are a number of methods to calculate CGHs, and the point source method is used most often.13 This method considers objects as clouds of independent point light sources, and it allows the expression of arbitrarily shaped objects.

As suggested in Fig. 2, with the coordinates of the i’th point source of an object dataset defined as pi(xi, yi, zi) and the coordinates on the hologram plane defined as ph(xh, yh, 0), the complex amplitude distribution ui on the hologram is expressed as

Eq. (8)

$$u_i(x_h, y_h, 0) = \frac{a_i}{r_i}\exp\!\left\{ j\!\left(\frac{2\pi}{\lambda} r_i + \phi_i\right)\right\},$$
where ai is the amplitude of the point source, ri is the propagation distance from pi to ph, j (=√−1) is the imaginary unit, and ϕi is the initial phase of the point source. Therefore, the total complex amplitude distribution u from all point light sources is expressed as

Eq. (9)

$$u(x_h, y_h, 0) = \sum_{i=1}^{N} u_i(x_h, y_h, 0).$$
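A minimal NumPy sketch of the point source method of Eqs. (8) and (9) is shown below. The function name, grid size, random initial phases, and the plane reference wave used at the end are our own illustrative choices and are not taken from the authors' implementation.

```python
import numpy as np

def point_source_cgh(points, amplitudes, xh, yh, wavelength, rng=None):
    """Complex amplitude on the hologram plane by the point source method (Eqs. (8) and (9))."""
    rng = np.random.default_rng() if rng is None else rng
    u = np.zeros(xh.shape, dtype=np.complex128)
    k = 2.0 * np.pi / wavelength
    for (xi, yi, zi), ai in zip(points, amplitudes):
        phi = rng.uniform(0.0, 2.0 * np.pi)                     # initial phase phi_i (random here)
        r = np.sqrt((xi - xh) ** 2 + (yi - yh) ** 2 + zi ** 2)  # propagation distance r_i
        u += (ai / r) * np.exp(1j * (k * r + phi))              # Eq. (8), accumulated as in Eq. (9)
    return u

# Tiny illustration: two point sources and a coarse hologram grid
pitch = 9.6e-6                                   # SLM pixel pitch [m]
nx, ny = 256, 256
x = (np.arange(nx) - nx / 2) * pitch
y = (np.arange(ny) - ny / 2) * pitch
xh, yh = np.meshgrid(x, y)
points = [(0.0, 0.0, 0.5), (1e-3, 0.0, 0.7)]     # two points at depths of 0.5 m and 0.7 m
u = point_source_cgh(points, np.ones(len(points)), xh, yh, wavelength=525e-9)

# The fringe pattern follows from interference with the reference light;
# a unit-amplitude on-axis plane wave is used here purely for illustration.
fringe = np.abs(u + 1.0) ** 2
```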

Fig. 2

Outline of concepts involved in the point source method.


3.2.

Calculation of the Propagation Distance

In the FTOS, the propagation distance from a point source to the hologram plane is not calculated as in usual CGH calculations. A depth-free calculation method14 has been introduced to reconstruct images at arbitrary depths. In the FTOS, the hologram is slightly enlarged by the lens, so the reconstructed images would be expanded and deformed; therefore, the CGH calculation method for the FTOS differs from that for an ordinary hologram, and it is necessary to compensate for the change in coordinates. When the propagation distance from pi to ph is ri and the coordinates of an object point are (xi, yi, zi), the coordinates of the optimized location of the object (xi′, yi′, zi′) are obtained with

Eq. (10)

$$z_i' = \frac{fA}{f+A}, \qquad x_i' = \frac{x_i z_i'}{B}, \qquad \text{and} \quad y_i' = \frac{y_i z_i'}{B}.$$
Here, the distance ri is calculated as

Eq. (11)

$$r_i = \sqrt{(x_i' - x_h)^2 + (y_i' - y_h)^2 + z_i'^2},$$
and A, B are expressed as

Eq. (12)

$$A = \frac{z_i f}{z_i - f}, \qquad B = \frac{A f}{A - f}.$$
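The following sketch evaluates the coordinate compensation under our reading of the reconstructed Eqs. (10)–(12); the sign conventions and the exact forms of A and B in the original implementation may differ, so this is purely illustrative.

```python
def ftos_optimized_coordinates(x, y, z, f):
    """Optimized coordinates (x', y', z') for the FTOS, per our reading of Eqs. (10) and (12)."""
    A = z * f / (z - f)          # Eq. (12)
    B = A * f / (A - f)          # Eq. (12)
    z_p = f * A / (f + A)        # Eq. (10)
    x_p = x * z_p / B            # Eq. (10)
    y_p = y * z_p / B            # Eq. (10)
    return x_p, y_p, z_p

def propagation_distance(x_p, y_p, z_p, xh, yh):
    """Propagation distance r_i from the optimized point to a hologram sample, Eq. (11)."""
    return ((x_p - xh) ** 2 + (y_p - yh) ** 2 + z_p ** 2) ** 0.5

# Example with the 75-mm lens of Table 1 and a point 500 mm in front of the viewer
xp, yp, zp = ftos_optimized_coordinates(0.01, 0.0, 0.5, 0.075)
r = propagation_distance(xp, yp, zp, xh=0.0, yh=0.0)
```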

3.3.

Calculation for a Binocular System

The points to be reconstructed in a binocular view must carry parallax information, so different interference patterns must be displayed on the two SLMs in the respective display units. To ensure the consistency of the two images reconstructed by the display units, two coordinate systems for the eyes and a world coordinate system that indicates the position of the virtual point to be reconstructed are defined as shown in Fig. 3. The origin of the world coordinate system OW is located at the center of the viewpoints of the left and right eyes; the z-axis is set opposite to the viewing direction, and the x-axis orthogonally crosses the viewing direction and passes through the left and right viewpoints. The coordinates of the virtual objects represented in the world coordinate system need to be transformed into the two eye coordinate systems to calculate consistent CGHs for the left and right display units. When the origin of the coordinate system of the left eye is defined in the world coordinate system as OL(xWOL, yWOL, zWOL), the coordinates of the point to be reconstructed in the world coordinate system (xW, yW, zW) are transformed into the left coordinate system (xL, yL, zL) using a translation and rotation transformation, expressed as

Eq. (13)

$$\begin{pmatrix} x_L \\ y_L \\ z_L \\ 1 \end{pmatrix} = R(\varphi_L)\begin{pmatrix} 1 & 0 & 0 & -x_{WOL} \\ 0 & 1 & 0 & -y_{WOL} \\ 0 & 0 & 1 & -z_{WOL} \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x_W \\ y_W \\ z_W \\ 1 \end{pmatrix},$$
where

Eq. (14)

$$R(\varphi) = \begin{pmatrix} \cos\varphi & 0 & \sin\varphi & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\varphi & 0 & \cos\varphi & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$
and φL is the angle between the viewing direction of the left system and the z-axis of the world coordinate system. The right coordinate system (xR,yR,zR) is similarly calculated using the following equation:

Eq. (15)

$$\begin{pmatrix} x_R \\ y_R \\ z_R \\ 1 \end{pmatrix} = R(\varphi_R)\begin{pmatrix} 1 & 0 & 0 & -x_{WOR} \\ 0 & 1 & 0 & -y_{WOR} \\ 0 & 0 & 1 & -z_{WOR} \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x_W \\ y_W \\ z_W \\ 1 \end{pmatrix},$$
where φR is the angle between the viewing direction of the right system and the z-axis of the world coordinate system. Equations (13) and (15) correspond to the base PD. When the PD is changed by adding Δd [mm], the origin of the coordinate system of the right eye in the world coordinate system, OR(xWOR, yWOR, zWOR), moves to OR(xWOR + Δd, yWOR, zWOR) because the right optical unit slides to adjust the PD in our binocular display system. This translation ensures that objects are reconstructed with the correct parallax for each of the various PDs.
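A small NumPy sketch of the world-to-eye transformation of Eqs. (13)–(15) follows. The toe-in angles, the base PD, the slide amount Δd, and the sign conventions are hypothetical values chosen for illustration, not the authors' settings.

```python
import numpy as np

def rotation_y(phi):
    """Rotation matrix R(phi) of Eq. (14) (rotation about the y-axis, homogeneous form)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0.0, s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [-s, 0.0, c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def world_to_eye(point_w, origin_eye, phi):
    """Transform a world-coordinate point into an eye coordinate system, Eqs. (13)/(15)."""
    T = np.eye(4)
    T[:3, 3] = -np.asarray(origin_eye)       # translate the eye origin to the world origin
    p = np.append(np.asarray(point_w), 1.0)  # homogeneous coordinates
    return (rotation_y(phi) @ T @ p)[:3]

# Illustrative use: assumed base PD of 65 mm, right unit slid by delta_d to fit a 68-mm PD
pd_base, delta_d = 0.065, 0.003
origin_left = (-pd_base / 2, 0.0, 0.0)
origin_right = (pd_base / 2 + delta_d, 0.0, 0.0)
phi_l, phi_r = np.deg2rad(2.3), np.deg2rad(-2.3)  # toe-in angles (signs are our assumption)
point_world = (0.0, 0.0, -0.5)                    # a point 500 mm in front (the world z-axis
                                                  # points opposite to the viewing direction)
p_left = world_to_eye(point_world, origin_left, phi_l)
p_right = world_to_eye(point_world, origin_right, phi_r)
```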

Fig. 3

Outline of the binocular system.


In a binocular system, the binocular visual field angle θB is expressed as

Eq. (16)

$$\theta_B = \begin{cases} \theta_M + \varphi_L + \varphi_R & (Z_{\min} \le z < Z_d) \\ \theta_M - 2(\varphi_L + \varphi_R) & (z \ge Z_d), \end{cases}$$
where θM is the monocular visual field angle of the left and right display units, Zmin is the nearest depth from the viewpoint at which the reconstructed images are observed binocularly, and Zd is the depth at which the outside limits of the monocular fields of view cross, as shown in Fig. 3. The value of Zmin is calculated as

Eq. (17)

$$Z_{\min} = \frac{PD}{2\tan\!\left(\dfrac{\theta_M + \varphi_L + \varphi_R}{2}\right)},$$
where PD is the interpupillary distance of the observer. Equation (17) indicates that when the angles φL and φR become larger, the reconstructed images can be observed at a closer point. However, according to Eq. (16), the binocular visual field decreases for objects located farther away. Thus, the parameters φL and φR need to be optimized to match the intended use of the display system.

3.4.

Calibration for Installation Errors

Optical units commonly have some installation errors, and these errors often cause considerable errors in reconstructed images. However, these errors are difficult to remove manually. Here we propose a calibration method to correct installation errors. The calibration method is divided into three steps.

The first step corrects the depth of the reconstructed images. In this step, depth-direction errors are corrected using linear least squares. First, N measurement depths are defined as z1, z2, …, zN, and the measured depth zei is obtained by measuring the image of an object located at the i’th depth zi. We assume that the relation between zi and zei is represented by a linear model, and the fitting line F(zi) is defined as

Eq. (18)

$$F(z_i) = a_1 z_i + a_2.$$
The coefficients a1 and a2 are obtained by minimizing the following sum:

Eq. (19)

$$S_1 = \sum_{i=1}^{N}\left\{z_{ei} - F(z_i)\right\}^2,$$
using the linear least squares method. F(z) expresses the trend of the installation errors. Next, to remove the installation errors, the depth zi of a virtual object is shifted to the depth given by the inverse function F−1(zi). The depth correction is independent for each eye and does not depend on the PD; therefore, if the measurement is made when the system is manufactured, the measured values can be used to create hologram data for various PDs. This correction adds almost no computation time.
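A minimal sketch of this first calibration step is given below, using made-up measurement values; np.polyfit performs the linear least-squares fit of Eqs. (18) and (19), and the fitted line is then inverted to pre-shift the display depths.

```python
import numpy as np

# Illustrative measurement: target depths z_i and measured depths z_ei [mm];
# the numbers below are made up for the example, not the authors' data.
z = np.array([400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1000.0])
z_e = np.array([388.0, 492.0, 599.0, 707.0, 818.0, 930.0, 1043.0])

# Eqs. (18)/(19): fit F(z) = a1*z + a2 by minimizing sum_i {z_ei - F(z_i)}^2
a1, a2 = np.polyfit(z, z_e, deg=1)

def F(zi):
    return a1 * zi + a2

def F_inv(ze):
    """Exact inverse of the fitted line, used to pre-shift the display depth."""
    return (ze - a2) / a1

# To make a target actually appear at depth z_i, display it at F^-1(z_i)
corrected_depths = F_inv(z)
```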

In the second step, the image size is corrected. The size of the image changes after the first step, so it has to be restored to the original object size. The size of the depth-corrected object Hi′ is related to the original object size Hi as follows:

Eq. (20)

$$H_i' = \frac{z_i}{F(z_i)}\, H_i.$$
By multiplying by the inverse of the factor in Eq. (20), the size of the original virtual object is restored.

In the third step, the vergence value is corrected, also using linear least squares. The vergence value is determined by the x-coordinates of the images reconstructed by the left and right display units. As shown in Fig. 4, when the x-direction error in the world coordinate system between the objects reconstructed by the left and right display units is measured as dei at the depth zi, the linear function G(z) fitted to dei is expressed with the constants b1 and b2 as

Eq. (21)

$$G(z_i) = b_1 z_i + b_2,$$
where b1 and b2 are chosen to minimize the following sum:

Eq. (22)

$$S_2 = \sum_{i=1}^{N}\left\{d_{ei} - G(z_i)\right\}^2.$$
The error dei is calculated as

Eq. (23)

$$d_{ei} = d_{xRi} - d_{xLi},$$
where dxRi and dxLi are the x-coordinates of the object reconstructed by the right and left display units, respectively. Here, zdi, the depth gazed at by both eyes through the system, is calculated from dei by the following equation:

Eq. (24)

$$z_{di} = \frac{PD}{PD - d_{ei}}\, z_i.$$
This equation indicates that when there are no errors (dei = 0), the gazed depth zdi becomes equal to the ideal depth zi, so the final step is completed by subtracting G(zi) from the x-coordinates of the object to make this error zero.
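The sketch below illustrates the third step with made-up error measurements: G(z) is fitted by linear least squares as in Eqs. (21) and (22), and Eq. (24) converts the residual parallax errors into gazed depths.

```python
import numpy as np

# Illustrative horizontal errors d_ei [mm] between the right- and left-unit images,
# measured at the depths z_i (values are made up for the example).
z = np.array([400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1000.0])
d_e = np.array([1.5, 2.5, 3.8, 5.1, 6.4, 8.0, 9.8])
pd = 65.0  # interpupillary distance [mm]

# Eqs. (21)/(22): fit G(z) = b1*z + b2 by linear least squares
b1, b2 = np.polyfit(z, d_e, deg=1)

def G(zi):
    return b1 * zi + b2

# Eq. (24): depth gazed at by both eyes when the residual error d_ei remains
z_gazed = pd / (pd - d_e) * z

# The final step subtracts G(z_i) from the x-coordinates to drive this error to zero.
```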

Fig. 4

Measurement of errors of target parallax and conversion to depths.


In summary, the object data (xi, yi, zi) are corrected to (xi′, yi′, zi′), which are represented as

Eq. (25)

$$x_i' = \frac{F(z_i)}{z_i}\, x_i + G(z_i), \qquad y_i' = \frac{F(z_i)}{z_i}\, y_i, \qquad z_i' = \frac{z_i}{F(z_i)}\, z_i.$$
Note that the value of the function G(zi) is positive when calculating CGHs for the right display unit and negative for the left one.
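Putting the three steps together, the following sketch applies the combined correction under our reading of Eq. (25); the coefficient values in the example are hypothetical.

```python
def apply_calibration(x, y, z, F, G, right_unit=True):
    """Apply the combined correction of Eq. (25) to one object point (x, y, z),
    following our reading of the reconstructed equation.

    F, G : the fitted linear functions from the first and third calibration steps.
    The sign of G(z) depends on the display unit, as noted above.
    """
    scale = F(z) / z                    # size correction (inverse of Eq. (20))
    shift = G(z) if right_unit else -G(z)
    x_c = scale * x + shift             # x_i' = F(z_i)/z_i * x_i +/- G(z_i)
    y_c = scale * y                     # y_i' = F(z_i)/z_i * y_i
    z_c = z / F(z) * z                  # z_i' = z_i/F(z_i) * z_i (depth pre-shift)
    return x_c, y_c, z_c

# Example with hypothetical fit coefficients (units: mm)
x_c, y_c, z_c = apply_calibration(10.0, 5.0, 800.0,
                                  F=lambda z: 1.04 * z - 12.0,
                                  G=lambda z: 0.009 * z - 2.1,
                                  right_unit=True)
```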

This is a practical calibration method because it corrects the installation errors of the lenses, reconstruction lights, and other optical elements, as well as the depth-direction distortions caused by the lenses, all at once.

4.

Fabrication of Head Mounted Displays

We fabricated the holographic HMD system with the optical parameters detailed in Table 1. The range of adjustable PDs was set to 50 to 70 mm because the average PD of adult males is around 65 mm. The viewing directions of the left and right display units were set to cross 800 mm from the center of the viewpoints. The minimum depth Zmin is about 300 mm because this is the nearest depth human beings can focus on without effort.

Table 1

Optical parameters.

SLMs
  Pixel pitch: 9.6 (H) × 9.6 (V) [μm]
  Resolution: 1280 × 768 [pixels]
  Refresh rate: 180 [Hz]
Wavelength and power of LEDs
  Red: 625 [nm]
  Green: 525 [nm]
  Blue: 465 [nm]
  Power consumption: 1 [W/color]
Lenses
  Focal length: 75 [mm]
  Distance from SLM: 7 [mm]
Visual field angles
  Visual field (horizontal): 9.4 [deg]
  Visual field (vertical): 5.6 [deg]
Viewing zone
  Viewing zone of red: 4.9 [mm]
  Viewing zone of green: 4.1 [mm]
  Viewing zone of blue: 3.6 [mm]
Others
  Adjustable PDs: 50 to 70 [mm]

The HMD was 350-mm high × 200-mm wide × 200-mm deep, and the weight was 1480 g.

4.1.

Optical Structure

We fabricated the holographic HMD based on the aforementioned design with a sufficiently solid structure to facilitate adjustments of the lens, light source unit, and SLM, as shown in Fig. 5. The optical parts are integrated into one component with a surrounding frame. The reconstruction light is positioned exactly at the focal point of the lens to reduce installation errors.

Fig. 5

Photo and outline of holographic HMD.


To create binocular vision, our proposed HMD has two display units, one for the left eye and one for the right, as shown in Fig. 5. These two units are independent, are combined symmetrically for the left and right eyes, and are both attached to a helmet. Because each display unit has a narrow viewing zone, the binocular display system is equipped with a sliding structure under the right display unit so that it can be adjusted to the individual PDs of observers.

For each display unit, the FTOS comprises an SLM, a lens, and a light source unit, all arranged on one axis. To superpose the real scene and the virtual image in the observer’s vision, a half mirror is located between the lens and the light source unit. See-through displays have to make the reconstructed images bright enough to be observed under ordinary room light. To avoid attenuation of the reconstruction light, the propagation distance is shortened by using a lens with a short focal length, and only a single half mirror is used to make see-through vision possible.

To cut out unexpected light, barriers are located in front of the eyes; they block the unexpected light while allowing only the reconstruction light to pass through.

4.2.

Full-Color Point Light Source Unit

To obtain sharp and clear full-color images, very small and high-power point light sources are necessary because the resolution of the reconstructed images depends on the quality of the reconstruction light. Lasers for the primary colors, combined by half mirrors, have usually been adopted as a high-power, full-color point light source.3,9,10,15 However, multiple lasers result in a bulky apparatus and in speckle noise in the reconstructed images due to the high coherence of laser light. For these reasons, we used small, high-power light-emitting diodes (LEDs) for the reconstruction light.

The full-color LED used has independent LED chips for the primary colors arranged at slightly different positions. Because of these positional offsets, the reconstructed images of the primary colors are slightly displaced from each other, which would normally cause some color blurring. To avoid that, a transparent acrylic fiber is placed on the LEDs as shown in Fig. 6; we designed and developed this custom-made light source unit. The fiber is about 18-mm long and has a diameter of 5 mm. Except for the top and bottom of the fiber, its surface is mirror-coated. Light emitted from the LEDs is combined inside the fiber, and the combined light of the three colors is emitted only at the top of the mirror-coated fiber.

Fig. 6

Photo and details of full-color point light source unit with mirror-coated fiber.


Figure 7 shows the top of the fiber. The aperture of the fiber, the light spot area shown in the figure, is about 0.2 mm. This size was determined by balancing brightness, resolution, and directivity. The point light source unit has some directivity, with a half-power angle of 10.8 deg, but this is still wide enough to cover the entire active area of the SLM.

Fig. 7

Aperture of LED fiber.


To synchronize the lighting of each primary-color LED with the display of the corresponding color hologram, a synchronizing circuit connects the SLM and the light source unit. This circuit transmits the color information signal from the SLM driver to the light source unit, and the LED of each color is lit according to the received color signal.

5.

Experiments and Results

To test the effectiveness of the proposed holographic HMD, we measured its optical characteristics and conducted objective and subjective evaluations of the reconstructed images.

5.1.

Optical Characteristics

Figure 8 shows the left and right images of the holographic HMD. The reconstructed image is generated from the virtual objects shown in Fig. 8(a), with a metal ball at the center, a transparent object on the right, and a diffusely reflecting sphere on the left. It can be seen that the checkerboard below is reflected on the surface of the reflective sphere. Figures 8(b) and 8(c) have parallax and can be viewed stereoscopically by the parallel method. In this manner, the texture of an object’s surface can be displayed. Also, when the observer focused on the front sphere, the rear sphere became blurred, giving a natural rendering of focal depth.

Fig. 8

Parallax of the holographic HMD: (a) schematic diagram, (b) view from the left, and (c) view from the right.


Figure 9 shows the reconstructed image of a visual field chart with a dot interval of 1.0 deg, except for 0.5 deg at both ends of the horizontal row. This result indicates that the visual fields of the holographic HMD are 9.4 deg horizontally and 5.6 deg vertically. Because a lens with a small diameter and a short focal length is used, chromatic aberration can be seen. Correcting it will require a calculation algorithm that compensates for the chromatic aberration.

Fig. 9

Reconstructed image of a visual field chart.


Figure 10 shows the reconstructed images of test charts used to reveal the resolution of the proposed holographic HMD. The charts are located at a depth of 500 mm, and the numbers on them indicate the line spacing in millimeters. The figures indicate that both the horizontal and vertical resolutions are slightly larger than 1.0 mm. The measured resolutions agree with those predicted by Eq. (5) and the aperture size of the reconstruction light described in Sec. 2.3.

Fig. 10

Resolutions of reconstructed images: (a) horizontal and (b) vertical.


Figure 11 shows reconstructed images of two Maltese-cross targets arranged at depths of 300 and 1000 mm. In Fig. 11(a), in which the camera is focused on the left target, the left target image is sharp whereas the right one is defocused. Focusing on the other target produces the opposite effect, as shown in Fig. 11(b). These results indicate that the reconstructed images provide an accommodation stimulus.

Fig. 11

Difference in reconstructed images at two focal depths.


5.2.

Calibration of the Depth of the Reconstructed Images

In the first experiment, the depths of the noncorrected reconstructed images were measured and corrected using the calibration method described in Sec. 3.4, and we tested the accuracy of the depths and vergence values of the images. The target depths zi were set from 400 to 1000 mm at 100-mm intervals, and the depths of the targets were measured using the focus of a camera: the camera was focused on the reconstructed image, a real index was then placed at the focus of the camera, and the depth of this real index was measured. The camera was a Nikon D5100 with the lens set to f/22, giving a depth of field of ±90 mm at 1 m. The use of RGB LEDs affects the image depths, as described in Sec. 2.3. The center wavelengths (spectral bandwidths) of the RGB LEDs are 625 nm (17 nm), 525 nm (34 nm), and 465 nm (23 nm), respectively. The largest depth change (green) is 16 mm at a depth of 1000 mm, which is smaller than the depth of field of the camera. The resolution degradation caused by the spectral bandwidth is 0.24 mm (green) at a depth of 1000 mm, which is smaller than that caused by the aperture size.

Figure 12 shows the results of this experiment: Fig. 12(a) shows the results for the left optical unit and Fig. 12(b) those for the right one. In this figure, the horizontal and vertical axes correspond to the target depth zi and the measured depth zei, respectively. The measured depths of the noncorrected targets are plotted as “+” marks, and the line F(z) fitted to these values is drawn as the chained line. The data corrected using Eqs. (20) and (25) are plotted as “×” marks. The measured depths of the corrected images agree with the theoretically ideal line satisfying zei = zi, represented by the dotted line. These results show that, after correction, both the left and right units reconstruct the images at the correct depths.

Fig. 12

Results of depth correction: (a) measured and corrected depths of the left optical unit and (b) those of the right unit.


Next, the vergence errors of the depth-corrected targets displayed at the theoretically ideal depths were measured. These errors were also measured with a camera using the following procedure: a scale was placed at the depth of the reconstructed images, and the images were captured from the left and right viewpoints with the camera. If the parallax is correct, the reconstructed image should appear at the same position on the scale from both viewpoints. Figure 13 shows the difference in position between the left and right images.

Fig. 13

Results of the vergence-value correction: (a) measured errors dei related to the vergence and (b) errors related to PDs.


In Fig. 13(a), the measured errors dei were sufficiently small and needed no correction; the maximum was a 10-mm error for the target at a depth of 1000 mm. Figure 13(b) shows the measured errors as a function of PD. No calibration of the vergence value was conducted for these experiments, and the errors were almost negligible.

5.3.

Subjective Evaluation

The second experiment was a subjective evaluation of depth perception. In this experiment, the depths of the depth-corrected images were evaluated binocularly by five subjects in their 20s, all of whom had 20/20 vision. The targets were located at distances from 400 to 1000 mm from the observers at 100-mm intervals. The observed depth was measured by moving a real index to the position that the subject perceived to be the same as that of the image; the movement was controlled by the subject using an electric laser. Each subject performed the experiment once.

The results of the evaluation are shown in Fig. 14, where D (diopter) is the reciprocal of the distance in meters (1/m), a unit expressing the refractive power of a lens. Figure 14(a) shows the relationship between the stimulus depths Zi [D] of the displayed targets and the observed depths Zei [D] after the depth correction. In the figure, the dotted line is the ideal line satisfying Zei = Zi, and the plotted marks are the observed depths. These results indicate that the proposed calibration is able to correct the depths of the reconstructed images.

Fig. 14

Results of subjective evaluation (a) observed depths reported by five subjects and (b) relationship between observed depths and PDs.


Figure 14(b) shows the individual PDs on the horizontal axis and the observed depths of the targets on the vertical axis. These results indicate that the holographic HMD reconstructs images of the objects at the correct depths for various individual PDs.

6.

Conclusion

We developed a holographic HMD that shows full-color 3-D images with a visual field of 9.4 deg. To generate accurate accommodation and vergence stimuli, we proposed correction methods for the focusing depths and the vergence angle. The CGH calculations for these corrections make it possible to use a low-accuracy assembly and a freely adjustable optical system. The results of the objective and subjective evaluations indicated that the display represents 3-D images at correct depths. The apparatus also generated stimuli adjusted to individual interpupillary distances. Although the whole system is currently large, heavy, and impractical, the optical system itself is not heavy; most of the weight comes from the frame. Consequently, the optical arrangement of this system can be considered small and light.

References

1. D. M. Hoffman et al., “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vision 8(3), 33 (2008). https://doi.org/10.1167/8.3.33
2. J. Barabas et al., “Depth perception and user interface in digital holographic television,” Proc. SPIE 8281, 828109 (2012). https://doi.org/10.1117/12.908538
3. T. Senoh et al., “Full-color wide viewing-zone-angle electronic holography system,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (2011).
4. E. Moon et al., “Holographic head-mounted display with RGB light emitting diode light source,” Opt. Express 22(6), 6526–6534 (2014). https://doi.org/10.1364/OE.22.006526
5. T. Takemori, “3-dimensional display using liquid crystal devices: fast computation of hologram,” 13–19, Tokyo, Japan (1997).
6. M. Kitamura et al., “Depth perception with see-through holographic display,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (2011).
7. W. Su, L. Chen and H. Lin, “Full color image in a holographic head-mounted display,” in 20th Int. Display Workshops (IDW ’13), 1280–1283 (2013).
8. H. E. Kim et al., “Three-dimensional holographic display using active shutter for head mounted display application,” Proc. SPIE 7863, 78631Y (2011). https://doi.org/10.1117/12.872680
9. H. Nakayama et al., “Real-time color electroholography using multiple graphics processing units and multiple high-definition liquid-crystal display panels,” Appl. Opt. 49, 5993–5996 (2010). https://doi.org/10.1364/AO.49.005993
10. F. Yaras, H. Kang and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. 48, H48–H53 (2009). https://doi.org/10.1364/AO.48.000H48
11. H. Nakayama et al., “An electro-holographic colour reconstruction by time division switching of reference lights,” Appl. Opt. 49, 5993–5996 (2010). https://doi.org/10.1364/AO.49.005993
12. T. Shimobaba and T. Ito, “A color holographic reconstruction system by time division multiplexing with reference lights of laser,” Opt. Rev. 10, 339–341 (2003). https://doi.org/10.1007/s10043-003-0339-6
13. J. P. Waters, “Holographic image synthesis utilizing theoretical methods,” Appl. Phys. Lett. 9, 405–407 (1966). https://doi.org/10.1063/1.1754630
14. Y. Sato and Y. Sakamoto, “Calculation method for reconstruction at arbitrary depth in CGH with Fourier transform optical system,” Proc. SPIE 8281, 82810W (2012). https://doi.org/10.1117/12.907615
15. Y. Shimozato et al., “Four-primary-color digital holography,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (2011).

Biographies for the authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Takuo Yoneyama, Eishin Murakami, Yuki Oguro, Hibiki Kubo, Kazuhiro Yamaguchi, and Yuji Sakamoto "Holographic head-mounted display with correct accommodation and vergence stimuli," Optical Engineering 57(6), 061619 (30 May 2018). https://doi.org/10.1117/1.OE.57.6.061619
Received: 14 November 2017; Accepted: 2 May 2018; Published: 30 May 2018
KEYWORDS: Head-mounted displays, Holography, 3D image reconstruction, Holograms, Spatial light modulators, Visualization, Light sources