Speckle, which results from the random interference of coherent fields, is inherent in holographic displays. It lowers the signal-to-noise ratio of the reconstructed holographic images and raises potential eye-safety issues. In this invited paper, we introduce recent work on speckle reduction in holographic displays, achieved either by modulating the light source module or by temporally multiplexing the computer-generated holograms.
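As a minimal numerical illustration of the temporal-multiplexing principle (our own sketch, not any particular paper's implementation): averaging N statistically independent speckle patterns reduces the speckle contrast C = σ_I/⟨I⟩ by a factor of 1/√N.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_contrast(intensity):
    # Speckle contrast C = sigma_I / <I>; C = 1 for fully developed speckle.
    return intensity.std() / intensity.mean()

shape = (512, 512)
# One fully developed speckle pattern: intensity is exponentially distributed.
single = rng.exponential(scale=1.0, size=shape)

# Temporal multiplexing: average N statistically independent patterns,
# e.g. produced by N different computer-generated holograms shown in sequence.
N = 16
averaged = rng.exponential(scale=1.0, size=(N,) + shape).mean(axis=0)

print(speckle_contrast(single))    # close to 1.0
print(speckle_contrast(averaged))  # close to 1/sqrt(16) = 0.25
```

The 1/√N law is why temporally multiplexed holograms need a fast spatial light modulator: meaningful contrast reduction requires many independent frames per displayed image.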
Conventional head-mounted displays for virtual reality, which simply adopt a binocular structure of display panels and floating lenses, may cause visual discomfort such as nausea and double vision because they constrain the user's accommodative state, leading to the vergence-accommodation conflict. In this paper, we overview the tomographic near-eye display, which achieves high resolution, a large depth of field, and quasi-continuous depth at 60 frames per second to resolve the vergence-accommodation conflict. In addition, we investigate several design issues and solutions for implementing the system as a compact head-mounted display, and demonstrate prototypes and experimental results.
Computational accommodation-invariant (AI) displays attempt to mitigate the vergence-accommodation conflict (VAC) by showing constant imagery regardless of where the observer focuses. However, because an electrically focus-tunable lens is used, image contrast is degraded as the point-spread functions of multiple foci are integrated. In this paper, we introduce a content-adaptive approach that improves contrast at the depth of the most salient region in the image. The position of the focal plane is dynamically determined from the zone of comfort and the mean focal distance of the salient region. Simulation results using a USAF resolution target image show the contrast enhancement over a conventional accommodation-invariant display. We demonstrate a proof-of-concept prototype and verify its optical feasibility with experimental results.
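The focal-plane placement rule described above can be sketched as follows. The function name, the saliency weighting, and the zone-of-comfort bounds are illustrative assumptions of ours, not the paper's exact algorithm.

```python
import numpy as np

def adaptive_focal_plane(depth_d, saliency, zoc=(0.0, 4.0)):
    """Place the focal plane at the saliency-weighted mean depth (in diopters),
    clamped to a zone of comfort. Names and bounds are illustrative."""
    w = saliency / saliency.sum()
    mean_d = float((w * depth_d).sum())
    return min(max(mean_d, zoc[0]), zoc[1])

# Toy scene: a salient object at 2.5 D in front of a 0.5 D background.
depth = np.full((64, 64), 0.5)
depth[20:40, 20:40] = 2.5
sal = np.zeros_like(depth)
sal[20:40, 20:40] = 1.0            # saliency concentrated on the object

print(adaptive_focal_plane(depth, sal))  # 2.5: the plane follows the salient region
```

Clamping to the zone of comfort keeps the tunable lens within a range where the constant retinal blur of the AI scheme remains tolerable even when the salient content lies outside it.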
Near-eye displays with focus cues attract considerable attention from the display community since they can provide a more immersive and comfortable experience. However, several challenges remain in implementing them. First, most prototypes still suffer from contrast or resolution degradation, which is a drawback in reconstructing continuous focus cues. Second, providing multiple focus cues usually involves complex or bulky optics that prevent a comfortable experience. Third, the rendering process for layered structures demands a heavy computational load, which makes real-time operation difficult. Here, we investigate several ideas to solve or mitigate these challenges. Foveated retinal optimization enhances central contrast by exploiting human visual characteristics. After a calibration process, foveated retinal optimization can be employed without a gaze-tracking system because pupil location, gaze direction, and the distribution of optic nerves are correlated. Hybrid multi-layer displays are a convincing approach to combining the advantages of additive and multiplicative displays. By combining additive and attenuating layers, hybrid multi-layer displays achieve a relatively small form factor and higher brightness while providing continuous focus cues within a range of 1.8 diopters. Recently, we conceived a novel prototype that supports a large number of layers within a large depth of field. This system provides near-original contrast, a large depth of field, and quasi-continuous focus cues at 60 frames per second. We believe this approach may open a new possibility for near-eye displays that represent continuous depth.
Current commercial head-mounted displays suffer from limited accommodative states, which leads to the vergence-accommodation conflict. In this work, we design a new head-mounted display architecture that supports 15 focal planes over a wide depth of field (20 cm to optical infinity) in real time to alleviate the vergence-accommodation conflict. Our system employs a low-resolution vertical scanning backlight, a display panel (e.g., a liquid crystal panel), and a focus-tunable lens. We demonstrate a compact prototype and verify its performance through experimental results.
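A minimal sketch of how the 15 focal planes could be scheduled, assuming even spacing in diopters between 20 cm (5 D) and optical infinity (0 D); the paper's exact drive scheme may differ.

```python
# Assumed parameters, not the paper's exact drive scheme: the tunable lens
# sweeps optical power while the low-resolution vertical scanning backlight
# lights one segment per focal plane, so each of the 15 planes is
# illuminated at its own lens state within one panel frame.
NUM_PLANES = 15
NEAR_D, FAR_D = 5.0, 0.0           # 20 cm (5 D) to optical infinity (0 D)

def plane_diopter(k):
    """Optical power assigned to focal plane k (evenly spaced in diopters)."""
    return NEAR_D + (FAR_D - NEAR_D) * k / (NUM_PLANES - 1)

schedule = [(k, round(plane_diopter(k), 3)) for k in range(NUM_PLANES)]
print(schedule[0], schedule[-1])   # (0, 5.0) (14, 0.0)
```

Even spacing in diopters (rather than in metric distance) matches the eye's accommodation response, whose depth-of-focus tolerance is roughly constant on a dioptric scale.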
Various optical eyepieces have recently been proposed for augmented reality head-mounted displays. Most are based on reflective optical elements, including half-convex mirrors, freeform optics, and diffractive optical elements. We present a transmissive optical eyepiece: the index-matched anisotropic crystal. To compensate for the long focal length, an inherent drawback of the index-matched anisotropic crystal, a lightguide is added to the system. Experimental results and a field-of-view analysis are presented to show the feasibility of the proposed idea.
Digital holography (DH) is widely known as a technique for reconstructing the depth profile of an object. For DH, a light source with a long coherence length, such as a laser or laser diode, is generally required. Recently, digital holographic microscopy (DHM) using a light-emitting diode (LED) as the light source has attracted attention. However, certain conditions must be satisfied for an LED to be used in off-axis DHM because of its short coherence length; in particular, the hologram cannot be captured on the far side of a beam splitter. We therefore propose an LED-based off-axis reflective DHM that incorporates a 4-f system to optically relay the sample field to the sensor plane of a charge-coupled device (CCD). We analyze why the sample plane has to be relayed by the 4-f system and provide experimental results verifying the necessity of relay optics in LED-based reflective off-axis DHM.
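The relay property of a 4-f system can be illustrated numerically: in an idealized unit-magnification model, the two lenses perform two cascaded Fourier transforms, reproducing the input field, inverted, at the output plane. This sketch is our own illustration of that textbook property, not the paper's optical design.

```python
import numpy as np

rng = np.random.default_rng(1)
# Arbitrary complex optical field at the sample plane.
field = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

# Each lens of the 4-f system performs one Fourier transform.
ft1 = np.fft.fft2(field)                  # front focal-plane spectrum
relayed = np.fft.fft2(ft1) / field.size   # output plane (normalized)

# Two forward transforms amount to a coordinate inversion:
# relayed[m, n] = field[(-m) % N, (-n) % N].
inverted = np.roll(field[::-1, ::-1], 1, axis=(0, 1))
print(np.allclose(relayed, inverted))     # True
```

Because the output field is a faithful (inverted) copy of the input, relaying the sample plane to the sensor lets the short-coherence LED interferogram form where the path lengths can actually be matched.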
Although there has long been a desire to implement ideal three-dimensional (3D) displays, it is still challenging to satisfy commercial demands in resolution, depth of field, form factor, eye-box, field of view, and frame rate. Here, we propose shape scanning displays, which offer an extremely large depth of field (10 cm to infinity) without loss of frame rate or resolution, and a sufficient eye-box (7.5 mm) with a moderate field of view (30°). Furthermore, our prototype provides quasi-continuous focus cues as well as motion parallax by reconstructing 120 tomographic layers. Shape scanning displays consist of a tunable lens, a display panel, and a spatially adjustable backlight. Synchronizing the tunable lens with the spatially adjustable backlight provides an additional dimension of depth information. In summary, we introduce a novel 3D display technology, shape scanning displays, with superior performance in resolution, depth of field, and focus cue reproduction. This approach has great potential in various fields of 3D displays, including head-up displays, tabletop displays, and head-mounted displays. It could also be an efficient solution for the vergence-accommodation conflict, as it provides accurate focus cues.
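The tomographic decomposition can be sketched as follows: each pixel's scene depth is quantized to one of the 120 layers, and the backlight lights only those pixels while the tunable lens sits at the corresponding state. The layer count and depth range follow the abstract; the rendering details are our own illustrative assumptions.

```python
import numpy as np

NUM_LAYERS = 120
NEAR_D, FAR_D = 10.0, 0.0          # 10 cm (10 D) to infinity (0 D), in diopters

rng = np.random.default_rng(4)
depth_map = rng.uniform(FAR_D, NEAR_D, size=(48, 64))   # per-pixel depth (D)

# Quantize each pixel to its nearest tomographic layer index.
layer_idx = np.round((NEAR_D - depth_map) / (NEAR_D - FAR_D)
                     * (NUM_LAYERS - 1)).astype(int)

# One binary backlight mask per layer; each pixel is lit in exactly one layer,
# so the full panel frame is decomposed into 120 depth-sliced illuminations.
masks = [(layer_idx == k) for k in range(NUM_LAYERS)]
total_lit = sum(m.sum() for m in masks)
print(total_lit == depth_map.size)  # True: the masks partition the image
```

Because every pixel is illuminated during exactly one lens state, the panel's native frame rate and resolution are preserved while depth is encoded by illumination timing.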
In this paper, a new type of light field display that combines additive and multiplicative light field displays is proposed. The combination of the two types of compressive light field displays makes the system compact, improves light efficiency, and alleviates diffraction effects. The system implements four physical image planes to widen the depth range. A layer image optimization algorithm suited to the proposed system is introduced; as a result, the target light field is decomposed into four different layer images. We explain the principle of the proposed system and verify its feasibility with simulation and experimental results.
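To give a flavor of layer image optimization, here is a deliberately tiny toy model of our own (not the paper's algorithm): in a two-layer multiplicative display where ray (i, j) passes through pixel i of the front layer and pixel j of the rear layer, the layer images are fitted by projected gradient descent on the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy target: a rank-1 light field T[i, j] = a_true[i] * b_true[j] that the
# two attenuation layers can represent exactly.
a_true = rng.uniform(0.2, 0.9, 32)
b_true = rng.uniform(0.2, 0.9, 32)
T = np.outer(a_true, b_true)

# Fit the layer images by projected gradient descent on the squared error.
a = np.ones(32)
b = np.ones(32)
lr = 0.01
for _ in range(2000):
    err = np.outer(a, b) - T
    a = np.clip(a - lr * err @ b, 0.0, 1.0)    # layers are attenuators in [0, 1]
    b = np.clip(b - lr * err.T @ a, 0.0, 1.0)

print(np.abs(np.outer(a, b) - T).max())  # small reconstruction residual
```

A real compressive display optimizes many more rays against stacked layers (and, in the hybrid system, an additive plane as well), but the structure is the same: a constrained least-squares factorization of the target light field.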
By virtue of rapid progress in optics, sensors, and computer science, commercial products and prototypes for augmented reality (AR) are penetrating consumer markets. AR is in the spotlight because it is expected to provide a much more immersive and realistic experience than ordinary displays. However, several barriers must be overcome for successful commercialization of AR. Here, we explore challenging and important topics for AR: image combiners, enhancement of display performance, and focus cue reproduction. Image combiners are essential for integrating virtual images with the real world. Display performance (e.g., field of view and resolution) is important for a more immersive experience, and focus cue reproduction may mitigate the visual fatigue caused by the vergence-accommodation conflict. We also demonstrate emerging technologies that address these issues: the index-matched anisotropic crystal lens (IMACL), retinal projection displays, and 3D displays with focus cues. For image combiners, a novel optical element called the IMACL provides a relatively wide field of view. Retinal projection displays may enhance the field of view and resolution of AR displays. Focus cues can be reconstructed via multi-layer displays and holographic displays. Experimental results of our prototypes are presented.
Augmented reality has recently been attracting attention as one of the most spotlighted next-generation technologies. To move toward ideal augmented reality, we need to integrate 3D virtual information into the real world. This integration should not be noticeable to users, blurring the boundary between the virtual and real worlds. Thus, the ultimate device for augmented reality reconstructs and superimposes 3D virtual information on the real world so that the two are indistinguishable, which is referred to as see-through 3D technology. Here, we introduce our previous research on combining see-through displays and 3D technologies using emerging optical combiners: holographic optical elements and index-matched optical elements. Holographic optical elements are volume gratings with angular and wavelength selectivity. Index-matched optical elements are partially reflective elements that use a compensation element for index matching. Using these optical combiners, we have implemented see-through 3D displays based on representative methodologies including integral imaging, digital holographic displays, multi-layer displays, and retinal projection. Some of these methods are expected to be optimized and customized for head-mounted or wearable displays. We conclude with demonstrations and analyses of fundamental research toward head-mounted see-through 3D displays.
The introduction of adaptive optics into astronomy and ophthalmology has made great contributions to these fields, allowing one to recover images blurred by atmospheric turbulence or by aberrations of the eye. A similar adaptive optics improvement in microscopic imaging is also of interest to researchers using various techniques. Current adaptive optics technology typically contains three key elements: a wavefront sensor, a wavefront corrector, and a controller. These hardware elements tend to be bulky, expensive, and limited in resolution, involving, for example, lenslet arrays for sensing or multi-actuator deformable mirrors for correction. We have previously introduced an alternative approach based on unique capabilities of digital holography, namely direct access to the phase profile of an optical field and the ability to manipulate that phase profile numerically. We have also demonstrated that direct access to, and compensation of, the phase profile is possible not only with conventional coherent digital holography but also with a new type of digital holography using incoherent light: self-interference incoherent digital holography (SIDH). SIDH generates a complex (i.e., amplitude plus phase) hologram from one or several interferograms acquired with incoherent light, such as LEDs, lamps, sunlight, or fluorescence. The complex point spread function can be measured using guide-star illumination, which allows deterministic deconvolution of the full-field image. We present an experimental demonstration of aberration compensation in holographic fluorescence microscopy using SIDH. Adaptive optics by SIDH provides new tools for improved cellular fluorescence microscopy through intact tissue layers or other aberrant media.
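The deterministic deconvolution step can be illustrated with a toy simulation of our own (not the SIDH reconstruction pipeline): once the aberrated point spread function is known from a guide-star measurement, a regularized inverse filter recovers the full-field image through the same aberration. Here a simple real Gaussian blur stands in for the measured complex PSF.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
obj = np.zeros((N, N))
obj[16:48:8, 16:48:8] = 1.0        # sparse "fluorescent beads"

# Stand-in aberrated PSF (a Gaussian blur; SIDH measures the true complex
# PSF with a guide star, which this toy scalar PSF merely emulates).
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 1.0**2))
psf /= psf.sum()

H = np.fft.fft2(np.fft.ifftshift(psf))          # optical transfer function
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * H))

# Deterministic deconvolution with the known PSF (Wiener-regularized inverse):
# conj(H) / (|H|^2 + eps) suppresses frequencies the aberration destroyed.
eps = 1e-6
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H)
                                / (np.abs(H)**2 + eps)))

print(np.abs(restored - obj).max() < np.abs(blurred - obj).max())  # True
```

The regularization constant eps plays the role that noise statistics play in a full Wiener filter: a plain inverse 1/H would amplify frequencies where the aberrated transfer function is near zero.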