Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176501 (2021) https://doi.org/10.1117/12.2597746
This PDF file contains the front matter associated with SPIE Proceedings Volume 11765, including the Title Page, Copyright information, and Table of Contents.
Display Engine Architectures for AR, VR, and Smart Glasses
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176502 (2021) https://doi.org/10.1117/12.2576490
Today, display performance is a major development criterion in the quest to deliver consumer-ready, high-quality XR glasses. Compared with other display technologies, laser beam scanners are among the most promising high-dynamic-range RGB display engine architectures, in part because the size of these devices remains unchanged as display resolution and field of view increase. This is in sharp contrast to competing display engines in which each pixel constitutes an individual component, so that those technologies eventually approach their physical limits. On the other hand, manufacturing state-of-the-art laser beam scanners, including their optics, is especially labor intensive and exhibits low yield, driving up the price of XR glasses. This paper addresses the potential benefits and pitfalls of using laser beam scanners in XR and gives insight into new solutions in next-generation laser beam scanning devices, such as replacing cumbersome hardware beam combination with software-only solutions.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176503 (2021) https://doi.org/10.1117/12.2579695
We present a high-resolution AR microdisplay based on laser beam scanning (LBS) that employs a two-dimensional, resonantly operated, vacuum-packaged MEMS mirror with a large mirror diameter, high scan frequencies, a high Q-factor, and a large field of view (FoV). The image is projected to the retina through a diffractive waveguide, yielding a comfortably large eyebox. Advanced control algorithms and image processing methods are implemented to accurately drive, sense, and control the biaxial resonant MEMS mirror and to optimize image projection quality. Because the mirror diameter is sufficiently large, this microdisplay needs no beam expansion optics between the MEMS mirror and the waveguide, enabling an ultra-compact projection unit. Operating the MEMS mirror resonantly in both axes and exploiting the significant advantage of a hermetic vacuum package effectively reduces energy loss through damping and thus minimizes drive voltage and power consumption. The display setup demonstrates the successful realization of a small-form-factor, high-resolution micro projector that meets key requirements for fashionable AR smart glasses.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176504 (2021) https://doi.org/10.1117/12.2576704
Laser beam scanners (LBS) are an emerging micro-display technology for augmented reality (AR) head-mounted displays (HMD), enabling small-form-factor, low-power display units with a large field of view (FOV) and daylight-bright luminance that are compatible with a broad range of optical combiner technologies such as waveguide or holographic combiners. We have developed an ultra-compact and lightweight LBS comprising an integrated laser module, a single 2D micro-electro-mechanical systems (MEMS) mirror, and a molded interconnect device (MID). The compact integrated laser module contains red, green, and blue (RGB) semiconductor laser diodes (LDs) and a common system of microlenses for beam collimation, all enclosed in a single hermetically sealed package. The three LDs are mounted onto a single submount using a novel high-precision laser die bonding technique. This high-precision LD placement allows the use of collimation lenses that collimate all three laser beams simultaneously, in contrast to separate lenses with additional active alignment steps for each color. No additional optical components such as mirrors and dichroic beam combiners are required; instead, the color channels are overlapped on a pixel-by-pixel basis by a "software beam combination" laser pulse timing algorithm. Both the laser module and the MEMS mirror are assembled on an MID with a printed circuit board (PCB), which is connected to a driver board including a video interface. We also give an outlook on future generations of fully mass-manufacturable LBS systems with even smaller form factors.
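As a rough illustration of how such a software beam combination might work, the sketch below pulses each color with a time offset derived from its fixed angular offset and the mirror's instantaneous angular velocity, so that all three beams address the same pixel. All scan parameters and offsets are hypothetical values of ours, not figures from the paper.

```python
import numpy as np

# Illustrative sketch of "software beam combination": instead of optically
# overlapping the R, G, and B beams, each laser is pulsed with a time offset
# so that all three colors land on the same pixel as the mirror sweeps.

SCAN_FREQ_HZ = 30_000           # fast-axis resonant frequency (assumed)
THETA_MAX_RAD = np.deg2rad(10)  # mechanical half-angle of the fast axis (assumed)

# Fixed angular offsets of the G and B beams relative to R, set by the
# die-bond geometry of the three laser diodes on the shared submount (assumed).
beam_offsets_rad = {"R": 0.0, "G": np.deg2rad(0.05), "B": np.deg2rad(0.10)}

def angular_velocity(t):
    """Instantaneous angular velocity (rad/s) of a sinusoidal resonant mirror."""
    return THETA_MAX_RAD * 2 * np.pi * SCAN_FREQ_HZ * np.cos(2 * np.pi * SCAN_FREQ_HZ * t)

def pulse_delays(t):
    """Per-color firing delays so all beams hit the pixel the R beam hits at t."""
    omega = angular_velocity(t)
    return {color: offset / omega for color, offset in beam_offsets_rad.items()}

# Delays are tens of nanoseconds near screen center and grow toward the
# turnaround points, where the mirror's angular velocity approaches zero.
print(pulse_delays(t=1e-6))
```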
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176505 (2021) https://doi.org/10.1117/12.2584168
For conventional projection displays, the sizes of the microdisplay and imaging lens aperture stop typically limit the optical throughput or étendue of the system. However, for AR/MR projection systems using diffractive pupil-replicating waveguide combiners, we examine how re-interaction with the in-coupler grating at one FOV extreme and pupil replication sparsity at the other FOV extreme severely restrict the optical throughput. Diffractive waveguides are the most commonly used AR/MR combiner technology due to their low cost, attractive form factor, and large achievable FOV. Nevertheless, we show that current and anticipated diffractive combiners only support étendues up to approximately 6.2 mm²sr, equivalent to no larger than a 0.34” display operating at f/2. At this display diagonal, maintaining 60 PPD for retinal resolution over a 50°-diagonal FOV requires a 3-μm pixel pitch. Future combiners with larger FOVs and higher resolutions will require even smaller pixels. Additionally, we find that optical engine volume varies quadratically with pixel pitch. Finally, we illustrate how the push for small pixels will intensify as the dominant array-based microdisplay technology used in AR/MR devices transitions from LCoS to inorganic microLED (micro-iLED). The emissive nature of micro-iLED displays offers exceptional potential power savings for sparse AR/MR content. Nonetheless, the broad angular emission and spatially multiplexed color subpixels of many proposed micro-iLED architectures further strain the bandwidth of the limited waveguide throughput. We demonstrate how CP Display has anticipated the requirement of 3-μm and smaller pixels in architecting its IntelliPix™ display platform.
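The headline numbers can be sanity-checked with a short calculation. The sketch below assumes a 4:3 panel and models the f/2 acceptance as a simple cone; it is our back-of-envelope arithmetic, not the paper's derivation.

```python
import math

diag_in = 0.34
diag_mm = diag_in * 25.4                    # ~8.64 mm display diagonal
w, h = diag_mm * 0.8, diag_mm * 0.6         # 4:3 aspect ratio (assumed)
area_mm2 = w * h                            # ~35.8 mm^2

f_number = 2.0
half_angle = math.atan(1 / (2 * f_number))  # ~14 deg cone half-angle at f/2
solid_angle_sr = 2 * math.pi * (1 - math.cos(half_angle))

etendue = area_mm2 * solid_angle_sr
print(f"etendue ~ {etendue:.1f} mm^2 sr")   # ~6.7, near the quoted ~6.2 mm^2 sr

# Pixel pitch for 60 PPD over a 50-deg diagonal FOV mapped onto that panel:
pixels_diag = 60 * 50                       # 3000 pixels along the diagonal
pitch_um = diag_mm * 1000 / pixels_diag
print(f"pixel pitch ~ {pitch_um:.1f} um")   # ~2.9, matching the ~3 um claim
```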
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176506 (2021) https://doi.org/10.1117/12.2585214
Reducing the size of virtual reality head-mounted displays is of major interest for improving user comfort, but it is a particularly complex design problem because of the very large field of view needed for immersion. High compactness with high transmission efficiency and high contrast can be achieved with multichannel optics. At LIMBAK, these optics are designed for high performance through intensive use of freeform optical surfaces, increased resolution via variable magnification, dynamic mapping control, and super-sampling via pixel interlacing. This presentation covers the growing variety of geometries, how to address their challenges, and a vision of their future.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176507 (2021) https://doi.org/10.1117/12.2577856
We propose a new display, named the scanning waveguide display, that achieves a large eyebox and a large FOV simultaneously. The display uses an off-axis lens array with an extremely low f/# as the out-coupler. The lens array is fabricated by cholesteric liquid crystal polarization holography. We demonstrate a diagonal FOV of 100°, which far exceeds the theoretical limit of waveguide displays.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176508 (2021) https://doi.org/10.1117/12.2576702
Augmented reality (AR) displays have been a hot topic for many years, as they offer the potential for a high return on investment. With this high potential come many technical challenges that must be addressed before AR displays and smart glasses gain broader acceptance in the marketplace. One such challenge is the optical design of compact, lightweight optics capable of comfortably projecting an augmented image onto the user's line of sight. Major advances are being made in waveguide technology to produce a large FOV and eyebox. Equally, light engines are being developed to be less bulky and more efficient. In this paper we present an insight into how a next-generation laser beam scanner (LBS) developed by TriLite Technologies can be integrated with different combiners and implemented in different AR display and smart glasses architectures. The unique design of the LBS lends itself to different configurations as dictated by the various designs and layouts of waveguides and combiners. In addition, the extremely low profile of the next-generation LBS makes the glasses look, quite literally, smart.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176509 (2021) https://doi.org/10.1117/12.2580023
Diffractive gratings are among the most promising near-eye display combiner designs, but it has been challenging to satisfy the essential requirements without sacrificing output efficiency or restricting the direction of incidence, owing to the low diffraction efficiency of higher orders at normal incidence. Here, we propose dielectric metagratings that deflect light to larger angles with high efficiency within a field of view of 54 degrees. Through the proposed model of the eye-imaging system, we present optimal metagrating designs for diffractive total-internal-reflection combiners and diffractive exit-pupil expanders.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650A (2021) https://doi.org/10.1117/12.2584691
There are multiple challenges in realizing waveguide-based surface relief gratings (SRGs) for combiners in augmented reality (AR) applications: fabricability, efficiency, and diffraction uniformity are among the most important. Interdigital develops SRGs using Edge Waves (EW) to design highly efficient gratings with high angular robustness. An EW is generated by a diffraction phenomenon appearing at the interface between two dielectric media; its direction of propagation is controlled by the index ratio between the two media and the direction of the incident plane wave. By combining different edges, we optimize the elementary geometry, i.e., the building block of an SRG, to diffract into the direction defined by the grating equation, optimizing the power transfer of the incident light into the direction of interest. Our approach enables symmetrical structures with low aspect ratio, optimized for coupling very efficiently into the first- or second-order modes; the latter leads to over-wavelength pitch sizes. Moreover, our SRGs are designed to angularly tile the exit pupil of the light engine without losses, making our structures suited to any sort of light engine. Based on this unique design concept, we present in-couplers using two waveguides with a field of view of 130 degrees and RGB operation, and a one-waveguide system with a 90-degree field of view and RGB operation, both on wafers with a refractive index of about 1.7. We believe this will pave the way to new DOE combiners for future AR glasses.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650B (2021) https://doi.org/10.1117/12.2586205
We report on a novel diffractive-optical-element (DOE) based waveguide architecture for augmented reality (AR) displays with increased field of view, along with a method for the analytical design of such an architecture. The effectiveness of the architecture results from multiple use of the same propagation directions inside the waveguide by different parts of the field of view. Unlike previous solutions, where such an approach would generate crosstalk, in the proposed architecture the different field-of-view parts propagate in different waveguide locations, separated by the corresponding DOEs. The architecture can be applied to increase either the vertical or the horizontal field-of-view size while compensating the chromatic dispersion resulting from diffraction. The architecture configuration, analytical derivations of the DOE parameters, and modeling results are discussed. The architecture satisfies market demands for form factor, size, and weight, and allows up to a fourfold increase in field-of-view size compared with conventional solutions. For a DOE refractive index of 1.5, the architecture provides a 48×44-degree white-light field of view with two waveguides and a 56×56-degree white-light field of view with three waveguides. For a DOE refractive index of 1.9, it provides a 58×58-degree white-light field of view with only one waveguide.
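For context, the conventional single-waveguide FOV limit that such architectures aim to beat can be estimated from total internal reflection alone. The sketch below assumes guided rays must lie between the TIR critical angle and an assumed 75° grazing limit; the window and the Snell mapping are our simplifications, not the paper's analysis.

```python
import math

GRAZING_DEG = 75.0  # practical upper propagation angle inside the glass (assumed)

def conventional_fov_deg(n):
    """Rough diagonal-FOV limit of a conventional single waveguide of index n."""
    theta_c = math.degrees(math.asin(1.0 / n))    # TIR critical angle
    window = math.radians(GRAZING_DEG - theta_c)  # usable in-glass angular span
    # A symmetric in-air FOV of half-angle h maps (via Snell's law, ignoring
    # the constant grating shift) to an in-glass span of 2*asin(sin(h)/n),
    # so the supported in-air half-angle is asin(n*sin(window/2)).
    return 2 * math.degrees(math.asin(n * math.sin(window / 2)))

for n in (1.5, 1.7, 1.9):
    print(f"n = {n}: ~{conventional_fov_deg(n):.0f} deg full FOV")
# ~51 deg at n=1.5 and ~89 deg at n=1.9 (diagonal-FOV scale), illustrating
# why higher-index DOEs and direction reuse both expand the field of view.
```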
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650C (2021) https://doi.org/10.1117/12.2577298
Substrate Conformal Imprint Lithography (SCIL) overcomes the limitations of PDMS soft-stamp NIL techniques (resolution, pattern deformation, overlay) and allows low-pressure, wafer-scale conformal contact and sub-10 nm resolution using a novel silicone rubber stamp.
SCIL has demonstrated direct replication of sub-50 nm patterns in silica over 200 mm wafers with stamp lifetimes of over 500 imprints; for AR applications, NIL resists with an index of up to n=1.96 and overcoat layers of up to n=2.1 are available. Replication of slanted grating patterns in multiple orientations over the wafer is possible. First results of full 300 mm wafer imprints will be shared.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650F (2021) https://doi.org/10.1117/12.2583145
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650G (2021) https://doi.org/10.1117/12.2584260
The coming of age of AR, VR, and MR applications and usage scenarios relies on the development of ever-improving advanced light-management systems, for both sensing (camera) and actuation (display): for example, solid-state dToF or FMCW scanning or flash LiDAR, polarimetric imaging or resettable structured-light illumination for 3D mapping, directional imagers for light-field registration, and plasmonic or dielectric color filters and directors for efficient acquisition of spectroscopic information. Indeed, optics remains the dominant user-interface modality, and large portions of the required information can be retrieved in the optical domain. These systems rely on the emergence of mature, mass-manufacturing integrated photonics platforms in the near-infrared and visible wavelength ranges.
This presentation introduces developments at imec of diffractive components for reflective, transmissive, and guided applications on opaque (Si/CMOS) and transparent (quartz) substrates. These rely on sub-wavelength nano-patterning techniques (from DUV dry and wet (immersion) lithography through 200 mm wafer-scale e-beam, nano-imprint lithography, and block-co-polymer patterning to EUV), a novel CMOS-compatible material toolbox beyond Si and SiN (passive, active, resettable, and tunable), and high-aspect-ratio re-filling to enable stacking of optical features into complex functional systems.
In particular, we report on pixel-integrated Fresnel phase plates for local eQE optimization, process-complexity trade-offs enabled by optical metamaterials, aspherical and non-cylindrical optical components for directed light, tunable structured-light scanners, plasmonic and dielectric color filters and directors, an optical beamformer in the near infrared, a sub-wavelength spatial light modulator in the visible, and, finally, novel developments for 2D optical waveguides.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650H (2021) https://doi.org/10.1117/12.2579505
Augmented reality glasses based on waveguides with diffraction gratings are the technology of choice for many device makers. They have evolved to provide excellent picture quality and a large field of view to users. The field of view is a key criterion for such waveguides, however, and increasing it further requires increasing the refractive index of the materials used. Current manufacturing methods mostly use nanoimprinted permanent polymers loaded with inorganic high-refractive-index nanoparticles. Commercial materials can already achieve a refractive index of n=1.9, but refractive indices of n=2.0 and above appear difficult to reach. Glass substrates and coatings, on the other hand, are already available with refractive indices of n=2.0 and higher and could thus be structured directly to form the needed diffraction gratings. In this case a pattern transfer by etching is required, which should enable binary as well as slanted grating designs. In this work, nanoimprint lithography patterning is investigated in combination with subsequent etching processes to achieve binary or slanted nanogratings in high-refractive-index TiO2 and glasses.
Peter C. Guschl, Grace McClintock, Selina Monickam, Robert Wiacek, Z. Serpil Gonen Williams
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650I (2021) https://doi.org/10.1117/12.2584175
Among the most critical technologies required in XR devices are the optical waveguides applied to the lenses. These waveguides take the image projected into one side of a lens and transmit it to the other side of the lens, delivering a clear image to the viewer. A key factor in a clear and immersive XR visual experience is a field of view (FoV) that is as wide as possible, and high-refractive-index formulations are key to increasing the FoV. Here we present characterization data for PixNIL™ nanoimprintable formulations that incorporate PixClear® high-refractive-index products. Films made with Pixelligent's PixClear® TiO2 nanocrystals, with a mean particle diameter of 20 nm and dispersed in an acrylate-based binder, demonstrate refractive index values as high as 1.96 at 589 nm. In addition, these films maintain high transparency across the visible spectrum (400-700 nm) and exhibit both low haze and low absorbance. The PixNIL™ formulations can be applied using nanoimprint lithography (NIL) to create the optical structures that are key to enabling the widest-FoV waveguides.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650J (2021) https://doi.org/10.1117/12.2579235
Bayfol® HX photopolymer films have proven themselves as easy-to-process recording materials for volume holographic optical elements (vHOEs) and are available in customized grades at industrial scale. Their full-color (RGB) recording and replay capabilities are two of their major advantages. Moreover, the adjustable diffraction efficiency and tunable angular and spectral selectivity of vHOEs recorded in Bayfol® HX, as well as their unmatched optical clarity, enable superior, invisible "off-Bragg" optical functionality. As a film product, vHOEs in Bayfol® HX can be replicated in a highly cost-efficient, purely photonic roll-to-roll (R2R) process. Using thermoplastic substrates, Bayfol® HX has been demonstrated to be compatible with state-of-the-art plastic processing techniques such as thermoforming, film insert molding, and casting, all enabled by a variety of industry-proven integration technologies for vHOEs. Bayfol® HX has therefore made its way into augmented reality applications such as head-up displays (HUD) and head-mounted displays (HMD), free-space combiners, plastic optical waveguides, and transparent screens. vHOEs made from Bayfol® HX are also used in highly sophisticated spectrometers for astronomy and in narrow-band notch filters for eyeglasses protecting against laser strikes. Based on a well-established toolbox, Bayfol® HX can be adapted to a variety of applications. To open access to more applications in sensing and to continuously improve performance in existing applications, we recently extended our chemical toolbox to address sensitization beyond RGB into the near-infrared region (NIR) and to increase the achievable index modulation Δn1 beyond 0.06. In this paper, we report on our latest developments in these fields.
Stephan Prinz, Markus Brehm, Isabel Pilottek, Patrick Heissler
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650K (2021) https://doi.org/10.1117/12.2579867
UV- and heat-curing polymers are essential building blocks of state-of-the-art electronics assembly. Optical light guides benefit from their high refractive index, optical stability, and form stability in high-volume nano-imprinting. The smallest sensor assemblies and reflow-compatible packages for 3D sensing can be realized with polymers having tuned thermo-mechanical and optical properties. Conductive polymers are enabling mass manufacturing of small-form-factor MicroLED displays. Finally, the latest adhesive technology enables innovative design possibilities for appealing augmented and virtual reality headset form factors by bonding together all the individual building blocks.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650L (2021) https://doi.org/10.1117/12.2584237
Waveguide technology is widely believed to be the most promising approach to realizing affordable and fully immersive augmented reality (AR) / mixed reality (MR) devices. For all major technology platforms (diffractive, reflective, or holographic), specialty-grade high-index optical glass is the central component for achieving some of the key features of AR devices, such as field of view, MTF, and weight. We provide insights into SCHOTT's roadmap for dedicated glass development for the AR sector and discuss the latest achievements of high relevance for the industry. Producing an optical glass that enables the entry of AR devices into the consumer market is a game of trade-offs between the desired properties.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650M (2021) https://doi.org/10.1117/12.2577283
Desirable fields of view, angular resolutions, and form factors of near-to-eye AR/VR/MR displays require order-of-magnitude increases in the pixel count and pixel density of spatial light modulators (SLM). We present an in-plane angular-spatial light modulation technique that increases the independent output display pixels of a DMD by three orders of magnitude, achieving gigapixel output from a sub-megapixel device. Pulsed illumination synchronized to the DMD's micromirror actuation realizes pixel-implemented, diffraction-based angular modulation, and fine source-array control increases angular selectivity. The gigapixel output is demonstrated in a 1440-perspective display, each perspective having the DMD's full native XGA resolution, across a 43.9°×1.8° FOV. 8-bit multi-perspective videos at 30 FPS are demonstrated, and pixel-implemented multi-focal-plane image generation is realized. Implications for near-to-eye displays are discussed.
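The gigapixel claim follows directly from the stated numbers; a quick check (our arithmetic):

```python
# XGA is 1024 x 768, i.e. a sub-megapixel native DMD resolution.
xga = 1024 * 768                  # ~0.79 Mpx per perspective
perspectives = 1440
total = xga * perspectives
print(f"{total / 1e9:.2f} Gpx")   # ~1.13 Gpx of independent output pixels,
                                  # roughly three orders of magnitude above XGA
```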
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650N (2021) https://doi.org/10.1117/12.2577526
The key aspect of AR/VR is immersivity, which occurs when all the senses are engaged. When designing a near-eye display to serve the most important sensory system, the human visual system, the challenge is to obtain both high imaging quality and compactness. Conventional optical designs cannot resolve the mutually contradictory requirements of modern AR/VR systems: achieving low weight, small footprint, and low cost while simultaneously providing higher resolution and reduced optical aberrations. Real-time eye-tracking measurements can be used to modify the near-eye display's visual data and to augment optical system performance, reducing the distortions caused by the physical constraints of AR/VR systems. In this paper, we describe typical AR/VR optical system deficiencies and present methods to overcome them with the help of eye simulation and eye tracking. These methods provide higher effective image resolution with reduced optical aberrations, resulting in improved image quality and a more immersive user experience.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650O (2021) https://doi.org/10.1117/12.2583498
Decreasing hand tremor is crucial for sensitive micromanipulation during micro-surgery. Virtual reality (VR) technology plays an important role in many biomedical applications, enabling subjects to gain valuable experience in high-accuracy tasks. This study proposes a VR-based system consisting of a handheld gripper combined with a long short-term memory (LSTM) architecture. The system shows an image of forceps in a virtual space and uses the LSTM model to precisely track the tool's position, applying the LSTM as sensor fusion between a VR controller and an inertial measurement unit. The study also compares the LSTM model with similar models, such as gated recurrent units (GRU), and with raw VR-controller data. The models were trained using a linear motor attached to a stage as reference data, with training data covering different velocities and accelerations provided by the linear motor control. Experimental results indicate that the LSTM provides better precision in both stationary and dynamic scenarios.
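A minimal sketch of the described fusion model is given below, assuming a two-layer LSTM over windows of concatenated controller and IMU samples regressed against the linear-motor reference; the layer sizes, window length, and feature layout are our assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

class FusionLSTM(nn.Module):
    """LSTM that fuses VR-controller positions with IMU readings to regress
    a refined tool position (a sketch, not the paper's exact architecture)."""
    def __init__(self, in_dim=9, hidden=64, out_dim=3):
        super().__init__()
        # in_dim = 3 (controller xyz) + 3 (accelerometer) + 3 (gyroscope)
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)  # refined xyz position

    def forward(self, seq):                     # seq: (batch, time, in_dim)
        feats, _ = self.lstm(seq)
        return self.head(feats[:, -1])          # predict from the last step

model = FusionLSTM()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step against linear-motor ground truth (synthetic stand-ins):
seq = torch.randn(32, 50, 9)    # 32 windows of 50 fused sensor samples
target = torch.randn(32, 3)     # motor-stage reference positions
optim.zero_grad()
loss = loss_fn(model(seq), target)
loss.backward()
optim.step()
```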
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650P (2021) https://doi.org/10.1117/12.2576809
Integral-imaging (InI) based light field displays offer a great opportunity to achieve true 3D scene rendering with the correct focus cues required to mitigate the well-known vergence-accommodation conflict. However, one of the main challenges still to be solved is the tradeoff between spatial resolution and depth resolution. Improving the depth resolution requires increasing the number of distinct views (the view number) used to render a 3D scene, while increasing the view number often comes at the cost of the scene's spatial resolution. In this paper, we describe the design of a time-multiplexed InI-based light field display that can potentially increase the spatial resolution while maintaining the view number and thus the depth resolution. By incorporating a high-speed programmable switchable shutter array and synchronizing different elemental image sets rendered on the display with the shutter array in a time-multiplexing fashion, the proposed method can rapidly render a 3D scene from slightly different viewing perspectives. Consequently, the scheme can improve the spatial resolution without compromising the view density or eyebox size. With appropriate parameters, the proposed method can render as many as 4-by-4 views within a 6 mm eyebox while providing an angular resolution of about 1.27 arcmin.
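A quick check of the view-sampling geometry implied by these numbers (our arithmetic; the assumed pupil diameter is not from the paper):

```python
eyebox_mm = 6.0
views_per_dim = 4
view_pitch_mm = eyebox_mm / views_per_dim   # 1.5 mm between adjacent views
pupil_mm = 3.0                              # typical bright-light pupil (assumed)
views_in_pupil = pupil_mm / view_pitch_mm   # ~2 views span the pupil, which is
print(view_pitch_mm, views_in_pupil)        # what lets InI drive focus cues
```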
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650Q (2021) https://doi.org/10.1117/12.2580210
Depth perception is an important building block of the user experience in AR/VR/MR applications. It is based on visual cues such as blurring, sharpness, relative size, non-uniform level of detail, and more; the human brain uses these visual cues to create a 3D percept. Displaying enhanced 3D cues requires high resolution while maintaining a large field of view (FOV), yet current display technologies fall short of these requirements, and current solutions require sophisticated optics and substantial additional hardware. Previous work by the authors presented a novel method of producing a foveated image using a laser-based scanning (LBS) display, enabling a localized high-resolution, large-FOV solution for enhanced 3D cues. This work goes further and presents an implementation that enables multifocal image projection combined with a foveated display. Multiple focal planes provide natural blurring and sharpness cues. The solution comprises multi-degree-of-freedom scanning MEMS mirrors, a laser projector, and an algorithm enabling foveated projection and changeable focal planes. Because the focal planes can be changed from frame to frame, the combination of the created image with foveated pixelization generates a more natural display. This allows enhanced depth cues to be visualized, which significantly improves 3D perception and the 3D user experience. Moreover, this setup enables a reduced form factor. The authors believe that the presented solution will enable a state-of-the-art user experience in a form factor acceptable for commercialization into daily-use AR/VR devices.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650R (2021) https://doi.org/10.1117/12.2584044
Holographic displays have recently shown remarkable progress as a research field. However, images reconstructed by existing display systems using phase-only spatial light modulators (SLMs) exhibit noticeable speckle and low contrast due to non-trivial diffraction efficiency loss. In this work, we investigate a novel holographic display architecture that uses two phase-only SLMs to enable high-quality, contrast-enhanced display experiences. Our system builds on emerging camera-in-the-loop optimization techniques that capture both the diffracted and undiffracted light on the image plane with a camera and use this to update the hologram patterns on the SLMs in an iterative fashion. Our experimental results demonstrate that the proposed display architecture can deliver high-contrast holographic images with little speckle, without the need for extra optical filtering.
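The sketch below illustrates the camera-in-the-loop idea in miniature for two stacked phase SLMs: the loss is evaluated on the captured image while gradients flow through a differentiable proxy model. The proxy propagation, the camera stub, and the update rule are illustrative assumptions of ours, not the authors' calibrated forward model.

```python
import torch

H, W = 256, 256
phase1 = torch.zeros(H, W, requires_grad=True)   # SLM 1 phase pattern
phase2 = torch.zeros(H, W, requires_grad=True)   # SLM 2 phase pattern
target = torch.rand(H, W)                        # desired image intensity
opt = torch.optim.Adam([phase1, phase2], lr=0.05)

def propagate(p1, p2):
    """Differentiable proxy: cascade the two phase SLMs, then take a
    far-field transform; a stand-in for the true optical model."""
    field = torch.exp(1j * p1) * torch.exp(1j * p2)
    return torch.fft.fft2(field).abs() ** 2

def camera_capture(p1, p2):
    """Stub for the real camera; in hardware this would photograph the image
    plane, including undiffracted light the proxy cannot predict."""
    with torch.no_grad():
        return propagate(p1, p2)                 # placeholder measurement

for it in range(200):
    sim = propagate(phase1, phase2)
    captured = camera_capture(phase1, phase2)
    # Camera-in-the-loop trick: score the *captured* image but route the
    # gradient through the simulated one (straight-through substitution).
    loss = ((captured + (sim - sim.detach())) - target).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```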
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650S (2021) https://doi.org/10.1117/12.2584091
State-of-the-art virtual and augmented reality (VR/AR) hardware fails to deliver a satisfying visual experience due to missing or conflicting focus cues. The absence of natural focal depth in digital 3D imagery causes the so-called vergence-accommodation conflict and focal rivalry, and possibly damages eyesight, especially during prolonged viewing of virtual objects within arm's reach. This remains one of the most challenging and market-blocking problems in the VR/AR arena today. This talk introduces CREAL's unique near-to-eye light-field projection system, which provides high-resolution 3D imagery with fully natural focus cues. The system operates without eye tracking and without severe penalties in image quality, rendering load, power consumption, data bandwidth, form factor, production cost, or complexity.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650T (2021) https://doi.org/10.1117/12.2591934
Inconsistency between binocular and focus cues in stereoscopic augmented reality overburdens the visual system, leading to visual stress. However, high individual variability in tolerance for visual stress makes it difficult to predict and generalize the user gain associated with implementing alternative visualization technologies. In this study, we investigated the relationship between binocular function and perceptual judgments in augmented reality. We assessed the task completion time and accuracy of perceptual distance matching depending on the consistency of binocular and focus cues in a stereoscopic augmented reality environment. The head-mounted display was driven in two modes, multifocal and monofocal, providing consistent-cues and inconsistent-cues conditions, respectively. Participants matched the distance of a real object with images displayed at three viewing distances (concordant with the distances of the display focal planes in the consistent-cues condition). A thorough vision screening was performed before the experiment. In the inconsistent-cues condition, individuals with low convergent fusional reserves and a receded near point of convergence misjudged distances to a greater extent than others. In contrast, in the consistent-cues condition, perceptual judgments were fast and less overestimated, and no significant effect of binocular function was revealed. We suggest that binocular function measures characterizing individual tolerance for visual stress might be used as predictors of user gain in the comparative assessment of new visualization technologies for the augmentation of reality.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650U (2021) https://doi.org/10.1117/12.2584208
Reliable world locking in augmented and mixed reality (AR/MR) devices is important for achieving immersion and critical for technical applications that rely on stable anchoring of virtual objects in the real world. To achieve this, a head-mounted display (HMD) must maintain accurate knowledge of its real-world position and orientation. We describe a method for measuring the six-degrees-of-freedom (DOF) positioning accuracy of an HMD, and how the same method can be extended to quantify the accuracy of anchoring virtual objects in the real world, i.e., world locking. An HMD is placed on a 6-DOF test jig comprising a motion system with high precision encoders and a time-synchronized imaging system. The HMD is made to display a 3D grid of unique identification markers that are detected by a machine vision camera in real time, while the robot is moving. This allows us to track the position and pose of the virtual camera and compare that with the known HMD position and pose. Using similar methodology, we can display virtual objects with a suitable number of unique identification markers. Their virtual object position and pose can then be compared to the real HMD position, thereby quantifying the accuracy of world locking. For improved accuracy, markers can also be printed out and pasted onto real-world objects. Other temporal parameters can also be computed, including motion-to-photon latency, spatial jitter, pose drift and prediction over/undershoot. The obtained results can be used to improve or recalibrate the positioning software and hardware of the head-mounted device.
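At the core of this method is a pose comparison between the virtual camera recovered from the marker grid and the encoder ground truth. A minimal sketch of such a comparison, with error metrics and variable names of our choosing rather than the paper's definitions:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_error(t_ref, q_ref, t_meas, q_meas):
    """Translation error (mm) and rotation error (deg) between a reference
    pose (motion-system encoders) and a measured pose (marker tracking).
    Quaternions are in (x, y, z, w) order."""
    trans_err_mm = np.linalg.norm(np.asarray(t_meas) - np.asarray(t_ref))
    # Relative rotation mapping the reference orientation onto the measured:
    r_rel = R.from_quat(q_meas) * R.from_quat(q_ref).inv()
    rot_err_deg = np.degrees(r_rel.magnitude())
    return trans_err_mm, rot_err_deg

# Example: encoder pose vs. pose recovered by the machine-vision camera.
print(pose_error(t_ref=[0.0, 0.0, 500.0], q_ref=[0, 0, 0, 1],
                 t_meas=[0.4, -0.2, 500.9], q_meas=[0, 0.001, 0, 1.0]))
```

The same comparison applies to world locking: the pose of a marker-tagged virtual object is recovered by the camera and compared against where a perfectly anchored object should appear given the known HMD pose.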
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650V (2021) https://doi.org/10.1117/12.2584144
In this paper, we present short-wave infrared (SWIR) image sensors with high pixel density. A quantum dot (QD) photodiode stack is monolithically integrated on a custom 130 nm node CMOS readout circuit. A state-of-the-art pixel pitch of 1.82 μm is demonstrated in focal plane arrays sensitive in the eye-safe region above 1400 nm. Thin-film photodiode (TFPD) technology will facilitate the realization of ultra-compact SWIR sensors for future XR applications, including eye-safe tracking systems and enhanced vision.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650W (2021) https://doi.org/10.1117/12.2578144
It is foreseen that the most convenient hardware for depicting augmented reality (AR) will be optical see-through head-mounted displays. Current systems utilize a single focal plane, inflicting the vergence-accommodation conflict on the human visual system and thereby limiting wide acceptance. In this work, we analyze an optical see-through AR head-mounted display prototype with four focal planes operating in time-sequential mode, mitigating this limitation of single-focal-plane devices. Nevertheless, the optical see-through nature implies a requirement for very short motion-to-photon latency so as not to cause noticeable misalignment between the digital content and the real-world scene. The prototype display relies on a commercial visual-SLAM spatial tracking module (Intel RealSense T265), and in this work we analyzed the factors improving motion-to-photon latency with the given hardware setup. Performance analysis of the T265 module revealed slight translational and angular jitter, on the order of <1 mm and <15 arcseconds, and a velocity readout of a few cm/s from a completely still IMU. The experimentally determined motion-to-photon latency and render-to-photon latency were 46±6 ms and 38 ms, respectively. To overcome the IMU positional jitter, pose averaging with a variable-width averaging window was implemented; the window size was adjusted based on the immediate acceleration and velocity data. For pose prediction, a basic rotational-axis offset model was verified: based on prerecorded head movements, a training model reduced the error between the predicted and the actual recorded pose. The optimization parameters were the offset values of the IMU's rotational axis, the translational and angular velocities, and the angular acceleration. As expected, the highest weight for the most accurate predictions was observed for the velocities, followed by the angular acceleration; the role of the offset values was not significant. To further improve the perceived experience and reduce motion-to-photon latency, we consider investigating simple trained neural networks for more accurate real-time pose prediction, as well as content-driven adaptive image output that overrides the default order of image-plane output in the time-sequential sequence.
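A minimal sketch of the variable-width pose averaging described above, with thresholds and window sizes that are illustrative assumptions of ours:

```python
import numpy as np

MAX_WIN, MIN_WIN = 12, 1
SPEED_FULL_SMOOTH = 0.02   # m/s below which we smooth maximally (assumed)
SPEED_NO_SMOOTH = 0.5      # m/s above which poses pass through (assumed)

def window_size(speed):
    """Map instantaneous head speed to an averaging-window width: wide when
    nearly still (suppress IMU jitter), narrow when moving (limit latency)."""
    a = np.clip((speed - SPEED_FULL_SMOOTH) /
                (SPEED_NO_SMOOTH - SPEED_FULL_SMOOTH), 0.0, 1.0)
    return int(round(MAX_WIN + a * (MIN_WIN - MAX_WIN)))

def smooth_position(history, speed):
    """Average the most recent window_size(speed) position samples."""
    w = window_size(speed)
    return np.mean(history[-w:], axis=0)

# Example: jittery samples around a still pose get averaged over 12 frames.
hist = np.array([[0.0, 1.6, 0.0]]) + 0.001 * np.random.randn(20, 3)
print(smooth_position(hist, speed=0.01))
```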
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650X (2021) https://doi.org/10.1117/12.2581302
With the development of science, technology, and industrial production, three-dimensional shape measurement has become increasingly widespread in machine vision, industrial monitoring, mechanical engineering, and medical testing. Optical three-dimensional shape measurement obtains the three-dimensional information of the measured object using a series of optical methods, and structured-light measurement is one of its most widely used forms. In structured-light measurement, the generation of fringe patterns is a crucial step. Here, we use a holographic method to generate a fringe pattern whose period and phase can be easily modulated. First, a black-and-white fringe pattern with a certain spatial frequency is generated according to the desired period and phase of the cosine structured light; the spatial frequency of the black-and-white fringe is determined by the spatial frequency of the structured light to be generated and the magnification of the projection system. A prism phase that causes a lateral shift is applied to the black part of the black-and-white stripes, so that light incident on the black areas is deviated off the optical axis into the first diffraction order, while light incident on the other parts travels unmodulated along the optical axis. In this way, the beam acquires bright and dark regions, and structured illumination is obtained. Theory and experiments verify the effectiveness of the method. This method of generating fringe patterns is simple, fast, accurate, and conveniently controlled, and can be readily applied to three-dimensional shape measurement.
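A minimal numerical sketch of this fringe construction, with illustrative grid, period, and ramp parameters rather than the paper's values:

```python
import numpy as np

H, W = 512, 512
period_px = 32                  # fringe period on the modulator (assumed)
x = np.arange(W)

# Binary fringe: 1 on "white" stripes, 0 on "black" stripes.
binary = ((x // (period_px // 2)) % 2).astype(float)
binary = np.tile(binary, (H, 1))

# Linear "prism" phase ramp that steers light away from the optical axis.
ramp_rad_per_px = 0.6           # slope sets the deflection angle (assumed)
prism = ramp_rad_per_px * np.tile(x, (H, 1))

# Phase hologram: flat phase on white stripes, prism phase on black stripes,
# so only the black-stripe light is deflected into the first order.
hologram_phase = np.where(binary == 0, prism, 0.0) % (2 * np.pi)

# After the deflected order is filtered out, the on-axis intensity becomes
# a bright/dark fringe whose period and phase follow the binary pattern.
```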
Tobias Steinel, Dmitrijs Opalevs, Ferdinand Deger, Roland Schanz, Martin Wolf
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 117650Y (2021) https://doi.org/10.1117/12.2584158
Light measurement devices (LMDs) are essential tools for developing novel display technologies and for quality assurance in display mass production. More complex displays, e.g. near-eye displays (NEDs), may require various LMDs in different development and production phases. Here we demonstrate the benefit and seamless use of complementary measurement principles and LMDs enabled by an absolute, traceable calibration concept. Goniometric display analysis with full angular and spectral resolution is complemented by spectrally enhanced, fast 2D imaging measurements, covering a wide range of display test applications from research and development to production. Experimental data from these measurement principles are compared, with emphasis on luminance and chromaticity measurements of the LCD and OLED displays of virtual reality (VR) headsets. A standard light source acts as the reference for these measurements. Absolute and traceable calibration is discussed with respect to accuracy and the complementary characterization of the displays under test (DUT).
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176510 (2021) https://doi.org/10.1117/12.2584163
The recently reported "Angular Spatial Light Modulator" (ASLM) light engine, which uses pulsed illumination synchronized to a digital micromirror device (DMD), shows significant promise for enhancing the pixel counts of near-to-eye displays (NED) without increasing package volume, but it requires an uncommon illumination driver. We present a field-effect-transistor-based constant-current driver that is fast, compact, and scalable to RGB illumination. Its digital-to-analog converter modulates intensity on the fly for illumination-based multiplexing. The driver outputs 100 ns pulses at repetition rates up to 24 kHz. The circuit is demonstrated with two laser diodes and with two LEDs in an ASLM-enhanced pixel-count display.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176511 (2021) https://doi.org/10.1117/12.2584222
To meet the demand for high-quality augmented reality displays with a larger field of view, a large eyebox, and better image quality, large-area diffraction gratings are needed. Across the industry, different types of surface relief gratings for in-coupling and out-coupling are used in waveguide designs to achieve optimum waveguide performance; typical gratings are slanted, blazed, binary, and multi-level. NIL Technology offers solutions for all of the above-mentioned grating types, meeting the market's demands for quality and size, in particular for output gratings.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176512 (2021) https://doi.org/10.1117/12.2584324
Augmented Reality (AR) is a new form of human-machine interaction that superimposes digital information on the real-world environment. AR technology has the ability to organize much of the digital information from X-ray CT imaging. This paper proposes a new system with which the user can project a 3D X-ray AR CT image on the screen of a device such as a smartphone or tablet. In the future, the system will be combined with pseudo-3D color display technology based on photon-counting X-ray CT imaging.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176513 (2021) https://doi.org/10.1117/12.2588768
People with visual impairments can rarely use augmented reality displays without prescription optics, which limits the use of AR devices. In this paper, we demonstrate a customized AR display design that takes the user's prescription into account and improves visual comfort in cases of myopia, hyperopia, astigmatism, and presbyopia. The AR display has a waveguide-based architecture with an embedded reflective combiner for virtual-image transfer. Both the waveguide substrate and the reflective combiner are designed with standard surface types (spherical/aspheric), which makes this design convenient for mass production and adoption. The proposed design has a 42.75° diagonal field of view and a thickness of less than 6 mm.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176514 (2021) https://doi.org/10.1117/12.2584300
Recently, glasses-free light field displays with multi-layer architectures have gradually entered the commercial stage. For near-eye displays, however, light field rendering still suffers from expensive computational costs and can hardly achieve an acceptable frame rate for real-time display. This work develops a novel light field display pipeline that uses two gaze maps to reconstruct display patterns with a foveated vision effect. With GPU acceleration and emerging eye-tracking techniques, the gaze cone can be updated instantaneously. The experimental results demonstrate that the proposed display pipeline can support near-correct retinal blur with foveated vision and a high frame rate at low computational cost.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II, 1176516 (2021) https://doi.org/10.1117/12.2583990
We previously developed an augmented reality (AR) 3D head-up display (HUD) system[1] for vehicles that can match a 3D arrow object with roads in the real world at distances from 3.7 m to 70 m. Current autostereoscopic[2] (glasses-free) 3D displays[3] suffer from the 3D crosstalk problem, in which optical phenomena such as light bleeding incompletely separate the stereo images[4]. As a result, accurate AR graphics do not reach both eyes, and the user does not perceive a 3D stereoscopic effect. Two existing methods reduce crosstalk as user-experience postprocessing, blurring the image or lowering the brightness, but both reduce image quality. In contrast, we solve the problem without reducing image quality or HUD brightness by covering the 3D crosstalk area with a newly generated image (a crosstalk concealer) that depends on the distance of the arrow object, the outdoor luminance, and the brightness of the HUD. The width of the crosstalk concealer is determined by the change in disparity according to the distance of the object in the HUD virtual screen, and its opacity is adjusted according to the external brightness and the HUD brightness. The environmental conditions considered in this study include the external light-source brightness, HUD brightness, arrow-object distance, and arrow-object size, and the system was optimized to maintain HUD brightness and clarity while eliminating crosstalk.
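As an illustration of how a concealer width could track object distance, the sketch below converts the disparity between the HUD virtual-image plane and the rendered object depth into a pixel width; apart from the 3.7 m distance quoted in the abstract, the constants are our assumptions, not the paper's calibration.

```python
IPD_M = 0.064            # interpupillary distance (assumed)
VIRTUAL_IMAGE_M = 3.7    # HUD virtual-screen distance (from the abstract)
PX_PER_RAD = 3000.0      # display pixels per radian of visual angle (assumed)

def concealer_width_px(object_dist_m, margin=1.2):
    """Disparity-driven concealer width: the angular disparity of a point at
    depth d relative to the virtual-image plane is ~IPD * (1/v - 1/d)."""
    disparity_rad = IPD_M * abs(1.0 / VIRTUAL_IMAGE_M - 1.0 / object_dist_m)
    return margin * disparity_rad * PX_PER_RAD

for d in (3.7, 10.0, 70.0):
    print(f"{d:5.1f} m -> {concealer_width_px(d):5.1f} px")
# The width shrinks to ~0 at the virtual-image distance and grows as the
# arrow object recedes, matching the distance dependence described above.
```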