This PDF file contains the front matter associated with SPIE Proceedings Volume 12443, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Augmentation in manually driven vehicles can significantly improve traffic safety. The most ergonomic solution (eyes on the road, no refocusing) is the AR head-up display (AR-HUD), but its FOV is currently limited to about 10° by 5°. Transparent displays in the windshield (eyes on the road, refocusing required) are costly to replace and barely meet legal transparency requirements. The cheapest solution is video AR on dashboard displays (eyes off the road, refocusing required). We report on a new approach to augmentation as a compromise between ergonomics and cost: an eight-line RGB matrix display mounted on top of the dashboard at the bottom of the windshield. It spans pillar to pillar (150 cm, 150 x 8 pixels, RGB LED) and therefore enables augmented information along the whole windshield. Consequently, it requires less eyes-off-the-road time and refocusing and is a very ergonomic add-on for video AR. We started with a single-line pixelated light guide in a seating buck to measure and evaluate the required luminance (≳3,300 cd/m²), the RGB luminance ratio (35:50:15), and the perception of information from night to blinding sunlight. We optimized the RGB LED display by testing and measuring various diffusers at different distances from the LEDs for an optimum combination of sharpness and pixelation. Image quality and content, such as the visualization of current speed (including color coding), warnings (e.g., slippery road), navigation, and comfort functions (e.g., incoming call, beat mode), were evaluated by subjects via an online survey and in our seating buck. The display was rated as very helpful, with a significant reduction in the time needed to grasp the information.
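For reference, a minimal sketch of the per-channel arithmetic implied by the quoted figures, assuming the 35:50:15 R:G:B ratio partitions the total minimum luminance of 3,300 cd/m²:

```python
# Split the measured minimum display luminance across the RGB channels
# according to the quoted luminance ratio (assumed here to sum to the total).
total_cd_m2 = 3300
ratio = {"R": 35, "G": 50, "B": 15}
per_channel = {c: total_cd_m2 * r / sum(ratio.values()) for c, r in ratio.items()}
print(per_channel)  # {'R': 1155.0, 'G': 1650.0, 'B': 495.0} cd/m^2
```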
A 0.37-inch, 360 Hz field-refresh-rate, ultra-HD, 11,800-PPI Liquid Crystal on Silicon (LCoS) micro-display panel with a 2.15 µm pixel pitch and an embedded 4x up-scaler is presented. In AR glasses for the metaverse, spatial resolution becomes increasingly important as the Field of View (FoV) grows: insufficient spatial resolution breaks the immersion of the augmented virtual space through image-quality degradation and screen-door effects. The proposed display employs a new ultra-low-power resolution-enhancement technology, a quadruple scaler, to resolve the spatial aliasing caused by the limited resolution of Augmented Reality (AR) glasses. The ultra-fine-resolution pixel circuit is designed with a new spatial interpolation technique called micro-mirror space-interpolation (mmSI). The new micro-mirror architecture forms a capacitive circuit network in which the pixel mirrors themselves produce the interpolated pixel data. Because the spatial interpolation is embedded in the pixel circuit itself, no additional circuitry is needed from video input to pixel driving; the panel's driving power consumption therefore matches that of a full-HD-resolution driver, only 100 mW, despite the quadrupled resolution. The micro-display panel for metaverse AR glasses was fabricated in a 0.11 µm CMOS process and assembled with a VAN LC front plane. The die size, active area, and panel size are 11.65 x 7.75 mm², 8.25 x 4.64 mm², and 13.8 x 8.5 mm², respectively. The output video resolution is 3840 x 2160 with eight-bit RGB gray depth from a 1920 x 1080 video input. The presented panel achieves 11,800 PPI with UHD video resolution in an active display area of only 0.37-inch diagonal. The display reaches an angular resolution of 49 cycles per degree (cpd) at a 90-degree diagonal FoV for ultra-portable handheld AR glasses applications.
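A quick consistency check of the quoted angular resolution (our arithmetic, assuming the figure is obtained by dividing the diagonal pixel count by the diagonal FoV):

$$\frac{\sqrt{3840^2 + 2160^2}}{90^\circ} \approx \frac{4406\ \text{px}}{90^\circ} \approx 49\ \text{per degree}.$$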
Electronic holographic displays precisely reconstruct the wavefront of object light and have attracted considerable attention for Virtual Reality (VR) and Augmented Reality (AR) applications. To achieve a high-quality holographic display with a wide field of view, the pixel pitch of the spatial light modulator (SLM) must be reduced to about 1 μm. We have achieved precise control of Liquid Crystal (LC) alignment in 1 μm pitch pixels by exploiting the anisotropy of the pixel space created by lattice-shaped dielectric walls. In this paper, we investigate the effect of the LC-SLM structure on the image quality of electronic holographic displays. As a result, we clarify that the image quality of phase-modulation-type holographic displays does not degrade as long as the number of gray levels is at least four, and we establish a simple pixel structure that allows independent control of 1 μm pitch pixels together with high image quality.
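For context, the standard grating relation (not stated in the abstract) links the pixel pitch p to the maximum diffraction angle and hence the viewing angle; assuming green light at 532 nm and the paper's 1 μm pitch:

$$\theta_{\text{view}} = 2\sin^{-1}\!\left(\frac{\lambda}{2p}\right) = 2\sin^{-1}\!\left(\frac{0.532\,\mu\text{m}}{2 \times 1\,\mu\text{m}}\right) \approx 31^\circ,$$

which illustrates why pitches near 1 μm are required for wide-field-of-view holographic displays.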
A fast-switching image-flipping technique using an SSD-LC drive mode is proposed for VR/AR/MR applications. Because the SSD-LC drive mode switches retardation purely in-plane throughout the entire switching process, high diffraction efficiency in the +/- first-order diffraction beams enables effective image flipping to either the left or the right, allowing additional images to be superimposed on the original image. This image-flipping method requires one of the simplest liquid-crystal panel configurations, which eases its adoption in most VR/AR/MR system designs.
We present an intraocular augmented reality display featuring retinal prostheses combined with bionic vision processing. Unlike conventional retinal prostheses, whose electrodes are spaced equidistantly, our solution rearranges the electrodes to match the distribution of ganglion cells. To imitate human vision naturally, a bionic vision processing scheme is developed. Built on a three-dimensional eye model, our bionic vision processing can visualize monocular images, binocular image fusion, and a parallax-induced depth map.
Current learning-based Computer-Generated Holography (CGH) algorithms often utilize Convolutional Neural Network (CNN)-based architectures. However, these non-iterative CNN-based methods mostly underperform State-Of-The-Art (SOTA) iterative algorithms such as Stochastic Gradient Descent (SGD) in terms of display quality. Inspired by the global attention mechanism of the Vision Transformer (ViT), we propose a novel unsupervised autoencoder-based ViT for generating phase-only holograms. Specifically, for the encoding part, we use Uformer to generate the holograms; for the decoding part, we use the Angular Spectrum Method (ASM) instead of a learnable network to reconstruct the target images. To validate the effectiveness of the proposed method, numerical simulations and optical reconstructions are performed to compare our proposal against both iterative algorithms and CNN-based techniques. In the numerical simulations, the PSNR and SSIM of the proposed method are 26.78 dB and 0.832, which are 4.02 dB and 0.09 higher than those of the CNN-based method, respectively. Moreover, the proposed method exhibits less speckle and a higher display quality than other CGH methods in experiments. We suggest the improvement can be ascribed to the ViT's global attention mechanism, which is better suited to learning the cross-domain mapping from the image (spatial) domain to the hologram (Fourier) domain. We believe the proposed ViT-based CGH algorithm is a promising candidate for future real-time high-fidelity holographic displays.
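As a sketch of how a fixed ASM decoder can close the unsupervised training loop, the following PyTorch fragment propagates a phase-only hologram to the image plane; all names and parameter values (wavelength, pitch, distance) are illustrative assumptions, not the paper's configuration:

```python
import math
import torch

def asm_propagate(phase, wavelength, pitch, z):
    """Angular Spectrum Method: propagate a phase-only hologram by distance z."""
    field = torch.exp(1j * phase)                 # unit-amplitude complex field
    n, m = phase.shape[-2:]
    fy = torch.fft.fftfreq(n, d=pitch)            # spatial frequencies [1/m]
    fx = torch.fft.fftfreq(m, d=pitch)
    FY, FX = torch.meshgrid(fy, fx, indexing="ij")
    k2 = (1.0 / wavelength) ** 2 - FX**2 - FY**2
    kz = torch.sqrt(k2.clamp(min=0))              # evanescent components -> 0
    H = torch.exp(2j * math.pi * z * kz) * (k2 > 0)  # transfer function, masked
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

# Unsupervised step: a (hypothetical) Uformer encoder predicts the phase map,
# and the loss compares the ASM reconstruction with the target image, e.g.
#   recon = asm_propagate(encoder(target), 520e-9, 8e-6, 0.1).abs()
#   loss = torch.nn.functional.mse_loss(recon, target)
```

Because the decoder is a differentiable but parameter-free physical model, only the encoder is learned.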
The development of an ideal optical system to support Mixed Reality and Augmented Reality (AR) applications has attracted great interest in the scientific community over recent decades. The perfect device remains out of reach, so researchers must focus on optimizing specific behaviors. Several years ago, we introduced a disruptive display concept that pushes device integration to the limit by suppressing the optical system altogether. This allows the imaging process to be considered in a different way, with specific control of the field of view. In this 'smart glass' concept, the glass is the display, and the image is formed directly on the retina through a combination of refractive and diffractive effects. This conceptual target allowed us to define a technological roadmap to support our development. The technologies involved concern principally Photonic Integrated Circuits in the visible range, digital/analog holography, and Liquid Crystal devices. We present the current state of our research with a particular focus on the holographic display element. Recent results on analog pixelated hologram recording both validate and question our technological and conceptual approach. We show images formed by sparse holographic pixel distributions with controlled angular characteristics that demonstrate the mix of refractive and diffractive effects. The transmission behavior of this holographic device is also analyzed.
We propose a seamless 3D spatial viewer optimized for both short-term and extended use. Metaverse services often require a head-mounted display (HMD); however, after long-term use, users can experience fatigue. Furthermore, an HMD cannot be taken off instantly, making it difficult to use alongside other devices such as a smartphone. Our prototype offers long eye relief and a large eyebox, so it does not touch the user's body. Multiple lenses create the large eyebox, which is relayed by retro-reflector arrays and a half mirror to achieve the long eye-relief distance. This paper presents the optical design of the seamless 3D viewer, reports its evaluation results, and discusses future applications.
In this report, we propose an advanced integral imaging 3D display system using a simplified high-resolution light field image acquisition method. The simplified acquisition method uses a minimal number of cameras (three, placed along the vertical axis) to acquire high-resolution perspectives of a full-parallax light field image. Because the number of cameras is minimized, the number of perspectives (3×N) does not match the specifications of the 3D integral imaging display unit (N×N elemental lenses). An intermediate-view elemental image generation method could be applied along the vertical axis; however, generating as many vertical viewpoints as there are elemental lenses is a complex process requiring heavy computation and long processing times. Therefore, we use a pre-trained deep learning model to generate the intermediate information between the vertical viewpoints. The corrected perspectives are fed into a custom-trained deep learning model, which analyzes and renders the remaining intermediate viewpoints along the vertical axis (3×N → N×N). The elemental image array is generated from the newly generated N×N perspectives via the pixel rearrangement method; finally, a full-parallax, natural-view 3D visualization of the real-world object is displayed on the integral imaging 3D display unit.
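A minimal sketch of one common form of the pixel rearrangement step, assuming an N×N grid of same-size views and ignoring the orientation flips that depend on the lens optics:

```python
import numpy as np

def views_to_eia(views: np.ndarray) -> np.ndarray:
    """views: (N, N, H, W) grid of perspectives. Returns an (H*N, W*N)
    elemental image array in which the elemental image behind lens (i, j)
    holds pixel (i, j) of every view."""
    N, N2, H, W = views.shape
    assert N == N2, "expected an N x N grid of views"
    eia = np.zeros((H * N, W * N), dtype=views.dtype)
    for v in range(N):           # vertical view index
        for u in range(N):       # horizontal view index
            eia[v::N, u::N] = views[v, u]
    return eia
```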
In recent years, materials for AR/VR have been actively developed, and high Refractive Index (R.I.) materials are required to achieve high performance and a wide field of view. In addition, AR displays contain fine structures, so materials that can be embedded in such structures are needed. These fine structures are formed by nanoimprint lithography (NIL) or gap-fill processes. In short, AR waveguides require materials with a high R.I., high transparency, and good NIL and gap-filling properties. High-R.I. formulations with high fluidity and low volatility are required for NIL and gap-fill processes; however, conventional organic materials face a trade-off between high fluidity and low volatility. By designing from the molecular structure up, we resolved this trade-off. Combining this organic material technology with our unique formulation technology, we have realized products that can be processed by NIL and gap fill even when they contain a large amount of inorganic nano-filler. In addition, applying these organic materials to organic EL yields useful characteristics such as solvent-free materials with high light-extraction efficiency. Products using this technology are expected to find application in AR and OLED devices.
Laser source displays offer a wide color gamut and deep color saturation. However, they also suffer from speckle due to coherent wavefront interaction with surface roughness or imperfections from any component in the display's optical chain. Speckle can be offensive to the viewer, giving the displayed image a noticeable "shimmer" or grainy appearance. For an expanded-beam, flood-illumination display, such as those using a Spatial Light Modulator (SLM), methods exist to minimize this speckle to the point where it is not noticeable to the viewer. A laser-scanned display is a different situation, however: the image is generated by a single flying spot, so speckle strategies relying on flood-illumination mitigation cannot be used efficiently. Static components, such as the close-packed microlens arrays used in the diffuser optical plane of a typical Head-Up Display (HUD) layout, are a good compromise and do a reasonable job of mitigating speckle. However, the microlens size must be closely matched to the laser beam spot size to minimize other artifacts, such as moiré patterns in the image. Despite being a good countermeasure against speckle, in high-magnification HUD systems the microlens array structure can become visible in the image and is sometimes perceived as "pixelization". We propose utilizing discrete step-index motion of a close-packed microlens array to both double the perceived image resolution and mitigate speckle.
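For reference, the textbook speckle-averaging result that motivates moving the microlens array (Goodman's classical analysis; not derived in the abstract): summing N independent, equal-strength speckle patterns within the eye's integration time reduces the speckle contrast to

$$C = \frac{\sigma_I}{\langle I \rangle} = \frac{1}{\sqrt{N}},$$

so stepping the array through, say, four decorrelated positions halves the perceived contrast.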
Photophoretic Optical Trapping (POT) is a relatively new concept in optics with potential applications in 3D displays. POT is realized by confining a particle within a very small region of the optical system, usually near the focus. The particle, once captured by the beam, can be used to print visible 3D images in free space. Our POT system is encapsulated in an acrylic enclosure, which also incorporates a biconvex lens and a laser module with an adjustable focus. Particles are released near the top of the lens's focal point until a captured particle can be seen. First, using biconvex lenses of varying sizes, we measure the capture rates at focal lengths ranging from 60 to 200 mm and extract the maximum capture rates of the system. The capture rates give an accurate picture of the system's limitations, showing where particles can and cannot be captured efficiently. From our data, the most efficient capture occurs at focal lengths of roughly 80 to 160 mm for a 405 nm laser source. The 60 to 200 mm range will then be used to evaluate the 405 nm, 532 nm, and 630 nm wavelengths against one another to determine which yields the highest capture efficiency. This wavelength-dependence study experimentally reveals the relationship between the light source wavelength and trapping capability, which is novel and important for future photophoretic optical trapping applications.
We propose an autostereoscopic 3D display with an increased parallax-barrier aperture ratio. The problem with a high aperture ratio is that some subpixels are observed by both eyes simultaneously, resulting in a high crosstalk ratio. To overcome this problem, we propose an image processing method that suppresses crosstalk by rendering the crosstalk subpixels black, so that as many of the necessary pixels as possible remain visible. Using the prototype system, we confirmed a wide viewing area with a crosstalk ratio below 10% and high-quality 2D images with little image-quality degradation.
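A minimal sketch of the blackening idea, with hypothetical names; the boolean visibility maps would be derived elsewhere from the barrier geometry and tracked eye positions (this is not the paper's implementation):

```python
import numpy as np

def suppress_crosstalk(left_img, right_img, vis_left, vis_right):
    """Interleaved subpixels: keep a subpixel only if exactly one eye sees it;
    any subpixel visible to both eyes (crosstalk) is rendered black (0)."""
    return np.where(vis_left & ~vis_right, left_img,
                    np.where(vis_right & ~vis_left, right_img, 0))
```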
Augmented Reality (AR) has attracted considerable attention owing to the demand for non-face-to-face services. The principle of AR is to overlay a virtual image on the real world, and the depth of field is a significant factor in displaying a virtual image at the proper position. In this paper, we propose a multi-variable focal lens system that can dynamically tune the depth of field. Using a multifocal lens with several different focal lengths, an image carries depth information corresponding to each focal length, while a focus-tunable lens controls the focused area and magnification to display the image at the appropriate position and size. Owing to its simple architecture, the proposed system has major advantages in form factor and heat management. To verify the system's feasibility for AR, numerical simulations are performed. The system divides a 2D image into focused and defocused areas, and the simulations show that these areas can be tuned by the multifocal lens and the focus-tunable lens. The results show a depth range from 0.3 m to 2 m (3.3 D to 0.5 D), which is determined by the design of the system.
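The quoted dioptric range follows directly from the reciprocal relation between distance and optical power:

$$D = \frac{1}{d}: \qquad \frac{1}{0.3\ \text{m}} \approx 3.3\ \text{D}, \qquad \frac{1}{2\ \text{m}} = 0.5\ \text{D}.$$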
Occlusion technology has grown in importance for enhancing immersive augmented reality experiences by improving mutual depth perception between real and virtual scenes. Among the various methods for implementing occlusion in augmented reality displays, the 4f-system method has gained attention for its capability to produce a sharp occlusion effect. However, it suffers from a large form factor and has difficulty achieving sharp occlusion when displaying multiple-depth images. Previous studies have applied a lens array to a 4f system to reduce the form factor. In this work, we numerically and experimentally investigate the use of a pair of Focus-Tunable Lenses (FTLs) together with a lens-array 4f system to additionally achieve sharp occlusion at multiple depth levels.
We present a numerical transformation technique for Computer-Generated Holograms (CGHs) based on deep learning. Using the proposed technique, one can obtain CGHs for a user-defined holographic display system from given CGHs. The calculation is about 20 times faster than the conventional free-space propagation algorithm. We verify through both numerical simulation and optical experiment that focal stacks produced with the CGHs obtained by the proposed technique are similar to those produced with the target CGHs.
The superposition of digital information in the Field of View (FOV) of a user is the basis of current developments in mixed and augmented reality. Before being studied for near-eye devices and head-mounted displays, this application was implemented in Head-Up Displays (HUDs) to help pilots and drivers manage both driving stress and the flow of information related to the vehicle. Classical HUD optical designs based on a combiner are strongly limited in FOV due to pupil-management issues. To overcome this, head-up projection displays have been developed that project the digital image directly onto the windshield. This approach requires an efficient projection surface that combines bright reflection with clear transparency. A few years ago, we introduced an optical approach based on a retro-reflective transparent projection surface and a manufacturing process providing microscopic corner cubes that incorporate an optical diffuser function. In this contribution, we present an optimized design that raises the efficiency of the retro-reflective structure towards 100%. We also discuss a possible technological process for manufacturing the master used to replicate the microstructure. This process, based on grayscale lithography and Deep Reactive Ion Etching (DRIE), can guarantee high retro-reflection efficiency, high transparency, and a realistic draft angle to allow a molding-based process for microstructure fabrication.
Extended reality human factors studies commonly report group behavior, which may mask the true prevalence of oculomotor changes that appear in response to stereoscopic augmented reality. This study therefore aimed to elucidate the prevalence, direction, and magnitude of oculomotor changes after near work in stereoscopic augmented reality. Fifty-three subjects (18-28 years old, normal visual acuity, no vision complaints) were asked to type text displayed at 60 cm as accurately and quickly as possible. Each subject participated in two sessions: in one the text was displayed in stereoscopic augmented reality, in the other on a computer screen. Clinical assessments of visual parameters were performed before and immediately after 30 minutes of the task. Individual variations were found in the magnitude and direction of the oculomotor changes after near work. After the use of stereoscopic augmented reality, adverse changes in vergence and accommodation were observed in about 40% of the group. Although the prevalence of adverse oculomotor changes was similar for text displayed on the computer screen, fewer than 20% of the group showed a decline in visual parameters under both viewing conditions. This exploratory study highlights the need to consider individual variation in visual responses and to identify groups that might benefit from, or be disadvantaged by, stereoscopic augmented reality technologies.
Three-dimensional Light Field Displays (LFDs) promise realistic and comfortable viewing for one or multiple users simultaneously, without eyewear, by overcoming the vergence-accommodation conflict. However, LFDs have not yet gained widespread adoption and remain a hot research topic. Current LFDs are based on refractive Microlens Array (MLA) optics, which have inherent limitations including high optical aberrations and/or bulkiness. Metasurfaces are flat optics made of a distribution of subwavelength-size nanopillars that can manipulate light-wave properties, including phase, amplitude, and polarization, and can be fabricated in a single lithographic step. They can serve as a more compact alternative to refractive MLAs; however, current designs cannot match the full-color, wide field-of-view imaging achieved by multiple layers of refractive lenses. In this work, we demonstrate a deconvolution neural network model, based on the U-Net architecture and Wiener non-blind deconvolution, that reduces the effects of aberrations caused by a designed metasurface, enabling high-image-quality 3D LFDs. We employ an analytical model to determine the metasurface phase profile and point spread function for a five-by-five-view LFD scenario. Our model is trained and evaluated on 52 images of 8.1 megapixels each from online multiview image databases. To minimize spatially varying aberration effects, the loss function incorporates spatial pixel-wise error, structural quality, and angular consistency. Compared to output images without preprocessing using the designed PSFs, our neural network model improved PSNR by 10 dB and MS-SSIM by 2% overall across all views and reduced the variation between views by 40% and 70% for PSNR and MS-SSIM, respectively.
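A minimal sketch of the classical (non-blind) Wiener step that such a pipeline uses as its analytic front end; the learned U-Net stage is omitted, and all names and parameter values are illustrative:

```python
import numpy as np

def wiener_deconvolve(img, psf, snr=100.0):
    """Classical Wiener deconvolution with a known PSF.
    Assumes psf is centered and has the same shape as img."""
    H = np.fft.fft2(np.fft.ifftshift(psf))         # optical transfer function
    G = np.fft.fft2(img)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

In the paper's pipeline, each of the five-by-five views would presumably be preprocessed with its own designed PSF before the network refines the result.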
Currently, Mid-Air Display (MAD) technology is of great interest to practitioners. Potential applications in consumer products with large-aperture "floating" image displays (TVs, monitors, ATMs, vending machines, home appliances, etc.) and contactless user interfaces for remote control increase its attractiveness. To enlarge the mid-air image while maintaining a large horizontal Field of View (FoV) and high display light efficiency, the following challenges must be solved: developing a high fill-factor Diffractive Optical Element (DOE) architecture with an optimally sized out-coupling aperture, and designing custom projection optics whose exit pupil matches the in-coupling DOE. As a possible solution to these problems, the authors propose a MAD based on a commercially available projector source, custom projection optics, and a corner-DOE waveguide architecture with a focusing Fresnel lens. The mid-air image is formed at the back focal plane of the Fresnel lens, between the viewer and the display. For a mid-air image with a five-inch diagonal and 32° horizontal FoV, we use a waveguide out-coupling aperture of 245 x 145 mm² and a Fresnel lens with a 220 mm back focal length, obtaining an image brightness of ~1000 cd/m² thanks to the custom projection optics. Basic contactless user interaction was also implemented.
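A rough geometric consistency check under our own simplifying assumptions (16:9 aspect; horizontal viewing angle limited by the out-coupling aperture as seen from the image plane one focal length away): a five-inch-diagonal image is about 111 mm wide, giving

$$2\arctan\!\left(\frac{245 - 111}{2 \times 220}\right) \approx 34^\circ,$$

close to the quoted 32° horizontal FoV.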
Three-dimensional light field displays are not yet widely adopted due to their bulky form factor and the limited image quality caused by optical aberrations of microlens arrays (MLAs). Conventional optimization techniques cannot approach the maximum displayed image quality because they rely on intermediary metrics such as focal spot size. To optimize full-color, wide field-of-view image quality, the point spread function of the MLA should be modeled to provide more flexibility. We developed a modeling approach for both refractive and metasurface MLAs and assessed its accuracy through judicious comparisons between numerical simulations and experimental characterization.
Holographic displays based on Computer-Generated Holograms (CGHs) suffer from two problems. The first is the heavy computational complexity of CGH generation, which limits real-time holographic display. The second is the limited image quality of the holographic display. To solve both problems, a Block-based Sub-Hologram (BSH) method is proposed that can generate high-quality holograms in real time. In the BSH method, the target 3D scene is divided into a series of blocks, each composed of adjacent object points. We use a diffraction-based approximation method to determine the region of the Sub-Hologram (SH) corresponding to each block. Since the size and complexity of the SH are reduced, the computation time decreases significantly. We confirm that the proposed method, implemented on a GPU framework, achieves large-size, real-time color three-dimensional holographic display.
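A minimal sketch of a diffraction-based support estimate of the kind the abstract describes (our illustration; the paper's exact approximation may differ): a block at depth z can only influence hologram pixels inside the SLM's maximum diffraction cone, so the sub-hologram half-width is bounded accordingly.

```python
import math

def subhologram_halfwidth(z_m, wavelength_m=532e-9, pitch_m=8e-6):
    """Half-width of the hologram region a block at depth z can influence,
    bounded by the SLM's maximum diffraction angle (illustrative values)."""
    theta = math.asin(wavelength_m / (2 * pitch_m))  # max diffraction angle
    return z_m * math.tan(theta)

# Example: a block 0.2 m from an 8 um pitch SLM needs only a ~6.7 mm
# half-width sub-hologram instead of the full panel.
print(subhologram_halfwidth(0.2))  # ~0.00665 m
```

Restricting the computation to this support is what cuts the per-point cost relative to a full-frame hologram.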