Understanding waveguide-based architecture and ways to robust monolithic optical combiner for smart glasses
Abstract
With the emergence of Augmented Reality (AR) and Virtual Reality (VR) headsets during the past decade, companies and academic laboratories have worked on the design of optical combiners to improve their performance and form factor. Most of the smart glasses on the market have the advantage of being small, which eases the integration of the combiner in a head-worn device. Most of them (Google Glass, Vuzix) use a prism-like architecture, in which the collimation and deflection of the light are performed by a single optical piece. This approach reduces the size and tolerance issues of the device. Other companies (Optinvent, Microsoft, Lumus) came up with waveguide architectures, in which the light is collimated by a lens or group of lenses, injected into a slab waveguide and extracted in front of the eye of the user. In this way the image is brought right in front of the eye, whereas prism-like architectures display the image in the peripheral vision of the user. These optical combiners, however, suffer from tight tolerances and fabrication complexity, as several pieces must be assembled. The injection and extraction of the image rays in the waveguide can be performed either by holograms or by slanted mirrors. Each technology has its drawbacks, but so far the performance of holographic combiners has been disappointing, with chromatic dispersion degrading the MTF. This paper presents work on a waveguide-type optical architecture designed for smart glasses. The system described here was conceived for smart glasses uses, for which the main concerns are the size of the eye box, adaptability, and a small form factor. Good optical performance was obtained, with a resolution of around 1.2 px/arcmin, together with a large eye box.

1.

INTRODUCTION

The market of smart glasses and Head Mounted Displays (HMD) in 2018 abounds with optical combiner architectures. Such optical systems are of great importance for Augmented Reality (AR) and Virtual Reality (VR) devices, as they collimate and deflect the rays coming from the image display toward the eye of the user. For an HMD to be accepted by consumers, however, it must fit in a relatively small volume, be forgettable in some ways, and produce a high-fidelity, comfortable image that does not cause headaches. To achieve this, optical designers have come up with a wide variety of options [1]. The most common in the realm of smart glasses and AR-type devices are free-space architectures, light-guide architectures, and waveguide-based architectures.

Free-Space Architecture

The free-space architecture is probably the most intuitive on the HMD market. It consists in using the lens of the user's glasses as a reflector to deflect the rays coming from the display. The collimation can be achieved upstream by a lens (or group of lenses), or by combining such a lens with a deflecting optical element embedded in the glasses lens. The free-space approach can be used for either AR see-through displays or VR occlusion displays.

Figure 1.

Composyt Lab. (Intel) smart glasses (left); principle of free space architecture (right)


One of the companies that best embodied this approach in the past years was Composyt Lab. (acquired in 2015 by Intel Corp.). Here the classic association of a display and collimating optics was replaced by a set of RGB lasers together with a scanning micro-projector. The image was formed by scanning the field of view and was then directly reflected by a hologram on the lens of the glasses.

Light-Guide Architecture

As mentioned previously, optical combiners are needed to deflect and collimate the light from the display toward the eye of the user. While the free-space approach uses a reflector placed in front of the eye, such reflectors are sometimes bulky and very often visible to the user, depending on the targeted Field Of View (FOV).

Another approach has been developed over the past few years and was popularized by the Google Glass. In such units the light is refracted into a prism-like plastic optical cube and encounters a tilted (partially reflective) mirror that reflects it toward the user's eye. The collimation can be obtained either by introducing a lens before the light is refracted in, or as it is refracted out. Google used the entrance surface of the prism combined with a totally reflective curved mirror to collimate the display image. This kind of architecture is particularly well adapted to smart glasses, as the combiner is compact and the mechanical constraints are low.

Figure 2.

A photo of the Google glass, and its optical splitter cube (left); principle of light-guide architecture (right)


Although this type of system is remarkable for its simplicity, the approach suffers from significant geometrical constraints. As the image fields are formed inside the cube, the cube width is entirely determined by the FOV and by the length of the system, itself set by the Inter-Pupillary Distance (IPD). It thus appears difficult to innovate with this architecture without creating a bulky system.
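As a rough illustration of this constraint (a sketch only: the propagation length, pupil diameter and refractive index below are assumed values, not figures from a specific product), each extreme field drifts sideways by the propagation length times the tangent of its refracted angle, so the cube width grows quickly with the FOV:

```python
import math

def cube_width_estimate(fov_deg: float, path_length_mm: float,
                        pupil_mm: float = 4.0, n_plastic: float = 1.49) -> float:
    """Rough lower bound on the width of a light-guide cube.

    A collimated field at the edge of the FOV drifts sideways by
    path_length * tan(theta_in_medium) on each side of the axis,
    on top of the pupil diameter itself.
    """
    half_fov_air = math.radians(fov_deg / 2.0)
    # Refraction at the entrance face (Snell's law)
    half_fov_medium = math.asin(math.sin(half_fov_air) / n_plastic)
    return pupil_mm + 2.0 * path_length_mm * math.tan(half_fov_medium)

# Assumed 30 mm propagation length, set by the inter-pupillary distance
for fov in (10, 20, 30):
    print(f"FOV {fov:2d} deg -> cube width >= {cube_width_estimate(fov, 30.0):.1f} mm")
```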

Waveguide Architectures

One very interesting element in the toolbox of the optical engineer is the waveguide architecture. In this kind of system, the collimation is usually separated from the ray deflection. The collimation is achieved by a dedicated collimating system, using a single lens or a group of lenses. The light is then coupled into a slab waveguide, in which the fields bounce back and forth between the two surfaces of the guide before being coupled out toward the eye.

Each ray forming the image fields must satisfy the condition of Total Internal Reflection (TIR), which depends on the internal incidence angle and on the index of the waveguide material, in order to stay coupled inside the system. The TIR condition requires that the rays corresponding to the different fields be deflected prior to their first reflection on either side of the guide, so that they reach a sufficiently steep angle of incidence on the waveguide surface. To be coupled out, the light must then be deflected again before being refracted out of the waveguide.

Figure 3.

DigiLens optical combiner, using volume holograms to deflect, collimate and form the eye box (upper left); principle of the holographic waveguide architecture (upper right); Lumus glasses, using a waveguide and slanted mirrors (lower left); principle of the waveguide-based architecture (lower right)


Several options are available to couple the light into and out of the system [2]. They can be divided into two categories: geometric optics (prisms, lenses or mirrors), used by Lumus [3], Epson [4] and Optinvent [5-6]; or Diffractive Optical Elements (DOE), used by Nokia [7-9], Konica Minolta [10], Sony [11] and BAE Systems [12-14]. The first category is probably the simplest and most effective, as it can easily be modeled with any ray-tracing CAD software, but it is sometimes challenging on the manufacturing side. The holographic approach, on the other hand, has the benefit of reducing the dimensions of the system: by introducing multiple holograms of different kinds, it is possible to deflect and collimate the object fields coming from the display.

Holograms and diffractive optical elements can, however, introduce blur in the image, as dispersion appears when the incident light is diffracted.
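This chromatic spread follows directly from the grating equation, sin θ_m = mλ/d + sin θ_i. The sketch below, with an assumed grating period of 700 nm and normal incidence (illustrative values, not taken from any of the cited systems), shows how far apart the first-order angles of the blue, green and red primaries land when no compensation is applied:

```python
import math

def diffraction_angle_deg(wavelength_nm: float, period_nm: float,
                          incidence_deg: float = 0.0, order: int = 1) -> float:
    """Diffraction angle from the grating equation
    sin(theta_m) = m * lambda / d + sin(theta_i)."""
    s = order * wavelength_nm / period_nm + math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(s))

PERIOD_NM = 700.0  # assumed period of a surface-relief coupler
for name, wl in (("blue", 450.0), ("green", 550.0), ("red", 650.0)):
    print(f"{name:5s} {wl:.0f} nm -> {diffraction_angle_deg(wl, PERIOD_NM):6.2f} deg")
# The large angular spread between 450 nm and 650 nm is what smears
# a white pixel unless the dispersion is compensated elsewhere.
```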

2.

DESIGN PRINCIPLES

General description

The work presented here relies on a waveguide-based design for smart glasses. The philosophy of the design is to propose a system that meets the specifications of smart glasses while keeping a decent form factor and eye box. Most of the waveguide combiners on the market are able to perform AR operation, and are as a consequence bulky and sometimes limited to a reduced eye box. If only smart glasses uses are considered, however, AR can be taken off the table: a see-through, in-front-of-the-eye image and a wide FOV are not necessary. The image can therefore be placed in the peripheral vision of the user, with a moderate FOV.

Figure 4.

Sketch of the waveguide design


The waveguide system (WGDS) presented here is composed of two major parts, a collimator and a waveguide, and relies on three critical mechanisms. The first mechanism is the collimation, whose role is to form an image of the microdisplay at infinity so that the user does not need to accommodate; the second is the injection into the waveguide, which sets up the conditions for the image fields to be totally reflected inside the device; and the third is the extraction, i.e. the deflection of the fields toward the eye.

Specifications

The design specifications were based on what is available on the smart glasses market, while trying to improve the optical performance. The display size was taken in the range of off-the-shelf micro-displays. The associated technology could be LCD, LCOS or OLED. OLED has the advantage of generating a bright, high-quality image while remaining relatively slim, and is probably the most viable option. Because of the geometry of the design, LCOS can also be a favorable choice here, despite its larger size and the need for front-light illumination of the display. LCD is probably the poorest choice, the display thickness being considerably increased by the backlight.

Table 1.

Design Specifications.

DESIGN SPECIFICATIONS

Image dimensions:
  Diagonal FOV: 20°
  Image format: 16:9
  Angular pixel resolution: >1.2 px/arcmin

Display dimensions:
  Diagonal size: 0.37″
  Display resolution: 1280 × 720 px
  Display format: 16:9

Optical performances:
  MTF @ 25 lp/mm: >60%
  MTF @ 30 lp/mm: >50%
  MTF @ 40 lp/mm: >30%
  Eye box: as large as possible
  Image distortion: <1%

Intuitively, it would seem natural to try to reach as large a FOV as possible. However, smart glasses are mostly used for the visualization of notifications, short pieces of information and small images, and thus require a fairly small FOV. One must understand that FOV is not the only optical parameter and, depending on the use of the system, sometimes not the most important one. While FOV is naturally important for AR and VR HMDs in order to get an immersive user experience, smart glasses first and foremost need to be adaptable enough to fit the head of almost any user. A very important parameter is then the eye box, representing the distance the eye can travel while still seeing the image. The problem is that a wide eye box and a large FOV are often not compatible: the eye box results from the cross section of the triangle formed by the extreme rays defining the FOV. At constant eye relief and pupil aperture, the eye box therefore shrinks as the FOV increases. The field of view is sometimes sold as a crucial parameter, and some companies advertise their product with an impressively large image, but the travel of the eye needed to see the extreme corners (fields) is then too large to fit in the eye box.
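The trade-off can be made concrete with the geometry described above: the extreme fields leave the combiner aperture tilted by ±FOV/2, so the zone where the complete image remains visible shrinks with the tangent of the half FOV. The sketch below uses assumed aperture and eye-relief values, not figures from this design:

```python
import math

def eye_box_mm(aperture_mm: float, eye_relief_mm: float, fov_deg: float) -> float:
    """Width of the zone where the full FOV is still visible.

    The two extreme fields leave the combiner aperture tilted by +/- FOV/2,
    so the region where they still overlap at the eye-relief plane is the
    aperture width minus the walk-off of the two extreme fields.
    """
    walk_off = 2.0 * eye_relief_mm * math.tan(math.radians(fov_deg / 2.0))
    return max(0.0, aperture_mm - walk_off)

# Assumed values: 10 mm combiner aperture, 20 mm eye relief
for fov in (10, 20, 30, 40):
    print(f"FOV {fov:2d} deg -> eye box ~ {eye_box_mm(10.0, 20.0, fov):4.1f} mm")
```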

The specifications were taken in agreement with these discussions and the requirements of the student Challenge.

3.

DESIGN PRESENTATION

Image collimation

The collimator designed here is composed of three lenses, with five powered surfaces, two of which are aspheric. One of the lenses can be seen as a splitting cube, deflecting the image rays by 90° from the micro-display. The aim was to ease the mechanical constraints by fixing the display on the glasses frame, and to be able to bond the first planar optical element to it and take it as a reference. The lens materials are polycarbonate and PMMA, as the combination of their Abbe numbers offers a good achromatic solution.
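The benefit of pairing PMMA with polycarbonate can be illustrated with the classic thin-lens achromat condition, in which the two elements split the total power in proportion to their Abbe numbers so that their chromatic focal shifts cancel. The Abbe numbers and total power in the sketch below are typical illustrative values, not the actual power distribution of this design:

```python
def achromat_powers(total_power_diopters: float, v1: float, v2: float):
    """Thin-lens achromatic doublet: split the total power so that the
    chromatic contributions phi_i / V_i cancel (phi1/V1 + phi2/V2 = 0)."""
    phi1 = total_power_diopters * v1 / (v1 - v2)
    phi2 = -total_power_diopters * v2 / (v1 - v2)
    return phi1, phi2

V_PMMA, V_PC = 57.4, 30.0  # typical Abbe numbers (assumed values)
phi_pmma, phi_pc = achromat_powers(20.0, V_PMMA, V_PC)  # 20 D, i.e. f = 50 mm
print(f"PMMA element: {phi_pmma:+.1f} D, polycarbonate element: {phi_pc:+.1f} D")
# A positive PMMA "crown" paired with a weaker negative polycarbonate "flint".
```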

Figure 5.

Ray tracing of the fields forming the image inside the optical collimator


The optical performances of the collimator are shown in the following figures. The grid distortion was kept under 0.4%, and the MTF at 30 lp/mm was over 25% for the most extreme field and over 50% for the vast majority of the fields.

Figure 6.

Modulation Transfer Function (MTF) of the collimator for central and peripheral fields, at wavelengths of 450 nm, 550 nm and 650 nm.


Light Injection

The injection of the light rays is a crucial part of the design. The TIR condition and the FOV will define the geometry of the guide. The TIR condition that must be satisfied for every light field is the following:

$$\sin\theta_{TIR} \geq \frac{n_{air}}{n_{PMMA}}$$

where θ_TIR is the angle of incidence on the waveguide facet, and n_air and n_PMMA are the refractive indices of air and of the PMMA from which the waveguide is made. As this condition is critical for every one of the image fields, we can write for the central field:

$$\sin\!\left(\theta - \frac{\mathrm{FOV}_{PMMA}}{2}\right) \geq \frac{n_{air}}{n_{PMMA}}$$

where θ is the angle of incidence of the central field on the waveguide surface, and FOV_PMMA is the full field of view refracted inside the PMMA guide.

The concept of total internal reflection is a powerful tool here, as it effectively folds the pupil of the waveguide entrance onto itself. Indeed, by directing the central field to the edge of the waveguide pupil, half of the FOV goes through an additional reflection on the waveguide surface and is redirected to the opposite edge of the pupil. The width of the waveguide is taken to be 3 mm.
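A quick numerical check of these conditions (a sketch assuming n_PMMA ≈ 1.49 in the visible, a value not stated explicitly in the paper, together with the 20° diagonal FOV and 3 mm width of the design) gives the critical angle, the minimum incidence of the central field, and the lateral distance travelled between two bounces:

```python
import math

N_AIR, N_PMMA = 1.0, 1.49   # assumed refractive index of PMMA in the visible
FOV_AIR_DEG = 20.0          # diagonal FOV from the specifications
GUIDE_WIDTH_MM = 3.0        # waveguide width used in the design

# Critical angle of the PMMA/air interface
theta_tir = math.degrees(math.asin(N_AIR / N_PMMA))

# Half FOV once refracted into the guide (Snell's law at the entrance)
half_fov_pmma = math.degrees(
    math.asin(math.sin(math.radians(FOV_AIR_DEG / 2.0)) / N_PMMA))

# The extreme field must still satisfy TIR, so the central field needs
theta_central = theta_tir + half_fov_pmma

# Lateral advance of the central field between two reflections
bounce_pitch = 2.0 * GUIDE_WIDTH_MM * math.tan(math.radians(theta_central))

print(f"theta_TIR             ~ {theta_tir:.1f} deg")      # ~42.2 deg
print(f"half FOV in PMMA      ~ {half_fov_pmma:.1f} deg")  # ~6.7 deg
print(f"min central incidence ~ {theta_central:.1f} deg")  # ~48.9 deg
print(f"bounce pitch          ~ {bounce_pitch:.1f} mm")    # ~6.9 mm
```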

Figure 7.

Sketch of the injection of the light fields at the entrance of the waveguide


Light Extraction

As long as the waveguide surfaces are parallel, the collimation obtained at the output of the collimating system is preserved inside the guide. After being reflected inside the guide, the light then needs to be directed toward the eye to form the eye box, while keeping the rays collimated. Many solutions exist on the market, using slanted mirrors, micro-mirrors or holograms, and all of them could be applied to this design. Slanted mirrors and micro-mirrors have the advantage of relying on purely geometric optics while remaining small, but they bring complexity to the manufacturing process. Surface or volume holograms are, on the other hand, very thin and “seamless” elements, but suffer from design complexity and image degradation due to the diffractive nature of the optical element. Here it was decided to start with a simple flat mirror to avoid complex manufacturing of the design.

The tilt angle of the mirror is simply determined by the geometric construction of the rays propagating inside the waveguide, together with the required field of view. Snell's law must be applied to find the exact refraction angles of the field of view inside the PMMA waveguide. Once the angles needed to extract the light are known, the tilt of the mirror can be calculated.
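A minimal sketch of this construction, under assumed angles (the actual guide angles of the design are not restated here) and for the configuration where the ray travelling away from the eye-side face is folded back out through that face: the mirror normal must bisect the incoming guided ray and the desired output direction, which for a normal exit gives a tilt of half the guided-ray angle.

```python
import math

def mirror_tilt_deg(theta_guide_deg: float, exit_angle_deg: float = 0.0) -> float:
    """Tilt of the extraction mirror with respect to the waveguide faces.

    theta_guide_deg : angle of the guided ray from the face normal (> TIR angle)
    exit_angle_deg  : desired exit angle from the face normal, measured inside
                      the guide (0 deg = straight out toward the eye, so no
                      refraction occurs at the exit face)

    The mirror normal must bisect the incoming and outgoing directions (law of
    reflection); the tilt of the mirror plane equals the angle between the
    mirror normal and the face normal.
    """
    t_in = math.radians(theta_guide_deg)
    t_out = math.radians(exit_angle_deg)
    # 2D unit vectors: x along the guide, y = face normal pointing away from the eye
    d_in = (math.sin(t_in), math.cos(t_in))          # guided ray moving away from the eye side
    d_out = (math.sin(t_out), -math.cos(t_out))      # ray sent out toward the eye
    nx, ny = d_out[0] - d_in[0], d_out[1] - d_in[1]  # direction of the mirror normal
    return math.degrees(math.atan2(abs(nx), abs(ny)))

# Assumed example: guided ray at 50 deg from the face normal, normal exit
print(f"Mirror tilt ~ {mirror_tilt_deg(50.0):.1f} deg from the waveguide face")
# For a normal exit this reduces to theta_guide / 2 (here 25 deg).
```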

Figure 8.

Sketch of the extraction of the image fields at the end of the waveguide


Eye box expansion

As explained earlier, the eye box is determined by the arrangement of the field of view and the distance to the pupil of the system. This means that for the eye box to be large, the pupil must be close to the eye and/or the FOV must be small. Solutions exist for holographic combiners to expand the eye box by creating multiple spot-sized eye boxes; the user sees the entire image as long as one of these reduced eye boxes falls on the eye.

In our case, extending the image reflection over the extractor can increase the size of the eye box. This is readily done with a waveguide, as the fields keep reflecting inside the system and can be extracted at different locations. Lumus used this principle in their design, in which the eye box is effectively formed by the apparent sizes of the slanted mirrors put together. In our case, the only differences are the size of the extracting mirror and the addition of a partially reflecting coating on one side of the waveguide, so that some of the fields keep travelling to the extremity of the waveguide.

Figure 9.

Transformation of the mirror extraction into an array of micro-mirrors, with complementary piece for see-through operation


Introducing a partially reflective coating at the base of the mirror requires the extractor side of the waveguide to be fabricated from two separate pieces: one would be the mirror, while the other would be the full waveguide. This solution, however, has the disadvantage of being quite bulky, and could be replaced by an array of micro-mirrors, each of which would be a slice of the original mirror presented here. It would also make it easier to add a complementary piece enabling see-through operation. This solution was best demonstrated by Optinvent [5-6] in their ORA glasses. The extractor can also be designed using slanted mirrors.
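A practical point when splitting the extractor into several partially reflective facets is how reflective each one should be so that the eye box is uniformly bright. The paper does not give coating values; the sketch below is only the standard power-budget argument, in which each of N facets outputs an equal share of the injected light, so the k-th facet must reflect 1/(N - k + 1) of whatever light remains in the guide.

```python
def uniform_extraction_reflectances(n_mirrors: int):
    """Reflectance of each partial mirror so that every extraction sends out
    the same fraction of the originally injected light.

    If the k-th facet (k = 1..N) must output 1/N of the initial power, it has
    to reflect 1/(N - k + 1) of the power still travelling in the guide; the
    last facet ends up fully reflective.
    """
    reflectances = []
    remaining = 1.0
    for _ in range(n_mirrors):
        reflectances.append((1.0 / n_mirrors) / remaining)
        remaining -= 1.0 / n_mirrors
    return reflectances

# Assumed example: five micro-mirror facets along the extractor
for k, r in enumerate(uniform_extraction_reflectances(5), start=1):
    print(f"facet {k}: R = {r:.2f}")
# -> 0.20, 0.25, 0.33, 0.50, 1.00
```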

4.

CONCLUSION

This design uses the advantages of a waveguide system, namely a slim form factor and high image fidelity, to tackle the constraints of smart glasses display imaging. The image is formed by a triplet of lenses; the image fields are then injected into a waveguide and extracted by a simple, partially or totally reflective, flat mirror. The image seen by the user spans more than 20° diagonally and is situated in the peripheral vision. The extraction could also be performed using holographic optical elements, micro-mirrors or slanted mirrors, which would reduce the size of the waveguide. However, the resulting manufacturing constraints would be high, while the visibility of the system in the peripheral vision of the user is a lesser concern. Most optical combiners using a collimator turn out to be quite bulky at the temple of the user, mainly because a large eye box requires a large collimator aperture. By introducing the partially reflecting surface at the extraction of the light from the waveguide, the eye box can be enlarged and, as a consequence, the dimensions of the collimator can be reduced, improving both the optical performance and the form factor of the system.

Ultimately, the goal of any optical designer working on such waveguide-based optical combiners would be to conceive a monolithic waveguide with the same performance as stated above. Indeed, one of the drawbacks of the waveguide approach using a collimating lens is the critical mechanical tolerancing: the alignment of the collimator with the entrance of the waveguide is critical to reach the TIR condition and to have all the image fields coupled in. However, it seems difficult to achieve satisfying optical performance without using a group of lenses to collimate the image of the display. One solution would be to design the shapes of the lenses so that they register with one another. This can be done by molding plastic lenses with hooks on their non-optical sides. In this way the lenses and the waveguide could be held together like pieces of the same puzzle, bringing mechanical stability and good alignment to the design.

REFERENCES

[1] 

B. Kress et al., “The segmentation of the HMD market: optics for smart glasses, smart eyewear, AR and VR headsets,” Proc. SPIE 9202, Photonics Applications for Aviation, Aerospace, Commercial, and Harsh Environments V, 92020D (2014).

[2] 

Jian Han, Juan Liu, Xincheng Yao, and Yongtian Wang, “Portable waveguide display system with a large field of view by integrating freeform elements and volume holograms,” Opt. Express 23, 3534–3549 (2015). doi: 10.1364/OE.23.003534

[5] 

K. Sarayeddine and K. Mirza, “Key challenges to affordable see-through wearable displays: the missing link for mobile AR mass deployment,” Proc. SPIE 8720, 87200D (2013).

[7] 

P. Äyräs and P. Saarikko, “Near-to-eye display based on retinal scanning and a diffractive exit-pupil expander,” Proc. SPIE 7723, 77230V (2010).

[8] 

T. Levola and P. Laakkonen, “Replicated slanted gratings with a high refractive index material for in and outcoupling of light,” Opt. Express 15(5), 2067–2074 (2007). doi: 10.1364/OE.15.002067

[9] 

T. Levola, “Novel Diffractive Optical Components for Near to Eye Displays,” SID Symposium Digest of Technical Papers, 37: 64–67 (2006).

[10] 

I. Kasai, Y. Tanijiri, T. Endo, and H. Ueda, “A practical see-through head mounted display using a holographic optical element,” Opt. Rev. 8(4), 241–244 (2001). doi: 10.1007/s10043-001-0241-z

[11] 

H. Mukawa, K. Akutsu, I. Matsumura, S. Nakano, T. Yoshida, M. Kuwahara, K. Aiki, and M. Ogawa, “A Full Color Eyewear Display using Holographic Planar Waveguides,” SID Symposium Digest of Technical Papers, 39: 89–92 (2008).

[12] 

I. K. Wilmington and M. S. Valera, “Waveguide-Based Display Technology,” SID Symposium Digest of Technical Papers, 44: 278–280 (2013).

[13] 

A. A. Cameron, “Optical waveguide technology and its application in head-mounted displays,” Proc. SPIE 8383, 83830E (2012).
