Compact see-through AR system using buried imaging fiber bundles
S. Thiele, P. Geser, H. Giessen, and Alois M. Herkommer
Proc. SPIE 10676, Digital Optics for Immersive Displays, 106761E (21 May 2018); doi: 10.1117/12.2315685
Abstract
This design concept uses multi-core imaging fiber bundles with small diameters (<350 μm) to transfer information from an image source (e.g. a laser pico projector) to the eye of the user. One of the main benefits of this approach is that the resulting glasses are almost indistinguishable from conventional eyewear. Not only are the fiber bundles very thin and positioned close to the eye, but their refractive index also differs only slightly from that of the surrounding medium, which makes them hardly visible. At the same time, they can carry a significant space-bandwidth product and may be easier to fabricate than similar solutions using waveguides or Fresnel-type extractors. Using ray tracing and wave-optical considerations, we show that such an approach can lead to highly inconspicuous AR glasses with a >20° diagonal field of view and good angular resolution.

1.

INTRODUCTION

Concepts, prototypes and products of compact see-through augmented reality (AR) systems, also known as “smart glasses”, exist in great variety. The most common approaches include beam splitters1, embedded waveguides with different types of extractors2,3, or direct projection using the eyeglasses as a mirror4, e.g. by applying a wavelength-selective coating. Although great effort is put into making such devices as inconspicuous as possible, existing solutions are usually easily distinguishable from conventional eyewear. This leads to problems with user acceptance and has so far hindered widespread use in public.

In this paper we present a design example of a highly inconspicuous system which can hardly be distinguished from conventional eyewear (Figure 1). The main idea of our approach is to use fused imaging fiber bundles to transfer information from a device in the pocket of the user to the eye, such that all components which would otherwise contribute to a bulky eyeglass frame are not directly visible. Solutions which include a fiber bundle directly embedded in the eyeglasses have been proposed before5,6. Nonetheless, to the best of our knowledge, no specific design examples or prototype details of this idea are publicly available.

Figure 1.

Rendered image of the proposed smart glass design.


2.

CONCEPT

In contrast to most AR concepts, the initial image formation in our design does not necessarily happen at the eyeglass frame itself but can be transferred to a secondary device in the user’s pocket. Not only does this approach lead to highly unobtrusive glasses, but it can also accommodate a stronger battery, a more advanced display source, or more computational power because space is not confined to the eyewear frame. Ideally, a laser pico projector is used to directly project images onto the proximal end of fused imaging fiber bundles which, in the present case, have a diameter of 350 μm and contain ∼10,000 pixels (Fujikura Ltd. FIGH-10-350S). Alternatively, other image sources such as microdisplays can be used as well. With a microdisplay, the fiber facets can even be brought into direct contact with the display, avoiding any projection optics.

The fiber is guided to the user’s head similarly to an earphone cable. It runs along the eyeglass frame and is embedded into the eyeglass itself where, thanks to its similar refractive index, its small diameter, and its vicinity to the pupil, it is nearly invisible to the user. Within the glass, light exits the distal end of the fiber with a certain numerical aperture (NA), which may be controlled from the other end of the fiber7. The emitted light is then reflected by an embedded wavelength-selective freeform mirror towards the eye and finally collimated into the pupil by a small aspheric micro lens.

In principle, collimation and steering into the eye pupil could be achieved with a freeform mirror alone. However, our investigation showed that, with our requirements, a good image resolution could only be achieved with two separate optical components. This means that a small lens has to be placed on the eyeglass surface and may affect the see-through vision. We mitigate this effect by placing a negative lens with the same surface shape at the backside of the eyeglass, which compensates the effect of the positive lens6.

One of the challenges of AR systems with their image source close to the eye is matching the system exit pupil with the eye pupil. In our approach this problem is somewhat circumvented by limiting the beam diameters such that, even without pupil matching, no light is vignetted as long as the eye remains in its eyebox and the eye pupil does not become smaller than 2 mm in diameter. Because of the reduced beam diameters, the eyebox becomes fairly limited as well. In our case the eye can be moved by 1 mm in every direction without vignetting when the pupil diameter is 4 mm. In principle, this property could be used in a meaningful way if the AR image is to appear only at very specific viewing angles and to disappear over the remaining part of the field of view (FOV). A larger eyebox is possible if more fibers are used and integrated such that the illuminated area at the pupil plane is increased. Reduced beam diameters also have the benefit of a decreased focusing NA, which means an increased depth of focus and therefore less image blur when the eye is not focused at infinity.
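A minimal back-of-the-envelope sketch of this geometry is given below. It assumes that the beams together illuminate a roughly circular patch of ~2 mm diameter at the pupil plane and that vignetting starts as soon as this patch is clipped by the iris; the patch diameter is an assumption inferred from the 2 mm minimum pupil size quoted above.

```python
# Back-of-the-envelope eyebox estimate for the pupil-plane geometry described above.
# Assumption (not stated explicitly in the paper): all beams together illuminate a
# roughly circular patch of ~2 mm diameter at the pupil plane, and vignetting starts
# as soon as this patch is clipped by the iris.

def eyebox_radius_mm(pupil_diameter_mm: float, illuminated_patch_mm: float = 2.0) -> float:
    """Lateral eye movement (in mm) tolerated before the illuminated patch is clipped."""
    return max(0.0, (pupil_diameter_mm - illuminated_patch_mm) / 2.0)

print(eyebox_radius_mm(4.0))  # -> 1.0 mm, matching the +/- 1 mm eyebox quoted in the text
print(eyebox_radius_mm(2.0))  # -> 0.0 mm, i.e. no margin left once the pupil closes to 2 mm
```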

Compared to other image sources such as organic light emitting diode (OLED) or liquid crystal displays (LCD), fused imaging fiber bundles can have a much smaller pixel pitch (∼3 μm in the case of FIGH-10-350S). At the same time they usually have a worse fill factor, which leads to a visible grid pattern. The visibility of this pattern can be influenced by controlling the angular extent (NA) of the light field at the input end of the fiber or by slightly defocusing the facet at the cost of reduced sharpness.
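As a rough consistency check, the nominal core count of the bundle can be estimated from the quoted image-circle diameter and core pitch. The sketch below assumes hexagonal close packing and uses the ~325 μm usable facet diameter and ~3.2 μm core-to-core distance given later in the paper; the exact facet geometry is an assumption.

```python
import math

# Rough consistency check (not a figure from the paper): estimate the core count of the
# FIGH-10-350S bundle from the usable image-circle diameter (~325 um, Section 3) and the
# core-to-core pitch (~3.2 um, Section 4), assuming hexagonal close packing.

def estimated_core_count(image_circle_um: float, pitch_um: float) -> int:
    circle_area = math.pi / 4 * image_circle_um ** 2   # usable facet area
    area_per_core = math.sqrt(3) / 2 * pitch_um ** 2   # hexagonal unit cell per core
    return round(circle_area / area_per_core)

print(estimated_core_count(325.0, 3.2))  # ~9400, consistent with the nominal ~10,000 cores
```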

In order to combine a large FOV with good resolution, multiple fibers have to be used. They can all be addressed by a single image source but deliver different sections of the FOV. In the present example, we use a total of 7 fibers, one for the central part of the FOV and the remaining 6 in a hexagonal arrangement around the center. This allows us to achieve a FOV of 23.7°. A drawing of this approach is shown in Fig. 2.

Figure 2.

A, Drawing of the AR system including a laser projection unit which is connected to the multicore fiber bundles. B, Front view of the right eyeglass in which fiber bundles are embedded. The glasses have dimensions of 50 mm by 25 mm each. C, Side view of the system which shows how the light is projected into the user’s eye.


Since the fibers have a circular cross section, only circular sub-images can be transferred to the eye if simple optical components are used. In such a case, they cannot be combined into a seamless larger image without overlaying or cropping the sub-images and thus losing pixels (Figure 3, A). If the wavelength-selective mirror and the collimation lens are of a more complex shape, they can be designed such that the circular fiber facets are distorted into more complex shapes and can be combined without losing pixels (Figure 3, B). Alternatively, fiber bundles with a hexagonal or rectangular cross section could be used.
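To give an order of magnitude for the pixel loss of the cropping variant (Figure 3, A), one can assume that each circular sub-image is cropped to its inscribed regular hexagon so that the tiles meet without gaps; this counting convention is an assumption, not a figure from the paper.

```python
import math

# Illustrative estimate only: fraction of fiber cores kept if each circular sub-image is
# cropped to its inscribed regular hexagon so the tiles meet seamlessly (Figure 3, A).
hexagon_over_circle = 3 * math.sqrt(3) / (2 * math.pi)   # area ratio, ~0.827
print(f"pixels kept: {hexagon_over_circle:.1%}, wasted: {1 - hexagon_over_circle:.1%}")
# -> roughly 17 % of the cores would be lost per channel, which the freeform variant (B) avoids.
```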

Figure 3.

A, Image fusion using normal lenses. B, Image fusion using freeform lenses. While A leads to a waste of pixels, B allows a lossless fusion.


Because optical fibers have a minimum bending radius (20 mm in the case of our fiber), care has to be taken in the layout not to fall below this value (Fig. 2B). In the present design the fiber is bent more tightly than this limit in some places, and it has to be investigated experimentally whether such a design can be realized or whether the fibers need to be replaced with multiple thinner fibers, which allow smaller bending radii.
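A minimal sketch of such a layout check is given below. It assumes the routed fiber path inside the frame is available as a sampled 2D polyline (an assumption; the authors' layout tool is not described) and estimates the local bend radius from the circumcircle of three consecutive samples.

```python
import numpy as np

# Illustrative layout check, not the authors' tool: flag positions along a sampled 2D
# fiber path where the local bend radius falls below the 20 mm manufacturer limit.

def local_bend_radii(path_xy_mm: np.ndarray) -> np.ndarray:
    """Circumradius of every triple of consecutive path samples (straight -> inf)."""
    p0, p1, p2 = path_xy_mm[:-2], path_xy_mm[1:-1], path_xy_mm[2:]
    a = np.linalg.norm(p1 - p0, axis=1)
    b = np.linalg.norm(p2 - p1, axis=1)
    c = np.linalg.norm(p2 - p0, axis=1)
    cross = (p1 - p0)[:, 0] * (p2 - p0)[:, 1] - (p1 - p0)[:, 1] * (p2 - p0)[:, 0]
    area = 0.5 * np.abs(cross)
    with np.errstate(divide="ignore", invalid="ignore"):
        radii = a * b * c / (4 * area)
    return np.where(area > 1e-12, radii, np.inf)

# Example: a quarter circle of 15 mm radius is (correctly) flagged as too tight.
t = np.linspace(0, np.pi / 2, 50)
path = 15.0 * np.column_stack([np.cos(t), np.sin(t)])
print(np.any(local_bend_radii(path) < 20.0))  # True -> violates the 20 mm limit
```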

3.

OPTICAL DESIGN

The optical design can be separated into three different problems:

  • 1. Imaging of the source (e.g. pico-projector, display) onto the fiber bundles.

  • 2. Imaging the fiber outputs onto the retina.

  • 3. Making sure that the optical components within the eyeglass do not negatively influence the visual perception of the real world.

The first problem is not considered in this paper as many solutions to it already exist. The third problem is not tackled with a sequential optimization process but by analytical considerations and later checked with non-sequential ray tracing. Thus, the main focus of this section is on the second problem of creating a virtual image on the retina.

For our sequential ZEMAX design with 7 channels, there are two different beam paths in total: one for the central channel and one for the six identical outer channels which are arranged hexagonally around the central one. The 7 fibers provide a total of ∼70,000 pixels, which means that the theoretically possible angular resolution over a 20° circular FOV is ∼3.6 arcmin. This value could be improved if thicker fibers with more pixels are used, if the number of fibers is increased, or if variants with smaller pixel pitch become available.
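A rough check of this figure is sketched below. It assumes the ~70,000 cores form a hexagonal grid sampling the 20° circular FOV and that the resolution is set by the row-to-row spacing of that grid; the exact counting convention is not stated in the paper, so the result is only approximate.

```python
import math

# Back-of-the-envelope check of the ~3.6 arcmin figure (assumptions: ~70,000 hexagonally
# packed cores sample a 20 deg circular FOV; resolution given by the row spacing of the grid).

def angular_sampling_arcmin(n_cores: int, fov_deg: float) -> float:
    cores_across = math.sqrt(2 * math.sqrt(3) * n_cores / math.pi)  # core pitches across the FOV
    rows_across = cores_across / (math.sqrt(3) / 2)                 # hex rows are sqrt(3)/2 pitches apart
    return fov_deg * 60 / rows_across

print(f"{angular_sampling_arcmin(70_000, 20.0):.1f} arcmin")  # ~3.7, close to the quoted ~3.6
```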

Although the fibers provide an output NA of ∼0.4, the wavelength-selective mirrors only cover a cone with an NA of ∼0.16. In order to avoid light loss and unwanted stray light, the fiber output NA can be controlled by coupling in with a lowered input NA, e.g. by only exciting the fundamental mode7. The mirror dimensions are chosen as a trade-off between a thin eyeglass (small mirror) and a high collection NA (large mirror). The footprint is limited to 870 μm in diameter. Because light exits the fiber along the lateral eyeglass dimension, a 90° reflection towards the eye pupil is necessary. This is achieved by tilting the mirrors by 45° and using an elliptical shape to compensate for the projection effect.
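The ~0.16 collection NA can be roughly reproduced geometrically, assuming that the 870 μm footprint is the projected size of the 45°-tilted elliptical mirror (so its extent perpendicular to the chief ray is ≈ 870 μm · cos 45°), that the NA is defined in the embedding medium, and using the 2.8 mm fiber-to-mirror distance quoted below; all of these interpretations are assumptions.

```python
import math

# Rough geometric estimate of the collection NA (illustrative; the aperture interpretation
# and NA convention are assumptions, the distances and index are quoted in the text).
footprint_um = 870.0
aperture_um = footprint_um * math.cos(math.radians(45))  # ~615 um extent perpendicular to the chief ray
half_angle = math.atan((aperture_um / 2) / 2800.0)        # cone half angle seen from an on-axis facet point
na = 1.518 * math.sin(half_angle)                         # NA in the embedding medium (n = 1.518)
print(f"NA ~ {na:.2f}")  # ~0.17, in line with the ~0.16 quoted above
```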

Both channel variants are designed and optimized in separate configurations. The mirror surface is used as the system pupil since it effectively acts as an aperture stop limiting the beam diameters. It cannot be directly matched with the eye pupil without bulky secondary optics. We circumvent this problem by using beams with diameters small enough that no vignetting occurs. The negative effects of this approach (small eyebox, simultaneous vignetting of all channels) can be reduced by increasing the distance between the fiber and the mirror, at the cost of a decreased NA and a decreased FOV. The mirrors are coated such that they are reflective for three narrow wavelength bands and transmissive for the rest of the spectrum.

The beam path starts at the fiber facet with an image size of 325 μm in diameter. The wavelength-selective mirror is placed at a distance of 2.8 mm from the fiber facet, while both are embedded in a material with a refractive index of 1.518 at λ = 550 nm and an Abbe number of 64.2. After the 90° reflection, light exits the eyeglass through a small micro lens on its surface. The micro-lens vertex is placed at a distance of 14 mm from the eyeball. Best results were obtained by using an XY-polynomial freeform surface for the mirror and an even aspheric surface for the lens. We used a realistic eye model from the ZEMAX sample database to assess the achievable resolution. For the outer channels, the design is decentered by 2 mm and a small extra tilt is given to the mirror surface such that no rays are vignetted at the eye pupil. Figure 4 shows the layout of the two different channels as well as an example with all 7 channels included. The circular FOV of one channel is 8.6°. In order to cover the combined FOV without holes, an overlap of ∼1.1° has been included. With a total FOV of 23.7° and 70,000 fiber cores, the maximum achievable resolution is 4.2 arcmin, limited by the fiber pixel pitch.
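The quoted FOV and resolution figures can be roughly reproduced from the per-channel numbers, again under the hexagonal-packing and row-spacing assumptions used above (the authors' exact counting convention is not stated).

```python
import math

# Consistency checks for the figures above (assumptions: hexagonally packed cores,
# resolution set by the row spacing of the core grid).

# 1. Combined FOV along a diagonal through the central and two opposite outer channels.
per_channel_fov, overlap = 8.6, 1.1          # deg
print(3 * per_channel_fov - 2 * overlap)      # 23.6 deg, matching the quoted 23.7 deg

# 2. Per-channel angular sampling: ~10,000 cores mapped onto one 8.6 deg circular sub-FOV.
cores_across = math.sqrt(2 * math.sqrt(3) * 10_000 / math.pi)
rows_across = cores_across / (math.sqrt(3) / 2)
print(per_channel_fov * 60 / rows_across)     # ~4.3 arcmin, close to the stated 4.2 arcmin limit
```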

Figure 4.

A, Sequential optical design with all 7 channels. B, Isolated view of the central channel and one of the outer channels. C, Closer view of the imaging optics. Sub-images are placed at fiber facets (FF), reflected by the wavelength selective micro mirror (M) and collimated into the eye by the aspheric micro lens (ML). The compensation lens (CL) compensates the effect of the micro lens for light coming from the environment.


The initial designs, in which the radii of curvature are set manually, are improved with the local optimization algorithm of ZEMAX in several steps while the number of variables is increased gradually. Because the individual beam diameters are below 700 μm, the eye pupil is not completely filled. Nonetheless, the resulting spot radii of <18 μm over the whole FOV correspond to a resolution below the limit given by the fiber pixel pitch. The resulting spot diagrams are shown in Figure 5.
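The spot size can be converted into an angular blur as sketched below, assuming that the <18 μm radii are evaluated on the retina of the eye model and that the eye has an effective focal length of roughly 17 mm; neither value is stated explicitly in the paper, so the result should be read as an estimate only.

```python
import math

# Hedged conversion of the RMS spot radius into an angular blur (assumptions: spots are
# evaluated on the retina of the ZEMAX eye model; effective eye focal length ~17 mm).
spot_radius_mm = 0.018
eye_focal_length_mm = 17.0
blur_arcmin = math.degrees(spot_radius_mm / eye_focal_length_mm) * 60
print(f"{blur_arcmin:.1f} arcmin")  # ~3.6 arcmin, i.e. below the ~4.2 arcmin fiber sampling limit
```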

Figure 5.

Comparison of spot diagrams and point spread functions (PSF) for the two different channel types and different field positions on the multicore fiber facets.


A Monte-Carlo tolerancing simulation (2,000 systems) of the central channel revealed that the tolerances listed in Table 1 have to be achieved in order for 90% of the simulated systems to stay below the fiber resolution limit in terms of root-mean-square (RMS) spot radius. A schematic sketch of such a tolerancing run is given after the table.

Table 1.

Acceptable tolerances to stay below the fiber resolution limit of 4.2 arcmin.

Tolerance               Value
Radius of curvature     ± 1 mm
Surface distances       ± 20 μm
Surface decenter        ± 20 μm
Surface tilt            ± 0.2°
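The sketch below illustrates the structure of such a Monte-Carlo tolerancing run. It is not the ZEMAX workflow used here; in particular, the sensitivity model standing in for the ray trace of the perturbed channel is a made-up placeholder, not the real optical response.

```python
import random

# Schematic sketch of a Monte-Carlo tolerancing run in the spirit of the analysis above
# (illustrative only; the paper uses ZEMAX tolerancing, and the sensitivity model below
# is a placeholder, not the real response of the channel).

TOLERANCES = {              # perturbation ranges from Table 1, sampled uniformly
    "radius_mm": 1.0,
    "distance_um": 20.0,
    "decenter_um": 20.0,
    "tilt_deg": 0.2,
}

def toy_spot_radius_arcmin(perturbation: dict) -> float:
    """Placeholder merit function: nominal value plus a quadrature sum of the normalized
    perturbations. A real run would ray trace the perturbed channel instead."""
    nominal = 3.6  # arcmin, placeholder nominal performance
    penalty = sum((perturbation[k] / TOLERANCES[k]) ** 2 for k in TOLERANCES) ** 0.5
    return nominal + 0.4 * penalty

def yield_below_limit(n_systems: int = 2000, limit: float = 4.2) -> float:
    ok = 0
    for _ in range(n_systems):
        p = {k: random.uniform(-v, v) for k, v in TOLERANCES.items()}
        ok += toy_spot_radius_arcmin(p) < limit
    return ok / n_systems

print(yield_below_limit())  # fraction of perturbed systems meeting the 4.2 arcmin limit
```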

4.

SIMULATION OF IMAGING

In order to test the functionality of our approach, a non-sequential simulation model was set up. This model includes all relevant effects except diffraction at the lens boundaries or at the embedded fiber bundles. While diffraction at the lenses or mirrors is negligible, the refractive index contrast between the cores and the cladding of the imaging fibers leads to significant diffraction and scattering. The refractive indices can be assumed to be ncore = 1.5 and ncladding = 1.4467. Figure 6 shows a 2D wave-optical propagation through an arrangement of fiber cores (diameter of the circular area: 200 μm) embedded in the cladding material. The simulation was performed at a wavelength of 550 nm using a single collimated beam. To match the available information about the fiber, our model consists of cores of 2 μm diameter in a hexagonal arrangement with a core-to-core distance of 3.2 μm. The simulation was performed using the Wave Propagation Method (WPM)8, which only considers forward-propagating light. The results show that the multicore fiber bundles, even if embedded into a material with the cladding refractive index, act as strong scattering sources.
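A minimal sketch of such a propagation is given below. It uses a simple split-step angular-spectrum scheme rather than the WPM of Ref. 8 and is not the authors' code; the core geometry and indices follow the text, while the window size, beam width, and bundle position are illustrative assumptions.

```python
import numpy as np

# Minimal 2D split-step (angular-spectrum) beam-propagation sketch of a collimated 550 nm
# beam crossing the fiber bundle transversely. Illustrative stand-in for the WPM of Ref. 8.
wavelength, n_clad, n_core = 0.550, 1.4467, 1.5   # um and refractive indices
dx = dz = 0.025                                    # um grid spacing (25 nm, as in the text)
nx = 16384                                         # ~410 um transverse window (assumption)
x = (np.arange(nx) - nx // 2) * dx

# Hexagonal lattice of 2 um cores on a 3.2 um pitch inside a 200 um diameter bundle.
pitch, bundle_radius, core_radius = 3.2, 100.0, 1.0
row_spacing = pitch * np.sqrt(3) / 2
centers = []
for j, cz in enumerate(np.arange(-bundle_radius, bundle_radius, row_spacing)):
    for cx in np.arange(-bundle_radius, bundle_radius, pitch) + (j % 2) * pitch / 2:
        if cx ** 2 + cz ** 2 <= bundle_radius ** 2:
            centers.append((cx, cz))
centers = np.array(centers)

k0 = 2 * np.pi / wavelength
kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
kz = np.sqrt(np.maximum((k0 * n_clad) ** 2 - kx ** 2, 0.0))
diffraction_step = np.exp(1j * kz * dz)            # homogeneous (cladding) propagation per slice

field = np.exp(-(x / 80.0) ** 2).astype(complex)   # wide, nearly collimated input beam (assumption)
for z in np.arange(-bundle_radius - 2, bundle_radius + 2, dz):
    # Refractive index of the current slice: cladding plus the cores this slice intersects.
    slab = centers[np.abs(centers[:, 1] - z) < core_radius]
    n_slice = np.full(nx, n_clad)
    for cx, cz in slab:
        half_chord = np.sqrt(core_radius ** 2 - (z - cz) ** 2)
        n_slice[np.abs(x - cx) < half_chord] = n_core
    field = np.fft.ifft(diffraction_step * np.fft.fft(field)) * np.exp(1j * k0 * (n_slice - n_clad) * dz)

# Diagnostic: fraction of power scattered beyond +/-3 deg of the incident direction.
spectrum = np.abs(np.fft.fft(field)) ** 2
angles = np.degrees(np.arcsin(np.clip(kx / (k0 * n_clad), -1.0, 1.0)))
print(f"scattered power beyond 3 deg: {spectrum[np.abs(angles) > 3.0].sum() / spectrum.sum():.1%}")
```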

Figure 6.

WPM simulation of light scattering at a multicore imaging fiber. The circular field enhancement around the individual fiber cores is artificial and was added after the simulation in order to make them more visible. The simulation was performed in 2D which corresponds to an infinitely extended fiber. The simulation grid size was set to 25 nm in both directions.


As a result of the WPM simulations, the fiber bundles were included in the non-sequential simulation as Lambertian scattering sources. The effects on the visual perception of the user were simulated with a simplified eye model using a perfect lens and an eye pupil diameter of 4 mm. A chessboard pattern was imaged into the eye from infinity, passing through the eyeglass with embedded scattering objects having the dimensions of the used fibers. The shape of the fibers was simplified to straight cylinders. Figure 7 shows a simulation of imaging through the eyeglass. On the logarithmic plot, the fibers and the central part become clearly visible. While at some parts of the FOV a contrast of more than 10 orders of magnitude is possible, the included optics reduce this value to 4-5 orders of magnitude. It is difficult to predict how much this pattern will affect the user experience, as there are psychological aspects to this question as well.

Figure 7.

Simulated imaging of a chessboard pattern through the eyeglasses without AR light coupled in.


A very similar setup was used to simulate what a user would see when wearing the smart glasses. In this case, the fiber facets were modeled as image sources. The chessboard pattern was replaced by a background image covering a circular FOV of 120°. The sub-images were prepared and cut to the correct geometry to enable imaging without significant seams or holes. Figure 8 shows the non-sequential model, in which the eye was replaced by a perfect lens in order to decrease the computational effort.

Figure 8.

Non-sequential ZEMAX model of the monocular AR system. The ZEMAX eye model was ignored in this case and replaced by an ideal lens.


The model was used to simulate the user’s view with the help of massive ray tracing. Figure 9 shows examples of imaging on the detector. In order to reach acceptable signal levels, more than 200 million rays were traced for one image. Stray light was included in the simulations, and some of the stray paths are visible in Figure 8. The results show that the approach works as intended. A complete image with readable text is formed by the 7 different channels, while the overall shape is not circular but rather similar to a flower. On closer inspection, thin boundaries between the different segments are visible; they could be further reduced by better alignment or by digital correction at the image source. The grid pattern of the fiber bundles was taken into account for Figure 9B by masking the source images accordingly. The results show that the grid pattern does not significantly reduce the performance, as the resolution limit of the lenses is already close to the fiber pixel pitch. The usable eyebox was evaluated by laterally shifting the aperture stop at the eye pupil, which is similar to a rotational movement of the eye. The comparison shows that for a movement of 1 mm no significant vignetting appears, while at 2 mm all image segments start to become vignetted. It has to be noted that these results also depend on the diameter of the pupil and were obtained at a value of 4 mm. If the pupil is smaller, vignetting occurs earlier.

Figure 9.

A, Result of non-sequential ray tracing through the system displayed in Figure 8. The FOV corresponds to ∼90° × ∼40°. B, Detailed view of the overlaid virtual image with the pixelation effects of the fiber bundles taken into account. C, Pixelation not included in the simulation. D, Eye pupil shifted by 1 mm. E, Eye pupil shifted by 2 mm. The detector used for all images has linear signal scaling. Contrast and brightness of the whole figure have been increased linearly by 20% and 10%, respectively.


5.

SUMMARY AND CONCLUSION

We successfully demonstrated the approach of using buried multicore fibers and micro-optical components to create highly unobtrusive smart glasses. Although our design is monocular, it could be easily extended to binocular eyeglasses. The model features color (RGB) virtual images which can be designed to appear at arbitrary positions within the FOV. Our design achieves the following specifications (Table 2):

Table 2.

Achieved specifications of the presented smart glass design.

Specification          Value
Field of view          23.7°
Angular resolution     < 4.2 arcmin
Eyebox                 ± 1 mm
Look                   barely distinguishable from standard glasses

For successful manufacturing, the tolerances from Table 1 should be met. They are expected to be achievable with conventional micro-optics manufacturing methods.

According to our simulation, the normal see-through vision is not significantly affected by the components embedded in the glasses. However, this can be judged better in an experimental prototype as human vision is strongly affected by psychological effects.

In conclusion, we consider our approach as a realistic variant of unobtrusive AR smart glasses which are of special appeal if the invisibility of the optical and electronic components is important. We assume that in such a case, e.g. if the eyeglasses require a special “designer” look, the current limitations in terms of resolution or eyebox could be acceptable. For the subtle placement of simple information like notifications, time, or directions, our concept could be an ideal solution even if this information is only visible at a specific eye position.

6.

OUTLOOK

There are multiple ways in which this design could be improved, and many of them have already been mentioned in the previous sections. The most problematic limitations are the limited eyebox, the limited resolution, and a possible obstruction of the see-through vision by the optical components.

The limited eyebox could be improved by increasing the focal length of the micro-optical components (mirrors, lenses), which would lead to better pupil matching but also to a smaller FOV per channel and to thicker eyeglasses at constant NA. Another option is to increase the eyebox by using more channels. Vignetting effects could also be mitigated by eye tracking and dynamic changes to the local image brightness at the display side.

Angular resolution could be increased if a smaller FOV is used, if more pixels are integrated (larger or more fibers), or if fibers with a smaller pixel pitch become available. In our design, the optical performance of the lenses is not limiting the resolution, but more complex lens and mirror shapes may be required if these demands increase.

If experiments show that the optical components interfere too much with the see-through vision, several countermeasures could be tried. One possibility would be to optically cloak the fibers by creating a specific refractive index distribution around them9. The small bumps on the glasses caused by the microlenses could be removed by switching to gradient-index or planar lenses. If the focal length of the micro-optical components is increased, the fibers can be moved towards the outer part of the eyeglass, leading to less obstruction.

For rapid prototyping of the micro-optics, 3D printing by 2-photon-lithography could be an ideal candidate as it allows direct printing of freeform optical surfaces onto fiber facets10.

ACKNOWLEDGMENTS

The authors thank the Bundesministerium für Bildung und Forschung (BMBF) and Baden-Württemberg-Stiftung for funding the projects “Printoptics” and “Opterial”, respectively.

7.

REFERENCES

[1] Budd, R. A., Dove, D. B., Nakai, S. and Toyokawa, T., “Head mounted display,” Patent USD436960S1 (1999).

[2] Dobschal, H.-J. and Lindig, K., “Imaging optical unit and smart glasses,” Patent US20170307895A1 (2014).

[3] Schultz, R. J. and Burdick, N. E., “Prismatic multiple waveguide for near-eye display,” Patent US8649099B2 (2010).

[4] Tremblay, E., Guillaumee, M. and Moser, C., “Method and apparatus for head worn display with multiple exit pupils,” Patent US9846307B2 (2013).

[5] Amirparviz, B., “Head mounted display using a fused fiber bundle,” Patent US8666212B1 (2011).

[6] Spitzer, M. B., “Image combining system for eyeglasses and face masks,” Patent US5886822A (1996).

[7] Orth, A., Ploschner, M., Maksymov, I. S. and Gibson, B. C., “Extended depth of field imaging through multicore optical fibers,” Opt. Express 26, 6407–6419 (2018). doi: 10.1364/OE.26.006407

[8] Schmidt, S., Tiess, T., Schröter, S., Hambach, R., Jäger, M., Bartelt, H., Tünnermann, A. and Gross, H., “Wave-optical modeling beyond the thin-element-approximation,” Opt. Express 24, 30188–30200 (2016). doi: 10.1364/OE.24.030188

[9] Ergin, T., Stenger, N., Brenner, P., Pendry, J. B. and Wegener, M., “Three-Dimensional Invisibility Cloak at Optical Wavelengths,” Science 328, 337–339 (2010). doi: 10.1126/science.1186351

[10] Gissibl, T., Thiele, S., Herkommer, A. M. and Giessen, H., “Sub-micrometre accurate free-form optics by three-dimensional printing on single-mode fibres,” Nat. Commun. 7, 11763 (2016). doi: 10.1038/ncomms11763
