Design of a spatially multiplexed light field display on curved surfaces for VR HMD applications
21 May 2018
A typical light field virtual reality head-mounted display (VR HMD) comprises a lenslet array and a display for each eye. An array of tiled subobjects shown on the display reconstructs the light field through the lenslet array, and the light field is synthesized into one image on the retina. In this paper, we present a novel, compact design of a binocular spatially multiplexed light field display system for VR HMDs. In contrast to the flat lenslet array and flat display used in current light field displays, the proposed design explores the viability of combining a concentric curved lenslet array and a curved display with optimized lenslet shape, size, and spacing. The design of placing the lenslet array on a spherical surface is investigated and the specification tradeoffs are shown. The system displays its highest resolution in whatever direction the eye gazes. The design form is thin and lightweight compared with most other VR optical technologies. Furthermore, the use of a curved display reduces the complexity of the optical design and wastes fewer pixels between subobjects. The design simultaneously achieves a wide field of view, high spatial resolution, a large eyebox, and a relatively compact form factor.



Virtual Reality, Mixed Reality, and Augmented Reality (collectively here “VR”) headsets are of increasing importance in our daily lives. They promise enhanced consumer experiences and more efficient workplaces through directly assisting operators and engineers. Virtual reality headsets typically refer to systems that generate a new three-dimensional world independent of one’s physical surroundings, whereas mixed and augmented reality imply a mixed image of the user’s surroundings and rendered graphics. For the latter experiences, the connection to the physical world may be electronic, where onboard cameras convert the image of the external world into part of the rendered graphics for the display, or optical, where rendered graphics are channeled into passive optics. Purely emissive display technologies for head-mounted display (HMD) applications, including the one presented here, are suitable for any or all of virtual reality, augmented reality, and mixed reality headsets.

Lenslet arrays (also called microlens arrays, or, in one dimension, lenticular arrays) have found many applications in imaging and nonimaging optics. A pair of lenslet arrays permits efficient uniform illumination for display systems and lithography [1], and recently microlens arrays have found application in condensing sunlight for photovoltaic conversion [2]. One of the notable early demonstrations of integral image synthesis was by Gabriel Lippmann, who used a lenslet array to construct a stereoscopic image with multiple subimages [3]. The plenoptic camera has leveraged the ray mapping of lenslet arrays to collect three-dimensional information for post-imaging focus adjustment. Ng and colleagues go beyond computational refocusing to correct aberrations in an adjoining optical system with light field plenoptic photography [4].

Massof and colleagues show a spherically tiled system of Fresnel lenses centered with respect to the eyeball [5]. The spherical symmetry of the tiled lenses is cited by Massof to increase the field of view while maintaining resolution and limiting distortion. In later work, Lanman and Luebke of Nvidia demonstrated the application of lenslet arrays to highly compact VR HMDs [6]. We claim that a combination of Lanman's and Massof's work, i.e., decreasing the size of the tiled lenses into lenslets tiled onto a curved surface, can not only further reduce the lenslet aberrations but also largely increase the field of view of a light field VR system. Assuming current progress in display technologies continues to decrease pixel size and increase display flexibility, curved displays with small pixels and appropriately compensated back-illumination will become increasingly available. Similarly, advances in the manufacturability of micro-optics will enable increased realization of richly parametrized optics such as specialized lenslet arrays. This leads the current study to analyze monolithic spherically curved lenslet arrays as shown in Figure 1. The back surface has the display coupled into the lenslet array, and the concentric front surface has the lenslet array. The inherent symmetry of the system allows for an arbitrarily large field of view with improved image quality and illumination. This is compared with the planar array in the design methods.

Figure 1.

Schematic rendering of a curved monolithic lenslet array. Light is emitted from the back spherical display surface (hidden) through the front spherical lenslet array. Both dominant surfaces are concentric with the center of the eye. Design of the system is chosen in the Cartesian YZ plane.




This section discusses the layout, assumptions, and basic parameters of the spherical light field VR system. From the direction of the eye, the system comprises a lenslet array on a spherical surface and a spherical display. Lenslets in the array have spherical shape, and the array and display are concentric about the rotational center of the eyeball. Lenslets are arranged in azimuth/altitude tiling along the spherical surface, in a pseudo-square fashion corresponding to the rectangular tiling of pixelated displays. The y-axis is defined as the reference pole for altitude and azimuth coordinates. Because a 2D Cartesian display mapping onto spherical azimuth and altitude does not preserve equal spacing across a wide field, a manufactured system would have to consider tiling requirements more carefully. Tiling is a practical consideration specific to manufacturing details, so the design study commences assuming representation in the Cartesian YZ plane is sufficient. Similar to Lanman and Luebke, the design has a virtual image one meter away from the lenslet array, which implicitly constrains the distance between the display and the lenslet array given a lenslet focal length.

Distance between the lenslet array and the display (DO), eye relief of the system (DER), and radius of the eye (Reye) are defined on a YZ section of the system as shown in Figure 2. Since the lenslet array and the display are concentric about the center of the eye rather than the pupil, a user of the system cannot see the full rendered field of view at once. The design essentially creates a floating window within the rendered field of view, and the window appears to move with the rotation of the eyeball, always centered on the gazing direction of the eye. The floating window samples different parts of the full rendered field, as shown in Figure 3. With small lenslets and spherical symmetry, the highest resolution is always at the center of the gaze of the eye. This is discussed in the design methods.

Figure 2.

YZ plane view of the spherically symmetric light field VR system. The lenslet array on the spherical surface is placed in front of the eye, and a spherically symmetric display is behind the lenslet array. The system is concentric about the center of the eyeball, so rays far from the lenslet axis (red) are blocked by the pupil while rays close to the axis (green) contribute to the image, leaving a circular floating window of view. Content far off axis is seen by rotating the eyeball.


Figure 3.

Visualization of the floating windows for both eyes. The red dotted circles show the size of the floating window field of view shared by both eyes. The grey background is the binocular rendered field of view, containing green and blue object points. At left above, when the eye is focusing on the blue dot, the green dot is barely resolved because it appears on the edge of the floating window. Likewise, at right the gaze centers on the green dot and the blue dot is barely resolved.




The design strategy is rooted in the simultaneous improvement of two existing approaches: Lanman and Luebke's planar lenslet array light field display and Massof's sphere-tiled lenses. Both approaches to HMD design have merits, and the proposed design method leverages the merits of both. Understanding the spherical lenslet HMD design is the topic of the first section of the design method, and the second section outlines how the proposed method improves on the existing approaches.


Specifications of the system

In the design process of the spherical light field VR system, the angular/spatial resolution and the size of the eyebox are closely related to the size and effective focal length of the lenslets. In this section, a set of equations defines the relations between these parameters and explores their limitations.

The lenslet array design ensures that each lenslet corresponds to an independent fraction of the display (a subobject) behind it. In the concentric shell design, the lenslet and its corresponding subobject share the same solid angle assuming a perfect tiling condition, i.e., the subobjects have no overlap or gap between them on the display. The size of the subobject corresponding to each lenslet is given by

wO = wl (DO + DER + Reye) / (DER + Reye),

where wO is the full size of a perfectly tiled subobject and wl is the full width of a single lenslet. These parameters are shown in Figure 4 with half angular size of eyebox θ and projected angle of the subobject α. DO is the object distance, DER is the eye relief, and Reye is the radius of the eyeball as above. Consequently, the subobject pixel count (Np), a metric of spatial resolution, is

Np = wO / p,
where p is the pixel size. Notice that for a uniform lenslet array on a spherical surface, the subobject size remains the same at the center of gaze across the whole field of view.
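As a numerical sanity check, the first-order tiling geometry can be sketched in a few lines of Python. The function names are illustrative only, and the relations (subobject width scaling with shell radius from the eye center, Np = wO/p) are our first-order reading of the concentric geometry rather than code from this work.

```python
# First-order concentric-shell geometry for the spherical lenslet light field HMD.
# Assumption: the lenslet and its subobject subtend the same solid angle about
# the eye center, so their widths scale with their radii from that center.

def subobject_width(w_lens, d_obj, d_er, r_eye):
    """Full width wO (mm) of a perfectly tiled subobject behind one lenslet."""
    return w_lens * (d_obj + d_er + r_eye) / (d_er + r_eye)

def subobject_pixel_count(w_lens, d_obj, d_er, r_eye, pixel):
    """Pixels per side of one subobject, Np = wO / p."""
    return subobject_width(w_lens, d_obj, d_er, r_eye) / pixel

# Nominal values from the tradeoff discussion: 3 mm lenslet, 6 mm
# lenslet-to-display distance, 25 mm eye relief, 12 mm eye radius, 5 um pixels.
wO = subobject_width(3.0, 6.0, 25.0, 12.0)
Np = subobject_pixel_count(3.0, 6.0, 25.0, 12.0, 0.005)
print(f"wO = {wO:.3f} mm, Np = {Np:.0f} pixels per side")
```

With these nominal numbers the subobject is only slightly wider than the lenslet itself, which is why the perfect tiling condition wastes so few pixels between subobjects.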

Figure 4.

Designation of system parameters. A subobject of full width wO has half angle α when projected to the eyeball through a lenslet of width wl. The range over which the eye can move and still register an image through a particular lenslet is the eyebox, specified to first order by weyebox. The half angle of the floating window, θ, is the angular representation of the eyebox.


One of the most important specifications for HMDs is the angular resolution. In light field displays, the pixels are magnified because the display is placed close to the focal plane of the lenslet array. The pixel angular resolution on the subobject is given by

Δθp = p / DO ≈ p / f,

where f is the effective focal length of a lenslet.
The half angular size of a subobject is given by

α = arctan(wO / (2 DO)).
The eyebox of the system is defined as the range over which the eye can move while still seeing the central subobject. To first order, the projected side length of a subobject on the eye is an eyebox metric:

weyebox = wO (DER + Reye) / DO.
Since the system is concentric about the center of the eye, the eyebox has the same size for all subobjects. The calculation of projected side length assumes the eyebox is a plane mapped onto the eyeball, which is a convenient metric for quick analysis but is inconsistent with the spherical geometry. A more applicable angular metric for evaluating the eyebox is the angle through which the eyeball can rotate while still seeing a specific subobject. The half viewing range of the eyeball for a specific subobject gives

θ = arcsin(weyebox / (2 Reye)),
where θ is the half angular size of the floating window.
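The resolution and eyebox metrics above can be evaluated together in a short sketch. The closed forms below are our first-order reading of the concentric geometry (pixel angle ≈ p/DO, eyebox as the subobject projected through the lenslet onto the eye, window half angle from the eyebox chord on the eye sphere); the function names are ours, not this paper's.

```python
import math

# First-order resolution and eyebox metrics for the concentric lenslet design.
# All lengths in mm; angles returned in radians or degrees as noted.

def pixel_angular_resolution(pixel, d_obj):
    """Angular size of one magnified pixel (rad), ~ p / DO ~ p / f."""
    return pixel / d_obj

def eyebox_width(w_subobj, d_obj, d_er, r_eye):
    """Projected side length (mm) of a subobject on the eye through its lenslet."""
    return w_subobj * (d_er + r_eye) / d_obj

def floating_window_half_angle(w_box, r_eye):
    """Half angle (deg) the eye can rotate and still see a subobject; the
    eyebox chord is clamped to the eye diameter, so the angle caps at 90 deg."""
    return math.degrees(math.asin(min(w_box, 2 * r_eye) / (2 * r_eye)))

# Nominal design: 3 mm lenslet, DO = 6 mm, 25 mm eye relief, 12 mm eye radius.
wO = 3.0 * (6.0 + 25.0 + 12.0) / (25.0 + 12.0)   # subobject width, mm
dtheta = pixel_angular_resolution(0.005, 6.0)     # rad per 5 um pixel
box = eyebox_width(wO, 6.0, 25.0, 12.0)           # mm
theta = floating_window_half_angle(box, 12.0)     # deg
print(f"{math.degrees(dtheta) * 60:.1f} arcmin/pixel, "
      f"eyebox {box:.1f} mm, window half angle {theta:.1f} deg")
```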


Why a monolithic spherical lenslet array?

As multiple lenslets synthesize the light field, the display can sit close to the eye and tune multiple bundles of rays independently for a given field point. Synthesis of the light field relies heavily on coregistration of adjacent subobjects at the pixel scale. Planar arrays are currently more commercially available than spherical arrays, and this availability motivated the work of Lanman and Luebke. The first benefit of curving the lenslet array is more uniform illumination: most commercial display illumination is engineered to have a constant angular illumination distribution across the display, and placing such a display close to a uniform planar lenslet array reduces illumination at the edge of the field by an obliquity factor of the field angle.
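The obliquity penalty of the planar geometry can be illustrated numerically. The classic cos⁴ irradiance falloff is used here purely as an illustrative model (the exact factor depends on the panel's emission profile, which this paper does not specify); a concentric curved array keeps every lenslet on-axis to the gaze, so its factor stays near unity across the field.

```python
import math

# Illustrative obliquity falloff for a flat display behind a flat lenslet array.
# Assumption: the classic cos^4 irradiance law stands in for the real panel's
# angular emission profile.

def relative_illumination(field_deg):
    """Edge-of-field irradiance relative to the axial value, cos^4 model."""
    return math.cos(math.radians(field_deg)) ** 4

for angle in (0, 20, 40, 55):
    print(f"{angle:2d} deg field: {relative_illumination(angle):.2f} of axial")
```

At a 55 degree half field the flat geometry retains roughly a tenth of its axial illumination under this model, which is the uniformity argument for curving the array.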

Extreme field angles incident on a planar lenslet array experience more field aberrations. Furthermore, the field aberration varies from the center to the edge of the planar display. This aberration distribution is also fixed to the display; i.e., changing the gaze angle of the eye will not correct it. Because the concentric spherical lenslet array maintains the same lenslet-to-display distance (DO) across the field of the display, the field aberration distribution is minimal wherever the eye gazes, since the central gaze of the eye is always normal to the spherical surface. Placing the eye at the center of a planar lenslet array also creates pixel mapping distortion, which is minimized by the symmetry of the spherical lenslet array design.

The parameters analyzed above are all specified in air, so when the analysis is extended to the real monolithic system, the focal length and form factor scale with the refractive index of the lens material, making the system more compact. The spherical array must remain concentric with the display during use, and making the design monolithic better maintains the design dimensions.



The discussion in this section assumes an eye relief of 25 mm, an eye radius of 12 mm, and a pixel size of 5 μm. The field of view can be extended arbitrarily to the limit of manufacturing constraints. The plots in Figure 5 reveal tradeoffs to be considered during the design process.

Figure 5.

System tradeoff plots. System parameters: 25 mm eye relief, 12 mm eye radius, 5 μm pixel size. The plots display pixel angular resolution (a), subobject pixel count (b), form factor (c), projected eyebox (d), angular size of the floating window (e), and diffraction-limited spot size (f), as functions of focal length and lenslet size (diameter). Plots are cut off where not defined or where limited by fundamental factors. The diffraction limit is constrained to less than one pixel per diffraction-spot diameter at 587 nm. The eyebox cannot exceed the size of the eye. Balancing form factor against angular resolution constrains the lenslet focal length, and balancing the eyebox against the diffraction limit constrains the speed of the system. The optimal speed is between f/2 and f/4.


Figure 5(a) shows that the pixel angular resolution is a function of lenslet focal length only, and a longer focal length provides better angular resolution. Figure 5(b) shows that, for the perfect tiling condition, larger lenslets have higher pixel counts and better spatial resolution in each subobject. Figure 5(c) shows that the form factor is driven directly by the focal length of the lenslet array. Figures 5(d) and (e) show that both the projected side length of the eyebox and the angular size of the floating window have a quasi-linear relation with the f/number of the system, and a faster system has a larger eyebox. The upper limit of the eyebox size is the size of the eyeball. The f/number is limited by diffraction, as shown in Figure 5(f), where a single pixel per diffraction-spot diameter is assumed.

The tradeoff plots show that spherical lenslet array systems with long focal lengths and large lenslet sizes can provide better angular and spatial resolution with a larger eyebox. Taking this argument to the extreme, however, turns the system into a simple rotationally symmetric magnifier with a large form factor and a significant amount of field aberration. The focal length of the system is restricted by the thickness of the system and the amount of field aberration in the system. The size and speed of the lenslets are constrained by the eyebox size and the diffraction limit. Lenslets faster than f/2 do not improve the eyebox, and slower than f/4 the diffraction-limited spot is too large. In the current configuration, an optimal design has a focal length of about 6 mm and a lenslet size of about 3 mm.
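The f/4 side of the speed window can be checked directly. Assuming the standard Airy-disk diameter of about 2.44 λ (f/#) and the one-pixel-per-spot criterion from Figure 5(f), the maximum usable f-number at 587 nm with 5 μm pixels falls just under f/4:

```python
# Diffraction bound on lenslet speed. The Airy-disk diameter is approximately
# 2.44 * lambda * (f/#); requiring it to stay under one pixel (the Figure 5(f)
# criterion) caps the usable f-number of the lenslets.

WAVELENGTH_MM = 587e-6   # 587 nm, expressed in mm
PIXEL_MM = 0.005         # 5 um pixel, in mm

max_f_number = PIXEL_MM / (2.44 * WAVELENGTH_MM)
print(f"max f/# before the spot exceeds one pixel: f/{max_f_number:.2f}")

# Nominal design point from the tradeoff study: f = 6 mm, 3 mm lenslet -> f/2,
# safely inside the f/2..f/4 window identified in the text.
f_number = 6.0 / 3.0
print(f"nominal design speed: f/{f_number:.1f}")
```

The result, roughly f/3.5, is consistent with the text's statement that lenslets slower than f/4 produce too large a diffraction-limited spot.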

An image simulation of the spherical light field system is performed to show the field of view experienced by the user. The tradeoff analysis shows that a lenslet array bounded between f/2 and f/4 has a limited floating window size, so an 80 degree floating window is simulated on a 110 degree static field of view. Figure 6(a) shows the source picture of a cameraman to be used in the simulation. The image is separated into overlapping subsections of the field and tiled on a spherical display without gaps. Figure 6(b) shows a projection of the tiled subobjects on a spherical display. The simulated field of view of the system is shown in Figure 6(c). The central vision, corresponding to content within the floating window, suffers minor resolution loss from the pixel size limitation. The subobjects outside of the floating viewing window remain blurry and highly vignetted until the user looks in that direction, shifting the floating window onto that section of the field.
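The tiling step of such a simulation can be sketched as below. The tile size, stride, and helper name are illustrative stand-ins, not the paper's actual rendering parameters; the point is only that adjacent subobjects share an overlapping band of the source field.

```python
import numpy as np

# Sketch of splitting a source image into overlapping subobjects, as in the
# Figure 6 simulation. A stride smaller than the tile size produces the
# overlap between adjacent subsections of the field.

def tile_overlapping(img, tile, stride):
    """Return a list of (row, col, subimage) overlapping subsections."""
    tiles = []
    for r in range(0, img.shape[0] - tile + 1, stride):
        for c in range(0, img.shape[1] - tile + 1, stride):
            tiles.append((r, c, img[r:r + tile, c:c + tile]))
    return tiles

img = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in source image
subs = tile_overlapping(img, tile=16, stride=12)        # 16 px tiles, 4 px overlap
print(f"{len(subs)} subobjects, each {subs[0][2].shape}")
```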

Figure 6.

Image simulation and tiling of subobjects. Overlapping subsections of the source picture in (a) are tiled on a spherical surface in (b), and an 80 degree floating window on a 110 degree static field of view is simulated in (c). The clear region in the simulated field of view is composed of the subobjects in the floating viewing window. The annular blurred region in the simulation is designed to match the lower-resolution periphery of the eye.




A near-eye light field HMD using a planar lenslet array is considered one of the most promising technologies in the virtual reality industry. Compared with current designs using simple magnifiers, the lenslet array can achieve a compact form factor while efficiently expanding the field of view. However, the planar lenslet array geometry is incompatible with the rotationally symmetric geometry of the eye and limits further field of view expansion. The present design demonstrates that a spherical lenslet array with a spherical display in a concentric configuration can achieve a wide field of view, high spatial resolution, a large eyebox, and a relatively compact form factor. The geometry of the system ensures high resolution at the center of the field of view wherever the eye tracks.





T. R. M. Sales, “Efficient and uniform illumination with microlens-based band-limited diffusers,” Photonics Spectra, April 2010.


E. Fennig, G. Schmidt, and D. T. Moore, “Design of planar light guide concentrators for building integrated photovoltaics,” in Optical Design and Fabrication 2017 (Freeform, IODC, OFT), OSA Technical Digest (Optical Society of America, 2017), paper ITh1A.4.


G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. Phys. Theor. Appl., vol. 7, no. 1, pp. 821–825, 1908. doi: 10.1051/jphystap:019080070082100


R. Ng and P. Hanrahan, “Digital correction of lens aberrations in light field photography,” Proc. SPIE 6342, International Optical Design Conference 2006, 63421E (17 July 2006).


R. W. Massof, L. G. Brown, M. D. Shapiro, G. D. Barnett, F. H. Baker, and F. Kurosawa, “37.1: Invited Paper: Full-Field High-Resolution Binocular HMD,” SID Symposium Digest of Technical Papers, vol. 34, no. 1, pp. 1145–1147, 2003. doi: 10.1889/1.1832490


D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Transactions on Graphics, vol. 32, no. 6, 2013 (Proc. ACM SIGGRAPH Asia 2013).

© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
Tianyi Yang, Nicholas S. Kochan, Samuel J. Steven, Greg Schmidt, Julie L. Bentley, and Duncan T. Moore, "Design of a spatially multiplexed light field display on curved surfaces for VR HMD applications," Proc. SPIE 10676, Digital Optics for Immersive Displays, 106760Y (21 May 2018); doi: 10.1117/12.2315373