We present an underexplored variation of the classical optical freeform prism design that incorporates three optical surfaces. This optical architecture can make use of one, two, or three freeform surfaces. Our initial prototype uses a single freeform surface along with a spherical and a flat surface to reduce manufacturing complexity. There are two key contributions in this paper that, to our knowledge, have not been achieved previously: 1) the design of a thin, curved freeform lightguide with a 4 mm to 1 mm gradient thickness (nearly 4x thinner than the original freeform prism), and 2) lightguide fabrication utilizing ophthalmic machines. This optical design makes combined use of total internal reflections and partial reflections. The advantages of this architecture include curved optical surfaces that eliminate the optical collimator required in flat lightguides, a relatively large eyebox, and a manufacturing approach that reuses the standard ophthalmic process for fabricating the eye-side and world-side optical surfaces. The limitations of the design are low efficiency (~5%), multiple image artifacts, and lack of optical see-through.
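The interplay of total internal reflection and partial reflection in such a lightguide is governed by the TIR critical angle of the guide material. As a minimal sketch, the snippet below computes that angle for an assumed index of n = 1.53 (the abstract does not state the actual material):

```python
import math

def critical_angle_deg(n_guide: float, n_outside: float = 1.0) -> float:
    """Critical angle for total internal reflection at a guide/outside interface.
    Rays hitting the surface beyond this angle are totally internally reflected;
    shallower rays are only partially reflected (and partially transmitted)."""
    return math.degrees(math.asin(n_outside / n_guide))

# Hypothetical lightguide index for illustration; ~40.8 degrees at n = 1.53.
print(round(critical_angle_deg(1.53), 1))
```

Rays guided at angles steeper than this bounce losslessly along the guide, while extraction toward the eye relies on the partial reflections, which is consistent with the low (~5%) efficiency noted above.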
The key contribution is the optical design of a 2.75 g, 2.5× magnification visual loupe developed within the Defense Advanced Research Projects Agency Manufacturable Gradient Index (M-GRIN) phase 2 program. We present a visual loupe (i.e., a Galilean telescope) constructed from a positive optical power objective lens that makes use of a spherical gradient index profile and a negative optical power eye lens that collimates the light for visual use. The optical materials and the preform thickness are judiciously chosen to be manufacturable within the spherical gradient index design rules available today. A comparison of the M-GRIN design to an all-plastic homogeneous baseline design shows that the M-GRIN design reduces the weight from 4.15 to 2.75 g while maintaining optical performance equivalent to the baseline.
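The first-order layout of such a loupe follows the standard Galilean relations: for an afocal system, magnification M = -f_obj/f_eye, and the negative eye lens shortens the track to f_obj + f_eye. A minimal sketch with hypothetical focal lengths (the actual M-GRIN prescription is not given here):

```python
def galilean_layout(magnification: float, f_objective_mm: float):
    """First-order afocal Galilean telescope.
    M = -f_obj / f_eye, so f_eye = -f_obj / M (negative eye lens),
    and the lens separation (track) is f_obj + f_eye.
    Focal lengths here are illustrative, not the paper's values."""
    f_eye = -f_objective_mm / magnification
    track = f_objective_mm + f_eye
    return f_eye, track

# Hypothetical 50 mm objective at 2.5x: a -20 mm eye lens, 30 mm track.
f_eye, track = galilean_layout(2.5, 50.0)
print(f_eye, track)
```

The negative track contribution of the eye lens is what makes the Galilean form attractive for a compact, lightweight loupe.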
This paper presents the use of radial basis functions (RBF) for describing freeform optical surfaces. The RBF approximation framework is described, along with preliminary optical design experiences.
Compact and lightweight optical designs achieving visually acceptable image quality, field of view, eye clearance, and eyebox diameter, while operating across the visible spectrum, are key to the success of next-generation head-worn displays. There have been several approaches to the design of head-worn displays, including holographic optical elements and laser scanner systems. For example, Minolta has pursued a monochromatic (green) display with a 3 mm exit pupil, realized by a 3.4 mm thick lightguide with a holographic optical element, to achieve an eyeglass form-factor head-worn display. Our approach in this paper is to investigate the field of view, eyebox diameter, and performance limit of a single-element magnifier composed of freeform surfaces. The surface shape is a major variable in such a constrained system with respect to the optimization degrees of freedom.
Typical optical surfaces are functions mapping vectors in ℝ² to real numbers representing the sag of the surface. A majority of optical designs to date have relied on conic sections augmented with polynomials as the functions of choice. The choice of conic sections is easily justified, since conic sections are stigmatic surfaces under certain imaging geometries. The choice of polynomials is understood from an image-quality-analysis point of view, since the wavefront aberration function is typically expanded in terms of polynomials. Therefore, a polynomial surface description may link a designer's understanding of the wavefront aberrations and the surface shape. However, from the point of view of shape optimization and representation, polynomial shape descriptions can be challenged. In Section 2, we briefly describe the
radial basis function approach to represent freeform optical surfaces. In Section 3, we apply the RBF to design a single
element see-through compatible head-worn display.
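As a sketch of the underlying idea, a freeform sag can be written as a weighted sum of radial basis functions centered at nodes distributed across the aperture. The Gaussian kernel, shape parameter, and toy coefficients below are illustrative assumptions, not the paper's actual basis or prescription:

```python
import numpy as np

def rbf_sag(x, y, centers, weights, shape_param):
    """Freeform sag as a weighted sum of Gaussian radial basis functions:
    z(x, y) = sum_i w_i * exp(-(eps * ||(x, y) - c_i||)^2).
    centers: (n, 2) array of node locations, weights: (n,) coefficients."""
    pts = np.stack([x, y], axis=-1)[..., None, :]   # (..., 1, 2)
    r2 = np.sum((pts - centers) ** 2, axis=-1)      # (..., n) squared distances
    return (weights * np.exp(-(shape_param ** 2) * r2)).sum(axis=-1)

# Toy example: two nodes on the aperture, evaluated at the origin.
centers = np.array([[0.0, 0.0], [1.0, 0.0]])
weights = np.array([0.5, -0.2])
z = rbf_sag(np.array([0.0]), np.array([0.0]), centers, weights, 1.0)
print(z)
```

Because each basis function has local influence around its node, adjusting one weight deforms the surface locally rather than globally, which is one motivation for preferring RBFs over multivariate polynomials in shape optimization.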
In this paper, we summarize our initial experiences in designing head-worn displays with freeform optical surfaces.
Typical optical surfaces implemented in raytrace codes today are functions mapping two dimensional vectors to real
numbers. The majority of optical designs to date have relied on conic sections and polynomials as the functions of choice.
The choice of conic sections is justified since conic sections are stigmatic surfaces under certain imaging geometries.
The choice of polynomials from the point of view of surface description can be challenged. The advantage of using
polynomials is that the wavefront aberration function is typically expanded in polynomials. Therefore, a polynomial
surface description may link a designer's understanding of wavefront aberrations and the surface description. The
limitations of using multivariate polynomials are described by a theorem due to Mairhuber and Curtis from
approximation theory. In our recent work, we proposed and applied radial basis functions to represent optical surfaces as
an alternative to multivariate polynomials. We compare polynomial descriptions to radial basis functions using the MTF criterion. The benefits of using radial basis functions for surface description are summarized in the context of
specific magnifier systems, i.e., head-worn displays. They include, for example, the performance increase measured by
the MTF, or the ability to increase the field of view or pupil size. Full-field displays are used for node placement within
the field of view for the dual-element head-worn display.
In this study, we take a data-driven approach to study the design efficiency of a variety of optical designs. Efficiency is defined as the number of resolvable spots across the image per lens element. A total of 3188 designs were selected from a
commercially available lens database. Each design was imported into a raytrace code, briefly optimized, and the number
of resolvable spots was computed. Examples of efficient designs within this dataset are shown. Four design efficiency
groupings are created and discussed separately: 1) all-spherical, monochromatic designs, 2) monochromatic designs with
some aspheres, 3) all-spherical, polychromatic designs, and 4) polychromatic designs with some aspheres. Zoom lens
systems were excluded from the dataset. The results of the analysis are intended to answer the question of "how many
elements does it take, as a minimum, to deliver a certain number of resolved spots?"
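The efficiency metric can be sketched in a few lines. The spot-counting below simply divides the image diameter by the spot diameter; the paper's exact counting procedure is not reproduced, and the numbers are hypothetical:

```python
def design_efficiency(image_half_height_mm: float,
                      spot_radius_um: float,
                      n_elements: int) -> float:
    """Resolvable spots across the full image per lens element.
    Spot count = image diameter / spot diameter (a simplifying assumption);
    efficiency = spots / number of elements, per the metric defined above."""
    image_diameter_um = 2 * image_half_height_mm * 1000
    spot_diameter_um = 2 * spot_radius_um
    spots = image_diameter_um / spot_diameter_um
    return spots / n_elements

# Hypothetical design: 10 mm image half-height, 5 um spot radius, 4 elements.
print(design_efficiency(10.0, 5.0, 4))
```

Under this metric, a design that delivers the same spot count with fewer elements is the more efficient one, which is the comparison the four groupings above enable.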
The emergence of several trends, including the increased availability of wireless networks, miniaturization of electronics
and sensing technologies, and novel input and output devices, is creating a demand for integrated, full-time displays for
use across a wide range of applications, including collaborative environments. In this paper, we present and discuss
emerging visualization methods we are developing particularly as they relate to deployable displays and displays worn
on the body to support mobile users, as well as optical imaging technology that may be coupled to 3D visualization in
the context of medical training and guided surgery.
We present the first-order design details and preliminary lens design and performance analysis of a compact optical system that can achieve mutual occlusion. Mutual occlusion is the ability of real objects to occlude virtual objects and of virtual objects to occlude real objects. Mutual occlusion is a desirable attribute for a certain class of augmented reality applications where realistic overlays based on the occlusion depth cue are important. Compactness is achieved through the use of polarization optics. The first-order layout of the system is similar to that of a Keplerian telescope operating at finite conjugates. Additionally, we require the image to lie in the plane of the object with unit magnification. We show that the same lens can be used as both the objective and the eyepiece. The system is capable of achieving very close to zero distortion.
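A simple thin-lens way to picture the unit-magnification, symmetric layout is a 4f relay built from two identical lenses. This is an illustrative simplification under idealized thin-lens assumptions, not the paper's actual prescription:

```python
def unit_mag_relay(f_mm: float):
    """Thin-lens sketch of a symmetric unit-magnification relay (4f layout):
    object one focal length before lens 1, lenses 2f apart, image one focal
    length after lens 2, transverse magnification -1. The symmetry is what
    lets a single lens design serve as both objective and eyepiece."""
    object_dist = f_mm        # object to first lens
    separation = 2 * f_mm     # lens-to-lens spacing
    image_dist = f_mm         # second lens to image
    track = object_dist + separation + image_dist
    return track, -1.0

# Hypothetical 15 mm focal length: 60 mm total track, magnification -1.
track, m = unit_mag_relay(15.0)
print(track, m)
```

The fully symmetric layout also explains the near-zero distortion: odd aberrations introduced by the first half of the system are cancelled by the second half.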
Head-mounted displays present a relatively mature option for augmenting the visual field of a potentially mobile user. Ideally, one would wish for such capability to exist without the need to wear any view-aiding device. However, unless a display system can be created in space, anywhere and anytime, a simple solution is to wear the display. We review in this paper the fundamentals of head-mounted displays, including image sources and HMD optical designs. We further point out promising research directions that will play a key role toward the seamless integration of virtually superimposed computer graphics objects with the tangible world around us.