Nodal Aberration Theory, developed by Kevin Thompson and Roland Shack, predicts several important aberration phenomena but remains poorly understood. To demystify it, we describe its origins and fundamental concepts.
The design of tilted, decentered, and non-rotationally symmetric or freeform optical systems has become an important part of optical design. We explore desensitization of traditional and freeform optical designs and compare their effectiveness.
Sensitivity to tolerances is a well-known problem in optical design. In many cases, multiple designs having different tolerance sensitivities will solve the optical design problem. Often, the solution with the best “as-designed” performance is not the solution with the best “as-built” performance. In the end, it is not the as-designed quality of the optics that matters; it is only the as-built quality that matters. As we demonstrate in this paper, typical merit functions used in optimization (e.g., RMS spot diameter or RMS wavefront variance of the pre-tolerance system) are often poorly correlated with actual, as-built image quality; in many cases the correlation is extremely poor. The ultimate success of a merit function is determined by the extent to which it correlates with as-built performance. One known strategy for improving this correlation is to add a term to the merit function that penalizes design forms that are particularly sensitive. Such a strategy is of particular importance during a global optimization design phase, in which the optimizer will generate many different design forms, some of which may differ significantly from the starting design, both in appearance and in tolerance sensitivity. In this paper we examine the addition of a “sensitivity” parameter to the merit function. We discuss the selection of the weighting factor for the sensitivity parameter, as well as the correlation of the merit function (both with and without the sensitivity parameter) to as-built performance.
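As a schematic illustration of this strategy (a minimal, self-contained sketch, not the merit function used in the paper: the toy merit function, the perturbation model, and the weight are all placeholder assumptions), a sensitivity-penalized merit function can be organized as follows:

```python
# Toy sketch of a sensitivity-penalized merit function (not the paper's
# implementation): a "design" is abstracted as a parameter vector x,
# merit(x) stands in for an image-quality metric such as RMS spot
# diameter, and tolerances are modeled as random parameter perturbations.
import numpy as np

def merit(x):
    # Placeholder image-quality merit function (smaller is better).
    return float(np.sum(x**2) + 0.1 * np.sum(np.sin(5.0 * x))**2)

def sensitivity(x, tol, n_trials=50, seed=0):
    # Mean merit degradation over tolerance-level perturbations.
    rng = np.random.default_rng(seed)
    nominal = merit(x)
    degradation = [merit(x + rng.normal(0.0, tol, size=x.shape)) - nominal
                   for _ in range(n_trials)]
    return float(np.mean(degradation))

def combined_merit(x, tol, w=1.0):
    # w is the weighting factor whose selection the paper discusses.
    return merit(x) + w * sensitivity(x, tol)

x = np.array([0.3, -0.2, 0.1])
print(combined_merit(x, tol=0.05, w=1.0))
```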
Setting a tolerance for the slope errors of an optical surface (e.g., surface form errors of the “mid-spatial-frequencies”) requires some knowledge of how those surface errors affect the final image of the system. While excellent tools exist for simulating those effects on a surface-by-surface basis, considerable insight may be gained by examining, for each surface, a simple sensitivity parameter that relates the slope error on the surface to the ray displacement at the final image plane. Snell’s law gives a relationship between the slope errors of a surface and the angular deviations of the rays emerging from the surface. For a singlet or thin doublet acting by itself, these angular deviations are related to ray deviations at the image plane by the focal length of the lens. However, for optical surfaces inside an optical system having a substantial axial extent, the focal length of the system is not the correct multiplier, as the sensitivity is influenced by the optical surfaces that follow. In this paper, a simple expression is derived that relates the slope errors at an arbitrary optical surface to the ray deviation at the image plane. This expression is verified by comparison to a real-ray perturbation analysis. The sensitivity parameter relates the RMS slope errors to the RMS spot radius, and also relates the peak slope error to the 100% spot radius, and may be used to create an RSS error budget for slope error. Applications to various types of systems are shown and discussed.
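The following paraxial toy model illustrates the underlying point, though it is not the expression derived in the paper; the two-lens system and all values are assumptions. By linearity of paraxial optics, the image displacement produced by a small angular deviation injected at a given axial location equals the deviation times the image height of an auxiliary unit-angle ray traced from that location, so the effective lever arm of an internal surface depends on the optics that follow:

```python
# Paraxial toy model (an illustration of the principle, not the expression
# derived in this paper): two thin lenses in air, f = +100 mm at z = 0 and
# f = -50 mm at z = 60 mm (assumed values). For an object at infinity the
# image plane sits 200 mm behind the second lens, and the system focal
# length is 500 mm.
LENSES = [(0.0, 100.0), (60.0, -50.0)]   # (z position, focal length) in mm
Z_IMAGE = 260.0                          # paraxial image plane location [mm]

def lever_arm(z0):
    """Image height of a unit-angle ray launched at height 0 from z0; by
    paraxial linearity, this is the image displacement per radian of
    angular deviation injected at z0."""
    y, u, z = 0.0, 1.0, z0
    for z_lens, f in LENSES:
        if z_lens > z0:                  # only optics after the injection point
            y += u * (z_lens - z)        # free-space transfer
            u -= y / f                   # thin-lens refraction
            z = z_lens
    return y + u * (Z_IMAGE - z)         # transfer to the image plane

# Just after the first lens the lever arm equals the 500 mm system focal
# length (a height-0 ray passes the first lens undeviated, so it acts as
# an object-space ray); just after the second lens it is only 200 mm --
# the trailing optics, not the focal length, set the sensitivity.
print(lever_arm(0.0), lever_arm(60.0))   # -> 500.0 200.0
```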
The selection of compensators for a cam-driven zoom lens is more complex than for a prime lens, because tolerances cause the back focal distance to shift by different amounts in different zoom positions, i.e., the system loses parfocality. Adjustment of the back focal distance can bring one, but not all, of the zoom positions back into focus. Furthermore, compensator selection is more complex because it is usually desirable to avoid adjustments within the moving groups. In this paper, we examine the effects of tolerances and compensators on a photographic-format zoom lens. We begin by assigning reasonable tolerances to all surfaces, materials, and groups, and then examine in detail how these tolerances affect the image quality. We determine the relative amount of degradation caused by transverse tolerances (decenters and tilts) compared to rotationally symmetric tolerances (power, index, thicknesses and spacings). For the rotationally symmetric tolerances, we examine the efficacy of shifting the detector, shifting the fixed groups, and respacing elements within the fixed groups. Similarly, for the transverse tolerances, we examine the efficacy of implementing decenter compensators within the fixed groups.
Passive athermalization requires that the materials (both optical and mechanical) and optical powers be carefully selected in order for the image to stay adequately in focus at the plane of the detector as the various materials change in physical dimension and refractive index. For a large operational temperature range, the accuracy of the thermo-optical coefficients (the dn/dT coefficients and the coefficients of thermal expansion) can limit the performance of the final system. Based on an example lens designed to be passively athermalized over a 200°C temperature range, and using a Monte Carlo analysis technique, we examine the accuracy to which the expansion coefficients and dn/dT coefficients of the system must be known.
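A minimal sketch of this kind of Monte Carlo analysis, using a single-thin-lens-in-a-housing model rather than the example lens of the paper (all coefficient values and the 5% uncertainty are assumptions):

```python
# Toy Monte Carlo sketch (assumed values, not the paper's example lens):
# a single thin lens of focal length f in a housing, nominally athermal
# because beta = (dn/dT)/(n-1) - a_glass exactly cancels the housing CTE.
# The dn/dT and both CTEs carry an assumed 5% (1-sigma) uncertainty.
import numpy as np

rng = np.random.default_rng(1)
N       = 100_000
f       = 100.0e-3     # focal length [m]
n       = 1.5          # refractive index
dndt    = -2.0e-6      # dn/dT [1/K]
a_glass = 6.0e-6       # glass CTE [1/K]
a_house = 10.0e-6      # housing CTE [1/K]; beta = -10e-6 = -a_house (athermal)
dT      = 200.0        # temperature excursion [K]
u       = 0.05         # fractional 1-sigma uncertainty on each coefficient

dndt_s = dndt    * (1 + u * rng.standard_normal(N))
a_g_s  = a_glass * (1 + u * rng.standard_normal(N))
a_h_s  = a_house * (1 + u * rng.standard_normal(N))

beta = dndt_s / (n - 1.0) - a_g_s      # thermal power coefficient [1/K]
defocus = -f * (beta + a_h_s) * dT     # focus shift relative to detector [m]

# The nominal defocus is zero by design; the spread shows how coefficient
# uncertainty alone limits athermal performance over the 200-degree range.
print(f"mean defocus  = {np.mean(defocus)*1e6:7.2f} um")
print(f"sigma defocus = {np.std(defocus)*1e6:7.2f} um")
```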
As many authors have documented, it is possible to correct secondary color without using special glasses if there are substantial separations between lenses or groups that are chromatically uncorrected. The trick is to use the separations to “induce” secondary color by allowing the rays of different colors to separate from each other before being refracted by the group that follows. This approach works, but the use of separated and uncorrected groups that correct each other raises the question of tolerance sensitivity, because misalignments between the groups cause imperfect correction of the aberrations. It is generally good practice to correct aberrations within groups, rather than allow the groups to “cross-correct” each other. On the other hand, the use of special glass types to control secondary color directly is often either discouraged for cost reasons or simply not allowed because of thermal shock sensitivity. Moreover, some optical systems (particularly projector applications) require extremely good secondary color correction – often to a small fraction of a pixel. The important question is how much secondary color can be induced before the increased tolerance sensitivity negates the advantage of the color correction. In this paper, we examine the as-designed and as-built performance of several sample systems that rely on separated groups for the correction of secondary color, and compare the performance to that of systems designed without regard to secondary color correction.
Previous papers have established the inadvisability of applying tolerances directly to power-series aspheric coefficients. The basic reason is that the individual terms are far from orthogonal.
Zernike surfaces and the new Forbes surface types have certain orthogonality properties over the circle described by the "normalization radius." However, at surfaces away from the stop, the optical beam is smaller than the surface, and the polynomials are not orthogonal over the area sampled by the beam.
In this paper, we investigate the breakdown of orthogonality as the surface moves away from the aperture stop, and the implications of this for tolerancing.
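As a quick numerical illustration of the effect (a sketch using two rotationally symmetric Zernike radial terms; the beam radii are arbitrary assumptions): terms that are orthogonal over the full normalization circle acquire a large normalized cross-term when integrated only over a smaller beam footprint.

```python
# Numerical sketch: orthogonality of the Zernike radial terms R20 and R40
# over a beam that underfills the normalization radius. The two terms are
# orthogonal over the unit disk but not over a sub-disk of radius b < 1.
import numpy as np

R20 = lambda r: 2*r**2 - 1              # Zernike defocus radial term
R40 = lambda r: 6*r**4 - 6*r**2 + 1     # Zernike spherical radial term

def normalized_cross_term(b, n=200_000):
    # Monte Carlo integral over a disk of radius b (uniform area sampling).
    rng = np.random.default_rng(0)
    r = b * np.sqrt(rng.random(n))
    f, g = R20(r), R40(r)
    return np.mean(f*g) / np.sqrt(np.mean(f*f) * np.mean(g*g))

# At b = 1.0 the result is ~0 (up to Monte Carlo noise); it grows rapidly
# as the beam shrinks relative to the normalization radius.
for b in (1.0, 0.8, 0.5):
    print(f"beam radius {b:.1f}: normalized <R20,R40> = {normalized_cross_term(b):+.3f}")
```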
We describe a tool that analyzes the characteristics of an existing lens, and then determines which surfaces should be made aspheric for optimal results. We apply this tool to several problems, and discuss the results.
Adding keystone distortion (in addition to anamorphism) to an off-axis asphere dramatically improves the ability of the surface to correct aberrations. Analogously, 1-theta and 3-theta terms are important when using Zernike surfaces.
Although many designs are evaluated in the design stage by examination of their MTFs, three-bar resolution is often used for the determination of resolution in practice. In certain applications, the measured three-bar resolution differs greatly from the resolution obtained by intersecting the MTF curve with a threshold curve. In this presentation, we examine the conditions under which this can occur.
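For reference, the threshold-crossing estimate works as in the sketch below, which assumes a diffraction-limited MTF and a constant 10% threshold; real threshold (AIM) curves are frequency dependent and system specific.

```python
# Sketch of the MTF/threshold-crossing estimate of limiting resolution,
# using the diffraction-limited MTF of an aberration-free circular pupil
# and an assumed constant detection threshold of 0.10.
import numpy as np
from scipy.optimize import brentq

wavelength = 0.55e-3                 # mm (green light, assumed)
fnum = 4.0                           # working f-number (assumed)
nu_c = 1.0 / (wavelength * fnum)     # incoherent cutoff frequency [cy/mm]

def mtf(nu):
    # Diffraction-limited MTF of a circular pupil.
    s = np.clip(nu / nu_c, 0.0, 1.0)
    return (2.0/np.pi) * (np.arccos(s) - s*np.sqrt(1.0 - s*s))

threshold = 0.10
nu_limit = brentq(lambda nu: mtf(nu) - threshold, 1.0, nu_c)
print(f"cutoff = {nu_c:.0f} cy/mm, limiting resolution ~ {nu_limit:.0f} cy/mm")
```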
In this study, we take a data-driven approach to examining the design efficiency of a variety of optical designs. Efficiency is defined to be the number of resolvable spots across the image per lens element. 3188 designs were selected from a commercially available lens database. Each design was imported into a raytrace code, briefly optimized, and the number of resolvable spots was computed. Examples of efficient designs within this dataset are shown. Four design efficiency groupings are created and discussed separately: 1) all-spherical, monochromatic designs; 2) monochromatic designs with some aspheres; 3) all-spherical, polychromatic designs; and 4) polychromatic designs with some aspheres. Zoom lens systems were excluded from the dataset. The results of the analysis are intended to answer the question of "how many elements does it take, as a minimum, to deliver a certain number of resolved spots?"
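As a back-of-envelope illustration of the metric (the example values below are assumptions, not data from the study):

```python
# Back-of-envelope sketch of the "resolvable spots per element" metric
# with assumed example values (not figures from the 3188-lens dataset).
image_diameter_mm = 43.3      # image-circle diameter (assumed example)
spot_diameter_um  = 10.0      # field-averaged RMS spot diameter (assumed)
n_elements        = 6         # element count (assumed)

spots_across_image = image_diameter_mm * 1000.0 / spot_diameter_um
efficiency = spots_across_image / n_elements

print(f"resolvable spots across image: {spots_across_image:.0f}")
print(f"efficiency: {efficiency:.0f} spots per element")
```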
Today, most optical surfaces are assigned tolerances on power and irregularity, as well as on surface defects (scratch and dig), but usually not on peak surface slope error. This situation reflects concern for the types of error that typically occur with the classical, grind-and-polish method of fabricating lenses. Sometimes, RMS tolerance types are used to control the difference between a surface and the intended, ideal surface.
With the proliferation of new fabrication methods, new types of surface error - or, at least, types of surface error that were not previously prevalent - are increasing in importance. In particular, this is true for processes such as diamond turning and computer-controlled, local polishing, both of which are used for the fabrication of aspheric surfaces and aspheric mold inserts.
In this paper, we examine the use of "peak slope error" as a criterion for specifying optical surfaces. In the first part of the paper, we look into cases in which the traditional tolerance types for form error are insufficient, and examine when and where surface slope errors (as opposed to surface height errors) are important.
In the second part of the paper, we look at how tolerances for slope error can be calculated.
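One simple way such a calculation can be organized is an RSS budget built on per-surface sensitivities relating slope error to image-plane ray displacement; the sketch below uses placeholder numbers and is an illustration of the bookkeeping, not a prescription.

```python
# Sketch of an RSS error budget for surface slope error (placeholder
# numbers). S[i] is a per-surface sensitivity [um of image-plane ray
# displacement per urad of slope error]; sigma[i] is the candidate RMS
# slope tolerance [urad] on surface i.
import math

S     = [2.0, 0.8, 1.5, 0.4]    # assumed per-surface sensitivities [um/urad]
sigma = [0.5, 1.0, 0.5, 2.0]    # candidate RMS slope tolerances [urad]

rms_contrib = [s * t for s, t in zip(S, sigma)]       # per-surface [um]
total_rms = math.sqrt(sum(c*c for c in rms_contrib))  # RSS combination

for i, c in enumerate(rms_contrib):
    print(f"surface {i+1}: {c:.2f} um RMS")
print(f"RSS total : {total_rms:.2f} um RMS spot-radius contribution")
```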
We describe a method of using Global Synthesis® for finding tolerance-insensitive design forms. By simultaneously optimizing several design configurations that are nominally identical but differ by tolerance-level perturbations, one is essentially requesting that the optimizer find solutions that are insensitive to the types of perturbations present. Global Synthesis is a useful tool for examining a wide range of design forms; when combined with a merit function that prefers less sensitive solutions, it is particularly useful in exploring the space of tolerance-insensitive design forms.
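The following toy sketch conveys the multi-configuration idea in the abstract (it is not the Global Synthesis implementation; the one-parameter merit landscape and the tolerance level are assumptions): averaging the merit over perturbed copies of a nominally identical design steers the optimizer away from sharp, sensitive minima toward flat, insensitive ones.

```python
# Toy sketch of multi-configuration desensitization (not the Global
# Synthesis implementation): the "design" is one parameter x, with a deep
# but very narrow (sensitive) merit minimum near x = +1 and a slightly
# shallower, broad (insensitive) minimum near x = -1.
import numpy as np

def merit(x):
    return 1.0 - np.exp(-((x - 1.0) / 0.05)**2) - 0.8*np.exp(-((x + 1.0) / 1.0)**2)

# One fixed set of tolerance-level perturbations shared by all evaluations
# (common random numbers keep the robust merit smooth in x).
PERTURB = np.random.default_rng(0).normal(0.0, 0.2, size=256)

def robust_merit(x):
    # Average merit over nominally identical, perturbed configurations.
    return float(np.mean(merit(x + PERTURB)))

xs = np.linspace(-3.0, 3.0, 4001)
x_nominal = xs[np.argmin(merit(xs))]
x_robust = xs[np.argmin([robust_merit(x) for x in xs])]

print(f"best nominal merit at x = {x_nominal:+.2f} (narrow, sensitive minimum)")
print(f"best robust merit  at x = {x_robust:+.2f} (broad, insensitive minimum)")
```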
Because of the index change between water and air, dive masks with flat interfaces magnify by a factor of 1.34, and the field of view (FOV) in water is restricted to about 60 degrees. In addition, the image suffers significant amounts of lateral color and distortion. For technical diving applications (e.g., underwater welding), these attributes reduce situational awareness, lead to poor hand-eye coordination, and are highly undesirable. This paper describes the design issues and design solution of a unity-magnification dive mask covering a full FOV of 140 degrees.
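The flat-port numbers follow from Snell's law, as the sketch below checks (assuming n_water = 1.34 and a simple flat interface with the eye in air):

```python
# Snell's-law check of the flat dive-mask behavior (assumed n_water = 1.34,
# flat port, eye in air): paraxially the angular magnification equals
# n_water, and an in-air viewing angle theta_a sees the in-water angle
# theta_w satisfying sin(theta_a) = n_water * sin(theta_w).
import math

n_water = 1.34

# Paraxial angular magnification of a flat water/air interface.
print(f"paraxial magnification ~ {n_water:.2f}x")

# In-water half angle seen at a given in-air half angle.
for theta_a_deg in (15.0, 30.0, 45.0):
    theta_w = math.degrees(math.asin(math.sin(math.radians(theta_a_deg)) / n_water))
    print(f"in-air half angle {theta_a_deg:4.1f} deg -> in-water {theta_w:4.1f} deg")

# Hard limit: theta_a = 90 deg maps to the critical angle in water; the
# usable underwater field is smaller in practice because of aberrations.
theta_c = math.degrees(math.asin(1.0 / n_water))
print(f"in-water half-angle hard limit: {theta_c:.1f} deg (full cone {2*theta_c:.0f} deg)")
```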
High-fidelity visual display capability has long been a critical element in successfully training war-fighters. This paper presents the results of a project to develop an affordable, ergonomically designed, augmented-reality Advanced Helmet Mounted Display (AHMD) system. The AHMD blends computer-generated data (symbology, synthetic imagery, enhanced imagery) with the actual and simulated visible environment. Critical requirements included the ability to be used on the user’s own unmodified helmet; rapid self-fitting and alignment; a wide (100° x 50°) field of view (FOV); low mass and a balanced center of mass (CM); and maximized see-through (>60%) with image quality (>0.5 cy/mr resolution and >30:1 ANSI contrast ratio) that supports training. This paper outlines the design (incorporating a number of innovative concepts), manufacture, and performance of the resulting AHMD. This innovation in visual display technology can be used to support deployable reconfigurable training solutions, traditional simulation requirements, UAV augmented reality, air traffic control, and Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) applications.
Tilted component systems are known to be characterized by aberrations with unusual field dependences, such as decentered coma and binodal astigmatism. Often, a computer optimization of a tilted-component system will yield a solution having astigmatism that grows approximately linearly from a value of zero at the field center, i.e., one of the astigmatic nodes has been placed at the center of the field. For systems with substantial field angles, this linear dependence is as detrimental to image quality as ordinary coma, but it is often difficult to avoid this form of solution. In this paper, the origin of binodal astigmatism in a multi-element system from the contributions of individual surfaces is explained in an intuitive manner, as a logical extension of the 'ordinary' aberrations known to all optical designers. The insight provided by this graphical model allows an understanding of why the astigmatism of any given system behaves the way it does, and of how the remaining astigmatism can be corrected by a final, rotationally symmetric subsystem. Examples of tilted component systems are given in which astigmatism and coma have been reduced to 'ordinary' forms.
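As a numerical illustration of the nodal behavior (a sketch based on the node-product form of binodal astigmatism from nodal aberration theory, with field vectors written as complex numbers; the node positions and scale are assumptions): the astigmatism magnitude is proportional to the product of the distances from the field point to the two nodes, so placing one node at the field center produces the approximately linear growth described above.

```python
# Sketch of binodal astigmatism using the node-product magnitude from
# nodal aberration theory: |W_ast| ~ (W222/2) * |H - a1| * |H - a2|, with
# the field vector H and node positions a1, a2 represented as complex
# numbers. Node positions and the W222 scale are arbitrary assumptions.
import numpy as np

W222 = 1.0          # astigmatism scale (arbitrary units)
a1 = 0.0 + 0.0j     # node placed at the field center
a2 = 0.6 + 0.3j     # second node, off axis (assumed)

def astigmatism_magnitude(H):
    return 0.5 * W222 * np.abs(H - a1) * np.abs(H - a2)

# Sampling along a field radius through the origin: with a node at H = 0,
# the magnitude grows approximately linearly near the field center, unlike
# the quadratic growth of rotationally symmetric astigmatism.
for h in (0.0, 0.1, 0.2, 0.4, 0.8):
    print(f"|H| = {h:.1f}: |W_ast| = {astigmatism_magnitude(h + 0.0j):.3f}")
```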