Teaching of optical imaging and aberrations
17 July 2019
Proceedings Volume 11143, Fifteenth Conference on Education and Training in Optics and Photonics: ETOP 2019; 111430F (2019) https://doi.org/10.1117/12.2523698
Event: Fifteenth Conference on Education and Training in Optics and Photonics: ETOP 2019, 2019, Quebec City, Quebec, Canada
Abstract
The purpose of this paper is to outline my approach to teaching optical imaging and aberrations, based on my teaching experience. The teaching of optical imaging starts with Gaussian imaging, which determines the location and size of the image in terms of the location and size of the object. The actual image is determined by diffraction of the light leaving the exit pupil of the system, and its quality is determined by the aberrations of the system. Hence, a knowledge and understanding of how aberrations degrade the image, by way of the aberrated PSF and OTF of the system, must be an integral part of an optical imaging course. We discuss the teaching flow, including the calculation of aberrations, how to construct numerical homework problems that are interesting and relevant, and when to use commercial software.

1. INTRODUCTION

Optical imaging and aberrations are the bread and butter of optical design and the development of an imaging system. Imaging consists of two main areas: ray geometrical imaging and wave diffraction imaging. Its teaching starts with Gaussian imaging, which determines the location and size of the image in terms of the location and size of the object [1-4]. The size of the entrance pupil, which determines the object flux entering a system and thereby the image brightness, is often left out of undergraduate teaching of Gaussian imaging. Beyond the usual imaging equations, teaching should also include paraxial ray tracing to determine the sizes of the imaging elements, and pupil vignetting to determine the field of view of the system. Similarly, we emphasize the calculation of the optical aberrations at the exit pupil of the system to determine its pupil function.

While the Gaussian image is an exact replica of the object, except for its magnification and brightness, the actual image is determined by diffraction of the light leaving the exit pupil of the system, which inherently includes its aberrations. We point out that while the students learn the concept of the optical transfer function (OTF) as the Fourier transform of the point-spread function (PSF), their understanding is often limited to it as a mathematical entity, without comprehending its physical significance. We mention how the PSFs aberrated by the classical aberrations display the symmetry of the corresponding aberration, but the PSF aberrated by atmospheric turbulence is broken up into speckles. We also briefly discuss two-point resolution.

We outline the teaching flow of optical imaging with steps involved in its learning, and how to make interesting homework problems to keep the students motivated and interested in this field of classical optics.

2. RAY GEOMETRICAL IMAGING

2.1 Gaussian imaging

The first matter to settle in Gaussian imaging is the sign convention for measuring the distances of objects and images and the angles of the rays. The archaic sign convention, based on whether the object is real or virtual or on the left or right, should be abandoned: it becomes cumbersome to implement when more than two imaging surfaces are involved, and it changes when the imaging elements are mirrors. Instead, the Cartesian sign convention should be used; all students understand it, and it is the convention adopted by optical design software.

Gaussian imaging is based on paraxial rays, i.e., rays making small angles with the optical axis and surface normals. We start with imaging by a spherical refracting surface of radius of curvature R separating media of refractive indices n and n′, as in Figure 1. The imaging equations are given by

$$\frac{n'}{S'} - \frac{n}{S} = \frac{n'-n}{R}, \qquad f' = \frac{n'R}{n'-n}, \qquad f = -\frac{nR}{n'-n}, \qquad M_t = \frac{h'}{h} = \frac{nS'}{n'S},$$

where S and S′ are the object and image distances from the vertex V of the refracting surface, and f and f′ are the object- and image-space focal lengths. The focal length f′ is the image distance at which rays incident parallel to the axis come to focus, and f is the object distance that produces a parallel beam on the image side. Mt is the transverse magnification of the image, where h and h′ are the heights of the object and image points, respectively. Any negative quantities in the figure are indicated by a parenthetical negative sign.
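The refracting-surface equations can be exercised numerically in a few lines. A minimal sketch in Python, using the Cartesian sign convention; the function name and the numerical values are illustrative, not from the paper:

```python
# Gaussian imaging by a single refracting surface (Cartesian sign convention:
# distances measured from the vertex, positive to the right of it).

def refract_image(n, n_prime, R, S):
    """Return image distance S' and transverse magnification Mt for an
    object at distance S from a surface of radius R separating media of
    indices n (object space) and n' (image space)."""
    # n'/S' - n/S = (n' - n)/R
    S_prime = n_prime / ((n_prime - n) / R + n / S)
    # Mt = h'/h = n S' / (n' S)
    Mt = n * S_prime / (n_prime * S)
    return S_prime, Mt

# Example: object 200 mm to the left (S = -200 mm) of a surface with
# R = +50 mm separating air (n = 1) from glass (n' = 1.5).
S_prime, Mt = refract_image(1.0, 1.5, 50.0, -200.0)
```

For these values the image forms 300 mm to the right of the vertex with unit (inverted) magnification, which the students can confirm by hand.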

Figure 1. Imaging by a spherical refracting surface.

A graphical construction of the image (see Figure 2) can be used to obtain the image magnification. Ray 1 from the off-axis point object P, incident parallel to the optical axis VC, passes through the focal point F′ after refraction. Ray 2, incident through the focal point F, is refracted parallel to the optical axis; it determines the image height h′ and intersects the refracted ray 1 at the image point P′. Ray 3, passing undeviated through the center of curvature C of the surface, also passes through P′ and confirms the correctness of the image point. The deviation of the rays takes place effectively at the plane tangent to the surface at its vertex. The object and image distances z and z′, measured from the focal points and utilized in the Newtonian imaging equation zz′ = ff′, are also indicated in the figure, but they are not discussed here.

Figure 2. Graphical construction of imaging by a refracting surface.

An illustrative example of imaging by a refracting surface is that of a glass hemisphere with a flower embedded at its center of curvature: find the apparent position and relative size of the flower for certain values of R and n, and illustrate the solution with a ray diagram. The diagram is important, as it helps in understanding the imaging process and in checking whether the solution is correct.

The image formed by a thin lens of refracting surfaces of radii of curvature R1 and R2 and refractive index n can be obtained by applying the imaging equations for a refracting surface sequentially to its two surfaces:

$$\frac{1}{S'} - \frac{1}{S} = (n-1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right) = \frac{1}{f'}, \qquad M_t = \frac{h'}{h} = \frac{S'}{S}.$$

A graphical construction of the image formed by a thin lens can be carried out in a manner similar to that for imaging by a refracting surface. Here, the undeviated ray passes through the center of the lens. A simple homework problem is that of finding the focal length of a lens in water in terms of its focal length in air. Another problem can be formulated on the magnifying glass. While imaging by a thin lens is perhaps the most familiar example of imaging, a discussion of the effect of its thickness should not be overlooked. We will consider it when discussing paraxial ray tracing in Section 2.2.
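The lens-in-water homework problem can be checked with a short script. A sketch under assumed typical indices (crown glass n = 1.5, water n = 1.33); the helper name and radii are illustrative:

```python
# Ratio of the focal lengths of a thin lens immersed in a medium vs. in air.

def thin_lens_power(n_lens, n_medium, R1, R2):
    """Thin-lens power with the lens immersed in a medium of index n_medium."""
    return (n_lens - n_medium) * (1.0 / R1 - 1.0 / R2)

R1, R2 = 50.0, -50.0                            # biconvex lens, radii in mm
f_air = 1.0 / thin_lens_power(1.5, 1.0, R1, R2)
f_water = 1.33 / thin_lens_power(1.5, 1.33, R1, R2)  # f' = n_medium / power
ratio = f_water / f_air
```

The ratio works out to n_m(n − 1)/(n − n_m), about 3.9 for these indices: the lens is nearly four times weaker in water, independent of its radii.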

The image formed by a multielement imaging system can be obtained in a similar manner. However, an error in imaging by any of its elements will yield an incorrect final image. The right approach is to utilize the principal planes of the system, i.e., the planes of unit magnification, and measure the object and image distances from them. The deviation of the rays by the system effectively takes place at these planes, and the imaging equations are exactly the same as those for a refracting surface. A ray incident in the direction of the object-side principal point emerges from the image-side principal point such that the ratio of the slope angles of the incident and emergent rays is equal to the ratio of the refractive indices of the image and object spaces. If these refractive indices are equal, the ray emerges parallel to the incident ray. In a problem on eyeglasses, a question can be asked about the difference in prescription between them and contact lenses.

An interesting illustration of transverse magnification is the image of a cube as an object, shown in Figure 3.

Figure 3. Image of a cube as a truncated pyramid.

The imaging equations for a spherical mirror (see Figure 4) of radius of curvature R can be obtained in a similar manner, and it is instructive to do so to understand the imaging process. They can also be obtained from those for a refracting surface by letting n = 1, because the mirror is in air, and n′ = −1, because the reflected ray travels backwards:

$$\frac{1}{S'} + \frac{1}{S} = \frac{2}{R} = \frac{1}{f'}, \qquad f' = \frac{R}{2}, \qquad M_t = \frac{h'}{h} = -\frac{S'}{S}.$$

Figure 4. Imaging by a spherical mirror.

An interesting problem for imaging by a mirror arises when you look into the eyes of another person. Because the cornea is slightly reflective, it acts like a convex mirror. A problem can be formulated by providing the radius of curvature of the cornea and determining the size of the image, which can be verified immediately. The image observed is virtual but erect.
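The corneal-reflection problem can be estimated in a couple of lines. A sketch with assumed illustrative values (corneal radius of curvature about 7.8 mm, the other person's face about 250 mm away), working with magnitudes to stay clear of sign-convention pitfalls:

```python
# Cornea as a convex mirror: approximate size of the reflected face.

R_cornea = 7.8             # mm, magnitude of the corneal radius of curvature
f_mirror = R_cornea / 2.0  # mm, magnitude of the convex-mirror focal length

u = 250.0                  # mm, object (face) distance from the cornea
# For a convex mirror with u >> f, the virtual erect image forms close to
# the focal point, so the magnification magnitude is approximately f/u.
m = f_mirror / u           # roughly 1.6% of life size
```

The tiny erect reflection of one's own face, easily seen in another person's eye, confirms the estimate at a glance.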

Another interesting problem is based on the passenger-side mirror of an automobile, which eliminates the blind spot for safety reasons. Such mirrors carry the inscription "Objects in mirror are closer than they appear." What we see in the mirror is the image of the object, an automobile in Figure 5. The image lies in the otherwise blind spot, thus eliminating it. Note that the image is erect. A homework problem can be formulated to determine the radius of curvature of the mirror, including its sign (positive or negative). The students can be asked further to measure the convexity of the mirror surface on their automobiles, and check how it compares with the radius of curvature thus determined. Nowadays, the blind-spot problem has also been addressed with sonar signals and sensors mounted on automobiles.

Figure 5. Passenger-side mirror on an automobile.

2.2 Paraxial ray tracing

While Gaussian imaging helps determine the location and size of the image of an object formed by an imaging system, how do we determine the size of its imaging elements and field of view? This is where paraxial ray tracing comes in [1]. The equations for the height x1 of a ray and its slope β1 can be written recursively in the case of a refracting surface (see Figure 6) in the form

$$n'\beta_1' = n\beta_1 - \frac{n'-n}{R}\,x_1, \qquad x_2 = x_1 + \beta_1' t,$$

where t is the distance to the next surface.

Figure 6. Paraxial ray tracing of a spherical refracting surface.

These equations can be applied sequentially to the surfaces of a system to determine, for example, the focal length of a thick lens of thickness t (see Figure 7) by starting with a ray incident parallel to its axis. Where the incident and emergent rays intersect determines the image space principal point H′, and where the emergent ray intersects the optical axis locates the image-space focal point F′.
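The sequential application of the refraction and transfer equations can be sketched as a short trace through a thick lens, checked against the standard thick-lens focal length formula. Names and numerical values are illustrative, not from the paper:

```python
# Paraxial ray trace through a thick lens to find its focal length.

def refract(n1, n2, R, x, beta):
    """Paraxial refraction at a surface of radius R: returns the new slope."""
    return (n1 * beta - x * (n2 - n1) / R) / n2

def transfer(x, beta, t):
    """Propagate a ray of slope beta a distance t: returns the new height."""
    return x + beta * t

# Thick lens: n = 1.5, R1 = +50 mm, R2 = -50 mm, thickness t = 10 mm.
n, R1, R2, t = 1.5, 50.0, -50.0, 10.0
x1, beta = 1.0, 0.0                    # ray parallel to the axis, height 1
beta = refract(1.0, n, R1, x1, beta)   # first surface
x2 = transfer(x1, beta, t)             # propagate through the glass
beta = refract(n, 1.0, R2, x2, beta)   # second surface
f_traced = -x1 / beta                  # effective focal length from the trace

# Thick-lens formula for comparison:
f_formula = 1.0 / ((n - 1) * (1/R1 - 1/R2 + (n - 1) * t / (n * R1 * R2)))
```

Where the traced ray crosses the axis locates F′, and the agreement of `f_traced` with the closed-form value (about 51.7 mm here) is a satisfying check for the students.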

Figure 7. Gaussian imaging by a thick lens.

$$\frac{1}{f'} = (n-1)\left[\frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)t}{nR_1R_2}\right]$$

The paraxial ray tracing equations for a spherical mirror of radius of curvature R can be obtained from those for a refracting surface with the result

$$\beta_1' = -\beta_1 - \frac{2x_1}{R}, \qquad x_2 = x_1 + \beta_1' t.$$

The size of the secondary mirror M2 in terms of the size of the primary mirror M1 in a two-mirror telescope (see Figure 8) can be determined by considering a ray incident parallel to its axis at the edge of M1 and determining its intersection with M2. Where the ray reflected by M2 intersects M1 determines the size of the hole required in M1. The ratio of the two mirror sizes yields the obscuration ratio of the annular beam transmitted by the telescope. By tracing an off-axis ray, we can determine how the field of view affects the size of M2 and the hole in M1.
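The marginal-ray argument above reduces to a few lines of arithmetic. A sketch with assumed illustrative telescope parameters (1 m primary of 2 m focal length, mirrors 1.5 m apart):

```python
# Secondary mirror size and obscuration ratio from a marginal ray trace.

D1 = 1.0   # m, primary mirror diameter
f1 = 2.0   # m, primary mirror focal length (|R1|/2)
d  = 1.5   # m, separation between the mirrors

h1 = D1 / 2.0             # marginal ray strikes the edge of M1
h2 = h1 * (1.0 - d / f1)  # ray height at M2 while converging toward M1's focus
D2 = 2.0 * h2             # required secondary mirror diameter
obscuration = D2 / D1     # linear obscuration ratio of the annular beam
```

For these numbers the secondary is 0.25 m across and obscures a quarter of the aperture diameter; repeating the trace for an off-axis ray shows how the field of view grows M2.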

Figure 8. Size of the secondary mirror M2 and of the hole in the primary mirror M1.

The next step in Gaussian imaging is to determine the brightness of the image. This requires determining the aperture stop of the system, i.e., the element that most restricts the cone of object rays entering the system. The angular cone of rays entering the system is equivalently limited by the entrance pupil EnP. Similarly, the angular cone of rays exiting the system is limited by the exit pupil ExP. This is illustrated in Figure 9 for a two-lens system with the aperture stop between the lenses, for both on- and off-axis point objects. Beyond a certain field angle, the pupil is no longer circular; it becomes roughly elliptical in shape, as illustrated in Figure 10.

Figure 9. Aperture stop AS and its images, the entrance and exit pupils EnP and ExP.

Figure 10. Change of EnP shape resulting from vignetting of rays as the field angle increases.

2.3 Comments on Gaussian imaging

We teach Gaussian imaging by considering spherical surfaces, but we often don't bring up the question "What if the surface is nonspherical?," e.g., ellipsoidal or paraboloidal. After all, such surfaces are used in optical systems. The answer is that Gaussian imaging depends only on the vertex radius of curvature of a surface. If this radius is the same as that of a corresponding spherical surface, then the Gaussian image is the same for both. Why, then, are nonspherical surfaces used in optical systems? Because the qualities of the two images are not the same, owing to their different aberrations. Hence, the next step is to consider the aberrations of a system.

We teach Gaussian imaging by emphasizing that it is based on paraxial rays, i.e., rays with small angles. Yet, when we calculate the Gaussian image formed by a system, we never pay attention to the magnitude of the ray angles: regardless of the object size, we calculate its image using the Gaussian imaging equations. Again, this is where the aberrations and the quality of the image come in. One more point: in Gaussian imaging, the object and image distances are measured along the optical axis, regardless of the location of a point object. This results in an error in the position of the image of an off-axis point object. The correct image of a planar object actually lies on a spherical surface, called the Petzval image, and the image observed in a plane is accordingly slightly aberrated.

3. OPTICAL ABERRATIONS

An aberration-free image is formed when a spherical wave diverging from a point object P is converted by the imaging system into a spherical wave converging to the Gaussian image point P′, as illustrated in Figure 11. All of the object rays transmitted by the system pass through P′, and their optical path lengths are equal to each other. If the wave exiting the system is not spherical, its deviations from the spherical form are called wave aberrations, and the image of the point object becomes a distribution of ray intersections, called a spot diagram.

Figure 11. Aberration-free PSF, or Airy pattern.

The aberrations of a rotationally symmetrical system consist of integral powers of three rotational invariants h2, r2, and hr cos θ, where (r, θ) are the polar coordinates of a point in the plane of its exit pupil [5]. The lowest-order wave aberrations are called primary or Seidel aberrations, and they can be written

$$W(h; r, \theta) = a_s r^4 + a_c\, h r^3 \cos\theta + a_a\, h^2 r^2 \cos^2\theta + a_d\, h^2 r^2 + a_t\, h^3 r \cos\theta,$$

representing spherical aberration, coma, astigmatism, field curvature (defocus), and distortion (tilt), respectively.

For each aberration, the sum of the powers of h and r is 4, i.e., their combined degree is four. Therefore, they are called fourth-order wave aberrations. It is pertinent to ask at this point whether the students wearing glasses have looked at their prescriptions to find what aberrations they consist of. Suppressing h and using a normalized radial variable ρ = r/a, where a is the radius of the exit pupil, we can write the aberration function in the form

$$W(\rho, \theta) = A_s \rho^4 + A_c \rho^3 \cos\theta + A_a \rho^2 \cos^2\theta + A_d \rho^2 + A_t \rho \cos\theta.$$

In reduced form, an aberration coefficient Ai has the dimensions of length and represents the peak, or maximum, value of the corresponding primary aberration. For example, if As = 1λ, where λ is the wavelength of the object radiation, we speak of one wave of spherical aberration. It is worthwhile mentioning that the Hubble Space Telescope had about 4 waves of spherical aberration, until the astronauts corrected it with a corrective device.

The wave aberrations of simple systems can and should be calculated to understand how they arise. This requires determining the optical path length of a ray relative to that of the ray passing through the center of the aperture stop. Just as we started Gaussian imaging with a single refracting surface, we determine its aberrations first, and then continue to two surfaces, a single mirror, or two mirrors, as in a telescope. The next step should inevitably be how the aberrations change when the position of the aperture stop is changed. The Schmidt camera is a perfect example for illustrating this effect: the angle-dependent aberrations are zeroed out when the Schmidt corrector plate, which corrects the spherical aberration of the camera's spherical mirror, is placed at the mirror's center of curvature. Once the students grasp the calculation of primary aberrations, they can perform exact ray tracing using commercial optical design programs such as Zemax, CODE V, or SYNOPSYS to verify their calculations, and inspect the values of the higher-order aberrations to get an idea of how negligible those aberrations are. Understanding and evaluating primary aberrations in this manner can be interesting as well as challenging for them.

The wave aberrations are related to the ray aberrations according to [1]

$$(x_i, y_i) = \left(\frac{\partial W}{\partial x}, \frac{\partial W}{\partial y}\right),$$

where the ray aberrations (xi, yi) are in units of λF, W is in units of wavelength, and the pupil coordinates (x, y) are normalized by the pupil radius a. Here, F = R/D is the focal ratio of the image-forming light cone, where R is the radius of curvature of the reference sphere with respect to which the wave aberration is defined, and D = 2a.
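The wave-to-ray aberration relation lends itself to a quick numerical check: differentiate a known aberration function and compare with the analytic gradient. A sketch for spherical aberration, with illustrative names and a finite-difference derivative:

```python
# For spherical aberration W = As*(x^2 + y^2)^2 (W in waves, pupil
# coordinates normalized), the transverse ray aberration along the radius
# should be 4*As*rho^3, in units of lambda*F.

As = 1.0   # one wave of spherical aberration

def W(x, y):
    return As * (x * x + y * y) ** 2

def ray_aberration(x, y, eps=1e-6):
    """(x_i, y_i) = (dW/dx, dW/dy) via central differences."""
    xi = (W(x + eps, y) - W(x - eps, y)) / (2 * eps)
    yi = (W(x, y + eps) - W(x, y - eps)) / (2 * eps)
    return xi, yi

xi, yi = ray_aberration(0.5, 0.0)   # zonal ray at rho = 0.5 on the x axis
# analytic value: 4 * As * rho^3 = 0.5 along x, 0 along y
```

Sampling many pupil points this way generates the spot diagram mentioned earlier, and makes the ρ³ growth of the transverse aberration tangible.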

4. WAVE DIFFRACTION IMAGING

4.1 Aberration-free image of a point object: the Airy pattern

While the aberration-free Gaussian image of a point object is a point, in reality the image is a distribution of light because of diffraction of the wave exiting from the exit pupil. For a circular aperture, it is the Airy pattern shown in Figure 11, described by [6]

$$I(r) = \left[\frac{2J_1(\pi r)}{\pi r}\right]^2,$$

where r is in units of λF and the central value is normalized to unity. The central bright spot, of radius 1.22λF and called the Airy disc, contains 83.8% of the total light. It is surrounded by alternating dark and bright rings. It is worthwhile emphasizing to the students that the central point is the brightest because that is where the Huygens secondary wavelets arrive in phase. The image of a point object is called the point-spread function (PSF).
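The Airy pattern and its 83.8% encircled energy can be verified directly. A sketch using SciPy's Bessel functions and the standard closed-form encircled-energy expression 1 − J₀²(πr) − J₁²(πr); names are illustrative:

```python
# Airy pattern (r in units of lambda*F) and the Airy-disc encircled energy.
import math
from scipy.special import j0, j1

def airy_psf(r):
    """Aberration-free PSF, normalized to 1 at the center."""
    if r == 0.0:
        return 1.0
    return (2.0 * j1(math.pi * r) / (math.pi * r)) ** 2

def encircled_energy(r):
    """Fraction of the total power within radius r (lambda*F units)."""
    return 1.0 - j0(math.pi * r) ** 2 - j1(math.pi * r) ** 2

airy_disc_fraction = encircled_energy(1.22)  # fraction inside the first dark ring
```

Evaluating `airy_psf` on a grid reproduces the rings of Figure 11, and `airy_disc_fraction` comes out at about 0.838, the number quoted above.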

4.2 Strehl ratio and optical tolerances

A simple measure of image quality is the Strehl ratio, which represents the ratio of the central irradiances of the PSF with and without aberration. For a small aberration, the Strehl ratio is approximately given by [6,7]

$$S \simeq \exp\!\left(-\sigma_\Phi^2\right),$$

where σ_Φ² is the variance of the phase aberration. Aberrations can only reduce the central value, because they result in nonconstructive interference of the Huygens secondary wavelets. The students can be given some sense of fabrication tolerances based on a certain value of the Strehl ratio, e.g., 0.8. For example, it can be shown that the fabrication tolerances in a three-mirror system are no more than about a fiftieth of a wave when the random fabrication errors are root-sum-squared, taking into account that a fabrication error is nearly doubled in wavefront error because of reflection of the wave from a mirror.
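The three-mirror tolerance argument above can be followed step by step in a few lines; variable names are illustrative:

```python
# Fabrication-tolerance budget from a Strehl ratio of 0.8:
# S ~ exp(-sigma_phi^2), three mirrors root-sum-squared, and a factor
# of 2 because reflection nearly doubles a surface error in the wavefront.
import math

S_target = 0.8
sigma_phi = math.sqrt(-math.log(S_target))  # allowed phase std dev, radians
sigma_w = sigma_phi / (2.0 * math.pi)       # wavefront std dev, waves (~0.075)
per_mirror_w = sigma_w / math.sqrt(3.0)     # RSS budget per mirror, waves
surface_tol = per_mirror_w / 2.0            # surface figure tolerance, waves
```

The result is roughly λ/46 per mirror surface, i.e., about a fiftieth of a wave, as stated above.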

Rayleigh showed (in 1879) that the Strehl ratio for a quarter wave of spherical aberration is 0.8. For other primary aberrations, distinctly different values of the Strehl ratio are obtained. Yet, it is quite common in optical design to speak of Rayleigh's quarter-wave rule: a system is near its diffraction limit if its wavefront is contained between two spherical surfaces that are a quarter wave apart. Students should have a clear understanding that whereas Rayleigh's quarter-wave rule is useful for a qualitative assessment of image quality, the Strehl ratio yields a quantitative assessment.

4.3 Aberrated PSFs

The students can be challenged to derive the symmetry of an aberrated PSF from the symmetry of its aberration, and then verify their result by calculating the aberrated PSF. We show in Figure 12 the PSFs aberrated by spherical aberration As, spherical aberration combined with defocus Bd, astigmatism, and astigmatism combined with defocus [6,8]. Spherical aberration yields a radially symmetric PSF; the astigmatic PSF has biaxial symmetry, and four-fold symmetry when astigmatism is combined with an appropriate amount of defocus; and the comatic PSF has symmetry about the horizontal axis.

Figure 12. Aberrated PSFs.

4.4 Optical transfer function (OTF) and the image of an incoherent object

The diffraction image of an isoplanatic incoherent object is equal to the convolution of its Gaussian image and the diffraction PSF. This is how the concept of the OTF as the Fourier transform of the PSF originates, with the spatial frequency spectrum of the image equal to the product of the spectrum of the Gaussian image and the OTF [6,9]. However, it is not the object but the imaging system that has to be isoplanatic, and the students need to be reminded that imaging systems are generally aberrated, that the aberrations are field dependent, and that a system can therefore be only approximately isoplanatic.

Students learn that mathematically the OTF is the Fourier transform of the PSF. They also learn that it is equal to the autocorrelation of the pupil function (i.e., of the complex amplitude at the exit pupil). But they don't immediately realize that this definition yields a cutoff frequency beyond which the OTF is zero. Moreover, they don't quite grasp the physical significance of the OTF.
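The cutoff frequency becomes concrete if the students evaluate the standard closed-form diffraction-limited OTF of a clear circular pupil, which follows from the pupil autocorrelation; the function name is illustrative:

```python
# Aberration-free OTF of a circular pupil:
# tau(nu) = (2/pi) * [arccos(nu) - nu*sqrt(1 - nu^2)] for nu <= 1,
# with spatial frequency nu normalized by the cutoff 1/(lambda*F).
import math

def otf_circular(nu):
    """Diffraction-limited OTF of a clear circular pupil."""
    nu = abs(nu)
    if nu >= 1.0:
        return 0.0  # identically zero beyond the cutoff frequency
    return (2.0 / math.pi) * (math.acos(nu) - nu * math.sqrt(1.0 - nu * nu))

mid_band = otf_circular(0.5)   # contrast transfer at half the cutoff
```

Plotting this function shows the monotonic fall from 1 at zero frequency to exactly 0 at the cutoff; at half the cutoff, only about 39% of the object modulation survives.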

The physical significance of the OTF can be explained by considering a sinusoidal object and its Gaussian and diffraction images. A sinusoidal object of spatial frequency ν_o, radiance B_o, modulation or contrast m, and an arbitrary phase constant φ can be written

$$O(x) = B_o\left[1 + m\cos(2\pi\nu_o x + \varphi)\right].$$

Its Gaussian image is given by

$$I_g(x) = B\left[1 + m\cos(2\pi\nu_i x + \varphi)\right].$$

It has the same modulation and phase as the object. Its spatial frequency ν_i is, of course, different from the object frequency ν_o by the image magnification M, i.e., ν_i = ν_o/M. Let us write the complex OTF in the form

$$\tau(\nu) = |\tau(\nu)|\exp[i\Phi(\nu)],$$

where |τ(ν)| is its magnitude, called the modulation transfer function (MTF), and Φ(ν) is its phase, called the phase transfer function (PTF). It can be shown that the diffraction image of the sinusoidal object is given by [6]

$$I(x) = B\left\{1 + m|\tau(\nu_i)|\cos[2\pi\nu_i x + \varphi + \Phi(\nu_i)]\right\}.$$

It is evident that the modulation of the diffraction image is lower by a factor of the MTF |τ(ν)|, and that its phase differs by the PTF Φ(ν), as illustrated in Figure 13. For a discussion of the types of aberrations that yield a real OTF and those that yield a complex OTF, the reader may refer to [6,10]. A phase of π over a certain band of spatial frequencies results in contrast reversal, i.e., bright regions of the object in this band appear dark in the image, and dark regions appear bright. As a check, the integral of the real part of the OTF yields the Strehl ratio, and that of the imaginary part should give zero.

Figure 13. Sinusoidal object and its Gaussian and diffraction images.

4.5

Two-point resolution

A measure of the imaging quality of a system is its ability to resolve closely-spaced objects. According to the Rayleigh criterion of resolution, two point objects of equal intensity are just resolved if the principal maximum of the Airy pattern of one of them falls on the first zero of the other, i.e., if the separation between their Gaussian images is 1.22λF. If the Gaussian images are located at x = ± 0.61λF, then the irradiance distribution of the aberration-free image along the x axis is given by [6]

$$I(x) = \left\{\frac{2J_1[\pi(x-0.61)]}{\pi(x-0.61)}\right\}^2 + \left\{\frac{2J_1[\pi(x+0.61)]}{\pi(x+0.61)}\right\}^2,$$

where x is in units of λF. The irradiance distribution along the x axis of the aberration-free image of two incoherent point objects of equal intensity, separated by the Rayleigh resolution of 1.22λF, is shown in Figure 14. The dip at the center has a value of 0.73, compared to a maximum value of unity at x = ±0.61.

Figure 14. Irradiance along the x axis of the image of two point objects spaced 1.22λF apart.

A practical homework problem related to resolution is that of driving at night on a highway and observing a vehicle coming from the other side. How do we tell whether the vehicle is a car or a motorcycle? It is a car if it has two headlights, but a motorcycle if it has only one. When the vehicle is far away, we see only one light, because the eye cannot resolve the two headlights of a car. The question then is: at what distance can we tell whether the vehicle is a car or a motorcycle? Evidently, this is the distance at which the two car headlights can just be resolved.

The angular resolution of the eye is given by

$$\alpha = \frac{1.22\lambda}{D_{\mathrm{eye}}} \approx 1.5\times10^{-4}\ \mathrm{rad} \qquad (\lambda \approx 0.5\ \mu\mathrm{m},\ D_{\mathrm{eye}} \approx 4\ \mathrm{mm}).$$

For a car at a distance d, the angular subtense of its two headlights is given by

$$\theta = \frac{s}{d} \qquad (s \approx 1.5\ \mathrm{m}).$$

Equating the two, we obtain the distance at which the eye can resolve the two car headlights as about 10 km, or roughly 6 miles. In reality, the resolution will be somewhat poorer due to aberrations, and will depend on the illumination at the driver's eye by the headlights of the approaching vehicle.
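The headlight estimate is a two-line calculation. A sketch with assumed typical values (0.5 μm wavelength, 4 mm eye pupil, 1.5 m headlight separation), which the students can vary:

```python
# Distance at which the eye can just resolve a car's two headlights.

lam = 0.5e-6    # m, visible wavelength (assumed)
D_eye = 4.0e-3  # m, eye pupil diameter (assumed typical night-time value)
s = 1.5         # m, separation of the two headlights (assumed)

alpha = 1.22 * lam / D_eye  # Rayleigh angular resolution of the eye, radians
d = s / alpha               # distance at which the headlights are resolved, m
```

The result is just under 10 km, matching the estimate in the text; halving the pupil diameter (bright conditions) halves the distance.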

4.6 Imaging through atmospheric turbulence

While the aberrated PSFs for simple static aberrations are interesting, what happens when the aberration consists of a mixture of them, as in the case of aberrations introduced by atmospheric turbulence? As illustrated in Figure 15, the image is broken up into small spots called speckles, which is characteristic of the effect of random aberrations. The angular size of a speckle is approximately λ/D, and the angular size of the overall image is approximately λ/r0, where r0 is the atmospheric coherence length, called the seeing. With a long exposure, the image becomes a large spot, thus limiting the resolution [6,11]. The twinkling of stars and the fluctuation of city lights result from atmospheric turbulence.
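The two angular scales above can be compared numerically. A sketch with assumed illustrative values (5 m telescope, 10 cm seeing at 0.5 μm):

```python
# Speckle size vs. seeing-limited image size for a ground-based telescope.

lam = 0.5e-6   # m, wavelength (assumed)
D = 5.0        # m, telescope diameter (assumed)
r0 = 0.10      # m, atmospheric coherence length, i.e., the seeing (assumed)

speckle_angle = lam / D   # ~1e-7 rad: angular size of an individual speckle
image_angle = lam / r0    # ~5e-6 rad: angular size of the overall blurred image
scale_ratio = image_angle / speckle_angle   # = D / r0
```

The ratio D/r0 (50 here) shows why a long exposure washes the speckles into one large spot: the telescope's diffraction limit is a factor of D/r0 finer than the seeing-limited resolution.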

Figure 15. A short-exposure image of a star by a ground-based telescope without any adaptive optics.

Observatories are built on mountain tops to reduce the effect of turbulence so that r0 is large. Since resolution is limited by r0, why did we build large telescopes for ground-based applications, e.g., the 5 m Mount Palomar telescope near San Diego? It was to collect more light, so that we could see dim objects. Even on mountain sites, the value of r0 is only a few centimeters. To improve resolution, the United States developed the Hubble Space Telescope, with D = 2.4 m, thereby avoiding the image-degrading effects of atmospheric turbulence. The James Webb Space Telescope, with a segmented primary mirror of D = 6.5 m, is now under development.

4.7 Adaptive optics

Correction of wavefront errors in (near) real time using a steering mirror and a deformable mirror is called adaptive optics. The steering mirror, with only three actuators, corrects the large x and y wavefront tilts (also called tip and tilt). The deformable mirror, deformed by an array of actuators attached to it, corrects the remaining wavefront deformation. The signals for the actuators are determined by sensing the wavefront errors with a wavefront sensor in a closed loop, so as to minimize the variance of the residual wavefront errors. This is the principle of the adaptive optics on the 10 m Keck telescope on Mauna Kea in Hawaii.

5. CONCLUSIONS

When teaching Gaussian imaging, use the Cartesian sign convention and make the students aware of the Petzval curvature of the image. Paraxial ray tracing should be an integral part of the curriculum for determining the sizes of the imaging elements; the aperture stop and its images EnP and ExP; vignetting; obscuration; and the field of view. There should be a smooth transition from Gaussian imaging to diffraction imaging. Emphasize that the actual image is determined by diffraction, and that its quality depends on the aberrations of the imaging system. Students should feel that the homework problems are interesting and relevant.

Needless to say, only some key topics have been outlined in this paper. Such a course can be covered in two semesters: one for ray geometrical imaging, including the analytical calculation of the aberrations of simple systems, and the other for wave diffraction imaging.

REFERENCES

[1] Mahajan, V. N., Fundamentals of Geometrical Optics, SPIE Press, Bellingham, WA (2014).
[2] Mouroulis, P. and Macdonald, J., Geometrical Optics and Optical Design, Oxford University Press, New York (1997).
[3] Welford, W. T., Aberrations of the Symmetrical Optical System, Academic Press, New York (1974).
[4] Smith, W. J., Modern Optical Engineering, 4th ed., SPIE Press, Bellingham, WA (2007).
[5] Mahajan, V. N., Optical Imaging and Aberrations, Part I: Ray Geometrical Optics, SPIE Press, Bellingham, WA (1998; second printing 2001).
[6] Mahajan, V. N., Optical Imaging and Aberrations, Part II: Wave Diffraction Optics, 2nd ed., SPIE Press, Bellingham, WA (2011).
[7] Mahajan, V. N., "Strehl ratio for primary aberrations in terms of their aberration variance," J. Opt. Soc. Am. 73, 860-861 (1983). https://doi.org/10.1364/JOSA.73.000860
[8] Mahajan, V. N., "Symmetry properties of aberrated point-spread functions," J. Opt. Soc. Am. A 11, 1993-2003 (1994). https://doi.org/10.1364/JOSAA.11.001993
[9] Goodman, J. W., Introduction to Fourier Optics, 3rd ed., Roberts & Company Publishers, Englewood, CO (2005).
[10] Mahajan, V. N. and Diaz, J. A., "Imaging characteristics of Zernike and annular polynomial aberrations," Appl. Opt. 52, 2062-2074 (2013). https://doi.org/10.1364/AO.52.002062
[11] Fried, D. L., "Optical resolution through a randomly inhomogeneous medium for very long and very short exposures," J. Opt. Soc. Am. 56, 1372-1379 (1966). https://doi.org/10.1364/JOSA.56.001372
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Virendra N. Mahajan "Teaching of optical imaging and aberrations", Proc. SPIE 11143, Fifteenth Conference on Education and Training in Optics and Photonics: ETOP 2019, 111430F (17 July 2019); https://doi.org/10.1117/12.2523698
KEYWORDS: Mirrors, Imaging systems, Point spread functions, Diffraction, Spherical lenses, Monochromatic aberrations, Optical transfer functions