Modeling the optical transfer function in the imaging chain

18 August 2014
Abstract
Modeling the imaging chain has become a critical design tool to understand the relationship between design trades and image quality in camera systems. The mathematical models for the fundamental components of an imaging chain are well understood and have been validated using working camera systems. However, the complexity of camera designs continues to grow as the technology advances to drive higher performance using different approaches. The fundamental imaging chain models do not always meet the needs of the new imaging system designs, thus requiring the models to advance in complexity as well. Of particular interest to the optical designers is the development of mathematical models that enable more complex modeling of the wavefront errors for the optical transfer function (OTF) in the imaging chain models. A tutorial on the imaging chain is given, followed by an innovative approach using an η matrix for modeling the OTF in the imaging chain.

1.

Introduction

Most photographs captured today are acquired with cameras that produce digital images, i.e., an image that is composed of an array of numbers that represent the brightness of each picture element, called a pixel (Fig. 1). But how are these numbers created and what influenced their values? What drives the pixel values away from the ideal values that would give us the best image quality? Camera designers struggle with these questions every day as they balance the desired image quality with design constraints, most notably size, weight, power, and cost.

Fig. 1

Camera designers need to understand the imaging process that creates the final digital counts in a digital image to properly assess the image quality.


The relationship between the final image quality and the camera design factors, such as detector size and optical quality, needs to be well characterized and understood in order to relate system requirements to the image quality required by the image user. This relationship needs to be characterized early in the design phase to prevent costly redesigns later in the development phase, especially when new technologies are being considered. Ideally, we would like to study the design options by conducting image evaluations using images captured from camera prototypes that cover every design option, but this is typically prohibited by the time and cost involved in building all the necessary hardware, so we need other design tools at our disposal.

A historical approach to creating simulated image examples was to create an optical lens that closely matched the predicted system-level image quality effects of the proposed camera. Images were captured using this lens and then used in image quality studies to predict the proposed camera’s performance. This approach greatly restricted the ability to perform design trades of the optical design because a representative lens would need to be created for each modification in the optical design trade. The other difficulty was determining how close the lens performance needed to match the predicted camera imaging performance such that the evaluation results would be valid, i.e., how good is good enough? For camera systems that will drive important decisions, it is critical that the image simulations do not differ in quality from the operational images that will drive those decisions.

With the advent of digital imaging and faster computers, processing techniques were advancing enough to allow accurate digital image simulations using mathematical models that encompass every step of the imaging process, i.e., the imaging chain.1,2 Each element of the imaging chain could be mathematically modeled and linked together to capture the interactions between the links. Ultimately, very accurate image simulations could be produced using the imaging chain models that were indistinguishable from the operational images captured when the camera was built from the same design. Imaging chain models have become invaluable at displaying image quality differences between camera designs that have the same top-level design parameters, e.g., the same camera optics size and sensor size, but subtle differences at the component-level designs that cannot be easily translated into the predicted image quality differences, e.g., optical quality and detector sensitivity. Figure 2 shows image simulations of two design options for a panchromatic remote sensing camera that has the same optics aperture size, focal length, and detector size but has different optical aberrations and detector sensitivities. Modeling the imaging chain has become a critical design tool to assess camera designs before hardware is built and prevent unwanted surprises after the camera is built and the first images are acquired.

Fig. 2

Simulations from imaging chain models allow us to see image quality differences from two camera designs that have the same top-level requirements but subtle differences in the component designs. The image on the left was created from a design option that explored lowering the manufacturing cost, but this process increased optical aberrations and used a sensor with lower sensitivity, resulting in a noticeable image quality degradation in the image simulations.


2.

Simple Imaging Chain Model

The fundamental links of the imaging chain for a camera system are radiometry, image formation, image sensing, processing, display, and interpretation, as shown in Fig. 3. At first glance, the imaging chain seems fairly simple, but as we dig deeper to develop mathematical models for each of these links, we can quickly get overwhelmed with the subtle complexities involved. It is very important to understand not only the role that each link plays in producing the final image quality, but also how the interaction between the links affects the image quality as well. Let us first look at the most basic imaging chain models for a simple digital camera system before focusing on a more detailed method for modeling the optical wavefronts in the imaging chain.

Fig. 3

The fundamental links of the imaging chain.


2.1.

Radiometry

The imaging chain starts with modeling the radiometry, i.e., the electromagnetic energy that is the essence of creating the image. Imaging is fundamentally the act of detecting and measuring electromagnetic radiation in an attempt to understand a remote object or collections of objects. The electromagnetic spectrum ranges from gamma rays with a wavelength λ in the picometers to radio waves with wavelengths of hundreds of kilometers. The shorter wavelength electromagnetic spectrum is detected via photonic interactions with matter and is often visualized as packets of energy called photons, whereas the longer wavelengths are detected with the aid of antennas and are often visualized as waves. In this paper, we are concerned with the range of the electromagnetic spectrum that includes the visible spectrum (λ=400 to 800 nm) that can be redirected with refractive or reflective optics and detected with bandgap semiconductors. The visible spectrum is a subset of the optical spectrum, which involves frequencies from ultraviolet up to the long-wave infrared (about λ = 10 nm to 1 mm).

We are most interested in modeling the spectral radiance from the object of interest, Lobject(x,y,λ), but this is only one of the contributions to the spectral radiance of the entire scene, Lscene(x,y,λ), that enters the camera aperture.3 Modeling the spectral radiance for the scene is very challenging given the complex variations and sources of the light energy that will arrive at the camera (Fig. 4). For example, the spectrum of the light changes dramatically in the imaging chain if the camera moves from sunlight illumination to indoor incandescent light. To further complicate the model for outdoor lighting, the radiometry is heavily dependent on the atmospheric conditions that the light travels through, which are constantly changing. The radiometry from indoor light is also complicated by the variety of light sources and the illumination condition from the light reflecting off walls and other objects. Probably the best example of varying radiometric conditions for indoor lighting occurs when an image is captured with or without a camera flash (Fig. 5).

Fig. 4

Modeling the spectral radiance that enters the camera is more complicated than simply modeling the spectral radiance from the object of interest.


Fig. 5

The radiometry of the illumination source can dramatically change the appearance of the image, such as incandescent lighting (a) versus a camera flash (b).


Various software programs have been created to assist modeling the complex scene spectral radiance. Atmospheric models, such as moderate resolution atmospheric transmission (MODTRAN), produced by Spectral Sciences Inc., Burlington, Massachusetts and the U.S. Air Force, are usually employed to model the atmospheric propagation of the electromagnetic radiation. The scene can be modeled using a physics-based radiometric scene generation tool that builds up an object from smaller facets, with each facet comprising its own material properties. An example of a radiometric scene generating tool is the digital imaging and remote sensing image generation (DIRSIG) model developed at the Rochester Institute of Technology.

2.2.

Image Formation

The image formation element of the imaging chain starts with the spectral radiance from the scene, Lscene(x,y,λ), entering the aperture of the camera. As the light waves propagate through the imaging elements to form the image on the sensor, we must consider the modifications to the spatial distribution of the light energy before we can calculate the resulting photons that strike each detector. If we assume that the optical system behaves as a linear shift-invariant system, then we can model the spatial modifications by understanding the effects on a single point of light, resulting in a point spread function (PSF).4–6 The resulting spatial modification to the spectral radiance in the imaging plane, Limage(x,y,λ), can then be modeled by simply convolving the PSF for the system, PSFsystem, with the scene spectral radiance, i.e.,

Eq. (1)

$$L_{\text{image}}(x,y,\lambda)=\iint L_{\text{scene}}(\alpha,\beta,\lambda)\,PSF_{\text{system}}(x-\alpha,y-\beta,\lambda)\,d\alpha\,d\beta$$

Eq. (2)

$$=L_{\text{scene}}(x,y,\lambda)*PSF_{\text{system}}(x,y,\lambda),$$
where the symbol “*” denotes the convolution operation. The system PSF is obtained by convolving together the individual PSFs from each element of the imaging chain, i.e.,

Eq. (3)

$$PSF_{\text{system}}(x,y,\lambda)=PSF_1(x,y,\lambda)*PSF_2(x,y,\lambda)*PSF_3(x,y,\lambda).$$

It is also very helpful to understand how the imaging system is modifying the contrast of the spatial frequencies in the scene by calculating the Fourier transform of the system PSF to give us the system transfer function. In Fourier space, the convolution operation becomes a multiplication operation, so the transfer function for the entire camera system, Hsystem, is obtained by simply multiplying together the individual transfer functions from each element in the imaging chain that contribute to the system transfer function, i.e.,

Eq. (4)

$$H_{\text{system}}(\xi_x,\xi_y,\lambda)=H_1(\xi_x,\xi_y,\lambda)\,H_2(\xi_x,\xi_y,\lambda)\,H_3(\xi_x,\xi_y,\lambda),$$
where ξx and ξy are the corresponding spatial frequencies for x and y. It is useful to separate the modulation transfer function (MTF) and the phase transfer function (PTF) components of the transfer function, i.e., 

Eq. (5)

$$H(\xi_x,\xi_y,\lambda)=MTF(\xi_x,\xi_y,\lambda)\,e^{i\,PTF(\xi_x,\xi_y,\lambda)}.$$

The MTF shows us how the transfer function modifies the contrast of each spatial frequency and the PTF shows us how the transfer function modifies the phase of each spatial frequency.
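
As an illustration of Eqs. (4) and (5), the following minimal sketch (Python with NumPy, not from the original paper) multiplies a few stand-in component transfer functions together and then separates the result into its MTF and PTF; the component shapes and all numerical values are placeholder assumptions rather than a particular camera design.

```python
import numpy as np

# Spatial frequency grid (cycles/mm); values are illustrative placeholders.
xi = np.linspace(-100.0, 100.0, 401)

# Hypothetical component transfer functions (optics, detector aperture, motion).
H_optics = np.exp(-(xi / 60.0) ** 2)                                # stand-in optics OTF
H_detector = np.sinc(0.010 * xi)                                    # 10-um detector aperture
H_motion = np.sinc(0.005 * xi) * np.exp(-1j * np.pi * 0.005 * xi)   # stand-in motion term with a phase

# Eq. (4): the system transfer function is the product of the component transfer functions.
H_system = H_optics * H_detector * H_motion

# Eq. (5): split the complex transfer function into its MTF and PTF.
MTF_system = np.abs(H_system)
PTF_system = np.angle(H_system)
```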

A broader PSF translates into a transfer function that drops off more rapidly in the higher spatial frequencies, resulting in a blurrier image (Fig. 6). Thus, the PSF and its corresponding transfer function have a direct impact on the image quality, so it is very important to understand the elements in the imaging chain that contribute to them. For most camera systems, the biggest contributor to the system PSF is the optical PSF and its corresponding optical transfer function (OTF).

Fig. 6

An imaging system with a lower transfer function will produce an image with more blurring.


2.2.1.

Optical PSF and coherence

To visualize the image formation process of the optical PSF in the imaging chain, let us start with a broadband point source in object space. This point source could be radiating on its own or scattering light, but either way the point source is radiating spherical waves. A small solid angle patch of these spherical waves enters the entrance aperture of our imaging system. At this moment, if we were to stop time and investigate the field that exists at the entrance aperture from the point source, we would notice that the field is spatially correlated across the entrance aperture. This spatial correlation is required for the system to make a diffraction-limited PSF. If there is a loss of spatial correlation across the entrance aperture, then the PSF becomes wider, resulting in a blurry image. Propagation through the atmosphere or aberrations in the optics could cause this loss of spatial correlation. If we now look along the axis of propagation, we would notice a very short correlation length; this direction is often thought of as the temporal correlation length. This lack of temporal correlation is a direct result of the broadband nature of the point source. However, one can imagine that each wavelength creates a colocated PSF in the image plane whose width is determined by the wavelength. These colocated PSFs then add as intensities, because of the temporal averaging of the detector, to form one broadband PSF.

To understand the difference between coherent and incoherent imaging, we must bring in another point source. If both point sources are broadband, then they both have short correlation times. The measured intensity is the time average of the modulus squared of the field and all cross-terms from the two point sources average to zero because there is no correlation between the two point sources, resulting in the image of the two point sources adding as intensities. This type of imaging is, therefore, linear in intensity, i.e., the modulus squared of the field. If both of the point sources are monochromatic with the same wavelength, then the two point sources differ by no more than a relative phase. This time the interference pattern set up by the two fields is stationary, resulting in a distribution of bright and dark regions. In other words, the cross-terms do not average to zero. This type of imaging is, therefore, linear in field. In both cases, coherent and incoherent, it is possible to add the fields, perform a modulus square, and then do a time average to get the measured intensity. It just so happens that in the incoherent case, all the cross-terms average to zero, allowing one to just add the intensities.

A full image can be thought of as a collection of closely spaced point sources. If these point sources are uncorrelated, which is the case with solar illumination of an outdoor scene, then we will have incoherent imaging. If, however, there is a phase relationship between the point sources, which is the case when a scene is illuminated with a laser, then we have coherent imaging. It is important to remember that in both cases, the field from one point source is spatially coherent across the entrance aperture. The previous discussion of coherent and incoherent imaging has only scratched the surface of this rich and complex field; the interested reader is directed to Refs. 5, 7, and 8.

2.2.2.

Optical diffraction PSF and OTF

Let us go back to the entrance aperture and follow the spatially coherent section of a spherical field to the focal plane. Depending on the distance to the point source and, to a lesser extent, the size of the entrance aperture, the field at the entrance aperture may appear to be a plane wave, a parabolic wave, or a spherical wave. The optical system is designed to take the diverging wavefront at the entrance aperture and turn it into a converging spherical wave at the exit pupil. This converging spherical wave is called the Gaussian reference sphere. The center of the Gaussian reference sphere is on the focal plane and is called the Gaussian reference point. If the point sources in the object space are mapped to their corresponding Gaussian reference points, then we have what is called the Gaussian reference image. The Gaussian reference image is an imaginary perfect unaberrated image that does not have the blurring effect of the PSF and is the starting point for modeling the optical wavefronts in the imaging chain. Each point in the focal plane can be thought of as the center of a set of converging waves on a Gaussian reference sphere from an exit pupil, but each point may be the result of different views of the exit pupil. This effect can manifest itself in two ways, first as a warping of the Gaussian reference image and second as a spatially varying PSF. If the spatially varying PSF is small enough to be ignored, then the system is said to be a linear shift-invariant system. Another term that is often used is isoplanatic patch, which refers to the region of the image plane for which the spatial variability of the PSF is small enough to be considered constant.

If we calculate the Fresnel number to determine what fidelity of the diffraction theory is required to propagate the field at the exit pupil to the image plane, we would find that the Fresnel number is not much less than one, which is required in order to use the Fraunhofer diffraction theory. The diffraction calculation for a camera requires Fresnel diffraction but because the optical system has created a wavefront that is converging to a point in the image plane, the parabolic phase term in Fresnel diffraction is canceled with the parabolic phase of the converging wavefront. This, fortunately, leaves us with the mathematics of the Fraunhofer diffraction theory. It should be pointed out that our visualization is for converging spherical wavefronts, but our mathematics is for converging parabolic waves, which is a result of the paraxial approximation, i.e., considering the rays that are near the optical axis.

The converging field at the exit pupil has an amplitude that depends on the intensity of the point source in object space and on the complexities of the propagation from the source to the imaging system. Normally, the concept of radiometry is separated from the magnitude of the PSF. The field at the exit pupil is taken to be unity so that the PSFs from all point sources have the same magnitude. The radiometry of the scene is then applied by scaling the values of the Gaussian reference image. The converging field at the exit pupil also has phase aberrations relative to the perfect Gaussian reference sphere. These phase aberrations are called wavefront error (WFE).

As an example of the Fraunhofer diffraction PSF, let us look at a clear circular aperture of diameter D with no WFE. The Fraunhofer diffraction pattern for light at wavelength λ with incident electric field Eo imaged with a lens of focal length f will be

Eq. (6)

$$PSF_{\text{diff-circ}}(r,\lambda)=\left[\frac{E_o\pi D^2}{2\lambda f}\right]^2\left[\frac{2J_1\!\left(\frac{\pi D r}{\lambda f}\right)}{\frac{\pi D r}{\lambda f}}\right]^2,$$
where $J_1$ is the first-order Bessel function and $r=\sqrt{x^2+y^2}$. This shape is often called an Airy pattern and consists of a pattern of concentric rings as shown in Fig. 7. The first ring of zeros occurs at

Eq. (7)

$$r\approx 1.22\left(\frac{\lambda f}{D}\right)=1.22\,\lambda\,(f/\#),$$
where

Eq. (8)

$$f/\#=\frac{f}{D}.$$

Fig. 7

The left column shows the shape of the exit aperture and the right column shows the shape of the point spread function (PSF) on the focal plane, often called an Airy pattern. The top row is a cross-section of the functions and the bottom row shows an intensity plot of the functions. The first zero of the PSF occurs at r = 1.22(λf/D).


The Fourier transform of the diffraction PSF gives the same result as the autocorrelation of the aperture pattern, giving us the diffraction OTF.

Eq. (9)

$$H_{\text{diff-circ}}(\rho,\lambda)=\begin{cases}\dfrac{2}{\pi}\left[\cos^{-1}\!\left(\dfrac{\rho}{\rho_c}\right)-\dfrac{\rho}{\rho_c}\sqrt{1-\left(\dfrac{\rho}{\rho_c}\right)^2}\,\right] & \text{for } \dfrac{\rho}{\rho_c}\le 1\\[2ex] 0 & \text{for } \dfrac{\rho}{\rho_c}>1,\end{cases}$$
where

Eq. (10)

$$\rho_c=\frac{D}{\lambda f}=\frac{1}{\lambda\,(f/\#)},$$
and $\rho=\sqrt{\xi_x^2+\xi_y^2}$. Note that the diffraction OTF, as shown in Fig. 8, has a cutoff frequency ρc, above which the contrast of the higher spatial frequencies is zero; thus, the diffraction from the aperture imposes a limit to the resolution of the optical system that improves with increasing aperture size. The optical cutoff frequency, therefore, imposes a fundamental resolution limit on the entire camera system because the system transfer function will be zero for all spatial frequencies higher than ρc, even when all of the individual transfer functions are multiplied together.
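
The diffraction OTF of Eqs. (9) and (10) is straightforward to evaluate numerically. The following is a minimal sketch assuming frequencies in cycles/mm and wavelengths in mm; the function name and the example values (λ = 0.5 μm, f/4) are illustrative choices, not from the paper.

```python
import numpy as np

def diffraction_otf_circ(rho, wavelength, f_number):
    """Diffraction OTF of a clear circular aperture, Eq. (9), with the cutoff of Eq. (10).

    rho        : spatial frequency (cycles per unit length in the image plane)
    wavelength : wavelength of light (same length units)
    f_number   : focal length divided by aperture diameter
    """
    rho = np.asarray(rho, dtype=float)
    rho_c = 1.0 / (wavelength * f_number)      # optical cutoff frequency, Eq. (10)
    s = rho / rho_c
    otf = np.zeros_like(s)
    inside = s <= 1.0
    otf[inside] = (2.0 / np.pi) * (np.arccos(s[inside])
                                   - s[inside] * np.sqrt(1.0 - s[inside] ** 2))
    return otf

# Example: lambda = 0.5 um = 0.0005 mm and f/4 gives a cutoff of 500 cycles/mm.
rho = np.linspace(0.0, 600.0, 601)
H_diff = diffraction_otf_circ(rho, 0.0005, 4.0)
```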

Fig. 8

The optical transfer function for an optical system that is linear in intensity and has an exit pupil as shown in Fig. 7. A system that is linear in intensity is often called an incoherent system. The optical transfer function (OTF) has a cut-off at ρc=D/(λf).


2.2.3.

WFE and OTF

Although many simple imaging chain models only consider the aperture diffraction effects on the OTF, there are many other factors that will alter the functional form of the OTF. As discussed earlier, aberrations in the imaging elements will cause departures from the ideal spherical wavefronts in the optical system. Accurately modeling the WFE caused by these aberrations can be very complicated, but is essential in order to properly model the image quality of the camera. We will return to this point as the focus of Sec. 3.

The addition of the aberrations will result in a MTF that is equal to or lower than the diffraction MTF, i.e.,

Eq. (11)

$$MTF_{\text{aberrated}}(\xi_x,\xi_y)\le MTF_{\text{diff}}(\xi_x,\xi_y).$$

The WFE caused by aberrations will, therefore, reduce the OTF and degrade the image quality with additional blurring (Fig. 9). It is common to characterize the WFE of an optical system by the root-mean-square (RMS) WFE, WFERMS, i.e., the square root of the statistical variance of the optical path difference over the entire wavefront. When the aberrations are not known, such as early in the design phase of an optical system, a random phase error can be used to model a nominal WFE produced by the aberrations and the optics manufacturing process. The optics degradation is modeled as an optical quality factor (OQF) transfer function that is multiplied with the diffraction transfer function to get the OTF. Hufnagel modeled the OQF transfer function for aberrations and high-frequency surface roughness as9

Eq. (12)

$$H_{\text{OQF-Hufnagel}}(\rho)=e^{-4\pi^2\,WFE_{RMS}^2\left(1-e^{-4\rho^2/l^2}\right)},$$
where l is the spatial frequency of the correlation length but can be used as a tuning parameter to fit measured optical data if they are available. It should be pointed out that some of the original work on what is now called the Hufnagel model was done by O’Neill10 and developed further by Barakat.11 One key idea that is often overlooked is that the term $e^{-4\rho^2/l^2}$ represents a correlation function that may not be a Gaussian.
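
A minimal sketch of Eq. (12) is shown below; the OQF is computed and then multiplied with a diffraction MTF, as described above. The linear placeholder used for the diffraction MTF and the values chosen for WFE_RMS and l are assumptions for illustration only.

```python
import numpy as np

def oqf_hufnagel(rho, wfe_rms, l):
    """Hufnagel optical quality factor of Eq. (12).

    rho     : spatial frequency
    wfe_rms : RMS wavefront error in waves
    l       : correlation-length parameter expressed as a spatial frequency
    """
    correlation = np.exp(-4.0 * rho ** 2 / l ** 2)   # Gaussian-form correlation term
    return np.exp(-4.0 * np.pi ** 2 * wfe_rms ** 2 * (1.0 - correlation))

# The aberrated MTF is the diffraction MTF multiplied by the OQF (values illustrative).
rho = np.linspace(0.0, 500.0, 501)                        # cycles/mm
mtf_diffraction = np.clip(1.0 - rho / 500.0, 0.0, None)   # placeholder diffraction MTF
mtf_aberrated = mtf_diffraction * oqf_hufnagel(rho, wfe_rms=0.1, l=250.0)
```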

Fig. 9

The top left plot shows a contour plot of an example wavefront error on a circular aperture optic. The wavefront error is made up of a random set of aberrations with a high spatial frequency (HSF) wavefront error with a correlation length about a tenth the radius of the optic. The plot in the top right shows the resulting modulation transfer function (MTF) when the wavefront error is scaled to 0.05, 0.1, and 0.2 root-mean-square (RMS) wavefront error (WFE) at 670 nm. The dashed line is for zero wavefront error.


Although there is a correlation between WFERMS and the reduction of the OTF, different aberrations at the same WFERMS can have different effects on the resulting image quality, so it is very important to model the WFE from the actual aberrations if they are known. As an example, let us look at aberrations that cause a low spatial frequency WFE compared to aberrations that cause a high spatial frequency WFE. Figure 10 shows the distinct differences between the low spatial frequency and high spatial frequency aberration optics PTF and PSF even though they both have WFERMS=0.2λ. It is also important to note that different weightings between the types of optical aberrations, e.g., spherical aberration and astigmatism, can occur, which will result in the same RMS WFE but result in different optics MTFs and PSFs. This variability is smaller, and the Hufnagel approximation is more accurate, for a high spatial frequency WFE than for a low spatial frequency WFE, as illustrated in Fig. 11. Figure 12 shows the different effects that the aberrations have on image quality even though they have the same WFERMS. It should be noted that these image examples did not have noise added or any image restoration processes applied to them.

Fig. 10

The left column shows the WFE, phase transfer function (PTF), and PSF for low spatial frequency (LSF) WFE. The right column shows the same plots for high spatial frequency (HSF) WFE. Both have the same RMS WFE of 0.2λ at 670 nm. A key observation is that the LSF WFE disrupts the core of the PSF, whereas the HSF WFE preserves the PSF core but creates a very broad base. The numbering of the PSFs relates to the PSFs illustrated in Fig. 11.


Fig. 11

These MTFs and PSFs are all different even though they all have WFERMS=0.2λ. Note that the variation in possible transfer functions with the same RMS WFE is larger for LSF WFE than for HSF WFE. Note also that the Hufnagel approximation is a reasonable fit to the average of the range of MTFs for both the LSF and the HSF case when the spatial frequency of the correlation length l is modified for each case.


Fig. 12

The top plots show the MTFs for three instances of low and high spatial frequency WFE at two different RMS WFE values. In both cases, the MTFs for the high spatial frequency WFE drop quickly. The low spatial frequency WFE has more variability between instances with the same RMS WFE. The top row of image simulations is for the high spatial frequency WFE and the bottom row of image simulations is for the low spatial frequency WFE. Notice that the images for the high spatial frequency WFE look hazy, while those for the low spatial frequency WFE look blurrier, as one would expect from the PSFs shown in Fig. 10.


2.3.

Motion

The next consideration in the imaging chain is the relative motion between the camera and the scene that will cause additional blurring to the image. General motion transfer functions can be derived from

Eq. (13)

$$H_{\text{motion}}(\vec{\xi}\,)=\frac{1}{t_{\text{exp}}}\int_0^{t_{\text{exp}}}e^{-2\pi i\,\vec{x}_o(t)\cdot\vec{\xi}}\,dt,$$
where xo(t) is the two-dimensional (2-D) path of the motion during the integration time, texp, and ξ is the corresponding 2-D spatial frequency. The two most common motion blurring effects are smear and jitter, shown in Fig. 13.

Fig. 13

Smear and jitter are the most common motion blurs encountered with cameras.


Smear is directional blurring caused by a constant linear motion, such as snapping a long exposure image from the window of a fast-moving car. If we let xo(t)=vt, where v is the relative velocity, and evaluate Eq. (13), then we find the transfer function to be

Eq. (14)

$$H_{\text{smear}}(\xi)=\frac{\sin(\pi d_{\text{smear}}\xi)}{\pi d_{\text{smear}}\xi}\,e^{-i\pi d_{\text{smear}}\xi}=\mathrm{sinc}(d_{\text{smear}}\xi)\,e^{-i\pi d_{\text{smear}}\xi},$$
where

Eq. (15)

$$d_{\text{smear}}=v\,t_{\text{exp}}.$$

The phase term $e^{-i\pi d_{\text{smear}}\xi}$ shifts the centroid of the image by half the smear distance. This phase term is normally ignored because it only represents the shift in the image pixels and does not affect image quality; hence, usually only the smear MTF as a sinc function is used in the modeling. If the smear direction is not along an axis, then a more general smear transfer function can be created by replacing $d_{\text{smear}}\xi$ with $\vec{d}_{\text{smear}}\cdot\vec{\xi}$. The smear PSF along the x-direction is modeled as

Eq. (16)

$$PSF_{\text{smear}}(x,y)=\mathrm{rect}\!\left(\frac{x}{d_{\text{smear}}}\right)\delta(y),$$
where

Eq. (17)

$$\mathrm{rect}\!\left(\frac{x}{w}\right)=\begin{cases}1 & \text{for } |x|<\dfrac{w}{2}\\[1ex] \dfrac{1}{2} & \text{for } |x|=\dfrac{w}{2}\\[1ex] 0 & \text{for } |x|>\dfrac{w}{2}.\end{cases}$$

Note that convolving the smear PSF with the scene performs a uniform integration of the scene radiance over the local distance $d_{\text{smear}}$.

Jitter is the result of very high-frequency vibrations [$f_{\text{jitter}}\gg 1/t_{\text{exp}}$] of the camera system. Jitter can be derived by visualizing the integration time being broken up into many very short subintegration times δt such that the camera motion during the time δt can be assumed stationary. These individual points and subintegration times can be placed into Eq. (13), which will return the transfer function that is the average of all of the spatial shifts. However, if the probability distribution is known or assumed for the spatial location of the points, then one can find the expected value of the average of the transfer function. Because of the form of Eq. (13), the expected value of the average of the transfer function is the Fourier transform of the probability distribution. This allows us to visualize the probability distribution as the PSF for jitter. Often jitter is modeled as a Gaussian probability distribution given by

Eq. (18)

$$PSF_{\text{jitter}}(x,y)=\frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho_{xy}^2}}\,e^{-\frac{1}{2(1-\rho_{xy}^2)}\left[\left(\frac{x}{\sigma_x}\right)^2+\left(\frac{y}{\sigma_y}\right)^2-2\rho_{xy}\left(\frac{xy}{\sigma_x\sigma_y}\right)\right]},$$
with a transfer function given by

Eq. (19)

$$H_{\text{jitter}}(\xi_x,\xi_y)=e^{-2\pi^2\left(\xi_x^2\sigma_x^2+\xi_y^2\sigma_y^2+2\rho_{xy}\xi_x\xi_y\sigma_x\sigma_y\right)},$$
where σx and σy are the standard deviations of the jitter motion in the x and y directions, respectively, and ρxy is the correlation between the two directions.
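
The smear and jitter transfer functions of Eqs. (14) and (19) reduce to a few lines of code. The sketch below drops the smear phase term (as discussed above) and uses illustrative values for the smear distance and jitter standard deviations; the frequency units (cycles/mm) and all parameter values are assumptions.

```python
import numpy as np

def mtf_smear(xi, d_smear):
    """Smear MTF from Eq. (14); the linear phase term is dropped as discussed above."""
    return np.abs(np.sinc(d_smear * xi))             # np.sinc(x) = sin(pi x)/(pi x)

def mtf_jitter(xi_x, xi_y, sigma_x, sigma_y, rho_xy=0.0):
    """Gaussian jitter transfer function of Eq. (19)."""
    return np.exp(-2.0 * np.pi ** 2 * (xi_x ** 2 * sigma_x ** 2
                                       + xi_y ** 2 * sigma_y ** 2
                                       + 2.0 * rho_xy * xi_x * xi_y * sigma_x * sigma_y))

# Illustrative values: 2 um of smear and 1 um RMS jitter, frequencies in cycles/mm.
xi = np.linspace(0.0, 200.0, 201)
H_motion = mtf_smear(xi, d_smear=0.002) * mtf_jitter(xi, 0.0, sigma_x=0.001, sigma_y=0.001)
```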

2.4.

Image Sensing

Modeling the sensor requires us to understand the impact that the sensor has on the spatial properties of the image as well as the electromagnetic energy reaching the detector. Let us first consider how a digital sensor alters the spatial properties of the image. The predominant effect in most sensors is the integration of the incident light across the aperture shape of each detector. For a sensor with rectangular detectors of size dx by dy, this integration will cause a blurring modeled as

Eq. (20)

$$PSF_{\text{det-ap}}(x,y)=\mathrm{rect}\!\left(\frac{x}{d_x},\frac{y}{d_y}\right)=\mathrm{rect}\!\left(\frac{x}{d_x}\right)\mathrm{rect}\!\left(\frac{y}{d_y}\right),$$
with the transfer function given as

Eq. (21)

$$H_{\text{det-ap}}(\vec{\xi}\,)=\mathrm{sinc}(d_x\xi_x,d_y\xi_y)=\frac{\sin(\pi d_x\xi_x)}{\pi d_x\xi_x}\,\frac{\sin(\pi d_y\xi_y)}{\pi d_y\xi_y}.$$

Other blurring effects need to be incorporated into the sensor model, such as carrier diffusion, but these will depend on the specific sensor design and architecture and will not be considered here for our simple imaging chain model.

The other predominant effect that needs to be modeled for a digital sensor is the sampling. The simplest sensor model can be given by the blurring of the optical image by the detector PSF, PSFdet(x,y), followed by a sampling at each detector location (Fig. 14). For an M×N detector array with each detector a distance px and py from the next, we can model the detector blur and sampling on the incident radiance L(x,y) as

Eq. (22)

$$g_{\text{detector}}(x,y)=\sum_{m=1}^{M}\sum_{n=1}^{N}\left[L(x,y)*PSF_{\text{detector}}(x,y)\right]\delta(x-mp_x)\,\delta(y-np_y).$$

Fig. 14

A simple model for an imaging sensor is a blur operation followed by a sampling operation.


Mathematically, gdetector(x,y) is a distribution function because of the Dirac delta distribution in its definition. The operation of sampling is where our visualization transitions from continuous to discrete. An alternative representation of gdetector(x,y) can be purely discrete, i.e.,

Eq. (23)

$$g_{\text{detector}}(m,n)=\Bigl.L(x,y)*PSF_{\text{detector}}(x,y)\Bigr|_{\substack{x=mp_x\\ y=np_y}}.$$

The distances px and py are called the sampling pitch of the detector and can have a profound effect on the image quality. All spatial frequencies higher than the Nyquist frequency, defined by

Eq. (24)

$$\xi_N\equiv\frac{1}{2p},$$
for either direction will appear in the image as a lower alias spatial frequency. This effect is easily seen when the spacing between lines in the scene is scaled down in the image to a spacing smaller than twice the detector sampling pitch (Fig. 15). The sampling pitch of the detector, therefore, imposes a fundamental limitation on the image resolution by limiting the spacing between objects that can be resolved.
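
A small numerical sketch makes the folding explicit: a scene frequency above the Nyquist frequency of Eq. (24) produces exactly the same detector samples as its lower alias. The pitch and frequencies below are illustrative assumptions.

```python
import numpy as np

p = 0.010                                # detector pitch: 10 um = 0.010 mm
nyquist = 1.0 / (2.0 * p)                # Eq. (24): 50 cycles/mm

xi_scene = 70.0                          # scene frequency above Nyquist (cycles/mm)
x_samples = np.arange(0, 256) * p        # detector sample locations
sampled = np.cos(2.0 * np.pi * xi_scene * x_samples)

# The sampled pattern is indistinguishable from a lower, aliased frequency.
xi_alias = abs(xi_scene - 1.0 / p)       # 70 - 100 -> 30 cycles/mm
aliased = np.cos(2.0 * np.pi * xi_alias * x_samples)
print(np.allclose(sampled, aliased))     # True: the two patterns match at every sample
```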

Fig. 15

High frequency line patterns appear as lower frequency patterns as the sampling distance between detectors increases.


Let us now consider the light energy that reaches each detector. For a detector with a spectral bandpass of λmin to λmax, the radiant flux (watts) on the detector is given by

Eq. (25)

$$\Phi_{\text{detector}}(x,y)=\frac{A_{\text{detector}}\,A_{\text{aperture}}}{f^2(m+1)^2}\int_{\lambda_{\min}}^{\lambda_{\max}}\tau_{\text{optics}}(\lambda)\,L_{\text{image}}(x,y,\lambda)\,d\lambda,$$
where Adetector is the area of the detector, Aaperture is the area of the aperture, m is the magnification, and τoptics(λ) is the optical transmittance. If the focal length is much smaller than the distance between the camera and the object being imaged, then $(m+1)\approx 1$, and for a clear circular aperture with diameter D, the radiant flux on a square detector of width d is

Eq. (26)

$$\Phi_{\text{detector}}(x,y)=\frac{\pi d^2}{4(f/\#)^2}\int_{\lambda_{\min}}^{\lambda_{\max}}\tau_{\text{optics}}(\lambda)\,L_{\text{image}}(x,y,\lambda)\,d\lambda.$$

The digital imaging sensor model converts the spatially distributed light energy of the image into the digital counts that record the image. The most common digital sensors are composed of individual detectors that generate electrons from the incident light through the photoelectric effect; thus, the average number of photoelectrons generated at the detector during the exposure time texp is given by

Eq. (27)

$$s_{\text{detector}}(x,y)=\frac{\pi d^2\,t_{\text{exp}}}{4(f/\#)^2}\int_{\lambda_{\min}}^{\lambda_{\max}}\frac{\lambda}{hc}\,\eta(\lambda)\,\tau_{\text{optics}}(\lambda)\,L_{\text{image}}(x,y,\lambda)\,d\lambda,$$
where η(λ) is the quantum efficiency of converting photons to electrons in the detector, h is Planck’s constant, and c is the speed of light in a vacuum. The electrons create a voltage potential that is then converted to integer digital count values correlated to the brightness of the light on each detector, given by

Eq. (28)

$$\text{counts}(x,y)=\left\lfloor s_{\text{detector}}(x,y)\,\frac{N_{DR}}{N_{\text{well}}}\right\rfloor,$$
where the operation $\lfloor z\rfloor$ returns the integer value of z, Nwell is the total number of electrons that the detector will hold before saturating (known as the well depth or full well capacity), and NDR is the dynamic range of the digital counts, e.g., $2^8=256$ counts for an 8-bit dynamic range.

Unfortunately, the digital count value of the image does not have a one-to-one relationship with the scene radiance, Lscene(x,y,λ), due to the blurring caused by the PSF, the sampling, and the noise that is introduced. The noise introduces an undesired randomness to the brightness of the digital count value for each pixel, with the primary noise contributors in most digital cameras coming from photon noise, dark noise, and quantization noise. The photons arriving at the sensor do not arrive in a steady stream but in random fluctuations that follow a Poisson distribution, giving rise to the photon noise that increases with the square root of the signal intensity. The dark noise appears as fluctuations in the digital counts even when no signal is present. There are many causes for the dark noise, such as errors in the analog-to-digital converter that creates the digital count value, so this value in the imaging chain model is captured from measured data for the sensor being modeled. Finally, we have quantization noise that arises from quantizing the signal electrons into integer digital count values, creating a randomness that follows a uniform distribution. Putting these noise sources together gives us the root-sum-square noise for the sensor electrons as

Eq. (29)

$$\sigma_{\text{noise}}(x,y)=\sqrt{\sigma_{\text{photon}}^2(x,y)+\sigma_{\text{quantization}}^2(x,y)+\sigma_{\text{dark}}^2(x,y)}$$

Eq. (30)

$$=\sqrt{s_{\text{detector}}(x,y)+\frac{1}{12}\left(\frac{N_{\text{well}}}{N_{DR}}\right)^2+\sigma_{\text{dark}}^2(x,y)}.$$

An interesting detail regarding the photon noise is that the photons arrive following a Poisson distribution but then drive a binomial probability distribution whose probability of success is given by the quantum efficiency. When a Poisson process drives a binomial process, the result is also a Poisson process, so it is still valid to use a Poisson distribution for the photoelectrons.
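
Putting Eqs. (27) through (30) together, a single-pixel sketch of the sensor signal and noise model might look like the following; every numerical parameter (radiance, quantum efficiency, well depth, dark noise) and the use of a single effective wavelength in place of the spectral integral are assumed values for illustration only.

```python
import numpy as np

# Illustrative sensor parameters (assumptions, not a specific design).
h = 6.626e-34            # Planck's constant (J s)
c = 2.998e8              # speed of light (m/s)
d = 10e-6                # detector width (m)
f_number = 4.0
t_exp = 0.01             # exposure time (s)
eta = 0.6                # quantum efficiency
tau = 0.9                # optics transmittance
L = 0.5                  # band-integrated image radiance (W m^-2 sr^-1), assumed constant
lam = 0.55e-6            # effective wavelength for the photon energy (m)
N_well = 50000.0         # full-well capacity (electrons)
N_DR = 2 ** 12           # 12-bit dynamic range
sigma_dark = 10.0        # dark noise (electrons), taken from assumed measurements

# Eq. (27) with the spectral integral collapsed to a single effective wavelength.
s_detector = (np.pi * d ** 2 * t_exp / (4.0 * f_number ** 2)) * (lam / (h * c)) * eta * tau * L

# Eq. (28): quantize the signal into digital counts.
counts = np.floor(s_detector * N_DR / N_well)

# Eqs. (29)-(30): root-sum-square of photon, quantization, and dark noise (electrons).
sigma_noise = np.sqrt(s_detector + (1.0 / 12.0) * (N_well / N_DR) ** 2 + sigma_dark ** 2)
```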

2.5.

Additional Considerations When Putting the Imaging Chain Elements Together

Figure 16 illustrates the fundamental steps of putting together all of the elements of a simple imaging chain model to simulate the image capture portion of the imaging chain. Image processing, such as calibration, is generally applied to the output of the image capture portion of the imaging chain to generate the final image produced by the camera. The image simulations can then be processed to enhance the image quality, displayed, and evaluated by image users to complete the imaging chain for assessing the image quality of the camera design.

Fig. 16

The fundamental steps of the imaging chain model.


It is easy for many developers to oversimplify the imaging chain models, but these simplifications can lead to misleading results when the evaluations are conducted using the image simulations. For example, many simple imaging chain models will only consider the PSF of the Airy pattern, shown in Fig. 7, in their system. Although it is the optics diffraction that sets the fundamental resolution limit of the system PSF, using the Airy pattern as the system PSF overlooks the blurring effects caused by the other elements of the imaging chain that can significantly impact the resulting image quality. Also, the Airy pattern is the circular aperture diffraction PSF for a single wavelength of light and ignores the integration of the PSF over the spectral bandpass of the imaging system. If the camera integrates the light over the spectral range λmin to λmax, then the radiant flux, ϕimage(x,y), reaching the image plane is related to Lscene(x,y,λ) by

Eq. (31)

$$\phi_{\text{image}}(x,y)\propto\int_{\lambda_{\min}}^{\lambda_{\max}}L_{\text{scene}}(x,y,\lambda)*PSF_{\text{system}}(x,y,\lambda)\,d\lambda,$$
where the spectral response of the system is captured in PSFsystem(x,y,λ).

As an illustration, let us consider a gray world where the spectral radiance of the scene, Lscene(x,y,λ), is flat across the spectral range λmin to λmax and the spectral response of the system is flat across the spectrum as well. In this special case, the integration over the spectral bandpass will simply average the spatial scaling of the PSF over the range λmin to λmax. Even if we only look at the diffraction PSF and corresponding OTF for a circular aperture over the visible spectral bandpass λmin=0.4μm and λmax=0.8μm, as shown in Fig. 17, we see differences from the single wavelength model. Note that the side rings of the Airy pattern smooth out after the PSF is integrated over the spectral bandpass.
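
Under the gray-world assumption described above, the polychromatic OTF is simply the average of the monochromatic diffraction OTFs across the band. A minimal sketch follows, with the band sampling and f-number as assumed values:

```python
import numpy as np

def diffraction_otf_circ(rho, wavelength, f_number):
    """Clear circular aperture diffraction OTF, Eq. (9)."""
    s = rho * wavelength * f_number                 # rho / rho_c
    otf = np.zeros_like(s)
    m = s <= 1.0
    otf[m] = (2.0 / np.pi) * (np.arccos(s[m]) - s[m] * np.sqrt(1.0 - s[m] ** 2))
    return otf

f_number = 4.0
rho = np.linspace(0.0, 700.0, 701)                  # cycles/mm
wavelengths_mm = np.linspace(0.4e-3, 0.8e-3, 41)    # the 0.4 to 0.8 um band

# Gray-world assumption: flat scene spectrum and flat system response, so the
# polychromatic OTF is the unweighted average of the monochromatic OTFs.
otf_poly = np.mean([diffraction_otf_circ(rho, lam, f_number) for lam in wavelengths_mm], axis=0)
```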

Fig. 17

Both plots show the different wavelengths as dashed lines and the spectral average as the solid line. The left plot shows the PSF and the right shows the OTF. The inset on the left is a zoomed view of the PSF that appears in the gray region on the main plot.


When building an imaging chain model for a digital imaging system, particular attention also needs to be paid to the relationship between the optics and the digital sensor. We saw earlier that each imposes a fundamental limit on the detail that can be imaged with a digital camera and the image quality is dependent on the relationship between these limiting factors. The diffraction from the optical aperture imposes a limit on the highest spatial frequency that can be captured [Eq. (10)] and likewise the detector sampling imposes a limit from the Nyquist frequency [Eq. (24)]. The ratio of the sampling frequency and the optical cutoff frequency is a design parameter Q defined as12

Eq. (32)

$$Q\equiv\frac{1/p}{1/(\lambda\,f/\#)}=\frac{2\xi_N}{\rho_c}=\frac{\lambda\,f/\#}{p}.$$

Historically, Q has been defined as the ratio of the sampling frequency to the optical cutoff frequency rather than the Nyquist frequency to the optical cutoff frequency, so the detector and optics resolution limits are equal when ξN=ρc and Q=2. The resolutions of digital cameras that are designed with Q>2 are fundamentally limited by the optics diffraction, whereas digital cameras designed with Q<2 are fundamentally limited by the detector sampling.
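
For example, under Eq. (32) an assumed design with λ = 0.55 μm, f/10 optics, and a 5-μm detector pitch gives Q ≈ 1.1, a detector-sampling-limited design; the values here are purely illustrative.

```python
# Design parameter Q from Eq. (32), with illustrative values.
wavelength = 0.55e-6     # m
f_number = 10.0
pitch = 5.0e-6           # detector pitch (m)

nyquist = 1.0 / (2.0 * pitch)                # Eq. (24)
cutoff = 1.0 / (wavelength * f_number)       # Eq. (10)
Q = wavelength * f_number / pitch            # equivalently 2 * nyquist / cutoff

print(Q)                                     # ~1.1: detector-sampling limited (Q < 2)
```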

It may seem intuitive that digital cameras should all be designed with Q=2, but other factors, such as signal-to-noise ratio and motion blurring, will influence the final image quality and Q=2 may not be the best design when all image quality factors are considered (Fig. 18). Most digital cameras will produce brighter, sharper images when Q<2, with Q typically in the range 0.5<Q<1.5. It is important to note that Q only compares the resolution limits between the optics and the detector sampling and does not take into consideration the system transfer function. The impact of the OTF is a significant factor in the system transfer function when Q ≳ 0.1 (Fig. 19), so for most imaging systems, we must understand the effect of the optical WFE on the system transfer function, which will influence the image quality.

Fig. 18

The value of Q will influence the final system image quality and must be optimized for each digital camera design.


Fig. 19

Comparing the diffraction-limited optics MTF with the detector aperture MTF shows that the detector aperture MTF dominates for Q ≲ 0.1, but the OTF is a significant factor when Q ≳ 0.1.


3.

Wavefront Error and Optical Transfer Function Generation

3.1.

Problem Motivation

In the previous section, we only considered a very simple form of the OTF that included the diffraction from a clear circular aperture with a generalized WFE that can be modeled as an OQF, although the importance of properly modeling the WFE was illustrated. If we simply extend the aperture model to include a circular central obscuration of diameter Dobs, as we would see in a Cassegrain telescope, then the aperture diffraction PSF will now have the form

Eq. (33)

$$PSF_{\text{obs}}(r,\lambda)=\left[\frac{E_o\pi D^2}{4\lambda f}\,\frac{1}{(1-\epsilon^2)}\right]^2\left[\frac{2J_1\!\left(\frac{\pi D r}{\lambda f}\right)}{\left(\frac{\pi D r}{\lambda f}\right)}-\frac{2\epsilon^2 J_1\!\left(\frac{\epsilon\pi D r}{\lambda f}\right)}{\left(\frac{\epsilon\pi D r}{\lambda f}\right)}\right]^2,$$
where

Eq. (34)

$$\epsilon=\frac{D_{\text{obs}}}{D}.$$

The diffraction OTF for a circular aperture with a central obscuration is given by

Eq. (35)

$$H_{\text{diff-circ-obs}}(\rho,\lambda)=\frac{2}{\pi}\,\frac{A+B+C}{1-\epsilon^2},$$
where

Eq. (36)

$$A=\cos^{-1}\!\left(\frac{\rho}{\rho_c}\right)-\frac{\rho}{\rho_c}\sqrt{1-\left(\frac{\rho}{\rho_c}\right)^2},$$

Eq. (37)

$$B=\begin{cases}\epsilon^2\left[\cos^{-1}\!\left(\dfrac{\rho}{\epsilon\rho_c}\right)-\dfrac{\rho}{\epsilon\rho_c}\sqrt{1-\left(\dfrac{\rho}{\epsilon\rho_c}\right)^2}\,\right] & \text{for } 0\le\dfrac{\rho}{\rho_c}\le\epsilon\\[2ex] 0 & \text{for } \dfrac{\rho}{\rho_c}>\epsilon,\end{cases}$$

Eq. (38)

$$C=\begin{cases}-\pi\epsilon^2 & \text{for } 0\le\dfrac{\rho}{\rho_c}\le\dfrac{1-\epsilon}{2}\\[1ex] -\pi\epsilon^2+\epsilon\sin(\phi)+\dfrac{\phi}{2}(1+\epsilon^2)-(1-\epsilon^2)\tan^{-1}\!\left[\dfrac{1+\epsilon}{1-\epsilon}\tan\!\left(\dfrac{\phi}{2}\right)\right] & \text{for } \dfrac{1-\epsilon}{2}\le\dfrac{\rho}{\rho_c}\le\dfrac{1+\epsilon}{2}\\[1ex] 0 & \text{for } \dfrac{\rho}{\rho_c}>\dfrac{1+\epsilon}{2},\end{cases}$$

Eq. (39)

$$\phi=\cos^{-1}\!\left[\frac{1+\epsilon^2-4\left(\frac{\rho}{\rho_c}\right)^2}{2\epsilon}\right].$$

Note that adding the central obscuration to the clear circular aperture has the effect of moving the concentric rings in the PSF closer toward the center while more energy is moved from the central peak to the outer rings (Fig. 20). Also note that the contrast of the mid-spatial frequencies in the diffraction OTF is reduced as the central obscuration increases in size, but the cutoff frequency remains the same.
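
Equations (35) through (39) can be evaluated directly; the following sketch implements the piecewise terms A, B, and C for an assumed obscuration ratio of 0.3 (the function name and example values are illustrative, not from the paper).

```python
import numpy as np

def diffraction_otf_circ_obs(rho, wavelength, f_number, epsilon):
    """Diffraction OTF for a circular aperture with a central obscuration, Eqs. (35)-(39).

    epsilon is the obscuration ratio D_obs / D of Eq. (34).
    """
    rho_c = 1.0 / (wavelength * f_number)
    s = np.atleast_1d(np.asarray(rho, dtype=float)) / rho_c

    # Term A, Eq. (36), valid out to the cutoff (zero beyond it).
    A = np.zeros_like(s)
    m = s <= 1.0
    A[m] = np.arccos(s[m]) - s[m] * np.sqrt(1.0 - s[m] ** 2)

    # Term B, Eq. (37): the same form scaled to the obscuration.
    B = np.zeros_like(s)
    m = s <= epsilon
    se = s[m] / epsilon
    B[m] = epsilon ** 2 * (np.arccos(se) - se * np.sqrt(1.0 - se ** 2))

    # Term C, Eqs. (38)-(39): the overlap term between the two circles.
    C = np.zeros_like(s)
    C[s <= (1.0 - epsilon) / 2.0] = -np.pi * epsilon ** 2
    m = (s >= (1.0 - epsilon) / 2.0) & (s <= (1.0 + epsilon) / 2.0)
    phi = np.arccos(np.clip((1.0 + epsilon ** 2 - 4.0 * s[m] ** 2) / (2.0 * epsilon), -1.0, 1.0))
    C[m] = (-np.pi * epsilon ** 2 + epsilon * np.sin(phi) + 0.5 * phi * (1.0 + epsilon ** 2)
            - (1.0 - epsilon ** 2) * np.arctan((1.0 + epsilon) / (1.0 - epsilon) * np.tan(phi / 2.0)))

    return (2.0 / np.pi) * (A + B + C) / (1.0 - epsilon ** 2)

rho = np.linspace(0.0, 500.0, 501)                     # cycles/mm
H_obs = diffraction_otf_circ_obs(rho, 0.0005, 4.0, epsilon=0.3)
```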

Fig. 20

(a) The PSF for a system with different obscuration sizes and (b) the OTF. The inset in (a) is a zoomed-in section of what is shown in the gray region. The online version is in color and is accompanied by an animation (Video 1, MOV, 6.4 MB) [URL: http://dx.doi.org/10.1117/1.OE.53.8.083103.1].


Although these models for the diffraction OTF are useful for simple imaging chain models, they are still inadequate for more complex optical designs, especially when we need to consider all of the potential sources for WFEs.

The primary driver for the shape of the PSF is the shape of the exit pupil. Historically and in many elementary texts on optics, the shape of the exit pupil is an unobstructed circle. However, in reality, the exit pupil has obstructions and in more complex systems, like segmented arrays and multitelescope systems, the overall shape of the exit pupil is very complex. A secondary effect on the shape of the PSF is that of the WFE. WFE is the spatial variation about the optimal optical path length over the exit pupil. Again, historically and in many elementary texts on optics, WFE would be modeled with a linear combination of Zernike polynomials, which are an infinite set of orthogonal functions over the unit disk. Obviously, this orthogonality is lost on exit pupils with complex shapes. One option is to find a new set of orthogonal functions for the new shapes.13 This works well for a small yet common set of shapes. Another option is to accept the loss of orthogonality and develop a method that allows the usage of any collection of basis functions, including the influence functions of adaptive optics on an arbitrary exit pupil geometry.

The method presented below is a generalized method for tracking WFE in a consistent manner that is applicable to arbitrary pupil geometries. WFE will be visualized as a vector in an infinite-dimensional vector space; however, calculations will be done in a finite-dimensional vector space. This vector space visualization will conveniently allow for the separation of WFE into multiple subspaces that can be aligned with the specifications for a future system. Calculations for RMS WFE, line-of-sight (LOS), and other macroscopic parameters are reduced to fast matrix calculations.

3.2.

OTF Generation

The impulse response function h(x), which is a precursor to the PSF, can be calculated by evaluating the Fraunhofer diffraction integral.

Eq. (40)

$$h(\vec{x})=\int_{D}P(\vec{u}\,)\,e^{-\frac{2\pi i}{\lambda f}(\vec{u}\cdot\vec{x})}\,d^2u,$$
where P(u) is the field at the Gaussian reference sphere, f is the effective focal length of the system, which is the distance from the exit pupil to the focal plane, λ is the wavelength of light, and D is the domain of the exit pupil, which represents the shape of the exit pupil. The PSF is the modulus squared of the impulse response function.

Eq. (41)

$$PSF(\vec{x})=|h(\vec{x})|^2.$$

Before we go much further, it is important that we become explicitly clear about our choice of sign conventions. In general, sign conventions and the handedness of axes are completely arbitrary, but we must be consistent, and when obtaining data from an external source, we must take great care to understand the hidden assumptions in its sign conventions.

We start off with the definition of the optical axis that runs along the center line of the optical system in the direction of propagation. This will be the z axis of our coordinate system. In other words, the z axis points from the exit pupil to the focal plane. We will use a right-handed coordinate system, which will dictate the order of the x and y axes in the focal plane, as shown in Fig. 21. The labels u and v have been used for the axes in the exit pupil. The polar angle θ will be positive when measured from the x axis in the direction the fingers of the right hand point when the thumb of the right hand is pointing in the positive z direction. This is the normal convention for the polar angle θ.

Fig. 21

A right-handed coordinate system is used with the z axis running in the direction of the propagation of light. The diagram shows the location of the PSF in the focal plane as the top of the wavefront tips toward the focal plane. Positive WFE ϕ>0 is colored bluish and is defined as moving in the direction of the z axis.


The complex scalar field at the Gaussian reference sphere at the exit pupil is called the pupil function, P(u), and is used in Eq. (40) to calculate the impulse response function. The pupil function can be broken into two real functions: the aperture function, $0\le A(\vec{u})\le 1$, and the WFE function, $\phi(\vec{u})\in\mathbb{R}$. The aperture function, which is often a binary function but, in principle, can take on values between 0 and 1, represents the relative spatial distribution of the transmission of the exit pupil. The WFE function represents deviations in the wavefront from the ideal Gaussian wavefront.

In this documentation, we have picked ϕ(u)>0 to be in the direction along positive z. This choice forces us to place a minus sign in the exponent of Eq. (40). This choice supports the visualization that the normal vector of the tip-tilt WFE shown in Fig. 21 points toward the center of the PSF.

The Fraunhofer diffraction integral of Eq. (40) has the same form as a Fourier transform integral, provided that we define the forward transform as

Eq. (42)

$$F(\nu)=\int_{-\infty}^{\infty}f(x)\,e^{-2\pi i x\nu}\,dx.$$

In practice, one will be creating sampled images of the exit pupil and then using a discrete Fourier transform (DFT) algorithm to calculate the PSF and OTF with the DFT implemented using a fast Fourier transform (FFT).

To avoid aliasing, one must sample the pupil function with sufficient zero padding. For this discussion, we will assume that a square grid is used and we will only talk about the linear scaling along one side. Let w be the physical width of the sampling grid and Rin be the enclosing radius of the pupil function. For a simple round optic, the enclosing radius is the radius of the optic, but for a complex multisegmented system, it is the radius of the smallest circle that can contain the full system. The enclosing radius must satisfy the following constraint:

Eq. (43)

$$R_{\text{in}}\le\frac{w}{4}.$$

If the square grid is sampled with an n×n array, the sample spacing is w/n. The leftmost axis in Fig. 22 shows the sampling of one side of this grid. The pupil function, P(xi)=Pi, has been sampled at

Eq. (44)

$$x_i=-\frac{w}{2}+i\,\frac{w}{n},\quad\text{where } i\in(0,1,2,\ldots,n-1).$$

Fig. 22

The left axis is the sampling of the pupil function. The second axis shows the scaling resulting from converting the Fraunhofer diffraction integral into a Fourier transform. The next axis shows the width and sampling of the PSF on the focal plane. The last axis shows the sampling of the OTF in the frequency domain.


An arbitrary choice has been made regarding the sampling that is done from −(w/2) to (w/2)−(w/n). We could have easily sampled from −(w/2)+(w/n) to w/2, which would give us consistent results. The current choice is driven by using a computer programming language that references arrays with a zero-based index system. In other words, A[0] is the first element of the array and A[n-1] is the last. For an array with an even number of points and length n, the center value of the function is at array index A[n/2].

Another requirement for sampling the pupil function is that the smallest structure of the pupil function is adequately sampled. This includes the physical structure in the aperture function and the variations in the WFE function. If the size of the resulting grid gets too large for the computing resources of the day, one can use the techniques of Eikenberry et al. or Ransom et al.14,15

Let FFT(Pi)=hj represent the action of a DFT on the Pi data to create the hj data. One should keep in mind that every data value hj is a function of all the input data values Pi. The PSF data can be calculated from the sampled pupil function, Pi, as follows:

Eq. (45)

$$PSF_j=\left|\,FFT(P_i)\,\right|^2,$$
where the PSF data are sampled at

Eq. (46)

$$x_j=-\frac{n\lambda f}{2w}+j\,\frac{\lambda f}{w},\quad\text{where } j\in(0,1,2,\ldots,n-1),$$
where λ is the wavelength of light and f is the effective focal length of the system. This sampling is shown graphically on the third axis in Fig. 22. These PSF data can now be combined with an image gj via a convolution to produce the first step in simulating an optical system. Normally, this convolution is performed via the convolution theorem.

Eq. (47)

$$FFT(\hat{g}_j)=FFT(PSF_j)\times FFT(g_j)=OTF_k\times G_k,$$
where OTFk=FFT(PSFj) and is called the OTF and Gk=FFT(gj). The times symbol × is just an element-by-element multiplication of the data. The real object of interest is the simulated image, g^j, and can be written as

Eq. (48)

$$\hat{g}_j=FFT^{-1}\!\left[OTF_k\times FFT(g_j)\right],$$
where $FFT^{-1}$ represents the operation of an inverse FFT. The OTF is sampled as shown on the rightmost axis in Fig. 22 and below.

Eq. (49)

$$\nu_k=-\frac{w}{2\lambda f}+k\,\frac{w}{n\lambda f},\quad\text{where } k\in(0,1,2,\ldots,n-1).$$

When modeling OTFk, it is often convenient to consider the MTF and PTF, which are related as

Eq. (50)

$$OTF_k=MTF_k\,e^{i\,PTF_k}.$$

The magnitude of the OTF is the MTF, which is scaled such that the zero-frequency value is equal to one, $MTF_{n/2}=1$. The PSF and OTF are a Fourier transform pair,

Eq. (51)

$$OTF_k=FFT(PSF_j),$$
and the PSF is the inverse Fourier transform of the OTF.

Eq. (52)

$$PSF_j=FFT^{-1}(OTF_k).$$
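
The sampling and FFT relationships of Eqs. (43) through (52) can be strung together into a short numerical sketch. The grid size, pupil radius, wavelength, and focal length below are assumed values, the WFE is set to zero, and the sign chosen for the WFE phase term simply follows the convention discussed above.

```python
import numpy as np

# Sampling parameters (illustrative): an n x n grid of physical width w containing
# a circular pupil of radius R_in <= w/4, as required by Eq. (43).
n = 512
w = 0.1                  # physical width of the pupil grid (m)
R_in = w / 4.0           # enclosing radius of the pupil (m)
wavelength = 0.5e-6      # m
f = 1.0                  # effective focal length (m)

# Sample the pupil plane on the grid of Eq. (44).
u = -w / 2.0 + np.arange(n) * (w / n)
U, V = np.meshgrid(u, u)
aperture = ((U ** 2 + V ** 2) <= R_in ** 2).astype(float)   # aperture function A(u)
wfe = np.zeros((n, n))                                       # WFE phi(u) in meters (zero here)
pupil = aperture * np.exp(2j * np.pi * wfe / wavelength)     # pupil function P(u); phase sign per convention

# Eq. (45): PSF from the DFT of the sampled pupil function.
h = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(h) ** 2                     # sampled at the focal-plane spacing lambda*f/w, Eq. (46)

# Eqs. (50)-(51): OTF from the DFT of the PSF, scaled so the MTF at zero frequency is one.
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
otf = otf / otf[n // 2, n // 2]
mtf = np.abs(otf)                        # sampled at the frequency spacing w/(n*lambda*f), Eq. (49)
```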

For a simple round optic of radius R with WFE, one can expand the phase of the pupil function out into a set of orthogonal functions. A common choice for these orthogonal functions is the Zernike polynomials.

Eq. (53)

$$\phi(\vec{u})=\phi(r,\theta)=\phi(\rho R,\theta)=\sum_{j=0}^{\infty}a_j\,Z_j\!\left(\frac{r}{R},\theta\right).$$

Two key benefits of using a set of orthogonal functions to describe WFE are that it is easy to describe the key aberrations with just a few terms and the orthogonality relationship of the basis functions makes it easy to calculate macroscopic parameters with just the coefficients of the basis functions. The orthogonality relationship is as follows:

Eq. (54)

$$\int_0^{2\pi}\!\!\int_0^1 Z_j(\rho,\theta)\,Z_{j'}(\rho,\theta)\,\rho\,d\rho\,d\theta=\pi\,\delta_{j,j'}.$$

The coefficients that describe a given WFE, ϕ(u), can be calculated by taking advantage of the orthogonality relationship.

Eq. (55)

$$a_i=\frac{1}{\pi}\int_0^{2\pi}\!\!\int_0^1 Z_i(\rho,\theta)\,\phi(\rho R,\theta)\,\rho\,d\rho\,d\theta.$$

Once the coefficients $(a_0,a_1,a_2,\ldots,a_n)$ have been calculated, we can easily calculate the following integral:

Eq. (56)

$$\phi^2=\int_0^{2\pi}\!\!\int_0^{R}\phi(r,\theta)^2\,r\,dr\,d\theta,$$
by substituting in Eq. (53) and taking advantage of the orthogonality relationship. Finally, we arrive at

Eq. (57)

$$\phi^2=\sum_{j=0}^{\infty}a_j^2.$$

Therefore, if we know the coefficients $(a_0,a_1,a_2,\ldots,a_n)$, calculating the value of $\phi^2$ is as simple as summing the squares of the coefficients. Another very important way of looking at this is to think of the WFE as a vector made up of the coefficients

Eq. (58)

$$\vec{\phi}=\left(a_0,a_1,a_2,\ldots,a_n\right)^{T},$$
then the integral that provides the value of ϕ2 can be thought of as the dot product of ϕ with itself.

Eq. (59)

$$\phi^2=\vec{\phi}\cdot\vec{\phi}.$$
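
A small numerical check of Eqs. (54) through (59) can be made with the first few Zernike polynomials on the unit disk. In the normalization of Eq. (54), the aperture integral of ϕ² equals π times the sum of the squared coefficients (the factor of π drops out when the mean over the disk is used); the coefficient values and the limited polynomial subset below are illustrative assumptions.

```python
import numpy as np

# The first few Zernike polynomials (piston, tip, tilt, defocus) in the normalization
# of Eq. (54), where each integrates to pi against itself over the unit disk.
def zernike(j, rho, theta):
    if j == 0:
        return np.ones_like(rho)
    if j == 1:
        return 2.0 * rho * np.cos(theta)
    if j == 2:
        return 2.0 * rho * np.sin(theta)
    if j == 3:
        return np.sqrt(3.0) * (2.0 * rho ** 2 - 1.0)
    raise ValueError("only j = 0..3 are implemented in this sketch")

# A WFE built from known coefficients (illustrative values, in waves).
a = np.array([0.0, 0.05, -0.02, 0.10])

# Polar grid over the unit disk.
rho, theta = np.meshgrid(np.linspace(0.0, 1.0, 500), np.linspace(0.0, 2.0 * np.pi, 1000))
d_rho = rho[0, 1] - rho[0, 0]
d_theta = theta[1, 0] - theta[0, 0]
phi = sum(a[j] * zernike(j, rho, theta) for j in range(4))

# The integral of Eq. (56) evaluated numerically (note the rho factor in the area
# element) agrees with pi times the coefficient dot product of Eqs. (57)-(59).
integral = np.sum(phi ** 2 * rho) * d_rho * d_theta
print(integral, np.pi * np.dot(a, a))        # approximately equal
```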

The rest of this work describes how to do these types of integrals when the optic is arbitrarily complex and the orthogonality relationship is no longer valid. This is done by careful bookkeeping of the WFE vector spaces and the calculation of a metric we will call the eta matrix, η. This matrix will allow us to keep the visualization that WFE is a vector in a vector space and will be used to calculate the new dot product in this vector space.

4.

Exit Pupil Modeling for Arbitrarily Complex Systems

We are going to consider a general hierarchy of nested apertures that make up the exit pupil. Each aperture can have a WFE that will affect the total system WFE. Also, each aperture can have child apertures, which are completely contained within the domain of the parent aperture. This hierarchy can be thought of as a simple tool to model a complex set of system specs, or as a method of producing a spatially correlated WFE over a large multisegmented aperture array, or even for modeling a multitelescope system. An example of an arbitrary nested aperture system is shown in Fig. 23. In Sec. 4.7, we will present a full numerical example for a notional multisegmented, multitelescope system. This example will contain the elements needed to demonstrate many of the applications of the η matrix method.

Fig. 23

The generalized aperture shown here corresponds to the tree structure shown in Fig. 24. The inset is the system pupil mask.


4.1.

Aperture Nesting Rules

In general, each aperture has an enclosing disk or other enclosing shape that contains the unobscured domain of the basis functions. For this discussion, we will assume that the enclosing shape is a disk with radius of Rq, where q is the vector-index designator for the given aperture. The enclosing disk is the domain over which the WFE basis functions are defined. If one is using Zernike polynomials, then the basis functions are also orthogonal over the domain of the disk. However, if the domain of the basis functions is not the shape of a disk, then one can think of the radius Rq as a scale parameter of the enclosing shape. For example, if the basis functions are square image files, then the enclosing shape is a square and the scaling parameter Rq can be the distance from the center to one of the corners.

The domain over which the q’th aperture is defined will be referred to as Dq. This region is shown in Fig. 23 as the gray regions within the supporting circles. The gray regions get darker as there are more nested apertures. The q vector indexing method helps visualize the tree structure of the nested apertures. Figure 24 shows the nesting hierarchy for the general hierarchy of nested apertures shown in Fig. 23. The q vector is as long as the deepest trail on the graph from the root to the leaf. Each level corresponds to a dimension in the q vector. The q vector is the trail from the root to the node in question with zero padding. The union of the domains of the leaves of the aperture tree corresponds to the final pupil mask as shown in the inset of Fig. 23.

Fig. 24

The tree structure shown here is for the aperture system shown in Fig. 23. The union of the domains of the gray boxes (the leaf nodes) makes up the pupil mask, which is shown in the inset of Fig. 23.

OE_53_8_083103_f024.png

Apertures can be combined into a hierarchal arrangement to aid in modeling a complex system. Any aperture can have children apertures, which can be used to build up a complex hierarchy. The following rules apply to hierarchical aperture arrangements:

  • The domain of child apertures is completely contained within the domain of their parent aperture.

  • No other overlapping of apertures is allowed.

  • WFE of the exit pupil is the sum of the WFE on all of the apertures at a given point in the exit pupil. One must work with the optical designer to map the WFE to the exit pupil.

  • The exit pupil mask is the union of all of the domains of the leaf nodes of the aperture tree. An aperture tree is shown in Fig. 24.

4.2.

Calculation of the η Matrix

The goal of the η matrix is to replace time-consuming numerical integrations of the product of two functions defined over the domain of the pupil with a quick matrix multiplication.

Eq. (60)

\int_D \phi(\mathbf{r})\,\psi(\mathbf{r})\,dA = \boldsymbol{\phi}^{T}\eta\,\boldsymbol{\psi}.

This matrix multiplication can be thought of as the dot product between the two vectors ϕ and ψ in a nonorthogonal space described by the metric η, where the elements of ϕ and ψ are the expansion coefficients of the functions ϕ(r) and ψ(r). If we first look at a simple single aperture system where the WFE is expanded on the basis functions Bi(r), then Eq. (60) can be rewritten as

Eq. (61)

\int_D \phi(\mathbf{r})\,\psi(\mathbf{r})\,dA = \int_D \left[\sum_{i=1}^{\infty} a_i B_i(\mathbf{r})\right]\left[\sum_{j=1}^{\infty} b_j B_j(\mathbf{r})\right] dA.

It is now possible to switch the order of the integration and summations to arrive at

Eq. (62)

\int_D \phi(\mathbf{r})\,\psi(\mathbf{r})\,dA = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} a_i b_j \int_D B_i(\mathbf{r})\, B_j(\mathbf{r})\,dA.

This seemingly trivial algebraic operation has profound computational effects. The integral in Eq. (62) can be calculated for all combinations of i and j with just the knowledge of the geometry of the aperture, D. This will become the η matrix. The instance of the WFE is contained in the expansion coefficients ai and bj, which become the WFE vectors ϕ and ψ. Basically, we have separated the geometry of the aperture from the instance of the wavefront error.
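To make the separation of geometry from the WFE instance concrete, the following Python sketch (not the authors' code) numerically builds an η matrix for a hypothetical annular aperture with a few simple polynomial basis functions and checks that the precomputed matrix reproduces the direct integral of Eq. (62); the grid size, aperture shape, and coefficient values are illustrative assumptions.

```python
# A minimal numerical sketch of Eq. (62): the eta matrix is precomputed from
# the aperture geometry alone, and any later integral of two expanded
# functions reduces to a matrix product.
import numpy as np

# Sample the pupil plane on a grid; the aperture here is a hypothetical
# annulus (outer radius 1, central obscuration 0.3).
n = 512
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r = np.hypot(x, y)
mask = (r <= 1.0) & (r >= 0.3)
dA = (2.0 / n) ** 2  # approximate area of one grid cell

# A few simple polynomial basis functions (piston, tip, tilt, defocus-like).
basis = [np.ones_like(x), x, y, 2 * r**2 - 1]

# eta_{ij} = integral over the aperture of B_i * B_j.
eta = np.array([[np.sum(Bi * Bj * mask) * dA for Bj in basis] for Bi in basis])

# Two wavefront "instances" are now just coefficient vectors.
a = np.array([0.1, 0.05, -0.02, 0.08])   # coefficients of phi(r)
b = np.array([0.0, 0.01, 0.03, -0.04])   # coefficients of psi(r)

# Direct numerical integration versus the eta-matrix shortcut.
phi = sum(ai * Bi for ai, Bi in zip(a, basis))
psi = sum(bi * Bi for bi, Bi in zip(b, basis))
direct = np.sum(phi * psi * mask) * dA
fast = a @ eta @ b
print(direct, fast)   # the two values agree to grid-sampling accuracy
```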

In the most general case, the η matrix is the overlap matrix between every combination of two basis functions for every combination of two apertures over the domain these two apertures make in the exit pupil. The η matrix is calculated by evaluating the following integral for each matrix element:

Eq. (63)

\eta_{q_a,q_b:i,j} = \int_{D_{\mathrm{children}}} B_{q_a:i}\!\left(\frac{\mathbf{r}}{R_{q_a}}\right) B_{q_b:j}\!\left(\frac{\mathbf{r}}{R_{q_b}}\right) dA,
where the domain of integration is the union of all the leaves of the pupil tree that are common to both qa and qb. The following examples refer to the aperture tree in Fig. 24.

  • For qa=1,0,0 and qb=1,0,0, the domain of integration is all of the gray boxes in Fig. 24.

  • For qa=1,0,0 and qb=1,1,0, the domain of integration is all of the gray boxes with q=1,1,n, where n ∈ (1,2,3), in Fig. 24.

  • For qa=1,1,0 and qb=1,2,0, the domain of integration is the empty set because these two q’s have no common terminating branches.

Bqa:i(r/Rqa) in Eq. (63) is the i’th basis function on the qa’th aperture that is used to describe the WFE of the optical system.

Because there are no restrictions on the basis functions used, and because the same type of basis functions can be used on parent apertures along with child apertures, the η matrix will not have full rank and there will be redundancy in how a given WFE can be represented on the basis functions. However, along the diagonal, there are blocks that have full rank.

4.3.

Derivation of Support Matrices

We need to derive some supporting matrices to assist in the calculation of the system RMS WFE, global pointing plane (GPP), and LOS. To do this, one needs to calculate vector representations for the simple functions 1, x, and y.

As a result of using multiple complete sets of basis functions over the same region of space, there is some freedom in how one can expand any given function on the basis functions. When modeling a system, one has the freedom to pick appropriate basis functions. If the basis functions are polynomial, then the lowest-order basis functions are most likely of the form c0, c1x, and c2y, or some linear combination thereof, where ci are constants. In this situation, the vector expressions for the functions 1, x, and y will evaluate to a finite number of nonzero terms. If the basis functions are not polynomial, then the expansions of the functions 1, x, and y may not terminate. For this reason, it is suggested that at least three basis functions be polynomial and of the form c0, c1x, and c2y. The remainder of the basis functions can be anything else that best matches the needs of modeling the WFE of the system.

There is no loss in generality in requiring three basis functions for each leaf node to be the first three Zernike polynomials. If one chooses to use a different set for the first three basis functions, one must rederive the final expressions shown in this section.

An arbitrary function f(r) can be written in terms of a linear combination of basis functions Bqa:i(r) over the domain of the qa’th aperture as

Eq. (64)

f(\mathbf{r}) = \sum_{i=0}^{\infty} b_{q_a:i}\, B_{q_a:i}\!\left(\frac{\mathbf{r}-\mathbf{x}_{q_a}}{R_{q_a}}\right).

When solving for the expansion coefficients bqa:i by multiplying by the j’th basis function, one can integrate over the full domain, not just the domain as masked by the qa aperture. This, of course, assumes that the function f(r) is known outside of the pupil domain and everywhere within the domain of the basis functions.

If Zernike polynomials are used for the basis functions, the expansion coefficients can be calculated with

Eq. (65)

b_{q_a:i} = \frac{1}{\pi}\int_0^{2\pi}\!\!\int_0^1 f(\mathbf{x}_{q_a} + R_{q_a}\boldsymbol{\rho})\, Z_{q_a:i}(\rho,\theta)\,\rho\,d\rho\,d\theta,
where the vector notation is shorthand showing the needed scaling and transformation. Using this expression, the following vectors can be derived:

Eq. (66)

\mathbf{1}_{q_a} = \left(1, 0, 0, 0, \ldots, 0\right)^{T},

Eq. (67)

X_{q_a} = \left(x_{q_a}, 0, \tfrac{R_{q_a}}{2}, 0, \ldots, 0\right)^{T},

Eq. (68)

Y_{q_a} = \left(y_{q_a}, \tfrac{R_{q_a}}{2}, 0, 0, \ldots, 0\right)^{T}.

The point (xqa,yqa) is the center of the qa’th aperture and Rqa is the radius of the disk that encloses the aperture. Because the η matrix of Eq. (63) contains the information for all the apertures and basis functions, the 1qa, Xqa, and Yqa vectors are combined to form full system vectors, for example,

Eq. (69)

X = \left(X_{q_1}^{T}, X_{q_2}^{T}, \ldots, X_{q_Q}^{T}\right)^{T},
where Q is the total number of apertures. Later, we will be discussing how to combine the WFE and apertures into groups called subspaces. In that section, one of our goals will be to minimize the size of the η matrix, which is done by eliminating any common WFE, aperture combinations that are identical for different subspaces. This often results in a more complex ordering than that shown in Eq. (69).
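As a small illustration of Eqs. (66) through (69), the sketch below assembles the 1, X, and Y support vectors for a hypothetical set of apertures under the assumption stated above that the first three basis functions on each aperture are the first three Zernike polynomials; the aperture centers, radii, and helper names are ours, not the authors'.

```python
# A hedged sketch of Eqs. (66)-(69): assemble the 1, X, and Y support vectors
# for a hypothetical multi-aperture system.
import numpy as np

def support_vectors(xc, yc, R, n_basis):
    """Return (one, X, Y) vectors of length n_basis for an aperture centered
    at (xc, yc) with enclosing-disk radius R, per Eqs. (66)-(68)."""
    one = np.zeros(n_basis); one[0] = 1.0
    X = np.zeros(n_basis);   X[0] = xc;  X[2] = R / 2.0
    Y = np.zeros(n_basis);   Y[0] = yc;  Y[1] = R / 2.0
    return one, X, Y

# Hypothetical three-aperture system with 6 basis functions per aperture.
apertures = [(-0.6, 0.0, 0.4), (0.6, 0.0, 0.4), (0.0, 0.7, 0.4)]
ones, Xs, Ys = zip(*(support_vectors(xc, yc, R, 6) for xc, yc, R in apertures))

# Full-system vectors (Eq. 69): concatenation in the chosen aperture order.
one_full = np.concatenate(ones)
X_full = np.concatenate(Xs)
Y_full = np.concatenate(Ys)
```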

4.4.

Projecting Out the Mean and the GPP

Given an arbitrary WFE function ϕ(r) defined over the union of all of the domains of the leaves of the pupil mask tree, the mean of the WFE can be found by integrating ϕ(r) over the valid domain and then dividing by the area of the domain, giving

Eq. (70)

\langle\phi(\mathbf{r})\rangle = \frac{\int_{\bigcup_{q_a} D_{q_a}} \phi(\mathbf{r})\,dA}{\int_{\bigcup_{q_a} D_{q_a}} dA}.

Our goal is to find a matrix operation that accomplishes this task. By using Eq. (60) and the vector 1, we can write down the integration as a matrix operation.

Eq. (71)

\langle\phi(\mathbf{r})\rangle = \frac{\mathbf{1}^{T}\eta\,\boldsymbol{\phi}}{\mathbf{1}^{T}\eta\,\mathbf{1}} = K_{\mu}\boldsymbol{\phi}.

This leads us to two matrices, one that calculates the mean, Kμ, and one that projects the mean out of the original WFE vector, Pμ.

Eq. (72)

K_{\mu} = \left(\frac{1}{\mathbf{1}^{T}\eta\,\mathbf{1}}\right)\mathbf{1}^{T}\eta,

Eq. (73)

P_{\mu} = I - \mathbf{1}K_{\mu}.
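A minimal sketch of Eqs. (71) through (73), assuming an η matrix and a full-system 1 vector have already been assembled as in the earlier sketches:

```python
# Build the mean operator K_mu and the mean-removal projector P_mu from eta
# and the full-system "one" vector (Eqs. 71-73).
import numpy as np

def mean_operators(eta, one):
    """Return (K_mu, P_mu): K_mu maps a WFE vector to its pupil-averaged mean,
    P_mu projects the mean out of a WFE vector."""
    A = one @ eta @ one                              # pupil area, 1^T eta 1
    K_mu = (one @ eta) / A                           # Eq. (72), a row operator
    P_mu = np.eye(len(one)) - np.outer(one, K_mu)    # Eq. (73)
    return K_mu, P_mu

# Usage: mean_wfe = K_mu @ phi; zero-mean WFE vector = P_mu @ phi
```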

In addition to projecting out the mean, another common operation is calculating and projecting out the best-fit plane to the WFE. This best-fit plane is called the GPP. A normal vector to the GPP that intersects the optical axis points to the location of the maximum of the PSF in the image plane. This location will be called the LOS. GPP is the global piston, tip, tilt (PTT) of a system.

We first start with a least-squares fit cost function for the GPP.

Eq. (74)

S(\alpha,\beta,\gamma) = \sum_{q=1}^{Q}\int_{D_q}\left[\phi_q(x,y) - \alpha x - \beta y - \gamma\right]^{2} dA.

We proceed in the normal fashion by setting the partial derivatives to zero and then solving for α, β, and γ. During the mathematical manipulation, one will come across terms of the following form, which can be rewritten in a matrix form.

Eq. (75)

\sum_{q=1}^{Q}\int_{D_q} x\,\phi_q(x,y)\,dA = \sum_{q=1}^{Q} X_q^{T}\eta_q\boldsymbol{\phi}_q = X^{T}\eta\,\boldsymbol{\phi},

Eq. (76)

\sum_{q=1}^{Q}\int_{D_q} x\,y\,dA = \sum_{q=1}^{Q} X_q^{T}\eta_q Y_q = X^{T}\eta\,Y,
where the vectors without the subscripts are the appropriately ordered vectors for the full system of Q apertures, which allows us to write the following matrix equation:

Eq. (77)

\begin{pmatrix} X^{T}\eta \\ Y^{T}\eta \\ \mathbf{1}^{T}\eta \end{pmatrix}\boldsymbol{\phi} =
\begin{pmatrix} X^{T}\eta X & X^{T}\eta Y & X^{T}\eta\mathbf{1} \\ X^{T}\eta Y & Y^{T}\eta Y & Y^{T}\eta\mathbf{1} \\ X^{T}\eta\mathbf{1} & Y^{T}\eta\mathbf{1} & \mathbf{1}^{T}\eta\mathbf{1} \end{pmatrix}
\begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix}.

It is now possible to write down the matrices for calculating GPP and projecting out the GPP.

Eq. (78)

K_{\Gamma} = M^{-1}\begin{pmatrix} X^{T}\eta \\ Y^{T}\eta \\ \mathbf{1}^{T}\eta \end{pmatrix},
where M is the 3×3 matrix of Eq. (77). The KΓ matrix is a 3×n matrix that will return α, β, and γ of the GPP when it acts on a WFE vector.

Finally, the PΓ matrix, which is used to project out the GPP, can be calculated with

Eq. (79)

P_{\Gamma} = I - L K_{\Gamma} = I - \left(X\;\;Y\;\;\mathbf{1}\right)K_{\Gamma},
where I is the identity matrix and L = (X Y 1) is the matrix composed of the three column vectors placed next to one another to form an n×3 matrix. The dimensions of PΓ are n×n.

The coefficients for the GPP can be calculated by applying the KΓ matrix on the WFE vector, ϕ.

Eq. (80)

\Gamma = \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix} = K_{\Gamma}\boldsymbol{\phi}.

The WFE about the GPP is obtained by subtracting the GPP from the full WFE function; in matrix terms, the GPP is projected out of the WFE vector. Letting ϕΓ be the WFE vector with the GPP projected out,

Eq. (81)

\boldsymbol{\phi}_{\Gamma} = P_{\Gamma}\boldsymbol{\phi}.
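The following sketch collects Eqs. (77) through (81) into a single helper, assuming the η matrix and the full-system X, Y, and 1 vectors from the earlier sketches; the function name and return values are illustrative.

```python
# Build the GPP extraction operator K_Gamma, the GPP-removal projector
# P_Gamma, and the matrix L = (X Y 1) from eta and the support vectors.
import numpy as np

def gpp_operators(eta, X, Y, one):
    """Return (K_Gamma, P_Gamma, L) per Eqs. (77)-(79)."""
    L = np.column_stack([X, Y, one])            # n x 3
    M = L.T @ eta @ L                           # 3 x 3 matrix of Eq. (77)
    K_Gamma = np.linalg.solve(M, L.T @ eta)     # Eq. (78), 3 x n
    P_Gamma = np.eye(len(one)) - L @ K_Gamma    # Eq. (79), n x n
    return K_Gamma, P_Gamma, L

# Usage:
#   Gamma = K_Gamma @ phi        # (alpha, beta, gamma) of the GPP, Eq. (80)
#   phi_Gamma = P_Gamma @ phi    # WFE with the GPP projected out, Eq. (81)
```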

4.5.

Generalized Projection Operators

In general, one can project one subspace into another provided that there is some overlap between the subspaces. An excellent example is the case of adaptive optics, where a set of influence functions is used to back out a measured WFE. Let ϕ^1 of length n be the measured WFE, which is expanded on the basis functions B1(x). This subspace will be called S1. Let ϕ^2 of length m be the induced WFE from the influence functions B2(x) in subspace S2. The hat above the vectors denotes that these vectors do not have the full length corresponding to the dimensions of the η matrix. The influence functions are just another set of basis functions. The full WFE vectors are

Eq. (82)

\boldsymbol{\phi}_1 = \begin{pmatrix} \hat{\boldsymbol{\phi}}_1 \\ \mathbf{0}_2 \end{pmatrix}\quad\text{and}\quad\boldsymbol{\phi}_2 = \begin{pmatrix} \mathbf{0}_1 \\ \hat{\boldsymbol{\phi}}_2 \end{pmatrix},
and the η matrix of dimension (n+m)×(n+m) has the following block form:

Eq. (83)

\eta = \begin{pmatrix} \hat{\eta}_{1,1} & \hat{\eta}_{1,2} \\ \hat{\eta}_{2,1} & \hat{\eta}_{2,2} \end{pmatrix}.

It should be pointed out that in this example, the two subspaces S1 and S2 make up the whole η matrix, but this idea can easily be extended without modifying the mathematics when there is an additional subspace S3, which holds everything else. One just needs to be careful indexing into the η matrix and the WFE vectors.

The derivation of the projection operator that will project ϕ^1 into subspace S2 is easily derivable by looking at the functional form of the WFE. We need to find the best coefficients for ϕ^2 that can be used to describe ϕ^1.

Eq. (84)

\sum_{j=1}^{n}\hat{\phi}_1^{(j)} B_1^{(j)}(\mathbf{x}) \approx \sum_{i=1}^{m}\hat{\phi}_2^{(i)} B_2^{(i)}(\mathbf{x}),

Eq. (85)

\sum_{j=1}^{n}\hat{\phi}_1^{(j)}\int_D B_1^{(j)}(\mathbf{x})\, B_2^{(k)}(\mathbf{x})\,d^{2}x \approx \sum_{i=1}^{m}\hat{\phi}_2^{(i)}\int_D B_2^{(i)}(\mathbf{x})\, B_2^{(k)}(\mathbf{x})\,d^{2}x,

Eq. (86)

\eta_{2,1}\hat{\boldsymbol{\phi}}_1 \approx \eta_{2,2}\hat{\boldsymbol{\phi}}_2,

Eq. (87)

\hat{\boldsymbol{\phi}}_2 = \eta_{2,2}^{-1}\eta_{2,1}\hat{\boldsymbol{\phi}}_1 = \hat{P}_{1\to2}\hat{\boldsymbol{\phi}}_1,
where ϕ^2 are the best possible, in a least-squares sense, coefficients that can describe ϕ^1 and P^12 is the projection operator that will project a vector from S1 into S2. The dimensions of this matrix are m×n, which is not the full size of the η matrix. A full projection matrix can be created by placing the elements of P^12 into the corresponding locations that make up a full-size matrix. In this example, the full projection matrix would have the form

Eq. (88)

P_{1\to2} = \begin{pmatrix} 0 & 0 \\ \hat{P}_{1\to2} & I \end{pmatrix}.

This can be done, in general, where there are many subspaces. One just has to keep track of where the elements of P^12 are mapped into the larger matrix. I is required to preserve the components in S2 and ensure that P12 behaves like a projection operator.
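A hedged sketch of Eqs. (83) through (88) for the two-subspace case, assuming the η matrix is ordered with the n measured-WFE basis functions first and the m influence functions last; the function name is ours.

```python
# Build the subspace projection operator of Eqs. (87)-(88) from a partitioned
# eta matrix: S1 occupies the first n rows/columns, S2 the last m.
import numpy as np

def subspace_projector(eta, n, m):
    """Return (P_hat, P_full): the m x n operator that maps S1 coefficients
    onto S2, and the (n+m) x (n+m) block form of Eq. (88)."""
    eta21 = eta[n:n + m, :n]                 # overlap of S2 with S1
    eta22 = eta[n:n + m, n:n + m]            # overlap of S2 with itself
    P_hat = np.linalg.solve(eta22, eta21)    # eta22^{-1} eta21, Eq. (87)
    P_full = np.zeros((n + m, n + m))
    P_full[n:, :n] = P_hat                   # project S1 into S2
    P_full[n:, n:] = np.eye(m)               # preserve what is already in S2
    return P_hat, P_full

# Adaptive optics use (Eq. 89): actuator commands = -P_hat @ phi1_hat,
# with the minus sign making the influence functions cancel the measured WFE.
```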

Returning to the adaptive optics example, if we can find the best instance of the influence functions that cancels the input measured WFE, we will have optimally decreased the total system WFE. Therefore, using what was just derived above, the optimal setting for the influence functions is

Eq. (89)

\hat{\boldsymbol{\phi}}_2 = -\eta_{2,2}^{-1}\eta_{2,1}\hat{\boldsymbol{\phi}}_1 = -\hat{P}_{1\to2}\hat{\boldsymbol{\phi}}_1.

Notice the minus sign, which is needed to ensure that the influence functions cancel the incoming WFE and do not amplify it. In Sec. 4.7.2, we will look at an example and point out some options available while modeling a system.

4.6.

System Parameters

Once the η matrix and the supporting projection matrices have been calculated, one can start calculating system-level parameters with fast matrix operations.

4.6.1.

RMS WFE

One of the most popular metrics calculated for an optical system is the RMS WFE. For a simple round unobscured optic, where the WFE is represented by Zernike polynomials, the RMS WFE can be calculated by taking the square root of the sum of the squares of the Zernike polynomial coefficients. However, for a more complex system with obscured optics and when other basis functions are used, this calculation involves performing a numerical integration over the WFE function over the domain of the aperture. If one needs to do Monte Carlo simulations over many wavefront configurations, this numerical integration can become a limiting calculation hurdle.

By using the projection matrices derived in Sec. 4.4, one may write down matrix expressions for the RMS WFE relative to the mean wavefront and to the GPP.

  • 1. RMS: This is the normal RMS that is often used for a monolith system. This is the RMS of the WFE after the mean has been removed. In a monolith system, any residual tip-tilt is simply thought of as an LOS offset and is not expected to change with time. The tip-tilt that does change with time is called image motion and is often modeled as smear and/or jitter.

    Eq. (90)

\mathrm{RMS}(\phi) = \sqrt{\left\langle\left[\phi(\mathbf{r}) - \langle\phi(\mathbf{r})\rangle\right]^{2}\right\rangle}

    Eq. (91)

= \sqrt{\langle\phi^{2}(\mathbf{r})\rangle - \langle\phi(\mathbf{r})\rangle^{2}}

    Eq. (92)

= \sqrt{\frac{\boldsymbol{\phi}^{T} P_{\mu}^{T}\eta P_{\mu}\boldsymbol{\phi}}{\mathbf{1}^{T}\eta\mathbf{1}}}.

  • 2. RMS GPP: This is the RMS WFE with the best-fit plane removed. This RMS is rarely used in monolith systems, but becomes very important in multitelescope and segmented systems. In an actively controlled multitelescope or segmented system, the random motion of the segments can cause an effective image motion that is separate from the bus motion.

    Eq. (93)

\mathrm{RMS}_{\mathrm{GPP}}(\phi) = \sqrt{\left\langle\left[\phi(\mathbf{r}) - \alpha x - \beta y - \gamma\right]^{2}\right\rangle}

    Eq. (94)

= \sqrt{\frac{\boldsymbol{\phi}^{T} P_{\Gamma}^{T}\eta P_{\Gamma}\boldsymbol{\phi}}{\mathbf{1}^{T}\eta\mathbf{1}}},
    where the parameters α, β, and γ are chosen to minimize RMSGPP(ϕ), which was done in the derivation of PΓ.

For faster computations, both RMS calculations can be done with fewer operations by noting that A = 1Tη1, where A is the area of the full aperture, and that a transformed matrix, ηΓ = PΓTηPΓ, can be precalculated, resulting in the following expression for the RMS WFE relative to the GPP.

Eq. (95)

\mathrm{RMS}_{\mathrm{GPP}}(\phi) = \sqrt{\frac{\boldsymbol{\phi}^{T}\eta_{\Gamma}\boldsymbol{\phi}}{A}}.
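Both RMS metrics then reduce to a few matrix products, as in the following sketch, which assumes the η, Pμ, and PΓ operators from the earlier sketches:

```python
# RMS WFE about the mean (Eq. 92) and about the GPP (Eqs. 94-95) as fast
# matrix operations; these replace per-instance numerical integration.
import numpy as np

def rms_wfe(phi, eta, P_mu, one):
    """RMS WFE about the mean, Eq. (92)."""
    A = one @ eta @ one
    return np.sqrt(phi @ P_mu.T @ eta @ P_mu @ phi / A)

def rms_wfe_gpp(phi, eta, P_Gamma, one):
    """RMS WFE about the best-fit plane (GPP), Eqs. (94)-(95)."""
    A = one @ eta @ one
    eta_Gamma = P_Gamma.T @ eta @ P_Gamma   # can be precalculated once
    return np.sqrt(phi @ eta_Gamma @ phi / A)
```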

4.6.2.

LOS

The LOS is defined as the location of the maximum of the PSF. This is also the intersection of a normal originating from the GPP at the optical axis with the focal plane. The LOS can be thought of as a 2-D vector.

Eq. (96)

\mathrm{LOS} = \frac{\lambda}{2\pi}\begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \end{pmatrix}\Gamma

Eq. (97)

= \frac{\lambda}{2\pi}\begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \end{pmatrix}K_{\Gamma}\boldsymbol{\phi},
where λ is the wavelength of light and f is the effective focal length of the system. The LOS is not a function of wavelength for a reflective system, but because the WFE vector is measured in waves, one must multiply by the wavelength in which the WFE was measured to recover a physical distance.
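A minimal sketch of Eqs. (96) and (97), assuming the KΓ operator from the earlier sketches; the wavelength and focal length are placeholder arguments supplied by the caller.

```python
# Compute the 2-D LOS location in the focal plane from a WFE vector.
import numpy as np

def line_of_sight(phi, K_Gamma, lam, f):
    """Return the LOS per Eqs. (96)-(97): lam*f/(2*pi) times (alpha, beta)."""
    alpha, beta, gamma = K_Gamma @ phi       # GPP coefficients, Eq. (80)
    scale = lam * f / (2.0 * np.pi)
    return np.array([scale * alpha, scale * beta])
```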

4.6.3.

Correlated WFE vector

There are times when you know the coefficients for GPP, Γ, and would like to know the correlated WFE vector that would produce the given GPP without adding any RMS WFE. This is called correlated because the PTT of all of the leaf node apertures move in unison to create the desired GPP. The correlated WFE vector is

Eq. (98)

\boldsymbol{\phi}_{\mathrm{GPP}} = L\,\Gamma.

The subscript GPP is used to distinguish this WFE vector from the WFE vector ϕΓ, which is the WFE relative to the GPP. The WFE relative to the GPP can be thought of as an uncorrelated WFE.

4.7.

Extreme Example

The example shown in this section is an unrealistic multitelescope system with each subtelescope consisting of three hexagonal segments. This system has been picked such that it is possible to show many of the features of the η matrix methods. This example, shown in Fig. 25(a), contains 13 apertures, 4 parent apertures, which are outlined with dashed circles, and 9 child apertures, which are the 9 hexagonal elements. The dots on the hexagonal elements are the locations of simple Gaussian influence functions.

Fig. 25

Part (a) shows how apertures can be nested to an arbitrary depth. Nesting apertures can represent a modeling method of adding spatially correlated WFE over a set of segments or a combiner of a multitelescope system. The clear area of the pupil only exists over the intersection of all the overlaying apertures. The inset is the view of the pupil mask for this system. Part (b) shows the η matrix for the system shown in (a) using the basis functions described in the text. White cells represent a value greater than zero, the black cells are less than zero, and the gray cells are identically zero.

OE_53_8_083103_f025.png

If the system in Fig. 25(a) is only in the initial concept stage, there will be many unknown system parameters; however, this will not stop the questions of image quality and overall performance. To proceed, one must make reasonable assumptions. This process will inform the community and help with the creation of system requirements. For this investigation, each of the hexagonal segments will have PTT control and random PTT noise. Each group of three hexagonal segments makes up one telescope that is combined with the other two, and each of these subtelescopes can have its own PTT error. The final combiner optic, which is modeled via the largest enclosing parent aperture, will have aberrations and, in this case, also contain the influence functions. The real complexities of this system are quite involved, but from an initial image science point of view, each image point is made from a converging wavefront that can be represented as coming from an imaginary exit pupil located an effective focal length away. Therefore, provided it is possible to make reasonable assumptions on the form and statistical nature of the WFE, it is possible to bound the possible performance and help set requirements for the system concept.

The first thing that needs to be done is to calculate the η matrix, which can be done by evaluating Eq. (63) for all of the basis functions and apertures in the given problem. It is not required, but it is convenient to create the η matrix with the smallest possible dimensions. For example, suppose a specification limits the RMS WFE on the basis functions (f1, f2, …, fn), and a subset of these basis functions, along with other basis functions, is also required to model some external effect, such as atmospheric turbulence. In that case, the η matrix that is created should have only one row-column for each basis function that is common to both subspaces. The process for finding the lowest-dimensional η matrix can be as simple as generating a list of all possible basis function/aperture combinations required, eliminating the duplicates, and keeping track of the index mappings back into all of the subspaces. Another way to think about subspaces is that a subspace is the link between a set of basis functions and a group of apertures, like an n-to-m link table in a database.

In this example, 14 groups of basis functions are used. The basis function groups include sets of Zernike polynomials, such as j ∈ (0,1,2), j ∈ (3,4,5), and j ∈ (6,11,17,24,18,13,9), Gaussian influence functions at the locations shown as dots in Fig. 25, and nine instances of a power spectral density wavefront, one for each hexagonal aperture. When these groups of basis functions are combined with the different combinations of apertures, the result is 17 subspaces that can be used to model the dynamic PTT of the segments of the system, low spatial frequency variations of the apertures, the adaptive optics, and other effects. Any one of these subspaces could have a temporal dependence; for example, the low spatial frequency variation could have a time dependence resulting from temperature fluctuations.

The η matrix for this example is shown in the right part of Fig. 25, where the white cells represent a value greater than zero, the black cells are less than zero, and the gray cells are identically zero.

4.7.1.

Dynamic PTT WFE

If the physical structure of the system results in high-frequency vibrations of the piston, tip, and tilt of the nine hexagonal mirror segments, one must be able to simulate the possible effects. When real PTT test data or simulated data from a structural engineer are provided for the system, one must be able to separate the dynamical parts that drive the WFE from the dynamical parts that drive image motion or jitter. Otherwise, the image scientist must generate reasonable PTT data based on RMS WFE and image motion requirements. Either way, a time-dependent PTT WFE vector is created, which has a form similar to

Eq. (99)

\Phi_{\mathrm{PTT}}(t_i) = \left(0, 0, \ldots, \boldsymbol{\phi}_1^{T}(t_i), \boldsymbol{\phi}_2^{T}(t_i), \ldots, \boldsymbol{\phi}_9^{T}(t_i), \ldots, 0\right)^{T},
where ϕqT(ti) is a three-element time-dependent PTT vector for the q’th mirror segment. These three-element vectors are placed in the correct locations to line up with the ordering that was chosen when creating the η matrix.

The time dependence of ΦPTT(ti) can be broken into two components: the correlated component, ΦC-PTT(ti), and the uncorrelated component, ΦU-PTT(ti). The correlated component represents the motion of the GPP, which is the average direction of the wavefront that determines the location of the PSF in the imaging plane. The uncorrelated component is the deviation from this average wavefront that determines the variation in shape of the PSF that is not driven by the geometry of the aperture. During an integration time with random PTT motion, one can imagine the maximum of the PSF randomly moving around on the image plane. This motion should be considered image motion or jitter. If you were to ride along with the maximum of the PSF, you would see the PSF changing shape as a function of time. This dynamic shape change should be considered to be resulting from WFE.

The two components of ΦPTT(ti) can be calculated with

Eq. (100)

\Phi_{\text{U-PTT}}(t_i) = P_{\Gamma}\,\Phi_{\mathrm{PTT}}(t_i),

Eq. (101)

\Phi_{\text{C-PTT}}(t_i) = L\,K_{\Gamma}\,\Phi_{\mathrm{PTT}}(t_i).

The operation of applying KΓ returns the Γ(ti) vector as shown in Eq. (80), which is three parameters for the GPP as a function of time. The application of L takes the GPP vector and creates a full WFE vector. The correlated component, ΦC-PTT(ti), produces all of the LOS variation and the uncorrelated component produces all of the RMS WFE. A frame out of an animation is shown in Fig. 26, which shows the RMS WFE, the LOS, and the wavefront on the aperture.
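A short sketch of Eqs. (100) and (101), assuming a time series of PTT wavefront vectors stored as rows of an array and the PΓ, KΓ, and L operators from the earlier sketches:

```python
# Split a time series of PTT wavefront vectors into the uncorrelated part
# (drives RMS WFE) and the correlated part (drives LOS / image motion).
import numpy as np

def split_ptt(Phi_ptt_t, P_Gamma, K_Gamma, L):
    """Phi_ptt_t has shape (n_times, n); returns (uncorrelated, correlated)."""
    Phi_U = Phi_ptt_t @ P_Gamma.T            # Eq. (100)
    Phi_C = Phi_ptt_t @ (L @ K_Gamma).T      # Eq. (101)
    return Phi_U, Phi_C
```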

Fig. 26

The columns show two time instances from Video 2 (MOV, 9.5 MB) [URL: http://dx.doi.org/10.1117/1.OE.53.8.083103.2]. The top row is the WFE on the exit pupil. The middle row shows the line-of-sight during the integration time. The bottom row shows the RMS WFE as a function of time. The dots on the line plots correspond to the time instance shown in the top row.

OE_53_8_083103_f026.png

If one is given a specification for RMS WFE, RMSPTT for dynamical PTT, and one for image motion possibly in the form of jitter, σ, then one can generate Monte Carlo data that match these specifications in the following manner:

  • 1. Generate random time series for ΦPTT(ti) and Γ(ti) from whatever method matches your physical situation.

  • 2. Create the uncorrelated vector, ΦU-PTT(ti), and scale to the required RMS WFE.

    Eq. (102)

    \Phi_{\text{U-PTT}}(t_i) = s_U\, P_{\Gamma}\,\Phi_{\mathrm{PTT}}(t_i),
    where sU is the required scaling derived from

    Eq. (103)

    \mathrm{RMS}_{\mathrm{PTT}} = s_U\sqrt{\frac{1}{n}\sum_{i=1}^{n}\frac{\Phi_{\mathrm{PTT}}^{T}(t_i)\, P_{\Gamma}^{T}\eta P_{\Gamma}\,\Phi_{\mathrm{PTT}}(t_i)}{\mathbf{1}^{T}\eta\mathbf{1}}}.

  • 3. Create the LOS profile by scaling the correlated ΦC-PTT(ti) vector.

    Eq. (104)

    \Phi_{\text{C-PTT}}(t_i) = s_C\, L\,\Gamma(t_i),
    where sC is the required scaling derived from

    Eq. (105)

    \sigma = s_C\, J\!\left[\frac{\lambda}{2\pi}\begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \end{pmatrix}\Gamma(t_i)\right],
    where the function J[xi] will map a path into an image motion metric.

The time to do this type of analysis and Monte Carlo simulation is significantly reduced by using the η matrix to perform these operations. Without the η matrix, each term in the RMS time series above would require fitting a plane to the WFE and a numerical integration over the domain of the aperture. With the η matrix, all possible elementary numerical integration combinations have already been done and all one needs to do is fast matrix multiplications.
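As a rough illustration of steps 2 and 3 above, the sketch below computes the scale factors sU and sC from a simulated time series; the jitter metric J is taken here to be the RMS of the LOS path, which is our assumption and should be replaced by whatever image-motion metric the requirement actually specifies.

```python
# Scale the uncorrelated component to a required RMS WFE (Eqs. 102-103) and
# the correlated component to a required jitter (Eqs. 104-105).
import numpy as np

def scale_to_specs(Phi_ptt_t, Gamma_t, rms_req, sigma_req,
                   eta, P_Gamma, L, one, lam, f):
    """Phi_ptt_t: (T, n) PTT WFE vectors; Gamma_t: (T, 3) GPP coefficients."""
    A = one @ eta @ one
    # Current RMS WFE of the uncorrelated part, averaged over time (Eq. 103).
    Phi_U = Phi_ptt_t @ P_Gamma.T
    rms_now = np.sqrt(np.mean(np.einsum('ti,ij,tj->t', Phi_U, eta, Phi_U)) / A)
    s_U = rms_req / rms_now
    # Current jitter of the LOS path (Eq. 105, with J = RMS of the path).
    los_t = (lam * f / (2 * np.pi)) * Gamma_t[:, :2]
    sigma_now = np.sqrt(np.mean(np.sum(los_t**2, axis=1)))
    s_C = sigma_req / sigma_now
    # Scaled uncorrelated and correlated WFE time series (Eqs. 102 and 104).
    return s_U * Phi_U, s_C * (Gamma_t @ L.T)
```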

4.7.2.

Moving wavefront error between subspaces

As was pointed out in Sec. 4.2, the η matrix most likely will not be of full rank. Because this wavefront vector space is overcomplete, it is possible to move the expression of the wavefront from one subspace to another. A common example of this is in adaptive optics, where a measured wavefront is reduced by applying a matching but negative wavefront. We are not going to talk about how one would model a wavefront sensing system or model the error in the application of the adaptive optics basis functions. However, we will show how the η matrix methodology can help in separating ideas and the calculation of macroscopic parameters.

The following example will be for a time instance; one can easily add temporal dynamics by using similar methods as shown in the previous section. The total wavefront of a system is the simple vector sum of all of the wavefront subspaces.

Eq. (106)

\Phi = \boldsymbol{\phi}_1 + \boldsymbol{\phi}_2 + \cdots + \boldsymbol{\phi}_n.

These subspaces could represent any WFE source that is included in your modeling of the system. Let the sum of a subset of these vector spaces be ϕm, which represents the wavefront that is measured using a wavefront sensing method. If the wavefront sensing method is blind to the GPP, then one needs to break ϕm into the correlated and uncorrelated pieces.

Eq. (107)

\boldsymbol{\phi}_m = P_{\Gamma}\boldsymbol{\phi}_m + L\,K_{\Gamma}\boldsymbol{\phi}_m.

The measurement of PΓϕm will have some residual error, ϵ, such that the final measured wavefront is

Eq. (108)

\tilde{\boldsymbol{\phi}}_m = P_{\Gamma}\boldsymbol{\phi}_m - \boldsymbol{\epsilon}.

Using the projection operator derived in Sec. 4.5, it is possible to map ϕ˜m into another vector space that contains the basis functions for the adaptive optics influence functions.

Eq. (109)

\boldsymbol{\phi}_{ao} = -P_{m\to ao}\,\tilde{\boldsymbol{\phi}}_m.

If one wants to assume that the application of the influence functions does not affect the GPP, then one could project out the GPP by applying the PΓ projection operator to ϕao. One can also add the noise associated with the application of the influence functions by creating an error vector ϵao. The final wavefront vector for the system is as follows:

Eq. (110)

\Phi_{ao} = \boldsymbol{\phi}_1 + \boldsymbol{\phi}_2 + \cdots + \boldsymbol{\phi}_n + \boldsymbol{\phi}_{ao} + \boldsymbol{\epsilon}_{ao},
which hopefully produces less RMS WFE than was present in the original wavefront, Φ.
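The whole correction loop of Eqs. (106) through (110) can be written compactly, as in the following sketch; the projection operator P_m_to_ao is assumed to have been built as in Sec. 4.5, and the measurement and actuation error vectors are placeholders for whatever noise model is appropriate.

```python
# Apply an adaptive-optics style correction to a sum of WFE subspaces.
import numpy as np

def apply_adaptive_optics(phi_subspaces, measured_idx, P_Gamma, P_m_to_ao,
                          eps_m, eps_ao):
    """phi_subspaces: list of full-length WFE vectors; measured_idx: indices of
    the subspaces seen by the wavefront sensor."""
    Phi = sum(phi_subspaces)                          # Eq. (106)
    phi_m = sum(phi_subspaces[i] for i in measured_idx)
    phi_m_tilde = P_Gamma @ phi_m - eps_m             # Eqs. (107)-(108)
    phi_ao = -(P_m_to_ao @ phi_m_tilde)               # Eq. (109), note the sign
    return Phi + phi_ao + eps_ao                      # Eq. (110)
```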

As a final visual example, the top four subspaces shown in Fig. 27 have been applied to the system and assumed to be measured with no error. This measured WFE is then mapped into the influence function basis space using the methods previously discussed. The influence functions are then used to cancel the measured WFE. The system with the original WFE and the reduced WFE are shown in Fig. 28 along with cross-section plots of the MTF and density plots of the PTF. Because of the sparse nature of the aperture geometry, the MTF drops much faster than that of the filled aperture shown in the top right of Fig. 9. The application of the influence functions improves the MTF but has a considerable effect on the PTF shown in the last row of Fig. 28. The PSF of the system before and after the application of the influence functions is shown in Fig. 29 along with a simple image simulation.

Fig. 27

Each image shows a random instance of the WFE on the aperture for the different subspaces with the exception of the bottom right plot of the influence functions. All of the influence functions are shown with a unitary coefficient to show the location and width of the functions. The online figure uses a rainbow of colors with blue being positive and red being negative.

OE_53_8_083103_f027.png

Fig. 28

The top left plot shows the sum of WFE from the top four subspaces shown in Fig. 27. For this plot, it was assumed that this WFE could be measured in full and the calculated influence functions were applied with no error. The online color version shows positive WFE as blue and negative WFE as red. The next row shows the resulting MTF. The dashed line is the MTF in the x-direction and the solid line is the MTF in the y-direction. The bottom row shows the PTF.

OE_53_8_083103_f028.png

Fig. 29

The top row shows the PSF for the system in Fig. 28 before and after the application of the influence functions. The bottom row shows a simple image simulation that only includes the effects of the aperture geometry and WFE.

OE_53_8_083103_f029.png

5.

Summary

Modeling the imaging chain has proven to be an invaluable tool for optical designers to assess design trades and their impact on the final image quality. A tutorial for modeling the imaging chain of a digital camera system was provided to give an overview of the concept and the mathematics associated with a simple model for digital cameras. Accurately modeling the OTF in the imaging chain is critical for understanding the relationship between the WFE and the final image quality. Unfortunately, modeling the pupil function for the OTF calculation can prove to be the most challenging step in the imaging chain model for dynamically changing complex optical systems, such as segmented or sparse aperture systems. Modeling the OTF over the range of dynamic wavefront instances to ascertain RMS WFE, LOS, and GPP can prove challenging and laborious. We introduced a novel approach that simplifies modeling the pupil function WFE in the imaging chain model. With the calculation of an η matrix, the resulting WFE, LOS, and GPP calculations are greatly simplified with fast matrix calculations. It is no longer necessary to perform costly numerical integrations every time the instance of WFE changes. We demonstrated the application of the η matrix to provide the dynamically changing WFE for a segmented sparse aperture.

Acknowledgments

We would like to acknowledge our great appreciation to our colleagues in the Imaging Science group at Exelis for the many useful and engaging conversations while developing the image chain modeling concepts.


Biography

Robert D. Fiete is the director of research and development at Exelis Geospatial Systems. He received his BS in physics and math from Iowa State University and his MS and PhD in optical sciences from the University of Arizona. Over the past 30 years, he has developed imaging chain models to assess the image quality of imaging system designs. He is a senior member of OSA and SPIE as well as a fellow of SPIE.

Bradley D. Paul is a senior image scientist at Exelis. He did undergraduate work at Miami University in Oxford, Ohio. He received his PhD in physics from JILA at the University of Colorado in Boulder, Colorado. After biking around the United States and Canada, he worked for a start-up company assisting in the construction of a holographic printer. He joined Eastman Kodak in 2004, where he has been developing new methods of analyzing electro-optical imaging systems.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Robert D. Fiete and Bradley D. Paul "Modeling the optical transfer function in the imaging chain," Optical Engineering 53(8), 083103 (18 August 2014). https://doi.org/10.1117/1.OE.53.8.083103
Published: 18 August 2014