Most photographs captured today are acquired with cameras that produce digital images, i.e., images composed of an array of numbers that represent the brightness of each picture element, called a pixel (Fig. 1). But how are these numbers created and what influenced their values? What drives the pixel values away from the ideal values that would give us the best image quality? Camera designers struggle with these questions every day as they balance the desired image quality with design constraints, most notably size, weight, power, and cost.
The relationship between the final image quality and the camera design factors, such as detector size and optical quality, needs to be well characterized and understood in order to relate system requirements to the image quality required by the image user. This relationship needs to be characterized early in the design phase to prevent costly redesigns later in the development phase, especially when new technologies are being considered. Ideally, we would like to study the design options by conducting image evaluations using images captured from camera prototypes that cover every design option, but this is typically made impractical by the time and cost of building all the necessary hardware, so we need other design tools at our disposal.
A historical approach to creating simulated image examples was to create an optical lens that closely matched the predicted system-level image quality effects of the proposed camera. Images were captured using this lens and then used in image quality studies to predict the proposed camera's performance. This approach greatly restricted the ability to perform design trades of the optical design because a representative lens would need to be created for each modification in the optical design trade. The other difficulty was determining how closely the lens performance needed to match the predicted camera imaging performance for the evaluation results to be valid, i.e., how good is good enough? For camera systems that will inform important decisions, it is critical that the image simulations do not differ in quality from the operational images on which those decisions will be based.
With the advent of digital imaging and faster computers, processing techniques advanced enough to allow accurate digital image simulations using mathematical models that encompass every step of the imaging process, i.e., the imaging chain.1,2 Each element of the imaging chain could be mathematically modeled and linked together to capture the interactions between the links. Ultimately, very accurate image simulations could be produced using the imaging chain models that were indistinguishable from the operational images captured when the camera was built from the same design. Imaging chain models have become invaluable for revealing image quality differences between camera designs that have the same top-level design parameters, e.g., the same camera optics size and sensor size, but subtle differences in the component-level designs, e.g., optical quality and detector sensitivity, that cannot be easily translated into predicted image quality differences. Figure 2 shows image simulations of two design options for a panchromatic remote sensing camera that have the same optics aperture size, focal length, and detector size but different optical aberrations and detector sensitivities. Modeling the imaging chain has become a critical design tool to assess camera designs before hardware is built and prevent unwanted surprises after the camera is built and the first images are acquired.
Simple Imaging Chain Model
The fundamental links of the imaging chain for a camera system are radiometry, image formation, image sensing, processing, display, and interpretation, as shown in Fig. 3. At first glance, the imaging chain seems fairly simple, but as we dig deeper to develop mathematical models for each of these links, we can quickly get overwhelmed with the subtle complexities involved. It is very important to understand not only the role that each link plays in producing the final image quality, but also how the interaction between the links affects the image quality as well. Let us first look at the most basic imaging chain models for a simple digital camera system before focusing on a more detailed method for modeling the optical wavefronts in the imaging chain.
The imaging chain starts with modeling the radiometry, i.e., the electromagnetic energy that is the essence of creating the image. Imaging is fundamentally the act of detecting and measuring electromagnetic radiation in an attempt to understand a remote object or collection of objects. The electromagnetic spectrum ranges from gamma rays with wavelengths in the picometers to radio waves with wavelengths of hundreds of kilometers. The shorter wavelengths of the electromagnetic spectrum are detected via photonic interactions with matter and are often visualized as packets of energy called photons, whereas the longer wavelengths are detected with the aid of antennas and are often visualized as waves. In this paper, we are concerned with the range of the electromagnetic spectrum that includes the visible spectrum (roughly 400 to 800 nm) that can be redirected with refractive or reflective optics and detected with bandgap semiconductors. The visible spectrum is a subset of the optical spectrum, which extends from the ultraviolet up to the long-wave infrared, with wavelengths up to about 1 mm.
We are most interested in modeling the spectral radiance from the object of interest, but this is only one of the contributions to the spectral radiance of the entire scene that enters the camera aperture.3 Modeling the spectral radiance for the scene is very challenging given the complex variations and sources of the light energy that will arrive at the camera (Fig. 4). For example, the spectrum of the light changes dramatically in the imaging chain if the camera moves from sunlight illumination to indoor incandescent light. To further complicate the model for outdoor lighting, the radiometry is heavily dependent on the atmospheric conditions that the light travels through, which are constantly changing. The radiometry from indoor light is also complicated by the variety of light sources and the illumination condition from the light reflecting off walls and other objects. Probably the best example of varying radiometric conditions for indoor lighting occurs when an image is captured with or without a camera flash (Fig. 5).
Various software programs have been created to assist in modeling the complex scene spectral radiance. Atmospheric models, such as moderate resolution atmospheric transmission (MODTRAN), produced by Spectral Sciences Inc., Burlington, Massachusetts and the U.S. Air Force, are usually employed to model the atmospheric propagation of the electromagnetic radiation. The scene can be modeled using a physics-based radiometric scene generation tool that builds up an object from smaller facets, with each facet carrying its own material properties. An example of a radiometric scene generating tool is the digital imaging and remote sensing image generation (DIRSIG) model developed at the Rochester Institute of Technology.
The image formation element of the imaging chain starts with the spectral radiance from the scene entering the aperture of the camera. As the light waves propagate through the imaging elements to form the image on the sensor, we must consider the modifications to the spatial distribution of the light energy before we can calculate the resulting photons that strike each detector. If we assume that the optical system behaves as a linear shift-invariant system, then we can model the spatial modifications by understanding the effects on a single point of light, resulting in a point spread function (PSF).4–6 The resulting spectral radiance in the image plane, L_image(x, y, λ), can then be modeled by simply convolving the PSF for the system, PSF_sys(x, y), with the scene spectral radiance, i.e.,

  L_image(x, y, λ) = PSF_sys(x, y) ** L_scene(x, y, λ),

where ** denotes two-dimensional convolution.
It is also very helpful to understand how the imaging system is modifying the contrast of the spatial frequencies in the scene by calculating the Fourier transform of the system PSF to give us the system transfer function. In Fourier space, the convolution operation becomes a multiplication operation, so the transfer function for the entire camera system, H_sys(ξ, η), is obtained by simply multiplying together the individual transfer functions from each element in the imaging chain that contributes to the system transfer function, i.e.,

  H_sys(ξ, η) = Π_i H_i(ξ, η).

The transfer function is, in general, complex valued and can be separated into a modulus, the modulation transfer function (MTF), and a phase, the phase transfer function (PTF).
The MTF shows us how the transfer function modifies the contrast of each spatial frequency, while the PTF shows us how it modifies the phase of each spatial frequency.
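Because the element transfer functions multiply in Fourier space, a system MTF budget can be sketched numerically. The two element MTFs below (a Gaussian optics term and a sinc detector term) are illustrative placeholders, not values from the text:

```python
import numpy as np

xi = np.linspace(0.0, 50.0, 101)            # spatial frequency axis (cycles/mm)
mtf_optics = np.exp(-(xi / 40.0) ** 2)      # placeholder optics MTF
mtf_detector = np.abs(np.sinc(0.010 * xi))  # 10-um detector footprint; np.sinc(x) = sin(pi x)/(pi x)
mtf_system = mtf_optics * mtf_detector      # transfer functions multiply in Fourier space
```

The product is unity at zero frequency and can never exceed any single contributing MTF, which is why every added element can only reduce contrast.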
A broader PSF translates into a transfer function that drops off more rapidly in the higher spatial frequencies, resulting in a blurrier image (Fig. 6). Thus, the PSF and its corresponding transfer function have a direct impact on the image quality, so it is very important to understand the elements in the imaging chain that contribute to them. For most camera systems, the biggest contributor to the system PSF is the optical PSF and its corresponding optical transfer function (OTF).
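The diffraction-limited MTF of a clear circular aperture has a well-known closed form given by the normalized autocorrelation of two circles; the sketch below evaluates it with frequency normalized to the cutoff (the function name is our own):

```python
import numpy as np

def diffraction_mtf(rho, rho_c):
    # Diffraction-limited MTF of a clear circular aperture:
    # (2/pi) * [arccos(x) - x*sqrt(1 - x^2)], with x = rho/rho_c clipped to [0, 1]
    x = np.clip(np.asarray(rho, dtype=float) / rho_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))
```

The curve is unity at zero frequency, falls monotonically, and reaches zero at the cutoff frequency, beyond which no contrast is transferred.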
Optical PSF and coherence
To visualize the image formation process of the optical PSF in the imaging chain, let us start with a broadband point source in object space. This point source could be radiating on its own or scattering light, but either way the point source is radiating spherical waves. A small solid angle patch of these spherical waves enters the entrance aperture of our imaging system. At this moment, if we were to stop time and investigate the field that exists at the entrance aperture from the point source, we would notice that the field is spatially correlated across the entrance aperture. This spatial correlation is required for the system to make a diffraction-limited PSF. If there is a loss of spatial correlation across the entrance aperture, then the PSF becomes wider, resulting in a blurry image. Propagation through the atmosphere or aberrations in the optics could cause this loss of spatial correlation. If we now look along the axis of propagation, we would notice a very short correlation length; this direction is often thought of as the temporal correlation length. This lack of temporal correlation is a direct result of the broadband nature of the point source. However, one can imagine that each wavelength creates a colocated PSF in the image plane whose width is determined by the wavelength. These colocated PSFs then add as intensities, because of the temporal averaging of the detector, to form one broadband PSF.
To understand the difference between coherent and incoherent imaging, we must bring in another point source. If both point sources are broadband, then they both have short correlation times. The measured intensity is the time average of the modulus squared of the field and all cross-terms from the two point sources average to zero because there is no correlation between the two point sources, resulting in the image of the two point sources adding as intensities. This type of imaging is, therefore, linear in intensity, i.e., the modulus squared of the field. If both of the point sources are monochromatic with the same wavelength, then the two point sources differ by no more than a relative phase. This time the interference pattern set up by the two fields is stationary, resulting in a distribution of bright and dark regions. In other words, the cross-terms do not average to zero. This type of imaging is, therefore, linear in field. In both cases, coherent and incoherent, it is possible to add the fields, perform a modulus square, and then do a time average to get the measured intensity. It just so happens that in the incoherent case, all the cross-terms average to zero, allowing one to just add the intensities.
A full image can be thought of as a collection of closely spaced point sources. If these point sources are uncorrelated, which is the case with solar illumination of an outdoor scene, then we will have incoherent imaging. If, however, there is a phase relationship between the point sources, which is the case when a scene is illuminated with a laser, then we have coherent imaging. It is important to remember that in both cases, the field from one point source is spatially coherent across the entrance aperture. The previous discussion of coherent and incoherent imaging has only scratched the surface of this rich and complex field; the interested reader is directed to Refs. 5, 7, and 8.
Optical diffraction PSF and OTF
Let us go back to the entrance aperture and follow the spatially coherent section of a spherical field to the focal plane. Depending on the distance to the point source and, to a lesser extent, the size of the entrance aperture, the field at the entrance aperture may appear to be a plane wave, a parabolic wave, or a spherical wave. The optical system is designed to take the diverging wavefront at the entrance aperture and turn it into a converging spherical wave at the exit pupil. This converging spherical wave is called the Gaussian reference sphere. The center of the Gaussian reference sphere is on the focal plane and is called the Gaussian reference point. If the point sources in the object space are mapped to their corresponding Gaussian reference points, then we have what is called the Gaussian reference image. The Gaussian reference image is an imaginary perfect unaberrated image that does not have the blurring effect of the PSF and is the starting point for modeling the optical wavefronts in the imaging chain. Each point in the focal plane can be thought of as the center of a set of converging waves on a Gaussian reference sphere from an exit pupil, but each point may be the result of different views of the exit pupil. This effect can manifest itself in two ways, first as a warping of the Gaussian reference image and second as a spatially varying PSF. If the spatially varying PSF is small enough to be ignored, then the system is said to be a linear shift-invariant system. Another term that is often used is isoplanatic patch, which refers to the region of the image plane for which the spatial variability of the PSF is small enough to be considered constant.
If we calculate the Fresnel number to determine what fidelity of the diffraction theory is required to propagate the field at the exit pupil to the image plane, we would find that the Fresnel number is not much less than one, as would be required to apply the Fraunhofer diffraction theory directly. The diffraction calculation for a camera requires Fresnel diffraction, but because the optical system has created a wavefront that is converging to a point in the image plane, the parabolic phase term in Fresnel diffraction is canceled by the parabolic phase of the converging wavefront. This, fortunately, leaves us with the mathematics of the Fraunhofer diffraction theory. It should be pointed out that our visualization is for converging spherical wavefronts, but our mathematics is for converging parabolic waves, which is a result of the paraxial approximation, i.e., considering the rays that are near the optical axis.
The converging field at the exit pupil has an amplitude that depends on the intensity of the point source in object space and on the complexities of the propagation from the source to the imaging system. Normally, the concept of radiometry is separated from the magnitude of the PSF. The field at the exit pupil is taken to be unity so that the PSFs from all point sources have the same magnitude. The radiometry of the scene is then applied by scaling the values of the Gaussian reference image. The converging field at the exit pupil also has phase aberrations relative to the perfect Gaussian reference sphere. These phase aberrations are called wavefront error (WFE).
As an example of the Fraunhofer diffraction PSF, let us look at a clear circular aperture of diameter D with no WFE. The Fraunhofer diffraction pattern for light at wavelength λ imaged with a lens of focal length f is the Airy pattern

  PSF(r) ∝ [2 J₁(π D r/(λ f)) / (π D r/(λ f))]²,

where J₁ is the first-order Bessel function of the first kind (Fig. 7). The first ring of zeros occurs at r = 1.22 λ f/D.
The Fourier transform of the diffraction PSF gives the same result as autocorrelating the aperture function, giving us the diffraction OTF. The diffraction OTF for the clear circular aperture (Fig. 8) has a cutoff frequency

  ρ_c = D/(λ f),

above which the contrast of the spatial frequencies is zero; thus, the diffraction from the aperture imposes a limit on the resolution of the optical system that improves with increasing aperture size. The optical cutoff frequency, therefore, imposes a fundamental resolution limit on the entire camera system because the system transfer function will be zero for all spatial frequencies higher than ρ_c, even when all of the individual transfer functions are multiplied together.
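The 1.22 factor for the first Airy ring can be checked numerically. The sketch below evaluates J₁ from its integral representation and bisects for the first zero of the Airy pattern; the implementation details (midpoint quadrature, bracketing interval) are our own choices:

```python
import numpy as np

def bessel_j1(x, n=20000):
    # J1 via its integral representation: J1(x) = (1/pi) * integral_0^pi cos(theta - x sin(theta)) dtheta,
    # evaluated with a midpoint rule
    theta = (np.arange(n) + 0.5) * (np.pi / n)
    return np.mean(np.cos(theta - x * np.sin(theta)))

# The Airy pattern is [2 J1(v)/v]^2 with v = pi*D*r/(lambda*f);
# bisect for the first zero of J1 between v = 3 and v = 4.
lo, hi = 3.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if bessel_j1(lo) * bessel_j1(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
v_zero = 0.5 * (lo + hi)        # ~3.8317
airy_factor = v_zero / np.pi    # ~1.22, so the first zero sits at r = 1.22*lambda*f/D
```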
WFE and OTF
Although many simple imaging chain models only consider the aperture diffraction effects on the OTF, there are many other factors that will alter the functional form of the OTF. As discussed earlier, aberrations in the imaging elements will cause departures from the ideal spherical wavefronts in the optical system. Accurately modeling the WFE caused by these aberrations can be very complicated, but is essential in order to properly model the image quality of the camera. We will return to this point as the focus of Sec. 3.
The addition of the aberrations will result in an MTF that is equal to or lower than the diffraction MTF, i.e.,

  MTF_aberrated(ξ, η) ≤ MTF_diffraction(ξ, η).
The WFE caused by aberrations will, therefore, reduce the OTF and degrade the image quality with additional blurring (Fig. 9). It is common to characterize the WFE of an optical system by the root-mean-square (RMS) WFE, σ_rms, i.e., the square root of the variance of the optical path difference over the entire wavefront. When the aberrations are not known, such as early in the design phase of an optical system, a random phase error can be used to model a nominal WFE produced by the aberrations and the optics manufacturing process. The optics degradation is modeled as an optical quality factor (OQF) transfer function that is multiplied with the diffraction transfer function to get the OTF. Hufnagel modeled the OQF transfer function for aberrations and high-frequency surface roughness as

  OQF(ρ) = exp{−(2π σ_rms)² [1 − C(ρ)]},

where σ_rms is expressed in waves,9,10 and the model was developed further by Barakat.11 One key idea that is often overlooked is that the term C(ρ) represents a correlation function that may not be a Gaussian.
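A minimal sketch of this OQF form, assuming for illustration a Gaussian correlation function C(ρ) with a correlation length ℓ (the text notes that C need not be Gaussian, and the numeric values below are arbitrary):

```python
import numpy as np

def oqf(rho, sigma_rms, corr_len):
    # OQF(rho) = exp{-(2*pi*sigma_rms)^2 * [1 - C(rho)]}, sigma_rms in waves
    C = np.exp(-(rho / corr_len) ** 2)   # assumed Gaussian correlation for illustration
    return np.exp(-(2.0 * np.pi * sigma_rms) ** 2 * (1.0 - C))
```

At zero frequency the OQF is unity, and at frequencies far beyond the correlation length it plateaus at exp[−(2π σ_rms)²], the classic WFE contrast penalty.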
Although there is a correlation between the RMS WFE and the reduction of the OTF, different aberrations at the same RMS WFE can have different effects on the resulting image quality, so it is very important to model the WFE from the actual aberrations if they are known. As an example, let us look at aberrations that cause a low spatial frequency WFE compared to aberrations that cause a high spatial frequency WFE. Figure 10 shows the distinct differences between the low-frequency and high-frequency aberration optics PTF and PSF even though both have the same RMS WFE. It is also important to note that different weightings between the types of optical aberrations, e.g., spherical aberration and astigmatism, can occur that result in the same RMS WFE but different optics MTFs and PSFs. This variability is smaller, and the Hufnagel approximation more accurate, for a high spatial frequency WFE than for a low spatial frequency WFE, as illustrated in Fig. 11. Figure 12 shows the different effects that the aberrations have on image quality even though they have the same RMS WFE. It should be noted that these image examples did not have noise added or any image restoration processes applied to them.
The next consideration in the imaging chain is the relative motion between the camera and the scene that will cause additional blurring to the image. General motion transfer functions can be derived from the time average, over the integration time t_int, of the phase shifts produced by the relative motion path [x(t), y(t)], i.e.,

  H_motion(ξ, η) = (1/t_int) ∫₀^t_int exp{−i2π [ξ x(t) + η y(t)]} dt,  (13)

as illustrated in Fig. 13.
Smear is directional blurring caused by a constant linear motion, such as snapping a long exposure image from the window of a fast-moving car. If we let x(t) = vt, where v is the relative velocity, and evaluate Eq. (13), then we find the transfer function to be

  H_smear(ξ) = sinc(d ξ) exp(−iπ d ξ),

where d = v t_int is the smear distance accumulated over the integration time and sinc(x) = sin(πx)/(πx).
The phase term shifts the centroid of the image by half the smear distance. This phase term is normally ignored because it only represents a shift in the image pixels and does not affect image quality; hence, usually only the smear MTF as a sinc function is used in the modeling. If the smear direction is not along an axis, then a more general smear transfer function can be created by replacing ξ with ξ cos θ + η sin θ, where θ is the direction of the smear. The smear PSF along the x-direction is modeled as

  PSF_smear(x) = (1/d) rect(x/d).
Note that convolving the smear PSF with the scene performs a uniform integration of the scene radiance over the local distance d.
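The smear MTF as a sinc of the smear distance d can be sketched directly; note that numpy's sinc already uses the sin(πx)/(πx) convention matching the expression here:

```python
import numpy as np

def smear_mtf(xi, d):
    # Modulus of the smear transfer function: |sinc(d*xi)| for smear distance d
    return np.abs(np.sinc(d * xi))
```

The MTF is unity at zero frequency and its first null falls at ξ = 1/d, so longer smears push the null toward lower frequencies and blur more.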
Jitter is the result of very high-frequency vibrations of the camera system. Jitter can be derived by visualizing the integration time being broken up into many very short subintegration times such that the camera motion during each subintegration time can be assumed stationary. These individual points and subintegration times can be placed into Eq. (13), which will return the transfer function that is the average of all of the spatial shifts. However, if the probability distribution is known or assumed for the spatial location of the points, then one can find the expected value of the average of the transfer function. Because of the form of Eq. (13), the expected value of the average of the transfer function is the Fourier transform of the probability distribution. This allows us to visualize the probability distribution as the PSF for jitter. Often jitter is modeled as a Gaussian probability distribution given by

  PSF_jitter(x, y) = [1/(2π σ²)] exp[−(x² + y²)/(2σ²)],

where σ is the RMS jitter amplitude; its Fourier transform gives the jitter transfer function H_jitter(ξ, η) = exp[−2π² σ² (ξ² + η²)].
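Since the jitter transfer function is the Fourier transform of the Gaussian displacement distribution, it can be sketched as a one-line Gaussian in radial frequency (the numeric values in the checks are arbitrary):

```python
import numpy as np

def jitter_mtf(rho, sigma):
    # Fourier transform of a zero-mean Gaussian displacement PDF with std-dev sigma:
    # exp(-2*pi^2*sigma^2*rho^2)
    return np.exp(-2.0 * (np.pi * sigma * rho) ** 2)
```

Unlike the smear sinc, the Gaussian jitter MTF has no nulls; it simply rolls off faster as the RMS jitter grows.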
Modeling the sensor requires us to understand the impact that the sensor has on the spatial properties of the image as well as the electromagnetic energy reaching the detector. Let us first consider how a digital sensor alters the spatial properties of the image. The predominant effect in most sensors is the integration of the incident light across the aperture shape of each detector. For a sensor with rectangular detectors of size w by h, this integration will cause a blurring modeled as

  PSF_det(x, y) = rect(x/w) rect(y/h),

with the corresponding transfer function H_det(ξ, η) = sinc(w ξ) sinc(h η).
Other blurring effects need to be incorporated into the sensor model, such as carrier diffusion, but these will depend on the specific sensor design and architecture and will not be considered here for our simple imaging chain model.
The other predominant effect that needs to be modeled for a digital sensor is the sampling. The simplest sensor model can be given by the blurring of the optical image by the detector PSF, PSF_det(x, y), followed by a sampling at each detector location (Fig. 14). For an M × N detector array with each detector a distance p_x and p_y from the next, we can model the detector blur and sampling on the incident radiance as

  image_sampled(x, y) = [PSF_det(x, y) ** radiance(x, y)] Σ_m Σ_n δ(x − m p_x) δ(y − n p_y).
Mathematically, image_sampled(x, y) is a distribution function because of the Dirac delta distributions in its definition. The operation of sampling is where our visualization transitions from continuous to discrete. An alternative representation can be purely discrete, i.e., the array of values obtained by evaluating the blurred radiance at the detector centers,

  image(m, n) = [PSF_det(x, y) ** radiance(x, y)] evaluated at (x, y) = (m p_x, n p_y).
The distances p_x and p_y are called the sampling pitch of the detector and can have a profound effect on the image quality. All spatial frequencies higher than the Nyquist frequency, defined by ξ_Nyquist = 1/(2 p_x) and η_Nyquist = 1/(2 p_y), will alias to lower spatial frequencies in the sampled image (Fig. 15). The sampling pitch of the detector, therefore, imposes a fundamental limitation on the image resolution by limiting the spacing between objects that can be resolved.
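The aliasing imposed by the sampling pitch can be demonstrated directly: a sinusoid at 0.8 of the sampling frequency (above Nyquist) produces exactly the same samples as one at 0.2 of the sampling frequency (the pitch and frequencies below are arbitrary illustrative values):

```python
import numpy as np

pitch = 1.0              # sampling pitch (arbitrary units)
f_s = 1.0 / pitch        # sampling frequency
f_nyq = 0.5 * f_s        # Nyquist frequency
f_in = 0.8 * f_s         # input frequency above Nyquist
n = np.arange(32)
samples = np.cos(2.0 * np.pi * f_in * n * pitch)            # what the sensor records
aliased = np.cos(2.0 * np.pi * (f_s - f_in) * n * pitch)    # the lower frequency it masquerades as
```

Once sampled, the two frequencies are indistinguishable, which is why detail above Nyquist cannot be recovered by any later processing.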
Let us now consider the light energy that reaches each detector. For a detector with a spectral bandpass of λ₁ to λ₂, the radiant flux (watts) on the detector of a distant scene element is given by

  Φ_det = [π A_det/(4 f#²)] ∫_{λ₁}^{λ₂} τ_optics(λ) L(λ) dλ,

where A_det is the detector area, τ_optics(λ) is the transmission of the optics, L(λ) is the scene spectral radiance, and f# = f/D is the focal ratio.
The digital imaging sensor model converts the spatially distributed light energy of the image into the digital counts that record the image. The most common digital sensors are composed of individual detectors that generate electrons from the incident light through the photoelectric effect; thus, the average number of photoelectrons generated at the detector during the exposure time is given by

  s_e = [t_int/(h c)] ∫_{λ₁}^{λ₂} λ Φ_det(λ) η(λ) dλ,

where Φ_det(λ) is the spectral radiant flux on the detector, η(λ) is the quantum efficiency, t_int is the integration time, h is Planck's constant, and c is the speed of light (each photon carries energy hc/λ).
Unfortunately, the digital count value of the image does not have a one-to-one relationship with the scene radiance due to the blurring caused by the PSF, the sampling, and the noise that is introduced. The noise introduces an undesired randomness to the brightness of the digital count value for each pixel, with the primary noise contributors in most digital cameras being photon noise, dark noise, and quantization noise. The photons arriving at the sensor do not arrive in a steady stream but in random fluctuations that follow a Poisson distribution, giving rise to the photon noise that increases with the square root of the signal intensity. The dark noise appears as fluctuations in the digital counts even when no signal is present. There are many causes for the dark noise, such as error in the analog-to-digital converter that creates the digital count value, so this value in the imaging chain model is captured from measured data for the sensor being modeled. Finally, we have quantization noise that arises from quantizing the signal electrons into integer digital count values, creating a randomness that follows a uniform distribution. Putting these noise sources together gives us the root-sum-square noise for the sensor electrons as

  σ_e = sqrt(σ_photon² + σ_dark² + σ_quantization²).
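A minimal sketch of this root-sum-square noise model, assuming photon noise equal to the square root of the signal electrons, a measured dark-noise value, and quantization noise of g/√12 electrons for a sensor gain of g electrons per count (the specific numbers in the check are illustrative):

```python
import numpy as np

def total_noise_electrons(signal_e, dark_noise_e, gain_e_per_count):
    photon = np.sqrt(signal_e)                 # Poisson photon noise (std-dev in electrons)
    quant = gain_e_per_count / np.sqrt(12.0)   # uniform quantization noise (std-dev in electrons)
    return np.sqrt(photon ** 2 + dark_noise_e ** 2 + quant ** 2)
```

Because the terms add in quadrature, the largest contributor dominates: at high signal levels photon noise usually swamps the dark and quantization terms.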
An interesting detail regarding the photon noise is that the photons arrive following a Poisson distribution but then drive a binomial probability distribution that has a probability of success given by the quantum efficiency. It just so happens that when a Poisson process drives a binomial process, the result is also a Poisson process, so it is, therefore, still valid to use a Poisson distribution for the photoelectrons.
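This Poisson-driving-binomial fact is easy to check by simulation: thinning Poisson photon counts with a binomial whose success probability is an assumed quantum efficiency q yields photoelectron counts whose mean and variance both equal q·λ, the Poisson signature (λ, q, and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, q, n = 50.0, 0.4, 200_000        # mean photons, assumed quantum efficiency, trials
photons = rng.poisson(lam, n)         # Poisson photon arrivals
electrons = rng.binomial(photons, q)  # each photon converts independently with probability q
# If the thinned process is Poisson, mean and variance should both be q*lam = 20
```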
Additional Considerations When Putting the Imaging Chain Elements Together
Figure 16 illustrates the fundamental steps of putting together all of the elements of a simple imaging chain model to simulate the image capture portion of the imaging chain. Image processing, such as calibration, is generally applied to the output of the image capture portion of the imaging chain to generate the final image produced by the camera. The image simulations can then be processed to enhance the image quality, displayed, and evaluated by image users to complete the imaging chain for assessing the image quality of the camera design.
It is easy for many developers to oversimplify the imaging chain models, but these simplifications can lead to misleading results when the evaluations are conducted using the image simulations. For example, many simple imaging chain models will only consider the PSF of the Airy pattern, shown in Fig. 7, in their system. Although it is the optics diffraction that sets the fundamental resolution limit of the system PSF, using the Airy pattern as the system PSF overlooks the blurring effects caused by the other elements of the imaging chain that can significantly impact the resulting image quality. Also, the Airy pattern is the circular aperture diffraction PSF for a single wavelength of light and ignores the integration of the PSF over the spectral bandpass of the imaging system. If the camera integrates the light over the spectral range λ₁ to λ₂, then the radiant flux reaching the image plane is found by integrating the monochromatic image, each wavelength formed with its own wavelength-scaled PSF, over the bandpass, i.e.,

  E(x, y) = ∫_{λ₁}^{λ₂} [PSF_λ(x, y) ** L_λ(x, y)] R(λ) dλ,

where R(λ) is the spectral response of the system.
As an illustration, let us consider a gray world where the spectral radiance of the scene is flat across the spectral range λ₁ to λ₂ and the spectral response of the system is flat across the spectrum as well. In this special case, the integration over the spectral bandpass will simply average the spatial scaling of the PSF over the range λ₁ to λ₂. Even if we only look at the diffraction PSF and corresponding OTF for a circular aperture over the visible spectral bandpass, as shown in Fig. 17, we see differences from the single wavelength model. Note that the side rings of the Airy pattern smooth out after the PSF is integrated over the spectral bandpass.
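This smoothing of the diffraction nulls by spectral integration can be checked with a simple one-dimensional stand-in: a slit aperture, whose monochromatic PSF goes as sinc²(Dx/λf), averaged over a flat 400 to 800 nm band (the slit geometry, aperture size, and gray-world weighting are simplifying assumptions, not the circular-aperture case in the figure):

```python
import numpy as np

D, f = 0.05, 1.0                          # slit width (m) and focal length (m), illustrative
lam = np.linspace(400e-9, 800e-9, 401)    # flat ("gray world") visible bandpass
lam0 = 600e-9                             # reference wavelength
x0 = lam0 * f / D                         # first monochromatic null for 600 nm

psf_mono = np.sinc(D * x0 / (lam0 * f)) ** 2          # 600-nm PSF exactly at its own null
psf_poly = np.mean(np.sinc(D * x0 / (lam * f)) ** 2)  # band-averaged PSF at the same point
```

The monochromatic PSF is essentially zero at its null, while the band average stays clearly nonzero there: the sidelobe structure fills in, just as the Airy rings smooth out.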
When building an imaging chain model for a digital imaging system, particular attention also needs to be paid to the relationship between the optics and the digital sensor. We saw earlier that each imposes a fundamental limit on the detail that can be imaged with a digital camera and the image quality is dependent on the relationship between these limiting factors. The diffraction from the optical aperture imposes a limit on the highest spatial frequency that can be captured [Eq. (10)] and likewise the detector sampling imposes a limit from the Nyquist frequency [Eq. (24)]. The ratio of the sampling frequency and the optical cutoff frequency is a design parameter defined as12

  Q = ξ_sampling/ρ_c = (1/p)/[D/(λ f)] = λ f/(D p),

where p is the detector sampling pitch.
Historically, Q has been defined as the ratio of the sampling frequency to the optical cutoff frequency rather than the Nyquist frequency to the optical cutoff frequency, so the detector and optics resolution limits are equal when Q = 2, i.e., when the Nyquist frequency equals the optical cutoff frequency. The resolutions of digital cameras that are designed with Q > 2 are fundamentally limited by the optics diffraction, whereas digital cameras designed with Q < 2 are fundamentally limited by the detector sampling.
It may seem intuitive that digital cameras should all be designed with Q = 2, but other factors, such as signal-to-noise ratio and motion blurring, will influence the final image quality, and Q = 2 may not be the best design when all image quality factors are considered (Fig. 18). Most digital cameras will produce brighter, sharper images when Q < 2, with typical designs falling closer to Q = 1. It is important to note that Q only compares the resolution limits between the optics and the detector sampling and does not take into consideration the system transfer function. The impact of the OTF is a significant factor in the system transfer function when Q < 2 (Fig. 19), so for most imaging systems, we must understand the effect of the optical WFE on the system transfer function, which will influence the image quality.
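The Q calculation itself is a one-liner; the sketch below uses illustrative values (a 10-cm aperture, 1-m focal length, 0.5-μm light) to show a detector-limited design and a design where Nyquist matches the optical cutoff:

```python
def q_parameter(wavelength, focal_length, aperture_diam, pixel_pitch):
    # Q = sampling frequency / optical cutoff frequency = lambda*f / (D*p)
    xi_samp = 1.0 / pixel_pitch
    xi_cutoff = aperture_diam / (wavelength * focal_length)
    return xi_samp / xi_cutoff

q_coarse = q_parameter(0.5e-6, 1.0, 0.10, 5.0e-6)  # 5-um pitch: detector-limited (Q = 1)
q_matched = q_parameter(0.5e-6, 1.0, 0.10, 2.5e-6) # 2.5-um pitch: Nyquist equals cutoff (Q = 2)
```

Halving the pitch doubles the sampling frequency and hence doubles Q, all else being equal.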
Wavefront Error and Optical Transfer Function Generation
In the previous section, we only considered a very simple form of the OTF that included the diffraction from a clear circular aperture with a generalized WFE that can be modeled as an OQF, although the importance of properly modeling the WFE was illustrated. If we simply extend the aperture model to include a circular central obscuration of diameter D_obs, as we would see in a Cassegrain telescope, then the aperture diffraction PSF will now have the form

  PSF(r) ∝ [1/(1 − ε²)²] {2 J₁(π D r/(λ f))/(π D r/(λ f)) − ε² · 2 J₁(π ε D r/(λ f))/(π ε D r/(λ f))}²,

where ε = D_obs/D is the obscuration ratio.
The diffraction OTF for a circular aperture with a central obscuration is given by the normalized autocorrelation of the annular aperture function.
Note that adding the central obscuration to the clear circular aperture has the effect of moving the concentric rings in the PSF closer toward the center while more energy is moved from the central peak to the outer rings (Fig. 20). Also note that the contrast of the mid-spatial frequencies in the diffraction OTF is reduced as the central obscuration increases in size, but the cutoff frequency remains the same.
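Both effects can be verified numerically by autocorrelating sampled pupils (a sketch; the grid size, pupil radius, and obscuration ratio below are arbitrary choices):

```python
import numpy as np

def pupil_mtf(obsc_ratio, n=256):
    # MTF as the normalized autocorrelation of a (possibly obscured) circular pupil,
    # computed via the FFT: autocorrelation = IFFT(|FFT(pupil)|^2)
    y, x = np.mgrid[-1.0:1.0:n * 1j, -1.0:1.0:n * 1j]
    r = np.hypot(x, y)
    pupil = ((r <= 0.5) & (r >= 0.5 * obsc_ratio)).astype(float)
    acorr = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2)))
    return acorr / acorr.max()

n = 256
mid = n // 2
mtf_clear = pupil_mtf(0.0, n)   # unobscured circular aperture
mtf_obsc = pupil_mtf(0.4, n)    # 40% central obscuration
# Mid-band contrast drops with obscuration, but the support (cutoff) is unchanged.
```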
Although these models for the diffraction OTF are useful for simple imaging chain models, they are still inadequate for more complex optical designs, especially when we need to consider all of the potential sources for WFEs.
The primary driver for the shape of the PSF is the shape of the exit pupil. Historically and in many elementary texts on optics, the shape of the exit pupil is an unobstructed circle. However, in reality, the exit pupil has obstructions and in more complex systems, like segmented arrays and multitelescope systems, the overall shape of the exit pupil is very complex. A secondary effect on the shape of the PSF is that of the WFE. WFE is the spatial variation about the optimal optical path length over the exit pupil. Again, historically and in many elementary texts on optics, WFE would be modeled with a linear combination of Zernike polynomials, which are an infinite set of orthogonal functions over the unit disk. Obviously, this orthogonality is lost on exit pupils with complex shapes. One option is to find a new set of orthogonal functions for the new shapes.13 This works well for a small yet common set of shapes. Another option is to accept the loss of orthogonality and develop a method that allows the usage of any collection of basis functions, including the influence functions of adaptive optics on an arbitrary exit pupil geometry.
The method presented below is a generalized method for tracking WFE in a consistent manner that is applicable to arbitrary pupil geometries. WFE will be visualized as a vector in an infinite-dimensional vector space; however, calculations will be done in a finite-dimensional vector space. This vector space visualization will conveniently allow for the separation of WFE into multiple subspaces that can be aligned with the specifications for a future system. Calculations for RMS WFE, line-of-sight (LOS), and other macroscopic parameters are reduced to fast matrix calculations.
The impulse response function , which is a precursor to the PSF, can be calculated by evaluating the Fraunhofer diffraction integral.
Before we go much further, it is important to be explicitly clear about our choice of sign conventions. In general, sign conventions and the handedness of axes are completely arbitrary, but we must be consistent, and when obtaining data from an external source, we must take great care to understand the hidden assumptions in its sign conventions.
We start off with the definition of the optical axis that runs along the center line of the optical system in the direction of propagation. This will be the axis of our coordinate system. In other words, the axis points from the exit pupil to the focal plane. We will use a right-handed coordinate system, which will dictate the order of the and axes in the focal plane, as shown in Fig. 21. The labels and have been used for the axes in the exit pupil. The polar angle will be positive when measured from the axis in the direction the fingers of the right hand point when the thumb of the right hand is pointing in the positive direction. This is the normal convention for the polar angle .
The complex scalar field at the Gaussian reference sphere at the exit pupil is called the pupil function, , and is used in Eq. (40) to calculate the impulse response function. The pupil function can be broken into two real functions: the aperture function, , and the WFE function, . The aperture function, which is often a binary function but, in principle, can take on values between 0 and 1, represents the relative spatial distribution of the transmission of the exit pupil. The WFE function represents deviations in the wavefront from the ideal Gaussian wavefront.
In this documentation, we have picked to be in the direction along positive . This choice forces us to place a minus sign in the exponent of Eq. (40). This choice supports the visualization that the normal vector of the tip-tilt WFE shown in Fig. 21 points toward the center of the PSF.
The Fraunhofer diffraction integral of Eq. (40) has the same form as a Fourier transform integral, provided that we define the forward transform as
In practice, one will be creating sampled images of the exit pupil and then using a discrete Fourier transform (DFT) algorithm to calculate the PSF and OTF with the DFT implemented using a fast Fourier transform (FFT).
To avoid aliasing, one must sample the pupil function with sufficient zero padding. For this discussion, we will assume that a square grid is used and we will only talk about the linear scaling along one side. Let be the physical width of the sampling grid and be the enclosing radius of the pupil function. For a simple round optic, the enclosing radius is the radius of the optic, but for a complex multisegmented system, it is the radius of the smallest circle that can contain the full system. The enclosing radius must satisfy the following constraint:
If the square grid is sampled with an array, the sample spacing is . The leftmost axis in Fig. 22 shows the sampling of one side of this grid. The pupil function, , has been sampled at
An arbitrary choice has been made regarding the sampling that is done from to . We could have easily sampled from to , which would give us consistent results. The current choice is driven by using a computer programming language that references arrays with a zero-based index system. In other words, is the first element of the array and is the last. For an array with an even number of points and length , the center value of the function is at array index .
Another requirement for sampling the pupil function is that the smallest structure of the pupil function is adequately sampled. This includes the physical structure in the aperture function and the variations in the WFE function. If the size of the resulting grid gets too large for the computing resources of the day, one can use the techniques of Eikenberry et al. or Ransom et al.14,15
Let represent the action of a DFT on the data to create the data. One should keep in mind that every data value is a function of all the input data values . The PSF data can be calculated from the sampled pupil function, , as follows (Fig. 22). These PSF data can now be combined with an image via a convolution to produce the first step in simulating an optical system. Normally, this convolution is performed via the convolution theorem.
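As a concrete sketch, the sampling-and-FFT recipe above can be written in a few lines of NumPy; the grid size, grid width, and aperture radius below are illustrative choices, not values from the text:

```python
import numpy as np

# Sketch: compute the PSF of a circular aperture from a sampled pupil
# function via an FFT.  N, D, and r_enc are illustrative values.
N = 256                      # samples per side of the square grid
D = 4.0                      # physical width of the grid (arbitrary units)
r_enc = D / 4.0              # enclosing radius; <= D/2 leaves zero padding

x = (np.arange(N) - N // 2) * (D / N)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

aperture = (R <= r_enc).astype(float)          # binary aperture function
wfe = np.zeros_like(aperture)                  # zero WFE for this example
pupil = aperture * np.exp(-2j * np.pi * wfe)   # pupil function

# PSF is the squared magnitude of the Fourier transform of the pupil.
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(field) ** 2
psf /= psf.sum()                               # normalize to unit volume
```

The ifftshift/fftshift pair keeps the pupil and PSF arrays centered at index N // 2, matching the zero-based indexing convention discussed above.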
When modeling , it is often convenient to consider the MTF and PTF, which are related as
The magnitude of the OTF is the MTF, which is scaled such that the zero frequency is equal to one, . The PSF and OTF are a Fourier transform pair,
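The normalization of the MTF can be sketched the same way; the Gaussian stand-in PSF below is an arbitrary example, and any sampled PSF array would work identically:

```python
import numpy as np

# Sketch: the OTF as the Fourier transform of a sampled PSF, and the MTF
# as its magnitude scaled so that MTF at zero frequency equals one.
N = 128
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / (2 * 5.0**2))    # stand-in PSF
psf /= psf.sum()

otf = np.fft.fft2(np.fft.ifftshift(psf))   # zero frequency at index [0, 0]
mtf = np.abs(otf) / np.abs(otf[0, 0])      # normalize: MTF(0, 0) == 1
```

Because the PSF is nonnegative, the MTF can never exceed its zero-frequency value of one.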
For a simple round optic of radius with WFE, one can expand the phase of the pupil function out into a set of orthogonal functions. A common choice for these orthogonal functions is Zernike polynomials.
Two key benefits of using a set of orthogonal functions to describe WFE are that it is easy to describe the key aberrations with just a few terms and the orthogonality relationship of the basis functions makes it easy to calculate macroscopic parameters with just the coefficients of the basis functions. The orthogonality relationship is as follows:
The coefficients that describe a given WFE, , can be calculated by taking advantage of the orthogonality relationship.
Once the coefficients have been calculated, we can easily calculate the following integral by substituting Eq. (53) and taking advantage of the orthogonality relationship. Finally, we arrive at
Therefore, if we know the coefficients , calculating the value for is as simple as summing the square of the coefficients. Another very important way of looking at this is if we think of the WFE as a vector made up of the coefficients
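As a toy numerical check of the sum-of-squares relation (the coefficient values are made up, and an orthonormal basis is assumed):

```python
import numpy as np

# Toy illustration: with orthonormal basis functions, the RMS of a
# wavefront is the root-sum-square of its expansion coefficients.
c = np.array([0.05, -0.02, 0.01])      # example coefficients, in waves
rms_wfe = np.sqrt(np.sum(c**2))        # RMS WFE without any integration
```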
The rest of this work describes how to do these types of integrals when the optic is arbitrarily complex and the orthogonality relationship is no longer valid. This is done by careful bookkeeping of the WFE vector spaces and the calculation of a metric we will call the eta matrix, . This matrix will allow us to keep the visualization that WFE is a vector in a vector space and will be used to calculate the new dot product in this vector space.
Exit Pupil Modeling for Arbitrarily Complex Systems
We are going to consider a general hierarchy of nested apertures that make up the exit pupil. Each aperture can have a WFE that will affect the total system WFE. Also, each aperture can have child apertures, which are completely contained within the domain of the parent aperture. This hierarchy can be thought of as a simple tool to model a complex set of system specs, as a method of producing a spatially correlated WFE over a large multisegmented aperture array, or even as a way of modeling a multitelescope system. An example of an arbitrary nested aperture system is shown in Fig. 23. In Sec. 4.7, we will present a full numerical example for a nonrealistic multisegmented, multitelescope system. This example will contain the elements needed to demonstrate many of the applications of the matrix method.
Aperture Nesting Rules
In general, each aperture has an enclosing disk or other enclosing shape that contains the unobscured domain of the basis functions. For this discussion, we will assume that the enclosing shape is a disk with radius of , where is the vector-index designator for the given aperture. The enclosing disk is the domain over which the WFE basis functions are defined. If one is using Zernike polynomials, then the basis functions are also orthogonal over the domain of the disk. However, if the domain of the basis functions is not the shape of a disk, then one can think of the radius as a scale parameter of the enclosing shape. For example, if the basis functions are square image files, then the enclosing shape is a square and the scaling parameter can be the distance from the center to one of the corners.
The domain over which the ’th aperture is defined will be referred to as . This region is shown in Fig. 23 as the gray regions within the supporting circles. The gray regions get darker as there are more nested apertures. The vector indexing method helps visualize the tree structure of the nested apertures. Figure 24 shows the nesting hierarchy for the general hierarchy of nested apertures shown in Fig. 23. The vector is as long as the deepest trail on the graph from the root to the leaf. Each level corresponds to a dimension in the vector. The vector is the trail from the root to the node in question with zero padding. The union of the domains of the leaves of the aperture tree corresponds to the final pupil mask as shown in the inset of Fig. 23.
Apertures can be combined into a hierarchal arrangement to aid in modeling a complex system. Any aperture can have children apertures, which can be used to build up a complex hierarchy. The following rules apply to hierarchical aperture arrangements:
• The domain of child apertures is completely contained within the domain of their parent aperture.
• No other overlapping of apertures is allowed.
• WFE of the exit pupil is the sum of the WFE on all of the apertures at a given point in the exit pupil. One must work with the optical designer to map the WFE to the exit pupil.
• The exit pupil mask is the union of all of the domains of the leaf nodes of the aperture tree. An aperture tree is shown in Fig. 24.
Calculation of the Matrix
The goal of the matrix is to replace time-consuming numerical integrations of the product of two functions defined over the domain of the pupil with a quick matrix multiplication.
This matrix multiplication can be thought of as the dot product between the two vectors and in a nonorthogonal space described by the metric , where the elements of and are the expansion coefficients of the functions and . If we first look at a simple single aperture system where the WFE is expanded on the following basis functions, , then Eq. (60) can be rewritten as
It is now possible to switch the order of the integration and summations to arrive at
This seemingly trivial algebraic operation has profound computational effects. The integral in Eq. (62) can be calculated for all combinations of and with just the knowledge of the geometry of the aperture, . This will become the matrix. The instance of the WFE is contained in the expansion coefficients and , which become the WFE vectors and . Basically, we have separated the geometry of the aperture from the instance of the wavefront error.
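The separation can be demonstrated numerically. In this sketch, the aperture is a unit disk and the basis is the simple set {1, x, y}; both choices are only for illustration:

```python
import numpy as np

# Sketch of the eta-matrix idea: precompute the overlap integrals of the
# basis functions over the aperture once, then evaluate any wavefront
# inner product as a fast matrix product.
N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2
A = (X**2 + Y**2 <= 1.0).astype(float)         # unit-disk aperture

basis = [np.ones_like(X), X, Y]                # illustrative basis set

# eta_ij = integral of A * phi_i * phi_j over the pupil
eta = np.array([[np.sum(A * bi * bj) * dA for bj in basis] for bi in basis])

# Two wavefronts expanded on this basis (made-up coefficients) ...
w = np.array([0.3, -0.1, 0.2])
v = np.array([0.0, 0.5, 0.4])
W = sum(wi * b for wi, b in zip(w, basis))
V = sum(vi * b for vi, b in zip(v, basis))

# ... have the same overlap integral computed either way:
direct = np.sum(A * W * V) * dA     # slow numerical integration
fast = w @ eta @ v                  # fast matrix product
```

The direct integration and the matrix product agree to rounding error, but the matrix product can be repeated for any number of wavefront instances without touching the grid again.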
In the most general case, the matrix is the overlap matrix between every combination of two basis functions for every combination of two apertures over the domain these two apertures make in the exit pupil. The matrix is calculated by evaluating the following integral for each matrix element:
in Eq. (63) is the ’th basis function on the ’th aperture that is used to describe the WFE of the optical system.
Because there are no restrictions on the basis functions used, and because the same type of basis functions can be used on parent apertures along with child apertures, the matrix will not have full rank and there will be redundancy in how a given WFE can be represented on the basis functions. However, along the diagonal, there are blocks that have full rank.
Derivation of Support Matrices
We need to derive some supporting matrices to assist in the calculation of the system RMS WFE, global pointing plane (GPP), and LOS. To do this, one needs to calculate vector representations for the simple functions 1, , and .
As a result of using multiple complete sets of basis functions over the same region of space, there is some freedom in how one can expand any given function on the basis functions. When modeling a system, one has the freedom to pick appropriate basis functions. If the basis functions are polynomial, then the lowest-order basis functions are most likely of the form , , and , or some linear combination thereof, where are constants. In this situation, the vector expressions for the functions 1, , and will evaluate to a finite number of nonzero terms. If the basis functions are not polynomial, then the expansion of the functions 1, , and may not terminate. Due to this condition, it is suggested that at least three basis functions are polynomial and of the form , , and . The remainder of the basis functions can be anything else that best matches the needs of modeling the WFE of the system.
There is no loss in generality by requiring three basis functions for each leaf node to be the first three Zernike polynomials. If one chooses to use a different set for the first three basis functions, one is required to derive the final expressions shown in this section.
An arbitrary function can be written in terms of a linear combination of basis functions over the domain of the ’th aperture as
When solving for the expansion coefficients by multiplying by the ’th basis function, one can integrate over the full domain, not just the domain as masked by the aperture. This, of course, assumes that the function is known outside of the pupil domain and everywhere within the domain of the basis functions.
If Zernike polynomials are used for the basis functions, the expansion coefficients can be calculated with
The point is the center of the ’th aperture and is the radius of the disk that encloses the aperture. Because the matrix of Eq. (63) contains the information for all the apertures and basis functions, the , , and vectors are combined to form full system vectors, for example, as in Eq. (69).
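A minimal sketch of this coefficient calculation, assuming the first three Zernike terms (piston, tip, tilt) in local disk coordinates; the aperture center and radius are example values:

```python
import numpy as np

# Sketch: recovering expansion coefficients by orthogonality over the
# enclosing disk of one aperture.  Center (x0, y0) and radius R are
# illustrative; the test wavefront is a known piston/tip/tilt combination.
N = 801
x0, y0, R = 0.3, -0.2, 0.5                # example aperture center, radius
s = np.linspace(-1, 1, N)
Xg, Yg = np.meshgrid(s, s)
u, v = (Xg - x0) / R, (Yg - y0) / R       # local (unit-disk) coordinates
disk = (u**2 + v**2 <= 1.0).astype(float)
dA = (s[1] - s[0]) ** 2

W = 0.1 + 0.2 * u - 0.3 * v               # a known test wavefront

Z = [np.ones_like(u), u, v]               # Zernike piston, tip, tilt
coeffs = [np.sum(disk * W * Zi) * dA / (np.sum(disk * Zi**2) * dA)
          for Zi in Z]                    # a_i = <W, Z_i> / <Z_i, Z_i>
```

The recovered coefficients match the known piston, tip, and tilt of the test wavefront to within the grid discretization error.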
Projecting Out the Mean and the GPP
Given an arbitrary WFE function defined over the union of all of the domains of the leaves of the pupil mask tree, the mean of the WFE can be found by integrating over the valid domain and then dividing by the area of the domain, giving
Our goal is to find a matrix operation that accomplishes this task. By using Eq. (60) and the vector , we can write down the integration as a matrix operation.
This leads us to two matrices, one that calculates the mean, , and one that projects the mean out of the original WFE vector, .
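In matrix form, both operations might look like the following sketch; the eta values are the analytic ones for a unit-radius circular aperture with the illustrative basis {1, x, y}:

```python
import numpy as np

# Sketch: with an eta matrix in hand, the pupil-averaged (mean) WFE and
# its removal become matrix operations.  u represents the constant
# function 1 on the basis {1, x, y}; the eta entries are the analytic
# values for a unit disk (area pi, zero first moments, second moments
# pi/4), written down for the example.
eta = np.array([[np.pi, 0.0, 0.0],
                [0.0, np.pi / 4, 0.0],
                [0.0, 0.0, np.pi / 4]])
u = np.array([1.0, 0.0, 0.0])             # vector for the function "1"

area = u @ eta @ u                        # integral of 1 over the aperture
M_mean = (eta @ u) / area                 # 1-D operator: w -> mean(w)
P_mean = np.eye(3) - np.outer(u, M_mean)  # projects the mean out

w = np.array([0.2, 0.1, -0.3])            # an example WFE vector
mean_w = M_mean @ w                       # the piston term here
w0 = P_mean @ w                           # WFE with the mean removed
```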
In addition to projecting out the mean, another common operation is calculating and projecting out the best-fit plane to the WFE. This best-fit plane is called the GPP. A normal vector to the GPP that intersects the optical axis points to the location of the maximum of the PSF in the image plane. This location will be called the LOS. GPP is the global piston, tip, tilt (PTT) of a system.
We first start with a least-squares fit cost function for the GPP.
We proceed in the normal fashion by setting the partial derivatives to zero and then solving for , , and . During the mathematical manipulation, one will come across terms of the following form, which can be rewritten in a matrix form.
It is now possible to write down the matrices for calculating the GPP and projecting out the GPP [Eq. (77)]. The matrix will return , , and of the GPP when it acts on a WFE vector.
Finally, the matrix, which is used to project out the GPP, can be calculated with
The coefficients for the GPP can be calculated by applying the matrix on the WFE vector, .
The WFE about the GPP is obtained by subtracting the GPP from the full WFE function, or in matrix terms, the WFE vector can have the GPP projected out, where we let be the WFE vector with the GPP projected out, i.e.,
Generalized Projection Operators
In general, one can project one subspace into another provided that there is some overlap between the subspaces. An excellent example is the case of adaptive optics, where a set of influence functions are used to back out a measured WFE. Let of length be the measured WFE, which is expanded out on the basis functions . This subspace will be called . Let of length be the induced WFE from the influence functions in subspace . The hat above the vectors is used to denote that these vectors are not the full length that corresponds to the dimensions of the matrix. The influence functions are just another set of basis functions. The full WFE vectors are
It should be pointed out that in this example, the two subspaces and make up the whole matrix, but this idea can easily be extended without modifying the mathematics when there is an additional subspace , which holds everything else. One just needs to be careful indexing into the matrix and the WFE vectors.
The projection operator that will project into subspace is easily derived by looking at the functional form of the WFE. We need to find the best coefficients for that can be used to describe .
This can be done, in general, where there are many subspaces. One just has to keep track of where the elements of are mapped into the larger matrix. is required to preserve the components in and ensure that behaves like a projection operator.
Back to the adaptive optics example: if we can find the best instance of the influence functions that can cancel the input measured WFE, we will have optimally decreased the total system WFE. Therefore, using what was just derived above, the optimal setting for the influence functions is
Notice the minus sign, which is needed to ensure that the influence functions cancel the incoming WFE and do not amplify it. In Sec. 4.7.2, we will look at an example and point out some options available while modeling a system.
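A small 1-D sketch of this subspace-to-subspace mapping, with invented Gaussian influence functions; the measured-WFE basis, actuator locations, and widths are all illustrative:

```python
import numpy as np

# Sketch: project a measured WFE from one subspace (basis phi) into the
# influence-function subspace (basis g), then flip the sign so that the
# actuators cancel, rather than amplify, the error.
x = np.linspace(-1, 1, 2001)
dx = x[1] - x[0]
A = np.ones_like(x)                        # trivial 1-D "aperture"

phi = [np.ones_like(x), x]                 # measured-WFE basis (subspace A)
g = [np.exp(-((x - c) ** 2) / 0.1)
     for c in (-0.5, 0.0, 0.5)]            # influence functions (subspace B)

def overlap(f1, f2):
    return np.sum(A * f1 * f2) * dx        # eta-matrix element

eta_bb = np.array([[overlap(gi, gj) for gj in g] for gi in g])
eta_ba = np.array([[overlap(gi, pj) for pj in phi] for gi in g])

w_a = np.array([0.2, -0.1])                # measured WFE on phi
b_best = np.linalg.solve(eta_bb, eta_ba @ w_a)   # best match in subspace B
b_cmd = -b_best                            # minus sign: cancel, not amplify

# The residual RMS after correction is smaller than before:
W = w_a[0] * phi[0] + w_a[1] * phi[1]
corr = sum(bi * gi for bi, gi in zip(b_cmd, g))
res = W + corr
area = np.sum(A) * dx
rms_before = np.sqrt(np.sum(A * W**2) * dx / area)
rms_after = np.sqrt(np.sum(A * res**2) * dx / area)
```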
Once the matrix and the supporting projection matrices have been calculated, one can start calculating system-level parameters with fast matrix operations.
One of the most popular metrics calculated for an optical system is the RMS WFE. For a simple round unobscured optic, where the WFE is represented by Zernike polynomials, the RMS WFE can be calculated by taking the square root of the sum of the squares of the Zernike polynomial coefficients. However, for a more complex system with obscured optics and when other basis functions are used, this calculation involves performing a numerical integration over the WFE function over the domain of the aperture. If one needs to do Monte Carlo simulations over many wavefront configurations, this numerical integration can become a computational bottleneck.
By using the projection matrices derived in Sec. 4.3, one may write down matrix expressions for RMS WFE relative to mean wavefront and GPP.
1. RMS: This is the normal RMS that is often used for a monolith system. This is the RMS of the WFE after the mean has been removed. In a monolith system, any residual tip-tilt is simply thought of as an LOS offset and is not expected to change with time. The tip-tilt that does change with time is called image motion and is often modeled as smear and/or jitter.
2. RMS GPP: This is the RMS WFE with the best-fit plane removed. This RMS is rarely used in monolith systems, but becomes very important in multitelescope and segmented systems. In an actively controlled multitelescope or segmented system, the random motion of the segments can cause an effective image motion that is separate from the bus motion.
For faster computations, both RMS calculations can be done with fewer operations by noting that , where is the area of the full aperture, and a transformed can be precalculated, , resulting in the following expression for RMS WFE relative to GPP.
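The precomputation might look like this sketch, where the diagonal eta, toy projection matrix, and area are placeholders for quantities built as described above:

```python
import numpy as np

# Sketch of the precomputation trick: fold the GPP projection and the
# 1/area normalization into a single transformed eta matrix, so that each
# Monte Carlo draw costs one quadratic form.  All values are illustrative.
rng = np.random.default_rng(0)

n = 4
eta = np.diag([np.pi, np.pi / 4, np.pi / 4, np.pi / 8])  # placeholder eta
P_gpp = np.diag([0.0, 0.0, 0.0, 1.0])   # toy projection: removes PTT here
area = np.pi

eta_gpp = P_gpp.T @ eta @ P_gpp / area  # precompute once

def rms_about_gpp(w):
    return np.sqrt(w @ eta_gpp @ w)     # fast per-draw evaluation

w = rng.normal(size=n)                  # one random WFE vector
```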
The LOS is defined as the location of the maximum of the PSF. This is also the intersection of a normal originating from the GPP at the optical axis with the focal plane. The LOS can be thought of as a 2-D vector.
Correlated WFE vector
There are times when you know the coefficients for GPP, , and would like to know the correlated WFE vector that would produce the given GPP without adding any RMS WFE. This is called correlated because the PTT of all of the leaf node apertures move in unison to create the desired GPP. The correlated WFE vector is
The subscript GPP is used to distinguish this WFE vector from the WFE vector , which is the WFE relative to GPP. The WFE relative to the GPP can be thought of as an uncorrelated WFE.
The example shown in this section is an unrealistic multitelescope system with each subtelescope consisting of three hexagonal segments. This system has been picked such that it is possible to show many of the features of the matrix methods. This example, shown in Fig. 25(a), contains 13 apertures: 4 parent apertures, which are outlined with dashed circles, and 9 child apertures, which are the 9 hexagonal elements. The dots on the hexagonal elements are the locations of simple Gaussian influence functions.
If the system in Fig. 25(a) is only in the initial concept stage, there will be many unknown system parameters; however, this will not stop the questions of image quality and overall performance. To proceed, one must make reasonable assumptions. This process will inform the community and help with the creation of system requirements. For this investigation, each of the hexagonal segments will have PTT control and random PTT noise. Each group of three hexagonal segments makes up one telescope that is combined with the other two. This can result in PTT motion of each multitelescope subsystem. The final combiner optic, which is modeled via the largest enclosing parent aperture, will have aberrations and, in this case, also contain the influence functions. The real complexities of this system are quite involved, but from an initial image science point of view, each image point is made from a converging wavefront that can be represented as coming from an imaginary exit pupil, which is an effective focal length away. Therefore, provided it is possible to make reasonable assumptions on the form and statistical nature of the WFE, it is possible to bound the possible performance and help set requirements for the system concept.
The first thing that needs to be done is to calculate the matrix, which can be done by evaluating Eq. (63) for all of the basis functions and apertures in the given problem. It is not required, but it is convenient to create the smallest possible dimensional matrix. For example, in a situation where you are given some specifications that require a limit of a given RMS WFE on the following basis functions, , and a subset of these basis functions along with other basis functions are also required to model some external effects, like atmospheric turbulence, the matrix that is created should only have one row-column that corresponds to the basis functions that are common to both subspaces. The process for finding the lowest-dimensional matrix can be as simple as generating a list of all possible basis function/aperture combinations required and then eliminating the duplicates and keeping track of the index mappings back into all of the subspaces. Another way to think about subspaces is that subspaces are the links between a set of basis functions and a group of apertures, like an n-to-m link table in a database.
In this example, 14 groups of basis functions are used. The basis function groups include sets of Zernike polynomials, like , , and , Gaussian influence functions at all of the locations shown as dots in Fig. 25, and nine instances of a power spectrum density wavefront, one for each hexagonal aperture. When these groups of basis functions are combined with the different combinations of apertures, the result is 17 subspaces that can be used to model the dynamic PTT of the segments of the system, low spatial frequency variations of the apertures, or the adaptive optics, along with other effects. Any one of these subspaces could have a temporal dependence; for example, the low spatial frequency variations could have a time dependence resulting from temperature fluctuations.
The matrix for this example is shown in the right part of Fig. 25, where the white cells represent a value greater than zero, the black cells are less than zero, and the gray cells are identically zero.
Dynamic PTT WFE
If the physical structure of the system will result in high-frequency vibrations of the piston, tip, and tilt of the nine hexagonal mirror segments, one must be able to simulate the possible effects. When real PTT test data or simulated data from a structural engineer are provided for the system, one must be able to separate out the dynamical parts that drive the WFE from the dynamical parts that drive image motion or jitter. Otherwise, the image scientist must generate reasonable PTT data based on RMS WFE and image motion requirements. However, either way, a time-dependent PTT WFE vector is created, which has a form similar to
The time dependence of can be broken into two components: the correlated component, , and the uncorrelated component, . The correlated component represents the motion of the GPP, which is the average direction of the wavefront that determines the location of the PSF in the imaging plane. The uncorrelated component is the deviation from this average wavefront that determines the variation in shape of the PSF that is not driven by the geometry of the aperture. During an integration time with random PTT motion, one can imagine the maximum of the PSF randomly moving around on the image plane. This motion should be considered image motion or jitter. If you were to ride along with the maximum of the PSF, you would see the PSF changing shape as a function of time. This dynamic shape change should be considered to result from WFE.
The two components of can be calculated with
The operation of applying returns the vector as shown in Eq. (80), which is three parameters for the GPP as a function of time. The application of takes the GPP vector and creates a full WFE vector. The correlated component, , produces all of the LOS variation and the uncorrelated component produces all of the RMS WFE. A frame out of an animation is shown in Fig. 26, which shows the RMS WFE, the LOS, and the wavefront on the aperture.
If one is given a specification for RMS WFE, for dynamical PTT, and one for image motion possibly in the form of jitter, , then one can generate Monte Carlo data that match these specifications in the following manner:
1. Generate random time series for and from whatever method matches your physical situation.
2. Create the uncorrelated vector, , and scale to the required RMS WFE.
3. Create the LOS profile by scaling the correlated vector.
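The steps above might be sketched as follows; the eta matrix, projection matrix, and RMS specification are all illustrative placeholders for quantities built as in the earlier examples:

```python
import numpy as np

# Sketch of the Monte Carlo recipe: draw random PTT time series, take the
# uncorrelated (RMS WFE) part with the projection matrix, and scale it to
# the RMS WFE specification.  All numeric values are invented.
rng = np.random.default_rng(42)

n, n_steps = 4, 1000
eta = np.diag([np.pi, np.pi / 4, np.pi / 4, np.pi / 8])  # placeholder
P_gpp = np.diag([0.0, 0.0, 0.0, 1.0])    # toy projection: last term only
area = np.pi
eta_gpp = P_gpp.T @ eta @ P_gpp / area   # precomputed transformed eta

sigma_wfe_spec = 0.05                    # required RMS WFE (waves)

w_t = rng.normal(size=(n_steps, n))      # step 1: random PTT time series
w_u = w_t @ P_gpp.T                      # step 2: uncorrelated part

# Per-step RMS as a batched quadratic form, then scale to the spec.
rms_t = np.sqrt(np.einsum('ti,ij,tj->t', w_u, eta_gpp, w_u))
w_u *= sigma_wfe_spec / np.sqrt(np.mean(rms_t**2))
```

Each Monte Carlo step costs only a matrix product and a quadratic form; no numerical integration is repeated per draw.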
The time to do this type of analysis and Monte Carlo simulation is significantly reduced by using the matrix to perform these operations. Without the matrix, each term in the RMS time series above would require fitting a plane to the WFE and a numerical integration over the domain for the aperture. With the matrix, all possible elementary numerical integration combinations have already been done and all one needs to do is fast matrix multiplications.
Moving wavefront error between subspaces
As was pointed out in Sec. 4.2, the matrix most likely will not be of full rank. Because this wavefront vector space is overcomplete, it is possible to move the expression of the wavefront from one subspace to another. A common example of this is in adaptive optics, where a measured wavefront is reduced by applying a matching but negative wavefront. We are not going to talk about how one would model a wavefront sensing system or model the error in the application of the adaptive optics basis functions. However, we will show how the matrix methodology can help in separating ideas and the calculation of macroscopic parameters.
The following example will be for a time instance; one can easily add temporal dynamics by using similar methods as shown in the previous section. The total wavefront of a system is the simple vector sum of all of the wavefront subspaces.
These subspaces could represent any WFE source that is included in your modeling of the system. Let the sum of a subset of these vector spaces be , which represents the wavefront that is measured using a wavefront sensing method. If the wavefront sensing method is blind to the GPP, then one needs to break into the correlated and uncorrelated pieces.
The measurement of will have some residual error, , such that the final measured wavefront is
Using the projection operator derived in Sec. 4.5, it is possible to map into another vector space that contains the basis functions for the adaptive optics influence functions.
If one wants to assume that the application of the influence functions does not affect the GPP, then one could project out the GPP by applying the projection operator to . One can also add the noise associated with the application of the influence functions by creating an error vector . The final wavefront vector for the system is as follows:
As a final visual example, the top four subspaces shown in Fig. 27 have been applied to the system and assumed to be measured with no error. This measured WFE is then mapped into the influence function basis space using the methods previously discussed. The influence functions are then used to cancel the measured WFE. The system with the original WFE and the reduced WFE are shown in Fig. 28 along with cross-section plots of the MTF and density plots of the PTF. Because of the sparse nature of the geometry of the aperture, the MTF drops much faster than the filled aperture as shown in the top right of Fig. 9. The application of the influence functions improves the MTF but has a considerable effect on the PTF shown in the last row of Fig. 28. The PSF of the system before and after the application of the influence functions is shown in Fig. 29 along with a simple image simulation.
Modeling the imaging chain has proven to be an invaluable tool for optical designers to assess design trades and their impact on the final image quality. A tutorial for modeling the imaging chain of a digital camera system was provided to give an overview of the concept and the mathematics associated with a simple model for digital cameras. Accurately modeling the OTF in the imaging chain is critical for understanding the relationship between the WFE and the final image quality. Unfortunately, modeling the pupil function for the OTF calculation of an optical system can prove to be the most challenging step in the imaging chain model for dynamically changing complex optical systems, such as segmented or sparse aperture systems. Modeling the OTF over the range of dynamic wavefront instances to ascertain RMS WFE, LOS, and GPP can prove challenging and laborious. We introduced a novel approach that simplifies modeling the pupil function WFE in the imaging chain model. With the calculation of an matrix, the resulting WFE, LOS, and GPP calculations are greatly simplified with fast matrix calculations. No longer is it necessary to perform costly numerical integrations every time the instance of WFE changes. We demonstrated the application of the matrix to provide the dynamically changing WFE for a segmented sparse aperture.
We would like to acknowledge our great appreciation to our colleagues in the Imaging Science group at Exelis for the many useful and engaging conversations while developing the image chain modeling concepts.
Robert D. Fiete is the director of research and development at Exelis Geospatial Systems. He received his BS in physics and math from Iowa State University and his MS and PhD in optical sciences from the University of Arizona. Over the past 30 years, he has developed imaging chain models to assess the image quality of imaging system designs. He is a senior member of OSA and SPIE as well as a fellow of SPIE.
Bradley D. Paul is a senior image scientist at Exelis. He did undergraduate work at Miami University in Oxford, Ohio. He received his PhD in physics from JILA at the University of Colorado in Boulder, Colorado. After biking around the United States and Canada, he worked for a start-up company assisting in the construction of a holographic printer. He joined Eastman Kodak in 2004, where he has been developing new methods of analyzing electro-optical imaging systems.