The point spread function (PSF) is the image of a point source of light. The PSF is shaped by various effects, including the spatial frequency response of the optics and image motion due to line-of-sight (LOS) motion. The optical transfer function (OTF) of an isoplanatic (shift-invariant) optical system is the two-dimensional (2-D) spatial Fourier transform (FT) of the PSF for noncoherent (incoherent) optical radiation, and the modulation transfer function (MTF) is its magnitude.1,2 Prominent uses for the OTF of an optical system include predicting performance from simulation data, specifying performance tolerances and requirements for an optical system, and analyzing performance from test data.
Optical systems operating in real-world scenarios are subject to dynamic environments. The principal dynamic effect is image motion during the exposure interval in which electromagnetic energy is collected by the detector. Image motion reduces the system OTF, particularly at higher spatial frequencies, and therefore reduces image quality. Image motion is potentially a limiting factor in the imaging performance of an optical system. The image motion treated herein is the relative LOS pointing motion projected onto the two spatial dimensions of a focal plane. The relative pointing motion is due to camera attitude error, deliberate attitude motion, translational camera motion, and translational target motion. Other sources of image motion (for example, distortion and varying target aspect) are not considered. The various types and sources of image motion are illustrated and explained in detail in Ref. 3 (Ch. 8, pp. 103–115). The effect of image motion on the performance of an optical system is measured by an image motion OTF. In addition to its contribution to the system OTF described above, the image motion OTF is also needed to calculate an inverse filter for image compensation. In this work, we consider systems where all elements of the image sensor are exposed simultaneously. Line-scan detectors, time delay integration (TDI), and moving-shutter systems are not considered.
The purpose of this paper is to derive statistical image motion OTFs in two dimensions of spatial frequency for image displacement, smear, and jitter, and to provide a methodology to compute the parameters of the OTFs from LOS pointing motion of the optical system. Conventional analysis of the smear OTF (a sinc function) assumes some particular value for smear, so we call it a deterministic smear OTF. In general, image smear has a mean value plus a random variation from one image to another. In some optical systems, the random variation dominates the mean. The statistical smear OTF measures the average performance of an ensemble of images subject to nonzero-mean Gaussian random smear. It is best visualized as a surface over the two dimensions of spatial frequency. The derivations yield the familiar Gaussian jitter OTF, which is also a statistical OTF, and a displacement OTF, which measures image offset due to image motion. The parameters of the OTFs are means and covariances computed from the power spectrum of the pointing motion weighted by frequency domain weighting functions. Various types of pointing metrics are defined. The OTFs and the method to compute their parameters are intended to support integrated modeling, multidisciplinary analysis, and simulation of electro-optical systems.
Historical Literature Survey
Various authors4–18 have analyzed the effect of image motion on the performance of optical systems. The OTF has been studied analytically and numerically for specific motions such as uniform linear motion, accelerated motion, low-frequency sinusoidal motion with period greater than the exposure interval and with various initial phase angles, high-frequency sinusoidal motion with period less than the exposure interval, and white Gaussian random motion (jitter). The image motion OTF has been studied extensively for deterministic motion. Except for the jitter OTF, statistical treatment of the image motion OTF has been limited to numerical evaluation.
The image motion MTF derived in Refs. 4 and 5 for high-frequency sinusoidal motion, assuming an integral number of cycles during an exposure or many cycles so that fractional cycle is negligible, is shown to be a zero-order Bessel function of the spatial frequency and amplitude of the sinusoid. The low-frequency image motion MTF in Ref. 5 is simply the image motion MTF for uniform linear motion with the assumption that the image exposure time is much shorter than the period of the sinusoid. The image motion OTF for uniform linear motion and Gaussian random motion are also given in Ref. 4. The OTF for uniform linear motion and for sinusoidal motion, with zero to two cycles in the exposure interval, including fractional cycles, and for various initial phase angles, are analyzed in Ref. 7. The image motion OTF for quadratic motion was first analyzed in Ref. 6. The image motion OTF for linear plus quadratic (accelerated) motion is derived in Ref. 8, where it is shown that in the presence of accelerated motion the MTF is nonzero at any spatial frequency but approaches the sinc function as the smear due to acceleration becomes small compared with the smear due to the initial velocity. The MTF for a fractional-cycle sinusoid at a particular initial phase angle shown in Ref. 7 is similar to the MTF for accelerated motion in Ref. 8. This is not surprising, since a short segment of a sinusoid can be approximated as a quadratic. Image degradation due to various types of image motion is summarized in Ref. 3 (Ch. 8, pp. 115–124). A “lucky shot” probability model is derived in Ref. 10 and confirmed experimentally in Ref. 11 to predict how many independent exposures are needed, with a given probability, to obtain at least one image with a smear less than a given length. This result is important to compute the probability of target acquisition. A numerical method to compute the MTF from arbitrary motion data is presented in Ref. 
12, and MTFs are computed numerically for linear motion and for high- and low-frequency sinusoidal motion. Average MTFs for low-frequency sinusoidal motion with random initial phase (relative to the start of the exposure) and for low-frequency motion with a range of amplitudes are also computed in Ref. 12. The OTF for sinusoidal image motion is computed in Ref. 13 by first obtaining a line spread function (LSF) from a histogram (probability density function) of the image motion data and then computing the OTF by a fast Fourier transform (FFT) of the LSF. The numerically computed OTF due to sinusoidal image motion is studied in greater detail and confirmed experimentally using motion sensor data in Ref. 13. The numerically computed OTF for accelerated motion is also analyzed in Refs. 13 and 14. The image motion analyses in Refs. 10–14 are summarized in Ref. 15 (Ch. 14). An image motion MTF is derived in Ref. 16 by using moments of the motion data. There is no assumption about the type of motion or about its probability density. Results show that a large number of moments are typically required to achieve acceptable numerical accuracy, and the number of moments required depends on the data. A deterministic image motion MTF for a time-delay-integration (TDI) detector subject to uniform linear motion and a statistical (jitter) image motion MTF are derived in Ref. 17. The image motion MTF for a TDI line-scan detector and uniform linear motion was derived and analyzed via simulation in Ref. 18.
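The two closed forms most often quoted in this survey can be evaluated directly. The sketch below is an illustration, not code from any of the cited references: it assumes a smear length d for uniform linear motion, giving the sinc MTF, and an amplitude a for high-frequency sinusoidal motion with many cycles per exposure, giving the zero-order Bessel MTF.

```python
import numpy as np
from scipy.special import j0

def mtf_linear(f, d):
    """Deterministic smear MTF for uniform linear motion of length d.
    np.sinc(x) = sin(pi*x)/(pi*x), so this is |sin(pi*d*f)/(pi*d*f)|."""
    return np.abs(np.sinc(d * f))

def mtf_sinusoid_hf(f, a):
    """MTF for high-frequency sinusoidal motion of amplitude a, assuming
    many cycles per exposure (the zero-order Bessel form of Refs. 4 and 5)."""
    return np.abs(j0(2.0 * np.pi * a * f))

f = np.linspace(0.0, 10.0, 201)   # spatial frequency, cycles/mm
print(mtf_linear(f, 0.1)[0])      # unity at zero frequency
```

Both forms equal one at zero spatial frequency; the sinc form has nulls at multiples of 1/d, while the Bessel form decays without periodic nulls.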
In many systems, the image motion is more accurately represented by a power spectral density (PSD) spread over a range of frequencies rather than a single vibrational frequency. Image motion is defined in Refs. 19 and 20 as a displacement plus jitter, and the variances of the displacement and jitter are computed from the PSD of the image motion weighted by temporal-frequency domain weighting functions for displacement and jitter. The computation of covariance matrices in the present work follows that of Refs. 19 and 20. The jitter MTF given in Refs. 19 and 20 is the same as in Ref. 17. In previous work by the first author21,22 the terms “stability” and “jitter” are defined and (point-to-point) stability and windowed stability are introduced as measures of image stability over multiple images. A standard adopted by the National Geospatial Intelligence Agency for spatial data accuracy23 defines a point-to-point stability metric and an algorithm to compute it, typically for line-scan data. This metric can be computed more efficiently by our method (Sec. 4.1). Windowed stability measures the change in displacement from one image to another and is useful for image registration and target tracking.
This work extends results24 for LOS motion in one spatial dimension to two spatial dimensions and removes the assumption of zero-mean smear rate (and zero-mean smear). We first define the displacement, mean smear rate, and jitter components of image motion over the exposure interval and derive expressions for these as a function of the pointing motion. The mean smear is the mean smear rate times the exposure time. We then derive from first principles the general image motion OTF as a function of the pointing motion. The general image motion OTF is written in terms of displacement, smear, and jitter and is shown to be separable in these components of pointing motion. Taking expectations and time averages yields the statistical image motion OTFs. The OTFs are parameterized by means and covariances, which are computed from the power spectrum of the pointing motion weighted by frequency domain weighting functions. The frequency domain weighting functions are a direct result of the definitions of the displacement, mean smear rate, and jitter. A notable result is that the statistical smear OTF cannot be written as the product of one-dimensional (1-D) OTFs, unlike the deterministic smear OTF and the Gaussian jitter OTF.
In Sec. 2, the components of the pointing motion (displacement, smear, and jitter) are defined and expressions for these components in terms of the pointing motion are derived. In Sec. 3, we derive the general image motion OTF and derive expressions for the displacement, statistical smear, and jitter OTFs. The statistical smear OTF is characterized in Sec. 3.2. The statistical smear LSF (1-D PSF) is derived in Sec. 3.3. The OTFs and LSF are summarized in Table 1 in Sec. 3.3. In Sec. 4 and in Appendices A–H, equations for the mean and covariance of the components of the pointing motion are derived based on the power spectrum of the pointing motion. The lengthy derivations of the weighting functions are relegated to the Appendices but summarized in Table 2 in Sec. 4. Weighting functions used to compute the various covariance matrices are discussed in Sec. 4.1. The computation of the power spectrum is presented in Sec. 4.2, and a method for simulating and analyzing the pointing motion in an imaging vehicle or platform is discussed in Sec. 4.4.
Pointing Motion Model
Image motion is due to the relative LOS motion of the camera and the observed object. The relative LOS motion is caused by the relative translational and rotational motion of the camera and the observed object. Image motion due to changes in aspect of the object is not considered here. The image motion can be modeled by
Figure 1 shows image motion comprising displacement, smear, and jitter. Displacement is the average image offset over the exposure interval. Smear is due to a linear motion over the interval and is equal to the exposure time times the smear rate, where the smear rate is the average slope of the image motion over the exposure interval. Jitter is the residual motion after displacement and smear are removed from the image motion. Smear results in a streaked image, and jitter causes an image to be blurred.
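The decomposition can be sketched with a least-squares line fit. This generic sketch is not the paper's exact estimator: it takes the displacement as the mean of the sampled motion over the exposure, the smear as the fitted slope times the exposure time, and the jitter as the residual.

```python
import numpy as np

def decompose_motion(t, x):
    """Split sampled image motion x(t) over one exposure into
    displacement (mean offset), smear (slope times exposure time),
    and jitter (residual about the fitted line)."""
    T = t[-1] - t[0]                        # exposure interval length
    slope, intercept = np.polyfit(t, x, 1)  # least-squares line fit
    displacement = np.mean(x)
    smear = slope * T
    jitter = x - (intercept + slope * t)
    return displacement, smear, jitter

# synthetic motion: offset + linear drift + small high-frequency residual
t = np.linspace(0.0, 1.0, 1000)
x = 2.0 + 3.0 * t + 0.01 * np.sin(2 * np.pi * 10 * t)
d, s, j = decompose_motion(t, x)
```

For this synthetic record the recovered displacement is near 3.5, the smear near 3.0, and the jitter is the small oscillatory residual.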
The image exposure interval of length T centered at time t_c is [t_c − T/2, t_c + T/2]. Differentiation of the least-squares cost with respect to the displacement shows that the displacement is the time average of the image motion over the exposure interval. Differentiation of the least-squares cost with respect to the smear rate shows that the smear rate is the least-squares slope of the image motion over the exposure interval.
We assume that the displacement and smear rate are Gaussian and wide sense stationary, with means and covariances computed from the power spectrum of the pointing motion as summarized in Table 2 in Sec. 4. These covariance matrices are used in the OTF formulas of the next section to compute the smear and jitter OTFs.
Image Motion Optical Transfer Function
For noncoherent (incoherent) imaging by an isoplanatic (shift-invariant) electro-optical system, the Fourier transform of the image is the product of the Fourier transform of the point spread function of the optical system and the Fourier transform of the object geometrically projected onto the detector plane. The imaging process in the Fourier transform domain is
The irradiance of a still image is a function of the spatial location in the image. The average irradiance at a point over an exposure of duration T seconds centered at time t_c is
For an image subject to image motion, the average irradiance over the exposure is
Statistical Image Motion Optical Transfer Function
An analytical expression for the statistical image motion OTF is derived in this section. The statistical image motion OTF is the expected value of the single-image motion OTF in Eq. (29). It is shown in Appendix D that the displacement and smear rate are independent random variables in each exposure interval and are independent of the least-squares residual (the jitter), so the expectation factors into separate displacement, smear, and jitter terms.
Displacement optical transfer function
The displacement of an image is represented in the OTF by a constant phase shift. For any given exposure centered at time t_c, the displacement is a random constant during the exposure interval. Therefore, we take the expectation in Eq. (33) to obtain
Smear optical transfer function
For a Gaussian random smear rate with given mean and covariance, the second term in Eq. (33) is the characteristic function associated with the Gaussian density of the smear rate, so we have [Ref. 25 (p. 115)],
The function in Eq. (39) is the complex error function (usually denoted erf), and Re{·} denotes the real part of a complex argument. Equation (39) was obtained with the aid of Ref. 26 (p. 108, §2.33-1), which is also found in Ref. 27 [p. 3, §3.2, Eq. (3)]. The complex error function arises in Fourier analysis, Fresnel integrals, and the plasma dispersion function.
The error function for real arguments [Ref. 25 (p. 48)]28 is continued into the complex plane by substituting a complex argument for the real argument. The complex error function is Hermitian, so erf(z*) = [erf(z)]*. [Similarly, the standard error function is odd, so erf(−x) = −erf(x).] The complex error function is bounded between −1 and 1 for all real arguments but is unbounded on the imaginary axis. Therefore, Eq. (39) is best computed from the combined expression rather than from separate terms. The complex error function and its properties, related functions, and series expansions are given in Ref. 29 (pp. 297–309). Algorithms, code, and documentation for computing the complex error function are found in Refs. 30–34. One must be careful in using any numerical algorithm; remarks on the method in Ref. 33 indicate that it may be less accurate for complex arguments near the imaginary axis. It is beyond the scope of this paper to provide a detailed treatment of numerical methods to compute the complex error function. The reader is directed to Ref. 30 (§7) as a starting point, but beware that some articles and algorithms cited in Ref. 30 (§7.25) (and elsewhere) as methods for computing the complex error function actually compute the Faddeeva function. An exception is Ref. 32, which provides algorithms and code to compute the complex error function, the complementary complex error function, and the Faddeeva function. Although Ref. 35 provides algorithms for the Voigt function, it includes two series approximations for the complex error function, which are found in Refs. 33 and 34. The erfz function in Ref. 31, which comprises three separate algorithms noted in comments in the code, was used with its default settings to generate results in the next section. Equation (44) is the statistical smear OTF of Ref. 24, where zero-mean smear is assumed at the outset (and where the motion is 1-D). Equation (44) is the average of deterministic smear OTFs for images whose smear lengths are zero-mean Gaussian random variables.
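The confusion between the complex error function and the Faddeeva function noted above is easy to guard against numerically, since the two are related by erf(z) = 1 − exp(−z²)·w(iz). The sketch below checks this identity using SciPy, which exposes both functions; it is an illustration, not the algorithm of Ref. 31.

```python
import numpy as np
from scipy.special import erf, wofz  # wofz(z) is the Faddeeva function w(z)

def erf_via_faddeeva(z):
    """Complex error function computed from the Faddeeva function:
    erf(z) = 1 - exp(-z**2) * w(1j*z)."""
    return 1.0 - np.exp(-z**2) * wofz(1j * z)

z = 1.0 + 0.5j
print(erf(z))               # direct complex error function
print(erf_via_faddeeva(z))  # same value through the Faddeeva function
```

An algorithm advertised as computing erf but actually implementing w(z) will fail this comparison, which makes the identity a convenient sanity check.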
Equations (39)–(41) give the average smear OTF for images whose smear lengths are nonzero-mean Gaussian random variables. The relationships among these OTFs are illustrated and discussed in Sec. 3.2.
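For intuition about the zero-mean limit, the 1-D statistical smear OTF can be obtained by averaging the deterministic sinc OTF over a zero-mean Gaussian smear length s with dispersion σ, which gives H(f) = √(π/2)·erf(πfσ/√2)/(πfσ). This closed form is our own sketch of the zero-mean case, not a transcription of Eq. (44), and it is verified against a Monte Carlo average below.

```python
import numpy as np
from scipy.special import erf

def smear_otf_zero_mean(f, sigma):
    """E[sinc(f*s)] for smear length s ~ N(0, sigma^2): the ensemble-average
    smear OTF for zero-mean Gaussian smear (unity at zero frequency)."""
    a = np.pi * np.asarray(f, dtype=float) * sigma
    out = np.ones_like(a)
    nz = a != 0.0
    out[nz] = np.sqrt(np.pi / 2.0) * erf(a[nz] / np.sqrt(2.0)) / a[nz]
    return out

# Monte Carlo: average the deterministic sinc OTFs of many random smears
rng = np.random.default_rng(0)
s = rng.normal(0.0, 1.0, 200_000)   # smear lengths, sigma = 1 mm
f = 0.5                             # spatial frequency, cycles/mm
mc = np.mean(np.sinc(f * s))        # np.sinc(x) = sin(pi*x)/(pi*x)
print(mc, float(smear_otf_zero_mean(f, 1.0)))
```

The Monte Carlo average of sinc OTFs and the closed form agree to within sampling error, illustrating the sense in which the statistical smear OTF is an ensemble average.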
From Eqs. (39)–(41), the statistical smear OTF is clearly not separable into mean-smear and dispersion terms. Therefore, it is incorrect to model the deterministic and stochastic effects of smear as the product of the deterministic smear OTF (the sinc function) and the zero-mean statistical smear OTF. Furthermore, the two spatial frequency components of the statistical smear OTF are also not separable, so the statistical smear OTF cannot be expressed as the product of 1-D OTFs in each frequency variable. A coordinate transformation can be applied so that one component of the mean smear is zero. This rotates the graph of the OTF so that one axis is aligned with the direction of the mean smear. Even in this coordinate system, the OTF is not separable.
In computing the statistical OTF, we could have switched the order of expectation and time averaging,
Comparison of Deterministic and Statistical Smear Optical Transfer Function
The statistical smear OTF is characterized to show how it behaves as a function of smear and smear dispersion (standard deviation of smear). For clarity, the characterization is shown for one frequency axis. Surface plots are also provided to illustrate the statistical smear OTF in two dimensions of spatial frequency.
Figure 2 shows two plots of the 2-D statistical smear OTF evaluated along one frequency dimension for various values of mean smear and smear dispersion. The frequency axis is one-sided since the OTF is an even function.
Figure 2(a) shows the statistical smear OTF for a fixed mean smear and smear dispersion ranging from 0.02 to 2.0 mm. The OTF converges to the sinc function, Eq. (43), as the dispersion goes to zero, and converges to the zero-mean statistical smear OTF, Eq. (44), as the mean smear goes to zero. The curve for the smallest dispersion is almost indistinguishable from the sinc curve and so is not shown. An interesting characteristic is that the curves rise as the dispersion increases until the dispersion equals the mean smear, and then fall as the dispersion increases further. The degradation of the statistical smear OTF is pronounced as the dispersion increases above the mean smear. The curves begin to look like the sinc function when the dispersion is less than about half the mean smear.
Figure 2(b) shows the statistical smear OTF for a fixed smear dispersion and mean smear ranging from 0.02 to 2.0 mm. The curve for the smallest mean smear is almost indistinguishable from the zero-mean curve and so is not shown. The curves move left and down as the smear increases, again indicating worsening degradation of the image. The curves begin to look like the sinc function when the smear is greater than twice the dispersion, which is consistent with Fig. 2(a).
The statistical smear OTF in two dimensions of spatial frequency is shown in the contour plots in Fig. 3. (A three-dimensional mesh plot is difficult to show clearly, so it is omitted.) The statistical smear OTF in Fig. 3(a) was produced with mean smear along one axis and equal smear dispersions; compare with Fig. 2(a). Although there is mean smear along that axis, the average smear OTF is not a sinc function, although the response is a sinc function for each realization of smear in each image. Figure 3(b) was produced with mean smear along a line and unequal smear dispersions. Although the mean smear is along that line, the sinc-like response is not aligned with it. Although this may seem counterintuitive, it is because of the large random smear along one axis. Since the statistical smear OTF can change dramatically with changes in the parameters, one must be careful in making any general statements regarding the smear OTF. Nevertheless, the statistical smear OTF provides information about OTF performance that is not revealed by the deterministic smear OTF (the sinc function) for any choice of smear such as a “worst-case” smear.
In Fig. 4(a), the statistical smear MTF tightly bounds the deterministic smear MTF for the same mean smear. It also bounds the deterministic smear MTF for longer smear lengths, since the deterministic smear MTF is smaller for longer smear lengths.
The statistical smear and jitter OTFs are shown in Fig. 4(b) for the parameter values noted in the figure. From Eq. (60), the contribution of smear to the root-mean-square (RMS) attitude motion is 1.15 in this example, only slightly larger than the jitter. Empirical evidence indicates that image quality tolerates degradation from smear better than from jitter. This is because the statistical smear OTF goes slowly to zero with increasing spatial frequency, whereas the jitter OTF goes to zero quickly.
Statistical Smear Line Spread Function
The deterministic smear LSF is a rectangle (boxcar) function. The rectangle function is shown in Fig. 5(a) for three values of smear. The statistical smear LSF describes the average LSF over an ensemble of random rectangle LSFs whose widths are random from one image to another. The statistical smear LSF can be computed as the expected value of the random rectangle function. Alternatively, the statistical smear LSF can be obtained by computing the inverse Fourier transform of the statistical smear OTF, Eq. (39). The computation is facilitated by substituting Eq. (36) for the integrand in Eq. (38), and then switching the order of integration and inverse Fourier transform, and by assuming that the random smear rate is Gaussian. The derivation is lengthy by either method, so details are omitted. We have derived the statistical smear LSF in one dimension for zero-mean Gaussian smear. The 1-D statistical smear LSF for zero-mean smear is
The statistical smear LSF is shown in Fig. 5(b) for values of the smear dispersion from 0.2 to 2.0 mm. In comparison, the deterministic smear LSF for a given smear length is a rectangle function whose width is the smear length and whose amplitude is its reciprocal (a Dirac delta function for zero smear). Like the deterministic smear LSF, the statistical smear LSF has the required property of unit area.
Gaussian random smears are concentrated around the mean, which is zero in Eq. (47); hence the LSF is large near the origin and grows without bound as its argument approaches zero. Large smears are infrequent, so the LSF decays to zero at large arguments. The LSF broadens, but becomes thinner near the origin, as the dispersion increases. For small dispersion, the LSF is concentrated near the origin, and it approaches a Dirac delta function as the dispersion goes to zero. As can be seen in Eq. (47), the dispersion scales the graph of the LSF [Fig. 5(b)] in both axes.
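The behavior described above can be made quantitative for the zero-mean case. Averaging rectangle LSFs of width |s| and height 1/|s| over s ~ N(0, σ²) gives LSF(x) = E₁(2x²/σ²)/(σ√(2π)), where E₁ is the exponential integral. This closed form is our own sketch, not a transcription of Eq. (47); it diverges logarithmically at the origin, decays at large |x|, integrates to one, and scales with σ in both axes, matching the behavior noted in the text.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1

def smear_lsf_zero_mean(x, sigma):
    """Ensemble-average LSF for zero-mean Gaussian smear: the expected value
    of rectangle LSFs of width |s|, height 1/|s|, with s ~ N(0, sigma^2)."""
    return exp1(2.0 * x**2 / sigma**2) / (sigma * np.sqrt(2.0 * np.pi))

# Monte Carlo check at one point: E[ indicator(|x| <= |s|/2) / |s| ]
rng = np.random.default_rng(1)
s = rng.normal(0.0, 1.0, 500_000)
x0 = 0.5
mc = np.mean((np.abs(x0) <= np.abs(s) / 2.0) / np.abs(s))
print(mc, smear_lsf_zero_mean(x0, 1.0))
```

The Monte Carlo expectation of the random rectangle LSF agrees with the closed form, and the function is even and monotonically decreasing in |x| as described.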
Summary of image motion OTFs and LSF. [The table lists the displacement OTF, the deterministic and statistical smear OTFs and smear LSFs, and the jitter OTF; the statistical smear OTF is given by Eqs. (40) and (41), and the expressions for the remaining entries appear in the equations cited in the text.]
The OTFs for displacement, smear rate, smear, and jitter in Table 1 are parameterized by the means (mean displacement, mean smear rate, and mean smear) and covariances (displacement, smear rate, smear, and jitter covariances) in Eqs. (13)–(19). These are derived in Appendices B, C, and E. Covariance matrices for other performance metrics are also derived: the covariances of the relative pointing motion, smitter (the sum of smear and jitter), point-to-point stability (the relative motion at points in time separated by a fixed lag), and windowed stability (displacements of exposures separated in time). These are defined in Appendices A, F, G, and H and discussed further in Sec. 4.1.
Although the covariance matrices can be computed directly from sampled motion data, that approach is computationally intensive and does not reveal what spectral content of the pointing motion contributes significantly to the covariance matrices, hence to the image motion OTF. The spectral content can also reveal which sources of relative pointing motion contribute most significantly to the covariance matrices and to the image motion OTFs. The basic idea19,20 is to compute the PSD from the autocorrelation of the relative pointing motion . Expressions for the covariances in terms of the PSD are derived in Appendices A–H.
We assume only that the pointing motion is wide sense stationary during the exposure intervals. We can ignore pointing motion between exposure intervals that is not characteristic of the motion during the exposures (e.g., a slew between exposures). Any average displacement and trend over the ensemble of images should be removed so that a valid autocorrelation and PSD are computed. After subtracting the overall trend, the trend in each exposure can vary but averages to zero over the ensemble. The average displacement and trend are added back into the displacement and smear (or smear rate) before computing the OTFs.
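A minimal sketch of the detrending step follows; it removes a single overall linear trend from the whole record, whereas the paper's procedure also handles per-exposure trends over the ensemble.

```python
import numpy as np

def detrend_pointing(t, x):
    """Remove the overall mean and linear trend from pointing data before
    computing the autocorrelation and PSD.  Returns the residual and the
    (offset, slope) to be added back into the displacement and smear rate
    before computing the OTFs."""
    slope, offset = np.polyfit(t, x, 1)
    residual = x - (offset + slope * t)
    return residual, offset, slope

t = np.linspace(0.0, 10.0, 2000)
x = 0.3 + 0.05 * t + 0.01 * np.random.default_rng(2).standard_normal(t.size)
r, b, m = detrend_pointing(t, x)
```

The residual has zero mean and no trend, as required for a valid autocorrelation, while the recovered offset and slope are carried along for the OTF computation.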
The autocorrelation of the pointing motion is the matrix
From the inverse Fourier transform, we obtain the autocorrelation in terms of the PSD,
Since the pointing motion is real, the autocorrelation is real and even, the PSD is real and even, and we can write Eq. (50) as
The PSD and autocorrelation are used in the Appendices to derive expressions for the covariance matrices of the pointing motion, displacement, smear rate, jitter, smitter, point-to-point stability, and windowed stability. The covariances are computed from expressions of the form summarized, along with the corresponding frequency domain weighting functions, in Table 2. The means are also defined in Table 2.
Summary of covariances and corresponding frequency domain weighting functions.
| Pointing measure | Mean | Covariance | Weighting function | Equations |
| Smear rate | | | | Eq. (66) |
| Point-to-point stability | | | | Eq. (88) |
| Windowed stability | | | | Eq. (93) |
The covariance of the pointing motion is derived in Appendix A and is given by Eq. (60). The weighting functions in Table 2 are such that the corresponding covariances are obtained from the PSD of the pointing motion by the same weighted-integral form.
Smitter is the sum of smear and jitter, or equivalently the pointing motion with the displacement removed. The smitter covariance in Table 2 is the jitter covariance defined in previous works19–22 and is not used to compute the image motion MTFs in Table 1.
The displacement, smear, jitter, and smitter weighting functions in Table 2 are plotted in Fig. 6 with the exposure time equal to 1 second. It can be seen that the frequency content of the pointing contributes to the covariances, hence to the OTFs, over certain ranges of frequency. The displacement weighting function is lowpass, so low-frequency pointing motion contributes to the displacement. The smear weighting function peaks at 0.7 Hz, is zero at 0 and 1.4 Hz, and exhibits smaller peaks at higher frequencies. The smear weighting function is essentially bandpass, so pointing motion over a certain range of frequencies contributes significantly to smear. The jitter weighting function is highpass. Large-amplitude pointing motion can be significant at frequencies where the weighting function is small. The displacement, smear, and jitter weighting functions overlap, and so the spectral content of the image motion at any frequency contributes to all three measures of image motion. The contribution of the pointing motion to displacement, smear, and jitter depends on the PSD of the pointing motion as well as the weighting functions, so no fixed frequency regions can be associated exclusively with displacement, smear, and jitter.
The point-to-point stability covariance measures the change in pointing from one instant to another. The stability weighting function with the lag equal to 1 second, shown in Fig. 7, has minima at 0, 1, 2, … Hz and maxima at 0.5, 1.5, 2.5, … Hz, so nonzero frequencies contribute to the point-to-point stability. Point-to-point stability is called stability in Refs. 21 and 22. The point-to-point stability metric for spatial data accuracy of line-scan data23 can be computed more efficiently by our method. However, the displacement, smear, and jitter metrics may be more appropriate.
The windowed stability covariance measures the change in displacement from one image to another. The windowed stability weighting function has two time parameters, the exposure time and the time between image center times. The windowed stability weighting function is plotted in Fig. 8(a) for one choice of these parameters. It is essentially a bandpass function and goes to zero at low and high frequencies for any choice of the two parameters. The windowed stability weighting function looks significantly different for various parameter values, as exemplified by Fig. 8(b). Windowed stability is useful in image registration and for specifying or evaluating performance of a frame-differencing camera.
Computation of the Power Spectrum and Covariances
A pointing performance analysis will typically produce both time-domain and frequency-domain data. Time-domain data is typically obtained from a time-domain simulator, and frequency-domain data is typically obtained from transfer functions driven by harmonic or white noise. Although the mean and covariance of displacement, smear, and jitter can be computed in either the time domain or in the frequency domain, their computation is most conveniently and efficiently performed in the frequency domain.
The main tool for computing the pointing covariances from uniformly sampled data is the FFT. The FFT of the sampled pointing motion is scaled to a power spectrum (not a density) by dividing it by the number of samples N and computing its magnitude squared, where N is assumed to be a power of two. (This assumption can be relaxed.) The power spectrum is then shifted (FFTSHIFT in Table 3) so that the zero-frequency line is at the center. The frequencies range from −1/(2Δ) to 1/(2Δ) Hz in increments of 1/(NΔ) Hz, where Δ is the sample interval in seconds. Note that the sum of the discrete power spectrum is equal to the second moment of the time-domain data,
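The scaling convention and the second-moment identity stated above can be written out directly. In this sketch N and Δ denote the record length and sample interval, as used here rather than in any particular reference.

```python
import numpy as np

def power_spectrum(x):
    """Two-sided power spectrum (not a density): |FFT(x)/N|**2,
    shifted so the zero-frequency line is at the center."""
    N = x.size
    return np.fft.fftshift(np.abs(np.fft.fft(x) / N) ** 2)

def frequency_axis(N, delta):
    """Frequencies matching the shifted spectrum, from -1/(2*delta) Hz
    upward in steps of 1/(N*delta) Hz."""
    return np.fft.fftshift(np.fft.fftfreq(N, d=delta))

rng = np.random.default_rng(3)
x = rng.standard_normal(1024)   # N a power of two (not required)
P = power_spectrum(x)
# Parseval: the discrete power spectrum sums to the second moment
print(np.sum(P), np.mean(x**2))
```

With this 1/N scaling, Parseval's theorem guarantees that the spectrum sums exactly to the mean square of the data, which is the identity quoted in the text.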
Summary of calculations for the power spectrum of pointing motion.
|N|length of data record|
|frequency range|−1/(2Δ) to 1/(2Δ) Hz in increments of 1/(NΔ), with sample interval Δ (sec)|
|power spectrum|magnitude squared of the FFT of the pointing motion divided by N, shifted (FFTSHIFT)|
|weighting function|one of the weighting functions in Table 2|
|autocorrelation|biased sample autocorrelation of the pointing motion|
|power spectrum (alternative)|power spectrum computed from the biased autocorrelation|
Since the power spectrum does not converge to the true spectrum with increasing record length, the data should be segmented and the power spectra of the segments averaged. Alternatively, the power spectrum can be computed from the biased, and possibly windowed, sample correlation function of the pointing motion. A detailed discussion of the computation of the power spectrum is beyond the scope of this paper; the reader is referred to Ref. 36 or one of many books on spectral analysis.
In the frequency domain, the pointing covariances are evaluated by computing the weighting functions at each frequency point, multiplying by the power spectrum of the pointing motion at each frequency, and then summing the terms. This computational algorithm is summarized in Table 3. The power spectrum can be computed using only non-negative frequencies, but the zero-frequency term is multiplied by one and the positive-frequency terms multiplied by two in the summation. Once the power spectrum is computed, the covariance matrix corresponding to one of the weighting functions in Table 2 is easily computed.
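The one-sided summation described above can be sketched as follows. The displacement weighting function used here, |sinc(fT)|² for a T-second averaging window, is an illustrative assumption rather than the paper's Table 2 definition.

```python
import numpy as np

def weighted_covariance(freqs, spectrum, weight):
    """Sum W(f)*P(f) over non-negative frequencies of a two-sided
    spectrum: the zero-frequency term counts once, positive terms twice."""
    total = 0.0
    for f, p in zip(freqs, spectrum):
        if f == 0.0:
            total += weight(f) * p
        elif f > 0.0:
            total += 2.0 * weight(f) * p
    return total

def w_displacement(f, T=1.0):
    """Illustrative lowpass displacement weighting for a T-second
    averaging window (an assumption, not the paper's Table 2 form)."""
    return np.sinc(f * T) ** 2

freqs = np.fft.fftshift(np.fft.fftfreq(256, d=0.01))
spectrum = 1.0 / (1.0 + (freqs / 5.0) ** 2)   # illustrative PSD shape
var = weighted_covariance(freqs, spectrum, w_displacement)
```

Because the spectrum and weighting function are even in frequency, the one-sided sum with doubled positive terms reproduces the full two-sided sum.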
Pointing Covariance from Relative Motion Covariance
In Sec. 4.2, the pointing motion is computed from the relative translation and relative attitude (a small-angle representation) by using the camera model in Eq. (1). The power spectrum and covariance matrices are then computed from the pointing motion.
An alternative approach is to first compute the power spectrum of the relative translation and the power spectrum of the relative attitude motion. At this point, there are two paths to compute the covariance matrices. One path is to apply sensitivity matrices to map the power spectra of the relative motions into the power spectrum of the pointing vector by
A computationally more efficient path is to compute the covariance matrices corresponding to the relative translation and relative attitude by using the formulas in Table 2. These covariance matrices are then mapped into the pointing covariances by an equation of the form given in Refs. 37 and 38.
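For independent translation and attitude, the covariance mapping takes the standard linear propagation form C_p = J_t C_t J_tᵀ + J_a C_a J_aᵀ. In the sketch below the sensitivity matrices are hypothetical placeholders, not those of Eq. (1) or Refs. 37 and 38.

```python
import numpy as np

def map_covariance(J_t, C_t, J_a, C_a):
    """Map relative translation covariance C_t and relative attitude
    covariance C_a into the 2x2 pointing covariance, assuming the two
    sources are independent: C_p = J_t C_t J_t' + J_a C_a J_a'."""
    return J_t @ C_t @ J_t.T + J_a @ C_a @ J_a.T

# hypothetical sensitivity matrices (2 pointing axes x 3 motion axes)
J_t = np.array([[1e-3, 0.0, 0.0],
                [0.0, 1e-3, 0.0]])   # illustrative translation scaling
J_a = np.array([[0.0, 2.0, 0.0],
                [-2.0, 0.0, 0.0]])   # illustrative small-angle mapping
C_t = np.diag([4.0, 4.0, 1.0])       # translation covariance
C_a = np.diag([1e-6, 1e-6, 1e-6])    # attitude covariance (rad^2)
C_p = map_covariance(J_t, C_t, J_a, C_a)
```

The result is symmetric and positive semidefinite by construction, and the same propagation applies to each covariance in Table 2.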
Pointing Performance Analysis
Figure 9 shows the pointing control system for an optical payload on an imaging vehicle. In the case of a spacecraft, the system model comprises models of attitude sensors, actuators, fuel slosh, a solar array drive, internal disturbances, and the optical system, all connected to appropriate nodes of a reduced-order Nastran model comprising rigid-body and flexible-body modes and mode shapes. The control loop is closed through the attitude controller. The attitude command reference input is a disturbance since it can excite structural and slosh modes, and the command itself may be subject to error (e.g., scan rate error or tracking rate error). Similar integrated modeling approaches are found in Refs. 38–43. An overview of modeling and analysis is given in Ref. 44.
As suggested in Ref. 19 (pp. 21–22) and Ref. 20 (pp. 573–574), the weighting functions can be approximated by linear transfer functions for use in control system analysis and synthesis. Standard state-space methods can then be applied to calculate the covariance matrices. A state-space solution that avoids having to compute the weighted FFT is presented in Refs. 45 and 46, but it would have to be modified for our model of pointing motion to include smear and a different jitter weighting function.
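For a stable linear state-space model driven by white noise, the steady-state covariance follows from the continuous-time Lyapunov equation. A numpy-only sketch, vectorizing the equation with Kronecker products (the function name is illustrative):

```python
import numpy as np

def steady_state_covariance(A, B, Q):
    """Solve A P + P A^T + B Q B^T = 0 for the steady-state covariance P of
    dx/dt = A x + B w, where w is white noise with intensity Q and A is
    stable.  The Lyapunov equation is vectorized with Kronecker products
    (row-major vec convention)."""
    n = A.shape[0]
    M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    p = np.linalg.solve(M, -(B @ Q @ B.T).ravel())
    return p.reshape(n, n)
```

For large models, a dedicated Lyapunov solver (e.g., a Bartels–Stewart implementation) is preferable to the dense Kronecker form shown here.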
Analysis of pointing performance is often faster and numerically more reliable (due to the time scales involved) if the system response to disturbances is computed directly in the frequency domain from a linear or linearized closed-loop transfer function rather than in the time domain from a simulator. A time-domain simulator can, of course, capture nonlinear and time-varying effects. The response of a system to high-frequency noise and disturbances is most accurately and efficiently computed in the frequency domain. For stochastic sources, the power spectrum can be computed directly by using standard state-space covariance methods. Once the power spectrum of the pointing motion is computed, it is trivial to compute the covariances, as discussed in Sec. 4.2. Segments of the pointing motion pertaining to nonimaging attitude motions have to be eliminated if they are not representative of the motion during the exposure interval.
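Propagating a disturbance spectrum through the closed-loop frequency response at each frequency point can be sketched as follows (the array shapes are an assumption for illustration, not from the paper):

```python
import numpy as np

def output_psd(H, S_in):
    """Output power spectrum S_out(f) = H(f) S_in(f) H(f)^H evaluated at
    each frequency point.  H has shape (nf, p, q) (complex closed-loop
    frequency response); S_in has shape (nf, q, q) (input disturbance
    spectrum)."""
    return np.einsum('fij,fjk,flk->fil', H, S_in, np.conj(H))
```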
In a linear or linearized system, the covariance of the pointing motion due to each noise, disturbance, or other source can be computed separately, and the individual covariances added to obtain the total pointing covariance. The individual contributions can then be ranked so that the greatest offenders can be identified. The power spectrum may be computed as a combination of a system frequency response, an FFT of the autocorrelation of time-series data, discrete spectral lines due to harmonic disturbance sources, and stochastic sources such as sensor noise. The pointing motion from each source can be computed at a different sample rate or frequency resolution, though the sample rate should be high enough, and the frequency resolution fine enough, to accurately represent the high-frequency responses of the system and to keep numerical errors in the computed covariance matrices insignificant. Similarly, time-domain data from different simulations do not have to be resampled to a common sample rate.
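Because the per-source covariances add in a linear system, ranking the contributors reduces to sorting by a scalar measure such as the trace (the mean-square pointing contribution). A minimal sketch, with illustrative names:

```python
import numpy as np

def rank_sources(cov_by_source):
    """Total pointing covariance and sources ranked by their contribution,
    measured by the trace of each per-source covariance matrix (valid when
    the sources are mutually uncorrelated)."""
    total = sum(cov_by_source.values())
    ranked = sorted(cov_by_source,
                    key=lambda name: np.trace(cov_by_source[name]),
                    reverse=True)
    return total, ranked
```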
Two-dimensional statistical image motion OTFs for the displacement, smear, and jitter components of image motion are derived. The line spread function (LSF) for zero-mean random smear is also derived. The statistical smear OTF measures the average optical system performance for an ensemble of images subject to nonzero-mean Gaussian random smear. In comparison, the deterministic (sinc function) smear OTF measures performance for a specified smear length. The familiar Gaussian jitter OTF is also a statistical OTF.
Limiting cases for the statistical smear OTF are given: (1) fixed nonzero mean smear and diminishing smear dispersion, and (2) diminishing mean smear and fixed nonzero smear dispersion. In the first case, the statistical smear OTF converges to a sinc function (the well-known deterministic smear OTF), and in the second case it converges to an error-function (erf) form. The statistical smear OTF begins to resemble the sinc function when the mean smear exceeds about twice the dispersion in the smear. For equal RMS attitude motion due to zero-mean random smear and jitter, the statistical smear OTF is greater than the jitter OTF at higher spatial frequencies. This corroborates the empirical observation that optical systems tolerate smear better than jitter.
The statistical OTFs are parameterized by means and covariances of the displacement, smear, and jitter components of pointing motion, with spatial frequency as the independent variable. The covariances are computed accurately and efficiently from a temporal-frequency-weighted power spectrum of the LOS pointing motion. The weighting functions are parameterized by only the exposure time. Essentially, the displacement weighting function is low-pass, the smear weighting function is band-pass, and the jitter weighting function is high-pass. These frequency regions overlap, so the spectral content of the image motion at any frequency contributes to all three measures of image motion; therefore, there are no arbitrary frequency regions associated with displacement, smear, and jitter. By examining the weighted power spectrum, a control system engineer can determine the temporal frequencies where the sensitivity of the OTFs to pointing motion is greatest. The control system design engineer can then focus on the most significant disturbance sources or frequencies, which can lead directly to improvements in the design of the pointing control system and in the design of the optical system. Because covariances are additive, individual disturbance sources can be analyzed to determine their relative contributions to the displacement, smear, and jitter OTFs. The weighting functions can also be used in control system synthesis to optimize a controller. The statistical OTFs and the method for determining their parameters are a basis for integrated modeling and multidisciplinary analysis and simulation.
In addition to the image motion OTFs and their associated means, covariances, and weighting functions, point-to-point stability and windowed stability are defined and formulas for the corresponding covariance matrices are derived. Point-to-point stability measures the change in pointing from one instant of time to another. Windowed stability measures the change in displacement from one image to the next.
The pointing (accuracy) covariance is the covariance of the pointing motion and is computed from
For consistency with other measures of pointing motion, we write the integral as
The displacement and displacement variance were originally derived in Refs. 19 and 20. We have written the definition of the displacement in a different but equivalent form in Eq. (8), so it is instructive to rederive the displacement covariance using our definition of the displacement. The steps involved are similar to those in Refs. 19 and 20. From Eqs. (8) and (50), we obtain the displacement covariance,
Since the pointing error is assumed to be a wide-sense stationary process, the autocorrelation is independent of the time origin, and so the displacement metric is valid for any start time. The displacement covariance can be written as
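For reference, with the displacement defined as the time average of the pointing motion over the exposure interval, the displacement variance follows from the power spectrum in the standard way. A sketch of the scalar case, with illustrative notation:

```latex
% Displacement as the average of the pointing motion x(t) over [0, T]:
x_D = \frac{1}{T}\int_0^T x(t)\,dt
% Its variance, for wide-sense stationary x(t) with PSD S_x(f):
\sigma_D^2 = \int_{-\infty}^{\infty} S_x(f)\,\operatorname{sinc}^2(fT)\,df,
\qquad \operatorname{sinc}(u) \equiv \frac{\sin(\pi u)}{\pi u}
```

The squared sinc is the low-pass displacement weighting function referred to in the text.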
Smear and Smear Rate Covariance
The smear rate covariance is obtained by substituting the smear rate from Eq. (11) into Eq. (48) and then by using Eq. (50), whence
Since the pointing error is assumed to be a wide-sense stationary process, the autocorrelation is independent of the time origin, and so the smear rate metric is valid for any start time. The smear rate covariance can be written as
The smear was defined as the product of the average smear rate and the exposure time. The smear covariance is given by
Correlation of Displacement and Smear Rate
Here, we show that the displacement and the smear rate are uncorrelated. This result is used in the derivation of Eq. (72):
The mean-square jitter over the exposure interval is given by
The jitter covariance is the expected value of the mean-square jitter over the exposure interval. Now substitute for the pointing motion from Eq. (4) and carry out the expectation by using the foregoing definitions:
Finally we have an expression for the jitter covariance,
Substitute from Eq. (14) to write the jitter covariance as
Now substitute Eqs. (58), (59), (62), (63), (65), (66), (67), and (68) into Eq. (74) to obtain the jitter covariance in terms of the PSD:
Thus, the former jitter defined in Refs. 20–22 is the sum of smear and jitter, which is termed “smitter.” Because smear and jitter affect the image motion OTF differently, the former definition of jitter is less useful than the present definition.
The mean-square smitter over the exposure interval is
The smitter covariance is given in Refs. 20–22 by
Substitute Eqs. (58), (59), (62), and (63) into Eq. (82) to obtain the smitter covariance in terms of the PSD:
Point-to-Point Stability Covariance
The change in the LOS pointing over a given time interval is given by
The point-to-point stability covariance measures the mean-square change in pointing from one instant to another and is given by the second-order structure function
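In the scalar case, the second-order structure function reduces to a familiar spectral form. A sketch with illustrative notation:

```latex
% Point-to-point stability over an interval \tau for wide-sense
% stationary x(t) with autocorrelation R_x and PSD S_x:
D_x(\tau) = E\{[x(t+\tau) - x(t)]^2\}
          = 2\,[R_x(0) - R_x(\tau)]
          = 4\int_{-\infty}^{\infty} S_x(f)\,\sin^2(\pi f \tau)\,df
```

The squared-sine factor is high-pass in character, so point-to-point stability is insensitive to slowly varying pointing errors over short intervals.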
Windowed Stability Covariance
There may be a requirement on the change in displacement of one image compared with a subsequent image. The change in displacement over a given time interval is given by
The windowed stability covariance measures the mean-square change in displacement and is given by the second-order structure function shown in Fig. 8. The averaging over the exposure window in Eq. (93) causes the weighting to go to zero as the frequency increases.
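Combining the displacement averaging with the structure-function form suggests the shape of the windowed stability weighting in the scalar case. This is a sketch only, assuming two exposure windows of length T whose start times are separated by \tau, with illustrative notation:

```latex
% Change in displacement between two exposure windows of length T:
\Delta x_D = \frac{1}{T}\int_{\tau}^{\tau+T} x(t)\,dt
           - \frac{1}{T}\int_{0}^{T} x(t)\,dt
% Its mean-square value for wide-sense stationary x(t) with PSD S_x:
E\{\Delta x_D^2\}
  = 4\int_{-\infty}^{\infty} S_x(f)\,\sin^2(\pi f \tau)\,
    \operatorname{sinc}^2(f T)\,df
```

The squared-sinc factor from the window average forces the weighting to zero at high frequency, consistent with the behavior described for Eq. (93).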
This work is the result of independent research by the authors. Financial support for publication was provided by the Harris Corporation and by Aerospace Control Systems LLC. The authors especially thank the reviewers for their insightful comments, which considerably improved this paper.
Mark E. Pittelkau received his BS and PhD degrees from Tennessee Technological University in 1981 and 1988, and his MS degree from Virginia Polytechnic Institute in 1983, all in electrical engineering. His work has been in spacecraft guidance, navigation, and control. He has designed and analyzed attitude determination and control systems for precision-pointing imaging and science spacecraft.
William G. McKinley received his BS degree in physics magna cum laude from Arizona State University in 1971 and his MS degree in optical science from the University of Arizona in 1975. In his optical career, he has worked at Kodak, Goodyear, TRW, ITEK, Goodrich Aerospace, and Harris Corporation. He has been engaged in the creation and analysis of all types and aspects of optical systems and the processing and evaluation of optical system data products.