17 June 2016

Optical transfer functions, weighting functions, and metrics for images with two-dimensional line-of-sight motion
We define the displacement, smear, and jitter components of image motion and derive the two-dimensional statistical image motion optical transfer function (OTF) corresponding to each component. These statistical OTFs are parameterized by means and covariances, which are computed most conveniently from a weighted power spectrum of the line-of-sight motion. Another feature of these results is the realization that all temporal and spatial frequencies contribute to each statistical OTF and that one can determine the frequencies that contribute most significantly to each OTF. Additionally, optical system design is typically based upon the properties of an individual image. In a comprehensive optical system design, the statistical properties of an ensemble of images should also be considered. For individual images subject to a constant but possibly unknown smear length, the OTF is a sinc function. This is called a deterministic smear OTF because it does not describe the smear statistically. The statistical smear OTF describes the average smear OTF for an ensemble of images.



The point spread function (PSF) is the image of a point source of light. The PSF is shaped by various effects, including the spatial frequency response of the optics and image motion due to line-of-sight (LOS) motion. The optical transfer function (OTF) of an isoplanatic (shift invariant) optical system is the two-dimensional (2-D) spatial Fourier transform (FT) of the PSF for noncoherent (incoherent) optical radiation, and the modulation transfer function (MTF) is its magnitude.1,2 Prominent uses for the OTF of an optical system include predicting performance from simulation information, specifying performance tolerances and requirements for an optical system, and analyzing performance from test data.

Optical systems operating in real-world scenarios are subject to dynamic environments. The principal dynamic effect is image motion during the exposure interval in which electromagnetic energy is collected by the detector. Image motion reduces the system OTF, particularly at higher spatial frequencies, and therefore reduces image quality. Image motion is potentially a limiting factor in the imaging performance of an optical system. The image motion treated herein is the relative LOS pointing motion projected onto the two spatial dimensions of a focal plane. The relative pointing motion is due to camera attitude error, deliberate attitude motion, translational camera motion, and translational target motion. Other sources of image motion (e.g., distortion and varying target aspect) are not considered. The various types and sources of image motion are illustrated and explained in detail in Ref. 3 (Ch. 8, pp. 103–115). The effect of image motion on the performance of an optical system is measured by an image motion OTF. In addition to its contribution to the system OTF described above, the image motion OTF is also needed to calculate an inverse filter for image compensation. In this work, we consider systems in which all elements of the image sensor are exposed simultaneously. Line-scan detectors, time delay integration (TDI), and moving shutter systems are not considered.



The purpose of this paper is to derive statistical image motion OTFs in two dimensions of spatial frequency for image displacement, smear, and jitter, and to provide a methodology to compute the parameters of the OTFs from LOS pointing motion of the optical system. Conventional analysis of the smear OTF (a sinc function) assumes some particular value for smear, so we call it a deterministic smear OTF. In general, image smear has a mean value plus a random variation from one image to another. In some optical systems, the random variation dominates the mean. The statistical smear OTF measures the average performance of an ensemble of images subject to nonzero-mean Gaussian random smear. It is best visualized as a surface over the two dimensions of spatial frequency. The derivations yield the familiar Gaussian jitter OTF, which is also a statistical OTF, and a displacement OTF, which measures image offset due to image motion. The parameters of the OTFs are means and covariances computed from the power spectrum of the pointing motion weighted by frequency domain weighting functions. Various types of pointing metrics are defined. The OTFs and the method to compute their parameters are intended to support integrated modeling, multidisciplinary analysis, and simulation of electro-optical systems.


Historical Literature Survey

Various authors3–18 have analyzed the effect of image motion on the performance of optical systems. The OTF has been studied analytically and numerically for specific motions such as uniform linear motion, accelerated motion, low-frequency sinusoidal motion with period greater than the exposure interval and with various initial phase angles, high-frequency sinusoidal motion with period less than the exposure interval, and white Gaussian random motion (jitter). The image motion OTF has been studied extensively for deterministic motion. Except for the jitter OTF, statistical treatment of the image motion OTF has been limited to numerical evaluation.

The image motion MTF derived in Refs. 4 and 5 for high-frequency sinusoidal motion, assuming an integral number of cycles during an exposure or many cycles so that the fractional cycle is negligible, is shown to be a zero-order Bessel function J0(2πξD) of the spatial frequency ξ and amplitude D of the sinusoid. The low-frequency image motion MTF in Ref. 5 is simply the image motion MTF for uniform linear motion, with the assumption that the image exposure time is much shorter than the period of the sinusoid. The image motion OTFs for uniform linear motion and Gaussian random motion are also given in Ref. 4. The OTF for uniform linear motion and for sinusoidal motion, with zero to two cycles in the exposure interval, including fractional cycles, and for various initial phase angles, is analyzed in Ref. 7. The image motion OTF for quadratic motion was first analyzed in Ref. 6. The image motion OTF for linear plus quadratic (accelerated) motion is derived in Ref. 8, where it is shown that in the presence of accelerated motion the MTF is nonzero at any spatial frequency but approaches the sinc function as the smear due to acceleration becomes small compared with the smear due to the initial velocity. The MTF for a fractional-cycle sinusoid at a particular initial phase angle shown in Ref. 7 is similar to the MTF for accelerated motion in Ref. 8. This is not surprising, since a short segment of a sinusoid can be approximated as a quadratic. Image degradation due to various types of image motion is summarized in Ref. 3 (Ch. 8, pp. 115–124). A “lucky shot” probability model is derived in Ref. 10 and confirmed experimentally in Ref. 11 to predict how many independent exposures are needed, with a given probability, to obtain at least one image with a smear less than a given length. This result is important for computing the probability of target acquisition. A numerical method to compute the MTF from arbitrary motion data is presented in Ref. 12, and MTFs are computed numerically for linear motion and for high- and low-frequency sinusoidal motion. Average MTFs for low-frequency sinusoidal motion with random initial phase (relative to the start of the exposure) and for low-frequency motion with a range of amplitudes are also computed in Ref. 12. The OTF for sinusoidal image motion is computed in Ref. 13 by first obtaining a line spread function (LSF) from a histogram (probability density function) of the image motion data and then computing the OTF by a fast Fourier transform (FFT) of the LSF. The numerically computed OTF due to sinusoidal image motion is studied in greater detail and confirmed experimentally using motion sensor data in Ref. 13. The numerically computed OTF for accelerated motion is also analyzed in Refs. 13 and 14. The image motion analyses in Refs. 10–14 are summarized in Ref. 15 (Ch. 14). An image motion MTF is derived in Ref. 16 by using moments of the motion data. There is no assumption about the type of motion or about its probability density. Results show that a large number of moments are typically required to achieve acceptable numerical accuracy, and the number of moments required depends on the data. A deterministic image motion MTF for a time-delay-integration (TDI) detector subject to uniform linear motion and a statistical (jitter) image motion MTF are derived in Ref. 17. The image motion MTF for a TDI line-scan detector and uniform linear motion was derived and analyzed via simulation in Ref. 18.

In many systems, the image motion is more accurately represented by a power spectral density (PSD) spread over a range of frequencies rather than a single vibrational frequency. Image motion is defined in Refs. 19 and 20 as a displacement plus jitter, and the variances of the displacement and jitter are computed from the PSD of the image motion weighted by temporal-frequency domain weighting functions for displacement and jitter. The computation of covariance matrices in the present work follows that of Refs. 19 and 20. The jitter MTF given in Refs. 19 and 20 is the same as in Ref. 17. In previous work by the first author,21,22 the terms “stability” and “jitter” are defined, and (point-to-point) stability and windowed stability are introduced as measures of image stability over multiple images. A standard adopted by the National Geospatial-Intelligence Agency for spatial data accuracy23 defines a point-to-point stability metric and an algorithm to compute it, typically for line-scan data. This metric can be computed more efficiently by our method (Sec. 4.1). Windowed stability measures the change in displacement from one image to another and is useful for image registration and target tracking.



This work extends results24 for LOS motion in one spatial dimension to two spatial dimensions and removes the assumption of zero-mean smear rate (and zero-mean smear). We first define the displacement, mean smear rate, and jitter components of image motion over the exposure interval and derive expressions for these as a function of the pointing motion. The mean smear is the mean smear rate times the exposure time. We then derive from first principles the general image motion OTF as a function of the pointing motion. The general image motion OTF is written in terms of displacement, smear, and jitter and is shown to be separable into these components of the pointing motion. Taking expectations and time averages yields the statistical image motion OTFs. The OTFs are parameterized by means and covariances, which are computed from the power spectrum of the pointing motion weighted by frequency domain weighting functions. The frequency domain weighting functions are a direct result of the definitions of the displacement, mean smear rate, and jitter. A notable result is that the statistical smear OTF cannot be written as the product of one-dimensional (1-D) OTFs, unlike the deterministic smear OTF and the Gaussian jitter OTF.



In Sec. 2, the components of the pointing motion (displacement, smear, and jitter) are defined, and expressions for these components in terms of the pointing motion are derived. In Sec. 3, we derive the general image motion OTF and derive expressions for the displacement, statistical smear, and jitter OTFs. The statistical smear OTF is characterized in Sec. 3.2. The statistical smear LSF (1-D PSF) is derived in Sec. 3.3. The OTFs and LSF are summarized in Table 1 in Sec. 3.3. In Sec. 4 and in Appendices A–H, equations for the mean and covariance of the components of the pointing motion are derived based on the power spectrum of the pointing motion. The lengthy derivations of the weighting functions are relegated to the Appendices but summarized in Table 2 in Sec. 4. Weighting functions used to compute the various covariance matrices are discussed in Sec. 4.1. The computation of the power spectrum is presented in Sec. 4.2, and a method for simulating and analyzing the pointing motion in an imaging vehicle or platform is discussed in Sec. 4.4.


Pointing Motion Model

Image motion is due to the relative LOS motion of the camera and the observed object. The relative LOS motion is caused by the relative translational and rotational motion of the camera and the observed object. Image motion due to changes in aspect of the object is not considered here. The image motion p(t) can be modeled by

p(t) = c[X(t), θ(t)], (1)

where c(X, θ) is a camera model parameterized by the relative translation X and the relative attitude θ. The relative attitude here is a small-angle rotation vector but could be represented by a quaternion, direction cosine matrix, Euler angles, or another parameterization. Relative pointing motion and image motion are synonymous, with image motion being interior to the camera and relative pointing motion being exterior to the camera.

Figure 1 shows image motion comprising displacement, smear, and jitter. Displacement is the average image offset over the exposure interval of length T. Smear is due to a linear motion over the interval and is equal to T times the smear rate, where the smear rate is the average slope of the image motion over the exposure interval. Jitter is the residual motion after displacement and smear are removed from the image motion. Smear results in a streaked image and jitter causes an image to be blurred.

Fig. 1

Illustration of image motion comprising displacement, smear, and jitter.


The image exposure interval of length T centered at time t0 is

I(t0) = [t0 − T/2, t0 + T/2].
The image motion p(t) over the exposure interval is

p(t) = p¯(t0) + v¯(t0)(t − t0) + ψ(t), t ∈ I(t0), (2)

where p¯(t0) is the image displacement over I(t0), v¯(t0) is the uniform smear rate (the average rate) over I(t0), and ψ(t) is the jitter motion in the interval I(t0). For convenience, let α = t − t0 and write Eq. (2) as

p(t0 + α) = p¯(t0) + v¯(t0)α + ψ(t0 + α). (3)
The jitter over I(t0) is obtained from Eq. (3) as

ψ(t0 + α) = p(t0 + α) − p¯(t0) − v¯(t0)α. (4)
We shall compute the displacement and smear rate from a least-squares fit of p¯(t0) and v¯(t0) to the pointing motion p(t) over the interval I(t0). The jitter motion is then the least-squares residual. The best fit minimizes the mean square jitter

J(t0) = (1/T) ∫_{−T/2}^{T/2} ψᵀ(t0 + α) ψ(t0 + α) dα, (5)

where J(t0) is the mean square jitter in the interval I(t0).

Differentiation of J with respect to the displacement p¯ gives

∂J/∂p¯ = −(2/T) ∫_{−T/2}^{T/2} [p(t0 + α) − p¯(t0) − v¯(t0)α] dα. (6)

Setting this partial derivative to zero yields

∫_{−T/2}^{T/2} p(t0 + α) dα = T p¯(t0), (7)

since ∫_{−T/2}^{T/2} α dα = 0. The displacement is thus given by

p¯(t0) = (1/T) ∫_{−T/2}^{T/2} p(t0 + α) dα. (8)

Differentiation of J with respect to the smear rate v¯ gives

∂J/∂v¯ = −(2/T) ∫_{−T/2}^{T/2} α [p(t0 + α) − p¯(t0) − v¯(t0)α] dα. (9)

Setting this partial derivative to zero yields

∫_{−T/2}^{T/2} α p(t0 + α) dα = (T³/12) v¯(t0), (10)

since ∫_{−T/2}^{T/2} α² dα = T³/12. The smear rate is thus given by

v¯(t0) = (12/T³) ∫_{−T/2}^{T/2} α p(t0 + α) dα. (11)
Smear, rather than smear rate, is the observable to which optical system performance is traditionally linked. The smear length s¯(t0) is the linear change in the pointing over the interval I(t0) and is given by

s¯(t0) = T v¯(t0). (12)
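The decomposition above (displacement as the interval average, smear rate as the slope of the best linear fit, jitter as the residual) can be checked numerically. The following is a minimal 1-D sketch with illustrative values; the helper name `decompose_motion` is ours, not from the paper:

```python
import numpy as np

def decompose_motion(t, p, t0, T):
    """Split sampled 1-D image motion over [t0 - T/2, t0 + T/2] into
    displacement (intercept), smear rate (slope), and jitter (residual)
    via a degree-1 least-squares fit, following the text's definitions."""
    mask = (t >= t0 - T / 2) & (t <= t0 + T / 2)
    alpha = t[mask] - t0                      # time relative to interval center
    v_bar, p_bar = np.polyfit(alpha, p[mask], 1)
    psi = p[mask] - (p_bar + v_bar * alpha)   # jitter residual
    return p_bar, v_bar, T * v_bar, psi       # displacement, rate, smear length, jitter

# Example: offset + linear trend + high-frequency sinusoid (the "jitter")
t = np.linspace(0.0, 1.0, 2001)
p = 0.5 + 0.3 * (t - 0.5) + 0.01 * np.sin(2 * np.pi * 40 * t)
p_bar, v_bar, s_bar, psi = decompose_motion(t, p, t0=0.5, T=1.0)
```

Because the sinusoid completes an integer number of cycles in the interval, it contributes almost nothing to the fitted displacement and slope, and survives almost entirely in the residual.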

We assume that p¯(t0) and v¯(t0) are Gaussian and wide-sense stationary with means

E[p¯(t0)] = μ, (13)

E[v¯(t0)] = ρ, (14)

and covariances ΣD and ΣR, respectively. The mean smear due to the uniform smear rate is

s = Tρ, (15)

with covariance

ΣS = T²ΣR. (16)
We assume also that the jitter motion ψ(t) = ψ(t0 + α) is Gaussian. The jitter is zero mean when the data fit the model Eq. (2) at each t0:

E[ψ(t0 + α)] = 0. (17)
The mean-square jitter over the interval I(t0) is given by

J(t0) = (1/T) ∫_{−T/2}^{T/2} E[ψ(t0 + α) ψᵀ(t0 + α)] dα. (18)

This is similar to the scalar average square jitter in Eq. (5). Because the jitter is zero mean, the jitter covariance is

ΣJ = J(t0). (19)
Formulas to compute ΣD, ΣR, ΣS, and ΣJ from the power spectrum of the pointing motion p(t) are summarized in Table 2 in Sec. 4. These covariance matrices are used in the OTF formulas of Sec. 3 to compute the smear and jitter OTFs.


Image Motion Optical Transfer Function

In the Fourier transform domain, the noncoherent (incoherent) image formed by an isoplanatic (shift invariant) electro-optical system is the product of the Fourier transform of the point spread function of the optical system and the Fourier transform of the object geometrically projected onto the detector plane. The imaging process in the Fourier transform domain is

G(ξ) = OTF(ξ) Go(ξ), (20)

where ξ = [ξx ξy]ᵀ is the 2-D spatial frequency. The OTF of an optical system is the product of the OTF of the image motion, the OTF of the optical diffraction (aperture, wavefront, and so on), the OTF of the detector, the OTF of the atmosphere, and any other effects that may be present. A typical system OTF is thus given by


In this work we are concerned only with the effect of image motion on the system OTF.
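The cascade of component transfer functions can be sketched numerically as a simple product. The component models and parameter values below are hypothetical illustrations (a Gaussian jitter blur and a pixel-aperture sinc), not values from the paper:

```python
import numpy as np

xi = np.linspace(0.0, 2.0, 201)          # spatial frequency, cycles/mm (assumed units)

# Hypothetical component MTFs; shapes and parameters are illustrative only
mtf_motion = np.exp(-2.0 * np.pi**2 * 0.05**2 * xi**2)   # Gaussian jitter blur, sigma = 0.05 mm
mtf_detector = np.abs(np.sinc(0.25 * xi))                # pixel aperture, 0.25 mm pitch

# The system MTF is the product of the component MTFs
mtf_system = mtf_motion * mtf_detector
```

Each additional component can only reduce the system MTF, since every component MTF is at most unity.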

The irradiance Io(x, t) of a still image is a function of the spatial location x in the image. The average irradiance at x over an exposure of duration T seconds centered at time t0 is

go(x, t0) = (1/T) ∫_{t0−T/2}^{t0+T/2} Io(x, t) dt. (22)
We assume that the irradiance of a still image is constant with time, so that Io(x, t) = Io(x), and we have

go(x) = Io(x). (23)
The 2-D Fourier transform of go(x) is

Go(ξ) = ∫ go(x) exp(−i2πξᵀx) dx. (24)
The irradiance of the image subject to image motion p(t) is I(x, t) = Io[x − p(t)], and the average irradiance over the exposure is

g(x) = (1/T) ∫_{t0−T/2}^{t0+T/2} Io[x − p(t)] dt. (25)
The 2-D Fourier transform of g(x) is

G(ξ) = ∫ g(x) exp(−i2πξᵀx) dx = (1/T) ∫_{t0−T/2}^{t0+T/2} ∫ Io[x − p(t)] exp(−i2πξᵀx) dx dt. (26)
Let z = x − p(t). Then x = z + p(t) and dx = dz. Substitute these into Eq. (26) to get

G(ξ) = Go(ξ) (1/T) ∫_{t0−T/2}^{t0+T/2} exp[−i2πξᵀp(t)] dt = Go(ξ) K(ξ, t0). (27)
The term K(ξ, t0) is the general single image motion OTF for the exposure interval centered at t0:

K(ξ, t0) = (1/T) ∫_{t0−T/2}^{t0+T/2} exp[−i2πξᵀp(t)] dt. (28)
It will be convenient to substitute t = t0 + α and dt = dα into Eq. (28) to obtain

K(ξ, t0) = (1/T) ∫_{−T/2}^{T/2} exp[−i2πξᵀp(t0 + α)] dα. (29)
This single image motion OTF depends on the image motion p(t) during the exposure interval centered at time t0.


Statistical Image Motion Optical Transfer Function

An analytical expression for the statistical image motion OTF is derived in this section. The statistical image motion OTF is the expected value of the single image motion OTF in Eq. (29):

OTFM(ξ) = E[K(ξ, t0)] = (1/T) ∫_{−T/2}^{T/2} E{exp[−i2πξᵀp(t0 + α)]} dα. (30)
The integrand in Eq. (30) is

exp[−i2πξᵀp(t0 + α)]. (31)
Substitute for p(t0 + α) from Eq. (3) into Eq. (31) and factor the exponential:

exp[−i2πξᵀp(t0 + α)] = exp[−i2πξᵀp¯(t0)] exp[−i2πξᵀv¯(t0)α] exp[−i2πξᵀψ(t0 + α)]. (32)
It is shown in Appendix D that p¯(t0) and v¯(t0) are independent random variables in each interval I(t0) and are independent of the least-squares residual ψ(t0 + α) = ψ(t), so we have for t ∈ I(t0) and α ∈ [−T/2, T/2],

E{exp[−i2πξᵀp(t0 + α)]} = OTFD(ξ) OTFS(ξ, α) OTFJ(ξ), (33)
where OTFD(ξ) is the displacement OTF, OTFS(ξ, α) is the smear OTF, and OTFJ(ξ) is the jitter OTF. The dependence of OTFS(ξ, α) on α will be removed by integration over α in Eq. (30), so that

OTFM(ξ) = OTFD(ξ) OTFS(ξ) OTFJ(ξ). (34)


Displacement optical transfer function

The displacement of an image is represented in the OTF by a constant phase shift. For any given exposure centered at time t0, the displacement p¯(t0) is a random constant during the exposure interval. Therefore, we take the expectation in Eq. (33) to obtain

OTFD(ξ) = exp[−i2πξᵀp¯(t0)]. (35)
An image displacement is merely a shift in position of the image in the focal plane, and so the displacement MTF is unity, since MTFD(ξ)=|OTFD(ξ)|=1.


Smear optical transfer function

For a Gaussian random smear rate v¯(t0) with mean ρ and covariance ΣR, the second term in Eq. (33) is the characteristic function associated with the Gaussian density of v¯(t0), so we have [Ref. 25 (p. 115)]

OTFS(ξ, α) = E{exp[−i2πξᵀv¯(t0)α]} = exp(−i2παξᵀρ − 2π²α²ξᵀΣRξ). (36)
The dependence on α is removed by taking a time average over the exposure interval, which yields the statistical smear OTF,

OTFS(ξ) = (1/T) ∫_{−T/2}^{T/2} OTFS(ξ, α) dα (37)

= (1/2) ∫_{−1}^{1} exp(−q²u² − i2qru) du (38)

= (√π/2q) exp(−r²) Re[erfz(q + ir)], (39)

where

q² = (1/2)(πT)² ξᵀΣRξ, (40)

r² = (ξᵀρ)² / (2ξᵀΣRξ). (41)

For convenience, using Eqs. (15) and (16), q² and r² can be written in terms of the mean smear and smear covariance,

q² = (1/2)π² ξᵀΣSξ,

r² = (ξᵀs)² / (2ξᵀΣSξ).
The function erfz(·) in Eq. (39) is the complex error function (usually denoted erf), and Re(·) is the real part of its complex argument. Equation (39) was obtained with the aid of Ref. 26 (p. 108, §2.33-1), which is also found in Ref. 27 [p. 3, §3.2, Eq. (3)]. The complex error function arises in Fourier analysis, Fresnel integrals, and the plasma dispersion function.

The error function erf(x) for real x is [Ref. 25 (p. 48)]

erf(x) = (2/√π) ∫_{0}^{x} exp(−u²) du. (42)
The complex error function erfz is the error function28 continued into the complex plane, with a complex argument z in place of the real argument x. The complex error function is Hermitian, so erfz(z*) = [erfz(z)]*. [Similarly, the standard error function is odd, so erf(−x) = −erf(x).] The complex error function is bounded between ±1 for all real z = q but is unbounded on the imaginary axis z = ±ir as |r| → ∞. Therefore, Eq. (39) is best computed from the product exp(−|z|²)erfz(z) rather than from separate terms. The complex error function and its properties, related functions, and series expansions are given in Ref. 29 (pp. 297–309). Algorithms, code, and documentation for computing the complex error function are found in Refs. 30–34. One must be careful in using any numerical algorithm; remarks on the method in Ref. 33 indicate that it may be less accurate for complex arguments near the imaginary axis. It is beyond the scope of this paper to provide a detailed treatment of numerical methods to compute the complex error function. The reader is directed to Ref. 30 (§7) as a starting point, but beware that some articles and algorithms cited in Ref. 30 (§7.25) (and elsewhere) as methods for computing the complex error function actually compute the Faddeeva function. An exception is Ref. 32, which provides algorithms and code to compute the complex error function, the complementary complex error function, and the Faddeeva function. Although Ref. 35 provides algorithms for the Voigt function, it includes two series approximations for the complex error function, which are found in Refs. 33 and 34. The erfz function in Ref. 31, which comprises three separate algorithms noted in comments in the code, was used with its default settings to generate results in the next section.
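As one concrete route in this spirit (our sketch, not the algorithm used in the paper), the bounded product appearing in Eq. (39) can be formed through the Faddeeva function, available in SciPy as `scipy.special.wofz`, using the identity erf(z) = 1 − exp(−z²)w(iz):

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z) = exp(-z**2) * erfc(-1j*z)

def smear_otf_1d(xi, s, sigma_s):
    """Statistical smear OTF along one frequency axis (illustrative sketch).

    Forms (sqrt(pi)/(2q)) * exp(-r**2) * Re[erfz(q + 1j*r)] without overflow,
    using erf(z) = 1 - exp(-z**2) * wofz(1j*z), so that
        exp(-r**2) * erf(q + 1j*r)
            = exp(-r**2) - exp(-q**2 - 2j*q*r) * wofz(-r + 1j*q).
    """
    xi = np.atleast_1d(np.asarray(xi, dtype=float))
    q = np.pi * sigma_s * np.abs(xi) / np.sqrt(2.0)  # q**2 = (pi**2/2) sigma_s**2 xi**2
    r = abs(s) / (np.sqrt(2.0) * sigma_s)            # r**2 = s**2 / (2 sigma_s**2) in 1-D
    prod = np.exp(-r**2) - (np.exp(-q**2 - 2j * q * r) * wofz(-r + 1j * q)).real
    qs = np.maximum(q, 1e-300)                       # guard the q -> 0 limit (OTF -> 1)
    return np.where(q > 0.0, np.sqrt(np.pi) / (2.0 * qs) * prod, 1.0)
```

For moderate arguments this agrees with a direct evaluation through `scipy.special.erf` with a complex argument; the `wofz` route remains finite when r is large, since `wofz` is bounded in the upper half-plane.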

Two limiting cases of the statistical smear OTF are of interest. (1) When ΣS = 02×2 and s ≠ 0, the statistical smear OTF becomes the well-known deterministic smear OTF,17,18

OTFS(ξ) = sinc(πξᵀs), (43)
where sinc(x) = sin(x)/x is the sinus cardinalis (cardinal sine) function. (2) When s = 0 and ΣS ≠ 0, we have r = 0 in Eq. (38), and the statistical smear OTF becomes

OTFS(ξ) = (√π/2q) erf(q), (44)
where erf(q) is the real error function. This is the same as the result obtained in Ref. 24, where ρ = 0 (s = 0) is assumed at the outset (and where the motion is 1-D). Equation (44) is the average of deterministic smear OTFs for images whose smear lengths are zero-mean Gaussian random variables. Equations (39)–(41) give the average smear OTF for images whose smear lengths are nonzero-mean Gaussian random variables. The relationship between these OTFs is illustrated and discussed in Sec. 3.2.

From Eqs. (39)–(41), the statistical smear OTF is clearly not separable in terms of mean smear and dispersion. Therefore, it is incorrect to model the deterministic and stochastic effects of smear as the product of the deterministic smear OTF (the sinc function) and the statistical smear OTF (with s=0). Furthermore, the two spatial frequency components of the statistical smear OTF are also not separable, so the statistical smear OTF cannot be expressed as the product of 1-D OTFs in each frequency variable. A coordinate transformation can be applied so that one component of s is zero. This rotates the graph of OTFS(ξ) so that one axis is aligned with the direction of the mean smear. Even in this coordinate system, OTFS(ξ) is not separable.

In computing OTFS(ξ), we could have switched the order of expectation and time averaging,

OTFS(ξ) = E{(1/T) ∫_{−T/2}^{T/2} exp[−i2πξᵀv¯(t0)α] dα} = E{sinc[πξᵀs¯(t0)]}, (45)

where sinc(x) = (sin x)/x and s¯(t0) = T v¯(t0) is the random smear over the exposure interval centered at time t0. The remainder of the derivation in Eq. (45) is omitted here.
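Equation (45) suggests a direct numerical check: average the deterministic sinc OTFs of an ensemble of images whose smear lengths are Gaussian, and compare with the closed form. A sketch with illustrative values (units assumed to be mm and cycles/mm):

```python
import numpy as np
from scipy.special import erf   # SciPy's erf accepts complex arguments

rng = np.random.default_rng(0)
s_mean, sigma_s = 0.2, 0.2                      # mean smear and dispersion (mm, assumed)
xi = np.array([0.25, 0.5, 1.0, 2.0])            # spatial frequencies (cycles/mm, assumed)

def sinc(x):                                    # sin(x)/x convention used in the text
    return np.sinc(x / np.pi)                   # np.sinc(u) = sin(pi*u)/(pi*u)

# Ensemble average of deterministic smear OTFs, one random smear length per image
smears = rng.normal(s_mean, sigma_s, size=200_000)
mc = np.array([sinc(np.pi * x * smears).mean() for x in xi])

# Closed-form statistical smear OTF specialized to one axis, Eqs. (39)-(41)
q = np.pi * sigma_s * xi / np.sqrt(2.0)
r = s_mean / (np.sqrt(2.0) * sigma_s)
analytic = np.sqrt(np.pi) / (2.0 * q) * np.exp(-r**2) * np.real(erf(q + 1j * r))
```

The Monte Carlo ensemble average and the closed form agree to within the sampling error of the ensemble.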


Jitter optical transfer function

The third term in Eq. (33) is the characteristic function [Ref. 25 (p. 115)] associated with the Gaussian density of ψ(t), so the jitter OTF is

OTFJ(ξ) = E{exp[−i2πξᵀψ(t)]} = exp(−2π²ξᵀΣJξ). (46)
This is the well-known blur model [Ref. 4 and Eq. (22) in Ref. 17]. Since the jitter OTF is real, the jitter MTF is MTFJ(ξ)=OTFJ(ξ).
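The characteristic-function step behind this Gaussian blur model can be verified numerically in one dimension; a sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_j = 0.1                                  # jitter dispersion (mm, assumed units)
xi = np.array([0.5, 1.0, 2.0])                 # spatial frequencies (cycles/mm, assumed)

# E[exp(-i 2 pi xi psi)] for zero-mean Gaussian psi equals exp(-2 pi^2 xi^2 sigma^2)
psi = rng.normal(0.0, sigma_j, size=500_000)
mc = np.array([np.exp(-2j * np.pi * x * psi).mean().real for x in xi])
analytic = np.exp(-2.0 * np.pi**2 * xi**2 * sigma_j**2)
```

The imaginary part of the sample mean vanishes (up to sampling error) because the jitter density is symmetric, which is why the jitter OTF is real.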


Comparison of Deterministic and Statistical Smear Optical Transfer Function

The statistical smear OTF is characterized to show how it behaves as a function of smear and smear dispersion (standard deviation of smear). For clarity, the characterization is shown for one frequency axis. Surface plots are also provided to illustrate the statistical smear OTF in two dimensions of spatial frequency.

Figure 2 shows two plots of the 2-D statistical smear OTF evaluated along one frequency dimension for various values of the mean smear s and the smear dispersion σS. The frequency axis is one-sided since the OTF is an even function.

Fig. 2

Statistical smear OTF (in one dimension of spatial frequency). (a) Mean smear s=0.2  mm and smear dispersion σS ranging from 0.02 to 2.0 mm. (b) Smear dispersion σS=0.2  mm and mean smear s ranging from 0.02 to 2.0 mm.


Figure 2(a) shows the statistical smear OTF for mean smear s=0.2  mm and smear dispersion σS ranging from 0.02 to 2.0 mm. The OTF converges to the sinc function, Eq. (43), as σS0, and converges to the statistical smear OTF, Eq. (44), as s0. The curve for σS=0 is almost indistinguishable from the curve for σS=0.02 and so is not shown. An interesting characteristic is that the curves essentially go up as the dispersion increases until the dispersion equals the mean smear, and then the curves go down as the dispersion increases further. The degradation of the statistical smear OTF is pronounced as the dispersion increases above the mean smear. The curves begin to look like the sinc function when the dispersion is less than about half the mean smear.

Figure 2(b) shows the statistical smear OTF for a smear dispersion σS=0.2  mm and mean smear s ranging from 0.02 to 2.0 mm. The curve for s=0 is almost indistinguishable from the curve for s=0.02 and so is not shown. The curves move left and down as the smear increases, again indicating worsening degradation of the image. The curves begin to look like the sinc function when the smear is greater than twice the dispersion, which is consistent with Fig. 2(a).

The statistical smear OTF in two dimensions of spatial frequency is shown in the contour plots in Fig. 3. (A three-dimensional mesh plot is difficult to show clearly, so it is omitted.) The statistical smear OTF in Fig. 3(a) was produced with mean smear sx=0.2, sy=0  mm and smear dispersion σx=σy=0.1  mm. Compare with Fig. 2(a). Although there is smear in the x direction, the average smear OTF is not a sinc function, although the response is a sinc function for each realization of smear in each image. Figure 3(b) was produced with mean smear sx=sy=0.2  mm and smear dispersion σx=0.02, σy=0.2  mm. Although the mean smear is along a +45  deg line, the sinc response is along the x axis, and the erf(q)/q response is essentially along a 45  deg line. Although this may seem counterintuitive, it is because of the large random smear in the y axis. Since the statistical smear OTF can change dramatically with changes in the parameters, one must be careful in making any general statements regarding the smear OTF. Nevertheless, the statistical smear OTF provides information about OTF performance that is not revealed by the deterministic smear OTF (the sinc function) for any choice of smear such as a “worst-case” smear.

Fig. 3

Statistical smear OTF in two dimensions of spatial frequency. (a) sx=0.2, sy=0, σx=σy=0.1  mm. (b) sx=sy=0.2, σx=0.02, σy=0.2  mm.


In Fig. 4(a), the statistical smear MTF (σS=4, s=0) tightly bounds the deterministic smear MTF (σS=0, s=4). It also bounds the deterministic smear MTF for s4, since the deterministic smear MTF is smaller for longer smear lengths.

Fig. 4

Comparison of statistical smear OTF with deterministic smear OTF and jitter OTF (one dimension of spatial frequency). (a) Statistical smear MTF (σS=4, s=0) and deterministic smear MTF (σS=0, s=4). (b) Statistical smear and jitter OTF comparison.


The statistical smear and jitter OTFs are shown in Fig. 4(b) for σS = 4 mm, s = 0, and σJ = 1 mm. From Eq. (60), the contribution of smear to the root-mean-square (RMS) attitude motion is σS/√12, or 1.15 for σS = 4. This is only slightly larger than the jitter in this example. Empirical evidence indicates that image quality tolerates degradation from smear better than from jitter. This is because the statistical smear OTF goes to zero slowly with increasing spatial frequency, whereas the jitter OTF goes to zero quickly.
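The contrast in roll-off can be seen directly from the 1-D forms of Eqs. (44) and (46); a sketch with the values used above (units assumed to be mm and cycles/mm):

```python
import numpy as np
from scipy.special import erf

xi = np.array([0.25, 0.5, 1.0])        # spatial frequencies, cycles/mm (assumed units)
sigma_s = 4.0                          # zero-mean smear dispersion, mm
sigma_j = sigma_s / np.sqrt(12.0)      # equal RMS pointing contribution, about 1.15 mm

q = np.pi * sigma_s * xi / np.sqrt(2.0)
otf_smear = np.sqrt(np.pi) / (2.0 * q) * erf(q)            # Eq. (44), s = 0
otf_jitter = np.exp(-2.0 * np.pi**2 * sigma_j**2 * xi**2)  # Eq. (46), 1-D
```

`otf_smear` decays roughly as 1/ξ while `otf_jitter` decays as a Gaussian, so at equal RMS pointing contribution the smear OTF remains far above the jitter OTF at high spatial frequency.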


Statistical Smear Line Spread Function

The deterministic smear LSF is a rectangle (boxcar) function. The rectangle function is shown in Fig. 5(a) for three values of smear. The statistical smear LSF describes the average LSF over an ensemble of random rectangle LSFs whose widths vary randomly from one image to another. The statistical smear LSF can be computed as the expected value of the random rectangle function. Alternatively, the statistical smear LSF can be obtained by computing the inverse Fourier transform of the statistical smear OTF, Eq. (39). The computation is facilitated by substituting Eq. (36) for the integrand in Eq. (38), switching the order of integration and inverse Fourier transform, and assuming that the random smear rate is Gaussian. The derivation is lengthy by either method, so details are omitted. We have derived the statistical smear LSF in one dimension for zero-mean Gaussian smear. The 1-D statistical smear LSF for zero-mean smear is

LSFS(x) = [1/(σS√(2π))] {−Ei[−2(x/σS)²]}, (47)

where x is the spatial distance in the image, σS is the smear dispersion, and Ei(u) is the exponential integral function.

Fig. 5

Random and statistical (average) smear LSF. (a) Samples of random smear LSFs. (b) Statistical smear LSF.


The statistical smear LSF is shown in Fig. 5(b) for values of σS from 0.2 to 2.0 mm. In comparison, the deterministic smear LSF for smear s is a rectangle function of width s and amplitude 1/s (a Dirac delta function for s=0). The statistical smear LSF has the required property that it has unit area, as does the deterministic smear LSF.

Gaussian random smears are concentrated around the mean, which is zero in Eq. (47); hence LSFS(x) is large there, and LSFS(x) → ∞ as x → 0. Large smears are infrequent, so LSFS(x) → 0 as x → ±∞. The LSFS(x) broadens and becomes lower near x = 0 as σS increases. For small σS, LSFS(x) is concentrated near x = 0 and approaches the Dirac delta function δ(x) as σS → 0. As can be seen in Eq. (47), σS scales the graph [Fig. 5(b)] of LSFS(x) in both axes.
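The unit-area property of Eq. (47) can be checked numerically. A sketch using `scipy.special.exp1`, since E1(u) = −Ei(−u) for u > 0, with an illustrative dispersion value:

```python
import numpy as np
from scipy.special import exp1          # E1(u) = -Ei(-u) for u > 0
from scipy.integrate import quad

def smear_lsf(x, sigma_s):
    """1-D statistical smear LSF for zero-mean Gaussian smear, Eq. (47)."""
    return exp1(2.0 * (x / sigma_s) ** 2) / (sigma_s * np.sqrt(2.0 * np.pi))

sigma_s = 0.5
half, _ = quad(smear_lsf, 0.0, 20.0 * sigma_s, args=(sigma_s,))
area = 2.0 * half                       # the LSF is even in x
```

The logarithmic singularity at x = 0 is integrable, so adaptive quadrature handles it, and the total area comes out to unity as required of an LSF.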

Table 1

Summary of image motion OTFs and LSF.

Displacement (deterministic): OTFD(ξ) = exp[−i2πξᵀp¯(t0)], Eq. (35)
Smear (statistical): OTFS(ξ) = (√π/2q) exp(−r²) Re[erfz(q + ir)], Eq. (39)
  q² = (1/2)(πT)²ξᵀΣRξ = (1/2)π²ξᵀΣSξ, Eqs. (40) and (41)
  r² = (ξᵀρ)²/(2ξᵀΣRξ) = (ξᵀs)²/(2ξᵀΣSξ), Eqs. (40) and (41)
Smear (statistical, s = 0): OTFS(ξ) = (√π/2q) erf(q), Eq. (44)
Smear (deterministic, ΣS = 0): OTFS(ξ) = sinc(πξᵀs), Eq. (43)
Jitter (statistical): OTFJ(ξ) = exp(−2π²ξᵀΣJξ), Eq. (46)
Smear LSF (statistical, s = 0): LSFS(x) = [1/(σS√(2π))]{−Ei[−2(x/σS)²]}, Eq. (47)


Pointing Covariance

The OTFs for displacement, smear rate, smear, and jitter in Table 1 are parameterized with means μ (mean displacement), ρ (mean smear rate), s (mean smear), and covariances ΣD (displacement covariance), ΣR (smear rate covariance), ΣS (smear covariance), and ΣJ (jitter covariance) in Eqs. (13)–(19). These are derived in Appendices B, C, and E. Covariance matrices for other performance metrics are also derived. These are the covariances ΣA, ΣSJ, ΣPS, and ΣWS of the relative pointing motion, smitter (the sum of smear and jitter), point-to-point stability (the relative motion at points in time separated by Ts seconds), and windowed stability (displacements separated by Ts seconds). These are defined in Appendices A, F, G, and H and discussed further in Sec. 4.1.

Although the covariance matrices can be computed directly from sampled motion data, that approach is computationally intensive and does not reveal which spectral content of the pointing motion contributes significantly to the covariance matrices, and hence to the image motion OTFs. The spectral content can also reveal which sources of relative pointing motion contribute most significantly to the covariance matrices and to the image motion OTFs. The basic idea19,20 is to compute the PSD S(ω) from the autocorrelation R(τ) of the relative pointing motion p(t). Expressions for the covariances in terms of the PSD are derived in Appendices A–H.

We assume only that p(t) is wide sense stationary during the exposure intervals. We can ignore pointing motion between exposure intervals that is not characteristic of the motion during the exposures (e.g., a slew between exposures). Any average displacement and trend over the ensemble of images should be removed so that a valid autocorrelation and PSD are computed. After subtracting the overall trend, the trend in each exposure can vary but average to zero over the ensemble. The average displacement and trend are added back into μ and s (or ρ) before computing the OTFs.

The autocorrelation of p(t) is the matrix

R(τ) = E{p(t + α) p^T(t + β)},

where τ = α − β. The Wiener–Khinchin theorem states that the PSD of p(t) is the Fourier transform of the autocorrelation function R(τ) of p(t),

S(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ. (49)

From the inverse Fourier transform, we obtain the autocorrelation in terms of the PSD,

R(τ) = (1/2π) ∫_{−∞}^{∞} S(ω) e^{jωτ} dω. (50)

Since p(t) is real, R(τ) is real and even, S(ω) is real and even, and we can write Eq. (50) as

R(τ) = (1/2π) ∫_{−∞}^{∞} S(ω) cos(ωτ) dω, (51)
and similarly for Eq. (49).

The PSD and autocorrelation are used in the Appendices to derive expressions for the covariance matrices ΣA, ΣD, ΣR, ΣS, ΣJ, ΣPS, and ΣWS. The covariances are computed from expressions of the form

ΣX = (1/2π) ∫_{−∞}^{∞} WX(ωT) S(ω) dω, (52)
where the subscript X is one of A, D, R, S, J, SJ, WS, and PS. The WX(ωT) in Eq. (52) are frequency domain weighting functions, which are derived in the Appendices and summarized in Table 2. The means mX are also defined in Table 2.

Table 2

Summary of covariances and corresponding frequency domain weighting functions.

Pointing measure | Mean | Covariance | Weighting function | Equation
Accuracy | μ | ΣA | WA(ωT) = 1 | Eq. (59)
Displacement | μ | ΣD | WD(ωT) = sinc²(ωT/2) = [sin(ωT/2)/(ωT/2)]² | Eq. (63)
Smear rate | ρ | ΣR | WR(ωT) = {[12/(ωT²)][sinc(ωT/2) − cos(ωT/2)]}² | Eq. (66)
Smear | s = Tρ | ΣS = T²ΣR | WS(ωT) = T²WR(ωT) | Eq. (68)
Jitter | 0 | ΣJ | WJ(ωT) = 1 − WD(ωT) − (T²/12)WR(ωT) = 1 − WD(ωT) − (1/12)WS(ωT) | Eq. (76)
Smitter | 0 | ΣSJ | WSJ(ωT) = 1 − WD(ωT) | Eq. (84)
Point-to-point stability | 0 | ΣPS | WPS(ωTs) = 2[1 − cos(ωTs)] = 4 sin²(ωTs/2) | Eq. (88)
Windowed stability | 0 | ΣWS | WWS(ωT, ωTs) = WD(ωT) WPS(ωTs) | Eq. (93)
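To make the weighting functions concrete, they can be transcribed into numerical form. The following is a minimal sketch (Python/NumPy assumed; the function names are ours), using the smear form WS = T²WR, which depends only on the product ωT:

```python
import numpy as np

def _sinc(x):
    # Unnormalized sinc: sin(x)/x with the limit sinc(0) = 1.
    return np.sinc(np.asarray(x) / np.pi)

def W_D(wT):
    """Displacement weighting (lowpass)."""
    return _sinc(wT / 2.0) ** 2

def W_S(wT):
    """Smear weighting W_S = T^2 W_R (bandpass); zero at wT = 0."""
    wT = np.asarray(wT, dtype=float)
    out = np.zeros_like(wT)
    nz = wT != 0.0
    x = wT[nz] / 2.0
    out[nz] = (12.0 / wT[nz] * (_sinc(x) - np.cos(x))) ** 2
    return out

def W_J(wT):
    """Jitter weighting (highpass): 1 - W_D - W_S/12."""
    return 1.0 - W_D(wT) - W_S(wT) / 12.0

def W_SJ(wT):
    """Smitter weighting: 1 - W_D."""
    return 1.0 - W_D(wT)

def W_PS(wTs):
    """Point-to-point stability weighting: 4 sin^2(wTs/2)."""
    return 4.0 * np.sin(wTs / 2.0) ** 2

def W_WS(wT, wTs):
    """Windowed stability weighting: W_D(wT) * W_PS(wTs)."""
    return W_D(wT) * W_PS(wTs)
```

The jitter row of the table is equivalent to the identity WD(ωT) + (1/12)WS(ωT) + WJ(ωT) = 1 at every frequency, which serves as a sanity check on an implementation.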

The covariance of the pointing motion is derived in Appendix A, Eq. (60), and is given by

ΣA = ΣD + (1/12)ΣS + ΣJ.

Similarly, the displacement, smear, and jitter weighting functions in Table 2 are such that

WD(ωT) + (1/12)WS(ωT) + WJ(ωT) = 1 = WA(ωT).
Smitter is the sum of smear and jitter, or equivalently the pointing motion with the displacement removed. The smitter covariance in Table 2 is the jitter covariance defined in previous works (Refs. 19–22) and is not used to compute the image motion MTFs in Table 1.


Weighting Functions

The displacement, smear, jitter, and smitter weighting functions in Table 2 are plotted in Fig. 6 with T = 1 second. The frequency content of the pointing motion contributes to the covariances, and hence to the OTFs, over certain ranges of frequency. The displacement weighting function is lowpass, so low-frequency pointing motion contributes to the displacement. The smear weighting function peaks near 0.7 Hz, is zero at 0 and 1.4 Hz, and exhibits smaller peaks at higher frequencies; it is essentially bandpass, so pointing motion over a certain range of frequencies contributes significantly to smear. The jitter weighting function is highpass. Large-amplitude pointing motion can be significant even at frequencies where the weighting function is small. The displacement, smear, and jitter weighting functions overlap, so the spectral content of the image motion at any frequency contributes to all three measures of image motion. The contribution of the pointing motion to displacement, smear, and jitter depends on the PSD of the pointing motion as well as on the weighting functions, so no arbitrary frequency regions can be associated with displacement, smear, and jitter.

Fig. 6

Displacement, smear, jitter, and smitter weighting functions (T=1).


The point-to-point stability covariance ΣPS measures the change in pointing from one instant to another. The stability weighting function WPS with Ts = 1 second, shown in Fig. 7, is a minimum at 0, 1, 2, … Hz and a maximum at 0.5, 1.5, 2.5, … Hz, so frequencies above 1/Ts contribute to the point-to-point stability. Point-to-point stability is called stability in Refs. 21 and 22. The point-to-point stability metric for spatial data accuracy of line scan data23 can be computed more efficiently by our method. However, the displacement, smear, and jitter metrics may be more appropriate.

Fig. 7

Point-to-point stability weighting WPS(ωTs) (Ts = 1).


The windowed stability covariance ΣWS measures the change in displacement from one image to another. The windowed stability weighting function WWS has two time parameters, the exposure time T and the time Ts between image center times. The windowed stability weighting function is plotted in Fig. 8(a) with T=1 and Ts=1. It is essentially a bandpass function and goes to zero at low and high frequencies for any choice of T and Ts. The windowed stability weighting function looks significantly different for various T and Ts, as exemplified by Fig. 8(b) where T=1 and Ts=4. Windowed stability is useful in image registration and to specify or evaluate performance for a frame-differencing camera.

Fig. 8

Windowed stability weighting WWS(ωT,ωTs). (a) T=1, Ts=1. (b) T=1, Ts=4.



Computation of the Power Spectrum and Covariances

A pointing performance analysis will typically produce both time-domain and frequency-domain data. Time-domain data is typically obtained from a time-domain simulator, and frequency-domain data is typically obtained from transfer functions driven by harmonic or white noise. Although the mean and covariance of displacement, smear, and jitter can be computed in either the time domain or in the frequency domain, their computation is most conveniently and efficiently performed in the frequency domain.

The main tool for computing the pointing covariances from uniformly sampled data is the FFT. The FFT of the sampled pointing motion pk = p(tk) is scaled to a power spectrum (not a density) by dividing it by M and then computing its magnitude squared, where M is the number of samples of data and is assumed to be a power of two, M = 2^n. (This assumption can be relaxed.) The power spectrum is then shifted (FFTSHIFT in Table 3) so that the zero-frequency line is at the center. The frequencies range from −(M/2)/(Mδ) to (M/2 − 1)/(Mδ) Hz in increments of 1/(Mδ) Hz, where δ is the sample interval in seconds. Note that the sum of the discrete power spectrum is equal to the second moment of the time-domain data,

Σ_{i=−M/2}^{M/2−1} P(ωi) = (1/M) Σ_{k=0}^{M−1} pk pk^T.
This serves as a useful check that the power spectrum is scaled correctly. The mean of pk should be subtracted out so that the computed accuracy and displacement covariances do not include the overall mean pointing motion.

Table 3

Summary of calculations for the power spectrum of pointing motion.

M = 2^n | length of data record
ω = 2π[−M/2 : 1 : M/2 − 1]/(Mδ) | frequency range (rad/s), δ = sample time (s)
P(ω) = |FFTSHIFT[FFT(p, M)]/M|² | power spectrum of p(tk), 0 ≤ k ≤ M − 1
ΣX = Σ_{i=−M/2}^{M/2−1} P(ωi) WX(ωiT) | WX denotes one of the weighting functions
R(ℓ) = (1/M) Σ_{k=0}^{M−1−ℓ} pk p^T_{k+ℓ}, 0 ≤ ℓ ≤ M − 1 | biased sample autocorrelation of p(tk)
P(ω) = FFTSHIFT[FFT(R, N)]/N, N = 2M + 1 | power spectrum from the biased autocorrelation

Since the power spectrum does not converge to the true spectrum with increasing M, the data should be segmented and the power spectra of the segments should be averaged. Alternatively, the power spectrum can be computed from the biased, and possibly windowed, sample autocorrelation function R(ℓ) of pk. A detailed discussion of the computation of the power spectrum is beyond the scope of this paper; the reader is referred to Ref. 36 or one of many books on spectral analysis.

In the frequency domain, the pointing covariances are evaluated by computing the weighting functions at each frequency point, multiplying by the power spectrum of the pointing motion at each frequency, and then summing the terms. This computational algorithm is summarized in Table 3. The power spectrum can be computed using only non-negative frequencies, in which case the zero-frequency term is multiplied by one and the positive-frequency terms are multiplied by two in the summation. Once the power spectrum is computed, the covariance matrix ΣX corresponding to one of the weighting functions WX(ωT) in Table 2 is easily computed.
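The Table 3 recipe can be sketched as follows (Python/NumPy assumed; the function name, array layout, and the einsum-based 2×2 spectrum are our choices, not from the paper):

```python
import numpy as np

def pointing_covariance(p, delta, T, weight):
    """Compute a pointing covariance matrix from sampled motion per
    Table 3.  p is an (M, 2) array of pointing samples, delta the
    sample time in seconds, T the exposure time, and weight one of
    the W_X weighting functions of Table 2 (passed in by the caller)."""
    M = p.shape[0]
    p = p - p.mean(axis=0)                      # remove the overall mean
    F = np.fft.fftshift(np.fft.fft(p, axis=0), axes=0) / M
    w = 2.0 * np.pi * np.arange(-M // 2, M // 2) / (M * delta)  # rad/s
    # 2x2 power spectrum P(w_i) = F_i F_i^H at each frequency line
    P = np.einsum("ia,ib->iab", F, F.conj()).real
    W = weight(w * T)                           # scalar weight per line
    return np.einsum("i,iab->ab", W, P)
```

With the unit (accuracy) weighting, the result reduces, by Parseval's relation, to the sample second-moment matrix of the mean-removed data, which is the scaling check noted above.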


Pointing Covariance from Relative Motion Covariance

In Sec. 4.2, the pointing motion is computed from the relative translation X(t) and relative attitude θ(t) (a small-angle representation) by using the camera model in Eq. (1). The 2×2 power spectrum and covariance matrices are then computed from the pointing motion p(t).

An alternative approach is to first compute the 3×3 power spectrum SX of the relative translation and the 3×3 power spectrum Sθ of the relative attitude motion. At this point, there are two paths to compute the covariance matrices. One path is to apply sensitivity matrices to map the power spectra of the relative motions into the power spectrum of the pointing vector by

S(ω) = CX SX(ω) CX^T + Cθ Sθ(ω) Cθ^T, (54)

where the 2×3 sensitivity matrices CX and Cθ are

CX = ∂c(X, q)/∂X |_(X0, q0), Cθ = ∂c(X, q)/∂θ |_(X0, q0).
The 3×1 vectors X0 and q0 are the nominal relative translation and attitude, and c is the camera model in Eq. (1). The covariance matrices are then computed from the power spectrum S(ω) in Eq. (54). This approach is computationally intensive because the mapping matrices have to be applied to each frequency component of the power spectra. Furthermore, the attitude motion may be simulated at a higher sample rate than the translational motion due to typically higher spectral bandwidth of the attitude motion.

A computationally more efficient path is to compute the 3×3 covariance matrices ΣX and Σθ corresponding to SX(ω) and Sθ(ω) by using the formulas in Table 2. These covariance matrices are then mapped into the pointing covariances by an equation of the form

Σ = CX ΣX CX^T + Cθ Σθ Cθ^T.
This may be the preferred approach, since the control system designer will simulate the relative translation and attitude motions but may not have details of the camera model. In addition, image motion on multiple focal planes can be evaluated from the same set of power spectra and covariance matrices by applying multiple mapping matrices. Another advantage of this approach is that the contribution of the relative translation and relative attitude motions to the displacement, smear, and jitter can be evaluated. An optical sensitivity matrix for the James Webb Space Telescope (JWST), formerly called the Next Generation Space Telescope, is presented in Refs. 37 and 38.
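The covariance-mapping path can be sketched as follows (Python/NumPy assumed; the function and variable names are illustrative, and uncorrelated translational and attitude motions are assumed, so no cross terms appear):

```python
import numpy as np

def pointing_cov_from_relative_motion(C_X, C_theta, Sigma_X, Sigma_theta):
    """Map 3x3 translation and attitude covariances into a 2x2 pointing
    covariance through the 2x3 sensitivity matrices.  Assumes the
    translational and attitude motions are uncorrelated; correlated
    motions would add cross terms of the same congruence form."""
    return C_X @ Sigma_X @ C_X.T + C_theta @ Sigma_theta @ C_theta.T
```

The same pair of covariance matrices can be mapped through several sensitivity-matrix pairs to evaluate image motion on multiple focal planes, as noted above.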


Pointing Performance Analysis

Figure 9 shows the pointing control system for an optical payload on an imaging vehicle. In the case of a spacecraft, the system model comprises models of attitude sensors, actuators, fuel slosh, a solar array drive, internal disturbances, and the optical system, all connected to appropriate nodes of a reduced-order Nastran model comprising rigid-body and flexible-body modes and mode shapes. The control loop is closed through the attitude controller. The attitude command reference input is a disturbance since it can excite structural and slosh modes, and the command itself may be subject to error (e.g., scan rate error or tracking rate error). Similar integrated modeling approaches are found in Refs. 38 and 39. An overview of modeling and analysis is given in Ref. 44.

Fig. 9

Pointing simulation and analysis.


As suggested in Ref. 19 (pp. 21–22) and Ref. 20 (pp. 573–574), the weighting functions can be approximated by linear transfer functions for use in control system analysis and synthesis. Standard state-space methods can then be applied to calculate the covariance matrices. A state-space solution that avoids having to compute the weighted FFT is presented in Refs. 45 and 46 but would have to be modified for our model of pointing motion to include smear and a different jitter weighting function.

Analysis of pointing performance is often faster and numerically more reliable (due to time scales) if the system response to disturbances is computed directly in the frequency domain from a linear or linearized closed-loop transfer function rather than in the time domain from a simulator. A time domain simulator can of course capture nonlinear and time-varying effects. The response of a system to high-frequency noise and disturbance is most accurately and efficiently computed in the frequency domain. For stochastic sources, the power spectrum can be computed directly by using standard state-space covariance methods. Once the power spectrum S(ω) of p(t) is computed, it is a trivial effort to compute the covariances, as discussed in Sec. 4.2. Segments of the pointing motion pertaining to nonimaging attitude motions have to be eliminated if they are not representative of the motion during the exposure interval.

In a linear or linearized system, the covariance of the pointing motion from individual noise, disturbance, and other sources can be computed individually and added to obtain the total pointing covariance. The individual contributions can then be ranked so that the greatest offenders can be identified. The power spectrum may be computed as a combination of a system frequency response, an FFT of the autocorrelation of time-series data, discrete spectral lines due to harmonic disturbance sources, and stochastic sources such as sensor noise. The pointing motion from each source can be computed at different sample rates or frequency resolutions, though the sample rate should be high enough and frequency resolution small enough to accurately represent the high-frequency responses of the system and so that numerical errors in the computed covariance matrices are not significant. Similarly, time-domain data from different simulations do not have to be resampled to a common sample rate.
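The additivity of per-source covariances in a linear system can be checked with a toy example (Python/NumPy assumed; the gain matrices are arbitrary illustrations, not from the paper):

```python
import numpy as np

# In a linear system the pointing covariance from independent sources
# adds: Sigma_total = sum_i Sigma_i.  A statistical check with two
# independent white-noise sources driving illustrative gain matrices.
rng = np.random.default_rng(1)
N = 200_000
G1 = np.array([[1.0, 0.2], [0.0, 0.5]])   # hypothetical source gains
G2 = np.array([[0.3, 0.0], [0.1, 1.0]])
n1 = rng.standard_normal((N, 2)) @ G1.T   # pointing motion from source 1
n2 = rng.standard_normal((N, 2)) @ G2.T   # pointing motion from source 2
p = n1 + n2                                # total pointing motion
Sigma1 = n1.T @ n1 / N
Sigma2 = n2.T @ n2 / N
Sigma = p.T @ p / N                        # approx. Sigma1 + Sigma2
```

Ranking Sigma1 and Sigma2 by their traces identifies the greater offender, which is the ranking procedure described above.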



Two-dimensional statistical image motion OTFs for the displacement, smear, and jitter components of image motion are derived. The line spread function (LSF) for zero-mean random smear is also derived. The statistical smear OTF measures the average optical system performance for an ensemble of images subject to nonzero-mean Gaussian random smear. In comparison, the deterministic (sinc function) smear OTF measures performance for a specified smear length. The familiar Gaussian jitter OTF is also a statistical OTF.

Limiting cases for the statistical smear OTF are given: (1) fixed nonzero mean smear and diminishing smear dispersion, and (2) diminishing mean smear and fixed nonzero smear dispersion. In the first case, the statistical smear OTF converges to a sinc function (the well-known deterministic smear OTF), and in the second case it converges to the √π erf(q)/(2q) function. The statistical smear OTF begins to resemble the sinc function when the mean smear exceeds about twice the dispersion in the smear. For equal RMS attitude motion due to zero-mean random smear and jitter, the statistical smear OTF is greater than the jitter OTF at higher spatial frequencies. This corroborates the empirical observation that optical systems tolerate smear better than jitter.

The statistical OTFs are parameterized by means and covariances of the displacement, smear, and jitter components of pointing motion, with spatial frequency as the independent variable. The covariances are computed accurately and efficiently from a temporal-frequency-weighted power spectrum of the LOS pointing motion. The weighting functions are parameterized with only the exposure time. Essentially, the displacement weighting function is low pass, the smear weighting function is bandpass, and the jitter weighting function is highpass. These frequency regions overlap, so the spectral content of the image motion at any frequency contributes to all three measures of image motion; therefore, there are no arbitrary frequency regions associated with displacement, smear, and jitter. By examining the weighted power spectrum, a control system engineer can determine the temporal frequencies where the sensitivity of the OTFs to pointing motion is greatest. The control system design engineer can then focus on the most significant disturbance sources or frequencies, which can lead directly to improvements in the design of the pointing control system and in the design of the optical system. Because covariances are additive, individual disturbance sources can be analyzed to determine their relative contributions to the displacement, smear, and jitter OTFs. The weighting functions can also be used in control system synthesis to optimize a controller. The statistical OTFs and the method for determining their parameters are a basis for integrated modeling and multidisciplinary analysis and simulation.

In addition to the image motion OTFs and their associated means, covariances, and weighting functions, point-to-point stability and windowed stability are defined and formulas for the corresponding covariance matrices are derived. Point-to-point stability measures the change in pointing from one instant of time to another. Windowed stability measures the change in displacement from one image to the next.


Appendix A:

Pointing Covariance

The pointing (accuracy) covariance ΣA is the covariance of p(t) and is computed from

ΣA = E{p(t) p^T(t)} = R(0) = (1/2π) ∫_{−∞}^{∞} S(ω) dω.

For consistency with other measures of pointing motion, we write the integral as

ΣA = (1/2π) ∫_{−∞}^{∞} WA(ωT) S(ω) dω,

where

WA(ωT) = 1

is the accuracy weighting function. From Eq. (43) and Eq. (74) we have

ΣA = ΣD + (1/12)ΣS + ΣJ.
The (1/12)ΣS term is the contribution to pointing covariance from the smear component of pointing motion.

Appendix B:

Displacement Covariance

The displacement and displacement variance were originally derived in Refs. 19 and 20. We have written the definition of the displacement in a different but equivalent form in Eq. (8), so it is instructive to rederive the displacement covariance using our definition of the displacement. The steps involved are similar to those in Refs. 19 and 20. From Eqs. (8) and (50), we obtain the displacement covariance,

ΣD = E{p̄(t0) p̄^T(t0)} = (1/T²) ∫_{−T/2}^{T/2} ∫_{−T/2}^{T/2} R(α − β) dα dβ.

Since the pointing error is assumed to be a wide sense stationary process, the autocorrelation is independent of t0, and so the displacement metric is valid for all t0. The displacement covariance can be written as

ΣD = (1/2π) ∫_{−∞}^{∞} WD(ωT) S(ω) dω,

where

WD(ωT) = sinc²(ωT/2)

is the displacement weighting function.

Appendix C:

Smear and Smear Rate Covariance

The smear rate covariance is obtained by substituting the smear rate from Eq. (11) into Eq. (48) and then by using Eq. (50), whence

ΣR = E{v̄(t0) v̄^T(t0)} = (144/T⁶) ∫_{−T/2}^{T/2} ∫_{−T/2}^{T/2} αβ R(α − β) dα dβ.

Since the pointing error is assumed to be a wide sense stationary process, the autocorrelation is independent of t0, and so the smear rate metric is valid for all t0. The smear rate covariance can be written as

ΣR = (1/2π) ∫_{−∞}^{∞} WR(ωT) S(ω) dω,

where

WR(ωT) = {[12/(ωT²)][sinc(ωT/2) − cos(ωT/2)]}²

is the smear rate weighting function.

The smear was defined as s = Tρ, where ρ is the average smear rate. The smear covariance ΣS is given by

ΣS = T²ΣR,

and the corresponding smear weighting function is

WS(ωT) = T²WR(ωT).

Appendix D:

Correlation of Displacement and Smear Rate

Here, we show that E{p̄(t0) v̄^T(t0)} = 0. This result is used in the derivation of Eq. (72):

E{p̄(t0) v̄^T(t0)} = −(j/2π) ∫_{−∞}^{∞} [12/(ωT²)] sinc(ωT/2) [sinc(ωT/2) − cos(ωT/2)] S(ω) dω = 0.

The integrand is an odd function of ω, so the integral is zero. The integrand is also purely imaginary, but the left side of the equation is real, so the integral evaluates to zero.
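A quick Monte Carlo check of this result (our own sketch in Python/NumPy, not from the paper): for white, wide-sense-stationary samples on a window symmetric about its center, the least-squares slope weights sum to zero, so the sample correlation between the window average and the slope estimate is near zero:

```python
import numpy as np

# Numerical check that the displacement (window average) and the
# least-squares smear rate (window slope) are uncorrelated for a
# WSS input sampled symmetrically about the window center.
rng = np.random.default_rng(2)
Nrun, M = 100_000, 16
t = np.linspace(-0.5, 0.5, M)                 # exposure window T = 1
p = rng.standard_normal((Nrun, M))            # white WSS pointing motion
p_bar = p.mean(axis=1)                        # displacement estimate
v_bar = (p * t).sum(axis=1) / (t * t).sum()   # least-squares rate
cross = np.mean(p_bar * v_bar)                # should be near zero
```

The analytic covariance is exactly zero here because the slope weights t_k sum to zero; the sample estimate differs only by O(1/√Nrun) fluctuation.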

Appendix E:

Jitter Covariance

The mean-square jitter over the interval I(t0) is given by

(1/T) ∫_{−T/2}^{T/2} ψ(t0 + α) ψ^T(t0 + α) dα.

The jitter covariance is the expected value of the average square jitter over I(t0):

ΣJ = E{(1/T) ∫_{−T/2}^{T/2} ψ(t0 + α) ψ^T(t0 + α) dα}.
Since ψ(t0 + α) is zero mean as a result of the least-squares minimization, we omit the means μ and ρ from the derivation, since they drop out. We also use the fact that E{p̄(t0) v̄^T(t0)} = 0, which is shown in Appendix D. Now substitute for ψ(t0 + α) from Eq. (4) and carry out the expectation using the definitions of ΣA, ΣD, and ΣR:

ΣJ = (1/T) ∫_{−T/2}^{T/2} E{ψ(t0 + α) ψ^T(t0 + α)} dα = ΣA − ΣD − (1/T) ∫_{−T/2}^{T/2} α² dα ΣR.

Finally we have an expression for the jitter covariance,

ΣJ = ΣA − ΣD − (T²/12)ΣR.

Substitute from Eq. (14) to write the jitter covariance as

ΣJ = ΣA − ΣD − (1/12)ΣS.

Now substitute Eqs. (58), (59), (62), (63), (65), (66), (67), and (68) into Eq. (74) to obtain the jitter covariance in terms of S(ω):

ΣJ = (1/2π) ∫_{−∞}^{∞} [1 − WD(ωT) − (1/12)WS(ωT)] S(ω) dω = (1/2π) ∫_{−∞}^{∞} WJ(ωT) S(ω) dω.


Appendix F:

Smitter Covariance

Jitter was formerly defined in Refs. 20–22 as

φ(t0 + α) = p(t0 + α) − p̄(t0),

or equivalently

φ(t0 + α) = p(t0 + α) − (1/T) ∫_{−T/2}^{T/2} p(t0 + β) dβ.
It is easy to show that p̄(t0) can be obtained from the least-squares minimization of

J(p̄) = ∫_{−T/2}^{T/2} [p(t0 + α) − p̄]^T [p(t0 + α) − p̄] dα,

which yields the same expression for p̄(t0) as in Eq. (8). From Eq. (78) and Eq. (3), we have

φ(t0 + α) = v̄(t0)α + ψ(t0 + α).
Thus, the former jitter defined in Refs. 20–22 is the sum of smear and jitter, which is termed "smitter." Because smear and jitter affect the image motion OTF differently, the former definition of jitter is less useful than the present definition.

The mean square smitter over the interval I(t0) is

(1/T) ∫_{−T/2}^{T/2} φ(t0 + α) φ^T(t0 + α) dα.

The smitter covariance is

ΣSJ = E{(1/T) ∫_{−T/2}^{T/2} φ(t0 + α) φ^T(t0 + α) dα} = ΣA − ΣD.

The smitter covariance ΣSJ in Eq. (82) is analogous to the jitter variance defined in Refs. 20–22.

Substitute Eqs. (58), (59), (62), and (63) into Eq. (82) to obtain the smitter covariance in terms of the PSD S(ω):

ΣSJ = (1/2π) ∫_{−∞}^{∞} [1 − WD(ωT)] S(ω) dω = (1/2π) ∫_{−∞}^{∞} WSJ(ωT) S(ω) dω.

Appendix G:

Point-to-Point Stability Covariance

The change in the LOS pointing over an interval of length Ts is given by

Δp(t) = p(t) − p(t − Ts).

The point-to-point stability covariance measures the mean square change in pointing from one instant to another and is given by the second-order structure function

ΣPS(Ts) = E{[p(t) − p(t − Ts)][p(t) − p(t − Ts)]^T} = 2R(0) − R(Ts) − R^T(Ts).
These equations suggest two ways of computing ΣPS in the time domain, either by a time average or by way of the autocorrelation. In the frequency domain, the point-to-point stability covariance is obtained by substituting Eq. (50) or Eq. (51) into Eq. (71):

ΣPS(Ts) = (1/2π) ∫_{−∞}^{∞} WPS(ωTs) S(ω) dω,

where WPS(ωTs) is the stability weighting function

WPS(ωTs) = 2[1 − cos(ωTs)] = 4 sin²(ωTs/2).
An obvious characteristic of the stability weighting function WPS(ωTs) is that it does not roll off at high frequency. This is because p(t) and p(t − Ts) are evaluated at instantaneous points in time. A useful fact is that WPS(ωTs) ≤ 4, so ΣPS(Ts) ≤ 4ΣA, and in fact ΣPS(Ts) < 4ΣA. Therefore, if 4ΣA is less than the stability requirement, no further analysis of stability is needed. These statements hold if the trace is applied to each side of the inequality. (For matrices A and B, A ≤ B means B − A has non-negative eigenvalues.)
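The bound can be verified numerically (a Python/NumPy sketch of ours, not from the paper): since WPS(ωTs) = 4 sin²(ωTs/2) never exceeds 4, the frequency-weighted integral against any non-negative spectrum is bounded by 4 times the unweighted (accuracy) integral.

```python
import numpy as np

# The stability weighting is bounded by 4 and attains that bound,
# which gives Sigma_PS(Ts) <= 4 * Sigma_A for any PSD.
w_Ts = np.linspace(0.0, 100.0, 100_001)       # dense grid of omega*Ts
W_PS = 4.0 * np.sin(w_Ts / 2.0) ** 2          # stability weighting
```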

Appendix H:

Windowed Stability Covariance

There may be a requirement on the change in displacement of one image compared with a subsequent image. The change in displacement over an interval of length Ts is given by

Δp̄(t) = p̄(t) − p̄(t − Ts).
The windowed stability covariance measures the mean square change in displacement and is given by the second-order structure function

ΣWS(T, Ts) = E{[p̄(t) − p̄(t − Ts)][p̄(t) − p̄(t − Ts)]^T} = 2Rp̄(0) − Rp̄(Ts) − Rp̄^T(Ts).
The autocorrelation Rp̄(τ) = E{p̄(t) p̄^T(t − τ)} of p̄(t) is most easily obtained from the inverse Fourier transform of the PSD of p̄(t),

Rp̄(τ) = (1/2π) ∫_{−∞}^{∞} WD(ωT) S(ω) e^{jωτ} dω.
Substituting Eq. (91) into Eq. (90) yields the windowed stability covariance

ΣWS(T, Ts) = (1/2π) ∫_{−∞}^{∞} WWS(ωT, ωTs) S(ω) dω,

where

WWS(ωT, ωTs) = WD(ωT) WPS(ωTs)

is the windowed stability weighting function. The windowed stability weighting function is shown in Fig. 8. The presence of WD(ωT) in Eq. (93) causes the weighting to go to zero as the frequency increases.


This work is the result of independent research by the authors. Financial support for publication was provided by the Harris Corporation and by Aerospace Control Systems LLC. The authors especially thank the reviewers for their insightful comments, which considerably improved this paper.


1. J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics, John Wiley & Sons, New York (1978). Google Scholar

2. M. Born and E. Wolf, Principles of Optics, 2nd ed., Pergamon Press, Oxford (1964). Google Scholar

3. N. Jensen, Optical and Photographic Systems, pp. 116–124, John Wiley and Sons, Inc., New York (1968). Google Scholar

4. R. M. Scott, “Contrast rendition as a design tool,” Photogr. Sci. Eng. 3(5), 201–209 (1959).PSENAC0031-8760 Google Scholar

5. T. Trott, “The effects of motion on resolution,” Photogramm. Eng. J. Am. Soc. Photogramm. 26, 819–827 (1960). Google Scholar

6. M. D. Rosenau, “Parabolic image motion,” Photogramm. Eng. J. Am. Soc. Photogramm. 27(3), 421–427 (1961). Google Scholar

7. R. V. Shack, “The influence of image motion and shutter operation on the photographic transfer function,” Appl. Opt. 3(10), 1171–1181 (1964).APOPAI0003-6935 http://dx.doi.org/10.1364/AO.3.001171 Google Scholar

8. S. C. Som, “Analysis of the effect of linear smear on photographic images,” J. Opt. Soc. Am. 61, 859–864 (1971).JOSAAH0030-3941 http://dx.doi.org/10.1364/JOSA.61.000859 Google Scholar

9. G. C. Holst, Electro-Optical Imaging System Performance, 5th ed., Vol. PM187, SPIE Press, Bellingham, Washington (2008). Google Scholar

10. D. Wulich and N. S. Kopeika, “Image resolution limits resulting from mechanical vibration,” Opt. Eng. 26, 266529 (1987). http://dx.doi.org/10.1117/12.7974110 Google Scholar

11. S. Rudoler et al., “Image resolution limits resulting from mechanical vibrations, Part II: experiment,” Opt. Eng. 30(5), 577–589 (1991). http://dx.doi.org/10.1117/12.55843 Google Scholar

12. O. Hadar, M. Fisher and N. S. Kopeika, “Image resolution limits resulting from mechanical vibrations. Part III: numerical calculation of modulation transfer function,” Opt. Eng. 31(3), 581–589 (1992). http://dx.doi.org/10.1117/12.56084 Google Scholar

13. O. Hadar, I. Dror and N. S. Kopeika, “Image resolution limits resulting from mechanical vibrations. Part IV: real-time numerical calculation of optical transfer functions and experimental verification,” Opt. Eng. 33(2), 566–578 (1994). http://dx.doi.org/10.1117/12.153186 Google Scholar

14. O. Hadar, S. R. Rotman and N. S. Kopeika, “Target acquisition modeling of forward-motion considerations for airborne reconnaissance over hostile territory,” Opt. Eng. 33(9), 3106–3117 (1994). http://dx.doi.org/10.1117/12.177485 Google Scholar

15. N. S. Kopeika, A System Engineering Approach to Imaging, Vol. PM38, SPIE Press, Bellingham, Washington (1998). Google Scholar

16. A. Stern and N. S. Kopeika, “Analytical method to calculate optical transfer functions for image motion and vibrations using moments,” J. Opt. Soc. Am. 14(2), 388–396 (1997).JOSAAH0030-3941 http://dx.doi.org/10.1364/JOSAA.14.000388 Google Scholar

17. J. F. Johnson, “Modeling imager deterministic and statistical modulation transfer functions,” Appl. Opt. 32(32), 6503–6513 (1993).APOPAI0003-6935 http://dx.doi.org/10.1364/AO.32.006503 Google Scholar

18. S. L. Smith et al., “Understanding image quality losses due to smear in high-resolution remote sensing imaging systems,” Opt. Eng. 38(5), 821–826 (1999). http://dx.doi.org/10.1117/1.602054 Google Scholar

19. S. W. Sirlin and A. M. San Martin, “A new definition of pointing stability,” in JPL Engineering Memorandum EM 343-1189, Jet Propulsion Laboratory, Pasadena, California (1990). Google Scholar

20. R. L. Lucke et al., “New definitions of pointing stability: AC and DC effects,” AAS J. Astronaut. Sci. 40(4), 557–576 (1992). Google Scholar

21. M. E. Pittelkau, “Definitions, metrics, and algorithms for displacement, jitter, and stability,” in Advances in the Astronautical Sciences, AAS/AIAA Astrodynamics Specialist Conf., Big Sky, MT, 3–7 Aug 2003, Vol. 116, pp. 901–920, Part II, Paper No. AAS 03-559 (2003). Google Scholar

22. M. E. Pittelkau, “Definitions, metrics, and algorithms for displacement, jitter, and stability,” in Flight Mechanics Symp., NASA Goddard Space Flight Center, NASA/CP-2003-212246 (2003). Google Scholar

23. T. P. Ager, An Analysis of Metric Accuracy Definitions and Methods of Computation, p. 13, NIMA InnoVision, Springfield, Virginia (2002). Google Scholar

24. M. E. Pittelkau and W. G. McKinley, “Pointing error metrics: displacement, smear, jitter, and smitter with application to image motion MTF,” in AIAA Guidance, Navigation, and Control Conf., 13–16 Aug 2012, Minneapolis, MN, p. 19, Paper No. AIAA-2012-4869. Google Scholar

25. A. Papoulis, Probability, Random Variables, and Stochastic Processes, 2nd ed., McGraw-Hill, New York, New York (1984). Google Scholar

26. I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 7th ed., Elsevier, Boston, Massachusetts (2007). Google Scholar

27. E. W. Ng and M. Murray Geller, “A table of integrals of the error functions,” J. Res. Natl. Bur. Stand. B. Math. Sci. 73B(1) (1969). http://dx.doi.org/10.6028/jres.073B.001 Google Scholar

28. E. W. Weisstein, “Erf.,” From MathWorld–A Wolfram Web Resource,  http://mathworld.wolfram.com/Erf.html and “Erfi,” Wolfram Research,  http://functions.wolfram.com/GammaBetaErf/Erfi/19/ShowAll.html (26 August 2015). Google Scholar

29. M. Abramowitz and I. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, §7 Error Functions and Fresnel Integrals, Applied Mathematics Series, Vol 55, p. 297, National Bureau of Standards, U.S. Government Printing Office, Washington, D.C. (1972). Google Scholar

30. N. M. Temme, NIST digital library of mathematical functions, “Section 7: error functions, dawson’s and Fresnel integrals,” Version 1.0.10, 2015, Department MAS, Amsterdam, The Netherlands, Source code and documentation are available at  http://dlmf.nist.gov/7 (26 August 2015). Google Scholar

31. K. Johnson, “Complex Erf (error function), Fresnel integrals,” 2011, Source code available from Matlab Central at  http://www.mathworks.com/matlabcentral/fileexchange/33577-complex-erf-error-function-fresnel-integrals (22 February 2013). Google Scholar

32. S. G. Johnson, “Faddeeva package: complex error functions,” 2012, C++ and MEX source code available from Matlab Central at  http://www.mathworks.com/matlabcentral/fileexchange/38787-faddeeva-package-complex-error-functions;  http://ab-initio.mit.edu/Faddeeva (16 February 2013). Google Scholar

33. M. Leutenegger, “Error function of complex numbers,” 2008, Source code available from Matlab Central at  http://www.mathworks.com/matlabcentral/fileexchange/18312-error-function-of-complex-numbers (16 February 2013). Google Scholar

34. I. A. Stegun and R. Zucker, “Automatic computing methods for special functions. IV. Complex error function, Fresnel integrals, and other related functions,” J. Res. Nat. Bur. Stand. 86(6), 661–686 (1981). http://dx.doi.org/10.6028/jres.086.031 Google Scholar

35. S. M. Abrarov and B. M. Quine, “Efficient algorithmic implementation of the Voigt/complex error function based on exponential series approximation,” Appl. Math. Comput. 218, 1894–1902 (2011).AMHCBQ0096-3003 http://dx.doi.org/10.1016/j.amc.2011.06.072 Google Scholar

36. S. M. Kay and S. L. Marple, “Spectrum analysis—a modern perspective,” Proc. IEEE 69(11), 1380–1419 (1981).IEEPAD0018-9219 http://dx.doi.org/10.1109/PROC.1981.12184 Google Scholar

37. O. de Weck, Singular Value Decomposition for NGST Optics, p. 20, Massachusetts Institute of Technology, MIT Space Systems Laboratory, Cambridge, Massachusetts, Memorandum MIT-SSL-NGST-98-3 (1998). Google Scholar

38. O. de Weck and D. W. Miller, Integrated Modeling and Dynamics Simulation for the Next Generation Space Telescope, Massachusetts Institute of Technology, Cambridge, Massachusetts Document Number SSL 5–99 (1999). Google Scholar

39. G. Mosier et al., “Fine pointing control for a next generation space telescope,” Proc. SPIE 3356, 1070 (1998).PSISDG0277-786X http://dx.doi.org/10.1117/12.324507 Google Scholar

40. O. de Weck et al., “Integrated modeling and dynamics simulation for the next generation space telescope (NGST),” Proc. SPIE 4013, 920 (2000).PSISDG0277-786X http://dx.doi.org/10.1117/12.393964 Google Scholar

41. G. Mosier et al., Dynamics and Controls, NGST Presentation, NASA Goddard Space Flight Center, Greenbelt, Maryland (1998). Google Scholar

42. K. C. Liu et al., “Jitter test program and on-orbit mitigation strategies for solar dynamic observatory,” in 20th Int. Symp. on Space Flight Dynamics, p. 16, NASA Goddard Space Flight Center, Greenbelt, Maryland (2007). Google Scholar

43. “STEREO guidance and control system specification,” The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, NASA Contract NAS5-97271, FSCM No. 88898, Drawing No. 7381–9310 (2003). Google Scholar

44. M. Santina et al., “Line-of-sight pointing and control of electro-optical space systems – an overview,” in Advances in the Astronautical Sciences, Guidance and Control 2011, AAS Guidance and Control Conf., 4–9 February 2011, Breckenridge, CO, Vol. 141, pp. 213–235, Paper No AAS-11-041 (2011). Google Scholar

45. D. S. Bayard, “State-space approach to computing spacecraft pointing jitter,” J. Guid. Control Dyn. 27(3), 426–433 (2004). http://dx.doi.org/10.2514/1.2783 Google Scholar

46. D. S. Bayard, “A simple analytic method of computing instrument pointing jitter,” JPL New Technology Report NPO-30525, NASA Tech Brief Vol. 27, No. 1. JPL Internal Document JPL D-19967, Jet Propulsion Laboratory, Pasadena, California (2000). Google Scholar


Mark E. Pittelkau received his BS and PhD degrees from Tennessee Technological University in 1981 and 1988, and his MS degree from Virginia Polytechnic Institute in 1983, all in electrical engineering. His work has been in spacecraft guidance, navigation, and control. He has designed and analyzed attitude determination and control systems for precision-pointing imaging and science spacecraft.

William G. McKinley received his BS degree in physics magna cum laude from Arizona State University in 1971 and his MS degree in optical science from the University of Arizona in 1975. In his optical career, he has worked at Kodak, Goodyear, TRW, ITEK, Goodrich Aerospace, and Harris Corporation. He has been engaged in the creation and analysis of all types and aspects of optical systems and the processing and evaluation of optical system data products.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Mark E. Pittelkau and William G. McKinley, "Optical transfer functions, weighting functions, and metrics for images with two-dimensional line-of-sight motion," Optical Engineering 55(6), 063108 (17 June 2016). https://doi.org/10.1117/1.OE.55.6.063108
