Limiting resolution is a simple metric that describes the ability of an imaging system to distinguish small details of an object. It is normally determined subjectively from the smallest resolvable group and element of a resolution target, such as the USAF 1951 target, or analytically from the modulation transfer function (MTF) of the system. Although limiting resolution has limitations, it provides a quick, low-complexity method for establishing the performance of an imaging system. Various factors affect limiting resolution, such as the optical performance of the system and sensor noise, both temporal and spatial. Evaluating the resolution performance of full motion video (FMV) introduces uncertainty in limiting resolution due to the temporal variation of the system. In a high-performance FMV system, where the modulation associated with the limiting resolution is small, the limiting resolution can vary greatly from frame to frame. This paper explores how limiting resolution is measured and the factors that affect its uncertainty in FMV systems, and provides real-world examples from airborne video.
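The analytic route described above, reading the limiting resolution off the MTF, can be sketched as follows. This is an illustrative snippet, not code from the paper: the 5% contrast threshold and the Gaussian-blur MTF are assumptions chosen only to make the example concrete.

```python
import numpy as np

def limiting_resolution(freqs, mtf, threshold=0.05):
    """Return the first spatial frequency at which the MTF falls below
    a contrast threshold, interpolating between bracketing samples.
    The threshold value is an assumed task-dependent contrast limit."""
    below = np.where(mtf < threshold)[0]
    if below.size == 0:
        return freqs[-1]                  # MTF never drops below threshold
    i = below[0]
    if i == 0:
        return freqs[0]
    f0, f1 = freqs[i - 1], freqs[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (threshold - m0) * (f1 - f0) / (m1 - m0)

# Illustrative system MTF: Gaussian blur with sigma = 0.5 pixels,
# MTF(f) = exp(-2 * pi^2 * sigma^2 * f^2)
sigma = 0.5
f = np.linspace(0.0, 1.0, 2001)           # cycles/pixel
mtf = np.exp(-2.0 * np.pi**2 * sigma**2 * f**2)
f_lim = limiting_resolution(f, mtf)       # ~0.78 cycles/pixel
```

Because the MTF near the limiting resolution has a shallow slope in a well-corrected system, small frame-to-frame modulation changes shift `f_lim` noticeably, which is the uncertainty mechanism the abstract describes.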
As full-motion video (FMV) systems achieve smaller instantaneous fields-of-view (IFOVs), the residual line-of-sight (LOS) motion becomes significantly more influential on the overall system resolving and task performance capability. We augment the AFRL-derived, open-source Python modeling code pyBSM to calculate distributions of motion-based modulation transfer function (MTF) based on true knowledge of line-of-sight motion. We provide a pyBSM-compatible class that can manipulate either existing or synthesized LOS motion data for frame-by-frame MTF and system performance analysis. The code is used to demonstrate the implementation using both simulated and measured LOS data and to highlight discrepancies between traditional MTF models and LOS-based MTF analysis.
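A minimal sketch of the frame-by-frame idea: window a sampled LOS record into integration-time segments and assign each frame its own jitter MTF from the in-window RMS residual. The helper names and the Gaussian jitter model below are assumptions for illustration; they are not the pyBSM API, and the synthesized motion record is fabricated only to exercise the code.

```python
import numpy as np

def jitter_mtf(xi, sigma):
    """Gaussian jitter MTF: exp(-2 * pi^2 * sigma^2 * xi^2).
    xi in cycles per unit angle, sigma (RMS jitter) in the same
    angular unit (radians here)."""
    return np.exp(-2.0 * np.pi**2 * sigma**2 * xi**2)

def per_frame_jitter_mtf(los, fs, t_int, xi):
    """Split a sampled LOS record (angle vs. time, sampled at fs Hz)
    into non-overlapping windows of one integration time t_int,
    estimate the RMS residual about each window mean, and return one
    jitter MTF per frame.  Hypothetical helper, not part of pyBSM."""
    n = max(1, int(fs * t_int))
    starts = range(0, len(los) - n + 1, n)
    return np.array([jitter_mtf(xi, np.std(los[s:s + n])) for s in starts])

# Synthesized LOS motion: a low-frequency sinusoid plus random jitter
rng = np.random.default_rng(0)
fs, t_int = 1000.0, 0.01                  # 1 kHz samples, 10 ms frames
t = np.arange(0, 1.0, 1.0 / fs)
los = 5e-6 * np.sin(2 * np.pi * 3.0 * t) + rng.normal(0, 1e-6, t.size)
xi = np.linspace(0, 2e5, 50)              # cycles/radian
mtfs = per_frame_jitter_mtf(los, fs, t_int, xi)   # one MTF row per frame
```

The per-frame spread in `mtfs` is the kind of distribution the paper computes, in contrast to a single time-averaged jitter MTF from a traditional model.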
This paper examines the measurement of the MTF of slant-edge targets from airborne imagery. The MTF is calculated by extracting the edge spread function from the slant edge, deriving the line spread function, then performing an FFT to obtain the MTF. Because the characteristics of airborne imagery are not controlled, using edge targets to obtain the system-level MTF presents challenges. A method to calculate the MTF from edge targets in airborne imagery is proposed that normalizes the scan lines in the edge spread function and low-pass filters it. An example using airborne imagery is shown and compared with analytical results and laboratory measurements. The paper also examines extracting the effects on the MTF of image blur from jitter, which is common in airborne imagery.
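The ESF-to-LSF-to-FFT chain can be illustrated in one dimension as below. This is a generic slant-edge sketch, not the paper's method: the Hanning window stands in loosely for the tail suppression discussed in the abstract, and the synthetic error-function edge is an assumed input.

```python
import numpy as np
from math import erf

def esf_to_mtf(esf):
    """One-dimensional slant-edge pipeline: differentiate the edge
    spread function to get the line spread function, apply a Hanning
    window to suppress noise in the tails, then take the FFT
    magnitude and normalize to the DC value."""
    lsf = np.diff(esf)                    # ESF -> LSF by finite difference
    lsf = lsf * np.hanning(lsf.size)      # taper the noisy tails
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                   # normalize so MTF(0) = 1

# Synthetic edge: error-function profile of a 1-pixel Gaussian blur
sigma = 1.0
x = np.arange(-32, 32) + 0.5
esf = np.array([0.5 * (1.0 + erf(v / (sigma * np.sqrt(2.0)))) for v in x])
freqs = np.fft.rfftfreq(esf.size - 1, d=1.0)   # cycles/pixel
mtf = esf_to_mtf(esf)
```

With real airborne imagery the per-scan-line normalization and low-pass filtering the paper proposes would precede this step, since uncontrolled illumination and noise otherwise corrupt the differentiated LSF.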