The Video National Imagery Interpretability Rating Scale (VNIIRS) is a useful standard for quantifying the interpretability of motion imagery. Accurate automated assessment of VNIIRS would benefit operators by characterizing the potential utility of a video stream. For still, visible-light imagery, the General Image Quality Equation (GIQE) provides a standard model for automatically estimating the NIIRS of an image from sensor parameters, namely the ground sample distance (GSD), the relative edge response (RER), and the signal-to-noise ratio (SNR). Typically, these parameters are associated with a specific sensor, and the metadata correspond to a specific image acquisition. For many tactical video sensors, however, these sensor metadata are not available, and it is necessary to estimate the parameters from information available in the imagery. We present methods for estimating the RER and SNR through analysis of the scene, i.e., the raw pixel data. By estimating the RER and SNR directly from the video data, we can compute accurate VNIIRS estimates for the video. We demonstrate the method on a set of video data.
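The GIQE mapping from sensor parameters to a NIIRS estimate can be sketched as follows. This sketch uses the published GIQE version 4 form; the edge-overshoot term H and post-processing noise gain G are treated as plain inputs here, and all coefficients should be verified against the governing GIQE document before any operational use:

```python
import math

def giqe4_niirs(gsd_in, rer, snr, h=1.0, g=1.0):
    """Estimate NIIRS via the GIQE version 4 form (illustrative sketch).

    gsd_in : ground sample distance, in inches
    rer    : relative edge response
    snr    : signal-to-noise ratio
    h      : edge overshoot (height) term, assumed input
    g      : noise gain from post-processing, assumed input
    """
    # GIQE 4 switches coefficients at RER = 0.9
    if rer >= 0.9:
        a, b = 3.32, 1.559
    else:
        a, b = 3.16, 2.817
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g / snr)
```

As expected, the estimate rises as GSD shrinks and falls as RER degrades, which is why in-scene estimation of RER and SNR is the key missing piece when sensor metadata are absent.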
Motion imagery can be used in some circumstances as a source of data for reconstructing three-dimensional (3D) representations of targets and objects in the scene. This research explores the utility of simulated motion-imagery collections as input to a 3D target reconstruction algorithm based on structure from motion. The use of simulated video is advantageous for testing potential collections on targets or geographical areas for which real video and images may be unavailable. Examples are provided of tests of angular sampling and occlusion, of degradation of input imagery through blurring, and of measurements of their effects on 3D reconstruction quality.
Automated video quality assessment methods have generally been based on measurements of engineering parameters such as ground sampling distance, level of blur, and noise. However, humans rate video quality using specific criteria that measure the interpretability of the video by determining the kinds of objects and activities that might be detected in it. Given the improvements in tracking, automatic target detection, and activity characterization that have occurred in video science, it is worth considering whether new automated video assessment methods might be developed by imitating the logical steps taken by humans in evaluating scene content. This article will outline a new procedure for automatically evaluating video quality based on automated object and activity recognition, and demonstrate the method for several ground-based and maritime examples. The detection and measurement of in-scene targets makes it possible to assess video quality without relying on source metadata. A methodology is given for comparing automated assessment with human assessment. For the human assessment, objective video quality ratings can be obtained through a menu-driven, crowd-sourced scheme of video tagging, in which human participants tag objects such as vehicles and people in video clips. The size, clarity, and level of detail of features present on the tagged targets are compared directly with the Video National Imagery Interpretability Rating Scale (VNIIRS).
Factors that degrade image quality in video and other sensor collections, such as noise, blurring, and poor resolution, also affect the spatial power spectrum of imagery. Prior research in human vision and image science over the last few decades has shown that the image power spectrum can be useful for assessing the quality of static images. The research in this article explores the possibility of using the image power spectrum to automatically evaluate full-motion video (FMV) imagery frame by frame. This procedure makes it possible to identify anomalous images and scene changes, and to track gradual changes in quality as collection progresses. This article will describe a method for applying power spectral image quality metrics to images subjected to simulated blurring, blocking, and noise. As a preliminary test on videos from multiple sources, image quality measurements for image frames from 185 videos are compared to analyst ratings based on ground sampling distance. The goal of the research is to develop an automated system for tracking image quality during real-time collection, and to assign ratings to video clips for long-term storage, calibrated to standards such as the National Imagery Interpretability Rating Scale (NIIRS).
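One common way a power-spectrum quality metric is built, and a reasonable mental model for the approach described above, is to radially average the 2-D power spectrum of a frame and fit its log-log falloff: natural scenes fall off roughly as 1/f², blurring steepens the slope, and additive noise flattens it. The following is an illustrative sketch of that idea, not the specific metric used in the article:

```python
import numpy as np

def power_spectrum_slope(frame):
    """Log-log slope of the radially averaged power spectrum of a
    grayscale frame (2-D float array).  Slopes near -2 are typical of
    natural scenes; blur pushes the slope more negative, while added
    noise pushes it toward 0.  Illustrative sketch only."""
    f = np.fft.fftshift(np.fft.fft2(frame - frame.mean()))
    power = np.abs(f) ** 2
    h, w = frame.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Average power over annuli of integer radius
    radial = (np.bincount(r.ravel(), weights=power.ravel())
              / np.bincount(r.ravel()))
    radii = np.arange(1, min(h, w) // 2)  # skip DC, stay within Nyquist
    slope, _ = np.polyfit(np.log(radii), np.log(radial[radii]), 1)
    return slope
```

Applied per frame, a running slope estimate of this kind can flag anomalous frames and scene changes as sudden jumps, and gradual focus degradation as a drift toward more negative values.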
Artifacts induced by distortions which sometimes occur in two-dimensional projection images can appear in the resulting tomographic reconstructions. We describe a procedure for analyzing, correcting, and removing experimental artifacts, and hence reducing reconstruction artifacts. Two-dimensional and three-dimensional images acquired with scanning transmission x-ray microscopy of a sample containing an integrated circuit interconnect show how these procedures can be successfully applied.
We performed an x-ray nanotomography experiment at the Advanced Photon Source for the purpose of making a 3D image of a sample containing an integrated circuit interconnect. Nine projections of the sample were made over an angular range of 140 degrees using 1573 eV photons and a scanning transmission x-ray microscope having a focal spot size of about 150 nm. Reconstructions of experimental and simulated data, using a simultaneous iterative reconstruction technique, show that a sample that is highly opaque along certain lines of sight must be strategically oriented with respect to the rotation axis to minimize the attenuation of photons through the sample and maximize the contrast in each image.
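The simultaneous iterative reconstruction technique (SIRT) mentioned above has a compact linear-algebra form for the discretized tomography model Ax = b, where A holds the ray weights, x the voxel values, and b the measured projections. The classic update is x ← x + C Aᵀ R (b − A x), with R and C the inverse row and column sums of A. A minimal dense sketch follows; this is illustrative only and not the authors' implementation (production codes use large sparse projection operators):

```python
import numpy as np

def sirt(A, b, n_iters=200):
    """Simultaneous iterative reconstruction technique (SIRT) for the
    linear tomography model A x = b, with nonnegative system matrix A.

    Update:  x <- x + C A^T R (b - A x),
    where R = diag(1 / row sums of A), C = diag(1 / col sums of A).
    Minimal dense sketch for illustration."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)  # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)  # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x += C * (A.T @ (R * (b - A @ x)))
    return x
```

Because every projection contributes to every update, SIRT averages out inconsistent measurements, which is one reason iterative methods of this family are favored for sparse-angle data such as the nine-projection collection described here.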