The interpretability of an image indicates the potential intelligence value of the data. Historically, the National Imagery Interpretability Rating Scale (NIIRS) has been the standard for quantifying the intelligence potential based on image analysis by human observers. Empirical studies have demonstrated that spatial resolution is the dominant predictor of the NIIRS level of an image. Today, the value of imagery is no longer simply determined by spatial resolution, since additional factors such as spectral diversity and temporal sampling are significant. Furthermore, analyses are performed by machines as well as humans. Consequently, NIIRS no longer accurately quantifies potential intelligence value for an image or set of images. We are exploring new measures of information potential based on mutual information. Our research suggests that new measures of image “quality” based on information theory can provide meaningful standards that go beyond NIIRS. In our approach, mutual information provides an objective method for quantifying divergence across objects and activities in an image. This paper presents the rationale for our approach, the technical description, and the results of early experimentation to explore the feasibility of establishing an information-theoretic standard for quantifying the intelligence potential of an image.
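The mutual-information measure mentioned above can be estimated directly from image data. The following is a minimal sketch, not the authors' implementation: it estimates the mutual information between two images from their joint intensity histogram, and the test images, noise level, and bin count are illustrative assumptions.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Estimate mutual information (in bits) between two images
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of img_b
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Synthetic example: an image shares more information with itself
# than with a noise-degraded copy.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64))
noisy = np.clip(a + rng.normal(0, 20, a.shape), 0, 255)
```

A drop in mutual information between a reference and a degraded or occluded view is one way such a measure could register lost intelligence potential.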
Recent investigations indicate that cardiovascular function is a viable biometric. This paper explores biometric techniques based on multiple modalities for sensing cardiovascular function. Analysis of electrocardiogram (ECG) data combined with corresponding data from pulse oximetry and blood pressure indicates that features corresponding to individuals can be extracted from the signals. While a person's heart rate can vary with mental and emotional state, certain features of the heartbeat appear to be unique to the individual. Our protocol induced a range of mental and emotional states in the subjects, and the analysis identifies features of the cardiovascular signals that are invariant to mental and emotional state. Furthermore, the three measures of cardiovascular function provide independent information, which can be fused to achieve more robust performance than any single modality alone.
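Fusing the three modalities can be as simple as combining per-modality match scores. The sketch below is a hypothetical illustration of score-level fusion, not the paper's method; the scores and equal weights are invented for the example.

```python
import numpy as np

def fuse_scores(scores, weights=None):
    """Combine per-modality match scores (each in [0, 1]) into one
    decision score by a weighted sum -- simple score-level fusion."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        # Default: weight each modality equally.
        weights = np.full(len(scores), 1.0 / len(scores))
    return float(np.dot(weights, scores))

# Hypothetical match scores from ECG, pulse oximetry, and blood pressure.
fused = fuse_scores([0.92, 0.75, 0.81])
```

In practice the weights would be tuned to each modality's measured discriminative power rather than set uniformly.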
The motion imagery community would benefit from standard measures for assessing image interpretability. The National Imagery Interpretability Rating Scale (NIIRS) has served as a community standard for still imagery, but no comparable scale exists for motion imagery. Several considerations unique to motion imagery indicate that the standard methodology employed in the past for NIIRS development may not be applicable or, at a minimum, requires modifications. The dynamic nature of motion imagery introduces a number of factors that do not affect the perceived interpretability of still imagery—namely target motion and camera motion. We conducted a series of evaluations to understand and quantify the effects of critical factors. This paper presents key findings about the relationship of perceived interpretability to ground sample distance, target motion, camera motion, and frame rate. Based on these findings, we modified the scale development methodology and validated the approach. The methodology adapts the standard NIIRS development procedures to the softcopy exploitation environment and focuses on image interpretation tasks that target the dynamic nature of motion imagery. This paper describes the proposed methodology, presents the findings from a methodology assessment evaluation, and offers recommendations for the full development of a scale for motion imagery.
Detection and mapping of subsurface obstacles is critical for safe navigation of littoral regions. Sidescan sonar data offers a rich source of information for developing such maps. Typically, data are collected at two frequencies using a sensor mounted on a towfish. The major features of interest depend on the specific mission, but often include: objects on the bottom that could pose hazards for navigation, linear features such as cables or pipelines, and the bottom type, e.g., clay, sand, rock, etc. A number of phenomena can complicate the analysis of the sonar data: surface return, vessel wakes, and fluctuations in the position and orientation of the towfish. Developing accurate maps of navigation hazards based on sidescan sonar data is generally labor intensive. We propose an automated approach, which employs commercial software tools, to detect these objects. This method offers the prospect of substantially reducing production time for maritime geospatial data products.
Motion imagery will play a critical role in future intelligence and military missions. The ability to provide a real time, dynamic view and persistent surveillance makes motion imagery a valuable source of information. The ability to collect, process, transmit, and exploit this rich source of information depends on the sensor capabilities, the available communications channels, and the availability of suitable exploitation tools. While sensor technology has progressed dramatically and various exploitation tools exist or are under development, the bandwidth required for transmitting motion imagery data remains a significant challenge. This paper presents a user-oriented evaluation of several methods for compression of motion imagery. We explore various codecs and bitrates for both inter- and intra-frame encoding. The analysis quantifies the effects of compression in terms of the interpretability of motion imagery, i.e., the ability of imagery analysts to perform common image exploitation tasks. The findings have implications for sensor system design, systems architecture, and mission planning.
The motion imagery community would benefit from the availability of standard measures for assessing image interpretability. The National Imagery Interpretability Rating Scale (NIIRS) has served as a community standard for still imagery, but no comparable scale exists for motion imagery. Previous studies have explored the factors affecting the perceived interpretability of motion imagery and the ability to perform various image exploitation tasks. More recently, a study demonstrated an approach for adapting the standard NIIRS development methodology to motion imagery. This paper presents the first step in implementing this methodology, namely the construction of the perceived interpretability continuum for motion imagery. We conducted an evaluation in which imagery analysts rated the interpretability of a large number of motion imagery clips. Analysis of these ratings indicates that analysts rate the imagery consistently, that perceived interpretability is unidimensional, and that interpretability varies linearly with log(GSD). This paper presents the design of the evaluation, the analysis and findings, and implications for scale development.
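The reported linear relationship between perceived interpretability and log(GSD) can be illustrated with an ordinary least-squares fit. The GSD values and ratings below are invented for illustration and are not the evaluation's data.

```python
import numpy as np

# Hypothetical ratings: perceived interpretability (NIIRS-like level)
# versus ground sample distance (GSD) -- illustrative values only.
gsd = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
rating = np.array([7.1, 6.2, 5.0, 4.1, 3.0])

# Fit rating as a linear function of log10(GSD).
slope, intercept = np.polyfit(np.log10(gsd), rating, 1)
predicted = slope * np.log10(gsd) + intercept
```

A negative slope captures the expected behavior: coarser GSD (larger values) yields lower perceived interpretability.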
A fundamental problem in image processing is finding objective metrics that parallel human perception of image quality. In this study, several metrics were examined to quantify compression algorithms in terms of perceived loss of image quality. In addition, we sought to describe the relationship of image quality as a function of bit rate. The compression schemes used were JPEG2000, MPEG-2, and H.264. The frame size was fixed at 848×480, and the encoding varied from 6,000 kbps to 200 kbps. The metrics examined were peak signal-to-noise ratio (PSNR), structural similarity (SSIM), edge localization metrics, and a blur metric. To varying degrees, the metrics displayed desirable properties: they were monotonic in the bit rate, the group of pictures (GOP) structure could be inferred, and they tended to agree with human perception of quality degradations. Additional work is being conducted to quantify the sensitivity of these measures with respect to our Motion Imagery Quality Scale.
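Of the metrics listed, PSNR is the simplest to reproduce. A minimal sketch follows; the frame contents and noise level are synthetic stand-ins, not the study's imagery.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized frames."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic frame at the study's 848x480 size, plus a noise-degraded copy
# standing in for a compressed frame.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (480, 848)).astype(np.uint8)
degraded = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
```

For a compressed sequence, computing PSNR per frame exposes the GOP structure the study mentions: intra-coded frames score higher than the predicted frames between them.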
Motion imagery will play a critical role in future combat operations. The ability to provide a real time, dynamic view of the battlefield, as well as the capability to maintain persistent surveillance, makes motion imagery a valuable source of information for the soldier. Acquisition and exploitation of this rich source of information, however, depend on available communications bandwidth to transmit the necessary information to users. Methods for reducing bandwidth requirements include a variety of image compression and frame decimation techniques. This study explores spatially differential compression, in which targets in the clips are losslessly compressed while the background regions are highly compressed, and evaluates the ability of users to perform standard target detection and identification tasks on the compressed product compared to performance on uncompressed imagery or imagery compressed by other methods. The paper concludes with recommendations for future investigations.
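The idea of spatially differential compression can be sketched in a few lines: preserve a target region of interest exactly while degrading the background. The sketch below uses coarse quantization as a stand-in for heavy background compression; the frame, mask, and step size are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def differential_quantize(frame, roi_mask, background_step=32):
    """Keep pixels inside the target ROI untouched (lossless) and
    coarsely quantize the background as a stand-in for heavy compression."""
    out = (frame // background_step) * background_step  # coarse background
    out[roi_mask] = frame[roi_mask]                     # preserve targets exactly
    return out

# Toy frame and a hypothetical detected-target region.
frame = (np.arange(0, 64, dtype=np.uint8).reshape(8, 8)) * 4
mask = np.zeros_like(frame, dtype=bool)
mask[2:5, 2:5] = True
result = differential_quantize(frame, mask)
```

A real system would feed the ROI mask to a codec's region-of-interest mode rather than quantizing pixels directly, but the bit allocation trade-off is the same.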
The motion imagery community would benefit from the availability of standard measures for assessing image interpretability. The National Imagery Interpretability Rating Scale (NIIRS) has served as a community standard for still imagery, but no comparable scale exists for motion imagery. Several considerations unique to motion imagery indicate that the standard methodology employed in the past for NIIRS development may not be applicable or, at a minimum, requires modifications. The dynamic nature of motion imagery introduces a number of factors that do not affect the perceived interpretability of still imagery - namely target motion and camera motion. A set of studies sponsored by the National Geospatial-Intelligence Agency (NGA) has been conducted to understand and quantify the effects of critical factors. This study discusses the development and validation of a methodology that has been proposed for the development of a NIIRS-like scale for motion imagery. The methodology adapts the standard NIIRS development procedures to the softcopy exploitation environment and focuses on image interpretation tasks that target the dynamic nature of motion imagery. This paper describes the proposed methodology, presents the findings from a methodology assessment evaluation, and offers recommendations for the full development of a scale for motion imagery.
The development of a motion imagery (MI) quality scale, akin to the National Imagery Interpretability Rating Scale (NIIRS) for still imagery, would have great value to designers and users of surveillance and other MI systems. A multiphase study has adopted a perceptual approach to identifying the main MI attributes that affect interpretability. The current perceptual study measured frame rate effects for simple motion imagery interpretation tasks of detecting and identifying a known target. By using synthetic imagery, we had full control of the contrast and speed of moving objects, motion complexity, the number of confusers, and the noise structure. To explore the detectability threshold, the contrast between the darker moving objects and the background was set at 5%, 2%, and 1%. Nine viewers were asked to detect or identify a moving synthetic "bug" in each of 288 10-second clips. We found that frame rate, contrast, and confusers had a statistically significant effect on image interpretability (at the 95% level), while speed and background showed no significant effect. Generally, there was a significant loss in correct detection and identification for frame rates below 10 frames/s. Increasing the contrast improved detection, and at high contrast, confusers did not affect detection. Confusers reduced detection of higher-speed objects. Higher speed improved detection but complicated identification, although this effect was small. Higher speed made detection harder at 1 frame/s but improved detection at 30 frames/s. The low loss of quality at moderately lower frame rates may have implications for bandwidth-limited systems. A study is underway to confirm the results reported here with live-action imagery.
The motion imagery community would benefit from the availability of standard measures for assessing image interpretability. The National Imagery Interpretability Rating Scale (NIIRS) has served as a community standard for still imagery, but no comparable scale exists for motion imagery. Several considerations unique to motion imagery indicate that the standard methodology employed in the past for NIIRS development may not be applicable or, at a minimum, may require modifications. Traditional methods for NIIRS development rely on a close linkage between perceived image quality, as captured by specific image interpretation tasks, and the sensor parameters associated with image acquisition. The dynamic nature of motion imagery suggests that this type of linkage may not exist or may be modulated by other factors. An initial study was conducted to understand the effects that target motion, camera motion, and scene complexity have on perceived image interpretability for motion imagery. This paper summarizes the findings from this evaluation. In addition, several issues emerged that require further investigation:
- The effect of frame rate on the perceived interpretability of motion imagery
- Interactions between color and target motion that could affect perceived interpretability
- The relationships among resolution, viewing geometry, and image interpretability
- The ability of an analyst to satisfy specific image exploitation tasks relative to different types of motion imagery clips
Plans are being developed to address each of these issues through direct evaluations. This paper discusses each of these concerns, presents the plans for evaluations, and explores the implications for development of a motion imagery quality metric.
Growing military requirements and shorter timelines are placing greater demands on imagery analysts. At the same time, advances in sensor technology have vastly increased the quantity and types of imagery data available. Together, these factors are driving toward greater reliance on automated exploitation tools, such as automated target cueing (ATC). Several studies indicate that operational performance depends not only on the accuracy of the ATC algorithm, but also on effectively conveying the ATC information to the user. Sonification, the presentation of information through audio signals, provides a novel method for assisting analysts with visual search tasks. This paper presents a recent proof-of-concept experiment in which analysts search for geometric targets in synthetic, two-band color imagery. The performance results indicate that sonification can enhance performance, particularly through false alarm mitigation. The range of performance across users also suggests that user training may play a big role in effective operational use of sonification methods.
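One plausible sonification mapping is to convert ATC detection confidence into tone pitch. The mapping below is a hypothetical illustration, not the experiment's design; the frequency bounds and the logarithmic spacing are assumptions.

```python
def confidence_to_pitch(confidence, low_hz=220.0, high_hz=880.0):
    """Map an ATC detection confidence in [0, 1] to a tone frequency.
    Frequencies are spaced logarithmically so that equal confidence
    steps sound like equal pitch intervals."""
    confidence = min(max(confidence, 0.0), 1.0)  # clamp out-of-range scores
    return low_hz * (high_hz / low_hz) ** confidence

print(confidence_to_pitch(0.0))  # 220.0
print(confidence_to_pitch(0.5))  # 440.0
print(confidence_to_pitch(1.0))  # 880.0
```

Pitch is only one channel: loudness, timbre, or stereo position could likewise encode cue location or class, which is where user training would matter most.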
Real time in vivo optical coherence tomography (OCT) imaging of the adult fruit fly Drosophila melanogaster heart using a newly designed OCT microscope allows accurate assessment of cardiac anatomy and function. D. melanogaster has been used extensively in genetic research for over a century, but in vivo evaluation of the heart has been limited by available imaging technology. The ability to assess phenotypic changes with micrometer-scale resolution noninvasively in genetic models such as D. melanogaster is needed in the advancing fields of developmental biology and genetics. We have developed a dedicated small animal OCT imaging system incorporating a state-of-the-art, real time OCT scanner integrated into a standard stereo zoom microscope, which allows for simultaneous OCT and video imaging. System capabilities include A-scan, B-scan, and M-scan imaging as well as automated 3D volumetric acquisition and visualization. Transverse and sagittal B-mode scans of the four-chambered D. melanogaster heart have been obtained with the OCT microscope and are consistent with detailed anatomical studies from the literature. Further analysis by M-mode scanning is currently under way to assess cardiac function as a function of age and sex by determination of shortening fraction and ejection fraction. These studies create control cardiac data on the wild-type D. melanogaster, allowing subsequent evaluation of phenotypic cardiac changes in this model after regulated genetic mutation.
A spectral classification comparison was performed using four different classifiers: the parametric maximum likelihood classifier and three nonparametric classifiers (neural networks, fuzzy rules, and fuzzy neural networks). The input image data is a System Pour l'Observation de la Terre (SPOT) satellite image of Otago Harbour near Dunedin, New Zealand. The SPOT image data contains three spectral bands in the green, red, and visible infrared portions of the electromagnetic spectrum. The specific area contains intertidal vegetation species above and below the waterline. Of specific interest is eelgrass (Zostera novazelandica), which is a biotic indicator of environmental health. The mixed covertypes observed in an in situ field survey are difficult to classify because of subjectivity and water's preferential absorption of the visible infrared spectrum. In this analysis, each of the classifiers was applied to the data in two different testing procedures. In the first test procedure, the reference data was divided into training and test sets by area. Although this is an efficient data handling technique, the classifier is not presented with all of the subtle microclimate variations. In the second test procedure, the same reference areas were amalgamated and randomly sorted into training and test data. The amalgamation and sorting were performed external to the analysis software. For the first testing procedure, the highest testing accuracy, 89%, was obtained through the use of fuzzy inference. In the second testing procedure, the maximum likelihood classifier and the fuzzy neural networks provided the best results. Although the testing accuracies for the maximum likelihood classifier and the fuzzy neural networks were similar, the latter algorithm has additional features, such as rule extraction, explanation, and fine tuning of individual classes.
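The parametric maximum likelihood classifier referenced above assigns each pixel to the class whose multivariate Gaussian log-likelihood is highest. The sketch below assumes known class means and covariances; the two classes and their three-band statistics are invented for illustration and are not derived from the SPOT data.

```python
import numpy as np

def ml_classify(pixels, class_means, class_covs):
    """Assign each pixel (a row of band values) to the class with the
    highest multivariate Gaussian log-likelihood (equal priors)."""
    scores = []
    for mean, cov in zip(class_means, class_covs):
        inv = np.linalg.inv(cov)
        diff = pixels - mean
        # Mahalanobis distance for every pixel at once.
        maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores.append(-0.5 * (maha + np.log(np.linalg.det(cov))))
    return np.argmax(np.stack(scores), axis=0)

# Two toy covertype classes in three bands (green, red, near-IR).
rng = np.random.default_rng(2)
means = [np.array([50.0, 40.0, 120.0]), np.array([90.0, 80.0, 30.0])]
covs = [np.eye(3) * 25.0, np.eye(3) * 25.0]
samples = np.vstack([rng.multivariate_normal(m, c, 50) for m, c in zip(means, covs)])
labels = ml_classify(samples, means, covs)
```

In practice the means and covariances are estimated from the training areas described in the two testing procedures, which is exactly where the by-area versus randomized split matters.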
This paper describes the relationship of polarimetric observations from orbital and aerial platforms and the determination of the optimum sun-target-sensor geometry. Polarimetric observations were evaluated for feature discrimination. The Space Shuttle experiment was performed using two boresighted Hasselblad 70 mm cameras with identical settings and linear polarizing filters aligned orthogonally about the optic axis. The aerial experiment was performed using a single 35 mm Nikon FE2, rotating the linear polarizing filter 90 deg to acquire both minimum and maximum photographs. Characteristic curves were created by covertype and waveband for both aerial and Space Shuttle imagery. Though significant differences existed between the two datasets, the observed polarimetric signatures were unique and separable.