Interpolation of remote sensing imagery is a ubiquitous task, required for myriad purposes such as registration of multiple frames, correction of geometric distortions, and mitigation of platform vibration distortions in imagery. Interpolation is also a classically systemic task, in that interpolator performance in pixel placement, anti-aliasing, and blur affects the design of other system components, notably reconstruction filters. Interpolator design in a system context is the problem which first motivated development of the latent and apparent image quality metrics previously presented at Visual Information Processing XI and XIII.
This paper presents a suite of common interpolator design philosophies with length-4 examples of the designs analyzed in terms of signal processing and image quality metrics. Conclusions are drawn both with respect to the designs and with respect to the metrics.
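As an illustration of the kind of length-4 interpolator discussed above (a generic example, not one of the specific designs analyzed in the paper), the following sketch implements the classic Keys cubic convolution kernel; the parameter value a = -0.5 and the helper names are choices made here for illustration:

```python
def keys_kernel(x, a=-0.5):
    """Keys cubic convolution kernel: a classic length-4 (4-tap) interpolator."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def taps(phase, a=-0.5):
    """4-tap weights for a fractional sample offset `phase` in [0, 1)."""
    return [keys_kernel(phase + 1, a), keys_kernel(phase, a),
            keys_kernel(phase - 1, a), keys_kernel(phase - 2, a)]

def interpolate(samples, phase):
    """Interpolate between samples[1] and samples[2] at fractional `phase`,
    given the 4 neighboring samples."""
    return sum(w * s for w, s in zip(taps(phase), samples))
```

The weights sum to one at every phase (the kernel reproduces constants), and with a = -0.5 the interpolator also reproduces linear and quadratic ramps exactly, which is one reason this family is a common baseline in interpolator trade studies.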
Measures of image quality based on sensitivity of edge placement have previously been presented for use in imaging system analysis. Successful applications of these measures have included sharpening filter design, interpolator design, system focal length selection, compression bitrate selection, and phase diversity optical control analysis. These are applications for which the General Image Quality Equation (GIQE) is not recommended. The GIQE is intended only for assessment of optimized system designs, and is not robust in system optimization applications.
The alternative to system optimization by engineering metric methods is optimization by human assessments of simulated system design alternatives, a process which is slow and expensive and which presents considerable practical difficulties in validation and reproducibility. A practical approach to system design combines metric evaluation of day-to-day problems for which quick answers are needed, with simulation and human evaluation of overall system performance and larger system tradeoffs. In this combined approach one use of metric analysis is to provide reasonable design alternatives to include in the trade space being explored by analyst assessments.
Analysis of simple parametric studies taken from Optical Engineering is presented here in terms of the metrics, with comparison of results achieved with human analysis against results predicted from engineering metrics.
Eastman Kodak Company conducts image quality monitoring of U.S. Government-operated Synthetic Aperture Radar (SAR) sensors. Our quality assurance methodology uses automated metrics in parallel with human analyst scoring of image quality factors to track quality trends in an image chain. A key feature of the program is that analysis is performed periodically on images selected from actual mission data. Historically, tasking the sensors to fly over calibrated test sites on such a regular basis has failed because of contention for collection resources from higher-priority jobs. In addition, detected, 8-bit NITF data is often the only image product that is distributed. The scarcity of high radar cross-section (RCS) individual point scatterers as well as the lack of complex data pose challenges to estimating a key image quality parameter, the impulse response function (IPR). This paper discusses a method to isolate and aggregate signatures of multiple low signal-to-noise ratio IPRs in detected mission imagery. Measures of -3dB and -15dB IPR widths in range and azimuth have been realized along with estimates of far sidelobe levels.
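The -3dB and -15dB width measurements can be sketched as follows. This is a generic crossing-point estimator on a single sampled IPR cut, not the paper's method for isolating and aggregating multiple low-SNR IPRs; the function name and interpolation scheme are assumptions made here:

```python
import numpy as np

def ipr_width_db(profile, spacing, level_db=-3.0):
    """Width of an impulse-response mainlobe at a given level below peak.

    profile: 1-D array of magnitude samples through the IPR peak
             (e.g. a range or azimuth cut).
    spacing: sample spacing (e.g. meters per sample).
    Crossing points on each side of the peak are located by linear
    interpolation in dB.
    """
    db = 20 * np.log10(np.asarray(profile, float))
    db -= db.max()                        # normalize peak to 0 dB
    peak = int(np.argmax(db))

    def crossing(idx_range):
        for i in idx_range:
            if db[i] <= level_db:         # first sample at/below the level
                j = i + 1 if i < peak else i - 1   # neighbor toward the peak
                frac = (level_db - db[j]) / (db[i] - db[j])
                return j + frac * (i - j)
        raise ValueError("level not reached within profile")

    left = crossing(range(peak, -1, -1))  # search leftward from the peak
    right = crossing(range(peak, len(db)))
    return (right - left) * spacing
```

The same routine serves for both -3dB and -15dB widths by changing `level_db`; far sidelobe estimation would require examining the profile beyond the mainlobe and is not sketched here.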
Measures of image quality are presented here that have been developed to assess both the immediate quality of an image and the potential at intermediate points in an imaging chain for enhanced image quality. The original intent of the metrics was to provide an optimand for interpolator design, and the metrics have subsequently been used for a number of differential image quality analyses and imaging system component designs. The metrics presented are of the same general form as the National Imagery Interpretability Rating Scale (NIIRS), representing quality as the base-2 logarithm of linear resolution, so that one unit of differential quality represents a doubling or halving of the resolution of imagery. Analysis of a simple imaging chain is presented in terms of the metrics, with conclusions regarding interpolator design, consistency of the latent and apparent image quality metrics, and the relationship between interpolator and convolution kernel design in a system where both are present. Among the principal results are an optimized division of labor between interpolators and Modulation Transfer Function Correction (MTFC) filters, consistency of the analytical latent and apparent image quality metrics with each other and with visually optimized aim curves, and an introduction to sharpening interpolator design methodology.
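The base-2 logarithmic convention can be stated concretely. In this minimal sketch (the function name is assumed here, and resolution is taken as a linear measure such as ground sample distance, where smaller is better), a quality difference of +1.0 corresponds to a halving of the resolution distance, i.e. a doubling of resolving power:

```python
import math

def delta_quality(resolution_before, resolution_after):
    """Differential quality in NIIRS-like units: the base-2 logarithm of
    the linear-resolution ratio, so +1.0 unit means resolution doubled
    (the resolvable distance halved) and -1.0 means it halved."""
    return math.log2(resolution_before / resolution_after)
```

For example, improving a 1.0 m system to 0.5 m yields a differential quality of +1.0 unit under this convention.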
Synthetic Aperture Radar (SAR) imagery has traditionally posed a challenge to image compression algorithms operating at low to moderate bit-rates (0.25 to 1.0 bits per pixel), because SAR texture is typically reconstructed as smooth fields. This smooth reconstruction is visually objectionable and conceals information from interpreters, who are accustomed to analyzing textures and using texture to define the context of reflecting point targets and clusters. JPEG 2000 is emerging as a new international standard for wavelet-based image compression, and it too tends to reconstruct SAR texture as smooth fields when operating at low or moderate bit-rates. This characteristic of the new standard motivates an attempt to carry texture synthesis techniques proven on other compression algorithms over to JPEG 2000. This present effort demonstrates the value of root-mean-square (RMS) reconstruction, a technique previously proven on a proprietary codec, for improving the visually perceived quality of JPEG 2000 compressed SAR images. RMS reconstruction is found to be extremely useful for JPEG 2000, both for improving the quality of compressed SAR images and also for improving the visual appearance of compressed electro-optical (EO) imagery.
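The general idea behind RMS-style reconstruction can be sketched as follows. This is an illustrative simplification, not the proprietary codec's method nor a JPEG 2000-conformant dequantizer; the function name, the Gaussian noise model, and the externally supplied RMS estimate are all assumptions made here. The point is that subband coefficients quantized to zero are refilled with noise of matching energy instead of being reconstructed as flat zeros, which is what produces the objectionable smooth fields in SAR texture:

```python
import numpy as np

def rms_fill(dequantized, original_rms, rng=None):
    """Sketch of RMS-style texture reconstruction for one wavelet subband.

    dequantized:  array of dequantized subband coefficients, where
                  coefficients quantized to the zero bin are exactly 0.
    original_rms: an estimate of the RMS of the coefficients that fell
                  into the zero bin (how this is estimated or signaled
                  is the substantive design question, not shown here).
    """
    rng = np.random.default_rng() if rng is None else rng
    out = np.asarray(dequantized, float).copy()
    zeros = out == 0
    # Replace zeroed coefficients with zero-mean noise whose RMS matches
    # the energy estimate; nonzero coefficients pass through unchanged.
    out[zeros] = rng.normal(0.0, original_rms, zeros.sum())
    return out
```

Because the injected noise restores subband energy rather than coefficient phase, the reconstruction recovers the visual character of texture rather than its exact sample values, which is the trade the abstract describes as effective for detected SAR.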
Next generation reconnaissance and Automatic Target Detection/Recognition (ATD/R) performance goals will impose new image quality requirements on integrated SAR hardware and software systems. Signal processing techniques using demonstrated non-parametric autofocus methods such as the Phase Gradient Autofocus algorithm and developments in robust super-resolution signal processing offer the opportunity for reducing overall system cost through utilization of less costly hardware options in integrated system design. Traditional requirements on image quality from integrated hardware-software SAR systems have used image quality metrics based on the characteristics of the overall system impulse response function. An additional class of image quality metrics is available based on the performance of the ATD/R algorithms that are to utilize the imagery. The performance of a given SAR system by these measures is expected to be context-sensitive and dependent on both target and clutter characteristics in a manner not necessarily characterizable solely in terms of system impulse response function measures of image quality. A simulation illustration of these issues is presented for a test case in which a range of SAR sensor hardware options is processed through a representative texture metric mechanization. Potential performance dependencies on target and clutter characteristics are reviewed, and the efficacy of supplementing impulse response function image quality metrics with additional appropriate predictors of ATD/R performance is assessed.
The wavelet-based JPEG 2000 image compression standard is flexible enough to handle a large number of imagery types in a broad range of applications. One important application is the use of JPEG 2000 to compress imagery collected by remote sensing systems. This general class of imagery is often larger -- in terms of number of pixels -- than most other classes of imagery. Support for tiling and the embedded, progressively ordered bit stream of JPEG 2000 are very useful in handling very large images. However, the performance of JPEG 2000 on detected SAR (Synthetic Aperture Radar) and other kinds of specular imagery is not as good, from the perspective of visual image quality, as its performance on more 'literal' imagery types. In this paper, we try to characterize the problem by analyzing some statistical and qualitative differences between detected SAR and other more literal remote sensing imagery types. Several image examples are presented to illustrate the differences. JPEG 2000 is very flexible and offers a wide range of options that can be used to optimize the algorithm for a particular imagery type or application. A number of different JPEG 2000 options -- including subband weighting, trellis-coded quantization (TCQ), and packet decomposition -- are explored for their impact on SAR image quality. Finally, the anatomy of a texture-preserving wavelet compression scheme is presented with very impressive visual results. The demonstration system used for this paper is currently not supported by the JPEG 2000 standard, but it is hoped that with additional research, a variant of the scheme can be fit into the framework of JPEG 2000.
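One statistical difference of the kind alluded to above can be illustrated numerically (this snippet is illustrative and not drawn from the paper): single-look detected SAR amplitude is speckle-dominated and approximately Rayleigh-distributed, so its coefficient of variation is a scene-independent constant, sqrt(4/pi - 1) ~= 0.52. Texture fluctuation therefore scales with the signal itself, unlike the roughly additive sensor noise typical of literal electro-optical imagery, which is one reason energy-minimizing wavelet coders treat SAR texture as discardable noise:

```python
import numpy as np

# Single-look detected SAR amplitude speckle follows a Rayleigh
# distribution; its coefficient of variation (std/mean) is the constant
# sqrt(4/pi - 1), independent of the scale (scene brightness).
rng = np.random.default_rng(0)
speckle = rng.rayleigh(scale=1.0, size=1_000_000)
cv = speckle.std() / speckle.mean()
analytic_cv = (4 / np.pi - 1) ** 0.5   # ~0.5227
```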
We present a system for extraction of information bands from imagery that combines properties of subband/wavelet decomposition and factor analysis to achieve uniform presentations of ground truth from a variety of sensor inputs. Some unique information-regularizing features of the system are invariance to scaling, sorting, and skewing of the input data, as well as robustness to blurring or sharpening, nonlinear intensity remapping, and to the differences between literal and non-literal input imagery. These features enhance both visual interpretability of an RGB color image and machine exploitability of regularized information bands.