We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF is different and thus passes through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF that produces anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects.
Temporal correlation is introduced by generating even larger extended phase screens and translating this block of screens in front of the propagation area. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we think this tool can be used effectively to study optical anisoplanatic turbulence and to aid in the development of image restoration methods.
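The spatially varying weighted-sum step described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the small 2x2 PSF grid, the FFT-based circular convolution, and the bilinear blending of the four surrounding grid PSFs are all assumptions chosen for brevity.

```python
import numpy as np

def convolve_fft(image, psf):
    """Circular 2-D convolution via the FFT (adequate for this sketch)."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))

def spatially_varying_blur(ideal, psf_grid):
    """Blend per-PSF convolutions of an ideal image with bilinear weights.

    psf_grid[gy][gx] holds the PSF computed for grid point (gy, gx) on the
    object plane (at least a 2x2 grid is assumed); each output pixel is a
    weighted sum of the four convolutions from its surrounding grid points.
    """
    H, W = ideal.shape
    Gy, Gx = len(psf_grid), len(psf_grid[0])
    # Convolve the ideal image with every grid PSF once, then blend per pixel.
    blurred = [[convolve_fft(ideal, p) for p in row] for row in psf_grid]
    out = np.zeros_like(ideal, dtype=float)
    ys = np.linspace(0.0, Gy - 1, H)
    xs = np.linspace(0.0, Gx - 1, W)
    for i, y in enumerate(ys):
        y0 = min(int(y), Gy - 2); wy = y - y0
        for j, x in enumerate(xs):
            x0 = min(int(x), Gx - 2); wx = x - x0
            out[i, j] = ((1 - wy) * (1 - wx) * blurred[y0][x0][i, j]
                         + (1 - wy) * wx * blurred[y0][x0 + 1][i, j]
                         + wy * (1 - wx) * blurred[y0 + 1][x0][i, j]
                         + wy * wx * blurred[y0 + 1][x0 + 1][i, j])
    return out
```

With identical delta-function PSFs at every grid point the operation reduces to the identity, which provides a convenient sanity check; realistic use would populate the grid with PSFs produced by the split-step propagation through the extended phase screens.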
Imagery acquired with modern imaging systems is susceptible to a variety of degradations, including blur from the point
spread function (PSF) of the imaging system, aliasing from undersampling, blur and warping from atmospheric
turbulence, and noise. A variety of image restoration methods have been proposed that estimate an improved image by
processing a sequence of these degraded images. In particular, multi-frame image restoration has proven to be a
powerful tool for atmospheric turbulence mitigation (TM) and super-resolution (SR). However, these
degradations are rarely addressed simultaneously using a common algorithm architecture, and few TM or SR solutions
are capable of performing robustly in the presence of true scene motion, such as moving dismounts. Still fewer TM or
SR algorithms have found their way into practical real-time implementations. In this paper, we describe a new L-3 joint
TM and SR (TMSR) real-time processing solution and demonstrate its capabilities. The system employs a recently
developed versatile multi-frame joint TMSR algorithm that has been implemented using a real-time, low-power FPGA
processor system. The L-3 TMSR solution can accommodate a wide spectrum of atmospheric conditions and can
robustly handle moving vehicles and dismounts. This novel approach unites previous work in TM and SR and also
incorporates robust moving object detection. To demonstrate the capabilities of the TMSR solution, results using field
test data captured under a variety of turbulence levels, optical configurations, and applications are presented. The
performance of the hardware implementation is presented, and we identify specific insertion paths into tactical sensor systems.
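To make the multi-frame super-resolution idea concrete, here is a minimal shift-and-add sketch. It is explicitly not the L-3 TMSR algorithm (whose details are not given here); the function name, the rounding of sub-pixel shifts to the high-resolution grid, and the averaging of coincident samples are illustrative assumptions.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor=2):
    """Shift-and-add multi-frame super-resolution (an illustrative sketch).

    Each low-resolution frame is placed onto an upsampled grid at its
    estimated sub-pixel shift (rounded to the high-resolution grid), and
    coincident samples are averaged.
    """
    H, W = frames[0].shape
    acc = np.zeros((H * factor, W * factor))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        # Broadcast row/column index arrays place the whole frame at once.
        yy = (np.arange(H)[:, None] * factor + int(round(dy * factor))) % (H * factor)
        xx = (np.arange(W)[None, :] * factor + int(round(dx * factor))) % (W * factor)
        acc[yy, xx] += f
        cnt[yy, xx] += 1
    filled = cnt > 0
    acc[filled] /= cnt[filled]   # average where multiple frames coincide
    return acc
```

When the input frames are exact half-pixel-shifted decimations of a high-resolution scene, this reconstruction recovers the scene exactly; real systems must additionally estimate the shifts and handle moving objects, as the text discusses.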
The detectors within an infrared focal plane array (FPA) characteristically have responses that vary from detector to
detector. It is desirable to remove this "nonuniformity" for improved image quality. Factory calibration is not sufficient
since nonuniformity tends to drift over time. Field calibration can be performed using uniform temperature sources but
requires briefly obscuring the field-of-view and leads to additional system size and cost. Alternative "scene-based"
approaches are able to utilize the normal scene data when performing non-uniformity correction (NUC) and therefore do
not require the field-of-view to be obscured. These function well under proper conditions but can introduce
image artifacts such as "ghosting" when scene conditions are not optimal for NUC. The scene-based
approach presented in this paper estimates a correction term for each detector using spatial information. In parallel,
motion estimation and texture features are used to identify frames and regions within frames that are suitable for NUC.
This information is then employed to adaptively converge to the proper correction terms for each detector in the FPA.
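A simplified version of this scheme can be sketched as follows. This is not the paper's exact algorithm: the 3x3 spatial low-pass as the stand-in for the true scene, the LMS-style update, the learning rate, and the simple mean-absolute-difference motion gate are all illustrative assumptions.

```python
import numpy as np

def scene_based_nuc(frames, lr=0.05, motion_thresh=2.0):
    """Scene-based per-detector offset estimation (an illustrative sketch).

    A 3x3 spatial low-pass of each corrected frame stands in for the true
    scene; the per-detector offset is nudged toward the residual with an
    LMS-style update, but only when frame-to-frame motion exceeds a
    threshold, since static scenes are where ghosting tends to appear.
    """
    offset = np.zeros_like(frames[0], dtype=float)
    prev = None
    for f in frames:
        corrected = f - offset
        if prev is not None and np.mean(np.abs(corrected - prev)) > motion_thresh:
            H, W = corrected.shape
            # 3x3 box filter: a crude spatial estimate of the scene level.
            pad = np.pad(corrected, 1, mode='edge')
            smooth = sum(pad[i:i + H, j:j + W]
                         for i in range(3) for j in range(3)) / 9.0
            offset += lr * (corrected - smooth)  # LMS-style offset update
        prev = corrected
    return offset
```

Over a sequence with sufficient scene motion, the offsets converge toward the high-spatial-frequency fixed-pattern component, while the motion gate keeps static scene structure from being absorbed into the correction terms (the ghosting failure mode described above).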
The presence of parasitic jitter in video sequences can degrade imaging system performance. Image stabilization systems correct for this jitter by estimating motion and then compensating for undesirable movements. These systems often require tradeoffs between stabilization performance and factors such as system size and computational complexity. This paper describes the theory and operation of an electronic image stabilization technique that provides sub-pixel accuracy while operating at real-time video frame rates. This technique performs an iterative search on the spatial intensity gradients of video frames to estimate and refine motion parameters. Then an intelligent segmentation approach separates desired motion from undesired motion and applies the appropriate compensation. This computationally efficient approach has been implemented in the existing hardware of compact infrared imagers. It is designed for use as both a standalone stabilization module and as a part of more complex electro-mechanical stabilization systems. For completeness, a detailed comparison of theoretical response characteristics with actual performance is also presented.
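The iterative gradient-based motion estimation described above can be sketched for the global-translation case. This is a generic least-squares formulation on spatial intensity gradients with warp-and-refine iterations, not the specific L-3 implementation; the Fourier-shift warp and the function names are assumptions for illustration.

```python
import numpy as np

def shift_image(img, dx, dy):
    """Sub-pixel circular shift via the Fourier shift theorem."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

def estimate_shift(ref, cur, iters=10):
    """Iterative gradient-based global translation estimate (a sketch).

    Each iteration solves a 2x2 least-squares system built from the spatial
    intensity gradients of the reference frame, warps the current frame by
    the running estimate, and refines from the residual.
    """
    gy, gx = np.gradient(ref.astype(float))
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    dx = dy = 0.0
    for _ in range(iters):
        err = shift_image(cur, -dx, -dy) - ref   # residual after current warp
        b = -np.array([np.sum(gx * err), np.sum(gy * err)])
        ddx, ddy = np.linalg.solve(A, b)
        dx += ddx
        dy += ddy
    return dx, dy
```

The normal-equations matrix depends only on the reference frame, so it can be factored once per frame pair; a full stabilization system would then pass the recovered motion parameters to the segmentation stage that separates intentional pans from parasitic jitter.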
KEYWORDS: Infrared imaging, Electronics, Detection and tracking algorithms, Imaging systems, Image processing, Video, Image enhancement, Video processing, Algorithm development, Color and brightness control algorithms
The MWIR imaging systems developed by L-3 Communications Cincinnati Electronics (L-3 CE) include several video processing algorithms designed to provide enhanced imagery that meets a variety of military and other application requirements. When IR imaging systems are confronted with varying IR conditions, video processing algorithms are designed and selected to optimize human interpretation of specific scene details. The Visual Difference Predictor model has been used, and an Image Enhancement Score has been derived from it to provide an objective metric for evaluating the effects of processing algorithms on imagery. Comparing the Image Enhancement Scores of the processed and unprocessed images gives an objective measure of the success of the video processing algorithm being evaluated. This paper will describe selected algorithms in the L-3 CE Video Processing Suite, evaluate them against several test scenes, and present the associated Image Enhancement Scores. The algorithms include a novel local contrast enhancement, general sharpening, and display mapping algorithms. Finally, the direction of ongoing and future efforts in Video Processing Suite development will be discussed.
Common infrared video imagery can experience large variations in signal level across different portions of a scene. Global image processing techniques are not capable of showing, on standard displays, both these large variations and the detail within individual regions of interest. For this reason, local image processing approaches have been developed to increase contrast in localized areas. These are typically high-latency, post-video techniques targeted for specific applications. We have developed a unique video processing approach that has near-zero latency and is not computationally intensive, so imagery can be processed and displayed for real-time human observation using minimal hardware. Local scaling factors are computed using a flexible distribution technique, allowing adjustable levels of sensitivity and local detail enhancement.
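The idea of computing local scaling factors from the local intensity distribution can be illustrated with a minimal tile-based sketch. This is not the L-3 algorithm: the tiling, the use of the tile standard deviation as the distribution statistic, and the capped gain toward a target spread are assumptions chosen to keep the example short.

```python
import numpy as np

def local_contrast_enhance(img, tile=8, target=40.0, max_gain=4.0):
    """Tile-based local contrast scaling (an illustrative sketch).

    For each tile, a scaling factor is derived from the local intensity
    spread: detail about the tile mean is amplified so the spread
    approaches `target`, with the gain capped at `max_gain`.
    """
    out = img.astype(float).copy()
    H, W = img.shape
    for i in range(0, H, tile):
        for j in range(0, W, tile):
            blk = out[i:i + tile, j:j + tile]
            m, s = blk.mean(), blk.std()
            gain = min(max_gain, target / s) if s > 0 else 1.0
            # Amplify detail about the local mean; the mean is preserved.
            out[i:i + tile, j:j + tile] = m + gain * (blk - m)
    return out
```

A production version would smooth the per-tile factors across tile boundaries to avoid blocking; the per-tile work here is a handful of additions and multiplies per pixel, consistent with the low-latency, low-complexity goal described in the text.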