A "life size" thermal target array has been developed to facilitate in-flight testing of airborne weapon systems containing night vision subsystems. This in-flight testing to measure the performance of the night vision subsystem and its effect on overall weapon system performance is essential to the test and evaluation process of the particular weapon under test. This measurement of subsystem performance is called the Modulation Transfer Function, or MTF. In addition, a laser designator subsystem is frequently incorporated in a precision guided munition weapon system. In the test and evaluation of the designator, such quantities as beam quality (energy distribution), beam divergence, and beam wander are of interest. The thermal targets may be used to evaluate armored weapon systems. The capability of providing carefully controlled and variable thermal signatures in a field test environment is considered unique. The thermal target array consists of three targets: a six-bar recognition target, a two-bar detection target, and a laser designator scoring board (cross-hair). The image dimensions of 2.3 meters by 2.3 meters were derived from an optimized threat envelope. The thermal signatures of the targets are controllable to within 0.3°C about a differential setpoint. This differential setpoint is measured between the active element and the target background (or "ambient"). Several differential temperature settings are available to the test officer: 1.25°C, 3°C, 5°C, 7.5°C, and 10°C. This paper reviews the thermal array test objectives, target array fabrication, methodology of target utilization, and representative results.
This paper describes a computer model for generating 3-D images of small vehicles. The paper shows examples and gives throughput, memory, and accuracy for implementation on a VAX computer. Each vehicle is described in terms of components such as wheels, chassis, and turret. The model decomposes these components into three-point facets which are the basis for generating an image. Each point of a facet can be assigned a specific temperature, emissivity, and reflectivity. Range contour imagery from the model is useful in developing identification and classification algorithms for laser radars.
Two-way optical communication systems are described. These systems were designed to use the same optical subsystem for transmitting as for receiving. The large information bandwidth and narrow beam width are attributed to the use of lasers as the optical sources. Single-mode elliptical optical fiber is used as transmission medium. This fiber is capable of preserving the two orthogonally polarized modes of propagation. For applications such as linking remote television stations over a long distance, a modulated and highly collimated beam is transmitted through the atmosphere. The effects of the operating environments on the performances of laser beam acquisition is described.
The Electro-Optical Systems Atmospheric Effects Library (EOSAEL) is a state-of-the-art collection of user oriented models and computer codes for quantifying the obscuration effects of natural and man-made contaminants on the propagation of radiation in the atmosphere. The EOSAEL models address the visible and near-infrared (0.2-2.0 μm), mid-infrared (3.0-5.0 μm), far-infrared (8.0-12.0 μm), and millimeter wave (10-350 GHz) regions of the spectrum plus 53 laser lines. The current EOSAEL includes sixteen models that will aid the researcher in determining: contrast and contrast transmission; laser beam jitter and wander; laser scattering by atmospheric aerosols; atmospheric transmission through fogs/hazes, rain, snow, various types of clouds and man-made smokes, dirt/dust, and self-screening grenades; probability of detection for static targets; and climatological conditions for select European locations. An overview will be presented showing the data bases used (where applicable), the salient points of the EOSAEL models, and a typical scenario that may be constructed using this library.
This paper compares the performance of four candidate target detection algorithms. The best known of these, "Superslice," was developed at the University of Maryland in 1977-78 [1]. The other three algorithms are the "Spoke Filter" developed by the Army Missile Command in Huntsville [2,3]; the "contrast box" (CB), a new concept under development at Texas Instruments; and the Ford Aerospace double-gated contrast filter [4]. As part of Texas Instruments' development activity, CB algorithm performance was compared with the others. To do this meaningfully, all four concepts were tested using a common data base. The measure of detection performance was the total number of candidate targets handed off to the feature extractor and classifier to achieve a specified probability of including the actual target in the total. An ideal detection algorithm in terms of this measure would only need to hand off one target (assuming only one target in the FOV) to achieve a probability of 1. Other performance measures were input and output signal-to-noise ratios (SNR) and algorithm gain. The data base used in this investigation consisted of 256 independent IR images containing targets nominally varying between 5 and 500 pixels in size. The results presented not only illustrate how the algorithm performance depends on range to the target, but show that no one algorithm is best for all values of pixels on target. Of the four, however, the contrast box appears to provide the best overall performance.
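The abstract does not give the internals of the contrast-box concept; a minimal sketch, assuming the common "inner-box mean minus surrounding-border mean" definition (the box sizes and the statistic itself are illustrative assumptions, not the Texas Instruments design), might look like:

```python
import numpy as np

def contrast_box(img, inner, border):
    """Slide a window over the image; at each position, score the contrast
    of an inner box against the mean of the surrounding border frame.
    High scores flag candidate hot targets (an illustrative statistic only)."""
    H, W = img.shape
    k = inner + 2 * border                      # full window size
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = img[i:i + k, j:j + k]
            inner_box = win[border:border + inner, border:border + inner]
            # border mean = (window sum - inner sum) / number of border pixels
            border_mean = (win.sum() - inner_box.sum()) / (k * k - inner * inner)
            out[i, j] = inner_box.mean() - border_mean
    return out
```

A hot 3x3 patch embedded in a cold background then produces its maximum score at the window position whose inner box covers the patch exactly.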
The aim of the edge thinning process is to remove the inherent edge broadness in gradient images while retaining edge continuity of the image. Such a process is required so that a useful silhouette of the target can be generated for target classification purposes. This paper compares the results from three candidate edge thinning algorithms: the nonmaximum suppression technique by D. Milgram and A. Rosenfeld, the modified USC thinning algorithm, and Reeves' algorithm. The results are compared subjectively and in terms of the number of computations required for their implementation.
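The core idea of nonmaximum suppression can be sketched briefly: a gradient-magnitude pixel survives only if it is a local maximum along its own gradient direction. This is a generic textbook sketch (with four quantized directions), not a reconstruction of any of the three compared algorithms:

```python
import numpy as np

def nonmax_suppress(mag, gx, gy):
    """Thin a gradient-magnitude image: keep a pixel only if it is not
    smaller than its two neighbors along the (quantized) gradient direction."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:      # horizontal gradient: compare left/right
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                  # diagonal (45 degrees)
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                 # vertical gradient: compare up/down
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                           # diagonal (135 degrees)
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out
```

Applied to a broad (multi-pixel) edge response, only the single-pixel ridge of peak magnitude survives, which is exactly the silhouette-friendly thinning the paper evaluates.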
The problem of determining the position of an object in a given field of view and extracting it for subsequent shape analysis is basic to many image processing applications. One such important defense application is to locate airborne targets within terrain or sky backgrounds.
A new image analysis technique is presented in which a modified version of the Hough transform is employed. This transform algorithm operates on edge images in which the polarity of the edge, relative to a reference direction in the image plane, is preserved. The edge polarity is represented by a positive or negative sign resulting from the differentiation operation used in the edge enhancement algorithms. The new Hough transform creates a positive or a negative peak for each edge depending on the polarity of the linear edge features. The polarity information plays a major role in forming or testing a scene hypothesis directly from the Hough transform output. In many applications simple logic performs the equivalent task of several complicated testing procedures. These procedures are normally required after ordinary Hough transformation for the acceptance or rejection of a scene hypothesis. An example is presented in which the objective is to detect a generic river and bridge scene from a range-azimuth image created by an active millimeter wave sensor. In this example, the new Hough transform succeeds in detecting and interpreting the scene, both when it consists of the river and bridge, or the river alone. Also presented as a part of the overall algorithm, are the results of a new edge detection algorithm that employs median filtering prior to edge enhancement, and locally adaptive thresholding in edge detection. Both of these techniques have significantly increased the robustness of the edge detection algorithm, which in turn, has improved the overall performance of the scene analysis algorithm.
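The polarity-preserving modification can be illustrated compactly: in the accumulator, each edge pixel votes +1 or -1 according to the sign of its edge response, so linear features of opposite polarity form positive and negative peaks. This sketch uses the standard (rho, theta) parameterization as an assumption; the paper's exact parameterization is not given in the abstract:

```python
import numpy as np

def signed_hough(edge):
    """Hough accumulator over (rho, theta) in which each edge pixel votes
    with the sign of its edge response, preserving edge polarity."""
    h, w = edge.shape
    thetas = np.deg2rad(np.arange(0, 180))
    rho_max = int(np.hypot(h, w))
    acc = np.zeros((2 * rho_max + 1, len(thetas)))
    ys, xs = np.nonzero(edge)
    for y, x in zip(ys, xs):
        vote = np.sign(edge[y, x])          # +1 or -1 depending on polarity
        for t, th in enumerate(thetas):
            rho = int(round(x * np.cos(th) + y * np.sin(th))) + rho_max
            acc[rho, t] += vote
    return acc, rho_max
```

Two parallel lines of opposite polarity (e.g. the two banks of a river in a range-azimuth image) then appear as one positive and one negative peak at the same theta, which is the kind of simple signed-peak logic the paper exploits for hypothesis testing.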
Significant progress has been made in programs to analyze and display multiple vector-valued clusters of input, due in large part to the usefulness of multispectral data in remote sensing. This paper describes recent results in laser range identification of building surfaces to illustrate the general utility of such analysis/display programs. A 4-tuple of features was computed at each pixel of a range image by fitting a plane to every 5 X 5 pixel region in the image. The four plane parameters (i.e., azimuth, elevation, length of the normal vector, and residual fit error) constitute a vector image to be histogrammed and clustered. The multidimensional clusters have actual 3-D descriptions (angles in degrees, distance and residual in meters) which provide insight into the relationships within the 3-D scene, and clarify their value in planar surface identification. Interesting cluster categories can be displayed in the original image. This capability also permits visual verification of the sensitivity of the cluster choice to the results obtained.
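The per-patch plane fit can be sketched as an ordinary least-squares problem z = a*x + b*y + c over the 5 x 5 window. The exact definitions of the four features (azimuth, elevation, distance, residual) are not spelled out in the abstract, so the conventions below are illustrative assumptions:

```python
import numpy as np

def patch_plane_features(z):
    """Fit z = a*x + b*y + c to a square range patch by least squares and
    return a 4-tuple: azimuth and elevation of the unit surface normal
    (degrees), plane range at the patch center, and RMS fit residual.
    Angle/distance conventions here are an illustrative guess."""
    n = z.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(n * n)])
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    a, b, c = coef
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    azimuth = np.degrees(np.arctan2(normal[1], normal[0]))
    elevation = np.degrees(np.arcsin(normal[2]))
    resid = np.sqrt(np.mean((A @ coef - z.ravel()) ** 2))
    dist = a * (n // 2) + b * (n // 2) + c       # plane range at patch center
    return azimuth, elevation, dist, resid
```

Running this over every 5 x 5 window yields the 4-component vector image that is then histogrammed and clustered; a near-zero residual flags genuinely planar surfaces such as building walls.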
The use of intensity correlation techniques for the detection of passive optical devices is considered. It is assumed that the devices have small angular sizes due to their large distances from the detection system, and it is assumed that the final optical apertures of the devices appear to be stationary, whereas their surroundings appear to fluctuate. This difference in apparent motion forms the basis for intensity correlation detection, which has potential advantages because of its insensitivity to phase variations due, for example, to motion of the detection system or to atmospheric turbulence. Some relationships between intensity correlation detection and certain areas in optical physics, including laser radar (lidar), intensity correlation interferometry, and speckle phenomena due to moving diffuse objects are discussed. A simple model of intensity correlation detection is developed and related to the influence of stationary coherent background illumination on the autocorrelation function in time of a coherently illuminated moving rough surface. An expression for this autocorrelation function is derived and discussed for certain elementary conditions.
Frequently in image processing, an unknown image is identified by matching features from the image to features of known images. One method of doing this is to compute a distance measure between the features of the two images; this measure indicates the dissimilarity between the images. The distance measure is computed for each of a large set of different known images. The process of exhaustively locating the known image with the minimum distance, and therefore the best match, is called nearest neighbor matching.
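The exhaustive search described above reduces to a few lines; Euclidean distance is used here as a representative choice of dissimilarity measure:

```python
import numpy as np

def nearest_neighbor(query, library):
    """Exhaustive nearest-neighbor match: compute a distance between the
    query feature vector and every known image's feature vector, and
    return the index of the minimum (the best match) plus its distance."""
    dists = [np.linalg.norm(query - ref) for ref in library]
    best = int(np.argmin(dists))
    return best, dists[best]
```

The cost is one distance evaluation per known image, which is what motivates the faster (non-exhaustive) search structures discussed in this line of work.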
Detecting weak targets in the presence of a strong background or an out-of-field source requires an optical system that can reject unwanted signals. Properly designed baffles can prevent unwanted energy from reaching a focal plane or detector. To illustrate basic baffle design techniques and resulting benefits, the stray radiation analysis of a typical heat seeking missile is presented.
A system which accepts a two-dimensional scene as input and produces an image of the scene as output introduces degradations to the image which cause a loss of information about the original scene. A mathematical model is described which, under certain assumptions about the scene, the transformations and the noise sources, models the effects of these degradations on the original scene and on the ability of any algorithm to classify objects in the scene.
The two-dimensional joint histogram or co-occurrence matrix of two images provides a convenient vehicle for the design of scene matching algorithms which effectively incorporate information about expected variations between reference and sensor-derived data. Thus, specified histogram pairings, orderings, or degrees of clustering can be rewarded as appropriate by the matching algorithm. In particular, intensity-independent algorithms can be designed which are invariant under an arbitrary permutation of labels in one or both images to be matched and, as a result, they are insensitive to contrast reversals. Such algorithms can utilize synthetic reference maps constructed by assigning arbitrary labels to designated regions, thus eliminating the need for material identification and sensor response prediction. Two different types of intensity-independent algorithms are described and their performance is compared to that of conventional matching algorithms using a set of infrared images acquired over a 24-hour period.
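The joint-histogram idea can be made concrete with a small sketch. The particular intensity-independent score below (concentration of histogram mass, i.e. the fraction of pixels on the modal pairing for each reference label) is one illustrative choice, not the paper's algorithm; note that it is unchanged by any relabeling of either image, hence insensitive to contrast reversals:

```python
import numpy as np

def joint_histogram(ref, sen, nlevels):
    """2-D joint histogram (co-occurrence matrix) of two registered images:
    H[i, j] counts pixels whose reference label is i and sensor level is j."""
    H = np.zeros((nlevels, nlevels), dtype=int)
    for r, s in zip(ref.ravel(), sen.ravel()):
        H[r, s] += 1
    return H

def cluster_score(H):
    """Illustrative intensity-independent match score: reward histograms
    whose mass is concentrated, i.e. each reference label maps to few
    sensor levels. Invariant under label permutations in either image."""
    p = H / H.sum()
    return float(np.sum(p.max(axis=1)))
```

A perfectly registered pair scores 1.0 whether or not the sensor image is contrast-reversed, whereas a misregistered pair smears mass across the histogram and scores lower.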
A hyperspace formulation of matched spatial filter pattern recognition together with band-pass filter preprocessing leads to synthesis of a synthetic discriminant function that can recognize a reference object independent of intensity and geometrical differences between inputs. Use of a maximum common information preprocessing concept, Karhunen-Loeve techniques, non-unitary transformations with multi-channel synthetic discriminant functions and a new decorrelation transformation to provide inter-class discrimination for our synthetic discriminant function system are considered. Experimental verification on M-60 tank targets and an armored personnel carrier false target are included.
Feature analysis and classifier design toward real-time FLIR target identification are presented. After target candidates of military vehicles are segmented in a low resolution FLIR scenario [1], a set of 17 features is extracted from these target areas in a single pass. These features consist of two optical intensity features, nine geometry features, and six texture features. Five types of target candidates are examined in this study. They are tanks, APC's, jeeps, burning hulks, and other nontargets such as noise regions. A simple tree classifier is designed that is based on the manual interpretation of feature distribution among target categories at each nonterminal node of the tree classifier. The approach of feature extraction and classifier design has been applied to FLIR imagery provided by the U.S. Army Night Vision Laboratory and has shown promising results.
Automatic ship recognition is of interest in such problems as over-the-horizon surface surveillance and targeting, long range air targeting, and satellite ocean surveillance. Our approach is model-driven. It uses the fact that the wake caused by a cruising ship has a higher temperature profile than the surrounding water background. In addition, we distinguish between the active wake and the turbulent water surrounding the ship. Furthermore, the temperature of the ship itself is usually lower than that of the ocean. Finally, we make use of the knowledge of ship sizes, convoy patterns and other information concerning ships traveling in formation. An image from a radiometric sensor forms the basis of the analysis. Edge detection and region association techniques are used to locate a "zone of activity", a region of the image that contains the ship. Grey level histogram analysis of the zone is then used to categorize pixels into "ship", "wake", and "water". Results of experiments using this technique are presented.
An analysis of the statistics of the moments and the conventional invariant moments shows that the variance of the latter becomes quite large as the order of the moments and the degree of invariance increase. Moreover, the need to whiten the error volume increases with the order and degree, but so does the computational load associated with computing the whitening operator. We thus advance a new estimation approach to the use of moments in pattern recognition that overcomes these problems. This work is supported by experimental verification and demonstration on an infrared ship pattern recognition problem. The computational load associated with our new algorithm is also shown to be very low.
The purpose of the target classification algorithms is to properly categorize the object isolated by the target detection and extraction algorithms. Feature determination and object classification with the given features are the two distinct phases associated with target classification. This paper compares the impact of using "radial and angular moments" versus Hu's seven Cartesian moment invariants, and also compares silhouette moments and intensity moments for feature extraction. The k-nearest neighbor approach is then used for the object classification phase. The efficacy of the technique has been evaluated off-line via the method of confusion matrices. The theoretical results are presented, supported by validation on the synthetic data base generated in our Digital Image Processing Lab.
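For reference, Hu's Cartesian moment invariants are built from normalized central moments; a minimal sketch computing the first two invariants from a silhouette image (the full set of seven, and the paper's radial/angular moments, follow the same pattern) is:

```python
import numpy as np

def hu_first_two(img):
    """Normalized central moments of a silhouette image and Hu's first two
    Cartesian moment invariants (translation, scale, rotation invariant)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (xs * img).sum() / m00, (ys * img).sum() / m00

    def mu(p, q):                    # central moment of order (p, q)
        return (((xs - xc) ** p) * ((ys - yc) ** q) * img).sum()

    def eta(p, q):                   # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

Because the invariants are unchanged by translation and rotation of the silhouette, the same target extracted at a different position or orientation maps to the same feature vector, which is what makes them usable with a k-nearest-neighbor classifier.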
The tracking of partially obscured targets is an important problem. This paper shows that "non-classical" correlation measures using Lp norms offer great potential in this area. It is shown that as the "p" of the Lp norm is decreased from its normal ("classical") value of 2 the generalized cross-correlation function becomes, up to a point, increasingly immune to partial target obscuration. Optimum performance appears in the range p ≈ 0.2. These "low p" correlation methods have a certain relationship to the maximum entropy methods, as shown in the paper. Numerical and graphical examples are presented.
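The obscuration-immunity effect can be demonstrated with a small sketch. Here the generalized correlation is taken (as one natural reading of the abstract) to be the negated sum of |difference|^p over the template window; for p = 2 this is ordinary squared-error matching, while p near 0.2 de-emphasizes the large residuals produced by obscured pixels:

```python
import numpy as np

def lp_correlation(scene, template, p):
    """Generalized correlation surface based on an Lp dissimilarity:
    score(shift) = -sum(|scene_patch - template|**p). Lowering p below 2
    reduces the influence of a few large (e.g. obscuration) residuals."""
    H, W = scene.shape
    h, w = template.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            diff = np.abs(scene[i:i + h, j:j + w] - template)
            out[i, j] = -np.sum(diff ** p)
    return out
```

With one template pixel obscured by a large value, the p = 0.2 surface still peaks at the true location, while the p = 2 surface is dominated by the single large residual and peaks elsewhere, which is the behavior the paper reports.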
Passive ranging can be performed using a Kalman-Bucy (KB) filter to process noisy sequential bearing measurements made by a moving observer. This can be performed also using the Moore-Penrose (MP) pseudomatrix inverse to obtain a least-squares fit to the observed bearing measurements. The advantage of the MP method is that no linearizations or state/covariance initializations are required. This paper formulates a two-dimensional bearings-only passive-ranging problem for solution by the KB and MP methods and compares the performance of the two methods.
The work reported herein resulted from a subcontract with the McDonnell-Douglas Astronautics Company, whose prime contract was sponsored by Defense Advanced Research Projects Agency under ARPA Order No. 3974 and monitored by U.S. Army Missile Command under Contract No. DAAH01-80-C-0799. Infrared focal plane array technology has advanced rapidly in the past two years. This technology promises to offer advantages for FLIR systems as well as advanced missile seeker concepts. RCA is in the forefront of this technology and has incorporated its focal plane array into a missile seeker which is described in this presentation. The presentation is divided into the four parts as shown in Figure 1. In the first part, the RCA Schottky Barrier infrared focal plane array is briefly described. Since it has been thoroughly described in the literature, only summary information is provided. However, a list of references is provided so that those interested can obtain more information on the technology. The second part describes the details of the seeker along with the reasons certain design approaches were selected. The third part provides a summary of the key seeker performance parameters as they relate to the requirements selected for the program. The fourth part summarizes results of the entire effort.
Of various concepts (both staring and scanning) which have been proposed for the detection of dim moving targets from space, many have sensitivity limitations arising from smear-induced drift which causes displacement between frames of data. A multi-fan pushbroom scan concept recently proposed by W.K. Davis has an advantage over some other scanning concepts in that the frames of data can be processed by the same high performance algorithms applicable to staring sensors. The displacements caused by smear in this technique are discussed as well as three simple techniques for eliminating the displacements. The principal technique of the three which compensates for most of the displacement is a slow controlled yaw of the spacecraft. An example is presented in which the uncorrected smear displacement is 800 m while the corrected smear displacement is 0.8 m, for a smear reduction factor of 1000.
A statistically based tracking algorithm is presented that utilizes contrast in intensities and target edges to separate a target image from the background scene. Having the ability to select either contrast or edges to separate the target image, the tracking algorithm demonstrates a remarkable ability to locate and track tank images in very noisy background scenes. When there is low contrast between the target and background, the algorithm utilizes edge information to locate and track the target. On the other hand, if there is good contrast, then intensities are used to separate the target from the background. Both of the separation techniques are implemented with a Bayesian statistical decision rule. Sample output of the tracking algorithm using IR focal plane array imagery is shown.
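The two-class Bayesian decision rule at the heart of such separation can be sketched compactly. Gaussian class-conditional densities are an illustrative assumption here; the abstract does not state the densities actually used:

```python
import numpy as np

def bayes_segment(pixels, mu_t, var_t, p_t, mu_b, var_b, p_b):
    """Two-class Bayesian decision rule (illustrative Gaussian sketch):
    label a pixel 'target' when
        p(x | target) * P(target) > p(x | background) * P(background),
    evaluated here in log form for numerical stability."""
    def loglik(x, mu, var):
        return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)
    x = np.asarray(pixels, dtype=float)
    return loglik(x, mu_t, var_t) + np.log(p_t) > loglik(x, mu_b, var_b) + np.log(p_b)
```

The same rule applies whether x is a pixel intensity (good-contrast case) or an edge-strength measure (low-contrast case), which is what lets the tracker switch between the two separation modes.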