ATRAN was the pioneering terrain-sensing guidance system, developed in the 1950s and deployed in Europe on the Air Force's mobile, ground-launched TM-76A MACE cruise missile in the late 1950s and early 1960s. The background, principles and technology of this system, the forerunner of today's modern autonomous standoff terrain-sensing guided weapons, are described.
The Cruise Missile is guided by an inertial guidance system aided by an updating technique called Terrain Contour Matching (TERCOM). Chance-Vought first proposed the terrain correlation technique in the late 1950s. Since that time TERCOM has evolved into a reliable, accurate, all-weather, day-and-night method of position fixing and updating for cruise missiles. A brief history of TERCOM development is presented, giving results where possible. TERCOM and how it works are then described. A snapshot of the present TERCOM status and future planned developments is also addressed.
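The position-fixing principle behind TERCOM, comparing a measured terrain-elevation profile against a prestored elevation map and taking the best-fitting offset as the fix, can be sketched in one dimension. This is an illustrative toy, not the fielded algorithm; the mean-absolute-difference metric and all names here are assumptions:

```python
import numpy as np

def tercom_fix(stored_map, measured_profile):
    """Illustrative terrain-contour match: slide a measured elevation
    profile along a stored reference map and pick the offset with the
    smallest mean absolute difference (MAD)."""
    n, m = len(stored_map), len(measured_profile)
    best_offset, best_mad = 0, float("inf")
    for k in range(n - m + 1):
        mad = np.mean(np.abs(stored_map[k:k + m] - measured_profile))
        if mad < best_mad:
            best_offset, best_mad = k, mad
    return best_offset, best_mad

# Toy example: a noisy profile cut from the map at offset 12 matches there.
rng = np.random.default_rng(0)
terrain = rng.normal(0.0, 30.0, 200).cumsum()           # synthetic elevation map
profile = terrain[12:12 + 40] + rng.normal(0, 0.5, 40)  # noisy sensed profile
offset, residual = tercom_fix(terrain, profile)
```

In practice the fielded system fuses such fixes with the inertial solution rather than using them raw; the sketch only shows the matching step.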
The optical area correlation system developed by The Boeing Company for missile terminal guidance is called BOSSCO. This development was partially funded under the Air Force Armament Laboratories (AFATL), Eglin AFB, Florida, which sponsored the Pavestorm III program as a highly accurate clear-weather weapon for stationary targets. This paper describes the system design, hardware design and test program conducted during the period of the contract (1972-1974). Also described is the redesign and retest effort conducted under independent research funding to correct design errors and reach the present performance level. Alternate applications including navigation update, map matching and image correlation or alignment were later explored both independently and under contract with the Engineering Topographical Laboratories (ETL), Fort Belvoir, Virginia. Further flight testing and laboratory demonstrations took place over Fort Belvoir and at the Army Missile Command (MICOM), Huntsville, Alabama. This system represents a unique capability which, when mechanized in large scale integrated circuit technology, could significantly increase our national defense capability by providing a near-term fire-and-forget guidance option.
In 1965, the Advanced Development Program (ADP)-679A of the Avionics Laboratory initiated development of guidance systems for stand-off tactical missiles. Employing project engineering support from the Aeronautical Systems Division, WPAFB, the Avionics Laboratory funded multiple terminal guidance concepts and related midcourse navigation technology. Optical correlation techniques which utilize prestored reference information for autonomous target acquisition offered the best near-term opportunity for meeting mission goals. From among the systems studied and flight tested, Aimpoint* optical area guidance provided the best and most consistent performance. Funded development by the Air Force ended in 1974 with a MK-84 guided bomb drop test demonstration at White Sands Missile Range and the subsequent transfer of the tactical missile guidance development charter to the Air Force Armament Laboratory, Eglin AFB. A historical review of optical correlation development within the Avionics Laboratory is presented. Evolution of the Aimpoint system is specifically addressed. Finally, a brief discussion of trends in scene matching technology is presented.
An autonomous missile guidance concept based on area correlation of sensed ground scenes is described. A conceptual description of the DSMAC system is followed by discussions of the system architecture and functional design of various component parts. Image processing techniques and hardware are also covered.
Developed specifically as a position updating guidance system for missile applications, the feasibility of the Range Only Correlation System (ROCS) was established by computer simulation in 1969-1970 and flight testing in 1975-1977. ROCS consists of a conventional radar sensor and a digital processor; it establishes the actual location of a vehicle flying at either high or low altitude extremes and over a wide variety of terrain characteristics. For high altitude applications this system derives position information by sequentially comparing three or more independent radar range returns with prestored range references. For low altitude applications vertical position information is obtained with the system operating in an altimeter mode, while horizontal position information is derived by comparing two independent radar range returns with references. The general features of this position updating system are presented. This includes a description of the radar sensor, the digital processor, the reference data, the performance characteristics, and lastly, the objectives of the current flight test program.
Synthetic aperture radar (SAR) images obtained in real time on a moving vehicle can provide a means for obtaining fix data for the vehicle navigation system. Reference features are located in the SAR images through the use of map matching techniques, with each match providing a measurement of range and range rate to a known reference point. Three matches or fixes made in different directions can provide data for a complete position and velocity determination. A map matching technique has been developed for use with SAR images that utilizes a reference template that encodes only the shape (and not the difficult-to-predict image intensity levels) of the selected reference feature. Through an adaptive and localized normalization of the sensed image pixel amplitudes a matching metric is computed that is a strong function of the degree of shape match of the sensed image and the reference template but is only weakly dependent on the image intensity and contrast. This results in the reference feature being acquired and located with high probability even in the presence of competing features with possibly higher contrast. The map matching algorithm is described and results of theoretical analysis of its performance characteristics are presented with specific attention given to the effects of scene scintillation or speckle in the sensed imagery. The algorithm has been used on a large data base of SAR imagery with good success. Several examples are included to indicate typical performance for both urban and rural environments.
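The intensity-insensitive shape-matching idea can be illustrated with a toy version: locally normalize the sensed image (subtract the neighborhood mean, divide by the neighborhood spread), then sum the normalized amplitudes under a binary shape template. The function names, window size, and normalization details below are assumptions, not the paper's actual algorithm:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def shape_match_scores(sensed, template, win=5):
    """Score every template placement against a locally normalized image.
    Normalization makes the score depend on local shape contrast, not on
    absolute intensity or overall image contrast."""
    pad = win // 2
    padded = np.pad(sensed, pad, mode="edge")
    nbhd = sliding_window_view(padded, (win, win))
    mu = nbhd.mean(axis=(-1, -2))
    sd = nbhd.std(axis=(-1, -2)) + 1e-6
    norm = (sensed - mu) / sd
    views = sliding_window_view(norm, template.shape)
    # Sum of normalized amplitudes under the template's support.
    return np.einsum("ijkl,kl->ij", views, template)

# Toy scene: a cross-shaped bright feature with top-left corner at (10, 7).
template = np.zeros((5, 5)); template[2, :] = 1; template[:, 2] = 1
scene = np.zeros((32, 32))
scene[10:15, 7:12] += 10 * template
scores = shape_match_scores(scene, template)
peak = np.unravel_index(np.argmax(scores), scores.shape)
```

Because the normalization is local and (nearly) scale-invariant, rescaling the whole scene leaves the peak location unchanged, which is the point of the shape-only metric.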
The U.S. Navy has expended considerable effort on the development of microwave radiometric (MICRAD) terrain-sensing guidance. Early developments utilized continuous map-matching and in some cases man-in-the-loop techniques to accomplish guidance. Techniques from radar correlators and other terrain-sensing guidance systems were incorporated. The development proceeded from the early, analog, map-matching systems to the current digital update systems. This paper outlines the development of MICRAD terrain-sensing guidance by the Navy. It describes the origins of the concept, its early laboratory implementations, history of flight tests, and current status.
Boeing Aerospace Company (BAC) of Seattle, Washington and Sperry Microwave Electronics of Clearwater, Florida have developed a multiple-beam radiometric navigation update system. This paper describes the system design, flight test program, and preliminary results. The system was designed and its performance evaluated using analytically derived formulas for performance measures and detailed Monte Carlo simulations. As a result BAC recommended a five or seven fixed beam radiometer. Sperry built a seven-beam, 35 GHz radiometer which BAC flight tested in 1979 to demonstrate its effectiveness over a variety of test scenes under various environmental conditions. Four scenes were selected for the flight test varying from land-water to highly forested regions. Preliminary analysis of the flight test results confirms the expected performance improvement over the single-fixed-beam system tested in 1975. This approach to a terrain sensing millimeter wave radiometer would be applicable to low altitude penetrating aircraft. The system is low cost, with no moving parts; low volume, requiring only a single receiver with small wide-beam antennas; and stealthy, being completely passive. Radiometry can also be complementary to today's terrain correlation approach since flat areas usually contain a maximum of cultural features; where one system works poorly, the other works well. This test program provides a data base for studying a wide variety of pattern matching and correlation algorithms, with and without attitude compensation, and using various subsets of the full seven-beam combination.
This paper describes an extension of the linear matched filter concept as it applies to the detection of resolved targets embedded in spatially correlated background clutter. The practical problems associated with the design of filters matched to sets of targets are discussed. The use of multiple or parallel filters for the detection of dissimilar targets or target ensembles is also described. Finally, the performance of ensemble matched filters and parallel filters is compared through the use of a twenty scene background data set.
The goal of FLIR image enhancement is to obtain a good quality display by compressing the global scene dynamic range while enhancing the local area contrast. This paper presents the investigation and the implementation of six candidates for FLIR image enhancement and shows some experimental results. The six enhancement candidates are: (1) variable threshold zonal filtering, (2) statistical differencing operator, (3) unsharp masking, (4) prototype automatic target screener technique, (5) constant variance, and (6) histogram equalization. All the enhancement techniques make use of the local nonstationary mean, the local variance, or both, to achieve edge enhancement or contrast stretching in local regions. The local nonstationary mean and the local variance, in each case, are computed by a two-dimensional rolling window averaging processor. Finally, an experiment based on subjective criteria to judge the enhanced image quality was conducted. The results showed that the variable threshold zonal filter, prototype automatic target screener, and unsharp masking methods were the superior techniques.
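The common ingredient of these techniques, local statistics from a rolling window, is easy to illustrate with unsharp masking, candidate (3). This is a generic textbook formulation with an assumed window size and gain, not the paper's implementation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def unsharp_mask(img, win=7, gain=1.5):
    """Unsharp masking: boost local contrast by adding back a scaled
    high-pass term (each pixel minus its rolling-window local mean)."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    local_mean = sliding_window_view(padded, (win, win)).mean(axis=(-1, -2))
    return img + gain * (img - local_mean)

# A vertical step edge gains overshoot/undershoot on either side,
# while flat regions far from the edge are left untouched.
step = np.zeros((16, 16)); step[:, 8:] = 10.0
sharp = unsharp_mask(step)
```

The other candidates substitute different functions of the same local mean and local variance (e.g. constant variance divides by the local standard deviation), so the rolling-window machinery is shared.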
A variety of image enhancement techniques for use on active (10.6μm) infrared images are presented. The image phenomena known as speckle and glint, which severely degrade image quality, are also discussed. Of primary importance is the identification of those algorithms which can provide a visually recognizable image to a human observer under real-time conditions. Beyond this goal is an extension to automatic recognition of objects of interest. Examples of each processing technique, generated by applying the algorithms on experimental data recorded digitally in real-time by an infrared radar testbed system, are included.
A major operational need facing the next generation of Scout (ASH) and Attack (AAH) helicopters is to detect targets from nap-of-earth altitudes, on a realistic battlefield in a complex and cluttered scene. The targets of interest here are primarily the single, high-threat target which will not be contained within the main body of target tanks and will not present many detection cues. Various image enhancement methods have been evaluated in terms of improved operator performance. Quantitative performance measures such as contrast, resolution and signal-to-noise ratio are computed for selected algorithms and related to observer performance probabilities. This paper presents analysis and simulation results for an image enhancement method known as local area gain control. It shows dependence on parameter selection and develops criteria for evaluation in terms relatable to detection probability.
Many image processing tasks require the ability to extract useful features from digital images. The selection of a set of appropriate attributes to be extracted from an image constitutes a major problem. This paper examines a number of region features for their utility in target detection and object recognition. A simple hierarchical detection scheme is constructed which can be easily extended to perform classification. Experimental results are presented for FLIR images of tactical targets.
There has been increased emphasis in recent years to provide very accurate delivery of non-nuclear weapons against strategic targets. Goodyear Aerospace Corporation (GAC) under contract to the Defense Advanced Research Project Agency (DARPA) has participated in development of an Autonomous Terminal Homing (ATH) system to provide the required accuracy. The requirement for the ATH system is to provide accurate three-dimensional position measurement of a low-flying cruise missile as it converges on a target. Accurate position measurement and missile guidance requires that the three-dimensional characteristics of a target area be utilized during the processing of onboard sensor data. The processes and requirements for three-dimensional position measurement are reviewed. Critical relationships between scene signature content and sensor data quality are described. An illustration of sensor data processing is shown.
Conventional scene matching algorithms such as mean absolute difference or normalized cross correlation are often ineffective for use with images collected at different times or in different spectral bands because of the occurrence of significant differences in scene reflectivity or emissivity. A new scene registration concept has been devised for the dissimilar image problem and tested against real-world data with encouraging results. Whereas conventional approaches to map-matching are based upon the computation of a measure of image similarity, with a penalty imposed for non-corresponding intensities, the proposed algorithm provides a reward for region correspondence as evidenced by clustering of the image intensity joint histogram. This approach results in a maximum match score for images which differ by a random permutation of intensity levels. The use of an intensity-free matching algorithm provides the designer with the flexibility to use either sensor derived references, or synthetic references which are region coded, material coded, or coded with the predicted sensor response.
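The reward-for-clustering idea can be sketched with a toy score on the joint intensity histogram. The concentration measure below (the mass of each row's dominant bin) is one assumed choice, not necessarily the paper's actual metric:

```python
import numpy as np

def cluster_score(ref, sensed, levels=8):
    """Intensity-free match score: quantize both images to a few levels,
    form the joint histogram, and reward clustering -- here, the total
    mass sitting in each reference level's dominant sensed-level bin.
    A one-to-one remapping (permutation) of intensity levels leaves the
    score at its maximum of 1."""
    r = np.clip((ref * levels).astype(int), 0, levels - 1)
    s = np.clip((sensed * levels).astype(int), 0, levels - 1)
    joint = np.zeros((levels, levels))
    np.add.at(joint, (r.ravel(), s.ravel()), 1)
    joint /= joint.sum()
    return joint.max(axis=1).sum()

# A contrast-reversed (permuted-intensity) copy still scores 1.0;
# an unrelated image of the same statistics scores much lower.
rng = np.random.default_rng(1)
ref = rng.integers(0, 8, (32, 32)) / 8.0
permuted = rng.permutation(8)[(ref * 8).astype(int)] / 8.0
unrelated = rng.integers(0, 8, (32, 32)) / 8.0
```

Note that a conventional difference metric (e.g. mean absolute difference) would rate the permuted copy as a poor match, which is exactly the failure mode the abstract describes for dissimilar imagery.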
The problem of correlating images with random contrast reversals is considered. This problem arises mainly in registration of Multi-Sensor or Multi-Temporal imagery for applications such as Missile Guidance, Map Matching, Map Differencing and the like. A new approach to the registration performance analysis is presented, which enables the choice of both an appropriate processor configuration and optimal parameters, so as to maximize some common performance measures. The mathematical model, developed to depict the problem, not only leads to the formal solution but also provides insight into the physical relationships involved. A hybrid Optical/Digital processor is considered for the hardware implementation because their respective proven advantages seem to complement each other for this specific use. Preliminary experimental results, in support of the theoretical work, are reported.
Scene matching, by a human, is done by mentally registering the two images at a trial match position. If the observed residual match errors are consistent with the type of match errors known (by a priori experience) to occur, then the trial match is judged to be the true-match position. This approach has not been possible in autonomous scene-matching because there has been no way to exactly register the two images. Unless this is done, the true match-errors (the likelihood of which can be judged against a priori knowledge) are completely swamped by spurious match-errors due to even slight residual mis-registration (rotation, scale change, aspect, etc.). In this paper it is shown how a new process, that automatically registers the two images, could be applied in an autonomous version of the human scene-matching technique. This technique would be particularly appropriate when the match is difficult because, for example, the reference is a cartoon-like synthetic image, the scene content is unfavorable for the easy extraction of gross features, the acquisition basket is large, the sensed image quality is poor due to bad weather, etc.
Many situations arise in which it is desired to combine several undersampled images of a scene to provide improved resolution or signal to noise ratio. For example, one technique to reduce the bandwidth of a real time tactical air vehicle video link to overcome jamming is to subsample each frame. Since there is generally a substantial frame-to-frame overlap, the received data could potentially be combined to approximate the original resolution if the pixels are adequately aligned. Prior work showed the alignment accuracy requirement to be approximately 0.15 pixels, so that the received undersampled (aliased) frames would have to be rectified and correlated to this precision. Changes in perspective in the sequence of frames can prevent such precise alignment for two reasons. (1) Resampling, during rectification, and changes in ground resolution between images both degrade correlation accuracy. (2) Small correlation windows are required to minimize the effect of scene relief; thus, reducing potential averaging out of (1) by numerous pixels. This paper reports on experiments done to determine over what range of perspective changes small windows from video frames can be aligned well enough to permit resolution improvement. The results show that the goal of 0.15-pixel registration precision is achievable for images down to 8 x 8 pixels in size for scale changes between images of up to 30%.
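Reaching a fraction-of-a-pixel registration precision typically means interpolating the correlation surface around its integer peak. A standard sketch of one such estimator (not necessarily the one used in these experiments) is a separable parabolic fit:

```python
import numpy as np

def subpixel_peak(corr):
    """Refine an integer correlation peak to subpixel precision with a
    parabolic fit through the peak and its two neighbors, per axis."""
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    def vertex(a, b, c):              # offset of the parabola vertex from b
        d = a - 2 * b + c
        return 0.0 if d == 0 else 0.5 * (a - c) / d
    di = vertex(corr[i - 1, j], corr[i, j], corr[i + 1, j])
    dj = vertex(corr[i, j - 1], corr[i, j], corr[i, j + 1])
    return i + di, j + dj

# A synthetic quadratic correlation peak at (3.3, 5.6) is recovered exactly,
# since the parabolic fit is exact for a separable quadratic surface.
x, y = np.mgrid[0:8, 0:10]
corr = -((x - 3.3) ** 2 + (y - 5.6) ** 2)
pi, pj = subpixel_peak(corr)
```

On real aliased imagery the fit is only approximate, which is why the paper's 0.15-pixel goal has to be demonstrated empirically rather than assumed.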
This paper examines the problem of registering a reconnaissance side-looking synthetic aperture radar (SAR) image to a three-dimensional reference map. The registration technique developed in the present work is based on computing an image-to-data base correspondence in terms of a SAR sensor model as a function of such parameters as altitude, range, scale, etc. If their exact values are known, the model can precisely predict the two-dimensional image coordinates for any three-dimensional data base point, thereby accomplishing registration. However, the platform ephemeris data usually provides only model parameter estimates. The objective, then, is to improve them so that the model can predict the image location of any data base point within a desired accuracy range. Preliminary investigations demonstrate the feasibility of achieving location accuracy within 50 m.
This paper explores the application of video disc digital storage technology to cruise missiles. A brief summary of current and near term video disc data storage technology is provided and then several cruise missile mission applications which depend on the availability of an onboard storage system capable of storing 10-100 billion bits of data are discussed.
In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
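The fast Fourier implementation of the correlation step (independent of how the matched filter itself is estimated) can be sketched as follows; the least-squares filter-design stage is not shown, and the function name is an assumption:

```python
import numpy as np

def fft_correlate(scene, filt):
    """Circular cross-correlation via the FFT: correlation in the image
    domain is multiplication by the complex conjugate in the frequency
    domain. The filter is zero-padded to the scene size."""
    F = np.fft.rfft2(scene)
    H = np.fft.rfft2(filt, s=scene.shape)
    return np.fft.irfft2(F * np.conj(H), s=scene.shape)
```

For target location, the argmax of the output surface gives the circular offset at which the filter best matches the scene; the FFT reduces the cost from O(N^2) per offset to O(N log N) overall.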
Advanced scene matching concepts for application to autonomous terminal homing have been investigated as part of the Autonomous Terminal Homing Program. ATH program objectives are to develop a precision terminal guidance system for fixed targets capable of operating successfully during both day/night and adverse weather conditions using synthetically generated references. Scene matching issues and a conceptual framework to address these issues are presented. Functional comparisons between different processing components are summarized, and suggested approaches to the development of a robust scene matching processor are presented.
Scene matching is a valuable and easily implemented procedure for navigation update determination. However, for scene matchers a measure of match confidence is required to judge either match acceptability or reference suitability. No confidence measure has previously been available for feature based matchers. An analytical technique is presented that utilizes feature statistics and measurements of the match surface or predicted match surface to estimate match quality as a confidence measure. The technique has been used both for a priori reference selection and for a posteriori measurement of match confidence. Feature statistics have been obtained by combining analysis with empirical results. This reliability prediction technique has been validated over a large number of test matches. Approximation techniques exist which reduce the required computations to a trivial amount.
Advanced pattern matching techniques were developed that are capable of matching complex terrain scenes for use in midcourse navigational updating of aircraft and missiles. This method utilizes key features in an image to represent scene content. The key features are converted into a line-based model, which is then used in the actual matching process. The pattern-matching approach is more tolerant of scene diversities than are correlation techniques, and it can match scenes containing severe contrast reversal, small prominent features, or scale and orientation differences. Both high- and low-altitude flight profiles are considered, with matches performed for each case. Comparisons with conventional correlation are made for a variety of scenes.
A range sensor computes the distance from the sensor to the nearest scene point along a given ray. A range image is an array of range values for a raster of ray displacements. Range images preserve the 3-D geometry of a scene as viewed from the sensor. Thus actual measurements of scene geometry, e.g. lengths, areas, etc. can be derived from a range image and compared to similar measurements taken from a prepared reference model. The Lockheed Signal Processing Laboratory has developed a method for utilizing range imagery in intelligence, guidance, and recognition tasks, with particular emphasis on missile guidance using onboard reference imagery. In our approach, the reference model is used to predict which planar surfaces will be visible from the estimated sensor position. Descriptions of these surfaces are stored as a reference list. The sensed range image is then acquired and the local 3-D orientation of each pixel is computed. Adjacent pixels appearing to lie in the same surface are aggregated into primitive planes. Descriptions of these planes form a sensed plane list. The sensed plane list is matched to the reference list in 3-D coordinate space. This matching determines the actual sensor position resulting in an accurate vehicle position fix.
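The per-pixel 3-D orientation step can be sketched directly from the range image's gradients. This is a simplified orthographic z = f(x, y) view; the actual sensor ray geometry and the plane-aggregation and list-matching stages are not modeled here:

```python
import numpy as np

def pixel_normals(z):
    """Unit surface normal at every pixel of a range image z = f(x, y):
    the surface normal is proportional to (-dz/dx, -dz/dy, 1). Pixels
    lying on the same plane produce (nearly) identical normals and can
    then be aggregated into primitive planes."""
    zf = z.astype(float)
    dzdy, dzdx = np.gradient(zf)          # row gradient, column gradient
    n = np.stack([-dzdx, -dzdy, np.ones_like(zf)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# A planar patch z = 2x + 3y + 1 yields the same normal at every pixel.
y, x = np.mgrid[0:5, 0:6]
normals = pixel_normals(2 * x + 3 * y + 1)
```

Grouping adjacent pixels whose normals (and plane offsets) agree yields the sensed plane list that is then matched against the reference list in 3-D coordinates.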
The practical scene matching problem presents certain complications which must extend classical image processing capabilities. In this paper, we consider certain aspects of the scene matching problem which must be addressed by algorithms for missile guidance. In the first section, we outline a philosophy for treating the matching problem for the terminal homing scenario. Later, we consider certain aspects of the feature extraction process and symbolic pattern matching.
Target recognition in the terminal homing scenario consists of matching a set of sensed features with a set of reference features in the prestored reference feature map. An efficient feature matching algorithm, MACHAL, is described. This iterative algorithm employs clustering of feature metrics rather than exhaustive correlation calculations between reference and sensed features. Clustering, followed by data thinning, quickly reduces both reference and sensed data sets and thereby reduces the computational burden. MACHAL is a general algorithm which is capable of matching feature vectors of arbitrary dimension. The computational requirements increase with the dimension of the feature space and with the increasing number of feature vectors in the sensed and reference feature sets. In this paper, MACHAL is applied to low-order feature matching in a relatively sparse feature space, characteristic of terminal homing problems. A probability model for the algorithm is developed and its validity tested by Monte Carlo simulation. Upper bounds for the clustering threshold and for the noise variance are developed using the probability model. The performance of the algorithm is evaluated by assessment of match accuracy, and robustness to noise resulting from typical sets of sensed and reference scenes. The application of MACHAL to higher order feature space is demonstrated.
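The clustering idea (votes in a low-dimensional offset space instead of exhaustive correlation) can be sketched for the simplest case, a pure translation between point features. This toy is in the spirit of, not a reimplementation of, MACHAL; the cell size and all names are assumptions:

```python
import numpy as np

def cluster_match(ref_pts, sensed_pts, cell=4.0):
    """Feature matching by clustering: every reference/sensed pairing
    votes for a translation; the densest cluster of votes (on a coarse
    grid of cells) is taken as the scene-to-reference offset."""
    votes = (sensed_pts[None, :, :] - ref_pts[:, None, :]).reshape(-1, 2)
    keys = np.round(votes / cell).astype(int)          # coarse clustering grid
    uniq, counts = np.unique(keys, axis=0, return_counts=True)
    best = uniq[np.argmax(counts)]
    sel = np.all(keys == best, axis=1)
    return votes[sel].mean(axis=0)                     # refined offset estimate

# Ten reference features, shifted by (12.5, -7.25) plus noise; the
# correct pairings pile up in one cell while wrong pairings scatter.
rng = np.random.default_rng(3)
ref = rng.uniform(0, 100, (10, 2))
sensed = ref + np.array([12.5, -7.25]) + rng.normal(0, 0.1, (10, 2))
est = cluster_match(ref, sensed)
```

Only the correct pairings vote consistently, so the dominant cluster identifies the offset without ever computing a full correlation surface; this is what makes the approach cheap in sparse feature spaces.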
The purposes of the realisation of an interactive flying spot are described in this paper. The flying spot is connected to a digital picture processing system. One of its characteristics is analysed in detail: the possibility of proceeding in an interactive way to geometrical deformations of pictures with a programmable scanning system. The application described is a method allowing the geometrical matching of a geographic map and a Landsat digital photograph. Some advantages of this method are emphasized.
Autonomous terminal homing of a smart missile requires a stored reference scene of the target for which the missile is destined. The reference scene is produced from stereo source imagery by deriving a three-dimensional model containing cultural structures such as buildings, towers, bridges, and tanks. This model is obtained by the precise matching of cultural features from one image of the stereo pair to the other. In the past, this stereo matching process has relied heavily on local edge operators and a gray scale matching metric. The processing is performed line by line over the imagery and the amount of geometric control is minimal. As a result, the gross structure of the scene is determined but the derived three-dimensional data is noisy, oscillatory, and at times significantly inaccurate. This paper discusses new concepts that are currently being developed to stabilize this geometric reference preparation process. The new concepts involve the use of a structural syntax which will be used as a geometric constraint on automatic stereo matching. The syntax arises from the stereo configuration of the imaging platforms at the time of exposure and the knowledge of how various cultural structures are constructed. The syntax is used to parse a scene in terms of its cultural surfaces and to dictate to the matching process the allowable relative positions and orientations of surface edges in the image planes. Using the syntax, extensive searches using a gray scale matching metric are reduced.
In order to find the three-space coordinates of a scene from a pair of images it is necessary to obtain a "camera model" for each image. If the geometry and physics of the camera are complicated, it may be more convenient to develop the camera model from phenomenological considerations rather than from exact geometric and physical considerations. Thus, some properties of a general camera model are investigated. A general camera model is defined as a transformation from three-space onto two-space such that the pre-image of any point in two-space is a straight line. Therefore, an arbitrary transformation from three-space onto two-space is not a camera model. The most general camera model which maps any straight line in three-space to a straight line in two-space is developed. Finally, the general properties of a pair of images which possess an "epipolar geometry" are examined, and an example of a camera model which yields "epipolar curves" (as opposed to lines) is given.
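For concreteness, the familiar linear (projective) camera is one model satisfying the straight-line pre-image property; a sketch of its defining equations in standard notation (not taken from the paper) is:

```latex
% Linear camera model: a 3x4 matrix P of rank 3 maps homogeneous
% world points to homogeneous image points.
\[
  \lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
  = P \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
  \qquad P \in \mathbb{R}^{3\times 4},\ \operatorname{rank} P = 3 .
\]
% The pre-image of an image point (u, v) is the straight line through
% the camera center C (the null vector of P, with PC = 0):
\[
  \mathbf{X}(\mu) = P^{+} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
  + \mu\, C, \qquad \mu \in \mathbb{R},
\]
% where P^{+} denotes the Moore--Penrose pseudoinverse of P.
```

The general camera model of the paper relaxes the linearity of P while keeping exactly this property: the pre-image of each image point remains a straight line.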
The correspondence problem, matching the same feature in two views, is a central problem of stereo vision. We examine geometric constraints on stereo correspondence and describe progress toward formulation from first principles of an evaluation function for selection of the best among alternative correspondences. Within a surface interpretation, conditions on correspondence of edges and surface intervals are shown. These conditions are useful with wide angle stereo and provide particularly tight constraints for narrow angle stereo. We invoke the general assumption that edge and surface orientations are not related to observer position. We are combining these general constraints with ongoing work in scene modeling of known regularities which include distributions with respect to the gravity vector (horizontal and vertical edges and surfaces), parallelism and alignment with local coordinate systems, and orthogonal corners. We will use them to calculate a likelihood measure for correspondences.
An identification procedure for a generalized nonanalytic, nonlinear autoregressive-moving average process model (GENRA) is introduced. A new set of polynomials representing this model is proposed. When these polynomials are memoryless and analytic, they reduce to the Kolmogorov-Gabor representation of a system. Otherwise, they admit noninteger exponents into an analytic background to provide representations of processes containing singular transformation derivatives. Such noninteger exponents have been observed in the radiation embrittlement of materials, missile nose cone erosion, and thermodynamic phase transitions. The analytic, moving average part of the model is applied separately to the development of advanced optimal missile guidance laws and to image target recognition and aim-point selection. The synthesis that closes the guidance loop to an image terminal homing seeker is under current investigation. An Adaptive Learning Network (ALN) approach to the implementation of GENRA process models is discussed. Two ALN implementation techniques are reviewed: cross validation (decision regularization) and Akaike's Information Criterion. The latter technique was employed to demonstrate the feasibility of using ALN methods to provide passive implementation of modern optimal guidance laws. An ALN implementation of modified proportional navigation demonstrated excellent performance in a six-degree-of-freedom simulation. Results are presented.
Up to this point, optical information processing system designers have taken an end-to-end approach to pattern recognition and feature extraction from imagery. The majority of algorithms that have been proposed for this category of problems have used a completely optical solution in which only the signal conditioning and detection have been implemented by other means. It is the purpose of this paper to present an alternate approach in which optical methods are used to provide image features for an adaptive learning network  (ALN). The result of the ALN design process is to specify a functional mapping between input feature space and a set of response variables that can be interpreted as indicators of particular processes associated with the input data. This mathematical mapping can then be reduced to hardware and made to perform on real time inputs. In the case of recognizing patterns in the field of view of airborne sensors, certain useful features of the input image such as the spatial frequency content or optical moment information have been discarded as inputs for the ALN due to computational complexity and the packaging constraints of an aerial platform. It is here that the speed, resolution and potential compactness of coherent optical methods can be used to extract features from input images that otherwise would not be feasible. This paper will describe those optical operations which are applicable for conditioning data for the ALN process, and present an example of how this hybrid approach can be used.
The Adaptive Learning Network Synthesis methodology has been used to implement an image classification algorithm for infrared images. Using features extracted from transforms of the original image, the algorithm achieves range and aspect angle independent separation of images that contain a specific target (a tank) from images that do not contain the target. A ROC analysis of the algorithm, using 385 sample images, shows >95% detection rate, <5% false alarm rate, and a small (<1%) false dismissal rate.
Combined multisensor display concepts for man-in-the-loop navigation and target detection/recognition in attack aircraft and operator-guided missiles were developed and evaluated against multifunction and multiple-display configurations. The analytical and experimental data (using simulated FLIR and radar sensors) studied under this ONR-sponsored effort indicated that combined multisensor displays may offer significant improvements in mission profiles by permitting all-weather and nighttime operation, improving navigation, permitting lower-altitude operation, increasing the number of targets detected and recognized, improving weapon delivery, reducing pilot workload, and reducing display space. In conclusion, combined multisensor displays provide a high-leverage technique (significant improvement for a minimum investment) for improving the operator interface with aircraft and missiles, and the improved interface may in turn provide significant improvements in mission profiles.
This paper discusses a technique for converting digitized oblique aerial photography to gray level imagery as seen from other viewing perspectives. The one- and two-camera perspective transformations are derived and applied. Inputs to these transformations are camera model information and a stick figure representation of the scene. The required camera model information is extracted from the imagery in conjunction with limited map data. Three-dimensional stick figure models are constructed interactively using the camera model and the digitized photograph. Using the stick figure information, pixel-by-pixel gray level transformations and hidden surface corrections are performed to produce the new perspective views. The resulting images maintain the noise and resolution qualities of the original digitized imagery.
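The one-camera perspective transformation above follows the standard pinhole projection model; a minimal sketch of that projection step, with hypothetical function names and test values not taken from the paper:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project world points into a camera via the pinhole model.
    K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation
    (illustrative camera-model parameters)."""
    P = K @ np.hstack([R, t.reshape(3, 1)])                   # 3x4 projection matrix
    X = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous coordinates
    x = (P @ X.T).T
    return x[:, :2] / x[:, 2:3]                               # perspective divide

# identity camera looking down +Z: a point at (1, 2, 4) maps to (0.25, 0.5)
uv = project(np.array([[1.0, 2.0, 4.0]]), np.eye(3), np.eye(3), np.zeros(3))
```

The gray-level resampling and hidden-surface steps described in the paper operate on top of this basic mapping.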
It is generally agreed that image collection technology for high altitude aircraft and satellite remote sensing programs is far ahead of data interpretation and exploitation, especially for oceanographic purposes. Data presented here contribute to the hypothesis that remote sensing records reveal, in their anomalous gray shades and thermal evidence for upwelling, specific differences in aerosol chemistry that reflect both surface and sub-surface sources. Specific illustrations of the striking changes in aerosol composition are provided for continental, marine, and coastal regions, highlighting the unique enrichment of nitrate particulates in areas of coastal upwelling. Spectra characteristic of these differing atmospheric particulates, as collected in numerous field studies, are included for the mid-infrared range. These spectra provide essential "ground truth" for satellite and high altitude aircraft image interpretation.
The potential use of data from existing and planned earth resources satellites for the purpose of identification of surface materials is examined. Two different methods of the application of existing data to surface materials identification are discussed. The first method seeks to apply basic models of the physical processes involved in the reflection and emission of radiation. The second method utilizes "training sets" to empirically determine the signature of particular materials under given illumination conditions. Several difficulties complicate the application of the first method, and it is concluded that the present and planned Landsat spacecraft probably do not justify the use of the basic physical models. The method using training sets is judged useful with presently available data. Several examples using special multispectral combinations of Landsat data demonstrate the present potential of the existing earth resource satellite systems and suggest the increased potential promised by the higher spatial and spectral resolution of the next generation of earth resources satellite systems.
A knowledge of the thermal inertia of the Earth's surface can be used in geologic mapping as a complement to surface reflectance data as provided by Landsat. Thermal inertia, a body property, cannot be determined directly but can be inferred from radiation temperature measurements made at various times in the diurnal heating cycle, combined with a model of the surface heating processes. We have developed such a model and applied it along with temperature measurements made in the field and from satellite to determine thermal properties of surface materials. An example from a test site in western Nevada is used to illustrate the utility of this technique.
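The inverse relation between the diurnal temperature swing and thermal inertia can be sketched with the common "apparent thermal inertia" proxy; this simplified formula stands in for the full surface-heating model described above, and the scale factor and temperature values are illustrative assumptions:

```python
def apparent_thermal_inertia(albedo, t_day, t_night, scale=1.0):
    """Apparent thermal inertia (ATI), a standard proxy when the full
    heat-flow model is not inverted: ATI = scale * (1 - albedo) / dT.
    A small day-night swing at fixed albedo implies high inertia."""
    return scale * (1.0 - albedo) / (t_day - t_night)

# hypothetical values: rock (small diurnal swing) vs. dry soil (large swing)
ati_rock = apparent_thermal_inertia(0.2, 305.0, 295.0)  # dT = 10 K
ati_soil = apparent_thermal_inertia(0.2, 320.0, 290.0)  # dT = 30 K
```

Under this proxy the rock, with its smaller temperature swing, shows the higher inferred inertia.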
This paper briefly summarizes some past work on texture analysis, including comparisons between different classes of features based on spatial statistics and Fourier analysis; recently developed refinements of the spatial statistics approach; methods based on the extraction and description of texture primitives; texture models based on random geometric processes; methods of improving texture classification by feature value smoothing; and methods of segmenting images into textured regions. References are given to reports and papers in which further details can be found.
Computer-based image analysis requires explicit models of the image-forming process in order to deal with the effects of variations in viewing direction, incident illumination, surface slope and surface material. A fixed illumination, surface material and imaging geometry is incorporated into a single model, called a reflectance map, that allows observed brightness to be written as a function of surface orientation. The reflectance map is used to generate synthetic images from digital terrain models. Synthetic images are used to predict properties of real images. This technique is illustrated using Landsat imagery. Accurate shadow regions are determined from a digital terrain model by calculating which surface elements are visible from the light source. Once shadows are determined, the effect of sky illumination and atmospheric haze is estimated.
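A minimal sketch of the reflectance-map idea for a Lambertian surface, assuming a gridded digital terrain model (the shadow calculation and sky/haze terms described above are omitted):

```python
import numpy as np

def lambertian_synthetic_image(dem, sun_dir, cell=1.0):
    """Render a synthetic image from a digital terrain model using a
    Lambertian reflectance map: brightness = max(0, n . s), where n is
    the surface normal from terrain gradients and s the sun direction."""
    dzdy, dzdx = np.gradient(dem, cell)
    # the surface normal of z = f(x, y) is (-dz/dx, -dz/dy, 1), normalized
    n = np.dstack([-dzdx, -dzdy, np.ones_like(dem)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    s = np.asarray(sun_dir, float)
    s = s / np.linalg.norm(s)
    return np.clip(n @ s, 0.0, None)  # facets facing away clamp to zero

# a uniformly tilted plane lit from directly overhead is uniformly bright
dem = np.add.outer(np.arange(5.0), np.zeros(5))
img = lambertian_synthetic_image(dem, [0.0, 0.0, 1.0])
```

For the 45-degree plane above, every facet receives the same brightness (cos 45° ≈ 0.707), which is the sense in which the synthetic image predicts properties of a real image of that terrain.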
Basic technologies for automatic mapping of terrain features from black and white imagery are summarized. The principal effects inhibiting automatic mapping--sensor, film, and atmospheric variables--are accounted for with a set of simple normalization techniques. Terrain features can then be mapped from algorithms dependent upon brightness, texture and syntactic information.
A method is presented for classifying each pixel of a textured image, and thus for segmenting the scene. The "texture energy" approach requires only a few convolutions with small (typically 5x5) integer coefficient masks, followed by a moving-window absolute average operation. Normalization by the local mean and standard deviation eliminates the need for histogram equalization. Rotation-invariance can also be achieved by using averages of the texture energy features. The convolution masks are separable, and can be implemented with 1-dimensional (vertical and horizontal) or multipass 3x3 convolutions. Special techniques permit rapid processing on general-purpose digital computers.
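The pipeline above can be sketched directly from its description: a separable convolution with a 5x5 mask built from 1-D vectors, followed by a moving-window average of absolute values. This is a simplified illustration; the normalization by local mean and standard deviation is omitted, and the stripe image is a made-up test pattern:

```python
import numpy as np

# 1-D basis vectors whose outer products form the 5x5 masks
L5 = np.array([1, 4, 6, 4, 1], float)    # level (local average)
E5 = np.array([-1, -2, 0, 2, 1], float)  # edge
S5 = np.array([-1, 0, 2, 0, -1], float)  # spot

def conv_sep(img, vert, horiz):
    """Separable 2-D convolution: columns with `vert`, then rows with `horiz`."""
    tmp = np.apply_along_axis(np.convolve, 0, img, vert, mode='same')
    return np.apply_along_axis(np.convolve, 1, tmp, horiz, mode='same')

def texture_energy(img, vert, horiz, win=15):
    """Convolve with the separable mask, then take a moving-window
    average of absolute filter outputs ("texture energy")."""
    f = np.abs(conv_sep(img, vert, horiz))
    box = np.ones(win) / win
    return conv_sep(f, box, box)

# the L5 (vertical) x E5 (horizontal) mask responds to vertical stripes
# but gives zero energy on a flat region
stripes = np.tile([0.0, 0.0, 1.0, 1.0], (32, 8))
flat = np.zeros((32, 32))
e_stripes = texture_energy(stripes, L5, E5)
e_flat = texture_energy(flat, L5, E5)
```

Classifying each pixel then reduces to comparing its vector of energy values across several masks, which is what makes the method fast enough for full-scene segmentation.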
A set of electromagnetic, thermal, and shaded computer graphics models that form the basis for passive infrared-band imagery simulations is presented. Pertinent theory is also reviewed. Applications of the simulation system are illustrated graphically with plots and simulated imagery for a typical infrared sensor system. The simulation system presented here represents a significant improvement in IR modeling in terms of scene realism and total simulation flexibility. This approach uses geometric scene, environment, and sensor models that are each functionally separate.
The primary objective of the digital sensor simulation investigations being conducted at the Defense Mapping Agency Aerospace Center (DMAAC) is to establish an editing and analysis capability for digital culture and terrain data bases. These data bases are being produced by DMAAC to support advanced aircraft simulators by providing an improved low level radar training capability offered by digitally generated radar landmass images. As a result of the technology developed for aircraft simulator support, sensor guidance reference scenes, visual, and microwave scenes are also being digitally generated. Currently, intensive studies are underway to generate synthetic input data bases with apparent resolutions finer than in the original data bases, using supportive data base information. Highly realistic sensor simulations have been generated, and the continuing emphasis is on modeling new sensors as well as improving resolution without increasing data base production costs.
The Department of Defense has applications for radiometric navigation update systems and thus an ongoing need for improved radiometric reference map preparation procedures: more automatic procedures that produce more widely applicable data bases. An existing radiometric reference map preparation procedure is described that involves screening suitable navigation update sites, preparing temperature maps, and validating the temperature maps. The screening is partly manual (a photo interpreter looks for radiometrically unstable boundaries or insufficient spatial detail) and partly automatic (a computer evaluates the proportions of various materials). Temperature maps are automatically prepared from material region maps, which are prepared by a photo interpreter tracing boundaries and identifying materials. Mission tape (temperature map) validation is by detailed simulation. We recommend replacing the manual screening techniques with automatic boundary evaluation, and replacing manual boundary tracing and material identification with region growing and boundary extraction techniques. Reference maps are needed because a radiometric navigation update system estimates position by comparing (using pattern matching or correlation) a sensed scene with a prestored reference map. The terrain sensor detects energy reflected and emitted by the ground in portions of the millimeter-wave band (usually at 35 or 94 GHz).
Recent activity in synthetic reference scene generation from geographic data bases has led to new and expanding production responsibilities for the mapping community. It has also spawned a new and growing population of geographic data base users. Optimum utilization of these data requires an understanding of the natural and cultural patterns represented as well as knowledge of the conventions and specifications which guide data base preparation. Prudence dictates effective mechanisms for data base inspection by the user. Appropriate implementation of data display procedures can provide this capability while also supporting routine analysis of data base content. This paper first illustrates a set of convenient mechanisms for the display of the elevation and planimetric components of geographic data files. Then, a new USAETL program in Computer-Assisted Photo Interpretation Research (CAPIR) is introduced. The CAPIR program will explore issues of direct data entry to create geographic data bases from stereo aerial photography. CAPIR also provides a technique for displaying geographic data base contents in corresponding three-dimensional photo models. This capability, termed superposition, will impact the critical tasks of data validation, revision, and intensification which are essential for effective management of geographic files.
Various methods for the simulation of reticle systems are described, with practical application to the simulation of missile guidance systems. The reticle pattern, as well as the received image, is digitized using common picture-processing equipment. The sampled representations of the reticle and image are converted to a polar coordinate system, and the data are then put into a vector string. Setting up a cyclic matrix with this vector string, one can describe the periodic system signal by cyclic convolution of the reticle string and image string. For this linear discrete system approximation, the transfer function in the Z-domain can be deduced. A similarity transform of the square cyclic matrix to its Jordan form, by means of the discrete Fourier matrix, reduces the amount of calculation required, which is an advantage in digital simulation.
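The cyclic-matrix view can be sketched numerically: the cyclic convolution of the two strings equals a circulant matrix-vector product, and because the discrete Fourier matrix diagonalizes every circulant, the same result falls out of an elementwise product in the frequency domain. The reticle and image strings below are made-up illustrations:

```python
import numpy as np

def cyclic_convolution(reticle, image_ring):
    """Cyclic convolution of the reticle string with the image string,
    computed in the frequency domain: the DFT diagonalizes the cyclic
    (circulant) matrix, so the matrix product becomes elementwise."""
    return np.real(np.fft.ifft(np.fft.fft(reticle) * np.fft.fft(image_ring)))

def circulant(c):
    """Direct O(N^2) form: square cyclic matrix whose first column is c."""
    return np.array([np.roll(c, k) for k in range(len(c))]).T

# made-up reticle and image strings; both routes must agree
reticle = np.array([1.0, 2.0, 0.0, -1.0])
image_ring = np.array([0.5, 0.0, 1.0, 0.0])
fast = cyclic_convolution(reticle, image_ring)
direct = circulant(reticle) @ image_ring
```

The FFT route reduces the cost from O(N²) to O(N log N), which is the computational advantage the similarity transform delivers in digital simulation.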
The missile map-matching problem for guidance updating or target homing is shown in Figure 1. The problem as defined here consists of locating the position of a sensor image relative to a reference map which is stored onboard the vehicle's computer. Once the match location is found the relative location between the two map centers can be used to update the vehicle's navigational position. The two important performance considerations are the avoidance of false fixes as measured by their frequency of occurrence and the accuracy with which the position fix can be made.
Edge correlation between multi-sensor images is an efficient technique for scene matching. However, in aerial images, the question of which edges of a scene should be used in the correlation requires consideration. A method is presented which suggests a criterion for selecting the best shape for connected edges for edge correlation. The criterion is based on reducing any secondary correlation peaks. A basis for selecting the best edges directly from a knowledge of the edge shapes is also presented.
Several pattern recognition methods have been used in scene matching. These methods include the use of intensity difference, edge measurements, invariant moments and symbolic descriptions as measurement features. This paper describes a method used in extracting edge measurements from radar and optical images for the purpose of scene matching.
The performance of two-dimensional correlations applied to a set of images of a scene, each exhibiting a different characteristic of it, is presented. The probability density function of the correlation output is assumed to be Gaussian. Both edge and area correlations are used. The Bayes probability of error, the Chernoff and Bhattacharyya error bounds, and Fisher's criterion are used as figures of merit to characterize the performance of the matching algorithm. Empirical results are given for a set of four images of a scene: down-looking, target-looking, real, and synthetic images. These results show that for real images the target-looking view performs better than the down-looking view, and for synthetic images the down-looking view outperforms the target-looking view.
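Under the Gaussian assumption, the Bhattacharyya figure of merit can be sketched for the univariate case (the multivariate form needed for real correlation surfaces follows the same pattern; the function names and test values here are illustrative):

```python
import numpy as np

def bhattacharyya_distance(m1, s1, m2, s2):
    """Bhattacharyya distance between the univariate Gaussians
    N(m1, s1^2) and N(m2, s2^2)."""
    v1, v2 = s1 ** 2, s2 ** 2
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

def bayes_error_bound(m1, s1, m2, s2, p1=0.5):
    """Bhattacharyya upper bound on the Bayes probability of error:
    P_e <= sqrt(p1 * p2) * exp(-B)."""
    b = bhattacharyya_distance(m1, s1, m2, s2)
    return np.sqrt(p1 * (1.0 - p1)) * np.exp(-b)

# identical match/mismatch statistics give the worst-case bound of 0.5;
# well-separated correlation peaks drive the bound toward zero
worst = bayes_error_bound(0.0, 1.0, 0.0, 1.0)
good = bayes_error_bound(0.0, 1.0, 5.0, 1.0)
```

Ranking the four views by such bounds is what allows the matching performance of the down-looking and target-looking images to be compared without exhaustive simulation.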
A signal processing window based on the binomial distribution function is presented. As a discrete form of the normal distribution, it offers the minimum uncertainty of 1/4π when locating periodic components in sampled data. Low-, high-, and band-pass forms are shown for processing two-dimensional data in rectangular and hexagonal tessellations. Real-time convolution of dynamic imagery without high-speed multiplication is provided by a cascade of first-order binomial cells. Applications include feature extraction and image enhancement: the low sidelobe characteristics of the window avoid aliasing, the convolution process avoids block-type artifacts, and the binomial function reduces location uncertainty. An example of processing with a band-pass form approximating the spatial response of the human retina is shown. Other band-pass applications include edge extraction using the locations of zero crossings of the windowed output, and the screening and location of objects based upon their size, aspect, or form.
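The multiplication-free cascade can be sketched as repeated convolution with the first-order [1, 1] cell, which builds up the binomial (Pascal's-triangle) coefficients that approximate the Gaussian; the function names are illustrative:

```python
import numpy as np

def binomial_window(order):
    """Binomial window built by cascading the first-order [1, 1] cell
    `order` times; each stage needs only additions, no multiplications."""
    w = np.array([1.0])
    for _ in range(order):
        w = np.convolve(w, [1.0, 1.0])
    return w

def binomial_lowpass(x, order):
    """Low-pass filtering with a normalized binomial window; band-pass
    forms follow as differences of two low-pass outputs."""
    w = binomial_window(order)
    return np.convolve(x, w / w.sum(), mode='same')

# an order-4 cascade reproduces the familiar 1-4-6-4-1 window, and a
# constant signal passes a normalized low-pass unchanged away from the edges
y = binomial_lowpass(np.ones(10), 4)
```

In hardware, the cascade of adders is what permits real-time convolution of dynamic imagery without high-speed multipliers.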