(The following Keynote Address was presented by Dr. Jordan to open the Airborne Reconnaissance III Symposium.) Good morning, ladies and gentlemen: I am honored and pleased to have been given the opportunity to open this meeting this morning, because surely there is no question as to the importance of a continuing, cooperative dialogue between those of you from industry and related activities and those of us in the Department of Defense. The reconnaissance technology area is particularly demanding of cooperative attention due to exponentially increasing technical capabilities.
Reconnaissance systems planners need to know the performance of airborne sensor systems. This performance is measured by the quality of the imagery produced. Image quality is almost undefinable; it is a complex of physical factors, one or more of which are usually measured, such as sharpness, resolution, etc. The Air Force and most of industry rely heavily on measurements of system resolution for specifying and evaluating performance. System resolution, despite its limitations, generally has been easy to measure, is readily accepted, has proven effective in serially grading lenses and systems, and is not known to cause significant errors. The methods usually used by the Air Force to measure resolution are: 1. Tribar Target (RP) Reading, 2. Visual Edge Matching (VEM), 3. Maximum Magnification Factor (MMF), and 4. Modulation Transfer Function (MTF). All four methods are used when conditions warrant and where the required equipment is available. Uncertainties between readings are thereby reduced, and the credibility of results increased. The Air Force continues to acknowledge the need to increase image quality and to improve image quality measurement methods for all Air Force reconnaissance and mapping sensor systems. The Air Staff therefore established project "SENTINEL SIGMA" to give appropriate emphasis to this effort. Within this program, all image evaluation methods are being evaluated at the Air Force Sensor Evaluation Center (AFSEC) to determine their adequacy for measuring image quality. To date, preliminary tests show a high correlation between VEM and tribar readings.
MTF analysis procedures permit post-flight evaluations of sensor performance and image quality to be compared with predicted values. However, careful consideration must be given to the selection of edge targets, microdensitometry and computational procedures because of the high spatial frequency content of images recorded by reconnaissance sensor systems. An analyst skilled in MTF evaluation procedures can isolate and quantify the effects of factors responsible for image degradation, including vacuum failure, focus error, image motion, and duplicating processes.
Two motives for image analysis are system performance determination and subjective image quality evaluation. The first is essentially an objective measurement process while the second must be subjective if the imagery is intended to be read out by humans. There is presently no single evaluation procedure to suit both aims. Although there have been important advances in subjective image evaluation procedures over the last ten years, they are not yet generally in use, and as a consequence objective performance measures are often improperly employed as indicators of subjective image quality. The danger in such a use is exemplified by experience with resolution in objective/subjective correlation analyses and by the analysis of basic tribar target characteristics. Optimizing system performance in terms of subjective image quality is possible by carrying out subjective image quality evaluation (using either actual or simulated system product) and functionally relating the subjective results with objectively measured system performance parameters using statistical multivariate procedures. Current subjective engineering practice, utilizing procedures such as binning, JND scaling and multidimensional scaling (as described), provides the ability to determine subjective quality to any desired degree of precision and in both relative and absolute quality terms.
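The statistical relation described above can be sketched in a few lines. This is an illustrative toy, not the authors' procedure: the objective parameters, underlying weights, and subjective ratings are invented synthetic data, and the multivariate fit is reduced to ordinary least squares.

```python
import numpy as np

# Hypothetical example: relate subjective quality ratings to objectively
# measured system parameters (e.g., resolution, MTF area, granularity)
# via a linear multivariate fit. All numbers are invented for illustration.
rng = np.random.default_rng(0)
n_images = 30
objective = rng.uniform(0.0, 1.0, size=(n_images, 3))  # objective parameters per image
true_weights = np.array([2.0, -1.0, 0.5])              # assumed underlying relation
ratings = objective @ true_weights + 5.0 + rng.normal(0.0, 0.05, n_images)

# Ordinary least squares: ratings ~ objective @ w + b
design = np.column_stack([objective, np.ones(n_images)])
coef, *_ = np.linalg.lstsq(design, ratings, rcond=None)
fitted_weights, fitted_bias = coef[:3], coef[3]
```

Once such a functional relation is fitted, objective measurements alone can be used to predict subjective quality for system optimization.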
The signal-dependence of photographic grain is easily demonstrated by studying the first order statistics (mean and variance) of the scattered light in a coherent optical spectrum analyzer. It is shown empirically and from simple theoretical considerations that the granularity of the photographic material can be determined using a coherent measurement and can be interpreted as an inverse signal to noise ratio. The traditional linear Fourier domain filter can be shown by simple considerations to be ill-suited for dealing with the signal-dependence of photographic grain.
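The interpretation of granularity as an inverse signal-to-noise ratio can be sketched numerically. This is an illustrative model, not the paper's coherent-optical measurement: grain is simulated as signal-dependent noise whose variance grows with the mean exposure level, and the first-order statistics are computed directly.

```python
import numpy as np

# Illustrative signal-dependent grain model: noise variance is
# proportional to the mean level (an assumption for this sketch).
rng = np.random.default_rng(1)
mean_level = 100.0
samples = rng.normal(mean_level, np.sqrt(mean_level), size=10_000)

# First-order statistics of the scattered light.
signal = samples.mean()
noise = samples.std()
granularity = noise / signal   # inverse SNR: larger value = grainier
```

Because the noise scales with the signal, a single linear Fourier-domain filter tuned at one exposure level is mismatched at others, which is the shortcoming the abstract notes.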
A new recursive filter approach greatly simplifies real time implementation of a local area adaptive contrast enhancement scheme for imaging sensors (FLIR, TV). Local area contrast enhancement adaptively stretches the intensities in each local area of a scene to the display luminance dynamic range. Even scenes possessing large global dynamic ranges (>40 dB) can then be squeezed into the limited dynamic range (20 dB) of a display without losing the vital local area contrast essential for target acquisition. This paper describes the recursive filter implementation (using charge coupled devices) of the local area contrast enhancement scheme and the resultant real time hardware. This module can accept standard 525 and 875 line TV compatible video from any source (FLIR, vidicon, video tape recorder, etc.). Several examples from video-taped FLIR imagery are included to demonstrate the effectiveness of this simple hardware.
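The recursive idea can be sketched in one dimension. This is a minimal software analogue of the scheme, not the CCD hardware design: a first-order recursive filter tracks the local mean along a scan line, and the deviation from that mean is stretched about the display mid-level. The filter constant, gain, and display range are invented parameters.

```python
import numpy as np

def local_area_enhance(line, alpha=0.9, gain=4.0, mid=128.0):
    """Sketch of recursive local-area contrast enhancement on one scan line.

    alpha sets the size of the "local area" (larger alpha = longer memory),
    gain stretches the local deviations, mid is the display mid-level.
    """
    local_mean = np.empty_like(line, dtype=float)
    m = float(line[0])
    for i, x in enumerate(line):
        m = alpha * m + (1.0 - alpha) * x   # recursive local-mean estimate
        local_mean[i] = m
    # Stretch deviations from the local mean into the display range.
    out = gain * (line - local_mean) + mid
    return np.clip(out, 0.0, 255.0)
```

A slow ramp spanning a huge global range is absorbed by the local mean, while small local detail riding on the ramp is amplified into the display's limited range.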
The speed and parallel operations capabilities of optical data processors make them ideally suited to the automatic target identification problem. However, to be successful, a means for analyzing the correlations must be provided, thus a hybrid processor is implied. A system which utilizes both optical correlation and Euclidean distance classification for target identification will be detailed. Both operations are necessary here for successful target recognition because optical correlation in general yields results that vary slowly with parameter changes. These results are then supported by further signature classification to create the sensitivity that is otherwise lacking. This concept is independent of the way in which the data is acquired but simply utilizes the input data to realize a multiple feature extraction of the target. Optical correlation is performed between the acquired, multiplexed features and ideal (e.g. aircraft) features. This then produces a multi-point target signature in feature space which is recognized and identified by the "designator", a hard wired Euclidean distance classifier. Already stored in the designator are the signatures generated from the ideal feature correlation masks used to train and load the device. The designator is a hybrid computer; it stores up to 200 aim points in eight dimensions with class names, and sequentially tests these against an eight-channel analog input. It computes each distance and determines if this is smaller than the previous value in 10 microseconds and completes the entire analysis in 2 1/2 milliseconds. The results obtained are the names of the closest and the next closest designated class. Results of experiments performed using the designator for cloud detection and screening will be shown.
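The designator's decision rule can be sketched directly. This is an illustrative software model of the hard-wired logic, not its implementation: the aim points, class names, and input signature below are invented.

```python
import numpy as np

def designate(aim_points, names, signature):
    """Sketch of the designator: report the closest and next-closest
    named classes to an input signature by Euclidean distance."""
    distances = np.linalg.norm(aim_points - signature, axis=1)
    order = np.argsort(distances)
    return names[order[0]], names[order[1]]

# Invented aim points in eight-dimensional feature space with class names
# (the real device stores up to 200 such points).
aim_points = np.array([[0.0] * 8, [1.0] * 8, [2.0] * 8])
names = ["cloud", "aircraft", "terrain"]
closest, runner_up = designate(aim_points, names, np.full(8, 0.9))
```

The hardware performs the same sequential distance tests, completing each comparison in 10 microseconds and the full scan in 2.5 milliseconds.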
This paper describes techniques for detecting changes between two infrared (IR) images of an area, taken under different ambient conditions. Change detection requires accurate registration between the two images as well as photonormalization of one image with respect to the other. Because of the high contrast of IR imagery, registration between the two images can be accomplished accurately using cross-correlation. Diurnal temperature variations complicate the detection of valid changes between two IR images taken at different times of the day. In the first approach, targets are detected on each of the original images before differencing. A second technique is described for overcoming diurnal temperature variations by photonormalization.
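The register/photonormalize/difference pipeline can be sketched as follows. This is an illustrative reduction, not the paper's implementation: registration is a brute-force integer-shift search, photonormalization is a single linear gain/offset fit, and the threshold parameter is invented.

```python
import numpy as np

def detect_changes(ref, test, max_shift=3, thresh=3.0):
    """Sketch of IR change detection: register `test` to `ref` by
    maximizing cross-correlation over integer shifts, photonormalize
    with a linear gain/offset fit, then difference and threshold."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(test, dy, axis=0), dx, axis=1)
            score = float(np.sum(ref * shifted))   # cross-correlation at this shift
            if score > best_score:
                best_score, best = score, (dy, dx)
    registered = np.roll(np.roll(test, best[0], axis=0), best[1], axis=1)
    # Photonormalization: fit gain/offset so the registered image
    # matches the reference radiometrically.
    gain, offset = np.polyfit(registered.ravel(), ref.ravel(), 1)
    diff = ref - (gain * registered + offset)
    return np.abs(diff) > thresh * diff.std()
```

The linear fit absorbs a global diurnal gain/offset shift; residuals that survive thresholding are candidate valid changes.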
Coherent optical processing of multi-sensor imagery (with geometrical and intensity distortions) for pattern recognition and registration is considered. Sensor-invariant high information content image features are considered as well as the degree to which an aerial image need be converted to its true radar image for frame registration and map matching applications. Digital, electronic and optical edge enhancement, as well as the use of special features of storage-mode real-time spatial light modulators and parameter control in optical matched spatial filter synthesis are considered as adjuncts whereby the required final result can be achieved. The initial results of coherent optical correlations on actual multi-sensor imagery are reported.
A variety of new and improved sensors are evolving from advanced development programs, such as ESSWACS (Electronic Solid-State Wide-Angle Camera System), LOREORS (Long Range Electro/Optic Reconnaissance System), and the AN/AAD-5 FLIR system. The time has come to combine these and other sensor capabilities into a tactical reconnaissance operation which includes an effective real-time capability. A general approach to real-time reconnaissance employs several airborne sensors and includes both airborne and ground data management devices and procedures. Automatic (digital) data processing will help minimize the amount of irrelevant data presented to human observers. The human observer represents the final and essential filtering agent required to reduce the information rate to a level suitable for dissemination over data links for rapid (real-time) access to tactical commanders.
A laser recorder has been constructed which will record video data from a digital source onto 70mm dry silver or metalized film at an average rate of 6 megapixels per second. A cavity-dumped argon laser is A/O modulated, collimated, and scanned by a multifaceted pyramidal mirror through a 6-inch focal length imaging lens onto a film roller. The system incorporates unique high duty cycle optics which allow a duty cycle of 83 percent with full energy transfer and minimum mirror size. The optical train efficiency is greater than 30 percent. Line scan rates are continuously variable from 250 to 600 lines per second and are slaved to the external data source. Each line consists of 10,000 six-micron pixels. An area of film, 10,000 pixels by 10,000 lines, is rear projected onto a 20-by-20 inch display screen. Metalized film recordings are displayed within 15 seconds of exposure, and zero time delay is possible in future systems.
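The quoted figures are mutually consistent, as a quick check shows: 10,000 pixels per line at the top line rate of 600 lines per second gives the stated average rate of 6 megapixels per second.

```python
# Consistency check of the recorder's stated throughput figures.
pixels_per_line = 10_000          # 10,000 six-micron pixels per line
max_lines_per_second = 600        # top of the 250-600 line/s range
pixel_rate = pixels_per_line * max_lines_per_second  # pixels per second
```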
An optical recording technique which permits high bit-packing densities, extension of recording media dynamic range, and, most importantly, the recording and retrieval of extremely large digital pixel word values within a human-readable/machine-readable pictorial format has been formulated for imagery recording applications with conventional silver-based film, electrostatic materials, and metallized film materials.
The target screener/FLIR system and its performance were reported at NAECON-77. The target screener was designed and built by Honeywell for the Air Force in 1974. The system operated on imagery data from an AAS-27 sensor, detected man-made objects (MMOs), and cued the operator by displaying a symbol at each sector area containing MMOs. It achieved 92 percent probability of detection of MMOs and less than 3 percent probability of false alarm. In 1975-76 the target screener system was modified to accept FLIR data, and for the first time detection and cueing of MMOs in FLIR imagery was demonstrated. The system achieved 95 percent probability of detection with 5 percent probability of false alarm. Last year the target screener was modified again: the manual thresholds for the extraction of candidate MMOs were eliminated by autothresholding, an automated technique to extract edges and bright signals. The modified system achieved a 91 percent probability of detection with 4.3 percent probability of false alarm. Secondary screener techniques were developed to screen targets based on true size and shape. The overall target screener system, including the secondary screener, achieved 88.5 percent probability of detection and 2 percent probability of false alarm.
The requirements for an automatic target screener operating on imagery from a tactical thermal imaging system are derived from the use scenario. A design approach using analog front-end processing on the single-line video with subsequent digital feature generation and linear discriminant classification is discussed. The design includes an automatic threshold system for scene segmentation and will utilize moment or Fourier descriptors as discriminant features to achieve classification into five ground classes updated ten times per second.
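The feature-and-classifier stages can be sketched as follows. This is an illustrative reduction, not the system's design: simple central moments of a thresholded blob feed one linear discriminant per class, and the weight vectors and class names below are invented.

```python
import numpy as np

def moment_features(mask):
    """Simple moment descriptors of a segmented (thresholded) blob."""
    ys, xs = np.nonzero(mask)
    area = float(len(xs))
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()             # horizontal spread
    mu02 = ((ys - cy) ** 2).mean()             # vertical spread
    mu11 = ((xs - cx) * (ys - cy)).mean()      # diagonal covariance
    return np.array([area, mu20, mu02, mu11])

def classify(features, weights, classes):
    """Linear discriminant classification: one weight vector per class."""
    scores = weights @ features
    return classes[int(np.argmax(scores))]
```

In the real system such a pipeline would run once per frame on each segmented candidate, updating the five-class decision ten times per second.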
Automatic target cueing systems are reaching advanced stages of development. As now envisioned, they could provide the operator of a FLIR or TV system aboard an aircraft with an audible warning of targets in his field of view, show their class and location on his display, and add a priority assignment to aid in his response. The result of this machine assistance is expected to improve his target acquisition rate and to decrease his reaction time, thereby improving his chances for survival. Acceptance of cueing systems for aircraft use will require demonstration of adequate performance, physical characteristics, and cost. As demonstrated in the laboratory, performance has achieved levels of interest to potential users. Testing aboard an aircraft, with man in the loop, is the next crucial step in the acceptance process. It is anticipated that several such tests will begin within 2 years. For the aircraft or combat vehicle application, the constraints of size, weight, power consumption, and cost are vital considerations. Westinghouse is currently involved with three distinct generations of hardware development for cueing systems. These include a real-time breadboard system, assembled in 1974 for the Army's ARRADCOM; an engineering model cuer, which will provide improvements in both performance and physical characteristics; and the beginnings of the large-scale integration of cueing circuits using charge-coupled devices. The breadboard system is being used to measure and improve performance statistics, and as a means for determining appropriate system parameter values. The system consists of an equipment rack containing an analog scan converter for selecting video frames, a digitizer, a hardwired digital image preprocessor for extraction of key image features, and a minicomputer to collect the data associated with individual targets (segmentation) and perform classification. Targets are indicated by characters grouped on the edge of the video display.
The speed of the minicomputer limits the breadboard field of view to an area of 100 by 100 image elements, which is processed approximately twice per second. The engineering model will include a digital input buffer for full-frame operation at rates up to 10 frames per second. It will also include a fully redesigned preprocessor, special-purpose segmentation hardware, and a militarized general-purpose computer. Audible warnings will be provided over the operator intercom system, and visible cues will be added to his sensor display. Redesign of the preprocessor has reduced its size by a factor of six and its power consumption by a factor of ten relative to the breadboard unit. Various operations and thresholds can be adjusted under software control on a frame-by-frame basis.
Automatic image processing techniques for detection and recognition of military targets in imagery have recently made great strides. This paper describes the general approaches used and discusses their recent evolution and results. A very advanced approach which is implementable in CCDs is described in detail and limitations of the technology and likely future efforts are discussed.
An image segmentation technique using prototype similarity is described here. The prototype similarity is a method for transforming an attribute image into a set of symbols, each of which represents the relationship of a local region to other parts of the image. It consists of two main steps: (1) prototype generation and (2) inference. Generating prototypes is equivalent to finding a maximal set of mutually dissimilar regions using a similarity relation. A similarity relation is a symmetric, reflexive binary relation and not an equivalence relation. It is not bound by metric properties. The generated set of prototypes is used to transform the attribute image into a symbolic image. Some a priori information about the scene is used to infer meaning of each cell in this symbolic image. The segmentation results of this technique on FLIR images are included.
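The two main steps can be sketched on scalar region attributes. This is an illustrative reduction (real attribute images would use vector attributes and a richer similarity measure): the similarity relation here, |a - p| <= threshold, is reflexive and symmetric but not transitive, so it is not an equivalence relation.

```python
def generate_prototypes(attributes, threshold):
    """Greedily build a maximal set of mutually dissimilar attributes."""
    prototypes = []
    for a in attributes:
        # keep `a` only if it is dissimilar to every existing prototype
        if all(abs(a - p) > threshold for p in prototypes):
            prototypes.append(a)
    return prototypes

def symbolize(attributes, prototypes, threshold):
    """Transform each attribute into a symbol: the set of prototype
    indices it is similar to (the symbolic image of the method)."""
    return [tuple(i for i, p in enumerate(prototypes)
                  if abs(a - p) <= threshold)
            for a in attributes]
```

The resulting symbolic image would then be interpreted cell by cell using a priori scene information, as the abstract describes.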
Charge Coupled Devices (CCDs) have been available commercially for several years and have been designed into several aerial imaging systems. In most cases, the resolution achievable has been limited by the CCD element size, which has in turn been limited by the available MOS fabrication technology. Recent advances in fabrication technology allow the fabrication of much smaller devices, so the system designer must now ask what the optimum element size for the CCD is. Three simple analyses from different viewpoints are presented here. All indicate that for typical long-range aerial reconnaissance missions, the optimum size is in the neighborhood of 10 micrometers, and that the optimum is only weakly dependent on the variables involved.
The Optical Systems Division of Itek Corporation has developed techniques for analytical and experimental evaluation of image performance that have proven especially well suited to verifying the design of electro-optical focal planes. Analytical techniques used in the design process include computer programs that calculate focal plane signal parameters versus scene and camera system parameters. The results are applied to calibrated aerial scenes to produce pictorial simulations of the imagery expected from the focal plane design. These simulations may then be examined by photointerpreters to establish the quality of the imagery. The focal plane breadboard or prototype is evaluated experimentally by imaging a calibrated scene onto the sensor using a standardized lens and light source and collecting image data with a specially designed data acquisition facility. This facility includes a computer that processes the sensor data and produces a hard copy image for examination within minutes of exposure at the sensor. The performance of the focal plane hardware is thus easily compared with the predicted performance.
The Electronic Wide Angle Camera System (EWACS) enables for the first time the real-time generation of high-resolution, wide-angle (140°), high-speed, low-altitude, tactical aerial imagery and auxiliary data for display at a ground station. The high performance and broad angular coverage are achieved by using a scanning prism and imaging the scene onto a linear charge-coupled device (CCD). The solid-state, electro-optical recon sensor is suitable for use in high performance manned aircraft or RPVs, given its small size (0.7 ft³), weight (35 lb), and power (95 W) requirements. The recon sensor demonstrated a bench performance level equivalent to the resolution of a 0.7-foot target at a 1000-foot range for a low-contrast, low-light-level condition, while the overall system capability is reduced by a factor of about two due to scan converter limitations. These performance levels were maintained throughout dynamic analyzer and flight testing programs. The EWACS evaluation has validated the system concept and verified that program objectives have been met, as demonstrated throughout 14 flights which provided reliable operation and excellent imagery of resolution and tactical targets.
Real time reconnaissance imaging systems development has to date concentrated on data collection, transmission and ground reproduction of the collected data. Systems typical of this class include Quick Strike Reconnaissance (QSR), Electronic Wide Angle Camera System (EWACS), Long Range Electro-Optical Reconnaissance System (LOREORS), and Electronic Solid State Wide Angle Camera System (ESSWACS). This paper is concerned with the application of Laser Beam Recorders (LBR) for the ground reproduction in near real time (15-30 seconds) of information collected by these systems and the tested QSR/AAD-5 system. The status of the ESSWACS system is reviewed and application of the LBR to that system emphasized.
A real-time reconnaissance system for tactical application based on the Itek KA-102A pod-mounted LOROP Camera is under development by Itek Corporation's Optical Systems Division. The 66-inch focal length, f/4 lens provides 4-degree field coverage for a linear CCD sensor array operating in the pushbroom mode with cross-track pointing elevation angles from 65 to 90 degrees. Image data from the sensor arrays is digitized in parallel, compressed, and multiplexed into a 50-megabit rf link. Ground-based image reconstruction equipment includes an Itek Image Processor and Real Time Display (RTD). The processor reconstructs and calibrates imagery from the compressed digital sensor data received over the rf link. The RTD records imagery in the form of a transparency on 70-millimeter metalized plastic film. The recorded image is displayed by rear projection on a large viewing screen within seconds of exposure at the sensor. This paper discusses the tactical application of the real time KA-102A EO LOROP Camera. Individual components of the system are described in detail. Performance projections in terms of system operating envelope and ground image quality are presented.
The technology necessary to exploit real-time and near real-time reconnaissance data from both manned and unmanned airborne platforms is available. This is true with regard to sensors, wideband data links, mass data storage, and display techniques. The technical challenge for system designers is how to integrate these elements into usable and cost-effective reconnaissance systems. Sperry Univac has developed and integrated real-time and near real-time data link systems for operational use by the Air Force and Army over the past 10 years. The systems have been operated in both the 10-GHz (I/J-band) and 15-GHz (J-band) ranges, utilizing data rates from 100 kHz to 274 Mb/s (analog and digital). Both simplex and full-duplex operation have been implemented. This paper briefly describes some of the technical capabilities (secure, antijam, range, etc.) of the operating systems presently being evaluated and/or used (i.e., strike RPVs, UPD-4, QSR, MODATS, SOTAS, AN/UPQ-3(A), AN/ASQ-159, AN/MSQ-108, BGM-34C, etc.). It will also attempt to summarize the present state of the art and possible future trends as a result of actions by the Department of Defense Data Link Committee.
This proposal adapts the principles of stereoscopy, using optical and electronic technology, to airborne reconnaissance. The basic system design equips drone aircraft with two television cameras, related ancillary equipment, and a radio link for transmitting television signals to other airborne stations and to ground-station television monitors. The television cameras are alternately rotated 90° side-to-track of vehicle motion, package-mounted, and gyroscopically stabilized. Depending on system design, the delay-point-of-focus of the cameras can be fixed or variable. For variable control, a radio-controllable sine-cosine computer function driving a fore-aft camera pivot can vary the delay-point-of-focus and angular convergence. The two television cameras can also be gimbal-mounted to permit radio-controllable, simultaneous lateral traversing left and right of the direction of track. A memory-type subsystem at each station, integral to one of the two television monitors, complements the delay-point-of-focus function for simultaneous stereo-picture display. Refractive, reflective, or polarized-type stereo glasses (the polarized type with matching polarized cover glass plates for each television monitor), depending upon system requirements, provide additional optical adaptation for stereo viewing by photo interpreters.
Operational interpretation of SAR imagery is degraded by display/processing limitations. For example, the inability to maintain optimum focus over an entire image often results in severe impulse response degradation. Conventional hardcopy film recording can handle only a portion of the available SAR dynamic range; the benefits of a wide dynamic range are lost. Holographic viewers offer a means of overcoming these problems as well as offering potential benefits in other areas, such as sidelobe weighting and artifact removal. Most important, these operations can be performed interactively by the interpreter according to his requirements. The benefits of holographic viewers are unique, not being shared by other display devices -- film and CRT displays. The capabilities of current holographic viewers are discussed. Comparisons in terms of cost and utility are made with other display media. Recommendations for further development are presented.
The experiment described in this paper was designed to evaluate, in a statistically valid manner, the relative image quality of nine digital images (algorithms) and two film images (laser print and duplicate positive). Three important (and basically different) modes of presenting imagery were compared: a duplicate positive film, a laser film print of a digital image, and a CRT display of the digital images. Imagery was selected in coordination with the Army to assure the tactical relevance of the questions and imagery chosen. Five operational interpreters, having from two to five years experience, took part in the experiment as image quality evaluators. For any particular image question, there were eleven (two film and nine digital) versions of the image on which to judge relative image quality. Each interpreter examined all eleven versions in a pairwise (i.e., two-at-a-time) fashion and specified both the version preferred and the strength of the preference. The interpreter was asked to evaluate the imagery relative to a particular question. For example, a question might be in which image could the interpreter most easily see the wheels on a vehicle. In addition to target-specific questions, there were also general questions on the overall interpretability of scenes.
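A simple way to turn such pairwise judgements into a relative ranking can be sketched as follows. This is an illustrative scoring scheme, not the experiment's analysis: each judgement names the preferred version, the other version, and a preference strength, and summing signed strengths gives a relative ordering. The version names and judgements below are invented.

```python
from collections import defaultdict

def rank_versions(judgements):
    """Rank image versions from (preferred, other, strength) judgements
    by summing signed preference strengths."""
    score = defaultdict(float)
    for preferred, other, strength in judgements:
        score[preferred] += strength
        score[other] -= strength
    return sorted(score, key=score.get, reverse=True)

judgements = [
    ("dup_positive", "algorithm_1", 2.0),
    ("algorithm_1", "algorithm_2", 1.0),
    ("dup_positive", "algorithm_2", 3.0),
]
ranking = rank_versions(judgements)
```

With eleven versions, each interpreter contributes one such judgement per pair per question, and the aggregated scores place all versions on a common relative quality scale.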
The Digital Image Analysis Laboratory (DIAL) at USAETL is a large scale interactive digital image processing system designed for research in photo- and radar-interpretation, feature extraction, mapmaking, and other remote sensing activities. General techniques such as search, change detection, dynamic range adjustments, geometric corrections, and mensuration have been implemented and applied to digital imagery for evaluation. In particular, the DIAL system was used for a series of four experiments designed to evaluate the utility of soft copy (CRT) exploitation of digital imagery for tactical field army applications. The results of those four experiments were conducive to continued development of the soft copy capability. At USAETL that continued development has taken the form of a van-mounted system, now being implemented, consisting of a control minicomputer, high density tapes, and an advanced digital image display system. The advanced display system provides rapid search, magnification, dynamic range, and filtering capabilities as well as many other now-standard image processing functions. It is expected that the van-mounted system will provide at least a partial solution to the very basic problems encountered in the manipulation and display of very large digital images.
The Harassment Weapons System (HWS) program and the Vampire sensor technology support program are evolving a mini-drone strike capability which will be effective against a wide variety of targets. The key features which make an autonomous, expendable mini-drone weapons system extremely attractive in a total force context are (1) very high initial surge sortie rates, (2) survivability, (3) flexibility, (4) selectivity, (5) countermeasures resistance, and (6) low cost per target kill. The paper provides an overview of a system which is unique in that it can autonomously search, detect, classify, and commit to an attack without a man in the loop.
IMFRAD is an airborne radar system providing unique capabilities in the real-time detection of targets immersed in foliage, targets screened by foliage, as well as targets in open terrain. Careful selection of operating frequencies, combined with state-of-the-art digital processor technology, provides the airborne commander with the ability to penetrate foliage, map terrain, identify moving targets and detect changes in battlefield activity levels. The system may be operated in broadside, aft or forward-looking modes. IMFRAD hardware is presently in the final stages of checkout and in early 1978 will be installed in a C-141 aircraft preparatory to a year of flight testing. During the test program the system will be flown against a variety of real targets at a number of sites ranging from open desert to tropical forests. The flight test plan is aimed at establishing a comprehensive data base (target cross sections, foliage attenuation and refraction characteristics, etc.) for design of IMFRAD engineering models. This paper discusses the operational potential of IMFRAD capabilities. It reviews the status of the current hardware and the objectives of the upcoming flight test program.