This paper describes a new RS-170 video camera for use in the military environment. The camera is designed so that it can be reconfigured for various defense applications. A functional description of the detector, an area array Charge Coupled Device (CCD), and the video processing circuitry is provided. The camera's Signal-to-Noise Ratio (SNR) and various other important performance parameters are discussed.
CAI, a Division of RECON/OPTICAL, INC. has developed an Airborne Minefield Detection and Reconnaissance System (AMIDARS), which provides passive day/night reconnaissance of minefields, armored/ground vehicles, troop concentrations, bridges/railways and buildings from unmanned reconnaissance vehicles. This paper discusses the results of the integration flight tests. Selected imagery is presented and evaluated to assess the system's performance.
Loral Corporation is developing a generic imagery processing and exploitation system to meet the requirements of every level of imagery intelligence operations. The system's modular hardware and software design allows for easy reconfiguration and tailoring to the user's needs.
This paper describes the French groundstation for the CL289 UAV, alias AN/USD 502. The main feature of the MATRA groundstation is an original concept based on both hardcopy and softcopy, where detection is made on film in real time, while interpretation is performed on digital data images.
A Reconnaissance Management System (RMS) for use in a Low Intensity Reconnaissance Aircraft (LIRA) is described. This RMS includes all system control functions (such as sensor selection and pointing), pod management functions (doors, turrets, etc.), as well as processing of the video output of the selected sensor to produce a standard format video signal for viewing and recording. The LIRA/RMS was required to utilize to the greatest extent possible existing equipment and designs in order to expedite development and allow for concept validation flight demonstrations on an accelerated schedule. The LIRA RMS which was flight demonstrated is described, as well as system enhancements which have been added since the flight demonstrations and others which are currently being developed.
Application of instrumentation recorders for data acquisition in hostile environments has for years been accomplished by means of longitudinal recorders specially designed for that application. Two recent trends have impacted the applicability of these machines: the need for record times longer than can be provided by the longitudinal machines and the trend in the instrumentation industry to utilize digital recording techniques.
The Joint Service Imagery Processing System (JSIPS) provides the services with an imagery receipt, processing, exploitation, and reporting capability that is responsive to the requirements of the tactical commanders. JSIPS combines existing and modified hardware and software with all-source digital imagery processing and exploitation to provide intelligence support that enhances the commander's ability to act within the enemy's decision cycle. JSIPS provides not only tactical all-source imagery processing but also integrates the Tactical, National, Auxiliary and Secondary imagery sources into one resource available to the image analyst for production of the commander's comprehensive report. JSIPS has been designed as a modular architecture which is readily expandable and reconfigurable to meet individual user needs. Pre-planned capabilities, along with allocated space, have been provided for growth in processing, throughput, and I/O requirements. JSIPS provides interoperability with several other imagery programs such as IDPS, TRAC, SYERS, IITS/FIST, NITF, UPD-8, and ATARS. It provides the ability to add additional imagery programs with little or no change in the JSIPS design. JSIPS provides a softcopy and hardcopy exploitation capability for all military branches with output to their respective intelligence centers. The JSIPS system design, as shown in figure 1.0-1, has the following segments: Tactical Input Segment (TIS), National Input Segment (NIS), Exploitation Support Segment (ESS), Softcopy Exploitation Segment (SES), Hardcopy Exploitation Segment (HES), Communications Support Segment (CSS), and System Support Segment (SSS). JSIPS consists of standardized modules and segments with the collective capability to receive, process, exploit and disseminate imagery and reports based on multi-source imagery from the Tactical and National inputs. The overall JSIPS system operates in either a shelterized or non-shelterized configuration.
Tailored JSIPS systems are packaged in a minimum number of deployable shelters. Primary imagery inputs are from Tactical and National sources. The system provides all processing necessary to support both softcopy and hardcopy imagery exploitation. The system design is expandable, contractible, modular and segmentable. Modularity consists of packaging segments in functional units so they may be added or deleted when and where required to meet specific user requirements. Segmentability is achieved by allocating system requirements to a single functional area known as a segment. The design uses off-the-shelf hardware and software where practical and cost effective. The system is capable of operating either in a stand-alone mode while deployed or while tethered to and supporting fixed intelligence facilities. Each segment provides all functions and capabilities needed to satisfy the performance requirements for that segment. Functional and physical interface documents define the exchange of information and data between segments and with external systems. To enhance future Pre-Planned Product Improvements (P3I), the architecture promotes technology transparency within JSIPS through bus type structures. Evolutionary development allows for emerging sensors and platforms to be accommodated by substituting or adding a minimum number of plug-in hardware or software interface modules to an operational production model. Technology transparency allows for incorporation of new technologies as they become available.
Significant strides have been made during the past few years in the development and standardization of rotary head helical scan digital recorders for reconnaissance applications. Two years ago, at the 30th Annual International Symposium on Optical & Optoelectronic Applied Science & Engineering in San Diego, this recording technology was reviewed. This paper provides an update of the progress of that development and standardization process. It begins with a brief review of the reconnaissance requirements, as viewed by the recording industry, then and now. Following this, an update of candidate recorder types, including the magnetic media and cassettes, fulfilling these requirements is discussed. Some potential problems facing the industry are then mentioned, and lastly, a brief look into the future is provided.
This paper addresses the C3 aspects of intelligence data collection, processing, and dissemination. The paper approaches the subject from the viewpoint of three networks: 1) the local area network on a platform with a gateway between the platform and the external world, 2) a wide area transmission network between the platform and the final information user, and 3) a local area network at the user end to pass, process, and disseminate the information. The paper addresses the command and control management of all the networks involved, the threat assessments of both friendly and enemy threat scenarios, and the reconfiguration of the network paths because of a dynamic external environment. Major premises discussed in the paper are: 1) The design environment of these networks is threat driven; 2) Interoperability is a major design driver; 3) Command and control devolution is an important consideration in the final choice of an architecture; 4) Information flow control is the major element of design in dealing with a dynamic environment.
The requirement for near real time intelligence in today's tactical environment is driven by the adversaries' multiple lines of communication (LOC) and the highly dynamic forces operating within a small geographic area. Military forces within the Warsaw Pact, and even Third World countries, have highly mobile, rapidly deployable weapon systems. With their established LOCs they can bring these weapons to bear against friendly forces within minutes. Keeping track of these forces and their movements is a most formidable task for our tactical forces. However, we do have the resources to accomplish the mission. The tendency has been to concentrate on the European scenario, but this does leave a void in the arsenal for fighting brush fires or limited conflicts. What is lacking is a modularized, mobile, self-sustaining, ground processing and intelligence fusion center that can be deployed at a moment's notice to a forward bare base and be operational within three hours after arriving. The purpose herein is to define a modularized, tactical ground processing and fusion center that can be tailored to satisfy the requirements of each of the Services for a lightweight, highly mobile system. A system definition describing the physical characteristics of the system is presented, along with a concept of operation. Also presented is a brief discussion of the system architecture and the interaction among the ground mission supervisors (GMS). Finally, several methods of system implementation and the communications for tasking and product dissemination are discussed.
Most military aircraft are equipped with external fuel tanks that increase range and, thereby, extend mission profiles. Considerable rationale exists for using fuel tank structures as a housing for special purpose equipment such as a reconnaissance system. Foremost is the availability of all technical, tooling, manufacturing, and test data. If the external shape is not significantly altered, the equipment pod can be submitted for installation and flight test based upon similarity and analytical margins of safety. Thus, the resultant cost savings and delivery schedule improvement can be significant. External fuel tanks are designed for high volume production as shown in Figure 1. The United States Air Force generally prefers the three-section, low assembly time design, whereas the United States Navy favors a monocoque construction design having access doors for servicing internal components. Either concept can readily be converted to house "special purpose" equipment instead of fuel, to enable reconnaissance, photographic, countermeasures, and other military missions (reference Figure 2).
This report discusses the use of magnetic suspension to isolate sensors from the sometimes severe vibrational environment encountered on airborne reconnaissance systems. Techniques developed by Aura Systems in the design and fabrication of magnetically suspended isolation systems show great promise for this particular application. This paper will cover the performance results of the Magnetic Gimbal Fabrication and Test (MGFT) project, which consisted of the fabrication and performance testing of a single-axis magnetic gimbal. Also covered is the progress made by Aura Systems in the miniaturization of the magnetic and electronic components involved with magnetic suspension isolation systems. The MGFT results showed line-of-sight (LOS) accuracy from 3 to 8 µrad with an angular disturbance of 48 rad/sec² (the amount produced by 10 g lateral disturbances), achieving vibration rejection of up to 79 dB. Since the completion of the MGFT program in November 1986, a vast reduction in the size of the magnetic bearings and associated electronics has been achieved, making them weight-compatible with current mechanical gimbal systems.
There are a number of organizations with a need for large-scale (in the range of between 1:100 and 1:1000) aerial photographs. A large percentage of these organizations are also budget limited. A program of low altitude, large-scale reconnaissance (LALSR) developed and used at Brigham Young University may be useful in many of the applications where these fiscal and safety limitations exist. The development, criteria, methodology, and use of this program are outlined in this presentation.
There is overwhelming evidence that image analysis is rapidly moving into the digital domain. Computer workstations serve to analyze pixel arrays, thereby providing the user with automation opportunities otherwise not available. Sensing of visible light imagery has made great strides into the digital arena as well. However, for the time being high resolution metric cameras and film will remain very important means of imaging the earth's surface, the environment and objects in industrial settings for non-real time applications. The need therefore exists to retain the capabilities of film imagery while submitting it to digital analysis. We discuss a solution to this need in the form of a system for on-line digitization of film and interactive analysis of the digitized pixel arrays. The result is a combination of state-of-the-art film imaging with state-of-the-art softcopy image analysis. Key Words: Photogrammetry, reconnaissance imaging, photography, softcopy imagery, computer-based photo-interpretation.
The advent of real-time electro-optic reconnaissance heralded by the Advanced Tactical Air Reconnaissance System (ATARS) and the Joint Service Imagery Processing System (JSIPS) will place a heavy burden on the ground exploitation of the collected imagery. Image enhancement, rotation, warping, feature extraction, and mensuration are some of the imagery functions that can be automated. However, current military processing power is not sufficient to allow these functions to be performed in real time. Parallel processing techniques offer a solution for image exploitation in the real-time and near real-time domains. Control Data's Parallel Modular Signal Processor (PMSP) is a single chassis, militarized package combining multiple high-speed vector-processing modules with control, storage and interfacing modules. The PMSP performs Fast Fourier Transforms (FFT) and other video and signal-processing functions at execution rates in excess of 600 million operations per second. The PMSP is very flexible and programmable, capable of handling multiple sensors separately or combined, while providing many new possibilities for image data fusion algorithm applications. This level of performance will allow real-time image processing for today and tomorrow.
Most people see little similarity between a battlefield manager and a natural resource manager. However, except for the element of time, many striking similarities may be drawn. Indeed, few contrasts are greater than that between the tranquil scenes of mountain scenery, forests, rivers or grasslands and bomb-scarred battlefields where survival is often the prime objective. The similarities center around the basic need for information upon which good decisions may be made. Both managers of battlefields and of natural resources require accurate, timely, and continuous information about changing conditions. Based on this information, they each make decisions to conserve the materials and resources under their charge. Their common goal is to serve the needs of the people in their society. On the one hand, the goal is victory in battle to perpetuate a way of life or a political system. On the other, the goal is victory in an ongoing battle against fire, insects, disease, soil erosion, vandalism, theft, and misuse in general. Here, a desire to maintain natural resources in a productive and healthy condition prevails. The objective of the natural resource manager is to keep natural resources in such a condition that they will continue to meet the needs and wants of the people who claim them for their common good. In this paper, a little history of some of the quasi-military aspects of resource management is given, the different needs for information are compared, and current uses of data acquisition techniques are reviewed. Similarities and differences are discussed and future opportunities for cooperation in data acquisition are outlined.
The subject of imagery evaluation as it applies to electro-optical (EO) sensor performance testing standards is discussed. Some of the difficulties encountered in the development of these standards for the various aircraft Line Replaceable Units (LRUs) are listed. The use of system performance testing is regarded as a requirement for the depot maintenance program to ensure the integrity of total system performance requirements for EO imaging systems such as the Advanced Tactical Air Reconnaissance System (ATARS). The necessity for tying NATO Essential Elements of Information (EEIs) together with Imagery Interpretation Rating Scale (IIRS) numbers is explained. The requirements for a field target suitable for EO imagery evaluation are described.
A new electro-optical sensor concept is presented. With this "Conformed Panoramic" sensor, a uniform GSD is implemented in both directions. Unwanted redundancy and "S" curve scan geometry are eliminated, and the duty cycle approaches unity. These features make possible an electronically rectified, uniform-image-scale, minimum-bandwidth collection system.
Using atmospheric modulation transfer function area (MTFA) as a single-valued numerical criterion for image quality propagated through the atmosphere, a statistical study of atmospheric imaging data has led to the determination of regression coefficients that quantitatively predict the effects of windspeed, air temperature, and relative humidity on image quality as a function of wavelength over the 400-1000 nm wavelength region. Utilization of this procedure is quite simple: one plugs the expected values for windspeed, air temperature, and relative humidity into the regression expression for MTFA. The larger the expected MTFA, the better the expected image quality. Data for desert atmospheres have been presented previously. Here, the model for non-desert atmospheres is presented. Preliminary experimentation indicates the accuracy of the present model is quite good.
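The prediction procedure described above can be sketched as a simple linear model. The coefficient values below are hypothetical placeholders, not the fitted coefficients from the study; only the structure (plug in weather parameters, read out predicted MTFA) follows the abstract.

```python
# Hedged sketch: predicting MTFA from weather parameters with a linear
# regression model. The coefficients are HYPOTHETICAL illustrations; the
# paper's fitted values are not reproduced here.

def predict_mtfa(windspeed_ms, air_temp_c, rel_humidity_pct,
                 coeffs=(0.50, -0.010, 0.004, -0.002)):
    """Return a predicted MTFA from a simple linear model.

    coeffs = (intercept, b_wind, b_temp, b_humidity) -- all illustrative.
    A larger predicted MTFA implies better expected image quality.
    """
    b0, bw, bt, bh = coeffs
    return b0 + bw * windspeed_ms + bt * air_temp_c + bh * rel_humidity_pct

# With a negative wind coefficient, higher windspeed predicts lower MTFA,
# i.e. poorer image quality through the atmosphere.
calm = predict_mtfa(windspeed_ms=2.0, air_temp_c=20.0, rel_humidity_pct=40.0)
windy = predict_mtfa(windspeed_ms=12.0, air_temp_c=20.0, rel_humidity_pct=40.0)
```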
The ever expanding uses of remote sensing continue to drive requirements in industry, Government, and the military for an imagery data base and processing system that provides timely, flexible support to imagery analysts and yet is simple to use. This paper describes a system called the Digital Enhancement Data Base System (DEDS) that allows real-time image processing, storage, display, and enhancement of multisensor imagery, charts, and textual data. It is an imagery analyst's interactive "shoe box" that can replace hundreds of pounds of hardcopy imagery and collateral information by storing that data in digital, video, and analog forms. The system contains a graphics package that allows annotation of the data to be stored. A split-screen display feature allows side-by-side comparison of new imagery or data with collateral information retrieved from the data base. Furthermore, information in the data base can be recalled and distributed via local and wide-area communications networks. The complete system is designed to operate within ruggedized, transportable cases.
The Model 324 Advanced Technology RPV system was designed and developed to carry a 250-pound reconnaissance payload over a 1400 nautical mile range at speeds of Mach 0.8 plus. The stability of flight, non-emissive navigation, and autonomous flight via a microprocessor mission logic control unit on-board computer system make the vehicle a true "reconnaissance platform". This paper explores the vehicle from concept through flight test completion and includes a discussion of payload considerations for unmanned vehicles.
Linear arrays have been optically butted to increase the resolution and swath in imaging applications [1,2]. In this paper the authors describe a new configuration for the optical butting of four N x M matrix arrays, providing an array of 2N x 2M pixels. The composite array can provide an improved resolution by a factor of 2 without doubling the required field of view. In a matrix array the row pixels are separated by the column shift registers and the column pixels are separated by the channel stoppers. The areas corresponding to these shift registers and channel stoppers remain unexposed and hence unused. The proposed scheme utilizes these areas for the optical butting of four matrix arrays. The arrangement consists of a cubic prism having two diagonal beam splitter surfaces and two sets of area arrays A,B and C,D. The incident radiation on the prism is divided into four equal parts with the help of these two splitting surfaces and is allowed to fall on the arrays A,B and C,D. The arrays A and B are offset horizontally to provide coincidence of the centers of the row pixels of array A with the centers of column shift registers of array B. The arrays C and D are also offset in exactly the same manner. The two sets of arrays A,B and C,D are again offset vertically to cause coincidence of the center of the column pixels of set A,B with the center of the channel stoppers of the set C,D. The composite array developed in this way has double the number of row and column pixels with little overlapping of pixels.
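The sampling geometry described above can be illustrated by interleaving the four sub-array outputs into one composite grid. This is a minimal sketch only: the particular even/odd mapping of A, B, C, D to composite rows and columns is an assumption for illustration, and the actual readout ordering depends on the device electronics.

```python
# Hedged sketch: four N x M arrays, with B offset horizontally from A and the
# C,D pair offset vertically from A,B, interleave into a 2N x 2M composite.
# The assignment of each array to even/odd rows and columns is assumed.

def interleave_quads(A, B, C, D):
    """Merge four N x M pixel arrays into one 2N x 2M composite grid."""
    n, m = len(A), len(A[0])
    comp = [[0] * (2 * m) for _ in range(2 * n)]
    for i in range(n):
        for j in range(m):
            comp[2 * i][2 * j] = A[i][j]          # A: row-pixel centers
            comp[2 * i][2 * j + 1] = B[i][j]      # B: over A's shift registers
            comp[2 * i + 1][2 * j] = C[i][j]      # C: over A's channel stoppers
            comp[2 * i + 1][2 * j + 1] = D[i][j]  # D: doubly offset
    return comp

# Four 2 x 2 sub-arrays yield one 4 x 4 composite: double the pixel count
# in each direction without doubling the field of view.
A = [[1, 2], [3, 4]]; B = [[5, 6], [7, 8]]
C = [[9, 10], [11, 12]]; D = [[13, 14], [15, 16]]
comp = interleave_quads(A, B, C, D)
```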
Imaging sensor performance is discussed as a function of compounded degradations of the image signal. Results of a computer model for a very long standoff reconnaissance Electro-Optical (E-O) sensor are presented, revealing relative contributions by various degradation sources. Current E-O reconnaissance sensors and displays are surveyed, suggesting two additional, often overlooked sources of image signal degradation. The two sources, image subsampling motivated by display limitations and random bit errors during transmission and recording, are analytically discussed.
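Compounded degradations of the kind surveyed above are commonly modeled by cascading component MTFs: the system response at a spatial frequency is the product of the contributors (optics, detector, motion, atmosphere, and so on). The Gaussian rolloff form and the parameter values below are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: cascading independent degradation sources by multiplying
# their MTFs. Gaussian forms and characteristic frequencies are illustrative.
import math

def gaussian_mtf(f, f0):
    """Illustrative Gaussian MTF rolloff with characteristic frequency f0."""
    return math.exp(-(f / f0) ** 2)

def system_mtf(f, char_freqs):
    """System MTF at frequency f as the product of component MTFs."""
    mtf = 1.0
    for f0 in char_freqs:
        mtf *= gaussian_mtf(f, f0)
    return mtf

# Each added degradation source lowers the overall response at a given
# spatial frequency (units here are arbitrary, e.g. cycles/mrad).
f = 20.0
optics_only = system_mtf(f, [40.0])
full_chain = system_mtf(f, [40.0, 60.0, 80.0])  # optics + two more sources
```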
Real-time viewing of high resolution infrared line scan reconnaissance imagery is greatly facilitated using Honeywell's Real Time Display in conjunction with a D-500 Infrared Reconnaissance System. The Real-Time Display (RTD) provides the capability of on-board review of high resolution infrared imagery using the wide infrared dynamic range of the D-500 infrared receiver to maximum advantage. The scan converter accepts, processes, and displays imagery from four channels of the IR Receiver after formatting by a multiplexer. The scan converter interfaces with a standard RS-170 video monitor. Detailed review and on-board analysis of infrared reconnaissance imagery stored on a videotape is easily accomplished using the many user-friendly features of the RTD. Using a convenient joystick controller, on-screen mode menus, and a moveable cursor, the operator can examine scenes of interest at four different display magnifications using a four step bidirectional zoom. Imagery areas of interest are first noted using the scrolling wide field display mode at 8x reduced display resolution. On noting an area of interest, the imagery can be marked on the tape record for future recovery and a freeze frame mode can be initiated. The operator can then move the cursor to the area of interest and zoom to higher display magnification for 4x, 2x, and 1x display resolutions so that the full 4096 x 4096 pixel infrared frame can be matched to the 512 x 512 pixel display frame. At 8x wide field display magnification the full line scanner field of view is displayed at 8x reduced resolution. There are two selectable modes of obtaining this reduced resolution. The operator can use the default method, which averages the signal from an 8 x 8 pixel group, or it is also possible to select the peak signal of the 8 x 8 pixel block to represent the entire block on the display.
In this alternate peak-signal display the wide field can be effectively scanned for hot objects which are more likely to be candidate targets. The intermediate 4x and 2x zoom steps are very useful in maintaining operator orientation in examining target clusters and industrial complexes. The four operating modes of the RTD are described and their use to the operator on a typical mission is outlined. Some installation details are given. The RTD as part of a complete D-500 Infrared Linescan Reconnaissance System is now being installed on a Beech 1900 Environmental Control Aircraft to monitor pollution in very sensitive and commercially important marine ecologies. Its application on military reconnaissance missions will allow the normal review of recorded videotape imagery at a ground station immediately after return of the aircraft to base. The areas of highest interest will have been previously marked during the airborne real-time review by the operator. The RTD packages into only two Line Replaceable Units (LRUs), a Scan Converter, and a Control Unit which includes a joystick hand controller. The CRT display is assumed to be part of the aircraft.
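The two reduced-resolution display modes described above (8 x 8 block averaging by default, block peak as the alternate) can be sketched as a simple decimation routine. This is an illustrative pure-Python sketch of the principle, not the RTD's scan converter implementation; function and parameter names are assumptions.

```python
# Hedged sketch of the RTD's two 8x reduced-resolution modes: averaging each
# 8 x 8 pixel block (default) vs. keeping the block's peak signal, which
# preserves small hot objects that averaging would wash out.

def decimate(image, block=8, mode="average"):
    """Reduce an image by `block` in each axis using mean or peak per block."""
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(0, rows, block):
        out_row = []
        for c in range(0, cols, block):
            px = [image[r + i][c + j]
                  for i in range(block) for j in range(block)]
            out_row.append(max(px) if mode == "peak"
                           else sum(px) / len(px))
        out.append(out_row)
    return out

# A 16 x 16 frame with one hot pixel reduces to a 2 x 2 display grid:
# the average mode nearly washes the hot pixel out, the peak mode keeps it.
frame = [[1.0] * 16 for _ in range(16)]
frame[3][5] = 100.0                      # lone hot object
avg = decimate(frame, mode="average")
peak = decimate(frame, mode="peak")
```

The peak mode illustrates why it suits scanning the wide field for hot objects: one hot pixel in a block dominates the displayed value instead of being diluted by the other 63 pixels.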
Modern tactical aircraft commanders face increasing threats on the battlefield of the 1990s, forcing their aircrews to fly at low altitudes and/or in night and adverse weather conditions. The dynamic nature of these threat and weather conditions requires responsive mission data management both on board the aircraft and on the ground during the mission planning process. Obviously the need exists for timely exploitation of intelligence data collected on previous missions, especially for tactical reconnaissance mission planning. In response to this need, Fairchild Communications & Electronics Company has developed an all digital mission planning system, known as the MAPS 300, which eliminates the manual cutting and pasting of paper charts and manual flight log calculations from the mission planning process. In addition, the MAPS 300 recommends minimum threat exposure routes and generates visual perspective views, permitting the aircrews to concentrate on selecting the mission tactics. The MAPS 300 is now in production for the U.S. Air Force as part of the Mission Support Systems II (MSS II) currently being procured for the F-4 NWDS, F-15E and F-111 aircraft. The MAPS technology has future applications for Strike Recce and Unmanned Aerial Vehicle (UAV) programs.