This paper presents the results of a research program funded by Douglas Aircraft Co. (DAC) to enhance the situational awareness of pilots flying large aircraft low to the ground in high-threat environments. Radical display formats are employed for flying the aircraft. These formats convey the spatial relationship of the pilot to mobile threats, imprecisely located destinations, moving weather, and fixed terrain. Assumed in the program is a survival-enhancing intelligent avionics autopilot, called the Adaptive Network for Avionics Research Management (ANARM). Demonstrations can be run on any IBM-compatible personal computer supporting VGA displays. The capability to allow a pilot to modify the situation's pseudo-3-D viewpoint as a function of time appears to be of particular importance.
This paper presents the theory of operation, configuration, and laboratory and ground test results obtained with an airborne laser positioning system for helicopters developed by Princeton University. Unfortunately, due to time constraints, flight data could not be completed for presentation at this time. The system measures the relative position between two aircraft in three dimensions using two orthogonal fan-shaped laser beams sweeping across an array of four detectors. Specifically, the system calculates the relative range, elevation, and azimuth between an observation aircraft and a test helicopter with a high degree of accuracy. The detector array provides a wide field of view in the presence of solar interference through the use of compound parabolic concentrators and spectral filtering of the detected light. The detected pulses and their associated time delays are processed by the electronics and sent as position errors to the helicopter pilot, who repositions the aircraft as part of the closed-loop system. Accuracies obtained in the laboratory at a range of 80 ft in the absence of sunlight were ±1° in elevation, +0.5° to -1.5° in azimuth, and +0.5 to -1.0 ft in range, while elevation varied from 0 to +28° and azimuth varied from 0 to ±45°. Accuracies in sunlight were approximately the same at a range of 80 ft, except that the field of view was reduced to approximately 40° (±20°) in direct sunlight.
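The measurement principle described in this abstract can be sketched numerically. This is a minimal illustration only, not the Princeton system's actual processing; the sweep rate, detector baseline, and small-angle range estimate are all assumed values chosen so the example lands near the quoted 80 ft range.

```python
import math

def sweep_angle_deg(t_hit_s, t_ref_s, sweep_rate_deg_per_s):
    """Angle swept by a fan beam between a reference pulse and a detector
    hit; one fan yields azimuth, the orthogonal fan yields elevation."""
    return (t_hit_s - t_ref_s) * sweep_rate_deg_per_s

def range_estimate_ft(baseline_ft, angular_sep_deg):
    """Small-angle range estimate: two detectors a known baseline apart
    subtend an angle that shrinks inversely with range."""
    return baseline_ft / math.radians(angular_sep_deg)

# Hypothetical numbers: a 2000 deg/s sweep hitting a detector 10 ms
# after the reference pulse implies a 20 deg off-axis angle.
az = sweep_angle_deg(0.010, 0.0, 2000.0)

# An assumed 1.4 ft detector baseline subtending 1 deg implies a range
# of roughly 80 ft, comparable to the test range quoted above.
rng = range_estimate_ft(1.4, 1.0)
```

In the real system the timing-to-angle and angle-to-range conversions would be performed by the onboard electronics before the errors are presented to the pilot.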
In this paper we review ray optical and wave optical techniques to evaluate the performance of an imaging HOE at a shifted reconstruction wavelength. To demonstrate these techniques we give spot diagrams and plots of the intensity and phase distributions of the imaging wave in different planes behind the HOE.
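One ray-optical consequence of reconstructing an imaging HOE at a shifted wavelength can be illustrated with the planar grating equation. This is a generic sketch, not the analysis given in the paper; the wavelengths and fringe period below are assumed for illustration.

```python
import math

def diffracted_angle_deg(theta_in_deg, wavelength_nm, period_nm, order=1):
    """Planar grating equation: sin(theta_out) = sin(theta_in) + m*lambda/d."""
    s = math.sin(math.radians(theta_in_deg)) + order * wavelength_nm / period_nm
    return math.degrees(math.asin(s))

# For an assumed 1000 nm fringe period at normal incidence, shifting the
# reconstruction wavelength from 514 nm to 633 nm deflects the first
# diffracted order by several degrees, displacing and aberrating the image.
at_construction = diffracted_angle_deg(0.0, 514.0, 1000.0)  # ~30.9 deg
at_playback = diffracted_angle_deg(0.0, 633.0, 1000.0)      # ~39.3 deg
```

Spot diagrams of the kind the paper describes trace many such rays across the HOE aperture to map how this wavelength-dependent deflection degrades the image.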
Head-Up Displays (HUDs) utilizing holographic combiner elements can suffer from poor display brightness uniformity across the head motion volume or field of view. The brightness non-uniformity is due to angle differences between the construction and the end-use or reconstruction geometries. This problem is especially acute when holographic combiner elements are positioned relatively close to the pilot's design eye location, when large head motion volumes are desired, and when narrow-band phosphors (e.g. P-53) are used. This paper presents a hologram design approach that maximizes the HUD combiner phosphor reflectivity, the transmissivity through the combiners, and the display brightness uniformity. This technique can be applied to HUDs using dual combiners or wide field-of-view combiners.
The Strategic Air Command is currently in the process of contracting production quantities of Night Vision Goggle Head-Up Display (NVG/HUD) systems for their conventional mission B-52G aircraft. This system displays flight and navigation information onto a combiner glass which is mounted to one of the NVG objective lenses. This allows the pilot to have an "eyes out" orientation, thereby decreasing communication and workload, and increasing mission safety, situational awareness, and mission effectiveness. This report will attempt to reconstruct the development history to date of the NVG/HUD system, and how it finally was incorporated into the B-52 airplane.
On August 31, 1986, an Aeromexico DC-9 and a Piper Archer collided at approximately 6,400 feet over Cerritos, California. Both aircraft were operating in the Los Angeles Terminal Control Area (TCA), a form of positive-control airspace created to prevent this kind of disaster.
This document reports the results of the authors' work on understanding the problems associated with dynamic terrain (DT) in networked visual training simulators. Dynamic terrain (construction of emplacements, cratering and repair, etc.) is of substantial military interest as ground-based simulation becomes a common training technology. The basic cost/performance issues of visual simulation are analyzed with regard to the introduction of DT. An overview of current networking (SIMNET) technology for visual simulation is provided, and the difficulty of extending the SIMNET paradigm to dynamic terrain is discussed. An object-oriented representation for terrain is suggested, and its advantages are described. Finally, we consider the implications of dynamic terrain within networked simulation, with particular reference to the problems of scale imposed by the interaction of high speed aircraft flying nap-of-earth (i.e. at treetop level) and low speed ground vehicle simulators.
Grumman is developing an aircrew trainer suite for the A-6 and F-14 aircraft. The primary mission of the A-6 is ground/surface attack, while that of the F-14 is air superiority. A major part of the development is designing a visual simulation database that is correlated with a high resolution radar database. Analysis and evaluation were performed on alternative approaches to developing a terrain skin database. Costs and benefits of the alternative approaches are discussed.
The scene content and fidelity of Computer Image Generators (CIG) has increased dramatically in recent years. Photographic source data is now being used in today's real-time visual simulators, and the expectation is that it will be used to a much greater extent in the next generation of visuals. Photographic source will be used to extract traditional CIG cultural features such as roads, rivers, and fields, expressed geometrically as polygons and radiometrically as photo textures. In addition, overhead aerial and satellite photography will be applied to the terrain surface as photo texture, resulting in the need for multigigabyte on-line image data bases. At the same time, highly accurate representations of the terrain and the 3-D features on the terrain surface will be supported by these advanced systems. Data bases will be extremely dense, with near-continuous scene density for both 2-D (photographic) and 3-D (polygonal) features. The emerging requirement for mission rehearsal capabilities in a visual simulator will continue to increase the needed data base fidelity, while imposing a severe time constraint on data base construction. The Rapidly Reconfigurable Data Base (RRDB) Project, funded by PM-TRADE under the USAF/ASD Project 2851, proposes to develop a data base generation system capable of constructing a data base in 72 hours; the Special Operations Forces have specified an Aircrew Training System with a 48-hour data base turnaround requirement. This paper will discuss how traditional CIG architectures and companion data base generation systems have been impacted by the addition of photo-based visual technology, and the emergence of mission rehearsal applications.
This paper describes the selection of an optimized projection display system based upon the customer's requirements, the limitations imposed by current display technology, physical limitations, and system design considerations. While this paper portrays the specific results of a given case, a tandem-cockpit aircraft, it is also intended to illustrate numerous issues present in current visual display definition and a potential solution to some of them.
The Fiber Optic Helmet Mounted Display (FOHMD) projects high and low resolution computer generated imagery via fiber optic bundles through collimated helmet mounted optics to each eye. Combined head and eye position information is then used to position a high resolution area of interest within the head tracked low resolution background. Methods for evaluation of the eye tracker are described and experimental results presented that reveal its present performance characteristics.
Recent display research has produced high quality flat panel displays by combining liquid crystal technology with arrays of thin film transistors (TFT's). The present high cost of these displays can be reduced by integrating the scanning electronics onto the glass plate along with the pixel switching transistors. This paper describes a 320,000 pixel self-scanned liquid crystal display (LCD) with CMOS-TFT gray-scale generators. Its "chip size" of over 100 x 200 mm makes this the largest wafer scale integrated circuit ever built, and the integration of the scanning circuitry on the plate has reduced the number of input leads from 1200 to 44. This dramatically illustrates the potential of wafer scale integration techniques to improve reliability and reduce cost. Since this display IC is too large to fit within the field of even a 2:1 wafer stepper, the design was first preassembled on a CAD workstation and then partitioned into 16 smaller reticle segments. These segments were written onto a 7 1/4 inch reticle and the final assembly of the display was then completed by proper programming of a large area wafer stepper. This work was substantially funded under Wright Patterson Air Force Base contract #F33615-88-C-1825.
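The lead-count saving from on-glass scanning electronics follows from simple matrix-addressing arithmetic. The array dimensions below are assumed for illustration (roughly 640 x 500 gives the quoted 320,000 pixels); the figures in the paper are the authoritative ones.

```python
def matrix_drive_leads(rows, cols):
    """Direct matrix addressing needs one external lead per row and
    one per column, so lead count grows as rows + cols."""
    return rows + cols

# An assumed 640 x 500 array (320,000 pixels) would need on the order
# of 1,100-1,200 external leads if every row and column were driven
# off-glass, consistent with the ~1200 leads cited above.
external = matrix_drive_leads(640, 500)  # 1140

# Integrating the row/column scanners on the glass leaves only power,
# clock, sync, and video inputs -- the paper reports 44 leads.
```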
In the engineering flight simulation environment, due to functional and flexibility requirements, it is often necessary to change video routing in real-time. As facilities grow to include multiple Image Generators (IGs), domes (including projectors), and System Control Stations (SCS), the requirements for video switching grow geometrically. Video recording is often required for studies and data collection purposes. However, due to the nature of computer generated imagery (high line-rates and separate red, green, and blue (RGB)), most of the video is not in an acceptable format for recording. Therefore, various means are required to convert the video to commercial standards for recording. After the video is in the proper format, recording, editing, and playback can be performed. If playback is required on a monitor normally used for IG video (i.e. RGB), the video needs to be converted back again. Sometimes a study will require recording of multiple sources from various formats, with various special effects (split screens, overlays, etc.) on multiple recorders. McDonnell Douglas Helicopter Company's approach to meeting the requirements of video switching, recording, and distribution for engineering flight simulation was to bring all the video sources into a central point called the Facility Video Switching and Distribution (FVSD) system. The FVSD allows the user to quickly configure the video system before and during a simulation with touch screens. It also permits mission related switching under computer control, video recording of multiple video formats with some special effects capability including time code, and video distribution. This paper will discuss the design and implementation of the FVSD system in the Simulation Systems Group at McDonnell Douglas.
Visual simulation of the external environment has become an essential part of devices used in flight and air combat training. In the case of a highly maneuverable aircraft, such as a fighter, the pilot's visual field-of-view is extremely large. As a result, many of the modern air combat simulators use the inner surface of a spherical dome as the display screen for the projection of surrounding imagery, such as other aircraft and ground features. When the reflected scene appears brighter than that from a Lambertian surface, the screen is said to exhibit gain. The increase in brightness results from redirecting the incident light along a preferred path. As the apparent gain increases in the preferred direction, less light is reflected in other directions, resulting in a sharp drop in apparent brightness. When the reflecting surface is curved, as is the inner surface of a spherical dome, the preferred direction of reflection is continually changing. The placement of the projectors and the location of the observer, together with a high surface gain, make the development of such a projection system complex.
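The notion of screen gain above can be made concrete: gain is the ratio of the screen's luminance in a given direction to that of an ideal Lambertian (matte) surface under the same illuminance. A minimal sketch with assumed photometric values:

```python
import math

def lambertian_luminance(illuminance_lux, reflectance=1.0):
    """Ideal diffuse surface: L = rho * E / pi (cd/m^2 for E in lux)."""
    return reflectance * illuminance_lux / math.pi

def gain_screen_luminance(illuminance_lux, gain):
    """Luminance of a gain screen viewed along its preferred direction."""
    return gain * lambertian_luminance(illuminance_lux)

# With an assumed 100 lux of projected light, a gain-2 screen appears
# about twice as bright as matte white along the preferred direction,
# at the cost of dimming in off-preferred directions.
matte = lambertian_luminance(100.0)          # ~31.8 cd/m^2
bright = gain_screen_luminance(100.0, 2.0)   # ~63.7 cd/m^2
```

On a curved dome surface the preferred direction varies with position, which is why projector and observer placement dominate the design problem the abstract describes.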
This paper presents an electronically tuneable light filter. The filter can separate a very narrow band of light in an electronically controlled manner. The separated band of light is focussed as an image. The filter can also function as a very fast shutter, with a shutter speed of 1 microsecond or better. The term "light" is understood herein to mean the visible and infrared regions of electromagnetic radiation. The paper also explains the physics of the filter and presents a mathematical analysis of image generation.
Conventional head-up display (HUD) optics are relatively limited in both instantaneous field of view (IFOV) and display brightness, and may be inadequate for some applications. Considerable improvements to both parameters may be made by the use of pupil relay optics employing powered holographic combiners, but this type of system tends to be complex and relatively expensive. The usefulness of conventional HUD optics may be increased by the use of combiners that improve the instantaneous elevation field of view or display brightness.
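The IFOV limitation of a conventional combiner is essentially geometric: the display can only be seen through the solid angle the combiner subtends at the design eye. A rough sketch, with all dimensions assumed for illustration:

```python
import math

def ifov_deg(combiner_height_mm, eye_distance_mm):
    """Angle a flat combiner of a given height subtends at the design eye
    (a simple small-combiner approximation, same units for both inputs)."""
    return math.degrees(2.0 * math.atan(combiner_height_mm / (2.0 * eye_distance_mm)))

# An assumed 150 mm combiner 600 mm from the eye subtends about 14 deg.
# Enlarging or repositioning the combiner raises the elevation IFOV,
# which is the kind of improvement sought without resorting to pupil relays.
elevation_ifov = ifov_deg(150.0, 600.0)  # ~14.25 deg
```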
Although SPIE is not involved in politics, "Air Traffic Safety" has a high political profile because it has an effect on all of us who travel by air—whether we're flying the flying machine as a pilot, or just riding in it as a passenger.