Our lives are spent identifying, judging and using objects, largely without any direct input from our non-visual senses. This process makes major use of the visual structures built up within our brains from superficial characterizations of the surrounding object world. It is truly amazing how much we can rely (usually with high precision) upon the visually perceived properties of objects when we have no other direct sensory knowledge. We learn to see at such an early age, under the influence of such a battery of sensory inputs, that the whole process is still not firmly understood. The heuristic processes by which the brain develops a model of the egocentric world derive from the encoding of sensory experience into that model, calibrating the observed object world on the basis of "hardwired" physiological reactions and subsequently learned world parameters. The great contribution of the visual sense is that it frees us from direct contact requirements, moving the recognition of possible difficulties to a (somewhat) more remote future and enabling a planned rather than instinctive response. This paper discusses the development of visual perception in the brain, the unification of sensory system inputs into a coherent world structure, and some processes the individual uses in the organization of the visual field for the most useful extraction of information.
Currently, evaluation of visual simulators is performed either by pilot opinion questionnaires or by comparison of aircraft terminal performance. The approach here is to compare a pilot's performance in a flight simulator with a visual display to his performance on the same visual task in the aircraft, as an indication that the visual cues are equivalent. The A-7 night carrier landing task was selected. Performance measures with high predictive power for pilot performance were used to compare two samples of existing pilot performance data to test whether the visual cues evoked the same performance. The performance of four pilots making 491 night landing approaches in an A-7 prototype part-task trainer was compared with the performance of three pilots performing 27 A-7E carrier landing qualification approaches on the aircraft carrier CV-60. The results show that the pilots' performances were similar, supporting the conclusion that the visual cues provided in the simulator were equivalent to those provided in the real-world situation. Differences between the flight simulator's flight characteristics and the aircraft's have less of an effect than the pilots' individual performances. The measurement parameters used in the comparison can be used to validate the adequacy of the visual display for training.
Eye trackers based on the standing potential of the eye have been researched and found to have considerable baseline drift and poor accuracy. A technique employing visual feedback and baseline drift compensation has been developed that is sufficiently accurate to select menu items on a video display. The technique is analog and of low cost, and the system has been implemented to replace the joystick on a personal computer.
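The baseline drift compensation described above can be illustrated with a minimal digital sketch (the paper's implementation is analog; the function name, the leaky-integrator form and the parameter values here are illustrative assumptions, not taken from the paper). A slowly adapting baseline estimate tracks the drift and is subtracted from the signal, while fast gaze shifts pass through largely unattenuated:

```python
def drift_compensated(samples, alpha=0.05):
    """Subtract a slowly adapting baseline estimate from an eye-potential
    signal. alpha sets how quickly the baseline follows slow drift; rapid
    gaze shifts pass through largely unattenuated. (Illustrative digital
    analogue of an analog drift-compensation stage.)"""
    baseline = samples[0]
    out = []
    for s in samples:
        baseline += alpha * (s - baseline)   # leaky integrator tracks drift
        out.append(s - baseline)             # drift-compensated output
    return out

# A pure slow drift ramp is largely cancelled in the output.
ramp = [0.01 * i for i in range(1000)]       # raw signal drifts up to 10.0
print(abs(drift_compensated(ramp)[-1]) < 1.0)
```

In the menu-selection application, the visual feedback loop would serve the same role: fixating a known on-screen target gives a reference against which the residual baseline error can be zeroed.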
Simulation devices have developed into critical tools for training operators of complex vehicles such as airplanes. As the costs of these vehicles have risen to exorbitant levels, so too have the costs of the simulation equipment. An alternative to producing and using more simulators is the introduction of embedded equipment to support training.
The goal of this research was to investigate the effects of foveal load on sensitivity in the peripheral visual field. Foveal load was manipulated by comparing simple fixation of a cross with a first-order (i.e., rate) compensatory tracking task. Peripheral sensitivity was determined simultaneously for light flashes presented at different eccentricities along the horizontal meridian. The effects of training on the task were also evaluated in terms of changes in peripheral sensitivity. In general, the results showed no losses in peripheral sensitivity, or "tunnel vision" effect, under the experimental conditions employed. These results are contrary to data obtained by previous investigators. Reasons for these findings are discussed.
Aerial images produce the best stereoscopic images of the viewed world. Despite the fact that every optic in existence produces an aerial image, few persons are aware of their existence and possible uses. Constant reference to the eye and other optical systems has produced a psychosis of design that considers only "focal planes" in the design and analysis of optical systems. All objects in the field of view of the optical device are imaged by the device as an aerial image. Use of aerial images in vision and visual display systems can provide a true stereoscopic representation of the viewed world. This paper discusses aerial image systems, their applications and designs, and presents design concepts that utilize aerial images to obtain superior visual displays, particularly with application to visual simulation.
Large liquid crystal cells mounted in juxtaposition to a video screen display can output polarized light, alternating between left- and right-handed circularly polarized light. When properly produced stereoscopic images are displayed on such a device, the result can be pleasing three-dimensional images when viewed through appropriate analyzing spectacles. This approach, which uses passive spectacles, offers advantages over the active approach using powered liquid crystal shutters.
Modern aircraft displays with relatively high visual brightness levels present day and night sensor images (generated by electro-optical systems) to crew members for navigation and fire control purposes. A heads out display (HOD) on a cathode ray tube (CRT) screen, while effective for one crew member, may distract or irritate another crew member if the image is reflected off a canopy panel into his eyes, particularly at night. This paper presents one solution applied to canopy reflection suppression encountered in the U.S. Army's APACHE Advanced Attack Helicopter, where the co-pilot's HOD reflections interfered with the pilot's vision. When the co-pilot moved his head away from the screen, the reflected image path to the pilot, seated above and behind the co-pilot, was no longer blocked, and the reflection distracted the pilot. A variety of polarizers were studied, and the problem was solved by placing a linear polarizer over the CRT with its axis crossed relative to the skipping vector of the reflection, letting the canopy panel act as an analyzer. Reflected luminance was reduced by a factor of more than 25.
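The reported suppression of more than 25 times is consistent with Malus's law for an ideal crossed polarizer/analyzer pair: transmitted luminance falls as the squared cosine of the angle between the analyzer axis and the polarization axis. A minimal sketch (the function name and the sample misalignment angles are illustrative assumptions, not figures from the paper):

```python
import math

def suppression_factor(misalignment_deg):
    """Malus's law for an ideal crossed polarizer/analyzer pair: an analyzer
    within misalignment_deg of perfect (90 deg) extinction transmits
    sin^2(misalignment) of the polarized light, i.e. suppresses the
    reflected luminance by 1 / sin^2(misalignment)."""
    transmitted = math.sin(math.radians(misalignment_deg)) ** 2
    return 1.0 / transmitted

# Illustrative misalignments from perfect crossing: a 25x reduction
# corresponds to staying within roughly 11.5 degrees of extinction.
for d in (11.5, 5.0, 2.0):
    print(f"{d:4.1f} deg misalignment -> {suppression_factor(d):7.0f}x suppression")
```

In practice, the canopy panel acting as the analyzer is not an ideal polarizer, so the achievable suppression depends on its effective extinction ratio as well as the alignment.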
Traditional training methods usually provide for alternating free practice trials with demonstrations of ideal or desired performance. Current work at the University of Central Florida's Human Factors Laboratory has tested the feasibility of simultaneously coupling direct practice with exposure to a proficient model's behavior. By optically superimposing a videotape of a model's execution of the task on the student's video terminal during training, subjects can practice complex skills such as an aircraft carrier launch and approach to landing by mimicking the altitude, pitch and roll demonstrated by the model. Results suggest that coupling direct practice with simultaneous exposure to the modeled demonstrations yields the same level of proficiency with a 33% savings in training time.
The introduction of night vision goggles into the cockpit environment may produce incompatibility with existing cockpit optoelectronic instrumentation. The methodology used to identify the origin of such spurious signals is demonstrated with the example of an electronic display. The amount of radiation emitted by a gray body in the wavelength region of goggle sensitivity is calculated. A simple procedure for preflight testing of cockpit instrumentation using a commercially available infrared camera is recommended. Other recommendations include the specification of cockpit instrumentation for compatibility with night vision devices.
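The gray-body calculation mentioned above amounts to integrating Planck's spectral radiance, scaled by an emissivity, over the goggles' sensitivity band. A minimal sketch, assuming for illustration a 600-900 nm sensitivity band and an arbitrary warm surface; the band limits, temperature and emissivity here are assumptions, not values from the paper:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Planck blackbody spectral radiance, W / (m^2 * sr * m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / math.expm1(b)

def band_radiance(temp_k, emissivity, lo_nm, hi_nm, steps=1000):
    """Gray-body radiance integrated over [lo_nm, hi_nm] nm by the
    trapezoidal rule, W / (m^2 * sr)."""
    lo, hi = lo_nm * 1e-9, hi_nm * 1e-9
    dl = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * planck_radiance(lo + i * dl, temp_k)
    return emissivity * total * dl

# Illustrative: a warm instrument surface (~330 K, emissivity 0.9)
# in the assumed 600-900 nm goggle sensitivity band.
print(band_radiance(330.0, 0.9, 600.0, 900.0))
```

Because the band sits far short of the thermal emission peak at cockpit temperatures, the integrated radiance is extremely sensitive to surface temperature, which is why even modestly warm components can become visible to the goggles.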
Binocular helmet-mounted displays have become increasingly popular over the past several years; particular emphasis has been placed on achieving wide field of view displays with resolution capability greater than that attainable with a monocular system utilizing a single CRT. Binocular display systems with severely divergent axes have been developed wherein the horizontal field is divided into three areas: that visible to the right eye only, that visible to the left eye only, and an overlap region visible to both eyes. A typical system has individual displays with 80 degree fields of view with axes turned outward ±20 degrees, achieving a total field of 120 degrees with a 40 degree overlap. The turnout of the optical axes means that the center of the display field is 20 degrees off-axis in the individual displays. Almost all points in the overlap regions are at significantly different off-axis angles in the two displays. The implications of these factors relative to required aberrational correction and system characteristics are discussed.
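The total field and overlap quoted above follow directly from the per-eye field and the turnout angle: turning each axis outward widens the combined field by twice the turnout and narrows the binocular overlap by the same amount. A small sketch of the geometry (function name is illustrative):

```python
def binocular_fov(eye_fov_deg, turnout_deg):
    """Horizontal geometry of a divergent binocular HMD. Each eye's display
    spans eye_fov_deg, centered on an optical axis rotated outward by
    turnout_deg. The right eye then covers [turnout - fov/2, turnout + fov/2]
    and the left eye its mirror image about zero."""
    total = eye_fov_deg + 2 * turnout_deg      # outer edge to outer edge
    overlap = eye_fov_deg - 2 * turnout_deg    # region seen by both eyes
    return total, overlap

total, overlap = binocular_fov(80, 20)
print(total, overlap)  # 120 40
```

The same expressions show the design trade: increasing turnout buys total field degree-for-degree at twice the cost in binocular overlap.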
Developing a helmet mounted display (HMD) requires multiple disciplines, including personnel skilled in optics, electronics, mechanics, video, vision, composites, etc. It also helps for each team member to become familiar with the others' HMD technology. This paper will discuss some of the "lessons learned" in developing the Hamilton Standard HMD. (Fig. 1)
The traditional approach to helmet-mounted display (HMD) design has been to add the display onto an existing helmet. This has produced displays which, while functional, have not always been favorably looked upon by the user community in terms of their weight, center of gravity, obstruction of vision, head motion restriction and aesthetic appeal. To solve this dilemma, we have approached the problem from a new perspective. By first designing an optical system which has the desired performance, folding it such that it conforms to the shape of the human head, then designing a helmet around the optics, it is possible to produce a display system which will meet the operational requirements, compromising neither the user's ability to carry out the mission nor the life support capability of the helmet and mask combination. There are several programs currently underway at Kaiser Electronics in which this approach has been the key driving element. An example is given of the current Agile-Eye helmet integrated display system, a 120° field-of-view (FOV) monocular, stroke (symbols only) display designed for use in a fixed-wing environment.
The Helmet Airborne Display and Sight (HADAS) system under development has succeeded in surmounting many of the problems experienced by current, as well as past, helmet mounted display and sight designs for operation in fighter aircraft. This goal has been achieved by a combination of holographic optical elements and fiber optics for the display function, and real-time image processing of the helmet location for the sight function. The integrated system can provide "all aspect head-up display" performance in the cockpit.