Since the Second World War, the human factors scientist has played a fairly significant role in the design of equipment for military use. In the 1940s, psychologists were assisting in the design of such varied equipment as radar consoles, instrument dials, gunsight reticles, information control and display systems, underwater sound detection devices, aircraft cockpit display systems, communications systems and training simulators. To a large extent, this earlier work could be classified as pure human engineering, since the primary concern was with dial legibility, knob selection, visual coding, tracking, control/display relationships, workplace and console layouts, and human anthropometrics. Much of this information and research data is now well known and documented. Human engineering data can be found in comprehensive reference texts such as:
Controlled studies that seek to explore the role of the human observer in the evaluation of photographic images are possible only when stimulus material is employed whose objective quality is well known. When such studies involve the imagery of complex photo-reconnaissance systems, the necessary stimulus material is difficult to acquire because of the variability in image quality and the inadequate means of evaluating such images. In view of these difficulties, techniques have been developed whereby the prerequisite stimulus material can be prepared in the laboratory by simulating the characteristics of the photographic system. These techniques will be discussed, typical stimulus material will be displayed, and applications in the field of psychophysical experiments will be considered.
This paper discusses some aspects of the capability and limitations of the human observer as an integral part of a photo-optical system. The subject matter falls roughly into two parts, covering respectively the visual mechanism of the human eye (its functional behavior, resolution limits, and visual perception) and the system engineering aspects related to display systems and sensors. Most of the material covered in the first part is available in current publications or textbooks. The main emphasis here has been on correlating and organizing existing material in a comprehensive manner to expose the scientist-engineer to this perhaps somewhat unfamiliar discipline. Very often the human observer or human operator is considered the magical "black box" somewhere in the loop of the overall system, be it airborne or earthbound. Such a black-box approach usually considers only the input signals or stimuli and the corresponding output signals or responses. Nevertheless, it seems appropriate, at least once, to dig into this black box, i.e., the human operator, and have a closer look at its (or his) internal mechanism. It may help us in our evaluation of human performance as a vital link in the overall system. The latter part of the discussion concerns the human ability to detect and recognize targets in real-time exploitation of airborne sensor data. This will be treated in terms of graphical presentations of critical target dimension versus observation time. It is proposed that such presentations can be employed for effective utilization of various airborne sensors. A simple analysis of display system parameters relevant to dynamic imagery presentation and interpretation will also be discussed.
The successful performance of military operations depends upon the availability of information regarding the location and activity of enemy forces. Such information is obtained from various sources, one of the most important of which is aerial photographs. The photographs are acquired by reconnaissance systems and are then examined by image interpreters in order to extract the desired information from them.
To make a display legible from the rear of a large audience, one has to use fairly large symbols, which restricts the amount of information that may be packed into a display of given size. Up to a certain point, this restriction can be relieved by enlarging the screen. The limit is reached when oblique viewing negates the advantage of increased magnification.
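The trade-off described above can be sketched numerically. The minimum legible visual angle used below (about 15 minutes of arc for a symbol) is a common human-factors rule of thumb assumed here for illustration; it is not a figure taken from the paper.

```python
import math

# Assumed minimum visual angle for a legible symbol (rule of thumb,
# not a value from the source abstract).
MIN_LEGIBLE_ARCMIN = 15.0

def required_symbol_height(viewing_distance_m: float,
                           visual_angle_arcmin: float = MIN_LEGIBLE_ARCMIN) -> float:
    """Symbol height (m) that subtends the given visual angle at distance d."""
    angle_rad = math.radians(visual_angle_arcmin / 60.0)
    return 2.0 * viewing_distance_m * math.tan(angle_rad / 2.0)

def max_symbol_rows(screen_height_m: float, viewing_distance_m: float) -> int:
    """How many rows of just-legible symbols fit on a screen of given height."""
    return int(screen_height_m // required_symbol_height(viewing_distance_m))
```

At a 10 m viewing distance the required symbol height is roughly 4 cm, so a 2 m screen holds only a few dozen rows; doubling the screen size doubles the row count only for viewers whose line of sight stays near the screen normal, which is the oblique-viewing limit the abstract notes.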
It may help to try getting "way off"-- we like to call it being "objective" -- and observe humanity with perspective -- say about 2500 years of perspective. We would then find ourselves probing the mysteries of man with the Greeks. The Greek philosopher Protagoras observed that "Man is the measure of all things." Apparently that idea didn't originate with Plato, but he agreed and gave the concept dissemination by teaching it to his students. How much have we learned about man in the past 2500 years? Of course, we've learned a great deal. But what have we learned about the really important things?
The requirements for acceptable visual presentations by simulation techniques for use in training pilots and other vehicle operators are defined. These are defined in terms of the human operator's visual capabilities. Among the requirements to be discussed are: the field of view, necessary detail, light level of the picture, performance characteristics of the flight simulator or other vehicle simulated which are important to a visual display/simulator combination, requirements for specific missions, and simulator performance. The second portion of the paper will describe some visual simulation techniques and research hardware developed to date. The compromises and limitations of the equipment as well as advantages of various techniques will be discussed. The visual simulation techniques to be covered will include the shadowgraph/point light source, television, and optical techniques for providing clear day or low visibility conditions.
In the simulation of the view seen from a spacecraft during a docking operation or from a low flying or landing aircraft, the use of reduced scale models of three dimensional form has proven to be a direct and useful means of generating the display associated with the simulation system. The basic elements of a visual display produced by this means are a model, an articulated image forming lens system, a closed circuit TV link and a display.
This paper describes a recently developed system which enhances the human decision-making capability in analyzing and interpreting aerial photography. Advanced tactical multi-sensor reconnaissance missions will provide four separate rolls of film which require interpretation. Since each roll of film covers approximately the same area, it is advantageous to view all four at the same time, with each running at its appropriate speed so that the same images on all four rolls can be viewed simultaneously. The Multi-Sensor Viewer, with its various functions of simultaneous scanning of panoramic, frame, infrared and SLR photography; comparison viewing of tactical target records and map chips; mensuration, printing and processing, and stereo-viewing, will be described. In addition, a brief discussion of its integration within the Image Interpretation Central will be presented. The Multi-Sensor Viewer was designed and developed as part of the Image Interpretation Central (IIC) AN/MSQ-58, for the Rome Air Development Center under Air Force Contract AF30(602)-2882.
In the past five years, photo-interpretation research (Zeidner, 1961; Zeidner, et al., 1961; Leibowitz and Sulzer, 1965) has cast some doubt upon the usefulness of stereoscopic devices in enhancing photo-interpreter performance. Such findings are surprising in view of the fact that most laboratory studies, as well as many other applied studies, have indicated that stereoscopic viewing conditions yield higher increments of performance than non-stereo viewing conditions (Ogle, 1959; Robinson, 1964; Woodworth and Schlosberg, 1954; Chubb, 1964; Gould, 1964; and Smith and Gould, 1964). It should also be noted that a Russian photo-interpretation study (Gamezo and Rubakhin, 1961) concluded that: "As can be seen, the development and active functioning of the (spatial) concepts considerably broadens the possibilities of stereoscopy. Practice in stereoscopic examination of aerial photographs, in its turn, contributes to the development of spatial concepts, the enrichment of the store of initial, conventionally schematized, images of the stereo-model type."
This paper gives preliminary results of a continuing experimental study of factors affecting the precision of centering black circular measuring marks in sharp, high-contrast targets with homogeneous backgrounds, subtending visual angles up to 45 minutes of arc, in photopic vision. The results support the proposition that adjacency effects at edges contain significant visual information, which appears to be important where visual settings are made by bringing geometrical configurations into close relationship with one another. The maximum information content for the centering task investigated was contained in ribbons approximately 1 minute of arc wide around the light areas of target and measuring mark. The most precise pointings were made by selecting a measuring mark to give a minimum annulus width within these ribbons, irrespective of the target size. The results support the concept of a retinal zone approximately 4 minutes of arc in diameter over which acuity is constant, but further suggest that this may be dependent on the type of task involved. The acuity in the horizontal retinal meridian was some 30% greater than that in the vertical meridian for annulus widths up to 4 minutes of arc. Whilst the results were obtained for a restricted set of circumstances, they will be discussed in the broader context of measurement to objects on photographs, in an attempt to relate the accuracy attainable to the image quality parameters of the photographic system as they affect the rendition of edge gradients on photographs.
A practical approach to specifying and designing large-format rear-screen projectors for aerial reconnaissance photography is outlined. Unique brightness control methods are discussed in terms of their proper use to keep the interpreter's pupillary diameter at a size allowing maximum visual acuity. Methods of keeping a uniform brightness over the screen for different viewing positions are described. Solutions are outlined for factors contributing to uneven screen brightness, such as the observation angle with the principal axis of the diffusion lobe of the screen, system vignetting, and cosine effects. These solutions include the use of large-diameter Fresnel lenses, strategically placed apodizing filters, and a discussion of the Luneburg criteria for condenser lens design. Characteristics of rear projection screening materials are discussed in terms of their performance in the projection of high-definition (100 to 200 medium-contrast lines per millimeter) photographic records. A system of evaluating screening materials by their modulation transfer function is proposed. Measurement of the imagery by moving reticles in the projected field is discussed, and problems associated with obtaining sharp, fine lines on the screen with this type of system are summarized. Various light sources and their effects on interpretation, good and bad, are described. The use of image rotators, both optical and mechanical, as an aid in angular measurement of the imagery is outlined. An empirically based formula is developed for determining the minimum magnification of a projector, based on the resolution capability of all of the elements in the system, from the film to and including the eye. Also, a method of designing projection systems using the modulation transfer function of each element, including the eye, is analyzed.
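The cascaded-MTF design method mentioned above can be sketched as follows: the modulation transfer of the whole projection chain at a given spatial frequency is modeled as the product of each element's MTF at that frequency. The Gaussian MTF shapes and the cutoff values below are illustrative assumptions, not the paper's empirical data.

```python
def element_mtf(freq_lp_mm: float, f50_lp_mm: float) -> float:
    """Illustrative Gaussian-like MTF that falls to 0.5 at f50 (lp/mm)."""
    return 0.5 ** ((freq_lp_mm / f50_lp_mm) ** 2)

def system_mtf(freq_lp_mm: float, f50_values) -> float:
    """Cascade rule: multiply the MTFs of film, lens, screen, eye, etc."""
    modulation = 1.0
    for f50 in f50_values:
        modulation *= element_mtf(freq_lp_mm, f50)
    return modulation

# Hypothetical chain referred to the film plane: film, projection lens,
# and screen with 50%-modulation frequencies of 150, 120, and 80 lp/mm.
chain = (150.0, 120.0, 80.0)
```

The minimum-magnification idea follows directly: magnification must be large enough that the finest detail the chain still transfers with usable modulation subtends an angle the eye can resolve, so the eye's angular limit, not the film, becomes the last term in the cascade.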
When an object is displaced horizontally in depth relative to an observer, this deviation from the center of the object is called "slant". The vertical boundaries of the object are located in depth in terms of this slant orientation of the object to the observer. As the position of an object in space determines the projected shape of the figure to an observer, it is only natural to find that the judgment of slant of an object is a problem whose historical antecedents originated with the general subject of 'shape constancy' in psychology.
Earlier, I spoke to you of my interest in optical illusions. I find that studying illusory phenomena helps to understand how the eye works since illusions permit a study of the effects of conflicting cues.
The problem which faces photointerpreters is that of extracting the maximum amount of military information from a certain set of film data, which often consists of only two or three pieces of simultaneous imagery. Usually, panchromatic, near-infrared and thermal-infrared photography is employed. It is from this photography that the photointerpreter is required to obtain pertinent terrain characteristics as well as the location, strength and disposition of the enemy.
This paper describes a manual method for rapid human decoding of digital data recorded on film. This method provides visual enhancement of tiny digital data for more effective human decision making from image presentations. The Department of Defense has directed the Army, Navy, Air Force and Marine Corps to use the common reconnaissance/mapping data marking system established by MIL-STD-782B (Wep). All existing data marking systems are to be converted to the binary-coded-decimal system described in this standard. This simple manual method provides a means for the photo interpreter to perform relatively rapid, yet accurate decoding and recording of the binary-coded-decimal (BCD) data. The tiny BCD bits are magnified by a rear projection system, so that they are easily read by eye and provide sufficient spacing for rapid annotation of each character on a specially designed mask. This mask serves a dual purpose: 1. Grouping the data bits functionally for rapid identification and reading under all specified positions of the CMB. 2. Providing a medium for recording the data being read, which can be reused or kept for record purposes. The compact projection system is designed for maximum light transmission so that it can use existing diffused light sources. It is simply placed over the tactical reconnaissance film being viewed on any direct viewing light table.
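The decoding step the mask supports can be illustrated in miniature: each recorded character is a binary-coded-decimal digit, read here as four bits, most significant first. The bit ordering and grouping below are illustrative assumptions; the actual block layout is defined by MIL-STD-782B.

```python
def decode_bcd_digit(bits) -> int:
    """Four bits (assumed MSB first) -> one decimal digit 0-9."""
    value = 0
    for b in bits:
        value = (value << 1) | (1 if b else 0)
    if value > 9:
        raise ValueError(f"not a valid BCD digit: {value}")
    return value

def decode_bcd_field(bit_groups) -> int:
    """A sequence of 4-bit groups -> the decimal number they encode."""
    return int("".join(str(decode_bcd_digit(g)) for g in bit_groups))
```

For example, the groups 0001 and 1001 decode to the digits 1 and 9, i.e. the field value 19; the mask's job is essentially to let the interpreter perform this grouping and transcription by eye, one annotated character at a time.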
What our creative scientists are working on now are ways to strengthen our senses and to extend our ability to comprehend complex situations. One result of this work is the computer-generated visual display.
This paper discusses measurement of image interpreter performance as a means of assessing techniques associated with proposed improvements in reconnaissance technology (i.e., associated with the collection, display, and processing of reconnaissance imagery). Selected examples of research will be described to illustrate this approach, including a discussion of methodology for evaluating the contribution of color imagery. The goals of aerial reconnaissance are oriented toward providing detailed, accurate, and timely intelligence information for a multitude of uses. Attaining these goals is often quite difficult. Even with the advanced sensor systems available today, it must be recognized that the extraction of intelligence information is a complex and time-consuming process. In order to increase the accuracy and to reduce the time required for information extraction, it is necessary that we have a knowledge of the present extraction capability and anticipated advances in imagery, along with time requirements. It is extremely important that this knowledge be derived from controlled experimental situations which simulate operational conditions and requirements. Reconnaissance systems normally are improved through a step-by-step refinement of the sensor's ability to discriminate and record objects in the real-world environment. In some cases, scientific breakthroughs make possible significant gains in a sensor's ability to record intelligence information. All of this advancement is lost, however, if some efficient and expeditious means is not found to (1) extract, (2) analyze and (3) report the pertinent information to a using command. The pursuit of carefully planned, executed and analyzed research in reconnaissance can lead to a valuable source of answers for a number of critical questions.
To cite an example, in the evaluation of a new sensor system the question of primary importance to the developer and/or user might be whether or not timely detection of freight train traffic at night is possible. In setting up an experimental design addressed to this problem, the first step is to determine the (1) questions, (2) parameters, and (3) interactions, etc. which might be expected. The following type of questions might arise: 1. Can freight trains be detected at night by sensor X? 2. What is the maximum range for such a detection? 3. How does weather affect the sensor/record? 4. What advantages does it offer over conventional techniques? 5. What are the time-lags incurred by various information extraction and technical decisions for using the information? Parameters in such a study might include: (a) Aspect angle (b) Altitude (c) Scale (d) Target/background contrast (e) Special interpreter viewing devices (f) Interpreter background/training (g) Interpreter task (screening versus detailed analysis, etc.) When all of these factors have been determined, they must be combined into an experimental design which will reflect the requirements for image acquisition and realistically
A conceptual framework was proposed which conveniently summarizes a multitude of specific activities which, when taken collectively, comprise a significant portion of the image interpretation process. Usually, these activities are either informally or formally organized into sequences of behavior guided by a desire to follow certain general rules. In this study, quantitative criterion data, based upon user needs, were obtained which permit derivation of clear image interpretation strategies. If followed, the strategies should allow interpreters to maximize the worth of their reports. The data are particularly applicable to the task of interpreting ambiguous or incomplete imagery.
Visual presentations, commonly called displays, overlap many engineering disciplines. As a result, many different methods for measuring display performance have been developed, depending on the background and experience of the particular agency, company or individuals involved. This paper is intended to highlight the problems of, first, obtaining a workable definition of the end result desired from a given display, and second, obtaining a workable method for rapid and accurate measurements of the performance of individual components and equipment which must eventually go together and satisfy the user's needs. Typically, the user wants legibility and error-free performance, while the component and system designer is more interested in such parameters as contrast, brightness, color and resolution. To further complicate the problem, the units and measurement techniques differ widely between different engineering disciplines. Deficiencies in measuring equipment further restrict both the user and the display designer from reaching quick and easy conclusions about the performance of a particular display. The small lighted areas of most displayed data make some form of spot photometer necessary to measure brightness; the monochromatic nature of many displays makes most spot photometer measurements highly questionable unless care has been taken to calibrate the instrument on a standard source having the proper spectral energy distribution. Contrast is considered an important display parameter, yet the various methods for establishing a known ambient illumination level can give quite different numerical results. The human eye is extremely tolerant of color and brightness changes, yet most display specifications are quite rigid in controlling these parameters. It is hoped that the discussion stimulated by these and other problems will help to establish a climate where coordination of display measuring techniques and standards will be accomplished.
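One source of the numerical disagreement the paper describes is that "contrast" itself has several established definitions, each giving a different number for the same pair of luminances. The three definitions sketched below are standard photometric conventions, used here as an illustration rather than as the paper's own measurement procedure.

```python
def contrast_ratio(l_max: float, l_min: float) -> float:
    """Simple luminance ratio, often quoted for display specifications."""
    return l_max / l_min

def contrast_modulation(l_max: float, l_min: float) -> float:
    """Michelson modulation, the convention used in MTF work."""
    return (l_max - l_min) / (l_max + l_min)

def contrast_weber(l_target: float, l_background: float) -> float:
    """Weber contrast, common in target-visibility work."""
    return (l_target - l_background) / l_background
```

For a symbol at 100 units of luminance on a 10-unit background, these give 10:1, about 0.82, and 9.0 respectively, so a specification that says only "contrast" without naming its definition is ambiguous in exactly the way the paper warns against.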