The holographic process is well known, but the efficient creation of "holographic filters" for optical automatic pattern recognition devices demands refinements of little-known techniques. Problems of making filters, and their solutions, are discussed. Basic equipment requirements and significant variants are presented for filter generation and processing systems. Filtering results are shown for typical reconnaissance targets and for fingerprint identification. A proposed automatic device for recognizing and counting reconnaissance targets is depicted. A list of recommendations is given to those just beginning work in the field.
A rapid and inexpensive method with which to validate the credit of a customer at the time and place of purchase has been designed and developed. The requirements call for an autonomous storage device capable of handling ten million accounts and having weekly update capability. Paramount design considerations were cost, maintenance, simplicity of operation, reliability, accuracy and readout speed. One system now being readied for field trials stores information on a piece of film 5" x 7" containing 20,000 separate holograms, each .031" square. Each hologram, containing credit data on up to several hundred accounts, reconstructs information in the form of a binary spot pattern which is projected onto a photodetector matrix. A small pulsed gas laser of unique design is used in each device for instantaneous readout. Very stringent requirements are placed on beam balance ratio and film exposure in order to ensure high-fidelity pattern reconstruction, a good signal-to-noise ratio, and reasonable diffraction efficiency. Intensity threshold detection is complicated by both a widely varying number of spots in the reconstruction patterns and non-uniformity in the detector elements.
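The threshold-detection problem described above, a widely varying spot count combined with non-uniform detector elements, can be illustrated with a small sketch. The element count, gain values, and relative-threshold rule below are assumptions for illustration, not the fielded design.

```python
import numpy as np

def decode_spot_pattern(raw, gain, rel_thresh=0.5):
    """Decode a reconstructed binary spot pattern from a photodetector matrix.

    raw        : measured intensities, one per detector element
    gain       : per-element calibration factors, compensating detector
                 non-uniformity (assumed to be known from calibration)
    rel_thresh : threshold as a fraction of the brightest corrected spot,
                 so the decision adapts to patterns with few or many spots
    """
    corrected = raw / gain                  # flatten detector response
    threshold = rel_thresh * corrected.max()
    return (corrected > threshold).astype(int)

# Hypothetical 8-element readout: elements 0, 3 and 5 carry spots.
raw  = np.array([9.0, 0.4, 0.6, 7.5, 0.3, 8.8, 0.5, 0.2])
gain = np.array([1.0, 1.0, 1.2, 0.9, 1.0, 1.1, 1.0, 1.0])
print(decode_spot_pattern(raw, gain))   # -> [1 0 0 1 0 1 0 0]
```

Tying the threshold to the brightest corrected spot, rather than to a fixed intensity, is one simple way to cope with patterns whose total spot count (and hence per-spot energy) varies.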
The usual practice in pattern recognition is first to adopt some working definition for the patterns of interest and then to expend most of the effort on the equipment and/or automated techniques required to find and recognize these patterns. This paper is concerned only with the first step; it outlines a strategy, based on psychophysical methodology, for determining those features of the pattern which are most likely to be informative. The approach used is termed psychopictorics; this is defined as a subfield of psychophysics which is concerned with pictorial stimuli, and in which it is assumed that, in a picture, the information of significance to the human observer may be characterized and analyzed in terms of the properties of perceived objects. The analysis of these properties involves the measurement of many psychophysical variables while the observer is responding to repeated, controlled changes of the features of single objects in the picture. Thus psychopictorics is strongly dependent on the development of computer picture processing techniques which permit such controlled manipulations without unduly degrading the quality of the picture.
The application of Shannon's information metric is considered within the context of specifying physically realizable economic and technologic performance constraints, such as the maximization of returned information (bits/dollar), or the minimization of expected informational redundancy. The robustness of the selected doctrine is examined as a function of the classification resolution, and the marginal effects, e.g., entropies, of additional data sets (or observation variables) are also investigated relative to the enhancement of the discrimination surfaces.
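A toy computation of the "returned information" figure of merit may make the metric concrete. The class probabilities and the observation cost below are invented for illustration only.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p*log2(p), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # convention: 0*log(0) = 0
    return -np.sum(p * np.log2(p))

# Hypothetical two-class discrimination problem.
prior = [0.5, 0.5]                  # 1 bit of class uncertainty
posterior = [0.9, 0.1]              # after observing one data set

info_gained = entropy(prior) - entropy(posterior)   # about 0.53 bits
cost_dollars = 2.0                                  # assumed cost
print(f"returned information: {info_gained / cost_dollars:.3f} bits/dollar")
```

The marginal entropy reduction of each additional observation variable can be computed the same way, which is what makes the metric usable as a stopping rule for adding data sets.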
The most natural two-dimensional representations of our three-dimensional world are perspective projections. Man and man's "seeing machines" both input visual information in this form. Man processes this information to obtain a model of his surroundings. It is quite natural, therefore, to desire such a visual processing capability for man's machines. A first step on the path to such a facility was taken by Roberts (Ref. 1) in his procedure for modeling the objects shown in a single photograph. There are two drawbacks to this approach: (1) it cannot resolve such visual ambiguities as shown in Fig. 1, and (2) it requires that the picture can be broken up into a set of predetermined models.
Scene input, restoration, and extraction are discussed. An input model portrays the sampled scene as corrupted by additive Gaussian noise. The effect of this noise upon gradient-square and Laplacian boundary extraction operations is reduced by minimization of a pre-specified loss function. The loss can be related to the efficiency of the system. An adaptive area extraction procedure based upon a gray-level thresholding technique is also developed. The threshold is locally adjusted to meet varying background gray-level ranges. Results of computer simulations visually illustrate characteristics of the extraction approaches considered.
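The locally adjusted threshold can be sketched as follows. The window size and offset margin are assumptions, and the paper's loss-function minimization for the gradient-square and Laplacian operators is not reproduced.

```python
import numpy as np

def adaptive_threshold(scene, block=3, offset=0.0):
    """Segment a scene by comparing each pixel to the mean gray level of
    its local neighbourhood, so the threshold tracks a varying background."""
    h, w = scene.shape
    out = np.zeros_like(scene, dtype=int)
    r = block // 2
    for i in range(h):
        for j in range(w):
            window = scene[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = int(scene[i, j] > window.mean() + offset)
    return out

# Bright object on a dark background (left) and a dim object on a bright
# background (right): no single global threshold separates both objects,
# but the locally adjusted threshold does.
scene = np.array([
    [10, 10, 10, 100, 100, 100],
    [10, 60, 10, 100, 140, 100],
    [10, 10, 10, 100, 100, 100],
])
print(adaptive_threshold(scene, block=3, offset=30))
# -> [[0 0 0 0 0 0]
#     [0 1 0 0 1 0]
#     [0 0 0 0 0 0]]
```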
In recent years, an engineer trying to mechanize any form of "intelligence" has faced new challenges. Giant strides in technology do not seem to have overcome fundamental difficulties, nor, so it seems, have ever-increasing advances in physics, mathematics, chemistry, and the other branches of science. Obviously, an important link is missing.
It is shown that, in the light intensity distribution in the image plane of an optical system producing a moire pattern, the relevant information is the phase angle. The above-described analysis forms the basis of a method for data retrieval. The intensity distribution of light on a pattern is recorded in the form of density changes in a photographic film negative. The changes of density are then retrieved by a light sensor. If one analyzes the density record, one can see that the trace is modulated not only by the relevant quantities but also by various other processes. Furthermore, the effect of the nonlinear characteristic curve of the recording film is to generate, for each narrow-band component of the signal, an infinite number of narrow-band high-order components of the fundamental argument. To eliminate the effect of these unwanted components, the digitized version of the signal is filtered by numerical narrow-band-pass filters in quadrature. The computer program gives the strains in the case of moire patterns and relative retardations in the case of photoelasticity. The given examples of application show that the proposed data retrieving and data processing technique increases both the sensitivity and the accuracy of the moire and photoelastic methods. This result is of great practical value, particularly in the moire method, since it extends its applicability to the ranges of interest of practically all engineering materials.
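A minimal numerical sketch of in-quadrature demodulation of a digitized fringe trace follows. The carrier frequency, phase amplitude, harmonic level, and moving-average low-pass filter are all assumptions for illustration; the paper's numerical narrow-band-pass filters are only crudely approximated by the moving average here.

```python
import numpy as np

# Hypothetical digitized density trace: a fringe carrier at frequency f0
# whose slowly varying phase phi(x) carries the strain (or retardation)
# information, plus one spurious harmonic standing in for the components
# generated by the film's nonlinear characteristic curve.
n = 2048
x = np.arange(n)
f0 = 0.05                                   # carrier, cycles per sample
phi = 0.8 * np.sin(2 * np.pi * x / n)       # phase of interest
trace = np.cos(2 * np.pi * f0 * x + phi) \
      + 0.3 * np.cos(2 * (2 * np.pi * f0 * x + phi))

def lowpass(s, width=101):
    """Crude moving-average low-pass, standing in for a numerical
    narrow-band filter."""
    return np.convolve(s, np.ones(width) / width, mode="same")

# In-quadrature demodulation: mix with cos and sin of the carrier, then
# low-pass to reject the harmonic and the double-frequency products.
i_chan = lowpass(trace * np.cos(2 * np.pi * f0 * x))    # ~ 0.5*cos(phi)
q_chan = lowpass(trace * -np.sin(2 * np.pi * f0 * x))   # ~ 0.5*sin(phi)
phi_est = np.arctan2(q_chan, i_chan)   # tracks phi away from the edges
```

The two mixed channels isolate the cosine and sine of the phase, so the arctangent recovers the phase angle directly, independent of the local fringe intensity.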
Numerous current studies of traffic flow are being carried out via aerial photography. Aerial photography has dominated other means of traffic flow data acquisition, primarily because it provides the instantaneous state of all traffic over a very wide metropolitan area. Sequences of aerial photographs taken at predetermined intervals, for instance one-second intervals, provide discrete data from which the continuous traffic flow may be reconstructed. Vehicular trajectories and various traffic flow parameters are determined from this data. This paper describes the photographic techniques, data reduction, and computer interface methods which were developed at the Institute of Transportation and Traffic Engineering, University of California, Los Angeles. Photography is carried out with a camera of 70mm format, utilizing a 38mm lens. Data reduction employs the 29E Film Reading System, and its associated 282E Telecordex, both manufactured by Computer Industries. Computer programs have been developed and implemented, using the IBM 7094 computer. To eliminate the human operator, computer-controlled film-scanning and data reduction are under development. This method reduces data reduction time to 5-10% of the time required by the method above. It employs the Programmable Film Reader (PFR-3), manufactured by Information International, Inc. This paper describes the photogrammetric techniques employed in locating a vehicle in a ground coordinate system. It also describes pattern recognition techniques, whereby the behaviour of each vehicle traversing the area under study can be determined.
Hardware and computer programs have been developed which allow bubble chamber events to be recognized and measured without manual assistance. All tracks in the three film images are digitized. Those track vertices which signify events are found, and the tracks are associated between views. Cost, precision and reliability are favorable in comparison to manual methods. The unassisted analysis of bubble chamber data has advantages not only as an economical means of converting these data into a meaningful numerical form, but also as a necessary step in the direction of putting such analysis online to the experiment. Modern techniques allow accelerators and bubble chambers to produce one photographic exposure per second over sustained intervals. This means that 300,000 sets of three "stereo-triad" film images can typically be obtained in a calendar week of chamber operation. An average of one event of interest is contained in each triad. Analysis by conventional means of a week's output from the bubble chamber consumes a thousand man-weeks of effort for scanning and measuring alone, just to find and convert to numerical form the data contained in the film images. Yet the statistical needs of current experiments often require two and three times this quantity of data in order to obtain significant results. Clearly there are compelling reasons to develop improved analysis procedures, from consideration of the labor cost as well as the urgent need to complete the experiment within a reasonably short time. A number of analysis systems (Refs. 1-7) have been developed in recent years which use computers to assist the human operators. Still another system (Ref. 8) has come into being which measures some frames automatically, and calls for help only when problem events are detected. A substantial fraction of the events require such manual assistance. Therefore, both systems are limited by human response times.
A factor of ten gained over conventional methods seems to be an upper limit; further gains are made difficult by the fact that the operator must have time to see, recognize and direct the measurement of each event whose analysis is given to him. Measurement costs have been decreased, although for obvious reasons, not in direct proportion to the reduction of manual effort. These systems represent significant advances in the technology of data acquisition, but they are severely limited by their inability to attain analysis rates commensurate with those of bubble chamber operation. Typically four to eighteen months are required to analyze the output of one week's chamber operation. Analysis of bubble chamber experiments within the real-time scale of chamber operation offers significant advantages to the experimenter. Not only can the data acquisition process be monitored so as to assure that technical failures do not weaken the usefulness of the gathered results, but also the planned experiment can be modified in response to information found in its early phases. However, experimental physics budgets are not able to support this step into real-time if it adds significantly to the analysis cost. The criteria for improved procedures are thus established: to move toward reliable real-time analysis while continuing to decrease the cost per event.
This paper describes diffraction-pattern sampling as a basis for automatic pattern recognition in photographic imagery; it covers: diffraction-pattern generation, diffraction-pattern/image-area relationships, diffraction-pattern sampling, algorithm development, facility description, and experimental results which have been obtained over the last few years at General Motors' AC Electronics-Defense Research Laboratories in Santa Barbara, California. Sampling the diffraction pattern results in a sample-signature, a different one for each sampling geometry. The kinds of information obtainable from sample-signatures are described, and considerations for developing algorithms based on such information are discussed. A tutorial section is included for the purpose of giving the reader an intuitive feeling for the kinds of information contained in a diffraction pattern and how it relates to the original photographic imagery.
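An intuitive software analogue of diffraction-pattern sampling is ring-and-wedge integration of an image's Fourier power spectrum, one common sampling geometry. The sketch below is illustrative only; the specific geometries and algorithms used at the laboratory are not reproduced here.

```python
import numpy as np

def ring_wedge_signature(image, n_rings=4, n_wedges=4):
    """Integrate the Fourier power spectrum (a digital stand-in for the
    optical diffraction pattern) over concentric rings and angular
    wedges. Rings capture radial content (scale, texture); wedges
    capture orientation. The result is one form of sample-signature."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)
    # Power spectra of real images are symmetric, so wedges span 180 deg.
    theta = np.mod(np.arctan2(y - h // 2, x - w // 2), np.pi)
    r_max = r.max()
    rings = np.array([spectrum[(r >= r_max * k / n_rings) &
                               (r < r_max * (k + 1) / n_rings)].sum()
                      for k in range(n_rings)])
    wedges = np.array([spectrum[(theta >= np.pi * k / n_wedges) &
                                (theta < np.pi * (k + 1) / n_wedges)].sum()
                       for k in range(n_wedges)])
    return rings, wedges

# Hypothetical input: vertical stripes, 8 cycles across 64 pixels.  The
# spectral energy lands near the horizontal frequency axis (wedge 0) at
# low radius (ring 0), so the signature reveals both orientation and scale.
img = np.cos(2 * np.pi * 8 * np.arange(64) / 64)[None, :] * np.ones((64, 1))
rings, wedges = ring_wedge_signature(img)
```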
One of the major objectives of the Bio-Medical Division at Lawrence Radiation Laboratory (Livermore) is to determine the effects of radiation on man, particularly the effects of chronic exposure to low doses of radiation or moderate doses delivered at low rates. Biology and medicine, engineering, and computer science are represented in the LRL chromosome project, which is a team effort by about ten people. In June 1967, we were privileged to present our initial efforts in chromosome scanning to a seminar jointly sponsored by SPIE and the U. S. A. F. Office of Aerospace Research in Washington, D. C. (Ref. 1).
Angiocardiographic methods of determining left ventricular volume are well established, but the amount of labor involved in any manual procedure reduces the applicability of these techniques to myocardial function studies. An automated method of ventricular volume computation from unaltered x-ray images is clearly needed. This capability is offered by the amalgamation of state-of-the-art radiographic equipment with sophisticated programmable film readers. Problem areas are many, but primary attention should be paid to methods of determining boundaries for the objects in question from x-ray images and to utilization of accurate volume computation algorithms. Preliminary investigations in these areas are reported, involving maximum-likelihood estimation of boundary location and volume determination by modified Simpson integration.
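The volume computation step can be sketched with plain Simpson integration over serial cross-sectional areas. The paper's "modified" Simpson algorithm is not specified here, so this is the standard rule, checked against a shape of known volume.

```python
import math

def simpson_volume(areas, dz):
    """Volume from equally spaced cross-sectional areas by Simpson's
    rule; needs an even number of intervals (odd number of slices)."""
    n = len(areas) - 1
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of intervals")
    s = areas[0] + areas[-1]
    s += 4 * sum(areas[1:-1:2])   # odd-indexed slices
    s += 2 * sum(areas[2:-1:2])   # interior even-indexed slices
    return s * dz / 3.0

# Sanity check on a unit sphere sliced into five sections:
# A(z) = pi * (1 - z^2), true volume 4*pi/3.
z = [-1 + 0.5 * k for k in range(5)]
areas = [math.pi * (1 - zi ** 2) for zi in z]
print(simpson_volume(areas, 0.5))   # -> 4.18879... (= 4*pi/3)
```

Simpson's rule is exact for quadratic area profiles, which is why the spherical test case is recovered exactly even with only five slices.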
A calculation was performed of the attenuation of X-rays in passing through a tantalum filter and a layer of standardized compact bone. The calculations took into account the heterogeneity of the output of the X-ray tube and the energy dependence of the mass absorption coefficients of the material in which the radiation was absorbed. A similar calculation was made for the attenuation of X-rays in passing through a tantalum filter and an aluminum step wedge. The results of the two calculations were compared, and the comparison was used to estimate the thickness of compact bone which would produce the same darkening of the X-ray film as a given thickness of aluminum. This comparison was contrasted with the results from a simple formula (based on the assumption of monochromatic radiation) which related bone thickness to the aluminum thickness producing the same degree of darkening of the film. The bone thickness given by the simple formula was compared with that from the calculation based on polyenergetic radiation. The method provides an algorithm for automatic quantitative roentgenography.
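The equivalence calculation can be sketched numerically. The three-bin spectrum and attenuation coefficients below are invented for illustration; the actual work would use the measured tube spectrum and tabulated, energy-dependent mass absorption coefficients.

```python
import numpy as np

def transmitted(spectrum, mu, t):
    """Transmitted intensity of a heterogeneous (polyenergetic) beam
    through thickness t of a material, summed over energy bins with
    energy-dependent attenuation coefficients mu."""
    return np.sum(spectrum * np.exp(-mu * t))

# Illustrative, non-tabulated values over three energy bins.
spectrum = np.array([0.2, 0.5, 0.3])    # relative tube output per bin
mu_al    = np.array([1.2, 0.6, 0.35])   # aluminum, cm^-1 (assumed)
mu_bone  = np.array([1.0, 0.5, 0.30])   # compact bone, cm^-1 (assumed)

# Bone thickness giving the same transmission (hence film darkening) as
# 1.0 cm of aluminum, found by bisection on the monotone transmission.
target = transmitted(spectrum, mu_al, 1.0)
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if transmitted(spectrum, mu_bone, mid) > target:
        lo = mid        # still too transparent: need thicker bone
    else:
        hi = mid
print(f"equivalent bone thickness: {lo:.3f} cm")
# (about 1.2 cm for these invented coefficients)
```

Because the spectrum hardens as it passes through material, the equivalent thickness found this way differs from what a monochromatic formula predicts, which is the comparison the abstract describes.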