Engineers at ECAC have pioneered the application of numerical mapping techniques to the solution of electromagnetic compatibility problems. Data processing procedures for making digitized topographic data useful as a problem-solving tool have been developed (Ref. 1), and an economical means for acquiring digitized topographic data has been developed by Army Map Service personnel (Refs. 2, 3, 4). As of February 1, 1967, the topographic data base contained data representing approximately 25% of the United States. A collection plan has been established with the Army Map Service to supply additional data on a routine basis.
Two years ago at the tenth SPIE Symposium I presented a paper (Ref. 1) which illustrated some of the work JPL has been doing in the development of hardware and techniques for processing the Ranger and Mariner pictures. It is the purpose of this paper to bring you up to date on our more recent activities.
For several years operational reconnaissance films have been annotated with flight data digitized in the format of MIL-STD-782 code blocks. It is almost universally accepted in the using services that a code block annotated on film permanently, visibly, and in direct association with the pertinent photo imagery is a long-term, continuing requirement. For over a year, Fairchild Hiller Corporation conducted a substantial company-funded effort to fully explore all aspects of code block reading. This work resulted in contracts for a feasibility model from the Photo Management office of the Naval Air Systems Command, and later for a production design from the Tri-Service TIPI SPO. The need for automatic readers of the code block is a direct result of the large volume of film being handled. Equipments such as motorized light tables, computer-aided mensuration systems, continuous-process film titlers, and route-and-coverage plotters will all benefit from the use of compact, reliable, error-free, high-speed code block readers.
A conceptually simple computer plotting program has been developed that provides an efficient method of plotting a projection of a three-dimensional surface with remarkable clarity, even if the surface comprises a large amount of detail. The method consists of plotting successive parallel cuts of the surface, each displaced to establish the desired projection. Only that portion of the cut that lies above the previous cuts is then plotted. The plotting sometimes requires several hours, depending on the speed of the computer and on the complexity of the function, but the cost need not be excessive since a relatively small computer can be used. A few of the functions that have been plotted are discussed in some detail.
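The hidden-line rule described above (plot each displaced cut in front-to-back order and keep only the parts that rise above everything already drawn) is essentially what came to be called the floating-horizon algorithm. The sketch below is a modern illustration of that rule, not the paper's implementation; NumPy and the function name are assumptions, and the horizontal component of the displacement is omitted for simplicity:

```python
import numpy as np

def visible_segments(z, dz_shift=0.5):
    """Floating-horizon hidden-line removal (sketch).

    z: 2D array, z[i, j] = height of parallel cut i at abscissa j,
       with cut 0 nearest the viewer.
    Each successive cut is displaced upward by dz_shift to suggest
    the projection; a point is kept only if it rises above every
    previously drawn cut at the same screen abscissa.
    Returns a boolean mask, True where the surface is visible.
    """
    n_cuts, n_pts = z.shape
    horizon = np.full(n_pts, -np.inf)   # highest y drawn so far, per column
    visible = np.zeros_like(z, dtype=bool)
    for i in range(n_cuts):
        y = z[i] + i * dz_shift         # the displaced cut
        visible[i] = y > horizon        # only what clears the horizon is plotted
        horizon = np.maximum(horizon, y)
    return visible
```

Because only a one-dimensional "horizon" array must be kept between cuts, the method needs very little memory, which is consistent with the abstract's remark that a relatively small computer suffices.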
Sorting and classifying photographic imagery have long been eye-straining, time-consuming operations that must be done by human operators. Compounding an already difficult situation, rapid advances in technology have greatly increased imagery acquisition capabilities and reduced photographic processing time. The net result has been to bury the human photographic interpreter in a seemingly insurmountable backlog of imagery.
In a previous paper,1 an approach to automatic classification of terrain types in aerial photographs was described. Experiments verifying the approach were conducted on a computer-controlled cathode ray tube scanner system called the Natural Image Computer (NIC). The system simulated in serial fashion concepts of parallel local image shape extraction and decision derived from prior optical experiments.2,3 Classification experiments were performed on terrain samples of orchards, woods, lakes, oil tank farms, and railroad yards on 1:50,000 scale imagery, with a resolution corresponding to four feet on the ground.
The TIROS/ESSA weather satellites use vidicon camera systems to photograph the earth and its clouds. The pictures, destined for digital computer mapping and display preparation, contain geometric distortions caused by the scan control mechanism, the optics, and the satellite/earth geometry. Such distortions are measured or calculated and taken into account when determining the relative positions of picture elements and their earth locations (ref. 1). This process sufficiently corrects for data position errors, but there are also brightness errors introduced by the variable scene illumination, the transmission characteristics of the camera system, and irregularities in the digitizing hardware. This paper describes these brightness distortions, discusses the procedures used to determine them quantitatively, and explains the application of the resultant corrections. The overall goal of this picture conditioning is to normalize the observed scene to a uniformly illuminated image free of brightness distortion, so that separate scenes can be associated indiscriminately.
The application of spatial filtering has recently received new impetus. It has been demonstrated that spatial filters can be produced which contain not only amplitude information but also a record of the desired phase. This is done by applying holographic techniques in producing matched filters, i.e., filters that represent the complex conjugate of the Fourier transform of the object function. Filters of a new kind, called binary spatial filters, have several advantages over holographically produced matched filters. First, since the transmittances of the new filters have values only of 0 or 1, oversized representations can be conveniently produced by a computer-guided plotter, and then photographically reduced to the desired size. Second, the filter function need not exist physically; it must only be capable of mathematical description for the computer. Third, not only binary matched filters can be generated by computer, but also filters for a variety of other image-processing operations, such as code translation and differentiation.
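The matched-filter relation the abstract builds on, a filter equal to the complex conjugate of the Fourier transform of the object function, can be sketched numerically: multiplying the scene spectrum by that conjugate and inverse-transforming yields the cross-correlation, which peaks where the object appears. The thresholding helper is only a crude stand-in for the 0/1 transmittance idea (actual binary filters encode phase in the placement of transparent apertures); all names here are assumptions:

```python
import numpy as np

def matched_filter_correlation(scene, template):
    """Frequency-domain matched filtering (sketch).

    H = conj(F{template}); ifft(F{scene} * H) is the circular
    cross-correlation of scene with template.
    """
    F_scene = np.fft.fft2(scene)
    H = np.conj(np.fft.fft2(template, s=scene.shape))  # zero-pad template
    return np.fft.ifft2(F_scene * H).real

def binarize_filter(H, threshold=0.0):
    """Crude stand-in for a computer-plotted binary filter:
    transmittance 1 where Re(H) exceeds the threshold, else 0."""
    return (H.real > threshold).astype(float)
```

The correlation peak's coordinates give the template's location in the scene, which is the detection behavior a matched filter realized optically would produce at its output plane.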
The application of Fourier transform analysis to problems of optical imagery has been extensively discussed in the literature for over 20 years. Since the famous book of Duffieux (Ref. 1) in 1946, summarizing his previous articles, many articles, too numerous to cite here, have been written on the subject.
The very rapid increase in computing capability has placed an ever increasing premium on the development of display techniques which permit rapid understanding of the results of computations. Stereoscopic presentation uses one of the highest ordered mental processes to provide an important adjunct to current display technology, and it may provide the means to obtain solutions to previously unsolved problems. A simple introduction to stereoscopy, some reasons for its use and some examples of our use are presented.
For over a decade, there has been an extensive effort by government and university research groups as well as private industry toward the development of machine systems for the "recognition" of machine-printed alpha-numeric characters. Other related development efforts have been devoted to systems for the recognition of cursive writing, based on measurements taken of the motion of the stylus during the time the written material is being composed. It is only more recently that various organizations, among them IBM, have been investigating machine methods, including those having interaction with human operators for the realization of various functions, for recognizing or identifying fingerprints. There are some common requirements between machine processing of alpha-numeric characters and fingerprint patterns for identification purposes. One point of commonality is the need for scanning the source material and converting it to a representation suitable for machine processing. Another mutual requirement is that of sufficiently enhancing the source image so that it can be successfully processed, and the features for machine identification extracted. Still another need shared by character recognition and fingerprint identification is the matching of the features of the pattern or image to be identified against similar images available in a library or storage facility.
The development of an "image" processing system is presented as it relates to a research program to devise techniques to survey agricultural conditions from aerospace platforms. The motivation for the research and arguments for selected techniques are discussed. Finally, the current "image" handling methods are outlined, and a system is proposed which logically extends present capabilities.
The bubble chamber is one of the most versatile tools for high energy physics research. It is used to detect the passage of particles and to measure their trajectory, momentum and velocity. Nuclear events are produced by the interaction of beam particles from the accelerator with nuclei of the chamber medium.
A Bio-Medical Division was established in 1963 at the Lawrence Radiation Laboratory. The four main divisions of the LRL Bio-Medical program are (Ref. 1): 1) prediction of the possible impact of release of radiation and radionuclides upon the biosphere, in particular upon man, from any type of nuclear event; 2) documentation of the life history of radionuclides produced in the event; 3) determination of any effects of the radionuclides upon man; 4) development of countermeasures to minimize any possible radiation burden to man. These studies deal mainly with internal emitters and consider possibilities of slow delivery of low total doses of radiation.
This paper deals with a program of research in which the techniques and fundamental limitations of the restoration of degraded images are being studied. The research program includes all forms of image degradation, but this paper is concerned only with degradation of the type encountered in looking up through the atmosphere with a large ground-based telescope.
With the advent of the so-called "offset-reference" technique (Refs. 1, 2, 3) for recording holograms, many of the previous limitations of Gabor's wavefront-reconstruction process (Refs. 4, 5, 6) have been alleviated. It is natural, then, to consider the possible benefits offered by holography in various applications where "direct" or "conventional" image formation has customarily been used.
Many image processing problems, including the restoration of atmospherically degraded images, can benefit from the combined use of digital computers and coherent optical equipment, preferably in real time. Various questions then arise: What types of operations can better be done on a digital computer than by coherent optics, and vice versa? How can one transfer data quickly between a digital computer and coherent optical equipment? In this paper, we first briefly review and then compare digital computer and coherent optical image processing techniques to indicate their relative merits and faults. Then, we propose schemes for data transfer between digital computers and coherent optical equipment, and discuss the hardware available for carrying out these schemes. Finally, we look into the future and describe some desirable features of large image processing centers.