It has been reported that Confucius said, "A picture is worth a thousand words." Unfortunately, the historian did not leave us information as to the number of bits contained in each word. However, to my meager knowledge the Chinese language is built around a multiconcept word structure, so that the bit content of a word is higher than in our own language. Yet his comparison must still be considered a distinct understatement.
The great variety of demands upon photographic science has stimulated the development of many new photographic processes. It would be nearly impossible to describe the characteristics of all of these systems of photography. As it happens, they all have certain kinds of measurable characteristics in common. Let us consider the general kinds of characteristics that are important for information recovery.
Here and in other laboratories it has been observed that films exposed to subjects of an extended brightness range and processed in low-gamma developers show imagery of improved resolution and reduced flare in the normally overexposed region, and generally improved visual fidelity over most of the exposure range, compared with the same exposure processed in a higher-gamma, more active developer, although with some loss of toe speed and perhaps of pictorial quality. It has generally been assumed that, since high-gamma processing increases the contrast of the final image, improved imagery could not be obtained from low-gamma processing. We explain the improved imagery of low-gamma development in part by the effects of reduced density and increased contrast in the high-exposure region, but principally by the fact that low-gamma development leads to lower granularity and increased detectivity over most of the exposure range, compared with higher-gamma development. We review detectivity and quantum-efficiency theory and show that the ratio Rel d = γ/σ(D, A), as a function of exposure, is the detectivity parameter of interest, since the ratio of Rel d for any two conditions of processing is the ratio of quantum efficiencies and of detectivities.
REGI-B is a system incorporating a scanning microscope that automatically measures the optical microdensity of an image, in conjunction with position, over an 18- by 24-inch area with a resolution of 0.0001 inch. The high resolution of the density measurement taken during the precision scan makes it possible to define the position of an image at any specified density level to a high degree of accuracy. Earlier positional measuring devices could not locate images accurately because they had no accurate measurement of the optical density gradient at the edge of the image. The hardware consists of an X-Y scanner with a linear grating on one axis and a stepper motor on the other. The linear grating monitors position on a reversible digital counter; the stepper motor automatically increments a preset distance after each scan. A beam of light from a Kohler source is reflected from, or transmitted through, the sample. The detector is a microscope that can be fitted with a round or a rectangular aperture. A photomultiplier converts the light to an electrical signal that is amplified in a logarithmic or linear mode; the former is necessary when density is being measured directly. The system can be set to trigger on any density level, and the position of that level is recorded on magnetic tape. In an alternative mode, the density information is digitized and then recorded on tape. The output tapes serve as input to a program by which the computer can make comparisons of the data.
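The level-triggering idea described above — locating the position at which a density scan crosses a preset level — can be sketched in a few lines. This is a hypothetical illustration, not REGI-B's actual trigger logic; the function name and the use of linear interpolation between samples are assumptions.

```python
import numpy as np

def locate_density_level(positions, densities, level):
    """Find the positions at which a density scan crosses a preset
    trigger level, interpolating linearly between samples.
    Illustrative only; REGI-B's actual hardware trigger may differ."""
    positions = np.asarray(positions, dtype=float)
    densities = np.asarray(densities, dtype=float)
    crossings = []
    for i in range(len(densities) - 1):
        d0, d1 = densities[i], densities[i + 1]
        # a sign change of (density - level) marks a crossing
        if (d0 - level) * (d1 - level) < 0:
            frac = (level - d0) / (d1 - d0)
            crossings.append(positions[i] + frac * (positions[i + 1] - positions[i]))
    return crossings
```

Both edges of a symmetric image profile are returned, which is how an edge position at a specified density level could be read off a scan.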
Isodensitometry is a technique whereby the optical densities of a photographic image are grouped into discrete levels and presented as a permanent record of equidensity contours resembling the relief portrayal of a topographic map. The method is useful in the analysis of photographically stored information for two reasons. First, it permits the retrieval and use of the photometric information content of a photograph, i.e., the surface brightness, radiation dose, x-ray transmission, etc., of objects under study. Second, it assists in the retrieval of low-contrast image structure. Typically, this type of imagery is encountered in industrial and medical x-rays and in the shadow areas of aerial reconnaissance photography.
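The grouping of densities into discrete levels is, at bottom, a quantization step; equal indices in the quantized map trace out the equidensity contours. A minimal numpy sketch, with illustrative level values:

```python
import numpy as np

def isodensity_levels(density_map, levels):
    """Quantize optical densities into discrete bands; pixels sharing
    an index lie between the same pair of equidensity contours.
    The level values here are illustrative, not from the paper."""
    return np.digitize(np.asarray(density_map, dtype=float), bins=levels)
```

Tracing the boundaries between adjacent indices yields the contour-map presentation described above.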
This paper discusses the use of multiple gratings to obtain additional information from photographs of spectra whose zeroth order is resolved in two dimensions. Techniques of microdensitometry are discussed. Data processing and computer programming methods are presented. Several configurations of data display are feasible. Results using parallel, perpendicular, and circular spectral arrays are given. The various conditions under which the images can be disentangled are derived. Some consideration is given to various applications.
During the past decade the ability to acquire, process, and utilize technical and scientific information has undergone explosive growth. This expansion is primarily due to the development of the large-scale electronic digital computer. It is ironic to note, however, that the principal input to these magnificent processing systems is still manual. Classical methods of computer entry originated in punched paper tape and tabulator cards, both primarily prepared by manual methods. Only recently have direct electrical and optical inputs become practical. Equipment such as the Program-Controlled Film Reader/Recorder described here opens a vast new source of scientific data to automated processing.
Before proceeding into the second half of this seminar, I would like to expand the example I used in the introduction, in the context of the discussions generated by yesterday's papers. I do not intend these remarks to be derogatory to our previous speakers; they are offered as a deliberately startling example of the magnitude of the problem that faces us.
One of the principal limitations to the extraction of geometric and radiometric information from photographic records is the irregular structure of the photographic grain. Since this granularity is a random phenomenon, we must treat it using statistical techniques. Actually, the randomness is not confined to the grain structure. If we are truly getting new information about original objects, then the radiance distribution of the original objects is in a sense unpredictable, i.e., random. In this situation, any improvements made in information-recovery equipment should improve performance on the average for a whole class of original objects.
The fine detail in a photographic image is limited by the effective point spread of the optical system. Additional sources of image blur, such as uniform image motion, may further limit the detail. In an attempt to enhance the image and recover the detail masked by these degrading processes, we must consider how to design the restoring filter. One approach, which we discuss, is the "inverse filter," where the noise in the image (granularity) and realizability are important factors that force us to modify the initial concept. An example of the restoration of detail blurred by uniform image motion is presented and the results discussed. Another approach to the design of the restoring filter is to require that the net point spread of the enhanced image be non-negative, a characteristic common to all real imaging optical systems under incoherent illumination. This requirement translates into conditions on the shape of the restoring filter. It is shown, for example, that the filter should not allow the modulation at any spatial frequency to increase above 1.0. This requires that the optical transfer function of the net degrading process be known. Other conditions on filter roll-off and curvature are also described.
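One standard way to modify the bare inverse filter so that grain noise is not amplified, and so that the net modulation stays at or below 1.0 at every spatial frequency, is a Wiener-type regularization. The sketch below illustrates that general technique; it is not the paper's exact filter, and the constant k stands in for an assumed noise-to-signal power ratio.

```python
import numpy as np

def restoring_filter(H, k=0.01):
    """Wiener-type modification of the inverse filter 1/H.
    The net transfer |Hr * H| = |H|^2 / (|H|^2 + k) never exceeds
    1.0, satisfying the non-amplification condition in the text.
    k (noise-to-signal power) is an illustrative assumption."""
    H = np.asarray(H, dtype=complex)
    # conj(H)/(|H|^2 + k) -> 1/H where |H| is large, -> 0 where |H| is small
    return np.conj(H) / (np.abs(H) ** 2 + k)
```

Where the degrading transfer function is strong the filter approaches the true inverse; where it is weak (and grain noise dominates) the filter rolls off instead of blowing up.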
The recovery of photographic-system transfer functions from film records is invariably complicated by the presence of grain noise, which often causes these experimentally determined functions to oscillate wildly and introduces into the calculated data a positive bias with respect to the correct transfer functions. However, by exploiting the contrast between the random behavior of grain noise and the better-behaved photographic-system transfer function, we can remove a substantial amount of noise-caused error through suitable smoothing techniques. To demonstrate the merit of a particular smoothing technique, which improves the signal-to-noise ratio in the frequency domain by convolving the raw transfer function with a variable-bandwidth smoothing function, noisy photographic edge-image traces were synthesized using a CDC-3300 computer, the Fourier analysis program FRAF, and microdensitometer traces of an evenly exposed film sample. The application of the smoothing technique to the raw transfer functions produced from these edge data, and the use of the smoothed transfer functions in calculating line and three-bar image cross sections, illustrate the effectiveness of the method.
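The variable-bandwidth idea — smoothing harder at high spatial frequencies, where grain noise dominates, than at low frequencies, where the true transfer function still carries signal — can be sketched as a running mean whose window widens with frequency. This is a minimal illustration of the technique as described, not the FRAF implementation; the boxcar window and linear width schedule are assumptions.

```python
import numpy as np

def smooth_transfer_function(T_raw, min_width=1, max_width=9):
    """Smooth a noisy 1-D transfer function with a boxcar whose
    width grows linearly with frequency index. Window shape and
    width schedule are illustrative assumptions."""
    T = np.asarray(T_raw, dtype=float)
    n = len(T)
    out = np.empty(n)
    for i in range(n):
        # half-width grows from ~min_width/2 at DC to ~max_width/2 at the top
        w = int(round(min_width + (max_width - min_width) * i / max(n - 1, 1))) // 2
        lo, hi = max(0, i - w), min(n, i + w + 1)
        out[i] = T[lo:hi].mean()
    return out
```

A slowly varying transfer function passes through nearly unchanged, while rapid noise oscillations average toward zero, which is the contrast in behavior the abstract exploits.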
Image processing with in-line optical systems has benefited from improved experimental techniques and precise methods of filter generation. Both analog and digital processing techniques are available, and combined hybrid processing methods have been developed. Analog processing includes in-line and holographic optical processing; hybrid processing utilizes the combined capabilities of digital and analog systems. This paper is concerned with in-line optical processing and with one hybrid scheme that uses digital processing for filter fabrication and an in-line analog system for image processing. Experimental results are demonstrated with illustrations, including restored defocused and blurred imagery.
Photographic distortion caused by one-dimensional linear motion of the film during exposure is treated in this paper. The distortion (point spread) function is obtained by making a photographic record of the pulse shape. These data are in turn used by a digital computer to produce the Fourier transform of the point spread function. Optical spatial filtering of the distorted image with an inverse filter, using a coherent, monochromatic optical processor, has been employed. Images smeared by up to three times the minimum resolution length have been restored. The filtering technique consists of manipulating both the phase and the amplitude of the distorted scene. In general, the theory of optimal filtering has considered additive noise in the form of signal-to-noise ratios; the signal-to-noise ratio has been treated either as a constant or as some function of the spatial frequency. In the latter case, improvement in both restoration and cosmetics was obtained. Film linearity of both the input and output imagery is controlled by processing the film over the linear portion of the Ta vs. E curve. MTF curves of the frequency response before and after filtering are presented. A theoretical error analysis was performed on the restored imagery, and good agreement between theory and experiment was obtained.
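For uniform one-dimensional motion during exposure, the point spread function is a rectangle of width equal to the smear length, so its Fourier transform — the MTF the inverse filter must undo — is a sinc. A short numeric sketch (the function name is mine; the sinc form is standard for this case):

```python
import numpy as np

def motion_blur_mtf(freqs, smear_length):
    """MTF of uniform linear image motion: the point spread is a
    rectangle of width d, whose Fourier transform is sinc(f*d).
    Note np.sinc(x) = sin(pi*x)/(pi*x)."""
    return np.abs(np.sinc(np.asarray(freqs, dtype=float) * smear_length))
```

The MTF is 1.0 at zero frequency and falls to its first null at f = 1/d, which is why smear lengths of several resolution elements wipe out fine detail until the phase and amplitude are restored by filtering.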
Photographic image processing is currently among the fastest growing areas of information processing. Problems in fields ranging from biomedicine to photoreconnaissance are currently being attacked using both analog and digital techniques. Most investigators agree that the digital approach is much more workable, except in cases where vast numbers of points have to be handled and computer time and/or storage locations become a limiting factor.
"Mr. Chairman, the session chairman has certain advantages. One of the advantages is that I would like to philosophize a little bit before we start. Most of us here have been going to seminars for a long time, and I sense that since about 1960 they have been getting, perhaps, overly polite. You do not get rough questions any more, and I hope to reverse that trend. My purpose here is to be something like an "adjutant provocateur" or devil's advocate. I hope that we get some tough