Optical data processing is often thought of as a relatively new and promising discipline. Some believe that this field is a product of the science and technology of the last decade. The more knowledgeable shake their heads wisely, confident that they know the work really started in the early 1950s. The real aficionados, however, are aware of many important forerunners of the current techniques of optical processing. These forerunners include the development of the knife-edge test by Foucault, the introduction of the schlieren system by Töpler, the theoretical work of Abbe on image formation in a microscope, and the subsequent illustrative experiments by Porter. Perhaps the real classic is the phase-contrast microscope invented by Zernike.
The Optics Laboratory can be an exciting and productive place to work, or a source of constant frustration. If unlimited funds are available for the purchase of expensive optical devices, frustration may be kept to a minimum; but since this is infrequently the situation, ingenuity and inventiveness are essential ingredients for success.
The need in information processing for increased capacity, higher speed, and economy continues to grow unabated. With the progress in lasers and holographic techniques, coherent optical systems show promise as a viable approach to high-speed information processing of a large data base because of their ability to operate on all data points in parallel. In addition to this potentially large data-handling capacity, certain operations such as Fourier transformation and correlation are performed simply and more rapidly than with serial digital computer techniques, and the results are often available in a form convenient for interactive displays.
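The correlation operation mentioned above maps directly onto the Fourier domain, which is what makes it cheap in a coherent processor. A minimal numerical sketch of the same relation (hypothetical 1-D signals, NumPy standing in for the optics):

```python
import numpy as np

def cross_correlate(f, g):
    """Circular cross-correlation of f against g via the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(f) * np.conj(np.fft.fft(g))))

# A rectangular pulse correlated with a delayed copy of itself
# peaks at the delay.
f = np.zeros(64)
f[10:20] = 1.0
g = np.roll(f, 5)                 # f delayed by 5 samples
c = cross_correlate(g, f)
print(np.argmax(c))               # peak at lag 5
```

The single Fourier-domain product replaces a sum over all lags; the optical processor evaluates the analogous product for every data point at once.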
One method of generating synthetic, or computer-generated, holograms is first to compute the Discrete Fourier Transform (DFT) of the desired image. The hologram is then formed by coding the complex values of this DFT onto photographic film in some manner. This paper will describe a method for generating synthetic binary holograms that completely avoids the need to compute the DFT of the desired image. The relationship between this new method and previously used techniques will be explored.
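For contrast with the new method, the conventional DFT route it avoids can be sketched numerically. The thresholded, carrier-fringe coding below is a simplified illustration assumed for this sketch, not the coding scheme of any particular paper:

```python
import numpy as np

def binary_hologram(image, carrier_freq=8):
    """DFT-coded binary hologram: a simplified, assumed coding scheme."""
    F = np.fft.fftshift(np.fft.fft2(image))   # the DFT the new method avoids
    amp = np.abs(F) / np.abs(F).max()         # normalized DFT amplitude
    phase = np.angle(F)                       # DFT phase
    n = image.shape[1]
    carrier = 2 * np.pi * carrier_freq * np.arange(n) / n
    # Open a transparent cell where the carrier-plus-phase fringe is
    # bright enough relative to the local amplitude.
    return (np.cos(carrier[None, :] + phase) >= 1.0 - amp).astype(np.uint8)

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
h = binary_hologram(img)
print(h.shape, np.unique(h))                  # a 0/1 transmittance pattern
```

Whatever the coding, the DFT itself must be computed first in this route, which is the cost the paper's method eliminates.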
Optical data processing is one of the most powerful and versatile processing tools to emerge in recent years. Numerous texts, conferences, and special issues of journals have been devoted to its many accomplishments. There are many types of optical processors: some use noncoherent rather than coherent light sources, while others use nonholographic rather than holographic processing methods. The hybrid optical processor described in this paper can be partitioned into two sections. One is a conventional coherent optical processor; the second is a digital minicomputer. These two sections communicate via a special-purpose digital interface. With this configuration, the best features of both optical and digital processing are retained.
The sources of noise present in coherent optical systems are reviewed and classified according to their origin. It is shown that the ultimate limit of image quality is the noise present in the film. The measurable film parameters (MTF, granularity, resolution limit, and threshold modulation) influencing the performance of coherent optical processing systems are combined to set forth a performance criterion that defines the image quality attainable in a given experiment. Sources of noise originating within the system, rather than from the film, can be lessened or eliminated by various techniques that effectively reduce the degree of spatial and/or temporal coherence of the system. These techniques create an ensemble of random noise patterns superimposed upon a stationary image. This superposition averages the noise to a uniform background and reduces the contrast of the image. This system noise reduction leaves the film noise as the limiting parameter in the system. Both holographic and coherent imaging experiments are compared in terms of noise reduction techniques. Five experimental procedures to be considered when designing a coherent optical system with minimum noise are recommended, and examples of their application are demonstrated.
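The ensemble-averaging argument above can be illustrated numerically: fully developed speckle has unit contrast, and averaging N independent intensity patterns reduces the contrast roughly as 1/√N. This is a toy statistical model of the coherence-reduction techniques, not a simulation of the experiments themselves:

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(shape, rng):
    """Fully developed speckle: intensity of a circular complex Gaussian field."""
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    return np.abs(field) ** 2

shape = (256, 256)
single = speckle_intensity(shape, rng)
average = np.mean([speckle_intensity(shape, rng) for _ in range(16)], axis=0)

contrast = lambda i: i.std() / i.mean()       # speckle contrast = std / mean
print(round(contrast(single), 2))             # ~1.0  (unit-contrast speckle)
print(round(contrast(average), 2))            # ~0.25 (reduced as 1/sqrt(16))
```

The residual uniform background is exactly the averaged-out noise floor described in the abstract, which is why the film noise then dominates.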
The power spectrum of a time-varying signal may be realized using optical techniques by first converting the temporal variations to corresponding spatial patterns, then illuminating the spatial pattern with coherent, collimated light and observing the far-field diffraction pattern. The light intensity distribution in the diffraction pattern is directly equivalent to the power distribution in the spectrum of the original signal (Ref. 1). This paper describes the implementation and performance of a real-time, coherent optical spectrum analyzer that uses magnetic tape recording to convert temporal variations to corresponding spatial magnetic patterns, and uses the longitudinal Kerr magneto-optic effect to couple these magnetic patterns to a coherent illumination source. A magneto-optic transducer was developed that replicates the magnetic patterns from magnetic tape by contact printing and has the magnetic and optical properties required to achieve a sufficiently large Kerr effect for practical applications.
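The temporal-to-spatial equivalence above is, in discrete form, the statement that the far-field intensity is the squared magnitude of the pattern's Fourier transform. A quick numerical check with a hypothetical single-tone signal:

```python
import numpy as np

# Record one unit length of a 50-cycle tone at 1000 samples per unit:
# the "spatial pattern" on the tape.
fs = 1000.0
x = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * x)

# The far-field (Fraunhofer) intensity is |FT|^2 of the transparency,
# i.e. the power spectrum of the recorded signal.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
print(freqs[np.argmax(spectrum)])        # brightest diffraction order at 50
```

In the analyzer, this entire spectrum appears at once in the back focal plane rather than being computed bin by bin.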
Several deconvolution filters for the restoration of linearly degraded images are presented, together with the restored images obtained using these masks on an optical bench. The sampled holographic spatial filters were constructed using a computer and a high-resolution film plotting device. This method provides a practical and flexible approach to the optical deconvolution problem. For example, compensation for film nonlinearities can be included in the computations, and consistent results can be achieved without an extremely stable and dust-free environment. The effects of sampling are briefly considered.
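A computed deconvolution filter of the kind described can be sketched numerically. The Wiener-style regularizing constant `k` below is an assumption standing in for the finite dynamic range a physical mask must respect:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Fourier-domain deconvolution with a small regularizing constant k."""
    H = np.fft.fft2(psf, s=blurred.shape)        # degradation transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + k)        # the deconvolution filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Degrade a test image with a uniform 5x5 blur, then restore it.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf)

err = lambda a: np.mean((a - img) ** 2)
print(err(restored) < err(blurred))              # restoration reduces the error
```

Computing `W` rather than realizing it interferometrically is what lets corrections such as film-nonlinearity compensation be folded into the mask.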
We report a new holographic method for producing contours on diffusely reflecting surfaces. The contour spacing can be continuously varied from 2.5 μm (the lower limit so far) up to 1 mm or larger. The holographic emulsion is placed close to the object, and a single beam of laser light (in our case, a 1 mW He-Ne laser) is used to supply both the object and reference beams. To produce contours, a double-exposure hologram is recorded with the reference beam incident from two different angles. The observed contour spacing is determined by the angular separation of the incident beams as well as the viewing angle. The contours may be viewed in white light, and the contour intervals were verified using calibrated wedges. The method is realized through the use of a new holographic processing technique, employing a spray developer and a very short development time, which greatly enhances the brightness of the hologram image.
Some of the adjectives used in describing a hologram are self-explanatory, but others give rise to confusion as to what distinguishes the hologram from other types. It is the purpose of this paper to define many types of holograms and to discuss a few of the types in detail, in the hope that some of the confusion can be dispelled.
One of the basic reasons for using optical processing is the simplicity of using a spherical lens to take a 2-dimensional Fourier transform (FT) of input data. Intrinsically, the 2-dimensional FT is best understood in rectangular coordinates (Ref. 1), and most FT techniques take advantage of rectangularly formatted data. However, a polar format is often the most natural form for data obtained with a technique where rotation is inherent in the data-taking process. Such cases exist in radio astronomy of rotating planets (Ref. 2) and other stellar objects where the earth's rotation is used to advantage (Ref. 3), and in processing image projections (Ref. 4). In addition, the concept of recording data on a rotating disk for near-real-time optical processing is also very attractive (Ref. 5). All these areas of interest might take advantage of optical data processing. However, a major stumbling block is that the FTs of even the simplest polar forms, e.g. a pie-shaped wedge (Ref. 6), do not lend themselves to closed-form expressions that can be easily understood. Hence, whatever advantages a polar-formatted optical processing system might have are not readily evident.
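Although no convenient closed form exists, the transform of a pie-shaped wedge is straightforward to examine numerically. A minimal sketch with illustrative wedge parameters:

```python
import numpy as np

# Build a pie-shaped wedge aperture in polar coordinates:
# radius < n/4, opening angle 30 degrees about the +x axis.
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y)
theta = np.arctan2(y, x)
wedge = ((r < n / 4) & (np.abs(theta) < np.radians(15))).astype(float)

# |FT|^2 is what the lens-based processor would display.
F = np.fft.fftshift(np.fft.fft2(wedge))
intensity = np.abs(F) ** 2

# Sanity check: the zero-frequency intensity equals the squared wedge area.
print(np.isclose(intensity[n // 2, n // 2], wedge.sum() ** 2))
```

Inspecting `intensity` for wedges of varying angle and radius gives the intuition that closed-form analysis does not.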
Optical processing was first used as an integral part of a holographic microscope by McFee (Ref. 1). Studying crystal growth from the melt, he inserted a field lens in the recording and reconstruction system. Upon reconstruction he could insert a spot diaphragm at the focal point of the lens to accomplish dark-field illumination. He could also insert a phase plate at the same location to accomplish phase contrast. These techniques primarily enhance edge contrast in the reconstructed image.
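The spot-diaphragm operation translates directly into Fourier-plane language: block the undiffracted (low-frequency) light, and only edges and fine structure survive. A numerical sketch of that filtering, with the stop radius chosen for illustration:

```python
import numpy as np

def dark_field(image, stop_radius=8):
    """Block the low-frequency (undiffracted) light at the Fourier plane."""
    F = np.fft.fftshift(np.fft.fft2(image))
    n, m = image.shape
    y, x = np.mgrid[-n // 2:n // 2, -m // 2:m // 2]
    F[np.hypot(x, y) <= stop_radius] = 0.0       # the "spot diaphragm"
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F))) ** 2

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                           # uniform bright square
out = dark_field(img)

# The flat interior is suppressed; the edges stay bright.
print(out[16, 20:44].mean() > out[30:34, 30:34].mean())
```

A phase plate in place of the stop would shift, rather than remove, the undiffracted light, giving phase contrast instead of dark field.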
Mertz and Young introduced the idea of using a Fresnel zone plate as a shadow-casting reticle, or coded aperture, in x-ray astronomy. More recently, considerable progress has been made toward using the zone-plate aperture for gamma-ray imaging in nuclear medicine. The most successful configuration has used an off-axis section of a zone plate in conjunction with a halftone screen. In this paper, we discuss a variety of closely related coded apertures, including an annulus, an inverted zone plate, a spiral zone plate and the Girard grill. In most cases, the technique of grid-coded subtraction is used to suppress the zero-order (DC) background light usually associated with zone-plate imaging. The first application of this technique, reported by Stoner et al., used a sequence of two to four on-axis zone plates. In the present paper it is shown that the method can be extended to other apertures and is also very useful in synthesizing the spatial filters for optical decoding.
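The shadow-casting picture can be sketched numerically: each source point casts a zone-plate shadow, so the coded image is the source convolved with the aperture, and decoding is a correlation with the same aperture. The parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def zone_plate(n, scale):
    """Binary on-axis Fresnel zone plate (illustrative parameters)."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    return (np.cos(np.pi * (x ** 2 + y ** 2) / scale) > 0).astype(float)

def fft_filter(a, b, conjugate=False):
    """Circular convolution (or correlation, if conjugate) with centred b."""
    B = np.fft.fft2(np.fft.ifftshift(b))
    if conjugate:
        B = np.conj(B)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * B))

n = 128
zp = zone_plate(n, scale=40.0)
source = np.zeros((n, n))
source[40, 70] = 1.0                              # a single point source

coded = fft_filter(source, zp)                    # shadowgram (coded image)
decoded = fft_filter(coded, zp, conjugate=True)   # correlate to decode
peak = np.unravel_index(np.argmax(decoded), decoded.shape)
print(peak)                                       # recovers the source position
```

The broad autocorrelation background around the recovered peak is the zero-order (DC) term that grid-coded subtraction is designed to suppress.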
This paper investigates a method for solving the long-standing problem of longitudinal image distortion encountered in longwave (microwave or acoustic) holography, which has so far prevented the exploitation of the three-dimensional imaging capability of holography. A scheme is described for real-time, distortionless image reconstruction from longwave holograms that has the potential of yielding three-dimensional images which can be viewed with the unaided eye. The scheme is based on combining the operations of a spatial microwave modulator with those of a plasma chamber in which visualization is due to light emission caused by microwave-induced enhanced ionization and subsequent luminosity in a weakly ionized r.f. plasma. The scheme has the advantage of being implementable in real time when a light valve is added. A relatively low millimeter-wave reconstruction power of 0.6 W average is shown to be sufficient for the reconstruction of a three-dimensional image consisting of 1000 resolvable points using commercially available millimeter-wave sources, assuming that the local enhanced luminosity is caused by a doubling of the ionization rate in a low-frequency r.f. discharge plasma.
Existing radiographs, or x-ray projections, record the transmission of various materials over a broad spectrum of x-ray energies. It would be highly desirable if specific spectral regions could be isolated. For example, the absorption regions for contrast materials, iodine and barium, occur in specific parts of the energy spectrum. If the information relating to contrast-material absorption could be separately delineated, many diagnostic procedures could be greatly facilitated. Similarly, lower energy regions, which are more responsive to bones and calcifications, could be delineated from higher energy regions, where the absorption is due mainly to Compton scattering and thus depends mostly on tissue density rather than atomic number. The separation of these two spectral regions in a processed radiograph would allow the radiologist to look through the bony pattern and thus see soft tissue, lesions, and airways which underlie the bone. For example, lung tumors under the ribs could be clearly delineated rather than being obscured.
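The delineation described is, in spirit, a dual-energy subtraction: log-transmission images at two energies are combined with a weight chosen to cancel bone, leaving the soft-tissue signal. A toy numerical sketch with made-up attenuation coefficients (all numbers assumed):

```python
import numpy as np

# Hypothetical attenuation coefficients (low-energy, high-energy) per material.
mu = {"bone": (3.0, 1.0), "tissue": (0.8, 0.5)}

t_bone = np.array([0.0, 1.0, 1.0, 0.0])      # bone thickness along four rays
t_tissue = np.array([2.0, 2.0, 3.0, 3.0])    # tissue thickness along the rays

# Negative log transmissions at the two energies (Beer-Lambert model).
low = mu["bone"][0] * t_bone + mu["tissue"][0] * t_tissue
high = mu["bone"][1] * t_bone + mu["tissue"][1] * t_tissue

w = mu["bone"][0] / mu["bone"][1]            # weight chosen to cancel bone
soft = low - w * high                        # bone-free combination
print(np.allclose(soft, (mu["tissue"][0] - w * mu["tissue"][1]) * t_tissue))
```

Rays 0 and 1 carry the same tissue but different bone, yet produce identical `soft` values: the bony pattern has been looked through.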
A description of various 3-D image reconstruction algorithms is given, with particular reference to transmission tomography. Fourier, Fourier-convolution, and analytical methods are presented in detail.
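The Fourier method rests on the projection-slice theorem: the 1-D transform of a projection equals a central slice of the object's 2-D transform. A quick numerical check for the zero-angle projection:

```python
import numpy as np

# A simple test object: a rectangular block of unit density.
obj = np.zeros((64, 64))
obj[20:44, 28:36] = 1.0

proj = obj.sum(axis=0)                   # projection along y (angle 0)
slice_from_proj = np.fft.fft(proj)       # 1-D FT of the projection
central_slice = np.fft.fft2(obj)[0, :]   # k_y = 0 row of the object's 2-D FT
print(np.allclose(slice_from_proj, central_slice))
```

Projections at many angles thus fill in the 2-D transform slice by slice, from which the object is recovered by inversion, the starting point for both the Fourier and the convolution formulations.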