Sensitometry and densitometry are the measurement techniques used to derive quantitative data from a pictorial record on photosensitive material. A general overview of these two areas will be presented, progressing from the basic principles of each to the problems and pitfalls sometimes overlooked. Basic sensitometry in the macro sense will be covered, including exposing methods, light sources, spectral conditions, attenuators, and photosensitive-material peculiarities. The differences between macro and micro sensitometry will be elaborated upon, with emphasis on light scattering, development adjacency effects, and granularity. Densitometry will then be examined. The geometrical and spectral conditions, notation system, standards, Callier Q factor, and types of instruments will be explained for macrodensitometry. The differences between macro and micro densitometry will then be studied, plus the special problems of linearity, flare, coherency, maintenance of focus, and calibration for microdensitometry.
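The densitometric quantities named above rest on simple logarithmic relations; a minimal sketch, with illustrative values that are not taken from the paper:

```python
import math

def optical_density(transmittance):
    """Optical density D = -log10(T), where T is the fraction of light transmitted."""
    return -math.log10(transmittance)

def callier_q(specular_density, diffuse_density):
    """Callier Q factor: ratio of specular to diffuse density.

    Q exceeds 1 for light-scattering (e.g. silver-grain) materials, which is
    why the measuring geometry matters in densitometry.
    """
    return specular_density / diffuse_density

# A sample transmitting 10% of the incident light has a density of about 1.0.
print(optical_density(0.10))
# Illustrative specular/diffuse density pair (assumed values):
print(callier_q(1.5, 1.0))
```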
A novel method to identify simple shapes defined by straight line segments is described. The Atlantic Research Corporation technique uses a laterally moving optical fringe pattern generated by two laser beams converging at a small angle. The two beams, of identical intensity, are derived from the same laser by using a Bragg cell as a beamsplitter. One of the two beams is shifted in frequency by an amount determined by the ultrasonic drive frequency of the Bragg cell, which causes the fringes to move in a direction perpendicular to their plane. In addition, the shifting fringe pattern is continuously rotated about the bisector of the two converging beams. Transmitted (or scattered) radiation is detected, and the RF component is amplified and displayed on an oscilloscope, providing an angular spectrum of the image. For a triangle, for example, made up of three lines (or three slits on an otherwise opaque negative plate), the acquired spectrum consists of three vertical lines displayed along the baseline. A detailed description of the method is presented, including experimental data. In conclusion, the paper describes interesting application areas.
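The fringe geometry follows from standard two-beam interference; a small numpy sketch of the spacing, velocity, and intensity pattern (the beam half-angle and Bragg-cell frequency below are assumed, typical values, not values from the paper):

```python
import numpy as np

# Two equal-intensity beams converging at half-angle theta, one shifted in
# frequency by delta_f (the Bragg-cell drive), form fringes of spacing
# Lambda = wavelength / (2 sin theta) moving at velocity v = delta_f * Lambda.
wavelength = 633e-9   # He-Ne wavelength, m (illustrative)
theta = 0.5e-3        # half-angle between beams, rad (assumed)
delta_f = 40e6        # Bragg-cell drive frequency, Hz (assumed)

spacing = wavelength / (2 * np.sin(theta))
velocity = delta_f * spacing

def fringe_intensity(x, t):
    """Normalized intensity of the moving fringe pattern at position x, time t."""
    return 1 + np.cos(2 * np.pi * (x / spacing - delta_f * t))

print(f"fringe spacing = {spacing * 1e6:.1f} um, velocity = {velocity:.1f} m/s")
```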
This paper is an update of part of an article in the SPIE Journal for February-March 1966, entitled "Symbols, Units, and Nomenclature for Atmospheric Transmission," by Irving J. Spiro, R. Clark Jones, and David Q. Wark. In the interim 1966-1974, Fred Nicodemus, lately of the Naval Weapons Laboratory at China Lake, California and now with the National Bureau of Standards (NBS) at Gaithersburg, Maryland, has done an outstanding job in solidifying the concepts proposed (as a committee action) in 1966.
The primary factor that separates image processing from other digital computer processing is the sheer bulk of the data base. This has effects at every stage of the processing: acquisition, digitization, storage, computation, regeneration, and interaction. The workshop was designed to expose new users to the problems encountered in each of these areas and to give relevant examples. Thus the user should have a concept of the magnitude of his problem prior to its implementation. The workshop does not try to describe how to do image processing, but to expose the facets to be considered in the design stages.
In recent years much effort has been devoted to the problem of restoring signals that have been convolutionally degraded by a process that can be modeled by a linear system. Typical degraded signals may be one- or two-dimensional, such as a reverberated audio signal or a blurred photograph. Most approaches to the restoration problem, however, have assumed advance knowledge of the degrading system, and have centered on workable methods of restoration. Recent research at Utah has led to the development of a nonlinear homomorphic restoration system needing very little a priori information about the system that produced the convolutional degradation.
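The homomorphic idea — convolution in the signal domain becomes addition in the log-spectrum domain, where smooth and rapidly varying components can be separated by ordinary filtering — can be verified in a few lines of numpy. This is an illustrative sketch only, not the Utah restoration system itself:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(256)          # "original" signal
h = np.exp(-np.arange(256) / 4.0)     # smooth degrading impulse response

# Circular convolution x = s * h via the FFT.
x = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(h)))

# In the log-spectrum domain the convolution has become an addition:
log_spec = np.log(np.abs(np.fft.fft(x)) + 1e-12)
check = (np.log(np.abs(np.fft.fft(s)) + 1e-12)
         + np.log(np.abs(np.fft.fft(h)) + 1e-12))
print(np.allclose(log_spec, check, atol=1e-6))  # -> True
```

Because the blur's log-spectrum is typically much smoother than the signal's, the two additive terms can be separated with little a priori knowledge of the degrading system — the essence of the homomorphic approach.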
The state of the art in large-scale digital computers has recently opened the way for high resolution image processing by digital techniques. With the increasing availability of digital image input/output devices it is becoming quite feasible for the average computing facility to embark upon high quality image restoration and enhancement. The motivation for such processes becomes self-evident when one realizes the tremendous emphasis man puts on his visual senses for survival. Considering the relative success achieved in one-dimensional (usually time) signal processing, it is to be expected that far greater strides could be made in the visual two-dimensional realm of signal processing.
Pseudo-color processing is a technique that maps each of the grey levels of a black and white image into an assigned color. This colored image, when displayed, can make the identification of certain features easier for the observer. The mappings are computationally simple and fast. This makes pseudo-color an attractive technique for use on digital image processing systems that are designed to be used in the interactive mode. This paper will discuss the application of several pseudo-color mapping schemes. Various color maps can give contrast enhancement effects, contouring effects, or grey level mapping (depicting areas of a given grey level). Pseudo-color schemes can also be designed to preserve or remove intensity information. Since the nature of the original black and white image can determine the success or failure of a particular color scheme, it is necessary to find a rational approach to the design and selection of the color maps. The paper will describe some methods of designing color schemes that use ideas from the fields of colorimetry and visual perception.
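The computational simplicity claimed above comes from the fact that a pseudo-color map is just a lookup table indexed by grey level; a minimal sketch with one illustrative ramp-style table (other maps — contouring, contrast-enhancing — differ only in the table's contents):

```python
import numpy as np

# One RGB triple per grey level: a simple "hot"-style ramp in which red
# rises first, then green, then blue. Illustrative choice, not a map from
# the paper.
levels = np.arange(256)
lut = np.zeros((256, 3), dtype=np.uint8)
lut[:, 0] = np.clip(3 * levels, 0, 255)
lut[:, 1] = np.clip(3 * levels - 255, 0, 255)
lut[:, 2] = np.clip(3 * levels - 510, 0, 255)

def pseudo_color(grey_image):
    """Map a uint8 grey-level image of shape (H, W) to a color image (H, W, 3)."""
    return lut[grey_image]

grey = np.array([[0, 64, 128, 255]], dtype=np.uint8)
print(pseudo_color(grey).shape)  # -> (1, 4, 3)
```

The mapping is a single table lookup per pixel, which is why such schemes suit interactive digital image processing systems.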
Most methods for measuring the system response of image digitizers have involved an optical knife-edge. There are two reasons for this: first, high quality knife-edges are not difficult to obtain; and second, the one-dimensional impulse response of the system may be obtained by differentiating the response to a step function (which the knife-edge approximates).* The technique presented below uses a knife-edge scan in a slightly novel context to obtain the edge spread function (ESF) of image scanners. Traditional methods may then be used to obtain the line spread function (LSF) from the ESF.
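The ESF-to-LSF step described above — differentiating the response to a step — can be sketched on synthetic data (a Gaussian LSF is assumed purely for illustration):

```python
import numpy as np

# Integrate a known Gaussian LSF to get a synthetic knife-edge response
# (ESF), then recover the LSF by differentiating the ESF.
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
true_lsf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

esf = np.cumsum(true_lsf) * dx        # response to a knife edge (step)
recovered_lsf = np.gradient(esf, dx)  # LSF = derivative of ESF

print(np.allclose(recovered_lsf, true_lsf, atol=0.01))  # -> True
```

With real scanner data the finite differencing amplifies noise, so the measured ESF is usually smoothed or fit before differentiation.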
The development of large high powered laser systems for fusion research has placed new demands on the diagnostic analysis of laser beams. Such lasers often require specific and sophisticated analysis of both the temporal and spatial beam profiles at several stages throughout the chain. Both the volume of the diagnostic data and the sophistication of the analysis require that a high speed digital computer be used to perform this analysis. A computer code named PHOTOCAL has been written and is currently being used at Lawrence Livermore Laboratory to calibrate and analyse both the temporal and spatial profiles of sub-nanosecond laser pulses. In this report a description of the code will be given.
The construction of digital Fourier transform holograms by methods analogous to the optical Vander Lugt process, using a high-precision, multiple gray-level plotting source, is demonstrated. The theory of sampled holograms and the problems inherent in these digital methods are discussed, and calibration problems are explored. Finally, examples of the use of such holograms as spatial filter masks are presented.
In coherent optical data processing, the optical system is used as a linear space-invariant system. A method in which the optical system is highly space-variant was described recently [1]. With this method, a geometric transformation of an object can be obtained in the frequency plane of the object. The optical component responsible for the geometric transformation is a computer-generated phase filter. In this paper, we shall briefly describe this method of optical map transformation. Because of the importance of the phase filter in the optical system, we will also discuss a recent method for making computer-generated holograms [2]. An example is presented which shows how conformal mapping is achieved with this optical technique.
Today I would like to discuss an approach to synthesizing digital holograms developed while I was at Stanford University. These holograms, like the kinoform [1], an approach developed at IBM, require only a phase-transmittance material for recording and are reconstructed on-axis without conjugate images. Unlike the kinoform, however, amplitude information in the hologram plane is not discarded, preventing the associated loss of fidelity [2], which is a fundamental limitation of the kinoform. The synthesis procedure for these holograms is essentially the same as that for the kinoform, with only one minor additional step in the digital processing stage. This extra step shapes the spectrum in a variety of ways, including leveling the spectrum completely. The leveled spectrum is then made into a phase hologram. In the reconstruction, the original object can be reconstructed with full fidelity together with some extraneous elements introduced by the spectrum shaping. However, these extraneous elements, which we called "parity elements," are clearly separated from the data and can be discarded.
The availability and use of phase-only optical processing materials has stimulated the study of phase-only holography and spatial filtering. (Refs. 1,2,3) This study leads naturally to the question of how much information about an image can be stored in the phase of the Fourier transform of the image. The kinoform is an optical device that uses only the phase of the Fourier transform in image reconstruction; any information contained in the magnitude of the Fourier transform is lost, and this contributes to image degradation.
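The phase-only reconstruction question can be stated in two lines of numpy: keep the phase of the Fourier transform, force the magnitude to unity, and invert. A minimal sketch on random data, purely to illustrate the operation:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((8, 8))  # stand-in for a grey-level image

F = np.fft.fft2(image)
phase_only = np.exp(1j * np.angle(F))      # unit magnitude, original phase
recon = np.real(np.fft.ifft2(phase_only))  # kinoform-style reconstruction

# The magnitude information has been discarded, so recon differs from image;
# how much recognizable structure survives is exactly the question studied.
print(recon.shape)  # -> (8, 8)
```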
A phase and amplitude, off-axis hologram has been synthesized from three computer-generated masks, using a multiple-exposure technique. Each of the masks controls one fixed phase component of the complex hologram transmittance. The basic grating is generated optically, relieving the computer of the burden of drawing details the size of each fringe. The maximum information capacity of the computer plotting device can then be applied to the generation of the grating modulation function. With this arrangement large holograms (25 mm × 25 mm) have been synthesized in dichromated gelatin.
Despite their existence for nearly a decade, computer-generated (synthetic) holograms are seldom used for visual displays. One problem not easily solved is the time and expense of calculating and displaying the large number of resolution elements required for a high quality image. Another problem, one which is more easily solved, is the production of color images rather than the usual unnatural-looking monochromatic ones.