When I was first contacted about the possibility of developing a special issue devoted to vision, I was left in a quandary as to what elements to discuss and emphasize. The task has proved to be easier than I had thought, due to the help of the authors and the editor John DeVelis.
At the present time, the preferred method of determining optical corrections suitable for prescribing optical aids such as spectacles is through subjective response of the individual being examined. Instrumentation for this purpose has been basically unchanged over the last forty years. New types of optical components and optical systems are now available that promise to increase the ease and reliability of such subjective refractive determinations. One such system incorporating the recently developed variable power spherical and astigmatic lens systems results in a particularly convenient form of subjective refractor, which is described in this article.
The optical systems of an electro-optical instrument which measures the refractive error of human eyes are described. The analog printout enables one to read off the refractive correction as sphere, cylinder, and cylinder axis. The principle involved is a variation of the Foucault knife-edge test. Validation studies are summarized and various applications are briefly mentioned.
A computer-controlled binocular vision testing device has been developed as one part of a system designed for NASA to test the vision of astronauts during spaceflight. The device, called the Mark III Haploscope, utilizes semi-automated psychophysical test procedures to measure visual acuity, stereopsis, phorias, fixation disparity, and accommodation/convergence relationships. All tests are self-administered, yield quantitative data, and may be used repeatedly without subject memorization. Future applications of this programmable, compact device include its use as a clinical instrument to perform routine eye examinations or vision screening, and as a research tool to examine the effects of environment or work cycle upon visual function.
The so-called subjective and objective measurements of the visual system are discussed. Two objective and automated instruments are described which can measure very different dynamic vision functions under stress or drug conditions without relying on decision or subjective responses of the subject. Results are shown for extreme alcohol intoxication. The underlying principle of recording certain eye movements for specific stimulus conditions applies to measuring other vision functions as well.
Photokeratoscopy is not an invention of the twentieth century; it goes back to the nineteenth century, when Alvar Gullstrand made his first attempt in 1896. He used the Placido disc, a planar object, and photographed the concentric rings reflected from the corneal surface acting as a convex mirror. Gullstrand was unable to photograph the total corneal surface in a single picture; he had to take two or three pictures to cover the corneal surface. His planar object introduced several aberrations, and therefore the quality of the image was not adequate. Moreover, Gullstrand hypothesized the cornea as having a spherical surface. This simplification is seen in his dioptric diagram describing the cornea.
The NON-CONTACT™ Tonometer is an applanation tonometer which, unlike all other tonometers, makes no mechanical contact with the eye in measuring intra-ocular pressure. Applanation, the flattening of an area of the eye, is achieved by a pulse of air whose force increases linearly with time. Applanation is detected opto-electronically, and the interval of time required to produce applanation is utilized as the correlate of intra-ocular pressure. Measurement is accomplished in milliseconds without the use of topical anesthesia.
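The timing principle described in this abstract can be sketched numerically. Because the air-pulse force rises linearly with time, the force at the instant applanation is detected is proportional to the elapsed time, so a linear calibration maps time to pressure. The ramp rate and calibration slope below are invented for illustration and are not the instrument's actual constants.

```python
# Hypothetical constants, for illustration only -- not the instrument's
# actual calibration values.
RAMP_RATE = 2.0   # assumed force ramp, force units per millisecond
CAL_SLOPE = 1.5   # assumed calibration, mmHg per force unit

def iop_from_applanation_time(t_ms: float) -> float:
    """Convert time-to-applanation (ms) to intra-ocular pressure (mmHg).

    The air-pulse force is F(t) = RAMP_RATE * t, so the force at the
    opto-electronically detected applanation instant is linear in t, and
    the pressure estimate follows from a linear calibration of that force.
    """
    force_at_applanation = RAMP_RATE * t_ms
    return CAL_SLOPE * force_at_applanation

print(iop_from_applanation_time(5.0))  # 15.0 mmHg with these made-up constants
```

The measurement therefore reduces to a single timing interval, which is why it completes in milliseconds.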
A slit lamp fluorometer is described which can be used to measure the tear volume and tear turnover rate. The instrument is made principally from commercially available components and is easy to operate. Preliminary results indicate that the fluorometer measurements are accurate, reliable, and can be applied to a clinical environment.
Color vision testing can be accomplished by several means, each of which has its virtues and limitations. The most common test of color perception is the pseudoisochromatic plate. This is a card composed of a field of dots of varying hue and saturation within which persons with normal color vision can see some figure, but persons with a color vision defect cannot. Alternatively, some pseudoisochromatic plates are constructed so that color normals see one figure and color defectives another. These tests rely upon either the reduced hue discrimination or the reduced saturation discrimination of color defectives as a means of identification of these color defectives. Unfortunately, the illuminant under which the test is conducted is critical, and the diagnostic value of this type of test is limited.
Moire fringes and the Talbot self-imaging effect can be combined to produce shearing-interferometric-like maps of lens refractive power with high sensitivity and accuracy. Here we explore the properties of a simplified interferometer, consisting of a white-light source and two dissimilar gratings, designed to be rugged enough for field testing of sunglass lenses.
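As standard background (textbook results, not findings of this paper): a grating of period $d$ under collimated illumination of wavelength $\lambda$ self-images at multiples of the Talbot distance,

```latex
z_T = \frac{2d^2}{\lambda},
```

which is why the second, dissimilar grating can be placed a finite distance behind the first and still produce high-contrast moiré fringes; a lens inserted in the beam deflects rays in proportion to its local refractive power, and those deflections appear as shifts and tilts of the moiré pattern, giving the shearing-interferometric-like power map.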
Until recently, corneal contact lenses have been made of poly methyl methacrylate. The roughing of these lenses is done on precision lathes and the polishing is done using conventional optical workshop techniques. A technique of spincasting is described which is being used to fabricate corneal contact lenses using a monomer mixture of poly hydroxyethyl methacrylate. Theory and practice are briefly described.
The five papers included in this special issue entitled Medical Applications of Optics were selected as representative of the many excellent presentations at the topical symposium on "Application of Optical Instrumentation in Medicine IV" in September 1975 in Atlanta.
A protocol for acceptance testing and performance monitoring of radiographic phototimers is described, together with results obtained during application of the protocol to evaluation of the response characteristics of phototimers on two new x-ray generators.
Attempts to improve the resolution capability of an x-ray imaging system must take into account the presence of the patient, particularly the effect of subject motion. From this it follows that the loading capability of the x-ray machine should be an integral part of the analysis and optimization of system resolution. We have tried to quantify both resolution and load capacity and to merge them in a "figure of merit," which seems helpful in judging focal spot performance. A true falling load characteristic for the focal spot distribution as well as for the exposure load characteristic appears to give optimum performance in terms of system resolution. With computer help, we have calculated the pertinent parameters, such as intensity, modulation transfer, load, and temperature, as functions of spatial and temporal coordinates.
A physical model of xeroradiography is described which predicts a visible and measurable effect of the x-ray radiation noise. Agreement between the theoretical and experimental magnitudes of the developed radiation noise and its dependence on the exposure energy is excellent. The results imply that xeroradiography is, in principle, capable of the ultimate in sensitivity, provided the photoreceptor absorbs all the incident x-rays.
An investigation was made to evaluate currently used subtraction radiography techniques. The first order subtraction technique was found to be adequate for conventional clinical use. However when using fine-grain high contrast films for the originals, the second order technique gave a more accurate subtraction image. The limiting factor in the sensitivity of the radiographic subtraction technique for detecting small contrast changes over an area of about 2 cm in diameter was found to be film noise and not quantum statistics. Extra-fine-grain films were compared with conventionally used films. While these extra-fine-grain films gave slightly better results, they also required x-ray exposures which would be too high for routine clinical use.
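The first-order technique mentioned above can be illustrated with arrays, though the paper works with photographic film, not digital images: a reversal mask of the pre-contrast radiograph is superimposed on the post-contrast radiograph, cancelling the anatomy common to both and leaving the contrast-medium signal. The image sizes and density values below are invented.

```python
import numpy as np

# Simulated film densities (arbitrary units); values are illustrative only.
rng = np.random.default_rng(0)
anatomy = rng.uniform(0.5, 2.0, size=(64, 64))       # background structures
vessel = np.zeros((64, 64))
vessel[30:34, :] = 0.4                               # contrast-filled vessel

pre = anatomy            # density before contrast injection
post = anatomy + vessel  # density after contrast injection

# First-order subtraction: the reversal mask inverts the pre-contrast
# densities, so adding it to the post-contrast image cancels the anatomy.
mask = -pre
subtracted = post + mask

# Only the contrast-medium signal survives the subtraction.
assert np.allclose(subtracted, vessel)
```

In the film implementation the residual is limited by film noise, which is the sensitivity limit the investigation identifies.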
The use of a random object distribution for the characterization of x-ray focal spots for quality control purposes is described. This technique, utilizing a coherent optical processor, allows for the direct generation of the two-dimensional modulation transfer function (MTF). The distance between the first zeroes of the MTF, in the orientation of interest, is then measured to provide an estimate of the x-ray focal spot size. This technique is compared to the conventional pinhole image measurement technique and the failure-of-resolution (star measurement) technique and is found to produce similar variability and sensitivity to changes in the focal spot size.
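The first-zero principle behind this spot-size estimate can be demonstrated numerically (the paper itself generates the MTF with a coherent optical processor, not a computer). For an assumed uniform one-dimensional focal spot of width a, the MTF is |sinc(f·a)| with its first zero at f = 1/a, so the spot width is the reciprocal of the first-zero frequency. All dimensions below are invented.

```python
import numpy as np

# Hypothetical 1-D focal-spot profile: a uniform (rect) spot of width a = 1 mm,
# sampled on a 16 mm field. Values are illustrative only.
n = 4096
dx = 16.0 / n                       # sample spacing, mm
a = 1.0                             # true spot width, mm
spot = np.zeros(n)
spot[: int(round(a / dx))] = 1.0    # uniform spot (position does not matter)

# The MTF is the magnitude of the normalized Fourier transform of the spot.
mtf = np.abs(np.fft.rfft(spot))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(n, d=dx)    # spatial frequency, cycles/mm

# First zero of the MTF; for a uniform spot it sits at f = 1/a, so the
# spot width is estimated as the reciprocal of that frequency.
first_zero = freqs[np.argmax(mtf < 1e-6)]
width_estimate = 1.0 / first_zero
print(round(width_estimate, 3))     # → 1.0
```

A real focal spot is not uniform, which is one reason the paper measures the zero spacing in each orientation of interest rather than assuming a model shape.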
Electronic color sorting machines pose maintenance problems when used for hygroscopic agricultural particles because of the high bias voltages required by photomultipliers. Unbiased photodiodes, with appropriate optical filters, should improve such systems considerably if used in place of photomultipliers.
This paper summarizes the optical design of a large aperture telescope for near IR imaging onto a linear array of detectors. System performance is described in relative terms because specific data relating to absolute performance characteristics are classified.
The modes and probabilities of error in scanning UPC symbols are calculated as functions of noise and printing error. Probabilities of error in character decoding are compared for the different possible modes. A one-module error in decoding the T2 interval is seen to be the most likely failure mode. "Convolution distortion" is described for situations where the scanning beam diameter exceeds the width of bars or spaces within the symbol. Examples are given where the probability of a character error increases rapidly with beam diameter. Expressions for detectable and undetectable error probabilities per symbol scan are derived. For Version A symbols, the detectable error rate can be reduced by employing error correction, but at the expense of a higher undetectable error rate. Undetectable error probabilities for Version A symbols are seen to be at least an order of magnitude lower than for Version B and E symbols. The dominant mode for Version A is the transformation of one-half of the symbol into a Version E symbol. Undetectable error probabilities are shown to increase very rapidly with noise and printing error for all symbol versions.
We have shown that our specially designed nonachromatic objective allows us to produce sharp images of particle tracks in nuclear emulsions, for visual study and/or photographic recording, with a lateral resolution of about one micron, and a depth of field many times greater than can be achieved with conventional objectives.
When an LWIR sensor is calibrated, the question generally arises as to what the precision, accuracy, and repeatability of the calibration are. Answering this question requires a careful formulation and interpretation of the terms "accuracy" and "repeatability." A derivation is presented in this paper in which a logical formulation of these terms is obtained. Forms for both short-term and long-term uncertainties are given. In addition, the use of confidence intervals in connection with sensor calibration is discussed, along with an appropriate example. A compilation of uncertainty terms and their definitions is proposed for adoption by the LWIR community.
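The confidence-interval use mentioned in this abstract can be sketched with made-up numbers: repeated radiometric readings of a stable source, summarized as a two-sided 95% t-interval on the mean, standing in for a short-term uncertainty statement. The readings below are invented and are not the paper's data.

```python
import numpy as np

# Hypothetical repeated calibration readings of a stable source
# (arbitrary radiometric units); values are illustrative only.
readings = np.array([10.2, 9.9, 10.1, 10.4, 9.8, 10.0, 10.3, 9.7, 10.1, 10.0])
n = readings.size
mean = readings.mean()
s = readings.std(ddof=1)        # sample standard deviation

# Two-sided 95% t-interval on the mean: t(0.975, n-1 = 9 dof) from tables.
t_crit = 2.262
half_width = t_crit * s / np.sqrt(n)

print(f"{mean:.2f} +/- {half_width:.2f}")
```

The distinction the paper draws between short-term and long-term uncertainty corresponds to which spread of readings enters a statement like this: within-session scatter versus session-to-session drift.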