The light field describes the radiance at a given point provided by a ray coming from a particular direction. Integrating the light field over all possible rays passing through that point gives the total irradiance. For a static scene, the light field is unique. Cameras act as integrators of the light field. Previously, it was demonstrated that freeware rendering software can be used to simulate the light field entering an arbitrary camera lens. This is accomplished by placing an array of ideal pinhole cameras at the entrance pupil location and rendering. The pinhole camera images encode the ray directions for rays passing through the pinholes. The set of images from this array then describes the light field. Images for real camera lenses with different types of aberrations are then simulated directly from the light field. The advantage of this technique is that the light field only needs to be calculated once for a given scene. Calculation of the light field is computationally expensive, limiting the practicality of high-resolution light field simulations on a desktop computer. However, cloud-based rendering services with arrays of CPUs and GPUs are now readily available and affordable. These services enable more realistic simulations and allow different scenes to be created rapidly. Here, the techniques are demonstrated for different real lens aberration forms.
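The integration step can be sketched in a few lines of numpy. Everything below is an illustrative assumption rather than the simulation described above: the light field is a small `(U, V, S, T)` grid of pinhole images, the scene is a single point source, and the aberration is pure defocus modeled as a pixel shift that grows linearly with pupil position.

```python
import numpy as np

def render_from_light_field(L, defocus_px=0.0):
    """Simulate a camera image from a sampled light field.

    L has shape (U, V, S, T): a U x V grid of pinhole images, each
    S x T pixels, captured at the entrance pupil.  defocus_px shifts
    each pinhole image linearly with its pupil position, mimicking a
    defocused lens; other aberrations would use a different shift map.
    """
    U, V, S, T = L.shape
    image = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # pupil coordinates normalized to [-1, 1]
            pu = 2.0 * u / (U - 1) - 1.0
            pv = 2.0 * v / (V - 1) - 1.0
            # integer pixel shift proportional to pupil position (defocus)
            du = int(round(defocus_px * pu))
            dv = int(round(defocus_px * pv))
            image += np.roll(L[u, v], (du, dv), axis=(0, 1))
    return image / (U * V)

# toy light field: every pinhole sees the same on-axis point source
L = np.zeros((5, 5, 32, 32))
L[:, :, 16, 16] = 1.0
sharp = render_from_light_field(L, defocus_px=0.0)     # point stays sharp
blurred = render_from_light_field(L, defocus_px=4.0)   # energy spreads out
```

With zero defocus, every pupil sample lands on the same pixel and the point is reproduced; with defocus, the same light field yields a blur spot without re-rendering the scene, which is the advantage claimed above.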
The light field describes the radiance at a given point from a ray coming from a particular direction. Integrating over all rays passing through the point gives the total irradiance. For a static scene, the light field is unique. Cameras integrate the light field. Each camera pixel integrates the light field over the entrance pupil, with the ray directions determined by the lens aberrations. Images of the scene for any lens can then be simulated if the light field is known at its entrance pupil. Freeware rendering software is used to create a scene’s light field, and images for real camera lenses with different aberrations are simulated.
Wavefront coding refers to the use of a phase modulating element in conjunction with deconvolution to extend the depth of focus of an imaging system. The coding element is an asymmetric phase plate, in most applications taking the form of a trefoil or a cubic polynomial. Phase plates with a trefoil shape generate not only the desired amount of trefoil aberration but also spherical aberration. It has recently been shown that a wavefront coding based optical system shows high tolerance to spherical aberration for monochromatic images; however, the depth of focus is considerably shortened for color images. In this work, we show how to modify the shape of a phase plate in order to optimize its performance for color imaging. The design parameters of the phase plate are obtained by minimizing a merit function with genetic algorithms developed for this purpose. The evaluation of the optical characteristics of the phase plates, used as feedback for the optimization algorithm, is performed in Zemax. Results are illustrated by numerical simulations of color images.
Realistic image simulation is useful for understanding artifacts introduced by lens aberrations. Simple simulations which convolve the system point spread function (PSF) with a scene are typically fast because Fast Fourier Transform (FFT) techniques are used to calculate the convolution in the Fourier domain. This technique, however, inherently assumes that the PSF is shift invariant. In general, optical systems have a shift variant PSF and the speed advantage of the FFT is lost. To simulate shift variant cases, the scene is often broken down into a set of sub-regions over which the PSF is approximately isoplanatic. The FFT methods can then be employed over each sub-region, and the sub-regions are recombined to create the image simulation. There is an obvious tradeoff between the number of sub-regions and the fidelity of the final image. This fidelity depends on how quickly the PSF changes between adjacent sub-regions. Here, a different strategy is employed where PSFs at different points in the field are sampled and decomposed into a novel set of basis functions. The PSF at locations between the sampled points is estimated by interpolating the expansion coefficients of the decomposition. In this manner, the image simulation is built up by combining the interpolated PSFs across the scene. The technique is verified by generating a grid of PSFs in raytracing software and determining the interpolated PSFs at various points in the field. These interpolated PSFs are compared to the PSFs calculated directly for the same field points.
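The interpolation scheme can be illustrated with a minimal sketch. The basis here is derived from an SVD of the sampled PSFs, a stand-in for the novel basis functions mentioned above (which are not reproduced here), and the field-dependent Gaussian PSF is a toy substitute for raytraced data:

```python
import numpy as np

def gauss_psf(sigma, H=33):
    """Toy PSF: normalized Gaussian spot (stand-in for a raytraced PSF)."""
    ax = np.arange(H) - H // 2
    yy, xx = np.meshgrid(ax, ax, indexing="ij")
    p = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return p / p.sum()

# PSFs sampled at three normalized field positions; width grows with field
field_pts = np.array([0.0, 0.5, 1.0])
samples = np.stack([gauss_psf(1.0 + 2.0 * f) for f in field_pts])

# Basis from an SVD of the sample stack; coefficients per field point
A = samples.reshape(len(field_pts), -1)
_, _, Vt = np.linalg.svd(A, full_matrices=False)
basis = Vt                        # orthonormal basis rows, (3, H*W)
coeffs = A @ basis.T              # expansion coefficients, (3, 3)

def interp_psf(x):
    """Interpolate the expansion coefficients to field position x and
    rebuild the PSF there, instead of interpolating pixels directly."""
    c = np.array([np.interp(x, field_pts, coeffs[:, k])
                  for k in range(coeffs.shape[1])])
    return (c @ basis).reshape(samples.shape[1:])

psf_mid = interp_psf(0.25)        # PSF between the 0.0 and 0.5 samples
```

At the sampled field points the reconstruction is exact, and between them the coefficients vary smoothly, which is the property the verification step above checks against directly calculated PSFs.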
Quadratic pupils representing Gaussian apodization and defocus are expanded into Zernike polynomials. Combinations of the pupil expansion coefficients are used, in turn, to expand the Optical Transfer Function into a novel set of basis functions.
Phoropters are the most common instrument used to detect refractive errors. During a refractive exam, lenses are flipped in front of the patient, who looks at the eye chart and tries to read the symbols. The procedure is fully dependent on the cooperation of the patient to read the eye chart, provides only a subjective measurement of visual acuity, and can at best provide a rough estimate of the patient’s vision. Phoropters require a skilled examiner, which makes them difficult to use for mass screenings, and populations such as young children and the elderly are hard to screen. We have developed a simplified, lightweight automatic phoropter that can measure the optical error of the eye objectively without requiring the patient’s input. The automatic holographic adaptive phoropter is based on a Shack-Hartmann wavefront sensor and three computer-controlled fluidic lenses. The fluidic lens system is designed to provide power and astigmatic corrections over a large range in less than 20 seconds, without the need for verbal feedback from the patient.
A technique for decomposing the Optical Transfer Function (OTF) into a novel set of basis functions has been developed. The decomposition provides insight into the performance of optical systems containing both wavefront error and apodization, as well as the interactions between the various components of the pupil function. Previously, this technique has been applied to systems with circular pupils with both uniform illumination and Gaussian apodization. Here, systems with annular pupils are explored. In the case of an annular pupil with simple defocus, analytic expressions for the OTF decomposition coefficients can be calculated. The annular case is not only applicable to optical systems with central obscurations, but the technique can be extended to systems with multiple ring structures. The ring structures can have constant area, as is often found in zone plates and diffractive lenses, or the rings can have arbitrary areas. Analytic expressions for the OTF decomposition coefficients can again be determined for ring structures with constant and quadratic phase variations. The OTF decomposition provides a general tool to analyze and compare a diverse set of optical systems.
The Zernike polynomials provide a generalized framework for analyzing the aberrations of non-rotationally symmetric optical systems with circular pupils. Even when systems are designed to be rotationally symmetric, fabrication and alignment errors will lead to non-rotationally symmetric aberrations. The properties of the Zernike polynomials are reviewed. Different indexing, normalization and ordering schemes are found in the literature and commercial software. Here, the schemes are compared to demonstrate some of the potential pitfalls of comparing Zernike polynomial results from different sources.
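One such pitfall can be shown concretely: the same mode (n, m) lands at different single indices in the OSA/ANSI and Noll schemes. A small sketch of the two standard conversions (the mode names in the loop are the conventional ones):

```python
def osa_index(n, m):
    """OSA/ANSI standard single index (starts at 0 for piston)."""
    return (n * (n + 2) + m) // 2

def noll_index(n, m):
    """Noll single index (starts at 1 for piston).  Noll's convention:
    within an order n, even j pairs with m >= 0 and odd j with m < 0."""
    j = n * (n + 1) // 2 + abs(m)
    if (m > 0 and n % 4 in (0, 1)) or (m < 0 and n % 4 in (2, 3)):
        return j
    return j + 1

# The same mode lands at different single indices in each scheme:
for n, m, name in [(0, 0, "piston"), (2, 0, "defocus"),
                   (2, -2, "oblique astigmatism"), (4, 0, "spherical")]:
    print(f"{name:20s} (n={n}, m={m:+d}): "
          f"OSA j={osa_index(n, m)}, Noll j={noll_index(n, m)}")
```

Defocus happens to be index 4 in both schemes, while primary spherical aberration is OSA 12 but Noll 11; comparing coefficient lists from two software packages without checking the scheme silently misassigns modes.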
The Point Spread Function (PSF) indirectly encodes the wavefront aberrations of an optical system and therefore is a metric of the system performance. Analysis of the PSF properties is useful in the case of diffractive optics where the wavefront emerging from the exit pupil is not necessarily continuous and consequently not well represented by traditional wavefront error descriptors such as Zernike polynomials. The discontinuities in the wavefront from diffractive optics occur in cases where step heights in the element are not multiples of the illumination wavelength. Examples include binary or N-step structures, multifocal elements where two or more foci are intentionally created or cases where other wavelengths besides the design wavelength are used. Here, a technique for expanding the electric field amplitude of the PSF into a series of orthogonal functions is explored. The expansion coefficients provide insight into the diffraction efficiency and aberration content of diffractive optical elements. Furthermore, this technique is more broadly applicable to elements with a finite number of diffractive zones, as well as decentered patterns.
Simulated images provide insight into optical system performance. If the Point Spread Function (PSF) is assumed spatially invariant, then the simulated image is the convolution of the PSF with the object scene. If the PSF varies across the field, multiple field points are sampled and interpolated to give the PSF. Simulated images are then assembled from the individual PSFs. These techniques assume a 2D planar object. We extend the image simulations to 3D scenes. The PSF now depends on the field point location and distance in object space. We investigate 3D scene simulation and examine approximations that reduce computation time.
The Optical Transfer Function (OTF) and its modulus, the Modulation Transfer Function (MTF), are metrics of optical system performance. However, in system optimization, calculation times for the OTF are often substantially longer than more traditional optimization targets such as wavefront error or transverse ray error. The OTF is typically calculated as either the autocorrelation of the complex pupil function or as the Fourier transform of the Point Spread Function. We recently demonstrated that the on-axis OTF can be represented as a linear combination of analytical functions where the weighting terms are directly related to the wavefront error coefficients and apodization of the complex pupil function. Here, we extend this technique to the off-axis case. The expansion technique offers a potential for accelerating OTF optimization in lens design, as well as insight into the interaction of aberrations with components of the OTF.
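The equivalence of the two standard calculation routes can be checked numerically. The pupil size, defocus value, and grid resolution below are illustrative; the direct autocorrelation loop is kept small since it scales poorly, which is exactly why OTF targets are slow in optimization:

```python
import numpy as np

N = 24                                   # pupil grid (small, for the loop)
M = 2 * N                                # padded size, >= 2N-1
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2

# circular pupil with half a wave of defocus (W = W020 * rho^2)
W020 = 0.5                               # waves, illustrative
P = np.where(rho2 <= 1.0, np.exp(2j * np.pi * W020 * rho2), 0.0)

# Route 1: via the PSF.  With zero padding, the PSF spectrum is
# |FT(pupil)|^2 and its inverse transform is the linear autocorrelation.
psf_spectrum = np.abs(np.fft.fft2(P, (M, M)))**2
otf_fft = np.fft.ifft2(psf_spectrum)

# Route 2: direct autocorrelation of the pupil function.
otf_direct = np.zeros((M, M), dtype=complex)
Pc = np.conj(P)
for dy in range(-N + 1, N):
    for dx in range(-N + 1, N):
        a = P[max(dy, 0):N + min(dy, 0), max(dx, 0):N + min(dx, 0)]
        b = Pc[max(-dy, 0):N + min(-dy, 0), max(-dx, 0):N + min(-dx, 0)]
        otf_direct[dy % M, dx % M] = (a * b).sum()

assert np.allclose(otf_fft, otf_direct, atol=1e-8)

mtf = np.abs(otf_fft) / np.abs(otf_fft[0, 0])   # MTF is the OTF modulus
```

Both routes require the full pupil or PSF grid at every evaluation; an expansion into analytical functions weighted by the wavefront coefficients, as described above, avoids recomputing these transforms inside the optimizer.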
Early diagnosis of glaucoma, a leading cause of visual impairment, is critical for successful treatment. It has been shown that imaging polarimetry has advantages in early detection of structural changes in the retina. Here, we theoretically and experimentally present a snapshot Mueller matrix polarimeter fundus camera, which has the potential to record the polarization-altering characteristics of the retina with a single snapshot. It is constructed by incorporating polarization gratings into a fundus camera design. Complete Mueller matrix data sets can be obtained by analyzing the polarization fringes projected onto the image plane. In this paper, we describe the experimental implementation of the snapshot retinal imaging Mueller matrix polarimeter (SRIMMP), highlight issues related to calibration, and provide preliminary images acquired from the camera.
This book connects the dots between geometrical optics, interference and diffraction, and aberrations to illustrate the development of an optical system. It focuses on initial layout, design and aberration analysis, fabrication, and, finally, testing and verification of the individual components and the system performance. It also covers more specialized topics such as fitting Zernike polynomials, representing aspheric surfaces with the Forbes Q polynomials, and testing with the Shack–Hartmann wavefront sensor. These techniques are developed to the point where readers can pursue their own analyses or modify them for their particular situations.
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to determine the color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
The Shack-Hartmann wavefront sensor is a technology that was developed at the Optical Sciences Center at the University of Arizona in the late 1960s. It is a robust technique for measuring wavefront error that was originally developed for large telescopes to measure errors induced by atmospheric turbulence. The Shack-Hartmann sensor has evolved to become a relatively common non-interferometric metrology tool in a variety of fields. Its broadest impact has been in the area of ophthalmic optics, where it is used to measure ocular aberrations. The data the Shack-Hartmann sensor provides enables custom LASIK treatments, often enhancing visual acuity beyond normal levels. In addition, Shack-Hartmann data coupled with adaptive optics systems enables unprecedented views of the retina. This paper traces the evolution of the technology from the early use of screen-type tests, to the incorporation of lenslet arrays and finally to one of its modern applications, measuring the human eye.
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A
conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera
image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an
image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding
to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor
pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the
2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the
transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image
plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray
error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light
field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet
array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
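The connection between transverse ray error and the 4D light field can be sketched numerically. The geometry constants, the pure-defocus wavefront, and the paraxial scaling below are illustrative assumptions, not the general equations referenced above:

```python
import numpy as np

# Paraxial relation between the wavefront gradient over the pupil and a
# ray's landing point in the image plane:
#   eps = -(R / r_p) * grad_rho W(rho_x, rho_y)
# R (exit pupil to image distance) and r_p (pupil radius) are assumed
# values; W here is pure defocus, W = W020 * rho^2.
R, r_p = 100.0, 5.0          # mm, illustrative geometry
W020 = 0.01                  # mm, defocus coefficient

def transverse_error(rho_x, rho_y):
    """Transverse ray error (eps_x, eps_y) in mm for normalized pupil
    coordinates; the defocus gradient is analytic and linear in rho."""
    g = -(R / r_p) * 2.0 * W020
    return g * rho_x, g * rho_y

# Sampling (pupil position, landing point) over the pupil gives the 4D
# parameterization: two pupil coordinates plus two image-plane
# coordinates per ray, mirroring the lenslet/pixel split in the camera.
rho = np.linspace(-1.0, 1.0, 5)
light_field = [(rx, ry) + transverse_error(rx, ry)
               for rx in rho for ry in rho if rx**2 + ry**2 <= 1.0]
```

For defocus the landing point varies linearly with pupil position, so the slope of this 4D mapping recovers the defocus term, which is the sense in which a well corrected system's light field relates simply to object depth.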
Capturing light field data with a plenoptic camera has been discussed extensively in the literature. However, recent
improvements in digital imaging have made demonstration and commercialization of plenoptic cameras feasible. The
raw images obtained with plenoptic cameras consist of an array of small circular images, each of which captures local
spatial and trajectory information regarding the light rays incident on that point. Here, we seek to develop techniques for
representing such images with a natural set of basis functions. In doing so, reconstruction of slices through the light
field data, as well as image compression can be easily achieved.
In designing optical systems where the eye serves as the final detector, assumptions are typically made regarding the
optical quality of the eye system. Often, the aberrations of the eye are ignored or minimal adjustments are built into the
system under design to handle variations in defocus found within the human population. In general, the eye contains
aberrations that vary randomly from person to person. In this investigation, a general technique for creating a random set
of aberrations consistent with the statistics of the human eye is developed. These aberrations in turn can be applied to a
schematic eye model and their effect on the combined visual instrument/eye system can be determined. Repeated
application of different aberration patterns allows for tolerance analysis of performance metrics such as the modulation
transfer function (MTF).
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional
images inexpensively and without major modifications to current cameras is uncommon. Our goal is to create
a modification to a common commercial camera that allows a three-dimensional reconstruction. We desire such an imaging
system to be inexpensive and easy to use. Furthermore, we require that any three-dimensional modification to a camera
does not reduce its resolution.
Here we present a possible solution to this problem. A commercial digital camera is used with a projector system
with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different
focus depths for horizontal and vertical features of the projected pattern, thereby encoding depth. This projector could be
integrated into the flash unit of the camera. By carefully choosing a pattern we are able to exploit this differential focus
in image processing. Wavelet transforms that pick out the projected pattern are performed on the image. By taking
ratios of certain wavelet coefficients, we are able to correlate the contrast ratios with the distance of an object at a
particular transverse position from the camera.
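The orientation-selective measurement behind those ratios can be sketched with a one-level Haar wavelet, whose high-pass filter is a pairwise difference. The crossed-grating scene, the two-tap blur standing in for astigmatic defocus, and the specific stripe spacing are all illustrative assumptions:

```python
import numpy as np

def haar_detail_energies(img):
    """One-level Haar wavelet details: energy of the high-pass response
    across columns (vertical features) and across rows (horizontal
    features).  Pairwise differences are the (unnormalized) Haar
    high-pass filter."""
    d_cols = img[:, 0::2] - img[:, 1::2]   # responds to vertical features
    d_rows = img[0::2, :] - img[1::2, :]   # responds to horizontal features
    return (d_cols**2).sum(), (d_rows**2).sum()

# Toy scene: crossed grating, standing in for a projected pattern with
# both horizontal and vertical structure.
img = np.zeros((64, 64))
img[:, ::4] += 1.0      # vertical stripes
img[::4, :] += 1.0      # horizontal stripes

# Astigmatic defocus blurs one orientation more than the other; here a
# two-tap box blur along the vertical axis stands in for it.
blurred = 0.5 * (img + np.roll(img, 1, axis=0))

ev0, eh0 = haar_detail_energies(img)      # balanced for the sharp pattern
ev1, eh1 = haar_detail_energies(blurred)  # horizontal detail collapses
```

The sharp pattern has equal energy in the two orientations, while the orientation-selective blur suppresses one of them; the ratio of the two detail energies therefore tracks the differential focus, which is what encodes depth in the scheme above.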
We present details of the construction and calibration of this system, along with images it produces. The interplay
between the projected pattern design and the image processing algorithms is also discussed.
Non-mechanical variable lenses are important for creating compact imaging devices. Various
methods employing dielectrically actuated lenses, membrane lenses, and/or liquid crystal lenses
were previously proposed<sup>1-4</sup>. Here we present tunable-focus flat liquid crystal diffractive lenses
(LCDL) employing binary Fresnel zone electrodes fabricated on Indium-Tin-Oxide using
conventional micro-photolithography. The phase levels can be adjusted by varying the effective
refractive index of a nematic liquid crystal sandwiched between the electrodes and a reference
substrate. Using a proper voltage distribution across various electrodes the focal length can be
changed. Electrodes are shunted such that the correct phase retardation step sequence is
achieved. If the number of 2π zone boundaries is increased by a factor of <i>m</i>, the focal length is changed from
<i>f</i> to <i>f/m</i> based on the digitized Fresnel zone equation <i>f = r<sub>m</sub><sup>2</sup>/(2mλ)</i>, where
<i>r<sub>m</sub></i> is the <i>m</i><sup>th</sup> zone radius and <i>λ</i> is the wavelength.
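The focal length scaling follows directly from the zone equation; the radius and wavelength below are illustrative values, not parameters of the device described here:

```python
# Digitized Fresnel zone relation: f = r_m**2 / (2 * m * lam).
lam = 550e-9                 # design wavelength in m (assumed, green)

def focal_length(r_m, m, lam):
    """Focal length when radius r_m is driven as the m-th 2*pi zone
    boundary of the liquid crystal diffractive lens."""
    return r_m**2 / (2.0 * m * lam)

r = 1.0e-3                   # a fixed electrode radius in m (assumed)
f1 = focal_length(r, 1, lam)     # r driven as the 1st zone boundary
f3 = focal_length(r, 3, lam)     # same radius driven as the 3rd boundary

# Tripling the number of 2*pi boundaries inside the same aperture
# scales the focal length from f to f/3:
assert abs(f3 - f1 / 3.0) < 1e-9
```

Because the electrode radii are fixed by lithography, refocusing is purely electrical: shunting the electrodes so that each physical boundary represents a larger phase count multiplies the zone index m and divides the focal length accordingly.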
The lenses operate at very low voltage levels (±2.5V ac input), exhibit fast switching
times (20-150 ms), can have large apertures (>10 mm) and a small form factor, and are robust and
insensitive to vibrations, gravity, and capillary effects that limit membrane and dielectrically
actuated lenses. Several tests were performed on the LCDL including diffraction efficiency
measurement, switching dynamics, and hybrid imaging with a refractive lens. Negative focal
lengths are achieved by adjusting the voltages across electrodes. Using these lenses in
combination, magnification can be changed and zoom lenses can be formed. The promising
results make LCDL a good candidate for non-mechanical auto-focus and zoom lenses.
We demonstrate that, by using a circular electrode array pattern and applying multi-level phase modulation in each zone, a high-efficiency switchable electro-optic diffractive lens using liquid crystal as the active medium can be produced for switchable eyewear. The lens is flat and the thickness of the liquid crystal layer is 5 μm. Two different designs are presented. In one design, all the patterned electrodes are distributed in one layer with a 1-μm gap between the electrodes. In the other design, the odd- and even-numbered electrodes are separately patterned in two layers without any lateral gaps between the electrodes. In both cases, vias are made for interconnection between the electrodes and the conductive wires. With the one-layer electrode design, both 1-diopter and 2-diopter 8-level lenses are demonstrated with an aperture of 10 mm. With the two-layer electrode design, a 2-diopter, 15-mm, 4-level lens is demonstrated. The diffraction efficiency of the 8-level lens can be higher than 90%. The ON and OFF states of the electrically controlled lens allow near and distance vision, respectively, for presbyopic eyes. The focusing power of the lens can be adjusted to be either positive or negative, and the 8-level lens can be adjusted for near, intermediate, and distance vision. The lens is compact and easy to operate, with a fast response time, low voltages, and low power dissipation. This is the first demonstration of switchable lenses that nearly meet the requirements for a spectacle lens.
Liquid crystal spatial light modulators, lenses, and bandpass filters are becoming increasingly capable as material and electronics development continues to improve device performance and reduce fabrication costs. These devices are being utilized in a number of imaging applications in order to improve the performance and flexibility of the system while simultaneously reducing the size and weight compared to a conventional lens. We will present recent progress at Sandia National Laboratories in developing foveated imaging, active optical (aka nonmechanical) zoom, and enhanced multi-spectral imaging systems using liquid crystal devices.
An optical testbed has been developed for the comparative analysis of wavefront sensors based on a modified Mach-Zehnder interferometer design. This system provides simultaneous measurements from the wavefront sensors on the same camera by using a common aberrator. The initial application of this testbed was to evaluate Shack-Hartmann and phase diversity wavefront sensors referenced to a Mach-Zehnder interferometer. In the current configuration of the testbed, aberrations are controlled using a liquid crystal spatial light modulator and corrected using a deformable mirror. This testbed has the added benefit of being able to train the deformable mirror against the spatial light modulator and evaluate its ability to compensate for the spatial light modulator. In this paper, we present results from the wavefront sensors in the optical testbed.
Visual optics requires an understanding of both biology and optical engineering. This Field Guide assembles the anatomy, physiology, and functioning of the eye, as well as the engineering and design of a wide assortment of tools for measuring, photographing, and characterizing properties of the surfaces and structures of the eye. Also covered are the diagnostic techniques, lenses, and surgical techniques used to correct and improve human vision.
Purpose: To measure ocular aberrations before and at several time periods after LASIK surgery to determine the change to the aberration structure of the eye. Methods: A Shack-Hartmann wavefront sensor was used to measure 88 LASIK patients pre-operatively and at 1 week and 12 months following surgery. Reconstructed wavefront errors are compared to look at induced differences. Manifest refraction was measured at 1 week, 1 month, 3 months, 6 months and 12 months following surgery. Sphere, cylinder, spherical aberration, and pupil diameter are analyzed. Results: A dramatic elevation in spherical aberration is seen following surgery. This elevation appears almost immediately and remains for the duration of the study. A temporary increase in pupil size is seen following surgery. Conclusions: LASIK surgery dramatically reduces defocus and astigmatism in the eye, but simultaneously increases spherical aberration levels. This increase occurs at the time of surgery and is not an effect of the healing response.