1. Introduction

Uncorrected refractive error is the leading cause of vision impairment worldwide.1–3 There is a need for inexpensive solutions, particularly in the developing world, which lacks sufficient resources and specialists. Furthermore, the prevalence of myopia has been rapidly increasing in recent years in Asia,4 putting a strain on eye-care systems. Wavefront sensors provide one potential avenue for automation to help address this problem. Unlike the phoropter,5 a complex apparatus that requires operation by a trained expert to determine the best subjective correction, a wavefront sensor-based device can directly measure the optical performance of the patient's eyes. Conventional autorefractors6 used older technologies and were generally unsuccessful at automating this evaluation. The advantage of the wavefront sensor is that it also provides measurements of higher-order aberrations (where low order refers to defocus and astigmatic error), which have been shown to be critical for determining an accurate correction.7,8 Given these measurements, a more accurate correction may then be performed with conventional spectacle lenses by optimizing a sharpness metric that estimates the net effect on the retinal point spread function.9 Unfortunately, wavefront sensors still tend to be very sophisticated systems, primarily designed for expensive applications such as refractive surgery10 when used in ophthalmic practice; in such an application, the cost of errors is very high, and an expert operator must clearly be presumed to be available. For the application of determining the needed refractive correction, however, we can potentially trade off much of this complexity and cost to address a much wider population. When the goal is correction of refractive errors, an estimate of the full wavefront error is not needed.
It is primarily a subset of third- and fourth-order aberrations, particularly the more central terms in the Zernike expansion such as coma and spherical aberration, that dominates the effects of aberrations on visual acuity.7,8 Furthermore, these are the largest high-order aberrations seen in the population.11 A greater reduction in complexity would, therefore, be achieved with a method that focuses on robustly estimating just these important terms. The most common ophthalmic wavefront sensor utilizes a Shack–Hartmann lenslet array.10,12 This is a relatively expensive device due to fabrication costs,13 particularly for high-density arrays, though prices might be reduced significantly given the development of high-volume fabrication techniques targeting the smartphone market.14 Indeed, higher-density arrays are desirable to improve accuracy;15 lower-density arrays suffer more from the curvature of the wavefront across each lenslet, as well as from edge effects and related distortions caused by localized artifacts. A potentially cheaper high-density option15 is a grating-based sensor, such as one that utilizes the Talbot effect.16,17 Such sensors can easily use grating patterns with a very small pitch (equivalent to a high-density array), which can be produced by simply etching the grating onto a glass slide attached to the detector. In addition to the sensor device, wavefront sensing requires a specialized algorithm to estimate the wavefront from the detected intensity pattern. Figure 1 shows an overview of common methods utilized for Shack–Hartmann and Talbot devices. With higher-density lenslet arrays (and with Talbot gratings), it becomes increasingly difficult to determine the displacements of individual elements as required by the classical methods of Fig. 1(a), due to diffraction and reduced signal-to-noise ratio,20 which suggests Fourier methods may be more attractive. As depicted in Fig.
1(b), Fourier methods typically must unwrap the phase after demodulation of the grating frequency.21 Unfortunately, phase unwrapping remains an open problem,22 as it is ill-posed and NP-hard;23 even in a noise-free scenario, one cannot guarantee an optimal result without testing a combinatorial number of possible configurations of steps. In practical terms, phase unwrapping algorithms are particularly sensitive to localized image artifacts, as are the spot detection methods of Fig. 1(a), and common issues such as eyelashes or corneal scars can cause catastrophic failures in reconstruction.24 Figure 2 shows examples of eyelashes occluding the pupil, a common artifact that can result in significant difficulties. Research on phase unwrapping continues, utilizing sophisticated nonlinear optimization techniques such as total-variation denoising,25 unscented Kalman filters,26 and sparse modeling.27 Researchers have also proposed techniques to improve or extend other components of the Fourier methods of Fig. 1(b). Demodulation can be performed by fast filtering in the image domain.28 Gradient inversion is itself a nontrivial inverse problem, and a variety of techniques have been developed,29 including fast noniterative approaches.30 If the wavefront phase is small, it may also be extracted directly after demodulation in the spatial frequency domain, by taking the anti-Hermitian component after centering,31 which can then be used directly in Fourier-based inversion methods. However, this approach requires symmetry of the signal aperture, and the small-angle approximation leads to increasing errors for larger aberrations. As noted earlier, there are only a limited number of major aberration components of interest for refractive correction, so a more specialized approach may sidestep the major difficulties of these conventional reconstruction algorithms. This paper formulates a direct approach to estimate major aberrations, described generally in Fig.
1(c), which does not need phase unwrapping or iterative inversion. The approach uses a numerical technique, mathematically similar to a spatial heterodyne, to create a diversity of images with different aberrations. At each polynomial order, the algorithm estimates each aberration coefficient by finding the peak frequency component. This technique is employed in a successive fashion to compute and correct for successively lower aberrations, until only the low-order terms remain to be calculated much more accurately. Furthermore, the fact that this approach is not reliant on high-quality estimates of spot locations or pixel phase allows us to take advantage of finer-pitch gratings, which reduces the relative effect of edges and localized artifacts and also allows for a very compact and easily produced sensor. We will demonstrate the performance of the method with simulations depicting the robustness against severe artifacts, as well as over a range of realistic aberration magnitudes.

2. Method

An ophthalmic wavefront sensor typically utilizes a monochromatic laser source scattered from a point on the subject's retina. Viewing this as a point source, the scattered light passes through the eye's optics in reverse, and deviations of the resulting signal from a planar wavefront provide an estimate of the eye's optical aberrations. Most commonly, the returning signal is reimaged from the eye onto a wavefront sensor, such as in Fig. 3, which depicts a sensor utilizing the Talbot effect (a) and a Shack–Hartmann array (b). The resulting image at the camera may be described as the intensity of an interference pattern,

$I(x,y) = \sum_{n,m} a_{n,m} \exp\left\{ i \frac{2\pi}{d} \left[ n\left(x + g_x(x,y)\right) + m\left(y + g_y(x,y)\right) \right] \right\},$

where $a_{n,m}$ are the amplitudes for the different spectral components; $d$ is the pitch of the device array (e.g., the lenslet spacing); and $g_x = \alpha\,\partial W/\partial x$ and $g_y = \alpha\,\partial W/\partial y$ are proportional to the horizontal and vertical gradients, respectively, of the wavefront $W$ incident on the sensor, where $\alpha$ is a constant that depends on the type of sensor.
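As a concrete illustration, the pattern model above can be simulated numerically. The sketch below is an assumption-laden toy instance: a crossed-cosine grating (one harmonic per axis, unit amplitudes) standing in for the general harmonic sum, with a constant scaled gradient $g_x$. It verifies that the gradient appears as the phase of the first harmonic after demodulation:

```python
import numpy as np

# Toy instance of the interference-pattern model: crossed cosine fringes
# of pitch d, with the x-fringes displaced by a constant scaled wavefront
# gradient g_x. All values are illustrative.
N, d = 256, 16          # image size (pixels), grating pitch (pixels)
g_x = 0.8               # constant alpha * dW/dx, in pixels
x = np.arange(N)
X, Y = np.meshgrid(x, x, indexing="xy")
I = (1 + np.cos(2 * np.pi * (X + g_x) / d)) * (1 + np.cos(2 * np.pi * Y / d))

# Demodulation: the (1,0) harmonic sits at spatial frequency 1/d along x;
# its phase encodes the gradient-induced fringe shift.
F = np.fft.fft2(I)
phase = np.angle(F[0, N // d])
print(phase, 2 * np.pi * g_x / d)  # the two values agree
```

For a spatially varying gradient, the same harmonic instead carries the full phase map, which is the starting point for the demodulated subimages discussed next.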
Neglecting magnification for simplicity, for a Shack–Hartmann sensor21 we have $\alpha = f$, the focal length of the lenslets. For a Talbot sensor,17 $\alpha = z_T = 2d^2/\lambda$, the Talbot distance for the sensor, where $\lambda$ is the wavelength of the light (assuming the detector is at the first Talbot plane; otherwise, incorporate an appropriate integer multiple for the plane used). Figure 4 shows a simulated example depicting the result of a Talbot sensor with a sinusoidal grating pattern, as well as a Shack–Hartmann sensor with an equal pitch. The primary difference is the scalars for the harmonics. The two-dimensional Fourier transform of the detected image is

$\tilde{I}(f_x, f_y) = \sum_{n,m} a_{n,m}\, \mathcal{F}\!\left\{ e^{\, i \frac{2\pi}{d} \left( n\, g_x + m\, g_y \right)} \right\}\!\left(f_x - \tfrac{n}{d},\; f_y - \tfrac{m}{d}\right).$

The frequency content of the image is depicted in Fig. 5. The important information can be retained with just the first linear harmonic terms, where $(n,m) = (1,0)$ and $(0,1)$, denoted by the boxes in Fig. 5. For a plane wave, the peaks of the first terms will be located at $(f_0, 0)$ and $(0, f_0)$, where $f_0 = 1/d$. Assuming the information is sufficiently concentrated near these points, we can extract the subimages depicted in Fig. 6, where the origins correspond to the $(f_0, 0)$ and $(0, f_0)$ points, respectively, from the original spatial frequency image. By taking the inverse Fourier transform of these subimages, we get complex images modulated by the phase gradients,

$h_{10}(x,y) = a_{1,0}\, e^{\, i \frac{2\pi}{d} g_x(x,y)}, \qquad h_{01}(x,y) = a_{0,1}\, e^{\, i \frac{2\pi}{d} g_y(x,y)}.$

2.1. Low-Order Aberrations

In the absence of high-order aberrations, we can immediately calculate the defocus and astigmatism based on the peak locations, via a relatively simple and well-known Fourier transform technique. A purely quadratic surface can be written using Zernike basis functions as

$W(x,y) = c_3 Z_2^{-2}(x,y) + c_4 Z_2^{0}(x,y) + c_5 Z_2^{2}(x,y).$

The derivatives of this wavefront form linear functions of $x$ and $y$, and so the Fourier transforms of Eq. (5) have the form of delta functions (presuming infinite continuous functions for clarity), with peak locations as labeled in Fig. 6. Solving for the Zernike coefficients based on the peak locations gives a direct estimate. For some applications, such as system calibration or determining the amount of compensation to use, this simple estimate may suffice.
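The peak-location idea can be demonstrated with a one-term example. In this sketch (symbols and scaling are illustrative assumptions, with the sensor constants folded into a factor of $1/N$), a pure defocus-like term produces demodulated subimages with linear phases, and the coefficient is recovered directly from the Fourier peak locations:

```python
import numpy as np

N, c = 256, 4.0   # grid size; defocus-like coefficient (illustrative units)
x = np.arange(N)
X, Y = np.meshgrid(x, x, indexing="xy")

# For W = c(x^2 + y^2), the demodulated subimages carry linear phases
# proportional to dW/dx = 2cx and dW/dy = 2cy (scale factor 1/N assumed).
h10 = np.exp(1j * 2 * np.pi * 2 * c * X / N)
h01 = np.exp(1j * 2 * np.pi * 2 * c * Y / N)

def peak(h):
    """Fourier peak location, in cycles across the grid."""
    F = np.abs(np.fft.fft2(h))
    ky, kx = np.unravel_index(np.argmax(F), F.shape)
    f = np.fft.fftfreq(N)
    return f[kx] * N, f[ky] * N

fx, _ = peak(h10)   # peak along x at 2c
_, fy = peak(h01)   # peak along y at 2c
c_est = (fx + fy) / 4
print(c_est)  # recovers c = 4.0
```

With astigmatism present, the same two peaks simply move off-axis, and the three quadratic coefficients can be solved from the two peak coordinates.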
In the presence of high-order aberrations, however, this approach will become increasingly inaccurate.

2.2. High-Order Aberrations

To address high-order aberrations, we will take advantage of the fact that ophthalmic wavefronts are dominated by a small number of Zernike terms, while retaining the robustness and simplicity of Fourier-transform-based methods such as in Sec. 2.1. We address high-order terms by numerically computing shifted products of the demodulated images and their conjugates, and thereby convert higher-order terms to lower orders by a process similar to differentiation. This will allow us to subsequently utilize a low-order estimation method like that of Sec. 2.1. While the approach appears quite involved mathematically, the derivation only requires manipulations of small matrices, and the implementation consists of phase adjustments and fast Fourier transforms (FFTs) of the relatively small subimages around the peaks. The first step is to process the subimages by forming shifted products with shifted conjugates, e.g.,

$p(x,y) = h(x+\Delta, y)\, h^{*}(x-\Delta, y),$

for both the (1,0) and (0,1) subimages. Recall that the demodulated subimages are complex, unlike the original detected image. We will label the phase of these subimages with $\phi$ for those derived from the (1,0) term and $\psi$ for those derived from the (0,1) term. For example, the phase of the product above for the (1,0) subimage is $\phi(x+\Delta,y) - \phi(x-\Delta,y)$. The parameter $\Delta$ is similar conceptually to the distance used in a finite-difference differentiation estimate (instead of first and second derivatives, we refer to the results as first and second products). The precise choice is not critical, except that a large $\Delta$ discards more information (as depicted in Fig. 7), while a smaller $\Delta$ will result in a smaller frequency shift for our subsequent calculation, making the result more susceptible to noise and quantization errors. Section 3 gives a simulation demonstrating that the output is relatively insensitive to $\Delta$ except for extreme values. The result of such combinations is most easily understood in terms of transformations of monomials.
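The order-reducing effect of a shifted product can be checked directly in one dimension. In this sketch (an assumed cubic phase with illustrative values, not the paper's 2-D data), the product of shifted and conjugate-shifted copies has exactly the finite-difference phase, one polynomial order lower:

```python
import numpy as np

N, D = 1024, 50                      # samples; shift in samples
x = np.linspace(-1, 1, N)
delta = D * (x[1] - x[0])            # shift Delta in x units
phi = 5.0 * x**3                     # a purely cubic phase (radians)
h = np.exp(1j * phi)

# Shifted product h(x+Delta) * conj(h(x-Delta)), evaluated at midpoints.
p1 = h[2 * D:] * np.conj(h[:-2 * D])
xm = x[D:-D]                         # midpoint coordinates

# Its phase is phi(x+Delta) - phi(x-Delta) = 5*(6*x^2*Delta + 2*Delta^3):
# a quadratic, i.e., one order lower than phi.
expected = 5.0 * (6 * xm**2 * delta + 2 * delta**3)
print(np.allclose(p1, np.exp(1j * expected)))  # True
```

Note that the comparison is done on the complex values themselves, so no phase unwrapping is ever required.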
Recall that a vector $z$ of Zernike coefficients describing all aberrations up to some order can be described equivalently by a vector $c$ of coefficients for monomial terms.32 The vector $c$ is related to $z$ by a simple linear transformation (i.e., $c = Mz$ for a fixed matrix $M$). A fourth-order wavefront can be described as the monomial expansion

$W(x,y) = \sum_{j,k} C_{j,k}\, x^{j} y^{k},$

for which the coefficients may be easily visualized in the matrix form $C$ (using base-zero indexing, with the row index giving the power of $x$ and the column index the power of $y$), with $c = \mathrm{vec}(C)$ using the vectorization operator, an operator that reforms matrix elements into a vector (purely for convenience of notation). In matrix form, separable linear operations may be performed using a matrix $A$, such as $AC$ to transform the $x$-coordinate and $CA^{T}$ to transform the $y$-coordinate. See Ref. 32 for a more thorough introduction. The coefficients of the gradient wavefronts $W_x$ and $W_y$ can be computed using a differentiation matrix $D$, to get $C_x = DC$ and $C_y = CD^{T}$. In this section, we will assume the device constants $\alpha = d = 1$ to simplify the notation; the scale factor may be applied when done, the same as it appears in Sec. 2.1 for the low-order coefficient estimates. Note that the locations (row and column, counting from zero) in the matrices $C_x$ and $C_y$ give the powers of the new terms via an expansion similar to Eq. (11) for $x$ and $y$, respectively. Hence we can easily see that the gradient wavefronts are now third order, as would be expected from differentiation, since the highest row or column index (when counting from zero) for a nonzero element is three. To determine the effect of the shifts by $\pm\Delta$, we use translation matrices of the form

$T_{\Delta}[i,j] = \binom{j}{i}\, \Delta^{\,j-i} \quad \text{for } j \ge i \text{ (and zero otherwise)},$

so that $T_{\Delta} C$ gives the coefficients of $W(x+\Delta, y)$. So, e.g., to determine the monomial coefficients of the first-product phase $\phi(x+\Delta,y) - \phi(x-\Delta,y)$ given $C_x$, we would simply compute $(T_{\Delta} - T_{-\Delta})\, C_x$. The resulting coefficients describe a quadratic wavefront. A similar computation for the second product gives $(T_{\Delta} - T_{-\Delta})^2\, C_x$. The wavefront described by these coefficients is linear. Writing out the polynomial, the $x$ and $y$ slopes of this linear phase are proportional to the $C_{4,0}$ and $C_{3,1}$ monomial coefficients, respectively. Hence, the processed image will be a complex exponential with linear phase (where, again, we assume $\alpha = d = 1$), and its two-dimensional Fourier transform will have a peak at a location proportional to these coefficients. So by finding the location of this peak, we can estimate the $C_{4,0}$ and $C_{3,1}$ coefficients of $W$.
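These matrix manipulations can be illustrated with a small numerical check. The sketch below is a stand-in under stated assumptions (rows of the coefficient matrix index the power of $x$, columns the power of $y$); it builds differentiation and translation matrices and verifies them against direct polynomial evaluation:

```python
import numpy as np
from math import comb

def eval_poly(C, x, y):
    """Evaluate W(x,y) = sum_jk C[j,k] x^j y^k."""
    return x ** np.arange(C.shape[0]) @ C @ y ** np.arange(C.shape[1])

def diff_matrix(n):
    """D such that D @ C gives the coefficients of dW/dx."""
    D = np.zeros((n, n))
    for j in range(1, n):
        D[j - 1, j] = j
    return D

def trans_matrix(n, delta):
    """T such that T @ C gives the coefficients of W(x + delta, y)."""
    T = np.zeros((n, n))
    for j in range(n):
        for i in range(j + 1):
            T[i, j] = comb(j, i) * delta ** (j - i)
    return T

rng = np.random.default_rng(0)
C = rng.standard_normal((5, 5))      # a fourth-order wavefront
delta, x0, y0 = 0.3, 0.7, -0.4

Cs = trans_matrix(5, delta) @ C      # coefficients of W(x + delta, y)
Cx = diff_matrix(5) @ C              # coefficients of dW/dx
print(np.isclose(eval_poly(Cs, x0, y0), eval_poly(C, x0 + delta, y0)))  # True
```

The shifted-difference coefficients of the text are then just `(trans_matrix(5, delta) - trans_matrix(5, -delta)) @ Cx`, composed once or twice for the first and second products.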
With similar logic applied to the other processed images, we are able to estimate all fourth-order terms, where the terms that appear in multiple images are averaged, and the peak coordinates of the Fourier transform of each processed image supply the estimates. The next step is to remove these components from the lower-order images, which we achieve by applying a phase adjustment based on the quantity of each term that exists in each image (see Appendix). For example, from Eq. (16), we see that there are quadratic components resulting from the fourth-order terms, namely in the (2,0), (1,1), and (0,2) entries. As a result, we would apply a compensating phase factor, constructed from the estimated fourth-order coefficients, to that first-product image, and similarly for the other three quadratic images. Now we can calculate the cubic terms from the corrected images using the peaks of their Fourier transforms. Finally, we correct the third- and fourth-order terms in the original gradient subimages, to accurately compute the second-order aberrations, by applying the corresponding phase adjustments. With these, we can calculate the second-order coefficients as in Sec. 2.1. Finally, we form the vector of monomial coefficients, which can be transformed to Zernike coefficients for application. To summarize, the algorithm proceeds as follows: (1) demodulate the first harmonics to obtain the complex gradient subimages; (2) form second products and estimate the fourth-order coefficients from their Fourier peaks; (3) correct the first products and estimate the third-order coefficients; (4) correct the gradient subimages and estimate the second-order coefficients; and (5) convert the monomial coefficients to Zernike coefficients.
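The successive flow can be sketched end-to-end in one dimension. This is a simplified stand-in, not the paper's 2-D implementation: an assumed quartic-plus-cubic-plus-quadratic phase is estimated order by order, each coefficient obtained from a zero-padded FFT peak and removed before the next, lower-order step (the slope constants follow from the finite-difference algebra of the shifted products):

```python
import numpy as np

N, D = 4096, 200                        # samples; shift in samples
x = np.linspace(-1, 1, N)
dx = x[1] - x[0]
delta = D * dx
c4, c3, c2 = 50.0, 30.0, 20.0           # "true" coefficients (radians)
h = np.exp(1j * (c4 * x**4 + c3 * x**3 + c2 * x**2))

def shifted_product(u, D):
    # u(x+Delta) * conj(u(x-Delta)): a finite difference of the phase
    return u[2 * D:] * np.conj(u[:-2 * D])

def peak_freq(u, pad=16):
    # frequency (cycles per x unit) of the dominant, zero-padded FFT peak
    n = pad * u.size
    F = np.fft.fft(u, n)
    return np.fft.fftfreq(n, d=dx)[np.argmax(np.abs(F))]

# Order 4: a triple product leaves a linear phase of slope 192*c4*delta^3;
# the cubic and quadratic terms difference away to constants.
p = shifted_product(shifted_product(shifted_product(h, D), D), D)
c4_hat = 2 * np.pi * peak_freq(p) / (192 * delta**3)

# Remove the estimated quartic; a double product then isolates c3
# (linear-phase slope 24*c3*delta^2).
h = h * np.exp(-1j * c4_hat * x**4)
p = shifted_product(shifted_product(h, D), D)
c3_hat = 2 * np.pi * peak_freq(p) / (24 * delta**2)

# Remove the estimated cubic; a single product isolates c2
# (linear-phase slope 4*c2*delta).
h = h * np.exp(-1j * c3_hat * x**3)
p = shifted_product(h, D)
c2_hat = 2 * np.pi * peak_freq(p) / (4 * delta)

print(c4_hat, c3_hat, c2_hat)  # each within a few percent of the true values
```

Each estimate inherits a small residual from the finite FFT bin spacing, which is why peak interpolation (discussed in Sec. 3) is useful for small coefficients.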
3. Results

Next, the method was tested with a variety of simulations of a Talbot sensor. The simulations used a pupil size of 5 mm with a detected image size of 10 mm. The wavelength was 850 nm, and a nonunity magnification was included (this simply required that a scaling be applied to the final terms based on their order). Figure 8 shows the estimated versus true coefficients for the example from Fig. 4. Here, the result of the simple second-order estimation of Eq. (9) is also shown, which does not use the subsequent high-order estimation and correction. Figure 9 shows the Fourier transforms of the subimages calculated in the method, for one of the two linear harmonic terms used. First note that the peak in (a), the original subimage, exhibits a large amount of high order, as do the subimages in (b) and (c), computed from a single shifted product. However, the second-product images in (d), (e), and (f) result in Airy disks, as there is no remaining high order at this point. Similarly, the correction of the previous subimages, depicted in (g), (h), and (i), results in Airy disks in the Fourier transforms, as the high order has been accurately estimated and removed. Figures 10 and 11 show results for the same case with severe artifacts intentionally added to the image; again, the high-order coefficients are accurately estimated. The previous examples simulate a purely fourth-order wavefront, meaning no fifth- or higher-order aberrations are present. This presumption is the basis for the simplifications we were able to make. To test the effect of realistic levels of higher-order aberrations, a large set of images was simulated based on a secondary analysis of real sixth-order Zernike coefficients from measurements of 1500 subjects collected at multiple clinics in North America and Asia utilizing multiple different aberrometers.
Figure 12 shows the standard deviations of each Zernike coefficient at 5 mm diameter versus those of the residual (true minus estimate), which generally agree with the statistics found in other studies.11 We see that the magnitude of the terms drops off quickly after fourth order (corresponding to coefficient indices above 14). Typically, values in this region are small in rms terms, comparable to the repeatability of the measurement due to biological variation. In Fig. 12, we see that the error after correction is reduced to this range as well. Finally, this large simulation was used to test the choice of the parameter $\Delta$. This parameter directly sets the shift of the peaks within the processed subimages; hence, a larger $\Delta$ makes the estimate more accurate. Note, though, that interpolation of the Fourier transform peak locations will generally be needed to accurately measure small coefficients. The choice of a larger $\Delta$ also comes at a price, however, of utilizing less data from the subimages, as described in Fig. 7. In our initial testing, a moderate choice of seven pixels was sufficient, and the result was not very sensitive to this choice until extremes were reached. This agrees with the simulation results shown in Fig. 13, where the error was found to be small and relatively constant between the extremes of small shifts and a maximum of about 13 pixels.

3.1. Discussion

This paper presented an approach to estimate wavefront aberrations that takes advantage of the limited order of aberrations seen in ophthalmic applications, in order to provide a measurement that is simple and robust, advantages that are critical for low-cost applications. The drawbacks of more sophisticated methods include the need for an expert operator to achieve the best measurement possible and to visually reject images containing problematic artifacts, to avoid capturing images that result in catastrophic phase-unwrapping failures.
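Since the frequency shifts of interest can fall between FFT bins, some subpixel refinement of the peak location is useful, as noted above. A common sketch of such refinement (log-parabolic interpolation with a Hann window; an illustrative choice, not necessarily the one used in the simulations) is:

```python
import numpy as np

N, f0 = 256, 20.3                      # samples; true frequency in bins
n = np.arange(N)
s = np.exp(2j * np.pi * f0 * n / N) * np.hanning(N)  # windowed tone
M = np.abs(np.fft.fft(s))

k = int(np.argmax(M))                  # coarse (integer-bin) peak
a, b, c = np.log(M[k - 1]), np.log(M[k]), np.log(M[k + 1])
offset = 0.5 * (a - c) / (a - 2 * b + c)  # parabola vertex, in bins
f_hat = k + offset
print(abs(f_hat - f0) < 0.1)  # True: refined to well below one bin
```

Zero-padding the FFT, as in the earlier sketches, is a simpler alternative that trades memory for the same subbin resolution.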
Consider the risks of using phase unwrapping to estimate the frequency of a noisy signal, versus simply taking a Fourier transform and finding the peak. Sophisticated methods also require additional expert algorithm development to detect problematic artifacts that cannot be avoided by the operator, such as scars and reflections. These expert algorithms generally require careful tuning, as with any detection and classification algorithm, and this tends to limit the range of applicable subjects, including certain population groups. The key to the approach presented here is the ability of transform methods to exploit the redundancy from many grating periods, which also complements the choice of accurate and inexpensive grating-based sensor elements such as the Talbot-based sensor used in the examples. It should also be noted that the same approach works for a Shack–Hartmann sensor, as the subimages will be mathematically equivalent, as depicted in Fig. 5. A fourth-order representation was used because this captures the majority of aberrations in most eyes, as we demonstrated with the population data analysis, but it is straightforward to extend the method to higher orders. The most obvious way would be to simply continue the process of computing additional shifted products. Another tactic might be a hybrid technique that takes advantage of the reduction in high order in the shifted-product subimages to subsequently utilize a simpler phase estimate, perhaps one that no longer needs to unwrap the phase, such as fast Fourier demodulation.31 At the very least, it would be simple to test the quality of the Fourier transform peak to determine when the method has succeeded.

References

N. G. Congdon, D. S. Friedman and T. Lietman,
“Important causes of visual impairment in the world today,” J. Am. Med. Assoc. 290, 2057–2060 (2003). http://dx.doi.org/10.1001/jama.290.15.2057
C. A. McCarty, “Uncorrected refractive error,” Br. J. Ophthalmol. 90, 521–522 (2006). http://dx.doi.org/10.1136/bjo.2006.090233
L. Pizzarello et al., “Vision 2020: the right to sight: a global initiative to eliminate avoidable blindness,” Arch. Ophthalmol. 122, 615–620 (2004). http://dx.doi.org/10.1001/archopht.122.4.615
I. G. Morgan, K. Ohno-Matsui and S.-M. Saw, “Myopia,” Lancet 379, 1739–1748 (2012). http://dx.doi.org/10.1016/S0140-6736(12)60272-4
D. B. Elliott, Clinical Procedures in Primary Eye Care, Elsevier Health Sciences, Philadelphia, Pennsylvania (2013).
I. C. J. Wood, “A review of autorefractors,” Eye 1, 529–535 (1987). http://dx.doi.org/10.1038/eye.1987.80
L. N. Thibos et al., “Accuracy and precision of objective refraction from wavefront aberrations,” J. Vision 4(4), 329–351 (2004). http://dx.doi.org/10.1167/4.4.9
J. D. Marsack, L. N. Thibos and R. A. Applegate, “Metrics of optical quality derived from wave aberrations predict visual performance,” J. Vision 4(4), 322–328 (2004). http://dx.doi.org/10.1167/4.4.8
N. Maeda, “Clinical applications of wavefront aberrometry—a review,” Clin. Exp. Ophthalmol. 37, 118–129 (2009). http://dx.doi.org/10.1111/ceo.2009.37.issue-1
J. Porter et al., “Monochromatic aberrations of the human eye in a large population,” J. Opt. Soc. Am. A 18, 1793 (2001). http://dx.doi.org/10.1364/JOSAA.18.001793
L. A. Carvalho, “A simple and effective algorithm for detection of arbitrary Hartmann–Shack patterns,” J. Biomed. Inf. 37, 1–9 (2004). http://dx.doi.org/10.1016/j.jbi.2003.10.002
M. Lombardo and G. Lombardo, “New methods and techniques for sensing the wave aberrations of human eyes,” Clin. Exp. Optom. 92, 176–186 (2009). http://dx.doi.org/10.1111/cxo.2009.92.issue-3
R. Voelkel, “Wafer-scale micro-optics fabrication,” Adv. Opt. Technol. 1(3), 135–150 (2012). http://dx.doi.org/10.1515/aot-2012-0013
M. Lombardo et al., “Adaptive optics technology for high-resolution retinal imaging,” Sensors 13, 334–366 (2012). http://dx.doi.org/10.3390/s130100334
N. H. Salama et al., “Wavefront sensor using the Talbot effect,” Opt. Laser Technol. 31, 269–272 (1999). http://dx.doi.org/10.1016/S0030-3992(99)00053-5
Y. Liu et al., “Z-View diffractive wavefront sensor: principle and applications,” Proc. SPIE 6018, 601809 (2005). http://dx.doi.org/10.1117/12.669248
C. Canovas and E. N. Ribak, “Comparison of Hartmann analysis methods,” Appl. Opt. 46, 1830–1835 (2007). http://dx.doi.org/10.1364/AO.46.001830
Y. Carmon and E. N. Ribak, “Phase retrieval by demodulation of a Hartmann–Shack sensor,” Opt. Commun. 215, 285–288 (2003). http://dx.doi.org/10.1016/S0030-4018(02)02254-X
D. Podanchuk et al., “Bottlenecks of the wavefront sensor based on the Talbot effect,” Appl. Opt. 53, B223 (2014). http://dx.doi.org/10.1364/AO.53.00B223
E. J. Sarver, J. Schwiegerling and R. A. Applegate, “Extracting wavefront error from Shack–Hartmann images using spatial demodulation,” J. Refract. Surg. 22, 949–953 (2006).
J. Ma, J. B. Son and J. D. Hazle, “An improved region growing algorithm for phase correction in MRI,” Magn. Reson. Med. 76, 519–529 (2016). http://dx.doi.org/10.1002/mrm.v76.2
C. W. Chen and H. A. Zebker, “Network approaches to two-dimensional phase unwrapping: intractability and two new algorithms,” J. Opt. Soc. Am. A 17, 401 (2000). http://dx.doi.org/10.1364/JOSAA.17.000401
V. Akondi et al., “Phase unwrapping with a virtual Hartmann–Shack wavefront sensor,” Opt. Express 23, 25425 (2015). http://dx.doi.org/10.1364/OE.23.025425
H. Y. H. Huang et al., “Path-independent phase unwrapping using phase gradient and total-variation (TV) denoising,” Opt. Express 20, 14075 (2012). http://dx.doi.org/10.1364/OE.20.014075
Z. Cheng et al., “Practical phase unwrapping of interferometric fringes based on unscented Kalman filter technique,” Opt. Express 23, 32337 (2015). http://dx.doi.org/10.1364/OE.23.032337
V. Katkovnik and J. Bioucas-Dias, “Wavefront reconstruction in phase-shifting interferometry via sparse coding of amplitude and absolute phase,” J. Opt. Soc. Am. A 31, 1801 (2014). http://dx.doi.org/10.1364/JOSAA.31.001801
A. Talmi and E. N. Ribak, “Direct demodulation of Hartmann–Shack patterns,” J. Opt. Soc. Am. A 21, 632 (2004). http://dx.doi.org/10.1364/JOSAA.21.000632
L. Huang et al., “Comparison of two-dimensional integration methods for shape reconstruction from gradient data,” Opt. Lasers Eng. 64, 1–11 (2015). http://dx.doi.org/10.1016/j.optlaseng.2014.07.002
A. Talmi and E. N. Ribak, “Wavefront reconstruction from its gradients,” J. Opt. Soc. Am. A 23, 288 (2006). http://dx.doi.org/10.1364/JOSAA.23.000288
Y. Carmon and E. N. Ribak, “Fast Fourier demodulation,” Appl. Phys. Lett. 84, 4656–4657 (2004). http://dx.doi.org/10.1063/1.1759770
K. Dillon, “Bilinear wavefront transformation,” J. Opt. Soc. Am. A 26(8), 1839–1846 (2009). http://dx.doi.org/10.1364/JOSAA.26.001839
Biography

Keith Dillon is cofounder of Formulens, LLC, which applies advanced numerical methods to optical system optimization. He received his PhD degree from the University of California, San Diego, his MS degree from Rice University, and his BS degree from Tulane University. His research interests include computational imaging and vision, particularly neuroimaging, and entrepreneurship. He is also currently a postdoctoral fellow at Tulane University.