Fast and robust estimation of ophthalmic wavefront aberrations

Keith Dillon

7 December 2016
Abstract
Rapidly rising levels of myopia, particularly in the developing world, have led to an increased need for inexpensive and automated approaches to optometry. A simple and robust technique is provided for estimating major ophthalmic aberrations using a gradient-based wavefront sensor. The approach is based on the use of numerical calculations to produce diverse combinations of phase components, followed by Fourier transforms to calculate the coefficients. The approach requires neither phase unwrapping nor the iterative solution of inverse problems. This makes the method very fast and tolerant of image artifacts, which do not need to be detected and masked or interpolated as is required in other techniques. These features make it a promising algorithm on which to base low-cost devices for applications that may have limited access to expert maintenance and operation.

1. Introduction

Uncorrected refractive error is the leading cause of vision impairment worldwide.1–3 There is a need for inexpensive solutions, particularly in the developing world, which lacks sufficient resources and specialists. Furthermore, the prevalence of myopia has been rapidly increasing in recent years in Asia,4 putting a strain on eyecare systems. Wavefront sensors provide one potential avenue for automation to help address this problem. Unlike the phoropter,5 which is a complex apparatus that requires operation by a trained expert to determine the best subjective correction, a wavefront sensor-based device can directly measure the optical performance of the patient’s eyes. Conventional autorefractors6 used older technologies and were generally unsuccessful at automating this evaluation.

The advantage of the wavefront sensor is that it also provides measurements of higher-order aberrations (where low order refers to defocus and astigmatic error) that have been shown to be critical for determining an accurate correction.7,8 Given these measurements, a more accurate correction may then be performed with conventional spectacle lenses by optimizing a sharpness metric that estimates the net effect on the retinal point spread function.9 Unfortunately, wavefront sensors still tend to be very sophisticated systems, primarily designed for expensive applications such as refractive surgery10 when used in ophthalmic practice; in such an application, the cost of errors is very high and clearly an expert operator must be presumed to be available. For the application of determining the needed refractive correction, however, we can potentially trade off much of the complexity and cost to address a much wider population. When the goal is correction of refractive errors, an estimate of the full wavefront error is not needed. It is primarily a subset of third- and fourth-order aberrations, particularly the more central terms in the Zernike expansion such as coma and spherical aberration, which dominate the effects of aberrations on visual acuity.7,8 Furthermore, these are the largest high-order aberrations seen in the population.11 A better reduction in complexity would, therefore, be achieved with a method that focuses on robustly estimating just these important terms.

The most common ophthalmic wavefront sensor utilizes a Shack–Hartmann lenslet array.10,12 This is a relatively expensive device due to fabrication costs,13 particularly for high-density arrays, though prices might be reduced significantly given the development of high-volume fabrication techniques targeting the smartphone market.14 Indeed, higher density arrays are desirable to improve accuracy;15 lower density arrays suffer more from the curvature of the wavefront across each lenslet, as well as edge effects and related distortions caused by localized artifacts. A potentially cheaper high-density option15 is the grating-based sensor, such as one that utilizes the Talbot effect.16,17 These can easily use grating patterns with a very small pitch (equivalent to a high-density array), which can be produced by simply etching the grating onto a glass slide attached to the detector.

In addition to the sensor device, wavefront sensing requires a specialized algorithm to estimate the wavefront from the detected intensity pattern. Figure 1 shows an overview of common methods utilized for Shack–Hartmann and Talbot devices. With higher density lenslet arrays (and with Talbot gratings), it becomes increasingly difficult to determine the displacements for individual elements to perform the classical methods of Fig. 1(a), due to diffraction and reduced signal-to-noise ratio,20 which suggests Fourier methods may be more attractive. As depicted in Fig. 1(b), Fourier methods typically must unwrap the phase after demodulation of the grating frequency.21 Unfortunately, phase unwrapping remains an open problem,22 as it is ill-posed and NP-hard;23 even in a noise-free scenario, one cannot guarantee an optimal result without testing a combinatorial number of possible configurations of 2π steps. In practical terms, phase unwrapping algorithms are particularly sensitive to localized image artifacts, as are the spot detection methods of Fig. 1(a), and common issues such as eyelashes or corneal scars can cause catastrophic failures in reconstruction.24 Figure 2 shows examples of eyelashes occluding the pupil, a common artifact that can result in significant difficulties. Research on phase unwrapping continues, utilizing sophisticated nonlinear optimization techniques such as total-variation denoising,25 unscented Kalman filters,26 and sparse modeling.27

Fig. 1

(a) In classical Shack–Hartmann methods,18 focal spot displacements are estimated to obtain samples of the wavefront x- and y-gradients; the wavefront is computed from these gradients via an inversion algorithm, and Zernike coefficients can be computed from the reconstructed wavefront. (b) Fourier-based methods19 replace the spot detection step with a demodulation that directly extracts components containing the x- and y-gradients in their phase; the phase values are modulo 2π, however, so phase unwrapping is required. (c) The proposed method operates directly on the demodulated signal components by estimating the peak locations of products of signals; no phase unwrapping or inversion is needed.


Fig. 2

Examples of eyelash shadows partly occluding pupil signal; the bright regions are reflections of the laser source from the cornea and subsequent structures.


Researchers have also proposed techniques to improve or extend other components of the Fourier methods of Fig. 1(b). Demodulation can be performed by fast filtering in the image domain.28 Gradient inversion is itself a nontrivial inverse problem, and a variety of techniques have been developed,29 including fast noniterative approaches.30 If the wavefront phase is small, it may also be extracted directly after demodulation in the spatial frequency domain, by taking the anti-Hermitian component after centering,31 which can then be used directly in Fourier-based inversion methods. However, this approach requires symmetry of the signal aperture, and the small-angle approximation leads to increasing errors for larger aberrations.

As noted earlier, there are only a limited number of major aberration components of interest for refractive correction, so a more specialized approach may sidestep the major difficulties of these conventional reconstruction algorithms. This paper formulates a direct approach to estimating the major aberrations, described generally in Fig. 1(c), which needs neither phase unwrapping nor iterative inversion. The approach uses a numerical technique, mathematically similar to a spatial heterodyne, to create a diversity of images with different aberrations. At each polynomial order, the algorithm estimates each aberration coefficient by finding the peak frequency component. This technique is applied successively to compute and remove each order in turn, until only the low-order terms remain and can be calculated much more accurately. Furthermore, because this approach does not rely on high-quality estimates of spot locations or pixel phase, we can take advantage of finer-pitch (higher-density) gratings, which reduces the relative effect of edges and localized artifacts and also allows for a very compact and easily produced sensor. We will demonstrate the performance of the method with simulations depicting robustness against severe artifacts, as well as over a range of realistic aberration magnitudes.

2. Method

An ophthalmic wavefront sensor typically utilizes a monochromatic laser source scattered from a point on the subject’s retina. Viewing this as a point source, the scattered light passes through the eye’s optics in reverse, and deviations of the resulting signal from a planar wavefront provide an estimate of the eye’s optical aberrations. Most commonly, the returning signal is reimaged from the eye onto a wavefront sensor, such as in Fig. 3, which depicts a sensor utilizing the Talbot effect (a) and a Shack–Hartmann array (b). The resulting image at the camera may be described as the intensity of an interference pattern, given by the following equation:

Eq. (1)

$$s(x,y) = \sum_{n,m} C_{n,m} \cos\left\{ m\,\frac{2\pi}{P}\left[x - \alpha(x,y)\right] + n\,\frac{2\pi}{P}\left[y - \beta(x,y)\right] \right\},$$
where $C_{n,m}$ are the amplitudes of the different spectral components; $P$ is the pitch of the device array (e.g., the lenslet spacing); and $\alpha(x,y)$ and $\beta(x,y)$ are proportional to the horizontal and vertical gradients, respectively, of the wavefront incident on the sensor, $w(x,y)$:

Eq. (2)

$$\alpha(x,y) = d\,\partial_x w(x,y),$$

Eq. (3)

$$\beta(x,y) = d\,\partial_y w(x,y),$$
where $d$ is a constant that depends on the type of sensor. Neglecting magnification for simplicity, for a Shack–Hartmann sensor21 we have $d = f$, the focal length of the lenslets. For a Talbot sensor,17 $d = z_T = 2P^2/\lambda$, the Talbot distance of the sensor, where $\lambda$ is the wavelength of the light (assuming the detector is at the first Talbot plane; otherwise, the appropriate integer multiple for the plane used must be incorporated). Figure 4 shows a simulated example depicting the result of a Talbot sensor with a sinusoidal grating pattern, as well as a Shack–Hartmann sensor with an equal pitch. The primary difference is the scalars $C_{n,m}$ for the harmonics.
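As a concrete illustration of this sensor model, the following minimal NumPy sketch (an assumption for illustration, not the paper's simulation code) renders the intensity pattern of Eq. (1) truncated to its first harmonics, for a purely quadratic test wavefront; the detector size, pitch, and wavelength match the values quoted in Sec. 3, while the curvature value `c` is arbitrary.

```python
import numpy as np

N = 1024                      # detector pixels per side
L = 10e-3                     # detector width [m]
P = 42e-6                     # grating pitch [m]
lam = 850e-9                  # wavelength [m]
d = 2 * P**2 / lam            # Talbot distance z_T (first Talbot plane)

x = np.linspace(-L / 2, L / 2, N)
Xd, Yd = np.meshgrid(x, x)    # detector coordinates

# Quadratic test wavefront w = c*(x^2 + y^2), so the gradients of Eqs. (2)
# and (3) are linear; c is an arbitrary curvature chosen for illustration.
c = 5.0                       # [1/m]
alpha = d * 2 * c * Xd        # d times the x-gradient of w
beta = d * 2 * c * Yd         # d times the y-gradient of w

# Eq. (1) keeping only the (0, +/-1) and (+/-1, 0) harmonics; the offset
# keeps the simulated intensity nonnegative.
k = 2 * np.pi / P
s = 2 + np.cos(k * (Xd - alpha)) + np.cos(k * (Yd - beta))
s *= (Xd**2 + Yd**2) < (2.5e-3) ** 2          # 5-mm pupil mask
```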

Fig. 3

Wavefront sensor: (a) employing a grating with pitch P at the appropriate distance from the camera to utilize the Talbot effect and (b) employing a lenslet array with pitch P at the appropriate distance from the camera to produce a focused spot pattern.


Fig. 4

Simulated grating (top row) and Shack–Hartmann (bottom row) results; detector image (left) and spatial-frequency image (right). Detector images are 1024×1024 pixels covering 10 mm, with a pitch of 42 μm and a magnification of 2.0; here images are zoomed to the central 200×200 pixel region for visibility of the pattern.


The two-dimensional Fourier transform of the detected image is

Eq. (4)

$$\tilde{s}(k_x,k_y) = \iint \exp\left[-i 2\pi (k_x x + k_y y)\right] s(x,y)\, \mathrm{d}x\, \mathrm{d}y.$$

The frequency content of the image is depicted in Fig. 5. The important information can be retained with just the first linear harmonic terms, where $(n,m)=(1,0)$ and $(0,1)$, denoted by the boxes in Fig. 5. For a plane wave, the peaks of these terms will be located at $(0,\pm k_0)$ and $(\pm k_0,0)$, where $k_0 = 1/P$. Assuming the information is sufficiently concentrated near these points, we can extract the subimages depicted in Fig. 6, whose origins correspond to the $(0,k_0)$ and $(k_0,0)$ points, respectively, of the original spatial frequency image. By taking the inverse Fourier transform of these subimages, we get complex images modulated by the phase gradients

Eq. (5)

$$s_\alpha(x,y) = C_{1,0} \exp\left[\frac{2\pi i}{P}\,\alpha(x,y)\right], \qquad s_\beta(x,y) = C_{0,1} \exp\left[\frac{2\pi i}{P}\,\beta(x,y)\right].$$
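Continuing the sketch above, the demodulation can be written in a few lines: crop a block of the two-dimensional FFT around one first harmonic of each axis and inverse-transform it. The block half-width and the choice of which of the two conjugate peaks to crop are assumptions of this sketch (the sign convention is chosen so that the demodulated phase carries $+\alpha$, matching Eq. (5)).

```python
S = np.fft.fftshift(np.fft.fft2(s))

dk = 1 / L                        # frequency spacing [cycles/m]
i0 = N // 2                       # index of zero frequency after fftshift
ik = int(round((1 / P) / dk))     # offset of the k0 = 1/P harmonic, in bins
half = ik // 2                    # block half-width; must exclude neighbors

# Block around (kx, ky) = (-k0, 0) -> x-gradient channel s_alpha, and
# around (0, -k0) -> y-gradient channel s_beta (these peaks carry the
# +alpha, +beta phase for the cosine pattern simulated above).
S_a = S[i0 - half:i0 + half, i0 - ik - half:i0 - ik + half]
S_b = S[i0 - ik - half:i0 - ik + half, i0 - half:i0 + half]

s_alpha = np.fft.ifft2(np.fft.ifftshift(S_a))   # complex; phase = 2*pi*alpha/P
s_beta = np.fft.ifft2(np.fft.ifftshift(S_b))    # complex; phase = 2*pi*beta/P
```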

Fig. 5

Example of $\tilde{s}(k_x,k_y)$, the two-dimensional spatial Fourier transform of the sensor image, depicting the locations of subimage selection about the harmonic terms of interest.


Fig. 6

Subimages $\tilde{s}_\alpha$ and $\tilde{s}_\beta$ extracted from Fig. 5, depicting peak locations resulting from aberrations.


2.1. Low-Order Aberrations

In the absence of high-order aberrations, we can immediately calculate the defocus and astigmatism based on the peak locations, via a relatively simple and well-known Fourier transform technique. A purely quadratic surface can be written using Zernike basis functions as

Eq. (6)

$$w(x,y) = c_3\, 2\sqrt{6}\, xy + c_4\, \sqrt{3}\, (2x^2 + 2y^2 - 1) + c_5\, \sqrt{6}\, (x^2 - y^2).$$

The derivatives of this wavefront form linear functions, yielding

Eq. (7)

$$\alpha(x,y) = d\left(c_3\, 2\sqrt{6}\, y + c_4\, 4\sqrt{3}\, x + c_5\, 2\sqrt{6}\, x\right), \qquad \beta(x,y) = d\left(c_3\, 2\sqrt{6}\, x + c_4\, 4\sqrt{3}\, y - c_5\, 2\sqrt{6}\, y\right),$$
and so the Fourier transforms of Eq. (5) have the form of delta functions (presuming infinite continuous functions for clarity)

Eq. (8)

$$\tilde{s}_\alpha(k_x,k_y) = \delta\left[2\pi(k_x - X_\alpha),\, 2\pi(k_y - Y_\alpha)\right], \qquad \tilde{s}_\beta(k_x,k_y) = \delta\left[2\pi(k_x - X_\beta),\, 2\pi(k_y - Y_\beta)\right],$$
with peak locations as labeled in Fig. 6. Solving for the Zernike coefficients based on the peak locations gives the following estimate:

Eq. (9)

$$c_3 = \frac{P}{4\sqrt{6}\, d}\,(X_\beta + Y_\alpha), \qquad c_4 = \frac{P}{8\sqrt{3}\, d}\,(X_\alpha + Y_\beta), \qquad c_5 = \frac{P}{4\sqrt{6}\, d}\,(X_\alpha - Y_\beta).$$

For some applications, such as system calibration or determining the amount of compensation to use, this simple estimate may suffice. In the presence of high-order aberrations, however, this approach will become increasingly inaccurate.
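A minimal sketch of this low-order estimate, continuing the code above: locate the spectral peak of each demodulated subimage and apply Eq. (9). A plain argmax is used to find the peak, which is an assumption of the sketch; in practice, sub-bin interpolation of the peak improves precision for small coefficients.

```python
def peak_location(sub, extent):
    """Return (kx, ky) of the FFT-magnitude peak of a complex subimage."""
    F = np.fft.fftshift(np.fft.fft2(sub))
    iy, ix = np.unravel_index(np.argmax(np.abs(F)), F.shape)
    n = F.shape[0]
    dk = 1.0 / extent                  # frequency spacing for this extent
    return (ix - n // 2) * dk, (iy - n // 2) * dk

Xa, Ya = peak_location(s_alpha, L)     # peaks of the alpha channel
Xb, Yb = peak_location(s_beta, L)      # peaks of the beta channel

# Eq. (9); for the defocus-only test wavefront above, c3 and c5 vanish.
c3 = P / (4 * np.sqrt(6) * d) * (Xb + Ya)
c4 = P / (8 * np.sqrt(3) * d) * (Xa + Yb)
c5 = P / (4 * np.sqrt(6) * d) * (Xa - Yb)
```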

2.2. High-Order Aberrations

To address high-order aberrations, we will take advantage of the fact that ophthalmic wavefronts are dominated by a small number of Zernike terms, while retaining the robustness and simplicity of Fourier-transform-based methods such as in Sec. 2.1. We address high-order terms by numerically computing shifted products of the demodulated images and their conjugates, and thereby convert higher-order terms to lower orders by a process similar to differentiation. This will allow us to subsequently utilize a low-order estimation method like that of Sec. 2.1. While the approach appears quite involved mathematically, the derivation only requires manipulations of small matrices, and the implementation consists of phase adjustments and fast Fourier transforms (FFTs) of the relatively small subimages around the peaks.

The first step is to process both the $s_\alpha$ and $s_\beta$ subimages as follows,

Eq. (10)

$$\begin{aligned}
s_\alpha^{(x)}(x,y) &= s_\alpha(x+\Delta,y)\, s_\alpha(x-\Delta,y)^*, \\
s_\alpha^{(xx)}(x,y) &= s_\alpha^{(x)}(x+\Delta,y)\, s_\alpha^{(x)}(x-\Delta,y)^*, \\
s_\alpha^{(y)}(x,y) &= s_\alpha(x,y+\Delta)\, s_\alpha(x,y-\Delta)^*, \\
s_\alpha^{(yy)}(x,y) &= s_\alpha^{(y)}(x,y+\Delta)\, s_\alpha^{(y)}(x,y-\Delta)^*, \\
s_\alpha^{(xy)}(x,y) &= s_\alpha^{(y)}(x+\Delta,y)\, s_\alpha^{(y)}(x-\Delta,y)^*.
\end{aligned}$$

Recall that the demodulated subimages are complex, unlike the original detected image. We will label the phase of these subimages with α for those derived from the (1,0) term and β for those derived from the (0,1) terms. For example, the phase of sα(x) is α(x), and the phase of sβ(x) is β(x). The parameter Δ is similar conceptually to the distance used in a finite-difference differentiation estimate (instead of first and second derivatives, we refer to them as first and second products). The precise choice is not critical, except that a large Δ discards more information (as depicted in Fig. 7), while a smaller Δ will result in a smaller frequency shift for our subsequent calculation, making the result more susceptible to noise and quantization errors. Section 3 gives a simulation demonstrating that the output is relatively insensitive to Δ except for extreme values.
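Continuing the sketch, the shifted products of Eq. (10) reduce to elementwise multiplications of rolled arrays. Integer-pixel shifts with periodic wrap-around are an assumption here, adequate for illustration; the β-channel products are formed identically.

```python
def xprod(s, delta):
    """Shifted product along x: s(x + delta, y) * conj(s(x - delta, y))."""
    return np.roll(s, -delta, axis=1) * np.conj(np.roll(s, delta, axis=1))

def yprod(s, delta):
    """Shifted product along y: s(x, y + delta) * conj(s(x, y - delta))."""
    return np.roll(s, -delta, axis=0) * np.conj(np.roll(s, delta, axis=0))

D = 7  # Delta in pixels; Sec. 3 found results insensitive over a broad range

s_a_x, s_a_y = xprod(s_alpha, D), yprod(s_alpha, D)    # first products
s_b_x, s_b_y = xprod(s_beta, D), yprod(s_beta, D)
s_a_xx, s_a_yy = xprod(s_a_x, D), yprod(s_a_y, D)      # second products
s_a_xy = xprod(s_a_y, D)
s_b_xx, s_b_yy = xprod(s_b_x, D), yprod(s_b_y, D)
s_b_xy = xprod(s_b_y, D)
```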

Fig. 7

An excessively large Δ parameter reduces the amount of pupil information utilized: (a) by 2Δ for the first product $s^{(y)}$ [and similarly for $s^{(x)}$], (b) by 4Δ for the second products $s^{(yy)}$ [and similarly for $s^{(xx)}$], and (c) by 2Δ in both directions for the remaining second product $s^{(xy)}$. So, a smaller Δ is desirable to retain the most pupil area.


The result of such combinations is most easily understood in terms of transformations of monomials. Recall that a vector $z$ of Zernike coefficients describing all aberrations up to some order can be represented equivalently by a vector $p$ of coefficients for monomial terms.32 The vector $p$ is related to $z$ by a simple linear transformation (i.e., $z = M_{m2z}\, p$). A fourth-order wavefront can be written as the monomial expansion

Eq. (11)

$$w(x,y) = \sum_{i=0}^{4} \sum_{j=0}^{4-i} p_{i,j}\, x^i y^j,$$
for which the coefficients may be easily visualized in the matrix form (using base-zero indexing)

Eq. (12)

$$P = \begin{pmatrix} p_{0,0} & p_{0,1} & p_{0,2} & p_{0,3} & p_{0,4} \\ p_{1,0} & p_{1,1} & p_{1,2} & p_{1,3} & 0 \\ p_{2,0} & p_{2,1} & p_{2,2} & 0 & 0 \\ p_{3,0} & p_{3,1} & 0 & 0 & 0 \\ p_{4,0} & 0 & 0 & 0 & 0 \end{pmatrix},$$
with $p = \mathrm{vec}(P)$ using the vectorization operator, an operator that reforms matrix elements into a vector (purely for convenience of notation). In matrix form, separable linear operations may be performed using a matrix $M$, such as $M^T P$ to transform the x-coordinate and $P M$ to transform the y-coordinate. See Ref. 32 for a more thorough introduction.

The coefficients of the gradient wavefronts $\alpha(x,y)$ and $\beta(x,y)$ can be computed using a differentiation matrix $M_D$, to get

Eq. (13)

$$P_\alpha = M_D^T P = \begin{pmatrix} p_{1,0} & p_{1,1} & p_{1,2} & p_{1,3} & 0 \\ 2p_{2,0} & 2p_{2,1} & 2p_{2,2} & 0 & 0 \\ 3p_{3,0} & 3p_{3,1} & 0 & 0 & 0 \\ 4p_{4,0} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix},$$

Eq. (14)

$$P_\beta = P M_D = \begin{pmatrix} p_{0,1} & 2p_{0,2} & 3p_{0,3} & 4p_{0,4} & 0 \\ p_{1,1} & 2p_{1,2} & 3p_{1,3} & 0 & 0 \\ p_{2,1} & 2p_{2,2} & 0 & 0 & 0 \\ p_{3,1} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

In this section, we will assume the device constants $P = d = 1$ to simplify the notation; the scale factor $P/d$ may be applied at the end, just as it appears in the low-order coefficient estimates of Sec. 2.1. Note that the location (row and column, counting from zero) of each entry in the matrices $P_\alpha$ and $P_\beta$ gives the powers of the corresponding term via an expansion similar to Eq. (11) for $\alpha(x,y)$ and $\beta(x,y)$, respectively. Hence we can easily see that these wavefronts are now third order, as would be expected from differentiation, since the highest row or column index for a nonzero element is three.
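These coefficient-matrix manipulations are simple to implement; the sketch below (helper names are assumptions) builds the matrix $P$ of Eq. (12) and the differentiation matrix $M_D$, so that Eqs. (13) and (14) become one-line matrix products.

```python
def coeff_matrix(p, order=4):
    """Pm[i, j] = p_{i,j}, the coefficient of x^i y^j (base-zero indexing)."""
    Pm = np.zeros((order + 1, order + 1))
    for (i, j), v in p.items():
        Pm[i, j] = v
    return Pm

def diff_matrix(order=4):
    """M_D such that (M_D^T P)[i, j] = (i+1) p_{i+1,j}: d/dx in coefficient space."""
    M = np.zeros((order + 1, order + 1))
    for i in range(order):
        M[i + 1, i] = i + 1
    return M

M_D = diff_matrix()
Pm = coeff_matrix({(4, 0): 0.1, (2, 2): 0.05})   # example fourth-order terms
P_alpha = M_D.T @ Pm                             # Eq. (13): x-gradient coefficients
P_beta = Pm @ M_D                                # Eq. (14): y-gradient coefficients
```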

To determine the effect of the shifts by Δ, we use translation matrices of the form

Eq. (15)

$$M_\Delta = \begin{pmatrix} 1 & 0 & 0 & 0 \\ \Delta & 1 & 0 & 0 \\ \Delta^2 & 2\Delta & 1 & 0 \\ \Delta^3 & 3\Delta^2 & 3\Delta & 1 \end{pmatrix}.$$
So, e.g., to determine the monomial coefficients for $\alpha^{(x)}(x,y) = \alpha(x+\Delta,y) - \alpha(x-\Delta,y)$ given $P_\alpha$, we would simply compute $M_\Delta^T P_\alpha - M_{-\Delta}^T P_\alpha$. The resulting coefficients are

Eq. (16)

$$P_{\alpha^{(x)}} = \begin{pmatrix} 4\Delta p_{2,0} + 8\Delta^3 p_{4,0} & 4\Delta p_{2,1} & 4\Delta p_{2,2} & 0 \\ 12\Delta p_{3,0} & 12\Delta p_{3,1} & 0 & 0 \\ 24\Delta p_{4,0} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
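This translation-matrix algebra is easy to verify numerically; continuing the sketch above (with `Pm` and `P_alpha` as defined there), the following reproduces the entries of Eq. (16).

```python
from math import comb

def shift_matrix(delta, order=4):
    """M_Delta[i, j] = C(i, j) * delta^(i-j); M_Delta^T Pm gives w(x+delta, y)."""
    M = np.zeros((order + 1, order + 1))
    for i in range(order + 1):
        for j in range(i + 1):
            M[i, j] = comb(i, j) * delta ** (i - j)
    return M

dlt = 0.1
Pa_x = shift_matrix(dlt).T @ P_alpha - shift_matrix(-dlt).T @ P_alpha
# Pa_x[2, 0] reproduces 24 * dlt * 0.1, the 24*Delta*p_{4,0} entry of
# Eq. (16), and Pa_x[0, 2] reproduces 4 * dlt * 0.05, the 4*Delta*p_{2,2} entry.
```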
A similar computation for $\alpha^{(xx)}(x,y) = \alpha^{(x)}(x+\Delta,y) - \alpha^{(x)}(x-\Delta,y)$ gives

Eq. (17)

$$P_{\alpha^{(xx)}} = \begin{pmatrix} 24\Delta^2 p_{3,0} & 24\Delta^2 p_{3,1} & 0 & 0 \\ 96\Delta^2 p_{4,0} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

The wavefront described by these coefficients is linear. Writing out the polynomial gives

Eq. (18)

$$\alpha^{(xx)}(x,y) = \sum_{i=0}^{4} \sum_{j=0}^{4-i} \left[P_{\alpha^{(xx)}}\right]_{i,j} x^i y^j = 24\Delta^2 p_{3,0} + 96\Delta^2 p_{4,0}\, x + 24\Delta^2 p_{3,1}\, y.$$
Hence, the processed image will be of the form (where, again, we assume $P = d = 1$, and a constant phase factor is omitted)

Eq. (19)

$$s_\alpha^{(xx)}(x,y) = C_{1,0} \exp\left[2\pi i\, \alpha^{(xx)}(x,y)\right] = C_{1,0} \exp\left[2\pi i \left(96\Delta^2 p_{4,0}\, x + 24\Delta^2 p_{3,1}\, y\right)\right],$$
and its two-dimensional Fourier transform will have a peak at a location

Eq. (20)

$$(k_x, k_y) = \left(96\Delta^2 p_{4,0},\; 24\Delta^2 p_{3,1}\right).$$
So by finding the location of this peak, we can estimate the p4,0 and p3,1 coefficients of w(x,y). With similar logic applied to the other processed images, we are able to estimate all fourth-order terms, to get

Eq. (21)

$$\begin{aligned}
p_{4,0} &= \frac{1}{96\Delta^2}\, X\!\left[s_\alpha^{(xx)}\right], \\
p_{3,1} &= \frac{1}{72\Delta^2} \left\{ Y\!\left[s_\alpha^{(xx)}\right] + X\!\left[s_\alpha^{(xy)}\right] + X\!\left[s_\beta^{(xx)}\right] \right\}, \\
p_{2,2} &= \frac{1}{64\Delta^2} \left\{ Y\!\left[s_\alpha^{(xy)}\right] + X\!\left[s_\alpha^{(yy)}\right] + Y\!\left[s_\beta^{(xx)}\right] + X\!\left[s_\beta^{(xy)}\right] \right\}, \\
p_{1,3} &= \frac{1}{72\Delta^2} \left\{ Y\!\left[s_\alpha^{(yy)}\right] + Y\!\left[s_\beta^{(xy)}\right] + X\!\left[s_\beta^{(yy)}\right] \right\}, \\
p_{0,4} &= \frac{1}{96\Delta^2}\, Y\!\left[s_\beta^{(yy)}\right],
\end{aligned}$$
where the terms that appear in multiple images are averaged, and the coordinates for the peak of the Fourier transform of an image s are denoted as [X(s),Y(s)].
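Continuing the sketch, the fourth-order extraction of Eq. (21) becomes a handful of peak reads, reusing `peak_location` and the product images computed above. Following the text, coordinates are treated as normalized with $P = d = 1$ (with physical units, the scale factors of Sec. 2 must be reapplied), so Δ is converted from pixels to the same normalized units.

```python
n = s_alpha.shape[0]
Dn = D / n                                   # Delta in normalized (unit-extent) units

X = lambda sub: peak_location(sub, 1.0)[0]   # peak kx, unit-extent convention
Y = lambda sub: peak_location(sub, 1.0)[1]   # peak ky

p40 = X(s_a_xx) / (96 * Dn**2)
p31 = (Y(s_a_xx) + X(s_a_xy) + X(s_b_xx)) / (72 * Dn**2)
p22 = (Y(s_a_xy) + X(s_a_yy) + Y(s_b_xx) + X(s_b_xy)) / (64 * Dn**2)
p13 = (Y(s_a_yy) + Y(s_b_xy) + X(s_b_yy)) / (72 * Dn**2)
p04 = Y(s_b_yy) / (96 * Dn**2)
```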

The next step is to remove these components from the lower order images, which we achieve by applying a phase adjustment based on the amount of each term present in each image (see the Appendix). For example, from Eq. (16), we see that there are quadratic components resulting from the fourth-order terms of $w$, namely, $24\Delta p_{4,0}$ in the (2,0) entry, $12\Delta p_{3,1}$ in the (1,1) entry, and $4\Delta p_{2,2}$ in the (0,2) entry. As a result, we would apply

Eq. (22)

$$\delta_\alpha^{(x)}(x,y) = \exp\left[-2\pi i \Delta \left(24 p_{4,0}\, x^2 + 12 p_{3,1}\, xy + 4 p_{2,2}\, y^2\right)\right]$$
to $s_\alpha^{(x)}(x,y)$. Similarly, for the other three quadratic images, we would apply

Eq. (23)

$$\begin{aligned}
\delta_\alpha^{(y)}(x,y) &= \exp\left[-2\pi i \Delta \left(6 p_{3,1}\, x^2 + 8 p_{2,2}\, xy + 6 p_{1,3}\, y^2\right)\right], \\
\delta_\beta^{(x)}(x,y) &= \exp\left[-2\pi i \Delta \left(6 p_{3,1}\, x^2 + 8 p_{2,2}\, xy + 6 p_{1,3}\, y^2\right)\right], \\
\delta_\beta^{(y)}(x,y) &= \exp\left[-2\pi i \Delta \left(4 p_{2,2}\, x^2 + 12 p_{1,3}\, xy + 24 p_{0,4}\, y^2\right)\right].
\end{aligned}$$
Now we can calculate the cubic terms of $w(x,y)$ using the peaks of the Fourier transforms of these corrected images, as

Eq. (24)

$$\begin{aligned}
p_{3,0} &= \frac{1}{12\Delta}\, X\!\left[\delta_\alpha^{(x)} s_\alpha^{(x)}\right], \\
p_{2,1} &= \frac{1}{12\Delta} \left\{ Y\!\left[\delta_\alpha^{(x)} s_\alpha^{(x)}\right] + X\!\left[\delta_\alpha^{(y)} s_\alpha^{(y)}\right] + X\!\left[\delta_\beta^{(x)} s_\beta^{(x)}\right] \right\}, \\
p_{1,2} &= \frac{1}{12\Delta} \left\{ Y\!\left[\delta_\beta^{(x)} s_\beta^{(x)}\right] + X\!\left[\delta_\beta^{(y)} s_\beta^{(y)}\right] + Y\!\left[\delta_\alpha^{(y)} s_\alpha^{(y)}\right] \right\}, \\
p_{0,3} &= \frac{1}{12\Delta}\, Y\!\left[\delta_\beta^{(y)} s_\beta^{(y)}\right].
\end{aligned}$$

Finally, we correct the third- and fourth-order terms in the original gradient subimages, to accurately compute the second-order aberrations, by forming

Eq. (25)

$$\begin{aligned}
\delta_\alpha(x,y) &= \exp\left[-2\pi i \left(4 p_{4,0}\, x^3 + 3 p_{3,1}\, x^2 y + 2 p_{2,2}\, x y^2 + p_{1,3}\, y^3 + 3 p_{3,0}\, x^2 + 2 p_{2,1}\, xy + p_{1,2}\, y^2\right)\right], \\
\delta_\beta(x,y) &= \exp\left[-2\pi i \left(p_{3,1}\, x^3 + 2 p_{2,2}\, x^2 y + 3 p_{1,3}\, x y^2 + 4 p_{0,4}\, y^3 + p_{2,1}\, x^2 + 2 p_{1,2}\, xy + 3 p_{0,3}\, y^2\right)\right].
\end{aligned}$$

With these, we can calculate

Eq. (26)

$$p_{2,0} = \tfrac{1}{2}\, X(\delta_\alpha s_\alpha), \qquad p_{1,1} = \tfrac{1}{2}\left[Y(\delta_\alpha s_\alpha) + X(\delta_\beta s_\beta)\right], \qquad p_{0,2} = \tfrac{1}{2}\, Y(\delta_\beta s_\beta).$$
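A sketch of this correct-and-descend sequence, Eqs. (22) through (26), continuing the code above in the same normalized coordinates; the coordinate origin is assumed at the image center (in practice, the pupil center).

```python
xg = (np.arange(n) - n // 2) / n            # normalized coordinates
Xg, Yg = np.meshgrid(xg, xg)

# Eqs. (22) and (23): remove the quadratic phase traced to fourth order.
d_a_x = np.exp(-2j * np.pi * Dn * (24 * p40 * Xg**2 + 12 * p31 * Xg * Yg + 4 * p22 * Yg**2))
d_a_y = np.exp(-2j * np.pi * Dn * (6 * p31 * Xg**2 + 8 * p22 * Xg * Yg + 6 * p13 * Yg**2))
d_b_x = d_a_y                                # identical correction per Eq. (23)
d_b_y = np.exp(-2j * np.pi * Dn * (4 * p22 * Xg**2 + 12 * p13 * Xg * Yg + 24 * p04 * Yg**2))

# Eq. (24): third-order coefficients from the corrected first products.
p30 = X(d_a_x * s_a_x) / (12 * Dn)
p21 = (Y(d_a_x * s_a_x) + X(d_a_y * s_a_y) + X(d_b_x * s_b_x)) / (12 * Dn)
p12 = (Y(d_b_x * s_b_x) + X(d_b_y * s_b_y) + Y(d_a_y * s_a_y)) / (12 * Dn)
p03 = Y(d_b_y * s_b_y) / (12 * Dn)

# Eq. (25): remove third- and fourth-order phase from the original subimages.
d_a = np.exp(-2j * np.pi * (4 * p40 * Xg**3 + 3 * p31 * Xg**2 * Yg
                            + 2 * p22 * Xg * Yg**2 + p13 * Yg**3
                            + 3 * p30 * Xg**2 + 2 * p21 * Xg * Yg + p12 * Yg**2))
d_b = np.exp(-2j * np.pi * (p31 * Xg**3 + 2 * p22 * Xg**2 * Yg
                            + 3 * p13 * Xg * Yg**2 + 4 * p04 * Yg**3
                            + p21 * Xg**2 + 2 * p12 * Xg * Yg + 3 * p03 * Yg**2))

# Eq. (26): second-order coefficients from the corrected subimage peaks.
p20 = X(d_a * s_alpha) / 2
p11 = (Y(d_a * s_alpha) + X(d_b * s_beta)) / 2
p02 = Y(d_b * s_beta) / 2
```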

Finally, we form the vector of monomial coefficients, which can then be transformed to Zernike coefficients for the application at hand:

Eq. (27)

$$p = \left(p_{2,0}, p_{1,1}, p_{0,2}, p_{3,0}, p_{2,1}, p_{1,2}, p_{0,3}, p_{4,0}, p_{3,1}, p_{2,2}, p_{1,3}, p_{0,4}\right)^T.$$

To summarize, we have the following algorithm:

1. Demodulate the detected image, producing the complex subimages $s_\alpha$ and $s_\beta$.

2. Compute the first product images: $s_\alpha^{(x)}$, $s_\alpha^{(y)}$, $s_\beta^{(x)}$, $s_\beta^{(y)}$.

3. From the first product images, compute the second product images: $s_\alpha^{(xx)}$, $s_\alpha^{(xy)}$, $s_\alpha^{(yy)}$, $s_\beta^{(xx)}$, $s_\beta^{(xy)}$, $s_\beta^{(yy)}$.

4. From the FFT peaks of the second product images, compute the fourth-order coefficients: $p_{4,0}$, $p_{3,1}$, $p_{2,2}$, $p_{1,3}$, $p_{0,4}$.

5. Using the fourth-order coefficients, generate the phase corrections: $\delta_\alpha^{(x)}$, $\delta_\alpha^{(y)}$, $\delta_\beta^{(x)}$, $\delta_\beta^{(y)}$.

6. From the FFT peaks of the first product images with phase corrections applied, compute the third-order coefficients: $p_{3,0}$, $p_{2,1}$, $p_{1,2}$, $p_{0,3}$.

7. Using the third-order coefficients, generate the phase corrections: $\delta_\alpha$, $\delta_\beta$.

8. From the FFT peaks of the original subimages with phase corrections applied, compute the second-order coefficients: $p_{2,0}$, $p_{1,1}$, $p_{0,2}$.

9. Transform the monomial coefficients to Zernike coefficients: $z = M_{m2z}\, p$.

3. Results

Next, the method was tested with a variety of simulations of a Talbot sensor. The simulations used a pupil size of 5 mm with a detected image size of 10 mm over 1024×1024 pixels. The grating pitch was 42 μm, the wavelength was 850 nm, and the magnification was $m_s = 2.0$ (this simply required that a scaling be applied to the final terms based on their order). Figure 8 shows the estimated versus true coefficients for the example from Fig. 4. Here, the result of the simple second-order estimation of Eq. (9), which does not use the subsequent high-order estimation and correction, is also demonstrated. Figure 9 shows the Fourier transforms of the subimages calculated in the method, for one of the two linear harmonic terms used [specifically, for just the $(n,m)=(1,0)$ term]. First note that the peak in (a), the original subimage, exhibits a large amount of high order, as do the subimages in (b) and (c), computed from a single shifted product. However, the second shifted-product images in (d), (e), and (f) result in Airy disks, as there is no remaining high order at this point. Similarly, the corrections of the previous subimages, depicted in (g), (h), and (i), result in Airy disks in the Fourier transforms, as the high order has been accurately estimated and removed. Figures 10 and 11 show results for the same case with severe artifacts intentionally added to the image; again, the high-order coefficients are accurately estimated.

Fig. 8

True Zernike coefficients used in the simulation, and the result of the algorithm estimating fourth-order Zernikes (Fast4), as well as a version that stops after estimating second-order coefficients (Fast2). The net rms error (the root of the sum of the squared errors for each coefficient) is 0.043 μm-rms for Fast4 and 2.65 μm-rms for Fast2.


Fig. 9

Example of the Fourier transforms of the subimages used in processing, showing the aberrations of the peak: (a) $\tilde{s}_\alpha$, (b) $\tilde{s}_\alpha^{(x)}$, (c) $\tilde{s}_\alpha^{(y)}$, (d) $\tilde{s}_\alpha^{(xx)}$, (e) $\tilde{s}_\alpha^{(xy)}$, (f) $\tilde{s}_\alpha^{(yy)}$, (g) the transform of $\delta_\alpha^{(x)} s_\alpha^{(x)}$, (h) the transform of $\delta_\alpha^{(y)} s_\alpha^{(y)}$, and (i) the transform of $\delta_\alpha s_\alpha$.


Fig. 10

Image artifact examples, with random spots or lines to demonstrate algorithm robustness.


Fig. 11

Resulting estimates for the three artifact examples. The net rms errors (the root of the sum of the squared errors for each coefficient) for Fast4 are 0.056, 0.18, and 0.19 μm-rms, compared to an error of 0.043 μm-rms for the example with no artifacts. The net errors for Fast2 are 2.58, 2.56, and 2.74 μm-rms for the three cases, comparable to the 2.65 μm-rms error in the artifact-free case.


The previous examples simulate a purely fourth-order wavefront, meaning no fifth- or higher-order aberrations are present. This presumption is the basis for the simplifications we were able to make. To test the effect of realistic levels of higher-order aberrations, a large set of images was simulated based on a secondary analysis of real sixth-order Zernike coefficients from measurements of 1500 subjects, collected at multiple clinics in North America and Asia using several different aberrometers. Figure 12 shows the standard deviations for each Zernike coefficient at a 5-mm diameter versus the residual (true minus estimate), which generally agree with the statistics found in other studies.11 We see that the magnitude of the terms drops off quickly after fourth order (corresponding to coefficients above 14). Typically, values in this region are <0.02 μm-rms, which is comparable to the repeatability of the measurement due to biological variation. In Fig. 12, we see that the error after correction is reduced to this range as well.

Fig. 12

Standard deviations of the coefficients for each Zernike term in the simulated data (top) and in the residual error after the estimate (bottom). The standard deviations of true Zernike coefficients 3, 4, and 5 are 0.27, 1.40, and 0.4274, respectively.


Finally, this large simulation was used to test the choice of the parameter Δ. This parameter directly determines the shift of the peaks within the processed subimages; hence, a larger Δ makes the estimate more accurate. Note also that interpolation of the Fourier-transform peak locations will generally be needed to accurately measure small coefficients. The choice of a larger Δ comes at a price, however, of utilizing less data from the subimages, as described in Fig. 7. In our initial testing, a moderate choice of seven pixels was sufficient, and the result was not very sensitive to this choice until extremes were reached. This agrees with the simulation results shown in Fig. 13, where the error was found to be small and relatively constant between the extremes of small shifts and a maximum of about 13.

Fig. 13

Average wavefront error versus shift parameter Δ for large population of simulated wavefronts. For this pupil size and sample spacing, the result is relatively insensitive to choices of Δ in the range from three to twelve.


3.1. Discussion

This paper presented an approach to estimating wavefront aberrations that takes advantage of the limited order of aberrations seen in ophthalmic applications, in order to provide a measurement that is simple and robust, advantages that are critical for low-cost applications. The drawbacks of more sophisticated methods include the need for an expert operator to achieve the best possible measurement and to visually reject images containing problematic artifacts, so as to avoid captures that result in catastrophic phase-unwrapping failures. Consider the risk of using phase unwrapping to estimate the frequency of a noisy signal, versus simply taking a Fourier transform and finding the peak. Sophisticated methods also require additional expert algorithm development to detect problematic artifacts that cannot be avoided by the operator, such as scars and reflections. These expert algorithms generally require careful tuning, as with any detection and classification algorithm, and this tends to limit the range of applicable subjects, including certain population groups. The key to the approach presented here is the ability of transform methods to exploit the redundancy from many grating periods, which also complements the choice of accurate and inexpensive grating-based sensor elements, such as the Talbot-based sensor used in the examples. It should also be noted that the same approach works for a Shack–Hartmann sensor, as the subimages will be mathematically equivalent, as depicted in Fig. 5.

A fourth-order representation was used because this captures the majority of aberrations in most eyes, as we demonstrated with the population data analysis, but it is straightforward to extend the method to higher orders. The most obvious way would be to simply continue the process of computing additional shifted products. Another tactic might be to use a hybrid technique that takes advantage of the reduction in high order in the shifted-product subimages to subsequently utilize a simpler phase estimate, perhaps one that no longer needs to unwrap the phase, such as fast Fourier demodulation.31 At the very least, it would be simple to test the quality of the Fourier transform peak to determine whether the method has succeeded.

Appendix: Coefficient Matrices

We list here all the coefficient matrices resulting from the shifted-product operations used:

Eq. (28)

$$P_{\alpha^{(x)}} = \begin{pmatrix} 4\Delta p_{2,0} + 8\Delta^3 p_{4,0} & 4\Delta p_{2,1} & 4\Delta p_{2,2} & 0 \\ 12\Delta p_{3,0} & 12\Delta p_{3,1} & 0 & 0 \\ 24\Delta p_{4,0} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

Eq. (29)

$$P_{\alpha^{(y)}} = \begin{pmatrix} 2\Delta p_{1,1} + 2\Delta^3 p_{1,3} & 4\Delta p_{1,2} & 6\Delta p_{1,3} & 0 \\ 4\Delta p_{2,1} & 8\Delta p_{2,2} & 0 & 0 \\ 6\Delta p_{3,1} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

Eq. (30)

$$P_{\alpha^{(xx)}} = \begin{pmatrix} 24\Delta^2 p_{3,0} & 24\Delta^2 p_{3,1} & 0 & 0 \\ 96\Delta^2 p_{4,0} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

Eq. (31)

$$P_{\alpha^{(xy)}} = \begin{pmatrix} 8\Delta^2 p_{2,1} & 16\Delta^2 p_{2,2} & 0 & 0 \\ 24\Delta^2 p_{3,1} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

Eq. (32)

$$P_{\alpha^{(yy)}} = \begin{pmatrix} 8\Delta^2 p_{1,2} & 24\Delta^2 p_{1,3} & 0 & 0 \\ 16\Delta^2 p_{2,2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

Eq. (33)

$$P_{\beta^{(x)}} = \begin{pmatrix} 2\Delta p_{1,1} + 2\Delta^3 p_{3,1} & 4\Delta p_{1,2} & 6\Delta p_{1,3} & 0 \\ 4\Delta p_{2,1} & 8\Delta p_{2,2} & 0 & 0 \\ 6\Delta p_{3,1} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

Eq. (34)

$$P_{\beta^{(y)}} = \begin{pmatrix} 4\Delta p_{0,2} + 8\Delta^3 p_{0,4} & 12\Delta p_{0,3} & 24\Delta p_{0,4} & 0 \\ 4\Delta p_{1,2} & 12\Delta p_{1,3} & 0 & 0 \\ 4\Delta p_{2,2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

Eq. (35)

$$P_{\beta^{(xx)}} = \begin{pmatrix} 8\Delta^2 p_{2,1} & 16\Delta^2 p_{2,2} & 0 & 0 \\ 24\Delta^2 p_{3,1} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

Eq. (36)

$$P_{\beta^{(xy)}} = \begin{pmatrix} 8\Delta^2 p_{1,2} & 24\Delta^2 p_{1,3} & 0 & 0 \\ 16\Delta^2 p_{2,2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

Eq. (37)

$$P_{\beta^{(yy)}} = \begin{pmatrix} 24\Delta^2 p_{0,3} & 96\Delta^2 p_{0,4} & 0 & 0 \\ 24\Delta^2 p_{1,3} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

Disclosures

Dr. Dillon has nothing to disclose.

References

1. N. G. Congdon, D. S. Friedman, and T. Lietman, "Important causes of visual impairment in the world today," J. Am. Med. Assoc. 290, 2057–2060 (2003). http://dx.doi.org/10.1001/jama.290.15.2057

2. C. A. McCarty, "Uncorrected refractive error," Br. J. Ophthalmol. 90, 521–522 (2006). http://dx.doi.org/10.1136/bjo.2006.090233

3. L. Pizzarello et al., "Vision 2020: the right to sight: a global initiative to eliminate avoidable blindness," Arch. Ophthalmol. 122, 615–620 (2004). http://dx.doi.org/10.1001/archopht.122.4.615

4. I. G. Morgan, K. Ohno-Matsui, and S.-M. Saw, "Myopia," Lancet 379, 1739–1748 (2012). http://dx.doi.org/10.1016/S0140-6736(12)60272-4

5. D. B. Elliott, Clinical Procedures in Primary Eye Care, Elsevier Health Sciences, Philadelphia, Pennsylvania (2013).

6. I. C. J. Wood, "A review of autorefractors," Eye 1, 529–535 (1987). http://dx.doi.org/10.1038/eye.1987.80

7. L. N. Thibos et al., "Accuracy and precision of objective refraction from wavefront aberrations," J. Vision 4(4), 9 (2004). http://dx.doi.org/10.1167/4.4.9

8. J. D. Marsack, L. N. Thibos, and R. A. Applegate, "Metrics of optical quality derived from wave aberrations predict visual performance," J. Vision 4(4), 322–328 (2004). http://dx.doi.org/10.1167/4.4.8

9. D. R. Williams, "Sharpness metric for vision quality," U.S. Patent No. 7077522 B2 (2006).

10. N. Maeda, "Clinical applications of wavefront aberrometry—a review," Clin. Exp. Ophthalmol. 37, 118–129 (2009). http://dx.doi.org/10.1111/ceo.2009.37.issue-1

11. J. Porter et al., "Monochromatic aberrations of the human eye in a large population," J. Opt. Soc. Am. A 18, 1793 (2001). http://dx.doi.org/10.1364/JOSAA.18.001793

12. L. A. Carvalho, "A simple and effective algorithm for detection of arbitrary Hartmann–Shack patterns," J. Biomed. Inf. 37, 1–9 (2004). http://dx.doi.org/10.1016/j.jbi.2003.10.002

13. M. Lombardo and G. Lombardo, "New methods and techniques for sensing the wave aberrations of human eyes," Clin. Exp. Optom. 92, 176–186 (2009). http://dx.doi.org/10.1111/cxo.2009.92.issue-3

14. R. Voelkel, "Wafer-scale micro-optics fabrication," Adv. Opt. Technol. 1(3), 135–150 (2012). http://dx.doi.org/10.1515/aot-2012-0013

15. M. Lombardo et al., "Adaptive optics technology for high-resolution retinal imaging," Sensors 13, 334–366 (2012). http://dx.doi.org/10.3390/s130100334

16. N. H. Salama et al., "Wavefront sensor using the Talbot effect," Opt. Laser Technol. 31, 269–272 (1999). http://dx.doi.org/10.1016/S0030-3992(99)00053-5

17. Y. Liu et al., "Z-View diffractive wavefront sensor: principle and applications," Proc. SPIE 6018, 601809 (2005). http://dx.doi.org/10.1117/12.669248

18. C. Canovas and E. N. Ribak, "Comparison of Hartmann analysis methods," Appl. Opt. 46, 1830–1835 (2007). http://dx.doi.org/10.1364/AO.46.001830

19. Y. Carmon and E. N. Ribak, "Phase retrieval by demodulation of a Hartmann–Shack sensor," Opt. Commun. 215, 285–288 (2003). http://dx.doi.org/10.1016/S0030-4018(02)02254-X

20. D. Podanchuk et al., "Bottlenecks of the wavefront sensor based on the Talbot effect," Appl. Opt. 53, B223 (2014). http://dx.doi.org/10.1364/AO.53.00B223

21. E. J. Sarver, J. Schwiegerling, and R. A. Applegate, "Extracting wavefront error from Shack–Hartmann images using spatial demodulation," J. Refract. Surg. 22, 949–953 (2006).

22. J. Ma, J. B. Son, and J. D. Hazle, "An improved region growing algorithm for phase correction in MRI," Magn. Reson. Med. 76, 519–529 (2016). http://dx.doi.org/10.1002/mrm.v76.2

23. C. W. Chen and H. A. Zebker, "Network approaches to two-dimensional phase unwrapping: intractability and two new algorithms," J. Opt. Soc. Am. A 17, 401 (2000). http://dx.doi.org/10.1364/JOSAA.17.000401

24. V. Akondi et al., "Phase unwrapping with a virtual Hartmann–Shack wavefront sensor," Opt. Express 23, 25425 (2015). http://dx.doi.org/10.1364/OE.23.025425

25. H. Y. H. Huang et al., "Path-independent phase unwrapping using phase gradient and total-variation (TV) denoising," Opt. Express 20, 14075 (2012). http://dx.doi.org/10.1364/OE.20.014075

26. Z. Cheng et al., "Practical phase unwrapping of interferometric fringes based on unscented Kalman filter technique," Opt. Express 23, 32337 (2015). http://dx.doi.org/10.1364/OE.23.032337

27. V. Katkovnik and J. Bioucas-Dias, "Wavefront reconstruction in phase-shifting interferometry via sparse coding of amplitude and absolute phase," J. Opt. Soc. Am. A 31, 1801 (2014). http://dx.doi.org/10.1364/JOSAA.31.001801

28. A. Talmi and E. N. Ribak, "Direct demodulation of Hartmann–Shack patterns," J. Opt. Soc. Am. A 21, 632 (2004). http://dx.doi.org/10.1364/JOSAA.21.000632

29. L. Huang et al., "Comparison of two-dimensional integration methods for shape reconstruction from gradient data," Opt. Lasers Eng. 64, 1–11 (2015). http://dx.doi.org/10.1016/j.optlaseng.2014.07.002

30. A. Talmi and E. N. Ribak, "Wavefront reconstruction from its gradients," J. Opt. Soc. Am. A 23, 288 (2006). http://dx.doi.org/10.1364/JOSAA.23.000288

31. Y. Carmon and E. N. Ribak, "Fast Fourier demodulation," Appl. Phys. Lett. 84, 4656–4657 (2004). http://dx.doi.org/10.1063/1.1759770

32. K. Dillon, "Bilinear wavefront transformation," J. Opt. Soc. Am. A 26(8), 1839–1846 (2009). http://dx.doi.org/10.1364/JOSAA.26.001839

Biography

Keith Dillon is cofounder of Formulens, LLC, which applies advanced numerical methods to optical system optimization. He received his PhD degree from the University of California, San Diego, his MS degree from Rice University, and his BS from Tulane University. His research interests include computational imaging and vision, particularly neuroimaging, and entrepreneurship. He is also currently a postdoctoral fellow at Tulane University.

© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2016/$25.00

Keith Dillon, "Fast and robust estimation of ophthalmic wavefront aberrations," Journal of Biomedical Optics 21(12), 121511 (7 December 2016). https://doi.org/10.1117/1.JBO.21.12.121511
KEYWORDS: Sensors; Wavefronts; Biological research; Fourier transforms; Wavefront aberrations; Wavefront sensors; Image processing