Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry
Phillip Bedggood, Andrew Metha
Abstract
Recently, many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. The algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm provides range and robustness comparable to those of the more complicated algorithms, while keeping processing time minimal enough to afford real-time analysis.

1. Introduction

Hartmann-Shack aberrometry is one of the most common methods used to characterize the wavefront aberrations of an optical system,1 most especially the aberrations of the human eye.2 In its most common implementation, an array of lenslets is used to form an array of images of a point source that has passed through some optical media. Each subimage, or “spot,” is deviated away from the axis of its lenslet by an amount proportional to the average wavefront slope across the lenslet. Measuring these deviations across the pupil enables the wavefront shape to be reconstructed and/or an appropriate adaptive optics correction to be determined.1

Locating the position of each spot is relatively trivial using standard image processing techniques. The more challenging task is to ensure that each spot is assigned to the correct lenslet, a process referred to as “sorting” the Hartmann-Shack pattern. The easiest way to accomplish this is to assume that each spot image is formed on the detector plane within the bounds of its lenslet. The problem is then reduced to a series of small search windows of fixed location and size. We refer to this as the “conventional” algorithm. With this algorithm, when the local wavefront slope is too steep, a spot will appear outside the bounds of its lenslet and therefore be ascribed to the wrong lenslet. This causes errors in the wavefront fit and in any adaptive optics correction.
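To make the fixed-window idea concrete, the following is a minimal MATLAB sketch of one way the conventional sort could be implemented. It is an illustration only: the variable names, the 10-pixel pitch, the 17-lenslet array size, and the list of measured centroids ("centroids", an N-by-2 matrix of [x y] positions in pixels) are assumptions, not parameters taken from any particular system.

```matlab
% Sketch of the "conventional" sort: each lenslet keeps only a centroid
% that falls inside its own fixed search window.
pitch = 10;                        % lenslet pitch in pixels (assumed)
nLens = 17;                        % lenslets per side (assumed)
dx = nan(nLens); dy = nan(nLens);  % spot displacement per lenslet
for ix = 1:nLens
    for iy = 1:nLens
        x0 = (ix-1)*pitch;  y0 = (iy-1)*pitch;   % window under this lenslet
        in = centroids(:,1) >= x0 & centroids(:,1) < x0+pitch & ...
             centroids(:,2) >= y0 & centroids(:,2) < y0+pitch;
        if nnz(in) == 1                          % exactly one spot in window
            dx(iy,ix) = centroids(in,1) - (x0 + pitch/2);
            dy(iy,ix) = centroids(in,2) - (y0 + pitch/2);
        end        % else: spot missing, or strayed outside its lenslet
    end
end
```

When the local slope pushes a spot across a window boundary, this scheme either finds no spot or the wrong one, which is precisely the failure mode the improved algorithms below are designed to avoid.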

The range of this conventional algorithm is sufficient for the majority of normal human eyes when measuring aberrations close to the optical axis. However, the algorithm can fail when measurements are sufficiently off-axis, when refractive error is moderate to high, or when pathology is present that distorts the refractive components of the eye.

Several hardware changes to Hartmann-Shack systems have been proposed to circumvent the limitations of the conventional approach. Larger magnification of the pupil can be used to reduce wavefront slope and hence spot deviation, but this requires a wider detector.2 Lenslet focal length can be shortened to again provide less deviation at the detector plane, but this decreases accuracy due to pixelation effects.2 Lenslet size can be made larger, but at the expense of reduced lateral spatial resolution of the wavefront.2 Yoon suggested a rapidly translating plate that could block certain lenslets in alternating frames, although this sacrifices temporal resolution.3 Lindlein and Pfund suggested an array of astigmatic lenslets with different orientations to uniquely identify each spot in a local area.4 Ares described the use of an array of astigmatic lenslets together with a software algorithm that was able to trace the resultant line foci outside the bounds of the conventional search area.5 For aberrometry on the human eye, appropriate trial lenses are often put into place. This can inject unwanted reflections and is time consuming in that a refraction must be performed first; it also does not help with imaging the peripheral fundus, where refraction changes markedly,6 or with higher order effects such as those in patients with corneal disease.

Software-based approaches can also increase the range, while avoiding alterations to existing systems and minimizing the cost of system construction. For example, Pfund described a sorting algorithm that was inspired by a common approach used in unwrapping phase images in interferometry.7 Groening8 and Lundström and Unsbo9 described more complex algorithms in which a spline surface was fit to central spots and then iteratively extrapolated to predict the locations of subsequent spots. Leroux and Dainty used a similar approach but fit Zernike polynomials to the spot displacement data instead of a spline, producing a smoothing effect that suppresses noise.10 Lee11 and Smith and Greivenkamp12 each described algorithms in which spots are located in a predetermined order in such a way as to reduce ambiguity as to lenslet-spot correspondence.

These software methods generally reported dramatic improvements in the range of aberrometry, but it is difficult to compare algorithms directly due to differences in system parameters, aberrations considered, and criteria for failure. In particular, no range data at all were reported for the Smith and Greivenkamp12 algorithm. The major aim of this review is to facilitate the direct comparison of several of these algorithms by providing further range data obtained under consistent conditions. We also explore the robustness of the considered algorithms in the context of ocular aberrometry by considering the effects of poorly sensed spots, decentration of the pupil (in the absence of a pupil monitor), and decentration of the input beam.

2. Methods—The Algorithms

Three of the preceding algorithms were selected to reflect the most modern strategies that provide large increases in range over the conventional approach, and to cover a range of complexity in terms of the processing required.9, 10, 12 For each algorithm, it is assumed that the spot positions are already known via some image processing method, and that the only remaining problem is the assignment of the spots to their corresponding lenslets. Figure 1 depicts the order in which spots are located for each algorithm. For a more thorough treatment of a given algorithm, the reader should consult the original paper.

Fig. 1

Order in which lenslets of a square lenslet array are unwrapped for each of the algorithms explored here. The zero entries indicate the initial location of central spots before the main part of the algorithm takes effect. Subsequent numbers indicate iterations of each algorithm. The quoted terms are used to reference each algorithm in the text.


It is noteworthy that each of the algorithms explored here approaches the spot allocation process in a generally similar way. An initial assumption is made in each case that a small number of spots nearest to the pupil center correspond to their directly overlying lenslets. This is a valid assumption since aberrations are smallest at the pupil center. From this anchor point, each algorithm expands iteratively outward. On a given iteration only spots that are adjacent to previously allocated spots are sorted, which enables the algorithms to proceed with minimum uncertainty since the wavefront changes minimally from one lenslet to the next.

2.1. Lundström and Unsbo9—“B-Spline”

This algorithm assumes that the central nine spots of the Hartmann pattern correspond to their overlying lenslets. A B-spline function is fit to the spot displacements and extrapolated to predict the locations of surrounding spots that have not yet been ascribed a lenslet. This is followed by a new B-spline fit and further gradual extrapolation, until all spots have been sorted. We refer to this as the “B-spline” algorithm. The B-spline algorithm is the most general in its processing—it is therefore predicted to offer the greatest range, but the slowest processing time. Upon request, the authors generously supplied us with a MATLAB implementation of their algorithm, which was used to generate data here.

2.2. Leroux and Dainty10—“Zernike”

This algorithm follows an approach similar to that of Lundström and Unsbo.9 In place of a B-spline fit, Zernike polynomials are fit to the actual wavefront data. A second-order fit is employed for the central nine spots, and a fourth-order fit beyond that. The smoothing effect of the Zernike fit was reported to improve performance in the presence of noisy measurements and/or missing spots. We refer to this as the “Zernike” algorithm. The code for this algorithm was kindly made available on the author's website, and was used to generate results here after making some minor changes to match the parameters of our chosen lenslet array.

2.3. Smith and Greivenkamp12—“Spiral”

This algorithm was originally published using a hexagonal lenslet array, but is equally valid for the square lenslet array considered here. The algorithm initially assigns the three spots closest to the pupil center to their directly overlying lenslets. Following this, the algorithm selects each new lenslet one at a time in a predetermined order that spirals outward from the pupil center and matches the geometry of the lenslet array. The location of the spot for each new lenslet is predicted using the spot locations of the closest three lenslets that have already been sorted. The closest actual spot that matches this prediction is then assigned to the lenslet, as long as it is within some maximum distance from the prediction (e.g., one lenslet diameter). The geometry of an outward spiral from the pupil center ensures that changes in wavefront slope between subsequent lenslets are kept as gradual as possible, enabling the algorithm to correctly sort spots even in the presence of high levels of aberration. We refer to this as the “spiral” algorithm.

We adopted an approach in our aberrometry software virtually identical to that suggested by Smith and Greivenkamp; MATLAB code can be downloaded from our website at http://www.optometry.unimelb.edu.au/research/biophotonics.html. Our implementation of this algorithm is simplified further in that only the closest two spots are used to predict the location of each new spot in the series. The use of two spots represents the maximum computational simplicity for any solution in which previously located spots are used to predict subsequent spot locations. The other difference in our implementation is that when no spot is found for a given lenslet, instead of recording no data we record the predicted spot location as a “dummy” entry that can be used to make future spot predictions. Such entries enable the algorithm to robustly skip over poorly sensed spots within the pupil (e.g., due to media opacities or low signal strength), and also aid in navigating the circular edge of the pupil with a square-shaped spiral.
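The following is a minimal MATLAB sketch of the two-neighbor prediction and “dummy” bookkeeping just described. It is not the implementation available from our website: the particular prediction rule used here (the nominal lenslet center plus the mean displacement of the two nearest already-sorted lenslets), the assumption that "nominal" already holds the lenslet centers in spiral order, and the variables "found" (measured centroids) and "maxDist" (rejection radius, e.g., one lenslet pitch in pixels) are simplifying assumptions for illustration only.

```matlab
% Sketch of spiral sorting with dummy entries.
% nominal : K-by-2 nominal lenslet centers (pixels), in spiral order
% found   : M-by-2 measured spot centroids (pixels)
pos = nan(size(nominal));               % assigned spot (or dummy) per lenslet
for k = 1:3                             % seed: three lenslets nearest the pupil center
    [~, idx] = min(sum((found - nominal(k,:)).^2, 2));
    pos(k,:) = found(idx,:);
end
for k = 4:size(nominal,1)
    d = pos(1:k-1,:) - nominal(1:k-1,:);                    % known displacements
    [~, nn] = sort(sum((nominal(1:k-1,:) - nominal(k,:)).^2, 2));
    pred = nominal(k,:) + mean(d(nn(1:2),:), 1);            % predicted spot position
    [dist2, idx] = min(sum((found - pred).^2, 2));
    if sqrt(dist2) <= maxDist
        pos(k,:) = found(idx,:);        % accept the closest measured spot
    else
        pos(k,:) = pred;                % record a "dummy" entry and continue
    end
end
```

Because dummy entries feed into later predictions exactly as real spots do, a missing spot does not stall the spiral or corrupt the sorting of its neighbors.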

2.4. A Note on Fourier-Based Algorithms

The preceding approaches operate in the spatial domain, directly on the Hartmann-Shack images. The strong periodicity of Hartmann-Shack images also makes them amenable to Fourier analysis. For example, the first harmonic in the x and y directions can be isolated and shifted to the origin, at which point the inverse Fourier transform is calculated. The phase of the resulting complex array, when scaled appropriately, gives the wavefront slope across the image.13, 14 However, this slope data is “wrapped” modulo 2π and must be “unwrapped.” In a manner highly analogous to the spot sorting problem already described, this can be done by stepping in a fast, simple manner through each data point or by adopting more advanced approaches that begin in the pupil center and work outward.15 The core problem therefore remains the same as for the other approaches, and so we do not specifically investigate Fourier-based methods here. Note that Fourier-based methods become more desirable when the array size and number of spots are large (due to savings in processing time), and when there is significant noise or a large number of poorly sensed spots.13
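A minimal MATLAB sketch of first-harmonic demodulation for the x direction is given below, assuming an image "img" and a spot period "pitchPx" in pixels; the y direction is handled analogously, and the scaling from phase to physical slope, as well as any apodization of the extraction window, is omitted.

```matlab
% Demodulation sketch: isolate the first x harmonic, recenter it on DC,
% and recover the wrapped slope map from the phase of the inverse FFT.
F  = fftshift(fft2(img));                    % spectrum with DC at the center
[ny, nx] = size(img);
cx = floor(nx/2) + 1;  cy = floor(ny/2) + 1; % DC position after fftshift
kx = round(nx / pitchPx);                    % offset of the first x harmonic
hw = floor(kx/2);                            % half-width of extraction window
Fx = zeros(ny, nx);
Fx(cy-hw:cy+hw, cx-hw:cx+hw) = F(cy-hw:cy+hw, cx+kx-hw:cx+kx+hw);
phix = angle(ifft2(ifftshift(Fx)));          % wrapped x-slope map (modulo 2*pi)
sx   = unwrap(phix, [], 2);                  % simple row-by-row unwrapping
```

The final unwrapping step shown here is the naive row-by-row approach; the more robust center-outward strategies cited above replace that single line.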

3. Methods—Analysis

To initially compare each of the preceding algorithms, custom MATLAB code was used to simulate double-pass Hartmann-Shack images obtained from a human eye. Monochromatic aberrations were simulated at 830 nm using a phase plate located in the pupil plane, with the retina located 20.2 mm behind this plane. The retinal space refractive index was 1.33. Fourier optics was used to propagate a 1.0-mm-diam entry beam to form a realistically blurred point source on the retina. Light from a point on this retinal object was then propagated back out of the “eye” with an exit pupil diameter of 6.0 mm. The pupil plane wavefront was sectioned into 0.4-mm squares to simulate a 0.4-mm-pitch square lenslet array. For each section, the point spread function was computed for a lenslet with a 24.0-mm focal length, and convolved with the retinal spot to simulate the double-pass image of the spot on the detector.
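The per-lenslet step of this simulation can be sketched in MATLAB as follows, under the Fraunhofer (far-field) approximation. Physical sampling, wavelength scaling, and the 24.0-mm focal length are deliberately omitted, and the inputs "subField" (the complex pupil field over one 0.4-mm lenslet section) and "retinalSpot" (the blurred retinal source) are assumed to come from the earlier propagation steps.

```matlab
% Sketch of forming one double-pass lenslet spot: lenslet PSF from the
% sub-aperture field, then convolution with the simulated retinal spot.
pad = 4;                                     % zero-padding factor for FFT sampling
n   = size(subField, 1);
E   = zeros(pad*n);                          % embed the lenslet sub-aperture field
E(1:n, 1:n) = subField;
psf = abs(fftshift(fft2(E))).^2;             % focal-plane intensity (far-field PSF)
psf = psf / sum(psf(:));                     % normalize energy
spotImg = conv2(retinalSpot, psf, 'same');   % double-pass spot image on the detector
```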

Spot centroids were located by standard image processing functions in MATLAB in two steps. First, a binary image was generated by thresholding at 20% of the maximum intensity, and the MATLAB function “bwlabel” was used to label the resulting binary spots. Second, these labels were supplied as arguments to the MATLAB “regionprops” function, along with the original gray-scale image, to determine the center of mass of each spot.
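A minimal sketch of this two-step centroiding is shown below (it requires the Image Processing Toolbox, and assumes the gray-scale frame is held in a variable "img"); the exact thresholding and clean-up used in practice may differ.

```matlab
% Two-step centroiding: threshold and label spots, then take the
% intensity-weighted center of mass of each labeled region.
bw    = img > 0.2 * max(img(:));                 % 20% intensity threshold
L     = bwlabel(bw);                             % label connected binary spots
stats = regionprops(L, img, 'WeightedCentroid'); % gray-scale center of mass
centroids = cat(1, stats.WeightedCentroid);      % N-by-2 list of [x y] positions
```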

To quantify the range for a given aberration, the aberration was varied in magnitude with a trial-and-error approach until the cut-off point was reached (to within 0.05 μm) at which spots began to be incorrectly assigned to a lenslet. Only negative Zernike coefficients were considered—e.g., the results for defocus represent a myopic eye. Negative coefficients were chosen because the range with the improved algorithms is much larger for hyperopic defocus; since spots do not overlap in that case, it is actually the size of the detector array that limits the range.
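One way to realize such a trial-and-error search to the stated 0.05-μm precision is a simple bisection, sketched below. Here "sortsCorrectly" is a hypothetical wrapper that simulates the spot image at coefficient magnitude c and returns true if every spot is assigned to its correct lenslet; the upper search bound is an arbitrary assumption.

```matlab
% Bisection sketch for the cut-off coefficient (micrometers).
lo = 0;  hi = 40;                   % search bounds (assumed)
while hi - lo > 0.05
    c = (lo + hi) / 2;
    if sortsCorrectly(c)            % hypothetical: simulate at magnitude c and sort
        lo = c;                     % all spots correctly assigned; try larger
    else
        hi = c;                     % a spot was misassigned; try smaller
    end
end
rangeUm = lo;                       % largest magnitude sorted without error
```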

As well as determining range for a well-centered eye, we modeled the effects on the achievable range of some common sources of errors in ocular aberrometry. First, without a separate pupil monitor, there may be as much as a 0.2-mm error in lateral pupil alignment for our 0.4-mm-pitch lenslet array; we considered the effects of a 0.2-mm displacement on the range. Second, the incident beam is usually displaced from the optical axis in ocular aberrometry to avoid the corneal reflection. This causes the least-displaced Hartmann-Shack spot to lie away from the pupil center, which can introduce errors if not taken into account.10 To simulate this effect we modeled a 1.2-mm error in the deviation of the beam from the optical axis. Finally, when the SNR is low, some spots may not be adequately detected. We randomly excluded 10 of the 177 spots in the pupil from the analysis and measured the range achievable (the same 10 spots were excluded for each analysis).

Following the computational modeling just described, we employed a real adaptive optics system to offer a more “real-world” test of each algorithm. The system consisted of a Hartmann-Shack wavefront sensor with parameters identical to those modeled above, in combination with a Mirao 52-d deformable mirror from Imagine Eyes. A model eye was used, consisting of a +20 D lens positioned 3 mm in front of an adjustable aperture (the pupil). A piece of white paper acted as the retina.

To sample the large number of wavefront shapes achievable with the deformable mirror, we systematically applied random voltage signals to the 52 actuators. The random commands were generated on the range –1 to +1 (the maximum signal possible to send to the mirror) and then multiplied by a scaling factor. Scaling factors from 0.05 to 0.50 were used, in increments of 0.05. This means that in the highest amplitude condition it was possible for neighboring actuators to have a difference in voltage of 1.0. For each scaling factor, 100 random sets were generated and applied to the mirror actuators, and the resulting Hartmann-Shack image was recorded. We therefore recorded 1000 separate images resulting from the mirror undergoing various shapes and amplitudes of deformation. Each image was processed in the same way as for the preceding modeling (a higher threshold was used due to the presence of noise in the real images) and analyzed with each of the algorithms. In the case of the Zernike algorithm, it is critical to know the location of the input beam; to measure this we placed a white card in the pupil plane and estimated the center of the ∼1.2-mm-diam spot that was produced on the wavefront sensor CCD.
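This command-generation loop can be sketched in MATLAB as follows; "applyMirror" and "grabFrame" are hypothetical placeholders for the deformable mirror and camera interfaces, which are hardware specific.

```matlab
% Sketch: 10 amplitude conditions x 100 random command sets = 1000 frames.
nAct   = 52;                                     % actuator count of the mirror
scales = (1:10) * 0.05;                          % scaling factors 0.05 to 0.50
frames = cell(numel(scales), 100);
for i = 1:numel(scales)
    for trial = 1:100
        v = scales(i) * (2*rand(nAct,1) - 1);    % uniform commands on [-s, +s]
        applyMirror(v);                          % hypothetical mirror interface
        frames{i, trial} = grabFrame();          % hypothetical camera grab
    end
end
```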

Since in this real-system case we did not have definitive knowledge of spot-lenslet correspondence, we relied on the internal error checking of each algorithm to reject spots that were probably not correctly assigned. This is a possible source of error, although each of the algorithms used rejects spots in a similar and generally sensible way (primarily by comparing predicted positions to found positions and rejecting those that are greater than a certain number of pixels apart). Relying on this internal error checking, we arbitrarily defined failure as 5 or more of the ∼280 spots across the 7.5-mm pupil being missed. This criterion is slightly more lax than the single-spot criterion used in the preceding computational modeling, but it seemed necessary to account for occasional poorly sensed spots.

4. Results and Discussion

The results for the computational modeling are graphed in Fig. 2 under the various conditions already described for defocus, horizontal-vertical astigmatism, vertical coma, and spherical aberration. Data for the various aberration types were stacked to aid visualization. All units are normalized Zernike coefficients in micrometers, in accordance with reporting standards for vision science,16 and were fit over a 6.0-mm pupil. Coefficients are all negative in sign (e.g., the defocus term represents a myopic eye).

Fig. 2

Range for each Hartmann-Shack sorting algorithm for various types of aberration, expressed in units of (negative) normalized Zernike coefficients for a 6.0-mm pupil. Astigmatism is horizontal-vertical and coma is vertical. In the “decentered pupil” case, there was a 0.2-mm error in assessment of the pupil center; in the “decentered beam” case there was an uncompensated 1.2-mm decentration of the input beam to the eye; in the “spots missing” case there were 10 randomly chosen spots missing from across the pupil.


For the well-centered “normal” configuration, the ranges of each iterative algorithm were greatly improved over the conventional algorithm and were generally comparable to each other. As predicted, the B-spline algorithm showed the greatest range. Up to 16.6 D of defocus and 32.9 D of astigmatism could be accurately measured with this algorithm and our chosen lenslet array. Compared to the B-spline algorithm, the spiral algorithm showed identical range for defocus and coma, but 15 to 17% less range for astigmatism and spherical aberration. The Zernike algorithm showed equivalent range for coma and spherical aberration, but 16 to 18% less range for defocus and astigmatism, again compared to the B-spline algorithm.

The robustness of each algorithm to common sources of error is also explored in Fig. 2. We can see that none of the algorithms suffered a significant decrease in range when random spots were missing in the pupil; despite their serial nature, no downstream errors arose from the poorly sensed spots. This is by virtue of the large number of spots considered when making each prediction in the B-spline and Zernike algorithms, and of the “dummy” grid used by the spiral algorithm. In fact, the B-spline and spiral algorithms were also highly robust in the presence of the other error types considered; the decrease in range was no more than 6 to 10% across all of the error conditions for these two algorithms. The Zernike algorithm was not robust against decentrations in the estimated position of either the pupil or the input beam; when using this method it is therefore advisable to decenter the input beam with a calibrated scale,10 and possibly to employ a separate pupil monitor if maximal range is desired.

It is of interest to know whether the range limits shown here could be improved on with some other as yet undiscovered sorting algorithm, or whether the obtained limits were a result of physical spot overlap (i.e., limited by image processing difficulties rather than the sorting algorithm). To test this, we repeated the range measurements by using the predicted spot locations from geometric optics rather than relying on the MATLAB centroiding functions. The range for the B-spline algorithm increased by 22 to 42%, depending on the aberration type, confirming that for the isolated aberrations considered here this algorithm surpassed the practical limit set by the overlapping of spots. We can therefore confidently state that any future algorithm would not provide a greater range. For reference, the Zernike algorithm showed minimal improvement in range under the same conditions, and the spiral algorithm showed an improvement only for defocus. This means that as far as a pure sorting task is concerned the B-spline algorithm is superior to the other two algorithms, but that this advantage is diminished under practical conditions due to spot overlap.

It is also important to consider the range results in the context of the maximum levels of each type of aberration that the algorithm must be able to measure. For ocular aberrometry it is desirable to be able to measure as many eyes in the population as possible, both on- and off-axis. For astigmatism, the ranges given here are equivalent to ∼30 D for the improved algorithms (the conversion factor between Zernike coefficient in micrometers and ophthalmic prescription in diopters is ∼0.77 for defocus and ∼1.09 for astigmatism at a pupil size of 6.0 mm). This is far in excess of any case of astigmatism ever reported to our knowledge, even in eyes with severe corneal distortion17 or far away from the optical axis.18 The defocus results for the improved algorithms correspond to ∼13.3 to 16.6 D of refractive error. Even allowing for a ∼2-D variation over the central 40 deg of the visual field,18 this is still sufficient to account for >99.9% of the normal human eye population. Since we have already shown that the improved algorithms operate at or close to the hard limit set by the physical overlap of spots, a larger range would require a lenslet array with a shorter focal length (or some other hardware alteration as discussed in Sec. 1). The range for defocus for each algorithm scales inversely with the focal length of the lenslet array used, because spot displacement is the product of the local wavefront slope and the lenslet focal length. For example, if a lenslet array with half the focal length were used instead, a roughly equivalent spot pattern would result only when twice as much defocus is present: the wavefront slope, and therefore the spot displacement, for pure defocus varies linearly in x and y across the pupil and in proportion to the defocus coefficient, so doubling the coefficient exactly compensates for the halved focal length.
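For reference, the conversion factors quoted above follow from the standard sphero-cylindrical conversions; a brief sketch, assuming the sign conventions of the reporting standard,16 with coefficients c in micrometers and pupil radius r in millimeters, is:

```latex
% Equivalent sphere from defocus, and cylinder magnitude from pure astigmatism:
M = \frac{-4\sqrt{3}\, c_2^{0}}{r^{2}}, \qquad
C \approx \frac{4\sqrt{6}\, \lvert c_2^{\pm 2} \rvert}{r^{2}},
\quad\text{so for } r = 3\ \text{mm:}\quad
\frac{4\sqrt{3}}{3^{2}} \approx 0.77, \qquad
\frac{4\sqrt{6}}{3^{2}} \approx 1.09
\quad \text{(D per } \mu\text{m of coefficient)}.
```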

Spherical aberration is very rarely greater than ∼0.4 μm over a 6.0-mm pupil in the normal population,19 and changes little off-axis,18 although it may reach ∼1.0 μm in eyes suffering from corneal pathology.17 This is well within the range of 2.4 to 2.9 μm shown for the improved algorithms here. Coma is very rarely greater than ∼0.5 μm in the normal population19 and can change by as much as ∼0.6 μm over the central 40 deg of the visual field,18 which is similarly well within the improved ranges of ∼4.6 to 4.8 μm seen here. However, keratoconic eyes have been reported to feature up to ∼3.6 μm of vertical coma,17 which may give any of the algorithms considered here difficulty when combined with other aberration types, due to potential spot overlap. If a keratoconic population of eyes were to be accommodated, a lenslet array with a shorter focal length may be required.

The preceding results quantify the performance of each algorithm for particular aberration types in isolation. We next used a real adaptive optics system with a model eye to simulate a worst-case scenario—randomized, irregular wavefronts composed of many different types of aberration. The amplitude of the random wavefronts was varied systematically by altering the allowed amplitude of the commands applied to the deformable mirror, with 100 different wavefronts generated for each of the 10 amplitude conditions. Figure 3 shows an example Hartmann-Shack image that was generated using the maximum amplitude, and illustrates the attempts of each algorithm to analyze it. Closed symbols denote spot positions located by the image processing routine. Circles indicate spots that a given algorithm judged to be unambiguously assigned to a lenslet. In this particular example, the spiral algorithm did not make any errors, the B-spline algorithm made a small number of errors, and the conventional and Zernike algorithms made a large number of errors.

Fig. 3

Example wavefront from the set of 1000 randomly generated wavefronts. This wavefront was generated with the maximum amplitude considered here. Large circles indicate spots that were correctly located according to the internal error checking of each algorithm. We can see that in this specific example, the spiral algorithm was able to correctly sort the spots; the other algorithms show varying amounts of error.


Figure 4 plots the average rate of failure for each algorithm as a function of the maximum allowed actuator amplitude. The failure criterion was met if 5 or more spots (of the ∼280 in the pupil) were missed. The error bars indicate standard error of the mean.

Fig. 4

Average failure rate for the randomly generated wavefronts. Error bars indicate standard error of the mean. We can see that the spiral and B-spline algorithms were most robust to the irregular aberrations that were generated; the Zernike algorithm was significantly less so, and the conventional algorithm was significantly worse still.


The spiral and B-spline algorithms achieved similar results over much of the amplitude range tested; the spiral algorithm was superior at the highest end of the scale. This is probably because the spiral algorithm is far more local in nature and so is somewhat better suited to handle large wavefront deviations that are not well correlated with deviations in other parts of the pupil (the wavefront measured in Fig. 3 is a good example of this). There is a large difference evident between the Zernike algorithm and the spiral and B-spline algorithms. This is likely because the randomly generated mirror shapes would not often have been well represented by the Zernike expansion. Note that this algorithm still performed significantly better than the conventional algorithm, and that the high-amplitude, irregular aberrations produced here are unlikely to be seen in eyes without significant disease affecting the ocular media.

Finally, we report on the processing time for each algorithm. Absolute processing times, of course, depend highly on the computer hardware used. However, even the relative processing times given here should be taken as a rough approximation, since the MATLAB code for each algorithm has not necessarily been optimized for speed. With that in mind, averaging 10 trials of the well-centered case on an Intel Core i7 2.66-GHz processor with 2 GB of RAM and using a 17×17 lenslet array (i.e., a detector size of 6.8 mm), we obtained processing times of 2.3, 4.2, 28, and 41 ms for the conventional, spiral, Zernike, and B-spline algorithms, respectively. This excluded components of each algorithm that involved the initialization of variables, which would optimally be performed only once in a session. The spiral algorithm was faster than the other improved algorithms, as predicted given the maximally simple calculations involved in predicting each spot location.

However, the magnitude of temporal variations in ocular aberrations for a given retinal point is generally small and well within the range of the conventional algorithm. A fast frame rate could therefore be achieved by running any improved algorithm once on an initial frame and feeding the resulting spot assignments into the conventional algorithm to guide its search on subsequent frames. This approach offers a clear benefit to frame rate because of the unique ability of the conventional algorithm to take advantage of parallel processing and so achieve even greater speed than reported here.

5. Conclusion

The range of aberrometry was similar for the improved algorithms considered here, despite the differences in processing complexity. These algorithms approached the physical limits to the achievable range imposed by the overlap of Hartmann-Shack spots. The results presented here show that each of the improved algorithms is more than sufficient to accurately characterize the vast majority of human eye aberrations, both on- and off-axis, with our chosen lenslet array parameters. The B-spline algorithm produced the greatest range and was highly robust to error, but was an order of magnitude slower than the simpler algorithms. The Zernike algorithm was somewhat faster, but care must be taken to accurately estimate decentration of the pupil and of the input beam, and performance is noticeably reduced for wavefronts that are not well approximated by the Zernike expansion. The spiral algorithm was highly robust to error and to irregular aberration types. It was also 7 to 10 times faster than the other improved algorithms, affording the potential for real-time analysis. Real-time analysis for ocular aberrometry could alternatively be achieved by employing only a single initial iteration of any improved algorithm to inform subsequent iterations of the conventional algorithm, since the vast majority of human eye aberration is static.

Acknowledgments

This work was supported by the Australian Research Council (Discovery Project Grant DP0984649), and the Rowden White Benevolent Bequest.

References

1. R. K. Tyson, Principles of Adaptive Optics, Academic, San Diego, CA (1991).

2. G. Yoon, “Wavefront sensing and diagnostic uses,” in Adaptive Optics for Vision Science, Wiley-Interscience, Hoboken, NJ (2006).

3. G. Yoon, S. Pantanelli, and L. J. Nagy, “Large-dynamic-range Shack-Hartmann wavefront sensor for highly aberrated eyes,” J. Biomed. Opt. 11(3), 030502–030503 (2006). https://doi.org/10.1117/1.2197860

4. N. Lindlein and J. Pfund, “Experimental results for expanding the dynamic range of a Shack-Hartmann sensor using astigmatic microlenses,” Opt. Eng. 41, 529–533 (2002). https://doi.org/10.1117/1.1430724

5. M. Ares, S. Royo, and J. Caum, “Shack-Hartmann sensor based on a cylindrical microlens array,” Opt. Lett. 32(7), 769–771 (2007). https://doi.org/10.1364/OL.32.000769

6. J. Gustafsson, E. Terenius, J. Buchheister, and P. Unsbo, “Peripheral astigmatism in emmetropic eyes,” Ophthal. Physiol. Opt. 21(5), 393–400 (2001). https://doi.org/10.1046/j.1475-1313.2001.00606.x

7. J. Pfund, N. Lindlein, and J. Schwider, “Dynamic range expansion of a Shack-Hartmann sensor by use of a modified unwrapping algorithm,” Opt. Lett. 23(13), 995–997 (1998). https://doi.org/10.1364/OL.23.000995

8. S. Groening, B. Sick, K. Donner, J. Pfund, N. Lindlein, and J. Schwider, “Wave-front reconstruction with a Shack-Hartmann sensor with an iterative spline fitting method,” Appl. Opt. 39, 561–567 (2000). https://doi.org/10.1364/AO.39.000561

9. L. Lundström and P. Unsbo, “Unwrapping Hartmann-Shack images from highly aberrated eyes using an iterative B-spline based extrapolation method,” Optom. Vis. Sci. 81(5), 383–388 (2004). https://doi.org/10.1097/01.opx.0000135086.61760.b7

10. C. Leroux and C. Dainty, “A simple and robust method to extend the dynamic range of an aberrometer,” Opt. Express 17(21), 19055–19061 (2009). https://doi.org/10.1364/OE.17.019055

11. J. Lee, R. V. Shack, and M. R. Descour, “Sorting method to extend the dynamic range of the Shack-Hartmann wave-front sensor,” Appl. Opt. 44(23), 4838–4845 (2005). https://doi.org/10.1364/AO.44.004838

12. D. G. Smith and J. E. Greivenkamp, “Generalized method for sorting Shack-Hartmann spot patterns using local similarity,” Appl. Opt. 47(25), 4548–4554 (2008). https://doi.org/10.1364/AO.47.004548

13. Y. Carmon and E. N. Ribak, “Phase retrieval by demodulation of a Hartmann-Shack sensor,” Opt. Commun. 215, 285–288 (2003). https://doi.org/10.1016/S0030-4018(02)02254-X

14. E. J. Sarver, J. Schwiegerling, and R. A. Applegate, “Extracting wavefront error from Shack-Hartmann images using spatial demodulation,” J. Refract. Surg. 22(9), 949–953 (2006).

15. D. C. Ghiglia and M. D. Pritt, Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software, Wiley-Interscience, New York (1998).

16. L. N. Thibos, R. A. Applegate, J. T. Schwiegerling, and R. Webb, “Standards for reporting the optical aberrations of eyes,” J. Refract. Surg. 18, S652–S660 (2002).

17. S. Pantanelli, S. MacRae, T. M. Jeong, and G. Yoon, “Characterizing the wave aberration in eyes with keratoconus or penetrating keratoplasty using a high-dynamic range wavefront sensor,” Ophthalmology 114(11), 2013–2021 (2007). https://doi.org/10.1016/j.ophtha.2007.01.008

18. A. Mathur, D. A. Atchison, and W. N. Charman, “Myopia and peripheral ocular aberrations,” J. Vis. 9(10), 15.1–12 (2009). https://doi.org/10.1167/9.10.15

19. L. N. Thibos, X. Hong, A. Bradley, and X. Cheng, “Statistical variation of aberration structure and image quality in a normal population of healthy eyes,” J. Opt. Soc. Am. A 19(12), 2329–2348 (2002). https://doi.org/10.1364/JOSAA.19.002329
©(2010) Society of Photo-Optical Instrumentation Engineers (SPIE)
Phillip Bedggood and Andrew Metha "Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry," Journal of Biomedical Optics 15(6), 067004 (1 November 2010). https://doi.org/10.1117/1.3516706