Optical coherence tomography (OCT) provides high-resolution, cross-sectional tomographic images of the human retina and permits direct evaluation of retinal thickness.1 Recent technological developments in spectral-domain OCT (SDOCT) have greatly increased imaging capabilities compared with earlier time-domain technology. SDOCT provides estimates of retinal layer thicknesses across the macula to aid in clinical diagnosis and treatment decisions for a variety of ocular diseases.2–6 Interpretation of these data has been complicated by the variety of platforms designed by commercial SDOCT instrument manufacturers, each with different proprietary software. Previous studies have identified variability in OCT-derived retinal thickness measurements due to differences in segmentation algorithms, reported axial resolution in tissue, scan density options, and the ability to correct for subject fixation.7–13 Additional anatomic factors vary between individual patients, including axial length, refractive focal length, and macular curvature.14 These anatomic variations may affect the accuracy of comparisons of lateral and axial measurements between SDOCT instruments in clinical studies.14 Other studies have addressed measurement differences inherent to individual instruments of the same time-domain OCT (TDOCT) platform.15–17 These TDOCT studies used large sample sizes and built-in retinal segmentation software and showed widespread variation in retinal thickness measurements between instruments, but the differences reported in each study were not consistent.15–17
A model eye eliminates variability caused by anatomic differences between human patients and by potential morphologic changes between imaging sessions due to diurnal fluctuations, vascular changes, head tilt, or subject fixation. In a recent study, a customized model eye with a retinal nerve fiber layer phantom was used to assess thickness differences between SDOCT platforms and individual instruments.18 However, that study used the automated retinal segmentation software of each SDOCT platform, which introduces reproducible thickness differences between platforms because each platform uses different anatomic definitions to identify retinal layer boundaries.7–10 Furthermore, previous studies have not addressed SDOCT measurements of lateral width, which are important for novel SDOCT methods of disease analysis, such as measurement of drusen diameter and geographic atrophy in age-related macular degeneration.6
Accurate interpretation of retinal measurements for the treatment of macular diseases and for clinical research requires consistency and reproducibility between different SDOCT platforms and between instruments from the same platform. Significant differences in the quantitative measurements obtained manually from different SDOCT platforms may support the use of a conversion scale to compare data obtained from different systems. The purpose of this study is to determine the variability of lateral and axial retinal measurements among SDOCT instruments from the same commercial platform and across different systems.
A commercially available Rowe model eye (Rowe Technical Designs, Orange County, California) was selected for SDOCT imaging in this study. The manufacturer’s technical details describe the solid-state retinal tissue phantom as a 4.8-mm-diameter cylinder made of translucent polymethyl methacrylate.19 The retinal tissue phantom has a central depression of 0.9 mm radius and 180 μm central thickness, designed to simulate the natural foveal pit.19 A single model eye was used for all imaging. The model eye was removed and realigned on the same horizontal and vertical axes prior to each scan to reduce error from image tilt between different instruments. Alignment was confirmed by securing the model eye to a bracket attached to each SDOCT instrument and then centering the flat base of the tissue phantom with the 0-deg horizontal axis on the display screen. This process was repeated for every scan obtained with each instrument. Portable instruments were held and centered by hand with the 0-deg horizontal axis on the display screen.
SDOCT Instruments and Imaging Protocols
Eight separate SDOCT instruments, representing four system platforms from three manufacturers, were selected. We used two Spectralis devices (Spectralis™ OCT software version 5.3, Heidelberg Engineering, Carlsbad, California), two Cirrus devices (Cirrus™ HDOCT software version 5.2, Carl Zeiss Meditec, Dublin, California), and four Bioptigen OCT devices: two portable hand-held Envisu devices and two tabletop SDOIS devices (Envisu™ software version 2.0 and SDOIS software version 1.3, Bioptigen Inc., Morrisville, North Carolina).
All systems used superluminescent diode light sources with broad bandwidths centered between 800 and 900 nm, achieving micron-scale axial resolution per pixel. To make fair comparisons between instruments, raster scanning protocols were matched between platforms as closely as permitted by their respective software. The Cirrus platform (840 nm) and both Bioptigen platforms (820 nm) captured raster scans consisting of 128 B-scans with 512 A-scans per B-scan. Due to its software restrictions, the Spectralis platform (870 nm) captured raster scans consisting of 97 B-scans with 512 A-scans per B-scan. To assess reproducibility, 10 raster scans were performed on each instrument. Scans from both Bioptigen platforms were optimized for dispersion mismatch during imaging due to refractive index differences between the Rowe model eye and the average human eye. Cirrus and Spectralis software performed automatic optimization of dispersion during scan acquisition.
SDOCT Measurements and Statistical Analysis
Two graders viewed all SDOCT scans and agreed upon the single B-scan with the minimum central thickness that best approximated the foveal center of the retinal tissue phantom. Images were viewed in each platform’s standard display screen, and no image processing was allowed (e.g., magnification, brightness, contrast, summation, or Gaussian smoothing). Each grader performed measurements on the central B-scan of the 10 raster scans obtained with each SDOCT instrument in a masked and independent fashion. We selected anatomic landmarks on the tissue phantom that could be readily identified and measured in the lateral or axial planes of the central B-scan image. The lateral measurement was performed on the lateral width (LW) of the tissue phantom. Axial measurements were performed on the central foveal thickness (CFT), parafoveal thickness (PFT) at 1 mm to the left of center, and PFT at 1 mm to the right of center. These measurements included the largest dimensions of the tissue phantom in the lateral and axial planes in order to capture as much of the range of error as possible across SDOCT platforms. Figure 1 shows the borders defined for each manual measurement on the different SDOCT platforms. Instruments from the same SDOCT platform had the same version of software and built-in screen calipers for manual measurements. On all platforms, measurement accuracy was limited by pixel resolution, and measurements were automatically converted to microns or millimeters by the built-in software.
Intergrader reproducibility of retinal measurements was assessed with intraclass correlation coefficients (ICC) and 95% confidence intervals (CI). Due to high intergrader agreement, data from both graders were combined to assess intraplatform variability between instruments and interplatform variability between SDOCT systems. Coefficients of variance (COV) were calculated for each instrument and measurement, and instruments were compared with two-tailed t-tests. Intra- and interplatform differences for each measurement were assessed with analysis of variance models and Tukey-Kramer tests. All statistical analysis was performed with SAS statistical modeling software (SAS JMP 10, SAS Institute, Cary, North Carolina), and p values less than 0.05 were considered statistically significant.
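As a concrete illustration of the statistics described above, the per-instrument COV and a two-rater ICC can be sketched in Python with NumPy. This is a minimal sketch: the paper does not state which ICC form was used, so the two-way, absolute-agreement variant, ICC(2,1), shown here is an assumption.

```python
import numpy as np

def coefficient_of_variation(values):
    """COV as a percentage: 100 * sample SD / mean, matching the
    per-instrument COVs reported in Table 2."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

def icc_two_way(grader1, grader2):
    """Two-way, absolute-agreement ICC(2,1) for two raters.
    (The specific ICC variant is an assumption; the paper does not say.)"""
    x = np.column_stack([grader1, grader2]).astype(float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

When the two graders agree perfectly, the ICC is 1; disagreement between graders lowers it toward 0, which is the behavior exploited in Table 1.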
Qualitative image differences were observed between SDOCT platforms (Fig. 1). Spectralis instruments suppressed the most reflections, but signal suppression also complicated layer identification and observer measurements. Cirrus scan images appeared to be more saturated, illustrated by broadening of the hyperreflective bands created by laminations within the tissue phantom. Images from the Bioptigen systems (Envisu and SDOIS) had intermediate signal strength and were similar in appearance to each other.
There was excellent agreement between the two independent graders, with similar means and standard deviations obtained for each measurement (Table 1). There was good agreement for LW measurements (ICC 0.71, CI 0.58 to 0.80). There was excellent agreement for all axial thickness measurements (ICC 0.95 to 0.98 for CFT and PFT measurements). These results showed excellent reproducibility of SDOCT image acquisition and measurement with the model eye.
|Measurement||Grader 1 Mean (SD), μm||Grader 2 Mean (SD), μm||ICC (95% CI)|
|Lateral||5221 (53)||5222 (55)||0.71 (0.58 to 0.80)|
|CFT||180 (12)||180 (12)||0.95 (0.92 to 0.97)|
|1 mm right PFT||300 (19)||299 (19)||0.97 (0.95 to 0.98)|
|1 mm left PFT||299 (19)||298 (19)||0.98 (0.97 to 0.99)|
Note: SD, standard deviation; ICC, intraclass correlation coefficient; CI, confidence interval; CFT, central foveal thickness; PFT, parafoveal thickness.
Intraplatform Reproducibility Between Instruments
The differences between instruments from the same manufacturer and differences between SDOCT platforms are shown in Table 2. Serial measurements on each instrument were tightly grouped; however, average measurements between instruments were significantly different for all SDOCT platforms. For LW measurements, Spectralis had the greatest variance between its two instruments (17-μm difference in mean width) and Bioptigen SDOIS had the least (4-μm difference in mean width). Spectralis also had the greatest single-instrument LW variance and Bioptigen Envisu the least. For CFT measurements, Cirrus had the greatest variance between instruments (9-μm difference in mean CFT) and Bioptigen Envisu had the least (3-μm difference in mean CFT). For PFT measurements, Bioptigen Envisu had the greatest variance between instruments (9-μm difference in mean PFT), whereas Cirrus and Bioptigen SDOIS had the least (2-μm difference in mean PFT each).
Comparison of measurements between instruments with the same platform.
|Instrument||Lateral width Mean (SD) μm||COV||ANOVA p value||CFT Mean (SD) μm||COV||ANOVA p value||1 mm right PFT Mean (SD) μm||COV||ANOVA p value||1 mm left PFT Mean (SD) μm||COV||ANOVA p value|
|Zeiss Cirrus™|
|Instrument 1||5171 (15)||0.286||0.034||187 (3)||1.559||<0.001||307 (4)||1.163||<0.001||316 (3)||1.120||0.037|
|Instrument 2||5180 (14)||0.269||196 (3)||1.640||315 (4)||1.174||314 (3)||1.059|
|All instruments||5176 (14)||191 (3)||311 (4)||315 (3)|
|Heidelberg Spectralis™|
|Instrument 1||5273 (65)||1.223||0.002||184 (3)||1.332||<0.001||312 (3)||0.920||0.004||308 (2)||0.775||<0.001|
|Instrument 2||5290 (69)||1.309||188 (3)||1.513||315 (3)||0.904||312 (3)||0.991|
|All instruments||5282 (67)||186 (3)||314 (3)||310 (3)|
|Bioptigen Envisu™|
|Instrument 1||5215 (12)||0.236||0.002||180 (2)||0.857||<0.001||302 (2)||0.755||<0.001||300 (3)||0.983||<0.001|
|Instrument 2||5230 (15)||0.282||183 (3)||1.536||311 (3)||0.836||305 (3)||0.493|
|All instruments||5222 (14)||181 (2)||306 (3)||303 (2)|
|Bioptigen SDOIS|
|Instrument 1||5209 (26)||0.489||0.042||162 (2)||1.210||0.029||269 (3)||1.013||0.016||268 (3)||1.198||0.029|
|Instrument 2||5205 (29)||0.559||163 (2)||1.648||267 (3)||1.219||273 (3)||1.220|
|All instruments||5207 (27)||162 (2)||268 (3)||270 (3)|
Note: CFT, central foveal thickness; PFT, parafoveal thickness; SD, standard deviation; COV, coefficient of variance; ANOVA, analysis of variance.
Interplatform Reproducibility Between Systems
Results of comparisons between SDOCT platforms are shown in Table 3. All measurements between different SDOCT platforms were significantly different, except for the difference in LW measurements between the two SDOCT platforms from the same manufacturer, Bioptigen SDOIS and Envisu. Mean LW measurement differences ranged between 15 μm (Envisu versus SDOIS, 0.3%) and 106 μm (Cirrus versus Spectralis, 2%) among different SDOCT platforms. Mean axial thickness measurement differences ranged between 5 μm (Cirrus versus Spectralis, 1.1%) and 45 μm (Cirrus versus SDOIS, 17%) among different SDOCT platforms. Differences between instruments from the same platform were significantly smaller than differences between platforms for both lateral and axial measurements, including LW, CFT, and PFT.
Tukey-Kramer test p values for comparison of measurements between different platforms.
|Platform 1||Platform 2||Lateral width||CFT||1 mm right PFT||1 mm left PFT|
Note: CFT, central foveal thickness; PFT, parafoveal thickness.
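The interplatform percent differences quoted above can be reproduced directly from the pooled platform means in Table 2. A minimal sketch follows; the platform-to-row mapping in Table 2 is inferred from the text, and the denominator convention (percentage of the larger mean) is our assumption, since the paper does not state it.

```python
def percent_difference(mean_a_um, mean_b_um):
    """Absolute difference between two platform means, expressed as a
    percentage of the larger mean (denominator choice is an assumption)."""
    return 100.0 * abs(mean_a_um - mean_b_um) / max(mean_a_um, mean_b_um)

# Pooled lateral width means (μm) from Table 2: Cirrus 5176 vs. Spectralis
# 5282 reproduces the 106-μm (2%) difference; Envisu 5222 vs. SDOIS 5207
# reproduces the 15-μm (0.3%) difference.
cirrus_vs_spectralis = percent_difference(5176.0, 5282.0)  # about 2.0%
envisu_vs_sdois = percent_difference(5222.0, 5207.0)       # about 0.3%
```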
Conversion factors were calculated from mean single-platform measurements to allow investigators to translate quantitative data from one SDOCT platform to another. Conversion factors are presented for LW scaling in Table 4 and for axial thickness scaling in Table 5.
Conversion factors for lateral measurements across platforms.
|Lateral scaling||Convert from this platform|
||Zeiss Cirrus™||Heidelberg Spectralis™||Bioptigen Envisu™||Bioptigen SDOIS|
|Convert to this platform: Zeiss Cirrus™||1.000||0.980||0.991||0.994|
Conversion factors for axial measurements across platforms.
|Axial scaling||Convert from this platform|
||Zeiss Cirrus™||Heidelberg Spectralis™||Bioptigen Envisu™||Bioptigen SDOIS|
|Convert to this platform: Zeiss Cirrus™||1.000||1.011||1.037||1.174|
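The lateral conversion factors in Table 4 can be reproduced as ratios of the pooled platform means in Table 2 and then applied to rescale a measurement from one platform to another. The sketch below assumes this ratio-of-means construction, and the platform-to-row mapping in Table 2 is inferred from the text.

```python
# Pooled lateral width means (μm) from the "All instruments" rows of
# Table 2; the platform labels are inferred from the text.
LATERAL_MEAN_UM = {
    "Cirrus": 5176.0,
    "Spectralis": 5282.0,
    "Envisu": 5222.0,
    "SDOIS": 5207.0,
}

def lateral_factor(from_platform, to_platform):
    """Factor that rescales a lateral width measured on `from_platform`
    into the scale of `to_platform` (ratio of pooled means)."""
    return LATERAL_MEAN_UM[to_platform] / LATERAL_MEAN_UM[from_platform]

# Example: a 5000-μm width measured on Spectralis, expressed on the Cirrus scale.
width_on_cirrus_scale = 5000.0 * lateral_factor("Spectralis", "Cirrus")
```

The resulting factors into the Cirrus scale (0.980, 0.991, and 0.994) match Table 4 to three decimal places, supporting the assumed construction.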
This study examined the variability in lateral and axial manual measurements across several commercial SDOCT platforms. Dimensions were measured by hand with each instrument’s caliper tool rather than with the manufacturer’s segmentation program. A single model eye was used to test for variability and to serve as a standardized solid-state target for SDOCT imaging. Under consistent imaging conditions, we found statistically significant differences in all lateral and axial manual measurements, both between instruments from the same manufacturer and between manufacturers, but intraplatform differences between instruments were significantly smaller than interplatform differences. From these results, we generated conversion factors to facilitate the comparison of manual measurements between different SDOCT platforms in future clinical trials and in the daily treatment of macular diseases.
Before the appearance of numerous commercial SDOCT systems, several studies examined errors and variability between instruments of the same platform.15–17 Barkana et al. evaluated several TDOCT instruments and found substantial differences between devices, although few were statistically significant.16 Interestingly, they found that the observed differences were significantly correlated with signal strength. Our findings differ from those of Barkana et al. and others, who reported no statistically significant differences between instruments.15–17 However, those reports evaluated only TDOCT instruments and had higher standard deviations of thickness measurements than recent SDOCT studies, in part due to the inferior pixel resolution of TDOCT systems.7–10
This study is the first to rigorously compare quantitative manual measurements from several commercial platforms using a commercially available model eye. We evaluated two commercial platforms that are commonly used in adult human imaging, clinical research, and randomized clinical trials.2–4 We also chose a commercial hand-held portable platform approved for retinal imaging in pediatric human subjects5,14,20–22 and used in basic animal research.23–25 Furthermore, the largest ongoing randomized trial for age-related macular degeneration (AMD), the NEI-sponsored Age-Related Eye Disease Study 2, exclusively allows the Bioptigen SDOIS platform for its longitudinal, observational ancillary SDOCT study (AREDS2 Ancillary SDOCT Study).6,26 The baseline dataset and measurements for both control and AMD eyes in this study have been made publicly available.6
Several studies have concluded that comparing retinal thickness measurements from instruments of different manufacturers is not advisable in clinical studies.7–10 Determining the true variability in these measurements with a cohort of patients would be biased by errors in lateral and axial scaling. For example, Spectralis machines offer scan parameters based on degrees of visual angle but provide caliper measurements in millimeters. The same visual angle spans a shorter physical distance in an eye with a shorter axial length, yet it would be converted to the same millimeter value as a scan on a longer eye. Axial measurement differences may be caused by variability in the default algorithms for automated segmentation line placement, refractive index correction, or dispersion compensation across different SDOCT platforms. Since these calculations are proprietary components of each platform’s software, it is difficult for third-party investigators to test their separate contributions to measurement variability.
We have also demonstrated statistically significant variability in manual measurements of a single retinal tissue phantom between two different instruments of the same SDOCT platform. Variability between these instruments may result from inherent variability in the optical path length measured at two different time points, variability in the degree of decalibration between instruments that occurs over time with regular use, or measurement variability caused by speckle noise. We attempted to control for decalibration by selecting same-platform instruments with similar frequency of use in daily clinical care. In SDOCT, speckle noise results from interference between densely packed reflectors, reducing contrast between highly scattering structures in tissue.27 However, the averaging methods commonly used by commercial SDOCT platforms were not applicable to the motionless imaging protocol of this study, in which speckle noise was highly correlated across images and instruments. Figure 1 shows acceptably low image noise, and even state-of-the-art denoising algorithms produce some level of image blur,27 permitting us to perform measurements on the unprocessed images shown. Based on the small differences between graders (Table 1) and between same-platform instruments (Table 2), we concluded that speckle noise had a negligible effect on measurement variability.
Measurement differences between platforms were statistically significant; however, the clinical significance of these differences is less clear. With the exception of the Bioptigen SDOIS, the SDOCT systems evaluated in this study had variability that, while statistically significant, was low from a clinical standpoint. Lateral scaling variability was 0.3 to 2% between platforms, which represents a range of 15 to 106 μm in width difference between images (based on nominal 6-mm scans divided by a sampling density of 512 A-scans). Axial measurement variability was 1.1 to 17% between platforms, equivalent to a difference of 5 to 45 μm based on the nominal axial resolution of these SDOCT platforms. Excluding the axial measurements from the Bioptigen SDOIS, which were consistently smaller than those of all other platforms, the mean difference decreased to 8 μm across the other three systems. Low variability between Cirrus, Spectralis, and the portable Envisu system suggested that hand motion or instability of a human operator does not introduce additional error while holding the hand-held probe over the target. These differences may not affect disease management under uniform scanning protocols and manual measurements, given the small number of pixels spanned by the observed differences and the larger errors associated with automated segmentation, sampling density, and fixation variability.7,10–13 However, clinical studies gathering repeated measurements over time to evaluate disease modification may obtain statistically significant differences that remain within the range of instrument variability.
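The pixel-level interpretation above can be made explicit: with a nominal 6-mm scan sampled by 512 A-scans, the lateral pixel pitch is about 11.7 μm, so the observed interplatform width differences span only a few A-scans. A short sketch of the arithmetic, using the nominal scan width and sampling density from the text:

```python
# Nominal lateral sampling from the scan protocol.
scan_width_um = 6000.0   # nominal 6-mm scan width
n_ascans = 512           # A-scans per B-scan
pixel_pitch_um = scan_width_um / n_ascans  # about 11.7 μm per A-scan

# Interplatform mean lateral width differences reported in the Results,
# expressed in units of A-scan spacing.
min_diff_pixels = 15.0 / pixel_pitch_um    # about 1.3 A-scans
max_diff_pixels = 106.0 / pixel_pitch_um   # about 9.0 A-scans
```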
In conclusion, we have shown significantly greater variability across different platforms than between instruments from the same platform, while controlling for the influence of anatomic variations in human imaging and differences created by automated segmentation programs. This report suggests that clinical investigators may need to account for inherent variances in quantitative SDOCT data collected for clinical trials and routine patient follow-up. Standardized conversion factors may improve the accuracy of data collected from different SDOCT platforms. These conversion tools require further validation with larger samples and human imaging studies. We note that optical imaging instruments may perform differently with eyes of different axial length, refraction, and optical scattering. Accurate quantification of such parameters is part of our ongoing research. Robust, precise, and reproducible conversion factors between commercial SDOCT platforms may allow for the use of a greater range of SDOCT systems in clinical studies and can improve the clinical interpretation of statistically significant differences obtained from study results.
U. Chakravarthy et al., “Ranibizumab versus bevacizumab to treat neovascular age-related macular degeneration: one-year findings from the IVAN randomized trial,” Ophthalmology 119(7), 1399–1411 (2012). http://dx.doi.org/10.1016/j.ophtha.2012.04.015
D. F. Martin et al., “Ranibizumab and bevacizumab for treatment of neovascular age-related macular degeneration: two-year results,” Ophthalmology 119(7), 1388–1398 (2012). http://dx.doi.org/10.1016/j.ophtha.2012.03.053
Q. D. Nguyen et al., “Ranibizumab for diabetic macular edema: results from 2 phase III trials: RISE and RIDE,” Ophthalmology 119(4), 789–801 (2012). http://dx.doi.org/10.1016/j.ophtha.2011.12.039
R. S. Maldonado et al., “Spectral-domain optical coherence tomographic assessment of severity of cystoid macular edema in retinopathy of prematurity,” Arch. Ophthalmol. 130(5), 569–578 (2012). http://dx.doi.org/10.1001/archopthalmol.2011.1846
S. Farsiu et al., “Quantitative classification of eyes with and without intermediate age-related macular degeneration utilizing optical coherence tomography,” Ophthalmology 121(1), 162–172 (2014). http://dx.doi.org/10.1016/j.ophtha.2013.07.013
I. C. Han and G. J. Jaffe, “Comparison of spectral- and time-domain optical coherence tomography for retinal thickness measurements in healthy and diseased eyes,” Am. J. Ophthalmol. 147(5), 847–858 (2009). http://dx.doi.org/10.1016/j.ajo.2008.11.019
U. E. Wolf-Schnurrbusch et al., “Macular thickness measurements in healthy eyes using six different optical coherence tomography instruments,” Invest. Ophthalmol. Vis. Sci. 50(7), 3432–3437 (2009). http://dx.doi.org/10.1167/iovs.08-2970
A. C. Sull et al., “Comparison of spectral/Fourier domain optical coherence tomography instruments for assessment of normal macular thickness,” Retina 30(2), 235–245 (2010). http://dx.doi.org/10.1097/IAE.0b013e3181bd2c3b
L. Pierro et al., “Macular thickness interoperator and intraoperator reproducibility in healthy eyes using 7 optical coherence tomography instruments,” Am. J. Ophthalmol. 150(2), 199–204 (2010). http://dx.doi.org/10.1016/j.ajo.2010.03.015
S. R. Sadda et al., “Impact of scanning density on measurements from spectral domain optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 51(2), 1071–1078 (2010). http://dx.doi.org/10.1167/iovs.09-4325
S. Hagen et al., “Reproducibility and comparison of retinal thickness and volume measurements in normal eyes determined with two different Cirrus OCT scanning protocols,” Retina 31(1), 41–47 (2011). http://dx.doi.org/10.1097/IAE.0b013e3181dde71e
R. S. Maldonado et al., “Optimizing hand-held spectral domain optical coherence tomography imaging for neonates, infants, and children,” Invest. Ophthalmol. Vis. Sci. 51(5), 2678–2685 (2010). http://dx.doi.org/10.1167/iovs.09-4403
I. Krebs et al., “Repeatability and reproducibility of retinal thickness measurements by optical coherence tomography in age-related macular degeneration,” Ophthalmology 117(8), 1577–1584 (2010). http://dx.doi.org/10.1016/j.ophtha.2010.04.032
Y. Barkana et al., “Inter-device variability of the Stratus optical coherence tomography,” Am. J. Ophthalmol. 147(2), 260–266 (2009). http://dx.doi.org/10.1016/j.ajo.2008.08.008
L. A. Paunescu et al., “Reproducibility of nerve fiber thickness, macular thickness, and optic nerve head measurements using Stratus OCT,” Invest. Ophthalmol. Vis. Sci. 45(6), 1716–1724 (2004). http://dx.doi.org/10.1167/iovs.03-0514
R. de Kinkelder et al., “Comparison of retinal nerve fiber layer thickness measurements by spectral-domain optical coherence tomography systems using a phantom eye model,” J. Biophotonics 6(4), 314–320 (2013). http://dx.doi.org/10.1002/jbio.201200018
R. J. Zawadzki et al., “Towards building an anatomically correct solid eye model with volumetric representation of retinal morphology,” Proc. SPIE 7550, 75502F (2010). http://dx.doi.org/10.1117/12.842888
R. S. Maldonado et al., “Dynamics of human foveal development after premature birth,” Ophthalmology 118(12), 2315–2325 (2011). http://dx.doi.org/10.1016/j.ophtha.2011.05.028
A. M. Dubis et al., “Evaluation of normal human foveal development using optical coherence tomography and histologic examination,” Arch. Ophthalmol. 130(10), 1291–1300 (2012). http://dx.doi.org/10.1001/archophthalmol.2012.2270
L. Vajzovic et al., “Maturation of the human fovea: correlation of spectral-domain optical coherence tomography findings with histology,” Am. J. Ophthalmol. 154(5), 779–789 (2012). http://dx.doi.org/10.1016/j.ajo.2012.05.004
M. D. Fischer et al., “Noninvasive, in vivo assessment of mouse retinal structure using optical coherence tomography,” PLoS One 4(10), e7507 (2009). http://dx.doi.org/10.1371/journal.pone.0007507
T. J. Bailey et al., “Spectral-domain optical coherence tomography as a noninvasive method to assess damaged and regenerating adult zebrafish retinas,” Invest. Ophthalmol. Vis. Sci. 53(6), 3126–3138 (2012). http://dx.doi.org/10.1167/iovs.11-8895
M. L. Gabriele et al., “Reproducibility of spectral-domain optical coherence tomography total retinal thickness measurements in mice,” Invest. Ophthalmol. Vis. Sci. 51(12), 6519–6523 (2010). http://dx.doi.org/10.1167/iovs.10-5662
J. N. Leuschen et al., “Spectral-domain optical coherence tomography characteristics of intermediate age-related macular degeneration,” Ophthalmology 120(1), 140–150 (2013). http://dx.doi.org/10.1016/j.ophtha.2012.07.004
Francisco A. Folgar, MD, is a clinical associate in ophthalmology and a fellow in vitreoretinal surgery at Duke University.
Eric L. Yuan, BSE, is a graduate of the Pratt School of Engineering at Duke University and a current MD candidate in the Wake Forest University School of Medicine.
Sina Farsiu, PhD, is an assistant professor of ophthalmology and biomedical engineering at Duke University and director of the Vision and Image Processing Laboratory.
Cynthia A. Toth, MD, is a professor of ophthalmology and biomedical engineering at Duke University, director of the Duke Advanced Research in Spectral Domain Optical Coherence Tomography Imaging Laboratory, and director of grading at the Duke Reading Center for ophthalmic imaging in clinical trials.