Lateral and axial measurement differences between spectral-domain optical coherence tomography systems
Francisco A. Folgar, Eric L. Yuan, Sina Farsiu, Cynthia A. Toth
Published 17 January 2014 (Open Access)
Abstract
We assessed the reproducibility of lateral and axial measurements performed with spectral-domain optical coherence tomography (SDOCT) instruments from a single manufacturer and across several manufacturers. One human retina phantom was imaged on two instruments each from four SDOCT platforms: Zeiss Cirrus, Heidelberg Spectralis, Bioptigen SDOIS, and hand-held Bioptigen Envisu. Built-in software calipers were used to perform manual measurements of a fixed lateral width (LW), central foveal thickness (CFT), and parafoveal thickness (PFT) 1 mm from foveal center. Inter- and intraplatform reproducibilities were assessed with analysis of variance and Tukey-Kramer tests. The range of measurements between platforms was 5171 to 5290 μm for mean LW (p<0.001), 162 to 196 μm for mean CFT (p<0.001), and 267 to 316 μm for mean PFT (p<0.001). All SDOCT platforms had significant differences between each other for all measurements, except LW between Bioptigen SDOIS and Envisu (p=0.27). Intraplatform differences were significantly smaller than interplatform differences for LW (p=0.020), CFT (p=0.045), and PFT (p=0.004). Conversion factors were generated for lateral and axial scaling between SDOCT platforms. Lateral and axial manual measurements have greater variance across different SDOCT platforms than between instruments from the same platform. Conversion factors for measurements from different platforms can produce normalized values for patient care and clinical studies.

1. Introduction

Optical coherence tomography (OCT) provides high-resolution, cross-sectional tomographic images of the human retina and permits direct evaluation of retinal thickness [1]. Recent technological developments in spectral-domain OCT (SDOCT) have greatly increased imaging capabilities compared to earlier time-domain technology. SDOCT provides estimates of retinal layer thicknesses across the macula to aid in clinical diagnosis and treatment decisions for a variety of ocular diseases [2–6]. Interpretation of data has been complicated by the variety of platforms designed by commercial SDOCT instrument manufacturers, each with different proprietary software. Previous studies have attributed OCT-derived retinal thickness measurement variability to differences in segmentation algorithms, reported axial resolution in tissue, scan density options, and the ability to correct for subject fixation [7–13]. Additional anatomic factors vary between individual patients, including axial length, refractive focal length, and macular curvature [14]. These anatomic variations may affect the accuracy of comparing lateral and axial measurements between SDOCT instruments in clinical studies [14]. Other studies have addressed measurement differences inherent to individual instruments within the same time-domain OCT (TDOCT) platform [15–17]. These TDOCT studies used large sample sizes and built-in retinal segmentation software and showed widespread variation in retinal thickness measurements between instruments, but the differences reported were not consistent across studies [15–17].

A model eye eliminates variability caused by anatomic differences between human patients and by potential morphologic changes between imaging sessions due to diurnal fluctuations, vascular changes, head tilt, or subject fixation. In a recent study, a customized model eye with a retinal nerve fiber layer phantom was used to assess thickness differences between SDOCT platforms and individual instruments [18]. However, that study relied on the automated retinal segmentation software of each SDOCT platform, which introduces reproducible thickness differences between platforms because each platform uses different anatomic definitions to identify retinal layer boundaries [7–10]. Furthermore, previous studies have not addressed SDOCT measurements of lateral width, which are important for novel SDOCT methods of disease analysis, such as drusen diameter and geographic atrophy in age-related macular degeneration [6].

Accurate interpretation of retinal measurements for the treatment of macular diseases and for clinical research requires consistency and reproducibility between different SDOCT platforms and between instruments from the same platform. Significant differences in the quantitative measurements obtained manually from different SDOCT platforms may support the use of a conversion scale to compare data obtained from different systems. The purpose of this study is to determine the variability of lateral and axial retinal measurements among SDOCT instruments from the same commercial platform and across different systems.

2. Methods

2.1. Model Eye

A commercially available Rowe model eye (Rowe Technical Designs, Orange County, California) was selected for SDOCT imaging in this study. The manufacturer's technical details describe the solid-state retinal tissue phantom as a 4.8-mm-diameter cylinder made of translucent polymethyl methacrylate [19]. The retinal tissue phantom is 300 μm thick in the axial plane and has a central depression of 0.9-mm radius and 180-μm central thickness, designed to simulate the natural foveal pit [19]. A single model eye was used for all imaging. The model eye was removed and realigned on the same horizontal and vertical axes prior to each scan in order to reduce error from image tilt between different instruments. Alignment was confirmed by securing the model eye to a bracket attached to each SDOCT instrument and then centering the flat base of the tissue phantom on the 0-deg horizontal axis of the display screen. This process was repeated for every scan obtained with each instrument. Portable instruments were held and centered by hand with the 0-deg horizontal axis on the display screen.

2.2. SDOCT Instruments and Imaging Protocols

Eight separate SDOCT instruments were selected from three manufacturers and four SDOCT system platforms. We used two Spectralis devices (Spectralis™ OCT software version 5.3, Heidelberg Engineering, Carlsbad, California), two Cirrus devices (Cirrus™ HDOCT software version 5.2, Carl Zeiss Meditec, Dublin, California), and four Bioptigen OCT devices: two portable hand-held Envisu devices and two tabletop SDOIS devices (Envisu™ software version 2.0 and SDOIS software version 1.3, Bioptigen Inc., Morrisville, North Carolina).

All systems used superluminescent diode light sources with broad bandwidths centered between 800 and 900 nm, achieving an axial resolution of 5 μm per pixel. In order to make fair comparisons between instruments, raster scanning protocols were matched between platforms as closely as permitted by their respective software. The Cirrus platform (840 nm) and both Bioptigen platforms (820 nm) captured 6 × 6-mm raster scans consisting of 128 B-scans with 512 A-scans per B-scan. Due to its software restrictions, the Spectralis platform (870 nm) captured 20 deg × 20 deg raster scans (5.9 × 5.9 mm) consisting of 97 B-scans with 512 A-scans per B-scan. To assess reproducibility, 10 raster scans were performed on each instrument. Scans from both Bioptigen platforms were optimized for dispersion mismatch during imaging due to refractive index differences between the Rowe model eye and the average human eye. Cirrus and Spectralis software performed automatic dispersion optimization during scan acquisition.

2.3. SDOCT Measurements and Statistical Analysis

Two graders reviewed all SDOCT scans and agreed upon the single B-scan with the minimum central thickness that best approximated the foveal center of the retinal tissue phantom. Images were viewed in each platform's standard display screen, and image processing (e.g., magnification, brightness, contrast, summation, or Gaussian smoothing) was not allowed. Each grader performed measurements on the central B-scan of 10 raster scans obtained with each SDOCT instrument in a masked and independent fashion. We selected anatomic landmarks on the tissue phantom that could be readily identified and measured in the lateral or axial planes of the central B-scan image. The lateral measurement was performed on the lateral width (LW) of the tissue phantom. Axial measurements were performed on the central foveal thickness (CFT), parafoveal thickness (PFT) at 1 mm to the left of center, and PFT at 1 mm to the right of center. These measurements included the largest dimensions of the tissue phantom in the lateral and axial planes in order to capture as much range of error as possible across SDOCT platforms. Figure 1 shows the borders defined for each manual measurement on different SDOCT platforms. Instruments from the same SDOCT platform had the same version of software and built-in screen calipers to take manual measurements. On all platforms, measurements were limited by pixel resolution and automatically converted to microns or millimeters by the built-in software.

Fig. 1. Model eye measurements were obtained for the fixed lateral width, central foveal thickness, and parafoveal thickness 1 mm to the left and right of center. The B-scan with the widest lateral extent and minimum central thickness of the circular tissue phantom was selected for measurements. Lateral width was defined as the horizontal distance at the base. Axial thickness was defined as the vertical distance from the inner border of the hyperreflective inner retinal surface to the inner border of the hyperreflective base substrate beneath the tissue phantom. Values shown are mean values obtained with each spectral-domain optical coherence tomography platform.


Intergrader reproducibility of retinal measurements was assessed with intraclass correlation coefficients (ICC) and 95% confidence intervals (CI). Due to high intergrader agreement, data from both graders were combined to assess intraplatform variability between instruments and interplatform variability between SDOCT systems. Coefficients of variance (COV) were calculated for each instrument and measurement, and instruments were compared with two-tailed t-tests. Intra- and interplatform differences for each measurement were assessed with analysis of variance models and Tukey-Kramer tests. All statistical analysis was performed with SAS statistical modeling software (SAS JMP 10, SAS Institute, Cary, North Carolina), and p values <0.05 were considered statistically significant.
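The statistical comparisons described above were performed in SAS JMP. As a rough illustration only, the following sketch reproduces the same kind of analysis in Python on hypothetical data: the platform names and the approximate means and SDs follow Table 2, but the simulated values, variable names, and the use of scipy/statsmodels are our own assumptions, not the authors' code.

```python
# Minimal illustrative sketch (not the authors' SAS JMP analysis): coefficient of
# variance, one-way ANOVA, and Tukey-Kramer comparisons on hypothetical CFT data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical central foveal thickness (CFT) values in micrometers, grouped by
# platform; means and SDs loosely follow Table 2, sample sizes are assumed.
cft = {
    "Cirrus": rng.normal(191, 3, 40),
    "Spectralis": rng.normal(186, 3, 40),
    "Envisu": rng.normal(181, 2, 40),
    "SDOIS": rng.normal(162, 2, 40),
}

# Coefficient of variance (COV, %) for each platform: SD / mean x 100.
for name, values in cft.items():
    cov = values.std(ddof=1) / values.mean() * 100
    print(f"{name}: mean = {values.mean():.0f} um, COV = {cov:.3f}%")

# One-way ANOVA across platforms, then Tukey-Kramer post hoc pairwise tests.
f_stat, p_value = f_oneway(*cft.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.3g}")

all_values = np.concatenate(list(cft.values()))
groups = np.repeat(list(cft.keys()), [len(v) for v in cft.values()])
print(pairwise_tukeyhsd(all_values, groups, alpha=0.05))
```

The Tukey-Kramer procedure corresponds to Tukey's honestly significant difference test applied to groups of unequal size, which is what pairwise_tukeyhsd computes.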

3. Results

Qualitative image differences were observed between SDOCT platforms (Fig. 1). Spectralis instruments suppressed the most reflections, but signal suppression also complicated layer identification and observer measurements. Cirrus scan images appeared to be more saturated, illustrated by broadening of the hyperreflective bands created by laminations within the tissue phantom. Images from the Bioptigen systems (Envisu and SDOIS) had intermediate signal strength and were similar in appearance to each other.

3.1. Intergrader Reproducibility

There was excellent agreement between the two independent graders, with similar means and standard deviations obtained for each measurement (Table 1). Agreement was good for LW measurements (ICC 0.71, CI 0.58 to 0.80) and excellent for all axial thickness measurements (ICC ≥ 0.95 for CFT and PFT). These results demonstrated excellent reproducibility of SDOCT image acquisition and measurement with the model eye.

Table 1. Intergrader agreement.

Measurement | Grader 1, mean (SD), μm | Grader 2, mean (SD), μm | ICC (95% CI)
Lateral width | 5221 (53) | 5222 (55) | 0.71 (0.58 to 0.80)
CFT | 180 (12) | 180 (12) | 0.95 (0.92 to 0.97)
1 mm right PFT | 300 (19) | 299 (19) | 0.97 (0.95 to 0.98)
1 mm left PFT | 299 (19) | 298 (19) | 0.98 (0.97 to 0.99)

Note: SD, standard deviation; ICC, intraclass correlation coefficient; CI, confidence interval; CFT, central foveal thickness; PFT, parafoveal thickness.
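For readers who want to reproduce an intergrader ICC of the type reported in Table 1, the following is a minimal sketch assuming the third-party pingouin package; the data frame layout, column names, and simulated grader noise are hypothetical, and the authors' own analysis was performed in SAS JMP.

```python
# Minimal illustrative sketch (not the authors' analysis): intergrader intraclass
# correlation, assuming the third-party pingouin package. All data are simulated.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_scans = 80  # hypothetical: 10 raster scans on each of 8 instruments

# Simulated paired CFT measurements (um) of the same B-scans by two masked graders.
true_cft = rng.normal(180, 12, n_scans)
df = pd.DataFrame({
    "scan": np.tile(np.arange(n_scans), 2),
    "grader": np.repeat(["grader1", "grader2"], n_scans),
    "cft": np.concatenate([true_cft + rng.normal(0, 2, n_scans),
                           true_cft + rng.normal(0, 2, n_scans)]),
})

# Two-way random-effects ICC; pingouin reports several ICC variants in one table.
icc = pg.intraclass_corr(data=df, targets="scan", raters="grader", ratings="cft")
print(icc[["Type", "ICC", "CI95%"]])
```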

3.2. Intraplatform Reproducibility Between Instruments

The differences between instruments from the same manufacturer and the differences between SDOCT platforms are shown in Table 2. Serial measurements on each instrument were tightly grouped; however, mean measurements differed significantly between instruments for all SDOCT platforms. For LW, Spectralis had the greatest variance between its two instruments (17-μm difference in mean width, p=0.002) and Bioptigen SDOIS had the least (4-μm difference, p=0.042); Spectralis also had the greatest single-instrument variance (COV=1.309) and Bioptigen Envisu the least (COV=0.236). For CFT, Cirrus had the greatest variance between instruments (9-μm difference in mean CFT, p<0.001) and Bioptigen Envisu had the least (3-μm difference, p<0.001). For PFT, Bioptigen Envisu had the greatest variance between instruments (9-μm difference in mean PFT, p<0.001), whereas Cirrus and Bioptigen SDOIS had the least (2-μm difference, p=0.037 and p=0.016, respectively).

Table 2. Comparison of measurements between instruments with the same platform.

Zeiss Cirrus™ | Lateral width | CFT | 1 mm right PFT | 1 mm left PFT
Instrument 1, mean (SD) μm | 5171 (15) | 187 (3) | 307 (4) | 316 (3)
Instrument 1, COV | 0.286 | 1.559 | 1.163 | 1.120
Instrument 2, mean (SD) μm | 5180 (14) | 196 (3) | 315 (4) | 314 (3)
Instrument 2, COV | 0.269 | 1.640 | 1.174 | 1.059
All instruments, mean (SD) μm | 5176 (14) | 191 (3) | 311 (4) | 315 (3)
Difference, % | 0.16 | 5.24 | 2.25 | 0.63
ANOVA p value | 0.034 | <0.001 | <0.001 | 0.037

Heidelberg Spectralis™ | Lateral width | CFT | 1 mm right PFT | 1 mm left PFT
Instrument 1, mean (SD) μm | 5273 (65) | 184 (3) | 312 (3) | 308 (2)
Instrument 1, COV | 1.223 | 1.332 | 0.920 | 0.775
Instrument 2, mean (SD) μm | 5290 (69) | 188 (3) | 315 (3) | 312 (3)
Instrument 2, COV | 1.309 | 1.513 | 0.904 | 0.991
All instruments, mean (SD) μm | 5282 (67) | 186 (3) | 314 (3) | 310 (3)
Difference, % | 0.32 | 2.12 | 0.88 | 1.08
ANOVA p value | 0.002 | <0.001 | 0.004 | <0.001

Bioptigen Envisu™ | Lateral width | CFT | 1 mm right PFT | 1 mm left PFT
Instrument 1, mean (SD) μm | 5215 (12) | 180 (2) | 302 (2) | 300 (3)
Instrument 1, COV | 0.236 | 0.857 | 0.755 | 0.983
Instrument 2, mean (SD) μm | 5230 (15) | 183 (3) | 311 (3) | 305 (3)
Instrument 2, COV | 0.282 | 1.536 | 0.836 | 0.493
All instruments, mean (SD) μm | 5222 (14) | 181 (2) | 306 (3) | 303 (2)
Difference, % | 0.27 | 1.79 | 2.12 | 1.65
ANOVA p value | 0.002 | <0.001 | <0.001 | <0.001

Bioptigen SDOIS | Lateral width | CFT | 1 mm right PFT | 1 mm left PFT
Instrument 1, mean (SD) μm | 5209 (26) | 162 (2) | 269 (3) | 268 (3)
Instrument 1, COV | 0.489 | 1.210 | 1.013 | 1.198
Instrument 2, mean (SD) μm | 5205 (29) | 163 (2) | 267 (3) | 273 (3)
Instrument 2, COV | 0.559 | 1.648 | 1.219 | 1.220
All instruments, mean (SD) μm | 5207 (27) | 162 (2) | 268 (3) | 270 (3)
Difference, % | 0.08 | 0.64 | 0.65 | 2.05
ANOVA p value | 0.042 | 0.029 | 0.016 | 0.029

Note: CFT, central foveal thickness; PFT, parafoveal thickness; SD, standard deviation; COV, coefficient of variance; ANOVA, analysis of variance.

3.3. Interplatform Reproducibility Between Systems

Results of comparison between SDOCT platforms are shown in Table 3. All measurements between different SDOCT platforms were significantly different, except for the difference in LW measurements between two SDOCT platforms from the same manufacturer, Bioptigen SDOIS and Envisu (p=0.272). Mean LW measurement differences ranged between 15 μm (Envisu versus SDOIS, 0.3%) and 106 μm (Cirrus versus Spectralis, 2%) among different SDOCT platforms. Mean axial thickness measurement differences ranged between 5 μm (Cirrus versus Spectralis, 1.1%) and 45 μm (Cirrus versus SDOIS, 17%) among different SDOCT platforms. Differences between instruments from the same platform were significantly smaller than between different platforms for lateral and axial measurements, including LW (p=0.020), CFT (p=0.045), and PFT (p=0.004).

Table 3. Tukey-Kramer test p values for comparison of measurements between different platforms.

Platform 1 | Platform 2 | Lateral width | CFT | 1 mm right PFT | 1 mm left PFT
Spectralis™ | Cirrus™ | <0.001 | <0.001 | 0.027 | <0.001
Spectralis™ | SDOIS | <0.001 | <0.001 | <0.001 | <0.001
Spectralis™ | Envisu™ | <0.001 | <0.001 | <0.001 | <0.001
Envisu™ | Cirrus™ | <0.001 | <0.001 | <0.001 | <0.001
SDOIS | Cirrus™ | 0.001 | <0.001 | <0.001 | <0.001
Envisu™ | SDOIS | 0.272 | <0.001 | <0.001 | <0.001

Note: CFT, central foveal thickness; PFT, parafoveal thickness.

Conversion factors were calculated from mean single-platform measurements in order to allow investigators to translate quantitative data from one SDOCT platform to another. Conversion factors are presented for LW scaling in Table 4 and axial thickness scaling in Table 5.

Table 4. Conversion factors for lateral measurements across platforms (rows: convert to this platform; columns: convert from this platform).

Lateral scaling | Zeiss Cirrus™ | Heidelberg Spectralis™ | Bioptigen Envisu™ | Bioptigen SDOIS
Zeiss Cirrus™ | — | 0.980 | 0.991 | 0.994
Heidelberg Spectralis™ | 1.020 | — | 1.011 | 1.014
Bioptigen Envisu™ | 1.009 | 0.989 | — | 1.003
Bioptigen SDOIS | 1.006 | 0.986 | 0.997 | —

Table 5. Conversion factors for axial measurements across platforms (rows: convert to this platform; columns: convert from this platform).

Axial scaling | Zeiss Cirrus™ | Heidelberg Spectralis™ | Bioptigen Envisu™ | Bioptigen SDOIS
Zeiss Cirrus™ | — | 1.011 | 1.037 | 1.174
Heidelberg Spectralis™ | 0.990 | — | 1.025 | 1.160
Bioptigen Envisu™ | 0.964 | 0.975 | — | 1.132
Bioptigen SDOIS | 0.852 | 0.862 | 0.884 | —
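To illustrate how factors like those in Tables 4 and 5 are derived and applied, the sketch below computes a lateral conversion factor as a ratio of the platform means reported in Table 2; the function and variable names are ours and are not part of any SDOCT software.

```python
# Minimal illustrative sketch: deriving and applying an interplatform conversion
# factor as a ratio of platform means (lateral-width means from Table 2).
lateral_width_mean_um = {
    "Cirrus": 5176,
    "Spectralis": 5282,
    "Envisu": 5222,
    "SDOIS": 5207,
}

def conversion_factor(means, convert_from, convert_to):
    """Scale factor that maps a measurement made on `convert_from`
    onto the scale of `convert_to` (ratio of the two platform means)."""
    return means[convert_to] / means[convert_from]

# Example: normalize a 5300-um lateral width measured on Spectralis to the Cirrus scale.
factor = conversion_factor(lateral_width_mean_um, "Spectralis", "Cirrus")  # ~0.980
print(f"factor = {factor:.3f}, converted width = {5300 * factor:.0f} um")
```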

4. Discussion

This study examined the variability in lateral and axial manual measurements between several commercial SDOCT platforms. Dimensions were measured by hand with each instrument’s caliper tool, rather than by the manufacturer’s segmentation program. A single model eye was used to test for variability and to serve as a standardized solid-state target for SDOCT imaging. Under consistent imaging conditions, we found statistically significant differences in all lateral and axial manual measurements between instruments from the same manufacturer and different manufacturers, but intraplatform differences between instruments were significantly smaller than interplatform differences. From these results, we generated conversion factors to facilitate the comparison of manual measurements between different SDOCT platforms in future clinical trials and in daily treatment of macular diseases.

Before the appearance of numerous commercial SDOCT systems, several studies examined errors and variability between instruments of the same platform [15–17]. Barkana et al. evaluated several TDOCT instruments and found substantial differences between devices, few of which were statistically significant [16]. Interestingly, the observed differences were significantly correlated with signal strength. Our findings differ from those of Barkana et al. and others, who reported no statistically significant differences between instruments [15–17]. However, these reports evaluated only TDOCT instruments and had higher standard deviations of thickness measurements than recent SDOCT studies, in part due to the inferior pixel resolution of TDOCT systems [7–10].

This study is the first to rigorously compare quantitative manual measurements from several commercial platforms utilizing a commercially available model eye. We evaluated two commercial platforms that are commonly used in adult human imaging, clinical research, and randomized clinical trials [2–4]. We also chose a commercial hand-held portable platform approved for retinal imaging in pediatric human subjects [5,14,20–22] and used in basic animal research [23–25]. Furthermore, the largest ongoing randomized trial for age-related macular degeneration (AMD), the NEI-sponsored Age-Related Eye Disease Study 2, exclusively allows the Bioptigen SDOIS platform for its longitudinal, observational ancillary SDOCT study (AREDS2 Ancillary SDOCT Study) [6,26]. The baseline dataset and measurements for both control and AMD eyes in that study have been made publicly available [6].

Several studies have concluded that comparing retinal thickness measurements from instruments of different manufacturers is not advised for clinical studies [7–10]. Determining the true variability in these measurements with a cohort of patients would be biased by errors in lateral and axial scaling. For example, Spectralis machines offer scan parameters based on degrees of visual angle, yet they report caliper measurements in millimeters. The same visual angle spans a shorter physical distance in an eye with a shorter axial length, but it would be reported as the same millimeter distance as a scan of a longer eye. Axial measurement differences may be caused by variability in the default algorithms for automated segmentation line placement, refractive index correction, or dispersion compensation across different SDOCT platforms. Since these calculations are proprietary components of each platform's software, it is difficult for third-party investigators to test their separate contributions to measurement variability.
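As a rough small-angle illustration of this scaling effect (our own arithmetic, assuming a reduced-eye posterior nodal distance f of roughly 17 mm for an emmetropic eye, a value not stated in this paper), the retinal distance d spanned by a scan of visual angle θ is approximately

\[
d \approx f\,\theta, \qquad
d_{20^\circ} \approx 17\ \mathrm{mm} \times 0.349\ \mathrm{rad} \approx 5.9\ \mathrm{mm}, \qquad
d_{20^\circ}\big|_{f \approx 15.5\ \mathrm{mm}} \approx 5.4\ \mathrm{mm},
\]

so the same 20-deg scan covers only about 5.4 mm of retina in a shorter eye, even though both scans would be reported on the same millimeter scale.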

We have also demonstrated statistically significant variability in manual measurements of a single retinal tissue phantom between two different instruments of the same SDOCT platform. Variability between these instruments may result from inherent variability in the optical path length measured at two different time points, from differing degrees of decalibration that accumulate over time with regular use, or from measurement variability caused by speckle noise. We attempted to control for decalibration by selecting same-platform instruments with similar frequency of use in daily clinical care. In SDOCT, speckle noise results from interference between densely packed reflectors, reducing contrast between highly scattering structures in tissue [27]. However, the averaging methods commonly used by commercial SDOCT platforms were not applicable to the motionless imaging protocol of this study, in which speckle noise was highly correlated across images and instruments. Figure 1 shows acceptably low image noise, and even state-of-the-art denoising algorithms produce some level of image blur [27], so we performed measurements on the unprocessed images shown. Based on the small differences between graders (Table 1) and between same-platform instruments (Table 2), we concluded that speckle noise had a negligible effect on measurement variability.

Measurement differences between platforms were statistically significant; however, the clinical significance of these differences is less clear. With the exception of the Bioptigen SDOIS, the SDOCT systems evaluated in this study had low variability from a clinical standpoint, albeit statistically significant. Lateral scaling variability was 0.3 to 2% between platforms, which represents a range of 15 to 106 μm in width difference between images (based on nominal 6-mm scans with a sampling density of 512 A-scans). Axial measurements varied by 1.1 to 17% between platforms, equivalent to a difference of 5 to 45 μm based on the nominal axial resolution of these SDOCT platforms. Excluding the axial measurements from the Bioptigen SDOIS, which were consistently smaller than those of all other platforms, the mean difference decreased to <3.7% (approximately 1.3 pixels, or 8 μm) across the other three systems. Low variability among Cirrus, Spectralis, and the portable Envisu system suggests that hand motion or instability of a human operator does not introduce additional error while the hand-held probe is positioned over the target. Given the small number of pixels spanned by the observed differences and the larger errors associated with automated segmentation, sampling density, and fixation variability [7,10–13], these differences may not affect disease management when uniform scanning protocols and manual measurements are used. However, clinical studies gathering repeated measurements over time to evaluate disease modification may obtain statistically significant differences that remain within the range of instrument variability.
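For reference, our own arithmetic from the nominal scan geometry in Sec. 2.2 gives the lateral sampling interval of the 6-mm, 512 A-scan protocols, and the A-scan equivalent of the largest interplatform width difference, as approximately

\[
\Delta x \approx \frac{6000\ \mu\mathrm{m}}{512\ \text{A-scans}} \approx 11.7\ \mu\mathrm{m}, \qquad
\frac{106\ \mu\mathrm{m}}{11.7\ \mu\mathrm{m}} \approx 9\ \text{A-scans}.
\]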

In conclusion, we have shown significantly greater variability across different platforms than between instruments from the same platform, while controlling for the influence of anatomic variations in human imaging and differences created by automated segmentation programs. This report suggests that clinical investigators may need to account for inherent variances in quantitative SDOCT data collected for clinical trials and routine patient follow-up. Standardized conversion factors may improve the accuracy of data collected from different SDOCT platforms. These conversion tools require further validation with larger samples and human imaging studies. We note that optical imaging instruments may perform differently with eyes of different axial length, refraction, and optical scattering. Accurate quantification of such parameters is part of our ongoing research. Robust, precise, and reproducible conversion factors between commercial SDOCT platforms may allow for the use of a greater range of SDOCT systems in clinical studies and can improve the clinical interpretation of statistically significant differences obtained from study results.

References

1. D. Huang et al., "Optical coherence tomography," Science 254(5035), 1178–1181 (1991). http://dx.doi.org/10.1126/science.1957169

2. U. Chakravarthy et al., "Ranibizumab versus bevacizumab to treat neovascular age-related macular degeneration: one-year findings from the IVAN randomized trial," Ophthalmology 119(7), 1399–1411 (2012). http://dx.doi.org/10.1016/j.ophtha.2012.04.015

3. D. F. Martin et al., "Ranibizumab and bevacizumab for treatment of neovascular age-related macular degeneration: two-year results," Ophthalmology 119(7), 1388–1398 (2012). http://dx.doi.org/10.1016/j.ophtha.2012.03.053

4. Q. D. Nguyen et al., "Ranibizumab for diabetic macular edema: results from 2 phase III trials: RISE and RIDE," Ophthalmology 119(4), 789–801 (2012). http://dx.doi.org/10.1016/j.ophtha.2011.12.039

5. R. S. Maldonado et al., "Spectral-domain optical coherence tomographic assessment of severity of cystoid macular edema in retinopathy of prematurity," Arch. Ophthalmol. 130(5), 569–578 (2012). http://dx.doi.org/10.1001/archopthalmol.2011.1846

6. S. Farsiu et al., "Quantitative classification of eyes with and without intermediate age-related macular degeneration utilizing optical coherence tomography," Ophthalmology 121(1), 162–172 (2014). http://dx.doi.org/10.1016/j.ophtha.2013.07.013

7. I. C. Han and G. J. Jaffe, "Comparison of spectral- and time-domain optical coherence tomography for retinal thickness measurements in healthy and diseased eyes," Am. J. Ophthalmol. 147(5), 847–858 (2009). http://dx.doi.org/10.1016/j.ajo.2008.11.019

8. U. E. Wolf-Schnurrbusch et al., "Macular thickness measurements in healthy eyes using six different optical coherence tomography instruments," Invest. Ophthalmol. Vis. Sci. 50(7), 3432–3437 (2009). http://dx.doi.org/10.1167/iovs.08-2970

9. A. C. Sull et al., "Comparison of spectral/Fourier domain optical coherence tomography instruments for assessment of normal macular thickness," Retina 30(2), 235–245 (2010). http://dx.doi.org/10.1097/IAE.0b013e3181bd2c3b

10. L. Pierro et al., "Macular thickness interoperator and intraoperator reproducibility in healthy eyes using 7 optical coherence tomography instruments," Am. J. Ophthalmol. 150(2), 199–204 (2010). http://dx.doi.org/10.1016/j.ajo.2010.03.015

11. S. R. Sadda et al., "Impact of scanning density on measurements from spectral domain optical coherence tomography," Invest. Ophthalmol. Vis. Sci. 51(2), 1071–1078 (2010). http://dx.doi.org/10.1167/iovs.09-4325

12. D. Odell et al., "Assessing errors inherent in OCT-derived macular thickness maps," J. Ophthalmol. 2011, 692574 (2011). http://dx.doi.org/10.1155/2011/692574

13. S. Hagen et al., "Reproducibility and comparison of retinal thickness and volume measurements in normal eyes determined with two different Cirrus OCT scanning protocols," Retina 31(1), 41–47 (2011). http://dx.doi.org/10.1097/IAE.0b013e3181dde71e

14. R. S. Maldonado et al., "Optimizing hand-held spectral domain optical coherence tomography imaging for neonates, infants, and children," Invest. Ophthalmol. Vis. Sci. 51(5), 2678–2685 (2010). http://dx.doi.org/10.1167/iovs.09-4403

15. I. Krebs et al., "Repeatability and reproducibility of retinal thickness measurements by optical coherence tomography in age-related macular degeneration," Ophthalmology 117(8), 1577–1584 (2010). http://dx.doi.org/10.1016/j.ophtha.2010.04.032

16. Y. Barkana et al., "Inter-device variability of the Stratus optical coherence tomography," Am. J. Ophthalmol. 147(2), 260–266 (2009). http://dx.doi.org/10.1016/j.ajo.2008.08.008

17. L. A. Paunescu et al., "Reproducibility of nerve fiber thickness, macular thickness, and optic nerve head measurements using Stratus OCT," Invest. Ophthalmol. Vis. Sci. 45(6), 1716–1724 (2004). http://dx.doi.org/10.1167/iovs.03-0514

18. R. de Kinkelder et al., "Comparison of retinal nerve fiber layer thickness measurements by spectral-domain optical coherence tomography systems using a phantom eye model," J. Biophotonics 6(4), 314–320 (2013). http://dx.doi.org/10.1002/jbio.201200018

19. R. J. Zawadzki et al., "Towards building an anatomically correct solid eye model with volumetric representation of retinal morphology," Proc. SPIE 7550, 75502F (2010). http://dx.doi.org/10.1117/12.842888

20. R. S. Maldonado et al., "Dynamics of human foveal development after premature birth," Ophthalmology 118(12), 2315–2325 (2011). http://dx.doi.org/10.1016/j.ophtha.2011.05.028

21. A. M. Dubis et al., "Evaluation of normal human foveal development using optical coherence tomography and histologic examination," Arch. Ophthalmol. 130(10), 1291–1300 (2012). http://dx.doi.org/10.1001/archophthalmol.2012.2270

22. L. Vajzovic et al., "Maturation of the human fovea: correlation of spectral-domain optical coherence tomography findings with histology," Am. J. Ophthalmol. 154(5), 779–789 (2012). http://dx.doi.org/10.1016/j.ajo.2012.05.004

23. M. D. Fischer et al., "Noninvasive, in vivo assessment of mouse retinal structure using optical coherence tomography," PLoS One 4(10), e7507 (2009). http://dx.doi.org/10.1371/journal.pone.0007507

24. T. J. Bailey et al., "Spectral-domain optical coherence tomography as a noninvasive method to assess damaged and regenerating adult zebrafish retinas," Invest. Ophthalmol. Vis. Sci. 53(6), 3126–3138 (2012). http://dx.doi.org/10.1167/iovs.11-8895

25. M. L. Gabriele et al., "Reproducibility of spectral-domain optical coherence tomography total retinal thickness measurements in mice," Invest. Ophthalmol. Vis. Sci. 51(12), 6519–6523 (2010). http://dx.doi.org/10.1167/iovs.10-5662

26. J. N. Leuschen et al., "Spectral-domain optical coherence tomography characteristics of intermediate age-related macular degeneration," Ophthalmology 120(1), 140–150 (2013). http://dx.doi.org/10.1016/j.ophtha.2012.07.004

27. L. Fang et al., "Fast acquisition and reconstruction of optical coherence tomography images via sparse representation," IEEE Trans. Med. Imaging 32(11), 2034–2049 (2013). http://dx.doi.org/10.1109/TMI.2013.2271904

Biography

Francisco A. Folgar, MD, is a clinical associate in ophthalmology and a fellow in vitreoretinal surgery at Duke University.

Eric L. Yuan, BSE, is a graduate of the Pratt School of Engineering at Duke University and a current MD candidate in the Wake Forest University School of Medicine.

Sina Farsiu, PhD, is an assistant professor of ophthalmology and biomedical engineering at Duke University and director of the Vision and Image Processing Laboratory.

Cynthia A. Toth, MD, is a professor of ophthalmology and biomedical engineering at Duke University, director of the Duke Advanced Research in Spectral Domain Optical Coherence Tomography Imaging Laboratory, and director of grading at the Duke Reading Center for ophthalmic imaging in clinical trials.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Francisco A. Folgar, Eric L. Yuan, Sina Farsiu, and Cynthia A. Toth "Lateral and axial measurement differences between spectral-domain optical coherence tomography systems," Journal of Biomedical Optics 19(1), 016014 (17 January 2014). https://doi.org/10.1117/1.JBO.19.1.016014