Optical coherence tomography (OCT)1 is a noninvasive optical signal acquisition and processing modality for imaging subsurface tissue structures. OCT can attain images of the internal structures of the eye with higher spatial resolution (several micrometers) than other imaging modalities, such as ultrasound, x-ray, and magnetic resonance imaging.
Because human eyes are optically penetrable, OCT has recently become an important technique to revolutionize the clinical imaging of the retina.2 Quantities derived from the segmented retinal layers will help doctors to understand the anatomy of the human retina better and to quantify the extents of abnormalities. These layers include the nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), inner segment (IS), outer segment (OS), and retinal pigment epithelium (RPE) (Fig. 1).
There have been efforts to segment layer boundaries from OCT retinal images based on grayscale variation and grayscale thresholding.4–10 Unfortunately, these methods are sensitive to noise. A Markov random field model for extracting the inner and outer retinal borders was introduced in Ref. 11. Its extension was adopted for segmentation of the optic nerve head12 and the outer retinal layers.13 The autoregressive model was shown to be more robust than those based on grayscale thresholding and grayscale variation. Its main problem was finding reliable “seed” points for OCT images with an abnormal retina. In addition, special rules needed to be applied for correcting errors since the model depended on the connection of one-dimensional (1-D) points. Active contour models were employed to segment retinal layers, including the macula and optic nerve regions.14–18 However, they required good initialization and had high computational costs. Methods based on graph theory could accurately segment retinal layer boundaries in normal adult eyes19–29 but had high computational complexity. Recently, machine learning approaches, including support vector machines,30 random forest,31 and Bayesian artificial neural networks,32 have attracted much attention. As pointed out in Ref. 10, most of the reported segmentation methods on two-dimensional (2-D) and three-dimensional (3-D) OCT data are not practical for general clinical use due to their high computational costs. However, segmentation would be particularly valuable in fields such as computer-assisted surgery, where real-time visualization of the anatomy is a crucial component.
As quantitative changes in different retinal layers are correlated well with changes in visual function and may represent surrogate indicators of glaucoma,33,34 age-related macular degeneration (AMD),35 type 1 diabetes,36 multiple sclerosis,37–39 Alzheimer’s disease,40 and Parkinson’s disease,41 it is of great value to quantify the thickness of different retinal layer boundaries with speed and accuracy. Our objective is to decrease computational costs while preserving reliable and accurate segmentation for spectral-domain OCT (SD-OCT) images. The main points of this study are listed below.
First, Kalman filtering42 is adopted to model the correlation between adjacent image frames, which, to the best of our knowledge, has not yet been addressed in existing methods. It has the advantages of avoiding the initialization of contours from the second frame onward and enhancing robustness in the presence of retinal blood vessel shadows and other artifacts of motion or uneven illumination. Second, segmentation of 2-D OCT images is converted to 1-D positioning by exploiting the layered structure of the retina. Finally, a customized active contour model with only an image energy term, inspired by Jacob et al.,43 is proposed to roughly localize the retina, followed by smoothing based on the Savitzky–Golay algorithm44 to yield fast and accurate layer boundaries.
The rest of the paper is organized as follows. In Sec. 2, the proposed macular layer boundary segmentation and measurement techniques are presented. Experimental results are then described in Sec. 3, whereas discussion and conclusion are given in Sec. 4.
An OCT imaging system can obtain 2-D images (i.e., B-scans, where the number of A-scans is the width of a 2-D image) and 3-D volumes (volumetric data containing spatially aligned multiple B-scans). Due to the information correlation among adjacent images, for volumetric data, the first frame and the other frames are addressed separately. If volumetric data are not available, the algorithm to process individual B-scans is the same as that to process the first frame of a volumetric data. We divide Sec. 2 into eight subsections, which, respectively, deal with image projection to get the region of interest (ROI), directional filtering, multiresolution to estimate the vitreous-NFL and IS/OS, the customized active contour model for coarse segmentation, curve smoothing, image flattening and unflattening, Kalman filtering, and segmentation of other layer boundaries. The flowcharts in Fig. 2 outline the main steps, respectively, for individual B-scans or the first frame of a volumetric data, and for the nonfirst frames of a volumetric data.
Image Projection to Get Region of Interest
In OCT images, the data are substantially redundant in the axial direction. Searching for the ROI is essential to decrease the computational cost as well as to make the delineation more reliable. Once the ROI is determined, subsequent processing is performed within the ROI. A method of image projection is employed to attain the ROI.
First, each row of the B-scan to be processed is projected along the lateral direction:
Second, according to prior knowledge, the distance between the center line of the RPE-IS/OS complex (center line of the NFL) and the top (bottom) border of the ROI can be determined; it is set as a predefined constant times the height of the B-scan. In this study, the constant is set to 0.1. Figure 3 shows the derived ROI (the rectangle formed by the magenta and yellow lines).
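The projection-based ROI step can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `roi_from_projection` and the assumption that the two brightest rows of the lateral projection mark the NFL and the RPE-IS/OS complex are ours, and `k` stands in for the predefined constant mentioned above.

```python
import numpy as np

def roi_from_projection(bscan: np.ndarray, k: float = 0.1):
    """Sketch of the ROI step: project each row along the lateral
    direction, take the two brightest rows as rough centers of the
    NFL and the RPE-IS/OS complex, and pad by k * image height
    (k plays the role of the predefined constant in the text)."""
    h = bscan.shape[0]
    profile = bscan.mean(axis=1)           # axial intensity profile
    # row indices of the two largest projection peaks, in axial order
    top2 = np.sort(np.argsort(profile)[-2:])
    margin = int(k * h)
    top = max(top2[0] - margin, 0)
    bottom = min(top2[1] + margin, h - 1)
    return top, bottom
```

All subsequent processing would then be restricted to rows `top..bottom` of the B-scan.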
Note that image projection to derive the ROI is only applied for processing individual B-scans or the first frame of a volumetric data. For the nonfirst frame of a volumetric data, Kalman filtering (see Sec. 2.7) is utilized to avoid redundant calculation of image projections frame by frame.
Directional Gaussian Filtering
Because the layer boundaries are almost horizontal, the original B-scan is smoothed with a 1-D Gaussian filter along the lateral direction to suppress noise without blurring the boundaries. Figure 4 shows the original image and the image smoothed along the lateral direction with the standard deviation (SD) being 5.0 and the mean being 0.
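A lateral-only Gaussian smoothing of this kind can be written in one line with scipy; the helper name `smooth_lateral` is ours, and the sigma of 5.0 follows the SD quoted above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_lateral(bscan: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Smooth only along axis=1 (the lateral / A-scan index): because
    retinal layers run nearly horizontally, this suppresses speckle
    without blurring the near-horizontal layer boundaries."""
    return gaussian_filter1d(bscan.astype(float), sigma=sigma, axis=1)
```

Since the kernel never mixes rows, a boundary that is exactly horizontal is left untouched, which is the point of filtering in this direction only.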
Multiresolution to Estimate Vitreous-Nerve Fiber Layer and Inner Segment/Outer Segment
The two most prominent layer boundaries in an SD-OCT retinal B-scan are the vitreous-NFL and the IS/OS. As such, these two boundaries are extracted first. To be robust to noise and to reduce computational costs, a multiresolution approach is adopted in four steps.
First, downsampling is performed. Suppose that the original image is denoted as , then the downsampling is carried out by a factor of 2 vertically to get .
This downsampling could be performed along both the axial and lateral directions:
Second, the grayscale gradient magnitude is calculated at the coarse scale.
Third, the two initial contours are composed of the pixels whose gradient magnitudes are the maximum and the second maximum at each column.
Finally, the edge pixels at the coarse scale are converted to pixels of the original image by multiplying coordinates by 4 and coordinates by 16 to form the initial contours (Fig. 5).
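The four steps above can be sketched as follows. This is an illustrative reading of the text, not the authors' implementation: we assume (per the factors quoted here and in the discussion) a 16-fold reduction axially and a 4-fold reduction laterally, use simple block averaging for downsampling, and take the per-column maximum and second-maximum vertical gradient rows as the two initial contours.

```python
import numpy as np

def coarse_boundaries(bscan: np.ndarray, fy: int = 16, fx: int = 4):
    """Multiresolution sketch: block-average the B-scan by (fy, fx),
    compute the vertical gradient magnitude at the coarse scale, and
    per column return the rows of the largest and second-largest
    gradient, mapped back to full-resolution coordinates."""
    h, w = bscan.shape
    small = bscan[:h - h % fy, :w - w % fx].reshape(
        h // fy, fy, w // fx, fx).mean(axis=(1, 3))   # block average
    grad = np.abs(np.diff(small, axis=0))             # vertical gradient
    order = np.argsort(grad, axis=0)                  # ascending per column
    first = (order[-1, :] + 1) * fy                   # strongest edge rows
    second = (order[-2, :] + 1) * fy                  # second strongest
    cols = np.arange(small.shape[1]) * fx             # coarse -> full columns
    return cols, first, second
```

The two returned row arrays serve only as initial contours; the customized active contour model described next refines them at full resolution.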
Note that multiresolution for estimating vitreous-NFL and IS/OS is only employed for individual B-scans or the first frame of a volumetric data. For the nonfirst frame of a volumetric data, Kalman filtering (see Sec. 2.7) is employed instead.
Customized Active Contour Model for Coarse Positioning of Layer Boundaries
1. There is only one contour point at each column.
2. At each iteration, the contour point at each column could only move vertically, being either one pixel up or down.
3. The contour point at each column should be adjacent to the contour point of the previous and next columns (eight neighbors).
With these constraints, there are only four configuration types for three neighboring contour pixels (Fig. 6).
Here are some observations to justify the simplification of Eq. (11).
1. The distance between two adjacent contour points can take only two values, 1 (horizontal) or √2 (diagonal).
2. The curvature term is 0 for the horizontal and diagonal cases, 1 for the inflectional case, and 2 for the V-shaped case.
Equation (11) could thus be simplified to include only the third term. In this study, the third term is defined locally to reduce the computational cost. Figure 7 shows the initial contours of the vitreous-NFL and IS/OS derived from Eq. (12) for Fig. 5.
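The constrained, image-energy-only evolution can be sketched as a simple per-column hill climb. This is our simplification for illustration: `refine_contour` is a hypothetical name, the image energy is taken as the gradient magnitude, and the 8-neighbor coupling between adjacent columns (constraint 3) is omitted, since in the method smoothness is restored afterwards by Savitzky–Golay filtering.

```python
import numpy as np

def refine_contour(grad_mag: np.ndarray, contour, iters: int = 20):
    """Customized active contour sketch: one point per column; at each
    iteration a point may stay or move one pixel up or down, choosing
    the position with the largest image energy (gradient magnitude).
    Continuity/curvature terms are dropped, as in the text."""
    h, w = grad_mag.shape
    c = np.asarray(contour, dtype=int).copy()
    cols = np.arange(w)
    for _ in range(iters):
        up = np.clip(c - 1, 0, h - 1)
        down = np.clip(c + 1, 0, h - 1)
        cand = np.stack([grad_mag[up, cols],      # move up
                         grad_mag[c, cols],       # stay
                         grad_mag[down, cols]])   # move down
        move = cand.argmax(axis=0) - 1            # -1, 0, or +1
        c = np.clip(c + move, 0, h - 1)
    return c
```

Because every column is updated independently by a constant-time rule, one iteration costs O(width), which is what makes this refinement step cheap.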
As the customized active contour model [Eq. (12)] has not taken into account continuity and curvature terms, the derived contour could be jagged. To make it smooth, curve smoothing is employed.
Savitzky–Golay smoothing filters44 have been widely used for 1-D filtering. The fundamental idea is to fit a polynomial to the data surrounding each data point. The difference between the general least-squares polynomial smoothing filters and the Savitzky–Golay smoothing filters lies in the fact that the latter introduces tables of convolution weights for the center-point, which could substantially enhance the speed.48 In this study, Savitzky–Golay smoothing filters are employed for curve smoothing of the layer boundaries with the order of the polynomial being 4 and a neighborhood size from 20 to 30 pixels for different layer boundaries.
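The boundary smoothing can be reproduced with scipy's Savitzky–Golay implementation, which likewise evaluates a local polynomial fit through precomputed convolution weights. The wrapper name and the window of 21 samples (an odd value inside the 20 to 30 pixel range quoted above) are our choices for illustration.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_boundary(y, window: int = 21, order: int = 4):
    """Savitzky-Golay smoothing of a 1-D layer boundary: fit a
    degree-`order` polynomial in each sliding window and keep the
    center value, as described in the text (order 4, window 20-30)."""
    return savgol_filter(np.asarray(y, dtype=float),
                         window_length=window, polyorder=order)
```

A useful property for layer boundaries is that any curve that locally is a polynomial of degree at most 4 passes through unchanged, so the filter removes jaggedness without flattening genuine anatomical curvature.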
Image Flattening and Unflattening
To handle the possible deformation of layer boundaries due to pathology, the image is flattened with respect to the extracted IS/OS layer boundary. Specifically, the flattening is carried out as follows: find the maximum row coordinate of the IS/OS layer boundary; then each column is moved downwards by the difference between this maximum and the row coordinate of the IS/OS boundary pixel in that column. In this way, the extracted IS/OS becomes a horizontal line. Figure 8 shows the flattened image from Fig. 9. The other six layer boundaries are all extracted from the flattened image. The 8 layer boundaries are then derived by reversing the flattening process. Note that these flattening and unflattening procedures are only applied to individual B-scans or the first frame of a volumetric data.
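The column-shift flattening described above can be sketched directly; the function names are hypothetical, and pixels shifted in from the top are simply zero-filled, a detail the text does not specify.

```python
import numpy as np

def flatten(img: np.ndarray, isos_rows):
    """Flatten a B-scan so the IS/OS boundary becomes horizontal:
    shift each column down by (max(isos_rows) - isos_rows[col]),
    following the description in the text. Returns the flattened
    image and the per-column shifts needed to undo the operation."""
    h, w = img.shape
    base = int(np.max(isos_rows))
    shifts = base - np.asarray(isos_rows, dtype=int)
    out = np.zeros_like(img)
    for col in range(w):
        s = shifts[col]
        out[s:h, col] = img[0:h - s, col]   # shift column down by s
    return out, shifts

def unflatten_boundary(rows_flat, shifts):
    """Map a boundary extracted in the flattened image back to the
    original image coordinates (the reverse of the flattening)."""
    return np.asarray(rows_flat, dtype=int) - shifts
```

Boundaries found in the flattened image are mapped back simply by subtracting the stored per-column shifts, which is the unflattening step.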
As prior knowledge, for a volumetric data, the positions of layer boundaries in adjacent images (frames) are similar. The coarse layer boundaries of the current frame can thus be estimated from the layer boundaries of the previous frame. We employ Kalman filtering42 to track coarse layer boundaries frame by frame.
To approximate the layer boundaries, the Kalman filter is formulated as a state equation [Eq. (13)] and a measurement equation [Eq. (14)]:
The state and variance could be predicted as follows:
The correction equation is
Then, the state can be updated:
Finally, the variance can be updated:
To summarize, the two phases of Kalman filtering [prediction by Eqs. (15) and (16) and updating by Eqs. (17)–(19)] are applied to attain the state with minimum variance.
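The predict/update cycle can be sketched for a single boundary point tracked across frames. This is a generic scalar Kalman step under assumptions of ours, not parameters from the paper: an identity (random-walk) state transition, and process/measurement noise variances `q` and `r` chosen arbitrarily for illustration.

```python
def kalman_track(x_prev: float, p_prev: float, z_meas: float,
                 q: float = 1.0, r: float = 4.0):
    """One scalar Kalman predict/update step for a boundary row
    tracked frame to frame (q, r are assumed noise variances).
    Predict:  x- = x,           P- = P + q
    Update:   K  = P-/(P- + r), x = x- + K (z - x-), P = (1 - K) P-"""
    x_pred = x_prev                  # identity state transition
    p_pred = p_prev + q              # predicted variance
    k = p_pred / (p_pred + r)        # Kalman gain
    x_new = x_pred + k * (z_meas - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Applied per column, the predicted state gives the initial boundary for the next frame, so the projection and multiresolution steps need not be repeated for nonfirst frames.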
Segmentation of Other Layer Boundaries
Once the two most outstanding layer boundaries are segmented, they can be utilized as constraints to facilitate segmentation of other layer boundaries. Figure 10 illustrates the flowchart of segmentation for other layer boundaries.
The segmentation of other layer boundaries consists of the same three steps: initialization, coarse segmentation by the customized active contour method [Eq. (12)], and curve smoothing through the Savitzky–Golay filtering described in Sec. 2.5.
There are two ways of initialization. The first uses prior knowledge to initialize for individual B-scans or the first frame of a volumetric data. The other layer boundaries are initialized as follows: OS-RPE is initialized below IS/OS, RPE-choroid below OS-RPE, OPL-ONL above IS/OS, INL-OPL above OPL-ONL, IPL-INL above INL-OPL, and NFL-GCL as the vertical average of vitreous-NFL and IPL-INL; the offsets come from prior knowledge of images acquired with our customized OCT imaging equipment.
The second way of initialization is for the nonfirst frame of a volumetric data using the Kalman filter described in Sec. 2.7, by employing the fact that layer boundaries between consecutive frames are similar.
In this section, experiments were carried out to validate the proposed method. Validation was first carried out on 21 SD-OCT volumes collected from 21 healthy Chinese subjects (one volume per subject); comparisons were then made with state-of-the-art methods for both normal eyes and pathological eyes with AMD using common datasets. Finally, additional experiments were performed in which components of the algorithm were varied for comparison.
For each layer of a B-scan, the thickness between neighboring boundaries was computed and then the average difference in layer thickness between the automatic and manual delineation was calculated for quantification. The average difference in layer thickness between two manual delineations was also calculated to reflect the possible uncertainty of measuring the layer thickness, even when done by experts.
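The evaluation metric described above can be sketched as follows; the function name and the array layout (one row of pixel coordinates per boundary) are assumptions of ours.

```python
import numpy as np

def mean_thickness_difference(auto_boundaries, manual_boundaries):
    """Evaluation metric as described in the text: per layer, the
    thickness is the distance between neighboring boundaries, and the
    score is the mean absolute difference between automatic and manual
    thickness. Inputs are (n_boundaries, n_columns) arrays of rows."""
    a = np.diff(np.asarray(auto_boundaries, dtype=float), axis=0)
    m = np.diff(np.asarray(manual_boundaries, dtype=float), axis=0)
    return np.abs(a - m).mean(axis=1)    # one value per layer
```

The same function applied to the two manual delineations gives the interexpert difference used as the uncertainty baseline.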
Figure 11 shows the segmentation for a B-scan of a normal right eye.
Figure 12 illustrates that the proposed method can handle images with heavy noise and blood vessel artifacts.
Validation on Our Clinical Data
A customized SD-OCT system was adopted for 3-D imaging, which used a superluminescent diode with a full-width at half-maximum spectral bandwidth of 50 nm and a center wavelength of 840 nm, and a linear CCD camera (Aviiva-SM2-CL-2014, e2V) with a pixel size operating in an 11-bit mode. The optical power of the beam on the cornea was not greater than . Images were obtained with a depth of 1.68 mm and axial resolution of .
A dataset consisting of 21 SD-OCT volumes ( voxels, macular regions, and an in-plane pixel size of ) with each volume from one subject was collected from 21 healthy Chinese subjects at the Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine. One eye from each subject was selected randomly. The protocol of the research has been approved by the Institutional Review Board of the hospital. All subjects gave written consent and provided permission for educational and scientific purposes.
The automated detection of macular layer boundaries was performed successfully on all 21 volumes. For segmenting 8 layer boundaries, it took an average of 5.8 s for each volume (128 B-scans per volume) and 95 ms for a B-scan on a personal computer (C program running on a 3.2 GHz Intel Core i5-3470 CPU). These data were also segmented manually by two eye specialists as the reference for comparison with automatic segmentation.
The time distribution of the proposed method was recorded. For segmenting a B-scan, the total time (95 ms) was distributed as: image projection to get the ROI (2 ms), 1-D Gaussian filtering (5 ms), multiresolution to estimate the two boundaries (5 ms), image flattening and unflattening (3 ms), coarse segmentation (), and curve smoothing (). These times cover the processing of eight boundaries for each B-scan. For processing a volumetric data excluding the first frame, the total time (5715 ms) was distributed as: 1-D Gaussian filtering (), Kalman filtering (), coarse segmentation (), and curve smoothing (). These times cover the processing of the 127 B-scans of each volumetric data excluding the first frame.
Table 1 shows the absolute mean difference and SD of the 21 volumes.
Retinal layer thickness differences in μm of 21 volumes.
|Differences in retinal layer thickness|
|Retinal layer||Comparison of two eye specialists||Comparison of manual and automatic segmentation|
|Mean difference ± SD||Max error||Mean difference ± SD||Max error|
Note: SD, standard deviation; Max, maximum; RPE, retinal pigment epithelium; NFL, nerve fiber layer; GCL, ganglion cell layer; IPL, inner plexiform layer; INL, inner nuclear layer; OPL, outer plexiform layer; ONL, outer nuclear layer; OS, outer segment; IS, inner segment.
Comparison with a State-of-the-Art Method for Normal Eyes
This experiment compares the proposed method with the method of Chiu et al.23 using their data and their manual segmentation as the reference for normal eyes. The 10 volumes were acquired by Bioptigen, Inc. (Research Triangle Park, North Carolina) imaging systems with an axial spacing of . Only 100 B-scans from the 10 volumes were manually delineated by two experts (one junior and one senior), and the manual delineation by the senior expert was taken as the reference; 29 of these B-scans were used for interexpert comparison.
The average times to derive the 8 layer boundaries of a B-scan for the proposed method and Chiu’s method23 were, respectively, 60 and 9740 ms.
The comparison results between the two experts as well as between the proposed method and Ref. 23 are summarized in Table 2. Paired t-tests found a significant difference in the accuracy of the INL (), OPL (), ONL-IS (), OS (), RPE (), and total retina (), while the errors of the NFL and GCL-IPL did not differ significantly.
Comparison with Ref. 23 for thickness measurement of retinal layers of 10 volumes of normal eyes. The mean and SD are all in voxels with the voxel size being 3.23 μm.
|Differences of retinal layer thickness|
|Retinal layer||Comparison between two manual graders||Comparison between automatic and manual segmentations|
|Column I||Column II (Chiu’s method)||Column III (proposed)||Column IV (Chiu’s method)||Column V (proposed)|
|29 B-scans||100 B-scans|
Note: * indicates significant differences.
Comparison with a State-of-the-Art Method for Eyes with Age-Related Macular Degeneration
AMD is a primary cause of vision loss worldwide.49 To validate the applicability of the proposed method to data with pathology, the second direct comparison was to compare the proposed method with the method of Chiu et al.26 using their data and their manual segmentation as a reference for AMD patients.
In the study of Chiu et al.26 for SD-OCT images of eyes containing drusen and geographic atrophy (GA), segmentation was performed for only three retinal layer boundaries [the inner internal limiting membrane (ILM), inner RPE drusen complex, and outer Bruch’s membrane defined in Ref. 26 correspond, respectively, to the vitreous-NFL, OS-RPE, and RPE-choroid in Fig. 1]. Figure 13 shows the segmentation of a B-scan of an AMD patient from Ref. 26.
In Chiu’s study, 220 B-scans across 20 patients (one volume per patient) with AMD were acquired by Bioptigen, Inc. (Research Triangle Park, North Carolina) imaging systems with an axial pixel resolution ranging from 3.06 to 3.24 μm per pixel (Devers Eye Institute, ; Duke Eye Center, ; Emory Eye Center, ; and the National Eye Institute, ). All the SD-OCT images were manually delineated by two certified experts (one junior and one senior), and the manual delineation by the senior expert was taken as the reference; all of the images were used for interexpert comparison.
The comparison results between the proposed method and Ref. 26 are summarized in Table 3. Paired t-tests found a significant difference in the accuracy of the total retina of drusen with low quality (), total retina of GA with high quality (), total retina of GA with low quality (), and total retina for both the drusen and GA images (), while the errors of the rest did not differ significantly.
Comparison with Ref. 26 for retinal layers of 20 volumes for patients with age-related macular degeneration. The mean and SD are all in pixels with the pixel size ranging from 3.06 to 3.24 μm.
|Comparison between two manual graders||Comparison between automatic and manual segmentations|
|Pathology||Quality||Volume group||Retinal layer boundary||Mean error ± SD (pixels)||Max error; error > 5 pixels (μm; %)||Mean error ± SD (pixels)||Max error; error > 5 pixels (μm; %)||Mean error ± SD (pixels)||Max error; error > 5 pixels (μm; %)|
|Drusen (110 images)||High||1||Total retina||1.01±0.73||13; 5.5||0.78±0.55||17; 2.5||0.83±0.60||17; 4.2|
|RPEDC||1.31±0.99||16; 5.8||0.94±0.67||17; 3.2||0.91±0.56||13; 3.1|
|Low||2||Total retina||1.04±0.71||22; 10.0||1.17±0.75||22; 7.6||0.91±0.69||19; 6.5|
|RPEDC||1.43±1.11||23; 12.5||0.90±0.65||23; 7.8||0.84±0.52||21; 7.3|
|GA (110 images)||High||3||Total retina||1.45±1.27||32; 12.8||1.55±0.92||32; 11.4||1.15±1.05||29; 10.5|
|RPEDC||1.52±1.34||31; 13.3||1.29±1.07||28; 11.3||1.03±1.03||23; 8.4|
|Low||4||Total retina||0.84±0.73||23; 10.6||1.73±0.94||30; 10.6||1.43±1.21||29; 11.7|
|RPEDC||1.38±1.00||22; 13.4||0.92±0.68||31; 7.8||0.89±0.60||26; 7.5|
|Total||Total retina||1.09±0.91||32; 9.7||1.31±0.88||32; 8.0||1.08±0.73||29; 8.2|
|RPEDC||1.41±1.16||31; 11.3||1.01±0.80||31; 7.5||0.92±0.83||26; 6.6|
Note: SD, standard deviation; GA, geographic atrophy; RPEDC, retinal pigment epithelium drusen complex; total retina is the distance between the inner ILM and the inner RPEDC; * indicates significant differences.
To compare performance under different initializations for a volumetric data from the second frame onward (with the subsequent processing steps unchanged), two variants were devised. Method A uses the segmented layer boundaries of the previous frame directly as the initial layer boundaries of the current frame. Method B ignores the spatial correlation and derives the initial layer boundaries independently, in the same way as for the first frame. Table 4 summarizes the performance of the original method and the two variants.
Comparison among the proposed method (using Kalman filtering), method A (replacing Kalman filtering with direct casting from the previous frame), and method B (initializing boundary layers independently).
|Average time for one B-scan of a volumetric data (ms)||Mean difference ± SD (μm)|
|Proposed||Method A||Method B||Proposed||Method A||Method B|
|Our clinic data||45.3||44.2||95.2||2.35±1.32||3.21±2.01||2.76±1.67|
|Chiu’s normal data||32.3||31.7||60.3||2.33±2.10||3.32±2.57||2.83±2.31|
|Chiu’s pathological data||23.1||22.6||45.5||3.41±2.61||4.05±3.02||3.51±3.03|
In refining the boundary layers from the coarse segmentation, the proposed method employs a combination of the customized/simplified active contour model and the Savitzky–Golay smoothing filter. A variant is to use a full active contour model. To this end, the active contour model in Ref. 46 was implemented for comparison; this variant is denoted as method C. The comparative performance is summarized in Table 5.
Comparison between the proposed method (using customized active contour plus Savitzky–Golay smoothing filter) and method C (using the active contour model in Ref. 46).
|Average time for one B-scan of volumetric data (ms)||Mean difference ± SD (μm)|
|Proposed||Method C||Proposed||Method C|
|Our clinic data||45.3||63.2||2.35±1.32||3.41±2.47|
|Chiu’s normal data||32.3||44.7||2.33±2.10||3.53±2.76|
|Chiu’s pathological data||23.1||34.3||3.41±2.61||4.15±3.16|
Discussion and Conclusion
Sensitivity to Parameters
One parameter determines the size of the ROI. Varying it in the range from 0.05 to 0.40 yields similar segmentation accuracy (with only a small difference in the total retinal thickness) but different time consumption (the average time ranges from 71 to 116 ms). A value of 0.1 is a good trade-off between segmentation accuracy and time consumption.
The other parameter is the SD of the Gaussian filter, which controls the degree of smoothing. As varying it in [1.0, 10.0] yields almost the same segmentation, a value of 5.0 is chosen for this study.
Comparison with Existing Methods
Existing methods are compared with the proposed one in terms of computational cost and accuracy.
Segmentation of nerve fiber for a single image (with 1000 A-scans) could take as long as 62 s.14 It took 24 s to segment seven layer boundaries for a pixels’ image.5 The RPE segmentation took 8.3 min for a volume with 60 images ( pixels).50 As to processing normal OCT volumetric data, the segmentation time for RPE and vitreous-NFL on volumetric data was 37 s,10 and for 11 layer boundaries on volumetric data was 70 s.22 The average computation time was () for 8 layer boundaries.23 The fastest approach was 16 s for detecting nine layer boundaries per volume () by a commercial system named Topcon 3-D OCT-1000 (Topcon Medical Systems, Oakland, New Jersey).24 The proposed method segments 8 layer boundaries within 5.8 s for a volumetric data and 95 ms for an individual B-scan with similar spatial resolutions (), being substantially faster than these existing methods.
Garvin et al.19,20 reported overall mean absolute boundary differences of 19 and .20 Better accuracies, in the range of 3 to , were achieved in Refs. 24, 23, and 31. The proposed method yields a comparable maximum absolute boundary difference (Table 1). As these accuracies are from the segmentation of different data, it is inherently difficult to judge which method yields the best accuracy, since segmentation accuracy depends on factors other than the segmentation method, such as imaging type [time-domain OCT (TD-OCT) or SD-OCT], imaging quality, and axial and lateral resolutions.
We have tried to make direct comparisons using common datasets with existing methods.
On the common dataset with normal eyes,23 the proposed method was compared with Chiu’s method.23 From Table 2, it can be seen that the proposed method yields accuracy similar to that of Ref. 23: all quantities of the proposed method have a smaller average error than Ref. 23 for the 29 B-scans, seven out of the eight quantities have a smaller average error for the 100 B-scans, and all the mean differences of the proposed method are smaller than the mean differences between the two eye specialists. For the segmentation of 8 layer boundaries of the 100 normal B-scans from Ref. 23, the proposed method has significantly higher accuracy than Ref. 23 for the INL, OPL, ONL-IS, OS, RPE, and total retina, while the errors of the NFL and GCL-IPL do not differ significantly. The proposed method is 162 (9740/60) times faster than Chiu’s method23 in deriving the 8 layer boundaries of a B-scan.
On the common dataset of patients with AMD,26 the proposed method was compared with Chiu’s method.26 From Table 3, it can be seen that the proposed method yields segmentation accuracy similar to that of Ref. 26 for the pathological data (9 out of 10 quantities of the proposed method have a smaller average error than Chiu’s method26). For the segmentation of three layer boundaries of 110 B-scans with drusen and 110 B-scans with GA,26 the proposed method has significantly higher accuracy than Ref. 26 for the total retina of drusen with low quality, total retina of GA with high quality, total retina of GA with low quality, and total retina for both the drusen and GA images, while the errors of the rest do not differ significantly. For these pathological data, the proposed method is 37 (1700/45) times faster than Chiu’s method26 in deriving the three layer boundaries of a B-scan.
The recent evolution of OCT from the time domain to the spectral domain has substantially increased imaging speed and made the acquisition of large volumetric data feasible. Therefore, fast and automatic segmentation of layers and thickness measurement for OCT data have become more important than ever. Our purpose is to minimize processing time while maintaining reliable segmentation results. The proposed method has the following advantages.
1. Fast. The average time cost of the proposed method to extract 8 layer boundaries is 5.8 s for volumetric data and 95 ms for a B-scan, while existing methods can take several minutes.10 The low computational cost can be attributed mainly to the framework of the proposed method: image projection to initialize the two most prominent layers (RPE-IS/OS and NFL), 1-D Gaussian filtering along the lateral direction instead of 2-D Gaussian filtering, approximating the RPE-IS/OS and NFL from the grayscale gradient magnitude at low spatial resolution (one 16th of the original and a quarter of the original ), the customized active contour model (only calculating the grayscale difference along the lateral direction) to refine the layer boundaries, Savitzky–Golay smoothing with precalculated weights, and Kalman filtering with low-order () matrix manipulation to initialize layer boundaries from the previous frame for volumetric data.
2. Accurate. For both normal eyes and eyes with AMD, the proposed method yields similar or higher accuracy compared with the state-of-the-art methods of Ref. 23 (Table 2) and Ref. 26 (Table 3). This can be ascribed to the core components of the proposed method: employment of prior knowledge to derive the ROI, Kalman filtering that performs well even in the presence of heavy noise and artifacts, and the customized active contour plus Savitzky–Golay smoothing to balance convergence to edges against smoothness.
3. Able to handle pathologies. In our methodology, pathology only influences the initialization of the layer boundaries of the first frame of a volumetric data, as the initialization of the layer boundaries of subsequent frames is cast from the previous frame. Due to their edge prominence, the vitreous-NFL and IS/OS can be extracted irrespective of pathologies. To remove the influence of possible deformation of layer boundaries due to pathologies for any individual B-scan or the first frame of a volumetric data, the image is flattened such that the extracted IS/OS boundary is a horizontal line. The other six layer boundaries are then segmented from the flattened image, taking the horizontal IS/OS line as the first basis, with subsequently extracted layer boundaries possibly used as bases for those still to be extracted. The method has been validated to perform well on pathological eyes with drusen and GA (Fig. 13 and Table 3).
Contributions and Limitations
From Table 4, it can be seen that initialization of layer boundaries through Kalman filtering yields more accurate segmentation than either direct casting or independent initialization. This feature of Kalman filtering may also be employed for initialization of other structures in volumetric data, such as the brain in 3-D computed tomography or magnetic resonance images once the brain in one 2-D slice is available.
From Table 5, it can be seen that the proposed combination of a customized/simplified active contour model and Savitzky–Golay smoothing is better than the active contour method46 in terms of both segmentation accuracy and computational cost. This feature may be explored as one more option to replace the active contour methods.
The novelties of the study are fourfold: determination of the ROI from grayscale projection and prior knowledge, Kalman filtering to initialize layer boundaries from the previous frame, customized active contour for coarse delineation of layer boundaries, and curve smoothing with precalculated weights. All these contribute to substantial reduction of computational cost while preserving segmentation accuracy.
The study is not without limitations. In particular, it focuses on methodology, and validation has only been carried out for retinal layer boundaries in OCT images of normal eyes and eyes with AMD. In the future, we will handle more pathological cases.
The proposed method assumes that the layer boundaries are visible in the images and that different layer boundaries do not intersect. We believe that the method will be applicable to pathologies as long as they do not invalidate this assumption.
Existing methods segment different numbers of layer boundaries (and/or surfaces): 2 boundaries (Ref. 9, SD-OCT, posterior retinal layers of human eyes; Ref. 10, SD-OCT, AMD of human eyes; Ref. 11, SD-OCT, total retinal thickness of human eyes; Ref. 12, TD-OCT, total retinal thickness of human eyes; Ref. 13, TD-OCT, optic nerve head of human eyes; Ref. 14, SD-OCT, NFL thickness of human eyes; Ref. 15, SD-OCT, optic nerve head of human eyes); 3 boundaries (Ref. 26, SD-OCT, AMD of human eyes); 4 boundaries (Ref. 25, SD-OCT, patients with retinitis pigmentosa); 5 boundaries (Ref. 4, TD-OCT, retinal layers of human eyes; Ref. 30, SD-OCT, retinal layers of human eyes); 6 boundaries (Ref. 18, SD-OCT, intraretinal layers of rodent eyes; Ref. 29, SD-OCT, retinal layers of human eyes); 7 boundaries (Ref. 5, TD-OCT, retinal layers of human eyes; Ref. 8, TD-OCT, retinal layers of human eyes); 8 boundaries (Ref. 23, SD-OCT, retinal layers of human eyes; Ref. 32, TD-OCT, retinal layers of diabetic eyes); 9 boundaries (Ref. 24, SD-OCT, retinal layers of human eyes; Ref. 31, SD-OCT, retinal layers of human eyes); 10 boundaries (Ref. 17, SD-OCT, intraretinal layers of rat retinas; Ref. 28, SD-OCT, mouse retina); and 11 boundaries (Ref. 22, SD-OCT, retinal layers of human eyes). The methods thus vary in OCT type (TD-OCT or SD-OCT), purpose (total retinal thickness, optic nerve head, intraretinal, outer retinal, or pathology), and subject (human or animal). Among the methods segmenting more layer boundaries than the proposed method, Refs. 17 and 28 focused on rat (or mouse) retinas, which differ from human retinas; Refs. 22, 24, and 31 adopted Cirrus HD-OCT machines (Carl Zeiss Meditec, Inc., Dublin, California), Topcon 3-D OCT-1000 equipment (Topcon Medical Systems, Oakland, New Jersey), and the Spectralis OCT system (Heidelberg Engineering, Heidelberg, Germany), respectively, all commercial systems with better signal-to-noise ratio than our customized system. In this study, as experts were only able to delineate 8 layer boundaries due to the image quality, we opted to segment these boundaries to demonstrate the effectiveness of the methodology. We believe that the proposed methodology could be easily extended to extract more layers once the imaging quality is enhanced to be comparable with that of existing commercial systems.
To conclude, an approach for the automatic measurement of macular thickness from SD-OCT has been proposed and validated. A quantitative evaluation has been performed on 21 volumetric data sets covering the usually observed variability, by comparing the results with those obtained manually by two experts. For segmenting 8 layer boundaries on each B-scan, quantitative results show that the average layer boundary error is below , smaller than the difference between the two experts' manual delineations, with an average processing time of 5.8 s for a volumetric data set () and 95 ms for an individual B-scan. The algorithm has been compared with state-of-the-art methods (Refs. 23 and 26) on a common dataset of normal eyes23 and a common dataset of AMD patients26, yielding similar or higher accuracy (Tables 2 and 3) while being 37 times faster in deriving the 8 or 3 layer boundaries of a B-scan. The proposed method could be a potential tool to quantify the retinal layer boundaries of normal subjects and AMD patients from SD-OCT. In the future, we plan to extend the methodology to more layer boundaries on one hand, and to more eye diseases, such as diabetic macular edema,51 for computer-assisted diagnosis on the other.
We would like to thank Dr. Stephanie J. Chiu for providing the datasets introduced in Ref. 23 (normal) and Ref. 26 (pathological). The work is supported by the Key Joint Program of the National Natural Science Foundation and Guangdong Province of China (U1201257), the Shenzhen Key Technical Development Grant (No. CXZZ20140610151856719), and the Introduced Innovative R&D Teams of Guangdong Province of China programs "Robot and Intelligent Information Technology" (Grant No. 201001D0104648280) and "Technologies for Image-guided Bio-simulation Radiotherapy Machinery" (Grant No. 2011S013).
M. Shahidi, Z. Wang and R. Zelkha, "Quantitative thickness measurement of retinal layers imaged by optical coherence tomography," Am. J. Ophthalmol. 139(6), 1056–1061 (2005). http://dx.doi.org/10.1016/j.ajo.2005.01.012
D. Cabrera Fernández, H. M. Salinas and C. A. Puliafito, "Automated detection of retinal layer structures on optical coherence tomography images," Opt. Express 13(25), 10200–10216 (2005). http://dx.doi.org/10.1364/OPEX.13.010200
M. Szkulmowski et al., "Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies," J. Biomed. Opt. 12(4), 041207 (2007). http://dx.doi.org/10.1117/1.2771569
O. Tan et al., "Mapping of macular substructures with optical coherence tomography for glaucoma diagnosis," Ophthalmology 115(6), 949–956 (2008). http://dx.doi.org/10.1016/j.ophtha.2007.08.011
M. Bagci et al., "Thickness profiles of retinal layers by optical coherence tomography image segmentation," Am. J. Ophthalmol. 146(5), 679–687 (2008). http://dx.doi.org/10.1016/j.ajo.2008.06.010
C. Ahlers et al., "Automatic segmentation in three-dimensional analysis of fibrovascular pigment epithelial detachment using high-definition optical coherence tomography," Br. J. Ophthalmol. 92, 197–203 (2008). http://dx.doi.org/10.1136/bjo.2007.120956
T. Fabritius et al., "Automated segmentation of the macula by optical coherence tomography," Opt. Express 17(18), 15659–15669 (2009). http://dx.doi.org/10.1364/OE.17.015659
D. Koozekanani, K. L. Boyer and C. Roberts, "Retinal thickness measurements from optical coherence tomography using a Markov boundary model," IEEE Trans. Med. Imaging 20(9), 900–916 (2001). http://dx.doi.org/10.1109/42.952728
K. L. Boyer, A. Herzog and C. Roberts, "Automatic recovery of the optic nervehead geometry in optical coherence tomography," IEEE Trans. Med. Imaging 25(5), 553–570 (2006). http://dx.doi.org/10.1109/TMI.2006.871417
V. J. Srinivasan et al., "Characterization of outer retinal morphology with high-speed, ultrahigh-resolution optical coherence tomography," Invest. Ophthalmol. Vis. Sci. 49(4), 1571–1579 (2008). http://dx.doi.org/10.1167/iovs.07-0838
M. Mujat et al., "Retinal nerve fiber layer thickness map determined from optical coherence tomography images," Opt. Express 13(23), 9480–9491 (2005). http://dx.doi.org/10.1364/OPEX.13.009480
D. C. Fernández, "Delineating fluid-filled region boundaries in optical coherence tomography images of the retina," IEEE Trans. Med. Imaging 24(8), 929–945 (2005). http://dx.doi.org/10.1109/TMI.2005.848655
A. Mishra et al., "Intra-retinal layer segmentation in optical coherence tomography images," Opt. Express 17(26), 23719–23728 (2009). http://dx.doi.org/10.1364/OE.17.023719
A. Yazdanpanah et al., "Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach," IEEE Trans. Med. Imaging 30(2), 484–496 (2011). http://dx.doi.org/10.1109/TMI.2010.2087390
M. A. Mayer et al., "Retinal nerve fiber layer segmentation on FD-OCT scans of normal subjects and glaucoma patients," Biomed. Opt. Express 1(5), 1358–1383 (2010). http://dx.doi.org/10.1364/BOE.1.001358
M. K. Garvin et al., "Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search," IEEE Trans. Med. Imaging 27(10), 1495–1505 (2008). http://dx.doi.org/10.1109/TMI.2008.923966
M. K. Garvin et al., "Automated 3D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images," IEEE Trans. Med. Imaging 28(9), 1436–1447 (2009). http://dx.doi.org/10.1109/TMI.2009.2016958
K. Lee et al., "Segmentation of the optic disc in 3-D OCT scans of the optic nerve head," IEEE Trans. Med. Imaging 29(1), 159–168 (2010). http://dx.doi.org/10.1109/TMI.2009.2031324
G. Quellec et al., "Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula," IEEE Trans. Med. Imaging 29(6), 1321–1330 (2010). http://dx.doi.org/10.1109/TMI.2010.2047023
S. J. Chiu et al., "Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation," Opt. Express 18(18), 19413–19428 (2010). http://dx.doi.org/10.1364/OE.18.019413
Q. Yang et al., "Automated layer segmentation of macular OCT images using dual-scale gradient information," Opt. Express 18(20), 21293–21307 (2010). http://dx.doi.org/10.1364/OE.18.021293
Q. Yang et al., "Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa," Biomed. Opt. Express 2(9), 2493–2503 (2011). http://dx.doi.org/10.1364/BOE.2.002493
S. J. Chiu et al., "Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images," Invest. Ophthalmol. Vis. Sci. 53(1), 53–61 (2012). http://dx.doi.org/10.1167/iovs.11-7640
X. Chen et al., "3D segmentation of fluid-associated abnormalities in retinal OCT: probability constrained graph-search-graph-cut," IEEE Trans. Med. Imaging 31(8), 1521–1531 (2012). http://dx.doi.org/10.1109/TMI.2012.2191302
P. P. Srinivasan et al., "Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology," Biomed. Opt. Express 5(2), 348–365 (2014). http://dx.doi.org/10.1364/BOE.5.000348
P. A. Dufour et al., "Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints," IEEE Trans. Med. Imaging 32(3), 531–543 (2012). http://dx.doi.org/10.1109/TMI.2012.2225152
K. A. Vermeer et al., "Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images," Biomed. Opt. Express 2(6), 1743–1756 (2011). http://dx.doi.org/10.1364/BOE.2.001743
A. Lang et al., "Retinal layer segmentation of macular OCT images using boundary classification," Biomed. Opt. Express 4(7), 1133–1152 (2013). http://dx.doi.org/10.1364/BOE.4.001133
G. M. Somfai et al., "Automated classifiers for early detection and diagnosis of retinopathy in diabetic eyes," BMC Bioinf. 15(106), 1–10 (2014). http://dx.doi.org/10.1186/1471-2105-15-106
D. S. Greenfield, H. Bagga and R. W. Knighton, "Macular thickness changes in glaucomatous optic neuropathy detected using optical coherence tomography," Arch. Ophthalmol. 121, 41–46 (2003). http://dx.doi.org/10.1001/archopht.121.1.41
V. Guedes et al., "Optical coherence tomography measurement of macular and nerve fiber layer thickness in normal and glaucomatous human eyes," Ophthalmology 110, 177–189 (2003). http://dx.doi.org/10.1016/S0161-6420(02)01564-6
H. W. van Dijk et al., "Selective loss of inner retinal layer thickness in type 1 diabetic patients with minimal diabetic retinopathy," Invest. Ophthalmol. Vis. Sci. 50, 3404–3409 (2009). http://dx.doi.org/10.1167/iovs.08-3143
E. M. Frohman et al., "Optical coherence tomography: a window into the mechanisms of multiple sclerosis," Nat. Clin. Pract. Neurol. 4, 664–675 (2008). http://dx.doi.org/10.1038/ncpneuro0950
S. Saidha et al., "Primary retinal pathology in multiple sclerosis as detected by optical coherence tomography," Brain 134, 518–533 (2011). http://dx.doi.org/10.1093/brain/awq346
S. Saidha et al., "Microcystic macular oedema, thickness of the inner nuclear layer of the retina, and disease characteristics in multiple sclerosis: a retrospective study," Lancet Neurol. 11, 963–972 (2012). http://dx.doi.org/10.1016/S1474-4422(12)70213-2
M. Jacob, T. Blu and M. Unser, "Efficient energies and algorithms for parametric snakes," IEEE Trans. Image Process. 13(9), 1231–1244 (2004). http://dx.doi.org/10.1109/TIP.2004.832919
A. Savitzky and M. J. E. Golay, "Smoothing and differentiation of data by simplified least squares procedures," Anal. Chem. 36(8), 1627–1639 (1964). http://dx.doi.org/10.1021/ac60214a047
E. Götzinger et al., "Retinal pigment epithelium segmentation by polarization sensitive optical coherence tomography," Opt. Express 16(21), 16410–16422 (2008). http://dx.doi.org/10.1364/OE.16.016410
Tianqiao Zhang has been a PhD student at Shenzhen Institutes of Advanced Technology since 2011. He received his master's degree from Guangxi Normal University of China in 2006. He then worked as a software engineer for Huawei Technologies Co., Ltd., and as a senior image analysis engineer and team leader at Shenzhen Moptim Imaging Technique Co., Ltd. His research interests include medical image processing and machine vision.
Zhangjun Song received his PhD from Tsinghua University in 2007. He then worked as an assistant researcher and was later promoted to principal investigator at Shenzhen Institutes of Advanced Integration Technology, Chinese Academy of Sciences. He also worked as a research fellow at the National University of Singapore for more than one year. His research interests include machine vision and motion control of mobile robots.
Xiaogang Wang has been with the Shanxi Eye Hospital since 2014 as a resident in ophthalmology. He obtained his MD degree from Shanghai Jiao Tong University. His research activities mainly concern ophthalmic imaging technologies, especially optical coherence tomography and Scheimpflug imaging.
Huimin Zheng is a research assistant at Shenzhen Institutes of Advanced Technology. She received her BS degree in information and computing technology from Guilin University of Electronic Technology in 2010. Her current research interest is medical image processing.
Fucang Jia received his PhD in computer application technology from the Institute of Computing Technology, Chinese Academy of Sciences, in 2004. He then worked as an engineer and senior engineer at Shenzhen Anke High-Tech Co., Ltd. In 2008, he joined Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, as a principal investigator. His research interests include medical image processing and computer-assisted surgery.
Jianhuang Wu received his PhD from Shenyang Institute of Automation, Chinese Academy of Sciences, in 2007. He is currently an associate professor at the Research Laboratory for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. His research interests include virtual surgery, medical visualization, and computer graphics.
Guanglin Li received his PhD from Zhejiang University in 1997. He was a research associate at University of Illinois at Chicago. Then he joined the BioTechPlex Corporation as a senior scientist, and later became a senior research scientist in the Neural Engineering Center for Artificial Limbs at the Rehabilitation Institute of Chicago. He is currently a professor with Shenzhen Institutes of Advanced Technology. His current research interests include neural engineering and computational biomedical engineering.
Qingmao Hu has been a professor at Shenzhen Institutes of Advanced Technology since 2007. He received his PhD from Huazhong University of Science and Technology in 1990. He then worked as a lecturer and associate professor at Southern Medical University of China, a guest scientist at the University of Bern in Switzerland, and a senior scientist for A*STAR in Singapore. His research interests include medical image processing, pattern recognition, and computer-aided diagnosis.