Length-adaptive graph search for automatic segmentation of pathological features in optical coherence tomography images

Brenton Keller, David Cunefare, Dilraj S. Grewal, Tamer H. Mahmoud, Joseph A. Izatt, Sina Farsiu

Journal of Biomedical Optics 21(7), 076015 (28 July 2016)
Abstract
We introduce a new metric for graph search and demonstrate its application for segmenting retinal optical coherence tomography (OCT) images of macular pathology. Our proposed "adjusted mean arc length" (AMAL) metric is an adaptation of the lowest mean arc length search technique for automated OCT segmentation. We compare this method to Dijkstra's shortest path algorithm, which we utilized previously in our popular graph theory and dynamic programming segmentation technique. As an illustrative example, we show that AMAL-based length-adaptive segmentation outperforms the shortest path approach in delineating the retina/vitreous boundary of patients with full-thickness macular holes when compared with expert manual grading.

1. Introduction

Optical coherence tomography (OCT) is widely utilized for many medical imaging applications, especially for diagnosing retinal diseases.1 The thickness of individual layers in the retina is an important quantitative biomarker for diagnosis, management, and prognosis of ocular and neurodegenerative diseases. As manual segmentation of retinal layer boundaries visualized on OCT images is often too time-consuming and subjective for utilization in a clinical setting, several automated segmentation algorithms have been developed.2–24 Despite progress in developing algorithms applicable to images of normal eyes and of eyes with limited pathology, segmentation of retinal layers with significant pathology remains a challenging problem. For diseased retinas with severe deformation, graph-based segmentation frameworks have been among the most promising.7,11–16,20

Finding the shortest path through a graph, often implemented via Dijkstra's algorithm,25 is the core engine behind many graph-based automated segmentation methods, including our popular graph theory and dynamic programming (GTDP) framework.2,7,11 For healthy retinas [Fig. 1(a)]26 with layers that cross each A-scan only once, the shortest path metric works well. However, the shortest path metric cannot faithfully represent certain classes of pathology in retinal images. An example is the case of a full-thickness macular hole, which is defined as a lesion with interruption of all layers of the neurosensory retina that develops in the fovea27 [Fig. 1(b)]. In these pathologic eyes, the retina/vitreous boundary can cross an A-scan more than once, negating the applicability of Dijkstra's algorithm (Fig. 2).

Fig. 1

Comparison of a macular OCT image centered at the fovea in (a) a healthy retina and (b) a retina with a full-thickness macular hole.


Fig. 2

Comparison of the shortest path and length-adaptive graph search. The function in (a), which may represent a normal retinal layer boundary, crosses each A-scan only once. Thus, the shortest path perfectly matches the contour of the target object, as this path goes through the minimum number of vertices, which have the lowest weights (b). However, the nonfunction in (c), which may represent the retina/vitreous boundary in a full-thickness macular hole case, crosses some A-scans more than once. In this case, the shortest path metric does not match the contour of the object (d), because an alternative path passing through fewer vertices, even though they may have higher weights, has a lower overall path length. On the other hand, due to path length normalization, the length-adaptive search is less sensitive to the geometric path length and correctly segments both features [(e) and (f)].


The main contributions of this paper are to

  • 1. demonstrate the shortcomings of the popular shortest path metric and the importance of utilizing length-adaptive metrics in design of graph-based layer segmentation algorithms,

  • 2. introduce a new length-adaptive graph search metric, applicable to a wide variety of layer segmentation problems in medical images,

  • 3. introduce an algorithm for implementing this metric in the graph-search framework, and

  • 4. show, as a practical example, how this metric can be utilized in an automatic algorithm for segmenting the retina/vitreous boundary in OCT images of eyes with full-thickness macular holes.

2. Methods

When applying a general graph search technique to image boundary segmentation, each pixel in the image is converted to a node in the graph. Nodes in our graph are connected to their eight surrounding neighbors via positively weighted edges. Edge weights between neighbors are assigned based on a feature computed between the two nodes (pixels) the edge connects. This feature is then converted into a weight via a mapping function. The cost of traversing an edge expresses how likely it is that the edge corresponds to a boundary.28 Examples of features include, but are not limited to, pixel intensity and the gradient between the neighboring pixels. Once the graph of nodes and edges is constructed, boundaries (layers) can be found by searching for paths through the graph. To find a path through the graph, search algorithms seek paths that minimize a cost metric (e.g., the sum of the edge weights in a path, also known as the total path length).
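For illustration, a minimal C++ sketch of this graph construction is shown below. The container layout, the placeholder feature (mean intensity of the two pixels), and its mapping to a positive weight are illustrative assumptions only and do not correspond exactly to our released implementation (Ref. 41).

```cpp
// Minimal sketch: build an 8-connected, positively weighted graph from a
// grayscale image. The feature (mean intensity of the two pixels) and its
// mapping to a weight are placeholders, not the choices used in the paper.
#include <cstddef>
#include <vector>

struct Edge {
    std::size_t to;  // index of the neighboring node (row * width + col)
    double weight;   // positive cost of traversing this edge
};

using Graph = std::vector<std::vector<Edge>>;

Graph buildGraph(const std::vector<std::vector<double>>& image) {
    const std::size_t rows = image.size();
    const std::size_t cols = rows ? image[0].size() : 0;
    Graph graph(rows * cols);

    for (std::size_t r = 0; r < rows; ++r) {
        for (std::size_t c = 0; c < cols; ++c) {
            // Connect each pixel to its eight surrounding neighbors.
            for (int dr = -1; dr <= 1; ++dr) {
                for (int dc = -1; dc <= 1; ++dc) {
                    if (dr == 0 && dc == 0) continue;
                    const long nr = static_cast<long>(r) + dr;
                    const long nc = static_cast<long>(c) + dc;
                    if (nr < 0 || nc < 0 || nr >= static_cast<long>(rows) ||
                        nc >= static_cast<long>(cols)) continue;

                    // Placeholder feature: mean intensity of the two pixels.
                    const double feature = 0.5 * (image[r][c] + image[nr][nc]);
                    // Placeholder mapping: darker pixel pairs receive lower
                    // cost; the small constant keeps every weight positive.
                    const double weight = feature + 1e-5;
                    const std::size_t neighbor =
                        static_cast<std::size_t>(nr) * cols + static_cast<std::size_t>(nc);
                    graph[r * cols + c].push_back({neighbor, weight});
                }
            }
        }
    }
    return graph;
}
```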

2.1. Length-Adaptive Graph Search

In the shortest path problem, given a graph with a set of vertices (nodes) and positive weighted edges, the goal is to find a path that connects a start vertex to an end vertex by minimizing the total path length

Eq. (1)

$$\text{Path Length} = \sum_{i=1}^{N-1} w_i,$$
where $N$ is the total number of nodes and $w_i$ is the $i$th edge weight in the path. Following the principle of mean arc length29 and normalized cuts,30 in our proposed length-adaptive method the goal is to find a path that minimizes the following metric, which we call the AMAL,

Eq. (2)

$$\text{AMAL} = \frac{1}{N-1}\left(\sum_{i=1}^{N-1} w_i\right)^{x},$$
where $x$ is a constant. In the case where $x$ is one, Eq. (2) simplifies to the mean arc length of the path.
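To make the difference between Eqs. (1) and (2) concrete, the short C++ sketch below computes both metrics for a path described by its sequence of edge weights; the function names are illustrative.

```cpp
// Sketch: total path length [Eq. (1)] versus AMAL [Eq. (2)] for a path given
// by its sequence of edge weights. Function names are illustrative.
#include <cmath>
#include <numeric>
#include <vector>

double pathLength(const std::vector<double>& edgeWeights) {
    // Eq. (1): sum of the edge weights along the path.
    return std::accumulate(edgeWeights.begin(), edgeWeights.end(), 0.0);
}

double adjustedMeanArcLength(const std::vector<double>& edgeWeights, double x) {
    // Eq. (2): (sum of the weights)^x divided by the number of edges, N - 1.
    // With x = 1 this reduces to the ordinary mean arc length.
    return std::pow(pathLength(edgeWeights), x) /
           static_cast<double>(edgeWeights.size());
}
```

For a boundary that winds through many low-weight edges, Eq. (1) grows with every additional edge, whereas Eq. (2) with x close to one grows far more slowly; this is why the length-adaptive search can follow long, convoluted boundaries that the shortest path avoids.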

To demonstrate the effect of varying the value of the exponent in Eq. (2), we segmented the same image with four different values of x, as shown in Fig. 3. Utilization of a graph search metric that is completely independent of path length [i.e., x=1, Fig. 3(a)] may result in erroneously long paths. Increasing the value of x causes the length-adaptive algorithm to become more sensitive to the total distance of the path. This results in a shorter overall path length for the segmentation, which, when segmenting the retina/vitreous boundary of a full-thickness macular hole, reduces the accuracy. Searching for a path with an AMAL calculated with an exponent greater than 1 in Eq. (2) therefore provides a balance between the total distance of the path and the mean arc length of the path.

Fig. 3

Illustration of the effect varying x in Eq. (2) has on retina/vitreous boundary segmentation in a patient with a macular hole. (a) Segmentation with x=1.0 (i.e., the mean arc length of the path).29 (b) Segmentation with x=1.02 (value used in validation). (c) Segmentation with x=1.25. (d) Segmentation with x=1.75.


2.2. Implementation of the Adjusted Mean Arc Length Metric

Unfortunately, unlike classic shortest path-based segmentation, which can be implemented efficiently using Dijkstra's algorithm,2 implementing length-adaptive graph search with the AMAL metric is more challenging. Wimer et al.29 developed two methods for finding the path with the lowest mean arc length in weighted graphs: an iterative approach that approximates the solution and an exact solution. These two methods could potentially be modified to find the path with the lowest AMAL, but both assume the graph is acyclic, so that the search space prevents visiting a vertex more than once. Because we want to segment arbitrary shapes (Fig. 4), edges between neighboring nodes must be bidirectional, which creates cycles in the search space. Without cycles, the segmentation could only search in one direction along the horizontal axis of the image, either left-to-right or right-to-left. Thus, in order to segment arbitrary shapes, our overall graph search space, but not the final segmentation, must contain cycles.

Fig. 4

Illustration of a graph that prevents cycles in the search space. Black arrows represent attempted segmentation and the white squares represent the image to be segmented. Because the graph search space does not contain cycles, there cannot be bidirectional edges between neighboring nodes. Therefore, correct segmentation of nonfunctional shapes becomes impossible (red circle). To prevent this problem, we allow our graph search space to contain cycles.


In this paper, to find an estimate of the path with the lowest AMAL in a graph, we created a priority queue sorted by the AMAL value of visited nodes. For each node in the graph, we recorded the number of neighbor nodes, the edge weights to the neighbor nodes, the mean arc length of the path up to that node, the previous node, the number of previous nodes in the path, and the AMAL of the path up to that node. The priority queue was initialized with the start node, which had an AMAL of zero, and we set the AMAL value of all other nodes to infinity. We then modified the iterative Dijkstra’s method outlined by Chen et al.31 to greedily search for the path with the lowest AMAL. At every iteration, we removed the node with the lowest adjusted mean arc length from the top of the queue and made it the current node (Fig. 5 line 4). We generated a new AMAL for each neighbor of the current node (Fig. 5 line 6) using the equation

Eq. (3)

$$\text{AMAL}_{\text{new}} = \frac{\left(N_i \bar{w}_i + w_{ij}\right)^{x}}{N_i + 1},$$
where $\bar{w}_i$ is the mean arc length of the path up to the current node, $w_{ij}$ is the weight between the current node and the neighbor node, $N_i$ is the current node's number of previous nodes, and $x$ is the exponential factor. We then checked each neighbor node to see if it satisfied one of four conditions. If the new AMAL was lower than the old AMAL, the mean arc length, AMAL, previous node, and number of previous nodes were all updated and the neighbor node was placed in the queue (Fig. 5 lines 7 to 8). If the new path to the neighbor superseded its stored path, we updated the mean arc length, AMAL, previous node, and number of previous nodes and placed the neighbor node in the queue (Fig. 5 lines 9 to 16). If the new path to the end node (used for automatic endpoint initialization, see below) superseded its stored path, we looked at all the nodes connected to the end node, found the one that produced the lowest AMAL for the end node, and updated the end node accordingly (Fig. 5 lines 17 to 21). The algorithm terminated when there were no more nodes left in the queue. To determine the path with the lowest AMAL, we started at the end node and traced backward through previous nodes until the start node was reached.
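A condensed C++ sketch of this greedy priority-queue search is given below for reference (cf. Fig. 5). The end-node bookkeeping and the application-specific constraints of Sec. 2.3 are omitted, and the data structures are illustrative rather than the exact ones used in our released implementation (Ref. 41).

```cpp
// Condensed sketch of the greedy AMAL search (cf. Fig. 5). End-node handling
// and the constraints of Sec. 2.3 are omitted; structures are illustrative.
#include <cmath>
#include <cstddef>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { std::size_t to; double weight; };
using Graph = std::vector<std::vector<Edge>>;

struct NodeState {
    double meanArc = 0.0;                                   // mean arc length of best path so far
    double amal = std::numeric_limits<double>::infinity();  // AMAL of best path so far
    std::size_t prev = 0;                                   // previous node on that path
    std::size_t numPrev = 0;                                // number of previous nodes (N_i)
};

std::vector<NodeState> amalSearch(const Graph& graph, std::size_t start, double x) {
    std::vector<NodeState> state(graph.size());
    state[start].amal = 0.0;

    // Min-heap keyed on the AMAL value of the node at the time it was pushed.
    using Entry = std::pair<double, std::size_t>;  // (AMAL, node index)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> queue;
    queue.push({0.0, start});

    while (!queue.empty()) {
        const auto [amal, current] = queue.top();
        queue.pop();
        if (amal > state[current].amal) continue;  // stale entry (lazy deletion)

        const double ni = static_cast<double>(state[current].numPrev);
        for (const Edge& edge : graph[current]) {
            // Eq. (3): AMAL of the path extended to this neighbor.
            const double sumNew = ni * state[current].meanArc + edge.weight;
            const double amalNew = std::pow(sumNew, x) / (ni + 1.0);
            if (amalNew < state[edge.to].amal) {
                state[edge.to] = {sumNew / (ni + 1.0), amalNew, current,
                                  state[current].numPrev + 1};
                queue.push({amalNew, edge.to});
            }
        }
    }
    return state;  // trace prev pointers backward from the end node to recover a path
}
```

Because stale queue entries are simply skipped when popped, the sketch follows the same lazy-deletion pattern commonly used with binary-heap implementations of Dijkstra's algorithm.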

Fig. 5

Pseudocode for the length-adaptive graph search algorithm (left). Functions which are called in the algorithm (update_neighbor and find_low_AMAL) are described in pseudocode on the right.


2.3. Representative Application in Retinal Optical Coherence Tomography Layer Segmentation

The length-adaptive graph search algorithm described above is general and applicable to segmenting virtually any layered feature. However, optimal utilization for specific applications requires customization. One such application of this new graph search algorithm is segmentation of the retina/vitreous boundary in OCT images of patients with full-thickness macular holes, which is detailed in the following.

We began the segmentation process by denoising the 8-bit grayscale OCT images using a previously described sparsity-based denoising technique.32 Next, we calculated three gradient images (two horizontal and one vertical) by convolving the image with the [-1,-1,-1,-1,-1,0,1,1,1,1,1], [1,1,1,1,1,0,-1,-1,-1,-1,-1], and [-1;-1;-1;-1;-1;0;1;1;1;1;1] (MATLAB notation) filters. We then linearly normalized the pixel values in these three gradient images to be between 0 and 1. We performed nonmaximum gradient suppression along each column for the vertical gradient image and along each row for the horizontal gradient images using the following equation:

Eq. (4)

$$G_i = \begin{cases} G_i, & \text{if } \operatorname{sgn}(G_{i+1}-G_i)\operatorname{sgn}(G_i-G_{i-1}) < 0,\\ -1, & \text{otherwise,} \end{cases}$$
where $G_i$ is the gradient at index $i$ in the column/row. Finally, we computed the edge weights between adjacent nodes in the graph as2

Eq. (5)

$$w_{ij} = 2 - (G_i + G_j) + 1\times 10^{-5},$$
where $w_{ij}$ is the edge weight between nodes $i$ and $j$, $G_i$ is the gradient at node $i$, and $G_j$ is the gradient at node $j$. If node $j$ was above node $i$, we used the [-1,-1,-1,-1,-1,0,1,1,1,1,1] gradient; if node $j$ had the same y-coordinate as node $i$, we used the [-1;-1;-1;-1;-1;0;1;1;1;1;1] gradient; and if node $j$ was below node $i$, we used the [1,1,1,1,1,0,-1,-1,-1,-1,-1] gradient.
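For concreteness, a simplified C++ sketch of the vertical gradient computation, the per-column nonmaximum suppression of Eq. (4), and the weight mapping of Eq. (5) is shown below. Border handling, the normalization to [0, 1], and the two horizontal gradient images are treated analogously and omitted for brevity; the function names are illustrative and the filter taps may differ from our released implementation.

```cpp
// Simplified sketch of the vertical gradient, the per-column nonmaximum
// suppression of Eq. (4), and the edge-weight mapping of Eq. (5). Border
// handling and the two horizontal gradients are analogous; names are ours.
#include <algorithm>
#include <vector>

using Image = std::vector<std::vector<double>>;  // image[row][col]

// Vertical gradient: convolve each column with an 11-tap derivative filter.
Image verticalGradient(const Image& img) {
    static const double kFilter[11] = {-1, -1, -1, -1, -1, 0, 1, 1, 1, 1, 1};
    const int rows = static_cast<int>(img.size());
    const int cols = rows ? static_cast<int>(img[0].size()) : 0;
    Image g(rows, std::vector<double>(cols, 0.0));
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            for (int k = -5; k <= 5; ++k) {
                const int rr = std::min(std::max(r + k, 0), rows - 1);  // clamp at borders
                g[r][c] += kFilter[k + 5] * img[rr][c];
            }
    return g;  // normalize to [0, 1] before suppression and weighting
}

// Eq. (4): keep only local extrema of the normalized gradient along each
// column; all other entries are set to -1 so they receive large weights.
void nonMaxSuppressColumns(Image& g) {
    const int rows = static_cast<int>(g.size());
    const int cols = rows ? static_cast<int>(g[0].size()) : 0;
    Image out = g;
    for (int c = 0; c < cols; ++c)
        for (int r = 1; r + 1 < rows; ++r) {
            const double d1 = g[r + 1][c] - g[r][c];
            const double d2 = g[r][c] - g[r - 1][c];
            if (!(d1 * d2 < 0.0)) out[r][c] = -1.0;  // not a local extremum
        }
    g = out;
}

// Eq. (5): edge weight between adjacent nodes i and j from their gradients.
double edgeWeight(double gi, double gj) { return 2.0 - (gi + gj) + 1e-5; }
```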

In order not to erroneously segment the bright hyperreflective retinal pigment epithelium (RPE) layer in place of our target retina/vitreous boundary, we performed an additional preprocessing step before running the length-adaptive graph search. We began by creating a binary image to isolate the nerve fiber layer (NFL) and RPE using the technique described by Chiu et al.7 Next, we divided each binary image horizontally into two equal halves and processed them separately. We obtained pilot estimates of the RPE and NFL layers by using Dijkstra's algorithm to segment each halved section twice. We performed a third segmentation if over 80% of the columns in the binary image contained more than two black-to-white transitions. This third segmentation was put in place to account for posterior hyaloids (Fig. 6), which in a binary image can appear as another retinal layer. We removed segmented nodes from the graph after finding each layer, and the remaining graph was segmented again. The NFL was chosen to be the highest layer in the image if two layers were detected, or the middle layer if three layers were detected.
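The decision of whether a third pilot segmentation is needed reduces to counting black-to-white transitions per column of the binary image, as in the following C++ sketch; the helper name and the representation of the binary image are illustrative assumptions.

```cpp
// Sketch: decide whether a third pilot segmentation is needed by counting
// black-to-white transitions in each column of the NFL/RPE binary image.
// The helper name and the binary-image representation are illustrative.
#include <cstddef>
#include <vector>

using BinaryImage = std::vector<std::vector<bool>>;  // image[row][col], true = white

bool needsThirdSegmentation(const BinaryImage& binary) {
    const std::size_t rows = binary.size();
    const std::size_t cols = rows ? binary[0].size() : 0;
    std::size_t columnsWithExtraBand = 0;
    for (std::size_t c = 0; c < cols; ++c) {
        std::size_t transitions = 0;
        for (std::size_t r = 1; r < rows; ++r)
            if (!binary[r - 1][c] && binary[r][c]) ++transitions;  // black -> white
        if (transitions > 2) ++columnsWithExtraBand;
    }
    // Over 80% of columns with more than two transitions suggests an extra
    // band, e.g., a posterior hyaloid, so a third segmentation is performed.
    return columnsWithExtraBand > 0.8 * static_cast<double>(cols);
}
```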

Fig. 6

Image of full-thickness macular hole with posterior hyaloid, denoted by orange arrows, in the vitreous.


After determining the estimate for the NFL, we increased the edge weights in the graph above and below the NFL estimate in proportion to their vertical distance from the estimate using

Eq. (6)

$$w_{x,y} = w_{x,y} + \left|y - \text{NFL}_x\right| \times 0.05,$$
where $w_{x,y}$ represents the nonzero edge weights for all vertices connected to the node $(x,y)$ and $\text{NFL}_x$ represents the y-location of the NFL estimate in column $x$. We only increased the edge weights of nodes in the left and right thirds of the image, as we assumed the macular hole was in the center third of the image.
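A C++ sketch of this weight adjustment is shown below; the graph types follow the earlier illustrative sketches, and only the outgoing edges of each node are adjusted here for simplicity.

```cpp
// Sketch of Eq. (6): increase edge weights in proportion to the vertical
// distance from the pilot NFL estimate, applied only to the left and right
// thirds of the image. Only outgoing edges of each node are adjusted here.
#include <cmath>
#include <cstddef>
#include <vector>

struct Edge { std::size_t to; double weight; };
using Graph = std::vector<std::vector<Edge>>;

void penalizeDistanceFromNfl(Graph& graph, const std::vector<double>& nflEstimate,
                             std::size_t rows, std::size_t cols) {
    for (std::size_t x = 0; x < cols; ++x) {
        if (x >= cols / 3 && x < 2 * cols / 3) continue;  // skip the center third
        for (std::size_t y = 0; y < rows; ++y) {
            const double penalty =
                std::fabs(static_cast<double>(y) - nflEstimate[x]) * 0.05;
            for (Edge& edge : graph[y * cols + x]) edge.weight += penalty;  // Eq. (6)
        }
    }
}
```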

We added two more nodes to the graph for automatic endpoint initialization, a start node and an end node. The start node was defined to be neighbors with every pixel in the first column of the image, and every pixel in the last column of the image was neighbors with the end node. The weights between the start/end node and its neighbors were set to the minimum weight value of 1e-5. A flow chart of the segmentation process is shown in Fig. 7.
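These virtual endpoints can be appended to the graph as sketched below; the types and node indexing follow the earlier illustrative sketches, and bidirectional edges are used here only for simplicity.

```cpp
// Sketch: append virtual start and end nodes for automatic endpoint
// initialization. The start node is connected to every pixel in the first
// column and the end node to every pixel in the last column, all with the
// minimum weight of 1e-5. Types and indexing follow the earlier sketches.
#include <cstddef>
#include <utility>
#include <vector>

struct Edge { std::size_t to; double weight; };
using Graph = std::vector<std::vector<Edge>>;

std::pair<std::size_t, std::size_t> addEndpoints(Graph& graph,
                                                 std::size_t rows, std::size_t cols) {
    const double kMinWeight = 1e-5;
    const std::size_t startNode = graph.size();
    const std::size_t endNode = graph.size() + 1;
    graph.resize(graph.size() + 2);

    for (std::size_t y = 0; y < rows; ++y) {
        // Start node <-> first column (bidirectional here for simplicity).
        graph[startNode].push_back({y * cols, kMinWeight});
        graph[y * cols].push_back({startNode, kMinWeight});
        // Last column <-> end node.
        graph[y * cols + (cols - 1)].push_back({endNode, kMinWeight});
        graph[endNode].push_back({y * cols + (cols - 1), kMinWeight});
    }
    return {startNode, endNode};
}
```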

Fig. 7

Graphical representation of the segmentation process. The original image is shown in (a), which is denoised to create image (b). We then obtain a rough estimate of the location of the nerve fiber layer (top white band) and retinal pigment epithelium (bottom white band) using the binary image (c) as described in Sec. 2.3. From (b) we also create the gradient images (d). Next, we create the graph (e) from the gradient images and modify the edge weights based on the pilot estimation of the nerve fiber layer location from (c). Finally, we perform the length-adaptive graph search (green) and shortest path search (magenta) to segment the retina/vitreous boundary (f).


Based on testing performed with our training data set, we set the exponential parameter x of our AMAL metric in Eq. (2) to 1.02. For the specific application of retinal layer segmentation, we added two extra conditions in the implementation of the length-adaptive graph search algorithm to decide whether or not to add a neighbor node to the priority queue. If the neighbor node’s new AMAL was lower than its stored AMAL, we also checked to see if one of the following constraints was violated before updating corresponding parameters and placing the neighbor node in the queue:

  • the neighbor node was already in the path, or

  • the neighbor node was closer than the lateral or axial pixel resolution to a set of nodes already in the path.

We invoked the first constraint to prevent cycles in the final result [Fig. 8(a)]. As noted, it is important that the graph search space is able to contain cycles for representing arbitrary shapes, but no cycles are desired in the final segmentation. The second constraint was invoked to prevent adding unnecessary nodes to the segmentation, e.g., to prevent the segmentation from folding back on itself [Fig. 8(b)]. If neither of these constraints was violated, the appropriate properties of the node were updated and the node was placed in the queue. Otherwise, the node was not added to the queue.
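A C++ sketch of how these two checks might be expressed before a neighbor is queued is given below. The path is recovered by walking the stored previous-node pointers, and the proximity test is simplified to a one-pixel neighborhood; both the structure names and this simplification are ours rather than the exact logic of Fig. 5.

```cpp
// Sketch of the two application-specific checks applied before a neighbor is
// queued: (1) the neighbor must not already lie on the candidate path, and
// (2) it must not fall within one pixel of path nodes other than the current
// one (a simplification of the resolution-based test). Names are ours.
#include <cstddef>
#include <cstdlib>
#include <vector>

struct PathNode { std::size_t index; int row; int col; };

// Recover the candidate path by walking the stored previous-node pointers.
std::vector<PathNode> tracePath(const std::vector<std::size_t>& prev,
                                std::size_t start, std::size_t current,
                                std::size_t cols) {
    std::vector<PathNode> path;
    for (std::size_t n = current;; n = prev[n]) {
        path.push_back({n, static_cast<int>(n / cols), static_cast<int>(n % cols)});
        if (n == start) break;
    }
    return path;
}

bool violatesConstraints(const std::vector<PathNode>& path, std::size_t neighbor,
                         std::size_t current, std::size_t cols) {
    const int nRow = static_cast<int>(neighbor / cols);
    const int nCol = static_cast<int>(neighbor % cols);
    for (const PathNode& p : path) {
        if (p.index == neighbor) return true;  // constraint 1: would create a cycle
        if (p.index == current) continue;      // the current node is always adjacent
        if (std::abs(p.row - nRow) <= 1 && std::abs(p.col - nCol) <= 1)
            return true;                       // constraint 2: path folds back on itself
    }
    return false;
}
```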

Fig. 8

Illustration of when neighbor nodes were not added to the priority queue in the length-adaptive graph search algorithm. (a) Node circled in red that would not be added to the priority queue because it causes a cycle. (b) Node that would not be added to the priority queue because it is too close to other nodes in the segmentation.


2.4. Dataset

This study was approved by the Duke University Health System Institutional Review Board in accordance with Health Insurance Portability and Accountability Act regulations and the standards of the 1964 Declaration of Helsinki. Ten different patients with full-thickness macular holes were imaged with a Heidelberg Spectralis (Heidelberg Engineering, Heidelberg, Germany) OCT system using a radial scan pattern centered on the fovea. Twenty-four images were collected from each patient. Nine of the 10 sets of images were 512×496 (width and height) pixels, with pixel resolution 11.06 to 11.65 μm laterally and 3.87 μm axially. The other set of images was 1024×496 with a lateral pixel resolution of 5.73 μm and an axial pixel resolution of 3.87 μm. Five images from each patient were randomly selected for segmentation. These images were manually segmented by a clinical retina specialist, and the manual segmentations were compared with the previous shortest path segmentation and our length-adaptive segmentation. We derived all algorithm parameters from a separate training set of images from subjects with full-thickness macular holes. None of the training images were used in validation. For a fair comparison, all preprocessing steps were performed for both the shortest path and length-adaptive techniques.

3. Results

A qualitative comparison of the two segmentation methods is shown in Fig. 9. The top row shows the original images, from four separate patients with full-thickness macular holes, and the bottom row shows the length-adaptive (green) and shortest path (magenta) segmentations. These results show that the length-adaptive method more closely follows the contours of the macular holes, while the shortest path algorithm misses them entirely.

Fig. 9

Segmentation of the retina/vitreous boundary of four patients with full-thickness macular holes using the shortest path and length-adaptive algorithms. The top row shows the original images and the bottom row shows the segmented images. Magenta denotes the classic shortest path segmentation and green denotes the proposed length-adaptive segmentation. Where the magenta is not visible, the two methods segmented the same line. The far-right image shows an example where the length-adaptive algorithm failed to successfully segment the retina/vitreous boundary in the lower-left part of the macular hole.


For each B-Scan, we calculated the mean distance between each pixel location on the manual segmentation and the closest point on the automatic segmentations. Then, we computed the average, standard deviation, and median of these mean pixel distances for all 50 B-Scans as a measure of error. The mean, standard deviation, and median of the error for the length-adaptive and shortest path algorithms are shown in Table 1.

Table 1

Error comparison of shortest path and length-adaptive segmentation to expert manual grader for 50 images.

Method             Mean (pixels)    Standard deviation (pixels)    Median (pixels)
Shortest path      39.98            10.85                          42.31
Length-adaptive     3.40             5.74                           1.31
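The per-B-scan error reported above is a one-directional mean closest-point distance from the manual grading to the automatic boundary; a minimal C++ sketch of this measure (in pixels) is shown below, with the point type and function name chosen for illustration.

```cpp
// Sketch: mean distance (in pixels) from each manually graded boundary point
// to the closest point on an automatic segmentation. Names are illustrative.
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Point { double x; double y; };

double meanClosestPointDistance(const std::vector<Point>& manualBoundary,
                                const std::vector<Point>& autoBoundary) {
    double total = 0.0;
    for (const Point& m : manualBoundary) {
        double best = std::numeric_limits<double>::infinity();
        for (const Point& a : autoBoundary)
            best = std::min(best, std::hypot(m.x - a.x, m.y - a.y));
        total += best;
    }
    return total / static_cast<double>(manualBoundary.size());
}
```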

The mean, standard deviation, and median of the time needed to segment the macular hole images with the shortest path method were 0.043, 0.013, and 0.039 s, respectively. The mean, standard deviation, and median of the time needed to segment the images with the length-adaptive method were 4.53, 10.47, and 2.65 s, respectively. Timing results were obtained on a 64-bit desktop computer with an Intel Core i7-4930K 3.4 GHz CPU. Our implementation of the length-adaptive algorithm was not optimized to run quickly and we expect the execution time could be reduced by applying parallel processing techniques. We could also use segmentation results from images in the same volume to limit the search region to reduce the processing time and improve segmentation accuracy. A similar method was used by Xu et al.33

Further, as an illustrative example, to qualitatively demonstrate that the length-adaptive method can also be used on nonpathological images, we segmented the retina/vitreous boundary with x=1.02 in a normal subject from the dataset distributed by Srinivasan et al.26 This image was not part of our training dataset. Results of this segmentation are shown in Fig. 10 with the original image on the left and the segmented image on the right. As expected, the results of these two techniques for segmenting such simple features are virtually identical.

Fig. 10

Segmentation of the retina/vitreous boundary of a normal subject without a full-thickness macular hole. The left image is the original and the right image is the segmented image. Magenta denotes the shortest path segmentation and green denotes the length-adaptive segmentation. Where the magenta is not visible (most of the pixels in this case) the two methods segmented the same line.


As a demonstrative example of how the length-adaptive graph search can be used to segment other layers in retinal OCT images, we segmented the inner aspect of the RPE in two separate patients with age-related macular degeneration (AMD) (Fig. 11). In order to segment these images, we obtained an estimate of the RPE using the same method described earlier. We then increased the edge weights above the RPE estimate to prevent segmentation of the NFL. When creating the gradient images, we used [-1,-1,-1,0,1,1,1], [1,1,1,0,-1,-1,-1], and [-1;-1;-1;0;1;1;1] filters. Finally, we changed the exponent x in Eq. (2) to 1.4. Optimization of an AMAL-based drusen segmentation algorithm and careful validation are outside the scope of this preliminary paper and will be addressed in our future publications.

Fig. 11

Segmentation of the inner aspect of the RPE in patients with age-related macular degeneration using the shortest path and length-adaptive algorithms. The top row shows the original images and the bottom row shows the segmented images. In the segmented images, magenta denotes the classic shortest path segmentation and green denotes the proposed length-adaptive segmentation. Where the magenta is not visible, the two methods segmented the same line.


4. Discussion

We have demonstrated the utility of a new graph search metric called AMAL. The metric is general and can potentially be applied to virtually any layer (or closed-contour feature that can be converted to a layer34) segmentation problem, including segmentation of intravascular,35 esophageal wall,36 epithelial,37 or epidermal38 OCT images. In this first report, we demonstrate its application for segmentation of the retina/vitreous boundary in patients with full-thickness macular holes. The data obtained from the length-adaptive segmentation could be used to estimate parameters such as macular hole height and base width. Our new method outperformed our popular GTDP method,2 which is based on the shortest path metric, when compared with expert manual grading.

The length-adaptive algorithm we have presented is similar in theory to other graph search algorithms, like Dijkstra's shortest path and Wimer's lowest mean arc length path, in that we are searching through a graph for a path that minimizes a cost metric. As we explained in Sec. 2, what separates our algorithm from these two is the utilization of the AMAL metric, which is a compromise between the total path length and mean arc length metrics. Moreover, since neither Dijkstra's nor Wimer's technique can utilize this new metric, we proposed a new graph search algorithm. This algorithm is an extension of the algorithm used by Chen et al.,31 which is only applicable to the shortest path metric, modified to find the path with the lowest AMAL (Fig. 5) and to account for cycles in the search space (Fig. 8). Because our algorithm performs a greedy search, the solution produced is not guaranteed to be globally optimal. However, as demonstrated in our experiments, this shortcoming does not limit its utilization in practical applications.

We note that although the length-adaptive method performed better than the shortest path algorithm in our experiments on pathologic eyes, it is not able to successfully segment every feature or image. For example, in the far-right image in Fig. 9, the segmentation line does not perfectly adapt to the contour of the macular hole in the region with very weak gradient, suggesting that there is still room for improvement. The length-adaptive algorithm spans a significantly larger search space than the shortest path algorithm because of its insensitivity to path length. In many cases, such as segmenting smooth structures like the ganglion cell layer in normal eyes, it is not reasonable to replace the shortest path method, which can be implemented efficiently via Dijkstra's algorithm, with an AMAL-based method. Thus, we expect that comprehensive software for segmentation of different anatomic and pathologic features in the retina will utilize both the shortest path and AMAL metrics in its algorithm. In this framework, AMAL is only used for segmenting pathologic structures representing nonfunction features [e.g., Fig. 2(c)], while other structures are segmented using the shortest path method. This is an attainable goal because algorithms for detecting diseased eyes26 or deformed or missing layers39 from retinal OCT images have already been developed.

We also note the reliance of our algorithm on a few empirically selected parameters. We should emphasize that since the algorithm's parameters were selected based on a dataset completely separate from the dataset on which its performance was evaluated, the results of this study are generalizable. We should also note that such empirical selection of segmentation parameters is explicitly or implicitly utilized in most other OCT layer segmentation algorithms.2,13,14,40 Alternatively, automated algorithms presented in previous work15 can be utilized to pick the parameters that reduce the error between manual and automatic segmentation boundaries in the training dataset.

In conclusion, we expect the AMAL metric to become a standard component of future graph-based automated segmentation methods. While efficient algorithms for implementing graph search using the shortest path metric are available online, there is a significant need for an algorithm to perform graph search using the AMAL metric. Thus, we have freely distributed online a general-purpose, open-source software package coded in C++ for segmenting graphs using the length-adaptive algorithm. This software can be found in Ref. 41. Extending the test dataset and applying the method to other types of pathology in ophthalmic images are part of our ongoing work.

Acknowledgments

We thank Prof. Robyn Guymer of the Centre for Eye Research Australia, University of Melbourne, and Dr. Frederick L. Ferris III of the National Eye Institute for generously sharing the optical coherence tomography images in Fig. 11. This work was supported in part by NIH Grant No. R01 EY022691 and U.S. Army Medical Research Acquisition Activity Contract No. W81XWH-12-1-0397.

References

1. 

D. Huang et al., “Optical coherence tomography,” Science, 254 (5035), 1178 –1181 (1991). http://dx.doi.org/10.1126/science.1957169 SCIEAS 0036-8075 Google Scholar

2. 

S. J. Chiu et al., “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express, 18 (18), 19413 –19428 (2010). http://dx.doi.org/10.1364/OE.18.019413 OPEXFF 1094-4087 Google Scholar

3. 

A. Mishra et al., “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express, 17 (26), 23719 –23728 (2009). http://dx.doi.org/10.1364/OE.17.023719 OPEXFF 1094-4087 Google Scholar

4. 

M. K. Garvin et al., “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging, 28 (9), 1436 –1447 (2009). http://dx.doi.org/10.1109/TMI.2009.2016958 ITMID4 0278-0062 Google Scholar

5. 

D. Cabrera Fernández, H. M. Salinas and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express, 13 (25), 10200 –10216 (2005). http://dx.doi.org/10.1364/OPEX.13.010200 OPEXFF 1094-4087 Google Scholar

6. 

A. Lang et al., “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express, 4 (7), 1133 –1152 (2013). http://dx.doi.org/10.1364/BOE.4.001133 BOEICL 2156-7085 Google Scholar

7. 

S. J. Chiu et al., “Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images,” Invest Ophthalmol. Visual Sci., 53 (1), 53 –61 (2012). http://dx.doi.org/10.1167/iovs.11-7640 Google Scholar

8. 

H. Ishikawa et al., “Macular segmentation with optical coherence tomography,” Invest. Ophthalmol. Visual Sci., 46 (6), 2012 (2005). http://dx.doi.org/10.1167/iovs.04-0335 IOVSDA 0146-0404 Google Scholar

9. 

K. Vermeer et al., “Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images,” Biomed. Opt. Express, 2 (6), 1743 –1756 (2011). http://dx.doi.org/10.1364/BOE.2.001743 BOEICL 2156-7085 Google Scholar

10. 

Q. Yang et al., “Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa,” Biomed. Opt. Express, 2 (9), 2493 –2503 (2011). http://dx.doi.org/10.1364/BOE.2.002493 BOEICL 2156-7085 Google Scholar

11. 

S. J. Chiu et al., “Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema,” Biomed. Opt. Express, 6 (4), 1172 –1194 (2015). http://dx.doi.org/10.1364/BOE.6.001172 BOEICL 2156-7085 Google Scholar

12. 

B. J. Antony et al., “A combined machine-learning and graph-based framework for the segmentation of retinal surfaces in SD-OCT volumes,” Biomed. Opt. Express, 4 (12), 2712 –2728 (2013). http://dx.doi.org/10.1364/BOE.4.002712 BOEICL 2156-7085 Google Scholar

13. 

J. Tian et al., “Real-time automatic segmentation of optical coherence tomography volume data of the macular region,” PLoS ONE, 10 (8), e0133908 (2015). http://dx.doi.org/10.1371/journal.pone.0133908 POLNCL 1932-6203 Google Scholar

14. 

S. Huang et al., “In vivo microvascular network imaging of the human retina combined with an automatic three-dimensional segmentation method,” J. Biomed. Opt., 20 (7), 076003 (2015). http://dx.doi.org/10.1117/1.JBO.20.7.076003 JBOPFO 1083-3668 Google Scholar

15. 

P. A. Dufour et al., “Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints,” IEEE Trans. Med. Imaging, 32 (3), 531 –543 (2013). http://dx.doi.org/10.1109/TMI.2012.2225152 ITMID4 0278-0062 Google Scholar

16. 

D. Kaba et al., “Retina layer segmentation using kernel graph cuts and continuous max-flow,” Opt. Express, 23 (6), 7366 –7384 (2015). http://dx.doi.org/10.1364/OE.23.007366 OPEXFF 1094-4087 Google Scholar

17. 

J. Novosel et al., “Loosely coupled level sets for simultaneous 3D retinal layer segmentation in optical coherence tomography,” Med. Image Anal., 26 (1), 146 –158 (2015). http://dx.doi.org/10.1016/j.media.2015.08.008 Google Scholar

18. 

T. Zhang et al., “Fast retinal layer segmentation of spectral domain optical coherence tomography images,” J. Biomed. Opt., 20 (9), 096014 (2015). http://dx.doi.org/10.1117/1.JBO.20.9.096014 JBOPFO 1083-3668 Google Scholar

19. 

Q. Chen et al., “Automated choroid segmentation based on gradual intensity distance in HD-OCT images,” Opt. Express, 23 (7), 8974 –8994 (2015). http://dx.doi.org/10.1364/OE.23.008974 OPEXFF 1094-4087 Google Scholar

20. 

M. Zhang et al., “Advanced image processing for optical coherence tomographic angiography of macular diseases,” Biomed. Opt. Express, 6 (12), 4661 –4675 (2015). http://dx.doi.org/10.1364/BOE.6.004661 BOEICL 2156-7085 Google Scholar

21. 

M. A. Mayer et al., “Retinal nerve fiber layer segmentation on FD-OCT scans of normal subjects and glaucoma patients,” Biomed. Opt. Express, 1 (5), 1358 –1383 (2010). http://dx.doi.org/10.1364/BOE.1.001358 BOEICL 2156-7085 Google Scholar

22. 

V. Kajić et al., “Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model,” Biomed. Opt. Express, 3 (1), 86 –103 (2012). http://dx.doi.org/10.1364/BOE.3.000086 BOEICL 2156-7085 Google Scholar

23. 

M. Hidalgo-Aguirre et al., “Automatic segmentation of the optic nerve head for deformation measurements in video rate optical coherence tomography,” J. Biomed. Opt., 20 (11), 116008 (2015). http://dx.doi.org/10.1117/1.JBO.20.11.116008 JBOPFO 1083-3668 Google Scholar

24. 

X. Yin, J. R. Chao and R. K. Wang, “User-guided segmentation for volumetric retinal optical coherence tomography images,” J. Biomed. Opt., 19 (8), 086020 (2014). http://dx.doi.org/10.1117/1.JBO.19.8.086020 JBOPFO 1083-3668 Google Scholar

25. 

E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numerische mathematik, 1 (1), 269 –271 (1959). http://dx.doi.org/10.1007/BF01386390 NUMMA7 0029-599X Google Scholar

26. 

P. P. Srinivasan et al., “Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images,” Biomed. Opt. Express, 5 (10), 3568 –3577 (2014). http://dx.doi.org/10.1364/BOE.5.003568 BOEICL 2156-7085 Google Scholar

27. 

J. S. Duker et al., “The International Vitreomacular Traction Study Group classification of vitreomacular adhesion, traction, and macular hole,” Ophthalmology, 120 (12), 2611 –2619 (2013). http://dx.doi.org/10.1016/j.ophtha.2013.07.042 OPANEW 0743-751X Google Scholar

28. 

A. X. Falcão et al., “User-steered image segmentation paradigms: live wire and live lane,” Graph. Models Image Process., 60 (4), 233 –260 (1998). http://dx.doi.org/10.1006/gmip.1998.0475 Google Scholar

29. 

S. Wimer, I. Koren and I. Cederbaum, “On paths with the shortest average arc length in weighted graphs,” Discrete Appl. Math., 45 (2), 169 –179 (1993). http://dx.doi.org/10.1016/0166-218X(93)90059-W DAMADU 0166-218X Google Scholar

30. 

J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., 22 (8), 888 –905 (2000). http://dx.doi.org/10.1109/34.868688 ITPIDJ 0162-8828 Google Scholar

31. 

M. Chen et al., “Priority queues and Dijkstra’s algorithm,” Computer Science Department, University of Texas, Austin (2007). Google Scholar

32. 

L. Fang et al., “Fast acquisition and reconstruction of optical coherence tomography images via sparse representation,” IEEE Trans. Med. Imaging, 32 (11), 2034 –2049 (2013). http://dx.doi.org/10.1109/TMI.2013.2271904 ITMID4 0278-0062 Google Scholar

33. 

D. Xu et al., “A novel segmentation algorithm for volumetric analysis of macular hole boundaries identified with optical coherence tomography,” Invest. Ophthalmol. Visual Sci., 54 (1), 163 –169 (2013). http://dx.doi.org/10.1167/iovs.12-10246 IOVSDA 0146-0404 Google Scholar

34. 

S. J. Chiu et al., “Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming,” Biomed. Opt. Express, 3 (5), 1127 –1140 (2012). http://dx.doi.org/10.1364/BOE.3.001127 BOEICL 2156-7085 Google Scholar

35. 

Z. Wang et al., “Volumetric quantification of fibrous caps using intravascular optical coherence tomography,” Biomed. Opt. Express, 3 (6), 1413 –1426 (2012). http://dx.doi.org/10.1364/BOE.3.001413 BOEICL 2156-7085 Google Scholar

36. 

G. J. Ughi et al., “Automated segmentation and characterization of esophageal wall in vivo by tethered capsule optical coherence tomography endomicroscopy,” Biomed. Opt. Express, 7 (2), 409 –419 (2016). http://dx.doi.org/10.1364/BOE.7.000409 BOEICL 2156-7085 Google Scholar

37. 

S. Chitchian et al., “Automated segmentation algorithm for detection of changes in vaginal epithelial morphology using optical coherence tomography,” J. Biomed. Opt., 17 (11), 116004 (2012). http://dx.doi.org/10.1117/1.JBO.17.11.116004 JBOPFO 1083-3668 Google Scholar

38. 

A. Li et al., “Epidermal segmentation in high-definition optical coherence tomography,” in 37th Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society (EMBC 2015), 3045 –3048 (2015). http://dx.doi.org/10.1109/EMBC.2015.7319034 Google Scholar

39. 

P. P. Srinivasan et al., “Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology,” Biomed. Opt. Express, 5 (2), 348 –365 (2014). http://dx.doi.org/10.1364/BOE.5.000348 BOEICL 2156-7085 Google Scholar

40. 

J. Tian et al., “Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images,” Biomed. Opt. Express, 4 (3), 397 –411 (2013). http://dx.doi.org/10.1364/BOE.4.000397 BOEICL 2156-7085 Google Scholar

41. 

B. Keller et al., “Length adaptive graph search source code,” www.duke.edu/~sf59/Keller_JBO_2016.htm (July 2016). Google Scholar

Biographies for the authors are not available.

© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)