Optical coherence tomography (OCT) is widely utilized in medical imaging, especially for diagnosing retinal diseases.1 The thickness of individual retinal layers is an important quantitative biomarker for the diagnosis, management, and prognosis of ocular and neurodegenerative diseases. As manual segmentation of retinal layer boundaries visualized on OCT images is often too time-consuming and subjective for clinical use, several automated segmentation algorithms have been developed.2–24 Despite progress in developing algorithms applicable to images of normal eyes and eyes with limited pathology, segmentation of retinal layers with significant pathology remains challenging. For diseased retinas with severe deformation, graph-based segmentation frameworks have been among the most promising.7,11–16,20
Finding the shortest path through a graph, often implemented via Dijkstra’s algorithm,25 is the core engine behind many graph-based automated segmentation methods, including our popular graph theory and dynamic programming (GTDP) framework.2,7,11 For healthy retinas [Fig. 1(a)]26 with layers that cross each A-scan only once, the shortest path metric works well. However, it cannot faithfully represent certain classes of pathology in retinal images. An example is a full-thickness macular hole, defined as a lesion with interruption of all layers of the neurosensory retina that develops in the fovea27 [Fig. 1(b)]. In these pathologic eyes, the retina/vitreous boundary can cross an A-scan more than once, negating the applicability of Dijkstra’s algorithm (Fig. 2).
The main contributions of this paper are to
1. demonstrate the shortcomings of the popular shortest path metric and the importance of utilizing length-adaptive metrics in the design of graph-based layer segmentation algorithms,
2. introduce a new length-adaptive graph search metric, applicable to a wide variety of layer segmentation problems in medical images,
3. introduce an algorithm for implementing this metric in the graph-search framework, and
4. show, as a practical example, how this metric can be utilized in an automatic algorithm for segmenting the retina/vitreous boundary in OCT images of eyes with full-thickness macular holes.
When applying a general graph search technique to image boundary segmentation, each pixel in the image is converted to a node in the graph. Nodes in our graph are connected to their surrounding eight neighbors via positively weighted edges. Edge weights between neighbors are assigned based on a feature between the two nodes (pixels) the edge connects. This feature is then converted into a weight via a mapping function. The cost of traversing an edge expresses how likely it is that the edge lies on a boundary, with lower costs corresponding to more likely boundary edges.28 Examples of features include, but are not limited to, pixel intensity and the gradient between the neighboring pixels. Once the graph of nodes and edges is constructed, boundaries (layers) can be found by searching for paths through the graph. To find a path through the graph, search algorithms seek paths that minimize a cost metric (e.g., the sum of the edge weights in a path, also known as the total path length).
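To make this construction concrete, the following Python sketch converts a small grayscale image into an 8-connected weighted graph. The brightness-based feature and its mapping to edge weights are illustrative assumptions; any boundary-sensitive feature could be substituted.

```python
from itertools import product

def build_graph(image, eps=1e-5):
    """Convert a 2-D grayscale image (values in [0, 1]) into an
    8-connected graph: each pixel becomes a node, and each edge
    weight is derived from a feature between the two pixels the
    edge connects."""
    rows, cols = len(image), len(image[0])
    graph = {}  # node (r, c) -> {neighbor: positive edge weight}
    for r, c in product(range(rows), range(cols)):
        graph[(r, c)] = {}
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) == (0, 0):
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Illustrative feature: combined "darkness" of the two
                # pixels, so edges along bright layers are cheap to
                # traverse; eps keeps all weights strictly positive.
                feature = (1.0 - image[r][c]) + (1.0 - image[nr][nc])
                graph[(r, c)][(nr, nc)] = feature + eps
    return graph

# A 3x3 image with one bright middle row (a "layer"): edges along
# that row receive near-minimal weights.
img = [[0.0, 0.0, 0.0],
       [1.0, 1.0, 1.0],
       [0.0, 0.0, 0.0]]
g = build_graph(img)
```

A minimum-cost path from the left to the right side of this graph then traces the bright row, which is how a boundary emerges from the search.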
Length-Adaptive Graph Search
In the shortest path problem, given a graph with a set of vertices (nodes) and positive weighted edges, the goal is to find a path that connects a start vertex to an end vertex by minimizing the total path length,29

d(p) = Σ_{i=1}^{n} w_i,  (1)

where p is a path with n edges and w_i is the weight of the i’th edge. Because this metric grows with every edge added, it systematically favors short paths. Length-normalized metrics, such as the lowest mean arc length29 and normalized cuts,30 remove this bias by dividing the cost by the size of the path. In our proposed length-adaptive method, the goal is to find a path that minimizes the following metric, which we call the adjusted mean arc length (AMAL),

AMAL(p) = d(p) / n^(1/α),  (2)

where α ≥ 1 is an exponent parameter. For α = 1, Eq. (2) reduces to the mean arc length d(p)/n, which is independent of path length, and as α → ∞, Eq. (2) approaches the total path length d(p).
To demonstrate the effect of varying the value of the exponent (denoted α) in Eq. (2), we have segmented the same image with four different values of α, shown in Fig. 3. Utilization of a graph search metric that is completely independent of path length [i.e., α = 1, Fig. 3(a)] may result in erroneously long paths. Increasing the value of α causes the length-adaptive algorithm to become more sensitive to the total distance of the path. This results in a shorter overall path length of the segmentation, which, when segmenting the retina/vitreous boundary of a full-thickness macular hole, reduces the accuracy. Searching for a path with an AMAL calculated with an exponent greater than 1 in Eq. (2) provides a balance between the total distance of the path and the mean arc length of the path.
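One consistent reading of Eq. (2), used in the Python sketch below, normalizes the total path length by the number of edges raised to 1/α; this form (an assumption where the typeset equation is unavailable) reduces to the mean arc length at α = 1 and approaches the total path length as α grows. The sketch shows the resulting trade-off between a short path and a longer path of cheaper edges.

```python
def amal(weights, alpha):
    """Adjusted mean arc length of a path (assumed form): the total
    path length divided by the number of edges raised to 1/alpha."""
    n = len(weights)
    return sum(weights) / n ** (1.0 / alpha)

short_path = [0.5, 0.5]   # 2 edges, total length 1.0
long_path = [0.3] * 10    # 10 edges, total length 3.0

# alpha = 1 (mean arc length) is indifferent to length and prefers
# the long path of cheap edges...
assert amal(long_path, 1.0) < amal(short_path, 1.0)
# ...while a large alpha approaches the total-length metric and
# prefers the short path, as Dijkstra's algorithm would.
assert amal(short_path, 100.0) < amal(long_path, 100.0)
```

An exponent only slightly above 1, as used later in the paper, keeps the metric close to the mean arc length while still penalizing erroneously long paths.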
Implementation of the Adjusted Mean Arc Length Metric
Unfortunately, unlike the classic shortest path-based segmentation, which can be implemented efficiently using Dijkstra’s algorithm,2 implementation of length-adaptive graph search using the AMAL metric is more challenging. Wimer et al.29 developed two methods for finding the path with the lowest mean arc length in weighted graphs: an iterative approach that approximates the solution and an exact method. These two methods could potentially be modified to find the path with the lowest AMAL, but they both assume the graph is acyclic, i.e., the search space prevents visiting a vertex more than once. Because we want to segment arbitrary shapes (Fig. 4), edges between neighboring nodes must be bidirectional, which creates cycles in the search space. Without cycles, the segmentation could only search in one direction along the horizontal axis of the image, either left-to-right or right-to-left. Thus, in order to segment arbitrary shapes, our overall graph search space, but not the final segmentation, must contain cycles.
In this paper, to find an estimate of the path with the lowest AMAL in a graph, we created a priority queue sorted by the AMAL value of visited nodes. For each node in the graph, we recorded the number of neighbor nodes, the edge weights to the neighbor nodes, the mean arc length of the path up to that node, the previous node, the number of previous nodes in the path, and the AMAL of the path up to that node. The priority queue was initialized with the start node, which had an AMAL of zero, and we set the AMAL value of all other nodes to infinity. We then modified the iterative Dijkstra’s method outlined by Chen et al.31 to greedily search for the path with the lowest AMAL. At every iteration, we removed the node with the lowest AMAL from the top of the queue and made it the current node (Fig. 5, line 4). We generated a new AMAL for each neighbor of the current node (Fig. 5, line 6) by updating the mean arc length of the path with the weight of the connecting edge (Fig. 5, lines 7 to 8). If the new path to the neighbor superseded its stored path, we updated the mean arc length, AMAL, previous node, and number of previous nodes, and placed the neighbor node in the queue (Fig. 5, lines 9 to 16). If the new path to the end node (used for automatic endpoint initialization, see below) superseded its stored path, we examined all the nodes connected to the end node, found the one that produced the lowest AMAL for the end node, and updated the end node accordingly (Fig. 5, lines 17 to 21). The algorithm terminated when there were no more nodes left in the queue. To determine the path with the lowest AMAL, we started at the end node and traced backward through the previous nodes until the start node was reached.
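The iteration above can be sketched as follows in Python. This is a simplified version under an assumed AMAL form (total length divided by the edge count raised to 1/α): it omits the virtual end-node update and the application-specific constraints, while the AMAL-ordered queue, stored mean arc length, predecessor, and edge count follow the bookkeeping described in the text.

```python
import heapq

def lowest_amal_path(graph, start, end, alpha=1.02):
    """Greedy estimate of the path with the lowest AMAL.

    graph maps node -> {neighbor: positive edge weight}. Requires
    alpha > 1 so that arbitrarily long paths are eventually
    penalized and the search terminates despite cycles.
    """
    # node -> (amal, mean arc length, number of path edges, previous node)
    best = {start: (0.0, 0.0, 0, None)}
    queue = [(0.0, start)]
    while queue:
        cur_amal, node = heapq.heappop(queue)
        if cur_amal > best[node][0]:
            continue  # stale entry superseded by a better path
        _, mal, n, _ = best[node]
        for nbr, w in graph[node].items():
            m = n + 1
            new_mal = (mal * n + w) / m
            # AMAL = total length / m**(1/alpha) = mal * m**(1 - 1/alpha)
            new_amal = new_mal * m ** (1.0 - 1.0 / alpha)
            if new_amal < best.get(nbr, (float("inf"),))[0]:
                best[nbr] = (new_amal, new_mal, m, node)
                heapq.heappush(queue, (new_amal, nbr))
    if end not in best:
        return []
    # Trace backward from the end node through stored predecessors.
    path, node = [], end
    while node is not None:
        path.append(node)
        node = best[node][3]
    return path[::-1]

# The direct edge a-c is expensive; the detour through b has a much
# lower mean arc length and wins under the AMAL metric.
g = {"a": {"b": 1.0, "c": 5.0},
     "b": {"a": 1.0, "c": 1.0},
     "c": {"a": 5.0, "b": 1.0}}
assert lowest_amal_path(g, "a", "c") == ["a", "b", "c"]
```

As in the text, the search is greedy: a node's stored path can be superseded later (handled by the stale-entry check), but global optimality is not guaranteed.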
Representative Application in Retinal Optical Coherence Tomography Layer Segmentation
The length-adaptive graph search algorithm described above is general and applicable to segmenting virtually any layered feature, although optimal utilization for specific applications requires customization. One such application of this new graph search algorithm is segmentation of the retina/vitreous boundary in OCT images of patients with full-thickness macular holes, which is detailed in the following.
We began the segmentation process by denoising the 8-bit grayscale OCT images using a previously described sparsity-based denoising technique.32 Next, we calculated three gradient images (two horizontal and one vertical) by convolving the image with , , and  (MATLAB notation) filters. We then linearly normalized the pixel values in these three gradient images to be between 0 and 1. We performed nonmaximum gradient suppression along each column for the vertical gradient image and along each row for the horizontal gradient images using the following equation:2
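A minimal Python sketch of these three steps follows. The two-tap derivative kernel and the local-maximum form of the suppression are assumptions standing in for the paper's exact filters and equation.

```python
def normalize(img):
    """Linearly rescale pixel values to [0, 1]."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in img]

def vertical_gradient(img):
    """Dark-to-bright vertical gradient via a two-tap derivative
    filter (an assumed kernel), zero-padded on the last row."""
    rows, cols = len(img), len(img[0])
    return [[(img[r + 1][c] - img[r][c]) if r + 1 < rows else 0.0
             for c in range(cols)] for r in range(rows)]

def nonmax_suppress_columns(grad):
    """Keep, in each column, only pixels that are local maxima along
    that column (a common form of nonmaximum suppression); all other
    responses are set to zero."""
    rows, cols = len(grad), len(grad[0])
    out = [[0.0] * cols for _ in range(rows)]
    for c in range(cols):
        for r in range(rows):
            above = grad[r - 1][c] if r > 0 else float("-inf")
            below = grad[r + 1][c] if r + 1 < rows else float("-inf")
            if grad[r][c] >= above and grad[r][c] >= below:
                out[r][c] = grad[r][c]
    return out

# A dark-to-bright transition between rows 1 and 2 survives the
# suppression; the weaker response above it is zeroed out.
grad = normalize(vertical_gradient([[0.0, 0.0], [0.2, 0.2],
                                    [1.0, 1.0], [1.0, 1.0]]))
sup = nonmax_suppress_columns(grad)
```

Row-wise suppression for the horizontal gradient images is symmetric, scanning along rows instead of columns.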
In order not to erroneously segment the bright hyperreflective retinal pigment epithelium (RPE) layer in place of our target retina/vitreous boundary, we performed an additional preprocessing step before running the length-adaptive graph search. We began by creating a binary image to isolate the nerve fiber layer (NFL) and RPE using the technique described by Chiu et al.7 Next, we divided each binary image horizontally into two equal halves and processed them separately. We obtained pilot estimates of the RPE and NFL layers by using Dijkstra’s algorithm to segment each halved section twice. We performed a third segmentation if over 80% of the columns in the binary image contained more than two black-to-white transitions. This third segmentation was put in place to account for posterior hyaloids (Fig. 6), which in a binary image can appear as another retinal layer. We removed segmented nodes from the graph after finding each layer, and the remaining graph was segmented again. The NFL was chosen to be the highest layer in the image if two layers were detected, or the middle layer if three were detected.
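The transition test that triggers the third segmentation can be sketched as follows in Python, with the binary image as a list of rows of 0/1 values (the 80% threshold comes from the text; the helper names are illustrative).

```python
def black_to_white_transitions(column):
    """Count 0 -> 1 transitions down one column of a binary image."""
    return sum(1 for a, b in zip(column, column[1:]) if a == 0 and b == 1)

def needs_third_segmentation(binary, threshold=0.8):
    """True if over `threshold` of the columns contain more than two
    black-to-white transitions, suggesting an extra layer-like
    structure (e.g., a posterior hyaloid) in the binary image."""
    rows, cols = len(binary), len(binary[0])
    columns = ([binary[r][c] for r in range(rows)] for c in range(cols))
    flagged = sum(1 for col in columns if black_to_white_transitions(col) > 2)
    return flagged > threshold * cols

# Three white bands per column -> three transitions -> third pass.
three_band = [[0, 0], [1, 1], [0, 0], [1, 1], [0, 0], [1, 1]]
assert needs_third_segmentation(three_band)
```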
After determining the estimate for the NFL, we increased edge weights in the graph that were above and below the NFL estimate proportional to their vertical distance from the NFL estimate using
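Since the exact weighting equation is not reproduced here, the sketch below uses an assumed linear penalty: each edge weight grows with the mean vertical distance of its two endpoints from the NFL estimate, with an illustrative scale factor.

```python
def penalize_by_nfl_distance(graph, nfl_row, scale=0.1):
    """Increase each edge weight in proportion to its vertical
    distance from the estimated NFL (assumed linear form; `scale`
    is illustrative). Nodes are (row, col) tuples, graph maps
    node -> {neighbor: weight}, and nfl_row gives the estimated
    NFL row index for each column."""
    out = {}
    for node, nbrs in graph.items():
        out[node] = {}
        for nbr, w in nbrs.items():
            # Mean distance of the edge's two endpoints from the
            # NFL estimate at their respective columns.
            d = (abs(node[0] - nfl_row[node[1]]) +
                 abs(nbr[0] - nfl_row[nbr[1]])) / 2.0
            out[node][nbr] = w + scale * d
    return out

# An edge straddling the NFL estimate is penalized only slightly.
penalized = penalize_by_nfl_distance({(0, 0): {(1, 0): 1.0}}, nfl_row=[1])
```

Edges far above or below the estimate become expensive, steering the search toward the retina/vitreous boundary near the NFL.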
We added two more nodes to the graph for automatic endpoint initialization, a start node and an end node. The start node was defined to be a neighbor of every pixel in the first column of the image, and every pixel in the last column of the image was a neighbor of the end node. The weights between the start/end node and its neighbors were set to the minimum weight value of . A flow chart of the segmentation process is shown in Fig. 7.
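In Python, this endpoint initialization might look as follows; the minimal weight 1e-5 is an assumed stand-in for the minimum weight value used in the paper.

```python
def add_endpoint_nodes(graph, rows, cols, min_weight=1e-5):
    """Add virtual 'start' and 'end' nodes for automatic endpoint
    initialization: 'start' connects to every pixel in the first
    column, and every pixel in the last column connects to 'end',
    all with a minimal weight so the virtual endpoints do not
    influence the path cost. (min_weight is an assumed value.)"""
    graph = {node: dict(nbrs) for node, nbrs in graph.items()}
    graph["start"] = {(r, 0): min_weight for r in range(rows)}
    for r in range(rows):
        graph.setdefault((r, cols - 1), {})["end"] = min_weight
    graph["end"] = {}
    return graph

g = add_endpoint_nodes({}, rows=3, cols=2)
```

Any minimum-cost search from "start" to "end" is then free to enter and leave the image at whichever rows best fit the boundary.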
Based on testing performed with our training dataset, we set the exponent parameter of our AMAL metric in Eq. (2) to 1.02. For the specific application of retinal layer segmentation, we added two extra conditions to the implementation of the length-adaptive graph search algorithm to decide whether or not to add a neighbor node to the priority queue. If the neighbor node’s new AMAL was lower than its stored AMAL, we also checked whether one of the following constraints was violated before updating the corresponding parameters and placing the neighbor node in the queue:
• the neighbor node was already in the path, or
• the neighbor node was closer than the lateral or axial pixel resolution to a set of nodes already in the path.
We invoked the first constraint to prevent cycles in the final result [Fig. 8(a)]. As noted, it is important that the graph search space be able to contain cycles in order to represent arbitrary shapes, but no cycles are desired in the final segmentation. The second constraint was invoked to prevent adding unnecessary nodes to the segmentation, e.g., to prevent the segmentation from folding back on itself [Fig. 8(b)]. If neither constraint was violated, the appropriate properties of the node were updated and the node was placed in the queue. Otherwise, the node was not added to the queue.
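The two checks can be sketched as follows in Python, with nodes as (row, col) tuples and min_dist an assumed stand-in for the lateral/axial pixel resolution; a candidate neighbor is queued only if this function returns False.

```python
def violates_constraints(candidate, path, min_dist=1.0):
    """Check the two application-specific constraints before queuing
    a neighbor node: (1) the node is already in the path (would put
    a cycle in the final segmentation), or (2) the node lies closer
    than min_dist, both laterally and axially, to a node already in
    the path other than its immediate predecessor (would let the
    segmentation fold back on itself)."""
    if candidate in path:
        return True
    for node in path[:-1]:  # skip the direct predecessor
        if (abs(candidate[0] - node[0]) < min_dist and
                abs(candidate[1] - node[1]) < min_dist):
            return True
    return False

# Revisiting a node already on the path is rejected outright.
assert violates_constraints((0, 1), [(0, 0), (0, 1), (0, 2)])
```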
This study was approved by the Duke University Health System Institutional Review Board in accordance with Health Insurance Portability and Accountability Act regulations and the standards of the 1964 Declaration of Helsinki. Ten different patients with full-thickness macular holes were imaged with a Heidelberg Spectralis (Heidelberg Engineering, Heidelberg, Germany) OCT system using a radial scan pattern centered on the fovea. Twenty-four images were collected from each patient. Nine of the 10 sets of images were (width and height) pixels, with pixel resolution 11.06 to laterally and axially. The other set of images was with a lateral pixel resolution of and an axial pixel resolution of . Five images from each patient were randomly selected to segment. These images were manually segmented by a clinical retina specialist and compared with the previous shortest path segmentation and our length-adaptive segmentation. We derived all algorithm parameters from a separate training set of images from subjects with full-thickness macular holes. None of the training images were used in validation. For a fair comparison, all preprocessing steps were performed for both shortest path and length-adaptive techniques.
A qualitative comparison of the two segmentation methods is shown in Fig. 9. The top row shows the original images, from four separate patients with full-thickness macular holes, and the bottom row shows the segmentation for the length-adaptive (green) and shortest path (magenta) segmentations. These results show that the length-adaptive method more closely follows the contours of the macular holes, while the shortest path algorithm misses them entirely.
For each B-Scan, we calculated the mean distance between each pixel location on the manual segmentation and the closest point on the automatic segmentations. Then, we computed the average, standard deviation, and median of these mean pixel distances for all 50 B-Scans as a measure of error. The mean, standard deviation, and median of the error for the length-adaptive and shortest path algorithms are shown in Table 1.
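For one B-scan, this error can be computed as sketched below in Python, with each segmentation given as a list of (row, col) pixel locations.

```python
import math

def mean_closest_distance(manual, auto):
    """Mean, over pixels on the manual segmentation, of the Euclidean
    distance to the closest point on the automatic segmentation."""
    return sum(min(math.dist(m, a) for a in auto)
               for m in manual) / len(manual)

# Two horizontal segmentations one pixel apart give a mean error of 1.
assert mean_closest_distance([(0, 0), (0, 1)], [(1, 0), (1, 1)]) == 1.0
```

The per-image values from all 50 B-scans are then summarized by their average, standard deviation, and median.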
Error comparison of shortest path and length-adaptive segmentation to expert manual grader for 50 images.
Method | Mean (pixels) | Standard deviation (pixels) | Median (pixels)
The mean, standard deviation, and median of the time needed to segment the macular hole images with the shortest path method were 0.043, 0.013, and 0.039 s, respectively. The mean, standard deviation, and median of the time needed to segment the images with the length-adaptive method were 4.53, 10.47, and 2.65 s, respectively. Timing results were obtained on a 64-bit desktop computer with an Intel Core i7-4930K 3.4 GHz CPU. Our implementation of the length-adaptive algorithm was not optimized to run quickly and we expect the execution time could be reduced by applying parallel processing techniques. We could also use segmentation results from images in the same volume to limit the search region to reduce the processing time and improve segmentation accuracy. A similar method was used by Xu et al.33
Further, as an illustrative example, to qualitatively demonstrate that the length-adaptive method can also be used on nonpathological images, we segmented the retina/vitreous boundary with in a normal subject from the dataset distributed by Srinivasan et al.26 This image was not part of our training dataset. Results of this segmentation are shown in Fig. 10 with the original image on the left and the segmented image on the right. As expected, the results of these two techniques for segmenting such simple features are virtually identical.
As a demonstrative example of how the length-adaptive graph search can be used to segment other layers in retinal OCT images, we have segmented the inner aspect of the RPE in two separate patients with age-related macular degeneration (AMD) (Fig. 11). In order to segment these images, we obtained an estimate of the RPE using the same method described earlier. We then increased the weights of edges above the RPE estimate to prevent segmentation of the NFL. When creating the gradient images, we used , , and  filters. Finally, we changed the exponent in Eq. (2) to 1.4. Optimization of an AMAL-based drusen segmentation algorithm and its careful validation are outside the scope of this preliminary paper and will be addressed in our future publications.
We have demonstrated the utility of a new graph search metric called the AMAL. This metric is general and can potentially be applied to virtually any layer (or closed-contour feature that can be converted to a layer34) segmentation problem, including segmentation of intravascular,35 esophageal wall,36 epithelium,37 or epidermal38 OCT images. In this first report, we demonstrated its application to segmentation of the retina/vitreous boundary in patients with full-thickness macular holes. The data obtained from the length-adaptive segmentation could be used to estimate parameters such as macular hole height and base width. Our new method outperformed our popular GTDP method,2 which is based on the shortest path metric, when compared to expert manual grading.
The length-adaptive algorithm we have presented is similar in theory to other graph search algorithms, such as Dijkstra’s shortest path and Wimer’s lowest mean arc length path, in that we search through a graph for a path that minimizes a cost metric. As explained in Sec. 2, what separates our algorithm from these two is the utilization of the AMAL, which is a compromise between the total path length and mean arc length metrics. Moreover, since neither Dijkstra’s nor Wimer’s technique can utilize this new metric, we proposed a new graph search algorithm. This algorithm extends the algorithm used by Chen et al.,31 which is applicable only to the shortest path metric, modifying it to find the path with the lowest AMAL (Fig. 5) and to account for cycles in the search space (Fig. 8). Because our algorithm performs a greedy search, the solution produced is not guaranteed to be globally optimal. However, as demonstrated in our experiments, this shortcoming does not limit its utilization in practical applications.
We note that although the length-adaptive method performed better than the shortest path algorithm in our experiments on pathologic eyes, it is not able to successfully segment every feature or image. For example, in the far right image in Fig. 9, the segmentation line does not perfectly adapt to the contour of the macular hole in the region with very weak gradient, suggesting that there is still room for improvement. The length-adaptive algorithm spans a significantly larger search space than the shortest path algorithm because of its insensitivity to path length. In many cases, such as segmenting smooth structures like the ganglion cell layer in normal eyes, it is not reasonable to replace the shortest path method, which can be implemented efficiently via Dijkstra’s algorithm, with an AMAL-based method. Thus, we expect that comprehensive software for segmenting different anatomic and pathologic features in the retina will utilize both the shortest path and AMAL metrics in its algorithm. In this framework, AMAL is used only for segmenting pathologic structures whose boundaries cannot be represented as single-valued functions of the lateral position [e.g., Fig. 2(b)], while other structures are segmented using the shortest path method. This is an attainable goal because algorithms for detecting diseased eyes26 or deformed or missing layers39 from retinal OCT images have already been developed.
We also note the reliance of our algorithm on a few empirically selected parameters. We should emphasize that since the algorithm’s parameters were selected based on a dataset completely separate from the dataset on which its performance was evaluated, the results of this study are generalizable. We should also note that such empirical selection of segmentation parameters is explicitly or implicitly utilized in most other OCT layer segmentation algorithms.2,13,14,40 Alternatively, automated algorithms presented in previous work15 can be utilized to pick the parameters that reduce the error between manual and automatic segmentation boundaries in the training dataset.
In conclusion, we expect the AMAL metric to become a standard component of future graph-based automated segmentation methods. While efficient implementations of graph search using the shortest path metric are freely available online, there is a significant need for an algorithm that performs graph search using the AMAL metric. Thus, we have freely distributed online a general-purpose, open-source software package, coded in C++, for segmenting graphs using the length-adaptive algorithm. This software can be found in Ref. 41. Extending the test dataset and applying the method to other types of pathology in ophthalmic images are part of our ongoing work.
We thank Prof. Robyn Guymer of the Centre for Eye Research Australia, University of Melbourne, and Dr. Frederick L. Ferris III of the National Eye Institute for generously sharing the optical coherence tomography images in Fig. 11. This work was supported in part by NIH Grant R01 EY022691 and U.S. Army Medical Research Acquisition Activity Contract W81XWH-12-1-0397.