Open Access
2 May 2013
Automatic segmentation of anterior segment optical coherence tomography images
Dominic Williams, Yalin Zheng, Fangjun Bao, Ahmed Elsheikh
Abstract
Optical coherence tomography (OCT) images can provide quantitative measurements of the eye’s entire anterior segment. A new technique founded on a newly proposed level set-based shape prior segmentation model has been developed for automatic segmentation of the cornea’s anterior and posterior boundaries. This technique comprises three major steps: removal of regions containing irrelevant structures and artifacts, estimation of the cornea’s location using a thresholding technique, and application of the new level set-based shape prior segmentation model to refine the segmentation. The performance of our technique is compared to previously developed methods for analysis of the cornea in 33 OCT images of normal eyes, with manual annotations used as the reference standard. The new technique achieves much better concordance with the reference standard than previous methods, with a mean Dice’s similarity coefficient of about 0.92. This demonstrates the technique’s potential to provide accurate and reliable measurements of the anterior segment geometry, which is important for many applications, including the construction of representative numerical simulations of the eye’s mechanical behavior.

1.

Introduction

Optical coherence tomography (OCT) is a noninvasive imaging technique that has been used extensively on the posterior segment of the eye. The optically transparent nature of the human eye makes OCT a well-suited imaging technique for retinal imaging.1 OCT is being increasingly used to measure the shape and thickness of the human cornea in vivo.2

Currently, ultrasound pachymetry is the primary technique used to measure the thickness of the cornea.3 Another option is the Orbscan system.4 Orbscan uses two slit lamps to illuminate the human eye to obtain information on axial curvature, elevation of the anterior and posterior surfaces, and corneal thickness throughout the cornea.5 Using OCT has two major advantages. It is a noncontact technique, meaning it is more comfortable for the patient, can be used on eyes that have suffered trauma, and carries no risk of pressure on the eye altering measurements. The other major advantage is that OCT can produce images at a higher resolution and higher speed than any of the other techniques.6,7

Anterior segment OCT (AS-OCT) allows the resolution of the anterior and posterior surfaces of the entire cornea. This allows accurate measurement of the thickness and volume of the entire cornea, as well as anterior chamber biometry, such as its angle and depth. It has several important medical applications, from contact lens fitting, diagnosis and clinical evaluation, and surgical planning and monitoring to the follow-up of patients with eye pathologies.8–10 In particular, obtaining accurate topography information of the anterior segment using this technique would also allow construction of patient-specific models for biomechanical modelling of the human eye.11 There is currently a lack of automated measurement tools supplied with commercial OCT devices, and manual measurement is time consuming, tedious, and subject to human error. For this reason, there is an increasing need for fully automated segmentation techniques to identify and trace the anterior and posterior boundaries of the anterior segment accurately.

The segmentation of AS-OCT images has been explored in several studies. Shen et al. used a simple threshold-based model to measure the anterior surface of the cornea.12 That study did not consider the location of the posterior boundary of the cornea. Tian et al. used a similar method to calculate the anterior chamber angle13 by locating the posterior boundary of the cornea near the iris. Their study did not investigate the location of the entire posterior boundary of the cornea. LaRocca et al. segmented three boundaries in the cornea using a hybrid graph theory and dynamic programming framework.14 They were able to detect three boundaries at the center of the cornea, but their method did not segment boundaries over the entire cornea. An intelligent scissors-based method has also been used to segment five layers of the central cornea,15 but the method has two major disadvantages: it is not fully automated, so it still needs manual selection of initial points, and it attempts to segment only the central region of the cornea, which generally has the highest signal-to-noise ratio. To the best of our knowledge, there is no approach that can segment both the anterior and posterior boundaries of the entire cornea and the front part of the sclera in AS-OCT images.

The key challenge in segmenting AS-OCT images is that, on all images, there are regions with a low signal-to-noise ratio next to the central cornea. This phenomenon, an example of which is shown in Fig. 1, is primarily due to the steepness of the cornea, which reduces the fringe amplitude of the signal, to polarization effects, and to the telecentric scanning method, which reduces detection of light reflected from these regions. For this reason, the posterior surface in those areas is difficult to perceive and segment. Our observations indicate that the cornea has an approximately elliptical shape. It is therefore assumed that this shape prior information can be used to address the above challenge. Previous studies have used prior shape knowledge to achieve improved segmentation in other applications16,17 and have shown promising results on retinal OCT images. These methods used circular shape estimation to represent the shape of interest. Another previously used method for incorporating prior shape knowledge into a model is to derive the shape from a good set of training images.18

Fig. 1

An example anterior segment OCT image acquired using the Visante AS-OCT system.


In this paper, a new technique for the automated segmentation of the anterior and posterior surfaces of the entire cornea and a small part of the sclera is presented. The technique is compared with Shen’s thresholding method12 and Chan and Vese’s active contour model without a shape prior19 using a data set of 33 images against a reference standard built from the manual annotations made by an expert ophthalmologist (FB). These comparisons were made using three similarity measures: Dice’s coefficient, mean unsigned difference, and Hausdorff distance.20

The remainder of the paper is organised as follows. Section 2 describes the dataset used in the study and the proposed segmentation technique. Section 3 presents the experimental results, and Sec. 4 discusses the results and concludes the paper.

2.

Methods

2.1.

Data Acquisition

Thirty-three AS-OCT B scan images through the center of the cornea from healthy eyes (one per subject) acquired by the Visante AS-OCT system (Carl Zeiss Meditec, Dublin, CA) at Wenzhou Medical College, China, were used for the purpose of evaluation in this study. The Visante system is a time domain system that uses 1,300-nm infrared light to obtain cross-sectional images of the anterior segment with a scanning rate of 2,000 axial scans per second. Each B scan image contains 256 A-scans in 16 mm with 1,024 points per A scan to a depth of 8 mm. The images have a transverse resolution of 60 μm and an axial resolution of 18 μm. The images were output as 816×636-pixel JPEG files. The images had been corrected for refractive index using the built-in software of the system; this correction is unlikely to affect our results. The anterior and posterior boundaries of all images were later segmented manually by an expert ophthalmologist (FB).

Two further images, acquired using the same system, from eyes with keratoconus were used to demonstrate the performance of the program.

2.2.

Segmentation Framework

A three-step algorithm was developed. The first step was to preprocess the image in order to remove the central noise artefact and the iris. The next step was to obtain a coarse segmentation of the front eye using a thresholding technique. The final step used the new level set-based shape prior segmentation model to evolve the contour initialized from the coarse segmentation and achieve the final segmentation.

2.2.1.

Preprocessing step

All the AS-OCT images contain a common central noise artefact. This is intrinsic to the OCT scanning system and is caused by a much higher reflection when the detector is located perpendicular to the corneal surface. This region was detected by calculating the mean intensity of each A scan of the image under consideration. The column with the highest mean intensity, and those adjacent to it, were considered to form the central noise artefact and were removed by setting the intensity value of all the pixels within them to zero. The iris can complicate the shape representation essential for the final step; as such, it was detected and removed in a similar manner using the projection along the horizontal direction. The original image is shown in Fig. 2(a), while Fig. 2(b) shows the resulting image after preprocessing.
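
As an illustration, a minimal Python sketch of this step is given below, assuming the B scan is held as a 2-D numpy array with depth along the rows and A-scans along the columns; the number of neighbouring columns removed and the row-projection threshold used to locate the iris are illustrative parameters, not the values used in our implementation.

```python
import numpy as np

def remove_central_artifact(img, half_width=5):
    """Zero out the brightest A-scan column and its neighbours.

    img: 2-D array, rows = depth (z), columns = A-scans (x).
    half_width: illustrative number of neighbouring columns removed on each
    side of the peak.
    """
    col_means = img.mean(axis=0)          # mean intensity of each A-scan
    peak = int(np.argmax(col_means))      # column of the central noise artefact
    out = img.copy()
    out[:, max(0, peak - half_width):peak + half_width + 1] = 0
    return out

def remove_iris(img, rel_threshold=1.5):
    """Suppress rows dominated by the iris using the horizontal projection.

    Rows whose mean intensity exceeds rel_threshold times the average row
    mean are treated as iris/angle structures and zeroed; the threshold is
    an assumption of this sketch.
    """
    row_means = img.mean(axis=1)          # projection along the horizontal direction
    out = img.copy()
    out[row_means > rel_threshold * row_means.mean(), :] = 0
    return out
```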

Fig. 2

Illustration of preprocessing and coarse segmentation steps. (a) Original image. (b) Preprocessed image after removing the iris and central noise artefact. (c) Coarse segmentation result.


2.2.2.

Coarse segmentation

The aim of this step is to produce an initial estimate of the corneal location (or coarse segmentation). This estimate is important because it will be used as the initial location of the curve to be evolved by the level set function in the following step, and its anterior boundary will be used to construct the shape constraint in the later stage. The technique described by Shen et al.12 was adopted for this purpose. More specifically, an entropy filter was applied to the preprocessed image, as shown in Fig. 2(b), to produce an entropy map. The coarse segmentation is achieved by segmenting the entropy map using Otsu’s thresholding method.21 Figure 2(c) shows the initial segmentation; a relatively good detection of the anterior surface can be achieved, but the posterior boundary is difficult to detect.
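
A minimal sketch of this step, using the local entropy filter and Otsu thresholding available in scikit-image, is shown below; the neighbourhood radius of the entropy filter is an assumed value rather than the one used by Shen et al.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def coarse_segmentation(preprocessed, radius=9):
    """Coarse corneal mask from a local entropy map thresholded with Otsu's
    method; the neighbourhood radius is an assumed value."""
    img_u8 = img_as_ubyte(preprocessed / preprocessed.max())  # rank filters need uint8
    ent = entropy(img_u8, disk(radius))                       # local entropy map
    return ent > threshold_otsu(ent)                          # binary coarse segmentation
```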

2.2.3.

Segmentation with level set and shape prior

In this step, the coarse segmentation will be further refined by the newly proposed level set segmentation model with shape prior. Level set techniques are widely used in image segmentation. They represent the contour of interest as the zero level set of a function valued everywhere on the image.22 This function is evolved by minimizing an energy function to achieve the segmentation.

How the energy function is constructed is important in determining the segmentation performance. In the new level set with shape prior segmentation model, the energy functional is the sum of three terms: a region fidelity term, a curve length penalty term, and a shape prior term. More specifically, the energy function is

Eq. (1)

E(ϕ) = λ1E1(ϕ) + λ2E2(ϕ) + λ3E3(ϕ),
where E1(ϕ) represents the region fidelity term, E2(ϕ) is the curve length penalty term, E3(ϕ) is the shape prior, and λi are coefficients that determine the relative strength of each component. In particular, when λ3 is 0, the equation reduces to the conventional Chan-Vese (CV) model,19 whose performance is evaluated in the next section.

For the region fidelity term E1(ϕ) in Eq. (1), an intensity-based model proposed by Chan and Vese19 has been used. The goal of this term is to split the image into two approximately homogeneous regions. The region term has the form

Eq. (2)

E1(ϕ) = ∫Ω {[I(x,z) − u]²H(ϕ) + [I(x,z) − v]²[1 − H(ϕ)]} dx dz,
where I(x,z) is the image intensity at the pixel (x,z), u is the mean intensity inside the curve, v is the mean intensity outside of the curve, Ω is the space representing the image, and H(ϕ) is the Heaviside function. The mean intensities are updated in every iteration of the model.
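
As an illustration, Eq. (2) can be evaluated for the current level set as follows; the arctan-regularized Heaviside function used here is a common choice and an assumption of this sketch.

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Smoothed Heaviside function (arctan regularization, an assumed choice)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def region_fidelity(image, phi, eps=1.0):
    """Evaluate E1 of Eq. (2) for the current level set phi."""
    image = image.astype(float)
    h = heaviside(phi, eps)
    u = (image * h).sum() / (h.sum() + 1e-8)              # mean intensity inside
    v = (image * (1 - h)).sum() / ((1 - h).sum() + 1e-8)  # mean intensity outside
    return ((image - u) ** 2 * h + (image - v) ** 2 * (1 - h)).sum()
```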

The curve length term, which ensures the boundary curves are smooth, favors shorter curves. A commonly used form was adopted and has the form

Eq. (3)

E2(ϕ) = ∫Ω δ(ϕ)|∇ϕ| dx dz,
where δ(ϕ) is the regularized delta function corresponding to the derivative of the Heaviside function. The shape prior term is responsible for ensuring the contour found is as close as possible to the shape prior of the cornea to be segmented. In this new model, the shape term that was incorporated into the energy function can be expressed as

Eq. (4)

E3(ϕ) = ∫Ω (ϕ − ϕ0)² dx dz,
where ϕ is the level set function of the image, and ϕ0 is the signed distance function representing the shape prior, which will be discussed below.

The location of the anterior boundary detected during the initial estimate is used to calculate the shape of the front eye. This is done by assuming the posterior boundary has a fixed relationship to the anterior boundary. First, an ellipse is fitted to the anterior boundary using a least squares fitting method,23 and a signed distance function of the ellipse is calculated. Next, the central corneal thickness is calculated by classifying peaks in image intensity; better image quality at the center of the image means the first large peak can be assumed to be the anterior boundary, and the last peak can be assumed to be the posterior boundary. This method has been used elsewhere for central corneal thickness measurements.24 Once the thickness is known, an estimate of the position of the posterior boundary can be made. The signed distance function is altered using a quadratic expression that shifts the zero point down:

Eq. (5)

ϕlower(x,z) = ϕupper(x,z) − ϕupper(xt,zt) − c1(x − xt)² − c2(x − xt),
where ϕlower is the altered (shifted) function, ϕupper is the initial function based on the top surface only, (xt,zt) is the point on the lower boundary calculated as discussed above, and ci are constants governing the strength of the quadratic terms. The quadratic terms are added to account for the fact that the two surfaces of the cornea are not parallel; the posterior boundary has a greater curvature than the anterior boundary.

The product of the signed distance functions for the lower and upper boundaries gives the shape prior ϕ0(x,z); that is,

Eq. (6)

ϕ0(x,z) = ϕlower(x,z) · ϕupper(x,z),
where ϕ0(x,z) is the shape prior, while ϕlower(x,z) and ϕupper(x,z) are the signed distance functions corresponding to the posterior (lower) and anterior (upper) boundaries, respectively. The motivation for using ϕ0(x,z) is that it is a level set function representing a shape similar to the cornea. Taking the product of the two signed distance functions ensures that ϕ0(x,z) has negative values between the boundaries and positive values everywhere else; on the boundaries, it has a value of 0. This formulation, used in Eq. (4), attempts to keep the evolving level set as close as possible to the shape prior during the iterations.
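
A possible construction of the shape prior is sketched below. It assumes scikit-image’s EllipseModel for the least squares ellipse fit and Euclidean distance transforms for the signed distance function; the constants c1 and c2, the way the central posterior point is located, and the axis conventions of the rasterized ellipse are illustrative choices rather than the exact implementation used here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import EllipseModel
from skimage.draw import ellipse as draw_ellipse

def signed_distance(mask):
    """Signed distance function: negative inside the mask, positive outside."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def shape_prior(anterior_xy, shape, thickness, c1=1e-4, c2=0.0):
    """Build the shape prior phi0 of Eq. (6) from coarse anterior boundary points.

    anterior_xy : (N, 2) array of (x, z) points on the coarse anterior boundary.
    shape       : (rows, cols) of the image.
    thickness   : estimated central corneal thickness in pixels.
    c1, c2      : strengths of the correction terms in Eq. (5); the values
                  here are placeholders, not the constants used in the paper.
    """
    # 1. Least-squares ellipse fit to the anterior boundary points.
    model = EllipseModel()
    model.estimate(np.asarray(anterior_xy, dtype=float))
    xc, zc, a, b, theta = model.params   # (x, z) centre, semi-axes, rotation

    # 2. Signed distance function of the fitted ellipse: phi_upper.
    #    (The mapping of the fitted axes/rotation onto skimage.draw.ellipse
    #    is assumed here and may need adjusting for a given axis convention.)
    rr, cc = draw_ellipse(zc, xc, b, a, shape=shape, rotation=theta)
    mask = np.zeros(shape, dtype=bool)
    mask[rr, cc] = True
    phi_upper = signed_distance(mask)

    # 3. Shift phi_upper by the central thickness and bend it with the
    #    correction terms of Eq. (5) to obtain phi_lower.
    xt = int(round(xc))                                          # central A-scan
    zt = int(np.argmax(phi_upper[:, xt] < 0)) + int(thickness)   # posterior point
    x = np.arange(shape[1])[None, :]
    phi_lower = phi_upper - phi_upper[zt, xt] - c1 * (x - xt) ** 2 - c2 * (x - xt)

    # 4. Product of the two signed distance functions: negative between the
    #    two boundaries, positive elsewhere, zero on them (Eq. (6)).
    return phi_lower * phi_upper
```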

2.2.4.

Minimizing the energy function

The contour is evolved toward the optimal location by minimizing the energy function described above. The Euler-Lagrange equation corresponding to Eq. (1) was calculated. A gradient descent method was then used to solve it iteratively using the equation

Eq. (7)

∂ϕ/∂t = −λ1δ(ϕ)[(I − u)² − (I − v)²] + λ2∇·(∇ϕ/|∇ϕ|)δ(ϕ) − 2λ3(ϕ − ϕ0),
where t is an artificial time variable parameterizing the evolution of the level set function from one iteration to the next. In order to speed up the analysis program, the shape constraint was updated every 20 iterations. The initial estimate described in the previous subsection was used to initialize ϕ.

The weighting of the different terms was determined empirically. The values used were λ1=1, λ2=0.2, and λ3=0.8. The weighting of the terms is important, since it determines how much each particular term contributes to the overall energy function. Previous studies using level set functions have reported that changing the strength of the terms relative to the iterations produced better results.16,17 However, the best results using this model were achieved when keeping the values fixed.
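
A compact numpy sketch of the update in Eq. (7), using the weights given above, follows; the regularized Heaviside and delta functions, the time step, and the fixed iteration count are assumptions of this illustration.

```python
import numpy as np

def delta(phi, eps=1.0):
    """Smoothed Dirac delta (matching an arctan-regularized Heaviside, assumed)."""
    return (eps / np.pi) / (eps ** 2 + phi ** 2)

def curvature(phi):
    """div(grad(phi)/|grad(phi)|), the curve-length contribution in Eq. (7)."""
    gz, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gz ** 2) + 1e-8
    nzz, _ = np.gradient(gz / norm)
    _, nxx = np.gradient(gx / norm)
    return nxx + nzz

def evolve(image, phi, phi0, iters=200, dt=0.5,
           lam1=1.0, lam2=0.2, lam3=0.8, eps=1.0):
    """Gradient-descent evolution of Eq. (7) using the weights reported above;
    dt, eps, and the fixed iteration count are illustrative choices."""
    image = image.astype(float)
    for _ in range(iters):
        h = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))   # smoothed Heaviside
        u = (image * h).sum() / (h.sum() + 1e-8)                 # mean inside
        v = (image * (1 - h)).sum() / ((1 - h).sum() + 1e-8)     # mean outside
        dphi = (-lam1 * delta(phi, eps) * ((image - u) ** 2 - (image - v) ** 2)
                + lam2 * delta(phi, eps) * curvature(phi)
                - 2.0 * lam3 * (phi - phi0))
        phi = phi + dt * dphi
        # The paper refreshes the shape constraint phi0 every 20 iterations;
        # that update is omitted here for brevity.
    return phi
```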

2.2.5.

Shape term with gradient (CVWSe)

In the above model, shape information is used to improve segmentation in areas where the image information alone is not enough. In an area with good image information, reducing the dependence on the shape may lead to improved results. To achieve this, the shape term of Eq. (4) was weighted by a gradient-dependent factor:

Eq. (8)

E3(ϕ) = ∫Ω g(ϕ − ϕ0)² dx dz,
where g is related to the image gradient and defined as

Eq. (9)

g = 1 / (1 + κ|∇(G∗I)|),
where κ is a constant, (G∗I) is the convolution of the image with a Gaussian kernel to smooth the edges, and ∇ is the standard del operator that calculates the gradient of the image. Where the image gradient is large, g decreases and the shape function has less effect on the segmentation.
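
For illustration, the weight g of Eq. (9) can be computed as follows; the value of κ and the width of the Gaussian kernel are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_weight(image, kappa=1.0, sigma=2.0):
    """g = 1 / (1 + kappa * |grad(G * I)|) of Eq. (9); kappa and the Gaussian
    width sigma are illustrative values."""
    smoothed = gaussian_filter(image.astype(float), sigma)   # G * I
    gz, gx = np.gradient(smoothed)                           # gradient of the smoothed image
    return 1.0 / (1.0 + kappa * np.sqrt(gx ** 2 + gz ** 2))
```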

2.3.

Evaluation

For the purpose of evaluation, the performance of four methods was compared. These were two variations of the newly proposed technique—level set with shape prior (CVWS) and level set with shape and gradient (CVWSe)—and two existing methods: the Chan-Vese (CV) model19 and a threshold-based method described by Shen et al.12

Three similarity measures were used to evaluate the results through comparison with expert manual segmentation: Dice’s similarity coefficient (DSC), mean unsigned surface positioning error, and the Hausdorff distance (HD).

DSC is an area similarity method defined by

Eq. (10)

DSC = 2|X ∩ Y| / (|X| + |Y|),
where X and Y are the two segmentations to be compared—in this case, the manual and automated segmentation results. DSC has a range between 0 and 1. The higher the DSC value, the more similar the two segmented regions are.
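
A minimal sketch of this computation for two binary segmentation masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice's similarity coefficient (Eq. (10)) between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```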

The mean unsigned surface positioning errors (MSPE) between the manual and automatic segmentations were calculated as the mean of the unsigned differences between the two curves at each location. This was done separately for the anterior and posterior surfaces.

The mean 95% Hausdorff distance25 is a more stringent measure that compares the difference between the two boundaries. The Hausdorff distance from set A to set B is defined as

Eq. (11)

HD(A,B) = maxa∈A[minb∈B(|a − b|)],
where A and B are sets of boundary points from the two images to be compared. The 5% largest distances were removed, and then the maximum of HD(A,B) and HD(B,A) was taken for each image.26 Perfect alignment is represented by a Hausdorff distance of 0.
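
The two boundary measures can be sketched as follows; the MSPE function assumes the two curves are sampled at the same A-scan locations, and the 95% Hausdorff distance is computed by taking the 95th percentile of the directed minimum distances, which is one possible reading of the procedure described above.

```python
import numpy as np
from scipy.spatial.distance import cdist

def mspe(curve_a, curve_b):
    """Mean unsigned surface positioning error between two boundary curves,
    given as 1-D arrays of boundary depths (one value per A-scan) sampled at
    the same x locations (an assumption of this sketch)."""
    return np.mean(np.abs(np.asarray(curve_a, float) - np.asarray(curve_b, float)))

def hausdorff_95(points_a, points_b):
    """95% Hausdorff distance between two (N, 2) and (M, 2) point sets: the
    largest 5% of the directed distances are discarded before taking the
    maximum of the two directions."""
    d = cdist(points_a, points_b)               # all pairwise distances
    a_to_b = np.percentile(d.min(axis=1), 95)   # directed A -> B, 95th percentile
    b_to_a = np.percentile(d.min(axis=0), 95)   # directed B -> A, 95th percentile
    return max(a_to_b, b_to_a)
```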

3.

Results

The four algorithms were applied to all 33 images. All analyses were carried out on a PC with an Intel Core i5-2320 CPU @ 3.00 GHz and 4.00 GB RAM. The mean±standard deviation (std) time for the new algorithm was 102±8 s. Figure 3 shows the results from the different methods overlaid on an example image, and Fig. 4 shows the segmented images obtained by the different methods side by side. The mean±std of the DSC values is presented in Table 1. The mean DSC values for both of our methods (CVWS and CVWSe) are higher than 0.9, demonstrating excellent agreement with the reference standard built from the manual annotations. An analysis of variance (ANOVA) followed by pairwise comparisons showed no statistically significant difference between the two new models (CVWS and CVWSe; t-test, p=0.849). However, CVWS and CVWSe provide significantly higher DSC measures than the other two methods (CV and thresholding; all p<0.001).

Fig. 3

Illustration of agreement between the segmentations using the two new methods (CVWS and CVWSe) and the manual annotation. The red line is CVWS, the green line is CVWSe, and the blue line is the manual annotation. Colors are altered where lines overlap. Good agreement among the different methods can be seen, especially on the anterior surface.


Fig. 4

Illustration of segmentation results using different techniques: (a) CVWS, (b) CVWSe, (c) CV, (d) threshold, (e) expert, and (f) original image.


Table 1

Comparison with manual segmentation using Dice’s similarity coefficient (DSC), mean unsigned surface positioning errors (MSPE), and 95% Hausdorff distance. DSC is a coefficient; the other values are in pixels.

                          CVWS          CVWSe         CV            Threshold
DSC                       0.930±0.022   0.918±0.029   0.654±0.049   0.767±0.10
MSPE anterior boundary    1.56±0.53     1.80±1.17     3.21±1.59     2.82±5.33
MSPE posterior boundary   2.90±1.31     4.06±1.72     12.46±2.37    11.05±3.66
95% Hausdorff distance    6.06±2.26     9.50±4.33     20.07±19.34   25.39±8.89

The MSPEs between the manual and automatic segmentation boundaries are shown in Table 1. There is no statistically significant difference among the four methods for the anterior boundary (ANOVA, p=0.058). For the posterior boundary, there is a statistically significant difference among the four methods (ANOVA, p<0.001). There is no significant difference between CVWS and CVWSe (p=0.212); both methods perform significantly better than the other two methods (p<0.001), and there is no significant difference between the CV and threshold approaches (p=0.136). In general, the differences are smaller for the anterior boundary, which confirms the observation that it is much easier to detect than the posterior boundary, owing to the relatively poorer image quality at the posterior cornea. For the 95% Hausdorff distance, there are statistically significant differences among the four methods (p<0.001). CVWSe has a larger Hausdorff distance than CVWS, but the difference is not significant (p=0.578); both CVWSe and CVWS have a significantly smaller Hausdorff distance than the other two methods (p<0.001). There are no significant differences between the CV and threshold approaches (p=0.201).

Our technique can be extended to segment the entire anterior chamber; the iris and remaining sclera can be easily segmented by the standard Chan and Vese model from the image where the cornea has been removed. By combining the two segmentation results together, the entire anterior chamber can be segmented. This is essential for anterior chamber biometry. Figure 5 shows an example of full segmentation, including the iris. From this figure, it appears that the anterior surface of the iris is easier to detect than the posterior surface, due to the nature of the OCT image.
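
As an illustration of this extension, the Chan and Vese step could be approximated with the morphological Chan-Vese implementation available in scikit-image (a morphological variant, not the exact model used here); the iteration count is arbitrary, and depending on image contrast the returned foreground may need inverting.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_full_anterior_segment(image, cornea_mask, num_iter=200):
    """Segment the iris and remaining sclera from the cornea-removed image
    with a Chan-Vese-type model, then merge with the corneal mask.

    num_iter is an arbitrary illustrative value."""
    residual = image.astype(float).copy()
    residual[cornea_mask] = 0                            # remove the segmented cornea
    rest = morphological_chan_vese(residual, num_iter)   # iris + remaining sclera
    # Depending on contrast, the foreground/background labels may be swapped
    # and the result may need inverting.
    return np.logical_or(cornea_mask, rest.astype(bool))
```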

Fig. 5

Example segmentation of full anterior segment, including the iris.


The ultimate goal of this work is to produce an automated technique that can detect the cornea in normal and diseased eyes. Figure 6 shows two segmentations from OCT images of an eye with keratoconus. These preliminary results show that this new segmentation technique can be used to segment the cornea in diseased eyes. It is expected that further evaluation will be performed when data from more diseased eyes are available.

Fig. 6

Two segmented images of an eye with keratoconus.


4.

Discussion and Conclusion

A fully automatic technique has been developed that can detect both the anterior and posterior surfaces of the anterior segment in AS-OCT images. The algorithm uses a shape prior to handle regions that are otherwise difficult to segment. The technique has been demonstrated to be capable of segmenting images that include regions with a low signal-to-noise ratio.

The newly developed method performed significantly better than previously described methods, and the results showed a high level of agreement with expert manual segmentation. This is the first method that has demonstrated segmentation of both anterior and posterior surfaces over the entire length of the cornea.

One of the current limitations of the algorithm is that it has not been optimized for speed. This can be addressed in the future by implementation in C++ or by using splines to represent the level set. The work has also focused on time domain (TD)-OCT images. Although the new spectral domain (SD)-OCT systems provide much faster acquisition and better resolution than TD-OCT,27 there is currently no commercially available SD-OCT system that can image the entire anterior segment, including the limbus and anterior sclera. Given the advantages of SD-OCT, it is believed that the technique developed in this paper, being a generic segmentation tool, will be easily transferable to SD-OCT images once SD-OCT matures in imaging the entire anterior segment.

One important factor that can affect segmentation performance is the image quality, including the signal-to-noise ratio (SNR). In general, the higher the SNR of an image, the easier the segmentation will be. For this particular problem, images contain speckle noise inherent to the OCT system and poor SNR in some of the corneal structures. This means simple thresholding and region-based models will not work, as demonstrated in the comparative study; this study uses shape information to overcome the problem. It is important to note that technical advances such as SD-OCT will lead to better-quality images. This could make segmentation easier, but the algorithm presented here would remain applicable. For SD-OCT image analysis, the computational time of 3D processing will become important.

Accurate detection of the anterior and posterior surfaces is essential in research and clinical practice. For example, this information could be used as an input when creating patient-specific models of the human eye. Incorrect information or inaccurate segmentation would produce errors in the model. Patient-specific models would allow for improved diagnosis of corneal pathologies such as keratoconus and improved monitoring of the cornea after surgery. Other measurements of the cornea, such as corneal power measurements, require very precise segmentation. This could be another important application of the method developed here.

Some preliminary work presented here has shown that segmentation of images from patients with keratoconus is possible in principle using this technique. Future evaluation work will include investigating how well the algorithm copes with images of eyes with a variety of conditions. We also wish to extend this technique to the formation of 3D maps. This should be relatively straightforward, since the level set formulation we used extends easily to higher dimensions. In the current formulation, an intensity-based region term is used, and we plan to investigate the usefulness of texture models.28,29

In conclusion, this work has shown that using a shape prior term can significantly improve segmentation results for the fully automatic segmentation of the cornea in AS-OCT images, a challenging and previously unsolved problem. The algorithm developed here is the first fully automatic method to detect two boundaries across the entire anterior segment with superior performance over the previous methods and excellent agreement with expert manual annotation. It may become a valuable tool for providing accurate and reliable measurements of the anterior segment geometry for clinical and nonclinical applications.

Acknowledgments

The authors would like to express thanks to Dr. Mei Xiao Shen for help with OCT images. We would also like to thank EPSRC for funding the project.

References

1. E. A. Swanson et al., “In vivo retinal imaging by optical coherence tomography,” Opt. Lett., 18(21), 1864–1866 (1993). http://dx.doi.org/10.1364/OL.18.001864

2. B. J. Kaluzy et al., “Spectral optical coherence tomography: a novel technique for cornea imaging,” Cornea, 25(8), 960–965 (2006). http://dx.doi.org/10.1097/01.ico.0000224644.81719.59

3. J. González-Pérez et al., “Central corneal thickness measured with three optical devices and ultrasound pachometry,” Eye Contact Lens, 37(2), 66–70 (2011). http://dx.doi.org/10.1097/ICL.0b013e31820c6ffc

4. Z. Liu, A. J. Huang, and S. C. Pflugfelder, “Evaluation of corneal thickness and topography in normal eyes using the Orbscan corneal topography system,” Br. J. Ophthalmol., 83(7), 774–778 (1999). http://dx.doi.org/10.1136/bjo.83.7.774

5. M. M. Marsich and M. A. Bullimore, “The repeatability of corneal thickness measures,” Cornea, 19(6), 792–795 (2000). http://dx.doi.org/10.1097/00003226-200011000-00007

6. S. Sin and T. L. Simpson, “The repeatability of corneal and corneal epithelial thickness measurements using optical coherence tomography,” Optom. Vis. Sci., 83(6), 360–365 (2006). http://dx.doi.org/10.1097/01.opx.0000221388.26031.23

7. J. L. B. Ramos, Y. Li, and D. Huang, “Clinical and research applications of anterior segment optical coherence tomography—a review,” Clin. Exp. Ophthalmol., 37(1), 81–89 (2009). http://dx.doi.org/10.1111/j.1442-9071.2008.01823.x

8. L. M. Sakata et al., “Comparison of gonioscopy and anterior segment ocular coherence tomography in detecting angle closure in different quadrants of the anterior chamber angle,” Ophthalmology, 115(5), 769–774 (2008). http://dx.doi.org/10.1016/j.ophtha.2007.06.030

9. A. Konstantopoulos et al., “Assessment of the use of anterior segment optical coherence tomography in microbial keratitis,” Am. J. Ophthalmol., 146(4), 534–542 (2008). http://dx.doi.org/10.1016/j.ajo.2008.05.030

10. R. C. Hall et al., “Laser in situ keratomileusis flap measurements: comparison between observers and between spectral-domain and time-domain anterior segment optical coherence tomography,” J. Cataract Refract. Surg., 37(3), 544–551 (2011). http://dx.doi.org/10.1016/j.jcrs.2010.10.037

11. A. Elsheikh and D. Wang, “Numerical modelling of corneal biomechanical behaviour,” Comput. Meth. Biomech. Biomed. Eng., 10(2), 85–95 (2007). http://dx.doi.org/10.1080/10255840600976013

12. M. Shen et al., “Extended scan depth optical coherence tomography for evaluating ocular surface shape,” J. Biomed. Opt., 16(5), 056007 (2011). http://dx.doi.org/10.1117/1.3578461

13. J. Tian et al., “Automatic anterior chamber angle assessment for HD-OCT images,” IEEE Trans. Biomed. Eng., 58(11), 3242–3249 (2011). http://dx.doi.org/10.1109/TBME.2011.2166397

14. F. LaRocca et al., “Robust automatic segmentation of corneal layer boundaries in SDOCT images using graph theory and dynamic programming,” Biomed. Opt. Express, 2(6), 1524–1538 (2011). http://dx.doi.org/10.1364/BOE.2.001524

15. J. A. Eichel et al., “A novel algorithm for extraction of the layers of the cornea,” in Proc. IEEE Canadian Conf. on Comput. and Robot Vis., 313–320 (2009).

16. C. Pluempitiwiriyawej et al., “STACS: new active contour scheme for cardiac MR image segmentation,” IEEE Trans. Med. Imaging, 24(5), 593–603 (2005). http://dx.doi.org/10.1109/TMI.2005.843740

17. A. Yazdanpanah et al., “Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach,” IEEE Trans. Med. Imaging, 30(2), 484–496 (2011). http://dx.doi.org/10.1109/TMI.2010.2087390

18. X. Bresson, P. Vandergheynst, and J. P. Thiran, “A variational model for object segmentation using boundary information and shape prior driven by the Mumford-Shah functional,” Int. J. Comput. Vis., 68(2), 145–162 (2006). http://dx.doi.org/10.1007/s11263-006-6658-x

19. T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Trans. Image Process., 10(2), 266–277 (2001). http://dx.doi.org/10.1109/83.902291

20. A. R. Mansouri, A. Mitiche, and C. Vázquez, “Multiregion competition: a level set extension of region competition to multiple region image partitioning,” Comput. Vis. Image Understand., 101(3), 137–150 (2006). http://dx.doi.org/10.1016/j.cviu.2005.07.008

21. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern., 9(1), 62–66 (1979). http://dx.doi.org/10.1109/TSMC.1979.4310076

22. S. Osher and R. P. Fedkiw, “Level set methods: an overview and some recent results,” J. Comput. Phys., 169(2), 463–502 (2001). http://dx.doi.org/10.1006/jcph.2000.6636

23. A. Fitzgibbon, M. Pilu, and R. B. Fisher, “Direct least square fitting of ellipses,” IEEE Trans. Pattern Anal. Mach. Intell., 21(5), 476–480 (1999). http://dx.doi.org/10.1109/34.765658

24. L. Ge et al., “Automatic segmentation of the central epithelium imaged with three optical coherence tomography devices,” Eye Contact Lens, 38(3), 150–157 (2012). http://dx.doi.org/10.1097/ICL.0b013e3182499b64

25. D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, “Comparing images using the Hausdorff distance,” IEEE Trans. Pattern Anal. Mach. Intell., 15(9), 850–863 (1993). http://dx.doi.org/10.1109/34.232073

26. N. Archip et al., “Non-rigid alignment of pre-operative MRI, fMRI, and DT-MRI with intra-operative MRI for enhanced visualization and navigation in image-guided neurosurgery,” NeuroImage, 35(2), 609–624 (2007). http://dx.doi.org/10.1016/j.neuroimage.2006.11.060

27. W. Drexler and J. G. Fujimoto, “State-of-the-art retinal optical coherence tomography,” Prog. Retin. Eye Res., 27(1), 45–88 (2008). http://dx.doi.org/10.1016/j.preteyeres.2007.07.005

28. Y. Zheng and K. Chen, “A hierarchical algorithm for multiphase texture image segmentation,” ISRN Signal Process., 2012, 1–11 (2012). http://dx.doi.org/10.5402/2012/781653

29. Y. Zheng and K. Chen, “A general model for multiphase texture segmentation and its applications to retinal image analysis,” Biomed. Signal Process. Control, (2013). http://dx.doi.org/10.1016/j.bspc.2013.02.004
© 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
Dominic Williams, Yalin Zheng, Fangjun Bao, and Ahmed Elsheikh "Automatic segmentation of anterior segment optical coherence tomography images," Journal of Biomedical Optics 18(5), 056003 (2 May 2013). https://doi.org/10.1117/1.JBO.18.5.056003
Published: 2 May 2013
KEYWORDS
Image segmentation

Cornea

Optical coherence tomography

Eye

Eye models

Iris recognition

Signal to noise ratio
