Parapapillary atrophy and optic disc region assessment (PANDORA): retinal imaging tool for assessment of the optic disc and parapapillary atrophy
1 October 2012
We describe a computer-aided measuring tool, named parapapillary atrophy and optic disc region assessment (PANDORA), for automated detection and quantification of both the parapapillary atrophy (PPA) and optic disc (OD) regions in two-dimensional color retinal fundus images. The OD region is segmented using a combination of edge detection and ellipse fitting methods. The PPA region is identified by the presence of bright pixels in the temporal zone of the OD and is segmented using a sequence of techniques, including a modified Chan-Vese approach, thresholding, a scanning filter, and multiseed region growing. PANDORA has been tested with 133 color retinal images (82 with PPA; 51 without PPA) drawn randomly from the Lothian Birth Cohort (LBC) database, together with a "ground truth" estimate from an ophthalmologist. The PPA detection rate is 89.47%, with a sensitivity of 0.83 and a specificity of 1. The mean accuracy in defining the OD region is 81.31% (SD=10.45) for images with PPA and 95.32% (SD=4.36) for images without PPA, and the mean accuracy in defining the PPA region is 73.57% (SD=11.62).



Two-dimensional (2-D) color retinal fundus images not only provide information about different eye conditions and ophthalmic diseases (e.g., myopia, macular degeneration, and glaucoma), but can also show signs of systemic diseases such as diabetes.1–4 The optic disc (OD) is one of the fundamental features of interest. The OD is located anatomically at the distal end of the optic nerve (Fig. 1) and appears as a bright yellowish-white ellipse partially overshadowed by retinal blood vessels in the fundus image. Segmenting the OD is not a trivial task, owing to light artefacts, blood vessels, and often ill-defined boundaries, particularly in the presence of parapapillary atrophy (PPA).

Fig. 1

The sagittal section of the eyeball. The optic disc is situated anatomically at the distal end of the optic nerve.


Clinically, PPA can be categorized into alpha- and beta-zone PPA.5 The α-zone often refers to the outer zone of PPA, covering irregular hyper- and hypo-pigmented areas in the retinal pigment epithelium (RPE), occurring either on its own or surrounding the β-zone; the β-zone is usually the zone of complete RPE atrophy, adjacent to the OD. No such division is made in this work: the detection and quantification of PPA are based on the outermost boundary of whichever zones are present. Why PPA develops remains unclear, but it has been linked to degenerative myopia6 and glaucoma,7 both of which can lead to sight loss. Early detection and monitoring of PPA could therefore offer a way to monitor degeneration of the retinal nerve fibre layer (RNFL) and degenerative myopia. PPA appears as an irregular shape (e.g., a scleral crescent, temporal choroidal atrophy, or a ring around the periphery of the OD) and is of similar brightness to the OD, although β-zone PPA may appear slightly brighter than the OD region while the α-zone appears darker. Figure 2 shows an example.

Fig. 2

(a) The original color retinal fundus image. Annotations describe the four different zones of the optic disc. (b) The optic disc boundary and the parapapillary atrophy (PPA) region.


The literature on OD detection and segmentation is rich, as the OD is an important parameter in glaucoma diagnosis8,9 and a common landmark when locating regions of interest such as the macula.10,11 The OD region has been detected and segmented both directly in 2-D fundus images12–14 and in three-dimensional (3-D) planimetric images generated from multimodal imaging systems.15,16 Detection methods have searched for either a large cluster of bright pixels17 or a region with the highest variation in grey-level intensity.18 However, such methods are susceptible to retinal lesions (e.g., exudates), which can also appear bright in fundus images, and to artefacts (e.g., an intensity gradient across the image).

An alternative method based on watershed transformation and morphological filtering has been proposed,19 but obstructions such as retinal vessels are difficult to remove completely without introducing significant distortion and loss into the fundus image. Techniques based on vascular models have proved more robust, at the expense of higher computational complexity.12 These decompose fundus images at multiple resolutions and use Hausdorff-based template matching to detect the OD. More recently, a combination of level set and Chan-Vese (CV) methods has been explored,20 as CV can compensate automatically for discontinuities in the OD boundary. However, this approach has major drawbacks: the segmentation process is likely to be time-consuming, CV requires an accurate initial “guess” of the OD boundary, and it tends to achieve good results only when the OD region is of homogeneous intensity. Applying a circular Hough transform to a Prewitt edge map computed from two channels (red and green) of the image has proved a faster approach, achieving a mean accuracy of 86%; however, inhomogeneity in the OD region leads to a noisy edge map and hence incorrect results from the Hough transform.21 Several other OD segmentation approaches have also been reported, based on graph search,22 an adaptive method using mathematical morphology,23 an active contour model,24 or blood vessel information with graph construction.25

In addition, feasibility studies using 3-D multimodal imaging systems also exist.10,26 These systems are not yet widely available, and the required image processing time is likely to be as demanding as that required by the CV method. Stereo imaging techniques have also been exploited using a “Snake” algorithm, together with p-tile thresholding on an edge map, to outline the OD boundary.27 However, the presence of PPA remains a problem. One possible solution is to predetermine the presence of PPA and subsequently devise a corresponding strategy to segment the OD region.

Despite its importance as a potential biomarker for sight loss, very little has been done to develop tools that detect and segment PPA in 2-D fundus images. One early system, PAMELA, can detect certain types of PPA automatically.6,28 However, it cannot quantify the extent of PPA and hence cannot describe its development. PAMELA uses texture analysis to extract PPA features and applies a support vector machine (SVM) classifier to perform binary classification (i.e., PPA present or not). Because PPA has attributes similar to those of the OD in terms of contrast and brightness, segregating one from the other accurately is nontrivial.

Current ophthalmic imaging techniques available for examining both the OD and PPA regions are Heidelberg retina tomography (HRT), optical coherence tomography (OCT), and the more recent ultra-high resolution (UHR) OCT, which offers a pseudo-color 3-D visualisation of these regions.29 These techniques have achieved some success, but they still have four main limitations.30 First, they are not widely used in ophthalmology clinics, because they rely on more expensive and specialized lasers.31,32 Second, a technician or photographer needs an intimate understanding of retinal anatomy to identify the OD boundary manually before the PPA and OD variables can be estimated from the image contour based on three-dimensional depth information.32,33 Third, the patient must remain motionless for a relatively long time during the OCT scanning procedure; any eye movement produces artefacts in the images. Fourth, these techniques are limited as an aid for monitoring disease progression: existing OCT software does not allow users to retrieve and display images from previous examinations for comparison with a new image. Moreover, there is a lack of automated software for large-scale screening programs.32

This paper addresses these technical challenges and presents an integrated solution (PANDORA: Parapapillary atrophy AND Optic disc Region Assessment) that both detects the presence of PPA and quantifies the OD and PPA regions using 2-D color fundus images. We exploit both the red and blue channels of the RGB space to maximize feature extraction (OD/PPA) while minimizing interference from blood vessels. PANDORA is fully automated, to allow large population studies in the future and to reduce problems associated with human error (e.g., habituation and fatigue).



All the color fundus images for the assessment of PANDORA were randomly drawn from the database of the Lothian Birth Cohort (LBC) 1936 study.34 The LBC study included the surviving members of the 1947 Scottish Mental Survey (n=70,805) who were born in 1936 and currently reside in the Edinburgh area (Lothian) of Scotland. Three hundred and twelve individuals were tracked successfully and had their retinal photos taken at the Wellcome Trust Clinical Research Facility, Western General Hospital, NHS Lothian, Scotland. This research complied with the Declaration of Helsinki and was approved by the Lothian (Scotland A) Research Ethics Committee.

Our imaging tool, PANDORA, is implemented in MATLAB (MathWorks, Natick, MA, USA). All fundus images were first cropped manually to the region of interest (ROI) and had a “ground truth” estimate of the OD and PPA regions defined by an ophthalmologist (author AL), who had not seen the results from PANDORA. Figure 3 shows an example of the ROI selection on a left eye fundus image.

Fig. 3

An example fundus image of a left eye. The white box, which is manually drawn, encloses the region of interest.


Figure 4 illustrates the flow chart of the PANDORA algorithm, which can be divided into three phases. The OD segmentation module uses an ellipse fitting technique on a (Sobel) edge map in the red channel to outline the OD boundary. With the OD removed, the PPA detection module then determines the presence of PPA in the temporal zone using prior knowledge about the nature of PPA. Note that the temporal zone is one of the four zones in the fundus image (see Fig. 5), in which PPA normally first develops. Once the PPA region is detected, we then conduct PPA segmentation using a combination of image processing techniques: thresholding, a scanning filter, and multiseed region growing methods.20

Fig. 4

A flow chart for segmentation of the OD and PPA. The scheme consists of three main phases: OD segmentation (gray), PPA detection (pale yellow), and PPA segmentation (cyan).


Fig. 5

The process of PPA detection: (a) and (f) Examples of original fundus images. (b) and (g) Examples of original fundus images in the blue channel. (c) and (h) OD segmentation results from Phase 1. (d) Left clinical knowledge-based mask. (i) Right clinical knowledge-based mask. (e) and (j) The detection results shown as images with and without PPA, respectively.



Phase 1: OD Segmentation

The OD is extracted using a direct least square fitting algorithm of an ellipse (DLSFE).35 This algorithm yields an elliptical solution that minimizes the sum of squared algebraic distances from the image edge points to the fitted ellipse. However, this ellipse fitting technique is susceptible to noise and requires preprocessing to remove unwanted pixels from the fundus image before fitting.
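The fitting step can be illustrated with a simplified algebraic conic fit. The Python sketch below is not the paper's MATLAB implementation, and the full DLSFE of Fitzgibbon et al. additionally imposes an ellipse-specific constraint; here we show only the unconstrained version, which finds conic coefficients minimizing the algebraic residual over the edge points via the smallest right singular vector of the design matrix.

```python
# Simplified algebraic conic fit (illustrative sketch, not the constrained
# DLSFE): find (a, b, c, d, e, f) minimizing the residual of
# a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 over the edge points.
import math
import numpy as np

def fit_conic(xs, ys):
    """Return unit-norm conic coefficients via the smallest singular vector."""
    D = np.column_stack([xs**2, xs*ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]  # right singular vector of the smallest singular value

# Sample points on an ellipse centred at (3, 2) with semi-axes 4 and 2.
t = np.linspace(0, 2 * math.pi, 60, endpoint=False)
xs, ys = 3 + 4 * np.cos(t), 2 + 2 * np.sin(t)
coeffs = fit_conic(xs, ys)

# Maximum algebraic residual over the sampled points (near zero for an
# exact ellipse).
residual = np.abs(np.column_stack(
    [xs**2, xs*ys, ys**2, xs, ys, np.ones_like(xs)]) @ coeffs).max()
```

Because the points lie exactly on a conic, the design matrix has a near-null direction and the residual is at floating-point level; real edge maps are noisy, which is why the preprocessing described next matters.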

First, a Sobel edge detector is applied to the red channel of the cropped image, generating an edge map. The Sobel operator was chosen for two reasons: (a) it computes the gradient of image intensity and describes the smoothness of an edge (abrupt or gradual), providing a more reliable result (random noise appears as abrupt edges); and (b) with a large convolution kernel, it also acts as a noise filter. Once an edge map is produced, a two-stage preprocessing technique is used to eliminate noise. The first stage removes the retinal vessels originating from the OD region, as they may interfere with accurate segmentation of the OD area; vessel detection is achieved by applying a thresholding technique to the hue channel of the image. The second stage isolates the OD (and the PPA) from the background by using a clustering technique with the nearest-neighbor rule36 in the L*a*b* color space: samples of the ROI and the background are first extracted in L*a*b* color space and then used as markers to classify each pixel (as ROI or background) by the nearest-neighbor rule. Finally, a DLSFE is fitted to estimate the OD boundary. To reduce fitting errors, we fit the OD region iteratively until the center of the fitted ellipse lies within a predetermined “tolerance” distance, set to 1/35th of the image width (in pixels), from the center of the cropped image (ROI).
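As a concrete illustration of the edge-map step, the following pure-Python sketch (independent of the authors' MATLAB code) applies the 3×3 Sobel kernels to a toy "red channel" containing a bright disc-like block:

```python
# Illustrative Sobel gradient-magnitude edge map; the 8x8 "image" is a
# synthetic stand-in for the red channel of a cropped fundus ROI.

def sobel_edge_map(red):
    """Return gradient magnitudes for a 2-D list of pixel intensities."""
    h, w = len(red), len(red[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal Sobel kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical Sobel kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * red[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * red[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Dark background with a bright 4x4 "disc"; the gradient is large on the
# disc border and zero in its homogeneous interior.
img = [[200 if 2 <= y <= 5 and 2 <= x <= 5 else 20 for x in range(8)]
       for y in range(8)]
edges = sobel_edge_map(img)
```

The zero response inside the homogeneous block illustrates why an inhomogeneous OD produces a noisy edge map, the problem noted above for Hough-based methods.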


Phase 2: PPA Detection

The OD is clinically divided into four zones: temporal, superior, nasal, and inferior. Figure 5 shows an example of a right-eye image; the temporal and nasal zones are exchanged for a left-eye image. The eye (right or left) was prelabeled in each file name during image recording. As previously mentioned, the OD and PPA account for the brighter region of the image (the threshold is set empirically at the top 12% of the image intensity histogram). Once the OD region (estimated in Phase 1) is removed, PPA is detected by the presence of any remaining bright pixels in the temporal zone, as shown in Fig. 5(c) and 5(h). This phase operates on the blue channel, where the PPA appears most clearly.
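The Phase-2 decision rule can be sketched as follows. The pixel data and masks below are synthetic stand-ins (the real tool works on the blue channel of the cropped image); the 12% cutoff is the empirical threshold quoted above.

```python
# Illustrative sketch of PPA detection: after removing the segmented OD,
# flag PPA if bright pixels (top 12% of the intensity histogram) remain
# in the temporal zone.

def bright_threshold(pixels, top_fraction=0.12):
    """Intensity above which a pixel is in the brightest `top_fraction`."""
    ranked = sorted(pixels, reverse=True)
    cutoff_index = max(0, int(len(ranked) * top_fraction) - 1)
    return ranked[cutoff_index]

def ppa_present(pixels, od_mask, temporal_mask):
    """True if any bright pixel lies in the temporal zone outside the OD."""
    thr = bright_threshold(pixels)
    return any(p >= thr and not od and temporal
               for p, od, temporal in zip(pixels, od_mask, temporal_mask))

# Toy example: 100 pixels, 12 bright; the OD mask covers only six of the
# bright pixels, so six bright temporal pixels survive OD removal.
pixels = [50] * 88 + [200] * 12
od_mask = [False] * 88 + [True] * 6 + [False] * 6
temporal_mask = [False] * 50 + [True] * 50
detected = ppa_present(pixels, od_mask, temporal_mask)
```

If the OD mask covered every bright pixel, `ppa_present` would return False, which corresponds to an image without PPA in Fig. 5(j).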


Phase 3: PPA Segmentation

We previously developed an automated scheme for the extraction and quantification of the PPA region.20 This scheme makes an initial segmentation and estimation of the OD-plus-PPA boundary based on a modified Chan-Vese analysis37 of the blue channel. The OD region is then removed from the OD-plus-PPA region using the result obtained in Phase 1, leaving a first-order estimate of the PPA region. The PPA boundary is subsequently refined using a multiseed region growing method.20
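A minimal sketch of multiseed region growing follows; the fixed intensity tolerance and 4-connectivity here are illustrative assumptions, not the parameters of the authors' MATLAB implementation.

```python
# Illustrative multiseed region growing: starting from seed pixels,
# absorb 4-connected neighbours whose intensity stays within a tolerance
# of the seed mean.
from collections import deque

def region_grow(image, seeds, tol=30):
    """Return the set of (y, x) pixels grown from `seeds` on a 2-D list."""
    h, w = len(image), len(image[0])
    mean = sum(image[y][x] for y, x in seeds) / len(seeds)
    region, queue = set(seeds), deque(seeds)
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - mean) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# Toy image: a bright left band (200) next to a dark right band (50).
img = [[200, 200, 50, 50, 50] for _ in range(5)]
region = region_grow(img, [(0, 0)])
```

Growing stops at the intensity step, which is how the refinement eliminates residual OD contributions from the first-order PPA estimate.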

PANDORA combines these techniques and exploits both global and local information for PPA and OD segmentation. As before, it derives the following three physiological parameters (all in pixels):

  • (1) the size of the OD;

  • (2) the length of the minor/major OD axis;

  • (3) the size of the PPA.

PANDORA uses a different approach to extract the OD, so it does not require additional calibration to compensate for underestimation (due to premature termination of snake evolution), as our previous tool did.14,20 In addition, PANDORA offers a new feature: it can reveal the shape of the PPA, which may be important in understanding how PPA develops.



A total of 133 color fundus images (including 31 poor-quality images, as determined by an ophthalmologist, author AL) from 101 subjects were randomly selected from the LBC database. Without seeing the results from PANDORA, the human assessor identified images with PPA (82 images with PPA; 51 without) and provided a “ground truth” estimate of the OD and PPA regions in each image. The assessor first observed, in full color space (RGB), the scleral ring and the bending of retinal vessels to identify the boundary of the OD; the PPA region was then identified based on the brightness and texture of image pixels. In this work, we do not divide the PPA region into different zones but treat it as one. We further randomly drew a subsample of 30 images with PPA and 20 images without PPA to evaluate the segmentation results from PANDORA. The area enclosed by the ground truth estimate or by the segmentation result from PANDORA is counted pixel by pixel in MATLAB to quantify the size of each region.

PANDORA achieved a PPA detection rate of 89.47%. In this context, the detection rate refers to the probability of a correct test among all 133 test images. Figures 6 and 7 show six samples from the OD segmentation results of fundus images without and with PPA, respectively. The first column shows examples of the best results achieved, while the second column shows the worst. The ground truth estimate is enclosed by black spots, and the OD segmentation result is enclosed by a blue solid line. Figure 8 shows six samples of the PPA segmentation results from PANDORA. The segmentation result is enclosed by red triangles, and the ground truth estimate is enclosed by a solid black line. The results indicate that PANDORA is able to detect and capture the boundary separating the OD and PPA regions reasonably well, despite the poorly defined nature of this boundary. Apart from the variation in color, size, and shape of the OD and PPA, additional factors shown in Figs. 6 and 7 must be taken into account. The OD boundary and the blood vessels do not always have sharp contrast, making it difficult to remove all the background noise before fitting an ellipse. The presence of PPA further complicates this process, as shown in Fig. 7(b), 7(d), and 7(f).

Fig. 6

Segmentation results on the images without PPA from PANDORA. Images (a), (c), and (e) on the left represent the best results, while images (b), (d), and (f) on the right represent the worst results. The ground truth estimate is drawn on the black spots, while the estimated OD region is outlined by the blue solid line.


Fig. 7

Segmentation results on the images with PPA from PANDORA. Images (a), (c), and (e) on the left represent the best results, while images (b), (d), and (f) on the right represent the worst results. The ground truth estimate is drawn on the black spots, while the estimated OD region is outlined by the blue solid line.


Fig. 8

PPA segmentation results on the images from PANDORA. Images (a), (c), and (e) on the left represent the best results, while images (b), (d), and (f) on the right represent the worst results. The ground truth estimate is enclosed by the black solid line, while the estimated PPA region is enclosed by the red triangles.


The examples given in Fig. 7(b) and 7(f) show overestimates of the OD region. This, in effect, reduces the possible PPA area, as shown in Fig. 8(b) and 8(f). Conversely, underestimation of the OD region can also lead to inaccurate segmentation of the PPA region, as shown in Fig. 8(c) to 8(e). The multiseed region growing method used in Phase 3 to refine the PPA boundary is therefore necessary to eliminate any contribution from the OD, as Fig. 8(c) illustrates.


Validity of the Tool

There are two main functions of PANDORA: to determine the presence of PPA and to quantify the area of PPA and OD. We therefore adopted two different validation methods. First, we calculated PANDORA’s PPA detection rate as well as its specificity and sensitivity. The specificity, defined as the number of true negatives (TN) divided by the sum of TN and false positives (FP), indicates how well a tool can correctly identify negatives. The sensitivity, defined as the number of true positives (TP) divided by the sum of TP and false negatives (FN), indicates how well a tool can identify actual positives. Based on the PPA detection results, PANDORA is able to achieve a sensitivity of 0.83 and a specificity of 1.
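For illustration, the reported figures are mutually consistent with the following back-calculated confusion matrix; the exact counts below are our assumption (derived from 82 images with PPA and 51 without), not stated in the paper.

```python
# Hedged consistency check on the reported detection figures, using
# back-calculated (assumed) confusion-matrix counts.

TP, FN = 68, 14   # 68 of 82 PPA images detected -> sensitivity ~0.83
TN, FP = 51, 0    # all 51 non-PPA images correct -> specificity 1

sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
detection_rate = (TP + TN) / (TP + FN + TN + FP)  # correct tests / 133
```

With these counts, the detection rate is 119/133 = 89.47%, matching the value quoted above.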

Second, in terms of area estimation, the accuracy is measured by comparing the segmentation results against the ophthalmologist’s ground truth estimate of the OD/PPA region. PANDORA is assessed by a simple yet effective overlap measure (M) of the matching between two estimates, which is defined as:

M = N(R ∩ T) / N(R ∪ T),
where R and T represent the segmentation result and the ground truth estimate, respectively, and N(·) is the number of pixels within the targeted region. The accuracy in defining a region is the overlap measure expressed as a percentage (i.e., M×100%). Table 1 summarizes the segmentation results on 50 images. In Table 1, mean accuracy refers to the average accuracy over a set of n test images (n=30 for images with PPA and n=20 for images without PPA).
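Assuming the overlap measure is the standard intersection-over-union of the two pixel sets, it can be computed as follows (regions are represented here as sets of pixel coordinates; the paper counts pixels inside each boundary in MATLAB):

```python
# Minimal sketch of the overlap measure M = N(R ∩ T) / N(R ∪ T)
# between a segmentation result R and a ground truth region T.

def overlap_measure(result, truth):
    """Jaccard-style overlap between two pixel-coordinate sets."""
    if not result and not truth:
        return 1.0  # two empty regions agree perfectly
    return len(result & truth) / len(result | truth)

# Toy example: two 3x3 squares offset by one column share 6 of 12 pixels.
R = {(y, x) for y in range(3) for x in range(3)}
T = {(y, x) for y in range(3) for x in range(1, 4)}
M = overlap_measure(R, T)
```

An exact match gives M = 1, and disjoint regions give M = 0, so M×100% reads directly as the accuracy percentage used in Tables 1 and 2.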

Table 1

The statistical results of PPA and OD segmentation in 50 trials.

Results               Images with PPA               Images with no PPA
                      PPA region    OD region       OD region
Mean accuracy (%)     73.57         81.31           95.32
Standard deviation    11.62         10.45           4.36



This paper introduces PANDORA, a novel automated retinal imaging tool for both detecting the presence of PPA and quantifying the OD and PPA using 2-D color fundus images. Experimental results show that PANDORA achieves a high PPA detection rate (89.47%), despite wide variation in fundus image quality. These results are comparable with those reported by Liu et al.,28 who achieved a detection rate of 87.5%, a sensitivity of 0.85, and a specificity of 0.9 on a database (40 images with PPA; 40 without) from the Singapore Eye Research Institute (SERI).

Figure 9 shows the segmentation results of the OD and PPA regions for images with and without PPA. As expected, the results indicate that PANDORA segments the OD better in images without PPA than in those with PPA, as the OD is the sole bright object. The OD region may be overestimated or underestimated when there is no clear boundary, as is often the case in images with PPA.

Fig. 9

Box plots for the quantification results of the OD and the PPA on the images with PPA and without PPA. The lower outliers are denoted by red stars. The bars specify the ranges of quantification results, and the boxes specify the first and third quartiles, with the median represented by the center lines.


In addition, the mean accuracy of OD segmentation by the proposed method in 50 trials is 81.31% (SD=10.45) for images with PPA and 95.32% (SD=4.36) for images without PPA. These results are comparable to the best state-of-the-art performance, as listed in Table 2. Note that the Gradient Vector Flow (GVF) Snake38 and our previous method20 have much lower OD segmentation accuracy in images with PPA (48.65% and 68.35%, respectively), because they face a convergence problem whenever the boundary of a region is not clear (in this case, the PPA region).

Table 2

Comparison of the OD segmentation methods in 50 trials.

Methods               Images with PPA               Images without PPA
                      Mean accuracy (%)   SD        Mean accuracy (%)   SD
GVF Snake38           48.65               11.23     91.31               5.23
Previous method20     68.35               10.42     93.27               5.47
Proposed method       81.31               10.45     95.32               4.36

PANDORA has four primary advantages over alternative approaches. First, it both detects the presence of PPA and quantifies the PPA region automatically from 2-D color fundus images alone. Previous studies6,28 were limited to the detection of PPA, and conventionally the size of the PPA region is quantified manually.2,39 PANDORA therefore provides the first automated tool that allows PPA development to be tracked. Second, PANDORA improves upon our previous tool14,20 by using an OD segmentation approach based on an edge map, which estimates the OD/PPA boundary more accurately. It can therefore describe the actual shape of the regions, allowing more detailed study of the relationship between PPA and different ocular diseases; the previous approach, based on the “snake” algorithm, suffered from a random offset in defining the boundary and could give only an estimate of the size. Third, PANDORA is fully automated, reducing dependency on a human assessor and minimizing problems related to human error such as habituation. PANDORA’s physiological measurements offer additional information for clinicians studying ophthalmic or systemic diseases. Fourth, PANDORA is intrinsically more suitable for large-scale screening programs because it uses a 2-D fundus camera rather than OCT equipment. A digital fundus camera40 can acquire fundus images quickly, without the time-consuming scanning procedure required by an OCT machine; it is also relatively cheap and has become a standard examination tool in ophthalmology clinics. Working with 2-D fundus images is therefore both cost-effective and time-effective, and more convenient for users than OCT instruments.

Some limitations remain within our method. First, PANDORA is susceptible to noise arising from ill-defined boundaries, overlapping blood vessels, and lighting artefacts. Creating a noise-free edge map for ellipse fitting is essential to avoid underestimating or overestimating the actual region. In this work, we used a naïve thresholding technique to segment retinal blood vessels from the fundus images; techniques such as artificial neural networks41–43 will be explored in future development to improve the robustness of retinal vessel segmentation. Second, the proposed method uses only the brightness of pixels to detect the presence of PPA; adding texture information, for instance, should improve the detection rate. Third, the OD is not always perfectly elliptical, despite its general appearance. The assumption made in this work (i.e., that the OD is always elliptical or circular) helps to estimate the boundary of the OD, especially when it is poorly defined (i.e., appears as broken lines in the edge map). Admittedly, this assumption can also limit the fit to the real OD size and shape, as in Fig. 6(d). While more complex segmentation algorithms might describe a nonelliptical shape better, estimating a poorly defined boundary would remain difficult; as such, we argue that the principle of Occam’s razor is best applied.



PPA has been linked to degenerative myopia and glaucoma, both of which can lead to loss of sight. Early detection and quantification offer an opportunity for medical intervention to halt or slow the development of ophthalmic diseases. However, existing methods are manual and subjective, and they require multimodal imaging systems (i.e., a standard 2-D laser ophthalmoscope plus an optical coherence tomograph) that are not widely available. In this paper, we demonstrate a tool that can detect PPA and quantify its size automatically using 2-D color fundus images alone. The presence of PPA is detected with an accuracy of 89.47% in a trial of 133 test images; the sensitivity of PPA detection is 0.83, and the specificity is 1. The proposed tool also achieves an accuracy of 81.31% (SD=10.45) and 95.32% (SD=4.36) in estimating the OD region in images with and without PPA, respectively. The accuracy of PPA segmentation is 73.57% (SD=11.62), compared with the “gold standard” defined by an experienced ophthalmologist. Further development of PANDORA will allow wider study of the development of PPA and its significance in disease diagnosis.


1. T. DammsF. Dannheim, “Sensitivity and specificity of optic disc parameters in chronic glaucoma,” Invest. Ophthalmol. Vis. Sci. 34(7), 2246–2250 (1993).IOVSDA0146-0404 Google Scholar

2. H. UchidaS. UgurluJ. Caprioli, “Increasing peripapillary atrophy is associated with progressive glaucoma,” Ophthalmology 105(8), 1541–1545 (1998).OPANEW0743-751X http://dx.doi.org/10.1016/S0161-6420(98)98044-7 Google Scholar

3. W. M. BuddeJ. B. Jonas, “Influence of cilioretinal arteries on neuroretinal rim and parapapillary atrophy in glaucoma,” Invest. Ophthalmol. Vis. Sci. 44(1), 170–174 (2003).IOVSDA0146-0404 http://dx.doi.org/10.1167/iovs.02-0651 Google Scholar

4. J. R. EhrlichN. M. Radcliffe, “The role of clinical parapapillary atrophy evaluation in the diagnosis of open angle glaucoma,” Clin. Ophthalmol. 4, 971–976 (2010).COLPCK1177-5467 http://dx.doi.org/10.2147/OPTH Google Scholar

5. J. B. JonasW. M. Budde, “Diagnosis and pathogenesis of glaucomatous optic neuropathy: morphological aspects,” Prog. Retin. Eye Res. 19(1), 1–40 (2000).PRTRES1350-9462 http://dx.doi.org/10.1016/S1350-9462(99)00002-6 Google Scholar

6. N. M. Tanet al., “Automatic detection of pathological myopia using variational level set,” in Proc. IEEE EMBS, pp. 3609–3612, IEEE, Minneapolis, MN (2009). Google Scholar

7. J. B. Jonas, “Clinical implications of peripapillary atrophy in glaucoma,” Curr. Opin. Ophthalmol. 16(2), 84–88 (2005).COOTEF Google Scholar

8. K. AndersonA. El-SheikhT. Newson, “Application of structural analysis to the mechanical behaviour of the cornea,” J. R. Soc. Interface 1(1), 3–15 (2004).1742-5689 http://dx.doi.org/10.1098/rsif.2004.0002 Google Scholar

9. A. Elsheikhet al., “Characterization of age-related variation in corneal biomechanical properties,” J. R. Soc. Interface. 7(51), 1475–1485 (2010).1742-5689 http://dx.doi.org/10.1098/rsif.2010.0108 Google Scholar

10. N. Pattonet al., “Retinal image analysis: concepts, applications and potential,” Prog. Retin. Eye Res. 25(1), 99–127 (2006).PRTRES1350-9462 http://dx.doi.org/10.1016/j.preteyeres.2005.07.001 Google Scholar

11. N. M. Tanet al., “Classification of left and right eye retinal images,” Proc. SPIE 7624, 762438 (2010).PSISDG0277-786X http://dx.doi.org/10.1117/12.844638 Google Scholar

12. M. LalondeM. BeaulieuL. Gagnon, “Fast and robust optic disk detection using pyramidal decomposition and Hausdorff-based template matching,” IEEE Trans. Med. Imag. 20(11), 1193–1200 (2001).ITMID40278-0062 http://dx.doi.org/10.1109/42.963823 Google Scholar

13. M. D. Abramoffet al., “Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features,” Invest. Ophthalmol. Vis. Sci. 48(4), 1665–1673 (2007).IOVSDA0146-0404 http://dx.doi.org/10.1167/iovs.06-1081 Google Scholar

14. C. K. Luet al., “Automatic parapapillary atrophy shape detection and quantification in colour fundus images,” in Proc. IEEE BioCAS, pp. 86–89, IEEE, Paphos, Cyprus (2010). Google Scholar

15. E. Coronaet al., “Digital stereo image analyzer for generating automated 3-D measures of optic disc deformation in glaucoma,” IEEE Trans. Med. Imag. 21(10), 1244–1253 (2002).ITMID40278-0062 http://dx.doi.org/10.1109/TMI.2002.806293 Google Scholar

16. J. Xuet al., “Automated assessment of the optic nerve head on stereo disc photographs,” Invest. Ophthalmol. Vis. Sci. 49(6), 2512–2517 (2008).IOVSDA0146-0404 http://dx.doi.org/10.1167/iovs.07-1229 Google Scholar

17. S. TamuraY. Okamoto, “Zero-crossing interval correction in tracing eye-fundus blood vessels,” Pattern Recogn. 21(3), 227–233 (1988).PTNRA80031-3203 http://dx.doi.org/10.1016/0031-3203(88)90057-X Google Scholar

18. C. Sinthanayothinet al., “Automated location of the optic disk, fovea, and retinal blood vessels from digital color fundus images,” Br. J. Ophthalmol. 83(8), 902–910 (1999).BJOPAL0007-1161 http://dx.doi.org/10.1136/bjo.83.8.902 Google Scholar

19. T. Walteret al., “A contribution of image processing to the diagnosis of diabetic retinopathy—detection of exudates in color fundus images of the human retina,” IEEE Trans. Med. Imag. 21(10), 1236–1243 (2002).ITMID40278-0062 http://dx.doi.org/10.1109/TMI.2002.806290 Google Scholar

20. C. K. Luet al., “Quantification of parapapillary atrophy and optic disc,” Invest. Ophthalmol. Vis. Sci. 52(7), 4671–4677 (2011).IOVSDA0146-0404 http://dx.doi.org/10.1167/iovs.10-6572 Google Scholar

21. A. Aquino, M. E. Gegundez-Arias, and D. Marin, “Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques,” IEEE Trans. Med. Imag. 29(11), 1860–1869 (2010). http://dx.doi.org/10.1109/TMI.2010.2053042

22. M. B. Merickel et al., “Segmentation of the optic nerve head combining pixel classification and graph search,” Proc. SPIE 6512, 651215 (2007). http://dx.doi.org/10.1117/12.710588

23. D. Welfer et al., “Segmentation of the optic disc in color eye fundus images using an adaptive morphological approach,” Comput. Biol. Med. 40(2), 124–137 (2010). http://dx.doi.org/10.1016/j.compbiomed.2009.11.009

24. R. Bock et al., “Glaucoma risk index: automated glaucoma detection from color fundus images,” Med. Image Anal. 14(3), 471–481 (2010). http://dx.doi.org/10.1016/j.media.2009.12.006

25. A. G. Salazar-Gonzalez, Y. Li, and X. Liu, “Optic disc segmentation by incorporating blood vessel compensation,” in Proc. Third Int. Workshop on Computational Intelligence in Medical Imaging (CIMI), pp. 1–8, IEEE Computer Society, Paris, France (2011).

26. K. Lee et al., “Segmentation of the optic disc in 3-D OCT scans of the optic nerve head,” IEEE Trans. Med. Imag. 29(1), 159–168 (2010). http://dx.doi.org/10.1109/TMI.2009.2031324

27. C. Muramatsu et al., “Automated determination of cup-to-disc ratio for classification of glaucomatous and normal eyes on stereo retinal fundus images,” J. Biomed. Opt. 16(9), 096009 (2011). http://dx.doi.org/10.1117/1.3622755

28. J. Liu et al., “Detection of pathological myopia by PAMELA with texture-based features through an SVM approach,” J. Healthcare Eng. 1(1), 1–11 (2010).

29. J. Xu et al., “Optic disk feature extraction via modified deformable model technique for glaucoma analysis,” Pattern Recogn. 40(7), 2063–2076 (2007). http://dx.doi.org/10.1016/j.patcog.2006.10.015

30. B.-P. Leslie, “Clinical update: comprehensive. OCT: getting the best images,” EyeNet Magazine, 35–37 (2006).

31. S. Kavitha, S. Karthikeyan, and K. Duraiswamy, “Early detection of glaucoma in retinal images using cup to disc ratio,” in Proc. Int. Conf. on Computing Communication and Networking Technologies, pp. 1–5, IEEE, Karur, India (2010).

32. G. D. Joshi, J. Sivaswamy, and S. R. Krishnadas, “Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment,” IEEE Trans. Med. Imag. 30(6), 1192–1205 (2011). http://dx.doi.org/10.1109/TMI.2011.2106509

33. R. Laemmer et al., “Measurement of autofluorescence in the parapapillary atrophic zone in patients with ocular hypertension,” Graefes Arch. Clin. Exp. Ophthalmol. 245(1), 51–58 (2007). http://dx.doi.org/10.1007/s00417-006-0381-8

34. I. J. Deary et al., “The Lothian Birth Cohort 1936: a study to examine influences on cognitive ageing from age 11 to age 70 and beyond,” BMC Geriatr. 7, 1–12 (2007). http://dx.doi.org/10.1186/1471-2318-7-28

35. A. Fitzgibbon, M. Pilu, and R. B. Fisher, “Direct least square fitting of ellipses,” IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 476–480 (1999). http://dx.doi.org/10.1109/34.765658

36. S. Arya et al., “An optimal algorithm for approximate nearest neighbour searching in fixed dimensions,” J. ACM 45(6), 891–923 (1998). http://dx.doi.org/10.1145/293347.293348

37. Y. Tang et al., “Automatic segmentation of the papilla in a fundus image based on the C-V model and a shape restraint,” in Proc. 18th Int. Conf. on Pattern Recognition, pp. 183–186, IEEE Computer Society, Hong Kong, China (2006).

38. C. Xu and J. L. Prince, “Snakes, shapes, and gradient vector flow,” IEEE Trans. Imag. Proc. 7(3), 359–369 (1998).

39. P. R. Healey et al., “The inheritance of peripapillary atrophy,” Invest. Ophthalmol. Vis. Sci. 48(6), 2529–2534 (2007). http://dx.doi.org/10.1167/iovs.06-0714

40. R. Kolar and P. Tasevsky, “Registration of 3-D retinal optical coherence tomography data and 2-D fundus images,” Lecture Notes in Computer Science 6204, 72–82 (2010). http://dx.doi.org/10.1007/978-3-642-14366-3

41. J. V. B. Soares et al., “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Trans. Med. Imag. 25(9), 1214–1222 (2006). http://dx.doi.org/10.1109/TMI.2006.879967

42. D. Marin et al., “A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariant-based features,” IEEE Trans. Med. Imag. 30(1), 146–158 (2011). http://dx.doi.org/10.1109/TMI.2010.2064333

43. X. You et al., “Segmentation of retinal blood vessels using the radial projection and semi-supervised approach,” Pattern Recogn. 44(10–11), 2314–2324 (2011). http://dx.doi.org/10.1016/j.patcog.2011.01.007

© 2012 Society of Photo-Optical Instrumentation Engineers (SPIE)
Cheng-Kai Lu, Tong Boon Tang, Augustinus Laude, Baljean Dhillon, Alan F. Murray, "Parapapillary atrophy and optic disc region assessment (PANDORA): retinal imaging tool for assessment of the optic disc and parapapillary atrophy," Journal of Biomedical Optics 17(10), 106010 (1 October 2012). https://doi.org/10.1117/1.JBO.17.10.106010
