1. Introduction

The automatic detection and quantitative description of retinal blood vessels is an important area of research. It can assist the clinician in the objective diagnosis of vascular pathologies such as retinopathy of prematurity1,2 or hypertensive retinopathy.3 The retinal vascular network is frequently used as an anchor to coregister data between different imaging modalities.4 Moreover, the coregistration of exams taken at different times allows a more accurate study of the development and progression of the pathological process and deeper insight into the changes occurring in the human retina.5 Special focus has been given to the imaging modalities of color fundus photography (CFP) and fluorescein angiography (FA). Tracking-based methods,6,7 matched filters,8–12 Hessian matrix and gradient vector fields,13–15 supervised learning,11,16–18 and other strategies19 are several of the many approaches proposed for the segmentation of the vascular network from these imaging modalities.

The working principle of optical coherence tomography (OCT), based on the backscattering of low-coherence light, has been extensively described in the literature.20–22 OCT has made it possible to acquire three-dimensional (3-D) data of the microstructure of biological tissue in vivo and in situ. Its main application is imaging the human retina, for which it has become an important tool. With the introduction of high-definition spectral-domain OCT, it is possible to acquire high-resolution cross-sectional scans while maintaining the acquisition time. Vessel segmentation from OCT data is considerably different from that from CFP or FA. At the wavelengths used by regular OCT systems (800 to 900 nm), light is absorbed by hemoglobin, leading to a decrease of the backscattered light from the structures beneath the perfused vessels.23 Segmentation techniques from OCT rely on this well-known effect.
A two-dimensional (2-D) image can be obtained by projecting the OCT volume depth-wise. However, traditional projection methods yield a vascular network with suboptimal detail and contrast. The lateral resolution and spatial sampling interval of OCT make the segmentation of the vascular network difficult, particularly at the macula.24 Because the vessels are considerably thinner in this region, they present lower levels of contrast. A robust method for vascular segmentation on 2-D ocular fundus reference images obtained from OCT data would be valuable and a significant starting point for several algorithms, such as image coregistration and 3-D vascular segmentation.25

Only a few algorithms for segmenting the retinal vascular network from OCT have been described in the literature. The first method was proposed by Niemeijer et al.24 for optic nerve head (ONH) OCT scans. A 2-D projection (by depth-wise sum) of data from the retinal pigment epithelium (RPE) was used, followed by a supervised pixel classification. Xu et al.26 presented a method that does not require OCT layer/interface segmentation; a boosting learning algorithm was used to segment the vascular network from ONH OCT volumes. Pilch et al.27 took a different approach and segmented the vessels directly on high-resolution cross-sectional B-scans close to the ONH. All of these techniques were validated on healthy retinas.

This report describes a fully automatic method for segmenting the vascular network of the human retina from standard OCT data. We rely on work previously developed by our research group28 to generate 2-D fundus reference images from OCT data volumes. From these images, a set of features is computed to feed a supervised classification algorithm, a support vector machine (SVM), which classifies pixels into vessel or nonvessel.

2. 2-D Fundus Images

As noted, light absorption by hemoglobin is responsible for the decrease in light scattering beneath perfused vessels.
The segmentation process takes advantage of this effect by computing a set of 2-D fundus reference images from the 3-D OCT data in a preprocessing step. A study was conducted to identify the images that provide the best discrimination between the vascular network and the remaining tissue (background). The images evaluated in this study were the mean value fundus (MVF), the expected value fundus (EVF), the error to local median (ELM), and the principal component fundus (PCF).28

Throughout this paper, we use the following coordinate system for the OCT data: x is the nasal-temporal direction, y is the superior-inferior direction, and z is the anterior-posterior direction (depth). Let V(x, y, z) be a low-pass filtered OCT volume, flattened at the junction of the inner and outer photoreceptor segments (IS/OS). The MVF image is computed as the average of the A-scan values within the lower layers of the retina, i.e., between the z coordinates of the IS/OS junction and of the RPE/choroid interface. The EVF image is computed similarly for a given order (see Ref. 28 for its definition). Both MVF and EVF are corrected for nonuniform intensities.28 The ELM image measures the deviation of each A-scan of V from a local median volume; each median A-scan is computed from the neighborhood of the corresponding A-scan of V within a window in the x-y plane whose size is set relative to the size (in voxels) of the scanned region in the x and y directions. The PCF image is computed as the principal component (by principal component analysis) of the MVF, EVF, and ELM images. In Ref. 28, it was demonstrated that the PCF image provides a greater extension of the vascular network and better contrast than the other fundus reference images (MVF, EVF, and ELM). In addition, when computed from standard OCT data, it presents a vascular network extension similar to that achieved with CFP.28 Figure 1 shows the four fundus images from the same OCT scan covering the central 20 deg field of view of a healthy retina.
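The depth-wise averaging that produces the MVF image can be sketched as follows in Python/NumPy. This is a minimal illustration, not the authors' MATLAB implementation: the array layout, variable names, and the per-A-scan interface depths (`z_isos`, `z_rpe`) are assumptions.

```python
import numpy as np

def mean_value_fundus(vol, z_isos, z_rpe):
    """Average each A-scan between two segmented interfaces (illustrative)."""
    nx, ny, nz = vol.shape
    mvf = np.zeros((nx, ny))
    z = np.arange(nz)
    for x in range(nx):
        for y in range(ny):
            # average backscattering between the IS/OS and RPE/choroid depths
            mask = (z >= z_isos[x, y]) & (z <= z_rpe[x, y])
            mvf[x, y] = vol[x, y, mask].mean()
    return mvf

# toy volume with constant A-scans: the projection equals that constant
vol = np.full((4, 4, 16), 2.0)
z1 = np.full((4, 4), 3, dtype=int)
z2 = np.full((4, 4), 10, dtype=int)
print(mean_value_fundus(vol, z1, z2)[0, 0])
```

In practice the per-pixel loop would be vectorized, but the loop form makes the depth-window average explicit.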
3. Features

All four 2-D fundus reference images computed from the OCT volumes (MVF, EVF, ELM, and PCF) were used as features in the classification process. This section describes the additional features that were computed from the PCF image. These features were selected through a forward-selection approach from a larger pool of features (see Sec. 5). Sliding windows and filters discussed in this section take into account the differences in spatial sampling between the x and y directions. As such, the results are independent of the acquisition protocol used. The parameters used for computing the fundus images and SVM features presented herein are discussed in Sec. 7. Throughout the following sections, s and θ represent the scale and orientation indexes, respectively.

3.1. Intensity-Based Features

Intensity-based features (previously used in a similar context16) describe the local intensity variations. These features are computed from the PCF image using a sliding window centered at each pixel: the range, average, standard deviation, and entropy of the intensities within the window (Fig. 2).

3.2. Gaussian-Derivative Filters

The use of the Hessian matrix to perform scale-space analysis is a well-established technique that is commonly applied in CFP and FA image analysis.13,14 It is applied here, with slight modifications, to the PCF image. The Hessian matrix at scale s is the 2 × 2 matrix of the second-order partial derivatives of the image at that scale.
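A minimal sketch of the windowed intensity features, assuming a NumPy/SciPy setting; the window size and test image are illustrative, and the entropy feature is omitted for brevity:

```python
import numpy as np
from scipy import ndimage

def intensity_features(img, size=5):
    """Windowed mean, standard deviation, and range of an image (illustrative)."""
    mean = ndimage.uniform_filter(img, size=size)
    # variance via E[x^2] - E[x]^2 within the window
    sq_mean = ndimage.uniform_filter(img * img, size=size)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    rng = ndimage.maximum_filter(img, size=size) - ndimage.minimum_filter(img, size=size)
    return mean, std, rng

img = np.zeros((9, 9))
img[4, 4] = 1.0  # single bright pixel
m, s, r = intensity_features(img)
print(m[4, 4], r[4, 4])  # 1/25 of the window is bright; range is 1
```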
These derivatives result from the convolution of the second-order partial derivatives of a Gaussian filter with the fundus reference image. To accommodate differences in sampling, the standard deviations of the Gaussian are defined per direction from the spatial sampling along that direction, a normalization parameter, and the sampling-invariant standard deviation of the filter at scale s.

3.2.1. Hessian eigenvectors

Eigenvectors of the Hessian matrix provide information about the local image curvature. For ocular fundus images, at vessel pixels, the eigenvector (in polar coordinates) associated with the largest/smallest eigenvalue is normal/parallel to the vessel.14 In addition, the largest eigenvalue at vessel pixels is considerably larger than at background pixels, and its eigenvector presents a consistent direction across different scales. As such, relevant local information can be extracted from the eigenvectors of the Hessian matrix. In this work, we use the largest eigenvalue and the corresponding eigenvector orientation to compute two features.

3.2.2. Laplacian of Gaussian

The Laplacian of Gaussian filter is commonly used as an edge detector. This feature represents the Laplacian of a low-pass (Gaussian) filtered version of the image; it can be obtained directly as the trace of the Hessian matrix at multiple scales.

3.3. Local-Phase Features

Good results in edge and corner detection using phase congruency were achieved by Kovesi.29,30 The method also proved useful for vascular segmentation in ocular fundus images.31 Computing local-phase features requires the convolution of the image with a bank of log-Gabor kernels, each having a unique orientation-scale pair. These filters are created by combining a radial and an angular component, limiting the frequency bands and the orientation of the filter, respectively (Fig. 4).
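The multiscale Hessian analysis can be sketched with Gaussian-derivative filters and the closed-form eigenvalues of a 2 × 2 symmetric matrix. This is an illustrative sketch: an isotropic scale `sigma` is used, whereas the text defines per-direction standard deviations to compensate for anisotropic sampling.

```python
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(img, sigma=2.0):
    """Eigenvalues of the scale-space Hessian at each pixel (illustrative)."""
    ixx = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    iyy = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    ixy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    # closed-form eigenvalues of [[ixx, ixy], [ixy, iyy]]
    half_tr = 0.5 * (ixx + iyy)
    disc = np.sqrt((0.5 * (ixx - iyy)) ** 2 + ixy ** 2)
    return half_tr + disc, half_tr - disc  # largest, smallest

img = np.zeros((32, 32))
img[:, 15:17] = 1.0  # a bright vertical "vessel"
l1, l2 = hessian_eigenvalues(img)
```

On a bright ridge, the eigenvalue associated with the across-vessel direction is strongly negative, which is the curvature cue exploited above; the trace `ixx + iyy` gives the Laplacian of Gaussian feature directly.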
The radial component is computed with the log-Gabor transfer function in the frequency space (in polar coordinates). It is a Gaussian function of the log-frequency, parameterized by the ratio between the standard deviation of that Gaussian (in the frequency domain) and the central frequency of the filter at scale s, with the central frequencies derived from the largest central frequency.32 The log-Gabor transfer function is then multiplied (Hadamard product) by a low-pass Butterworth filter, parameterized by its cutoff frequency and order, to ensure uniform coverage in all orientations. In turn, the angular component is a Gaussian function of the angular distance to the filter orientation θ, with the orientations evenly distributed according to the total number of orientations. These components are multiplied (Hadamard product) to obtain the frequency-domain log-Gabor filter. In the time domain, the filter is composed of the even (real part) and the odd (imaginary part) kernels (Fig. 5). The local-phase features—phase congruency (PC), feature type (FT), phase symmetry (PSym), and symmetry energy (SymE)—are described below. Examples of these features are shown in Fig. 6.

3.3.1. Phase congruency

PC is a dimensionless quantity that measures the agreement of the phase of the Fourier components of the signal (image), being invariant to changes in image brightness or contrast. It differs from gradient-based features because the same relevance is given to all frequency components, independent of gradient magnitude.
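The construction of a single 2-D log-Gabor filter in the frequency domain (radial log-Gaussian component times an angular Gaussian component) can be sketched as below. Parameter values are illustrative assumptions, and the Butterworth low-pass stage and anisotropic-sampling correction are omitted.

```python
import numpy as np

def log_gabor(shape, f0=0.1, sigma_ratio=0.55, theta0=0.0, sigma_theta=np.pi / 6):
    """One frequency-domain log-Gabor kernel (illustrative parameters)."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0  # avoid log(0) at the DC term
    # radial component: Gaussian on a log-frequency axis, centered at f0
    radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0  # log-Gabor filters have no DC response
    # angular component: Gaussian in the (wrapped) angular distance to theta0
    theta = np.arctan2(fy, fx)
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    return radial * angular

g = log_gabor((64, 64))
```

Multiplying `g` by the FFT of the image and inverting yields a complex response whose real and imaginary parts are the even and odd wavelet responses used by the local-phase features.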
Hence, an estimate of the noise is removed from the local energy.29 The PC evaluates the local phase information of the log-Gabor wavelet transform of the image. Its computation truncates negative quantities to zero, includes a small positive constant to prevent division by zero, and normalizes the local energy (the norm of the weighted mean phase angle vector) by the sum of the filter response amplitudes.29 A noise threshold, estimated from the response amplitude of the smallest scale filter, is subtracted from the local energy.29 The weighted mean phase angle vector is a unit vector derived from the responses obtained by convolving the image with the even and odd components of the log-Gabor kernels, and a sigmoid weighting function penalizes responses with a narrow spread over the filter bank.

3.3.2. Feature type

While higher values of the even wavelet response are found at vessel pixels, the odd response takes higher values at the vessel boundaries and other step/edge locations. To emphasize line-like structures, the even wavelet response is weighted by its odd counterpart. FT was adapted from Ref. 32 and is computed with the four-quadrant arctangent (arctan2) of the even and odd responses.

3.3.3. Phase symmetry

PSym is a local contrast-invariant measure of the degree of symmetry of an image.33 This feature was adapted from Refs. 32 and 33.

3.3.4. Symmetry energy

The last of the local-phase features is the total unnormalized raw symmetry energy.32

3.4. Band-Pass Filter

The PCF image is filtered with a band-pass filter defined by the log-Gabor radial component (Sec. 3.3). Several values of the central frequency were tested; see Table 1 for the chosen parameters. A band-pass filtered PCF can be seen in Fig. 7(a).

Table 1 Parameter values used for feature computation.
3.5. Filter Banks

Locally, vessels may be considered linear structures, and pixels are expected to preserve their intensity along the vessel direction. In this class of features, the PCF image is filtered with sets of kernels. The responses are then combined into a single feature for each type of filter bank: the average filter bank and the log-Gabor filter bank features [Figs. 7(b) and 7(c), respectively].

3.5.1. Average filter bank

Consistency along a local direction is a characteristic exhibited by vessel pixels but not by background ones. By using directional average kernels (average line operators34), pixels that belong to a vessel are highlighted. Each of these kernels is a matrix of zeros except on the line that crosses its center at angle θ. Differences between acquisition protocols are again taken into account: the line length for each orientation is computed from an ellipse (in polar coordinates) whose axes depend on the spatial sampling, so that each kernel accommodates a straight line of sampling-invariant length. The bank responses are then combined into a single feature.

3.5.2. Log-Gabor filter bank

For the log-Gabor filter bank, the same principle of consistency (used in the average filter bank) applies, this time using 2-D log-Gabor filters. This method is frequently used for vessel enhancement and detection.11,12 The bank responses are combined across scales and orientations as the average over scales of the maximal response, over the different directions, of the convolution of the PCF image with the even component of a log-Gabor wavelet of orientation θ and scale s.

4. Data

OCT data were gathered from our institution's database. These OCTs had been acquired with the high-definition spectral-domain Cirrus™ HD-OCT (Carl Zeiss Meditec Inc., Dublin, CA).
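A directional average filter bank can be sketched as below. The kernel construction (ones along a line through the center, normalized) follows the description above; combining the responses by the maximum over orientations is one plausible choice, shown here for illustration, and the line length and orientation count are assumptions.

```python
import numpy as np
from scipy import ndimage

def line_kernel(length, theta):
    """Kernel that averages along a line of the given angle (illustrative)."""
    k = np.zeros((length, length))
    c = length // 2
    t = np.arange(length) - c
    rows = np.round(c + t * np.sin(theta)).astype(int)
    cols = np.round(c + t * np.cos(theta)).astype(int)
    k[rows, cols] = 1.0
    return k / k.sum()

def average_filter_bank(img, length=9, n_orientations=8):
    """Maximum response over a bank of directional average kernels."""
    thetas = np.pi * np.arange(n_orientations) / n_orientations
    responses = [ndimage.convolve(img, line_kernel(length, th)) for th in thetas]
    return np.max(responses, axis=0)

img = np.zeros((21, 21))
img[10, :] = 1.0  # a horizontal line: the 0-deg kernel responds fully
out = average_filter_bank(img)
print(out[10, 10])
```

A pixel on the bright line keeps its full intensity under the kernel aligned with it, while background pixels average in mostly zeros, which is the consistency cue the feature exploits.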
It allows the acquisition of volumetric data covering a 20 deg field of view of the ocular fundus along the nasal-temporal (x), superior-inferior (y), and anterior-posterior (z) directions, under two acquisition protocols with different sampling densities. Three different datasets—DS1, DS2, and DS3—were used to validate our algorithm.

OCT macular scans of 10 eyes from 10 healthy subjects and 20 eyes from 13 patients diagnosed with type 2 diabetes mellitus [early treatment diabetic retinopathy study (ETDRS) levels 10 to 35] were used for optimization and cross-validation of the classification process. The healthy group contains four volumes acquired with one of the protocols and six with the other; the diabetic group consists of 11 volumes of one protocol and 9 of the other. This dataset is referred to as DS1. Although our objective is the segmentation of the vascular network within the macular region, we also consider the ONH region to allow the comparison with previously proposed methods. With this purpose, a dataset (DS2) composed of ONH-centered OCT scans of 10 eyes from 10 healthy subjects (all acquired with the same protocol) was used. Finally, dataset DS3 was used to evaluate the robustness of the proposed method when applied to eyes with pathological disorders. It consists of macular OCTs of eight eyes with different pathologic disorders (four OCTs of each protocol).

All 48 PCF reference images were manually segmented pixel-by-pixel by two graders (T.M. and S.S.), who established two ground truths (one per grader) for each image. The ground truths of the first grader (T.M.) are used for all SVM training and testing processes (instead of the union or intersection of the two segmentations), following the approach used in Ref. 18. The segmentations of the second grader (S.S.) are used only to assess the intergrader agreement.

5. Classification

SVM is a supervised-learning algorithm widely used in pattern recognition.35,36 In this work, pixel-by-pixel classification was performed by SVM.
For training, manual segmentations of the vascular network are used to derive the best SVM model. A C-support vector classification with a radial-basis-function kernel was used,37 which requires two additional parameters intrinsic to the SVM: the parameter C, controlling the separability margin of the hyperplane, and the parameter γ, controlling the spread of the kernel. The best (C, γ) combination was searched for with a genetic algorithm aiming for the highest cross-validation accuracy. The dataset DS1 and the ground truths of the first grader were used for this optimization. Because of the large amount of data and the required computing time, we used a twofold cross-validation for the optimization and only a fraction of each image: 10% of vessel pixels and 10% of nonvessel pixels (both randomly selected) of each image.

All the defined features were used in the SVM. These features were selected from a larger pool using a forward-selection approach based on the accuracy of the classification. This pool included all the features described herein, as well as variations of these (e.g., different orders and window sizes), and additional features such as moment invariants-based features16,38 and semantic/categorical features using spin descriptors,39,40 to name a few. The segmentations of the first grader and the dataset DS1 were used in this process.

6. Metrics

Accuracy, specificity, and sensitivity were used for the evaluation of the system performance. In addition, appropriate metrics were borrowed from Ref. 41: the connectivity C, area A, and length L, combined into CAL. C, A, and L are computed from the binary images of the segmentation being evaluated and of the ground truth segmentation, using the number of eight-connected components, morphological dilation, morphological skeletonization, and the AND and OR binary operators.41
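The per-image training subsample described above (10% of vessel pixels and 10% of nonvessel pixels, drawn at random from the grader's mask) can be sketched as follows; the mask, fraction, and RNG seed are illustrative.

```python
import numpy as np

def sample_pixels(gt_mask, fraction=0.10, seed=None):
    """Randomly pick a fraction of vessel and of nonvessel pixel coordinates."""
    rng = np.random.default_rng(seed)
    vessel = np.flatnonzero(gt_mask.ravel())
    background = np.flatnonzero(~gt_mask.ravel())
    n_v = max(1, int(fraction * vessel.size))
    n_b = max(1, int(fraction * background.size))
    sel = np.concatenate([
        rng.choice(vessel, n_v, replace=False),
        rng.choice(background, n_b, replace=False),
    ])
    return np.unravel_index(sel, gt_mask.shape)

gt = np.zeros((100, 100), dtype=bool)
gt[:, 40:50] = True  # 1000 vessel pixels, 9000 background pixels
rows, cols = sample_pixels(gt, seed=0)
print(rows.size)  # 100 vessel + 900 background = 1000 coordinates
```

The feature vectors at the sampled coordinates, with the mask values as labels, would then feed the SVM training.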
CAL is defined as the product of the three components (C, A, and L). These metrics are specific to the vascular network and are insensitive to small tracing differences in grader segmentations.41 Cohen's κ (Ref. 42) was computed as a metric of segmentation agreement. It is defined by κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e is the expected agreement by chance.

7. Results

Each of the features used by the supervised-classification process requires the specification of working parameters. These parameters were defined based on the system characteristics (e.g., scanning protocol), the forward-selection method (features), or values previously defined in the literature. The parameters used in this work can be found in Table 1. It is important to note that no postprocessing was applied: all automatic segmentations are the direct result of the classification process.

The classification was validated using a 10-fold cross-validation approach on the dataset DS1 with manual segmentations from the first grader. Each of the 10 groups was composed of macular scans from one healthy and two diabetic subjects. OCTs acquired with different protocols were also distributed equitably throughout the groups. Quantitative results are summarized in Table 2. For visual inspection, the cases with the worst and best performance (for the accuracy and CAL metrics) are shown in Fig. 8.

Table 2 Results from the 10-fold cross-validation on dataset DS1, healthy and diabetic subjects. The minimum (MIN), maximum (MAX), average (AVG), and standard deviation (SD) values for each segmentation performance metric.
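The pixelwise metrics and Cohen's κ can be sketched directly from the two binary masks; the connectivity/area/length (CAL) components of Ref. 41 require morphological operators and are not reproduced here.

```python
import numpy as np

def segmentation_metrics(seg, gt):
    """Accuracy, sensitivity, specificity, and Cohen's kappa for binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.sum(seg & gt)
    tn = np.sum(~seg & ~gt)
    fp = np.sum(seg & ~gt)
    fn = np.sum(~seg & gt)
    n = seg.size
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = accuracy
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_o - p_e) / (1 - p_e)
    return accuracy, sensitivity, specificity, kappa

gt = np.zeros((10, 10), dtype=bool)
gt[:, :5] = True
acc, sens, spec, kappa = segmentation_metrics(gt, gt)
print(acc, kappa)  # perfect agreement: 1.0 1.0
```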
The achieved values demonstrate the viability of the proposed method. The high specificity (0.994) shows that there is little oversegmentation, percentagewise. On the other hand, undersegmentation is a more prominent problem, as demonstrated by the lower sensitivity (0.825). Visual assessment of Fig. 8 suggests that the undersegmentation is associated with the misclassification of pixels on low-caliber vessels. Considering the application of subsequent algorithms (e.g., 3-D vascular segmentation and computation of vascular network descriptors), a specificity close to 1 (associated with a high sensitivity) is particularly important, as the postprocessing stage would be less demanding. The differences between the healthy and diabetic groups are small, although the algorithm performed better on the healthy group.

A new training was performed using all OCT scans in DS1. The model was then used to segment the OCT scans in DS2 (the ONH OCTs) and in DS3 (the pathological cases). For the ONH dataset, the optic disk was manually segmented and discarded in the metric computation. Quantitative results are shown in Tables 3 and 4. For qualitative assessment, Fig. 9 shows a few examples of vascular segmentation on the ONH OCTs, and Fig. 10 shows all segmentation results for the pathological cases. Although the segmentation was developed mainly for OCTs of healthy eyes and eyes close to the healthy condition, the results on the pathological cases demonstrate the robustness of the proposed method in rather extreme conditions.

Table 3 Test results for the optic nerve head dataset (N=10). The minimum (MIN), maximum (MAX), average (AVG), and standard deviation (SD) values for each segmentation performance metric.
Table 4 Test results for the pathology dataset (N=8). The minimum (MIN), maximum (MAX), average (AVG), and standard deviation (SD) values for each segmentation performance metric.
The proposed approach outperformed the one published in Ref. 26 for segmentation in the ONH region, where a specificity of 88% and a sensitivity of 85% were achieved. Although the dataset is not the same, it has the same fundamental characteristics: the OCT scans were acquired with the Cirrus HD-OCT using the same acquisition protocol, all OCTs were centered on the ONH region, the number of cases is similar, and all eyes are from healthy subjects. Furthermore, in our work, the optimization and training processes were performed on OCT scans from the macular region; the process could therefore benefit from parameter optimization and training on ONH-centered scans.

In the pathological cases, the specificity is lower, which translates to higher oversegmentation. However, while no training was performed on pathological cases other than the diabetic ones, the results achieved on several pathologies are similar to the results achieved with the healthy and the diabetic subjects (in terms of accuracy and κ), which attests to the robustness of the proposed method.

The agreement between the two graders was also evaluated by testing the manual segmentations of the second grader (S.S.) against those of the first grader (T.M.). Results can be found in Table 5. These confirm the good performance of the algorithm: for the majority of the metrics, the average values for the automatic segmentations on DS1 and DS2 are greater than or equal to those for the second grader. Contrarily, as expected, for DS3 the metrics (although similar) are lower than those for the comparison between graders.

Table 5 Intergrader agreement (N=48). The minimum (MIN), maximum (MAX), average (AVG), and standard deviation (SD) values for each segmentation performance metric.
The segmentation process (OCT fundus reference computation, feature computation, and SVM classification) was implemented in nonoptimized MATLAB® (The MathWorks Inc., Natick, MA) code, and the total execution time was measured for each of the two acquisition protocols on an Intel® Core™ i7-3770 CPU (Intel Corporation, Santa Clara, California) at 3.4 GHz.

8. Discussion and Conclusions

To our knowledge, the work presented herein is the first attempt to automatically segment the vascular network of the macular region. Our findings show that it is able to work with standard OCTs of both healthy and diseased retinas. The proposed method achieved good results for both the macular and ONH regions, even with no training on ONH OCTs. The proposed models (trained only on healthy and diabetic subjects) already show robustness on the pathological cases, as the results do not differ substantially from the DS1 results. Nevertheless, with proper training on pathological data and given enough pathological cases, the SVM would be able to create even more robust models, which would improve the classification performance on these cases.

Improvements are still possible, and the 2-D segmentations could benefit from a postprocessing stage. The spectrum of features presented here can be useful for region growing based on the automatic segmentations and for the subsequent deletion of small, incorrectly segmented regions. For the pathological cases, the classification specificity is lower, which leads to a more complex postprocessing stage than for DS1 and DS2. Additional research will be performed on these issues.

With the proposed method, additional algorithms of coregistration43 and computation of vascular network descriptors,44,45 already described for other imaging modalities, can now be applied to OCT. We are convinced that the proposed segmentation is a valuable tool for most multimodal coregistration algorithms, even in the pathological cases.
Therefore, multimodal imaging and studies of disease progression (even in extreme cases) are a major application area of the proposed method. The representation of blood vessels on OCT fundus reference images depends on the presence of hemoglobin; therefore, the method is not able to segment fully occluded vessels. However, for reperfused or partially occluded vessels, segmentation is still possible [Fig. 10(b)]. This led to yet another area of application now being pursued by our research group: the discrimination between perfused and occluded vessels from OCT data.46 Finally, this method establishes a good starting point toward fully automatic 3-D segmentation. The first tests on 3-D segmentation have been conducted in Ref. 25.

Acknowledgments

This work was supported by Fundação para a Ciência e a Tecnologia (FCT) under the projects PTDC/SAU-ENB/111139/2009 and PEST/C/SAU/3282/2011, and by the COMPETE programs FCOMP-01-0124-FEDER-015712 and FCOMP-01-0124-FEDER-022717.

References

1. J. Chen and L. E. H. Smith, "Retinopathy of prematurity," Angiogenesis 10(2), 133–140 (2007). http://dx.doi.org/10.1007/s10456-007-9066-0
2. C. Heneghan et al., "Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis," Med. Image Anal. 6(4), 407–429 (2002). http://dx.doi.org/10.1016/S1361-8415(02)00058-0
3. R. Bernardes, P. Serranho, and C. Lobo, "Digital ocular fundus imaging: a review," Ophthalmologica 226(4), 161–181 (2011). http://dx.doi.org/10.1159/000329597
4. Y. Li et al., "Registration of OCT fundus images with color fundus photographs based on blood vessel ridges," Opt. Express 19(1), 7 (2011). http://dx.doi.org/10.1364/OE.19.000007
5. M. D. Abràmoff, M. K. Garvin, and M. Sonka, "Retinal imaging and image analysis," IEEE Rev. Biomed. Eng. 3, 169–208 (2010). http://dx.doi.org/10.1109/RBME.2010.2084567
6. A. Can et al., "Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms," IEEE Trans. Inf. Technol. Biomed. 3(2), 125–138 (1999). http://dx.doi.org/10.1109/4233.767088
7. O. Chutatape, L. Zheng, and S. M. Krishnan, "Retinal blood vessel detection and tracking by matched Gaussian and Kalman filters," in Proc. of the 20th Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, 3144–3149 (1998).
8. S. Chaudhuri et al., "Detection of blood vessels in retinal images using two-dimensional matched filters," IEEE Trans. Med. Imaging 8(3), 263–269 (1989). http://dx.doi.org/10.1109/42.34715
9. M. Sofka and C. V. Stewart, "Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures," IEEE Trans. Med. Imaging 25(12), 1531–1546 (2006). http://dx.doi.org/10.1109/TMI.2006.884190
10. L. Wang, A. Bhalerao, and R. Wilson, "Analysis of retinal vasculature using a multiresolution Hermite model," IEEE Trans. Med. Imaging 26(2), 137–152 (2007). http://dx.doi.org/10.1109/TMI.2006.889732
11. J. V. B. Soares et al., "Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification," IEEE Trans. Med. Imaging 25(9), 1214–1222 (2006). http://dx.doi.org/10.1109/TMI.2006.879967
12. Q. Li et al., "A new approach to automated retinal vessel segmentation using multiscale analysis," in 18th Int. Conf. Pattern Recognition, 77–80 (2006).
13. M. E. Martinez-Perez et al., "Segmentation of blood vessels from red-free and fluorescein retinal images," Med. Image Anal. 11(1), 47–61 (2007). http://dx.doi.org/10.1016/j.media.2006.11.004
14. N. Salem, S. Salem, and A. Nandi, "Segmentation of retinal blood vessels based on analysis of the Hessian matrix and clustering algorithm," in 15th European Signal Processing Conf., 428–432 (2007).
15. B. S. Y. Lam and H. Yan, "A novel vessel segmentation algorithm for pathological retina images based on the divergence of vector fields," IEEE Trans. Med. Imaging 27(2), 237–246 (2008). http://dx.doi.org/10.1109/TMI.2007.909827
16. D. Marín et al., "A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features," IEEE Trans. Med. Imaging 30(1), 146–158 (2011). http://dx.doi.org/10.1109/TMI.2010.2064333
17. C. Sinthanayothin et al., "Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images," Br. J. Ophthalmol. 83(8), 902–910 (1999). http://dx.doi.org/10.1136/bjo.83.8.902
18. J. Staal et al., "Ridge-based vessel segmentation in color images of the retina," IEEE Trans. Med. Imaging 23(4), 501–509 (2004). http://dx.doi.org/10.1109/TMI.2004.825627
19. A. M. Mendonça and A. Campilho, "Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction," IEEE Trans. Med. Imaging 25(9), 1200–1213 (2006). http://dx.doi.org/10.1109/TMI.2006.879955
20. P. Serranho, A. M. Morgado, and R. Bernardes, "Optical coherence tomography: a concept review," in Optical Coherence Tomography: A Clinical and Technical Update, 139–156, Springer, Berlin, Heidelberg (2012).
21. D. Huang et al., "Optical coherence tomography," Science 254(5035), 1178–1181 (1991). http://dx.doi.org/10.1126/science.1957169
22. W. Drexler, "Ultrahigh-resolution optical coherence tomography," J. Biomed. Opt. 9(1), 47–74 (2004). http://dx.doi.org/10.1117/1.1629679
23. J. G. Fujimoto and W. Drexler, "Introduction to optical coherence tomography," in Optical Coherence Tomography: A Clinical and Technical Update, 1–45, Springer, Berlin, Heidelberg (2008).
24. M. Niemeijer et al., "Vessel segmentation in 3D spectral OCT scans of the retina," Proc. SPIE 6914, 69141R (2008). http://dx.doi.org/10.1117/12.772680
25. P. Guimarães et al., "3D retinal vascular network from optical coherence tomography data," Lect. Notes Comput. Sci. 7325, 339–346 (2012). http://dx.doi.org/10.1007/978-3-642-31298-4
26. J. Xu et al., "3D OCT retinal vessel segmentation based on boosting learning," in World Congress on Medical Physics and Biomedical Engineering, 179–182 (2009).
27. M. Pilch et al., "Automated segmentation of retinal blood vessels in spectral domain optical coherence tomography scans," Biomed. Opt. Express 3(7), 1478 (2012). http://dx.doi.org/10.1364/BOE.3.001478
28. P. Guimarães et al., "Ocular fundus reference images from optical coherence tomography," http://www.aibili.pt/publicacoes/techReport.pdf (accessed October 2013).
29. P. D. Kovesi, "Image features from phase congruency," Videre: J. Comput. Vis. Res. 1(3), 1–26 (1999).
30. P. D. Kovesi, "Phase congruency detects corners and edges," in The Australian Pattern Recognition Society Conference, 309–318 (2003).
31. T. Zhu, "Fourier cross-sectional profile for vessel detection on retinal images," Comput. Med. Imaging Graph. 34(3), 203–212 (2010). http://dx.doi.org/10.1016/j.compmedimag.2009.09.004
32. P. D. Kovesi, "MATLAB and Octave functions for computer vision and image processing," http://www.peterkovesi.com/ (accessed April 2013).
33. P. D. Kovesi, "Symmetry and asymmetry from local phase," in Tenth Australian Joint Conf. on Artificial Intelligence, 185–190 (1997).
34. E. Ricci and R. Perfetti, "Retinal blood vessel segmentation using line operators and support vector classification," IEEE Trans. Med. Imaging 26(10), 1357–1365 (2007). http://dx.doi.org/10.1109/TMI.2007.898551
35. N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge (2000).
36. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley, Hoboken, New Jersey (2000).
37. C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," http://www.csie.ntu.edu.tw/~cjlin/libsvm/ (accessed April 2013).
38. M.-K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inf. Theory 8(2), 179–187 (1962). http://dx.doi.org/10.1109/TIT.1962.1057692
39. X. Cheng et al., "Automatic localization of retinal landmarks," in Proc. of the 34th Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, 4954–4957 (2012).
40. S. Lazebnik, C. Schmid, and J. Ponce, "A sparse texture representation using local affine regions," IEEE Trans. Pattern Anal. Mach. Intell. 27(8), 1265–1278 (2005). http://dx.doi.org/10.1109/TPAMI.2005.151
41. M. E. Gegúndez-Arias et al., "A function for quality evaluation of retinal vessel segmentations," IEEE Trans. Med. Imaging 31(2), 231–239 (2012). http://dx.doi.org/10.1109/TMI.2011.2167982
42. J. Cohen, "A coefficient of agreement for nominal scales," Educ. Psychol. Meas. 20(1), 37–46 (1960). http://dx.doi.org/10.1177/001316446002000104
43. F. Zana and J.-C. Klein, "A multimodal registration algorithm of eye fundus images using vessels detection and Hough transform," IEEE Trans. Med. Imaging 18(5), 419–428 (1999). http://dx.doi.org/10.1109/42.774169
44. W. E. Hart et al., "Measurement and classification of retinal vascular tortuosity," Int. J. Med. Inform. 53(2), 239–252 (1999). http://dx.doi.org/10.1016/S1386-5056(98)00163-4
45. N. Witt et al., "Abnormalities of retinal microvascular structure and risk of mortality from ischemic heart disease and stroke," Hypertension 47(5), 975–981 (2006). http://dx.doi.org/10.1161/01.HYP.0000216717.72048.6c
46. R. Bernardes et al., "Non-invasive discrimination between perfused and occluded vessels by optical coherence tomography," Acta Ophthalmol. 91(s252) (2013).