Open Access
16 December 2013

Two-dimensional segmentation of the retinal vascular network from optical coherence tomography
Pedro Rodrigues, Pedro Guimaraes, Torcato Santos, Silvia Simao, Telmo Miranda, Pedro Serranho, Rui C. Bernardes
Abstract
The automatic segmentation of the retinal vascular network from ocular fundus images has been performed by several research groups. Although different approaches have been proposed for traditional imaging modalities, only a few have addressed this problem for optical coherence tomography (OCT). Furthermore, these approaches were focused on the optic nerve head region. Compared to color fundus photography and fluorescein angiography, two-dimensional ocular fundus reference images computed from three-dimensional OCT data present additional problems related to system lateral resolution, image contrast, and noise. Specifically, the combination of system lateral resolution and vessel diameter in the macular region renders the process particularly complex, which might partly explain the focus on the optic disc region. In this report, we describe a set of features computed from standard OCT data of the human macula that are used by a supervised-learning process (support vector machines) to automatically segment the vascular network. For a set of macular OCT scans of healthy subjects and diabetic patients, the proposed method achieves 98% accuracy, 99% specificity, and 83% sensitivity. This method was also tested on OCT data of the optic nerve head region achieving similar results.

1. Introduction

The automatic detection and quantitative description of retinal blood vessels is an important area of research. It can assist the clinician toward the objective diagnosis of vascular pathologies such as retinopathy of prematurity1,2 or hypertensive retinopathy.3

The retinal vascular network is frequently used as the anchor to coregister data between different imaging modalities.4 Moreover, the coregistration of exams taken at different times allows a more accurate study of the development and progression of the pathological process and a deeper insight of the changes occurring in the human retina.5

Special focus has been given to the imaging modalities of color fundus photography (CFP) and fluorescein angiography (FA). Tracking-based methods,6,7 matched filters,8–12 Hessian matrix and gradient vector fields,13–15 supervised learning,11,16–18 and other strategies19 are several of the many approaches proposed for segmentation of the vascular network from these imaging modalities.

The working principle of optical coherence tomography (OCT), which is based on the backscattering of a low-coherence light, has been extensively described in the literature.20–22 OCT has made it possible to acquire three-dimensional (3-D) data of the microstructure of biological tissue in vivo and in situ. Its main application is imaging the human retina, for which it has become an important tool. With the introduction of high-definition spectral-domain OCT, it is possible to acquire high-resolution cross-sectional scans while maintaining the acquisition time.

Vessel segmentation from OCT data is considerably different than that from CFP or FA. At the wavelengths used by regular OCT systems (800 to 900 nm), light is absorbed by the hemoglobin, leading to a decrease of the backscattered light from the structures beneath the perfused vessels.23 As such, segmentation techniques rely on this well-known effect.

A two-dimensional (2-D) image can be obtained by projecting the OCT volume depth-wise. However, traditional methods translate into a vascular network with suboptimal detail and contrast. The lateral resolution and spatial sampling interval of OCT make the process of vascular network segmentation difficult, particularly at the macula.24 Because the vessels are considerably thinner in this region, they present lower levels of contrast.

A robust method for vascular segmentation on 2-D ocular fundus reference images obtained from OCT data would be valuable and a significant starting point for subsequent algorithms, such as image coregistration and 3-D vascular segmentation.25

Only a few algorithms for segmenting the retinal vascular network using OCT have been described in the literature. The first method was proposed by Niemeijer et al.24 for optic nerve head (ONH) OCT scans. A 2-D projection (by depth-wise sum) of data from the retinal pigment epithelium (RPE) was used, followed by a supervised pixel classification. Xu et al.26 presented a method that does not require OCT layer/interface segmentation. A boosting learning algorithm was used to segment the vascular network from ONH OCT volumes. Pilch et al.27 took a different approach and segmented the vessels directly on high-resolution cross-sectional B-scans close to the ONH. All of these techniques were validated on healthy retinas.

This report describes a fully automatic method for segmenting the vascular network of the human retina from standard OCT data. We rely on work previously developed by our research group28 to generate 2-D fundus reference images from OCT data volumes. From these images, a set of features is computed to feed a supervised classification algorithm, a support vector machine (SVM), which classifies pixels as vessel or nonvessel.

2. 2-D Fundus Images

As noted, light absorption by hemoglobin is responsible for the decrease in light scattering beneath perfused vessels. The segmentation process herein takes advantage of this effect by computing a set of 2-D fundus reference images from the 3-D OCT data at the preprocessing step. A study was conducted to identify the images that provide the best discrimination between the vascular network and the remaining tissue (background). The images evaluated in this study were the mean value fundus (MVF), the expected value fundus (EVF), the error to local median (ELM), and the principal component fundus (PCF).28

Throughout this paper, we use the following coordinate system for the OCT data: x is the nasal-temporal direction, y is the superior-inferior direction, and z is the anterior-posterior direction (depth).

Let V be a low-pass filtered OCT volume, flattened at the junction of inner and outer photoreceptor segments (IS/OS). The MVF image is computed as the average of the A-scan values within the lower layers of the retina, i.e.,

Eq. (1)

\mathrm{MVF}(x,y)=\frac{1}{Z_2(x,y)-Z_1+1}\sum_{z=Z_1}^{Z_2(x,y)}V(x,y,z),
where Z1 and Z2 are the z coordinates of the IS/OS junction and of the RPE/choroid interface, respectively.
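As an illustration, the depth-wise averaging of Eq. (1) is a straightforward reduction over each A-scan. The sketch below is a minimal numpy version, assuming the volume has already been flattened at the IS/OS junction and that the Z1 and Z2 surfaces come from a prior layer segmentation; function and variable names are ours, not the authors' implementation.

```python
import numpy as np

def mean_value_fundus(volume, z1, z2):
    """Eq. (1): average each A-scan between the IS/OS junction and the RPE/choroid interface.

    volume : (Nx, Ny, Nz) low-pass filtered, IS/OS-flattened OCT volume.
    z1     : z index of the flattened IS/OS junction (a scalar after flattening).
    z2     : (Nx, Ny) array with the z index of the RPE/choroid interface.
    """
    nx, ny, _ = volume.shape
    mvf = np.zeros((nx, ny))
    for x in range(nx):
        for y in range(ny):
            # Average the backscattered intensity below the photoreceptor layer.
            mvf[x, y] = volume[x, y, z1:z2[x, y] + 1].mean()
    return mvf
```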

The EVF image of order ρ is computed by

Eq. (2)

\mathrm{EVF}(x,y)=\frac{\sum_{z} z\,V(x,y,z)^{\rho}}{\sum_{z} V(x,y,z)^{\rho}}.

Both MVF and EVF are corrected for nonuniform intensities.28

The ELM image of order τ is given by

Eq. (3)

\mathrm{ELM}(x,y)=\sum_{z=Z_1}^{Z_2^{\min}}\left|V(x,y,z)-\tilde{V}_w(x,y,z)\right|^{\tau},

with Z_2^{\min}=\min_{x,y} Z_2(x,y) and \tilde{V}_w(x,y,z) the local median volume. Each median A-scan from \tilde{V}_w is computed from the neighborhood of the A-scan (x,y) within a window of size w [N_x N_y] in the xy plane, where N_x and N_y are the size (in voxels) of the scanned region in the x and y directions, respectively.

The PCF image is computed as the principal component (by principal component analysis) of the MVF, EVF, and ELM images. In Ref. 28, it was demonstrated that the PCF image provides a greater extension of the vascular network and better contrast than the other fundus reference images (MVF, EVF, and ELM). In addition, when computed from standard OCT data, it presents a vascular network extension similar to that achieved with CFP.28
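A minimal sketch of this step is shown below; it assumes the three fundus images have already been computed and extracts the first principal component pixel-wise with a plain SVD (names are illustrative, not the authors' code).

```python
import numpy as np

def principal_component_fundus(mvf, evf, elm):
    """Project each pixel's (MVF, EVF, ELM) triplet onto the first principal component."""
    data = np.stack([img.ravel() for img in (mvf, evf, elm)], axis=1)
    data = data - data.mean(axis=0)      # center each fundus image
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    pcf = data @ vt[0]                   # scores along the first principal axis
    return pcf.reshape(mvf.shape)
```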

Figure 1 shows the four fundus images from the same OCT scan covering the central 20 deg field of view of a healthy retina.

Fig. 1

Two-dimensional fundus reference images computed from the same optical coherence tomography ocular scan: (a) mean value fundus, (b) expected value fundus, (c) error to local median, and (d) principal component fundus.


3. Features

All four 2-D fundus reference images computed from OCT volumes (MVF, EVF, ELM, and PCF) were used as features in the classification process. This section describes additional features that were computed from the PCF image. These features were selected through a forward-selection approach from a larger pool of features (see Sec. 5).

Sliding windows and filters discussed in this section consider the differences in spatial sampling between the x and y directions. As such, the results are independent of the acquisition protocol used.

The parameters used for computing the fundus images and SVM features presented herein are discussed in Sec. 7. Throughout the following sections, n and o represent the scale and orientation indexes, respectively.

3.1. Intensity-Based Features

Intensity-based features (previously used in a similar context16) are used to describe the local intensity variations. These features are computed from the PCF image I using a sliding window W(x,y) of size w [N_x N_y] and centered at (x,y). The range, average, standard deviation, and entropy features (Fig. 2) are computed as follows:

Eq. (4)

I_{\mathrm{range}}(x,y)=I(x,y)-\left[\max_{(k,l)\in W(x,y)} I(k,l)-\min_{(k,l)\in W(x,y)} I(k,l)\right],

Eq. (5)

I_{\mathrm{avg}}(x,y)=I(x,y)-\frac{1}{N_W}\sum_{(k,l)\in W(x,y)} I(k,l),

Eq. (6)

I_{\mathrm{std}}(x,y)=\sqrt{\frac{1}{N_W}\sum_{(k,l)\in W(x,y)}\left[I_{\mathrm{avg}}(k,l)\right]^{2}},

Eq. (7)

I_{\mathrm{entropy}}(x,y)=-\sum_{(k,l)\in W(x,y)} I(k,l)\log_{2} I(k,l),

where N_W is the total number of elements in W (N_W = w^2 N_x N_y).
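The four window-based maps can be obtained with standard separable filters. The sketch below is illustrative only; it assumes a square window of win pixels and the sign conventions of Eqs. (4) to (7) as written above.

```python
import numpy as np
from scipy import ndimage

def intensity_features(pcf, win):
    """Sliding-window range, average, standard deviation, and entropy maps (Eqs. 4-7)."""
    size = (win, win)
    local_max = ndimage.maximum_filter(pcf, size=size)
    local_min = ndimage.minimum_filter(pcf, size=size)
    local_mean = ndimage.uniform_filter(pcf, size=size)

    i_range = pcf - (local_max - local_min)
    i_avg = pcf - local_mean
    i_std = np.sqrt(ndimage.uniform_filter(i_avg ** 2, size=size))
    p = np.clip(pcf, 1e-12, None)                       # avoid log2(0)
    # uniform_filter returns the local mean; multiply by the window area to get the sum.
    i_entropy = -ndimage.uniform_filter(p * np.log2(p), size=size) * win * win
    return i_range, i_avg, i_std, i_entropy
```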

Fig. 2

Intensity-based features computed from the principal component fundus image: (a) range, (b) average, (c) standard deviation, and (d) entropy.


3.2. Gaussian-Derivative Filters

The use of the Hessian matrix to perform scale-space analysis is a well-established technique that is commonly applied in CFP and FA image analysis.13,14 It is applied here, with slight modifications, to the PCF image.

The Hessian matrix H at scale n is a square matrix defined by

Eq. (8)

H_n(x,y)=\begin{bmatrix} L_n^{xx}(x,y) & L_n^{xy}(x,y) \\ L_n^{yx}(x,y) & L_n^{yy}(x,y) \end{bmatrix},

where L_n^{xx}, L_n^{xy}, L_n^{yx}, and L_n^{yy} are the second-order partial derivatives at scale n. These derivatives result from the convolution of second-order partial derivatives of a Gaussian filter (G_n) with the fundus reference I, leading to

Eq. (9)

L_n^{xx}(x,y)=I(x,y)*\sigma_{n_x}\sigma_{n_y}\frac{\partial^2}{\partial x^2}G_n(x,y),\quad
L_n^{yy}(x,y)=I(x,y)*\sigma_{n_x}\sigma_{n_y}\frac{\partial^2}{\partial y^2}G_n(x,y),\quad
L_n^{xy}(x,y)=L_n^{yx}(x,y)=I(x,y)*\sigma_{n_x}\sigma_{n_y}\frac{\partial^2}{\partial x\,\partial y}G_n(x,y),

where * is the convolution operator and G_n is defined as

Eq. (10)

G_n(x,y)=\frac{1}{2\pi\sigma_{n_x}\sigma_{n_y}}\exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_{n_x}^2}+\frac{y^2}{\sigma_{n_y}^2}\right)\right].

To accommodate differences in sampling, the standard deviations are defined as

Eq. (11)

\sigma_{n_j}=\sigma_n\,\frac{\alpha}{p_j},

where p_j is the spatial sampling along the j = {x, y} direction, α is a normalization parameter, and σ_n is the sampling-invariant standard deviation of the filter at scale n.

3.2.1. Hessian eigenvectors

Eigenvectors from the Hessian matrix H_n(x,y) provide information about the image curvature at (x,y). For ocular fundus images, at vessel pixels, the eigenvector v = (λ, θ) (in polar coordinates) associated with the largest/smallest eigenvalue (λ+ and λ−, respectively) is normal/parallel to the vessel.14 In addition, λ+ at vessel pixels is considerably larger than at background pixels, and it presents a consistent direction θ+ across different scales. As such, relevant local information can be extracted by resorting to eigenvectors from the Hessian matrix. In this work, we use λ+ (eigenvalue) and θ+ (eigenvector orientation, mapped to the interval ]−π/2, π/2]) to compute two features.
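One possible implementation of this eigen-analysis at a single scale is sketched below, using scipy's Gaussian derivative filters and the closed-form eigendecomposition of a symmetric 2×2 matrix; the per-axis sigmas are assumed to come from Eq. (11), and the function is a sketch rather than the authors' code.

```python
import numpy as np
from scipy import ndimage

def hessian_eigen(pcf, sigma_x, sigma_y):
    """Largest Hessian eigenvalue and the orientation of its eigenvector at one scale."""
    s = (sigma_x, sigma_y)
    norm = sigma_x * sigma_y                        # normalization of Eq. (9)
    lxx = norm * ndimage.gaussian_filter(pcf, s, order=(2, 0))
    lyy = norm * ndimage.gaussian_filter(pcf, s, order=(0, 2))
    lxy = norm * ndimage.gaussian_filter(pcf, s, order=(1, 1))

    trace, diff = lxx + lyy, lxx - lyy
    root = np.sqrt(diff ** 2 + 4.0 * lxy ** 2)
    lam_plus = 0.5 * (trace + root)                 # lambda+
    theta_plus = 0.5 * np.arctan2(2.0 * lxy, diff)  # orientation in ]-pi/2, pi/2]
    return lam_plus, theta_plus
```

Repeating the computation over the scales of Table 1 and combining the responses yields the multiscale features described in Sec. 3.2.4.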

3.2.2. Laplacian of Gaussian

The Laplacian of Gaussian filter is commonly used as an edge detector. This feature represents the Laplacian of a low-pass (Gaussian) filtered version of image I; it can be directly obtained as the trace of the Hessian matrix for multiple scales

Eq. (12)

\Delta I_n=\Delta(I*G_n)=L_n^{xx}+L_n^{yy}.

3.2.3. Gradient norm

The norm of the image gradient is computed from a Gaussian low-pass filtered version of image I, for multiple scales, as

Eq. (13)

|\nabla I|_n=\sqrt{(L_n^{x})^2+(L_n^{y})^2},

with

Eq. (14)

L_n^{x}(x,y)=I(x,y)*\sigma_{n_x}\frac{\partial}{\partial x}G_n(x,y),\quad
L_n^{y}(x,y)=I(x,y)*\sigma_{n_y}\frac{\partial}{\partial y}G_n(x,y).

3.2.4. Features across scales

To reduce the number of features and create new features that encode multiscale information, the results across scales are combined14

Eq. (15)

\lambda_{\max}=\max_n\left(\lambda_n^{+}\,\sigma_n\right),

Eq. (16)

\theta_{\mathrm{var}}=\operatorname{var}_n\,\theta_n^{+},

Eq. (17)

\Delta I_{\max}=\max_n\left(\Delta I_n\,\sigma_n\right),

Eq. (18)

|\nabla I|_{\max}=\max_n\left(|\nabla I|_n\,\sigma_n\right).

Results can be seen in Fig. 3.

Fig. 3

Gaussian-derivative features computed from the principal component fundus image: (a) Hessian eigenvalue, (b) Hessian eigenvector orientation, (c) Laplacian of Gaussian, and (d) gradient norm.


3.3. Local-Phase Features

Good results in edge and corner detection using phase congruency were achieved by Kovesi.29,30 The method proved to be useful for vascular segmentation in ocular fundus images.31

Computing local-phase features requires the convolution of the image with a bank of log-Gabor kernels, each having a unique orientation-scale pair. These filters are created by combining a radial and an angular component, limiting the frequency bands and the orientation of the filter, respectively (Fig. 4).

Fig. 4

Log-Gabor filter (frequency domain): (a) radial and (b) angular components.


The radial component is computed with the log-Gabor transfer function in the frequency space F(ϱ,θ) (in polar coordinates) as

Eq. (19)

F_n^{lG}(\varrho,\theta)=\exp\left[-\frac{\ln^2(\varrho/f_n^0)}{2\ln^2\varsigma}\right],

where ς is the ratio between the standard deviation of the Gaussian describing the log-Gabor transfer function (in the frequency domain) and the central frequency f_n^0, given by

Eq. (20)

f_n^0=\frac{f_{\max}}{2.1^{(n-1)}},

where f_max is the largest central frequency.32 The log-Gabor transfer function is then multiplied (Hadamard product) with a low-pass filter (Butterworth) to ensure uniform coverage in all orientations

Eq. (21)

F^{lp}(\varrho,\theta)=\left[1+(\varrho/f_c)^{2\upsilon}\right]^{-1},

where f_c ∈ [0, 0.5] is the cutoff frequency and υ is the order of the filter.

In turn, the angular component is given by

Eq. (22)

F_o^{as}(\varrho,\theta)=\begin{cases}(1+\cos d)/2 & \text{if } d\le\pi\\ 0 & \text{if } d>\pi,\end{cases}

with

Eq. (23)

d=\left|\theta-\theta_o\right|\frac{N_o}{2},

where θ_o is the orientation and N_o is the total number of orientations.

These components are multiplied (Hadamard product) to obtain the frequency domain log-Gabor filter. In the time domain, the filter is composed by the even (real part) and the odd (imaginary part) kernels (Fig. 5).
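The sketch below builds one such frequency-domain filter following Eqs. (19) to (23); the even and odd kernels are then the real and imaginary parts of its inverse Fourier transform. Function and argument names are ours, and the defaults mirror Table 1 rather than any released code.

```python
import numpy as np

def log_gabor_filter(shape, f0, varsigma, theta_o, n_orient, fc=0.45, order=15):
    """Frequency-domain log-Gabor filter: radial (Eqs. 19-21) times angular (Eqs. 22-23)."""
    rows, cols = shape
    fy = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    rho = np.hypot(fx, fy)
    theta = np.arctan2(fy, fx)
    rho[rho == 0] = 1e-12                            # avoid log(0) at the DC term

    radial = np.exp(-np.log(rho / f0) ** 2 / (2.0 * np.log(varsigma) ** 2))
    radial *= 1.0 / (1.0 + (rho / fc) ** (2 * order))   # Butterworth low-pass, Eq. (21)

    d = np.abs(np.arctan2(np.sin(theta - theta_o), np.cos(theta - theta_o)))
    d = np.minimum(d * n_orient / 2.0, np.pi)        # angular distance, Eq. (23)
    angular = (1.0 + np.cos(d)) / 2.0                # Eq. (22), zero beyond d = pi
    return radial * angular
```

Filtering the PCF image then amounts to multiplying its FFT by this filter and taking the real and imaginary parts of the inverse transform as the even and odd responses.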

Fig. 5

Log-Gabor filter (time domain) for a specific orientation: (a) even and (b) odd kernels.


The local-phase features—phase congruency (PC), feature type (FT), phase symmetry (PSym), and symmetry energy (SymE)—are described below. Examples of these features are shown in Fig. 6.

Fig. 6

Local-phase features computed from the principal component fundus image: (a) phase congruency, (b) feature type, (c) phase symmetry, and (d) symmetry energy.


3.3.1. Phase congruency

PC is a dimensionless quantity that measures the agreement of the phase of the Fourier components of the signal (image) and is invariant to changes in image brightness or contrast. It differs from gradient-based features in that all frequency components are given the same relevance, independent of gradient magnitude. Furthermore, an estimate of the noise is removed from the local energy.29

The PC evaluates the local phase information of the image's log-Gabor wavelet transform and is computed by

Eq. (24)

\mathrm{PC}=\frac{\sum_o\sum_n \omega_o\left\lfloor \phi_{no}\cdot\bar{\phi}_o-\left|\phi_{no}\times\bar{\phi}_o\right|-T_o\right\rfloor}{\varepsilon+\sum_o\sum_n A_{no}},

where the ⌊·⌋ operator sets the enclosed quantity to zero when negative, ε is a small positive constant that prevents division by zero, and A_{no} is the norm of the phase angle vector φ_{no}.29 T_o is a noise threshold estimated from the response amplitude of the smallest scale filter.29 Moreover, φ_{no} is given by

Eq. (25)

\phi_{no}=\begin{bmatrix} M_{no}^{even} \\ M_{no}^{odd} \end{bmatrix},

and the weighted mean phase angle vector \bar{\phi}_o is the unit vector

Eq. (26)

\bar{\phi}_o=\frac{1}{\sqrt{\left(\sum_n M_{no}^{even}\right)^2+\left(\sum_n M_{no}^{odd}\right)^2}}\begin{bmatrix}\sum_n M_{no}^{even} \\ \sum_n M_{no}^{odd}\end{bmatrix},
where M^even and M^odd result from the convolution of image I with the even and odd components, respectively, of the log-Gabor kernels. The sigmoid weighting function ω_o is computed by

Eq. (27)

\omega_o(x,y)=\frac{1}{1+\exp\{g[c-s_o(x,y)]\}},

where c and g model the sigmoid, and

Eq. (28)

s_o(x,y)=\frac{1}{N_n}\left[\frac{\sum_n A_{no}(x,y)}{\varepsilon+A_o^{\max}(x,y)}\right]

is the spread of the log-Gabor filter responses.

3.3.2. Feature type

While higher values of M^even are found at the vessel pixels, M^odd takes higher values at the vessel boundaries and other step/edge locations. To emphasize line-like structures, the even wavelet response is weighted by its odd counterpart. FT was adapted from Ref. 32 and is given by

Eq. (29)

\mathrm{FT}=\operatorname{arctan2}\left[\max_o\sum_n M_{no}^{even},\ \max_o\left(\sum_n\left|M_{no}^{odd}\right|\right)\right],

where arctan2 is the four-quadrant arctangent (FT ∈ ]−π, π]).

3.3.3. Phase symmetry

PSym is a local contrast-invariant measure of the degree of symmetry of an image.33 This feature is computed by

Eq. (30)

\mathrm{PSym}=\frac{\sum_o\sum_n M_{no}^{even}-\sum_o\sum_n\left|M_{no}^{odd}\right|}{\varepsilon+\sum_o\sum_n A_{no}}
and was adapted from Refs. 32 and 33.
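Given the stacked even and odd responses, the phase symmetry map reduces to a few array reductions, as in the sketch below (a toy version that assumes the responses and amplitudes are precomputed; the noise compensation used by Kovesi's implementations is omitted).

```python
import numpy as np

def phase_symmetry(m_even, m_odd, eps=1e-4):
    """Eq. (30): phase symmetry from (n_scales, n_orient, H, W) even/odd responses."""
    amplitude = np.hypot(m_even, m_odd)                      # A_no
    numerator = m_even.sum(axis=(0, 1)) - np.abs(m_odd).sum(axis=(0, 1))
    return numerator / (eps + amplitude.sum(axis=(0, 1)))
```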

3.3.4. Symmetry energy

The last of the local-phase features is the total unnormalized raw symmetry energy32 given by

Eq. (31)

\mathrm{SymE}=\sum_o\sum_n\left[\left|M_{no}^{even}\right|-\left|M_{no}^{odd}\right|-T_o\right].

3.4. Band-Pass Filter

The PCF image is filtered with a band-pass filter, defined by the log-Gabor radial component as in Eqs. (19) to (21). Several values of the central frequency f_n^0 were tested. See Table 1 for the chosen parameters. A band-pass filtered PCF can be seen in Fig. 7(a).

Table 1

Parameter values used for feature computation.

Features: Parameters
Expected value fundus: ρ = 7
Error to local median: τ = 0.5; w = 0.1
I_range, I_avg, I_entropy, I_std: w = 0.03
λ_max, θ_var, ΔI_max, |∇I|_max: σ_n = {1, 4}
Phase congruency, feature type, phase symmetry, symmetry energy, B_lG: o = {1, 2, …, 6}; n = {1, 2, 3, 4}
Band-pass filter: n = 3
B_avg: o = {1, 2, …, 12}; m = 5

Equations: Parameters
Eqs. (11), (33): α = 6000/(128 − 1)
Eq. (19): ς = 0.55
Eq. (20): f_max = 1/3
Eq. (21): f_c = 0.45; υ = 15
Eq. (27): c = 0.5; g = 10

Fig. 7

Band-pass (a), average filter-bank (b), and log-Gabor filter bank (c) features computed from the principal component fundus image.


3.5. Filter Banks

Locally, vessels may be considered linear structures, and pixels are expected to preserve their intensity along the vessel direction. In this class of features, the PCF image is filtered with sets of kernels. The responses are then combined into a single feature for each type of filter bank, the average filter bank and the log-Gabor filter bank features [Figs. 7(b) and 7(c), respectively].

3.5.1. Average filter bank

Consistency along a local direction is a characteristic exhibited by vessel pixels but not by background ones. By using directional average kernels Ko (average line operators34), pixels that belong to a vessel are highlighted. Each of these kernels is created as a matrix of zeros except on the line that crosses its center with angle θo.

Differences between acquisition protocols are again taken into account, as the line length is computed using an ellipse (in polar coordinates) defined by

Eq. (32)

r_E(\theta_o)=\frac{r_x r_y}{\sqrt{(r_x\sin\theta_o)^2+(r_y\cos\theta_o)^2}},

where r_j is computed by

Eq. (33)

r_j=\frac{\alpha}{p_j},

similar to Eq. (11). The size of the filter K_o, for each orientation o, is defined to accommodate a straight line of length m r_E(θ_o), where m is the sampling-invariant length.

The bank responses are combined by

Eq. (34)

B_{avg}(x,y)=\max_o\left[I(x,y)*K_o\right].
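A simplified version of this filter bank is sketched below; for brevity it uses a single line length for all orientations, whereas the paper adjusts the length per orientation with the ellipse of Eq. (32) to compensate for anisotropic sampling.

```python
import numpy as np
from scipy import ndimage

def average_filter_bank(pcf, orientations, length):
    """Eq. (34): maximal response over a set of directional average line operators."""
    best = np.full(pcf.shape, -np.inf)
    half = length // 2
    t = np.arange(-half, half + 1)
    for theta in orientations:
        kernel = np.zeros((2 * half + 1, 2 * half + 1))
        rows = np.round(half + t * np.sin(theta)).astype(int)
        cols = np.round(half + t * np.cos(theta)).astype(int)
        kernel[rows, cols] = 1.0                 # a line through the kernel centre
        kernel /= kernel.sum()                   # average along the line
        best = np.maximum(best, ndimage.convolve(pcf, kernel))
    return best
```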

3.5.2. Log-Gabor filter bank

For the log-Gabor filter bank, the same principle of consistency (used in the average filter bank) applies, this time using 2-D log-Gabor filters. This method is frequently used for vessel enhancement and detection.11,12

The bank responses are combined across scales and orientations as the average value of the maximal filter response over the different directions, i.e.,

Eq. (35)

B_{lG}(x,y)=\frac{1}{N_n}\sum_n\max_o\left(M_{no}^{even}\right),

where M_{no}^{even} denotes the convolution between the PCF image I and the even component of a log-Gabor wavelet of orientation o and scale n.
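Assuming the even responses are stacked as (scales, orientations, H, W), the reduction of Eq. (35) is a one-liner; the snippet below only makes the order of the two reductions explicit.

```python
import numpy as np

def log_gabor_bank_feature(m_even):
    """Eq. (35): mean over scales of the maximal even response over orientations."""
    return m_even.max(axis=1).mean(axis=0)
```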

4. Data

OCT data were gathered from our institution’s database. These OCTs had been acquired with the high-definition spectral-domain Cirrus™ HD-OCT (Carl Zeiss Meditec Inc., Dublin, CA), which acquires volumetric data from a 6000×6000×2000 μm³ region of the human retina, covering a 20 deg field of view of the ocular fundus, with 200×200×1024 or 512×128×1024 voxels along the x (nasal-temporal), y (superior-inferior), and z (anterior-posterior) directions, respectively.

Three different datasets—DS1, DS2, and DS3—were used to validate our algorithm.

OCT macular scans of 10 eyes from 10 healthy subjects and 20 eyes from 13 patients diagnosed with type 2 diabetes mellitus [early treatment diabetic retinopathy study (ETDRS) levels 10 to 35] were used for optimization and cross-validation of the classification process. The healthy group contains four volumes of 200×200×1024voxels and six volumes of 512×128×1024voxels. The diabetic group consists of 11 volumes of 200×200×1024voxels and 9 volumes of 512×128×1024voxels. This dataset will be referred to as DS1.

Although our objective is the segmentation of the vascular network within the macular region, we also consider the ONH region to allow the comparison with previously proposed methods. With this purpose, a dataset (DS2) composed of ONH-centered OCT scans of 10 eyes from 10 healthy subjects (all acquired with the 200×200×1024 protocol) was used.

Finally, dataset DS3 was used to evaluate the robustness of the proposed method when applied to eyes with pathological disorders. It consists of macular OCTs of eight eyes with different pathologic disorders (four OCTs of each protocol).

All 48 PCF reference images were manually segmented pixel-by-pixel by two graders (T.M. and S.S.) who established two ground truths (one per grader) for each image. The first ground truths (T.M.) are used for all SVM training and testing processes (instead of the union or intersection of the two segmentations), following the approach used in Ref. 18. The segmentations of the second grader (S.S.) are used only to assess the intergrader agreement.

5. Classification

SVM is a supervised-learning algorithm widely used in pattern recognition.35,36 In this work, pixel-by-pixel classification was performed by SVM. For training, manual segmentations of the vascular network are used to derive the best SVM model.

A C-support vector classification with a radial-basis-function kernel was used,37 which requires two additional parameters intrinsic to the SVM: the parameter C, controlling the separability margin of the hyperplane, and the parameter γ, controlling the spread of the kernel. The best (C, γ) combination was searched for with a genetic algorithm aiming for the highest cross-validation accuracy. The dataset DS1 and the ground truths of the first grader were used for optimization. Because of the large amount of data and the required computing time, we used a twofold cross-validation for the optimization and only a fraction of each image. Specifically, we used 10% of vessel pixels and 10% of nonvessel pixels (both randomly selected) of each image.
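The classification step itself maps directly onto any libsvm-style C-SVC; the sketch below uses scikit-learn's wrapper with placeholder (C, γ) values, since the optimized values found by the genetic algorithm are not listed here, and the feature matrix is assumed to be built from the maps of Sec. 3.

```python
from sklearn.svm import SVC

def train_and_segment(train_features, train_labels, test_features, C=1.0, gamma=0.1):
    """C-SVC with an RBF kernel for pixel-wise vessel/non-vessel classification.

    train_features : (n_pixels, n_features) matrix of the feature maps of Sec. 3.
    train_labels   : grader ground truth (1 = vessel, 0 = background).
    """
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    clf.fit(train_features, train_labels)
    return clf.predict(test_features)
```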

All the defined features were used in the SVM. These features were selected from a larger pool using a forward-selection approach based on classification accuracy. This pool included all the features described herein, as well as variations of these, e.g., different ρ, τ, and w [Eqs. (2) and (3)], and additional features such as moment invariants-based features16,38 and semantic/categorical features using spin descriptors,39,40 to name a few. The segmentations of the first grader and the dataset DS1 were used in this process.

6. Metrics

Accuracy, specificity, and sensitivity were used for the evaluation of the system performance.

In addition, appropriate metrics were borrowed from Ref. 41. In brief, these are the connectivity C, area A, length L, and CAL. C, A, and L are given by

Eq. (36)

C(S,S_G)=1-\min\left[1,\frac{\left|\#cc(S_G)-\#cc(S)\right|}{\#(S_G)}\right],
A(S,S_G)=\frac{\#\{[\beta_{dil}(S)\cap S_G]\cup[\beta_{dil}(S_G)\cap S]\}}{\#(S\cup S_G)},
L(S,S_G)=\frac{\#\{[\beta_{skel}(S)\cap\beta_{dil}(S_G)]\cup[\beta_{skel}(S_G)\cap\beta_{dil}(S)]\}}{\#[\beta_{skel}(S)\cup\beta_{skel}(S_G)]},

where S and S_G are binary images for the segmentation being evaluated and the ground truth segmentation, respectively. #cc is the number of eight-connected components, β_dil is a morphological dilation operator, β_skel is a morphological skeletonization operator, and ∩ and ∪ are the AND and OR binary operators, respectively. CAL is defined as the product of the three components (C, A, and L). These metrics are specific to the vascular network and are insensitive to small tracing differences in grader segmentations.41
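The three measures can be computed with ordinary morphological operators. The sketch below is one possible reading of Eq. (36), with an assumed structuring-element radius (the reference defines its own choices); it is not the metric code used in the paper.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import binary_dilation, disk, skeletonize

def cal_metrics(seg, gt, radius=2):
    """Connectivity, area, and length metrics of Eq. (36), plus their product CAL."""
    selem = disk(radius)
    # Connectivity: compare the number of eight-connected components.
    n_seg = ndimage.label(seg, structure=np.ones((3, 3)))[1]
    n_gt = ndimage.label(gt, structure=np.ones((3, 3)))[1]
    c = 1.0 - min(1.0, abs(n_gt - n_seg) / gt.sum())
    # Area: overlap between each segmentation and the dilation of the other.
    a = ((binary_dilation(seg, selem) & gt) | (binary_dilation(gt, selem) & seg)).sum() \
        / (seg | gt).sum()
    # Length: overlap between each skeleton and the dilation of the other segmentation.
    sk_seg, sk_gt = skeletonize(seg), skeletonize(gt)
    l = ((sk_seg & binary_dilation(gt, selem)) | (sk_gt & binary_dilation(seg, selem))).sum() \
        / (sk_seg | sk_gt).sum()
    return c, a, l, c * a * l
```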

Cohen’s κ (Ref. 42) was computed as a metric of segmentation agreement. It is defined by

Eq. (37)

\kappa=\frac{P_a-P_e}{1-P_e},
where Pa is the observed agreement and Pe is the expected agreement by chance.

7. Results

Each of the features used by the supervised-classification process requires the specification of working parameters. These parameters were defined based on the system characteristics (e.g., scanning protocol), the forward-selection method (features), or values previously defined in the literature. The parameters used in this work can be found in Table 1.

It is important to note that no postprocessing was applied. All automatic segmentations are the direct result of the classification process.

The classification was validated using a 10-fold cross-validation approach on the dataset DS1 with manual segmentations from the first grader. Each of the 10 groups was composed of macular scans from one healthy and two diabetic subjects. OCTs acquired with different protocols were also distributed equitably throughout the groups. Quantitative results are summarized in Table 2. For visual inspection, the cases with worst and best performance (for the accuracy and CAL metrics) are shown in Fig. 8.

Table 2

Results from the 10-fold cross-validation on dataset DS1, healthy and diabetic subjects. The minimum (MIN), maximum (MAX), average (AVG), and standard deviation (SD) values for each segmentation performance metric.

                 Overall (N=30)               Healthy (N=10)               Diabetic (N=20)
                 MIN    MAX    AVG    SD      MIN    MAX    AVG    SD      MIN    MAX    AVG    SD
Accuracy         0.967  0.984  0.978  0.005   0.973  0.983  0.979  0.003   0.967  0.984  0.978  0.005
Specificity      0.989  0.998  0.994  0.002   0.989  0.997  0.993  0.002   0.989  0.998  0.994  0.002
Sensitivity      0.720  0.879  0.825  0.042   0.789  0.879  0.833  0.030   0.720  0.879  0.821  0.046
Connectivity     0.966  0.999  0.985  0.009   0.969  0.997  0.985  0.008   0.966  0.999  0.985  0.009
Area             0.829  0.946  0.900  0.026   0.884  0.946  0.914  0.021   0.829  0.928  0.892  0.026
Length           0.770  0.919  0.861  0.033   0.850  0.919  0.882  0.025   0.770  0.898  0.851  0.032
CAL              0.623  0.856  0.764  0.053   0.734  0.856  0.795  0.041   0.623  0.826  0.749  0.052
Cohen's κ        0.798  0.908  0.861  0.027   0.824  0.894  0.865  0.023   0.798  0.908  0.859  0.029

Fig. 8

Principal component fundus (top row), and manual (middle row) and automatic (bottom row) segmentations from optical coherence tomography volumes with the highest (a) and lowest accuracy (b), and the highest (c) and lowest CAL (d).


The achieved values demonstrate the viability of the proposed method. The high specificity (0.994) shows that there is little oversegmentation, percentagewise. On the other hand, undersegmentation is a more prominent problem, as demonstrated by the lower sensitivity (0.825). Visual assessment of Fig. 8 suggests that the undersegmentation is associated with the misclassification of pixels on low-caliber vessels. Considering the application of subsequent algorithms (e.g., 3-D vascular segmentation and computation of vascular network descriptors), a specificity close to 1 combined with a high sensitivity is particularly important, as it makes the postprocessing stage less demanding.

The differences between the healthy and diabetic groups are small. However, the algorithm performed better on the healthy group.

A new training was performed using all OCT scans in DS1. The model was then used to segment the OCT scans in DS2, the ONH OCTs, and in DS3, the pathological cases. For the ONH dataset, the optic disk was manually segmented and discarded in metric computation. Quantitative results are shown in Tables 3 and 4. For qualitative assessment, Fig. 9 shows a few examples of vascular segmentation on the ONH OCTs, and Fig. 10 shows all segmentation results of the pathological cases. Although the segmentation was developed mainly for OCTs of healthy eyes and eyes close to the healthy condition, the results on the pathological cases serve to demonstrate the robustness of the proposed method in rather extreme conditions.

Table 3

Test results for the optic nerve head dataset (N=10). The minimum (MIN), maximum (MAX), average (AVG), and standard deviation (SD) values for each segmentation performance metric.

                 MIN    MAX    AVG    SD
Accuracy         0.960  0.980  0.974  0.006
Specificity      0.987  0.999  0.993  0.004
Sensitivity      0.743  0.898  0.848  0.049
Connectivity     0.970  0.997  0.987  0.009
Area             0.874  0.966  0.934  0.029
Length           0.845  0.928  0.892  0.024
CAL              0.735  0.894  0.823  0.049
Cohen's κ        0.826  0.908  0.877  0.027

Table 4

Test results for the pathology dataset (N=8). The minimum (MIN), maximum (MAX), average (AVG), and standard deviation (SD) values for each segmentation performance metric.

                 MIN    MAX    AVG    SD
Accuracy         0.952  0.984  0.974  0.010
Specificity      0.971  0.995  0.986  0.007
Sensitivity      0.791  0.876  0.835  0.031
Connectivity     0.953  0.985  0.968  0.011
Area             0.756  0.947  0.829  0.067
Length           0.704  0.914  0.788  0.071
CAL              0.511  0.852  0.638  0.117
Cohen's κ        0.752  0.904  0.816  0.048

Fig. 9

Examples of optic nerve head optical coherence tomography fundus images and the automatic segmentation results. The gray region (from the manual segmentation of the optic nerve head) was not considered.


Fig. 10

Optical coherence tomography fundus images and automatic segmentation results for cases with pathological disorders: (a) full-thickness macular hole, (b) branch retinal arterial occlusion, (c) cystoid macular edema, (d) and (e) age-related macular degeneration, (f) vascular tortuosity, (g) proliferative diabetic retinopathy, and (h) diabetic macular edema.


The proposed approach was able to outperform the one published in Ref. 26 for segmentation on the ONH region, where a specificity of 88% and a sensitivity of 85% were achieved. Although the dataset is not the same, it has the same fundamental characteristics: the OCT scans were acquired with Cirrus HD-OCT using the same acquisition protocol (200×200×1024), all OCTs were centered on the ONH region, the number of cases is similar (N=11), and all eyes are from healthy subjects. Furthermore, in our work, the optimizations and training processes were performed on OCT scans from the macular region. Therefore, the process could benefit from parameter optimization and training on ONH-centered scans.

In the pathological cases, the specificity is lower, which translates into more oversegmentation. However, although no training was performed on pathological cases other than the diabetic ones, the results achieved on several pathologies are similar to those achieved for the healthy and diabetic subjects (accuracy >97% and κ>0.81), which attests to the robustness of the proposed method.

The agreement between the two graders was also evaluated: the manual segmentations of the second grader (S.S.) were tested against those of the first grader (T.M.). Results can be found in Table 5. These confirm the good performance of the algorithm. For the majority of the metrics, the average values for the automatic segmentations on DS1 and DS2 are greater than or equal to those for the second grader. Conversely, as expected, for DS3 the metrics (although similar) are lower than those for the comparison between graders.

Table 5

Intergrader agreement (N=48). The minimum (MIN), maximum (MAX), average (AVG), and standard deviation (SD) values for each segmentation performance metric.

                 MIN    MAX    AVG    SD
Accuracy         0.968  0.984  0.976  0.004
Specificity      0.989  0.997  0.994  0.002
Sensitivity      0.718  0.885  0.808  0.036
Connectivity     0.956  0.996  0.981  0.010
Area             0.860  0.985  0.913  0.036
Length           0.812  0.967  0.881  0.042
CAL              0.680  0.949  0.791  0.074
Cohen's κ        0.792  0.916  0.849  0.025

The total execution time for the segmentation process (OCT fundus reference computation, features computation, and SVM classification), using a nonoptimized MATLAB® (The MathWorks Inc., Natick, MA) implementation, was 65.2±1.2 s (N=15) and 108.2±3.8 s (N=15) for the 200×200×1024 and 512×128×1024 protocols, respectively. The system hardware used was an Intel® Core™ i7-3770 CPU (Intel Corporation, Santa Clara, California) at 3.4 GHz.

8. Discussion and Conclusions

To our knowledge, the work presented herein is the first attempt to automatically segment the retinal vascular network of the macular region from OCT data. Our findings show that it works with standard OCTs of both healthy and diseased retinas. The proposed method achieved good results for both the macular and ONH regions, even with no training on ONH OCTs.

The proposed models (trained only on healthy and diabetic subjects) already show robustness on the pathological cases, as the results do not differ substantially from DS1 results. Nevertheless, with proper training on pathological data and given enough pathological cases, the SVM would be able to create even more robust models, which would improve the classification performance on these cases.

Improvements are still possible and the 2-D segmentations could benefit from a postprocessing stage. The spectrum of features presented here can be useful for region growing based on the automatic segmentations and subsequent deletion of small incorrectly segmented regions. For the pathological cases, the classification specificity is lower, which leads to a more complex postprocessing stage than for DS1 and DS2. Additional research will be performed on these issues.

With the proposed method, additional algorithms of coregistration43 and computation of vascular network descriptors,44,45 already described for other imaging modalities, can now be applied to OCT. We are convinced that the proposed segmentation is a valuable tool for most multimodal coregistration algorithms, even in the pathological cases. Therefore, multimodal imaging and studies of disease progression (even in extreme cases) are a major application area of the proposed method.

The representation of blood vessels on OCT fundus reference images depends on the presence of hemoglobin. Therefore, the method is not able to segment fully occluded vessels. However, for reperfused or partially occluded vessels, segmentation is still possible [Fig. 10(b)]. This led to yet another area of application now being pursued by our research group: the discrimination between perfused and occluded vessels from OCT data.46

Finally, this method establishes a good starting point toward the fully automatic 3-D segmentation. The first tests on 3-D segmentation have been conducted in Ref. 25.

Acknowledgments

This work was supported by Fundação para a Ciência e a Tecnologia (FCT) under the projects PTDC/SAU-ENB/111139/2009 and PEST/C/SAU/3282/2011, and by the COMPETE programs FCOMP-01-0124-FEDER-015712 and FCOMP-01-0124-FEDER-022717.

References

1. J. Chen and L. E. H. Smith, "Retinopathy of prematurity," Angiogenesis 10(2), 133–140 (2007). http://dx.doi.org/10.1007/s10456-007-9066-0
2. C. Heneghan et al., "Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis," Med. Image Anal. 6(4), 407–429 (2002). http://dx.doi.org/10.1016/S1361-8415(02)00058-0
3. R. Bernardes, P. Serranho, and C. Lobo, "Digital ocular fundus imaging: a review," Ophthalmologica 226(4), 161–181 (2011). http://dx.doi.org/10.1159/000329597
4. Y. Li et al., "Registration of OCT fundus images with color fundus photographs based on blood vessel ridges," Opt. Express 19(1), 7 (2011). http://dx.doi.org/10.1364/OE.19.000007
5. M. D. Abràmoff, M. K. Garvin, and M. Sonka, "Retinal imaging and image analysis," IEEE Rev. Biomed. Eng. 3, 169–208 (2010). http://dx.doi.org/10.1109/RBME.2010.2084567
6. A. Can et al., "Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms," IEEE Trans. Inf. Technol. Biomed. 3(2), 125–138 (1999). http://dx.doi.org/10.1109/4233.767088
7. O. Chutatape, L. Zheng, and S. M. Krishnan, "Retinal blood vessel detection and tracking by matched Gaussian and Kalman filters," in Proc. 20th Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, 3144–3149 (1998).
8. S. Chaudhuri et al., "Detection of blood vessels in retinal images using two-dimensional matched filters," IEEE Trans. Med. Imaging 8(3), 263–269 (1989). http://dx.doi.org/10.1109/42.34715
9. M. Sofka and C. V. Stewart, "Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures," IEEE Trans. Med. Imaging 25(12), 1531–1546 (2006). http://dx.doi.org/10.1109/TMI.2006.884190
10. L. Wang, A. Bhalerao, and R. Wilson, "Analysis of retinal vasculature using a multiresolution Hermite model," IEEE Trans. Med. Imaging 26(2), 137–152 (2007). http://dx.doi.org/10.1109/TMI.2006.889732
11. J. V. B. Soares et al., "Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification," IEEE Trans. Med. Imaging 25(9), 1214–1222 (2006). http://dx.doi.org/10.1109/TMI.2006.879967
12. Q. Li et al., "A new approach to automated retinal vessel segmentation using multiscale analysis," in 18th Int. Conf. on Pattern Recognition, 77–80 (2006).
13. M. E. Martinez-Perez et al., "Segmentation of blood vessels from red-free and fluorescein retinal images," Med. Image Anal. 11(1), 47–61 (2007). http://dx.doi.org/10.1016/j.media.2006.11.004
14. N. Salem, S. Salem, and A. Nandi, "Segmentation of retinal blood vessels based on analysis of the Hessian matrix and clustering algorithm," in 15th European Signal Processing Conf., 428–432 (2007).
15. B. S. Y. Lam and H. Yan, "A novel vessel segmentation algorithm for pathological retina images based on the divergence of vector fields," IEEE Trans. Med. Imaging 27(2), 237–246 (2008). http://dx.doi.org/10.1109/TMI.2007.909827
16. D. Marín et al., "A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features," IEEE Trans. Med. Imaging 30(1), 146–158 (2011). http://dx.doi.org/10.1109/TMI.2010.2064333
17. C. Sinthanayothin et al., "Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images," Br. J. Ophthalmol. 83(8), 902–910 (1999). http://dx.doi.org/10.1136/bjo.83.8.902
18. J. Staal et al., "Ridge-based vessel segmentation in color images of the retina," IEEE Trans. Med. Imaging 23(4), 501–509 (2004). http://dx.doi.org/10.1109/TMI.2004.825627
19. A. M. Mendonça and A. Campilho, "Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction," IEEE Trans. Med. Imaging 25(9), 1200–1213 (2006). http://dx.doi.org/10.1109/TMI.2006.879955
20. P. Serranho, A. M. Morgado, and R. Bernardes, "Optical coherence tomography: a concept review," in Optical Coherence Tomography: A Clinical and Technical Update, pp. 139–156, Springer, Berlin, Heidelberg (2012).
21. D. Huang et al., "Optical coherence tomography," Science 254(5035), 1178–1181 (1991). http://dx.doi.org/10.1126/science.1957169
22. W. Drexler, "Ultrahigh-resolution optical coherence tomography," J. Biomed. Opt. 9(1), 47–74 (2004). http://dx.doi.org/10.1117/1.1629679
23. J. G. Fujimoto and W. Drexler, "Introduction to optical coherence tomography," in Optical Coherence Tomography: A Clinical and Technical Update, pp. 1–45, Springer, Berlin, Heidelberg (2008).
24. M. Niemeijer et al., "Vessel segmentation in 3D spectral OCT scans of the retina," Proc. SPIE 6914, 69141R (2008). http://dx.doi.org/10.1117/12.772680
25. P. Guimarães et al., "3D retinal vascular network from optical coherence tomography data," Lect. Notes Comput. Sci. 7325, 339–346 (2012). http://dx.doi.org/10.1007/978-3-642-31298-4
26. J. Xu et al., "3D OCT retinal vessel segmentation based on boosting learning," in World Congress on Medical Physics and Biomedical Engineering, 179–182 (2009).
27. M. Pilch et al., "Automated segmentation of retinal blood vessels in spectral domain optical coherence tomography scans," Biomed. Opt. Express 3(7), 1478 (2012). http://dx.doi.org/10.1364/BOE.3.001478
28. P. Guimarães et al., "Ocular fundus reference images from optical coherence tomography," http://www.aibili.pt/publicacoes/techReport.pdf (October 2013).
29. P. D. Kovesi, "Image features from phase congruency," Videre: J. Comput. Vis. Res. 1(3), 1–26 (1999).
30. P. D. Kovesi, "Phase congruency detects corners and edges," in The Australian Pattern Recognition Society Conference, 309–318 (2003).
31. T. Zhu, "Fourier cross-sectional profile for vessel detection on retinal images," Comput. Med. Imaging Graph. 34(3), 203–212 (2010). http://dx.doi.org/10.1016/j.compmedimag.2009.09.004
32. P. D. Kovesi, "MATLAB and Octave functions for computer vision and image processing," http://www.peterkovesi.com/ (April 2013).
33. P. D. Kovesi, "Symmetry and asymmetry from local phase," in Tenth Australian Joint Conf. on Artificial Intelligence, 185–190 (1997).
34. E. Ricci and R. Perfetti, "Retinal blood vessel segmentation using line operators and support vector classification," IEEE Trans. Med. Imaging 26(10), 1357–1365 (2007). http://dx.doi.org/10.1109/TMI.2007.898551
35. N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge (2000).
36. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley, Hoboken, New Jersey (2000).
37. C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," http://www.csie.ntu.edu.tw/~cjlin/libsvm/ (April 2013).
38. M.-K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inf. Theory 8(2), 179–187 (1962). http://dx.doi.org/10.1109/TIT.1962.1057692
39. X. Cheng et al., "Automatic localization of retinal landmarks," in Proc. 34th Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, 4954–4957 (2012).
40. S. Lazebnik, C. Schmid, and J. Ponce, "A sparse texture representation using local affine regions," IEEE Trans. Pattern Anal. Mach. Intell. 27(8), 1265–1278 (2005). http://dx.doi.org/10.1109/TPAMI.2005.151
41. M. E. Gegúndez-Arias et al., "A function for quality evaluation of retinal vessel segmentations," IEEE Trans. Med. Imaging 31(2), 231–239 (2012). http://dx.doi.org/10.1109/TMI.2011.2167982
42. J. Cohen, "A coefficient of agreement for nominal scales," Educ. Psychol. Meas. 20(1), 37–46 (1960). http://dx.doi.org/10.1177/001316446002000104
43. F. Zana and J.-C. Klein, "A multimodal registration algorithm of eye fundus images using vessels detection and Hough transform," IEEE Trans. Med. Imaging 18(5), 419–428 (1999). http://dx.doi.org/10.1109/42.774169
44. W. E. Hart et al., "Measurement and classification of retinal vascular tortuosity," Int. J. Med. Inform. 53(2), 239–252 (1999). http://dx.doi.org/10.1016/S1386-5056(98)00163-4
45. N. Witt et al., "Abnormalities of retinal microvascular structure and risk of mortality from ischemic heart disease and stroke," Hypertension 47(5), 975–981 (2006). http://dx.doi.org/10.1161/01.HYP.0000216717.72048.6c
46. R. Bernardes et al., "Non-invasive discrimination between perfused and occluded vessels by optical coherence tomography," Acta Ophthalmologica 91(s252) (2013).
© 2013 Society of Photo-Optical Instrumentation Engineers (SPIE) 0091-3286/2013/$25.00
Pedro Rodrigues, Pedro Guimaraes, Torcato Santos, Silvia Simao, Telmo Miranda, Pedro Serranho, and Rui C. Bernardes "Two-dimensional segmentation of the retinal vascular network from optical coherence tomography," Journal of Biomedical Optics 18(12), 126011 (16 December 2013). https://doi.org/10.1117/1.JBO.18.12.126011
Published: 16 December 2013
KEYWORDS: Image segmentation, Optical coherence tomography, Image filtering, Linear filtering, Retina, Bandpass filters, Head
