7 January 2015 Three-dimensional segmentation and reconstruction of the retinal vasculature from spectral-domain optical coherence tomography
Abstract
We reconstruct the three-dimensional shape and location of the retinal vascular network from commercial spectral-domain (SD) optical coherence tomography (OCT) data. The two-dimensional location of retinal vascular network on the eye fundus is obtained through support vector machines classification of properly defined fundus images from OCT data, taking advantage of the fact that on standard SD-OCT, the incident light beam is absorbed by hemoglobin, creating a shadow on the OCT signal below each perfused vessel. The depth-wise location of the vessel is obtained as the beginning of the shadow. The classification of crossovers and bifurcations within the vascular network is also addressed. We illustrate the feasibility of the method in terms of vessel caliber estimation and the accuracy of bifurcations and crossovers classification.

1.

Introduction

The retina is regarded as a window to the cardiovascular system. Changes in the retinal microvasculature have been found to be related to several cardiovascular1,2,3 and cerebrovascular3,4,5,6,7,8 outcomes, among others.9,10,11 This evidence makes the automatic detection of retinal blood vessels a key step in this area of research. The quantitative description of the detected retinal vasculature can be, and has been, used to establish associations between retinal vascular properties and clinical and subclinical outcomes, thus providing the clinician with tools for an objective diagnosis.

Extensive work has been done in this field, mainly based on two widely used ocular imaging modalities: color fundus photography (CFP) and fluorescein angiography (FA). Vascular properties such as tortuosity or bifurcation angles computed from these two-dimensional (2-D) fundus images have been associated with several diseases. However, the computation of such properties is incomplete due to the projection onto a 2-D plane. A robust method to segment the human retinal vascular network in three dimensions (3-D) would be a valuable tool and a significant leap toward fully understanding the pathophysiology of several diseases.

Optical coherence tomography (OCT) is an imaging modality capable of noninvasively imaging the microstructure of tissue in vivo and in situ.12,13 Over time, it became an important tool in the diagnosis of ocular pathologic conditions. It has been extensively used in clinical research and is becoming common in the clinical practice. The principle, based on the backscattering of low-coherence light, is now extensively described in the literature.12,13

On standard spectral-domain OCT (SD-OCT) scans, the retinal blood vessels are not directly visible. Instead, two signatures emerge in the OCT signal. One relates to the fact that hemoglobin absorbs the infrared light; consequently, backscattering at the structures below perfused vessels is highly attenuated.12,14,15 This effect is well known and has been used to obtain the 2-D vascular network from the 3-D OCT data.16,17,18 While this is a major advantage for 2-D segmentation due to the significant contrast on the retinal pigment epithelium (RPE), the 3-D segmentation of the retinal vasculature requires additional information. The other signature is a diffuse hyperreflectivity on the vessel itself.

Approaches for 3-D retinal vasculature segmentation in the literature are limited in number.19,20,21

In this work, we describe a fully automatic method for the 3-D segmentation of the vascular network of the human retina from standard Cirrus HD-OCT (Carl Zeiss Meditec, Dublin, CA, United States) data and the framework for its reconstruction.

2.

Workflow and Background Work

Following a preliminary study by our research group,22 the 3-D OCT scan is projected to a 2-D ocular fundus reference image. Each pixel in this image translates into an A-scan on the OCT volume. This reference is then used to compute features that are able to discriminate each pixel into vessel or nonvessel groups.23 The A-scans whose pixels were classified into the vessel group, i.e., A-scans that intersect blood vessels, are then processed to determine the depth-wise location of the vessel. Figure 1 shows the global workflow of the algorithm and Fig. 2 shows a graphical depiction of the process.

Fig. 1

Flowchart representing the global workflow of the algorithm.

JBO_20_1_016006_f001.png

Fig. 2

Projection of a three-dimensional (3-D) macular optical coherence tomography (OCT) scan to a two-dimensional (2-D) fundus reference image by mapping the shadows cast due to light absorption. The binary image (from the classification of the pixels in the 2-D reference) is then used to aid in the identification of the depth location of the vessels.

JBO_20_1_016006_f002.png

Throughout this paper, the following coordinate system for the OCT data will be used: x is the nasal-temporal direction, y is the superior-inferior direction, and z is the anterior-posterior direction (depth).

2.1.

Retinal Layer Segmentation

The very first step consists of determining the depth coordinates of three interfaces: Z1 is the inner limiting membrane (ILM), Z2 is the junction between the inner and outer photoreceptor segments (IS/OS), and Z3 is the interface between the RPE and the choroid. These interfaces are typically the easiest ones to identify.

The retinal layer segmentation process itself will not be discussed here in great detail. On one hand, it does not greatly affect the quality of the results of the final 3-D vascular segmentation and, on the other hand, there is a strong background of published work in this area.24,25 Each B-scan is first filtered with a 2-D Gaussian derivative filter. The rationale is that the Z1, Z2, and Z3 interfaces correspond to the strongest intensity transitions in a normal retina. A threshold is then applied. Finally, 3-D gradient smoothing is applied to the obtained surfaces.
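For illustration only, a simplified one-dimensional variant of this idea (the paper uses a 2-D Gaussian derivative filter followed by thresholding and smoothing; the kernel width and the synthetic B-scan below are assumptions) locates the strongest dark-to-bright transition along each A-scan:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def strongest_transition(bscan, sigma=2.0):
    """Per-A-scan depth index of the strongest intensity increase.

    bscan: 2-D array (depth x lateral). Returns one z index per column,
    a crude stand-in for an interface such as the ILM.
    """
    # First-order Gaussian derivative along depth (axis 0).
    dz = gaussian_filter1d(bscan.astype(float), sigma, axis=0, order=1)
    return np.argmax(dz, axis=0)

# Synthetic B-scan: dark vitreous above row 40, brighter retina below.
bscan = np.full((100, 32), 10.0)
bscan[40:] = 100.0
ilm = strongest_transition(bscan)
```

In practice, one such pass per interface (with depth windows restricting the search) yields the Z1, Z2, and Z3 candidate surfaces before smoothing.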

2.2.

Ocular Fundus Reference Images

As noted, light absorption by hemoglobin is responsible for the decrease in light backscattering beneath perfused vessels. The segmentation process herein takes advantage of this effect by computing a set of 2-D fundus reference images from the 3-D OCT data (Fig. 2). A study was conducted to identify the fundus reference images that provide the best discrimination between the vascular network and the background.26 Four potential reference images were evaluated:26 the mean value fundus (MVF), the expected value fundus (EVF), the error to local median (ELM), and the principal component fundus (PCF).

The PCF image is computed through principal component analysis (PCA) as the principal component of the MVF, EVF, and ELM images. The interfaces at coordinates Z2 and Z3 are needed for this stage of the process. It was demonstrated that the PCF image provides the greatest extension of the vascular network (equivalent to that achieved with CFP) and the best contrast among the evaluated fundus reference images.26
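A minimal sketch of how such a PCF could be formed (the exact preprocessing of Ref. 26 is not reproduced here; the per-image standardization and the synthetic data are assumptions): stack the three co-registered fundus images as three features per pixel and keep the scores of the first principal component.

```python
import numpy as np

def principal_component_fundus(mvf, evf, elm):
    """First principal component of three co-registered fundus images.

    Each input is a 2-D array of identical shape; the output is a single
    2-D image holding the PCA scores of the first component.
    """
    feats = np.stack([im.ravel() for im in (mvf, evf, elm)], axis=1).astype(float)
    feats -= feats.mean(axis=0)
    feats /= feats.std(axis=0) + 1e-12      # standardize each image
    # SVD of the (pixels x 3) feature matrix; the first right singular
    # vector is the principal direction, scores are the projection.
    _, _, vt = np.linalg.svd(feats, full_matrices=False)
    scores = feats @ vt[0]
    return scores.reshape(mvf.shape)

rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64))
# Three noisy views of the same underlying fundus pattern.
mvf, evf, elm = (base + 0.1 * rng.normal(size=base.shape) for _ in range(3))
pcf = principal_component_fundus(mvf, evf, elm)
```

The shared vascular structure dominates the first component, which is why the PCF concentrates the information of the three source images.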

2.3.

Feature Computation and Two-Dimensional Classification

To obtain the 2-D vascular network segmentation, we resort to an approach previously published by Rodrigues et al.23 For clarity, we briefly describe in this section the algorithm used and the results obtained.

All four 2-D fundus reference images computed from OCT volumes (MVF, EVF, ELM, and PCF) were used as features in the classification process. Additional features were computed from the PCF image. Specifically, intensity-based features, local-phase features, and features computed with Gaussian-derivative filters, log-Gabor and average filter banks, and bandpass filters were obtained.23 A supervised learning algorithm (support vector machine) was then used to train and classify each A-scan of the OCT volume V into vessel or nonvessel A-scans, i.e., to discriminate between A-scans that cross retinal blood vessels and the remaining A-scans. Formally, we define the set A = {Ai(·) : i ∈ U} of A-scans Ai(·) = V(xi, yi, ·) that cross vessels, where U is the set of indices of the A-scans that cross vessels.

The process proved able to cope with OCT scans of both healthy and diseased retinas. It achieved good results for both the macular and optic nerve head regions. For a set of macular OCT scans of healthy subjects and diabetic patients, the algorithm achieved 98% accuracy, 99% specificity, and 83% sensitivity. For both groups, the algorithm compared favorably to the inter-grader agreement.23

3.

Three-Dimensional Segmentation and Reconstruction

The core of the work presented in this paper addresses the estimation of vessel diameters and their depth-wise location within the OCT volume, and the 3-D modeling of the vascular network of the human retina, dealing with crossovers and bifurcations. The workflow of this process can be found in Fig. 3.

Fig. 3

Flowchart for the 3-D segmentation block of Fig. 1. The main block inputs are the OCT volume, the segmented layers, the 2-D ocular fundus reference image, and the binary image (Fig. 2).

JBO_20_1_016006_f003.png

3.1.

Vessel Caliber Estimation

The lateral resolution of OCT combined with the optical properties of the vessel walls does not allow for their direct observation. As such, vessel caliber can only be estimated from the shadow cast over the RPE due to light absorption by hemoglobin. For the reasons given in Sec. 2.2, the PCF ocular fundus reference image is the natural choice for vessel caliber estimation. It follows that the estimated caliber underestimates the real caliber; this effect is less significant for the larger vessels close to the optic disk and becomes more important close to the fovea, where vessels become thinner. In addition, the caliber can only be estimated in the xy plane, and we assume vessels to have a circular cross section.

There is a wealth of literature on methods to compute retinal vessel width from 2-D ocular fundus images. These were tailored for imaging modalities such as CFP and FA.27,28,29 These methods rely on the estimation of the cross section with respect to vessel centerlines and have to deal with the rough definition of vessel borders due to the regular sampling of the digital imaging modalities.30,31 This effect is much more prominent on OCT, rendering these methods inappropriate for this modality.

Log-Gabor wavelets are routinely used for vessel enhancement and detection.32 They are created by combining a radial and an angular component in the frequency domain, which determine the scale and the orientation of the filter, respectively. In the spatial domain, the filter is composed of even (real part) and odd (imaginary part) kernels.

To determine the retinal vessels’ caliber, the PCF image is filtered with a bank of log-Gabor even kernels, each with a unique orientation-scale pair. This bank covers a wide range of scales and orientations to fully describe the whole vascular network. At each vessel pixel PCF(xi, yi), with i ∈ U, the log-Gabor even kernel whose orientation-scale pair best matches the vessel orientation and caliber generates the highest response. The scale of the kernel with the maximal response can then be translated into the caliber of the vessel at that pixel.
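The maximal-response scale selection can be sketched as follows. This is a minimal frequency-domain construction of even log-Gabor responses; the bandwidth parameters, the wavelength-to-caliber mapping, and the synthetic bar image are illustrative assumptions, not the values used in this work.

```python
import numpy as np

def log_gabor_even_response(img, wavelength, theta,
                            sigma_on_f=0.55, sigma_theta=0.5):
    """Even-symmetric (real-part) response to one log-Gabor filter.

    Built in the frequency domain as a log-Gaussian radial profile
    times a Gaussian angular profile.
    """
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                      # avoid log(0) at DC
    angle = np.arctan2(fy, fx)

    f0 = 1.0 / wavelength                   # center frequency
    radial = np.exp(-(np.log(radius / f0) ** 2)
                    / (2 * np.log(sigma_on_f) ** 2))
    radial[0, 0] = 0.0                      # zero DC gain
    dtheta = np.arctan2(np.sin(angle - theta), np.cos(angle - theta))
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))

    resp = np.fft.ifft2(np.fft.fft2(img) * radial * angular)
    return resp.real                        # even (symmetric) part

def caliber_map(img, wavelengths, thetas):
    """Wavelength of the maximal even response per pixel (caliber proxy)."""
    stack = np.stack([np.max([log_gabor_even_response(img, w, t)
                              for t in thetas], axis=0)
                      for w in wavelengths])
    return np.asarray(wavelengths)[np.argmax(stack, axis=0)]

# Synthetic vertical bright bar, 4 pixels wide.
img = np.zeros((64, 64))
img[:, 30:34] = 1.0
r_matched = log_gabor_even_response(img, 8, 0.0)        # matched orientation
r_ortho = log_gabor_even_response(img, 8, np.pi / 2)    # orthogonal
cm = caliber_map(img, [4, 8, 16], [0.0, np.pi / 2])
```

The matched orientation produces a strong positive response at the bar center, while the orthogonal filter barely responds; taking the argmax over the whole orientation-scale bank per pixel yields the caliber map.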

3.2.

Bewildering Regions

The bewildering regions are subsets of neighboring vascular network A-scans where the path of the vessels is unclear from the 2-D segmentation due to bifurcations and/or crossovers (Fig. 4). To determine these regions, several binary morphological operations are used. These regions require further processing (see Sec. 3.4).

Fig. 4

Examples of 2-D vessel binary maps cropped to bewildering regions of crossovers and/or bifurcations.

JBO_20_1_016006_f004.png

First, endpoints in the vicinity of one another on the binary vascular fundus image are bridged and the binary image (the set of vessel points U) is redefined. This updated image is then skeletonized and isolated points are removed. Finally, the bewildering regions are defined as the dilation of branch points, and these are removed (erased) from the skeleton image. As a consequence, all potential bifurcations and crossovers are left to be re-linked. This new set of points defines the set Uskel (Fig. 5).
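The branch-point dilation step can be sketched as below, assuming the skeleton has already been computed. Counting 8-connected neighbors is one common way to find branch points; it is used here for illustration and is not necessarily the exact morphology of the implementation. The toy Y-shaped skeleton is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve, binary_dilation

def bewildering_regions(skel, dilate_iter=2):
    """Branch points of a binary skeleton, dilated into small regions.

    skel: 2-D boolean array (one-pixel-wide centerlines). Returns the
    dilated branch-point mask and the skeleton with those regions
    erased, leaving disconnected segments to be re-linked.
    """
    skel = skel.astype(bool)
    kernel = np.ones((3, 3), int)
    kernel[1, 1] = 0
    neighbors = convolve(skel.astype(int), kernel, mode='constant')
    branches = skel & (neighbors >= 3)          # >= 3 neighbors: junction
    regions = binary_dilation(branches, iterations=dilate_iter)
    return regions, skel & ~regions

# Y-shaped toy skeleton with a single branch point at (3, 3).
skel = np.zeros((7, 7), bool)
skel[0:4, 3] = True                  # vertical stem
for k in range(1, 4):                # two diagonal branches
    skel[3 + k, 3 - k] = True
    skel[3 + k, 3 + k] = True
regions, pruned = bewildering_regions(skel)
```

The erased neighborhood around the junction is exactly what is later handed to the linking stage of Sec. 3.4.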

Fig. 5

Detail of a principal component fundus reference image (a) and the respective binary maps of the 2-D classification U (b) and of the vessel centerlines (with the removed bewildering regions in red) Uskel (c).

JBO_20_1_016006_f005.png

Fig. 6

Computation examples of the depth coordinate of different blood vessels at a centerline point i. The two profiles on the top of each plot are the vessel profile Aivessel (blue) and the nonvessel profile Ainonvessel (red). The difference between the two (black profile) is shown on the bottom. The difference profile is filtered at the region of interest (green) and the depth coordinate of the vessel on that point is taken as the location of the maximum of the filtered difference.

JBO_20_1_016006_f006.png

The set Ec ⊂ Uskel of endpoints on each bewildering region c, with Ec ∈ {E1, E2, …}, will then be used to search for the most plausible linking configuration of that region (Sec. 3.4).

3.3.

Vascular Network Depth-Wise Position

As stated, retinal blood vessels are not directly visible in standard OCT data. Typically, vessels on OCT appear as hyper-reflective regions followed posteriorly by the shadowing of structures beneath due to light absorption by hemoglobin in perfused vessels.

In the previous section, all A-scans were classified as vessel or nonvessel A-scans. Only A-scans from the centerline of the vessel are processed to search for the depth-wise location of the vessel, using the following methods.

3.3.1.

Principal Component Analysis

PCA is used to enhance the information shared by neighboring vessel A-scans in the principal component. For each vessel centerline A-scan (Askel = {Ai(·) : i ∈ Uskel}), a circular region in the xy plane centered at (xi, yi) with radius r is defined. The set of A-scans in each vessel and nonvessel region is used to compute two new profiles, Avessel and Anonvessel, respectively, as follows:

Every A-scan within the defined region is interpolated to account for differences in retinal thickness, from the ILM (Z1) to the IS/OS (Z2). One should bear in mind that the whole volume was previously flattened to the IS/OS layer26 and, as such, Z2 is now equal for all A-scans.

Each profile in Avessel = {Aivessel(·) : i ∈ Uskel} is computed as the principal component of the selected A-scans Aj such that

(1)

Aj : ‖[xi − xj, yi − yj]‖ < r,  {i, j} ⊂ Vk,
where Vk ⊂ U is the set of indices of the A-scans that constitute vessel k and ‖·‖ is the usual Euclidean norm. On the other hand, each profile in Anonvessel = {Ainonvessel(·) : i ∈ Uskel} is computed as the principal component of the selected A-scans Aj such that

(2)

Aj : ‖[xi − xj, yi − yj]‖ < r,  j ∉ U.

At the first iteration, the sets Vk ∈ {V1, V2, …} are vessel segments delimited by bewildering regions. However, after defining the actual links between segments, the depth-wise position can be recomputed to improve the estimation at the bewildering regions (Fig. 3).

Although the radius r was empirically selected following visual inspection of the final 3-D segmentation, it is now kept fixed for the processing of all the examples.
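The principal-component profile of a neighborhood of A-scans can be sketched as below. Centering, the SVD route, and the sign convention (aligning the component with the mean profile) are implementation choices not stated in the paper; the synthetic A-scans are assumptions.

```python
import numpy as np

def principal_profile(ascans):
    """First principal component of a set of A-scans.

    ascans: (n, depth) array, one interpolated A-scan per row. Returns
    a single depth profile summarizing their shared structure.
    """
    X = ascans.astype(float)
    X = X - X.mean(axis=0)
    # First right singular vector = direction of maximal variance.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pc = vt[0]
    # Resolve the sign ambiguity so the profile matches the mean shape.
    if np.dot(pc, ascans.mean(axis=0)) < 0:
        pc = -pc
    return pc

rng = np.random.default_rng(1)
template = np.exp(-((np.arange(128) - 64) / 10.0) ** 2)   # shared shape
amps = rng.uniform(0.5, 1.5, size=(20, 1))                # varying amplitude
ascans = amps * template + 0.05 * rng.normal(size=(20, 128))
profile = principal_profile(ascans)
```

The recovered profile follows the shared template while averaging out the independent noise, which is the enhancement effect the text describes.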

3.3.2.

Local Difference Profiles

To estimate the z (depth) coordinate of the vessel at each point of the centerline, the difference between the vessel (Aivessel) and the nonvessel (Ainonvessel) profiles is computed for each i ∈ Uskel. This operation results in a difference profile with two clear signatures: one due to the hyperreflectivity and the other due to the shadow, present in vessel A-scans only, as illustrated in Fig. 6.

Although the vessel walls are not seen in the difference profile, one can estimate their position (and, therefore, the vessel center) by taking advantage of the diameter estimated in Sec. 3.1. A moving average filter h(x, d) of size d allows for the estimation of the center of the vessel by

(3)

ci = argmaxz [(Aivessel(z) − Ainonvessel(z)) * h(z, si)],
where si is the estimated caliber at the i’th A-scan, * is the convolution operator, and ci is restricted to the interval [Z1 + si/2, Z2 − si/2].

3.4.

Bifurcations and Crossovers

Vessel tracking in OCT is quite different from other imaging modalities. Due to the low sampling of OCT systems compared to imaging modalities such as CFP and FA, most vessels are just one to two pixels wide, and the bewildering regions present many alternatives for the linking process within windows just a few pixels wide (Fig. 4). Furthermore, since we use the shadow to locate the blood vessels, the intensities of different vessels on the fundus reference images do not differ sufficiently to tell them apart.

As a preprocessing step before linkage, we screen all possible links to discard those that are unlikely and to find links that were not established in the 2-D classification but that could help in solving the bewildering region (the so-called growing links). All possible links are then subject to thresholding by length, by the angle between linked vessel segments, and by the angle between each vessel segment and the link itself.

For each bewildering region c, a cost ϕij is assigned to every link lij between every two points, (xi, yi, zi) and (xj, yj, zj), such that {i, j} ⊂ Ec and i < j, as

(4)

ϕij = (αij + βij + γij)(αji + βji + γji)
with

(5)

αij = (2/π) |θixy − arctan((yj − yi)/(xj − xi))|
βij = 1 − min(si, sj)/max(si, sj)
γij = (2/π) |θiz − arctan((zj − zi)/‖[xi − xj, yi − yj]‖)|
with arctan and the angle differences mapped to the interval [−π/2, π/2], and where θxy and θz are, respectively, the orientations of the vessel in the xy plane and in z.

At this point, we shall define all configurations Ck{C1,C2,} as sets of links lij subject to the following constraints:

  • the set of links contains the link with the least cost, lij : (i, j) = argmin(i,j) ϕij;

  • all endpoints have to be linked, except points that need vessel segmentation growing to link them;

  • there is only one link by vessel segmentation growing;

  • the configuration does not result in intersections or loops in the same vessel.

Generally, the set {C1,C2,} will enclose all possibilities for crossovers and bifurcations.

The cost of the configuration Ck is then computed as

(6)

Φk = Σ_{(i,j) : lij ∈ Ck} ϕij.

From the group of many feasible combinations, i.e., ignoring the ones rendering loops within the same vessel, the solution presenting the lowest overall cost is chosen.
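A simplified sketch of this selection, assuming the configurations reduce to perfect matchings of the endpoints: brute-force enumeration is affordable because bewildering regions contain only a handful of endpoints. Growing links and the loop/intersection constraints of the full method are deliberately omitted here.

```python
def best_configuration(points, cost):
    """Minimum-total-cost perfect matching of an even set of endpoints.

    points: list of hashable endpoint ids; cost: dict mapping frozenset
    pairs to link costs phi_ij.
    """
    def matchings(pts):
        if not pts:
            yield []
            return
        first, rest = pts[0], pts[1:]
        for k, other in enumerate(rest):
            pair = frozenset((first, other))
            for m in matchings(rest[:k] + rest[k + 1:]):
                yield [pair] + m
    # Total cost of a configuration, per Eq. (6).
    return min(matchings(list(points)),
               key=lambda m: sum(cost[p] for p in m))

# Four endpoints of a crossover: A-C and B-D continue the same vessels,
# so those two links carry the lowest pairwise costs.
cost = {frozenset(p): c for p, c in [
    (("A", "B"), 5.0), (("A", "C"), 0.1), (("A", "D"), 4.0),
    (("B", "C"), 4.0), (("B", "D"), 0.2), (("C", "D"), 5.0)]}
config = best_configuration(["A", "B", "C", "D"], cost)
```

For n endpoints there are (n − 1)!! matchings, which stays tiny for the region sizes reported in Fig. 10.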

3.5.

Three-Dimensional Reconstruction

Vessel centerlines in the xy plane (Sec. 3.2) are combined with the estimated depth-wise positions (Sec. 3.3) and the links established in Sec. 3.4 to render the 3-D skeleton of the vascular network.

In this work, we assume vessels to have a local tubular structure whose centerlines are defined by the 3-D skeleton and the diameter is estimated from the fundus reference image (xy plane). In this way, cylinders and cone-like structures are the fundamental components from which the 3-D vascular network is built.

3.5.1.

Vessels Path Interpolation

The combination of vessel diameter and low OCT sampling results in vessels being poorly defined in the fundus reference. In addition, at vessel crossovers, the basic assumptions for the determination of the vessel depth location no longer hold, except for the topmost vessel. That is, the vessels below the top one do not present the clear signatures (vessel hyperreflectivity and shadow) since they lie within the shadow cast by the vessel above. Interpolation is therefore mandatory and is performed under the assumption that vessels do not present discontinuities or sharp edges, i.e., they are continuous with respect to the first derivative in each of the three dimensions. Under these assumptions, the vascular network is built from the OCT data (control points) using cubic spline interpolation.
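The spline densification step can be sketched as follows; the chord-length parameterization is an assumption (the paper does not state the parameterization), and the control points are synthetic.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def densify_centerline(points, n_samples=100):
    """Cubic-spline interpolation of a 3-D centerline.

    points: (n, 3) array of control points (x, y, z) from the OCT data.
    Returns (n_samples, 3) points on a twice-differentiable curve
    through them, parameterized by cumulative chord length.
    """
    pts = np.asarray(points, float)
    chord = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0),
                                                axis=1))]
    spline = CubicSpline(chord, pts, axis=0)
    return spline(np.linspace(0.0, chord[-1], n_samples))

# Sparse control points on a straight vessel segment.
ctrl = np.array([[0, 0, 0], [1, 2, 0.5], [2, 4, 1.0], [3, 6, 1.5]], float)
dense = densify_centerline(ctrl, n_samples=50)
```

Interpolating each coordinate against a common arc-length parameter guarantees the first-derivative continuity in all three dimensions that the text requires.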

3.5.2.

Delaunay Triangulation

Vessel reconstruction is achieved by Delaunay triangulation. At each vessel centerline location, the tangent to the path, defined by the centerline points, is computed and the respective normal plane determined. The cross section of the vessel is approximated by a set of points in this cross-sectional plane within the circumference centered at the vessel centerline and a diameter equal to the estimated vessel’s diameter (Fig. 7). Adjacent circumferences are later connected using the Delaunay triangulation in 3-D. The quality of the final reconstruction is directly dependent on the number of triangles, which in turn depends on the circumference sampling and the gap between estimated vessel cross sections.
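The ring-generation step can be sketched as below. For simplicity, this sketch builds the normal plane from a fixed reference vector and connects adjacent rings with a plain triangle strip rather than the Delaunay triangulation used in the paper; the straight test vessel and the ring sampling are assumptions.

```python
import numpy as np

def tube_rings(centerline, radii, n_circ=12):
    """Cross-sectional rings of points around a 3-D centerline.

    centerline: (n, 3) points; radii: per-point radius. Each ring lies
    in the plane normal to the local tangent.
    """
    c = np.asarray(centerline, float)
    tangents = np.gradient(c, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    rings = []
    for p, t, r in zip(c, tangents, radii):
        ref = np.array([0.0, 0.0, 1.0])
        if abs(np.dot(ref, t)) > 0.9:        # avoid a degenerate frame
            ref = np.array([0.0, 1.0, 0.0])
        u = np.cross(t, ref)
        u /= np.linalg.norm(u)
        v = np.cross(t, u)                   # (u, v) span the normal plane
        ang = np.linspace(0, 2 * np.pi, n_circ, endpoint=False)
        rings.append(p + r * (np.outer(np.cos(ang), u)
                              + np.outer(np.sin(ang), v)))
    return np.array(rings)

def strip_triangles(n_rings, n_circ):
    """Triangle index triples connecting consecutive rings."""
    tris = []
    for i in range(n_rings - 1):
        for j in range(n_circ):
            a = i * n_circ + j
            b = i * n_circ + (j + 1) % n_circ
            tris += [(a, b, a + n_circ), (b, b + n_circ, a + n_circ)]
    return tris

# Straight horizontal vessel of constant radius 2.
line = np.stack([np.linspace(0, 10, 6), np.zeros(6), np.zeros(6)], axis=1)
rings = tube_rings(line, radii=[2.0] * 6)
tris = strip_triangles(len(rings), 12)
```

As in the text, the mesh density (ring sampling n_circ and spacing between rings) directly controls the quality of the final surface.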

Fig. 7

Illustration of a local reconstruction of a vessel, based on Delaunay triangulation. Several points over the centerline are chosen and equally distant points (with distance equal to the estimated radius of the vessel in the xy plane) in the respective orthogonal plane to the centerline are considered for the 3-D triangulation process of reconstruction.

JBO_20_1_016006_f007.png

4.

Results

OCT macular scans of 15 eyes, from healthy subjects and from patients diagnosed with type 2 diabetes mellitus (early treatment diabetic retinopathy study levels 10 to 35), were used as the test bed for the proposed methodology. All OCT scans were gathered from our institutional database and were collected by a Cirrus HD-OCT (Carl Zeiss Meditec, Dublin, CA, United States) system. These eyes were also imaged by CFP (Zeiss FF 450 system) and/or FA (Topcon TRC 50DX, Topcon Medical Systems, Inc., Oakland, NJ, United States).

The results obtained for the 3-D vascular segmentation are now presented in three parts: the vessel caliber estimation, the bewildering region decisions, and the reconstruction and z position of the vessels.

4.1.

Vessel Caliber

The automated vessel caliber estimation was compared with the assisted estimates made by three graders (G1, G2, and G3).

Five vessel segments per OCT eye scan were randomly chosen, each at least 21 pixels long along the centerline. Graders were instructed to use a software tool. Since the lateral sampling of OCT is relatively low, the graders would not be able to simply mark the vessel boundaries, as is common practice for CFP.33 Instead, for any given caliber, the tool provided a continuous subpixel marking of the boundaries of the whole vessel segment against the PCF image. The graders tested several diameters in increments of 0.2 pixels to choose the diameter that best fit the data. The result is a set of caliber measurements (in pixels) for each randomly chosen vessel segment.

For the comparison, two metrics were used, the absolute and the relative differences as defined, respectively, by

(7)

Da(G, Ĝ) = |G − Ĝ|,

(8)

Dr(G, Ĝ) = |G − Ĝ|/Ĝ,
where G is the data to test and Ĝ is the ground truth, considered here as the average of all the graders (Table 1).
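For concreteness, Eqs. (7) and (8) are simply:

```python
def abs_diff(g, g_hat):
    """Absolute difference Da of Eq. (7)."""
    return abs(g - g_hat)

def rel_diff(g, g_hat):
    """Relative difference Dr of Eq. (8), normalized by the ground
    truth g_hat (the average of the graders)."""
    return abs(g - g_hat) / g_hat
```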

Table 1

Comparison between the automatic caliber estimation (S) and each manual grading (G1, G2, and G3), by the mean and standard deviation (SD) values for the absolute difference Da (pixels) and the relative difference Dr between the result G and G^.

Metric      G     Ĝ                  Mean    SD
Da(G, Ĝ)    S     (G1 + G2 + G3)/3   0.311   0.208
            G1    (G2 + G3)/2        0.494   0.240
            G2    (G1 + G3)/2        0.187   0.153
            G3    (G1 + G2)/2        0.406   0.231
Dr(G, Ĝ)    S     (G1 + G2 + G3)/3   0.213   0.173
            G1    (G2 + G3)/2        0.299   0.168
            G2    (G1 + G3)/2        0.122   0.110
            G3    (G1 + G2)/2        0.337   0.266

Table 1 shows the average metric results by comparing the automatic estimation (S) to the average grading of all graders and each grader to the average grading of the other two, as a measure of inter-grader agreement. Figure 8 shows the relative difference metric for several ranges of diameters for the same comparisons. As expected, smaller vessels render a higher variability, as shown by the histograms. Figure 9 shows both the manual and automatic determination of vessel boundaries over the PCF fundus reference image.

Fig. 8

Average relative differences for the automatic estimation (S) and manual gradings (G1, G2, and G3) displayed by caliber range.

JBO_20_1_016006_f008.png

Fig. 9

Representative results for the diameter estimation. The manual gradings are marked in white, and the automatic estimation is marked with the (yellow) dotted line.

JBO_20_1_016006_f009.png

4.2.

Bewildering Regions

At bifurcation and crossover locations, the continuity of each vessel is hard to detect even for a human grader, naturally depending on the number of possibilities for each region. This was demonstrated to be a very demanding task for the automatic process. For very difficult cases, or whenever they decided to, graders could access a fluorescein angiogram, a CFP, or both, according to their availability in our database. In general, access to these complementary imaging modalities proved to be frequently required for this task. Three metrics were used to assess the accuracy of the automated system in establishing the correct links between vessel segments at bewildering regions. These are: the point accuracy (the percentage of points in Ec correctly grouped), the linking accuracy (the percentage of links correctly connected and nonlinks correctly left unconnected, i.e., the percentage of true positives and true negatives), and the percentage of bewildering regions correctly classified.

The results are shown in Figs. 10 and 11.

Fig. 10

Number of bewildering regions by number of points to link (points in Ec), on the left, and metric results for bewildering region solving.

JBO_20_1_016006_f010.png

Fig. 11

Detail on the reconstruction of the vessels in bewildering regions, with crossovers and bifurcations.

JBO_20_1_016006_f011.png

4.3.

Vascular Network Depth-Wise Position

The low visibility of vessel markers (such as the shadow) makes the manual detection of blood vessels in OCT B-scans a hard task. Two graders (G4 and G5.1) were instructed to mark, at 50 randomly selected vessel A-scans, the position where the shadow of the vessel began. Some restrictions were imposed on the random selection to guarantee an unbiased evaluation: the same vessel could not be selected twice, and no more than five A-scans per exam were allowed. Furthermore, very small vessels (less than two pixels in radius) were excluded, as the graders found them very difficult to evaluate. Both graders evaluated the same A-scans so that the inter-grader variability could be computed. A software tool was developed to help in this process. The second grader repeated the process (G5.2) to establish the intra-grader variability. The results are shown in Fig. 12 and summarized in Table 2.

Fig. 12

Difference (in millimeters) between the first grader's (G4) marking of the beginning of the shadow and the automatic segmentation (S) of the centerline, and between the two markings of the beginning of the shadow by the second grader (G5.1 and G5.2).

JBO_20_1_016006_f012.png

Table 2

Comparison between the automatic depth-wise position (S) and each manual grading (G4, G5.1, and G5.2), by the mean and standard deviation (SD) values for the absolute difference Da (mm) between the result G and G^.

Metric      G      Ĝ      Mean    SD
Da(G, Ĝ)    S      G4     0.013   0.009
            S      G5.1   0.030   0.026
            G4     G5.1   0.020   0.021
            G5.2   G5.1   0.011   0.012

The high inter- and intra-grader variability values obtained (0.020 and 0.011 mm, respectively) clearly show how difficult the manual process is.

As is visible from Fig. 12, the automatic estimation of the depth-wise position appears mostly at a greater depth than the position marked by the graders, since the graders aim at the beginning of the shadow (the vessel top wall) while the algorithm aims at the vessel centerline. Hence, this systematic deviation is expected.

4.4.

Reconstruction

The proposed reconstruction proves feasible. Overall, the reconstructed vessel network is smooth and behaves as expected (Figs. 13 and 14). In fact, the details of the vessel crossovers, as illustrated in Fig. 11, also match physicians' expectations, with the vessels deviating rapidly to form the crossover. Note that, for this reason, the high axial resolution of OCT is crucial to detect crossovers. Moreover, Fig. 15 illustrates the cross section of the vascular reconstruction in several B-scans. These figures show that the method leads to a feasible reconstruction of the position and shape of the vascular network within the OCT volumetric scan.

Fig. 13

3-D reconstruction of the position and shape of the vessels.

JBO_20_1_016006_f013.png

Fig. 14

3-D reconstruction of the retinal vascular network embedded in the OCT volume.

JBO_20_1_016006_f014.png

Fig. 15

(a) B-scans and cross section of the vascular reconstruction (in white); (b) the respective position of the B-scan (white line) with the detected vessels (in red) on the fundus reference image.

JBO_20_1_016006_f015.png

From Rodrigues et al.,23 the execution time for the 2-D segmentation process (OCT fundus reference computation, features computation, and support vector machines classification), using a MATLAB® (The MathWorks Inc., Natick, MA) implementation, was 65.2±1.2s (N=15). The system hardware used was an Intel® Core i7-3770 CPU (Intel Corporation, Santa Clara, California) at 3.4 GHz.

The additional execution time to achieve the 3-D reconstruction, also using a MATLAB implementation, was 122.3±115.1s (average±standard deviation) on an Intel® Core i7-4770 CPU at 3.4 GHz. For the 3-D reconstruction, the required time greatly depends on the vessel tree complexity. The high standard deviation value reflects this behavior.

5.

Discussion and Conclusions

The method presents good overall performance. The location of the vascular tree is (as expected) in the upper third of the retina. The vessel caliber estimation achieves a precision similar to that of a human grader, and the depth position detection is in agreement with the known anatomy. However, the linking of the vessels in the defined bewildering regions works well when few connections are involved but does not seem to draw on enough information to achieve higher accuracy when the number of vessels to connect is higher. Apart from the presented cost functions, many other approaches were tried, leading to similar but not better results. Although the bewildering-region accuracy is about 65%, we note that the linking accuracy and point accuracy are quite high for the problem at hand, bearing in mind that the OCT lateral resolution is low in comparison with other modalities. To the best of our knowledge, this problem is addressed here for the first time for OCT data.

The different steps of the validation show that the algorithm performs well. However, Doppler OCT would be a better ground truth than manual segmentation. Unfortunately, we have no access to such a system at our institutions. The high inter and intra-variability values from the manual gradings indicate how difficult the manual segmentation is.

The first tests on the relevance of the depth component of the retinal vascular network have already been conducted.33 This preliminary study shows that the most widely used metric of vessel tortuosity presents no statistically significant linear relation between its 2-D and 3-D versions. Thus, using 3-D tortuosity metrics to correlate with disease status might have a significant impact on the correlation values when compared with those obtained using 2-D metrics.

It is expected that more severe pathologies would pose a more difficult task. In the future, we hope to extend our tests to these cases. Although one might anticipate slightly worse results, note that our goal of early diagnosis of pathology requires working with eyes in the early stages of disease progression (close to normal).

In the present work, we propose and describe a method to segment and visualize the retinal vascular network in 3-D for OCT data. Increased sampling and accuracy would improve the algorithm’s performance and would allow for an objective validation.

Acknowledgments

This work was supported by FCT under the research projects (Project Nos. PTDC/SAU-ENB/111139/2009 and PEST-C/SAU/UI3282/2013), and by the COMPETE programs FCOMP-01-0124-FEDER-015712 and FCOMP-01-0124-FEDER-037299. The authors would like to thank António Correia for performing the manual gradings.

References

1. G. Liew et al., “Retinal vascular imaging: a new tool in microvascular disease research,” Circ. Cardiovasc. Imaging 1(2), 156–161 (2008). http://dx.doi.org/10.1161/CIRCIMAGING.108.784876

2. R. Kawasaki et al., “Retinal microvascular signs and 10-year risk of cerebral atrophy: the Atherosclerosis Risk in Communities (ARIC) study,” Stroke 41, 1826–1828 (2010). http://dx.doi.org/10.1161/STROKEAHA.110.585042

3. N. Witt et al., “Abnormalities of retinal microvascular structure and risk of mortality from ischemic heart disease and stroke,” Hypertension 47, 975–981 (2006). http://dx.doi.org/10.1161/01.HYP.0000216717.72048.6c

4. F. N. Doubal et al., “Fractal analysis of retinal vessels suggests that a distinct vasculopathy causes lacunar stroke,” Neurology 74, 1102–1107 (2010). http://dx.doi.org/10.1212/WNL.0b013e3181d7d8b4

5. R. Kawasaki et al., “Retinal microvascular signs and risk of stroke: the multi-ethnic study of atherosclerosis (MESA),” Stroke 43, 3245–3251 (2012). http://dx.doi.org/10.1161/STROKEAHA.112.673335

6. N. Patton et al., “Retinal vascular image analysis as a potential screening tool for cerebrovascular disease: a rationale based on homology between cerebral and retinal microvasculatures,” J. Anat. 206, 319–348 (2005). http://dx.doi.org/10.1111/joa.2005.206.issue-4

7. N. Patton et al., “The association between retinal vascular network geometry and cognitive ability in an elderly population,” Invest. Ophthalmol. Vis. Sci. 48, 1995–2000 (2007). http://dx.doi.org/10.1167/iovs.06-1123

8. H. Yatsuya et al., “Retinal microvascular abnormalities and risk of lacunar stroke: atherosclerosis risk in communities study,” Stroke 41, 1349–1355 (2010). http://dx.doi.org/10.1161/STROKEAHA.110.580837

9. P. Z. Benitez-Aguirre et al., “Retinal vascular geometry predicts incident renal dysfunction in young people with type 1 diabetes,” Diabet. Care 35, 599–604 (2012). http://dx.doi.org/10.2337/dc11-1177

10. X. Wang, H. Cao, and J. Zhang, “Analysis of retinal images associated with hypertension and diabetes,” in Proc. 27th Ann. Int. Conf. of the IEEE Engineering in Medicine and Biology Society, Vol. 6, pp. 6407–6410, IEEE, Shanghai (2005).

11. M. B. Sasongko et al., “Alterations in retinal microvascular geometry in young type 1 diabetes,” Diabet. Care 33, 1331–1336 (2010). http://dx.doi.org/10.2337/dc10-0055

12. B. E. Bouma et al., Handbook of Optical Coherence Tomography, Marcel Dekker, New York (2002).

13. R. Bernardes and J. Cunha-Vaz, Optical Coherence Tomography: A Clinical and Technical Update, Springer, Heidelberg (2012).

14. T. Fabritius et al., “Automated retinal shadow compensation of optical coherence tomography images,” J. Biomed. Opt. 14(1), 010503 (2009). http://dx.doi.org/10.1117/1.3076204

15. W. Drexler and J. G. Fujimoto, Optical Coherence Tomography: Technology and Applications, Springer, Heidelberg (2008).

16. M. Niemeijer et al., “Vessel segmentation in 3D spectral OCT scans of the retina,” Proc. SPIE 6914, 69141R (2008). http://dx.doi.org/10.1117/12.772680

17. J. Xu et al., “3D OCT retinal vessel segmentation based on boosting learning,” in IFMBE Proc. World Congress on Medical Physics and Biomedical Engineering, September 7 to 12, 2009, Munich, Germany, O. Dössel and W. C. Schlegel, Eds., pp. 179–182, Springer, Berlin Heidelberg (2009).

18. R. Kafieh et al., “Vessel segmentation in images of optical coherence tomography using shadow information and thickening of retinal nerve fiber layer,” in Proc. 2013 IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 1075–1079, IEEE, Vancouver (2013).

19. M. Pilch et al., “Automated segmentation of retinal blood vessels in spectral domain optical coherence tomography scans,” Biomed. Opt. Express 3, 1478–1491 (2012). http://dx.doi.org/10.1364/BOE.3.001478

20. Z. Hu et al., “Automated segmentation of 3-D spectral OCT retinal blood vessels by neural canal opening false positive suppression,” Lec. Notes Comput. Sci. 13, 33–40 (2010). http://dx.doi.org/10.1007/978-3-642-15711-0

21. V. Kajić et al., “Automated three-dimensional choroidal vessel segmentation of 3D 1060-nm OCT retinal data,” Biomed. Opt. Express 4(1), 134–150 (2013). http://dx.doi.org/10.1364/BOE.4.000134

22. P. Guimarães et al., “3D retinal vascular network from optical coherence tomography data,” Lec. Notes Comput. Sci., 339–346 (2012). http://dx.doi.org/10.1007/978-3-642-31298-4

23. P. Rodrigues et al., “Two-dimensional segmentation of the retinal vascular network from optical coherence tomography,” J. Biomed. Opt. 18, 126011 (2013). http://dx.doi.org/10.1117/1.JBO.18.12.126011

24. K. Li et al., “Optimal surface segmentation in volumetric images: a graph-theoretic approach,” IEEE Trans. Pattern Anal. Mach. Intell. 28, 119–134 (2006). http://dx.doi.org/10.1109/TPAMI.2006.19

25. P. A. Dufour et al., “Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints,” IEEE Trans. Med. Imaging 32, 531–543 (2013). http://dx.doi.org/10.1109/TMI.2012.2225152

26. P. Guimarães et al., “Ocular fundus reference images from optical coherence tomography,” Comput. Med. Imaging Graph. 38(5), 381–389 (2014). http://dx.doi.org/10.1016/j.compmedimag.2014.02.003

27. B. Al-Diri, A. Hunter, and D. Steel, “An active contour model for segmenting and measuring retinal vessels,” IEEE Trans. Med. Imaging 28, 1488–1497 (2009). http://dx.doi.org/10.1109/TMI.2009.2017941

28. X. Xu et al., “Vessel boundary delineation on fundus images using graph-based approach,” IEEE Trans. Med. Imaging 30, 1184–1191 (2011). http://dx.doi.org/10.1109/TMI.2010.2103566

29. E. Trucco et al., “Novel VAMPIRE algorithms for quantitative analysis of the retinal vasculature,” in Proc. 2013 ISSNIP Biosignals and Biorobotics Conference (BRC), pp. 1–4, IEEE, Rio de Janeiro (2013).

30. B. Al-Diri et al., “Manual measurement of retinal bifurcation features,” in Proc. 2010 Ann. Int. Conf. of the IEEE Engineering in Medicine and Biology Society, Buenos Aires, Vol. 2010, pp. 4760–4764 (2010).

31. B. Al-Diri et al., “Automated analysis of retinal vascular network connectivity,” Comput. Med. Imaging Graph. 34(6), 462–470 (2010). http://dx.doi.org/10.1016/j.compmedimag.2009.12.013

32. J. V. B. Soares et al., “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Trans. Med. Imaging 25, 1214–1222 (2006). http://dx.doi.org/10.1109/TMI.2006.879967

33. P. Serranho et al., “On the relevance of the 3D retinal vascular network from OCT data,” Biometric. Lett. 49, 95–102 (2012). http://dx.doi.org/10.2478/bile-2013-0007

Biographies of the authors are not available.

© 2015 Society of Photo-Optical Instrumentation Engineers (SPIE)
Pedro Guimarães, Pedro Rodrigues, Dirce Celorico, Pedro Serranho, and Rui Bernardes, "Three-dimensional segmentation and reconstruction of the retinal vasculature from spectral-domain optical coherence tomography," Journal of Biomedical Optics 20(1), 016006 (7 January 2015). https://doi.org/10.1117/1.JBO.20.1.016006