12 October 2018 Optic nerve head three-dimensional shape analysis
Abstract
We present a method for optic nerve head (ONH) 3-D shape analysis from retinal optical coherence tomography (OCT). The ability to noninvasively acquire high-resolution in vivo 3-D volumes of the ONH using spectral domain OCT drives the need for tools that quantify the shape of this structure and extract information for clinical applications. The presented method automatically generates a 3-D ONH model and then allows the computation of several 3-D parameters describing the ONH. The method starts with a high-resolution OCT volume scan as input. From this scan, the two model-defining surfaces are segmented, the inner limiting membrane (ILM) as inner surface and the retinal pigment epithelium as outer surface, and the Bruch’s membrane opening (BMO) is detected as the model origin. Based on the ONH model generated by triangulated 3-D surface reconstruction, different parameters (areas, volumes, annular surface ring, minimum distances) of different ONH regions can then be computed. Additionally, the bending energy (roughness) of the ILM surface in the BMO region and the 3-D BMO minimum rim width (BMO-MRW) surface area are computed. We show that our method is reliable and robust across a large variety of ONH topologies (specific to this structure) and present a first clinical application.

1.

Introduction

Many neurological and ophthalmological conditions affect the retina. Two landmark structures affected are the macula and the optic nerve head (ONH). Parameters describing the shape of these structures, especially 3-D parameters, are gaining more and more importance for understanding normative data and disease-specific changes, providing additional information alongside traditional quantitative measures. For example, in the case of the macula, several studies and approaches have already been published that provide better insight into the definition of healthy structure and shape.1–5 The ONH is the site where all retinal ganglion cell axons gather to leave the eye toward the brain. As such, investigating the ONH is central to the diagnosis of many disorders affecting the optic nerve, like glaucoma,6 idiopathic intracranial hypertension (IIH),7,8 optic neuritis (ON),9,10 and optic neuropathies of other etiology, for example, in the context of multiple sclerosis (MS) or neuromyelitis optica spectrum disorders (NMOSD).11–13

The advent of fast spectral domain optical coherence tomography (OCT) has recently enabled in vivo 3-D imaging of retinal structures, including the ONH. While high-resolution ONH images can now be acquired in 3-D, the complex ONH shape has so far made automatic image analysis challenging.

The main focus of this work is to fill this gap by presenting and validating a robust and fully automatic method that is capable of analyzing ONH shape over a range of conditions. Our approach includes a fully automatic 3-D shape analysis pipeline, with various techniques that will be presented throughout the paper. The algorithm is shown to be stable across a variety of 3-D ONH scans and is—to our knowledge—the first approach that fully exploits the 3-D nature of the ONH for computational image analysis.

1.1.

Related Work

Analyzing tissue damage and structural changes in the ONH is one of the key goals to improve the diagnosis and understanding of diseases related to this structure. The main focus of ONH research lies in the field of ophthalmology, the most prominent topic being glaucoma. The most commonly used parameters include lengths, areas, volumes, or ratios that quantify various regions of the ONH. The main anatomical structures needed to compute these parameters are the inner limiting membrane (ILM), Bruch’s membrane or the lower boundary of the retinal pigment epithelium (denoted throughout the paper, for simplicity, by RPE), and the Bruch’s membrane opening (BMO) points. The first two structures delimit the retina, and the BMO points have recently gained more and more attention since Reis et al.14 introduced them as the true anatomical structure defining the optic disc, as opposed to the margin clinically identified using fundus photography. Following this work, several other groups15,16 demonstrated the superior reliability of BMO-based parameters compared to the cup-to-disc ratio.

For ONH analysis, Enders et al.17 and Muth and Hirneiß18 used manual segmentation of the BMO points and minimum rim width (BMO-MRW), as well as commercially available ILM surface detection with manual correction, to study the correlation between visual fields and structural ONH changes. Chauhan et al.19 proposed BMO-MRW as a marker for early glaucoma detection, also using manually segmented BMO points and BMO-MRW measurements, while Pollet-Villard et al.20 used a similar manual process to show that the structure–function relationship was significantly stronger for BMO-MRW than for other ONH parameters derived from spectral-domain OCT. Furthermore, several studies focused on the description of normative values that characterize the shape of the ONH. Chauhan et al.21 analyzed BMO-MRW and the orientation of the long axis of the BMO in a normal population, while Enders et al.17 described BMO-MRW in micro- and macrodiscs.22 BMO minimum rim area (BMO-MRA) correlates with the total number of nerve fibers traversing the optic nerve and was introduced and investigated in Refs. 23 and 24. Both studies have shown BMO-MRA’s high diagnostic power for glaucoma. All these studies used commercially available OCT software to retrieve the ILM surface and BMO points, which were then manually controlled and corrected by experienced graders.

Another important parameter is the ONH volume, previously analyzed mainly in relation to conditions characterized by ONH swelling. To this end, a 2-D segmentation method has previously been developed by our group. The method25 is able to robustly detect the lower boundary of the RPE in healthy ONHs as well as in ONHs from patients suffering from various neurological disorders that lead to swelling of the ONH. Several other publications investigating the same condition26,27 followed, all using a graph-cut-based approach to segment the ILM and BM at the ONH. Lee et al.28 introduced an end-to-end pipeline to compute several morphometric parameters (volumes in different ONH regions and related area parameters). Furthermore, a surface correspondence approach to create a normalized space was presented. However, this pipeline is semiautomatic and requires manual segmentation of the ILM, BM, and BMO points. For the development of the mean surfaces, an accurate registration between the ONH surfaces is needed. Gibson et al.29 introduced an ONH registration algorithm to compute the one-to-one correspondence between two ONH surfaces using hemispherical surface and volume registration. Later, Lee et al.30 proposed a more sophisticated registration algorithm based on surface currents and hemispherical demons. Recently, the shape variability of retinal nerve fiber layer (RNFL)–choroidal thickness was investigated using a nonrigid surface registration for longitudinal analysis.31

In general, most published methods related to ONH morphometry compute the traditional parameters (lengths, widths, and volumes) and perform layer segmentation either manually or semiautomatically. To the best of our knowledge, none of the published ONH morphometry methods uses proper manifold 3-D ILM or BM surfaces, but rather a 2.5-D surface (i.e., a graph function on an XY-grid).

1.2.

Contribution

In this paper, we propose a fully automatic 3-D shape analysis of the ONH region and introduce 3-D shape parameters along with commonly used ones. The objective of our proposed method is to retrieve robust and reliable 3-D quantitative measurements that describe different aspects of the various shapes of the ONH. Specifically:

  • We introduce a fully automatic pipeline for 3-D shape analysis of the ONH region.

  • We automatically segment ILM and the lower boundary of RPE along with BMO points.

  • We compute triangulated 3-D surfaces of the ILM and the lower boundary of RPE.

  • We introduce several 3-D shape analysis parameters, along with traditional parameters.

  • We prove the robustness of our algorithm for repeated measurements in healthy control (HC) data.

  • We provide quantitative parameters for different ONH regions in healthy people with normal vision, as well as in a dataset with clinical relevance for ONH swelling and atrophy.

2.

Method

In this section, we explain the procedure to compute several 3-D shape parameters of the ONH, including the preprocessing of the ILM and the RPE surfaces and the correspondence between them. Figure 1 shows the algorithm pipeline, where an OCT volume scan is the input of the algorithm. Next, we compute the triangulated ILM and RPE surfaces along with the BMO points. These three structures represent the ONH 3-D shape and serve as inputs for further shape analysis of the ONH. In the remainder of the paper, the ILM and the RPE surfaces will be represented as M_{ILM} and M_{RPE}, respectively, and the BMO points are denoted by P. The ILM and the RPE surfaces are triangulated manifold surfaces and can be written in terms of their sets of vertices and faces (triangles):

M_{ILM} = \{V_{ILM}, F_{ILM}\} \quad \text{and} \quad M_{RPE} = \{V_{RPE}, F_{RPE}\}.

Fig. 1

The pipeline of the proposed algorithm.


These two surfaces can have different numbers of vertices and triangles. Let n_{ILM} and m_{ILM} represent the numbers of vertices and faces of the ILM surface. Similarly, n_{RPE} and m_{RPE} denote the sizes of V_{RPE} and F_{RPE}, respectively. The BMO points are represented as P = \{p_i \in \mathbb{R}^3 \mid i = 1, \dots, n_p\}, where n_p is the number of BMO points.
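As an illustration, the surface representation M = {V, F} maps naturally onto array-based data structures. The following Python sketch (our own illustrative helper names, not part of the published pipeline) stores a triangulated surface as numpy arrays and computes the oriented face normals that the later volume computations rely on:

```python
import numpy as np

def make_surface(vertices, faces):
    """Store a triangulated surface M = {V, F} as numpy arrays:
    V is (n, 3) vertex positions, F is (m, 3) vertex-index triples."""
    V = np.asarray(vertices, dtype=float)
    F = np.asarray(faces, dtype=int)
    assert V.ndim == 2 and V.shape[1] == 3
    assert F.ndim == 2 and F.shape[1] == 3 and F.max() < len(V)
    return {"V": V, "F": F}

def face_normals(M):
    """Unit face normals; their consistent orientation is what the later
    volume computations rely on."""
    V, F = M["V"], M["F"]
    e0 = V[F[:, 1]] - V[F[:, 0]]
    e1 = V[F[:, 2]] - V[F[:, 0]]
    n = np.cross(e0, e1)
    return n / np.linalg.norm(n, axis=1, keepdims=True)
```

With counterclockwise vertex order in each face, the normals point consistently to one side of the surface, which is the property a manifold mesh with "properly oriented face normals" provides.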

2.1.

OCT Image Data

Our algorithm takes as input 3-D ONH scans obtained with a spectral-domain OCT device (Heidelberg Spectralis SDOCT, Heidelberg Engineering, Germany) using a custom protocol with 145 B-scans centered on the ONH, a scanning angle of 15 deg × 15 deg, and a resolution of 384 A-scans per B-scan. The spatial resolution in the x direction is 12.6 μm, in the axial direction 3.9 μm, and the distance between two B-scans is 33.5 μm.

2.2.

RPE Lower Boundary Surface Computation

2.2.1.

Smoothing and intensity normalization

The RPE lower boundary represents the termination of the retina and is therefore an important structure in several morphometric computations. In this subsection, we present our RPE segmentation approach based on the method presented in Ref. 25. First, several preprocessing steps often employed for OCT images are performed. Let I(q^{xy}) denote the intensity of a pixel q^{xy}. Our algorithm starts by applying a large Gaussian smoothing filter (σ = 5 pixels isotropic, kernel size 10 μm × 14 μm) on each B-scan separately. The smoothing operation not only reduces the speckle noise present in most OCT data but also facilitates the approximation of the two most hyperintense layers, the RNFL and the RPE. Then, to address varying intensities, a contrast rescaling is performed on each slice (B-scan). Contrast inhomogeneities can occur in the form of a B-scan having regions with different illumination or as several B-scans of the same volume with very different intensity ranges. Specifically, a histogram-based amplitude normalization method32 is used to map the signals in the original image linearly to [0, 1], using the 1st and 66th percentiles as low cutoffs and the 99th percentile as high cutoff of the histogram of the B-scan in which the sampled column (A-scan) is located. Figure 2(b) shows one B-scan of the resulting smoothed and normalized volume. Figure 2(a) shows the same B-scan with its original gray values.
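The preprocessing described above can be sketched in a few lines of Python. The cutoff choice is an assumption based on the text (we use the 66th percentile as the low cutoff and the 99th as the high cutoff); `preprocess_bscan` is an illustrative name:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_bscan(bscan, sigma=5.0):
    """Gaussian smoothing followed by histogram-based rescaling to [0, 1].
    The 66th/99th cutoff percentiles are assumptions based on the text."""
    smooth = gaussian_filter(bscan.astype(float), sigma=sigma)
    lo = np.percentile(smooth, 66)            # low cutoff (assumption)
    hi = np.percentile(smooth, 99)            # high cutoff
    out = (smooth - lo) / max(hi - lo, 1e-12)
    return np.clip(out, 0.0, 1.0)
```

A per-A-scan variant would simply compute the percentiles on the histogram of the column's B-scan, as the text specifies.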

Fig. 2

The first step in our pipeline, specifically the RPE lower boundary surface and BMO point detection, exemplified using one B-scan. (a) One B-scan of the original volume (cyan arrows indicate blood vessels and the shadow artifacts these produce; the region delimited by the yellow line is part of the ONH disc). (b) After applying smoothing and intensity normalization. (c) Approximated IS/OS junction points (blue points) after removing outliers. As a convention in our computation, the outliers detected at this step are set to have a coordinate equal to 1 in the axial (z) direction. (d) Approximated upper-boundary RPE points (blue points). (e) Smoothed 2-D RPE lower boundary (blue line). (f) BMO points (red dots); even in the presence of blood vessels, the BMO points can still be detected. 384 pixels represent 4402.80 μm and 226 pixels 881.40 μm.


2.2.2.

RPE approximation

First, we approximate the ILM as the upper boundary. At each A-scan, we detect the first pixel from the top in the smoothed and normalized volume, I_{SN}, that is higher than 1/3 of the maximum value in the B-scan containing the A-scan. This gives us a set of initial estimate points for the ILM, denoted by ILM_{init}. Next, we approximate the upper boundary of the RPE. First, we compute the image derivative, \nabla I_{SN}, of each B-scan (vertical gradient) using a Sobel kernel. Looking along each A-scan, starting from the ILM_{init} set, we perform several intermediate steps for the RPE upper boundary approximation. We estimate the inner and outer segment junction region (IS/OS) by finding the first set of points p, as shown in Fig. 2(c):

(1)

p^{xy} = \{q_i^{xy} \mid \nabla I_{SN}(q_i^{xy}) < \nabla I_{SN}(s^{xy})/3\}, \quad s^{xy} \in ILM_{init}.

Starting from the set p^{xy}, the RPE upper boundary is approximated as

(2)

RPE_{upper} = \max[\nabla I_{SN}(\tilde{p}^{xy})],

where we only consider the points below the IS/OS:

(3)

\tilde{p}^{xy} \in \{p^{xy} \mid \|s_i^{xy} - p_i^{xy}\| < 60\ \mu m\}.

At this stage, our input is a list of points that belong to the upper boundary of the RPE in each B-scan. Along with points correctly positioned at the upper RPE boundary, this list also contains several outliers, especially in the presence of shadows cast by blood vessels, as well as in the region of the ONH. To remove these, we make use of the observation that the first and last thirds of each B-scan most probably lie outside the optic nerve head area. This observation is valid for all ONH-centered scans, independently of device or scan settings. Even in these two parts, outliers caused by shadows of blood vessels might be present. To remove these, we take the gradient of the line consisting of the positions of the upper RPE points and compute the mean value from coordinates that most likely belong to correct upper RPE points. These coordinates represent RPE boundary points that form the largest part of the gradient line between outliers (outliers in the gradient are considered to be >40 μm). The first seed point is then detected as the one closest to the mean value. Starting from this seed q_{seed}, we iteratively remove outliers from RPE_{upper} (points where \|q_{seed} - q_{new}\| > 70 μm). Analogously, outliers from the last third of the B-scan are removed. The resulting point set of one B-scan is shown by the blue points in Fig. 2(d). The points removed from RPE_{upper} roughly estimate the ONH region, as well as the BMO area (BMOA). Note that the ILM can have a very complex topology here, whereas other retinal layers are missing in this area. We create a mask of the ONH, A_{ONH}, from the removed RPE_{upper} part by fitting an ellipse to its contour.
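A minimal sketch of the seed-based outlier removal on a single boundary trace, under the thresholds stated above (40 μm for gradient outliers, 70 μm for the jump to a new point); the helper name and the exact walking scheme are our own simplification:

```python
import numpy as np

def remove_outliers(z, jump=70.0, grad_thresh=40.0):
    """Seed-based outlier removal on one upper-RPE boundary trace.
    z: axial positions (um) of candidate points along a B-scan.
    Returns a boolean mask of the points kept."""
    g = np.abs(np.diff(z))
    ok = g < grad_thresh                      # gradient segments without jumps
    ref = z[:-1][ok].mean() if ok.any() else z.mean()
    seed = int(np.argmin(np.abs(z - ref)))    # point closest to the mean
    keep = np.zeros(len(z), dtype=bool)
    keep[seed] = True
    last = z[seed]
    for i in range(seed + 1, len(z)):         # walk right, skipping jumps
        if abs(z[i] - last) < jump:
            keep[i] = True
            last = z[i]
    last = z[seed]
    for i in range(seed - 1, -1, -1):         # walk left
        if abs(z[i] - last) < jump:
            keep[i] = True
            last = z[i]
    return keep
```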

2.2.3.

RPE lower boundary detection

The final step consists of the RPE lower boundary detection, RPE_{lower}. We take the points with the largest negative gradient below RPE_{upper} and closest to RPE_{upper} (i.e., if several minimum points have similar values, the point with the smallest distance to the corresponding RPE_{upper} is taken). Using only the maximum gradient values leads to spurious points along each surface. These errors are corrected by applying a cubic smoothing spline with a high smoothing parameter. Note that in the presence of blood vessels, large regions of missing coordinates for the RPE might occur, and the cubic spline can deviate from the desired smooth contour.
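As a sketch, the spurious maximum-gradient detections can be suppressed with SciPy's cubic smoothing spline; the smoothing factor used here is an illustrative assumption, not the paper's value:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_boundary(x, z, s=None):
    """Fit a cubic smoothing spline to the detected RPE lower-boundary points
    of one B-scan; a large smoothing factor s suppresses spurious detections."""
    if s is None:
        s = 50.0 * len(x)                 # high smoothing factor (assumption)
    spline = UnivariateSpline(x, z, k=3, s=s)
    return spline(x)
```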

2.2.4.

TPS fitting to the RPE lower boundary surface

Finally, to account for motion artifacts in consecutive B-scans, but also for the natural curvature of the retina, we propose an efficient two-stage thin-plate spline (TPS) fitting, which improves the approach proposed in Ref. 33 without making use of the orthogonal scans used in the work of Ref. 34. First, a TPS least-squares approximation is performed. The number of control points used is determined by the size of the surface along each axial dimension. At this stage, the number is set to 25% of the grid points in the slow scan direction and 15% in the fast scan direction, with the control points evenly distributed along each direction. This enables the TPS, in combination with a smoothing parameter α (see Ref. 35) set to 0.85, to create a smoother surface, which keeps the curvature of the retina without being influenced by motion artifacts, especially along the slow axis. In our experiments, we found that values of α = [0.70, 0.85] provide consistent results. Extreme grid points of the original surface, defined as points deviating from the TPS surface by more than the mean plus one standard deviation within local nonoverlapping neighborhoods of 10 × 10 grid points, are removed. Then, the actual TPS fitting, similar to Ref. 34, is applied. The choice of parameters at this second step is strongly influenced by the fact that we reduced outliers at the previous stage. Specifically, for grid points, we use 20% in the slow scan direction and 10% in the fast scan direction, with smoothing parameter 0.45. Consistent results were obtained for α = [0.30, 0.50]. Our strategy is to obtain a TPS closer to the data in the grid points while smoothing the artifacts present in the positions of the detected RPE_{lower} points, especially in the presence of blood vessels or in the close vicinity of the approximated ONH region. Both stages are performed on RPE_{lower} without including A_{ONH}. Figure 3(a) shows the original RPE surface with typical artifacts in the between-B-scan direction. These are corrected after applying our TPS approach while keeping the shape of the original surface. The result is presented in Fig. 3(b). The result of the RPE lower boundary after performing the TPS is shown in Fig. 2(e).
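One TPS stage can be illustrated with SciPy's `RBFInterpolator`, which supports a smoothed thin-plate spline. Note that its `smoothing` argument is only loosely analogous to the paper's α, and the single control-point fraction used here is a simplification of the evenly distributed per-direction percentages described above:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_stage(xy, z, frac=0.15, smoothing=0.85):
    """One smoothed thin-plate-spline stage: fit the height field z(x, y) on a
    subsampled set of control points, then evaluate on all points."""
    pts = np.asarray(xy, dtype=float)
    vals = np.asarray(z, dtype=float)
    step = max(1, int(round(1.0 / frac)))     # keep roughly `frac` of points
    tps = RBFInterpolator(pts[::step], vals[::step],
                          kernel='thin_plate_spline', smoothing=smoothing)
    return tps(pts)
```

A second call with a smaller `frac` and lower `smoothing` would mimic the second, closer-to-the-data stage; removing grid points that deviate strongly from the first stage's output happens between the two calls.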

Fig. 3

(a) RPE lower boundary surface with motion artifacts. (b) Resulting TPS fitted surface. The proposed two-stage TPS is capable of reducing the motion and segmentation artifacts seen in the original RPE lower boundary and creating a smoothed surface while still keeping the shape of the retina.


2.3.

BMO Points Computation

The BMO is the termination of the Bruch’s membrane (BM) layer and was proposed as a stable zero reference plane for ONH quantification.14 It is a key structure for the computation of ONH shape parameters. A challenge in BMO detection is the correct identification of these points, especially in the presence of shadows caused by blood vessels or of the border tissue of Elschnig,14 a structure similar to the BM. We propose a BMO point segmentation approach that works in the 3-D volume directly, without the use of a 2-D projection image in the xy plane as presented in several previous works.36,37

2.3.1.

Volume flattening

We start by flattening the whole OCT volume. This step, performed in several retinal segmentation algorithms,33,36 refers to the translation of all A-scans such that a chosen boundary in the volume is flat. We choose to align the retina to the smoothed RPElower. The alignment facilitates the volume reduction process, as well as the differentiation of BMO from other tissue.
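Volume flattening amounts to shifting every A-scan by the depth of the reference boundary. A sketch (np.roll wraps around at the volume edges; a real implementation would pad the column instead):

```python
import numpy as np

def flatten_volume(vol, rpe_z):
    """Shift every A-scan so the smoothed RPE lower boundary lies at one depth.
    vol: (B, Z, A) volume; rpe_z[b, a]: RPE depth in pixels for A-scan a of
    B-scan b. np.roll wraps around; real code would pad the column instead."""
    ref = int(np.round(rpe_z.mean()))         # common reference depth
    out = np.zeros_like(vol)
    B, _, A = vol.shape
    for b in range(B):
        for a in range(A):
            shift = ref - int(np.round(rpe_z[b, a]))
            out[b, :, a] = np.roll(vol[b, :, a], shift)
    return out
```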

2.3.2.

Volume reduction and vessel suppression

We reduce the OCT volume to a region comprising only the BMO. Depending on the ONH region fitted by the ellipse and the position of RPE_{lower}, the reduced volume can shrink from 384 × 496 × 145 voxels in its original size to, e.g., 384 × 90 × 52 voxels, by taking only the B-scans containing the ellipse and, in the z direction, RPE_{lower} ± 100 μm. Vessels appear in the RPE layer as dark intensity regions or shadows. These affect the detection of the BMO, especially because vessels gather at the ONH, creating large shadows. Our approach focuses on emphasizing the RPE layer and suppressing these artifacts. To this end, we first smooth the reduced OCT volume containing the original gray values with an anisotropic diffusion filter38 and then apply a 2-D Morlet wavelet filter39 to each B-scan to enhance the RPE line.
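To illustrate the RPE-enhancement step, the following sketch builds a real-valued 2-D Morlet-type kernel oscillating across depth; σ, k0, and the kernel size are illustrative choices, not the parameters of Ref. 39:

```python
import numpy as np
from scipy.ndimage import convolve

def morlet_kernel_2d(sigma=3.0, k0=2.0, size=15):
    """Real-valued 2-D Morlet-type kernel oscillating across depth (z), which
    responds strongly to a bright horizontal band such as the RPE."""
    ax = np.arange(size) - size // 2
    zz, xx = np.meshgrid(ax, ax, indexing='ij')
    envelope = np.exp(-(xx**2 + zz**2) / (2.0 * sigma**2))
    carrier = np.cos(k0 * zz / sigma)         # oscillation across depth
    psi = envelope * carrier
    return psi - psi.mean()                   # zero mean (admissibility)

def enhance_rpe(bscan):
    return convolve(bscan.astype(float), morlet_kernel_2d(), mode='nearest')
```

The zero-mean, band-pass nature of the kernel suppresses slowly varying background while amplifying a horizontal bright layer at the carrier's scale.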

2.3.3.

BMO points detection

The end points of the rough ONH area, A_{ONH}, provide the starting points for the BMO point detection. On each B-scan, the starting points are updated with new BMO point candidates if these meet the following conditions: (1) they have a minimum value in the 2-D Morlet-filtered image, (2) d(p_{new}, p_{BMO-seed}) < 30 μm, and (3) the curvature in a neighboring region of five voxels is almost 0, to avoid including the border tissue of Elschnig. In case the BMO points detected in the left and right parts of one B-scan overlap, the BMO starting or end regions previously defined by A_{ONH} are updated accordingly. An example of a pair of (left and right) detected BMO points is shown in Fig. 2(f).
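Conditions (1) and (2) of the candidate update can be sketched as follows for one B-scan; the curvature check (3) is omitted, and the function name and pixel-scale handling are our own simplification:

```python
import numpy as np

def update_bmo_candidate(seed_x, candidates_x, response, max_dist_um=30.0,
                         px_um=12.6):
    """Pick the candidate A-scan position with minimum Morlet response that
    stays within 30 um of the previous (seed) BMO position in this B-scan.
    response[x]: Morlet filter value at the RPE depth of A-scan x."""
    best, best_val = seed_x, np.inf
    for x in candidates_x:
        if abs(x - seed_x) * px_um < max_dist_um:   # condition (2)
            if response[x] < best_val:              # condition (1)
                best, best_val = x, response[x]
    return best
```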

2.4.

ILM Surface Computation

The ILM separates the retina from the vitreous body and defines a critical boundary layer for the ONH. Several ILM segmentation methods have been published to extract the ILM layer around the ONH region; most of them compute the ILM layer as a graph function and are unable to capture the complex and variable topological structures of the ONH. To compute the ILM surface, we use a method previously developed by Gawlik et al.40 This method is based on an active contour method of Chan–Vese type and produces a truly 3-D ILM segmentation, unlike other state-of-the-art methods. Figure 4(a) shows the ILM surface, which is computed using the marching cubes algorithm,41 where the input level sets are computed using the method proposed by Gawlik et al.40 The lower right corner of Fig. 4(a) shows that the approach is capable of reconstructing a complex topological structure in the ONH region, and the marching cubes algorithm produces a manifold ILM surface M_{ILM} with proper neighborhoods and properly oriented face normals.

Fig. 4

(a) Noisy ILM surface. (b) Smooth ILM surface. (c) Smooth ILM and RPE surfaces with BMO. The ILM surface is computed using the marching cubes algorithm and the level sets produced by the method of Ref. 40.


2.5.

ILM Surface Smoothing

During data acquisition of the ONH region using the 3-D OCT scanner, noise is inevitable due to various internal and external factors. As can be seen from Fig. 4(a), the ILM surface is not smooth and has various noise components. Staircase artifacts, an effect of the marching cubes algorithm and of the volume scan resolution, are also visible in the left corner of Fig. 4(a). For an accurate shape parameter computation, these artifacts, including the noise components, should be removed at an early stage of the algorithm. To compute a noise-free and high-fidelity ILM surface, we use a robust mesh denoising algorithm proposed by Yadav et al.42 In general, mesh denoising algorithms are divided into two categories: isotropic and anisotropic methods. Isotropic methods remove noise effectively but produce a volume shrinkage, which leads to an incorrect shape analysis of the ONH. Anisotropic methods are feature-preserving mesh denoising algorithms and induce a small volume shrinkage compared to the isotropic methods. In the case of the ILM surface, however, the anisotropic methods treat the marching cubes artifacts as features, which again leads to an incorrect shape analysis. The method in Ref. 42 is a combination of isotropic and anisotropic methods and produces a high-fidelity smooth ILM surface without staircase artifacts and with minimal volume shrinkage, as shown in Fig. 4(b). This method also produces a high-quality triangular mesh with proper face normal orientation, which is vital for the further shape analysis of the ONH.

2.6.

Ellipse Fitting to BMO Points

As presented in Sec. 2.3, the proposed algorithm computes the BMO points automatically. Due to blood vessels around the ONH, noise components, and 3-D OCT scan patterns, the BMO points are nonuniform and noisy, as shown in Fig. 5(a). To remove these artifacts, we fit an ellipse to the BMO points in the XY-plane. First of all, the BMO points P are projected onto the XY-plane, yielding the point set P_{2D} = \{\tilde{p}_i \in \mathbb{R}^2 \mid i = 0, \dots, n_p - 1\}. Then, we apply the method of Ref. 43 to compute a fitted ellipse to the BMO points in \mathbb{R}^2. Another key parameter in the ONH shape analysis is the center of the BMO points. This is computed as the barycenter of all points in P_{2D}:

(4)

\tilde{p}_c = \frac{1}{n_p} \sum_{\tilde{p}_i \in P_{2D}} \tilde{p}_i,

where \tilde{p}_c \in \mathbb{R}^2. Figure 5 shows that the ellipse fitting not only removes the noise but also makes the data points uniformly distributed.
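For illustration, a direct least-squares conic fit in the spirit of Fitzgibbon et al. (we do not know whether Ref. 43 uses exactly this formulation) recovers the ellipse center, alongside the barycenter of Eq. (4):

```python
import numpy as np

def fit_ellipse_center(p2d):
    """Direct least-squares conic fit plus the barycenter of Eq. (4).
    p2d: (n, 2) array of projected BMO points."""
    x, y = p2d[:, 0], p2d[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D
    C = np.zeros((6, 6))                     # ellipse constraint 4AC - B^2 = 1
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    w, V = np.linalg.eig(np.linalg.solve(S, C))
    a = np.real(V[:, np.argmax(np.real(w))]) # the unique positive eigenvalue
    A, B, Cc, Dd, E, _ = a
    # center of the conic A x^2 + B xy + C y^2 + D x + E y + F = 0
    center = np.linalg.solve([[2 * A, B], [B, 2 * Cc]], [-Dd, -E])
    barycenter = p2d.mean(axis=0)            # Eq. (4)
    return center, barycenter
```

Sampling the fitted ellipse at uniform parameter values then yields the smooth, uniformly distributed BMO points shown in Fig. 5(b).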

Fig. 5

(a) Noisy and irregular BMO points, computed automatically. (b) Smooth and uniformly distributed BMO points, computed using the ellipse fit.


2.7.

Correspondence between ILM and RPE Surfaces

The total retina at the ONH region is delimited by the ILM and the RPE. For further analysis, it is necessary to find the corresponding points between these two surfaces. In the proposed method, we compute the vertices in the RPE surface corresponding to each face (f_i \in F_{ILM} \mid i = 0, \dots, m_{ILM} - 1) of the ILM surface. In general, the RPE surface, represented here as a function graph M_{RPE}: \mathbb{R}^2 \to \mathbb{R}, has a less complex structure compared to the ILM. In the OCT scanner, the number of A-scans (x direction) and the number of B-scans (y direction) are fixed, which creates a regular XY-grid as a domain for the RPE graph function. Therefore, the index of each vertex of the RPE surface can be computed using the number of x-lines (vertical lines) and y-lines (horizontal lines) and the sampling sizes in both directions, denoted by \epsilon_x and \epsilon_y, respectively. The numbers of x-lines and y-lines are computed using the following equation:

(5)

x_{line} = \frac{x_{max} - x_{min}}{\epsilon_x} + 1, \quad y_{line} = \frac{y_{max} - y_{min}}{\epsilon_y} + 1,
where x_{max}, x_{min}, y_{max}, and y_{min} are the bounding values of the RPE surface in the x and y directions. For each face f_i \in F_{ILM}, the corresponding vertex of the RPE surface in the XY-plane is computed, which approximates the position of the corresponding A-scan and B-scan in the volume scan. Let c_i represent the centroid of the face f_i. To compute the corresponding vertex in the RPE surface, we project the face f_i onto the XY-plane; \tilde{c}_i represents the projected centroid, and \tilde{c}_x^i and \tilde{c}_y^i are its x and y coordinates. We compute the x-index (i_x), the y-index (i_y), and the vertex index (i) using x_{line} and y_{line} in the RPE surface for the face f_i using the following equation:

(6)

i_x = \left\lceil \frac{\tilde{c}_x^i - x_{min} + \epsilon_x/2}{\epsilon_x} \right\rceil, \quad i_y = \left\lceil \frac{\tilde{c}_y^i - y_{min} + \epsilon_y/2}{\epsilon_y} \right\rceil, \quad i = i_x + i_y \cdot x_{line},
where \lceil \cdot \rceil represents the ceiling function and i denotes the corresponding vertex in the RPE surface. For an accurate computation, we check the neighborhood of vertex i of the RPE surface, and the corresponding vertex is computed as follows:

(7)

\tilde{v}_i = \arg\min_{\tilde{v}_j \in \Omega_i} \|\tilde{c}_i - \tilde{v}_j\|,

where \Omega_i represents the 3 × 3 neighborhood (in the XY-plane) of vertex i and \tilde{v}_j is the projection of vertex v_j \in V_{RPE} onto the XY-plane. Finally, we get the set C = \{v_i \in \mathbb{R}^3 \mid i = 0, \dots, m_{ILM} - 1\}, which represents the set of RPE surface vertices corresponding to the faces in F_{ILM}. A visual representation of the correspondence computation is shown in Fig. 6, where the RPE’s regular XY-grid is shown in blue and the projected ILM vertices and edges are painted in red.
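Equations (6) and (7) can be sketched as follows; we use rounding to the nearest grid vertex, which matches the ceil-plus-half-step indexing of Eq. (6) up to the indexing origin, and all names are illustrative:

```python
import numpy as np

def corresponding_vertex(c_xy, grid_min, eps, grid_shape, rpe_xy):
    """Snap a projected ILM face centroid to the nearest vertex of the regular
    RPE grid, then refine over the 3x3 grid neighborhood (Eq. (7)).
    rpe_xy: (n_x * n_y, 2) projected RPE vertices, index j = jx + jy * n_x."""
    nx, ny = grid_shape
    ix = int(round((c_xy[0] - grid_min[0]) / eps[0]))   # cf. Eq. (6)
    iy = int(round((c_xy[1] - grid_min[1]) / eps[1]))
    ix, iy = min(max(ix, 0), nx - 1), min(max(iy, 0), ny - 1)
    best, best_d = None, np.inf
    for jy in range(max(iy - 1, 0), min(iy + 2, ny)):
        for jx in range(max(ix - 1, 0), min(ix + 2, nx)):
            j = jx + jy * nx                            # vertex index
            d = np.hypot(*(rpe_xy[j] - c_xy))
            if d < best_d:
                best, best_d = j, d
    return best
```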

Fig. 6

Correspondence between the ILM and RPE surfaces; the face f_i \in F_{ILM} (green triangle) has a corresponding vertex v_i \in V_{RPE} in the RPE surface.


2.8.

BMO Region Segmentation

For the ONH shape analysis, the region inside the BMO points is of special interest, since the BMO points represent the optic disc margin. To segment this region, we exploit the elliptic representation of the BMO points in \mathbb{R}^2 along with their center, as defined in Eq. (4). First, we compute the centers of the ILM and the RPE surfaces corresponding to the center \tilde{p}_c of the BMO points. Let \tilde{V}_{ILM} and \tilde{V}_{RPE} represent the sets of vertices projected onto the XY-plane. Then, the indices of the centers of both surfaces can be computed as follows:

(8)

ilm_c = \arg\min_{j \in \{0, \dots, n_{ILM} - 1\}} \|\tilde{v}_j^{ILM} - \tilde{p}_c\|, \quad rpe_c = \arg\min_{j \in \{0, \dots, n_{RPE} - 1\}} \|\tilde{v}_j^{RPE} - \tilde{p}_c\|,

where \tilde{v}_j^{ILM} \in \tilde{V}_{ILM} and \tilde{v}_j^{RPE} \in \tilde{V}_{RPE}. The vertices v_{ilm_c} \in V_{ILM} and v_{rpe_c} \in V_{RPE} represent the centers of the ILM and the RPE surfaces, respectively.

To compute the BMO regions in the ILM and the RPE surfaces, we transform the ellipse into a circle using an affine transform. Let \tilde{p}_i^e represent the fitted ellipse point corresponding to \tilde{p}_i, as mentioned in Sec. 2.6, and let \tilde{p}_i^c denote the corresponding affine-transformed point on a circle of radius r. This transformation reduces the complexity of the BMO region computation and improves the speed of the algorithm. Now, using the circular representation of the BMO points, the BMO regions in both the RPE and ILM surfaces are computed as follows:

(9)

\Omega_{ILM} = \{f_j \in F_{ILM} \mid \|\tilde{c}_j^{ILM} - \tilde{v}_{ilm_c}\| \le r\}, \quad \Omega_{RPE} = \{f_j \in F_{RPE} \mid \|\tilde{c}_j^{RPE} - \tilde{v}_{rpe_c}\| \le r\},

where r is the radius of the transformed circle and \tilde{c}_j^{ILM}, \tilde{c}_j^{RPE} represent the centroids of faces in the ILM and the RPE surfaces, respectively. The computation shown in Eq. (9) is done using a disk-growing method.

2.9.

3-D Shape Parameters

2.9.1.

ONH cup volume

The ONH cup is defined as the segment of the ILM surface that lies below the RPE surface, as shown in Figs. 7(a) and 7(b). Note that the cup is not present in every ONH volume scan; in most swollen ONH scans, for example, the ILM surface lies entirely above the RPE. To detect the presence of the ONH cup, we visit each face f_i \in F_{ILM} and compute its centroid c_i. As discussed in Sec. 2.7, each face f_i of the ILM has a corresponding vertex v_i \in C in the RPE. If c_i^z - v_i^z \ge 0 for all faces of the ILM, then there is no cup in the ONH region; otherwise, there is a cup. The terms c_i^z and v_i^z denote the corresponding z-coordinates (heights). Similarly, we can compute the cup region:

(10)

\Omega_{cup} = \{f_i \in F_{ILM} \mid (c_i^z - v_i^z) \le 0\},
where \Omega_{cup} consists of the faces (triangles) of the ILM that are below the RPE surface. As can be seen from Fig. 7(b), the cup region is also a manifold surface with proper face normal orientation. To compute the volume of the cup accurately, we exploit the face normal information at each triangle of the region. The cup volume is computed using the following equation:

(11)

V_{cup} = \sum_{f_i \in \Omega_{cup}} A_i h_i,
where A_i represents the area of a triangle, which is the projection of the face f_i of the ILM surface onto the XY-plane, and h_i is the height with respect to the RPE surface. These variables are defined as follows:

(12)

A_i = \frac{1}{2}(e_0 \times e_1), \quad h_i = c_i^z - v_i^z,
where e_0 and e_1 are the connected edges of the projected triangle. The cross product between the two edges takes care of the orientation of the corresponding face and enables a precise volume computation even in complex topological regions.
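A sketch of the cup detection and volume accumulation of Eqs. (10)–(12); for simplicity we use unsigned projected areas and assume z increases away from the RPE, whereas the paper relies on oriented face normals to handle complex topology:

```python
import numpy as np

def cup_volume(V_ilm, F_ilm, z_rpe_of_face):
    """Sum the prism volumes A_i * h_i over ILM faces lying below the RPE.
    z_rpe_of_face[i] is the z of the RPE vertex matched to face i (Sec. 2.7);
    here z increases away from the RPE, so 'below' means centroid z < RPE z."""
    vol = 0.0
    for i, f in enumerate(F_ilm):
        tri = V_ilm[f]                       # 3 x 3: the face's vertices
        cz = tri[:, 2].mean()                # centroid height c_i^z
        h = z_rpe_of_face[i] - cz            # depth below the RPE vertex
        if h <= 0:
            continue                         # face is not part of the cup
        e0 = tri[1, :2] - tri[0, :2]         # projected edges in the XY-plane
        e1 = tri[2, :2] - tri[0, :2]
        A = 0.5 * abs(e0[0] * e1[1] - e0[1] * e1[0])  # projected area
        vol += A * h
    return vol
```

The BMO region, total, and annular volumes of the following subsections accumulate the same A_i h_i terms over their respective face sets.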

Fig. 7

A visual representation of the different morphological parameters. (a) BMO region, (b) cup volume Vcup, (c) central ONH thickness (CONHT), and (d) bending energy Eb.


2.9.2.

Central ONH thickness

The CONHT is defined as the height difference between the centers of the ILM and the RPE surfaces, as shown in Fig. 7(c). The CONHT of an ONH volume scan is computed as follows:

(13)

CONHT = v_{ilm_c}^z - v_{rpe_c}^z,

where v_{ilm_c}^z and v_{rpe_c}^z denote the corresponding height values (z-coordinates) of the center vertices v_{ilm_c} and v_{rpe_c}, respectively, introduced in Sec. 2.8.

2.9.3.

BMO region volume

The BMO region volume is computed using the segmented ILM and RPE surfaces. We separate the cup region from the BMO region so that the BMO region volume does not include the cup volume, if one exists. The BMO region can then be computed as follows:

(14)

\Omega_{BMO} = \Omega_{ILM} \setminus \Omega_{cup}.

Similar to the ONH cup volume, the BMO region volume is computed using an analogous formula:

(15)

V_{BMO} = \sum_{f_i \in \Omega_{BMO}} A_i h_i,

where A_i is the area of the face f_i belonging to the set \Omega_{BMO}, computed using Eq. (12). Similarly, h_i is computed using the corresponding vertices in the RPE surface, as in Eq. (12).

2.9.4.

ONH total volume

Similar to the ONH cup and BMO region volumes, the ONH total volume is computed using the ILM and RPE surfaces. The total volume is computed over the circular regions of radius 1.5 mm centered at vilmc and vrpec on the ILM and RPE surfaces, respectively, as shown in Fig. 8(a). Similar to Eq. (9), these circular regions are defined as follows:

(16)

\Omega^{\text{ILM}}_{1.5\,\text{mm}} = \{\, f_j \in F^{\text{ILM}} \mid |\tilde{c}^{\,\text{ILM}}_j - \tilde{v}_{\mathrm{ilm}_c}| \le 1.5\ \text{mm} \,\},
\qquad
\Omega^{\text{RPE}}_{1.5\,\text{mm}} = \{\, f_j \in F^{\text{RPE}} \mid |\tilde{c}^{\,\text{RPE}}_j - \tilde{v}_{\mathrm{rpe}_c}| \le 1.5\ \text{mm} \,\},
where Ω_{1.5 mm}^{ILM} and Ω_{1.5 mm}^{RPE} are the sets of all faces of the ILM and RPE surfaces lying within 1.5 mm of the respective centers. The remaining symbols were defined in Eq. (9). The total volume region is then obtained from the sets of Eq. (16):

(17)

\Omega_{\text{tv}} = \Omega^{\text{ILM}}_{1.5\,\text{mm}} \setminus \Omega_{\text{cup}},
where Ωtv represents the total volume region on the ILM surface, as shown in Fig. 8(b), which is further employed for the total volume computation:

(18)

V_{\text{tv}} = \sum_{f_i \in \Omega_{\text{tv}}} A_i h_i,
where Ai is the area of the face fi ∈ Ωtv and hi is the height with respect to the corresponding vertex on the RPE surface, as in Eq. (12).
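The face selection of Eq. (16) and the masked volume sums of Eqs. (17) and (18) reduce to a radius test on face centroids followed by a sum over the selected faces. A sketch, assuming the per-face projected areas and heights have been precomputed as in Eq. (12); the helper names are hypothetical:

```python
import numpy as np

def faces_within_radius(centroids_xy, center_xy, radius_mm=1.5):
    """Boolean mask over faces whose XY centroid lies within `radius_mm`
    of the center, mirroring the set definitions of Eq. (16)."""
    d = np.linalg.norm(centroids_xy - center_xy, axis=1)
    return d <= radius_mm

def region_volume(face_areas_xy, face_heights, mask):
    """Volume as the sum A_i * h_i over the selected faces (Eqs. 11/18)."""
    return float(np.sum(face_areas_xy[mask] * face_heights[mask]))
```

The set differences of Eqs. (14), (17), and (19) then amount to combining such masks with logical operations, e.g. `mask_tv = mask_1p5mm & ~mask_cup`.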

Fig. 8

A visual representation of the total and annular volume regions. (a) 1.5-mm regions of the ILM (Ω_{1.5 mm}^{ILM}) and RPE (Ω_{1.5 mm}^{RPE}) surfaces from the BMO center. (b) Total volume (Vtv) between the ILM and RPE surfaces within the 1.5-mm radius region. (c) Annular volume (Vav), defined as the volume between the ILM and RPE surfaces after removing the BMO region.


2.9.5.

ONH annular region volume

The ONH annular region represents the outer region of the ONH; see Fig. 8(c). On the ILM surface, this region is given by the following equation:

(19)

\Omega_{\text{av}} = \Omega^{\text{ILM}}_{1.5\,\text{mm}} \setminus \Omega_{\text{BMO}},
where Ωav consists of all ILM surface faces belonging to the annular region of the ONH. Similar to Eq. (18), we compute the volume of the annular region using the correspondence between the ILM and RPE surfaces:

(20)

V_{\text{av}} = \sum_{f_i \in \Omega_{\text{av}}} A_i h_i.

The annular region volume makes it possible to track changes in the outer region of the ONH across different cohorts.

2.9.6.

Bending energy

The roughness of the ILM surface within the BMO region is an important parameter, commonly known as the bending energy of a manifold surface. The bending energy measures the fairness of a surface in terms of its curvature. In general, the outer region of the ILM surface is quite smooth and flat, unlike the region inside the BMO, which has a very complex topological structure. In this paper, we define the bending energy within the BMO region using the element-based normal voting tensor (ENVT).44 The ENVT exploits the orientation information (face normals) to compute a shape analysis operator at each face fi ∈ ΩBMO and is defined as follows:

(21)

M_i = \frac{1}{\sum_{f_j \in \Omega_{\text{BMO}}} a_j} \sum_{f_j \in \Omega_{\text{BMO}}} a_j\, n_j n_j^T,
where nj is the normal of face fj, njT its transpose, and aj the area of face fj. Weighting Eq. (21) by the face areas aj ensures robustness against irregular sampling of the ILM surface. The ENVT Mi is a symmetric and positive semidefinite matrix, so it can be decomposed into its spectral components:

(22)

M_i = \sum_{k=0}^{2} \lambda_k^i\, e_k e_k^T,
where λ0i ≥ λ1i ≥ λ2i ≥ 0 are the eigenvalues, sorted in decreasing order, and ek denotes the corresponding eigenvectors. In general, the eigenvector of the dominant eigenvalue λ0i points in the direction of the face normal, and the remaining two eigenvectors are aligned with the principal curvature directions on the ILM surface. In planar regions, only λ0i is significant; along edges, λ0i and λ1i are significant; and at corners, all three eigenvalues are significant. Using the anisotropy of these eigenvalues, we define the bending energy inside the BMO region using the following equation:

(23)

E_b = \sum_{f_i \in \Omega_{\text{BMO}}} (\lambda_1^i + \lambda_2^i),
where λ1i and λ2i are the two least dominant eigenvalues of the ENVT of face fi. Figure 7(d) shows each face of the BMO region colored according to Eq. (23). The color is scaled from white (flat regions) to red (sharp features).
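The eigenvalue-based flatness measure behind Eqs. (21)–(23) can be illustrated as follows. This is a simplified sketch: it builds a single area-weighted voting tensor for the whole region and sums its two smallest eigenvalues, rather than computing one ENVT per face as in the method itself; the function name is hypothetical:

```python
import numpy as np

def bending_energy(normals, areas):
    """Eigenvalue-based roughness from an area-weighted normal voting tensor.

    normals: (n, 3) unit face normals of the region.
    areas:   (n,) face areas, used as weights (robustness to irregular sampling).
    """
    w = areas / areas.sum()
    # M = sum_j w_j * n_j n_j^T  -- a symmetric positive semidefinite 3x3 matrix.
    M = np.einsum('j,ji,jk->ik', w, normals, normals)
    # eigvalsh returns eigenvalues in ascending order:
    # eig[0] <= eig[1] <= eig[2], i.e., lambda_2 <= lambda_1 <= lambda_0.
    eig = np.sort(np.linalg.eigvalsh(M))
    # Flat region: only the dominant eigenvalue is significant, so the
    # sum of the two least dominant eigenvalues is close to zero.
    return float(eig[0] + eig[1])
```

For a perfectly flat patch (all normals parallel) the result is zero; the more the normals spread out over curved or sharp features, the larger the two subordinate eigenvalues, and hence the energy, become.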

2.9.7.

BMO-MRW

BMO-MRW was proposed in Ref. 14 as a valid alternative structural measure. It is the minimum distance between the BMO points and the ILM surface. The average BMO-MRW, denoted avgMRW, is calculated as follows:

(24)

\text{avg}_{\text{MRW}} = \frac{1}{n_p} \sum_{i=1}^{n_p} |p_i^{\text{mrw}} - p_i^e|,

where the points $p_i^e$ and $p_i^{\text{mrw}}$ are defined in Sec. 2.9.8.

2.9.8.

BMO-MRW surface area

The BMO-MRW surface area, BMO-MRA, is computed over the whole region spanned by all BMO-MRW distances. We combine the ellipse-fitted BMO points P2D with the z-coordinates from P and represent these as $P^e = \{p_i^e \in \mathbb{R}^3 \mid i = 0, \ldots, n_p - 1\}$, shown as green points in Fig. 9. For each point $p_i^e \in P^e$, we compute a point $p_i^{\text{mrw}}$ on the ILM surface:

(25)

p_i^{\text{mrw}} = \underset{v_j \in V^{\text{ilm}}}{\arg\min}\; |v_j - p_i^e|.

Fig. 9

BMO-MRW surface with and without ILM. (a) BMO-MRW surface with ILM surface and (b) BMO-MRW surface.


The MRW points $P^{\text{mrw}} = \{p_i^{\text{mrw}} \in \mathbb{R}^3 \mid i = 0, \ldots, n_p - 1\}$ lie on the ILM surface, shown as violet points in Fig. 9(a). We create a quad surface from the point sets $P^e$ and $P^{\text{mrw}}$ by introducing edges between corresponding vertices in both sets and connecting neighboring points, as shown in Fig. 9(b). The number of quad elements in the MRW surface equals the number of points in each point set; the quads are represented as $Q = \{q_i \mid i = 0, \ldots, n_p - 1\}$. The BMO-MRA is computed as follows:

(26)

A_{\text{mrw}} = \sum_{q_i \in Q} a_{q_i},

where $a_{q_i}$ is the area of the quad $q_i$.
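The nearest-vertex search of Eq. (25), the average of Eq. (24), and the quad-area sum of Eq. (26) can be sketched as follows. The helper names are hypothetical; each quad is approximated by two triangles, and the BMO ring is assumed to be closed:

```python
import numpy as np

def mrw_points(bmo_pts, ilm_verts):
    """For each BMO point p_i^e, return the nearest ILM vertex (Eq. 25)."""
    # Pairwise distance matrix (n_p, n_v); acceptable for typical mesh sizes.
    d = np.linalg.norm(bmo_pts[:, None, :] - ilm_verts[None, :, :], axis=2)
    return ilm_verts[np.argmin(d, axis=1)]

def avg_mrw(bmo_pts, mrw_pts):
    """Average minimum rim width (Eq. 24)."""
    return float(np.mean(np.linalg.norm(mrw_pts - bmo_pts, axis=1)))

def mrw_surface_area(bmo_pts, mrw_pts):
    """Area of the ruled surface between the BMO ring and the MRW ring
    (Eq. 26): each quad (p_i^e, p_i^mrw, p_{i+1}^mrw, p_{i+1}^e) is split
    into two triangles whose areas are summed."""
    n = len(bmo_pts)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n  # close the ring
        a, b, c, d = bmo_pts[i], mrw_pts[i], mrw_pts[j], bmo_pts[j]
        area += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        area += 0.5 * np.linalg.norm(np.cross(c - a, d - a))
    return float(area)
```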

2.9.9.

BMO area

As shown in Fig. 5, BMOA represents the area of the ellipse fitted to the BMO points and is computed from the conic representation:

(27)

A_{\text{BMO}} = \pi r_1 r_2,

where r1 and r2 are the semi-major and semi-minor axis lengths of the fitted ellipse.

3.

Experiments and Results

To evaluate our method, we performed repeated-measurement reliability tests, investigated ONH shape in healthy subjects, and tested whether our method can detect differences in patients with diseases known to affect the ONH through swelling or atrophy. Finally, we measured the implementation's performance.

3.1.

Repeated-Measurement Reliability

To estimate the repeated-measurement reliability, we took three repeated scans of each eye from 10 healthy subjects. Each subject was scanned within one week and again in the following week. Table 1 shows the repeatability results. Our method scores highly on every parameter presented, with the lowest intraclass correlation coefficient (ICC) of 0.905 for CONHT and the highest of 0.998 for Vcup. The ICCs and confidence intervals were estimated using the variance components from a one-way ANOVA.
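A one-way ANOVA ICC of the kind used here can be computed as in the following sketch (ICC(1,1) from between- and within-subject mean squares; this is an illustrative implementation, not the exact statistical code used for Table 1):

```python
import numpy as np

def icc_oneway(scores):
    """ICC(1,1) from one-way ANOVA variance components.

    scores: (n_subjects, k_repeats) array of repeated measurements.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)
    """
    n, k = scores.shape
    subj_means = scores.mean(axis=1)
    grand = scores.mean()
    # Between-subject mean square, df = n - 1.
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    # Within-subject mean square, df = n * (k - 1).
    msw = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect repeatability (identical repeats per subject) yields an ICC of 1; pure within-subject variation drives it toward its lower bound.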

Table 1

Repeatability test for the 3-D parameters.

Parameter | ICC | LCI | UCI
ONH cup vol. (Vcup) (mm³) | 0.998 | 0.996 | 0.999
Central ONH thickness (CONHT) (mm) | 0.905 | 0.813 | 0.958
BMO region vol. (VBMO) (mm³) | 0.993 | 0.986 | 0.997
ONH total vol. (Vtv) (mm³) | 0.995 | 0.989 | 0.998
ONH annular vol. (Vav) (mm³) | 0.983 | 0.965 | 0.993
Bending energy (Eb) | 0.911 | 0.824 | 0.961
BMO-MRW surface area (AMRW) (mm²) | 0.910 | 0.823 | 0.960
BMO-MRW (dMRW) (mm) | 0.993 | 0.986 | 0.997
BMOA (ABMO) (mm²) | 0.989 | 0.976 | 0.995

Note: LCI, lower boundary of 95% confidence interval and UCI, upper boundary of 95% confidence interval.

We also evaluated our method on several other scan protocols of the same device and obtained positive results: an ONH cube with 73 B-scans (scanning angle 15 deg × 15 deg, 384 A-scans per B-scan, spatial resolution 12.6 μm in the x-direction and 3.9 μm axially, 61-μm distance between B-scans); an ONH star scan with 24 B-scans (scanning angle 15 deg × 15 deg, 768 A-scans per B-scan, spatial resolution 5.36 μm in the x-direction and 3.9 μm axially); and the volumetric ONH-centered protocol acquired using the Cirrus HD-OCT (Carl Zeiss Meditec, Dublin, California), which covers a 6 × 6 × 2 mm³ region with 200 × 200 × 1024 voxels.

3.2.

Validation

We validated the BMO detection and checked the RPE segmentation. Five scans from those used in the repeatability testing were randomly selected. An experienced grader manually selected the BMO points using the RoI tool in ImageJ.45 This resulted in a total of 488 B-scans with manually selected BMO points, matching the number detected automatically. We then compared the mean signed and unsigned errors along the x axis and the axial (z) axis. If the automated method identified the BMO closer to the optic disc center, the sign of the distance in the x-direction was positive; similarly, if the automatically detected BMO was located below the manual BMO, the sign of the distance in the z-direction was positive. Results are shown in Table 2. To our knowledge, there is no other study on BMO segmentation with the device used in this approach; we are therefore unable to relate our results to other published methods, since comparative studies have shown that measurements are not directly comparable between different OCT devices.46 The ILM validation was done in another work of our research group.40 For the RPE computation, an experienced grader visually inspected all scans used in the repeatability testing and assessed them as correctly segmented.
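The error statistics reported in Table 2 follow from simple differencing and scaling. A sketch under the stated sign conventions; the function name, argument layout, and the assumption of one coordinate array per axis are hypothetical:

```python
import numpy as np

def bmo_error_stats(auto_px, manual_px, scale_um):
    """Mean signed and unsigned error between automatic and manual BMO
    positions along one axis, in pixels and micrometers.

    auto_px, manual_px: 1-D arrays of coordinates (pixels) along the axis,
                        oriented so that the paper's sign convention holds
                        (positive toward the disc center for x, downward for z).
    scale_um:           pixel size along that axis in micrometers
                        (e.g., 3.9 for the axial direction).
    """
    signed = auto_px - manual_px
    unsigned = np.abs(signed)
    return {
        "signed_px": float(signed.mean()),
        "unsigned_px": float(unsigned.mean()),
        "signed_um": float(signed.mean() * scale_um),
        "unsigned_um": float(unsigned.mean() * scale_um),
    }
```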

Table 2

Mean unsigned and signed errors, in pixels and μm, along the x and z axes, between the automatic (proposed) and manual segmentation.

Axis | Mean (±SD) unsigned error (pixels) | Mean (±SD) unsigned error (μm) | Mean (±SD) signed error (pixels) | Mean (±SD) signed error (μm)
x axis | 4.9098 (±4.9182) | 61.8635 (±61.9693) | 0.6107 (±6.9261) | 7.6943 (±87.2693)
z axis | 2.8828 (±3.1801) | 12.4024 (±11.2429) | 0.3618 (±4.2790) | −1.4110 (±16.6881)

3.3.

Clinical Evaluation

In this section, we present the results of our automatic pipeline for 248 OCT scans from three groups: 71 healthy control (HC) eyes; 31 eyes of patients suffering from IIH,25 which causes swelling of the ONH (papilledema); and 146 eyes of patients with autoimmune central nervous system disorders (MS and NMOSD) and a history of ON, an inflammatory optic neuropathy that damages the optic nerve, leading to neuroaxonal degeneration.47,48

All participants gave written informed consent according to the 1964 Declaration of Helsinki. The study was approved by the ethics committee of Charité – Universitätsmedizin Berlin.

In IIH patients, the ONH volume is increased and was shown to correlate with cerebrospinal fluid pressure.8 The longitudinal analyses from Ref. 8 revealed that ONH volume measured by OCT decreased after the initial lumbar puncture and initiation of therapy with acetazolamide. Additionally, increased ONH volume was associated with lower visual acuity in IIH patients, which points to the potential clinical relevance of the parameter.

ON is one of the most common initial clinical presentations of MS without any prior history of a demyelinating event. During the course of the disease, acute ON affects between 50% and 70% of MS patients.9 After initial swelling due to edema in the acute phase of ON, RNFL thickness decreases over the following six months.47,49 ON is the first NMOSD-related clinical event in 55% of patients; it causes severe structural damage to the optic nerve and retina with resulting functional impairment. Recurrent ON in NMOSD gives rise to a severely thinned pRNFL and combined ganglion cell and inner plexiform layer.11 Furthermore, within 5 years, 50% of NMO patients are blind in one or both eyes.49 RNFL reduction has been consistently observed in MS patients who never had a clinical episode of ON, as well as in the clinically unaffected fellow eyes of patients with a history of ON.50,51 Similarly, a recent study52 found retinal neuroaxonal damage without ON in NMOSD as well, but longitudinal studies investigating neuroaxonal damage without ON in NMOSD have not been performed extensively.

We hypothesized that patients with IIH would show significant changes in comparison to HCs, indicating ONH swelling, and that patients with a history of ON would show significant changes in comparison to HCs, indicating ONH atrophy. The results presented in Table 3 demonstrate that our approach successfully captures these differences. The only parameter showing no difference between groups is the bending energy, Eb. Although we expected the ONH shape inside the BMO to be smoother and more convex in IIH-affected eyes, the data are extremely heterogeneous, and the bending energy reflects this variability. Especially inside the BMO, topologies can differ extremely from one ONH to the next.

Table 3

Analysis of all 3-D parameters for the HC and patient groups. The last three columns show the GEE analysis between pairs of groups.

Parameter | HC mean (SD) | HC min–max | IIH mean (SD) | IIH min–max | ON mean (SD) | ON min–max | p(HC-IIH) | p(HC-ON) | p(IIH-ON)
ONH cup vol. (Vcup) (mm³) | 0.044 (0.075) | 0.000–0.370 | 0.004 (0.012) | 0.000–0.049 | 0.035 (0.052) | 0.000–0.256 | 0.0012 | 0.353 | 1.95E-07
Central ONH thickness (CONHT) (mm) | 0.002 (0.25) | −0.545–0.550 | 0.403 (0.340) | −0.242–0.966 | 0.012 (0.231) | −0.399–0.668 | 9.91E-06 | 0.747 | 4.22E-06
BMO region vol. (VBMO) (mm³) | 0.572 (0.21) | 0.107–1.035 | 1.521 (1.022) | 0.436–4.3098 | 0.444 (0.189) | 0.105–1.050 | 5.02E-05 | 0.0012 | 2.66E-06
ONH total vol. (Vtv) (mm³) | 2.523 (0.250) | 2.114–2.988 | 3.309 (0.799) | 2.351–5.365 | 2.163 (0.295) | 1.522–3.087 | 2.57E-05 | 5.28E-13 | 6.54E-10
ONH annular vol. (Vav) (mm³) | 1.906 (0.181) | 1.504–2.268 | 1.78 (0.357) | 0.572–2.200 | 1.683 (0.210) | 1.162–2.118 | 0.146 | 2.19E-10 | 0.257
Bending energy (Eb) | 0.026 (0.009) | 0.011–0.065 | 0.029 (0.013) | 0.008–0.066 | 0.029 (0.010) | 0.011–0.066 | 0.477 | 0.038 | 0.793
BMO-MRW surface area (AMRW) (mm²) | 1.791 (0.459) | 0.763–2.789 | 2.975 (1.016) | 1.439–5.351 | 1.414 (0.364) | 0.615–2.765 | 1.35E-06 | 5.44E-06 | 3.94E-11
BMO-MRW (dMRW) | 0.135 (0.054) | 0.035–0.261 | 0.263 (0.105) | 0.127–0.447 | 0.093 (0.040) | 0.030–0.260 | 3.52E-07 | 2.69E-05 | 1.81E-12
BMOA (ABMO) (mm²) | 1.906 (0.352) | 1.158–2.646 | 2.682 (1.109) | 1.057–5.709 | 1.861 (0.377) | 1.030–3.326 | 0.0028 | 0.407 | 0.001

Note: SD, standard deviation; Min, minimum value; Max, maximum value; GEE, generalized estimating equation models analysis accounting for the intereye/intrasubject dependencies; p, p value.

We plan to further investigate these findings (for both processes of swelling and atrophy of the ONH) with the parameters presented, to gain more insight into the pathology of the ONH.

3.4.

Performance

Computation of the whole pipeline for one volume on an Intel(R) Core(TM) i7-4790 (3.60 GHz, 16 GB RAM, 64-bit operating system) takes 2 min on average: computation of the ILM, the RPE lower boundary, and the BMO points takes about 75 s, and the morphological computation another 40 to 45 s.

4.

Discussion and Conclusion

We have proposed an automatic and robust pipeline for the computation of several parameters that characterize ONH shape. In particular, we developed an approach that computes truly 3-D parameters derived from triangulated mesh surfaces of the ILM and RPE, and BMO points.

The proposed method identifies the RPE lower boundary and provides a smooth 2.5-D surface by employing a two-stage TPS fitting that preserves the retina's natural curvature. The approximate ONH region is then detected and the BMO points are identified. The presence of border tissue and blood vessels is handled by several filters that suppress vessel artifacts and enhance the Bruch's membrane termination points. Additionally, the computation time is significantly reduced by performing the BMO point detection only in the approximated ONH region.

Furthermore, the ILM detection, using a Chan–Vese level set approach, provides a 3-D surface that identifies this membrane despite the high heterogeneity and complicated topologies of the ONH. Noise and artifacts in the ILM surface, caused by the acquisition process and the marching cubes algorithm, are corrected by a robust mesh denoising method. The resulting high-fidelity surface preserves features with minimal shrinkage. Since we are interested in detecting even minimal shape changes, obtaining a surface that has no staircase artifacts but still preserves features is crucial for parameter computation.

The same argument applies to the BMO points: we need a robust detection of this structure. Using our custom scan protocol, we have 145 B-scans in one volume, but only a subset of these contains BMO points. Thus, depending on the position of each B-scan and on vessel shadow artifacts, large variations of the BMO points in the XY-plane can occur. To penalize the deviation of individual BMO point coordinates from the elliptic shape of the BMO described in the literature, we fit an ellipse to the initially detected points. Hence, the natural trend of the BMO trajectories in the volume is preserved while the number of sample points is increased.

With these three components (ILM, RPE, and BMO points), our method computes the correspondence between the ILM and RPE surfaces, such that each face of the ILM surface has a corresponding vertex on the RPE. After this step, we have all the ingredients to derive the parameters.

Although the computed parameters, except the bending energy, were already introduced in several previous studies (mostly for the investigation of glaucoma or papilledema), we are the first to provide quantifiable measurements derived in a truly 3-D context. Furthermore, we provide a first proof of applicability in a clinical setting by showing the power of several of the introduced parameters in differentiating between inflammation in the form of swelling and destruction in the form of reduced layer thickness.

To the best of our knowledge, this is the first approach able to provide truly 3-D parameters. Most of the morphometric parameters derived for the characterization of ONH shape17,21,22,24 rely on either manual detection or manual correction of commercial software output, a time-consuming task prone to subjectivity, especially in volume scans consisting of a high number of B-scans. Similarly, the methods of Refs. 30, 31, and 53 performed manual detection of the BMO points and automatic segmentation of the ILM, BM, and choroid layers to compute surface correspondences between multiple eyes.

Future work will include the creation of normative population parameters and statistics using larger cohorts. The goal is to better detect and understand shape changes or differences correlated with neuroinflammatory diseases. Shapes from a large group of eyes will also be made comparable through a common surface (a mean template), and surface correspondence and anatomical changes at different time points will be detected. Since we are interested in small, localized changes that could be clinically important, for example, as indicators of disease or treatment success, we will investigate the derivation of additional morphometric parameters.

There are several limitations. First, the OCT data were not acquired relative to the fovea-BMO axis, because some of our early data were acquired before this concept had been established. OCT devices with wide-field optics capable of imaging the ONH and fovea in the same scan session may be used to address this issue in the future and establish the fovea-BMO axis as a reference axis to eliminate rotational artifacts, which might further increase repeated-measurement reliability, also in the context of longitudinal measurements. Furthermore, the algorithm was implemented and tested only on images from one device; future work needs to adapt the approach to imaging data from other devices. Last, the sample size of our HC cohort is too small to allow a meaningful investigation of correlations of the ONH shape parameters, e.g., with age and sex. These analyses should be repeated in a future study with larger sample sizes. We plan to evaluate our method on a larger dataset with different scan protocols and different OCT devices and to create a normative database for the developed parameters.

In summary, our results showed that the proposed method successfully and robustly identifies the ILM, RPE, and BMO points, and enables the computation of morphometric parameters that capture shape characteristics of the ONH.

Disclosures

A patent application has been submitted for the method described in this paper. Friedemann Paul serves on the scientific advisory board for the Novartis OCTIMS study; received speaker honoraria and travel funding from Bayer, Novartis, Biogen Idec, Teva, Sanofi-Aventis/Genzyme, Merck Serono, Alexion, Chugai, MedImmune, and Shire; is an academic editor for PLoS ONE; is an associate editor for Neurology Neuroimmunology and Neuroinflammation; consulted for Sanofi Genzyme, Biogen Idec, MedImmune, Shire, and Alexion; and received research support from Bayer, Novartis, Biogen Idec, Teva, Sanofi-Aventis/Genzyme, Alexion, and Merck Serono. Alexander U. Brandt served on the scientific advisory board for the Biogen Vision study; received travel funding and/or speaker honoraria from Novartis Pharma, Biogen, Bayer, and Teva; has consulted for Biogen, Nexus, Teva, and Motognosis, Nocturne. Sunil Kumar Yadav, Ella Maria Kadas, Seyedamirhusein Motamedi, Konrad Polthier, Frank Haußer, Kay Gawlik have no conflicts of interest to disclose.

Acknowledgments

The authors would like to thank Ulrich Reitebuch for helpful discussions and Hanna Zimmermann for her valuable input about the OCT and clinical data. We acknowledge support from the German Research Foundation (DFG) and the Open Access Publication Fund of Charité – Universitätsmedizin Berlin.

References

1. B. Nesmith et al., “Mathematical analysis of the normal anatomy of the aging fovea,” Invest. Ophthalmol. Visual Sci. 55, 5962–5966 (2014).IOVSDA0146-0404 https://doi.org/10.1167/iovs.14-15278 Google Scholar

2. L. Liu et al., “A sloped piecemeal Gaussian model for characterising foveal pit shape,” Ophthalmic Physiol. Opt. 36, 615–631 (2016). https://doi.org/10.1111/opo.12321 Google Scholar

3. P. Scheibe et al., “Parametric model for the 3D reconstruction of individual fovea shape from OCT data,” Exp. Eye Res. 119, 19–26 (2014).EXERA60014-4835 https://doi.org/10.1016/j.exer.2013.11.008 Google Scholar

4. P. Scheibe et al., “Analysis of foveal characteristics and their asymmetries in the normal population,” Exp. Eye Res. 148, 1–11 (2016).EXERA60014-4835 https://doi.org/10.1016/j.exer.2016.05.013 Google Scholar

5. S. K. Yadav et al., “CuBe: parametric modeling of 3D foveal shape using cubic Bezier,” Biomed. Opt. Express 8, 4181–4199 (2017).BOEICL2156-7085 https://doi.org/10.1364/BOE.8.004181 Google Scholar

6. J. B. Jonas et al., “Glaucoma,” Lancet 390, 2183–2193 (2017).LANCAO0140-6736 https://doi.org/10.1016/S0140-6736(17)31469-1 Google Scholar

7. F. Kaufhold et al., “Optic nerve head quantification in idiopathic intracranial hypertension by spectral domain OCT,” PLoS One 7, e36965 (2012).POLNCL1932-6203 https://doi.org/10.1371/journal.pone.0036965 Google Scholar

8. P. Albrecht et al., “Optical coherence tomography for the diagnosis and monitoring of idiopathic intracranial hypertension,” J. Neurol. 264, 1370–1380 (2017). https://doi.org/10.1007/s00415-017-8532-x Google Scholar

9. A. T. Toosy, D. F. Mason and D. H. Miller, “Optic neuritis,” Lancet Neurol. 13, 83–99 (2014). https://doi.org/10.1016/S1474-4422(13)70259-X Google Scholar

10. S. L. Galetta et al., “Acute optic neuritis,” Neurology 2, e135 (2015).NEURAI0028-3878 https://doi.org/10.1212/NXI.0000000000000135 Google Scholar

11. F. C. Oertel et al., “Microstructural visual system changes in AQP4-antibody-seropositive NMOSD,” Neurology 4, e334 (2017).NEURAI0028-3878 https://doi.org/10.1212/NXI.0000000000000334 Google Scholar

12. A. Petzold et al., “Retinal layer segmentation in multiple sclerosis: a systematic review and meta-analysis,” Lancet Neurol. 16, 797–812 (2017). https://doi.org/10.1016/S1474-4422(17)30278-8 Google Scholar

13. J. Bennett et al., “Neuromyelitis optica and multiple sclerosis: seeing differences through optical coherence tomography,” Mult. Scler. J. 21, 678–688 (2015). https://doi.org/10.1177/1352458514567216 Google Scholar

14. A. S. C. Reis et al., “Influence of clinically invisible, but optical coherence tomography detected, optic disc margin anatomy on neuroretinal rim evaluation,” Invest. Ophthalmol. Visual Sci. 53(4), 1852–1860 (2012).IOVSDA0146-0404 https://doi.org/10.1167/iovs.11-9309 Google Scholar

15. B. C. Chauhan and C. F. Burgoyne, “From clinical examination of the optic disc to clinical assessment of the optic nerve head: a paradigm change,” Am. J. Ophthalmol. 156(2), 218–227.e2 (2013).AJOPAA0002-9394 https://doi.org/10.1016/j.ajo.2013.04.016 Google Scholar

16. M. Young et al., “Comparison of the clinical disc margin seen in stereo disc photographs with neural canal opening seen in optical coherence tomography images,” J. Glaucoma 23, 360–367 (2014).JOGLES https://doi.org/10.1097/IJG.0b013e31829484a4 Google Scholar

17. P. Enders et al., “The use of Bruch’s membrane opening-based optical coherence tomography of the optic nerve head for glaucoma detection in microdiscs,” Br. J. Ophthalmol. 101(4), 530–535 (2017).BJOPAL0007-1161 https://doi.org/10.1136/bjophthalmol-2016-308957 Google Scholar

18. D. R. Muth and C. W. Hirnei, “Structure-function relationship between Bruch’s membrane opening-based optic nerve head parameters and visual field defects in glaucoma,” Invest. Ophthalmol. Visual Sci. 56(5), 3320–3328 (2015).IOVSDA0146-0404 https://doi.org/10.1167/iovs.14-15845 Google Scholar

19. B. C. Chauhan et al., “Enhanced detection of open-angle glaucoma with an anatomically accurate optical coherence tomography-derived neuroretinal rim parameter,” Ophthalmology 120(3), 535–543 (2013).OPANEW0743-751X https://doi.org/10.1016/j.ophtha.2012.09.055 Google Scholar

20. F. Pollet-Villard et al., “Structure-function relationships with spectral-domain optical coherence tomography retinal nerve fiber layer and optic nerve head measurements,” Invest. Ophthalmol. Visual Sci. 55(5), 2953–2962 (2014).IOVSDA0146-0404 https://doi.org/10.1167/iovs.13-13482 Google Scholar

21. B. C. Chauhan et al., “Bruch’s membrane opening minimum rim width and retinal nerve fiber layer thickness in a normal white population,” Ophthalmology 122(9), 1786–1794 (2015).OPANEW0743-751X https://doi.org/10.1016/j.ophtha.2015.06.001 Google Scholar

22. P. Enders et al., “Neuroretinal rim in non-glaucomatous large optic nerve heads: a comparison of confocal scanning laser tomography and spectral domain optical coherence tomography,” Br. J. Ophthalmol. 101(2), 138–142 (2017).BJOPAL0007-1161 https://doi.org/10.1136/bjophthalmol-2015-307730 Google Scholar

23. S. K. Gardiner et al., “A method to estimate the amount of neuroretinal rim tissue in glaucoma: comparison with current methods for measuring rim area,” Am. J. Ophthalmol. 157, 540–549.e2 (2014).AJOPAA0002-9394 https://doi.org/10.1016/j.ajo.2013.11.007 Google Scholar

24. P. Enders et al., “Novel Bruch’s membrane opening minimum rim area equalizes disc size dependency and offers high diagnostic power for glaucoma,” Invest. Ophthalmol. Visual Sci. 57, 6596–6603 (2016).IOVSDA0146-0404 https://doi.org/10.1167/iovs.16-20561 Google Scholar

25. E. M. Kadas et al., 3D Optic Nerve Head Segmentation in Idiopathic Intracranial Hypertension, pp. 262–267, Springer, Berlin, Heidelberg (2012). Google Scholar

26. J.-K. Wang et al., “Automated quantification of volumetric optic disc swelling in papilledema using spectral-domain optical coherence tomography,” Invest. Ophthalmol. Visual Sci. 53(7), 4069–4075 (2012).IOVSDA0146-0404 https://doi.org/10.1167/iovs.12-9438 Google Scholar

27. P. Auinger et al., “Baseline OCT measurements in the idiopathic intracranial hypertension treatment trial, part I: quality control, comparisons, and variability,” Invest. Ophthalmol. Visual Sci. 55(12), 8180–8188 (2014).IOVSDA0146-0404 https://doi.org/10.1167/iovs.14-14960 Google Scholar

28. S. Lee et al., “End-to-end pipeline for spectral domain optical coherence tomography and morphometric analysis of human optic nerve head,” J. Med. Biol. Eng. 31, 111–119 (2011).IYSEAK0021-3292 https://doi.org/10.5405/jmbe.845 Google Scholar

29. E. Gibson et al., “Optic nerve head registration via hemispherical surface and volume registration,” IEEE Trans. Biomed. Eng. 57, 2592–2595 (2010).IEBEAX0018-9294 https://doi.org/10.1109/TBME.2010.2060337 Google Scholar

30. S. Lee et al., “Exact surface registration of retinal surfaces from 3-D optical coherence tomography images,” IEEE Trans. Biomed. Eng. 62, 609–617 (2015).IEBEAX0018-9294 https://doi.org/10.1109/TBME.2014.2361778 Google Scholar

31. S. Lee et al., “Quantifying variability in longitudinal peripapillary RNFL and choroidal layer thickness using surface based registration of OCT images,” Transl. Vision Sci. Technol. 6(1), 11 (2017). https://doi.org/10.1167/tvst.6.1.11 Google Scholar

32. C.-L. Chen et al., “Individual A-scan signal normalization between two spectral domain optical coherence tomography devices,” Invest. Ophthalmol. Visual Sci. 54(5), 3463–3471 (2013).IOVSDA0146-0404 https://doi.org/10.1167/iovs.12-11484 Google Scholar

33. M. K. Garvin et al., “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3D graph search,” IEEE Trans. Med. Imaging 27, 1495–1505 (2008).ITMID40278-0062 https://doi.org/10.1109/TMI.2008.923966 Google Scholar

34. B. J. Antony, “A combined machine-learning and graph-based framework for the 3-D automated segmentation of retinal structures in SD-OCT images,” PhD Thesis, AAI3608177, Iowa City, Iowa (2013). Google Scholar

35. K. Rohr et al., Point-Based Elastic Registration of Medical Image Data Using Approximating Thin-Plate Splines, pp. 297–306, Springer, Berlin, Heidelberg (1996). Google Scholar

36. B. J. Antony et al., “Automated 3D segmentation of multiple surfaces with a shared hole segmentation of the neural canal opening in SD-OCT volumes,” Lect. Notes Comput. Sci. 8673, 739–746 (2014).LNCSD90302-9743 https://doi.org/10.1007/978-3-319-10404-1 Google Scholar

37. Z. Hu et al., “Automated segmentation of neural canal opening and optic cup in 3D spectral optical coherence tomography volumes of the optic nerve head,” Invest. Ophthalmol. Visual Sci. 51(11), 5708–5717 (2010). https://doi.org/10.1167/iovs.09-4838

38. J. Weickert, Anisotropic Diffusion in Image Processing, Universität Kaiserslautern, Kaiserslautern (1996).

39. J. Soares et al., “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Trans. Med. Imaging 25, 1214–1222 (2006). https://doi.org/10.1109/TMI.2006.879967

40. K. Gawlik et al., “An active contour method for ILM segmentation in ONH volume scans in retinal OCT,” submitted to Biomed. Opt. Express (2018).

41. W. E. Lorensen and H. E. Cline, “Marching cubes: a high resolution 3D surface construction algorithm,” in Proc. of the 14th Annual Conf. on Computer Graphics and Interactive Techniques, SIGGRAPH, pp. 163–169, ACM (1987).

42. S. K. Yadav, U. Reitebuch and K. Polthier, “Robust and high fidelity mesh denoising,” IEEE Trans. Visual Comput. Graphics PP(99), 1–1 (2018). https://doi.org/10.1109/TVCG.2018.2828818

43. A. Fitzgibbon, M. Pilu and R. B. Fisher, “Direct least square fitting of ellipses,” IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 476–480 (1999). https://doi.org/10.1109/34.765658

44. S. K. Yadav, U. Reitebuch and K. Polthier, “Mesh denoising based on normal voting tensor and binary optimization,” IEEE Trans. Visual Comput. Graphics 24(8), 2366–2379 (2018). https://doi.org/10.1109/TVCG.2017.2740384

45. C. A. Schneider, W. S. Rasband and K. W. Eliceiri, “NIH Image to ImageJ: 25 years of image analysis,” Nat. Methods 9, 671–675 (2012). https://doi.org/10.1038/nmeth.2089

46. L. Pierro et al., “Macular thickness interoperator and intraoperator reproducibility in healthy eyes using 7 optical coherence tomography instruments,” Am. J. Ophthalmol. 150, 199–204.e1 (2010). https://doi.org/10.1016/j.ajo.2010.03.015

47. H. Zimmermann et al., “Optical coherence tomography for retinal imaging in multiple sclerosis,” Degener. Neurol. Neuromuscular Dis. 4, 153–162 (2014). https://doi.org/10.2147/DNND.S73506

48. A. Petzold et al., “The investigation of acute optic neuritis: a review and proposed protocol,” Nat. Rev. Neurol. 10, 447–458 (2014). https://doi.org/10.1038/nrneurol.2014.108

49. A. U. Brandt et al., “Frequent retinal ganglion cell damage after acute optic neuritis,” Mult. Scler. Relat. Disord. 22, 141–147 (2018). https://doi.org/10.1016/j.msard.2018.04.006

50. J. Sepulcre et al., “Diagnostic accuracy of retinal abnormalities in predicting disease activity in MS,” Neurology 68, 1488–1494 (2007). https://doi.org/10.1212/01.wnl.0000260612.51849.ed

51. P. Albrecht et al., “Degeneration of retinal layers in multiple sclerosis subtypes quantified by optical coherence tomography,” Mult. Scler. J. 18, 1422–1429 (2012). https://doi.org/10.1177/1352458512439237

52. D.-C. Tian et al., “Bidirectional degeneration in the visual pathway in neuromyelitis optica spectrum disorder (NMOSD),” Mult. Scler. J., published online ahead of print (2017). https://doi.org/10.1177/1352458517727604

53. S. Lee et al., “Atlas-based shape analysis and classification of retinal optical coherence tomography images using the functional shape (fshape) framework,” Med. Image Anal. 35, 570–581 (2017). https://doi.org/10.1016/j.media.2016.08.012

Biographies for the authors are not available.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Sunil Kumar Yadav, Ella Maria Kadas, Seyedamirhosein Motamedi, Konrad Polthier, Frank Haußer, Kay Gawlik, Friedemann Paul, Alexander Brandt, "Optic nerve head three-dimensional shape analysis," Journal of Biomedical Optics 23(10), 106004 (12 October 2018). https://doi.org/10.1117/1.JBO.23.10.106004
Received: 5 June 2018; Accepted: 6 August 2018; Published: 12 October 2018