Open Access
Automated differentiation between meningioma and healthy brain tissue based on optical coherence tomography ex vivo images using texture features
26 February 2018
Marcel Lenz, Robin Krug, Christopher Dillmann, Ralf Stroop, Nils C. Gerhardt, Hubert Welp, Kirsten Schmieder, Martin R. Hofmann
Abstract
Brain tissue analysis is highly desired in neurosurgery, such as tumor resection. To guarantee best life quality afterward, exact navigation within the brain during the surgery is essential. So far, no method has been established that perfectly fulfills this need. Optical coherence tomography (OCT) is a promising three-dimensional imaging tool to support neurosurgical resections. We perform a preliminary study toward in vivo brain tumor removal assistance by investigating meningioma, healthy white, and healthy gray matter. For that purpose, we utilized a commercially available OCT device (Thorlabs Callisto) and measured eight samples of meningioma, three samples of healthy white, and two samples of healthy gray matter ex vivo directly after removal. Structural variations of different tissue types, especially meningioma, can already be seen in the raw OCT images. Nevertheless, an automated differentiation approach is desired, so that neurosurgical guidance can be delivered without a-priori knowledge of the surgeon. Therefore, we employ different algorithms to extract texture features and apply pattern recognition methods for their classification. With these postprocessing steps, an accuracy of nearly 98% was found.

1. Introduction

For the subsequent recovery of patients after neurosurgical resection, it is essential to precisely resect the brain tumor without damaging nearby healthy tissue. For example, previous studies have demonstrated that the total resection of benign meningiomas increases the survival rate.1,2 On the other hand, removal of healthy tissue can severely reduce the patient's subsequent quality of life. Therefore, a clear differentiation between healthy tissue and brain tumor during surgery is necessary to guarantee complete removal. Prior to surgery, the position of the tumor can be determined with magnetic resonance imaging (MRI) or computed tomography. During the resection, however, this position shifts by up to several tens of millimeters, and the shift increases during the procedure due to removal of tissue and loss of cerebrospinal fluid.3 At present, the tools used to support such surgeries intraoperatively do not provide sufficient assistance.

Here, we present the possibility to assist neurosurgical procedures by employing optical coherence tomography (OCT) and texture feature-based postprocessing, including machine learning with principal component analysis (PCA) and support vector machines (SVM). In this study, we confirm that our approach is suitable for ex vivo analysis. With this approach, we aim to find a clear differentiation between healthy and tumor tissue automatically in the future, so that further investigations by the surgeon are not necessary.

Other imaging modalities, e.g., fluorescence microscopy4 or Raman spectroscopy,5 suffer from several drawbacks, such as nonuniform distribution and the lack of a three-dimensional (3-D) field of view. Moreover, fluorescence-guided surgery for meningioma resection does not provide the same benefit as for malignant glioma surgeries.6 Multiphoton tomography with fluorescence lifetime imaging has led to promising results.7 Nevertheless, the penetration depth of this modality is quite low (approximately 200 μm).7 By utilizing OCT, these challenges can be overcome. OCT has demonstrated its advantages in several fields of biomedical imaging, e.g., ophthalmology and dermatology.8,9

Until now, the performance of time- and spectral-domain OCT for the analysis of brain tissue has been analyzed by Böhringer et al.10 They highlighted that detailed structural information can be seen with spectral-domain OCT and that, due to the increased image acquisition speed, intraoperative utilization can potentially be enabled. Full-field OCT-based investigations were done by Assayag et al.,11 where nontumorous tissue and different brain tumors were imaged. Correlations with histology were clearly visible. In a recent article, the attenuation coefficient was calculated for the backscattered signal in OCT and used for the differentiation between healthy tissue and brain tumor.12,13 The measurements were performed in vivo for mice and ex vivo for human samples. Lichtenegger et al.14 combined spectroscopic imaging with an attenuation-based approach in the visible light range for the investigation of Alzheimer’s disease. For that purpose, optically cleared mouse samples and ex vivo human gray and white matter were analyzed.

These recent works have shown promising approaches for differentiation. However, automated differentiation has only been proposed by Kut et al.12,13 An attenuation-based approach, though, has to be readjusted for every system, since components of a system might be changed, e.g., the light source. By considering the texture of an image, the light source should not influence the result significantly. In our recent work, we were able to demonstrate that meningioma samples show differences in structure compared to healthy tissue.15,16 Now we want to enhance these structural variations by employing texture analysis. This postprocessing algorithm is very fast and can be easily implemented in other systems.

The benefit of OCT with texture analysis on mouse tissue and phantom samples has already been demonstrated by Gossage et al.17,18 A powerful metric is local binary patterns (LBP), which are widely used in face recognition algorithms19,20 and whose benefit has also been shown in ophthalmology by Liu et al.21 and Anantrasirichai et al.22 For meningioma samples, LBP have been applied to histology images taken with a bright-field microscope.23

Furthermore, MRI images of different tumor types and grades were analyzed using texture analysis and subsequent classification with SVM.24

In this paper, we employ OCT imaging with a texture operator based on LBP,19 run length analysis (RL),25 Haralick’s texture features (H),26 and Laws’ texture energy measures (L)27 on 13 ex vivo brain tissue samples for an automated differentiation approach.

2. Methodology

The sequence of our ex vivo analysis is depicted in Fig. 1. First, 13 samples from 11 patients were measured with OCT directly after resection and marked with blue ink for orientation. In our study, we obtained volumetric images for each sample. Each of these images consists of 1000 B-scans, except for two samples, for which only 500 B-scans could be recorded. The resected samples were taken from an area where the surgeon was certain to find only the tissue of interest. To verify this, a histopathological analysis was performed, so that each sample could be labeled as meningioma, healthy white, or healthy gray matter.

Fig. 1: Sequence of the ex vivo study.

The OCT measurements were performed with a commercially available spectral-domain OCT system (Thorlabs Callisto). The central wavelength of the OCT device is 930 nm, and it provides axial and lateral resolutions of 7 and 8 μm, respectively. The maximum penetration depth is 1.7 mm in air and 1.2 mm in brain tissue, presuming a refractive index of 1.4. After the ex vivo measurements, the samples were prepared for standard histological analysis: they were embedded in paraffin; then, 10-μm thin slices were cut, stained with hematoxylin and eosin, and investigated with a bright-field microscope. The investigation of stained slices by an experienced pathologist is common practice to determine the tissue type ex vivo. For a few samples, additional OCT images of the paraffin block and the slices were acquired. Afterward, the OCT images of the slices were compared with the microscopic images. Using this procedure, we verified that the distinct features of every tissue type can be displayed by both imaging modalities. As these features can be recognized throughout the complete histological preparation procedure, the postprocessing, as described in Sec. 3, was applied only to the OCT measurements on unprepared samples directly after the resection. For later classification, the tissue type for each sample was diagnosed by an experienced pathologist after the above-described histological preparation procedure. The histopathological findings were as follows:

  • eight meningioma,

  • three white matter (healthy tissue), and

  • two gray matter (healthy tissue).

To obtain the same lateral resolution for each data set, the images with higher resolution were downsampled by interpolation. This was necessary because the sampling rate could not be kept constant during the experiments. All postprocessing steps were done using MATLAB (version 2015a). The LIBSVM toolbox was used for the SVM.28 The study was approved by the ethics committee of the Ruhr-University Bochum.

3. Analysis

This section deals with the performed analysis that led to an accurate classification of the tissue. Figure 2 illustrates the postprocessing steps that were performed on our ex vivo OCT images. First, we introduce a segmentation algorithm that automatically finds the region of interest for every B-scan without any need for parameter adjustments. Then, the applied texture feature algorithms are introduced, followed by a description of the PCA and the classification via SVM.

Fig. 2: Sequence of the postprocessing: LBP, local binary patterns; RL, run length analysis; H, Haralick’s texture features; and L, Laws’ texture energy measures.

3.1. Segmentation

The flowchart of the segmentation can be seen in Fig. 3. First, the noise of the original B-scan was suppressed with a 20×20 pixel median filter. Afterward, the edge was detected using the Canny operator.29 By thresholding with Otsu’s method, the lower boundary of the region of interest was found.30 For further analysis, the original image was used, so that no textural feature was suppressed by the median filter, and divided into 32×32 pixel subimages. If at least 90% of a subimage was lying within the segmented area, it was considered for further analysis.
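The steps above can be sketched in Python (the original pipeline was implemented in MATLAB). This simplified version, with our own helper names, omits the Canny edge detection and derives the mask from the Otsu threshold alone; it is a sketch under those assumptions, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # probability of class 0 (below threshold)
    w1 = 1.0 - w0                     # probability of class 1 (above threshold)
    cum_mu = np.cumsum(p * centers)
    mu0 = cum_mu / np.where(w0 > 0, w0, 1)
    mu1 = (cum_mu[-1] - cum_mu) / np.where(w1 > 0, w1, 1)
    return centers[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)]

def select_subimages(bscan, block=32, min_frac=0.9, median_size=20):
    """Median-filter a B-scan, threshold it with Otsu's method, and keep the
    block x block subimages of the ORIGINAL image whose area lies at least
    min_frac inside the segmented region."""
    smoothed = ndimage.median_filter(bscan, size=median_size)
    mask = smoothed > otsu_threshold(smoothed)
    kept = []
    h, w = bscan.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if mask[r:r + block, c:c + block].mean() >= min_frac:
                kept.append(bscan[r:r + block, c:c + block])
    return kept
```

Note that, as in the paper, the texture analysis later runs on the unfiltered subimages; the median filter is used only to derive the mask.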

Fig. 3: Flowchart of the performed analysis.

3.2. Local Binary Patterns

By employing LBP, a histogram can be calculated for every subimage. Each pixel is compared with its eight surrounding neighbors, and an 8-bit binary value is calculated. Each neighbor corresponds to a fixed binary digit, which is set to one if its value is greater than or equal to that of the center pixel, and to zero otherwise.

Since all surrounding neighbors are considered for every pixel, this methodology is rotation invariant. Afterward, a histogram was calculated for each subimage, and a mean histogram was then calculated for each B-scan of a 3-D data set. In total, 256 features are obtained with this methodology.
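The per-subimage histogram computation can be sketched as follows (a minimal Python version of the basic 256-bin LBP variant the text describes; the function name is ours):

```python
import numpy as np

def lbp_histogram(patch):
    """Normalized 256-bin histogram of 8-neighbor local binary patterns
    computed over the interior pixels of a 2-D patch."""
    c = patch[1:-1, 1:-1]                     # center pixels
    # offsets of the 8 neighbors, each contributing one bit of the code
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dr, dc) in enumerate(shifts):
        n = patch[1 + dr:patch.shape[0] - 1 + dr,
                  1 + dc:patch.shape[1] - 1 + dc]
        code += (n >= c).astype(int) << bit   # neighbor >= center sets the bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()
```

On a constant patch every neighbor equals the center, so all eight bits are set and the entire mass falls into bin 255.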

3.3. Run Length Analysis

With RL, areas that share the same gray-level intensity in a specific direction are emphasized. The run length for each gray level is stored in a matrix. To calculate the run length matrix, the subimages first have to be quantized into g_max gray levels. In our study, g_max was chosen to be 16. For each of these quantized levels, an entry in the matrix stores the length of a run of consecutive pixels sharing the same intensity.

Out of this matrix, several features can be extracted to highlight different structural properties of the input image. In our study, these features were extracted for horizontal and vertical runs:

  • short run emphasis,

  • long run emphasis,

  • gray level nonuniformity,

  • run length nonuniformity,

  • low gray level run emphasis,

  • high gray level run emphasis, and

  • run percentage.

For the later analysis, the mean parameter for both runs was calculated, leading to seven extracted features.
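A sketch of the run-length matrix and two of the listed features, in Python (horizontal runs only; the function names are ours, and g_max = 16 follows the text):

```python
import numpy as np

def run_length_matrix(img, g_max=16):
    """Gray-level run-length matrix for horizontal runs.
    Row index: quantized gray level; column index: run length - 1."""
    q = np.floor(img / (img.max() + 1e-12) * g_max).astype(int)
    q = np.clip(q, 0, g_max - 1)
    rlm = np.zeros((g_max, img.shape[1]), dtype=int)
    for row in q:
        run = 1
        for a, b in zip(row[:-1], row[1:]):
            if a == b:
                run += 1            # same level: extend the current run
            else:
                rlm[a, run - 1] += 1
                run = 1
        rlm[row[-1], run - 1] += 1  # close the last run of the row
    return rlm

def short_run_emphasis(rlm):
    """Emphasizes short runs (fine texture)."""
    j = np.arange(1, rlm.shape[1] + 1)
    return (rlm / j ** 2).sum() / rlm.sum()

def run_percentage(rlm, n_pixels):
    """Number of runs divided by the number of pixels."""
    return rlm.sum() / n_pixels
```

The remaining five features listed above are analogous sums over the same matrix; vertical runs are obtained by transposing the image.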

3.4. Haralick’s Texture Features

Haralick introduced 14 texture features that are based on a gray-level co-occurrence matrix, which stores how often pairs of gray levels occur in neighboring pixels.26

Out of this matrix, we investigated these features for the angles 0 deg, 45 deg, 90 deg, and 135 deg:

  • contrast,

  • entropy,

  • inverse difference moment,

  • angular second moment, and

  • dissimilarity.

All features were calculated for every direction, so that this methodology is rotation invariant as well. In the end, only the mean over the directions was considered for later analysis, resulting in five features.
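The co-occurrence matrix and two of the listed features can be sketched in Python (one offset only, corresponding to 0 deg; function names are ours):

```python
import numpy as np

def glcm(img, g_max=8, dr=0, dc=1):
    """Symmetric, normalized gray-level co-occurrence matrix for the
    pixel offset (dr, dc); (0, 1) corresponds to the 0-deg direction."""
    q = np.clip((img / (img.max() + 1e-12) * g_max).astype(int), 0, g_max - 1)
    m = np.zeros((g_max, g_max))
    h, w = q.shape
    for r in range(h - dr):
        for c in range(w - dc):
            m[q[r, c], q[r + dr, c + dc]] += 1  # count the gray-level pair
    m = m + m.T                                 # make the matrix symmetric
    return m / m.sum()

def contrast(p):
    """Weighted sum of squared gray-level differences."""
    i, j = np.indices(p.shape)
    return (p * (i - j) ** 2).sum()

def angular_second_moment(p):
    """Sum of squared entries; large for homogeneous textures."""
    return (p ** 2).sum()
```

The other angles (45 deg, 90 deg, 135 deg) follow from the offsets (1, 1), (1, 0), and (1, -1), and the paper averages the features over all four directions.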

3.5. Laws’ Texture Energy Measures

For this analysis, the image is convolved with different filter masks that highlight certain structural features, as proposed by Laws.27 In our work, we used 5×5 masks generated by the outer product (⊗) of these operators:

  • level: L5 = (1 4 6 4 1),

  • edge: E5 = (−1 −2 0 2 1),

  • spot: S5 = (−1 0 2 0 −1),

  • wave: W5 = (−1 2 0 −2 1), and

  • ripple: R5 = (1 −4 6 −4 1).

Here, the masks E5⊗L5, E5⊗S5, L5⊗S5, and R5⊗R5 were used, since these masks turned out to be best suited for Brodatz images, which are common data sets for verifying texture analysis approaches.27 After the convolution with these masks, a 5×5 average filter was applied. Then, the mean value and the standard deviation of each filtered image were calculated, leading to eight textural features.
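The mask construction and feature extraction can be sketched in Python. The absolute value before the averaging step is our assumption (a common way to form a texture "energy" map); the paper does not specify this detail:

```python
import numpy as np
from scipy.signal import convolve2d

# Laws' 1-D operators; outer products yield the 5x5 masks
L5 = np.array([1, 4, 6, 4, 1])       # level
E5 = np.array([-1, -2, 0, 2, 1])     # edge
S5 = np.array([-1, 0, 2, 0, -1])     # spot
R5 = np.array([1, -4, 6, -4, 1])     # ripple

def laws_features(img):
    """Convolve with the four masks used in the paper, smooth the absolute
    response with a 5x5 average filter, and return (mean, std) per mask."""
    masks = [np.outer(E5, L5), np.outer(E5, S5),
             np.outer(L5, S5), np.outer(R5, R5)]
    avg = np.ones((5, 5)) / 25
    feats = []
    for m in masks:
        energy = convolve2d(np.abs(convolve2d(img, m, mode='same')),
                            avg, mode='same')
        feats.extend([energy.mean(), energy.std()])
    return np.array(feats)
```

Four masks with two statistics each yield the eight textural features named in the text.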

3.6. Feature Reduction and Classification

The texture analysis was performed on each subimage. However, due to the large amount of data, a mean feature vector was calculated for each B-scan, leading to approximately 1000 data points per sample. After all texture features were calculated, a PCA was performed to reduce the dimensionality. At minimum, the first three principal components were considered, and the number of principal components taken for classification was increased until an explained variance of at least 95% was reached. PCA was performed on four different preprocessing options:31

  • 1. The combination of texture features without normalization: x = (LBP, RL, H, L).

  • 2. The texture features were combined using z-score normalization:

    x_z-score = (x − μ) / σ.

  • 3. Each data set was normalized by its minimum and maximum individually before the data sets were combined:

    x_min-max = (x − min) / (max − min).

  • 4. Each data set was first normalized individually, and then z-score normalization was performed on the combined feature vector:

    x_min-max(z-score) = (x_min-max − μ) / σ.

For the z-score normalization, the mean value μ was subtracted from the combined feature vector and the result was divided by the standard deviation σ. In the end, an SVM with a radial basis function kernel was employed to separate the data into healthy (gray and white matter) and tumor tissue. A grid search was performed to find the optimal values for the kernel width γ and the cost parameter c.32 The radial basis function kernel is defined as follows:33

Eq. (1)

K_RBF(x, x′) = exp(−γ ‖x − x′‖²).
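As a small illustration (Python, with our own function name), Eq. (1) can be evaluated directly:

```python
import numpy as np

def rbf_kernel(x, x2, gamma):
    """K_RBF(x, x') = exp(-gamma * ||x - x'||^2), as in Eq. (1)."""
    x, x2 = np.asarray(x, dtype=float), np.asarray(x2, dtype=float)
    return np.exp(-gamma * np.sum((x - x2) ** 2))
```

The kernel equals 1 for identical inputs, is symmetric, and decays with the squared distance; γ controls how quickly.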

Moreover, a 10-fold cross validation was performed: for each tissue type, the data set was split into 10 equal parts, the classification was run 10 times, and each time one part was used for testing while the remaining nine parts were used for training. To find the best classification accuracy, every possible combination of the texture features was tested.
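The normalization options and the PCA-based dimensionality reduction described above can be sketched as follows (Python instead of MATLAB; function names are ours):

```python
import numpy as np

def zscore(x):
    """z-score normalization: subtract the mean, divide by the std, per feature."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

def min_max(x):
    """Min-max normalization to the range [0, 1], per feature."""
    mn, mx = x.min(axis=0), x.max(axis=0)
    return (x - mn) / (mx - mn)

def pca_reduce(x, min_components=3, var_target=0.95):
    """Project onto the smallest number of principal components (but at
    least min_components) whose cumulative explained variance reaches
    var_target, mirroring the selection rule described in the text."""
    xc = x - x.mean(axis=0)
    _, s, vt = np.linalg.svd(xc, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = max(min_components, int(np.searchsorted(explained, var_target)) + 1)
    return xc @ vt[:k].T
```

Option 4 from the list, for instance, corresponds to `zscore(np.hstack([min_max(f) for f in feature_sets]))`, followed by `pca_reduce` and the SVM.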

4. Results

The raw B-scans of a meningioma sample, healthy white, and healthy gray tissue can be seen in Fig. 4. The structural differences are already visible, but by utilizing different texture features, a classifier can be trained whose accuracy quantifies how reliably the classification can be performed.

Fig. 4: (a) B-scan of meningioma, (b) healthy white tissue, and (c) healthy gray tissue.

Table 1 shows the obtained SVM classification accuracies for every combination of the calculated textural features after PCA was applied. The best result (99.88%) is obtained for a combination of LBP, RL, and H. In Fig. 5, the first three principal components are plotted against each other in two (2-D) and three dimensions (3-D). It can clearly be seen that there is a large overlap between the point clouds of the meningioma samples (red) and the healthy tissue (blue), which is undesirable because it does not yield robust classifiers.

Table 1: Comparison between different texture feature classification results (SVM accuracy after PCA).

Features            x (%)    x_z-score (%)   x_min-max (%)   x_min-max(z-score) (%)
LBP                 99.84    98.94           99.77           98.50
RL                  92.07    91.87           90.76           89.92
H                   93.84    93.28           90.35           76.27
L                   97.89    97.19           96.32           95.28
LBP + RL            99.14    99.73           99.87           99.51
LBP + H             99.82    99.25           99.80           98.78
LBP + L             97.93    99.49           99.80           99.40
RL + H              92.05    90.95           88.77           90.24
RL + L              97.75    93.89           96.23           95.89
H + L               97.89    96.56           97.31           96.46
LBP + RL + H        99.17    99.74           99.88           99.58
LBP + RL + L        97.76    99.77           99.82           99.65
LBP + H + L         97.95    99.48           99.79           99.45
RL + H + L          97.75    93.28           97.98           97.71
LBP + RL + H + L    97.76    99.77           99.86           99.66

Fig. 5: PCA for the best classification accuracy: meningioma samples are depicted in red and healthy tissue in blue. A 2-D scatter plot was made for (a) the first principal component (PC1) against PC2, (b) PC1 versus PC3, and (c) PC2 versus PC3; (d) a 3-D scatter plot of all three components.

A more obvious result is depicted in Fig. 6. Here, the combination of RL, H, and L provides a slightly lower accuracy of 97.98% (RL + H + L row in Table 1). This is the highest accuracy found without an overlap of meningioma and healthy tissue. With this result, the separation between meningioma and healthy tissue is far more convincing.

Fig. 6: PCA for the most obvious result: meningioma samples are depicted in red and healthy tissue in blue. A 2-D scatter plot was made for (a) the first principal component (PC1) against PC2, (b) PC1 versus PC3, and (c) PC2 versus PC3; (d) a 3-D scatter plot of all three components.

The SVM accuracies for the best and the most obvious result are depicted in Figs. 7(a) and 7(b). A grid search was performed to find the optimal values for the cost parameter c and the width of the Gaussian kernel γ. By comparing both results, it can be seen that the most obvious result provides a larger area with an accuracy greater than 90%. This suggests that this classifier is indeed more robust.

Fig. 7: 2-D plot showing the result of the grid search (a) for the best SVM accuracy and (b) for the most obvious SVM accuracy. The cost parameter c and the kernel width γ are displayed on a logarithmic scale. The area with an accuracy greater than 90% is considerably larger for the most obvious result.

5. Discussion

With the combination of textural features and a machine learning algorithm on ex vivo OCT images, a confident classifier was trained to distinguish between healthy tissue and meningioma. In contrast to other methodologies, fast 3-D imaging is possible with high lateral and axial resolutions. Here, we have shown that structural features of meningioma and healthy tissue differ and that a trained classifier is able to distinguish between the tissue types. Although our results clearly show the potential of our approach for brain tissue differentiation, we are aware that 13 samples do not provide conclusive statistical significance. Therefore, we will continue our study to steadily increase the number of patients. Once data from a larger number of patients are available, the data set can be split into training and testing parts such that no patient's data set is used for both training and testing.

To generate these results, a mean parameter was calculated for each B-scan, which affects the effective resolution. However, during neurosurgical resections, the achievable precision is approximately 1.5 mm, even for robot-assisted surgeries.34 For the axial dimension, the penetration depth is around 1.2 mm. For the lateral dimension, the number of subimages considered for classification can be adjusted. To check how our approach would respond to less homogeneous samples, i.e., mixtures of tumorous and healthy tissue, we artificially decreased the size of the analyzed B-scans to one quarter of the original value. This leads to a maximum width of 1.25 mm, which is about the accuracy at which the neurosurgery can be performed. This decrease in B-scan size still provided a classification accuracy comparable to that of a full B-scan. With our approach, this requirement can thus be fulfilled for all three dimensions.

Moreover, we have shown that LBP analysis, being a pixel-wise method, is not capable of fulfilling our needs: all results in which LBP was utilized showed an overlap of healthy tissue and meningioma. For the most obvious result, LBP was not considered, leading to a clear differentiation between healthy tissue and meningioma; this was also the highest accuracy found using a combination of RL, H, and L.

The A-scan rate of our system was 1.2 kHz. The time needed to scan an area of ca. 3.5×3.5 mm² was approximately 15 min, which is not real-time acquisition. However, much shorter acquisition times may be desired for in vivo measurements, to exclude motion artifacts. In that case, high-speed OCT systems with scan rates in the MHz range may be used.35 Since our system was utilized for the ex vivo analysis of brain tissue, it has not yet been optimized for intraoperative use. The work of El-Haddad et al.36 gives a good overview of systems already employed for intraoperative measurements. Based on one of these approaches, automated differentiation during surgery can potentially be enabled. Our study confirms that OCT can enable the differentiation of brain tissue.

6. Conclusion

This paper introduces a texture-based classification algorithm that distinguishes between meningioma and healthy brain tissue. Applying this algorithm to OCT images, differentiation between the tissue types with nearly 98% accuracy is possible. As the proposed methodologies are typically fast, implementation during surgery is feasible and is the next logical step. By increasing the number of patients and performing in vivo measurements, the classifier can be optimized further, enabling guidance during surgery. In that respect, it will be important to analyze tumor boundaries and infiltrative tumors. Thus, we will also analyze samples in which healthy and tumorous tissue appear in one image and derive algorithms to determine the tumor boundaries. Moreover, we plan to investigate the differentiation of healthy tissue from other brain tumor types.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

We would like to thank the Stiftung Rheinisch-Westfälischer Technischer Überwachungsverein (RWTÜV, Project Nos. S189/10024/2015 and S189/10025/2015) and the RUB Research School for supporting this work.

References

1. B. J. McCarthy et al., “Factors associated with survival in patients with meningioma,” J. Neurosurg. 88(5), 831–839 (1998). http://dx.doi.org/10.3171/jns.1998.88.5.0831

2. K. S. Condra et al., “Benign meningiomas: primary treatment selection affects survival,” Int. J. Radiat. Oncol. Biol. Phys. 39(2), 427–436 (1997). http://dx.doi.org/10.1016/S0360-3016(97)00317-9

3. O. Clatz et al., “Robust nonrigid registration to capture brain shift from intraoperative MRI,” IEEE Trans. Med. Imaging 24(11), 1417–1427 (2005). http://dx.doi.org/10.1109/TMI.2005.856734

4. Y. Li et al., “Intraoperative fluorescence-guided resection of high-grade gliomas: a comparison of the present techniques and evolution of future strategies,” World Neurosurg. 82(1–2), 175–185 (2014). http://dx.doi.org/10.1016/j.wneu.2013.06.014

5. S. Koljenović et al., “Detection of meningioma in dura mater by Raman spectroscopy,” Anal. Chem. 77(24), 7958–7965 (2005). http://dx.doi.org/10.1021/ac0512599

6. A. Motekallemi et al., “The current status of 5-ALA fluorescence-guided resection of intracranial meningiomas: a critical review,” Neurosurg. Rev. 38(4), 619–628 (2015). http://dx.doi.org/10.1007/s10143-015-0615-5

7. S. R. Kantelhardt et al., “In vivo multiphoton tomography and fluorescence lifetime imaging of human brain tumor tissue,” J. Neuro-Oncol. 127(3), 473–482 (2016). http://dx.doi.org/10.1007/s11060-016-2062-8

8. M. R. Hee et al., “Optical coherence tomography of the human retina,” Arch. Ophthalmol. 113(3), 325–332 (1995). http://dx.doi.org/10.1001/archopht.1995.01100030081025

9. J. Welzel et al., “Optical coherence tomography of the human skin,” J. Am. Acad. Dermatol. 37(6), 958–963 (1997). http://dx.doi.org/10.1016/S0190-9622(97)70072-0

10. H. J. Böhringer et al., “Time-domain and spectral-domain optical coherence tomography in the analysis of brain tumor tissue,” Lasers Surg. Med. 38, 588–597 (2006). http://dx.doi.org/10.1002/(ISSN)1096-9101

11. O. Assayag et al., “Imaging of non-tumorous and tumorous human brain tissues with full-field optical coherence tomography,” NeuroImage Clin. 2(1), 549–557 (2013). http://dx.doi.org/10.1016/j.nicl.2013.04.005

12. C. Kut et al., “Detection of human brain cancer infiltration ex vivo and in vivo using quantitative optical coherence tomography,” Sci. Transl. Med. 7(292), 292ra100 (2015). http://dx.doi.org/10.1126/scitranslmed.3010611

13. W. Yuan et al., “Robust and fast characterization of OCT-based optical attenuation using a novel frequency-domain algorithm for brain cancer detection,” Sci. Rep. 7, 44909 (2017). http://dx.doi.org/10.1038/srep44909

14. A. Lichtenegger et al., “Spectroscopic imaging with spectral domain visible light optical coherence microscopy in Alzheimer’s disease brain samples,” Biomed. Opt. Express 8(9), 4007–4025 (2017). http://dx.doi.org/10.1364/BOE.8.004007

15. M. Lenz et al., “Spectral domain optical coherence tomography for ex vivo brain tumor analysis,” Proc. SPIE 9541, 95411D (2015). http://dx.doi.org/10.1117/12.2183614

16. M. Lenz et al., “Ex vivo brain tumor analysis using spectroscopic optical coherence tomography,” Proc. SPIE 9697, 96973D (2016). http://dx.doi.org/10.1117/12.2214704

17. K. W. Gossage et al., “Texture analysis of optical coherence tomography images: feasibility for tissue classification,” J. Biomed. Opt. 8(3), 570–575 (2003). http://dx.doi.org/10.1117/1.1577575

18. K. W. Gossage et al., “Texture analysis of speckle in optical coherence tomography images of tissue phantoms,” Phys. Med. Biol. 51(6), 1563–1575 (2006). http://dx.doi.org/10.1088/0031-9155/51/6/014

19. T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recognit. 29(1), 51–59 (1996). http://dx.doi.org/10.1016/0031-3203(95)00067-4

20. D. Huang et al., “Local binary patterns and its application to facial image analysis: a survey,” IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 41(6), 765–781 (2011). http://dx.doi.org/10.1109/TSMCC.2011.2118750

21. Y. Y. Liu et al., “Automated macular pathology diagnosis in retinal OCT images using multi-scale spatial pyramid and local binary patterns in texture and shape encoding,” Med. Image Anal. 15(5), 748–759 (2011). http://dx.doi.org/10.1016/j.media.2011.06.005

22. N. Anantrasirichai et al., “SVM-based texture classification in optical coherence tomography,” in IEEE 10th Int. Symp. on Biomedical Imaging, 1332–1335 (2013). http://dx.doi.org/10.1109/ISBI.2013.6556778

23. H. Qureshi et al., “Adaptive discriminant wavelet packet transform and local binary patterns for meningioma subtype classification,” Lect. Notes Comput. Sci. 5242, 196–204 (2008). http://dx.doi.org/10.1007/978-3-540-85990-1

24. E. I. Zacharaki et al., “Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme,” Magn. Reson. Med. 62(6), 1609–1618 (2009). http://dx.doi.org/10.1002/mrm.v62:6

25. M. M. Galloway, “Texture analysis using gray level run lengths,” Comput. Graphics Image Process. 4(2), 172–179 (1975). http://dx.doi.org/10.1016/S0146-664X(75)80008-6

26. R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Trans. Syst. Man Cybern. SMC-3, 610–621 (1973). http://dx.doi.org/10.1109/TSMC.1973.4309314

27. K. I. Laws, “Texture energy measures,” in Proc. Image Understanding Workshop, 47–51 (1979).

28. C.-C. Chang and C.-J. Lin, “LIBSVM: a library for support vector machines,” ACM Trans. Intell. Syst. Technol. 2, 1–27 (2011). http://dx.doi.org/10.1145/1961189

29. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679–698 (1986). http://dx.doi.org/10.1109/TPAMI.1986.4767851

30. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979). http://dx.doi.org/10.1109/TSMC.1979.4310076

31. A. Jain, K. Nandakumar, and A. Ross, “Score normalization in multimodal biometric systems,” Pattern Recognit. 38(12), 2270–2285 (2005). http://dx.doi.org/10.1016/j.patcog.2005.01.012

32. C.-W. Hsu et al., “A practical guide to support vector classification,” Department of Computer Science, National Taiwan University (2003).

33. R.-E. Fan, P.-H. Chen, and C.-J. Lin, “Working set selection using second order information for training support vector machines,” J. Mach. Learn. Res. 6, 1889–1918 (2005).

34. T. Haidegger et al., “Future trends in robotic neurosurgery,” in 14th Nordic-Baltic Conf. on Biomedical Engineering and Medical Physics (NBC), 229–233 (2008).

35. T. Klein and R. Huber, “High-speed OCT light sources and systems [invited],” Biomed. Opt. Express 8, 828–859 (2017). http://dx.doi.org/10.1364/BOE.8.000828

36. M. T. El-Haddad and Y. K. Tao, “Advances in intraoperative optical coherence tomography for surgical guidance,” Curr. Opin. Biomed. Eng. 3, 37–48 (2017). http://dx.doi.org/10.1016/j.cobme.2017.09.007

Biography

Marcel Lenz received his BSc degree in electrical engineering at the Ruhr-Universität Bochum (RUB). Afterward, he studied in the international master's program Lasers and Photonics and now takes part in the fast-track PhD program (TopING) offered by the Faculty of Electrical Engineering at the RUB. He joined Professor Hofmann’s chair of Photonics and Terahertz Technology in October 2014. His research focuses on new approaches in optical coherence tomography, pattern recognition, and machine learning.

Robin Krug completed his introductory semester in human medicine in 2000–2001 at the Rheinische Bildungszentrum Köln GmbH. He then studied human medicine from 2001 to 2008 at the Christian-Albrechts-Universität Kiel. He pursued his career as an assistant in surgery, completing his common trunk at the Regio Klinikum Elmshorn. In 2012, he joined the group of Professor Dr. Kirsten Schmieder at the Universitätsklinikum Knappschaftskrankenhaus Bochum-Langendreer as an assistant in neurosurgery.

Christopher Dillmann studied electrical engineering and information technology at the Technische Hochschule Georg Agricola and received his bachelor’s degree in 2016. Since 2016, he has worked as a scientific assistant for the Faculty of Electrical Engineering, Information Technology and Business Engineering at the Technische Hochschule Georg Agricola. His research interests include image processing and pattern recognition.

Ralf Stroop obtained his MSc degrees in biochemistry, medicine, and electrical and information technologies at the Charité, Berlin, and in Hagen. He graduated in neurotraumatology at the Charité University Hospital Berlin. He works as a clinical practitioner in the fields of neurosurgery and emergency medicine, as well as in applied medical research. A special interest of his is intraoperative tumor detection focusing on optical and tactile sensor technology, for which he has assembled an international research group.

Nils C. Gerhardt studied physics at the Philipps-Universität Marburg and received his diploma in 2001. He joined the Ruhr-Universität Bochum where he finished his PhD in electrical engineering in 2005 in the group of Professor M. Hofmann. In 2006, he cofounded the PhotonIQ Technologies GmbH. In 2013, he received his habilitation in optoelectronics and photonics. He is author or coauthor of more than 150 international publications. His research interests include semiconductor spectroscopy, spin-optoelectronics, and optical imaging.

Hubert Welp studied physics at the Universities of Münster and Marburg, where he received his diploma in 1990 and his PhD in 1994. After a one-year employment as a research assistant at MedScience Ltd., he continued his industrial career as a consultant at IBM. In 2004, he became a professor of applied computer science at the Technische Hochschule Georg Agricola Bochum. His main research interests are machine learning, computer vision, and software engineering.

Kirsten Schmieder obtained her PhD in medicine in 1993 and her habilitation in neurosurgery in 2000. She has been an adjunct professor at the Ruhr-Universität Bochum (RUB) since 2006 and full professor since 2012. Between 2008 and 2012, she was a full professor and a director at the Klinik für Neurochirurgie der Universitätsmedizin Mannheim, Universität Heidelberg. Since 2017, she has been a member of the faculty council and clinic director at the Knappschaftskrankenhaus Bochum-Langendreer.

Martin R. Hofmann holds the chair for Photonics and Terahertz Technology (PTT) at the Ruhr-Universität Bochum, Germany. He finished his PhD in 1994 at Philipps-Universität Marburg. From 1995 to spring 1996, he worked at the University College Cork, Ireland, at the Fondazione Ugo Bordoni, Rome, Italy, and at Tele Danmark Research in Hoersholm, Denmark. From 1996 to July 2001, he performed his habilitation at the Philipps-Universität Marburg before he joined the Ruhr-Universität in 2001.

© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2018/$25.00
Marcel Lenz, Robin Krug, Christopher Dillmann, Ralf Stroop, Nils C. Gerhardt, Hubert Welp, Kirsten Schmieder, and Martin R. Hofmann "Automated differentiation between meningioma and healthy brain tissue based on optical coherence tomography ex vivo images using texture features," Journal of Biomedical Optics 23(7), 071205 (26 February 2018). https://doi.org/10.1117/1.JBO.23.7.071205
Received: 23 October 2017; Accepted: 19 January 2018; Published: 26 February 2018
Keywords: tissues, optical coherence tomography, brain, tumors, surgery, statistical analysis, image classification
