Segmentation of optical coherence tomography images for differentiation of the cavernous nerves from the prostate gland

Shahab Chitchian, Thomas P. Weldon, and Nathaniel M. Fried
Abstract
The cavernous nerves course along the surface of the prostate and are responsible for erectile function. Improvements in identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery may improve nerve preservation and postoperative sexual potency. Two-dimensional (2-D) optical coherence tomography (OCT) images of the rat prostate were segmented to differentiate the cavernous nerves from the prostate gland. To detect these nerves, three image features were employed: Gabor filter, Daubechies wavelet, and Laws filter. The Gabor feature was applied with different standard deviations in the x and y directions. In the Daubechies wavelet feature, an 8-tap Daubechies orthonormal wavelet was implemented, and the low-pass sub-band was chosen as the filtered image. Last, Laws feature extraction was applied to the images. The features were segmented using a nearest-neighbor classifier. N-ary morphological postprocessing was used to remove small voids. The cavernous nerves were differentiated from the prostate gland with a segmentation error rate of only 0.058±0.019. This algorithm may be useful for implementation in clinical endoscopic OCT systems currently being studied for potential intraoperative diagnostic use in laparoscopic and robotic nerve-sparing prostate cancer surgery.

1.

Introduction

Preservation of the cavernous nerves during prostate cancer surgery is critical to a man's ability to have spontaneous erections following surgery. These microscopic nerves course along the surface of the prostate within a few millimeters of the prostate capsule, and they vary in size and location from one patient to another, making their preservation difficult during dissection and removal of a cancerous prostate gland. These observations may explain in part the wide variability in reported potency rates (9 to 86%) following prostate cancer surgery.1 Any technology providing improved identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery would be of great assistance in improving postoperative sexual function rates.

Optical coherence tomography (OCT) is a noninvasive optical imaging technique used to perform high-resolution cross-sectional in vivo and in situ imaging of microstructure in biological tissues.2 OCT imaging of the cavernous nerves in the rat and human prostate has recently been demonstrated.3, 4, 5 However, further improvement in the quality of the images is necessary before OCT can be used in the clinic as an intraoperative diagnostic tool during nerve-sparing prostate cancer surgery.

Three-dimensional (3-D) prostate segmentation, which allows clinicians to design an accurate brachytherapy treatment plan for prostate cancer, has been previously reported using computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound.6, 7 Recently, various segmentation approaches have also been applied in retinal OCT imaging. Ishikawa et al. described an approach for segmenting retinal layers and extracting their thicknesses; their algorithm searches for the borders of retinal layers using an adaptive thresholding technique.8 Bagci et al. described an algorithm that detects layers within the retinal tissue by enhancing edges along the vertical dimension of the image.9 Methods based on a Markov model and on deformable splines were reported for determining optic nerve-head geometry and retinal nerve fiber layer thickness, respectively.10, 11 However, the large irregular voids in prostate OCT images require a segmentation approach different from that used for the more regular structure of retinal layers.

Our research group recently applied the wavelet shrinkage denoising technique to improve the quality of OCT images of the prostate for identification of the cavernous nerves.12 Building on these earlier results, the segmentation technique reported here has the advantage that it does not depend on the depth of the nerves below the tissue surface, making it a more versatile method. In this study, 2-D prostate images are segmented into background, nerve, and prostate gland regions using a k-nearest neighbors classifier.

2.

Segmentation System

A block diagram of the segmentation system is provided in Fig. 1 . The input image f(x,y) is first processed to form three feature images, generated by Gabor filtering, the Daubechies wavelet transform, and a Laws filter mask, respectively. The prostate image is then segmented into nerve, prostate, and background classes using a k-nearest neighbors classifier operating on the three feature images. Last, N-ary morphological postprocessing is used to remove small voids. The generation of the feature images is described first, followed by descriptions of the classifier and postprocessing.

Fig. 1

System block diagram. Input image f(x,y) is processed into three feature images: Gabor-filtered image, 8-tap Daubechies wavelet sub-band, and Laws feature. The features are classified by a k-nearest neighbors classifier into three classes: background, nerve, and prostate gland. Last, N-ary morphological close and open functions are applied, generating the final output segmented image s(x,y).


2.1.

Gabor Filter

The first feature image is generated by a Gabor filter with impulse response h(x,y),13

Eq. 1

h(x,y) = g(x,y) \exp[j 2\pi (Ux + Vy)],

where

Eq. 2

g(x,y) = \frac{1}{2\pi \sigma_x \sigma_y} \exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right)\right].

The Gabor function h(x,y) is a complex sinusoid centered at frequency (U,V) and modulated by a Gaussian envelope g(x,y). The spatial extent of the Gaussian envelope is determined by the parameters σ_x and σ_y. The 2-D Fourier transform of h(x,y) is

Eq. 3

H(u,v) = G(u - U, v - V),

where

Eq. 4

G(u,v) = \exp[-2\pi^2 (\sigma_x^2 u^2 + \sigma_y^2 v^2)]

is the Fourier transform of g(x,y). The parameters (U, V, σ_x, σ_y) determine h(x,y). Equations 3, 4 show that the Gabor function is essentially a bandpass filter centered about frequency (U,V), with bandwidth determined by σ_x and σ_y. The Gabor feature is computed with a center frequency of (0.2, 0.2) cycles/pixel and standard deviations of 3 and 6 in the x and y directions, respectively, values chosen by experimental observation of minimum segmentation error.
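For concreteness, a minimal Python sketch of this feature follows. The paper's implementation was in Mathcad, so the NumPy/SciPy usage, the truncated kernel size, and taking the magnitude of the complex response are assumptions, not the authors' exact procedure:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_feature(f, U=0.2, V=0.2, sigma_x=3.0, sigma_y=6.0, half=15):
    """Gabor feature image of Eqs. 1 and 2.

    U, V: center frequency in cycles/pixel ((0.2, 0.2) in the paper).
    sigma_x, sigma_y: Gaussian envelope spreads (3 and 6 in the paper).
    half: half-width of the truncated impulse response (an assumption).
    """
    coords = np.arange(-half, half + 1)
    x, y = np.meshgrid(coords, coords)
    # Gaussian envelope g(x, y) of Eq. 2
    g = np.exp(-0.5 * (x**2 / sigma_x**2 + y**2 / sigma_y**2))
    g /= 2.0 * np.pi * sigma_x * sigma_y
    # Complex sinusoid centered at (U, V), Eq. 1
    h = g * np.exp(2j * np.pi * (U * x + V * y))
    # Feature image: magnitude of the complex filter response (assumed)
    return np.abs(fftconvolve(f, h, mode="same"))
```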

2.2.

Daubechies Wavelet Transform

The second feature is generated by an 8-tap Daubechies orthonormal wavelet transform. A wavelet transform represents a function by scaled and translated copies of a finite-length or fast-decaying oscillating waveform, which allows signals to be analyzed at multiple scales. Wavelet coefficients carry both time and frequency information, as the basis functions vary in position and scale.

The discrete wavelet transform (DWT) converts a signal to its wavelet representation. In a one-level DWT, the image c_0 is split into an approximation part c_1 and a detail part d_1. In a multilevel DWT, each subsequent c_i is split into an approximation c_{i+1} and detail d_{i+1}. For 2-D images, each c_i is split into an approximation c_{i+1} and three detail channels d_{i+1}^{1}, d_{i+1}^{2}, and d_{i+1}^{3} for horizontally, vertically, and diagonally oriented details, respectively, as illustrated in Fig. 2 . The inverse DWT (IDWT) reconstructs each c_i from c_{i+1} and d_{i+1}. In the present work, the approximation part c_1 is chosen as the filtered image for the second feature.

Fig. 2

Ordering of the approximation and detail coefficients of a two-level 2-D wavelet transform.

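A minimal sketch of this feature using PyWavelets is shown below. In PyWavelets the 8-tap Daubechies wavelet is "db4"; the library choice and the reconstruction of the low-pass sub-band back to the full image size (so the feature aligns pixel-for-pixel with the other two features) are assumptions:

```python
import numpy as np
import pywt

def wavelet_feature(f):
    """Low-pass (approximation) sub-band c1 of a one-level 2-D DWT
    with the 8-tap Daubechies wavelet ('db4'). Inverting the transform
    from c1 alone, which yields a low-pass-filtered image the same
    size as the input, is an assumption for pixel-wise alignment."""
    c1, (d1h, d1v, d1d) = pywt.dwt2(f, "db4")
    zeros = np.zeros_like(c1)
    # Invert the transform with the detail channels zeroed out.
    rec = pywt.idwt2((c1, (zeros, zeros, zeros)), "db4")
    return rec[: f.shape[0], : f.shape[1]]
```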

2.3.

Laws Filter

The third feature is generated by the Laws feature extraction method. The set of nine Laws 3×3-pixel impulse response arrays h_i(x,y) (Ref. 14) is convolved with a texture field to accentuate its microstructure. The i-th microstructure image m_i(x,y) is defined as

Eq. 5

m_i(x,y) = f(x,y) \ast h_i(x,y),

where \ast denotes 2-D convolution.

Then, the energy of these microstructure arrays is measured by forming their moving-window standard deviation T_i(x,y) according to

Eq. 6

T_i(x,y) = \frac{1}{2w+1} \left\{ \sum_{m=-w}^{w} \sum_{n=-w}^{w} \left[ m_i(x+m, y+n) - \mu(x+m, y+n) \right]^2 \right\}^{1/2},

where w sets the window size and μ(x,y) is the mean value of m_i(x,y) over the window.

For the present system, Laws feature extraction is applied using the Laws 2 mask:

Eq. 7

\mathrm{Laws2} = \frac{1}{12} \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}.

The standard deviation computation of Eq. 6 is performed after the Laws mask filtering to complete the Laws feature extraction.
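A minimal sketch of the Laws feature, combining Eqs. 5, 6, 7, is given below. SciPy is an assumed implementation choice, and the window half-width w is not stated in the paper, so the default here is illustrative only:

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Laws 2 mask of Eq. 7
LAWS2 = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]]) / 12.0

def laws_feature(f, w=7):
    """Laws texture energy: convolve with the mask (Eq. 5), then take
    the moving-window standard deviation of Eq. 6. The half-width w
    is an assumption; the paper does not state the window size."""
    m = convolve(f.astype(float), LAWS2, mode="nearest")
    size = 2 * w + 1
    # Windowed E[m] and E[m^2] give the windowed variance.
    mean = uniform_filter(m, size=size, mode="nearest")
    mean_sq = uniform_filter(m * m, size=size, mode="nearest")
    return np.sqrt(np.clip(mean_sq - mean**2, 0.0, None))
```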

2.4.

k-Nearest Neighbors Classifier

The k-nearest neighbors (k-NN) algorithm classifies an object based on the k closest training samples in the feature space. It is implemented by the following steps:

  • 1. Training: The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples. The feature space is thereby partitioned into regions by the locations and labels of the training samples. Three classes are used: background, nerve, and prostate gland.

  • 2. Parameter selection: The best choice of the parameter k depends on the data; larger values of k typically reduce the effect of noise on the classification but make the boundaries between classes less distinct. A value of k=10 was chosen empirically (with k varied from 4 to 12) for the present implementation of the k-NN algorithm for segmentation of the prostate images.

  • 3. Classification scheme: After training the classifier and selecting the parameter k, the prostate image is segmented using the feature vector formed from the three feature images. The Euclidean distances from each pixel's feature vector to all stored training vectors are computed, and the k closest samples are selected. Each pixel is then assigned to the most frequent class label among its k nearest training samples. After classification, N-ary morphological postprocessing is applied to remove small voids in the final results.
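Putting the three features and the classifier together, a minimal sketch follows. scikit-learn's KNeighborsClassifier (Euclidean by default) is an assumed stand-in for the paper's Mathcad implementation, and the feature functions are the ones from the earlier sketches:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def segment(f, train_pixels, train_labels, k=10):
    """Pixel-wise k-NN segmentation of a prostate OCT image.

    train_pixels: (n, 3) feature vectors sampled from a labeled
                  training image (built with the same three features).
    train_labels: 1 = background, 2 = nerve, 3 = prostate gland
                  (positive labels leave 0 free for postprocessing).
    """
    # Feature functions from the earlier sketches.
    feats = np.dstack([gabor_feature(f), wavelet_feature(f), laws_feature(f)])
    clf = KNeighborsClassifier(n_neighbors=k)  # Euclidean distance by default
    clf.fit(train_pixels, train_labels)
    # Majority vote among the k nearest training samples, per pixel.
    labels = clf.predict(feats.reshape(-1, 3))
    return labels.reshape(f.shape)
```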

2.5.

N-ary Morphological Postprocessing

The N-ary morphological postprocessing method for eliminating small misclassified regions proceeds in two steps.15 In the first step, pixels whose neighborhood consists entirely of one class in the classified image are left unchanged; otherwise, the pixel value is set to zero to indicate that the pixel is no longer assigned to any class. In the second step, each unassigned pixel is assigned to the most prevalent class within its surrounding 8-neighborhood.
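A minimal sketch of this two-step procedure follows. The 3×3 neighborhood in step 1 and the iteration in step 2 (which handles unassigned pixels that initially have no assigned neighbors) are assumptions about the Ref. 15 procedure:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def nary_postprocess(labels):
    """Two-step N-ary cleanup of the classified image (a sketch of
    the Ref. 15 procedure). Class labels are positive integers;
    0 marks unassigned pixels."""
    labels = labels.astype(int)
    # Step 1: a pixel keeps its class only if its whole 3x3
    # neighborhood agrees; otherwise it becomes unassigned (0).
    uniform = minimum_filter(labels, size=3) == maximum_filter(labels, size=3)
    out = np.where(uniform, labels, 0)
    # Step 2: assign each unassigned pixel to the most prevalent class
    # among its 8 neighbors, iterating until no pixel changes.
    while True:
        prev = out.copy()
        for y, x in zip(*np.nonzero(prev == 0)):
            nbhd = prev[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            assigned = nbhd[nbhd > 0]
            if assigned.size:
                out[y, x] = np.bincount(assigned).argmax()
        if np.array_equal(out, prev):
            break
    return out
```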

3.

Results

OCT images were taken in vivo in a rat model using a clinical endoscopic OCT system (Imalux, Cleveland, Ohio) based on an all single-mode fiber (SMF) common-path interferometer-based scanning system (Optiphase, Van Nuys, California). Mathcad 14.0 (Parametric Technology Corporation, Needham, Massachusetts) was used for implementation of the segmentation algorithm described earlier.

Figures 3a, 3c, 3e show the original OCT images of the cavernous nerves at different orientations (longitudinal, cross-sectional, and oblique) coursing along the surface of the rat prostate. Figures 3b, 3d, 3f show the same OCT images after segmentation using the system of Fig. 1. The cavernous nerves could be differentiated from the prostate gland using this segmentation algorithm.

Fig. 3

OCT images of the rat cavernous nerve: (a) and (b) longitudinal section; (c) and (d) cross section; (e) and (f) oblique section. (a), (c), and (e) Before; and (b), (d), and (f) after segmentation.


The error rate was calculated as Error = (number of error pixels)/(total number of pixels), where the number of error pixels is the sum of false positives and false negatives. The overall error rate for the segmentation was 0.058 with a standard deviation of 0.019, indicating the robustness of the technique. The error rate was computed as the mean over three sample images at different orientations (longitudinal, cross-sectional, and oblique); a different image was used for training. The error rate was determined by comparing manually segmented images to the automatically segmented images. These manually segmented images of the cavernous nerves were previously created according to histologic correlation with OCT images.12
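In code, the stated metric amounts to the following sketch; treating "nerve" as the positive class, with the label value carried over from the earlier sketches, is an assumption:

```python
import numpy as np

def error_rate(auto_seg, manual_seg, nerve=2):
    """Error = (false positives + false negatives) / total pixels,
    comparing the automatic segmentation against a manually
    segmented reference. The nerve label value is an assumption."""
    fp = np.sum((auto_seg == nerve) & (manual_seg != nerve))
    fn = np.sum((auto_seg != nerve) & (manual_seg == nerve))
    return float(fp + fn) / auto_seg.size
```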

Overall, the proposed image segmentation system of Fig. 1 performed well for identification of the cavernous nerves in the prostate. One area needing improvement is the classification of the prostate gland: a few small scattered regions (shown in white) within the gland are erroneously segmented as nerve [e.g., Fig. 3b]. For the present study, it was advantageous to vary the Gabor filter parameters manually so that the filter's efficacy could be observed directly in the filtered images. Based on prior investigations,13, 16 the present results demonstrate the potential of our overall approach, although future work could include automation of Gabor filter parameter selection. Cross-validation, parameter optimization, and evaluation of alternative classifiers could also be performed. Nevertheless, our current results provide a foundation for more comprehensive studies.

Last, it should be noted that the rat model represents an idealized version of the prostate anatomy because the cavernous nerve lies on the surface of the prostate and is therefore directly visible. However, in the human anatomy, there may be intervening tissue between the OCT probe and the nerves, making identification more difficult. An important advantage of the proposed classifier-based segmentation approach is that the classifier should also be able to locate the cavernous nerve when it lies at various depths beneath the surface.

4.

Conclusion

This algorithm for image segmentation of the prostate nerves may prove useful for implementation in clinical endoscopic OCT systems currently being studied for use in laparoscopic and robotic nerve-sparing prostate cancer surgery.

Acknowledgments

This research was supported by the Department of Defense Prostate Cancer Research Program, Grant No. PC073709. The authors thank Nancy Tresser of Imalux Corporation (Cleveland, Ohio) for lending us the Niris OCT system for these studies.

References

1. A. Burnett, G. Aus, E. Canby-Hagino, M. Cookson, A. D'Amico, R. Dmochowski, D. Eton, J. Forman, S. Goldenberg, J. Hernandez, C. Higano, S. Kraus, M. Liebert, J. Moul, C. Tangen, J. Thrasher, and I. Thompson, "Erectile function outcome reporting after clinically localized prostate cancer treatment," J. Urol. 178, 597–601 (2007).

2. D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, C. Puliafito, and J. Fujimoto, "Optical coherence tomography," Science 254, 1178–1181 (1991). https://doi.org/10.1126/science.1957169

3. M. Aron, J. Kaouk, N. Hegarty, J. Colombo, G. Haber, B. Chung, M. Zhou, and I. Gill, "Preliminary experience with the Niris optical coherence tomography system during laparoscopic and robotic prostatectomy," J. Endourol. 21, 814–818 (2007). https://doi.org/10.1089/end.2006.9938

4. N. Fried, S. Rais-Bahrami, G. Lagoda, A. Chuang, A. Burnett, and L. Su, "Imaging the cavernous nerves in rat prostate using optical coherence tomography," Lasers Surg. Med. 39, 36–41 (2007). https://doi.org/10.1002/lsm.20454

5. S. Rais-Bahrami, A. Levinson, N. Fried, G. Lagoda, A. Hristov, A. Chuang, A. Burnett, and L. Su, "Optical coherence tomography of cavernous nerves: a step toward real-time intraoperative imaging during nerve-sparing radical prostatectomy," Urology 72, 198–204 (2008). https://doi.org/10.1016/j.urology.2007.11.084

6. D. Freedman, R. Radke, T. Zhang, Y. Jeong, D. Lovelock, and G. Chen, "Model-based segmentation of medical imagery by matching distributions," IEEE Trans. Med. Imaging 24, 281–292 (2005). https://doi.org/10.1109/TMI.2004.841228

7. Y. Zhan and D. Shen, "Deformable segmentation of 3-D ultrasound prostate images using statistical texture matching method," IEEE Trans. Med. Imaging 25, 256–272 (2006). https://doi.org/10.1109/TMI.2005.862744

8. H. Ishikawa, D. Stein, G. Wollstein, S. Beaton, J. Fujimoto, and J. Schuman, "Macular segmentation with optical coherence tomography," Invest. Ophthalmol. Visual Sci. 46, 2012–2017 (2005). https://doi.org/10.1167/iovs.04-0335

9. A. Bagci, R. Ansari, and M. Shahidi, "A method for detection of retinal layers by optical coherence tomography image segmentation," 144 (2007).

10. K. Boyer, A. Herzog, and C. Roberts, "Automatic recovery of the optic nervehead geometry in optical coherence tomography," IEEE Trans. Med. Imaging 25, 553–570 (2006). https://doi.org/10.1109/TMI.2006.871417

11. M. Mujat, R. Chan, B. Cense, B. Park, C. Joo, T. Akkin, T. Chen, and J. de Boer, "Retinal nerve fiber layer thickness map determined from optical coherence tomography images," Opt. Express 13, 9480–9491 (2005). https://doi.org/10.1364/OPEX.13.009480

12. S. Chitchian, M. Fiddy, and N. Fried, "Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform," J. Biomed. Opt. 14, 014031 (2009). https://doi.org/10.1117/1.3081543

13. T. Weldon, W. Higgins, and D. Dunn, "Efficient Gabor filter design for texture segmentation," Pattern Recogn. 29, 2005–2015 (1996). https://doi.org/10.1016/S0031-3203(96)00047-7

14. W. Pratt, Digital Image Processing, Wiley, Hoboken, NJ (2007).

15. T. Weldon, "Removal of image segmentation boundary errors using an N-ary morphological operator," 509 (2007).

16. T. Weldon and W. Higgins, "Designing multiple Gabor filters for multitexture image segmentation," Opt. Eng. 38, 1478–1489 (1999). https://doi.org/10.1117/1.602196
©2009 Society of Photo-Optical Instrumentation Engineers (SPIE)

Shahab Chitchian, Thomas P. Weldon, and Nathaniel M. Fried, "Segmentation of optical coherence tomography images for differentiation of the cavernous nerves from the prostate gland," J. Biomed. Opt. 14(4), 044033 (1 July 2009). https://doi.org/10.1117/1.3210767