1 July 2010 Combined image-processing algorithms for improved optical coherence tomography of prostate nerves
Abstract
Cavernous nerves course along the surface of the prostate gland and are responsible for erectile function. These nerves are at risk of injury during surgical removal of a cancerous prostate gland. In this work, a combination of segmentation, denoising, and edge detection algorithms is applied to time-domain optical coherence tomography (OCT) images of rat prostate to improve identification of cavernous nerves. First, OCT images of the prostate are segmented to differentiate the cavernous nerves from the prostate gland. Then, a locally adaptive denoising algorithm using a dual-tree complex wavelet transform is applied to reduce speckle noise. Finally, edge detection is used to provide deeper imaging of the prostate gland. Combined application of these three algorithms results in improved signal-to-noise ratio, imaging depth, and automatic identification of the cavernous nerves, which may be of direct benefit for use in laparoscopic and robotic nerve-sparing prostate cancer surgery.

1.

Introduction

Preservation of cavernous nerves during radical prostatectomy for prostate cancer is critical for preserving sexual function after surgery. These nerves are at risk of injury during dissection and removal of a cancerous prostate gland because of the close proximity of the nerves to the prostate surface (Fig. 1 ). Their microscopic nature also makes it difficult to predict the true course and location of these nerves from one patient to another. These observations may explain in part the wide variability in reported potency rates (9 to 86%) following prostate cancer surgery.1 Therefore, any technology capable of providing improved identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery would aid the preservation of the nerves and improve postoperative sexual potency.

Fig. 1

(a) Cross sectional diagram of the human prostate showing the location of the neurovascular bundles and their close proximity to the prostate surface. The dotted line indicates the route of dissection between the prostatic capsule and the neurovascular bundle. (b) Image of human prostate during surgery. Arrows indicate the surgical dissection plane, and the dashed line indicates the position of the periprostatic neurovascular bundle under a superficial layer of fascia.


OCT is a noninvasive optical imaging technique that can be used to perform high-resolution, cross sectional in vivo and in situ imaging of microstructures in biological tissues.2 OCT imaging of cavernous nerves in rat and human prostate has recently been demonstrated.3, 4, 5, 6 However, improvements in the quality of the OCT images for identification of the cavernous nerves are necessary before clinical use.

For the present work, OCT images were acquired in vivo using a clinical endoscopic OCT system (Imalux, Cleveland, Ohio) based on an all single-mode fiber common-path interferometer-based scanning system (Optiphase, Van Nuys, California). An 8-Fr (2.6-mm-OD) probe was used with the OCT system. The system is capable of acquiring real-time images at 200 × 200 pixels with 11-μm axial and 25-μm lateral resolutions in tissue.

The following study describes a step-by-step approach that employs three complementary image processing algorithms (Fig. 2 ) for improving identification and imaging of the cavernous nerves during OCT of the prostate gland. In previous work, a segmentation approach was successfully used to identify the cavernous nerves.7 However, it has proven challenging to image deeper prostate tissues with OCT. Therefore, the segmentation system in the left branch of Fig. 2 is augmented by the denoising and edge detection systems in the right branch of Fig. 2. This edge detection system is later shown to improve OCT imaging of deeper prostate tissue structures.

Fig. 2

Flow chart describing a step-by-step application of complementary image processing algorithms for OCT of the prostate nerves.


In the left branch of Fig. 2, 2-D OCT images of rat prostate are segmented to differentiate the cavernous nerves from the prostate gland. It should be noted that ultrasound image segmentation of the prostate, which allows clinicians to design an accurate brachytherapy treatment plan for prostate cancer, has been previously reported.8 Various alternative segmentation approaches have also recently been applied in retinal OCT imaging.9, 10, 11, 12, 13, 14, 15, 16 However, large irregular voids in prostate OCT images require a segmentation approach different from that used for segmentation of the more regular structure of retinal layers. Therefore, to detect the cavernous nerves, three image features are employed: a Gabor filter, a Daubechies wavelet, and a Laws filter. The Gabor feature is applied with different standard deviations in the x and y directions. In the Daubechies wavelet feature, an eight-tap Daubechies orthonormal wavelet is implemented, and the low-pass subband is chosen as the filtered image. Finally, Laws feature extraction is applied to the images. The features are segmented using a nearest-neighbor classifier. N-ary morphological postprocessing is used to remove small voids.

As a next step to improve OCT imaging of the prostate gland, wavelet denoising is applied. Recently, wavelet techniques have been employed successfully in speckle noise reduction for MRI, ultrasound, and OCT images.17, 18, 19 A locally adaptive denoising algorithm is applied before edge detection to reduce speckle noise in OCT images of the prostate.20 The denoising algorithm is illustrated using the dual-tree complex wavelet transform. After wavelet denoising, an edge detection algorithm based on thresholding and spatial first-order differentiation is implemented to provide deeper imaging of the prostate gland. This algorithm addresses one of the main limitations in OCT imaging of the prostate tissue, which is the inability to image deep into the prostate. Currently, OCT is limited to an image depth of approximately 1 mm in most opaque soft tissues. In the following sections, a segmentation approach is first described, followed by details of denoising and edge detection approaches.

2.

Segmentation System

The input image is first processed to form three feature images. The prostate image is then segmented into nerve, prostate, and background classes using a k-nearest neighbors classifier and the three feature images. Finally, N-ary morphology is used for postprocessing. The generation of the feature images is first described, followed by descriptions of the classifier and postprocessing.

2.1.

Gabor Filter

The first feature image is generated by a Gabor filter with impulse response h(x,y),21

Eq. 1

h(x,y)=g(x,y)exp[j2π(Ux+Vy)],
where

Eq. 2

g(x,y) = [1/(2πσxσy)] exp[−(1/2)(x²/σx² + y²/σy²)].

The Gabor function is essentially a bandpass filter centered about frequency (U,V) with bandwidth determined by σx, σy. The Gabor feature center frequency of (0.2, 0.2) cycles/pixel is applied with standard deviations of 3 and 6 in the x and y directions, respectively, based on experimental observation of minimum segmentation error.
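As a rough illustration, Eqs. 1, 2 with the center frequency and standard deviations above can be sketched in Python. The kernel half-width and the use of the filter-response magnitude as the feature image are assumptions for this sketch, not details given in the text:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(U=0.2, V=0.2, sigma_x=3.0, sigma_y=6.0, half=10):
    """Complex Gabor impulse response h(x,y) of Eqs. (1)-(2)."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    g = np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2))
    g /= 2.0 * np.pi * sigma_x * sigma_y
    return g * np.exp(2j * np.pi * (U * x + V * y))

def gabor_feature(image):
    """Magnitude of the Gabor-filtered image, taken as the feature image."""
    h = gabor_kernel()
    return np.abs(convolve2d(image, h, mode='same', boundary='symm'))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
feat = gabor_feature(img)
```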

2.2.

Daubechies Wavelet Transform

The second feature is generated by the eight-tap Daubechies orthonormal wavelet transform. The discrete wavelet transform (DWT) converts a signal to its wavelet representation. In a one-level DWT, the image c0 is split into an approximation part c1 and detail parts d1¹, d1², and d1³ for horizontal, vertical, and diagonal orientations, respectively. In a multilevel DWT, each subsequent ci is split into an approximation ci+1 and details di+1¹, di+1², and di+1³. In the present work, the approximation part c1 is chosen as the filtered image for the second feature.
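The one-level DWT described above can be sketched in numpy with the 8-tap Daubechies scaling filter hard-coded. Circular boundary handling is an assumption of this sketch; the paper does not state its extension scheme:

```python
import numpy as np

# Eight-tap Daubechies (db4) scaling filter; its quadrature mirror gives
# the wavelet (high-pass) filter.
H = np.array([0.23037781, 0.71484657, 0.63088077, -0.02798377,
              -0.18703481, 0.03084138, 0.03288301, -0.01059740])
G = np.array([(-1) ** k * H[7 - k] for k in range(8)])

def analyze(x, f):
    """Circular convolution along the last axis, then downsample by 2."""
    y = np.zeros(x.shape, dtype=float)
    for k, fk in enumerate(f):
        y += fk * np.roll(x, -k, axis=-1)
    return y[..., ::2]

def dwt2_level1(image):
    """One-level separable 2-D DWT: approximation c1 plus three details."""
    lo = analyze(image, H)            # low-pass rows
    hi = analyze(image, G)            # high-pass rows
    c1 = analyze(lo.T, H).T           # approximation (the feature image)
    d1 = analyze(lo.T, G).T           # horizontal detail
    d2 = analyze(hi.T, H).T           # vertical detail
    d3 = analyze(hi.T, G).T           # diagonal detail
    return c1, d1, d2, d3

rng = np.random.default_rng(1)
img = rng.random((32, 32))
c1, d1, d2, d3 = dwt2_level1(img)
```

Because the filter pair is orthonormal, the subbands conserve the energy of the input image, which is a quick correctness check on the implementation.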

2.3.

Laws Filter

The third feature is generated by the Laws feature extraction method. The Laws 2 mask h(x,y)22 is convolved with the image to accentuate its microstructure. The microstructure image m(x,y) is defined as

Eq. 3

m(x,y) = f(x,y) ⊛ h(x,y),
where

Eq. 4

h = (1/12) [ 1 0 −1 ; 2 0 −2 ; 1 0 −1 ].

Then, standard deviation computation is performed after the Laws mask filtering to complete the Laws feature extraction.
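A minimal sketch of this feature, assuming a local (windowed) standard deviation after the mask convolution; the 7 × 7 window size is an illustrative choice, not specified in the text:

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Laws 2 mask of Eq. (4)
LAWS2 = np.array([[1, 0, -1],
                  [2, 0, -2],
                  [1, 0, -1]]) / 12.0

def laws_feature(image, win=7):
    """Microstructure image m = f ⊛ h, followed by a local standard deviation."""
    m = convolve(image.astype(float), LAWS2, mode='reflect')
    mean = uniform_filter(m, size=win)
    mean_sq = uniform_filter(m * m, size=win)
    # local variance = E[m^2] - E[m]^2, clipped at zero for numerical safety
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

# a featureless (constant) image should produce a zero feature map
flat = np.ones((32, 32))
feat_flat = laws_feature(flat)
```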

2.4.

K-Nearest Neighbor Classifier and Postprocessing

The k-nearest neighbors (k-NN) algorithm is a method for classifying objects based on the k closest training samples in the feature space. It is implemented by training, parameter selection, and classification steps, followed by N-ary morphological postprocessing to eliminate small misclassified regions.7
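A toy k-NN classifier over feature vectors might look like the following sketch; the two synthetic clusters merely stand in for nerve and gland feature vectors, and k = 5 is illustrative:

```python
import numpy as np

def knn_classify(train_feats, train_labels, test_feats, k=5):
    """Label each test vector by majority vote among its k nearest
    training samples (Euclidean distance in feature space)."""
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_labels[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

# two well-separated clusters standing in for the two classes
rng = np.random.default_rng(2)
tr = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(5, 0.1, (20, 3))])
lab = np.array([0] * 20 + [1] * 20)
te = np.vstack([rng.normal(0, 0.1, (5, 3)), rng.normal(5, 0.1, (5, 3))])
pred = knn_classify(tr, lab, te, k=5)
```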

3.

Wavelet Shrinkage Denoising

Wavelet shrinkage is denoising by shrinking (nonlinear soft thresholding) in the wavelet transform domain. The observed image X is modeled as an uncorrupted image S corrupted by multiplicative speckle noise N. Taking the logarithm converts the speckle to additive noise, X = S + N on the logarithmic scale. The wavelet shrinkage denoising algorithm requires the following four-step procedure,20

Eq. 5

Y = W(X), λ = d(Y), Z = D(Y,λ), Ŝ = W⁻¹(Z),
where the operator W(·) denotes the wavelet transform, d(·) selects a data-adaptive threshold, D(·,λ) denotes the denoising operator with threshold λ, and W⁻¹(·) denotes the inverse wavelet transform.
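The four-step procedure of Eq. 5 can be sketched in one dimension, using a one-level Haar transform as a simple stand-in for the paper's dual-tree transform and the universal threshold as a stand-in for the data-adaptive rule d(·); both substitutions are assumptions for illustration only:

```python
import numpy as np

def haar1(x):
    """One-level Haar DWT of an even-length signal: Y = W(X)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar1(a, d):
    """Inverse one-level Haar DWT: W^-1(Z)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(y, lam):
    """Soft-thresholding denoising operator D(Y, lambda)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def shrinkage_denoise(x):
    a, d = haar1(x)                               # Y = W(X)
    sigma = np.median(np.abs(d)) / 0.6745         # noise estimate from details
    lam = sigma * np.sqrt(2 * np.log(len(x)))     # lambda = d(Y), universal rule
    return ihaar1(a, soft(d, lam))                # S_hat = W^-1(D(Y, lambda))

rng = np.random.default_rng(3)
clean = np.repeat([0.0, 4.0, 1.0, 3.0], 64)      # piecewise-constant signal
noisy = clean + 0.3 * rng.standard_normal(clean.size)
denoised = shrinkage_denoise(noisy)
```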

3.1.

Two-Dimensional Dual-Tree Complex Wavelet Transform

In the proposed method, the dual-tree complex wavelet transform (CDWT) calculates the complex transform of a signal using two separate DWT decompositions. If the filters of one tree are designed appropriately relative to those of the other, one DWT produces the real part of the complex coefficients and the other the imaginary part. This twofold redundancy provides extra information for analysis at the expense of extra computation.

In the proposed CDWT, wavelet coefficients are calculated from the Farras nearly symmetric wavelet.23

3.2.

Shrinkage Denoising

Bivariate shrinkage with a local variance estimation algorithm24 is applied for shrinkage denoising. After estimating the signal components of the noisy coefficients in the wavelet domain, the inverse wavelet transform is taken to reconstruct the noise-free image.
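The Sendur-Selesnick bivariate shrinkage rule shrinks each wavelet coefficient w1 jointly with its parent coefficient w2. The following is a sketch of the pointwise rule only; the local variance estimation step of the full algorithm is simplified here to fixed noise and signal parameters σn and σ:

```python
import numpy as np

def bivariate_shrink(w1, w2, sigma_n, sigma):
    """Bivariate MAP shrinkage of a coefficient w1 given its parent w2:
    w1_hat = (r - sqrt(3)*sigma_n^2/sigma)_+ / r * w1, r = sqrt(w1^2 + w2^2)."""
    r = np.sqrt(w1 ** 2 + w2 ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    return np.where(r > 0, gain / np.maximum(r, 1e-12) * w1, 0.0)

# small coefficients with a small parent are suppressed to zero...
small = bivariate_shrink(np.array([0.1]), np.array([0.1]), sigma_n=1.0, sigma=1.0)
# ...while large coefficients are nearly preserved
large = bivariate_shrink(np.array([100.0]), np.array([100.0]), sigma_n=1.0, sigma=1.0)
```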

4.

Edge Detection System

A block diagram of the edge detection system is shown in Fig. 3. After luminance thresholding on the input image f(x,y), first-order spatial differentiation along two orthogonal directions is performed to produce the differential image g(x,y) with accentuated spatial amplitude changes. Morphological postprocessing is then used to accentuate edges.

Fig. 3

Edge detection system block diagram.


4.1.

Luminance Thresholding

In this step, glandular structure of the prostate is judged present wherever the luminance exceeds a background threshold level. The threshold is taken from the centers of the glandular structures, below their boundaries, in the denoised prostate image, because the boundaries of these glandular structures lie at a superficial level.

4.2.

Orthogonal Gradient Generation

After applying the threshold level to the denoised image f(x,y) , a form of spatial first-order differentiation is performed in two orthogonal directions. In the discrete domain, the gradient in each direction is generated by22

Eq. 6

gr,c(x,y) = f(x,y) ⊛ hr,c(x,y),
where

Eq. 7

hr = (1/4) [ 1 0 −1 ; 2 0 −2 ; 1 0 −1 ],   hc = (1/4) [ 1 2 1 ; 0 0 0 ; −1 −2 −1 ],
are the row and column impulse response arrays for the 3×3 Sobel orthogonal gradient operator.

The gradient amplitude is approximated by the magnitude combination

Eq. 8

g(x,y)=|gr(x,y)|+|gc(x,y)|.
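Eqs. 6, 7, 8 amount to the following sketch; the image boundary handling is an assumption:

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Sobel row and column impulse response arrays of Eq. (7)
HR = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]) / 4.0
HC = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) / 4.0

def sobel_gradient(f):
    """Gradient amplitude g = |g_r| + |g_c| of Eq. (8)."""
    gr = convolve(f.astype(float), HR, mode='nearest')
    gc = convolve(f.astype(float), HC, mode='nearest')
    return np.abs(gr) + np.abs(gc)

# a vertical step edge: the response concentrates on the edge columns
step = np.zeros((8, 8))
step[:, 4:] = 1.0
g = sobel_gradient(step)
```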

4.3.

Morphological Postprocessing

Morphological postprocessing for accentuating edges proceeds by a closing operation, implemented as dilation followed by erosion.
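A sketch of the closing operation on a binary edge map, with a structuring element chosen purely for illustration:

```python
import numpy as np
from scipy.ndimage import binary_closing

# a binary edge map with a one-pixel break in a horizontal line
edges = np.zeros((7, 7), dtype=bool)
edges[3, 1:6] = True
edges[3, 3] = False                  # small gap in the edge

# closing (dilation followed by erosion) bridges the gap
closed = binary_closing(edges, structure=np.ones((1, 3), dtype=bool))
```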

5.

Combined Algorithms

Figure 2 shows the order of the combined algorithms. The segmentation algorithm was applied to differentiate the cavernous nerves from the prostate gland; it is independent of the denoising process. However, the edge detection algorithm, which provides deeper imaging of the prostate gland based on thresholding and spatial first-order differentiation, depends on the denoising process because edges are sensitive to noise. The input image was therefore denoised first, and edge detection was then implemented. With a noisy image, threshold selection becomes a tradeoff between missing valid edges and creating noise-induced false edges.

The algorithms were executed on a Core 2 Duo, 1.86-GHz desktop personal computer. The two parallel branches of Fig. 2 required 8 s (denoising and edge detection) and 10 s (segmentation), so the total time for the combined processing algorithms was 10 s.

6.

Results

The unprocessed time-domain (TD)-OCT images of the cavernous nerves at different orientations (longitudinal, cross sectional, and oblique) along the surface of the rat prostate are shown in Figs. 4a, 4c, and 4e. Histologic sections of the cavernous nerves were previously processed for comparison.20

Fig. 4

OCT images of the rat cavernous nerve: (a) and (b) longitudinal section; (c) and (d) cross section; (e) and (f) oblique section. (a), (c), and (e) show before; and (b), (d), and (f) show after denoising.


Figures 4b, 4d, and 4f show the images after denoising using the CDWT. The global signal-to-noise ratio (SNR) is calculated as

Eq. 9

SNR = 10 × log10[max(Xlin)²/σlin²],
where Xlin is the 2-D matrix of pixel values in the OCT image and σlin² is the noise variance, both on linear intensity scales.25 The mean SNR for nine sample images was 26.65 dB before and 40.87 dB after denoising. Therefore, an SNR increase of approximately 14 dB was attained.
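Eq. 9 can be computed directly. In this sketch the noise variance is estimated from a background region of the linear-scale image; how σlin² was estimated is not specified in the text, so that choice is an assumption:

```python
import numpy as np

def global_snr_db(x_lin, noise_region):
    """SNR = 10*log10(max(X_lin)^2 / sigma_lin^2) of Eq. (9), with the noise
    variance taken from a background region of the linear-scale image."""
    sigma2 = np.var(noise_region)
    return 10.0 * np.log10(x_lin.max() ** 2 / sigma2)

# synthetic check: peak value 10 over unit-variance noise -> about 20 dB
rng = np.random.default_rng(4)
noise = rng.standard_normal(10000)
img = noise.copy()
img[0] = 10.0
snr = global_snr_db(img, noise)
```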

Figures 5a, 5c, and 5e show the same OCT images of Figs. 4a, 4c, and 4e after segmentation. The cavernous nerves could be differentiated from the prostate gland using the segmentation algorithm. The error rate was calculated as error = (number of error pixels)/(number of total pixels), where the number of error pixels is the number of false positives plus the number of false negatives. The overall error rate for segmentation of the nerves was 0.058 with a standard deviation of 0.019, indicating the robustness of our technique. The error rate was measured as the mean of error measurements for three sample images at different orientations (longitudinal, cross sectional, and oblique); a different image was used for training. The error rate was determined by comparing manually segmented images to the automatically segmented images of the nerves. These manually segmented images of the cavernous nerves were previously created according to histologic correlation with OCT images.20 Figures 5b, 5d, and 5f combine edge detection of the denoised images with the segmentation results. Manual segmentation of the prostate gland was implemented to calculate the performance of the edge detection algorithm. The overall error rate for segmentation of the prostate gland was 0.076 with a standard deviation of 0.022.
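The error rate computation above, on a toy pair of segmentation masks:

```python
import numpy as np

def segmentation_error(pred, truth):
    """error = (false positives + false negatives) / total pixels."""
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    return (fp + fn) / pred.size

truth = np.zeros((10, 10), dtype=int)
truth[2:5, 2:5] = 1        # 9 "nerve" pixels in a 100-pixel image
pred = truth.copy()
pred[2, 2] = 0             # one false negative
pred[8, 8] = 1             # one false positive
err = segmentation_error(pred, truth)
```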

Fig. 5

OCT images of the rat cavernous nerve: (a) and (b) longitudinal section; (c) and (d) cross section; (e) and (f) oblique section. (a), (c), and (e) show segmented and (b), (d), and (f) show edge detected images.


7.

Discussion

The proposed edge detection approach was successful in accentuating prostate structures deeper in the tissue, and the cavernous nerves could be differentiated from the prostate gland using the segmentation algorithm. The glandular structure of the prostate could be observed to a depth of approximately 1.6 mm in Figs. 5b, 5d, and 5f, compared with approximately 1 mm in the unprocessed OCT images of Figs. 4a, 4c, and 4e. Overall, the edge detection technique enhanced structures deeper in the prostate gland, and the proposed image segmentation algorithm performed well for identification of the cavernous nerves in the prostate.

It should also be noted that the rat model used in this study represents an idealized version of the prostate anatomy, because the cavernous nerve lies on the surface of the prostate, and is therefore directly visible. However, in human anatomy, there can be an intervening layer of fascia (Fig. 1) between the OCT probe and the nerves, making identification more difficult. Since one major limitation of OCT is its superficial imaging depth in opaque tissues, an important advantage of these image processing algorithms is that the final OCT image should be able to provide deeper imaging in the tissue and locate the cavernous nerve when it lies at various depths beneath periprostatic tissues.

8.

Conclusion

The segmentation technique is applied to differentiate cavernous nerves from the prostate gland in rat prostate. The wavelet shrinkage denoising technique using a dual-tree complex wavelet transform is used for speckle noise reduction, and by using edge detection, deeper imaging of the prostate gland is accomplished. These algorithms for image segmentation, denoising, and edge detection of the prostate may be of direct benefit for implementation in clinical endoscopic OCT systems currently being studied for use in laparoscopic and robotic nerve-sparing prostate cancer surgery.

Acknowledgments

This research was supported in part by the Department of Defense Prostate Cancer Research Program, grant number PC073709, and the Department of Energy, grant number DE-FG02-06CH11460. The authors thank Paul Amazeen and Nancy Tresser of Imalux Corporation (Cleveland, Ohio) for providing the OCT system used in these studies.

References

1. A. Burnett, G. Aus, E. Canby-Hagino, M. Cookson, A. D’Amico, R. Dmochowski, D. Eton, J. Forman, S. Goldenberg, J. Hernandez, C. Higano, S. Kraus, M. Liebert, J. Moul, C. Tangen, J. Thrasher, and I. Thompson, “Function outcome reporting after clinically localized prostate cancer treatment,” J. Urol. 178, 597–601 (2007).

2. D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, C. Puliafito, and J. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991). https://doi.org/10.1126/science.1957169

3. M. Aron, J. Kaouk, N. Hegarty, J. Colombo, G. Haber, B. Chung, M. Zhou, and I. Gill, “Preliminary experience with the Niris optical coherence tomography system during laparoscopic and robotic prostatectomy,” J. Endourol. 21, 814–818 (2007). https://doi.org/10.1089/end.2006.9938

4. N. Fried, S. Rais-Bahrami, G. Lagoda, A. Chuang, A. Burnett, and L. Su, “Imaging the cavernous nerves in rat prostate using optical coherence tomography,” Lasers Surg. Med. 39, 36–41 (2007). https://doi.org/10.1002/lsm.20454

5. N. M. Fried, S. Rais-Bahrami, G. A. Lagoda, A.-Y. Chuang, L.-M. Su, and A. L. Burnett, “Identification and imaging of the nerves responsible for erectile function in rat prostate, in vivo, using optical nerve stimulation and optical coherence tomography,” IEEE J. Sel. Top. Quantum Electron. 13, 1641–1645 (2007). https://doi.org/10.1109/JSTQE.2007.910119

6. S. Rais-Bahrami, A. W. Levinson, N. M. Fried, G. A. Lagoda, A. Hristov, Y. Chuang, A. L. Burnett, and L.-M. Su, “Optical coherence tomography of cavernous nerves: a step toward real-time intraoperative imaging during nerve-sparing radical prostatectomy,” Urology 72, 198–204 (2008). https://doi.org/10.1016/j.urology.2007.11.084

7. S. Chitchian, T. Weldon, and N. Fried, “Segmentation of optical coherence tomography images for differentiation of the cavernous nerves from the prostate gland,” J. Biomed. Opt. 14(4), 044033 (2009). https://doi.org/10.1117/1.3210767

8. J. Noble and D. Boukerroui, “Ultrasound image segmentation: a survey,” IEEE Trans. Med. Imaging 25, 987–1010 (2006). https://doi.org/10.1109/TMI.2006.877092

9. D. Cabrera Fernández, H. M. Salinas, and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13, 10200–10216 (2005). https://doi.org/10.1364/OPEX.13.010200

10. M. Szkulmowski, M. Wojtkowski, B. Sikorski, T. Bajraszewski, V. Srinivasan, A. Szkulmowska, J. Kaluzny, J. Fujimoto, and A. Kowalczyk, “Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies,” J. Biomed. Opt. 12(4), 041207 (2007). https://doi.org/10.1117/1.2771569

11. M. Haeker, M. Sonka, R. Kardon, V. Shah, X. Wu, and M. Abramoff, “Automated segmentation of intraretinal layers from macular optical coherence tomography images,” Proc. SPIE 6512, 651214 (2007). https://doi.org/10.1117/12.710231

12. M. Garvin, M. Abramoff, R. Kardon, S. Russell, X. Wu, and M. Sonka, “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Trans. Med. Imaging 27, 1495–1505 (2008). https://doi.org/10.1109/TMI.2008.923966

13. E. Gotzinger, M. Pircher, W. Geitzenauer, C. Ahlers, B. Baumann, S. Michels, U. Schmidt-Erfurth, and C. Hitzenberger, “Retinal pigment epithelium segmentation by polarization sensitive optical coherence tomography,” Opt. Express 16, 16410–16422 (2008). https://doi.org/10.1364/OE.16.016410

14. C. Ahlers, C. Simader, W. Geitzenauer, G. Stock, P. Stetson, S. Dastmalchi, and U. Schmidt-Erfurth, “Automatic segmentation in three-dimensional analysis of fibrovascular pigmentepithelial detachment using high-definition optical coherence tomography,” Br. J. Ophthalmol. 92, 197–203 (2008). https://doi.org/10.1136/bjo.2007.120956

15. T. Fabritius, S. Makita, M. Miura, R. Myllyla, and Y. Yasuno, “Automated segmentation of the macula by optical coherence tomography,” Opt. Express 17, 15659–15669 (2009). https://doi.org/10.1364/OE.17.015659

16. A. Mishra, A. Wong, K. Bizheva, and D. Clausi, “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express 17, 23719–23728 (2009). https://doi.org/10.1364/OE.17.023719

17. D. Adler, T. Ko, and J. Fujimoto, “Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter,” Opt. Lett. 29, 2878–2880 (2004). https://doi.org/10.1364/OL.29.002878

18. A. Pizurica, A. Wink, E. Vansteenkiste, W. Philips, and J. Roerdink, “A review of wavelet denoising in MRI and ultrasound brain imaging,” Curr. Med. Imag. Rev. 2, 247–260 (2006). https://doi.org/10.2174/157340506776930665

19. A. Pizurica, L. Jovanov, B. Huysmans, V. Zlokolica, P. Keyser, F. Dhaenens, and W. Philips, “Multiresolution denoising for optical coherence tomography: a review and evaluation,” Curr. Med. Imag. Rev. 4, 270–284 (2008). https://doi.org/10.2174/157340508786404044

20. S. Chitchian, M. Fiddy, and N. Fried, “Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform,” J. Biomed. Opt. 14(1), 014031 (2009). https://doi.org/10.1117/1.3081543

21. T. Weldon, W. Higgins, and D. Dunn, “Efficient Gabor filter design for texture segmentation,” Pattern Recogn. 29, 2005–2015 (1996). https://doi.org/10.1016/S0031-3203(96)00047-7

22. W. Pratt, Digital Image Processing, Wiley, New York (2007).

23. A. Abdelnour and I. Selesnick, “Design of 2-band orthogonal near-symmetric CQF,” 3693 (2001).

24. L. Sendur and I. Selesnick, “Bivariate shrinkage with local variance estimation,” IEEE Signal Process. Lett. 9, 438–441 (2002). https://doi.org/10.1109/LSP.2002.806054

25. S. Xiang, L. Zhou, and J. Schmitt, “Speckle noise reduction for optical coherence tomography,” Proc. SPIE 3196, 79–88 (1998). https://doi.org/10.1117/12.297921
©(2010) Society of Photo-Optical Instrumentation Engineers (SPIE)
Shahab Chitchian, Thomas P. Weldon, Michael A. Fiddy, and Nathaniel M. Fried "Combined image-processing algorithms for improved optical coherence tomography of prostate nerves," Journal of Biomedical Optics 15(4), 046014 (1 July 2010). https://doi.org/10.1117/1.3481144