## 1. Introduction

Preservation of the cavernous nerves during radical prostatectomy for prostate cancer is critical for maintaining sexual function after surgery. These nerves are at risk of injury during dissection and removal of a cancerous prostate gland because of their proximity to the prostate surface (Fig. 1). Their microscopic size also makes it difficult to predict the true course and location of these nerves from one patient to another. These observations may explain in part the wide variability in reported potency rates (9 to 86%) following prostate cancer surgery.^{1} Therefore, any technology capable of providing improved identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery would aid preservation of the nerves and improve postoperative sexual potency.

OCT is a noninvasive optical imaging technique that can be used to perform high-resolution, cross-sectional *in vivo* and *in situ* imaging of microstructures in biological tissues.^{2} OCT imaging of cavernous nerves in the rat and human prostate has recently been demonstrated.^{3, 4, 5, 6} However, improvements in the quality of the OCT images are necessary for identification of the cavernous nerves before clinical use.

For the present work, OCT images were acquired *in vivo* using a clinical endoscopic OCT system (Imalux, Cleveland, Ohio) based on an all-single-mode-fiber, common-path, interferometer-based scanning system (Optiphase, Van Nuys, California). An 8-Fr (2.6-mm-OD) probe was used with the OCT system. The system is capable of acquiring real-time images of $200\times 200$ pixels with 11-μm axial and 25-μm lateral resolution in tissue.

The following study describes a step-by-step approach that employs three complementary image processing algorithms (Fig. 2) for improving identification and imaging of the cavernous nerves during OCT of the prostate gland. In previous work, a segmentation approach was successfully used to identify the cavernous nerves.^{7} However, it has proven challenging to image deeper prostate tissues with OCT. Therefore, the segmentation system in the left branch of Fig. 2 is augmented by the denoising and edge detection systems in the right branch of Fig. 2. This edge detection system is later shown to improve OCT imaging of deeper prostate tissue structures.

In the left branch of Fig. 2, 2-D OCT images of rat prostate are segmented to differentiate the cavernous nerves from the prostate gland. It should be noted that ultrasound image segmentation of the prostate, which allows clinicians to design an accurate brachytherapy treatment plan for prostate cancer, has been previously reported.^{8} Various alternative segmentation approaches have also recently been applied in retinal OCT imaging.^{9, 10, 11, 12, 13, 14, 15, 16} However, large irregular voids in prostate OCT images require a segmentation approach different from that used for the more regular structure of retinal layers. Therefore, to detect the cavernous nerves, three image features are employed: a Gabor filter, a Daubechies wavelet, and a Laws filter. The Gabor feature is applied with different standard deviations in the $x$ and $y$ directions. For the Daubechies wavelet feature, an eight-tap Daubechies orthonormal wavelet is implemented, and the low-pass subband is chosen as the filtered image. Finally, Laws feature extraction is applied to the images. The features are segmented using a nearest-neighbor classifier, and N-ary morphological postprocessing is used to remove small voids.

As a next step to improve OCT imaging of the prostate gland, wavelet denoising is applied. Recently, wavelet techniques have been employed successfully in speckle noise reduction for MRI, ultrasound, and OCT images.^{17, 18, 19} A locally adaptive denoising algorithm is applied before edge detection to reduce speckle noise in OCT images of the prostate.^{20} The denoising algorithm is illustrated using the dual-tree complex wavelet transform. After wavelet denoising, an edge detection algorithm based on thresholding and spatial first-order differentiation is implemented to provide deeper imaging of the prostate gland. This algorithm addresses one of the main limitations of OCT imaging of prostate tissue, the inability to image deep into the prostate: currently, OCT is limited to an imaging depth of approximately 1 mm in most opaque soft tissues. In the following sections, the segmentation approach is first described, followed by details of the denoising and edge detection approaches.

## 2. Segmentation System

The input image is first processed to form three feature images. The prostate image is then segmented into nerve, prostate, and background classes using a $k$-nearest neighbors classifier and the three feature images. Finally, N-ary morphology is used for postprocessing. The generation of the feature images is described first, followed by descriptions of the classifier and postprocessing.

### 2.1. Gabor Filter

The first feature image is generated by a Gabor filter with impulse response $h(x,y)$,^{21}

## Eq. 1

$$h(x,y)=g(x,y)\,\mathrm{exp}[2\pi j(Ux+Vy)],$$

where the Gaussian envelope $g(x,y)$ is

## Eq. 2

$$g(x,y)=\frac{1}{2\pi {\sigma}_{x}{\sigma}_{y}}\,\mathrm{exp}\left[-\frac{1}{2}\left(\frac{{x}^{2}}{{\sigma}_{x}^{2}}+\frac{{y}^{2}}{{\sigma}_{y}^{2}}\right)\right].$$

The Gabor function is essentially a bandpass filter centered about frequency $(U,V)$ with bandwidth determined by ${\sigma}_{x},{\sigma}_{y}$. The Gabor feature center frequency of $(0.2,0.2)$ cycles/pixel is applied with standard deviations of 3 and 6 in the $x$ and $y$ directions, respectively, based on experimental observation of minimum segmentation error.
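As an illustrative sketch (not the authors' implementation), the Gabor feature image can be computed with NumPy alone. The parameters below ($\sigma_x=3$, $\sigma_y=6$, center frequency $(0.2,0.2)$ cycles/pixel) follow the text, while the kernel truncation size and the FFT-based circular convolution are assumptions:

```python
import numpy as np

def gabor_kernel(sigma_x=3.0, sigma_y=6.0, U=0.2, V=0.2, half=15):
    # Sample h(x, y) = g(x, y) * exp[2*pi*j*(U*x + V*y)] on a small grid;
    # the grid half-width (truncation) is an assumption, not from the paper.
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    g = np.exp(-0.5 * (x**2 / sigma_x**2 + y**2 / sigma_y**2))
    g /= 2.0 * np.pi * sigma_x * sigma_y          # Gaussian envelope g(x, y)
    return g * np.exp(2j * np.pi * (U * x + V * y))

def gabor_feature(img, **kw):
    # Circular convolution via the FFT; the feature is the response magnitude.
    k = gabor_kernel(**kw)
    kh, kw2 = k.shape
    padded = np.zeros(img.shape, dtype=complex)
    padded[:kh, :kw2] = k
    padded = np.roll(padded, (-(kh // 2), -(kw2 // 2)), axis=(0, 1))  # center at origin
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))
```

A texture oscillating at the center frequency excites the filter strongly, while a uniform region is suppressed, which is what makes this feature useful for texture segmentation.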

### 2.2. Daubechies Wavelet Transform

The second feature is generated by the eight-tap Daubechies orthonormal wavelet transform. The discrete wavelet transform (DWT) converts a signal to its wavelet representation. In a one-level DWT, the image ${c}_{0}$ is split into an approximation part ${c}_{1}$ and detail parts ${d}_{1}^{1}$ , ${d}_{1}^{2}$ , and ${d}_{1}^{3}$ for horizontal, vertical, and diagonal orientations, respectively. In a multilevel DWT, each subsequent ${c}_{i}$ is split into an approximation ${c}_{i+1}$ and details ${d}_{i+1}^{1}$ , ${d}_{i+1}^{2}$ , and ${d}_{i+1}^{3}$ . In the present work, the approximation part ${c}_{1}$ is chosen as the filtered image for the second feature.
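For concreteness, the approximation subband ${c}_{1}$ of a one-level, eight-tap Daubechies transform can be sketched with NumPy. The coefficients are the standard db4 analysis low-pass taps; the periodic boundary handling is an assumption:

```python
import numpy as np

# Eight-tap Daubechies (db4) analysis low-pass filter; taps sum to sqrt(2).
DB4_LO = np.array([0.23037781, 0.71484657, 0.63088077, -0.02798377,
                   -0.18703481, 0.03084138, 0.03288301, -0.01059740])

def analysis_lowpass_1d(x):
    # Periodic extension, convolution, then dyadic downsampling.
    n = len(DB4_LO)
    xp = np.concatenate([x, x[:n - 1]])           # periodic boundary (assumption)
    y = np.convolve(xp, DB4_LO, mode='valid')     # length len(x)
    return y[::2]

def approximation_subband(img):
    # Separable 2-D transform: filter rows, then columns -> c1 (low-low subband).
    rows = np.apply_along_axis(analysis_lowpass_1d, 1, img)
    return np.apply_along_axis(analysis_lowpass_1d, 0, rows)
```

Each 1-D pass halves the length, so the approximation image is one quarter the size of the input, with smooth content preserved (a constant image of value $c$ maps to the constant $2c$, since each pass scales it by $\sqrt{2}$).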

### 2.3. Laws Filter

The third feature is generated by the Laws feature extraction method. The Laws 2 mask $\mathbf{h}$^{22} is convolved with the image $f(x,y)$ to accentuate its microstructure, producing the microstructure image $m(x,y)=h(x,y)\ast f(x,y)$, where

## Eq. 4

$$\mathbf{h}=\frac{1}{12}\left(\begin{array}{ccc}1& 0& -1\\ 2& 0& -2\\ 1& 0& -1\end{array}\right).$$

Then, a standard deviation computation is performed after the Laws mask filtering to complete the Laws feature extraction.
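A minimal NumPy sketch of this feature follows. The 5×5 window for the local standard deviation and the border handling are assumptions, since the text does not specify them:

```python
import numpy as np

# Laws 2 mask from the text: outer product of [1, 2, 1] and [1, 0, -1], scaled by 1/12.
LAWS2 = np.array([[1, 0, -1],
                  [2, 0, -2],
                  [1, 0, -1]]) / 12.0

def conv2_same(img, k):
    # Direct 'same' convolution with zero padding (fine for small sketch images).
    kh, kw = k.shape
    p = np.pad(img.astype(float), ((kh // 2,), (kw // 2,)))
    kf = k[::-1, ::-1]  # flip for true convolution
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * kf)
    return out

def laws_feature(img, win=5):
    m = conv2_same(img, LAWS2)                    # microstructure image m(x, y)
    ph = win // 2
    p = np.pad(m, ph, mode='edge')
    out = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = p[i:i + win, j:j + win].std()  # local standard deviation
    return out
```

The mask sums to zero, so smooth regions yield a near-zero feature value while textured or edge-rich regions yield large local standard deviations.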

### 2.4. $K$-Nearest Neighbor Classifier and Postprocessing

The $k$-nearest neighbors ($k$-NN) algorithm is a method for classifying objects based on the $k$ closest training samples in the feature space. It is implemented by training, parameter selection, and classification steps, followed by N-ary morphological postprocessing to eliminate small misclassified regions.^{7}
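The classification step can be sketched as a plain Euclidean-distance, majority-vote $k$-NN over the per-pixel feature vectors. The value of $k$ and the distance metric here are illustrative assumptions:

```python
import numpy as np

def knn_classify(train_feats, train_labels, query_feats, k=5):
    # Assign each query the majority label among its k nearest training samples.
    train_feats = np.asarray(train_feats, dtype=float)
    train_labels = np.asarray(train_labels)
    out = np.empty(len(query_feats), dtype=train_labels.dtype)
    for i, q in enumerate(np.asarray(query_feats, dtype=float)):
        dist = np.linalg.norm(train_feats - q, axis=1)   # Euclidean distance in feature space
        nearest = train_labels[np.argsort(dist)[:k]]
        labels, counts = np.unique(nearest, return_counts=True)
        out[i] = labels[np.argmax(counts)]
    return out
```

In the segmentation system each pixel would carry a three-element feature vector (Gabor, wavelet, and Laws values) and a label from {nerve, prostate, background}.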

## 3. Wavelet Shrinkage Denoising

Wavelet shrinkage is denoising by shrinking (nonlinear soft thresholding) coefficients in the wavelet transform domain. The observed image $X$ is modeled as an uncorrupted image $S$ corrupted by multiplicative speckle noise $N$. On a logarithmic scale, the speckle is converted to additive noise, $X=S+N$. The wavelet shrinkage denoising algorithm requires the following four-step procedure:^{20} (1) compute the logarithm of the image to convert the multiplicative speckle into additive noise; (2) calculate the forward wavelet transform of the log-scale image; (3) shrink the wavelet coefficients by soft thresholding; and (4) compute the inverse wavelet transform and exponentiate the result to return to the linear scale.
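The four steps can be illustrated end to end. For brevity this sketch substitutes a one-level Haar transform for the dual-tree complex wavelet transform used in the paper, and the fixed threshold is an arbitrary illustrative value:

```python
import numpy as np

def soft(w, t):
    # Soft thresholding: shrink coefficient magnitudes toward zero by t.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def haar2(a):
    # One-level 2-D Haar analysis (stand-in for the dual-tree CWT).
    s = (a[:, ::2] + a[:, 1::2]) / np.sqrt(2)
    d = (a[:, ::2] - a[:, 1::2]) / np.sqrt(2)
    def cols(m):
        return (m[::2] + m[1::2]) / np.sqrt(2), (m[::2] - m[1::2]) / np.sqrt(2)
    ll, lh = cols(s)
    hl, hh = cols(d)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    def icols(lo, hi):
        m = np.empty((2 * lo.shape[0], lo.shape[1]))
        m[::2] = (lo + hi) / np.sqrt(2)
        m[1::2] = (lo - hi) / np.sqrt(2)
        return m
    s, d = icols(ll, lh), icols(hl, hh)
    a = np.empty((s.shape[0], 2 * s.shape[1]))
    a[:, ::2] = (s + d) / np.sqrt(2)
    a[:, 1::2] = (s - d) / np.sqrt(2)
    return a

def denoise_speckle(img, t=0.1):
    logx = np.log(img + 1e-12)                    # 1) multiplicative -> additive noise
    ll, lh, hl, hh = haar2(logx)                  # 2) forward wavelet transform
    lh, hl, hh = soft(lh, t), soft(hl, t), soft(hh, t)  # 3) shrink detail coefficients
    return np.exp(ihaar2(ll, lh, hl, hh))         # 4) inverse transform, back to linear scale
```

With the threshold set to zero the chain reconstructs the input exactly, which is a useful sanity check that only the shrinkage step removes energy.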

### 3.1. Two-Dimensional Dual-Tree Complex Wavelet Transform

In the proposed method, the dual-tree complex wavelet transform (CDWT) calculates the complex transform of a signal using two separate DWT decompositions. The filters of one tree are designed differently from those of the other so that one DWT produces the real coefficients and the other the imaginary coefficients. This redundancy of two provides extra information for analysis at the expense of extra computational power.

In the proposed CDWT, wavelet coefficients are calculated from the Farras nearly symmetric wavelet.^{23}

## 4. Edge Detection System

A block diagram of the edge detection system is shown in Fig. 3. After luminance thresholding of the input image $f(x,y)$, first-order spatial differentiation with orthogonal gradient operators is performed to produce the differential image $g(x,y)$ with accentuated spatial amplitude changes. Morphological postprocessing is then used to accentuate edges.

### 4.1. Luminance Thresholding

In this step, the glandular structure of the prostate is judged present wherever the luminance exceeds the background threshold level. Because the boundaries of the glandular structures can be located at a superficial level, the luminance at the center of the glandular structures, below the boundary, in the denoised prostate image is taken as the background threshold level.
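In code, this reduces to estimating a background level from inside a glandular structure and zeroing pixels at or below it. Using the mean luminance of an operator-chosen region as the level is an assumption for illustration:

```python
import numpy as np

def background_level(img, rows, cols):
    # Estimate the threshold from a region inside a glandular structure
    # (the dark center below the boundary); mean luminance is an assumption.
    return img[rows, cols].mean()

def luminance_threshold(img, level):
    # Glandular structure is judged present where luminance exceeds the level.
    return np.where(img > level, img, 0.0)
```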

### 4.2. Orthogonal Gradient Generation

After applying the threshold level to the denoised image $f(x,y)$, a form of spatial first-order differentiation is performed in two orthogonal directions. In the discrete domain, the row and column gradients are generated with the impulse response arrays^{22}

## Eq. 7

$${\mathbf{h}}_{\mathrm{r}}=\frac{1}{4}\left(\begin{array}{ccc}1& 0& -1\\ 2& 0& -2\\ 1& 0& -1\end{array}\right),\phantom{\rule{1em}{0ex}}{\mathbf{h}}_{\mathrm{c}}=\frac{1}{4}\left(\begin{array}{ccc}-1& -2& -1\\ 0& 0& 0\\ 1& 2& 1\end{array}\right),$$

yielding the row and column gradient images ${g}_{\mathrm{r}}(x,y)$ and ${g}_{\mathrm{c}}(x,y)$. The gradient amplitude is approximated by the magnitude combination

## Eq. 8

$$g(x,y)=|{g}_{\mathrm{r}}(x,y)|+|{g}_{\mathrm{c}}(x,y)|.$$
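A sketch of this gradient stage using the masks above follows. The sum-of-magnitudes combination of the two gradient images and the replicate border padding are the assumptions here:

```python
import numpy as np

HR = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]) / 4.0  # row-gradient mask h_r
HC = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]) / 4.0  # column-gradient mask h_c

def conv2_same(img, k):
    # Direct 'same' convolution with replicate (edge) padding.
    kh, kw = k.shape
    p = np.pad(img.astype(float), ((kh // 2,), (kw // 2,)), mode='edge')
    kf = k[::-1, ::-1]  # flip for true convolution
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * kf)
    return out

def gradient_amplitude(img):
    gr = conv2_same(img, HR)                 # row gradient g_r(x, y)
    gc = conv2_same(img, HC)                 # column gradient g_c(x, y)
    return np.abs(gr) + np.abs(gc)           # magnitude combination |g_r| + |g_c|
```

On a unit step edge the operator responds with amplitude 1 on the two columns straddling the boundary and 0 in flat regions.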

## 5. Combined Algorithms

Figure 2 shows the order of the combined algorithms. The segmentation algorithm was applied to differentiate the cavernous nerves from the prostate gland; this algorithm is independent of the denoising process. However, the edge detection algorithm, which provides deeper imaging of the prostate gland based on thresholding and spatial first-order differentiation, is dependent on the denoising process because edges are sensitive to noise. The input image was therefore denoised first, and edge detection was then implemented. With a noisy image, threshold selection becomes a tradeoff between missing valid edges and creating noise-induced false edges.

The algorithms were executed on a 1.86-GHz Core 2 Duo desktop personal computer. The two branches of Fig. 2 ran as parallel processes: 8 s for denoising and edge detection, and 10 s for segmentation. The total time for the combined processing algorithms was therefore 10 s.

## 6. Results

The unprocessed time-domain (TD) OCT images of the cavernous nerves at different orientations (longitudinal, oblique, and cross sectional) along the surface of the rat prostate are shown in Figs. 4(a), 4(b), and 4(c). Histologic sections of the cavernous nerves were previously processed for comparison.^{20}

Figures 4(d), 4(e), and 4(f) show the images after denoising using the CDWT. The global signal-to-noise ratio (SNR) is calculated as

## Eq. 9

$$\mathrm{SNR}=10\times \mathrm{log}[\mathrm{max}{({X}_{\mathrm{lin}})}^{2}/{\sigma}_{\mathrm{lin}}^{2}],$$

where ${X}_{\mathrm{lin}}$ is the linear-scale image and ${\sigma}_{\mathrm{lin}}^{2}$ is the noise variance.^{25} The mean SNR for nine sample images before and after denoising was measured to be 26.65 and 40.87 dB, respectively. Therefore, an SNR increase of approximately 14 dB was attained.
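The SNR computation can be reproduced directly. Taking ${\sigma}_{\mathrm{lin}}$ as the standard deviation over the whole linear-scale image is an assumption (a background noise region may be used instead), and the logarithm is base 10 for decibels:

```python
import numpy as np

def snr_db(x_lin, sigma_lin=None):
    # SNR = 10 * log10( max(X_lin)^2 / sigma_lin^2 ), in dB.
    if sigma_lin is None:
        sigma_lin = x_lin.std()   # assumption: whole-image noise estimate
    return 10.0 * np.log10(x_lin.max() ** 2 / sigma_lin ** 2)
```

The approximately 14-dB figure quoted in the text is simply the difference of the denoised and raw mean values, 40.87 − 26.65 = 14.22 dB.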

Figures 5(a), 5(b), and 5(c) show the same OCT images of Figs. 4(a), 4(b), and 4(c) after segmentation. The cavernous nerves could be differentiated from the prostate gland using the segmentation algorithm. The error rate was calculated as $\text{error}=(\text{number of error pixels})/(\text{number of total pixels})$, where the number of error pixels is the number of false positives plus the number of false negatives. The overall error rate for segmentation of the nerves was 0.058 with a standard deviation of 0.019, indicating the robustness of the technique. The error rate was measured as the mean of error measurements for three sample images at different orientations (longitudinal, cross sectional, and oblique); a different image was used for training. The error rate was determined by comparing manually segmented images with the automatically segmented images of the nerves. These manually segmented images of the cavernous nerves were previously created according to histologic correlation with OCT images.^{20} Figures 5(d), 5(e), and 5(f) combine edge detection of the denoised images with the segmentation results. Manual segmentation of the prostate gland was implemented to calculate the performance of the edge detection algorithm. The overall error rate for segmentation of the prostate gland was 0.076 with a standard deviation of 0.022.
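The error metric is straightforward to compute from a pair of binary masks (automatic versus manual segmentation):

```python
import numpy as np

def segmentation_error(auto_mask, manual_mask):
    # error = (false positives + false negatives) / total pixels
    auto_mask = np.asarray(auto_mask, dtype=bool)
    manual_mask = np.asarray(manual_mask, dtype=bool)
    fp = np.count_nonzero(auto_mask & ~manual_mask)
    fn = np.count_nonzero(~auto_mask & manual_mask)
    return (fp + fn) / auto_mask.size
```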

## 7. Discussion

The proposed edge detection approach was successful in accentuating prostate structures deeper in the tissue, and the cavernous nerves could be differentiated from the prostate gland using the segmentation algorithm. The glandular structure of the prostate could be observed to a depth of approximately 1.6 mm in Figs. 5(d), 5(e), and 5(f), in comparison with an approximately 1-mm depth in the unprocessed OCT images of Figs. 4(a), 4(b), and 4(c). Overall, the edge detection technique enhanced structures deeper in the prostate gland, and the proposed image segmentation algorithm performed well for identification of the cavernous nerves in the prostate.

It should also be noted that the rat model used in this study represents an idealized version of the prostate anatomy, because the cavernous nerve lies on the surface of the prostate, and is therefore directly visible. However, in human anatomy, there can be an intervening layer of fascia (Fig. 1) between the OCT probe and the nerves, making identification more difficult. Since one major limitation of OCT is its superficial imaging depth in opaque tissues, an important advantage of these image processing algorithms is that the final OCT image should be able to provide deeper imaging in the tissue and locate the cavernous nerve when it lies at various depths beneath periprostatic tissues.

## 8. Conclusion

The segmentation technique is applied to differentiate cavernous nerves from the prostate gland in rat prostate. The wavelet shrinkage denoising technique using a dual-tree complex wavelet transform is used for speckle noise reduction, and by using edge detection, deeper imaging of the prostate gland is accomplished. These algorithms for image segmentation, denoising, and edge detection of the prostate may be of direct benefit for implementation in clinical endoscopic OCT systems currently being studied for use in laparoscopic and robotic nerve-sparing prostate cancer surgery.

## Acknowledgments

This research was supported in part by the Department of Defense Prostate Cancer Research Program, grant number PC073709, and the Department of Energy, grant number DE-FG02-06CH11460. The authors thank Paul Amazeen and Nancy Tresser of Imalux Corporation (Cleveland, Ohio) for providing the OCT system used in these studies.

## References

1. "Function outcome reporting after clinically localized prostate cancer treatment," J. Urol. 178, 597–601 (2007).
2. "Optical coherence tomography," Science 254, 1178–1181 (1991). https://doi.org/10.1126/science.1957169
3. "Preliminary experience with the Niris optical coherence tomography system during laparoscopic and robotic prostatectomy," J. Endourol. 21, 814–818 (2007). https://doi.org/10.1089/end.2006.9938
4. "Imaging the cavernous nerves in rat prostate using optical coherence tomography," Lasers Surg. Med. 39, 36–41 (2007). https://doi.org/10.1002/lsm.20454
5. "Identification and imaging of the nerves responsible for erectile function in rat prostate, in vivo, using optical nerve stimulation and optical coherence tomography," IEEE J. Sel. Top. Quantum Electron. 13, 1641–1645 (2007). https://doi.org/10.1109/JSTQE.2007.910119
6. "Optical coherence tomography of cavernous nerves: a step toward real-time intraoperative imaging during nerve-sparing radical prostatectomy," Urology 72, 198–204 (2008). https://doi.org/10.1016/j.urology.2007.11.084
7. "Segmentation of optical coherence tomography images for differentiation of the cavernous nerves from the prostate gland," J. Biomed. Opt. 14(4), 044033 (2009). https://doi.org/10.1117/1.3210767
8. "Ultrasound image segmentation: a survey," IEEE Trans. Med. Imaging 25, 987–1010 (2006). https://doi.org/10.1109/TMI.2006.877092
9. "Automated detection of retinal layer structures on optical coherence tomography images," Opt. Express 13, 10200–10216 (2005). https://doi.org/10.1364/OPEX.13.010200
10. "Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies," J. Biomed. Opt. 12(4), 041207 (2007). https://doi.org/10.1117/1.2771569
11. "Automated segmentation of intraretinal layers from macular optical coherence tomography images," Proc. SPIE 6512, 651214 (2007). https://doi.org/10.1117/12.710231
12. "Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search," IEEE Trans. Med. Imaging 27, 1495–1505 (2008). https://doi.org/10.1109/TMI.2008.923966
13. "Retinal pigment epithelium segmentation by polarization sensitive optical coherence tomography," Opt. Express 16, 16410–16422 (2008). https://doi.org/10.1364/OE.16.016410
14. "Automatic segmentation in three-dimensional analysis of fibrovascular pigment epithelial detachment using high-definition optical coherence tomography," Br. J. Ophthalmol. 92, 197–203 (2008). https://doi.org/10.1136/bjo.2007.120956
15. "Automated segmentation of the macula by optical coherence tomography," Opt. Express 17, 15659–15669 (2009). https://doi.org/10.1364/OE.17.015659
16. "Intra-retinal layer segmentation in optical coherence tomography images," Opt. Express 17, 23719–23728 (2009). https://doi.org/10.1364/OE.17.023719
17. "Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter," Opt. Lett. 29, 2878–2880 (2004). https://doi.org/10.1364/OL.29.002878
18. "A review of wavelet denoising in MRI and ultrasound brain imaging," Curr. Med. Imag. Rev. 2, 247–260 (2006). https://doi.org/10.2174/157340506776930665
19. "Multiresolution denoising for optical coherence tomography: a review and evaluation," Curr. Med. Imag. Rev. 4, 270–284 (2008). https://doi.org/10.2174/157340508786404044
20. "Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform," J. Biomed. Opt. 14(1), 014031 (2009). https://doi.org/10.1117/1.3081543
21. "Efficient Gabor filter design for texture segmentation," Pattern Recogn. 29, 2005–2015 (1996). https://doi.org/10.1016/S0031-3203(96)00047-7
22. W. K. Pratt, *Digital Image Processing*, 3rd ed., Wiley, New York (2001).
23. "Design of 2-band orthogonal near-symmetric CQF," 3693 (2001).
24. "Bivariate shrinkage with local variance estimation," IEEE Signal Process. Lett. 9, 438–441 (2002). https://doi.org/10.1109/LSP.2002.806054
25. "Speckle noise reduction for optical coherence tomography," Proc. SPIE 3196, 79–88 (1998). https://doi.org/10.1117/12.297921