3D transrectal ultrasound (TRUS) prostate segmentation based on optimal feature learning framework
Abstract
We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and used as the signature of each voxel. The most robust and informative features are identified through a feature selection process and used to train a kernel support vector machine (KSVM). The trained KSVM is then used to localize the prostate in a new patient's image. Our segmentation technique was validated in a clinical study of 10 patients, with accuracy assessed against manual segmentations (gold standard). The mean volume Dice overlap coefficient was 89.7%. In this study, we developed a new prostate segmentation approach based on an optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
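To make the voxel-classification workflow concrete, the following is a minimal sketch of the general approach described in the abstract (patch-based features, feature selection, KSVM classification, Dice evaluation). It assumes scikit-learn and NumPy, uses a univariate F-test for feature selection and an RBF kernel as stand-ins (the abstract does not specify the authors' actual selection criterion or kernel parameters), and fills in placeholder arrays where real feature vectors and labels would go; it is not the authors' implementation.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC


def dice_coefficient(seg_a, seg_b):
    """Volume Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


# Hypothetical training data: one row of patch-based features per voxel
# sampled from the aligned training images, with binary prostate labels.
train_features = np.random.rand(5000, 64)        # placeholder feature vectors
train_labels = np.random.randint(0, 2, 5000)     # placeholder voxel labels

# Feature selection followed by an RBF-kernel SVM; both choices are
# illustrative assumptions, not the paper's stated configuration.
model = make_pipeline(
    SelectKBest(f_classif, k=20),
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)
model.fit(train_features, train_labels)

# Classify every voxel of a new patient's TRUS volume, then compare the
# predicted prostate mask against the manual (gold-standard) segmentation.
new_patient_features = np.random.rand(5000, 64)  # placeholder features
predicted_mask = model.predict(new_patient_features)
manual_mask = np.random.randint(0, 2, 5000)      # placeholder gold standard
print(f"Dice overlap: {dice_coefficient(predicted_mask, manual_mask):.3f}")
```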
© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE).
Xiaofeng Yang, Peter J. Rossi, Ashesh B. Jani, Hui Mao, Walter J. Curran, Tian Liu, "3D transrectal ultrasound (TRUS) prostate segmentation based on optimal feature learning framework", Proc. SPIE 9784, Medical Imaging 2016: Image Processing, 97842F (21 March 2016); https://doi.org/10.1117/12.2216396