Segmentation of the prostate in 3D CT images is a crucial step in treatment planning and in procedure guidance for brachytherapy and radiotherapy. However, manual segmentation of the prostate is time-consuming and depends on the experience of the clinician. Automated prostate segmentation is therefore attractive in practice, but the task is challenging due to the low soft-tissue contrast of CT images. In this paper, we propose a 3D deeply supervised fully convolutional network (FCN) with dilated convolution kernels to automatically segment the prostate in CT images. The deep supervision strategy yields more discriminative features and accelerates convergence during training, while concatenating dilated convolutions enlarges the receptive field to capture more global contextual information for accurate prostate segmentation. The presented method was evaluated on 15 prostate CT images and obtained a mean Dice similarity coefficient (DSC) of 0.85±0.04 and a mean surface distance (MSD) of 1.92±0.46 mm. The experimental results show that our approach yields accurate CT prostate segmentation and can be employed for prostate-cancer treatment planning in brachytherapy and external beam radiotherapy.
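To illustrate why dilated convolutions enlarge the receptive field, the following is a minimal sketch (not the authors' exact architecture; the kernel sizes and dilation rates below are assumptions for illustration) of how the effective 1D receptive field grows when stride-1 convolution layers are stacked:

```python
def receptive_field(kernel_sizes, dilations):
    """Effective 1D receptive field of stacked stride-1 convolutions."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d  # each layer adds (kernel-1)*dilation to the field
    return rf

# Three plain 3-wide layers vs. three layers with dilations 1, 2, 4:
plain = receptive_field([3, 3, 3], [1, 1, 1])    # 7
dilated = receptive_field([3, 3, 3], [1, 2, 4])  # 15
```

With the same number of parameters, the dilated stack covers more than twice the context, which is the motivation for using it to gather global information in low-contrast CT.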
We propose a method to automatically segment the prostate from transrectal ultrasound (TRUS) images based on a multi-directional deeply supervised network and multi-directional contour refinement. A 3D multi-directional V-Net is introduced to enable end-to-end segmentation. A deep supervision mechanism is integrated into the hidden layers to cope with the optimization difficulties of training such a network with limited data. The well-trained network generates a probability map of the prostate contour, which is fused by multi-directional contour refinement to reconstruct the final prostate contour. The proposed algorithm was evaluated using TRUS images and manual contours from 30 patients. The mean Dice similarity coefficient (DSC) and mean surface distance (MSD) were 0.92 and 0.60 mm, respectively, demonstrating the high accuracy of the proposed segmentation method. The results show that this novel deep learning-based method significantly improves contour accuracy, especially around the apex and base regions. This segmentation technique could be a useful tool in ultrasound-guided interventions for prostate-cancer diagnosis and treatment.
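The DSC reported above measures volumetric overlap between the automatic and manual contours. A minimal sketch of its computation on binary masks (the toy arrays below are assumptions, standing in for real segmentation volumes):

```python
import numpy as np

def dice(pred, truth):
    """DSC = 2|A intersect B| / (|A| + |B|) for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2x3 masks: intersection = 2, |A| = 3, |B| = 3 -> DSC = 4/6
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
```

A DSC of 1.0 indicates perfect overlap; the 0.92 reported here is high for ultrasound, where boundaries at the apex and base are notoriously ambiguous.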
Prostate segmentation in MR volumes is an important task for treatment planning and for image-guided brachytherapy and radiotherapy. Manual delineation of the prostate in MR images is time-consuming and depends on the subjective experience of the physician. Automatic prostate segmentation is therefore a reasonable and attractive alternative for its speed, even though the task is challenging because of inhomogeneous intensity and the variability of prostate appearance and shape. In this paper, we propose a method to automatically segment the prostate in MR images based on a 3D deeply supervised FCN with concatenated atrous convolutions (3D DSA-FCN). Straightforward dense predictions used as deep supervision provide more discriminative features and accelerate convergence during training, while the concatenated atrous convolutions extract more global contextual information for accurate predictions. The presented method was evaluated on an internal dataset comprising 15 T2-weighted prostate MR volumes from the Winship Cancer Institute and obtained a mean Dice similarity coefficient (DSC) of 0.852±0.031, a 95% Hausdorff distance (95%HD) of 7.189±1.953 mm, and a mean surface distance (MSD) of 1.597±0.360 mm. The experimental results show that our 3D DSA-FCN yields satisfactory MR prostate segmentation and can be used for image-guided radiotherapy.
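The 95%HD metric reported above is a robust variant of the Hausdorff distance: instead of the worst-case surface disagreement, it takes the 95th percentile of point-to-surface distances. A minimal sketch on point sets (the surface extraction used by the authors is not reproduced; the toy points below are assumptions):

```python
import numpy as np

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between point sets (N,D) and (M,D)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    d_ab = d.min(axis=1)  # each point in a -> nearest point in b
    d_ba = d.min(axis=0)  # each point in b -> nearest point in a
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Toy surfaces: a line of points and the same line shifted by 1 unit
pts_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
pts_b = pts_a + np.array([1.0, 0.0, 0.0])
```

Because the percentile discards the most extreme 5% of distances, a few stray surface points (common with inhomogeneous MR intensity) do not dominate the score.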
We propose a learning method to generate corrected CBCT (CCBCT) images with the goal of improving the image quality and clinical utility of on-board CBCT. The proposed method integrates the residual-block concept into a cycle-generative adversarial network (cycle-GAN) framework, termed Res-cycle GAN in this study. Compared with a GAN, a cycle-GAN includes an inverse transformation from CBCT to CT images, which further constrains the learning model. A fully convolutional network (FCN) with residual blocks is used in the generator to enable end-to-end transformation, and an FCN is used in the discriminator to distinguish the planning CT (ground truth) from the corrected CBCT (CCBCT) produced by the generator. The proposed algorithm was evaluated using 12 sets of patient data with paired CBCT and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and spatial non-uniformity (SNU) in selected regions of interest (ROIs) were used to quantify the correction accuracy of the proposed algorithm. Overall, the MAE, PSNR, NCC, and SNU were 20.8±3.4 HU, 32.8±1.5 dB, 0.986±0.004, and 1.7±3.6%, respectively. We have developed a novel deep learning-based method to generate CCBCT with high image quality. The proposed method increases on-board CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could enable quantitative adaptive radiotherapy.
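The three intensity-based metrics above can be sketched as follows on toy arrays (the 2D arrays and the dynamic range passed to PSNR are illustrative assumptions; SNU is omitted because it depends on the specific ROI placement):

```python
import numpy as np

def mae(x, y):
    """Mean absolute error, reported in HU for CT-like images."""
    return np.mean(np.abs(x - y))

def psnr(x, y, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(x, y):
    """Normalized cross-correlation: 1.0 means perfectly linearly related."""
    x, y = x - x.mean(), y - y.mean()
    return np.sum(x * y) / (np.sqrt(np.sum(x ** 2)) * np.sqrt(np.sum(y ** 2)))

# Toy planning CT and "corrected CBCT" offset by a constant 2 HU
ct = np.array([[0.0, 10.0], [20.0, 30.0]])
ccbct = ct + 2.0
```

Note the complementary behavior: a constant 2 HU offset gives an MAE of exactly 2 but leaves NCC at 1.0, since NCC is invariant to shifts and scaling.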
We develop a learning-based method to generate patient-specific pseudo computed tomography (CT) from routinely acquired magnetic resonance imaging (MRI) for potential MRI-based radiotherapy treatment planning. The proposed pseudo-CT (PCT) synthesis method consists of a training stage and a synthesizing stage. During the training stage, patch-based features are extracted from the MRIs, and feature selection identifies the most informative features as an anatomical signature used to train a sequence of alternating random forests based on an iterative refinement model. During the synthesizing stage, the anatomical signature extracted from a new MRI is fed into the sequence of well-trained forests to synthesize a PCT. The PCT was compared with the original CT (ground truth) to quantitatively assess the synthesis accuracy. The mean absolute error, peak signal-to-noise ratio, and normalized cross-correlation indices were 60.87 ± 15.10 HU, 24.63 ± 1.73 dB, and 0.954 ± 0.013 for 14 patients' brain data, and 29.86 ± 10.4 HU, 34.18 ± 3.31 dB, and 0.980 ± 0.025 for 12 patients' pelvic data, respectively. We have investigated a learning-based approach to synthesize CT from routine MRI and demonstrated its feasibility and reliability. The proposed PCT synthesis technique can be a useful tool for MRI-based radiation treatment planning.
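The patch-based feature extraction step can be sketched as follows: each voxel is described by the flattened intensities of the patch centered on it. This is a minimal 2D illustration with an assumed 3x3 patch size and edge padding, not the authors' exact feature pipeline (which also includes feature selection to form the anatomical signature):

```python
import numpy as np

def patch_features(img, half=1):
    """Return an (H*W, (2*half+1)**2) matrix: one flattened patch per pixel."""
    padded = np.pad(img, half, mode="edge")  # replicate borders so every pixel has a full patch
    h, w = img.shape
    feats = [
        padded[i:i + 2 * half + 1, j:j + 2 * half + 1].ravel()
        for i in range(h) for j in range(w)
    ]
    return np.asarray(feats)

# Toy 4x4 "MRI slice": 16 pixels, each mapped to a 9-dimensional patch vector
slice_2d = np.arange(16, dtype=float).reshape(4, 4)
X = patch_features(slice_2d)  # shape (16, 9)
```

Each row of `X` would then be filtered down to the most informative components before being passed to the sequence of alternating random forests, with each forest refining the previous one's CT estimate.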