A routine 3D transrectal ultrasound (TRUS) volume is usually captured with a large slice thickness (e.g., 2–5 mm). Such ultrasound images with low out-of-slice resolution hinder contouring and needle/seed detection in prostate brachytherapy. The purpose of this study is to develop a deep-learning-based method that constructs high-resolution images from routinely captured prostate ultrasound images for brachytherapy. We propose to integrate a deeply supervised attention model into a generative adversarial network (GAN)-based framework to improve ultrasound image resolution. Deep attention GANs are introduced to enable end-to-end encoding-and-decoding learning, and an attention model is used to retrieve the most relevant information from the encoder. A residual network is used to learn the difference between low- and high-resolution images. This technique was validated with 20 patients, using leave-one-out cross-validation to evaluate the proposed algorithm. Our reconstructed high-resolution TRUS images from down-sampled images were compared with the original images to evaluate performance quantitatively. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) of image intensity profiles between reconstructed and original images were 6.5 ± 0.5 and 38.0 ± 2.4 dB, respectively.
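The two evaluation metrics above can be computed directly from paired intensity values. This is a minimal sketch (not the study's evaluation code), assuming 8-bit intensities with a peak value of 255:

```python
import math

def mae(a, b):
    """Mean absolute error between two equally sized intensity sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

In practice these would be evaluated over whole image volumes or intensity profiles rather than short lists.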
Brain metastases are one of the most common neurologic complications of cancer, occurring in about 30% of all patients with cancer. Moreover, about 40% of patients with brain metastases have more than three metastases. Stereotactic radiosurgery (SRS) is a well-established treatment for brain metastases that requires accurate detection and delineation of each lesion. However, manually detecting and locating all the brain metastases can be very time-consuming and labor-intensive, which is a major efficiency bottleneck in this typically one-day outpatient SRS procedure. Developing a fast automatic detection tool for brain metastases is highly desirable but very challenging, given the large number of brain metastases a patient can have and the small size a brain metastasis can be. In this work, we propose a 3D Mask R-CNN method to automatically and quickly detect brain metastases on magnetic resonance (MR) images for SRS. At the training stage, coarse feature maps were extracted from 3D MR image patches using a pretrained ResNet. Then, a region proposal network (RPN) was used to predict the locations and sizes of coarse candidate tumor regions of interest (ROIs) from these feature maps. A fully convolutional network (FCN) was then used to segment the metastases within each ROI. The segmentation loss, the classification loss (metastasis versus non-metastasis), and the ROI location and size regression loss were used to supervise the proposed networks. For a new query patient, candidate ROIs and predicted probability maps within those ROIs were obtained from the trained model. By aggregating the ROIs and tumor probability maps and performing consolidation via weighted cluster scoring, the final ROIs of the brain metastases were obtained. We tested our method on 20 patients' contrast-enhanced T1-weighted brain MR images and achieved 86.5% ± 3.2% sensitivity and 89.7% ± 4.8% specificity.
For each patient, our trained model took only a few seconds to detect the brain metastases on the 3D MR images. The results of this preliminary study demonstrate the method's efficacy and clinical feasibility. This auto-detection method could be a useful tool for significantly improving the efficiency of SRS treatment planning and hence ultimately improving clinical outcomes.
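The consolidation step of this pipeline — aggregating overlapping candidate ROIs and scoring clusters by their summed tumor probabilities — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the box format, IoU threshold, and score threshold are assumptions:

```python
def iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes (x0, y0, z0, x1, y1, z1)."""
    inter = 1.0
    for i in range(3):
        lo, hi = max(a[i], b[i]), min(a[i + 3], b[i + 3])
        if hi <= lo:
            return 0.0  # no overlap along this axis
        inter *= hi - lo
    vol = lambda box: (box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2])
    return inter / (vol(a) + vol(b) - inter)

def consolidate_rois(candidates, iou_thresh=0.25, score_thresh=1.0):
    """Greedily cluster (box, prob) candidates; keep clusters whose summed
    probability (the weighted cluster score) passes the threshold."""
    remaining = sorted(candidates, key=lambda c: c[1], reverse=True)
    kept = []
    while remaining:
        seed_box = remaining[0][0]
        cluster = [c for c in remaining if iou_3d(seed_box, c[0]) >= iou_thresh]
        remaining = [c for c in remaining if c not in cluster]
        score = sum(p for _, p in cluster)
        if score >= score_thresh:
            # probability-weighted average of the member boxes
            box = tuple(sum(b[i] * p for b, p in cluster) / score for i in range(6))
            kept.append((box, score))
    return kept
```

Isolated low-probability candidates fall below the cluster-score threshold and are discarded, while mutually supporting detections survive.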
We propose an approach based on a weakly supervised method for MR–TRUS image registration. Inspired by the viscous fluid physical model, we made the first attempt to combine a convolutional neural network (CNN) and a long short-term memory (LSTM) neural network to perform deep-learning-based dense deformation field prediction. Through the integration of a convolutional long short-term memory (ConvLSTM) neural network and a weakly supervised approach, we achieved accurate results in terms of Dice similarity coefficient (DSC) and target registration error (TRE) without using conventional intensity-based image similarity measures. Thirty-six sets of patient data were used in the study. Experimental results showed that our proposed ConvLSTM neural network produced a mean TRE of 2.85 ± 1.72 mm and a mean DSC of 0.89.
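For reference, the two reported registration metrics are straightforward to compute once corresponding segmentations and landmarks are available. A minimal sketch (illustrative helper functions, not taken from the paper):

```python
import math

def dice(voxels_a, voxels_b):
    """Dice similarity coefficient between two segmentations,
    each given as a set of voxel index tuples."""
    overlap = len(voxels_a & voxels_b)
    return 2.0 * overlap / (len(voxels_a) + len(voxels_b))

def mean_tre(landmarks_fixed, landmarks_warped):
    """Mean target registration error: average Euclidean distance (in mm)
    between corresponding anatomical landmarks after registration."""
    dists = [math.dist(p, q) for p, q in zip(landmarks_fixed, landmarks_warped)]
    return sum(dists) / len(dists)
```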
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step of treatment planning for US-guided brachytherapy. However, most current studies concentrate on single-needle detection using only a small number of images that contain a needle, disregarding the massive database of US images without needles. In this paper, we propose a multi-needle detection workflow that treats the images without needles as auxiliary data. Specifically, we train position-specific dictionaries on 3D overlapping patches of the auxiliary images, using an enhanced sparse dictionary learning method that integrates the spatial continuity of 3D US, dubbed order-graph regularized dictionary learning (ORDL). Using the learned dictionaries, target images are reconstructed to obtain residual pixels, which are then clustered in every slice to determine needle centers. With the obtained centers, regions of interest (ROIs) are constructed by seeking cylinders. Finally, we detect needles by applying the random sample consensus algorithm (RANSAC) to each ROI and then locate the tips by finding the sharp intensity drop along the detected axis of every needle. Extensive experiments are conducted on a prostate data set of 70/21 patients without/with needles. Visualization and quantitative results show the effectiveness of the proposed workflow. Specifically, our approach correctly detects 95% of needles with a tip-location error of 1.01 mm on the prostate dataset. This technique could provide accurate needle detection for US-guided high-dose-rate prostate brachytherapy and facilitate the clinical workflow.
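The per-ROI RANSAC step fits a 3D line (the needle axis) that is robust to outlying residual pixels. The sketch below is an illustrative pure-Python reconstruction, with the iteration count and inlier distance threshold chosen arbitrarily rather than taken from the paper:

```python
import math
import random

def _point_line_dist(pt, origin, direction):
    """Distance from pt to the line through origin with unit direction."""
    v = [pt[i] - origin[i] for i in range(3)]
    t = sum(v[i] * direction[i] for i in range(3))   # projection length along the line
    perp = [v[i] - t * direction[i] for i in range(3)]
    return math.sqrt(sum(c * c for c in perp))

def ransac_line(points, n_iter=200, dist_thresh=1.0, seed=0):
    """Fit a 3D line to `points`; returns (origin, unit_direction, inliers)."""
    rng = random.Random(seed)
    best = (None, None, [])
    for _ in range(n_iter):
        p, q = rng.sample(points, 2)                 # minimal sample: two points define a line
        d = [q[i] - p[i] for i in range(3)]
        norm = math.sqrt(sum(c * c for c in d))
        if norm == 0:
            continue                                 # degenerate sample
        d = [c / norm for c in d]
        inliers = [pt for pt in points if _point_line_dist(pt, p, d) < dist_thresh]
        if len(inliers) > len(best[2]):              # keep the model with the most support
            best = (p, d, inliers)
    return best
```

The tip would then be located by walking along the fitted axis and finding the sharp intensity drop, as described above.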
KEYWORDS: Computed tomography, Image quality, X-ray computed tomography, Radiotherapy, Computer simulations, CT reconstruction, Monte Carlo methods, Data modeling, Signal to noise ratio, Medical imaging
Low-dose computed tomography (CT) is desirable for treatment planning and simulation in radiation therapy. Repeated rescanning and replanning during the treatment course, at a much smaller dose than a single conventional full-dose CT simulation, is a crucial step in adaptive radiation therapy. We developed a machine-learning-based method to improve the image quality of low-dose CT for radiation therapy treatment simulation. We used a residual block concept and a self-attention strategy within a cycle-consistent adversarial network framework. A fully convolutional neural network with residual blocks and attention gates (AGs) was used in the generator to enable end-to-end transformation. We collected CT images from 30 patients treated with frameless brain stereotactic radiosurgery (SRS) for this study. These full-dose images were used to generate projection data, to which noise was then added to simulate the low-mAs scanning scenario. Low-dose CT images were reconstructed from the noise-contaminated projection data and were fed into our network along with the original full-dose CT images for training. The performance of our network was evaluated by quantitatively comparing the high-quality CT images generated by our method with the original full-dose images. When the mAs is reduced to 0.5% of the original CT scan, the mean square error of the CT images obtained by our method is ~1.6% with respect to the original full-dose images. The proposed method successfully improved the noise, contrast-to-noise ratio, and nonuniformity level to be close to those of full-dose CT images, and it outperforms a state-of-the-art iterative reconstruction method. Dosimetric studies show that the average differences of dose-volume histogram metrics are <0.1 Gy (p > 0.05).
These quantitative results strongly indicate that the low-dose CT images denoised with our method maintain image accuracy and quality and are accurate enough for dose calculation in current CT simulation of brain SRS treatment. We also demonstrate the great potential of low-dose CT in the process of simulation and treatment planning.
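The low-mAs simulation described above — forward-projecting full-dose images, corrupting the transmitted photon counts, and reconstructing — can be illustrated on the noise-injection step alone. This sketch is not the authors' code: it uses a Gaussian approximation to Poisson counting noise and an assumed incident photon count per detector element:

```python
import math
import random

def simulate_low_dose(projections, incident_photons=1e5, dose_fraction=0.005, seed=0):
    """Inject low-mAs noise into line-integral projection data.

    Each projection value p corresponds to a transmitted photon count
    I0 * exp(-p); lowering the mAs scales I0 down, so the relative
    counting noise grows. Poisson noise is approximated as Gaussian
    with variance equal to the expected count.
    """
    rng = random.Random(seed)
    i0 = incident_photons * dose_fraction
    noisy = []
    for p in projections:
        expected = i0 * math.exp(-p)                           # transmitted count
        counts = max(rng.gauss(expected, math.sqrt(expected)), 1.0)
        noisy.append(-math.log(counts / i0))                   # back to line integrals
    return noisy
```

At a 0.5% dose fraction, as in the study, the relative noise on each measurement grows by roughly a factor of 1/sqrt(0.005) ≈ 14 compared with the full-dose scan.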