Radiation treatment for head-and-neck (H&N) cancers requires accurate treatment planning based on 3D patient models derived from CT images. In clinical practice, the treatment volumes and organs at risk (OARs) are manually contoured by experienced physicians. This tedious and time-consuming procedure strains clinical workflow and resources. In this work, we propose U-RCNN, which first uses a 3D Faster R-CNN to automatically detect the locations of head-and-neck organs and then applies a U-Net to segment the multi-organ contours. The mean Dice similarity coefficients (DSCs) of the esophagus, larynx, mandible, oral cavity, left parotid, right parotid, pharynx and spinal cord ranged from 79% to 89%, demonstrating the segmentation accuracy of the proposed U-RCNN method. This segmentation technique could be a useful tool to facilitate routine clinical workflow in H&N radiotherapy.
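The DSC metric reported above compares an automatically segmented mask against the manual contour. A minimal NumPy sketch of the computation (the function name and toy masks are illustrative, not from the paper's code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary 3D masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# toy example: two overlapping cubic "organ" masks in a 32^3 volume
a = np.zeros((32, 32, 32), dtype=bool)
b = np.zeros((32, 32, 32), dtype=bool)
a[8:24, 8:24, 8:24] = True
b[10:26, 10:26, 10:26] = True
print(round(dice_coefficient(a, b), 3))  # 0.67
```

A DSC of 1.0 indicates perfect overlap; the 79%-89% range above corresponds to values of 0.79-0.89 per organ.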
In this study, we propose a synthetic CT (sCT) aided MRI-CT deformable image registration method for head-and-neck radiotherapy. An image synthesis network, a cycle-consistent generative adversarial network (CycleGAN), was first trained using 25 pre-aligned CT-MRI image pairs. The trained CycleGAN then predicted sCT images from the head-and-neck MR images, and these sCT images were used as the MRI's surrogate in MRI-CT registration. The Demons registration algorithm was used to perform sCT-CT registration on 5 separate datasets. For comparison, the original MRI and CT images were registered using mutual information as the similarity metric. Our results showed that the target registration errors after registration averaged 1.31 mm and 1.02 mm for MRI-CT and sCT-CT registration, respectively. The mean normalized cross-correlation between the sCT and CT after registration was 0.97, indicating that the proposed method is a viable way to perform MRI-CT image registration for head-and-neck patients.
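The two evaluation metrics used above, target registration error (TRE) between corresponding landmark pairs and normalized cross-correlation (NCC) between the registered images, can be sketched as follows; array shapes, function names, and the voxel-spacing convention are assumptions for illustration:

```python
import numpy as np

def target_registration_error(landmarks_a, landmarks_b, spacing=(1.0, 1.0, 1.0)):
    """Mean Euclidean distance (in mm) between corresponding landmark pairs.

    landmarks_a, landmarks_b: (N, 3) voxel coordinates; spacing converts to mm.
    """
    diff = (np.asarray(landmarks_a) - np.asarray(landmarks_b)) * np.asarray(spacing)
    return float(np.linalg.norm(diff, axis=1).mean())

def normalized_cross_correlation(img_a, img_b):
    """NCC between two images; 1.0 indicates perfect linear agreement."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# toy check: an image compared with itself gives NCC = 1
rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16, 16))
print(round(normalized_cross_correlation(img, img), 3))  # 1.0
```

Lower TRE and higher NCC both indicate better registration, which is how the sCT-CT result (1.02 mm, NCC 0.97) improves on direct MRI-CT registration (1.31 mm).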
We propose a method to automatically segment multiple organs at risk (OARs) from routinely acquired thorax CT images using a generative adversarial network (GAN). A multi-label U-Net was used as the generator to enable end-to-end segmentation. Esophagus and spinal cord location information was used to train the GAN on specific regions of interest (ROIs). For new thorax CT images, multi-organ probability maps were generated by the trained network and fused to reconstruct the final contours. The proposed algorithm was evaluated using 20 patients' thorax CT images and manual contours. The mean Dice similarity coefficients (DSCs) for the esophagus, heart, left lung, right lung and spinal cord were 0.73±0.04, 0.85±0.02, 0.96±0.01, 0.97±0.02 and 0.88±0.03, respectively. This novel deep-learning-based approach with the GAN strategy can automatically and accurately segment multiple OARs in thorax CT images and could be a useful tool to improve the efficiency of lung radiotherapy treatment planning.
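The fusion step, in which per-organ probability maps are combined into a single multi-organ label volume, could be sketched as a voxel-wise argmax over the organ channels; the channel ordering and the tiny toy volume below are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

# hypothetical channel ordering; channel 0 is background
ORGANS = ["background", "esophagus", "heart", "left_lung", "right_lung", "spinal_cord"]

def fuse_probability_maps(prob_maps):
    """Fuse per-organ probability maps of shape (C, D, H, W) into one label
    volume of shape (D, H, W) by taking the most probable class per voxel."""
    return np.argmax(prob_maps, axis=0)

# toy example: 3 classes over a 1x2x2 volume
probs = np.array([
    [[[0.8, 0.1], [0.2, 0.3]]],  # background
    [[[0.1, 0.7], [0.3, 0.3]]],  # class 1
    [[[0.1, 0.2], [0.5, 0.4]]],  # class 2
])
labels = fuse_probability_maps(probs)
print(labels.tolist())  # [[[0, 1], [2, 2]]]
```

Each voxel in the fused volume then carries a single organ label, from which per-organ contours can be extracted for planning.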