Recently, deep neural networks (DNNs) have been widely applied in the low-dose computed tomography (LDCT) imaging field. Their performance depends heavily on the amount of pre-collected training data, which is usually hard to obtain, especially for high-dose CT (HDCT) images. Moreover, HDCT images sometimes contain undesired noise, which easily leads to network overfitting. To address these two issues, we propose a cooperative meta-learning strategy for CT image reconstruction (CmetaCT) that combines a meta-learning strategy with a Co-teaching strategy. The meta-learning (teacher/student model) strategy allows the network to be trained in a semi-supervised manner with a large number of LDCT images lacking corresponding HDCT images and only a small amount of labeled CT data. The Co-teaching strategy makes a trade-off between overfitting and introducing extra errors by selecting a subset of samples in every minibatch for updating the model parameters. Owing to the capacity of meta-learning, the presented CmetaCT method is flexible enough to incorporate any existing CT restoration/reconstruction network into the meta-learning framework. Finally, both quantitative and visual results indicate that the proposed CmetaCT method achieves superior performance on low-dose CT imaging compared with the DnCNN method.
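The Co-teaching step described above (keeping only a subset of small-loss samples in each minibatch) can be sketched as follows. This is a minimal illustration of the generic co-teaching selection rule, not the authors' implementation; the function name and the use of two peer networks exchanging selections are assumptions based on the standard Co-teaching formulation.

```python
import numpy as np

def coteach_select(losses_a, losses_b, keep_ratio):
    """Co-teaching minibatch selection (illustrative sketch).

    Given per-sample losses from two peer networks A and B, each
    network keeps its smallest-loss (presumed clean) samples, and the
    kept indices are exchanged so each peer is updated only on the
    other's selection. `keep_ratio` controls the trade-off between
    overfitting to noisy labels and discarding useful samples.
    """
    k = int(len(losses_a) * keep_ratio)
    idx_for_b = np.argsort(losses_a)[:k]  # A's small-loss picks train B
    idx_for_a = np.argsort(losses_b)[:k]  # B's small-loss picks train A
    return idx_for_a, idx_for_b
```

In a training loop, each network's parameter update would then use only the minibatch rows indexed by its peer's selection.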
Deep learning (DL) networks show great potential in the computed tomography (CT) imaging field. Most are supervised DL networks whose performance depends greatly on their capacity and on the amount of paired CT training data (i.e., low-dose CT measurements and high-quality counterparts). However, collecting large-scale CT datasets is time-consuming and expensive. In addition, supervised DL networks require the training and testing CT datasets to be highly similar in scan protocol (i.e., similar anatomical structure and the same kVp setting). These two issues are particularly critical in spectral CT imaging. In this work, to address them, we present an unsupervised data-fidelity enhancement network (USENet) to produce high-quality spectral CT images. Specifically, the presented USENet consists of two parts: a supervised network and an unsupervised network. In the supervised network, spectral CT image pairs at 140 kVp (low-dose CT images and high-dose ones) are used for network training. It should be noted that CT values differ greatly between spectral CT images at 140 kVp and 80 kVp, so the supervised network trained with CT images at 140 kVp cannot be directly used for CT image reconstruction at 80 kVp. The unsupervised network therefore incorporates a physical model and the spectral CT measurements at 80 kVp to fine-tune the supervised network, which is the major contribution of the presented USENet method. Finally, accurate spectral CT reconstructions are achieved for the sparse-view and low-dose cases, fully demonstrating the effectiveness of the presented USENet method.
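The unsupervised fine-tuning idea (enforcing consistency with the 80 kVp measurements through a physical model) can be illustrated with a data-fidelity gradient step. This is a deliberately simplified sketch, assuming a linear forward projector `A` and a least-squares fidelity term; the abstract does not specify the exact physical model or update rule, so all names here are hypothetical.

```python
import numpy as np

def fidelity_finetune_step(x, A, y, lr):
    """One gradient-descent step on the data-fidelity term
    ||A @ x - y||^2 (illustrative sketch).

    x  : current image estimate, e.g. the 140 kVp-trained network's
         output being adapted to 80 kVp data (flattened vector)
    A  : linear forward model (system/projection matrix)
    y  : measured 80 kVp projection data
    lr : step size
    """
    grad = 2.0 * A.T @ (A @ x - y)  # gradient of the quadratic term
    return x - lr * grad
```

Repeating such steps (or backpropagating the same fidelity term into the network weights) pulls the supervised network's output toward agreement with the unlabeled 80 kVp measurements.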
Energy-resolving CT (ErCT) with a photon-counting detector (PCD) can generate multi-energy data with high spatial resolution, which can be used to improve the contrast-to-noise ratio (CNR) of iodinated tissues and to reduce beam-hardening artifacts. In addition, ErCT allows for generating virtual mono-energetic CT images with improved CNR. However, most ErCT scanners are lab-built and little used in clinical research. Deep-learning-based methods can help generate ErCT images from energy-integrating CT (EiCT) images via convolutional neural networks (CNNs), owing to their capability of learning features of EiCT and ErCT images. Nevertheless, current CNNs usually generate ErCT images at one energy bin at a time, leaving large room for improvement, such as generating ErCT images at multiple energy bins at once. Therefore, in this work, we investigate leveraging a deep generative model (IuGAN-ErCT) to simultaneously generate ErCT images at multiple energy bins from existing EiCT images. Specifically, a unified generative adversarial network (GAN) is employed. With a single generator, the network learns the latent correlation between EiCT and ErCT images to estimate ErCT images from EiCT images. Moreover, to maintain the value accuracy of the different ErCT images, we introduce a fidelity loss function. In the experiments, 1384 abdomen and chest images collected from 22 patients were used to train the proposed IuGAN-ErCT method, and 130 slices were used for testing. The results show that the IuGAN-ErCT method generates more accurate ErCT images than the uGAN-ErCT method in both quantitative and qualitative evaluations.
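A fidelity loss of the kind described above, applied across all generated energy bins at once, might look like the following. The abstract does not give the exact functional form, so this mean-absolute-error version is an assumption chosen only to illustrate how per-bin value accuracy can be penalized alongside the adversarial loss.

```python
import numpy as np

def fidelity_loss(pred_bins, ref_bins):
    """Illustrative multi-bin fidelity loss (assumed form).

    pred_bins : generated ErCT images, shape (n_bins, H, W)
    ref_bins  : reference ErCT images, same shape

    Returns the mean absolute error averaged over all energy bins
    and pixels, encouraging the single generator to preserve CT
    values in every bin simultaneously.
    """
    return float(np.mean(np.abs(pred_bins - ref_bins)))
```

In training, this term would typically be weighted and added to the GAN's adversarial objective for the generator.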