Motion artifacts are among the most important factors degrading the diagnostic performance of x-ray CT images, particularly in photon-counting CT, where they can compromise its higher spatial resolution and quantitative imaging capabilities. The purpose of this simulation study is to evaluate the capability of deep neural networks to correct motion artifacts in spectral photon-counting cardiac CT by generating motion-corrected virtual monoenergetic images over a range of energies. We used CatSim to generate synthetic training data by simulating motion-corrupted and motion-free CT acquisitions (100 kVp, 1 s rotation) of the dynamic XCAT phantom, including cardiac and respiratory motion. In total, 2160 image pairs were generated. We trained two neural networks to estimate motion-artifact-reduced images from motion-corrupted images: one based on a UNet and one based on a Wasserstein generative adversarial network with gradient penalty (WGAN-GP). To make these networks applicable to virtual monoenergetic images at different energies, we trained them with 40 keV and 70 keV monoenergetic images as inputs and used a loss function with two terms: 1) an L1 loss on soft-tissue and bone basis images and 2) a perceptual loss on 70 keV monoenergetic images. Our results show that motion artifacts in virtual monoenergetic images from 40 keV to 100 keV are substantially reduced. In conclusion, these results demonstrate the potential of image-domain deep neural networks to correct motion artifacts in spectral cardiac CT images.
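The two-term loss described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the function names, array shapes, and the relative weight `w` are assumptions, and in practice the "deep features" passed to the perceptual term would come from a pretrained feature-extraction network (e.g. a VGG-style network) applied to the 70 keV monoenergetic images, which is omitted here.

```python
import numpy as np

def two_term_loss(pred_basis, target_basis, pred_feat, target_feat, w=1.0):
    """Combined training loss, as a sketch of the abstract's two terms.

    Term 1: L1 loss between predicted and target basis images
            (soft tissue and bone stacked along the first axis).
    Term 2: perceptual loss, here taken as an L1 distance between deep
            features of the predicted and target 70 keV monoenergetic
            images (feature extraction itself is not shown).
    `w` is an assumed relative weight between the two terms.
    """
    basis_loss = np.mean(np.abs(pred_basis - target_basis))
    perceptual_loss = np.mean(np.abs(pred_feat - target_feat))
    return basis_loss + w * perceptual_loss

# Illustrative call with dummy 2-material basis images and feature maps.
basis_pred = np.zeros((2, 8, 8))    # [soft tissue, bone]
basis_true = np.ones((2, 8, 8))
feat_pred = np.zeros((4, 4))        # stand-in for network features
feat_true = np.zeros((4, 4))
loss = two_term_loss(basis_pred, basis_true, feat_pred, feat_true)
```

With matching feature maps the perceptual term vanishes, so the example above reduces to the mean absolute error of the basis images.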