Inherent to computed tomography (CT) is image reconstruction: the estimation of 3D voxel values from noisy projection data. Unlike the forward operator, which can be modeled from first principles, the inverse operation remains challenging to model. Given the ill-posed nature of the inverse problem, data-driven methods still require additional regularization to achieve accurate results. Moreover, the generalizability of such results hinges on the availability of training data and access to ground truth. This paper offers a new strategy for reconstructing CT images that takes advantage of the ground truth made accessible through a virtual imaging trial (VIT) platform. A learned primal-dual deep neural network (LPD-DNN) employed the forward model and its adjoint as a surrogate for the geometry and physics of the imaging process. The VIT platform provided paired projection and ground-truth data from anthropomorphic human models, free of noise and resolution degradation. The models were drawn from a library of anthropomorphic computational patient models (XCAT). The DukeSim simulator was used to generate realistic projection data emulating the physics and geometry of a commercial CT scanner (Flash, Siemens). The resulting noisy sinogram data associated with each slice were used for training. The corresponding linear attenuation coefficients of the phantom materials at the effective energy of the x-ray spectrum served as labels. The LPD-DNN was then deployed, learning the proximal operators and hyperparameters of the primal-dual optimization. On validation data, the method achieved a normalized root-mean-square error of 12% with respect to the ground-truth labels, a peak signal-to-noise ratio of 32 dB, a signal-to-noise ratio of 1.5, and a structural similarity index of 96%. These results compared highly favorably with standard filtered back-projection reconstruction (65%, 17 dB, 1.0, and 26%, respectively).
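To make the unrolled primal-dual scheme concrete, the following is a minimal PyTorch sketch of learned primal-dual iterations, in the spirit of the LPD-DNN described above. It is illustrative only: the block architecture (`small_cnn`), iteration count, variable names, and the symmetric-blur stand-in for the forward operator are all assumptions for demonstration, not the paper's actual network or the DukeSim-matched projector pair.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def small_cnn(in_ch: int, out_ch: int, width: int = 32) -> nn.Sequential:
    """A small 3-layer CNN used as a learned proximal block (illustrative)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 3, padding=1), nn.PReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.PReLU(),
        nn.Conv2d(width, out_ch, 3, padding=1),
    )


class LearnedPrimalDual(nn.Module):
    """Unrolled primal-dual iterations with learned proximal blocks.

    `fwd` and `adj` stand in for the CT forward projector and its adjoint
    (back-projector); any callable pair mapping between the image and
    measurement domains can be plugged in.
    """

    def __init__(self, fwd, adj, n_iter: int = 10):
        super().__init__()
        self.fwd, self.adj, self.n_iter = fwd, adj, n_iter
        # Dual block sees (current dual, projection of current primal, data g).
        self.dual_blocks = nn.ModuleList(small_cnn(3, 1) for _ in range(n_iter))
        # Primal block sees (current primal, back-projection of current dual).
        self.primal_blocks = nn.ModuleList(small_cnn(2, 1) for _ in range(n_iter))

    def forward(self, g: torch.Tensor) -> torch.Tensor:
        h = torch.zeros_like(g)   # dual variable, measurement domain
        f = self.adj(g)           # primal variable, initialized by back-projection
        for dual, primal in zip(self.dual_blocks, self.primal_blocks):
            h = h + dual(torch.cat([h, self.fwd(f), g], dim=1))   # dual update
            f = f + primal(torch.cat([f, self.adj(h)], dim=1))    # primal update
        return f


# Toy stand-in for the scanner forward model: a fixed symmetric blur, so the
# operator is self-adjoint and the image and measurement shapes coincide.
# A real pipeline would substitute a Radon transform matched to the scanner
# geometry, as emulated by DukeSim in the paper.
kernel = torch.ones(1, 1, 5, 5) / 25.0
fwd = lambda x: F.conv2d(x, kernel, padding=2)
adj = fwd  # symmetric kernel => the adjoint equals the operator itself

net = LearnedPrimalDual(fwd, adj, n_iter=5)
g = torch.randn(2, 1, 64, 64)        # batch of noisy "sinograms"
target = torch.zeros(2, 1, 64, 64)   # placeholder for ground-truth attenuation maps
loss = F.mse_loss(net(g), target)    # supervised training against VIT-style labels
loss.backward()
```

In this formulation, training the network end to end against ground-truth attenuation maps is what lets the learned blocks absorb the regularization and step-size choices that a hand-tuned proximal primal-dual solver would otherwise require.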