Computed tomography (CT) reconstruction from X-ray projections acquired within a limited angle range is challenging. Both analytical and conventional iterative methods suffer from severe artifacts due to the incompleteness of the sinogram. To obtain high-quality reconstructions from limited-angle CT, it is crucial to integrate model-based methods with better priors learned from existing large databases of CT images. Transform learning is an unsupervised data-driven model that has recently shown promise in several medical imaging applications. However, its performance is limited by the use of hand-crafted penalty terms on the learned transform and sparse coefficients. Inspired by the great success of convolutional neural networks, we propose a supervised transform learning method for limited-angle CT image reconstruction, in which we redesign the conventional unsupervised iterative transform learning algorithm and learn the priors for both the sparse coefficients and the transform in a supervised manner. Results on clinical patient data show that the proposed method significantly improves the image quality of reconstructions compared to a denoising deep convolutional neural network method, FBPConvNet, and a representative iterative neural network method, LEARN.
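As context for the transform learning model mentioned above, its sparse coding step has a closed form: given a sparsifying transform W and an image patch x, the sparse coefficients are obtained by hard-thresholding Wx. A minimal NumPy sketch under illustrative names (the random W here merely stands in for a learned transform):

```python
import numpy as np

def hard_threshold(u, theta):
    """Closed-form sparse coding step: zero out entries with magnitude below theta."""
    return u * (np.abs(u) >= theta)

# Toy example: apply a stand-in "learned" transform to a vectorized patch,
# then hard-threshold the transform coefficients.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # sparsifying transform (random stand-in here)
x = rng.standard_normal(8)        # vectorized image patch
z = hard_threshold(W @ x, theta=1.0)
```

This closed-form update is what makes transform learning iterations cheap compared to synthesis dictionary learning, where sparse coding requires solving an optimization problem per patch.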
Penalized weighted-least squares (PWLS) estimation with learned material priors is a promising way to achieve high-quality basis material images from dual-energy CT (DECT). We propose a new image-domain multi-material decomposition (MMD) method that combines PWLS estimation with regularization based on a union of learned cross-material transforms (CULTRA) model. Numerical experiments with the XCAT phantom show that the proposed method significantly improves the basis materials' image quality over direct matrix inversion and over PWLS decomposition with regularization involving a total nuclear variation (TNV) term and an ℓ0 norm term (PWLS-TNV-ℓ0).
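To make the PWLS data term concrete, image-domain decomposition solves a small weighted least-squares problem at each pixel, mapping measured attenuation values to material fractions. A minimal sketch with a generic quadratic penalty standing in for the learned CULTRA regularizer (function and variable names are illustrative, not the paper's):

```python
import numpy as np

def pwls_pixel(A, y, W, lam=0.0):
    """Per-pixel PWLS decomposition:
    argmin_x (y - A x)^T W (y - A x) + lam * ||x||^2,
    where A maps material fractions to measured values (energies x materials),
    y holds the measurements, and W is the statistical weighting matrix."""
    H = A.T @ W @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(H, A.T @ W @ y)
```

With lam = 0 this reduces to direct (weighted) matrix inversion, the baseline the abstract compares against; the learned-transform regularizer replaces the simple quadratic penalty in the actual method.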
Statistical image reconstruction (SIR) methods for X-ray CT can improve image quality and reduce radiation dose compared to conventional reconstruction methods such as filtered back projection (FBP). However, SIR methods require much longer computation times. The separable footprint (SF) forward and back projection technique simplifies the calculation of the intersecting volumes of image voxels and finite-size beams in a way that is both accurate and efficient for parallel implementation. We propose a new method to accelerate the SF forward and back projection on GPUs using NVIDIA's CUDA environment. For the forward projection, we parallelize over all detector cells; for the back projection, we parallelize over all 3D image voxels. Simulation results show that the proposed method is faster than the acceleration method for the SF projectors proposed by Wu and Fessler [13]. We further accelerate the proposed method using multiple GPUs, and the results show that the computation time is reduced approximately in proportion to the number of GPUs.
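The choice of parallelization axis matters because it makes the writes race-free: each thread owns one detector cell in the forward projection and one voxel in the back projection, so no atomic updates are needed. The pattern can be illustrated with a toy dense projector (the actual SF projectors are matrix-free; the dense matrix A here is only a stand-in):

```python
import numpy as np

# Toy dense "projector" standing in for the separable-footprint model.
# Rows index detector cells, columns index image voxels.
rng = np.random.default_rng(0)
A = rng.random((6, 4))
x = rng.random(4)            # image volume (flattened)

# Forward projection parallelized over detector cells: each "thread" i
# computes one detector cell's line integral independently (row-wise).
y = np.array([A[i] @ x for i in range(A.shape[0])])

# Back projection parallelized over voxels: each "thread" j accumulates
# one voxel's contribution independently (column-wise).
b = np.array([A[:, j] @ y for j in range(A.shape[1])])
```

Each list-comprehension iteration is independent of the others, which is exactly the property a CUDA kernel launch exploits when one thread is assigned per detector cell (forward) or per voxel (backward).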