Interior tomography, which acquires truncated data of a specific interior region of interest (ROI), is an attractive option for low-dose imaging. However, image reconstruction from such measurements does not yield an accurate solution because of data insufficiency. A host of approaches have been developed to obtain an approximate but useful solution, including various weighting methods, iterative reconstruction methods, and methods that exploit prior knowledge. In this study, we use a deep neural network, which has shown its potential in various fields including medical imaging, to reconstruct interior tomographic images. We assumed an offset-detector geometry, which is widely used in cone-beam CT (CBCT) imaging for its extended field of view (FOV). We trained a network to synthesize ‘ramp-filtered’ data within the detector active area so that the corresponding ROI reconstruction would be free of truncation artifacts in the filtered-backprojection (FBP) framework. We compared the results with post- and pre-convolution weighting methods and show that the neural network approach outperforms both.
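Why ramp filtering of truncated data is the crux can be seen in a toy parallel-beam sketch (the function names, sizes, and phantom below are our own illustration, not from the paper): because the ramp filter is non-local, filtering only the interior portion of a projection differs from filtering the full projection and then cropping, which is what produces truncation artifacts in the FBP reconstruction.

```python
import numpy as np

def ramp_filter_1d(proj, pixel_size=1.0):
    """Apply the ramp filter to a 1-D projection via FFT (zero-padded)."""
    n = proj.shape[-1]
    pad = 2 * int(2 ** np.ceil(np.log2(n)))   # pad to reduce wrap-around
    ramp = np.abs(np.fft.fftfreq(pad, d=pixel_size))  # |f| frequency response
    filtered = np.fft.ifft(np.fft.fft(proj, n=pad) * ramp).real
    return filtered[..., :n]

# A projection through a uniform disc, and its interior-ROI truncation:
full = np.zeros(256)
full[64:192] = 1.0            # disc support
truncated = full[96:160]      # only the interior ROI is measured

f_full = ramp_filter_1d(full)[96:160]   # filter full data, then crop the ROI
f_trunc = ramp_filter_1d(truncated)     # filter the truncated data directly
# f_trunc deviates from f_full, most strongly near the ROI boundary;
# the network in the paper is trained to synthesize data close to f_full.
```

The discrepancy `f_full - f_trunc` is exactly the kind of error the synthesized ‘ramp-filtered’ data are meant to remove before backprojection.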
Reducing the number of projections in computed tomography (CT) has been exploited as a low-dose option in conjunction with advanced iterative image reconstruction algorithms. While such iterative methods do provide useful images and valuable insight into the inverse imaging problem, it remains an intriguing question whether the missing-view projection data in the sinogram can be successfully recovered. Several approaches to interpolating the missing sinogram data have been reported. Deep-learning-based super-resolution techniques in the field of natural image enhancement have recently been introduced and have shown promising results. Inspired by these super-resolution techniques, we earlier proposed a sinogram inpainting method that uses a convolutional neural network for sparsely view-sampled CT. Despite the encouraging initial results, our previously proposed method had two drawbacks: the measured sinogram was contaminated in the process of filling in the missing data by the deep-learning network, and the sum of the interpolated sinogram along the detector row at each view angle was not preserved. In this study, we improve our previously developed deep-learning-based sinogram synthesis method by adding new layers and modifying the size of the receptive field in the network to overcome these limitations. Quantitative evaluations of image accuracy and quality on real patient CT images show that the new approach synthesizes a more accurate sinogram and thus yields higher-quality CT images than the previous one.
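The two drawbacks named above can be stated as explicit constraints. A minimal post-processing sketch (our own illustration, not the paper's network modification) enforces both: measured views are copied back verbatim, and each synthesized view is rescaled so that its detector-row sum follows the smoothly varying total attenuation interpolated from neighbouring measured views.

```python
import numpy as np

def restore_measured_and_rescale(synth, measured, measured_idx):
    """Post-process an inpainted sinogram of shape (views, detector bins).

    1) Copy the measured views back so they are never altered.
    2) Rescale each view so its detector-row sum matches a linear
       interpolation of the neighbouring measured sums (the total
       attenuation varies smoothly with view angle).
    """
    out = synth.copy()
    out[measured_idx] = measured
    n_views = out.shape[0]
    target = np.interp(np.arange(n_views), measured_idx, measured.sum(axis=1))
    row_sum = out.sum(axis=1)
    scale = np.where(row_sum > 0, target / np.maximum(row_sum, 1e-12), 1.0)
    return out * scale[:, None]   # measured views get scale == 1 by construction

# Demo: 7 views (4 measured, 3 synthesized), 5 detector bins.
rng = np.random.default_rng(0)
measured_idx = np.array([0, 2, 4, 6])
measured = rng.uniform(1.0, 2.0, size=(4, 5))   # measured views
synth = rng.uniform(1.0, 2.0, size=(7, 5))      # network output, all views
fixed = restore_measured_and_rescale(synth, measured, measured_idx)
```

At the measured indices the interpolated target equals the measured row sum, so the scale factor there is exactly 1 and the measured data pass through unchanged.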
Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. Sparse-view CT is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, when combined with advanced iterative image reconstructions, albeit with varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is streaking in the reconstructed images. Another approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the most widely used deep-learning methods, to estimate the missing projection data, and compared its performance with that of other interpolation techniques.
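For reference, the simplest of the competing interpolation techniques is linear interpolation along the view-angle axis of the sinogram. A minimal sketch (function and variable names are our own, and real sinograms are not linear in angle, so this baseline is only exact for this toy input):

```python
import numpy as np

def interpolate_missing_views(sino, measured_idx):
    """Fill unmeasured view angles of a (views x bins) sinogram by linear
    interpolation along the view-angle axis, bin by bin."""
    n_views, n_bins = sino.shape
    out = sino.astype(float).copy()
    all_idx = np.arange(n_views)
    for b in range(n_bins):
        out[:, b] = np.interp(all_idx, measured_idx, sino[measured_idx, b])
    return out

# Demo: a toy sinogram that varies linearly with view angle is recovered
# exactly when every other view is measured.
n_views, n_bins = 9, 4
truth = np.outer(np.arange(n_views, dtype=float), np.ones(n_bins))
measured_idx = np.arange(0, n_views, 2)      # views 0, 2, 4, 6, 8 measured
sparse = np.zeros_like(truth)
sparse[measured_idx] = truth[measured_idx]
filled = interpolate_missing_views(sparse, measured_idx)
```

On real data the angular variation is not linear, which is where a learned CNN interpolator can outperform this baseline.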
Our earlier work demonstrated that the data consistency condition can be used as a criterion for scatter kernel optimization in deconvolution methods in full-fan-mode cone-beam CT. However, this scheme cannot be directly applied to a CBCT system with an offset detector (half-fan mode) because of transverse data truncation in the projections. In this study, we propose a modified scatter kernel optimization scheme that can be used in half-fan-mode cone-beam CT and successfully demonstrate its feasibility. Using a first-pass volume image reconstructed from the half-fan projection data, we acquired full-fan projection data by forward-projection synthesis. The synthesized full-fan projections were then used in part to fill the truncated regions of the half-fan data. In this way, we were able to apply the existing data-consistency-driven scatter kernel optimization method. The proposed method was validated in a simulation study using the XCAT numerical phantom and in an experimental study using the ACS head phantom.
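The filling step can be sketched as a simple array merge (our own illustration under assumed conventions: views are rows, detector columns are ordered so the truncated side is on the left, and `n_trunc` columns were lost to the detector offset):

```python
import numpy as np

def fill_truncated_columns(half_fan, synthesized_full, n_trunc):
    """Complete an offset-detector (half-fan) projection: keep the measured
    detector columns verbatim and take the truncated columns from the
    full-fan projection synthesized by forward-projecting the first-pass
    reconstruction."""
    out = synthesized_full.copy()
    out[:, n_trunc:] = half_fan   # measured columns occupy the right side
    return out

# Demo: 3 detector rows, 10 full-fan columns, 4 of them truncated.
rows, full_cols, n_trunc = 3, 10, 4
synthesized_full = np.zeros((rows, full_cols))          # forward-projected estimate
half_fan = np.ones((rows, full_cols - n_trunc))         # measured half-fan data
merged = fill_truncated_columns(half_fan, synthesized_full, n_trunc)
```

In practice one would also feather the two sources across a few columns at the junction to avoid a discontinuity before applying the data-consistency-driven kernel optimization.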