Tomographic image reconstruction requires precise geometric measurement and calibration of the scanning system to yield optimal images. The isocenter offset is a key geometric parameter that directly governs the spatial resolution of the reconstructed images. Because of system imperfections such as mechanical misalignment, an accurate isocenter offset is difficult to achieve. Common calibration procedures for isocenter offset tuning, such as the pin scan, cannot reach subpixel precision and are themselves inevitably hampered by system imperfections. We propose a purely data-driven method based on the Fourier shift theorem to estimate the isocenter offset indirectly, yet precisely, at the subpixel level. The solution is obtained by applying a generalized M-estimator, a robust regression algorithm, to an arbitrary sinogram acquired in an axial scanning geometry. Numerical experiments are conducted on both simulated phantom data and real data from a tungsten wire scan. Simulation results show that the proposed method estimates and tunes the isocenter offset with high accuracy, which in turn significantly improves the quality of the final images, particularly their spatial resolution.
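The idea can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes a parallel-beam sinogram with views uniformly spaced over a full rotation (so row `i + n_angles//2` is the opposite of row `i`) and an object projecting well inside the detector. An isocenter offset `c` shifts each view against its flipped opposite view by `2c`; by the Fourier shift theorem this appears as a linear phase ramp, whose per-pair slope estimates are pooled with a Huber M-estimator (a simple stand-in for the generalized M-estimator in the text). The function name and all parameters are illustrative.

```python
import numpy as np

def estimate_isocenter_offset(sino):
    """Estimate the isocenter offset, in detector pixels, from a
    360-degree parallel-beam sinogram of shape (n_angles, n_det)."""
    n_ang, n_det = sino.shape
    half = n_ang // 2
    freqs = np.fft.rfftfreq(n_det)      # cycles per pixel
    band = slice(1, n_det // 8)         # low-frequency band: best SNR
    shifts = []
    for i in range(half):
        p = np.fft.rfft(sino[i])
        q = np.fft.rfft(sino[i + half][::-1])    # flipped opposite view
        # Fourier shift theorem: phase ramp slope encodes the 2c shift
        phase = np.unwrap(np.angle(q * np.conj(p))[band])
        slope = np.polyfit(freqs[band], phase, 1)[0]
        shifts.append(slope / (2.0 * np.pi))     # per-pair estimate of 2c
    shifts = np.asarray(shifts)
    # Huber M-estimator of location via iteratively reweighted least squares
    est = np.median(shifts)
    for _ in range(30):
        r = shifts - est
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12
        w = np.minimum(1.0, 1.345 * scale / np.maximum(np.abs(r), 1e-12))
        est = np.sum(w * shifts) / np.sum(w)
    return est / 2.0    # opposite views are shifted by twice the offset
```

Because only phase slopes are fitted, the estimate is not limited to integer detector pixels, which is what enables subpixel precision.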
Since its recent inception, simultaneous image reconstruction for multimodality fusion has received a great deal of attention owing to its superior imaging performance. Meanwhile, compressed sensing (CS)-based image reconstruction methods have developed rapidly because of their ability to significantly reduce the amount of raw data required. In this work, we combine computed tomography (CT) and magnetic resonance imaging (MRI) into a single CS-based reconstruction framework. From a theoretical viewpoint, CS-based reconstruction requires prior sparsity knowledge; accordingly, in addition to the conventional data fidelity term, multimodality imaging information is utilized to improve reconstruction quality. The prior information in this context is twofold. First, most medical images can be approximated by a piecewise-constant model, so the discrete gradient transform (DGT), whose norm is the total variation (TV), can serve as a sparse representation. Second, and more importantly, multimodality images of the same object share structural similarity, which the DGT also captures. This prior on the similar distributions of the sparse DGTs is employed to improve CT and MRI image quality synergistically for a CT-MRI scanner platform. Numerical simulations with undersampled CT and MRI datasets are conducted to demonstrate the merits of the proposed hybrid image reconstruction approach. Our preliminary results confirm that the proposed method outperforms conventional CT and MRI reconstructions applied separately.
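The shared-structure prior can be illustrated on a toy problem. A full undersampled CT-MRI reconstruction is beyond a short sketch, so the example below shows only the coupling mechanism: two noisy images of the same object are restored by gradient descent on a data fidelity term plus a smoothed joint TV penalty, sum of sqrt(|Du1|^2 + |Du2|^2 + eps^2), so that edges present in either modality reinforce each other through the shared DGT magnitude. The function name, smoothing constant, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def _grad(u):
    # forward differences with periodic boundary (discrete gradient transform)
    return np.roll(u, -1, axis=0) - u, np.roll(u, -1, axis=1) - u

def _div(px, py):
    # negative adjoint of _grad, so -_div(g) is the gradient of the TV term
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def joint_tv_restore(f1, f2, lam=0.12, eps=0.05, step=0.05, n_iter=400):
    """Gradient descent on
       0.5*||u1-f1||^2 + 0.5*||u2-f2||^2 + lam * sum sqrt(|Du1|^2+|Du2|^2+eps^2).
    The joint magnitude couples the two modalities: a strong edge in one
    image lowers the penalty for the co-located edge in the other."""
    u1, u2 = f1.astype(float).copy(), f2.astype(float).copy()
    for _ in range(n_iter):
        g1x, g1y = _grad(u1)
        g2x, g2y = _grad(u2)
        mag = np.sqrt(g1x**2 + g1y**2 + g2x**2 + g2y**2 + eps**2)
        u1 -= step * ((u1 - f1) - lam * _div(g1x / mag, g1y / mag))
        u2 -= step * ((u2 - f2) - lam * _div(g2x / mag, g2y / mag))
    return u1, u2
```

In the full framework described in the abstract, the quadratic fidelity terms would be replaced by the CT projection and undersampled MRI k-space consistency terms, while the coupled DGT penalty plays the same role as here.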