The high utility and wide applicability of x-ray imaging have led to a rapid increase in the number of CT scans in recent
years and, at the same time, elevated public concern about the potential risk of x-ray radiation to patients. Hence, how to
minimize the x-ray dose while maintaining image quality has become a hot topic. Low-dose CT strategies include modulation
of the x-ray flux and minimization of the dataset size. However, these methods produce noisy and insufficient projection
data, which poses a great challenge to image reconstruction. Our team has been working to combine statistical
iterative methods with advanced image processing techniques, especially dictionary learning, and has produced
excellent preliminary results. In this paper, we report recent progress in dictionary-learning-based low-dose CT
reconstruction and discuss the selection of the regularization parameters that are crucial for algorithmic optimization.
The key idea is to use a “balancing principle” based on a model function to choose the regularization parameters during
the iterative process, and to determine a weight factor empirically to address the noise level in the projection domain.
Numerical and experimental results demonstrate the merits of the proposed reconstruction approach.
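The weight factor for the projection-domain noise level can be illustrated with a common heuristic for statistical weighting: under a Poisson model, the variance of a post-log measurement is roughly inversely proportional to the detected photon count, so each ray can be weighted by its count. A minimal sketch; the normalization below is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

def pwls_weights(counts):
    """Per-ray statistical weights for penalized weighted least squares.

    Under a Poisson model, the variance of the log-transformed measurement
    y_i = log(I0 / I_i) is approximately 1 / I_i, so the weight of ray i
    is taken proportional to the detected count I_i.
    """
    counts = np.asarray(counts, dtype=float)
    return counts / counts.max()  # normalize so the largest weight is 1

# Rays with more detected photons are trusted more.
w = pwls_weights([1e5, 1e4, 1e3])
```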
With the rapid growth of CT-based medical applications, low-dose CT reconstruction is becoming increasingly important to
human health. Compared with other methods, statistical iterative reconstruction (SIR) usually performs better in the
low-dose case. However, the image quality of SIR depends strongly on the prior-based regularization because low-dose
data are insufficient. The most frequently used regularizers are built on pixel-based priors, such as smoothness between
adjacent pixels; this kind of pixel-based constraint cannot distinguish noise from structures effectively. Recently,
patch-based methods, such as dictionary learning and non-local means filtering, have outperformed the conventional
pixel-based methods. A patch is a small image region that expresses structural information of the image.
In this paper, we propose to use patch-based constraints to improve the image quality of low-dose CT reconstruction. In
the SIR framework, both patch-based sparsity and patch-based similarity are considered in the regularization term:
sparsity is addressed by sparse representation and dictionary learning, while similarity is addressed by non-local means
filtering. We conducted a real-data experiment to evaluate the proposed method. The experimental results show that this
method leads to better images, with less noise and more detail than other methods, in low-count and few-view cases.
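The patch-based similarity term can be illustrated by the weighting at the core of non-local means: each candidate patch contributes according to a Gaussian of its squared distance to the reference patch. A minimal sketch; the function name and the flattened-patch layout are assumptions for illustration:

```python
import numpy as np

def nlm_weights(ref_patch, patches, h):
    """Non-local means similarity weights.

    Each candidate patch (one row of `patches`) is weighted by a Gaussian
    of its squared Euclidean distance to the reference patch; h controls
    the filtering strength. Weights are normalized to sum to one.
    """
    d2 = np.sum((patches - ref_patch) ** 2, axis=1)
    w = np.exp(-d2 / (h * h))
    return w / w.sum()

ref = np.array([1.0, 1.0, 1.0, 1.0])
cands = np.array([[1.0, 1.0, 1.0, 1.0],   # identical patch
                  [2.0, 2.0, 2.0, 2.0]])  # dissimilar patch
w = nlm_weights(ref, cands, h=1.0)
```

Similar patches thus dominate the weighted average used to denoise the central pixel or patch.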
Statistical CT reconstruction using the penalized weighted least-squares (PWLS) criterion can improve image quality in low-dose CT, and a well-designed regularization term is critical to its performance. Recently, sparse representation based on dictionary learning has been used as the regularization term, yielding high-quality reconstructions. In this paper, we incorporate a multiscale dictionary into statistical CT reconstruction, which preserves more detail than reconstruction based on a single-scale dictionary. Furthermore, we
exploit reweighted <i>l</i><sub>1</sub> norm minimization for sparse coding, which outperforms <i>l</i><sub>1</sub> norm minimization
in locating the sparse solution of underdetermined linear systems of equations. To mitigate the time-consuming computation of the gradient of the regularization term, we adopt the so-called double-surrogates method to accelerate ordered-subsets image reconstruction. Experiments show that combining a multiscale dictionary with reweighted <i>l</i><sub>1</sub> norm minimization yields reconstructions superior to those based on a single-scale dictionary and <i>l</i><sub>1</sub> norm minimization.
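Reweighted <i>l</i><sub>1</sub> minimization alternates between solving a weighted <i>l</i><sub>1</sub> subproblem and updating the weights from the current solution; the standard weight update (in the style of Candès and Wakin) is sketched below. The stabilizing constant `eps` is an illustrative choice:

```python
import numpy as np

def l1_reweights(x, eps=1e-3):
    """Weight update of reweighted l1 minimization: coefficients that
    are already large receive small weights and are penalized less in
    the next weighted-l1 subproblem, which drives the iterate toward a
    sparser solution than plain l1 minimization."""
    return 1.0 / (np.abs(x) + eps)

# A large coefficient is penalized far less than a (near-)zero one.
w = l1_reweights(np.array([2.0, 0.0]))
```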
In medical x-ray computed tomography (CT) imaging devices, the x-ray tube usually emits a polychromatic spectrum of photons, resulting in beam-hardening artifacts in the reconstructed images. The bone-correction method has been widely adopted to compensate for beam-hardening artifacts. However, its correction performance is highly dependent on the empirical determination of a scaling factor, which is used to adjust the ratio of the reconstructed value in the bone region to the actual mass density of bone tissue. A significant problem with bone correction is that a large number of physical experiments are routinely required to accurately calibrate the scaling factor. In this article, an improved bone-correction method is proposed, based on the projection data consistency condition, to automatically determine the scaling factor. Extensive numerical simulations have verified the existence of an optimal scaling factor, the sensitivity of bone correction to the scaling factor, and the efficiency of the proposed method for beam-hardening correction.
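The role of a data consistency condition can be sketched with the simplest such condition: in parallel-beam geometry, the per-view sums of the projections are angle-invariant, so the variance of those sums measures inconsistency, and the scaling factor can be chosen to minimize it. The correction model `toy_correct` below is a placeholder, not the article's actual bone-correction step:

```python
import numpy as np

def consistency_cost(proj):
    """Zeroth-order consistency: in parallel-beam geometry, the total
    attenuation summed over detector bins is identical for every view,
    so the variance of the per-view sums measures data inconsistency."""
    return np.var(proj.sum(axis=1))

def calibrate_scale(correct, raw_proj, candidates):
    """Pick the scaling factor whose corrected projections are the most
    consistent; `correct(proj, c)` stands in for the bone-correction
    step, which is not specified in the abstract."""
    costs = [consistency_cost(correct(raw_proj, c)) for c in candidates]
    return candidates[int(np.argmin(costs))]

# Toy model: the correction is exact at c = 0.5 and distorts views otherwise.
views = np.ones((4, 8))
def toy_correct(p, c):
    return p * (1.0 + abs(c - 0.5) * np.arange(p.shape[0])[:, None])

best = calibrate_scale(toy_correct, views, [0.0, 0.25, 0.5, 0.75, 1.0])
```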
This paper presents a statistical interior tomography approach combined with an optimization of the truncated Hilbert
transform (THT) data. With the introduction of compressed sensing (CS) based interior tomography, a statistical
iterative reconstruction (SIR) regularized by total variation (TV) has been proposed to reconstruct an interior region
of interest (ROI) with less noise from low-count local projections. After each update of the CS-based SIR, a THT
constraint can be incorporated through an optimization strategy. Since the noisy differentiated back-projection (DBP) and its
corresponding noise variance on each chord can be calculated from the Poisson projection data, an objective function is
constructed to find an optimal THT of the ROI from the noisy DBP and the current reconstructed image. The inversion of
this optimized THT on each chord is then performed, and the resulting ROI serves as the initial image for the next update
of the CS-based SIR. In addition, a parameter in the THT optimization step can be used to determine the stopping rule
of the iteration heuristically. Numerical simulations are performed to evaluate the proposed approach. Our results
indicate that this approach can reconstruct an ROI with high accuracy while effectively reducing noise.
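The chord-wise combination of the noisy DBP with the current reconstruction can be illustrated, in its simplest form, by inverse-variance weighted fusion of two noisy estimates; this is the standard least-squares closed form, and the paper's actual objective function may differ:

```python
def fuse_chord(dbp, prior_ht, var_dbp, var_prior):
    """Inverse-variance weighted fusion of two estimates of the Hilbert
    transform along a chord: the estimate with the smaller variance gets
    the larger weight, minimizing the variance of the combination."""
    w = var_prior / (var_dbp + var_prior)
    return w * dbp + (1.0 - w) * prior_ht

# Equal variances -> simple average; a nearly noiseless DBP dominates.
mid = fuse_chord(1.0, 0.0, var_dbp=1.0, var_prior=1.0)
near_dbp = fuse_chord(1.0, 0.0, var_dbp=1e-12, var_prior=1.0)
```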
While classic CT theory targets exact reconstruction of a whole cross-section or of an entire object from complete
projections, practical applications often focus on much smaller internal ROIs. Traditional CT methods cannot exactly
reconstruct an internal ROI only from local truncated projections associated with x-rays through the ROI, because this
interior problem does not have a unique solution. When approximate local CT algorithms are applied for interior
reconstruction from truncated projection data, features outside the ROI may create artifacts that overlap interior features,
rendering the images inaccurate or useless. Recently, novel solutions to the interior problem were published by our
group, with numerical results demonstrating that the interior problem can be solved in a theoretically exact and
numerically stable fashion with the aid of some prior knowledge, such as a small known sub-region inside the ROI or an ROI
that can be modeled by a piecewise constant/polynomial function. In this invited paper, we review recent progress in local
reconstruction. Topics include lambda tomography and analytic and iterative interior reconstructions, with an emphasis
on total variation minimization based soft-threshold methods and statistics-based interior reconstruction algorithms.
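The soft-threshold methods mentioned above revolve around a single operator, the proximal map of the l1 norm, which shrinks coefficients toward zero; a minimal sketch:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm: values
    with magnitude below t are set to zero, larger values are shrunk
    toward zero by t. This is the core shrinkage step in soft-threshold
    type algorithms for sparsity/TV-regularized reconstruction."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

out = soft_threshold(np.array([3.0, -0.5, -2.0]), 1.0)
```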
The long-standing interior problem has recently been revisited, leading to promising results on exact local reconstruction,
also referred to as interior tomography. To date, there are two key computational ingredients of interior tomography. The
first is inversion of the truncated Hilbert transform with prior sub-region knowledge. The second is
compressed sensing (CS) assuming a piecewise constant or polynomial region of interest (ROI). Here we propose a
statistical approach to interior tomography that incorporates both ingredients. In our approach, the
projection data follow a Poisson model, and an image is reconstructed in the maximum a posteriori (MAP) framework
subject to interior tomography constraints, including a known subregion and minimized total variation (TV). A
deterministic interior reconstruction based on the inversion of the truncated Hilbert transform is used as the initial image
for the statistical interior reconstruction. This algorithm has been extensively evaluated in numerical and animal studies
in terms of major image quality indices, radiation dose and machine time. In particular, our encouraging results from a
low-contrast Shepp-Logan phantom and a real sheep scan demonstrate the feasibility and merits of our proposed
statistical interior tomography approach.
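The Poisson data model enters the MAP framework through the negative log-likelihood of the transmission measurements, y_i ~ Poisson(b_i exp(-[Ax]_i)); its gradient, which drives each iterative update, has a simple closed form. A minimal sketch, where `A`, `b`, and `y` are a toy system matrix, blank-scan counts, and measurements chosen for illustration:

```python
import numpy as np

def poisson_nll_grad(x, A, y, b):
    """Gradient of the transmission Poisson negative log-likelihood,
    y_i ~ Poisson(b_i * exp(-[Ax]_i)), with respect to the image x:
    grad = A^T (y - b * exp(-Ax)). It vanishes when the model exactly
    predicts the measured counts."""
    l = A @ x
    return A.T @ (y - b * np.exp(-l))

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x = np.array([0.5, 1.0])
b = np.full(3, 1e4)
y = b * np.exp(-(A @ x))       # noise-free data at the true image
g = poisson_nll_grad(x, A, y, b)
```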
This paper presents a statistical reconstruction algorithm for dual-energy (DE) CT with a polychromatic x-ray source. Each
pixel in the imaged object is assumed to be composed of two basis materials (i.e., bone and soft tissue), and a
penalized-likelihood objective function is developed to determine the densities of the two basis materials. Two penalty
terms are used to penalize, respectively, the bone density difference and the soft-tissue density difference between
neighboring pixels. A gradient ascent algorithm for the monochromatic objective function is modified to maximize the
polychromatic penalized-likelihood objective function using a convexity technique. To reduce the computational cost, the
denominator of the update step is pre-calculated with reasonable approximations. The ordered-subsets method is
applied to speed up the iteration. Computer simulations were conducted to evaluate the penalized-likelihood algorithm.
The results indicate that this statistical method yields the best image quality among the tested methods and exhibits
good noise properties even at low photon counts.
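The ordered-subsets acceleration reduces to cycling through interleaved groups of projection views, using one group per sub-iteration; a minimal sketch of the partitioning (the interleaved layout is a common choice, not necessarily this paper's):

```python
def ordered_subsets(n_views, n_subsets):
    """Partition view indices into interleaved subsets; each sub-iteration
    uses one subset in place of the full data, giving roughly
    n_subsets-fold acceleration per pass over the projections."""
    return [list(range(s, n_views, n_subsets)) for s in range(n_subsets)]

subs = ordered_subsets(6, 2)
```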
Optical sensing of specific molecular targets using near-infrared light has been recognized as a crucial technology that is
changing our future. Fluorescence molecular tomography (FMT) is among the newest technologies in
optical sensing. It uses near-infrared light (600-900 nm) for illumination and fluorochromes as probes to perform non-contact
three-dimensional imaging of live molecular targets and to reveal molecular processes in vivo. To address
the forward-simulation problem in FMT, this paper introduces a new simulation model. The model
uses the Monte Carlo method and is implemented in the C++ programming language. Its accuracy has been
verified by comparison with analytic solutions and with MOSE, developed by the University of Iowa and the Chinese Academy of Sciences.
The main features of the model are that it can simulate both bioluminescence imaging and FMT, perform analytic
calculations, and support multiple sources and CCD detectors simultaneously. It can generate sufficient, well-formed
data in preparation for studies of fluorescence molecular tomography.
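The core of such a Monte Carlo photon-transport model is sampling the free path length between interactions from the exponential attenuation law; a minimal sketch in Python (the reported model itself is in C++), assuming a homogeneous medium with total interaction coefficient `mu_t`:

```python
import numpy as np

def free_path_lengths(mu_t, n, rng):
    """Sample photon free path lengths in a homogeneous medium with
    total interaction coefficient mu_t (per unit length), by inverting
    the exponential attenuation law: s = -ln(u) / mu_t, u ~ U(0, 1).
    The mean free path is 1 / mu_t."""
    u = rng.random(n)
    return -np.log(u) / mu_t

rng = np.random.default_rng(0)
s = free_path_lengths(mu_t=10.0, n=100000, rng=rng)
```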