Over the past decade, there has been abundant research on future cardiac CT architectures and corresponding reconstruction algorithms. Multiple cardiac CT concepts have been published, including third-generation single-source CT with wide-cone coverage, dual-source CT, and electron-beam CT. In this paper, we apply a Radon space analysis method to two multi-beamline architectures: triple-source CT and semi-stationary ring-source CT. In our studies, we have considered more than thirty cardiac CT architectures; triple-source CT was identified as a promising solution, offering an approximately three-fold advantage in temporal resolution, which can significantly reduce motion artifacts caused by the moving heart and lungs. In this work, we describe a triple-source CT architecture with all three beamlines (i.e., source-detector pairs) limited to the cardiac field of view in order to eliminate radiation dose outside the cardiac region. We also demonstrate that full field-of-view imaging remains possible when desired, by shifting the detectors. Ring-source dual-rotating-detector CT is another architecture of interest, offering the opportunity to provide high temporal resolution with a full-ring stationary source. With this semi-stationary architecture, we found that the azimuthal blur effect can be greater than in a fully rotating CT system, and we therefore propose novel scanning modes to reduce the azimuthal blur in ring-source rotating-detector CT. Radon space analysis proves to be a useful tool in the study of CT system architectures.
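As a toy illustration of the temporal-resolution argument above (not from the paper), the following sketch compares the time needed to collect a ~180-degree short-scan data set with one source versus three evenly spaced sources, ignoring the fan angle; the 0.28 s rotation time is a hypothetical value.

```python
# Toy calculation (illustrative, not the paper's analysis): approximate
# temporal resolution of an N-source CT, ignoring the fan angle. A short scan
# needs ~180 degrees of projection data; N evenly spaced sources share the arc.
def temporal_resolution(rotation_time_s, n_sources):
    """Time to acquire a ~180-degree short-scan data set with n sources."""
    return rotation_time_s * (180.0 / 360.0) / n_sources

single = temporal_resolution(0.28, 1)   # 0.14 s for one source
triple = temporal_resolution(0.28, 3)   # ~0.047 s for three sources
print(single / triple)                  # ~3x advantage, as stated above
```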
Computer simulation tools for X-ray CT are important for research efforts in developing reconstruction methods, designing new CT architectures, and improving X-ray source and detector technologies. In this paper, we propose a physics-based modeling method for X-ray CT measurements with energy-integrating detectors. It accurately accounts for the dependence of the X-ray detection process on energy, depth, and spatial location, which is either ignored or oversimplified in most existing CT simulation methods. Compared with methods based on Monte Carlo simulations, it is computationally much more efficient due to the use of a look-up table for optical collection efficiency. To model the CT measurements, the proposed model considers five separate effects: energy- and location-dependent absorption of the incident X-rays, conversion of the absorbed X-rays into optical photons emitted by the scintillator, location-dependent collection of the emitted optical photons, the quantum efficiency of converting optical photons to electrons, and electronic noise. We evaluated the proposed method by comparing the noise levels in images reconstructed from measured data and from simulations of a GE LightSpeed VCT system. Using the results of a 20 cm water phantom and a 35 cm polyethylene (PE) disk at various X-ray tube voltages (kVp) and currents (mA), we demonstrated that the proposed method produces realistic CT simulations. The difference in noise standard deviation between measurements and simulations is approximately 2% for the water phantom and 10% for the PE phantom.
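A minimal sketch of the five-effect measurement chain described above, for a single energy-integrating detector cell; all spectrum, efficiency, and look-up-table numbers are invented for illustration, and the simple counting loop stands in for a proper Poisson model of the incident flux.

```python
import random

# Hedged sketch (not the authors' code) of the five-effect chain for one
# energy-integrating detector cell; every numeric value below is made up.
random.seed(0)

spectrum = {40: 1000.0, 70: 2000.0, 100: 800.0}      # mean photons per keV bin
absorption = {40: 0.95, 70: 0.85, 100: 0.70}         # energy-dependent absorption
photons_per_kev = 25.0                               # scintillator light yield
collection_lut = {0.0: 0.60, 0.5: 0.55, 1.0: 0.45}   # depth -> optical collection
quantum_eff = 0.8                                    # photodiode quantum efficiency
electronic_sigma = 50.0                              # electronic noise (electrons)

def measure(depth=0.5):
    electrons = 0.0
    for energy, mean_n in spectrum.items():
        # 1) energy-dependent absorption of incident X-rays (random counting)
        n_abs = sum(1 for _ in range(int(mean_n))
                    if random.random() < absorption[energy])
        # 2) absorbed X-rays -> optical photons emitted by the scintillator
        optical = n_abs * energy * photons_per_kev
        # 3) location-dependent optical collection via look-up table
        optical *= collection_lut[depth]
        # 4) quantum efficiency: optical photons -> electrons
        electrons += optical * quantum_eff
    # 5) additive electronic noise
    return electrons + random.gauss(0.0, electronic_sigma)

print(measure())
```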
The image quality entitlement is evaluated for multi-energy-bin photon-counting (PC) spectral CT relative to that of energy integration and dual-kVp (dkVp) imaging. Physics simulations of X-ray projection channel data and CT images are used to map contrast-to-noise metrics for simple numerical phantom objects with soft tissue, calcium, and iodine materials. The benefits are quantified under ideal detector conditions. Spectral optimization yields on the order of a 2X benefit for iodine visualization, measured by CNR^2/dose, in two different imaging modes: optimal energy weighting and optimal mono-energy imaging. In another case studied, strict dose equivalence is maintained by use of a composite spectrum for the PC simulation that combines simultaneously the two kVp excitations used sequentially for dkVp. In this case, mono-energetic imaging of iodine contrast agent is shown to achieve 40% higher dose efficiency for photon counting compared to dual kVp, although non-ideal characteristics of the photon-counting response can eliminate much of this benefit.
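The gain from optimal energy weighting can be sketched with a toy calculation (not the paper's simulation): with invented per-bin contrasts and counts, weights proportional to contrast divided by variance maximize CNR^2 relative to uniform, energy-integration-like weighting.

```python
# Toy illustration of per-bin energy weighting for photon-counting CT.
# Bin contrasts and counts are invented; optimal weights w_b ~ c_b / var_b
# maximize CNR^2 = (sum w_b c_b)^2 / sum w_b^2 var_b (Cauchy-Schwarz).
contrast = [8.0, 4.0, 1.0]            # signal difference per bin (low -> high energy)
counts = [1000.0, 3000.0, 2000.0]     # mean counts per bin (Poisson: variance = counts)

def cnr2(weights):
    signal = sum(w * c for w, c in zip(weights, contrast))
    var = sum(w * w * n for w, n in zip(weights, counts))
    return signal * signal / var

uniform = cnr2([1.0, 1.0, 1.0])                        # energy-integration-like
optimal = cnr2([c / n for c, n in zip(contrast, counts)])
print(optimal / uniform)              # ratio >= 1 by Cauchy-Schwarz
```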
The impact of the system parameters of the modulator on X-ray scatter correction using primary modulation is studied and an optimization of the modulator design is presented. Recently, a promising scatter correction method for X-ray computed tomography (CT) that uses a checkerboard pattern of attenuating blockers (primary modulator) placed between the X-ray source and the object has been developed and experimentally verified. The
blocker size, d, and the blocker transmission factor, α, are critical to the performance of the primary modulation
method. In this work, an error caused by aliasing of primary whose magnitude depends on the choices of d and α, and the scanned object, is set as the object function to be minimized, with constraints including the X-ray focal spot, the physical size of the detector element, and the noise level. The optimization is carried out in two steps. In the first step, d is chosen as small as possible but should meet a lower-bound condition. In the
second step, α should be selected to balance the error level in the scatter estimation and the noise level in the
reconstructed image. The lower bound of d on our tabletop CT system is 0.83 mm. Numerical simulations suggest that 0.6 < α < 0.8 is appropriate. Using a Catphan 600 phantom, a copper modulator (d = 0.89 mm, α = 0.70) outperforms an aluminum modulator (d = 2.83 mm, α = 0.90), as expected. With the aluminum modulator, our method reduces the average error of the CT number in selected contrast rods from 371.4 to 25.4 Hounsfield units (HU) and enhances the contrast-to-noise ratio (CNR) from 10.9 to 17.2; when the copper modulator is used, the
error is further reduced to 21.9 HU and the CNR is further increased to 19.2.
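The two-step selection can be sketched with a hypothetical cost model (the cost terms and constants below are invented, not the paper's): step 1 fixes d at the smallest value meeting the lower bound, and step 2 scans α for the best trade-off between scatter-estimation error and image noise.

```python
# Hypothetical design sketch (illustrative cost model, not the paper's):
# step 1 fixes blocker size d at its lower bound; step 2 picks transmission
# alpha balancing scatter-estimation error against image noise.
D_LOWER_BOUND_MM = 0.83               # lower bound on d quoted in the abstract
d = 0.89                              # smallest practical d meeting the bound

# Toy cost terms: estimation error grows as the modulation (1 - alpha)
# vanishes; noise grows as alpha shrinks. The weights are made up.
def total_cost(alpha, err_weight=1.0, noise_weight=5.44):
    return err_weight / (1.0 - alpha) + noise_weight / alpha

alphas = [0.50 + 0.01 * i for i in range(46)]     # grid over 0.50 .. 0.95
best_alpha = min(alphas, key=total_cost)
print(d, round(best_alpha, 2))        # alpha lands in the suggested (0.6, 0.8)
```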
A new imaging configuration whose trajectory is a multisegment straight line is investigated, and a practical reconstruction algorithm is proposed. It is a natural extension of an imaging configuration with a straight-line trajectory. These kinds of scanning systems may be useful in industrial and security inspections. As is known, projection data from a single straight-line trajectory are incomplete, and their reconstruction suffers from a limited-angle problem. A multisegment straight-line trajectory can be used to compensate for this deficiency. To reconstruct images, a practical reconstruction algorithm is derived. It is of the Feldkamp-Davis-Kress (FDK) type, and is efficient and straightforward. Like the FDK algorithm, our reconstruction is exact in the midplane and can be exact everywhere if the density of the scanned object is independent of the z direction, although the integral of the reconstructed image along z is no longer preserved. Numerical simulations validate our method.
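The limited-angle deficiency of a single segment can be made concrete with a small geometric sketch (assumed geometry and numbers, not the paper's exact setup): a finite straight-line trajectory of length L at distance D from the object center covers a view-angle range of 2*atan(L/(2D)), which is less than the 180 degrees a short scan needs; several suitably oriented segments can together tile that range.

```python
import math

# Geometric sketch (assumed geometry, hypothetical numbers): view-angle
# coverage of one finite straight-line trajectory segment, and how many
# such segments (suitably oriented) would be needed to cover 180 degrees.
def segment_coverage_deg(length, distance):
    return math.degrees(2.0 * math.atan(length / (2.0 * distance)))

L, D = 1000.0, 500.0                  # mm, hypothetical values
one = segment_coverage_deg(L, D)      # close to 90 degrees -> limited angle
print(one)
segments_needed = round(180.0 / one)  # two suitably oriented segments suffice
print(segments_needed)
```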
A computed tomography (CT) imaging configuration with a straight-line trajectory is investigated, and a direct filtered backprojection (FBP) algorithm is presented. This kind of system may be useful for industrial applications and security inspections. Projections from a straight-line trajectory have a special property whereby the data from each detector element correspond to a parallel-beam projection at a certain view angle. However, the sampling steps of the parallel beams differ from view to view. Rebinning the raw projections into uniformly sampled parallel-beam projections is a common choice for this type of reconstruction problem; however, the rebinning procedure suffers a loss of spatial resolution because of interpolation. Our reconstruction method is first derived from the Fourier slice theorem, using a coordinate transform and the geometrical relations in projection and backprojection. It is then extended to 3-D scanning geometry. Finally, data-shift preprocessing is proposed to reduce computation and memory requirements by removing useless projections from the raw data. In this method, spatial resolution is better preserved and the reconstruction is less sensitive to data truncation than in the rebinning-to-parallel-beam method. To deal with the limited-angle problem, an iterative reconstruction-reprojection method is introduced to estimate the missing data and improve image quality.
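The parallel-beam property and the view-dependent sampling step can be sketched as follows, assuming the common linear-scan geometry in which the object translates past a fixed source-detector pair (the distances and step size are hypothetical): a detector element at offset u, with source-detector distance D, always measures rays at the same angle relative to the object, and successive translation steps dx sample that parallel view with a spacing that shrinks for outer elements, which is why rebinning to a uniform grid needs interpolation.

```python
import math

# Sketch of the parallel-beam property (assumed linear-scan geometry,
# hypothetical numbers): each detector element defines one parallel view,
# theta(u) = atan(u / D), sampled with step dx * cos(theta).
def view_angle(u, D):
    return math.atan2(u, D)

def parallel_sampling_step(u, D, dx):
    return dx * math.cos(view_angle(u, D))

D, dx = 1000.0, 1.0
for u in (0.0, 500.0, 1000.0):
    # central element: 0 degrees, step dx; outer elements: larger view
    # angle, smaller effective sampling step
    print(round(math.degrees(view_angle(u, D)), 1),
          round(parallel_sampling_step(u, D, dx), 3))
```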
The presence of random noise in a CT system degrades the quality of CT images and therefore complicates subsequent tasks such as segmentation and signal identification. In this paper, an efficient denoising algorithm is proposed to improve the quality of CT images. This algorithm consists of three main steps: (1) according to the inter-scale relationship of the wavelet coefficient magnitude sum in the cone of influence (COI), wavelet coefficients are classified into two categories: edge-related and regular coefficients, and irregular coefficients; (2) for edge-related and regular coefficients, only those located at the lowest decomposition level are denoised by Wiener filtering, while coefficients located at other decomposition levels are left unchanged; (3) irregular coefficients are denoised at all levels by Wiener filtering. The algorithm operates on the projection data from which the CT images are reconstructed. Experimental results show that (1) it effectively reduces the noise intensity while preserving detail information as much as possible, and (2) it is independent of the CT scanning geometry and thus applicable to various CT systems. The denoising results indicate that this algorithm can greatly aid follow-up analysis based on CT images.
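The three steps can be sketched in 1-D (an illustrative simplification, not the authors' implementation): a two-level Haar transform stands in for the wavelet decomposition, the COI test is approximated by summing coefficient magnitudes across the two scales, and an empirical Wiener shrink plays the role of the Wiener filter; the threshold and noise variance are made-up values.

```python
import math

# Simplified 1-D sketch of the three-step scheme above (illustrative only).
SQ2 = math.sqrt(2.0)

def haar(x):
    """One level of the orthonormal 1-D Haar transform."""
    a = [(x[2*i] + x[2*i+1]) / SQ2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / SQ2 for i in range(len(x) // 2)]
    return a, d

def ihaar(a, d):
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / SQ2, (ai - di) / SQ2]
    return x

def wiener(c, noise_var):
    sig = max(c * c - noise_var, 0.0)       # empirical signal variance
    return c * sig / (sig + noise_var) if sig + noise_var > 0 else 0.0

def denoise(x, noise_var=0.02, coi_thresh=0.8):
    a1, d1 = haar(x)
    a2, d2 = haar(a1)
    # step 1: classify via the magnitude sum in the cone of influence
    edge = [abs(d2[j]) + abs(d1[2*j]) + abs(d1[2*j+1]) > coi_thresh
            for j in range(len(d2))]
    # step 2: the finest level is always filtered
    d1f = [wiener(c, noise_var) for c in d1]
    # step 3: coarser levels keep edge-related coefficients, filter irregular ones
    d2f = [c if edge[j] else wiener(c, noise_var) for j, c in enumerate(d2)]
    return ihaar(ihaar(a2, d2f), d1f)

noisy = [0, 0.1, -0.05, 0.02, 1.0, 1.1, 0.95, 1.05]   # step edge plus noise
print([round(v, 2) for v in denoise(noisy)])
```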
Image segmentation is a classical and challenging problem in image processing and computer vision. Most segmentation algorithms, however, do not consider overlapped objects. Due to the special characteristics of X-ray imaging, overlapping objects are very common in X-ray images and need to be handled carefully. In this paper, we propose a novel energy functional to solve this problem. The Euler-Lagrange equation is derived, and the segmentation is converted into a front-propagation problem that can be solved efficiently by level set methods. We observed that the proposed energy functional has no unique extremum and that the solution depends on the initialization; an initialization method is therefore proposed to obtain satisfactory results. Experiments on real data validate the proposed method.
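The initialization dependence noted above can be illustrated with a toy minimization (not the paper's functional): gradient descent on an energy with more than one extremum converges to different solutions depending on the starting point, which is exactly why a dedicated initialization scheme is needed.

```python
# Toy illustration: gradient descent on the double-well energy (x^2 - 1)^2,
# which has two minima; different initializations reach different solutions.
def minimize(x, steps=200, lr=0.05):
    for _ in range(steps):
        grad = 4.0 * x * (x * x - 1.0)   # d/dx of (x^2 - 1)^2
        x -= lr * grad
    return x

print(round(minimize(-0.5), 3))   # -> -1.0
print(round(minimize(0.5), 3))    # -> 1.0
```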
Cupping artifacts are among the most serious problems in middle-to-low-energy X-ray flat-panel-detector (FPD)-based cone-beam CT systems. Both beam-hardening effects and scatter can induce cupping artifacts in reconstructions and degrade image quality. In this paper, a two-step cupping-correction method is proposed to eliminate cupping: 1) scatter removal; 2) beam-hardening correction. The X-ray scatter distribution of a specific object is estimated in the projection image by experimental measurement using a beam-stop array (BSA). After interpolation and subtraction, the primary intensity of the projection image is computed. The scatter distribution can also be obtained by convolution with a low-pass filter as the kernel. Linearization is used as the beam-hardening correction method for one-material objects. For two-material cylindrical objects, a new approach that involves no iteration is presented. This approach consists of three steps. First, correct the raw projections by the mapping function of the outer material. Second, reconstruct the cross-section image from the modified projections. Finally, scale the image by a simple weighting function. After scatter removal and beam-hardening correction, the cupping artifacts are well removed, and the contrast of the reconstructed image is remarkably improved.
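The linearization step for a one-material object can be sketched as follows (an illustrative two-energy spectrum with invented attenuation values, not measured data): the polychromatic projection p(L) is sublinear in path length L because of beam hardening, so a calibration table built for the known material maps p back to L, and the linearized projection is mu_ref * L.

```python
import math

# Sketch of beam-hardening linearization for a one-material object
# (illustrative spectrum and attenuation values, not measured data).
weights = {40: 0.6, 70: 0.4}          # assumed spectrum (keV -> weight)
mu = {40: 0.06, 70: 0.03}             # assumed attenuation (1/mm) per energy

def poly_projection(L):
    """Polychromatic projection: sublinear in L due to beam hardening."""
    return -math.log(sum(w * math.exp(-mu[e] * L) for e, w in weights.items()))

# calibration table p -> L, inverted by linear interpolation
table = [(poly_projection(L), L) for L in range(0, 201, 5)]

def linearize(p, mu_ref=0.05):
    for (p0, L0), (p1, L1) in zip(table, table[1:]):
        if p0 <= p <= p1:
            L = L0 + (L1 - L0) * (p - p0) / (p1 - p0)
            return mu_ref * L
    raise ValueError("p outside calibration range")

p = poly_projection(100.0)            # raw polychromatic projection
print(round(p, 3), linearize(p))      # linearized value equals mu_ref * 100 = 5.0
```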
Optical Character Recognition (OCR) is a classical research field and has become one of the most successful applications in the area of pattern recognition. Feature extraction is a key step in the OCR process. This paper presents three algorithms for feature extraction from binary images: the Lattice with Distance Transform (DTL), Stroke Density (SD), and the Co-occurrence Matrix (CM). The DTL algorithm improves the robustness of the lattice feature by using a distance transform to increase the separation between foreground and background and thus reduce the influence of stroke boundaries. The SD and CM algorithms extract robust stroke features based on the fact that humans recognize characters by their strokes, including length and orientation. SD reflects quantized stroke information, including length and orientation; CM reflects the length and orientation of a contour. Together, SD and CM sufficiently describe strokes. Since these three groups of feature vectors complement each other in describing characters, we integrate them and adopt a hierarchical algorithm to achieve optimal performance. Our methods are tested on the USPS (United States Postal Service) database and the Vehicle License Plate Number Pictures Database (VLNPD). Experimental results show that the methods achieve a high recognition rate at reasonable average running time. Under similar conditions, we also compared our results to the box method proposed by Hannmandlu; our methods demonstrated better efficiency.
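A sketch of the DTL idea, as we read the description above (not the authors' code): a two-pass city-block distance transform separates foreground from background, and the image is then divided into a coarse lattice whose cells are summarized by their mean distance value; the glyph and cell size are made-up examples.

```python
# Hedged DTL sketch: two-pass city-block distance transform plus a coarse
# lattice feature; the tiny glyph below is an invented example.
INF = 10**6

def distance_transform(img):
    """City-block distance to the nearest foreground (1) pixel, two passes."""
    h, w = len(img), len(img[0])
    d = [[0 if img[y][x] else INF for x in range(w)] for y in range(h)]
    for y in range(h):                       # forward pass
        for x in range(w):
            if y > 0: d[y][x] = min(d[y][x], d[y-1][x] + 1)
            if x > 0: d[y][x] = min(d[y][x], d[y][x-1] + 1)
    for y in range(h - 1, -1, -1):           # backward pass
        for x in range(w - 1, -1, -1):
            if y < h - 1: d[y][x] = min(d[y][x], d[y+1][x] + 1)
            if x < w - 1: d[y][x] = min(d[y][x], d[y][x+1] + 1)
    return d

def lattice_feature(img, cells=2):
    """Mean distance value in each cell of a cells-by-cells lattice."""
    d, h, w = distance_transform(img), len(img), len(img[0])
    ch, cw = h // cells, w // cells
    return [sum(d[y][x] for y in range(cy*ch, (cy+1)*ch)
                        for x in range(cx*cw, (cx+1)*cw)) / (ch*cw)
            for cy in range(cells) for cx in range(cells)]

glyph = [[0, 1, 1, 0],                      # tiny binary "stroke" example
         [0, 1, 0, 0],
         [0, 1, 0, 0],
         [0, 1, 1, 1]]
print(lattice_feature(glyph))               # -> [0.5, 1.0, 0.5, 0.5]
```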