Compressed sensing (CS) requires undersampled projection data, but CT x-ray tubes cannot be pulsed quickly enough to achieve reduced-view undersampling. We propose an alternative within-view undersampling strategy, named SparseCT, as a practical CS technique to reduce CT radiation dose. SparseCT uses a multi-slit collimator (MSC) to interrupt the x-ray beam, thus acquiring undersampled projection data directly. This study evaluated the feasibility of SparseCT via simulations using a standardized patient dataset. Because the projection data in the dataset are fully sampled, we retrospectively undersampled the projection data to simulate SparseCT acquisitions in three steps. First, two photon distributions were simulated, representing the cases with and without the MSC. Second, by comparing the two distributions, detector regions with more than 80% of the x-rays blocked by the MSC were identified and the corresponding projection data were discarded. Third, noise was inserted into the remaining projection data to account for the increase in quantum noise due to reduced flux (partial MSC blockage). The undersampled projection data were then reconstructed iteratively using a penalized weighted least squares cost function with the conjugate gradient algorithm. The reconstruction promotes sparsity in the solution and incorporates the undersampling model. Weighting factors were applied to the projection data during reconstruction to account for the noise variation in the undersampled projection data. Compared to images acquired with reduced tube current (provided in the standardized patient dataset), SparseCT undersampling produced less image noise while preserving pathologies and fine structures such as vessels in the reconstructed images.
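The three retrospective-undersampling steps could be sketched as follows. This is a minimal illustration, not the study's exact implementation: the function name `simulate_sparsect`, the incident photon count `i0`, and the Gaussian approximation to the extra quantum noise are all assumptions.

```python
import numpy as np

def simulate_sparsect(sino, blockage, i0=1e5, rng=None):
    """Retrospectively undersample a fully sampled sinogram (views x detectors).

    blockage: fraction of x-rays blocked by the MSC per detector element,
    same shape as sino. Elements blocked by more than 80% are discarded;
    the rest receive extra noise to match the reduced-flux acquisition.
    Returns (undersampled sinogram, keep-mask, PWLS statistical weights).
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = blockage <= 0.8                        # step 2: keep <= 80% blockage
    flux = i0 * (1.0 - blockage)                  # transmitted photons before object
    counts = flux * np.exp(-sino)                 # expected detected photons
    # step 3: add only the variance missing from the full-dose acquisition
    var_reduced = 1.0 / np.maximum(counts, 1.0)
    var_full = 1.0 / np.maximum(i0 * np.exp(-sino), 1.0)
    extra_std = np.sqrt(np.maximum(var_reduced - var_full, 0.0))
    noisy = sino + rng.normal(0.0, 1.0, sino.shape) * extra_std
    noisy[~mask] = 0.0                            # discarded rays carry no data
    weights = np.where(mask, counts, 0.0)         # weights ~ detected counts
    return noisy, mask, weights
```

The returned weights would feed the penalized weighted least squares reconstruction, down-weighting rays with fewer detected photons.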
Purpose: Multi-energy CT (e.g., dual energy or photon counting) facilitates the identification of certain compounds via data decomposition. However, the standard approach to decomposition (i.e., solving a system of linear equations) fails if, due to noise, a pixel's vector of HU values falls outside the boundary of values describing possible pure or mixed basis materials. Typically, this is addressed by either discarding those pixels or projecting them onto the closest point on this boundary. However, when acquiring four (or more) energy volumes, the space bounded by three (or more) materials that may be found in the human body (either naturally or through injection) can be quite small, and noise may leave few pixels inside it. Projection onto the boundary therefore becomes an important option. But projection in more than three dimensions is not possible with standard vector algebra, because the cross-product is not defined there. Methods: We describe a technique which employs Clifford algebra to perform projection in an arbitrary number of dimensions. Clifford algebra describes a manipulation of vectors that incorporates the concepts of addition, subtraction, multiplication, and division. Thereby, vectors may be operated on like scalars, forming a true algebra. Results: We tested our approach on a phantom containing inserts of calcium, gadolinium, iodine, gold nanoparticles, and mixtures of pairs thereof. Images were acquired on a prototype photon counting CT scanner under a range of threshold combinations. Comparisons of the accuracy of different threshold combinations against ground truth are presented. Conclusions: Material decomposition is possible with three or more materials and four or more energy thresholds using Clifford algebra projection to mitigate noise.
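The abstract's Clifford-algebra machinery is not reproduced here; as a sketch, the same orthogonal projection onto the subspace spanned by the basis-material vectors can be written with standard linear algebra (a least-squares fit), which likewise works in any number of energy dimensions. The function name and the toy basis are illustrative assumptions.

```python
import numpy as np

def project_to_material_subspace(hu, basis):
    """Project an n-energy HU vector onto the subspace spanned by the
    basis-material vectors (columns of `basis`), mapping noisy pixels that
    fall outside the feasible region back toward it. Works in any number
    of dimensions, where the 3-D cross-product is unavailable.
    Returns the projected vector and the material coefficients.
    """
    coeffs, *_ = np.linalg.lstsq(basis, hu, rcond=None)  # least-squares fit
    return basis @ coeffs, coeffs
```

The residual `hu - projection` is orthogonal to every basis-material vector, which is the defining property of the projection regardless of dimension.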
Recent advances in Photon Counting CT (PCCT) have facilitated the simultaneous acquisition of multiple
image volumes with differing energy thresholds. This presents the user with several choices for energy
threshold combinations. As compared to standard clinical dual-kVp CT, where the user typically has only
three choices of kVp pairings (e.g., 80/150Sn, 90/150Sn, 100/150Sn), a "quad" PCCT system with 14
threshold settings has C(14,4) = 1001 possible threshold combinations (assuming no restrictions). In this
paper we describe a computationally tractable means to order, from best (most accurate) to worst (least
accurate), threshold combinations for the task of discriminating pure materials of assumed approximate
concentrations using the Bhattacharyya Coefficient. We observe that this ordering is not necessarily identical
to the ordering for the task of decomposing material mixtures into their components. We demonstrate our
approach on phantom data.
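The ordering step could be sketched as follows, scoring each threshold subset by its worst pairwise Bhattacharyya coefficient between materials (modeled here as multivariate Gaussians). The per-material means and covariances are hypothetical inputs, and the worst-pair scoring rule is an illustrative choice, not necessarily the paper's exact criterion.

```python
import itertools
import numpy as np

def bhattacharyya_coefficient(mu1, cov1, mu2, cov2):
    """BC between two multivariate Gaussians; 0 = fully separable, 1 = identical."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    db = (diff @ np.linalg.solve(cov, diff)) / 8.0 + 0.5 * np.log(
        np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return float(np.exp(-db))

def rank_threshold_combinations(means, covs, n_pick=4):
    """Order n_pick-subsets of thresholds from best to worst discrimination.

    means[m]: per-material mean HU vector over all thresholds;
    covs[m]: matching covariance. A combination's score is its worst
    (largest) pairwise BC, restricted to the chosen threshold indices.
    """
    n_thresh = len(next(iter(means.values())))
    scored = []
    for combo in itertools.combinations(range(n_thresh), n_pick):
        idx = list(combo)
        bc = max(
            bhattacharyya_coefficient(
                means[a][idx], covs[a][np.ix_(idx, idx)],
                means[b][idx], covs[b][np.ix_(idx, idx)])
            for a, b in itertools.combinations(means, 2))
        scored.append((bc, combo))
    scored.sort()                       # lowest worst-case overlap first
    return scored
```

Enumerating all 1001 quad combinations this way is cheap, since each score needs only small-matrix algebra per material pair.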
Aortic Aneurysms (AA) are the 13th leading cause of death in the US. In standard clinical
practice, intervention is initiated when the maximal cross-sectional diameter reaches
5.5 cm. However, this is a 1D measure, and it has been suggested in the literature that
higher order measurements (area, volume) might be more appropriate clinically.
Unfortunately, no commercially available tools exist for extracting a 3D model of the
epithelial layer (versus the lumen) of the vessel. Therefore, we present work towards
semi-automatically recovering the aorta from CT angiography volumes with the aim to
facilitate such studies.
We build our work upon a previous approach to this problem. Bodur et al.
presented a variant of the isoperimetric algorithm to semi-automatically segment several
individual aortic cross-sections across longitudinal studies, quantifying any growth. As a
by-product of these sparse cross-sections, it is possible to form a series of rough 3D
models of the aorta.
In this work we focus on creating a more detailed 3D model at a single time point
by automatically recovering the aorta between the sparse user-initiated segmentations.
Briefly, we fit a tube model to the sparse segmentations to approximate the cross-sections
at intermediate regions, refine the approximations and apply the isoperimetric algorithm
to them. From these resulting dense cross-sections we reconstruct our model. We applied
our technique to 12 clinical datasets which included significant amounts of thrombus.
Comparisons of the automatically recovered cross-sections with cross-sections drawn by
an expert resulted in an average difference of 0.3 cm for diameter and 2 cm^2 for area.
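The tube-model initialization between the sparse user-initiated segmentations could be sketched as follows. This is illustrative only: it assumes each sparse segmentation is summarized by a center point and a mean radius, and uses linear interpolation along arc length where the actual method fits and then refines a tube model.

```python
import numpy as np

def interpolate_tube(centers, radii, n_dense):
    """Approximate a tube between sparse cross-sections.

    centers: (k, 3) center points of the user-initiated segmentations,
    ordered along the vessel; radii: (k,) mean radii. Returns n_dense
    interpolated centers and radii, parameterized by cumulative arc
    length. These serve only as initial approximations to be refined by
    the isoperimetric segmentation at each intermediate cross-section.
    """
    centers = np.asarray(centers, float)
    seg = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # arc-length parameter
    t = np.linspace(0.0, s[-1], n_dense)
    dense_centers = np.stack(
        [np.interp(t, s, centers[:, i]) for i in range(3)], axis=1)
    dense_radii = np.interp(t, s, radii)
    return dense_centers, dense_radii
```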
It is standard practice for physicians to rely on empirical, population based models to define the relationship
between regions of left ventricular (LV) myocardium and the coronary arteries which supply them with
blood. Physicians use these models to infer the presence and location of disease within the coronary arteries
based on the condition of the myocardium within their distribution (which can be established non-invasively
using imaging techniques such as ultrasound or magnetic resonance imaging). However, coronary artery
anatomy often varies from the assumed model distribution in the individual patient; thus, a non-invasive
method to determine the correspondence between coronary artery anatomy and LV myocardium would have
immediate clinical impact. This paper introduces an image-based rendering technique for visualizing maps of
coronary distribution in a patient-specific approach. From an image volume derived from computed
tomography (CT) images, a segmentation of the LV epicardial surface, as well as the paths of the coronary
arteries, is obtained. These paths form seed points for a competitive region growing algorithm applied to the
surface of the LV. A ray casting procedure in spherical coordinates from the center of the LV is then
performed. The cast rays are mapped to a two-dimensional, circle-based surface, forming our coronary
distribution map. We applied our technique to a patient with known coronary artery disease and a qualitative
evaluation by an expert in coronary cardiac anatomy showed promising results.
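The spherical ray-casting step could be sketched as follows, assuming a label volume produced by the competitive region growing (0 = background, positive integers = coronary territories). Mapping the resulting (theta, phi) grid into polar coordinates yields the circular distribution map; the voxel-stepping ray march here is a deliberately simple stand-in for a proper ray caster.

```python
import numpy as np

def coronary_distribution_map(labels, center, n_theta=64, n_phi=64, max_r=100.0):
    """Cast rays in spherical coordinates from the LV center through a
    labeled volume and record the first non-zero territory label hit per
    direction. The (theta, phi) grid unrolls to a 2-D distribution map.
    """
    out = np.zeros((n_theta, n_phi), dtype=labels.dtype)
    thetas = np.linspace(0.0, np.pi, n_theta)                    # polar angle
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)  # azimuth
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            d = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            for r in np.arange(1.0, max_r, 1.0):    # march along the ray
                p = np.round(center + r * d).astype(int)
                if np.any(p < 0) or np.any(p >= labels.shape):
                    break
                if labels[tuple(p)]:
                    out[i, j] = labels[tuple(p)]
                    break
    return out
```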
Aortic aneurysms are the 13th leading cause of death in the United States. In
standard clinical practice, assessing the progression of disease in the aorta, as well as
the risk of aneurysm rupture, is based on measurements of aortic diameter. We
propose an accurate and fast method for automatically segmenting the aortic
vessel border, enabling the calculation of aortic diameters on CTA acquisitions
and leaving clinicians more time for their evaluations. While segmentation of the aortic
lumen is straightforward in CTA, segmentation of the outer vessel wall (epithelial
layer) in a diseased aorta is difficult; furthermore, no clinical tool currently exists to
perform this task. The difficulties are due to the similarities in intensity of
surrounding tissue (and thrombus due to lack of contrast agent uptake), as well as the
complications from bright calcium deposits.
Our overall method makes use of a centerline for the purpose of resampling
the image volume into slices orthogonal to the vessel path. This centerline is
computed semi-automatically via a distance transform. The difficult task of
automatically segmenting the aortic border on the orthogonal slices is performed via
a novel variation of the isoperimetric algorithm which incorporates circular
constraints (priors). Our method is embodied in a prototype which allows the loading
and registration of two datasets simultaneously, facilitating longitudinal
comparisons. Both the centerline and border segmentation algorithms were evaluated
on four patients, each with two volumes acquired 6 months to 1.5 years apart, for a
total of eight datasets. Results showed good agreement with clinicians' findings.
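The distance-transform centerline step could be sketched as follows, assuming a binary lumen mask oriented (z, y, x). The two-pass chamfer transform below is a self-contained stand-in for whatever distance transform the prototype uses, and the per-slice deepest-point rule is a simplification of the semi-automatic procedure.

```python
import numpy as np

def chamfer_distance(mask):
    """Two-pass 3-4 chamfer approximation to the Euclidean distance from
    each foreground pixel to the nearest background pixel (in pixels)."""
    big = mask.size * 4
    d = np.where(mask, big, 0).astype(float)
    h, w = mask.shape
    for y in range(h):                      # forward pass
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 3)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x - 1] + 4)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y - 1, x + 1] + 4)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 3)
    for y in range(h - 1, -1, -1):          # backward pass
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 3)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y + 1, x - 1] + 4)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x + 1] + 4)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 3)
    return d / 3.0

def centerline_from_lumen(lumen):
    """For each axial slice of a binary lumen mask (z, y, x), pick the
    point deepest inside the lumen as the centerline sample."""
    pts = []
    for z in range(lumen.shape[0]):
        sl = lumen[z]
        if sl.any():
            d = chamfer_distance(sl)
            y, x = np.unravel_index(np.argmax(d), d.shape)
            pts.append((z, int(y), int(x)))
    return pts
```

The resulting point list would then be smoothed and used to resample the volume into slices orthogonal to the vessel path.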
Post myocardial infarction, the identification and assessment of non-viable (necrotic) tissues is necessary for effective development of intervention strategies and treatment plans. Delayed Enhancement Magnetic Resonance (DEMR) imaging is a technique whereby non-viable cardiac tissue appears with increased signal intensity. Radiologists typically acquire these images in conjunction with other functional modalities (e.g., MR Cine), and use domain knowledge and experience to isolate the non-viable tissues. In this paper, we present a technique for automatically segmenting these tissues given the delineation of myocardial borders in the DEMR and in the End-systolic and End-diastolic MR Cine images. Briefly, we obtain a set of segmentations furnished by an expert and employ an artificial intelligence technique, Support Vector Machines (SVMs), to "learn" the segmentations based on features culled from the images. Using those features we then allow the SVM to predict the segmentations the expert would provide on previously unseen images.
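The learning step could be sketched as follows with a minimal linear SVM trained by stochastic sub-gradient descent on the hinge loss. This stands in for the full (possibly kernelized) SVM of the paper; the feature names in the docstring are illustrative guesses at the kind of per-pixel features culled from the images.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.001, lr=0.1, epochs=500):
    """Minimal linear SVM via hinge-loss SGD (stand-in for a full SVM).

    X: (n, d) per-pixel feature vectors (e.g. DEMR intensity, position
    relative to the delineated myocardial borders, Cine-derived motion);
    y: expert labels in {-1, +1} (viable vs. non-viable tissue).
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in range(len(X)):
            if y[i] * (X[i] @ w + b) < 1.0:      # inside margin: hinge gradient
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                                # outside margin: only shrink
                w -= lr * lam * w
    return w, b

def predict(w, b, X):
    """Predicted label per row of X."""
    return np.sign(X @ w + b)
```

Once trained on the expert-furnished segmentations, `predict` labels the pixels of previously unseen DEMR images.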
MR imaging is increasingly being used as a method for analyzing and diagnosing cardiac function. Segmentation of heart chambers facilitates volume computation, as well as ventricular motion analysis. Successful techniques have been developed for segmentation of individual 2D slices. However, 2D models limit the description of a 3D phenomenon to two dimensions and use only 2D constraints. The resulting model lacks interslice coherency, making interslice interpolation necessary. In addition, the model is more susceptible to corruption due to noise local to one or more slices. We present work towards an approach to segmenting cine MR images using a 3D deformable model with rigid and nonrigid components. Past approaches have used models without rigid components or used isotropic CT data. Our model adaptively subdivides the mesh in response to the forces extracted from image data. Additionally, the local mesh of the model encodes surface orientation to align the model with the desired edge directions, a crucial constraint for distinguishing close anatomical structures. The modified subdivision algorithm preserves the orientation of the elements by vertex ordering. We present results of segmenting two multi-slice cardiac MR image series with interslice resolutions of 8 and 4 mm/slice, and intraslice resolution of 1 mm/pixel. We also include work in progress on tracking multi-slice, multi-phase cine cardiac MR sequences with 4 mm interslice and 1 mm intraslice resolution.
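The orientation-preserving subdivision can be sketched as follows: splitting one triangle into four while keeping the parent's vertex ordering (and hence its surface normal direction) in every child. This is an illustrative sketch of the vertex-ordering idea, not the paper's full adaptive scheme.

```python
def subdivide(tri, midpoint):
    """Split one triangle into four, preserving the parent's vertex
    ordering (e.g. counter-clockwise) so every child keeps the same
    surface orientation. `midpoint` computes the edge midpoint and, in a
    full mesh, would be cached so neighboring triangles share vertices."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```

Because each child lists its vertices in the same rotational order as the parent, the normals computed from the vertex ordering stay consistent across refinement levels.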
The application of numeric methods to the minimization of error has become an emerging paradigm for object recovery. Typically, a parametric representation describing the object is postulated. Its parameters are then adjusted to minimize some measurement of the distance between the representation and the datapoints (the error-of-fit model). Characteristics of the sensor used to recover the points may be implicit in this formulation or may not be included at all. While sensors may be precise for a specific field of view, no sensor is everywhere exact. A laser range finder, for example, yields very sharp x- and y-coordinate values; however, its z-coordinate is less trustworthy. It becomes important to capture the strengths and weaknesses of a sensor and incorporate them into the recovery process. We seek to make explicit the contribution of a particular sensor by introducing a sensor model. This partitioning facilitates the development of an appropriate description of a sensor's characteristics. Also, it helps clarify interactions among different aspects of the recovery process (i.e., the error-of-fit model, the sensor model, and the parametric object representation). The sensor model is reflected in the certainty of sensed quantities (position, color, intensity) associated with a datapoint. We explore whether the introduction of an explicit sensor model yields an improvement in the recovery process. The PROVER (Parametric Representation Of Volumes: Experimental Recovery) system, a testbed used in the development of sensor models, is described.
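A minimal sketch of how a sensor model enters the recovery process, assuming the parametric representation is a plane z = ax + by + c and the sensor model reduces to a per-point standard deviation on the depth coordinate (the uncertain axis of the laser range finder example). The function name and the weighting scheme are illustrative, not PROVER's actual formulation.

```python
import numpy as np

def weighted_plane_fit(points, sigma):
    """Fit z = a*x + b*y + c to range data, weighting each residual by
    the sensor model: sigma[i] is the sensor's standard deviation for
    point i's z coordinate. Smaller sigma => more trusted point =>
    larger weight in the error-of-fit minimization."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    w = 1.0 / np.asarray(sigma)                  # weight = 1 / std
    params, *_ = np.linalg.lstsq(A * w[:, None], points[:, 2] * w, rcond=None)
    return params  # (a, b, c)
```

With uniform sigma this reduces to ordinary least squares; the sensor model changes the answer only where the sensor says its readings differ in reliability.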