The determination of the hemodynamic significance of coronary artery lesions from cardiac computed tomography angiography (CCTA) based on blood flow simulations has the potential to improve CCTA's specificity, thus resulting in improved clinical decision making. The accurate coronary lumen segmentation required for flow simulation is challenging due to several factors. Specifically, the partial-volume effect (PVE) in small-diameter lumina may result in overestimation of the lumen diameter, which can lead to an erroneous hemodynamic significance assessment. In this work, we present a coronary artery segmentation algorithm tailored specifically for flow simulations by accounting for the PVE. Our algorithm detects lumen regions that may be subject to the PVE by analyzing the intensity values along the coronary centerline, and integrates this information into a machine-learning-based graph min-cut segmentation framework to obtain accurate coronary lumen segmentations. We demonstrate the improvement in hemodynamic significance assessment achieved by accounting for the PVE in the automatic segmentation of 91 coronary artery lesions from 85 patients. We compare hemodynamic significance assessments by means of fractional flow reserve (FFR) resulting from simulations on 3D models generated by our segmentation algorithm with and without accounting for the PVE. By accounting for the PVE, we improved the area under the ROC curve for detecting hemodynamically significant CAD by 29% (N=91, 0.85 vs. 0.66, p<0.05, DeLong's test), using an invasive FFR threshold of 0.8 as the reference standard. Our algorithm has the potential to facilitate non-invasive hemodynamic significance assessment of coronary lesions.
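The centerline analysis described above can be illustrated with a minimal sketch: sample lumen intensities along the centerline and flag samples that fall well below a PVE-free proximal reference. This is not the paper's algorithm; the function name, the 30% drop threshold, and the example values are all assumptions for illustration.

```python
import numpy as np

def flag_pve_segments(centerline_hu, proximal_ref_hu, drop_fraction=0.3):
    """Flag centerline samples whose intensity falls well below a
    proximal (large-lumen) reference -- a pattern consistent with the
    partial-volume effect in small-diameter lumina.

    drop_fraction is an assumed tuning parameter, not a published value.
    """
    centerline_hu = np.asarray(centerline_hu, dtype=float)
    threshold = (1.0 - drop_fraction) * proximal_ref_hu
    return centerline_hu < threshold

# Illustrative HU profile: two distal samples drop below 70% of the
# proximal reference and are flagged as possibly PVE-affected.
profile = [460, 450, 440, 300, 280, 430]
flags = flag_pve_segments(profile, proximal_ref_hu=450.0)
```

Flagged regions could then receive a modified intensity term in the graph min-cut framework, along the lines the abstract describes.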
Combined PET/MR imaging allows the high-resolution anatomical information delivered by MRI to be incorporated into the PET reconstruction algorithm, improving PET accuracy beyond standard corrections. We used the working hypothesis that glucose uptake in adipose tissue is low. Our aim was therefore to shift 18F-FDG PET signal into image regions with a low fat content. Dixon MR imaging can be used to generate fat-only images via the water/fat chemical shift difference. The Origin Ensemble (OE) algorithm, a novel Markov chain Monte Carlo method, in turn allows PET data to be reconstructed without forward- and back-projection operations. Through suitable modifications of the Markov chain transition kernel, it is possible to include anatomical a priori knowledge in the OE algorithm. In this work, we used the OE algorithm to reconstruct PET data of a modified IEC/NEMA Body Phantom simulating body water/fat composition. Reconstruction was performed 1) natively, 2) informed with the Dixon MR fat image to down-weight 18F-FDG signal in fatty tissue compartments in favor of adjacent regions, and 3) informed with the fat image to up-weight 18F-FDG signal in fatty tissue compartments, for control purposes. Image intensity profiles confirmed visibly improved contrast and a reduced partial-volume effect at water/fat interfaces. We observed a 17±2% increase in the SNR of hot lesions surrounded by fat, while image quality was almost completely retained in fat-free image regions. An additional in vivo experiment proved the applicability of the presented technique in practice and again verified the beneficial impact of fat-constrained OE reconstruction on PET image quality.
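A toy illustration of how a Dixon-derived fat prior might enter an OE-style transition kernel: fat-rich voxels receive low weights, so event origins migrate toward lean tissue. This is a heavily simplified sketch (the real OE kernel also involves system-matrix and sensitivity terms); the weighting form and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fat_prior_weights(fat_fraction, strength=1.0):
    """Anatomical prior weight per voxel: high Dixon fat fraction gives
    a low weight, down-weighting 18F-FDG signal in fatty tissue.
    `strength` is an assumed tuning parameter."""
    fat_fraction = np.clip(np.asarray(fat_fraction, dtype=float), 0.0, 1.0)
    return (1.0 - fat_fraction) ** strength

def oe_step(origins, candidates_per_event, weights):
    """One sweep of a toy Origin Ensemble update: each event's origin
    voxel is resampled among its candidate voxels with probability
    proportional to (current occupancy + 1) * prior weight."""
    counts = np.bincount(origins, minlength=len(weights)).astype(float)
    new_origins = origins.copy()
    for e, cands in enumerate(candidates_per_event):
        counts[new_origins[e]] -= 1.0          # remove event from its voxel
        p = (counts[cands] + 1.0) * weights[cands]
        p = p / p.sum()
        new_origins[e] = rng.choice(cands, p=p)
        counts[new_origins[e]] += 1.0
    return new_origins

# Two voxels; voxel 1 is pure fat and gets zero weight, so every
# event's origin ends up in voxel 0.
w = fat_prior_weights(np.array([0.0, 1.0]))
origins = oe_step(np.array([0, 1, 1]), [np.array([0, 1])] * 3, w)
```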
Metallic objects severely limit diagnostic CT imaging because of their high X-ray attenuation in the diagnostic energy
range. In contrast, radiation therapy linear accelerators now offer CT imaging with X-ray energies in the megavolt range,
where the attenuation coefficients of metals are significantly lower. We hypothesized that megavoltage cone-beam CT (MVCT) implemented on a radiation therapy linear accelerator can detect and quantify small features in the
vicinity of metallic implants with accuracy comparable to clinical kilovoltage CT (KVCT). Our
test application was detection of osteolytic lesions formed near the metallic stem of a hip prosthesis, a condition of
severe concern in hip replacement surgery.
Both MVCT and KVCT were used to image a phantom containing simulated hemispherical osteolytic bone lesions
centered around a chrome-cobalt hip prosthesis stem, with radii ranging from 0.5 to 4 mm and densities from
0 to 500 mg·cm⁻³. Images from both modalities were visually graded to establish lower limits of lesion
visibility as a function of their size. Lesion volumes and mean densities were determined and compared to reference
values. Volume determination errors were reduced from 34% on KVCT to 20% on MVCT for all lesions, and density
determination errors were reduced from 71% on KVCT to 10% on MVCT.
Localization and quantification of lesions were improved with MVCT imaging. MVCT offers a viable alternative to
clinical CT in cases where accurate 3D imaging of small features near metallic hardware is critical. These results need to
be extended to other metallic objects of different composition and geometry.
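The volume and density comparisons reported above amount to mean absolute percent errors against known phantom reference values. A small sketch, with invented numbers rather than the study's data:

```python
import numpy as np

def mean_determination_error(measured, reference):
    """Mean absolute percent error between measured lesion quantities
    (volumes or densities) and their known phantom reference values,
    as used to compare KVCT and MVCT accuracy."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(100.0 * np.abs(measured - reference) / reference))

# Hypothetical lesion volumes in mm^3, for illustration only:
ref  = [10.0, 50.0, 200.0]   # known phantom values
kvct = [14.0, 65.0, 270.0]   # metal artifacts inflate the KVCT estimates
mvct = [11.0, 54.0, 216.0]
```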
Graphics processing units (GPUs) are increasingly used for general purpose calculations. Their pipelined architecture
can be exploited to accelerate various parallelizable algorithms. Medical imaging applications are
inherently well suited to benefit from the development of GPU-based computational platforms. We evaluate in
this work the potential of GPUs to improve the execution speed of two common medical imaging tasks, namely
Fourier transforms and tomographic reconstructions. A two-dimensional fast Fourier transform (FFT) algorithm
was GPU-implemented and compared, in terms of execution speed, to two popular CPU-based FFT routines.
Similarly, the Feldkamp, Davis and Kress (FDK) algorithm for cone-beam tomographic reconstruction was implemented
on the GPU and its performance compared to a CPU version. Different reconstruction strategies
were employed to assess the performance of various GPU memory layouts. For the specific hardware used, GPU
implementations of the FFT were up to 20 times faster than their CPU counterparts, but slower than highly
optimized CPU versions of the algorithm. Tomographic reconstructions were faster on the GPU by a factor up to
30, allowing 256³ voxel reconstructions from 256 projections in about 20 seconds. Overall, GPUs are an attractive
alternative to other imaging-dedicated computing hardware like application-specific integrated circuits (ASICs)
and field programmable gate arrays (FPGAs) in terms of cost, simplicity and versatility. With the development
of simpler language extensions and programming interfaces, GPUs are likely to become essential tools in medical imaging.
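The 2D FFT benchmark described above can be sketched with a simple wall-clock timing loop. NumPy's FFT stands in here for "a CPU FFT routine"; a GPU implementation (e.g. via CuPy) would expose an equivalent call. The sizes and repeat count are arbitrary choices, not the paper's settings.

```python
import time
import numpy as np

def time_fft2(n=256, repeats=10):
    """Average wall-clock time of a 2D FFT on an n-by-n random image,
    the kind of kernel the paper ports to the GPU."""
    x = np.random.rand(n, n)               # real-valued test image
    t0 = time.perf_counter()
    for _ in range(repeats):
        y = np.fft.fft2(x)
    dt = (time.perf_counter() - t0) / repeats
    return y, dt

spectrum, seconds_per_fft = time_fft2(n=256, repeats=5)
```

The same harness, pointed at two implementations, yields the speedup factors the abstract reports.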
Micro-CT for bone structural analysis has progressed from an in-vitro laboratory technique to devices for in-vivo
assessment of small animals and the peripheral human skeleton. Currently, topological parameters of bone architecture
are the primary goals of analysis. Additional measurement of the density or degree of mineralization (DMB) of
trabecular and cortical bone at the microscopic level is desirable to study effects of disease and treatment progress. This
information is not commonly extracted because of the challenges of accurate measurement and calibration at the tissue
level. To assess the accuracy of micro-CT DMB measurements in a realistic but controlled situation, we prepared bone-mimicking
watery solutions at concentrations of 100 to 600 mg/cm³ K2HPO4 and scanned them with micro-CT, both in
glass vials and in microcapillary tubes with inner diameters of 50, 100 and 150 μm to simulate trabecular thickness. Values
of the linear attenuation coefficients μ in the reconstructed image are commonly affected by beam hardening effects for
larger samples and by partial volume effects for small volumes. We implemented an iterative reconstruction technique to
reduce beam hardening. Partial volume effects were reduced by excluding voxels near the tube wall. With these
two measures, the constancy of the reconstructed voxel values and their linearity with solution concentration
improved to better than 90% accuracy. However, since the expected change in real bone is small, more measurements
are needed to confirm that micro-CT can indeed be adapted to assess bone mineralization at the tissue level.
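The wall-exclusion step can be sketched as a geometric mask that keeps only voxels a safe margin away from the capillary wall; the function name, pixel units, and margin value are assumptions for illustration.

```python
import numpy as np

def interior_mask(shape, center, inner_radius_px, margin_px):
    """Boolean mask of voxels well inside a cylindrical tube cross
    section: voxels closer than margin_px to the wall are excluded to
    suppress partial-volume contamination of the mean attenuation."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return r <= (inner_radius_px - margin_px)

# 11x11 slice, tube of radius 5 px, 2 px safety margin at the wall.
mask = interior_mask((11, 11), center=(5, 5), inner_radius_px=5, margin_px=2)
```

The mean of μ over the masked voxels then estimates the solution's attenuation with reduced wall contamination.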
The main use of micro-CT is to analyze bone structure: the linear attenuation coefficients μ at each voxel position in the scanned volume are typically used only to establish a threshold that separates bone from non-bone material. To additionally quantify the degree of mineralization of bone (DMB) from μ in multi-component samples, we corrected for beam hardening and its associated errors in DMB quantification, which are caused by the polychromatic spectra of the X-ray tubes used in bench-top micro-CT devices. The correction was implemented by simulating the difference between monochromatic and polychromatic X-ray sources and adding these differences to the original image in an iterative fashion. When applied to simple cylinders containing a single material, the constancy of the reconstructed voxel values improved to better than 90% accuracy.
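The physics underlying such corrections can be illustrated with a simple projection-linearization sketch. Note this is a simpler scheme than the iterative image-domain correction described above: a toy two-line spectrum is invented, the polychromatic projection is tabulated against path length, and measured values are mapped back to their monochromatic equivalents.

```python
import numpy as np

# Toy two-line spectrum; weights and attenuation values are invented.
weights = np.array([0.5, 0.5])
mus = np.array([0.40, 0.20])   # 1/cm; low-energy photons attenuate more
mu_mono = 0.30                 # target monochromatic coefficient (1/cm)

def poly_projection(length_cm):
    """Polychromatic log-projection through length_cm of material.
    Beam hardening makes this sub-linear in length_cm."""
    return -np.log(np.sum(weights * np.exp(-mus * length_cm)))

def corrected_projection(p_poly, lengths, table):
    """Linearize a measured polychromatic value: invert the lookup
    table to the equivalent path length, then rescale to the
    monochromatic projection mu_mono * length."""
    length = np.interp(p_poly, table, lengths)
    return mu_mono * length

lengths = np.linspace(0.0, 5.0, 501)
table = np.array([poly_projection(L) for L in lengths])
p_corr = corrected_projection(poly_projection(2.0), lengths, table)
```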
Objective: Metabolic activity in trabecular bone is an important indicator in the therapy of bone diseases like osteoporosis. It is reflected by the amount of osteoid (young, not yet mineralized bone) and young calcified tissue (YCT). Our aim was to replace standard 2D histomorphometry with a 3D approach for osteoid and YCT measurement. Measurement Methods: Excised lumbar vertebrae of 5 ovariectomized (OVX) and 5 control rats were 3D-scanned with computed micro-tomography (μCT, isotropic spatial resolution 20 μm) and laser scanning confocal microscopy (LSCM, 20× magnification, 1×1×2 μm³ voxel size). μCT shows trabecular bone structure; LSCM shows osteoid and YCT by fluorescent light. Image Processing Methods: The fraction of bone to tissue volume (BV/TV) and the number of trabeculae (Tb.N) were calculated from globally thresholded μCT images. LSCM images were enhanced using a top-hat transform, globally thresholded, and morphologically closed. Separate regions were labeled by volume growing. We measured the feature-volume to background-volume ratio and the number of features per unit volume. Results and Conclusions: In the specimens obtained from the OVX rats, a significant increase in the volume fractions of osteoid and YCT could be seen. The μCT-LSCM approach presents a significant improvement over time-consuming standard histomorphometry. The image processing for both modalities could be performed automatically.
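The LSCM processing chain above (top-hat enhancement, global threshold, morphological closing, labeling) can be sketched with SciPy's ndimage module. The threshold and structuring-element size are assumed parameters, and connected-component labeling stands in for the "volume growing" step.

```python
import numpy as np
from scipy import ndimage

def osteoid_features(volume, threshold, structure_size=3):
    """White top-hat to enhance small bright features, global
    threshold, morphological closing, then connected-component
    labeling; returns labels, feature count, and the feature-volume
    to background-volume ratio."""
    footprint = np.ones((structure_size,) * volume.ndim, dtype=bool)
    enhanced = ndimage.white_tophat(volume, footprint=footprint)
    binary = enhanced > threshold
    closed = ndimage.binary_closing(binary, structure=footprint)
    labels, n_features = ndimage.label(closed)
    feature_ratio = closed.sum() / (closed.size - closed.sum())
    return labels, n_features, float(feature_ratio)

# Synthetic 2D demo: two bright spots on a dark background.
demo = np.zeros((10, 10))
demo[2, 2] = 10.0
demo[7, 7] = 10.0
labels, n_spots, ratio = osteoid_features(demo, threshold=5.0)
```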
Thin-slice spiral computed tomography techniques open up new avenues for assessing human bone architecture in vivo. Slice images alone, however, do not yield accurate and reproducible results for structural parameters. The use of standard 2D histomorphometric measures is problematic because even a slice thickness of 1 mm is large compared to a trabecular thickness of less than 200 μm. In this study we investigated the use of topological parameters and, in particular, their dependence on spatial resolution. The topology was derived from a 3D skeleton of a μCT dataset of a real trabecular bovine bone sample. The 3D thinning algorithm from which the skeleton was computed is described in detail. Decreasing the spatial resolution of the μCT data resulted in the following percent changes: number of branches, -46%; number of nodes, -50%; average branch length, +28%. These results indicate that it is not possible to determine topological parameters accurately in thin-slice CT images. However, diagnostically relevant changes over time may still be quantifiable. To analyze these problems further, we developed realistic spongiosa models using the rapid prototyping technique of stereolithography. Plastic models of a real trabecular bone network were built. So far these spongiosa models are slightly enlarged compared to the original data. Stereolithographic models of artificial geometries showed that a spatial resolution of 100 μm with variations of ±50 μm is technically achievable. μCT imaging of the stereolithographic spongiosa models revealed excellent agreement between the model and the original dataset. Although the absorption characteristics of plastic and bone differ, the CT contrast of the stereolithographic model imaged in air is comparable to the contrast of bone imaged in a marrow matrix.
Thus, for clinical purposes, the plastic models can serve as a standard for trabecular bone which can, for example, be used to compare 2D and 3D structural analysis methods, the impact of spatial resolution, and the influence of segmentation techniques.
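A crude stand-in for the topological parameters above (branch ends and nodes of a skeleton) can be computed by counting skeleton neighbors in the 6-neighborhood: voxels with one skeleton neighbor are branch ends, voxels with three or more are nodes. This is not the paper's thinning algorithm, and all names are assumptions.

```python
import numpy as np

FACE_NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def skeleton_topology(skel):
    """Count branch ends (1 face-neighbor) and nodes (3 or more
    face-neighbors) of a 3D binary skeleton."""
    skel = np.asarray(skel, dtype=bool)
    padded = np.pad(skel, 1)
    neigh = np.zeros(skel.shape, dtype=int)
    for dz, dy, dx in FACE_NEIGHBORS:
        neigh += padded[1 + dz:1 + dz + skel.shape[0],
                        1 + dy:1 + dy + skel.shape[1],
                        1 + dx:1 + dx + skel.shape[2]]
    neigh = np.where(skel, neigh, 0)     # only skeleton voxels count
    return {"ends": int(np.sum(neigh == 1)),
            "nodes": int(np.sum(neigh >= 3))}

# Synthetic T-shaped skeleton: a 5-voxel line with a 2-voxel stub,
# giving three ends and one junction node.
skel = np.zeros((1, 5, 5), dtype=bool)
skel[0, 2, 0:5] = True
skel[0, 3, 2] = True
skel[0, 4, 2] = True
stats = skeleton_topology(skel)
```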