Perfusion imaging is an essential method for stroke diagnostics. One of the most important factors for successful
therapy is obtaining the diagnosis as quickly as possible. Therefore, our approach aims at perfusion imaging (PI) with a
cone-beam C-arm system that provides perfusion information directly in the interventional suite. For PI, the imaging
system has to provide excellent soft tissue contrast resolution in order to allow the detection of small attenuation
enhancement due to contrast agent in the capillary vessels. The limited dynamic range of flat-panel detectors, as well as the sparse sampling of the slowly rotating C-arm, results in limited soft tissue contrast when combined with standard reconstruction methods. We chose a penalized maximum-likelihood reconstruction method to obtain suitable results. To minimize the computational load, the 4D reconstruction task is reduced to several static 3D reconstructions. We also include an ordered-subset technique that transitions to a small number of subsets, which sharpens the image in fewer iterations while also suppressing noise. Instead of the standard
multiplicative EM correction, we apply a Newton-based optimization to further accelerate the reconstruction
algorithm. The latter optimization reduces the computation time by up to 70%. Further acceleration is provided
by a multi-GPU implementation of the forward and backward projection, which fulfills the demands of cone beam
geometry. In this preliminary study, we evaluate this procedure on clinical data. Perfusion maps are computed and compared with reference images from magnetic resonance scans. We found a high correlation between both modalities.
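The ordered-subset bookkeeping described above (many subsets early for fast progress, few subsets late for noise suppression) can be sketched as follows; the function names and the linear transition schedule are illustrative assumptions, not the authors' implementation:

```python
def make_subsets(num_projections, num_subsets):
    """Interleaved partition of projection indices into ordered subsets,
    so each subset sees views spread evenly over the scan range."""
    return [list(range(s, num_projections, num_subsets))
            for s in range(num_subsets)]

def subset_schedule(num_iterations, start_subsets, end_subsets):
    """Linear transition from many subsets (fast early progress) to a
    small number of subsets (noise suppression) over the iterations."""
    if num_iterations == 1:
        return [start_subsets]
    step = (end_subsets - start_subsets) / (num_iterations - 1)
    return [round(start_subsets + i * step) for i in range(num_iterations)]
```

Each outer iteration would then run one update per subset, using only that subset's projections in the forward and backward projections.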
Nowadays, medical interventions are often supported by localization systems using different measurement tools (MT). This requires registering the MT coordinate system to the world coordinate system used by the medical device. Hand-eye calibration is a well-known method from robotics to estimate the transformation between the gripper of a robot (hand) and an MT (eye) rigidly attached to the robot. Using a calibration tool (e.g., a checkerboard), one can obtain the hand-eye transformation from known relative movements of the robot and the data from the MT. The approach can also be used for an MT located elsewhere, using markers on the device. The positions of the markers are not required to be known, since they remain rigid during the motions. Based on prior work using dual quaternions to represent transformations in SE(3), we not only took into account movements between immediate neighbouring positions Pi and Pj, but combined all P positions to gain P(P-1)/2 submotions without increasing the number of positions conducted during the calibration. We took into account the unity constraint for dual quaternions, since only unit dual quaternions represent rigid motions in space. We performed simulations that show the advantage of our algorithm. Additionally, we gained experimental data that supported the outcome of the simulations. We conclude that our approach estimates the hand-eye transformation more accurately than the aforementioned algorithms.
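The gain in submotions can be illustrated with a short sketch: from P absolute poses (represented here as plain 4x4 homogeneous matrices rather than dual quaternions, for brevity), all P(P-1)/2 pairwise relative motions are formed instead of only the P-1 motions between immediate neighbours. All names are illustrative:

```python
import itertools
import numpy as np

def relative_motions(poses):
    """All pairwise relative motions M_ij = inv(A_i) @ A_j from a list of
    absolute poses, yielding P*(P-1)/2 submotions instead of only the
    P-1 motions between immediate neighbours."""
    return [np.linalg.inv(poses[i]) @ poses[j]
            for i, j in itertools.combinations(range(len(poses)), 2)]
```

Every additional calibration position thus contributes P-1 new submotions rather than just one, without any extra robot movements beyond the P positions themselves.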
The steadily growing computational power of modern hardware allows the use of more sophisticated reconstruction methods. We present an implementation of the previously studied maximum likelihood (ML) method for the case of a flat-panel rotational X-ray device. In contrast to the related algebraic reconstruction technique (ART), the ML method takes into consideration the physical properties of X-radiation, especially the corpuscular
character and the associated Poisson distribution of the measured number of photons. The basic principle is the
maximization of the joint probability of all measured projections with respect to the attenuation coefficients of
all voxels of the object. The application of the ML optimization procedure finally generates an iterative scheme
for the update of the attenuation coefficients. For this, in each step an accurate estimation of the forward
projections (FP) is mandatory. We use an approximate calculation of the footprints of single voxels based on
separable trapezoids. The resulting enormous computational effort is handled by an efficient implementation
on GPGPUs (general-purpose computing on graphics processing units). As a first evaluation, using data from 133 projections of a sheep head acquired by means of a flat-panel rotational angiography system, we compare the reconstruction by the ML-based method with the gold standard, the Feldkamp filtered back-projection (FBP) procedure. The results reveal clearly reduced streak artifacts as well as less blurring in the statistical reconstruction.
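The maximized objective can be written down compactly. The sketch below (variable names are illustrative) evaluates the Poisson log-likelihood of the measured counts given the blank-scan intensity and the forward-projected line integrals, up to the constant log(y!) term that does not depend on the attenuation coefficients:

```python
import numpy as np

def transmission_loglik(counts, blank, line_integrals):
    """Poisson log-likelihood (constant term dropped) of measured photon
    counts y given expected counts lambda = blank * exp(-line_integrals):
    sum_i [ y_i * log(lambda_i) - lambda_i ]."""
    expected = blank * np.exp(-line_integrals)
    return float(np.sum(counts * np.log(expected) - expected))
```

Setting the derivative with respect to a line integral to zero recovers the familiar relation l = ln(blank/counts), i.e., the likelihood is maximized when the forward projection matches the measured attenuation.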
Transcranial sonography (TCS) is a well-established neuroimaging technique that allows for visualizing several
brainstem structures, including the substantia nigra, and aids the diagnosis and differential diagnosis of various movement disorders, especially Parkinsonian syndromes. However, the surrounding brainstem anatomy can hardly be recognized due to the limited image quality of B-scans. In this paper, a visualization system for the diagnosis of the substantia nigra is presented, which utilizes neuronavigated TCS to reconstruct tomographic slices from registered MRI datasets and visualizes them simultaneously with the corresponding TCS planes in real time.
To generate MRI tomographic slices, the tracking data of the calibrated ultrasound probe are passed to
an optimized slicing algorithm, which computes cross sections at arbitrary positions and orientations from the
registered MRI dataset. The extracted MRI cross sections are finally fused with the region of interest from the
ultrasound image. The system allows for the computation and visualization of slices at a near real-time rate.
Initial tests of the system show added value over pure sonographic imaging. The system also allows for
reconstructing volumetric (3D) ultrasonic data of the region of interest, and thus contributes to enhancing the
diagnostic yield of midbrain sonography.
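The core of such a slicing algorithm can be sketched with a few lines of NumPy. The sketch below uses nearest-neighbour sampling for brevity (a production implementation would interpolate trilinearly), and all names are illustrative:

```python
import numpy as np

def extract_slice(volume, center, u_axis, v_axis, size, spacing=1.0):
    """Sample an oblique plane from a 3D volume at an arbitrary position
    and orientation. center is in voxel coordinates; u_axis and v_axis
    are orthonormal in-plane direction vectors; size = (rows, cols)."""
    rows, cols = size
    u = np.asarray(u_axis, float)
    v = np.asarray(v_axis, float)
    r = (np.arange(rows) - rows / 2.0) * spacing
    c = (np.arange(cols) - cols / 2.0) * spacing
    rr, cc = np.meshgrid(r, c, indexing="ij")
    # voxel position of every output pixel on the plane
    pts = (np.asarray(center, float)[:, None, None]
           + u[:, None, None] * rr + v[:, None, None] * cc)
    # nearest-neighbour lookup, clamped to the volume bounds
    idx = np.clip(np.rint(pts).astype(int), 0,
                  np.array(volume.shape)[:, None, None] - 1)
    return volume[idx[0], idx[1], idx[2]]
```

In the system described above, center and the two axes would come from the tracked probe pose after calibration and MRI registration.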
In this work, we present a new type of model for object localization, which is well suited for anatomical objects
exhibiting large variability in size, shape and posture, for usage in the discriminative generalized Hough transform
(DGHT). The DGHT combines the generalized Hough transform (GHT) with a discriminative training approach
to automatically obtain robust and efficient models. It has been shown to be a strong tool for object localization
capable of handling a rather large amount of shape variability. For some tasks, however, the variability exhibited
by different occurrences of the target object becomes too large to be represented by a standard DGHT model. To
be able to capture such highly variable objects, several sub-models, representing the modes of variability as seen by
the DGHT, are created automatically and are arranged in a higher dimensional model. The modes of variability
are identified on-the-fly during training in an unsupervised manner. Following the concept of the DGHT, the
sub-models are jointly trained with respect to a minimal localization error employing the discriminative training
approach. The procedure is tested on a dataset of thorax radiographs with the goal of localizing the clavicles. Due to different arm positions, the posture and arrangement of the target and surrounding bones differ strongly, which hampers the training of a good localization model. Employing the new model approach, the localization rate improves by 13% on unseen test data compared to the standard model.
An automatic algorithm for training suitable models for the Generalized Hough Transform (GHT) is presented. The applied iterative approach learns the shape of the target object directly from training images and incorporates the variability in pose and scale of the target object exhibited in the images. To make the model more robust and representative of the target object, an individual weight is estimated for each model point using a discriminative approach. These weights are employed in the voting procedure of the GHT, increasing the impact of important points on the localization result. The proposed procedure is extended here with a new error measure and a revised point-weight training to enable the generation of models representing several target objects. Common parts of the target objects thereby obtain larger weights, while the model may also contain object-specific model points, if necessary, to be representative of all targets.
The method is applied here to the localization of knee joints in long-leg radiographs. A quantitative comparison of the new approach with separate localization of the right and left knee showed improved results in terms of localization precision and performance.
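The role of the point weights in the voting procedure can be sketched as follows (a minimal 2D translation-only GHT; rotation/scale handling and the training of the weights themselves are omitted, and all names are illustrative):

```python
import numpy as np

def weighted_ght_vote(edge_points, model_offsets, weights, accum_shape):
    """Weighted GHT voting: every edge point casts, for each model point,
    a vote for the hypothesized reference position (edge point minus
    model offset), scaled by that model point's learned weight."""
    accum = np.zeros(accum_shape)
    for ey, ex in edge_points:
        for (oy, ox), w in zip(model_offsets, weights):
            ry, rx = ey - oy, ex - ox
            if 0 <= ry < accum_shape[0] and 0 <= rx < accum_shape[1]:
                accum[ry, rx] += w
    return accum

def localize(accum):
    """Localization result: cell with the maximum accumulated vote mass."""
    return np.unravel_index(np.argmax(accum), accum.shape)
```

Larger weights on model points shared by all target objects concentrate vote mass at the correct reference position, which is what the discriminative weight training exploits.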
We propose a system to estimate blood flow velocity in angiographic image data for patient-specific blood
flow simulations. Angiographies are acquired routinely for diagnosis and before treatment of vascular diseases.
Projective blood flow is measured in digital subtraction X-ray angiography (2D-DSA) images by tracking contrast
agent propagation. Spatial information is added by re-projecting 2D centerline pixels to the reconstructed 3D
X-ray rotation angiography (3D-RA) data of the same subject. Ambiguities caused by occluding vessels from
the virtual viewpoint of the acquired 2D-DSA image are resolved by a graph-based approach. The blood flow
velocity can be used as a boundary condition for accurate blood flow simulations that can help physicians to understand the hemodynamics of the vasculature. Our focus is on analyzing cerebral angiographic data. We performed several experiments with phantom and patient data that demonstrated the accuracy and functionality of our method. We experimentally evaluated the projective flow estimation method and the re-projection method. We measured mean deviations from the ground truth between 11% and 15.7% for phantom data. We also showed the ability of our method to produce plausible results with patient data.
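One common way to turn tracked contrast propagation into a velocity estimate is sketched below: per centerline point, a bolus arrival time is read off the time-density curve, and the velocity follows as the slope of arc length over arrival time. The half-peak threshold and all names are illustrative assumptions, not necessarily the exact scheme used here:

```python
import numpy as np

def arrival_times(tdc, times, frac=0.5):
    """Bolus arrival time per centerline point: first frame at which the
    time-density curve reaches a fraction of its peak value.
    tdc has shape (num_points, num_frames)."""
    out = []
    for curve in tdc:
        out.append(times[np.argmax(curve >= frac * curve.max())])
    return np.array(out)

def mean_velocity(arc_lengths, t_arrival):
    """Mean flow velocity as the least-squares slope of arc length
    versus arrival time."""
    return np.polyfit(t_arrival, arc_lengths, 1)[0]
```

The arc lengths come from the 3D centerline obtained by re-projecting the 2D-DSA centerline pixels onto the 3D-RA reconstruction, so the estimated velocity is a true spatial quantity rather than a projective one.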
New advances in MRI technology enable fast acquisition of high-resolution images. In combination with the new open
architecture, these scanners are entering the surgical suite, being used as intra-operative imaging modalities for minimally
invasive interventions. However, for large-scale usage, the major issue of the availability of appropriate surgical tools
is still unsolved. Such instruments, e.g., needles and catheters, have to be MR-safe and MR-compatible but nevertheless still
have to be visible in the MRI image. This is usually solved by integrating markers onto non-magnetic devices.
For reasons of MR safety, workflow, and cost effectiveness, semi-active markers without any connection to the outside
are preferable. The challenge in the development and integration of such resonant markers is to precisely meet the MRI
frequency while keeping the geometrical dimensions of the interventional tool constant. This paper focuses on the reliable
integration and easy fabrication of such resonant markers on the tip of an interventional instrument. Starting with a
theoretical background for resonant labels a self-sufficient pre-tuned marker consisting of a standard capacitor and a
thin-film inductor is presented. A prototype is built using aerosol deposition for the inductor on a 6-F polymer catheter
and by integrating an off-the-shelf capacitor into the lumen of the catheter. Because the dielectric materials of some capacitors lead to artifacts in the MRI image, different capacitor technologies are investigated. The prototypes are scanned with an interventional MRI device, demonstrating the proper functionality of the tools.
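The tuning condition behind such resonant markers is the standard LC resonance f0 = 1/(2π√(LC)), which must coincide with the Larmor frequency of the scanner. The small sketch below computes the capacitance needed for a given coil inductance; the 100 nH example value is an illustrative assumption, not a measured property of the presented marker:

```python
import math

GAMMA_OVER_2PI = 42.577e6  # proton gyromagnetic ratio / (2*pi), Hz per tesla

def larmor_frequency(b0_tesla):
    """Proton resonance frequency of the scanner's main field."""
    return GAMMA_OVER_2PI * b0_tesla

def required_capacitance(inductance_h, f0_hz):
    """Capacitance tuning an LC marker to f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (inductance_h * (2.0 * math.pi * f0_hz) ** 2)
```

At 1.5 T (about 63.9 MHz), a hypothetical 100 nH thin-film inductor would call for a capacitance on the order of tens of picofarads, which illustrates why small geometric tolerances shift the resonance noticeably.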
This paper presents a new iterative motion correction technique composed of motion estimation in projection space, motion segmentation in image space, and motion compensation within an analytical filtered-backprojection (FBP) image reconstruction algorithm. The motion is estimated by elastic registration of the acquired projections on reference projections, which are sampled from the image reconstructed in a previous iteration step. To apply the motion compensation locally, the image regions significantly affected by motion are segmented. First, the perceived motion is identified in projection space by computing the absolute difference between acquired line integrals and reference line integrals. Then, these differences are reconstructed into image space, and the resulting image is regularized with a pipeline of standard image processing operators. The result of this procedure is a normalized motion map, associating each image element with a measure of the local motion detected there. The estimated displacement vectors in projection space and the reconstructed motion map in image space are then used by an adaptive motion-compensated FBP algorithm to reconstruct a sharper image. Results are shown qualitatively and quantitatively for reconstructions from realistic projections simulated from clinical patient data. Since the method does not assume any periodicity of the motion model, it can correct reconstruction artifacts due to unstructured patient motion, such as breath-hold failure, abdominal contractions, and nervous movements.
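The regularization step that turns the reconstructed difference image into a normalized motion map can be sketched as follows. Box smoothing plus suppression of weak responses is one plausible choice of "standard image processing operators"; the concrete pipeline here is an illustrative assumption:

```python
import numpy as np

def motion_map(diff_image, blur=3, threshold=0.1):
    """Normalized motion map from a reconstructed difference image:
    absolute value, box smoothing, suppression of weak responses,
    and scaling to [0, 1]."""
    h, w = diff_image.shape
    pad = blur // 2
    padded = np.pad(np.abs(diff_image), pad, mode="edge")
    smooth = np.zeros((h, w))
    for dy in range(blur):          # plain box blur keeps the sketch short
        for dx in range(blur):
            smooth += padded[dy:dy + h, dx:dx + w]
    smooth /= blur * blur
    peak = smooth.max()
    if peak == 0.0:
        return smooth               # no motion detected anywhere
    smooth[smooth < threshold * peak] = 0.0
    return smooth / peak
```

The resulting map, with one value in [0, 1] per image element, is exactly the kind of weight an adaptive motion-compensated FBP can use to apply the correction only where motion was detected.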
This paper presents an iterative method for the compensation of motion artifacts on slowly rotating computed tomography (CT) systems. The inconsistencies among projections introduce severe reconstruction artifacts for free-breathing acquisitions: streaks and false structures appear, and the resolution is limited by strong blurring. The rationale of the motion compensation method is to iteratively correct the reconstructed image by first extracting the motion artifacts in projection space, then reconstructing the artifacts in image space, and finally subtracting the artifacts from the original reconstruction. The perceived motion is extracted in projection space from the difference between acquired projections and reference projections sampled from the image reconstructed in a previous iteration step. The initial image is reconstructed from the acquired data and, although it contains artifacts, serves as the reference; it is iteratively corrected by subtracting the estimated motion artifacts. The originality of the technique stems from the fact that the patient motion itself is not estimated; instead, the artifacts are reconstructed in image space. The method can provide sharp static anatomical images on slowly rotating on-board imagers in radiotherapy or interventional C-arm systems. Qualitative and quantitative figures are shown for experiments based on simulated projections of a sequence of clinical images resulting from a respiratory-gated helical CT acquisition. The border of the diaphragm becomes sharper and the contrast improves for small structures in the lungs.
Scattered radiation is a major source of artifacts in flat detector based cone-beam computed tomography. In this paper, a novel software-based method for retrospective scatter correction is described and evaluated. The method is based on the approximation of the imaged object by a simple geometric model (e.g., a homogeneous water-like ellipsoid) that is estimated from the set of acquired projections. This is achieved by utilizing a numerical optimization procedure to determine the model parameters for which there is maximum correspondence between the measured projections and the projections of the model. Monte-Carlo simulations of this model are used to calculate scatter estimates for the acquired projections. Finally, using the scatter-corrected projections, tomographic reconstruction is conducted by means of cone-beam filtered back-projection. The correction method is evaluated using simulated and experimentally acquired projection data sets of geometric and physical head phantoms. It is found that the method is able to accurately estimate mean scatter levels in X-ray projections, making it possible to significantly reduce scatter-caused artifacts in 3D reconstructed images.
This study deals with a systematic assessment of the potential of different schemes for computerized scatter correction in flat detector based cone-beam X-ray computed tomography. The analysis is based on simulated scatter for a CT image of a human head. Using a Monte-Carlo cone-beam CT simulator, the spatial distribution of scattered radiation produced by this object was calculated with high accuracy for the different projected views of a circular tomographic scan. Using these data and, as a reference, a scatter-free forward projection of the phantom, the potential of different schemes for scatter correction was evaluated. In particular, the ideally achievable accuracy of schemes based on estimating a constant scatter level in each projection was compared to approaches aiming at the estimation of a more complex spatial shape of the scatter distribution. For each scheme, the remaining cupping artifacts in the reconstructed volumetric image were quantified and analyzed. It was found that accurate estimation of just a constant scatter level for each projection already allows for comparatively accurate compensation of scatter-caused artifacts.
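Why even a constant scatter estimate helps is visible directly in the log domain: an unremoved scatter offset S biases the line integral -ln((I_p + S)/I_0) low, most strongly for strongly attenuated rays, which is precisely the mechanism behind cupping. A minimal sketch (names illustrative):

```python
import numpy as np

def line_integral(measured, blank, scatter_estimate=0.0):
    """Log-domain line integral with an optional constant scatter level
    subtracted from the measured intensity before the logarithm."""
    return -np.log((measured - scatter_estimate) / blank)
```

Subtracting the correct constant level before the logarithm restores the true line integral, which is why this comparatively simple scheme already removes most of the cupping.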
Scattered radiation is a major source of image degradation and nonlinearity in flat detector based cone-beam CT. Due to the larger irradiated volume, the amount of scattered radiation in true cone-beam geometry is considerably higher than in fan-beam CT. On the one hand, this reduces the signal-to-noise ratio, since the additional scattered photons contribute only to the noise and not to the measured signal; on the other hand, cupping and streak artifacts arise in the reconstructed volume. Anti-scatter grids composed of lead lamellae and interspacing material decrease the SNR in flat detector based CB-CT geometry, because the beneficial scatter-attenuating effect is overcompensated by the absorption of primary radiation. Additionally, due to the high amount of scatter that still remains behind the grid, cupping and streak artifacts cannot be reduced sufficiently. Computerized scatter correction schemes are therefore essential for achieving artifact-free reconstructed images in cone-beam CT. In this work, a fast model-based scatter correction algorithm is proposed, aiming at accurately estimating the level and spatial distribution of the scattered radiation background in each projection. This allows streak and cupping artifacts due to scattering in cone-beam CT applications to be reduced effectively.
In this paper, soft tissue contrast visibility in neural applications is investigated for volume imaging based on flat X-ray detector cone-beam CT. Experiments have been performed on a high-precision bench-top system with a rotating object table and a fixed X-ray tube-detector arrangement. Several scans of a post mortem human head specimen have been performed under various conditions. Two different flat X-ray detectors with 366 x 298 mm2 (Trixell Pixium 4700) and 176 x 176 mm2 (Trixell Pixium 4800) active areas have been employed. During a single rotation, up to 720 projections have been acquired.
For the reconstruction of the 3D images, a Feldkamp algorithm has been employed. Reconstructed images of the human cadaver head demonstrate that added soft tissue contrast down to 10 HU is detectable at an X-ray dose comparable to CT. However, the limited size of the smaller detector led to truncation artifacts, which were partly compensated by extrapolation of the projections outside the field of view.
To reduce cupping artifacts resulting from scattered radiation and to improve the visibility of low-contrast details, a novel homogenization procedure based on segmentation and polynomial fitting has been developed and applied to the reconstructed voxel data. Even for narrow HU windows, limitations due to scatter-induced cupping artifacts are no longer noticeable after applying the homogenization procedure.
In this paper, the performance of focused lamellar anti-scatter grids, as currently used in fluoroscopy, is studied in order to derive guidelines for grid usage in flat detector based cone beam CT. The investigation aims at quantifying the signal-to-noise ratio improvement factor achieved by the use of anti-scatter grids.
First, the results of detailed Monte Carlo simulations as well as measurements are presented. From these, the general characteristics of the impinging field of scattered and primary photons are derived. Phantoms modeling the head, thorax, and pelvis regions have been studied for various imaging geometries with varying phantom size, cone and fan angles, and patient-detector distances.
Second, simulation results are shown for ideally focused and vacuum-spaced grids as a best-case approach, as well as for grids with realistic spacing materials. The grid performance is evaluated by means of the primary and scatter transmission and the signal-to-noise ratio improvement factor as a function of imaging geometry and grid parameters.
For a typical flat detector cone beam CT setup, the grid selectivity, and thus the performance of anti-scatter grids, is much lower than for setups where the grid is located directly behind the irradiated object. While a standard grid improves the SNR for small object-to-grid distances, for geometries as used in flat detector based cone beam CT the SNR is deteriorated by an anti-scatter grid in many application scenarios. This holds even for the pelvic region.
Standard fluoroscopy anti-scatter grids were found to decrease the SNR in many application scenarios of cone beam CT due to the large patient-detector distance and have, therefore, only a limited benefit in flat detector based cone beam CT.
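The figure of merit used in this analysis can be written in closed form: with primary transmission Tp, scatter transmission Ts, and scatter-to-primary ratio SPR at the detector, the SNR improvement factor of a grid is K = Tp·sqrt((1 + SPR)/(Tp + Ts·SPR)), where K < 1 means the grid degrades the SNR. A small sketch with illustrative transmission values:

```python
import math

def snr_improvement(tp, ts, spr):
    """SNR improvement factor of an anti-scatter grid:
    K = Tp * sqrt((1 + SPR) / (Tp + Ts * SPR)).
    K > 1: the grid helps; K < 1: the absorption of primary photons
    outweighs the scatter reduction."""
    return tp * math.sqrt((1.0 + spr) / (tp + ts * spr))
```

Because the large patient-detector distance of cone beam CT setups already removes much of the scatter geometrically, the effective SPR at the grid is lower than in fluoroscopy, pushing K below 1 in many of the scenarios studied above.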
We present results on 3D image quality in terms of spatial resolution (MTF) and low-contrast detectability, obtained on a flat dynamic X-ray detector (FD) based cone-beam CT (CB-CT) setup. Experiments have been performed on a high-precision bench-top system with a rotating object table, a fixed X-ray tube, and a 176 x 176 mm2 active detector area (Trixell Pixium 4800). Several objects, including CT performance, MTF, and pelvis phantoms, have been scanned under various conditions, including a high-dose setup in order to explore the 3D performance limits. Under these optimal conditions, the system is capable of resolving less than 1% (~10 HU) contrast in a water background. Within a pelvis phantom, even muscle- and fat-equivalent inserts are clearly distinguishable. This also holds for fast acquisitions at up to 40 fps. Focusing on spatial resolution, we obtain an almost isotropic three-dimensional resolution of up to 30 lp/cm at 10% modulation.