Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter
through the femoral artery and vascular system to the site of pathology. Intra-interventional navigation is done under the
guidance of one or at most two two-dimensional (2D) X-ray fluoroscopic images or 2D digital subtraction angiograms
(DSA). Due to the projective nature of 2D images, the interventionist needs to mentally reconstruct the position of the
catheter with respect to the three-dimensional (3D) patient vasculature, which is not a trivial task. By 3D-2D registration of
pre-interventional 3D images like CTA, MRA or 3D-DSA and intra-interventional 2D images, intra-interventional tools
such as catheters can be visualized on the 3D model of the patient vasculature, allowing easier and faster navigation. Such navigation may consequently reduce the total ionizing dose and the amount of delivered contrast medium. In the past,
development and evaluation of 3D-2D registration methods for endovascular treatments received considerable attention.
The main drawback of these methods is that they must be initialized close to the correct position, as they typically have a small capture range. In this paper, a novel registration method with a larger capture range and higher success
rate is proposed. The proposed method and a state-of-the-art method were tested and evaluated on synthetic and clinical
3D-2D image pairs. The results on both databases indicate that, although the proposed method was slightly less accurate, it significantly outperformed the state-of-the-art 3D-2D registration method in terms of robustness, measured by capture
range and success rate.
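The registration concept summarized above amounts to optimizing a rigid pose of the pre-interventional 3D vasculature so that its projection agrees with the intra-interventional 2D image. The following is a minimal, generic sketch of such a pose-optimization loop; the simplified C-arm geometry, the 2D vesselness feature image, and all function names are illustrative assumptions, not the novel method proposed in the paper.

```python
# Generic 3D-2D rigid registration sketch: optimize a 6-DOF pose so that the
# projected 3D vessel points land on strong 2D vessel responses.
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import map_coordinates


def rigid_transform(params, pts):
    """Apply a 6-DOF rigid transform (3 rotations in rad, 3 translations in mm)."""
    rx, ry, rz, tx, ty, tz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])


def project(pts, sid=1200.0, pixel_mm=0.3):
    """Perspective projection onto the detector (simplified C-arm geometry)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    s = sid / (sid - z)          # magnification along the source-detector axis
    return np.stack([x * s / pixel_mm, y * s / pixel_mm], axis=1)


def cost(params, vessel_pts, vesselness_2d):
    """Negative mean 2D vesselness sampled at the projected 3D vessel points."""
    uv = project(rigid_transform(params, vessel_pts))
    vals = map_coordinates(vesselness_2d, [uv[:, 1], uv[:, 0]],
                           order=1, mode='constant', cval=0.0)
    return -vals.mean()


# Toy data: a synthetic "vessel" point cloud and a random 2D feature image
# stand in for a segmented CTA/MRA/3D-DSA model and a DSA-derived feature map.
vessel_pts = np.random.randn(200, 3) * 20.0
vesselness_2d = np.random.rand(512, 512)
result = minimize(cost, x0=np.zeros(6), args=(vessel_pts, vesselness_2d),
                  method='Powell')
print(result.x)   # estimated rotations (rad) and translations (mm)
```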
Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter
through the femoral artery and vascular system into the brain and into the aneurysm or AVM. Intra-interventional
navigation utilizes digital subtraction angiography (DSA) to visualize vascular structures and X-ray fluoroscopy to
localize the endovascular components. Due to the two-dimensional (2D) nature of the intra-interventional images,
navigation through a complex three-dimensional (3D) structure is a demanding task. Registration of pre-interventional
MRA, CTA, or 3D-DSA images and intra-interventional 2D DSA images can greatly enhance visualization and
navigation. As a consequence of better navigation in 3D, the amount of required contrast medium and absorbed dose
could be significantly reduced. In the past, development and evaluation of 3D-2D registration methods received
considerable attention. Several validation image databases and evaluation criteria have been created and made publicly available. However, applications of 3D-2D registration methods to cerebral angiograms and their validation
are rather scarce. In this paper, the 3D-2D robust gradient reconstruction-based (RGRB) registration algorithm is applied
to CTA and DSA images and analyzed. For evaluation purposes, five image datasets, each comprising a 3D CTA
and several 2D DSA-like digitally reconstructed radiographs (DRRs) generated from the CTA, with accurate gold
standard registrations were created. A total of 4000 registrations on these five datasets resulted in mean mTRE values
between 0.07 and 0.59 mm, capture ranges between 6 and 11 mm, and success rates between 61% and 88% using a failure
threshold of 2 mm.
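For readers unfamiliar with the reported evaluation criteria, the sketch below shows how mTRE, success rate and capture range are commonly computed in 3D-2D registration studies. The 95% criterion and 1 mm binning used for the capture range are common conventions assumed here, not details restated from the paper.

```python
# Common 3D-2D registration evaluation criteria (generic formulations).
import numpy as np


def mtre(T_est, T_gold, targets):
    """Mean target registration error (mm) over a set of 3D target points,
    given estimated and gold-standard 4x4 rigid transforms."""
    p = np.c_[targets, np.ones(len(targets))]          # homogeneous coordinates
    d = (p @ T_est.T)[:, :3] - (p @ T_gold.T)[:, :3]
    return np.linalg.norm(d, axis=1).mean()


def success_rate(final_mtres, threshold=2.0):
    """Fraction of registrations whose final mTRE is below the failure threshold."""
    return (np.asarray(final_mtres) < threshold).mean()


def capture_range(initial_mtres, final_mtres, threshold=2.0, bin_mm=1.0):
    """Largest initial misalignment (binned in 1 mm steps) up to which at least
    95% of registrations in every bin succeed."""
    initial_mtres = np.asarray(initial_mtres)
    final_mtres = np.asarray(final_mtres)
    edges = np.arange(0.0, initial_mtres.max() + bin_mm, bin_mm)
    reach = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (initial_mtres >= lo) & (initial_mtres < hi)
        if not sel.any() or success_rate(final_mtres[sel], threshold) < 0.95:
            break
        reach = hi
    return reach


# Toy usage: 100 trials with random initial displacements and simulated outcomes.
rng = np.random.default_rng(0)
init = rng.uniform(0, 15, 100)
final = np.where(init < 8, rng.uniform(0, 1.5, 100), rng.uniform(3, 20, 100))
print(success_rate(final), capture_range(init, final))
```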
In this paper, we propose a new gold standard data set for the validation of 2D/3D image registration algorithms for
image guided radiotherapy. A gold standard data set was calculated using a pig head with attached fiducial markers. We
used several imaging modalities common in diagnostic imaging or radiotherapy which include 64-slice computed
tomography (CT), magnetic resonance imaging (MRI) using T1, T2 and proton density (PD) sequences, and cone beam
CT (CBCT) imaging data. Radiographic data were acquired using kilovoltage (kV) and megavoltage (MV) imaging
techniques. The image information reflects both anatomy and reliable fiducial marker information, and improves on existing data sets in the level of anatomical detail and image data quality. The markers in the three-dimensional (3D) and two-dimensional (2D) images were segmented using Analyze 9.0 (AnalyzeDirect, Inc.) and in-house software. The
projection distance errors (PDE) and the expected target registration errors (TRE) over all the image data sets were found
to be less than 1.7 mm and 1.3 mm, respectively. The gold standard data set, obtained with state-of-the-art imaging
technology, has the potential to improve the validation of 2D/3D registration algorithms for image guided therapy.
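The projection distance error reported above measures how far the projected gold-standard 3D marker positions fall from the segmented 2D marker centroids. A minimal sketch follows, assuming a generic 3x4 projection matrix rather than the calibrated geometry of the actual data set; all coordinates are toy values.

```python
# Projection distance error (PDE) for a marker-based gold standard (generic sketch).
import numpy as np


def projection_distance_error(P, markers_3d, markers_2d):
    """Mean distance (detector units) between segmented 2D marker centroids and
    the perspective projections of the corresponding 3D markers."""
    X = np.c_[markers_3d, np.ones(len(markers_3d))]     # homogeneous 3D points
    uvw = X @ P.T                                       # P is a 3x4 projection matrix
    uv = uvw[:, :2] / uvw[:, 2:3]                       # perspective division
    return np.linalg.norm(uv - markers_2d, axis=1).mean()


# Toy usage: identity-like projection with unit focal length.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
markers_3d = np.array([[10.0, 5.0, 100.0], [-8.0, 2.0, 110.0]])
markers_2d = np.array([[0.1, 0.05], [-0.07, 0.02]])
print(projection_distance_error(P, markers_3d, markers_2d))
```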
Nowadays, radiation therapy systems incorporate kV imaging units that allow real-time acquisition of intra-fractional X-ray images of the patient with high detail and contrast. An application of this technology is tumor motion monitoring during irradiation. For tumor tracking, implanted markers or position sensors are used, which requires an intervention. 2D/3D intensity-based registration is an alternative, non-invasive method, but the procedure must be accelerated to the update rate of the imaging device, which lies in the range of 5 Hz. In this paper we investigate fast 2D/3D registration of a CT volume to a single kV X-ray image using a new porcine
reference phantom with seven implanted fiducial markers. Several parameters influencing the speed and accuracy
of the registrations are investigated. First, four intensity-based merit functions, namely Cross-Correlation, Rank Correlation, Mutual Information and Correlation Ratio, are compared. Second, wobbled splatting and
ray casting rendering techniques are implemented on the GPU and the influence of each algorithm on the
performance of 2D/3D registration is evaluated. Rendering times for a single DRR of 20 ms were achieved.
Different thresholds of the CT volume were also examined for rendering to find the setting that achieves the best
possible correspondence with the X-ray images. Fast registrations below 4 s became possible with an in-plane
accuracy down to 0.8 mm.
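Two of the four merit functions compared above, Cross-Correlation and Mutual Information, can be written compactly as shown below. These are generic textbook CPU formulations, not the authors' GPU implementations, and the histogram bin count is an arbitrary choice.

```python
# Generic intensity-based merit functions for comparing a DRR with a kV X-ray image.
import numpy as np


def cross_correlation(drr, xray):
    """Pearson (normalized) cross-correlation between a DRR and an X-ray image."""
    a = drr.ravel() - drr.mean()
    b = xray.ravel() - xray.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def mutual_information(drr, xray, bins=64):
    """Mutual information estimated from a joint intensity histogram."""
    hist, _, _ = np.histogram2d(drr.ravel(), xray.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())


# Toy usage with random images standing in for a rendered DRR and a kV X-ray.
drr = np.random.rand(256, 256)
xray = 0.7 * drr + 0.3 * np.random.rand(256, 256)
print(cross_correlation(drr, xray), mutual_information(drr, xray))
```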
Verification of registration accuracy is paramount for assessing the validity and clinical feasibility of a registration method. When a ground truth registration is not available or when local misalignments need to be examined, a qualitative assessment of registration results must be performed. Verification of registration was performed by analyzing correspondences of gradients derived from the rigidly registered CT and MR images. The strongest local CT gradients were extracted and transformed into the MR gradient image. A local gradient correspondence search in the MR image was performed using discrete systematic displacements in the direction of the strongest local CT gradients. As the measure of gradient correspondence between the CT and MR gradients, both the absolute values and the directions of the gradients were considered. The directional information was integrated by means of a weighting function, calculated as the product of the absolute values of the strongest local CT gradient and the MR gradient, weighted by the angle between these two gradients. Two correspondence visualization techniques and a gradient displacement analysis were developed to highlight misaligned gradients and provide a qualitative assessment of local misregistration. The feasibility of the proposed approach was demonstrated on CT and MR images of the RIRE database registered using the normalized mutual information similarity measure. Global and local misregistrations were detected. Furthermore, the acquisition artifacts of non-rectified MR images could be visualized and were shown to degrade registration performance.
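One plausible reading of the weighting function described above is the product of the CT and MR gradient magnitudes modulated by the angle between them; the cosine modulation and the discrete profile search in the sketch below are illustrative assumptions, not the exact formulation used.

```python
# Sketch of a gradient-correspondence weight and a discrete displacement search.
import numpy as np


def gradient_correspondence_weight(g_ct, g_mr):
    """Correspondence weight for one CT gradient and one candidate MR gradient."""
    mag_ct = np.linalg.norm(g_ct)
    mag_mr = np.linalg.norm(g_mr)
    if mag_ct == 0 or mag_mr == 0:
        return 0.0
    cos_angle = float(np.dot(g_ct, g_mr) / (mag_ct * mag_mr))
    return mag_ct * mag_mr * max(cos_angle, 0.0)   # ignore opposing gradients


def best_displacement(g_ct, mr_gradients_along_profile, step_mm=0.5):
    """Search discrete displacements along the CT gradient direction and return
    the displacement (mm) with the highest correspondence weight."""
    weights = [gradient_correspondence_weight(g_ct, g)
               for g in mr_gradients_along_profile]
    k = int(np.argmax(weights))
    offset = k - (len(weights) - 1) // 2           # centre of the profile = 0 mm
    return offset * step_mm, weights[k]


# Toy usage: one CT gradient and MR gradients sampled at 5 displacements.
g_ct = np.array([1.0, 0.0, 0.0])
profile = [np.array([0.2, 0.1, 0.0]), np.array([0.8, 0.0, 0.0]),
           np.array([1.1, 0.1, 0.0]), np.array([0.9, -0.1, 0.0]),
           np.array([0.3, 0.0, 0.0])]
print(best_displacement(g_ct, profile))
```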
A number of intensity- and feature-based methods have been proposed for 3D to 2D registration. However,
for multimodal 3D/2D registration of MR and X-ray images, only hybrid and reconstruction-based methods
were shown to be feasible. In this paper we optimize the extraction of features in the form of bone edge
gradients, which were proposed for 3D/2D registration of MR and X-ray images. The assumption behind
such multimodal registration is that the extracted gradients in 2D X-ray images match well to the corresponding
gradients extracted in 3D MR images. However, since MRI and X-rays are fundamentally different modalities, the
corresponding bone edge gradients may not appear in the same position, and the above-mentioned assumption
may thus not be valid. To test the validity of this assumption, we optimized the extraction of bone edges
in 3D MR and also in CT images for the registration to 2D X-ray images. The extracted bone edges were
systematically displaced in the direction of their gradients, i.e. in the direction of the normal to the bone
surface, and corresponding effects on the accuracy and convergence of 3D/2D registration were evaluated. The
evaluation was performed on two different sets of MR, CT and X-ray images of spine phantoms with a known gold standard, the first consisting of five and the second of eight vertebrae. The results showed that a better registration
can be obtained if bone edges in MR images are optimized for each application-specific MR acquisition protocol.
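The systematic displacement of bone edges along their gradient direction can be sketched as follows; point coordinates, gradients and offsets are illustrative toy values, not data from the spine phantoms.

```python
# Shift each extracted bone-edge point along its own gradient direction
# (the normal to the bone surface) by a fixed signed offset.
import numpy as np


def displace_edges(edge_points, edge_gradients, offset_mm):
    """Shift each edge point by offset_mm along its unit-normalized gradient."""
    g = np.asarray(edge_gradients, dtype=float)
    normals = g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-12)
    return np.asarray(edge_points, dtype=float) + offset_mm * normals


# Toy usage: displace three edge points outwards (+0.5 mm) and inwards (-0.5 mm).
pts = np.array([[10.0, 4.0, 2.0], [11.0, 4.5, 2.2], [12.0, 5.0, 2.4]])
grads = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 0.5]])
for d in (-0.5, 0.0, 0.5):
    print(d, displace_edges(pts, grads, d))
```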