Image registration for internal organs and soft tissues is considered extremely challenging due to organ shifts and tissue deformation caused by patient movements such as respiration and repositioning. In our previous work, we proposed a fast registration method for deformable tissues undergoing small rotations. Here we extend that method to deformable registration of soft tissues with large displacements. We analyzed the deformation field of the liver by decomposing the displacement into shift, rotation, and pure deformation components, and concluded that in many clinical cases the liver motion combines large rotations with small pure deformations. This analysis justifies the use of linear elastic theory in our image registration method. We also propose a region-based neuro-fuzzy transformation model that seamlessly stitches together local affine and local rigid models in different regions. Experiments on a liver MRI dataset demonstrate the effectiveness of the proposed registration method. We also compared the proposed method with our previous method on tissues with large rotations and found that it performs better when pure deformation is combined with large rotation. Validation results show a target registration error of 1.87 ± 0.87 mm and an average centerline distance error of 1.28 ± 0.78 mm. The proposed technique has the potential to significantly improve registration capabilities and the quality of intraoperative image guidance. To the best of our knowledge, this is the first time the complex displacement of the liver has been explicitly separated into local pure deformation and rigid motion.
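The rotation/pure-deformation decomposition described above can be illustrated with a polar decomposition of a local deformation gradient. The sketch below is not the authors' implementation; the matrix F and the SVD-based factorization in NumPy are illustrative assumptions:

```python
import numpy as np

# Hypothetical local deformation gradient F (identity + displacement
# gradient) for one liver neighborhood; the values are illustrative only.
F = np.array([[0.98, -0.17, 0.02],
              [0.17,  0.99, 0.01],
              [0.00,  0.02, 1.01]])

# Polar decomposition F = R @ U via the SVD: R is the local rigid
# rotation, U the symmetric pure-deformation (stretch) tensor.
W, s, Vt = np.linalg.svd(F)
R = W @ Vt
U = Vt.T @ np.diag(s) @ Vt

# Rotation angle from the trace of R, and the size of the pure
# deformation as the deviation of the stretch tensor from identity.
theta_deg = np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1, 1)))
strain = np.linalg.norm(U - np.eye(3))
print(f"rotation ~= {theta_deg:.1f} deg, pure-deformation norm ~= {strain:.3f}")
```

For this F the recovered rotation is large (roughly 10 degrees) while the stretch tensor stays close to identity, which is the regime the abstract describes: large local rotation, small pure deformation.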
In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance
imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US),
can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a
well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality
real-time images within a high-quality anatomical context. For the guidance of cardiac procedures, it would
be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational
cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical
environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention.
Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational
precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR
and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing
and optimization techniques for real-time rendering of 4D (3D + time) cardiac images. We also present our
multimodality 4D medical image visualization engine, which runs directly on a GPU in real time by exploiting
the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different
imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing
are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D
MR and US cardiac datasets.
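As a rough illustration of per-modality transfer functions and multimodality compositing (here on the CPU in NumPy, rather than as GPU textures and shaders as in the engine described above), the following sketch classifies one co-registered MR/US sample pair through separate RGBA lookup tables and alpha-blends the results. The table contents and the `composite` helper are hypothetical:

```python
import numpy as np

# Toy stand-ins for per-modality transfer functions: each is a 256-entry
# RGBA lookup table with values in [0, 1]. A real engine would store
# these as 1D textures on the GPU; the ramps below are illustrative.
mr_tf = np.zeros((256, 4))
mr_tf[:, 0] = np.linspace(0.0, 1.0, 256)    # red ramp for MR
mr_tf[:, 3] = np.linspace(0.0, 0.6, 256)    # semi-transparent

us_tf = np.zeros((256, 4))
us_tf[:, 2] = np.linspace(0.0, 1.0, 256)    # blue ramp for US
us_tf[:, 3] = np.linspace(0.0, 0.8, 256)

def composite(mr_val, us_val):
    """Classify one co-registered sample pair through its own transfer
    function, then blend US over MR with standard alpha compositing."""
    mr = mr_tf[int(mr_val)]
    us = us_tf[int(us_val)]
    a = us[3] + mr[3] * (1.0 - us[3])
    rgb = (us[:3] * us[3] + mr[:3] * mr[3] * (1.0 - us[3])) / max(a, 1e-6)
    return np.append(rgb, a)

rgba = composite(200, 120)
```

Assigning each modality its own lookup table is what lets MR anatomy and US echoes remain visually distinguishable after fusion.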
Intra-cardiac echocardiography (ICE) is commonly used to guide intra-cardiac procedures, such as the treatment of atrial
fibrillation (AF). However, effective surgical navigation based on ICE images is not trivial, due to the low signal-to-noise
ratio (SNR) and limited field of view of ultrasound (US) images. The interpretation of ICE can be significantly
improved if correctly placed in the context of three-dimensional magnetic resonance (MR) or computed tomography
(CT) images by simultaneously presenting the complementary anatomical information from the two modalities. The
purpose of this research is to demonstrate the feasibility of multimodality image registration of 2D intra-cardiac US
images with 3D computed tomography (CT) images. In our previous work, we proposed a two-step registration
procedure to register US images with MR images and validated it on a patient dataset. In this work, we extend the two-step
method to intra-cardiac procedures and provide a detailed assessment of registration accuracy by determining the
target registration errors (TRE) on a heart phantom, which had fiducial markers affixed to the surface to facilitate
evaluation of registration accuracy. The resultant TRE on the heart phantom was 3.7 mm. This result is considered to
be acceptable for guiding a probe in the heart during ablative therapy for atrial fibrillation. To our knowledge, there is
no previous report describing multimodality registration of 2D intra-cardiac US to high-resolution 3D CT.
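Target registration error on fiducial markers, as used in the phantom validation above, reduces to applying the estimated transform to the fiducial targets and measuring the residual distances to their known positions. A minimal sketch with made-up coordinates and a hypothetical rigid registration result (R, t):

```python
import numpy as np

# Illustrative TRE check on surface fiducials. `targets_us` are fiducial
# positions localized in the US frame, `targets_ct` the corresponding
# positions in CT, and (R, t) the rigid transform produced by the
# registration. All numbers here are made up for illustration.
targets_us = np.array([[10.0,  5.0, 3.0],
                       [-4.0,  8.0, 6.0],
                       [ 7.0, -2.0, 9.0]])
R = np.eye(3)                                    # hypothetical rotation
t = np.array([0.5, -0.3, 0.2])                   # hypothetical translation
targets_ct = targets_us + np.array([0.4, -0.1, 0.3])   # "ground truth"

mapped = targets_us @ R.T + t
tre = np.linalg.norm(mapped - targets_ct, axis=1)
print(f"TRE per fiducial (mm): {tre}, mean: {tre.mean():.2f}")
```

The reported 3.7 mm figure is the same quantity computed over the phantom's surface-mounted fiducials.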
Rapid registration of multimodal cardiac images can improve image-guided cardiac surgeries and cardiac disease diagnosis. While mutual information (MI) is arguably the most suitable registration technique, it is too slow to converge for real-time cardiac image registration; moreover, correct registration may not coincide with a global, or even a local, maximum of MI. These limitations become quite evident when registering three-dimensional (3D) ultrasound (US) images and dynamic 3D magnetic resonance (MR) images of the beating heart. To overcome these issues, we present a registration method that uses a reduced number of voxels while retaining adequate registration accuracy. Prior to registration, we preprocess the images so that only the most representative anatomical features are depicted. By selecting samples from the preprocessed images, our method dramatically speeds up the registration process while ensuring correct registration. We validated this registration method by registering dynamic US and MR images of the beating heart of a volunteer. Experimental results on in vivo cardiac images demonstrate significant improvements in registration speed without compromising registration accuracy. A second validation study registered US and computed tomography (CT) images of a rib cage phantom, using two similarity metrics, MI and normalized cross-correlation (NCC). Experimental results on the rib cage phantom indicate that our method can achieve adequate registration accuracy within 10% of the computation time of conventional registration methods. We believe this method has the potential to facilitate intra-operative image fusion for minimally invasive cardio-thoracic surgical navigation.
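The core idea of evaluating MI on a subsample of voxels can be sketched as follows. The histogram-based MI estimator, the synthetic volumes, and the 10% sampling fraction are all assumptions for illustration, not the authors' preprocessing or feature-selection scheme:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) from a joint histogram of two
    intensity arrays of equal size."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of b
    nz = pxy > 0                            # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Synthetic "roughly aligned" volume pair: the moving image is the
# fixed image plus noise, standing in for two registered modalities.
rng = np.random.default_rng(0)
fixed = rng.integers(0, 256, size=(64, 64, 64)).astype(float)
moving = fixed + rng.normal(0.0, 5.0, size=fixed.shape)

# Subsampling step: evaluate MI on a small, arbitrary fraction of the
# voxels instead of the full volume.
idx = rng.choice(fixed.size, size=fixed.size // 10, replace=False)
mi_sampled = mutual_information(fixed.ravel()[idx], moving.ravel()[idx])
mi_full = mutual_information(fixed, moving)
```

Because the histogram cost is linear in the number of samples, the subsampled evaluation is roughly ten times cheaper here while yielding a very similar MI value, which is the speed/accuracy trade-off the abstract exploits.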
Image-guided procedures within the thoracic cavity require accurate registration of a pre-operative virtual model to the patient. Currently, surface landmarks are used for thoracic cavity registration; however, this approach is unreliable due to skin movement relative to the ribs. An alternative method for providing surgeons with image feedback in the operating room is to integrate images acquired during surgery with images acquired pre-operatively. This integration process must be automatic, fast, accurate, and robust; however, inter-modal image registration is difficult due to the lack of a direct relationship between the intensities of the two image sets. To address this problem, Computed Tomography (CT) was used to acquire pre-operative images and Ultrasound (US) was used to acquire peri-operative images. Since bone has a high electron density and is highly echogenic, the rib cage appears as a bright white boundary in both datasets. The proposed approach uses the ribs as the basis for an intensity-based registration method, mutual information. We validated this approach using a thorax phantom. Validation results demonstrate that the approach is accurate and shows little variation between operators. The fiducial registration error between the US and CT images was < 1.5 mm. We propose this registration method as a basis for precise tracking of minimally invasive thoracic procedures. It will permit the planning and guidance of image-guided minimally invasive procedures for the lungs, as well as for both catheter-based and direct trans-mural interventions within the beating heart.