Accurate two-dimensional to three-dimensional (2-D/3-D) registration of preoperative 3-D data and intraoperative 2-D x-ray images is a key enabler for image-guided therapy. Recent advances in 2-D/3-D registration formulate the problem as a learning task and exploit the modeling power of convolutional neural networks (CNNs) to significantly improve the accuracy and efficiency of 2-D/3-D registration. However, for surgery-related applications, collecting a large clinical dataset with accurate annotations for training can be very challenging or impractical. Therefore, deep learning-based 2-D/3-D registration methods are often trained on synthetically generated data, and a performance gap is often observed when the trained model is tested on clinical data. We propose a pairwise domain adaptation (PDA) module to adapt a model trained on the source domain (i.e., synthetic data) to the target domain (i.e., clinical data) by learning domain-invariant features from only a few paired real and synthetic images. The PDA module is designed to be flexible across different deep learning-based 2-D/3-D registration frameworks, and it can be plugged into a pretrained CNN model in the same way as a simple batch-normalization layer. The proposed PDA module has been quantitatively evaluated on two clinical applications using different deep network frameworks, demonstrating significant advantages in generalizability and flexibility for 2-D/3-D medical image registration when a small number of paired real and synthetic images can be obtained.
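To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how a plug-in adaptation layer and a pairwise alignment loss could look in PyTorch. The class name `PDAModule`, the per-channel affine parameterization, and the simple L2 feature-discrepancy loss are all illustrative assumptions; the sketch only demonstrates the general pattern of freezing a pretrained encoder and training a small inserted module on paired real/synthetic images.

```python
import torch
import torch.nn as nn

class PDAModule(nn.Module):
    """Hypothetical pairwise domain adaptation layer.

    Inserted after a convolutional layer of a pretrained encoder,
    much like a batch-normalization layer: it learns a per-channel
    affine re-normalization so that features of real (target-domain)
    images match those of their paired synthetic (source-domain)
    renderings.
    """
    def __init__(self, num_channels: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_channels))
        self.shift = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale.view(1, -1, 1, 1) + self.shift.view(1, -1, 1, 1)

def pairwise_alignment_loss(encoder: nn.Module,
                            real: torch.Tensor,
                            synthetic: torch.Tensor) -> torch.Tensor:
    """L2 discrepancy between features of paired real/synthetic images.

    The synthetic-image features are detached so gradients only update
    the adaptation parameters along the real-image path.
    """
    f_real = encoder(real)
    f_syn = encoder(synthetic).detach()
    return (f_real - f_syn).pow(2).mean()

# Toy usage: a frozen one-layer "encoder" with a PDA module appended.
conv = nn.Conv2d(1, 8, 3, padding=1)
for p in conv.parameters():
    p.requires_grad_(False)          # pretrained weights stay fixed
encoder = nn.Sequential(conv, PDAModule(8))

real = torch.randn(4, 1, 32, 32)       # stands in for clinical X-rays
synthetic = torch.randn(4, 1, 32, 32)  # stands in for paired synthetic images
loss = pairwise_alignment_loss(encoder, real, synthetic)
loss.backward()                        # only PDA parameters receive gradients
```

Because only the small adaptation module is trained, a handful of real-synthetic pairs can suffice, which matches the few-paired-samples setting described above.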
Minimally invasive abdominal aortic aneurysm (AAA) stenting can be greatly facilitated by overlaying the preoperative
3-D model of the abdominal aorta onto the intraoperative 2-D X-ray images. Accurate 2-D/3-D registration in 3-D
space makes the 2-D/3-D overlay robust to changes of C-Arm angulation. To date, 2-D/3-D registration methods
based on simulated X-ray projection images using multiple image planes have been shown to provide
satisfactory 3-D registration accuracy. However, one drawback of intensity-based 2-D/3-D registration methods is
that the similarity measure is usually highly non-convex, and hence the optimizer can easily be trapped in local minima.
User interaction is therefore often needed to initialize the position of the 3-D model in order to obtain a successful
2-D/3-D registration. In this paper, a novel 3-D pose initialization technique is proposed as an extension of our
previously proposed bi-plane 2-D/3-D registration method for AAA intervention. The proposed method detects
vessel bifurcation points and the spine centerline in both 2-D and 3-D images, and utilizes this landmark information to bring the
3-D volume within a 15 mm capture range. The proposed landmark detection method was validated on a real dataset, and is
shown to provide a good initialization for 2-D/3-D registration, thus making the workflow fully automatic.
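Once corresponding landmarks (e.g., vessel bifurcations and spine centerline points) are available in 3-D, the initial pose can be estimated as the least-squares rigid transform between the preoperative landmark positions and their reconstructed intraoperative counterparts. The following is a minimal sketch of such an alignment using the standard Kabsch/Procrustes method; it is an illustrative assumption about this step, not the paper's exact algorithm, and the landmark values are synthetic.

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (Kabsch) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding 3-D landmarks.
    Returns rotation R (3x3) and translation t (3,) with dst ~ R @ src + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                   # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known pose from 4 synthetic 3-D landmarks.
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(4, 3))              # preoperative positions
angle = np.deg2rad(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -3.0, 2.0])
observed = landmarks @ R_true.T + t_true         # "intraoperative" positions
R, t = rigid_align(landmarks, observed)
err = np.abs(observed - (landmarks @ R.T + t)).max()
```

With noise-free correspondences the recovered pose is exact; with detection noise the residual gives a rough sense of whether the initialization falls inside the capture range of the subsequent intensity-based registration.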