This paper describes the development and initial cadaver evaluation of a prototype image-guided surgery system for femoroplasty, a potential alternative treatment for reducing fracture risk in patients with severe osteoporosis.
Our goal is to develop an integrated surgical guidance system that will allow surgeons to augment the femur using
patient-specific biomechanical planning and intraoperative analysis tools. This paper focuses on the intraoperative
module, which provides real-time navigation of an injection device and estimates the distribution of the injected material
relative to the preoperative plan. Patient registration is performed using intensity-based 2D/3D registration of X-ray
images and preoperative CT data. To co-register intraoperative X-ray images with optical tracker coordinates, we integrated a custom optically tracked fluoroscope fiducial, allowing real-time visualization of the injection device with
respect to the patient's femur. During the procedure, X-ray images were acquired to estimate the 3D distribution of the
injected augmentation material (e.g., bone cement). Based on the observed progress, the injection plan could be adjusted
if needed to achieve optimal distribution. In phantom experiments, the average target registration error at the center of
the femoral head was 1.4 mm and the rotational error was 0.8 degrees when two images were used. Three cadaveric
studies demonstrated efficacy of the navigation system. Our preliminary simulation study of the 3D shape reconstruction
algorithm demonstrated that the 3D distribution of the augmentation material could be estimated within 12% error from
six X-ray images.
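The intensity-based 2D/3D registration step can be illustrated with a toy sketch. All names here are hypothetical, and the example is deliberately simplified: a real system optimizes the full six-degree-of-freedom rigid pose against ray-cast digitally reconstructed radiographs (DRRs) of the patient CT, whereas this sketch recovers only an in-plane translation by maximizing an intensity similarity between an observed projection and DRRs of a synthetic volume.

```python
import numpy as np

def make_volume(n=32):
    """Toy CT volume: a dense sphere standing in for the femoral head."""
    z, y, x = np.mgrid[:n, :n, :n]
    c = (n - 1) / 2.0
    return (((x - c)**2 + (y - c)**2 + (z - c)**2) < (n // 4)**2).astype(float)

def drr(vol, tx=0, ty=0):
    """Crude DRR: translate the volume, then sum along the ray (z) axis."""
    return np.roll(np.roll(vol, tx, axis=2), ty, axis=1).sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation, the intensity similarity to maximize."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register(fixed_xray, vol, search=6):
    """Exhaustive search over in-plane translation; a real system would
    optimize all rigid pose parameters with a nonlinear optimizer."""
    best, best_score = (0, 0), -np.inf
    for tx in range(-search, search + 1):
        for ty in range(-search, search + 1):
            score = ncc(fixed_xray, drr(vol, tx, ty))
            if score > best_score:
                best_score, best = score, (tx, ty)
    return best

vol = make_volume()
observed = drr(vol, tx=3, ty=-2)   # "X-ray" with an unknown patient offset
print(register(observed, vol))     # recovers the offset (3, -2)
```

The same structure (simulate a projection from the pose estimate, score it against the measured image, update the pose) underlies intensity-based 2D/3D registration generally; only the transformation model, similarity metric, and optimizer change.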
We demonstrate an improvement to cone-beam tomographic imaging by using a prior anatomical model. A protocol
for scanning and reconstruction has been designed and implemented for a conventional mobile C-arm:
a 9-inch image-intensifier OEC-9600. Due to the narrow field of view (FOV), the reconstructed image contains
strong truncation artifacts. We propose to improve the reconstructed images by fusing the observed X-ray data with computed projections of a prior 3D anatomical model, derived from a subject-specific CT or from a statistical database (atlas) and co-registered (3D/2D) to the X-rays.
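A minimal sketch of this fusion idea, under the assumption (not specified in the text) of a simple gain/offset intensity match fitted on the field of view, might extend each truncated projection with the co-registered prior's projection before reconstruction. The function name and the 1D setting are illustrative only:

```python
import numpy as np

def blend_projections(observed, prior_proj, fov_mask):
    """Fuse a truncated observed projection with a co-registered prior projection.

    Inside the FOV the measured data is kept; outside it, the prior projection
    is substituted after a linear intensity match (gain/offset fitted on the FOV).
    """
    A = np.stack([prior_proj[fov_mask], np.ones(fov_mask.sum())], axis=1)
    gain, offset = np.linalg.lstsq(A, observed[fov_mask], rcond=None)[0]
    return np.where(fov_mask, observed, gain * prior_proj + offset)

# Toy 1D detector row: everything outside pixels 30..69 is truncated.
prior = np.linspace(0.0, 1.0, 100)                  # projection of the prior model
mask = np.zeros(100, dtype=bool)
mask[30:70] = True                                  # narrow field of view
observed = np.where(mask, 2.0 * prior + 1.0, 0.0)   # measured, truncated line
blended = blend_projections(observed, prior, mask)  # untruncated hybrid input
```

Feeding such blended projections to a standard filtered-backprojection pipeline gives the reconstruction algorithm consistent data beyond the detector edge, which is the mechanism by which the prior suppresses truncation artifacts.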
The prior model describes the object's geometry as a tetrahedral mesh and its radiodensity as density polynomials. A CT-based model can be created by segmentation, meshing, and polynomial fitting of
the object's CT study. The statistical atlas is created through principal component analysis (PCA) of a collection
of mesh instances deformably-registered (3D/3D) to patient datasets.
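The PCA step can be sketched as follows on synthetic data. This is a generic point-distribution-model construction, assuming (as the deformable 3D/3D registration provides) vertex-wise correspondence across instances; the variable names and toy dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for deformably registered mesh instances: 20 subjects, each with
# the same 50 corresponding vertices in 3D, flattened to one row per subject.
template = rng.normal(size=(50, 3))
instances = np.stack([
    (template + 0.05 * rng.normal(size=template.shape)).ravel()
    for _ in range(20)
])

mean_shape = instances.mean(axis=0)
U, s, modes = np.linalg.svd(instances - mean_shape, full_matrices=False)
variances = s**2 / (len(instances) - 1)   # variance explained per mode

# A new atlas instance: mean shape plus weighted principal modes of variation.
b = np.zeros(len(variances))
b[0] = 2.0 * np.sqrt(variances[0])        # +2 std. dev. along the first mode
new_instance = (mean_shape + b @ modes).reshape(-1, 3)
```

The mode weights `b` are exactly the shape-deformation parameters that the 3D/2D registration described next can optimize alongside translation and rotation.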
The 3D/2D registration method optimizes a pixel-based similarity score (mutual information) between the observed X-rays and projections of the prior model. The transformation comprises translation, rotation, and shape deformation based on the atlas. After registration, the image intensities of the observed and prior projections are matched and adjusted,
and the two information sources are blended as inputs to a reconstruction algorithm.
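The mutual-information score itself can be computed from the joint intensity histogram of the two images. A minimal sketch (a practical implementation would add smoothing or Parzen windowing; the function name is illustrative):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Pixel-based similarity: MI of the two images' joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint intensity distribution
    px, py = p.sum(axis=1), p.sum(axis=0)   # marginal distributions
    nz = p > 0                              # avoid log(0) on empty bins
    return float((p[nz] * np.log(p[nz] / np.outer(px, py)[nz])).sum())
```

MI is high whenever one image's intensities predict the other's, even under a nonlinear intensity mapping, which is what makes it suitable for comparing measured X-rays against computed projections of the prior model.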
We demonstrate reconstruction results for three cadaveric specimens, and the effect of fusing prior data to
compensate for truncation. Further uses of hybrid reconstruction, such as compensation for the scan's limited
arc length, are suggested for future research.