The design of mobile X-ray C-arm equipment with image tomography and surgical guidance capabilities requires
repeatable gantry positioning in three-dimensional space. Geometry misrepresentation can degrade the
reconstruction results, producing blurred edges, image artifacts, and even false structures.
It can also amplify surgical instrument tracking errors, leading to improper implant placement. In our prior publications
we have proposed a C-arm 3D positioner calibration method comprising separate intrinsic and extrinsic geometry
calibration steps. Following this approach, in the present paper we extend the intrinsic geometry calibration of the
C-gantry beyond angular positions in the orbital plane to angular positions on a unit sphere of isocentric rotation.
Our method makes the deployment of markerless interventional tool guidance, using high-resolution fluoroscopic images
and electromagnetic tracking, feasible at any angular position of the tube-detector assembly. Variations of the intrinsic
parameters associated with C-arm motion are measured off-line as functions of orbital and lateral angles. The proposed
calibration procedure provides better accuracy and eliminates unnecessary workflow steps in surgical navigation
applications. With a slight modification, the Misalignment phantom, a tool for intrinsic geometry calibration, is also
utilized to obtain an accurate 'image-to-sensor' mapping. We show simulation results, image quality and navigation
accuracy estimates, and feasibility data acquired with the prototype system. The experimental results show the potential
of high-resolution CT imaging (voxel size below 0.5 mm) and confident navigation in an interventional surgery setting
with a mobile C-arm.
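The abstract above states that variations of the intrinsic parameters are measured off-line as functions of orbital and lateral angles. As a minimal sketch of how such a calibration table might be consumed at run time (not the authors' implementation; the grid layout, parameter packing, and bilinear interpolation are assumptions for illustration), one could store the off-line measurements on a regular angle grid and interpolate at an arbitrary gantry pose:

```python
import numpy as np

def build_intrinsic_lut(orbital_angles, lateral_angles, params):
    """Store off-line calibrated intrinsic parameters on a regular
    (orbital, lateral) angle grid for later lookup.

    params has shape (n_orbital, n_lateral, n_params), e.g. focal
    length and detector piercing-point offsets per pose (assumed)."""
    return {
        "orbital": np.asarray(orbital_angles, dtype=float),
        "lateral": np.asarray(lateral_angles, dtype=float),
        "params": np.asarray(params, dtype=float),
    }

def lookup_intrinsics(lut, orbital, lateral):
    """Bilinearly interpolate intrinsic parameters at an arbitrary
    gantry pose (orbital, lateral) inside the calibrated range."""
    o, l = lut["orbital"], lut["lateral"]
    # locate the enclosing grid cell
    i = int(np.clip(np.searchsorted(o, orbital) - 1, 0, len(o) - 2))
    j = int(np.clip(np.searchsorted(l, lateral) - 1, 0, len(l) - 2))
    to = (orbital - o[i]) / (o[i + 1] - o[i])
    tl = (lateral - l[j]) / (l[j + 1] - l[j])
    p = lut["params"]
    # weighted combination of the four surrounding calibration points
    return ((1 - to) * (1 - tl) * p[i, j]
            + to * (1 - tl) * p[i + 1, j]
            + (1 - to) * tl * p[i, j + 1]
            + to * tl * p[i + 1, j + 1])
```

A denser grid and a smoother interpolant (e.g. splines) would be natural refinements; the point is only that per-pose intrinsics become a cheap table lookup during navigation.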

In this paper we propose a framework for constructing and using a shape
prior in estimation problems. The key novelty of our technique is a
new way to use high level, global shape knowledge to derive a local
driving force in a curve evolution context. We capture information
about shape in the form of a family of shape distributions (cumulative distribution functions) of features related to the shape. We design a prior objective function that penalizes the differences between the model shape distributions and those of an estimate, and we incorporate this prior in a curve evolution formulation for function minimization. Shape distribution-based representations are shown to satisfy several desirable properties, such as robustness and invariance, and they also have good discriminative and generalizing properties. To our knowledge, shape distribution-based representations have previously been used only for shape classification; our work develops a tractable framework for incorporating them in estimation problems. We apply this framework to three applications: shape morphing, average shape calculation, and image segmentation.

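To make the notion of a shape distribution concrete, here is a small sketch (assumptions, not the paper's formulation: the feature chosen is the pairwise distance between boundary points, i.e. a D2-style distribution, and the prior penalty is a plain L1 mismatch between empirical CDFs):

```python
import numpy as np

def shape_distribution(points, bins, n_samples=2000, seed=0):
    """Empirical CDF of pairwise distances between randomly sampled
    boundary points, on a fixed bin grid.  Normalizing by the mean
    distance gives invariance to scale (and the distance itself is
    invariant to rotation and translation)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    i = rng.integers(0, len(pts), n_samples)
    j = rng.integers(0, len(pts), n_samples)
    d = np.linalg.norm(pts[i] - pts[j], axis=1)
    d /= d.mean()
    hist, _ = np.histogram(d, bins=bins)
    return np.cumsum(hist) / hist.sum()

def prior_penalty(cdf_model, cdf_estimate):
    """L1 mismatch between model and estimate shape distributions;
    in a curve evolution setting this would enter the objective."""
    return float(np.abs(cdf_model - cdf_estimate).sum())
```

Two similar shapes produce nearly identical CDFs and hence a small penalty, which is the property the prior objective function exploits.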
The removal of unwanted, parasitic vibrations induced by camera motion in a video sequence is an essential part of video acquisition in industrial, military, and consumer applications. In this paper, we present a new image processing method that removes such vibrations and reconstructs a video sequence free of sudden camera movements. Our approach to separating unwanted vibrations from intentional camera motion is based on a probabilistic estimation framework. We treat the estimated parameters of interframe camera motion as noisy observations of the intentional camera motion parameters. We construct a physics-based state-space model of these interframe motion parameters and use recursive Kalman filtering to estimate the stabilized camera position. A six-parameter affine model describes the interframe transformation, allowing an accurate description of typical scene changes due to camera motion. The model parameters are estimated with a p-norm-based multi-resolution approach, which is robust to model mismatch and to object motion within the scene (both treated as outliers). We use mosaicking to reconstruct the undefined areas that result from motion compensation of each video frame; registration between distant frames is performed efficiently by cascading interframe affine transformation parameters. We compare our method's performance with that of a commercial product on real-life video sequences and show a significant improvement in stabilization quality for our method.
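The core of the stabilization idea, treating measured motion parameters as noisy observations of intentional motion and Kalman-filtering them, can be sketched for a single parameter (a hedged illustration, not the paper's six-parameter affine implementation: a 1-D position track with a constant-velocity state model is assumed):

```python
import numpy as np

def stabilize_positions(observed, q=1e-3, r=1.0):
    """Kalman-filter a 1-D camera position track (e.g. cumulative
    horizontal translation).  State = [position, velocity] with a
    constant-velocity model; we observe position only.  Returns the
    smoothed 'intentional' track; subtracting it from the observed
    track yields the vibration to compensate per frame."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])               # observation model
    Q = q * np.eye(2)                        # process noise
    R = np.array([[r]])                      # measurement noise
    x = np.array([[observed[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in observed:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the measured (vibration-corrupted) position
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return np.array(out)
```

A small q relative to r expresses the assumption that intentional camera motion is smooth while the observations are jittery, so the filter tracks slow pans but rejects frame-to-frame vibration.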