Medical augmented reality has been actively studied for decades, and many methods have been proposed to revolutionize clinical procedures. One example is the camera-augmented mobile C-arm (CAMC), which provides real-time video augmentation onto medical images by rigidly mounting and calibrating a camera to the imaging device. Since then, several CAMC variations have been suggested by calibrating 2D/3D cameras, trackers, and more recently a Microsoft HoloLens to the C-arm. Different calibration methods have been applied to establish the correspondence between the rigidly attached sensor and the imaging device. A crucial step in these methods is the acquisition of X-ray images or 3D reconstruction volumes, which requires the emission of ionizing radiation. In this work, we analyze the mechanical motion of the device and propose an alternative method to calibrate sensors to the C-arm without emitting any radiation. Given a sensor rigidly attached to the device, we introduce an extended pivot calibration concept to compute the fixed translation from the sensor to the C-arm rotation center. The fixed relationship between the sensor and the rotation center can be formulated as a pivot calibration problem in which the pivot point moves on a locus. Our method exploits the rigid C-arm motion, which describes a torus surface, to solve this calibration problem. We explain the geometry of the C-arm motion and its relation to the attached sensor, propose a calibration algorithm, and demonstrate its robustness against noise as well as against trajectory and observed-pose density through computer simulations. We discuss this geometric formulation and its potential extensions to different C-arm applications.
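To make the pivot calibration concept concrete, the following is a minimal sketch of the *classical* fixed-pivot case, which the abstract generalizes to a pivot moving on a torus-shaped locus. Given sensor poses (R_i, t_i) observed while the device pivots about a fixed point, the constraint R_i p_sensor + t_i = p_pivot for all i is a linear least-squares problem in the unknown sensor offset p_sensor and pivot location p_pivot. The function name and stacking scheme below are illustrative, not the authors' implementation.

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Classical pivot calibration (fixed pivot point).

    For each observed pose (R_i, t_i) the pivot constraint is
        R_i @ p_sensor + t_i = p_pivot,
    rewritten as the stacked linear system
        [R_i  -I] @ [p_sensor; p_pivot] = -t_i,
    solved in least squares over all poses.
    Returns (p_sensor, p_pivot).
    """
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(rotations, translations)):
        A[3 * i:3 * i + 3, :3] = R          # rotation block acts on p_sensor
        A[3 * i:3 * i + 3, 3:] = -np.eye(3)  # -I block acts on p_pivot
        b[3 * i:3 * i + 3] = -t
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]
```

In the paper's setting the pivot point is not fixed but travels along a locus determined by the C-arm's orbital and angular motion, so the right-hand side gains a parametric model of that torus surface; the linear structure above is the starting point that the extended formulation builds on.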
For computer-assisted interventions in orthopedic surgery, automatic bone surface delineation can be of great value. For instance, given such a method, a bone surface automatically extracted from intraoperative imaging modalities can be registered to the bone surfaces from preoperative images, allowing for enhanced visualization and/or surgical guidance. Ultrasound (US) is ideal for imaging bone surfaces intraoperatively, being real-time, non-ionizing, and cost-effective. However, due to its low signal-to-noise ratio and imaging artifacts, extracting bone surfaces automatically from such images remains challenging. In this work, we examine the suitability of deep learning for automatic bone surface extraction from US. Given 1800 manually annotated US frames, we evaluate the performance of two popular neural networks used for segmentation. Furthermore, we investigate the effect that different preprocessing methods, applied to the manual annotations before training, have on the final segmentation quality, and demonstrate excellent qualitative and quantitative segmentation results.
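The abstract does not name its quantitative metric, but segmentation quality in this setting is commonly reported with an overlap measure such as the Dice similarity coefficient between the predicted and manually annotated bone masks. A minimal sketch, assuming binary NumPy masks (the function name and epsilon smoothing are illustrative choices, not from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|),
    with a small epsilon to avoid division by zero on empty masks.
    Returns a value in (0, 1], where 1 means perfect overlap.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

For thin structures such as bone surfaces, overlap metrics are often complemented by a surface-distance measure, since a mask a few pixels off can score poorly on Dice while still being clinically acceptable.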