This paper presents a feasibility and evaluation study on using 2D ultrasound in conjunction with our statistical deformable bone model in the scope of computer-assisted surgery (CAS). The final aim is to provide the surgeon with an enhanced 3D visualization for surgical navigation in orthopaedic surgery without the need for preoperative CT or MRI scans. We unified our earlier work, combining automatic methods for statistical bone shape prediction from a sparse set of surface points with ultrasound segmentation and calibration, to provide the intended rapid and accurate visualization. We compared a tracked digitizing pointer with ultrasound for acquiring the landmarks and bone surface points used to estimate the shapes of two cast proximal femurs; two users performed the experiments five to six times per scenario. The concept of CT-based error introduced in the paper gives an approximate quantitative value for the best hoped-for prediction error, or lower-bound error, for a given anatomy. We conclude that the pointer-based approach produced good results and that, although the ultrasound-based approach performed considerably worse on average, in several cases its results were comparable to those of the pointer-based approach. The primary factor behind the poor ultrasound performance was the inaccurate localization of the three initial landmarks used to initialize the statistical shape model.
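The shape-prediction step described above can be illustrated with a minimal sketch: given a PCA-based statistical shape model (a mean shape plus modes of variation), the mode coefficients that best explain a sparse set of digitized surface points can be recovered by regularized least squares restricted to the digitized vertices. This is a generic illustration of the technique, not the authors' actual implementation; all function names and the toy model are assumptions.

```python
import numpy as np

def fit_shape_coefficients(mean_shape, modes, variances, sparse_idx,
                           sparse_pts, reg=1e-6):
    """Fit PCA shape coefficients b so that (mean_shape + modes @ b)
    matches the digitized points at the given vertex indices.

    mean_shape : (3n,) flattened mean shape (x, y, z per vertex)
    modes      : (3n, m) matrix of variation modes
    variances  : (m,) per-mode variances, used for Tikhonov weighting
    sparse_idx : (k,) indices of the digitized vertices
    sparse_pts : (k, 3) measured 3D point coordinates
    """
    # Select the rows of the linear system belonging to digitized vertices,
    # in x, y, z order per point (matching sparse_pts.ravel()).
    rows = (3 * sparse_idx[:, None] + np.arange(3)).ravel()
    A = modes[rows]                               # (3k, m) sub-matrix
    r = sparse_pts.ravel() - mean_shape[rows]     # residual at sparse points
    # Regularize by inverse mode variances (Mahalanobis prior on b).
    H = A.T @ A + reg * np.diag(1.0 / variances)
    b = np.linalg.solve(H, A.T @ r)
    return mean_shape + modes @ b, b
```

With only a few surface points the system is under-determined, which is why the prior term weighted by the model variances matters; it pulls unconstrained modes toward the mean shape.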
Image-guided, computer-assisted neurosurgery has emerged to improve localization and targeting, to provide a better anatomical definition of the surgical field, and to decrease invasiveness. Usually, in image-guided surgery, a computer displays the surgical field in a CT/MR environment using axial, coronal or sagittal views, or even a 3D representation of the patient. Such a system forces the surgeon to look away from the surgical scene to the computer screen. Moreover, this kind of information, being pre-operative imaging, cannot be modified during the operation, so it remains valid for guidance only in the first stage of the surgical procedure, and mainly for rigid structures such as bones. To address these two constraints, we are developing an ultrasound-guided surgical microscope. Such a system takes advantage of the fact that surgical microscopes and ultrasound systems are already used in neurosurgery, so it adds no further complexity to the surgical procedure. We have integrated an optical tracking device into the microscope, together with an augmented reality overlay system that avoids the need to look away from the scene by providing correctly aligned surgical images with sub-millimeter accuracy. In addition to the standard CT and 3D views, we can track an ultrasound probe; using a prior calibration and registration of the imaging, the acquired image is projected correctly into the overlay system, so the surgeon can always localize the target and verify the effects of the intervention. Several tests of the system have already been performed to evaluate its accuracy, and clinical experiments are currently in progress to validate the clinical usefulness of the system.
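The calibration-and-registration chain mentioned above amounts to composing homogeneous transforms: a pixel in the ultrasound image is first scaled into millimeters, mapped by the probe calibration into the probe's coordinate frame, and then by the tracked probe pose into the tracker/world frame. The sketch below illustrates that chain under assumed names; the actual system's calibration procedure and frame conventions are not specified in the abstract.

```python
import numpy as np

def us_pixel_to_world(T_world_probe, T_probe_image, scale_mm, px):
    """Map an ultrasound pixel (u, v) to tracker/world coordinates.

    T_world_probe : (4, 4) pose of the tracked probe, from the optical tracker
    T_probe_image : (4, 4) calibration transform (image plane -> probe body)
    scale_mm      : (sx, sy) pixel size in mm
    px            : (u, v) pixel coordinates
    """
    u, v = px
    # The B-mode image is a plane, so z = 0 in the image frame.
    p_image = np.array([u * scale_mm[0], v * scale_mm[1], 0.0, 1.0])
    return (T_world_probe @ T_probe_image @ p_image)[:3]
```

Overlay rendering then only needs one more transform, from the world frame into the microscope's view, so segmentation and display reduce to bookkeeping over the same composition.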
The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In terms of computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 s to 5 min). In general, the best results were obtained with datasets from long bones, followed by the extremities; segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the purely threshold-based methods produced the worst results, and the combination of thresholding with morphological operations was considered appropriate only for a smaller set of clean images. The region growing method generally performed much better in terms of both computational efficiency and segmentation correctness, especially for datasets of joints and of the lumbar and cervical spine. The less efficient implicit snake method was additionally able to remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint regions, which would subsequently be applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.
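The simplest of the evaluated pipelines, thresholding followed by morphological operations, can be sketched in a few lines: a global threshold produces a binary mask, a binary opening removes small speckle, and the largest connected component is kept as the bone candidate. This is a generic illustration using SciPy (an assumption; the abstract does not name the authors' implementation), not their evaluated code.

```python
import numpy as np
from scipy import ndimage

def threshold_morph_segment(volume, thresh, open_iters=1):
    """Toy bone segmentation for a 3D volume: global thresholding,
    binary opening to suppress speckle, then retention of the largest
    connected component."""
    mask = volume > thresh
    # Opening (erosion then dilation) removes isolated bright voxels.
    mask = ndimage.binary_opening(mask, iterations=open_iters)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    # Keep only the largest connected component.
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))
```

As the abstract notes, such a pipeline only holds up on clean images with a clear intensity separation; the region growing and geodesic snake methods exist precisely because a single global threshold fails on noisier regions such as the spine and pelvis.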