Minimally invasive procedures are increasingly attractive to patients and medical personnel because they can reduce
operative trauma, recovery times, and overall costs. However, during these procedures, the physician has a very limited
view of the interventional field and the exact position of surgical instruments. We present an image-guided platform for
precision placement of surgical instruments based on a small four-degree-of-freedom robot (B-RobII; ARC
Seibersdorf Research GmbH, Vienna, Austria). This platform includes a custom instrument guide with an integrated
spiral fiducial pattern as the robot's end-effector, and it uses intra-operative computed tomography (CT) to register the
robot to the patient directly before the intervention. The physician can then use a graphical user interface (GUI) to select
a path for percutaneous access, and the robot will automatically align the instrument guide along this path. Potential
anatomical targets include the liver, kidney, prostate, and spine. This paper describes the robotic platform, workflow,
software, and algorithms used by the system. To demonstrate the algorithmic accuracy and suitability of the custom
instrument guide, we also present results from experiments as well as estimates of the maximum error between target
and instrument tip.
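The abstract does not state which algorithm registers the robot to the patient from the spiral fiducial pattern in the intra-operative CT. A standard choice for point-based rigid registration, once fiducial centers have been localized in the CT and paired with their known positions in robot coordinates, is the SVD-based least-squares method of Arun et al. A minimal sketch with made-up fiducial coordinates (the point sets and transform below are illustrative, not values from the paper):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    SVD (Arun/Kabsch) method; src and dst are (N, 3) arrays of
    corresponding fiducial centers.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical fiducial centers in robot coordinates vs. in CT coordinates.
robot_pts = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
ct_pts = robot_pts @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = rigid_register(robot_pts, ct_pts)
# Residual fiducial registration error (mm) after alignment:
fre = np.linalg.norm(robot_pts @ R.T + t - ct_pts, axis=1).mean()
```

In practice the residual (fiducial registration error) is one input to the kind of target-to-tip error estimate the paper reports.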
Lung cancer screening for early diagnosis is a clinically important problem. One screening method is to test tissue samples obtained from CT-fluoroscopy (CTF) guided lung biopsy. CTF provides real-time imaging; however, on most machines the view is limited to a single slice. Mentally reconstructing the direction of the needle when it is not in the imaging plane is a difficult task. We are currently developing 3D visualization software that will augment the physician's ability to perform this task. At the beginning of the procedure, a CT scan is acquired at breath-hold. The physician then specifies an entry point and a target point on the CT. As the procedure advances, the physician acquires a CTF image at breath-hold; the system then registers the current setup to the CT scan. To assess the performance of different registration algorithms for CTF/CT registration, we propose to use simulated CTF images. These images are created by deforming the original CT volume and extracting a slice from it. Realistic deformation of the CT volume is achieved by using positional information from electromagnetically tracked fiducials, acquired throughout the respiratory cycle. To estimate the dense displacement field underlying the sparse displacement field provided by the fiducials, we use radial basis function interpolation. Finally, we evaluated Thirion's "demons" algorithm, as implemented in ITK, for the task of slice-to-volume registration. We found it to be unsuitable for this task, as in most cases the recovered displacements were less than 50% of the original ones.
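The sparse-to-dense interpolation step can be sketched with SciPy's `RBFInterpolator` (the abstract names the technique but not the implementation). The fiducial positions and displacements below are synthetic stand-ins for the electromagnetically tracked measurements:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic fiducial positions (mm) and their measured displacements at
# one phase of the respiratory cycle -- the sparse displacement field.
rng = np.random.default_rng(0)
fiducial_pos = rng.uniform(0, 100, size=(8, 3))
fiducial_disp = rng.normal(0, 2, size=(8, 3))

# One vector-valued RBF model; thin-plate splines are a typical kernel
# for smooth anatomical deformation.
rbf = RBFInterpolator(fiducial_pos, fiducial_disp,
                      kernel='thin_plate_spline')

# Evaluate the dense displacement field on a coarse grid of voxel
# centers spanning the volume.
grid = np.stack(np.meshgrid(*[np.linspace(0, 100, 5)] * 3,
                            indexing='ij'), axis=-1).reshape(-1, 3)
dense_disp = rbf(grid)  # shape (125, 3): one 3D displacement per point
```

With zero smoothing (the default) the interpolant reproduces the fiducial displacements exactly, so the dense field agrees with the tracked data at the measurement sites; the resulting field can then be used to warp the CT volume before extracting a simulated CTF slice.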