Computer assistance in Minimally Invasive Surgery is a very active field of research. Many systems designed for Computer Assisted Surgery require information about the instruments' positions and orientations. Our main focus lies on tracking a laparoscopic ultrasound probe in order to generate 3D ultrasound volumes. State-of-the-art tracking methods such as optical or electromagnetic tracking systems measure the pose with respect to a fixed extra-body coordinate system. This causes inaccuracies in the reconstructed ultrasound volume in the case of patient motion, e.g. due to respiration. We propose attaching an endoscopic camera to the ultrasound probe and computing the camera motion with respect to the organ surface from the video sequence. We adapt algorithms developed for solving the relative pose problem to reconstruct the camera path during the ultrasound sweep over the organ. With such image-based motion estimation, camera motion can only be determined up to an unknown scale factor, a limitation known as the depth-speed ambiguity. We show how this problem can be overcome in the given scenario by exploiting the fact that the distance of the camera to the organ surface is fixed and known. Preprocessing steps are applied to compensate for endoscopic image quality deficiencies.
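The scale-recovery step can be sketched as follows. This is a minimal numpy illustration on synthetic, noise-free data; the toy geometry and all variable names are our own and not taken from the abstract. A monocular relative-pose estimate yields the translation only up to scale, but triangulating a point known to lie on the organ surface at a known distance fixes that scale.

```python
import numpy as np

def triangulate(x1, x2, R, t):
    # Linear (DLT) triangulation of one point seen in two normalized
    # views with projection matrices P1 = [I|0] and P2 = [R|t].
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic ground truth: a surface point 40 mm in front of camera 1;
# t_true is the metric world-to-camera translation of view 2.
d_known = 40.0                            # known camera-to-surface distance (mm)
X_true = np.array([2.0, 1.0, d_known])    # point on the organ surface
t_true = np.array([-5.0, 0.0, 0.0])       # 5 mm baseline (unknown to estimator)
R = np.eye(3)                             # pure translation for simplicity

# Normalized image coordinates in both views.
x1 = X_true[:2] / X_true[2]
Xc2 = R @ X_true + t_true
x2 = Xc2[:2] / Xc2[2]

# The relative pose problem only yields a unit-norm translation.
t_unit = t_true / np.linalg.norm(t_true)

# Triangulation with the unit translation inherits the wrong scale ...
X_est = triangulate(x1, x2, R, t_unit)
# ... but the known surface distance resolves the depth-speed ambiguity.
scale = d_known / X_est[2]
t_metric = scale * t_unit                 # equals t_true up to floating point
```

In a real pipeline, `R` and `t_unit` would come from an essential-matrix estimate between consecutive endoscopic frames; the principle of fixing the scale from the known working distance is the same.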
Modern techniques for technical inspection, as well as for medical diagnostics and therapy in keyhole-surgery scenarios,
make use of flexible endoscopes. Common to both application fields are very small natural or man-made entry
points to the observed scene, as well as the complexity of the cavity itself, which together rule out the use of rigid
lens-based endoscopes or chip-on-tip videoscopes. Because the fiber-optic image guide of a
flexible endoscope introduces a comb structure into the acquired images, much research has been devoted to
algorithms for the effective removal of such artifacts. Oftentimes, this research has been motivated by the fact
that the comb structure prevents the application of well-established methods offered by the computer vision
and image processing community. Unfortunately, the performance of the presented approaches is commonly
evaluated only visually or with respect to proprietary, non-standardized metrics. Thus, the performances of individual
algorithms are hard to compare with each other. For these reasons, we propose a performance measure for fiber-optic
imaging devices that has been motivated by the physics of optics. In this field, an optical system is
frequently described by linear systems theory and the system's quality can be expressed by its transfer function.
The determination of this transfer function has been standardized by the ISO for lens-based imaging systems and
represents a widely accepted measure for the quality of such systems. In this contribution, we present methods
that account for the specifics of fiber-optic imaging systems and thus enable a standardized performance evaluation. Finally, we
demonstrate the measure's use by comparing two recent state-of-the-art comb structure removal algorithms,
one representative of a spatial-domain and one of a frequency-domain method.
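To make the transfer-function idea concrete, here is a minimal numpy sketch on synthetic data; the Gaussian line spread function is a stand-in of our own choosing, not a measured fiber-optic response. For a linear, shift-invariant imaging system, the modulation transfer function (MTF) is the normalized magnitude of the Fourier transform of the line spread function (LSF).

```python
import numpy as np

# Hypothetical line spread function: a Gaussian of width sigma,
# standing in for the measured 1D response of an imaging system.
n = 1024
dx = 1.0                                   # sample spacing in pixels
x = (np.arange(n) - n / 2) * dx
sigma = 2.0
lsf = np.exp(-x**2 / (2 * sigma**2))

# MTF = normalized magnitude of the Fourier transform of the LSF.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                              # MTF(0) = 1 by definition
freqs = np.fft.rfftfreq(n, d=dx)           # spatial frequency, cycles/pixel

# Analytically, a Gaussian LSF yields a Gaussian MTF:
#   MTF(f) = exp(-2 * (pi * sigma * f)**2)
expected = np.exp(-2 * (np.pi * sigma * freqs) ** 2)

# A common single-number quality summary: MTF50, the spatial frequency
# at which the contrast transfer drops to 50 %.
mtf50 = freqs[np.argmax(mtf < 0.5)]
```

In the standardized ISO procedure for lens-based systems, the LSF is estimated from a slanted-edge target rather than assumed; the point here is only the linear-systems relationship between LSF and MTF that the proposed measure builds on.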