Over 40,000 mitral reconstructions are performed each year in the United States. To ensure a successful and
durable outcome of the operation, detailed quantification of the mitral annulus is helpful. However, manual
measurement is time-consuming and difficult to perform in routine clinical practice. We propose a fast semi-automatic
method to create a precise model of the mitral annulus from 3D ultrasound data. The basic idea is to combine
image information with anatomical knowledge in the form of a standard mitral annulus model. This way, the method
can adjust to the individual image data and still cope with strong artifacts and incomplete images. By comparing
the resulting models to manually created ground truth data of 39 patients, we identified a mean error of 3.49
mm. This is lower than the determined standard deviation of the expert (4.13 mm) and confirms the accuracy
of the proposed method. The overall time to create a mitral annulus model from 3D ultrasound image data is
less than a minute. Due to its speed, accuracy, and robustness, the method is well suited for clinical routine use.
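The reported mean error can be understood as an average point-to-model distance between the semi-automatic result and the manual ground truth. As an illustration only (not the paper's actual evaluation code), a minimal sketch in NumPy, assuming both the annulus model and the ground truth are available as 3D point sets in millimeters:

```python
import numpy as np

def mean_model_error(model_points, ground_truth_points):
    """Mean distance (mm) from each model point to its nearest ground-truth point."""
    model = np.asarray(model_points, dtype=float)
    truth = np.asarray(ground_truth_points, dtype=float)
    # Pairwise distance matrix of shape (len(model), len(truth))
    dists = np.linalg.norm(model[:, None, :] - truth[None, :, :], axis=2)
    # For each model point, take the closest ground-truth point, then average
    return dists.min(axis=1).mean()

# Toy example: a circular annulus model vs. a uniformly shifted "ground truth"
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
model = np.stack([30.0 * np.cos(t), 30.0 * np.sin(t), np.zeros_like(t)], axis=1)
truth = model + 0.5  # shift every point by 0.5 mm along each axis
print(mean_model_error(model, truth))
```

Real annulus contours are closed 3D curves, so a production evaluation would typically resample both curves densely before computing distances; the nearest-point average above is the simplest variant of that idea.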
This work explores techniques for developing intuitive and efficient user interfaces for virtual reality systems.
It seeks to understand which paradigms from the better-understood world of 2D user interfaces remain
viable within 3D environments. To establish this, a new user interface was created that applies
well-understood principles of interface design. A user study was then performed in which it was compared
with an earlier interface across a series of medical visualization tasks.
We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes
transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented
Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and
subsequently tracked by the endoscope camera. Camera pose estimation methods directly determine the position
and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any
external tracking device for registration of the endoscope camera and the ultrasonography probe. In addition to a preoperative
planning step in which the navigation targets are defined, the procedure consists of two main steps which
are carried out during the intervention: First, the preoperatively prepared planning data is registered with an
intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are
continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical
structures can be superimposed on the video image.
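The first intraoperative step, aligning the preoperative planning data with the intraoperative 3D TRUS dataset, is at its core a rigid point-set registration problem. The abstract does not specify the registration method actually used; as a sketch of one standard approach, the Kabsch algorithm, assuming corresponding fiducial positions (e.g. the segmented navigation aids) are known in both coordinate frames:

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid transform (R, t) mapping source points onto target points."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # correct a possible reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy example: recover a known 30-degree rotation and translation from 4 fiducials
rng = np.random.default_rng(0)
src = rng.uniform(-20.0, 20.0, size=(4, 3))        # fiducials in planning coordinates
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -2.0, 10.0])
tgt = src @ R_true.T + t_true                      # same fiducials in TRUS coordinates
R_est, t_est = rigid_register(src, tgt)
```

The second step, recovering the camera pose from the navigation aids seen in the video image, is a 2D-3D problem and would instead use a perspective-n-point solver; the 3D-3D case above is shown because it is self-contained.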
This paper focuses on the latter step. We have implemented several promising real-time algorithms and
incorporated them into the open-source toolkit MITK (www.mitk.org). Furthermore, we have evaluated them
for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment
has been developed, which allows for the simulation of navigation targets and navigation aids, including their
measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an
inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
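The impact of an inaccurate pose can be quantified as the displacement the pose error induces at a navigation target. A minimal sketch of such a simulation, with hypothetical error magnitudes; the actual MITK evaluation environment is not reproduced here:

```python
import numpy as np

def rotation_about_axis(axis, angle_rad):
    """Rotation matrix from an axis and angle via Rodrigues' formula."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)

def target_displacement(target, rot_err_deg, trans_err, axis):
    """Displacement (mm) of a navigation target caused by a small pose error."""
    R_err = rotation_about_axis(axis, np.deg2rad(rot_err_deg))
    displaced = R_err @ np.asarray(target, dtype=float) + np.asarray(trans_err, dtype=float)
    return np.linalg.norm(displaced - target)

# A target 50 mm in front of the camera; assume a 1-degree rotational error
# about the x-axis plus a 0.5 mm translational error
target = np.array([0.0, 0.0, 50.0])
err = target_displacement(target, rot_err_deg=1.0,
                          trans_err=[0.5, 0.0, 0.0], axis=(1.0, 0.0, 0.0))
print(err)
```

The key effect such a simulation exposes is the lever arm: a purely rotational pose error produces a target displacement that grows linearly with the distance between camera and target, which is why small angular errors can dominate the overlay error in Augmented Reality.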