State-of-the-art morphological imaging techniques usually provide high resolution 3D images with a huge number
of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these
large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications
is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery
planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation
maps, fly-through cameras or virtual interaction approaches. In the domain of computer games, there has been
ongoing development and improvement of controllers that support interaction with 3D environments.
These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and
orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed
by Nintendo, can be used for catheter navigation and planning purposes. By default, the Wiimote controller
only measures coarse acceleration over a range of ±3g with 10% sensitivity, as well as orientation. We therefore
developed a pose estimation algorithm that computes accurate position and orientation in 3D space relative to four
infrared LEDs. Current results show a mean error of (0.38cm, 0.41cm, 4.94cm) for the translation and
(0.16, 0.28) for the rotation. Within this paper we introduce a clinical prototype
that allows steering of a virtual fly-through camera, attached to the catheter tip, with the Wii controller on the basis
of a segmented vessel tree.
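The pose estimation step described above, recovering position and orientation from the 2D projections of four known non-coplanar infrared LEDs, is the classic point-based pose problem. A minimal NumPy sketch of the standard POSIT algorithm (DeMenthon and Davis) illustrates the idea; the LED layout, focal length and iteration count below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def posit(model_pts, image_pts, focal, n_iter=20):
    """POSIT (DeMenthon & Davis): estimate rotation R and translation t of a
    rigid object from >= 4 non-coplanar 3D model points and their 2D
    perspective projections (in pixels, relative to the principal point)."""
    A = model_pts[1:] - model_pts[0]     # model vectors from the reference point
    B = np.linalg.pinv(A)                # object-matrix pseudo-inverse
    x, y = image_pts[:, 0], image_pts[:, 1]
    eps = np.zeros(len(model_pts) - 1)   # perspective correction terms
    for _ in range(n_iter):
        xp = x[1:] * (1.0 + eps) - x[0]
        yp = y[1:] * (1.0 + eps) - y[0]
        I, J = B @ xp, B @ yp
        s = np.sqrt(np.linalg.norm(I) * np.linalg.norm(J))  # projection scale
        i = I / np.linalg.norm(I)        # first two rows of the rotation
        j = J / np.linalg.norm(J)
        k = np.cross(i, j)               # third rotation axis
        Z0 = focal / s                   # depth of the reference point
        eps = A @ k / Z0                 # refined perspective terms
    R = np.vstack([i, j, k])
    # t = camera-frame position of the reference model point
    t = np.array([x[0] * Z0 / focal, y[0] * Z0 / focal, Z0])
    return R, t
```

POSIT alternates between a scaled-orthographic pose estimate and a correction of the perspective terms ε, and converges in a few iterations provided the LEDs are non-coplanar.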
Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and
annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation
of three-dimensional medical structures is currently a complex task and rather time-consuming, as it is carried
out in 2D projections of the original object. A major problem in 2D annotation is the depth ambiguity, which
requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly
in a three-dimensional space enables the implicit consideration of the full 3D local context, which significantly
increases accuracy and speed. A three-dimensional environment is also more natural, improving the user's
comfort and acceptance. Such a 3D annotation environment requires a three-dimensional manipulation device
and display. By means of two novel technologies, the Nintendo Wii controller and the Philips 3D WoWvx
display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a
non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wiimote and
from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel
3D renderer developed by Philips either uses the Z-values of a 3D volume or computes the depth information
from a 2D image to provide a real 3D experience without the need for special glasses. Within this paper we
present a new framework for manipulation and annotation of medical landmarks directly in three-dimensional space.
Two-dimensional roadmapping is considered state-of-the-art in guidewire navigation during endovascular interventions.
This paper presents a methodology for extracting the guidewire from a sequence of 2-D roadmap
images in near real time. The detected guidewire can be used to enhance the wire's visibility on noisy fluoroscopic
images or to back-project it into a registered 3-D vessel tree. A lineness filter based on
the Hessian matrix is used to detect only those line structures in the image that lie within the vessel tree. Loose
wire fragments are properly linked by a novel connection method fulfilling clinical processing requirements. We
show that Dijkstra's algorithm can be applied to efficiently compute the optimal connection path. The entire
guidewire is finally approximated by a B-spline curve in a least-squares manner. The proposed method is both
integrated into a commercial clinical prototype and evaluated on five different patient data sets containing up to
249 frames per image series.
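The Hessian-based lineness filter mentioned above is typically built from the eigenvalues of a Gaussian-smoothed Hessian, in the spirit of Frangi's vesselness measure. The following 2-D sketch is a generic illustration of that family of filters, not the paper's exact formulation; the scale σ and the β, c weights are assumed values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lineness(img, sigma=1.5, beta=0.5, c=0.08):
    """Frangi-style 2-D lineness from Hessian eigenvalues, responding to
    bright lines on a dark background (negate the image first for a dark
    guidewire on bright fluoroscopy)."""
    # second-order Gaussian derivatives = smoothed Hessian entries
    Hyy = gaussian_filter(img, sigma, order=(2, 0))  # d2/dy2 (axis 0)
    Hxx = gaussian_filter(img, sigma, order=(0, 2))  # d2/dx2 (axis 1)
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # eigenvalues of the symmetric 2x2 Hessian at every pixel
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
    mu1 = 0.5 * (Hxx + Hyy + tmp)
    mu2 = 0.5 * (Hxx + Hyy - tmp)
    # sort by magnitude so that |l1| <= |l2|
    swap = np.abs(mu1) > np.abs(mu2)
    l1 = np.where(swap, mu2, mu1)
    l2 = np.where(swap, mu1, mu2)
    Rb = l1 / np.where(l2 == 0.0, 1e-12, l2)  # blobness: ~0 on a line
    S = np.sqrt(l1 ** 2 + l2 ** 2)            # second-order structureness
    V = np.exp(-Rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-S ** 2 / (2 * c ** 2)))
    V[l2 > 0] = 0.0                           # keep bright-on-dark ridges only
    return V
```

Restricting the response to pixels inside the registered vessel tree, as the abstract describes, would then amount to multiplying V with the vessel mask before the fragment-linking step.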
We present a novel representation of 3D <i>salient region features</i> and their integration into a hybrid rigid-body registration framework. We exploit the scale, translation and rotation invariance properties of these intrinsic 3D features to estimate a transform between the underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: automatic extraction of a set of 3D salient region features on each image, robust estimation of correspondences, and their sub-pixel-accurate refinement with outlier elimination. We propose a region-growing-based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the complexity of the correspondence search space. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that have been acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert using an approach that measures the distance between a set of selected corresponding points at both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to limited image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.
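Once correspondences between salient regions are established, a rigid transform can be estimated in closed form from the matched region centers. The SVD-based Kabsch/Umeyama solution below is one standard choice for this step; it is a sketch of the general technique, not necessarily the estimator used in the paper:

```python
import numpy as np

def rigid_from_matches(P, Q):
    """Least-squares rigid transform (Kabsch/Umeyama): find rotation R and
    translation t such that Q ~ P @ R.T + t, given N matched 3-D points
    stored as rows of P and Q (N >= 3, not collinear)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In a robust pipeline such as the one described, this closed-form estimate is typically wrapped in an outlier-elimination loop (e.g. RANSAC-style) over the candidate correspondences.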
The evaluation of tumor growth or regression under therapy is an important clinical issue. Rigid registration of sequentially acquired 3D images has proven its value for this purpose. Existing approaches to rigid image registration use the whole volume for the estimation of the rigid transform. Non-rigid soft tissue deformation, however, introduces a bias into the registration result, because local deformations cannot be modeled by rigid transforms. Anatomical substructures, like bones or teeth, are not affected by these deformations but follow a rigid transform. This important observation is incorporated into the proposed registration algorithm. The selection of anatomical substructures is done manually by medical experts, who adjust the transfer function of the volume rendering software. The parameters of the transfer function are used to identify the voxels that are considered for registration. A rigid transform is estimated by a quaternion gradient descent algorithm based on the intensity values of the specified tissue classes. Commonly used voxel intensity measures are adapted to the modified registration algorithm. This contribution describes the mathematical framework of the proposed registration method and its implementation in a commercial software package. The experimental evaluation includes a discussion of different similarity measures, a comparison of the proposed method to established rigid registration techniques, and an evaluation of the efficiency of the new method. We conclude with a discussion of potential medical applications of the proposed registration algorithm.
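The core idea, letting only voxels of the expert-selected tissue classes drive the similarity measure, can be sketched in a toy 2-D, translation-only form. Here an intensity window stands in for the transfer-function-based tissue selection, and a generic Powell optimizer replaces the paper's quaternion gradient descent; all names and parameters are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def masked_ssd_register(fixed, moving, lo, hi):
    """Toy tissue-class-restricted registration: estimate the translation
    aligning `moving` to `fixed`, where only voxels whose fixed-image
    intensity lies in [lo, hi] (the 'selected tissue class', analogous to
    a transfer-function window) contribute to the SSD measure."""
    mask = (fixed >= lo) & (fixed <= hi)

    def ssd(t):
        # warp the moving image by the candidate translation t
        warped = nd_shift(moving, t, order=1, mode='nearest')
        diff = (fixed - warped)[mask]     # masked sum of squared differences
        return float(np.mean(diff * diff))

    res = minimize(ssd, x0=np.zeros(fixed.ndim), method='Powell')
    return res.x
```

Extending this sketch to the full rigid case would add three rotation parameters (e.g. a unit quaternion re-normalized after each update, as in the paper's quaternion gradient descent) to the optimized vector.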