Dynamic volume rendering of the beating heart is an important element in cardiac disease diagnosis and therapy
planning, providing the clinician with insight into the internal cardiac structure and functional behavior. Most
clinical applications tend to focus upon a particular set of organ structures, and in the case of cardiac imaging,
it would be helpful to embed anatomical features into the dynamic volume that are of particular importance to
an intervention. A uniform transfer function (TF), as generally employed in volume rendering, cannot
effectively isolate such structures because of the lack of spatial information and the small intensity differences
between adjacent tissues. Explicit segmentation is a powerful way to approach this problem, which usually
yields a single binary mask volume (MV), where a unit value in a voxel within the MV acts as a tag label
representing the anatomical structure of interest (ASOI). These labels are used to determine the TF employed
to adjust the ASOI display. Traditional approaches for rendering such segmented volumetric datasets usually
deliver unsatisfactory results, such as noninteractive rendering speed, low image quality, intermixing artifacts
along the rendered subvolume boundaries, and speckle noise. In this paper, we introduce a new "color coding"
approach, based on the graphics processing unit (GPU) accelerated raycasting algorithm and a pre-integrated
voxel classification method, to address this problem. The mask tag labels derived from segmentation are first
smoothed with a Gaussian filter, and separate TFs are designed for each MV and for the source cardiac
volume, mapping voxel intensity to color and opacity at each sampling point along the casting
ray. The resultant values are composited together using a boundary color adjustment technique, which acts as
"coding" the segmented anatomical structure information into the rendered source volume of the beating heart.
Our algorithm produces high-quality images in real time without introducing intermixing artifacts in the rendered
4-dimensional (4D) cardiac volumes.
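The pipeline above — Gaussian smoothing of the mask tag labels, a separate TF per structure, and boundary-adjusted compositing along the casting ray — can be sketched in miniature. This is a 1-D illustrative sketch, not the GPU implementation; the smoothing kernel, the two TFs, and the sample values are all assumptions.

```python
def gaussian_smooth(labels, kernel=(0.25, 0.5, 0.25)):
    """Smooth binary mask tag labels into fractional weights in [0, 1]."""
    n, r = len(labels), len(kernel) // 2
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), n - 1)  # clamp at the volume border
            acc += w * labels[j]
        out.append(acc)
    return out

def tf_background(v):   # intensity -> (R, G, B, alpha) for the source volume
    return (v, v, v, 0.05 * v)

def tf_asoi(v):         # a distinct TF for the anatomical structure of interest
    return (1.0, 0.2, 0.2, 0.6 * v)

def composite_ray(intensities, smooth_mask):
    """Front-to-back compositing with boundary color adjustment."""
    rgb, alpha = [0.0, 0.0, 0.0], 0.0
    for v, w in zip(intensities, smooth_mask):
        cb, ca = tf_background(v), tf_asoi(v)
        # blend the two classifications by the smoothed mask weight, so the
        # subvolume boundary gets a gradual color transition (no hard seam)
        r, g, b, a = [(1 - w) * x + w * y for x, y in zip(cb, ca)]
        t = (1.0 - alpha) * a
        for c in range(3):
            rgb[c] += t * [r, g, b][c]
        alpha += t
        if alpha > 0.99:   # early ray termination
            break
    return rgb, alpha

rgb, alpha = composite_ray([0.3, 0.5, 0.8, 0.8, 0.4],
                           gaussian_smooth([0, 0, 1, 1, 0]))
```

The smoothed fractional weights are what prevent the hard seam at the subvolume boundary: instead of switching abruptly between the two TFs, each boundary sample receives a graded mixture of both classifications.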
We describe an interactive multimodality display environment, which combines anatomical CT and MRI, functional MRI, and photographs taken during surgical procedures, to provide comprehensive localization information regarding epilepsy seizure foci and the context of their surroundings. Our environment incorporates several unique features, including GPU-accelerated volume rendering and image fusion, versatile GPU-based clipping of volumetric images, and the ability to enhance the information delivered to the surgeon by fusing a direct (photographic) view of the surgical field with the volumetric image. We employ direct volume rendering for the fusion of multiple volumes using GPU-accelerated ray-casting. In addition, to expose internal structures during volume fusion, we have developed user interaction tools that enable the surgeon to explore the fused volume using clipping-cube and cutaway clipping schemes. Fusing intraoperative images onto the image volume allows enhanced visualization of the surgical procedure sites within the surgical planning environment. These techniques have been implemented as Visualization Toolkit (VTK) classes using OpenGL fragment shader programs and Python modules, and have been successfully integrated into our surgical planning environment "EpilepsyViewer". The results and performance of our GPU-based approach are compared with similar techniques in VTK, demonstrating that the use of the GPU can greatly accelerate visualization and increase the flexibility of the system in the operating room. The photographic overlay shows good correspondence between the intraoperative photographs and the preoperative image model. This environment can also be extended for use in other neurosurgical planning tasks.
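The two clipping schemes mentioned above differ only in which side of an axis-aligned box survives. A minimal Python sketch of that per-sample decision (the actual implementation is an OpenGL fragment program on the GPU; the box bounds, sample coordinates, and function names here are illustrative assumptions):

```python
def inside_box(p, lo, hi):
    """Axis-aligned box containment test for a 3-D sample point."""
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))

def keep_sample(p, lo, hi, mode):
    """Decide whether a ray sample at point p survives clipping."""
    if mode == "clipping-cube":
        return inside_box(p, lo, hi)       # render only the cube interior
    if mode == "cutaway":
        return not inside_box(p, lo, hi)   # carve the cube out of the volume
    return True                            # no clipping

lo, hi = (0.25, 0.25, 0.25), (0.75, 0.75, 0.75)
samples = [(0.1, 0.5, 0.5), (0.5, 0.5, 0.5), (0.9, 0.5, 0.5)]
kept_cube    = [p for p in samples if keep_sample(p, lo, hi, "clipping-cube")]
kept_cutaway = [p for p in samples if keep_sample(p, lo, hi, "cutaway")]
```

In the fragment-shader version the same test discards samples along each casting ray, so cutaway clipping exposes internal structures of the fused volume while the clipping cube isolates a region of interest.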
In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance
imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US),
can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a
well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality
real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would
be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational
cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical
environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention.
Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational
precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR
and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing
and optimization techniques for real-time rendering of 4D (3D + time) cardiac images. We also present our
multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting
the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different
imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing
are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D
MR and US cardiac datasets.
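The per-modality transfer functions and per-sample compositing described above can be sketched as follows. This is illustrative Python rather than the GPU shader; the TF shapes and the 50/50 mixing weight are assumptions.

```python
def tf_mr(v):  # MR: grayscale anatomical context, intensity -> (R, G, B, alpha)
    return (v, v, v, 0.4 * v)

def tf_us(v):  # US: warm tint to distinguish the real-time modality
    return (v, 0.6 * v, 0.2 * v, 0.5 * v)

def composite_dual(mr_ray, us_ray, w_us=0.5):
    """Front-to-back compositing of co-registered MR and US ray samples."""
    rgb, alpha = [0.0, 0.0, 0.0], 0.0
    for vm, vu in zip(mr_ray, us_ray):
        cm, cu = tf_mr(vm), tf_us(vu)
        # per-sample intermixing of the two classified modalities
        r, g, b, a = [(1 - w_us) * x + w_us * y for x, y in zip(cm, cu)]
        t = (1.0 - alpha) * a
        rgb = [rgb[i] + t * [r, g, b][i] for i in range(3)]
        alpha += t
    return rgb, alpha

rgb, alpha = composite_dual([0.2, 0.6, 0.9], [0.8, 0.1, 0.0])
```

Because each modality keeps its own TF, the registered MR and US volumes can be classified independently and intermixed at every sampling step, rather than merged into a single scalar volume before rendering.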
Direct volumetric visualization of medical datasets has important applications in areas such as minimally invasive therapies and surgical simulation. In popular fixed-slice-distance hardware-based volume rendering algorithms, such as 2D and 3D texture mapping, the anisotropic nature of volumetric medical images and the constantly changing viewing rays make it difficult to render medical datasets without disturbing slicing artifacts during volume rotation. We have developed a hardware-accelerated 3D medical image visualization system based on a commodity graphics card, in which a viewing-direction-based dynamic texture slice resampling scheme is described and implemented on an Nvidia graphics processing unit (GPU). In our algorithm, we use the graphics hardware to dynamically slice the volume texture according to the viewing direction during the rendering process, so that the slice count can be changed dynamically without consuming additional video memory. Near-uniform effective slice spacing is achieved in real time and updated as the viewing angle changes, yielding improved, uniform visual quality at high rendering performance. To further improve rendering efficiency, we have implemented a multi-resolution scheme within our rendering system, which offers the user the option to highlight the volume of interest (VOI) and render it at higher resolution than the surrounding structures. The system also incorporates a fragment-level interactive post-classification algorithm that modifies the texture directly within the texture unit on the graphics card, making it possible to interactively change transfer function parameters and navigate medical datasets in real time during 3D medical image visualization.
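The view-dependent resampling idea — recomputing the slice count from the current viewing direction so that the effective spacing between slices stays near-uniform for an anisotropic volume — can be sketched as a CPU-side calculation. The volume extents and target spacing below are illustrative assumptions, not values from the system.

```python
import math

def slice_count(extent, view_dir, target_spacing):
    """Slices needed so that spacing along view_dir is ~ target_spacing."""
    norm = math.sqrt(sum(c * c for c in view_dir))
    d = [abs(c) / norm for c in view_dir]          # unit view direction
    # width of the volume's bounding box projected onto the view direction
    depth = sum(e * c for e, c in zip(extent, d))
    return max(1, math.ceil(depth / target_spacing))

extent = (256.0, 256.0, 64.0)   # mm; anisotropic: coarse spacing along z
axial   = slice_count(extent, (0.0, 0.0, 1.0), 1.0)   # looking down z
oblique = slice_count(extent, (1.0, 1.0, 1.0), 1.0)   # diagonal view
```

Because the slice count is a per-frame scalar fed to the slicing geometry rather than a resampled texture, it can change with the viewing angle without consuming additional video memory, which is the property the scheme above relies on.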