During the past decade, various volume visualization techniques have been developed for different purposes, and many
of them, such as direct volume rendering, maximum intensity projection and non-photorealistic rendering, have been
implemented on consumer graphics hardware for real-time visualization. However, effective multi-volume visualization,
a way to establish the visual connections between two or more types of data, has not been adequately addressed even
though it has wide applications in medical imaging and numerical simulation based on 3D physical models. In this paper,
we aim to develop an effective GPU-based system for multi-volume visualization that reveals both the connections and
distinctions among multiple volume datasets. To address the main challenge of multi-volume visualization, namely how to
establish visual correspondences while maintaining the distinctive information of each volume, we develop a multi-level
distinction mechanism comprising 2D transfer functions, mixed rendering modes, and volume clipping. Taking advantage of
fast hardware-supported processing capabilities, the system is implemented with GPU programming. Several advanced
volume rendering techniques based on segmented volumes are also
implemented. The result is a highly interactive image fusion system with high-quality images and three-level
volume distinction. We demonstrate the effectiveness of our system with a case study in which the heating effect on a
brain tumor, represented as a temperature volume resulting from high-intensity focused ultrasound beam exposure over
time, is visualized in the context of an MRI head volume.
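The abstract does not spell out how the 2D transfer function establishes correspondences between volumes, but the core idea of fusing two co-registered volumes through a joint 2D lookup can be sketched as follows. This is a minimal CPU-side illustration under our own assumptions (the function name `fuse_ray`, the table layout, and the opacity correction are ours, not the system's actual GPU implementation):

```python
import numpy as np

def fuse_ray(samples_a, samples_b, tf2d, step=1.0):
    """Front-to-back compositing of one ray through two co-registered volumes.

    samples_a, samples_b : normalized scalar samples (0..1) of the two
        volumes taken at the same ray positions.
    tf2d : (N, N, 4) lookup table; a 2D transfer function indexed by the
        pair of sample values, returning RGBA. Entries where only one
        volume is "active" give each dataset its own appearance, while
        joint entries highlight regions where the volumes interact.
    """
    n = tf2d.shape[0]
    color = np.zeros(3)
    alpha = 0.0
    for sa, sb in zip(samples_a, samples_b):
        r, g, b, a = tf2d[int(sa * (n - 1)), int(sb * (n - 1))]
        a = 1.0 - (1.0 - a) ** step          # opacity correction for step size
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                      # early ray termination
            break
    return color, alpha
```

On the GPU, this per-ray loop would instead run per fragment, with `tf2d` stored as a 2D texture sampled in the fragment program.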
High-intensity focused ultrasound (HIFU) has great potential in tumor treatment due to its non-invasive nature. During HIFU exposure, cavitation (the generation of gas bubbles) is often observed and can indicate a potential lesion created by HIFU power. Owing to the large difference in acoustic properties between gas bubbles and the surrounding tissue, ultrasonic energy is reflected and scattered at the HIFU focus, indicating activity around the focal area and often interfering with HIFU dosage delivery. A good understanding and control of the cavitation phenomenon could potentially enhance HIFU delivery and treatment outcomes, and quantifying the onset timing and extent of cavitation could be used for detecting HIFU effects and for therapy guidance. In this paper, we study the relationships among HIFU parameters, the characteristics of cavitation quantified from ultrasound imaging, and the characteristics of the final tissue lesion created by HIFU.
In our study, we used 12 freshly excised pig brains in vitro to observe and analyze cavitation activity during HIFU exposure under different HIFU parameters. Final lesions were examined by cutting the brain tissue into thin slices, and a 3D volume was constructed with segmentation of each lesion. HIFU parameters, cavitation activity quantified through image processing, and lesion characteristics were then correlated. We also present our initial understanding of the process of cavitation activity under certain HIFU parameters, and of how controlling such activity could lead to optimal lesion formation.
Three-dimensional texture-based volume rendering is a technique that treats a 3D volume as a 3D texture, renders multiple 2D view-oriented slices, and blends them into the frame buffer. This technique is well developed in computer graphics and medical visualization, and widely adopted thanks to advances in graphics hardware. This research aims at developing fast parallel slice-cutting and partial-exposing algorithms for real-time 3D-texture-based volume rendering in image-guided surgery and therapy planning. In texture-based volume rendering, a large number of slices is needed to render the volume at high image quality, but for real-time interactive rendering the computation time is critical. Instead of repeating the cutting algorithm for each slice against the volume data, as conventional cutting algorithms do, the slice-cutting algorithm developed in this paper applies the cutting only to the initial slice and derives the slice vertices and 3D texture coordinates for all the others from the distance between the current slice and the initial slice. The new algorithm dramatically reduces the computation time for slice cutting and eases the generation of sectional views of a volume. Partial exposing is another useful technique in volume visualization for revealing important but hidden information. Two depth-based partial-exposing algorithms are developed and implemented in this paper. Both partial-exposing techniques work with arbitrarily complex, but convex, cutaway object shapes, and their implementations maintain interactive frame rates for 3D texture-based volume rendering without apparent performance decline compared to non-cutaway rendering.
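The baseline operation that the incremental slice-cutting scheme improves on is the intersection of a view-aligned slice plane with the 3D texture cube. The sketch below (our own illustration in Python/NumPy, not the paper's code) computes that intersection polygon for one plane; because each intersection vertex moves linearly along its cube edge as the plane offset changes, vertices for subsequent slices can in principle be derived from the initial slice by a per-edge linear step, which is the essence of the scheme described above:

```python
import numpy as np

# Corners of the unit cube (3D texture space) and its 12 edges,
# given as index pairs into the corner array.
CORNERS = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=float)
EDGES = [(0, 1), (2, 3), (4, 5), (6, 7),   # edges along z
         (0, 2), (1, 3), (4, 6), (5, 7),   # edges along y
         (0, 4), (1, 5), (2, 6), (3, 7)]   # edges along x

def slice_polygon(normal, d):
    """Intersect the slice plane normal . p = d with the unit cube.

    Returns the polygon vertices, ordered around the polygon centroid;
    in texture-based rendering these double as 3D texture coordinates.
    """
    pts = []
    for a, b in EDGES:
        pa, pb = CORNERS[a], CORNERS[b]
        da = np.dot(normal, pa) - d
        db = np.dot(normal, pb) - d
        if da * db < 0:                      # plane crosses this cube edge
            t = da / (da - db)
            pts.append(pa + t * (pb - pa))   # linear in d along this edge
    if len(pts) < 3:
        return np.empty((0, 3))
    pts = np.array(pts)
    # Sort vertices by angle around the centroid within the slice plane.
    c = pts.mean(axis=0)
    u = pts[0] - c
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    ang = np.arctan2((pts - c) @ v, (pts - c) @ u)
    return pts[np.argsort(ang)]
```

Since `t` depends linearly on `d` for a fixed edge, advancing the plane by a constant slice spacing moves each vertex by a constant offset along its edge; this per-edge update remains valid until the plane sweeps past a cube corner, where the set of intersected edges (and hence the polygon's vertex count) changes.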