This paper proposes and evaluates a ray-casting strategy that exploits spatial and temporal coherence in both image-space and object-space in order to speed up rendering. It is based on a double structure: in image-space, a temporal buffer that stores for each pixel the next instant of time at which the pixel must be recomputed, and in object-space, a Temporal Run-Length Encoding of the voxel values through time. The algorithm skips empty and unchanged pixels through three different space-leaping strategies. It can compute the images sequentially in time or generate them simultaneously in batch. In addition, it can handle several data modalities simultaneously. Finally, a dedicated out-of-core strategy is used to handle large datasets. Tests performed on two medical datasets and various phantom datasets show that the proposed strategy significantly speeds up rendering.
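To make the double structure concrete, the C++ sketch below outlines one plausible encoding of the two components: a per-voxel temporal run-length list and a per-pixel buffer of next-update times. The names (TemporalRun, TemporalBuffer, nextChange) and field layouts are illustrative assumptions, not the paper's actual implementation.

    #include <cstdint>
    #include <vector>

    // Temporal Run-Length Encoding (hypothetical layout): for one voxel,
    // a run stores a value and the number of consecutive time steps
    // during which that value stays constant.
    struct TemporalRun {
        uint8_t  value;   // voxel value (e.g., classified density)
        uint16_t length;  // time steps the value remains unchanged
    };
    using VoxelTRLE = std::vector<TemporalRun>;

    // Image-space temporal buffer: for each pixel, the next time step
    // at which the ray through that pixel must be recast.
    struct TemporalBuffer {
        int width, height;
        std::vector<int> nextUpdate;  // width * height entries

        bool needsRecompute(int x, int y, int t) const {
            return t >= nextUpdate[y * width + x];
        }
    };

    // Earliest time step >= t at which a voxel's value changes; a pixel
    // whose ray crosses only unchanged voxels can be skipped until the
    // minimum of these times over the ray.
    int nextChange(const VoxelTRLE& runs, int t) {
        int start = 0;
        for (const TemporalRun& r : runs) {
            int end = start + r.length;  // first step after this run
            if (t < end) return end;     // value changes when the run ends
            start = end;
        }
        return start;                    // constant through the last step
    }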
In recent years there has been a growing demand for multimodal medical
rendering systems able to simultaneously visualize data coming from
different sources. This paper addresses the Direct Volume Rendering
(DVR) of aligned multimodal data in medical applications.
Specifically, it proposes a hierarchical representation of the
multimodal data set based on the construction of a Fusion Decision
Tree (FDT) that, together with a run-length encoding of the non-empty
data, provides a means of efficiently accessing the data. Three
different implementations of these structures are proposed. The
simulation results show that the traversal of the data is fast and
that the method is suitable when interactive modification of the
fusion parameters is required.
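As a rough illustration of how such a hierarchy might pair with a run-length encoding of the non-empty data, the sketch below shows one possible node layout and skip test. FDTNode, Run, skippable, and the octree-style split are hypothetical choices for this example; the paper's actual FDT construction and leaf payloads may differ.

    #include <array>
    #include <cstdint>
    #include <vector>

    constexpr int kModalities = 2;  // assumed: e.g., two aligned modalities

    // Run-length encoding of the non-empty data along a scanline: each
    // run records where non-empty samples start and how many follow.
    struct Run {
        uint32_t start;   // index of the first non-empty sample
        uint32_t length;  // number of consecutive non-empty samples
    };

    // Hypothetical FDT node: inner nodes split the volume into octants;
    // leaves hold, per modality, the runs of non-empty data so that a
    // ray can jump over regions that are empty in every modality
    // selected by the current fusion parameters.
    struct FDTNode {
        std::array<bool, kModalities> nonEmpty{};        // per-modality flag
        std::array<std::vector<Run>, kModalities> runs;  // leaf payload
        std::array<FDTNode*, 8> child{};                 // null for leaves
    };

    // A node is skippable when every modality used by the fusion
    // parameters is empty inside it.
    bool skippable(const FDTNode& n,
                   const std::array<bool, kModalities>& used) {
        for (int m = 0; m < kModalities; ++m)
            if (used[m] && n.nonEmpty[m]) return false;
        return true;
    }

Keeping the per-modality emptiness flags in the tree is what would let the skip test be re-evaluated cheaply when the fusion parameters change interactively, without rebuilding the encoded data.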