We extend the notion of multi-resolution spatial data approximation of static datasets to spatio-temporal approximation of time-varying datasets. By including the temporal dimension, we allow a region of one time-step to approximate a congruent region at another time-step.
Approximations of static datasets are generated by refining an approximation until a given error-bound is met. To approximate time-varying datasets, we reuse data from another time-step whenever that data meets the given error-bound for the current time-step. Our technique exploits the fact that time-varying datasets typically do not change uniformly over time: only data from rapidly changing regions needs to be loaded to generate an approximation, while regions that hardly change are not loaded and are instead approximated by congruent regions from another time-step. Common techniques typically permit only a binary changed/unchanged classification between consecutive time-steps; our technique allows a run-time error-criterion to be applied between non-consecutive time-steps as well. The errors between time-steps are calculated in a pre-processing step and stored in error-tables. At run-time, errors are obtained from these error-tables alone, so no raw data needs to be accessed.
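The pre-computed error-table idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the per-region RMS error metric, and the in-memory table layout are all assumptions.

```python
import numpy as np

def build_error_table(timesteps):
    """Pre-processing: for every pair of time-steps (a, b) and every region r,
    store the error of substituting time-step a's region for time-step b's.
    (RMS difference is an assumed error metric for illustration.)"""
    T = len(timesteps)
    R = len(timesteps[0])          # number of congruent regions per time-step
    table = np.zeros((T, T, R))
    for a in range(T):
        for b in range(T):
            for r in range(R):
                diff = timesteps[a][r] - timesteps[b][r]
                table[a, b, r] = np.sqrt(np.mean(diff ** 2))
    return table

def regions_to_load(table, loaded_from, current, error_bound):
    """Run-time: consult only the table, never the raw data. Region r must be
    (re)loaded if the data currently in memory (taken from time-step
    loaded_from[r]) no longer meets the error-bound for the current step."""
    return [r for r in range(table.shape[2])
            if table[loaded_from[r], current, r] > error_bound]
```

A region whose table entry stays below the bound keeps its previously loaded data, which is how slowly changing regions avoid any disk access at all.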
Visualizing large volumetric datasets at full resolution is a challenging task. With every new generation of scanners the available resolution increases, and state-of-the-art approaches cannot be extended to handle these large amounts of data, with either the nature of the algorithm or the available hardware as the limiting factor. Current off-the-shelf graphics hardware allows interactive texture-based volume-rendering of volumetric datasets only up to a resolution of 512³ datapoints. We present a method which allows us to visualize even higher-resolution volumetric datasets. Our approach provides images similar to texture-based volume-rendering techniques at interactive frame-rates and full resolution, and is based on an out-of-core point-based rendering scheme. In a preprocessing step, we group the points within the dataset on disk according to their color; at run-time we read them from disk when needed and immediately stream them to the rendering hardware. The high resolution of the dataset, and hence the density of the datapoints, allows us to use a pure point-based rendering approach: the density of points with equal or similar values within the dataset is high enough to display regions and contours using points only. With our approach we achieve interactive frame-rates for volumes exceeding 512³ voxels. The images generated are similar to those produced by volume-rendering approaches combined with sharp transfer-functions, where only a limited number of values is selected for display. With our data-stream-based approach, interactivity is not restricted to navigating through the dataset itself; we can also change the values of interest in real-time, enabling us to adjust display-parameters and thus to search for interesting and important features and contours interactively. For a human brain extracted from a 753×1050×910 coloured dataset (courtesy of A. W. 
Toga, UCLA) we achieved frame-rates of 20 frames per second and more, depending on the values selected. We describe a new way to interactively display high-resolution datasets without any loss of detail. By using points instead of textured volumes, we reduce the amount of data to be transferred to the graphics hardware compared to hardware-supported texture-based volume rendering. Using a data-organization optimized for reading from disk, we reduce the number of disk-seeks, and thus the overall update-time for a change of parameter-values.
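The disk-optimized data-organization described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the binary file format, quantized scalar values as grouping keys, and the (offset, count) index are all assumptions.

```python
import struct
from collections import defaultdict

def write_point_file(path, points):
    """Preprocessing: group points by their (quantized) value so that each
    value occupies one contiguous run on disk, and return an index mapping
    value -> (byte offset, point count)."""
    groups = defaultdict(list)
    for x, y, z, v in points:
        groups[v].append((x, y, z))
    index = {}
    with open(path, "wb") as f:
        for v, pts in sorted(groups.items()):
            index[v] = (f.tell(), len(pts))
            for x, y, z in pts:
                f.write(struct.pack("<3f", x, y, z))   # 12 bytes per point
    return index

def read_points_for_values(path, index, values):
    """Run-time: one seek per selected value. Only the contiguous runs of
    interest are read, ready to be streamed straight to the renderer."""
    out = []
    with open(path, "rb") as f:
        for v in values:
            if v not in index:
                continue
            offset, count = index[v]
            f.seek(offset)
            data = f.read(count * 12)
            out.extend(struct.unpack_from("<3f", data, i * 12)
                       for i in range(count))
    return out
```

Because each selected value maps to a single contiguous run, changing the displayed values costs one seek plus one sequential read per value, which is what keeps parameter changes interactive.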