This paper introduces a technique to properly sample volume boundaries in hardware texture-based volume visualization. Prior techniques render a volume with a set of uniformly spaced proxy geometries that sample (and represent) a set of uniform-depth slices. While this is sufficient for the core of a volume, it does not account for a sample's partial overlap at the boundaries of the volume, and this omission can lead to significant artifacts there. Increasing the sampling rate does not solve the problem; the proper calculation does. While these artifacts might not be easily visible with large datasets, this paper expands on the fundamentals of visualization by presenting a correct handling of sampling at boundaries, which is missing from previous literature. Our technique computes the non-unit depth contributions of the volume at the boundaries. We use fragment programs to perform this adaptive border sampling, computing the partial sample contributions and matching the sampling planes at the volume boundaries with the sampling geometry in the core of the volume.
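The partial-depth idea can be illustrated with the standard opacity-correction relation: a sample slab that covers only a fraction f of the reference slab depth contributes opacity 1 - (1 - alpha)^f. The sketch below applies this on the CPU; the function names are illustrative and stand in for the paper's fragment-program implementation.

```python
# Sketch of partial-depth opacity correction at volume boundaries.
# A boundary slab covering only a fraction f of the reference slab depth
# contributes a correspondingly smaller opacity:
#     alpha' = 1 - (1 - alpha)^f
# (names are illustrative, not the paper's fragment-program code).

def corrected_alpha(alpha, depth_fraction):
    """Opacity of a sample covering only depth_fraction of a full slab."""
    return 1.0 - (1.0 - alpha) ** depth_fraction

def composite_over(samples):
    """Front-to-back OVER compositing of (color, alpha) samples."""
    color, trans = 0.0, 1.0  # accumulated color, remaining transparency
    for c, a in samples:
        color += trans * a * c
        trans *= 1.0 - a
    return color, 1.0 - trans

# Full interior slab vs. a boundary slab that is only 30% inside the volume:
full = corrected_alpha(0.5, 1.0)   # -> 0.5 (unchanged)
part = corrected_alpha(0.5, 0.3)   # ~0.188, rather than a naive 0.5
```

Without the correction, a boundary sample would be weighted as if it sampled a full slab, which is exactly the over-contribution that produces the boundary artifacts.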
This paper discusses issues arising from the limited precision of hardware texture-based volume visualization. We describe the compositing OVER operator and how fixed-point arithmetic affects it. We propose two techniques to improve the precision of fixed-point compositing and the accuracy of hardware-based volume visualization. The first technique is to dither color and alpha values. The second technique, which we call exponent-factoring, captures significantly more numeric resolution than dithering, but can only produce monochromatic images.
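A minimal sketch of the precision problem and of the dithering remedy, assuming an 8-bit fixed-point framebuffer: per-step contributions smaller than 1/255 vanish under plain quantization, while adding uniform noise before truncation makes the quantized value unbiased on average. This is my illustration of the idea, not the paper's exact scheme, and exponent-factoring is not shown.

```python
import random

SCALE = 255  # 8-bit fixed-point framebuffer resolution (assumed)

def quantize(x):
    """Plain fixed-point quantization (round to nearest 1/255)."""
    return round(x * SCALE) / SCALE

def quantize_dithered(x, rng):
    """Add uniform noise before truncating: the expected quantized value
    equals x, trading deterministic truncation error for unbiased noise."""
    return min(SCALE, int(x * SCALE + rng.random())) / SCALE

def composite_fixed(alphas, quant):
    """Back-to-front OVER accumulation of opacities, quantizing each
    step as a fixed-point framebuffer would."""
    acc = 0.0
    for a in alphas:
        acc = quant(acc + (1.0 - acc) * a)
    return acc

# 400 samples of opacity 0.001 should accumulate to about 0.33, but each
# per-step contribution is below 1/255, so plain quantization loses it all:
rng = random.Random(42)
composite_fixed([0.001] * 400, quantize)                             # 0.0
composite_fixed([0.001] * 400, lambda x: quantize_dithered(x, rng))  # near 0.33
```

The dithered result is noisy per pixel, but its expectation matches the true accumulated opacity, which is why dithering recovers contributions that truncation discards outright.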
We introduce a multi-layered image cache system that is designed to work with a pool of rendering engines to facilitate an interactive, frameless, asynchronous rendering environment. Our system decouples the rendering of imagery from its display. It therefore decouples render frequency and resolution from display frequency and resolution, and allows asynchronous transmission of imagery instead of the compute-and-send cycle of standard parallel systems. It also allows local, incremental refinement of imagery without requiring all imagery to be re-rendered. Images are placed at fixed positions in camera (rather than world) space to eliminate occlusion artifacts. Display quality is improved by increasing the number of images; interactivity is improved by decreasing it.
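The decoupling can be sketched as a cache that renderers update asynchronously and the display reads without blocking. All names here are illustrative; the real system manages many layered images across a pool of engines rather than a single slot.

```python
# Minimal sketch of decoupling render frequency from display frequency:
# the display always shows the most recently cached image instead of
# waiting on a compute-and-send cycle. Names are illustrative.

class ImageCache:
    def __init__(self):
        self.latest = None           # most recent image from any engine

    def put(self, image):
        self.latest = image          # asynchronous update by a renderer

    def get(self):
        return self.latest           # display reads without waiting

cache = ImageCache()
displayed = []
for tick in range(10):
    if tick % 3 == 0:                # a rendering engine finishes every 3 ticks
        cache.put(f"image@{tick}")
    displayed.append(cache.get())    # the display refreshes every tick
# displayed reuses stale imagery between renders:
# ['image@0', 'image@0', 'image@0', 'image@3', ...]
```

Between renders the display simply reuses the cached image, so display rate is independent of render rate, at the cost of showing slightly stale imagery.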
We extend the notion of multi-resolution spatial data approximation of static datasets to spatio-temporal approximation of time-varying datasets. By including the temporal dimension, we allow a region of one time-step to approximate a congruent region at another time-step.
Approximations of static datasets are generated by refining an approximation until a given error bound is met. To approximate time-varying datasets, we use data from another time-step when that data meets a given error bound for the current time-step. Our technique exploits the fact that time-varying datasets typically do not change uniformly over time. By loading data only from rapidly changing regions, less data needs to be loaded to generate an approximation. Regions that hardly change are not loaded and are instead approximated by regions from another time-step. Common techniques typically permit only binary classification between consecutive time-steps; ours allows a run-time error criterion to be applied between time-steps that are not temporally consecutive. The errors between time-steps are calculated in a pre-processing step and stored in error-tables, which are used to evaluate errors at run-time, so no volume data needs to be accessed.
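The run-time decision can be sketched as a table lookup: for each region, check whether the data already resident from some earlier time-step still meets the error bound for the current time-step, and load fresh data only when it does not. The function and table layout below are hypothetical illustrations of this idea.

```python
# Hypothetical sketch of the run-time check: per-region errors between
# pairs of time-steps are precomputed into a table, so deciding whether
# resident data from time-step t_src can stand in for the current
# time-step needs only a lookup, not a pass over the volume data.

def plan_loads(regions, t_current, loaded, error_table, tol):
    """Return the regions that must be (re)loaded at t_current.
    loaded: {region: time-step whose data is currently resident}
    error_table: {(region, t_src, t_dst): precomputed approximation error}
    """
    to_load = []
    for r in regions:
        t_src = loaded.get(r)
        err = error_table.get((r, t_src, t_current), float("inf"))
        if err > tol:
            to_load.append(r)        # region changed too much: fetch fresh data
            loaded[r] = t_current
        # else: keep approximating r with the resident data from t_src
    return to_load
```

A slowly changing region keeps its old data across many time-steps, while a rapidly changing one is refreshed, which is exactly why less data needs to be loaded overall.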
We present a multiresolution technique for interactive texture-based volume visualization. This method uses an adaptive scheme that renders the volume within a region of interest at high resolution and the volume farther from this region at progressively lower resolutions. We use indexed texture maps, which allow for interactive modification of the opacity transfer function. Our algorithm is based on the segmentation of texture space into an octree, where the leaves of the tree hold the original data and the internal nodes hold lower-resolution approximations. Rendering is done adaptively by selecting high-resolution cells close to a center of attention and low-resolution cells away from this area. We limit the artifacts introduced by this method by modifying the transfer functions in the lower-resolution datasets and by utilizing spherical shells as a proxy geometry. This technique can also be used for viewpoint-dependent renderings.
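The adaptive cell selection can be sketched as a descent of the octree that refines nodes near the center of attention and keeps distant nodes at their coarser level. The node layout and the distance threshold below are assumptions for illustration, not the paper's parameters.

```python
# Illustrative sketch of adaptive octree cell selection: refine a node
# while it is close to the focus point relative to its own size, render
# it at its current (coarser) level otherwise. Thresholds are assumed.

import math

class Node:
    def __init__(self, center, size, children=None):
        self.center, self.size, self.children = center, size, children or []

def select_cells(node, focus, out):
    """Collect the cells to render for one frame."""
    d = math.dist(node.center, focus)
    if node.children and d < 2.0 * node.size:   # near the focus: refine
        for child in node.children:
            select_cells(child, focus, out)
    else:                                        # far away (or a leaf)
        out.append(node)
    return out
```

Because the test couples distance to node size, coarse cells far from the focus are accepted as-is, while the tree is traversed down to the original-resolution leaves near the region of interest.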