In recent years, multi-volume visualization has become an industry standard for analyzing and interpreting large surveys
of seismic data. Advances made in computer hardware and software have moved visualization from large, expensive
visualization centers to the desktop. Two of the greatest factors in achieving this have been the rapid performance
enhancements to computer processing power and increasing memory capacities. In fact, computer and graphics
capabilities have tended to more than double each year. At the same time, the sizes of seismic datasets have grown
dramatically. Geoscientists regularly interpret projects that exceed several gigabytes. They need to interpret prospects
quickly and efficiently and expect their desktop workstations and software applications to be as performant as possible.
Interactive, multi-volume visualization is important to rapid prospect generation.
Consequently, the ability to visualize and interpret multiple seismic and attribute volumes enhances and accelerates
the interpretation process by allowing geoscientists to gain a better understanding of the structural framework, reservoir
characteristics, and subtle details of their data. To this end, we analyzed seismic volume visualization and defined four
levels of intermixing: data, voxel, pixel, and image. We then designed and implemented a framework to
accomplish these four levels of intermixing. To take advantage of recent advancements in programmable graphics
processing units (GPUs), all levels of intermixing have been moved from the CPU into the GPU, with the exception of
data intermixing. We developed a prototype of this framework to prove our concept. This paper describes the four levels
of intermixing, the framework, and the prototype; it also presents a summary of our results and comments made by
geoscientists and developers who evaluated our work.
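The four levels differ in where the mixing happens in the rendering pipeline. As an illustration only, the following CPU-side Python sketch contrasts data intermixing (blend raw samples, then classify through a color table) with pixel intermixing (classify each volume separately, then blend the resulting colors); the function names and the simple linear blend are our own assumptions, not the framework's actual implementation.

```python
import numpy as np

def classify(samples, lut):
    """Map scalar samples in [0, 1] through an N x 4 RGBA color table."""
    idx = np.clip((samples * (len(lut) - 1)).astype(int), 0, len(lut) - 1)
    return lut[idx]

def data_intermix(a, b, lut, w=0.5):
    """Data-level intermixing (sketch): blend the raw samples of the two
    volumes first, then classify the blended result through one table."""
    return classify((1 - w) * a + w * b, lut)

def pixel_intermix(a, b, lut_a, lut_b, w=0.5):
    """Pixel-level intermixing (sketch): classify each volume through its
    own table, then blend the resulting colors per pixel."""
    return (1 - w) * classify(a, lut_a) + w * classify(b, lut_b)
```

Because classification is generally nonlinear, the two orders produce different images, which is one reason the levels must be distinguished rather than treated as interchangeable.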
In recent years, off-the-shelf graphics cards have made the graphics processing unit (GPU) programmable, as an alternative to the fixed-function pipeline. We believe this capability enables a new paradigm in geoscience data visualization. In the past, geoscience data preparation, interpretation, and simulation were all done on the central processing unit (CPU), and the generated graphics primitives were then fed into the GPU for visualization. This approach was dictated by the constraints of the general-purpose graphics application programming interfaces (APIs). With GPU programming, this front-end processing can be done in the GPU and visualized immediately.

After the geometry data is passed into the GPU, parameters can be used to control these processes inside the GPU, and different algorithms can be applied at run time by loading a new shading program. To prove this concept, we designed and implemented Java-based shader classes, which operate on top of Cg, a high-level language for graphics programming. These shader classes load Cg shaders to provide a new method for visualizing and interacting with geoscience data.

The results from this approach show better visual quality for seismic data display and dramatically improved performance for large 3D seismic datasets. For editing geological surfaces, tests demonstrate performance 10 times faster than the typical approach. This paper describes the use of these shaders and presents the results of applying them to geoscience data visualization.
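The run-time shader-swapping pattern described above can be sketched, independently of any GPU API, as a small registry of named programs in which activating a different program changes the visualization algorithm without rebuilding the geometry. The class and method names below are illustrative assumptions; the actual system wraps Cg through Java classes, which is not reproduced here.

```python
class ShaderProgram:
    """A named shading program plus its run-time parameters (sketch)."""
    def __init__(self, name, source):
        self.name = name
        self.source = source   # Cg source text in the real system
        self.params = {}       # uniform parameters set by the application

    def set_param(self, key, value):
        self.params[key] = value

class ShaderManager:
    """Holds loaded programs; activating one swaps the algorithm at run time."""
    def __init__(self):
        self.programs = {}
        self.active = None

    def load(self, program):
        self.programs[program.name] = program

    def activate(self, name):
        # Loading a new shading program switches the visualization
        # algorithm without touching the geometry already on the GPU.
        self.active = self.programs[name]
        return self.active
```

For example, an application could load both a pixel-intermixing and a voxel-intermixing program at startup and call `activate()` to switch between them interactively, passing weights or color-table choices through `set_param()`.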