A computer program called ISOLEY allows interactive visualization of Computational Fluid Dynamics (CFD) scalar and
vector functions by means of user-specified sections of the data. The application is based on table lookups which govern
both isosurface generation on hexahedral grid cells and recursive subdivision of the cells. The program supports
Gouraud-shaded color maps of the data, surface-on-surface maps, and deformation surfaces of vector fields. The execution of
the code for animated sweeps is improved by presorting the cells in the database and maintaining an active set of cells to be
rendered. The program is implemented in the C language under UNIX and makes use of the NASA Ames Panel Library as
a user interface.
We propose a method for visualizing 3-D scalar data defined on unstructured grids by means of
tetrahedral primitives. In these primitives, data are distributed linearly not only along edge lines but along
any line segment. This characteristic is well suited to linear interpolation, which is a very effective
method of visualization in terms of computational cost.
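The linear distribution inside a tetrahedral primitive can be sketched via barycentric coordinates. The NumPy helper below is an illustration of the general idea, not the authors' implementation; the function name and example geometry are ours.

```python
import numpy as np

def interp_tet(verts, vals, p):
    """Linearly interpolate scalar values at tetrahedron vertices to point p.

    verts: (4, 3) array of vertex coordinates (hypothetical example geometry).
    vals:  (4,) scalar values at the vertices.
    p:     (3,) query point inside the tetrahedron.
    """
    # Solve for barycentric coordinates: p = sum(b_i * v_i), sum(b_i) = 1.
    A = np.vstack([verts.T, np.ones(4)])       # 4x4 linear system
    b = np.linalg.solve(A, np.append(p, 1.0))  # barycentric weights
    return float(b @ vals)

# Unit tetrahedron with vertex values equal to the x-coordinate: the
# interpolated field is then linear along any line segment, as claimed.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
vals = verts[:, 0]
print(interp_tet(verts, vals, np.array([0.25, 0.25, 0.25])))  # 0.25
```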
Because of the wide use of the 3-D finite element method (FEM), there is a strong need for volume visualization
of the unstructured grid data that is output by 3-D FEM analysis. In this kind of analysis,
various kinds of finite elements, which are composed of unstructured grid data, are used in a mixed form
to represent a complicated 3-D space. In a finite element, data are expressed by means of the element's
own interpolation function. In terms of data processing, a simple interpolation function is most suitable,
because it consumes few computational resources. We therefore select a linear tetrahedral element (LTE),
and introduce a concept of element subdivision to other finite elements. In our method, each finite element
is first reconstructed as a set of LTEs that approximates its interpolation function.
We visualize iso-valued surfaces from 3-D scalar data by using an LTE as a processing primitive. For this,
we have developed two methods. One is a method for extracting triangular facets as iso-valued surfaces
and rendering them by a traditional shading algorithm. The other is a method for rendering LTEs directly
in order to visualize iso-valued surfaces.
We apply these methods to analysis of thermal stress in a semi-conductor chip and simulation of air-flow
in a clean room, and confirm their effectiveness.
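As a rough sketch of the first method, the iso-valued facet inside a single LTE can be found by linear interpolation along the edges that cross the iso-value. The edge table and helper name below are assumptions of this illustration, not the paper's code.

```python
import numpy as np

def tet_isosurface(verts, vals, iso):
    """Extract the iso-valued facet from one linear tetrahedral element (LTE).

    Returns the crossing points on edges where the linear field equals iso.
    Three points form a triangle, four a quad (which a renderer would split
    into two triangles).
    """
    pts = []
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    for i, j in edges:
        a, b = vals[i], vals[j]
        if (a - iso) * (b - iso) < 0:        # edge crosses the iso-value
            t = (iso - a) / (b - a)          # linear interpolation weight
            pts.append(verts[i] + t * (verts[j] - verts[i]))
    return np.array(pts)

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
vals = np.array([0.0, 1.0, 1.0, 1.0])
print(tet_isosurface(verts, vals, 0.5).shape)  # (3, 3): one triangular facet
```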
In this paper, we propose a high-speed and direct imaging algorithm for constructing equi-valued surfaces from
3D grid data in scientific and engineering fields. Our basic idea is to generate and draw polygons simultaneously
by processing the cells spanned by grids in order of decreasing distance from the current viewpoint. Equi-valued
surfaces are generated in five tetrahedrons into which the cells are subdivided, and are sent to a graphics device.
The execution order of the tetrahedrons is determined by the current viewpoint. Since the
algorithm does not need a store of intermediate polyhedral data, depth calculation, or depth buffer memory for
hidden surface removal, the user can get a quick response to changes of the view direction and of the surface constant
C in interactive graphics. This algorithm is particularly powerful for imaging multiple surfaces associated with
multiple surface constants in semi-transparent display.
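The viewpoint-ordered traversal can be sketched as follows. The corner numbering for the five-tetrahedron decomposition and the cell-center distance sort are assumptions of this illustration, not the paper's exact scheme.

```python
import numpy as np

# One standard decomposition of a hexahedral cell (corners 0..7, bottom face
# 0-1-2-3, top face 4-5-6-7) into five tetrahedrons: four corner tets plus
# a central tet. Real grids may need the mirrored decomposition on
# alternating cells to keep faces consistent.
FIVE_TETS = [(0, 1, 3, 4), (1, 2, 3, 6), (1, 4, 5, 6), (3, 4, 6, 7), (1, 3, 4, 6)]

def back_to_front(cell_centers, viewpoint):
    """Return cell indices sorted by decreasing distance from the viewpoint,
    so cells can be drawn painter's-algorithm style with no depth buffer."""
    d = np.linalg.norm(cell_centers - viewpoint, axis=1)
    return np.argsort(-d)

centers = np.array([[0.0, 0, 0], [5.0, 0, 0], [2.0, 0, 0]])
order = back_to_front(centers, np.array([6.0, 0, 0]))
print(order)  # farthest cell first: [0 2 1]
```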
This paper presents a parallel scheme for three dimensional data visualization at interactive rates. The scheme is
particularly suitable for multiprocessor systems with distributed frame buffers and is currently implemented on an AT&T
Pixel Machine, a parallel computer based on mesh connected digital signal processors with a distributed frame buffer.
Nearly linear performance increase with the number of processors in the mesh is obtained by partitioning the original
three dimensional data into sub-blocks and processing each sub-block in parallel. The approach is very flexible in
implementing a variety of visualization techniques, such as volume compositing (translucent models), binary-class and
percentage mixtures and surface based volume rendering.
In this paper, we will describe the interactive visualization and database interface system under
development at SRI's David Sarnoff Research Center (Sarnoff) and its most recent application. This
application, and the thrust of this paper, addresses the problem of developing a system of tools for
helping analysts manage, and draw timely conclusions from, information. The successful approach to
this problem relies heavily on interactive data visualization tools and the analyst's ability to pose "what-if"
questions interactively. It also involves the application of "expert associates" to partially automate
the analyst's task of gaining insight from data and identifying anomalies.
Continuing advances in supercomputer technology give the scientist/engineer the ability to run increasingly
complex computational experiments and simulations. Gaining insight from the flood of simulation results
is a difficult task for the scientist. This paper presents the Visual Interpretation System (VIS), an easy-to-use,
interactive, discipline-independent tool for understanding multidimensional data sets. Components
of the VIS are a database manager, a user interface, and a visualization manager. The database manager
facilitates discipline-independent visualization and lets the scientist manipulate data with familiar names and
attributes. The visualization manager uses an optical model to generate 3D images with a variety of options
including opaque and transparent structures, cutouts, and region highlighting. The effectiveness of the VIS
is demonstrated using data from a 3D simulation of a transistor device.
We have found three techniques useful for understanding a sequence of two-dimensional simulation results or a static
three-dimensional image. An animated sequence of color-level plots often isolates phenomena that are difficult to find
in static contour plots; a scheme for selecting equi-spaced colors is presented. Sound can be used to augment animated
color-level plots with the values of scalar parameters. When viewing two- and three-dimensional objects, the viewer's
perspective can be effectively chosen by 'flying' around an object like a helicopter. These methods have been used for
displaying the results of semiconductor simulations.
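A minimal stand-in for an equi-spaced color scheme, using equally spaced hues at full saturation and value; the paper's actual selection criterion is not reproduced here, and the function name is ours.

```python
import colorsys

def equispaced_colors(n):
    """Return n RGB triples with equally spaced hues, a simple sketch of
    choosing distinguishable colors for animated color-level plots."""
    return [colorsys.hsv_to_rgb(i / n, 1.0, 1.0) for i in range(n)]

levels = equispaced_colors(6)
print(len(levels))  # 6 RGB triples, hues 60 degrees apart, starting at red
```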
In this paper, "feature extraction" refers to the process of interactively extracting meaning from data sets
that represent continuous fields. These fields can be quite complex in various ways, even if the analyst has a
thorough understanding of the basic physics underlying the processes in a given application. In interactive
environments like this, it is essential that multidimensional graphics input be provided in a way in which an
analyst can easily learn to explore field domains, looking for features that clearly communicate the behaviour
of the underlying processes. This paper reviews existing style guides, and outlines a proposal for extending
these to more general multidimensional graphics input styles. Conclusions to date are that advanced, easy-to-use,
interactive feature extraction paradigms for high-order 3D fields are now feasible, and are easily
specifiable in a style guide format. Some examples of the application of the proposed style guide are demonstrated
in the accompanying videotape.
This paper describes an approach to synthetic three-dimensional object manipulation using three different
haptic I/O devices in a virtual workspace on a graphics superworkstation. The devices involve the operator in
unique modes of interaction that require positioning a six degree-of-freedom sensor, applying torques to a static
ball, or creating interpreted hand gestures. With these devices, the user can select, rotate and deposit synthetic
virtual objects in the micro world. The micro world is an "artificial reality" in which elementary physical forces
of gravity, volume preservation, collision, and external user input may be applied. The techniques developed
overcome some of the difficulties experienced with two-dimensional input devices in a three-dimensional space.
Furthermore, the ability of the user to continuously modify physical constraints while observing the results in
real-time facilitates data interpretation tasks.
A range of stereoscopic display technologies exist which are no more intrusive, to the user, than a pair of spectacles. Combining such a display system with sensors for the position and orientation of the user's point-of-view results in a greatly enhanced depiction of three-dimensional data. As the point of view changes, the stereo display channels are updated in real time. The face of a monitor or display screen becomes a window on a three-dimensional scene. Motion parallax naturally conveys the placement and relative depth of objects in the field of view. Most of the advantages of "head-mounted display"
technology are achieved with a less cumbersome system. To derive the full benefits of stereo combined with motion parallax, both stereo channels must be updated in real time. This may limit the size and complexity of data bases which can be viewed on processors of modest resources, and restrict the use of additional three-dimensional cues, such as texture mapping, depth cueing, and hidden surface elimination.
Effective use of "full 3D" may still be undertaken in a non-interactive mode. Integral composite holograms have often been advanced as a powerful 3D visualization tool. Such a hologram is typically produced from a film recording of an object on a turntable, or a computer animation of an object rotating about one axis. The individual frames of film are multiplexed, in a composite hologram, in such a way as to be indexed by viewing angle. The composite may be produced as a cylinder transparency, which provides a stereo view of the object as if enclosed within the cylinder, which can be viewed from any angle. No vertical parallax is usually provided (this would require increasing the dimensionality of the multiplexing scheme),
but the three dimensional image is highly resolved and easy to view and interpret. Even a modest processor can duplicate the effect of such a precomputed display, provided sufficient memory and bus
bandwidth. This paper describes the components of a stereo display system with user point-of-view tracking for interactive 3D, and a digital realization of integral composite display which we term virtual integral holography. The primary drawbacks of holographic display - film processing turnaround time, and the difficulties of displaying scenes in full color - are obviated, and motion parallax cues provide easy 3D interpretation even for users who cannot see in stereo.
This paper describes experience using a volumetric workstation to extract meaning
from complex transparent data. The hardware and software of the workstation are
briefly summarized, and initial experience is reported. Medical data has primarily
been studied, but the techniques should apply to other types of data.
This presentation consists of an overview of scientific images and animation produced recently at various
research centers. Strategies of visualization are discussed with respect to these images and the data
they represent. This discussion focuses not only on software issues, such as interactivity and data handling,
but also on visual and cognitive issues associated with visualization.
Presented images include computational fluid dynamics simulations, meteorological and atmospheric
simulations and recordings, astrophysical simulations, and field recording of natural data. Specific visualization
techniques under discussion include color contour mapping in two and three dimensions, surface
and isosurface mapping, volume rendering, and glyph and particle representation.
In addition to explanations of techniques and interpretations of data provided by these techniques, different
strategies within each technique are explored. Comparisons are also made between different strategies
relative to identical or similar databases.
In the process of these explanations and comparisons, some general ideas about visualization are revealed.
These points are emphasized, as they relate to a specific database, ranges of similar databases, and
scientific data in general. These points are of interest to scientists working in visualization, as they indicate
efficient and effective routes to the better understanding of large databases.
As the data being presented to us by computers becomes more complex, the problem of being able to extract meaning
becomes increasingly difficult. It is argued that some of this difficulty can be reduced by not relying solely on the visual
channel for data presentation. It is argued that the use of nonspeech audio cues as a means of data presentation can
significantly augment the communication of meaningful information.
A class of data displays, characterized generally as Auditory Data Representation, is described and motivated. This type of
data representation takes advantage of the tremendous pattern recognition capability of the human auditory channel. Audible
displays offer an alternative means of conveying quantitative data to the analyst to facilitate information extraction, and are
successfully used alone and in conjunction with visual displays. The Auditory Data Representation literature is reviewed,
along with elements of the allied fields of investigation, Psychoacoustics and Musical Perception. A methodology for applied
research in this field, based upon the well-developed discipline of psychophysics, is elaborated using a recent experiment as a
case study. This method permits objective estimation of a data representation technique by comparing it to alternative
displays for the pattern recognition task at hand. The psychophysical threshold of signal to noise level, for constant pattern
recognition performance, is the measure of display effectiveness.
Our research group has been working for several years on the development of auditory
alternatives to visual graphs, primarily in order to give blind science students and scientists access
to instrumental measurements. In the course of this work we have tried several modes for auditory
presentation of data: synthetic speech, tones of varying pitch, complex waveforms, electronic music,
and various non-musical sounds. Our most successful translation of data into sound has been
presentation of infrared spectra as musical patterns. We have found that if the stick spectra of two
compounds are visibly different, their musical patterns will be audibly different.
Other possibilities for auditory presentation of data are also described, among them listening to
Fourier transforms of spectra, and encoding data in complex waveforms (including synthetic speech).
Interpretation of multi-dimensional complex data usually involves extracting the relationship between several
variables. This is typically done with an interactive visual system. High-resolution volumetric data imaging, color,
animation, and multiple views are effective tools for data interpretation. Sound can provide an additional and
complementary perceptual channel. This presentation focuses on the use of sound with a multi-dimensional imaging
system to facilitate the interactive interpretation of complex data.
Our methods and system are presented with data from a simulation which computes electron density, hole density,
and potential throughout the volume of a three-dimensional semiconductor. The spatial changes and relationships
of the three scalar fields are the object of study. Normally the field relations would be examined through
multiple visualizations. Here, sound is used to augment the visualization by permitting a user to visually concentrate
on one field, while listening to the other.
Two of the three scalar fields from the simulation are selected for interpretation and visualized. The 3-dimensional
vector gradient of one of them is sonified at a selected focal point within the semiconductor solid. As the current
focus is interactively moved through the solid, the representative sound is altered accordingly. The sonification
is composed such that local minima and maxima of one of the fields can be found without looking at it.
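One way to sonify a field's gradient at a focal point is to map local gradient magnitude to pitch, so that zero-gradient extrema are found by listening for the floor pitch. This is a hedged sketch: the mapping choices (base pitch, octave span) and the function name are ours, not the paper's.

```python
import numpy as np

def gradient_to_pitch(field, focus, base_hz=220.0, span_octaves=2.0):
    """Map the gradient magnitude of a 3-D scalar field at a grid focal
    point to a frequency in Hz. Flat regions sound low; steep regions
    sound high, so minima/maxima are audible as the base pitch."""
    gx, gy, gz = np.gradient(field)
    i, j, k = focus
    mag = np.sqrt(gx[i, j, k]**2 + gy[i, j, k]**2 + gz[i, j, k]**2)
    norm = mag / (1.0 + mag)                  # squash magnitude into [0, 1)
    return base_hz * 2.0 ** (span_octaves * norm)

# Paraboloid field with its minimum at i = 4: the gradient vanishes there.
z = np.fromfunction(lambda i, j, k: (i - 4.0)**2, (9, 9, 9))
print(gradient_to_pitch(z, (4, 4, 4)))  # 220.0, the base pitch at the minimum
```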
A scientific animation, though complete in itself, is rarely seen as complete without
a musical score. Sadly the music is often an afterthought showing little relation to the
visual materials being presented. This report illustrates a correlative technique that builds
a sonic score from the same data that creates the graphics. When the musical score
represents aurally what is seen visually a high degree of integration between graphic and
sonic elements is achieved.
In mapping data into visual or aural materials the question of what is aesthetically
successful can find itself in conflict with what is scientifically useful. In graphics a simple
color map can often hide pertinent detail in an image. A differentiated map can
overemphasize detail and hence distort overall forms and patterns. These same problems
exist with pitch quantization and time sample intervals when building sonic maps of data.
Employing a series of four studies, different levels of sonic detail are illustrated. It is
hoped they will serve to encourage investigation in the integration of aural/visual
materials into coherent and holistic data presentation.
We present and discuss several Dynamic Statistical Graphics tools designed to help the data analyst visually discover and
formulate hypotheses about the structure of multivariate data. All tools are based on the notion of the "data space", a
representation of multivariate data as a high-dimensional space which has a dimension for each variable (column of the data)
and a point for each case (row of the data). The data space is projected orthogonally onto the "visual space", a three-dimensional
space which is seen and manipulated by the data analyst. The visual space has a point-like object for each case
and can have a vector-like object for each variable. The three dimensions of the visual space are orthogonal linear
combinations of the variables.
We discuss the notion of a "Guided Tour" of multivariate data space, and present guided tour tools. These tools include:
• 6D-rotation, a tool for dynamically rotating, in six-dimensional (6D) space, from one 3D portion of the data space to
another while displaying the dynamically changing projection in the visual-space;
• 3D-residualization, a tool that determines, at the user's request, the largest invisible 3D-space - i.e., the largest 3D
space orthogonal to the visual space. This space is used with the visual space so that 6D-rotation can occur between two
new 3D portions of the data space;
• projection-cuing, a group of three tools that use change in object brightness as a cue to show change in aspects of the
projection of objects from the data space to the visual space during 6D-rotation.
In addition to these tools for touring high-dimensional multivariate space, we discuss tools for manipulating the 3D visual
space, and a tool for looking at the relationship between two data spaces. Finally, we present a guided tour implementation
in which the user manipulates joysticks and sliders to dynamically and smoothly control the tour.
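The orthogonal projection from the high-dimensional data space to the 3-D visual space can be sketched as below. Building the basis from a random matrix via QR is our illustrative assumption; the guided tour instead chooses these combinations interactively and smoothly over time.

```python
import numpy as np

def project_to_visual_space(data, basis):
    """Orthogonally project n x p multivariate data onto a 3-D visual space.

    basis: p x 3 matrix whose columns are orthonormal linear combinations
    of the variables; each row of the result is one case's visual position.
    """
    return data @ basis

rng = np.random.default_rng(0)
p = 6
data = rng.standard_normal((100, p))                   # 100 cases, 6 variables
basis, _ = np.linalg.qr(rng.standard_normal((p, 3)))   # orthonormal columns
visual = project_to_visual_space(data, basis)
print(visual.shape)  # (100, 3): one 3-D point-like object per case
```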
We describe the application of statistical clustering algorithms (approximate fuzzy C-means
(AFCM) and ISODATA) and a Bayesian/maximum likelihood (B/ML) classifier for data dimension
reduction and information extraction with MRI. Analyses were performed on 140
cranial and 6 body MR image data sets obtained at 1.5 Tesla (GE Signa) with a variety of
pathologies. Cluster analysis methods were run in an unsupervised mode and used to segment
image data sets into 32 classes. Unsupervised classification of new image data sets was
achieved by training the B/ML classifier on the 32 cluster data set and using the second-order
statistics to assign each new image pixel to a cluster centroid in feature space. A translation
table was then used to combine these cluster assignments into nine "superclusters" or tissue
types. Tissue classification results were evaluated using visual assessment by a radiologic
expert and by statistical comparison with a "gold standard" tissue map. Comparison of the
newly classified data to the gold standard image using a confusion matrix showed an overall
accuracy of 91%. We have found that this approach can improve the diagnostic specificity
of MRI and can be applied to new data in an unsupervised mode with a high degree of accuracy.
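The final assignment step, mapping each pixel to a cluster centroid in feature space, can be sketched with a nearest-centroid rule. This is a minimal stand-in, not the paper's Bayesian/maximum-likelihood formulation, and the names are ours.

```python
import numpy as np

def assign_to_centroids(pixels, centroids):
    """Assign each feature-space pixel to its nearest cluster centroid by
    Euclidean distance; a simplified sketch of unsupervised classification
    of new image data against previously trained cluster centroids."""
    # Pairwise distances: (n_pixels, n_centroids) via broadcasting.
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

centroids = np.array([[0.0, 0.0], [10.0, 10.0]])   # two tissue-class centroids
pixels = np.array([[1.0, 1.0], [9.0, 9.0], [0.0, 2.0]])
print(assign_to_centroids(pixels, centroids))  # [0 1 0]
```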
The large size and multiple bands of today's satellite data require increasingly powerful tools in
order to display and interpret the acquired imagery in a timely fashion. Pixar has developed two major
tools for use in this data interpretation. These tools are the Electronic Light Table (ELT), and an
extensive image processing package, ChapIP. These tools operate on images limited only by disk
volume size, currently 3 Gbytes.
The Electronic Light Table package provides a fully windowed interface to these large 12 bit
monochrome and multiband images, passing images through a software defined image interpretation
pipeline in real time during an interactive roam. A virtual image software framework allows interactive
modification of the visible image. The roam software pipeline consists of a seventh order polynomial
warp, bicubic resampling, a user registration affine, histogram drop sampling, a 5x5 unsharp mask, and
per window contrast controls. It is important to note that these functions are done in software, and
various performance tradeoffs can be made for different applications within a family of hardware
configurations. Special high-speed zoom, rotate, sharpness, and contrast operators provide interactive
region of interest manipulation. Double window operators provide for flicker, fade, shade, and
difference of two parent windows in a chained fashion. Overlay graphics capability is provided in a
PostScript* windowed environment (NeWS**).
The image is stored on disk as a multi-resolution image pyramid. This allows resampling and
other image operations independent of the zoom level. A set of tools layered upon ChapIP allow
manipulation of the entire pyramid file. Arbitrary combinations of bands can be computed for arbitrary
sized images, as well as other image processing operations. ChapIP can also be used in conjunction
with ELT to dynamically operate on the current roaming window to append the image processing
function onto the roam pipeline. Multiple ChapIP operations can be thus chained.
A videotape showing the use of ELT and ChapIP with multispectral data will be presented.
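A multi-resolution pyramid of the kind ELT stores on disk can be sketched by repeated 2x2 block averaging; the resampling details and function name are assumptions of this illustration.

```python
import numpy as np

def build_pyramid(img, min_size=1):
    """Build a multi-resolution image pyramid by repeated 2x2 block
    averaging, so image operations can work independent of zoom level."""
    levels = [img]
    while min(img.shape) > min_size:
        # Trim to even dimensions, then average each 2x2 block.
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        levels.append(img)
    return levels

pyr = build_pyramid(np.ones((8, 8)))
print([lvl.shape for lvl in pyr])  # [(8, 8), (4, 4), (2, 2), (1, 1)]
```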
In this paper we discuss data exploration as a particularly difficult case within the general problem of data
visualization. We describe (1) a novel graphic technique for displaying multidimensional data visually and
(2) an auditory display integrated with the visual display that allows us to represent multidimensional
data in sound. The visual/auditory display employs an "iconographic" technique that seeks to exploit
the spontaneous perceptual capacity to sense and discriminate texture. Structures in data to be analyzed
can appear, both visually and aurally, as distinct textural regions and contours when the data are
represented iconographically. Sound can be used to reinforce the visual presentation or to augment the
dimensionality of the visual display. The immediate focus of the work reported here is to investigate how
best to transform data into perceptible visual and auditory textures, that is, how best to "perceptualize"
the data. A key problem we discuss is deciding which fields of a multidimensional data set should be
represented in the visual domain and which in the auditory domain. This activity is part of the University
of Lowell's Exploratory Visualization (Exvis) project, a multidisciplinary effort to develop new paradigms
for the exploration and analysis of data with high dimensionality.
The simulation of the physics near a black hole has received significant attention in recent
years with the development of the techniques of Numerical Relativity. This work is concerned with
the dynamics of dense, hot relativistically moving fluid near a black hole event horizon as it has a
head-on collision with the black hole. Since the resulting fluid dynamics is quite complex and occurs
in a region of large spacetime curvature, there are several challenges to be overcome in visualizing
some of the physical variables. We describe the salient features of the physics involved and the
techniques used to visualize the fluid properties and present an example of the collision between a
dense concentration of hot fluid and a non-rotating black hole. We also discuss some needs for
effective visualization that must be addressed in the near future.
The Lake Erie Forecasting System is a cooperative project by university, private and governmental institutions to provide
continuous forecasting of three-dimensional structure within the lake. The forecasts will include water velocity
and temperature distributions throughout the body of water, as well as water level and wind-wave distributions at the
lake's surface. Many hydrodynamic features can be extracted from this data, including coastal jets, large-scale thermocline
motion and zones of upwelling and downwelling. A visualization system is being developed that will aid in
understanding these features and their interactions. Because of the wide variety of features, they cannot all be adequately
represented by a single rendering technique. Particle tracing, surface rendering, and volumetric techniques
are all necessary. This visualization effort is aimed towards creating a system that will provide meaningful forecasts
for those using the lake for recreational and commercial purposes. For example, the fishing industry needs to know
about large-scale thermocline motion in order to find the best fishing areas, and power plants need to know water intake
temperatures. The visualization system must convey this information in a manner that is easily understood by
these users. Scientists must also be able to use this system to verify their hydrodynamic simulation. The focus of the
system, therefore, is to provide the information to serve these diverse interests, without overwhelming any single user
with unnecessary data.
In vivo anatomy is now routinely displayed as 2-D and 3-D images obtained from Computed X-ray Tomography (CT),
Magnetic Resonance Imaging (MRI), and other diagnostic modalities. Most current medical visualization methods
rely on pixel intensities to segment the data into tissues. However, structural features must be differentiated by a
human operator, and geometric measurements of the anatomy are tedious and error prone to compute. This paper
describes processing and imaging methods to aid the interpretation of CT studies of the spine. These procedures
incorporate knowledge of the symmetry, shapes, and spatial relationships of vertebrae to locate the spinal cord and
major components of vertebral bone from CT slices of the spine and automatically compute anatomical measurements.
Results of these methods are shown as applied to the cervical (neck) and lumbar (lower back) regions of the spine.
Magnetic Resonance (MR) images of the human heart provide three dimensional geometric information
about the location of cardiac structures throughout the cardiac cycle. Analysis of this four dimensional
data set allows detection of abnormal cardiac function related to the presence of coronary artery disease.
To assist in this analysis, quantitative measurements of cardiac performance are made from the MR data
including ejection fractions, regional wall motion and myocardial wall thickening.
Analysis of cardiac performance provided by quantitative analysis of MR data can be aided by computer
graphics presentation techniques. Two and three dimensional functional images are computed to indicate
regions of abnormality based on the previous methods. The two dimensional images are created using
color graphics overlays on the original MR image to represent performance. Polygon surface modeling
techniques are used to represent data which is three dimensional, such as blood pool volumes. The
surface of these images are color encoded by regional ejection fraction, wall motion or wall thickening.
A functional image sequence is constructed at each phase of the cardiac cycle and displayed as a movie
loop for review by the physician. Selection of a region on the functional image allows visual interpretation
of the original MR images, graphical plots of cardiac function and tabular results. Color encoding is based
on absolute measurements and comparison to standard normal templates of cardiac performance.
Most databases for spherically distributed data are not structured in a manner consistent with their geometry.
As a result, such databases possess undesirable artifacts, including the introduction of "tears" in the data when they
are mapped onto a flat file system. Furthermore, it is difficult to make queries about the topological relationship
among the data components without performing real arithmetic. Therefore, a new representation for spherical data
is introduced called the sphere quadtree, which is based on the recursive subdivision of spherical triangles obtained by
projecting the faces of an icosahedron onto a sphere. Sphere quadtrees allow the representation of data at multiple
levels and arbitrary resolution. For actual data, such a hierarchical data structure provides the ability to correlate
geographic data by providing a consistent reference among data sets of different resolutions or data that are not
geographically registered. Furthermore, efficient search strategies can be easily implemented for the selection of
data to be rendered or analyzed by a specific technique. In addition, sphere quadtrees offer significant potential for
improving the accuracy and efficiency of spherical surface rendering algorithms as well as for spatial data management
and geographic information systems.
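The recursive subdivision underlying the sphere quadtree can be sketched as follows: each spherical triangle splits into four children by projecting edge midpoints back onto the sphere. The representation of triangles as vertex tuples is our illustrative choice, not the paper's quadtree encoding.

```python
import numpy as np

def subdivide(tri, depth):
    """Recursively subdivide a spherical triangle (three unit vectors) into
    four children per level, projecting edge midpoints onto the sphere."""
    if depth == 0:
        return [tri]
    a, b, c = tri
    def mid(u, v):
        m = u + v
        return m / np.linalg.norm(m)   # project edge midpoint onto the sphere
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    children = [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return [t for child in children for t in subdivide(child, depth - 1)]

tri = tuple(np.eye(3))   # one octant triangle of the unit sphere
print(len(subdivide(tri, 2)))  # 4^2 = 16 leaf triangles
```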
Data is the raw material of Visualization in Scientific Computing (ViSC). ViSC seeks to transform calculated or measured
data into graphical forms better suited for interpretation. Data is the lifeblood of any visualization system, and its representation
must support structures appropriate to scientific computing as well as graphical forms. This representation must also permit efficient
transport and storage.
We discuss Flux, a general data format supported by apE, a flexible visualization environment developed at the Ohio
Supercomputer Center. Flux effectively represents grids, volumes, polygonal objects, images, motion paths, particles, and other
structures, in a format which is both portable and compact. The format is also suitable for dynamic transport in a data flow system.
The Flux format has been designed with significant participation from the scientific community, and will be compared with
other data representations currently in use. The implementation of the format on a wide variety of mainframes and workstations
will also be presented, as well as a series of examples from a variety of scientific applications. The Flux format will be shown
to be a flexible solution to the problems of data representation for scientific visualization.
Despite the advancement of visualization techniques for scientific data over the last several years, there are still
significant problems in bringing today's technology into the hands of the typical scientist. For example, there
are other computer science domains outside of computer graphics, like data management, that are required to
make visualization effective. One role of data management can be expressed by the need for a class of data models
that is matched to the structure of scientific data as well as to how such data may be used. Unfortunately, this
critical component of data management is typically missing in most visualization systems.
Traditional methods of handling scientific data, such as flat sequential files, are generally inefficient in storage,
access, or ease of use for large, complex data sets, particularly for applications like visualization. Modern
commercial relational data management systems do not offer an effective solution because they are oriented
toward business applications. The relational model neither accommodates the multidimensional or hierarchical
structures often found in scientific data sets nor provides adequate performance for the size, complexity, and type
of access dictated by such data sets. In contrast, these database management systems have been quite viable
for a large class of non-spatial metadata management.
There is a need for a data (base) model that possesses elements of a modern database management system but
is oriented toward scientific data sets and applications. Such a model must be easy to use, support large disk-based
data sets, and accommodate scientific data structures. The NSSDC's Common Data Format (CDF) is one
implementation of a scientific data model, which provides abstract support for a class of data that can be described
by a multidimensional block structure. Although not all data fit within this framework, a large variety
of scientific data do.
With increasingly complex digital simulations and computations, large volumes of numerical
output are generated, and users must select more effective techniques for handling and displaying
such output in order to extract relevant information. In this study, techniques such
as tracking, steering, 2D/3D color displays and animation are embedded in implicit and explicit
Finite Element codes for solving complex engineering problems. With these techniques
the investigator can more fully utilize the computer time and better understand results of the
long and costly computations. This investigation demonstrates the effectiveness of different
visualization techniques in providing insight into a physical process, easing the burden of debugging,
and reducing the time from the initial design phase to the final product.
This work addresses the problem of managing a huge number of digital color images. It
focuses on storing and retrieving them so that they are available to a wide
range of applications. The key point of the architecture that has been designed and implemented
is the distribution of the devices needed by an image system across several nodes of
a Local Area Network. Optical disks are used to store the large amount of data required.
To efficiently access a high number of images, an image database is provided. All
textual information, such as image descriptions, image characteristics, physical data allocation,
and archive devices, is managed by a relational database placed on a host
system connected to the LAN by a channel attachment. Particular attention has been
paid to designing the user interface: the user interacts with the system through a workstation
(PS/2) without having to be aware of the related processes, even if they involve remote nodes.
The user interface is "window driven". It is oriented to the non-DP professional and does
not require any documentation.
The main characteristics of the architecture are distributed functions, a multi-user environment,
interactivity, modularity, and non-procedurality (the user has only to describe
"what" has to be done, rather than "how" to do it).
This paper presents a general overview of the current research and development in scientific visualization at the University of
Oklahoma (OU) Geosciences Computing Network (OCN) and the Center for the Analysis and Prediction of Storms (CAPS).
The discussion includes a description of hardware components and preliminary results in the use of interactive and noninteractive
data analysis software in both UNIX and VMS environments. New tools and techniques are being developed and
used to study data from severe thunderstorm models, meteorological observing systems and other general fluid dynamics
applications. These techniques are also being used to assist in the development of a new breed of real-time meteorological
modeling, forecasting and information systems. Our systems will utilize visualization throughout the entire operating cycle
including data ingest, prediction, and display and analysis of end products.
There are some "Grand Challenges" of data visualization which scientists in every application domain confront
at one time or another. For example: how to access large volumes of data, how to isolate the "important"
pieces of information, how to map scientific data to primitives suitable for graphical algorithms, how to
achieve the speed of useful interaction, why off-the-shelf software won't do what is needed, how to
make hardcopy, and so on. These are just some of the common problems. This panel session addresses issues like these,
from the point of view of the scientist who uses visualization as a research tool. Each panelist was invited to
submit a short "position paper", which is reproduced here.