A relation network is constructed by discovering relations between objects. Discovering relations is a challenging and
usually time-consuming job. For example, most relations in protein-protein interaction networks have been discovered one
by one empirically. However, if we know that some objects have similar functions, we can infer relationships
between those objects, and these inferences can help avoid false trials in discovering relations. An ontology is a structured
representation of conceptual knowledge. This hierarchical knowledge can be applied to infer relations between
objects. Objects with similar functions share similar ontology terms. Therefore, combining a relation network with an
ontology makes it possible to reflect this kind of knowledge, and we can infer unknown relations.
In this paper, we propose a visualization method in 3D space for examining a specific relation network based on a proper
ontology structure. To gather related ontology terms, we added a degree of freedom to a conventional layered drawing
algorithm so that the position of a term in the ontology tree can move like a mobile. We then combined it with a modified
spring-embedder model to map the relation network onto the ontology tree. We used protein-protein interaction data
from the Ubiquitination Information System as the relation network, and the Gene Ontology as the ontology structure. The proposed
method lays out the protein relation data in 3D space with a meaningful distance measure. Finally, we designed
experiments to verify the relationship between the Euclidean distance between proteins and the existence of an interaction. The
results support that our method provides a means to discover new relations based on visualization.
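The tree-plus-network layout described above rests on a spring-embedder iteration. A minimal sketch of one such iteration in 3D is below; this is a generic Eades-style embedder (edges attract, all pairs repel), not the authors' exact modification, and the constants `k`, `c`, and `step` are illustrative:

```python
import math

def spring_step(pos, edges, k=1.0, c=0.1, step=0.05):
    """One iteration of a basic spring-embedder layout in 3D:
    edges attract their endpoints, all node pairs repel."""
    disp = {v: [0.0, 0.0, 0.0] for v in pos}
    verts = list(pos)
    for i, u in enumerate(verts):                   # pairwise repulsion
        for v in verts[i + 1:]:
            d = [a - b for a, b in zip(pos[u], pos[v])]
            dist = max(math.sqrt(sum(x * x for x in d)), 1e-9)
            f = c / (dist * dist)
            for ax in range(3):
                disp[u][ax] += f * d[ax] / dist
                disp[v][ax] -= f * d[ax] / dist
    for u, v in edges:                              # spring attraction along edges
        d = [a - b for a, b in zip(pos[u], pos[v])]
        dist = max(math.sqrt(sum(x * x for x in d)), 1e-9)
        f = k * math.log(dist)                      # Eades logarithmic spring
        for ax in range(3):
            disp[u][ax] -= f * d[ax] / dist
            disp[v][ax] += f * d[ax] / dist
    return {v: tuple(p + step * d for p, d in zip(pos[v], disp[v]))
            for v in pos}
```

Repeated application of such a step pulls interacting proteins together while the ontology tree (handled separately in the paper) constrains overall placement.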
Rendering a lot of data results in cluttered visualizations. It is difficult for a user to find regions of interest within contextual data, especially when occlusion is considered. We incorporate animation into visualization by adding positional motion and opacity change as a highlighting mechanism. By leveraging our knowledge of motion perception, we can help a user visually filter out her selected data by rendering it with animation. Our framework for adding animation is the animation transfer function, which provides a mapping from data and animation frame index to a changing visual property. The animation transfer function describes animations for user-selected regions of interest. In addition to our framework, we explain the implementation of animations as a modification of the rendering pipeline. The animation rendering pipeline allows us to easily incorporate animations into existing software- and hardware-based volume renderers.
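As a rough sketch of the animation transfer function idea (not the paper's actual implementation), such a function might map a data value and a frame index to an opacity, pulsing the user's selection while leaving context dim; the function name, the pulse shape, and the constants here are all illustrative:

```python
import math

def animation_transfer(value, frame, selected, period=30, base_opacity=0.2):
    """Hypothetical animation transfer function: maps (data value, frame
    index) to an opacity. Selected values pulse between base and full
    opacity over `period` frames; unselected context stays dim."""
    if value not in selected:
        return base_opacity
    phase = 2 * math.pi * (frame % period) / period
    # raised-cosine pulse: base_opacity at phase 0, full opacity at phase pi
    return base_opacity + (1 - base_opacity) * 0.5 * (1 - math.cos(phase))
```

A renderer would evaluate this per sample per frame, so the selected region visibly blinks while context data remains static.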
Our goal is to enable an individual analyst to utilize and benefit from millions of visualization instances created by a community of analysts. A visualization instance is the combination of a specific set of data and a specific configuration of a visualization providing a visual depiction of that data. As the variety and number of visualization techniques and tools continue to increase, and as users increasingly adopt these tools, more visualization instances will be created (today, perhaps only viewed for a moment and thrown away) during the solution of analysis tasks. This paper discusses what fraction of these visualization instances are worth keeping and why, and argues that keeping more (or all) visualization instances has high value and very low cost. Even if only a small fraction is retained, the result over time is still a large number of visualization instances, and the issue remains: how can users utilize them? This paper describes what new functionality users need to utilize all those visualization instances, illustrated by examples using an information workspace tool based on zoomable user interface principles. The paper concludes with a concise set of principles for future analysis tools that utilize spatial organization of large numbers of visualization instances.
Charts and tables are commonly used to visually analyze data. These graphics are simple and easy to understand, but
charts show only highly aggregated data and present only a limited number of data values while tables often show
too many data values. As a consequence, these graphics may either lose or obscure important information, so
different techniques are required to monitor complex datasets. Users need more powerful visualization techniques to
digest and compare detailed multi-attribute data to analyze the health of their business. This paper proposes an
innovative solution based on the use of pixel-matrix displays to represent transaction-level information. With pixel-matrices,
users can visualize areas of importance at a glance, a capability not provided by common charting
techniques. We present our solutions to use colored pixel-matrices in (1) charts for visualizing data patterns and
discovering exceptions, (2) tables for visualizing correlations and finding root-causes, and (3) time series for
visualizing the evolution of long-running transactions. The solutions have been applied with success to product
sales, Internet network performance analysis, and service contract applications demonstrating the benefits of our
method over conventional graphics. The method is especially useful when detailed information is a key part of the analysis.
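A minimal sketch of the pixel-matrix idea follows: each transaction-level value becomes a single pixel, laid out row by row. This is illustrative only; the displays described above use color scales and richer within-chart layouts:

```python
def pixel_matrix(values, width, vmin, vmax):
    """Toy pixel-matrix: each value becomes one pixel, laid out row by
    row; the value is clamped to [vmin, vmax] and mapped to a grayscale
    intensity 0-255 (a real display would use a color scale)."""
    span = (vmax - vmin) or 1
    pixels = [int(255 * (min(max(v, vmin), vmax) - vmin) / span)
              for v in values]
    # pack the flat pixel list into rows of the requested width
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]
```

Because every value keeps its own pixel, outliers and local patterns stay visible where an aggregated bar chart would average them away.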
The water monitoring network in Northern California provides us with an integrated flow and water-quality
dataset of the Sacramento-San Joaquin Delta, the reservoirs, and the two main rivers feeding the Delta, namely
the Sacramento and the San Joaquin rivers. Understanding the dynamics and complex interactions among the
components of this large water supply system, and how they affect water quality and ecological conditions for
fish and wildlife, requires the assimilation of large amounts of data. A multivariate, time-series data visualization
tool that encompasses the various components of the system in a geographical context is the most appropriate
solution to this challenge. We have developed an abstract representation of the water system that uses
various information visualization techniques, such as focus+context techniques, graph representations, 3D glyphs,
and color mapping, to visualize time-series data of multiple parameters.
This paper presents a technique that allows viewers to visually analyze, explore, and compare a storage controller's performance. We present an algorithm that visualizes a storage controller's performance metrics along a traditional 2D grid or a linear space-filling spiral. We use graphical "glyphs" (simple geometric objects) that vary in color, spatial placement, and texture properties to represent the attribute values contained in a data element. When shown together, the glyphs form visual patterns that support exploration, facilitate the discovery of data characteristics and relationships, and highlight trends and exceptions. We identified four important goals for our project: 1. Design a graphical glyph that supports flexibility in its placement, and in its ability to represent multidimensional data elements. 2. Build an effective visualization technique that uses glyphs to represent the results gathered from running different tests on the storage controllers by varying their performance parameters. 3. Build an effective representation to compare the performance of storage controller(s) during different time intervals. 4. Work with domain experts to select the properties of storage controller performance data that are most useful to visualize.
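The space-filling spiral placement mentioned above can be sketched as an Archimedean spiral with roughly equal arc-length spacing between consecutive glyphs. This is a simplified stand-in for the authors' layout; the `spacing` parameter is illustrative:

```python
import math

def spiral_positions(n, spacing=1.0):
    """Place n glyph centers along an Archimedean spiral (r = a * theta)
    so that consecutive glyphs are roughly `spacing` apart."""
    a = spacing / (2 * math.pi)   # radial growth per full turn ~= spacing
    pts, theta = [], 0.0
    for _ in range(n):
        r = a * theta
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        # advance theta so the arc length to the next glyph is ~spacing
        # (arc length element: ds ~= sqrt(r^2 + a^2) dtheta)
        theta += spacing / max(math.hypot(r, a), 1e-9)
    return pts
```

Ordered data elements placed this way keep neighbors in the sequence close on screen, which is what lets the glyph patterns form readable trends.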
Excessive edge density in graphs can cause serious readability issues, which in turn can make the graphs difficult
to understand or even misleading. Recently, we introduced the idea of providing tools that offer interactive edge
bending as a method by which edge congestion can be disambiguated. We extend this direction, presenting a
new tool, Edge Plucking, which offers new interactive methods to clarify node-edge relationships. Edge Plucking
expands the number of situations in which interactive graph exploration tools can be used to address edge congestion.
Exploratory simulation involves the combination of computational steering and visualization at interactive speeds.
This presents a number of challenges for large scientific data sets, such as those from astrophysics. A computational
model is required such that steering the simulation while in progress is both physically valid and scientifically useful. Effective and appropriate visualization and feedback methods are needed to facilitate the discovery
process. Smoothed Particle Hydrodynamics (SPH) techniques are of interest in the area of Computational Fluid
Dynamics (CFD), notably for the simulation of astrophysical phenomena in areas such as star formation and
evolution. This paper discusses the issues involved with creating an exploratory simulation environment for SPH.
We introduce the concepts of painting and simulation trails as a novel solution to the competing concerns of
interactivity and accuracy, and present a prototype of a system that implements these new ideas. This paper
describes work in progress.
Different visual representations of a tree can provide different views
of the same data, leading the viewer to obtain different information,
and gain different knowledge. Any visual representation of a tree,
therefore, may potentially obscure some important aspects of the data,
or sometimes even mislead the user.
We create Tree-Panels, a tree visualization system that provides
four simultaneous visualizations of a tree.
Our user study shows that different tree representations
used in Tree-Panels can uncover different and complementary
information about the data.
Multivariate data sets exist in a wide variety of fields and parallel coordinates visualizations are commonly used for
analysing such data. This paper presents a usability evaluation where we compare three types of parallel coordinates
visualization for exploratory analysis of multivariate data. We use a standard parallel coordinates display with manual
permutation of axes, a standard parallel coordinates display with automatic permutation of axes, and a multi-relational 3D
parallel coordinates display with manual permutation of axes. We investigate whether a 3D layout showing more relations
simultaneously, but distorted by perspective effects, is advantageous when compared with a standard 2D layout. The
evaluation is accomplished by means of an experiment comparing performance differences for a class of task known to
be well-supported by parallel coordinates. Two levels of difficulty of the task are used and both require the user to find
relationships between variables in a multivariate data set. Our results show that for the manual exploration of a complex
interrelated multivariate data set, the user performance with multi-relational 3D parallel coordinates is significantly faster.
In simpler tasks, however, the difference is negligible. The study adds to the body of work examining the utility of 3D
representations and what properties of structure in 3D space can be successfully used in 3D representations of multivariate data.
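For concreteness, the core mapping behind any parallel-coordinates display turns one multivariate record into a polyline, one vertex per axis. The sketch below assumes a simple 2D layout with evenly spaced vertical axes; axis spacing and scaling constants are illustrative:

```python
def polyline(record, mins, maxs, height=100, gap=50):
    """Map one multivariate record to its parallel-coordinates polyline:
    axis i sits at x = i * gap; the value is linearly scaled from
    [mins[i], maxs[i]] onto [0, height]."""
    pts = []
    for i, (v, lo, hi) in enumerate(zip(record, mins, maxs)):
        t = (v - lo) / ((hi - lo) or 1)   # normalize; guard zero-range axes
        pts.append((i * gap, t * height))
    return pts
```

The 3D multi-relational variant evaluated above generalizes this by arranging the axes around a central axis in 3D rather than along a single line.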
Polygonal meshes are used in many application scenarios. Often the generated meshes are too complex, preventing
proper interaction, visualization, or transmission over a network. To tackle this problem, simplification
methods can be used to generate less complex versions of those meshes.
For this purpose many methods have been proposed in the literature and it is of paramount importance that
each new method be compared with its predecessors, thus allowing quality assessment of the solution it provides.
This systematic evaluation of each new method requires tools which provide all the necessary features (ranging
from quality measures to visualization methods) to help users gain greater insight into the data.
This article presents the comparison of two simplification algorithms, NSA and QSlim, using PolyMeCo, a
tool which enhances the way users perform mesh analysis and comparison, by providing an environment where
several visualization options are available and can be used in a coordinated way.
A digital situation table which allows a team of experts to cooperatively analyze a situation has been developed. It is
based on a horizontal work table providing a general overview of the situation. Tablet PCs, referenced precisely to the
scene image by a digital image processing algorithm, display a detailed view of a local area of the image. In this way a
see-through effect providing high local resolution at the position of the tablet PC is established. Additional information
that does not fit the bird's-eye view of the work table can be displayed on a vertical screen. All output devices can be
controlled using tablet PCs, with each team member having their own tablet PC. An interaction paradigm has been developed
allowing each team member to interact with a high degree of freedom and ensuring cooperative teamwork.
ParaView is a popular open-source general-purpose scientific visualization application. One of the many visualization tools
available within ParaView is the volume rendering of unstructured meshes. Volume rendering is a technique that renders
a mesh as a translucent solid, thereby allowing the user to see every point in three-dimensional space simultaneously.
Because volume rendering is computationally intensive, ParaView now employs a unique parallel rendering algorithm to
speed the process. The parallel rendering algorithm is very flexible. It works equally well for both volumes and surfaces,
and can properly render the intersection of a volume and opaque polygonal surfaces. The parallel rendering algorithm
can also render images for tiled displays. In this paper, we explore the implementation of parallel unstructured volume
rendering in ParaView.
In this paper we propose a technique called storage-aware spatial prefetching that can provide significant performance
improvements for out-of-core visualization. This approach is motivated by file chunking in which a
multidimensional data file is reorganized into multidimensional sub-blocks that are stored linearly in the file.
This increases the likelihood that data close in the n-dimensional volume represented by the file will be closer
together in the physical file. Chunking has been demonstrated to improve the typical access to such data, but it
requires a complete re-organization of the file and sometimes efficient access is only achieved if multiple different
chunking organizations are maintained simultaneously. Our approach can be thought of as on-the-fly chunking,
but it does not require physical re-organization of the data or multiple copies with different formats. We also
describe an implementation of our technique and provide some performance results that are very promising.
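The locality effect of chunking can be illustrated with a toy offset calculation: compare a voxel's linear position in a file reorganized into cubic sub-blocks against plain row-major order. This assumes cubic chunks and row-major ordering at both levels, and is not the paper's actual layout code:

```python
def chunked_offset(x, y, z, dims, chunk):
    """Linear offset of voxel (x, y, z) in a file reorganized into
    chunk*chunk*chunk sub-blocks stored contiguously (row-major order
    over chunks, and row-major order inside each chunk)."""
    nx, ny, nz = dims
    # number of chunks along each axis (round up for partial chunks)
    cy = (ny + chunk - 1) // chunk
    cz = (nz + chunk - 1) // chunk
    bi = (x // chunk) * cy * cz + (y // chunk) * cz + (z // chunk)  # which chunk
    li = (x % chunk) * chunk * chunk + (y % chunk) * chunk + (z % chunk)  # inside it
    return bi * chunk ** 3 + li
```

Voxels that are neighbors in 3D land within the same small block of the file, which is exactly the access pattern the on-the-fly prefetcher exploits without physically rewriting the file.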
Proc. SPIE 6495, NeuroVis: combining dimensional stacking and pixelization to visually explore, analyze, and mine multidimensional multivariate data, 64950H (29 January 2007); https://doi.org/10.1117/12.706110
The combination of pixelization and dimensional stacking uniquely facilitates the visualization and analysis of
large, multidimensional databases. Pixelization is the mapping of each data point in some set to a pixel in a 2D
image. Dimensional stacking is a layout method where N dimensions are projected onto the axes of an information
display. We have combined and expanded upon both methods in an application named NeuroVis that supports
interactive, visual data mining. Users can spontaneously perform ad hoc queries, cluster the results through
dimension reordering, and execute analyses on selected pixels. While NeuroVis is not intrinsically restricted to
any particular database, it is named after its original function: the examination of a vast neuroscience database.
Images produced from its approaches have now appeared in the Journal of Neurophysiology and NeuroVis itself
is being used for educational purposes in neuroscience classes at Emory University. In this paper we detail the
theoretical foundations of NeuroVis, the interaction techniques it supports, an informal evaluation of how it has
been used in neuroscience investigations, and a generalization of its utility and limitations in other domains.
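Dimensional stacking can be sketched as a recursive subdivision of the two display axes, with alternate data dimensions assigned to x and y. This is a generic formulation; NeuroVis's exact axis assignment and ordering may differ:

```python
def stack_coords(indices, sizes):
    """Dimensional stacking: project N discrete dimensions onto 2D by
    alternating dimensions between the x and y axes, with earlier
    dimensions subdividing the space coarsely and later ones finely.
    indices[i] is the bin index in dimension i, sizes[i] its bin count."""
    x = y = 0
    for i, (idx, size) in enumerate(zip(indices, sizes)):
        if i % 2 == 0:
            x = x * size + idx    # even-numbered dimensions refine x
        else:
            y = y * size + idx    # odd-numbered dimensions refine y
    return x, y
```

Combining this with pixelization means every data point gets a unique pixel whose 2D position encodes all N of its dimension values at once.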
Massive dataset sizes can make visualization difficult or impossible. One solution to this problem is to divide a
dataset into smaller pieces and then stream these pieces through memory, running algorithms on each piece. This
paper presents a modular data-flow visualization system architecture for culling and prioritized data streaming.
This streaming architecture improves program performance both by discarding pieces of the input dataset that
are not required to complete the visualization, and by prioritizing the ones that are. The system supports a
wide variety of culling and prioritization techniques, including those based on data value, spatial constraints, and
occlusion tests. Prioritization ensures that pieces are processed and displayed progressively based on an estimate
of their contribution to the resulting image. Using prioritized ordering, the architecture presents a progressively
rendered result in a significantly shorter time than a standard visualization architecture. The design is modular,
such that each module in a user-defined data-flow visualization program can cull pieces as well as contribute to
the final processing order of pieces. In addition, the design is extensible, providing an interface for the addition
of user-defined culling and prioritization techniques to new or existing visualization modules.
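The cull-then-prioritize loop at the heart of such an architecture can be sketched with a priority heap. This is illustrative; in the actual design every module in the data-flow program can contribute culls and priorities, which a single generator like this does not capture:

```python
import heapq

def stream_pieces(pieces, cull, priority):
    """Yield dataset pieces in priority order, skipping culled ones.
    cull(p) -> True drops a piece entirely; priority(p) is a
    higher-is-better estimate of its contribution to the final image."""
    # negate priority because heapq is a min-heap; the index i breaks ties
    heap = [(-priority(p), i, p) for i, p in enumerate(pieces)
            if not cull(p)]
    heapq.heapify(heap)
    while heap:
        _, _, p = heapq.heappop(heap)
        yield p
```

Downstream modules consume the generator and render progressively, so the most image-relevant pieces appear on screen first.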
Visualization systems have evolved into live-design environments in which users explore information by constructing
coordinated multiview visualizations rapidly and interactively. Although these systems provide built-in support for well-known
coordinations, they do not allow invention of novel coordinations or customization of existing ones. This paper
presents a categorization of 29 coordination patterns that have proven to be broadly useful for visual data analysis. By
coupling coordination with visual abstraction, it has been possible to realize 27 of these patterns in Improvise, demonstrated
here with five example visualizations.
In many important application domains such as Business and Finance, Process Monitoring, and Security, huge
and quickly increasing volumes of complex data are collected. Strong efforts are underway developing automatic
and interactive analysis tools for mining useful information from these data repositories. Many data analysis
algorithms require an appropriate definition of similarity (or distance) between data instances to allow meaningful
clustering, classification, and retrieval, among other analysis tasks. Projection-based data visualization is highly
interesting (a) for visual discrimination analysis of a data set within a given similarity definition, and (b) for
comparative analysis of similarity characteristics of a given data set represented by different similarity definitions.
We introduce an intuitive and effective novel approach for projection-based similarity visualization for interactive
discrimination analysis, data exploration, and visual evaluation of metric space effectiveness. The approach is
based on the convex hull metaphor for visually aggregating sets of points in projected space, and it can be used
with a variety of different projection techniques. The effectiveness of the approach is demonstrated by application
on two well-known data sets. Statistical evidence supporting the validity of the hull metaphor is presented. We
advocate the hull-based approach over the standard symbol-based approach to projection visualization, as it
allows a more effective perception of similarity relationships and class distribution characteristics.
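The hull metaphor reduces a class of projected points to its convex outline. A standard monotone-chain convex hull (not the authors' code) is enough to compute that outline in 2D projected space:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2D points, returned in
    counter-clockwise order; interior points are dropped, which is what
    lets a hull stand in for the whole projected class."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower boundary
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper boundary
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

Drawing one filled hull per class, rather than thousands of individual symbols, is what makes class overlap and separation legible at a glance.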
This paper describes several treemap techniques used to visualize, among other aspects, the productivity of rule sets in
deriving non-standard spellings in old German texts. The treemap visualization helps in finding typical rule sequences
depending on the localization of the spellings and their epoch. First evaluation results and user comments are presented.
The use of aerial photographs, satellite images, scanned maps and digital elevation models necessitates the
setting up of strategies for the storage and visualization of these data in an interactive way. In order to obtain
a three-dimensional visualization, it is necessary to map the images, called textures, onto the terrain geometry
computed from a Digital Elevation Model (DEM). In practice, all of this information is stored in three different
files: the DEM, the texture, and the geo-localization of the data. In this paper we propose to save all this information in a
single file for the purpose of synchronization. To this end, we have developed a wavelet-based embedding method
for hiding the data in a color image. The texture images containing hidden DEM data can then be sent from
the server to a client in order to effect 3D visualization of terrains. The embedding method can be integrated with
the JPEG2000 coder to accommodate compression and multi-resolution visualization.
Most terrain models are created based on a sampling of real-world terrain, and are represented using linearly-interpolated
surfaces such as triangulated irregular networks or digital elevation models. The existing methods for the creation of
such models and representations of real-world terrain lack a crucial analytical consideration of factors such as the errors
introduced during sampling and geological variations between sample points. We present a volumetric representation of
real-world terrain in which the volume encapsulates both sampling errors and geological variations and dynamically
changes size based on such errors and variations. We define this volume using an octree, and demonstrate that when
used within applications such as line-of-sight, the calculations are guaranteed to be within a user-defined confidence
level of the real-world terrain.
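The guarantee claimed above can be illustrated with a toy conservative line-of-sight test over per-cell elevation intervals. This sketch flattens the paper's octree into a list of (min, max) bounds sampled along the ray, and omits the user-defined confidence level; all names are illustrative:

```python
def line_of_sight_certain(ray_heights, cell_bounds):
    """Conservative line-of-sight over volumetric terrain: each cell
    stores a (lo, hi) elevation interval covering sampling error and
    geological variation. ray_heights[i] is the ray's height over cell i.
    Returns 'visible' only if the ray clears every cell's upper bound,
    'blocked' if it dips below some lower bound, else 'uncertain'."""
    if all(h > hi for h, (lo, hi) in zip(ray_heights, cell_bounds)):
        return "visible"
    if any(h < lo for h, (lo, hi) in zip(ray_heights, cell_bounds)):
        return "blocked"
    return "uncertain"
```

Because the intervals bound the true surface, a "visible" or "blocked" answer is guaranteed correct; only rays passing through an interval remain ambiguous, which is where the confidence level comes into play.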
We present a system dedicated to the visualization and exploration of graphs. It is based on clustering
and visualization through a spring-embedder-type algorithm. A particularity of the program is the control given
to the user during actual simulations. We also introduce heuristics that help increase the processing speed of
the simulation to rapidly yield useful information relative to the topological organization of graphs and the
information they contain.
Traditional Star Coordinates displays a multi-variate data set by mapping it to two Cartesian dimensions. This technique
facilitates cluster discovery and multi-variate analysis, but binding to two dimensions hides features of the data. Three-dimensional
Star Coordinates spreads out data elements to reveal features. This allows the user more intuitive freedom to
explore and process the data sets.
Three-dimensional Star Coordinates is implemented by extending the data structures and transformation facilities of
traditional Star Coordinates. We have given high priority to maintaining the simple, traditional interface. We simultaneously
extend existing features, such as scaling of axes, and add new features, such as system rotation in three dimensions.
These extensions and additions enhance data visualization and cluster discovery.
We use three examples to demonstrate the advantage of three-dimensional Star Coordinates over the traditional system.
First, in an analysis of customer churn data, system rotation in three dimensions gives the user new insight into the data.
Second, in cluster discovery of car data, the additional dimension allows the true shape of the data to be seen more easily.
Third, in a multi-variate analysis of cities, the perception of depth increases the degree to which multi-variate analysis can be performed.
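The underlying Star Coordinates projection is just a sum of per-dimension axis vectors scaled by the record's values, and the 3D extension amounts to using 3D axis vectors. A minimal sketch (not the authors' implementation):

```python
def star_project(record, axes):
    """Star Coordinates projection: dimension i contributes its value
    times axis vector i; summing the contributions gives the point's
    position. With 3D axis vectors this is the three-dimensional variant."""
    return tuple(sum(v * ax[k] for v, ax in zip(record, axes))
                 for k in range(3))
```

Interactive scaling and rotation of the `axes` vectors is what lets the user spread clusters apart and inspect the data's true shape.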
There is presently a variety of methods by which to create visualizations, and many of these require a great deal
of manual intervention. Even with those methods by which it is easy to create a single visual representation,
understanding the range of possible visual representations and exploring amongst them is difficult. We present
a generalized interface, called cogito, that permits the user to control exploration of the visualization output of
various manual tools, all without the requirement to modify the original tool. Programming within the cogito
API is required to connect to each tool, but it is not onerous. We consider that the exploratory experience or
activity is valuable, and that it is possible to easily create this experience for standard tools that do not normally
permit exploration. We illustrate this approach with several examples from different kinds of manual interfaces
and discuss the requirements of each.
We describe a system that enables us to perform exploratory empirical experiments with graph visualization techniques
by incorporating them into games that can be played on an internet site, the Graph Games Website
(http://www.cs.kent.ac.uk/projects/graphmotion/). We present a general discussion of games as a test-bed for empirical
experiments in graph comprehension, and explain why they might, in particular, provide a useful way to do exploratory
experiments. We then discuss the requirements for games that can be used and describe some individual games including
those we have tried on our own site.
The main part of the paper describes our own experiment in setting up a graph-gaming site where we have carried out
tests on the benefits of different graph visualizations. We discuss the design of the site, describe its underlying
architecture and present a promising initial trial on movement in graph visualization that gathered the results from over
70,000 played games. We then present some statistical conclusions that can be drawn from the resulting data. Finally, we
summarize the lessons that have been learned and discuss ideas for future work.
Understanding the usage patterns of various university resources is important when making budget and departmental
allocations. Computer labs are among the most heavily used classrooms on campus. In order to make the best use of them,
IT professionals must know how the variables of platform, seat count, lab location, and departmental association might
influence usage patterns. The client's goals were identified after conducting user studies and developing, and gathering
feedback on, several iterations of visualizations. Key goals in this process include seeing trends over time, producing detailed
usage reports, viewing aggregate data, and detecting outliers. Four visualization techniques, consisting of
geospatial maps, tree maps, radial maps, and spectrum maps, were created to address these goals. It is evident that a
number of different visualization techniques are needed, including static and interactive versions.
In recent years, multi-volume visualization has become an industry standard for analyzing and interpreting large surveys
of seismic data. Advances made in computer hardware and software have moved visualization from large, expensive
visualization centers to the desktop. Two of the greatest factors in achieving this have been the rapid performance
enhancements to computer processing power and increasing memory capacities. In fact, computer and graphics
capabilities have tended to more than double each year. At the same time, the sizes of seismic datasets have grown
dramatically. Geoscientists regularly interpret projects that exceed several gigabytes. They need to interpret prospects
quickly and efficiently and expect their desktop workstations and software applications to be as performant as possible.
Interactive, multi-volume visualization is important to rapid prospect generation.
Consequently, the ability to visualize and interpret multiple seismic and attribute volumes enhances and accelerates
the interpretation process by allowing geoscientists to gain a better understanding of the structural framework, reservoir
characteristics, and subtle details of their data. Therefore, we analyzed seismic volume visualization and defined four
levels of intermixing: data, voxel, pixel, and image intermixing. Then, we designed and implemented a framework to
accomplish these four levels of intermixing. To take advantage of recent advancements in programmable graphics
processing units (GPUs), all levels of intermixing have been moved from the CPU into the GPU, with the exception of
data intermixing. We developed a prototype of this framework to prove our concept. This paper describes the four levels
of intermixing, the framework, and the prototype; it also presents a summary of our results and comments made by geoscientists
and developers who evaluated our endeavor.