When very large datasets are visualized in a thin-wire setting, where the bandwidth is both limited and unreliable, current techniques break down. This paper proposes the responsive visualization system as a solution. We describe the architecture of such a system, and the algorithmic challenges in supporting it. We introduce a natural and scalable visualization interface called the zooming telescope. We apply our techniques to the visualization of the TIGER dataset, a geographic description of the United States. We describe a suite of automatic and semi-automatic tools for processing the TIGER dataset, including the construction of level-of-detail (LOD) hierarchies.
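As a rough illustration of LOD hierarchy construction for polyline map data of the kind found in TIGER, the sketch below builds a few simplification levels with the Douglas-Peucker algorithm and picks a level from a zoom-dependent tolerance. The tolerances, the zoom-to-pixel-size model, and the helper names are illustrative assumptions, not the paper's actual pipeline.

```python
import math

def douglas_peucker(points, tol):
    """Simplify a polyline [(x, y), ...] with the Douglas-Peucker algorithm."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of each interior point to the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__)
    if dists[imax] <= tol:
        return [points[0], points[-1]]
    split = imax + 1
    left = douglas_peucker(points[:split + 1], tol)
    right = douglas_peucker(points[split:], tol)
    return left[:-1] + right

def build_lod_hierarchy(polyline, tolerances):
    """Coarser levels come from larger simplification tolerances."""
    return [douglas_peucker(polyline, t) for t in tolerances]

def select_level(zoom, tolerances, metres_per_pixel_at_zoom_0=1000.0):
    """Pick the coarsest level whose tolerance stays below roughly one pixel."""
    pixel_size = metres_per_pixel_at_zoom_0 / (2 ** zoom)   # assumed zoom model
    for level in reversed(range(len(tolerances))):
        if tolerances[level] <= pixel_size:
            return level
    return 0

# Example: a wiggly road, three LOD levels, level chosen by zoom.
road = [(i, math.sin(i / 3.0) * 5) for i in range(100)]
tols = [0.1, 1.0, 5.0]           # fine -> coarse
levels = build_lod_hierarchy(road, tols)
print([len(l) for l in levels], "vertices per level")
print("use level", select_level(zoom=6, tolerances=tols))
```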
A simple yet useful approach to visualizing a variety of structures from sampled data is the Maximum Intensity Projection (MIP). Higher-valued structures of interest appear in the projection in front of occluding structures. This can make MIP images difficult to interpret due to the loss of depth information. Animating the viewpoint around the data is one key way to resolve such ambiguities. The challenge is that MIP is inherently expensive, so high frame rates are difficult to achieve. Variations of the original MIP algorithm and its classification can further alleviate ambiguities and also provide improved image quality and very different visualizations, but they make the technique even more expensive. In addition, they require much parameter searching and tweaking. As data sizes continue to grow, current methods allow only very limited interaction. We explore a view-dependent approach using concepts from image-based rendering. A novel multi-layered image representation storing scalar information is computed at a view sample and then warped to the user's view. We present algorithms using OpenGL to quickly compute MIP and its variations on commodity off-the-shelf graphics hardware, achieving near-interactive rates.
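As a point of reference for the basic operation, here is a minimal NumPy sketch of an axis-aligned MIP and a simple thresholded variant on a toy volume; the paper's multi-layered image representation and OpenGL warping are not reproduced, and the volume size and threshold are illustrative.

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection along one axis of a scalar volume."""
    return volume.max(axis=axis)

def thresholded_mip(volume, low, axis=0):
    """A simple MIP variation: ignore samples below a classification threshold."""
    masked = np.where(volume >= low, volume, volume.min())
    return masked.max(axis=axis)

# Toy volume: a bright sphere embedded in noise.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.random.rand(64, 64, 64) * 0.2
volume[(z - 32)**2 + (y - 32)**2 + (x - 32)**2 < 10**2] = 1.0

image = mip(volume, axis=0)             # 64x64 projection, loses depth order
image_t = thresholded_mip(volume, 0.5)  # suppresses the noisy background
print(image.shape, image.max(), image_t.mean())
```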
Fast and efficient surface triangulation (mesh generation) of 3D objects, ideally with guaranteed accuracy, has become increasingly important for on-line multimedia applications. Two complementary but conflicting objectives - maximizing the approximation quality and minimizing the storage requirement - must be simultaneously and delicately balanced in order to achieve such an optimal surface triangulation. Such a dual-objective optimization problem generally poses excessive demands on computation. In order to tackle this provably NP-hard and computationally intensive problem, a concurrent agent-based surface triangulation approach has been explored. The performance of such a naturally convergent and optimal solution is quite promising, especially when the surface is approximated both inside-out and outside-in, i.e., a bi-directional approximation approach. However, the bi-directional inside-and-outside approach depends critically on an appropriate initial triangulation. The initial triangulation must contain the inside and outside triangles that the main algorithm splits to converge to a better fit of the surface and merges to reduce space consumption. In this paper, the construction of such an initial triangulation is the main focus, and a corresponding initialization process for 3D surface triangulation is detailed. The convergence and optimality of the bi-directional inside-and-outside approach are also discussed.
Best quadratic simplicial spline approximations can be computed, using quadratic Bernstein-Bezier basis functions, by identifying and bisecting simplicial elements with the largest errors. Our method begins with an initial triangulation of the domain; a best quadratic spline approximation is computed; errors are computed for all simplices; and simplices of maximal error are subdivided. This process is repeated until a user-specified global error tolerance is met. The initial approximations for the unit square and cube are given by two quadratic triangles and five quadratic tetrahedra, respectively. Our more complex triangulation and approximation method, which respects field discontinuities and geometrical features, allows us to approximate data even better. Data is visualized using the hierarchy of increasingly better quadratic approximations generated by this process. Quadratic elements raise a number of visualization problems; one way to render them is to first tessellate each quadratic element into smaller linear elements and then render those. Our results show a significant reduction in the number of simplices required to approximate data sets when using quadratic elements instead of linear elements.
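To make the error-driven refinement loop concrete, the following sketch evaluates a quadratic Bernstein-Bezier polynomial over one triangle in barycentric coordinates and checks whether its maximum deviation from a sample field exceeds a tolerance (which would flag the triangle for bisection). The interpolating coefficients and the test field are illustrative; the paper computes best (least-squares) approximations rather than interpolants.

```python
import numpy as np

def bb_quadratic(coeffs, u, v, w):
    """Evaluate a quadratic Bernstein-Bezier polynomial at barycentric (u, v, w).

    coeffs is keyed by the multi-indices (2,0,0), (0,2,0), (0,0,2), (1,1,0),
    (1,0,1), (0,1,1); the Bernstein weights are 2!/(i! j! k!).
    """
    return (coeffs[(2, 0, 0)] * u * u + coeffs[(0, 2, 0)] * v * v
            + coeffs[(0, 0, 2)] * w * w + 2 * coeffs[(1, 1, 0)] * u * v
            + 2 * coeffs[(1, 0, 1)] * u * w + 2 * coeffs[(0, 1, 1)] * v * w)

def max_error(coeffs, field, vertices, samples=20):
    """Largest |spline - field| over a barycentric sampling of one triangle."""
    worst = 0.0
    for i in range(samples + 1):
        for j in range(samples + 1 - i):
            u, v = i / samples, j / samples
            w = 1.0 - u - v
            p = u * vertices[0] + v * vertices[1] + w * vertices[2]
            worst = max(worst, abs(bb_quadratic(coeffs, u, v, w) - field(p)))
    return worst

# Toy field and one triangle of the domain; interpolating (not best L2) coefficients.
def f(p):
    return np.sin(p[0]) * p[1]

v0, v1, v2 = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
m01, m02, m12 = (v0 + v1) / 2, (v0 + v2) / 2, (v1 + v2) / 2
coeffs = {(2, 0, 0): f(v0), (0, 2, 0): f(v1), (0, 0, 2): f(v2),
          (1, 1, 0): 2 * f(m01) - (f(v0) + f(v1)) / 2,
          (1, 0, 1): 2 * f(m02) - (f(v0) + f(v2)) / 2,
          (0, 1, 1): 2 * f(m12) - (f(v1) + f(v2)) / 2}
tolerance = 1e-2
print("bisect this triangle:", max_error(coeffs, f, np.array([v0, v1, v2])) > tolerance)
```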
Advanced medical imaging technologies have enabled biologists and other researchers in biomedicine, biochemistry and bio-informatics to gain better insight into complex, large-scale data sets. These datasets, which occupy large amounts of space, can no longer be stored on local hard drives. The San Diego Supercomputer Center (SDSC) maintains a large data repository, called the High Performance Storage System (HPSS), where large-scale biomedical data sets can be stored. These data sets must be transmitted over an open or closed network (Internet or Intranet) within a reasonable amount of time to make them accessible in an interactive fashion to researchers all over the world. Our approach deals with extracting, compressing and transmitting these data sets using Haar wavelets over a low- to medium-bandwidth network. The compressed data sets are then transformed and reconstructed into a 3-D volume on the client side using texture mapping in Java3D. The data sets are handled using the Scalable Visualization Toolkits provided by NPACI (National Partnership for Advanced Computational Infrastructure). Sub-volumes of the data sets are extracted to provide a detailed view of a particular region of interest (ROI). The application is being ported to C++ to obtain higher rendering speed and better performance, at the cost of platform independence.
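The following one-dimensional sketch shows the Haar thresholding idea behind the compression step: transform, discard small coefficients, reconstruct. It is a plain NumPy illustration under the assumption of a power-of-two signal length; the system described above applies this to 3-D volumes, transmits the coefficients over the network, and reconstructs them in Java3D.

```python
import numpy as np

def haar_forward(signal, levels):
    """Multi-level orthonormal 1-D Haar transform (length divisible by 2**levels)."""
    x = np.asarray(signal, dtype=float).copy()
    n = x.size
    for _ in range(levels):
        even, odd = x[:n:2], x[1:n:2]
        x[:n // 2], x[n // 2:n] = (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)
        n //= 2
    return x

def haar_inverse(coeffs, levels):
    """Inverse of haar_forward."""
    x = np.asarray(coeffs, dtype=float).copy()
    n = x.size >> levels
    for _ in range(levels):
        a, d = x[:n].copy(), x[n:2 * n].copy()
        x[:2 * n:2], x[1:2 * n:2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        n *= 2
    return x

def compress(signal, levels=5, keep=0.1):
    """Keep only the largest `keep` fraction of coefficients (lossy thresholding)."""
    c = haar_forward(signal, levels)
    cutoff = np.quantile(np.abs(c), 1.0 - keep)
    c[np.abs(c) < cutoff] = 0.0
    return c

data = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.05 * np.random.randn(1024)
sparse = compress(data, levels=5, keep=0.1)
recon = haar_inverse(sparse, levels=5)
print("nonzero coefficients:", np.count_nonzero(sparse),
      "max error:", np.abs(recon - data).max())
```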
We present a semi-automatic technique for segmenting a large cryo-sliced human brain data set that contains 753 high resolution RGB color images. This human brain data set presents a number of unique challenges to segmentation and visualization due to its size (over 7 GB) as well as the fact that each image not only shows the current slice of the brain but also unsliced deeper layers of the brain. These challenges are not present in traditional MRI and CT data sets. We have found that segmenting this data set can be made easier by using the YIQ color model and morphology. We have used a hardware-assisted interactive volume renderer to evaluate our segmentation results.
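A simplified sketch of the YIQ-plus-morphology idea for one slice is given below: convert RGB to YIQ, threshold a chroma channel, and clean the binary mask with morphological operators. The threshold, structuring-element size, and the largest-component heuristic are assumptions for illustration; the actual pipeline for the cryosection data (including handling of the deeper, unsliced tissue) is more involved.

```python
import numpy as np
from scipy import ndimage

# RGB (0..1) to YIQ, the NTSC luma/chroma transform mentioned above.
RGB_TO_YIQ = np.array([[0.299, 0.587, 0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523, 0.312]])

def rgb_to_yiq(rgb):
    """rgb: (H, W, 3) float array in [0, 1] -> (H, W, 3) YIQ array."""
    return rgb @ RGB_TO_YIQ.T

def segment_slice(rgb, i_threshold=0.05, closing_radius=3):
    """Threshold the I (chroma) channel, then clean the mask with morphology."""
    yiq = rgb_to_yiq(rgb)
    mask = yiq[..., 1] > i_threshold                  # reddish tissue vs. background
    structure = np.ones((closing_radius, closing_radius), dtype=bool)
    mask = ndimage.binary_closing(mask, structure=structure)
    mask = ndimage.binary_opening(mask, structure=structure)
    # Keep only the largest connected component (assumed to be the brain).
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

slice_rgb = np.random.rand(256, 256, 3)   # stand-in for one cryosection image
mask = segment_slice(slice_rgb)
print("segmented pixels in this slice:", int(mask.sum()))
```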
We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
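A compact sketch of the first (recursive partitioning) method might look as follows: the attribute of a hyper-rectangle is its ROI voxel count per subject, a two-sample t-test stands in for the statistical test of discriminative power, and boxes are bisected along their longest axis while they remain large enough. The test, thresholds, and toy data are assumptions; the paper additionally feeds the resulting attributes into neural network classifiers.

```python
import numpy as np
from scipy import stats

def roi_counts(volumes, box):
    """Attribute value per subject: number of ROI voxels inside the box."""
    (z0, z1), (y0, y1), (x0, x1) = box
    return np.array([v[z0:z1, y0:y1, x0:x1].sum() for v in volumes])

def discriminative_boxes(volumes, labels, box, alpha=0.05, min_side=4):
    """Keep a box as an attribute if its ROI-count attribute separates the two
    classes; otherwise bisect its longest axis while it is large enough."""
    counts = roi_counts(volumes, box)
    a, b = counts[labels == 0], counts[labels == 1]
    if a.std() > 0 or b.std() > 0:
        _, p = stats.ttest_ind(a, b, equal_var=False)
        if p < alpha:
            return [box]                       # discriminative: stop here
    sides = [hi - lo for lo, hi in box]
    axis = int(np.argmax(sides))
    if sides[axis] < 2 * min_side:
        return []                              # too small to split further
    lo, hi = box[axis]
    mid = (lo + hi) // 2
    left, right = list(box), list(box)
    left[axis], right[axis] = (lo, mid), (mid, hi)
    return (discriminative_boxes(volumes, labels, tuple(left), alpha, min_side)
            + discriminative_boxes(volumes, labels, tuple(right), alpha, min_side))

# Toy data: 20 binary 32x32x32 ROI volumes, class 1 has extra voxels in one octant.
rng = np.random.default_rng(0)
labels = np.array([0] * 10 + [1] * 10)
volumes = [(rng.random((32, 32, 32)) < 0.01).astype(np.uint8) for _ in labels]
for v, lab in zip(volumes, labels):
    if lab == 1:
        v[:16, :16, :16] |= (rng.random((16, 16, 16)) < 0.05).astype(np.uint8)
boxes = discriminative_boxes(volumes, labels, ((0, 32), (0, 32), (0, 32)))
print(len(boxes), "discriminative hyper-rectangles found")
```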
This work attempts to establish an analogy and continuity between visualization and holography. Most methods for processing and analyzing data work in a virtual space with its virtual images and shapes. Scientific visualization extracts the significant information from the data space and presents it as visual shapes suited to visual thinking. Holography makes it possible to perceive these visual shapes as a virtual object in the form of a real 3D copy, i.e., a hologram. We propose to add a light field surrounding the virtual object to the description of its computer model. The object is defined as a set of points emitting coherent light. A scheme for directly calculating and printing the digital hologram of the virtual object is also considered. At the reconstruction stage, the result of applying this scheme to a virtual shape is indistinguishable from that for a real object. This brings us to the concept of Real Virtuality.
The smooth molecular surface is defined as the surface that an external probe sphere touches as it is rolled over the spherical atoms of a molecule. Previous methods for computing the smooth molecular surface assumed that each atom in a molecule has a fixed position, without thermal motion or uncertainty. In reality, the position of an atom in a molecule is fuzzy because of uncertainty in protein structure determination and the thermal energy of the atom. In this paper, we propose a method to compute the smooth molecular surface for fuzzy atoms. A Gaussian distribution is used to model the fuzziness of each atom, and an extended-radius p-probability sphere is computed for each atom at a given confidence level. The extended-radius p-probability sphere of atom i is defined as the smallest sphere containing atom i with probability p. The fuzzy molecular surface is defined as the collection of molecular surfaces constructed from the extended-radius p-probability spheres for each probability p. We have implemented a program for visualizing three-dimensional molecular structures, including the fuzzy molecular surface, using multi-layered transparent surfaces, where each layer corresponds to a different confidence level and its transparency is associated with that confidence level.
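For an isotropic Gaussian positional uncertainty, the extended-radius p-probability sphere has a closed form, since the squared displacement of the centre follows a chi-squared distribution with three degrees of freedom. The sketch below computes it; the van der Waals radius, sigma, and the isotropy assumption are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def extended_radius(vdw_radius, sigma, p):
    """Radius of the smallest sphere that contains the (spherical) atom with
    probability p, assuming an isotropic Gaussian positional uncertainty with
    standard deviation sigma per coordinate: the displacement of the centre
    satisfies ||d||^2 / sigma^2 ~ chi-squared with 3 degrees of freedom."""
    return vdw_radius + sigma * np.sqrt(chi2.ppf(p, df=3))

# One carbon-like atom (vdW radius 1.7 A, positional sigma 0.3 A) at three
# confidence levels, e.g. for three nested transparent surface layers.
for p in (0.50, 0.90, 0.99):
    print(f"p = {p:.2f}: extended radius = {extended_radius(1.7, 0.3, p):.3f} A")
```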
Dynamic virtual worlds potentially can provide a much richer and more enjoyable experience than static ones. To realize such worlds, three approaches are commonly used. The first of these, and still widely applied, involves importing traditional animations from a modeling system such as 3D Studio Max. This approach is therefore limited to predefined animation scripts or combinations/blends thereof. The second approach involves the integration of some specific-purpose simulation code, such as car dynamics, and is thus generally limited to one (class of) application(s). The third approach involves the use of general-purpose physics engines, which promise to enable a range of compelling dynamic virtual worlds and to considerably speed up development. By far the largest market today for real-time simulation is computer games, with revenues exceeding those of the movie industry. Traditionally, the simulation is produced by game developers in-house for specific titles. However, off-the-shelf middleware physics engines are now available for use in games and related domains. In this paper, we report on our experiences of using middleware physics engines to create a virtual world as an interactive experience, and an advanced scenario where artificial life techniques generate controllers for physically modeled characters.
Collaborative tools are developed to support work being undertaken by dispersed teams. As well as carrying audio and video, several initiatives have supported collaborative information-rich tasks by enabling dispersed participants to share their visualization insights and to exercise some distributed control. In previous work, tools for collaborative visualization have been based on dataflow visualization systems, allowing visual programs to be rapidly prototyped and allowing the sharing not only of final results but also of the process of obtaining them. However, there are a number of issues: (1) the need for software policy changes according to different meeting styles; (2) the presence of competing continuous flows, including voice and video of the participants and visualization movie sequences, in addition to bulk data flows; and (3) the dynamics of available resources, which vary between participants, between mobile and office situations, or within a single meeting. This need for adaptation is being studied in the Visual Beans project in the UK. The technologies under study include component technology based on Java and CORBA, the use of continuous media in CORBA components, quality of service (QoS) monitoring, and the use of open bindings.
Edges in grayscale digital imagery are detected by localizing the zero crossings of filtered data. To achieve this objective, truncated time or frequency sampled forms (TSF/FSF) of the Laplacian-of-Gaussian (LOG) filter are employed in the transform domain. Samples of the image are transformed using the discrete symmetric cosine transform (DSCT) prior to adaptive filtering and the isolation of zero crossings. This paper evaluates an adaptive block-wise filtering procedure based on the FSF of the LOG filter which diminishes the edge localization error, emphasizes the signal-to-noise ratio (SNR) around the edge, and extends easily to higher dimensions. Theoretical expressions and applications to digital images are presented.
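A spatial-domain sketch of the LOG zero-crossing idea is shown below using scipy's gaussian_laplace; the DSCT/FSF transform-domain formulation and the adaptive block-wise parameter selection described above are not reproduced, and the sigma and noise-floor values are placeholders.

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(image, sigma=2.0, noise_floor=1e-3):
    """Edge map from the zero crossings of a Laplacian-of-Gaussian response."""
    response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    edges = np.zeros(image.shape, dtype=bool)
    # A zero crossing exists where the response changes sign between neighbours;
    # the amplitude test crudely suppresses crossings caused by noise.
    for axis in (0, 1):
        a = np.take(response, range(response.shape[axis] - 1), axis=axis)
        b = np.take(response, range(1, response.shape[axis]), axis=axis)
        crossing = (a * b < 0) & (np.abs(a - b) > noise_floor)
        idx = [slice(None)] * 2
        idx[axis] = slice(0, -1)
        edges[tuple(idx)] |= crossing
    return edges

# Toy image: a bright square on a dark background.
img = np.zeros((128, 128))
img[40:90, 30:100] = 1.0
print("edge pixels:", int(log_zero_crossings(img).sum()))
```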
We present a fast method for computing simulated Atomic Force Microscope (AFM) image scans (including tip artifacts). The basic insight is that the array of depth values in the depth buffer of a graphics system is analogous to the height field making up an AFM image, and thus the ability of graphics hardware to compute the depth of many pixels in parallel can be used to radically speed up the AFM imaging simulation. We also present a method for reconstructing better approximations to the true shape from AFM images distorted by tip artifacts. These algorithms are implemented using the 3D grayscale mathematical morphology operations of dilation and erosion.
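The morphological relationship used here can be sketched in a few lines: with a symmetric tip, the simulated scan is a grayscale dilation of the surface height field and the reconstruction is the corresponding erosion, which yields an upper bound on the true surface. The CPU sketch below (parabolic tip, toy surface) illustrates the relationship only; it does not reproduce the depth-buffer acceleration.

```python
import numpy as np
from scipy import ndimage

def parabolic_tip(radius_px, curvature=0.05):
    """Height of a (symmetric) paraboloid tip above its apex, as a small array."""
    y, x = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    return curvature * (x * x + y * y).astype(float)

def simulate_afm(surface, tip):
    """Simulated AFM scan: the apex height is max_t [surface(x+t) - tip(t)],
    i.e. a grayscale dilation of the surface with the (negated) tip."""
    return ndimage.grey_dilation(surface, structure=-tip)

def reconstruct(image, tip):
    """Erosion with the same structuring element gives an upper-bound estimate
    of the true surface (the classic morphological tip-deconvolution result)."""
    return ndimage.grey_erosion(image, structure=-tip)

# Toy surface: a narrow bump that the blunt tip will broaden.
surface = np.zeros((128, 128))
surface[60:68, 60:68] = 5.0
tip = parabolic_tip(radius_px=6)
scan = simulate_afm(surface, tip)
estimate = reconstruct(scan, tip)
print("broadening:", int((scan > 0.1).sum()), "px vs", int((surface > 0.1).sum()), "px")
print("estimate >= surface everywhere:", bool(np.all(estimate >= surface - 1e-9)))
```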
We present an efficient algorithm for progressive coding of cutting-plane data extracted from large-scale computational field simulation (CFS) datasets. Since cutting planes are frequently used to examine 3D simulation results, efficient compression of their geometry, topology, and the associated field data is important for good visualization performance, especially when the simulation is running on a geographically remote server or the simulation results are stored in a remote repository. Progressive coding is ideal for exploratory visualization, since the data can be presented naturally starting with a coarse view and progressing down to full detail. In our algorithm, each cutting plane is reduced at the server to a set of triangle strips containing contour lines. On the local visualization machine (the client), the original surface is reconstructed by triangulating the space between the triangle strips. The more contour lines used, the higher the reconstruction accuracy obtained. The method can quickly show an area of interest without modifying that section of the original triangle mesh. In generating the data to be sent to the client, the algorithm can smoothly trade off computation against the accuracy of the representation by altering the cutting-plane generation procedure or by adjusting the accuracy of the data.
In this paper, a lossless compression scheme is presented for Finite Element Analysis (FEA) data. In this algorithm, all FEA cells are assumed to be tetrahedra, so a cell has at most four neighboring cells. Our algorithm starts by computing the indices of the four adjacent cells for each cell. The adjacency graph is formed by representing each cell by a vertex and drawing an edge between two cells if they are adjacent. The adjacency graph is traversed using a depth-first search, and the mesh is split into tetrahedral strips. In a tetrahedral strip, every two consecutive cells share a face, so only one vertex index has to be specified to define a tetrahedron. Therefore the memory space required for storing the mesh is reduced. The tetrahedral strips are encoded using four types of instructions and converted into a sequence of bytes. Unlike most 3D geometry compression algorithms, vertex indices are not changed in our scheme, so no rearrangement of vertex indices is required.
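A sketch of the adjacency and strip-building steps is given below: faces are hashed by their sorted vertex triples to find the (at most four) neighbours of each tetrahedron, and a depth-first traversal grows strips in which consecutive cells share a face. The four-instruction byte encoding of the strips is omitted, and the function names are illustrative.

```python
from collections import defaultdict

def tet_adjacency(cells):
    """For each tetrahedron (4 vertex indices), find the cells sharing each face."""
    face_to_cells = defaultdict(list)
    for ci, cell in enumerate(cells):
        for skip in range(4):
            face = tuple(sorted(v for i, v in enumerate(cell) if i != skip))
            face_to_cells[face].append(ci)
    neighbours = defaultdict(set)
    for owners in face_to_cells.values():
        if len(owners) == 2:                  # interior face shared by two cells
            a, b = owners
            neighbours[a].add(b)
            neighbours[b].add(a)
    return neighbours

def tetrahedral_strips(cells):
    """Depth-first traversal of the adjacency graph; consecutive cells in a strip
    share a face, so after the first cell only one new vertex per cell would need
    to be encoded."""
    neighbours = tet_adjacency(cells)
    visited, strips = set(), []
    pending = list(range(len(cells)))         # cells that may still start a strip
    while pending:
        c = pending.pop()
        if c in visited:
            continue
        strip = [c]
        visited.add(c)
        while True:
            nxt = next((n for n in neighbours[c] if n not in visited), None)
            if nxt is None:
                break
            # Other unvisited neighbours may start later strips.
            pending.extend(n for n in neighbours[c] if n not in visited and n != nxt)
            visited.add(nxt)
            strip.append(nxt)
            c = nxt
        strips.append(strip)
    return strips

# Two tetrahedra sharing the face (1, 2, 3) end up in one strip.
cells = [(0, 1, 2, 3), (1, 2, 3, 4)]
print(tetrahedral_strips(cells))
```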
Although texture-based methods provide a very promising way to visualize 3D vector fields, the resulting dense 3D texture hinders the visualization of the 3D volume. In this paper, we introduce the concept of a 3D significance map and describe how significance values are derived from the intrinsic properties of a vector field. Based on the 3D significance map, we can control transfer functions for comprehensible LIC volume rendering by highlighting significant regions and neglecting insignificant information. We also present a 3D streamline illumination model that can reveal the flow direction embedded in a solid LIC texture. Experimental results illustrate the feasibility of our method.
Volume rendering takes a long time to generate a high-quality image. Even though there are many sampling points in a volume space, many of them do not contribute to the final light intensity of the pixels. This paper proposes an approach that reduces the load of the sampling process at the operation level, and shows implementation results with volume data. The results show that the approach is 1.33 to 1.78 times faster than the traditional approach. The approach is characterized by reducing the time complexity of the shading calculation and shading composition.
We implemented a hybrid immersive visualization system for a five dimensional (5D) coupled bottom boundary layer-sedimentation model. This model predicts sediment resuspension, transport, and resulting distributions for shallow water regions on continental shelves. One variable of interest, suspended sediment concentration (SSC), is 5D and varies by longitude, latitude, depth, time, and grain size. At each grid point there are twenty values for SSC, representing grain sizes ranging from 2.36 to 3306 micrometers . Currently the most common methods of analyzing the SSC distribution are only 2D, e.g., point profiles, cross-sections, map views at equal water depths, and time series. Traditional methods require multiple sets of plots that are analyzed manually. Good 3D methods are needed that will allow researchers to investigate the complex relationships between variables and see the underlying physical processes more comprehensively, especially within the wave boundary layer close to the ocean bottom. This paper presents the work in progress on the motivation, requirements, and overall design of the visualization system, along with the latest efforts to incorporate volume visualization as an effective means of understanding the SSC variable. The system is optimized for deployment in a CAVE. We also describe the extension of this system to other problem domains.
A stable frame rate is important for interactive rendering systems. Image-based modeling and rendering (IBMR) techniques, which model parts of the scene with image sprites, are a promising technique for interactive systems because they allow the sprite to be manipulated instead of the underlying scene geometry. However, with IBMR techniques a frequent problem is an unstable frame rate, because generating an image sprite (with 3D rendering) is time-consuming relative to manipulating the sprite (with 2D image resampling). This paper describes one solution to this problem, by distributing an IBMR technique into a collection of cooperating threads and executable programs across two computers. The particular IBMR technique distributed here is the LOD-Sprite algorithm. This technique uses a multiple level-of-detail (LOD) scene representation. It first renders a keyframe from a high-LOD representation, and then caches the frame as an image sprite. It renders subsequent spriteframes by texture-mapping the cached image sprite into a lower-LOD representation. We describe a distributed architecture and implementation of LOD-Sprite, in the context of terrain rendering, which takes advantage of graphics hardware. We present timing results which indicate we have achieved a stable frame rate. In addition to LOD-Sprite, our distribution method holds promise for other IBMR techniques.
Time-varying simulations are common in many scientific domains to study the evolution of phenomena or features. The data produced in these simulations is massive. Instead of just one dataset of 512³ or 1024³ (for regular gridded simulations), there may now be hundreds to thousands of timesteps. For datasets with evolving features, feature analysis and visualization tools are crucial to help interpret all the information. For example, it is usually important to know how many regions are evolving, what their lifetimes are, whether they merge with others, how the volume/mass changes, etc. Therefore, feature-based approaches such as feature tracking and feature quantification are needed to follow identified regions over time. In our previous work, we developed a methodology for analyzing time-varying datasets which tracks 3D amorphous features as they evolve in time. However, that implementation is for single-processor, non-adaptive grids; for massive multiresolution datasets the approach needs to be distributed and enhanced. In this paper, we describe extensions to our feature extraction and tracking methodology for distributed AMR simulations. Two different paradigms are described, a fully distributed and a partial-merge strategy. The benefits and implementations of both are discussed.
A framework for parallel visualization at Pacific Northwest National Laboratory (PNNL) is being developed that utilizes the IBM Scaleable Graphics Engine (SGE) and IBM SP parallel computers. Parallel visualization resources are discussed, including display technologies, data handling, rendering, and interactivity. Several of these resources have been developed, while others are under development. These framework resources will be utilized by programmers in custom parallel visualization applications.
As the World Wide Web (WWW) continues to grow at an enormous rate, finding useful information on the WWW is becoming more and more tedious and time consuming. This paper introduces a new visualization technique called TugOfWar for the visualization of search results on the WWW. TugOfWar provides a simple, two-dimensional (2D), interactive graphical user interface to help users comprehend and filter such results. The TugOfWar interface visually displays the relationship between the query keywords and the displayed documents (inter-document relationships) and the relevance of the query keywords to each individual document (intra-document relationships). Operations on the interface such as zooming in/out, adding/removing keywords, or displaying details of a document increase the power of the visualization. A large number of documents may be displayed on one screen, which allows users to judge document relevance more quickly and accurately than by linearly flipping through result pages. The TugOfWar technique is intended to provide a global overview of the search results and may be combined with other visualization techniques to provide more details about the contents of a specific document. Several examples are used to illustrate the advantages of the TugOfWar approach.
As more and more information is available on the Internet, search engines and bookmark tools become very popular. However, most search tools are based on character-level matching without any semantic analysis, and users have to manually organize their bookmarks or favorite collections without any convenient tool to help them identify the subjects of the Web pages. In this paper, we introduce an interactive tool that automatically analyzes, categorizes, and visualizes the semantic relationships of web pages in personal bookmark or favorites collections based on their semantic similarity. Sophisticated data analysis methods are applied to retrieve and analyze the full text of the Web pages. The Web pages are clustered hierarchically based on their semantic similarities. A utility measure is recursively applied to determine the best partitions that are visualized by what we call the Semantic Treemap. Various interaction methods such as scrolling, zooming, expanding, selecting, searching, filtering etc. are provided to facilitate viewing and querying for information. Furthermore, the hierarchical organization as well as the semantic similarities among Web pages can be exported and visualized in a collaborative 3D environment, allowing a group of people to compare and share each other's bookmarks.
Network managers and system administrators have an enormous task set before them in this day of growing network usage. This is particularly true of e-commerce companies and others dependent on a computer network for their livelihood. Network managers and system administrators must monitor activity for intrusions and misuse while at the same time monitoring performance of the network. In this paper, we describe our visualization techniques for assisting in the monitoring of networks for both of these tasks. The goal of these visualization techniques is to integrate the visual representation of both network performance/usage as well as data relevant to intrusion detection. The main difficulties arise from the difference in the intrinsic data and layout needs of each of these tasks. Glyph based techniques are additionally used to indicate the representative values of the necessary data parameters over time. Additionally, our techniques are geared towards providing an environment that can be used continuously for constant real-time monitoring of the network environment.
Closed streamlines are an integral part of vector field topology, since they behave like sources or sinks, but they are often neither considered nor detected. If a streamline computation takes too many steps or too long, it is usually terminated without any answer about the final behavior of the streamline. We developed an algorithm that detects closed streamlines during the integration process. Since the detection of all closed streamlines in a vector field requires the computation of many streamlines, we extend this algorithm to a parallel version to enhance computational speed. To test our implementation we use a numerical simulation of a swirling jet with an inflow into a steady medium. We built two different Linux clusters as parallel test systems, on which we measure the performance increase when adding more processors to the cluster. We show that we have very low parallel overhead due to the negligible communication cost of our implementation.
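A minimal sketch of detecting a closed streamline during integration is shown below for an analytic 2-D field with a closed orbit: the integrator reports success once the streamline has left a neighbourhood of its seed and later returns to it, rather than simply running out of steps. The field, step size, and tolerances are illustrative; the paper's actual detection criterion and its parallelisation are not reproduced.

```python
import numpy as np

def velocity(p):
    """Analytic 2-D test field with closed orbits around the origin (a pure
    rotation; the simplest field on which the detection can be demonstrated)."""
    x, y = p
    return np.array([-y, x])

def rk4_step(p, h):
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * h * k1)
    k3 = velocity(p + 0.5 * h * k2)
    k4 = velocity(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def detect_closed_streamline(seed, h=0.01, max_steps=10_000, eps=0.05):
    """Integrate a streamline and report whether it returns to (a small
    neighbourhood of) its seed point after having left it; eps must exceed
    the spacing between integration samples for the return test to fire."""
    p = np.array(seed, dtype=float)
    start = p.copy()
    left_neighbourhood = False
    for step in range(max_steps):
        p = rk4_step(p, h)
        d = np.linalg.norm(p - start)
        if d > 10 * eps:
            left_neighbourhood = True
        elif left_neighbourhood and d < eps:
            return True, step            # closed streamline detected
    return False, max_steps              # undecided within the step budget

print(detect_closed_streamline(seed=(1.0, 0.0)))
```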
An approach to modeling and visualization of non-Newtonian flows that is based on lattice-Boltzmann techniques is described. Important advantages over traditional, finite-element methods include the speed and simplicity of the update and the reduced storage, which is linear in the number of nodes. Because the quantities of interest in the solution depend critically upon directional flow densities, a technique for their direct display is suggested. The HSV (hue, saturation, value) color model is a hexcone whose intersection with any plane of constant V>0 yields a hexagon that matches the isotropic flow grid. Thus, a natural representation emerges. An auxiliary display is provided in which the HSV cone is shown in partial transparency along with the magnitudes of the contributing densities for the pixel currently identified by the cursor. A variation on LIC is suggested to enhance field visualization.
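The hexagon-to-hue idea can be sketched for a single lattice node: six directional densities (60 degrees apart) are combined into a resultant momentum whose direction selects the hue and whose magnitude and total density select saturation and value. The D2Q6-style directions and the normalisation below are assumptions for illustration.

```python
import colorsys
import math

# Six lattice directions of a hexagonal (D2Q6-like) flow grid, 60 degrees apart.
DIRECTIONS = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]

def densities_to_rgb(densities, max_speed=1.0):
    """Map the six directional flow densities at one node to an HSV colour:
    the direction of the resultant momentum picks the hue on the hexagon,
    its magnitude picks the saturation, and the total density picks the value."""
    px = sum(f * d[0] for f, d in zip(densities, DIRECTIONS))
    py = sum(f * d[1] for f, d in zip(densities, DIRECTIONS))
    total = sum(densities)
    hue = (math.atan2(py, px) / (2 * math.pi)) % 1.0
    sat = min(1.0, math.hypot(px, py) / (max_speed * max(total, 1e-12)))
    val = min(1.0, total)
    return colorsys.hsv_to_rgb(hue, sat, val)

# A node whose flow is mostly along direction 0 (to the right): a reddish pixel.
print(densities_to_rgb([0.4, 0.1, 0.05, 0.05, 0.05, 0.1]))
```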
We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscapes). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method by generating a statistical model of Manhattan, but the method is generally applicable to the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc.) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.
Real-world data distributions are seldom uniform. Clutter and sparsity commonly occur in visualization. Often, clutter results in overplotting, in which certain data items are not visible because other data items occlude them. Sparsity results in inefficient use of the available display space. Common mechanisms to overcome this include reducing the amount of information displayed or using multiple representations with varying amounts of detail. This paper describes our experiments on Non-Linear Visual Space Transformations (NLVST). NLVST encompasses several innovative techniques: (1) employing a histogram to calculate the density of the data distribution; (2) mapping the raw data values to a non-linear scale that stretches high-density areas; (3) tightening sparse areas to save display space; (4) employing different color ranges of values on a non-linear scale according to the local density. We have applied NLVST to several web applications: market basket analysis, transaction observation, and IT search behavior analysis.
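Technique (2) can be illustrated with a histogram-equalisation-style mapping: values are sent through the empirical cumulative density so that dense value ranges receive more of the axis and sparse ranges less. The sketch below uses this as a stand-in for the NLVST mapping; the bin count and the toy data are arbitrary.

```python
import numpy as np

def nonlinear_scale(values, bins=64):
    """Map raw values to [0, 1] through the empirical density: dense value ranges
    are stretched and sparse ranges are tightened."""
    values = np.asarray(values, dtype=float)
    counts, edges = np.histogram(values, bins=bins)
    cdf = np.concatenate(([0.0], np.cumsum(counts) / counts.sum()))
    return np.interp(values, edges, cdf)

# Skewed data: most transactions are small, a few are huge.  A linear axis wastes
# almost all of the display on the empty high end; the non-linear scale does not.
data = np.random.exponential(scale=10.0, size=10_000)
linear = (data - data.min()) / (data.max() - data.min())
stretched = nonlinear_scale(data)
print("fraction of the axis used by the smallest 90% of the data:")
print("  linear    :", round(np.quantile(linear, 0.9), 3))
print("  non-linear:", round(np.quantile(stretched, 0.9), 3))
```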
There are many well-used techniques in exploratory visualization that select, filter or highlight particular aspects of the visualization to gain a better understanding of the structure and makeup of the underlying information. Indeed, distortion techniques have been developed that deform and move different spatial elements of the representation, allowing the user to view and investigate internal aspects of the visualization. But such distortion may cause the user to misunderstand the spatial structure and context of the surrounding information, and it works best when the user already knows what feature they are looking for. We believe that regular separation techniques, which separate and generate space around features or objects of interest to clarify the visual representation, are underused, and that their use should be encouraged. We describe related research and literature, present some new methods, and classify the realizations by what type of separation is used and what information is being separated.
With the shift from production to information society, a parallel development has taken place in processing geo information. Today, the focus is often more on intelligent and complex use and analysis of existing data than on data acquisition. The tasks of users now are to find appropriate data, as well as appropriate analysis or mining methods, for their specific exploration goals. This paper first presents an integrated approach that uses metadata technology to guide users through data and method selection. Important prerequisites in the decision process are the user's correct understanding of geodata qualities and, to this end, the availability of metadata. Therefore, the core of the presented approach is then described in detail, i.e. metadata visualization and generation. The visualization part aims to make the user aware of the goal-related geodata qualities. It consists of an automated semantic level-of-detail method, using abstraction hierarchies and linked visualization functions. The underlying metadata is provided via a repository-based generator, which creates descriptive metadata by analysis and interpretation of the original geodata. Finally, an outlook over the next steps in automated support for geodata mining is given.
Traditional interference detection for visualization has taken a virtual-virtual approach, that is, both the intersector and the intersectee are virtual geometries. But, we have learned that there are advantages in combining both physical models and virtual models in the same space. The physical model has many properties that are difficult to mimic in an all-virtual environment. A realistic interaction is achieved by casting the physical model as a twin to the virtual model. The virtual twin has the ability to interact with other virtual models in software. The two combined into a single system allow for a more effective haptic visualization environment than virtual interaction alone.
The field of information visualization is in permanent expansion and new and innovative ways of visualizing large volumes of abstract data are being developed. The use of virtual metaphoric worlds is one of them, but these visualizations per se are only truly useful if the user is provided a means of exploring the information. A common way of data exploration is navigation. In the case of three-dimensional (3D) information visualization, navigation as a means of information exploration attains even more importance due to the extra exploitable dimension. Nonetheless, navigation in large virtual worlds is still a difficult task and not only for naive users; there is anecdotal evidence that electronic navigation is considered difficult even by the virtual worlds builders. Wayfinding, knowing where to go, is sometimes perceived as the hardest part; other times, it is the locomotion, getting there, that is found difficult. This paper presents a navigation strategy that attempts to solve these problems by combining physical/metaphoric navigation with semantic navigation. We present a framework for navigating large virtual worlds that relies heavily on the use of visual metaphors. The combination of physical and semantic navigation embedded in the metaphor components allows for a powerful data exploration and electronic navigation mechanism.
Applications in scientific visualization often involve extensive data sets, which represent scalar- or vector-valued functions resulting from experimental measurements. Ray Casting (RC) is a well-known approach for visualizing multi-dimensional volume data. It casts a large number of rays from the viewer into the volume and then computes the progressive attenuation along each ray in the volume. The ray-casting algorithm is known to require extensive computation time. In this study, we propose two techniques to speed up the ray-casting process. First, we derive an estimation of the ray step sizes during progressive re-sampling without invoking viewing transformation operations. Hence lengthy computations of trigonometric functions and matrix multiplications can be avoided. Furthermore, by exploiting the ray coherence property, we propose a scan-line algorithm so that ray/volume intersections can be found efficiently. The proposed algorithm is implemented and compared with the conventional RC algorithm. Experimental results show that the proposed algorithm performs three to four times faster than the conventional RC algorithm.
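For contrast, a baseline ray caster is easy to sketch: a slab-test ray/box intersection followed by fixed-step front-to-back compositing with early ray termination. The code below is only this baseline (with assumed step size and opacity scaling); it does not implement the paper's step-size estimation or scan-line coherence optimizations.

```python
import numpy as np

def ray_box_entry_exit(origin, direction, lo, hi):
    """Slab-method intersection of a ray with the volume's bounding box."""
    inv = 1.0 / np.where(direction == 0, 1e-12, direction)
    t0 = (lo - origin) * inv
    t1 = (hi - origin) * inv
    tmin = np.minimum(t0, t1).max()
    tmax = np.maximum(t0, t1).min()
    return (tmin, tmax) if tmax > max(tmin, 0.0) else None

def cast_ray(volume, origin, direction, step=0.5, opacity_scale=0.1):
    """Front-to-back compositing along one ray with early termination."""
    lo = np.zeros(3)
    hi = np.array(volume.shape, dtype=float) - 1.0
    hit = ray_box_entry_exit(origin, direction, lo, hi)
    if hit is None:
        return 0.0
    tmin, tmax = hit
    colour, transparency = 0.0, 1.0
    t = max(tmin, 0.0)
    while t <= tmax and transparency > 0.01:             # early ray termination
        p = origin + t * direction
        sample = volume[tuple(np.round(p).astype(int))]  # nearest-neighbour sample
        alpha = min(1.0, sample * opacity_scale)
        colour += transparency * alpha * sample
        transparency *= 1.0 - alpha
        t += step
    return colour

volume = np.random.rand(64, 64, 64)
direction = np.array([1.0, 0.3, 0.2])
direction /= np.linalg.norm(direction)
print(cast_ray(volume, origin=np.array([-5.0, 10.0, 10.0]), direction=direction))
```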
In this paper, we present a neuro-medical imaging system called the Brain Slicer, which allows neuroscientists to construct a three-dimensional digital brain atlas from an array of high-resolution parallel section images and obtain arbitrary oblique section images from the digital atlas. This application is based on a new data structure, the Scalable Hyper-Space File (SHSF). The SHSF is a generalized data structure that can represent a hyperspace of any dimension. The two-dimensional SHSF is a scalable linear quadtree and the three-dimensional SHSF is a scalable linear octree. Unlike the normal linear quadtree and octree, the data structure uses a scalable linear coding scheme. It recursively uses fixed-length linear code to encode the hyperspace, which is efficient in terms of storage space and accessing speed. The structure lends itself well to pipelined parallel operations in constructing the volumetric data set, so that it enjoys excellent performance even though the huge data set imposes heavy disk I/O requirements. The data structure can provide different levels of detail; therefore it can be used in an environment where the bandwidth and computation power is limited, such as the Internet and slow desktop computers. We envision that this methodology can be used in many areas other than neuro-medical imaging.
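The linear-coding idea can be illustrated with the standard Morton-style interleaving, in which each level of the octree contributes one fixed-length (3-bit) digit and a prefix of the code addresses the enclosing coarser cell. This is only a stand-in for the SHSF's own scalable coding scheme, not its actual implementation.

```python
def interleave_bits(x, y, z, depth):
    """Fixed-length linear (Morton-style) code for the octree cell containing the
    voxel (x, y, z) at the given subdivision depth: one octant digit (3 bits)
    per level, most significant level first."""
    code = 0
    for level in reversed(range(depth)):
        octant = (((x >> level) & 1) << 2) | (((y >> level) & 1) << 1) | ((z >> level) & 1)
        code = (code << 3) | octant
    return code

def deinterleave_bits(code, depth):
    """Inverse of interleave_bits."""
    x = y = z = 0
    for level in range(depth):
        octant = (code >> (3 * level)) & 0b111
        x |= ((octant >> 2) & 1) << level
        y |= ((octant >> 1) & 1) << level
        z |= (octant & 1) << level
    return x, y, z

# An 8-level code addresses a 256^3 volume; a prefix of the code addresses the
# enclosing coarser cell, which is what makes level-of-detail access cheap.
code = interleave_bits(200, 13, 97, depth=8)
print(code, deinterleave_bits(code, depth=8))          # -> (..., (200, 13, 97))
coarse = code >> 3 * 5                                 # keep the 3 coarsest levels
print("coarse cell digits:", oct(coarse))
```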
Thesauri, such as Roget's Thesaurus, show the semantic relationships among terms and concepts. Understanding these relationships can lead to a greater understanding of linguistic structure and could be applied to creating more efficient natural-language recognition and processing programs. A general assumption is that focus-and-context displays of hyperbolic trees accelerate browsing compared with conventional trees. It is believed that allowing the user to visually browse the thesaurus will be more effective than keyword searching, especially when the terms in the thesaurus are not known in advance. The visualization can also potentially provide insight into the semantic structure of terms. The novelty of this visualization lies in its implementation of various direct-manipulation functions, its tightly coupled windows, and how data is read into the visualization. The direct-manipulation functions allow the user to customize the appearance of the tree, to view the density of terms associated with particular concepts, and to view the thesaurus entries associated with each term. Input data is in an XML file format; XML's extensibility and ability to model complex hierarchies made it a convenient choice. The object-oriented design of the code allows virtually any hierarchical data to be displayed, provided it is in the XML format.
Natural gas from a well contains water and hydrocarbons. It is necessary to separate the liquid components from such gas streams before use. An innovative type of separation facility, called Twister, has been developed for this purpose, and CFD models have been developed to assist in the design of Twister. However, it is difficult to verify the mathematical models directly and experimentally. To investigate the behavior of Twister and to verify the CFD models, a simulator using air and water vapor was set up in the laboratory. This simulator was instrumented with a highly sensitive electrical capacitance tomography (ECT) system based on an HP LCR meter and a purpose-designed multiplexer. Two ECT sensors, each with 8 measurement electrodes, were built taking into consideration the demanding operational conditions, such as sensitivity, temperature, pressure, geometry and location. This paper presents the first experimental results, showing that water droplet distributions in a flowing gas can be visualized using ECT, and that the tomography system developed is robust and offers the possibility of further development toward field operations.
One of the well-known relational bibliometric methods is co-word analysis. The co-occurrence of words can be represented in a matrix. By means of various mathematical methods, such as cluster analysis, the matrix can be rendered as a two-dimensional science map that provides a well-structured presentation of the information. The key question of this paper is how different objects, e.g., keywords and authors or institutions, can be linked by co-word analysis. In bibliographic records, keywords, authors and institutions are elements of each document; we can therefore speak of a direct linkage between these objects when they jointly occur in one document. The property that an author appears with certain keywords in certain articles allows a linkage over the documents. An indirect linkage is obtained if the documents, which give rise to the connection between authors and keywords, are eliminated from the relationship and new networks are generated that link authors indirectly on the basis of keywords. This paper presents a method for linking different networks directly and indirectly using co-word analysis. The combination of content maps by co-word analysis is demonstrated using a technology-monitoring study concerning fuel cells.
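The direct and indirect linkages described above reduce to simple incidence-matrix products, as the toy sketch below shows: keyword co-occurrence is K^T K, author-keyword linkage is A^T K, and an indirect author-author network is obtained from the author-keyword profiles with the documents eliminated. The toy documents are invented purely for illustration.

```python
import numpy as np

documents = [
    {"authors": ["Smith"],          "keywords": ["fuel cell", "membrane"]},
    {"authors": ["Smith", "Lee"],   "keywords": ["fuel cell", "catalyst"]},
    {"authors": ["Lee"],            "keywords": ["catalyst", "membrane"]},
]
keywords = sorted({k for d in documents for k in d["keywords"]})
authors = sorted({a for d in documents for a in d["authors"]})

# Document x keyword and document x author incidence matrices.
K = np.array([[k in d["keywords"] for k in keywords] for d in documents], dtype=int)
A = np.array([[a in d["authors"] for a in authors] for d in documents], dtype=int)

# Direct linkage: co-occurrence within the same document.
coword = K.T @ K          # keyword x keyword co-occurrence matrix
author_keyword = A.T @ K  # author x keyword linkage

# Indirect linkage: authors connected through shared keyword profiles,
# with the documents themselves eliminated from the relationship.
author_author = author_keyword @ author_keyword.T

print("co-word matrix:\n", coword)
print("indirect author-author linkage:\n", author_author)
```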
Communication of regional geographic information to the population as a whole should be a municipal priority, but sadly it is not. From traffic patterns to weather information to emergency information to proposed highways, a city or county has, in electronic form, all of this useful information and more. With the ubiquity of web browsers and the arrival of online 3D graphics technologies such as VRML and Java 3D, this information could and should be made available. By using Java and Java3D, the rendering power of an OpenGL-type application can be combined with multithreading, allowing a program to invisibly access data sets from Internet sites with dedicated threads while processing user interaction with another. Any type of relevant data can be transformed into a three-dimensional interpretation and mapped over the terrain that the user is analyzing. This prototype is designed to be extremely extensible and expandable in order to accommodate future revisions and/or portability. This paper discusses the issues surrounding the creation of such a model, along with challenges, problems, and solutions.
A number of different resources and a body of new technology have been empowering visualization applications. At the same time, supportive and mostly experimental techniques aimed at increasing the representational power and interpretability of complex data, such as sonification, are beginning to establish a foundation that can be used in real applications. This work presents an architecture and a corresponding prototype implementation of a visualization system that incorporates some of these research and technological aspects, such as visualization on the web, distributed visualization, and sonification. The current state of the prototype is presented, as well as its implications and planned improvements.
Recently, a great deal of effort has been spent on establishing information systems and global infrastructures that enable both data suppliers and users to describe (eCommerce, metadata) as well as to find appropriate data. Examples of this are metadata information systems, online shops and portals for geodata. The main disadvantage of existing approaches is the lack of adequate methods and mechanisms for leading users to (e.g., spatial) data archives. This concerns usability and personalization in general, as well as visual feedback techniques in the different steps of the information retrieval process. Several approaches aim at improving graphical user interfaces by using intuitive metaphors, but only some of them offer 3D interfaces in the form of information landscapes or geographic result scenes in the context of information systems for geodata. This paper presents GeoCrystal, whose basic idea is to adopt Venn diagrams to compose complex queries and to visualize search results in a 3D information and navigation space for geodata. These concepts are enhanced with spatial metaphors and 3D information landscapes (a library for geodata) in which users can specify searches for appropriate geodata and can interact graphically with the search results (book metaphor).