Virtual Environment (VE) techniques provide a powerful tool for the 3D visualization of a teleoperation work site, particularly when 'live' video display is inadequate for the task operation or its transmission is constrained, for example by limited bandwidth. However, the ability of VE to cope with the dynamic phenomena of typical teleoperation work sites is severely limited by its pre-defined, model-based nature. Thus, an on-line composing mechanism is needed to make it environment compliant. For this purpose, this paper describes an on-line technique for camera calibration and an interactive VE modeling method that works on a single 2D image. Experiments have shown that the methods are convenient and effective for on-line VE editing.
We define four different tasks that are common in immersive visualization. Immersive visualization takes place in virtual environments, which provide an integrated system of 3D auditory and 3D visual display. The main objective of our research is to determine the best ways to use audio in different tasks. In the long run, the goal is more efficient utilization of spatial audio in immersive visualization application areas. Results of our first experiment have shown that navigation is possible using auditory cues.
The cost of visualizing computational fluid dynamics and other flow field data sets is increasing rapidly due to ever-increasing grid sizes that constantly strain platform memory capacity and bandwidth. To address this problem of 'big data', techniques have been developed in two areas: out-of-core visualization, which exploits the fact that most flow visualizations require a very sparse traversal of the data set, and remote visualization, in which images are rendered by large-scale computing systems and transmitted via network to desktop systems. A new method, which combines out-of-core and remote techniques, offers a potentially significant improvement in both scalability and cost. By incorporating new techniques for spatial partitioning, data prediction, and explicit memory management, this new method enables desktop computing applications to selectively read the contents of massive data sets from remote servers connected by local or wide area networks. Initial testing has shown that local memory usage is nearly independent of data set size, overcoming the key limitation of prior out-of-core methods. By performing the visualization computations and graphics rendering on the local/desktop platform, the new method also provides a significant improvement in price-performance ratio compared to current remote visualization methods.
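As a rough illustration of the selective-read idea (not the authors' implementation), the following Python sketch combines a spatial block index with an explicit LRU cache. The fetch_block callable, the block size, and the cache limit are assumptions standing in for whatever remote read interface the server exposes.

    from collections import OrderedDict

    class BlockCache:
        """Explicit memory management: keep at most max_blocks blocks resident locally."""
        def __init__(self, fetch_block, max_blocks=64):
            self.fetch_block = fetch_block   # hypothetical remote read over a LAN/WAN
            self.max_blocks = max_blocks
            self.cache = OrderedDict()       # block_id -> block data, in LRU order

        def get(self, block_id):
            if block_id in self.cache:
                self.cache.move_to_end(block_id)      # mark as recently used
            else:
                if len(self.cache) >= self.max_blocks:
                    self.cache.popitem(last=False)    # evict least recently used block
                self.cache[block_id] = self.fetch_block(block_id)
            return self.cache[block_id]

    def blocks_for_region(region_min, region_max, block_size):
        """Spatial partitioning: enumerate only the blocks a query region touches."""
        lo = [m // block_size for m in region_min]
        hi = [m // block_size for m in region_max]
        return [(i, j, k)
                for i in range(lo[0], hi[0] + 1)
                for j in range(lo[1], hi[1] + 1)
                for k in range(lo[2], hi[2] + 1)]

Because only the touched blocks are resident at any time, local memory use depends on the traversal, not on the total data set size, which is the property the abstract highlights.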
Visualization techniques are very useful when exploring large amounts of information, especially when dealing with data flows. A pixel-oriented visualization technique based on the CGR algorithm has been designed to help recognize the type of flowing data on the fly. The CGR method, originally developed for the analysis of genomic sequences and modified here to operate on bit sequences, is an algorithm that produces images whose pixels dynamically display the current frequencies of small groups of bits in the observed sequence. Qualitative and quantitative expressions of order, regularity, structure, and complexity of sequences are perceptible from CGR images, which may consequently be used for classification or identification purposes. The method has been applied to a wide range of files, including texts in different languages, images with different formats, and data or software of various origins. It is observed that CGR images are file-specific and may consequently be used as data signatures. Not only can file types be easily identified, but subclasses of data are also decipherable.
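A minimal sketch of the general CGR idea applied to a byte stream follows, assuming each byte is split into 2-bit symbols mapped to the four corners of the unit square; the authors' exact bit coding may differ.

    import numpy as np

    CORNERS = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0), 3: (1.0, 1.0)}

    def cgr_image(data: bytes, size=256):
        """Chaos-game iteration: move halfway toward the corner of each symbol and
        accumulate visit counts; pixel values reflect frequencies of short bit groups."""
        img = np.zeros((size, size))
        x, y = 0.5, 0.5
        for byte in data:
            for shift in (6, 4, 2, 0):                  # split each byte into 2-bit symbols
                sym = (byte >> shift) & 0b11
                cx, cy = CORNERS[sym]
                x, y = (x + cx) / 2.0, (y + cy) / 2.0   # chaos-game step
                img[min(int(y * size), size - 1),
                    min(int(x * size), size - 1)] += 1
        return img / max(img.max(), 1)

    # Usage: a CGR signature of a file
    # sig = cgr_image(open("some_file.bin", "rb").read())

Files of different types produce visibly different frequency images, which is what makes the image usable as a data signature.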
A velocity field, even one that represents a steady-state flow, implies a dynamical system. Animating velocity fields is an important tool for understanding such complex phenomena. This paper looks at a number of techniques that animate velocity fields and proposes two new alternatives: texture advection and streamline cycling. The common theme among these techniques is the use of advection on some texture to generate a realistic animation of the velocity field. Texture synthesis and selection for these methods are presented. Strengths and weaknesses of the techniques are also discussed in conjunction with several examples.
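To make the common theme concrete, here is a minimal texture-advection sketch in the semi-Lagrangian style: each frame, texture values are pulled backward along the velocity field. It is a generic formulation, not necessarily the schemes proposed in the paper.

    import numpy as np

    def advect_texture(tex, vx, vy, dt=1.0):
        """One animation frame: look up where each pixel 'came from' by stepping
        backward along the velocity field (nearest-neighbour lookup, wrap-around)."""
        h, w = tex.shape
        jj, ii = np.meshgrid(np.arange(w), np.arange(h))
        src_i = np.round(ii - vy * dt).astype(int) % h
        src_j = np.round(jj - vx * dt).astype(int) % w
        return tex[src_i, src_j]

    # Usage: start from a noise texture and repeatedly advect it to animate the flow
    # tex = np.random.rand(256, 256)
    # for frame in range(120):
    #     tex = advect_texture(tex, vx, vy, dt=0.5)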
This paper presents a novel method for visualizing the results of an image search. Current approaches to visualizing WWW image searches rank results in a linear list and present them as a sorted thumbnail grid. The method outlined in this paper visually clusters images based on the user's search terms. To accomplish this, a flexible image retrieval method is used that combines content-based and text-based image matching. A new information visualization is used to display the search results.
Cluster analysis is an exploratory data mining technique that involves grouping data points together based on their similarity. Objects or data points are often similar to points in more than one cluster; this is typically quantified by a measure of membership in a cluster, called fuzziness. Visualizing membership degrees in multiple clusters is the main topic of this paper. We use Orca, a Java-based high-dimensional visualization environment, as the implementation platform to test several approaches, including convex hulls, glyphs, coloring schemes, and 3D plots.
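Since membership degrees are central here, the sketch below computes them with the standard fuzzy c-means formula. The paper's clustering procedure may derive memberships differently, so treat this only as one way to obtain values to feed into glyphs, coloring schemes, or 3D plots.

    import numpy as np

    def fuzzy_memberships(points, centers, m=2.0):
        """Fuzzy c-means membership degrees: u[i, k] is the degree to which point i
        belongs to cluster k (each row sums to 1); m > 1 controls the fuzziness."""
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                      # avoid division by zero
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        return 1.0 / ratio.sum(axis=2)

    # Example mapping to a visual channel: fade points whose membership is ambiguous
    # u = fuzzy_memberships(X, C); alpha = u.max(axis=1)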
Volume rendering requires gradient information, used as surface normal information, for the application of lighting models. However, for interactive applications, on-the-fly calculation of gradients is too slow. The common solution to this problem is to quantize the gradients of trivariate scalar fields and pre-compute a look-up table prior to applying a volume rendering method. A number of techniques have been proposed for the quantization of normal vectors, but few have been applied to or adapted for the purpose of volume rendering. We describe a new data-dependent method to quantize gradients using an even number of vectors in a table. The quantization scheme we use is based on a tessellation of the unit sphere. This tessellation represents an 'optimally' distributed set of unit normal vectors. Starting with a random tessellation, we optimize the size and distribution of the tiles with a simulated annealing approach.
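A sketch of the table-lookup idea follows, with a Fibonacci-sphere point set standing in for the annealed, data-dependent tessellation (an assumption made purely for brevity); the lookup step is what the pre-computed table enables at render time.

    import numpy as np

    def quantization_table(n=1024):
        """Quasi-uniform unit vectors on the sphere (Fibonacci spiral), used here as a
        stand-in for the optimized tessellation described in the paper."""
        i = np.arange(n) + 0.5
        phi = np.arccos(1.0 - 2.0 * i / n)            # polar angle
        theta = np.pi * (1.0 + 5.0 ** 0.5) * i        # golden-angle increments
        return np.stack([np.sin(phi) * np.cos(theta),
                         np.sin(phi) * np.sin(theta),
                         np.cos(phi)], axis=1)

    def quantize_gradients(gradients, table):
        """Map each (normalized) gradient to the index of the closest table vector."""
        g = gradients / np.maximum(np.linalg.norm(gradients, axis=1, keepdims=True), 1e-12)
        return np.argmax(g @ table.T, axis=1)         # max dot product = nearest direction

At render time each voxel stores only the small index, and shading looks the normal (or even a precomputed lit color) up in the table instead of recomputing gradients on the fly.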
Synthetic holography simulates both optical recording and reconstruction by computer. In this paper we describe several visualization methods for the synthetic reconstruction of holograms. Such reconstructions are used in combination with computer-generated holograms for verification purposes and also for the exploration of optically recorded holograms. To visualize a reconstruction, we calculate the emitted wave field for several planes at different distances from the hologram by applying wave propagation operators. The planes are combined into a volumetric data set. This data set can be expressed as a complex-valued function of three independent variables. An appropriate visualization technique is volume rendering of the function values. Opacity and intensity of the generated voxels are weighted proportionally to the intensity of the given data set. This representation closely follows the underlying physical process. Another method is isosurface generation for the intensities of the function values, which leads to approximations of the focal points of the object coded in the hologram. Applications of our methods are given and examples are discussed.
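As an illustration of a wave propagation operator, the sketch below uses the angular spectrum method, one common choice, to compute reconstruction planes that can be stacked into the volumetric data set; the propagation operator actually used in the paper may differ, and the wavelength and pixel pitch shown are arbitrary example values.

    import numpy as np

    def angular_spectrum_propagate(field, wavelength, pitch, z):
        """Propagate a sampled complex wave field by distance z (angular spectrum method)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        mask = arg > 0
        kz = 2.0 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
        H = np.where(mask, np.exp(1j * kz * z), 0.0)   # suppress evanescent components
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # Stack the intensities of several reconstruction planes into a volume for
    # direct volume rendering or isosurface extraction:
    # volume = np.stack([np.abs(angular_spectrum_propagate(h, 633e-9, 8e-6, z)) ** 2
    #                    for z in np.linspace(0.05, 0.25, 64)])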
We present an octree-based approach for isosurface extraction from large volumetric scalar-valued data. Given scattered points with associated function values, we impose an octree structure of relatively low resolution. Octree construction is controlled by the original data resolution and cell-specific error values. For each cell in the octree, we compute an average function value and additional statistical data for the original points inside the cell. Once a specific isovalue is specified, we adjust the initial octree by expanding its leaves based on a comparison of the statistics with the isovalue. We tetrahedrize the centers of the octree's cells to determine tetrahedral meshes decomposing the entire spatial domain of the data, including a possibly specified region of interest (ROI). Extracted isosurfaces are crack-free inside an ROI, but cracks can appear at the boundary of an ROI. The initial isosurface is an approximation of the exact one, but its quality suffices for a viewer to identify an ROI where more accuracy is desirable. In the refinement process, we refine affected octree nodes and update the triangulation locally to produce better isosurface representations. This adaptive and user-driven refinement provides a means for interactive data exploration via real-time and local isosurface extraction.
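A minimal sketch of the leaf-expansion test is given below, assuming each cell stores the minimum, maximum, and mean of its points as its 'statistical data' (the paper's statistics may be richer); only cells whose value range straddles the isovalue need to be expanded and passed on to the tetrahedrization step.

    class OctreeNode:
        """Each cell stores simple statistics of the original points it contains."""
        def __init__(self, vmin, vmax, mean, children=None):
            self.vmin, self.vmax, self.mean = vmin, vmax, mean
            self.children = children or []            # empty list for a leaf

    def cells_to_triangulate(node, isovalue, out):
        """Collect the leaf cells through which the isosurface may pass."""
        if node.vmin <= isovalue <= node.vmax:        # surface may cross this cell
            if node.children:
                for child in node.children:
                    cells_to_triangulate(child, isovalue, out)
            else:
                out.append(node)
        return out

    # Usage: candidates = cells_to_triangulate(root, isovalue, [])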
This paper describes a system for comparing surface meshes using different distance metrics and mapping the results to different visual presentations. Hierarchical and multi-resolution (HMR) methods produce meshes with different levels of detail. Different HMR methods produce meshes of varying quality. The surface mesh comparison system presented here allows the user to qualitatively compare and investigate the merits of the meshes produced by different HMR algorithms, as well as how meshes of different resolutions degrade as they are simplified.
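As an example of the kind of distance metric such a system might offer, the sketch below computes symmetric vertex-to-vertex distances (a crude Hausdorff and mean-distance estimate); it is illustrative only and not the paper's specific metrics.

    import numpy as np

    def one_sided_distances(verts_a, verts_b):
        """Distance from every vertex of mesh A to its nearest vertex on mesh B
        (a coarse vertex-to-vertex stand-in for point-to-surface distance)."""
        d = np.linalg.norm(verts_a[:, None, :] - verts_b[None, :, :], axis=2)
        return d.min(axis=1)

    def compare_meshes(verts_a, verts_b):
        da = one_sided_distances(verts_a, verts_b)
        db = one_sided_distances(verts_b, verts_a)
        return {"hausdorff": max(da.max(), db.max()),   # worst-case deviation
                "mean": 0.5 * (da.mean() + db.mean())}  # average deviation

The per-vertex values in da can also be mapped to a color ramp on mesh A, so the viewer sees where a simplified mesh deviates most from the original.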
Unstructured grid discretizations have become increasingly popular for computational modeling of engineering problems involving complex geometries. However, the use of 3D unstructured grids complicates the visualization task, since the resulting data sets are irregular both geometrically and topologically. The need to store and access additional information about the structure of the grid can lead to visualization algorithms that incur considerable memory and computational overhead. These issues become critical with large data sets. In this paper, we present a layer data organization technique for data from 3D aerodynamics simulations using unstructured grids. This type of simulation typically models the air flow surrounding an aircraft body, and the grid resolutions are very fine near the aircraft body. Scientists are usually interested in visualizing the flow pattern near the wing, sometimes very close to the wing surface. We have designed an efficient way to generate the layer representation and experimented with it using different visualization methods, from isosurface rendering and volume rendering to texture-based vector-field visualization. We found that the layer representation facilitates interactive exploration and helps scientists quickly find regions of interest.
The complexity of physical phenomena often varies substantially over space and time. There can be regions where a physical phenomenon or quantity varies very little over a large extent. At the same time, there can be small regions where the same quantity exhibits highly complex variations. Adaptive mesh refinement (AMR) is a technique used in computational fluid dynamics to simulate phenomena whose scales of complexity vary drastically across the simulated variables. Using multiple nested grids of different resolutions, AMR combines the topological simplicity of structured-rectilinear grids, permitting efficient computation and storage, with the ability to adapt grid resolution in regions of complex behavior. We present methods for direct volume rendering of AMR data. Our methods utilize the AMR grids directly for efficiency of the visualization process. We apply a hardware-accelerated rendering method to AMR data that supports interactive manipulation of color-transfer functions and viewing parameters. We also present a cell-projection-based rendering technique for AMR data.
Visualization is an important weapon in the management and control of the vast flood of data now generated. In order to be effective and useful, visualizations must be designed to accommodate the variability of the tasks to which they will be put and of the data they will be expected to display. Such a view necessarily means that not all visualizations are always applicable. To this end, work has been done on visualizing software and systems with the aim of creating intelligence-amplifying tools that aid, rather than try to replace, the user and their intuition and domain knowledge.
This paper presents a technique for automatically generating graphical presentations of a program execution. Viewers can customize the presentation and examine particular aspects of the running computation by creating a specification of the program's entities and properties of interest. We identify three goals for visualizations that display consecutive computation states, known as program visualizations. First, a visualization must present all of the entities and properties described by viewers and no other information. Second, the graphical representations assigned to the various program entities must be sufficiently distinctive to permit viewers to easily recognize entities of different types, despite similarities in the graphical characteristics denoting common properties of those entities. Third, to maintain continuity of the animation over time, graphical elements used to present one state of the program must be preserved and subsequently used to represent the same or similar entities or properties in other states. Based on these goals, we have developed an algorithm that assigns graphical objects to each program entity of interest. The algorithm relies on a characterization of the available graphical objects and attributes to determine the graphical elements that best display the data contained in the entities and their properties. For views that have a greater number of properties than the available number of graphical elements, we have developed heuristics for deciding which properties can be depicted by overloading the same graphical attribute. The automatic presentation is flexible and permits viewers to intervene and determine, entirely or in part, the graphical design of a visualization.
This paper presents a model for constructing complex visualization instances by describing them as a set of smaller interconnected modules. The module-relationship structure of the model allows users to explore a given visualization instance a piece at a time, and then to relate modules to one another to explore the data more deeply. This is known as compositional visualization. Each module is a visualization in itself and represents some aspect or view of the data. When a number of such smaller visualization views are considered conjunctively, the result is a broader view of the data that includes the aspects provided by each module. Compositional visualization is one technique for dealing with data that is too large or complex to visualize using a single visualization. The model first decomposes data into a collection of simpler data modules. The data modules are then mapped to simple visualization modules. The visualization modules are combined to form larger visualization instances. However, the decomposition of the data is not necessarily equivalent to the composition of the visualization, so a poorly constructed visualization may give a false impression of the data. The reasons for this and ways to overcome it are presented.
This paper describes a method for visualizing multivariate data in a lower dimension, primarily in 2D. The method, called Distance Conservation with Filtering (DCF), creates a parameterized mapping of a data set into a lower dimension. Special functions, called filters, extract the most important pairs of points whose distances are to be preserved. A particular construction of a filter and the corresponding algorithm for learning the mapping parameters are described in detail. DCF compares favorably with other visualization methods on a number of data sets.
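The sketch below illustrates only the distance-conservation part of the idea, assuming the filter has already produced a list of point pairs; the filter construction and the parameterized mapping are the paper's contribution and are not reproduced here. It minimizes a stress-style objective over the filtered pairs by gradient descent.

    import numpy as np

    def map_with_filtered_pairs(X, pairs, dim=2, lr=0.01, iters=500, seed=0):
        """Place points in `dim` dimensions so that the distances of the *filtered*
        pairs are preserved; X is an (n, d) array, pairs a list of (i, j) tuples."""
        rng = np.random.default_rng(seed)
        Y = rng.normal(size=(len(X), dim))
        i = np.array([p[0] for p in pairs])
        j = np.array([p[1] for p in pairs])
        target = np.linalg.norm(X[i] - X[j], axis=1)          # distances to conserve
        for _ in range(iters):
            diff = Y[i] - Y[j]
            cur = np.maximum(np.linalg.norm(diff, axis=1), 1e-9)
            g = ((cur - target) / cur)[:, None] * diff        # gradient of 0.5*(cur-target)^2
            np.add.at(Y, i, -lr * g)
            np.add.at(Y, j, lr * g)
        return Y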
We describe three systems that use natural or event-based sounds as a means of data delivery. In these systems we have mapped data to natural sounds using metaphors. In the first system we evaluate the use of the sounds of air, a horn, and a train to convey ordered numeric values from 1 to 6. An example of the metaphor used here is the association of speed values with the sound of a moving train at different speeds. In the second system, we use sounds of ocean waves to convey whether a position in a protein structural alignment is buried, partially exposed, or fully exposed. The metaphor used here is the association of the sound with how exposed the listener is with respect to the ocean. In the third system, we map animal sounds, such as the sound of a roaring lion or a chirping bird, to certain stocks based on user preferences. The behavior of the stocks is then conveyed by whistle and car-crash sounds to signify the movement of stock prices; an upward whistling sound, for example, can be clearly associated with an uptrend. We present and discuss the results of user evaluation studies for all three systems.
Space physicists have traditionally viewed their data with methods such as pixel-based contour representations; however, with more advanced instruments being flown, a more advanced visualization method is needed. This document identifies and communicates the principles necessary to meet the requirements of the current and future space physics community and perhaps to meet needs in other scientific arenas. The software we have developed was designed from the outset as an extensible tool and is modular with respect to data systems, plotting packages, and user interfaces. We discuss some of the key design criteria used in developing the new program and the implementation details of using an open-source graphics package for our plotting requirements. In addition, certain issues in upgrading legacy applications are also discussed. This paper outlines an approach other programmers can use in their own software developments. Successfully using this model will considerably reduce overall development time in constructing data analysis systems while allowing the individual developer to concentrate on specific algorithms unique to their domain.
Scatter graphs are a popular medium for visualizing spatial-semantic structures derived from abstract information spaces. For small spaces, such graphs can be an effective means of reducing high-dimensional information into two or three spatial dimensions. As dimensionality increases, representing the thematic diversity of documents using spatial proximity alone becomes less and less effective. This paper reports an experiment designed to determine whether, for larger spaces, benefits are to be gained from adding visual links between document nodes as an additional means of representing the most important semantic relationships. Two well-known algorithms, minimum spanning trees (MST) and pathfinder associative networks (PFNET), were tested against both a scatter graph visualization, derived from factor analysis, and a traditional list-based hypertext interface. It was hypothesized that visual links would facilitate users' comprehension of the information space, with corresponding gains in information-seeking performance. Navigation performance and user impressions were analyzed across a range of different search tasks. Results indicate both significant performance gains and more positive user feedback for the MST and PFNET visualizations over scatter graphs. Performance on all visualizations was generally poorer, and never better, than that achieved on the text list interface, although the magnitude of these differences was found to be highly task dependent.
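For reference, the MST links can be obtained from a document-distance matrix (for example, one minus cosine similarity between document vectors) with Prim's algorithm, as in this sketch; the experiment's exact similarity measure is not assumed here.

    import numpy as np

    def mst_edges(dist):
        """Prim's algorithm on a dense document-distance matrix; the returned edges
        are the visual links drawn between document nodes."""
        n = dist.shape[0]
        in_tree = [0]
        edges = []
        best = dist[0].copy()              # cheapest known connection to the tree
        parent = np.zeros(n, dtype=int)
        while len(in_tree) < n:
            cand = [k for k in range(n) if k not in in_tree]
            nxt = min(cand, key=lambda k: best[k])
            edges.append((parent[nxt], nxt))
            in_tree.append(nxt)
            closer = dist[nxt] < best      # nodes now reachable more cheaply via nxt
            parent[closer] = nxt
            best = np.minimum(best, dist[nxt])
        return edges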
As computer and network intrusions become more and more of a concern, the need for better capabilities to assist in the detection and analysis of intrusions also increases. System administrators typically rely on log files to analyze usage and detect misuse. However, as a consequence of the amount of data collected by each machine, multiplied by the tens or hundreds of machines under the system administrator's auspices, the entirety of the data available is neither collected nor analyzed. This is compounded by the need to analyze network traffic data as well. We propose a methodology for visually analyzing network and computer log information based on the behavior of the users. Each user's behavior is the key to determining their intent and overriding activity, whether they attempt to hide their actions or not. Proficient hackers will attempt to hide their ultimate activities, which hinders the reliability of log file analysis. Visually analyzing the users' behavior, however, is much more adaptable and difficult to counteract.
This paper presents a framework in which a versatile visualization system can be developed to symbolically represent database records. Such a system can handle the special visualization needs of individual applications without reprogramming the system. In addition to the regular visualization components, this framework has a rule component, which contains a rule interpreter, a rule base, and a graphics base. The rules in the rule base and the graphical templates in the graphics base capture application-specific knowledge about the data. The rule interpreter uses the rules to select and place graphical symbols to represent the data records. Only the rules and templates need to be switched to adapt such a visualization system to different applications. A rule-based database visualization system has been built based on this framework. As a test of the system, it was adapted to draw collision diagrams for the Ohio Department of Transportation, which maintains a database of traffic accident records. Each symbol in a collision diagram is a graphical depiction of an accident. Rules and templates are defined so that the symbols can reveal vehicle direction, point of impact, and other related information about the accident.
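A minimal sketch of how a rule component might select templates for records is shown below; the rule structure, field names, and template names are hypothetical and only illustrate the idea that adapting the system means swapping the rules and templates, not the interpreter.

    # Each rule pairs a predicate over a record with the name of a graphical template.
    RULES = [
        (lambda r: r["crash_type"] == "rear-end", "rear_end_symbol"),
        (lambda r: r["crash_type"] == "angle",    "angle_symbol"),
        (lambda r: True,                          "generic_symbol"),   # fallback rule
    ]

    def interpret(record, rules=RULES):
        """Select a graphical template for one database record and attach the
        attributes the template needs (direction, point of impact, ...)."""
        for predicate, template in rules:
            if predicate(record):
                return {"template": template,
                        "rotation": record.get("direction", 0),
                        "impact": record.get("point_of_impact")}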
This paper discusses the visualization of the relationships in e-commerce transactions. To date, many practical research projects have shown the usefulness of a physics-based mass-spring technique for laying out data items with close relationships on a graph. We describe a market basket analysis visualization system using this technique. The system: (1) integrates a physics-based engine into a visual data mining platform; (2) uses a 3D spherical surface to visualize clusters of related data items; and (3) for large volumes of transactions, uses hidden structures to unclutter the display. Several examples of market basket analysis are also provided.
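A rough sketch of a mass-spring layout constrained to a spherical surface follows, assuming link strengths derived from how often items co-occur in transactions; the constants and the re-projection step are illustrative choices, not the system's actual engine.

    import numpy as np

    def spring_layout_on_sphere(pos, links, strengths, steps=200, dt=0.05):
        """Items attract along 'springs' weighted by co-occurrence strength, feel a
        weak global repulsion, and are re-projected onto the unit sphere each step."""
        pos = pos.copy()
        for _ in range(steps):
            force = np.zeros_like(pos)
            for (i, j), w in zip(links, strengths):
                d = pos[j] - pos[i]
                force[i] += w * d                      # attraction between related items
                force[j] -= w * d
            diff = pos[:, None, :] - pos[None, :, :]
            dist2 = (diff ** 2).sum(-1) + 1e-9
            force += (diff / dist2[:, :, None]).sum(axis=1) * 0.01   # mild repulsion
            pos = pos + dt * force
            pos /= np.linalg.norm(pos, axis=1, keepdims=True)        # stay on the sphere
        return pos

    # Usage: pos0 = random points normalized to the unit sphere; links = [(i, j), ...]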
This paper presents NVIS, an interactive graphical tool used to examine the weights, topology, and activations of a single artificial neural network (ANN), as well as the genealogical relationships between members of a population of ANNs as they evolve under an evolutionary algorithm. NVIS is unique in its depiction of nodal activation values, its use of family tree diagrams to indicate the origin of individual networks, and the degree of interactivity it allows the user while the learning process takes place. The authors have used these features to obtain insights into both the workings of single neural networks and the evolutionary process, based upon which we consider NVIS to be an effective visualization tool of value to designers, users, and students of ANNs.
We discuss the development of software tools for the visualization of simulations, detectors, and events used in high energy and nuclear physics experiments taking place at large particle accelerators. Two such accelerators are RHIC at Brookhaven National Lab, completed in 1999, and the Large Hadron Collider (LHC) at CERN, to be completed in 2005. One primary goal of RHIC is the formation of a quark-gluon plasma, a state of matter thought to exist shortly after the big bang. We discuss a simulation and visualization of the collision at RHIC of two gold nuclei and the predicted results for the quark-gluon plasma calculated using the parton cascade model. ATLAS is a large detector with 20 million separate elements being constructed to measure physics beyond the Standard Model at the LHC. A high-end visual display for ATLAS is necessary because of the sheer size and complexity of the detector: to load, rotate, query, and debug simulation code with a modern detector simply takes too long, even on a powerful workstation. We have developed a visual display of HENP detectors on a BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc. We enable collaborative viewing of detectors and events by directly running the analysis in the BNL stereoscopic theater. We obtain a real-time visual display for events accumulated during simulations.
BLUI is a virtual reality program that allows drawing and sculpting in 3D space using a gestural interface. In this paper, we describe BLUI's use as an interface for annotation, display, and explanation in a variety of scientific visualizations. BLUI can display data from Digital Elevation Models. 3D forms can be positioned and modified, and general navigation can be performed. New lines, surfaces, and point clouds can be drawn and edited. For visualization playback, a line can be chosen along which a virtual camera will fly. Starting and ending positions on this line can be selected. In addition, a 'look-at' line with start and end positions can be similarly chosen. A time scale can then be established for the animation, at which point the user can display the scene from the viewpoint of a virtual camera flying along the first line and looking either straight ahead or at a point moving along the second line.
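A minimal sketch of the playback described above, assuming simple linear interpolation of both the fly-along line and the look-at line over a single time scale:

    import numpy as np

    def camera_path(eye_start, eye_end, look_start, look_end, duration, t):
        """Camera pose at time t: the eye moves linearly along the fly-along line
        while the look-at point moves along the second, user-chosen line."""
        s = np.clip(t / duration, 0.0, 1.0)
        eye = (1 - s) * np.asarray(eye_start) + s * np.asarray(eye_end)
        look_at = (1 - s) * np.asarray(look_start) + s * np.asarray(look_end)
        return eye, look_at

    # Each animation frame the renderer receives (eye, look_at) plus an up vector;
    # looking 'straight ahead' is the special case where the look-at line is the
    # fly-along line shifted forward along the direction of motion.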
Qviz is a lightweight, modular, and easy to use parallel system for interactive analytical query processing and visual presentation of large datasets. Qviz allows queries of arbitrary complexity to be easily constructed using a specialized scripting language. Visual presentation of the result is also easily achieved via simple scripted and interactive commands to our query-specific visualization tools. This paper describes our initial experiences with the Qviz system for querying and visualizing scientific datasets, showing how Qviz has been used in two different applications: ocean modeling and linear accelerator simulations.
The development of multimedia works presents a new range of problems associated with the visualization of the structure of the work. Tools for facilitating this process are inadequate and make use of existing methods that have been adapted to the task. Researching the methods used by multimedia practitioners has given some insight into what would be required of such a software tool.