The current research investigates the possibility of developing a computing and visualization system using a public domain software system built on a personal computer. The Visualization Toolkit (VTK) is available on both UNIX and PC platforms. VTK uses C++ to build an executable and provides an abundant set of programming classes/objects in its system library; users can also develop their own classes/objects in addition to those in the class library, and can build applications in any of the C++, Tcl/Tk, and Java environments. The present research shows how a data visualization system can be developed with VTK running on a personal computer. The topics include execution efficiency, visual object quality, user interface design options, and the feasibility of a VTK-based World Wide Web data visualization system. The research features a case study showing how to use VTK to visualize meteorological data with techniques including iso-surfaces, volume rendering, vector display, and composite analysis. The study also shows how the VTK outline, axes, and two-dimensional annotation text and titles enhance the data presentation. Finally, the research demonstrates how VTK works in an Internet environment by accessing an executable from a Java application embedded in a web page.
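As an illustration of the kind of pipeline involved (a minimal sketch, not code from the paper; the file name and iso-value are hypothetical, and modern VTK's SetInputConnection is used where early versions used SetInput), a VTK iso-surface application in C++ can be as short as:

```cpp
// Minimal VTK iso-surface pipeline (sketch; file name and iso-value are
// hypothetical placeholders for a meteorological field).
#include <vtkActor.h>
#include <vtkContourFilter.h>
#include <vtkNew.h>
#include <vtkPolyDataMapper.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkRenderer.h>
#include <vtkStructuredPointsReader.h>

int main() {
  vtkNew<vtkStructuredPointsReader> reader;
  reader->SetFileName("pressure.vtk");      // hypothetical data file

  vtkNew<vtkContourFilter> iso;             // extracts the iso-surface
  iso->SetInputConnection(reader->GetOutputPort());
  iso->SetValue(0, 101.3);                  // hypothetical iso-value

  vtkNew<vtkPolyDataMapper> mapper;
  mapper->SetInputConnection(iso->GetOutputPort());
  vtkNew<vtkActor> actor;
  actor->SetMapper(mapper);

  vtkNew<vtkRenderer> ren;                  // standard render/interact setup
  ren->AddActor(actor);
  vtkNew<vtkRenderWindow> win;
  win->AddRenderer(ren);
  vtkNew<vtkRenderWindowInteractor> iren;
  iren->SetRenderWindow(win);
  win->Render();
  iren->Start();                            // interactive window on the PC
  return 0;
}
```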
Scientific visualization, especially visualization exploration, enables information to be investigated and better understood. Exploration enables hands-on experimentation with the displayed visualizations and the underlying data. Most exploration techniques, by their nature, generate multiple realizations and many data instances. Thus, to best understand the information in coincident views, manipulation information within one view may be 'directed' to other related views. Such multiple views may be described as being closely coupled. Within this paper we advocate the use of coupled views for scientific visualization exploration. We describe some key concepts of coupled views for visualization exploration and present how to encourage their use. The key concepts include: the scope of the correlation (between two specific views or many realizations), who initiates the correlation (the user or the system), and what is correlated (objects within a view, or the whole viewport).
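A minimal sketch of how a manipulation in one view might be 'directed' to coupled views (class and method names are illustrative, not from the paper):

```cpp
// User-initiated coupling sketch: rotating one view propagates the same
// rotation to every view coupled with it.
#include <iostream>
#include <vector>

struct Camera { double azimuth = 0.0; };

class View {
public:
    void coupleWith(View* other) { coupled_.push_back(other); }
    // User-initiated manipulation: apply locally, then direct to peers.
    void rotate(double deg) {
        applyRotation(deg);
        for (View* v : coupled_) v->applyRotation(deg);  // system propagates
    }
private:
    void applyRotation(double deg) {   // peers receive the raw update only,
        camera_.azimuth += deg;        // so propagation cannot loop
        std::cout << "view@" << this << " azimuth=" << camera_.azimuth << "\n";
    }
    Camera camera_;
    std::vector<View*> coupled_;
};

int main() {
    View a, b;
    a.coupleWith(&b);   // scope: correlation between two specific views
    a.rotate(15.0);     // manipulating view a also rotates view b
}
```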
During the past five years a high speed ATM network has been developed at UBC that provides a campus testbed, a local testbed to the hospitals, and a national testbed between UBC and the BADLAB in Ottawa. This testbed combines a commercial shared audio/video/whiteboard environment with a shared interactive 3-dimensional solid model. The solid models range from a skull reconstructed from a CT scan, with muscles and an overlying skin, to a model of the ventricle system of the human brain. Typical interactions among surgeon, radiologist and modeler consist of sharing image slices of the original scan and adjusting the surface of the model to conform to each individual's perception of what the final object should look like. The purpose of this interaction can range from forensic reconstruction from partial remains to pre-maxillofacial surgery. A joint project with the forensic unit of the R.C.M.P. in Ottawa, using the BADLAB, is now testing this methodology on a real case, beginning with a CT scan of partial remains. A second study, underway with the Department of Maxillofacial Reconstruction at Dalhousie University in Halifax, Nova Scotia, concerns a subject who is about to undergo orthognathic surgery, in particular a mandibular advancement. This subject has been MRI scanned, a solid model constructed of the mandible, and the virtual surgery performed on the model. The model and the procedure have been discussed and modified by the modeler and the maxillofacial specialist using these shared workspaces. The procedure will be repeated after the actual surgery to verify the modeled procedure. The advantage of this technique is that none of the specialists need be in the same room, or even the same city. Given the scarcity of time and specialists, this methodology shows great promise. In November of last year a live shared demonstration of this facial modeler was run between Vancouver and Dalhousie University in Halifax, Nova Scotia; with a fixed bandwidth of 40 Mbit/s the latency was 80 msec. Currently an arbiter is being written to permit up to 10 individuals to join or exit the interactive workspace during a live session.
This paper focuses on the visualization of application code. The goal is to provide a visual representation of the code that relieves the user from having to rely solely on textual representations. The visual representation of code can be correlated with visual displays of the data and shown simultaneously within a single display. This prevents the user from having to change perceptual context or focus to a separate window. The visualization techniques are based on the familiar metaphor of picture frames. Since we wish to present the application data within the same display, representing operations and instructions as a frame around the application data yields a merged display with the data and operations correlated in an unobtrusive manner. The borders also work well because they match the human perceptual system's ability to differentiate and characterize edges. The borders can be as simple as rectangles, with the characteristics of the rectangle (i.e., thickness, color, consistency, etc.) identifying the type of operation. Portions of the frame can be made representative of the underlying data (i.e., current and termination conditions).
The recent availability of long and even complete genomic sequences opens a new field of research devoted to the general analysis of their global structure, without regard to gene interpretation. The exploration of such huge sequences (up to several megabases) needs new kinds of data representation, allowing immediate visual interpretation of genomic structure and giving insights into the underlying mechanisms ruling it. Our approach takes advantage of the Chaos Game Representation (CGR) for creating images of large genomic sequences. The CGR method, modified here to allow for quantification, is an algorithm that produces pictures displaying the frequencies of words (short sequences of the four nucleotides G, A, T, C) and revealing nested patterns in DNA sequences. It proves to be a quick and robust method for extracting information from long DNA sequences, allowing comparison of sequences and detection of anomalies in word frequencies. Each species seems to be associated with a specific CGR image, which can therefore be considered a genomic signature.
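The CGR iteration itself is standard: each nucleotide owns a corner of the unit square and the current point moves halfway toward that corner, so counting visits on a grid quantifies word (k-mer) frequencies. A minimal sketch (the corner assignment is one common convention and the sequence is a made-up fragment):

```cpp
// Chaos Game Representation sketch: counting visits in a 2^k x 2^k grid
// yields k-letter word frequencies.
#include <cstdio>
#include <string>
#include <vector>

int main() {
    const int k = 3, n = 1 << k;                 // 8x8 grid <=> 3-letter words
    std::vector<int> count(n * n, 0);
    // Corner assignment (a common convention): A=(0,0) C=(0,1) G=(1,1) T=(1,0)
    auto corner = [](char b, double& cx, double& cy) {
        switch (b) {
            case 'A': cx = 0; cy = 0; break;
            case 'C': cx = 0; cy = 1; break;
            case 'G': cx = 1; cy = 1; break;
            default:  cx = 1; cy = 0; break;     // T
        }
    };
    std::string seq = "ATGCGATTACAGGT";           // hypothetical fragment
    double x = 0.5, y = 0.5;
    for (char b : seq) {
        double cx, cy;
        corner(b, cx, cy);
        x = (x + cx) / 2;  y = (y + cy) / 2;      // move halfway toward corner
        int ix = int(x * n), iy = int(y * n);
        if (ix >= n) ix = n - 1;                  // guard FP rounding at 1.0
        if (iy >= n) iy = n - 1;
        ++count[iy * n + ix];                     // quantified CGR pixel
    }
    for (int iy = n - 1; iy >= 0; --iy) {         // print the grid
        for (int ix = 0; ix < n; ++ix) std::printf("%2d ", count[iy * n + ix]);
        std::printf("\n");
    }
}
```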
In this paper we describe the efforts being carried out by the Laboratory of Computational Physics, the Scientific Visualization Laboratory and the Virtual Reality Laboratory of the Naval Research Laboratory toward the visualization of large data sets. We have concentrated on fluid flow hydrodynamics data sets. We describe the fully threaded tree structure developed at NRL to tackle the problem of massively parallel calculations using local mesh refinement methods. This structure was implemented within the IBM Data Explorer environment, yielding a multiresolution visualization system that inherits the properties of the tree structure in a natural way. We also describe the visualization of these data sets in a virtual environment.
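A simplified sketch of what a fully threaded cell might store (an assumed layout for illustration only, not NRL's exact structure): parent, children, and face neighbors are all directly linked, so level transitions need no searching, and a viewer can inherit the tree's levels of detail directly.

```cpp
// Illustrative fully threaded octree cell; every non-leaf has all 8 children.
#include <array>
#include <cstdint>

struct FTTCell {
    FTTCell* parent = nullptr;               // nullptr at the root
    std::array<FTTCell*, 8> child{};         // all nullptr => leaf
    std::array<FTTCell*, 6> neighbor{};      // -x,+x,-y,+y,-z,+z face links
    std::uint8_t level = 0;                  // refinement depth
    float density = 0.f;                     // hydrodynamic field sample

    bool isLeaf() const { return child[0] == nullptr; }
};

// Multiresolution visit: descend only down to `maxLevel`, which is how a
// visualization system can pull a coarser view out of the same structure.
template <class F>
void visit(const FTTCell* c, int maxLevel, F&& emit) {
    if (c->isLeaf() || c->level >= maxLevel) { emit(*c); return; }
    for (const FTTCell* ch : c->child) visit(ch, maxLevel, emit);
}
```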
This paper presents a parallel algorithm for the preprocessing as well as real-time navigation of view-dependent virtual environments on shared memory multiprocessors. The algorithm proceeds by hierarchical spatial subdivision of the input dataset by an octree. The parallel algorithm is robust and does not generate artifacts such as degenerate triangles and mesh foldovers. Its performance scales linearly with the number of processors as well as with input dataset complexity. The resulting visualization performance is fast enough to enable interleaved acquisition and modification with interactive visualization.
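A sketch of the shared-memory parallelization pattern such an octree invites (an assumed scheme for illustration, not the paper's exact algorithm): since sibling subtrees are disjoint, they can be refined as independent tasks with no locking.

```cpp
// Subdivide an octree, then process the eight octants as parallel tasks.
#include <future>
#include <vector>

struct Node { std::vector<Node> child; int depth = 0; /* mesh data ... */ };

void refine(Node& n, int maxDepth) {
    if (n.depth >= maxDepth) return;
    n.child.resize(8);
    for (Node& c : n.child) { c.depth = n.depth + 1; refine(c, maxDepth); }
}

int main() {
    Node root;
    refine(root, 2);                         // small shared top of the tree
    std::vector<std::future<void>> tasks;
    for (Node& c : root.child)               // one task per octant: subtrees
        tasks.push_back(std::async(std::launch::async, [&c] {
            refine(c, 6);                    // are disjoint, so no locking
        }));
    for (auto& t : tasks) t.get();           // join before navigation starts
}
```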
Remotely sensed images, data extracted by analytical models, and other ancillary data are the main sources of information, both in spatial and multi-dimensional space, for Earth observation applications. In general, the visualization process, by means of graphic aids, may facilitate the understanding of these complex data sets. However, in many cases a single visualization technique is not sufficient to extract all data properties or cannot be used to explore different types of data. In this context, a graphical tool, based on a virtual reality system, has been developed to assess the usefulness of visualization techniques for the exploration of remotely sensed data. Several different 3-dimensional representations of the same data, linked visually by a 'data brushing' technique, have been developed in order to reveal more effectively the structures inherently present in the data. The objective of this study is to gain more insight into the techniques employed in remote sensing image processing to extract sub-pixel information, such as those based on fuzzy classifiers and linear spectral unmixing, in order to improve image classification.
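As background, linear spectral unmixing in its simplest two-endmember form has a closed-form least-squares solution; the sketch below uses made-up spectra and band counts purely for illustration.

```cpp
// Two-endmember linear unmixing: model p = a*E1 + (1-a)*E2 per pixel and
// solve for the abundance a by least squares.
#include <array>
#include <cstdio>

int main() {
    const int B = 4;                                 // number of bands
    std::array<double, B> e1{0.9, 0.8, 0.2, 0.1};    // endmember 1 (e.g. soil)
    std::array<double, B> e2{0.1, 0.2, 0.7, 0.9};    // endmember 2 (e.g. veg.)
    std::array<double, B> p {0.5, 0.5, 0.45, 0.5};   // observed mixed pixel

    // Minimize ||p - (a*e1 + (1-a)*e2)||^2
    //   =>  a = <p - e2, e1 - e2> / <e1 - e2, e1 - e2>
    double num = 0, den = 0;
    for (int b = 0; b < B; ++b) {
        double d = e1[b] - e2[b];
        num += (p[b] - e2[b]) * d;
        den += d * d;
    }
    double a = num / den;
    if (a < 0) a = 0;                                // enforce physical range
    if (a > 1) a = 1;
    std::printf("abundance e1 = %.3f, e2 = %.3f\n", a, 1 - a);
}
```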
This work describes a way of interactively manipulating structured objects through interaction rules. Symbols are used as graphical representations of object states; state changes lead to different visual symbol instances. The manipulation of symbols using interactive devices leads to an automatic state change of the corresponding structured object without any intervention by the application. To this end, interaction rules are introduced. These rules describe the way a symbol may be manipulated and the effects this manipulation has on the corresponding structured object. The rules are interpreted by the visualization and interaction service. For each symbol used, a set of interaction rules can be defined. To be as general as possible, all the interactions on a symbol are defined as a triple, which specifies the preconditions of all the manipulations of this symbol, the manipulations themselves, and the postconditions of all the manipulations of this symbol. A manipulation is a quintuplet, which describes the possible initial events of the manipulation, the possible places of these events, the preconditions of this manipulation, the results of this manipulation, and the postconditions of this manipulation. Finally, reflection functions map the results of a manipulation to the new state of a structured object.
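The triple/quintuplet scheme translates naturally into data types; a sketch follows (type and field names are illustrative, not the paper's):

```cpp
// Per symbol: a triple of global pre/postconditions around a manipulation
// set, where each manipulation is the quintuplet described in the text.
#include <functional>
#include <string>
#include <vector>

struct ObjectState { /* application-specific fields */ };

struct Manipulation {                                  // the quintuplet
    std::vector<std::string>          initialEvents;   // e.g. "button1-press"
    std::vector<std::string>          places;          // symbol parts allowed
    std::function<bool(ObjectState&)> precondition;
    std::function<void(ObjectState&)> result;          // effect on the symbol
    std::function<void(ObjectState&)> postcondition;
};

struct InteractionRule {                               // the triple
    std::function<bool(ObjectState&)> preAll;          // before any manipulation
    std::vector<Manipulation>         manipulations;
    std::function<void(ObjectState&)> postAll;         // after any manipulation
};

// A reflection function maps a manipulation result back to the new state of
// the structured object, so the application never has to intervene.
using Reflection = std::function<void(const ObjectState&, ObjectState&)>;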
Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and in their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision making process in speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter. This led to the development of a five-level, hierarchical approach to image quality evaluation. Lower-level pass-fail conditions and decision rules were coded into the system. Higher-level image quality states were defined by allowing users to interactively adjust the system's sensitivity to various image attributes by manipulating graphical controls. Results were presented in easily interpreted bar graphs. These graphs were mouse-sensitive, allowing the user to more fully explore the subsets of data indicated by various color blocks. In order to simplify the performance evaluation and tuning process, users could choose to view the results of (1) the existing system parameter state, (2) any arbitrary parameter values they chose, or (3) a quasi-optimum parameter state, derived by applying a decision rule to a large set of possible parameter states. Giving managers easy-to-use tools for defining the more subjective aspects of quality resulted in a system that responded to contextual cues that are difficult to hard-code. It had the additional advantage of allowing the definition of quality to evolve over time, as users became more knowledgeable about the strengths and limitations of an automated quality inspection system.
This paper presents the creation of a visual environment for exploring landscape patterns and changes to such patterns over time. Dynamic landscape patterns can involve both spatial and temporal complexity. Exploration of spatio-temporal landscape patterns should provide the ability to view information at different scales, permitting navigation of a vast amount of information in a manner that facilitates comprehension rather than confusion. One way of achieving this goal is to support selection, navigation and comparison of progressively refined segments of time and space. We have entitled this system Tardis, after the time machine of Dr. Who, to emphasize the exploration of time dependent data and because our use of elastic presentation has the effect of providing more internal space than the external volume suggests. Of special concern in this research is the extent of the data and its inter-relationships that need to be understood over multiple scales, and the challenge inherent in implementing viewing methods to facilitate understanding.
Multi-wall stereo projection (MWSP) systems are an emerging display paradigm, promising a new quality of 3D real-time interaction. Not much is known about the ergonomics of these systems. In this paper some basics of perception and approaches to improving visual quality are discussed, and the results of four experiments are presented in order to obtain a better understanding of user interactions with existing projection technology. Due to the limited number of participants the experiments are considered case studies only. The first task was the estimation of the absolute geometrical dimensions of simple objects. The second task was grabbing simple objects of different sizes. In order to classify MWSP, performance on these tasks was compared to other display devices and to physical reality. We conducted two further experiments to compare different viewing devices for virtual reality (VR), such as Head Mounted Displays (HMD), monitors, and the MWSP. For all of these experiments quantitative data was collected as a measure of interaction quality. The last two tests were supported by pre- and post-questionnaires to obtain subjective judgements of the displays as well.
We describe several implementation aspects of generating a terrain representation from a digital elevation model (DEM) and visualizing scenes from that representation in a desktop environment. Most terrain data sets are too large to fit into system memory on a low-end computer; consequently, we must use a secondary storage device as part of system memory at run time. However, dynamic loading of data from disk is relatively slow, so in order to speed up access to terrain data on disk we use two disk access mechanisms: spatial clustering and smart pointers. For spatial clustering, we present a directory structure with the Hana code as a key. With smart pointers, we present an object access method which can access objects regardless of where each is stored. We present an approximated terrain model generation algorithm which utilizes these methods, and a region-selective rendering algorithm which can retrieve each queried region.
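The smart-pointer mechanism can be sketched as a proxy that faults the block in from disk on first dereference (an assumed design; class and file names are illustrative):

```cpp
// Disk-backed smart pointer sketch: dereferencing loads the terrain block
// on first use, so objects are accessed regardless of stored location.
#include <cstdio>
#include <memory>
#include <string>

struct TerrainBlock { /* heightfield samples ... */ };

class DiskPtr {
public:
    explicit DiskPtr(std::string path) : path_(std::move(path)) {}
    TerrainBlock& operator*()  { fault(); return *block_; }
    TerrainBlock* operator->() { fault(); return block_.get(); }
private:
    void fault() {                        // load on first dereference
        if (!block_) {
            std::printf("loading %s from disk\n", path_.c_str());
            block_ = std::make_unique<TerrainBlock>(); // real code would read
        }                                              // the clustered file
    }
    std::string path_;
    std::unique_ptr<TerrainBlock> block_;
};

int main() {
    DiskPtr tile("cluster_0042.ter");     // hypothetical clustered file name
    (void)*tile;                          // triggers the disk load
    (void)*tile;                          // already resident: no I/O
}
```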
Representation of terrain at an arbitrary level of accuracy is a basic requirement for terrain rendering and ground analysis in a number of visual applications. In all such applications, there exists a compromise between the desire for accuracy and the amount of information needed. For visualization of terrain, an efficient way to achieve this compromise is to display an approximation at high accuracy in specific areas of interest, while a coarse approximation is used over the other areas. For analytical purposes, it is efficient to employ a proper approximation of terrain with maximum precision for the given tasks. Multiresolution terrain models offer the possibility to visualize and analyze terrain in 3D at different levels of precision. In this paper, we focus on a multiresolution representation of triangulated terrain, called the HiT model, which can efficiently support rendering operations at interactive rates as well as analytical operations. The HiT model is a new type of unified model for the multiresolution description of triangulated terrain. In this paper, we describe the HiT model in a formal way. We also present algorithms for the extraction of constant approximations and view-dependent approximations.
In this paper a description of business process simulation is given. A crucial part of the simulation of business processes is the analysis of social contacts between the participants. We introduce a tool to collect log data and show how this log data can be effectively analyzed using two different kinds of methods: discussion flow charts and self-organizing maps. Discussion flow charts reveal the communication patterns, and self-organizing maps are a very effective way of clustering the participants into development groups.
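For reference, the standard self-organizing map update that underlies such clustering looks like the following sketch (the map size, feature layout for participants, and learning parameters are hypothetical):

```cpp
// One Kohonen SOM training step: find the best matching unit (BMU), then
// pull it and its neighbors toward the sample.
#include <array>
#include <cmath>
#include <cstdio>

const int W = 4, H = 4, D = 3;                   // 4x4 map, 3 features each
using Vec = std::array<double, D>;

double dist2(const Vec& a, const Vec& b) {
    double s = 0;
    for (int i = 0; i < D; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return s;
}

int main() {
    Vec map[H][W] = {};                          // weights (zero here; real
                                                 // code randomizes them)
    Vec x = {0.7, 0.1, 0.4};                     // one participant's features
    int bx = 0, by = 0;                          // 1. find the BMU
    for (int y = 0; y < H; ++y)
        for (int c = 0; c < W; ++c)
            if (dist2(map[y][c], x) < dist2(map[by][bx], x)) { bx = c; by = y; }
    double eta = 0.5, sigma = 1.0;               // 2. neighborhood update
    for (int y = 0; y < H; ++y)
        for (int c = 0; c < W; ++c) {
            double g = std::exp(-((c-bx)*(c-bx) + (y-by)*(y-by))
                                / (2 * sigma * sigma));
            for (int i = 0; i < D; ++i)
                map[y][c][i] += eta * g * (x[i] - map[y][c][i]);
        }
    std::printf("BMU at (%d,%d)\n", bx, by);
}
```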
A collection of entity descriptions may be conveniently represented by a set of tuples or a set of objects with appropriate attributes. The utility of relational and object databases is based on this premise. Methods of multivariate analysis can naturally be applied to such a representation. Multidimensional Scaling deserves particular attention because of its suitability for visualization. The advantage of using Multidimensional Scaling is its generality. Provided that one can judge or calculate the dissimilarity between any pair of data objects, this method can be applied. This makes it invariant to the number and types of object attributes. To take advantage of this method for visualizing large collections of data, however, its inherent computational complexity needs to be alleviated. This is particularly the case for least squares scaling, which involves numerical minimization of a loss function; on the other hand the technique gives better configurations than analytical classical scaling. Numerical optimization requires selection of a convergence criterion, i.e. deciding when to stop. A common solution is to stop after a predetermined number of iterations has been performed. Such an approach, while guaranteed to terminate, may prematurely abort the optimization. The incremental Multidimensional Scaling method presented here solves these problems. It uses cluster analysis techniques to assess the structural significance of groups of data objects. This creates an opportunity to ignore dissimilarities between closely associated objects, thus greatly reducing input size. To detect convergence it maintains a compact representation of all intermediate optimization results. This method has been applied to the analysis of database tables.
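For concreteness, least squares scaling minimizes the stress sum over pairs i&lt;j of (d_ij - delta_ij)^2, where delta_ij is the input dissimilarity and d_ij the embedded distance. A minimal gradient-descent sketch (with the fixed iteration count the paper argues against, purely for brevity; data are made up):

```cpp
// Least squares MDS sketch: gradient descent on the raw stress in 2D.
#include <cmath>
#include <cstdio>

int main() {
    const int N = 3;
    double delta[N][N] = {{0,1,2},{1,0,1},{2,1,0}};   // input dissimilarities
    double p[N][2] = {{0,0},{0.4,0.1},{0.9,-0.2}};    // initial configuration
    for (int it = 0; it < 200; ++it) {
        double grad[N][2] = {};
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                if (i == j) continue;
                double dx = p[i][0]-p[j][0], dy = p[i][1]-p[j][1];
                double d = std::sqrt(dx*dx + dy*dy) + 1e-12;
                double c = 2 * (d - delta[i][j]) / d;  // d/dp of (d-delta)^2
                grad[i][0] += c * dx;  grad[i][1] += c * dy;
            }
        for (int i = 0; i < N; ++i) {                  // fixed small step;
            p[i][0] -= 0.05 * grad[i][0];              // the paper's method
            p[i][1] -= 0.05 * grad[i][1];              // detects convergence
        }                                              // rather than counting
    }                                                  // iterations
    for (int i = 0; i < N; ++i)
        std::printf("p%d = (%.2f, %.2f)\n", i, p[i][0], p[i][1]);
}
```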
In this paper we will discuss a prototype visual programming environment for text applications called Eye-ConTact. Eye-ConTact is modeled on scientific visualization systems such as NAG's Explorer™. The user creates a flow chart of how the 'experiment' on a text is to be conducted. The map of this flow of data allows the researcher to see the logic of the study of a text and then reproduce such studies with other texts, or alter the analysis. To adapt the visual programming model to the community of textual scholars, the program must be usable and extendable by a community with limited resources for computing, and not entirely convinced that computationally-based research is relevant. The 'keep it simple' injunction is crucial in such a project. Furthermore, there are no defined or de facto standards for data formats or data representations in this community. We will discuss the original (1997) prototype. Feedback from demonstrations led to changes to the programming interface and the underlying text analysis engine, in order to make the former more indicative of the steps in the analysis process and the latter more portable and extensible. We will discuss the design decisions we made to address these two issues.
Surface visualization is very important within scientific visualization. The surfaces depict a value of equal density (an isosurface) or display the surrounds of specified objects within the data. Likewise, in two dimensions contour plots may be used to display the information, and similarly, in four dimensions hypersurfaces may be formed around hyperobjects. These surfaces (or contours) are often formed from a set of connected triangles (or lines). These piecewise segments represent the simplest non-degenerate object of that dimension and are named simplices. In four dimensions a simplex is represented by a tetrahedron, also known as a 3-simplex. Thus, a continuous n-dimensional surface may be represented by a lattice of connected (n-1)-dimensional simplices. This lattice of connected simplices may be calculated over a set of adjacent n-dimensional cubes, for example via the Marching Cubes algorithm. We propose that this local-cell tiling method may be usefully applied to four dimensions and potentially to N dimensions. Thus, we organize the large number of traversal cases into major cases; introduce the notion of a sub-case (which enables the large number of cases to be further reduced); and describe three methods for implementing the Marching Cubes lookup table in four dimensions.
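The first step of any such lookup-table method is computing the case index; in four dimensions a cell has 16 vertices, hence 2^16 = 65536 traversal cases before symmetry reduction. A sketch (the field values are made up):

```cpp
// Classify the 16 vertices of a 4D cell against the iso-value to build the
// case index that keys the lookup table, exactly as 8 vertices key the
// 256-entry table in 3D.
#include <cstdio>

int main() {
    double v[16];                         // samples at the 16 cell vertices
    for (int i = 0; i < 16; ++i)          // hypothetical field values
        v[i] = (i % 5) * 0.25;
    double iso = 0.5;
    unsigned caseIndex = 0;
    for (int i = 0; i < 16; ++i)
        if (v[i] >= iso) caseIndex |= 1u << i;   // one bit per vertex
    // caseIndex selects a list of tetrahedra (3-simplices) from the
    // 65536-entry table, just as the 3D table selects triangles.
    std::printf("case %u of 65536\n", caseIndex);
}
```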
In this paper, we consider a parallel rendering model that exploits the fundamental distinction between rendering and compositing operations, by assigning processors from specialized pools for each of these operations. Our motivation is to support the parallelization of general scan-line rendering algorithms with minimal effort, basically by supporting a compositing back-end (i.e., a sort-last architecture) that is able to perform user-controlled image composition. Our computational model is based on organizing rendering as well as compositing processors on a BSP-tree, whose internal nodes we call the compositing tree. Many known rendering algorithms, such as volumetric ray casting and polygon rendering can be easily parallelized based on the structure of the BSP-tree. In such a framework, it is paramount to minimize the processing power devoted to compositing, by minimizing the number of processors allocated for composition as well as optimizing the individual compositing operations. In this paper, we address the problems related to the static allocation of processor resources to the compositing tree. In particular, we present an optimal algorithm to allocate compositing operations to compositing processors. We also present techniques to evaluate the compositing operations within each processor using minimum memory while promoting concurrency between computation and communication. We describe the implementation details and provide experimental evidence of the validity of our techniques in practice.
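At each internal node of the compositing tree, the two children's partial images are combined in BSP visibility order; the paper supports user-controlled composition, so the standard 'over' operator shown below is just one example (image layout and names are illustrative):

```cpp
// Per-pixel work at a compositing-tree node: front 'over' back with
// premultiplied alpha, the usual sort-last composition step.
struct RGBA { float r, g, b, a; };          // premultiplied by alpha

RGBA over(const RGBA& f, const RGBA& b) {
    float t = 1.0f - f.a;
    return { f.r + t * b.r, f.g + t * b.g, f.b + t * b.b, f.a + t * b.a };
}

// A compositing processor applies this across its assigned scanlines,
// overlapping this computation with communication of the next span.
void compositeSpan(RGBA* out, const RGBA* front, const RGBA* back, int n) {
    for (int i = 0; i < n; ++i) out[i] = over(front[i], back[i]);
}
```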
This paper presents the system called 'MADE.' MADE is an object-oriented software environment for research and application development for multimedia data processing. It is an integrated environment that supports algorithm development, management, and testing. It provides tools for image, sound, and graphic data processing even though the main emphasis is on image processing. The proposed system adopts a true object-oriented approach supporting well-separated data classes similar to IUE classes and provides multiple user interfaces for various classes of users. By separating the interface layer from data processing functions, it allows algorithm developers to write their functions without worrying about user interface programming. It is specially designed to work with both graphic and image objects in the same domain. It can display and edit image features in the graphic domain and process them in the image domain.
Scientific visualization methods have become the new standards for analyzing scientific datasets. However, while these visualization routines provide an excellent qualitative means for data analysis, many researchers still require quantitative information about their data. This paper describes how the marching cubes iso-surface algorithm can be modified to produce not only the qualitative information about the shape of a surface but also some quantitative information regarding the volume of space contained within or beneath that surface. The original marching cubes algorithm decomposes a dataset into cubes based on the grid structure provided. It then searches each cube, determining whether or not the surface intersects that particular cube. During this search, volume calculations can be performed for each cube or partial cube that is contained within or beneath the surface. The results of these calculations can then be summed to obtain a measurement for the volume of space that is within or beneath the surface. These additional tasks can easily be incorporated into the marching cubes algorithm, providing the researcher with a volumetric measurement to support the more qualitative visual information generally produced by an iso-surface algorithm.
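A sketch of the accumulation pass follows, using a simple corner-counting estimate for partial cubes; this is one possible rule for the boundary cells, not necessarily the paper's exact computation.

```cpp
// Volume accumulation during the marching-cubes sweep: full interior cubes
// contribute their whole volume; boundary cubes contribute the fraction of
// corners below the iso-value (a crude partial-cube estimate).
#include <cstdio>

int main() {
    const int NX = 8, NY = 8, NZ = 8;           // cell counts
    double h = 1.0, iso = 0.5;                  // grid spacing, iso-value
    auto field = [](int i, int j, int k) {      // hypothetical scalar field
        return (i + j + k) / 21.0;
    };
    double volume = 0, cell = h * h * h;
    for (int k = 0; k < NZ; ++k)
        for (int j = 0; j < NY; ++j)
            for (int i = 0; i < NX; ++i) {
                int inside = 0;
                for (int c = 0; c < 8; ++c)     // the cube's 8 corners
                    if (field(i + (c & 1), j + ((c >> 1) & 1), k + (c >> 2))
                        < iso) ++inside;
                volume += cell * inside / 8.0;  // 8/8 full, 0/8 empty,
            }                                   // else a partial estimate
    std::printf("approx. enclosed volume = %.2f\n", volume);
}
```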
This paper describes a discrete ray-tracing algorithm which employs an adaptive hierarchical spatial subdivision (octree) technique to represent a 3D uniform binary voxel space. The binary voxel space contains voxels of two kinds: 'surface' and 'non-surface.' Surface voxels include property information such as the surface normal and color. The use of octrees dramatically reduces the memory required to store 3D models; the average compression ratio ranges from 1:24 to 1:50 compared to uncompressed voxels. A fast ray casting algorithm called BOXER was developed, which renders 256 × 256 × 256 and 512 × 512 × 512 volumes in near real time on standard Intel-based PCs.
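The compression comes from collapsing uniform regions into single octree leaves; a sketch of a possible node layout (illustrative only, not the paper's exact structure):

```cpp
// Octree over a binary voxel space: uniform subtrees are one node, and only
// 'surface' leaves carry property information, which is where the reported
// 1:24 to 1:50 memory reduction comes from.
#include <array>
#include <memory>

enum class Kind { Empty, Surface, Mixed };

struct OctNode {
    Kind kind = Kind::Empty;
    // Property information, meaningful only for surface leaves:
    std::array<float, 3> normal{};               // surface normal
    std::array<unsigned char, 3> color{};        // RGB
    // Children exist only for mixed nodes; a uniform region of any size
    // collapses to a single childless node.
    std::array<std::unique_ptr<OctNode>, 8> child;
};
```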
In this work we give an approach to analyzing the topological complexity of the surface inside a cube in the Marching Cubes (MC) algorithm. The number of intersections of the isosurface with the cube diagonals is used as the complexity criterion. In the case of trilinear interpolation, the interpolant restricted to each cube diagonal is a cubic equation, so it is possible to find the coordinates of up to three intersections of the approximated surface with the diagonal. We propose a common technique for choosing the right subcase from the extended lookup table by using this surface complexity criterion.
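Restricting the trilinear interpolant to the main diagonal (t,t,t) of the unit cube gives the cubic explicitly: f(t) = A(1-t)^3 + B t(1-t)^2 + C t^2(1-t) + D t^3, where A is the corner value at the origin, D at the far corner, and B, C are the sums over the corners at one and two steps along. The sketch below derives the coefficients and counts crossings; the corner values are made up, and sign-change sampling stands in for an exact cubic solve.

```cpp
// Count isosurface crossings along the main diagonal of one MC cell.
#include <cstdio>

int main() {
    // Corner values v[x][y][z] of a hypothetical cell; iso-value 0.
    double v[2][2][2] = {{{-1, 2}, {3, -2}}, {{2, -3}, {-1, 4}}}, iso = 0;
    double A = v[0][0][0];
    double B = v[1][0][0] + v[0][1][0] + v[0][0][1];
    double C = v[1][1][0] + v[1][0][1] + v[0][1][1];
    double D = v[1][1][1];
    // Expanding f(t) above into c3*t^3 + c2*t^2 + c1*t + c0 - iso:
    double c3 = -A + B - C + D, c2 = 3*A - 2*B + C, c1 = -3*A + B, c0 = A - iso;
    int roots = 0;                       // count sign changes by sampling
    double prev = c0;                    // (a production version would solve
    for (int i = 1; i <= 1000; ++i) {    // the cubic exactly)
        double t = i / 1000.0;
        double f = ((c3 * t + c2) * t + c1) * t + c0;
        if ((prev < 0) != (f < 0)) ++roots;
        prev = f;
    }
    std::printf("diagonal crossings: %d (0 to 3 possible)\n", roots);
}
```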
This paper describes a visualization system which has been used as part of a data-mining effort to detect fraud and abuse within state medicare programs. The data-mining process generates a set of N attributes for each medicare provider and beneficiary in the state; these attributes can be numeric, categorical, or derived from the scoring process of the data-mining routines. The attribute list can be considered an N-dimensional space, which is subsequently partitioned into some fixed number of cluster partitions. The sparse nature of the clustered space provides room for the simultaneous visualization of more than 3 dimensions; examples in the paper show 6-dimensional visualization. This ability to view higher dimensional data allows the data-mining researcher to compare the clustering effectiveness of the different attributes. Transparency-based rendering is also used in conjunction with filtering techniques to provide selective rendering of only those data which are of greatest interest. Nonlinear magnification techniques are used to stretch the N-dimensional space, allowing focus on one or more regions of interest while still allowing a view of the global context. The magnification can be applied either globally or in a constrained fashion to expand individual clusters within the space.
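One common radial formulation of such magnification (not necessarily the paper's exact function) stretches space near a focus while leaving the far field, and hence the global context, essentially in place:

```cpp
// Radial nonlinear magnification along one axis: f(r) = r(1+m)/(1+mr) maps
// [0,1] onto [0,1], expands near the focus (slope 1+m at r=0), and
// approaches the identity far away.
#include <cstdio>

double magnify(double r, double m) { return r * (1.0 + m) / (1.0 + m * r); }

int main() {
    double m = 4.0;                               // magnification strength
    for (double x = 0.0; x <= 1.0001; x += 0.25)  // one axis, focus at x = 0
        std::printf("%.2f -> %.3f\n", x, magnify(x, m));
}
```

Applied per axis around a chosen focus point (or restricted to one cluster's bounding region), this gives the global or constrained stretching the abstract describes.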
Although the use of icons or glyphs is a common way of displaying multivariate data, these techniques do not scale well with dataset size. Displaying large amounts of data requires the placement of many icons in the display, often resulting in images which are cluttered and in which important patterns and structures are obscured. In this paper we present an adaptive multi-scale technique, based on concepts of abstraction and importance combined with icon display, that helps alleviate the problem of visual clutter. Abstraction functions are used to transform and reduce the data; importance functions are used to identify important areas within the data. Abstractions of abstractions are computed, forming a multi-scale representation of the data which is used for display. The data is displayed by distributing a specified number of icons through it using the computed importance values. The multi-scale structure ensures that relative importance is maintained through the distribution of icons in the image. We demonstrate this technique by applying it to multivariate data defined over two dimensions. We show how a range of abstraction functions can be used with importance and display methods to display and explore a number of example datasets.
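One way to realize the distribution step (an assumed scheme consistent with the text; the region importances are made up) is to split a fixed icon budget across abstracted regions in proportion to their importance, which keeps clutter bounded by construction:

```cpp
// Allocate a fixed icon budget proportionally to per-region importance.
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> importance = {0.05, 0.40, 0.15, 0.30, 0.10};
    int budget = 200;                       // total icons allowed on screen
    double total = 0;
    for (double w : importance) total += w;
    for (std::size_t r = 0; r < importance.size(); ++r) {
        int icons = int(budget * importance[r] / total + 0.5);
        std::printf("region %zu: %d icons\n", r, icons);
    }
}
```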
In this paper, we present an object-oriented design and implementation of ray tracing on a virtual distributed computing environment called Dove (Distributed Object based Virtual computing Environment). Dove consists of distributed objects which interact with one another via method invocation. To the user it appears logically as a single parallel computer built from a set of heterogeneous hosts connected by a network, and it provides high performance via efficient implementation of parallelism, heterogeneity, portability, scalability and fault tolerance. We show that ray tracing software can be implemented and maintained in a distributed environment with ease and efficiency by providing three abstract classes: TaskManager, Tracer and ObjectStorage. TaskManager schedules the assignment of pixels to Tracer, which in turn renders them. ObjectStorage receives a ray from Tracer and returns the nearest object-ray intersection. We show that various ray tracing algorithms can be built by designing subclasses of TaskManager which incorporate centralized/decentralized task scheduling and dynamic/static load balancing schemes respectively. We provide users with flexibility through the design of ObjectStorage subclasses for storing objects, ranging from a simple array to complicated structures like a hierarchical bounding volume, an octree, or a regular grid of voxels. Moreover, ObjectStorage can be distributed either by replication or by partition, and the partitioned form maintains a cache of objects to reduce communication overheads. We show that users can easily build distributed ray-tracing software with different mechanisms, with minimal modifications, using the proposed classes.
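The three abstract classes translate directly into C++ interfaces; a sketch follows (method names and signatures are illustrative, not Dove's actual API):

```cpp
// Plug points of the design: subclasses supply scheduling, shading, and
// object storage independently of one another.
#include <cstddef>

struct Ray  { /* origin, direction */ };
struct Hit  { bool valid = false; /* object id, distance, ... */ };
struct Color { float r = 0, g = 0, b = 0; };

class ObjectStorage {                  // array, BVH, octree, voxel grid, ...
public:
    virtual ~ObjectStorage() = default;
    virtual Hit nearestIntersection(const Ray& r) = 0;
};

class Tracer {                         // renders the pixels it is assigned
public:
    virtual ~Tracer() = default;
    virtual Color renderPixel(std::size_t x, std::size_t y,
                              ObjectStorage& scene) = 0;
};

class TaskManager {                    // centralized or decentralized,
public:                                // static or dynamic load balancing
    virtual ~TaskManager() = default;
    virtual bool nextTask(std::size_t& firstPixel, std::size_t& count) = 0;
};
```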
This paper presents a new technique that enhances diffuse interreflection with the concepts of backward ray tracing. In this research, we have modeled the diffuse rays under the following conditions. First, since reflection from diffuse surfaces occurs in all directions, it is impossible to trace all of the reflected rays; we therefore confine the diffuse rays by sampling a spherical angle around the normal vector. Second, the distance traveled by the energy reflected from a diffuse surface differs according to the object's properties and is comparatively short. Considering that rays created on diffuse surfaces affect a relatively small area, it is very inefficient to trace all of the sampled diffuse rays; therefore, we set a fixed critical distance, and all rays beyond this distance are ignored. Because the improved backward ray tracing can model illumination effects such as color bleeding, it can replace the radiosity algorithm in limited environments.
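A sketch of the two confinements (an assumed formulation, with illustrative names): directions are sampled uniformly within a spherical angle around the normal, and any hit beyond the critical distance is dropped.

```cpp
// Cone-limited diffuse sampling plus a critical-distance cutoff.
#include <cmath>
#include <cstdio>
#include <cstdlib>

const double kPi = 3.14159265358979323846;

struct Vec3 { double x, y, z; };

// Sample a direction within half-angle `spread` around the +z normal,
// uniform in solid angle (rotate into the surface's local frame in real use).
Vec3 sampleCone(double spread) {
    double u = std::rand() / (RAND_MAX + 1.0);
    double v = std::rand() / (RAND_MAX + 1.0);
    double cosT = 1.0 - u * (1.0 - std::cos(spread));
    double sinT = std::sqrt(1.0 - cosT * cosT);
    double phi = 2.0 * kPi * v;
    return { sinT * std::cos(phi), sinT * std::sin(phi), cosT };
}

bool contributes(double hitDistance, double criticalDist) {
    return hitDistance <= criticalDist;   // energy beyond this is dropped
}

int main() {
    Vec3 d = sampleCone(0.3);             // ~17 degrees around the normal
    bool used = contributes(1.5, 2.0);    // hit at 1.5 within critical 2.0
    std::printf("dir=(%.2f,%.2f,%.2f) used=%d\n", d.x, d.y, d.z, used);
}
```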
Volume rendering is a technique that produces a 2D image of an object on the image screen from 3D volume data. The ray casting algorithm, one of the most popular volume rendering techniques, generates detailed, high quality images compared with other volume rendering algorithms, but because it is a highly time-consuming process given a large number of voxels, many acceleration techniques have been developed. Here we introduce a new acceleration technique: an efficient space leaping method. Our method traverses the volume data and projects the 3D locations of voxels onto the image screen to find the pixels that have non-zero values in the final volume image, along with the locations of the non-empty voxels closest to each ray. During this process, adaptive run-length encoding and a line drawing algorithm are used to traverse the volume data and find pixels with non-zero values efficiently. We then cast rays not through the entire screen but only through the projected screen pixels, and start the rendering process directly from the non-empty voxel locations. This new method shows significant time savings when applied to surface extraction, without loss of image quality.
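A simplified sketch of the space-leaping idea (a plain orthographic projection stands in for the paper's run-length-encoded traversal and line drawing; the volume is a made-up sphere):

```cpp
// Pass 1 projects every non-empty voxel and keeps the nearest depth per
// pixel; pass 2 would cast rays only through touched pixels, starting at
// the recorded depth instead of at the volume boundary.
#include <cfloat>
#include <cstdio>
#include <vector>

int main() {
    const int N = 64, W = 64, H = 64;          // volume and screen sizes
    std::vector<float> nearest(W * H, FLT_MAX);
    auto occupied = [](int x, int y, int z) {  // hypothetical binary volume
        return (x-32)*(x-32) + (y-32)*(y-32) + (z-32)*(z-32) < 400;
    };
    for (int z = 0; z < N; ++z)                // orthographic projection
        for (int y = 0; y < N; ++y)            // along +z records the entry
            for (int x = 0; x < N; ++x)        // depth of each pixel's ray
                if (occupied(x, y, z) && z < nearest[y * W + x])
                    nearest[y * W + x] = float(z);
    int skipped = 0;                           // untouched pixels need no ray
    for (float d : nearest) if (d == FLT_MAX) ++skipped;
    std::printf("pixels skipped entirely: %d of %d\n", skipped, W * H);
}
```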