An extensive family of advanced virtual reality-telepresence systems and components has been developed. The purpose of these systems and components is to facilitate the recording, processing, display of, and interaction with audio and video signals representing a three-dimensional (3-D) scene or subject. An overview of the systems currently available for license includes: a color video camera with real-time, simultaneous spherical FOV coverage; a similar camera for recording the various sides of a 3-D subject; an image-based system for real-time processing and distribution of the camera images onto 3-D wireframes (the resultant camcorders are generally referred to as virtual reality/telepresence 'VRT camcorders'TM); a 'VIDEOROOM'TM large-theater display system in which the floor, walls, and ceiling form a continuous display about the viewer; 'INaVISION'TM, an HMD system for viewing the same images; and interactive control devices for manipulating the 3-D image and audio signals. Feasible applications of the technology include visual and auditory simulation, host-vehicle control, remote-vehicle control, and video teleconferencing. Rough costs of systems and components, photographs of a prototype system, and component illustrations are provided. Future directions of R&D are presented (i.e., Project HEAVEN: Humankind Eternal-Life Artificial-Intelligence Virtual Environment Network).
The visualization domain handles complex data sets, which are visualized on a 2-D screen. This is achieved by data transformations that cause a loss of information. Data can be analyzed more easily when interaction is supported. In the context of this paper, multidimensional input devices are interaction devices with more than two degrees of freedom. I consider a 6-D ball (the Spaceball from Spatial Systems) and a glove (the DataGlove from VPL, which incorporates a Polhemus 3Space Isotrack system); these devices support full 3-D interaction. The paper is organized into two parts. The first part investigates the suitability of input devices for interaction in 3-D scenes in general. A system has been realized that supports the comparison of the conventional mouse and dial devices with the modern devices mentioned above. In particular, the graphics interactions of identifying and transforming an object (translation and rotation) are investigated and the results are presented. This system can also be used to train users in becoming familiar with the input devices. The second part of the paper describes the use of multidimensional input devices in scientific visualization applications currently under research at FhG-IGD.
The xyzscope is the product of a unique technology that provides 3-D display in real time and free space without the use of stereo spectacles. A flying light spot scans a three-dimensional region to create a true x-y-z oscilloscope. A planar light source, e.g., a CRT or LED array, is projected by a rotating lens into a display volume that has no enclosure. Physical objects can easily be combined with the display and easily removed. The display is bright and the system is inherently stable. It is sufficient to display just two points to establish a correct perception of depth. The extent of off-central viewing varies inversely with the depth dimension of the display. Correction of lens aberrations fixes the display in space and eliminates viewpoint deformations. Still photography of displays is accomplished with an ordinary single-lens camera, and a photograph of a display that incorporates a physical object is presented.
A major problem with communication across cultures, whether professional or national, is that simple language translation is often insufficient to communicate the concepts. This is especially true when the communicators come from highly specialized fields of knowledge or from national cultures with long histories of divergence. The problem becomes critical when the goal of the communication is international negotiation dealing with such high-risk items as arms control or trade wars. Virtual reality technology has considerable potential for facilitating communication across cultures by immersing the communicators within multiple visual representations of the concepts and providing control over those representations. Military distributed team training provides a model of virtual reality suitable for cross-cultural communication such as negotiation. In both team training and negotiation, the participants must cooperate, agree on a set of goals, and achieve mastery over the concepts being negotiated. Team training technologies suitable for supporting cross-cultural negotiation exist (branch wargaming, computer image generation and visualization, distributed simulation), and they have developed along different lines than traditional virtual reality technology. Team training de-emphasizes the realism of physiological interfaces between the human and the virtual reality, and emphasizes the interaction of humans with each other and with intelligent simulated agents within the virtual reality. This approach to virtual reality is suggested as the more fruitful one for future work.
Virtual environment interfaces to computer programs in several diverse application areas are currently being developed. The users of virtual environments will require many different methods to interact with the environments and the objects in them. This paper reports on our use of virtual menus as a method of interacting with virtual environments. Several aspects of virtual environments make menu interactions different from interactions with conventional menus. We review the relevant aspects of conventional menus and virtual environments, in order to provide a frame of reference for the design of virtual menus. We discuss the features and interaction methodologies of two different versions of virtual menus which have been developed and used in our lab. We also examine the problems associated with our original version, and the enhancements incorporated into our current version.
Current methods for debriefing Navy fighter pilots after real and simulated missions are insufficient for handling the speed and complexity of modern air combat. The state of the art in tactical air combat debriefing is essentially a two-dimensional-plus-time view of a problem whose dimensionality consists of three spatial dimensions plus time plus other non-spatial parameters. The David Sarnoff Research Center (Sarnoff) is developing an advanced debriefing system for Navy fighter-jet training, combat development, and research. Called CyberView, the system consists of an advanced interactive data visualization system displaying multidimensional abstract and concrete combat data in three dimensions plus time, an interactive data analysis system for rapid data manipulation and studies, and a faster-than-real-time predictive simulation based on the branch wargaming paradigm of military planning for 'what if?' analysis. In our research and development of CyberView, we are attempting to give pilots and battle planners greater awareness of the complex situations that occur during air operations, and the ability to look into the future at the effects of decisions on battle outcomes. When complete, the envisioned system will be capable of intuitively displaying combat errors to pilots, permitting the pilots to re-fight the same battles with better awareness of their situation, giving battle planners the ability to perform tradeoff studies on tactical decisions in order to optimize battle outcomes, and providing an analytical testbed for automated-forces paradigms, algorithms, and effectiveness.
A major criterion in the design of backhoes (and other heavy machinery) is the ability of the operator to see all critical portions of the vehicle and the surrounding environment. Computer graphics provides a method for analyzing this ability prior to the building of full-scale wooden models. By placing the computer graphics camera at the operator's eyepoint, designers can detect poor placement of supports, blind spots, etc. In this type of analysis, the camera becomes an active, yet somewhat imperfect, participant in our understanding of what an operator of the backhoe 'sees'. In order to simulate a backhoe operator's vision from within a cab, one needs to expand the angle of view of the camera to mimic unfocused, peripheral vision. A traditional wide-angle lens creates extreme distortions that are not present in 'natural' vision, and is therefore hardly an adequate representation. The solution we arrived at uses seven cameras fanned out horizontally in order to capture a relatively undistorted 155-degree angle of view. In addition, another camera displays and numerically analyzes the percentage of the loader bucket that is visible and blocked. These two views are presented simultaneously in order to address both the 'naturalistic' and quantitative needs of the designers, as well as to point to the incompleteness of any one representation of a scene. In the next phase of this project we will bring this type of analysis into a machine environment more conducive to interactivity: a backhoe simulator with levers to control the vehicle and bucket positions, viewed through a virtual reality environment.
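The abstract gives the key numbers for the camera fan but not the per-camera geometry; a minimal sketch of the implied arrangement, assuming the seven view frusta abut with no overlap, follows (the names are ours, not the authors'):

```python
# Seven cameras fanned horizontally to tile a 155-degree angle of view,
# per the abstract; each camera then covers 155/7, roughly 22.14 degrees.
N_CAMERAS = 7
TOTAL_FOV_DEG = 155.0
per_camera_fov = TOTAL_FOV_DEG / N_CAMERAS

# Yaw of each camera's optical axis relative to straight ahead (0 deg),
# spaced one per-camera FOV apart so the view frusta abut edge to edge.
yaw_angles = [(i - (N_CAMERAS - 1) / 2) * per_camera_fov
              for i in range(N_CAMERAS)]
# -> approx. [-66.4, -44.3, -22.1, 0.0, 22.1, 44.3, 66.4] degrees
```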
Our goal is to visualize the shape and structure in a data set of earthquake hypocenters collected from the island of Hawaii over a twenty-three year period. The earthquakes provide information about the magma system in this region and define a collection of conduits and reservoirs around the active volcanos. Because the data is scattered, traditional visualization methods are difficult to use (without interpolation). In this paper, we present some preliminary efforts to extract and define the structures within the data set.
This paper describes the combination of visualization techniques with animation to visualize geometrical and temporal engineering data created as part of a two-vehicle accident investigation. The exemplar case discussed involved recreating the collision between an airborne motorcycle, having jumped a hill, and a three-wheeled all-terrain vehicle approaching the hill from the side opposite that of the motorcycle. Potential visibility of the all-terrain vehicle's safety flag was critical to the litigation. The paper discusses the visualization of the recreated collision as viewed from a number of vantage points, including the motorcycle operator's perspective. Photogrammetry and conventional surveying techniques were employed to generate a three-dimensional geometrical computer model of the hill, which had been leveled prior to the accident investigation. Matching photographs of the site taken at the time of the accident to images generated by the computer was necessary to validate the computer model; this process is discussed. Motion data for the two vehicles were generated based on eyewitness reports and traditional accident investigation techniques. These data were visualized in a computer-generated recreation of the accident.
The analysis of human motion can be advanced by studying motion not only numerically but also graphically. We present a system for the three-dimensional graphical analysis of human motion. The system involves the integrated operation of an image acquisition unit, a robot arm for 3-D target presentation, an image reconstruction unit, and graphic analysis and animation software. A five-degree-of-freedom robot arm is used to present single targets or target trajectories for subjects to track. All target locations are software generated, based on a mathematical formulation of the desired sectoring of space. Two optoelectronic cameras are used to directly sense the positions of infrared diodes attached to the subject's limb segments. Three-dimensional trajectories of each point are computed from the two sets of 2-D images. The 3-D trajectories of the robot and of the subject are reconstructed and displayed on a Silicon Graphics Iris workstation. A variety of programs display kinematic features of the hand and joint trajectories, synchronized with reconstructed images of the three-dimensional trajectory paths of individual limb segments. The user has real-time interactive control over the viewing angle, size, and screen position of the limb trajectories. Images representing all 3-D target and finger positions, and scatterplots of target and finger distance, azimuth, and elevation from the shoulder, can also be presented. Finally, software was developed to display the reconstructed motion of the arm by representing its various segments as surfaced cylinders. Light-source, shading, and shadowing effects are used to calculate brightness over the surface of each cylinder. The end points of each cylinder are determined by the 3-D locations of the infrared diodes attached to the subject's limb segments. The arm is animated to reproduce the velocity patterns inherent in the digital trajectory records. Various interactive options exist for viewing the moving image of the arm together with representations of the trajectories of the individual joints.
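The abstract does not spell out how the two sets of 2-D images yield 3-D trajectories; a minimal sketch of one standard approach, linear (DLT) triangulation from two calibrated cameras, follows. The projection matrices and image points are placeholders, not values from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two camera views.

    P1, P2 -- 3x4 projection matrices of the two calibrated cameras.
    x1, x2 -- (u, v) image coordinates of the same diode in each view.
    Returns the marker's 3-D position in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 for the homogeneous point X via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applying this per diode and per video frame gives the 3-D trajectory of each limb-segment endpoint.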
In this paper we present ways of graphing large space series, time series, or combined space-and-time series. A pixel represents a region consisting of a number of points; the color of the pixel is therefore a function of all the values within the region it represents. A lossy algorithm for image compression is also introduced, along with a distance between the approximated image and the original image. This distance decreases with the number of data points used and depends on the geometric design of the sampling and on the semivariogram. The distance decreases drastically as the sampling distance between consecutive points decreases from the zone of influence to half of the zone of influence. As the sampling distance becomes smaller than half the zone of influence, the distance decreases very slowly and converges to zero as the sampling distance converges to zero, in which case we have sampled the whole space.
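The abstract leaves the per-pixel aggregation function unspecified; a minimal sketch, assuming a simple mean over each pixel's region, follows (the reducer and names are illustrative only):

```python
import numpy as np

def aggregate_series(values, n_pixels, reducer=np.mean):
    """Map a long 1-D series onto n_pixels pixels.

    Each pixel summarizes the block of values falling in its region;
    the reducer (mean here; min, max, or median are equally plausible)
    plays the role of 'a function of all the values within the region'.
    """
    blocks = np.array_split(np.asarray(values, dtype=float), n_pixels)
    return np.array([reducer(b) for b in blocks])

# Illustrative usage: compress a million-point series to one 1280-pixel row.
series = np.random.default_rng(0).normal(size=1_000_000)
row = aggregate_series(series, 1280)
```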
This paper describes and compares two different methods for combining multisensor images into single integrated pictures for visual data analysis and data exploration. In the specific case considered here, the original images are thermal (IR) and visible. The first method preserves contrast in the thermal image and modulates local contrast by the structure of the high-frequency information in the visible image. This method produces a conventional gray-scale picture. The second method encodes the intensity at each pixel position in each image as the length of a line-segment, or 'limb,' of a stick-figure icon at the corresponding position in the output picture. This method produces an 'iconographic' picture. Although these two approaches differ significantly, they both satisfy the goal of incorporating the unique features of the thermal and visible images in a single integrated picture. We discuss the strengths and weaknesses of each method, and we suggest ways in which each might be improved and extended.
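The abstract describes the first method only qualitatively; a minimal sketch of one plausible reading, with the thermal image as the base signal and high-pass detail from the visible image modulating it, follows (sigma and gain are our assumptions, not the paper's parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_thermal_visible(thermal, visible, sigma=3.0, gain=0.5):
    """Gray-scale fusion sketch: thermal supplies the overall contrast,
    visible contributes high-frequency structure. This is an assumed
    formulation; the paper's actual method may differ in detail."""
    thermal = np.asarray(thermal, dtype=float)
    visible = np.asarray(visible, dtype=float)
    detail = visible - gaussian_filter(visible, sigma)  # high-pass part
    return np.clip(thermal + gain * detail, 0, 255)
```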
This paper describes a joint research project in data visualization between MetLife and the CSTAR group at Andersen Consulting. The goal of the project was to produce a tool for retrieving, displaying, and analyzing competitive information that would run on conventional PC platforms. As part of the visualization scheme, icon clusters provide a graphical representation of each case's salient features, while 1-, 2-, or 3-dimensional maps of clusters reveal relationships among the cases. A graphical query facility allows users to specify cluster maps dynamically. In addition, the tool supports an ever-present 'bird's-eye view' of the information space being analyzed. The tool is particularly valuable for exploring patterns among stories grouped and viewed in the 2- and 3-dimensional views, patterns which would be difficult to discern from a hardcopy version.
For some time, aircraft accident investigation has been a reverse-engineering procedure. Failure analysis has taken a strict engineering approach, disregarding many of the alternatives relating to cause. As aircraft become automated, many related factors will need to be considered in order to build an accurate reconstruction of an accident. The Aircraft Accident Construction Set will provide the investigator with the opportunity to investigate 'What if?' questions relating to causal factors. The system addresses both the vehicle and the human elements of performance. Significant integration of engineering principles and human-factors probability data enhances the outcomes assessment. The pictorial display provides a near real-time image of the vehicle as supported by the engineering data, rather than the more traditional approach of line graphs and interpolative tables. This enables the investigator to consider far more data at a given time than in the past. Interaction between the workstation and the investigator is through a menu system derived from data tables and displayed as icon symbology. Relational database operations and a user toolbox round out the system. Presentation of this system will provide a fundamental understanding of its capabilities, and an actual air carrier accident will be examined briefly.
Tiller is a tool for the interactive analysis of four-dimensional data. It is based on a powerful yet simple interface consisting of two windows: one displays an image representing two data dimensions; the second presents a grid where the user can interactively select points. Selecting single points prompts Tiller to display individual images, allowing basic data browsing. Sequences of points may also be specified to rapidly produce custom animations. The ability to produce such animations quickly is an effective way to explore data where the interesting aspects and structure may not be known in advance. To date we have used Tiller primarily for the visualization of time-series volumetric data. For each time step, we compute projections of the volume at different angles of rotation. Viewing the sequence of projections for one time step corresponds to visualizing the volume rotating around one axis. After computing the set of renderings for each time step, we use Tiller to view the data and to define animation sequences. The horizontal axis of the Tiller grid represents the rotation angle of the volume during rendering; the vertical axis represents time. Working with the grid, it is simple for the user to find interesting viewing angles and time intervals and to produce movies highlighting important aspects of the data. Currently we compute the volume renderings in advance and store them on disk. Our future plan is to use the same Tiller interface as the front end to a real-time volume rendering application utilizing supercomputers such as the Connection Machine or the Cray.
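The grid-to-frame bookkeeping is simple enough to sketch; the following assumes the precomputed renderings are stored flat in row-major (time, angle) order, which the abstract does not state (all names are hypothetical):

```python
def frame_index(angle_step, time_step, n_angles):
    """Map one Tiller grid point to a precomputed rendering.

    Horizontal grid axis = rotation angle, vertical axis = time,
    as in the abstract; flat row-major storage is our assumption.
    """
    return time_step * n_angles + angle_step

def animation_indices(path, n_angles):
    """A user-drawn sequence of grid points becomes a custom animation."""
    return [frame_index(a, t, n_angles) for (a, t) in path]

# Illustrative usage: rotate the volume at time step 10, then hold the
# view at angle step 45 while time advances.
path = [(a, 10) for a in range(0, 90, 5)] + [(45, t) for t in range(10, 30)]
frames = animation_indices(path, n_angles=360)
```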
Algorithm Development for Visualization Techniques
We introduce an adaptive variant of the LUM smoother. The smoother operates on a sliding window and is designed to eliminate impulsive components with minimal distortion. In any particular window, the amount of filtering is adjusted based upon quasi-range measures of dispersion. As simulation results indicate, in most cases the adaptive LUM smoother outperforms its fixed counterpart. Second, we generalize the two-stage LUM smoother to a multilevel order-statistic filter. The generalization leads to the development of some useful filters: multiple-window order-statistic filters and asymmetric order-statistic filters. We provide a detailed analytical and quantitative analysis of the proposed filters.
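For reference, the fixed LUM smoother that the adaptive variant tunes can be sketched as below: the output is the median of the center sample and the k-th lower and upper order statistics of the window, with k controlling the amount of smoothing. The adaptive variant's per-window choice of k from quasi-range dispersion measures is not reproduced here:

```python
import numpy as np

def lum_smooth(signal, window=5, k=2):
    """Fixed LUM smoother over a sliding window (edge-padded).

    output = median(x_(k), center, x_(N-k+1)); the center sample is
    pulled into [x_(k), x_(N-k+1)] only if it lies outside, which
    removes impulses with minimal distortion elsewhere.
    """
    x = np.asarray(signal, dtype=float)
    half = window // 2
    padded = np.pad(x, half, mode='edge')
    out = np.empty_like(x)
    for i in range(len(x)):
        w = np.sort(padded[i:i + window])
        lower, upper = w[k - 1], w[window - k]
        out[i] = np.median([lower, x[i], upper])
    return out
```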
The visualization and animation of computational fluid dynamics (CFD) data is vital to understanding the varied parameters that exist in the solution field. Scientists need accurate and efficient visualization techniques. The animation of CFD data is not only computationally expensive but also expensive in its allocation of memory, both RAM and disk. Preserving animations of CFD data visualizations is useful, since recreating an animation is expensive when dealing with extremely large data structures. Researchers of CFD data may wish to follow a particle trace over an experimental fuselage design, but are unable to retain the animation for efficient retrieval without re-rendering or consuming a considerable amount of disk space. Recording to video reduces the spatial image resolution from 1280 × 1024 to 512 × 480; hence the desire to save the animations on disk, which preserves the spatial and intensity quality of the rendered images and allows playback at approximately 30 frames/sec, the standard video rate. The goal is to develop optimal image compression algorithms that allow visualization animations, captured as independent RGB images, to be recorded to tape or disk. If recorded to disk, the image sequence is compressed in non-real time with a technique that allows subsequent decompression at approximately 30 frames/sec to match the temporal resolution of video. Initial compression is obtained by mapping the RGB colors in each frame to a 12-bit colormap image. The colormap is animation-sequence dependent; it is created by histogramming the colors in the animation sequence and mapping them to specific regions of the L*a*b* color coordinate system, taking advantage of that system's perceptual uniformity. Further compression is obtained by taking interframe differences, specifically comparing respective blocks between consecutive frames: if no change has occurred within a block, a zero is recorded; otherwise the entire block of 12-bit colormap indices is retained. The resulting block differences of the sequential frames in each segment are saved after Huffman coding and run-length encoding. Playback of an animation avoids much of the computation involved in rendering the original scene by decoding and loading the video RAM through the pixel bus. The algorithms will be written to take advantage of the system's hardware, specifically the Silicon Graphics VGX graphics adapter.
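The interframe block-differencing step lends itself to a short sketch; the block size and record layout below are our assumptions, since the abstract does not give them:

```python
import numpy as np

def block_differences(prev, curr, block=8):
    """Compare corresponding blocks of consecutive 12-bit colormap
    frames: record a zero flag for unchanged blocks, and retain the
    entire block of colormap indices otherwise (per the abstract).
    The 8x8 block size is an assumption."""
    h, w = curr.shape
    records = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            p = prev[y:y + block, x:x + block]
            c = curr[y:y + block, x:x + block]
            if np.array_equal(p, c):
                records.append(0)                  # block unchanged
            else:
                records.append((y, x, c.copy()))   # keep whole block
    return records
```

Per the abstract, the resulting record stream would then be Huffman coded and run-length encoded before being written to disk.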
Combining the geometry of the behavior of dynamical systems with a computer-generated solid model creates a complete environment for mechanical and visual feedback. Dynamical systems are represented mathematically by non-linear coupled differential equations, and investigation of the equations is usually limited to the behavior of the parameter space. When inconsistencies arise between the mathematical model and the physical system, either the model is modified or laboratory tests are conducted on the physical system. It is possible to combine these two methodologies. Using a commercial modeller, a physical model can be constructed for the system under investigation, in this example a single-legged hopping robot. The state equations for hopping robots in laboratory environments are well documented and extensively researched. By programming the modeller's animation keyframes with the appropriate script of time- and space-dependent motion amplitudes derived from the mathematical model, all of the individual functioning components can be subjected to their appropriate dynamics. A manifold visualizer was developed that computes a manifold of the system's behavior, which can be viewed while the physical model is animated. The complete virtual environment thus provides feedback from both the system dynamics and the physical modelling.
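The pipeline of integrating the state equations and sampling the result at keyframe times can be sketched with a toy stand-in model; the bouncing spring-mass 'hopper' below and all parameters are hypothetical, not the paper's documented state equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy vertical hopper: point mass on a springy leg that pushes only
# during stance (leg compressed below its rest length).
m, k, g, rest_len = 1.0, 800.0, 9.81, 0.5

def hopper(t, state):
    y, v = state                                  # height, velocity
    leg_force = k * max(rest_len - y, 0.0) / m    # stance phase only
    return [v, leg_force - g]

sol = solve_ivp(hopper, (0.0, 3.0), [1.0, 0.0], dense_output=True,
                max_step=0.001)

# Sample the trajectory at the modeller's keyframe times (30 fps assumed)
# to obtain the time-dependent motion amplitudes driving the animation.
keyframe_times = np.arange(0.0, 3.0, 1.0 / 30.0)
keyframes = [(t, sol.sol(t)[0]) for t in keyframe_times]
```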