The Naval Research Laboratory's Digital Mapping, Charting, and Geodesy Analysis Program is investigating the extension of the National Imagery and Mapping Agency's Vector Product Format (VPF) to handle a wide range of non-manifold 3D objects for modeling and simulation. The extended VPF, referred to as VPF+, makes use of a non-manifold data structure for modeling 3D synthetic environments. The data structure uses a boundary representation method.
In this paper, a technique for generating terrain elevation data for synthetic environments from laser-radar images is demonstrated. The method maintains the high resolution available in the source data while losing little significant information during data reduction. Separating forests and other above-ground objects from the ground surface is another important step in producing a reliable terrain elevation model. As a result, it is possible to generate terrain models with a resolution finer than 0.5 m from highly reduced data.
The operational applicability of Earth Observation data to facilitate decision making is demonstrated with examples taken from the work of the Western European Union Satellite Center in Madrid. Analysis and reporting techniques based on 3D representations of the surface of the Earth and Virtual Reality are described.
We have developed a set of tools that attack the problem of rapid construction of 3D urban terrains containing buildings, roads, trees, and other features. Heretofore, the process of creating such databases has been painstaking, with no integrated set of tools to model individual buildings, apply textures, place objects accurately with respect to other objects, and insert them into a database structure appropriate for real-time display. Since fully automated techniques for routinely building 3D urban environments using machine vision have not yet been entirely successful, our approach has been to build a set of semiautomated tools that support and make efficient a human interpreter working on a PC under Windows NT.
We propose an approach to simulating and visualizing data from sensors such as radar, lidar, and sonar. The main purpose is to investigate technology for reconstructing visual images of objects and spatial scenes from imaging-sensor data. The main applications are the formation of visual images and virtual reality scenes, and the enhancement and validation of visual image data during virtual environment synthesis in simulators. The investigation was carried out using software models of sensors, which we developed to operate in a virtual environment.
In order to overcome the difficulties of testing infrared imaging systems, a hardware infrared simulator has been developed. It consists of a panel of infrared micro-emitters, which displays a scenario generated by computer software, plus the electronics and the optics required to allow the imaging system to see the scene as it would appear in the real world. A description of the main concepts involved, of the main components, and of the results obtained is given. To test a complete (optics, electronics, display, etc.) infrared imaging system, a real scenario is needed in which all possible situations are present. This means that a very great variety of conditions should be satisfied. For instance, natural or man-made events, like forest fires, storms, volcanic eruptions, ground or air pollution, industrial factories, explosions, battlefields, missile launches, etc., should be present, and the unit under test should be in the conditions in which it will really be required to operate. This approach is obviously almost impossible to achieve, and in most situations it would be too expensive. In order to overcome these difficulties, it is possible to simulate the required conditions in the laboratory: a scene can be projected toward the system under test (SUT) that simulates the events the system should be able to see. The projected scene should, at the same time, cover the full field of view of the SUT, match at least its angular resolution, and reproduce the scene's signal dynamic range. At CREO laboratories a hardware simulator has been developed that can satisfy most of the above-mentioned requirements.
The development of digital terrain databases relies on the synthesis of data from several diverse sources. At the Georgia Tech Research Institute, a methodology has been developed and employed over the past ten years to create high resolution (approximately 1 meter grid spacing) digital models of specific world locations. These models include geographic feature types and height information. This process depends on the merging of data from multiple sources including stereo aerial photography, panchromatic imagery, digital maps, and satellite data; these data are then processed through multispectral processing algorithms to create the digital representation.
Scene matching is one of the most basic and important techniques in the modern information processing domain. It is the spatial registration of two images of the same scene taken by two different sensors, performed in order to determine their relative displacement.
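One standard way to recover the relative displacement is cross-correlation of the two images. The following sketch (our own illustration, not the method of any paper summarized here) uses FFT-based circular cross-correlation and reads the shift off the correlation peak:

```python
import numpy as np

def estimate_displacement(ref, sensed):
    """Estimate the integer-pixel shift of `sensed` relative to `ref`
    via FFT-based circular cross-correlation."""
    f = np.fft.fft2(ref)
    g = np.fft.fft2(sensed)
    # Correlation theorem: the inverse transform of conj(F)*G is the
    # circular cross-correlation, whose peak sits at the displacement.
    xcorr = np.fft.ifft2(np.conj(f) * g).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Tiny demonstration: circularly shift a random image and recover the offset.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
sensed = np.roll(ref, shift=(-3, 5), axis=(0, 1))
print(estimate_displacement(ref, sensed))  # prints (-3, 5)
```

Dividing the cross-power spectrum by its magnitude before the inverse transform gives the phase-correlation variant, which is more robust when the two sensors have different intensity responses.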
Stressful working conditions, large volumes of data, and the complexity of the battlefield analysis process challenge even the most experienced situation analyst. An expert system tool called ExpertANALYST has previously been developed to help EW analysts process the large amount of information that current EW sensor systems collect. The expert system in ExpertANALYST processes relatively low-level data and does not consider many sources of information. In order for it to produce more sophisticated analysis, it must have access to the same information that is available to human analysts. The paper describes GIS/Expert System prototype software that extends the analysis capabilities of the expert system. The extension adds geo-spatial analysis capability through interaction with a commercial GIS. This tool allows testing of analyst-supplied hypotheses using knowledge defined in the rule base and the GIS data. Preliminary results are promising; however, the limited availability of fuzzy geo-spatial data prompts further investigation into the use of fuzzy set-based techniques in situation analysis.
Given data points that sample an unknown function in one independent variable, techniques are described and illustrated that generate additional data points `similar to' the given points. These techniques are optimal in two key respects. First, each technique models the data using a continuous family of functions, where each function is the smoothest possible in that energy is minimized. Here energy is a linear combination of lack-of-smoothness (defined as integrated squared second derivative of the function) and lack-of-fit (defined as sum squared deviation of the function from either the given points or the given points displaced to intersect their least squares line). Second, many members of the family compete in a robust evolutionary process to acquire energy, and the result of this competition determines the relative contribution of each member function. The techniques model the given points in that they yield probability density functions of the dependent variable for any value of the independent variable. Thus they enable the implementation of many pattern recognition and data visualization procedures.
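In notation of our own choosing (with $\lambda$ the relative weighting and $(x_i, y_i)$ the given points), the energy described above can be written as:

```latex
E[f] \;=\; \lambda \int \bigl(f''(x)\bigr)^2 \, dx \;+\; \sum_{i=1}^{n} \bigl(f(x_i) - y_i\bigr)^2
```

The first term is the lack-of-smoothness penalty (integrated squared second derivative) and the second is the lack-of-fit penalty; in the variant described, the $y_i$ are replaced by the given points displaced to intersect their least-squares line. Varying $\lambda$ traces out the continuous family of candidate functions.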
Traditional antipersonnel land mines are an effective military tool, but they are unable to distinguish friend from foe, or civilian from military personnel. The concept described here uses an advanced moving target indicator (MTI) radar to scan the minefield in order to detect movement towards or within the minefield, coupled with visual identification by a human operator and a communication link for command and control. Selected mines in the minefield can then be activated by means of the command link. In order to demonstrate this concept, a 3D, interactive simulation has been developed. This simulation builds on previous work by integrating a detailed analytical model of an MTI radar. This model has been tailored to the specific application of detection of slowly moving dismounted entities immersed in ground clutter. The model incorporates the effects of internal scatterer motion and antenna scanning modulation in order to provide a realistic representation of the detection problem in this environment. The angle information on the MTI target detection is then passed to a virtual 3D sight which cues a human operator to the target location. In addition, radar propagation effects and an experimental design in which the radar itself is used as a command link are explored.
In this paper we briefly describe the Virtual SpacePlane project, describe the requirements, goals and objectives of our project, and discuss the design and implementation of the space weather capability. We also describe the changes that we made to the user interface and VSP software architecture to accommodate the space weather capability. The paper concludes with a presentation of our results and suggestions for future work.
This paper emphasizes the requirement for user modeling by presenting the information necessary to motivate the need for and use of user modeling in intelligent agent development. The paper presents information on our current intelligent agent development program, the Symbiotic Information Reasoning and Decision Support (SIRDS) project. We then discuss the areas of intelligent agents and user modeling, which form the foundation of the SIRDS project. Included in the discussion of user modeling are its major components: cognitive modeling and behavioral modeling. We next motivate the need for and use of a methodology for developing user models that encompasses work within cognitive task analysis. We close the paper by drawing conclusions from our current intelligent agent research project and discussing avenues of future research in the utilization of user modeling for the development of intelligent agents for virtual environments.
This paper deals with adaptive force feedback control of haptic devices that will enable users to interact more realistically with a virtual environment. A systems-based approach to designing an indirect adaptive output feedback control for a class of single-input single-output nonlinear systems in the explicit input-output differential representation is presented. It is assumed that the zero dynamics associated with the coupled haptic interface/virtual environment form a nonlinear system that is exponentially stable; that is, the system enjoys a minimum-phase property. The proposed nonlinear adaptive controller is implemented using output feedback that can be obtained from the sensors available in the system.
This article presents the implementation of a distributed virtual reality system through the integration of services offered by the CORBA platform (Common Object Request Broker Architecture) and by WorldToolkit from Sense8, an environment for developing real-time 3D graphics applications. The application developed to validate this integration is a virtual city, with an emphasis on its traffic ways, vehicles (movable objects), and buildings (immovable objects). In this virtual world, several users can interact, each one controlling his or her own car. Since the modeling of the application took into consideration the criteria and principles of transport engineering, the aim is to use it in the planning, design, and construction of vehicle traffic ways. The system was structured according to a client/server approach utilizing multicast communication among the participating nodes. The chosen CORBA implementation was Iona's Orbix software. The performance results obtained are presented and discussed at the end.
Both industry and French government services are inclined to use realistic sensor simulation models in the specification, design, and qualification of weapon systems. When considering the development of multi-mode weapon systems that offer enhanced capabilities, an important new domain in the field of sensor simulation is the multispectral one, especially for the infrared, millimetric radar, and acoustic spectra. CHORALE will be precisely such a simulation tool, modeling the battlefield as `seen' by an infrared, millimetric radar, or acoustic sensor. It fulfills the requirements for a realistic infrared simulation. It will then evolve to the millimeter radar and acoustic spectra (which are beyond the scope of this paper).
Scene appearance for a continuous range of viewpoints can be represented by a discrete set of images via image morphing. In this paper, we present a new robust image morphing scheme based on the 2D wavelet transform and interval field interpolation. Traditional mesh-based and field-based morphing algorithms, usually designed in the spatial image space, suffer from very high time complexity and are therefore impractical in real-time virtual environment applications. Compared with traditional morphing methods, the proposed wavelet-based interval morphing scheme performs interval interpolation in both the frequency and spatial spaces. First, the images of the scene can be significantly compressed in the frequency domain with little degradation in visual quality, and therefore the complexity of the scene can be significantly reduced. Second, since a feature point in the image may correspond to a neighborhood in a subband image in the wavelet domain, we define feature intervals for the wavelet-transformed images to obtain accurate feature matching between the morphing images. Based on the feature intervals, we employ interval field interpolation to morph the images progressively in a coarse-to-fine process. Finally, we use a post-warping procedure to transform the interpolated views to their desired positions. A nice feature of the wavelet transform is its multiresolution representation, which enables progressive morphing of the scene.
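The frequency-domain compression step can be illustrated with a minimal sketch (our own, not the authors' implementation): one level of a 2D Haar transform, followed by zeroing small detail coefficients, reduces the data while leaving the reconstruction close to the original.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: returns the coarse
    approximation LL and the three detail subbands LH, HL, HH."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

# Compression sketch: zero out small detail coefficients.
rng = np.random.default_rng(1)
img = rng.random((8, 8))
ll, lh, hl, hh = haar2d(img)
thresh = 0.05
lh, hl, hh = (np.where(np.abs(c) < thresh, 0.0, c) for c in (lh, hl, hh))
approx = ihaar2d(ll, lh, hl, hh)
```

A real scheme would recurse on the LL band for a multiresolution pyramid and use a smoother wavelet than Haar; the thresholding step is what trades visual quality for reduced scene complexity.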
Physically realistic synthesis of FLIR imagery requires intensive phenomenology calculations of the spectral band thermal emission and reflection from scene elements in the database. These calculations predict the heat conduction, convection, and radiation exchange between scene elements and the environment. Balancing this requirement is the need for imagery to be presented to a display in a timely fashion, often in real time. In order to support these conflicting requirements, some means of overcoming the gap between real time and high fidelity must be achieved. Over the past several years, the US Army Night Vision and Electronic Sensors Directorate (NVESD) has been developing a real-time forward looking infrared sensor simulation known as Paint the Night (PTN). As part of this development, NVESD has explored schemes for optimizing signature models and for mapping model radiometric output into parameters compatible with OpenGL, real-time rendering architectures. Relevant signature and mapping optimization issues are discussed, and a current NVESD PTN real-time implementation scheme is presented.
A software system that quickly yields numerical models for predicting ballistic behavior is a requisite asset for any research laboratory studying material behavior. For rapid prototyping of terminal impact in particular, what matters is a system structure that directs a specific material and impact situation toward a specific predictive model, yielding an outcome of physical significance. This, of course, is of particular importance when ranges of validity are at stake and the pertinent constraints associated with the impact are unknown. Hence, a compilation of semi-empirical predictive penetration relations for various physical phenomena has been organized into a data structure for the purpose of developing a knowledge-based, decision-aiding expert system to predict the terminal ballistic behavior of projectiles and targets.
This paper presents a simulation model of a multi-sensor system. On this basis, a multi-sensor simulation system is established and realized on a microcomputer. The system is an experimental means for research on multi-sensor systems. In this paper a large number of simulation runs are performed with the aid of the established model, and some useful conclusions are obtained. The research results in this paper will benefit the development of multi-sensor data fusion systems.
The configuration and characteristics of the flight management and control system of a UAV are quite different from those of a manned aircraft. Visual flight simulation is one of the most important ways to improve the performance and effectiveness of UAVs in practical field use. In this paper, a visual flight Real Time Simulation Environment (RTSE) for UAVs is programmed in the Java language combined with the Virtual Reality Modeling Language (VRML) and C++. The advantages of Java in this project are introduced; its disadvantages are overcome through a three-layer program structure. The bottom layer is the device-driver system layer with hardware access capability, programmed in C. The middle layer is the dynamic link library made up of the native methods of the Java application, programmed in C++. The top layer is the application program, programmed in Java and VRML. The RTSE can provide significant training, demonstration, and assessment of UAVs economically, and reduce the operator workload.
Achieving a realistic, scalable, effective distributed mission training (DMT) capability poses a variety of technical challenges in the areas of fidelity, visual systems, the High Level Architecture, computer-generated actors, reconfigurability, software architectures, and interoperability. While some aspects of these issues have received sufficient research attention to adequately support the initial deployment of distributed mission training systems, for most of the technical challenges much remains to be learned to attain the potential of large-scale DMT. In this paper, we illuminate the most pressing research issues that must be addressed to achieve the potential of DMT.
In this paper, we will examine recent initiatives at the U.S. Army's Communications and Electronics Command Research and Engineering Directorate (CERDEC) to develop and establish a System of Systems Integration capability, together with the infrastructure and simulation facilities required to conduct collaborative distributed prototyping and evaluation. This new capability will permit CERDEC's various Directorates, in conjunction with other stakeholders in the RDA process, to accomplish the mission of developing and integrating C4I2WS systems for the XXI Century Army.
Current financial, schedule, and risk constraints mandate reuse of software components when building large-scale simulations. While integration of simulation components into larger systems is a well-understood process, it is extremely difficult to do while ensuring that the results are correct. Illgen Simulation Technologies Incorporated and Litton PRC have joined forces to provide tools to integrate simulations with confidence. Illgen Simulation Technologies has developed an extensible and scalable, n-tier, client-server, distributed software framework for integrating legacy simulations, models, tools, utilities, and databases. By utilizing the Internet, Java, and the Common Object Request Broker Architecture as the core implementation technologies, the framework provides built-in scalability and extensibility.