The Joint Synthetic Battlespace (JSB) envisioned within the Department of Defense modeling and simulation master plan requires a threat environment that is consistent across the distributed virtual environment (DVE) to achieve a useful mission rehearsal, training, and test and evaluation capability. To achieve this objective, all threats in the DVE must appear at compatible levels of fidelity to all entities operating in the DVE, and they must interact with human-operated and computer-controlled entities in a realistic fashion. Achieving this goal is not currently possible, for two reasons. First, each primary aircraft simulator training system developer has created its own threat system and made its own modeling decisions to support a specific user under a select few predetermined conditions. This traditional approach to threat simulation is expensive and leads to ongoing difficulties in maintaining threat currency as intelligence updates are made, new weapons are introduced, and new theaters of operation are identified. Second, threat system interaction on a distributed network must be coordinated; the individualized nature of current threat systems precludes introducing coordinated threats. The Distributed Mission Training Integrated Threat Environment (DMTITE) project is developing an effective solution to these issues. The project is identifying the requirements for a distributed threat environment and building a demonstrator, DOD High Level Architecture-compatible system that can provide realistic threats for pilots to train against. The DMTITE prototype will instantiate a variety of threats for use in distributed training scenarios, including surface threats, air threats, radars, and jamming systems. A key element of the system will be the provision of realistic behaviors for the threat systems.
We based DMTITE on a general software design methodology and software architecture for computer-generated forces (CGFs) that naturally supports 'variety' in performance for a given type of CGF and allows us to organize and build vastly different CGFs within the same architecture. This approach allows us to provide a range of threat skill levels for each threat modeled within the DMTITE system, and we can readily expand the system to accommodate peer-to-peer communication and group tactics. In this paper, we present a brief overview of the DMTITE requirements and a component-wise decomposition of the system. We also describe the structure of the major components of the DMTITE threat systems' decision mechanism, and discuss the progress of the initial prototype.
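To make the idea of per-threat skill variety concrete, a minimal sketch follows. The class, attribute names, and scaling factors are hypothetical illustrations of parameterizing a threat's proficiency, not DMTITE's actual design:

```python
class ThreatAgent:
    """Hypothetical CGF threat whose behavior varies with a skill level."""

    # Nominal reaction delay (s) and detection range (km) for an expert crew;
    # values are illustrative, not taken from any real threat model.
    BASE_REACTION_S = 2.0
    BASE_DETECT_KM = 40.0

    def __init__(self, threat_type: str, skill: float):
        # skill in [0, 1]: 0 = novice crew, 1 = expert crew.
        self.threat_type = threat_type
        self.skill = max(0.0, min(1.0, skill))

    def reaction_time(self) -> float:
        # Less-skilled crews react up to 3x slower than experts.
        return self.BASE_REACTION_S * (1.0 + 2.0 * (1.0 - self.skill))

    def detection_range(self) -> float:
        # Less-skilled crews detect targets at reduced range.
        return self.BASE_DETECT_KM * (0.5 + 0.5 * self.skill)


novice_sam = ThreatAgent("SAM", skill=0.25)
expert_sam = ThreatAgent("SAM", skill=1.0)
```

Because every threat type exposes the same skill knob, a scenario can mix novice and expert instances of the same system without any change to the surrounding architecture.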
The advent of requirements for worldwide deployment of space assets in support of Air Force operational missions has resulted in the need for a Manned SpacePlane (MSP) that can perform these missions with minimal preflight preparation and little, if any, in-orbit support from a mission control center. Because successful mission accomplishment will depend almost completely upon the MSP crew and the on-board capabilities of the spaceplane, the MSP user interface is a crucial component of mission success. In recognition of this fact, the USAF Phillips Laboratory, in conjunction with USAF Space Command, initiated the Virtual SpacePlane (VSP) project. To function effectively as an MSP interface development platform, the VSP must demonstrate the capability to simulate anticipated MSP missions and portray the MSP in operation throughout its entire flight regime, from takeoff through space operations and on to recovery via a horizontal landing at an airfield. Therefore, we architected, designed, and implemented a complete VSP that can be used to simulate anticipated Manned SpacePlane missions. The primary objective of the VSP is to serve as a virtual prototype for user interface design and development; the VSP software architecture and design facilitate uncovering, refining, and validating MSP user interface requirements. The Virtual SpacePlane reuses software components developed for the Virtual Cockpit and Solar System Modeler (SM) distributed virtual environment (DVE) applications, the Common Object Database (CODB) architecture, and the Information Pod (Pod) interface tools developed in our labs. The Virtual Cockpit and Solar System Modeler supplied baseline interface components and tools, 3D graphical models, vehicle motion dynamics models, and DVE communication capabilities.
Because we knew that the VSP's requirements would expand and evolve over the life of the project, we used the CODB architecture to facilitate our use of Rapid Evolutionary and Exploratory Prototyping to uncover application requirements and evaluate solutions. The Information Pod provides the paradigm and architectural framework for the user interface development. To achieve accurate, high-fidelity performance for the VSP throughout its operational regime, the system integrates aerodynamics and astrodynamics models from the Virtual Cockpit, the Solar System Modeler, and other sources into a single seamless high-fidelity model of the VSP's dynamics. In this paper we discuss the background to the VSP project, its requirements, and the current user interface design. We summarize the VSP's current status and outline our plans for further VSP interface development and testing.
The US military sees great potential for software agent technology in its 'synthetic battlespace', a distributed virtual environment currently used as a training and planning arena. The Computer Generated Actors (CGAs) currently used in the battlespace display varying capabilities, but the state of the art falls short of presenting believable agents. This lack of 'believability' directly contributes to simulation and participant inconsistencies. Even if CGAs display believable behavior, no formalized methodology exists for judging that display or for comparing CGA performance. A formal method is required to obtain a quantitative measurement of performance for use in assessing a CGA's performance in a simulation, and thus its suitability for use in the battlespace. This paper proposes such a quantitative evaluation method for determining an agent's observed degree of performance as related to skills. Since the method delineates what is being measured and the criteria upon which the measurement is based, it also explains the particular evaluation given for specific military CGAs.
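To illustrate the general shape of a skill-based quantitative measure (the skills, weights, and scoring function below are hypothetical examples, not the paper's actual method), per-skill observations can be reduced to a single weighted score:

```python
def performance_score(observed: dict, weights: dict) -> float:
    """Weighted mean of per-skill scores.

    observed: skill name -> observed score in [0, 1]
    weights:  skill name -> relative importance (need not sum to 1)
    """
    total_weight = sum(weights[s] for s in observed)
    if total_weight == 0:
        return 0.0
    return sum(observed[s] * weights[s] for s in observed) / total_weight


# Hypothetical evaluation of a fighter CGA on three illustrative skills.
observed = {"intercept": 0.9, "formation": 0.6, "evade": 0.3}
weights = {"intercept": 2.0, "formation": 1.0, "evade": 1.0}
score = performance_score(observed, weights)
```

Making the skill list and weights explicit is what allows two CGAs to be compared on the same footing, which is the gap the abstract identifies.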
The advent of requirements for rapid and economical deployment of national space assets in support of Air Force operational missions has resulted in the need for a Manned SpacePlane (MSP) that can perform military missions with minimal preflight preparation and little, if any, in-orbit support from a mission control center. In this new approach to space operations, successful mission accomplishment will depend almost completely upon the MSP crew and upon the on-board capabilities of the spaceplane. In recognition of the challenges that will be faced by the MSP crew, and to begin to address these challenges, the USAF Air Force Research Laboratory (Phillips Laboratory) initiated the Virtual SpacePlane (VSP) project. To support the MSP, the VSP must demonstrate a broad, functional subset of the anticipated missions and capabilities of the MSP throughout its entire flight regime, from takeoff through space operations and on through landing. Additionally, the VSP must execute the anticipated MSP missions in a realistic and tactically sound manner within a distributed virtual environment. Furthermore, the VSP project must also uncover, refine, and validate MSP user interface requirements; design and demonstrate an intelligent user interface for the VSP; and design and implement a prototype VSP that can be used to demonstrate Manned SpacePlane missions. To enable us to make rapid progress on the project, we employed portions of the Virtual Cockpit and Solar System Modeler distributed virtual environment applications and the Common Object Database (CODB) architecture tools developed in our labs. The Virtual Cockpit and Solar System Modeler supplied baseline interface components and tools, 3D graphical models, vehicle motion dynamics models, and VE communication capabilities. We use the CODB architecture to facilitate our use of Rapid Evolutionary and Exploratory Prototyping to uncover application requirements and evaluate solutions.
The Information Pod provides the paradigm and architectural framework for the user interface development. To achieve accurate, high-fidelity performance for the VSP throughout its operational regime, the system integrates aerodynamics and astrodynamics models into a single seamless high-fidelity model of the VSP's dynamics. In this paper we discuss the software architecture and design of the Virtual SpacePlane and describe how it supports transitions between motion models, the design of the dynamics software module, and techniques for employing multiple dynamics models within a single virtual environment actor. We describe how we used rapid prototyping to refine requirements, improve the implementation, and accommodate new requirements throughout the project. We conclude the paper with a brief discussion of results and suggestions for additional work.
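One common way to make separate aerodynamic and astrodynamic models appear as a single seamless model is to cross-fade their outputs over a transition regime. The sketch below keys the blend to altitude; the thresholds, the blending variable, and the function names are assumptions for illustration, not the VSP's actual implementation:

```python
def blend_weight(altitude_km: float, lo: float = 80.0, hi: float = 120.0) -> float:
    """Fraction of the astrodynamics model to use: 0 below `lo` km,
    1 above `hi` km, and linear in between, so the blended output
    varies continuously across the transition corridor."""
    if altitude_km <= lo:
        return 0.0
    if altitude_km >= hi:
        return 1.0
    return (altitude_km - lo) / (hi - lo)


def blended_accel(altitude_km: float, aero_accel: float, astro_accel: float) -> float:
    """Cross-fade the accelerations produced by the two dynamics models."""
    w = blend_weight(altitude_km)
    return (1.0 - w) * aero_accel + w * astro_accel
```

Because the weight is continuous in altitude, the actor never sees a step change in acceleration when it hands over from one model to the other, which is the property a "seamless" transition requires.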
We have conducted a variety of projects investigating the limits of virtual environment and distributed virtual environment (DVE) technology for the military and medical professions. The projects include an application that allows the user to interactively explore a high-fidelity, dynamic scale model of the Solar System and a high-fidelity, photorealistic, rapidly reconfigurable aircraft simulator. Additional projects include a system for observing, analyzing, and understanding the activity in a military distributed virtual environment, a distributed threat simulator for training Air Force pilots, a virtual spaceplane for determining user interface requirements for a planned military spaceplane system, and an automated wingman for supplementing or replacing human-controlled systems in a DVE. The last two projects are a virtual environment user interface framework and a system for training hospital emergency department personnel. In the process of designing and assembling the DVE applications in support of these projects, we have developed rules of thumb and insights into assembling DVE applications and the environments themselves. In this paper, we open with a brief review of the applications that were the source of our insights and then present the lessons learned from these projects. These lessons fall primarily into five areas: requirements development, software architecture, human-computer interaction, graphical database modeling, and construction of computer-generated forces.
This paper presents controller designs for two-time-scale systems, developed with a unified approach using delta operators, that guarantee stability with a known H-infinity norm bound and also allow the incorporation of reliability with respect to failures in some control channels. Here, the case of sensor (or measurement) outages is studied. With this unified approach, the solution procedure and results are simpler and more accurate than those for discrete systems using shift operators.
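For context, the unified treatment rests on the standard definition of the delta operator in terms of the forward-shift operator q and the sampling period Δ:

```latex
\delta \triangleq \frac{q - 1}{\Delta}, \qquad
\delta x(t_k) = \frac{x(t_k + \Delta) - x(t_k)}{\Delta}
```

As Δ → 0, δx(t_k) tends to the derivative ẋ(t), so continuous-time results are recovered as a limiting case, while any fixed Δ > 0 gives the usual sampled-data description. This is what allows a single design procedure to cover both continuous and discrete (shift-operator) formulations.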
A theoretical analysis model of a fiber-optic sliding sensor is presented. The sliding sensor is designed for robot hands or intelligent mechanical clamping apparatuses. In the sensor design, a sliding ball is used as the sliding-transfer device of the sensor. At the center of the sliding ball, a four-face prismoid reflective mirror is fixed to the ball to determine the 2D rotation angles, and the angles are measured by a five-fiber-optic probe. The theoretical characteristic functions of the sliding sensor are derived and simulated.
This paper defines the use of Simulation Based Design (SBD) in the system acquisition process. A case study is presented that focuses on a combined RF intelligence data collection and strike system that was modeled and simulated for design, DT&E, and OT&E. This approach provides SBD tools that can steer design and DT&E/OT&E more effectively, conduct necessary computations that would otherwise be unavailable early in the acquisition cycle, and produce simulation results that may not be obtainable in a field test environment (due to OPSEC constraints and the inability to test systems in realistic, large, dynamic environments).
Spherical environment maps capture all of the light visible from a given point in space. From such a representation it is possible to render a perspective view along any direction from a fixed eye position located at the center of the sphere. In this paper this idea is extended to allow for eye movement within a bounded region of space. A novel method for reconstructing views from a set of precaptured images is presented. Image information is manipulated to create new views of the captured environment. These views provide correct parallax information from the particular viewpoint and therefore enable occluded regions to be viewed. Unlike other related methods, there is no reliance on geometric information, making the approach completely content independent. Basic interpolation techniques are employed to improve the quality of the reconstructed images, and results are presented.
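The fixed-eye lookup described in the opening sentences can be sketched as follows. A latitude-longitude parameterization of the sphere map and the axis conventions are assumptions for illustration; the paper's parallax-correcting reconstruction for a moving eye is more involved:

```python
import math

def direction_to_latlong_uv(dx: float, dy: float, dz: float):
    """Map a view direction to (u, v) in [0, 1]^2 on a latitude-longitude
    spherical environment map: u follows azimuth about the vertical (y)
    axis, v runs from 0 (straight down) to 1 (straight up)."""
    # Normalize defensively in case the caller passes a non-unit vector.
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / n, dy / n, dz / n
    u = math.atan2(dx, -dz) / (2.0 * math.pi) + 0.5  # azimuth -> [0, 1]
    v = math.acos(-dy) / math.pi                     # elevation -> [0, 1]
    return u, v
```

Rendering a perspective view then amounts to computing the direction through each pixel and sampling the map at the resulting (u, v); no scene geometry is needed, which is the content independence the abstract emphasizes.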
In today's civil flight training simulators, only the cockpit and all its interaction devices exist as physical mockups. All other elements, such as flight behavior, motion, sound, and the visual system, are virtual. As an extension of this approach, 'Virtual Flight Simulation' tries to replace the cockpit mockup with a 3D computer-generated image. The complete cockpit, including the exterior view, is displayed on a Head Mounted Display (HMD), a BOOM, or a Cave Automatic Virtual Environment (CAVE). In most applications a dataglove or virtual pointers are used as input devices. A basic problem of such a Virtual Cockpit (VC) simulation is missing force feedback: a pilot cannot touch and feel the buttons, knobs, dials, etc. that he or she tries to manipulate. As a result, it is very difficult to generate realistic inputs into VC systems. 'Seating Bucks' are used in the automotive industry to overcome the problem of missing force feedback: only a seat, steering wheel, pedals, stick shift, and radio panel are physically available. All other geometry is virtual, and therefore untouchable but visible in the output device. Extending this concept, a Seating Buck for commercial transport aircraft cockpits was developed. The pilot seat, side stick, pedals, thrust levers, and flaps lever are physically available. All other panels are simulated by simple flat plastic panels; they are located at the same locations as their real counterparts, only lacking the real input devices. A pilot sees the entire photorealistic cockpit in an HMD as 3D geometry but can only touch the physical parts and plastic panels. To determine task performance with the developed Seating Buck, a test series was conducted in which users pressed buttons, adjusted dials, and turned knobs. In a first test, a completely virtual environment was used. The second setting had a plastic panel replacing all input devices. Finally, as a cross-reference, the participants repeated the test with a complete physical mockup of the input devices.
All panels and physical devices can easily be relocated to simulate a different type of cockpit; a complete adaptation takes at most 30 minutes. So far, an Airbus A340 and a generic cockpit are supported.
For some of today's simulations, very expensive, heavy, and large equipment is needed; examples are driving, shipping, and flight simulators with huge and expensive visual and motion systems. To reduce cost, immersive 'Virtual Simulation' becomes very attractive. Head Mounted Displays or Cave Automatic Virtual Environments (CAVEs), datagloves, and cheap 'Seating Bucks' are used to generate a stereoscopic virtual environment for a trainee. Such systems are already in use for Caterpillar, submarine, and F-15 fighter simulation. In our approach we partially simulate an Airbus A340 cockpit. All interaction devices, such as the side stick, pedals, thrust levers, knobs, buttons, and dials, are modeled as 3D geometry. All other parts and surfaces are formed by images (textures). Some devices, such as the side stick, pedals, and thrust levers, are physically available; all others are replaced by plastic panels to provide force feedback for the pilots. A simplified outside visual is available to generate immersive flight simulations. A virtual Primary Flight Display, a Navigation Display, and a virtual stereoscopic Head-Up Display are used in a first approach. These virtual displays show the basic information necessary to perform a controlled flight and allow basic performance analysis with the system. All parts, such as physical input devices, virtual input devices, flight mechanics, traffic, and rendering, run in a distributed environment on different high-end graphics workstations. The 'Virtual Cockpit' can logically replace the conventional cockpit mockup that is also available in the flight simulation.
The U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Engineering, and Development Center (MRDEC) Advanced Simulation Center has recognized the need for reconfigurable visualization in support of hardware-in-the-loop (HWIL) simulations, and AMCOM MRDEC has made the development of reconfigurable visualization tools a priority. SimSight, developed at AMCOM MRDEC, is designed to provide 3D visualization for HWIL simulations and after-action reviews. Leveraging the latest hardware and software visual simulation technologies, SimSight displays a concise 3D view of the simulated world, giving the HWIL engineer unprecedented power to quickly analyze the progress of a simulation from pre-launch to impact. Providing 3D visualization is only half the solution; data management, distribution, and analysis are the companion problem being addressed by AMCOM MRDEC through the development of Fulcrum, a cross-platform data capture, distribution, analysis, and display framework of which SimSight will become a component.
In this paper we consider several problems in the creation and application of simulation complexes and tools: simulation methodologies; the training of operators of complex mobile objects, based on psychophysiological feedback from the controlling subject in a virtual environment; simulator architecture; the organization of simulation support tools for virtual environments intended for complex mobile objects whose automated control systems contain various sensors; and novel problems that can be solved using the virtual environment we propose. The distinctive feature of the proposed simulation methodology is the use of hardware and software to monitor the parameters of the operator's psychophysiological state. The tool environment considered provides the following novel features and capabilities: control of loading and increased efficiency of purposeful training; efficient interactive control of operator training; assessment of how realistically the synthetic environment is formed; assessment of the operator's perception of the integral multispectral visualization; and assessment of the impact of the synthetic environment's level of realism, and of particular factors of synthetic situations, upon the operator.