Abstraction is key to model construction for simulation. While traditionally considered to be a black art, the importance of methodological support for abstraction is becoming increasingly evident. Complex distributed simulation systems, supporting multiresolution model sharing, presuppose effective ways to develop and correlate the underlying abstractions. Although a conceptual and computational framework grounded in mathematical systems theory has been around for some time, discussions of abstraction issues still often proceed through individual anecdotal experiences with little cumulative impact. This paper reviews theory and concepts available for working with abstraction in model construction.
In this paper we review motivations for multilevel resolution modeling (MRM) within a single model, an integrated hierarchical family of models, or both. We then present a new depiction of consistency criteria for models at different levels. After describing our hypotheses for studying the process of MRM with examples, we define a simple but policy-relevant problem involving the use of precision fires to halt an invading army. We then illustrate MRM with a sequence of abstractions suggested by formal theory, visual representation, and approximation. We milk the example for insights about why MRM is different and often difficult, and how it might be accomplished more routinely. It should be feasible even in complex systems such as JWARS and JSIMS, but it is by no means easy. Comprehensive MRM designs are unlikely. It is useful to take the view that some MRM is a great deal better than none and that approximate MRM relationships are often quite adequate. Overall, we conclude that high-quality MRM requires new theory, design practices, modeling tools, and software tools, all of which will take some years to develop. Current object-oriented programming practices may actually be a hindrance.
The increasing complexity of systems requires the use of simulation to help guide the engineers and decision makers by predicting the behavior of these systems under future conditions. There are three elements of these analyses: (1) characterizing the conditions - how many parameters are needed, at what resolution, over what span of time; (2) characterizing the source system - what is it we want to know, and how well do we need to define the system's state; and (3) characterizing the outputs - what variables tell us the most. The simulation process itself must be cost effective. The total simulation experiment must be done in a timely manner on available computers. We must therefore try to use a minimum number of parameters to characterize the environment, a minimum number of components in the model, a minimum span of simulation clock time, and a minimum number of output variables. Where we are using a body of input data to characterize the system, data clustering can help with this reduction process.
In this paper, we propose to employ the ART2 neural network to cluster high-dimensional vectors for the preservation of statistics in hierarchical simulation. The experiments show that ART2 serves this purpose quite well. The inter- and intra-cluster differences calculated indicate that ART2 clusters the data approximately according to Euclidean distance. The numerical results also indicate that the 'vigilance parameter' determines the degree of similarity of vectors in the same cluster by controlling the overall variation.
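The role of the vigilance parameter can be pictured with a much-simplified, vigilance-thresholded clustering loop. This is only a sketch of the idea, not the full ART2 architecture; the function name and the distance threshold are illustrative assumptions.

```python
import numpy as np

def vigilance_cluster(vectors, vigilance):
    """Assign each vector to the nearest existing cluster centre;
    open a new cluster when the best match is not close enough.
    Larger vigilance forces tighter (and therefore more) clusters."""
    centres, members = [], []
    for v in vectors:
        if centres:
            d = [np.linalg.norm(v - c) for c in centres]
            best = int(np.argmin(d))
        if not centres or d[best] > 1.0 / vigilance:   # match test fails: new cluster
            centres.append(v.astype(float))
            members.append([v])
        else:                                          # resonance: update the winning centre
            members[best].append(v)
            centres[best] = np.mean(members[best], axis=0)
    return centres, members

# Example: raising the vigilance increases the number of clusters
data = np.random.rand(200, 8)
for vig in (2.0, 5.0, 10.0):
    centres, _ = vigilance_cluster(data, vig)
    print(vig, len(centres))
```

As the vigilance is raised, the allowed within-cluster distance shrinks and the number of clusters grows, mirroring the behavior described above.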
In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user-friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
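The quantization idea, generating an output only when the sender's state crosses into a new quantum interval, can be sketched as follows. The class and method names are illustrative assumptions, not the DEVSJAVA or DEVS/HLA API.

```python
import math

class Quantizer:
    """Forward a value to subscribers only when it crosses into a new
    quantum interval, rather than at every integration step."""
    def __init__(self, quantum):
        self.quantum = quantum
        self.last_level = None

    def update(self, value):
        level = int(value // self.quantum)   # index of the current quantum interval
        if level != self.last_level:         # boundary crossing: publish an update
            self.last_level = level
            return level * self.quantum      # quantized state value
        return None                          # no message: traffic is suppressed

# A slowly varying continuous trajectory generates few updates.
q = Quantizer(quantum=0.1)
sent = 0
for k in range(1000):
    msg = q.update(math.sin(0.01 * k))
    sent += msg is not None
print(f"{sent} updates instead of 1000 raw samples")
```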
An 'evaluation' approach devised for an inductive reasoning system called logic-based discrete-event inductive reasoner is the focus of this paper. The underlying inductive reasoning methodology utilizes abstractions as its primary means to deal with lack of knowledge. Based on abstractions and their treatments as assumptions, the logic-based discrete-event inductive reasoning system allows non-monotonic predictions. The evaluation approach takes into account explicitly the role of abstractions employed in non-monotonically derived multiple predictions. These predictions are ranked according to the type and number of abstractions used. The proposed evaluation approach is also discussed in relation to the dichotomy of model validation and simulation correctness.
The Air Force has defined a hierarchy of models that describes the scope and level of fidelity of simulations. This 'Hierarchy of Models' begins at Level I, Engineering Analysis, and progresses to Level V, Campaign results. Separation of simulations into these levels was perfectly adequate to describe the purpose and use of a simulation and its characteristics relative to another simulation of the same system. Simulation, however, can be used to support two general types of activities: analysis and development. In analysis, we are trying to understand a system or environment so we can answer a question or test a hypothesis. The 'hierarchy of models' was developed to categorize analytical simulations. In development, we are trying to synthesize a system that meets certain requirements, build the components of the system, integrate the components, and then verify that the system meets requirements. In a development effort, simulation supports requirements analysis, functional analysis and allocation, preliminary design, detailed design, build, integration, and test. The hierarchy of models does not provide a solid basis for establishing simulation characteristics to support a large-scale development and integration effort. Other simulation characteristics are needed to define the types of simulations and the simulation requirements. This paper discusses these characteristics and proposes a new set of classes and characteristics.
Early phases of large projects require responsive guidance from simulation. As such, the simulation response time must be short while still maintaining validity. Typically, to keep development times down, only portions of the system are simulated, and the implications, along with empirically derived results for other portions, are woven together to provide a system-level result. This weaving requires a framework so that the results are traceable to their constituents, a necessary condition for decision makers to consider them valid. We discuss the structure of the framework that we have developed to support broadband telecommunications network engineering.
The goal of the instrument composition system is to allow a simulation user to dynamically create instruments as a simulation executes. Instruments can include graphical displays, data collectors, and debugging aids. Instruments are made up of small building blocks which can be easily combined into larger, more complex instruments. Through the use of an attribute server, the actors and instruments in a simulation can interact without direct knowledge of each other. Instead, each actor publishes the attributes which it has available. An instrument subscribes to the attributes in which it is interested and is notified whenever the value of one of these attributes changes. An instrument can also publish attributes for use by other instruments. Since the attribute server is distributed, the publisher of an attribute need not execute on the same machine as the subscriber. This allows CPU-intensive data visualization to execute on separate machines from the simulation, minimizing the impact on the simulation.
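A minimal, single-process sketch of the publish/subscribe pattern described above may help; the real attribute server is distributed, and these class and method names are hypothetical.

```python
from collections import defaultdict

class AttributeServer:
    """Decouples actors (publishers) from instruments (subscribers)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, attribute, callback):
        """Register interest in an attribute; callback fires on every change."""
        self._subscribers[attribute].append(callback)

    def publish(self, attribute, value):
        """Notify every subscriber of the new value."""
        for callback in self._subscribers[attribute]:
            callback(attribute, value)

# An actor publishes; an instrument is notified without knowing the actor.
server = AttributeServer()
server.subscribe("tank.position", lambda a, v: print(f"display {a} = {v}"))
server.publish("tank.position", (120.0, 45.5))
```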
This article explores using discrete-event simulation for the design and control of a defence-oriented, fixed-sensor-based detection system in a facility housing items of significant interest to enemy forces. The key issues discussed include software development, simulation-based optimization within a modeling framework, and the expansion of the framework to create real-time control tools and training simulations. The software discussed in this article is a flexible simulation environment in which the data for the simulation are stored in an external database and the simulation logic is implemented using a commercial simulation package. The simulation assesses the overall security level of a building against various intruder scenarios. A series of simulation runs with different inputs can determine the change in security level with changes in the sensor configuration, building layout, and intruder/guard strategies. In addition, the simulation model developed for the design stage of the project can be modified to produce a control tool for the testing, training, and real-time control of systems with humans and sensor hardware in the loop.
The difficulty and complexity of the problems and systems that are the subject of today's simulations and simulation systems continue to increase. Simulations are required to model larger 'systems of systems' to address complicated issues. Combined with the more complicated systems, simulations must also address very specific technical issues that require significant subject matter expertise. One method of addressing these additional requirements is with automated support provided by an expert system. Intelligent systems are more flexible than conventional systems. They can respond in ways that are more complex and can deliver highly tailored recommendations. The systems can provide multiple answers with different degrees of certainty and thereby help manage the process. Integrating an expert system into the simulation environment, however, is not necessarily a straightforward task. This paper discusses two approaches to integrating the C Language Integrated Production System (CLIPS) as an expert system server. For each of the approaches, we provide the rationale, structure, features, benefits, and limitations.
Simulation technology has been widely used in all aspects of military applications. In different applications the fidelity requirements are different, and so are the simulation models. In the development of a new military aircraft, for example, the fidelity requirement is very high, and the simulation model has to include many details. On the other hand, a pilot training simulator has to satisfy the real-time simulation requirement; its fidelity cannot be very high and the model has to be simplified. Therefore, the reusability of military platform simulation models is very low. This paper suggests an object-oriented approach to the modeling of military platforms. A helicopter is chosen as the sample platform. To limit the scope of the problem, only the dynamical model of the helicopter is considered. This model includes equations of motion, kinematics, the power plant, and interaction with the environment. The helicopter dynamical models can have many levels of detail. In a constructive simulation, it is possible that only the positions of the helicopter are of interest; therefore, a simple kinematic model may be sufficient. In a wargame simulation, the helicopter responds to control commands and moves from one position to another; a point-mass model can represent such motion. In a helicopter pilot trainer, though, a six-degree-of-freedom model is needed to represent both linear motion and the roll-pitch-yaw orientation. However, the real-time simulation requirement prohibits the model from using sophisticated aerodynamic models. Thus, in some applications only two-dimensional motion is needed; in other applications a four-degree-of-freedom model is sufficient. The object-oriented approach uses the concepts of hierarchy and inheritance to build classes of components. Based on the fidelity requirements, classes and sub-classes can be replaced. This approach greatly increases the reusability of the model.
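The inheritance idea can be pictured with a skeletal class hierarchy in which each fidelity level extends the one below it and exposes the same interface. The class names and placeholder dynamics are assumptions for illustration only, not the paper's model.

```python
class HelicopterDynamics:
    """Common interface: fidelity levels can be swapped without
    changing the rest of the simulation."""
    def step(self, dt):
        raise NotImplementedError

class KinematicModel(HelicopterDynamics):
    """Constructive simulation: only position matters."""
    def __init__(self):
        self.position = [0.0, 0.0]
        self.velocity_cmd = [0.0, 0.0]
    def step(self, dt):
        self.position = [p + v * dt for p, v in zip(self.position, self.velocity_cmd)]

class PointMassModel(KinematicModel):
    """Wargame level: velocity responds to commanded acceleration."""
    def __init__(self):
        super().__init__()
        self.accel_cmd = [0.0, 0.0]
    def step(self, dt):
        self.velocity_cmd = [v + a * dt for v, a in zip(self.velocity_cmd, self.accel_cmd)]
        super().step(dt)

class SixDofModel(PointMassModel):
    """Trainer level: adds roll-pitch-yaw with placeholder dynamics."""
    def __init__(self):
        super().__init__()
        self.attitude = [0.0, 0.0, 0.0]   # roll, pitch, yaw
        self.rate_cmd = [0.0, 0.0, 0.0]
    def step(self, dt):
        self.attitude = [a + r * dt for a, r in zip(self.attitude, self.rate_cmd)]
        super().step(dt)
```

Swapping one subclass for another changes the fidelity of the platform without touching the rest of the simulation, which is the source of the reuse claimed above.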
As distributed modeling and simulation finds great success in the marketplace, its underlying technology has begun to reveal significant limitations. For example, the lack of high fidelity will limit a simulation's ability to represent more robust situations such as design and test of complex systems, analysis of manufacturing and lifecycle support, high fidelity training support, and C4I planning. Consequently, simulations must now begin to reproduce, as faithfully as possible, the high-fidelity interactions that are critical to analyzing and formulating successful designs, tactics, and business strategies.
MOOSE: a Multimodeling Simulation Environment Supporting Modeling and Programming
Our goal is to promote the publication and standardization of digital objects stored on the web to enable model and object reuse. A digital object is an electronic representation of a physical object, including models of the object. There remain many challenges and issues regarding the representation and utilization of these objects, since it is not clear, for example, who will create certain objects, who will maintain the objects, and what levels of abstraction will be used in each object. This article introduces some of the technical and philosophical issues regarding digital object publication, with the aim of enumerating technical, sociological and financial problems to be addressed.
Object-oriented physical modeling (OOPM) is an object-oriented methodology for constructing physical models by emphasizing a clear framework to organize the geometry and dynamics of the models. An environment called MOOSE is under development to explore this OOPM concept. MOOSE provides a solid connection between blackboard models and software models in an unambiguous way, capturing both static and dynamic semantics of objects. Even though this facility reinforces the relation of 'model' to 'program', an adequate validation technique for modeling processes has not yet been developed. In this paper, we propose a validation method for the modeling process in MOOSE. This method utilizes a fuzzy simulation approach to encode uncertainty arising from the human reasoning process into computer simulation components.
The state of the art in computer simulation involves designing models, facilitating model execution, and analyzing the results. A key problem in modeling is the absence of a common modeling framework for sharing models among distributed researchers and industrial scientists and engineers. Distributed simulation (that is, execution) has received a significant degree of attention in the literature. We find that distributed modeling requires a similar degree of care in its exposition to achieve goals of sharing and reuse. Most models are created to solve a specific problem, and they often languish and are discarded when others need them but can neither find nor understand them. Model authors do not currently have the rich search environment available to those who wish to find remote documents containing key words. On the World Wide Web, one can search for such documents; for simulation, we need a similar capability for searching for models, objects, and classes. Unfortunately, no such software architecture or standard exists to achieve this. We present such an architecture and call it the Distributed Modeling Markup Language (DMML). DMML permits specification of three key types of modeling information: 1) the conceptual model, defining classes; 2) the geometry model; and 3) the dynamic model, elaborating behaviors. We have studied related standards such as UML, HLA, and VHDL and have isolated our contribution as one that yields a general physical object standard for distributed modeling. This standard is robust and general, yet preserves legacy code, so that model authors may reuse code as well as model components. Keywords: Simulation, Dynamic Multimodel, Object-Oriented Modeling, Object Oriented Physical Modeling, Visualization, Model Abstraction, Distributed Modeling, Application Framework
As we model a complex system, modelers need a way to better handle multiple perspectives of the system. Finding a model of a complex system that is at the right level of detail for a specific purpose is a difficult task. There is also a need to simulate the model under a time constraint, when the modeled system has to meet a given deadline in order to avoid hard/soft disasters. Considering these different needs, our question is how to determine the optimal model that simulates the system by a given deadline while still producing good quality at the right level of detail. We try to answer this question in the multimodeling object-oriented simulation environment (MOOSE). The proposed framework has three phases: (1) generation of multimodels in MOOSE using both structural and behavioral abstraction techniques; (2) assessment of quality and runtime for each generated model; (3) selection of the optimal model for a given real-time constraint. A more detailed model is selected when we have enough time to simulate, but a less detailed model is selected when the deadline is immediate.
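Phase (3), selection of the optimal model for a given real-time constraint, can be pictured as a simple rule over the generated multimodels. The data and field names below are placeholders; in the framework, runtime and quality would come from the assessment phase.

```python
def select_model(models, deadline):
    """Pick the highest-quality model whose estimated runtime
    still meets the simulation deadline."""
    feasible = [m for m in models if m["runtime"] <= deadline]
    if not feasible:
        return min(models, key=lambda m: m["runtime"])   # fall back to the coarsest model
    return max(feasible, key=lambda m: m["quality"])

# Hypothetical assessment results: runtime in seconds, quality score in [0, 1]
models = [
    {"name": "full detail",            "runtime": 120.0, "quality": 0.95},
    {"name": "behavioral abstraction", "runtime": 30.0,  "quality": 0.80},
    {"name": "structural abstraction", "runtime": 5.0,   "quality": 0.60},
]
print(select_model(models, deadline=40.0)["name"])   # -> behavioral abstraction
```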
MOOSE is an application framework under development at the University of Florida for modeling and simulation. MOOSE is based on Object Oriented Physical Modeling (OOPM) and consists of a human-computer interface (HCI), a translator, and an engine. A human model author builds the model of a physical system with the help of a graphical user interface (GUI) and represents his/her model with a picture. The MOOSE GUI contains two types of modelers: conceptual and dynamic. The conceptual modeler helps a model author define classes and relations among classes in the form of a class hierarchy that represents the conceptual model. The dynamic modeler assists a model author in building dynamic models for each of the classes defined in the conceptual model. The dynamic model types supported are the functional block model, finite state model, equation model, system dynamics model, and rule-based model. We are currently performing research to enlarge the HCI capability by adopting 3D graphics to provide a more immersive and natural environment for better interfacing of geometry and dynamic models. We suggest a 3D GUI with MOOSE plug-ins and APIs, a static modeler, and 3D scenarios. When a user selects a physical object in the 3D graphics window, they use this 'handle' to get the conceptual model that relates to that object.
This paper presents the systematic approach used for the independent verification and validation of a Joint Modeling and Simulation System (J-MASS) threat system model. Verification is the process of determining that a model implementation accurately represents the developer's conceptual description and specifications. To minimize cost and maximize benefit, the verification process should parallel the model development process. It should begin with initial model development and continue throughout the model development process. This approach results in early identification of problems, which can then be resolved cost effectively as the model development progresses instead of at the end of model development, where the cost of changes can be significant. When performing the verification tasks, the verification team reports frequently to the program office and the model developer to assure that any findings are available for immediate action and that verification activities are in consonance with program needs. The verification process is divided into the following verification tasks: software requirements verification, engineering design verification (top level), detailed design verification (software design), system test support, documentation verification, software verification, and verification reporting. The validation process determines the degree to which a model is an accurate representation of the real-world phenomenon from the perspective of the intended use(s) of the model. The objective of the validation effort is to document the differences between the model and the actual system that may impact the intended use of the model. To accomplish a validation effort in a timely manner and at minimum cost, the validation process should be integrated into the model development and model verification processes. The validation process consists of the following tasks: determine the model user requirements, establish the validation data baseline, develop a validation test matrix, test the model, compare model parametric test data to the validation data baseline, compare model performance data to the baseline, determine any impacts to model use that result from differences, and develop the model validation report. This paper outlines the procedures used to accomplish verification and validation of J-MASS models.
Today's modeling and simulation world is rich in 3D models and distributed architectures. There is, however, a need to interject operational intelligence data into this environment. The ability to represent intelligence information in a realistically modeled environment provides a unique, exploitable, global awareness capability. We have designed and implemented the global awareness visual information system protocol data unit (PDU) generator, a prototype interface that allows information in intelligence databases to serve as data sources for distributed interactive simulation (DIS) capable systems. The interface provides tools, using middleware, to create database views and an agent to obtain database information for publication as PDUs. The current implementation provides information from sample intelligence databases for visualization in an off-the-shelf 3D DIS viewer.
The Air Force Research Laboratory develops the Advanced IR Countermeasures Assessment Model (AIAM), an in-house analysis tool for the National Air Intelligence Center (NAIC). AIAM allows NAIC analysts to predict the most effective countermeasure response by a foreign aircraft when engaged by IR missiles. This paper discusses enhancements to AIAM. These enhancements include the addition of IRCM decoys with lift and thrust forces and IRCMs with large spatial extent. A model is added which represents the IR emission from aircraft engines as an extended plume in addition to a point source. A Flare Toolkit is included, allowing the analyst to create a custom flare based on whatever information is available for use in the engagement simulation. A model for the trajectory followed by an IRCM attached to a flexible tether is also added.
Simulation of large complex systems for the purpose of evaluating performance and exploring alternatives is a computationally slow process, currently still outside the domain of real-time applications. To overcome this limitation, one approach is to obtain a 'metamodel' of the system, i.e., a 'surrogate' model which is computationally much faster than the simulator and yet is just as accurate. We describe the use of neural networks (NN) as metamodeling devices which may be trained to mimic the input-output behavior of a simulation model. In the case of discrete event system (DES) models, the process of collecting the simulation data needed to obtain a metamodel can also be significantly enhanced through concurrent estimation techniques, which enable the extraction of information from a single simulation that would otherwise require multiple repeated simulations. We present applications to two benchmark problems in the C3I domain: tactical electronic ground-based radar sites, and an aircraft refueling and maintenance system as a component of a typical Air Tasking Order. A comparative analysis with alternative metamodeling approaches indicates that a NN captures significant nonlinearities in the behavior of complex systems that may otherwise not be accurately modeled.
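As a toy illustration of the metamodeling idea, a small neural network can be fit to input-output pairs collected from an inexpensive stand-in for the simulator and then queried almost instantly. The library, network size, and test function are arbitrary assumptions, not the configuration used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulator(x):
    """Stand-in for an expensive simulation: a noisy nonlinear response."""
    return np.sin(3 * x[:, 0]) * x[:, 1] + 0.05 * np.random.randn(len(x))

# Collect training data from a modest number of "simulation runs"
X = np.random.rand(300, 2)
y = simulator(X)

# Train the metamodel to mimic the simulator's input-output behavior
metamodel = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
metamodel.fit(X, y)

# The surrogate now answers "what if" queries far faster than the simulator
X_new = np.random.rand(5, 2)
print(metamodel.predict(X_new))
```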
Perturbation analysis is an efficient method for performance analysis of discrete event dynamic systems. It yields gradient information from a single sample path observation. Over the last two decades, various perturbation analysis techniques have been developed to handle a large class of problems. Coupling is a method aimed at efficiently generating multiple samples of random variables. It has a wide range of applications in applied probability. This paper is concerned with perturbation analysis via coupling. This approach offers great versatility in the form of gradient estimators. It is also potentially helpful for variance reduction in perturbation analysis. It is demonstrated in this paper that several known perturbation analysis techniques can be viewed as special ways of coupling.
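A classic instance of coupling is the use of common random numbers, in which two system variants are driven by the same random stream so that the variance of their estimated performance difference drops sharply. The toy system below is an illustrative assumption, not an example from the paper.

```python
import math, random, statistics

def service_system(mean_service, uniforms):
    """Toy performance measure: average exponential service time
    generated from a given stream of uniform random numbers."""
    return statistics.mean(-mean_service * math.log(u) for u in uniforms)

n = 2000
# Independent sampling: each design uses its own random stream
indep = [service_system(1.0, [1.0 - random.random() for _ in range(20)]) -
         service_system(0.9, [1.0 - random.random() for _ in range(20)])
         for _ in range(n)]
# Coupled sampling: both designs are driven by the same uniforms
coupled = []
for _ in range(n):
    u = [1.0 - random.random() for _ in range(20)]
    coupled.append(service_system(1.0, u) - service_system(0.9, u))

# The coupled estimator of the performance difference has far lower variance
print(statistics.stdev(indep), statistics.stdev(coupled))
```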
Simulation plays a vital role in analyzing many discrete-event systems, particularly in comparing alternative system designs with a view to optimizing system performance. However, using simulation to analyze complex systems can be both prohibitively expensive and time consuming. We present effective algorithms to intelligently allocate the computing budget for discrete-event simulation experiments. These algorithms dynamically determine the best simulation lengths for all simulation experiments and thus significantly reduce the total computation cost for a desired confidence level, providing an efficient way to identify the best design. We also compare our algorithms with traditional two-stage procedures and with techniques for the multi-armed bandit problem through numerical experiments. Numerical testing shows that our approach is more than fifteen times faster than the compared methods for the same simulation quality requirements.
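The flavor of dynamic budget allocation can be conveyed by a simple loop that grants additional simulation effort to designs whose estimates are still competitive or uncertain. This is an illustrative heuristic in the spirit of the approach, not the paper's algorithm; replications stand in here for simulation length.

```python
import random, statistics

def simulate(design):
    """Stand-in for one replication of an expensive simulation (lower is better)."""
    return random.gauss(mu=design, sigma=1.0)

def allocate_budget(designs, total_reps, initial_reps=5):
    samples = {d: [simulate(d) for _ in range(initial_reps)] for d in designs}
    spent = initial_reps * len(designs)
    while spent < total_reps:
        # Grant the next replication to the design with the lowest
        # optimistic bound: good mean or still-noisy estimate.
        def lower_bound(d):
            s = samples[d]
            return statistics.mean(s) - statistics.stdev(s) / len(s) ** 0.5
        d = min(designs, key=lower_bound)
        samples[d].append(simulate(d))
        spent += 1
    return min(designs, key=lambda d: statistics.mean(samples[d]))

print(allocate_budget(designs=[0.0, 0.2, 0.5, 1.0], total_reps=200))
```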
We present a method for efficiently solving stochastic optimization problems of discrete event systems. The new method, the nested partitions (NP) method, uses partitioning, random sampling, selection of a promising index, and backtracking techniques to create a Markov chain that has been proven to converge with probability one to a global optimum. One important feature of the NP method is that it can combine global search and local search procedures in a natural way. In particular, many sample path analysis techniques such as perturbation analysis and concurrent simulation can be effectively incorporated into the method. The NP method is demonstrated through a numerical example.
This paper discusses a specific implementation of a web- and component-based simulation system. The overall simulation container is implemented within a web page viewed with Microsoft's Internet Explorer 4.0 web browser. Microsoft's ActiveX/Distributed Component Object Model object interfaces are used in conjunction with the Microsoft DirectX graphics APIs to provide visualization functionality for the simulation. The MathWorks' Matlab computer-aided control system design program is used as an ActiveX automation server to provide the compute engine for the simulations.
Despite enthusiastic research all over the world, completely autonomous robots remain a utopia today. But purely teleoperated robotic systems, as generally used in unknown or dangerous environments, also have their limitations and drawbacks. The introduction of partial autonomy, where appropriate, could greatly enhance the performance of man-machine systems. The objective of interactive autonomy is to hide sophisticated systems behind simple interfaces and to transparently provide help to the user. These principles can be implemented to control a manipulator arm or a mobile vehicle. Telecontrol is generally associated with video images; nevertheless, in specific applications or under special circumstances the images are of poor quality, become degraded while the system is in use, or are not available. This implies the introduction of a 3D model that can be used standalone or as an augmented-reality display. Existing Internet technologies can be used for interfacing the real and the virtual worlds: VRML provides the 3D aspects, Java is the unifying language between different computer systems, and browsers and plug-ins complete the toolset. Using these technologies, we have developed a multi-client/server application to remotely view and control a mobile robot. In this paper we give a description of this application and provide a basic presentation of the tools.
Web-based simulation, a subject of increasing interest to both simulation researchers and practitioners, has the potential to significantly influence the application and availability of simulation as a problem-solving technique. Web technologies also portend cost-effective distributed modeling and simulation. These applications will require solutions to the systems interoperability problem similar to the DoD High Level Architecture (HLA). The suitability of the HLA to serve 'mainstream' simulation is examined. Approaches for incorporating discrete event simulation conceptual frameworks within the HLA are described and ongoing research in this area noted. Issues raised include a discussion of the appropriate roles for a simulation-support language and a simulation-support architecture.
The Maui High Performance Computing Center has recently established a remote IR Data Library and Web-based Simulation environment. This environment enables a large number of researchers to access, review, and process IR data sets via the internet. The IR data was collected with the ARPA-sponsored Airborne IR Measurement System (AIRMS) sensor under the completed AIRMS program.
In this paper, the infrastructure for designing a multi-user wargaming environment over the WWW architecture is proposed. The proposed infrastructure is based upon the existence of a multi-user virtual reality system on top of the WWW environment. Hence, the necessary conditions for a network-based virtual reality system to support the design of a wargaming environment are analyzed and studied in this paper. Finally, a prototype of a multi-user wargaming system was implemented with the SharedWeb system. The SharedWeb system is a multi-user virtual reality system built over the WWW environment. It provides a seamless integration of virtual reality techniques with the WWW architecture, which makes it an excellent platform for our experiment. The 3D tank combat simulation is the result of this prototyping process. The scenario of this 3D tank wargame is a two-company drill, and each company has two tanks. Since this application is simply a proof of principle, the player can only control the movement and direction of each tank and issue the fire command. The experience presented in this paper may provide some thoughts on designing a new generation of wargaming systems for the year 2000.
Linear logic, since its introduction by Girard in 1987, has proven expressive and powerful. Linear logic has provided natural encodings of Turing machines, Petri nets, and other computational models. Linear logic is also capable of naturally modeling resource-dependent aspects of reasoning. The distinguishing characteristic of linear logic is that it accounts for resources; two instances of the same variable are considered differently from a single instance. Linear logic thus must obey a form of the linear superposition principle. A proposition can be reasoned with only once, unless a special operator is applied. Informally, linear logic distinguishes two kinds of conjunction, two kinds of disjunction, and also introduces a modal storage operator that explicitly indicates propositions that can be reused. This paper discusses the application of linear logic to simulation. A wide variety of logics have been developed; in addition to classical logic, there are fuzzy logics, affine logics, quantum logics, etc. All of these have found application in simulations of one sort or another. The special characteristics of linear logic and its benefits for simulation are discussed. Of particular interest is a connection that can be made between linear logic and simulated dynamics by using the concepts of Lie algebras and Lie groups. Lie groups provide the connection between the exponential modal storage operators of linear logic and the eigenfunctions of dynamic differential operators. Particularly suggestive are possible relations between complexity results for linear logic and non-computability results for dynamical systems.
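For reference, the distinctions mentioned above can be written out explicitly; these are the standard linear-logic connectives rather than anything specific to this paper.

```latex
% Standard linear-logic connectives (\with, \parr and \oc require the cmll package)
\[
  A \otimes B \quad\text{(multiplicative conjunction)}, \qquad
  A \with   B \quad\text{(additive conjunction)},
\]
\[
  A \parr   B \quad\text{(multiplicative disjunction)}, \qquad
  A \oplus  B \quad\text{(additive disjunction)},
\]
\[
  \oc A \quad\text{(storage modality: $A$ may be reused any number of times)}, \qquad
  A \multimap B \quad\text{(linear implication: consuming $A$ yields $B$)}.
\]
```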
In the real-time simulation of dynamic systems, including hardware-in-the-loop (HITL), the direct determination of simulation errors resulting from finite numerical integration step sizes and input/output frame rates has always been elusive. The method for dynamic error measurement introduced in this paper is based on comparing two real-time simulations, one with the nominal integration step size and the second with an augmented step size. Accurate interpolation formulas are then used to convert the discrete times associated with the data sequence outputs from the second simulation to the discrete times for the first simulation. This permits a direct calculation of the difference between the two solutions at common discrete times, from which the deviation of either simulation from the solution for zero step size can be calculated with a simple formula. Although only approximate, this procedure results in calculated dynamic errors that are surprisingly accurate, as is demonstrated in example simulations in the paper. Furthermore, it permits direct determination of the dynamic errors associated with each of the constituents making up the total simulation error. For example, in a multi-rate simulation run on a single processor, it allows separate examination of the dynamic errors associated with each frame rate, which in turn permits the determination of an optimal frame-rate ratio. In a multi-processor simulation, it allows the dynamic errors associated with each processor simulation to be determined separately. It also permits direct determination of the effect of input/output sample rates on the overall dynamic accuracy. When lack of run-to-run repeatability becomes a problem for HITL simulations due to the presence of noise in the hardware, logged hardware output data from the first simulation can be used to provide computer inputs for subsequent simulations in order to restore the repeatability needed for on-line calculation of simulation errors. With the use of extrapolation methods already developed for asynchronous simulation, all of the above techniques can be implemented on-line in a real-time simulation environment.
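The 'simple formula' alluded to above is in the spirit of Richardson extrapolation: if an order-n method is run with step h and with an augmented step 2h, the interpolated difference of the two solutions at common times, divided by 2^n - 1, estimates the error of the nominal run. A hedged sketch, with an assumed integrator, test system, and step ratio:

```python
import numpy as np

def rk2_step(f, t, y, h):
    """Second-order Runge-Kutta (midpoint) step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    return y + h * k2

def run(f, y0, t_end, h):
    t, y, ts, ys = 0.0, y0, [0.0], [y0]
    while t < t_end - 1e-12:
        y = rk2_step(f, t, y, h)
        t += h
        ts.append(t); ys.append(y)
    return np.array(ts), np.array(ys)

f = lambda t, y: -y                       # test system with known solution exp(-t)
h = 0.05
ts1, ys1 = run(f, 1.0, 5.0, h)            # nominal step size
ts2, ys2 = run(f, 1.0, 5.0, 2 * h)        # augmented step size
ys2_on_ts1 = np.interp(ts1, ts2, ys2)     # interpolate coarse run onto fine time points

order = 2
est_error = (ys2_on_ts1 - ys1) / (2**order - 1)   # estimated error of the nominal run
true_error = np.exp(-ts1) - ys1                   # exact error, for comparison only
print(float(np.max(np.abs(est_error))), float(np.max(np.abs(true_error))))
```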
It is well known that the efficiency of simulating large, complex dynamic systems can be significantly improved by using multi-rate integration, that is, by employing different integration frame rates for the simulation of fast and slow subsystems. Also, it is apparent that the overall speed of such simulations can be increased very significantly with the utilization of multiple processors. These considerations become especially important when the calculations must be run at real-time speed, as in hardware-in-the-loop simulations. However, there are significant issues regarding the scheduling algorithms for multi-rate integration within a single processor, as well as the data transfers between multiple processors. In this paper we show how these problems can be greatly simplified by letting each subsystem simulation run asynchronously with respect to the other subsystem simulations, using either fixed or variable integration step sizes. Key to the successful implementation of both multi-rate integration and real-time, variable-step integration is the use of accurate extrapolation formulas to convert data sequences for one frame rate to another and to compensate for time mismatches and latencies in any real-time data sequences. Each processor in a multi-processor simulation can then be assigned to an identifiable subsystem or group of subsystems and run at its own frame rate, either fixed or variable. In this paper several examples of such simulations, both real time and non-real time, are presented. Utilization of the asynchronous methodology has the potential of greatly reducing the difficulties associated with interconnecting and testing many processors and hardware subsystems in a complex hardware-in-the-loop simulation.
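A minimal sketch of multi-rate integration with extrapolation between frame rates: a fast and a slow subsystem are advanced at different rates and coupled through linearly extrapolated values of each other's outputs. The coupled system and the first-order extrapolator are illustrative assumptions, not the paper's formulas.

```python
def extrapolate(t, history):
    """Linear extrapolation from the last two (time, value) samples
    of the other subsystem's output."""
    (t0, v0), (t1, v1) = history[-2], history[-1]
    return v1 + (v1 - v0) * (t - t1) / (t1 - t0)

# Fast subsystem: x' = -10*x + y   (step h_fast)
# Slow subsystem: y' = -y + x      (step h_slow = 5 * h_fast)
h_fast, h_slow = 0.001, 0.005
x, y = 1.0, 0.0
x_hist = [(-h_fast, 1.0), (0.0, 1.0)]
y_hist = [(-h_slow, 0.0), (0.0, 0.0)]

for k in range(1, 5001):
    t = k * h_fast
    # fast frame: uses an extrapolated value of the slow state
    x += h_fast * (-10.0 * x + extrapolate(t, y_hist))
    x_hist.append((t, x))
    if k % 5 == 0:                      # slow frame runs at one fifth the rate
        y += h_slow * (-y + extrapolate(t, x_hist))
        y_hist.append((t, y))

print(x, y)
```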
The paper describes a new methodology for predicting the behavior of macroeconomic variables. The approach is based on System Dynamics and Fuzzy Inductive Reasoning. A four-layer pseudo-hierarchical model is proposed. The bottom layer makes predictions about population dynamics, age distributions among the populace, and demographics. The second layer makes predictions about the general state of the economy, including such variables as inflation and unemployment. The third layer makes predictions about the demand for certain goods or services, such as milk products, used cars, mobile telephones, or Internet services. The fourth and top layer makes predictions about the supply of such goods and services in terms of their prices. Each layer can be influenced by control variables whose values are only determined at higher levels; in this sense, the model is not strictly hierarchical. For example, the demand for goods at level three depends on the prices of these goods, which are only determined at level four. Yet, the prices are themselves influenced by the expected demand. The methodology is exemplified by means of a macroeconomic model that makes predictions about US food demand during the 20th century.
Simultaneous events are usually a source of modeling errors, not only because this is a difficult concept to capture, but also because current modeling methodologies usually do not provide good solutions for the problem. This topic has been the subject of much discussion and many proposed solutions. The most common approaches are based on the assignment of static priorities to events, activities, or processes in, respectively, the event, activity, and process world views. In modular methodologies a common approach is to assign priorities to components. Recently, these modeling formalisms have introduced the possibility of handling simultaneous events at the same time, i.e., in parallel. Parallelism removes the difficult task of assigning priorities to models. Dynamic structure models introduced a new dimension to this subject: how to define model structure when simultaneous transitions occur. The parallel dynamic structure discrete event formalism offers both a simple and powerful solution to this problem and the advantage of removing ambiguity when structural changes occur.
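The two policies contrasted above can be sketched for an event-list simulator: either a static priority order breaks ties among simultaneous events, or all events sharing a time stamp are delivered together so the receiving component can resolve them itself. The structure below is illustrative only and is not the formalism's definition.

```python
import heapq

def run(events, parallel=False):
    """events: list of (time, priority, label). With parallel=True, all
    events sharing a time stamp are delivered together as one bag."""
    heapq.heapify(events)
    while events:
        t, _, label = heapq.heappop(events)
        if parallel:
            batch = [label]
            while events and events[0][0] == t:
                batch.append(heapq.heappop(events)[2])
            print(t, batch)          # the component decides how to combine them
        else:
            print(t, label)          # static priority has already resolved the tie

evts = [(1.0, 2, "external input"), (1.0, 1, "internal transition"), (2.0, 1, "departure")]
run(list(evts))            # priority-ordered delivery
run(list(evts), True)      # simultaneous events delivered in parallel
```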
Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to a lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data. IST has been developing a suite of software tools, named ZCAP, to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training. We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.
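One of the quantities involved, line-of-sight, can be computed per database with a straightforward sampling test and then compared across databases. The gridded heightfield and sampling scheme below are assumptions for illustration and are not the ZCAP algorithms.

```python
import numpy as np

def line_of_sight(heights, a, b, eye_height=2.0, samples=100):
    """Return True if point b is visible from point a over the heightfield.
    `heights` is a 2-D array of terrain elevations on a unit grid;
    a and b are (row, col) positions."""
    (r0, c0), (r1, c1) = a, b
    z0 = heights[int(r0), int(c0)] + eye_height
    z1 = heights[int(r1), int(c1)] + eye_height
    for s in np.linspace(0.0, 1.0, samples)[1:-1]:
        r, c = r0 + s * (r1 - r0), c0 + s * (c1 - c0)
        ray_z = z0 + s * (z1 - z0)                         # height of the sight line
        terrain_z = heights[int(round(r)), int(round(c))]  # nearest-post terrain height
        if terrain_z > ray_z:
            return False                                   # terrain blocks the ray
    return True

# Comparing the same observer/target pairs on two databases gives a
# simple correlation statistic: the fraction of pairs with matching results.
rng = np.random.default_rng(0)
db_a = rng.random((50, 50)) * 10
db_b = db_a + rng.normal(0, 0.5, db_a.shape)               # a slightly different database
pairs = [((5, 5), (40, 40)), ((0, 49), (49, 0)), ((10, 30), (30, 10))]
agree = sum(line_of_sight(db_a, p, q) == line_of_sight(db_b, p, q) for p, q in pairs)
print(f"{agree}/{len(pairs)} line-of-sight results agree")
```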