KEYWORDS: Prototyping, Java, Analytical research, Information operations, Systems modeling, Space operations, Document management, Performance modeling, Telecommunications, Control systems
Net-centric information spaces have become essential for supporting information exchange in tactical warfighting
missions using a publish-subscribe-query paradigm. To support dynamic, mission-critical, and time-critical operations,
information spaces require quality of service (QoS)-enabled dissemination (QED) of information. This paper describes
the results of research we are conducting to provide QED information exchange in tactical environments. We
have developed a prototype QoS-enabled publish-subscribe-query information broker that provides timely delivery of
information needed by tactical warfighters in mobile scenarios with time-critical emergent targets. This broker enables
tailoring and prioritizing of information based on mission needs and responds rapidly to priority shifts and unfolding
situations. This paper describes the QED architecture, prototype implementation, testing infrastructure, and empirical
evaluations we have conducted based on our prototype.
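The central mechanism of such a broker is delivering published information to subscribers in order of mission priority rather than arrival order. As a rough illustration only (the class and method names below are ours, not the QED prototype's API), the idea can be sketched in Java, one of the paper's keywords:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;
import java.util.function.Consumer;

// Illustrative sketch of a priority-aware publish-subscribe broker.
// All names here are assumptions for exposition, not the paper's implementation.
class PriorityBroker {

    // A published item: topic, payload, and a mission priority (lower value = more urgent).
    static class Item {
        final String topic;
        final String payload;
        final int priority;
        Item(String topic, String payload, int priority) {
            this.topic = topic;
            this.payload = payload;
            this.priority = priority;
        }
    }

    private final Map<String, List<Consumer<Item>>> subscribers = new HashMap<>();

    // Pending items are held here and dispatched most-urgent-first.
    private final PriorityQueue<Item> queue =
            new PriorityQueue<>(Comparator.comparingInt(i -> i.priority));

    public void subscribe(String topic, Consumer<Item> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    public void publish(String topic, String payload, int priority) {
        queue.add(new Item(topic, payload, priority));
    }

    // Drain the queue in priority order, so time-critical information goes out first
    // even when it was published after lower-priority traffic.
    public void dispatch() {
        Item item;
        while ((item = queue.poll()) != null) {
            for (Consumer<Item> h : subscribers.getOrDefault(item.topic, List.of())) {
                h.accept(item);
            }
        }
    }
}
```

Because ordering is decided at dispatch time, a priority shift (e.g., an emergent target) reorders still-queued items without any change to publishers or subscribers, which mirrors the responsiveness the abstract describes.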
Mobile robots are prime examples of systems that require a high level of autonomy. Robots are often loosely supervised by humans
who are not intimately familiar with their inner workings, and we generally cannot predict in advance the exact environmental conditions in
which a robot will operate. Its behavior must therefore be adapted in the field. Untrained individuals cannot (and
probably should not) program the robot to effect these changes. We
need a system that will (a) allow re-tasking, and (b) allow adaptation of the behavior to the specific conditions in the field. In this paper we concentrate on (b). We will describe how to assemble
controllers from high-level descriptions of the behavior. We will show how a human can tune the behavior without knowing how the code is put together. We will also show how this tuning can be done automatically, using reinforcement learning, and point out the problems that must be overcome for this approach to work.
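Automatic tuning of this kind treats the assembled controller as a black box with adjustable parameters and searches for values that improve performance on the task. As a greatly simplified stand-in for the reinforcement-learning approach the abstract mentions (the toy plant, gain parameter, and hill-climbing search below are all our assumptions, not the authors' method), the idea can be sketched as:

```java
// Sketch: tune one behavior parameter by evaluating episodes and keeping improvements.
// A hypothetical proportional behavior u = -gain * x drives a toy plant toward zero;
// real RL would learn from richer reward signals, but the tune-by-trial loop is analogous.
class BehaviorTuner {

    // Run one episode on a toy plant x_{t+1} = x_t + u, starting at x = 10,
    // and return the accumulated tracking error (lower is better).
    static double episodeCost(double gain) {
        double x = 10.0;
        double cost = 0.0;
        for (int t = 0; t < 20; t++) {
            double u = -gain * x;   // the behavior, parameterized by 'gain'
            x = x + u;
            cost += Math.abs(x);
        }
        return cost;
    }

    // Hill-climb over the gain: try neighboring values each iteration and
    // keep whichever yields the lowest episode cost.
    static double tune(double initialGain, int iterations) {
        double best = initialGain;
        double bestCost = episodeCost(best);
        double step = 0.1;
        for (int i = 0; i < iterations; i++) {
            for (double candidate : new double[] {best - step, best + step}) {
                double c = episodeCost(candidate);
                if (c < bestCost) {
                    bestCost = c;
                    best = candidate;
                }
            }
        }
        return best;
    }
}
```

The untrained supervisor never touches the controller's internals; only the scalar performance signal and the parameter search are needed, which is what makes in-the-field adaptation feasible.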