The Self Healing Minefield (SHM) comprises a networked system of mobile anti-tank landmines. When the mines detect a breach, each calculates an appropriate response, and some fire small rockets to “hop” into the breach path, healing the breach. The purpose of the SHM is to expand the capabilities of traditional obstacles and provide an effective anti-tank obstacle that does not require Anti-Personnel (AP) submunitions. The DARPA/ATO-sponsored program started in June 2000 and culminated in a full 100-unit demonstration at Fort Leonard Wood, MO in April 2003. The program went from concept to a prototype system demonstration in approximately 21 months, and to a full, tactically significant demonstration in approximately 33 months. Significant accomplishments included the following: (1) demonstration of a working, scalable (on the order of a hundred nodes), ad hoc, self-healing RF network; (2) demonstration of an innovative distributed time-synchronization scheme that does not rely on GPS; (3) demonstration of a non-GPS-based, self-mapping, relative geolocation system; (4) development of an innovative distributed safe, arm, and fire system that allows independent firing of eight rockets within a single node; and (5) development of a small rocket design with a novel geometry that meets the propulsion requirements.
This work investigates a distributed approach to fusion and sensor management in unattended sensor networks. The distributed approach not only improves robustness to node failure, but also reduces network communications load significantly compared with the more traditional non-managed centralized processing approach. Monte Carlo simulations show that bandwidth reductions by factors of two to three relative to traditional architectures are achievable, depending on such factors as radio communications range and node availability.
Today's Warfighter requires new capabilities that reduce the kill-chain timeline. The capability to maintain track on mobile Time Sensitive Targets (TSTs) throughout the entire targeting cycle is a step towards that goal. Continuous tracking provides strike assets with high-confidence, actionable targeting information, which reduces the time it takes to reacquire the target prior to prosecution. The Defense Advanced Research Projects Agency (DARPA) Dynamic Tactical Targeting (DTT) program is developing new sensor resource management and data fusion technologies for continuous coordination of tactical sensor resources to detect and identify mobile ground targets and maintain track on these known high-value targets. An essential concept of the DTT approach is that the fusion system and the resource manager operate as part of a closed-loop process that produces optimum collection plans against the designated high-value TSTs. In this paper, we describe this closed-loop approach used within the DTT system. The paper also describes other aspects of the DTT program, including overall program status, the DTT distributed architecture, and details of the fusion and dynamic sensor management components, and concludes with current evaluation results.
We present a design concept for an integrated communication and sensor network that employs swarms of Unmanned Aerial Vehicles (UAVs). UAVs are deployed in two types of swarms: sensor swarms or communication swarms. Sensor swarms are motivated by the belief that adversaries will force future confrontations into urban settings, where advantages in surveillance and weapons are diminished. A sensor system is needed which can provide high-resolution imagery and an unobstructed view of a hazardous environment fraught with obstructions. These requirements can be satisfied by a swarm of inexpensive UAVs which “work together” by arranging themselves into a flight configuration that optimizes their integrated sensing capability. If a UAV is shot down, the swarm reconfigures its topology to continue the mission with the surviving assets. We present a methodology that integrates the agents into a formation that enhances the sensing operations while minimizing the transmission of control information for topology adaptation. We demonstrate the performance tradeoff between search time and number of UAVs employed, and present an algorithm that determines the minimum swarm size necessary to meet a targeted search completion time within probabilistic guarantees. A communication swarm provides an infrastructure to distribute information provided by the sensor swarms, and enables communication between dispersed ground locations. UAVs are “guided” to locations that provide the best support for an underlying ground-based communication network and for dissemination of data collected by sensor swarms.
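The minimum-swarm-size calculation can be illustrated with a toy model. The sketch below assumes each UAV independently "glimpses" the target with a fixed per-step detection probability and finds the smallest swarm meeting a probabilistic completion guarantee; this is a simplification for illustration, not the paper's algorithm, and all names and parameters are assumptions:

```python
def min_swarm_size(p_detect, deadline_steps, confidence, n_max=1000):
    """Smallest number of UAVs n such that the probability of completing
    the search within deadline_steps is at least `confidence`, under an
    independent-glimpse model: each UAV independently detects the target
    with probability p_detect on each time step (illustrative model only).
    """
    for n in range(1, n_max + 1):
        # P(found by deadline) = 1 - P(all n * deadline_steps glimpses miss)
        p_complete = 1.0 - (1.0 - p_detect) ** (n * deadline_steps)
        if p_complete >= confidence:
            return n
    return None  # guarantee unattainable within n_max UAVs


# Example: weak per-step sensors need more airframes to hit the deadline.
print(min_swarm_size(0.01, 100, 0.95))  # → 3
```

Under this model the tradeoff in the abstract is explicit: a tighter deadline or a higher confidence target directly increases the required swarm size.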
Transformation of military information systems to a network-centric paradigm will remove traditional barriers to
interoperability and enable dynamic access to information and analysis resources. The technical challenges of
accomplishing network-centric warfare (NCW) require the engineering of agile distributed software components imbued
with the ability to operate autonomously on behalf of human individuals, while maintaining system level integrity,
security, and performance efficiency on a grand scale.
In this paper, we will describe how agents provide a critical enabling technology for applying emerging commercial
technologies, such as web services, to network-centric warfare problems. The objective of our research is to develop
and share battlespace awareness and understanding. Our agent information service manages information collection and
dissemination/publishing activities on behalf of fusion services in an autonomous, yet controllable fashion. Agents
improve the scalability and reliability at the system of systems level through dynamic selection and exploitation of web
services based upon needs and capabilities.
As military tactics evolve toward execution-centric operations, the ability to analyze vast amounts of mission-relevant
data is essential to command-and-control decision making. To maintain operational tempo and achieve information
superiority we have developed Vigilant Advisor, a mobile agent-based distributed Plan Execution Monitoring system.
It provides military commanders with continuous contingency monitoring tailored to their preferences while
overcoming the network bandwidth problem often associated with traditional remote data querying. This paper presents
an overview of Plan Execution Monitoring as well as a detailed view of the Vigilant Advisor system including key
features and statistical analysis of resource savings provided by its mobile agent-based approach.
A fuzzy logic based expert system for resource management has been developed that automatically allocates electronic attack (EA) resources in real time over many dissimilar autonomous naval platforms defending their group against attackers. This paper provides an overview of the resource manager (RM), including the current form of the fuzzy decision trees that make up the RM. It also provides a detailed discussion of the design and results of hardware simulation experiments that test the RM and the network-centric concept. The experiments employ multiple computers, each running its own copy of the RM, with TCP/IP connections for communication. A detailed discussion of the hardware simulator SRES is provided, including the various radar receivers that can be used with it and its ability to generate simulated radio frequency targets. The ability of the RMs residing on each platform of an allied group to allocate different EA techniques distributed across multiple platforms is considered. The war game used to set up the experiments is the same one employed for the digital simulation experiments, except that simulated sensors are replaced with real hardware and each platform runs on a different computer.
Capability engineering, a new methodology with the potential to transform defence planning and acquisition, is
described. The impact of capability engineering on existing defence business processes and organizations is being
explored in Canada during the course of a four-year Technology Demonstration Project called Collaborative Capability
Definition, Engineering and Management (CapDEM). Having completed the first of three experimentation spirals
within this project, a high-level capability engineering process model has been defined. The process begins by mapping
strategic defence guidance onto defence capabilities, using architectural models that articulate the people, process and
materiel requirements of each capability when viewed as a system-of-systems. For a selected capability, metrics are
rigorously applied to these models to assess their ability to deliver the military capability outcomes required by a set of
predefined tasks and force planning scenarios. By programming the modification of these tasks and planning scenarios
over time according to evolving capability objectives, quantifiable capability gaps are identified, which in turn drive the
process towards options to close these gaps. The implementation plan for these options constitutes a capability evolution
roadmap to support defence-investment decisions. Capability engineering is viewed as an essential enabler to meeting
the objective of improved capability management, subsuming the functions of capability generation, sustainment and
This document presents a high-level description of the Land Force Command Support System architecture. It explains the goals of automating the Army Command and Control processes, the main operational functions of Land Force Command Support System to meet those goals, and the major technological components of the System.
There is an increasing requirement for new capabilities to operate in a coalition rather than just within a country's own
network centric (or platform centric) force. This paper discusses an approach for characterising a coalition C4ISR
architecture in the future timeframe, for the purpose of analysing information exchange and interoperability issues that
may occur when introducing a new system into a mix of future systems and legacy systems with the requirement to
ensure NCW readiness.
To characterise the C4ISR environment two timeframes, 2010 and 2020, are envisaged for the architecture. These two
timeframes represent different types of models of the architecture. The closer timeframe represents a physical model of
the C4ISR environment, with the assumption that the time is no further out than what is covered by defence capability
plans and knowledge about legacy systems that will still be in use. Its purpose is to allow constructive information
exchange with potential future coalition partners regarding interfaces and interoperability. The distant timeframe is set
beyond the plans for future capability development. However, known capabilities will still be present at that time. This
timeframe represents a requirement and functional concepts model of the architecture. Its purpose is to allow the
development of new concepts perhaps more aligned to NCW thinking.
The approach utilises systems engineering as a basis for the process and a combination of architecture products for
documentation. The work is supported by the use of a collaborative engineering environment and a number of common
systems engineering tools such as DOORS, CORE and System Architect.
The twentieth century saw the armies of the US and the UK successfully meet a number of extreme demands imposed by changes in weapons technology and by politico-military events. In many cases, on both sides of the Atlantic, this has demanded a greater or lesser transformation of military organisation and practice.
The present paper attempts a broad conspectus of the reactions of both armies to the most significant of these technological challenges, such as the magazine rifle, war gases, the tank, indirect-fire artillery, radio control, the atomic bomb, the guided missile and the digital computer.
It seems that the US Army has been much more prepared than the British to re-organise itself to meet technological change. The British Army not only seems to have transformed itself less often, but also as a response to pressures other than those of technology.
The author concludes that there are certain principles that have held good throughout a century of sometimes dizzying technological change, and which will be worth holding on to. The force transformation we see may not be entirely the one we expect.
Changes in the nature of battlespace information services, combined with the drive to digitization, are raising expectations of the ability of network-centric systems to provide information throughput and timeliness. At a level often abstracted from the systems perspective, it becomes necessary to consider the nature of the underlying network and its ability to adapt, recover, and organise in the face of increasing demands and non-optimal environments. Without this consideration, it may be that the capabilities of the underlying network act to restrict the exploitation of Network-Enabled Capability.
Autonomic networks and autonomic computing are being presented as a possible aid to sustaining critical infrastructures of dynamic nodes. Although the focus of much commercial activity, autonomic networks are also believed to have relevance in the military environment and, most importantly, in supporting emerging battlespace information systems and digitization initiatives.
Although well understood in biological contexts, autonomic principles have yet to be proven in commercial technological environments and, more importantly, in the context of military demands. Key issues therefore concern the true nature of autonomic networks, the benefits accruing from such networks, and the challenges compounded by increasing demands from the ongoing development of military technology and digitization trends.
This paper examines the demands made by the evolution of battlespace information services and some of the technologies applicable to those demands, and surveys current and emerging technology to determine the perceived nature of autonomic networks in the context of Network-Enabled Capability.
Netcentric thinking allows outside devices and systems to insinuate themselves into the operation of an embedded device. Netcentric systems are defined as a set of connected devices: embedded devices, information appliances, desktop computers, and servers. In a sensor-to-strike scenario, the chain of events that connects the initiation of a control event to its result is not confined to a closed space. The chain may incorporate information-gathering devices and weapons a thousand miles apart, or any of a myriad of devices in a more local confederated environment. System architectural approaches now need to consider determinism not only on the control side but on the communications side as well. This affects the design and use of computer hardware and software, and supporting tools. System designers can no longer work within the confines of closed systems, where they have relative control over all aspects of the design. This paper investigates two critical software technologies that address the open-systems aspects of network-centric systems: publish/subscribe mechanisms and service discovery mechanisms. Issues relating to determinism, reliability, predictability, security, and scalability are discussed.
The goal of the DARPA Dynamic Optical Tags (DOTs) program is to develop a small, robust, persistent, two-way tagging, tracking and locating device that also supports communications at data rates greater than 100 kbps and can be interrogated at significant range. These tags will allow for two-way data exchange and tagging operations in friendly and denied areas. The DOTs will be passive and non-RF. To accomplish this, the DOTs program will develop small, thin, retro-reflecting modulators. The tags will operate for long periods of time (greater than two months) in real-world environmental conditions (-40° to +70° C) and allow for a wide interrogation angle (±60°). The tags will be passive (in the sleep mode) for most of the time and only become active when interrogated by a laser with the correct code. Once correctly interrogated, the tags will begin to modulate and retro-reflect the incoming beam. The program will also develop two tag-specific transceiver systems that are eye-safe, employ automated scanning algorithms, and are capable of short search and interrogate times.
The future model of the US Army's Future Combat Systems (FCS) and the Future Force reflects a combat force that utilizes lighter armor protection than the current standard. Survival on the future battlefield will be increased by the use of advanced situational awareness provided by unattended tactical and urban sensors that detect, identify, and track enemy targets and threats. Successful implementation of these critical sensor fields requires the development of advanced sensors, sensor- and data-fusion processors, and a specialized communications network. To ensure warfighter and asset survivability, the communications must be capable of near real-time dissemination of the sensor data using robust, secure, stealthy, and jam-resistant links so that proper and decisive action can be taken. Communications will be provided to a wide array of mission-specific sensors that are capable of processing data from acoustic, magnetic, seismic, and/or Chemical, Biological, Radiological, and Nuclear (CBRN) sensors. Other, more powerful, sensor node configurations will be capable of fusing sensor data and of intelligently collecting and processing images from infrared or visual imaging cameras. The radio waveform and networking protocols being developed under the Soldier Level Integrated Communications Environment (SLICE) Soldier Radio Waveform (SRW) and the Networked Sensors for the Future Force Advanced Technology Demonstration are part of an effort to develop a common waveform family that will operate across multiple tactical domains, including dismounted soldiers, ground sensors, munitions, missiles, and robotics. These waveform technologies will ultimately be transitioned to the JTRS library, specifically the Cluster 5 requirement.
In this work, we consider the problem of reliable data
dissemination in mobile wireless sensor networks. We propose a
Localized Gradient Management Algorithm (LGMA) that operates as a
mobility sub-module for data-centric protocols. LGMA gives sensor
nodes more responsibility for keeping the gradients in their
neighborhood operational by using local information deduced from
their environment. Performance comparisons of LGMA versus a
Sink-Oriented data dissemination protocol with location updates
show that LGMA provides considerably higher event delivery ratio
under multiple scenarios and rates of sink and network mobility
while incurring much lower communication overhead and enabling
faster gradient repairs.
In this paper we propose a new routing protocol called Directional Flow Routing (DFR), which takes advantage of the directional antenna technology envisioned in the near future for ad hoc networks. DFR is a source routing protocol in which the route is completely specified by a Directional Flow Vector (DFV) between the source and the destination. The DFV is a time-varying straight line joining the source and the destination, computed from the relative velocity and position vectors between the two. A packet is delivered to the destination by aligning the directional antennas of nodes along the flow vector so that packets flow along the instantaneous line joining the source and the sink. DFR is a stateless source routing protocol that has the potential to efficiently handle large-scale and dense ad hoc networks with very high mobility rates. The paper presents the conceptual framework for the DFR routing paradigm. Although we also propose a simple protocol for practical implementation of the concept, we do not intend to analyze the performance of the protocol in this paper. Rather, the focus of this paper lies in discussing the design choices that would be necessary for a protocol implementation. It is also intended to highlight the issues and practical challenges involved in designing algorithms using directional antennas in general, and should serve as a guideline for future research.
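The Directional Flow Vector idea can be illustrated with a small 2-D sketch: extrapolate the source and destination positions from their velocity vectors, then take the bearing of the instantaneous line joining them. This is a hypothetical toy for intuition only, not the DFR protocol:

```python
import math

def dfv_bearing(src_pos, src_vel, dst_pos, dst_vel, t):
    """Bearing (radians, from the +x axis) of the Directional Flow Vector
    at time t: the instantaneous straight line from source to destination,
    extrapolated linearly from positions and velocities known at t = 0.
    A 2-D illustrative sketch; inputs are (x, y) tuples.
    """
    # Extrapolate both endpoints to time t under constant velocity.
    sx, sy = src_pos[0] + src_vel[0] * t, src_pos[1] + src_vel[1] * t
    dx, dy = dst_pos[0] + dst_vel[0] * t, dst_pos[1] + dst_vel[1] * t
    # The DFV bearing is the direction of the source-to-destination line.
    return math.atan2(dy - sy, dx - sx)


# A stationary source tracking a destination moving north: the line
# joining the two rotates over time, and antennas would re-align with it.
print(dfv_bearing((0, 0), (0, 0), (1, 0), (0, 1), 1.0))  # → pi/4
```

A relay node would steer its directional beam along this time-varying bearing so packets hug the moving source-sink line without per-route state.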
The Stream Control Transmission Protocol (SCTP), a general-purpose
transport layer protocol standardized by the IETF, has been a promising
candidate to join UDP and TCP as a core protocol. The new SCTP features
such as multi-homing, multi-streaming, and enhanced security can
significantly improve the performance of FCS applications.
Multi-streaming provides an aggregation mechanism in an SCTP association
to accommodate heterogeneous objects, which belong to the same
application but may require different types of QoS from the network.
However, the current SCTP specification lacks an internal mechanism to
support the preferential treatment among its streams. We introduce the
concept of subflow and propose to modify the current SCTP such that the
streams are grouped into several subflows according to their required
QoS. We also propose that each subflow implement its own
congestion control to prevent the so-called false-sharing problem. To
compare the throughput differences, analytic models have been derived
for the current SCTP and for the subflow-capable SCTP with different
congestion control mechanisms. Simulations with ns-2 have been used to
qualitatively demonstrate the throughput differences of these
designs in a simplified diff-serv network. The analytical models are
confirmed to accurately reflect the SCTP behavior. The simulation also
shows that our proposed solution is able to efficiently support QoS
among the SCTP streams.
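The stream-to-subflow grouping step proposed above can be sketched as follows. The QoS class labels are illustrative (not from the SCTP specification), and the snippet shows only the grouping, not the per-subflow congestion control:

```python
from collections import defaultdict

def group_streams_into_subflows(stream_qos):
    """Group SCTP stream ids by required QoS class so that each subflow
    (one per class) can run its own congestion control instance.
    `stream_qos` maps stream id -> QoS class label (labels illustrative).
    """
    subflows = defaultdict(list)
    for stream_id, qos_class in stream_qos.items():
        subflows[qos_class].append(stream_id)
    # Sort ids within each subflow for a deterministic layout.
    return {qos: sorted(ids) for qos, ids in subflows.items()}


# Streams 0 and 2 want expedited forwarding; stream 1 is best-effort.
print(group_streams_into_subflows({0: "EF", 1: "BE", 2: "EF"}))
# → {'EF': [0, 2], 'BE': [1]}
```

Giving each group its own congestion window prevents a loss on a best-effort stream from throttling the expedited streams sharing the association.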
We present Dynamic Survivable Resource Pooling (DSRP) that provides
survivable access to resources and services in battlefield networks. The
servers accessed by mobile users (e.g., FCS backbone managers, TPKI,
Bandwidth Brokers, Situation Awareness/Common Network Picture, SIP) are pooled together for higher availability and failover; the Name
Servers (NSs) are responsible for maintaining server pools, load balancing, and server discovery. In the DSRP scheme, NSs are placed on a virtual
backbone (VB): a highly distributed, scalable, and survivable network
formed and maintained through one-hop beacons. By making locally scoped
decisions, VB is capable of reorganizing both itself and pool registrations
in response to mobility, failures, and partitioning. A proof-of-concept of
the DSRP successfully demonstrated its survivability.
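The failover benefit of server pooling can be illustrated with a minimal sketch: a Name Server hands out the least-loaded live member of a pool, falling back automatically as servers fail. The field names and load metric are assumptions for illustration, not details of the DSRP design:

```python
def pick_server(pool, alive):
    """Return the least-loaded live server in `pool`, or None if the pool
    has no survivors. `pool` is a list of {'name': ..., 'load': ...} dicts;
    `alive` maps server name -> liveness (illustrative failure detector).
    """
    live = [s for s in pool if alive.get(s["name"], False)]
    if not live:
        return None  # whole pool is down or partitioned away
    return min(live, key=lambda s: s["load"])


pool = [{"name": "a", "load": 5}, {"name": "b", "load": 2}]
print(pick_server(pool, {"a": True, "b": True})["name"])   # → b
print(pick_server(pool, {"a": True, "b": False})["name"])  # → a
```

Because clients resolve the pool rather than an individual server, a failed member simply drops out of the selection on the next lookup.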
Multicasting at the IP layer has not been widely adopted due to a combination of technical and non-technical
issues. End-system multicast (also called application-layer multicast) is an attractive alternative to IP layer
multicast for reasons of user management (set-up and control) and attack avoidance. Sessions can be established
on demand such that there are no static points of failure to target in advance.
In end-system multicast, an overlay network is built on top of available network services and packets are
multicasted at the application layer. The overlay is organized such that each end host participating in a multicast
communication re-sends multicast messages to some of its peers, but not all of them. Thus end-system multicast
allows users to manage multicast sessions under varying network conditions without being dependent on specific
network conditions or specific network equipment maintaining multicast state information.
In this paper we describe a variety of proposed end-system multicast solutions and classify them according to
characteristics such as overlay building technique, management, and scalability. Comparing these characteristics
across different end-system multicast solutions is a step toward understanding which solutions are appropriate
for different battlespace requirements and where further research is needed.
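The core forwarding rule of end-system multicast, in which each host re-sends only to its own overlay peers rather than to every participant, can be sketched as a walk over an overlay tree. The tree representation is illustrative only:

```python
def overlay_multicast(tree, root, message, delivered=None):
    """Deliver `message` along an application-layer multicast overlay:
    each end host forwards only to its own children in the overlay tree,
    never to all peers. `tree` maps host -> list of child hosts.
    Illustrative sketch of the end-system multicast forwarding rule.
    """
    if delivered is None:
        delivered = {}
    delivered[root] = message          # local delivery at this host
    for child in tree.get(root, []):   # re-send to overlay children only
        overlay_multicast(tree, child, message, delivered)
    return delivered


# A forwards to B and C; B forwards on to D. Every host receives the
# message while no host sends more than its own fan-out.
tree = {"A": ["B", "C"], "B": ["D"]}
print(sorted(overlay_multicast(tree, "A", "hello")))  # → ['A', 'B', 'C', 'D']
```

Overlay-building techniques differ mainly in how this tree is constructed and repaired, which is the axis along which the surveyed solutions are classified.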
Vulnerabilities are a growing problem in both the commercial and government sectors. The latest vulnerability information compiled by CERT/CC, for the year ending Dec. 31, 2002, reported 4,129 vulnerabilities, a 100% increase over 2001 (the 2003 report had not been published at the time of this writing). It does not take long to realize that the growth rate of vulnerabilities greatly exceeds the rate at which they can be fixed, and that our nation's networks are consequently growing less secure at an accelerating rate. As organizations become aware of vulnerabilities they may initiate efforts to resolve them, but quickly realize that the size of the remediation project is greater than their current resources can handle. In addition, many IT tools that suggest solutions to these problems in reality address only some of the vulnerabilities, leaving the organization unsecured and back to square one in its search for solutions. This paper proposes an auditing framework called NINJA (Network Investigation Notification Joint Architecture) for noninvasive daily scanning/auditing based on common security vulnerabilities that repeatedly occur in a network environment. The framework is used for performing regular audits in order to harden an organization's security infrastructure. It is based on the results obtained by the Network Security Assessment Team (NSAT), which emulates adversarial computer network operations for US Air Force organizations. Auditing is the most time-consuming factor involved in securing an organization's network infrastructure. The framework discussed in this paper uses existing scripting technologies to maintain a security-hardened system at a defined level of performance as specified by the computer security audit team. Mobile agents, which were under development at the time of this writing, are used at a minimum to improve the noninvasiveness of the scans.
In general, noninvasive scans performed daily within an adequate framework reduce the security workload and improve the timeliness of remediation, as verified by the NINJA framework. A vulnerability assessment/auditing architecture based on mobile agent technology is proposed and examined at the end of the article as an enhancement to the current NINJA architecture.
Various approaches for transporting digital video over Ethernet and SONET networks are presented. Commercial analog and digital frame grabbers are utilized, as well as software running under Microsoft Windows 2000/XP. No other specialized hardware is required. A network configuration using independent VLANs for video channels provides efficient transport for high bandwidth data. A framework is described for implementing both uncompressed and compressed streaming with standard and non-standard video. NTSC video is handled as well as other formats that include high resolution CMOS, high bit-depth infrared, and high frame rate parallel digital. End-to-end latencies of less than 200 msec are achieved.
Network Centric Operations, in the context of this paper, consist of much more than "network centric warfare." They connect the joint war-fighting "enterprise" with tactical "decider-sensor-effector" linkages, and include the integration of all levels of warfare operations and, in particular, integration with the cognitive processes employed by battle commanders. Network Centric Operations have the potential to provide information superiority to the battle commander, but only if the system is Joint-Oriented and Commander-Centric as well as network-centric.