The U.S. Army has embarked on an important campaign to field a lighter, more agile force, capable of being deployed in a fraction of the time currently required. The survivability of this force will depend more heavily on the use of integrated command and control capabilities with unsurpassed situational understanding at all levels of command. Arrays of small, low-cost sensors will play a key role in detecting, locating, tracking, and identifying targets, particularly in areas where the terrain or other circumstances prevent traditional high-performance sensors from providing critical information. Individual sensor types will provide modest performance but with a wide range of sensing modalities. When deployed in large numbers, the data fused from multiple sensing modalities will provide a detailed view of the battlespace over a wide area. A critical element necessary to deploy unattended ground sensor technology is the underlying communications and networking infrastructure. Communication networks will constitute the major challenge to making unattended ground sensor networks practical.
The vanguard US Army Science and Technology program for the transformation to a medium-weight force is the Future Combat Systems (FCS). Critical to the effectiveness of this force is overarching knowledge of the distribution and intent of all the forces on the battlefield. Smart sensor networks and information management are key enablers for the FCS system-of-systems strategic goals. The role of sensors and information management in enabling FCS victory is described, as is the network-centric warfare that will be the cornerstone of the battlefield in the near future. The US Army Communications-Electronics Command's development of sensors and information management assets is also reviewed as it influences the envisioned FCS.
There is a revolution going on in the world of sensors. In the year 2000, it is now possible to proliferate sensors and mount them on any platform, stationary or moving, from the individual soldier on up. We can literally buy sensors by the pound and toss them out into the environment as throwaways. It is possible to rapidly configure wireless networks of sensors that collect, relay, process, and archive the outputs of large arrays of heterogeneous sensor fields. For those of us who have spent our professional lives providing high-end sensors to the military, this realization comes as quite a shock, for it changes the way we think about conducting military functions such as intelligence, surveillance, and targeting. This monograph briefly explores the genesis of this capability and where it may lead for future military forces and operations.
Distributed sensor networks will play a key role in the network-centric warfighting environments of the future. We envision a ubiquitous sensing `fabric,' comprising sensors distributed over the terrain and carried on manned and unmanned, terrestrial and airborne vehicles. As a complex `system of systems,' this fabric will need to adapt and self-organize to perform a variety of higher-level tasks such as surveillance and target acquisition. The topology and availability of the sensors will be constantly changing, as will the needs of users as dictated by evolving missions and operational environments. In this work, focusing on the task of target tracking, we address approaches for locating and organizing sensing and processing resources and present algorithms for suitably fusing the observations obtained from a varied and changing set of sensors. Run-time discovery and access of new sensing resources are obtained through the use of Java Jini, treating sensing resources as `services' and viewing higher-level processes such as tracking as clients. Algorithms for fusing generic sensor observations for target tracking are based on the extended Kalman filter, while detection and track initiation are based on a new likelihood projection technique. We present results from an implementation of these concepts in a real-time sensor testbed and discuss lessons learned.
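The fusion algorithms above build on the extended Kalman filter. As an illustrative sketch (not the paper's implementation), here is the measurement-update step of a linear Kalman filter for a constant-velocity target observed by a position-only sensor; this is the linear special case of the EKF update that such trackers generalize to heterogeneous sensors.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """One Kalman filter measurement update: prior state x and
    covariance P, observation z with model z = H x + noise (cov R)."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)              # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P     # corrected covariance
    return x_new, P_new

# Constant-velocity state [px, py, vx, vy]; position-only sensor.
x = np.array([0.0, 0.0, 1.0, 0.0])
P = np.eye(4) * 10.0
H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])
R = np.eye(2) * 0.5
x, P = kf_update(x, P, np.array([0.9, 0.1]), H, R)
```

With an uncertain prior, the updated position moves most of the way toward the measurement and the covariance shrinks, which is the qualitative behavior a track-maintenance loop relies on.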
Atmospheric scattering of ultraviolet light is examined as a mechanism for short-range, non-line-of-sight (NLOS) communication between nodes in energy-constrained distributed sensor networks. A test bed for evaluating NLOS UV communication hardware and modulation schemes is described, and the bit error rate measured in the test bed is used to validate a numerical performance model. Design tradeoffs for a baseband UV transceiver are discussed and performance estimates obtained from the validated numerical model are presented.
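The paper's validated numerical model is not given in the abstract; as a hedged first-order illustration, an on-off-keyed photon-counting UV receiver with Poisson signal counts and negligible background has a bit error rate of 0.5·exp(−λs), where λs is the mean detected photon count per 'on' bit (an error occurs only when an 'on' bit yields zero photons).

```python
import math

def ook_ber_poisson(lam_signal):
    """BER for OOK with an ideal photon-counting receiver: Poisson
    signal counts, negligible background. 'On' bits (probability 1/2)
    are misread only when zero photons are detected."""
    return 0.5 * math.exp(-lam_signal)

# e.g. roughly ten detected photons per bit
ber = ook_ber_poisson(10.0)
```

This zero-background assumption is optimistic for solar-blind UV links but gives the right scaling of error rate with received photon count.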
Wireless integrated sensor networks involve information processing over large wired and wireless networks with limited bandwidth. Moreover, the computing capabilities of sensing devices are usually limited because of design restrictions, limited power supply, and other mission-specific requirements. Analyzing data sets collected over such sensor networks usually requires downloading voluminous data sets to a central site. This fundamentally impairs the scalability and the overall response time of the application. In mission-critical applications, data analysis in such networks must deliver results within a certain time frame; a slower response often completely defeats the purpose of analyzing sensor data. This paper presents a framework for collective data analysis from distributed heterogeneous data that calls for a fundamentally different perspective. This approach analyzes data in a distributed fashion without downloading everything to a central site. We examine several unsupervised Collective Data Mining algorithms for performing tasks associated with this extraction of useful information from sensor arrays. We further present a manner in which these algorithms may be incorporated into the knowledge extraction process from the sensor networks, and also propose an architecture geared towards the use of these algorithms.
Data aggregation in sensor networks refers to the task of accumulating sensed data at appropriate fusion centers within the network and making inferences based on the received information. This paper discusses methodologies for efficiently choosing these data aggregation centers or nodes within the wireless sensor network. We consider a plurality of sensor nodes that are deployed in a harsh terrain that contains opaque obstructions. By making some simplifying assumptions with regard to the geometry of the region in which the nodes are deployed (domain) and some system parameters, such as the positions of the individual nodes and the loading on those nodes, we show that the choice of the node best located to serve as a data aggregation center can be formulated as a convex optimization problem. Our simplest formulation is a linear program (LP) that computes the geographic center of the nodes within a domain. Similar optimization formulations are then developed for more detailed scenarios. We also describe an implementation methodology for choosing the optimal aggregation centers. Simulation results show that by implementing our algorithms we see a 10-15% improvement in the achieved throughput per domain over methods that randomly select nodes to operate as the data fusion center. A corresponding improvement in latency is also noted. We also consider more intricate cases of the problem wherein factors such as load and message priorities are taken into consideration when determining the optimal fusion center.
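The paper's LP formulation is not reproduced in the abstract; a simpler discrete analogue conveys the idea: among the deployed nodes, pick the one minimizing the load-weighted sum of distances to all others (a discrete 1-median). The function below is an illustrative sketch, not the authors' formulation, and the node coordinates and loads are made-up examples.

```python
import math

def best_aggregation_node(nodes, loads=None):
    """Pick the index of the node minimizing the load-weighted sum of
    Euclidean distances to all nodes (a discrete 1-median).
    nodes: list of (x, y) positions; loads: optional traffic weights."""
    if loads is None:
        loads = [1.0] * len(nodes)
    def cost(i):
        xi, yi = nodes[i]
        return sum(w * math.hypot(xi - x, yi - y)
                   for (x, y), w in zip(nodes, loads))
    return min(range(len(nodes)), key=cost)

nodes = [(0, 0), (1, 0), (0, 2), (5, 5)]
center = best_aggregation_node(nodes)
```

Adding per-node loads shifts the chosen center toward heavily loaded nodes, mirroring the load-aware variants the paper develops.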
Future network-centric systems will rely heavily on telecommunication network technology to provide the connectivity needed to support distributed C4ISR requirements. To develop and validate emerging network-centric concepts, designers will need communication and network modeling and simulation (M&S) tools to assess the ability of large-scale networks to achieve the required communication performance. Current network and communication simulation tools are highly accurate and provide detailed data for communication and network designers. However, they are far too complex and inefficient to model large-scale networks. To model these networks, lighter-weight abstract M&S tools and techniques are required. To meet these requirements, Lockheed Martin Advanced Technology Laboratories (ATL) is applying abstract network modeling techniques, developed for large-scale signal processing applications, to model complex, distributed network architectures. Rather than modeling the detailed radio, network protocol, and individual data transactions, our approach uses abstract stochastic models to simulate the low-level radio and protocol functions, significantly reducing complexity and execution times. This paper describes the abstract modeling tools and techniques we are developing, discusses how ATL applied them to the Office of the Deputy Under Secretary of Defense for Science and Technology's (ODUSD S&T) Smart Sensor Web (SSW) network, and how we are planning to extend them.
A new type of wireless sensor network is discussed for the digital battlefield and network-centric warfare. This network is rapidly deployable, and has unique features specifically suited to imaging sensors (visible, IR, imaging radar, low-light) and wireless local area network applications.
As the DoD faces more varied operations and dynamic force structures, it becomes increasingly difficult to manage the information space (data and services) as a homogeneous entity. It needs to be differentiated into specialized units (termed `habitats') that serve to provide context to individual human or automated components. These habitats must support controlled component interaction, supplying contextual information to components, predictable recruitment of resources, and distributed situational awareness.
The successful exploitation of the benefits of a digitized network-centric battlespace in a tactical Land Environment will add to the `Knowledge Edge' capability for the Australian Army. It will advance decision-making, accelerate the flow of information, more effectively synergize collection and processing within the intelligence system and speed up the targeting process, hence improving overall combat effectiveness. To make this a reality, a practical and feasible operational architecture is essential. This paper describes the process of developing an operational architecture for the Australian Army Enhanced Combat Force (land force in 2015) based on US DoD's C4ISR architecture framework.
In the Network-Centric Warfare (NCW) paradigm, battlespace agents autonomously perform selected tasks delegated by actors/shooters and decision-makers including controlling sensors. Network-Centric electronic warfare is the form of electronic combat used in NCW. Focus is placed on a network of interconnected, adapting systems that are capable of making choices about how to survive and achieve their design goals in a dynamic environment. The battlespace entities: agents, actor/shooters, sensors, and decision-makers are tied together through the information and sensors grids.
Mobile code opens a world of possibilities for Battlespace digitization. However, due to security issues associated with transporting code over such networks, mobile code in the battlefield may present serious risks. Attackers may attempt to thwart the end-user's mission by manipulating or destroying code prior to its final destination. To combat such acts, we propose an authentication method that can reside on any Internet server/client without the typical constraints that exist for firewalls and certificates. Our method consists of the construction of a digital signature at the server based on the characteristics of the mobile code itself. This signature, or mark, is then embedded within the code in a hidden manner using steganographic methods. Upon receipt of the mobile code, the client can use the key to extract the embedded mark and regenerate a mark from the received code. The two marks are compared to verify the integrity of the code and the authenticity of the sender. This technique is implemented for HTML code and the effectiveness of tamper detection is demonstrated. Mobile code authentication techniques, such as this, can provide the security necessary to permit the exploitation of this powerful computing medium on the networked battlefield.
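The embed/extract/compare workflow described can be sketched in simplified form. This illustration computes an HMAC over the page body and embeds it in a trailing HTML comment; the paper's actual method hides the mark steganographically rather than in a visible comment, so the code below is only a stand-in for the workflow, and the marker format and key are invented for the example.

```python
import hmac, hashlib

MARKER = "<!--mark:"

def embed_mark(html, key):
    """Append an HMAC of the page body as a trailing comment (a visible
    stand-in for the paper's steganographic embedding)."""
    mark = hmac.new(key, html.encode(), hashlib.sha256).hexdigest()
    return html + "\n" + MARKER + mark + "-->"

def verify_mark(marked_html, key):
    """Split out the embedded mark, regenerate it from the received
    body, and compare the two to detect tampering."""
    body, _, tail = marked_html.rpartition("\n" + MARKER)
    if not tail.endswith("-->"):
        return False
    received = tail[:-3]
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(received, expected)

key = b"shared-secret"
page = embed_mark("<html><body>orders</body></html>", key)
```

Any modification to the body after embedding changes the regenerated mark, so the comparison fails for tampered code or a wrong key.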
Increasingly, people work and live on the move. To support this mobile lifestyle, especially as our work becomes more intensely information-based, companies are producing various portable and embedded information devices. The late Mark Weiser coined the term 'ubiquitous computing' to describe an environment where computers have disappeared and are integrated into physical objects. Much industry research today is concerned with ubiquitous computing in the work and home environments. A ubiquitous computing environment would facilitate mobility by allowing information users to easily access and use information anytime, anywhere. As war fighters are inherently mobile, the question is what effect a ubiquitous computing environment would have on current military operations and doctrine. And, if ubiquitous computing is viewed as beneficial for the military, what research would be necessary to achieve a military ubiquitous computing environment? What is a vision for the use of mobile information access in a battle space? Are there different requirements for civilian and military users of this technology? What are those differences? Are there opportunities for research that will support both worlds? What type of research has been supported by the military, and what areas need to be investigated? Although we don't yet have all the answers to these questions, this paper discusses the issues and presents the work we are doing to address them.
The Australian Defence Force is a small force, dependent upon a few high-value assets that act as force multipliers. Consequently, it cannot afford to sustain high attrition. The current Concept of Operations for these platforms is to operate them outside the threat envelope. Organic sensors and data links are used to maintain Situational Awareness, Combat Air Patrol is used to intercept hostile missile launch platforms, and Electronic Warfare self-protection is used as a last resort. Unfortunately, it is common for such high-value assets to be slow, non-stealthy, low-agility, physically large platforms that follow predictable trajectories. Consequently, they are easy to target and track from long range and have a high `sitting duck' factor.
Radio Frequency Directed Energy Weapons (RF DEW) have the potential to disrupt the operation of, or cause the failure of, a broad range of military electronic equipment. Over the past 30 years, there has been considerable effort in the development of these weapons. Recent reports suggest that a number of countries, including the USA and Russia, have fielded such weapons. This paper examines the potential performance of non-nuclear RF DEW.
The Army Research Laboratory (ARL) is conducting a variety of research programs in mobile communications and networking to support the digital battlefield of the future. The U.S. Army will require techniques such as mobile ad hoc networks to form and maintain wireless network connectivity in this dynamic battlefield environment. It will also require gateways that can bridge ad hoc networks to higher-echelon Internet-based backbone networks via a flexible combination of wireless, symmetric and asymmetric satellite, cellular, and wired network connections, as well as via legacy combat radio systems such as the Single Channel Ground and Airborne Radio System (SINCGARS). ARL has designed and built a mobile communications and networking Testbed on a High Mobility Multipurpose Wheeled Vehicle (HMMWV) to support our mobile wireless networking program. The Testbed is designed to combine various computing, networking, and communication techniques and technologies on a mobile platform relevant to the Army digital battlefield. With wire/wireless and symmetric/asymmetric WAN connections, the Testbed provides Internet gateway capabilities by direct or indirect routing using IP masquerade techniques. IP tunneling techniques are also used to implement mobile IP LANs with full connectivity to backbone networks via the Testbed. This paper discusses ARL research and experimentation in mobile gateway techniques such as IP masquerading and IP tunneling and describes the use of these techniques in the Testbed to provide mobile gateway functionality under actual field conditions.
The traditional manual method of path profile analysis is very time-consuming, taking approximately 20-30 minutes for each path. With the advent of digital terrain data and high-speed computing, this process can be readily automated and the time for each path reduced to a fraction of a second. The calculation of point-to-point losses, however, still presents the user with limited information, and does not readily advise the planner of the coverage of a given transmitter. In this paper, we present a new approach based on area planning.
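The automated path-profile check at the heart of such a tool can be sketched as a line-of-sight test against sampled terrain elevations. The version below is a deliberately minimal illustration with invented sample data: it ignores earth curvature, Fresnel-zone clearance, and diffraction loss, all of which a real planning tool must model.

```python
def path_is_clear(terrain, tx_height, rx_height):
    """Line-of-sight test along a sampled terrain profile.
    terrain: ground elevations at equal spacing from Tx to Rx.
    tx_height/rx_height: antenna heights above local ground.
    Ignores earth curvature and Fresnel-zone clearance."""
    n = len(terrain) - 1
    z0 = terrain[0] + tx_height
    z1 = terrain[-1] + rx_height
    for i in range(1, n):
        ray = z0 + (z1 - z0) * i / n       # ray height at sample i
        if terrain[i] >= ray:              # terrain blocks the ray
            return False
    return True

flat = [100, 100, 100, 100, 100]
ridge = [100, 100, 140, 100, 100]
```

Area planning amounts to repeating this per-path test (plus a loss model) over every terrain cell around the transmitter, which is why sub-second per-path evaluation matters.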
This paper presents an overview of a novel architectural framework for describing the Tactical Communications Systems. It then describes the utility of the framework in examining the provision of interfaces such that a broad range of information flows can be supported. The framework presented is applicable to both current and future tactical communications networks.
Current tactical communication networks consist of subnetworks of different types. Therefore a common network protocol has to be used for the transmission of data in such a heterogeneous network. For some time there has been a need for multiplexed transmission of time-critical and conventional data within a heterogeneous network. In this paper we discuss the problem of multiplexed transmission of time-critical and conventional data using a common standardized network protocol. Most of the tactical subnetworks are of low or medium transmission speed and guarantee a fixed transmission bandwidth at the access point. A mechanism is described which transmits time-critical data in such a subnetwork using a connectionless transport and a connectionless network protocol. The concurrent transmission of conventional data, using a connection-oriented transport protocol and the same connectionless network protocol, is assumed to be of lower priority; it is scheduled to fill the remaining capacity that has not been reserved for the transmission of time-critical data. Satellite as well as HF links are taken into account.
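The scheduling discipline described (time-critical traffic served first, conventional traffic filling the residual capacity of each fixed-bandwidth cycle) can be sketched as follows. This is an illustrative simplification: the packet sizes, queue discipline, and capacity unit are assumptions, and the plain FIFO used here lets a large conventional packet block smaller ones behind it.

```python
from collections import deque

def schedule_cycle(critical, conventional, capacity):
    """Drain one fixed-bandwidth cycle: send time-critical packets
    first, then fill leftover capacity with conventional packets.
    Queues hold packet sizes; returns a list of (class, size) sent."""
    sent, remaining = [], capacity
    for label, queue in (("critical", critical),
                         ("conventional", conventional)):
        # FIFO within each class; stop when the head no longer fits
        while queue and queue[0] <= remaining:
            size = queue.popleft()
            remaining -= size
            sent.append((label, size))
    return sent

crit = deque([40, 30])
conv = deque([50, 20])
sent = schedule_cycle(crit, conv, 100)
```

In this example the two time-critical packets consume 70 units of the 100-unit cycle, and the 50-unit conventional packet must wait for the next cycle.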
The vision for the Joint Tactical Radio System (JTRS) is to develop a family of affordable, high-capacity tactical radios to provide both line-of-sight and beyond-line-of-sight Command, Control, Communications, Computers and Intelligence (C4I) capabilities to the warfighters. This family of software-defined radios will be capable of transmitting voice, video, and data; the architecture will be common, open, and used in a wide range of implementations. This paper addresses several operational and implementation concepts which fit within these vision and capability statements (quoted from the program office), but require thinking outside the JTRS box.
Our roadmap for ongoing and future Dynamically Reconfigurable Vision (DRV) development work is described, including the designs for a megapixel DRV camera with a true snapshot mode capability, a very low power DRV camera, and a short-wave IR DRV sensor for night vision applications.
Fielding and managing the dynamic, complex information systems infrastructure necessary for defense operations presents significant opportunities for revolutionary improvements in capabilities. An example of this technology trend is the creation and validation of the Joint Battlespace Infosphere (JBI) being developed by the Air Force Research Lab. The JBI is a system of systems that integrates, aggregates, and distributes information to users at all echelons, from the command center to the battlefield. The JBI is a key enabler of meeting the Air Force's Joint Vision 2010 core competencies such as Information Superiority, by providing increased situational awareness, planning capabilities, and dynamic execution. At the same time, creating this new operational environment introduces significant risk due to an increased dependency on computational and communications infrastructure combined with more sophisticated and frequent threats. Hence, the challenge facing the nation is the most effective means to exploit new computational and communications technologies while mitigating the impact of attacks, faults, and unanticipated usage patterns.
The performance of acoustical ground sensors depends heavily on the local atmospheric and terrain conditions. This paper describes a prototype physics-based decision aid, called the Acoustic Battlefield Aid (ABFA), for predicting these environmental effects. ABFA integrates advanced models for acoustic propagation, atmospheric structure, and array signal processing into a convenient graphical user interface. The propagation calculations are performed in the frequency domain on user-definable target spectra. The solution method involves a parabolic approximation to the wave equation combined with a terrain diffraction model. Sensor performance is characterized with Cramer-Rao lower bounds (CRLBs). The CRLB calculations include randomization of signal energy and wavefront orientation resulting from atmospheric turbulence. Available performance characterizations include signal-to-noise ratio, probability of detection, direction-finding accuracy for isolated receiving arrays, and location-finding accuracy for networked receiving arrays. A suite of integrated tools allows users to create new target descriptions from standard digitized audio files and to design new sensor array layouts. These tools optionally interface with the ARL Database/Automatic Target Recognition (ATR) Laboratory, providing access to an extensive library of target signatures. ABFA also includes a Java-based capability for network access of near real-time data from surface weather stations or forecasts from the Army's Integrated Meteorological System. As an example, the detection footprint of an acoustical sensor, as it evolves over a 13-hour period, is calculated.
Rockets, mortars, and artillery (RMA) are widely held, abundant, and inexpensive weapons that historically have been the most lethal 'killers' on the battlefield. The proliferation of non-conventional warheads (chemical and biological) has increased the RMA threat. Recently, new weapons--in particular directed-energy weapons--have shown promise in providing an active defense against RMA. The development and deployment of these advanced weapons is only part of the challenge of providing a total RMA active defense capability. Developing a BMC3I that can support this air battle is also a major challenge. Threat sizes and threat rates of RMA versus traditional air defense threats could easily be higher by one to two orders of magnitude. The implication of this larger threat on the complexity of a BMC3I system is profound. Relative to traditional threats, fighting such an air battle will result in a large demand on sensors to collect information on this dense threat and in a large surge in the dissemination of air picture, control, and status information through the BMC3I network (weapons, sensors, and command posts). A successful BMC3I system must have the architectural features and algorithmic approaches to manage these tasks efficiently. This paper will characterize the magnitude of this problem and discuss architectural and algorithmic challenges.
There are considerable difficulties in the integration, visualization, and overall management of battle-space information for the purpose of Command and Control (C2). One problem that we see as being important is the timely combination of digital information from multiple (possibly disparate) sources in a dynamically evolving environment. That is, there is a need to assimilate incoming data rapidly, so as to provide the battle commander with up-to-date knowledge about the battle-space and thereby to facilitate the command-decision process. In this paper, we present a spatial-temporal approach to obtaining accurate estimates of the constantly changing battlefield, based on noisy data from multiple sources.
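One elementary building block for combining noisy reports of the same quantity from disparate sources (not necessarily the paper's estimator, which is not detailed in the abstract) is inverse-variance weighting, the minimum-variance fusion of independent unbiased estimates:

```python
def fuse_reports(estimates, variances):
    """Fuse independent scalar estimates by inverse-variance weighting;
    returns (fused_estimate, fused_variance)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    return fused, 1.0 / total

# Two sources report the same coordinate with different confidence.
est, var = fuse_reports([10.0, 12.0], [1.0, 4.0])
```

The fused variance is always smaller than that of the best single source, which is the formal sense in which assimilating more reports sharpens the battle picture.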
This paper addresses the problem of threat engagement and dynamic weapon-target allocation (WTA) across the force or network-centric force optimization. The objective is to allocate and schedule defensive weapon resources over a given period of time so as to minimize surviving target value subject to resource availability and temporal constraints. The dynamic WTA problem is NP-complete and belongs to a class of multiple-resource-constrained optimal scheduling problems. Inherent complexities in the problem of determining the optimal solution include limited weapon resources, time windows under which threats must be engaged, load-balancing across weapon systems, and complex interdependencies of various assignments and resources. We present a new hybrid genetic algorithm (GA) which is a combination of a traditional genetic algorithm and a simulated annealing-type algorithm for solving the dynamic WTA problem. The hybrid GA approach proposed here uses a simulated annealing-type heuristic to compute the fitness of a GA-selected population. This step also optimizes the temporal dimension (scheduling) under resource and temporal constraints. The proposed method provides schedules that are near-optimal in short cycle times and have minimal perturbation from one cycle to the next. We compare the performance of the proposed approach with a baseline WTA algorithm.
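As a hedged sketch of the GA side of such an approach (omitting the paper's simulated-annealing fitness step and all scheduling constraints), the following minimal genetic algorithm evolves weapon-to-target assignments to minimize expected surviving target value. The encoding, operator choices, kill probabilities, and parameters are illustrative assumptions, not the authors' design.

```python
import random

def surviving_value(assignment, values, pk):
    """Expected surviving target value: each target keeps its value
    times the product of miss probabilities of its assigned weapons."""
    survive = [1.0] * len(values)
    for w, t in enumerate(assignment):
        survive[t] *= 1.0 - pk[w][t]
    return sum(v * s for v, s in zip(values, survive))

def ga_wta(values, pk, pop_size=30, gens=60, seed=0):
    """Minimal GA: chromosome = target index per weapon, tournament
    selection, one-point crossover, point mutation."""
    rng = random.Random(seed)
    n_w, n_t = len(pk), len(values)
    fitness = lambda ind: surviving_value(ind, values, pk)
    population = [[rng.randrange(n_t) for _ in range(n_w)]
                  for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            p1 = min(rng.sample(population, 2), key=fitness)  # tournament
            p2 = min(rng.sample(population, 2), key=fitness)
            cut = rng.randrange(1, n_w) if n_w > 1 else 0     # crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                            # mutation
                child[rng.randrange(n_w)] = rng.randrange(n_t)
            new_pop.append(child)
        population = new_pop
    return min(population, key=fitness)

values = [10.0, 5.0]
pk = [[0.9, 0.2],   # weapon 0 kill probabilities vs targets 0, 1
      [0.3, 0.8]]
best = ga_wta(values, pk)
```

In the paper's hybrid, the fitness evaluation inside this loop would itself run a simulated-annealing-style scheduler to respect time windows and resource constraints.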
Future command and control (C2) systems must be constructed in such a way that they are extensible both in terms of the kinds of scenarios they can handle and the types of manipulations that they support. This paper presents an open architecture that uses commercial standards and implementations where appropriate. The discussion is framed by our ongoing work with a course of action (COA) planner and generator (FOX) that uses genetic algorithms together with an abstract wargamer to suggest a small number of possible COAs.
The `digitized force' offers the potential for revolutionary improvements in force effectiveness. However, a possible spoiler is information overload--instances where crews are so overwhelmed with information that they are unable to process the data and make the correct decision. Lockheed Martin Systems Integration (LMSI) has developed decision aiding technologies to address this information overload problem. Our experience developing decision aiding systems dates back to 1985. A significant recent contract effort was the U.S. Army's Rotorcraft Pilot's Associate (RPA) Advanced Technology Development Program. RPA culminated in 1999 with a full mission laboratory evaluation and a flight demonstration on a modified Longbow Apache at the Army's Yuma Proving Ground. In addition to the RPA Program, LMSI has provided decision aiding systems for evaluation in several Army Battle Labs. LMSI is currently developing a `Core' Decision Aiding System, utilizing COTS-based technologies, that can serve as the foundation for application to a number of platforms. This paper will provide an overview of these decision aiding efforts, including a summary of capabilities, evaluation results, and future research, development, and production efforts.
In this paper, some image registration algorithms are investigated for the purpose of image fusion. A hybrid scheme which uses both feature-based and intensity-based methods is proposed. In this scheme, an edge-based image registration approach is developed to guide the intensity- based registration which uses optical flow estimation. The idea of coarse-to-fine multiscale iterative refinement is also studied. Experiments show that this approach is robust and efficient.
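A compact, classical baseline for the registration step (related to, but not identical with, the hybrid edge-plus-optical-flow method described) is phase correlation, which recovers an integer translation between two images from the normalized cross-power spectrum:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (row, col) translation t such that
    a[x] == b[x - t], i.e. a is b circularly shifted by t."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real         # impulse at the shift
    dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
    rows, cols = a.shape
    if dr > rows // 2:                      # wrap large offsets
        dr -= rows
    if dc > cols // 2:
        dc -= cols
    return dr, dc

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
dr, dc = phase_correlation_shift(shifted, img)
```

Translation-only recovery like this is the kind of coarse alignment that a coarse-to-fine scheme would refine with intensity-based optical-flow estimation.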