Unmanned ground and air systems operating in collaboration have the potential to provide future Joint Forces a significant capability for operations in complex terrain. The Collaborative Engagement Experiment (CEE) is a consolidation of separate Air Force, Army, and Navy collaborative efforts within the Joint Robotics Program (JRP) to provide a picture of the future of unmanned warfare. The Air Force Research Laboratory (AFRL), Materials and Manufacturing Directorate, Aerospace Expeditionary Force Division, Force Protection Branch (AFRL/MLQF); the Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) Joint Technology Center (JTC)/Systems Integration Laboratory (SIL); and the Space and Naval Warfare Systems Center - San Diego (SSC San Diego) are conducting technical research and proof-of-principle experiments for an envisioned operational concept for extended-range, three-dimensional, collaborative operations between unmanned systems, with enhanced situational awareness for lethal operations in complex terrain. This paper describes the work by these organizations to date and outlines some of the plans for future work.
There are many challenges in the area of unmanned systems interoperability: increasing levels of autonomy, teaming and collaboration, long-endurance missions, and integration with civilian and military spaces. Several currently available methods and technologies may help meet these and other challenges: consensus standards development, formal methods, model-based engineering, knowledge and ontology representation, agent-based systems, and plan-language research. We believe the future of unmanned systems interoperability depends on the integration of these methods and technologies into a domain-independent plan language for unmanned systems.
Unmanned ground and air systems operating in collaboration have the potential to provide future Joint Forces a significant capability for operations in complex terrain. Ground and air collaborative engagements potentially offer force conservation, perform timely acquisition and dissemination of essential combat information, and can eliminate high-value and time-critical targets. These engagements can also add considerably to force survivability by reducing soldier and equipment exposure during critical operations. The Office of the Secretary of Defense, Joint Robotics Program (JRP)-sponsored Collaborative Engagement Experiment (CEE) is a consolidation of separate Air Force, Army, and Navy collaborative efforts to provide a Joint capability. The Air Force Research Laboratory (AFRL), Materials and Manufacturing Directorate, Aerospace Expeditionary Force Division, Force Protection Branch (AFRL/MLQF); the Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) Joint Technology Center (JTC)/Systems Integration Laboratory (SIL); and the Space and Naval Warfare Systems Center - San Diego (SSC San Diego) are conducting technical research and proof-of-principle experiments for an envisioned operational concept for extended-range, three-dimensional, collaborative operations between unmanned systems, with enhanced situational awareness for lethal operations in complex terrain. This program will assess information requirements and conduct experiments to identify and resolve technical risks for collaborative engagements using Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs). It will research, develop, and physically integrate multiple unmanned systems and conduct live collaborative experiments. Modeling and simulation systems will be upgraded to engineering fidelity levels to better understand the technical challenges of operating as a team.
This paper will provide an update on a multi-year program and will concentrate primarily on the JTC/SIL efforts. Other papers will outline in detail the Air Force and Navy portions of this effort.
The viability of unmanned systems as tools is increasingly recognized in many domains. As technology advances, the autonomy on board these systems also advances. In order to evaluate the systems in terms of their levels of autonomy, it is critical to have a set of standard definitions that support a set of metrics. Because autonomy cannot be evaluated quantitatively without a sound and thorough technical basis, the development of autonomy levels for unmanned systems must take into account many factors, such as task complexity, human interaction, and environmental difficulty. An ad hoc working group of government practitioners has been formed to address these issues. The ultimate objectives for the working group are: (1) to determine the requirements for metrics for autonomy levels of unmanned systems; (2) to devise methods for establishing metrics of autonomy for unmanned systems; and (3) to develop a set of widely recognized standard definitions for the levels of autonomy for unmanned systems. This paper describes the interim results that the group has accomplished through its first four workshops. We report on the initial findings of the workshops toward developing a generic framework for the Autonomy Levels for Unmanned Systems (ALFUS).
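One illustrative way the factors named above (task complexity, human interaction, and environmental difficulty) might be combined into a single autonomy score can be sketched as follows. The weights, the linear form, and the 0-10 scale are hypothetical placeholders, not the working group's metrics, which were still being defined at the time of these workshops:

```python
def autonomy_score(task_complexity, human_independence, env_difficulty):
    """Combine three ALFUS-style factors into a single 0-10 score.

    All inputs are assumed normalized to [0, 1]. The weights below are
    hypothetical and chosen only to illustrate a weighted combination;
    they do not come from the ALFUS working group.
    """
    for v in (task_complexity, human_independence, env_difficulty):
        if not 0.0 <= v <= 1.0:
            raise ValueError("factors must be normalized to [0, 1]")
    weights = (0.40, 0.35, 0.25)  # hypothetical relative importance
    combined = (weights[0] * task_complexity
                + weights[1] * human_independence
                + weights[2] * env_difficulty)
    return round(10 * combined, 1)
```

Any real metric would also have to address how each factor is itself measured; the sketch only shows the aggregation step.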
A need exists for United States military forces to perform collaborative engagement operations between unmanned systems. This capability has the potential to contribute significant tactical synergy to the Joint Force operating in the battlespace of the future. Collaborative engagements potentially offer force conservation, perform timely acquisition and dissemination of essential combat information, and can eliminate high-value and time-critical targets. Collaborative engagements can also add considerably to force survivability by reducing soldier and equipment exposure during critical operations. This paper will address a multiphase U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC) Joint Technology Center (JTC) Systems Integration Laboratory (SIL) program to assess information requirements, the Joint Architecture for Unmanned Systems (JAUS), and ongoing Science and Technology initiatives, and to conduct simulation-based experiments to identify and resolve the technical risks of conducting collaborative engagements using unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs). The schedule outlines an initial effort to expand, update, and exercise JAUS; provide early feedback to support user development of Concepts of Operations (CONOPs) and Tactics, Techniques, and Procedures (TTPs); and develop a Multiple Unified Simulation Environment (MUSE) system with the JAUS interfaces necessary to support an unmanned system-of-systems collaborative engagement.
Sparse array processing methods are typically used to improve the spatial resolution of sensor arrays for the estimation of direction of arrival (DOA). The fundamental assumption behind these methods is that signals received by the sparse sensors (or a group of sensors) are coherent. However, coherence may vary significantly with changes in environmental, terrain, and operating conditions. In this paper, canonical correlation analysis is used to study the variations in coherence between pairs of sub-arrays in a sparse array problem. The data set for this study is a subset of an acoustic signature data set acquired from the US Army TACOM-ARDEC, Picatinny Arsenal, NJ. This data set was collected using three wagon-wheel type arrays with five microphones. The results show that in nominal operating conditions, i.e., no extreme wind noise or masking effects by trees, buildings, etc., the signals collected at different sensor arrays are indeed coherent even at distant node separation.
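The coherence analysis described above can be sketched with a small canonical correlation computation between two simulated five-microphone sub-arrays. The source signal, mixing weights, and noise level below are illustrative stand-ins, not the TACOM-ARDEC data set:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlation analysis between two multichannel
    recordings X (n_samples x p) and Y (n_samples x q).
    Returns the canonical correlations in descending order."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormalize each block with a reduced QR decomposition; the
    # singular values of the cross-product of the two orthonormal
    # bases are the canonical correlations.
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

# Two hypothetical 5-microphone sub-arrays observing one common source:
rng = np.random.default_rng(0)
t = np.arange(2000) / 1000.0
source = np.sin(2 * np.pi * 40 * t)            # shared acoustic component
X = np.outer(source, [1.0, 0.8, -0.5, 0.3, 1.2]) + 0.1 * rng.normal(size=(2000, 5))
Y = np.outer(source, [0.6, -1.1, 0.9, 0.4, 0.7]) + 0.1 * rng.normal(size=(2000, 5))
rho = canonical_correlations(X, Y)
# The leading canonical correlation is near 1 because both sub-arrays
# share one coherent signal; the remaining ones are near zero.
```

In the paper's setting, a leading canonical correlation staying near one across widely separated nodes is what indicates that the coherence assumption behind sparse-array DOA processing holds.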
Operators of Unmanned Ground Vehicles (UGVs) need a graphical assistant to guide them through every step of UGV operations, from route planning to reconnaissance reporting. The system will use digital elevation and vectorized terrain data to perform tasks such as determining lines of sight for communications, evaluating mobility characteristics of terrain, and determining mobility corridors. The system will also provide an interface between battlefield sensors and the operator, speeding collection of information about the enemy, terrain, and other critical battlefield features. The system will report information collected on the battlefield back to the tactical operations center through the battlefield C4I system, where it will be integrated with information from other battlefield sensors.
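The line-of-sight determination mentioned above can be illustrated with a minimal visibility check over a digital elevation model (DEM). The function name, grid values, and ray-sampling scheme are hypothetical, not taken from the described system:

```python
import numpy as np

def line_of_sight(dem, start, end, observer_height=2.0):
    """Check visibility between two cells of a DEM by sampling the
    terrain along the ray connecting them.

    dem: 2-D array of elevations (meters); start/end: (row, col) indices.
    Both endpoints are raised by observer_height above the terrain.
    """
    r0, c0 = start
    r1, c1 = end
    # Sample densely enough that no grid cell is skipped.
    n = int(max(abs(r1 - r0), abs(c1 - c0))) * 4 + 1
    rows = np.linspace(r0, r1, n)
    cols = np.linspace(c0, c1, n)
    terrain = dem[np.round(rows).astype(int), np.round(cols).astype(int)]
    # Elevation of the straight sight line between the raised endpoints.
    sight = np.linspace(dem[r0, c0] + observer_height,
                        dem[r1, c1] + observer_height, n)
    # Visible only if no intermediate terrain sample rises above the line.
    return bool(np.all(terrain[1:-1] <= sight[1:-1]))

dem = np.zeros((50, 50))
dem[25, :] = 30.0   # a hypothetical 30 m ridge across the grid
visible = line_of_sight(dem, (10, 10), (20, 40))   # same side of the ridge
blocked = not line_of_sight(dem, (10, 10), (40, 40))  # ridge intervenes
```

A fielded planner would add earth curvature, antenna patterns, and vegetation data, but the core DEM ray test has this shape.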
The fuzzy logic adaptive controller for helicopters (FLAC-H) demonstration is a cooperative effort between the US Army Simulation, Training, and Instrumentation Command (STRICOM), the US Army Aviation and Troop Command, and the US Army Missile Command to demonstrate a low-cost drone control system for both full-scale and sub-scale helicopters. FLAC-H was demonstrated on one of STRICOM's fleet of full-scale rotary-winged target drones. FLAC-H exploits fuzzy logic in its flight control system to provide a robust solution to the control of the helicopter's dynamic, nonlinear system. Straightforward, common-sense fuzzy rules governing helicopter flight are processed instead of complex mathematical models. This has resulted in a simplified solution to the complexities of helicopter flight. Incorporation of fuzzy logic reduced the cost of development and should also reduce the cost of maintenance of the system. An adaptive algorithm allows FLAC-H to "learn" how to fly the helicopter, enabling the control system to adjust to varying helicopter configurations. The adaptive algorithm, based on genetic algorithms, alters the fuzzy rules and their related sets to improve the performance characteristics of the system. This learning allows FLAC-H to be automatically integrated into a new airframe, reducing the development costs associated with altering a control system for a new or heavily modified aircraft. Successful flight tests of FLAC-H on a UH-1H target drone were completed in September 1994 at the White Sands Missile Range in New Mexico. This paper discusses the objective of the system, its design, and its performance.
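The kind of common-sense fuzzy rule processing described above can be sketched with a toy rule base mapping altitude error to a collective adjustment. The membership-function breakpoints, the rules, and the singleton defuzzification are illustrative only and are not FLAC-H's actual rule base; its genetic algorithm would adapt exactly these set shapes:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def collective_correction(alt_error):
    """Toy fuzzy rule base: altitude error (m) -> normalized collective
    adjustment in [-1, 1]. Rules (illustrative):
      error LOW  -> raise collective (+1)
      error ZERO -> hold collective  ( 0)
      error HIGH -> lower collective (-1)
    """
    mu_low = tri(alt_error, -20.0, -10.0, 0.0)
    mu_zero = tri(alt_error, -10.0, 0.0, 10.0)
    mu_high = tri(alt_error, 0.0, 10.0, 20.0)
    weights = [mu_low, mu_zero, mu_high]
    outputs = [+1.0, 0.0, -1.0]  # singleton consequents per rule
    total = sum(weights)
    # Centroid of the fired singletons; no correction if no rule fires.
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0
```

A genetic-algorithm tuner like the one the paper describes would treat the breakpoints (here -20, -10, 0, 10, 20) and consequent values as the chromosome and evolve them against a flight performance measure.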