The Navy and Marine Corps will increasingly need to operate unmanned air vehicles from ships at sea. Fused multi-sensor systems are desirable to ensure these operations are highly reliable under the most demanding at-sea conditions, particularly in degraded visual environments. The US Navy Sea-Based Automated Launch & Recovery System (SALRS) program aims to enable automated and semi-automated launch and recovery of sea-based, manned and unmanned, fixed- and rotary-wing naval aircraft, and to use automated or pilot-augmented flight mechanics for carefree shipboard operations. This paper describes the goals and current results of SALRS Phase 1, which aims to understand the capabilities and limitations of various sensor types through sensor characterization, modeling, and simulation, and to assess how the sensor models can be used for aircraft navigation with sufficient accuracy, integrity, continuity, and availability across all anticipated maritime conditions.
Coordinated operations between unmanned air and ground assets leverage multi-domain
sensing and increase opportunities for improving line-of-sight communications. While numerous
military missions would benefit from coordinated UAV-UGV operations, the foundational capabilities
that integrate stove-piped tactical systems and share available sensor data are required but not yet
available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially
SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative
capabilities for surveillance, targeting, and improved communications based on PackBot UGV and
Raven UAV platforms. We integrate newly available technologies into computational, vision, and
communications payloads and develop sensing algorithms to support vision-based target tracking.
We implemented Decentralized Data Fusion, a novel technique for fusing track estimates of a
moving target in an open environment from the PackBot and Raven platforms, first in simulation
and then on the real tactical platforms. In addition, integrating AeroVironment's Digital
Data Link onto both the air and ground platforms has extended the communications range at which
the PackBot can be operated and increased video and data throughput. The system is brought
together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides
simultaneous waypoint navigation and traditional teleoperation. We also present several recent
capability accomplishments toward PackBot-Raven coordinated operations, including single OCU
display design and operation, early target track results, and Digital Data Link integration efforts, as
well as our near-term capability goals.
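The track-fusion step described above can be illustrated with covariance intersection, a conservative fusion rule commonly used in Decentralized Data Fusion when the cross-correlation between two platforms' track estimates is unknown. This is a minimal sketch, not the program's actual implementation; the example covariances and the coarse trace-minimizing weight search are assumptions for illustration.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=None):
    """Fuse two Gaussian track estimates whose cross-correlation is
    unknown, via a convex combination in information (inverse-covariance)
    space. Guarantees a consistent (non-overconfident) fused estimate."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    if omega is None:
        # Pick the weight that minimizes the fused covariance trace (coarse grid).
        cands = np.linspace(0.01, 0.99, 99)
        omega = min(cands, key=lambda w: np.trace(np.linalg.inv(w * I1 + (1 - w) * I2)))
    P = np.linalg.inv(omega * I1 + (1 - omega) * I2)
    x = P @ (omega * (I1 @ x1) + (1 - omega) * (I2 @ x2))
    return x, P

# Hypothetical tracks: the UGV is precise in x but poor in y; the UAV the reverse.
x_ugv, P_ugv = np.array([10.0, 5.0]), np.diag([0.5, 4.0])
x_uav, P_uav = np.array([10.6, 4.8]), np.diag([4.0, 0.5])
x_f, P_f = covariance_intersection(x_ugv, P_ugv, x_uav, P_uav)
```

With the optimized weight, the fused covariance is never worse than the better of the two inputs, which is why this rule is a safe default when platforms may have exchanged information before.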
Fielded military unmanned systems are currently extending the reach of U.S. forces in surveillance and
reconnaissance missions. Providing long-range eyes on enemy operations, unmanned aerial vehicles (UAVs), such as the
AeroVironment Raven, have proven themselves indispensable without risking soldiers' lives. Meanwhile, unmanned
ground vehicles (UGVs), such as the iRobot PackBot, are quickly joining ranks in Explosive Ordnance Disposal (EOD)
missions to identify and dispose of ordnance or to clear roads and buildings. UAV-UGV collaboration and its
force-multiplication benefits are increasingly tangible. iRobot Corporation and the CMU Robotics Institute are developing the
capability to simultaneously control the Raven small UAV (SUAV) and PackBot UGV from a single operator control
unit (OCU) via waypoint navigation. Techniques to support autonomous collaboration for pursuing and tracking a
dismounted soldier will be developed and integrated on a Raven-PackBot team. The Raven will survey an area and
geolocate an operator-selected target. The Raven will share this target location with the PackBot and together they will
collaboratively pursue the target to maintain track on it. We will accomplish this goal by
implementing a decentralized control and data fusion software architecture. The PackBot will be equipped with on-board
waypoint navigation algorithms and a Navigator Payload containing a stereo-vision system, GPS, and a high-accuracy IMU.
The Raven will have two on-board cameras, a side-looking and a forward-looking optical camera. The Supervisor OCU
will act as the central mission planner, allowing the operator to monitor mission events and override vehicle tasks.
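As one illustration of how a shared target geolocation could drive PackBot waypoint navigation, the sketch below places a pursuit waypoint on the UGV-to-target line with a standoff distance, so the vehicle closes range while keeping the target in camera view. The function name, local-ENU-meter coordinate frame, and standoff value are hypothetical, not part of the actual system interface.

```python
import math

def pursuit_waypoint(ugv_xy, target_xy, standoff_m=15.0):
    """Place the UGV waypoint on the line from the UGV to the target,
    stopping short by `standoff_m` so the target stays in sensor view."""
    dx = target_xy[0] - ugv_xy[0]
    dy = target_xy[1] - ugv_xy[1]
    rng = math.hypot(dx, dy)
    if rng <= standoff_m:            # already within standoff: hold position
        return ugv_xy
    s = (rng - standoff_m) / rng     # fraction of the line to traverse
    return (ugv_xy[0] + s * dx, ugv_xy[1] + s * dy)

wp = pursuit_waypoint((0.0, 0.0), (100.0, 0.0))
```

Each time the fused target track updates, the waypoint would be recomputed and pushed to the on-board navigator, keeping the pursuit loop simple and decentralized.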
We address the development of a local bus architecture for robot systems that facilitates modular development and increases the reliability of systems composed of heterogeneous sensors and actuators. The communications bus is based on the Controller Area Network (CAN) and supports distributed processing in physically separate nodes. Modular cabling and a modular software interface facilitate assembly and modification, and all bus communication is browsable for configuration and troubleshooting. We demonstrate two implementations of this system and discuss its performance and capabilities compared to alternative communication architectures, with specific emphasis on mobile robots.
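Two of the bus properties mentioned above, priority-based arbitration and browsable traffic, can be illustrated with a toy Python model of a CAN-style shared medium. This is a didactic sketch, not the described architecture: real CAN arbitration happens bit-by-bit in hardware, and the node roles and message IDs here are invented.

```python
import struct
from dataclasses import dataclass, field

@dataclass(order=True)
class CanFrame:
    # On CAN, a lower arbitration ID means higher priority on the wire.
    arb_id: int
    data: bytes = field(compare=False)

class CanBus:
    """Toy shared medium: nodes queue frames, arbitration picks the
    lowest ID, and every subscriber (plus a browsable log) sees each frame."""
    def __init__(self):
        self.pending, self.log, self.subscribers = [], [], []

    def transmit(self, frame):
        self.pending.append(frame)

    def arbitrate_once(self):
        if not self.pending:
            return None
        winner = min(self.pending)      # software analogue of bitwise arbitration
        self.pending.remove(winner)
        self.log.append(winner)         # browsable record for troubleshooting
        for cb in self.subscribers:
            cb(winner)
        return winner

bus = CanBus()
bus.transmit(CanFrame(0x205, struct.pack('<h', -1200)))  # hypothetical encoder node
bus.transmit(CanFrame(0x101, struct.pack('<h', 512)))    # hypothetical actuator node
first = bus.arbitrate_once()
```

Because every frame crosses one shared medium, a diagnostic node can subscribe or replay `bus.log` without any per-device wiring, which is the configuration-and-troubleshooting benefit the abstract describes.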
Air and ground vehicles exhibit complementary capabilities and
characteristics as robotic sensor platforms. Fixed-wing aircraft offer a broad field of view and rapid coverage of search areas. However, minimum operating airspeed and altitude limits, combined with attitude uncertainty, place a lower limit on their ability to detect and localize ground features. Ground vehicles, on the other hand, offer high-resolution sensing over relatively short ranges, with the disadvantage of slow coverage. This paper presents a decentralized architecture and solution methodology for seamlessly realizing the collaborative potential of air and ground robotic sensor platforms. We provide a framework based on an established approach to the underlying sensor fusion problem, which provides transparent integration of information from heterogeneous sources. An information-theoretic utility measure captures the task objective and robot inter-dependencies. A simple distributed solution mechanism is employed to determine team-member sensing trajectories subject to the constraints of individual vehicle and sensor sub-systems. The architecture is applied to a mission involving searching for and localizing an unknown number of targets in a user-specified search area. Results are presented for a team of two fixed-wing UAVs and two all-terrain UGVs equipped with vision sensors.
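An information-theoretic utility of the kind referred to above can be sketched for a single Gaussian target estimate: the mutual information between the state and a direct position measurement with noise covariance R is ½·log det(P·R⁻¹ + I), so candidate viewpoints with lower measurement noise score higher. The range-dependent noise model, candidate viewpoints, and parameter values below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def mutual_info_gain(P_prior, R):
    """MI between a Gaussian state with prior covariance P_prior and a
    direct measurement z = x + v, v ~ N(0, R):  0.5 * log det(P R^-1 + I)."""
    d = P_prior.shape[0]
    return 0.5 * np.log(np.linalg.det(P_prior @ np.linalg.inv(R) + np.eye(d)))

def best_viewpoint(P_prior, target, candidates, sigma0=0.5, k=0.05):
    """Greedy utility maximization: measurement noise grows with range,
    so nearer viewpoints yield more information about the target."""
    def R_at(pos):
        rng = np.linalg.norm(np.asarray(pos) - target)
        return (sigma0 + k * rng) ** 2 * np.eye(2)
    return max(candidates, key=lambda c: mutual_info_gain(P_prior, R_at(c)))

P = np.diag([9.0, 9.0])                  # uncertain prior over target position
target = np.array([50.0, 20.0])
cands = [(0.0, 0.0), (40.0, 20.0), (90.0, 90.0)]
choice = best_viewpoint(P, target, cands)
```

In a decentralized team, each platform would evaluate this utility over its own feasible trajectories and coordinate through the shared information terms rather than through a central planner.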
Decentralized systems require no central controller or center where information is fused or commands are generated. Information-theoretic ideas have previously been used to develop optimal fusion algorithms for decentralized sensing and data fusion systems. The work described in this paper aims to develop equivalent algorithms for the control of decentralized systems. The methods and algorithms described center on the use of mutual information gain as a measure for choosing control actions. Two example problems are described: area coverage for purposes of surveillance and navigation, and sensor management for cueing and hand-off operations. The motivation for this work is the control of multiple unmanned air vehicles (UAVs).
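For the area-coverage problem, the mutual information gained by perfectly observing a Bernoulli cell equals that cell's binary entropy, so a greedy mutual-information controller simply steers the sensor footprint toward the most uncertain cells. The sketch below shows this on a hypothetical 1-D strip of cells; the belief values, footprint model, and perfect-sensor assumption are all invented for illustration.

```python
import math

def entropy(p):
    """Binary entropy (bits) of a cell's target-occupancy probability."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# 1-D strip of cells: some surveyed (p near 0 or 1), some unknown (p = 0.5).
belief = [0.02, 0.02, 0.5, 0.5, 0.5, 0.9, 0.02]

def footprint(pos, move):
    """Cells the sensor covers after moving left (-1) or right (+1)."""
    p = pos + move
    return [i for i in (p - 1, p, p + 1) if 0 <= i < len(belief)]

def choose_move(pos, moves=(-1, +1)):
    # With a perfect sensor, expected info gain over a footprint is the
    # summed cell entropy, so the controller heads for uncertain ground.
    return max(moves, key=lambda m: sum(entropy(belief[i]) for i in footprint(pos, m)))

move = choose_move(pos=1)
```

After each observation the visited cells' entropies collapse toward zero, so repeating this greedy step naturally sweeps the vehicle across the remaining unexplored region.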