Enabling warfighters to efficiently and safely execute dangerous missions, unmanned systems have become an increasingly valuable component of modern warfare. The evolving use of unmanned systems produces vast amounts of data collected from sensors placed on the remote vehicles. As a result, many command and control (C2) systems have been developed to provide the tools needed to perform one of the following functions: controlling the unmanned vehicle, or analyzing and processing the sensory data it returns. These C2 systems are often disparate from one another, limiting the ability to optimally distribute data among different users. The Space and Naval Warfare Systems Center Pacific (SSC Pacific) seeks to address this technology gap through the UxV to the Cloud via Widgets project. The overarching intent of this three-year effort is to provide three major capabilities: 1) unmanned vehicle control using an open service-oriented architecture; 2) data distribution utilizing cloud technologies; and 3) a collection of web-based tools enabling analysts to better view and process data. This paper focuses on how the UxV to the Cloud via Widgets system is designed and implemented by leveraging the following technologies: Data Distribution Service (DDS), Accumulo, Hadoop, and Ozone Widget Framework (OWF).
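The abstract does not detail the project's Accumulo data model, but a common pattern for persisting time-ordered UxV sensor data in a lexicographically sorted store is worth illustrating. The sketch below is a hedged, hypothetical example of such a row-key design; the vehicle ID format, reversed-timestamp constant, and key layout are all assumptions, not the paper's actual schema.

```python
from datetime import datetime, timezone

# Hypothetical illustration only: the paper's actual Accumulo schema is not
# given in the abstract. This sketches one common pattern for keying
# time-ordered UxV sensor data so that the newest records sort first.

MAX_MILLIS = 10**13  # reversed-timestamp constant (assumption)

def make_row_key(vehicle_id: str, ts: datetime) -> str:
    """Row key = vehicle id + zero-padded reversed timestamp.

    Reversing the timestamp makes the newest readings sort first, which
    suits "latest data" scans in a lexicographically ordered store such
    as Accumulo.
    """
    millis = int(ts.timestamp() * 1000)
    return f"{vehicle_id}:{MAX_MILLIS - millis:013d}"

# Example: two readings from the same (hypothetical) vehicle.
t1 = datetime(2014, 5, 1, 12, 0, 0, tzinfo=timezone.utc)
t2 = datetime(2014, 5, 1, 12, 0, 5, tzinfo=timezone.utc)
k1, k2 = make_row_key("uxv-042", t1), make_row_key("uxv-042", t2)
assert k2 < k1  # the later reading sorts first
```

Keying by vehicle and reversed time keeps each vehicle's most recent telemetry at the front of its key range, so analyst-facing widgets can fetch current state with a short scan rather than a full table read.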
KEYWORDS: 3D modeling, Data modeling, Video, Atomic force microscopy, Visualization, Unmanned systems, 3D video streaming, Clouds, Video surveillance, 3D image processing
Unmanned systems have been cited as one of the future enablers for all the services, helping the warfighter dominate the battlespace. The potential benefits of unmanned systems are being closely investigated: providing increased and potentially stealthy surveillance, removing the warfighter from harm's way, and reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data from the unmanned vehicle in its original format and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (the original data format) and then constructs a 3D model (the new data medium) using structure-from-motion. The generated 3D model provides warfighters with additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data, with Google Earth™ providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.
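The abstract does not specify the structure-from-motion pipeline used, but the core two-view step that most SfM systems build on can be sketched with standard OpenCV calls: match features between frames, estimate the camera motion, and triangulate 3D points. The following is a minimal sketch under the assumption of a calibrated camera, not the paper's implementation.

```python
import cv2
import numpy as np

# Hedged sketch only: the paper does not detail its SfM pipeline. This
# shows the standard two-view reconstruction step: match features,
# estimate relative camera motion, and triangulate sparse 3D points.

def two_view_reconstruction(img1, img2, K):
    """Recover relative pose and a sparse point cloud from two frames.

    img1, img2 : grayscale video frames (numpy arrays)
    K          : 3x3 camera intrinsic matrix (assumed known/calibrated)
    """
    # 1. Detect and match ORB features between the two frames.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. Estimate the essential matrix with RANSAC, then the relative pose.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K)

    # 3. Triangulate inlier correspondences into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = pose_mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    pts3d = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean
    return R, t, pts3d
```

A full pipeline chains this step across many video frames and typically refines the result with bundle adjustment before producing a model suitable for situational-awareness display.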
KEYWORDS: Data modeling, Human-machine interfaces, Control systems, Interfaces, Video, Feedback control, RGB color model, Sensors, Transparency, Standards development
Precedent has shown that common controllers must strike a balance between the desire for an integrated user-interface design by human factors engineers and support of project-specific data requirements. A common user interface requires the project-specific data to conform to an internal representation, but project-specific customization is impeded by the implicit rules introduced by the internal data representation. Space and Naval Warfare Systems Center Pacific (SSC Pacific) developed the latest version of the Multi-robot Operator Control Unit (MOCU) to address interoperability, standardization, and customization issues by using a modular, extensible, and flexible architecture built upon a shared-world model. MOCU version 3 provides an open and extensible operator-control interface that allows additional functionality to be seamlessly added with software modules while providing the means to fully integrate the information into a layered, game-like user interface. MOCU's design allows it to completely decouple the human interface from the core management modules, while still enabling modules to render overlapping regions of the screen without interference or a priori knowledge of other display elements, thus allowing more flexibility in project-specific customization.
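MOCU's actual module API is not described in the abstract, but the architectural idea (display modules that render into overlapping screen regions without knowing about one another) can be illustrated with a small sketch. Everything below, including the `DisplayModule`/`Compositor` names and the z-order layer attribute, is a hypothetical reconstruction of the pattern, not MOCU's code.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: MOCU's real module interface is not public in the
# abstract. This illustrates display modules rendering into z-ordered
# layers without mutual knowledge, while a thin compositor owns the screen.

class DisplayModule(ABC):
    """A plug-in that draws one concern (map, video, telemetry, ...)."""

    #: z-order of this module's layer; higher draws on top (assumption).
    layer: int = 0

    @abstractmethod
    def render(self, canvas) -> None:
        """Draw onto the shared canvas; never touches other modules."""

class Compositor:
    """Core-side renderer: composites modules strictly by layer order."""

    def __init__(self):
        self._modules: list[DisplayModule] = []

    def register(self, module: DisplayModule) -> None:
        self._modules.append(module)

    def draw_frame(self, canvas) -> None:
        # Modules may cover overlapping screen regions; sorting by layer
        # resolves overlap without any module knowing its neighbors.
        for module in sorted(self._modules, key=lambda m: m.layer):
            module.render(canvas)
```

The point of the decoupling is that a project can add or restyle a display module without touching the core management code, which matches the customization goal the abstract describes.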
Space and Naval Warfare Systems Center, San Diego (SSC San Diego) has developed an unmanned vehicle and sensor operator control interface capable of simultaneously controlling and monitoring multiple sets of heterogeneous systems. The modularity, scalability, and flexible user interface of the Multi-robot Operator Control Unit (MOCU) accommodate a wide range of vehicles and sensors in varying mission scenarios. MOCU currently controls all of the SSC San Diego developmental vehicles (land, air, sea, and undersea), including the SPARTAN Advanced Concept Technology Demonstration (ACTD) Unmanned Surface Vehicle (USV), the iRobot PackBot, and the Family of Integrated Rapid Response Equipment (FIRRE) vehicles and sensors. This paper discusses how software and hardware modularity has allowed SSC San Diego to create a single operator control unit (OCU) with the capability to control a wide variety of unmanned systems.
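One way a single OCU can command land, air, sea, and undersea vehicles is through per-vehicle adapters that translate a common command set into each platform's native protocol. The sketch below is purely illustrative of that adapter pattern; the `DriveCommand` fields, adapter names, and translation formulas are assumptions, not MOCU internals.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical illustration: the abstract does not expose MOCU's internal
# interfaces. This sketches the adapter pattern that lets one OCU issue a
# common command set to heterogeneous vehicles, each speaking its own
# protocol behind an adapter.

@dataclass
class DriveCommand:
    """OCU-side, vehicle-agnostic command (field names are assumptions)."""
    speed: float    # normalized -1.0 .. 1.0
    heading: float  # degrees, 0 = north

class VehicleAdapter(Protocol):
    def send(self, cmd: DriveCommand) -> None: ...

class UsvAdapter:
    """Translates common commands into a surface vessel's protocol."""
    def send(self, cmd: DriveCommand) -> None:
        throttle = max(0.0, cmd.speed)  # assume this USV cannot reverse
        print(f"USV: throttle={throttle:.2f} course={cmd.heading:.1f}")

class UgvAdapter:
    """Translates common commands into skid-steer track speeds."""
    def send(self, cmd: DriveCommand) -> None:
        turn = cmd.heading / 180.0      # crude heading -> turn rate
        left, right = cmd.speed + turn, cmd.speed - turn
        print(f"UGV: tracks L={left:.2f} R={right:.2f}")

# One control loop can drive any registered vehicle uniformly.
for vehicle in (UsvAdapter(), UgvAdapter()):
    vehicle.send(DriveCommand(speed=0.5, heading=90.0))
```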
In the area of logistics, there currently is a capability gap between the one-ton Army robotic Multifunction Utility/Logistics and Equipment (MULE) vehicle and a soldier’s backpack. The Unmanned Systems Branch at Space and Naval Warfare Systems Center (SPAWAR Systems Center, or SSC), San Diego, with the assistance of a group of interns from nearby High Tech High School, has demonstrated enabling technologies for a solution that fills this gap. A small robotic transport system has been developed based on the Segway Robotic Mobility Platform (RMP). We have demonstrated teleoperated control of this robotic transport system, and conducted two demonstrations of autonomous behaviors. Both demonstrations involved a robotic transporter following a human leader. In the first demonstration, the transporter used a vision system running a continuously adaptive mean-shift filter to track and follow a human. In the second demonstration, the separation between leader and follower was significantly increased using Global Positioning System (GPS) information. The track of the human leader, with a GPS unit in his backpack, was sent wirelessly to the transporter, also equipped with a GPS unit. The robotic transporter traced the path of the human leader by following these GPS breadcrumbs. We have additionally demonstrated a robotic medical patient transport capability by using the Segway RMP to power a mock-up of the Life Support for Trauma and Transport (LSTAT) patient care platform, on a standard NATO litter carrier. This paper describes the development of our demonstration robotic transport system and the various experiments conducted.
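The GPS breadcrumb-following behavior described above lends itself to a compact sketch: the leader's fixes queue up as waypoints, and the follower advances along the trail as it reaches each one. The real system's navigation code and thresholds are not given in the abstract; the waypoint queue, 3 m arrival radius, and short-range distance approximation below are all assumptions.

```python
import math
from collections import deque

# Hedged sketch of the GPS "breadcrumb" following behavior described
# above. The arrival radius and the equirectangular distance
# approximation are assumptions, not the demonstrated system's values.

EARTH_RADIUS_M = 6_371_000.0

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance between two GPS fixes (short ranges)."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return EARTH_RADIUS_M * math.hypot(x, y)

class BreadcrumbFollower:
    def __init__(self, arrival_radius_m=3.0):
        self.crumbs = deque()  # leader fixes, oldest first
        self.arrival_radius_m = arrival_radius_m

    def on_leader_fix(self, lat, lon):
        """Called when the leader's backpack GPS reports a new position."""
        self.crumbs.append((lat, lon))

    def steer(self, my_lat, my_lon):
        """Return the next breadcrumb to drive toward, or None if caught up."""
        while self.crumbs:
            lat, lon = self.crumbs[0]
            if distance_m(my_lat, my_lon, lat, lon) > self.arrival_radius_m:
                return (lat, lon)  # still en route to this crumb
            self.crumbs.popleft()  # reached it; advance along the trail
        return None
```

Following the recorded trail, rather than driving straight at the leader's current position, is what allows the large leader-follower separation the demonstration reports: the transporter retraces a path already proven traversable by the human.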