Many fielded mobile robot systems have demonstrated the importance of directly estimating the 3D shape of objects in the robot's vicinity. The most mature solutions available today use active laser scanning or stereo camera pairs, but both approaches require specialized and expensive sensors. In prior publications, we have demonstrated the generation of stereo images from a single very low-cost camera using structure from motion (SFM) techniques. In this paper we demonstrate the practical usage of single-camera stereo in real-world mobile robot applications. Stereo imagery tends to produce incomplete 3D shape reconstructions of man-made objects because of smooth/glary regions that defeat stereo matching algorithms. We demonstrate robust object detection despite such incompleteness through matching of simple parameterized geometric models. Results are presented where parked cars are detected, and then recognized via license plate recognition, all in real time by a robot traveling through a parking lot.
In support of the U.S. Army vision for increased mobility, survivability, and lethality, the Army Research Laboratory is currently developing a new version of its low frequency ultra-wideband (UWB) synthetic aperture radar (SAR) to support forward imaging. One of the goals in the development of this version of the radar is to make it affordable. This paper presents a study of various forward imaging radar configurations that could be employed in a forward imaging radar system to achieve good imaging resolution with a reasonable number of transmitters/receivers. This study provided us with insights into how to efficiently configure our transmitter/receiver array. In this study, we examined various radar configurations, including the monostatic case and several bistatic variations. We provide an analysis of the SAR image resolution for these configurations and show the effectiveness of the bistatic configuration with only two transmitters at the ends of the physical array. In addition to the analysis, we also provide simulation results to demonstrate the expected imaging resolutions with respect to the radar configuration and the imaging geometry. Finally, we also consider the use of two squinted transmitters at the two ends and exploit the forward motion of the vehicle to form images on the two sides.
Vehicle-borne smuggling is widespread because of the availability, flexibility and capacity of cars and trucks. Inspecting vehicles at border crossings and checkpoints is a key security element. At the present time, most vehicle security inspections at home and abroad are conducted manually. Remotely operated vehicle inspection robots could be integrated into the operating procedures to improve throughput while reducing the workload burden on security personnel. The robotic inspection must be effective at detecting contraband and efficient at clearing the "clean" vehicles that make up the bulk of the traffic stream, while limiting the workload burden on the operators.
In this paper, we present a systems engineering approach to robotic vehicle inspection. We review the tactics, techniques and procedures to interdict contraband. We present an operational concept for robotic vehicle inspection within this framework, and identify needed capabilities. We review the technologies currently available to meet these needs. Finally, we summarize the immediate potential and R&D challenges for effective contraband detection robots.
This paper presents an algorithm for online image-based terrain classification that mimics a human supervisor's segmentation and classification of training images into "Go" and "NoGo" regions. The algorithm identifies a set of image chips (or exemplars) in the training images that span the range of terrain appearance. It then uses the exemplars to segment novel images and assign a Go/NoGo classification. System parameters adapt to new inputs, providing a mechanism for learning. System performance is compared to that obtained via offline fuzzy c-means clustering and support vector machine classification.
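A minimal sketch of the exemplar mechanism this abstract describes; the chip feature (mean RGB plus a grayscale texture statistic) and the novelty threshold are illustrative assumptions, not the system's actual parameters:

```python
import numpy as np

class ExemplarClassifier:
    """Sketch of exemplar-based Go/NoGo terrain labeling."""

    def __init__(self, novelty_threshold=0.15):
        self.exemplars = []          # list of (feature_vector, label)
        self.tau = novelty_threshold # distance beyond which a chip is "novel"

    @staticmethod
    def feature(chip):
        # chip: HxWx3 float array in [0, 1]
        mean_rgb = chip.reshape(-1, 3).mean(axis=0)
        texture = chip.mean(axis=2).std()     # crude texture statistic
        return np.append(mean_rgb, texture)

    def train(self, chip, label):
        f = self.feature(chip)
        # Keep only chips that extend the span of terrain appearance.
        if not self.exemplars or min(
                np.linalg.norm(f - e) for e, _ in self.exemplars) > self.tau:
            self.exemplars.append((f, label))

    def classify(self, chip):
        f = self.feature(chip)
        dists = [np.linalg.norm(f - e) for e, _ in self.exemplars]
        return self.exemplars[int(np.argmin(dists))][1]  # "Go" or "NoGo"
```

Chips near an existing exemplar are discarded during training, which keeps the exemplar set compact while the threshold serves as the adaptation knob for learning from new inputs.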
The Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at the University of Tennessee is currently developing a modular approach to unmanned systems to increase mission flexibility and aid system interoperability for security and surveillance applications. The main focus of the IRIS research is the development of sensor bricks, where the term brick denotes a self-contained system that consists of the sensor itself, a processing unit, wireless communications, and a power source. Prototypes of a variety of sensor bricks have been developed. These systems include a thermal imaging brick, a quad video brick, a 3D range brick, and a nuclear (gamma ray and neutron) detection brick. These bricks have been integrated in a modular fashion into mobility platforms to form functional unmanned systems. Research avenues include sensor processing algorithms, system integration, communications architecture, multi-sensor fusion, sensor planning, sensor-based localization, and path planning. This research is focused on security and surveillance applications such as under vehicle inspection, wide-area perimeter surveillance, and high value asset monitoring. This paper presents an overview of the IRIS research activities in modular robotics and includes results from prototype systems.
The US Navy and other Department of Defense (DoD) and Department of Homeland Security (DHS) organizations are increasingly interested in the use of unmanned surface vehicles (USVs) for a variety of missions and applications. In order for USVs to fill these roles, they must be capable of a relatively high degree of autonomous navigation. Space and Naval Warfare Systems Center, San Diego is developing core technologies required for robust USV operation in a real-world environment, primarily focusing on autonomous navigation, obstacle avoidance, and path planning.
State-of-the-art unmanned ground vehicles are capable of understanding and adapting to arbitrary road terrain for navigation. The robotic mobility platforms mounted with sensors detect and report security concerns for subsequent action. Often, the information based on the localization of the unmanned vehicle is not sufficient for deploying army resources. In such a scenario, a three dimensional (3D) map of the area that the ground vehicle has surveyed in its trajectory would provide a priori spatial knowledge for directing resources in an efficient manner. To that end, we propose a mobile, modular imaging system that incorporates multi-modal sensors for mapping unstructured arbitrary terrain. Our proposed system leverages 3D laser-range sensors, video cameras, global positioning systems (GPS) and inertial measurement units (IMU) towards the generation of photo-realistic, geometrically accurate, geo-referenced 3D terrain models. Based on a summary of state-of-the-art systems, we identify the need for, and the several challenges in, the real-time deployment, integration and visualization of data from multiple sensors. We document design issues concerning each of these sensors and present a simple temporal alignment method to integrate multi-sensor data into textured 3D models. These 3D models, in addition to serving as a priori knowledge for path planning, can also be used in simulators that study vehicle-terrain interaction. Furthermore, we show that our 3D models possess the accuracy required even for crack detection in road surface inspection of airfields and highways.
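A minimal sketch of one way such temporal alignment across timestamped streams can be done; the nearest-sample strategy and the skew tolerance are assumptions, since the paper's own method is only summarized above:

```python
import numpy as np

def align_nearest(ref_times, sensor_times, sensor_values, max_skew=0.05):
    """For each reference timestamp (e.g., a laser scan), take the
    nearest sample from another stream (GPS/IMU/video frame pose) if
    it is close enough in time. max_skew is an assumed tolerance (s)."""
    sensor_values = np.asarray(sensor_values, dtype=float)
    idx = np.searchsorted(sensor_times, ref_times)
    idx = np.clip(idx, 1, len(sensor_times) - 1)
    left, right = sensor_times[idx - 1], sensor_times[idx]
    pick = np.where(ref_times - left < right - ref_times, idx - 1, idx)
    aligned = sensor_values[pick]
    # Invalidate matches whose time skew exceeds the tolerance.
    aligned[np.abs(sensor_times[pick] - ref_times) > max_skew] = np.nan
    return aligned
```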
The use of robots for (semi-) autonomous operations in complex terrains such as urban environments poses difficult mobility, mapping, and perception challenges. To be able to work efficiently, a robot should be provided with sensors and software such that it can perceive and analyze the world in 3D. Real-time 3D sensing and perception in this operational context are paramount. To address these challenges, DRDC Valcartier has, over the past several years, developed a compact sensor that combines a wide baseline stereo camera and a laser scanner with a full 360 degree azimuth and 55 degree elevation field of view, allowing the robot to view and manage overhang obstacles as well as obstacles at ground level. Sensing in 3D is common, but to navigate and work efficiently in complex terrain, the robot should also perceive, decide and act in three dimensions. Therefore, 3D information should be preserved and exploited in all steps of the process. To achieve this, we use a multiresolution octree to store the acquired data, allowing mapping of large environments while keeping the representation compact and memory efficient. Ray tracing is used to build and update the 3D occupancy model. This model is used, via a temporary 2.5D map, for navigation, obstacle avoidance and efficient frontier-based exploration. This paper describes the volumetric sensor concept and its design features, and presents an overview of the 3D software framework that allows 3D information persistency through all computation steps. Simulation and real-world experiments are presented at the end of the paper to demonstrate the key elements of our approach.
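The ray-traced occupancy update can be sketched as follows; a flat dict of voxel log-odds stands in for the paper's multiresolution octree, and the log-odds increments are assumed values:

```python
import numpy as np

def update_occupancy(vox, origin, endpoint, res=0.1,
                     l_free=-0.4, l_occ=0.85):
    """Ray-trace one range measurement into a voxel map.
    vox: dict mapping integer voxel index -> log-odds of occupancy."""
    direction = endpoint - origin
    length = np.linalg.norm(direction)
    n_steps = int(length / res)
    for i in range(n_steps):                     # cells traversed are free
        p = origin + direction * (i / max(n_steps, 1))
        key = tuple((p / res).astype(int))
        vox[key] = vox.get(key, 0.0) + l_free
    key = tuple((endpoint / res).astype(int))    # the hit cell is occupied
    vox[key] = vox.get(key, 0.0) + l_occ
```

An octree replaces the dict when large environments must stay memory efficient, coarsening resolution in uniform regions; the update logic itself is unchanged.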
The use of head-aimed vision systems for remote navigation and weapons aiming greatly increases the mission performance of armed unmanned ground vehicles. Head-aimed human/robotic vision interfaces greatly improve situational awareness. Task performance in target tracking and threat identification is increased by 200 to 300 percent.
Trends in combat technology research point to an increasing role for uninhabited vehicles and other robotic elements in modern warfare tactics. However, real-time control of multiple uninhabited battlefield robots and other semi-autonomous systems, in diverse fields of operation, is a difficult problem for modern warfighters that, while identified, has not been adequately addressed.
Soar Technology is applying software agent technology to simplify demands on the human operator. Our goal is to build intelligent systems capable of finding the best balance of control between the human and autonomous system capabilities. We are developing an Intelligent Control Framework (ICF) from which to create agent-based systems that are able to dynamically delegate responsibilities across multiple robotic assets and the human operator. This paper describes proposed changes to our ICF architecture based on principles of human-machine teamwork derived from collaborative discourse theory. We outline the principles and the new architecture, and give examples of the benefits that can be realized from our approach.
This paper describes a system for implementing adjustable autonomy levels in simulated unmanned vehicles using an approach based upon the fields of deontics and Joint Intention Theory (JIT). It discusses Soar Technology's Intelligent Control Framework research project (ICF), the authors' use of deontics in the creation of adjustable autonomy for ICF, and some possible future directions in which the research could be expanded. Use of deontics and JIT in ICF has allowed us to define system-wide formal limits on the behavior of the unmanned systems controlled by ICF, to increase the flexibility of our adjustable autonomy system, and to decrease the granularity of the autonomy adjustments. This set of formalisms allows the unmanned system maximal autonomy in the default case, while allowing the user and supervisory agents to constrain that autonomy in situations when necessary. Unlike more strictly layered adjustable autonomy formalisms, our adjustable autonomy formalism can be used to restrict subsets of autonomous behaviors, rather than entire systems, in response to situational requirements.
The field of autonomous vehicles is a rapidly growing one, with significant interest from both government and industry sectors. Autonomous vehicles represent the intersection of artificial
intelligence (AI) and robotics, combining decision-making with real-time control. Autonomous vehicles are desired for use in search and rescue, urban reconnaissance, mine detonation, supply convoys, and more. The general adage is to use robots for anything dull, dirty, dangerous or dumb. While a great deal of research has been done on autonomous systems, there are only a handful of fielded examples incorporating machine autonomy beyond the level of teleoperation, especially in outdoor/complex environments. In an attempt to assess and understand the current state of the art in autonomous vehicle development, the authors identified a few areas where unsolved problems remain. This paper outlines those areas and provides suggestions for the focus of science and technology research. The first step in evaluating the current state of autonomous vehicle development was to develop a definition of autonomy. A number of autonomy level classification systems were reviewed. The resulting working definitions and classification schemes used by the authors are summarized in the opening sections of the paper. The remainder of the report discusses current approaches and challenges in decision-making and real-time control for autonomous vehicles. Suggested research focus areas for near-, mid-, and long-term development are also presented.
This paper describes an extension of scripts, which have been used to control sequences of robot behavior, to facilitate
human-robot coordination. The script mechanism permits the human to both conduct expected, complementary
activities with the robot and to intervene opportunistically taking direct control. Scripts address the six major issues
associated with human-robot coordination. They allow the human to visualize the robot's mental model of the situation
and build a better overall understanding of the situation and what level of autonomy or intervention is needed. They also
maintain synchronization of the world and robot models so that control can be seamlessly transferred between human
and robot while eliminating "coordination surprise". The extended script mechanism and its implementation in Java on
an Inuktun micro-VGTV robot for the technical search task in urban search and rescue are described.
Collecting environmental data in coastal bays presents several challenges to the scientist. One of the most pressing
issues is how to efficiently and reliably gather data in shallow water areas: environments that often preclude the use of
traditional boats. Obstacles that are encountered in such environments include difficulty in covering large territories
and the presence of inaccessible areas due to a variety of reasons, such as soft bottoms or contamination. There is also a
high probability of disturbing the test area while placing the sensors. This paper outlines the development of a remotely
operated boat and its real-time control system.
In support of Canadian Forces transformation, Defence R&D Canada (DRDC) has established an ongoing program to develop machine intelligence for semi-autonomous vehicles and systems. Focussing on mine clearance and remote scouting for over a decade, DRDC Suffield has developed numerous UGVs controlled remotely over point-to-point radio links. Though this strategy removes personnel from potential danger, DRDC recognized that human factors and communications bandwidth limit teleoperation and that only networked, autonomous unmanned systems can conserve these valuable resources. This paper describes the outcome of the first autonomy project, Autonomous Land Systems (ALS), designed to demonstrate basic autonomous multivehicle land capabilities.
In 2002 Defence R&D Canada changed research direction from purely tele-operated land vehicles to general autonomy for land, air, and sea craft. The unique constraints of the military environment coupled with the complexity of autonomous systems drove DRDC to carefully plan a research and development infrastructure that would provide state-of-the-art tools without restricting research scope. DRDC's long term objectives for its autonomy program address disparate unmanned ground vehicle (UGV), unattended ground sensor (UGS), air (UAV), and subsea and surface (UUV and USV) vehicles operating together with minimal human oversight. Individually, these systems will range in complexity from simple reconnaissance mini-UAVs streaming video to sophisticated autonomous combat UGVs exploiting embedded and remote sensing. Together, these systems can provide low risk, long endurance, battlefield services, assuming they can communicate and cooperate with manned and unmanned systems. A key enabling technology for this new research is a software architecture capable of meeting both DRDC's current and future requirements. DRDC built upon recent advances in computing science while developing its software architecture, known as the Architecture for Autonomy (AFA). Although a well established practice in computing science, frameworks have only recently entered common use in unmanned vehicles. For industry and government, the complexity, cost, and time to re-implement stable systems often exceed the perceived benefits of adopting a modern software infrastructure. Thus, most persevere with legacy software, adapting and modifying it when and wherever possible or necessary, and adopting strategic software frameworks only when no justifiable legacy exists. Conversely, academic programs with short one or two year projects frequently exploit strategic software frameworks but with little enduring impact. The open-source movement radically changes this picture. Academic frameworks, open to public scrutiny and modification, now rival commercial frameworks in both quality and economic impact. Further, industry now realizes that open source frameworks can reduce the cost and risk of systems engineering. This paper describes the Architecture for Autonomy implemented by DRDC and how this architecture meets DRDC's current needs. It also presents an argument for why this architecture should satisfy DRDC's future requirements as well.
The objective of the Autonomous Intelligent Systems Section of Defence R&D Canada - Suffield is best described
by its mission statement, which is "to augment soldiers and combat systems by developing and demonstrating
practical, cost effective, autonomous intelligent systems capable of completing military missions in complex
operating environments." The mobility of ground-based mobile systems operating in urban settings
must increase significantly if robotic technology is to augment human efforts in these roles and environments.
The intelligence required for autonomous systems to operate in complex environments demands advances in
many fields of robotics. This has resulted in large bodies of research in areas of perception, world representation,
and navigation, but the problem of locomotion in complex terrain has largely been ignored. In order to achieve
its objective, the Autonomous Intelligent Systems Section is pursuing research that explores the use of intelligent
mobility algorithms designed to improve robot mobility. Intelligent mobility uses sensing, control, and learning
algorithms to extract measured variables from the world, control vehicle dynamics, and learn by experience.
These algorithms seek to exploit available world representations of the environment and the inherent dexterity of
the robot to allow the vehicle to interact with its surroundings and produce locomotion in complex terrain. The
primary focus of the paper is to present the intelligent mobility research within the framework of the research
methodology, plan and direction defined at Defence R&D Canada - Suffield. It discusses the progress and future
direction of intelligent mobility research and presents the research tools, topics, and plans to address this critical
research gap. This research will create effective intelligence to improve the mobility of ground-based mobile
systems operating in urban settings to assist the Canadian Forces in their future urban operations.
Complexity is a dominant, multi-dimensional attribute of the battlespace, and is evident in the geography, man-made
infrastructure, force asymmetry and organizational processes. The Unmanned Aerial Vehicle represents a strategic
enabler for military operations in complex environments by providing a flexible means of acquiring real-time
information and deriving actionable knowledge. Limitations arising from remotely piloted UAV operation together with
the desired operational flexibility in complex environments both dictate the need for increasingly autonomous UAV
operation within a rigorous airspace integration framework. UAV autonomy relies primarily on access to mission-critical
information from on-board sensors and networked datalink, together with comprehensive, efficient and robust
algorithms for decisions on course of action. Global battlefield networking extends the notion of individual vehicle
operation to a coordinated team, whose members carry out complementary and/or redundant tasks. DRDC research on
cooperative teaming of UAVs covers in particular the development and implementation of cooperative control based on
model predictive control. In the context of operations in complex environments, the present paper discusses the selected
approach to cooperative control, and presents applications to formation flight, collision avoidance, real-time
implementation and multi-processing, and fault-detection, isolation and recovery.
Defence R&D Canada (DRDC) is exploiting a synthetic environment to explore the use of multiple coordinated sensors to perform maritime surveillance. A distributed architecture is proposed in which teams from other DRDC labs, industry and academia can experiment with solutions based on constructive or virtual simulations that run locally at their facilities, but participate through a distributed simulation employing the High Level Architecture. The problem is set in the context of the surveillance of traffic routes and fishing vessels and consists, in its most general form, of a dynamic m-vehicle, n-target coordination problem that requires task assignment and trajectory generation components in the solution. An example solution to a reduced form of the problem that was generated with a human-in-the-loop simulator is provided.
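The task-assignment component of such an m-vehicle, n-target problem is commonly posed as a linear assignment; a minimal illustration using SciPy, where the cost values are invented placeholders (e.g., estimated transit times):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: assumed cost for vehicle i to service target j
cost = np.array([[12.0, 40.5, 19.2],
                 [31.0,  8.8, 24.6]])

vehicles, targets = linear_sum_assignment(cost)  # minimum-cost pairing
for v, t in zip(vehicles, targets):
    print(f"vehicle {v} -> target {t} (cost {cost[v, t]})")
```

In the dynamic setting the assignment would be re-solved as vehicle and target states evolve, with trajectory generation filling in the paths between assigned pairs.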
The MultiAgent Tactical Sentry (MATS) project addressed a Canadian Forces (CF) requirement to remotely detect NBC threats. This requirement was met by integrating a suite of primary NBC sensors onto a remotely operated vehicular platform. From inception to completion, the project spanned 30 months. End user trials continue with the initial production run systems and consequently, the CF techniques, tactics, and procedures (TTPs) are evolving on a continual basis.
This paper describes the field-oriented philosophy of the Institute for Safety Security Rescue Technology (iSSRT) and summarizes the activities and lessons learned during calendar year 2005 of its two centers: the Center for Robot-Assisted Search and Rescue and the NSF Safety Security Rescue industry/university cooperative research center. In 2005, iSSRT participated in four responses (the La Conchita, CA mudslides; Hurricane Dennis; Hurricane Katrina; Hurricane Wilma) and conducted three field experiments (NJTF-1, Camp Hurricane, Richmond, MO). The lessons learned covered mobility, operator control units, wireless communications, and general reliability. The work has collectively identified six emerging issues for future work. Based on these studies, a 10-hour, 1-continuing-education-unit course on rescue robotics has been created and is available. Rescue robots and sensors are available for loan upon request.
In this paper, we present preliminary work on a novel wearable joystick for gloves-on human/computer interaction
in hazardous environments. Interacting with traditional input devices can be clumsy and inconvenient for the
operator in hazardous environments due to the bulkiness of multiple system components and troublesome wires.
During a collapsed structure search, for example, protective clothing, uneven footing, and "snag" points in
the environment can render traditional input devices impractical. Wearable computing has been studied by
various researchers to increase the portability of devices and to improve the proprioceptive sense of the wearer's
intentions. Specifically, glove-like input devices to recognize hand gestures have been developed for general-purpose
applications. But, regardless of their performance, prior gloves have been fragile and cumbersome to
use in rough environments. In this paper, we present a new wearable joystick to remove the wires from a simple,
two-degree-of-freedom glove interface. The resulting wearable joystick is low cost, durable and robust,
and wire-free at the glove. In order to evaluate the wearable joystick, we take into consideration two metrics
during operator tests of a commercial robot: task completion time and path tortuosity. We employ fractal
analysis to measure path tortuosity. Preliminary user test results are presented that compare the performance
of the wearable joystick with that of a traditional joystick.
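Path tortuosity via fractal analysis can be approximated with a box-counting estimate; a minimal sketch, with an assumed set of box scales:

```python
import numpy as np

def fractal_dimension(path, scales=(1, 2, 4, 8, 16)):
    """Box-counting estimate of path tortuosity.
    path: Nx2 array of robot positions recorded during a trial."""
    path = path - path.min(axis=0)
    counts = []
    for s in scales:
        # Count the distinct boxes of side s that the path visits.
        boxes = {tuple(np.floor(p / s).astype(int)) for p in path}
        counts.append(len(boxes))
    # Slope of log(count) vs log(1/scale) estimates the dimension:
    # straighter paths give values near 1, tortuous ones higher.
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope
```

Comparing the estimated dimension across interface conditions then complements task completion time as a measure of how cleanly the operator drove.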
With over 100 models of unmanned vehicles now available for military and civilian safety, security or rescue
applications, it is important for agencies to establish acceptance testing. However, there appear to be no general
guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance
testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has
been applied to a man-packable micro-robot, and iii) the value of testing both to ensure that the customer has a
workable system and to improve design. The test method automated the operation of the robot to repeatedly exercise all
aspects and combinations of components on the robot for 6 hours. The acceptance testing process uncovered many
failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The
process also demonstrated that the testing by the manufacturer can provide important design data that can be used to
identify, diagnose, and prevent long-term problems. Also, the structured testing environment showed that sensor
systems can be used to predict errors and changes in performance, as well as to uncover unmodeled behavior in the system.
We study the problem of dispersing a group of small robots in an unknown environment. The objective is to
cover the environment as much as possible while staying within communications range. We assume that the
environment is unknown and contains complex obstacles, and that the robots operate without any central
control and have only limited communications with other robots and limited sensing capabilities. We present
algorithms and validate them experimentally in the Player/Stage simulation environment.
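A minimal sketch of a dispersion step of the kind described, assuming each robot knows the relative positions of neighbors within range; this is a stand-in heuristic, not a reimplementation of the paper's algorithms:

```python
import numpy as np

def dispersion_step(positions, comm_range=10.0, step=0.5):
    """One synchronous step: each robot moves away from the centroid
    of neighbors it can hear, but never so far that it would lose its
    last communications link. positions: Nx2 array."""
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        d = np.linalg.norm(positions - p, axis=1)
        neighbors = positions[(d > 0) & (d < comm_range)]
        if len(neighbors) == 0:
            continue                       # isolated: stay put
        away = p - neighbors.mean(axis=0)  # repulsion from the local crowd
        norm = np.linalg.norm(away)
        if norm < 1e-9:
            continue
        candidate = p + step * away / norm
        # Only move if at least one neighbor remains in range afterward.
        if np.linalg.norm(neighbors - candidate, axis=1).min() < comm_range:
            new_positions[i] = candidate
    return new_positions
```

Such a rule spreads the team outward while the range check preserves connectivity; obstacle handling in the simulated environments would add a repulsion term from sensed walls.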
For the Wayfarer Project, funded by the US Army through TARDEC, we have developed technologies that enable man-portable PackBot Wayfarer UGVs to perform autonomous reconnaissance in urban terrain. Each Wayfarer UGV can autonomously follow urban streets and building perimeters while avoiding obstacles and building a map of the terrain. Each UGV is equipped with a 3D stereo vision system, a 360-degree planar LIDAR, GPS, INS, compass, and odometry. The Hough transform is applied to LIDAR range data to detect building walls for street following and perimeter following. We have demonstrated Wayfarer's ability to autonomously follow roads in urban and rural environments, while building a map of the surrounding terrain. Recently, we have developed a ruggedized version of the Wayfarer
Navigation Payload for use in rough terrain and all-weather conditions. The new payload incorporates a compact Tyzx G2 stereo vision module and a high-performance Athena Guidestar INS/GPS unit.
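A minimal sketch of the Hough voting step applied to planar LIDAR returns; the bin resolutions and vote threshold are assumed values:

```python
import numpy as np

def hough_lines(ranges, angles, n_theta=180, rho_res=0.1, min_votes=25):
    """Detect dominant lines (e.g., building walls) in a planar scan.
    ranges: array of ranges (m); angles: corresponding beam angles (rad).
    Returns (theta, rho) pairs in the normal-form line parameterization."""
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rho_max = ranges.max()
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in zip(xs, ys):
        # Each point votes for every line passing through it.
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = ((rhos + rho_max) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1
    peaks = np.argwhere(acc >= min_votes)   # each row: (theta_i, rho_i)
    return [(thetas[t], r * rho_res - rho_max) for t, r in peaks]
```

The strongest peak's orientation then gives a heading reference for street or perimeter following, robust to gaps and clutter in the individual returns.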
Many emerging UV (Unmanned Vehicle) cooperative control systems utilizing mission decomposition and
generic UV management techniques are UAV (Unmanned Aerial Vehicle) oriented and transition well from model
simulations to hardware due to the relative homogeneity of the air environment. Unmanned Surface Vehicles (USVs)
and other ground borne vehicles, to function robustly, must have an additional onboard capacity to negotiate local
environmental and indigenous operational factors in order to be commanded by a network, and this capacity is most
easily delineated into Skill Sets. The Autonomous Maritime Navigation (AMN) Program is developing USV systems
which target fully autonomous intelligent operations, with autonomy Skill Sets that allow USVs to perform
complex missions over extended time periods at the platform level with minimal human supervision. Importantly, this
allows control systems developed for cooperating UVs to effectively control USVs by enabling decision making on
local platform issues at the platform level. Using a 40-foot laboratory boat, advanced on-board control, sensing, data fusion,
physical plant and payload monitoring and management are being adapted and integrated as a system to replace
traditional human crew functions. This paper discusses a path to achieving the goal of full USV autonomy, with
skills to self-manage, survive and navigate, and the progress being made on enabling technologies. Initiatives and
partnerships have been formed with academia, industry, and other DoD laboratories to these ends in both independent
and collaborative RDT&E projects. Discussion includes ongoing work in sensing, data fusion, dynamic mission
planning, execution and boat operations, and integration with JAUS/TCS control protocols.
Unmanned ground and air systems operating in collaboration have the potential to provide future Joint Forces a significant capability for operations in complex terrain. Collaborative Engagement Experiment (CEE) is a consolidation of separate Air Force, Army and Navy collaborative efforts within the Joint Robotics Program (JRP) to provide a picture of the future of unmanned warfare. The Air Force Research Laboratory (AFRL), Material and Manufacturing Directorate, Aerospace Expeditionary Force Division, Force Protection Branch (AFRL/MLQF), The Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) Joint Technology Center (JTC)/Systems Integration Laboratory (SIL), and the Space and Naval Warfare Systems Center - San Diego (SSC San Diego) are conducting technical research and proof of principle experiments for an envisioned operational concept for extended range, three dimensional, collaborative operations between unmanned systems, with enhanced situational awareness for lethal operations in complex terrain. This paper describes the work by these organizations to date and outlines some of the plans for future work.
The University of Alabama in Huntsville (UAH) is currently investigating techniques and technologies for the integration of a small unmanned aerial vehicle (SUAV) with small unmanned ground vehicles (SUGV). Each vehicle has its own set of unique capabilities, but the efficient integration of the two for a specific application requires modifying and integrating both systems. UAH has been flying and testing an autonomously controlled small helicopter, called the Flying Bassett (Base Airborne Surveillance and Sensing for Emergency Threat Tracking), for over a year. Recently, integrated operations were performed with four SUGVs: the Matilda (Mesa Robotics, Huntsville, AL), the US Navy Vanguard, the UAH Rover, and the Penetrator (Mesa Robotics).
The program has progressed from 1) building an air and ground capability for video and infrared surveillance, through 2) integration with ground vehicles in realistic scenarios, to 3) deployment and recovery of ground vehicles. The work was done with the cooperation of the US Army at Ft. Benning, GA and Redstone Arsenal, AL; the Federal Bureau of Investigation in Huntsville, AL; the US Naval Reserve in Knoxville, TN; and local emergency organizations. The results so far have shown that when the air and ground systems are employed together, their utility is greatly enhanced.
This paper presents an automated classification system for images based on their visual complexity. The image complexity is approximated using a clutter measure, and parameters for processing are chosen dynamically. The classification method is part of a vision-based collision avoidance system for low-altitude aerial vehicles, intended to be used during search and rescue operations in urban settings. The collision avoidance system focuses on detecting thin obstacles such as wires and power lines. Automatic parameter selection for edge detection shows a 5% and 12% performance improvement for medium and heavily cluttered images, respectively. The automatic classification enabled the algorithm to identify nearly invisible power lines in 60 frames of video footage from an SUAV helicopter that crashed during a search and rescue mission following Hurricane Katrina, without any manual intervention.
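A minimal sketch of clutter-driven parameter selection; the clutter proxy (mean gradient magnitude) and the three threshold bands are illustrative assumptions, not the paper's measure or values:

```python
import numpy as np

def choose_canny_thresholds(gray):
    """Pick edge-detection thresholds from an image clutter estimate.
    gray: 2D grayscale image array. Returns (low, high) thresholds."""
    gy, gx = np.gradient(gray.astype(float))
    clutter = np.hypot(gx, gy).mean()   # crude proxy for visual complexity
    if clutter < 5.0:        # lightly cluttered scene: keep weak edges
        return 20, 60
    elif clutter < 15.0:     # medium clutter
        return 50, 120
    else:                    # heavy clutter: suppress weak responses
        return 90, 180
```

The point of the adaptation is that a fixed threshold pair tuned for clear sky drowns thin wires in cluttered urban backgrounds, and vice versa.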
In this paper, we present a technique for automatic registration and mosaicking of multispectral images acquired by a mini-UAV platform. The mini-UAV in this research is manufactured and operated by Air-O-Space International (AOSI) L.L.C.; a 3-band multispectral sensor system captures data at green (550 nm), red (650 nm), and NIR (820 nm) bands. The imagery is converted to a digital format and downlinked to the ground station in real time. Automatic image registration is needed to co-register these three band images so that the final commercial products, such as pseudo-CIR and NDVI images (e.g., for agricultural study), can be generated in near real time. There are two types of image registration approaches: area-based and feature-based. Since most image scenes contain crop fields, trees, grass, and soil, from which no prominent feature details can be easily extracted, the area-based method is adopted. Control point detection is the key to successful automatic image registration and mosaicking. In order to control false alarms during control point detection, the potential exploration area, i.e., the region of interest, is searched first; to remove inaccurately detected control points, control point selection is conducted based on the occurrence frequency of the resultant coordinate displacements. For image mosaicking, where the rotational misalignment can be large, the rotation is adjusted before control point detection, which greatly mitigates the limitation of the area-based method. The overall turn-around time (from image acquisition to commercial product generation) is about a couple of hours. This cost-effective UAV system, including the developed software, is very supportive of timely decision-making in practical applications such as agricultural and forestry monitoring.
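Area-based control point detection typically reduces to template matching by normalized cross-correlation within a restricted region of interest; a minimal sketch under that assumption (window and search-region sizes would come from the expected band-to-band misalignment):

```python
import numpy as np

def best_match(template, search_region):
    """Locate a template from one band inside a search region of
    another band via normalized cross-correlation (NCC).
    Returns the (row, col) offset and the NCC score of the best match."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_rc = -2.0, (0, 0)
    for r in range(search_region.shape[0] - th + 1):
        for c in range(search_region.shape[1] - tw + 1):
            w = search_region[r:r + th, c:c + tw]
            score = np.mean(t * (w - w.mean()) / (w.std() + 1e-9))
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best
```

Collecting the offsets from many windows and keeping only the displacement values that recur frequently is one way to realize the false-alarm rejection the abstract describes.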
In this paper we present an algorithm for the autonomous navigation of an unmanned aerial vehicle (UAV) following a moving target. The UAV in consideration is a fixed-wing aircraft that has physical constraints on airspeed and maneuverability. The target, however, is not constrained and can move in any general pattern. We show a single circular-pattern navigation algorithm that works for targets moving at any speed and in any pattern, where other methods must switch between different navigation strategies in different scenarios. The simulations performed take into consideration that the aircraft also needs to visually track the target using a mounted camera. The camera is also controlled by the algorithm according to the position and orientation of the aircraft and the position of the target. Experiments show that the algorithm presented successfully tracks and follows moving targets.
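A minimal sketch of a single-circle standoff guidance law of the kind described; the standoff radius, gain, and vector-field form are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def loiter_heading(uav_xy, target_xy, standoff=150.0, k=1.5):
    """Commanded heading that blends a tangential component (to orbit
    the target) with a radial one (to converge on the standoff radius)."""
    rel = uav_xy - target_xy
    rho = np.linalg.norm(rel) + 1e-9
    radial = rel / rho                           # unit vector target -> UAV
    tangent = np.array([-radial[1], radial[0]])  # 90 degrees CCW
    # Positive error (outside the circle) steers inward, and vice versa.
    err = (rho - standoff) / standoff
    desired = tangent - k * err * radial
    return np.arctan2(desired[1], desired[0])    # heading command (rad)
```

Because the circle is re-centered on the target each update, the same law covers a stationary target (a pure orbit) and a fast one (the orbit deforms into a weaving pursuit), which is the appeal of a single-pattern approach.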
Within the military, the Explosive Ordnance Disposal (EOD) community has been an early adopter of robotic capabilities. The Joint Service EOD (JSEOD) Program is in the process of fielding its third generation of robotic systems to the EOD technicians. Robots have been an invaluable asset to the EOD technician, and they have been critical to operations in Iraq as we prosecute the IED problem. This paper provides a brief history of past EOD robotic systems, a description of currently fielded and supported systems, and the future of robotic programs within the Joint Service EOD community.
Defence R&D Canada (DRDC) has been given strategic direction to pursue research to increase the independence and effectiveness of military vehicles and systems. This led to the creation of the Autonomous Land Systems (ALS) project, which was completed in 2005 with a successful demonstration of semi-autonomous UGVs in open, partially vegetated environments. Cohort is a newly funded project that will work to develop effective UxV teams for urban and complex environments. This paper briefly discusses the state of UGV research at the completion of ALS and other research projects supporting Cohort. The goals and challenges of Cohort are outlined, as well as the research plan, which involves many of DRDC's laboratories from across Canada.
This paper provides an overview of the development and demonstration of intelligent autonomy technologies for control of heterogeneous unmanned naval air and sea vehicles and describes some of the current limitations of such technologies. The focus is on modular technologies that support highly automated retasking and fully autonomous dynamic replanning for up to ten heterogeneous unmanned systems based on high-level mission objectives, priorities, constraints, and Rules-of-Engagement. A key aspect of the demonstrations is incorporating frequent naval operator evaluations in order to gain better understanding of the integrated man/machine system and its tactical utility. These evaluations help ensure that the automation can provide information to the user in a meaningful way and that the user has a sufficient level of control and situation awareness to task the system as needed to complete complex mission tasks. Another important aspect of the program is examination of the interactions of higher-level autonomy algorithms with other relevant components that would be needed within the decision-making and control loops. Examples of these are vision and other sensor processing algorithms, sensor fusion, obstacle avoidance, and other lower level vehicle autonomous navigation, guidance, and control functions. Initial experiments have been completed using medium and high-fidelity vehicle simulations in a virtual warfare environment and inexpensive surrogate vehicles in flight and in-water demonstrations. Simulation experiments included integration of multi-vehicle task allocation, dynamic replanning under constraints, lower level autonomous vehicle control, automatic assessment of the impact of contingencies on plans, management of situation awareness data, operator alert management, and a mixed-initiative operator interface. In-water demonstrations of a maritime situation awareness capability were completed in both a river and a harbor environment using unmanned surface vehicles and a buoy as surrogate platforms. In addition, a multiple heterogeneous vehicle demonstration was performed using five different types of small unmanned air and ground vehicles. This provided some initial experimentation with specifying tasking for high-level mission objectives and then mapping those objectives onto heterogeneous unmanned vehicles that each have different lower-level autonomy software. Finally, this paper will discuss lessons learned.
In May 2003, the Federal Republic of Germany and the Republic of France awarded a contract to RHEINMETALL
LANDSYSTEME GmbH (Germany), MBDA (France) and THALES (France) for the joint development of a
technology demonstrator for a vehicle-based close-in countermine system. The objective of this cooperation project,
known as MMSR-SYDERA, is to show that, in a full-scale development program, it will be possible to meet the joint
operational requirements issued by the German and French armies, which are based on the following missions: Fast
route opening, Sensitive route opening and Area Clearing. In order to fulfill the three different missions and deal with an
extensive array of mine threats, the MMSR-SYDERA countermine system combines two modes of countermine
operation: triggering mines at a safe distance or with only easy-to-repair damage (so-called decoying), or detecting
mines with sensors for low-order clearing. Thus, the plan calls for the MMSR-SYDERA system to be composed of
five vehicles deployed in different configurations in a convoy on the roads to be cleared. One year after the first paper,
this article reports the status of the Demonstrator as well as the first vehicle level trials, and focuses on specific topics
like the embedded safety components and behaviors linked to the remote control operation, and the wireless links used
between the vehicles. After industrial system trials in the second half of 2006, customer evaluations of the system
demonstrator will be carried out at the beginning of 2007.
The Army's ARV (Armed Robotic Vehicle) Robotic Technologies (ART) program is working on the development of
various technological thrusts for use in the robotic forces of the future. The ART program will develop, integrate and
demonstrate the technology required to advance the maneuver technologies (i.e., perception, mobility, tactical
behaviors) and increase the survivability of unmanned platforms for the future force while focusing on reducing the
soldiers' burden by providing an increase in vehicle autonomy coinciding with a decrease in the total number of user
interventions required to control the unmanned assets. This program will advance the state of the art in perception
technologies to provide the unmanned platform an increasingly accurate view of the terrain that surrounds it, while
developing tactical/mission behavior technologies to provide the Unmanned Ground Vehicle (UGV) the capability to
maneuver tactically, in conjunction with the manned systems in an autonomous mode. The ART testbed will be
integrated with the advanced technology software and associated hardware developed under this effort, and will incorporate
appropriate mission modules (e.g. RSTA sensors, MILES, etc.) to support Warfighter experiments and evaluations
(virtual and field) in militarily significant environments (open/rolling and complex/urban terrain). The outcomes of these
experiments, as well as other lessons learned throughout the program life cycle, will be used to reduce the current risks
that are identified for the future UGV systems that will be developed under the Future Combat Systems (FCS) program,
including the early integration of an FCS-like autonomous navigation system onto a tracked skid steer platform.
Road sign detection is important to a robotic vehicle that automatically drives on roads. In this paper, road signs are detected by means of rules that restrict color and shape and require signs to appear only in limited regions in an image. They are then recognized using a template matching method and tracked through a sequence of images. The method is fast and can easily be modified to include new classes of signs. The road sign detection is used as part of a control system that autonomously drives a vehicle over paved roads. The primary application is to detect intersections, which are usually marked with street name signs or stop signs. An estimate of the range to the sign is computed based on the size of the sign and provides a cue to intersection detection software and driving control.
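A minimal sketch of the rule-based candidate stage (color restriction plus a limited image band); the color bounds and band limit are illustrative assumptions, and template matching would then verify the returned box:

```python
import numpy as np

def candidate_sign_region(rgb, band_top, band_bottom):
    """Threshold sign-like red pixels and keep only detections inside
    the image band where signs plausibly appear (rows band_top to
    band_bottom, an assumed restriction). rgb: HxWx3 uint8 image."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    red_mask = (r > 120) & (r - g > 50) & (r - b > 50)
    red_mask[:band_top] = False       # exclude sky region
    red_mask[band_bottom:] = False    # exclude road surface region
    ys, xs = np.nonzero(red_mask)
    if len(xs) == 0:
        return None
    # Bounding box of the red blob, to be verified by template matching.
    return xs.min(), ys.min(), xs.max(), ys.max()
```

Restricting both color and location keeps the candidate set small enough that template matching and frame-to-frame tracking can run at driving speed, and new sign classes only require new rules and templates.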
The RiSE robot is a biologically inspired, six-legged climbing robot, designed for general mobility in scansorial (vertical walls, horizontal ledges, ground level) environments. It exhibits ground reaction forces that are similar to those of animal climbers and does not rely on suction, magnets or other surface-dependent specializations to achieve adhesion and shear force. We describe RiSE's body and leg design as well as its electromechanical, communications and computational infrastructure. We review design iterations that enable RiSE to climb 90° carpeted, cork-covered and (a growing range of) stucco surfaces in the quasi-static regime.
We discuss the gait generation and control architecture of a bioinspired climbing robot that presently climbs a variety of vertical surfaces, including carpet, cork and a growing range of stucco-like surfaces in the quasi-static regime. The initial version of the robot utilizes a collection of gaits (cyclic feed-forward motion patterns) to locomote over these surfaces, with each gait tuned for a specific surface and set of operating conditions. The need for more flexibility in gait specification (e.g., adjusting the number of feet on the ground), more intricate shaping of workspace motions (e.g., shaping the details of the foot attachment and detachment trajectories), and the need to encode gait "transitions" (e.g., tripod to pentapod gait structure) has led us to separate this trajectory generation scheme into the functional composition of a phase-assigning transformation of the "clock space" (the six-dimensional torus) followed by a map from phase into leg joints that decouples the geometric details of a particular gait. This decomposition also supports the introduction of sensory feedback to allow recovery from unexpected events and to adapt to changing surface geometries.
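A minimal sketch of the functional composition described: a clock-space map assigns each leg a phase, and a separate phase-to-joint map shapes the attachment/detachment trajectory. The offsets, swing fraction, and joint angles below are assumed values for illustration:

```python
import numpy as np

def tripod_phases(t, period=1.0):
    """Phase-assigning map: a point on the six-torus for a tripod gait.
    Legs {0,2,4} and {1,3,5} are half a cycle apart (assumed convention)."""
    offsets = np.array([0.0, 0.5, 0.0, 0.5, 0.0, 0.5])
    return (t / period + offsets) % 1.0

def joint_target(phase, swing_frac=0.4):
    """Phase-to-joint map: a fast swing (recirculation) followed by a
    slow stance stroke. Returns one joint angle in radians (assumed)."""
    if phase < swing_frac:                       # swing: leg recirculates
        return -0.3 + 0.6 * (phase / swing_frac)
    s = (phase - swing_frac) / (1.0 - swing_frac)
    return 0.3 - 0.6 * s                         # stance: stroke under load
```

Switching from tripod to pentapod then only changes the phase-assigning map, while surface-specific attachment details live entirely in the phase-to-joint map, which is the decoupling the abstract argues for.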
Climbing animals' feet use combinations of interlocking and bonding mechanisms in a staggering array of designs. The most successful climbers' feet exhibit a complex hierarchy of varied mechanical structures at multiple scales, combining small appendages that generate shear or adhesive forces with compliant suspension systems that promote intimate contact with surfaces. Recent progress is presented in mechanical and materials design that integrates novel dry adhesive and microspine structures mounted on passively compliant suspensions into successively improved generations of feet targeted at the RiSE (Robots in Scansorial Environments) family of climbing robots. The current version can ascend 90° carpeted, cork-covered and a growing range of stucco surfaces in the quasi-static regime. Specifications of a "public interface" for integrating a broad range of synthetic appendages into the foot assemblies are presented in the hope of encouraging as large a community as possible of MEMS and nanomaterials designers to submit adhesive or friction-enhancing materials for operational tests using the robot.
Robots can serve as hardware models for testing biological hypotheses. Both for this reason and to improve the state of the art of robotics, we strive to incorporate biological principles of insect locomotion into robotic designs. Previous research has resulted in a line of robots with leg designs based on walking and climbing movements of the cockroach Blaberus discoidalis. The current version, Robot V, uses muscle-like Braided Pneumatic Actuators (BPAs). In this paper, we use recorded electromyograms (EMGs) to drive robot joint motion. A muscle activation model was developed that transforms EMGs recorded from behaving cockroaches into appropriate commands for the robot. The transform is implemented by multiplying the EMG by an input gain thus generating an input pressure signal, which is used to drive a one-way closed loop pressure controller. The actuator then can be modeled as a capacitance with input rectification. The actuator exhaust valve is given a leak rate, making the transform a leaky integrator for air pressure, which drives the output force of the actuator. We find parameters of this transform by minimizing the difference between the robot motion produced and that observed in the cockroach. Although we have not reproduced full-amplitude cockroach motion using this robot, results from evaluation on reduced-amplitude cockroach angle data strongly suggest that braided pneumatic actuators can be used as part of a physical model of a biological system.
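The described transform (input gain, rectification, leaky pressure integration) can be sketched directly; `gain` and `leak` are the parameters the authors fit by matching robot motion to recorded cockroach motion, while the time step and initial pressure are assumed:

```python
import numpy as np

def emg_to_pressure(emg, gain, leak, dt=0.001, p0=0.0):
    """Transform a recorded EMG trace into a commanded actuator
    pressure: rectify, scale by the input gain, and leaky-integrate
    (the actuator behaves as a capacitance with an exhaust leak)."""
    p = p0
    pressures = np.empty(len(emg), dtype=float)
    for i, sample in enumerate(emg):
        drive = gain * abs(sample)       # input rectification and gain
        p += dt * (drive - leak * p)     # leaky integration of pressure
        pressures[i] = max(p, 0.0)       # pressure cannot go negative
    return pressures
```

Fitting `gain` and `leak` by minimizing the difference between produced and observed joint trajectories mirrors the parameter search the abstract describes.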
In this paper, we generate gaits for mixed systems, that is, dynamic systems that are subject to a set of nonholonomic
constraints. What is unique about mixed systems is that when we express their dynamics in body
coordinates, the motion of these systems can be attributed to two decoupled terms: the geometric and dynamic
phase shifts. In our prior work, we analyzed systems whose dynamic phase shift was null by definition. Purely
mechanical and principally kinematic systems are two classes of mechanical systems that have this property. We
generated gaits for these two classes of systems by intuitively evaluating their geometric phase shift and relating
it to a volume integral under well-defined height functions.
One of the contributions of this paper is to present a similar intuitive approach for computing the dynamic
phase shift. We achieve this, by introducing a new scaled momentum variable that not only simplifies the
momentum evolution equation but also allows us to introduce a new set of well-defined gamma functions which
enable us to intuitively evaluate the dynamic phase shift. More specifically, by analyzing these novel gamma
functions in a similar way to how we analyzed height functions, and by analyzing the sign-definiteness of the
scaled momentum variable, we are able to ensure that the dynamic phase shift is non-zero solely along the desired
directions of motion.
Finally, we also introduce a novel mechanical system, the variable-inertia snakeboard, which is a generalization
of the original snakeboard that was previously studied in the literature. Not only does this general system help
us identify regions of the base space where we cannot define certain types of gaits, but it also helps us verify
the generality and applicability of our gait generation approach.
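As a rough illustration of the decomposition this abstract builds on (notation assumed from the standard geometric-mechanics form, not necessarily the paper's own symbols), the body-frame velocity of a mixed system splits into a kinematic term driven by the shape velocity and a momentum term:

\[
\xi \;=\; -\mathbf{A}(r)\,\dot{r} \;+\; \boldsymbol{\Gamma}(r)\,\rho ,
\]

so that the displacement over one gait cycle \(\phi\) separates into the two phase shifts named above: a geometric phase, evaluated as a volume under height functions via Stokes' theorem, and a dynamic phase driven by the scaled momentum \(\rho\):

\[
\Delta g \;\approx\; \underbrace{\oint_{\phi} -\mathbf{A}(r)\,dr}_{\text{geometric phase shift}} \;+\; \underbrace{\int_{0}^{T} \boldsymbol{\Gamma}(r(t))\,\rho(t)\,dt}_{\text{dynamic phase shift}} .
\]

The gamma functions introduced in the paper play the role for the second term that height functions play for the first: sign-definite regions indicate where a gait accrues net dynamic phase.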
The results of two usability experiments evaluating an interface for the operation of OctArm, a biologically inspired
robotic arm modeled after an octopus tentacle, are reported. Due to the many degrees-of-freedom (DOF) for the operator
to control, such 'continuum' robotic limbs provide unique challenges for human operators because they do not map
intuitively. Two modes have been developed to control the arm and reduce the DOF under the explicit direction of the
operator. In coupled velocity (CV) mode, a joystick controls changes in arm curvature. In end-effector (EE) mode, a
joystick controls the arm by moving the position of an endpoint along a straight line. In Experiment 1, participants used
the two modes to grasp objects placed at different locations in a virtual reality modeling language (VRML) environment. Objective
measures of performance and subjective preferences were recorded. Results revealed lower grasp times and a subjective
preference for the CV mode. Recommendations for improving the interface included providing additional feedback and
implementation of an error recovery function. In Experiment 2, only the CV mode was tested with improved training of
participants and several changes to the interface. The error recovery function was implemented, allowing participants to
reverse through previously attained positions. The mean time to complete the trials in the second usability test was
reduced by more than 4 minutes compared with the first usability test, confirming that the interface changes improved
performance. The results of these tests will be incorporated into future versions of the arm and will improve future usability.
This paper introduces a new generation of wall-climbing robots named City-Climber, which have the capability to climb walls, walk on ceilings, and transition between different surfaces. Unlike traditional wall-climbing robots, the City-Climber robots use aerodynamic attraction, which achieves a good balance between strong adhesion force and high mobility. Since the City-Climber robots do not require perfect sealing as the vacuum suction technique does, the robots can move on virtually any kind of smooth or rough surface. The other novelties of the City-Climber robots are the modular design and the high-performance on-board processing unit. The former achieves both fast motion of each module on planar surfaces and smooth transitions between surfaces by a set of two modules. The latter makes real-time signal processing and autonomous operation possible. We envision that the City-Climber robots will be used in urban environments for search and rescue, weapon/tool delivery, and inspection and surveillance purposes. To increase hardware and software reconfigurability, the self-contained City-Climber robots use system-on-programmable-chip (SoPC) technology for on-board perception and motion control. The video displays several versions of the City-Climber prototypes, illustrating the main areas of functionality and the results of several key experimental tests, including a 4.2 kg payload, operation on rough surfaces, locomotion over surface gaps, and inverted operation on ceilings, to name a few.
Evolution has produced organisms whose locomotive agility and adaptivity mock the difficulty faced by robotic scientists. The problem of locomotion, which nature has solved so well, is surprisingly complex and difficult. We explore the ability of an artificial eight-legged arachnid, or animat, to autonomously learn a locomotive gait in a three-dimensional environment. We take a physics-based approach at modeling the world and the virtual body of the animat. The arachnid-like animat learns muscular control functions using simulated annealing techniques, which attempts to maximize forward velocity and minimize energy expenditure. We experiment with varying the weight of these parameters and the resulting locomotive gaits. We perform two experiments in which the first is a naive physics model of the body and world which uses point-masses and idealized joints and muscles. The second experiment is a more realistic simulation using rigid body elements with distributed mass, friction, motors, and mechanical joints. By emphasizing physical aspects we wish to minimize, a number of interesting gaits emerge.
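A generic simulated-annealing loop of the kind used here; the cost function would score a candidate set of muscular control parameters in the physics simulation, weighting forward velocity against energy expenditure, and all constants below are assumed:

```python
import numpy as np

def anneal(cost, x0, sigma=0.1, t0=1.0, cooling=0.995, iters=5000):
    """Minimize cost(x) over control parameters x by simulated annealing."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    fx, t = cost(x), t0
    for _ in range(iters):
        cand = x + rng.normal(0.0, sigma, size=x.shape)  # local perturbation
        fc = cost(cand)
        # Accept improvements always; accept worse moves with
        # Boltzmann probability so early search can escape local optima.
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc
        t *= cooling                                     # cooling schedule
    return x, fx
```

A cost of the form `w_v * (-forward_velocity) + w_e * energy_used`, with the weights varied as in the experiments, would reproduce the trade-off between speed and efficiency that shapes the emergent gaits.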
This paper describes the development of the octopus-inspired OctArm series of soft robot manipulators. Each OctArm is constructed using air muscle extensors with three control channels per section that provide two-axis bending and extension. Within each section, mesh and plastic coupler constraints prevent extensor buckling. OctArm IV consists of four sections connected by endplates, providing twelve degrees of freedom. Performance of OctArm IV is characterized in a lab environment. Using only 4.13 bar of air pressure, the dexterous distal section provides 66% extension and 380° of rotation in less than 0.5 seconds. OctArm V has three sections and, using 8.27 bar of air pressure, the strong proximal section provides 890 N and 250 N of vertical and transverse load capacity, respectively. In addition to the in-lab testing, OctArm V underwent a series of field trials including open-air and in-water field tests. Outcomes of the trials, in which the manipulator demonstrated the ability for adaptive and novel manipulation in challenging environments, are described. OctArm VI was designed and constructed based on the in-lab performance and field testing of its predecessors. Implications for the deployment of soft robots in military environments are discussed.
In this paper we present ideas toward solving constrained optimization problems in a spatially-distributed mobile
sensor/actuator network using decentralized computation. Notionally, each node of the network is considered
to be a distributed computational unit that evolves its state according to a pre-defined rule. First, we show
how to design coordination rules to ensure that the global state of the network evolves to the solution of a
prescribed constrained optimization problem. Our strategy uses a recurrent neural network structure to solve
the optimization problem in a global way. Next, we introduce ideas for the case where an all-to-all
communication topology is absent. Assuming each node can only communicate locally with its 'nearest neighbors,'
our approach is to use the notion of a consensus variable protocol that implements a distributed observer,
enabling local nodes to asymptotically obtain global information using only nearest neighbor communications.
Finally, we suggest superimposing the neural network structure on top of this distributed observer to solve the
global optimization problem using only local and nearest neighbor communications.
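A minimal sketch of the nearest-neighbor consensus update underlying such a distributed observer; the step size is an assumed constant, and convergence of all node values to the network average requires a connected graph and a sufficiently small step:

```python
import numpy as np

def consensus_step(x, adjacency, eps=0.1):
    """One synchronous consensus update: each node nudges its variable
    toward the values of the neighbors it can hear.
    x: per-node values; adjacency: NxN 0/1 matrix of comm links."""
    x = np.asarray(x, dtype=float)
    new_x = x.copy()
    for i in range(len(x)):
        neighbors = np.nonzero(adjacency[i])[0]
        new_x[i] = x[i] + eps * np.sum(x[neighbors] - x[i])
    return new_x
```

Iterating this update lets every node asymptotically track a global quantity from purely local exchanges, which is the substrate on which the neural-network optimization layer is then superimposed.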
Much work has been undertaken recently toward the development of low-power, high-performance sensor networks. There are many static remote sensing applications for which this is appropriate. The focus of this development effort is applications that require higher performance computation, but still involve severe constraints on power and other resources. Toward that end, we are developing a reconfigurable computing platform for miniature robotic and human-deployed sensor systems composed of several mobile nodes. The system provides static and dynamic reconfigurability for both software and hardware by the combination of CPU (central processing unit) and FPGA (field-programmable gate array) allowing on-the-fly reprogrammability. Static reconfigurability of the hardware manifests itself in the form of a "morphing bus" architecture that permits the modular connection of various sensors with no bus interface logic. Dynamic hardware reconfigurability provides for the reallocation of hardware resources at run-time as the mobile, resource-constrained nodes encounter unknown environmental conditions that render various sensors ineffective. This computing platform will be described in the context of work on chemical/biological/radiological plume tracking using a distributed team of mobile sensors. The objective for a dispersed team of ground and/or aerial autonomous vehicles (or hand-carried sensors) is to acquire measurements of the concentration of the chemical agent from optimal locations and estimate its source and spread. This requires appropriate distribution, coordination and communication within the team members across a potentially unknown environment. The key problem is to determine the parameters of the distribution of the harmful agent so as to use these values for determining its source and predicting its spread. The accuracy and convergence rate of this estimation process depend not only on the number and accuracy of the sensor measurements but also on their spatial distribution over time (the sampling strategy). For the safety of a human-deployed distribution of sensors, optimized trajectories to minimize human exposure are also of importance.
The systems described in this paper are currently under development; parts of the system already exist, and results from those parts are described.
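To make the estimation step concrete, the sketch below fits the parameters of an assumed isotropic Gaussian concentration field (source position, strength, and spread) to noisy sensor readings by nonlinear least squares. The field model, parameter names, and noise level are illustrative assumptions, not the authors' dispersion model:

```python
# Hypothetical plume-parameter fit: recover source location and spread
# from concentration readings at scattered sensor positions.
import numpy as np
from scipy.optimize import least_squares

def model(theta, pts):
    x0, y0, amp, sigma = theta
    d2 = (pts[:, 0] - x0) ** 2 + (pts[:, 1] - y0) ** 2
    return amp * np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(40, 2))        # sensor sample locations
true = np.array([6.0, 3.0, 5.0, 1.5])         # source x, y, strength, spread
readings = model(true, pts) + 0.02 * rng.standard_normal(40)

fit = least_squares(lambda th: model(th, pts) - readings,
                    x0=[5.0, 5.0, 1.0, 1.0])
print(fit.x)  # approaches [6.0, 3.0, 5.0, 1.5]
```

The role of the sampling strategy is visible directly in this formulation: the conditioning of the fit, and hence the convergence rate noted above, depends on where the readings are taken.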
Robot and sensor networks are needed for safety, security, and rescue applications such as port security and
reconnaissance during a disaster. These applications rely on real-time transmission of images, which generally saturate the
available wireless network infrastructure. Knowledge-based compression is a method for reducing the video frame
transmission rate between robots or sensors and remote operators. Because images may need to be archived as evidence
and/or distributed to multiple applications with different post-processing needs, lossy compression schemes, such as MPEG,
H.26x, etc., are not acceptable. This work proposes a lossless video server system consisting of three classes of filters
(redundancy, task, and priority) which use different levels of knowledge (local sensed environment, human factors associated
with a local task, and relative global priority of a task) at the application layer of the network. It demonstrates the
redundancy and task filters for a realistic robot search scenario. The redundancy filter is shown to reduce the overall
transmission bandwidth by 24.07% to 33.42%, and, when combined with the task filter, reduces overall transmission
bandwidth by 59.08% to 67.83%. By itself, the task filter can reduce transmission bandwidth by 32.95% to
33.78%. While knowledge-based compression generally does not reach the same levels of reduction as MPEG, there are
instances where the system outperforms MPEG encoding.
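The redundancy filter admits a compact sketch. The version below is a hypothetical illustration of the idea rather than the paper's implementation: a frame is transmitted (losslessly, in full) only when it differs sufficiently from the last frame actually sent, and the mean-absolute-difference test and threshold are assumptions:

```python
# Illustrative redundancy filter: suppress frames that add no new
# information relative to the last transmitted frame.
import numpy as np

class RedundancyFilter:
    def __init__(self, threshold=2.0):
        self.threshold = threshold   # mean absolute pixel difference
        self.last_sent = None

    def should_send(self, frame):
        if self.last_sent is None:
            self.last_sent = frame
            return True
        diff = np.mean(np.abs(frame.astype(int) - self.last_sent.astype(int)))
        if diff > self.threshold:
            self.last_sent = frame
            return True
        return False                 # frame is redundant; do not transmit

f = RedundancyFilter()
static = np.zeros((120, 160), dtype=np.uint8)
moved = static.copy()
moved[40:80, 40:80] = 255
print(f.should_send(static), f.should_send(static), f.should_send(moved))
# -> True False True
```

Because suppressed frames are simply not sent, every transmitted frame remains bit-exact, which is what distinguishes this approach from lossy codecs such as MPEG.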
We present a method for estimating the global uncertainty of epipolar geometry with applications to autonomous
vehicle navigation. Such uncertainty information is necessary for making informed decisions regarding the
confidence of a motion estimate, since we must otherwise accept the estimate without any knowledge of the
probability that the estimate is in error. For example, we may wish to fuse visual estimates with information
from GPS and inertial sensors, but without uncertainty information, we have no principled way to do so. Ideally,
we would perform a full search over the 7-dimensional space of fundamental matrices to yield an estimate and its
related uncertainty. However, searching this space is computationally infeasible. As a compromise between fully
representing posterior likelihood over this space and producing a single estimate, we represent the uncertainty
over the space of translation directions in a calibrated framework. In contrast to finding a single estimate,
representing the posterior likelihood is always a well-posed problem, albeit an often computationally challenging
one. Given the posterior likelihood, we derive a confidence interval around the motion estimate. We verify the
correctness of the confidence interval using synthetic data and show examples of uncertainty estimates using
vehicle-mounted camera sequences.
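A minimal sketch of representing a posterior over translation directions is given below. It assumes a calibrated, rotation-compensated (pure-translation) setting and a Gaussian model of the epipolar residual x2 · (t × x1); the spherical grid resolution and noise level sigma are illustrative assumptions:

```python
# Hypothetical grid posterior over unit translation directions, scoring
# each candidate t by the epipolar residuals of calibrated correspondences.
import numpy as np

def direction_posterior(x1, x2, n_grid=60, sigma=0.01):
    """x1, x2: (N, 3) calibrated homogeneous rays in the two views."""
    th = np.linspace(0, np.pi, n_grid)
    ph = np.linspace(0, 2 * np.pi, 2 * n_grid)
    T, P = np.meshgrid(th, ph, indexing="ij")
    dirs = np.stack([np.sin(T) * np.cos(P),
                     np.sin(T) * np.sin(P),
                     np.cos(T)], axis=-1).reshape(-1, 3)
    # Epipolar residual x2 . (t x x1) for every candidate and correspondence.
    cross = np.cross(dirs[:, None, :], x1[None, :, :])    # (D, N, 3)
    resid = np.einsum("dnk,nk->dn", cross, x2)            # (D, N)
    loglik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
    post = np.exp(loglik - loglik.max())
    return dirs, post / post.sum()
```

Given such a grid posterior, a confidence region follows by accumulating the most probable directions until the desired mass (e.g., 95%) is reached, which is the kind of confidence interval the abstract describes deriving around the motion estimate.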
Unmanned Ground Vehicles (UGVs) have advantages over people in a number of applications, including sentry duty, scouting hazardous areas, convoying goods and supplies over long distances, and exploring caves and tunnels. Despite recent advances in electronics, vision, artificial intelligence, and control technologies, fully autonomous UGVs are still far from being a reality. Currently, most UGVs are fielded using tele-operation, with a human in the control loop: a user controls the UGV from the relative safety and comfort of a control station and sends commands to it remotely. Under tele-operation it is difficult for the user to issue higher-level commands such as "patrol this corridor" or "move to this position while avoiding obstacles." As computer vision algorithms are implemented in hardware, however, a UGV can become partially autonomous. As Field Programmable Gate Arrays (FPGAs) become larger and more powerful, vision algorithms can run at frame rate, and with the rapid development of CMOS imagers for consumer electronics, frame rates can reach 200 frames per second for a small region of interest. This increase in vision-processing speed allows UGVs to become more autonomous: they can recognize and avoid obstacles in their path, track targets, or move to a recognized area. The user can then focus on giving broad supervisory commands and goals, controlling multiple UGVs at once while retaining the convenience of working from a central base station. In this paper we describe a novel control system for semi-autonomous UGVs, sketched in outline below. It combines a user interface similar to a simple tele-operation station with a control package, including the FPGA and multiple cameras, that interfaces with the UGV and provides the control needed to guide it.
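The division of labor just described can be sketched as follows; the message fields and function names are purely illustrative assumptions, not the paper's interface. The operator sends broad goals, while an onboard loop run once per camera frame handles avoidance locally:

```python
# Hypothetical supervisory-command pattern for a semi-autonomous UGV.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Command:
    kind: str                                    # "PATROL", "GOTO", or "STOP"
    waypoints: List[Tuple[float, float]] = field(default_factory=list)

def onboard_step(cmd: Command,
                 obstacle_ahead: Callable[[], bool],
                 steer_to: Callable[[Tuple[float, float]], None]) -> str:
    """One control-loop iteration, run per camera frame on the UGV."""
    if cmd.kind == "STOP" or not cmd.waypoints:
        return "idle"
    if obstacle_ahead():            # frame-rate vision (FPGA-accelerated)
        return "avoiding"           # local avoidance overrides the goal
    steer_to(cmd.waypoints[0])      # otherwise track the commanded waypoint
    return "tracking"
```

The point of the pattern is that the base station never needs to close the low-level loop over the radio link; it only updates the Command when the mission changes.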
Space and Naval Warfare Systems Center, San Diego (SSC San Diego) has developed an unmanned vehicle and sensor operator control interface capable of simultaneously controlling and monitoring multiple sets of heterogeneous systems. The modularity, scalability, and flexible user interface of the Multi-robot Operator Control Unit (MOCU) accommodate a wide range of vehicles and sensors in varying mission scenarios. MOCU currently controls all of the SSC San Diego developmental vehicles (land, air, sea, and undersea), including the SPARTAN Advanced Concept Technology Demonstration (ACTD) Unmanned Surface Vehicle (USV), the iRobot PackBot, and the Family of Integrated Rapid Response Equipment (FIRRE) vehicles and sensors. This paper discusses how software and hardware modularity has allowed SSC San Diego to create a single operator control unit (OCU) with the capability to control a wide variety of unmanned systems.
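The kind of modularity described can be illustrated with a driver-registry pattern; the interface, class, and vehicle names here are assumptions for illustration only, not MOCU's actual API:

```python
# Hypothetical plug-in abstraction: the OCU core stays vehicle-agnostic and
# looks vehicle-specific modules up by name.
from abc import ABC, abstractmethod

class VehicleModule(ABC):
    """Common interface each vehicle-specific plug-in must implement."""
    @abstractmethod
    def send_command(self, cmd: str) -> None: ...
    @abstractmethod
    def poll_status(self) -> dict: ...

REGISTRY: dict = {}

def register(name: str):
    def deco(cls):
        REGISTRY[name] = cls
        return cls
    return deco

@register("packbot")
class PackBotModule(VehicleModule):
    def send_command(self, cmd: str) -> None:
        print(f"PackBot <- {cmd}")            # stand-in for the radio link
    def poll_status(self) -> dict:
        return {"battery": 0.8, "pose": (0.0, 0.0)}

robot = REGISTRY["packbot"]()
robot.send_command("drive forward")
```

Adding support for a new vehicle then amounts to writing one module against the common interface, which is the practical payoff of the modularity the paper discusses.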
The Networked Intelligence, Surveillance, and Reconnaissance (NISR) project integrates robotic resources into Composeable FORCEnet to control and exploit unmanned systems over extremely long distances. The foundations are built upon FORCEnet, the U.S. Navy's process to define C4ISR for net-centric operations, and upon the Navy Unmanned Systems Common Control Roadmap, which develops technologies and standards for interoperability, data sharing, publish-and-subscribe methodology, and software reuse.
The paper defines the goals and boundaries for NISR, with a focus on the system architecture and the design tradeoffs necessary for unmanned systems in a net-centric model. Special attention is given to two scenarios demonstrating the integration of unmanned ground and water-surface vehicles into the open-architecture, web-based command-and-control information-management system of Composeable FORCEnet. Planned spiral development for NISR will improve collaborative control, expand robotic sensor capabilities, address additional domains including underwater and aerial platforms, and extend the distributed communications infrastructure for battlespace optimization of unmanned systems in net-centric operations.
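The publish-and-subscribe methodology underpinning this architecture can be reduced to a small sketch; the broker, topic names, and payloads below are illustrative assumptions rather than FORCEnet interfaces:

```python
# Minimal topic-based publish/subscribe hub: producers (vehicles, sensors)
# publish to named topics; consumers (C2 displays, loggers) subscribe.
from collections import defaultdict
from typing import Any, Callable

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, msg: Any) -> None:
        for handler in self._subs[topic]:
            handler(msg)

bus = Broker()
bus.subscribe("usv/track", lambda m: print("C2 display:", m))
bus.publish("usv/track", {"id": 7, "lat": 32.7, "lon": -117.2})
```

Decoupling producers from consumers in this way is what allows new platforms and displays to be composed into the network without point-to-point integration.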