In this work, we provide an overview of vision-based control for perching and grasping for Micro Aerial Vehicles. We investigate perching on flat, inclined, or vertical surfaces as well as visual servoing techniques for quadrotors to enable autonomous perching by hanging from cylindrical structures using only a monocular camera and an appropriate gripper. The challenges of visual servoing are discussed, and we focus on the problems of relative pose estimation, control, and trajectory planning for maneuvering a robot with respect to an object of interest. Finally, we discuss future challenges to achieve fully autonomous perching and grasping in more realistic scenarios.
We address the key challenges of autonomous fast flight for Micro Aerial Vehicles (MAVs) in 3-D, cluttered environments. For complete autonomy, the system must estimate the vehicle's state at high rates from absolute or relative on-board sensor measurements, use these state estimates for feedback control, and plan trajectories to the destination. State estimation requires fusing information from multiple, possibly asynchronous sensors operating at different rates. In this work, we present techniques in the areas of planning, control, and visual-inertial state estimation for fast navigation of MAVs. We demonstrate how to solve the pose estimation, control, and planning problems for MAVs on board, on a small computational unit, using a minimal sensor suite for autonomous navigation composed of a single camera and an IMU. Additionally, we show that a consumer electronic device such as a smartphone can alternatively be employed for both sensing and computation. Experimental results validate the proposed techniques. Any consumer, provided with a smartphone and a suitably designed app, can autonomously fly a quadrotor platform at high speed, without GPS, while concurrently building 3-D maps.
The development of autonomous Micro Aerial Vehicles (MAVs) is significantly constrained by their size, weight and power consumption. In this paper, we explore the energetics of quadrotor platforms and study the scaling of mass, inertia, lift and drag with their characteristic length. The effects of length scale on masses and inertias associated with various components are also investigated. Additionally, a study of Lithium Polymer battery performance is presented in terms of specific power and specific energy. Finally, we describe the power and energy consumption for different quadrotors and explore the dependence on size and mass for static hover tests as well as representative maneuvers.
The past decade has seen increased interest in research involving autonomous Micro Aerial Vehicles (MAVs). The predominant reason is their agility and ability to perform tasks too difficult or dangerous for their human counterparts, and to navigate into places where ground robots cannot reach. Among MAVs, rotary-wing aircraft such as quadrotors have the ability to operate in confined spaces, hover at a given point in space, and perch or land on a flat surface. This makes the quadrotor a very attractive aerial platform, giving rise to a myriad of research opportunities. However, the potential of these aerial platforms is severely limited by constraints on flight time due to limited battery capacity, which in turn arises from limits on the payload of these rotorcraft. By automating the battery recharging process, so that autonomous MAVs can recharge their on-board batteries without any human intervention, and by employing a team of such agents, the overall mission time can be greatly increased. This paper describes the development, testing, and implementation of a system of autonomous charging stations for a team of Micro Aerial Vehicles. This system was used to perform fully autonomous long-term multi-agent aerial surveillance experiments with persistent station keeping. The scalability of the algorithm used in these experiments was also tested by simulating a persistent surveillance scenario with 10 MAVs and charging stations. Finally, the system was successfully used to perform a 9½-hour multi-agent persistent flight test. Preliminary use of this charging system in experiments involving the construction of cubic structures with quadrotors showed a three-fold increase in effective mission time.
This paper describes the results of a Joint Experiment performed on behalf of the MAST CTA. The system developed for the Joint Experiment makes use of three robots which work together to explore and map an unknown environment. Each of the robots used in this experiment is equipped with a laser scanner for measuring walls and a camera for locating doorways. Information from both of these types of structures is concurrently incorporated into each robot's local map using a graph-based SLAM technique.
A Distributed-Data-Fusion algorithm is used to efficiently combine local maps from each robot into a shared global map. Each robot computes a compressed local feature map and transmits it to neighboring robots, which allows each robot to merge its map with the maps of its neighbors. Each robot caches the compressed maps from its neighbors, allowing it to maintain a coherent map with a common frame of reference.
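The map-merging step above can be sketched with standard distributed-data-fusion algebra: in information (inverse-covariance) form, merging a neighbor's map amounts to adding its information contribution and subtracting the cached information already held in common, which avoids double counting. The 1-D landmark example below is an illustrative assumption, not the system's actual implementation.

```python
# Sketch of an information-form fusion step, as used in channel-filter-style
# distributed data fusion. Y is an information value (inverse variance) and
# y an information vector (Y * mean); the scalar case illustrates the idea.

def fuse(Y_local, y_local, Y_neighbor, y_neighbor, Y_common, y_common):
    """Merge two information-form estimates, removing shared information.

    Subtracting the cached common information (Y_common, y_common) prevents
    the same measurements from being counted twice when maps are exchanged
    repeatedly between neighbors.
    """
    Y = Y_local + Y_neighbor - Y_common
    y = y_local + y_neighbor - y_common
    return Y, y

# Two robots agree that a landmark sits at position 2.0 but hold different
# amounts of information about it; 1 unit of information is shared.
Y, y = fuse(2.0, 4.0, 3.0, 6.0, 1.0, 2.0)
mean = y / Y  # fused landmark estimate
```

The fused estimate keeps the agreed position (mean 2.0) while its information content reflects only the independent evidence from each robot.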
The robots utilize an exploration strategy to efficiently cover the unknown environment while allowing collaboration over an unreliable communication channel. As each new branching point is discovered by a robot, it broadcasts to the other robots the location of this point along with the robot's path from a known landmark. When the next robot reaches a dead-end, new branching points are allocated by auction. In the event of a communication interruption, the robot that observed a branching point will eventually explore it itself; therefore, the exploration remains complete in the face of communication failures.
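The auction-based allocation of branching points described above can be sketched as follows. The `Robot` class and the Manhattan-distance bid are illustrative assumptions for the sketch, not the system's actual implementation; the key property shown is that an unreachable robot simply does not bid, so allocation degrades gracefully under communication failure.

```python
# Minimal sketch of auction-based allocation of branching points among
# exploring robots. Each robot bids its estimated travel cost; the lowest
# bidder wins. Robots out of communication do not bid.

class Robot:
    def __init__(self, name, position, reachable=True):
        self.name = name
        self.position = position      # (x, y) grid cell
        self.reachable = reachable    # False models a communication dropout

    def bid(self, point):
        # Bid = Manhattan travel cost from the robot's current cell
        # (an illustrative stand-in for a path cost from the map).
        return abs(self.position[0] - point[0]) + abs(self.position[1] - point[1])

def auction(point, robots):
    """Allocate a branching point to the reachable robot with the lowest bid."""
    bidders = [r for r in robots if r.reachable]
    if not bidders:
        return None  # no reachable bidder: the observing robot explores it later
    return min(bidders, key=lambda r: r.bid(point))

robots = [Robot("a", (0, 0)), Robot("b", (5, 5)), Robot("c", (1, 1), reachable=False)]
winner = auction((2, 2), robots)  # robot "c" is closest but cannot bid
```

Because the observer of a branching point always retains it as a fallback, a failed auction never leaves a point unexplored.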
There are many examples in nature where large groups of individuals are able to maintain three-dimensional formations while navigating in complex environments. This paper addresses the development of a framework and robot controllers that enable a group of aerial robots to maintain a formation with partial state information while avoiding collisions. The central concept is to develop a low-dimensional abstraction of the large team of robots, to facilitate planning, command, and control in this low-dimensional space, and to realize commands or plans in the abstract space by synthesizing controllers for individual robots that respect the specified abstraction.
The fundamental problem that is addressed in this paper relates to coordinated control of multiple UAVs in close proximity. We develop a representation for a team of robots based on the first and second statistical moments of the system and design kinematic, exponentially stabilizing controllers for point robots. The selection of representation permits a controller design that is invariant to the number of robots in the system, requires limited global state information, and reduces the complexity of the planning problem by generating an abstract planning and control space determined by the moment parameterization. We present experimental results with a team of quadrotors and discuss considerations such as aerodynamic interactions between robots.
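The first- and second-moment abstraction above can be sketched in a few lines: the team state is summarized by its centroid and covariance, so the abstract state has a fixed dimension regardless of how many robots are in the team. The planar three-robot example is purely illustrative.

```python
import numpy as np

# Sketch of the moment-based team abstraction: a team of N planar robots is
# represented by its first moment (centroid) and second moment (covariance
# of positions about the centroid), independent of N.

def abstract_state(positions):
    """positions: (N, 2) array of planar robot positions."""
    mu = positions.mean(axis=0)                       # first moment
    centered = positions - mu
    sigma = centered.T @ centered / len(positions)    # second moment
    return mu, sigma

pos = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
mu, sigma = abstract_state(pos)   # 2-vector and 2x2 matrix, for any team size
```

A planner can then command trajectories of (mu, sigma) directly; individual robot controllers are synthesized to realize the commanded moments, which is what makes the design invariant to the number of robots.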
The vision for the Micro Autonomous Systems Technologies (MAST) program is to develop autonomous, multifunctional, collaborative ensembles of agile, mobile microsystems to enhance tactical situational awareness in urban and complex terrain for small unit operations. Central to this vision is the ability to have multiple, heterogeneous autonomous assets function as a single cohesive unit that is adaptable, responsive to human commands, and resilient to adversarial conditions. This paper represents an effort to develop a simulation environment for studying control, sensing, communication, perception, and planning methodologies and algorithms.
Air and ground vehicles exhibit complementary capabilities and characteristics as robotic sensor platforms. Fixed-wing aircraft offer a broad field of view and rapid coverage of search areas. However, minimum operating airspeed and altitude limits, combined with attitude uncertainty, place a lower limit on their ability to detect and localize ground features. Ground vehicles, on the other hand, offer high-resolution sensing over relatively short ranges, with the disadvantage of slow coverage. This paper presents a decentralized architecture and solution methodology for seamlessly realizing the collaborative potential of air and ground robotic sensor platforms. We provide a framework based on an established approach to the underlying sensor fusion problem, which provides transparent integration of information from heterogeneous sources. An information-theoretic utility measure captures the task objective and robot inter-dependencies. A simple distributed solution mechanism is employed to determine team member sensing trajectories subject to the constraints of individual vehicle and sensor sub-systems. The architecture is applied to a mission involving searching for and localizing an unknown number of targets in a user-specified search area. Results are presented for a team of two fixed-wing UAVs and two all-terrain UGVs equipped with vision sensors.
We address the development of a local bus architecture for robot systems that facilitates modular development and increases the reliability of systems composed of heterogeneous sensors and actuators. The communication bus is based on the Controller Area Network (CAN) and supports distributed processing in physically separate nodes. Modular cabling and a modular software interface facilitate assembly and modification, and all bus communication is browseable for configuration and troubleshooting. We demonstrate two implementations of this system and discuss its performance and capabilities compared to alternative communication architectures, with specific emphasis on mobile robots.
We describe a framework for multi-vehicle control which explicitly incorporates the state of the communication network and the constraints imposed by specifications on the quality of the communication links available to each robot. In a multi-robot ad hoc setting, guaranteed communication is essential for cooperative behavior. We propose a control methodology that ensures local connectivity in multi-robot navigation. Specifically, given an initial and final configuration of robots in which the quality of each communication link is above some specified threshold, we synthesize controllers that guarantee each robot reaches its goal destination while maintaining the quality of the communication links above the given threshold. For the sake of simplicity, we assume each robot has a pre-assigned "base unit" with which it tries to maintain connectivity while performing the assigned task. The proposed control methodology allows the robot's velocity to align with the tangent of a critical communication surface so that the robot can move along the surface. No assumptions are made regarding the critical surface, which may be arbitrarily complex for cluttered urban environments. The stability of this technique is shown, and three-dimensional simulations with a small team of robots are presented. The paper demonstrates the performance of the control scheme in various three-dimensional settings, with proofs of guarantees in simple scenarios.
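The tangent-alignment idea above can be sketched as a velocity projection: on the critical communication surface (where link quality hits the threshold), the component of the desired velocity that would push the robot through the surface toward lower quality is removed, leaving only tangential motion. The gradient-based link-quality model implied here is an assumption for illustration.

```python
import numpy as np

# Sketch of connectivity-preserving velocity projection: near the critical
# surface, motion that would degrade link quality below the threshold is
# suppressed by projecting the desired velocity onto the surface's tangent
# plane. grad_quality points toward increasing link quality.

def safe_velocity(v_desired, grad_quality, at_critical_surface):
    """Project v_desired onto the tangent plane of the quality level set."""
    if not at_critical_surface:
        return v_desired                              # interior: move freely
    n = grad_quality / np.linalg.norm(grad_quality)   # surface normal
    if np.dot(v_desired, n) < 0.0:                    # heading into lower quality
        return v_desired - np.dot(v_desired, n) * n   # keep tangential part only
    return v_desired                                  # heading back inside: allow
```

Projecting rather than stopping is what allows the robot to slide along an arbitrarily complex critical surface toward its goal instead of getting stuck at the first point of contact.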
In this paper, we describe a framework for coordinating multiple robots in the execution of a cooperative manipulation task. The coordination is distributed among the robots, which use modular, hybrid controllers to execute the task. Different plans and models of the environment are built using a mix of global and local information and can be shared by the team members using wireless communication. Our framework uses models and metrics of computation, control, sensing, and information to dynamically assign roles to the robots and to define the most suitable control hierarchy given the requirements of the task and the characteristics of the robots. To test our framework, we have developed an object-oriented simulator that allows the user to create different types of robots, define different environments, and specify various control algorithms and communication protocols. Using the simulator, we have been able to test the execution of the task in different scenarios with different numbers of robots. We can verify the efficacy of the robot controllers, the effects of different parameters on system performance, and the ability of the robots to adapt to changes in the environment. Results from experiments with two real robots and with the simulator are used to explore multi-robot coordination in a cooperative manipulation task.
Transparency is a method proposed to quantify the telepresence performance of bilateral teleoperation systems. It is practically impossible to achieve transparency at all frequencies; however, previous research has shown that by proper manipulation of the individual transfer functions, systems that are transparent over limited frequency bands can be designed. In this paper we introduce a different approach. We first study the problem of designing systems that are transparent only for a given value of the output impedance; then, by combining this concept with that of time-adaptive impedance estimation, we postulate a new strategy for the design of transparent systems. In the proposed method, the output impedance estimate is updated at each time instant using adaptive ARMA modeling based on either the LMS or RLS algorithm. The current estimate of the output impedance is used to update some free design parameters of the system in such a way that the system tries to achieve transparency. We refer to this strategy as asymptotic transparency. An example of how to use this strategy in the design of a system with position-forward and force-feedback paths is included.
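The LMS branch of the time-adaptive impedance estimate above can be sketched as a single weight update per sample: the ARMA coefficients are nudged in the direction that reduces the instantaneous prediction error of the measured force. The model order, regressor layout, and step size here are illustrative assumptions.

```python
# Sketch of one LMS update for an adaptive ARMA impedance model: w holds the
# ARMA coefficients, x the regressor (recent velocity/force samples), and d
# the currently measured force the model should predict.

def lms_step(w, x, d, mu=0.01):
    """Return updated weights and the prediction error for one sample."""
    y = sum(wi * xi for wi, xi in zip(w, x))      # predicted force
    e = d - y                                     # instantaneous error
    w_new = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, x)]
    return w_new, e
```

At each time instant the refreshed coefficients give the current impedance estimate, which in turn sets the free design parameters that steer the system toward transparency. RLS would replace this gradient step with a recursive least-squares update for faster convergence at higher cost.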
People with disabilities such as quadriplegia can use mouth-sticks and head-sticks as extension devices to perform desired manipulations. These extensions provide extended proprioception, which allows users to directly feel forces and other perceptual cues, such as texture, present at the tip of the mouth-stick. Such devices are effective for two principal reasons: their close contact with the user's tactile and proprioceptive sensing abilities, and their tendency to be lightweight and very stiff, which allows them to convey tactile and kinesthetic information with high bandwidth. Unfortunately, traditional mouth-sticks and head-sticks are limited in workspace and in the mechanical power that can be transferred, because of user mobility and strength limitations. We describe an alternative implementation of the head-stick device using the idea of a virtual head-stick: a head-controlled bilateral force-reflecting telerobot. In this system, the end-effector of the slave robot moves as if it were at the tip of an imaginary extension of the user's head. The design goal is for the system to have the same intuitive operation and extended proprioception as a regular mouth-stick effector, but with an augmented workspace volume and mechanical power. The input is through a specially modified six-DOF master robot (a PerForce™ hand-controller) whose joints can be back-driven to apply forces at the user's head. The manipulation tasks in the environment are performed by a six-degree-of-freedom slave robot (the Zebra-ZERO™) with a built-in force sensor. We describe the prototype hardware/software implementation of the system, the control system design, safety and disability issues, and initial evaluation tasks.