Optical tweezers offer several advantages, such as multiplexing using a programmable spatial light modulator, flexibility in the choice of the manipulated object and the manipulation medium, precise control, easy object release, and minimal object damage. However, automated manipulation of multiple objects in parallel, which is essential for efficient and reliable formation of micro-scale assembly structures, poses a difficult challenge. There are two primary research issues in addressing this challenge. First, the stochastic Langevin force that gives rise to Brownian motion requires controlling the motion of all manipulated objects at fast rates of several Hz. Second, the object dynamics are non-linear and difficult to represent analytically owing to interactions among the multiple optical traps manipulating neighboring objects. As a result, automated controllers have not been realized for tens of objects, particularly for three-dimensional motions with guaranteed collision avoidance. In this paper, we model the effect of interacting optical traps on microspheres with significant Brownian motion in stationary fluid media, and develop simplified state-space representations. These representations are used to design a model predictive controller to coordinate the motions of several spheres in real time. Preliminary experiments demonstrate the utility of the controller in automatically forming desired arrangements of varying configurations starting from randomly dispersed microspheres.
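The receding-horizon idea behind the controller can be illustrated with a minimal one-dimensional sketch: an overdamped trapped sphere tracked to a target position despite Langevin-like noise. All numbers (dynamics coefficients, horizon, weights) are illustrative placeholders, not values from the paper, and the unconstrained least-squares solve stands in for the full constrained MPC.

```python
import numpy as np

# Hedged 1-D sketch: a trapped microsphere modeled as a discretized overdamped
# system driven by trap displacement u and Brownian-like noise w:
#   x[k+1] = a*x[k] + b*u[k] + w[k]
# Parameters a, b, H, rho are illustrative, not taken from the paper.
a, b = 0.9, 0.1
H = 10            # prediction horizon
x_ref = 1.0       # target position (normalized units)

def mpc_step(x0):
    """Unconstrained finite-horizon MPC solved as batch least squares."""
    # Prediction over the horizon: x[k] = a^k x0 + sum_{j<k} a^(k-1-j) b u[j]
    Phi = np.array([[a ** (k - 1 - j) * b if j < k else 0.0 for j in range(H)]
                    for k in range(1, H + 1)])
    free = np.array([a ** k * x0 for k in range(1, H + 1)])
    rho = 0.01  # input-effort weight: min ||Phi u + free - x_ref||^2 + rho ||u||^2
    u = np.linalg.solve(Phi.T @ Phi + rho * np.eye(H), Phi.T @ (x_ref - free))
    return u[0]  # receding horizon: apply only the first input

rng = np.random.default_rng(0)
x = 0.0
for _ in range(50):
    x = a * x + b * mpc_step(x) + 0.01 * rng.standard_normal()  # Langevin-like kick
print(abs(x - x_ref) < 0.1)
```

The key design choice shown here is the receding horizon: the whole input sequence is re-optimized at every step, so the stochastic force is rejected continuously rather than planned around once.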
Collaborative teams of human operators and mobile ground robots are becoming popular in manufacturing plants to assist humans with repetitive tasks such as packing related objects into different units, an operation known as kitting. In this paper, we present an ontology that provides a unified representation of all kitting-related tasks, which are decomposed into atomic actions that are either computational, involving sensing, perception, planning, and control, or physical, involving actuation and manipulation. The ontology is then used in a stochastic integer linear program for optimal partitioning of the atomic tasks between the robots and humans. Preliminary experiments on a single-robot, single-human case yield promising results: the kitting operations are completed faster and with lower manipulation failure rates under human-robot partnership than by either the human or the robot alone. This success is achieved by the robot seeking human assistance for visual perception tasks while performing the other tasks primarily on its own.
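The partitioning problem can be sketched with a toy expected-cost model: each atomic action is assigned to the human or the robot so that total expected duration, inflated by the probability of having to retry after a failure, is minimized. The task names, durations, and failure rates below are illustrative inventions (not the paper's data), and brute-force enumeration stands in for the stochastic integer linear program.

```python
from itertools import product

# Hedged sketch: assign each atomic kitting action to "human" or "robot" to
# minimize expected completion cost. Entries are (duration in s, P(failure));
# all numbers are illustrative, not taken from the experiments.
tasks = {
    "locate_part":  {"human": (5.0, 0.02), "robot": (3.0, 0.50)},  # visual perception
    "grasp_part":   {"human": (4.0, 0.01), "robot": (3.0, 0.05)},  # manipulation
    "place_in_kit": {"human": (3.0, 0.01), "robot": (2.5, 0.03)},  # manipulation
    "verify_kit":   {"human": (2.0, 0.02), "robot": (8.0, 0.25)},  # visual perception
}

def expected_cost(duration, p_fail):
    # Geometric retries on failure: expected attempts = 1 / (1 - p_fail)
    return duration / (1.0 - p_fail)

names = list(tasks)
best = min(
    product(("human", "robot"), repeat=len(names)),
    key=lambda assign: sum(expected_cost(*tasks[n][a]) for n, a in zip(names, assign)),
)
plan = dict(zip(names, best))
print(plan)
```

With these (invented) numbers the optimizer assigns the perception-heavy actions to the human and the manipulation actions to the robot, mirroring the qualitative division of labor reported in the abstract; a real instance would encode the same binary decision variables and expected-cost objective in an ILP solver.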
Robotic sensor networks (RSNs), which consist of networks of sensors placed on mobile robots, are being increasingly used for environment monitoring applications. In particular, considerable work has been done on simultaneous localization and mapping of the robots, and optimal sensor placement for environment state estimation<sup>1</sup>. The deployment of RSNs, however, remains challenging in harsh environments, where the RSNs have to deal with significant perturbations in the form of wind gusts, turbulent water flows, sand storms, or blizzards that disrupt inter-robot communication and individual robot stability. Hence, there is a need to counteract such perturbations and bring the networks to desirable states with stable nodes (robots) and minimal loss of operational performance (environment sensing). Recent work has demonstrated the feasibility of controlling the non-linear dynamics in other networked systems, such as emergency management systems and power grids, by introducing compensatory perturbations to restore network stability and operation<sup>2</sup>. In this paper, we develop a computational framework to investigate the usefulness of this approach for RSNs in marine environments. Preliminary analysis shows promising performance and identifies bounds on the original perturbations within which it is possible to control the networks.
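The compensatory-perturbation idea can be illustrated with a minimal sketch: a small RSN modeled as a linear consensus network whose node states are kicked outside a stability bound by an external disturbance, after which the minimal-norm compensatory perturbation (here simply a projection back onto the safe set) restores stable operation. The graph, the box bound, and the disturbance are all illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hedged sketch: 4 robot nodes on a ring graph running consensus x' = -L x.
# A gust-like disturbance pushes states outside a stability box |x_i| <= x_max;
# the minimal-norm compensatory perturbation is the projection onto the box.
L = np.array([[ 2., -1., -1.,  0.],
              [-1.,  2.,  0., -1.],
              [-1.,  0.,  2., -1.],
              [ 0., -1., -1.,  2.]])   # illustrative ring-of-4 graph Laplacian
x_max = 1.0

def compensate(x):
    """Minimal-norm compensatory perturbation: project state onto the safe box."""
    return np.clip(x, -x_max, x_max) - x

x = np.array([0.2, -0.1, 0.0, 0.1])          # nominal node states
x = x + np.array([1.5, 0.0, -2.0, 0.3])      # external disturbance (e.g., gust)
x = x + compensate(x)                        # apply compensatory perturbation
for _ in range(200):                         # Euler integration of x' = -L x
    x = x - 0.05 * (L @ x)
print(np.allclose(x, x.mean()))              # network re-reaches consensus
```

The projection plays the role of the computed compensatory perturbation: it is the smallest state change that returns the network to the region from which the nominal dynamics converge on their own.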
Conference Committee Involvement (2)
Sensors for Next-Generation Robotics III
21 April 2016 | Baltimore, Maryland, United States
Sensors for Next-Generation Robotics II
22 April 2015 | Baltimore, Maryland, United States