Visual homing is a navigation method based on comparing a stored image of the goal location and the current image
(current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees,
employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms
using two main approaches: holistic and feature-based. Both methods aim at determining distance and direction to the
goal location. Navigational algorithms using the Scale Invariant Feature Transform (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy developed a visual homing method, Homing in Scale Space (HiSS), which uses scale change information from SIFT features.
HiSS uses SIFT feature scale change information to determine distance between the robot and the goal location. Since
the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy.
We have developed a method that uses stereo data, resulting in better homing performance. Our approach uses a pan-tilt stereo camera to build composite wide-field images. We combine the wide-field images with stereo data obtained from the camera to extend the keypoint vector to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location.
We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera, and we evaluate the performance of both methods using a set of performance measures described in this paper.

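The contrast between the two distance cues can be sketched as follows. This is an illustrative toy, not the authors' implementation: the `Keypoint` fields, the matched pairs, and all numeric values are hypothetical, and the SIFT matching and stereo pipeline are not shown. The point is only that SIFT scale gives a coarse, discrete cue, while stereo depth gives a continuous metric one.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Keypoint:
    x: float      # image column
    y: float      # image row
    scale: float  # SIFT scale (coarse, discrete)
    z: float      # stereo depth in metres (the depth extension; hypothetical field)

def hiss_vote(snapshot_kp: Keypoint, current_kp: Keypoint) -> int:
    """HiSS-style cue: a feature seen at a larger scale in the current view
    than in the goal snapshot suggests the robot is now closer to that
    feature than it was at the goal. Returns +1 (closer) or -1 (farther)."""
    return 1 if current_kp.scale > snapshot_kp.scale else -1

def depth_offset(snapshot_kp: Keypoint, current_kp: Keypoint) -> float:
    """Stereo-based cue: signed change in metric depth to the matched
    feature, a continuous quantity rather than a coarse scale step."""
    return current_kp.z - snapshot_kp.z

# Hypothetical matched pairs (goal snapshot, current view) for three features.
matches = [
    (Keypoint(120, 80, 2.0, 3.5), Keypoint(130, 82, 4.0, 2.1)),
    (Keypoint(300, 90, 1.0, 5.0), Keypoint(290, 88, 1.0, 4.4)),
    (Keypoint(210, 60, 4.0, 1.8), Keypoint(205, 61, 2.0, 2.9)),
]

scale_votes = [hiss_vote(s, c) for s, c in matches]
depth_changes = [depth_offset(s, c) for s, c in matches]
print(f"scale votes: {scale_votes}")
print(f"median depth change: {median(depth_changes):.2f} m")
```

Aggregating such per-feature cues (e.g. by voting or taking a robust median) is one plausible way a homing controller could turn them into a motion command.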
A crucial aspect of mission-critical robotic operations is ensuring, as far as possible, that an autonomous system can complete its task. In a project for the Defense Threat Reduction Agency (DTRA), we are developing methods to provide such assurance, specifically for counter-Weapons of Mass Destruction (C-WMD) missions. In this paper, we describe the scenarios under consideration, the performance measures and metrics being developed, and an outline of the mechanisms for providing performance guarantees.

We consider the scenario in which an autonomous platform searching or traversing a building may observe unstable masonry or may need to travel over unstable rubble. A purely behaviour-based system may handle these challenges but produce behaviour that works against long-term goals, such as reaching a victim as quickly as possible. We extend our
work on ADAPT, a cognitive robotics architecture that incorporates 3D simulation and image fusion, to allow the robot
to predict the behaviour of physical phenomena, such as falling masonry, and take actions consonant with long-term
goals. We experimentally evaluate cognitive-only and reactive-only approaches to traversing a building containing varying numbers of such challenges and compare their performance. The reactive-only approach succeeds only 38% of the time, while the cognitive-only approach succeeds 100% of the time. While the cognitive-only approach produces very impressive behaviour, our results indicate how much better a combination of cognitive and behaviour-based approaches can be.
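The control pattern described above, a cognitive layer that uses simulated prediction to override a reactive layer, can be sketched as a minimal toy. Everything here is illustrative and assumed: the observation labels, the action names, and the trivial stand-in for ADAPT's 3D simulation are hypothetical, not the architecture's actual interfaces.

```python
def reactive_action(observation: str) -> str:
    """Purely reactive policy: respond to the immediate observation only
    (placeholder for a behaviour-based controller)."""
    return "retreat" if observation == "rubble" else "advance"

def simulate_outcome(action: str, observation: str) -> str:
    """Stand-in for ADAPT-style 3D simulation: predict whether an action
    succeeds given the observed hazard. The rule here is illustrative."""
    if observation == "unstable_masonry" and action == "advance":
        return "blocked"  # predicted falling masonry blocks the path
    return "ok"

def hybrid_action(observation: str) -> str:
    """Take the reactive suggestion, but let the predicted outcome veto it
    in favour of the long-term goal (reaching the victim quickly)."""
    action = reactive_action(observation)
    if simulate_outcome(action, observation) == "blocked":
        return "detour"  # cognitive layer overrides the reactive choice
    return action
```

The design choice this illustrates is that the reactive layer still proposes every action, so the cognitive layer pays its simulation cost only when a prediction contradicts the proposal.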