There have been large gains in the field of robotics, both in hardware sophistication and technical capabilities.
However, as more capable robots have been developed and introduced to battlefield environments, the problem of
interfacing with human operators has proven challenging. Particularly in military applications, controller
requirements can be stringent, ranging from size and power consumption to durability and cost.
Traditional operator control units (OCUs) tend to resemble laptop personal computers (PCs), as these devices are
mobile and have ample computing power. However, laptop PCs are bulky and have substantial power requirements.
To address this problem, a lightweight, inexpensive controller was created based on a mobile phone running the
Android operating system. It was designed to control an iRobot Packbot through the Army Research Laboratory
(ARL) in-house Agile Computing Infrastructure (ACI). The hardware capabilities of the mobile phone, such as Wi-
Fi communications, touch screen interface, and the flexibility of the Android operating system, made it a compelling
platform. The Android-based OCU offers a more portable package and can be easily carried by a soldier alongside
normal gear. In addition, the one-handed operation of the Android OCU leaves the soldier an unoccupied hand for
greater flexibility.
To validate the Android OCU as a capable controller, experimental data were collected comparing use of the
controller with a traditional, tablet-PC-based OCU. Initial analysis suggests that the Android OCU was rated
favorably in the qualitative data collected from participants.
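The abstract above does not specify the message format used between the OCU and the robot. As a purely hypothetical illustration (the function names, packet layout, and velocity ranges below are assumptions, not the actual ACI protocol), a phone-based OCU might encode one-handed touch-screen input into compact drive commands like this:

```python
import struct

def encode_drive_command(linear: float, angular: float, seq: int) -> bytes:
    """Pack a hypothetical teleoperation command: a sequence number plus
    linear/angular velocities clamped to [-1, 1], as might be sent over Wi-Fi."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    # Network byte order: unsigned int sequence number, two 32-bit floats.
    return struct.pack("!Iff", seq, clamp(linear), clamp(angular))

def decode_drive_command(packet: bytes):
    """Unpack a command packet back into (seq, linear, angular)."""
    return struct.unpack("!Iff", packet)

# Example: a thumb dragged forward and slightly right on the touch screen.
pkt = encode_drive_command(0.8, -0.3, seq=1)
seq, lin, ang = decode_drive_command(pkt)
```

A fixed-size binary packet like this keeps per-command overhead low on a power-constrained handheld, though the real system's wire format may differ entirely.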
Large gains in the automation of human detection and tracking techniques have been made over the past several years.
Several of these techniques have been implemented on larger robotic platforms, in order to increase the situational
awareness provided by the platform. Further integration onto a smaller robotic platform that already has obstacle
detection and avoidance capabilities would allow these algorithms to be used in scenarios that are not feasible for
larger platforms, such as entering a building and surveying a room for human occupancy with limited operator
intervention.
However, transitioning these algorithms to a man-portable robot imposes several unique constraints, including limited
power availability, size and weight restrictions, and limited processing capability. Many imaging sensors, processing
hardware, and algorithms fail to adequately address one or more of these constraints.
In this paper, we describe the design of a payload suitable for our chosen man-portable robot, the iRobot Packbot. While
the described payload was built for a Packbot, it was carefully designed to be platform-agnostic, so that it can be
used on any man-portable robot. Implementations of several existing motion and face detection algorithms that have
been chosen for testing on this payload are also discussed in some detail.
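The abstract names motion detection among the candidate algorithms without specifying which variant was tested. A minimal sketch of one common low-cost approach, grayscale frame differencing with a fixed threshold (a reasonable fit for the limited processing capability noted above, though not necessarily the method the payload uses), is:

```python
import numpy as np

def motion_mask(prev: np.ndarray, curr: np.ndarray, thresh: int = 25) -> np.ndarray:
    """Return a boolean mask marking pixels whose grayscale intensity
    changed by more than `thresh` between two consecutive frames."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

# Toy example: a 4x4 frame in which a single pixel brightens sharply.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[2, 1] = 200
mask = motion_mask(prev, curr)
```

Per-pixel differencing costs only one pass over the image, which is why it remains attractive on embedded hardware; more robust alternatives (background modeling, optical flow) trade that efficiency for accuracy.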
Pan-tilt-zoom (PTZ) cameras are frequently used in surveillance applications as they can observe a much larger region of
the environment than a fixed-lens camera while still providing high-resolution imagery. The pan, tilt, and zoom
parameters of a single camera may be simultaneously controlled by online users as well as automated surveillance
applications. To accurately register autonomously tracked objects to a world model, the surveillance system requires
accurate knowledge of camera parameters. Due to imprecision in the PTZ mechanism, these parameters cannot be
obtained from PTZ control commands but must be calculated directly from camera imagery. This paper describes the
efforts undertaken to implement a real-time calibration system for a stationary PTZ camera. The approach continuously
tracks distinctive image feature points from frame to frame, and from these correspondences, robustly calculates the
homography transformation between frames. Camera internal parameters are then calculated from these homographies.
The calculations are performed by a self-contained program that continually monitors images collected by the camera as
it performs pan, tilt, and zoom operations. The accuracy of the calculated calibration parameters is compared to ground
truth data. Problems encountered include inaccuracies in large orientation changes and long algorithm execution time.
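The calibration step above rests on a standard result for a camera rotating about its optical center: the inter-frame homography factors as H = K R K⁻¹, where K holds the internal parameters and R is the relative rotation, so K can be solved from several such homographies. A small numerical check of that relationship, using an assumed focal length and principal point (not values from the paper), looks like:

```python
import numpy as np

# Assumed intrinsics: focal length f (pixels), principal point (cx, cy).
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])

# A small pan: rotation by 5 degrees about the camera's vertical axis.
theta = np.deg2rad(5.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])

# For a purely rotating camera, consecutive frames are related by H = K R K^-1.
H = K @ R @ np.linalg.inv(K)

# Sanity check: undoing the intrinsics must recover an orthogonal matrix,
# i.e. the rotation itself.
R_rec = np.linalg.inv(K) @ H @ K
```

In the real system H is estimated robustly from tracked feature correspondences rather than constructed, and noise in those estimates is one source of the accuracy problems the abstract reports for large orientation changes.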