Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach risks detonation from unintentionally disturbing the target or nearby objects.
We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used alongside conventional standard-definition (SD) cameras to determine whether higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect of the different vision modes on operator comfort. A total of six head-aimed vision modes were used: normal-separation HD stereo, SD stereo, "micro" (reduced-separation) SD stereo, HD mono, and SD mono (two types).
(two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV
teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks
derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint
Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.
Previous foveal/peripheral display systems have typically combined the foveal and peripheral views optically, in a single
eye, in order to provide simultaneously both high resolution and wide field of view from a limited number of pixels.
While quite effective, this approach can lead to cumbersome optical designs that are not well suited to head-mounted
displays. A simpler approach may be possible in the form of a dichoptic vision system, wherein each eye receives a
different field of view (FOV) of the same scene, at different resolutions. One eye would be presented with high-resolution, narrow-FOV foveal imagery, while the other would receive a much wider peripheral FOV. Binocular overlap
in the central region would provide some degree of stereoscopic depth perception. It remains to be determined, however,
if such a system would be acceptable to users, or if binocular rivalry or other adverse side-effects would degrade visual
task performance compared to conventional head-mounted binocular displays. In this paper, we describe a preliminary
dichoptic foveal/peripheral vision system and suggest methods by which its usability and performance can be assessed.
This effort was funded by the U.S. Air Force Research Laboratory Human Performance Wing under SBIR Topic
Head-aimed vision systems provide significantly improved situational awareness, accuracy, and decision speed for the teleoperation of agile robots. With head-aimed vision, wherever the operator looks, a sensor system onboard the remote vehicle "looks" as well. When done correctly, head-aimed vision creates a powerful sense of telepresence. In our tests of reconnaissance tasks for an unmanned ground vehicle, an overall performance increase of 250% was documented, and operator workload was also reduced.
We have designed a minimally intrusive Operator Control Unit (OCU) intended to be used by a dismounted soldier. The OCU is operated through a combination of head aiming and a small wireless controller integrated into the grip of the soldier's rifle. This minimally intrusive OCU allows soldiers to navigate a software interface (for example, to call up a map), operate a remote camera system or other sensors on an unmanned vehicle, and/or teleoperate the vehicle itself, all while keeping their heads up and their hands on their weapons. Central to the concept is a head-aimed software interface, in which natural and intuitive head motion replaces traditional mouse movements for navigating, pointing, and selecting items in the display: operators simply move their heads in the direction they want to "look," and the display is seamlessly updated with new information. Combined with the controller integrated into the weapon grip, this allows nearly hands-free operation, in contrast to a PDA or other standard controller, which generally occupies both hands and requires operators to look down at a screen.
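The core of such a head-aimed interface is a mapping from tracked head orientation to a pointing position on the display. The sketch below illustrates one plausible form of that mapping; the function name, the mapped field-of-view values, and the clamping behavior are all illustrative assumptions, not the system described above.

```python
def head_to_cursor(yaw_deg, pitch_deg, screen_w, screen_h,
                   fov_h_deg=40.0, fov_v_deg=30.0):
    """Map head yaw/pitch (degrees) to pixel coordinates on the display.

    A yaw/pitch of 0 centers the cursor; turning the head by half the
    mapped FOV reaches the screen edge. Angles beyond the mapped FOV are
    clamped so the cursor stops at the edge. Values here are assumed,
    not taken from the OCU described in the text.
    """
    # Normalize each axis to [-0.5, 0.5], clamping at the FOV limits.
    nx = max(-0.5, min(0.5, yaw_deg / fov_h_deg))
    # Screen y grows downward, so pitch (up positive) is negated.
    ny = max(-0.5, min(0.5, -pitch_deg / fov_v_deg))
    return (round((nx + 0.5) * (screen_w - 1)),
            round((ny + 0.5) * (screen_h - 1)))
```

In a system like the one described, a mapping of this kind would run every tracker frame to reposition the cursor or pan the view, with the weapon-grip controller supplying the "select" action that a mouse button would otherwise provide.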