For the TARDEC-funded Stingray Project, iRobot Corporation and Chatten Associates are developing technologies that
will allow small UGVs to operate at tactically useful speeds. In previous work, we integrated a Chatten Head-Aimed
Remote Viewer (HARV) with an iRobot Warrior UGV, and used the HARV to drive the Warrior, as well as a small,
high-speed, gas-powered UGV surrogate. In this paper, we describe our continuing work implementing semiautonomous
driver-assist behaviors to help an operator control a small UGV at high speeds. We have implemented an
IMU-based heading control behavior that enables tracked vehicles to maintain an accurate heading even over rough
terrain. We are also developing a low-latency, low-bandwidth, high-quality digital video protocol to support immersive
visual telepresence. Our experiments show that a video compression codec using the H.264 algorithm can produce
several times better resolution than a Motion JPEG video stream while using the same limited bandwidth and
maintaining the same low latency. With further enhancements, our H.264 codec will provide an order-of-magnitude
improvement in quality while retaining latency comparable to Motion JPEG and operating within the same bandwidth.
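
To make the heading-control idea concrete, the following is a minimal sketch of an IMU-based heading-hold loop for a skid-steered (tracked) vehicle. It is illustrative only: the controller gains, the track-mixing function, and all names are assumptions, not the actual Stingray implementation.

    import math

    def wrap_angle(a):
        # Wrap an angle to [-pi, pi] so heading errors take the short way around.
        return math.atan2(math.sin(a), math.cos(a))

    class HeadingHold:
        """Proportional-derivative heading-hold controller (illustrative only)."""

        def __init__(self, kp=2.0, kd=0.3, max_turn_rate=1.5):
            # Gains and the turn-rate limit (rad/s) are placeholder values.
            self.kp = kp
            self.kd = kd
            self.max_turn_rate = max_turn_rate

        def update(self, desired_heading, imu_yaw, imu_yaw_rate):
            # Heading error from the IMU yaw, damped by the measured yaw rate.
            error = wrap_angle(desired_heading - imu_yaw)
            turn_rate = self.kp * error - self.kd * imu_yaw_rate
            return max(-self.max_turn_rate, min(self.max_turn_rate, turn_rate))

    def mix_tracks(forward_speed, turn_rate, track_separation=0.5):
        # Convert forward speed (m/s) and turn rate (rad/s) into left/right
        # track speeds for a skid-steered vehicle.
        offset = turn_rate * track_separation / 2.0
        return forward_speed - offset, forward_speed + offset
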
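For the low-latency H.264 link, one common way to prototype such a stream is GStreamer's x264 encoder tuned for zero latency. The sketch below reflects a generic low-latency configuration, not the codec described in the paper; the camera device, bitrate, and destination address are placeholder values.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Capture, encode with settings chosen for low latency (zero-latency tune,
    # ultrafast preset, constrained bitrate in kbit/s), packetize as RTP, and
    # send over UDP to the operator control unit. Device, bitrate, host, and
    # port are placeholder values.
    pipeline = Gst.parse_launch(
        "v4l2src device=/dev/video0 ! videoconvert ! "
        "x264enc tune=zerolatency speed-preset=ultrafast bitrate=1000 key-int-max=30 ! "
        "rtph264pay ! udpsink host=192.168.1.10 port=5600"
    )
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()
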
The operator's situational awareness greatly affects mission performance for remote operations of Explosive Ordnance
Disposal (EOD) robots. Testing by Army EOD sergeants has shown that a Head-Aimed Remote Viewer (HARV) can significantly
increase mission performance in several key tasks, such as identifying secondary Improvised Explosive Devices
(IEDs) and maneuvering in tight quarters. A HARV system improves the operator's situational awareness by providing an
intuitive, "look around" vision interface that DARPA research [4] has shown provides a 400% improvement in the operator's
spatial understanding of the remote environment. This paper describes the results of functional testing conducted by US
Army civilian engineers and EOD sergeants at Picatinny Arsenal, in support of W15QKN-06-C-0190.
The use of head-aimed vision systems for remote navigation and weapons aiming greatly increases the mission performance of armed unmanned ground vehicles. Head-aimed human/robotic vision interfaces greatly improve situational awareness. Task performance in target tracking and threat identification is increased by 200 to 300 percent.
Head-aimed vision systems provide significantly improved situational awareness, accuracy, and decision speed for the tele-operation of agile robots. With head-aimed vision, wherever the operator looks, a sensor system onboard the remote vehicle "looks". When done correctly, head-aimed vision creates a powerful sense of telepresence. An overall performance increase of 250% was documented in our tests of reconnaissance tasks with an unmanned ground vehicle. Operator workload was also reduced.
We have designed a minimally intrusive Operator Control Unit (OCU) intended to be used by a dismounted soldier. The OCU is operated using a combination of head aiming plus a small wireless controller that is integrated into the grip of the soldier's rifle. This minimally intrusive OCU allows soldiers to navigate a software interface (for example, to call up a map), operate a remote camera system or other sensors on an unmanned vehicle, and/or tele-operate the vehicle itself, all while the soldiers have their heads up and their hands on their weapons. Central to the concept is the idea of a head-aimed software interface, where natural and intuitive head motion is used instead of traditional mouse movements to efficiently navigate, point, or even select items in the display: operators simply move their heads in the direction that they want to "look", and the display is seamlessly updated with new information. When combined with the controller integrated into the weapon grip, this allows nearly hands-free operation, in contrast to a PDA or other standard controller, which generally occupies both hands and requires operators to look down at a screen.
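
As an illustration of the head-aimed software interface described above, the following sketch maps head yaw and pitch to a cursor position on the display, with a weapon-grip button press selecting the item under the cursor. The field-of-view values, UI-item interface, and function names are hypothetical; the sketch shows only the mapping idea, not the actual OCU software.

    import math

    class HeadCursor:
        """Map head orientation to a display cursor (illustrative only)."""

        def __init__(self, screen_w=1280, screen_h=720,
                     h_fov=math.radians(60), v_fov=math.radians(40)):
            # Display size in pixels and the angular span the display covers;
            # these are placeholder values.
            self.screen_w, self.screen_h = screen_w, screen_h
            self.h_fov, self.v_fov = h_fov, v_fov

        def cursor_position(self, head_yaw, head_pitch):
            # Normalize head angles across the field of view, scale to pixels,
            # and clamp at the screen edges.
            x = (0.5 + head_yaw / self.h_fov) * self.screen_w
            y = (0.5 - head_pitch / self.v_fov) * self.screen_h
            return (min(max(x, 0), self.screen_w - 1),
                    min(max(y, 0), self.screen_h - 1))

    def on_grip_button(cursor_xy, ui_items):
        # Hypothetical selection handler: when the weapon-grip button is pressed,
        # activate the first UI item whose bounds contain the cursor.
        for item in ui_items:
            if item.contains(*cursor_xy):
                item.activate()
                break
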
A head-aimed vision system greatly improves situational awareness and decision speed for the tele-operation of mobile robots. With head-aimed vision, the tele-operator wears a head-mounted display and a small three-axis head-position measuring device. Wherever the operator looks, the remote sensing system "looks". When the system is properly designed, the operator's occipital lobes are "fooled" into believing that the operator is actually on the remote robot. The result is at least a doubling of situational awareness, threat-identification speed, and target-tracking ability. Proper system design must account for precisely matched fields of view, optical gain, and latency below 100 milliseconds. When properly designed, a head-aimed system does not cause nausea, even with prolonged use.
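
A minimal sketch of the head-slaving loop implied above, assuming a tracker that reports head yaw/pitch/roll and a remote pan/tilt camera interface (both placeholders). It illustrates unity optical gain (the view turns exactly as far as the head turns) and a check against the 100-millisecond latency budget; note the check covers only the local command path, not glass-to-glass video latency.

    import time

    LATENCY_BUDGET_S = 0.100  # design goal from the text: latency below 100 ms

    def track_head(read_head_orientation, command_pan_tilt, rate_hz=100):
        # read_head_orientation() -> (yaw, pitch, roll) in radians and
        # command_pan_tilt(yaw, pitch) are placeholders for the real tracker
        # and camera interfaces.
        period = 1.0 / rate_hz
        while True:
            start = time.monotonic()
            yaw, pitch, _roll = read_head_orientation()
            # Unity optical gain: command the camera to the same angles as the head.
            command_pan_tilt(yaw, pitch)
            elapsed = time.monotonic() - start
            if elapsed > LATENCY_BUDGET_S:
                print("warning: command path took %.0f ms, over the latency budget"
                      % (elapsed * 1000))
            time.sleep(max(0.0, period - elapsed))
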