This paper presents a scalable modeling technique, developed by Autonomous Solutions under contract with NAVEODTECHDIV and TARDEC, that displays 3D data from both a priori sources and real-time sensors. A novel algorithm provides structure and texture to 3D point clouds, while an octree repository management technique scales the level of detail for seamless zooming from kilometer to centimeter scales. This immersive 3D environment enables direct measurement of absolute size, automated manipulator placement, and indication of unique world coordinates for navigation. Because the a priori data are continually updated with new information collected by stereo vision and lidar sensors, high-accuracy pose is not a requirement.
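The octree level-of-detail scheme can be sketched as follows. This is a minimal illustration under stated assumptions, not the repository implementation described above; the names (`OctreeNode`, `query_lod`, the `detail` threshold) are invented for the example. Each cell keeps one representative point as its coarse sample, and a view-dependent query descends only into cells close to the viewer, so nearby geometry is returned at centimeter resolution while distant geometry collapses to a few coarse samples:

```python
import numpy as np

class OctreeNode:
    """One cubic cell of space; children subdivide it into eight octants."""

    def __init__(self, center, half_size, depth=0):
        self.center = np.asarray(center, dtype=float)
        self.half_size = half_size
        self.depth = depth
        self.rep = None        # one representative point: this cell's coarse LOD sample
        self.children = {}     # octant tuple -> child node, created lazily

    def insert(self, point, max_depth=4):
        point = np.asarray(point, dtype=float)
        if self.rep is None:
            self.rep = point                      # first point stands in for the cell
        if self.depth == max_depth:
            return
        octant = tuple(point >= self.center)      # which of the 8 octants holds the point
        if octant not in self.children:
            offset = np.where(octant, 1.0, -1.0) * self.half_size / 2
            self.children[octant] = OctreeNode(self.center + offset,
                                               self.half_size / 2,
                                               self.depth + 1)
        self.children[octant].insert(point, max_depth)

    def query_lod(self, viewer, detail=4.0):
        """Return representative points at a resolution matched to viewing
        distance: nearby cells recurse into finer octants, while distant
        cells stop early and contribute a single coarse sample."""
        dist = np.linalg.norm(self.center - np.asarray(viewer, dtype=float))
        if not self.children or dist > detail * self.half_size:
            return [self.rep] if self.rep is not None else []
        pts = []
        for child in self.children.values():
            pts.extend(child.query_lod(viewer, detail))
        return pts
```

A viewer standing inside the modeled volume receives many fine samples, while a viewer kilometers away receives only the root cell's representative, which is the behavior that makes seamless zooming scale.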
Autonomous Solutions has multiple projects involving mobile robot arms for
EOD applications. This paper describes three such projects. In the first, a 5-DOF
robot arm of our own design is mounted on a 4-wheeled omni-directional platform.
The arm is capable of lifting 100 pounds at close to full extension, and it can lift
significantly more at shorter distances. The operator can select either individual
joint control or "fly the gripper" control, in which the individual joint velocities are
automatically calculated in response to commanded gripper velocities. In the second
project, 3D data of a scene of interest is gathered and then presented to the
EOD technician, along with a representation of the arm. The technician can then
rotate the scene and arm to obtain the best possible view for the task. In the final
project, a 3-DOF arm will be mounted on an omni-directional platform. The arm
and platform will be treated as a single 6-DOF manipulator, and the operator will
specify 6-DOF gripper velocity commands using a 6-axis input device, such as
those used with solid modeling programs.
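The "fly the gripper" mode in the first project and the combined platform-plus-arm control planned for the third are both instances of resolved-rate control: commanded gripper velocities are mapped to joint velocities through the manipulator Jacobian. The paper does not state which method is used, so the sketch below uses damped least squares (a common choice that stays well-behaved near singularities) on a hypothetical planar 3-link arm; the function names, link lengths, and damping value are assumptions for illustration:

```python
import numpy as np

LINKS = (1.0, 1.0, 0.5)  # assumed planar link lengths, meters

def fk(q):
    """Planar forward kinematics: joint angles -> gripper position (x, y)."""
    x = y = angle = 0.0
    for qi, li in zip(q, LINKS):
        angle += qi
        x += li * np.cos(angle)
        y += li * np.sin(angle)
    return np.array([x, y])

def jacobian(q, eps=1e-6):
    """Numerical Jacobian of the gripper position w.r.t. joint angles."""
    q = np.asarray(q, dtype=float)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q))
        dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

def fly_the_gripper(q, xdot, damping=1e-3):
    """One resolved-rate step: joint velocities that realize the commanded
    gripper velocity xdot, via damped least squares
    (qdot = J^T (J J^T + lambda I)^-1 xdot)."""
    J = jacobian(q)
    return J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), np.asarray(xdot))
```

Treating the omni-directional platform and arm as a single manipulator only changes the width of the Jacobian: platform velocities become additional columns, and the same damped-least-squares step distributes a 6-DOF gripper command across all of the joints at once.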