The objective of this research is to extend the sensing capabilities of a multi-vehicle ground system by incorporating the environmental perception abilities of unmanned aerial vehicles.
The aerial vehicle used in this research is a Miniature Aircraft Gas Xcell RC helicopter. It is outfitted with a sensor payload containing stereo vision cameras, GPS, and a digital compass. Geo-referenced images gathered with these sensors are used to create a map of the operating region. The ground vehicle used in this research is an automated Suzuki Mini-Quad ATV. Its onboard sensors comprise a monocular camera, a laser range finder, a digital compass, GPS, and a wheel encoder. Using these sensors and the map provided by the helicopter, the ground vehicle traverses the region to locate and isolate simulated land mines. The base station consists of a laptop that provides a communication link between the aerial and ground vehicle systems; it also presents the operator with system operation information and statistics.
All communication between the vehicles and the base station is performed using JAUS (Joint Architecture for Unmanned Systems) messages. The JAUS architecture is employed to organize inter-vehicle and intra-vehicle communication and the system component hierarchy. The purpose of JAUS is to provide interoperability between unmanned systems and subsystems for both military and commercial applications. JAUS seeks to achieve this through functionally cohesive building blocks called components, whose interface messages are clearly defined. The architecture supports a layered control strategy with a specific message set for each layer of control. Implementing the JAUS architecture simplified software development for this multi-vehicle system.
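To make the component-addressing idea concrete, the sketch below models a JAUS-style message header in Python. In JAUS, every component is addressed by subsystem, node, component, and instance identifiers; the field names here follow that concept, but the byte layout, field widths, and example command code are illustrative assumptions, not the exact JAUS wire format.

```python
import struct
from dataclasses import dataclass

@dataclass
class JausStyleHeader:
    """Hypothetical, simplified JAUS-style header (NOT the exact wire format)."""
    command_code: int    # identifies the message type (value below is made up)
    dest_subsystem: int  # e.g. helicopter, ATV, or base station
    dest_node: int
    dest_component: int
    src_subsystem: int
    src_node: int
    src_component: int
    sequence: int        # sequence number for ordering/acknowledgement

    # little-endian: u16 command code, six u8 address bytes, u16 sequence
    _FMT = "<H6BH"

    def pack(self) -> bytes:
        """Serialize the header to bytes for transmission."""
        return struct.pack(self._FMT, self.command_code,
                           self.dest_subsystem, self.dest_node,
                           self.dest_component, self.src_subsystem,
                           self.src_node, self.src_component, self.sequence)

    @classmethod
    def unpack(cls, data: bytes) -> "JausStyleHeader":
        """Reconstruct a header from received bytes."""
        return cls(*struct.unpack(cls._FMT, data))

# Example: base station (subsystem 3) sends a message to the ATV (subsystem 2).
hdr = JausStyleHeader(command_code=0x4402,
                      dest_subsystem=2, dest_node=1, dest_component=1,
                      src_subsystem=3, src_node=1, src_component=1,
                      sequence=7)
wire = hdr.pack()
```

Because every message carries both a fully qualified source and destination address, the same encoding serves inter-vehicle traffic over the radio link and intra-vehicle traffic between components on one platform.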
This experiment demonstrates how an air-ground vehicle system can be used to cooperatively locate and dispose of simulated mines.
This paper describes the development and performance of a sensor system used for autonomous navigation of an unmanned ground vehicle. Four different sensor types were integrated to identify obstacles in the vicinity of the vehicle and to identify smooth terrain that could be traversed at speeds up to thirty miles per hour. The paper also describes a sensor fusion approach in which the output of every sensor was expressed in a common grid-based format. The environment around the vehicle was modeled by a 120 × 120 grid in which each cell was 0.5 m × 0.5 m and the grid lines were always kept parallel to the north-south and east-west lines. Each sensor output an estimate of the traversability of every grid cell. For the three-dimensional obstacle avoidance sensors (rotating ladar and stereo vision), the three-dimensional point data was projected onto the grid plane. The terrain traversability sensors, i.e., fixed ladar and monocular vision, estimated traversability from the smoothness of the spatial plane fitted to the range data and from the similarity in appearance between pixels in a grid cell and those directly in front of the vehicle, respectively.
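The common grid format described above can be sketched as follows. The snippet keeps a 120 × 120 grid of 0.5 m cells whose axes stay aligned with north and east regardless of vehicle heading, and projects 3D obstacle points (as a rotating ladar or stereo vision would produce) onto the grid plane. The binary traversable/non-traversable values and the 0.3 m obstacle height threshold are assumptions for illustration; the paper states only that each sensor outputs a per-cell traversability estimate.

```python
CELL_SIZE = 0.5   # metres per cell (from the paper)
GRID_DIM = 120    # cells per side, i.e. a 60 m x 60 m window (from the paper)

def world_to_cell(north, east, veh_north, veh_east):
    """Map a world point (north, east) to grid indices.

    The grid is centred on the vehicle, but its rows and columns remain
    parallel to the north-south and east-west lines, independent of the
    vehicle's heading. Returns None if the point falls outside the window.
    """
    row = int((north - veh_north) / CELL_SIZE) + GRID_DIM // 2
    col = int((east - veh_east) / CELL_SIZE) + GRID_DIM // 2
    if 0 <= row < GRID_DIM and 0 <= col < GRID_DIM:
        return row, col
    return None

def project_points(points, veh_north, veh_east):
    """Project 3D points (north, east, up) onto the grid plane.

    Cells containing a point above an assumed 0.3 m height threshold are
    marked non-traversable (0); all other cells default to traversable (1).
    """
    grid = [[1] * GRID_DIM for _ in range(GRID_DIM)]
    for north, east, up in points:
        if up > 0.3:  # assumed obstacle height threshold
            cell = world_to_cell(north, east, veh_north, veh_east)
            if cell is not None:
                grid[cell[0]][cell[1]] = 0
    return grid

# Example: vehicle at the origin, one obstacle point 1 m north, 0.5 m east.
grid = project_points([(1.0, 0.5, 1.2)], veh_north=0.0, veh_east=0.0)
```

Because every sensor writes into the same north-aligned cell indexing, fusing the four sensors reduces to combining per-cell estimates rather than reconciling four different coordinate frames.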