KEYWORDS: Sensors, Navigation systems, Safety, LIDAR, Operating systems, Open source software, Motion models, Monte Carlo methods, Robotics, Global Positioning System
Pavements used for construction and repair of airfield surfaces must be rigorously tested before use in the field. This testing is typically accomplished by trafficking simulated aircraft landing gear payloads across an experimental pavement test patch and measuring deflection, cracking, and other effects on the pavement and aggregate subbase. The landing gear payload is heavily weighted to simulate the pressures of landing and taxiing, and a large tractor pulls the landing gear repeatedly over the test patch while executing an intricate trafficking pattern to distribute the load on the patch in the desired manner. In conventional testing, a human drives the test vehicle, called a load cart, forward and backward over the experimental patch up to about 1000 times while carefully following a set of closely spaced lane markings. This is a dull job that is ripe for automation. One year ago, at this conference, we presented results of outfitting the load cart, consisting of the tractor from a Caterpillar 621G scraper and a custom trailer carrying the landing gear simulacrum, with a custom vehicle interface and bringing it under tele-operation. In this paper, we describe the results of fully automating the load cart pavement test vehicle using the Robot Operating System 2 Navigation Stack. The solution works without GPS, line following, or external tracking systems and involves minimal modifications to the vehicle. Using lidar and adaptive Monte Carlo localization, the team achieved better than 6" cross-track accuracy with a lumbering, 300,000-pound vehicle.
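The abstract does not give implementation details, but the following minimal Python sketch illustrates how one trafficking pass might be commanded through the ROS 2 Navigation Stack's NavigateToPose action interface; the node name, waypoint coordinates, and goal handling are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: commanding one trafficking pass via the ROS 2 Nav2 action server.
# Waypoint coordinates and frame are illustrative; the actual trafficking pattern,
# controller tuning, and recovery behaviors are not described in the abstract.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose
from geometry_msgs.msg import PoseStamped


def make_goal(node, x, y):
    """Build a NavigateToPose goal in the map frame (AMCL-localized)."""
    goal = NavigateToPose.Goal()
    goal.pose = PoseStamped()
    goal.pose.header.frame_id = 'map'
    goal.pose.header.stamp = node.get_clock().now().to_msg()
    goal.pose.pose.position.x = x
    goal.pose.pose.position.y = y
    goal.pose.pose.orientation.w = 1.0  # placeholder heading
    return goal


def main():
    rclpy.init()
    node = Node('load_cart_traffic_pass')
    client = ActionClient(node, NavigateToPose, 'navigate_to_pose')
    client.wait_for_server()

    # One forward pass over the test patch, then back to the start (placeholder lane).
    for x, y in [(30.0, 0.0), (0.0, 0.0)]:
        goal_future = client.send_goal_async(make_goal(node, x, y))
        rclpy.spin_until_future_complete(node, goal_future)
        result_future = goal_future.result().get_result_async()
        rclpy.spin_until_future_complete(node, result_future)

    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```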
The Air Force Civil Engineer Center’s C-17 Load Cart is a large, 150-ton machine based on a modified Caterpillar 621G scraper for testing experimental pavements used in airfield surface construction and repair. Long-lasting, durable, prepare-in-place, minimally resourced pavements represent a critical technology for airfield damage repair, especially in expeditionary settings, and formulations must be tested using realistic loads but without the expense and logistical challenges of using real aircraft. The Load Cart is an articulated vehicle consisting of the 621G tractor and a custom trailer carrying a weighted set of landing gear to simulate the loads exerted during aircraft landing and taxiing. During a test, a human driver repetitively traffics the vehicle hundreds of times over an experimental patch of pavement, following an intricate trafficking pattern, to evaluate wear and mechanical properties of the pavement formulation. The job of driving the Load Cart is dull, repetitive, and prone to errors and systematic variation depending on the individual driver. This paper describes the full-stack development of an autonomy kit for the Load Cart, to enable repeatable testing without a driver. Open-source code (Robot Operating System), commercial off-the-shelf sensors, and a modular design based on open standards are exploited to achieve autonomous operation without the use of GNSS (which is challenged by operation inside a metal test building). The Vehicle Control Unit is a custom interface in the PC/104 form factor that allows actuation of the Load Cart via CAN J1939. Operational modes include manual, tele-operation, and autonomous.
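As a rough illustration of the CAN J1939 actuation path mentioned above, the sketch below sends a single 29-bit extended-ID frame with the python-can library; the priority, PGN, source address, and payload are placeholders, since the Load Cart's actual command messages are not given in the abstract.

```python
# Minimal sketch: sending one J1939-style frame over SocketCAN with python-can.
# Priority, PGN, source address, and payload are placeholders; they do not
# reflect the Vehicle Control Unit's real command set.
import can

PRIORITY = 6           # placeholder J1939 priority
PGN = 0xFF00           # placeholder proprietary-B parameter group number
SOURCE_ADDRESS = 0x80  # placeholder source address

# 29-bit J1939 identifier: priority (3 bits) | PGN (18 bits) | source address (8 bits)
arbitration_id = (PRIORITY << 26) | (PGN << 8) | SOURCE_ADDRESS

bus = can.Bus(interface='socketcan', channel='can0')
msg = can.Message(
    arbitration_id=arbitration_id,
    data=[0x00] * 8,       # placeholder 8-byte payload
    is_extended_id=True,   # J1939 uses 29-bit extended identifiers
)
bus.send(msg)
bus.shutdown()
```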
We present results from testing a multi-modal sensor system (consisting of a camera, LiDAR, and positioning system) for real-time object detection and geolocation. The system’s eventual purpose is to assess damage and detect foreign objects on a disrupted airfield surface to reestablish a minimum airfield operating surface. It uses an AI model to detect objects and generate bounding boxes or segmentation masks in data acquired with a high-resolution area scan camera. It locates the detections in the local, sensor-centric coordinate system in real time using returns from a low-cost commercial LiDAR system. This is accomplished via an intrinsic camera calibration together with a 3D extrinsic calibration of the camera-LiDAR pair. A coordinate transform service uses data from a navigation system (comprising an inertial measurement unit and global positioning system) to transform local coordinates of the detections obtained with the AI and calibrated sensor pair into earth-centered coordinates. The entire sensor system is mounted on a pan-tilt unit to achieve 360-degree perception. All data acquisition and computation are performed on a low SWaP-C system-on-module that includes an integrated GPU. Computer vision code runs in real time on the GPU and has been accelerated using CUDA. We have chosen Robot Operating System (ROS1 at present but porting to ROS2 in the near term) as the control framework for the system. All computer vision, motion, and transform services are configured as ROS nodes.
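To make the intrinsic/extrinsic calibration step concrete, the following numpy sketch projects LiDAR returns into camera pixel coordinates using a pinhole intrinsic matrix and a rigid LiDAR-to-camera transform; the matrix and point values shown are placeholders, not the system's calibrated parameters.

```python
# Minimal sketch: projecting LiDAR returns into image pixels with an intrinsic
# matrix K and an extrinsic rotation/translation (R, t) from LiDAR to camera.
# All numeric values below are placeholders, not calibrated parameters.
import numpy as np

# Placeholder pinhole intrinsics (fx, fy, cx, cy).
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])

# Placeholder extrinsics: LiDAR frame -> camera frame.
R = np.eye(3)
t = np.array([0.05, -0.10, 0.0])


def project_lidar_to_image(points_lidar):
    """Return pixel coordinates and depths for LiDAR points in front of the camera."""
    points_cam = points_lidar @ R.T + t           # transform into the camera frame
    in_front = points_cam[:, 2] > 0.0             # keep points with positive depth
    points_cam = points_cam[in_front]
    pixels_h = points_cam @ K.T                   # homogeneous pixel coordinates
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]   # perspective divide
    return pixels, points_cam[:, 2]


# Example: three placeholder LiDAR returns (meters), z roughly along the optical axis.
pixels, depths = project_lidar_to_image(np.array([[0.5, 0.2, 6.0],
                                                  [-1.0, 0.1, 8.0],
                                                  [0.0, -0.3, 4.0]]))
print(pixels, depths)
```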