KEYWORDS: Sensors, Navigation systems, Safety, LIDAR, Operating systems, Open source software, Motion models, Monte Carlo methods, Robotics, Global Positioning System
Pavement formulations used in the construction and repair of airfield surfaces must be rigorously tested before use in the field. This testing is typically accomplished by trafficking simulated aircraft landing gear payloads across an experimental pavement test patch and measuring deflection, cracking, and other effects on the pavement and aggregate subbase. The landing gear payload is heavily weighted to simulate the pressures of landing and taxiing, and a large tractor pulls the landing gear repeatedly over the test patch while executing an intricate trafficking pattern that distributes the load on the patch in the desired manner. In conventional testing, a human drives the test vehicle, called a load cart, forward and backward over the experimental patch up to about 1000 times while carefully following a set of closely spaced lane markings. This is a dull job that is ripe for automation. One year ago, at this conference, we presented the results of outfitting the load cart, which consists of the tractor from a Caterpillar 621G scraper and a custom trailer carrying the landing gear simulacrum, with a custom vehicle interface and bringing it under tele-operation. In this paper, we describe the results of fully automating the load cart pavement test vehicle using the Robot Operating System 2 Navigation Stack. The solution works without GPS, line following, or external tracking systems and involves minimal modifications to the vehicle. Using lidar and Adaptive Monte Carlo Localization, the team achieved better than 6" cross-track accuracy with a lumbering, 300,000-pound vehicle.
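As a hedged illustration of the approach this abstract names, the ROS 2 launch sketch below brings up the Nav2 map server and AMCL for map-based lidar localization without GPS. The map file name, particle counts, and motion model are placeholder assumptions for a large articulated vehicle, not the project's actual configuration (a real bringup would also include a lifecycle manager).

```python
# Minimal ROS 2 launch sketch: Nav2 AMCL localizing against a prebuilt
# map of the test building using 2D lidar scans. All parameter values
# here are illustrative assumptions.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Serves the prebuilt occupancy-grid map of the test building.
        Node(package='nav2_map_server', executable='map_server',
             name='map_server',
             parameters=[{'yaml_filename': 'test_building_map.yaml'}]),
        # Adaptive Monte Carlo Localization: matches lidar scans to the
        # map and publishes the map->odom transform used by Nav2.
        Node(package='nav2_amcl', executable='amcl', name='amcl',
             parameters=[{'min_particles': 500,
                          'max_particles': 2000,
                          # A tractor-trailer is closer to a
                          # differential/Ackermann motion model than omni.
                          'robot_model_type':
                              'nav2_amcl::DifferentialMotionModel'}]),
    ])
```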
The capability to rapidly augment airbases with bio-concrete infrastructure to support parking, loading, unloading, rearming, and refueling operations is of interest to the Air Force because it requires transportation of fewer raw materials to remote sites. Automation of the bio-cement delivery further reduces logistical requirements and mitigates hazards to personnel, especially in contested or austere environments. In this paper, we discuss the full-stack development and integration of a robotic applique for a commercial tractor and present test results for autonomous delivery of bio-cement bacteria, feedstock, and water for stabilization of a sandy test area. The tractor autonomously navigates, sprays, and avoids obstacles using robust and economical off-the-shelf components and software. For this first phase of the project, we employ GNSS for localization and automotive lidar for obstacle detection. We report on the design of the robotic applique, including the mechanical, electrical, and software components, which are mostly commercial-off-the-shelf or open source. We discuss the results of testing and calibration, including tests of towing capacity, calibration of steering, measurement of liquid spray distribution, measurement of tracking errors, and determination of positioning repeatability for refilling of the reservoir. We also examine higher-order behaviors and chart a path forward for future development, which includes GNSS-denied navigation.
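For concreteness, the following sketch shows one conventional way to compute the tracking (cross-track) error this abstract reports measuring: GNSS fixes are projected into a local east-north frame and compared against a straight spray lane. The projection and sign convention are assumptions for illustration, not the project's implementation.

```python
# Illustrative sketch: GNSS fix -> local frame -> signed cross-track
# error against a straight lane segment. Assumed math, not project code.
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius

def to_local_en(lat, lon, lat0, lon0):
    """Equirectangular projection about a reference fix (lat0, lon0).
    Adequate over the few-hundred-meter extent of a test area."""
    east = math.radians(lon - lon0) * EARTH_RADIUS_M * \
        math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * EARTH_RADIUS_M
    return east, north

def cross_track_error(pos, a, b):
    """Signed lateral offset (m) of pos from lane segment a->b.
    Positive values lie to the left of the direction of travel."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    return (dx * (pos[1] - a[1]) - dy * (pos[0] - a[0])) / length
```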
The Air Force Civil Engineer Center's C-17 Load Cart is a large, 150-ton machine based on a modified Caterpillar 621G scraper for testing experimental pavements used in airfield surface construction and repair. Long-lasting, durable, prepare-in-place, minimally resourced pavements represent a critical technology for airfield damage repair, especially in expeditionary settings, and formulations must be tested using realistic loads but without the expense and logistical challenges of using real aircraft. The Load Cart is an articulated vehicle consisting of the 621G tractor and a custom trailer carrying a weighted set of landing gear to simulate the loads exerted during aircraft landing and taxiing. During a test, a human driver repetitively traffics the vehicle hundreds of times over an experimental patch of pavement, following an intricate trafficking pattern, to evaluate wear and mechanical properties of the pavement formulation. The job of driving the Load Cart is dull, repetitive, and prone to errors and systematic variation depending on the individual driver. This paper describes the full-stack development of an autonomy kit for the Load Cart to enable repeatable testing without a driver. Open-source code (Robot Operating System), commercial-off-the-shelf sensors, and a modular design based on open standards are exploited to achieve autonomous operation without the use of GNSS (which is degraded by operation inside a metal test building). The Vehicle Control Unit is a custom interface in PC-104 form factor allowing actuation of the Load Cart via CAN J1939. Operational modes include manual, tele-operation, and autonomous.
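To make the J1939 actuation path concrete, here is a hedged Python sketch of sending a command over CAN using the open-source python-can library. The 29-bit identifier packing is standard J1939 (PDU2 form), but the PGN, source address, and payload scaling are placeholders; the 621G's actual messages are not given in the abstract.

```python
# Hedged sketch of drive-by-wire actuation over CAN J1939. The PGN
# (proprietary-B range) and payload layout are placeholder assumptions.
import can  # python-can

def j1939_id(priority, pgn, source_addr):
    """Pack a 29-bit J1939 identifier (PDU2/broadcast form)."""
    return (priority << 26) | (pgn << 8) | source_addr

def send_steering_command(bus, angle_deg):
    # Placeholder scaling of a signed steering angle into a 16-bit
    # little-endian field; real J1939 SPNs define their own resolution.
    raw = int((angle_deg + 180.0) / 0.01) & 0xFFFF
    payload = bytes([raw & 0xFF, raw >> 8]) + b'\xFF' * 6  # pad to 8
    msg = can.Message(arbitration_id=j1939_id(3, 0xFF00, 0x80),
                      data=payload, is_extended_id=True)
    bus.send(msg)

# Usage (hypothetical SocketCAN interface):
# bus = can.Bus(interface='socketcan', channel='can0')
# send_steering_command(bus, 5.0)
```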
Object detection AIs enable robust solutions for fast, automated detection of anomalies in operating environments such as airfields. Implementation of AI solutions requires training models on a large and diverse corpus of representative training data. To reliably detect craters and other damage on airfields, the AI must be trained on a large, varied, and realistic set of images of craters and other damage. The current method for obtaining this training data is to set explosives in the concrete surface of a test airfield to create actual damage and to record images of this real data. This approach is extremely expensive and time consuming, yields relatively little data representing just a few damage cases, and does not adequately represent the UXO and other artifacts that must be detected. To address this paucity of training data, we have begun development of a training data generation and labeling pipeline that leverages Unreal Engine 4 to create realistic synthetic environments populated with realistic damage and artifacts. We have also developed a labeling system for automatic labeling of the detection segments in synthetic training images, in order to provide relief from the tedious and time-consuming process of manually labeling segments in training data and to eliminate human errors incurred by manual labeling. We present comparisons of the performance of two object detection AIs trained on real and synthetic data and discuss cost and schedule savings enabled by the automated labeling system used for labeling of detection segments.
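As an assumed illustration of the automatic-labeling idea (not the authors' pipeline), a synthetic renderer can emit a per-class segmentation mask alongside each image, from which a bounding-box label follows mechanically:

```python
# Illustrative auto-labeling step: derive a bounding-box label from a
# per-class segmentation mask rendered with the synthetic scene,
# removing the need for hand labeling. Assumed workflow.
import numpy as np

def mask_to_bbox(mask: np.ndarray):
    """Return (x_min, y_min, x_max, y_max) of the nonzero mask region,
    or None if the object is absent from the frame."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Example: a synthetic "crater" mask yields its training label directly.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:200, 250:400] = 1          # pixels rendered as crater
print(mask_to_bbox(mask))           # (250, 100, 399, 199)
```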
We present results from testing a multi-modal sensor system (consisting of a camera, LiDAR, and positioning system) for real-time object detection and geolocation. The system's eventual purpose is to assess damage and detect foreign objects on a disrupted airfield surface to reestablish a minimum airfield operating surface. It uses an AI to detect objects and generate bounding boxes or segmentation masks in data acquired with a high-resolution area scan camera. It locates the detections in the local, sensor-centric coordinate system in real time using returns from a low-cost commercial LiDAR system. This is accomplished via an intrinsic camera calibration together with a 3D extrinsic calibration of the camera-LiDAR pair. A coordinate transform service uses data from a navigation system (comprising an inertial measurement unit and global positioning system) to transform local coordinates of the detections obtained with the AI and calibrated sensor pair into earth-centered coordinates. The entire sensor system is mounted on a pan-tilt unit to achieve 360-degree perception. All data acquisition and computation are performed on a low-SWaP-C system-on-module that includes an integrated GPU. Computer vision code runs in real time on the GPU and has been accelerated using CUDA. We have chosen Robot Operating System (ROS1 at present, with a port to ROS2 planned in the near term) as the control framework for the system. All computer vision, motion, and transform services are configured as ROS nodes.
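The calibrated camera-LiDAR association step the abstract describes reduces to a standard pinhole projection; the minimal sketch below shows the geometry under assumed placeholder intrinsics K and extrinsics (R, t), not the system's actual calibration values.

```python
# Minimal sketch of the camera-LiDAR geolocation step: project a lidar
# return into the image to associate it with an AI detection.
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],   # fx, 0, cx (placeholder intrinsics)
              [0.0, 1000.0, 360.0],   # 0, fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # lidar->camera rotation (placeholder)
t = np.array([0.1, 0.0, 0.0])          # lidar->camera translation, meters

def project(point_lidar):
    """Map a 3D lidar point to pixel coordinates in the camera image."""
    p_cam = R @ point_lidar + t        # extrinsic transform to camera frame
    u, v, w = K @ p_cam                # pinhole projection (homogeneous)
    return u / w, v / w

print(project(np.array([0.5, 0.2, 10.0])))  # pixel (u, v) of the return
```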
KEYWORDS: Clouds, LIDAR, Detection and tracking algorithms, Data modeling, Algorithm development, Data acquisition, Sensors, Visualization, Object recognition, C++
The ability to rapidly assess damage to military infrastructure after an attack is the object of ongoing research. In the case of runways, sensor systems capable of detecting and locating craters, spall, unexploded ordnance, and debris are necessary to quickly and efficiently deploy assets to restore a minimum airfield operating surface. We describe measurements performed using two commercial, robotic scanning LiDAR systems during a round of testing at an airfield. The LiDARs were used to acquire baseline data for the entire runway and to conduct scans after two rounds of demolition and placement of artifacts. The configuration of the LiDAR systems was sub-optimal because only two platforms were available, forcing placement of both sensors on the same side of the runway. Nevertheless, the results demonstrate that the spatial resolution, accuracy, and cadence of the sensors are sufficient to develop point cloud representations of the runway that distinguish craters, debris, and most UXO. Locating a complementary set of sensors on the opposite side of the runway would alleviate the observed shadowing, increase the density of the registered point cloud, and likely allow detection of smaller artifacts. Importantly, the synoptic data acquired by these static LiDAR sensors are dense enough to allow registration (fusion) with the smaller, denser, targeted point cloud data acquired at close range by unmanned aerial systems. The paper also discusses point cloud manipulation and 3D object recognition algorithms that the team is developing for automatic detection and geolocation of damage and objects of interest.
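As a hedged sketch of the registration (fusion) step described above, the following uses the open-source Open3D library's ICP routine to align a dense close-range UAS scan to a synoptic static-lidar scan; the file names, threshold, and identity initial guess are placeholder assumptions, not the team's actual pipeline.

```python
# Hedged sketch: register a targeted UAS point cloud to a synoptic
# static-lidar point cloud with point-to-point ICP (Open3D).
import open3d as o3d

synoptic = o3d.io.read_point_cloud("static_lidar_scan.pcd")  # runway-side
targeted = o3d.io.read_point_cloud("uas_scan.pcd")           # close range

# ICP refines a coarse initial alignment (identity here; in practice a
# GNSS- or feature-based initial guess would be supplied).
result = o3d.pipelines.registration.registration_icp(
    targeted, synoptic,
    max_correspondence_distance=0.25,   # meters
    estimation_method=o3d.pipelines.registration
        .TransformationEstimationPointToPoint())
print(result.transformation)            # 4x4 UAS -> static-lidar transform
```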