Training object detection algorithms to operate in complex geo-environments remains a significant challenge, necessitating large and diverse datasets (i.e., unique backgrounds and conditions) that are not always readily available. Physically generating the requisite data can also be both cost and time prohibitive depending on the object(s) and area(s) of interest, especially in the case of multi-spectral and hyper-spectral imagery. Thus, there is increasing interest in the use of synthetic data to supplement existing physical datasets. To this end, the US Army Engineer Research and Development Center (ERDC) continues to develop a computational test bed with a tool suite called the Virtual Environmental Simulation for Physics-based Analysis (VESPA) to support synthetic multi-spectral and hyper-spectral EO/IR imagery generation. The VESPA consists of integrated (1) scene generation tools, (2) multi-fidelity models, optimized for high performance computing, for simulating heat and mass transfer and atmospheric energy propagation in geo-environments and climates worldwide, (3) data interrogation utilities, and (4) component-level sensor models capable of producing AI/ML-ready near- and far-field imagery comparable to that produced by real sensors. This study presents an overview of the VESPA, new advances and capabilities, and results from a recent detailed validation and verification study.
The increasing deployment of AI in critical sectors necessitates advancements in explainable AI (XAI) to ensure transparency and trustworthiness of AI decisions. This paper introduces a novel methodology that leverages the Virtual Environmental Simulation for Physics-based Analysis (VESPA) framework in conjunction with Randomized Input Sampling for Explanation (RISE) to provide enhanced explainability for AI models, particularly in complex simulated environments. VESPA, known for its high-fidelity, physics-based simulations across diverse conditions, generates a vast dataset encompassing various sensor configurations, environmental factors, and material responses. This dataset serves as the foundation for applying RISE, a model-agnostic approach that generates pixel-level importance maps by probing the AI model with masked versions of the input images. Through this integration, we offer a systematic way to visualize and understand the influence of different environmental elements on AI decisions. Our approach not only sheds light on the "black box" of AI decision-making processes but also provides a scalable framework for evaluating AI models' robustness and reliability under a wide array of simulated scenarios.
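For concreteness, the sketch below illustrates the core RISE procedure described in the abstract above: probe a black-box scoring function with randomly masked copies of an image and accumulate a pixel-level importance map from the resulting scores. The `model_score` interface, mask count, and grid parameters are illustrative assumptions rather than details taken from the paper, and nearest-neighbor upsampling stands in for RISE's bilinear mask interpolation.

```python
import numpy as np

def rise_saliency(model_score, image, n_masks=1000, grid=8, p=0.5, seed=None):
    """Minimal RISE sketch (illustrative parameters, not from the paper).

    model_score: hypothetical callable taking one H x W x C image and
        returning a scalar score for the class being explained.
    Returns an H x W importance map: bright pixels raised the score when kept.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    ch, cw = -(-H // grid), -(-W // grid)   # ceil division: upsampling cell size
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # Low-resolution Bernoulli grid; nearest-neighbor upsample with a
        # random sub-cell shift (RISE itself uses bilinear interpolation).
        low = (rng.random((grid + 1, grid + 1)) < p).astype(float)
        up = np.kron(low, np.ones((ch, cw)))
        dy, dx = rng.integers(0, ch), rng.integers(0, cw)
        mask = up[dy:dy + H, dx:dx + W]
        # Probe the black-box model with the masked image and accumulate
        # the mask weighted by the resulting class score.
        saliency += model_score(image * mask[..., None]) * mask
    return saliency / (n_masks * p)   # normalize by expected mask coverage
```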
Automatic target recognition (ATR) algorithms that rely on machine learning approaches are limited by the quality of the training dataset and by out-of-domain performance. The performance of a two-step ATR algorithm (ATR-EnvI) that relies on fusing thermal imagery with environmental data is investigated using thermal imagery containing buried and surface objects collected in New Hampshire, Mississippi, Arizona, and Panama. An autoencoder neural network is used to encode the salient environmental conditions for a given climatic condition into an environmental feature vector. The environmental feature vector allows for the inclusion of environmental data with varying dimensions and robustly treats missing data. Using this architecture, we evaluate the performance of the two-step ATR on a test dataset collected in an unseen climatic condition (e.g., a tropical wet climate) when the training dataset contains imagery collected in similar (e.g., subtropical) and dissimilar climates. Lastly, we evaluate the impact that including physics-based synthetic training data has on performance in out-of-domain climates.
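The abstract does not spell out the encoder architecture, but a minimal sketch of the masking idea is shown below, assuming a simple fully connected autoencoder in PyTorch: missing environmental channels are zero-filled and paired with a binary availability mask, so records of varying dimension map to one fixed-length feature vector. All layer sizes, channel counts, and the `masked_mse` loss are hypothetical.

```python
import torch
import torch.nn as nn

class EnvEncoder(nn.Module):
    """Sketch: compress environmental measurements (air temperature, humidity,
    solar loading, soil moisture, ...) into a fixed-length feature vector.
    Missing channels are zero-filled and flagged with a binary mask so that
    records of varying dimension share one input layout."""
    def __init__(self, n_inputs=16, n_latent=8):
        super().__init__()
        # Input is [values, mask]: 2 * n_inputs features total.
        self.encoder = nn.Sequential(
            nn.Linear(2 * n_inputs, 32), nn.ReLU(),
            nn.Linear(32, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32), nn.ReLU(),
            nn.Linear(32, n_inputs),
        )

    def forward(self, values, mask):
        # Zero out missing entries; the mask tells the network which zeros
        # are real observations and which are gaps.
        z = self.encoder(torch.cat([values * mask, mask], dim=-1))
        return z, self.decoder(z)

def masked_mse(recon, values, mask):
    # Reconstruction loss over observed entries only.
    return ((recon - values) ** 2 * mask).sum() / mask.sum().clamp(min=1)
```

The feature vector `z` would then be fused with the thermal imagery in the second ATR step; that fusion stage is not sketched here.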
Autonomous vehicles (AVs) employ a wide range of sensing modalities including LiDAR, radar, RGB cameras, and more recently infrared (IR) sensors. IR sensors are becoming an increasingly common component of AVs' sensor packages to provide redundancy and enhanced capabilities in conditions that are adverse for other types of sensors. For example, while RGB cameras are sensitive to lighting conditions and LiDAR performance is degraded in inclement weather such as rain, IR sensors are unaffected by lighting conditions and can contribute additional meaningful information in inclement weather. The US Army Corps of Engineers, Engineer Research and Development Center (ERDC) has developed the ERDC Computational Test Bed (CTB) to provide a suite of tools that can be used to support virtual development and testing of AVs. The CTB includes physics-based vehicle-terrain interaction, sensor and environment modeling, geo-environmental thermal modeling, software-in-the-loop capabilities, and virtual environment generation. Thermal modeling capabilities within the CTB utilize decades of near-surface phenomenology and autonomy research. Recent additions have been made to support the large domains commonly required for autonomous vehicle operations. These additions provide high-fidelity, physics-based thermal transfer and IR sensor models for creating high-quality synthetic imagery simulating IR sensors mounted on AVs. Highly parallelized thermal and IR sensor models for large-domain AV operations will be presented in this paper.
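As a rough illustration of the kind of computation such a thermal model performs (not the CTB's actual solver, which couples many more processes), the sketch below advances a 1-D soil temperature profile by one explicit finite-difference step of the transient heat conduction equation dT/dt = α ∂²T/∂z². The grid spacing, diffusivity, and boundary conditions are all assumptions.

```python
import numpy as np

def step_soil_temperature(T, dz, dt, alpha, T_surface):
    """Advance a 1-D soil temperature profile T (top to bottom) by one
    explicit finite-difference step of dT/dt = alpha * d2T/dz2.
    Numerically stable only for dt <= dz**2 / (2 * alpha).
    """
    T = T.copy()
    T[0] = T_surface                          # surface forcing (Dirichlet)
    # Discrete second spatial derivative on the interior nodes.
    lap = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dz**2
    T[1:-1] += dt * alpha * lap
    T[-1] = T[-2]                             # zero-flux deep boundary
    return T
```

An operational model would replace the fixed surface temperature with a full surface energy balance (solar loading, longwave exchange, convection, evaporation) and apply the update across millions of terrain cells, which is where the highly parallelized large-domain implementations described above matter.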
The United States Army Corps of Engineers (USACE) Engineer Research and Development Center (ERDC) has developed a suite of computational tools called the Computational Test Bed (CTB) for advanced high-fidelity physics-based autonomous vehicle sensor and environment simulations. These tools provide insights into onboard navigation, image processing, sensor fusion techniques, and rapid data generation for artificial intelligence and machine learning techniques across the full spectrum (visible, NIR, MWIR, and LWIR) and for various sensor modalities (LiDAR, EO, radar). This paper presents ERDC's CTB, which allows the community to design, develop, test, and evaluate the entire autonomy space, from machine learning algorithm development using augmented synthetic data to large-scale autonomous system testing.
It is well established that object recognition by human perception and by detection/identification algorithms is confounded by false alarms (e.g., [1]). These false alarms often are caused by static or transient features of the background. Machine learning can help discriminate between real targets and false alarms, but requires large and diverse image sets for training. The potential number of scenarios, environmental processes, material properties and states to be assessed is overwhelming and cannot practically be explored by field/lab collections alone. High-fidelity, physics-based simulation can now augment training sets with accurate synthetic sensor imagery, but at a high computational cost. To make synthetic image generation practical, it should include the fewest processes and coarsest spatiotemporal resolution needed to capture the system physics/state and accomplish the training.
Among the features known or expected to generate false alarms are: (1) soil/material variability (spatial heterogeneity in density, mineral composition, and reflectance); (2) non-threat objects (rocks, trash); (3) soil disturbance (physical and spectral effects); (4) soil processes (moisture migration, evaporation); (5) surface hydrology (rainfall runoff and surface ponding); (6) vegetation processes (transpiration, rainfall interception and evaporation, non-saturating rain events, multi-layer canopies including thatch, and discrete versus parameterized vegetation); and (7) energy reflected or emitted by other scene components. This paper presents a suite of computational tools that will allow the community to begin to explore the relative importance of these features and determine when and how individual processes must be included explicitly or through simplifying assumptions/parameterizations. The decision to simplify is justified ultimately by the performance of a detection algorithm trained with the generated synthetic imagery. Knowing the required level of modeling detail is critical for designing test matrices for building image sets capable of training improved algorithms.
A related consideration in the creation of synthetic sensor imagery is validation of these complex, coupled modeling tools. Very few analytical solutions or laboratory experiments include enough complexity to thoroughly test model formulations. Conversely, field data collections normally cannot be characterized and measured with sufficient spatial and temporal detail to support true validation. Intermediate-scale physical exploration of near-surface soil and atmospheric processes (e.g., Trautz et al. [2]) offers an alternative between the laboratory-column and field scales. This allows many field-scale-dependent processes and effects to be reproduced, manipulated, isolated, and measured within a well-characterized and controlled test environment at requisite spatiotemporal resolutions in both the air and soil.