As the U.S. Army prepares for future conflicts and multi-domain operations, methods to rapidly and continuously characterize the land-sea interface during littoral entry are essential to ensuring maneuverability across these domains. In the maritime domain, nearshore bathymetry and surf-zone sandbars define water depth and wave behavior, which in turn drive landing tactics and the feasibility and configuration of littoral operations. In the land domain, beach and dune topography define slopes and transit paths, which drive staging-area locations and affect the maneuverability of both troops and equipment. Accurately predicting surf-zone state and littoral morphology evolution requires synthesizing a range of complex nonlinear physics that drive these changes. Using imagery of the littorals from unmanned aerial systems together with physics-based models, the U.S. Army Engineer Research and Development Center has developed novel data-assimilation approaches that estimate water depth, littoral conditions, and subaerial beach topography from wave kinematics and photogrammetric algorithms, and that quantify the corresponding uncertainties. To improve the usefulness (speed of the calculations) and accuracy (accounting for known errors related to optical transfer functions and nonlinear wave dynamics) of this technology during littoral operations, machine-learning-based computational tools are being investigated that can translate short sequences of littoral imagery directly into surf-zone characterizations in real time by substituting for, or augmenting, computationally complex models. To accomplish this, a photo-realistic, nonlinear wave model, Celeris, is used to generate synthetic imagery of a range of surf-zone environments. This synthetic imagery is crucial to developing the data sets necessary to train deep neural networks to solve the nonlinear depth-inversion problem from observations of wave kinematics.
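For context, the classical starting point for this depth-inversion problem is the linear dispersion relation, omega^2 = g k tanh(k h), which maps a wave frequency and wavenumber observed in imagery to the local depth h; the nonlinear surf-zone dynamics that Celeris resolves are precisely what this closed-form inversion misses. The Python sketch below illustrates only that linear inversion; the function and variable names are illustrative and are not part of the ERDC toolchain.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def invert_depth(omega, k):
    """Estimate water depth from an observed wave angular frequency (rad/s)
    and wavenumber (rad/m) using the linear dispersion relation
    omega^2 = g * k * tanh(k * h).  Returns depth in meters."""
    ratio = omega**2 / (G * k)
    # Physically tanh(k*h) < 1; clip to guard against noisy kinematic estimates.
    ratio = np.clip(ratio, 1e-6, 1.0 - 1e-6)
    return np.arctanh(ratio) / k

# Example: an 8 s wave with a 60 m observed wavelength.
omega = 2 * np.pi / 8.0
k = 2 * np.pi / 60.0
print(f"Estimated depth: {invert_depth(omega, k):.1f} m")  # roughly 6-7 m
```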
It is well established that object recognition by human perception and by detection/identification algorithms is confounded by false alarms (e.g., ). These false alarms are often caused by static or transient features of the background. Machine learning can help discriminate between real targets and false alarms, but it requires large and diverse image sets for training. The potential number of scenarios, environmental processes, material properties, and states to be assessed is overwhelming and cannot practically be explored by field/lab collections alone. High-fidelity, physics-based simulation can now augment training sets with accurate synthetic sensor imagery, but at a high computational cost. To make synthetic image generation practical, it should include the fewest processes and the coarsest spatiotemporal resolution needed to capture the system physics/state and accomplish the training.
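As one illustration of how such augmentation might be organized, the sketch below mixes measured and simulated images into a single training manifest at a chosen synthetic fraction. The directory layout, file format, and the synthetic_fraction parameter are assumptions made for illustration, not a description of an existing pipeline.

```python
from pathlib import Path
import random

def build_training_manifest(real_dir, synthetic_dir, synthetic_fraction=0.5, seed=0):
    """Combine measured and physics-based synthetic sensor images into one
    training manifest.  synthetic_fraction is the share of the final set drawn
    from simulation; varying it is one way to probe how much synthetic imagery
    a detector's training actually benefits from."""
    real = [(str(p), "real") for p in sorted(Path(real_dir).glob("*.png"))]
    synth = [(str(p), "synthetic") for p in sorted(Path(synthetic_dir).glob("*.png"))]

    # Number of synthetic images so that they make up synthetic_fraction of the
    # combined set: n_synth / (n_real + n_synth) = synthetic_fraction.
    n_synth = int(round(len(real) * synthetic_fraction / max(1.0 - synthetic_fraction, 1e-6)))

    rng = random.Random(seed)
    rng.shuffle(synth)
    manifest = real + synth[:n_synth]
    rng.shuffle(manifest)
    return manifest
```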
Among the features known or expected to generate false alarms are: (1) soil/material variability (spatial heterogeneity in density, mineral composition, and reflectance); (2) non-threat objects (rocks, trash); (3) soil disturbance (physical and spectral effects); (4) soil processes (moisture migration, evaporation); (5) surface hydrology (rainfall runoff and surface ponding); (6) vegetation processes (transpiration, rainfall interception and evaporation, non-saturating rain events, multi-layer canopy including thatch, and discrete versus parameterized vegetation); and (7) energy reflected or emitted by other scene components. This paper presents a suite of computational tools that will allow the community to begin exploring the relative importance of these features and to determine when and how individual processes must be included explicitly or through simplifying assumptions/parameterizations. The justification for any such simplification is ultimately driven by the performance of a detection algorithm using the generated synthetic imagery. Knowing the required level of modeling detail is critical for designing test matrices for building image sets capable of training improved algorithms.
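One minimal way to operationalize that screening, sketched below under several assumptions, is a leave-one-out loop over candidate processes: regenerate the synthetic image set with one process removed (or parameterized), retrain and score the detector, and treat the resulting performance drop as evidence of whether that process must be modeled explicitly. The process names and the generate_images and train_and_score callbacks are hypothetical placeholders, not an existing interface in the tools presented here.

```python
# Candidate scene processes a synthetic-image generator might toggle.
# These names are illustrative placeholders, not an actual API.
PROCESSES = [
    "soil_heterogeneity", "non_threat_objects", "soil_disturbance",
    "soil_moisture_processes", "surface_hydrology", "vegetation_processes",
    "scene_reflections",
]

def evaluate_process_set(enabled, generate_images, train_and_score):
    """Generate a synthetic image set with only the selected processes modeled
    explicitly, train the detector on it, and return a scalar detection score
    (e.g., probability of detection at a fixed false-alarm rate)."""
    images = generate_images(enabled_processes=enabled)
    return train_and_score(images)

def rank_process_importance(generate_images, train_and_score):
    """Leave-one-out screening: the drop in detector performance when a single
    process is omitted is a proxy for whether it must be included explicitly
    rather than through a simplifying parameterization."""
    baseline = evaluate_process_set(set(PROCESSES), generate_images, train_and_score)
    impact = {}
    for process in PROCESSES:
        reduced = set(PROCESSES) - {process}
        impact[process] = baseline - evaluate_process_set(reduced, generate_images, train_and_score)
    # Largest performance drop first: these processes most need explicit modeling.
    return dict(sorted(impact.items(), key=lambda kv: kv[1], reverse=True))
```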
A related consideration in the creation of synthetic sensor imagery is validation of these complex, coupled modeling tools. Very few analytical solutions or laboratory experiments include enough complexity to thoroughly test model formulations. Conversely, field conditions cannot normally be characterized and measured with sufficient spatial and temporal detail to support true validation. Intermediate-scale physical exploration of near-surface soil and atmospheric processes (e.g., Trautz et al.) offers an alternative between the laboratory-column and field scales. This allows many field-scale-dependent processes and effects to be reproduced, manipulated, isolated, and measured within a well-characterized and controlled test environment at the requisite spatiotemporal resolutions in both the air and the soil.