It is well established that object recognition, whether by human perception or by detection/identification algorithms, is confounded by false alarms (e.g., ). These false alarms are often caused by static or transient features of the background. Machine learning can help discriminate between real targets and false alarms, but it requires large and diverse image sets for training. The number of scenarios, environmental processes, material properties, and states to be assessed is overwhelming and cannot practically be explored through field/laboratory collections alone. High-fidelity, physics-based simulation can now augment training sets with accurate synthetic sensor imagery, but at high computational cost. To make synthetic image generation practical, it should include the fewest processes and the coarsest spatiotemporal resolution needed to capture the system physics/state and accomplish the training.
Among the features known or expected to generate false alarms are: (1) soil/material variability (spatial heterogeneity in density, mineral composition, and reflectance), (2) non-threat objects (rocks, trash), (3) soil disturbance (physical and spectral effects), (4) soil processes (moisture migration, evaporation), (5) surface hydrology (rainfall runoff and surface ponding), (6) vegetation processes (transpiration, rainfall interception and evaporation, non-saturating rain events, multi-layer canopy including thatch, and discrete versus parameterized vegetation), and (7) energy reflected or emitted by other scene components. This paper presents a suite of computational tools that will allow the community to begin exploring the relative importance of these features and to determine when and how individual processes must be included explicitly or through simplifying assumptions/parameterizations. The decision to simplify is ultimately justified by the performance of a detection algorithm trained on the generated synthetic imagery. Knowing the required level of modeling detail is critical for designing test matrices to build image sets capable of training improved algorithms.
A related consideration in the creation of synthetic sensor imagery is validation of these complex, coupled modeling tools. Very few analytical solutions or laboratory experiments include enough complexity to thoroughly test model formulations. Conversely, field data collections normally cannot be characterized and measured with sufficient spatial and temporal detail to support true validation. Intermediate-scale physical exploration of near-surface soil and atmospheric processes (e.g., Trautz et al., ) offers an alternative that bridges the laboratory column and field scales. This approach allows many field-scale processes and effects to be reproduced, manipulated, isolated, and measured within a well-characterized, controlled test environment at the requisite spatiotemporal resolutions in both the air and the soil.