Photosimulation is a widely used method for target detection experimentation. In the defence context, such experiments are often developed to derive measures of the effectiveness of camouflage techniques in the field. This assumes a strong link between photosimulation performance and field performance, which may hold for situations where the target and background are relatively stationary, such as in land environments. However, some research suggests that this assumption fails in maritime environments, where both the target and background are moving, with results implying that the dynamic nature of the search task provides many more cues in field observation than a still image presented on a screen. In this paper, we explore the link between field observations and both photosimulation and videosimulation. Two field observation trials were conducted, at different locations (Flinders and Darwin) and with different, but similarly sized, small maritime craft. The small maritime craft deployed in the Flinders field trial, in an open ocean environment, was harder to detect in photosimulation than in the field. In contrast, the two small maritime craft deployed in the Darwin field trial, in a littoral or coastal environment, were easier to detect in videosimulation than in the field.
Photo-simulation is a widely used method for target detection experimentation. In the defence context, such experiments are often developed in order to derive measures for the effectiveness of camouflage techniques in the field. This assumes that there is a strong link between photo-simulation performance and field performance. In this paper, we report on a three-stage experiment exploring that link. First, a field experiment was conducted in which observers performed a search and detection task, seeking vehicles in a natural environment, simultaneously with image and video capture of the scene. Next, the still images were used in a photo-simulation experiment, followed by a video-simulation experiment using the captured video. Analysis of the photo-simulation results shows a moderate linear correlation between field and photo-simulation detection results (Pearson correlation coefficient, PCC = 0.64), but the photo-simulation results only moderately fit the field observation results, with a reduced χ2 statistic of 1.996. Detectability of targets in the field was mostly slightly higher than in photo-simulation. Analysis of the video-simulation results using videos of stationary and moving targets also shows moderate correlation with the field observation results (PCC = 0.62), but these are a better fit, with a reduced χ2 statistic of 1.45. However, when considering videos of moving targets and videos of stationary targets separately, there appear to be two distinct trends: video-simulation detection results are routinely higher than the field observation results for moving targets, while for stationary targets they are mostly lower than the field observations, similar to the trend noted in the photo-simulation results. There were too few moving-target videos to confidently perform a fit, but the fit statistic for the stationary-target videos becomes similar to that of the photo-simulation, with a reduced χ2 of 1.897.
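For readers wanting to reproduce this style of comparison, the sketch below computes a Pearson correlation and a reduced χ2 between paired field and simulation detection probabilities. It is a minimal illustration only: the binomial error model, the per-scene observer count, and all numbers are assumptions, not data from the paper.

```python
import numpy as np
from scipy import stats

def reduced_chi2(field_pd, sim_pd, n_obs, n_params=1):
    """Reduced chi-squared of simulation Pd against field Pd,
    assuming binomial standard errors on the field proportions."""
    p = np.clip(field_pd, 0.01, 0.99)         # avoid zero variance at Pd = 0 or 1
    sigma = np.sqrt(p * (1 - p) / n_obs)      # binomial standard error per scene
    chi2 = np.sum(((sim_pd - field_pd) / sigma) ** 2)
    return chi2 / (len(field_pd) - n_params)  # divide by degrees of freedom

# Hypothetical paired per-scene detection probabilities (illustrative only).
field_pd = np.array([0.90, 0.75, 0.60, 0.40, 0.85])
sim_pd = np.array([0.80, 0.70, 0.45, 0.35, 0.80])

pcc, _ = stats.pearsonr(field_pd, sim_pd)
print(f"PCC = {pcc:.2f}, reduced chi2 = {reduced_chi2(field_pd, sim_pd, n_obs=20):.2f}")
```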
Evaluating the signature of operational platforms has long been a focus of military research. Human observation of targets in the field is perceived to be the most accurate way to assess a target's visible signature, although the results are limited to the observers present in the field. Field observations do not introduce image capture or display artefacts, nor are they completely static like the photographs used in screen-based human observation experiments. A number of papers provide advances in the use of photographs and imagery to estimate the detectability of military platforms; however, few describe advances in conducting human observer field trials.
This paper describes the conduct of a set of human field observation trials for detecting small maritime craft in a littoral setting. The trials were conducted from the East Arm Port in Darwin in February 2018, with up to six observers at a time, and were used to investigate incremental improvements to the observation process compared with the small craft trials conducted in 2013. This location features a high number of potential distractors, which makes it more difficult to find the small target craft. The experimental changes aimed to test ways of measuring time to detect, a result not measured in the previous small craft detection experiment, through the use of video monitoring of the observation line for comparison with observer-operated stopwatches. This experiment also included the occasional addition of multiple targets of interest in the field of regard. Initial analysis of the time-to-detect data indicates that the video process may accurately assess observers' time to detect targets, but only if the observers are effectively trained. Ideas on how to further automate the human observer task are also described; however, this system has yet to be implemented. This improved human observer trial process will assist the development of signature assessment models by obtaining more accurate data from field trials, including targets moving through a dynamic scene.
Proc. SPIE 10432, Target and Background Signatures III
KEYWORDS: Target detection, 3D acquisition, Visual analytics, Digital photography, Visualization, Photography, 3D modeling, Vegetation, Airborne remote sensing, 3D image processing
Synthetic imagery could potentially enhance visible signature analysis by providing a wider range of target images in differing environmental conditions than would be feasible to collect in field trials. Achieving this requires a method for generating synthetic imagery that is verified to be realistic and that produces the same visible signature analysis results as real images. Is target detectability as measured by image metrics the same for real images and synthetic images of the same scene? Is target detectability as measured by human observer trials the same for real images and synthetic images of the same scene, and how realistic do the synthetic images need to be?
In this paper, we present the results of a small-scale exploratory study on the second question: a photosimulation experiment conducted using digital photographs and synthetic images of the same scene. Two sets of synthetic images were created: a high fidelity set created using an image generation tool, E-on Vue, and a low fidelity set created using a gaming engine, Unity 3D. The target detection results obtained using digital photographs were compared with those obtained using the two sets of synthetic images. There was a moderate correlation between the high fidelity synthetic image set and the real images in both the probability of correct detection (Pd: PCC = 0.58, SCC = 0.57) and mean search time (MST: PCC = 0.63, SCC = 0.61). There was no correlation between the low fidelity synthetic image set and the real images for Pd, but a moderate correlation for MST (PCC = 0.67, SCC = 0.55).
This paper presents the Mirage visible signature evaluation tool, designed to provide a visible signature evaluation capability that appropriately reflects the effect of scene content on the detectability of targets, providing a capability to assess visible signatures in the context of the environment. Mirage is based on a parametric evaluation of input images, assessing the value of a range of image metrics and combining them using the boosted decision tree machine learning method to produce target detectability estimates. It has been developed using experimental data from photosimulation experiments, in which human observers searched for vehicle targets in a variety of digital images. The images used for tool development are synthetic (computer generated) images, showing vehicles in many different scenes and exhibiting a wide variation in scene content. A preliminary validation has been performed using k-fold cross validation, where 90% of the image data set was used for training and 10% for testing. The results of the k-fold validation from 200 independent tests show agreement between Mirage predictions of detection probability and the observed probability of detection of r(262) = 0.63, p < 0.0001 (Pearson correlation), with a mean absolute error (MAE) of 0.21.
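The abstract describes the Mirage workflow at a high level; a minimal sketch of the same pipeline shape, boosted decision trees evaluated by 10-fold cross validation with Pearson r and MAE, is shown below using scikit-learn. The feature count, the synthetic data, and the choice of GradientBoostingRegressor are assumptions for illustration, not the tool's actual implementation.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.random((264, 8))  # hypothetical image-metric vectors, one row per image
y = np.clip(0.7 * X[:, 0] + rng.normal(0, 0.1, 264), 0, 1)  # synthetic Pd labels

# 10-fold CV: each fold trains on ~90% of the images and tests on ~10%.
preds = np.empty_like(y)
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

r, p = pearsonr(preds, y)
print(f"r = {r:.2f}, p = {p:.2g}, MAE = {mean_absolute_error(y, preds):.2f}")
```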
We present a technique for determining the perceived relative clutter among different images. The experiment involves participants ranking different sets of images in terms of clutter. The law of comparative judgment is then used to determine the relative levels of clutter on the psychological continuum. Also introduced are two metrics for predicting the level of clutter in an image: the first uses a graph-based image segmentation algorithm, and the second uses the change in gradients across the image. We show how these two metrics, along with an existing metric based on wavelets, can successfully predict the perceived clutter in an image.
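The abstract does not give the metric definitions in full; the sketch below pairs the classic Felzenszwalb graph-based segmentation (scikit-image) with a simple gradient statistic as plausible stand-ins for the two proposed clutter metrics. Parameter values are assumptions.

```python
import numpy as np
from skimage import segmentation

def segmentation_clutter(image_rgb, scale=100.0, sigma=0.8, min_size=50):
    """Clutter proxy: number of regions found by Felzenszwalb's
    graph-based segmentation; more regions suggests more clutter."""
    labels = segmentation.felzenszwalb(
        image_rgb, scale=scale, sigma=sigma, min_size=min_size)
    return int(labels.max()) + 1

def gradient_clutter(image_gray):
    """Clutter proxy: mean magnitude of the intensity gradient field."""
    gy, gx = np.gradient(image_gray.astype(float))
    return float(np.mean(np.hypot(gx, gy)))
```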
Proc. SPIE 9997, Target and Background Signatures II
KEYWORDS: Target detection, 3D acquisition, Video, Photography, Clouds, 3D modeling, Light sources and illumination, Target acquisition, 3D image processing, Received signal strength
This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise shortcuts to ensure that the games run smoothly in real time to create an immersive effect. Whilst these shortcuts may have an impact upon the realism of the synthetic imagery, they promise a much more time-efficient method of developing imagery of different environmental conditions and of investigating the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared with real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation and evaluating how equivalent synthetic imagery is to real photographs.
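As a point of reference for these metrics, the sketch below implements Δμ- and Doyle-style target distinctness measures under one commonly quoted set of definitions (mean and standard-deviation contrasts between target and local background pixels). The paper may use different variants, so treat these as assumed forms.

```python
import numpy as np

def delta_mu(target_px, background_px):
    """Δμ-style metric: absolute difference of mean intensities
    between target and local-background pixel samples (assumed form)."""
    return abs(target_px.mean() - background_px.mean())

def doyle(target_px, background_px):
    """Doyle-style metric: root sum of squares of the mean and
    standard-deviation differences (assumed, commonly quoted form)."""
    d_mean = target_px.mean() - background_px.mean()
    d_std = target_px.std() - background_px.std()
    return float(np.hypot(d_mean, d_std))
```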
Two texture metrics based on gray level co‐occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics and are based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance of the nine military vehicles in complex natural scenes found in the Search_2 dataset are presented. Comparison is also made between four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
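GLCE statistics are derived from co-occurrence probability distributions; as a related, runnable illustration, the sketch below computes the standard GLCM energy feature with scikit-image (graycomatrix/graycoprops). This is a stand-in for, not a reproduction of, the paper's GLCE energy metric.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_energy(image_gray_u8, distances=(1,), angles=(0.0, np.pi / 2)):
    """Mean GLCM energy over the given pixel offsets; lower energy
    indicates a more varied (and so potentially more cluttered) texture."""
    glcm = graycomatrix(image_gray_u8, distances=list(distances),
                        angles=list(angles), levels=256,
                        symmetric=True, normed=True)
    return float(graycoprops(glcm, "energy").mean())
```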
Over the past 50 years, the majority of detection models used to assess visible signatures have been developed and validated using static imagery. Among these models are the German-developed CAMAELEON (CAMouflage Assessment by Evaluation of Local Energy, Spatial Frequency and OrieNtation) model and the U.S. Army's Night Vision and Electronic Sensors Directorate (NVESD) ACQUIRE and TTP (Targeting Task Performance) models. All these models gathered the necessary human observer data for development and validation from static images in photosimulation experiments. In this paper, we compare the results of a field observation trial to a static photosimulation experiment.
The probability of detection obtained from the field observation trial was compared to that obtained from the static photosimulation trial. The comparison showed good correlation between the field trial and static image photosimulation detection probabilities, with a Spearman correlation coefficient of 0.59. The photosimulation detection task was found to be significantly harder than the field observation detection task, suggesting that static image photosimulation results may need correction to represent field detection performance when used to develop and validate maritime visible signature evaluation tools.
The TNO Human Factors Search 2 dataset is a valuable resource for studies in target detection, providing researchers with observational data against which image-based target distinctness metrics and detection models can be tested. The observational data provided with the Search 2 dataset were created by human observers searching colour images projected from a slide projector. Many target distinctness metric studies are, however, carried out not on colour images but on images that have been processed into greyscale by various means, usually for ease of analysis and meaningful interpretation. The utility of a metric is usually assessed by analysing the correlation between metric results and recorded observational results. However, the question remains how well contrast metrics analysed from monochromatic images can be expected to compare with observational results from colour images. We present the results of a photosimulation experiment conducted using a monochromatic representation of the Search 2 dataset and an analysis of several target distinctness metrics. The monochromatic images presented to observers were created by processing the Search 2 images into the L*, a* and b* colour space representation and presenting the L* (lightness) image. The results of this experiment are compared with the original Search 2 results, showing strong correlation (0.83) between the monochrome and colour experiments in terms of both correct target detection and search time. Target distinctness metrics computed from these images are compared to the results of the photosimulation experiments and the original Search 2 results.
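The L* extraction step described above is straightforward to reproduce; a minimal sketch using scikit-image follows, with the rescaling to 8-bit chosen here for display and not taken from the paper.

```python
import numpy as np
from skimage import color, io

def lightness_image(path):
    """Convert an sRGB image to CIELAB and return the L* (lightness)
    channel rescaled to 8-bit for on-screen presentation."""
    rgb = io.imread(path)
    lab = color.rgb2lab(rgb[..., :3])   # drop any alpha channel
    lightness = lab[..., 0]             # L* lies in [0, 100]
    return np.round(lightness / 100.0 * 255.0).astype(np.uint8)
```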
The U.S. Army's target acquisition models, the ACQUIRE and Target Task Performance (TTP) models, have been employed for many years to assess the performance of thermal infrared sensors. In recent years, ACQUIRE and the TTP models have been adapted to assess the performance of visible sensors. These adaptations have been primarily focused on the performance of an observer viewing a display device. This paper describes an implementation of the TTP model to predict field observer performance in maritime scenes.

Predictions of the TTP model implementation were compared to observations of a small watercraft taken in a field trial. In this field trial, 11 Australian Navy observers viewed a small watercraft in an open ocean scene. Comparisons of the observed probability of detection to predictions of the TTP model implementation showed that the normalised RSS metric overestimated the probability of detection. The normalised Pixel Contrast using a literature value for V50 yielded a correlation of 0.58 between the predicted and observed probability of detection. With a measured value of N50 or V50 for the small watercraft used in this investigation, this implementation of the TTP model may yield stronger correlation with observed probability of detection.
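The TTP model maps resolvable cycles on target, V, to detection probability through the target transfer probability function. A commonly published form is sketched below; the paper's exact implementation, including its V50 values, is not given in the abstract, so this is illustrative only.

```python
def ttpf(V, V50):
    """Target transfer probability function (commonly published form):
    P = (V/V50)^E / (1 + (V/V50)^E), with E = 1.51 + 0.24 * (V/V50).
    V: resolvable cycles on target; V50: cycles giving P = 0.5."""
    ratio = V / V50
    exponent = 1.51 + 0.24 * ratio
    return ratio**exponent / (1.0 + ratio**exponent)
```

At V = V50 the function returns 0.5 by construction, and a larger V50 (a harder task) shifts the curve so that more resolvable cycles are needed to reach the same detection probability.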