Photo-simulation is a widely used method for target detection experimentation. In the defence context, such experiments are often developed to derive measures of the effectiveness of camouflage techniques in the field, on the assumption that there is a strong link between photo-simulation performance and field performance. In this paper, we report on a three-stage experiment exploring that link. First, a field experiment was conducted in which observers performed a search and detection task, seeking vehicles in a natural environment, while still images and video of the scene were captured simultaneously. The still images were then used in a photo-simulation experiment, followed by a video-simulation experiment using the captured video. Analysis of the photo-simulation results shows a moderate linear correlation between field and photo-simulation detection results (Pearson correlation coefficient, PCC = 0.64), but the photo-simulation results only moderately fit the field observation results, with a reduced χ2 statistic of 1.996. Detectability of targets in the field was mostly slightly higher than in photo-simulation. Analysis of the video-simulation results using videos of stationary and moving targets also shows a moderate correlation with the field observation results (PCC = 0.62), but a better fit with the field observation results, with a reduced χ2 statistic of 1.45. However, when videos of moving targets and videos of stationary targets are considered separately, two distinct trends appear: the video-simulation detection results are routinely higher than the field observation results for moving targets, while for stationary targets they are mostly lower than the field observations, similar to the trend noted in the photo-simulation results.
There were too few moving target videos to confidently perform a fit, but the fit statistics for the stationary target videos become similar to those of the photo-simulation, with a reduced χ2 = 1.897.
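The comparison above rests on two standard statistics: the Pearson correlation between field and simulation detection results, and a reduced χ2 for the goodness of fit. A minimal sketch of how such statistics can be computed is given below; the detection probabilities and the per-target uncertainty are illustrative values only, not the experiment's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-target detection probabilities (illustrative only).
field = np.array([0.82, 0.55, 0.91, 0.40, 0.67, 0.73])
photo_sim = np.array([0.75, 0.48, 0.88, 0.35, 0.70, 0.60])
sigma = np.full_like(field, 0.08)  # assumed per-target uncertainty

# Pearson correlation between field and photo-simulation results.
pcc, p_value = stats.pearsonr(field, photo_sim)

# Reduced chi-squared, treating the photo-simulation values as the
# model for the field observations (dof = N - 1 here).
chi2 = np.sum(((field - photo_sim) / sigma) ** 2)
chi2_red = chi2 / (len(field) - 1)
print(f"PCC = {pcc:.2f}, reduced chi2 = {chi2_red:.2f}")
```

A reduced χ2 near 1 indicates the model (here, the simulation results) fits the observations to within the stated uncertainties; values well above 1 indicate a poorer fit.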
This paper presents the Mirage visible signature evaluation tool, designed to provide a visible signature evaluation capability that appropriately reflects the effect of scene content on the detectability of targets, providing a capability to assess visible signatures in the context of the environment. Mirage is based on a parametric evaluation of input images, assessing the value of a range of image metrics and combining them using the boosted decision tree machine learning method to produce target detectability estimates. It has been developed using experimental data from photo-simulation experiments, in which human observers search for vehicle targets in a variety of digital images. The images used for tool development are synthetic (computer generated) images, showing vehicles in many different scenes and exhibiting a wide variation in scene content. A preliminary validation has been performed using k-fold cross validation, where 90% of the image data set was used for training and 10% was used for testing. The results of the k-fold validation from 200 independent tests show agreement between Mirage predictions of detection probability and the observed probability of detection of r(262) = 0.63, p < 0.0001 (Pearson correlation) and a mean absolute error (MAE) of 0.21.
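The evaluation scheme described above (boosted decision trees trained on per-image metrics, validated by k-fold cross validation with a 90/10 split) can be sketched as follows. This is not Mirage's actual implementation: the feature vectors and targets below are synthetic stand-ins, and scikit-learn's gradient-boosted trees are assumed as the boosting method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Synthetic stand-ins: 300 images, 8 image metrics each, and an
# observed detection probability per image (illustrative only).
X = rng.random((300, 8))
y = np.clip(0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.05, 300), 0, 1)

# 10-fold cross validation: each fold trains on 90% and tests on 10%.
maes = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                 random_state=0).split(X):
    model = GradientBoostingRegressor()  # boosted decision trees
    model.fit(X[train_idx], y[train_idx])
    maes.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))
print(f"mean MAE over folds = {np.mean(maes):.3f}")
```

Repeating the whole procedure with different random fold assignments, as in the 200 independent tests described above, gives a distribution of prediction accuracy rather than a single estimate.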
We present a technique for determining the perceived relative clutter among different images. The experiment involves participants ranking different sets of images in terms of clutter. The law of comparative judgment is then used to determine the relative levels of clutter on the psychological continuum. Also introduced are two metrics for predicting the level of clutter in an image: the first uses a graph-based image segmentation algorithm, and the second uses the change in gradients across the image. We show how these two metrics, along with an existing metric based on wavelets, can successfully predict the perceived clutter in an image.
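To illustrate the flavour of a gradient-based clutter metric, the sketch below scores an image by the mean magnitude of its finite-difference gradient, so that images with more edge activity score higher. This is a simplified stand-in under assumed definitions, not the metric developed in the paper.

```python
import numpy as np

def gradient_clutter(image: np.ndarray) -> float:
    """Crude clutter score: mean magnitude of the finite-difference
    gradient over a greyscale image (higher = more visual clutter).
    A simplified illustration, not the paper's actual metric."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

# A uniform image has no gradients, so it scores lower than a
# noisy (cluttered) one.
flat = np.full((64, 64), 0.5)
noisy = np.random.default_rng(1).random((64, 64))
print(gradient_clutter(flat), gradient_clutter(noisy))
```

In practice such a score would be computed for each image in the ranked sets and compared against the perceived clutter values placed on the psychological continuum by the law of comparative judgment.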