A new approach to linear discriminant analysis (LDA), called orthogonal rotational LDA (ORLDA), is presented. Using ORLDA and properly accounting for target size, we developed a new clutter metric based on the Laplacian pyramid (LP) decomposition of clutter images. The metric achieves correlation exceeding 98% with expert human labeling of clutter levels in a set of 244 infrared images. The metric is built from the set of weights on the LP levels that best classifies images into the clutter levels assigned by an expert human observer. LDA is applied as a preprocessing step to classification, but standard LDA suffers from several limitations in this application. We therefore propose ORLDA, which uses orthonormal geometric rotations: each rotation brings the LP feature space closer to the LDA solution while retaining orthogonality of the feature space. To understand the effects of target size on clutter, we applied ORLDA at different target sizes; the outputs are easily related because they are functions of the orthogonal rotation angles. Finally, we used Bayesian decision theory to learn class boundaries for clutter levels at different target sizes.
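The core idea of the rotation-based approach can be illustrated with a minimal sketch: a Givens rotation of an orthonormal feature basis, with the rotation angle chosen to maximize class separation (Fisher criterion) along one axis. The two-feature toy data standing in for per-level LP energies, and the single-plane angle sweep, are illustrative assumptions; the abstract does not specify ORLDA's exact update rule.

```python
import numpy as np

def givens_rotation(dim, i, j, theta):
    """Orthonormal Givens rotation in the (i, j) coordinate plane."""
    R = np.eye(dim)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = c; R[j, j] = c
    R[i, j] = -s; R[j, i] = s
    return R

def fisher_score(X, y, axis=0):
    """Between-class over within-class variance along one feature axis."""
    f0, f1 = X[y == 0, axis], X[y == 1, axis]
    return (f0.mean() - f1.mean()) ** 2 / (f0.var() + f1.var())

rng = np.random.default_rng(0)
# Toy stand-in for per-level LP energies of two clutter classes
X0 = rng.normal([1.0, 2.0], 0.3, size=(200, 2))
X1 = rng.normal([2.0, 1.0], 0.3, size=(200, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]

# Sweep the rotation angle; the best angle aligns axis 0 with the
# discriminant direction while the basis stays orthonormal throughout.
thetas = np.linspace(0.0, np.pi, 181)
scores = [fisher_score(X @ givens_rotation(2, 0, 1, t), y) for t in thetas]
best = thetas[int(np.argmax(scores))]
R = givens_rotation(2, 0, 1, best)
```

Because each step is a pure rotation, feature spaces fit at different target sizes differ only in their rotation angles, which is what makes the outputs easy to relate across sizes.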
Perception tests establish the effects of spatially band-limited noise and blur on human observer performance. Previously, Bijl showed that the contrast threshold of a target image with spatially band-limited noise is a function of the noise spatial frequency. He used the method of adjustment to find the contrast threshold for each noise frequency band. A noise band exists in which the target contrast threshold reaches a peak relative to the thresholds at higher or lower noise frequencies. Bijl also showed that this peak shifts as high-frequency information is removed from the target images.
To further establish these results, we performed three forced-choice experiments: first, an identification (ID) experiment using a Night Vision and Electronic Sensors Directorate (NVESD) twelve-target infrared tracked-vehicle image set; second, a bar-pattern resolving experiment; and third, a Triangle Orientation Discrimination (TOD) experiment. In all of the experiments, the test images were first spatially blurred, and then spatially band-limited noise was added. The noise center spatial frequency was varied in half-octave increments over seven octaves. Observers were shown images of varying target-to-noise contrast, and a contrast threshold was calculated for each spatial noise band. Finally, we compared the Targeting Task Performance (TTP) human observer model's predictions for performance in the presence of spatially band-limited noise with these experimental results.
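The noise-generation step described above can be sketched as follows: white noise is filtered in the Fourier domain by a radial band-pass centered at each test frequency, and the 15 center frequencies span seven octaves in half-octave steps. The Gaussian band shape, the one-octave band width, and the 0.25 cycles/pixel starting frequency are assumptions for illustration; the experiments' exact filter is not given in the abstract.

```python
import numpy as np

def band_limited_noise(shape, center_freq, octaves=1.0, rng=None):
    """Spatially band-limited noise: white noise passed through a radial
    Gaussian band-pass centered at center_freq (cycles/pixel).
    The Gaussian band shape is an assumption for this sketch."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.standard_normal(shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    r = np.sqrt(fx**2 + fy**2)
    # Band width set by the octave span of the pass band
    sigma = center_freq * (2 ** (octaves / 2) - 2 ** (-octaves / 2)) / 2
    H = np.exp(-0.5 * ((r - center_freq) / sigma) ** 2)
    filtered = np.fft.ifft2(np.fft.fft2(noise) * H).real
    return filtered / filtered.std()  # unit-variance noise field

# Half-octave increments over seven octaves: 15 center frequencies,
# here descending from an assumed 0.25 cycles/pixel
centers = 0.25 * 2.0 ** (-np.arange(0, 7.5, 0.5))
```

At each center frequency, scaled copies of this noise field would be added to the blurred target image to probe the observer's contrast threshold for that band.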
This study determines the effectiveness of a number of image fusion algorithms through the use of the following image metrics: mutual information, the fusion quality index, the weighted fusion quality index, the edge-dependent fusion quality index, and the Mannos-Sakrison filter. The results obtained from this study provide objective comparisons between the algorithms. It is postulated that multi-spectral sensors enhance the probability of target discrimination through the additional information available from the multiple bands. The results indicate that more information is present in the fused image than in either single-band image. The image quality metrics quantify the benefits of fusing midwave infrared (MWIR) and longwave infrared (LWIR) imagery.
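Of the listed metrics, mutual information is the most standard and can be sketched from the joint gray-level histogram of two images; the fusion score is the information the fused image carries about each source band. The histogram bin count and the sum over the two bands are common conventions, not necessarily the exact normalization used in the study.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information (bits) between two images, estimated from
    their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def fusion_mi(fused, band_a, band_b):
    """A common MI-based fusion score: information the fused image
    shares with each of the source bands (e.g., MWIR and LWIR)."""
    return mutual_information(fused, band_a) + mutual_information(fused, band_b)
```

A higher `fusion_mi` indicates the fused image preserves more of the content of both input bands, which is the sense in which "more information is present in the fused image than in either single-band image."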