The application of advanced low-observable treatments to ground vehicles has led to a requirement for a better understanding of the effects of light scattering from surfaces. Measurements of the Bidirectional Reflectance Distribution Function (BRDF) fully describe the angular scattering properties of materials, and these may be used in signature simulations to quantitatively characterize the optical effects of surface treatments on targets. This paper reviews the theoretical and experimental techniques for characterizing the BRDF of surfaces and examines some of the popular parameterized BRDF representations that are used in signature calculations.
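As a concrete example of the parameterized representations such reviews cover, the sketch below implements a generic diffuse-plus-specular-lobe BRDF of the kind used in signature codes. It is illustrative only; the functional form, parameter names, and default values are assumptions, not a model taken from the paper.

```python
import numpy as np

def brdf(theta_i, theta_r, dphi, rho_d=0.3, rho_s=0.4, m=0.2):
    """Illustrative diffuse + specular-lobe BRDF (units: 1/sr).

    theta_i, theta_r: incident/reflected zenith angles (rad);
    dphi: relative azimuth (rad; pi = forward/specular direction);
    rho_d, rho_s, m: diffuse albedo, specular strength, lobe width.
    """
    diffuse = rho_d / np.pi  # Lambertian term, constant with angle
    # Angle between the view direction and the mirror direction
    cos_a = (np.cos(theta_i) * np.cos(theta_r)
             - np.sin(theta_i) * np.sin(theta_r) * np.cos(dphi))
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))
    # Gaussian roughness lobe centered on the mirror direction
    specular = rho_s * np.exp(-(alpha / m) ** 2) / (np.pi * m ** 2)
    return diffuse + specular
```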
The Bidirectional Reflectance Distribution Function (BRDF) plays a major role in evaluating and simulating the signatures of natural and artificial targets in the solar spectrum. A goniometer covering a large spectral and directional domain has recently been developed by ONERA/DOTA. It was designed to allow both laboratory and outdoor measurements. The spectral domain ranges from 0.40 to 0.95 micrometer, with a resolution of 3 nm. The geometrical domain covers 0-60 degrees for the zenith angles of the source and the sensor, and 0-180 degrees for the relative azimuth between the source and the sensor. The maximum target size for nadir measurements is 22 cm. The spatial non-uniformity of the target irradiance has been evaluated and then used to correct the raw measurements. BRDF measurements are calibrated against a Spectralon reference panel. Some BRDF measurements performed on sand and short grass are presented here. Eight bidirectional models, among the most popular found in the literature, have been tested on this measured data set. A code fitting the model parameters to the measured BRDF data has been developed. A comparative evaluation of the model performances is carried out against different criteria (root mean square error, root mean square relative error, correlation diagram, etc.). The robustness of the models is evaluated with respect to the number of BRDF measurements, noise, and interpolation.
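A minimal sketch of the parameter-fitting and error-evaluation step described above, assuming a two-parameter Minnaert model as a stand-in for one of the eight tested models (the paper's fitting code and criteria are not reproduced exactly):

```python
import numpy as np
from scipy.optimize import least_squares

def minnaert(params, theta_i, theta_r):
    # Minnaert BRDF: rho0 * (cos(theta_i) * cos(theta_r))**(k - 1)
    rho0, k = params
    return rho0 * (np.cos(theta_i) * np.cos(theta_r)) ** (k - 1.0)

def fit_brdf(theta_i, theta_r, measured):
    """Fit model parameters to measured BRDF samples; angles in rad."""
    res = least_squares(
        lambda p: minnaert(p, theta_i, theta_r) - measured,
        x0=[0.3, 1.0],                      # initial guess
        bounds=([0.0, 0.1], [2.0, 2.0]))    # keep parameters physical
    fitted = minnaert(res.x, theta_i, theta_r)
    rmse = np.sqrt(np.mean((fitted - measured) ** 2))
    rel_rmse = np.sqrt(np.mean(((fitted - measured) / measured) ** 2))
    return res.x, rmse, rel_rmse
```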
There are many measurements of polarization made with non-imaging polarimeters. Such measurements have been made in the laboratory, of the sky, and of the ground. These measurements can be interpreted only when subsidiary information enables identification of the surface under study. Some measurements have been made with imaging polarimeters based upon film, but these were limited in radiometric accuracy by the medium or by lack of sensitometry. Some investigators fabricated a polarimeter from vidicon cameras, but this study was also limited in radiometric fidelity. With the advent of digital cameras with linear focal-plane radiometric response, and software retaining this linearity in extracting the image from the camera, greater radiometric accuracy has been achieved. We report here measurements of polarization which we show to be related to scene radiance. The radiance levels covered include a wide dynamic range and facilitate study of low radiance levels previously inaccessible to measurement using an imaging device. We also include data from previous measurements with non-imaging devices and show that they are compatible with data collected using a digital camera. There is an inverse linear relationship between the logarithm of the polarization in recorded radiance and the logarithm of the recorded radiance in data obtained with both imaging and non-imaging polarimeters.
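Assuming a two-orthogonal-polarizer measurement with a linear-response camera, the reported relation can be checked in a few lines (the variable names are ours, not the paper's):

```python
import numpy as np

def polarization(I_perp, I_par):
    # degree of (linear) polarization from two orthogonal images
    return (I_perp - I_par) / (I_perp + I_par)

def fit_loglog(radiance, pol):
    # fit log10(P) = a + b*log10(L); the reported inverse linear
    # relationship corresponds to a negative slope b
    b, a = np.polyfit(np.log10(radiance), np.log10(pol), deg=1)
    return a, b
```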
Low detectability is a major consideration for combat platforms. Exposed surfaces are painted or coated black to minimize optical or near-infrared detectability; this approach is a fallacy with regard to polarization. The percent polarization of a diffuse (non-specular) surface is inversely proportional to the surface reflectance (also known as albedo). Thus a dark surface with a reflectance of 2% can have a percent polarization of approximately 100%. (The percent polarization is the difference between two orthogonal polarized measurements divided by their sum, multiplied by 100.) Experimental measurements of diffuse surfaces with albedos between 2% and 90% show this inverse relationship to be obeyed from the ultraviolet to the near infrared. Imagery has been obtained on various aircraft coatings that verifies the inverse relationship between surface albedo and percent polarization in the green, red, and near-infrared wavelength bands. The imagery was obtained in the three bands with Kodak digital cameras and downloaded onto CD-ROMs. Imagery has also been obtained on laboratory samples that verifies the inverse relationship between albedo and polarization. The conclusion is that the very high polarization of a dark aircraft enhances its detectability such that it is easily recognized optically using polarization. This effect has not been recognized in signature reduction. Imagery will be presented and the inverse relationship between surface albedo and percent polarization will be demonstrated.
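In symbols, the definition quoted above reads (with $I_{\perp}$ and $I_{\parallel}$ the two orthogonal polarized measurements, and $\rho$ the albedo):

```latex
P\,[\%] \;=\; 100 \times \frac{I_{\perp} - I_{\parallel}}{I_{\perp} + I_{\parallel}},
\qquad
P \;\propto\; \frac{1}{\rho}
\quad \text{(the empirical inverse relationship for diffuse surfaces)}
```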
Polarization imaging can provide significant improvements in contrast in a number of target detection and discrimination applications. A multi-spectral imaging polarimeter has been constructed for the development of discrimination methods that exploit the polarization properties of a scene. The Stokes vector of a given scene is computed from a sequence of retardance measurements made with the instrument. A significant effort has been made to create a fast polarimeter which can make the necessary retardance measurements to produce a set of Stokes images in a minimum amount of time, before the scene changes significantly; such changes show up as errors in the resultant Stokes images. A number of wavebands spanning 600 to 850 nm are considered to determine the dependence of polarization on the wavelength of light both emitted and reflected from various scene topologies. The retardance measurements are made using a rotating quarter-waveplate rather than liquid crystal technology, and the benefits and drawbacks of this type of device are discussed. Extensive calibration of the instrument is performed to ensure the accuracy of the retardance measurements. This paper will discuss calibration methods, general operation, and results characterizing the polarization properties of numerous targets.
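For reference, a sketch of the classical Fourier-analysis reduction for a rotating quarter-waveplate polarimeter, assuming intensity samples at uniformly spaced waveplate angles over a half rotation; the sample count and sign conventions are assumptions, and the instrument's actual data reduction may differ:

```python
import numpy as np

def stokes_from_rotating_qwp(I, theta):
    """Recover [S0, S1, S2, S3] from intensities I measured at
    waveplate angles theta (rad), uniform over [0, pi); use >= 16
    angles to resolve the 2*theta and 4*theta harmonics cleanly."""
    I, theta = np.asarray(I, float), np.asarray(theta, float)
    n = I.size
    A = 2.0 / n * np.sum(I)                       # DC term
    B = 4.0 / n * np.sum(I * np.sin(2 * theta))   # sin(2t) harmonic
    C = 4.0 / n * np.sum(I * np.cos(4 * theta))   # cos(4t) harmonic
    D = 4.0 / n * np.sum(I * np.sin(4 * theta))   # sin(4t) harmonic
    # sign of S3 depends on the retarder handedness convention
    return np.array([A - C, 2 * C, 2 * D, -B])
```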
The augmentation of passive IR conventional and hyperspectral imaging sensors with polarimetric capability offers enhanced discrimination of man-made and geophysical targets, along with inference of surface shape and orientation. In our efforts to size the design of IR polarimetric hyperspectral imagers to various remote discrimination applications, we have ascertained critical relationships between polarimetric SNR and pixel sizing. This relationship pertains primarily to realms wherein the objects to be sensed are marginally resolved spatially. The determination of such application-specific relationships is key to the design of effective polarimetric sensors. To quantify this key trade-off relationship, we have employed the latest developmental version of SPIRITS, a detailed physics-based signature code which accounts for the various geometric, environmental illumination, and propagation effects. For complex target shapes, detailed accounting for such effects is especially crucial to accurate prediction of polarimetric signatures, and thus precludes hand calculation for all but simple uniformly planar objects. Key to accurate polarimetric attribute prediction is our augmentation of the Sandford-Robertson BRDF model to a Mueller/Stokes formalism that encompasses representation of fully general elliptically polarized reflections and linearly polarized thermal emissions in strict compliance with Kirchhoff's Law. We discuss details of the polarimetric augmentation of the BRDF and present polarimetric discriminability-resolution trade-off results for various viewing aspects against a ground vehicle viewed from overhead.
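In the Mueller/Stokes formalism named above, each BRDF sample becomes a 4x4 matrix acting on the incident Stokes vector. A minimal sketch of that bookkeeping follows; it is not the Sandford-Robertson augmentation itself, and the names and geometry are our own:

```python
import numpy as np

def reflected_stokes(M, E_sun, theta_i):
    """Reflected-radiance Stokes vector from a Mueller-matrix BRDF
    sample M (4x4, 1/sr) under unpolarized solar irradiance E_sun
    at incidence angle theta_i; a nonzero 4th component represents
    elliptically polarized reflection."""
    S_in = E_sun * np.cos(theta_i) * np.array([1.0, 0.0, 0.0, 0.0])
    return M @ S_in

def degree_of_polarization(S):
    return np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0]
```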
Aerial surveillance is an issue of key importance for warship protection. In addition to radar systems, infrared surveillance sensors represent an interesting alternative for remote observation. In this paper, we work on images provided by such a system and propose an original approach to the tracking of complex patterns in noisy infrared image sequences. We have paid particular attention to robustness with regard to the perturbations likely to occur (noise, 'lining effects', etc.). Our method relies on robust parametric motion estimation and on an original Markovian regularization scheme that handles the appearance and disappearance of objects in the scene. Numerous experiments performed on outdoor infrared image sequences underline the efficiency of the proposed method.
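As an indication of the robust-estimation idea (not the authors' exact scheme, which estimates a full parametric motion model), here is a translation-only brightness-constancy fit with iteratively reweighted least squares and a Tukey biweight:

```python
import numpy as np

def robust_translation(I0, I1, n_iter=10):
    """Estimate dominant (dx, dy) motion between frames I0 and I1
    (2-D arrays), down-weighting outliers such as moving targets."""
    Iy, Ix = np.gradient(I0.astype(float))
    It = I1.astype(float) - I0.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    w = np.ones(b.size)
    d = np.zeros(2)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        d, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        r = A @ d - b
        sigma = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust scale
        u = np.clip(np.abs(r) / (4.685 * sigma), 0.0, 1.0)
        w = (1.0 - u**2) ** 2                          # Tukey biweight
    return d
```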
Spectral imagery data (2.0 to 5.4 micrometers) of the plumes of ships were collected by the NATO Special Working Group 4. Such data provide the means to study the signature of a target spectrally, spatially, and temporally. These experimental data have been used to validate the infrared signature of a ship's plume as computed by NATO's flow-field program NPLUME v1.6 and the NATO Infra-Red Air Target Model NIRATAM v3.1. Two spatial positions in the spectral imagery data cube were selected: one representing the background spectrum and one representing the spectrum of the ship's plume. Theoretical spectra were computed by means of NPLUME v1.6 and NIRATAM v3.1. A computed background spectrum was fitted to the experimental background spectrum using a user-defined atmosphere in accordance with the meteorological conditions during the trial. A computed plume spectrum was fitted to the observed plume spectrum in order to determine the chemical composition of the exhaust gas. Since NIRATAM only takes into account plume radiation from CO, CO2, H2O, and soot, the analysis is necessarily limited to these species. Using the fitting parameters derived from the experimental data, we make predictions about the infrared signature of the plume in two wavelength bands (mid-wave and long-wave infrared). The average transmission through the plume in the mid-wave infrared (3.0 to 5.0 micrometers) ranges from 65% close to the exit plane to 100% where the plume dissolves into the ambient atmosphere. For the long-wave infrared (8.0 to 10.0 micrometers) the transmission ranges from 90% to 100%. The active species in the mid-wave and long-wave infrared are the same for the plume as for the intervening atmosphere; the main difference is that the absorption features are deeper and wider for the plume. Based on this work we conclude that spectral imagery data of a ship's plume can be adequately modeled using NIRATAM v3.1 in conjunction with NPLUME v1.6; conversely, the experimental data validate NIRATAM v3.1 and NPLUME v1.6. Some modifications to the NIRATAM source code have been proposed as a result of this study. A new release of NIRATAM and NPLUME which incorporates some of these changes is expected shortly (NIRATAM v3.2).
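The band-averaged transmission figures quoted above reduce, in sketch form, to Beer-Lambert extinction averaged over the waveband. The spectrum, path length, and band edges below are inputs the user supplies; this is not the NIRATAM radiative transfer itself:

```python
import numpy as np

def band_transmission(wavenumber, k_plume, path_m, band_cm1):
    """Mean plume transmission over a band.
    wavenumber: spectral grid (1/cm); k_plume: plume absorption
    coefficient spectrum (1/m); band_cm1: (lo, hi) band edges."""
    lo, hi = band_cm1
    sel = (wavenumber >= lo) & (wavenumber <= hi)
    tau = np.exp(-k_plume[sel] * path_m)   # monochromatic transmission
    return tau.mean()

# e.g. the 3.0-5.0 um MWIR band corresponds to roughly 2000-3333 1/cm
```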
A signature model called SHIPIR was developed by W. R. Davis Engineering Ltd (DAVIS) and the Defence Research Establishment Valcartier (DREV). The IR scene component of the model incorporates a full-hemispherical background, the ability to define multiple ship targets, each with its own exhaust plume and flare decoy deployments, and an interactive engagement simulation with an IR observer or seeker model. The model runs on an entry-level Silicon Graphics (SGI) workstation. The program relies on the color image display both for signature analysis and to drive the engagement model. To achieve reasonable refresh rates and meet the necessary image resolution requirements, a unique set of display routines had to be devised to enhance the basic capabilities of the OpenGL graphics library. These routines, which include a multiple clipping plane algorithm, sub-image analysis, transparent plume-gas rendering, and automatic threshold detection, are described. Methods for predicting and assessing the image accuracy of a generic ship model are presented. Shortcomings of running the software on an Intel-based PC are also discussed.
An integrated naval infrared target, threat and countermeasure simulator (SHIPIR/NTCS) has been developed. The SHIPIR component of the model has been adopted by both NATO and the US Navy as a common tool for predicting the infrared (IR) signature of naval ships against their backgrounds. The US Navy has taken a lead role in further developing and validating SHIPIR for use in the Twenty-First Century Destroyer (DD-21) program. As a result, the US Naval Research Laboratory (NRL) has performed an in-depth validation of SHIPIR. This paper presents an overview of SHIPIR, the model validation methodology developed by NRL, and the results of the NRL validation study. The validation consists of three parts: a review of existing validation information; the design, execution, and analysis of a new panel test experiment; and the comparison of experiment with predictions from the latest version of SHIPIR (v2.5). The results show high levels of accuracy in the radiometric components of the model under clear-sky conditions, but indicate the need for more detailed measurement of solar irradiance and cloud model data for input to the heat transfer and in-band sky radiance sub-models, respectively.
Signature Characterization and Multisensor Systems
A statistical approach to background clutter modeling is described. A top-level description is given of a model used to generate spatial elements and thermal IR maps of high-resolution 3D background databases. The model employs fractal geometry techniques supplemented with a modified version of the random midpoint displacement algorithm. As input, the model requires a material class map and a descriptor file containing information about the material classes identified in the class map. A stochastic interpolation scheme is used to generate high-resolution 3D background databases whose spatial correlation features agree well with empirical data.
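The core of random midpoint displacement can be sketched in one dimension (the model's modified 2-D version is more elaborate; the roughness parameterization here is the usual Hurst-style choice):

```python
import numpy as np

def midpoint_displacement(n_levels, roughness=0.5, rng=None):
    """Fractal profile by recursive midpoint displacement: at each
    level, midpoints are offset by Gaussian noise whose standard
    deviation shrinks by 2**(-roughness) per halving of scale."""
    rng = np.random.default_rng() if rng is None else rng
    profile = np.array([0.0, 0.0])
    sigma = 1.0
    for _ in range(n_levels):
        mids = 0.5 * (profile[:-1] + profile[1:])
        mids += rng.normal(0.0, sigma, mids.size)
        out = np.empty(profile.size + mids.size)
        out[0::2], out[1::2] = profile, mids    # interleave old/new
        profile = out
        sigma *= 2.0 ** (-roughness)
    return profile
```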
This paper presents results of experiments in infrared signature characterization using gray-level co-occurrence matrices (GLCMs). GLCMs are a method of characterizing image content and have been used for tasks such as image segmentation and texture synthesis. Image characteristics implicitly included in GLCMs are all of the histogram-based statistics as well as spatial structure and spatial phase. The aim is for GLCMs to be used to compare a pair of images and provide a meaningful, quantitative measure of similarity that correlates well with human observer results. The experiments presented here were primarily concerned with the infrared signatures of ground targets, but are extendable to any type of image. Tools and methodologies were developed to calculate the GLCMs for a measured image of a ground vehicle and compare them to those of a computer-generated image of a three-dimensional signature model. Multiple metrics were used to compare the resultant GLCMs; the most promising is a metric adapted from tracking algorithms which provides a quantitative measure of similarity between ensembles of GLCMs.
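For readers unfamiliar with GLCMs, here is a compact computation for one displacement, plus a simple normalized-correlation similarity between two GLCMs (the paper's ensemble-of-GLCMs metric is more elaborate):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=16):
    """Gray-level co-occurrence matrix for displacement (dx, dy >= 0),
    returned as a joint probability over quantized gray levels."""
    q = np.clip((img.astype(float) / img.max() * levels).astype(int),
                0, levels - 1)
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]   # reference pixels
    b = q[dy:, dx:]                               # displaced pixels
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1.0)
    return P / P.sum()

def glcm_similarity(P1, P2):
    # normalized correlation of the two co-occurrence distributions
    v1, v2 = P1.ravel() - P1.mean(), P2.ravel() - P2.mean()
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```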
The human visual system automatically segments some regions based on luminance gradients. Other regions are segmented based on texture gradients, without segmenting the internal constituent regions whose boundary luminance gradients created the texture appearance. When visual attention is directed to a location and size scale or shape within the texture, the internal constituent regions can be picked out, but then the perception of texture is lost. This suggests that texture segregation and luminance segregation occur in parallel, but not at the same location at the same time. This paper presents a possible mechanization of the process of determining when and where spatial modulation is perceived as texture versus being perceived as region boundaries. It begins with multi-resolution spatial band-pass filtering, patterned after recent computational vision modeling theory. The algorithm examines the spatial distribution of zero-crossings, i.e., phase information, in each band-pass channel. Wide regions in which zero-crossings are dense are perceived as textures. Regions in which the zero-crossings can be enclosed in narrow, lineal bands are perceived as luminance gradient boundaries. The algorithm produces maps delineating regions perceived as texture and regions perceived as luminance for each spatial band-pass channel, i.e., at multiple resolution scales. A second algorithm recombines the band-pass channel output with the maps to produce two images: one containing the texture and one containing the luminance gradients. In addition to providing insight into possible mechanisms of visual perception, the algorithm has potential application as an image segmentation pre-processor. The concept is to apply a texture segmentation algorithm to the texture image, apply a luminance segmentation algorithm to the luminance gradient image, and then combine the results.
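One band-pass channel of the described mechanism might be sketched as follows, assuming a difference-of-Gaussians band-pass filter and a boxcar estimate of local zero-crossing density (filter widths and window size are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def zero_crossing_density(img, sigma=2.0, win=15):
    """Local zero-crossing density of one band-pass channel: dense
    regions read as texture, sparse narrow bands as luminance
    boundaries."""
    img = img.astype(float)
    band = gaussian_filter(img, sigma) - gaussian_filter(img, 1.6 * sigma)
    zc = np.zeros_like(band)
    # mark sign changes between vertical and horizontal neighbors
    zc[:-1, :] += np.sign(band[:-1, :]) != np.sign(band[1:, :])
    zc[:, :-1] += np.sign(band[:, :-1]) != np.sign(band[:, 1:])
    return uniform_filter(np.minimum(zc, 1.0), size=win)
```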
The signature of any vehicle does not exist as an entity in its own right, but depends on the environment, the interaction between the environment and the vehicle, and the background against which it is detected by a sensor. CAMEO-SIM was initially developed as a broad-band (0.4-14 micron) scene generation system for the assessment of air vehicle camouflage effectiveness, but it can be used to simulate any kind of object and its interactions with the environment. The thermal, spectral, spatial, and directional effects of sources, surfaces, and the atmosphere are modeled in a fully three-dimensional environment. CAMEO-SIM was designed to be a scalable system that can produce images at different levels of fidelity. Rendering time can be balanced against the fidelity required so that the images produced are 'fit for purpose': at its lowest fidelity it can create real-time in-band imagery, while at its highest fidelity the subtle, complex spectral and spatial effects that arise in the real world are more closely captured. This paper describes the current system, details the verification tests that have been undertaken, and discusses the significance of particular effects, such as shadows and directional reflectance, on the accuracy of the final image.
CAMEVA is a methodology developed at the Danish Defence Research Establishment (DDRE) for computerized CAMouflage EVAluation. Input is a single image comprising a highly resolved target as well as a proper amount of background. Based on that, CAMEVA predicts the target detectability as a function of range. Statistical distributions of features utilized during the perception process are extracted, and the Bhattacharyya distance measure is applied to estimate the relative separation between the target and the background. The absolute detection range is obtained by establishing a relation between the Bhattacharyya distance and the target resolution. Thus, by introducing parameters of the sensor, i.e., the unaided human eye, detectability as a function of range is obtained. Theoretical aspects of CAMEVA are discussed and validation examples are shown.
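In the Gaussian form commonly used for separability measures, the Bhattacharyya distance between target and background feature distributions is computed as below (CAMEVA's exact feature set is not reproduced here):

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians
    (e.g. target vs. background feature distributions)."""
    cov = 0.5 * (cov1 + cov2)
    dmu = mu1 - mu2
    d_mean = 0.125 * dmu @ np.linalg.solve(cov, dmu)
    d_cov = 0.5 * np.log(np.linalg.det(cov)
                         / np.sqrt(np.linalg.det(cov1)
                                   * np.linalg.det(cov2)))
    return d_mean + d_cov
```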
In this paper two RCS prediction codes were investigated with respect to their applicability at millimeter-wave radar bands. Two classes of land targets, anti-tank mines and a jeep, were measured at two frequencies, and afterwards the RCS calculations with facet models of the objects, partly simplified, were performed under the same conditions. The paper describes the experimental set-up and the simulation methods used. Furthermore, the output of the simulations is compared with the experimental results. The importance of the comparison for simulations at millimeter wavelengths is discussed.
Ballistic missiles can separate in mid-course flight, producing several components that include the warhead, control modules, booster segments, and debris. Since many warheads are spin-stabilized, laser radar range-Doppler imaging may provide signatures for identifying the warhead. Discrimination algorithms are most effective when they are based on the signatures expected from the target; however, an analytical model that relates the geometric and physical parameters of the target to its range-Doppler signature has not been available. This study developed a closed-form analytical formulation that models the range-Doppler signature of a spinning conic warhead as a function of its parameters, such as angular velocity, half-cone angle, height, and aspect angle. Using the 3-D conic surface equation, the angle of incidence at an arbitrary point is expressed in terms of the geometric parameters of the target. A relationship that links the Doppler shift to the cross-range coordinate of the target is used to complete the formulation of a point return as a function of range and Doppler. The model predictions match the experimental data well and suggest that this closed-form analytical solution can be used for parameter identification and discrimination in ballistic missile defense.
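The geometry behind such a formulation can be checked numerically: points on a spinning cone map to (range, Doppler), with Doppler proportional to the cross-range coordinate. The sampling below is a sketch under our own conventions (line of sight in the x-z plane, spin about z), not the paper's closed form:

```python
import numpy as np

def cone_range_doppler(half_angle, height, spin_rate, aspect, lam,
                       n=20000, rng=None):
    """Sample a cone (half-angle rad, height m) spinning at spin_rate
    (rad/s) about z, viewed at 'aspect' rad off the spin axis with
    wavelength lam (m); returns per-point (range_m, doppler_Hz)."""
    rng = np.random.default_rng() if rng is None else rng
    z = height * np.sqrt(rng.uniform(size=n))  # area-uniform along axis
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    r = z * np.tan(half_angle)                 # cone radius at height z
    x, y = r * np.cos(phi), r * np.sin(phi)
    los = np.array([np.sin(aspect), 0.0, np.cos(aspect)])
    range_m = x * los[0] + z * los[2]          # relative range
    v_los = -spin_rate * y * los[0]            # (omega x r) . los
    return range_m, 2.0 * v_los / lam          # Doppler ~ cross-range y
```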
This paper is concerned with estimation of the tri-variate Probability Density Function (PDF) of the sea surface elevation and slope components for use in studies of radar sea scattering. The effects of Monte Carlo simulations of linear and nonlinear waves on radar backscatter at low grazing angles are compared to the results obtained from the tri-variate PDF. Simplified examples that eliminate one slope component are analyzed to show some of the differences between linear and nonlinear wave properties. The tri-variate PDF for the linear case is given in the form of a multivariate normal distribution whose covariance matrix is calculated from the wavenumber-direction spectrum. An extension to the nonlinear case is described. The results are consistent with present knowledge of nonlinear ocean wave dynamics and exhibit such well-known features as sharp-crestedness and elevation PDF skewness. The second way to describe the wavy surface is to generate representative surfaces by Monte Carlo simulation and determine those portions illuminated by the radar for selected grazing angles. The stochastically generated and empirical PDFs are consistent with each other. Nonlinearization of the sea surface does not appreciably alter the wave spectrum but has a large effect on the shadowing, which severely affects the sea scatter of a radar beam incident at a small grazing angle.
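For the linear case, the tri-variate normal PDF follows directly from moments of the wavenumber-direction spectrum. A sketch, where the grid layout and the normalization of S are our assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal

def linear_sea_pdf(kx, ky, S):
    """Joint PDF of (elevation, x-slope, y-slope) for a linear
    Gaussian sea from the wavenumber-direction spectrum S on the
    grid (kx, ky); for a linear sea, elevation and same-point
    slopes are uncorrelated, so only the slope components covary."""
    dk = (kx[1] - kx[0]) * (ky[1] - ky[0])
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    var_eta = np.sum(S) * dk            # elevation variance
    var_sx = np.sum(KX**2 * S) * dk     # x-slope variance
    var_sy = np.sum(KY**2 * S) * dk     # y-slope variance
    cov_xy = np.sum(KX * KY * S) * dk   # slope cross-covariance
    cov = np.array([[var_eta, 0.0, 0.0],
                    [0.0, var_sx, cov_xy],
                    [0.0, cov_xy, var_sy]])
    return multivariate_normal(mean=np.zeros(3), cov=cov)
```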
Robustness of automatic target recognition (ATR) to varying observation conditions and countermeasures is substantially increased by the use of multispectral sensors. Assessment of such ATR systems is performed by captive flight tests and simulations (hardware-in-the-loop or complete modeling). Although the clutter components of a scene can be generated with specified statistics, clutter maps obtained directly from measurement are required for validation of a simulation. In addition, urban scenes have non-stationary characteristics and are difficult to simulate. The present paper describes a scanner, data acquisition, and processing system used for the generation of realistic clutter maps incorporating infrared as well as passive and active millimeter wave channels. The sensors are mounted on a helicopter with coincident lines of sight, enabling us to measure consistent clutter signatures under varying observation conditions. Position and attitude data from GPS and an inertial measurement unit, respectively, are used to geometrically correct the raw scanner data. After sensor calibration the original voltage signals are converted to physical units, i.e., temperatures and reflectivities, describing the clutter independently of the scanning sensor and thus allowing the use of the clutter maps in tests of a priori unknown multispectral sensors. The data correction procedures are described and results are presented.
The ability of sensors to discriminate objects in scenes depends on the scene composition and interactions with the available incident radiation. As sensors and camouflage techniques become more complex, the nature of the energy interactions becomes more important to model accurately. Specific areas of interest are the influences of fluctuations in incident total solar loading radiation on terrain surfaces. The means used to produce 3D radiative calculations over the solar spectrum involves coupling the Air Force's Moderate-resolution Transmission (MODTRAN) code to the Army's 3D Atmospheric Illumination Module (AIM). The solar loading outputs calculated by these coupled codes are then used as input to the Army Smart Weapons Operability Enhancement (SWOE) thermal models. Variations in incident radiation produce surface temperature variations of up to 8 degrees Celsius. In this paper we describe the means of evaluating solar loading effects using a correlated-k-distribution-like algorithm to compress spectral processing, and show comparisons between measured and modeled results.
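The spirit of the correlated-k-like spectral compression can be sketched as follows: resort the in-band absorption spectrum into a smooth cumulative distribution and integrate with a few quadrature points. This is a textbook-style illustration under a homogeneous-path assumption, not the coupled MODTRAN/AIM implementation:

```python
import numpy as np

def band_transmission_ck(k_spectrum, path, n_quad=8):
    """Band transmission via a k-distribution: sort absorption
    coefficients into cumulative (g) space, then integrate
    exp(-k*path) with n_quad points instead of the full grid."""
    ks = np.sort(np.asarray(k_spectrum, float))
    g = (np.arange(ks.size) + 0.5) / ks.size   # cumulative coordinate
    gq = (np.arange(n_quad) + 0.5) / n_quad    # quadrature points
    kq = np.interp(gq, g, ks)
    return float(np.mean(np.exp(-kq * path)))  # ~= line-by-line mean
```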
The problem of determining the difference between a target and the background is a very difficult and ill-posed problem, yet it is a problem constantly faced by engineers working in target detection and machine vision. Terms like target, background, and clutter are not well defined and are often used differently in every context. Clutter can be defined as a stationary noise process, anything non-target, or anything that looks like a target but is not. Targets can be defined by deformable templates, models, or specific feature vectors. Models, templates, and features must be defined before classification begins. Both models and feature vectors somehow hold the defining characteristics of the target, for example the gun barrel of a tank. Most importantly, feature vectors and models reduce the dimensionality of the problem, making numerical methods possible. This paper explores several fairly recent techniques that provide promising new approaches to these old problems. Wavelets are used to de-trend images to eliminate deterministic components, and a trained support vector machine is used to classify the remaining complicated or stochastic components of the image. Ripley's K-function is used to study the spatial location of the wavelet coefficients. The support vector machine avoids the choice of a model or feature vector, and the wavelets provide a way to determine the non-predictability of the local image components. The K-function of the wavelet coefficients serves as a new clutter metric. The technique is tested on the TNO image set through several random simulations.
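A sketch of the clutter metric: Ripley's K-function of the locations of significant wavelet coefficients, where values of K(r) above pi*r^2 indicate clustering (edge corrections and the coefficient significance threshold are omitted here):

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive Ripley K estimate for an (n, 2) array of coefficient
    locations inside a region of the given area."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-pairs
    lam = n / area                       # point intensity
    return np.array([np.sum(d < r) / (n * lam) for r in radii])
```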
Automatic detection of targets in natural terrain images is a difficult problem when the size and brightness of the targets are similar to those of the background clutter. The best results are achieved by techniques built on modeling the images as a stochastic process and detection as a problem in statistical decision theory. The current paper follows this approach in developing a new stochastic model for images of natural terrain and introducing some novel detection techniques for small targets that are based on hypothesis testing of neighborhoods of pixels. The new stochastic model assumes the observed image to be a pointwise transform of an underlying stationary Gaussian random field. This model works well in practice for a wide range of electro-optic and synthetic aperture radar (SAR) natural images. Furthermore, the model motivates the design of target detection algorithms based on hypothesis tests of the likelihood of pixel neighborhoods in the underlying Gaussian image. We have developed a suite of detection algorithms with this model and have trialled them on ensembles of real infrared and SAR images containing small artificially inserted targets at random locations. Receiver operating characteristics (ROCs) have been compiled, and the dependence of detection statistics on the target-to-background contrast ratio has been explored. The results show that for the infrared imagery the model-based algorithms compare favorably with the standard adaptive threshold detector and the generalized matched filter detector. In the case of SAR imagery with unobscured targets, the generalized matched filter performance is superior, but the model-based algorithms have the advantage of not requiring prior information on target statistics. While all algorithms perform similarly poorly on infrared images with low contrast ratios, the new algorithms significantly outperform existing techniques where there is good contrast. Finally, the advantages and disadvantages of applying such techniques in practical detection systems are discussed.
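A sketch of the model's two ingredients, under our own simplifying choices (a rank-based Gaussianization and an identity patch covariance): a pointwise transform to the underlying Gaussian field, and a neighborhood likelihood statistic:

```python
import numpy as np
from scipy.stats import norm

def gaussianize(img):
    """Pointwise rank transform mapping the observed image to a
    standard-normal field (the model's underlying Gaussian image)."""
    flat = img.ravel()
    ranks = flat.argsort().argsort()        # empirical CDF ranks
    u = (ranks + 0.5) / flat.size
    return norm.ppf(u).reshape(img.shape)

def neighborhood_stat(g, i, j, half=2, cov=None):
    """Quadratic-form statistic (negative log-likelihood up to an
    additive constant) of a patch under N(0, cov); unlikely patches
    are flagged as potential targets."""
    patch = g[i - half:i + half + 1, j - half:j + half + 1].ravel()
    cov = np.eye(patch.size) if cov is None else cov
    return 0.5 * patch @ np.linalg.solve(cov, patch)
```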
Target and Background Representation for Synthetic Test Environments
The operation of the Direct Write Scene Generator (DWSG) to drive a fiber array projection system is reported in this paper. The fiber array absorbs the input radiation from the laser-based system and produces broadband infrared output through blackbody cavities fabricated on the ends of the optical fibers. A test program was begun to quantify the performance of the fiber array with respect to input laser power, temporal response, spatial uniformity, IR output, and fiber-to-fiber crosstalk. Static and dynamic scenes will also be projected with the device and captured with a camera system. Preliminary projection of a simple scene has been accomplished.
The Irma synthetic signature model was one of the first high-resolution infrared (IR) target and background signature models to be developed for tactical weapons applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1988, a number of significant upgrades to Irma were initiated, including the addition of a laser channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. The latest version of Irma, 4.1, was released in April 1998 during the AeroSense conference. It incorporated a number of upgrades to both the physical models and the software. Current development efforts are focused on the inclusion of circular polarization, hybrid ladar signature blending, an RF air-to-air channel, a reconfigurable sensor model, and an enhanced user interface. These capabilities will be integrated into the next release, Irma 5.0, scheduled for completion in FY00. The purpose of this paper is to demonstrate the progress of the Irma 5.0 development effort. Irma is being developed to facilitate multi-sensor research and development. It is currently being used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, the Department of Transportation, academia, and industry.
The problem of maximum range calculation for an electromagnetic system in a lossy medium leads to a transcendental implicit equation. There have been few suggestions for dealing with this problem, from an iterative method offered by L. V. Blake back in 1969 to some unrealistic remarks found in a few other texts. This work presents a simple method for taking atmospheric losses into account in maximum range evaluations. This consideration gains significance as the operating frequency goes higher, toward mm-waves and beyond. The method presented is based on a single solution of a generic equation, and uses this solution to solve a specific problem. The generic equation is a generalization of almost all electromagnetic systems' range equations, including communication links, radiometric sensors, monostatic radar, and some EW scenarios. The work deals with both homogeneous and exponential atmospheres.
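For the monostatic-radar member of that family, the implicit equation and a damped fixed-point solution look roughly as follows. The form R = R0 * 10**(-alpha*R/20), with R0 the lossless range and alpha the one-way attenuation in dB/km, is our illustration of the class of equations, not the paper's generic equation:

```python
def max_range(R0_km, alpha_db_per_km, tol=1e-6, max_iter=200):
    """Maximum radar range in a homogeneous attenuating atmosphere,
    from R**4 * 10**(2*alpha*R/10) = R0**4, solved by damped
    fixed-point iteration in the spirit of Blake's method."""
    R = R0_km
    for _ in range(max_iter):
        R_new = R0_km * 10.0 ** (-alpha_db_per_km * R / 20.0)
        if abs(R_new - R) < tol:
            return R_new
        R = 0.5 * (R + R_new)   # damping avoids oscillation at high loss
    return R
```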
Background clutter characterization in IR imagery has become an actively researched field and several clutter models have been reported. These models attempt to evaluate the target detection/recognition probabilities characteristic of a certain scene when specific target and human visual perception features are known. The prior knowledge assumed and required by these models is a severe limitation. Furthermore, the attempt to model subjective and intricate mechanisms such as human perception with simple mathematical formulae is controversial. In this paper, we introduce the idea of adaptive models that are dynamically derived from a set of examples by a supervised evolutionary learning scheme. A set of characteristic scene and target features with a demonstrated influence on the human visual perception mechanism is first extracted from the original images. Then, the correlation between these features and the results obtained by visual observer tests on the same set of images is captured into a model by the learning scheme. The effectiveness of the adaptive modeling principle is discussed in the final part of the paper.
The FLIR Target Acquisition Model (FTAM) is an analytical tool used to evaluate range performance for man-in-the-loop target acquisition systems operating in the infrared spectral band. It represents an aggregation of the latest research on object detection by such authorities as Raytheon Missile Systems Company (RMSC), the Georgia Tech Research Institute (GTRI), and the Institute for Defense Analyses (IDA). The static and dynamic probabilities of detection predicted by the model represent the expected performance achieved with a given sensor with regard to sensor characteristics, target signature, background clutter, and human observer psychophysics effects. This paper will address the methodology of FTAM's prediction method and present comparisons to the U.S. Army Night Vision and Electronic Sensors Directorate (NVESD) ACQUIRE 1.0 Range Performance Model for Target Acquisition Systems. FTAM results will be compared against available measured NVESD data to quantify its predictive capability.
The mean search time of observers looking for targets in visual scenes with clutter is computed using the Fuzzy Logic Approach (FLA). The FLA is presented by the authors as a robust method for the computation of search times and/or probabilities of detection for signature management decisions. The Mamdani/Assilian and Sugeno models have been investigated and are compared. A 44-image data set from TNO is used to build and validate the fuzzy logic model for detection. The input parameters are local luminance, range, aspect, width, and wavelet edge points; the single output is search time. The Mamdani/Assilian model gave predicted mean search times, from data not used in the training set, that had a 0.957 correlation with the field search times. The data set is reduced using a clustering method, then modeled using the FLA, and the results are compared to experiment.
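A zero-order Sugeno fragment conveys the flavor of the approach; the memberships, rule consequents, and the single contrast input below are illustrative stand-ins for the paper's five-input tuned model:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership on [a, c] peaking at b."""
    return max(min((x - a) / (b - a + 1e-12),
                   (c - x) / (c - b + 1e-12)), 0.0)

def search_time_sugeno(contrast):
    """Weighted average of rule consequents (zero-order Sugeno):
    LOW / MEDIUM / HIGH target-background contrast -> search time."""
    w = np.array([tri(contrast, -0.1, 0.0, 0.4),
                  tri(contrast, 0.2, 0.5, 0.8),
                  tri(contrast, 0.6, 1.0, 1.1)])
    t_rule = np.array([40.0, 15.0, 4.0])   # consequent times (s)
    return float(w @ t_rule / w.sum())
```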
The purpose of this work is to provide a model for the average time to detection for observers searching for targets in photo-realistic images of cluttered scenes. The proposed model builds on previous work that constructs a fixation probability map (FPM) from the image. This FPM is constructed from bottom-up features, such as local contrast, but also includes top-down cognitive effects, such as the location of the horizon. The FPM is used to generate a set of conspicuous points that are likely to be fixation points, along with initial probabilities of fixation. These points are used to assemble fixation sequences. The order of these fixations is clearly crucial for determining the time to fixation. Recognizing that different observers (unconsciously) choose different orderings of the conspicuous points, the present model performs a Monte Carlo simulation to find the probability of fixating each conspicuous point at each position in the sequence. The three main assumptions of this model are: the observer can only attend to the area of the image being fixated, each fixation has an approximately constant duration, and there is a short-term memory for the locations of previous fixation points. This fixation point memory is an essential feature of the model, and the memory decay constant is a parameter of the model. Simulations show that the average time to fixation for a given conspicuous point in the image depends on the distribution of other conspicuous points. This is true even if the initial probability of fixation for the given point is the same across distributions and only the initial probabilities of fixation of the other points are distributed differently.
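A compact version of the Monte Carlo machinery described above; the suppression-with-decay memory and constant fixation duration follow the stated assumptions, while all parameter values are illustrative:

```python
import numpy as np

def mean_time_to_fixation(p0, n_fix=50, n_trials=2000,
                          decay=0.8, t_fix=0.3, rng=None):
    """Mean time (s) to first fixation of each conspicuous point.
    p0: initial fixation probabilities; remembered points are
    suppressed by a trace that fades with factor 'decay'."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(p0)
    first = np.full((n_trials, n), np.nan)
    for t in range(n_trials):
        memory = np.zeros(n)
        for step in range(n_fix):
            p = p0 * (1.0 - memory)       # inhibit remembered points
            p /= p.sum()
            k = rng.choice(n, p=p)
            if np.isnan(first[t, k]):
                first[t, k] = (step + 1) * t_fix
            memory *= decay               # short-term memory fades
            memory[k] = 1.0               # just-fixated point suppressed
    return np.nanmean(first, axis=0)      # mean time per point
```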
Three different models of the human visual search and detection capability (ORACLE, Visdet, and a formula by Travnikova) are used to predict the results of an experiment in which observers searched for military vehicles in complex rural scenes. The models predict either the mean time required to find the target, or the probability of finding the target after a given amount of time, from a few physical parameters describing the scene (e.g., the mean scene luminance, the angular dimensions of the field of view and the target, the intrinsic target contrast, etc.). None of the models reliably predicts observer performance for most of the scenes used in this study. ORACLE and Visdet both overestimate the detection probability for most situations. The formula by Travnikova does not apply to the scenes used here.
This paper describes an infrared field reflectometer/emissometer (EMIR) for field measurement of natural or man-made background samples in atmospheric windows. The data collected will be used to improve target and background databases for more realistic IR scene generation. The measurement method is based on the comparison of the luminances of a calibrated diffusing reflector and the sample under the same hemispherical irradiation and directional observation.
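Per waveband and viewing direction, the comparison method amounts to the following relation (symbols are generic: $L$ the measured luminances, $\rho_{\text{ref}}$ the calibrated reflectance of the diffusing reference):

```latex
\rho_{\text{sample}}(\theta)
  \;=\; \frac{L_{\text{sample}}(\theta)}{L_{\text{ref}}(\theta)}\,
        \rho_{\text{ref}}
```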
The paper presents approaches for the characterization of saliency with respect to man-made object (MMO) detection, using the example of vehicle detection in infrared (IR) images. The methodology is based on an extended evaluation of the gradient direction histograms presented at earlier AeroSense symposia (1996, 1997). The detection of conspicuous image domains (ROIs, regions of interest) is an early, signal-near operation in the process of automated detection and recognition of MMOs used in ATR (Automatic Target Recognition) algorithm chains. For this purpose, the ROI detection has to be fast and reliable. It can be used as an efficient data reduction device to speed up subsequent exploitation phases without loss of relevant information. Usually two complementary error classes are distinguished: class α (an interesting image domain was not detected) and class β (an irrelevant image domain, i.e., clutter, has been labeled). β errors lead to an increased analysis workload in subsequent processing phases; in unfavorable cases far too many image domains are labeled and the ROI detection is ineffective. α errors are even more problematic, since it is hard to compensate for omissions in subsequent evaluation phases. The quality (efficiency and effectiveness) of the MMO detection restricts the ultimately achievable system performance and hence determines the possible application fields (e.g., on-board or ground-based ATR). The optimization trade-off between α and β errors demands application-specific solutions.
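One way to mechanize the gradient-direction-histogram saliency cue is sketched below: man-made objects tend to produce peaked orientation histograms (straight edges), natural clutter flatter ones, so low histogram entropy flags a candidate ROI. The window size, bin count, and entropy criterion are our illustrative choices, not the paper's:

```python
import numpy as np

def orientation_entropy(window, n_bins=16):
    """Entropy of the gradient-direction histogram of an image
    window; low values suggest a man-made object (candidate ROI)."""
    gy, gx = np.gradient(window.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi),
                           weights=mag)
    p = hist / (hist.sum() + 1e-12)
    return float(-np.sum(p * np.log(p + 1e-12)))
```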