The detection of small targets in IR images requires enhancing the target signal while suppressing clutter noise. Classical methods require a priori knowledge of target and clutter characteristics. We use a robust adaptive clutter whitener (RAWC) filter, based on robust estimation of background texture parameters, to whiten the clutter present in IR images. The output of the RAWC filter is then passed to a Marr 2D wavelet filter to perform optimal point-target detection. Experimental results indicate a significant improvement over a standard detection algorithm.
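The Marr 2D wavelet corresponds to the Laplacian-of-Gaussian ("Mexican hat") operator. As a rough sketch of the final detection stage only, the following correlates a (pre-whitened) image with a discrete LoG kernel and thresholds the response; the kernel size, `sigma`, and the 5-sigma threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def log_kernel(sigma, size):
    """Discrete Laplacian-of-Gaussian (Marr / Mexican-hat) kernel."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x ** 2 + y ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()          # zero mean: flat regions give zero response

def detect_point_targets(image, sigma=1.5, size=9, nsigma=5.0):
    """Correlate the image with a LoG kernel (sign-flipped so bright
    point targets give positive peaks) and threshold the response."""
    ker = log_kernel(sigma, size)
    pad = size // 2
    padded = np.pad(image.astype(float), pad)
    resp = np.empty(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            resp[i, j] = -(padded[i:i + size, j:j + size] * ker).sum()
    return resp, resp > resp.mean() + nsigma * resp.std()
```

A bright isolated pixel on a flat background produces a strong positive peak in the response and is flagged by the threshold, while uniform regions yield zero response because the kernel is zero-mean.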
The paper describes a fast and accurate algorithm for generating IR background noise and clutter for use in scene simulations. The process is based on the hypothesis that the background can be modeled as a statistical process in which the signal amplitude follows a Gaussian distribution and zones of the same scene satisfy a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of the model and excellent fidelity to reality, as confirmed by comparison with images from IR sensors. The proposed method has advantages over methods based on filtering white noise in the time or frequency domain, as it requires a limited number of computations, and it is more accurate than quasi-random processes. The background generation starts from a reticule of a few points, and by means of growing rules the process is extended to the whole scene at the required dimension and resolution. The statistical properties of the model are properly maintained in the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
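The statistical model assumed above (Gaussian amplitudes with an exponentially decaying spatial correlation) can be illustrated with a minimal sketch. The separable first-order recursion below is an assumption for illustration, not the paper's reticule-growing algorithm; it produces a zero-mean, unit-variance Gaussian field whose lag-1 correlation equals exp(-1/corr_len) along each axis.

```python
import numpy as np

def exp_correlated_field(n, corr_len, rng=None):
    """n x n zero-mean Gaussian field with roughly exponential
    autocorrelation, built by a separable AR(1) recursion along
    rows and then columns (sketch of the model only)."""
    rng = np.random.default_rng(rng)
    rho = np.exp(-1.0 / corr_len)
    scale = np.sqrt(1.0 - rho ** 2)   # keeps unit marginal variance
    f = rng.standard_normal((n, n))
    for axis in (0, 1):               # filter along each axis in turn
        f = np.moveaxis(f, axis, 0)
        for i in range(1, n):
            f[i] = rho * f[i - 1] + scale * f[i]
        f = np.moveaxis(f, 0, axis)
    return f
```

Because the recursion is initialized at the stationary variance, the field is stationary from the first row, and the empirical neighbor correlation of a large realization closely matches exp(-1/corr_len).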
The infrared signature of an aircraft is generally calculated as the sum of multiple components: typically, aerodynamic skin heating, reflected solar and upwelling and downwelling radiation, engine hot parts, and exhaust gas emissions. For most airframes, the latter two components overwhelmingly dominate the IR signature. However, for small targets such as small fighters and cruise missiles, particularly targets with masked hot parts, emissivity control, and suppressed plumes, aerodynamic heating is the dominant term. This term is determined by the speed of the target, the sea-level air temperature, and the adiabatic lapse rate of the atmosphere as a function of altitude. Simulations which use AFGL atmospheric codes (LOWTRAN and MODTRAN) to predict skin heating, such as SPIRITS, may have an intrinsic error in the predicted skin-heating component due to the fixed number of discrete sea-level air temperatures implicit in the atmospheric models. Whenever the assumed background temperature deviates from the implicit model-atmosphere sea-level temperature, there will be a measurable error. This error becomes significant in magnitude when trying to model the signatures of small, dim targets dominated by skin heating. This study quantifies the predicted signature errors and suggests simulation implementations which can minimize these errors.
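The dependence on speed, sea-level temperature, and lapse rate can be sketched with the textbook adiabatic-wall (recovery) temperature; this is an illustrative stand-in, not the SPIRITS skin-heating model, and the recovery factor of 0.85 is a typical turbulent-boundary-layer assumption.

```python
def skin_temperature(mach, sea_level_temp_k, altitude_m,
                     lapse_rate=0.0065, recovery_factor=0.85, gamma=1.4):
    """Recovery temperature for aerodynamic skin heating.
    Ambient temperature falls off linearly with the standard lapse
    rate (K/m); T_r = T_amb * (1 + r*(gamma-1)/2 * M^2)."""
    t_ambient = sea_level_temp_k - lapse_rate * altitude_m
    return t_ambient * (1.0 + recovery_factor * (gamma - 1.0) / 2.0 * mach ** 2)
```

Because the skin temperature is linear in the ambient temperature, a Mach 0.9 target inherits roughly 1.14 K of skin-temperature error per kelvin of error in the assumed sea-level temperature, which is the error mechanism quantified above.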
The statistical characterization of complex real-world backgrounds is a crucial issue in the design of effective detection algorithms. The approach taken here is to monitor the environment and divide it into homogeneous partitions which are characterized by their probability distributions. A new technique for characterizing multivariate random data is described and the effectiveness of the approach is illustrated by two applications: concealed weapon detection and weak signal detection in strong non-Gaussian clutter.
The utility of multiscale Hurst features is determined for segmentation of clutter in SAR imagery. These features generalize the Hurst parameter of fractional Brownian motion (fBm) by measuring texture roughness at various scales. A clutter segmentation algorithm is described using only these new Hurst parameters as features. The performance of the algorithm was tested on measured one-foot-resolution SAR data, and the results are comparable to other algorithms proposed in the literature. The advantage of the multiscale Hurst features is that they can be computed quickly and they discriminate clutter well in unprocessed, single-polarization, magnitude-detected SAR imagery.
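For fBm, the variance of increments scales as Var[x(t+lag) - x(t)] ~ lag^(2H), which is the property the multiscale features generalize. The following single-scale-range estimator is a basic sketch of that relation, not the paper's feature set:

```python
import numpy as np

def hurst_from_increments(signal, lags=(1, 2, 4, 8, 16)):
    """Estimate the Hurst parameter of a 1-D profile from the log-log
    slope of increment variance vs. lag (slope = 2H for fBm)."""
    lags = np.asarray(lags)
    v = [np.var(signal[lag:] - signal[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2.0
```

Applied to ordinary Brownian motion (cumulative sum of white noise), the estimate comes out near the theoretical H = 0.5; rougher textures give smaller H, smoother ones larger H.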
This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while the observer studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.
Nowadays, radar remote sensing from flying vehicles is one of the most promising and highly developed methods for investigating the Earth's surface. Remote sensing equipment makes it possible to quickly address problems in ecology (oil spills, floods) and the diagnostics of water content and the quantity and ripening of biomass. Remote sensing methods give impressive results in the study of storms and other atmospheric formations, ice movement, and so on. The authors of some papers have attempted to summarize remote sensing measurement results for different frequencies and angles of incidence, and the theory has been successfully developed for some types of surfaces. However, the main data needed to develop and effectively apply remote sensing systems cannot be obtained from theory alone: even with a complete theory for calculations, one must know the integral characteristics of the surface, including surface roughness, permittivity, water content, and so on, which can be obtained only from experiments. The energy characteristics are among the most important for different types of Earth surfaces. Usually, the user is primarily interested in their dependence on the type of landscape, incidence angle, polarization, and so on.
Fourier Transform Infrared Radiometers (FTIRs) are relatively new instruments in the applications of spectral radiometric characterization of targets. One of these FTIR instruments has been modified to provide a modulated spectrum as a source quantity. Interferometric scan signals are transmitted by microwave radio to an infrared receiver downrange in order to synchronize the detection of the modulated signals. The receiver uses the synchronization signals and the modulated infrared signals to obtain a source spectrum after propagation through the atmosphere. The technique appears to give good relative transmission estimates even up to 4 kilometers.
Today, night vision sensor and display systems used in the pilotage or navigation of military helicopters are either long-wave IR thermal sensors (8 - 12 microns) or image-intensified, visible and near-IR (0.6 - 0.9 microns) sensors. The sensor imagery is displayed using a monochrome phosphor on a cathode ray tube or night vision goggle. Currently, there is no fielded capability to combine the best attributes of the emissive radiation sensed by the thermal sensor and the reflected radiation sensed by the image-intensified sensor into a single fused image. However, recent advances in signal processing have permitted the real-time image fusion and display of multispectral sensors in either monochrome or synthetic chromatic form. The merits of such signal processing are explored. A part-task simulation using a desktop computer, video playback unit, and a biocular head-mounted display was conducted. Response-time and accuracy measures of test subject responses to visual perception tasks were taken. Subjective ratings were collected to determine levels of pilot acceptance. In general, fusion-based formats resulted in better subject performance. The benefits of integrating synthetic color into fused imagery, however, depend on the color algorithm used, the visual task performed, and the scene content.
Off-road mobility at night is a critical factor in modern military operations. Soldiers traversing off-road terrain, both on foot and in combat vehicles, often use 2D viewing devices (such as a driver's thermal viewer, or biocular or monocular night-vision goggles) for tactical mobility under low-light conditions. Perceptual errors can occur when 2D displays fail to convey adequately the contours of terrain. Some off-road driving accidents have been attributed to inadequate perception of terrain features due to using 2D displays (which do not provide binocular-parallax cues to depth perception). In this study, photographic images of terrain scenes were presented first in conventional 2D video, and then in stereoscopic 3D video. The percentages of possible correct answers for 2D and 3D were: 2D pretest, 52%; 3D pretest, 80%; 2D posttest, 48%; 3D posttest, 78%. Other recent studies conducted at the US Army Research Laboratory's Human Research and Engineering Directorate also show that stereoscopic 3D displays can significantly improve visual evaluation of terrain features, and thus may improve the safety and effectiveness of military off-road mobility operations, both on foot and in combat vehicles.
Improvements in the fidelity of predictive computer models have brought requirements for more robust reflectance modeling. These requirements have focused new interest in measurement processes and data representation. Representation of the data is of critical importance to rendering models such as ray tracers and radiance renderers; in these cases, concise and accurate reflectance representation drives the speed performance of the modeling. Many types of reflectance representation exist, but the bidirectional reflectance is the most general case, from which all the others can be derived. This paper explores the bidirectional reflectance function, its measurement techniques, and its linkages into predictive modeling. Limitations in each of these areas are also discussed.
A methodology for implementing cost as a design requirement is described using design-for-assembly, process capability, Gaussian statistics, and improvement curves. The application of the method to the design of a millimeter wave transceiver in a radar-guided missile is presented. A cost savings of over 30% relative to a baseline design is documented.
To detect and identify targets in their natural environment, millimeter-wave sensors are appropriate tools for a variety of civilian and military applications. The synthetic aperture approach is the favored method for generating high-resolution images in the millimeter-wave bands as well. These wavelengths allow some simplifications in the algorithms to be applied: the image distortions due to range migration and depth of focus can be neglected. The short aperture time, which for equal cross-range resolution is reduced in proportion to the wavelength, makes flight instabilities of the airborne platform far less influential than they are for classical microwave SAR. It is also remarkable that speckle is considerably reduced in the higher millimeter-wave bands, which allows single-look processing. The polarimetric high-resolution experimental radar MEMPHIS (Millimeterwave Experimental Multifrequency Polarimetric High Resolution Imaging System), with simultaneously operating front-ends at 35 GHz and 94 GHz, has been installed onboard a cargo aircraft in a side-looking configuration. Different scenes over land and sea were overflown with this system, and the data were processed by means of a SAR algorithm. The paper describes the system configuration and the SAR processing algorithm, gives representative results for the generated radar images, and compares the results at 35 and 94 GHz.
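The wavelength scaling of the aperture time follows from the standard strip-map relation: cross-range resolution delta = lambda * R / (2 * L), so the required aperture length, and hence the aperture time at a given platform speed, shrinks in proportion to lambda. A quick sketch (the geometry numbers below are arbitrary examples, not MEMPHIS parameters):

```python
def aperture_time(wavelength_m, range_m, cross_range_res_m, platform_speed):
    """Synthetic-aperture time for a given cross-range resolution:
    L = lambda * R / (2 * delta), T = L / v (strip-map SAR)."""
    aperture_len = wavelength_m * range_m / (2.0 * cross_range_res_m)
    return aperture_len / platform_speed
```

At 5 km range, 0.5 m resolution, and 80 m/s, a 35 GHz system (lambda about 8.6 mm) needs only about half a second of aperture time, versus several times that at X-band, which is why platform instabilities matter far less.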
In this tutorial paper, the approach to multimode sensor signal processing development is discussed, with emphasis on the data requirements for multimode signal processing algorithm development.
Passive MMW sensing is receiving more and more attention as sensors in this spectral region improve. This development requires an understanding of the passive MMW target detection scenario, which consists of natural background elements and targets. Understanding the behavior of backgrounds and targets as a function of environmental conditions is vital for analyzing the performance of any future sensor in this spectral region. During the past year, EORD has measured the radiometric properties of natural backgrounds and several man-made objects using its dual-frequency 140/220 GHz radiometer. This work describes the measurement setup and gives some of the results of the background and target measurements. The measurement results are correlated with thermal IR radiometric data and the actual contact temperatures of the objects.
Development of target acquisition and target recognition algorithms in highly cluttered backgrounds in a variety of battlefield conditions demands a flexible, high-fidelity capability for synthetic image generation. Cost-effective smart weapons research and testing also requires extensive scene generation capability. The Irma software package addresses this need through a first-principles, phenomenology-based scene generator that enhances research into new algorithms, novel sensors, and sensor fusion approaches. Irma was one of the first high-resolution synthetic infrared target and background signature models developed for tactical air-to-surface weapon scenarios. Originally developed in 1980 by the Armament Directorate of the Air Force Wright Laboratory, the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1987, Nichols Research Corporation took over the maintenance of Irma and has since added substantial capabilities. The development of Irma has culminated in a program that includes not only passive visible, IR, and millimeter wave (MMW) channels but also active MMW and ladar channels. Each of these channels is co-registered, providing the capability to develop multi-band sensor fusion concepts and associated algorithms. In this paper, the capabilities of the latest release of Irma, Irma 4.0, are described. A brief description of the elements of the software that are common to all channels is provided. Each channel is described briefly, including a summary of the phenomenological effects and the sensor effects modeled in the software. Examples of Irma multi-channel imagery are presented.
Improvements in the capabilities of infrared, millimeter-wave, acoustic, and x-ray sensors have provided means to detect weapons concealed beneath clothing and to provide wide-area surveillance capability in darkness and poor light for military special operations and law enforcement applications. In this paper we provide an update on this technology, which we have discussed in previous papers on this subject. We present new data showing simultaneously obtained infrared and millimeter-wave images, which are especially relevant because a fusion of these two sensors has been proposed as the best solution to the problem of concealed weapon detection. We conclude that the use of these various sensors has the potential to solve this problem and that progress is being made toward this goal.
A description is given of a method of designing phase-correcting Fresnel zone plate lens antennas for operation at two or more frequency bands. A new analysis is derived for apportioning the amount of phase correction to give the desired performance at the chosen frequencies. The result permits the development of narrow-beam, high-gain behavior at two or more bands, with reduced gain at other frequencies.
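The underlying single-band geometry can be sketched from the textbook zone-boundary condition: a ray through radius r_n must differ in path length from the axial ray by n * lambda / P, giving r_n = sqrt(2*n*F*(lambda/P) + (n*lambda/P)^2). This sketch covers only that standard geometry, not the paper's dual-band phase-apportioning analysis.

```python
import math

def zone_radii(focal_len_m, freq_hz, n_zones, subzones=2):
    """Zone-boundary radii of a phase-correcting Fresnel zone plate.
    `subzones` (P) is the number of phase-correction steps per
    full-wave zone; P=2 gives the classic half-wave plate."""
    lam = 299_792_458.0 / freq_hz
    step = lam / subzones
    return [math.sqrt(2 * n * focal_len_m * step + (n * step) ** 2)
            for n in range(1, n_zones + 1)]
```

For example, a plate focusing 30 GHz at 0.5 m focal length has its first half-wave zone boundary at roughly 7 cm radius, with successive boundaries growing as roughly the square root of the zone index.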
Target and Background Representation for Synthetic Test Environments
In modeling and simulations the importance of the natural environment has always been recognized with regard to its influence on contrast transmission. However, the variability of surface illumination and solar loading due to broken clouds, the resulting impact of dynamic range on recognition, and clouds as backgrounds, along with the traditional influences due to transmission and path radiance, are emerging areas of relevance due to improvements in the modeling of these effects. The Air Force LOWTRAN model has been the traditional choice for multi-waveband analysis of spectral atmospheric effects on systems performance. But this code only has spatially varying effects in the vertical direction. Dynamic range impacts of horizontally variable illumination conditions cannot be addressed. We describe a series of codes designed to allow the linking of predictions of cloud fractions, base heights, layer depths, and layer cloud types with a model to predict the cloud density structure. These results are coupled to a radiative transfer model. We describe the salient features of this physics based model. We then describe the point-to-point calculation method to produce path radiance and transmittance statistics at multi-channel resolution. The weighted spectra are used to describe the effects on a given sensor channel. We further describe the perspective view generation method used to render cloudy scenes from a variety of observer positions. The radiative transfer model is robust in the sense that its results are not limited to low cloud densities. The spectral region covered is the same as that treated by LOWTRAN, and LOWTRAN output is used to initialize the upper boundary for incident direct (solar/lunar) and diffuse radiation sources and to determine the background molecular absorption (by modeled layer) of the scattering volume. Typical scattering volumes treated have an 8 km x 8 km footprint and are either 4 km, 8 km, or 16 km high.
These volume choices can be used for addition of clouds as scene elements in simulations, usage of the surface illumination information as a positionally varying solar loading or brightness data set, and for path characterization for contrast transmission calculations.
The number of available spaceborne and airborne systems will dramatically increase over the next few years. A common systematic approach toward verification of these systems will become important for comparing the systems' operational performance. The Commercial Remote Sensing Program at the John C. Stennis Space Center (SSC) in Mississippi has developed design requirements for a remote sensing verification target range to provide a means to evaluate spatial, spectral, and radiometric performance of optical digital remote sensing systems. The verification target range consists of spatial, spectral, and radiometric targets painted on a 150- by 150-meter concrete pad located at SSC. The design criteria for this target range are based upon work over a smaller, prototypical target range at SSC during 1996. This paper outlines the purpose and design of the verification target range based upon an understanding of the systems to be evaluated as well as data analysis results from the prototypical target range.
The viability of extending the Direct Write Scene Generator (DWSG) to project to a sensor system with an optical telescope installed has been investigated. The test approach requires development of transmissive or reflective screens and/or collimator systems to expand the DWSG output to the sensor telescope. Several optical configurations have been examined to accommodate this capability. Measurements of the optical spot size on a camera with a zoom lens have been compared to CODE V predictions. Analysis has been performed to determine the practical limitations of this configuration with regard to testing sensors with a set field of view. A demonstration of operation of the DWSG through camera optics has been accomplished. The utility of this new capability to closed-loop operation has also been examined.
Modeling of Sensor Effects and Target/Background Detectability
We present and demonstrate a method to characterize a background scene, to extrapolate the background characteristics into a specified target region, and to generate a synthetic target image with the visual characteristics of the surrounding background. The algorithm is based on a computational model of spatial pattern analysis in the front-end retinal-cortical visual system. It uses nonstationary multi-resolution spatial filtering to extrapolate the intensity and the intensity modulation amplitude of the surrounding background into the target region. The algorithm provides a method to compute the background-induced bias for use as a zero-reference in computational models of target boundary perception and shape discrimination. We demonstrate the method with a complex, heterogeneous scene containing many discrete objects and backgrounds. The contrast and texture of the visualization blend into the local background. In most cases, the target boundaries are difficult to see, and the target regions are difficult to distinguish from the background. The results provide insight into the capabilities and limitations of the underlying model of front-end human visual pattern analysis, and into the roles of scene segmentation, shape properties, and prior knowledge of scene organization and object appearance in modeling visual discrimination.
The Georgia Tech Research Institute has developed an integrated suite of software for Visual and Electro-Optical (VISEO) detection analysis, under the sponsorship of the Army Aviation and Troop Command, Aviation Applied Technology Directorate. The VISEO system is a comprehensive workstation-based tool for multi-spectral signature analysis, LO design, and visualization of targets moving through real measured backgrounds. A key component of the VISEO system is a simulation of human vision, called the Georgia Tech Vision (GTV) simulation. The algorithms used in the simulation are consistent with neurophysiological evidence concerning the functions of the human visual system, from dynamic light adaptation processes in the retinal receptors and ganglia to the processing of motion, color, and edge information in the striate cortex. The simulation accepts images seen by the naked eye or through direct-view optical systems, as well as images viewed on the displays of IR sensors, image intensifiers, and night-vision devices. GTV outputs predicted probabilities that the target is fixated (Pfix) during visual search and detected (Pd), and also identifies specific features of the target that contribute most to successful search and detection performance. This paper outlines the capabilities and structure of the VISEO system, emphasizing GTV. Example results of visible and IR signature reduction on the basis of VISEO are shown and described.
A procedure for calibration of a color video camera has been developed at EORD. The RGB values of standard samples, together with the spectral radiance values of the samples, are used to calculate a transformation matrix between the RGB and CIEXYZ color spaces. The transformation matrix is then used to calculate the XYZ color coordinates of distant objects imaged in the field. These, in turn, are used to calculate the CIELAB color coordinates of the objects. Good agreement between the calculated coordinates and those obtained from spectroradiometric data is achieved. Processing the RGB values of pixels in the digital image of a scene with the CAMDET software package, which was developed at EORD, results in `Painting Maps' in which the true apparent CIELAB color coordinates are used. The paper discusses the calibration procedure, its advantages and shortcomings, and suggests a definition for the visible signature of objects. The CAMDET software package is described and some examples are given.
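The matrix step described above can be sketched as a least-squares fit. The sample values below are invented for illustration only; a real calibration would use measured RGB and spectroradiometric XYZ data for a standard chart.

```python
import numpy as np

# Hypothetical calibration data: each row pairs the camera RGB values of
# one standard sample with its measured CIE XYZ tristimulus values.
rgb = np.array([[50, 20, 20], [20, 60, 20], [20, 20, 70],
                [200, 200, 200], [100, 80, 60], [60, 100, 120]], float)
xyz = np.array([[10.0, 6.0, 2.0], [8.0, 15.0, 4.0], [5.0, 4.0, 20.0],
                [70.0, 74.0, 80.0], [25.0, 24.0, 18.0], [20.0, 24.0, 35.0]])

# Solve for the 3x3 matrix M minimizing ||rgb @ M - xyz|| in the
# least-squares sense, then apply it to the RGB values of a field pixel.
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
pixel_xyz = np.array([80, 90, 100], float) @ M
```

The XYZ coordinates obtained this way can then be converted to CIELAB with the standard CIE formulas, given a reference white.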
An operational definition of ambient luminance is described in some detail for the time periods near dawn and dusk. A simple empirical model is developed to predict ambient solar luminance under clear sky conditions as a function of (1) the observer's azimuth viewing direction, (2) the sun's altitude, and (3) the sun's azimuth. The equation is easily modified for any observer position on the earth, diurnal time period, or particular day of the year. The resulting model predicts ambient solar luminance as a function of time after sunset or before dawn for relative target/solar positions in a real scene. This formalism can be used to extrapolate laboratory observer threshold data to real-world environments. Subsequent publications will describe the process for predicting probability of detection as a function of time for various target acquisition scenarios.
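The paper's empirical equation is not reproduced here. Purely to illustrate the interface such a model exposes, a hypothetical parameterization with invented coefficients might look like:

```python
import math

def ambient_luminance(obs_azimuth_deg, sun_alt_deg, sun_azimuth_deg):
    """Illustrative stand-in for an empirical twilight luminance model.

    Both the functional form and the coefficients are hypothetical:
    luminance (cd/m^2) is assumed to fall off roughly exponentially as
    the sun drops below the horizon, modulated by the observer's viewing
    azimuth relative to the sun.
    """
    rel_az = math.radians(obs_azimuth_deg - sun_azimuth_deg)
    # Exponential decay with (negative) sun altitude in the twilight regime.
    base = 1000.0 * math.exp(0.4 * sun_alt_deg)
    # Mild directional term: looking toward the sun is brighter.
    return base * (1.0 + 0.3 * math.cos(rel_az))
```

The real model would be fitted to luminance measurements over many diurnal cycles; only the three-argument structure above is taken from the text.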
The computation of optical flow can play an important role in background characterization and target detectability algorithms since motion is an important cue feature for target detection. A multiresolution gradient-based motion estimation algorithm is implemented based on Horn's approach, where the optical flow field is iteratively determined at each level of the multiresolution pyramid. Different masks and kernels are tested, and an adaptive error minimization approach is taken to deal with areas in the image that violate the optical flow constraint equation.
Clutter metrics are important image measures for evaluating the expected performance of sensors and detection algorithms. Typically, clutter metrics attempt to measure the degree to which background objects resemble targets. That is, the more target-like objects or attributes in the background, the higher the clutter level. However, it is critically important that the characteristics of the sensor systems and the detection algorithms be included in any measure of clutter. For example, clutter to a coarse-resolution sensor coupled with a pulse-thresholding detection algorithm is not necessarily clutter to a second-generation FLIR with a man in the loop. Using present state-of-the-art first- and second-order clutter metrics and respective performance studies, a new class of sensor/algorithm clutter metrics will be derived which explicitly use characteristics of the sensor and detection algorithms. A methodology will be presented for deriving sensor/algorithm-dependent clutter metric coefficients and algorithms for a broad class of systems.
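A first-order metric of the kind referred to above can be sketched as follows. This is a minimal implementation in the spirit of the Schmieder–Weathersby statistical variance metric; tying the block size to roughly twice the expected target dimension is the usual convention and is assumed here, not taken from this paper.

```python
import numpy as np

def statistical_variance_clutter(image, block):
    """First-order clutter metric: the RMS of the grey-level standard
    deviations of contiguous, non-overlapping blocks. `block` is the
    block side length in pixels (customarily ~2x the target size)."""
    h, w = image.shape
    stds = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            stds.append(image[i:i + block, j:j + block].std())
    return float(np.sqrt(np.mean(np.square(stds))))
```

A sensor/algorithm-dependent metric of the kind the paper derives would additionally weight such measures by sensor resolution and detector characteristics.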
Simulation and modeling have become more popular as the computing hardware and techniques to support them become increasingly available. These models must be tested against the real world if the results are to be useful. Real-world limitations on instrument performance constrain field test design. The US Army Research Laboratory, Electronic Warfare signature measurements group at White Sands Missile Range, NM, has a suite of instrumentation and two tracking systems that can support such tests.
In this paper the covering-blanket method widely used to estimate fractal dimension is improved. The D-dimensional area K, which has never been detailed in previous references, is clarified and further extended to a fractal signature as a function of scale and space. After defining two discrepancy measures on the multiscale fractal signature, an algorithm for man-made target detection based on fractal signature change is presented and tested on forward-looking infrared images of sea surfaces collected by a long-wave infrared camera. The results of using the D-dimensional area are compared with those of using the fractal dimension, and suggest that the proposed method performs better in detecting ship targets embedded in natural scenes.
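The classical covering-blanket construction can be sketched as follows. This is a generic implementation of the standard blanket method, not the paper's improved variant; the 3x3 neighbourhood and wrap-around borders are simplifying assumptions.

```python
import numpy as np

def _nbr_max(a):
    # 3x3 neighbourhood maximum via shifts (borders wrap around; this is
    # acceptable for an illustration, not for production use).
    shifts = [np.roll(np.roll(a, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)]
    return np.max(shifts, axis=0)

def blanket_signature(image, max_scale=8):
    """Covering-blanket method: grow an upper blanket u and a lower
    blanket b around the grey-level surface, and record the blanket
    area A(k) at each scale k. The fractal dimension follows from the
    slope of log A(k) versus log k (D = 2 - slope)."""
    u = image.astype(float).copy()
    b = image.astype(float).copy()
    areas = []
    for k in range(1, max_scale + 1):
        u = np.maximum(u + 1, _nbr_max(u))      # dilate upper blanket
        b = np.minimum(b - 1, -_nbr_max(-b))    # erode lower blanket
        areas.append((u - b).sum() / (2.0 * k))
    return np.array(areas)
```

Computing this signature over local windows rather than the whole image gives the scale-and-space fractal signature the paper works with; the discrepancy measures then compare signatures between a candidate region and its natural surround.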
Recent stereo vision experiments demonstrated the enhancement of depth perception over single-line-of-sight vision for improved vehicular navigation and target acquisition. The experiments involved the use of stereo vision headsets connected to visible and 8 - 12 micrometers IR imagers. The imagers were separated by up to 50 m (i.e., wider platform separation than human vision, or hyperstereo) and equipped with telescopes for viewing at ranges of tens of meters up to 4 km. The important findings were: (1) human viewers were able to discern terrain undulations for obstacle avoidance during vehicular navigation, and (2) human viewers were able to detect depth features within the scenes that enhanced the target acquisition process over monocular or single-line-of-sight viewing. To support vehicular navigation, stereo goggles were developed that combine headset display with simultaneous see-through viewing of instrumentation. For detection, the depth cues can be used to bring out salient target features.
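A back-of-the-envelope geometry argument (not from the paper) shows why the 50 m hyperstereo baseline matters at these ranges: the smallest resolvable depth step grows with the square of range and shrinks with baseline. The angular disparity resolution assumed below is an illustrative round number.

```python
def min_depth_difference(range_m, baseline_m, d_theta_rad=1e-4):
    """Smallest resolvable depth difference at a given range for a
    stereo baseline and an assumed angular disparity resolution:
    dZ ~= Z**2 * d_theta / B (small-angle approximation)."""
    return range_m ** 2 * d_theta_rad / baseline_m

# At 4 km, a 50 m baseline resolves depth steps of tens of meters,
# while the human interocular baseline (~0.065 m) resolves essentially
# nothing at that range.
hyper = min_depth_difference(4000.0, 50.0)
human = min_depth_difference(4000.0, 0.065)
```

The ratio of the two results is simply the ratio of the baselines, which is the quantitative content of the hyperstereo advantage.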
Engineers and scientists at the US Army's Night Vision and Electronic Sensors Directorate (NVESD) are in the process of evaluating the German CAMAELEON model, a signature evaluation model that was created for use in designing and evaluating camouflage in the visible spectrum and is based on computational vision methodologies. Verification and preliminary validation have been very positive. For this reason, NVESD has planned and is currently in the early execution phase of a more elaborate validation effort using data from an Army field exercise known as DISSTAF-II. The field exercise involved tank gunners, using the currently fielded M1 Abrams tank sights, searching for, targeting, and `firing on' (i.e. pulling the trigger to mark target location) a variety of foreign and domestic vehicles in realistic scenarios. Data from this field exercise will be combined with results of a laboratory measurement of perceptual target detectabilities. The purpose of the laboratory measurement is to separate modeled effects from unmodeled effects in the field data. In the laboratory, observers will perform a task as similar as possible to that modeled by CAMAELEON. An important feature of this data is that the observers will know where the target is located and will rate the detectability of the targets in a paired-comparison experiment utilizing the X-based perceptual experiment testbed developed at the University of Tennessee. For the laboratory measurement the subjects will view exactly the same images as those analyzed by CAMAELEON. Three correlations are expected to be especially important. The correlation between perceptual detectability and model predictions will show the accuracy with which the model predicts human performance of the modeled task (rating target detectabilities). The correlation between laboratory and field data will show how well perceived detectability predicts tank gunner target detection in a realistic scenario. Finally, the correlation between model predictions and detection probabilities will show the extent to which the model can actually predict human field performance.
Knowledge of background properties is essential for various applications such as systems engineering and evaluation (e.g. of electro-optical sensors or camouflage designs), operational planning, and development of ATR algorithms. A series of field tests was conducted in the NEGEV desert in Israel, as a joint effort of the FGAN-FfO (Germany) and EORD (Israel), for characterizing properties of backgrounds in arid climatic regions. Diurnal cycles of background surface temperatures were measured during summer and winter periods at several sites in the NEGEV. The measurement equipment consisted of imaging cameras, most of them calibrated, covering the spectral region from the visible up to the thermal infrared. This paper presents the measurement set-up, the measurement techniques that were used, and some of the first analysis results.
The computer code SENSAT, developed for radiometric investigations in remote sensing, was extended to include two statistical clutter models of infrared background and the prediction of the target detection probability. The first is based on the standard deviation of scene clutter estimated from scene data; the second is based on the power spectral density of different classes of IR background as a function of temporal or spatial frequency. The overall code consists of modules describing the optoelectronic sensor (optics, detector, signal processor), a radiative transfer code (MODTRAN) to include the atmospheric effects, and the scene module consisting of target and background. The scene is evaluated one pixel at a time. However, a sequence of pixels can be simulated by varying the range, view angle, atmospheric condition, or clutter level. The target consists of one or two subpixel surface elements; the remaining part of the pixel represents background. Multiple paths, e.g. sun-ground-target-sensor, can also be selected. An expert system, based upon the IDL language, provides user-friendly input menus, performs consistency checks, and submits the required MODTRAN and SENSAT runs. A sample case of the detection probability of a sub-pixel target in a cluttered marine background is discussed.
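The link from a clutter model to a detection probability can be sketched with the standard Gaussian threshold-detection model. This is a textbook construction assumed here for illustration, not the SENSAT implementation: the threshold is set on zero-mean Gaussian clutter from a desired false-alarm probability, and Pd follows from the target signal-to-clutter ratio.

```python
from statistics import NormalDist

def detection_probability(snr, pfa=1e-4):
    """Pd for a fixed-threshold detector in zero-mean, unit-variance
    Gaussian clutter. `snr` is the target excess signal in units of the
    clutter standard deviation; the threshold is chosen so that clutter
    alone exceeds it with probability `pfa`."""
    nd = NormalDist()
    threshold = nd.inv_cdf(1.0 - pfa)      # threshold in sigma units
    return 1.0 - nd.cdf(threshold - snr)   # P(signal + clutter > threshold)
```

With the paper's first clutter model, the clutter standard deviation comes directly from scene statistics; with the second, it would be obtained by integrating the power spectral density over the sensor passband.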
Ka-band measurements of a 2S-3 vehicle have been utilized to develop, verify and validate point scatterer models. IR measurements of the same vehicle have been converted to models for use in multispectral HWIL simulations. The data collection methodology, model development process, and techniques for verification and validation of these models are described. Finally, results of the model outputs in comparison to measurements are given.
Modeling of Sensor Effects and Target/Background Detectability
Computer-augmented detection of targets generally refers to the localization of potential targets by computer processing of data from a variety of sensors. Automatic detection is applicable for data reduction purposes in the reconnaissance domain and is therefore aimed at reducing the workload for human operators with respect to activities such as locating individual targets over large areas or volumes and assessing the battlefield/battlespace situation. An increase in reliability and efficiency is expected. The results of automatic image evaluation are offered to the image analyst as hypotheses. In this paper, image sequences from an infrared sensor (spectral range 3 - 5 micrometers) are analyzed with the aim of finding Regions of Interest (ROIs), where the target-background segmentation is performed by means of blob evaluation. Low-contrast conditions can also be tackled successfully if the directions of the gray-value gradient are considered, since these are nearly independent of the contrast. Blobs are generated by applying adaptive thresholds in the ROIs. Here the evaluation of histograms is very important for the extraction of structured features. It is assumed that the height, aspect angle, and camera parameters are approximately known, allowing an estimation of target sizes in the image domain. This estimation yields important parameters for the target/clutter discrimination.
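The adaptive-threshold blob step can be sketched generically. This is a minimal illustration, not the paper's implementation: the mean-plus-k-sigma threshold and the pixel-count gate standing in for the estimated target size are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_blobs(roi, k=2.0, min_px=4, max_px=400):
    """Adaptive-threshold blob extraction inside a region of interest.
    The threshold adapts to the ROI statistics (mean + k * std); the
    resulting connected components are gated by an expected target size
    expressed as a pixel-count range [min_px, max_px]."""
    thresh = roi.mean() + k * roi.std()        # adaptive threshold
    labels, n = ndimage.label(roi > thresh)    # connected components
    blobs = []
    for lbl in range(1, n + 1):
        size = int((labels == lbl).sum())
        if min_px <= size <= max_px:           # target-size gating
            blobs.append(lbl)
    return labels, blobs
```

The size gate is where the estimated target size from the known height, aspect angle, and camera parameters enters: blobs far smaller or larger than a plausible target are rejected as clutter.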