A series of experiments has been performed to verify the utility of algorithmic tools for the modeling and analysis of cold-target signatures in synthetic, top-attack, FLIR video sequences. The tools include MuSES/CREATION for the creation of synthetic imagery with targets, an ARL target detection algorithm to detect embedded synthetic targets in scenes, and an ARL scoring algorithm that uses Receiver Operating Characteristic (ROC) curve analysis to evaluate detector performance. Cold-target detection variability was examined as a function of target emissivity, surrounding clutter type, and target placement in non-obscuring clutter locations. Detector metrics were also scored individually to characterize the effect of signature/clutter variations.
Results show that, using these tools, a detailed, physically meaningful target detection analysis is possible and that scenario-specific target detectors may be developed by selective choice and/or weighting of detector metrics. However, developing these tools into a reliable predictive capability will require extending these results to the modeling and analysis of a large number of data sets configured for a wide range of target and clutter conditions.
Finally, these tools should also be useful for the comparison of competitive detection algorithms by providing well defined, and controllable target detection scenarios, as well as for the training and testing of expert human observers.
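The scoring step described above rests on standard ROC analysis. As a generic illustration (not the ARL scoring algorithm itself), the following sketch computes an empirical ROC curve and its area from a set of detector scores:

```python
import numpy as np

def roc_curve(scores, labels):
    """Sweep a threshold over detector scores and return the
    (false-alarm rate, detection rate) operating points.
    labels: 1 = target present, 0 = clutter."""
    order = np.argsort(scores)[::-1]              # descending score
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels == 1)                   # detections so far
    fp = np.cumsum(labels == 0)                   # false alarms so far
    pd = tp / max(1, int((labels == 1).sum()))
    pfa = fp / max(1, int((labels == 0).sum()))
    return pfa, pd

def auc(pfa, pd):
    """Area under the ROC curve by trapezoidal integration."""
    return float(np.trapz(pd, pfa))

# Toy detector output: targets tend to score higher than clutter.
pfa, pd = roc_curve([0.9, 0.8, 0.7, 0.4, 0.3, 0.1], [1, 1, 0, 1, 0, 0])
```

Scoring detector metrics individually, as described above, amounts to running this computation once per metric.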
High-resolution, high-data-rate sensor streams acquired from the Navy Shared Reconnaissance Pod (SHARP), whose EO and IR sensors cover large tactical areas with detailed surveillance information at unsurpassed resolution, will overwhelm current signal processing and communications capabilities. The value and utility of these data streams depend on their subsequent exploitation and timely dissemination to the appropriate commanders, which renders real-time surveillance infeasible without significant advances in signal processing, communications, and interpretation; data compression, encryption, and related technologies play a vital role here. We focus on the target recognition problem for an ultra-high-resolution SHARP sensor suite, specifically on detection in the EO domain. The theory of correlation filters (MACH, MACE, etc.), developed by Casasent and colleagues at CMU, has typically been used for classification in the past. Here we develop innovative low-complexity Correlation Eigen-Filters (CEFs), which offer the unique advantage of detecting one or multiple objects over a wide range of aspect angles (up to a full 360 degrees) using as few as a single filter. In the paper we develop a theoretical analysis of the CEF filter design and provide application examples. Figure 1 illustrates a case in point: various military aircraft are detected with perfect performance (Pd = 1.0, Pfa = 0) by training CEF filters on example aircraft in other imagery and testing on sequestered data. We diverge from traditional correlation-filter methods not only in using the correlation filter as a detector, but also in developing a novel feature space in which to perform discrimination analysis (Figure 1c).
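The CEF design itself is developed in the paper; as a rough sketch of the general idea only, one plausible reading is a single composite filter taken from the dominant singular vector of a set of aspect templates, applied as an FFT-domain correlation detector. The construction below is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

def eigen_filter(templates):
    """One illustrative 'eigen-filter': the dominant left singular
    vector of a matrix whose columns are vectorized training
    templates (e.g., one object seen at several aspect angles)."""
    X = np.stack([t.ravel() for t in templates], axis=1)  # pixels x views
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    u = U[:, 0]
    if u @ X[:, 0] < 0:                       # fix SVD sign ambiguity
        u = -u
    return u.reshape(templates[0].shape)

def correlate_detect(image, filt, threshold):
    """FFT-domain cross-correlation of the filter with the image;
    report the peak location if the peak exceeds the threshold."""
    F = np.fft.fft2(image) * np.conj(np.fft.fft2(filt, s=image.shape))
    c = np.real(np.fft.ifft2(F))
    peak = np.unravel_index(int(np.argmax(c)), c.shape)
    score = float(c[peak])
    return (peak, score) if score > threshold else (None, score)
```

A single filter built this way responds to any of its training views, which is the property the abstract highlights.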
Moving-target tracking is a challenging task that is becoming increasingly important for various applications. In this paper we present a target detection and tracking algorithm based on target intensity relative to the surrounding background and on target shape information. The proposed automatic target tracking (ATT) algorithm combines two techniques: an intensity variation function (IVF) and template modeling (TM). The intensity variation function is formulated from target intensity features, while template modeling is based on target shape information. The IVF technique produces its maximum peak value when the reference target's intensity variation is similar to that of the candidate target. When the IVF technique fails, due to background clutter, non-target objects, or other artifacts, a control module triggers the second technique, template modeling. By evaluating the outputs of the IVF and TM techniques, the tracker determines the true coordinates of the target. Performance of the proposed ATT algorithm is tested on real-life forward-looking infrared (FLIR) image sequences taken from an airborne, moving platform.
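The IVF/TM control flow can be sketched schematically. The abstract does not give the IVF formulation, so the intensity score below (histogram correlation) and the SSD template fallback are stand-ins that only illustrate the two-stage logic:

```python
import numpy as np

def intensity_score(ref, cand, bins=16):
    """Stand-in for the paper's intensity variation function:
    correlation between normalized intensity histograms of the
    reference and candidate windows (values assumed in [0, 1])."""
    h1, _ = np.histogram(ref, bins=bins, range=(0, 1), density=True)
    h2, _ = np.histogram(cand, bins=bins, range=(0, 1), density=True)
    h1, h2 = h1 - h1.mean(), h2 - h2.mean()
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom else 0.0

def template_fallback(frame, template):
    """Template-matching fallback: the window location minimizing
    the sum of squared differences against the template."""
    th, tw = template.shape
    H, W = frame.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            ssd = np.sum((frame[y:y+th, x:x+tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

def track(frame, template, last_window, conf=0.8):
    """Control logic: accept the IVF match if its score is high,
    otherwise trigger the template-matching stage."""
    if intensity_score(template, last_window) >= conf:
        return 'IVF', None
    return 'TM', template_fallback(frame, template)
```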
The successful mission of autonomous airborne systems such as unmanned aerial vehicles (UAVs) depends strongly on the performance of the automatic image processing used for navigation, target acquisition, and terminal homing. In this paper we propose a method for automatic determination of missile position and orientation (pose) during target approach. The flight path is characterized by forward motion, i.e. an approximately linear motion along the optical axis of the sensor system. Due to the lack of disparity, classical methods based on stereo triangulation are not suitable for 3D object recognition here. To handle this we applied the SoftPOSIT algorithm, originally proposed by D. DeMenthon, and adapted it to our specific needs: image points are gathered by multi-threshold segmentation, texture analysis, 2D tracking, and edge detection. The reference points are updated in each loop, and the calculated pose is smoothed using the quaternion representation of the model's orientation in order to stabilize the computations. We show results of image-based determination of trajectories for airborne systems. Terminal homing is demonstrated by tracking the 3D pose of a vehicle in an image sequence taken in oblique view by an infrared sensor mounted on a helicopter.
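The quaternion-based pose smoothing can be illustrated with a minimal sketch; the sign-aligned averaging below is an assumption about the smoothing step, not the authors' exact method:

```python
import numpy as np

def smooth_quaternions(quats):
    """Simple orientation smoothing in quaternion form: align signs
    to the first sample (q and -q encode the same rotation), average,
    and renormalize back onto the unit sphere."""
    quats = np.asarray(quats, dtype=float)
    ref = quats[0]
    aligned = np.where((quats @ ref)[:, None] < 0, -quats, quats)
    mean = aligned.mean(axis=0)
    return mean / np.linalg.norm(mean)
```

Averaging in quaternion form avoids the gimbal and wrap-around problems of smoothing Euler angles directly, which is presumably why the quaternion representation stabilizes the computation.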
The next generation of infrared imaging trackers and seekers will incorporate more sophisticated and smarter tracking algorithms, able to keep a positive lock on a targeted aircraft in the presence of countermeasures such as decoy flares. One approach consists in identifying targets with the help of pattern recognition algorithms that use features extracted from all possible target images observed in the missile's field of view. Artificial neural networks are known to be a tool of choice for such pattern classification tasks. For the situation at hand, probabilistic neural networks are particularly interesting because their performance can approach that of optimal Bayesian classifiers, and they output an estimate of the actual probability that a target belongs to one class or another. We have evaluated the performance of such neural networks and the possibility of integrating them into the infrared imaging seeker emulator developed by Defence Research and Development Canada (DRDC) Valcartier. The results reported here constitute a follow-up to a preceding study in which a neural network was used to discriminate between aircraft and flares from measured properties of their static images. In the present study, we consider the time evolution of image features. In particular, we define temporal characteristics of blob intensities and shapes that can be measured over a few frames and used to differentiate between aircraft and flares. We build a neural network that takes these characteristics as input and outputs the probability that an aircraft or a flare is being observed. We show the very positive results obtained in tests conducted with real data.
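A probabilistic neural network of the kind discussed reduces, in its simplest form, to a Gaussian Parzen-window density per class normalized into class posteriors. The sketch below assumes equal priors and purely illustrative features:

```python
import numpy as np

def pnn_posteriors(x, train_X, train_y, sigma=0.5):
    """Minimal probabilistic neural network: one Gaussian kernel per
    training sample, summed per class, then normalized so the outputs
    can be read as class probabilities (equal priors assumed)."""
    x = np.asarray(x, dtype=float)
    classes = sorted(set(train_y))
    dens = []
    for c in classes:
        Xc = np.asarray([xi for xi, yi in zip(train_X, train_y) if yi == c])
        d2 = np.sum((Xc - x) ** 2, axis=1)          # squared distances
        dens.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    dens = np.array(dens)
    return dict(zip(classes, dens / dens.sum()))
```

The normalized outputs are what makes the PNN attractive here: the seeker gets not just a label but an estimated probability that the blob is an aircraft rather than a flare.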
Automatic anomaly detection has been cited as a candidate method for remote processing of hyperspectral sensor imagery (HSI) to promote reduction of the extremely large data sets that make storage and transmission difficult. But automatic anomaly detection in HSI is itself a challenging problem, owing to the impact of the atmosphere on spectral content and the variability of spectral signatures. In this paper, I propose to use the discriminant metric SAM (spectral angle mapper) and some of the advances made in the theory of semiparametric inference to design an anomaly detector that assumes no prior knowledge about the target and clutter statistics. The detector assumes that the probability density function (pdf) of any object in a scene can be modeled as a distortion of a reference pdf. The maximum-likelihood method for the model is discussed along with its asymptotic behavior. The proposed anomaly detector is tested using real hyperspectral data and compared to a benchmark approach.
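The SAM metric itself is compact enough to state directly; the sketch below computes the spectral angle between two spectra, which is insensitive to overall illumination scaling:

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral angle mapper (SAM): the angle in radians between two
    spectra viewed as vectors. Scaling a spectrum by a positive
    constant (e.g., brighter illumination) leaves the angle unchanged."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    cos = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

An anomaly detector built on this metric might, for instance, flag pixels whose angle to a local background mean spectrum exceeds a threshold; the paper's semiparametric detector is considerably more refined than that.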
We present a pose-independent Automatic Target Detection and Recognition (ATD/R) System using data from an airborne 3D imaging ladar sensor. The ATD/R system uses geometric shape and size signatures from target models to detect and recognize targets under heavy canopy and camouflage cover in extended terrain scenes.
A method for data integration was developed to register multiple scene views and obtain a more complete 3D surface signature of a target. Automatic target detection was performed using the general approach of “3D cueing,” which determines and ranks regions of interest within a large-scale scene based on the likelihood that they contain a target. Each region of interest is further analyzed to identify the target from among a library of 10 candidate target objects.
The system performance was demonstrated on five extended terrain scenes with targets both out in the open and under heavy canopy cover, where the target occupied 1 to 5% of the scene by volume. Automatic target recognition was successfully demonstrated for 20 measured data scenes, including ground-vehicle targets both out in the open and under heavy canopy and/or camouflage cover, where the target occupied between 5 and 10% of the scene by volume. Correct target identification was also demonstrated for targets with multiple movable parts in arbitrary orientations. We achieved a high recognition rate (over 99%) along with a low false-alarm rate (less than 0.01%).
Model-based Automatic Target Recognition (ATR) algorithms are adept at recognizing targets in high-fidelity 3D LADAR imagery. Most current approaches involve a matching component in which a hypothesized target and target pose are iteratively aligned to pre-segmented range data. Once the model-to-data alignment has converged, a match score is generated indicating the quality of the match; this score is then used to rank one model hypothesis over another. This approach has two main drawbacks. First, to ensure the correct target is recognized, a large number of model hypotheses must be considered; even with a highly accurate indexing algorithm, the number of target types and variants that need to be explored is prohibitive for real-time operation. Second, the iterative matching step must consider a variety of target poses to ensure that the correct alignment is recovered; inaccurate alignments produce erroneous match scores and thus errors when ranking one target hypothesis over another. To compensate for these drawbacks, we explore the use of situational awareness information already available to an image analyst. Examples of such information include knowledge of the surrounding terrain (to assess potential occlusion levels) and of the targets of interest (to account for target variants).
3D sensors provide unique opportunities for performing automatic target recognition (ATR). We describe an automated system that exploits 3D target geometry to perform rapid and robust ATR in the domain of military and civilian ground vehicles. The system identifies specific vehicles by comparing 3D LADAR data to model-based LADAR predictions from highly detailed CAD models with articulating parts. In addition to performing identification, the system solves for whole-vehicle six-degree-of-freedom pose as well as detailed target articulation state. Because of its specificity, the identification system achieves a high probability of correct identification across a library of ~100 target models and exhibits robustness to occlusion, clutter and sensor noise. This identification capability is currently being extended for the purpose of classifying generic vehicle types (tanks, trucks, air defense units, etc.). The goal of the extended system is to perform vehicle classification before performing vehicle identification. This methodology provides a more flexible model-based ATR capability because it obviates the need for modeling all possible target types in advance. Classification enables the recognition of novel targets which have not been modeled or previously observed by the system. We classify targets based on general 3D morphology and characteristic 3D relationships between observed parts and features. This approach exploits the distinctive anatomy of different functional target types to achieve a more flexible and extensible target recognition capability.
Neptec Design Group Ltd. has developed a 3D Automatic Target Recognition (ATR) and pose estimation technology demonstrator in partnership with the Canadian DND. The system prototype was deployed for field testing at Defence Research and Development Canada (DRDC)-Valcartier. This paper discusses the performance of the developed algorithm using 3D scans acquired with an imaging LIDAR. 3D models of civilian and military vehicles were built using scans acquired with a triangulation laser scanner. The models were then used to generate a knowledge base for the recognition algorithm. A commercial imaging LIDAR was used to acquire test scans of the target vehicles with varying range, pose and degree of occlusion. Recognition and pose estimation results are presented for at least 4 different poses of each vehicle at each test range. Results obtained with targets partially occluded by an artificial plane, vegetation and military camouflage netting are also presented. Finally, future operational considerations are discussed.
Neptec Design Group has developed a 3D automatic target recognition and pose estimation algorithm technology demonstrator in partnership with the Canadian DND. This paper discusses the development of the algorithm to work with real sensor data. The recognition approach uses a combination of two algorithms in a multi-step process. The two algorithms provide uncorrelated metrics and therefore exploit different characteristics of the target; this allows the potential target dataset to be reduced before the final selection is made. In a pre-processing phase, the object data is segmented from the surroundings and re-projected onto an orthogonal grid to make the object shape independent of range. In the second step, a fast recognition algorithm is used to reduce the list of potential targets by removing unlikely cases. Then a more accurate, but slower and more sensitive, algorithm is applied to the remaining cases to provide another recognition metric while simultaneously computing a pose estimate. After passing some self-consistency checks, the metrics from both algorithms are combined to provide relative probabilities for each database object, along with a pose estimate. Development of the recognition and pose algorithm relied on processing of real 3D data from civilian and military vehicles. The algorithm evolved to be robust to occlusions and to the characteristics of real 3D data, including the use of different 3D sensors for generating database and test objects. Robustness also comes from the algorithm's self-validating abilities and simultaneous pose estimation and recognition, along with the potential for computing error bounds on pose. Performance results are shown for pseudo-synthetic data and preliminary tests with a commercial imaging LIDAR.
In this paper we report new results in an ongoing study addressing the problem of classification of laser radar targets. We first discuss the representation of 3D objects such that they remain unchanged under affine transformation. A set of Ladar signatures of tactical military targets is then represented in the invariant feature spaces, and their inter-class separability is studied by adding random noise of varying characteristics to the Ladar signatures. Performance metrics such as interclass distances and ROC curves are used to demonstrate the classification behavior of 35 different target signatures.
This paper summarizes a system, and its component algorithms, for context-driven target vehicle detection in 3-D data that was developed under the Defense Advanced Research Projects Agency (DARPA) Exploitation of 3-D Data (E3D) Program. In order to determine the power of shape and geometry for the extraction of context objects and the detection of targets, our algorithm research and development concentrated on the geometric aspects of the problem and did not utilize intensity information. Processing begins with extraction of context information and initial target detection at reduced resolution, followed by a detailed, full-resolution analysis of candidate targets. Our reduced-resolution processing includes a probabilistic procedure for finding the ground that is effective even in rough terrain; a hierarchical, graph-based approach for the extraction of context objects and potential vehicle hide sites; and a target detection process that is driven by context-object and hide-site locations. Full-resolution processing includes statistical false alarm reduction and decoy mitigation. When results are available from previously collected data, we also perform object-level change detection, which affects the probabilities that objects are context objects or targets. Results are presented for both synthetic and collected LADAR data.
The surface textures of natural objects often have visible fractal-like properties. A similar texture pattern can be found when looking at forests in aerial photographs, or at trees in outdoor scenes, as the image spatial resolution is changed; alternatively, the texture patterns may differ at different spatial resolution levels, as in aerial photographs of villages. This creates difficulties for image segmentation and object recognition, because the level of spatial resolution needed to obtain homogeneously and correctly labeled texture regions differs for different types of landscape. For example, if the spatial resolution is sufficient for distinguishing between the textures of agricultural fields, water, and asphalt, the texture-labeled areas of forest or suburbs become badly fragmented, because texture peculiarities corresponding to two stable levels of texture spatial resolution are visible in this case. A hierarchical texture analysis can solve this problem, and we implemented it in two different ways: either we performed the texture segmentation simultaneously at several levels of image spatial resolution, or we subjected the texture-labeled image of highest spatial resolution to a recurring texture segmentation using texture cells of larger sizes. Both approaches turned out to be fruitful for aerial photographs as well as for outdoor images. They generalize and support the hierarchical image analysis technique presented in another of our papers. Some of the methods applied were borrowed from living vision systems.
Comprehensive two-dimensional gas chromatography (GCxGC) is a new technology for chemical separation. In GCxGC analysis, chemical identification is a critical task that can be performed by peak pattern matching, which identifies chemicals by establishing correspondences from the known peaks in a peak template to the unknown peaks in a target peak pattern. After the correspondences are established, the information carried by the known peaks is copied into the unknown peaks, and the peaks in the target pattern are thereby identified. Using peak locations as the matching features, peak patterns can be represented as point patterns, and the peak pattern matching problem becomes a point pattern matching problem. In GCxGC, the chemical separation process imposes an ordering constraint on peak retention time (peak location). Based on this ordering constraint, the matching technique proposed in this paper forms directed edge patterns from the point patterns and then matches the point patterns by matching the edge patterns. Preliminary experiments on GCxGC peak patterns suggest that matching the edge patterns is much more efficient than matching the corresponding point patterns.
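The edge-based idea can be sketched as follows. This toy version exploits the retention-time ordering to form directed edges, but assumes equal-size patterns with no missing or extra peaks, which the paper's algorithm would have to handle:

```python
import numpy as np

def edges(points):
    """Order peaks by first-column retention time (the ordering the
    GCxGC separation imposes) and form directed edges between
    consecutive peaks."""
    pts = np.asarray(sorted(map(tuple, points)))
    return np.diff(pts, axis=0)

def match(template_pts, target_pts, tol=0.1):
    """Match two peak patterns by comparing their edge sequences
    elementwise within a tolerance."""
    e1, e2 = edges(template_pts), edges(target_pts)
    if e1.shape != e2.shape:
        return False
    return bool(np.all(np.abs(e1 - e2) <= tol))
```

Because edges are difference vectors, the comparison is automatically invariant to a rigid shift of the whole pattern between runs.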
In recent years we reported at SPIE conferences on the development of a hierarchical structural classifier that takes contour structural elements as input and is designed for matching aerospace photographs taken in different seasons, from different viewpoints, or formed by different kinds of sensors. The aim of the present investigation was to develop a theoretical approach that could explain the previously described empirical results and justify the techniques applied in the elaborated algorithms, since many of these techniques were borrowed from the human vision system or were introduced heuristically. The proposed approach is based on information theory and the minimum description length (MDL) principle, which can be stated as follows: choose the model of the initial data that yields their shortest description without information loss, where the description of the chosen model is extended with a description of the discrepancy between the model and the data (the random component). In our case the data are a pair of images to be registered. In the image matching task, the image models are extended with a model of their mutual spatial transformation, and the transformation is chosen that minimizes the joint description of the pair of images. To apply the MDL principle, a model is introduced that formalizes the image structural description used in the classifier. The methods developed earlier were then reformulated in terms of the proposed theoretical approach. As a result, the improvements to the structural classifier needed to increase its reliability were determined.
A technique has been formulated, based on a hetero-associative target detection strategy, that recognizes and tracks multiple dissimilar (hetero-associative) targets in real time from gray-scale image sequences taken from a moving aircraft. Fringe-adjusted joint transform correlation, combined with the proposed hetero-associative filter, enhances the correlation performance and thus ensures a strong and equal cross-correlation peak for each element of the selected class. Tracking is accomplished by combining the analysis of a single image frame with the determination of motion from consecutive image frames. For efficient performance, the desired targets are identified prior to being tracked by correlating successive frames using the proposed filter, an enhanced version of the fringe-adjusted filter. The tracking performance is tested in MATLAB.
Passive radar is an emerging technology that offers a number of unique benefits, including covert operation. Many such systems are already capable of detecting and tracking aircraft. The goal of this work is to develop a robust algorithm for adding automated target recognition (ATR) capabilities to existing passive radar systems.
In previous papers, we proposed conducting ATR by comparing the precomputed RCS of known targets to that of detected targets. To make the precomputed RCS as accurate as possible, a coordinated flight model is used to estimate aircraft orientation. Once the aircraft's position and orientation are known, it is possible to determine the incident and observed angles on the aircraft, relative to the transmitter and receiver. This makes it possible to extract the appropriate radar cross section (RCS) from our simulated database. This RCS is then scaled to account for propagation losses and the receiver's antenna gain. A Rician likelihood model compares these expected signals from different targets to the received target profile.
We have previously employed Monte Carlo runs to gauge the probability of error in the ATR algorithm; however, generation of a statistically significant set of Monte Carlo runs is computationally intensive. As an alternative to Monte Carlo runs, we derive the relative entropy (also known as the Kullback-Leibler distance) between two Rician distributions. Since the probability of Type II error in our hypothesis testing problem can be expressed as a function of the relative entropy via Stein's Lemma, this provides us with a computationally efficient method for determining an upper bound on our algorithm's performance. It also provides great insight into the types of classification errors we can expect from our algorithm. This paper compares the numerically approximated probability of Type II error with the results obtained from a set of Monte Carlo runs.
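A numerical version of the quantity in question is easy to sketch; the grid-based integration below is only a stand-in for the derivation in the paper, for the simplified case of two Rician distributions sharing a noise parameter sigma (np.i0 is the modified Bessel function I0):

```python
import numpy as np

def rician_pdf(x, nu, sigma):
    """Rician density with noncentrality nu and scale sigma."""
    s2 = sigma ** 2
    return (x / s2) * np.exp(-(x ** 2 + nu ** 2) / (2 * s2)) * np.i0(x * nu / s2)

def relative_entropy(nu1, nu2, sigma, n=20000, xmax=20.0):
    """Numerical Kullback-Leibler distance D(p1 || p2) between two
    Rician distributions, by trapezoidal integration of p1*log(p1/p2)
    over a truncated grid (xmax must cover essentially all the mass)."""
    grid = np.linspace(1e-6, xmax, n)
    p = rician_pdf(grid, nu1, sigma)
    q = rician_pdf(grid, nu2, sigma)
    mask = p > 0
    return float(np.trapz(p[mask] * np.log(p[mask] / q[mask]), grid[mask]))
```

The divergence is zero when the two hypotheses coincide and grows as the target signatures separate, which is what makes it usable (via Stein's Lemma) as a bound on the Type II error.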
We present a procedure for classification of targets by a network of distributed radar sensors deployed to detect, locate and track moving targets. Estimated sensor positions and selected positions of a target under track are used to obtain the target aspect angle as seen by the sensors. This data is used to create a multi-angle profile of the target. Stored target templates are then matched in the least mean square sense with the target profile. These templates were generated from radar return signals collected from selected targets on a turntable. Probabilities of correct classification obtained by a simulation of the classification procedure are given as functions of signal-to-noise ratios and errors in estimates of target and sensor locations.
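The template-matching step reduces to a nearest-template search under mean-squared error; a minimal sketch (names and data layout hypothetical):

```python
import numpy as np

def classify_lms(profile, templates):
    """Pick the stored template closest to the measured multi-angle
    target profile in the least-mean-square sense.  `templates` maps
    target names to profiles sampled at the same aspect angles."""
    errors = {name: float(np.mean((profile - t) ** 2))
              for name, t in templates.items()}
    return min(errors, key=errors.get)
```

In the paper's setting the template profiles come from turntable measurements, and errors in the estimated aspect angles degrade the match, which is what the reported simulations quantify.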
This paper focuses on non-cooperative automatic target recognition via features extracted from radar backscatter returns using the fractional Fourier transform (FrFT). This study is motivated by two factors: first, to examine the radar backscatter mechanism of standard small targets, and second, to extract pertinent scattering features that can be used in target recognition. Radar returns have been examined using time-frequency analysis techniques, particularly for targets with dispersive scattering behavior. The fractional Fourier transform provides an attractive alternative to time-frequency analysis. The motivation lies in the fact that fractional Fourier transform features retain phase information and, because of their linear structure, are potentially less susceptible to noise contamination than their time-frequency counterparts. The fractional Fourier transform has its disadvantages as well, which are addressed in this paper. The FrFT scattering analysis scheme is tested using real radar signatures of commercial aircraft recorded in the UHF range.
In dispersive wave propagation, the standard stationary phase approximation to the wave is accurate in the asymptotic regime.
Typically the calculation of the stationary points is
taken to depend only on the dispersion relation. We examine the effects of including the spatial phase of the initial wave as well in the calculation and show that doing so can improve the approximation.
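The point can be sketched in a notation we adopt for illustration. Writing the wave as a superposition of modes with dispersion relation $k(\omega)$ and initial spectral phase $\theta(\omega)$,

```latex
u(x,t)=\int A(\omega)\,e^{\,i[\theta(\omega)+k(\omega)x-\omega t]}\,d\omega ,
```

the standard approximation takes the stationary points from the dispersion relation alone, $k'(\omega_s)\,x = t$. Retaining the spatial phase of the initial wave modifies the stationarity condition to

```latex
\theta'(\omega_s)+k'(\omega_s)\,x=t ,
```

which shifts $\omega_s$ and can improve the approximation outside the strictly asymptotic regime.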
As a wave propagates in a dispersive medium, certain of its characteristics change, and it may therefore not be recognized as the same wave by different observers. For lossless dispersive propagation, temporal moments such as the mean time and duration of the wave change as a function of position, while frequency moments do not. We show that there are other moment-like temporal features of the wave that are also invariant to dispersion. These moments may be useful in automatic classification: because they are invariant to dispersive channel effects, they do not depend on the position at which they are calculated, and they provide additional information beyond that given by frequency moments.
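The contrast between temporal and frequency moments can be sketched as follows (notation ours). For lossless propagation the spectrum only acquires a phase,

```latex
\hat{u}(\omega,x)=\hat{u}(\omega,0)\,e^{\,ik(\omega)x}
\quad\Longrightarrow\quad
|\hat{u}(\omega,x)|^{2}=|\hat{u}(\omega,0)|^{2},
```

so every frequency moment $\langle\omega^{n}\rangle$ is independent of position, whereas, for example, the mean time evolves as $\langle t\rangle_x=\langle t\rangle_0+x\,\langle k'(\omega)\rangle$.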
Three target types, namely the T72, ZSU 23-4 and BMP-2, were measured in a tower/turntable configuration in several articulations each. A set of geometric, statistical, structural and polarimetric features is used to study the robustness of classification. Based on the Kolmogorov-Smirnov distance between histograms, a metric is defined that simultaneously quantifies intra-class robustness and inter-class separability for an individual feature. For sets of several features, a simple classification approach in connection with a reference confusion matrix allows the robustness of classification to be assessed. It is demonstrated that averaging the feature reference over all available target articulations improves the classification performance as compared to a reference based on one articulation only.
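The histogram distance at the core of the metric can be sketched as follows (a shared binning for the two histograms is assumed; the function name is ours):

```python
import numpy as np

def ks_distance(hist_p, hist_q):
    """Kolmogorov-Smirnov distance between two histograms over the same
    bins: the maximum absolute difference of their normalized
    cumulative distributions."""
    p = np.asarray(hist_p, dtype=float)
    q = np.asarray(hist_q, dtype=float)
    cdf_p = np.cumsum(p) / p.sum()
    cdf_q = np.cumsum(q) / q.sum()
    return float(np.max(np.abs(cdf_p - cdf_q)))
```

A small distance between histograms of the same feature across articulations indicates intra-class robustness; a large distance between histograms of different classes indicates separability.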
Target signatures extracted by ultra-wideband ground penetrating radar (GPR) depend substantially on the target's burial depth and on the soil's moisture content. Using a Method-of-Moments (MoM) code, we earlier simulated such returned echoes from two targets for several moisture contents and burial depths in a soil with known electric properties. We also showed that they could all then be translated, with adequate accuracy, to equivalent echoes from the target at some selected standardized depth and soil moisture. The signature template of each target is here computed using a time-frequency distribution of the returned echo when the target is buried at a selected depth in soil with a selected moisture content. For any returned echo, the relative difference between the target signature and a selected template signature can be computed. Using our target translation method (TTM), that signature difference can then be used as a cost function to be minimized. This is done by adjusting the depth and moisture content, now taken to be unknown parameters, using the differential evolution method (DEM). The template that gives the smallest value of the minimized cost function for a chosen returned echo is here taken to determine the classification. As it turns out, any choice of returned waveform results in correct classification of the two targets used here. Moreover, when the proper template is used, the values of the depth and moisture parameters that minimize the cost function are good predictions of the actual target depth and soil moisture content.
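The optimization step can be sketched with SciPy's differential evolution solver. The cost function below is a hypothetical quadratic stand-in for the TTM signature difference, with true burial conditions of 0.30 m depth and 12 % moisture chosen purely for illustration:

```python
from scipy.optimize import differential_evolution

def signature_cost(params, true_depth=0.30, true_moisture=12.0):
    """Stand-in for the TTM signature-difference cost: minimized when
    the trial (depth, moisture) match the true burial conditions."""
    depth, moisture = params
    return (depth - true_depth) ** 2 + 0.01 * (moisture - true_moisture) ** 2

result = differential_evolution(signature_cost,
                                bounds=[(0.0, 1.0), (0.0, 40.0)],
                                seed=1, tol=1e-10)
depth_hat, moisture_hat = result.x
```

In the paper's scheme this minimization is repeated once per template, and the template achieving the smallest minimized cost determines the classification while `result.x` predicts the burial conditions.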
Following the trend toward increasing use of imaging sensors in military aircraft, future combat aircraft pilots will need onboard artificial intelligence to aid them in image interpretation and target designation.
This paper presents a system that is able to simulate high-resolution artificial SAR imagery and thereby facilitates automatic target recognition (ATR) algorithm development. The system provides a comprehensive interface that allows imagery to be requested dynamically depending on the location and heading of a simulated carrier platform. Landscapes, structures and target signatures are generated based on digital terrain data and target models.
Different database preparations for sensor simulation were assessed with respect to the properties of SAR imaging as compared to optical imaging. Selected results for specific landscape elements are presented, along with post-processing algorithms for overcoming weaknesses of digital terrain databases and improving image realism.
Simulated sensor imagery is useful in a wide range of applications, two of which are training of ATR algorithms and sensor simulation in flight simulation environments.
Using an existing ATR method as an example, the applicability and influence of synthetic imagery on ATR training are shown, and initial approaches to validating the correctness of the imagery are explained. The integration of the system into a flight simulator, with its interfacing and control aspects, serves as a concluding example.
Imaging laser radar (ladar) systems have been developed for automatic target identification in surveillance systems. A processor in the ladar uses the range values at the target pixels to estimate the target's 3-D shape (angle-angle-range) and, from the 3-D shape, identify the target. For targets in clutter and partially hidden targets, there are ambiguities in determining which pixels are on target, which lead to uncertainties in determining the target's 3-D shape. One method for improving the determination of which pixels are on target is to use the polarization components of the reflected light. Laser light retroreflected from man-made (smoother) surfaces is depolarized less than light from natural (rougher) surfaces. By sensing the depolarization of the reflected light at every pixel, pixels on target can be distinguished from pixels on clutter and background. We describe the operation of a polarization-diverse imaging laser radar and a preliminary evaluation of this ladar. We simulated the laser radar output, which consists of polarization-diverse images. The polarization-diverse images are fused into a single image. We then showed that, using the fused image, we are better able to identify and distinguish the target from other objects of the same class.
This paper describes a method for developing and training a classifier for detecting military vehicles in FLIR (Forward Looking Infrared) imagery. Image analysis is often done by constructing feature vectors from the original two-dimensional image. In this effort, a genetic algorithm is used to evolve a group of linear filters for constructing these feature vectors. Training is performed on collections of target chips and non-target (clutter) chips drawn from FLIR image datasets. The evolved filters produce multi-dimensional feature vectors from each sample. First, the fitness function for the genetic algorithm rewards maximal separation of target from non-target vectors, as measured by clustering the two sets and applying a vector space norm. Next, the entire method is adapted to supply feature vectors to a support vector machine (SVM) classifier in order to optimize the SVM's performance, i.e., the genetic algorithm's fitness function rewards effective SVM class distinction. Finally, supplemental features are incorporated into the system, resulting in an improved, hybrid classifier. This classification method is intended to be applicable to a wide variety of target-sensor scenarios.
A hybrid weighted/interacting particle filter, the selectively resampling particle (SERP) filter, is used to detect and track an unknown number of independent targets on a one-dimensional "racetrack" domain. The targets evolve in a nonlinear manner. The observations model a sensor positioned above the racetrack. The observation data takes the form of a discretized image of the racetrack, in which each discrete segment has a value depending both upon the presence or absence of targets in the corresponding portion of the domain, and upon lognormal noise. The SERP filter provides a conditional distribution approximated by particle simulations. After each observation is processed, the SERP filter selectively resamples its particles in a pairwise fashion, based on their relative likelihood. We consider a reinforcement learning approach to control this resampling. We compare two different ways of applying the filter to the problem: the signal measure approach and the model selection approach. We present quantitative results of the ability of the filter to detect and track the targets, for each of the techniques. Comparisons are made between the signal measure and model selection approaches, and between the dynamic and static resampling control techniques.
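The pairwise selective resampling step can be sketched in a toy form. The pairing scheme and likelihood-ratio threshold below are illustrative choices, not the published SERP settings:

```python
import numpy as np

def serp_resample(particles, log_like, ratio_threshold=5.0, rng=None):
    """Toy pairwise selective resampling: particles are paired at
    random and, when one member of a pair is far more likely than its
    partner, the weaker particle is replaced by a copy of the stronger
    one.  Otherwise both survive unchanged."""
    rng = rng or np.random.default_rng(0)
    particles = particles.copy()
    idx = rng.permutation(len(particles))
    for i, j in zip(idx[0::2], idx[1::2]):
        if log_like[i] - log_like[j] > np.log(ratio_threshold):
            particles[j] = particles[i]
        elif log_like[j] - log_like[i] > np.log(ratio_threshold):
            particles[i] = particles[j]
    return particles
```

Because swaps only ever replace a less likely particle with a more likely one, the ensemble's total likelihood cannot decrease; the reinforcement learning controller discussed in the paper decides when such resampling is applied.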
Automatic Target Recognition (ATR) algorithms are extremely sensitive to differences between the operating conditions under which they are trained and the extended operating conditions (EOCs) in which the fielded algorithms are tested. These extended operating conditions can cause a target's signature to be drastically different from training exemplars/models. For example, a target's signature can be influenced by: the time of day, the time of year, the weather, atmospheric conditions, position of the sun or other illumination sources, the target surface and material properties, the target composition, the target geometry, sensor characteristics, sensor viewing angle and range, the target surroundings and environment, and the target and scene temperature. Recognition rates degrade if an ATR is not trained for a particular EOC. Most infrared target detection techniques are based on a very simple probabilistic theory. This theory states that a pixel should be assigned the label of "target" if a set of measurements (features) is more likely to have come from an assumed (or learned) distribution of target features than from the distribution of background features. However, most detection systems treat these learned distributions as static and they are not adapted to changing EOCs. In this paper, we present an algorithm for assigning a pixel the label of target or background based on a statistical comparison of the distributions of measurements surrounding that pixel in the image. This method provides a feature-level adaptation to changing EOCs. Results are demonstrated on infrared imagery containing several military vehicles.
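The underlying decision rule is a simple likelihood comparison. The sketch below uses one-dimensional Gaussian class models for illustration; in practice the learned distributions are multivariate, and the paper's contribution is to re-estimate them locally around each pixel:

```python
from scipy.stats import norm

def label_pixel(feature, target_mu, target_sigma, bg_mu, bg_sigma):
    """Label a pixel 'target' when its feature value is more likely
    under the (here Gaussian) target distribution than under the
    background distribution."""
    like_target = norm.pdf(feature, target_mu, target_sigma)
    like_bg = norm.pdf(feature, bg_mu, bg_sigma)
    return 'target' if like_target > like_bg else 'background'
```

Treating these distributions as static is exactly the weakness the paper addresses: under changing EOCs, both class models drift, so the comparison must adapt.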
Selection of the kernel parameters is critical to the performance of Support Vector Machines (SVMs), directly impacting the generalization and classification efficacy of the SVM. An automated procedure for parameter selection is clearly desirable given the intractable problem of exhaustive search methods. The authors' previous work in this area involved analyzing the SVM training data margin distributions for a Gaussian kernel in order to guide the kernel parameter selection process. The approach entailed several iterations of training the SVM in order to minimize the number of support vectors. Our continued investigation of unsupervised kernel parameter selection has led to a scheme employing selection of the parameters before training occurs. Statistical methods are applied to the Gram matrix to determine kernel optimization in an unsupervised fashion. This preprocessing framework removes the requirement for iterative SVM training. Empirical results will be presented for the "toy" checkerboard and quadboard problems.
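The pre-training idea can be sketched as follows. The specific statistic used by the authors is not reproduced here; as an illustrative stand-in, we pick the Gaussian kernel width whose off-diagonal Gram entries have the largest variance, i.e. the kernel that best spreads the pairwise similarities:

```python
import numpy as np

def gram_matrix(X, sigma):
    """Gaussian (RBF) Gram matrix for kernel width sigma."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def pick_sigma(X, candidates):
    """Illustrative unsupervised heuristic: choose the sigma that
    maximizes the variance of the off-diagonal Gram entries.  Extreme
    sigmas collapse the Gram matrix toward all-zeros or all-ones,
    which carries no discriminative information."""
    def off_diag_var(K):
        mask = ~np.eye(len(K), dtype=bool)
        return K[mask].var()
    return max(candidates, key=lambda s: off_diag_var(gram_matrix(X, s)))
```

The key point matches the paper's framework: the statistic is computed from the Gram matrix alone, before any SVM training occurs.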
Image fusion serves as the basis for automatic target recognition; it maps images of the same scene received from different sensors into a common reference system. A novel fusion method is described that employs image local response and the hybrid evolutionary algorithm (HEA). Given a geometric transformation A(V) under parameter vector V (e.g., an affine image transformation) of the images subjected to fusion, the image local response is defined as an image transform R(V): small variations of the parameter vector V are applied to the image, and the corresponding variations of the least-squared differences of the gray levels of the two images (i.e., before and after the parameter variation) form the image response matrix. The transform R(V) extracts only the dynamic contents of the image, i.e., the salient features that are most sensitive to the geometric transformation A(V). Since R(V) maps the image onto itself, the result of the mapping is largely invariant to the type of sensor that was used to obtain the image. Once the response matrices are built for all images subjected to fusion, the HEA is used to map the images into the common reference system.
The detection and discrimination of targets in infrared imagery has been a challenging problem due to the variability of target and clutter (background) signatures. In this paper we discuss the application of a novel quadratic filtering method to missile seeker infrared closing sequences. Image filtering techniques are well suited for target detection applications since they avoid the disadvantages of typical pixel-based detection schemes (such as segmentation and edge extraction). Another advantage is that the throughput complexity of the filtering approach in the detection process does not vary with scene content. The performance of the proposed approach is assessed on several data sets, and the results are compared with those of previous linear filtering techniques. Since the signature of some of the clutter can be obtained in the field or during operation, we examine the impact of updating the filters to adapt to the clutter.
A major challenge for ATR evaluation is developing an accurate image truth that can be compared to an ATR algorithm's decisions to assess performance. While many standard truthing methods and scoring metrics exist for stationary targets in still imagery, techniques for dealing with motion imagery and moving targets are not as prevalent. This is partially because the moving imagery / moving targets scenario introduces the data association problem of assigning targets to tracks. This problem complicates the truthing and scoring task in two ways. First, video datasets typically contain far more imagery that must be truthed than static collections. Specifying the types and locations of the targets present for a large number of images is tedious, time consuming and error prone. Second, scoring ATR performance is ambiguous when assessing performance over a collection of video sequences. For example, if a target is tracked and successfully identified for 90% of a single video sequence, is the identification rate 90%, or is the single sequence evaluated in its entirety and the vehicle identification simply recorded as correct? In the former case, a bias will be introduced for easily identified targets that show up frequently in a sequence. In the latter case, the bias is avoided but system accuracy could be overstated.
In this paper, we present a complete truthing system we call the Scoring, Truthing, And Registration Toolkit (START). The first component is registration, which involves aligning images of the same scene to a common reference frame. Once that reference frame has been determined, the second component, truthing, is used to specify target identity, position, orientation, and other scene characteristics. The final component, scoring, is used to assess the performance of a given algorithm as compared to the specified truth. In motion imagery, both stationary and moving targets can be detected and tracked over portions of a motion imagery clip. We present an approach to scoring performance in this context that provides a natural generalization of the standard methods for dealing with still imagery.
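The two scoring conventions discussed above can be made concrete. The majority rule used here is an illustrative choice, not START's actual definition:

```python
def frame_level_rate(tracks):
    """tracks: list of (n_frames, n_correct) pairs, one per target
    track.  Every frame counts equally, so frequently seen targets
    dominate the score."""
    total = sum(n for n, _ in tracks)
    correct = sum(c for _, c in tracks)
    return correct / total

def track_level_rate(tracks, majority=0.5):
    """A track counts as correctly identified when the target was
    correctly labelled in more than `majority` of its frames; every
    track then counts equally."""
    hits = sum(1 for n, c in tracks if c / n > majority)
    return hits / len(tracks)
```

For a target identified in 90 of 100 frames alongside a short track missed entirely, the frame-level rate is about 0.82 while the track-level rate is 0.50, which is exactly the bias trade-off the paper describes.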
One way to increase the robustness and efficiency of unmanned surveillance platforms is to introduce an autonomous data acquisition capability. In order to mimic a sensor operator's search pattern, combining wide-area search with detailed study of detected regions of interest, the system must be able to produce target indications in real time. Rapid detection algorithms are also useful for cueing image analysts who process large amounts of aerial reconnaissance imagery. Recently, the use of a sequence of increasingly complex classifiers has been suggested by several authors as a means to achieve high processing rates at low false alarm and miss rates. The basic principle is that much of the background can be rejected by a simple classifier before more complex classifiers are applied to analyse more difficult remaining image regions. Even higher performance can be achieved if each detector stage is implemented as a set of expert classifiers, each specialised to a subset of the target training set. In order to cope with the increasingly difficult classification problem faced at successive stages, the partitioning of the target training set must be made increasingly fine-grained, resulting in a coarse-to-fine hierarchy of detectors. Most of the literature on this type of detector is concerned with face detection. The present paper describes a system designed for detection of military ground vehicles in thermal imagery from airborne platforms. The classifier components are trained using a variant of the LogitBoost algorithm. The results obtained are encouraging and suggest that it is possible to achieve very low false alarm and miss rates for this very demanding application.
Video compression is a necessary part of many real world applications where the data is being transmitted over data links with limited bandwidth. Previously, information metrics have been used to assess the distortion of target signatures with differing degrees of compression. This paper makes use of a well-known ATR algorithm to assess the impact of an instantiation of the evolving H.264 compression standard on the ATR detection itself.
A data object is constructed from a P-by-M Wurfelspiel matrix W by choosing an entry from each column to construct a sequence A0A1···AM-1. Each of the P^M possibilities is designed to correspond to the same category according to some chosen measure. This matrix could encode many types of data:
(1) musical fragments, all of which evoke sadness; each column entry is a 4-beat sequence, with a chosen A0A1A2 thus 16 beats long (W is P by 3);
(2) paintings, all of which evoke happiness; each column entry is a layer, and a given A0A1A2 is a painting constructed using these layers (W is P by 3);
(3) abstract feature vectors corresponding to action potentials evoked by a biological cell's exposure to a toxin; the action potential is divided into four relevant regions, each column entry represents the feature vector of a region, and a given A0A1A2A3 is then an abstraction of the excitable cell's output (W is P by 4);
(4) abstract feature vectors corresponding to an object such as a face or vehicle; the object is divided into four categories, each assigned an abstract feature vector, with the resulting concatenation an abstract representation of the object (W is P by 4).
All of the examples above correspond to one particular measure (sad music, happy paintings, an introduced toxin, an object to recognize) and hence, when a Wurfelspiel matrix is constructed, relevant training information for recognition is encoded that can be used in many algorithms. The focus of this paper is on the application of these ideas to automatic target recognition (ATR). In addition, we discuss a larger biologically based model of temporal-cortex polymodal sensor fusion which can use the feature vectors extracted from the ATR Wurfelspiel data.
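The combinatorial construction can be sketched directly. By assumption, the matrix is represented column-by-column (M columns of P entries each); drawing one entry per column yields P^M sequences:

```python
import itertools

def all_sequences(W):
    """Enumerate every sequence A0 A1 ... A(M-1) obtained by drawing
    one entry from each column of a P-by-M Wurfelspiel matrix W,
    represented here as a list of M columns of P entries each.
    There are P**M such sequences in total."""
    return [list(choice) for choice in itertools.product(*W)]
```

Each enumerated sequence is, by construction, a training exemplar for the single category the matrix encodes.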
This paper describes a novel approach to automatically recognizing a target based on a view morphing database constructed by our multi-view morphing algorithm. Instead of using a single reference image, a set of images or a video sequence is used to construct the reference database, where these images are re-organized by a triangulation of the viewing sphere. At the vertex of each triangle, one image is stored in the database as the reference view from a specific viewing direction. For each triangle, our tri-view morphing algorithm can synthesize a high-quality image for an arbitrary novel viewpoint amongst three neighboring reference images, and the barycentric blending scheme guarantees seamless transitions between neighboring triangles. Using the synthesized images, we apply an appearance-based recognition technique to recognize the target. In addition, using the proposed method, the pose of the object or the camera motion can be approximately estimated. Several examples are demonstrated in the experiments to show that our approach is effective and promising.
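The barycentric blending idea can be sketched per pixel. This is only the weighting step: the real tri-view morphing also warps the geometry of the three reference views, which is omitted here:

```python
import numpy as np

def tri_view_blend(views, weights):
    """Blend three (pre-warped) reference views with barycentric
    weights (w0, w1, w2), where each w_i >= 0 and the weights sum
    to 1.  At a triangle vertex one weight is 1, so the blend reduces
    to the stored reference view, giving seamless transitions."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9
    return sum(wi * v for wi, v in zip(w, views))
```

Since the weights vary continuously across each triangle of the viewing sphere, neighboring triangles that share an edge produce identical blends along that edge.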
This contribution describes the results of a collaboration whose objective was to technically validate an assessment approach for automatic target recognition (ATR) components. The approach is intended to become a standard for component specification and acceptance tests during development and procurement, and includes the provision of appropriate tools and data.
The collaboration was coordinated by the German Federal Office for Defense Technology and Procurement (BWB). Partners besides the BWB and the group Assessment of Fraunhofer IITB were ATR development groups of EADS Military Aircraft, EADS Dornier and Fraunhofer IITB.
The ATR development group of IITB contributed ATR results and developer's expertise to the collaboration while the industrial partners contributed ATR results and their expertise both from the developer's and the system integrator's point of view. The assessment group's responsibility was to provide task-relevant data and assessment tools, to carry out performance analyses and to document major milestones.
The result of the collaboration is twofold: the validation of the assessment approach by all partners, and two approved benchmarks for specific military target detection tasks in IR and SAR images. The tasks are defined by parameters including sensor, viewing geometries, targets, background etc. The benchmarks contain IR and SAR sensor data, respectively. Truth data and assessment tools are available for performance measurement and analysis. The datasets are split into training data for ATR optimization and test data exclusively used for performance analyses during acceptance tests. Training data and assessment tools are available for ATR developers upon request.
The work reported in this contribution was supported by the German Federal Office for Defense Technology and Procurement (BWB), EADS Dornier, and EADS Military Aircraft.
Class-associative detection involves the recognition of multiple dissimilar targets simultaneously present in the input scene. In this paper, a synthetic discriminant function (SDF) is incorporated into the fringe-adjusted joint transform correlation based class-associative target detection technique to make it distortion invariant. The concept of fractional-power fringe-adjusted joint transform correlation (FPFJTC) is utilized both to generate the SDF-based reference images and to detect the class-associative targets using a multi-target detection algorithm. FPFJTC provides three different types of filters, which may be termed generalized fringe-adjusted filters (GFAFs), to modify the joint power spectrum, and thus facilitates the selection of an appropriate filter or filters. Here we propose using the phase-only filter variation of the GFAF at all steps for successful detection. Simulation results verify that the proposed scheme performs satisfactorily in detecting both binary and gray-level images of a class, irrespective of distortion.
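The abstract's fringe-adjusted joint transform correlation can be sketched numerically. The following is a simplified illustration, not the paper's GFAF or its SDF machinery: the reference and scene are placed side by side, the joint power spectrum (JPS) is raised to a fractional power `p` and divided by a fringe-adjusting term, and the inverse transform gives the correlation plane. The parameter values and the exact filter form are assumptions for illustration.

```python
import numpy as np

def fpfjtc_correlation(reference, scene, p=0.3, eps=1e-6):
    """Illustrative fractional-power fringe-adjusted JTC correlation plane."""
    # Joint input plane: reference and scene placed side by side.
    joint = np.concatenate([reference, scene], axis=1)
    jps = np.abs(np.fft.fft2(joint)) ** 2  # joint power spectrum
    # Fractional-power, fringe-adjusted modification of the JPS:
    # the division suppresses strong spectral fringes, while the fractional
    # power p trades correlation-peak sharpness against noise tolerance.
    modified = jps ** p / (jps + eps)
    # Inverse transform yields the correlation plane; cross-correlation
    # peaks appear at offsets set by the reference-to-target separation.
    return np.abs(np.fft.ifft2(modified))
```

In a full class-associative detector, the reference would be an SDF image synthesized from distorted training views of each target class, so one correlation pass responds to the whole class.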
In past decades, the ATR problem has been treated as a pattern recognition problem. The reasons that the pattern recognition problem has never been solved successfully and reliably for real-world images run deeper than a lack of appropriate ideas. Vision is part of a larger system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding: an interpretation of visual information in terms of these knowledge models. Vision mechanisms cannot be completely understood apart from the informational processes related to knowledge and intelligence. A reliable solution to the ATR problem is therefore possible only within the solution of the more generic image understanding problem. A biologically inspired Network-Symbolic representation, in which systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding precise computation of 3-D models. The logic of visual scenes can be captured in Network-Symbolic models and used to disambiguate visual information. Network-Symbolic transformations make possible the invariant recognition of a real-world object as an exemplar of a class. This allows for the creation of ATR systems that are reliable in field conditions.
When a binary pattern, such as an edge-detected object or the contour of a group of features, is selected from the first (preprocessing) layer of a neural network system according to the designer's choice, the refined and accurate recognition of this object depends on an accurate but optimally robust comparison of the input pattern to a limited number of standard patterns. Optimum robustness here means that each standard pattern has an allowed range of variation, determined automatically during the noniterative learning, and that an unknown pattern has an equal chance of falling within each range.
This paper reports the derivation and analysis of the neural network system from the point of view of discrete algebra and matched filters. Its design principle relates closely to that of universal mapping in a noniterative neural system and to that of the matched filter in an electronic communication system.
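The matched-filter principle the abstract invokes can be sketched in a few lines: each standard pattern is normalized to unit energy (so that, per the abstract's robustness condition, no class is favored by raw signal strength), and an unknown pattern is assigned to the standard with the largest inner product. The function names and example patterns below are illustrative assumptions, not the paper's notation.

```python
import math

def normalize(v):
    # Unit-energy normalization gives every standard pattern an equal
    # footing, so no class is favored by raw signal amplitude.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def matched_filter_classify(pattern, standards):
    # The matched-filter score is the inner product of the normalized
    # input with each normalized standard; the class with the highest
    # score is the best match.
    x = normalize(pattern)
    scores = {name: sum(a * b for a, b in zip(x, normalize(s)))
              for name, s in standards.items()}
    return max(scores, key=scores.get)
```

In the noniterative setting the abstract describes, the "learning" step amounts to storing and normalizing the standards once, with no iterative weight updates.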
Infrared imagers used to acquire data for automatic target recognition are inherently limited by the physical properties of their components. Fortunately, image super-resolution techniques can be applied to overcome the limits of these imaging systems. This increase in resolution can have potentially dramatic consequences for improved automatic target recognition (ATR) on the resultant higher-resolution images. We discuss super-resolution techniques in general and review in detail one algorithm from the literature suited to real-time application on forward-looking infrared (FLIR) images. Following this tutorial, we present a numerical analysis of the algorithm applied to synthetic IR data, and conclude by discussing the implications of the analysis for improved ATR accuracy.
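The specific algorithm reviewed in the paper is not reproduced here; as a minimal illustration of the shift-and-add family of multi-frame super-resolution methods (an assumption about the general technique, not necessarily the reviewed algorithm), the following interleaves several low-resolution frames, each taken at a known sub-pixel shift, onto a denser grid.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Interleave sub-pixel-shifted low-res frames onto a high-res grid."""
    h, w = frames[0].shape
    hires = np.zeros((h * factor, w * factor))
    counts = np.zeros_like(hires)
    for frame, (dy, dx) in zip(frames, shifts):
        # Each frame samples the scene at a known integer sub-pixel
        # offset (dy, dx), so it fills a distinct phase of the grid.
        hires[dy::factor, dx::factor] += frame
        counts[dy::factor, dx::factor] += 1
    # Average where phases overlap; untouched cells stay zero.
    return hires / np.maximum(counts, 1)
```

Real FLIR sequences require estimating the shifts by registration and typically a deblurring step afterward; this sketch assumes the shifts are known exactly.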