This PDF file contains the front matter associated with SPIE Proceedings Volume 6571, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Pan-sharpened images are useful in a wide variety of applications. Hence, assigning quantitative importance to image
quality, depending on the nature of the target application, may be required to yield maximum benefit. Current techniques for
joint, reference-free evaluation of spatial and spectral quality do not allow importance to be quantitatively assigned to
the image quality. This work proposes a novel global index based on harmonic mean theory to jointly evaluate the
performance of pan-sharpening algorithms without using a reference image. The harmonic mean of relative spatial
information gain and relative spectral information preservation provides a unique global index to compare the
performance of different algorithms. The proposed approach also facilitates in assigning relevance to either the spectral
or spatial quality of an image. The information divergence between the MS bands at lower resolutions and the pansharpened
image provides a measure of the spectral fidelity and mean-shift. Mutual information between the original pan
and synthetic pan images generated from the MS and pan-sharpen images is used to calculate the relative gain. The
relative gain helps to quantify the amount of spatial information injected by the algorithm. A trend comparison of the
proposed approach with other quality indexes using well-known pan-sharpening algorithms on high-resolution (IKONOS
and QuickBird) and medium-resolution (Landsat 7 ETM+) datasets reveals that the new index can be used to evaluate the
quality of a pan-sharpened image at the resolution of the pan image without the availability of a reference image.
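The global index described in this abstract can be sketched as a weighted harmonic mean of the two relative measures. This is a minimal illustrative sketch, not the authors' exact formulation; the function name, the weight parameter `alpha`, and the assumption that both inputs are normalized to (0, 1] are ours.

```python
def global_quality_index(spatial_gain, spectral_preservation, alpha=0.5):
    """Weighted harmonic mean of relative spatial information gain and
    relative spectral information preservation (both assumed in (0, 1]).

    alpha weights spatial quality; (1 - alpha) weights spectral quality,
    letting the user assign relevance to either aspect.
    """
    if spatial_gain <= 0 or spectral_preservation <= 0:
        return 0.0
    return 1.0 / (alpha / spatial_gain + (1.0 - alpha) / spectral_preservation)
```

A harmonic mean is a natural choice here because it is dominated by the weaker of the two measures: an algorithm cannot score well by excelling spectrally while injecting little spatial detail, or vice versa.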
The registration of images from cameras of different types and/or at different locations is of great interest for
both military and civilian applications. Most available techniques perform pixel-level registration and use intensity
correlation to spatially align pixels from the two cameras. Considerable computation is spent operating on
each pixel of the images, and as a result it is difficult to register the images in real time. Furthermore,
images from different types of cameras may have different intensity distributions for corresponding pixels, which
will degrade the registration accuracy. In this paper we propose to use an improved Minimal Resource Allocation
Network (MRAN) to solve the image registration problem for two cameras. Potential features are added to
improve the performance of MRAN. There are two main contributions in this paper. First, weights going directly
from inputs to outputs are introduced, and these parameters are updated by including them in the extended Kalman
filter algorithm. Second, the initial number of hidden units for the sequential training of MRAN is specified, and
the means of the initial hidden units are precalculated using Self-Organizing Maps. The experimental results
show that the proposed algorithm performs very well in both speed and accuracy.
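The first contribution above, direct input-to-output weights added to a Gaussian RBF network, can be sketched as follows. This is a hedged sketch of the forward pass only (the extended Kalman filter update is omitted); all names and shapes are our own assumptions, not the paper's notation.

```python
import numpy as np

def rbf_with_skip(x, centers, widths, W_rbf, W_skip, bias):
    """Output of a Gaussian RBF network augmented with direct
    input-to-output (skip) weights, as in the modified MRAN.

    x:       input vector, shape (d,)
    centers: hidden-unit means, shape (h, d)
    widths:  hidden-unit widths, shape (h,)
    W_rbf:   hidden-to-output weights, shape (o, h)
    W_skip:  direct input-to-output weights, shape (o, d)
    bias:    output bias, shape (o,)
    """
    # Gaussian hidden-unit activations.
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    # Hidden contribution plus linear skip contribution.
    return W_rbf @ phi + W_skip @ x + bias
```

In sequential training, the entries of `W_rbf`, `W_skip`, `bias`, `centers`, and `widths` would all be stacked into the state vector updated by the extended Kalman filter.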
Recently, multi-sensor image fusion systems have been widely investigated due to their growing applications. In such
systems, robust and accurate multi-modal image registration is essential, and fast registration is also important for
many applications. In this paper, we propose a fast algorithm for registering multi-modal images that are acquired from
two different sensors: electro-optic (EO) and infrared (IR). In the registration of multi-modal images, a normalized
mutual information (NMI) based registration algorithm is preferred due to its robust and accurate performance, and the
downhill simplex optimization scheme is popular in NMI-based registration because of its fast convergence rate.
However, since it still suffers from high computational complexity, the complexity should be reduced further for
(semi-)real-time applications. In this paper, we attempt to reduce the computational complexity of the registration process. We
first modify the searching methodology for unconstrained function minimization in the ordinary downhill simplex
algorithm, by suggesting new vertex movements for fast vertex contraction. Thereby, we can reduce the number of
function evaluations. We also minimize the function evaluation time by linearizing the projective transformation in the
interpolation routine. Simulation results show that the proposed algorithm noticeably reduces the computational
complexity by 30% compared to the conventional NMI-based registration algorithm.
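The NMI similarity measure that this registration scheme maximizes can be computed from a joint grey-level histogram. The sketch below uses the common definition NMI = (H(A) + H(B)) / H(A, B); the bin count and function name are our own choices, not the paper's.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram.

    NMI ranges from 1 (independent images) to 2 (identical images), and
    peaks when the two images are correctly aligned.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)   # marginal distribution of image A
    py = pxy.sum(axis=0)   # marginal distribution of image B

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

A downhill simplex optimizer would evaluate this function at each candidate transformation, which is why reducing the number of function evaluations (and the cost of the interpolation inside each one) translates directly into reduced registration time.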
We determined the signal-to-noise ratios for an image fusion approach that is suitable for
application to systems with disparate sensors. We found that reconstruction error was present even when the
forward and reverse transforms used for image fusion give perfect reconstruction.
Multi-sensor platforms are widely used in surveillance video systems for both military and civilian applications. The
complementary nature of different types of sensors (e.g., EO and IR sensors) makes it possible to observe the scene under
almost any condition (day/night/fog/smoke). In this paper, we propose an innovative EO/IR sensor registration and
fusion algorithm which runs real-time on a portable computing unit with head-mounted display. The EO/IR sensor suite
is mounted on a helmet for a dismounted soldier and the fused scene is shown in the goggle display upon the processing
on a portable computing unit. The linear homography transformation between images from the two sensors is precomputed
for the mid-to-far scene, which reduces the computational cost for the online calibration of the sensors. The
system is implemented in highly optimized C++ code, with MMX/SSE, and performs real-time registration. The
experimental results on real captured video show that the system works very well in both speed and performance.
In this work we investigate the relationships between features representing images, fusion schemes for these
features, and kernel types used in a Web-based Adaptive Image Retrieval System. Using the Kernel Rocchio
learning method, several kernels having polynomial and Gaussian forms are applied to general images represented
by annotations and by color histograms in RGB and HSV color spaces. We propose different fusion schemes,
which incorporate kernel selector component(s). We perform experiments to study the relationships between
a concatenated vector and several kernel types. Experimental results show that an appropriate kernel could
significantly improve the performance of the retrieval system.
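The polynomial and Gaussian kernel forms applied to the concatenated feature vectors can be sketched as below. These are the standard textbook kernels; the parameter defaults are illustrative, not values from the paper.

```python
import numpy as np

def polynomial_kernel(x, y, degree=2, c=1.0):
    """Polynomial kernel K(x, y) = (<x, y> + c)^degree."""
    return (np.dot(x, y) + c) ** degree

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
```

In a Kernel Rocchio setting, either kernel replaces the inner product between a query representation and an image representation (annotation vector, RGB histogram, HSV histogram, or their concatenation), so a kernel selector component amounts to choosing which of these functions scores each feature group.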
We propose a contrast enhanced fusion (CEF) method for merging infrared and color visible images. The CEF method
can be carried out in two ways: the standard CEF method and the fast CEF method. The standard method transforms the
original RGB color visible image into a linear luminance-chrominance color space in order to treat the achromatic and
chromatic components separately. The achromatic component and infrared image are combined by a grayscale fusion
scheme, and the original achromatic component is replaced by the grayscale fused image. Before the data are
transformed back into the RGB color space, the means and variances of the original achromatic component and
the grayscale fused image are matched by a linear luminance remapping. The remapping procedure effectively enhances
the contrast of the final color fused image. The standard CEF method can be implemented efficiently by the fast CEF
method that has the same fusion performance as the standard approach but manipulates images directly in RGB color
space. We used the proposed method to merge long wave infrared and color TV images. The experimental results show
that the CEF method can effectively produce a high-contrast color fused image with similar natural color characteristics
as the original color visible image. In addition, we have also illustrated that the hybrid simple and complex CEF methods
can be applied as a region of interest (ROI) image fusion solution, which allows ROIs to be fused with better quality
than the rest of the original images.
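The linear luminance remapping step, matching the mean and variance of the grayscale fused image to those of the original achromatic component, can be sketched as below. The direction of the match and the small epsilon guard are our assumptions; the abstract does not specify these details.

```python
import numpy as np

def remap_luminance(fused, original):
    """Linearly remap the grayscale fused image so that its mean and
    variance match those of the original achromatic component.

    The remapping is y = a * (fused - mean(fused)) + mean(original),
    with a chosen so that std(y) == std(original).
    """
    scale = original.std() / (fused.std() + 1e-12)  # eps avoids div-by-zero
    return (fused - fused.mean()) * scale + original.mean()
```

Because the remapping is affine, it preserves the spatial detail injected by the grayscale fusion while restoring the overall brightness and contrast statistics of the visible-band luminance, which is what keeps the final color rendition natural.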
A multi-sensor detection and fusion technology is described in this paper. The system consists of inputs from three
sensors, Infra Red, Doppler Motion, and Stereo Video. The technique consists of three processing parts corresponding
to each sensor data, and a fusion module, which makes the final decision based on the inputs from the three parts. The
signal processing and detection algorithms process the inputs from each sensor and provide specific information to the
fusion module. The fusion module is based on Bayes belief propagation theory. It takes the processed inputs from all
of the sensor modules and provides a final decision on the presence or absence of objects, as well as its reliability,
based on the iterative belief propagation algorithm operating on decision graphs. This choice of sensors is designed to
give high reliability. The infrared and Doppler sensors provide detection ability at night, while stereo video has the ability to
analyze depth and range information. The combination of these sensors has the ability to provide a high probability of
detection and a very low false alarm rate. A prototype system was built using this technique to study the feasibility of
intrusion detection for NASA's launch danger zone protection. The system verified the potential of the proposed
algorithms and demonstrated the feasibility of a high probability of detection with a low false alarm rate.
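On a decision graph with a single object node and conditionally independent sensors, the belief propagation described above reduces to a naive-Bayes odds update. The sketch below shows only this degenerate single-node case as intuition; the paper's iterative algorithm over general decision graphs is not reproduced, and all names and numbers are illustrative.

```python
def fuse_detections(prior, likelihoods):
    """Naive-Bayes fusion of per-sensor evidence for the hypothesis
    'object present' (e.g. IR, Doppler motion, stereo video).

    prior:       prior probability of an object being present
    likelihoods: list of (P(obs | present), P(obs | absent)) per sensor
    Returns the posterior probability of presence.
    """
    odds = prior / (1.0 - prior)
    for p_given_present, p_given_absent in likelihoods:
        # Each sensor multiplies the odds by its likelihood ratio.
        odds *= p_given_present / p_given_absent
    return odds / (1.0 + odds)
```

This also shows why the sensor mix lowers the false alarm rate: a single noisy sensor firing multiplies the odds only once, while a true intrusion produces concordant evidence from all three modalities and drives the posterior sharply upward.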
The objective of this paper is to develop novel classification structures for military target detection and recognition by
employing different fusion techniques. In real applications, the great diversity of materials in the background areas and
the similarity between the background and target signatures result in high false alarm rates and large misclassification
errors. In this paper, three new systems are proposed using different fusion techniques: pixel-level fusion, decision
fusion, and classification fusion employing confidence vectors. These newly developed systems are tested using
experimental data to show their effectiveness.
Training classifiers individually, and then fusing their results, has the potential to improve classification accuracy; often,
dramatic improvements are realized. In this paper we examine how training classifiers using multiple polarimetric
features such as the Cloude-Pottier decomposition, even and odd bounce and the Polarimetric Whitening filter and then
fusing their results affects performance of ship classification. We explore and compare two currently competing
technologies of classifier bagging and classifier boosting for classifier fusion and introduce a new approach which
conducts a search through solution space to configure an optimal classifier given a library of classifiers and features. A
related and important facet of this work is feature selection and feature reduction methods. We explore how the selection
of different features affects classification performance. We also explore estimates of the classifier error, provide
estimates for noise bounds on the data, and compare the performance of the different methods relative to the noise present in the data.
The US Air Force Research Laboratory (AFRL) Fusion for Identifying Targets Experiment (FITE) program aims to
determine the benefits of decision-level fusion (DLF) of Automatic Target Recognition (ATR) products. This paper
describes the Bayesian framework used to characterize the trade-space for DLF approaches and applications. The
overall fusion context is represented as a Bayesian network and the fusion algorithms use Bayesian probability
computations. Bayesian networks conveniently organize the large sets of random variables and distributions appearing
in fusion system models, including models of operating conditions, prior knowledge, ATR performance, and fusion
algorithms. The relationship between fuser performance and these models may be analytically stated (the FITE
equation), but must be solved via stochastic system modeling and Monte Carlo simulation. A key element of the DLF
trade-space is the degree to which the various models depend on ATR operating conditions, since these will determine
the fuser's complexity and performance and will suggest new requirements on source ATRs.
Real world Operating Conditions (OCs) influence sensor data that in turn affects the performance of target detection
and identification systems utilizing the collected information. The impact of operating conditions on collected data is
widely accepted, but not fully characterized. OCs that affect data depend on sensor wavelength and associated scenario
phenomenology, and can vary significantly between electro-optical (EO), infrared (IR), and radar sensors. This paper
will discuss what operating conditions might be modeled for each sensor type and how they could affect automatic target
recognition (ATR) systems designed to exploit their respective sensory data. The OCs are broken out into four
categories: sensor, environment, target, and ATR algorithm training. These main categories further contain
subcategories with varying levels of influence. The purpose of this work is to develop an OC distribution model for the
"real world" that can be used to realistically represent the performance of multiple ATR systems, and ultimately the
decision made from the fused ATR results. An accurate OC model will greatly enhance the performance assessment of
ATR and fusion systems by affording Bayesian conditioning in fusion performance analysis and aiding in the sensitivity
analysis of fusion performance over different operational conditions. Accurate OC models will also be useful in the
fusion algorithm operation.
A data fusion-based, multisensory detection system, called "Volume Sensor", was developed under the Advanced
Damage Countermeasures (ADC) portion of the US Navy's Future Naval Capabilities program (FNC) to meet reduced
manning goals. A diverse group of sensing modalities was chosen to provide an automated damage control monitoring
capability that could be constructed at a relatively low cost and also easily integrated into existing ship infrastructure.
Volume Sensor employs an efficient, scalable, and adaptable design framework that can serve as a template for
heterogeneous sensor network integration for situational awareness. In the development of Volume Sensor, a number of
challenges were addressed and met with solutions that are applicable to heterogeneous sensor networks of any type.
These solutions include: 1) a uniform, but general format for encapsulating sensor data, 2) a communications protocol
for the transfer of sensor data and command and control of networked sensor systems, 3) the development of event
specific data fusion algorithms, and 4) the design and implementation of modular and scalable system architecture. In
full-scale testing in a shipboard environment, two prototype Volume Sensor systems demonstrated the capability to
provide highly accurate and timely situational awareness regarding damage control events while simultaneously
imparting a negligible footprint on the ship's 100 Mbps Ethernet network and maintaining smooth and reliable
operation in a real-time fashion.
This paper proposes an innovative data-fusion/ data-mining game theoretic situation awareness and impact assessment
approach for cyber network defense. Alerts generated by Intrusion Detection Sensors (IDSs) or Intrusion Prevention
Sensors (IPSs) are fed into the data refinement (Level 0) and object assessment (L1) data fusion components. High-level
situation/threat assessment (L2/L3) data fusion based on Markov game model and Hierarchical Entity Aggregation
(HEA) are proposed to refine the primitive prediction generated by adaptive feature/pattern recognition and capture new
unknown features. A Markov (Stochastic) game method is used to estimate the belief of each possible cyber attack
pattern. Game theory captures the nature of cyber conflicts: determination of the attacking-force strategies is tightly
coupled to determination of the defense-force strategies and vice versa. Also, Markov game theory deals with uncertainty
and incompleteness of available information. A software tool is developed to demonstrate the performance of the high-level
information fusion for cyber network defense situations, and a simulation example shows the enhanced understanding
of cyber-network defense.
This paper presents initial results for a tracking simulation of multiple maritime vehicles for use in a data fusion program detecting Weapons of Mass Destruction (WMD). This simulation supports a fusion algorithm (H2LIFT) for collecting and analyzing data, providing a heuristic analysis tool for detecting weapons of mass destruction in the maritime domain. Tools required to develop a navigational simulation fitting a set of project objectives are introduced for integration into the H2LIFT algorithm. Emphasis is placed on the specific requirements of the H2LIFT project; however, the basic equations, algorithms, and methodologies can be used as tools in a variety of scenario simulations. Discussion is focused on track modeling (e.g., position tracking of ships), navigational techniques, WMD detection, and simulation of these models using Matlab and Simulink. Initial results provide absolute ship position data for a given multi-ship maritime scenario with random selection of the ship containing a WMD. Required coordinate systems, conversions between coordinate systems, Earth modeling techniques, and navigational conventions and techniques are introduced for development of the simulations.
Disparity and uncertainty of information sources are both significant problems in information fusion. This paper investigates the problem of disparity in general, and in conjunction with FLASH - a hybrid information-fusion cognitive-processing approach we developed. Different forms of disparity are identified and their categorization is presented, and their implications on the information fusion processes are discussed. The issue of feature-level vs. decision-level fusion is investigated, and the methods of coping with disparity within FLASH are presented. Source uncertainty estimation techniques are discussed as well. Disparity studies and the results of computational experiments related to them are presented. These studies are suggestive of the potential of the FLASH hybrid approach for fusion of disparate information sources.
This paper presents a reasoning system that pools the judgments from a set of inference agents with information
from heterogeneous sources to generate a consensus opinion that reduces uncertainty and improves knowledge
quality. The system, called Collective Agents Interpolation Integral (CAII), addresses a high level data fusion
problem by combining, in a mathematically sound manner, multiple models of inference in a knowledge-intensive
multi-agent architecture. Two major issues are addressed in CAII. One is the ability of the inference mechanisms
to deal with hybrid data inputs from multiple information sources and map the diverse data sets to a uniform
representation in an objective space of reasoning and integration. The other is the ability of the system
architecture to allow the continuous and discrete outputs of a diverse set of inference agents to interact and cooperate.
In our present work we introduce the use of data fusion in the field of transportation, and more
precisely for motorway travel time estimation. We present an ad-hoc approach as the operational
foundation for the development of a novel travel time estimation algorithm, called the Modified
Cumulative Traffic Counts Method (MCTC). Based on a data fusion paradigm, we combine in real
time multiple pieces of evidence derived from two complementary sources to feed our MCTC inference
engine and attempt to best estimate the prevailing travel time. Our approach has as its main advantages
the modeling power of the Theory of Evidence in expressing beliefs in hypotheses and the ability to
express uncertainty in terms of confidence intervals. We evaluate our travel time estimation
algorithm prototype through a set of experiments that were conducted with real network traffic. We
conclude that data fusion is a promising approach, as it increases the estimation and prediction
capability of our MCTC algorithm and increases the robustness of the estimation process.
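The Theory of Evidence machinery mentioned above rests on Dempster's rule for combining mass functions from two sources. The sketch below is the standard combination rule, not the MCTC-specific inference engine; focal elements are represented as frozensets, and the example hypotheses are our own.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments
    over a small frame of discernment.

    m1, m2: dicts mapping focal elements (frozensets) to masses.
    Returns the combined, conflict-normalized mass assignment.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize away the conflicting mass.
    return {s: m / (1.0 - conflict) for s, m in combined.items()}
```

In a travel-time setting, the two complementary traffic data sources would each contribute a mass function over hypotheses such as {free-flow} or {free-flow, congested}, and the combined assignment supports both a point estimate and a confidence interval.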
Real-time target tracking in large disparate sensor networks has been simulated with a parallelized search based data
fusion algorithm using a simulated annealing approach. The networks are composed of large numbers of low fidelity
binary and bearing-only sensors, and small numbers of high fidelity position sensors over a large region. The primitive
sensors provide limited information, not sufficient to locate targets; the position sensors can report both range and
direction of the targets. Target positions are determined through fusing information from all types of sensors. A score
function, which takes into account the fidelity of sensors of different types, is defined and used as the evaluation function
for the optimization search. The fusion algorithm is parallelized using spatial decomposition so that the fusion process
can finish before the arrival of the next set of sensor data. A series of target tracking simulations are performed on a
Linux cluster with communication between nodes facilitated by the Message Passing Interface (MPI). The probability of
detection (POD), false alarm rate (FAR), and average deviation (AVD) are used to evaluate the network performance.
The input target information used for all the simulations is a set of target track data created from a theater-level air scenario.
This research is intended to contribute to the development of automated and human-in-the-loop systems for higher-level fusion to respond to the information requirements of command decision making. In tactical situations with short time constraints, the analysis of information requirements may take place in advance for certain classes of problems and be provided to commanders and their staff as part of the control and communications systems that come with sensor networks. In particular, it may be possible for certain standing orders to assume the role of Priority Intelligence Requirements. Standing orders to a sensor network are analogous to standing orders to Soldiers. Trained Soldiers presumably do not need to be told to report contact with hostiles, for example, or to report any sighting of civilians with weapons. Such standing orders define design goals and engineering requirements for sensor networks and their control and inference systems. Since such standing orders can be defined in advance for a class of situations, they minimize the need for situation-specific human analysis. Thus, standing orders should be able to drive automatic control of some network functions, automated fusion of sensor reports, and automated dissemination of fused information. We define example standing orders, and outline an algorithm for responding to one of them based on our experience in the field of multisensor fusion.
In this paper, we evaluate the use of a rank-score diversity measure for selecting sensory fusion operations for a robot localization and mapping application. Our current application involves robot mapping and navigation in an outdoor urban search and rescue situation in which we have many similar and mutually occluding landmarks. The robot is a 4-wheel direct-drive platform equipped with visual, stereo depth, and ultrasound sensors.
In such an application it is difficult to make useful and realistic assumptions about the sensor or environment statistics. Combinatorial Fusion Analysis (CFA) is used to develop an approach to fusion with unknown sensor and environment statistics. A metric is proposed that indicates when fusion from a set of fusion alternatives will produce a more accurate estimation of depth than either sonar or stereo alone, and when it will not. Experimental results are reported to illustrate that two CFA criteria are viable predictors for distinguishing between positive fusion cases (the combined system performs better than or equal to the individual systems) and negative cases.
The Maximum Likelihood Ensemble Filter (MLEF) is an alternative deterministic ensemble-based filter technique
that optimizes a non-linear cost function using a Maximum Likelihood approach. In addition to the common
use of ensembles for calculating error covariance, the ensembles in MLEF are exploited to efficiently calculate
the Hessian preconditioning and the gradient of the cost function.
This study is divided into two segments. The first part presents a one-sensor approach, where MLEF is
compared to different filters, the Extended Kalman Filter and the Ensemble Kalman Filter, using the Lorenz 63
system. The second part develops a multi-sensor system. Here we study a moving particle on an orbit
obtained from the same Lorenz system. We analyze the information content of MLEF's ensemble subspace for
each sensor and consider the effects of different number of ensembles on the fusion process. In practice, when
using ensemble based filtering techniques, a large ensemble size is required to obtain the best results. In this
study we show that MLEF can obtain similar results using a smaller ensemble size by utilizing an information
matrix, where essential characteristics are captured. This is a vital consideration when working with multi-sensor data fusion systems.
In this paper, a technique is presented for establishing the value of acquiring data on attributes
unavailable at the time an initial inference is made from a fuzzy cognitive map. The technique involves
three steps. In the first, an assessment is made of the reachability of unavailable attributes to the final
outcomes. This involves determining whether a chain of causality from the attribute of interest to the
outcome is present. If not, the attribute of concern cannot affect the outcome and can be eliminated from
further consideration. For those nodes that can affect the outcome, dominance in the chains of causality is
determined within the map. This is the second step in the process. If other paths dominate the chain of
interest such that the attribute cannot affect the outcome regardless of its value, then it can also be
eliminated from further consideration. In the final step, assuming that the cost of acquiring the required
information has been incorporated into the map, a determination is made of the value of having the information.
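The first step, the reachability test, amounts to a transitive-closure query on the signed adjacency matrix of the fuzzy cognitive map. The sketch below is our own minimal rendering of that step only (dominance analysis and valuation are not shown), using a Warshall-style closure.

```python
import numpy as np

def reachable(adjacency, source, target):
    """Step 1: check whether any chain of causality connects an
    attribute (source node) to an outcome (target node) in a fuzzy
    cognitive map given as a signed causal weight matrix.
    """
    reach = (np.abs(np.asarray(adjacency)) > 0)  # direct causal links
    n = len(reach)
    # Warshall-style transitive closure over intermediate nodes.
    for k in range(n):
        reach |= reach[:, [k]] & reach[[k], :]
    return bool(reach[source, target])
```

If `reachable` returns False, the attribute cannot affect the outcome through any causal chain, so acquiring its value has no decision value and it can be pruned before the more expensive dominance analysis.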
The successful design and application of the Ordered Weighted Averaging (OWA) method as a decision making tool
depend on the efficient computation of its order weights. The most popular methods for determining the order weights
are the Fuzzy Linguistic Quantifiers approach and the Minimal Variability method which give different behavior patterns
for OWA. These methods will be compared by using Sensitivity Analysis on the outputs of OWA with respect to the
optimism degree of the decision maker.
The theoretical results are illustrated in a water resources management problem. The Fuzzy Linguistic Quantifiers
approach gives more information about the behavior of the OWA outputs in comparison to the Minimal Variability
method. However, in using the Minimal Variability method, the OWA has linear behavior with respect to the optimism
degree and therefore has better computational efficiency.
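The Fuzzy Linguistic Quantifiers approach generates OWA order weights from a regular increasing monotone quantifier, commonly Q(r) = r^alpha, where alpha encodes the decision maker's optimism degree. The sketch below uses that common choice; it is illustrative, not the paper's specific parameterization.

```python
import numpy as np

def owa(values, alpha=2.0):
    """OWA aggregation with order weights derived from the RIM
    quantifier Q(r) = r**alpha: w_i = Q(i/n) - Q((i-1)/n).

    alpha < 1 behaves optimistically (weight toward large values),
    alpha > 1 pessimistically (weight toward small values).
    """
    n = len(values)
    r = np.arange(n + 1) / n
    weights = r[1:] ** alpha - r[:-1] ** alpha
    ordered = np.sort(values)[::-1]   # OWA reorders inputs descending
    return float(weights @ ordered)
```

Sensitivity analysis in this setting means examining how `owa(values, alpha)` changes as alpha varies, which is exactly where the linear-in-alpha behavior of Minimal Variability weights pays off computationally.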
When a non-gyro inertial measurement unit (NGIMU) is operating, inevitable static and dynamic coupling
errors occur. The coupling error is defined as the situation in which the output signal of a single-axis accelerometer
includes an additional component induced by the accelerometer inputs in other directions. Obviously, the coupling error
will seriously decrease the system measurement precision. Based on a nine-accelerometer configuration of the NGIMU and
the definition of the coupling error, a new unified static and dynamic decoupling method is applied to the NGIMU. The
method overcomes the complexity that arises from applying static and dynamic decoupling methods separately,
and simplifies the subsequent processing system. Finally, a simulation case for estimating the error of the
angular rate in three axes is investigated. The simulation results show that after the static and dynamic decoupling, the
navigation precision is improved effectively.
Multispectral and high-resolution satellite imagery, such as that from LANDSAT, IRS, IKONOS, and Digital Globe, has
steadily improved in spatial resolution and spectral reflectance, and is being used effectively and efficiently in
many applications; the registration of spectral reflectance in different channels of the electromagnetic spectrum
is the principal characteristic of multispectral satellite imagery. The real-time nature
of remotely sensed data can be of high value for mapping and analyzing surface features. This paper explores the
broader application of remote sensing analysis for terrain mapping from multispectral satellite data; the accuracy of
the digital elevation model has been verified against various surface interpolation algorithms, in which contouring and
point interpolation techniques were extensively used. The study reveals that digital interpretation has become
sharper at large scales, and that terrain mapping with high-resolution and multispectral satellite data, along with a GPS Mobile
Mapper, can be done for any region; to some extent, this research work has confirmed that sensors on a satellite can
support the navigation of army movements.
Multisensor data fusion is an emerging technology applied to defense and non-defense applications. In this paper, an
image fusion algorithm using different texture parameters is proposed to identify long-range targets. The method uses a
semi-supervised approach for detecting a single target from the input images. The procedure consists of three steps:
feature extraction, feature-level fusion, and sensor-level fusion. In this study, two methods of texture feature extraction,
using the co-occurrence matrix and the run-length matrix, are considered. Texture parameters are calculated at each pixel of the
selected training image, and target and non-target pixels are identified manually. Some of the texture features calculated at the
target position differ from those in the background. Discriminant analysis is used to perform feature-level fusion on the
training image, which classifies target and non-target pixels. By applying the discriminant function to the feature space
of textural parameters, a new image is created. The maxima of this image correspond to target points. The same
discriminant function can be applied to the other images for detecting the trained target regions. Sensor-level fusion
combines images obtained from feature-level fusion of visual and IR images. The method was first tested with
synthetically generated images and then with real images. Results are obtained using both the co-occurrence and run-length
methods of texture feature extraction for target identification.
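The co-occurrence-based texture parameters mentioned above can be sketched as follows: a grey-level co-occurrence matrix (GLCM) for one pixel offset, from which classic Haralick-style parameters such as contrast and energy are derived. The quantization scheme, offset, and level count are our illustrative choices, not the paper's settings.

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one offset (dx, dy) and two
    classic texture parameters computed from it: contrast and energy.
    Assumes img has at least one non-zero value.
    """
    # Quantize the image into `levels` grey levels.
    q = np.minimum((img.astype(float) / img.max() * levels).astype(int),
                   levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()                    # normalize to probabilities
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()      # high for busy texture
    energy = (p ** 2).sum()                  # high for uniform texture
    return contrast, energy
```

Computing such parameters in a window around every pixel of the training image yields the feature space on which the discriminant function separates target from non-target pixels.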