Electro-Optic (EO) image sensors exhibit the properties of high resolution and low noise level, but they cannot reflect information about the temperature of objects and do not work in dark environments. On the other hand, infrared (IR) image sensors exhibit the properties of low resolution and high noise level, but IR images can reflect information about the temperature of objects at all times. Therefore, in this paper, we propose a novel framework to enhance the resolution of EO images using the information (e.g., temperature) from IR images, which helps distinguish temperature variations of objects in the daytime via high-resolution EO images. The proposed framework involves four main steps: (1) select target objects with temperature variation in the original IR images; (2) fuse the original RGB color (EO) images and IR images based on image fusion algorithms; (3) blend the fused images of target objects in proportion with the original gray-scale EO images; (4) superimpose the target objects' temperature information onto the original EO images via the modified NTSC color space transformation. Therein, the image fusion step is conducted using a qualitative (frame pipeline) approach. Revealing temperature information in EO images for the first time is the most significant contribution of this paper. Simulation results will show the transformed EO images with the targets' temperature information.
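The color-space and blending steps above can be sketched in a few lines. This is a minimal illustration assuming floating-point images in [0, 1]; the blending weight `alpha` and the target `mask` are hypothetical parameters, and the standard NTSC (YIQ) matrix is used in place of the paper's modified transformation, which is not specified here.

```python
import numpy as np

# Standard RGB -> YIQ (NTSC) transform matrix.
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def rgb_to_yiq(rgb):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YIQ."""
    return rgb @ RGB2YIQ.T

def blend_luminance(eo_gray, fused, alpha=0.6, mask=None):
    """Blend a fused EO/IR image into the gray-scale EO image in proportion.

    Only pixels inside `mask` (the selected target region) receive the
    IR-derived information; elsewhere the EO image is kept unchanged.
    """
    out = eo_gray.copy()
    region = mask if mask is not None else np.ones_like(eo_gray, bool)
    out[region] = alpha * eo_gray[region] + (1 - alpha) * fused[region]
    return out
```

In a full pipeline, the blended result would replace the luminance (Y) channel before transforming back to RGB, so that the EO image's spatial detail carries the IR-derived temperature cue.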
As an uncommon biometric modality, human gait recognition has the great advantage of identifying people at a
distance without high-resolution images. It has attracted much attention in recent years, especially in the
fields of computer vision and remote sensing. In this paper, we propose a human gait recognition framework
that consists of a reliable background subtraction method followed by Pyramid of Histogram of Oriented Gradients
(pHOG) feature extraction on the silhouette image, and a Hidden Markov Model (HMM) based classifier.
Through background subtraction, the silhouette of human gait in each frame is extracted and normalized from
the raw video sequence. After removing the shadow and noise in each region of interest (ROI), the pHOG feature
is computed on the silhouette images. Then the pHOG features of each gait class will be used to train a
corresponding HMM. In the test stage, the pHOG feature will be extracted from each test sequence and used to
calculate the posterior probability for each trained HMM. Experimental results on the CASIA Gait
Dataset B1 demonstrate that our proposed method can achieve a very competitive recognition rate.
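The pHOG descriptor on a binary silhouette can be sketched as below. This is a minimal illustration assuming unsigned gradient orientations and L1 normalization; the pyramid depth, cell layout, and bin count are illustrative, not the paper's exact settings.

```python
import numpy as np

def phog(silhouette, levels=3, bins=8):
    """Pyramid Histogram of Oriented Gradients on a silhouette image.

    At pyramid level l the image is divided into 2^l x 2^l cells; an
    orientation histogram (weighted by gradient magnitude) is computed
    per cell, and all histograms are concatenated into one descriptor.
    """
    gy, gx = np.gradient(silhouette.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    h, w = silhouette.shape
    feats = []
    for level in range(levels):
        n = 2 ** level
        for i in range(n):
            for j in range(n):
                ys = slice(i * h // n, (i + 1) * h // n)
                xs = slice(j * w // n, (j + 1) * w // n)
                hist, _ = np.histogram(ang[ys, xs], bins=bins,
                                       range=(0, np.pi),
                                       weights=mag[ys, xs])
                feats.append(hist)
    desc = np.concatenate(feats)
    return desc / (desc.sum() + 1e-12)        # L1 normalization
```

A sequence of such per-frame descriptors would then form the observation sequence fed to the per-class HMMs.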
Electro-optic (EO) images exhibit the properties of high resolution and low noise level, while it is a challenge to
distinguish objects in infrared (IR) imagery, especially objects with similar temperatures. In earlier work, we proposed a
novel framework for IR image enhancement based on the information (e.g., edge) from EO images. Our framework
superimposed the detected edges of the EO image onto the corresponding transformed IR image. This
framework resulted in better-resolution IR images that help distinguish objects at night. For our IR image system, we
used the theoretical point spread function (PSF) proposed by Russell C. Hardie et al., which is composed of the
modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of
diffraction-limited optics. In addition, we designed an inverse filter based on the proposed PSF to transform the IR image.
In this paper, blending the detected edge of the EO image with the corresponding transformed IR image and the original
IR image is the principal idea for improving the previous framework. This improved framework requires four main steps:
(1) inverse filter-based IR image transformation, (2) image edge detection, (3) image registration, and (4) blending of
the corresponding images. Simulation results show that blended IR images have better quality than the superimposed
images that were generated under the previous framework. Based on the same steps, the simulation result shows a
blended IR image of better quality when only the original IR image is available.
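The inverse-filter transformation can be illustrated with a small frequency-domain sketch. Here `eps` is a hypothetical regularization constant, added Wiener-style to avoid dividing by near-zero frequencies; it is not part of the cited PSF model, which would supply the actual transfer function.

```python
import numpy as np

def inverse_filter(ir, psf, eps=1e-2):
    """Restore an IR image with a regularized inverse filter.

    H is the system transfer function (the FFT of the centered PSF);
    dividing by H inverts the blur, and `eps` guards against
    amplification at frequencies where |H| is near zero.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=ir.shape)
    G = np.fft.fft2(ir)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))
```

With a PSF composed of the detector-array MTF and the diffraction-limited OTF, this same division would implement the transformation step of the framework.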
Electro-optic (EO) images exhibit the properties of high resolution and low noise level, while it is a challenge to
distinguish objects at night through infrared (IR) images, especially for objects with a similar temperature. Therefore, we
will propose a novel framework of IR image enhancement based on the information (e.g., edge) from EO images, which
will result in high resolution IR images and help us distinguish objects at night. Superimposing the detected edge of the
EO image onto the corresponding transformed IR image is our principal idea for the proposed framework. In this
framework, we will adopt the theoretical point spread function (PSF) proposed by Russell C. Hardie et al. for our IR
image system, which is composed of the modulation transfer function (MTF) of a uniform detector array and the
incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we will design an inverse filter in
terms of the proposed PSF to conduct the IR image transformation. The framework requires four main steps, which are
inverse filter-based IR image transformation, EO image edge detection, registration and superimposing of the obtained
image pair. Simulation results will show the superimposed IR images.
Multi-modality sensor fusion has been widely employed in various surveillance and military applications. A
variety of image fusion techniques, including PCA, wavelet, curvelet, and HSV, have been proposed in recent years
to improve human visual perception for object detection. One of the main challenges for visible and infrared
image fusion is to automatically determine an optimal fusion strategy for different input scenes along with an
acceptable computational cost.
In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a
high-contrast image from visible and infrared sensors for target detection. At first, fuzzy c-means clustering is applied
to the infrared image to highlight possible hotspot regions, which are considered potential target locations.
After that, the region surrounding the target area is segmented as the background region. Then image fusion
is locally applied on the selected target and background regions by computing different linear combinations of
color components from registered visible and infrared images. After obtaining different fused images, histogram
distributions are computed on these local fusion images as the fusion feature set. The variance ratio, which
is based on a Linear Discriminant Analysis (LDA) measure, is employed to sort the feature set, and the most
discriminative one is selected for the whole image fusion. As the feature selection is performed over time, the
process will dynamically determine the most suitable feature for the image fusion in different scenes. Experiments
are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate
that our proposed method achieved a competitive performance compared with other fusion algorithms at a
relatively low computational cost.
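The LDA-style variance ratio used to rank candidate fusion features can be sketched as follows; the candidate names and the way target/background pixel samples are drawn are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def variance_ratio(target_vals, background_vals):
    """LDA-style variance ratio: total spread over within-class spread.

    A higher ratio means the feature separates target pixels from
    background pixels more discriminatively.
    """
    both = np.concatenate([target_vals, background_vals])
    within = np.var(target_vals) + np.var(background_vals) + 1e-12
    return np.var(both) / within

def select_fusion_feature(candidates):
    """Pick the candidate fused channel with the highest variance ratio.

    `candidates` maps a feature name to a (target_vals, background_vals)
    pair of pixel samples drawn from the local fused images.
    """
    return max(candidates, key=lambda k: variance_ratio(*candidates[k]))
```

Running this selection over time, as the abstract describes, lets the fusion strategy adapt to each scene at low cost, since only histograms and variances are computed.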
The imbalanced learning problem (learning from imbalanced data) presents a significant new challenge to the pattern
recognition and machine learning community because in most instances real-world data is imbalanced. When
considering military applications, the imbalanced learning problem becomes much more critical because such
skewed distributions normally carry the most interesting and critical information. This critical information is
necessary to support the decision-making process in battlefield scenarios, such as anomaly or intrusion detection.
The fundamental issue with imbalanced learning is the ability of imbalanced data to compromise the
performance of standard learning algorithms, which assume balanced class distributions or equal misclassification
penalty costs. Therefore, when presented with complex imbalanced data sets these algorithms may not
be able to properly represent the distributive characteristics of the data. In this paper we present an empirical
study of several popular imbalanced learning algorithms on an army-relevant data set. Specifically, we will
conduct various experiments with SMOTE (Synthetic Minority Over-Sampling Technique), ADASYN (Adaptive
Synthetic Sampling), SMOTEBoost (Synthetic Minority Over-Sampling in Boosting), and AdaCost (Misclassification
Cost-Sensitive Boosting method) schemes. Detailed experimental settings and simulation results are
presented in this work, and a brief discussion of future research opportunities/challenges is also presented.
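As a concrete illustration, the core oversampling step of SMOTE can be written in a few lines: each synthetic sample lies on the segment between a minority point and one of its k nearest minority neighbors. The brute-force neighbor search, `k`, and the seed are illustrative choices, not those of the cited study.

```python
import random

def smote(minority, n_new, k=3, rng=None):
    """Generate `n_new` synthetic minority samples (SMOTE sketch).

    `minority` is a list of equal-length tuples.  Each synthetic point
    interpolates between a random minority sample and one of its k
    nearest minority-class neighbors.
    """
    rng = rng or random.Random(0)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbors of x within the minority class (brute force)
        neighbors = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic
```

ADASYN follows the same interpolation idea but adaptively decides how many synthetic points each minority sample receives, based on how many majority neighbors surround it.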
A cascade of filtering windows is implemented iteratively for removing random-valued impulse noise from heavily corrupted images. This method is based on the peer group concept (PGC): a pixel is considered noise-free if and only if, for each window size, there exists a peer group of a certain threshold cardinality for it. Otherwise, the pixel is considered noisy. In the restoration process, the corrupted pixels are restored by taking the mean value of the remaining good pixels in the filtering window. Extensive simulations demonstrate that the proposed method produces competitive results at low noise rates and outperforms other state-of-the-art methods at high noise rates. This approach efficiently suppresses impulse noise, has low computational complexity, and works equally well on both color and gray-level images.
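A minimal single-window version of the peer-group test can be sketched as follows; the cascade over multiple window sizes, and the threshold values `tol` and `min_peers`, are left as illustrative parameters.

```python
import numpy as np

def peer_group_size(img, y, x, radius, tol):
    """Count neighbors in the (2r+1)^2 window whose intensity is within
    `tol` of the center pixel (the center's peer group, excluding itself)."""
    win = img[max(0, y - radius):y + radius + 1,
              max(0, x - radius):x + radius + 1]
    return int(np.sum(np.abs(win - img[y, x]) <= tol)) - 1

def detect_and_restore(img, radius=1, tol=20, min_peers=2):
    """Mark pixels with too few peers as impulse noise, then replace each
    with the mean of the noise-free pixels in its window."""
    noisy = np.zeros(img.shape, bool)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            noisy[y, x] = peer_group_size(img, y, x, radius, tol) < min_peers
    out = img.astype(float).copy()
    for y, x in zip(*np.nonzero(noisy)):
        ys = slice(max(0, y - radius), y + radius + 1)
        xs = slice(max(0, x - radius), x + radius + 1)
        good = img[ys, xs][~noisy[ys, xs]]
        if good.size:
            out[y, x] = good.mean()
    return out, noisy
```

The cascade described in the abstract would repeat this test with growing `radius` (and correspondingly growing `min_peers`) before declaring a pixel noisy.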
Detection and tracking of a varying number of people is essential in surveillance sensor systems. In real
applications, due to varied human appearances and confusers, as well as varying environmental conditions,
multi-target detection and tracking becomes even more challenging. In this paper, we propose a new
framework integrating a Multiple-Stage Histogram of Oriented Gradients (HOG) based human detector and the
Particle Filter Gaussian Process Dynamical Model (PFGPDM) for multiple targets detection and tracking. The
Multiple-Stage HOG human detector takes advantage of both the HOG feature set and human motion
cues. The detector enables the framework to detect new targets entering the scene as well as to provide potential
hypotheses for particle sampling in the PFGPDM. After processing the detection results, the motion of each
new target is calculated and projected to the low dimensional latent space of the GPDM to find the most similar
trained motion trajectory. In addition, the particle propagation of existing targets integrates both the motion
trajectory prediction in the latent space of GPDM and the hypotheses detected by the HOG human detector. Experimental tests are conducted on the IDIAP data set. The test results demonstrate that the proposed approach can robustly detect and track a varying number of targets with reasonable run-time overhead and performance.
This paper proposes an approach to integrate the self-organizing map (SOM) and kernel density estimation (KDE)
techniques for the anomaly-based network intrusion detection (ABNID) system to monitor the network traffic and
capture potential abnormal behaviors. With the continuous development of network technology, information security has
become a major concern for cyber system research. In modern net-centric and tactical warfare networks, the
situation is more critical, as real-time protection must be provided for the availability, confidentiality, and integrity of the networked information systems.
To this end, in this work we propose to explore the learning capabilities of SOM, and integrate it with KDE for the
network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the
network system and determine whether the network traffic is normal or abnormal. Meanwhile, the learning and
clustering capabilities of SOM are employed to obtain well-defined data clusters to reduce the computational cost of the
KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain
input patterns. Therefore, SOM can form an approximation of the distribution of input space in a compact fashion,
reduce the number of terms in a kernel density estimator, and thus improve the efficiency for the intrusion detection.
We test the proposed algorithm over the real-world data sets obtained from the Integrated Network Based Ohio
University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.
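The central efficiency idea, evaluating the kernel density over a handful of SOM prototypes instead of every training point, can be sketched as below; the bandwidth, threshold, and prototype weights are illustrative assumptions.

```python
import math

def kde_score(x, prototypes, weights, bandwidth=1.0):
    """Gaussian kernel density estimate of `x` using SOM prototype
    vectors in place of all training points.

    `weights` are the fractions of training data mapped to each
    prototype, so the reduced-term estimator still sums to one.
    """
    d = len(x)
    norm = (2 * math.pi * bandwidth ** 2) ** (d / 2)
    density = 0.0
    for w, p in zip(weights, prototypes):
        dist2 = sum((a - b) ** 2 for a, b in zip(x, p))
        density += w * math.exp(-dist2 / (2 * bandwidth ** 2)) / norm
    return density

def is_anomalous(x, prototypes, weights, threshold=1e-3):
    """Flag traffic whose estimated density falls below a threshold."""
    return kde_score(x, prototypes, weights) < threshold
```

With a few hundred SOM neurons standing in for millions of observed traffic records, each density evaluation costs a handful of kernel terms, which is the computational saving the abstract describes.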
In this paper we present a new particle filter based multi-target tracking method incorporating Gaussian Process
Dynamical Model (GPDM) to improve robustness in multi-target tracking on complex motion patterns. With
the Particle Filter Gaussian Process Dynamical Model (PFGPDM), a high-dimensional training target trajectory
dataset of the observation space is projected to a low-dimensional latent space through Probabilistic Principal
Component Analysis (PPCA), which will then be used to classify test object trajectories, predict the next
motion state, and provide Gaussian process dynamical samples for the particle filter. In addition, histogram
Bhattacharyya and GMM Kullback-Leibler measures are employed respectively, and compared in the particle filter as
complementary features to the coordinate data used in GPDM. Experimental tests are conducted on the PETS2007
benchmark dataset. The test results demonstrate that the approach can track more than four targets with
reasonable run-time overhead and good performance.
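For reference, the histogram Bhattacharyya comparison used as a complementary appearance feature can be written in a few lines, assuming L1-normalized histograms of the same length.

```python
import math

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient between two normalized histograms:
    1.0 for identical distributions, 0.0 for disjoint support."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def bhattacharyya_distance(p, q):
    """Distance form used for particle weighting; small when the
    candidate region's histogram matches the target model."""
    return -math.log(max(bhattacharyya_coefficient(p, q), 1e-12))
```

In the particle filter, each particle's color histogram would be scored against the target model with this distance and the result folded into the particle weight alongside the GPDM motion prediction.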
We propose a novel intrusion detection system for mobile ad hoc networks using a social network analysis approach,
which is different from conventional ones. Our social-based IDS utilizes explored social relations for anomaly detection,
which can capture and represent similar network statistics as those used in data mining based intrusion detection
systems. Simulation results show that this social based IDS can effectively detect attacks with high detection rates and
low false alarm rates. Furthermore, our system is of simpler implementation and lower system complexity than rule-based ones.
Explosion detection and recognition is a critical capability for providing situational awareness to warfighters on the
battlefield. Acoustic sensors are frequently deployed to detect such events and to trigger more expensive sensing/sensor
modalities (e.g., radar, laser spectroscopy, IR). Acoustic analysis of explosions has been intensively studied to
reliably discriminate mortars, artillery, round variations, and blast types (e.g., chemical/biological or high-explosive).
One of the major challenges is the high level of noise, which may include non-coherent noise generated from the
environmental background and coherent noise induced by a possibly mobile acoustic sensor platform. In this work, we
introduce a new acoustic scene analysis method to effectively enhance explosion classification reliability and reduce the
false alarm rate at low SNR and with high coherent noise. The proposed method is based on acoustic signature
modeling using Hidden Markov Models (HMMs). Special frequency domain acoustic features characterizing explosions
as well as coherent noise are extracted from each signal segment, which forms an observation vector for HMM training
and test. Classification is based on a unique model similarity measure between the HMM estimated from the test
observations and the trained HMMs. Experimental tests are based on the acoustic explosion dataset from US ARMY
ARDEC, and experimental results have demonstrated the effectiveness of the proposed method.
In this paper we propose a new technique to detect random-valued impulse noise in images. In this method, the noisy
pixels are detected iteratively through several phases. In each phase, a pixel will be marked as noisy if it does not
have a sufficient number of similar pixels inside its neighborhood window. The size of the window increases over the
phases, as does the required number of similar neighbors. After the detection phases, all noisy pixels will be corrected in a
recovering process. We compare the performance of this method with other recently published methods in terms of peak
signal-to-noise ratio and perceptual quality of the restored images. From the simulation results, we observe that this
method outperforms all other methods at medium to high noise rates. The algorithm is very fast, providing consistent
performance over a wide range of noise rates. It also preserves fine details of the image.
Presented here is a novel clustering method for Hidden Markov Models (HMMs) and its application in
acoustic scene analysis. In this method, HMMs are clustered based on a similarity measure for stochastic
models defined as the generalized probability product kernel (GPPK), which can be efficiently evaluated
according to a fast algorithm introduced by Chen and Man (2005). Acoustic signals from various
sources are partitioned into small frames. Frequency features are extracted from each of the frames to
form observation vectors. These frames are further grouped into segments, and an HMM is trained from
each of such segments. An unknown segment is categorized with a known event if its HMM has the
closest similarity with the HMM from the corresponding labeled segment. Experiments are conducted on
an underwater acoustic dataset from the Stevens Maritime Security Laboratory. The dataset contains a swimmer
signature, a noise signature from the Hudson River, and a test sequence with a swimmer in the Hudson
River. Experimental results show that the proposed method can successfully associate the test sequence
with the swimmer signature at very high confidence, despite their different time behaviors.
Electro-Optical (EO) and Infra-Red (IR) sensors have been jointly deployed in many surveillance systems. In this
work we study the special characteristics of optical flow in IR imagery, and introduce an optical flow estimation
method using co-registered EO and IR image frames. The basic optical flow calculation is based on the combined
local and global (CLG) method (Bruhn, Weickert and Schnorr, 2002), which seeks solutions that simultaneously
satisfy a local averaged brightness constancy constraint and a global flow smoothness constraint. While CLG
method can be directly applied to IR image frames, the estimated optical flow fields usually manifest a high level
of random motion caused by thermal noise. Furthermore, IR sensors operating at different wavelengths, e.g.,
mid-wave infrared (MWIR) and long-wave infrared (LWIR), may yield inconsistent motions in optical flow
estimation. Because of the availability of both EO and IR sensors in many practical scenarios, we propose to
estimate optical flow jointly using both EO and IR image frames. This method is able to take advantage of the
complementary information offered by these two imaging modalities. The joint optical flow calculation fuses the
motion fields from EO and IR images using a cross-regularization mechanism and a non-linear flow fusion model
which aligns the estimated motions based on neighbor activities. Experiments performed on the OTCBVS
dataset demonstrated that the proposed approach can effectively eliminate many unimportant motions, and
significantly reduce erroneous motions, such as sensor noise.
We consider the problem of placing cameras so that every point on a
perimeter, which is not necessarily planar, is covered by at least
one camera while using the smallest number of cameras.
This is accomplished by aligning the edges of the cameras' fields
of view with points on the boundary under surveillance.
Taken into consideration are
visibility concerns, where features such as mountains must not be
allowed to come between a camera and a boundary point that would
otherwise be in the camera's field of view. We provide a general
algorithm that determines optimal camera placements and orientations.
Additionally, we consider double coverings, where every boundary point
is seen by at least two cameras, with selected boundary points
and cameras situated such that the average calibration error between
adjacent cameras is minimized. We describe an iterative algorithm
that accomplishes these tasks. We also consider a joint optimization
algorithm, which strikes a balance between minimizing calibration
error and the number of cameras required to cover the boundary.
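The single-coverage subproblem is an instance of set cover. A greedy sketch is shown below; it yields a logarithmic-factor approximation rather than the optimal placements computed in the paper, and it assumes the visibility sets (with occlusions such as mountains already resolved) are precomputed for each candidate placement.

```python
def greedy_cover(boundary_points, candidate_cameras):
    """Greedy set cover: repeatedly pick the candidate camera placement
    that covers the most still-uncovered boundary points.

    `candidate_cameras` maps a camera id to the set of boundary points
    visible from that placement and orientation.
    """
    uncovered = set(boundary_points)
    chosen = []
    while uncovered:
        best = max(candidate_cameras,
                   key=lambda c: len(candidate_cameras[c] & uncovered))
        gain = candidate_cameras[best] & uncovered
        if not gain:
            raise ValueError("some boundary points are visible to no camera")
        chosen.append(best)
        uncovered -= gain
    return chosen
```

The double-covering variant would instead require each boundary point to appear in the visibility sets of at least two chosen cameras before it is removed from `uncovered`.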
The Adaptive SAR ATR Problem Set (AdaptSAPS) poses a typical "learning with a critic" problem, in which the system-under-test (SUT) is initially trained to characterize a subset of target objects (e.g., T72) and a subset of non-target objects (e.g., clutter), and is to be updated on-line using the Target Truth information. This work proposes an SUT for adaptive SAR imagery exploitation. The system is founded on a novel feature vector generation scheme and Linear Discriminant Analysis (LDA). The proposed feature vector generation scheme partitions SAR image chips into subimage blocks. The distribution density of the subimage blocks is fitted with a Gaussian Mixture Model (GMM). The feature vector of each SAR image is composed of the log-likelihoods of its subimage blocks on the pre-fitted GMM. Compared to the original SAR image chips, feature vectors generated from log-likelihoods display superior discriminative power. After feature generation, LDA is used to project the feature vectors into a one-dimensional subspace for classification. The performance of the proposed system is evaluated on the AdaptSAPS benchmark.
While computer vulnerabilities have been continually reported in laundry-list format by most commercial scanners, a comprehensive network vulnerability assessment has been an increasing challenge to security analysts. Researchers have proposed a variety of methods to build attack trees with chains of exploits, based on which
post-graph vulnerability analysis can be performed. The most recent approaches attempt to build attack trees by enumerating all potential attack paths, which is space-consuming and results in poor scalability. This paper presents an approach that uses a Bayesian network to model potential attack paths. We call such a graph a "Bayesian
attack graph". It provides a more compact representation of attack paths than conventional methods. Bayesian inference methods can be conveniently used for probabilistic analysis. In particular, we use the Bucket Elimination algorithm for belief updating, and the Maximum Probability Explanation algorithm to compute an optimal subset of attack paths relative to prior knowledge of attackers and attack mechanisms. We tested our model on an experimental network. Test results demonstrate the effectiveness of our approach.
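As a toy illustration of the probabilistic semantics of such a graph, the sketch below runs a noisy-OR forward pass over a topologically ordered DAG of exploits. This is far simpler than Bucket Elimination; the node names, exploit probabilities, and independence assumption are all illustrative.

```python
def attack_success_probability(graph, exploit_prob, root):
    """Probability each node in a toy Bayesian attack graph is compromised.

    `graph` maps node -> list of parent nodes, given in topological
    order.  A node becomes exploitable once any parent is compromised
    (noisy-OR over parents, assumed independent), and the exploit then
    succeeds with probability `exploit_prob[node]`.
    """
    prob = {root: 1.0}   # the attacker's entry point is compromised
    for node, parents in graph.items():
        if node == root:
            continue
        # P(no parent compromised), assuming independence
        p_none = 1.0
        for p in parents:
            p_none *= 1.0 - prob[p]
        prob[node] = (1.0 - p_none) * exploit_prob[node]
    return prob
```

Exact inference, as used in the paper, would avoid the independence approximation and also support belief updating once evidence of a compromised host is observed.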
This paper presents a joint source coding and networking scheme for video delivery over ad hoc wireless local area networks. The objective is to improve the end-to-end video quality with the
constraint of the physical network. The proposed video transport scheme effectively integrates several networking components, including load-aware multipath routing, class-based queuing (CBQ), and scalable (or layered) video source coding techniques. A typical progressive video coder, 3D-SPIHT, is used to generate multi-layer source data streams. The coded bitstreams are then segmented into multiple sub-streams, each with a different level of importance towards the final video reconstruction. The underlying wireless ad hoc network is designed to support service differentiation. A contention-sensitive load-aware routing (CSLAR) protocol is proposed. The approach is to discover multiple routes between the source and the destination, and to label each route with a load value that indicates its quality of service (QoS) characteristics. The video sub-streams are distributed among these paths according to their QoS priority. CBQ is also applied at all intermediate nodes, which gives preference to important sub-streams. Through this approach, scalable source coding techniques are incorporated with differentiated services (DiffServ) networking techniques so that the overall system performance is effectively improved. Simulations have been conducted on the network simulator (ns-2). Both network layer performance and application layer performance are evaluated. Significant improvements over traditional ad hoc wireless network transport schemes have been observed.
In this paper, we present a generalized framework for the design of adaptive quantization that is able to achieve a good balance between high compression performance and channel error resilience. The unique feature of our proposed adaptive quantization technique is that it improves the channel error resilience of the compression system. It also provides a simple way to perform bit stream error sensitivity analysis, which previously was only available for fixed-rate quantization schemes. The coder automatically classifies the compressed data sequence into separate subsequences with different error sensitivity levels, which enables good adaptation to different channel models according to their noise statistics and error protection schemes. Two sets of adaptive quantization examples are provided for subband coding of images. The first set is based on a layered quantization/coding approach where our technique directly quantizes the subband coefficients. The other set is designed for a conventional subband coding system with optimal bit allocation and fixed-rate quantization at each subband. Under this second structure, the technique performs lossless compression on quantized subband coefficients. Experimental results have shown that our coders can obtain high-quality compression performance with significantly improved resilience to channel errors.