Analysis of foreground objects in scenery via image processing often involves a background subtraction step. This step aims to improve blob (connected component) content in the image; quality blob content is often needed to define regions of interest for object recognition and tracking. Three techniques for optimizing the background to be subtracted are examined: a genetic algorithm, an analytic solution based on convex optimization, and a related application of the CVX solver toolbox. These techniques are applied to a set of images and the results are compared. Additionally, a possible implementation architecture is considered that uses multiple optimization techniques with subsequent arbitration to produce the best background subtraction.
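For context, a minimal background-subtraction-to-blobs pipeline (a generic sketch in Python with NumPy/SciPy, not the optimized backgrounds examined in the paper; the threshold value is an arbitrary illustration) might look like:

```python
import numpy as np
from scipy import ndimage

def extract_blobs(frame, background, threshold=30):
    """Subtract an estimated background, threshold the difference, and
    label the resulting connected components (blobs)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > threshold
    labels, n_blobs = ndimage.label(mask)
    return labels, n_blobs

bg = np.zeros((6, 6), dtype=np.uint8)   # flat background
fr = bg.copy()
fr[2:4, 2:4] = 200                      # one bright foreground object
labels, n = extract_blobs(fr, bg)
```

The quality of `bg` is exactly what the optimization techniques in the paper aim to improve: a better background estimate yields cleaner blobs.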
A model of three-dimensional (3D) object recognition using integral imaging is presented. Multiple elemental images (EIs) are captured from different perspectives for reconstruction and recognition. Computational results are presented and discussed for performance evaluation.
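One common computational reconstruction step for integral imaging is shift-and-average over the elemental images; the sketch below (Python/NumPy) is a generic illustration, with hypothetical shift values rather than the paper's actual geometry:

```python
import numpy as np

def reconstruct_plane(elemental_images, shifts):
    """Hypothetical shift-and-average reconstruction: each elemental image
    is translated by its (dy, dx) disparity for the chosen depth plane and
    the results are averaged; objects at that depth come into focus."""
    acc = np.zeros_like(elemental_images[0], dtype=float)
    for img, (dy, dx) in zip(elemental_images, shifts):
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(elemental_images)

# Toy scene: the same point seen from two perspectives, one pixel apart.
ei0 = np.zeros((5, 5)); ei0[2, 2] = 1.0
ei1 = np.zeros((5, 5)); ei1[2, 3] = 1.0
rec = reconstruct_plane([ei0, ei1], [(0, 0), (0, -1)])
```

When the shifts match the object's depth, the contributions align and the point is reconstructed at full intensity.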
Moving-vehicle detection in wide-area motion imagery is a challenging task due to the large motion of the camera and the small number of pixels on the target. At the same time, the task is important for surveillance applications, and its results can be used for urban traffic management and for accident and emergency responder routing. The effectiveness of context in the object detection task can also be explored further to increase target tracking accuracy. In this paper, we propose to use Spatial Context (SC) to improve the performance of the vehicle detection task. We first model the background of 8 consecutive frames with a median filter and obtain candidates via background subtraction. The SC is built from the candidates that have been classified as positive by Histograms of Oriented Gradients (HOG) with Multiple Kernel Learning (MKL): the region around each positive candidate is divided into m subregions of fixed length l, and the SC, a histogram, records the number of positive candidates in each subregion. We use the publicly available CLIF 2006 dataset to evaluate the effect of SC. The experiments demonstrate that SC is useful for removing false positives, around which there are few positive candidates, and that the combination of SC and HOG with multiple kernel learning outperforms either alone.
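The background model and the spatial-context histogram could be sketched as follows (Python/NumPy); the annulus-shaped subregion layout and all parameter values here are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def median_background(frames):
    """Per-pixel median over consecutive frames (e.g., 8 in the paper)."""
    return np.median(np.stack(frames), axis=0)

def spatial_context(center, positives, m=3, l=10.0):
    """Hypothetical SC histogram: count positive candidates falling in
    m concentric annuli of width l around a candidate (excluding itself)."""
    hist = np.zeros(m, dtype=int)
    for y, x in positives:
        d = np.hypot(y - center[0], x - center[1])
        k = int(d // l)
        if d > 0 and k < m:
            hist[k] += 1
    return hist

frames = [np.full((4, 4), v, dtype=np.uint8) for v in range(1, 9)]
bg = median_background(frames)                                   # 4.5 everywhere
sc = spatial_context((0, 0), [(5, 0), (15, 0), (35, 0)], m=3, l=10.0)
```

A candidate surrounded by few positives yields a near-empty histogram, which is the cue used to reject false positives.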
In this paper, we describe an algorithm for multi-modal entity co-reference resolution and present experimental results using text and motion-imagery data sources. Our model generates probabilistic associations between entities mentioned in text and entities detected in video data by jointly optimizing measures of appearance and behavior similarity. Appearance similarity is calculated as a match between proposition-derived entity attributes mentioned in text and the object appearance classification from video sources. Behavior similarity is calculated from semantic information about entity movements, actions, and interactions with other entities mentioned in text and detected in video sources. Our model achieved a 79% F-score for text-to-video entity co-reference resolution; we show that entity interactions provide unique features for resolving the variability present in text data and the ambiguity of entities' visual appearance.
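A minimal sketch of combining the two similarity cues (Python): the convex weighting and the greedy assignment below are illustrative stand-ins for the probabilistic joint optimization described above, and all entity/object names are hypothetical:

```python
def joint_score(appearance_sim, behavior_sim, w=0.5):
    """Convex combination of the appearance and behavior cues;
    the weight w is an assumption, not a trained parameter."""
    return w * appearance_sim + (1 - w) * behavior_sim

def resolve(text_entities, video_objects, sim):
    """Greedy co-reference: link each text entity to the video object with
    the highest combined score. sim maps (entity, object) -> (app, beh)."""
    links = {}
    for t in text_entities:
        links[t] = max(video_objects, key=lambda v: joint_score(*sim[(t, v)]))
    return links

sim = {("white car", "obj1"): (0.9, 0.4),
       ("white car", "obj2"): (0.3, 0.5),
       ("pedestrian", "obj1"): (0.2, 0.1),
       ("pedestrian", "obj2"): (0.7, 0.8)}
links = resolve(["white car", "pedestrian"], ["obj1", "obj2"], sim)
```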
Due to supply chain threats, it is no longer a reasonable assumption that traditional protections alone will provide sufficient security for enterprise systems. The proposed cognitive trust model architecture extends the state of the art in enterprise anti-exploitation technologies by providing collective immunity through backup and cross-checking, proactive health monitoring and adaptive/autonomic threat response, and network resource diversity.
To keep pace with our adversaries, we must expand the scope of machine learning and reasoning to address the breadth of possible attacks. One approach is to employ an algorithm that learns a set of causal models describing the entire cyber network and each host end node. Such a learning algorithm would run continuously on the system and monitor activity in real time. With a set of causal models, the algorithm could anticipate novel attacks, take actions to thwart them, and predict their second-order effects. Continuous monitoring, however, would produce a flood of information, and the algorithm would have to determine which streams of that flood were relevant in which situations.
This paper presents the results of efforts toward applying a developmental learning algorithm to the problem of cyber security. The algorithm is modeled on the principles of human developmental learning and is designed to allow an agent to learn about the computer system in which it resides through active exploration. Children are flexible learners who acquire knowledge by actively exploring their environment and making predictions about what they will find [1, 2], and our algorithm is inspired by the work of the developmental psychologist Jean Piaget [3]. Piaget described how children construct knowledge in stages and learn new concepts on top of those they already know. Developmental learning allows our algorithm to focus on the subsets of the environment that are most helpful for learning given its current knowledge. In experiments, the algorithm was able to learn the conditions for file exfiltration and to use that knowledge to protect sensitive files.
In today's highly mobile, networked, and interconnected internet world, the flow and volume of information is overwhelming and continuously increasing. It is therefore believed that the next frontier in technological evolution and development will lie in our ability to develop intelligent systems that can help us process, analyze, and make sense of information autonomously, just as a well-trained and educated human expert would. In computational intelligence, neuromorphic computing promises to enable the development of computing systems that imitate natural neurobiological processes and form the foundation for intelligent system architectures.
With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in both software and hardware, and many methods of attacking ciphertext have been developed as well. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertexts and also in encryption ciphers. This paper analyzes the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and also of using a three-stage pipeline (with four main blocks: Input data, AES Core, Key generator, Output data) to provide fast encryption and storage/transmission of large amounts of data.
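A toy genetic algorithm evolving 16-byte (AES-128) key candidates can be sketched as follows (Python); the bit-balance fitness function, the operator choices, and all parameters here are illustrative assumptions, not the paper's design:

```python
import random

def fitness(key):
    """Illustrative fitness: reward keys whose 128 bits are balanced
    (close to 64 ones), a crude proxy for statistical randomness."""
    ones = sum(bin(b).count("1") for b in key)
    return -abs(ones - 64)

def evolve_keys(pop_size=20, generations=30, seed=0):
    """Toy GA over 16-byte keys: truncation selection, one-point
    crossover, and occasional per-byte mutation."""
    rng = random.Random(seed)
    pop = [bytes(rng.randrange(256) for _ in range(16)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 16)          # one-point crossover
            child = bytearray(a[:cut] + b[cut:])
            if rng.random() < 0.3:              # mutation
                child[rng.randrange(16)] = rng.randrange(256)
            children.append(bytes(child))
        pop = parents + children
    return max(pop, key=fitness)

key = evolve_keys()
```

In a pipelined design, such a generator would run as its own stage, producing the next key while the AES core encrypts with the current one.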
Sensor-oriented vehicle tracking and analysis within a city (VTAC) plays an important role in transportation control, public facility management, and national security. This project is dedicated to the development of a generic VTAC framework, which employs temporally and spatially dependent partial differential equations (PDEs) to formulate the expected traffic flow, through which the movement of observed vehicles may be measured and analyzed. The boundary conditions and parameters for the traffic flow are derived from statistical analysis of historical transportation data; the physical domain is derived from the geographic information system. Using artificial video data generated with Blender as benchmark data, the VTAC framework is validated by measuring and identifying anomalous vehicles appearing in the video.
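As one example of a temporally and spatially dependent traffic-flow PDE (not necessarily the formulation used in this project), the classical Lighthill-Whitham-Richards model with the Greenshields speed law can be stepped with a simple Lax-Friedrichs scheme (Python/NumPy):

```python
import numpy as np

def lwr_step(rho, dx, dt, v_max=1.0, rho_max=1.0):
    """One Lax-Friedrichs step for the LWR traffic-flow PDE
    rho_t + (rho * v(rho))_x = 0, with the Greenshields speed
    law v(rho) = v_max * (1 - rho / rho_max), on a periodic road."""
    f = rho * v_max * (1.0 - rho / rho_max)            # traffic flux
    return (0.5 * (np.roll(rho, 1) + np.roll(rho, -1))
            - dt / (2.0 * dx) * (np.roll(f, -1) - np.roll(f, 1)))

x = np.linspace(0.0, 1.0, 100, endpoint=False)
rho0 = 0.2 + 0.5 * np.exp(-200.0 * (x - 0.5) ** 2)     # dense platoon
rho = rho0.copy()
for _ in range(50):
    rho = lwr_step(rho, dx=0.01, dt=0.004)
```

Vehicles whose measured motion deviates strongly from the density/flow field predicted by such a model would be flagged as anomalous.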
Understanding and organizing data is the first step toward exploiting laser vibrometry sensor phenomenology for target classification. A fundamental challenge in robust vehicle classification using vibrometry signature data is the determination of salient signal features and the fusion of appropriate measurements. One particular technique, Diffusion Maps, has demonstrated the potential to extract intuitively meaningful features. We want to develop an understanding of this technique by validating existing results using vibrometry data. This paper briefly describes the Diffusion Map technique and its application to dimension reduction of vibrometry data, and outlines interesting problems to be explored further.
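The core of the Diffusion Map technique can be sketched in a few lines (Python/NumPy); the toy two-cluster data below stands in for vibrometry feature vectors, and the kernel bandwidth is an arbitrary choice:

```python
import numpy as np

def diffusion_map(X, eps, n_components=2):
    """Basic diffusion-map embedding: Gaussian affinities, row-normalized
    Markov matrix, top non-trivial eigenvectors scaled by eigenvalues."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    K = np.exp(-d2 / eps)                                  # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                   # Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[1:1 + n_components]     # skip trivial lambda=1
    return vecs[:, order].real * vals[order].real          # diffusion coordinates

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(10, 2) * 0.05 + [-1.5, 0.0],
               rng.randn(10, 2) * 0.05 + [1.5, 0.0]])
emb = diffusion_map(X, eps=4.0)
```

The first diffusion coordinate separates the two clusters, which is the dimension-reduction behavior exploited for vibrometry signatures.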
This paper describes the process used to collect the Seasonal Weather And Gender (SWAG) dataset: an electro-optical dataset of human subjects that can be used to develop advanced gender classification algorithms. Several novel features characterize this ongoing effort: (1) the human subjects self-label their gender by performing a specific action during the data collection, and (2) the data collection will span months and even years, resulting in a dataset containing realistic levels and types of clothing corresponding to the various seasons and weather conditions. It is envisioned that this type of data will support the development and evaluation of more robust gender classification systems that are capable of accurate gender recognition under extended operating conditions.