This PDF file contains the front matter associated with SPIE
Proceedings Volume 6964, including the Title Page, Copyright
information, Table of Contents, Introduction, and the
Conference Committee listing.
The talk discusses mechanisms of the mind and their engineering applications. Past attempts at designing
"intelligent systems" encountered mathematical difficulties related to algorithmic complexity. The culprit turned out to be
logic, which in one way or another was used not only in logic rule systems, but also in statistical, neural, and fuzzy
systems. Algorithmic complexity is related to Gödel's theory, one of the most fundamental mathematical results. These difficulties
were overcome by replacing logic with a dynamic process "from vague to crisp," dynamic logic. It leads to algorithms
overcoming combinatorial complexity, and resulting in orders of magnitude improvement in classical problems of
detection, tracking, fusion, and prediction in noise. I present engineering applications to pattern recognition, detection,
tracking, fusion, financial predictions, and Internet search engines. The mathematical and engineering efficiency of dynamic
logic can also be understood as a cognitive algorithm, which describes a fundamental property of the mind, the knowledge
instinct responsible for all our higher cognitive functions: concepts, perception, cognition, instincts, imagination,
intuition, and emotions, including emotions of the beautiful. I present our latest results in modeling the evolution of languages
and cultures, their interactions in these processes, and the role of music in cultural evolution. Experimental data that
support the theory are presented, and future directions are outlined.
We can be inspired by biological systems, but that does not mean we should attempt to directly implement the
components from which those biological systems are built. Particularly with cognitive systems, the properties of the
components are submerged by a higher-level organization that is not deducible from the components. It may be easier
to reverse engineer the product of a biological system to understand its operation than to theorize
about its operation or to attempt to build up the working system from its perceived components. The reverse engineering
of a cognitive system to handle a high level task is described, including the extensions required to an already undirected
structure. It is shown how construction of operators built on demand at a ground state can be used to make up for the lack
of the massively parallel activity of a biological cognitive system.
Many environments challenge human capabilities (e.g., situational stress, waiting, fatigue from long duty hours, etc.).
The capability to measure and model the individual's human performance is an important first step in determining a
person's or group's effectiveness in a particular situation. Human bias toward particular climates, favorite routines,
capabilities and limitations strongly influence overall performance. However, the mission team and the relationships
amongst the team members add a very important dimension to performance during operations or simulations using
models of humans. This paper presents the Grid-Group Cm-α method for predicting performance considering both
environmental and cultural factors. The prediction method is based on Hooke's law, which calculates the mechanical
strain on a solid object given the applied physical stresses. Grid-Group Cm-α treats the specific cultural and
environmental factors of a mission as stresses applied to the collection of individuals, the solid object. The collection of
individuals has a given set of properties based on their culture and physical capacities. The resulting strain is estimated
from these parameters and can be used to optimize group selection for mission objectives.
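The stress-strain analogy above can be sketched in a few lines. The aggregation rule, function names, and numbers below are illustrative assumptions, not the published Grid-Group Cm-α formulation:

```python
# Hypothetical sketch of a Hooke's-law-style performance-strain estimate.
# The aggregation rule (summing stresses and capacities) is an illustrative
# assumption, not the Grid-Group Cm-alpha method itself.

def group_strain(stresses, capacities):
    """Estimate group 'strain' as total applied stress divided by an
    effective group 'modulus' (here: the sum of member capacities)."""
    total_stress = sum(stresses)   # environmental + cultural stress factors
    modulus = sum(capacities)      # group's combined resilience
    if modulus == 0:
        raise ValueError("group has no capacity")
    return total_stress / modulus

# Example: three stress factors applied to a four-person team.
strain = group_strain([2.0, 1.5, 0.5], [4.0, 3.0, 2.0, 1.0])
```

A lower strain value would then indicate a team better matched to the mission's stresses, which is the quantity the method proposes to optimize over candidate group compositions.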
Cooperative motion control of teams of agile unmanned vehicles presents modeling challenges at several levels.
The "microscopic equations" describing individual vehicle dynamics and their interaction with the environment
may be known fairly precisely, but are generally too complicated to yield qualitative insights at the level of
multi-vehicle trajectory coordination. Interacting particle models are suitable for coordinating trajectories, but
require care to ensure that individual vehicles are not driven in a "costly" manner. From the point of view of
the cooperative motion controller, the individual vehicle autopilots serve to "shape" the microscopic equations,
and we have been exploring the interplay between autopilots and cooperative motion controllers using a multivehicle
hardware-in-the-loop simulator. Specifically, we seek refinements to interacting particle models in order
to better describe observed behavior, without sacrificing qualitative understanding. A recent analogous example
from biology involves introducing a fixed delay into a curvature-control-based feedback law for prey capture by an
echolocating bat. This delay captures both neural processing time and the flight-dynamic response of the bat as it uses sensor-driven feedback. We propose a comparable approach for unmanned vehicle modeling; however, in contrast to the bat, with unmanned vehicles we have an additional freedom to modify the autopilot. Simulation results demonstrate the effectiveness of this biologically guided modeling approach.
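The effect of a fixed sensing/processing delay in a feedback law can be illustrated with a toy one-dimensional sketch. The scalar plant, gain, and delay length below are illustrative assumptions, not the bat's curvature-control model:

```python
# Toy sketch of a feedback law with a fixed sensing/processing delay, in the
# spirit of the delayed feedback model mentioned above. The scalar plant,
# gain, and delay length are illustrative assumptions.
from collections import deque

def run_delayed_feedback(delay_steps, gain=0.2, steps=50):
    """Drive state x toward 0 using delayed measurements of x."""
    x = 1.0
    buf = deque([x] * delay_steps, maxlen=delay_steps)
    history = []
    for _ in range(steps):
        measured = buf[0] if delay_steps else x  # oldest sample = delayed view
        x = x - gain * measured                  # proportional correction
        if delay_steps:
            buf.append(x)
        history.append(x)
    return history

undelayed = run_delayed_feedback(0)   # monotone decay toward 0
delayed = run_delayed_feedback(3)     # same gain, but overshoots past 0
```

Even this minimal model reproduces the qualitative signature of delay: the delayed loop overshoots the setpoint, which is the kind of observed behavior a delay term is introduced to capture.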
The coordination of a multi-robot system searching for multiple targets in a dynamic environment is challenging because the
multi-robot system demands group coherence (agents need the incentive to work together faithfully) and group
competence (agents need to know how to work together well). In our previously proposed bio-inspired coordination
method, Local Interaction through Virtual Stigmergy (LIVS), one problem is the considerable randomness of the robot
movement during coordination, which may lead to more power consumption and longer searching time. To address
these issues, an adaptive LIVS (ALIVS) method is proposed in this paper, which not only considers travel cost and
target weight, but also predicts the target/robot ratio and potential robot redundancy with respect to the detected
targets. Furthermore, dynamic weight adjustment is also applied to improve search performance. This new
method is a truly distributed method in which each robot makes its own decision based on its local sensing information and
the information from its neighbors. Basically, each robot only communicates with its neighbors through a virtual
stigmergy mechanism and makes its local movement decision based on a Particle Swarm Optimization (PSO) algorithm.
The proposed ALIVS algorithm has been implemented on the embodied robot simulator Player/Stage in a target-search
scenario. The simulation results demonstrate the efficiency and robustness of the method in a power-efficient manner.
To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR)
planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that
allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and
constraints to address dynamic collection requirements for assessment. To meet this need we have designed an
evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based
Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and
optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of
route-finding problems which is sometimes discontinuous in the ISR domain because of conflicting objectives such as
minimizing asset utilization yet maximizing ISR coverage. EAs are uniquely suited for generating solutions in dynamic
environments and also allow user feedback. They are therefore ideal for "streaming optimization" and dynamic
replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to
automatically generate a diverse set of high-performing collection plans given multiple objectives, constraints, and
assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint
Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm and EAs in general to the ISR
planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented
along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design,
early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future
research and development, as well as technology transition goals.
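The non-dominated sorting step at the core of NSGA-II can be sketched compactly for two minimized objectives. The point values below are illustrative; I2PARS's actual genetic representations and operators are not shown:

```python
# Sketch of non-dominated sorting (the partitioning step of NSGA-II) for
# two-objective points, both minimized. Objective values are illustrative.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(points):
    """Partition points into successive Pareto fronts."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# e.g. (asset utilization, uncovered area), both to be minimized
pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
fronts = non_dominated_sort(pts)
```

The first front is the set of plans no other plan beats on both objectives at once; NSGA-II additionally applies crowding-distance ranking within fronts to maintain diversity, which this sketch omits.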
Finding certain associated signals in the modern electromagnetic environment can prove a difficult task due to signal
characteristics and associated platform tactics as well as the systems used to find these signals. One approach to finding
such signal sets is to employ multiple small unmanned aerial systems (UASs) equipped with RF sensors in a team to
search an area. The search environment may be partially known, but with a significant level of uncertainty as to the
locations and emissions behavior of the individual signals and their associated platforms. The team is likely to benefit
from a combination of using uncertain a priori information for planning and online search algorithms for dynamic
tasking of the team. Two search algorithms are examined for effectiveness: Archimedean spirals, in which the UASs
comprising the team do not respond to the environment, and artificial potential fields, in which they use environmental
perception and interactions to dynamically guide the search. A multi-objective genetic algorithm (MOGA) is used to
explore the desirable characteristics of search algorithms for this problem using two performance objectives. The results
indicate that the MOGA can successfully use uncertain a priori information to set the parameters of the search
algorithms. Also, we find that artificial potential fields may result in good performance, but that each of the fields has a
different contribution that may be appropriate only in certain states.
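An artificial-potential-field step of the kind described can be sketched as an attractive pull toward a goal estimate plus a repulsive push away from nearby teammates. The gains, ranges, and geometry below are illustrative assumptions:

```python
# Minimal artificial-potential-field step for a searching agent: attraction
# toward an estimated emitter location, repulsion from a nearby teammate to
# keep the team spread out. Gains and ranges are illustrative assumptions.
import math

def potential_step(pos, goal, teammates, k_att=1.0, k_rep=2.0, rep_range=3.0):
    """Return the negative-gradient direction (unit vector) at pos."""
    fx = k_att * (goal[0] - pos[0])        # attractive force toward goal
    fy = k_att * (goal[1] - pos[1])
    for t in teammates:                    # repulsion within rep_range only
        dx, dy = pos[0] - t[0], pos[1] - t[1]
        d = math.hypot(dx, dy)
        if 0 < d < rep_range:
            scale = k_rep * (1.0 / d - 1.0 / rep_range) / d ** 2
            fx += scale * dx
            fy += scale * dy
    mag = math.hypot(fx, fy)
    return (fx / mag, fy / mag) if mag > 0 else (0.0, 0.0)

# Agent at origin, emitter estimate to the east, teammate just to the north:
step = potential_step((0.0, 0.0), (10.0, 0.0), [(0.0, 1.0)])
```

The MOGA described in the abstract would tune parameters like `k_att`, `k_rep`, and `rep_range` against the two performance objectives rather than fixing them by hand as here.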
We examine the use of local decentralized decision-making methods for solving the problem of resource allocation.
Specifically, we study the problem of frequency coverage given a team of cooperating receivers. The decision
making process is decentralized in that receivers can only communicate locally. We use an extension of the
minority game approach to allocate receivers to current frequency coverage tasks.
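A single round of a toy minority game illustrates the decision mechanism: each receiver picks one of two frequency bands using its currently best-scoring strategy table, and strategies that would have chosen the less-contended (minority) band are rewarded. The strategy tables and scoring here are simplified illustrative assumptions, not the paper's extension:

```python
# Toy minority-game round for frequency-band allocation. Each agent holds a
# few strategy tables mapping a shared history state to a band choice; the
# minority band is the "winning" (less contended) one. Illustrative only.
import random

def play_round(strategies, scores, history):
    """One round: each agent follows its currently best-scoring strategy."""
    choices = []
    for tables, sc in zip(strategies, scores):
        best = max(range(len(tables)), key=lambda i: sc[i])
        choices.append(tables[best][history])
    counts = [choices.count(0), choices.count(1)]
    minority = 0 if counts[0] <= counts[1] else 1
    for tables, sc in zip(strategies, scores):   # virtual scoring: reward
        for i, table in enumerate(tables):       # strategies that would have
            if table[history] == minority:       # picked the minority band
                sc[i] += 1
    return minority, choices

rng = random.Random(0)
n_agents, n_strats, mem_states = 5, 2, 4
strategies = [[[rng.randint(0, 1) for _ in range(mem_states)]
               for _ in range(n_strats)] for _ in range(n_agents)]
scores = [[0] * n_strats for _ in range(n_agents)]
minority, choices = play_round(strategies, scores, history=2)
```

Repeating such rounds lets agents self-organize toward a balanced allocation using only locally observable outcomes, which is the property that makes the approach attractive for decentralized receiver tasking.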
We present a novel mathematical framework for data mining blogger text entries and converting latent conceptual
information into analytical predictive equations. These differential equations are conceptual models of the blogger's
topic and state-of-mind transition dynamics. The mathematical framework is explored for its value in characterization of
topic content and topic tracking as well as identification and prediction of topic dynamic changes.
Intelligence analysts are tasked with making sense of enormous amounts of data and gaining an awareness of a situation that can be acted upon. This process can be extremely difficult and time consuming. Trying to differentiate between important pieces of information and extraneous data only complicates the problem. When dealing with data containing entities and relationships, social network analysis (SNA) techniques can be employed to make this job easier. Applying network measures to social network graphs can identify the most significant nodes (entities) and edges (relationships) and help the analyst further focus on key areas of concern. Strange developed a model that identifies high value targets such as centers of gravity and critical vulnerabilities. SNA lends itself to the discovery of these high value targets and the Air Force Research Laboratory (AFRL) has investigated several network measures such as centrality, betweenness, and grouping to identify centers of gravity and critical vulnerabilities. Using these network measures, a process for the intelligence analyst has been developed to aid analysts in identifying points of tactical emphasis. Organizational Risk Analyzer (ORA) and Terrorist Modus Operandi Discovery System (TMODS) are the two applications used to compute the network measures and identify the points to be acted upon. Therefore, the result of leveraging social network analysis techniques and applications will provide the analyst and the intelligence community with more focused and concentrated analysis results allowing them to more easily exploit key attributes of a network, thus saving time, money, and manpower.
E. C. Adam defined Situational Awareness (SA) as "the mental representation and understanding of objects, events,
people, system states, interactions, environmental conditions, and other situation-specific factors affecting human
performance in complex and dynamic tasks. Stated in lay terms, SA is simply knowing what is going on so you can
figure out what to do." We propose a novel idea to assist the human in gaining SA. Our hypothesis is that nature uses
qualia as a compression scheme to represent the many concepts encountered in everyday life. Qualia enable humans to
quickly come up with SA based on many complex measurements from their sensors (eyes, ears, taste, touch, memory,
etc.), expectations, and experiences. Our ultimate objective is to develop a computer that uses qualia concepts to
transform sensor data to assist the human in gaining and maintaining improved SA. However, before any computer can
use qualia, we must first define a representation for qualia that can be implemented computationally. This paper will
present our representation for qualia. The representation is not simply a hierarchical aggregation of input data. Instead, it
is a prediction of what will happen next, derived from computations resulting from sensory inputs and the computational
engine of a qualia generator and qualia processor.
Although more information than ever before is available to support the intelligence analyst, the vast proliferation of types of data, devices, and protocols makes it increasingly
difficult to ensure that the right information is received by the right people at the right time. Analysts can rapidly shift between information overload and an information vacuum depending on their location and available equipment. The ability to securely and reliably manage and deliver critical knowledge and actionable intelligence to the analyst, regardless of device configuration (bandwidth, processing speed, etc.), classification level, or location, would provide the analyst 24/7 access to usable information. There are several important components of an intuitive system that can provide timely information in a user-preferred manner. Two of these components, formatting information to accommodate the user's profiles and identifying solutions to the problem of secure information delivery across multiple security
levels, will be discussed in this paper.
RF/IR wireless (virtual) synapses are critical components of HYDRA (Hyper-Distributed Robotic Autonomy)
neural networks, already discussed in two earlier papers. The HYDRA network has the potential to be very large, up to
10^11 neurons and 10^18 synapses, based on already established technologies (cellular RF telephony and IR-wireless
LANs). It is organized into almost fully connected IR-wireless clusters. The HYDRA neurons and synapses are very
flexible, simple, and low-cost. They can be modified into a broad variety of biologically-inspired brain-like computing
capabilities. In this third paper, we focus on neural hardware in general, and on IR-wireless synapses in particular. Such
synapses, based on LED/LD-connections, dominate the HYDRA neural cluster.
In recent years, there has been increased interest in the use of evolutionary algorithms (EAs) in the design of robust
image transforms for use in defense and security applications. An EA replaces the defining filter coefficients
of a discrete wavelet transform (DWT) to provide improved image quality within bandwidth-limited image processing
applications, such as the transmission of surveillance data by swarms of unmanned aerial vehicles (UAVs)
over shared communication channels. The evolvability of image transform filters depends upon the properties
of the underlying fitness landscape traversed by the evolutionary algorithm. The landscape topography determines
the ease with which an optimization algorithm may identify highly-fit filters. The properties of a fitness
landscape depend upon a chosen evaluation function defined over the space of possible solutions. Evaluation
functions appropriate for image filter evolution include mean squared error (MSE), the universal image quality
index (UQI), peak signal-to-noise ratio (PSNR), and average absolute pixel error (AAPE). We conduct a theoretical
comparison of these image quality measures using random walks through fitness landscapes defined over
each evaluation function. This analysis allows us to compare the relative evolvability of the various potential
image quality measures by examining fitness topology for each measure in terms of ruggedness and deceptiveness.
A theoretical understanding of the topology of fitness landscapes aids in the design of evolutionary algorithms
capable of identifying near-optimal image transforms suitable for deployment in defense and security applications
of image processing.
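The random-walk analysis described above is commonly summarized by the lag-1 autocorrelation of fitness values along the walk: smoother landscapes yield higher autocorrelation. The continuous landscape and fitness functions below are illustrative stand-ins for evolved filter coefficients and the MSE/UQI/PSNR/AAPE measures:

```python
# Sketch of random-walk autocorrelation analysis of fitness-landscape
# ruggedness. The landscapes here are illustrative stand-ins; higher lag-1
# autocorrelation indicates a smoother (more evolvable) landscape.
import random

def autocorrelation(series, lag=1):
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    if var == 0:
        return 1.0
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

def random_walk_fitnesses(fitness, length, dims, rng):
    """Mutate one coordinate per step; record fitness along the walk."""
    x = [rng.random() for _ in range(dims)]
    out = []
    for _ in range(length):
        x[rng.randrange(dims)] = rng.random()
        out.append(fitness(x))
    return out

rng = random.Random(42)
smooth = lambda x: -sum(v * v for v in x)                  # smooth landscape
rugged = lambda x: -sum(v * v for v in x) + 2 * rng.random()  # + noise term
rho_smooth = autocorrelation(random_walk_fitnesses(smooth, 500, 8, rng))
rho_rugged = autocorrelation(random_walk_fitnesses(rugged, 500, 8, rng))
```

Comparing such autocorrelation values across evaluation functions is one way to rank their relative evolvability before committing an EA to any one of them.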
Hyperspectral imagery (HSI) data has proven useful for discriminating targets; however, the relatively slow speed at
which HSI data is gathered for an entire frame reduces the usefulness of fusing this information with grayscale video. A
new sensor under development has the ability to provide HSI data for a limited number of pixels while providing
grayscale video for the remainder of the pixels. The HSI data is co-registered with the grayscale video and is available
for each frame. This paper explores the exploitation of this new sensor for target tracking. The primary challenge of
exploiting this new sensor is to determine where the gathering of HSI data will be the most useful. We wish to optimize
the selection of pixels for which we will gather HSI data. We refer to this as spatial sampling. It is proposed that
spatial sampling be solved using a utility function where pixels receive a value based on their nearness to a target of
interest (TOI). The TOIs are determined from the tracking algorithm providing a close coupling of the tracking and the
sensor control. The relative importance or weighting of the different types of TOI will be accomplished by a genetic
algorithm. Tracking performance of the spatially sampled tracker is compared both to tracking with no HSI data and,
although physically unrealizable, to tracking with complete HSI data, to demonstrate its effectiveness within these upper and lower bounds.
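A nearness-based utility function of the kind described can be sketched directly. The Gaussian falloff, weights, and budget below are illustrative assumptions fixed by hand (the abstract's approach tunes TOI weightings with a genetic algorithm):

```python
# Sketch of a nearness-based utility for choosing which pixels receive HSI
# samples: pixels near a target of interest (TOI) score highest. The
# Gaussian falloff and all numbers are illustrative assumptions.
import math

def pixel_utility(pixel, tois):
    """Sum of weighted Gaussian falloffs around each target of interest."""
    u = 0.0
    for (tx, ty), weight, sigma in tois:
        d2 = (pixel[0] - tx) ** 2 + (pixel[1] - ty) ** 2
        u += weight * math.exp(-d2 / (2 * sigma ** 2))
    return u

def select_hsi_pixels(width, height, tois, budget):
    """Pick the 'budget' highest-utility pixels for HSI sampling."""
    pixels = [(x, y) for x in range(width) for y in range(height)]
    pixels.sort(key=lambda p: pixel_utility(p, tois), reverse=True)
    return pixels[:budget]

# Two TOIs: a high-weight target at (2, 2), a lower-weight one at (7, 7).
tois = [((2, 2), 1.0, 1.5), ((7, 7), 0.5, 1.5)]
chosen = select_hsi_pixels(10, 10, tois, budget=5)
```

With a tight budget, the selection concentrates around the highest-weight TOI; enlarging the budget or the weights spreads samples to secondary targets, which is the trade-off the genetic algorithm is tasked with balancing.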
Efficient Global Optimization (EGO) is a competent evolutionary algorithm suited for problems with limited design
parameters and expensive cost functions. Many electromagnetics problems, including some antenna designs, fall
into this class, as complex electromagnetics simulations can take substantial computational effort. This makes simple
evolutionary algorithms such as genetic algorithms or particle swarms very time-consuming for design optimization, as
many iterations of large populations are usually required. When physical experiments are necessary to perform
tradeoffs or determine effects which may not be simulated, use of these algorithms is simply not practical at all due to
the large numbers of measurements required. In this paper we first present a brief introduction to the EGO algorithm.
We then present the parasitic superdirective two-element array design problem and results obtained by applying EGO to
obtain the optimal element separation and operating frequency to maximize the array directivity. We compare these
results to both the optimal solution and results obtained by performing a similar optimization using the Nelder-Mead
downhill simplex method. Our results indicate that, unlike the
Nelder-Mead algorithm, the EGO algorithm did not
become stuck in local minima but rather found the area of the correct global minimum. However, our implementation
did not always drill down into the precise minimum and the addition of a local search technique seems to be indicated.
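At the heart of EGO is the expected-improvement criterion, which scores a candidate design point from the surrogate model's predicted mean and standard deviation there. A minimal sketch, assuming a Gaussian predictive distribution and a minimization objective (the kriging surrogate itself is not shown):

```python
# Sketch of the expected-improvement (EI) acquisition used by EGO, assuming
# the surrogate gives a Gaussian prediction (mu, sigma) at a candidate point
# and the objective is minimized. The example numbers are illustrative.
import math

def expected_improvement(mu, sigma, f_best):
    """EI at a point with surrogate mean mu and standard deviation sigma,
    given the best objective value f_best observed so far."""
    if sigma <= 0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))         # standard normal cdf
    return (f_best - mu) * Phi + sigma * phi

# A confidently poor prediction earns almost no EI, while an uncertain,
# mildly promising one does -- this is how EGO balances exploitation
# against exploration and avoids getting stuck in local minima.
ei_certain = expected_improvement(mu=1.2, sigma=0.01, f_best=1.0)
ei_uncertain = expected_improvement(mu=1.1, sigma=0.5, f_best=1.0)
```

Because EI rewards predictive uncertainty as well as low predicted values, the search keeps probing unexplored regions, which matches the global behavior reported above; the weak final drill-down is also visible here, since EI shrinks toward zero as uncertainty collapses near the optimum.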