In contrast to networks of neurons where behavior is governed by average firing rate, what computations are implemented most easily, efficiently, and robustly by networks of neurons that spike? Spiking neurons synchronize much more readily when their firing rates are similar than when they are different. This property can be used to implement, very simply and robustly, a "many are equal" operation in a network of appropriately connected spiking neurons: synchronization indicates that many of the neurons' firing rates are similar. Such an operation is computationally very powerful. The computation is robust to outliers and contains a natural invariance: over a broad range of firing rates, the synchronization phenomenon depends only on rate similarity, not on the precise firing-rate level. We demonstrate the computational power of this operation by constructing a simple network of spiking neurons whose output neurons respond selectively to a complex spectrotemporal pattern, the spoken word "one". The response is invariant to uniform time warp. Time is encoded by slowly decaying firing rates, and the selectivity is largely speaker-independent. We posit that "many are equal" synchronization is a simple yet powerful computational building block for spiking neural networks.
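To make the synchronization mechanism concrete, here is a minimal, self-contained sketch (our construction, not the authors' model): pulse-coupled leaky integrate-and-fire units whose spikes give a small excitatory kick to the rest of the population tend to lock when their input drives are similar, and the spike-time spread across the population serves as a crude synchrony readout. All parameter values are illustrative.

```python
import numpy as np

def median_spike_spread(drives, steps=60000, dt=1e-4, tau=0.02,
                        thresh=1.0, kick=0.05, seed=1):
    """Simulate pulse-coupled leaky integrate-and-fire neurons and
    return the median spread (max - min) of the most recent spike
    times across the population: a small spread means synchronized."""
    drive = np.asarray(drives, dtype=float)
    rng = np.random.default_rng(seed)
    v = rng.random(len(drive)) * thresh      # random initial voltages
    last_spike = np.full(len(drive), -1.0)   # last spike time per neuron
    spreads = []
    for step in range(steps):
        v += dt * (drive - v / tau)          # leaky integration
        fired = v >= thresh
        if fired.any():
            v[fired] -= thresh                   # reset spiking neurons
            v[~fired] += kick * fired.sum()      # excitatory kick to the rest
            last_spike[fired] = step * dt
            if (last_spike >= 0).all():          # everyone has spiked once
                spreads.append(last_spike.max() - last_spike.min())
    return float(np.median(spreads))

print("similar rates:   ", median_spike_spread([60.0, 61.0, 59.5, 60.5]))
print("dissimilar rates:", median_spike_spread([55.0, 70.0, 85.0, 100.0]))
```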
Networking systems presently lack the ability to intelligently process the rich multimedia content of the data traffic they carry. Endowing artificial systems with the ability to adapt to changing conditions requires algorithms that can rapidly learn from examples. We demonstrate the application of such learning algorithms on an inexpensive quadruped robot constructed to perform simple sensorimotor tasks. The robot learns to track a particular object by discovering the salient visual and auditory cues unique to that object. The system uses a convolutional neural network that automatically combines color, luminance, motion, and auditory information. The weights of the network are adjusted using feedback from a teacher to reflect the reliability of the various input channels in the surrounding environment. Additionally, the robot is able to compensate for its own motion by adapting the parameters of a vestibulo-ocular reflex system.
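The channel-reliability adaptation can be illustrated with a hedged sketch (the function name and the update rule are our assumptions; the paper's actual learning rule is not specified in the abstract): each channel votes for a target position, and the teacher's feedback nudges a channel's weight toward 1 when its vote agrees with the teacher and toward 0 otherwise.

```python
import numpy as np

def update_weights(w, channel_maps, teacher_pos, lr=0.1):
    """channel_maps[c] holds channel c's response over candidate target
    positions; a channel whose peak matches the teacher's position is
    nudged toward weight 1, the others decay toward 0."""
    w = np.asarray(w, dtype=float).copy()
    for c, resp in enumerate(channel_maps):
        agree = 1.0 if int(np.argmax(resp)) == teacher_pos else 0.0
        w[c] += lr * (agree - w[c])          # delta-rule style update
    return w / w.sum()                       # keep the mixture normalized

# four channels in the spirit of the paper: color, luminance, motion, audio
w = np.full(4, 0.25)
```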
We describe an integrated vision system that reliably detects persons in static color natural scenes, or other targets among distracting objects. The system is built upon the biologically inspired synergy between two processing stages: a fast trainable visual attention front-end ("where"), which rapidly selects a restricted number of conspicuous image locations, and a computationally expensive object recognition back-end ("what"), which determines whether the selected locations are targets of interest. We experiment with two recognition back-ends: one uses a support vector machine algorithm and achieves highly reliable recognition of pedestrians in natural scenes, but is not particularly biologically plausible, while the other is directly inspired by the neurobiology of inferotemporal cortex, but is not yet as robust with natural images. Integrating the attention and recognition algorithms yields substantial speedup over exhaustive search while preserving the detection rate. The success of this approach demonstrates that using a biological attention-based strategy to guide an object recognition system may represent an efficient approach to rapid scene analysis.
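The where/what division of labor suggests a pipeline like the following hedged sketch. Here `saliency_map` and `extract_patch` are hypothetical placeholders for the front-end and the patch descriptor, and `clf` stands for any trained binary classifier (e.g., a scikit-learn SVC); only the top-k conspicuous locations ever reach the expensive back-end.

```python
import numpy as np

def detect(image, clf, saliency_map, extract_patch, k=5):
    """Run the expensive 'what' back-end only at the k most conspicuous
    locations chosen by the 'where' front-end."""
    sal = saliency_map(image)                       # hypothetical front-end
    order = np.argsort(sal, axis=None)[::-1][:k]    # top-k salient points
    hits = []
    for idx in order:
        y, x = np.unravel_index(idx, sal.shape)
        feat = extract_patch(image, y, x).ravel()   # hypothetical descriptor
        if clf.predict([feat])[0] == 1:             # 1 = target (e.g., pedestrian)
            hits.append((int(y), int(x)))
    return hits
```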
What strikes the attention of a neural network designer is that the chemicals seem to work not so much on individual neural circuits as on neural cell assemblies: large blocks of neural networks that carry out high-level tasks using their constituent networks as needed. It follows that we might seek ways of achieving the same sort of behavior in an artificial neural network. In what follows, we provide two examples of how that might be done in an artificial system.
This paper introduces the Holonic Transformation Method (HTM), which can represent huge and complex systems clearly and precisely, as they are expected and whenever they are wanted, and which can also control them intelligently and flexibly. The idea originates with Arthur Koestler, the late Hungarian novelist, science writer, and philosopher, who defined it in terms of living systems. By expanding this idea philosophically and mathematically for use in engineering management, it becomes possible to treat huge and complex systems that conventional methods cannot handle.
We consider the algebraic foundations of the geometrical-optics approximation, aimed at optical implementation of computational intelligence models. The theory of triangular norms and measure-based means is used to formulate the description. The process of negative photo-registration is considered as the implementation of the negation that generates the algebra. Three approximations of the transmittance of negative recording media are considered: linear, involutive, and non-involutive. Optically realizable orders and relations of fuzzy numbers, fuzzy sets, and images are considered.
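For readers unfamiliar with the distinction, the following textbook examples (ours, not the paper's transmittance models) show an involutive negation, a non-involutive one, and two common triangular norms:

```python
def n_linear(x):
    """Linear negation; involutive since N(N(x)) == x."""
    return 1.0 - x

def n_quadratic(x):
    """A non-involutive negation: N(N(x)) != x in general."""
    return 1.0 - x ** 2

def t_min(a, b):        # Goedel (minimum) t-norm
    return min(a, b)

def t_product(a, b):    # product t-norm
    return a * b

x = 0.3
print(n_linear(n_linear(x)))        # 0.3     -> involutive
print(n_quadratic(n_quadratic(x)))  # 0.1719  -> not x, non-involutive
print(t_min(0.3, 0.7), t_product(0.3, 0.7))
```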
Recent advances in image and signal processing have created a new and challenging environment for biomedical engineers. Methods developed for other fields are now finding fertile ground in biomedicine, especially in the analysis of bio-signals and in the understanding of images. More and more, these methods are used in the operating room, helping surgeons, and in the physician's office as aids for diagnostic purposes. Neural network (NN) research, on the other hand, has come a long way in the past decade. NNs now consist of many thousands of highly interconnected processing elements that can encode, store, and recall relationships between different patterns by altering the weighting coefficients of inputs in a systematic way. Although they can generate reasonable outputs from unknown input patterns and can tolerate a great deal of noise, they are very slow when run on a serial machine. We have used advanced signal processing and innovative image processing methods, along with computational intelligence, for diagnostic purposes and as visualization aids inside and outside the operating room. The applications discussed include EEGs and field potentials in Parkinson's disease, along with 3D reconstruction of MR and fMR brain images of Parkinson's patients; these are currently used in the operating room for pallidotomies and deep brain stimulation (DBS).
Presently there are many technological and industrial efforts toward the development of virtual flight simulators, usually based on networked technologies. In order to solve the problems of real-time availability and realistic simulator quality, source imagery and digital terrain models (DTM) should have a generalized structure that supports different image resolutions and different amounts of detail at each level of the 3D simulation. One of the central problems is geotruthing of satellite imagery, with realistic accuracy requirements, with respect to the DTM. Traditionally such geotruthing is achieved by means of ground-control-point measurements. This process is labor intensive and requires special photogrammetric operator skills. To avoid it, this paper investigates an algorithm, based on catastrophe theory, for recognizing singularities of terrain and image models. The approach does not require training but operates by directly comparing the analytical manifolds derived from the DTM with those actually extracted from the image. The technology described in this paper, the Catastrophe Approach, and the associated algorithms for satellite imagery treatment may be implemented in multi-level image-pyramid flight simulators. Theoretical analysis and practical realization indicate that the Catastrophe Approach is easy to use for a final customer and can be applied on-line in networked flight simulators.
The schema theorem describes the expected proportion of a particular schema at the next generation in an evolutionary algorithm given the current proportion of that schema, its realized fitness, and the mean fitness of all extant solutions (ignoring the effects of variation operators). Simple iterative analysis of this relationship, extrapolated over successive generations, has led to a claim that the use of proportional selection generates an exponentially increasing proportion of schemata that are of above-average fitness. This paper shows that this claim is not correct, and moreover that iterating the expectations derived from the schema theorem leads to erroneous predictions about schemata propagation even in the simplest problems and even when iterated over only two generations.
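For reference, the one-generation expectation at issue, with proportional selection and variation operators ignored, takes the familiar form (notation ours):
$$ E[m(H, t+1)] = m(H, t)\,\frac{f(H, t)}{\bar{f}(t)}, $$
where $m(H,t)$ is the proportion of schema $H$ at generation $t$, $f(H,t)$ its realized mean fitness, and $\bar{f}(t)$ the mean fitness of the population. The criticized extrapolation iterates this expectation over successive generations as if each expected proportion were the realized one.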
There are many approaches to solving multi-objective optimization problems using evolutionary algorithms. We need to select methods for representing and aggregating preferences, as well as strategies for searching multi-dimensional objective spaces. First we suggest the use of linguistic variables to represent preferences and the use of fuzzy rule systems to implement tradeoff aggregations. After a review of alternative EA methods for multi-objective optimization, we explore the use of multi-sexual genetic algorithms (MSGA). Using an MSGA requires modifying certain parts of the GA, namely the selection and crossover operations. Each chromosome is extended with a gender tag appended at the end, and the selection operator groups solutions according to their gender tag to prepare them for crossover. We use single- and double-point crossovers, and we determine the gender of each offspring by the amount of genetic material provided by each parent: the parent that contributed most to the creation of a specific offspring determines the gender the offspring inherits. This is still a work in progress, and in the conclusion we examine many possible future extensions and experiments.
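A minimal sketch of the gender-tag bookkeeping described above (function name and parameter choices are ours, not the paper's):

```python
import random

def msga_crossover(p1, p2, genes=10):
    """Chromosomes are gene lists with a gender tag appended at the end.
    Single-point crossover; the offspring inherits the gender tag of the
    parent that contributed the larger share of genes (ties: first parent,
    a detail the abstract does not specify)."""
    cut = random.randint(1, genes - 1)
    child = p1[:cut] + p2[cut:genes]           # mix the gene segments
    donor = p1 if cut >= genes - cut else p2   # majority contributor
    return child + [donor[genes]]              # append inherited tag

mom = [random.random() for _ in range(10)] + ["F"]
dad = [random.random() for _ in range(10)] + ["M"]
print(msga_crossover(mom, dad))
```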
We describe modeling techniques from the field of Soft Computing (SC) and illustrate their use in solving diagnostic and prognostic problems. Soft Computing is an association of computing methodologies that includes as its principal members fuzzy, neural, evolutionary, and probabilistic computing. These methodologies enable us to deal with the imprecise and uncertain data and incomplete domain knowledge typically encountered in real-world applications. We analyze five successful SC case studies of applications to equipment diagnostics, forecasting, and control: prediction of voltage breakdown in power distribution networks, prediction of paper web breakage in paper mills, raw mix proportioning control in cement plants, diagnostics of power generation faults, and classification of MRI signatures for incipient failure detection. We conclude by projecting future trends of SC technologies and their use in constructing hybrid SC systems.
The problem of reconstructing an irregularly sampled, discrete-time, band-limited signal with unknown sampling locations can be analyzed using both geometric and algebraic approaches. It can be solved using iterative and non-iterative techniques, including the cyclic coordinate approach and the random search method. When the spectrum of the given signal is band-limited to L coefficients, the algebraic structure underlying the signal can be dealt with using subspace techniques, and a method is suggested to classify the solutions based on this approach. We numerically solve the Irregular Sampling at Unknown Locations (ISUL) problem by treating it as a combinatorial optimization problem. The exhaustive search for the optimum solution is computationally intensive, and the need for a more efficient optimization technique leads us to propose Evolutionary Programming as a stochastic optimization technique. Evolutionary algorithms, based on models of natural evolution, were originally developed as a method to evolve finite-state machines for time-series prediction tasks and were later extended to parameter optimization problems. The solution space is modeled as a population of individuals, and the search for the optimum proceeds by evolving toward the best individual in the population. We propose an Evolutionary Programming (EP) based method to converge to the global optimum and obtain the set of sampling locations for the given irregularly sampled signal. The results obtained by EP are compared with the random search and cyclic coordinate descent algorithms.
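A schematic EP loop for the ISUL search might look as follows. The fitness used here, the negative least-squares residual of an L-coefficient low-pass DFT model fitted at the candidate locations, and the mutation operator are our illustrative choices, not necessarily the paper's:

```python
import numpy as np

def fitness(locs, samples, L, N):
    """Negative residual of a least-squares fit of an L-coefficient
    (low-pass) DFT model evaluated at the candidate sampling locations."""
    A = np.exp(2j * np.pi * np.outer(locs, np.arange(L)) / N)
    coef, *_ = np.linalg.lstsq(A, samples.astype(complex), rcond=None)
    return -np.linalg.norm(A @ coef - samples)

def ep_search(samples, L, N, pop=30, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    M = len(samples)
    popn = [np.sort(rng.choice(N, M, replace=False)) for _ in range(pop)]
    for _ in range(gens):
        kids = []
        for p in popn:                       # one mutant per parent
            q = p.copy()
            i = rng.integers(M)
            q[i] = (q[i] + rng.integers(-2, 3)) % N  # jitter one location
            kids.append(np.sort(q))          # (duplicates possible; a sketch)
        popn = sorted(popn + kids,
                      key=lambda s: fitness(s, samples, L, N))[-pop:]
    return popn[-1]                          # best candidate location set
```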
Given a classifier, one presently uses a confusion matrix to quantify how much the classifier deviates from truth on training data. The shortcomings of this limited use of the confusion matrix are that (1) it does not communicate data trends in feature space, for example where errors congregate, and (2) the truth mapping is largely unknown except on a small, potentially biased sample set. In practice, one does not have truth but has to rely on an expert's opinion. We propose a mathematical theory of confusion that compares and contrasts the opinions of two experts (i.e., two classifiers). This theory has advantages over traditional confusion matrices in that it provides a capability for expressing classification confidence over ALL of feature space, not just at sampled truth. It quantifies different types of confusion between classifiers and yields the region of feature space where confusion occurs. An example using artificial neural networks is given.
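A toy rendering of the idea (our construction, not the proposed theory's formalism): train two "experts" on the same data, sweep a grid over feature space, and mark where they disagree, yielding an explicit confusion region rather than a single matrix.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

a = KNeighborsClassifier(5).fit(X, y)               # "expert" 1
b = DecisionTreeClassifier(max_depth=3).fit(X, y)   # "expert" 2

# evaluate both experts over (a grid on) ALL of feature space
xs, ys = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
grid = np.c_[xs.ravel(), ys.ravel()]
confusion = a.predict(grid) != b.predict(grid)      # disagreement mask
print(f"{confusion.mean():.1%} of the grid lies in the confusion region")
```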
In modern high-speed networks, such as those based on TCP/IP or ATM technologies, congestion control mechanisms play an important role in achieving optimum performance. Multiple-buffer architectures are one mechanism that can be used to satisfy the differing QoS requirements of different connections. Priority control mechanisms must be applied to schedule the service sequence between the buffers; they must be able to guarantee the different QoS requirements of all connections while meeting the real-time characteristics of the networks. In this paper we evaluate several dynamic priority schemes and extend them through the use of fuzzy techniques. Performance evaluation shows the efficiency that such an approach provides.
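As a flavor of how fuzzy techniques can extend a dynamic priority scheme, here is a hedged two-input sketch (the rule base and membership functions are illustrative, not the schemes evaluated in the paper): a buffer's crisp priority grows with its occupancy and with the age of its head-of-line packet.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def priority(occupancy, hol_delay):
    """occupancy and hol_delay normalized to [0, 1]; returns a crisp
    priority via a weighted average of rule outputs (Sugeno-style)."""
    full = tri(occupancy, 0.3, 1.0, 1.7)    # "buffer is full"
    late = tri(hol_delay, 0.3, 1.0, 1.7)    # "head-of-line packet is late"
    rules = [(full, 1.0), (late, 1.0), (1 - max(full, late), 0.2)]
    return sum(w * out for w, out in rules) / sum(w for w, _ in rules)

# the buffer with the highest crisp priority is served next
print(priority(0.9, 0.2), priority(0.4, 0.95), priority(0.1, 0.1))
```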
Intelligent environments are systems that are aware of the spatial information and activities within them through sensors and that interact with people in a natural and unobtrusive way. An intelligent system using a networked omnivision array is proposed, based on specified requirements of intelligent environments. It utilizes an Omni-Directional Vision Sensor (ODVS) network as its sensory input. The ODVS optical modeling is described, which allows panoramic and perspective view generation. A 3D tracker based on the ODVS network is constructed. Using the tracking information, active camera selection and dynamic perspective-view generation enable real-time face tracking. Face recognition is also implemented for person identification. Current results of the modules and extensions to the system are also discussed.
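A standard polar unwarping step of the kind the panoramic-view generation implies might look as follows (the calibration parameters, i.e. the mirror center and annulus radii, are placeholders; the paper's ODVS optical model is more detailed):

```python
import numpy as np

def unwarp_panorama(omni, cx, cy, r_in, r_out, width=720, height=120):
    """Sample the annulus between r_in and r_out along rays from the
    mirror center (cx, cy) to build a width x height panoramic strip."""
    theta = np.linspace(0, 2 * np.pi, width, endpoint=False)
    r = np.linspace(r_in, r_out, height)
    xs = (cx + np.outer(r, np.cos(theta))).astype(int)
    ys = (cy + np.outer(r, np.sin(theta))).astype(int)
    # clip to stay inside the source frame (grayscale or color)
    return omni[ys.clip(0, omni.shape[0] - 1),
                xs.clip(0, omni.shape[1] - 1)]
```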
Optics has a number of deep analogies with the main principles of Computational Intelligence. We can see strong analogies between the basic optical phenomena used in Fourier holography and the mathematical foundations of Fuzzy Set Theory, and between the optical holography technique and the principles of the Neural Networks paradigm. Progress in new holographic recording media with self-developing properties points toward a holographic realization of Evolutionary Computations. Based on these analogies, we review holographic techniques from two points of view: fuzzy logic and fuzzy relations.
In this paper, methods of detecting a vehicle in an image are explored. Digital images are taken from a monocular camera. Image processing techniques are applied to each single-frame picture to create a feature vector. Finally, the resulting features are used to classify whether or not there is a car in the picture, using support vector machines. The results are compared with those obtained using a neural network. A discussion of techniques to enhance the feature vector, and of the results from both learning machines, is included.
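A hedged sketch of the classification step with an off-the-shelf SVM; the feature extractor below is a stand-in (the abstract does not specify the actual feature vector):

```python
import numpy as np
from sklearn.svm import SVC

def features(frame):
    """Placeholder feature vector: a coarse intensity histogram."""
    hist, _ = np.histogram(frame, bins=32, range=(0, 255))
    return hist / hist.sum()

def train(frames, labels):
    """frames: labeled single-frame pictures; labels: 1 = car, 0 = no car."""
    X = np.array([features(f) for f in frames])
    return SVC(kernel="rbf", C=1.0).fit(X, labels)

def has_car(clf, frame):
    return bool(clf.predict([features(frame)])[0])
```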
In this article we present a hybrid SOM+PCA approach for face identification based on separating shape and texture information. Shape is processed by a modified-Hausdorff-distance SOM, while texture processing relies on a modular PCA. In most successful view-based recognition systems, shape and texture are jointly used to statistically model a linear or piecewise-linear subspace that optimally explains the face space for a specific database. Our work aims to separate the influence that variance in face shape imprints on the set of eigenfaces in the classical PCA decomposition; in this sense we search for a more efficiently coded face vector for identification. The ultimate goal consists of finding a non-linear transformation invariant to gesture changes and, to a larger extent, to pose changes. The first part of this paper is dedicated to the shape processor of the system, which is based on a novel shape-based Self-Organizing Map; the second part deals with the subspace PCA decomposition, which relies on the SOM clustering. Results are reported comparing face identification between PCA and the SOM+PCA approach.
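For reference, the modified Hausdorff distance (Dubuisson and Jain, 1994) that the shape SOM builds on has a compact standard definition; the sketch below (our code, not the authors') computes it between two edge-point sets given as (N, 2) coordinate arrays:

```python
import numpy as np

def mhd(A, B):
    """Symmetric modified Hausdorff distance between point sets A and B:
    the larger of the two directed mean-of-minimum distances."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).mean(),   # A -> B direction
               d.min(axis=0).mean())   # B -> A direction
```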