Evolutionary optimizers employ independent Gaussian random variables as a central component of their processing, which often renders them resistant to analysis. This paper investigates the applicability of the Hurst dimension, a fractal dimension, as a characterization of processing in an evolutionary optimizer. Results show that this fractal measure does highlight some interesting processing commonalities between standard and self-adaptive evolutionary optimization. A potentially worthwhile modification to evolutionary optimization is suggested based on the results.
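To illustrate the kind of measure involved, the sketch below gives a toy single-window rescaled-range (R/S) estimate of the Hurst exponent; the function name is illustrative, and a proper estimate would fit log(R/S) against log(n) over many window sizes rather than using one window as here.

```python
import math

def hurst_rs(series):
    """Toy single-window rescaled-range (R/S) Hurst estimate:
    H ~ log(R/S) / log(n) for one window spanning the whole series."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    # cumulative deviation from the mean
    cum, z = [], 0.0
    for d in dev:
        z += d
        cum.append(z)
    r = max(cum) - min(cum)                      # range of the cumulative walk
    s = math.sqrt(sum(d * d for d in dev) / n)   # standard deviation
    return math.log(r / s) / math.log(n) if s > 0 else 0.0
```

A trending series yields a larger estimate than a strongly anti-persistent (alternating) one, which is the qualitative behavior the measure is meant to capture.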
Packet-switched networks using the Internet Protocol (IP) provide multimedia services through broadband wireless access to mobile and fixed subscribers from an IP core network via bi-directional paths consisting of a hierarchy of high-speed routers, switches, and servers. Packets are aggregated at the nodes that form the ordered links of end-to-end paths between subscriber and gateway. Network resources are allocated at nodes to meet quality of service (QoS) requirements of new and existing calls. If sufficient resources are not available to satisfy a call's QoS, the call is blocked or dropped, reducing network uptime or availability. Packet flows are shared among redundant devices, clustered at nodes, to reduce blocking and dropping and speed failure recovery. A two-stage genetic algorithm (GA) is proposed to assign resources to feasible paths to provide calls the best possible resource utilization, availability, and QoS levels, while balancing traffic among devices at nodes. The GA operates on a population of integer-valued vectors of call ID, QoS requirements, and end-to-end paths encoded as node-device pairs. Selection, crossover, and mutation are defined for the GA. At call arrivals and departures, the GA limits the number of candidate paths based on their fitness to provide QoS, path availability, resource utilization, and load balance. Simulation results are discussed for different scenarios.
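The integer-vector encoding and operators described above can be sketched as follows; the flat [node, device, node, device, ...] layout and the parameter names are illustrative assumptions, not the paper's exact representation.

```python
import random

def one_point_crossover(parent_a, parent_b, point):
    """Swap the tails of two equal-length integer chromosomes at `point`."""
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate_devices(path, n_devices, rate, rng):
    """Randomly reassign the device of some node-device pairs; odd
    positions in the flat [node, device, ...] vector hold device IDs."""
    out = list(path)
    for i in range(1, len(out), 2):
        if rng.random() < rate:
            out[i] = rng.randrange(n_devices)
    return out
```

Crossover exchanges path suffixes between two candidate assignments, while mutation re-balances traffic by moving a call to a different redundant device at the same node.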
Many complex artificial intelligence (AI) problems are goal-driven in nature, and the opportunity exists to realize the benefits of a goal-oriented solution. In many cases, such as in command and control, a goal-oriented approach may be the only option. One of many appropriate applications for such an approach is War Gaming. War Gaming is an important tool for command and control because it provides a set of alternative courses of action so that military leaders can contemplate their next move on the battlefield. For instance, when making decisions that save lives, it is necessary to completely understand the consequences of a given order. A goal-oriented approach provides a gradually evolving, tractably reasoned solution that inherently follows one of the principles of war: concentration on the objective. Future decision-making will depend not only on the battlefield, but also on a virtual world where military leaders can wage wars and determine their options by playing computer war games much like the real world. The problem with these games is that the built-in AI neither learns nor adapts, and often cheats, because the intelligent player has access to all the information while the user has access only to the limited information provided on a display. These games are written for entertainment, and actions are calculated a priori and off-line, prior to or during development. With these games becoming more sophisticated in structure and less domain-specific in scope, there is a need for a more general intelligent player that can adapt and learn in case the battlefield situations or the rules of engagement change. One such war game that might be considered is Risk. Risk incorporates the principles of war, is a top-down scalable model, and provides a good application for testing a variety of goal-oriented AI approaches.
By integrating a goal-oriented hybrid approach, one can develop a program that plays the Risk game effectively and move one step closer to solving more difficult real-world AI problems. Using a hybrid approach that includes adaptation via evolutionary computation for the intelligent planning of a Risk player's turn provides better dynamic intelligent planning than more uniform approaches.
In this paper we discuss the design of sequential detection networks for nonparametric sequential analysis. We present a general probabilistic model for sequential detection problems where the sample size as well as the statistics of the sample can be varied. A general sequential detection network handles three decisions. First, the network decides whether to continue sampling or stop and make a final decision. Second, in the case of continued sampling the network chooses the source for the next sample. Third, once the sampling is concluded the network makes the final classification decision. We present a Q-learning method to train sequential detection networks through reinforcement learning and cross-entropy minimization on labeled data. As a special case we obtain networks that approximate the optimal parametric sequential probability ratio test. The performance of the proposed detection networks is compared to optimal tests using simulations.
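The parametric procedure that the trained networks approximate is Wald's sequential probability ratio test: accumulate log-likelihood ratios until an upper or lower threshold is crossed. A minimal sketch (the function name, thresholds, and log-likelihood-ratio callback are illustrative) is:

```python
def sprt(samples, loglik_ratio, a, b):
    """Wald's sequential probability ratio test: accumulate the
    log-likelihood ratio until it crosses threshold a or b.
    Returns (decision, samples_used); decision is None if the
    sample budget runs out before a threshold is crossed."""
    s = 0.0
    for n, x in enumerate(samples, start=1):
        s += loglik_ratio(x)
        if s >= b:
            return 1, n   # accept H1
        if s <= a:
            return 0, n   # accept H0
    return None, len(samples)
```

The "continue sampling or stop" decision of the detection network corresponds to the threshold test inside the loop, and the final classification corresponds to which threshold was crossed.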
In the study of group theory, coset enumeration is a major technique for determining the order of finitely presented groups. ACE is an important computer-implemented coset enumeration system. It provides a wide choice of parameter settings, which yield different enumeration strategies. In this paper, an evolutionary algorithm is used to optimize parameter settings for ACE to discover better enumerations for several classic groups. The results show that the evolutionary algorithm discovers ACE parameter settings that construct previously unknown enumerations better than those discovered by hand or by brute-force search techniques.
With the recent release of the movie AI, there is interest in artificial intelligence and in just how far we can take computational intelligence. This paper discusses the advances made in the computational intelligence arena and brings perspective to what may be possible in the future.
Category regions have recently been introduced as new geometric concepts that provide a visualization tool offering significant insight into the nature of the competition among categories during both the training and performance phases of Fuzzy ART (FA) and Fuzzy ARTMAP (FAM). These regions are defined as the geometric interpretation of the Vigilance Test and of the competition of each category with an uncommitted F2-layer node for a specific input pattern (Commitment Test). In this paper we show how the notion of category regions can be naturally extended to Ellipsoid ART (EA) and Ellipsoid ARTMAP (EAM), and focus on the regions' theoretical properties when considering the Choice-by-Difference category choice function. Based on these properties we state three theoretical results applicable to both EA and EAM. Specifically, if r and a denote the vigilance and the choice parameter, respectively, we show that in certain areas of the (a,r) plane the result of EA/EAM training is independent of the specific value of either r or w (the parameter of the activation function value for an uncommitted F2-layer node). Finally, we provide a refined upper bound on the size of the categories created in EA/EAM during training. All the results are immediately applicable to FA/FAM as well.
Ellipsoid ARTMAP (EAM) is an adaptive-resonance-theory neural network architecture that is capable of successfully performing classification tasks using incremental learning. EAM achieves its task by summarizing labeled input data via hyper-ellipsoidal structures (categories). A major property of EAM, when using off-line fast learning, is that it learns its training set perfectly once training has completed. Depending on the classification problem at hand, this implies that off-line EAM training may suffer from over-fitting. For such problems we present an enhancement to the basic Ellipsoid ARTMAP architecture, namely Boosted Ellipsoid ARTMAP (bEAM), designed to simultaneously improve the generalization properties and reduce the number of categories created by EAM's off-line fast learning. This is accomplished by forcing EAM to tolerate occasional misclassification errors during fast learning. An additional advantage of bEAM's design is the capability of learning inconsistent cases, that is, identical patterns with contradictory class labels. After presenting the theory behind bEAM's enhancements, we provide some preliminary experimental results comparing the new variant to the original EAM network, Probabilistic EAM, and three different variants of the Restricted Coulomb Energy neural network on the square-in-a-square classification problem.
In this paper, we introduce a modification of the Fuzzy ARTMAP (FAM) neural network, namely, the Fuzzy ARTMAP with adaptively weighted distances (FAMawd) neural network. In FAMawd we substitute the regular L1-norm with a weighted L1-norm to measure the distances between categories and input patterns. The distance-related weights are a function of a category's shape and allow for bias in the direction of a category's expansion during learning. Moreover, the modification to the distance measurement is proposed in order to study the capability of FAMawd in achieving more compact knowledge representation than FAM, while simultaneously maintaining good classification performance. For a special parameter setting FAMawd simplifies to the original FAM, thus, making FAMawd a generalization of the FAM architecture. We also present an experimental comparison between FAMawd and FAM on two benchmark classification problems in terms of generalization performance and utilization of categories. Our obtained results illustrate FAMawd's potential to exhibit low memory utilization, while maintaining classification performance comparable to FAM.
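The substitution described above can be illustrated with a small sketch. How FAMawd derives its weights from a category's shape is specific to the paper, so the weight vector below is a placeholder; the point is only the form of the distance measure.

```python
def weighted_l1(x, y, w):
    """Weighted L1 distance: sum_i w_i * |x_i - y_i|.
    With all weights equal to 1 this reduces to the plain L1 norm,
    mirroring the special parameter setting that recovers FAM."""
    return sum(wi * abs(xi - yi) for xi, yi, wi in zip(x, y, w))
```

Unequal weights bias the distance, and hence a category's expansion during learning, toward some directions over others.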
Artificial neural networks are adaptive methods that can be trained to approximate a functional relationship implicitly encoded in training data. The large variety of neural network types (e.g. linear versus non-linear) gives rise to fundamental questions about the appropriateness of data pre-processing techniques, training methodologies, the resulting neural network topology, and possible interdependencies among them. The a posteriori interpretation of the numerical results suggests guidelines for neural network use in engineering applications. Data pre-processing techniques are a powerful means of pre-structuring the problem setting of function approximation through an adaptive training procedure. Integral transforms in particular may, if carefully selected, change the nature of the training problem significantly without loss of generality, and represent an excellent opportunity to incorporate additional knowledge about the process to improve the training and the interpretation of results. Some numerical examples from engineering domains are used to illustrate the theoretical arguments in a practical setting.
Compression of digital images has been an important subject of research for several decades, and a vast number of techniques have been proposed. In particular, the possibility of image compression using Neural Networks (NNs) has been considered by many researchers in recent years, and several Feed-forward Neural Networks (FNNs) have been proposed with promising reported experimental results. The Constructive One-Hidden-Layer Feedforward Neural Network (OHL-FNN) is one such architecture. At previous SPIE conferences, we proposed a new constructive OHL-FNN using Hermite polynomials for regression and recognition problems, and good experimental results were demonstrated. In this paper, we first modify and then apply the proposed OHL-FNN to compress still and moving images, and investigate its performance in terms of both training and generalization capabilities. Extensive experimental results for still images (Lena, Lake, and Girl) and moving images (a football game) are presented. The results reveal that the performance of the constructive OHL-FNN using Hermite polynomials is quite good for both still and moving image compression.
Given a finite collection of classifiers trained on two-class data, one wishes to fuse the classifiers to form a new classifier with improved performance. Typically, the fusion is done at the output level using logical ANDs and ORs. The proposed fusion is based on the location of the feature vector with respect to the expertise sets and confusion sets of the classifiers. Given a feature vector x, if any one of the classifiers is an expert on x, then the fusion rule should be an OR. If the classifiers are confused at x, then the fusion rule should be defined in such a way as to reflect this confusion or uncertainty. We present a fusion rule based upon the confusion sets as well as the expertise sets. We believe that this fusion rule will produce classifiers that perform better than those produced by other fusion rules.
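A minimal sketch of such a set-based fusion rule is shown below. Representing expertise and confusion sets as explicit Python sets, and abstaining (returning None) when every classifier is confused, are simplifying assumptions of this sketch rather than the paper's exact rule.

```python
def fuse(x, classifiers, expertise_sets, confusion_sets):
    """Set-based fusion: OR over any classifiers that are experts at x;
    abstain (None) when all classifiers are confused at x; otherwise
    fall back to an OR over all classifiers."""
    experts = [c for c, e in zip(classifiers, expertise_sets) if x in e]
    if experts:
        return int(any(c(x) for c in experts))       # OR over the experts
    if all(x in conf for conf in confusion_sets):
        return None                                  # uncertainty: no confident call
    return int(any(c(x) for c in classifiers))       # fallback OR
```

The abstention branch is one way to "reflect this confusion or uncertainty"; a soft confidence score would be another.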
An intelligent agent---defined as an autonomous, adaptive, cooperative computer program---must credibly represent its expertise in negotiations with peer agents. Given an agent-based classifier, the determination of where in the domain the classifier is an expert must be explicitly stated. Likewise, where the classifier is confused should also be represented. Currently, an error measure provides an estimate of the relative size of the expertise and confusion sets, but error does not offer a distinct opinion on an untruthed feature vector's membership---i.e., whether its classification is based on specific information, conjecture or chance. We propose the theory for estimating the complete membership of a classifier's expertise sets and confusion sets. From these sets, we construct a 4-value classifier that hypothesizes for each new feature vector whether its classification can be made confidently or not. Examples are given that demonstrate the utility of this theory using multilayer perceptrons.
The architecture of an artificial neural network has a significant influence on its performance. For a given problem, the proper architecture is usually found by trial and error, an approach that is time consuming and may not always produce the optimal network. For this reason, we apply evolutionary computation, such as a genetic algorithm, to automate the design of the network's structure, drawing on the biological inspiration behind neural networks to adapt successfully to a varying input environment. Moreover, we examine the performance of combining multiple evolving networks, which more closely models the neurophysiology of the human brain. In the combining module, either all individual networks or selected individual networks in the population are used. A dynamic data set is also used to provide the networks with diversity and generalization. In this model, each evolving individual network is designed to have a specific feature set and neuron connection links for the given data. The results are then combined through the combining module to improve on the generalization performance of a single network. The Iris and Australian credit data sets are used in the experiments.
This paper presents a tree-based search method to speed up the encoding process for vector quantization. The method is especially designed for very large codebooks and is based on a local search rather than a global search over the whole feature space. The relations between the proposed method and several existing fast algorithms are discussed. Simulation results demonstrate that, with little preprocessing and memory cost, the encoding time of the new algorithm is reduced significantly while encoding quality remains the same as that of other existing fast algorithms.
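One common way a tree over the codebook avoids a global search is a tree-structured descent over centroids. The sketch below illustrates that general idea only; it is not the paper's specific local-search algorithm, and the dict-based tree layout is an assumption.

```python
def sqdist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def tree_encode(x, node):
    """Descend a binary tree of centroids, taking the nearer child at
    each level; O(log N) comparisons instead of a full O(N) search
    over every codeword."""
    while "children" in node:
        left, right = node["children"]
        node = left if sqdist(x, left["centroid"]) <= sqdist(x, right["centroid"]) else right
    return node["index"]
```

Each internal node stores a representative centroid; leaves hold the actual codeword indices, so encoding cost grows with tree depth rather than codebook size.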
In this paper we sketch some technical details of an FM sub-carrier technology called Multi Purpose Radio Communication Channel (MPRC). This technology currently delivers data at a maximum rate of around 40 kbit/s using a proprietary codec algorithm: the Subsidiary Communication Channel (SCC). A core device of this codec is a DWT compressor with proprietary pre-processing, constituted by a neural self-adapting filter, the Dynamic Perceptron Algorithm (DPA), able to detect edges and extract objects from the moving-image flow, so as to optimize the overall compression rate and image quality. As a result it is possible to obtain video transmission in QCIF format at roughly 8-12 fps using 35 kHz of the 100 kHz available to a commercial FM radio station in Europe. This allows video to be transmitted on FM radio together with the usual radio broadcasting. If instead we use all the available 100 kHz, we obtain, after the overhead of the error-protection protocol, a channel for compressed video transmission of about 113 kbit/s, allowing high-quality 640x480 (zoomed or not) video images.
In this paper we present an encryption module included in the Subsidiary Communication Channel (SCC) system we are developing for video-on-FM radio broadcasting. This module encrypts the broadcaster's video-image archive and real-time database with a symmetric key, and the video broadcast to final users with an asymmetric key. The module includes our proprietary Techniteia Encryption Library (TEL), which is already running successfully and securing several e-commerce portals in Europe. TEL is written in ANSI C for easy porting to all main platforms and is optimized for real-time applications. It is based on the Blowfish encryption algorithm and is characterized by a physically separated sub-module for the automatic generation/recovery of the variable sub-keys of the Blowfish algorithm. In this way, different parts of the database are encrypted with different keys, both in space and in time, to ensure optimal security.
This paper presents preliminary results for a technique that aids the analysis of skin cancer using a vision system. The technique starts by learning regions of interest (ROIs) in the image; these ROIs are then analyzed to extract the characteristics of the image. A global population is created from the extracted data for reference and comparison. For each new image, a linear neural system is used to identify candidate ROIs that may correspond to skin cancer. The system is able to estimate and learn from new ROIs to improve its results. Only the vision system is used to acquire the image, with illumination control to improve image capture; the results are compared against other analyzed skin cancer images.
The modality and features of gray differential equations, and the features of their differential parameters, have been studied. The gray attribute of the differential-equation parameters has also been analyzed. Based on the study of one-dimensional gray problem modeling and neural network modeling, a method of whitening the parameters of a gray differential equation using a gray neural network model (GNNM) is put forward. Furthermore, the two-dimensional gray problem is also studied and a corresponding GNNM is built.
Adaptive control of nonreversible SISO nonlinear dynamic systems using neural networks (NNs) is considered a hard problem. After analyzing why this type of system is hard to control, this paper presents a fuzzy neural network (FNN) based on hyper-cylinder clustering, together with its algorithm. A theorem shows that, when the method is used to control the system described above, the static error can be made arbitrarily small as long as the hyper-cylinder parameter (delta) is small enough. The feasibility and control effectiveness of the method are examined through a simulation example of a nonlinear dynamic system.
In this paper a new watermarking algorithm for digital images operating in the frequency domain is presented: a sequence of pseudo-random real numbers is embedded in a selected set of DCT coefficients. After embedding, the watermark is adapted to the image to be signed by exploiting the masking characteristics of the Human Visual System in order to achieve watermark invisibility without diminishing its robustness. Experimental results demonstrate that the watermark is robust to several signal processing techniques and geometric distortions, including JPEG compression, low pass and median filtering, histogram equalization and stretching, Gaussian noise addition and cropping.
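Embedding a pseudo-random sequence in selected DCT coefficients is commonly done multiplicatively, v'_i = v_i(1 + alpha * x_i), with detection by correlation; the sketch below follows that common scheme, which may differ in detail from the paper's algorithm (in particular, the HVS-based masking step is omitted here).

```python
def embed(coeffs, watermark, alpha=0.1):
    """Multiplicative spread-spectrum embedding into selected DCT
    coefficients: v'_i = v_i * (1 + alpha * x_i)."""
    return [v * (1 + alpha * x) for v, x in zip(coeffs, watermark)]

def correlation(coeffs, watermark):
    """Correlation detector: a large value relative to a threshold
    suggests the watermark is present."""
    return sum(c * x for c, x in zip(coeffs, watermark)) / len(watermark)
```

Scaling the perturbation by the coefficient magnitude is what lets the mark hide in visually busy regions while surviving moderate distortion.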
In this paper, we describe a watermarking technique for hiding a confidential two-dimensional binary watermark, such as a company logo, in a still image. Our technique is applied to the frequency domain of the image obtained by the two-dimensional DCT. The watermark is embedded with a secret key into the low frequencies and stored according to a zigzag format. The extraction of the watermark can be performed without knowledge of the original image, but the correct secret key is needed. Finally, since a transform-domain algorithm is used to encode the watermark information, the watermark is robust against JPEG compression and ordinary image processing.
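The zigzag ordering of DCT coefficients referred to above is the standard JPEG-style scan over anti-diagonals; a small helper that generates it (the function name is illustrative) is:

```python
def zigzag_indices(n):
    """(row, col) positions of an n x n block in JPEG zigzag order:
    anti-diagonals by increasing index sum, alternating direction
    (down-left on odd sums, up-right on even sums)."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))
```

Walking the block in this order visits the low-frequency coefficients first, which is where the scheme above places the watermark bits.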
Automatic recognition of frog vocalization is considered a valuable tool for a variety of biological research and environmental monitoring applications. In this research an automatic monitoring system, which can recognize the vocalizations of four species of frogs and can identify different individuals within the species of interest, is proposed. For the desired monitoring system, species identification is performed first with the proposed filtering and grouping algorithm. Individual identification, which can estimate frog population within the specific species, is performed in the second stage. Digital signal pre-processing, feature extraction, dimensionality reduction, and neural network pattern classification are performed step by step in this stage. Wavelet Packet feature extraction together with two different dimension reduction algorithms are synergistically integrated to produce final feature vectors, which are to be fed into a neural network classifier. The simulation results show the promising future of deploying an array of continuous, on-line environmental monitoring systems based upon nonintrusive analysis of animal calls.