Self-adaptation in evolutionary algorithms concerns processes in which individuals incorporate information on how to search for new individuals. Instead of detailing the means for searching the space of possible solutions a priori, random variation is applied both to the search of the space itself and to the strategies used to search that space. In one common implementation, each individual in the population is represented as a pair of vectors (x, σ), where x is the candidate solution to an optimization problem scored by a function f(x), and σ is the so-called strategy parameter vector that influences how offspring will be created from the individual. Typically, σ describes a variance or covariance matrix for Gaussian mutations. Experimental evidence suggests that the elements of σ can sometimes become too small to explore the given search space adequately. The evolutionary search then stagnates until the elements of σ grow sufficiently large as a result of random variation. Several methods have been offered to remedy this situation. This paper reviews recent results with one such method, which associates multiple strategy parameter vectors with a single individual. A single strategy vector is active at any time and dictates how offspring will be generated. Experiments on four 10-dimensional benchmark functions are reviewed, in which the number of strategy parameter vectors is varied over 1, 2, 3, 4, 5, 10, and 20. The results indicate advantages to using multiple strategy parameter vectors. Furthermore, the relationship between the mean best result after a fixed number of generations and the number of strategy parameter vectors can be determined reliably in each case.
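As a minimal sketch of the idea (not the authors' exact implementation), the following evolves a single individual that carries several log-normally self-adapted strategy vectors, only one of which is active at a time. The objective function, selection scheme, and switching rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # simple benchmark objective
    return np.sum(x ** 2)

dim, n_strategies, generations = 10, 3, 200
tau = 1.0 / np.sqrt(2.0 * np.sqrt(dim))   # standard ES learning rate

# individual: solution x, strategy matrix sigmas, index of the active vector
x = rng.uniform(-5, 5, dim)
sigmas = np.full((n_strategies, dim), 1.0)
active = 0

for g in range(generations):
    # (1+1)-style reproduction: mutate the active strategy vector
    # log-normally, then mutate the solution with the new step sizes
    child_sig = sigmas.copy()
    child_sig[active] *= np.exp(tau * rng.standard_normal(dim))
    child_x = x + child_sig[active] * rng.standard_normal(dim)
    if sphere(child_x) <= sphere(x):          # keep the better individual
        x, sigmas = child_x, child_sig
    # occasionally switch which strategy vector is active
    if rng.random() < 0.1:
        active = rng.integers(n_strategies)

print("best f(x):", sphere(x))
```

The point of carrying multiple vectors is visible here: if the active step sizes collapse, a dormant vector with larger elements can resume exploration after a switch.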
In multi-objective optimization (MOO) problems we need to optimize several, possibly conflicting, objectives. For instance, in manufacturing planning we might want to minimize cost and production time while maximizing product quality. We propose the use of evolutionary algorithms (EAs) to solve these problems. Solutions are represented as individuals in a population and are assigned scores according to a fitness function that determines their relative quality. Strong solutions are selected for reproduction and pass their genetic material to the next generation; weak solutions are removed from the population. In MOO problems, the fitness function is vector-valued, i.e. it returns a value for each objective. Therefore, instead of a global optimum, we try to find the Pareto-optimal or non-dominated frontier. We use multi-sexual EAs with as many genders as optimization criteria. We have created new crossover and gender assignment functions, and experimented with various parameters to determine the best setting (the one yielding the highest number of non-dominated solutions). These experiments are conducted using a variety of fitness functions, and the algorithms are then evaluated on a flexible manufacturing problem with total cost and time minimization objectives.
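For reference, here is a minimal sketch of the non-dominated filtering step common to such algorithms, assuming all objectives are minimized; the multi-sexual crossover and gender assignment functions themselves are not reproduced here.

```python
import numpy as np

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated(scores):
    """Return indices of the Pareto-optimal rows of an (n, m) score array."""
    front = []
    for i, a in enumerate(scores):
        if not any(dominates(b, a) for j, b in enumerate(scores) if j != i):
            front.append(i)
    return front

# e.g. two minimization objectives: cost and production time
scores = np.array([[3.0, 10.0], [2.0, 12.0], [4.0, 9.0], [3.5, 11.0]])
print(non_dominated(scores))  # -> [0, 1, 2]; [3.5, 11.0] is dominated by [3.0, 10.0]
```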
We discuss a successful application of evolutionary algorithms and femtosecond pulse-shaping technology to the coherent control of quantum phenomena. After a brief review of the field of quantum control, we show how evolutionary algorithms provide an effective and, so far, the only general solution to the problem of managing the complex interplay of interference effects which characterize quantum phenomena. A representative list of experimental results is presented, and some directions for future developments are discussed. The success of evolutionary algorithms in quantum control is seen as a significant step in the evolution of computational intelligence, from evolutionary algorithms, to evolutionary programming, to evolutionary engineering, whereby a hardware system organizes itself and evolves online to achieve a desired result.
We describe a new mobile robotics platform specifically designed for the implementation and testing of neuromorphic vision algorithms in unconstrained outdoor environments. The new platform includes significant computational power (four 1.1GHz CPUs with gigabit interconnect), a high-speed four-wheel-drive chassis, a standard Linux operating system, and a comprehensive toolkit of C++ vision classes. The robot is designed with two major goals in mind: real-time operation of sophisticated neuromorphic vision algorithms, and off-the-shelf components to ensure rapid technological evolvability. Finally, a preliminary embedded neuromorphic vision architecture that includes attentional, gist/layout, object recognition, and high-level decision subsystems is described.
Mobile robotic systems with a wide variety of sensors, actuators, and onboard high-speed processors are readily available commercially. The information processing capabilities of these systems, however, presently lack the robustness and sophistication of biological systems. One challenge is that the high-dimensional input signals from the sensors need to be converted into a smaller number of perceptually relevant features. This dimensionality reduction can be performed on static signals such as a single image or on dynamic data such as a speech spectrogram. This paper discusses several models for dimensionality reduction that differ only in the constraints placed on the variables and parameters of the models. In particular, nonnegativity constraints are shown to give rise to distributed yet sparse representations of both static and dynamic data.
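One widely used nonnegativity-constrained model of this kind is non-negative matrix factorization; a minimal sketch with Lee-Seung multiplicative updates follows (the models in the paper may differ in detail).

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Factor nonnegative V (d x n) into W (d x r) @ H (r x n), W, H >= 0."""
    rng = np.random.default_rng(0)
    d, n = V.shape
    W = rng.random((d, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates; nonnegativity is preserved
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((64, 100)))  # e.g. columns of image patches
W, H = nmf(V, r=8)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```

Because every entry stays nonnegative, each column of V is explained as an additive combination of a few parts, which is what produces the distributed yet sparse representations described above.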
Hybrid soft computing models, based on neural, fuzzy, and evolutionary computation technologies, have been applied to a large number of classification, prediction, and control problems. This paper focuses on one such application and presents a systematic process for building a predictive model to estimate time-to-breakage and provide a web break tendency indicator for the wet end of paper-making machines. Through successive refinement of information gleaned from sensor readings via data analysis, principal component analysis (PCA), an adaptive neuro-fuzzy inference system (ANFIS), and trending analysis, a break tendency indicator was built. The output of this indicator is the break margin, which is then interpreted using a stoplight metaphor. This interpretation provides a more gradual web break sensitivity indicator, since it uses more classes than a binary indicator. By generating an accurate web break tendency indicator with enough lead time, we help in the overall control of the paper-making cycle by minimizing downtime and improving productivity.
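As a trivial illustration of the stoplight interpretation (the thresholds below are invented for the example, not the paper's calibrated values):

```python
def stoplight(break_margin: float) -> str:
    """Interpret a continuous break-tendency margin with a stoplight metaphor.
    Thresholds are illustrative placeholders."""
    if break_margin > 0.6:
        return "green"    # normal operation
    if break_margin > 0.3:
        return "yellow"   # rising break tendency, adjust wet-end variables
    return "red"          # imminent web break, intervene

print([stoplight(m) for m in (0.9, 0.45, 0.1)])  # ['green', 'yellow', 'red']
```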
A systematic approach is presented to achieve a reliable neural model for microwave active devices with different numbers of training data. The method is implemented for small-signal, bias-dependent modeling of a pHEMT in two different environments, a standard test fixture and the New Generation Quasi-Monolithic Integration Technology (NGQMIT), with different numbers of training data. The errors obtained for different numbers of training data are compared and show that a reliable model is achievable with this method even when the number of training samples is considerably small. The method aims at constructing a model that satisfies the criteria of minimum training error, maximum smoothness (to avoid over-fitting), and simplest network structure.
We introduce an algorithm for classifying time series data. Since our initial application is for lightning data, we call the algorithm Zeus. Zeus is a hybrid algorithm that employs evolutionary computation for feature extraction, and a support vector machine for the final backend classification. Support vector machines have a reputation for classifying in high-dimensional spaces without overfitting, so the utility of reducing dimensionality with an intermediate feature selection step has been questioned. We address this question by testing Zeus on a lightning classification task using data acquired from the Fast On-orbit Recording of Transient Events (FORTE) satellite.
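As a rough illustration of the hybrid structure (not the actual Zeus operators, which are not detailed here), the following evolves a binary feature mask scored by cross-validated SVM accuracy on synthetic data, using scikit-learn.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))               # placeholder time-series features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # synthetic labels

def fitness(mask):
    """Cross-validated SVM accuracy using only the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

# crude elitist evolution of binary feature masks
pop = rng.random((20, X.shape[1])) < 0.5
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]              # keep the best half
    children = parents[rng.integers(10, size=10)].copy()
    children ^= rng.random(children.shape) < 0.05        # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```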
This paper presents a decision architecture algorithm for training neural-equation-based networks to make autonomous, multi-goal-oriented, multi-class decisions. These architectures make decisions based on their individual goals while drawing from the same network-centric feature set. Traditionally, such architectures are composed of neural networks that offer marginal performance due to lack of convergence on the training set. We present an approach for autonomously extracting sample points as I/O exemplars to generate multi-branch, multi-node decision architectures populated by adaptively derived neural equations. To test the robustness of this architecture, open-source data sets in the form of financial time series were used, requiring a three-class decision space analogous to the lethal, non-lethal, and clutter discrimination problem. This algorithm and the results of its application are presented here.
This paper presents local area enhancement of segmented color images obtained from multi-spectral image clustering using FCM (fuzzy c-means). A multi-spectral image with more than three bands must first be reduced to three bands so that the bands can be assigned to red, green, and blue. PCA (Principal Components Analysis) is therefore used to transform the original multi-spectral images into PCA images. The first three components, which retain more than 95% of the information in the original images, are assigned to the red, green, and blue channels, forming an RGB color image. FCM clustering is then applied to each channel of the RGB color image separately. We call this multi-spectral image clustering method the PCA-FCM technique. Applying this technique yields segmented images consisting of separate red, green, and blue channels. Using a histogram equalization algorithm, local area enhancement based on the number of clusters in the segmented image avoids the intensity saturation caused by global area enhancement, and the perceptibility of the color image is clearly improved.
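A minimal sketch of the PCA-FCM pipeline on placeholder data: project a multi-band cube onto its first three principal components as pseudo-RGB, then run fuzzy c-means on each channel separately. The band count and cluster count are illustrative.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100):
    """Fuzzy c-means: X is (n, d); returns (c, d) centers and (n, c) memberships."""
    rng = np.random.default_rng(0)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard membership update: u_ik proportional to d_ik^(-2/(m-1))
        U = d ** (-2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# PCA: project a (n_pixels, n_bands) multi-spectral cube onto 3 components
cube = np.random.default_rng(1).random((1000, 7))     # placeholder: 7-band image
Xc = cube - cube.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
rgb = Xc @ Vt[:3].T                                   # pseudo R, G, B channels

for band in range(3):                                 # cluster each channel separately
    centers, U = fcm(rgb[:, band:band + 1], c=4)
    labels = U.argmax(axis=1)                         # one segmented image per channel
```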
This paper describes how a Genetic Algorithm (GA) based optimization method is used to register two ocular fundus images differing temporally or across modalities. Ocular fundus images of the same eye are generally compared by ophthalmologists to find differences due to the growth of abnormalities in the retina for diagnosis, follow-up, and surgery purposes. Because the images are relatively small and have different geometrical settings, such image pairs cannot be properly compared without registration. In this paper, the registration task is viewed as an optimization problem: a search for the optimal values of the transformation parameters relating the two images. A GA-based technique is applied to the preprocessed, binarized fundus image pair to find the transformation that gives the maximum fitness for matching. A new formulation of the fitness function is proposed to reduce the computation time of the GA while maintaining the required accuracy. Since registration performance depends heavily on how well the image pair was preprocessed to obtain good-quality binary images, the preprocessing methods are also explained. Results show no performance difference when the proposed method is applied to temporal versus multimodal fundus image pairs. The maximum, minimum, and average registration distances between the proposed method and the manual method are 4.27, 1.83, and 3.18 pixels, respectively, over the entire data set of 512×512 image pairs. The computation time is at least three times shorter than that of a previously reported method based on a similar technique.
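A hedged sketch of the optimization view (the paper's fitness formulation and GA operators are not reproduced): a rigid transform (tx, ty, θ) is evolved to maximize the fraction of binarized pixels of one image that land on "on" pixels of the other. All data and parameters below are synthetic placeholders.

```python
import numpy as np

def transform(points, tx, ty, theta):
    """Rigidly transform (n, 2) pixel coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([tx, ty])

def fitness(params, pts_a, img_b):
    """Fraction of transformed binary pixels of image A hitting 'on' pixels of B."""
    p = np.round(transform(pts_a, *params)).astype(int)
    h, w = img_b.shape
    ok = (p[:, 0] >= 0) & (p[:, 0] < h) & (p[:, 1] >= 0) & (p[:, 1] < w)
    return img_b[p[ok, 0], p[ok, 1]].mean() if ok.any() else 0.0

rng = np.random.default_rng(0)
img_b = rng.random((512, 512)) < 0.05                 # placeholder binary fundus image
pts_a = np.argwhere(img_b) + np.array([5, -3])        # pretend A is a shifted copy of B

pop = rng.uniform([-10, -10, -0.1], [10, 10, 0.1], size=(30, 3))
for gen in range(20):
    scores = np.array([fitness(p, pts_a, img_b) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]             # keep best transforms
    mutants = elite[rng.integers(10, size=20)] + rng.normal(0, [1.0, 1.0, 0.02], (20, 3))
    pop = np.vstack([elite, mutants])

print("best (tx, ty, theta):", pop[np.argmax([fitness(p, pts_a, img_b) for p in pop])])
```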
An adaptive optical neuro-computer (ONC) using inexpensive pocket-size liquid crystal televisions (LCTVs) has been developed by graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although this neuro-computer has only 8×8 = 64 neurons, it can easily be extended to 16×20 = 320 neurons. The major advantages of this LCTV architecture, compared with other reported ONCs, are its low cost and operational flexibility. To test its performance, several neural net models are used: interpattern association, hetero-association, and unsupervised learning algorithms. System design considerations and experimental demonstrations are also included.
The joint action of two readily observed effects in solution magnetic resonance, radiation damping and the dipolar field, is shown to generate spatiotemporal chaos in routine experiments. The extreme sensitivity of the chaotic spin dynamics to experimental conditions during the initial evolution period can be used to construct a spin amplifier that enhances sensitivity and contrast in magnetic resonance spectroscopy and imaging. Alternatively, amplification of intrinsic spin noise or of tiny experimental perturbations such as temperature gradient fluctuations leads to signal interferences and highly irreproducible measurements. Controlling the underlying chaotic evolution provides the crucial link between amplifying weak signals and counteracting unwanted signal fluctuations.
Some animals have evolved the ability to use spectral information from the world around them to improve how they live their lives. Over many centuries, we humans have come to understand what color is, why it evolved, and how animals gather the information they need to compute color. As a general rule, once we learn how and why nature does something, it makes sense to incorporate that knowledge into our technology. This paper explores Artificial Color.
We adopt three aspects of Natural Color: (1) the use of color as a discriminant in applications; (2) the special trick nature uses to sense data for color computation, namely two or more sensors with broad, spectrally overlapping bands; and (3) the selection of the spectral shape of those bands to enhance their usefulness for a task. Artificial Color is compared with hyperspectral imaging, and the latter is found wanting.
This paper presents a discussion on constructing a wireless ad hoc network of unattended ground visual sensors. The IEEE 802.11 WLAN standard is used to implement a single-hop ad hoc network because of its simplicity. Bandwidth allocation and traffic control between visual sensors are coordinated by the Medium Access Control (MAC) protocol. A ground visual sensor tower is designed for networking purposes, with specially designed video compression, power management, and network modules to achieve maximum throughput.
The problem of multiextremum optimization is very general in optical design. Many efforts have been applied to finding approaches to its solution, but optical software developers are still far from a universal and reliable method. Solving the problem of finding the optimal angles of rotation of real components is a good way to test different approaches to multiextremum optimization. Compared with the general approach to optical-system design, it is unconstrained optimization with an analytical optimization criterion: the mean-square wavefront deformation. Finding the optimal angles of rotation shares many specific features with the general problem of optical-system design, such as having a large number of minima. In addition, these minima have a special character: they look like "ravines". Common optimization methods (gradient or Newtonian) can easily find the local minimum associated with an initial point, but they lack the ability to jump to another minimum. A genetic algorithm can find some point in the zone of attraction of another minimum, but it gets stuck along a "ravine" bottom line. An adaptive genetic algorithm combined with local optimization methods can find most of the minima.
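A hedged sketch of the general hybrid (not the authors' algorithm): a genetic algorithm jumps between zones of attraction while a local optimizer slides each individual down its "ravine". The Rastrigin test function and all parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f(x):                                    # multiextremum test function (Rastrigin)
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

dim, pop_size = 4, 30
pop = rng.uniform(-5.12, 5.12, (pop_size, dim))
for gen in range(25):
    # local refinement: slide each individual down its "ravine"
    pop = np.array([minimize(f, x, method="Nelder-Mead",
                             options={"maxiter": 50}).x for x in pop])
    scores = np.array([f(x) for x in pop])
    elite = pop[np.argsort(scores)[:10]]
    # GA step: recombine and mutate to jump between zones of attraction
    moms = elite[rng.integers(10, size=20)]
    dads = elite[rng.integers(10, size=20)]
    cross = rng.random((20, dim)) < 0.5
    children = np.where(cross, moms, dads) + rng.normal(0, 0.3, (20, dim))
    pop = np.vstack([elite, children])

print("best minimum found:", min(f(x) for x in pop))
```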
The proliferation of high-speed, low-cost computing systems has caused an explosion of data, leading to the need for ever higher-capacity storage systems and faster data transmission. The field of nuclear physics has felt this need acutely: data can be acquired at rates above 100 Mbps, and the data accumulated by a single collider can exceed hundreds of petabytes. Although much effort has gone into improving data storage, all approaches are severely limited by slow processing, high-capacity memory requirements, and the difficulty of compressing heterogeneous data. Physical Optics Corporation (POC) has developed a preliminary prototype Multimode Intelligent Compression Engine (MICE) that overcomes many of the limitations of lossless data compression techniques for large-scale data storage, retrieval, and processing. The MICE approach combines novel software and hardware. The key MICE innovation is a unique fuzzy-logic artificial intelligence preprocessor that examines the incoming data in sections, on the fly, and applies the optimum compression to each section. No a priori information about the data is required. The unique MICE hardware is based on POC's high-speed, highly parallel processing technology. The hardware is compact and economical, and enables the system to compress data in real time.
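A minimal sketch of the per-section idea, with an important caveat: MICE uses a fuzzy-logic preprocessor to choose the compression for each section, while this toy version simply tries several standard codecs per section and keeps the smallest output. The section size and data are placeholders.

```python
import bz2
import lzma
import zlib

CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def compress_sections(data: bytes, section_size: int = 64 * 1024):
    """Split the stream into sections and pick the best codec for each one."""
    out = []
    for i in range(0, len(data), section_size):
        section = data[i:i + section_size]
        # try every codec on this section and keep the smallest result;
        # a smarter engine would classify the section first instead
        name, blob = min(((n, c(section)) for n, c in CODECS.items()),
                         key=lambda t: len(t[1]))
        out.append((name, blob))
    return out

raw = bytes(range(256)) * 1000                  # placeholder heterogeneous data
for name, blob in compress_sections(raw)[:3]:
    print(name, len(blob))
```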
The run time of an evolutionary algorithm for image registration depends on the time required to evaluate the fitness of a parameter vector during one iteration. This time can be reduced if some preprocessing is employed prior to registration to reduce the images. Two algorithms that can potentially be used for this preprocessing are compared: fractal encoding and a simple segmentation technique. Numerical experiments show that both algorithms perform well and can be successfully applied to image reduction.
We propose the development of a functional system for diagnosing and measuring ocular refractive errors of the human eye (astigmatism, hypermetropia, and myopia) by automatically analyzing images of the ocular globe acquired with the Hartmann-Shack (HS) technique. HS images are input into a system capable of recognizing the presence of a refractive error and outputting a measure of that error. The system should pre-process an image supplied by the acquisition technique and then use artificial neural networks combined with fuzzy logic to extract the necessary information and output an automated diagnosis of the refractive errors that may be present in the ocular globe under examination.
Choosing a control strategy for an unknown process is risky and can set one up for failure. If the process is one-of-a-kind and time constraints are tight, the wrong decision poses an even graver risk to success. On the other hand, a significant reduction in cost and time can also be realized. We selected a fuzzy logic control strategy to run a complex test setup requiring multi-loop temperature feedback control, thermal ramping, mode switching, and temperature profile tracking. Changes were made almost daily in the beginning, some requiring considerable thought and effort to address. Though the system's behavior was unpredictable and nonlinear, fuzzy logic proved to be a remarkably robust and flexible control strategy that worked well for us. It seems applicable to virtually any control problem, simple or complex, small or large, with a high probability of success. This paper discusses the particular control requirements of this test, the technical challenges faced by both the system designer and the instrument/controls designer in getting everything working, how well the system performed, and what was learned from the experience that could be applied to practically any difficult control problem to help decide whether using fuzzy logic makes sense.
It is difficult for traditional algorithms to restore high-frequency information from an undersampled, degraded low-resolution image. Nonlinear algorithms provide a better solution to this problem. As a nonlinear, real-time processing method, an MLP neural network for super-resolution restoration of undersampled, degraded low-resolution images is proposed. Experimental results demonstrate that the proposed approach achieves super-resolution and a well-restored image.
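A minimal structural sketch of the idea using scikit-learn's MLPRegressor: a multilayer perceptron trained to map low-resolution patches to high-resolution patches. The patch sizes, synthetic data, and network shape are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# synthetic training pairs: low-res 4x4 patches -> high-res 8x8 patches
hi = rng.random((500, 8, 8))                           # placeholder high-res patches
lo = hi.reshape(500, 4, 2, 4, 2).mean(axis=(2, 4))     # 2x2 box downsampling

# nonlinear patch upscaler: 16 inputs -> 64 outputs
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(lo.reshape(500, 16), hi.reshape(500, 64))

pred_hi = mlp.predict(lo[:1].reshape(1, 16)).reshape(8, 8)
print("patch MSE:", np.mean((pred_hi - hi[0]) ** 2))
```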
An evolution model for data fusion systems, based on the evolution of the nervous system, is proposed. The evolution of the nervous system and the development of data fusion systems share many characteristics, so it is reasonable to look for guidelines in the theory of the nervous system. For example, data fusion nodes play much the same role as neurons do in the nervous system, so we call them data fusion units. As in the nervous system, the basic evolutionary architectures of a data fusion system span four phases: Chaos (Autonomous), Fully Distributed, Centralized, and Internal Model Based Hierarchical. In the last phase of evolution, the interfaces among units gradually become independent intelligent parts. This provides a more flexible hierarchical data fusion architecture, which makes it possible to simulate the regulation and adaptation mechanisms of the nervous system. Application analyses of these mechanisms in data fusion systems show that this dynamic hierarchical architecture is capable of deciding not only what to fuse and how to fuse, but also when to fuse.
Using a Dammann grating and a back-propagation artificial neural network (BP network), a fast, parallel, non-contact 3-D measurement method based on a stereo vision system is introduced. The optical setup, data acquisition, and methods to raise the training precision of the BP network are discussed in detail. Experiments have been completed, and the results demonstrate the feasibility of the method. With this method, one can obtain object profile information rapidly and process the data almost in parallel, without needing to consider the effect of lens distortion.
One of the main problems in imaging systems is the difficulty of placing the output CCD exactly at the imaging plane. This causes the so-called defocus effect, in which the optical transfer function in the output plane is worse than expected, leading to loss of detail and possible contrast reversal. A good solution to the defocus problem in a specific plane is to place a phase-only filter right after (or before) the lens of the imaging setup; this filter can be designed to cancel the defocusing effect in that plane. Designing a filter that yields good images for a range of planes at different distances from the actual imaging plane requires a more complex approach. In this work the authors introduce a novel method for designing the filter using fuzzy logic principles. The fuzzy logic inference engine accepts as input a set of filters, each designed for good results in a specific region, and combines them to produce a single phase-only filter. The optical transfer function of the combined filter in the various regions is presented to demonstrate the improvement in limiting defocus.
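The inference engine itself is not spelled out here; as a hedged sketch of the combination step only, one can blend region-specific phase-only filters by weighting their unit-magnitude complex fields and keeping the phase of the sum. The weights standing in for fuzzy memberships and the random phase maps below are illustrative assumptions.

```python
import numpy as np

def combine_phase_filters(phases, weights):
    """Blend several phase-only filters into one by weighted averaging of
    their unit-magnitude complex fields, then re-imposing phase-only form."""
    field = sum(w * np.exp(1j * p) for p, w in zip(phases, weights))
    return np.angle(field)   # the combined filter keeps only the phase

# placeholder: three filters, each designed for one defocus region
rng = np.random.default_rng(0)
phases = [rng.uniform(-np.pi, np.pi, (64, 64)) for _ in range(3)]
weights = [0.2, 0.5, 0.3]    # stand-ins for fuzzy memberships of each region
combined = combine_phase_filters(phases, weights)
print(combined.shape, combined.min(), combined.max())
```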
The allocation of CPU time and memory resources is a well-known problem in organizations with a large number of users and a single mainframe. Usually the amount of resources given to a single user is based on that user's own statistics rather than on the statistics of the entire organization; as a result, usage patterns are not well identified and the allocation system is wasteful. In this work the authors suggest a fuzzy-logic-based algorithm to optimize the distribution of CPU and memory among users based on their usage history. The algorithm treats heavy users and light users separately, since they exhibit different patterns. The result is a set of rules, generated by the fuzzy logic inference engine, that allows the system to use its computing capacity in an optimized manner. Test results on data taken from the Faculty of Engineering at Tel Aviv University demonstrate the abilities of the new algorithm.
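As a hedged sketch of the rule-based idea (the paper's actual rule base and membership functions are not reproduced here), the following maps a user's historical CPU load to an allocated share via triangular memberships and weighted-average defuzzification.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def cpu_share(avg_load):
    """Map a user's historical CPU load (0..1) to an allocated share (0..1)."""
    light = tri(avg_load, -0.4, 0.0, 0.4)    # fuzzify the usage history
    medium = tri(avg_load, 0.2, 0.5, 0.8)
    heavy = tri(avg_load, 0.6, 1.0, 1.4)
    # rule consequents (illustrative): light users get small guaranteed
    # slices, heavy users get large but capped slices; defuzzify by
    # weighted average of the consequent centroids
    w = np.array([light, medium, heavy])
    slices = np.array([0.1, 0.4, 0.8])
    return float(np.sum(w * slices) / (np.sum(w) + 1e-12))

for load in (0.05, 0.5, 0.95):
    print(load, "->", round(cpu_share(load), 3))
```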
Image blur strongly degrades object recognition. We propose a mechanism to reduce defocus blur by reducing the aperture of the camera lens, and show that it leads to far more robust recognition. The recognition is demonstrated via a Neural Network architecture that we previously proposed for blurred face recognition.