Proc. SPIE. 7347, Evolutionary and Bio-Inspired Computation: Theory and Applications III
KEYWORDS: Statistical analysis, Data modeling, Data storage, Process control, Feature selection, Analog electronics, Algorithm development, Statistical modeling, Facility engineering, Image information entropy
In evolutionary learning, the <i>sine qua non</i> is evolvability, which requires heritability of fitness and a balance between
exploitation and exploration. Unfortunately, commonly used fitness measures, such as root mean squared error (RMSE),
often fail to reward individuals whose presence in the population is needed to explain important data variance; and
indicators of diversity are generally not only incommensurate with those of fitness but also essentially arbitrary. Thus,
owing to poor scaling, deception, and similar effects, individuals of apparently high fitness in early generations may not contain
the building blocks needed to evolve optimal solutions in later generations. To reward individuals for their potential
incremental contributions to the solution of the overall problem, heritable information-theoretic functionals are
developed that incorporate diversity considerations into fitness, explicitly identifying building blocks suitable for
recombination (e.g., for non-random mating). Algorithms for estimating these functionals from either discrete or
continuous data are illustrated by application to input selection in a high dimensional industrial process control data set.
Multiobjective information-theoretic ensemble selection is shown to avoid some known feature-selection pitfalls.
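The abstract's information-theoretic input selection can be illustrated, for the discrete case only, by a plug-in mutual-information estimator that ranks candidate inputs by their shared information with a target. This is a minimal sketch of the general idea, not the paper's algorithm; the function names and toy data are illustrative assumptions.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))       # empirical joint counts
    px = Counter(xs)                 # empirical marginal counts of X
    py = Counter(ys)                 # empirical marginal counts of Y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts folded in
        mi += (c / n) * log2(c * n / (px[x] * py[y]))
    return mi

def rank_inputs(features, target):
    """Rank candidate inputs (name -> column) by estimated MI with the target."""
    scores = {name: mutual_information(col, target)
              for name, col in features.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy data (hypothetical): f1 is a copy of the target, f2 is independent of it.
y  = [0, 0, 1, 1, 0, 0, 1, 1]
f1 = [0, 0, 1, 1, 0, 0, 1, 1]   # fully informative -> MI = 1 bit
f2 = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of y  -> MI = 0 bits
ranking = rank_inputs({"f1": f1, "f2": f2}, y)
```

For continuous process-control data, as in the abstract's industrial application, a practical estimator would instead discretize the variables or use a nearest-neighbor estimator rather than this raw plug-in form, which is biased on small samples.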