Evolutionary computation (EC) techniques in design optimization such as genetic algorithms (GA) or efficient global
optimization (EGO) require an initial set of data samples (design points) to start the algorithm. They are obtained by
evaluating the cost function at selected sites in the input space. A two-dimensional input space can be sampled using a
Latin square, a statistical sampling technique which samples a square grid such that there is exactly one sample in each
row and each column. The Latin hypercube generalizes this idea to any number of dimensions. However, a standard random
Latin hypercube can produce initial data sets that are highly correlated and lack good space-filling
properties. Techniques exist that address these issues; we describe and use one such technique in this paper.
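As a concrete illustration (a plain random Latin hypercube in the unit hypercube, not the specific space-filling variant used in this paper), a minimal Python sketch:

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random.Random(0)):
    """Draw a random Latin hypercube sample in the unit hypercube.

    Each dimension is split into n_samples equal strata; every stratum
    contains exactly one sample, so each 1-D projection is stratified.
    """
    # One independent permutation of strata per dimension.
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        columns.append(strata)
    # Place each point uniformly within its assigned stratum per dimension.
    return [[(columns[d][i] + rng.random()) / n_samples
             for d in range(n_dims)]
            for i in range(n_samples)]
```

Note that this basic version gives no correlation or space-filling guarantees beyond the one-sample-per-stratum property; the variant used in the paper additionally addresses those issues.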
EGO is an evolutionary, data-adaptive algorithm which can be useful for optimization problems with expensive cost
functions. Many antenna design problems qualify since complex computational electromagnetics (CEM) simulations
can take significant resources. This makes evolutionary algorithms such as genetic algorithms (GA) or particle swarm
optimization (PSO) problematic, since many iterations over large populations are required. In this paper we discuss multiparameter
optimization of a wideband, single-element antenna over a metamaterial ground plane and the interfacing of
EGO (optimization) with a full-wave CEM simulation (cost function evaluation).
Efficient Global Optimization (EGO) minimizes the number of expensive cost function evaluations by correlating previously evaluated
parameter sets with their respective solutions to model the optimization space. For optimizations requiring destructive testing or lengthy
simulations, this computational overhead represents a desirable tradeoff. However, the inspection of the predictor space
to determine the next evaluation point can be a time-intensive operation. Although the DACE predictor may be
evaluated by exhaustive sampling when the number of parameters is small, this method does not scale to higher dimensions. We apply
EGO here to the 11-dimensional optimization of a wide-band fragmented patch antenna and present an alternative
genetic algorithm approach for selecting the next evaluation point. We compare results achieved with EGO on this
optimization problem to previous results achieved with a genetic algorithm.
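The selection step can be sketched as follows, assuming a predictor that returns a mean and standard deviation at a trial point (the kriging model itself is omitted); the GA shown is a deliberately minimal truncation-selection scheme, not the specific implementation used in the paper:

```python
import math
import random

def expected_improvement(mu, s, f_min):
    """Expected improvement of a Gaussian predictor N(mu, s^2) below f_min."""
    if s <= 0.0:
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / s
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_min - mu) * cdf + s * pdf

def ga_argmax_ei(predict, f_min, n_dims, rng=random.Random(1),
                 pop=40, gens=30, sigma=0.1):
    """Tiny elitist GA searching the unit cube for the point that maximizes
    expected improvement under `predict(x) -> (mu, s)`."""
    population = [[rng.random() for _ in range(n_dims)] for _ in range(pop)]

    def fitness(x):
        mu, s = predict(x)
        return expected_improvement(mu, s, f_min)

    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]          # truncation selection
        children = [[min(1.0, max(0.0, g + rng.gauss(0.0, sigma)))
                     for g in p] for p in parents]  # Gaussian mutation
        population = parents + children          # elitist replacement
    return max(population, key=fitness)
```

Because the GA evaluates only the cheap predictor, not the expensive cost function, many candidate points can be screened per EGO iteration even in higher dimensions.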
Efficient Global Optimization (EGO) is a competent evolutionary algorithm which can be useful for problems with
expensive cost functions [1,2,3,4,5]. The goal is to find the global minimum using as few function evaluations as
possible. Our research indicates that EGO requires far fewer evaluations than genetic algorithms (GAs). However, neither
algorithm always drills down to the absolute minimum; therefore, the addition of a final local search technique is
indicated. In this paper, we introduce three "endgame" techniques. The techniques can improve optimization efficiency
(fewer cost function evaluations) and, if required, they can provide very accurate estimates of the global minimum. We
also report results using a different cost function than the one previously used [2,3].
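As an illustration of the general idea (a basic compass/pattern search, not necessarily one of the three techniques introduced in the paper), a local "endgame" refinement might look like:

```python
def compass_search(f, x0, step=0.25, tol=1e-6, shrink=0.5):
    """Simple compass (pattern) search: probe +/- step along each coordinate;
    move to the first improvement found, otherwise shrink the step.
    Returns the refined point, its cost, and the evaluation count."""
    x = list(x0)
    fx = f(x)
    n_evals = 1
    while step > tol:
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[d] += delta
                ft = f(trial)
                n_evals += 1
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= shrink  # no improving direction: refine the mesh
    return x, fx, n_evals
```

Seeded with the best point found by the global algorithm, such a search can drill down to an accurate estimate of the minimum with relatively few additional evaluations.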
Chromosome design has been shown to be a crucial element in developing genetic algorithms which approach global
solutions without premature convergence. Positioning highly correlated, mutually relevant parameters consecutively
promotes the formation of genetic building blocks that are likely to persist across recombination and provide
genetic inheritance. Incorporating positional gene relevance is challenging, however, in multi-dimensional design
problems. We present a hybrid chromosome designed for optimizing a fragmented patch antenna which combines
linear and two-dimensional gene representations. We compare previous results obtained with a linear chromosome to
solutions obtained with this new hybrid representation.
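A minimal sketch of such a hybrid representation follows; the class name, grid layout, and crossover operators are illustrative assumptions, not the paper's actual encoding:

```python
import random

class HybridChromosome:
    """Hybrid chromosome: a 2-D bit grid (e.g. a patch metallization pixel
    map) plus a linear list of real-valued genes (e.g. feed position).
    Illustrative only; the real encoding is problem-specific."""

    def __init__(self, grid, linear):
        self.grid = grid      # list of rows of 0/1 pixels
        self.linear = linear  # list of floats in [0, 1]

    def crossover(self, other, rng=random.Random(2)):
        rows, cols = len(self.grid), len(self.grid[0])
        # 2-D part: swap a random rectangular block, so spatially adjacent
        # pixels are inherited together as a two-dimensional building block.
        r0, r1 = sorted(rng.sample(range(rows + 1), 2))
        c0, c1 = sorted(rng.sample(range(cols + 1), 2))
        child_grid = [row[:] for row in self.grid]
        for r in range(r0, r1):
            child_grid[r][c0:c1] = other.grid[r][c0:c1]
        # Linear part: ordinary single-point crossover.
        cut = rng.randrange(1, len(self.linear)) if len(self.linear) > 1 else 0
        child_linear = self.linear[:cut] + other.linear[cut:]
        return HybridChromosome(child_grid, child_linear)
```

The design choice here is that recombination respects the geometry of the 2-D genes: a block crossover preserves contiguous pixel regions that a purely linear chromosome would tend to break apart.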
Efficient Global Optimization (EGO) is a competent evolutionary algorithm suited for problems with limited design
parameters and expensive cost functions. Many electromagnetics problems, including some antenna designs, fall
into this class, as complex electromagnetics simulations can take substantial computational effort. This makes simple
evolutionary algorithms such as genetic algorithms or particle swarms very time-consuming for design optimization, as
many iterations of large populations are usually required. When physical experiments are necessary to perform
tradeoffs or to determine effects which cannot be simulated, use of these algorithms is simply not practical due to
the large number of measurements required. In this paper we first present a brief introduction to the EGO algorithm.
We then present the parasitic superdirective two-element array design problem and results obtained by applying EGO to
obtain the optimal element separation and operating frequency to maximize the array directivity. We compare these
results to both the optimal solution and results obtained by performing a similar optimization using the Nelder-Mead
downhill simplex method. Our results indicate that, unlike the Nelder-Mead algorithm, the EGO algorithm did not
become trapped in local minima but instead located the region of the true global minimum. However, our implementation
did not always drill down to the precise minimum, and the addition of a local search technique appears to be indicated.
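For reference, the overall EGO loop can be outlined as below. The inverse-distance surrogate is a crude stand-in for the DACE/kriging model actually used, candidate screening replaces a proper inner optimization, and all names are illustrative:

```python
import math
import random

def ego_sketch(cost, bounds, n_init=6, n_iter=10, rng=random.Random(3)):
    """Skeleton of an EGO-style loop: evaluate an initial design, fit a
    cheap surrogate, then repeatedly evaluate the candidate with maximum
    expected improvement. bounds: list of (lo, hi) per dimension."""
    def rand_x():
        return [lo + rng.random() * (hi - lo) for lo, hi in bounds]

    X = [rand_x() for _ in range(n_init)]  # initial design points
    y = [cost(x) for x in X]               # expensive evaluations

    for _ in range(n_iter):
        f_min = min(y)

        def predict(x):
            # Crude surrogate: inverse-distance-weighted mean; uncertainty
            # taken to grow with distance from the nearest evaluated point.
            d = [math.dist(x, xi) for xi in X]
            if min(d) < 1e-12:
                return y[d.index(min(d))], 0.0
            w = [1.0 / di ** 2 for di in d]
            mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
            return mu, min(d)

        def ei(x):
            mu, s = predict(x)
            if s <= 0.0:
                return 0.0
            z = (f_min - mu) / s
            return ((f_min - mu) * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
                    + s * math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi))

        # Next expensive evaluation: candidate with maximum EI.
        x_next = max((rand_x() for _ in range(200)), key=ei)
        X.append(x_next)
        y.append(cost(x_next))

    best = min(range(len(y)), key=lambda i: y[i])
    return X[best], y[best]
```

The structure mirrors the behavior reported above: expected improvement balances exploiting the predicted minimum against exploring uncertain regions, which is what keeps the search from stalling in a local minimum, while final precision is left to a follow-on local search.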