
Framework for Particle Swarm Optimization with Surrogate Functions

Technical Report TUD-CS-2009-0139
August 26, 2009

Matthew D. Parno, Kathleen R. Fowler, and Thomas Hemker

Department of Mathematics, Clarkson University, Potsdam, NY 13699-5815, United States
(parnomd@clarkson.edu, kfowler@clarkson.edu)

Simulation, Systems Optimization and Robotics, Department of Computer Science, Technische Universität Darmstadt, Hochschulstraße 10, 64289 Darmstadt, Germany
(hemker@sim.tu-darmstadt.de)
Abstract
Particle swarm optimization (PSO) is a population-based, heuristic minimization technique that is based on social behavior. The method has been shown to perform well on a variety of problems, including those with nonconvex, nonsmooth objective functions with multiple local minima. However, the method can be computationally expensive in that a large number of function calls is required to advance the swarm at each optimization iteration. This is a significant drawback when function evaluations depend on output from an off-the-shelf simulation program, which is often the case in engineering applications. To this end, we propose an algorithm that incorporates surrogate functions, which serve as a stand-in for the expensive objective function, within the PSO framework. We present numerical results to show that this hybrid approach can improve algorithmic efficiency.
1 Introduction
We propose an algorithmic technique to improve the computational efficiency of a general particle swarm optimization (PSO) algorithm [1]. The motivation for this work is that population-based, heuristic methods are often applied in engineering disciplines to simulation-based design problems. The PSO has been successfully applied to a variety of such problems, including those in [2, 3, 4, 5]. The challenges in these types of problems are inherent in the objective function and constraints, which rely on output from a black-box simulator. A wide range of applicable derivative-free methods exist for these types of problems. Population-based heuristics are often used because, in the initial stages, their elements are spread widely across the design space to identify promising regions. Although prior knowledge can be seeded into the initial population, these methods do not rely on the initial guesses for starting points that can bias many derivative-free single-search approaches. However, a major flaw is that a large number of function calls is usually required.
Surrogate optimization, in the context considered in this work, means solving surrogate problems generated with "stand-in" functions, which use true objective function values to build an approximation to the original optimization problem. Surrogate function approaches can thus help reduce the number of expensive black-box calls. The key feature of surrogate optimization is that the approach avoids running directly in a loop with the real system that generates the original optimization problem; instead, it calls a surrogate function with a lower computational cost to determine new promising points to test on the real system. One well-known method to construct such surrogate functions is Design and Analysis of Computer Experiments (DACE), as described by Sacks [6]. Various other approximation techniques exist, many of which have been applied for optimization purposes; a short overview is given in, e.g., [7, 8]. Surrogate optimization offers flexibility in both the choice of approximation method and the choice of optimizer. The optimization method can be chosen to match the properties of the generated surrogate functions, so there are almost no restrictions on using a specific method for the arising surrogate problem. For this work, we incorporate a surrogate function to serve as an oracle in the search phase of the PSO algorithm.
We provide background on the PSO algorithm in Section 2 and on functional approximations by DACE in Section
3, as well as details on the implementation in Section 4. In Section 5 we provide promising numerical results on a
set of standard optimization test problems and a groundwater management application proposed in [9] and studied
in [10, 11, 12, 13] as a test problem for derivative-free optimization methods. We end with some concluding remarks and
plans for future work.
2 Particle Swarm Optimization
We pose our optimization problem as

    min_{x ∈ Ω} f(x)    (1)

where the objective function f is defined on some feasible set Ω. The PSO algorithm was first proposed by Kennedy and Eberhart in 1995 [1] and is based on social communication within a group. A general PSO algorithm involves a set of k particles (design points) used to search the design space for a function's global minimum. Each particle moves through the space by updating its position with a velocity that accounts for the best position that particle has found and the best position found by any particle. Mathematically, for i = 1, ..., k, we have

    v_i(t+1) = ω v_i(t) + c_1 r_1 (p_i − x_i) + c_2 r_2 (g − x_i)    (2)
    x_i(t+1) = x_i(t) + v_i(t+1)    (3)

where t is the discrete iteration counter, v_i(t) is the current velocity, v_i(t+1) is the updated velocity, ω is an inertia weighting term, c_1, c_2 are real constants, r_1, r_2 are random numbers in (0, 1], x_i is the current position of the i-th particle, p_i is the best position found by the i-th particle, and g is the best position found by any particle [1]. The random numbers r_1, r_2 in Eq. (2) make this implementation of PSO stochastic. Deterministic implementations exist when r_1 = r_2 = 1 [14].
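For concreteness, the following is a minimal NumPy sketch of the update in Eqs. (2)-(3). The Ackley test objective, the parameter values, and the bounds are illustrative assumptions, not the settings used later in this report.

```python
# Minimal sketch of the inertia-weight PSO update in Eqs. (2)-(3).
import numpy as np

rng = np.random.default_rng(0)

def ackley(x):
    # 2-D Ackley test function; global minimum f(0, 0) = 0
    a, b, c = 20.0, 0.2, 2.0 * np.pi
    d = x.shape[-1]
    return (-a * np.exp(-b * np.sqrt(np.sum(x**2, axis=-1) / d))
            - np.exp(np.sum(np.cos(c * x), axis=-1) / d) + a + np.e)

k, d = 30, 2                         # swarm size and dimension
w, c1, c2 = 0.72, 1.49, 1.49         # inertia/acceleration weights (common choices)
x = rng.uniform(-5.0, 5.0, (k, d))   # particle positions
v = np.zeros((k, d))                 # particle velocities
p, fp = x.copy(), ackley(x)          # personal bests p_i and their values
g = p[np.argmin(fp)]                 # global best g

for t in range(100):
    r1, r2 = rng.random((k, 1)), rng.random((k, 1))
    v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)   # Eq. (2)
    x = x + v                                            # Eq. (3)
    fx = ackley(x)
    better = fx < fp
    p[better], fp[better] = x[better], fx[better]
    g = p[np.argmin(fp)]

print(g, fp.min())   # should approach (0, 0) and 0
```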
Convergence and efficiency are of great concern in the application of PSO, and thus much analysis of the algorithm has been done; see [14, 15, 16, 17], for example. Much of the focus of previous work, specifically [14, 15], has been on altering Eq. (2) to speed up convergence of the PSO algorithm. One alteration proposed by Clerc and Kennedy in [14] removes the inertia weighting term and uses a constriction factor instead. The velocity equation then becomes
    v_i(t+1) = χ [ v_i(t) + c_1 r_1 (p_i − x_i) + c_2 r_2 (g − x_i) ]    (4)

where χ is the constriction factor given by

    χ := 2κ / ( φ − 2 + √(φ² − 4φ) )    (5)

in which κ governs the rate of convergence, κ ∈ (0, 2), and φ = c_1 + c_2. For critically damped convergence, φ = 4, and for overdamped convergence φ > 4 [14]. For this work, we use φ = 4.1.
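As a quick numerical check of Eq. (5), the short sketch below evaluates the constriction factor. The generalized Clerc-Kennedy form with the scaling parameter κ is reconstructed from a damaged equation, so treat it as an assumption; reassuringly, it reproduces the widely cited value χ ≈ 0.7298 for φ = 4.1 and κ = 1.

```python
# Numerical check of the reconstructed constriction factor, Eq. (5).
import math

def constriction(phi, kappa=1.0):
    if phi < 4.0:
        raise ValueError("this closed form assumes phi >= 4")
    return 2.0 * kappa / (phi - 2.0 + math.sqrt(phi * phi - 4.0 * phi))

print(constriction(4.1))        # ~0.7298, the commonly quoted value for phi = 4.1
print(constriction(4.1, 0.7))   # ~0.51, if the kappa = 0.7 of Section 5.1 scales chi
```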
We further speed up the convergence of the PSO by making use of a surrogate function within the algorithmic frame-
work. The surrogate function is used in addition to the objective function to exploit the use of intermediate data points
found by the particles and better predict the objective function behavior. The minimum of the surrogate at each iteration
is used to guide the choice of g for use in Eq. (4). This process is described in more detail in subsequent sections.
3 Surrogate functions by DACE
The optimization method we apply consists of a statistical approximation method that calculates a smooth surrogate function of the original objective function during the first iteration, and afterwards every l iterations.
As a standard approach designed for deterministic black-box optimization problems, design and analysis of computer experiments (DACE), described by Sacks et al. [6], is widely used; it has its origins in the geostatistical Kriging method [18]. A DACE model f̂ is, in its general form, a two-component model,

    f̂(x) = Σ_{j=1}^{m} β_j ψ_j(x) + Z(x).

The first and more global part approximates the global trend of the unknown function f by a linear combination of a parameter vector β ∈ ℝ^m and m given functions ψ_j. Different approaches can be considered for this first part [19]. Universal Kriging predicts the mean of the underlying function, as a realization of a stochastic process, by a linear combination of m known basis functions on the whole domain, while detrended Kriging applies a classical linear regression model. In our application we follow the idea of ordinary Kriging, where a constant mean is assumed on the whole domain, with m = 1 and one real-valued constant β.
The second part, Z(x), guarantees that the DACE model f̂ fits f at a number of points x_i, i = 1, ..., n, from the variable space:

    f(x_i) = f̂(x_i), i = 1, ..., n.

This part is assumed to be a realization of a stationary Gaussian random function with a mean of zero, E[Z(x)] = 0, and a covariance

    Cov[Z(x_i), Z(x_j)] = σ² R(x_i, x_j),

with i ≠ j and i, j ∈ {1, ..., n}. The smoothness and properties such as differentiability are controlled by the chosen correlation function R, in our case simply the product of one-dimensional exponential correlation functions R_j,

    R(x_1, x_2) = Π_{j=1}^{dim(x)} R_j(|x_{1,j} − x_{2,j}|),

as proposed by Sacks et al. in [6].
By maximum likelihood estimation, the process variance σ², the regression parameter β, and the correlation parameter θ of R are estimated to be maximally consistent with the present objective function values to which f̂ has to fit. For a more detailed description of the theory behind this kind of model, we refer the reader to [20, 21, 22].
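The following is a minimal ordinary-Kriging sketch of the model above, assuming NumPy. The correlation parameter θ is fixed by hand rather than fitted by maximum likelihood as the DACE toolbox does, so this illustrates the structure of the predictor rather than replacing [21, 22].

```python
# Ordinary Kriging with a constant mean (m = 1) and a product of
# one-dimensional exponential correlations R_j(d) = exp(-theta * d).
import numpy as np

def corr(A, B, theta=1.0):
    # R(x1, x2) = prod_j exp(-theta * |x1_j - x2_j|)
    diff = np.abs(A[:, None, :] - B[None, :, :])
    return np.exp(-theta * diff.sum(axis=2))

def fit_ordinary_kriging(X, y, theta=1.0, nugget=1e-10):
    n = X.shape[0]
    R = corr(X, X, theta) + nugget * np.eye(n)     # nugget for numerical stability
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = ones @ Rinv @ y / (ones @ Rinv @ ones)  # GLS estimate of the constant mean
    resid = Rinv @ (y - beta)
    def predict(Xnew):
        r = corr(np.atleast_2d(Xnew), X, theta)    # correlations to the sample sites
        return beta + r @ resid                    # interpolates: fhat(x_i) = y_i
    return predict

# Usage: interpolate f(x) = sum(x^2) from 20 scattered samples.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (20, 2))
y = np.sum(X**2, axis=1)
fhat = fit_ordinary_kriging(X, y, theta=2.0)
print(fhat(X[:3]), y[:3])   # reproduces the sampled values (up to the nugget)
```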
Algorithm 1 sPSO algorithm for NLP problems

Require: function to be minimized f(x), maximum number of iterations t_max, guess at minimum fitness f_min, number of particles k, box constraints (x_lb, x_ub), maximum velocity v_max, and the number of iterations between surrogate function refinements l.
Ensure: During all iterations, all pairs (x, f(x)) are saved for use with DACE; finally, the point x* with the lowest obtained objective function value is provided.

 1: t = 0
 2: for i = 1 : k do
 3:   if t = 0 then
 4:     v_i(t+1) = rand(−v_max, v_max), x_i = rand(x_lb, x_ub), p_i = x_i
 5:   else
 6:     v_i(t+1) = χ [ v_i(t) + c_1 r_1 (p_i − x_i) + c_2 r_2 (x* − x_i) ],
        x_i(t+1) = x_i(t) + v_i(t+1),
        p_i(t+1) = min(p_i(t), x_i(t))   (the better of the two with respect to f)
 7:   end if
 8:   y_i = f(x_i)
 9: end for
10: x* = argmin { f(x) | x ∈ {x_j}_{j=1,...,k} }
11: while (t ≤ t_max) ∧ (f(x*) > f_min) do
12:   if (t = 0) ∨ (t mod l = 0) then
13:     Build surrogate f̂ with DACE
14:     Find x̂* = argmin { f̂(x) | x_lb ≤ x ≤ x_ub } by using a standard PSO on f̂
15:     if f(x*) < f(x̂*) then
16:       x* = x*
17:     else
18:       x* = x̂*
19:     end if
20:   end if
21:   t = t + 1
22: end while
23: return x*
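To make the flow of Algorithm 1 concrete, here is a minimal, self-contained Python sketch (the report's own implementation is in MATLAB). Several stand-ins are assumptions: scipy.interpolate.RBFInterpolator replaces the DACE Kriging surrogate, a random candidate search replaces the inner "standard PSO on f̂", the particle update is folded into the main loop as the surrounding text describes, and the Rastrigin objective, parameter values, and omitted velocity clamp are illustrative choices.

```python
# Hedged sketch of the sPSO loop of Algorithm 1.
import numpy as np
from scipy.interpolate import RBFInterpolator

def rastrigin(x):
    x = np.atleast_2d(x)
    return 10 * x.shape[1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=1)

def spso(f, lb, ub, k=30, t_max=60, l=1, chi=0.73, c1=2.8, c2=1.3, seed=0):
    rng = np.random.default_rng(seed)
    d = len(lb)
    x = rng.uniform(lb, ub, (k, d))            # line 4: random positions
    v = rng.uniform(-1.0, 1.0, (k, d))         #          and velocities
    p, fp = x.copy(), f(x)                     # personal bests
    X_all, y_all = [x.copy()], [fp.copy()]     # Ensure: keep all (x, f(x)) pairs
    i = np.argmin(fp)
    x_star, f_star = p[i].copy(), fp[i]        # line 10: best point so far
    for t in range(t_max):                     # line 11 (f_min test omitted)
        if t % l == 0:                         # line 12: rebuild the surrogate
            fhat = RBFInterpolator(np.vstack(X_all), np.concatenate(y_all),
                                   smoothing=1e-8)
            cand = rng.uniform(lb, ub, (2000, d))   # stand-in for the inner PSO
            g = cand[np.argmin(fhat(cand))]         # line 14: surrogate minimum
            fg = f(g)[0]                            # one true evaluation
            X_all.append(g[None, :]); y_all.append(np.array([fg]))
            if fg < f_star:                         # lines 15-19: oracle choice
                x_star, f_star = g.copy(), fg
        r1, r2 = rng.random((k, 1)), rng.random((k, 1))
        v = chi * (v + c1 * r1 * (p - x) + c2 * r2 * (x_star - x))  # line 6
        x = np.clip(x + v, lb, ub)
        fx = f(x)
        X_all.append(x.copy()); y_all.append(fx.copy())
        better = fx < fp
        p[better], fp[better] = x[better], fx[better]
        if fp.min() < f_star:
            i = np.argmin(fp)
            x_star, f_star = p[i].copy(), fp[i]
    return x_star, f_star

print(spso(rastrigin, lb=np.array([-5.12] * 2), ub=np.array([5.12] * 2)))
```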
4 Algorithmic Framework
In the context of simulation-based optimization, it is desirable for an algorithm to be as frugal as possible with the number of objective function evaluations it performs. For the PSO, the swarm most often contains between 20 and 30 particles, so after only a few iterations a large number of function calls will already have been made, which implies that a reasonable surrogate function should be possible.
4.1 Surrogate Implementation in PSO Algorithm
In this implementation, a surrogate function f̂(x) of the objective function f(x) is first created after the initial function values for the swarm are computed. Similarities exist to earlier approaches as described in [23]. However, here the surrogate function is treated as a black box, and thus better approximations can easily be substituted for specific fitness functions, for example when simplified physics models are available.
The surrogate function is then updated every l iterations during the loop, where l is a user-specified parameter that will depend on the computational expense of each evaluation of the problem; a larger value of l may be useful on less difficult problems, where the added computational effort of maintaining the surrogate outweighs its benefit. Let x̂* be the minimum position found on the surrogate function (that is, f̂(x̂*) is the minimum value obtained when optimizing the surrogate). To incorporate the surrogate into the PSO algorithm, we replace g in Eq. (4) with x*, where x* is defined by

    x* = x̂*  if f(x̂*) < f(g),  and  x* = g  otherwise.    (6)
[Figure 1: Surrogate model of the Schwefel function after 10 iterations of sPSO. (a) Original Schwefel function; (b) surrogate model.]
The new equation for updating the velocity becomes

    v_i(t+1) = χ [ v_i(t) + c_1 r_1 (p_i − x_i) + c_2 r_2 (x* − x_i) ].    (7)
We outline the algorithm for the PSO with the surrogate oracle, referred to as sPSO, in Algorithm 1.
4.2 Social Interaction within sPSO
In a basic PSO, the best position found by any of the particles is shared among all the particles. This is a very limited exchange of information; real societal interactions generally involve a much larger body of combined knowledge. For example, if a group of people is attempting to find the least expensive gas station in a city, although they would ask one another about the best gas prices each individual has found, a more successful group might also consult a website that tracks all the gas prices over time. The surrogate model works in a similar way to the gas price website: it provides a larger database of information from which more accurate predictions of the global minimum can be made.
5 Numerical Results
For this work, the sPSO algorithm is implemented in MATLAB, although the algorithm is platform independent. To build the surrogate function we use the Kriging capabilities of the DACE toolbox [21, 22]. It should be noted, however, that any technique can be used to create a surrogate function in the sPSO algorithm. For example, if prior knowledge about the underlying fitness function is available, it should be used for the initial setting of the functional approximation method, as any additional information can improve the quality of the surrogate functions used. We begin our analysis of sPSO performance by comparing it to the performance of an identical PSO algorithm that omits the surrogate function.
5.1 Standard Test Problems
Our initial test suite includes the Ackley, Griewank, Michalewicz, Rastrigin, and Schwefel functions [24, 25, 26, 27, 28, 29]. These test problems each contain two decision variables to ease surrogate visualization; however, the groundwater problem considered in Section 5.2 is a six-variable problem. This initial set of problems is challenging for the sPSO algorithm in a variety of ways, ranging from the presence of high-frequency noise in the Griewank function to deep valleys and plateaus in the Michalewicz function. The functions are shown in Figures 1-5.

Optimization was performed on each function 30 times with both the sPSO and PSO algorithms. Repeating the minimization is necessary because of the random coefficients that both methods use to update each particle's position. For algorithmic parameters, we use a swarm size of 30 particles, c_1 = 2.8, c_2 = 1.3, κ = 0.7, φ = 4.1, and l = 1 (meaning we update the surrogate at each iteration). For our first simple test problem set, global minima are known. Thus, the stopping criterion is defined as the point when the algorithm reaches a minimum within ε of the known global minimum. The value of ε will depend on the scale of the problem and the desired accuracy of the result; however, in our test problems ε < 0.001 in all cases.
[Figure 2: Surrogate model of the Ackley function after 10 iterations of sPSO. (a) Original Ackley function; (b) surrogate model.]

Table 1: Summary of results for the comparison of the sPSO and PSO algorithms

          Ackley         Griewank        Michalewicz     Rastrigin       Schwefel
          PSO    sPSO    PSO    sPSO    PSO    sPSO     PSO    sPSO     PSO    sPSO
  N̄       44.7   26.73   149.6  148.7   16.23  10.23    33.93  20.63    60.5   28.4
  s        4.55   1.95    94.9   89.4    3.34   1.61     7.88   3.91    69.8   34
  N̄_s      0      7.03    0      3.07    0      7.43     0     11.6     0      9.4

A summary of the results is in Table 1. For all of the test functions except the Griewank function, the global minimum was reached on every optimization run. Although in a few instances the PSO and sPSO algorithms did not converge to the global minimum of the Griewank function within 200 iterations, when run longer (up to 400 iterations) the algorithms always converged to the global minimum. In Table 1, N̄ is the number of iterations in an optimization run averaged over the 30 runs, N̄_s is the average number of times the surrogate minimum was used instead of the best particle, and s is the sample standard deviation of the iteration count.

The results imply that the use of a surrogate function can improve the efficiency of the algorithm by enhancing the search phase. Statistically, the two optimization techniques perform the same on the Griewank function, implying that the use of a surrogate function in the PSO algorithm is not beneficial on functions containing high-frequency noise. Conceptually, this makes sense: the higher-frequency components of a function cannot be modeled using the relatively limited sampling ability of the swarm.
The surrogate functions generated by the sPSO can be seen in Figures 1-5. Notice that the surrogate functions are more accurate near their global minima. As the sPSO algorithm progresses, the swarm moves closer to the surrogate minimum. When the surrogate function is updated, the larger number of known values causes the surrogate function to become more accurate near the surrogate minimum. For the above test functions this does not affect sPSO performance; however, the uneven refinement could trap the swarm at local minima far from the global minimum. If the surrogate function does not capture the area surrounding the global minimum of the function, the swarm has a lower chance of finding the global minimum. To help avoid this, Latin hypercube sampling (LHS) is used to generate the initial positions of the particles, as sketched below. Users can also decrease the risk of not capturing the global minimum by increasing the swarm size, thus creating a finer initial surrogate function.
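A short sketch of the Latin hypercube initialization, assuming SciPy's quasi-Monte Carlo module (scipy.stats.qmc, SciPy >= 1.7); the Schwefel-style bounds are illustrative.

```python
# Latin hypercube initial positions: each coordinate is stratified into
# k intervals, giving more even coverage than i.i.d. uniform sampling.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=30)                       # 30 particles in [0, 1)^2
x0 = qmc.scale(unit, l_bounds=[-500, -500],       # stretch to the design domain
               u_bounds=[500, 500])
print(x0.min(axis=0), x0.max(axis=0))
```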
5.2 Groundwater Management Application
The purpose of this section is to demonstrate the performance of the sPSO algorithm on a simulation-based engineering application. The problem studied here is part of a suite of simulation-based optimization benchmarking problems from hydrology proposed in [9]. The problem has been shown to be challenging due to the presence of local minima, a disconnected feasible region, nonconvexity, and a discontinuous objective function [10, 11, 12, 13]. Moreover, in [30] the authors show a strong dependence on initial candidates for single-search optimization methods. The objective is to control the migration of a contaminant plume by using extraction wells to alter the direction of groundwater flow. For this work the decision variables are the well locations {(x_i, y_i)}_{i=1}^{2} and the pumping rates {Q_i}_{i=1}^{2} of the two wells, with the possibility that the optimal design may contain only one well (that is, Q_i = 0 m³/s for i = 1 or 2).
[Figure 3: Surrogate model of the Griewank function after 9 iterations of sPSO. (a) Original Griewank function; (b) high-frequency components of the Griewank function; (c) surrogate model.]
The objective function, J, is the sum of a capital installation cost J_c and an operational cost J_o, given by

    J = Σ_i [ α_0 d_i^{b_0} + α_1 |Q_i^m|^{b_1} (z_gs − h_min)^{b_2} ]  +  ∫_0^{t_f} [ Σ_i α_2 Q_i (h_i − z_gs) ] dt,    (8)

where the sum over wells is the capital cost J_c and the time integral is the operational cost J_o.
In J_c, the first term accounts for drilling and installing each well, and the second term represents the additional cost of pumps for extraction wells. The operational cost, J_o, includes the lift cost associated with raising the water to the surface. More specifically, in Eq. (8), α_j and b_j are cost coefficients and exponents, respectively, d_i is the depth of well i, which is set to the ground surface elevation z_gs, Q_i^m is the design pumping rate, and h_min is the minimum allowable hydraulic head. Including the installation term J_c means that removing a well from the design leads to a large decrease in cost but also yields a discontinuous objective function. If, in the course of the optimization, a well rate satisfies the inequality |Q_i| ≤ 10⁻⁶ m³/s, that well is removed from the design space and excluded from all other calculations.
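As a rough illustration of how Eq. (8) turns a candidate design into a cost, the sketch below evaluates J for a two-well design. Every coefficient value (α_j, b_j, z_gs, h_min, t_f) is a hypothetical placeholder (the actual values are specified in [9]), and the lift-cost integral is collapsed to a fixed-head approximation over the operating period.

```python
# Hedged sketch of the installation-plus-operating cost in Eq. (8);
# all numeric values below are placeholders, not the values from [9].
import numpy as np

def well_cost(Q, h, z_gs=30.0, h_min=10.0, t_f=1.0e8,
              alpha=(1.0e4, 5.0e6, 1.0e-2), b=(0.3, 0.9, 0.6)):
    Q, h = np.asarray(Q), np.asarray(h)
    active = np.abs(Q) > 1e-6              # wells below 1e-6 m^3/s are removed
    d = z_gs                               # well depth set to surface elevation
    Jc = np.sum(active * (alpha[0] * d**b[0]
                          + alpha[1] * np.abs(Q)**b[1] * (z_gs - h_min)**b[2]))
    Jo = t_f * np.sum(active * alpha[2] * Q * (h - z_gs))  # steady-head lift cost
    return Jc + Jo

print(well_cost(Q=[-0.003, 0.0], h=[20.0, 30.0]))  # second well drops out of J
```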
The hydraulic heads h_i for well i also vary with the decision variables, and obtaining their values for each design point requires a solution to the equations that model saturated flow. The model is given by

    S_s ∂h/∂t = ∇ · (K ∇h) + S̃,    (9)

where S_s is the specific storage coefficient, h is the hydraulic head, K is the hydraulic conductivity tensor, and S̃ is a fluid source term through which the decision variables enter the state equation. The simulator MODFLOW2000 [31] is used to find a solution to Eq. (9).
[Figure 4: Surrogate model of the Michalewicz function after 11 iterations of sPSO. (a) Original Michalewicz function; (b) surrogate model.]

[Figure 5: Surrogate model of the Rastrigin function after 12 iterations of sPSO. (a) Original Rastrigin function; (b) surrogate model.]
Constraints include bounds on the well capacities and the hydraulic head at each well location. We also constrain the net pumping rate using the inequality

    Q_T = Σ_{i=1}^{n} Q_i ≤ Q_T^max,    (10)

where Q_T^max is the maximum allowable total extraction rate.
A direct approach to plume containment is to impose constraints on the concentration at specified locations. This constraint can be expressed as

    C_j ≤ C_j^max,    (11)

where C_j is the concentration in kg/m³ at observation node j and C_j^max is the maximum allowable concentration.
Evaluation of this constraint requires a solution to the contaminant transport equation

    ∂(θ_a C_a)/∂t = ∇ · (θ_a D ∇C_a) − ∇ · (q C_a),    (12)

where C_a [kg/m³] is the concentration of the species in the aqueous phase, θ_a is the volume fraction of the aqueous phase, D is a hydrodynamic dispersion tensor, v is the mean pore velocity, and q is the Darcy velocity. The simulator MT3DMS [32] is used to obtain a solution to the transport equation.
The physical domain is a 1000 × 1000 × 30 unconfined aquifer, implying that the bound constraints on the hydraulic head depend nonlinearly on the pumping rates. The soil parameters, boundary and initial conditions, and cost information are all specified in [9].
Table 2: sPSO and PSO performance on the groundwater application

        PSO     sPSO
  N̄     26.25   18.05
  s      3.32    3.41
5.3 sPSO Performance on Groundwater Application
For this application we considered two extraction wells, allowing for six decision variables: the x and y positions of the wells along with the extraction rates. Bound constraints on the extraction rates are given by −0.0064 ≤ Q_i ≤ 0 m³/s. The remaining linear and nonlinear constraints were implemented via a weighted penalty term based on the relative constraint violations, as sketched below. We used the same optimization parameters as in the previous test suite, but since one groundwater flow and transport simulation can take up to 45 seconds, we ran the optimization only 10 times with both the sPSO and PSO algorithms. For the convergence test on this problem, an approximate global minimum is known, so a procedure similar to the one used before was applied for the comparison of the PSO and sPSO algorithms. Table 2 summarizes the results of the optimization.
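A minimal sketch of the weighted penalty treatment described above; the multiplicative form and the weight values are assumptions for illustration, not the report's exact implementation.

```python
# Penalized fitness from relative violations of Eqs. (10) and (11);
# the weights w_q and w_c are hypothetical.
import numpy as np

def penalized_fitness(J, QT, QT_max, C, C_max, w_q=10.0, w_c=10.0):
    # relative violation of the net-pumping-rate constraint, Eq. (10)
    viol_q = max(0.0, (abs(QT) - QT_max) / QT_max)
    # relative violations of the concentration constraints, Eq. (11)
    C, C_max = np.asarray(C), np.asarray(C_max)
    viol_c = np.maximum(0.0, (C - C_max) / C_max)
    return J * (1.0 + w_q * viol_q + w_c * viol_c.sum())

print(penalized_fitness(J=1.0e5, QT=-0.008, QT_max=0.0064,
                        C=[0.6e-5, 1.2e-5], C_max=[1.0e-5, 1.0e-5]))
```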
The sPSO algorithm used fewer function evaluations than the PSO algorithm, increasing the appeal of the surrogate approach for improving efficiency when fitness function evaluations are computationally expensive (here, up to 45 seconds per run). Further speedup, in terms of fewer fitness function evaluations, may be obtained by treating the surrogate function in each iteration as a dynamic optimization problem, as described, e.g., in [33].
6 Conclusions and Discussion
Previous studies have shown that particle swarm optimization is a suitable technique when little is known about the
behavior of the objective function. One strength of PSO is that the method requires only a few initial parameters. Initial
iterates obtained by any kind of expert knowledge are not required to start PSO.
Surrogate functions were incorporated into the PSO algorithm to improve efficiency on engineering problems. Since the global PSO is used on a variety of engineering problems [34, 35, 4, 5], we also start with the global PSO variant as a building block for the sPSO. However, many enhanced variations of the PSO algorithm exist, each excelling on different types of problems. This encourages the development of alternative sPSO algorithms based on other PSO variants, such as the local PSO, adaptive PSO, or discrete PSO [36]. For example, on high-dimensional problems, the local PSO may have more reliable convergence properties than the global PSO. In practice, we have found that, in general, the local PSO begins to outperform the global PSO on problems with more than 10 dimensions. Thus, applications with a large number of decision variables may consider using the ideas presented in this work to create an sPSO variant from a local PSO topology.
We provide a new approach for incorporating surrogate functions into the PSO framework. The surrogate is built using intermediate function values from the optimization, and the minimum of the surrogate helps guide the search phase. No additional objective function evaluations are required to build these surrogates. We have shown that the proposed sPSO can result in significantly fewer objective function evaluations than the PSO. This is of particular interest for simulation-based problems where the simulation is typically much more computationally expensive than the cost of maintaining the surrogate function.
References
[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," IEEE Conference on Neural Networks, vol. 4, pp. 1942-1948, 1995.
[2] C.-J. Liao, C.-T. Tseng, and P. Luarn, "A discrete version of particle swarm optimization for flowshop scheduling problems," Computers & Operations Research, vol. 34, pp. 3099-3111, 2007.
[3] J. Robinson and Y. Rahmat-Samii, "Particle swarm optimization in electromagnetics," IEEE Transactions on Antennas and Propagation, vol. 52, no. 2, pp. 397-407, 2004.
[4] H. Yoshida, K. Kawata, Y. Fukuyama, S. Takayama, and Y. Nakanishi, "A particle swarm optimization for reactive power and voltage control considering voltage security assessment," IEEE Transactions on Power Systems, vol. 15, no. 4, pp. 1232-1239, 2000.
[5] L. S. Matott, S. L. Bartelt-Hunt, A. J. Rabideau, and K. R. Fowler, "Application of heuristic optimization techniques and algorithm tuning to multilayered sorptive barrier design," Environmental Science and Technology, vol. 40, pp. 6354-6360, 2006.
[6] J. Sacks, S. B. Schiller, and W. J. Welch, "Designs for computer experiments," Technometrics, vol. 31, pp. 41-47, 1989.
[7] R. Jin, X. Du, and W. Chen, "The use of metamodeling techniques for optimization under uncertainty," ASME Design Automation Conference, 2001.
[8] R. Jin, W. Chen, and T. W. Simpson, "Comparative studies of metamodelling techniques under multiple modelling criteria," Structural and Multidisciplinary Optimization, vol. 23, pp. 1-13, 2001.
[9] A. S. Mayer, C. T. Kelley, and C. T. Miller, "Optimal design for problems involving flow and transport phenomena in saturated subsurface systems," Advances in Water Resources, vol. 25, pp. 1233-1256, 2002.
[10] L. S. Matott, A. J. Rabideau, and J. R. Craig, "Pump-and-treat optimization using analytic element method flow models," Advances in Water Resources, vol. 29, pp. 760-775, 2006.
[11] T. Hemker, K. R. Fowler, and O. von Stryk, "Derivative-free optimization methods for handling fixed costs in optimal groundwater remediation design," in Proc. of the CMWR XVI - Computational Methods in Water Resources, 19-22 June 2006.
[12] G. A. Gray and K. R. Fowler, "Approaching the groundwater remediation problem using multifidelity optimization," in Proc. of the CMWR XVI - Computational Methods in Water Resources, 19-22 June 2006.
[13] T. Hemker, K. R. Fowler, M. W. Farthing, and O. von Stryk, "A mixed-integer simulation-based optimization approach with surrogate functions in water resources management," Optimization and Engineering, to appear, 2008.
[14] M. Clerc and J. Kennedy, "The particle swarm: explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, pp. 58-73, 2002.
[15] H. Fan, "A modification to particle swarm optimization algorithm," Engineering Computations, vol. 19, no. 9, pp. 970-989, 2002.
[16] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Information Processing Letters, vol. 85, pp. 317-325, 2003.
[17] M. Jiang, Y. P. Luo, and S. Y. Yang, "Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm," Information Processing Letters, vol. 102, pp. 8-16, 2007.
[18] G. Matheron, "Principles of geostatistics," Economic Geology, vol. 58, pp. 1246-1266, 1963.
[19] J. Martin and T. Simpson, "A study on the use of kriging models to approximate deterministic computer models," in Proceedings of DETC'03, 2003.
[20] J. Koehler and A. Owen, "Computer experiments," Handbook of Statistics, vol. 13, pp. 261-308, 1996.
[21] S. N. Lophaven, H. B. Nielsen, and J. Søndergaard, DACE: A MATLAB Kriging Toolbox, Technical University of Denmark.
[22] S. N. Lophaven, H. B. Nielsen, and J. Søndergaard, "Aspects of the Matlab toolbox DACE," IMM, Tech. Rep. IMM-TR-2002-13, August 2002.
[23] M. Iqbal and M. A. Montes de Oca, "An estimation of distribution particle swarm optimization algorithm," in Ant Colony Optimization and Swarm Intelligence: 5th International Workshop, ANTS 2006, Brussels, ser. LNCS, M. Dorigo et al., Eds., vol. 4150/2006. Berlin, Germany: Springer Verlag, August 2006, pp. 72-83. [Online]. Available: http://www.cs.kent.ac.uk/pubs/2006/2601
[24] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17-41, 1981.
[25] D. H. Ackley, A Connectionist Machine for Genetic Hillclimbing. Norwell, MA, USA: Kluwer Academic Publishers, 1987.
[26] H.-P. Schwefel, Numerical Optimization of Computer Models. New York, NY, USA: John Wiley & Sons, Inc., 1981.
[27] L. A. Rastrigin, Extremal Control Systems, Theoretical Foundations of Engineering Cybernetics Series (in Russian).
[28] T. Bäck, D. B. Fogel, and Z. Michalewicz, Handbook of Evolutionary Computation. CRC Press, 1997.
[29] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs. Berlin, Heidelberg, New York: Springer-Verlag, 1992.
[30] K. R. Fowler, J. P. Reese, C. E. Kees, J. E. Dennis, Jr., C. T. Kelley, C. T. Miller, C. Audet, A. J. Booker, G. Couture, R. W. Darwin, M. W. Farthing, D. E. Finkel, J. M. Gablonsky, G. Gray, and T. G. Kolda, "A comparison of derivative-free optimization methods for water supply and hydraulic capture community problems," accepted to Advances in Water Resources, 2008.
[31] C. Zheng, M. C. Hill, and P. A. Hsieh, MODFLOW2000, The U.S. Geological Survey Modular Ground-Water Model: User Guide to the LMT6 Package, the Linkage With MT3DMS for Multispecies Mass Transport Modeling, user's guide ed., USGS, 2001.
[32] C. Zheng and P. P. Wang, MT3DMS: A Modular Three-Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems, documentation and user's guide ed., 1999.
[33] T. Blackwell, "Particle swarm optimization in dynamic environments," in Evolutionary Computation in Dynamic and Uncertain Environments. Springer, 2007, pp. 29-49.
[34] H. H. Balci and J. F. Valenzuela, "Scheduling electric power generators using particle swarm optimization combined with the Lagrangian relaxation method," International Journal of Applied Mathematics and Computer Science, vol. 14, no. 3, pp. 411-421, 2004.
[35] L. S. Matott, OSTRICH: An Optimization Software Tool; Documentation and User's Guide, State University of New York at Buffalo, http://www.groundwater.buffalo.edu/software/Download.php.
[36] K. Y. Lee and M. A. El-Sharkawi, Eds., Modern Heuristic Optimization Techniques: Theory and Applications to Power Systems. John Wiley and Sons Inc., 2008.