Du 2016
PSO can locate the region of the optimum faster than EAs, but once in this region
it progresses slowly due to the fixed velocity stepsize. Almost all variants of PSO
try to solve the stagnation problem. This chapter is dedicated to PSO as well as its
variants.
9.1 Introduction
The notion of employing many autonomous particles that act together in simple ways
to produce seemingly complex emergent behavior was initially considered to solve
the problem of rendering images in computer animations [79]. A particle system
stochastically generates a series of moving points. Each particle is assigned an initial
velocity vector. It may also have additional characteristics such as color, texture, and
limited lifetime. Iteratively, velocity vectors are adjusted by some random factor. In
computer graphics and computer games, particle systems are ubiquitous and are the
de facto method for producing animated effects such as fire, smoke, clouds, gunfire,
water, cloth, explosions, magic, lighting, electricity, flocking, and many others. They
are defined by a set of points in space and a set of rules guiding their behavior
and appearance, e.g., velocity, color, size, shape, transparency, and rotation. This
decouples the creation of new complex effects from mathematics and programming.
Today, particle systems are also popular in global optimization.
PSO originates from studies of synchronous bird flocking, fish schooling, and bees
buzzing [22,44,45,59,83]. It evolves populations or swarms of individuals called
particles. Particles work under social behavior in swarms. PSO finds the global
best solution by simply adjusting the moving vector of each particle according to
its personal best (cognition aspect) and the global best (social aspect) positions of
particles in the entire swarm at each iteration.
Compared with ant colony algorithms and EAs, PSO requires only primitive
mathematical operators, less computational bookkeeping and generally fewer lines
of code, and thus it is computationally inexpensive in terms of both memory require-
ments and speed. PSO is popular due to its simplicity of implementation and its
ability to quickly converge to a reasonably acceptable solution.
If the gbest position is suboptimal, the swarm can easily stagnate around it without any pressure to continue
exploration. This can be seen from (9.1). If x i (t) = x i∗ (t) = x g (t), then the velocity
update will depend only on the value of αv i (t). If their previous velocities v i (t) are
very close to zero, then all the particles will stop moving once they catch up with
the gbest particle. Even worse, the gbest point may not be a local minimum. This
phenomenon is referred to as stagnation. To avoid stagnation, reseeding or partial
restart is introduced by generating new particles at distinct places of the search space.
Almost all variants of PSO try to solve the local optimum or stagnation problem.
PSO can locate the region of the optimum faster than EAs. However, once in this
region it progresses slowly due to the fixed velocity stepsize. Linearly decreasing
weight PSO (LDWPSO) [83] effectively balances the global and local search abilities
of the swarm by introducing a linearly decreasing inertia weight on the previous
velocity of the particle into (9.1):
v i (t + 1) = αv i (t) + c1 r1 (x i∗ (t) − x i (t)) + c2 r2 (x g (t) − x i (t)), (9.3)
where α is called the inertia weight, and the positive constants c1 and c2 are, respec-
tively, cognitive and social parameters. Typically, c1 = 2.0, c2 = 2.0, and α gradu-
ally decreases from αmax to αmin :
α(t) = αmax − (αmax − αmin) t/T, (9.4)
T being the maximum number of iterations. One can select αmax = 1 and αmin = 0.1.
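As a concrete illustration, the updates (9.3) and (9.4) can be sketched in a few lines of Python; the function names and the list-based particle representation are our own choices, not part of the original formulation:

```python
import random

def inertia_weight(t, T, alpha_max=1.0, alpha_min=0.1):
    """Linearly decreasing inertia weight, Eq. (9.4)."""
    return alpha_max - (alpha_max - alpha_min) * t / T

def ldwpso_step(x, v, pbest, gbest, alpha, c1=2.0, c2=2.0):
    """One LDWPSO velocity and position update for a single particle,
    following Eqs. (9.3) and (9.2), applied dimension-wise."""
    # One random pair per update; per-dimension randoms are also common.
    r1, r2 = random.random(), random.random()
    v_new = [alpha * vj + c1 * r1 * (pj - xj) + c2 * r2 * (gj - xj)
             for xj, vj, pj, gj in zip(x, v, pbest, gbest)]
    x_new = [xj + vj for xj, vj in zip(x, v_new)]
    return x_new, v_new
```

At t = 0 the weight equals αmax and decays linearly to αmin at t = T. Note that when a particle coincides with both its pbest and the gbest, the update reduces to v(t + 1) = αv(t), which is exactly the stagnation condition discussed above.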
The pseudocode of PSO is given in Algorithm 9.1.
Center PSO [57] introduces a center particle into LDWPSO, which is updated to the swarm center at every iteration. The center particle has no velocity, but otherwise it takes part in all operations in the same way as the ordinary particles, such as fitness evaluation and competition for the best particle; only the velocity calculation is omitted. All particles oscillate around the swarm center and gradually converge toward it. The center particle often becomes the gbest of the swarm during the run; it therefore has more opportunities to guide the search of the whole swarm and influences performance greatly. Center PSO achieves both better solutions and faster convergence than LDWPSO.
PSO, DE, and CMA-ES are compared using certain fitness landscapes evolved
with GP in [52]. DE may get stuck in local optima most of the time for some problem
landscapes. However, over similar landscapes PSO always finds the global optimum within a maximum time bound. DE sometimes has a limited ability to move
its population large distances across the search space if the population is clustered
in a limited portion of it.
Instead of applying inertia to the velocity memory, constriction PSO [22] applies
a constriction factor χ to control the magnitude of velocities:
v i (t + 1) = χ{v i (t) + φ1r1 (x i∗ (t) − x i (t)) + φ2 r2 (x g (t) − x i (t))}, (9.5)
χ = 2 / |2 − ϕ − √(ϕ² − 4ϕ)|, (9.6)
where ϕ = ϕ1 + ϕ2 > 4. With this formulation, the velocity limit vmax is no longer necessary, and the algorithm can guarantee convergence without clamping the velocity. It is suggested that ϕ = 4.1 (ϕ1 = ϕ2 = 2.05) and χ = 0.729 [27]. When α = χ and ϕ1 + ϕ2 > 4, the constriction and inertia approaches are algebraically equivalent, and improved performance can be achieved across a wide range of problems [27]. Constriction PSO has faster convergence than LDWPSO, but it is prone to being trapped in local optima for multimodal functions.

Algorithm 9.1 (PSO).
1. Set t = 1. Initialize each particle in the population by randomly selecting values for its position x i and velocity v i , i = 1, . . . , NP.
2. Repeat:
   a. Calculate the fitness value of each particle i. If the fitness value of particle i is better than its best fitness value found so far, revise x i∗ (t).
   b. Determine the location of the particle with the best fitness and revise x g (t) if necessary.
   c. For each particle i, calculate its velocity according to (9.1) or (9.3).
   d. Update the location of each particle i according to (9.2).
   e. Set t = t + 1.
until stopping criteria are met.
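The constriction coefficient in (9.6) is easy to evaluate numerically; the following sketch (function name ours) reproduces the suggested value χ ≈ 0.729 for ϕ = 4.1:

```python
import math

def constriction(phi):
    """Constriction coefficient chi of Eq. (9.6); requires phi = phi1 + phi2 > 4."""
    if phi <= 4.0:
        raise ValueError("the constriction model assumes phi > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

For ϕ = 4.1 this gives χ ≈ 0.7298, matching the commonly quoted value 0.729.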
Bare-bones PSO [42], as the simplest version of PSO, eliminates the velocity equation
of PSO and uses a Gaussian distribution based on pbest and gbest to sample the
search space. It does not use the inertia weight, acceleration coefficient or velocity.
The velocity update equation (9.3) is not used and a Gaussian distribution with the
global and local best positions is used to update the particles’ positions. Bare-bones
PSO has the following update equations:
xi,j (t + 1) = gi,j (t) + σi,j (t)N(0, 1), (9.7)
gi,j (t) = 0.5 (x∗i,j (t) + xg,j (t)), (9.8)
σi,j (t) = |x∗i,j (t) − xg,j (t)|, (9.9)
where subscripts i, j denote the ith particle and jth dimension, respectively, N (0, 1)
is the Gaussian distribution with zero mean and unit variance. The method can be
derived from basic PSO [68]. An alternative version sets xi,j (t + 1) by (9.7) with 50% probability, and to the previous best position x∗i,j (t) with 50% probability. Bare-bones PSO still suffers from the problem of premature convergence.
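The bare-bones update (9.7)-(9.9), together with the 50% alternative, can be sketched as follows; the function name and the list representation are ours:

```python
import random

def barebones_update(pbest, gbest, alternative=False):
    """Bare-bones position sampling, Eqs. (9.7)-(9.9): each dimension is
    drawn from a Gaussian centered midway between pbest and gbest with
    standard deviation equal to their distance. With alternative=True,
    each dimension instead keeps pbest with probability 0.5."""
    x_new = []
    for pj, gj in zip(pbest, gbest):
        if alternative and random.random() < 0.5:
            x_new.append(pj)                       # keep the previous best
        else:
            mu = 0.5 * (pj + gj)                   # Eq. (9.8)
            sigma = abs(pj - gj)                   # Eq. (9.9)
            x_new.append(random.gauss(mu, sigma))  # Eq. (9.7)
    return x_new
```

When pbest and gbest coincide, σ = 0 and the particle stops moving, which illustrates the premature convergence problem noted above.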
It is shown in [15] that during stagnation in PSO, the points sampled by the leader
particle lie on a specific line. The condition under which particles stick to exploring
one side of the stagnation point only is obtained, and the case where both sides
are explored is also given. Information about the gradient of the objective function
during stagnation in PSO is also obtained.
Under the generalized theoretical deterministic PSO model, conditions for particle
convergence to a point are derived in [20]. The model greatly weakens the stagnation
assumption, by assuming that each particle’s personal best and neighborhood best
can occupy an arbitrarily large number of unique positions.
In [21], an objective function is designed for assumption-free convergence analysis
of some PSO variants. It is found that canonical particle swarm’s topology does not
have an impact on the parameter region needed to ensure convergence. The parameter
region needed to ensure convergent particle behavior has been empirically obtained
for fully informed PSO, bare-bones PSO, and standard PSO 2011.
The issues associated with PSO are stagnation of particles at some points of the search space, inability to change the value of one or more decision variables, poor performance with small swarms, lack of a guarantee of convergence even to a local optimum, poor performance for an increasing number of dimensions, and sensitivity to rotation of the search space. A general form of the velocity update rule for PSO proposed in [10] is guaranteed to address all of these issues if the user-definable function f satisfies two conditions: (i) for any input vector x in the search space, there exists a region A that contains x, and f (x) can be located anywhere in A; and (ii) f is invariant under any affine transformation.
Example 9.1: We revisit the optimization problem treated in Example 2.1. The
Easom function is plotted in Figure 2.1. The global minimum value is −1 at
x = (π, π)T .
MATLAB Global Optimization Toolbox provides a PSO solver, particleswarm. Using the default parameter settings, the particleswarm solver always finds the global optimum very rapidly over ten random runs for the range [−100, 100]². This is because all the initial individuals, which are randomly selected in (0, 1), are very close to the global optimum.
A fair evaluation of PSO is to set the initial population randomly from the entire
domain. We select an initial population size of 40 and other default parameters. For
20 random runs, the solver converged 19 times for a maximum of 100 generations.
For a random run, we have f (x) = −1.0000 at (3.1416, 3.1416) with 2363 function
evaluations, and all the individuals converge toward the global optimum. The evolu-
tion of a random run is illustrated in Figure 9.1. For this problem, we conclude that
the particleswarm solver outperforms ga and simulannealbnd solvers.
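Readers without the MATLAB toolbox can reproduce the experiment with a minimal LDWPSO loop. The Easom function, the swarm size of 40, c1 = c2 = 2, and initialization over [−100, 100]² follow the example above; everything else (names, seeding, the inertia schedule) is an illustrative sketch rather than the toolbox's implementation:

```python
import math
import random

def easom(x, y):
    """Easom function; global minimum -1 at (pi, pi)."""
    return -math.cos(x) * math.cos(y) * math.exp(-((x - math.pi) ** 2 + (y - math.pi) ** 2))

def pso_easom(n=40, iters=100, lo=-100.0, hi=100.0, seed=1):
    """Minimize the Easom function with LDWPSO initialized over [lo, hi]^2."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n)]
    vs = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in xs]
    pval = [easom(*p) for p in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        alpha = 1.0 - 0.9 * t / iters  # inertia decays from 1.0 toward 0.1
        for i in range(n):
            for j in range(2):
                r1, r2 = rng.random(), rng.random()
                vs[i][j] = (alpha * vs[i][j]
                            + 2.0 * r1 * (pbest[i][j] - xs[i][j])
                            + 2.0 * r2 * (gbest[j] - xs[i][j]))
                xs[i][j] += vs[i][j]
            f = easom(*xs[i])
            if f < pval[i]:
                pval[i], pbest[i] = f, xs[i][:]
                if f < gval:
                    gval, gbest = f, xs[i][:]
    return gbest, gval
```

By construction the best value never worsens. Whether a given run reaches −1 depends on some particle wandering near (π, π), which mirrors the 19-of-20 success rate reported above.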
Figure 9.1 The evolution of a random run of PSO: the minimum and average objectives.

9.3 PSO Variants Using Different Neighborhood Topologies
A key feature of PSO is social information sharing among the neighborhood. Typical
neighborhood topologies are the von Neumann neighborhood, gbest, and lbest, as
shown in Figure 9.2. The simplest neighbor structure might be the ring structure.
Basic PSO uses gbest topology, in which the neighborhood consists of the whole
swarm, meaning that all the particles have the information of the globally found best
solution. Every particle is a neighbor of every other particle.
The lbest neighborhood has a ring lattice topology: each particle forms a neighborhood consisting of itself and its two or more immediate neighbors. The neighbors need not be close to the generating particle in either objective value or position; instead, they are chosen by their adjacent indices.
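Index-based lbest neighborhoods are straightforward to enumerate; the helper below (our notation) returns the ring neighborhood of particle i in a swarm of n particles:

```python
def ring_neighborhood(i, n, k=1):
    """Indices of particle i's lbest neighborhood on a ring of n particles:
    itself and its k immediate neighbors on each side, with wrap-around."""
    return [(i + d) % n for d in range(-k, k + 1)]
```

With k = 1 this is the standard ring; the gbest topology is recovered when the neighborhood is the whole swarm.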
In general, the asynchronous model has a faster convergence speed than synchronous PSO, at the cost of a higher risk of rapidly attracting all particles to a deceptive solution. Random asynchronous PSO is a variant of asynchronous PSO in which particles are selected at random to perform their operations. Random asynchronous PSO has the best general performance in large neighborhoods, while synchronous PSO performs best in small neighborhoods [77].
In fitness-distance-ratio-based PSO (FDR-PSO) [74], each particle utilizes additional information from a nearby particle of higher fitness, selected according to the fitness–distance ratio, i.e., the ratio of the fitness improvement over the respective
weighted Euclidean distance. The algorithm moves particles toward nearby particles
of higher fitness, instead of attracting each particle toward just the gbest position. This
combats the problem of premature convergence observed in PSO. Concurrent PSO
[6] avoids the possible crosstalk effect of pbest and gbest with nbest in FDR-PSO
by concurrently simulating modified PSO and FDR-PSO algorithms with frequent
message passing between them.
To avoid stagnation and to keep the gbest particle moving until it has reached a
local minimum, guaranteed convergence PSO [87] uses a different velocity update
equation for the x g particle, which causes the particle to perform a random search
around x g within a radius defined by a scaling factor. Its ability to operate with small
swarm sizes makes it an enabling technique for parallel niching solutions.
For large parameter optimization problems, orthogonal PSO [35] uses an intel-
ligent move mechanism, which applies orthogonal experimental design to adjust a
velocity for each particle by using a divide and conquer approach in determining the
next move of particles.
In [14], basic PSO and Michigan PSO are used to solve the problem of prototype
placement for nearest prototype classifiers. In the Michigan approach, a member of
the population only encodes part of the solution, and the whole swarm is the potential
solution to the problem. This reduces the dimension of the search space. Adaptive
Michigan PSO [14] uses modified PSO equations with both particle competition
and cooperation between the closest neighbors and a dynamic neighborhood. The
Michigan PSO algorithms introduce a local fitness function to guide the particles’
movement and dynamic neighborhoods that are calculated on each iteration.
Diversity can be maintained by relocating the particles when they are too close
to each other [60] or using some collision-avoiding mechanisms [8]. In [71], trans-
formations of the objective function through deflection and stretching are used to
overcome local minimizers and a repulsion source at each detected minimizer is
used to repel particles away from previously detected minimizers. This combina-
tion is able to find as many global minima as possible by preventing particles from
moving to a previously discovered minimal region.
In [30], PSO is used to improve simplex search. Clustering-aided simplex PSO
[40] incorporates simplex method to improve PSO performance. Each particle in
PSO is regarded as a point of the simplex. On each iteration, the worst particle is
replaced by a new particle generated by one iteration of the simplex method. Then,
all particles are again updated by PSO. PSO and simplex methods are performed
iteratively.
Adaptive PSO [93] first performs a real-time procedure that evaluates the population distribution and particle fitness to identify which of four defined evolutionary states (exploration, exploitation, convergence, and jumping out) the swarm is in at each generation. This enables automatic control of the algorithmic parameters at run time to
improve the search efficiency and convergence speed. Then, an elitist learning strat-
egy is performed when the evolutionary state is classified as convergence state. The
strategy will act on the gbest particle to jump out of the likely local optima. Adaptive
PSO substantially enhances the performance of PSO in terms of convergence speed,
global optimality, solution accuracy, and algorithm reliability.
Chaotic PSO [2] utilizes chaotic maps for parameter adaptation which can improve
the search ability of basic PSO. Frankenstein’s PSO [25] combines a number of algo-
rithmic components such as time-varying population topology, the velocity updating
mechanism of fully informed PSO [64], and decreasing inertia weight, showing
advantages in terms of optimization speed and reliability. Particles are initially connected in a fully connected topology, which is reduced over time following a certain pattern.
Comprehensive learning PSO (http://www.ntu.edu.sg/home/epnsugan) [55] uses
all other particles’ pbest information to update a particle’s velocity. It learns each
dimension of a particle from just one particle’s historical best information, while
each particle learns from different particles’ historical best information for different
dimensions for a few generations. This strategy helps preserve diversity and discourages premature convergence. The method outperforms PSO with inertia weight
[83] and PSO with constriction factor [22] in solving multimodal problems.
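A sketch of the per-dimension exemplar construction used by comprehensive learning PSO follows; the learning probability pc and the binary tournament over pbests reflect the description in [55], but the function is a simplified illustration (for instance, the original refreshes a particle's exemplar only after it stops improving for some generations):

```python
import random

def clpso_exemplar(i, pbests, fitness, pc=0.3, rng=random):
    """Comprehensive learning: build particle i's exemplar dimension by
    dimension. With probability pc a dimension is copied from the pbest of
    the winner of a binary tournament over the swarm; otherwise particle
    i keeps its own pbest for that dimension (minimization assumed)."""
    exemplar = []
    for j in range(len(pbests[i])):
        if rng.random() < pc:
            a = rng.randrange(len(pbests))
            b = rng.randrange(len(pbests))
            winner = a if fitness[a] < fitness[b] else b
            exemplar.append(pbests[winner][j])
        else:
            exemplar.append(pbests[i][j])
    return exemplar
```

Because different dimensions can come from different particles' histories, the exemplar is generally not any single particle's pbest, which is what preserves diversity.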
Inspired by the social behavior of clans, clan PSO [13] divides the PSO population into several clans. Each clan first performs a search, and the particle with the best fitness is selected as the clan leader. The leaders then meet to adjust their positions. Dynamic clan PSO [7] allows particles in one clan to migrate to another clan.
Motivated by the social phenomenon that multiple good exemplars assist the crowd to progress, example-based learning PSO [37] employs an example set of multiple gbest particles to update the particles' positions.
Charged PSO [8] utilizes an analogy of electrostatic energy, where some mutually
repelling particles orbit a nucleus of neutral particles. This nucleus corresponds to
a basic PSO swarm. The particles with identical charges produce a repulsive force
between them. The neutral particles allow exploitation while the charged particles
enforce separation to maintain exploration.
Random black hole PSO [95] is a PSO algorithm based on the concept of black
holes in physics. In each dimension of a particle, a black hole located nearest to
the best particle of the swarm in current generation is randomly generated and then
particles of the swarm are randomly pulled into the black hole with a probability
p. This helps the algorithm escape local minima and substantially speeds up the evolution toward the global optimum.
Social learning plays an important role in behavior learning among social animals.
In contrast to individual learning, social learning allows individuals to learn behaviors
from others without the cost of individual trial and error. Social learning PSO [18]
introduces social learning mechanisms into PSO. Each particle learns from any of
the better particles (termed demonstrators) in the current swarm. Social learning
PSO adopts a dimension-dependent parameter control method. It performs well on
low-dimensional problems and is promising for solving large-scale problems as well.
In [5], agents in the swarm are categorized into explorers and settlers, which can
dynamically exchange their role in the search process. This particle task differen-
tiation is achieved through a different way of adjusting the particle velocities. The
coefficients of the cognitive and social component of the stochastic acceleration as
well as the inertia weight are related to the distance of each particle from the gbest
position found so far. This particle task differentiation enhances the local search
ability of the particles close to the gbest and improves the exploration ability of the
particles far from the gbest.
PSO lacks mechanisms which add diversity to exploration in the search process.
Inspired by the collective response behavior of starlings, starling PSO [65] introduces
a mechanism to add diversity into PSO. This mechanism consists of initialization,
identifying seven nearest neighbors, and orientation change.
In PSO, the particles move through the solution space via perturbations of their positions, which are influenced by other particles, whereas in EAs, individuals breed
with one another to produce new individuals. Compared to EAs, PSO is easy to
implement and there are few parameters to adjust. In PSO, every particle remembers
its pbest and gbest, thus having a more effective memory capability than EAs have.
PSO is also more efficient in maintaining the diversity of the swarm, since all the
particles use the information related to the most successful particle in order to improve
themselves, whereas in EAs only the good solutions are saved.
Hybridization of EAs and PSO is usually implemented by incorporating genetic
operators into PSO to enhance the performance of PSO: to keep the best particles
[4], to increase the diversity, and to improve the ability to escape local minima [61].
In [4], a tournament selection process is applied to replace each poorly performing
particle’s velocity and position with those of better performing particles. In [61], basic
PSO is combined with arithmetic crossover. The hybrid PSOs combine the velocity
and position update rules with the ideas of breeding and subpopulations. The swarm
is divided into subpopulations, and a breeding operator is used within a subpopulation
or between the subpopulations to increase the diversity of the population. In [82], the
standard velocity and position update rules of PSO are combined with the concepts
of selection, crossover, and mutation. A breeding ratio is employed to determine
the proportion of the population that undergoes breeding procedure in the current
generation and the portion to perform regular PSO operation. Grammatical swarm
adopts PSO coupled to a grammatical evolution genotype–phenotype mapping to
generate programs [67].
Evolutionary self-adapting PSO [63] grants a PSO scheme an explicit selection procedure and self-adapting properties for its parameters. This selection acts on the weights or parameters governing the behavior of a particle, and the particle movement operator is introduced to generate diversity.
In [39], mutation, crossover, and elitism are incorporated into PSO. The upper-
half of the best-performing individuals, known as elites, are regarded as a swarm
and enhanced by PSO. The enhanced elites constitute half of the population in the
new generation, while crossover and mutation operations are applied to the enhanced
elites to generate the other half.
AMALGAM-SO [90] implements self-adaptive multimethod search using a sin-
gle universal genetic operator for population evolution. It merges the strengths of
CMA-ES, GA, and PSO for population evolution during each generation and imple-
ments a self-adaptive learning strategy to automatically tune the number of offspring.
The method scales well with increasing number of dimensions, converges in the close
proximity of the global minimum for functions with noise induced multimodality,
and is designed to take full advantage of the power of distributed computer networks.
Time-varying acceleration coefficients (TVAC) [78] are introduced to efficiently
control the local search and convergence to the global optimum, in addition to the
time-varying inertia weight factor in PSO. Mutated PSO with TVAC adds a pertur-
bation to a randomly selected modulus of the velocity vector of a random particle by
predefined probability. Self-organizing hierarchical PSO with TVAC considers only
the social and cognitive parts, but eliminates the inertia term in the velocity update
rule. Particles are reinitialized whenever they are stagnated in the search space, or
any component of a particle’s velocity vector becomes very close to zero.
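A typical TVAC schedule can be written as a linear interpolation; the endpoint values 2.5 → 0.5 for the cognitive coefficient and 0.5 → 2.5 for the social coefficient are commonly used settings, though [78] treats them as tunable:

```python
def tvac(t, T, c1_init=2.5, c1_final=0.5, c2_init=0.5, c2_final=2.5):
    """Time-varying acceleration coefficients: the cognitive coefficient
    c1 decays and the social coefficient c2 grows linearly over T iterations."""
    frac = t / T
    c1 = c1_init + (c1_final - c1_init) * frac
    c2 = c2_init + (c2_final - c2_init) * frac
    return c1, c2
```

Early in the run the particles favor their own pbest (exploration); late in the run the social pull toward gbest dominates (convergence).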
Generate a random number r for each bit. If r < Ti,d, then xi,d is interpreted as 1; otherwise, as 0. The velocity term is limited to |vi,d| < Vmax. To prevent Ti,d from approaching 0 or 1, one can set Vmax = 4 [44].
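The bit-sampling rule above can be sketched as follows. Here we take Ti,d to be the sigmoid of the clamped velocity, the standard choice in Kennedy and Eberhart's discrete PSO [44]; the sigmoid itself is not spelled out in the excerpt above, so treat it as an assumption of this sketch:

```python
import math
import random

def sample_bit(v, vmax=4.0, rng=random):
    """Binary PSO bit update: clamp velocity to [-vmax, vmax], map it to a
    probability T via the sigmoid, and interpret the bit as 1 iff r < T."""
    v = max(-vmax, min(vmax, v))          # keeps T within [sigmoid(-4), sigmoid(4)]
    T = 1.0 / (1.0 + math.exp(-v))
    return 1 if rng.random() < T else 0
```

With Vmax = 4, T stays within roughly [0.018, 0.982], so every bit always retains a small chance of flipping.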
Based on the discrete PSO proposed in [43], multiphase discrete PSO [3] is for-
mulated by using an alternative velocity update technique, which incorporates hill
climbing using random stepsize in the search space. The particles are divided into
groups that follow different search strategies. A discrete PSO algorithm is proposed
in [56] for flowshop scheduling, where the particle and velocity are redefined, an
efficient approach is developed to move a particle to the new sequence, and a local
search scheme is incorporated.
Jumping PSO [62] is a discrete PSO inspired from frogs. The positions x i of
particles jump from one solution to another. It does not consider any velocity. Each
particle has three attractors: its own best position, the best position of its social
neighborhood, and the gbest position. A jump approaching an attractor consists of
changing a feature of the current solution by a feature of the attractor.
Multiple swarms in PSO explore the search space together to attain the objective of
finding the optimal solutions. This resembles many bird species joining to form a
flock in a geographical region, to achieve certain foraging behaviors that benefit one
another. Each species has different food preferences. This corresponds to multiple
swarms locating possible solutions in different regions of the solution space. This
is also similar to people all over the world: In each country, there is a different
lifestyle that is best suited to the ethnic culture. A species can be defined as a group
of individuals sharing common attributes according to some similarity metric.
Multi-swarm PSO is used for solving multimodal problems and combating PSO's tendency toward premature convergence. It typically adopts a heuristically chosen number
of swarms with a fixed swarm size throughout the search process. Multi-swarm PSO
is also used to locate and track changing optima in a dynamic environment.
Based on guaranteed convergence PSO [87], niching PSO [11] creates a subswarm
from a particle and its nearest spatial neighbor, if the variance in that particle’s fitness
is below a threshold. Niching PSO initially sets up subswarm leaders by training the
main swarm utilizing the basic PSO using no social information (c2 = 0). Niches are
then identified and a subswarm radius is set. As optimization progresses, particles
are allowed to join subswarms, which are in turn allowed to merge. Once the velocities have become sufficiently small, the particles converge to their subswarm optima.
In turbulent PSO [19], the population is divided into two subswarms: one sub-
swarm following the gbest, while the other moving in the opposite direction. The
particles’ positions are dependent on their lbest, their corresponding subswarm’s
best, and the gbest collected from the two subswarms. If the gbest has not improved
for fifteen successive iterations, the worst particles of a subswarm are replaced by the
best ones from the other subswarm, and the subswarms switch their flight directions.
Turbulent PSO avoids premature convergence by replacing the velocity memory with a random turbulence operator when a particle's velocity falls below a threshold. Fuzzy adaptive turbulent PSO [58] is a hybrid of turbulent PSO with a fuzzy logic controller that adaptively regulates the velocity parameters.
Speciation-based PSO [54] uses spatial speciation for locating multiple local
optima in parallel. Each species is grouped around a dominating particle called the
species seed. At each iteration, species seeds are identified from the entire popula-
tion, and are then adopted as neighborhood bests for these individual species groups
separately. Dynamic speciation-based PSO [69] modifies speciation-based PSO for
tracking multiple optima in the dynamic environment by comparing the fitness of
each particle’s current lbest with its previous record to continuously monitor the
moving peaks, and by using a predefined species population size to quantify the
crowdedness of species before they are reinitialized randomly in the solution space
to search for new possible optima.
In adaptive sequential niche PSO [94], the fitness values of the particles are mod-
ified by a penalty function to prevent all subswarms from converging to the same
optima. A niche radius is not required. It can find all the optimal solutions of a multimodal function sequentially.
In [48], the swarm population is clustered into a certain number of clusters. Then,
a particle’s lbest is replaced by its cluster center, and the particles’ gbest is replaced
by the neighbors’ best. This approach has improved the diversity and exploration of
PSO. In [72], in order to solve multimodal problems, clustering is used to identify
the niches in the swarm population and then to restrict the neighborhood of each
particle to the other particles in the same cluster in order to perform a local search
for any local minima located within the clusters.
In [9], the population of particles is split into a set of interacting swarms,
which interact locally by an exclusion parameter and globally through a new anti-
convergence operator. Each swarm maintains diversity either by using charged or
quantum particles. Quantum swarm optimization (QSO) builds on the atomic pic-
ture of charged PSO, and uses a quantum analogy for the dynamics of the charged
particles. Multi-QSO uses multiple swarms [9].
In multigrouped PSO [81], N solutions of a multimodal function can be searched
with N groups. A repulsive velocity component is added to the particle update equa-
tion, which will push the intruding particles out of the other group’s gbest radius.
The predefined radius is allowed to increase linearly during the search process to
avoid several groups from settling on the same peak.
When multi-swarms are used for enhancing diversity of PSO, each swarm per-
forms a PSO paradigm independently. After some predefined generations, the swarms
will exchange information based on a diversified list of particles. Some strategies
for information exchange between two or more swarms are given in [28,75]. In [28],
two subswarms are updated independently for a certain interval, and then, the best
particles (information) in each subswarm are exchanged. In [75], swarm population
is initially clustered into a predefined number of swarms. Particles’ positions are first
updated using a PSO equation where three levels of communications are facilitated,
namely, personal, global, and neighborhood levels. At every iteration, the particles in
a swarm are divided into two sets: One set of particles is sent to another swarm, while
the other set of particles will be replaced by the individuals from other swarms [75].
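As one concrete instance of such an exchange, the scheme of [28], where the best particles of two subswarms are exchanged after a fixed interval, might be sketched as below; the pairing of best-in with worst-out is our illustrative choice, not a detail taken from [28]:

```python
def exchange_best(swarm_a, swarm_b, fitness):
    """Information exchange between two subswarms (minimization): each
    subswarm's best particle replaces the worst particle of the other."""
    best_a, best_b = min(swarm_a, key=fitness), min(swarm_b, key=fitness)
    worst_a, worst_b = max(swarm_a, key=fitness), max(swarm_b, key=fitness)
    swarm_a[swarm_a.index(worst_a)] = best_b
    swarm_b[swarm_b.index(worst_b)] = best_a
    return swarm_a, swarm_b
```

Each subswarm keeps searching independently between exchanges, so diversity is preserved while good information still propagates.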
Tribes may benefit from the removal of their weakest member, or from the addition of a new member. The best particles of the tribes are exchanged among all the tribes. Relationships between particles in a tribe are similar to those defined in global PSO. TRIBES is efficient at quickly finding a good region of the landscape, but less efficient at local refinement.
Problems
9.1 Explain why in basic PSO with a neighbor structure a larger neighbor number
has faster convergence, but in fully informed PSO the opposite is true.
9.2 Implement the particleswarm solver of MATLAB Global Optimization Toolbox for solving a benchmark function. Test the influence of different parameter settings.
References
1. Akat SB, Gazi V. Decentralized asynchronous particle swarm optimization. In: Proceedings of
the IEEE swarm intelligence symposium, St. Louis, MO, USA, September 2008. p. 1–8.
2. Alatas B, Akin E, Ozer AB. Chaos embedded particle swarm optimization algorithms. Chaos Solitons Fractals. 2009;40(5):1715–34.
3. Al-kazemi B, Mohan CK. Multi-phase discrete particle swarm optimization. In: Proceedings
of the 4th international workshop on frontiers in evolutionary algorithms, Kinsale, Ireland,
January 2002.
4. Angeline PJ. Using selection to improve particle swarm optimization. In: Proceedings of IEEE
congress on evolutionary computation, Anchorage, AK, USA, May 1998. p. 84–89.
5. Ardizzon G, Cavazzini G, Pavesi G. Adaptive acceleration coefficients for a new search diver-
sification strategy in particle swarm optimization algorithms. Inf Sci. 2015;299:337–78.
6. Baskar S, Suganthan P. A novel concurrent particle swarm optimization. In: Proceedings of
IEEE congress on evolutionary computation (CEC), Beijing, China, June 2004. p. 792–796.
7. Bastos-Filho CJA, Carvalho DF, Figueiredo EMN, de Miranda PBC. Dynamic clan particle
swarm optimization. In: Proceedings of the 9th international conference on intelligent systems
design and applications (ISDA’09), Pisa, Italy, November 2009. p. 249–254.
8. Blackwell TM, Bentley P. Don’t push me! Collision-avoiding swarms. In: Proceedings of
congress on evolutionary computation, Honolulu, HI, USA, May 2002, vol. 2. p. 1691–1696.
9. Blackwell T, Branke J. Multiswarms, exclusion, and anti-convergence in dynamic environ-
ments. IEEE Trans Evol Comput. 2006;10(4):459–72.
10. Bonyadi MR, Michalewicz Z. A locally convergent rotationally invariant particle swarm opti-
mization algorithm. Swarm Intell. 2014;8:159–98.
11. Brits R, Engelbrecht AP, van den Bergh F. A niching particle swarm optimizer. In: Proceedings
of the 4th Asia-Pacific conference on simulated evolution and learning, Singapore, November
2002. p. 692–696.
12. Carlisle A, Dozier G. An off-the-shelf PSO. In: Proceedings of workshop on particle swarm
optimization, Indianapolis, IN, USA, January 2001. p. 1–6.
13. Carvalho DF, Bastos-Filho CJA. Clan particle swarm optimization. In: Proceedings of IEEE
congress on evolutionary computation (CEC), Hong Kong, China, June 2008. p. 3044–3051.
14. Cervantes A, Galvan IM, Isasi P. AMPSO: a new particle swarm method for nearest neighbor-
hood classification. IEEE Trans Syst Man Cybern Part B. 2009;39(5):1082–91.
15. Chatterjee S, Goswami D, Mukherjee S, Das S. Behavioral analysis of the leader particle during
stagnation in a particle swarm optimization algorithm. Inf Sci. 2014;279:18–36.
16. Chen H, Zhu Y, Hu K. Discrete and continuous optimization based on multi-swarm coevolution.
Nat Comput. 2010;9:659–82.
17. Chen W-N, Zhang J, Lin Y, Chen N, Zhan Z-H, Chung HS-H, Li Y, Shi Y-H. Particle swarm
optimization with an aging leader and challengers. IEEE Trans Evol Comput. 2013;17(2):241–
58.
18. Cheng R, Jin Y. A social learning particle swarm optimization algorithm for scalable optimiza-
tion. Inf Sci. 2015;291:43–60.
19. Chen G, Yu J. Two sub-swarms particle swarm optimization algorithm. In: Advances in natural
computation, vol. 3612 of Lecture notes in computer science. Berlin: Springer; 2005. p. 515–
524.
20. Cleghorn CW, Engelbrecht AP. A generalized theoretical deterministic particle swarm model.
Swarm Intell. 2014;8:35–59.
21. Cleghorn CW, Engelbrecht AP. Particle swarm variants: standardized convergence analysis.
Swarm Intell. 2015;9:177–203.
22. Clerc M, Kennedy J. The particle swarm-explosion, stability, and convergence in a multidi-
mensional complex space. IEEE Trans Evol Comput. 2002;6(1):58–73.
23. Clerc M. Particle swarm optimization. In: International scientific and technical encyclopaedia.
Hoboken: Wiley; 2006.
24. Coelho LS, Krohling RA. Predictive controller tuning using modified particle swarm optimi-
sation based on Cauchy and Gaussian distributions. In: Proceedings of the 8th online world
conference soft computing and industrial applications, Dortmund, Germany, September 2003.
p. 7–12.
25. de Oca MAM, Stutzle T, Birattari M, Dorigo M. Frankenstein’s PSO: a composite particle
swarm optimization algorithm. IEEE Trans Evol Comput. 2009;13(5):1120–32.
26. de Oca MAM, Stutzle T, Van den Enden K, Dorigo M. Incremental social learning in particle
swarms. IEEE Trans Syst Man Cybern Part B. 2011;41(2):368–84.
27. Eberhart RC, Shi Y. Comparing inertia weights and constriction factors in particle swarm
optimization. In: Proceedings of IEEE congress on evolutionary computation (CEC), La Jolla,
CA, USA, July 2000. p. 84–88.
28. El-Abd M, Kamel MS. Information exchange in multiple cooperating swarms. In: Proceedings
of IEEE swarm intelligence symposium, Pasadena, CA, USA, June 2005. p. 138–142.
29. Esquivel SC, Coello CAC. On the use of particle swarm optimization with multimodal func-
tions. In: Proceedings of IEEE congress on evolutionary computation (CEC), Canberra, Aus-
tralia, 2003. p. 1130–1136.
30. Fan SKS, Liang YC, Zahara E. Hybrid simplex search and particle swarm optimization for the
global optimization of multimodal functions. Eng Optim. 2004;36(4):401–18.
31. Fernandez-Martinez JL, Garcia-Gonzalo E. Stochastic stability analysis of the linear continuous
and discrete PSO models. IEEE Trans Evol Comput. 2011;15(3):405–23.
32. Hakli H, Uguz H. A novel particle swarm optimization algorithm with Levy flight. Appl Soft
Comput. 2014;23:333–45.
33. He S, Wu QH, Wen JY, Saunders JR, Paton RC. A particle swarm optimizer with passive
congregation. Biosystems. 2004;78:135–47.
34. Higashi N, Iba H. Particle swarm optimization with Gaussian mutation. In: Proceedings of
IEEE swarm intelligence symposium, Indianapolis, IN, USA, April 2003. p. 72–79.
35. Ho S-Y, Lin H-S, Liauh W-H, Ho S-J. OPSO: orthogonal particle swarm optimization and its
application to task assignment problems. IEEE Trans Syst Man Cybern Part A. 2008;38(2):288–
98.
36. Hsieh S-T, Sun T-Y, Liu C-C, Tsai S-J. Efficient population utilization strategy for particle
swarm optimizer. IEEE Trans Syst Man Cybern Part B. 2009;39(2):444–56.
37. Huang H, Qin H, Hao Z, Lim A. Example-based learning particle swarm optimization for
continuous optimization. Inf Sci. 2012;182:125–38.
38. Janson S, Middendorf M. A hierarchical particle swarm optimizer and its adaptive variant.
IEEE Trans Syst Man Cybern Part B. 2005;35(6):1272–82.
39. Juang C-F. A hybrid of genetic algorithm and particle swarm optimization for recurrent network
design. IEEE Trans Syst Man Cybern Part B. 2004;34(2):997–1006.
40. Juang C-F, Chung I-F, Hsu C-H. Automatic construction of feedforward/recurrent fuzzy
systems by clustering-aided simplex particle swarm optimization. Fuzzy Sets Syst.
2007;158(18):1979–96.
41. Kadirkamanathan V, Selvarajah K, Fleming PJ. Stability analysis of the particle dynamics in
particle swarm optimizer. IEEE Trans Evol Comput. 2006;10(3):245–55.
42. Kennedy J. Bare bones particle swarms. In: Proceedings of IEEE swarm intelligence sympo-
sium, Indianapolis, IN, USA, April 2003. p. 80–87.
43. Kennedy J, Eberhart RC. A discrete binary version of the particle swarm algorithm. In: Pro-
ceedings of IEEE conference on systems, man, and cybernetics, Orlando, FL, USA, October
1997. p. 4104–4109.
44. Kennedy J, Eberhart RC. Swarm intelligence. San Francisco, CA: Morgan Kaufmann; 2001.
45. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of IEEE international
conference on neural networks, Perth, WA, USA, November 1995, vol. 4. p. 1942–1948.
46. Kennedy J, Mendes R. Population structure and particle swarm performance. In: Proceedings
of congress on evolutionary computation, Honolulu, HI, USA, May 2002. p. 1671–1676.
47. Kennedy J. Small worlds and mega-minds: Effects of neighborhood topology on particle swarm
performance. In: Proceedings of congress on evolutionary computation (CEC), Washington,
DC, USA, July 1999. p. 1931–1938.
48. Kennedy J. Stereotyping: improving particle swarm performance with cluster analysis. In:
Proceedings of congress on evolutionary computation (CEC), La Jolla, CA, July 2000. p.
1507–1512.
49. Kennedy J. The particle swarm: social adaptation of knowledge. In: Proceedings of IEEE
international conference on evolutionary computation, Indianapolis, USA, April 1997. p. 303–
308.
50. Koh B-I, George AD, Haftka RT, Fregly BJ. Parallel asynchronous particle swarm optimization.
Int J Numer Methods Eng. 2006;67:578–95.
51. Krohling RA. Gaussian swarm: a novel particle swarm optimization algorithm. In: Proceedings
of IEEE conference cybernetics and intelligent systems, Singapore, December 2004. p. 372–
376.
52. Langdon WB, Poli R. Evolving problems to learn about particle swarm optimizers and other
search algorithms. IEEE Trans Evol Comput. 2007;11(5):561–78.
53. Lanzarini L, Leza V, De Giusti A. Particle swarm optimization with variable population size.
In: Proceedings of the 9th international conference on artificial intelligence and soft computing,
Zakopane, Poland, June 2008, vol. 5097 of Lecture notes in computer science. Berlin: Springer;
2008. p. 438–449.
54. Li X. Adaptively choosing neighbourhood bests using species in a particle swarm optimizer for
multimodal function optimization. In: Proceedings of genetic and evolutionary computation
conference (GECCO), Seattle, WA, USA, June 2004. p. 105–116.
55. Liang JJ, Qin AK, Suganthan PN, Baskar S. Comprehensive learning particle swarm optimizer
for global optimization of multimodal functions. IEEE Trans Evol Comput. 2006;10(3):281–
95.
56. Liao C-J, Tseng C-T, Luarn P. A discrete version of particle swarm optimization for flowshop
scheduling problems. Comput Oper Res. 2007;34:3099–111.
57. Liu Y, Qin Z, Shi Z, Lu J. Center particle swarm optimization. Neurocomputing. 2007;70:672–
9.
58. Liu H, Abraham A. Fuzzy adaptive turbulent particle swarm optimization. In: Proceedings of
the 5th international conference on hybrid intelligent systems (HIS’05), Rio de Janeiro, Brazil,
November 2005. p. 445–450.
59. Loengarov A, Tereshko V. A minimal model of honey bee foraging. In: Proceedings of IEEE
swarm intelligence symposium, Indianapolis, IN, USA, May 2006. p. 175–182.
60. Lovbjerg M, Krink T. Extending particle swarm optimisers with self-organized criticality. In:
Proceedings of congress on evolutionary computation (CEC), Honolulu, HI, USA, May 2002.
p. 1588–1593.
61. Lovbjerg M, Rasmussen TK, Krink T. Hybrid particle swarm optimiser with breeding and sub-
populations. In: Proceedings of genetic and evolutionary computation conference (GECCO),
Menlo Park, CA, USA, August 2001. p. 469–476.
62. Martinez-Garcia FJ, Moreno-Perez JA. Jumping frogs optimization: a new swarm method for
discrete optimization. Technical Report DEIOC 3/2008, Department of Statistics, O.R. and
Computing, University of La Laguna, Tenerife, Spain, 2008.
63. Miranda V, Fonseca N. EPSO—Best of two worlds meta-heuristic applied to power system
problems. In: Proceedings of IEEE congress on evolutionary computation, Honolulu, HI, USA,
May 2002. p. 1080–1085.
64. Mendes R, Kennedy J, Neves J. The fully informed particle swarm: simpler, maybe better.
IEEE Trans Evol Comput. 2004;8(3):204–10.
65. Netjinda N, Achalakul T, Sirinaovakul B. Particle swarm optimization inspired by starling flock
behavior. Appl Soft Comput. 2015;35:411–22.
66. Niu B, Zhu Y, He X. Multi-population cooperative particle swarm optimization. In: Proceedings
of European conference on advances in artificial life, Canterbury, UK, September 2005. p. 874–
883.
67. O’Neill M, Brabazon A. Grammatical swarm: the generation of programs by social program-
ming. Nat Comput. 2006;5:443–62.
68. Pan F, Hu X, Eberhart RC, Chen Y. An analysis of bare bones particle swarm. In: Proceedings
of the IEEE swarm intelligence symposium, St. Louis, MO, USA, September 2008. p. 21–23.
69. Parrott D, Li X. Locating and tracking multiple dynamic optima by a particle swarm model
using speciation. IEEE Trans Evol Comput. 2006;10(4):440–58.
70. Parsopoulos KE, Vrahatis MN. UPSO: a unified particle swarm optimization scheme. In: Pro-
ceedings of the international conference of computational methods in sciences and engineering,
2004. The Netherlands: VSP International Science Publishers; 2004. p. 868–873.
71. Parsopoulos KE, Vrahatis MN. On the computation of all global minimizers through particle
swarm optimization. IEEE Trans Evol Comput. 2004;8(3):211–24.
72. Passaro A, Starita A. Clustering particles for multimodal function optimization. In: Proceedings
of ECAI workshop on evolutionary computation, Riva del Garda, Italy, 2006. p. 124–131.
73. Pedersen MEH, Chipperfield AJ. Simplifying particle swarm optimization. Appl Soft Comput.
2010;10(2):618–28.
74. Peram T, Veeramachaneni K, Mohan CK. Fitness-distance-ratio based particle swarm opti-
mization. In: Proceedings of the IEEE swarm intelligence symposium, Indianapolis, IN, USA,
April 2003. p. 174–181.
75. Pulido GT, Coello CAC. Using clustering techniques to improve the performance of a par-
ticle swarm optimizer. In: Proceedings of genetic and evolutionary computation conference
(GECCO), Seattle, WA, USA, June 2004. p. 225–237.
76. Qin Q, Cheng S, Zhang Q, Li L, Shi Y. Biomimicry of parasitic behavior in a coevolutionary par-
ticle swarm optimization algorithm for global optimization. Appl Soft Comput. 2015;32:224–
40.
77. Rada-Vilela J, Zhang M, Seah W. A performance study on synchronicity and neighborhood
size in particle swarm optimization. Soft Comput. 2013;17:1019–30.
78. Ratnaweera A, Halgamuge SK, Watson HC. Self-organizing hierarchical particle swarm opti-
mizer with time-varying acceleration coefficients. IEEE Trans Evol Comput. 2004;8(3):240–
55.
79. Reeves WT. Particle systems—a technique for modeling a class of fuzzy objects. ACM Trans
Graph. 1983;2(2):91–108.
80. Secrest BR, Lamont GB. Visualizing particle swarm optimization: Gaussian particle swarm
optimization. In: Proceedings of the IEEE swarm intelligence symposium, Indianapolis, IN,
USA, April 2003. p. 198–204.
81. Seo JH, Lim CH, Heo CG, Kim JK, Jung HK, Lee CC. Multimodal function optimization
based on particle swarm optimization. IEEE Trans Magn. 2006;42(4):1095–8.
82. Settles M, Soule T. Breeding swarms: a GA/PSO hybrid. In: Proceedings of genetic and evo-
lutionary computation conference (GECCO), Washington, DC, USA, June 2005. p. 161–168.
83. Shi Y, Eberhart RC. A modified particle swarm optimizer. In: Proceedings of IEEE congress
on evolutionary computation, Anchorage, AK, USA, May 1998. p. 69–73.
84. Silva A, Neves A, Goncalves T. An heterogeneous particle swarm optimizer with predator
and scout particles. In: Proceedings of the 3rd international conference on autonomous and
intelligent systems (AIS 2012), Aveiro, Portugal, June 2012. p. 200–208.
85. Stacey A, Jancic M, Grundy I. Particle swarm optimization with mutation. In: Proceedings of
IEEE congress on evolutionary computation (CEC), Canberra, Australia, December 2003. p.
1425–1430.
86. Suganthan PN. Particle swarm optimizer with neighborhood operator. In: Proceedings of IEEE
congress on evolutionary computation (CEC), Washington, DC, USA, July 1999. p. 1958–1962.
87. van den Bergh F, Engelbrecht AP. A new locally convergent particle swarm optimizer. In: Pro-
ceedings of IEEE conference on systems, man, and cybernetics, Hammamet, Tunisia, October
2002, vol. 3. p. 96–101.
88. van den Bergh F, Engelbrecht AP. A cooperative approach to particle swarm optimization.
IEEE Trans Evol Comput. 2004;8(3):225–39.
89. van den Bergh F, Engelbrecht AP. A study of particle swarm optimization particle trajectories.
Inf Sci. 2006;176(8):937–71.
90. Vrugt JA, Robinson BA, Hyman JM. Self-adaptive multimethod search for global optimization
in real-parameter spaces. IEEE Trans Evol Comput. 2009;13(2):243–59.
91. Wang H, Liu Y, Zeng S, Li C. Opposition-based particle swarm algorithm with Cauchy muta-
tion. In: Proceedings of the IEEE congress on evolutionary computation (CEC), Singapore,
September 2007. p. 4750–4756.
92. Yang C, Simon D. A new particle swarm optimization technique. In: Proceedings of the 18th
IEEE international conference on systems engineering, Las Vegas, NV, USA, August 2005. p.
164–169.
93. Zhan Z-H, Zhang J, Li Y, Chung HS-H. Adaptive particle swarm optimization. IEEE Trans
Syst Man Cybern Part B. 2009;39(6):1362–81.
94. Zhang J, Huang DS, Lok TM, Lyu MR. A novel adaptive sequential niche technique for
multimodal function optimization. Neurocomputing. 2006;69:2396–401.
95. Zhang J, Liu K, Tan Y, He X. Random black hole particle swarm optimization and its application.
In: Proceedings on IEEE international conference on neural networks and signal processing,
Nanjing, China, June 2008. p. 359–365.