
Particle Swarm Optimization

PSO can locate the region of the optimum faster than EAs, but once in this region
it progresses slowly due to the fixed velocity stepsize. Almost all variants of PSO
try to solve the stagnation problem. This chapter is dedicated to PSO as well as its
variants.

9.1 Introduction

The notion of employing many autonomous particles that act together in simple ways
to produce seemingly complex emergent behavior was initially considered to solve
the problem of rendering images in computer animations [79]. A particle system
stochastically generates a series of moving points. Each particle is assigned an initial
velocity vector. It may also have additional characteristics such as color, texture, and
limited lifetime. Iteratively, velocity vectors are adjusted by some random factor. In
computer graphics and computer games, particle systems are ubiquitous and are the
de facto method for producing animated effects such as fire, smoke, clouds, gunfire,
water, cloth, explosions, magic, lighting, electricity, flocking, and many others. They
are defined by a set of points in space and a set of rules guiding their behavior
and appearance, e.g., velocity, color, size, shape, transparency, and rotation. This
decouples the creation of new complex effects from mathematics and programming.
Particle systems have also found wide use in global optimization.
PSO originates from studies of synchronous bird flocking, fish schooling, and bees
buzzing [22,44,45,59,83]. It evolves populations or swarms of individuals called
particles. Particles work under social behavior in swarms. PSO finds the global
best solution by simply adjusting the moving vector of each particle according to
its personal best (cognition aspect) and the global best (social aspect) positions of
particles in the entire swarm at each iteration.

© Springer International Publishing Switzerland 2016


K.-L. Du and M.N.S. Swamy, Search and Optimization by Metaheuristics,
DOI 10.1007/978-3-319-41192-7_9

Compared with ant colony algorithms and EAs, PSO requires only primitive
mathematical operators, less computational bookkeeping and generally fewer lines
of code, and thus it is computationally inexpensive in terms of both memory require-
ments and speed. PSO is popular due to its simplicity of implementation and its
ability to quickly converge to a reasonably acceptable solution.

9.2 Basic PSO Algorithms

The socio-cognitive learning process of basic PSO is based on a particle’s own
experience and the experience of the most successful particle. For an optimization
problem of n variables, a swarm of N P particles is defined, where each particle is
assigned a random position in the n-dimensional space as a candidate solution. Each
particle has its own trajectory, namely position x i and velocity v i , and moves in the
search space by successively updating its trajectory. Populations of particles modify
their trajectories based on the best positions visited earlier by themselves and other
particles. All particles have fitness values that are evaluated by the fitness function
to be optimized. The particles are flown through the solution space by following the
current optimum particles. The algorithm initializes a group of particles with random
positions and then searches for optima by updating generations. In every iteration,
each particle is updated by following the two best values, namely, the particle best
pbest, denoted x_i*, i = 1, . . . , N_P, which is the best solution it has achieved so far,
and the global best gbest, denoted x_g, which is the best position obtained so far by any
particle in the population. The best position within a particle's neighborhood is called
the local best, lbest.
At iteration t + 1, the swarm can be updated by [45]

v_i(t + 1) = v_i(t) + c r1 (x_i*(t) − x_i(t)) + c r2 (x_g(t) − x_i(t)),   (9.1)

x_i(t + 1) = x_i(t) + v_i(t + 1),   i = 1, . . . , N_P,   (9.2)
where the acceleration constant c > 0, and r1 and r2 are uniform random numbers
within [0, 1]. This basic PSO may lead to swarm explosion and divergence due to
lack of control of the magnitude of the velocities. This can be solved by setting a
threshold vmax on the absolute value of velocity v i .
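As an illustration, one swarm update under (9.1)–(9.2) with velocity clamping can be sketched as follows (a minimal NumPy sketch; the function name and array layout are our own, not from the text):

```python
import numpy as np

def basic_pso_step(x, v, pbest, gbest, c=2.0, vmax=1.0, rng=None):
    """One update of basic PSO, Eqs. (9.1)-(9.2), with velocity clamping.

    x, v, pbest are (NP, n) arrays; gbest is an (n,) array.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)  # fresh uniform numbers in [0, 1) per component
    r2 = rng.random(x.shape)
    v = v + c * r1 * (pbest - x) + c * r2 * (gbest - x)  # Eq. (9.1)
    v = np.clip(v, -vmax, vmax)  # threshold |v| <= vmax against swarm explosion
    x = x + v                    # Eq. (9.2)
    return x, v
```

After the clamp, every velocity component lies in [−vmax, vmax], which prevents the swarm explosion described above.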
PSO can be physically interpreted as a particular discretization of a stochastic
damped mass–spring system: the so-called PSO continuous model. From (9.1), the
velocities of particles are determined by their previous velocities, cognitive learning
(the second term), and social learning (the third term). Due to social learning, all the
particles are attracted by gbest and move toward it. The other two parts correspond to
the autonomy property, which makes particles keep their own information. Therefore,
during the search all particles move toward the region where gbest is located.
Because all particles in the swarm learn from gbest even if gbest is far from the
global optimum, particles may easily be attracted to the gbest region and get trapped in
a local optimum for multimodal problems. If the gbest position lies at a local
minimum, other particles in the swarm may also be trapped. If an early solution is
suboptimal, the swarm can easily stagnate around it without any pressure to continue
exploration. This can be seen from (9.1): if x_i(t) = x_i*(t) = x_g(t), then the velocity
update depends only on the previous velocity v_i(t). If the previous velocities v_i(t) are
very close to zero, then all the particles will stop moving once they catch up with
the gbest particle. Even worse, the gbest point may not be a local minimum. This
phenomenon is referred to as stagnation. To avoid stagnation, reseeding or partial
restart is introduced by generating new particles at distinct places of the search space.
Almost all variants of PSO try to solve the local optimum or stagnation problem.
PSO can locate the region of the optimum faster than EAs. However, once in this
region it progresses slowly due to the fixed velocity stepsize. Linearly decreasing
weight PSO (LDWPSO) [83] effectively balances the global and local search abilities
of the swarm by introducing a linearly decreasing inertia weight on the previous
velocity of the particle into (9.1):

v_i(t + 1) = α v_i(t) + c1 r1 (x_i*(t) − x_i(t)) + c2 r2 (x_g(t) − x_i(t)),   (9.3)
where α is called the inertia weight, and the positive constants c1 and c2 are, respec-
tively, cognitive and social parameters. Typically, c1 = 2.0, c2 = 2.0, and α gradu-
ally decreases from αmax to αmin :
α(t) = αmax − (αmax − αmin) t / T,   (9.4)
T being the maximum number of iterations. One can select αmax = 1 and αmin = 0.1.
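The schedule (9.4) is a one-line function (an illustrative sketch; the defaults αmax = 1 and αmin = 0.1 follow the suggestion above):

```python
def inertia_weight(t, T, alpha_max=1.0, alpha_min=0.1):
    """Linearly decreasing inertia weight of LDWPSO, Eq. (9.4)."""
    return alpha_max - (alpha_max - alpha_min) * t / T
```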
The pseudocode of PSO is given in Algorithm 9.1.
Center PSO [57] introduces into LDWPSO a center particle, which is set to
the swarm center at every iteration. The center particle has no velocity, but it
takes part in all operations other than the velocity calculation in the same way as
ordinary particles, such as fitness evaluation and competition for the best particle. All
particles oscillate around the swarm center and gradually converge toward it. The
center particle often becomes the gbest of the swarm during the run. Therefore, it
has more opportunities to guide the search of the whole swarm, and influences the
performance greatly. Center PSO achieves both better solutions and faster
convergence than LDWPSO.
PSO, DE, and CMA-ES are compared using certain fitness landscapes evolved
with GP in [52]. DE may get stuck in local optima most of the time for some problem
landscapes. However, over similar landscapes PSO always finds the global optimum
within a maximum time bound. DE sometimes has a limited ability to move
its population large distances across the search space if the population is clustered
in a limited portion of it.
Instead of applying inertia to the velocity memory, constriction PSO [22] applies
a constriction factor χ to control the magnitude of velocities:
v_i(t + 1) = χ [v_i(t) + ϕ1 r1 (x_i*(t) − x_i(t)) + ϕ2 r2 (x_g(t) − x_i(t))],   (9.5)

χ = 2 / |2 − ϕ − sqrt(ϕ² − 4ϕ)|,   (9.6)

Algorithm 9.1 (PSO).

1. Set t = 1.
Initialize each particle in the population by randomly selecting values for its position
x i and velocity v i , i = 1, . . . , N P .
2. Repeat:
a. Calculate the fitness value of each particle i.
If the fitness value of particle i is better than its best fitness value found
so far, then revise x_i*(t).
b. Determine the location of the particle with the highest fitness and revise x g (t) if
necessary.
c. For each particle i, calculate its velocity according to (9.1) or (9.3).
d. Update the location of each particle i according to (9.2).
e. Set t = t + 1.
until stopping criteria are met.
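Algorithm 9.1 can be sketched end to end as follows (a hedged Python illustration using the inertia-weight velocity update (9.3) with the schedule (9.4); the sphere test function, the box clipping, and all names are our own choices, not prescribed by the text):

```python
import numpy as np

def pso_minimize(f, bounds, NP=40, T=200, c1=2.0, c2=2.0,
                 alpha_max=1.0, alpha_min=0.1, seed=0):
    """Minimize f over a box via LDWPSO: Algorithm 9.1 with Eq. (9.3)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    n = lo.size
    x = rng.uniform(lo, hi, (NP, n))               # step 1: random positions
    v = rng.uniform(-(hi - lo), hi - lo, (NP, n))  # ... and velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = int(np.argmin(pbest_val))                  # gbest index (minimization)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for t in range(T):                             # step 2: main loop
        alpha = alpha_max - (alpha_max - alpha_min) * t / T      # Eq. (9.4)
        r1, r2 = rng.random((NP, n)), rng.random((NP, n))
        v = alpha * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # (9.3)
        x = np.clip(x + v, lo, hi)                 # Eq. (9.2), kept in the box
        val = np.array([f(p) for p in x])
        better = val < pbest_val                   # step 2a: revise x_i*
        pbest[better], pbest_val[better] = x[better], val[better]
        g = int(np.argmin(pbest_val))              # step 2b: revise x_g
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val
```

On the two-dimensional sphere function f(x) = ||x||², this sketch typically drives the objective well below 10⁻² within 200 iterations.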

In (9.5)–(9.6), ϕ = ϕ1 + ϕ2 > 4. With this formulation, the velocity limit vmax is no
longer necessary, and the algorithm can guarantee convergence without clamping the
velocity. It is suggested that ϕ = 4.1 (ϕ1 = ϕ2 = 2.05), giving χ ≈ 0.729 [27]. When
α = χ and ϕ1 + ϕ2 > 4, the constriction and inertia approaches are algebraically
equivalent and improved performance could be achieved across a wide range of
problems [27]. Constriction PSO has faster convergence than LDWPSO, but it is
prone to be trapped in local optima for multimodal functions.
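Equation (9.6) is easy to check numerically (an illustrative sketch; ϕ = 4.1 should reproduce the cited χ ≈ 0.729):

```python
import math

def constriction_factor(phi):
    """Constriction coefficient chi of Eq. (9.6); requires phi = phi1 + phi2 > 4."""
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```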

9.2.1 Bare-Bones PSO

Bare-bones PSO [42], the simplest version of PSO, eliminates the velocity equation
of PSO: no inertia weight, acceleration coefficients, or velocities are used. Instead,
a Gaussian distribution based on the personal and global best positions is sampled
to update the particles' positions. Bare-bones PSO has the following update equations:
x_{i,j}(t + 1) = g_{i,j}(t) + σ_{i,j}(t) N(0, 1),   (9.7)

g_{i,j}(t) = 0.5 (x*_{i,j}(t) + x^g_j(t)),   (9.8)

σ_{i,j}(t) = |x*_{i,j}(t) − x^g_j(t)|,   (9.9)

where subscripts i, j denote the ith particle and jth dimension, respectively, N (0, 1)
is the Gaussian distribution with zero mean and unit variance. The method can be
derived from basic PSO [68]. An alternative version sets x_{i,j}(t + 1) by (9.7) with
50% probability, and to the previous best position x*_{i,j}(t) otherwise. Bare-bones
PSO still suffers from the problem of premature convergence.
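The sampling rule (9.7)–(9.9) can be sketched as follows (an illustrative NumPy sketch; names are our own):

```python
import numpy as np

def bare_bones_step(pbest, gbest, rng=None):
    """Sample new positions per Eqs. (9.7)-(9.9): Gaussian centered at the
    pbest/gbest midpoint with per-dimension spread |pbest - gbest|."""
    rng = np.random.default_rng() if rng is None else rng
    mu = 0.5 * (pbest + gbest)       # Eq. (9.8): midpoint
    sigma = np.abs(pbest - gbest)    # Eq. (9.9): spread
    return rng.normal(mu, sigma)     # Eq. (9.7)
```

Once a particle's pbest coincides with gbest, σ collapses to zero and the particle stops moving, which is exactly the premature-convergence risk noted above.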

9.2.2 PSO Variants Using Gaussian or Cauchy Distribution

In basic PSO, a uniform probability distribution is used to generate random numbers
for the coefficients r1 and r2. The use of Gaussian or Cauchy probability distributions
may improve the ability of fine-tuning or even escaping from local optima. In [24],
truncated Gaussian and Cauchy probability distributions are used to generate random
numbers for the velocity updating equation. In [80], a rule is used for moving the
particles of the swarm a Gaussian distance from the gbest and lbest. An additional
perturbation term can be introduced to the velocity updating equation as a Gaussian
mutation operator [34,85] or as a Cauchy mutation operator [29]. A Gaussian dis-
tribution is also used in a simplified PSO algorithm [42]. The velocity equation can
be updated based on the Gaussian distribution, where the constants c1 and c2 are
generated using the absolute value of the Gaussian distribution with zero mean and
unit standard deviation [51].
In [32], PSO is combined with Lévy flights to escape local minima and improve
global search capability. A Lévy flight is a random walk whose step sizes are drawn
from a Lévy distribution. The search becomes more efficient thanks to the occasional
long jumps made by the particles. A limit value is defined for each particle, and a
particle's counter is increased whenever it fails to improve its own solution by the
end of the current iteration. If the predefined limit is exceeded, the particle is
redistributed in the search space by the Lévy flight method.
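The text does not specify how the Lévy-distributed steps are generated; a common choice (an assumption here) is Mantegna's algorithm:

```python
import math
import numpy as np

def levy_step(n, beta=1.5, rng=None):
    """Draw n Levy-flight step sizes via Mantegna's algorithm: most steps
    are small, but occasional very long jumps occur (heavy tail)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, n)
    v = rng.normal(0.0, 1.0, n)
    return u / np.abs(v) ** (1 / beta)
```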

9.2.3 Stability Analysis of PSO

In [22], the stability analysis of PSO is implemented by simplifying PSO through
treating the random coefficients as constants; this leads to a deterministic second-
order linear dynamical system whose stability depends on the system poles or the
eigenvalues of the state matrix. In [41], sufficient conditions for the stability of
the particle dynamics are derived using the Lyapunov stability theorem. A stochastic
analysis of the linear continuous and generalized PSO models for the case of a
stochastic center of attraction is presented in [31]. Generalized PSO tends to
continuous PSO as the time step approaches zero.
Theoretically, each particle in PSO is proved to converge to the weighted average
of x_i* and x_g [22,89]:

lim_{t→∞} x_i(t) = (c1 x_i* + c2 x_g) / (c1 + c2),   (9.10)
where c1 and c2 are the two learning factors in PSO.

It is shown in [15] that during stagnation in PSO, the points sampled by the leader
particle lie on a specific line. The condition under which particles stick to exploring
one side of the stagnation point only is obtained, and the case where both sides
are explored is also given. Information about the gradient of the objective function
during stagnation in PSO is also obtained.
Under the generalized theoretical deterministic PSO model, conditions for particle
convergence to a point are derived in [20]. The model greatly weakens the stagnation
assumption, by assuming that each particle’s personal best and neighborhood best
can occupy an arbitrarily large number of unique positions.
In [21], an objective function is designed for assumption-free convergence analysis
of some PSO variants. It is found that canonical particle swarm’s topology does not
have an impact on the parameter region needed to ensure convergence. The parameter
region needed to ensure convergent particle behavior has been empirically obtained
for fully informed PSO, bare-bones PSO, and standard PSO 2011.
The issues associated with PSO are the stagnation of particles in some points in
the search space, inability to change the value of one or more decision variables,
poor performance with small swarms, lack of a guarantee of convergence even to
a local optimum, poor performance for an increasing number of dimensions, and
sensitivity to the rotation of the search space. A general form of velocity update rule
for PSO proposed in [10] is guaranteed to address all of these issues if the user-definable
function f satisfies the two conditions: (i) f is designed in such a way that for any
input vector x in the search space, there exists a region A which contains x and f (x)
can be located anywhere in A, and (ii) f is invariant under any affine transformation.

Example 9.1: We revisit the optimization problem treated in Example 2.1. The
Easom function is plotted in Figure 2.1. The global minimum value is −1 at
x = (π, π)T .
MATLAB Global Optimization Toolbox provides a PSO solver, particleswarm.
Using the default parameter settings, the particleswarm solver always finds
the global optimum very rapidly in ten random runs, for the range [−100, 100]^2.
This is because all the initial individuals, which are randomly selected in (0, 1), are
very close to the global optimum.
A fair evaluation of PSO is to set the initial population randomly from the entire
domain. We select an initial population size of 40 and other default parameters. For
20 random runs, the solver converged 19 times for a maximum of 100 generations.
For a random run, we have f (x) = −1.0000 at (3.1416, 3.1416) with 2363 function
evaluations, and all the individuals converge toward the global optimum. The evolu-
tion of a random run is illustrated in Figure 9.1. For this problem, we conclude that
the particleswarm solver outperforms ga and simulannealbnd solvers.
Figure 9.1 The evolution of a random run of PSO: the minimum and average objectives (final best −1, final mean ≈ −0.9885).

9.3 PSO Variants Using Different Neighborhood Topologies

A key feature of PSO is social information sharing among the neighborhood. Typical
neighborhood topologies are the von Neumann neighborhood, gbest, and lbest, as
shown in Figure 9.2. The simplest neighbor structure might be the ring structure.
Basic PSO uses gbest topology, in which the neighborhood consists of the whole
swarm, meaning that all the particles have the information of the globally found best
solution. Every particle is a neighbor of every other particle.
The lbest neighborhood has ring lattice topology: each particle generates a neigh-
borhood consisting of itself and its two or more immediate neighbors. The neighbors
may not be close to the generating particle in terms of objective function value or
position; instead, they are chosen by their adjacent indices.
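Index-based ring neighborhoods are easy to construct (an illustrative sketch; k is the number of neighbors on each side):

```python
def ring_neighbors(i, NP, k=1):
    """lbest ring neighborhood of particle i: itself plus its k immediate
    neighbors on each side, chosen by adjacent index (wrapping around)."""
    return [(i + d) % NP for d in range(-k, k + 1)]
```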

Figure 9.2 Swarms with different social networks: lbest, gbest, and von Neumann.
For the von Neumann neighborhood, each particle possesses four neighbors on a
two-dimensional lattice that is wrapped on all four sides (torus), with the particle in
the middle of its four neighbors; the neighborhood size is thus fixed at four.
Based on testing on several social network structures, PSO with a small neigh-
borhood tends to perform better on complex problems, while PSO with a large
neighborhood would perform better on simple problems [46,47]. The von Neumann
neighborhood topology performs consistently better than gbest and lbest do [46].
To prevent premature convergence, in fully informed PSO [64], a particle uses
information from all its topological neighbors to update the velocity. The influence of
each particle on its neighbors is weighted based on its fitness value and the neighbor-
hood size. This scheme outperforms basic PSO. The constriction factor is adopted
in fully informed PSO, with the value ϕ being equally distributed among all the
neighbors of a particle.
Unified PSO [70] is obtained by modifying the constricted algorithm to harness the
explorative behavior of the global variant and the exploitative nature of a local neighborhood
variant. Two velocity updates are initially calculated and are then linearly combined
to form a unified velocity update, which is then applied to the current position.
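The unified update can be sketched as a convex combination of the two constricted updates of (9.5) (an illustrative sketch; the unification factor u and all names are our own, and the exact combination scheme in [70] may differ in detail):

```python
import numpy as np

def unified_velocity(x, v, pbest, gbest, lbest, u=0.5,
                     chi=0.729, phi1=2.05, phi2=2.05, rng=None):
    """Unified PSO sketch: blend the constricted gbest and lbest velocity
    updates of Eq. (9.5); u = 1 is pure global, u = 0 pure local.
    lbest is an (NP, n) array of per-particle neighborhood bests."""
    rng = np.random.default_rng() if rng is None else rng
    r = lambda: rng.random(x.shape)
    vg = chi * (v + phi1 * r() * (pbest - x) + phi2 * r() * (gbest - x))
    vl = chi * (v + phi1 * r() * (pbest - x) + phi2 * r() * (lbest - x))
    return u * vg + (1.0 - u) * vl
```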
The lbest topology is better for exploring the search space while gbest converges
faster. The variable neighborhood operator [86] begins the search with an lbest ring
lattice and slowly increases the size of the neighborhood, until the population is fully
connected.
In hierarchical PSO [38], particles are arranged in a dynamic hierarchy to define
a neighborhood structure. Depending on the quality of their pbests, the particles
move up or down the hierarchy. A good particle on the higher hierarchy has a larger
influence on the swarm. The shape of the hierarchy can be dynamically adapted.
Different behavior to the individual particles can also be assigned with respect to
their level in the hierarchy.

9.4 Other PSO Variants

Particle swarm adaptation is an optimization paradigm that simulates the ability
of human societies to process knowledge. Similar to social-only PSO [49], many
optimizing liaisons optimization [73] is a simplified PSO that has no attraction to
the particle's personal best position. It performs comparably to PSO and has
behavioral parameters that are easier to tune.
Basic PSO [45] is synchronous PSO in which communication between particles is
synchronous. Particles communicate their best positions and respective objective val-
ues to their neighbors, and the neighbors do the same immediately. Hence, particles
have perfect information from their neighbors before updating their positions. For
asynchronous PSO models [1,12,50], in a given iteration, each particle updates and
communicates its memory to its neighbors immediately after its move to a new posi-
tion. Thus, the particles that remain to be updated in the same iteration can exploit the
new information immediately, instead of waiting for the next iteration as in the
synchronous model. In general, the asynchronous model has faster convergence speed
than synchronous PSO, yet at the cost of getting trapped by rapidly attracting all parti-
cles to a deceitful solution. Random asynchronous PSO is a variant of asynchronous
PSO where particles are selected at random to perform their operations. Random
asynchronous PSO has the best general performance in large neighborhoods, while
synchronous PSO has the best one in small neighborhoods [77].
In fitness-distance-ratio-based PSO (FDR-PSO) [74], each particle utilizes an
additional information of the nearby higher fitness particle that is selected according
to fitness–distance ratio, i.e., the ratio of fitness improvement over the respective
weighted Euclidean distance. The algorithm moves particles toward nearby particles
of higher fitness, instead of attracting each particle toward just the gbest position. This
combats the problem of premature convergence observed in PSO. Concurrent PSO
[6] avoids the possible crosstalk effect of pbest and gbest with nbest in FDR-PSO
by concurrently simulating modified PSO and FDR-PSO algorithms with frequent
message passing between them.
To avoid stagnation and to keep the gbest particle moving until it has reached a
local minimum, guaranteed convergence PSO [87] uses a different velocity update
equation for the x g particle, which causes the particle to perform a random search
around x g within a radius defined by a scaling factor. Its ability to operate with small
swarm sizes makes it an enabling technique for parallel niching solutions.
For large parameter optimization problems, orthogonal PSO [35] uses an intel-
ligent move mechanism, which applies orthogonal experimental design to adjust a
velocity for each particle by using a divide and conquer approach in determining the
next move of particles.
In [14], basic PSO and Michigan PSO are used to solve the problem of prototype
placement for nearest prototype classifiers. In the Michigan approach, a member of
the population only encodes part of the solution, and the whole swarm is the potential
solution to the problem. This reduces the dimension of the search space. Adaptive
Michigan PSO [14] uses modified PSO equations with both particle competition
and cooperation between the closest neighbors and a dynamic neighborhood. The
Michigan PSO algorithms introduce a local fitness function to guide the particles’
movement and dynamic neighborhoods that are calculated on each iteration.
Diversity can be maintained by relocating the particles when they are too close
to each other [60] or using some collision-avoiding mechanisms [8]. In [71], trans-
formations of the objective function through deflection and stretching are used to
overcome local minimizers and a repulsion source at each detected minimizer is
used to repel particles away from previously detected minimizers. This combina-
tion is able to find as many global minima as possible by preventing particles from
moving to a previously discovered minimal region.
In [30], PSO is used to improve simplex search. Clustering-aided simplex PSO
[40] incorporates simplex method to improve PSO performance. Each particle in
PSO is regarded as a point of the simplex. On each iteration, the worst particle is
replaced by a new particle generated by one iteration of the simplex method. Then,
all particles are again updated by PSO. PSO and simplex methods are performed
iteratively.
Incremental social learning is a way to improve the scalability of systems
composed of multiple learning agents. The incremental particle swarm optimizer [26]
has a growing population size, with the initial position of new particles being biased
toward the best-so-far solution. Solutions are further improved through a local search
procedure. The population size is increased if the optimization problem at hand can-
not be solved satisfactorily by local search alone.
Efficient population utilization strategy for PSO (EPUS-PSO) [36] adopts a popu-
lation manager to improve the efficiency of PSO. The population manager eliminates
redundant particles and recruits new ones or maintains particle numbers according to
the solution-searching status. If the particles cannot find a better solution to update
gbest, they may be trapped in a local minimum. To keep gbest updated and to find
better solutions, new particles should be added into the swarm. A maximal popula-
tion size should be predefined. The population manager will adjust population size
depending on whether the gbest has not been updated in k consecutive generations.
A mutation-like ES and two built-in sharing strategies can prevent the solutions from
falling into a local minimum.
The population size of PSO can be adapted by assigning a maximum lifetime
to groups of particles based on their performance and spatial distribution [53]. PSO
with an aging leader and challengers (ALC-PSO) [17] improves PSO by overcoming
the problem of premature convergence. The leader of the swarm is assigned with a
growing age and a lifespan, and the other individuals are allowed to challenge the
leadership when the leader becomes aged. The lifespan of the leader is adaptively
tuned according to the leader’s leading power. If a leader shows strong leading power,
it lives longer to attract the swarm toward better positions. Otherwise, it gets old and
new particles emerge to challenge and claim the leadership, bringing in diversity.
Passive congregation is an important biological force preserving swarm integrity.
It has been introduced into the velocity update equation as an additional compo-
nent [33].
In [84], PSO is improved by applying diversity to both the velocity and the popula-
tion by a predator particle and several scout particles. The predator particle balances
the exploitation and exploration of the swarm, while scout particles implement dif-
ferent exploration strategies. The closer the predator particle is to the best particle,
the higher the probability of perturbation.
Opposition-based learning can be used to improve the performance of PSO by
replacing the least-fit particle with its antiparticle. In [91], opposition-based learning
is applied to PSO, where the particle’s own position and the position opposite the
center of the swarm are evaluated for each randomly selected particle, along with a
Cauchy mutation to keep the gbest particle moving and thus avoid premature
convergence.
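The "position opposite the center of the swarm" can be read as a reflection through the swarm center (our reading, sketched below; the generic opposition-based-learning rule instead reflects within the search bounds, x̃ = a + b − x):

```python
import numpy as np

def opposite_through_center(x):
    """Reflect each particle through the swarm center c: x_opp = 2 c - x.
    Both x and x_opp would then be evaluated, keeping the fitter position."""
    center = x.mean(axis=0)
    return 2.0 * center - x
```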
Animals react to negative as well as positive stimuli, e.g., an animal looking for
food is also conscious of danger. In [92], each particle adjusts its position according
to its own personal worst solution and its group’s global worst based on similar
formulae of regular PSO. This strategy outperforms PSO by avoiding those worse
areas.

Adaptive PSO [93] first evaluates the population distribution and particle
fitness in a real-time procedure that identifies, in each generation, one of four defined
evolutionary states: exploration, exploitation, convergence, and jumping out.
It enables the automatic control of algorithmic parameters at run time to
improve the search efficiency and convergence speed. Then, an elitist learning strat-
egy is performed when the evolutionary state is classified as the convergence state. The
strategy acts on the gbest particle to help it jump out of likely local optima. Adaptive
PSO substantially enhances the performance of PSO in terms of convergence speed,
global optimality, solution accuracy, and algorithm reliability.
Chaotic PSO [2] utilizes chaotic maps for parameter adaptation which can improve
the search ability of basic PSO. Frankenstein’s PSO [25] combines a number of algo-
rithmic components such as time-varying population topology, the velocity updating
mechanism of fully informed PSO [64], and decreasing inertia weight, showing
advantages in terms of optimization speed and reliability. Particles are initially
connected in a fully connected topology, which is reduced over time following a certain pattern.
Comprehensive learning PSO (http://www.ntu.edu.sg/home/epnsugan) [55] uses
all other particles’ pbest information to update a particle’s velocity. It learns each
dimension of a particle from just one particle’s historical best information, while
each particle learns from different particles’ historical best information for different
dimensions for a few generations. This strategy helps to preserve the diversity to
discourage premature convergence. The method outperforms PSO with inertia weight
[83] and PSO with constriction factor [22] in solving multimodal problems.
Inspired by the social behavior of clans, clan PSO [13] divides the PSO population
into several clans. Each clan first performs its own search, and the particle with the best
fitness is selected as the clan leader. The leaders then meet to adjust their positions.
Dynamic clan PSO [7] allows particles in one clan to migrate to another clan.
Motivated by the social phenomenon in which multiple good exemplars assist the
crowd to progress better, example-based learning PSO [37] employs an example set
of multiple gbest particles to update the particles' positions.
Charged PSO [8] utilizes an analogy of electrostatic energy, where some mutually
repelling particles orbit a nucleus of neutral particles. This nucleus corresponds to
a basic PSO swarm. The particles with identical charges produce a repulsive force
between them. The neutral particles allow exploitation while the charged particles
enforce separation to maintain exploration.
Random black hole PSO [95] is a PSO algorithm based on the concept of black
holes in physics. In each dimension of a particle, a black hole located nearest to
the best particle of the swarm in current generation is randomly generated and then
particles of the swarm are randomly pulled into the black hole with a probability
p. This helps the algorithm fly out of local minima, and substantially speed up the
evolution process to global optimum.
Social learning plays an important role in behavior learning among social animals.
In contrast to individual learning, social learning allows individuals to learn behaviors
from others without the cost of individual trials and errors. Social learning PSO [18]
introduces social learning mechanisms into PSO. Each particle learns from any of

the better particles (termed demonstrators) in the current swarm. Social learning
PSO adopts a dimension-dependent parameter control method. It performs well on
low-dimensional problems and is promising for solving large-scale problems as well.
In [5], agents in the swarm are categorized into explorers and settlers, which can
dynamically exchange their role in the search process. This particle task differen-
tiation is achieved through a different way of adjusting the particle velocities. The
coefficients of the cognitive and social component of the stochastic acceleration as
well as the inertia weight are related to the distance of each particle from the gbest
position found so far. This particle task differentiation enhances the local search
ability of the particles close to the gbest and improves the exploration ability of the
particles far from the gbest.
PSO lacks mechanisms which add diversity to exploration in the search process.
Inspired by the collective response behavior of starlings, starling PSO [65] introduces
a mechanism to add diversity into PSO. This mechanism consists of initialization,
identifying seven nearest neighbors, and orientation change.

9.5 PSO and EAs: Hybridization

In PSO, the particles move through the solution space via perturbations of their
positions, which are influenced by other particles, whereas in EAs, individuals breed
with one another to produce new individuals. Compared to EAs, PSO is easy to
implement and there are few parameters to adjust. In PSO, every particle remembers
its pbest and gbest, thus having a more effective memory capability than EAs have.
PSO is also more efficient in maintaining the diversity of the swarm, since all the
particles use the information related to the most successful particle in order to improve
themselves, whereas in EAs only the good solutions are saved.
Hybridization of EAs and PSO is usually implemented by incorporating genetic
operators into PSO to enhance the performance of PSO: to keep the best particles
[4], to increase the diversity, and to improve the ability to escape local minima [61].
In [4], a tournament selection process is applied to replace each poorly performing
particle’s velocity and position with those of better performing particles. In [61], basic
PSO is combined with arithmetic crossover. The hybrid PSOs combine the velocity
and position update rules with the ideas of breeding and subpopulations. The swarm
is divided into subpopulations, and a breeding operator is used within a subpopulation
or between the subpopulations to increase the diversity of the population. In [82], the
standard velocity and position update rules of PSO are combined with the concepts
of selection, crossover, and mutation. A breeding ratio determines the proportion
of the population that undergoes the breeding procedure in the current generation
and the portion that performs regular PSO operations. Grammatical swarm
adopts PSO coupled to a grammatical evolution genotype–phenotype mapping to
generate programs [67].
Evolutionary self-adapting PSO [63] equips a PSO scheme with an explicit selection
procedure and with self-adapting properties for its parameters. This selection
acts on the weights or parameters governing the behavior of a particle, and a particle
movement operator is introduced to generate diversity.
In [39], mutation, crossover, and elitism are incorporated into PSO. The best-
performing upper half of the individuals, known as elites, is regarded as a swarm
and enhanced by PSO. The enhanced elites constitute half of the population in the
new generation, while crossover and mutation operations are applied to the enhanced
elites to generate the other half.
AMALGAM-SO [90] implements self-adaptive multimethod search using a sin-
gle universal genetic operator for population evolution. It merges the strengths of
CMA-ES, GA, and PSO for population evolution during each generation and imple-
ments a self-adaptive learning strategy to automatically tune the number of offspring.
The method scales well with an increasing number of dimensions, converges in close
proximity to the global minimum for functions with noise-induced multimodality,
and is designed to take full advantage of the power of distributed computer networks.
Time-varying acceleration coefficients (TVAC) [78] are introduced to efficiently
control the local search and convergence to the global optimum, in addition to the
time-varying inertia weight factor in PSO. Mutated PSO with TVAC adds a
perturbation to a randomly selected component of the velocity vector of a random
particle with a predefined probability. Self-organizing hierarchical PSO with TVAC
considers only the social and cognitive parts, eliminating the inertia term in the
velocity update rule. Particles are reinitialized whenever they stagnate in the search
space, or whenever any component of a particle's velocity vector becomes very close to zero.
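The time-varying coefficients are commonly implemented as a linear variation over the iteration count. The sketch below assumes the frequently cited endpoint values (c1 decreasing from 2.5 to 0.5, c2 increasing from 0.5 to 2.5); treat the function name and these endpoints as illustrative rather than definitive:

```python
def tvac_coefficients(t, t_max, c1_init=2.5, c1_final=0.5,
                      c2_init=0.5, c2_final=2.5):
    """Linearly vary the acceleration coefficients over iterations (sketch).

    The cognitive coefficient c1 shrinks and the social coefficient c2
    grows, favoring exploration early and convergence to gbest late.
    Endpoint values are illustrative defaults."""
    frac = t / t_max
    c1 = c1_init + (c1_final - c1_init) * frac
    c2 = c2_init + (c2_final - c2_init) * frac
    return c1, c2
```

At t = 0 the swarm emphasizes each particle's own pbest; by t = t_max the emphasis has shifted to the global best.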

9.6 Discrete PSO

Basic PSO is applicable to optimization problems with continuous variables. A
discrete version of PSO is proposed in [43] for problems with binary-valued solution
elements. It moves the particles through the problem space by changing the velocity
of each particle into the probability of each bit being in one state or the other. The
particle is composed of binary variables, and the velocity is transformed into a
change of probability.
Assume N_P particles in the population. Each particle x_i = (x_{i,1}, ..., x_{i,n})^T,
x_{i,d} ∈ {0, 1}, has n bits. As in basic PSO, each particle adjusts its velocity by using
(9.1), where c_1 r_1 + c_2 r_2 is usually limited to 4 [44]. The velocity value is then
converted into a probability that bit x_{i,d}(t) takes the value 1, by generating a
threshold T_{i,d} with the logistic function

    T_{i,d} = 1 / (1 + e^{-v_{i,d}(t)}).    (9.11)

A random number r is generated for each bit. If r < T_{i,d}, then x_{i,d} is interpreted
as 1; otherwise, as 0. The velocity term is limited to |v_{i,d}| < V_max. To prevent
T_{i,d} from approaching 0 or 1, one can force V_max = 4 [44].
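One update of this scheme can be sketched as follows, with the standard velocity update standing in for (9.1); the function name, coefficient values, and random seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0, v_max=4.0):
    """One velocity-and-bit update of binary PSO [43] (sketch).

    The velocity is updated as in basic PSO, clipped to |v| <= V_max,
    squashed through the logistic function (9.11), and each bit is then
    set to 1 with the resulting probability."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)   # keeps T away from 0 and 1
    T = 1.0 / (1.0 + np.exp(-v))    # threshold from Eq. (9.11)
    x = (rng.random(x.shape) < T).astype(int)  # bit = 1 with probability T
    return x, v
```

Note that the position update is purely stochastic resampling: the velocity only biases the probability of each bit, rather than displacing a point in space.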
Based on the discrete PSO proposed in [43], multiphase discrete PSO [3] is for-
mulated by using an alternative velocity update technique, which incorporates hill
climbing using random stepsize in the search space. The particles are divided into
groups that follow different search strategies. A discrete PSO algorithm is proposed
in [56] for flowshop scheduling, where the particle and velocity are redefined, an
efficient approach is developed to move a particle to the new sequence, and a local
search scheme is incorporated.
Jumping PSO [62] is a discrete PSO inspired from frogs. The positions x i of
particles jump from one solution to another. It does not consider any velocity. Each
particle has three attractors: its own best position, the best position of its social
neighborhood, and the gbest position. A jump approaching an attractor consists of
changing a feature of the current solution by a feature of the attractor.
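A jump toward an attractor can be sketched as below; the single-feature copy follows the description above, and the function name is illustrative:

```python
import random

def jump(solution, attractor):
    """Sketch of a jumping PSO [62] move: copy one randomly chosen
    feature of an attractor (e.g., pbest, the social-neighborhood best,
    or gbest) into the current solution; no velocity is involved."""
    new = list(solution)
    k = random.randrange(len(new))
    new[k] = attractor[k]
    return new
```

Repeated jumps toward the three attractors gradually assemble a solution out of features of good solutions, which is what makes the scheme natural for discrete problems.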

9.7 Multi-swarm PSOs

Multiple swarms in PSO explore the search space together to attain the objective of
finding the optimal solutions. This resembles many bird species joining to form a
flock in a geographical region, to achieve certain foraging behaviors that benefit one
another. Each species has different food preferences. This corresponds to multiple
swarms locating possible solutions in different regions of the solution space. This
is also similar to people all over the world: In each country, there is a different
lifestyle that is best suited to the ethnic culture. A species can be defined as a group
of individuals sharing common attributes according to some similarity metric.
Multi-swarm PSO is used for solving multimodal problems and combating PSO's
tendency toward premature convergence. It typically adopts a heuristically chosen number
of swarms with a fixed swarm size throughout the search process. Multi-swarm PSO
is also used to locate and track changing optima in a dynamic environment.
Based on guaranteed convergence PSO [87], niching PSO [11] creates a subswarm
from a particle and its nearest spatial neighbor, if the variance in that particle’s fitness
is below a threshold. Niching PSO initially sets up subswarm leaders by training the
main swarm utilizing the basic PSO using no social information (c2 = 0). Niches are
then identified and a subswarm radius is set. As optimization progresses, particles
are allowed to join subswarms, which are in turn allowed to merge. Once a particle's
velocity has become sufficiently small, it converges to its subswarm optimum.
In turbulent PSO [19], the population is divided into two subswarms: one sub-
swarm following the gbest, while the other moving in the opposite direction. The
particles’ positions are dependent on their lbest, their corresponding subswarm’s
best, and the gbest collected from the two subswarms. If the gbest has not improved
for fifteen successive iterations, the worst particles of a subswarm are replaced by the
best ones from the other subswarm, and the subswarms switch their flight directions.
Turbulent PSO avoids premature convergence by replacing the velocity memory with
a random turbulence operator whenever a particle's velocity falls below a threshold. Fuzzy adaptive turbulent
PSO [58] is a hybrid of turbulent PSO with a fuzzy logic controller to adaptively
regulate the velocity parameters.
Speciation-based PSO [54] uses spatial speciation for locating multiple local
optima in parallel. Each species is grouped around a dominating particle called the
species seed. At each iteration, species seeds are identified from the entire popula-
tion, and are then adopted as neighborhood bests for these individual species groups
separately. Dynamic speciation-based PSO [69] modifies speciation-based PSO for
tracking multiple optima in the dynamic environment by comparing the fitness of
each particle’s current lbest with its previous record to continuously monitor the
moving peaks, and by using a predefined species population size to quantify the
crowdedness of species before they are reinitialized randomly in the solution space
to search for new possible optima.
In adaptive sequential niche PSO [94], the fitness values of the particles are mod-
ified by a penalty function to prevent all subswarms from converging to the same
optima. A niche radius is not required. It can find all optimal solutions of a multimodal
function sequentially.
In [48], the swarm population is clustered into a certain number of clusters. Then,
a particle’s lbest is replaced by its cluster center, and the particles’ gbest is replaced
by the neighbors’ best. This approach has improved the diversity and exploration of
PSO. In [72], in order to solve multimodal problems, clustering is used to identify
the niches in the swarm population and then to restrict the neighborhood of each
particle to the other particles in the same cluster in order to perform a local search
for any local minima located within the clusters.
In [9], the population of particles is split into a set of interacting swarms,
which interact locally by an exclusion parameter and globally through a new anti-
convergence operator. Each swarm maintains diversity either by using charged or
quantum particles. Quantum swarm optimization (QSO) builds on the atomic pic-
ture of charged PSO, and uses a quantum analogy for the dynamics of the charged
particles. Multi-QSO uses multiple swarms [9].
In multigrouped PSO [81], N solutions of a multimodal function can be searched
with N groups. A repulsive velocity component is added to the particle update equa-
tion, which will push the intruding particles out of the other group’s gbest radius.
The predefined radius is allowed to increase linearly during the search process to
prevent several groups from settling on the same peak.
When multi-swarms are used for enhancing diversity of PSO, each swarm per-
forms a PSO paradigm independently. After some predefined generations, the swarms
will exchange information based on a diversified list of particles. Some strategies
for information exchange between two or more swarms are given in [28,75]. In [28],
two subswarms are updated independently for a certain interval, and then, the best
particles (information) in each subswarm are exchanged. In [75], swarm population
is initially clustered into a predefined number of swarms. Particles’ positions are first
updated using a PSO equation where three levels of communications are facilitated,
namely, personal, global, and neighborhood levels. At every iteration, the particles in
a swarm are divided into two sets: One set of particles is sent to another swarm, while
the other set of particles will be replaced by the individuals from other swarms [75].
Cooperative PSO [88] employs cooperative behavior among multiple swarms to
improve the performance of PSO on multimodal problems, based on the cooperative
coevolutionary GA. The decision variables are divided into multiple parts, and
different parts are assigned to different swarms for optimization. In multipopulation coop-
erative PSO [66], the swarm population comprises a master swarm and multiple
slave swarms. The slave swarms explore the search space independently to maintain
diversity of particles, while the master swarm evolves via the best particles collected
from the slave swarms [66].
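The variable partitioning of cooperative PSO can be sketched as follows; the even split and the context-vector evaluation are assumptions for illustration, and the function names are hypothetical:

```python
import numpy as np

def split_variables(n_dims, n_swarms):
    """Partition the decision-variable indices among subswarms, as in
    cooperative PSO [88] (sketch; an even split is assumed)."""
    return np.array_split(np.arange(n_dims), n_swarms)

def evaluate_component(f, context, idx, values):
    """Evaluate one subswarm's candidate part by plugging it into a
    shared context vector assembled from the other subswarms' bests."""
    x = context.copy()
    x[idx] = values
    return f(x)
```

Each subswarm thus optimizes only its own slice of the decision vector, while the fitness of any candidate slice is always judged on a complete solution.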
Coevolutionary particle swarm optimizer with parasitic behavior (PSOPB) [76]
divides the population into two swarms: host swarm and parasite swarm. The par-
asitic behavior is mimicked from three aspects: the parasites getting nourishments
from the host, the host immunity, and the evolution of the parasites. With a prede-
fined probability, which reflects the facultative parasitic behavior, the two swarms
exchange particles according to fitness values in each swarm. The host immunity is
mimicked in two ways: the number of exchanged particles is linearly decreased
over iterations, and particles in the host swarm can learn from the global best position
in the parasite swarm. Two mutation operators are utilized to simulate two aspects
of the evolution of the parasites. Particles with poor fitness in the host swarm are
replaced by randomly initialized particles. PSOPB outperforms eight PSO variants
in terms of solution accuracy and convergence speed.
PS2O [16] is a multi-swarm PSO inspired by the coevolution of symbiotic species
(or heterogeneous cooperation) in natural ecosystems. The interacting swarms are
modeled by constructing hierarchical interaction topology and enhanced dynamical
update equations. Information exchanges take place not only between the particles
within each swarm, but also between different swarms. Each individual is influenced
by three attractors: its own previous best position, best position of its neighbors from
its own swarm, and best position of its neighbor swarms.
TRIBES [23], illustrated in Figure 9.3, is a parameter-free PSO system. The topology,
including the size of the population, evolves over time in response to performance
feedback. In TRIBES, only adaptation rules can be modified or added by the user,
while the parameters change according to the swarm behavior. The population is
divided into subpopulations called tribes, each maintaining its own order and structure.

Figure 9.3 TRIBES topology. A tribe is a fully connected network. Each tribe is linked to the others via its shaman (denoted by a black particle).

Tribes may benefit by removal of their weakest member, or by addition of a new
member. The best particles of the tribes are exchanged among all the tribes. Relation-
ships between particles in a tribe are similar to those defined in global PSO. TRIBES
is efficient in quickly finding a good region of the landscape, but less efficient for
local refinement.

Problems

9.1 Explain why in basic PSO with a neighbor structure a larger neighbor number
has faster convergence, but in fully informed PSO the opposite is true.
9.2 Implement the particleswarm solver of MATLAB Global Optimization
Toolbox for solving a benchmark function. Test the influence of different para-
meter settings.

References
1. Akat SB, Gazi V. Decentralized asynchronous particle swarm optimization. In: Proceedings of
the IEEE swarm intelligence symposium, St. Louis, MO, USA, September 2008. p. 1–8.
2. Alatas B, Akin E, Ozer AB. Chaos embedded particle swarm optimization algorithms.
Chaos Solitons Fractals. 2009;40(5):1715–34.
3. Al-kazemi B, Mohan CK. Multi-phase discrete particle swarm optimization. In: Proceedings
of the 4th international workshop on frontiers in evolutionary algorithms, Kinsale, Ireland,
January 2002.
4. Angeline PJ. Using selection to improve particle swarm optimization. In: Proceedings of IEEE
congress on evolutionary computation, Anchorage, AK, USA, May 1998. p. 84–89.
5. Ardizzon G, Cavazzini G, Pavesi G. Adaptive acceleration coefficients for a new search diver-
sification strategy in particle swarm optimization algorithms. Inf Sci. 2015;299:337–78.
6. Baskar S, Suganthan P. A novel concurrent particle swarm optimization. In: Proceedings of
IEEE congress on evolutionary computation (CEC), Beijing, China, June 2004. p. 792–796.
7. Bastos-Filho CJA, Carvalho DF, Figueiredo EMN, de Miranda PBC. Dynamic clan particle
swarm optimization. In: Proceedings of the 9th international conference on intelligent systems
design and applications (ISDA’09), Pisa, Italy, November 2009. p. 249–254.
8. Blackwell TM, Bentley P. Don’t push me! Collision-avoiding swarms. In: Proceedings of
congress on evolutionary computation, Honolulu, HI, USA, May 2002, vol. 2. p. 1691–1696.
9. Blackwell T, Branke J. Multiswarms, exclusion, and anti-convergence in dynamic environ-
ments. IEEE Trans Evol Comput. 2006;10(4):459–72.
10. Bonyadi MR, Michalewicz Z. A locally convergent rotationally invariant particle swarm opti-
mization algorithm. Swarm Intell. 2014;8:159–98.
11. Brits R, Engelbrecht AF, van den Bergh F. A niching particle swarm optimizer. In: Proceedings
of the 4th Asia-Pacific conference on simulated evolutions and learning, Singapore, November
2002. p. 692–696.
12. Carlisle A, Dozier G. An off-the-shelf PSO. In: Proceedings of workshop on particle swarm
optimization, Indianapolis, IN, USA, January 2001. p. 1–6.
13. Carvalho DF, Bastos-Filho CJA. Clan particle swarm optimization. In: Proceedings of IEEE
congress on evolutionary computation (CEC), Hong Kong, China, June 2008. p. 3044–3051.
14. Cervantes A, Galvan IM, Isasi P. AMPSO: a new particle swarm method for nearest neighbor-
hood classification. IEEE Trans Syst Man Cybern Part B. 2009;39(5):1082–91.
15. Chatterjee S, Goswami D, Mukherjee S, Das S. Behavioral analysis of the leader particle during
stagnation in a particle swarm optimization algorithm. Inf Sci. 2014;279:18–36.
16. Chen H, Zhu Y, Hu K. Discrete and continuous optimization based on multi-swarm coevolution.
Nat Comput. 2010;9:659–82.
17. Chen W-N, Zhang J, Lin Y, Chen N, Zhan Z-H, Chung HS-H, Li Y, Shi Y-H. Particle swarm
optimization with an aging leader and challengers. IEEE Trans Evol Comput. 2013;17(2):241–
58.
18. Cheng R, Jin Y. A social learning particle swarm optimization algorithm for scalable optimiza-
tion. Inf Sci. 2015;291:43–60.
19. Chen G, Yu J. Two sub-swarms particle swarm optimization algorithm. In: Advances in natural
computation, vol. 3612 of Lecture notes in computer science. Berlin: Springer; 2005. p. 515–
524.
20. Cleghorn CW, Engelbrecht AP. A generalized theoretical deterministic particle swarm model.
Swarm Intell. 2014;8:35–59.
21. Cleghorn CW, Engelbrecht AP. Particle swarm variants: standardized convergence analysis.
Swarm Intell. 2015;9:177–203.
22. Clerc M, Kennedy J. The particle swarm-explosion, stability, and convergence in a multidi-
mensional complex space. IEEE Trans Evol Comput. 2002;6(1):58–73.
23. Clerc M. Particle swarm optimization. In: International scientific and technical encyclopaedia.
Hoboken: Wiley; 2006.
24. Coelho LS, Krohling RA. Predictive controller tuning using modified particle swarm optimi-
sation based on Cauchy and Gaussian distributions. In: Proceedings of the 8th online world
conference soft computing and industrial applications, Dortmund, Germany, September 2003.
p. 7–12.
25. de Oca MAM, Stutzle T, Birattari M, Dorigo M. Frankenstein’s PSO: a composite particle
swarm optimization algorithm. IEEE Trans Evol Comput. 2009;13(5):1120–32.
26. de Oca MAM, Stutzle T, Van den Enden K, Dorigo M. Incremental social learning in particle
swarms. IEEE Trans Syst Man Cybern Part B. 2011;41(2):368–84.
27. Eberhart RC, Shi Y. Comparing inertia weights and constriction factors in particle swarm
optimization. In: Proceedings of IEEE congress on evolutionary computation (CEC), La Jolla,
CA, USA, July 2000. p. 84–88.
28. El-Abd M, Kamel MS. Information exchange in multiple cooperating swarms. In: Proceedings
of IEEE swarm intelligence symposium, Pasadena, CA, USA, June 2005. p. 138–142.
29. Esquivel SC, Coello CAC. On the use of particle swarm optimization with multimodal func-
tions. In: Proceedings of IEEE congress on evolutionary computation (CEC), Canberra, Aus-
tralia, 2003. p. 1130–1136.
30. Fan SKS, Liang YC, Zahara E. Hybrid simplex search and particle swarm optimization for the
global optimization of multimodal functions. Eng Optim. 2004;36(4):401–18.
31. Fernandez-Martinez JL, Garcia-Gonzalo E. Stochastic stability analysis of the linear continuous
and discrete PSO models. IEEE Trans Evol Comput. 2011;15(3):405–23.
32. Hakli H, Uguz H. A novel particle swarm optimization algorithm with Levy flight. Appl Soft
Comput. 2014;23:333–45.
33. He S, Wu QH, Wen JY, Saunders JR, Paton RC. A particle swarm optimizer with passive
congregation. Biosystems. 2004;78:135–47.
34. Higashi N, Iba H. Particle swarm optimization with Gaussian mutation. In: Proceedings of
IEEE swarm intelligence symposium, Indianapolis, IN, USA, April 2003. p. 72–79.
35. Ho S-Y, Lin H-S, Liauh W-H, Ho S-J. OPSO: orthogonal particle swarm optimization and its
application to task assignment problems. IEEE Trans Syst Man Cybern Part A. 2008;38(2):288–
98.
36. Hsieh S-T, Sun T-Y, Liu C-C, Tsai S-J. Efficient population utilization strategy for particle
swarm optimizer. IEEE Trans Syst Man Cybern Part B. 2009;39(2):444–56.
37. Huang H, Qin H, Hao Z, Lim A. Example-based learning particle swarm optimization for
continuous optimization. Inf Sci. 2012;182:125–38.
38. Janson S, Middendorf M. A hierarchical particle swarm optimizer and its adaptive variant.
IEEE Trans Syst Man Cybern Part B. 2005;35(6):1272–82.
39. Juang C-F. A hybrid of genetic algorithm and particle swarm optimization for recurrent network
design. IEEE Trans Syst Man Cybern Part B. 2004;34(2):997–1006.
40. Juang C-F, Chung I-F, Hsu C-H. Automatic construction of feedforward/recurrent fuzzy
systems by clustering-aided simplex particle swarm optimization. Fuzzy Sets Syst.
2007;158(18):1979–96.
41. Kadirkamanathan V, Selvarajah K, Fleming PJ. Stability analysis of the particle dynamics in
particle swarm optimizer. IEEE Trans Evol Comput. 2006;10(3):245–55.
42. Kennedy J. Bare bones particle swarms. In: Proceedings of IEEE swarm intelligence sympo-
sium, Indianapolis, IN, USA, April 2003. p. 80–87.
43. Kennedy J, Eberhart RC. A discrete binary version of the particle swarm algorithm. In: Pro-
ceedings of IEEE conference on systems, man, and cybernetics, Orlando, FL, USA, October
1997. p. 4104–4109.
44. Kennedy J, Eberhart RC. Swarm intelligence. San Francisco, CA: Morgan Kaufmann; 2001.
45. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of IEEE international
conference on neural networks, Perth, WA, USA, November 1995, vol. 4. p. 1942–1948.
46. Kennedy J, Mendes R. Population structure and particle swarm performance. In: Proceedings
of congress on evolutionary computation, Honolulu, HI, USA, May 2002. p. 1671–1676.
47. Kennedy J. Small worlds and mega-minds: Effects of neighborhood topology on particle swarm
performance. In: Proceedings of congress on evolutionary computation (CEC), Washington,
DC, USA, July 1999. p. 1931–1938.
48. Kennedy J. Stereotyping: improving particle swarm performance with cluster analysis. In:
Proceedings of congress on evolutionary computation (CEC), La Jolla, CA, July 2000. p.
1507–1512.
49. Kennedy J. The particle swarm: social adaptation of knowledge. In: Proceedings of IEEE
international conference on evolutionary computation, Indianapolis, USA, April 1997. p. 303–
308.
50. Koh B-I, George AD, Haftka RT, Fregly BJ. Parallel asynchronous particle swarm optimization.
Int J Numer Methods Eng. 2006;67:578–95.
51. Krohling RA. Gaussian swarm: a novel particle swarm optimization algorithm. In: Proceedings
of IEEE conference cybernetics and intelligent systems, Singapore, December 2004. p. 372–
376.
52. Langdon WB, Poli R. Evolving problems to learn about particle swarm optimizers and other
search algorithms. IEEE Trans Evol Comput. 2007;11(5):561–78.
53. Lanzarini L, Leza V, De Giusti A. Particle swarm optimization with variable population size.
In: Proceedings of the 9th international conference on artificial intelligence and soft computing,
Zakopane, Poland, June 2008, vol. 5097 of Lecture notes in computer science. Berlin: Springer;
2008. p. 438–449.
54. Li X. Adaptively choosing neighbourhood bests using species in a particle swarm optimizer for
multimodal function optimization. In: Proceedings of genetic and evolutionary computation
conference (GECCO), Seattle, WA, USA, June 2004. p. 105–116.
55. Liang JJ, Qin AK, Suganthan PN, Baskar S. Comprehensive learning particle swarm optimizer
for global optimization of multimodal functions. IEEE Trans Evol Comput. 2006;10(3):281–
95.
56. Liao C-J, Tseng C-T, Luarn P. A discrete version of particle swarm optimization for flowshop
scheduling problems. Comput Oper Res. 2007;34:3099–111.
57. Liu Y, Qin Z, Shi Z, Lu J. Center particle swarm optimization. Neurocomputing. 2007;70:672–
9.
58. Liu H, Abraham A. Fuzzy adaptive turbulent particle swarm optimization. In: Proceedings of
the 5th international conference on hybrid intelligent systems (HIS’05), Rio de Janeiro, Brazil,
November 2005. p. 445–450.
59. Loengarov A, Tereshko V. A minimal model of honey bee foraging. In: Proceedings of IEEE
swarm intelligence symposium, Indianapolis, IN, USA, May 2006. p. 175–182.
60. Lovbjerg M, Krink T. Extending particle swarm optimisers with self-organized criticality. In:
Proceedings of congress on evolutionary computation (CEC), Honolulu, HI, USA, May 2002.
p. 1588–1593.
61. Lovbjerg M, Rasmussen TK, Krink T. Hybrid particle swarm optimiser with breeding and sub-
populations. In: Proceedings of genetic and evolutionary computation conference (GECCO),
Menlo Park, CA, USA, August 2001. p. 469–476.
62. Martinez-Garcia FJ, Moreno-Perez JA. Jumping frogs optimization: a new swarm method for
discrete optimization. Technical Report DEIOC 3/2008, Department of Statistics, O.R. and
Computing, University of La Laguna, Tenerife, Spain, 2008.
63. Miranda V, Fonseca N. EPSO—Best of two worlds meta-heuristic applied to power system
problems. In: Proceedings of IEEE congress on evolutionary computation, Honolulu, HI, USA,
May 2002. p. 1080–1085.
64. Mendes R, Kennedy J, Neves J. The fully informed particle swarm: simpler, maybe better.
IEEE Trans Evol Comput. 2004;8(3):204–10.
65. Netjinda N, Achalakul T, Sirinaovakul B. Particle swarm optimization inspired by starling flock
behavior. Appl Soft Comput. 2015;35:411–22.
66. Niu B, Zhu Y, He X. Multi-population cooperative particle swarm optimization. In: Proceedings
of European conference on advances in artificial life, Canterbury, UK, September 2005. p. 874–
883.
67. O’Neill M, Brabazon A. Grammatical swarm: the generation of programs by social program-
ming. Nat Comput. 2006;5:443–62.
68. Pan F, Hu X, Eberhart RC, Chen Y. An analysis of bare bones particle swarm. In: Proceedings
of the IEEE swarm intelligence symposium, St. Louis, MO, USA, September 2008. p. 21–23.
69. Parrott D, Li X. Locating and tracking multiple dynamic optima by a particle swarm model
using speciation. IEEE Trans Evol Comput. 2006;10(4):440–58.
70. Parsopoulos KE, Vrahatis MN. UPSO: a unified particle swarm optimization scheme. In: Pro-
ceedings of the international conference of computational methods in sciences and engineering,
2004. The Netherlands: VSP International Science Publishers; 2004. pp. 868–873.
71. Parsopoulos KE, Vrahatis MN. On the computation of all global minimizers through particle
swarm optimization. IEEE Trans Evol Comput. 2004;8(3):211–24.
72. Passaro A, Starita A. Clustering particles for multimodal function optimization. In: Proceedings
of ECAI workshop on evolutionary computation, Riva del Garda, Italy, 2006. p. 124–131.
73. Pedersen MEH, Chipperfield AJ. Simplifying particle swarm optimization. Appl Soft Comput.
2010;10(2):618–28.
74. Peram T, Veeramachaneni K, Mohan CK. Fitness-distance-ratio based particle swarm opti-
mization. In: Proceedings of the IEEE swarm intelligence symposium, Indianapolis, IN, USA,
April 2003. p. 174–181.
75. Pulido GT, Coello CAC. Using clustering techniques to improve the performance of a par-
ticle swarm optimizer. In: Proceedings of genetic and evolutionary computation conference
(GECCO), Seattle, WA, USA, June 2004. p. 225–237.
76. Qin Q, Cheng S, Zhang Q, Li L, Shi Y. Biomimicry of parasitic behavior in a coevolutionary par-
ticle swarm optimization algorithm for global optimization. Appl Soft Comput. 2015;32:224–
40.
77. Rada-Vilela J, Zhang M, Seah W. A performance study on synchronicity and neighborhood
size in particle swarm optimization. Soft Comput. 2013;17:1019–30.
78. Ratnaweera A, Halgamuge SK, Watson HC. Self-organizing hierarchical particle swarm opti-
mizer with time-varying acceleration coefficients. IEEE Trans Evol Comput. 2004;8(3):240–
55.
79. Reeves WT. Particle systems—a technique for modeling a class of fuzzy objects. ACM Trans
Graph. 1983;2(2):91–108.
80. Secrest BR, Lamont GB. Visualizing particle swarm optimization - Gaussian particle swarm
optimization. In: Proceedings of the IEEE swarm intelligence symposium, Indianapolis, IN,
USA, April 2003. p. 198–204.
81. Seo JH, Lim CH, Heo CG, Kim JK, Jung HK, Lee CC. Multimodal function optimization
based on particle swarm optimization. IEEE Trans Magn. 2006;42(4):1095–8.
82. Settles M, Soule T. Breeding swarms: a GA/PSO hybrid. In: Proceedings of genetic and evo-
lutionary computation conference (GECCO), Washington, DC, USA, June 2005. p. 161–168.
83. Shi Y, Eberhart RC. A modified particle swarm optimizer. In: Proceedings of IEEE congress
on evolutionary computation, Anchorage, AK, USA, May 1998. p. 69–73.
84. Silva A, Neves A, Goncalves T. An heterogeneous particle swarm optimizer with predator
and scout particles. In: Proceedings of the 3rd international conference on autonomous and
intelligent systems (AIS 2012), Aveiro, Portugal, June 2012. p. 200–208.
85. Stacey A, Jancic M, Grundy I. Particle swarm optimization with mutation. In: Proceedings of
IEEE congress on evolutionary computation (CEC), Canberra, Australia, December 2003. p.
1425–1430.
86. Suganthan PN. Particle swarm optimizer with neighborhood operator. In: Proceedings of IEEE
congress on evolutionary computation (CEC), Washington, DC, USA, July 1999. p. 1958–1962.
87. van den Bergh F, Engelbrecht AP. A new locally convergent particle swarm optimizer. In: Pro-
ceedings of IEEE conference on systems, man, and cybernetics, Hammamet, Tunisia, October
2002, vol. 3. p. 96–101.
88. van den Bergh F, Engelbrecht AP. A cooperative approach to particle swarm optimization.
IEEE Trans Evol Comput. 2004;8(3):225–39.
89. van den Bergh F, Engelbrecht AP. A study of particle swarm optimization particle trajectories.
Inf Sci. 2006;176(8):937–71.
90. Vrugt JA, Robinson BA, Hyman JM. Self-adaptive multimethod search for global optimization
in real-parameter spaces. IEEE Trans Evol Comput. 2009;13(2):243–59.
91. Wang H, Liu Y, Zeng S, Li C. Opposition-based particle swarm algorithm with Cauchy muta-
tion. In: Proceedings of the IEEE congress on evolutionary computation (CEC), Singapore,
September 2007. p. 4750–4756.
92. Yang C, Simon D. A new particle swarm optimization technique. In: Proceedings of the 18th
IEEE international conference on systems engineering, Las Vegas, NV, USA, August 2005. p.
164–169.
93. Zhan Z-H, Zhang J, Li Y, Chung HS-H. Adaptive particle swarm optimization. IEEE Trans
Syst Man Cybern Part B. 2009;39(6):1362–81.
94. Zhang J, Huang DS, Lok TM, Lyu MR. A novel adaptive sequential niche technique for
multimodal function optimization. Neurocomputing. 2006;69:2396–401.
95. Zhang J, Liu K, Tan Y, He X. Random black hole particle swarm optimization and its application.
In: Proceedings on IEEE international conference on neural networks and signal processing,
Nanjing, China, June 2008. p. 359–365.
