Particle Swarm Optimization - Project Progress
Historical Background:
The name "Particle Swarm Optimization" captures its essential components:
Particle: Each potential solution to the optimization problem is represented as a particle within a
swarm. Each particle has two components: its position and its velocity in the solution space.
Swarm: The collective of particles forms a "swarm" that communicates by sharing information and
collaborates to find optimal solutions.
Optimization: The algorithm's main aim is to optimize a given fitness function, which assesses how
good a particular solution is in relation to the problem being solved.
The methodology can be explained through three main steps: initialization, iteration toward the
solution, and termination once a satisfactory solution is found.
1. Initialization:
o PSO begins with a swarm of particles, each initialized with a random position and
velocity within the defined search space.
o The values of pBest and gBest are assigned initially: each particle's pBest is its
starting position, while gBest is the best position found among all particles.
2. Iterations:
The algorithm proceeds through several iterations (or "generations"). In each iteration, the
following steps are performed:
o Evaluate Fitness: Each particle's position is evaluated using a fitness function, which
measures the quality of the solution; a higher fitness score indicates a better solution.
o Update Personal Best: If a particle's current position has a better fitness score than its
pBest, it updates pBest to the current position.
o Update Global Best: The swarm compares all obtained pBest values to determine
gBest. If any particle has found a position better than gBest, gBest is updated.
o Update Velocity and Position: Each particle updates its velocity and position using the
following formulas:
v_i(t+1) = v_i(t) + c1 · r1 · (pBest_i − x_i(t)) + c2 · r2 · (gBest − x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)
where c1 and c2 are the cognitive and social acceleration coefficients, and r1, r2 are
random numbers drawn uniformly from [0, 1].
3. Termination:
The process repeats for a predetermined number of iterations or until a satisfactory
stopping condition (for example, an acceptable fitness value) is reached.
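The three steps above can be sketched in Python. This is a minimal illustration, not a definitive implementation: it assumes a minimization problem (lower fitness is better, the opposite convention of the text above) and uses the commonly applied inertia-weighted velocity update; the parameter values (w, c1, c2, swarm size, iteration count) are illustrative assumptions.

```python
import random

def pso(fitness, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize `fitness` over `dim` dimensions with a basic PSO sketch."""
    lo, hi = bounds
    # 1. Initialization: random positions and velocities in the search space
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[random.uniform(lo - hi, hi - lo) for _ in range(dim)]
           for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests start at initial positions
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best among all particles

    # 2. Iterations
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]          # position update
            val = fitness(pos[i])
            if val < pbest_val[i]:              # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:             # update global best
                    gbest, gbest_val = pos[i][:], val

    # 3. Termination: fixed iteration budget in this sketch
    return gbest, gbest_val
```

For example, minimizing the sphere function `sum(x_i^2)` in two dimensions drives the swarm toward the origin.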
PSO Evolution in Research
The following papers provide a comprehensive view of PSO’s evolution, from its inception to recent
advancements, over the past two decades.
In 2007, Alec Banks, Jonathan Vincent, and Chukwudi Anyakoha published a paper titled "A review
of particle swarm optimization. Part I: background and development". The authors referred to the PSO
fundamentals introduced in 1995 by Kennedy and Eberhart, covering both the concept and the
mathematical equations. They elaborated on several limitations researchers had identified in the basic
PSO, chiefly its tendency to converge prematurely on local optima. To address this, various
modifications were introduced, such as Inertia Weight, Constriction Factor, Velocity Clamping, and
Hybridization. It was observed that a higher inertia weight promotes global search, whereas a lower
value favors local refinement. The constriction factor limits the particle's velocity and thereby controls
convergence, improving stability and preventing the swarm from overshooting the global optimum.
Velocity clamping introduced limits on how fast particles could move, mitigating the instability
observed with the basic PSO and ensuring smoother convergence to optimal solutions. Hybridization
could be used to improve the reliability of PSO: several researchers began integrating it with other
optimization techniques such as Genetic Algorithms (GAs) and Differential Evolution (DE), creating
hybrid algorithms that combine the strengths of multiple methods.
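Two of these modifications, inertia weight and velocity clamping, can be made concrete with a short sketch. This is a one-dimensional illustration under assumed default parameters; the function name and values are mine, not taken from the reviewed papers.

```python
def clamped_velocity(v, x, pbest, gbest, w=0.7, c1=2.0, c2=2.0,
                     r1=0.5, r2=0.5, v_max=4.0):
    """Inertia-weighted velocity update (one dimension) with clamping.

    w scales the previous velocity: a higher w promotes global search,
    a lower w favors local refinement. The result is clamped to
    [-v_max, v_max], limiting how fast the particle can move.
    """
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return max(-v_max, min(v_max, v_new))  # velocity clamping
```

A large raw update (e.g. when the particle is far from both pBest and gBest) is cut off at v_max, which is what prevents the runaway velocities seen in the basic PSO.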
In 2008, Alec Banks, Jonathan Vincent, and Chukwudi Anyakoha continued their 2007 work with
Part II of their research, titled "A Review of Particle Swarm Optimization. Part II: Hybridization,
Combinatorial, Multicriteria and Constrained Optimization, and Indicative Applications". The paper
focused on advanced aspects, mainly hybridizing PSO with other algorithms to overcome its
limitations in convergence and in handling discrete variables. Several hybrid PSO variants were
discussed, including Genetic Algorithm-PSO, Simulated Annealing-PSO, and Differential
Evolution-PSO. The authors discussed extending PSO to handle multi-objective optimization by
finding solutions that optimize multiple conflicting objectives concurrently. They mentioned several
strategies for balancing multiple objectives, including a) Fitness Sharing, where particles are given
fitness scores based on how well they optimize multiple objectives, and b) Crowding Distance, which
maintains variety among solutions by ensuring that particles are well-scattered across different regions
of the objective space. In conclusion, the paper provides a comprehensive review of how PSO has
evolved in complex real-world applications, showing its adaptability and effectiveness when
hybridized with other methods to overcome specific challenges.
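To make the crowding-distance idea concrete, here is a minimal sketch of how it is commonly computed (the general formulation popularized by NSGA-II, offered purely as an illustration; the exact procedure in the reviewed paper may differ).

```python
def crowding_distance(values):
    """Crowding distance for a list of objective vectors (tuples).

    A larger distance means the solution sits in a less crowded region
    of the objective space; boundary solutions get infinite distance so
    they are always preserved.
    """
    n = len(values)
    if n == 0:
        return []
    m = len(values[0])          # number of objectives
    dist = [0.0] * n
    for k in range(m):
        # Sort solution indices by the k-th objective
        order = sorted(range(n), key=lambda i: values[i][k])
        lo, hi = values[order[0]][k], values[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue            # objective is constant; contributes nothing
        for j in range(1, n - 1):
            # Normalized gap between the two neighbors along objective k
            dist[order[j]] += (values[order[j + 1]][k]
                               - values[order[j - 1]][k]) / (hi - lo)
    return dist
```

Particles with a small crowding distance lie in densely populated regions, so a multi-objective PSO can deprioritize them to keep the swarm well-scattered.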
References:
1. Banks, A., Vincent, J., Anyakoha, C. (2007). "A review of particle swarm optimization. Part I:
background and development". Natural Computing, 6.
2. Banks, A., Vincent, J., Anyakoha, C. (2008). "A Review of Particle Swarm Optimization. Part II:
Hybridization, Combinatorial, Multicriteria and Constrained Optimization, and Indicative
Applications". Natural Computing, 7(1).
3. Thangaraj, R., et al. (2011). "Particle Swarm Optimization: Hybridization Perspectives and
Experimental Illustrations". Applied Mathematics and Computation, 217.
5. Thangaraj, R., et al. (2021). "25 Years of Particle Swarm Optimization: Flourishing Voyage of Two
Decades". Archives of Computational Methods in Engineering.