Particle Swarm Optimization - Project Progress

Uploaded by Alaa Maali

Explanation of the Optimization Technique

Historical Background:

Origins and Discovery from Nature


In 1995, the Particle Swarm Optimization (PSO) method was proposed by two researchers, James Kennedy and Russell Eberhart. The method emerged from the two researchers' work on models and systems that mimic the social behavior of animals, in particular the flying of birds in a flock and the swimming of fish in schools. Imagine a group of birds searching for food in a vast field. Each bird (or "particle") explores its surroundings, remembers the best spot it has found, and takes cues from the best locations discovered by its flock. This natural phenomenon inspired the development of PSO. Kennedy and Eberhart designed a mathematical optimization technique that emulates this behavior and the exchange of information (communication) among members of an animal group. They noted that in a flock of birds or a school of fish, individual members comply with basic rules, such as keeping a consistent orientation and distance with respect to other members, and few disrupt the group. The PSO technique was therefore created to find the best solution to a problem by simulating the collective behavior of such groups.

Initial Development in Research


The initial conception was first presented in a 1995 paper at the IEEE International Conference on Neural Networks, titled "Particle Swarm Optimization". In this paper, the authors laid out the main ideas underlying the mechanism of the algorithm, such as how particles (i.e., candidate solutions) traverse the multidimensional search space and share information (communication) about the best positions they know of, to guide their movement toward the next point.

Naming of the Algorithm

The name "Particle Swarm Optimization" captures its essential components, where:

 Particle: Each potential solution to the optimization problem is represented as a particle within a
swarm. Each particle has two components: its position and its velocity in the solution space.
 Swarm: The collection of particles forms a “swarm”, which communicates by sharing
information and collaborates to find optimal solutions.
 Optimization: The algorithm’s main aim is to optimize a given fitness function that assesses how
good a particular solution is in relation to the problem being solved.

Basic Concepts of PSO

1. Particles: In PSO, each potential solution to an optimization problem is represented by a particle.


Imagine each particle as a point in a multi-dimensional space, where each dimension
corresponds to a variable in the optimization problem.
2. Particle’s components: Position and Velocity:
o Position: which represents the specific solution a particle is currently evaluating.
o Velocity: which determines how the particle moves through the solution space. It
influences both the direction and distance of the particle's movement.
3. Best Positions:
In terms of position, each particle tracks two reference positions:
o Personal Best (pBest): Each particle remembers the best position it has encountered
while searching (i.e., the best solution it has found).
o Global Best (gBest): The best position achieved by any particle in the swarm, tracked
and shared by the entire group.
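The particle state described above can be sketched as a small data structure. This is a minimal Python illustration; the names and initialization ranges are assumptions for the example, not taken from the original papers:

```python
import random
from dataclasses import dataclass

@dataclass
class Particle:
    """State carried by one particle of the swarm."""
    position: list        # current candidate solution (one value per dimension)
    velocity: list        # current movement through the solution space
    best_position: list   # pBest: best position this particle has visited
    best_fitness: float   # fitness value at pBest

def random_particle(dim, lo, hi):
    """Create a particle with a random position and velocity within [lo, hi]."""
    pos = [random.uniform(lo, hi) for _ in range(dim)]
    vel = [random.uniform(-(hi - lo), hi - lo) for _ in range(dim)]
    # Initially, pBest is simply the starting position (not yet evaluated).
    return Particle(pos, vel, list(pos), float("-inf"))
```

Each particle carries exactly the two components named above (position, velocity) plus its personal best; the global best is held by the swarm, not by any one particle.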
The Methodology of PSO

The methodology can be explained in three main steps: initialization, followed by iterations toward the solution, and finally termination once a satisfactory solution is found.

1. Initialization:
o The use of PSO starts with a swarm of particles, where each particle is assumed to have
random positions and velocities within the defined search space.
o The values for pBest and gBest are assigned initially: pBest for each particle is its
starting position, while gBest is the best of these starting positions across all particles.

2. Iterations:
The algorithm proceeds through several iterations (or "generations"). In each iteration, the
following steps are performed:
o Evaluate Fitness: Each particle's position is evaluated using a fitness function that
measures the quality of the solution; for a maximization problem, a higher fitness score
indicates a better solution.
o Update Personal Best: If a particle's current position has a better fitness score than its
pBest, it updates pBest to the current position.
o Update Global Best: The swarm compares all obtained pBest values to determine
gBest. In case any particle has found a position that is better than gBest, gBest gets
updated.
o Update Velocity and Position: Each particle updates its velocity and position using the
following formulas:

v_i(t+1) = w · v_i(t) + c1 · r1 · (pBest_i − x_i(t)) + c2 · r2 · (gBest − x_i(t))


Where:
 v_i(t) is the current velocity of particle i at time t
 w is the inertia weight
 c1 is the cognitive coefficient (weight for the particle’s best-known position)
 r1 is a random number in the range [0, 1] (for randomness)
 pBest_i is the best position found by particle i
 x_i(t) is the current position of particle i at time t
 c2 is the social coefficient (weight for the global best position)
 r2 is a random number in the range [0, 1] (for randomness)
 gBest is the best position found by any particle in the swarm

Accordingly, the position is updated using the following formula:

x_i(t+1) = x_i(t) + v_i(t+1)


Where:
 x_i(t+1) is the new position of particle i at time t+1
 x_i(t) is the current position of particle i at time t
 v_i(t+1) is the updated velocity of particle i at time t+1

3. Termination:
The process continues for a set number of iterations, which can either be determined in
advance or defined by a satisfactory stopping condition on the quality of the solution.
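The three steps above can be sketched in Python. This is a minimal illustration rather than a reference implementation: the fitness function, parameter values (w = 0.7, c1 = c2 = 1.5), and bounds are assumptions for the example, not values prescribed by the original papers.

```python
import random

def pso(fitness, dim, n_particles=30, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    """Basic Particle Swarm Optimization (maximizes `fitness`)."""
    # 1. Initialization: random positions and velocities; pBest = start position.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    # 2. Iterations: update velocity and position, then evaluate and update bests.
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f > pbest_fit[i]:          # update personal best
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:         # update global best
                    gbest, gbest_fit = pos[i][:], f

    # 3. Termination: after a fixed number of iterations, return the best found.
    return gbest, gbest_fit

# Example: maximize -(x^2 + y^2), whose optimum is 0 at the origin.
random.seed(0)  # seeded only so the example is reproducible
best, best_fit = pso(lambda p: -sum(x * x for x in p), dim=2)
```

Note that both update formulas from the text appear verbatim in the inner loop: the velocity update combines inertia, the cognitive term toward pBest, and the social term toward gBest, and the position update simply adds the new velocity.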
PSO evolution in research

The following papers provide a comprehensive view of PSO’s evolution, from its inception to recent
advancements, over the past two decades.

In 2007, Alec Banks, Jonathan Vincent, and Chukwudi Anyakoha published a paper titled “A review
of particle swarm optimization. Part I: background and development”. The authors revisited the PSO
fundamentals introduced in 1995 by Kennedy and Eberhart, in terms of both concept and mathematical
equations. They elaborated on researchers identifying several limitations of the basic PSO,
notably its tendency to converge prematurely on local optima. To address this, various modifications were
introduced, such as the inertia weight, the constriction factor, velocity clamping, and hybridization. It was
observed that a higher inertia weight promotes global search, while a lower value favors local
refinement. The constriction factor is useful for limiting the particles’ velocity and thereby controlling
convergence, leading to better stability and preventing the swarm from overshooting the global optimum.
Velocity clamping introduced limits on how fast particles could move, which mitigates the instability
observed with the basic PSO, reflecting its significance in ensuring smoother convergence to optimal
solutions. Hybridization could be used to improve PSO’s reliability: several researchers
began integrating it with other optimization techniques such as Genetic Algorithms (GAs) and Differential
Evolution (DE), creating hybrid algorithms that combine the strengths of multiple methods.

“A Review of Particle Swarm Optimization. Part I: Background and Development” (2007)


Reference: Banks, A., Vincent, J., Anyakoha, C. (2007). Natural Computing, 6(4).
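Velocity clamping, one of the modifications reviewed above, amounts to a simple per-component limit on each particle's velocity. A minimal sketch; the function name and the choice of v_max are illustrative assumptions, not from the paper:

```python
def clamp_velocity(velocity, v_max):
    """Limit each velocity component to [-v_max, v_max] to stabilize the swarm."""
    return [max(-v_max, min(v_max, v)) for v in velocity]

# A component of 5.0 is clamped down to v_max = 2.0, while 0.5 passes through.
clamped = clamp_velocity([5.0, -3.0, 0.5], v_max=2.0)
```

In a PSO loop, this would be applied to each particle's velocity right after the velocity update and before the position update.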

In 2008, Alec Banks, Jonathan Vincent, and Chukwudi Anyakoha continued what they started in 2007
with Part II of their research, titled “A Review of Particle Swarm Optimization. Part II: Hybridization,
Combinatorial, Multicriteria and Constrained Optimization, and Indicative Applications”. The paper
focused on advanced aspects, mainly the use of PSO in hybrid form with other algorithms
that may overcome PSO’s limitations in convergence and its constraints in handling discrete
variables. Several hybrid PSO variants were discussed, including Genetic Algorithm-PSO, Simulated
Annealing-PSO, and Differential Evolution-PSO. The authors discussed the extension of PSO to handle
multi-objective optimization by finding solutions that optimize multiple conflicting objectives
concurrently. They mentioned several strategies for balancing multiple objectives, including a)
fitness sharing, where particles are given fitness scores based on how well they optimize
multiple objectives, and b) crowding distance, used to maintain variety among solutions by ensuring that
particles are well scattered across different regions of the objective space. In conclusion, the paper
provides a comprehensive review of how PSO has evolved for complex real-world applications, showing
its adaptability and effectiveness when hybridized with other methods to overcome specific challenges.

“A Review of Particle Swarm Optimization. Part II: Hybridization, Combinatorial, Multicriteria and
Constrained Optimization, and Indicative Applications” (2008) Reference: Banks, A., Vincent, J.,
Anyakoha, C. (2008). Natural Computing, 7

Other papers to consult:


1. “A Review of Particle Swarm Optimization. Part I: Background and Development” (2007)
Reference: Banks, A., Vincent, J., Anyakoha, C. (2007). Natural Computing, 6(4).

2. “A Review of Particle Swarm Optimization. Part II: Hybridization, Combinatorial, Multicriteria and
Constrained Optimization, and Indicative Applications” (2008)
Reference: Banks, A., Vincent, J., Anyakoha, C. (2008). Natural Computing, 7(1).

3. “Particle Swarm Optimization: Hybridization Perspectives and Experimental Illustrations” (2011)
Reference: Thangaraj, R., et al. (2011). Applied Mathematics and Computation, 217.

4. “Particle Swarm Optimization: Technique, System, and Challenges” (2011)
Reference: Rini, D.P., Shamsuddin, S.M., Yuhaniz, S.S. (2011). International Journal of Computer Applications, 14.

5. “25 Years of Particle Swarm Optimization: Flourishing Voyage of Two Decades” (2021)
Reference: Thangaraj, R., et al. (2021). Archives of Computational Methods in Engineering.

6. “Particle Swarm Optimization: A Historical Review Up to the Current Developments” (2020)
Reference: Freitas, D., et al. (2020). Entropy, 22.
