Vardhan 2013
Abstract— Solving complex, higher-dimensional problems involving many constraints is often a very challenging task. When solving multidimensional problems with particle swarm optimization under several constraints, the penalty function approach is widely used. This paper provides a comprehensive survey of some of the constraint handling techniques frequently used with particle swarm optimization. Some of the penalty function approaches used with evolutionary algorithms are discussed, and a comparative study is performed on various benchmark problems to assess their performance.

Keywords—penalty function approach; particle swarm optimization; evolutionary algorithms; swarm intelligence

I. INTRODUCTION

Evolutionary algorithms (EA) have become quite popular compared to traditional algorithms in the last decade due to their successful implementation in various applications across disciplines. Unlike conventional algorithms, these algorithms are inspired by nature's evolutionary techniques to evolve towards better solutions within the available search space, and they may converge to different solutions on different runs. These algorithms are generally designed to work on unconstrained problems, but in reality most real-world planning problems are constrained in nature. Therefore, it is necessary to use an efficient constraint handling technique to convert the constrained optimization problem into an unconstrained one. The penalty function approach is one of the most popular constraint handling techniques.

The working of an evolutionary algorithm begins by creating a population of individuals. Each individual is initialized randomly between the lower and upper bounds of the variables and represents a possible solution to the optimization problem. The fitness value of an individual is determined at every iteration by substituting its set of values into the formulated objective function of the problem. If the objective function is to be minimized, the individuals having lower fitness values than the rest are mutated. The mutated individuals might provide better results compared to the rest of the set. The process is iterated until all the individuals mutate over time and converge to the optimal solution.

II. PENALTY FUNCTIONS

Penalty functions are used when dealing with constrained optimization problems. Penalties are the most common approach to handling constraints in evolutionary algorithms, especially for problems that include inequality constraints. Courant [1] was the first to propose penalty functions, in the 1940s, and they were later expanded by Fiacco and McCormick [2]. The main idea is to build an unconstrained optimization problem from an ordinary constrained optimization problem by modifying the objective function with weights based on the amount of constraint violation present in a given solution.

Penalty functions have persisted in the constrained optimization literature for decades. Based on the region where the penalty is applied, penalty functions can be broadly classified into three categories [3]:
1. Partial penalty functions: the penalty is applied near the boundary of feasibility.
2. Global penalty functions: the penalty is applied throughout the infeasible region.
3. Barrier methods: no infeasible solution is considered.

The popular Lagrangian relaxation method [4] in the field of combinatorial optimization uses the same theme: it temporarily relaxes the most difficult constraints and then uses a modified objective function to avoid straying too far from the feasible region.

In general, a penalty function follows the pattern below. Given an optimization problem,
Proceedings of the 2013 IEEE Second International Conference on Image Information Processing (ICIIP-2013)
D_j(x) = 0           if constraint h_j is satisfied,
D_j(x) = |h_j(x)|    otherwise,        for 1 ≤ j ≤ p        (8)
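As a minimal illustration of this transformation (a sketch only — the tolerance `eps`, the fixed weight, and the toy objective are assumptions for illustration, not values from the paper), the violation measure of Eq. (8) and a penalized objective can be written as:

```python
# Sketch of the penalty-function transformation. The tolerance eps, the
# penalty weight, and the toy problem below are illustrative assumptions.

def violation(h_value, eps=1e-6):
    """D_j(x) per Eq. (8): zero when the equality constraint h_j is
    satisfied (within tolerance), |h_j(x)| otherwise."""
    return 0.0 if abs(h_value) <= eps else abs(h_value)

def penalized_fitness(f, constraints, x, weight=1e3):
    """Unconstrained objective built from a constrained one: the raw
    objective plus a weighted sum of constraint violations."""
    return f(x) + weight * sum(violation(h(x)) for h in constraints)

# Toy usage: minimize f(x) = x^2 subject to h(x) = x - 1 = 0.
f = lambda x: x * x
constraints = [lambda x: x - 1.0]
print(penalized_fitness(f, constraints, 1.0))  # feasible point: penalty is zero
print(penalized_fitness(f, constraints, 0.0))  # infeasible point: heavily penalized
```

An EA minimizing `penalized_fitness` is thereby steered toward the feasible region without any explicit constraint handling in the search operators.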
The penalty value of the dynamic function illustrated above increases gradually as the generations progress.

Kazarlis and Petridis [6] performed a detailed study of the behaviour of dynamic penalty functions of the form:

fitness(x) = f(x) + V(g) · (A · (Σ_i w_i(d_i(s))) + B)        (9)

where V(g) increases with the generation number g and G = total number of generations.

Annealing Penalty:

The name is inspired by the annealing process in metallurgical sciences, where heat treatment alters a material to increase its ductility and make it more workable. Michalewicz and Attia [7] considered a method based on the idea of simulated annealing [8]: unlike other penalty functions, the rate of change of the penalty coefficients is relatively low (after the algorithm has been trapped within local optima). Only active constraints are considered at each iteration, and the penalty is increased over time (i.e., the temperature decreases over time), resulting in heavy penalization of infeasible individuals in the last generations.

In order to apply Michalewicz and Attia's [7] penalty functions, the constraints must be divided into four groups:
a) linear equalities,
b) linear inequalities,
c) nonlinear equalities, and
d) nonlinear inequalities.

Also, a set of active constraints A has to be created, and all nonlinear equalities together with all violated nonlinear inequalities have to be included in it. The population is evolved using [7]:

fitness(x) = f(x) + (1/(2τ)) · Σ_{i∈A} φ_i²(x)        (11)

where A depends on two parameters: M, which measures the amount of constraint violation, and T, which is a function of the running time of the algorithm. T tends to zero as the evolution progresses. Using the basic principle of annealing, A is defined as:

A = e^(M/T)        (14)

To define T, Carlson used

T = 1/t        (15)

where t refers to the temperature used in the previous iteration.
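The annealing penalty of Eq. (11) can be sketched as follows; the cooling schedule, the single active constraint, and the toy objective are illustrative assumptions rather than the authors' settings:

```python
# Hedged sketch of the annealing penalty of Eq. (11). The temperature
# schedule, the active-constraint set, and the toy problem are
# illustrative assumptions, not values from the paper.

def annealed_fitness(f, active_constraints, x, tau):
    """fitness(x) = f(x) + (1/(2*tau)) * sum of squared violations over
    the active set A (Eq. 11). As tau -> 0, infeasible points are
    penalized ever more heavily."""
    return f(x) + (1.0 / (2.0 * tau)) * sum(phi(x) ** 2 for phi in active_constraints)

f = lambda x: x * x
active = [lambda x: x - 1.0]   # one active constraint: phi(x) = x - 1

x = 0.5                        # an infeasible point
for tau in (10.0, 1.0, 0.1):   # cooling: the penalty term grows as tau shrinks
    print(tau, annealed_fitness(f, active, x, tau))
```

At a fixed infeasible point, the printed fitness rises as the temperature falls, which is exactly the "heavy penalization in the last generations" described above.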
III. PARTICLE SWARM OPTIMIZATION

Particle Swarm Optimization (PSO) is a population-based stochastic optimization algorithm from swarm intelligence [9]. The algorithm is becoming popular due to its simplicity of implementation and its ability to quickly converge to a reasonably good solution [10]. Just as in other conventional EAs, the initial population is seeded with random solutions and the algorithm searches for the global optimum over generations. However, PSO does not have evolutionary operations such as crossover and mutation. Instead, candidate solutions fly through the search space following the current iteration's optimal value and the global optimum found up to the current iteration. Potential solutions, known as particles, keep track of the coordinates associated with the best solution (fitness) each has achieved so far. The pbest and lbest values, i.e., a particle's own best fitness value and the best value obtained so far by any particle among its neighbors, are stored similarly. When a particle takes the whole population as its topological neighbors, the best value is a global best and is called gbest. At every iteration, the swarm of particles changes velocity and acceleration towards the pbest and lbest locations, where the acceleration is weighted by random numbers freshly generated at every iteration of the process. The velocity (v) and position (x) of the ith particle are updated according to the following two equations:

v_ij^(k+1) = χ(ω·v_ij^k + C1·R1·(p_ij^k − x_ij^k) + C2·R2·(p_gj^k − x_ij^k))        (16)

x_ij^(k+1) = x_ij^k + v_ij^(k+1)        (17)

Where,
i = number of particles
j = number of decision variables
k = iteration counter
χ = constriction factor, which controls and constricts the magnitude of the velocity
g = gbest particle
p = pbest particle
ω = inertia weight, often used as a parameter to control exploration and exploitation in the search space
R1, R2 = random variables uniformly distributed within [0, 1]
C1, C2 = acceleration coefficients, also called the cognitive and social parameters respectively; C1 and C2 are popularly chosen to vary within {0, 2} [11].

The termination criteria for PSO are: (i) the algorithm terminates when it reaches the maximum number of iterations, or (ii) the difference between the best solutions of the last two successive iterations reaches a pre-defined value.

PSO has been successfully applied in many areas of research over the past several years. It has also been demonstrated that PSO is more robust and simpler, as it is faster and cheaper compared to other methods. The best part of PSO is that only a few parameters need adjusting while fine-tuning for the solution. One version, with minor modifications, works well in a wide variety of applications. Particle swarm optimization has been used both for general approaches applicable across a wide range of applications and for approaches focused on a specific requirement.

IV. RESULTS AND DISCUSSIONS

The following are the five benchmark problems on which the comparison of the various penalty function approaches using PSO has been executed:

Benchmark Problem 1 (BM 1): Eq. (18). The optimum solution is x* = (2.330499, 1.951372, -0.4775414, 4.365726, -0.6244870, 1.038131, 1.594227).

Benchmark Problem 2 (BM 2): Eq. (19), with constraints for i = 1, 2. The optimum solution is x* = (2.2468, 2.3815).

Benchmark Problem 3 (BM 3): Eq. (20). The optimum solution is x* = (1.1284, 0.7443).

Benchmark Problem 4 (BM 4): Eq. (21). The optimum solution is x* = (√ , 0.5).

Benchmark Problem 5 (BM 5): Eq. (22).
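The update rules of Eqs. (16) and (17) can be sketched as a minimal PSO loop. The parameter values (constriction factor, inertia weight, acceleration coefficients) and the sphere test function below are illustrative assumptions, not the paper's experimental settings:

```python
import random

# Minimal PSO sketch following Eqs. (16)-(17). Parameter values and the
# sphere test function are illustrative assumptions.

def pso(f, dim, lo, hi, n_particles=20, iters=200,
        chi=0.729, w=1.0, c1=2.05, c2=2.05, seed=0):
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]              # best position found by each particle
    pbest_val = [f(xi) for xi in x]
    g = pbest_val.index(min(pbest_val))      # index of the initial global best
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Eq. (16): velocity update with constriction chi and inertia w
                v[i][j] = chi * (w * v[i][j]
                                 + c1 * r1 * (pbest[i][j] - x[i][j])
                                 + c2 * r2 * (gbest[j] - x[i][j]))
                # Eq. (17): position update
                x[i][j] += v[i][j]
            val = f(x[i])
            if val < pbest_val[i]:           # update personal best
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:          # update global best
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

# Usage on the sphere function f(x) = sum(x_j^2); optimum is 0 at the origin.
best, best_val = pso(lambda p: sum(c * c for c in p), dim=2, lo=-5.0, hi=5.0)
print(best_val)  # a value near zero
```

Note that no crossover or mutation operator appears anywhere in the loop: the only search mechanism is each particle's attraction towards its pbest and the gbest, as the section describes.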
Problem   Optimal Solution            PFA 1      PFA 2      PFA 3
BM 5      0.09583           Mean      0.09542    0.095685   0.09542
                            Worst     0.02914    0.02580    0.02581

Fig. 2. Convergence Graph for BM1 using PFA2
It is important to note from the mean values that the dynamic penalty function has the best mean value in 3 out of the 5 benchmark problems, and there is not much variation in its mean values for the other two problems either. It is clearly evident that, for the given benchmark problems, the dynamic penalty function performs better than the other two penalty function approaches. In addition, the number of function evaluations needed to obtain the optimum solutions for all five problems was analyzed. It was observed that the dynamic penalty function approach converged to the optimal value with the least number of function evaluations in three out of the five problems.
V. CONCLUSIONS
ACKNOWLEDGMENT
The authors acknowledge the financial support provided by
Council of Scientific and Industrial Research (CSIR) through
project No. 25(0193)/11/EMR-II dated 02.02.2011.
REFERENCES
[1] R. Courant, “Variational methods for the solution of problems of
equilibrium and vibrations,” Bulletin of the American Mathematical
Society, 49, 1-23, 1943.
[2] A.V. Fiacco, and G.P. McCormick, Nonlinear Programming:
Sequential Unconstrained Minimization Techniques, John Wiley &
Sons, New York, 1968.
[3] H.P. Schwefel, Evolution and Optimum Seeking, John Wiley, 1995.
[4] C.R. Reeves, Modern Heuristic Techniques for Combinatorial
Problems, John Wiley & Sons, New York, 1993.
[5] J. Joines and C. Houck, “On the use of non-stationary penalty functions to
solve nonlinear constrained optimization problems with GAs,” in David
Fogel, editor, Proceedings of the First IEEE Conference on
Evolutionary Computation, pp. 579-584, Orlando, Florida, 1994.
[6] S. Kazarlis and V. Petridis, “Varying fitness functions in genetic
algorithms: studying the rate of increase of the dynamic penalty terms,”
in A. E. Eiben, T. Bäck, M. Schoenauer, and H.-P. Schwefel, editors,
Parallel Problem Solving from Nature V—PPSN V, Amsterdam, The
Netherlands, Springer-Verlag, 1998.
[7] Zbigniew Michalewicz, “A survey of constraint handling techniques in
evolutionary computation methods,” in J. R. McDonnel, R. G. Reynolds,
and D. B. Fogel, editors, Proceedings of the 4th Annual Conference on
Evolutionary Programming, pp. 135-155, The MIT Press, Cambridge,
Massachusetts, 1995.
[8] S. Kirkpatrick, Jr. C. D. Gelatt, and M. P. Vecchi, “Optimization by
simulated annealing,” Science, 220, 671—680, 1983.
[9] J. Kennedy and R.C. Eberhart, Swarm Intelligence, Morgan Kaufmann,
San Mateo, California, USA, 2001.
[10] Y. Shi, and R.C. Eberhart, “A modified particle swarm optimizer,”
Proceedings of IEEE International Conference on Evolutionary
Computation, 69-73, 1998.
[11] A. Chatterjee, and P. Siarry, “Nonlinear inertia weight variation for
dynamic adaptation in particle swarm optimization,” Computers and
Operations Research, 33, 859-871, 2006.